Citizen science images of aurora and celestial features can often be noisy.
Additionally, consumer and even prosumer cameras manipulate images in ways that typically cannot be completely disabled, or even easily quantified.
To make scientific use of images, the image metadata must include:
- geographic coordinates (e.g. from GPS)
- time of image capture (accuracy of ~10 seconds suffices for a wide-angle view; ~1 second is needed for < 20 degree FOV)
Stellarium helps manual verification of image calibration.
Stellarium can also be used from the web browser, without needing any install or plugins.
F11 toggles full screen mode.
Press F12 to toggle the Stellarium "scripts" menu.
Stellarium scripts are written in ECMAScript, the standardized language specification underlying JavaScript.
Scripts have a .ssc or .inc filename extension.
Mayavi may be thought of as a Python layer atop VTK, making common 3-D data plotting tasks easy.
Mayavi is installed via pip or conda.
VTK, Traits, and other prerequisites have .whl binary wheels, which avoid the formerly painful build process.
Because of the large number of prerequisite packages for Mayavi, we strongly urge installing Mayavi in a separate virtualenv or Conda environment.
conda install mayavi
Mayavi makes high quality manipulable volume plots.
Create a file scalar_field.py with the content
from mayavi import mlab
import numpy as np

x, y, z = np.mgrid[-10:10:20j, -10:10:20j, -10:10:20j]
s = np.sin(x*y*z)/(x*y*z)
scf = mlab.pipeline.scalar_field(x,y,z,s)
mlab.pipeline.volume(scf)
mlab.show()
While ArXiv is among the earliest and best-known preprint archives, more focused archives with good reputations can provide easier access to a targeted audience.
Here are a few I’ve come across relevant to geosciences:
An increasing number of systems have multiple CPU cores (say four, six, or eight) but modest RAM of 1 or 2 GB.
An example of this is the Raspberry Pi.
Ninja job pools allow limiting the number of CPU processes used for a particular CMake target.
That is, unlike GNU Make where we have to choose one CPU limit for the entire project, with Ninja we can select CPU limits on a per-target basis.
That’s one important benefit of Ninja for speeding up builds of medium to large projects, and why we see increasing adoption of Ninja in prominent projects including Google Chrome.
This is another reason why we generally strongly encourage using Ninja with CMake.
Specifically, CMake + Ninja builds can limit CPU process count via target properties:
The global JOB_POOLS property defines the pools available to targets.
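As a minimal sketch (the pool names, sizes, and target name are arbitrary examples), pools are defined globally and then assigned to targets via the `JOB_POOL_COMPILE` and `JOB_POOL_LINK` target properties:

```cmake
# define two pools: at most 2 concurrent compiles, 1 concurrent link
set_property(GLOBAL PROPERTY JOB_POOLS compile_pool=2 link_pool=1)

add_executable(big_target big.c)  # hypothetical memory-hungry target

# restrict this target's compile and link steps to the pools above;
# other targets still use the default Ninja parallelism
set_property(TARGET big_target PROPERTY JOB_POOL_COMPILE compile_pool)
set_property(TARGET big_target PROPERTY JOB_POOL_LINK link_pool)
```

These properties take effect only with the Ninja generator; other generators ignore them.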
Upon experiencing build issues such as SIGKILL due to excessive memory usage, inspect the failed build step to see if it was a compile or link operation, to determine which to limit on a per-target basis.
Suppose that 500 MB of RAM are needed to compile a target and we decide to ensure at least 1 GB of RAM is available to give some margin.
Thus we constrain the number of CPU processes for that target based on CMake-detected available physical memory.
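A sketch of sizing a pool from detected memory, using the assumed figures above (~500 MB per compile process, ~1 GB margin):

```cmake
# available physical memory in megabytes
cmake_host_system_information(RESULT avail_mb QUERY AVAILABLE_PHYSICAL_MEMORY)

# assume each compile process needs ~500 MB; keep ~1000 MB free as margin
math(EXPR Npool "(${avail_mb} - 1000) / 500")
if(Npool LESS 1)
  set(Npool 1)
endif()

set_property(GLOBAL PROPERTY JOB_POOLS heavy_pool=${Npool})
```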
The appropriate parameters for your project are determined by trial and error.
If this method still is not reliable even with a single CPU process, a possible solution is to cross-compile: build the executable on a more capable system for the modest target system.
The location of the Ninja executable bundled with Visual Studio can be determined from the Visual Studio terminal:
where ninja
The factory Visual Studio Ninja version may be too old for use with CMake Fortran projects.
If needed, replace the Visual Studio Ninja executable with the latest Ninja version, perhaps with a soft link to the desired ninja.exe.
Add user permission to create symbolic links.
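As a sketch (both paths are hypothetical; substitute the locations reported by `where ninja` and your downloaded Ninja), the soft link can be created from a Command Prompt with symlink permission:

```
mklink "C:\path\to\VisualStudio\ninja.exe" "C:\path\to\new\ninja.exe"
```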
The NetCDF4 Fortran library may compile successfully and run for simple programs, but segfault in programs where HDF5 is linked directly in addition to NetCDF4.
A reason one might directly link both HDF5 and NetCDF is a program that needs to read / write files in both HDF5 and NetCDF formats.
The symptom observed thus far is a program segfault on nf90_open().
The fix is to compile HDF5 and NetCDF yourself.