
Friday, September 5, 2014

Lunar Detection Of Ultra-High-Energy Cosmic Rays And Neutrinos




Spotted on the arXiv Physics blog: what about using the SKA array and the Moon as a collector? That would certainly qualify as a sensor the size of a planet:




The origin of the most energetic particles in nature, the ultra-high-energy (UHE) cosmic rays, is still a mystery. Due to their extremely low flux, even the 3,000 km^2 Pierre Auger detector registers only about 30 cosmic rays per year with sufficiently high energy to be used for directional studies. A method to provide a vast increase in collecting area is to use the lunar technique, in which ground-based radio telescopes search for the nanosecond radio flashes produced when a cosmic ray interacts with the Moon's surface. The technique is also sensitive to the associated flux of UHE neutrinos, which are expected from cosmic ray interactions during production and propagation, and the detection of which can also be used to identify the UHE cosmic ray source(s). An additional flux of UHE neutrinos may also be produced in the decays of topological defects from the early Universe.
Observations with existing radio telescopes have shown that this technique is technically feasible, and established the required procedure: the radio signal should be searched for pulses in real time, compensating for ionospheric dispersion and filtering out local radio interference, and candidate events stored for later analysis. For the SKA, this requires the formation of multiple tied-array beams, with high time resolution, covering the Moon, with either SKA-LOW or SKA-MID. With its large collecting area and broad bandwidth, the SKA will be able to detect the known flux of UHE cosmic rays using the visible lunar surface - millions of square km - as the detector, providing sufficient detections of these extremely rare particles to solve the mystery of their origin.
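
To make the detection procedure described above a bit more concrete, here is a minimal sketch (in Python, and certainly not the actual SKA pipeline) of a dedispersed pulse search: each frequency channel of a power filterbank is shifted to undo the ionospheric delay for an assumed slant TEC, the channels are summed, and samples above a signal-to-noise threshold are flagged. The function names, the dispersion constant, and the threshold are illustrative assumptions only.

```python
import numpy as np

# Ionospheric dispersion delay (seconds) relative to infinite frequency,
# for a slant TEC given in TEC units (1 TECU = 1e16 electrons/m^2).
def iono_delay(freq_hz, tec_tecu):
    return 1.34e9 * tec_tecu / freq_hz**2

def dedisperse_and_search(filterbank, freqs_hz, dt, tec_tecu, threshold_sigma=6.0):
    """filterbank: 2-D array (n_channels, n_samples) of detected power.
    freqs_hz: centre frequency of each channel; dt: sample interval (s).
    Undo the frequency-dependent ionospheric delay, sum over channels,
    and return the sample indices that exceed the threshold."""
    ref = freqs_hz.max()
    summed = np.zeros(filterbank.shape[1])
    for chan, f in enumerate(freqs_hz):
        lag = iono_delay(f, tec_tecu) - iono_delay(ref, tec_tecu)
        shift = int(round(lag / dt))
        summed += np.roll(filterbank[chan], -shift)  # align this channel with the reference
    snr = (summed - summed.mean()) / summed.std()
    return np.where(snr > threshold_sigma)[0]
```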

Potentially relevant:

Wednesday, February 1, 2012

Imaging Volcanoes with Atmospheric Muons

On Wondering Star, we feature very large sensing capabilities. Nature often provides this probing capacity by bombarding us with different particles. Those are generally photons, but we are also bombarded by other particles such as muons. Some outfits use these to detect nuclear materials at borders, while others use this natural capability to image things we could never image otherwise: large mountain ranges.


Take for instance the TOMUVOL project, which is dedicated to imaging volcanoes with atmospheric muons. From the project presentation, translated from French:

The TOMUVOL project proposes the construction and validation of a robust and portable system for the tomography of volcanoes with atmospheric muons. Atmospheric muons are produced at the top of the atmosphere by the collisions of cosmic rays with air. Depending on their initial energy, they can cross several hundred meters or even kilometers of rock before decaying. The method therefore uses the high penetrating power of muons with energies above the tera-electronvolt (TeV) scale to probe the depths of volcanic edifices, up to tens of kilometers. The information obtained with this technique will help model the internal structure of volcanoes, which is essential both for understanding and for monitoring these structures.
The principle of the method is as follows: a muon detector is placed on a flank of the volcano and continuously records, in real time, the flux of muons crossing the edifice as a function of the angle of incidence with respect to the local vertical (zenith) and the horizontal (azimuth). The measured flux depends on the flux of atmospheric muons, which is known from other measurements, and on the attenuation of the muons as they propagate through the rock. This attenuation is determined by the geometric path of the muons in the rock, which can be computed with good accuracy if an accurate model of the external shape of the edifice is available, and by the density distribution along this path. Measuring the attenuation of the muon flux as a function of zenith and azimuth therefore allows a precise densitometric map of the edifice to be produced.
In volcanology, the value of this new method is manifold. It is likely to provide information complementary to that obtained with more traditional techniques (seismology, gravimetry, electrical resistivity tomography, electromagnetism), and with higher spatial resolution. In addition, it is easier to implement because it is installed remotely and does not require extensive field trips, often over difficult terrain, to study the volcano.
In a second phase, applying this method to the monitoring of active volcanoes is also one of the main objectives of the technological and methodological developments initiated within the framework of this project. The plan is to install, on the flanks of active volcanoes identified as priority targets (e.g. Stromboli, Piton de la Fournaise, Montserrat, ...), a sensor that sends its data through the Internet to a monitoring center, which continuously analyzes them to report any change in the internal geometry of the edifice. Long-term measurement campaigns, focused on the study of a well-known edifice, are also planned, using either several muon detectors simultaneously or a single detector moved sequentially around the edifice.
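
The inversion at the heart of the method, from a measured muon flux to a mean density along each line of sight, can be caricatured in a few lines. The sketch below uses a deliberately crude toy model (a constant energy loss of about 2 MeV per g/cm² and a single power-law integral spectrum); the real TOMUVOL analysis relies on detailed muon spectra, detector acceptance, and a digital elevation model, so treat every constant here as an assumption.

```python
# Toy model only: assume muons lose A_LOSS GeV per g/cm^2 of rock traversed
# (minimum-ionizing approximation) and that the open-sky integral flux above
# energy E falls off as (E / E_REF) ** -GAMMA.
A_LOSS = 2.0e-3   # GeV per (g/cm^2), assumed constant energy loss
GAMMA = 2.0       # assumed integral spectral index
E_REF = 1.0       # GeV, reference energy of the open-sky integral flux

def mean_density(transmission, path_length_m):
    """Infer the mean density (g/cm^3) along one line of sight, given the
    measured transmission = (rate through rock) / (open-sky rate) and the
    geometric path length through the edifice (from an elevation model)."""
    e_min = E_REF * transmission ** (-1.0 / GAMMA)   # minimum energy to survive, GeV
    opacity = e_min / A_LOSS                          # g/cm^2 of rock crossed
    return opacity / (path_length_m * 100.0)          # g/cm^3

# Example: if 1e-4 of the open-sky flux survives a 500 m chord,
# this toy model gives a mean density of about 1 g/cm^3.
print(mean_density(1e-4, 500.0))
```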

A recent paper produced the first image of that volcano: Density imaging of volcanos with atmospheric muons by Felix Fehr. The abstract reads:

Their long range in matter renders high-energy atmospheric muons a unique probe for geophysical explorations, permitting the cartography of density distributions which can reveal spatial and possibly also temporal variations in extended geological structures. A collaboration between volcanologists and (astro-)particle physicists, Tomuvol, was formed in 2009 to study tomographic muon imaging of volcanos with high-resolution tracking detectors. Here we discuss preparatory work towards muon tomography as well as the first flux measurements taken at the Puy de Dome, an inactive lava dome volcano in the Massif Central.



More can be found in this presentation.


In his thesis, Felix Fehr worked on another large detector, ANTARES (Systematic studies, calibration, and

Wednesday, December 28, 2011

Participatory Sensing Systems enabled through a billion smartphones



What can you do with a billion smartphones? Participatory sensing seems an obvious choice, or at least it was back in 2008-2009. From the Urban Sensing/CENS website:
Participatory sensing. urban sensing is about people like you—equipped with today’s mobile + web technology—systematically observing, studying, reflecting on, and sharing your unique world. through discovery and connected participation, you can see the world anew. you can tell your local story. you can make change.   
Another related project using compressive sensing is Ear Phone. Additional participatory projects can be found at http://participatorysensing.org/

In 2011, it seems that the momentum for implementing these systems has been lost. The privacy issue has so far not been resolved with the general population as a whole (see the iPhone/Android geolocation headlines of this past year).

Monday, July 11, 2011

Radiation Waves

Marian Steinbach, who was one of the first on the web to parse Japanese documents into machine-readable content during the Fukushima accident, has now collected radiation data from the extensive German radiation monitoring network and made a movie out of it. Here it is:





Radiation Weather from Marian Steinbach on Vimeo.


France has more than 400,000 nodes; I wonder if we could get a similar view at some point in time. One of the things that is very fascinating to me, as a nuclear engineer, is the wave-like nature of the process. We usually never visualize these readings that way, as most detectors are generally very coarsely located. Let us not forget also that these waves are in the background and mostly reflect atmospheric events, including more or less the interaction of the ionosphere with GCR (Galactic Cosmic Radiation), i.e. very high energy particles that produce radiation showers on Earth.

Wednesday, May 18, 2011

JPL Global maps of Real-Time Ionospheric Total Electron Content (TEC)

Last month, I mentioned how the GPS constellation can be used as a way to image the ionosphere. Using the principle mentioned then, JPL has a webpage featuring the state of the ionosphere, updated every five minutes. From the page:
Global maps of ionospheric total electron content (TEC) are produced in real-time (RT) by mapping GPS observables collected from ground stations. These maps are produced to test real-time data acquisition, monitoring facilities, and mapping techniques. The RT TEC mapping can provide accurate ionospheric calibrations to navigation systems.
These maps are also used to monitor ionospheric weather, and to nowcast ionospheric storms that often occur responding to activities in solar wind and Earth's magnetosphere as well as thermosphere.


Besides the GPS satellites, how many ground stations are we talking about? From this page:



Monitoring Global Ionospheric Irregularities Using the Worldwide GPS Network
The current global GPS network contains about 360 GPS stations, and the number of stations is still increasing. Each receiver at these stations is capable of receiving L-band dual-frequency signals from 8+ GPS satellites (out of 24 in total) simultaneously in different directions. GPS data are downloaded to JPL through the Internet and commercial phone lines on near-real-time and daily bases. This network is a potential resource that can be used to achieve the NSWP goals.
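
The basic observable behind these maps is the differential delay between the two GPS carrier frequencies, which is proportional to the slant TEC along the receiver-satellite line of sight. A minimal sketch of that conversion (ignoring satellite and receiver hardware biases, cycle slips, and multipath, all of which the real JPL processing must handle):

```python
# Minimal sketch (not JPL's pipeline): slant TEC from dual-frequency GPS
# pseudoranges, ignoring hardware biases, cycle slips, and multipath.
F1 = 1575.42e6   # L1 carrier frequency, Hz
F2 = 1227.60e6   # L2 carrier frequency, Hz

def slant_tec_tecu(p1_m, p2_m):
    """p1_m, p2_m: pseudoranges (metres) measured on L1 and L2 for one
    satellite. Returns the slant TEC in TEC units (1 TECU = 1e16 el/m^2)."""
    tec = (p2_m - p1_m) * (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return tec / 1e16

# A 1 m differential delay corresponds to roughly 9.5 TECU.
print(slant_tec_tecu(20_000_000.0, 20_000_001.0))
```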

Saturday, April 23, 2011

Image Based Geolocalization and Sensor Network

This past week, there has been much controversy over the fact that iPhone and Android phones stored their location in the clear. Another possibility for geolocating one of these smartphones, or any webcam, comes from the ability to locate where a photo was taken by correlating it with satellite imagery or with other photos. Two projects along those lines come to mind: Webcam Geolocalization and IM2GPS. The first technique uses fixed webcams, but I think it could eventually be extended to mobile cameras. The second technique uses Flickr. These results are only possible because there are many cameras and webcams, i.e. the sensor network needs to be dense in some fashion. Let us note that 1 billion cameras will be sold in phones this year. One can definitely think of this sensor network made out of these cameras as a way to gather meteorological information and more...

From the respective project's websites:
1. Webcam Geolocalization


Figure (cameras and localization results): It is possible to geolocate an outdoor camera using natural scene variations, even when no recognizable features are visible. (left) Example images from three of the cameras from the AMOS dataset. (right) Correlation maps with satellite imagery; a measure of the temporal similarity of the camera variations to satellite pixel variations is color coded in red. The cross shows the maximum correlation point, the star shows the known GPS coordinate.
Overview
A key problem in widely distributed camera networks is geolocating the cameras. In this work we consider three scenarios for camera localization: localizing a camera in an unknown environment using the diurnal cycle, localizing a camera using weather variations by finding correlations with satellite imagery, and adding a new camera in a region with many other cameras.  We find that simple summary statistics (the time course of principal component coefficients) are sufficient to geolocate cameras without determining correspondences between cameras or explicitly reasoning about weather in the scene.
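
The correlation step described above is simple enough to sketch: summarize the webcam as a single time series, compute its correlation with every satellite pixel's time series over the same epochs, and take the peak. This is only a schematic of the idea, not the authors' code; the array shapes and normalization choices are assumptions.

```python
import numpy as np

def localize_camera(cam_series, sat_stack):
    """cam_series: 1-D array (n_times,) summarizing the webcam over time,
    e.g. its mean brightness or first principal-component score per frame.
    sat_stack: 3-D array (n_times, n_rows, n_cols) of satellite pixels taken
    at the same times. Returns the (row, col) of maximum temporal correlation
    together with the full correlation map."""
    cam = (cam_series - cam_series.mean()) / cam_series.std()
    sat = sat_stack - sat_stack.mean(axis=0)
    sat = sat / (sat.std(axis=0) + 1e-12)
    corr = np.tensordot(cam, sat, axes=(0, 0)) / len(cam)   # (n_rows, n_cols)
    return np.unravel_index(np.argmax(corr), corr.shape), corr
```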

Let us note their use of AMOS: Archive of Many Outdoor Scenes
The relevant paper is: Participatory Integration of Live Webcams into GIS by Austin Abrams, Nick Fridrich, Nathan Jacobs, and Robert Pless. The abstract reads:
Global satellite imagery provides nearly ubiquitous views of the Earth’s surface, and the tens of thousands of webcams provide live views from near Earth viewpoints. Combining these into a single application creates live views in the global context, where cars move through intersections, trees sway in the wind, and students walk across campus in realtime. This integration of the camera requires registration, which takes time, effort, and expertise. Here we report on two participatory interfaces that simplify this registration by providing applications which allow anyone to use live webcam streams to create virtual overhead views or to map live texture onto 3D models. We highlight system design issues that affect the scalability of such a service, and offer a case study of how we overcame these in building a system which is publicly available and integrated with Google Maps and the Google Earth Plug-in. Imagery registered to features in GIS applications can be considered as richly geotagged, and we discuss opportunities for this rich geotagging.

2. IM2GPS


IM2GPS: estimating geographic information from a single image by James Hays, Alexei Efros. The abstract reads:
Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally — on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we will leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban/rural classification.
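
The core of the data-driven matching can be sketched as a nearest-neighbour lookup in a GPS-tagged feature database. The actual IM2GPS system uses much richer scene descriptors and further processing of the returned locations, so the snippet below is only the skeleton of the idea, with assumed array shapes.

```python
import numpy as np

def estimate_location(query_feat, db_feats, db_latlon, k=120):
    """query_feat: (d,) feature vector of the query photo.
    db_feats: (n, d) features of the GPS-tagged photo collection.
    db_latlon: (n, 2) latitude/longitude of each database photo.
    Returns the lat/lon of the k most similar scenes, a crude sample from
    the probability distribution over the Earth's surface."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    return db_latlon[nearest]
```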

Other relevant sites:
What Do the Sun and the Sky Tell Us About the Camera?

Tuesday, April 12, 2011

The Quake Catcher Network

The Quake Catcher Network is a sensor network made up of people's computers. The data comes from the internal accelerometers of these computers. The map of these crowdsourced sensors is here. You can join the network here.


From the introduction section:


The Quake-Catcher Network: Cyberinfrastructure Bringing Seismology to Homes and Schools
 
Earthquake safety is a responsibility shared by billions worldwide. The Quake-Catcher Network (QCN) provides software so that individuals can join together to improve earthquake monitoring, earthquake awareness, and the science of earthquakes. The Quake-Catcher Network (QCN) links existing networked laptops and desktops in hopes to form the world’s largest and densest earthquake monitoring system. Costs for this network are minimal because the QCN uses 1) strong motion sensors (accelerometers) already internal to many laptops and 2) low-cost universal serial bus (USB) accelerometers for use with desktops. The QCN is built around the Berkeley Open Infrastructure for Network Computing (BOINC!), which provides free software to link volunteer computers together for a single scientific goal. 
Learn more about the Desktop Network.
Learn more about the Laptop Network.
The QCN provides a natural way to engage students and the public in earthquake detection and research. This project places USB-connectable sensors in K-12 classrooms as an educational tool for teaching science and a scientific tool for studying and monitoring earthquakes. Through a variety of interactive experiments students can learn about earthquakes and the hazards that earthquakes pose. For example, students can learn how the vibrations of an earthquake decrease with distance by jumping up and down at increasing distances from the classroom sensor and plotting the decreased amplitude of the seismic signal displayed on their computer.



The type of sensors being used are low-cost, three-axis accelerometers. If you take a look at the usage, you'll find that a quake had better not hit on a Saturday or Sunday.
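
For a flavor of how such a network can flag candidate shaking on a single machine, here is a minimal short-term/long-term average (STA/LTA) trigger on an accelerometer trace. This is a generic seismological trigger, not necessarily the algorithm QCN uses; the window lengths and ratio are assumed values.

```python
import numpy as np

def sta_lta_trigger(accel, fs, sta_s=1.0, lta_s=30.0, ratio=4.0):
    """accel: 1-D acceleration trace; fs: samples per second.
    Flags samples where the short-term average of the signal energy exceeds
    `ratio` times the long-term average (a classic seismic trigger)."""
    energy = accel.astype(float) ** 2
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same")
    return np.where(sta > ratio * (lta + 1e-12))[0]
```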

Saturday, April 9, 2011

A Continent Wide Frequency Monitoring Network

[Metafilter readers: Wondering Star is focused on planet-sized sensor networks; if you have other examples, please let me know. Thanks!]

From Min Kao's site, here is the Frequency Monitoring Network (FNET):


FNET is a low-cost, GPS-synchronized wide-area power system frequency measurement network. Highly accurate Frequency Disturbance Recorders (FDRs) developed at Virginia Tech are used to measure the frequency, phase angle, and voltage of the power signal found at ordinary 120-V electrical outlets. The measurement data are continuously transmitted over the Internet to the FNET server housed at the University of Tennessee.

Several applications have been developed which utilize the FNET data. These include:

    * Event detection and location
    * Oscillation detection
    * Animations of frequency and angle perturbations
    * Integration of renewables into the power grid
    * Detection of system breakup or islanding
    * Prediction of grid instability and reduction of blackouts
    * Providing grid control input signals


Currently, FNET collects data from approximately 80 FDRs located across the continent and around the world. Additional FDRs are constantly being installed so as to provide better observation of the power system.
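
As a rough illustration of the kind of measurement an FDR makes, here is a minimal sketch that estimates the mains frequency from samples of a 120-V outlet waveform using interpolated zero crossings. Real FDRs are GPS-disciplined and far more accurate; the sample-rate argument and the interpolation scheme here are assumptions for illustration.

```python
import numpy as np

def mains_frequency(voltage, fs):
    """voltage: 1-D samples of the outlet waveform; fs: sample rate in Hz.
    Estimates the grid frequency from the average spacing of the rising
    zero crossings, with linear interpolation for sub-sample accuracy."""
    neg = voltage < 0
    rising = np.where(neg[:-1] & ~neg[1:])[0]    # last sample before each -/+ crossing
    if len(rising) < 2:
        return float("nan")
    frac = voltage[rising] / (voltage[rising] - voltage[rising + 1])
    crossing_times = (rising + frac) / fs
    return 1.0 / np.mean(np.diff(crossing_times))
```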

Wikipedia entry on FNET.





One can also watch the gradient map live here:

Thursday, April 7, 2011

GPS Constellation as a way to image the Ionosphere

We all know about the GPS constellation of 32 satellites orbiting the Earth in MEO that provides coordinates to our smartphones (have we become too reliant on it?) and is the mainstay of many sensor networks on Earth, giving them location and a universal time reference. One other use of this constellation is to image the ionosphere, i.e. to solve an inverse problem from measurements of the delay in message transmission between the satellites and known antennas on the ground (here is a layman's description of the issue) in order to evaluate the electron concentration in the ionosphere.



Here is an animated GIF version of the solving of this inverse problem. The sensor network is made of 32 satellites plus any number of antennas on the ground, and its size scales directly with the size of the Earth.
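
The inverse problem itself can be sketched as a linear system: each slant TEC measurement is the line integral of electron density along a ray, i.e. a weighted sum of the densities in the voxels the ray crosses. A minimal ridge-regularized least-squares solution could look like the following; real products (such as JPL's) use Kalman filtering and far better priors, and the matrix shapes and regularization here are assumptions.

```python
import numpy as np

def reconstruct_density(path_lengths, slant_tec, reg=1e-2):
    """path_lengths: (n_rays, n_voxels) matrix; entry (i, j) is the length (m)
    of ray i inside voxel j of an ionospheric grid.
    slant_tec: (n_rays,) measured slant TEC values (electrons/m^2).
    Returns an electron-density estimate per voxel (electrons/m^3) from a
    ridge-regularized least-squares fit."""
    A, y = path_lengths, slant_tec
    lhs = A.T @ A + reg * np.eye(A.shape[1])
    return np.linalg.solve(lhs, A.T @ y)
```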

Monday, April 4, 2011

Using Unsuspecting Amateur Comet Chasers and Interplanetary Probes as a Sensor Network

With this arXiv blog entry, entitled "Astronomers Calculate Comet's Orbit Using Amateur Images", one realizes that astrophotographs posted online by amateurs all around the world represent a massive sensor network.

The sensor network described here is embodied by you, me, and whoever can take nightsky images and upload their photographs on the web.

How does this work? In a simplified instance, for each photo where the comet appears, there are also stars surrounding it. These stars can be identified using an algorithm similar to the star trackers used on satellite cameras. In effect, given a good timeline, the algorithm estimates the attitude of the image, i.e. the pointing direction of your camera. Using this information, and performing the same computation for different images, one can infer the trajectory of the comet. I mentioned a similar idea a year ago. However, the algorithm of the paper (see below) goes further than just performing star identification, in that it picks the right star images out of a series of photos downloaded from a search engine (i.e. they are not necessarily images of stars).




The crowdsourcing project used to calibrate the images in this work is the astrometry.net system. From the presentation of the project:
If you have astronomical imaging of the sky with celestial coordinates you do not know—or do not trust—then Astrometry.net is for you. Input an image and we'll give you back astrometric calibration meta-data, plus lists of known objects falling inside the field of view.

We have built this astrometric calibration service to create correct, standards-compliant astrometric meta-data for every useful astronomical image ever taken, past and future, in any state of archival disarray. We hope this will help organize, annotate and make searchable all the world's astronomical information.
So far, the sensor network really is the size of a planet, but we can push this capability further. Instead of just using pictures taken from the ground, one can include pictures taken by space probes. For that, I asked the maintainer of astrometry.net to let me be an alpha tester, and chose this raw footage taken by the Cassini probe currently orbiting Saturn (about 1 billion kilometers from Earth). N00171086.jpg was taken on March 31, 2011 and received on Earth on March 31, 2011. According to the Cassini site, "The camera was pointing toward SKATHI, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated."


The system was able to match its star catalog against the stars in the image:



The code puts your image in perspective and provides an estimate of the field of view (here it is evaluated at (RA, Dec) = (121.200, 30.085) degrees, spanning 21.07 x 21.07 arcminutes).



It also provides the photo framed within a Google Maps GUI.
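
Since the calibration returned by astrometry.net is standard WCS metadata, turning the comet's pixel position in a solved image into sky coordinates takes only a few lines with astropy; the FITS file name and pixel coordinates below are hypothetical.

```python
from astropy.io import fits
from astropy.wcs import WCS

# 'solved.fits' is a hypothetical file whose header carries the WCS
# calibration produced by a plate solver such as astrometry.net.
header = fits.getheader("solved.fits")
wcs = WCS(header)

# Hypothetical pixel position of the comet in that image.
comet_x, comet_y = 512.3, 498.7
sky = wcs.pixel_to_world(comet_x, comet_y)
print(sky.ra.deg, sky.dec.deg)   # the comet's right ascension and declination
```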

This is a very nice set-up, and one can imagine how photos dating back to Mariner and Pioneer could be used to extract historical information about currently unknown comets.



The abstract of the paper reads:

We performed an image search on Yahoo for "Comet Holmes" on 2010 April 1. Thousands of images were returned. We astrometrically calibrated---and therefore vetted---the images using the Astrometry.net system. The calibrated image pointings form a set of data points to which we can fit a test-particle orbit in the Solar System, marginalizing out image dates and catching outliers. The approach is Bayesian and the model is, in essence, a model of how comet astrophotographers point their instruments. We find very strong probabilistic constraints on the orbit, although slightly off the JPL ephemeris, probably because of limitations of the astronomer model. Hyper-parameters of the model constrain the reliability of date meta-data and where in the image astrophotographers place the comet; we find that ~70 percent of the meta-data are correct and that the comet typically appears in the central ~1/e of the image footprint. This project demonstrates that discoveries are possible with data of extreme heterogeneity and unknown provenance; or that the Web is possibly an enormous repository of astronomical information; or that if an object has been given a name and photographed thousands of times by observers who post their images on the Web, we can (re-)discover it and infer its dynamical properties!



Other projects mentioned in the paper include: