Tuesday, October 28, 2014

5, 6, 7, 8... We Need to Calibrate!

^^^It's probably a good thing I did marching band instead of cheerleading in high school.

Let's all take a moment to think ourselves back in time.  We're going all the way back to Ancient Greece, so it might have to be a long moment.

Here's a picture of the Parthenon to help you out.

Are we all there? Good. Back then, the greatest minds in the world, like Aristotle, thought that stars, planets, and other heavenly bodies were perfect, unblemished spheres.  That changed when people discovered sunspots.  If the sun, the most perfect heavenly body of all, had dark marks--one might even go so far as to call them bruises--then surely the rest of the universe was filled with imperfections as well.


That was basically a really long (and maybe unnecessary) way for me to say that data isn't perfect.  Telescopes have flaws.  Nights will be cloudy.  You'll decide to punish yourself by trying to do optical observations in Boston.  How do we deal with this?  Calibration.

There are three basic effects that one would need to calibrate for:
  • Instrumental Imperfections
  • Changes in the sky or atmosphere
  • Fluxes of nearby stars (how much light they're producing per unit area per unit time)
Each one of these calibrations is really important when it comes to collecting accurate data.

Let's start at the beginning (a very good place to start) and talk about how to account for imperfections in your image due to instrumental errors.  To do this, you take what astronomers call dark images.  That means you take a picture in complete darkness (see what they did there?).  When you do that, you get a baseline measurement of how much signal each pixel in the CCD (the part of the camera that actually collects and records the photons being received) reports even when no light is coming in.  The most important instrumental effect that dark images account for is thermal noise.

Electronic devices create heat, which excites electrons.  CCDs record images by keeping track of how many electrons get excited to new energy levels.  They don't care whether those electrons were excited by heat from the device or by photons from the source you're imaging--they blend it all together.  So it's important to keep the device as cool as possible.  Still, it's not like we're going to reach absolute zero, so there will always be some thermal noise.  By taking dark images, you determine how much thermal noise your device inherently has, and then you can subtract that from your final image.

Some cameras automatically take dark images by snapping a picture with the shutter closed before taking the actual photo, but this is still a nice thing to keep in mind.
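
If you like seeing things in code, here's a minimal sketch of dark subtraction in Python with numpy.  The arrays below are fake stand-ins for real camera frames, and this isn't our actual pipeline--it just shows the idea: median-combine several darks into a "master dark," then subtract it from your science image.

```python
import numpy as np

def make_master_dark(dark_frames):
    """Median-combine several dark exposures taken with the same
    exposure time and temperature as the science image."""
    return np.median(np.stack(dark_frames), axis=0)

def dark_subtract(science_frame, master_dark):
    """Remove the thermal (dark current) signal pixel by pixel."""
    return science_frame - master_dark

# Quick demo with fake 100x100-pixel frames (a real pipeline would load
# the camera's FITS or RAW files instead):
rng = np.random.default_rng(0)
darks = [rng.poisson(10, (100, 100)).astype(float) for _ in range(5)]
science = rng.poisson(10, (100, 100)).astype(float) + 500.0  # fake frame: thermal noise plus "sky + stars"
calibrated = dark_subtract(science, make_master_dark(darks))
```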


Now onto atmospheric influences.


Isn't that picture beautiful? Yes, but it's also a pain for astronomers.  See how the light is uneven--the top of the picture is darker than the bottom?  That kind of unevenness isn't very useful for people who actually want to do high-level science with the images.  It comes partly from the atmosphere and partly from the way your optics and sensor respond unevenly across the frame. But there's a way around it, and it's called Flat Fielding.

To take a flat field image, you have to take a picture of a completely evenly-illuminated field.  You could do this at dawn or dusk (when the sky is all one color and there are no stars) or you could illuminate a piece of paper or a T-shirt and take a picture of that.  Get creative! Just make sure the field is evenly-illuminated.  

This basically lets you know how your camera responds to a perfectly even sheet of light, so when your image (almost inevitably) ends up with an uneven field, you can divide it by your flat to even things out.
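
Here's the same idea in a short Python sketch (again, just an illustration with numpy, not our real code): combine your flat exposures into a normalized "master flat," then divide your science image by it.

```python
import numpy as np

def make_master_flat(flat_frames):
    """Median-combine the flat exposures and normalize so the master flat
    averages to 1 -- it only encodes the *relative* response across pixels.
    (In a real pipeline you'd remove the flats' own dark/bias signal first.)"""
    master = np.median(np.stack(flat_frames), axis=0)
    return master / np.mean(master)

def flat_correct(dark_subtracted_science, master_flat):
    """Divide out the uneven illumination/response recorded in the flat."""
    return dark_subtracted_science / master_flat
```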


And, last, but certainly not least, it's important to calibrate to the fluxes of well-studied nearby sources of light.  This is especially important in a project like ours where photometry (measuring how much light we're receiving from the target field) is so key.  Astronomers have worked hard to put together catalogs of stars and their fluxes so that others can use them as references when trying to determine the fluxes of their target.  

Think of it like this:  You're looking at two of your friends standing a football field away.  You can't remember how tall friend A is, but you know friend B is 6 feet tall.  You can then figure out how tall friend A is based on the height difference you observe between A and B.  

Since our entire project depends on being able to catch slight dips in one star's flux relative to its neighbors, this calibration is particularly important.  
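
For the curious, here's a toy Python sketch of that idea (differential photometry).  It assumes you've already measured a flux for the target and for a few comparison stars in every image--the function name and array shapes are made up for illustration, not taken from our pipeline.

```python
import numpy as np

def relative_light_curve(target_flux, comparison_fluxes):
    """target_flux: 1-D array with one measured flux per image.
    comparison_fluxes: 2-D array of shape (n_images, n_comparison_stars)
    holding the fluxes of nearby, well-behaved reference stars."""
    reference = comparison_fluxes.sum(axis=1)  # combined reference signal per image
    ratio = target_flux / reference            # shared effects (clouds, airmass) cancel out
    return ratio / np.median(ratio)            # normalize the out-of-transit level to 1
```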


Hopefully you now understand why calibration is so important.  Or maybe you already knew that and the only thing you really learned by reading this post is what the Parthenon looks like.  Either way, calibration is both awesome and necessary! If only we could just calibrate the imperfections out of our everyday lives...






Monday, September 29, 2014

One of These Things is Not Like the Other

The first exoplanet candidates were announced in 1988, the first confirmed discoveries followed in the early 1990s, and since then, the field has become pretty popular.  Space agencies from around the world have worked to commission telescopes like Kepler and CoRoT whose primary objective is to discover new exoplanets.


So what makes PHATSY different?

Well, first, I should tell you that PHATSY is the Planet Harvester Automated Transit Survey in Y.  This means we plan to use automated units, most likely in the Y band (about 1020 nm), to discover new planets.  Our goal is to make the units so self-sufficient that the people who agree to host them for us don't have to do anything but turn them on.  There are three main differences between our project and those that have already been put into use:
  1. Cost: our project will cost SO MUCH LESS money than the telescopes put up by professional organizations
  2. Equipment: we'll be using amateur astronomer and commercial equipment
  3. Distribution of Units: our units will be distributed around the entire world as opposed to being clustered in one location
The Hubble Space Telescope, one of the most well-known telescopes in orbit (though I may be basing that statement on a biased sample), cost $1.5 billion to build and launch.  That doesn't include the annual cost of staffing and maintenance.  Our units will only cost about $2,500 apiece and will require little to no upkeep.  Why are we able to keep the cost so low?  That takes us to difference #2.

Organizations like NASA use super high-quality equipment.  They kind of have to if they want to 1) send something into space and 2) make sure it survives once it gets there. The equipment they use is so fancy that, for the most part, only specially trained people--engineers--are allowed to work on it.  We don't have to deal with that!  Technology has advanced so much that a really nice camera--a digital single lens reflex (DSLR) camera, or even some camera phones (!)--has high enough resolution to take decent pictures of the sky.


This is one of the pictures we took last week. Do you see how many stars there are?!?! And we're in Boston! Bright, light-polluted Boston!

Our units will each be made up of one DSLR camera, one protective case, and a Raspberry Pi pre-loaded with the code that will tell the camera to take our images every night.  The best part about this is that these are amateur and commercial tools! Anyone can use them, which is good because we're not sending these units to professional telescope engineers.
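
To give you a feel for how simple the Pi's job is, here's a rough sketch of the kind of capture script it might run.  The gphoto2 command-line tool really does drive DSLRs over USB, but the specific flags, cadence, and file paths below are illustrative guesses rather than our actual code.

```python
# Rough sketch of a nightly capture loop on the unit's Raspberry Pi.
import subprocess
import time
from datetime import datetime, timezone

EXPOSURE_GAP_S = 60  # hypothetical cadence: one frame per minute

def capture_one_frame():
    """Ask the attached DSLR (via gphoto2) for one exposure and save it
    with a UTC timestamp in the file name."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    subprocess.run(
        ["gphoto2", "--capture-image-and-download",
         "--filename", f"/home/pi/images/{stamp}.cr2"],
        check=True,
    )

if __name__ == "__main__":
    while True:              # a real unit would only shoot after dark
        capture_one_frame()
        time.sleep(EXPOSURE_GAP_S)
```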

And that takes us to difference #3.  The exoplanet-finding devices I listed above are both space telescopes.  There are a few ground-based telescopes and interferometers that are used to find exoplanets, but they all have something in common: they stay in one place.  And that's not bad! It just means that each of them can only observe when it's nighttime at that particular spot.

We plan on distributing our units longitudinally around the whole world, essentially turning PHATSY into the newest worldwide empire, but this time it would never see the light of day.



Hopefully you now have a better understanding of why this project is important and what it can bring to the scientific community. See you next week! 

Monday, September 22, 2014

In the Beginning...

Earth's a pretty cool place.  We humans have been able to live here (more or less) happily for a couple hundred thousand years.  But it's not like it's cheating if we look at some other planets, right? No.  And that's exactly what PHATSY wants to do.

The field of exoplanets has been booming in astronomy recently.  An exoplanet, for those who are super new to the field, is a planet outside our solar system orbiting its own star.  Humans (especially sci-fi writers) have been interested in finding other possibly habitable planets for decades, and now the technology to do so has finally arrived!  We exoplanet astronomers have three main methods of finding our targets:

  1. Transit Method, where we observe the planet passing in front of its host star
  2. Radial Velocity, where we look for the wobble of the host star (For a naive undergrad's somewhat mathy explanation of wobble, look here.  Or google it if equations aren't your thing.)
  3. Direct Imaging, where we actually take a picture of the planet itself!
It might be a little intuitive that the last method is the most difficult--these planets are really faint and sit right next to their much brighter host stars, after all.  And the second requires really focused observations and, as those of you who followed the link may have seen, math.  So we'll be using the first method.

[Image: transit method diagram, via cornellcollege.edu]

When we look for a transiting exoplanet, we're really trying to see the decrease in the amount of light we receive from the system when the planet passes in front of its star.  Based on this, we can figure out all sorts of cool things like the velocity of the planet, the size of its orbit, the ratio of the planet's size to its star's, etc.  BUT, before we can turn things like Star Trek and Alien Nation into our reality, we have to find a ton of exoplanets.
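
To give a flavor of the arithmetic: the fractional dip in brightness is roughly the square of the planet-to-star radius ratio (assuming the planet is dark and ignoring details like limb darkening), so a quick back-of-the-envelope calculation looks like this:

```python
# depth of the dip ~ (planet radius / star radius)**2
depth = 0.01                 # a 1% dip -- roughly a Jupiter crossing a Sun-like star
radius_ratio = depth ** 0.5  # planet radius is about 10% of the star's
print(radius_ratio)          # 0.1
```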

And that's where PHATSY comes in. The Planet Harvester Automated Transit Survey in Y (PHATSY) plans to use an array of digital single lens reflex (DSLR) cameras around the world so that we can constantly be imaging the sky.  Current exoplanet observations are pretty expensive (still in the $100,000 range), but since we're using common cameras that pretty much anyone could buy, our cost-per-unit is way lower.  The idea of using DSLRs for astrophotometry was thought up by Dr. Olivier Guyon of the Center for Astronomical Adaptive Optics at the University of Arizona.

Currently, our team is made up of five people. Dr. John Johnson (kind of the Professor X of our group) is a professor at Harvard University who selected all of us to work on this project with him when he moved to Harvard from Caltech last year.  John Lewis is a grad student at Harvard interested in star formation in giant molecular clouds.  Nina Hooper is a junior at Harvard studying astrophysics who spent some time over this past summer getting familiar with our equipment along with the fourth member of our team, Inez Khan. Inez is a high school senior (I know, super impressive!) who's interested in sticking with astronomy.  And finally, Moiya McTier (that's me) is a junior at Harvard studying astrophysics and Folklore/Mythology who thinks astronomy is really cool and wants everyone else to think so too.

That's about it for the short, introductory post.  I promise there will be more posts soon with more background information and detailed accounts of our progress with the project.  See you then!