Thursday, March 31, 2011

More pics of lightbox

I didn't include a finished picture of the lightbox before because it wasn't assembled yet. Here it is: four "white" LEDs sit four inches from the top, shining UP into baffles and a ceiling covered with shiny aluminum tape (real duct tape, not duck tape, which IS, by the way, the proper name of that silver cloth tape we all know; it's made of cotton duck fabric. Don't use it on your ducts. If I don't tell people things like that once I learn them, eventually my head explodes.) The light then shines down through two 1/8" layers of white plexiglass.









Here is the same view, without a flash...the cutout simply rests over the top of the scope, so that the camera only sees that soft white glow, which is hopefully extremely even, or what photographers and pilots call "flat" light. Hence, the photos taken of this are called Flat Frames, or flats. Now all is finally clear, eh?

In a perfect optical system (scope, camera, etc.) the actual photo would be just as flat as the light source. In the real world, that does not happen. All scopes, and even cameras, fail to light the entire sensor surface (or film) perfectly evenly. You can see in the last post how dramatic that effect is in my own setup.




With every object I shoot now, once the focus is set, the ISO is decided upon, and so on, I place the box over the scope and shoot about 20 pictures of the flat light with automatic exposure. Then those are combined, just like the actual pictures of the pretty stuff, by whatever software is being used. (I use DeepSkyStacker, aka DSS; not only the best, but free.) Notice that by the time that flat light has reached the CMOS sensor in my camera, it is FAR from being flat any longer. DSS will then apply an inverse brightness to each and every pixel in my actual photos, cancelling the effect.
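
For anyone who likes to see the arithmetic, here is a rough sketch in Python of what that inverse-brightness step amounts to. This is not DSS's actual code, just the standard idea; all the names and numbers below are made up by me:

import numpy as np

# Fake the uneven illumination: corners about 30% dimmer than the center.
h, w = 480, 720
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2)
falloff = 1.0 - 0.3 * (r / r.max()) ** 2

# Twenty pretend exposures of the lightbox, each showing that same falloff,
# averaged into one master flat.
flats = falloff * 1000 + np.random.normal(0, 10, size=(20, h, w))
master_flat = flats.mean(axis=0)

# A pretend light frame of the sky suffers the same falloff...
light = falloff * 500 + np.random.normal(0, 5, size=(h, w))

# ...so dividing by the normalized master flat cancels it pixel by pixel.
corrected = light / (master_flat / master_flat.mean())

print("corner vs. center before:", round(light[0, 0] / light[h // 2, w // 2], 2))
print("corner vs. center after: ", round(corrected[0, 0] / corrected[h // 2, w // 2], 2))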


Not only will it even the field a great deal, but it repairs other faults. Click on the flat picture to see a larger version, and you will notice various splodges and dots around...those are dust on the sensor chip in my camera. The thing is filthy, but difficult to clean; for now, I'll let DSS handle that too. By comparing these flat frames with another reference called a dark frame (you guessed it: put on the lens cap and take pictures of the dark. Really.), it learns what is dirt, and extrapolates what should really be in that spot. There is a fourth type of reference frame, the flat-dark, which is a dark frame shot to match the flats rather than the lights. No, I don't quite know why it's needed yet...but in practice, one really needs only three of the four references, either flat-dark-offset or flat-flatdark-dark.
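
Putting the references together, the usual calibration arithmetic works out to something like the sketch below. Again, this is just the common formula as I understand it, in Python with names of my own choosing, not what DSS literally does inside:

import numpy as np

def make_master(frames):
    # Median-combine a stack of reference frames into one "master" frame;
    # the median ignores one-off outliers such as cosmic-ray hits.
    return np.median(np.asarray(frames), axis=0)

def calibrate(light, master_dark, master_flat, master_flat_dark):
    # Subtract the dark matched to the light exposure, then divide by the
    # flat with its own matching dark removed and normalized to a mean of 1.
    flat = master_flat - master_flat_dark
    flat = flat / flat.mean()
    return (light - master_dark) / flat

(My guess: when the flats are short exposures, the offset can stand in for the flat-dark, which would explain why only three of the four references are needed at any one time.)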

Another note: the flat frames MUST be shot with the camera in exactly the position and state it will be in for the light frames. Even the slightest change of focus or rotation of the camera makes them useless. Ask me how I know this...early on, I tried using one set of reference pics for every session; it was worse than using none at all. The dark and offset frames MUST be shot at the same temperature as the light frames. Digital camera sensors change their behavior with small changes in temperature. Again, reusing the set from the other night is therefore useless.

I won't show a copy of a dark frame, because it's just a black rectangle...but two types are taken. One is shot at the SAME exposure, ISO, and temperature as the light frames (the ones with the pictures); I take 20 after each subject is shot. The other, called a bias or offset frame, is shot the same way, but with the fastest shutter speed available. The first allows any "hot" pixels to show up, generally as red, green, or blue dots, while the offset shots show any actual flaws in the circuitry of the camera. Once DSS has combined all of this information, it corrects the light frames. In this manner, it is actually able to make pictures from my 10-year-old camera look even cleaner than they did on Day One, since all digital cameras have a certain number of flawed pixels.
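
As a toy example of how a master dark exposes those hot pixels, here is a quick Python sketch; the threshold and sizes are my own choices for illustration, not anything official:

import numpy as np

def find_hot_pixels(master_dark, n_sigmas=5.0):
    # Flag any pixel sitting well above the typical dark level; these are
    # the red/green/blue dots that show up in every long exposure.
    level = np.median(master_dark)
    spread = np.std(master_dark)
    return master_dark > level + n_sigmas * spread

# Toy master dark: mostly quiet pixels plus two stuck ones planted by hand.
dark = np.random.normal(100, 3, size=(480, 720))
dark[50, 60] = dark[200, 300] = 4000
print("hot pixels found:", int(find_hot_pixels(dark).sum()))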

So the basic process of a shoot is this:
1. Choose the target for the night. I mean a stellar target, not the neighbor's chihuahuas.
2. If not already done, place the scope, then set up the mount and drives to match the Earth's axis as perfectly as possible. This is done with a series of long exposures where the scope is slowly moved back and forth, and any deviation in the star trails shows up in the picture. I allow 30 minutes minimum for this step, for alignment accurate enough to avoid any star trailing in 60-second exposures. The more accuracy you want, the longer it will take; I seem to find that it requires 30 minutes of setup for each 30 seconds of trail-less exposure. If you just want to look at things visually, the whole process takes 2 or 3 minutes.
3. Set up the camera gear, deciding which side of the tripod the scope should be on, where the camera is mounted, exposures, ISO, focus...focus, in fact, is done with a handy tool called a Bahtinov mask. The eye simply isn't good enough to rely on for photo-quality focusing. I'll show some pictures of the process some day (if you just can't wait, Google it.) My Bahtinov mask is homemade, and is okay; I still make a series of pictures with it in place to home in on perfect focus.
4. Place the lightbox over the scope, and shoot 20 flat frames. 
5. Remove the lightbox, and shoot my light frames. Currently, I shoot all of them at 30 seconds and ISO 800, which is the "sweet spot" for the 300D; I want to increase to 60 or 90 seconds if I can improve my motor drive. My alignment skills are good enough, but the poor-quality gears are killing me. I'd like to shoot an object for at least two hours and, out of that, end up with 90 minutes of usable exposure.
6. Shoot the dark frames, then the bias frames.
7. Load it all into DSS, push the button, and see what I get in the morning.

Note that more than an hour is spent on setup and reference frames for a given target. Hey, who said this would be easy?
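
For the curious, the rough arithmetic of a session under my current settings looks like this; the 30-second subs, 90-minute goal, 30-minute alignment, and 20-frame reference sets come from above, while the rest are my own guesses:

# Light frames needed for the 90-minute goal at 30 seconds apiece.
sub_exposure_s = 30
target_integration_min = 90
lights_needed = target_integration_min * 60 // sub_exposure_s   # 180 frames

# Overhead: drift alignment plus the reference frames (flats and darks run
# roughly at the sub exposure; bias frames are effectively instant).
# Camera setup and Bahtinov focusing in step 3 come on top of this.
alignment_min = 30
flats = darks = 20
reference_min = (flats + darks) * sub_exposure_s / 60

print(lights_needed, "light frames")
print(alignment_min + reference_min, "minutes of alignment and reference frames")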

A note on that ISO 800...digital cameras still use the old film terminology and standards, so the same language that was always used to talk about and shoot pictures still applies. The same "sunny 16" rules and the like that worked with film carry straight over to digital. But while film tended to have larger globs of emulsion in high-ISO varieties, a digital camera uses the same sensor for ALL ISO settings; it changes the sensitivity by amplifying each pixel more or less aggressively. Higher ISO settings in a digital camera are still "noisy," but the grain stays the same size. Digital keeps the same resolution at high ISO settings; film loses resolution because the grains are larger. Almost the same, but not the same. The very nature of stacking digital images is based on this: if a pixel has a certain value in every shot, it's signal; if it doesn't, it's noise, and is dropped. So even at ISO 800, DSS can produce extremely smooth and fine-grained pictures. With film, since the blobs of emulsion never line up perfectly from frame to frame, adding frames to each other simply blurs the image. That is why film astrophotography required very long exposures, and also very well-made, sophisticated equipment to keep the scope aimed perfectly at a moving target. With digital, even people like me with obsolete cameras and shoddy gear can still play.
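
A quick toy demonstration of why stacking works so well for digital: average enough frames of the same pixel and the random noise shrinks by roughly the square root of the frame count. Plain Python/numpy, with numbers I invented for the example:

import numpy as np

rng = np.random.default_rng(0)

true_signal = 50.0        # the "real" brightness of one faint pixel
n_frames = 180            # e.g. 90 minutes' worth of 30-second subs

# Every frame records the same signal plus fresh random noise.
frames = true_signal + rng.normal(0, 20, size=n_frames)

print("error in a single frame:  ", round(abs(frames[0] - true_signal), 1))
print("error in the stacked mean:", round(abs(frames.mean() - true_signal), 2))
# With 180 frames the noise drops by about sqrt(180), roughly 13x.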

I don't expect anyone to be entertained by posts like this one. The purpose of this blog is to chart my own progress on the trip from decent photographer and very amateurish scope hobbyist to astrophotographer. Restating what you learn often makes it stick better. So sorry if it bores you...anyone reading is always free to skip the text and just check the pictures.

I am wondering if DSS can be used to correct the same problems in regular daytime photography...at least the bad pixels, dirty cameras, and uneven lighting.
