Everything posted by alisterchapman

  1. I suspect there are two issues: can the sensors actually be read at 3:2 or 6:5 at video frame rates without overheating or other problems, and do the processing paths have sufficient bandwidth? Then the other, possibly bigger, issue is how you record it. There is no codec in these cameras that can handle anything other than 16:9 or 17:9, and the codec is a hardware chip that likely has very limited upgrade options. Hopefully this is something that will be included in the next generation.
  2. Sony rates the ND filters in most of their cameras using a fractional value such as 1/4, 1/16 or 1/64. These values represent the amount of light that can pass through the filter, so a 1/4 ND lets 1/4 of the light through. 1/4 is the equivalent of 2 stops (1 stop = half): 2 stops = 1/4, 3 stops = 1/8, 4 stops = 1/16, 5 stops = 1/32, 6 stops = 1/64, 7 stops = 1/128. These fractional values are actually quite easy to work with in conjunction with the camera's ISO rating. If you want to quickly figure out what ISO value to put into a light meter to discover the aperture/shutter needed when using the built-in ND filters, simply take the camera's ISO rating and multiply it by the ND value. So, 800 ISO with 1/4 ND becomes 800 x 1/4 = 200 (or you can do the maths as 800 ÷ 4). Put 200 into the light meter and it will tell you what aperture to use for your chosen shutter speed when shooting at 800 ISO with 1/4 ND. If you want to figure out how much ND to use to get an equivalent overall ISO rating (camera ISO and ND combined), take the ISO of the camera and divide it by the ISO you want; this gives you a value "x" which is the fraction in 1/x. So, if you want 3200 ISO from a base of 12,800, then 12,800 ÷ 3200 = 4. Set the camera to 12,800 ISO and the ND to 1/4 and the sensitivity of the camera becomes, in effect, 3200 ISO.
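The ND arithmetic above can be sketched in a few lines. This is just the post's maths expressed as code; the helper names are my own, not anything from Sony.

```python
# ND fraction / ISO helpers, following the maths described in the post.

def stops_to_nd_fraction(stops: int) -> float:
    """1 stop = half the light, so N stops of ND passes 1/2**N of the light."""
    return 1 / 2**stops

def meter_iso(camera_iso: int, nd_fraction: float) -> float:
    """ISO to enter into a light meter when using the built-in ND."""
    return camera_iso * nd_fraction

def nd_for_target_iso(base_iso: int, target_iso: int) -> str:
    """ND fraction needed to make base_iso behave like target_iso."""
    x = base_iso // target_iso
    return f"1/{x}"

print(stops_to_nd_fraction(6))          # 0.015625, i.e. 1/64
print(meter_iso(800, 1/4))              # 200.0 - meter at 200 ISO
print(nd_for_target_iso(12800, 3200))   # 1/4
```

Running the worked examples from the post gives 200 for the meter and 1/4 ND for an effective 3200 ISO from a 12,800 base.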
  3. What’s the difference and which should I use? On the FX3 and FX30 the default is Quick Format, but you can also do a Full Format; on the FX6 you can select either from the menu. Full Format erases everything on the card and returns it to a completely empty state. All footage is deleted and it cannot be recovered later should you perform a Full Format by mistake. Because Full Format returns the card to a completely empty state, removing any junk or other clutter, it also ensures that the card's performance is maximised. Quick Format erases the file database on the card but does not actually remove your video files. When you then start a new recording, the new recording will fill any empty space left on the card, if there is any. If there is no empty space, the new file will overwrite the existing files on the card. In some cases, if you have accidentally done a Quick Format, you may be able to use data recovery software to rescue any files that have not already been overwritten, but file recovery is not guaranteed. As Quick Format does not clear all data from the card, over time the performance of the card may degrade, so a Full Format should be performed periodically to clean up your media and restore any lost read/write performance.
  4. Most of us are probably aware that lithium batteries should be treated with great care to keep them safe. But one thing I wasn't fully aware of is the damage that can be done, and the resulting safety risks, when you try to charge a very cold lithium battery. When you charge a very cold lithium battery, some of the metallic lithium in the battery gets plated onto the anode. You can't normally tell or see that this is happening. It will very slightly reduce the capacity of the battery, but more importantly it greatly increases the risk of an explosion or fire. No matter how well the battery is made, if sufficient lithium ends up on the anode, an impact shock or any high temperature usage can cause the lithium to ignite, causing a battery fire. The battery's management and protection circuits cannot protect against this type of failure, and although some manufacturers do include circuits that will prevent a cold battery from accepting a charge, many do not. You must never try to recharge cold lithium batteries; they should always be allowed to warm up to room temperature before charging. Repeated charging at cold temperatures is dangerous and must be avoided.
  5. Just be careful doing this not to exceed 2 amps, as you can damage the cable. When you run 12V instead of 24V, the maximum power that can be passed down the cable safely is halved. At 12V you have a maximum power draw of around 24W, but at 24V you have 48W. It's better to put any voltage converters at the output from the Rialto rather than at the camera body end.
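The figures above are just P = V x I with the cable's current limit held at 2 amps, which can be sketched as:

```python
# P = V * I for a cable limited to a fixed current, per the post above.

def max_power_watts(voltage: float, max_current_amps: float = 2.0) -> float:
    """Maximum safe power draw for a cable rated at max_current_amps."""
    return voltage * max_current_amps

print(max_power_watts(12))  # 24.0 W at 12 V
print(max_power_watts(24))  # 48.0 W at 24 V
```

This is why halving the voltage halves the usable power: the cable's limit is the current, not the power.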
  6. Quite possibly the issue was that 12V is a bit on the low side. The specs are 11V to 17V, but ideally the camera wants 13.8V to around 15V, or 24V/28V. At 12V, even the slightest voltage sag in any of the connecting cables will cause the voltage at the camera to drop too low, and given the high current draw, some voltage sag is inevitable even with good quality cables. I regularly power Venice using a 13.8V 100W power supply without issue, but you must ensure the DC cables are of suitably high quality and actually capable of carrying at least 8 amps.
  7. To keep 3D LUTs a manageable size, they don't have an adjustment for every possible input and output value; if they did they would be massive. Instead, 3D LUTs divide the image into ranges, typically 33 x 33 x 33 points across red, green and blue. Every value that falls within a segment gets the same correction, so across the image there are visible steps between each correction point. These steps can often show up in the output image as banding or sudden colour/brightness shifts. To prevent this, in post production additional calculations are used to smooth out the steps by interpolating between the points within the LUT, but this needs a lot of processing power to do well, especially if you not only interpolate within each colour input channel but also 3-dimensionally across all 3 output colours. In a camera there often isn't sufficient processing power to perform these interpolation processes, so banding is seen on the camera's output. The .art system was designed as an alternative to 3D LUTs for the original Venice camera, so that you get a transformation function similar to a 3D LUT but without introducing the commonly seen step/banding artefacts that were a result of the limited amount of interpolation available in the original Venice. In Venice 2 the LUT processing capabilities have been greatly improved, so normal 3D LUTs now have much better interpolation and banding is rare.
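To make the interpolation idea concrete, here is a minimal sketch of trilinear interpolation in a 33x33x33 LUT - the kind of per-pixel smoothing grading software performs between the LUT's points. This is an illustrative toy, not how any particular camera or NLE implements it, and the function names are my own.

```python
import numpy as np

def apply_lut_trilinear(rgb, lut):
    """Look up rgb (values in 0..1) in a (N, N, N, 3) LUT with
    trilinear interpolation between the 8 surrounding lattice points."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0, 1) * (n - 1)
    i0 = np.floor(pos).astype(int)          # lower lattice index per channel
    i1 = np.minimum(i0 + 1, n - 1)          # upper lattice index per channel
    f = pos - i0                            # fractional position in the cell
    out = np.zeros(3)
    # Weighted blend of the 8 corners of the enclosing LUT cell.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                idx = (i1[0] if dr else i0[0],
                       i1[1] if dg else i0[1],
                       i1[2] if db else i0[2])
                out += w * lut[idx]
    return out

# Identity LUT: output equals input, so interpolation should be exact.
n = 33
g = np.linspace(0, 1, n)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(apply_lut_trilinear([0.2, 0.5, 0.8], lut))  # ~[0.2, 0.5, 0.8]
```

Without this blend, every input inside a cell would snap to the same corner value, which is exactly the stepping/banding the post describes.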
  8. Don't forget as well that cameras like the FX9/FS7 and many others have a special colour matrix called "FL-Light" that can be used under fluorescent lighting to eliminate the green bias without affecting the tint or hue.
  9. I think it's pretty safe to say no. The 6K sensor readout in the FX9 clearly can't go above 30fps, which is why Sony had to include the 5K scan mode and the 2K full frame scan modes (reading 6K requires double the bandwidth of 4K). I'm quite certain that if it were possible it would have been done early on in the camera's life. It appears to be a sensor issue and the penalty you pay for having a 6K sensor that allows FF + s35 with at least 4K of pixels. If you were to read only 4K of pixels at FF there would be some nasty aliasing and moiré artefacts; to have a decent looking image you need to read all 6K of pixels. It's also worth observing that the FX9's 4K 120fps raw mode is limited to s35 scan and is only 10 bit. And it's interesting to look at the FX30, another lower cost Sony camera with a 6K sensor: to shoot 4K 120fps with the FX30 you have to crop by 1.5x so that only 4K of pixels are read out.
  10. Every now and then I'll come across someone struggling to grade a shot. Often the problem is the result of a light source that has an incomplete or very narrow spectrum. A lot of cheap LED lights, as well as many types of discharge lights such as sodium street lights or neon lights, only output light at certain specific wavelengths or only encompass an extremely narrow part of the light spectrum. These kinds of lights will result in images that are strongly coloured and next to impossible to grade, because the narrow bandwidth of the light means the footage only contains a single colour or an extremely limited range of colours. You will be able to change that single colour to a new colour, but because there is only one colour, everything else in the image will change by a similar amount. It will be next to impossible to pull out subtle skin tone hues from a face lit by a sodium street light, as every part of the face will be the same hue due to the single wavelength of the light source. Unfortunately there is no simple fix for this, so it is something that needs to be considered when shooting under discharge lights. If shooting S-Log, this is one of those times where monitoring only the S-Log image can be a little dangerous, as it may not be obvious that your colour palette is extremely restricted. Monitor via a LUT and it will normally be very obvious. Another issue I see from time to time is where one of these low quality narrow spectrum lights spills into a scene and causes very odd looking, highly saturated highlights. Our eyes adapt to these lights; often a pool of light from a narrow bandwidth light won't look all that bright to us. But to an electronic sensor that doesn't adapt the way our eyes do, it might appear very bright, and in some situations it may cause one of the sensor's colour channels to clip, resulting in some very odd looking highlights.
This is commonly seen where blue LED up-lighters are used to light up a wall. To our eyes the blue might not appear super bright, but to a video camera that intense but narrow wavelength blue light will cause the blue channel to clip. In a lot of cases there isn't much you can do about it (other than turning down the light), so it's something to watch out for. Many Sony broadcast cameras have a matrix setting called "Adaptive Matrix" that can be turned on specifically to deal with this. But if shooting S-Log you won't be able to take advantage of it, so again, look carefully at your images while shooting, preferably via a LUT, if you suspect that there may be narrow bandwidth light in your scene.
  11. I love the FX30, it's a great little camera and it produces a lovely image. But one thing a few have noticed is that if you shoot at 100fps or 120fps it can be a little more noisy than at other frame rates. When shooting 4K/UHD up to 60fps, the FX30 downsamples from 6K of pixels to a 4K recording. This oversampling brings a nice noise reduction, equivalent to almost 1 stop of exposure (around 4-6dB). When you shoot at 100/120fps the camera reads 4K of pixels; there is no downsampling, so the images will be noticeably more noisy. This is just a limitation of how this camera works; my guess is that it doesn't have enough processing power to convert 6K to 4K at 120fps, or perhaps the sensor can't be read at 6K at 120fps. The plus side is that the normal 4K recordings downsampled from 6K really are very good indeed and packed full of detail and texture. Also bear in mind that if you shoot at 120fps with the near equivalent of a 180 degree shutter - 1/250th - you will need around 5 times more light to get the same exposure compared to 24fps and 1/48 (180 degrees). So to get the same exposure you need to open up the lens or increase the light level by a little over 2 stops (5x more light is roughly 2.4 stops). Any less than this and you will be under exposed, and that will make your footage look more noisy. When I shoot at 120fps I will often use 1/125 to gain back 1 stop, then expose nice and bright to help eliminate the extra noise.
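Worked out in code: going from a 1/48 shutter to 1/250 needs about 5.2x more light, which in stops is log2 of that ratio, roughly 2.4 stops, and dropping to 1/125 gains exactly 1 stop back. A quick sketch:

```python
import math

# Extra light needed when the shutter time shortens, per the post above.

def stops_difference(shutter_a: float, shutter_b: float) -> float:
    """Stops of extra light needed going from a 1/shutter_a shutter
    to a faster 1/shutter_b shutter (shutter_b > shutter_a)."""
    return math.log2(shutter_b / shutter_a)

print(round(250 / 48, 1))                    # ~5.2x more light needed
print(round(stops_difference(48, 250), 1))   # ~2.4 stops
print(round(stops_difference(125, 250), 1))  # 1.0 stop gained at 1/125
```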
  12. I'll try to simplify this as much as possible, as there are some concepts here that are often misunderstood. SYNC: when 2 devices are connected such that they run or operate at exactly the same rate, at the same time. On a video camera, a reference signal is fed to the camera's Genlock port, and the "genlocked" camera then runs at exactly the same frame rate as, and locked to, the reference signal. But as Doug has already commented, the FX6 cannot be genlocked, so there is no way to regulate its frame rate to precisely match that of another camera. As a result, with multiple cameras there is no guarantee that, even if both cameras are started at exactly the same moment, over a period of more than a few minutes both will record exactly the same number of frames. TIMECODE: a unique, sequential, numerical value given to each video frame by a video camera. Each frame in a sequence must have a timecode value that is 1 frame greater than the previous frame. If you record 1000 frames, the timecode count must increase by 1000 in the clip. EXTERNAL TIMECODE: this is not a sync signal. It is the output of the timecode clock of another device (which could be another camera) fed to the timecode input of a camera; the timecode clock in the receiving camera then follows the external timecode value. But this external timecode clock may be counting at a very slightly different rate to the number of frames the camera is actually recording. So, to ensure every frame always has a unique sequential number, the moment you press the record button the camera takes the last TC clock number seen on the external input, and from that moment on it counts the frames actually recorded, adding +1 to each frame, regardless of the external TC number. Where you sometimes (often?) get an issue is with long takes.
The sync clock in most cameras will drift in frequency very slightly as the temperature of the camera changes, or due to other factors. If during the record period the external TC clock counts to 1005, but the camera only records 1000 frames because it is running fractionally slower than the external clock source, there will be a 5 frame difference between the external TC and the TC recorded with the clip. Once you stop recording, the camera's TC clock will re-sync with the external TC clock, so the error becomes zero again. So, the first frame of every clip will match the external TC, but later in the clip the external TC value and the clip TC value may be very slightly out. Generally this is only rarely an issue with clips under 10 minutes, but when shooting long takes such as performances the drift can become significant. If the cameras are genlocked, the frame rates of all the cameras will be identical, so there will not be any timecode drift. So, when shooting with cameras that can't be genlocked but do accept external timecode, it is a good idea to stop recording from time to time to allow the camera's timecode clock to re-sync with the external TC. If using one camera with a sound recorder, feed the TC from the camera to the audio recorder; because audio recorders don't have frames, they simply place the external TC alongside the audio, so there isn't a sync issue going from camera to audio recorder.
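The drift mechanism above can be illustrated with a toy calculation. The 0.5% clock error is a made-up figure chosen to reproduce the 1005-vs-1000-frame example; real camera clocks drift far less, but the principle is the same.

```python
# Illustration of external-TC drift on long takes: a camera whose sync
# clock runs slightly slow against the external timecode source.

def drift_frames(take_seconds: float, fps: float, clock_error: float) -> int:
    """Frames of mismatch between the external TC count and the frames
    the camera actually recorded, for a camera running clock_error slow."""
    external_frames = take_seconds * fps
    recorded_frames = take_seconds * fps * (1 - clock_error)
    return round(external_frames - recorded_frames)

# A camera 0.5% slow: external TC counts 1005 while 1000 frames record.
print(drift_frames(41.875, 24, 0.005))  # 5 frames adrift by the take's end
print(drift_frames(600, 25, 0.0))       # 0 - genlocked, no drift
```

The error is proportional to take length, which is why stopping and restarting (letting the TC clock re-jam) keeps short clips in sync.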
  13. Sony have now released new firmware updates for both the FX3 and FX30. The FX3 goes to firmware version 2.02 and the FX30 to firmware version 1.02. These are mainly stability releases that fix some minor bugs, but if you have an FX3 on the original version 1 firmware then this version adds the CineEI mode and LUTs - a major update that is well worth having. Before attempting to update the camera you should insert a fully charged battery. The FX3 is updated via a computer application. While there is a Mac application, there can be some hoops to jump through to get it to work, so I would urge you to find a Windows PC to do the update; it is far simpler and far more likely to be successful. The good news is that once you have updated to version 2.02, future updates can be done by loading the update file onto an SD card and initiating the update from the camera, like the FX30. The FX30 is updated by placing the downloaded BODYDATA.DAT file onto an SD card that was previously formatted in the camera. Then place the card in the camera and go to the SETUP – SETUP OPTION – VERSION page of the menu. Here you should see the camera's current firmware version plus a “SOFTWARE UPDATE” button. Press (select) the software update button. On the next page it will say “Update ?” and show the old and new firmware versions. Just below this is a box that says “Please follow these precautions until the very end”. What isn't clear at this step is that you need to scroll down inside that box and read the full list of precautions before the camera will allow you to do the update. If you don't scroll down and just press the “Execute” button, you get a large popup telling you to “Follow the precautions to the very end”, and pressing “OK” simply takes you back to the previous page. So do make sure you scroll down through the full list of precautions before you press Execute.
Once the update starts the screen will go blank; the only clue that the update is happening will be the slow flash of the media LED on the back of the camera. The update takes about 10 minutes to complete and the camera will reboot when it's done.
  14. It's a difficult one. While there is no doubt that 48 or 60fps will deliver smoother motion, it will also appear more "video" like (not that that's necessarily a bad thing). For decades, feature films and IMAX films shot at 24p have been shown on huge screens; judder hasn't typically been an issue and many consider it part of the "movie experience", as some people have a different emotional reaction to the lower frame rates. 48fps in the Hobbit movies did not get a good reception; Avatar seems to be more of a mixed bag, with some people loving it and others not. Personally, for narrative I like 24fps; there is something about it that separates it from the real world. I also like reading books and allowing my imagination to fill in the gaps. For "experience" films I think higher frame rates are better. What is interesting is that there is a lot of discussion at the moment about shooting narrative at 30p. It has significantly less judder and stutter than 24p but isn't as smooth as 48 or 60p. There is some reluctance towards 30p simply because many confuse it with 30fps interlace, which has motion closer to 60p. But a lot of people are wondering if 30p may be a serious option for digital movie making, as it offers a nice middle ground and fits well with computer displays. If it were me, with the technologies available right now, I would be thinking about 30p or 60p for large screen presentation, in part because computer servers etc. tend to output at 60Hz, and 24fps or 48fps will stutter more than it should on a 60Hz system. But at the same time, 24p on a big screen doesn't scare me.
  15. Something that often gets asked is - how should I clean my camera? My process is this: Start with a good quality soft paint brush and gently brush off any dirt or dust from the outside of the camera. DO NOT use the paint brush on any glass ports or the sensor, just the camera body, handles and other accessories. A small artists brush can be used to get into all the little gaps and crevices, but don't poke it into any connectors as you could damage the pins. Most of the time this is all you need to do. If the camera is extremely dusty then I might use a small vacuum cleaner with a brush attachment, but this needs to be done with care as excessive suction could damage the fans inside the camera. The next step (when needed) is to use a soft polishing cloth to wipe down the camera. If it's very dirty then I will use a solution made up with 1 cup of warm water with 1 or 2 drops of dish soap (you really do only need 1 or 2 drops). This is very effective at removing dirt and grease but shouldn't attack the paint or damage the plastic. Don't soak the camera, just dampen the cloth and gently wipe over the camera. If there is dirt in a connector or similar I will use a handheld puffer to try to blow it out. I do not like canned air, it can make things worse as it is quite powerful and can blow dirt and debris deeper into the camera. What about cleaning the optical port - the piece of glass in front of the ND filters and sensor on full size cameras like the FX6/FX9/FS7 etc? This piece of glass is coated with an anti-reflective coating so needs to be treated very gently - don't use strong solvents as they can strip the coating. To clean this I start with a handheld puffer (get one where the nozzle is part of the bulb to ensure the nozzle doesn't fly off onto the glass). I use the puffer to blow off any dirt or dust. 
Again, in most cases this is all that is needed, and I always start with this as it should remove anything that could scratch the glass if you do need to progress to the next steps. If the puffer isn't enough then I use the brush end of a "Lens Pen". This is a very soft brush designed for cleaning delicate optics; they are available from most camera stores. Use the brush to very gently brush off any dirt. If that still isn't enough then you can use the other end of the lens pen, which is normally a flat swab, to wipe the glass port. But be very, very gentle. Start at the center and work your way towards the edge in a very slow, light circular motion. You should now have a nice clean optical port. But if someone has put greasy fingers on the port and you are struggling to get it clean, then as a last resort I would use 1 drop of dish soap in 1 cup of distilled water and a microfibre lens cloth dipped in the solution to gently wipe the port. Lens pens are cheap; you should replace yours regularly, especially if it gets dirty or has been used on an oily or greasy surface. For cameras that have an exposed sensor, I will try to avoid cleaning the sensor at all costs. A puffer can be used to blow off dust, but you don't want to ever touch the sensor unless you absolutely have to. To clean the sensor, buy a good quality sensor cleaning kit, which will normally consist of special swabs that are gently dry wiped across the face of the sensor. Never rub, never scrub; follow the instructions that come with the swabs. Lenses: again, puffer first to blow off dust and dirt, then the brush end of a lens pen, and for more stubborn dirt the flat end of a lens pen. For a lens cloth it depends on what I am shooting. For most applications a microfibre lens cloth will work well.
But if you are shooting in the rain or a very damp location, a soft chamois leather (the very soft leather used to dry a car after washing) is good for removing rain, as most conventional lens cloths just tend to smear it all over the lens. Finally: keep your lens cloths in sealed bags to keep them clean and free of grit and dirt. Just one speck of hard grit on a lens cloth can ruin a lens if it gets wiped across the glass, so you should be very careful to keep your cleaning gear clean. And also, a few small specks of dust on the front of a lens or filter rarely cause an issue; don't overdo the cleaning, as any time you wipe a lens there is a risk of scratching it, and a scratch will show up a lot more than a few specks of dust.
  16. I've been using the UWP-Ds for years and they have proven to be amazingly reliable. One thing that is often overlooked with many of the cheaper radio mics that use the 2.4GHz Wi-Fi band is the latency. The lower cost mics often have a delay of between 19ms and 60ms. This can cause big issues if you try to mix one of these digital mics with a wired microphone or the camera's internal mic, as you will have an echo or, worse still, phase issues that cause all sorts of weirdness such as very thin sounding audio. If the audio tracks are all recorded independently this can be resolved in post production, but it's extra work and extra steps. The UWP-D mics have extremely low latency, so mixing them with wired mics or the internal mic is not an issue.
  17. To clarify, the equivalent focal length given is a result of the crop introduced by things such as focus breathing compensation, active image stabilisation, or the crops used at higher frame rates, such as the 1.6x crop from APS-C when shooting at 100/120fps with the FX30. In the case of a Sony FF lens this is added to the 1.4x crop between FF and APS-C. I suspect that for non-Sony lenses the camera may assume it is a FF lens and so incorrectly adds the extra 1.4x. But I really do wish crop factors weren't applied to focal length, as crop has nothing at all to do with focal length. A 50mm lens is a 50mm lens no matter what sensor you put it on, and will exhibit the same perspective and same DoF regardless of the sensor size. Crop factors only affect the field of view. And if you come from a cine film background, your normal reference frame size isn't full frame photo, it's 35mm movie film, which is closer to APS-C than FF.
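The "crop changes field of view, not focal length" point can be shown with the standard horizontal FoV formula. The sensor widths below are nominal figures I've assumed (36mm for full frame, ~23.5mm for Sony APS-C); the focal length stays 50mm in both cases, only the angle of view changes.

```python
import math

# Field of view depends on sensor width; focal length never changes.

def horizontal_fov_deg(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view for a rectilinear lens of focal_mm
    on a sensor sensor_width_mm wide."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(round(horizontal_fov_deg(50, 36.0), 1))  # ~39.6 deg on full frame
print(round(horizontal_fov_deg(50, 23.5), 1))  # ~26 deg on APS-C - narrower
```

Same 50mm lens, same perspective and DoF at a given distance and aperture; the smaller sensor simply records a narrower slice of the image circle.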
  18. Most normal 35mm anamorphic lenses will indeed fill the height of a full frame sensor, but most will vignette or be extremely distorted to the left and right of frame, so they don't really fill the frame. It's only the height that is correct on our full frame 17:9 sensors. The need to crop the sides so extensively means that with a 4K FF sensor your horizontal resolution typically ends up well under 3K. The FX9 has a clear benefit here due to its 6K wide sensor. A full frame anamorphic will fill the frame of a full frame 17:9 sensor without any vignette, but of course the sensor aspect ratio isn't ideal, and really you need an even taller sensor, such as the VENICE sensor, if you want to get the best from full frame anamorphics, unless you like extremely narrow aspect ratios. "Super 35 anamorphic" is a bit of a misnomer, as most PL mount non full frame 2x anamorphics weren't designed for 3-perf Super 35 film but for 4-perf Academy. I'm never really sure what to call them, but it isn't Super 35 anamorphic unless you mean the 1.35x lenses that were designed for 3-perf. Perhaps just 35mm anamorphic? The FX9's 2x anamorphic mode is designed specifically for 35mm anamorphic lenses and includes the necessary side crop to remove the left/right vignette that you get due to the extra sensor width, providing a corrected 2.39 image without vignette in the viewfinder. see:
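The "well under 3K" figure follows directly from the geometry: for a 2x squeeze delivering 2.39:1, the usable width is the sensor height times the target ratio divided by the squeeze. A quick check, assuming a 4096 x 2160 17:9 sensor:

```python
# Horizontal pixels actually used by a 2x anamorphic on a 17:9 sensor
# once the vignetting sides are cropped to a 2.39:1 delivery.

def cropped_width(sensor_h_px: int, squeeze: float, target_ar: float) -> int:
    """Width in pixels needed so that width * squeeze / height = target_ar."""
    return round(sensor_h_px * target_ar / squeeze)

print(cropped_width(2160, 2.0, 2.39))  # 2581 px - well under 3K, as noted
```

This is why the FX9's 6K-wide sensor helps: the same calculation on a taller/wider photosite grid leaves far more horizontal resolution after the side crop.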
  19. The Aurora forms in an oval around the poles, a bit like a donut around the top and bottom of the planet. So, for most people that live south of the arctic circle the Aurora will generally appear to be to the north and the further south you are the lower on the horizon it will be. When you are on or north of the arctic circle (66 degrees to 75 degrees north) you will be under the Aurora oval so it will often fill the sky from horizon to horizon and in every direction. When you get it directly overhead it always feels like you could reach out and somehow touch it. I find it a very magical experience. It's hard to really show how expansive it can be as very wide lenses don't tend to be very fast. I dream of one day having an ultra wide f1.4 lens.
  20. The brightness of the Aurora varies immensely. One moment it can be dim, low contrast and barely visible, and 30 seconds later it can be bright enough to cast shadows on the ground with great contrast. Additionally, the location I go to tends to have extremely dark skies, as it is well away from any city lights or other light pollution. Plus, when it is very cold (and it normally is -20c or colder) the air becomes very clear, so contrast increases, and this is a big, big help. There is no one magic setting for every Aurora. Generally, most of the Aurora footage was shot with the FX3 and the 24mm f1.4 GM, as having a second base ISO of 12,800 in S-Log3 is highly beneficial, and contrary to what I would normally do, adding a bit of gain in camera by going to 25,600 using the flexible ISO mode proved useful. As the Aurora doesn't move very, very fast you can get away with a 1/12th or 1/15th shutter. When the Aurora is dim it also tends to be moving much more slowly. So, rather than increasing the ISO still further or adding ever more post production gain, I will use S&Q and lower the frame rate, perhaps using 8 frames per second and a 1/8th shutter, and then return this to normal speed in post with a bit of subtle frame blending. When shooting the Aurora I am constantly tuning the camera settings to the way the Aurora is behaving and how bright it is. The tips of my fingers suffer every year from constantly touching the extremely cold camera controls. It's like repeatedly touching a red hot surface. In post production I do add noise reduction, as without it the images would be noisy, and there will be some grading to provide the best looking image. In the past, when I shot mostly time-lapse with longer exposures, I tended to go for a more vivid look, but recently I have dialled things back a lot, as I wish to give a more true to life representation of how the Aurora actually looks when you see it in person. The final factor is time outside.
As the aurora comes and goes, often quite quickly, you have to spend the time outside with the cameras setup and ready to go in order to not miss the brightest flare ups. These are often short lived and it is all too easy to miss them if you stay inside and just pop out occasionally. So, that means long periods standing and waiting in the cold. In the early part of January, up in Northern Norway it was dark enough to see the Aurora between 3pm and 9am and each night I would be outside for around 11 to 12 hours at a time, but possibly only shooting for a couple of hours during that period. Then do that for 2 or 3 weeks at a location with very clear skies and you have a chance of getting some great footage.
  21. I’ve just returned from the arctic cabins that I use for my Northern Lights Aurora tours, following a great trip where the group got to see the Aurora on 3 nights. In this video there is footage from two nights, the 13th and 14th of January. I have another trip to the cabins that starts tomorrow, so I hope to get more footage, but thought I would share some video from the first trip and some details of how I shot it. I recommend watching the video direct on YouTube and on a nice big screen in 4K if you can. To see the Aurora you need to travel up to the Arctic circle and find somewhere with clear skies. Generally you need to go in the winter, when the nights are long and dark, to maximise your chances of seeing the Northern Lights. So - that means shooting in some very cold conditions. On this trip it got down to -30c (-22f), but the cameras performed well despite the cold. I will go into the equipment in more depth in another post after the second trip. Most of this video is real time video, not the time-lapse that is so often used to shoot the Aurora. The Sony FX3 (like the A7S3) is sensitive enough to video a bright Aurora with a fast lens without needing to use time lapse. On the FX3 I used a Sony 24mm f1.4 GM lens; this is a great lens for astro photography as stars are very sharp even in the corners of the frame. The Aurora isn’t something that is ever dazzlingly bright, so you do normally need to use a long shutter opening. So, often I am shooting with a 1/15th or 1/12th shutter. On the FX3 I used the CineEI mode at 12,800 ISO and also the S-Log3 flexible ISO mode to shoot at 25,600 ISO. This isn’t something I would normally do – add gain while shooting S-Log3 – but in this particular case it works well, as the Aurora will never exceed the dynamic range of the camera, though the footage does need extensive noise reduction in post production (I use the NR tools built into DaVinci Resolve). I also shot time lapse with my FX30 using a DJI RS2 gimbal.
On the FX30 I had a Sigma 20mm f1.4 with a Metabones Speed Booster. But I have to admit that the stars from the Sigma lens are not as well rounded as from either my Sony 20mm or 24mm lenses, so perhaps on this second trip I will use the Sony 20mm f1.8 instead. With the FX30 I shot using S&Q motion at 8 frames per second; this gives only a slight speed-up and more natural motion than time-lapse shot at longer intervals. By shooting at 8 frames per second I can use a 1/4 second shutter, and this combined with the FX30’s high base ISO of 2500 (for S-Log3) produces a good result even with quite dim Auroras. By shooting with S-Log3 you can still grade the footage, and this is a quick way to get a time-lapse sequence without having to process thousands of still frames. It also needs only a fraction of the storage space. While shooting traditional time-lapse with still frames does allow you to shoot raw stills, the difference in image quality isn't actually all that great. With some cameras you might be able to shoot at higher resolutions, which may be beneficial, but then you can't shoot real-time video. I'm really pleased with what I am getting, and when I compare what I can get today with cameras like the FX3 (A7S3) against what I was getting 5 years ago, today's material is vastly superior.
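To see why 8fps S&Q motion looks so much more natural than interval time-lapse, it helps to compare the speed-up factors. A minimal sketch of the arithmetic (assuming footage is conformed to a 25p timeline; the function name and the 4-second interval comparison are just for illustration, not from the post):

```python
def sq_speed_up(capture_fps: float, project_fps: float) -> float:
    """How much faster the footage plays when conformed to the project frame rate."""
    return project_fps / capture_fps

# 8 fps S&Q capture conformed to a 25p timeline: only ~3x faster than real time,
# so the motion remains reasonably natural.
print(sq_speed_up(8, 25))        # 3.125

# By comparison, interval time-lapse with a frame every 4 seconds played at 25p
# is a 100x speed-up, which is why movement looks much more staccato.
print(sq_speed_up(1 / 4, 25))    # 100.0
```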
  22. But Zebra 2 on an FX9 or FX6 will show on anything at or above the point set. So in a scene with a bright sky, not only will there be zebras over your white target, there will also be zebras over the sky and over shiny surfaces all over the place. This can be very confusing and can hide issues in the highlight areas. And why 72%? I'm assuming this is for S-Log3, as it would be too dark for most 709 type gammas. If you have S-Log3 white at 72% you are deliberately over-exposing by 1.4 stops, and with the recent cameras there really isn't any need to do this for every shot; it reduces the highlight range unnecessarily. There may be some shots where this is beneficial, but the whole point of CineEI is that you can alter your offsets on a scene-by-scene basis depending on what it is you are shooting. The brightness of clouds varies immensely; more often than not they are not acting as a reflective surface but are part of your primary light source, so they will more often than not be brighter than a white card or other reflective surface illuminated by the light coming through those clouds. You should never rely on clouds as an exposure reference for the mid range. Most white cars are also a lot more reflective than 90%, unless they have a matte paint job or are very dirty. White is very useful, especially if you avoid anything treated with brighteners or anything with a specular reflectivity component such as nylon and man-made fabrics. But you have to be very aware of exactly how bright the white target is when measuring conventional gammas and LUTs, as the gamma roll-off almost always starts at the equivalent of 90% reflectivity, sometimes just below. So, depending on exactly where the knee or roll-off starts, a white card exposed at 87% might be fine, but at 93% it will be well into the knee or roll-off and may be way too bright; it could be as much as a stop over with a strong roll-off or knee. 95IRE with 709(800) is almost 1 stop over white. 
And this can be a big issue when you have grabbed a piece of paper or fabric of unknown reflectivity that is most likely treated with brighteners, or a shiny car that may be anywhere from 90% to 98% reflectivity. Where do you expose this when it will be well into the knee or roll-off when exposed correctly? Middle grey doesn't have this issue. Light meters are calibrated for middle grey, and the average brightness of most scenes will be the equivalent of middle grey. For these reasons middle grey is preferred by most cinematographers: it is extremely consistent, not affected by roll-offs, and sits in the middle of most scenes. Grey cards are not expensive these days.
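One way to see why middle grey is the more consistent reference: the spread of real-world "white" reflectivities translates directly into an exposure uncertainty that lands right where the knee or roll-off starts. A small sketch using the standard log2 exposure relationship (the function name is mine; the 90-98% figures are from the post):

```python
import math

def stops_above_middle_grey(reflectivity: float) -> float:
    """Exposure difference, in stops, between a surface and an 18% middle-grey card."""
    return math.log2(reflectivity / 0.18)

# A proper 90% white card sits a known ~2.3 stops above middle grey...
print(round(stops_above_middle_grey(0.90), 2))   # 2.32

# ...but a grabbed 'white' surface of unknown reflectivity sits somewhere in a
# range, and that uncertainty falls exactly in the knee/roll-off region, where
# recorded levels no longer track exposure linearly.
for r in (0.90, 0.98):
    print(round(stops_above_middle_grey(r), 2))
```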
  23. Zebras are a very useful way to measure exposure levels, whether that's skin tone levels or the brightness of a middle grey or white card. But if you are going to use Zebras, it's really important that you know exactly what it is that they are measuring, because different gamma curves or different LUTs will have different brightness levels. Broadly speaking, Zebras always measure what you see in the viewfinder regardless of any other settings, but there are some subtle differences when shooting in the CineEI mode, as below. FX3/FX30: On the FX3 and FX30 the Zebras always measure the image that you see on the LCD screen. So, if you are in the CineEI mode and you have a LUT on, the zebras will be measuring the level of the LUT. If using the default s709 LUT then skin tones will be around 60%, a white card 77-78% and middle grey 45%. If you are not using a LUT and are viewing the S-Log3, then the zebras measure the S-Log3 level, in which case at the base exposure skin tones will be around 50%, a white card 61% and middle grey 41% (for each stop brighter that you wish to expose S-Log3, add 8.5). FX6/FX9: On the FX6 and FX9 in the CineEI mode the zebras always measure the image seen in the viewfinder, regardless of what is set or being measured elsewhere. So if there is a LUT on in the viewfinder they measure the LUT, giving the same s709 levels as above; if you are not using a LUT and are viewing the S-Log3, the zebras measure the S-Log3 level, again with the same values as above. On the FX6 and FX9 it is possible to have a LUT on for the VF (the zebras will then be measuring the LUT) while at the same time outputting without a LUT on the SDI/HDMI. 
As the camera's waveform measures the HDMI/SDI output, in this case the Zebras will be measuring the viewfinder LUT image while the waveform will be measuring the S-Log3 signal on the HDMI/SDI. This can be a little confusing, as the levels in the VF can be different from the levels on the waveform depending on what is set for each. What the waveform is measuring is indicated just above the waveform display. In short: the waveform measures the signal on the SDI/HDMI, while the Zebras measure the viewfinder image, so whether they show the S-Log3 or the LUT is determined by the viewfinder LUT setting.
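The reference levels above are easy to keep to hand as a small lookup. A sketch using the figures quoted in the post (the dictionary and function names are mine; the "add 8.5 per stop" figure is the post's own rule of thumb for S-Log3, and the 77-78% s709 white card figure is taken here as 77.5):

```python
# Reference levels from the post, all in % as read off zebras/waveform.
SLOG3_BASE = {"middle_grey": 41.0, "skin_tone": 50.0, "white_card": 61.0}
S709_LUT   = {"middle_grey": 45.0, "skin_tone": 60.0, "white_card": 77.5}

def slog3_target(reference: str, stops_over_base: float = 0.0) -> float:
    """Level to look for on the S-Log3 signal, using the post's
    'add 8.5 per stop brighter than base' rule of thumb."""
    return SLOG3_BASE[reference] + 8.5 * stops_over_base

print(slog3_target("white_card"))        # 61.0 at base exposure
print(slog3_target("white_card", 1.0))   # 69.5 when exposing one stop brighter
```

Remember this only applies when the zebras are measuring the S-Log3 signal; with a LUT on in the viewfinder, use the LUT's own levels instead.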
  24. There is often a lot of discussion around Cine Lenses. I think Cine Lenses are often considered aspirational, or the assumption is made that they must be better than photo lenses. But is this really always the case? We know that Sony's G and GM lenses generally produce really nice images, so why choose Cine Lenses over these? Will a Cine Lens instantly improve the way your footage looks? Will a cine lens make your footage more film-like? There are a lot of differences between different lenses: colour, contrast, bokeh, sharpness, flare, halation, distortions, breathing etc. Even in a workflow where you grade, colour remains important, as a more highly coloured lens will tend to have different flare characteristics, depending on the colours in the scene, compared to a more clinical lens. In practice there is very little that is fundamentally different about the optical designs of prime lenses for photo or cine, other than perhaps the focus mechanism or, in some lenses, the addition of floating elements for stabilisation. Many lower cost Cine primes are simply adaptations of readily available photo lenses or use the very same optical formula as common photo lenses. Cine zooms are almost always truly parfocal and will include some means of adjusting the back focus, whether with a backfocus adjustment or via shims; stills zooms are rarely truly parfocal. Most Cine zooms will have a constant aperture throughout the zoom range; this isn't always true of photo lenses. Cine lenses will typically have an entirely mechanical, long focus throw with accurate witness marks. Stills lenses often have a very short focus throw and may not have any witness marks at all, especially modern lenses where good AF is seen as the primary factor and a complex mechanical focus system would slow the AF down. When you buy a set of cine lenses you can expect them to have matching optical performance; they should not look different from each other. 
That's important when using prime lenses if switching between different focal lengths within the same scene. But I doubt many viewers can look at a well executed shot and categorically tell whether the lens used was a photo lens or a cine lens. So much about lens choice is a personal thing: what are you looking for, and what is important to you? Different lenses suit different jobs. For run-and-gun and fast, dynamic shoots there is a lot to be said for lightweight photo lenses with great autofocus. For drama you might want larger lenses with big focus rings and pitch gears for a follow focus system. Some projects will be better suited to sharp, colour-free lenses that provide a clean and clinical look; another project might benefit from a lens that imparts a warm, smoother look. It really isn't as simple as "cine lenses are better than photo lenses", as there are both excellent and rubbish examples of both. The capture process starts with the lens, so everything about your final image is determined by that lens, and it's very hard to remove unwanted optical defects later on. So it's not really a case of choosing cine lenses or photo lenses, but rather choosing the lens that best fits the jobs that you do.
  25. Just a heads up that I will be heading off to Northern Norway on January 11 for my annual trip to shoot the Northern Lights. This year I will be taking my FX3 and FX30 along with my Xperia Pro phone, and as the internet connection where we go is better than it used to be, I will be trying to live stream the Northern Lights straight from the camera when I can. As both the Aurora and the weather are highly unpredictable, I don't know exactly when I will be streaming live, but I will post where you can find the streams a bit closer to the time. The sun is currently very active with many sunspots, so if I am lucky I may catch some really good Aurora activity. The FX3 is a great camera for filming the Northern Lights; the second, higher base ISO gives you the sensitivity you need, as the Aurora is quite faint. I tend to use a wide, fast lens. My favourite is the Sony 24mm f1.4 GM, a beautiful lens for anyone who wants to do wide star photography as it has very low coma. But I will also be using the 20mm f1.8 G and perhaps some longer focal lengths. My Xperia Pro phone has an HDMI input, and updates to the included Monitor app allow you to stream directly from the app to any RTMP server, including YouTube, at up to 4K. I don't think I will have enough bandwidth for 4K as we go to a very, very remote location. But the combination of the improved app in the phone, HDMI in to the phone from the FX3 and the better local cell phone coverage should mean that for the first time I will be able to stream high quality live Aurora footage. I will post the links here closer to the time.