When those fatal shots rang out at 12.30pm on 22nd November 1963 in Dealey Plaza, a public assassination was captured on film for the first time by Abraham Zapruder. President John Fitzgerald Kennedy, a symbol of hope for the 1960s, was gunned down by lone assassin Lee Harvey Oswald, or so it appeared. However, the Zapruder film’s lack of definition and sound meant it provided an incomplete picture of what happened. Coupled with some dubious and questionable actions by authorities in the immediate aftermath of the shooting, an endless debate has arisen about what actually happened that day, and whether people other than Oswald were involved. We may never know what really happened, but that won’t stop us from speculating, even 50 years later.
What’s fascinating about the JFK assassination, at least in terms of our humble show FiST Chat, is the way it intersects our core topics of film, science and technology. From countless films and documentaries, to forensic scientific analyses and retrospective studies, to new technologies in cold case investigations, the JFK assassination has it all in terms of film, science and technology.
There have been many films and documentaries that cover the topic, with my standout favourite being Oliver Stone’s 1991 film JFK. Just about everybody but Oswald was blamed for the assassination in that film! Many documentaries have been made, including one intriguing doco I saw recently that proposed the theory that a rookie Secret Service agent in the follow-up car accidentally shot Kennedy in the head as he was swinging his gun around to shoot at Oswald. It made a very convincing case. Although they are all fascinating on some level, many take a particular point of view, and in almost all cases, there’s another piece of evidence that can potentially make their point of view untenable.
The forensic scientific analyses are fascinating. The “magic bullet” theory seems absolutely ridiculous from a scientific and physics point of view, yet the official investigation into the assassination, the Warren Commission, endorsed it as the truth. Recent analyses have suggested that the “magic bullet” was actually a straight shot, and that the Warren Commission chose to endorse a zig-zagging bullet rather than accept that they had the positions of Kennedy and Governor Connally in the car incorrect. Utilising today’s technologies has also helped shed more light on the assassination in ways that weren’t possible at the time, such as conducting virtual autopsies of JFK, running 3D simulations of the crime scene and measuring bullet sounds and trajectories in far superior controlled environments.
Oliver Stone was right in that the story behind the JFK assassination simply won’t go away. In most cases, the simplest explanation tends to be the right one, but somehow the notion that Oswald did it all himself doesn’t sit right in terms of common sense. Couple that with the fact that he was gunned down two days later by Jack Ruby before he could tell his side of the story, and it’s hard for any sane person not to ask questions about what was happening. We may never know what happened, but we will keep asking the questions. The records held by the CIA and other US intelligence authorities are due to be released in the near future. Will they shed new light on the assassination? Probably not, but we will always have those using film, science and technology to try and solve the puzzle for us.
Watch FiST Chat 148: JFK Assassination 50th Anniversary for more on this topic.
I was fascinated by the tech news bite Steve and I covered this week, in which Volvo is developing wireless charging technology for its electric cars. Similar to the way a wireless charging pad works for a mouse, you simply park your funky little electric car in a car space and it instantly begins charging, without you having to plug it in anywhere. You almost don’t have to think about it (until you get the bill, of course). Thinking this through, it becomes clear that public car parks will be able to charge not only by the hour for parking but also for the amount of charge you consume while parked there. They could be the energy tycoons of the 21st century!
Cost aside, the idea that you can “park and charge” sounds like a very efficient way of doing things, given you are knocking off two tasks at once. It also relegates refuelling your car to a simple button-push exercise as opposed to filling it up with 100 litres of fuel (or thereabouts). It’s also environmentally friendly in that it provides a simple and convenient charging infrastructure for a no-emission vehicle, reducing the use of fossil fuels in the process. Ideally, the electricity supplied by the car park would come from a non-fossil fuel source, but even if it isn’t, there is still an overall reduction in usage. The environment will be much better off.
Can you imagine driving your electric car into a car park, reading the parking fees and then checking the current rate to fill up your car per kilowatt-hour? Would you be comfortable paying $100-$200 at the vending machine after you park your car to account for parking and charging fees? I can certainly see all of this happening; it just depends on how well it is all implemented. It also depends on operators and any tendency they might have to gouge users. Or it could go one step further with a “Minority Report”-style future where we don’t drive at all and our electric cars are guided by onboard artificial intelligence between parking spaces.
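To make that “parking plus charging” bill concrete, here’s a minimal sketch of how such a fee might be tallied. The rates and the `park_and_charge_bill` helper are purely hypothetical assumptions for illustration; they don’t reflect Volvo’s technology or any real car park’s pricing:

```python
# Hypothetical "park and charge" bill: hourly parking plus energy consumed.
# The default rates below are assumed figures, not real pricing.
def park_and_charge_bill(hours, kwh_consumed, parking_rate=8.0, kwh_rate=0.50):
    """parking_rate in $/hour, kwh_rate in $/kWh (both assumptions)."""
    return hours * parking_rate + kwh_consumed * kwh_rate

# e.g. an 8-hour workday while topping up a large battery
bill = park_and_charge_bill(hours=8, kwh_consumed=60)
print(f"${bill:.2f}")  # $94.00
```

Even with these modest assumed rates, a full workday of parking and charging quickly approaches the three-figure territory mentioned above.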
We’re a long way off this technology being implemented in any mainstream sense. Electric cars are being trialled and some are available for purchase, but they are in the minority. The problem is that petrol-powered cars can travel hundreds and hundreds of kilometres on every tank, whereas current electric vehicles don’t even come close. There’s also the question of power; you can throttle up a petrol-powered car and gun it for all it’s worth, whereas an electric car is quiet and doesn’t have that feeling of raw power flowing through its engine. Electric cars will become a reality eventually, and it will all come down to how well their manufacturers can make the experience of driving them just as good as driving petrol-powered cars. And if innovations such as wireless charging in car parks get implemented, we’ll have more incentive to use these new modes of transport.
Watch FiST Chat 147: Discovering The Causes Of Bushfires for more on this topic.
It seems that October every year has become the month all the tech giants have settled upon as the release month for their new, shiny mobile devices. It’s an ideal month to release new generations of hardware. It gives the general public two months to evaluate and purchase their mobile device of choice just in time for the craziness that is the Christmas buying season. When Tim Cook recently mentioned that it was looking like an “iPad Christmas”, he wasn’t kidding. Apple stands to rake in billions over Christmas this year, and the other tech giants will likely match it, or not fall far behind. Tablets and other mobile devices are the go-to choice for many as a Christmas present.
There are also very competitive reasons why the tech giants settled on October, and it’s more a case of each company trying to one-up the other with their respective tech announcements. It’s quite ridiculous from a consumer point of view to have new devices from all the major tech companies being released so close together, and sometimes within days of each other. However, from a chest-beating, “my device is better than yours” tech giant point of view, this is all about trying to steal the media spotlight. No one does this better than Apple, despite the fact that the marketing shine has come off its keynotes in recent years since Steve Jobs passed away. Apple’s announcements tend to drown out everyone else’s. That’s not to say that the likes of Dell, Nokia, Samsung, Google and Microsoft won’t give it a good try. Have a look at what they all announced or released in October!
For me, there is an inherent contradiction between the way tablets in particular are marketed and sold compared to the way these tech giants want the public to use them. On the one hand, the tech giants want us to upgrade to a new generation of hardware every year or two, just like we do with our smartphones. On the other hand, they want us to think of these devices as PC replacements which in theory should last many years. Given the costs involved, it’s a bit rich for these companies to expect us to spend this money so frequently, especially when the device is supposed to do just about everything a PC can (or at least that’s what they tell us). A tablet isn’t a smartphone, yet we are being encouraged to treat them as such and upgrade every year, or second year. That’s all about marketing and the bottom line, and my blog post last week covered why it is important to do a cost benefit analysis on these devices before you spend the money.
The tech giants are generating big profits from the tablet market, and no one can fault them for that. And even if a tablet goes out of date, it will still be incredibly useful for many years past its software-expiration date. And let’s not forget that the tech giants are really after new customers, not just placating the existing customers. From that point of view at least, new generations of hardware every year makes sense.
I’m somewhat over the supposed excitement generated by new mobile device releases. We’ve now reached the point where they are all capable and powerful, and can last a long time (i.e. up to five years). Tech giants do need to keep iterating their hardware, but you don’t necessarily have to keep buying it. After all, there are many people who have never bought a tablet before, and that’s where the tech giants have set their sights. Anyone who has bought one of their products already is considered a “safe bet” as they’ve bought in, and are on their way to being locked into their ecosystem. It will be interesting to see, however, when we truly reach the point of market saturation where people stop buying these devices so often. What will the tech giants do then? Invent something new for us to buy, of course!
Watch FiST Chat 146: All New Tablets In October for more on this topic.
Working with computers has become a staple of modern life. Over the past two decades, they have become more and more pervasive, to the point where we now carry them around in our pockets in the form of smartphones. There’s no getting around them, but at the same time, they offer incredible benefits in terms of productivity and efficiency. Used correctly, computers and mobile devices can help us achieve more out of every day than we otherwise would.
However, with the advent of mobile devices in particular, the big tech giants in Silicon Valley have relentlessly pursued faster update cycles. Tablets and smartphones seem to be on 1-2 year lifecycles these days. This may not be such a bad thing if they weren’t so expensive, particularly in the case of the iPhone and iPad, with the latter approaching, and in some cases exceeding, the cost of a decent Windows laptop.
The tech giants may want us to keep upgrading each time they release a device, but does that mean you have to? Does a 1-2 year old device become obsolete when the next generation is unveiled? It comes down to doing a cost-benefit analysis on the devices you want to use and what you think is good value. And at the end of the day, a lot of these devices are still very useful beyond the point at which they can no longer accept software and operating system updates.
Before looking at mobile devices, I thought I might talk about my iMac as a comparison. I bought it back in 2008 for around $2,500. This was more expensive than the Windows computers I had owned in the past, but I figured that given the durability and reliability of Macs, that higher upfront cost might prove more cost effective over the long term. And that hunch has turned out to be right. The iMac is still working as well as the day I bought it over five years later. So far it’s cost me less than $500 a year to have, and I don’t envisage replacing it for several more years. If I’m lucky enough to get 10 years out of this machine, it will have only cost me around $250 a year. Of course, it’s more than that when you factor in software upgrades, but then you would do that with a Windows machine as well. Even at $500 a year, this has been great value to me. I may have been able to get an equivalent Windows desktop machine in 2008 for $1,500, but given my two-decade-long experience with Windows machines up until that point, it would have only lasted me around two years before I would have had to spend more money on serious upgrades, or even repairs. It also would have offered a less cohesive and responsive experience, especially after the first year of use. As for the iMac, I can run Windows on it as well, which means I have two computers in one. It was a bargain all round, especially considering the iMac has now gone through four big operating system upgrades without any hiccups.
My MacBook Pro also serves as a good example of how this type of cost-benefit analysis can work. I bought this machine in 2011 for around $1,800. Now, that’s a lot of money for a laptop, especially when an equivalent Windows laptop would cost around $1,000-$1,200. However, my experience with Windows laptops has been that I could only get two years at most out of one before it broke down in some serious way (my last HP laptop, which cost $1,300, got just over two years before the screen unexpectedly died). It’s still early days, but the MacBook Pro is purring along nicely more than two years later, and I figure that even if I only get five years out of it, it will cost me just $360 a year to have. And I can run Windows on it, which means I have two computers in one, with a superior hardware experience.
So what’s my point here? I figure if a computer costs me somewhere around $300-$500 a year to have, it’s a good deal. In the end I’ve switched to Macs because they provide an all round better experience for the same or similar cost to a Windows PC. That’s not to say I don’t like the Windows environment; I actually still use it every day on my Mac and find it crucial for several key tasks, but the hardware from the PC world just doesn’t compare.
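That rule of thumb can be sketched as a simple annualised-cost calculation. This is just a minimal illustration using the prices and estimated lifespans quoted in this post, not market data:

```python
# Annualised cost of ownership: purchase price spread over expected years of use.
def cost_per_year(price, years):
    return price / years

# Figures quoted in this post (AUD), with the lifespans estimated above.
imac = cost_per_year(2500, 10)       # iMac, if it lasts 10 years
macbook = cost_per_year(1800, 5)     # MacBook Pro, if it lasts 5 years
windows_pc = cost_per_year(1500, 2)  # Windows desktop replaced every ~2 years

print(f"iMac: ${imac:.0f}/year, MacBook Pro: ${macbook:.0f}/year, "
      f"Windows desktop: ${windows_pc:.0f}/year")
```

The longer a machine stays useful, the more a higher upfront price can undercut a cheaper machine on a shorter replacement cycle, which is the whole point of the comparison.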
Let’s move on to mobile devices. For me, smartphones are the critical mobile device in the line-up. At the end of the day, most of us need a phone, and if you combine that with a powerful mobile computer, you have an important device you can carry around at all times that helps you achieve many important tasks. Given how much they get used, you can forgive the tech giants (maybe) for pushing us into a two-year lifecycle with these devices. However, since I can give an old smartphone to someone else (usually a family member) to use for a few years, I know I can get at least four years out of these devices. I’ve done this with my iPhone 4; after my two-year contract was up, I passed it on, and when my iPhone 5 contract is up in two years’ time, I’ll hand that on (and so on). I still believe I’ll be able to use these old devices even after that. The iPhone 4 could serve as a decent iPod and Apple TV remote once it outlives its usefulness as a phone. In this respect, I don’t have a problem with the lifecycle of this product. And since the cost of the phone is subsidised by the carrier, I figure it will only cost me about $100-$200 over a 5-6 year period (excluding the carrier contract fee, which you would pay anyway); not too bad, I think.
Tablets, and iPads in particular, are another story. Although tablets can be useful, you don’t really need one, at least not yet. Steve and I discussed in a previous episode, 143: Reflections On Podcasting, that although we are on the cusp of a Post-PC world, we are still not at the stage where tablets will completely replace PCs. Until they do, they are a companion device. And I know with my iPad, that’s exactly how it functions for me. From a productivity standpoint, it enhances my workflow, but it doesn’t replace my PC. Of course, it’s an excellent consumption device and excels in areas where the small smartphone screen can’t compare, particularly reading, and watching and sharing photos and videos.
Now, my third generation iPad cost me around $800 for the 64GB model in early 2012. Using the smartphone cost-benefit analysis, I figured I should be able to use this device for around 5-6 years before getting a new version. However, Apple would have us update this thing every 12-24 months, especially in the case of software. The hardware may last longer, but operating system upgrades tend to cripple or severely slow down these devices after 2-3 years (just ask anyone with an iPhone 4 or iPad 2 who upgraded to iOS 7).
I don’t envisage my third generation iPad making it to iOS 8 (at least not smoothly), and perhaps it doesn’t need to. As long as it can perform all the tasks I need it to do, it will still be useful to me long after it can’t accept operating system upgrades. I guess I could do the hand-me-down option that I do with my iPhones, but given it’s still not a critical, must-have-the-latest-generation device yet, I don’t see the point, especially when I know I could probably just use it myself for several more years. And maybe it’s just the principle of the thing; I didn’t buy an iPad so it could last for two years. I bought it to last for at least five.
I haven’t spoken about Windows or Android tablets because I don’t own any, nor have I used them extensively. However, the same theory should still apply. Decent tablets running these systems are expensive. Microsoft Surface devices aim to be a tablet/PC hybrid, meaning they cost more, while Samsung’s Galaxy tablets are still expensive, if not as expensive as Apple’s iPads. I’m naming these in particular because they are comparable to the iPad in terms of functionality and performance. There’s a lot to admire in all of these devices, but at the end of the day, you have to ask yourself how much money you are willing to spend on them and how much value you get out of them in day-to-day use. And make sure you don’t drink the Kool-Aid being offered by the tech giants that you have to keep upgrading the hardware every year or two. As a sidebar, cheaper and task-specific tablets like the Amazon Kindle Fire aren’t directly comparable here given they don’t offer the same experience, although the Kindle Fire looks intriguing for content consumption.
Computers and mobile devices are fantastic. They are useful and fun, enhancing many everyday tasks and allowing us to achieve more out of every day when used properly. Just make sure though that you do a cost-benefit analysis when purchasing them. You may find you can get a lot more use out of your computers and mobile devices for a longer period than you might realise.
Watch FiST Chat 145: Lifecycles of Post-PC Devices for more on this topic.
A year after the controversial release of Windows 8, Microsoft has finally released an update in Windows 8.1 that addresses many, if not all, of the issues raised by users of the initial release. The start button is back, although not all the way, as there is still no start menu. Transitions between the mobile and desktop interfaces are much smoother and better integrated. Overall responsiveness is better, and adjusting PC settings and the user interface is much simpler to manage. You can even boot straight to the desktop now and never see the start screen again, unless you left-click on the start button by accident. The changes included in this update should have been there from the beginning, and one can’t help but wonder how much better Windows 8 would have been received had many of the updates included in 8.1 been part of that first release.
I’ve been using Windows since the early 90s, beginning with Windows 3.1. Back then, you didn’t have to run Windows to run a PC, and I only used it to access certain Microsoft applications like Word and Excel. I remember my uncle, a keen tech enthusiast, describing it at the time as an elaborate menu system that you didn’t necessarily need. I ran it as a parallel system, alongside DOS and some jury-rigged visual user interface software my uncle provided that I could program myself. Those were the days of freedom, when you could truly configure your computer any way you wanted.
Fast forward to today and I really can’t be bothered doing any of that any more. It might be that I don’t have the time, or it might have something to do with the fact that the tech giants have been slowly taking away the ability to tinker with their increasingly locked-down systems, with the exception of Google’s offerings. Apple, of course, is the poster child for the locked-down system. I don’t think that’s necessarily a bad thing. After all, I want to spend my time these days being productive, not tinkering with my systems on a constant basis.
So where do Microsoft and Windows fit into all of this? Well, they have certainly progressed their flagship operating system since Windows 3.1. They made the graphical user interface mainstream with Windows 95, and every release up until, and including, Windows 7 has been about refining and improving that basic concept. Windows 7 has been a highly successful operating system with both the general public and business. Windows XP and 7 together represent the zenith of Windows as a desktop operating system. Windows 7, however, represents the last Windows desktop operating system. Yes, we still have the desktop, but it’s now become an app in a mobile OS called Windows 8.
Microsoft clearly has a problem in its marketing strategy for Windows 8, beginning with its name. Windows 8 sounds like a continuation of previous versions of Windows when it actually isn’t. Yes, it has a desktop environment, but it’s an app within the newly designed modern user interface which closely resembles the Windows Phone operating system of tiles and full screen apps. And the only reason the desktop is still there is because the users demand it. You could easily see that at some point down the road, Microsoft may want to dispense with the desktop altogether; but that’s a story for another time.
On the other hand, you can see why they called it “Windows 8”; they needed all of their customers to recognise it as the next upgrade to Windows, and thus continue to pay and use the software. The problem was that it was so jarring in its changes that the new OS effectively alienated the customer base. Windows 8 is a decent operating system, it just copped bad press because Microsoft didn’t explain its strategy behind the changes. It was a case of Microsoft dumping the changes on the public without enough warning, and the rest wrote itself: “Where’s the start menu?”, “Why don’t I see the desktop straight away?”, “Why is the command for shutting down the computer not visually obvious?”… and so on.
Make no mistake; Microsoft needed to do this because Apple and Google have been busy redefining the computing experience out from under Microsoft to a more mobile environment, which classic Windows doesn’t work well in. In many ways, Microsoft is to be commended for turning around their entire OS strategy in a few years given the inertia that the traditional Windows business generated. It’s a shame they didn’t have a marketing approach to match it.
Having taken on board all those complaints, Microsoft has released Windows 8.1 to the public, and the update finds a happy medium between satisfying customer needs and maintaining Microsoft’s new mobile OS strategy. Having used it for a week now, I’ve found it an excellent update, and the OS feels much more cohesive and responsive than before. There are no standout, jaw-dropping features; just a series of minor tweaks that have made Windows 8 a lot more user-friendly. To illustrate the point, you can now make your desktop wallpaper match the start screen wallpaper. As a result, you feel like you’re working in one user interface rather than two. Simple, right? Why didn’t they think of that before? I guess we’ll never know. And of course, users will love the fact that they can now boot straight to the desktop. On desktop PCs, having this option is a must. As good as the tiled modern user interface is, it’s no good on a traditional PC.
I’ve kept pace with Windows upgrades over the past two decades, and I’m glad that Windows 8.1 has kept the venerable OS moving forward. I enjoy using it, especially as a counterpoint to OS X (which I also enjoy using), and Windows 8.1 still has a great desktop UI. However, I’m not entirely sure how many more times I will be upgrading Windows, especially if Microsoft decides to ditch the desktop altogether at some point down the road. Or maybe they’ll get their message right and their act together to create a mobile version of Windows that is just as successful as traditional desktop Windows. If they can do that, then users may just jump on board for the ride.
Watch FiST Chat 144: Groundbreaking Sci-Fi With Gravity for more on this topic.
New GoPro cameras were unleashed on filmmakers and enthusiastic sports lovers last week, offering a smaller overall design and improved battery life. The GoPro camera concept has been a boon for filmmakers and videographers thanks to the exceptional video quality these cameras can produce, the relative ease with which they can be used and their far greater durability compared to traditional video cameras. These tiny cameras also offer the chance for unusual shots that would have been unheard of even five years ago, at least at the consumer level. Capturing the inside of a wave while on a surfboard is just one of the many advantages of having a GoPro.
Cameras like the GoPro have reinvented the notion of filmmaking and the techniques you would use to go about executing it. Sure, making a good film or video is all about the person or team making it. If you don’t know what you’re doing, it doesn’t matter what kind of camera you have. But if you do know what you’re doing, these extremely portable, lightweight cameras can open up production possibilities that you would never have thought of before.
A few months ago, my HD production camera decided to die on me. I still used it regularly and needed it to record our weekly FiST Chat podcast, so I knew I needed a top-notch HD camera to take its place. At first, I did some research into a similar replacement, but then it dawned on me: my iPhone 5 produced a higher quality 1080p image than the camera that had just died, and it was highly portable. So I ran a few tests and, sure enough, it worked beautifully. All I needed in the end was a tripod mount that could fit the iPhone and I was off to the races. Now, I couldn’t imagine recording FiST Chat any other way.
One thing invariably leads to another, and I soon found myself thinking about how else I could use my iPhone to produce more content. I was struck by the notion of being able to show up with a phone and a few actors and get a few scenes in the can. That kind of flexibility is non-existent in traditional film production.
There are trade-offs, of course. The iPhone is limited in terms of shot composition, particularly in its use of lenses. Spending time on elaborate lighting set-ups runs counter to this quick, mobile style of shooting, yet lighting is the key to getting an impressive shot. However, if you can work around some of these problems, the flexibility this mobile device offers could allow you to film in ways that you couldn’t if you had fifty people around setting up shots. And to be fair, this doesn’t just apply to the iPhone; most modern smartphones could accomplish the same goal.
I’m excited by the filmmaking possibilities that these new, ultra-mobile cameras can offer. Having spent almost a decade and a half making films in a particular way, I’m intrigued with the way these new cameras can offer new types of filmmaking experiences that are more flexible and spontaneous.
Watch FiST Chat 142: Control-Alt-Delete Was A Mistake for more on this topic.