Archive for October 2011
In daily educational humbug, we’re almost halfway through our Mobile Business Design ‘minor’ (a free-of-choice half-year course to enhance the bachelor level). And while the students took their first written exam in this context, it struck me that we talk a lot about locative or Location Based Services, but hardly at all about Augmented Reality anymore.
Sure, three of the five teams use Layar or similar tools as a technological solution for their heavyweight assignments. But nobody uses the term Augmented Reality anymore. That’s strange. So, in retrospect: what is Augmented Reality?
According to Wikipedia, Augmented Reality (AR) is “a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one”.
Portalés, Lerma and Navarro (2009) describe AR “as a relatively new technology that is based on mixing computer generated stimuli (visual, sound or haptic) and real ones, keeping a spatial relationship between synthetic and physical data and allowing user interaction in real time, as described in Azuma (1997)”. Furthermore, the authors state that AR provides:
- Seamless interaction between real and virtual environments
- The ability to enhance reality
- The presence of spatial cues for face-to-face and remote collaboration
- Support for a tangible interface metaphor for object manipulation
- The ability to transition smoothly between reality and virtuality
Now, let us see what some of our students think Augmented Reality is.
- AR can be described as changed reality. Think of Layar: an extra layer (read: reality) is added to the world, seen through a mobile phone.
- AR is the creation of an extra dimension, using a mobile phone with a screen.
- AR is the addition of image and/or user experience by means of digital shapes, sound and/or image to real surroundings.
- AR is the adaptation of reality by means of modern technology.
- AR is added reality by means of a smartphone.
Okay, fair enough. Not all definitions are of the same quality. Still, the idea is clear: by adding something to something – in this case, extra digital information of some sort to a reflection of reality, i.e. what you view on the screen of a (digital) device – we create a new reality. The question is whether this new – augmented (as in enlarged) – reality can only be created through digital devices. What about this ‘augmented’ reality, a shot taken from a train window? The window adds information which – I know it is disputable – creates a new, enlarged reality. Doesn’t it?
Portalés, C., Lerma, J.L. & Navarro, S. (2009). Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments. ISPRS Journal of Photogrammetry and Remote Sensing.
The exam was taken on 24 October 2011 in Utrecht. For obvious reasons I will not mention any names. The exam was in Dutch; I translated the writings of our ‘young colleagues’.
On Thursday, international news organizations and media outlets were largely dependent on mobile phones and social networks to gain insight into what proved to be the final moments of former Libyan strongman Moammar Gadhafi’s life.
From CNN to BBC, gruesome photos (taken via mobile phone) of an apparently lifeless Moammar Gadhafi were used to underpin reports that the former leader had, in fact, been killed in crossfire earlier that day.
According to CNN’s coverage, the mobile phone images were instrumental not only in proving Gadhafi’s death but also in further weakening the late leader’s lingering supporters.
The images will be a big blow to the morale of his supporters, who have been clinging to the hope that he would seize power again during the weeks he has been on the run.
Five years ago, images also taken from a mobile device were released of Iraq’s Saddam Hussein at his execution. Unauthorized cell phone footage showed the world Hussein’s final moments on the gallows. This incident followed other images that surfaced three years earlier of Hussein’s two sons, Uday and Qusay, who were killed in a firefight.
Shortly after the images were released publicly, former CIA Director James Woolsey told CNN: “I think it’s necessary for the world to see and particularly for the Iraqis to see that these two are, in fact, dead, that this is not some ginned-up story from the United States.”
What was true at that time in Iraq is also true today in Libya. And those who are rejoicing in the death of Libya’s former tyrannical leader are especially thankful for the mobile devices that made proof of Gadhafi’s death possible in close to real-time.
Mobile applications are playing an increasingly larger role in the way doctors and hospitals care for patients.
The hospital at the center of today’s mHealth buzz is Washington Hospital Center, the facility behind “CodeHeart,” a highly secure mobile application developed and customized in collaboration with AT&T. It provides a real-time video and audio stream that can be used in critical care situations, such as ambulances in transit.
According to an example presented in a press release issued by the medical facility, the solution can be used by hospital cardiologists to view a patient’s condition in real time while simultaneously speaking with the patient’s first responder or the attending Emergency Department (ED) physician. Importantly, the solution also gives physicians the ability to view vital signs and test results, like electrocardiograms (ECGs), captured through the real-time video feed.
Ankit Daftery, an engineering student at the Veermata Jijabai Technological Institute in Mumbai, India, decided to take it upon himself to add the ability for the blind to type on an Android device: he was convinced he could add Braille support to the mobile platform.
Today, he is starting to turn that idea into a reality. OnlyGizmos brings us this video where the enterprising student earnestly speaks about BrailleType, a simple but ingenious application that will allow blind people to type on a smartphone using the Braille alphabet much in the same way that they use it for reading.
BrailleType gives the user a blank canvas: the top 90% of the screen is available for entering characters, and a strip at the bottom displays them as they are typed, in addition to reading them out using Android’s built-in text-to-speech synthesiser.
Just like Braille users read text by feeling the positions of the raised dots with their fingertips, they can type by touching the display according to the established patterns of the Braille alphabet. Take a look at the video embedded below to see Daftery providing a live demo of the app:
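To make that idea concrete, here is a minimal sketch of how a set of touched dot positions in the standard 2×3 Braille cell might be decoded into a letter. This is an illustration only, not Daftery’s actual implementation; the class name `BrailleDecoder` and the restriction to the first ten letters are my own assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;

/**
 * Simplified sketch of Braille-cell decoding. Dot numbering follows the
 * standard six-dot cell: dots 1-2-3 run down the left column, 4-5-6 down
 * the right. A character is identified by the set of dots touched.
 */
public class BrailleDecoder {

    private static final Map<String, Character> CELLS = new HashMap<>();

    static {
        // First ten letters of the Braille alphabet and the dots they use.
        put('a', 1);          put('b', 1, 2);
        put('c', 1, 4);       put('d', 1, 4, 5);
        put('e', 1, 5);       put('f', 1, 2, 4);
        put('g', 1, 2, 4, 5); put('h', 1, 2, 5);
        put('i', 2, 4);       put('j', 2, 4, 5);
    }

    private static void put(char letter, int... dots) {
        CELLS.put(key(dots), letter);
    }

    // Normalize a dot combination to an order-independent lookup key.
    private static String key(int... dots) {
        TreeSet<Integer> sorted = new TreeSet<>();
        for (int d : dots) sorted.add(d);
        return sorted.toString();
    }

    /** Returns the letter for the touched dots, or '?' if unrecognised. */
    public static char decode(int... touchedDots) {
        return CELLS.getOrDefault(key(touchedDots), '?');
    }

    public static void main(String[] args) {
        System.out.println(decode(1));       // a
        System.out.println(decode(1, 2));    // b
        System.out.println(decode(2, 4, 5)); // j
    }
}
```

A real input method would first map on-screen touch coordinates to these dot numbers, for instance by dividing the canvas into a 2×3 grid, before looking up the character and handing it to the text field and the speech synthesiser.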
We spoke to Daftery after looking at the demonstration above and he had some more information to share with us. For starters, the app will support not just the 26 letters of the alphabet but also some special characters like whitespace, backspace and newline. To enable it to be used with any application on the system, he is trying to get it to work as an alternative keyboard, which he can then distribute via the Android Market.
When we asked about the price, Daftery said that he hadn’t decided on one yet and remarked that it was too early in the process right now for him to be able to tell us whether it would be released for free or come at a cost. Right now, he says, his motive is to get the app up and running and make it powerful enough that it can actually make a difference in the life of a blind Android user.
Daftery said that he regularly visits schools for blind children in order to test his application in the real world and improve it based on the feedback he gets from them. The gesture for the whitespace character, for instance, was suggested by a blind person who was used to working with other technology-based Braille typing solutions for desktop computers.
All of Daftery’s current efforts are directed towards the Android platform, but we wanted to know if he planned to work on solutions for users of Apple’s iPad, iPhone and iPod touch devices as well. Apple’s development model does not allow for third-party developers to make systemwide keyboards for the device, so we weren’t surprised to hear that he “had no specific plans but was looking into that”.
Even if iOS support never comes to BrailleType, however, Daftery thinks that he may be able to finalise the features for the Android version by next week and then release the app soon after. What we are seeing here is an impressive demo of the way technology can be used to enable people with physical disabilities. We’ll know how well it works once it is released and being used by people with visual impairments, but we’re hoping for the best.