No, this is not about Zero UI or Chat Bots. I have been playing with this idea for a few months, since listening to a podcast by a professor of neuroscience and business at the Kellogg School of Management and the LIJ department of Neurosurgery. Though in its early stages, the idea works like this: apparently our brain cells fire in anticipation of an event that's about to unfold. Also, it's possible to spot electrical activity in a certain part of the brain and then use that know-how to control an outcome, e.g. zooming into a picture from a group of pictures by focusing on the picture of your choice. Watch this video if you would like to know more.

Following on from this, the same approach can be used to pre-empt what you might do next. For example, I can ask you to switch on a light. In doing so, I can learn which part of your brain, or which brain cells, fire as your brain commands your arm to switch on the light. Then, as an improvisation, a combination of brain sensors and computers can monitor that activity and switch on the light bulb without you having to move your arm (see the sketch at the end of this post).

Is this the Future of User Interface Design?

Until now, a User Interface was always just that, an interface, i.e. there was always a layer of abstraction between the human and the machine. In the early days, we keyed commands on a keyboard to tell a computer what to do, and doing so required in-depth knowledge of computer languages. Later, this gave way to a graphical user interface and a mouse, where we could click a representation to complete an action (click on a 'folder' on a 'desktop' to start a program). Touchscreens on phones removed one layer of this abstraction, the mouse, and made interaction slightly more natural by introducing gestures and voice commands. Chat-based applications (e.g. chat bots) and voice-based applications (e.g. Alexa, Siri, Cortana, Google) started what's now known as Conversational UI (CUI), sometimes referred to as Zero UI. However, they all still contain at least one level of abstraction.
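To make the light-bulb example concrete, here is a minimal sketch of the sense, learn, act loop in Python. Everything in it is an assumption for illustration: read_brain_activity, learn_intent_channel and switch_on_light_bulb are hypothetical stand-ins for a real brain-sensor SDK, a real classifier and a real smart-bulb API, and the single-channel threshold is a deliberately naive decision rule.

import random  # stand-in for a real EEG/brain-sensor SDK

def read_brain_activity(num_channels: int = 8) -> list[float]:
    """Hypothetical sensor read: one activity level per monitored channel.
    A real system would pull this from an EEG headset or implant SDK."""
    return [random.random() for _ in range(num_channels)]

def learn_intent_channel(samples_while_switching_on: list[list[float]]) -> int:
    """'Training' step from the post: while the person physically switches the
    light on, record readings and remember which channel fires the most."""
    averaged = [sum(channel) / len(channel)
                for channel in zip(*samples_while_switching_on)]
    return max(range(len(averaged)), key=lambda i: averaged[i])

def switch_on_light_bulb() -> None:
    """Hypothetical smart-bulb call; a real one would talk to an IoT bulb API."""
    print("Light bulb switched on")

def monitor_and_act(intent_channel: int, threshold: float = 0.9) -> None:
    """Sense-and-act loop: once the learned channel fires strongly enough,
    trigger the bulb without any arm movement."""
    while True:
        activity = read_brain_activity()
        if activity[intent_channel] > threshold:
            switch_on_light_bulb()
            break

if __name__ == "__main__":
    # Learn which channel lights up during a few real switch-on attempts,
    # then watch for that pattern and act on it directly.
    training_samples = [read_brain_activity() for _ in range(20)]
    channel = learn_intent_channel(training_samples)
    monitor_and_act(channel)

A real system would replace the threshold check with a trained classifier and the random readings with streamed sensor data, but the shape of the loop (learn the signal, watch for it, act) stays the same.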