A.I. will now be trained on human brain waves, using a "brainsourcing" technique

Article edited by Jhon N


Researchers wired up 30 participants with brain-reading tech to see if they could collectively teach a computer to learn. Welcome to the world of brainsourcing.

Picture a room containing more than two dozen identical desks. At each desk, a person sits in front of a computer playing a simple identification game. The game asks the user to complete an assortment of basic recognition tasks, such as choosing which photo in a series shows someone smiling, or depicts a person with dark hair or glasses. The player must make their decision before moving on to the next picture.

They don't do it by clicking a mouse or tapping a touchscreen, though. Instead, they select the correct answer simply by thinking it.

Each person in the room wears an electroencephalogram (EEG) skull cap, with a trail of wires leading from each person to a nearby recording device that monitors the electrical activity on their scalp. The scene looks like an open-plan office where everyone is jacked into The Matrix.

"The participants [in our study] had the simple task of merely recognizing [what they were asked to look for]," Tuukka Ruotsalo, a University of Helsinki research fellow who led the recently published research, told Digital Trends. "We asked them not to do anything else. They just looked at the images that they were shown. We then built a classifier to see if the target features could be used to identify the correct face, based solely on the brain signal. Nothing else was used, apart from the EEG signal when the participants were seeing the picture."

In the experiment, images of synthesized human faces were shown to a total of 30 volunteers (synthesized to avoid the chance that a participant might recognize a person they were shown, and thus skew the results). Participants were asked to mentally label the faces on the basis of what they were told to look for. Using data on brain activity alone, an A.I. algorithm learned to recognize images, such as when a blond person appeared on the screen.
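As a rough illustration of this kind of setup, the sketch below trains a simple classifier to separate "target" from "non-target" brain responses. Everything here is an assumption for illustration: the synthetic feature vectors, the labels, and the choice of linear discriminant analysis stand in for the study's actual preprocessing and model, which are not described in this article.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed EEG epochs: one feature vector per
# viewed image (e.g., voltage amplitudes across channels and time windows).
n_images, n_features = 400, 32
X = rng.normal(size=(n_images, n_features))
y = rng.integers(0, 2, size=n_images)  # 1 = image matched the target feature

# Assumption: target images evoke a slightly stronger response in a subset
# of channels, which is what makes them separable from non-targets.
X[y == 1, :8] += 1.0

# Train on the first 300 epochs, evaluate on the remaining 100.
clf = LinearDiscriminantAnalysis()
clf.fit(X[:300], y[:300])
accuracy = clf.score(X[300:], y[300:])
print(f"held-out accuracy: {accuracy:.2f}")
```

On this synthetic data the classifier scores well above the 50% chance level, which is the same kind of check the researchers describe: can the target be identified from the brain signal alone?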

This is impressive stuff, but it isn't particularly new. For at least the past decade, researchers have used brain activity data, collected via EEG or fMRI, to carry out an increasingly impressive assortment of thought-reading demos. In some cases, this means identifying a specific image or video, as in a recent study in which researchers at Moscow's Neurorobotics Lab showed that it is possible to determine which video clips people are watching by monitoring their brain activity.

In other cases, those insights can be used to trigger certain responses. In 2011, for example, researchers at Washington University in St. Louis placed temporary electrodes over the speech center of a person's brain and then demonstrated that the person could move a computer cursor on screen simply by thinking about where they wanted it to move. Other studies have shown that it is possible to use brain data to move robotic limbs or hovering drones.

Tuukka Ruotsalo and his team call this group-based brain-reading "brainsourcing." It's a play on the term crowdsourcing, which refers to splitting one big task into smaller tasks that can be distributed to large groups of people to help solve. In 2020, crowdsourcing may be most synonymous with money-raising platforms like Kickstarter, where the "big task" is the startup capital needed to launch a product, and the distributed, crowd-based element involves asking people to chip in smaller sums of money.

Crowdsourcing can be beneficial to A.I. as well. Take Google's reCAPTCHA technology, for example. Most of us probably think of reCAPTCHA as a way websites check whether or not we are a bot before allowing us to carry out a specific task. Completing a reCAPTCHA could involve reading a wiggly line of text or clicking on each image in a selection that includes a cat. But reCAPTCHAs are not just about testing whether we're human; they're also a very clever way to collect data that can be used to make Google's A.I. image recognition systems cleverer. Every time you read a piece of text from a roadside sign in a reCAPTCHA image, you might be helping to make Google's self-driving cars slightly better at recognizing the real world. Once Google has gathered enough answers for an image, it can be reasonably certain it has the correct one.
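The "gather enough answers, then trust the consensus" idea behind both reCAPTCHA and brainsourcing can be sketched in a few lines. The function name, the vote threshold, and the majority rule below are illustrative assumptions, not Google's or the researchers' actual aggregation logic.

```python
from collections import Counter

def aggregate_labels(responses, min_votes=3):
    """Majority-vote a label once enough responses have been gathered.

    responses: labels submitted for one item, e.g. CAPTCHA answers or
    per-participant classifier outputs in a brainsourcing setting.
    Returns the consensus label, or None if there is no answer yet.
    """
    if len(responses) < min_votes:
        return None  # not enough votes gathered yet
    label, count = Counter(responses).most_common(1)[0]
    # Require a strict majority before trusting the crowd's answer.
    return label if count > len(responses) / 2 else None

print(aggregate_labels(["cat", "cat", "dog"]))  # strict majority reached
print(aggregate_labels(["cat", "dog"]))         # too few votes: no answer
```

The same scheme works whether the "votes" come from humans clicking images or from classifiers reading each participant's EEG signal: individually noisy answers become reliable in aggregate.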

Commercial wearable EEG monitors are now beginning to become available, in forms ranging from brain-reading headphones to smart tattoos. EEG demonstrations such as the one in this study currently measure only a small percentage of a person's total brain activity. But this could increase over time, meaning the information gathered may become far less binary. Rather than just getting a "yes" or "no" answer to questions, this technology could observe people's responses to more complex questions, monitor reactions to media such as a TV show or movie, and then feed aggregate crowd data back to their makers.