2018 AI trend forecast: rapid adoption in hospitals, and AI that composes new songs

March 3 news: According to a Forbes report, publications including The Wall Street Journal, Forbes, and Fortune called 2017 the “year of AI.” There are good reasons for this: AI defeated professional gamers and poker players, deep learning education expanded through several online programs, and speech recognition accuracy records were broken several times. Research institutions such as Oxford and Massachusetts General Hospital invested in developing their own supercomputers.

These are but a few of the milestones AI achieved in 2017. What happens next? We have collected forecasts from leading AI researchers and industry thought leaders around the world:

1. AI will truly integrate into medicine

“2018 will be the year AI truly enters medicine. We will move from algorithms to products, with more integration and validation, so that these solutions go from concepts to tools that physicians find practical and usable. By the end of 2018, I think about half of the leading healthcare systems will have adopted some form of AI within their diagnostic teams. Although this will emerge first in diagnostic medicine, we expect population health, hospital operations, and a broad range of clinical specialties to follow. In 2018 we will begin adopting technology that truly changes how providers work and how patients experience healthcare on a global scale.”

– Mark Michalski, Executive Director, Massachusetts General Hospital and Brigham and Women’s Clinical Data Science Center

2. Deep learning will revolutionize engineering simulation and design

“2018 will be the year in which deep learning revolutionizes engineering simulation and design. Over the next three to five years, deep learning will shorten product development from years to months or even weeks, creating a new paradigm for rapid innovation in functionality, performance, and cost.”

– Marc Edgar, GE Information Scientist

3. AI will come to be seen as part of “routine” clinical systems

“In 2018 and the years to come, AI will be introduced into our clinical systems. It will no longer be called AI; it will simply be called a routine system. People will ask themselves: ‘How did we ever get by without these systems?’”

– Luciano Prevedello, MD, MPH, Radiology and Neuroradiology, The Ohio State University Wexner Medical Center

4. AI will become a mainstream content creator

“Given the rapid pace of research, I expect AI to create new, personalized media, such as music tailored to your taste. Imagine a future music service that not only plays existing songs you might like, but also keeps creating new songs just for you.”

– Jan Kautz, Senior Director of NVIDIA Visual Computing and Machine Learning Research

5. Technology will continue to adapt to AI

“AI will influence the next 25 percent of technology spending. The key theme will be how organizations and their workforces cope with the changes AI technology brings.”

– Nicola Morini Bianzino, General Manager of Artificial Intelligence and Head of Technology Development and Strategy, Accenture

6. Biometrics will replace credit cards and driver’s licenses

“Thanks to advances in AI, the face will become the new credit card, the new driver’s license, and the new barcode. Facial recognition has already transformed security through biometrics, and we will see this technology converge with retail, as Amazon has shown; in the near future, people will no longer have to queue in stores.”

– Georges Nahon, CEO, Orange Silicon Valley and President, Orange Institute Global Research Associates

7. New deep learning techniques will make it more transparent how data is processed

“Deep learning will significantly increase the quantitative content of radiology reports, and concerns about deep learning as a ‘black box’ will be greatly diminished, because new techniques will help us understand what deep learning ‘sees.’”

– Bradley J. Erickson, Associate Director, Department of Radiology Research, Mayo Clinic, Advisor, Department of Health Sciences and Radiology; Advisor, Biomedical Statistics and Information Section

8. Smartphones will run AI and deep neural networks

“A large number of smartphone applications will run deep neural networks to better support AI features. Friendly robots will start to become more affordable and become new members of the household. They will begin to close the gap between visual, speech, and voice interaction, so that users no longer notice the difference between these modes of communication.”

– Robinson Piramuthu, Chief Scientist, eBay Computer Vision

9. AI will be more fully integrated into daily life

“Robots will perform better at complex tasks that pose no difficulty for humans, such as moving freely around rooms and objects, and they will get better at handling boring, routine work. I also expect progress in natural language processing (NLP), although we have already had some success there, and we will see more and more products that contain some form of AI entering our lives. Self-driving vehicles are now being deployed on the roads, so things that were tested in labs will become more commonplace and accessible, and will touch more of our lives.”

– Chris Nicholson, Chief Executive Officer and Co-Founder, Skymind.io

10. AI development will be more diverse

“We are starting to see more people from all backgrounds involved in building, developing, and productizing AI. Tools and infrastructure will continue to improve, making it easier for more people to turn their data and algorithms into useful products or services. Products and applications will allow more interactive querying of the inner workings of the underlying models, which will help build trust and confidence in these systems, especially in mission-critical applications. In medicine, we will see information from multiple sources and disciplines converge, rather than a focus on individual cases, and the scope of these targeted applications will continue to grow at a frenetic pace.”

– George Shih, Founder, MD.ai, Associate Professor and Associate Director, Department of Radiology, Weill Cornell Medical School

11. AI will open up new areas of research in contemporary astrophysics

“AI will be able to detect unexpected astrophysical events that emit gravitational waves, opening up entirely new areas of contemporary astrophysics.”

– Eliu Huerta, Astrophysicist and Gravity Group Leader, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign

12. AI will move from the research lab to the patient’s bedside

“AI in imaging has reached the peak of the hype cycle, and we will begin to see AI-powered tools move from the research lab to the radiologist’s workstation and, eventually, to the bedside. Less glamorous use cases (such as workflow tools, quality and safety, and patient triage) will begin to attract the attention of developers, insurers, healthcare providers, and others. One of the biggest challenges facing the medical and imaging AI industry is whether regulators can keep pace with what is happening: the FDA will need to find efficient, streamlined ways to review and approve algorithms for screening, detecting, and diagnosing disease.”

– Safwan Halabi, Medical Director, Department of Radiology, Lucile Packard Children’s Hospital, Stanford Medical Center

13. AI personal assistants will become smarter

“AI personal assistants are getting smarter and smarter. Once my personal assistant knows more about my daily life, I can imagine no longer having to worry about what to make for dinner. My AI assistant will know what I like to eat and what is in my cupboard, and on a day I decide to cook at home it will make sure that, by the time I get home from work, all the groceries I need have been delivered to my front door, so I can prepare the meal I have been looking forward to.”

– Alejandro Troccoli, senior research scientist at NVIDIA

Microsoft unveils a VR device for the blind

Foreign media recently reported on a new Microsoft research project that combines VR with a set of haptic devices, designed to let visually impaired people explore the VR world the way they explore the real one.

The device, called CaneTroller, consists of five main parts:

The first is a braking mechanism worn at the waist; call it the brake. Like a brake disc, it controls how far the rod attached to it can slide forward and backward.

The second is a handheld rod-shaped controller; call it the cane controller (in the lab it is an ordinary trekking pole fitted with the control system).

The third is the slider, which connects the brake to the controller; the brake acts on the slider to determine how far the controller (the cane) can travel.

The fourth is a voice coil. Software drives its control circuit, and changes in the magnetic field make it vibrate.

Finally, there is the familiar Vive kit: a Vive Tracker is strapped to the cane, the user wears a Vive headset, and Lighthouse base stations sit in the corners of the ceiling.

With this setup, the software can position the controller in the virtual world and track the movement of the “cane.”

Let’s see how it works.

The user holds the controller and sweeps it toward the floor, much like a blind person probing a path with a white cane. The difference is that the controller is too short to actually reach the floor.

Inside the program, though, the simulated scene tells a different story.

To make the demonstration easier to follow, Microsoft produced a mixed reality (MR) video that blends the virtual and real scenes.

In the virtual world, a longer cane is simulated, its length set to each user’s needs, and a number of virtual objects are placed around the scene.

When the virtual cane touches the virtual floor or another object, the voice coil motor produces high-frequency vibrations that simulate a real collision. At the same time, the brake engages to stop the cane from moving toward the obstacle. From touch alone, it feels as if the hand has really hit something.
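
As a rough sketch of this feedback loop (not Microsoft’s actual code; the part names, thresholds, and units below are invented purely for illustration), the per-frame decision could be expressed in a few lines of Python:

    from dataclasses import dataclass

    @dataclass
    class CaneState:
        tip_x: float   # horizontal position of the virtual cane tip, in metres
        tip_y: float   # height of the tip above the virtual floor, in metres

    def update_feedback(state: CaneState, obstacle_x: float) -> dict:
        """Decide what the haptic hardware should do for one simulation frame."""
        commands = {"voice_coil": "idle", "brake": "released"}
        if state.tip_y <= 0.0:                     # the tip touches the virtual floor
            commands["voice_coil"] = "tap_pulse"   # short high-frequency vibration
        if abs(state.tip_x - obstacle_x) < 0.05:   # the tip reaches a virtual obstacle
            commands["brake"] = "engaged"          # lock the slider so the cane feels blocked
            commands["voice_coil"] = "impact_pulse"
        return commands

    # e.g. a cane tip just past the floor and about 2 cm from an obstacle
    print(update_feedback(CaneState(tip_x=0.48, tip_y=-0.01), obstacle_x=0.5))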

The function works as intended, but in the real world you can poke a trash can with a stick easily enough. Why go to the trouble of putting on a VR headset, and writing a dedicated program, just to experience this virtual collision?

And what is the point of a blind person wearing a VR headset in the first place?

Which brings us back to the question: is this just a case of more money than sense?

Don’t rush to judgment; this is Microsoft, after all. Let’s take a closer look.

Under software control, the voice coil on the controller accurately simulates the feel of a tap or a slide. It can reproduce not only the sounds of different materials and terrain, but also the pace and force of the cane’s movement.

An ordinary wall? No challenge at all.

The tactile paving found at stations is handled well too: both the sliding friction of the cane and the bump as it strikes each ridge are simulated convincingly.

At the same time, headphones provide accurate 3D spatial audio. This means that, just as in the real world, visually impaired users can move forward normally using the standard two-point touch cane technique.

Careful viewers will notice that the tip of the virtual cane sometimes pokes through the floor. This is because the brake can only restrain the cane in the horizontal plane; it is helpless in the vertical direction.

Being Microsoft, their first thought was to use software to make up for the hardware’s shortcoming: when the virtual cane moves somewhere it should not, the program sounds an alert.

Now that Microsoft has walked us through these tricks, it is time to see the results.

To verify the product’s practicality, the research team asked several visually impaired people to test it. They first learned how to use the device. In the formal test, the goal was to find and identify all the objects in a virtual scene with the aid of the device, using only hearing and touch.

The indoor scene contained trash cans, rugs, and tables.

The outdoor scene was a traffic intersection. Users had to find the tactile paving, locate the traffic-light pole, and work out a safe place to cross the road.

Of the nine people who took part in the test, eight correctly described the layout of the virtual scene within minutes.

So much for the product itself; what are its applications? There are three main ones:

First, it lets visually impaired people train at exploring environments safely. Setting up real carpets, bricks, and trash cans for training is not a problem, but it is much harder to recreate more complex scenarios physically.

The second is to let sighted people experience the world of the blind. This is not for entertainment, of course, but so that people who design or provide services for the blind can understand their actual needs.

Third, it lets the blind experience VR. At first I thought of the device only as a training product, but comments from users abroad made me wonder: why shouldn’t blind people experience VR too?

YouTube user Robert Accardi argued that everyone is underestimating the potential of such products. Even without a cane, the objects in an environment could be made to emit different sounds according to their texture and distance, until the brain forms a reflex or memory. At that point, putting on a pair of glasses that converts the surrounding 3D information into 3D sound would be equivalent to “seeing” the world through the glasses.
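
As a toy sketch of Accardi’s idea (the pitch table and distance fall-off below are made up solely for illustration), each object could be given a sound whose loudness depends on its distance and whose pitch depends on its surface texture:

    def sonify(distance_m: float, texture: str) -> dict:
        """Map an object's distance and texture to simple audio parameters."""
        pitch_by_texture = {"metal": 880.0, "wood": 440.0, "fabric": 220.0}  # Hz, arbitrary
        volume = max(0.0, 1.0 - distance_m / 10.0)   # fades to silence beyond about 10 m
        return {"pitch_hz": pitch_by_texture.get(texture, 330.0), "volume": round(volume, 2)}

    print(sonify(2.0, "metal"))    # a metal trash can 2 m away: high-pitched, fairly loud
    print(sonify(8.5, "fabric"))   # a distant fabric surface: low-pitched, very quiet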

Other users questioned whether something with no imagery at all can still be called VR.

“Virtual reality is nothing more than using software to simulate the real world; the simulation does not have to come through the visual senses,” argued user Luca Conesa.

The blind also have the need, and perhaps an even stronger desire, to explore the world. With VR, they can explore a variety of realistic simulated scenes under safer conditions.

Moreover, as haptics and other sensory technologies improve, there may well be content suited to people with visual or hearing impairments in the future. Who is to say this is not exactly what they need?

MIT associate professor: why hospitals welcome robots more than factories do

According to MIT Technology Review, robotic colleagues and AI helpers are on their way, but Julie Shah is not worried about being replaced by them; she welcomes them enthusiastically.

Shah is an associate professor at MIT who works on making humans and machines safe and effective partners. The job has taken her to factory floors and busy hospitals, where she tries to figure out how automation can make humans more effective. Shah was recently interviewed about what it will look like when we start working alongside robots:

Q: What do you think is the most common misconception about robots in the workplace?

Shah: People generally assume AI is a general, powerful capability that can be applied to all these different kinds of work. But today’s AI cannot be used that way.

Currently, every AI system has to be designed to perform a very specific task, which requires a lot of engineering work. Although the scope of those tasks is expanding, we do not yet have “general artificial intelligence” that could replace a large share of human work. As AI’s capabilities grow, it will take on many small tasks across different areas.

Q: In places like factories and hospitals, how much of the potential of robots has been realized?

Shah: When robots move into more service settings, such as hospitals and office buildings, the environments are less structured. Robots need to understand them, including people’s personal preferences and when the busiest times are. Coding all of that by hand is cumbersome.

We have focused on developing technology that observes how professionals work. We watch how nurses make decisions, such as which room a patient is assigned to. By observing human experts, robots can be trained through learning.

Q: Across different industries, have you noticed which ones are more receptive to automation?

Shah: Healthcare does not resist robots. People in manufacturing are often more skeptical about being replaced by them. Proving that robots will augment human capabilities rather than replace humans can be a daunting challenge.

In hospitals, we studied nurses in charge roles. They control much of the scheduling around the operating rooms, such as which rooms patients are assigned to and which nurses are assigned to care for them.

Compared with air traffic controllers, their work is mathematically much harder, yet they do not have comparable decision-support tools to help them. Nurses take a distinctive pride in their work: they know it is hard, and they feel there is room for improvement, even though they already know the job inside out.

Q: Do you think the conversation about AI and work needs to change?

Shah: I think one thing sometimes missing from the discussion is that AI is not a technology beyond our control. We are the designers of AI, and we decide how to structure the way we work with it.

With AI, scientists want to reconstruct a perpetrator’s face from the victim’s brain

Using advanced computerized scanning technology, scientists have been able to genuinely read human thought and capture face images from test subjects’ minds. If the technology is refined further, police facial composites and even closed-circuit television footage could become things of the past, as police would be able to obtain a criminal’s true appearance directly from the victim’s brain.

The remarkable technology also brings new hope to people with speech impairments, and it could make it impossible for terrorists to hide their attack plans from law enforcement officials. Developed by neuroscientists at the University of Toronto in Canada, the technique uses EEG monitoring equipment to capture brain activity and reproduce the images a person perceives.

Principal investigator Dan Nemrodov said: “When we see something, our brain forms a mental percept, which is essentially a mental impression, and we can capture it with the help of EEG equipment. What is really exciting is that we are not reconstructing simple shapes, but the true appearance of a person, with many detailed visual features.”

“We were able to recreate a person’s visual experience based on their brain activity, which opens up many possibilities,” the researchers said. “The technology reveals the subjective content of our minds and gives us a way to explore and share what we perceive, remember, and imagine. It also provides a means of communication for people who cannot communicate verbally.”

The technology not only reproduces what a person perceives, working from its neural basis, but can also reproduce the content of their experiences and memories. It could also find forensic applications for law enforcement: investigators could gather a suspect’s appearance from a witness’s brain activity rather than relying on verbal descriptions to produce a sketch.

During the study, the researchers showed face images to test subjects wearing EEG devices. The subjects’ brain activity was recorded, and the researchers then reconstructed digital images of what the subjects had perceived using a technique based on machine learning algorithms.
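
The article does not describe the Toronto group’s pipeline in detail, but the general decoding idea can be sketched as a regression from EEG features to a low-dimensional face representation (for example, coefficients in an “eigenface” space). The data below are simulated and the model choice is only an assumption:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 128))   # 200 viewing trials x 128 EEG features (simulated)
    Y = rng.standard_normal((200, 30))    # 200 trials x 30 face-space coefficients (simulated)

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

    decoder = Ridge(alpha=1.0).fit(X_train, Y_train)  # linear map: brain activity -> face space
    Y_pred = decoder.predict(X_test)                  # predicted coefficients would then be
                                                      # projected back to pixels to render a face
    print(Y_pred.shape)                               # (40, 30)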

The researchers have previously run similar tests with expensive MRI equipment, but EEG devices are cheaper, more portable, and more practical. They are now extending the technique to explore whether more detail can be recovered from subjects’ memories. The research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and a Connaught New Researcher Award.

Amazon is developing simultaneous translation for Alexa

Machine translation keeps improving, and several companies have launched real-time machine translation (simultaneous interpretation), but the user experience has been poor. According to the latest foreign media reports, an Amazon team is developing a simultaneous translation feature for the voice assistant Alexa. Beyond language itself, Alexa is also meant to handle cross-cultural translation, helping users fit into the customs and culture of other countries.

In earlier evaluations by various organizations of the world’s four major voice assistants, Google Assistant and Amazon Alexa scored the highest “IQ”: they understood natural language input more readily and answered with greater accuracy. By contrast, Microsoft’s Cortana and Apple’s Siri lag far behind their rivals.

Alexa already connects to a large number of third-party hardware and software products, making it increasingly capable. Now, according to informed sources cited by Yahoo Finance, the Amazon Alexa team is developing a simultaneous translation feature.

Alexa’s simultaneous translation is meant to go beyond what competitors offer today: not just translating words and phrases, but operating at the level of cultural translation.

The first languages reportedly covered by Alexa’s simultaneous translation feature will include English, Spanish, German, French, and Italian.

Amazon officials have so far declined to comment on the report and have not confirmed that simultaneous translation is in development.

For cross-cultural translation, imagine an English-speaking American traveling to Tokyo: Alexa could act as an interpreter at his side and help him converse in Japanese, while also understanding Japanese customs and culture.

Reportedly, users will be able to ask Alexa questions such as, “Alexa, what should I say to the bride’s father at a Japanese wedding?” or “What should I say to the person officiating the wedding?”

In another cultural scenario, a traveler arriving in India for the first time could ask, “Alexa, I’ve just walked into a restaurant in New Delhi. Whom should I speak to, and what should I say, to get a table?”

A source told Yahoo Finance that cross-cultural translation and introductions to local customs will be a highlight of Amazon’s simultaneous translation.

According to the reports, Amazon’s Alexa will eventually be able to translate not just a single exchange but complex conversations between multiple people.

Many companies are already working on simultaneous translation, and Amazon is a latecomer.

Microsoft previously introduced simultaneous interpretation in Skype, its Internet calling tool, but VoIP is a niche use case; Skype has been marginalized among smartphone chat apps, and the technical quality of its simultaneous translation is unclear.

Last October, Google introduced Pixel Buds, a wireless earbud accessory for smartphones said to offer simultaneous translation, but later media reports found the actual translation experience underwhelming.

Beyond simultaneous translation, many companies already offer machine text translation services. The most common scenario is a user entering a passage of text and a mobile app or website automatically translating it into another language.

However, whether for text translation or simultaneous interpretation, the current technology and user experience are not yet satisfactory, and neither has reached large-scale adoption. Machine translation has even produced some embarrassments: some businesses have posted machine-translated text directly on their storefronts, only for it to turn out to be completely wrong.

Whether Amazon Alexa can come from behind and catch up with Microsoft and Google in voice-assistant simultaneous interpretation will be worth watching.

An Israeli company wants to connect the brain to machines to help treat disease

The thought of implanting a chip in the human brain is enough to make even the most fanatical science fiction fan hesitate.

According to Futurism, mind-controlled interfaces are still a new technology in the early stages of development, and we are not yet ready to fully integrate the human brain with computers. In the meantime, though, one company hopes to help stroke and spinal cord injury patients non-surgically, using electroencephalography (EEG) machines.

The revolutionary technology being developed by Neuralink, the brain-computer interface (BCI) company founded by American serial entrepreneur Elon Musk, and by other BCI firms may one day help improve human intelligence, memory, and communication. Yet however promising the technology is in practice, the idea of implanting a chip in the human brain is enough to make even the most fanatical science fiction fan hesitate.

BrainQ, a brain technology startup headquartered in Israel, is taking a less invasive approach to combining the human brain with technology. Instead of implants, BrainQ uses a non-surgical EEG machine that records the brain’s electrical activity. EEG has already been used to help other paralyzed patients, and BrainQ hopes its technology will achieve similar goals and improve the lives of stroke and spinal cord injury patients.

However, the neuroscience company faces considerable obstacles before its technology can be used in medicine. First, the technology must successfully complete human clinical trials. It then needs FDA approval to be sold commercially in the United States. Finally, perhaps the hardest challenge for BrainQ will be staying ahead of other companies trying to build similar EEG-based technology.

While companies such as NeuroLutions and NeuroPace are technically BrainQ’s competitors, BrainQ appears to lead in applications for stroke and spinal cord injury patients. The company hopes its technology will reach the U.S. market by 2020. After that, it will keep working to set itself apart from other companies by developing applications for a broader range of diseases.

Assaf Lifshitz, a BrainQ spokesman, said the company hopes to use the technology in the future to collect data, improve the symptoms of Alzheimer’s patients and help treat several childhood illnesses.

BrainQ’s timeline may be realistic because its approach is less invasive than brain implants and may therefore win Food and Drug Administration approval much more easily than other BCI techniques. As the technology is introduced, BrainQ hopes to gather broader and deeper data on the brain’s electrical activity. In the future, that data may support more accurate assessments of a patient’s condition and, in turn, more effective treatment.