2018 AI trend forecast: rapid adoption in hospitals, and AI that composes new songs

March 3, according to Forbes magazine: The Wall Street Journal, Forbes, Fortune and other publications called 2017 the “year of AI.” There were good reasons for this: AI defeated professional gamers and poker players, deep learning education expanded through several online projects, and speech recognition accuracy records were broken several times. Research institutions such as Oxford and Massachusetts General Hospital invested in developing their own supercomputers.

These are just a few of the milestones AI reached in 2017. What comes next? We have collected forecasts from the world’s leading AI researchers and industry thought leaders:

1. AI will truly integrate into medicine

“2018 will be the year AI truly enters the medical world. We will move from algorithms to products, with more integration and validation, so that these solutions can go from concepts to tools that physicians find practical and usable. By the end of 2018, I think about half of the leading healthcare systems will have adopted some form of AI in their diagnostic teams. Although this will first emerge in diagnostic medicine, we will see population health, hospital operations and a broad range of clinical specialties follow. In 2018 we will begin to adopt technology that truly changes how providers work and how patients experience healthcare on a global scale.”

– Mark Michalski, Executive Director, Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science

2. Deep learning will revolutionize engineering simulation and design

“2018 will be the year in which deep learning revolutionizes engineering simulation and design. Over the next three to five years, deep learning will shorten product development from years to months or even weeks, creating a new paradigm for rapid innovation in functionality, performance and cost.”

– Marc Edgar, GE Information Scientist

3. AI will be seen as part of “routine” clinical systems

“In 2018 and the years that follow, AI will be introduced into our clinical systems. It will no longer be called AI, but simply become part of routine systems. People will ask themselves: ‘How did we ever get by without these systems?’”

– Luciano Prevedello, MD, MPH, Division of Radiology and Neuroradiology, The Ohio State University Wexner Medical Center

4. AI will become a mainstream content creator

“Given the rapid pace of research, I expect AI to create new, personalized media, such as music tailored to your taste. Imagine a future music service that not only plays existing songs you might like, but also keeps creating new songs just for you.”

– Jan Kautz, Senior Director of NVIDIA Visual Computing and Machine Learning Research

5. Technology will continue to adapt to AI

“AI will influence the next 25% of technology spending. The key theme will be how organizations and their workforces handle the changes AI technology brings.”

– Nicola Morini Bianzino, Managing Director of Artificial Intelligence and Head of Technology Development and Strategy, Accenture

6. Biometrics will replace credit cards and driver’s licenses

“Thanks to advances in AI, the face will become the new credit card, driver’s license and barcode. Facial recognition has already transformed security through biometrics, and we will see this technology converge with retail, as Amazon has shown. In the near future, people will no longer have to queue up in stores.”

– Georges Nahon, CEO, Orange Silicon Valley and President, Orange Institute Global Research Associates

7. New deep learning techniques will provide transparency into how data is processed

“Deep learning will significantly increase the quantitative content of radiology reports, and concerns about deep learning as a ‘black box’ will greatly diminish, because new techniques will help us understand what deep learning ‘sees.’”

– Bradley J. Erickson, Associate Chair for Research, Department of Radiology, Mayo Clinic; Consultant, Departments of Radiology and Health Sciences Research and the Division of Biomedical Statistics and Informatics

8. Smartphones will run AI and deep neural networks

“A large number of smartphone applications will run deep neural networks to better support AI features. Friendly robots will become more affordable and join households as new members. They will begin to close the gap between vision, speech and voice, so that users no longer notice the difference between these modes of communication.”

– Robinson Piramuthu, Chief Scientist, eBay Computer Vision

9. AI will be more fully integrated into daily life

“Robots will get better at complex tasks that pose no difficulty for humans, such as moving freely around a room and among objects, and they are already good at handling boring, routine work. I also look forward to progress in natural language processing (NLP); although we have had some success already, we will see more and more products containing some form of AI come into our lives. Self-driving vehicles are now deployed on the road, so things that were being tested in labs will become more commonplace and available, and will touch more of our lives.”

– Chris Nicholson, Chief Executive Officer and Co-Founder

10. AI development will become more diverse

“We are starting to see more and more people from all backgrounds involved in building, developing and productizing AI. Tools and infrastructure will keep improving, making it easier for more people to turn their data and algorithms into useful products or services. Products and applications will allow more interactive querying of the inner workings of the underlying models, helping to build trust and confidence in these systems, especially in mission-critical applications. In medicine, we will see the convergence of information sources across multiple disciplines rather than a focus on isolated findings, even as the scope of these targeted applications continues to grow at a frantic pace.”

– George Shih, Founder; Associate Professor and Associate Director, Department of Radiology, Weill Cornell Medicine

11. AI will open up new areas of research in contemporary astrophysics

“AI will be able to detect unexpected astrophysical events that emit gravitational waves, opening up entirely new areas of contemporary astrophysics.”

– Eliu Huerta, Astrophysicist and Head of the Gravity Group, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign

12. AI will move from the research lab to the patient’s bedside

“AI in imaging has reached the peak of the hype curve, and we will begin to see AI-powered tools move from the research lab to the radiologist’s workstation and eventually to the bedside. Less glamorous use cases (such as workflow tools, quality and safety, and patient triage) will begin to attract the attention of developers, insurers, healthcare providers and others. One of the biggest challenges facing the medical and imaging AI industry is whether regulators can keep up with what is happening: the FDA will need to find efficient, streamlined ways to review and approve algorithms for screening, detecting and diagnosing disease.”

– Safwan Halabi, Medical Director, Department of Radiology, Lucile Packard Children’s Hospital at Stanford

13. AI personal assistants will become smarter

“AI personal assistants will keep getting smarter. As our personal assistants learn more about our daily lives, I can imagine no longer having to worry about everyday dinners. My AI assistant will know what I like to eat and what I have at home, pick a day when I will cook, and make sure that when I get home from work, all the groceries I need are waiting at my front door, so I can prepare the meal I have been looking forward to.”

– Alejandro Troccoli, senior research scientist at NVIDIA

MIT associate professor: why hospitals welcome robots more readily than factories

According to MIT Technology Review, robot colleagues and AI helpers are coming, but Julie Shah is not worried about being replaced by robots; she welcomes them enthusiastically.

Shah is an associate professor at MIT who works on making humans and machines safe and efficient partners. The job has taken her to factory floors and busy hospitals, where she tries to figure out how automation can make humans more effective. Shah was recently interviewed about what it looks like when we start working with robots:

Q: What do you think is the most common misconception about robots in the workplace?

Shah: People generally think of AI as a very general, powerful capability that can be applied to all these different kinds of work. But today’s AI cannot be used that way.

Currently, every AI system needs to be designed to perform a very specific task, which requires a lot of engineering work. Although the range of tasks is expanding, we do not yet have “general artificial intelligence” that could replace a great deal of human work. As AI’s capabilities continue to grow, it will take over many small tasks across different areas.

Q: In places such as factories and hospitals, how much of the potential of robots has been realized?

Shah: When you talk about robots moving into more service environments, such as hospitals and office buildings, you find that these environments are less structured. Robots need to understand the environment, including personal preferences and when things are busiest. Hand-coding all of this is cumbersome.

We have been developing technology that observes how professionals work. We observe how nurses make decisions, such as which room a patient is assigned to. By observing human professionals, robots can be trained through learning.
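Shah’s “learn by observing professionals” idea is, in its simplest form, imitation learning (behavioral cloning). The sketch below is purely illustrative — the scenario, features and ward names are invented, not taken from Shah’s research — but it shows the core mechanism: record expert decisions, then imitate the most common expert choice for each observed situation.

```python
from collections import Counter, defaultdict

# Hypothetical demonstration log: (patient_acuity, needs_isolation) -> ward,
# as chosen by a human nurse. All data here is illustrative.
demonstrations = [
    (("high", True), "ICU"),
    (("high", False), "ICU"),
    (("low", True), "isolation"),
    (("low", False), "general"),
    (("low", False), "general"),
]

def fit_policy(demos):
    """Behavioral cloning at its simplest: for each observed situation,
    imitate the action the expert chose most often."""
    votes = defaultdict(Counter)
    for state, action in demos:
        votes[state][action] += 1
    return {state: counter.most_common(1)[0][0]
            for state, counter in votes.items()}

policy = fit_policy(demonstrations)
print(policy[("low", False)])  # imitates the nurse's usual choice: "general"
```

Real systems replace the lookup table with a model that generalizes to unseen situations, but the training signal — observed expert decisions — is the same.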

Q: Have you noticed which industries are more receptive to automation?

Shah: Healthcare does not resist robots. People in manufacturing tend to be more skeptical about being replaced by robots. Proving that robots will augment human capabilities rather than replace humans can be a daunting challenge.

In hospitals, we studied nurses in charge roles. They control most of the scheduling around the operating room, such as which ward each patient is assigned to and which nurse is assigned to care for them.

Compared with air traffic controllers, these nurses’ work is mathematically much harder, yet they do not have the same decision-support tools to help them. Nurses take a unique pride in their work: they know it is hard and feel there is room for improvement, even though they already know the job intimately.

Q: Do you think the conversation about AI and work needs to change?

Shah: I think one thing sometimes missing from the discussion is that AI is not a technology beyond our control. We are the designers of AI, and we shape how it works with us.


With AI technology, scientists want to reconstruct a perpetrator’s face from the victim’s brain

Scientists can now genuinely read human thoughts with advanced computerized scanning technology, capturing face images from test subjects’ minds. If the technology is refined further, police facial composites and even closed-circuit television recordings could become history, as police would be able to obtain a criminal’s true appearance directly from the victim’s brain.

This remarkable technology also brings new hope to people with speech disabilities, and terrorists would no longer be able to hide their plans from law enforcement officials. Developed by neuroscientists at the University of Toronto in Canada, the new technique uses EEG monitoring equipment to record people’s brain activity and reproduce the images they perceive.

Principal investigator Dan Nemrodov said: “When we see something, our brain creates a mental percept, which is essentially a mental impression, and we were able to capture it with the help of EEG equipment. What is really exciting is that we are not reproducing mere shapes, but the true appearance of a person, with many detailed visual features.”

“We were able to recreate a person’s visual experience based on their brain activity, which opens up many possibilities,” the researchers said. “The technology unveils the subjective content of our minds and gives us a way to explore and share what we perceive, remember and imagine. It also provides a means of communication for people who cannot communicate verbally.”

The technique can reproduce not only what a person perceives, but also the content of their experiences and memories, all from a neural basis. It could also be applied forensically by law enforcement: investigators could gather a suspect’s likeness directly from witnesses instead of relying on verbal descriptions to produce sketches.

During the study, researchers showed face images to test subjects connected to EEG devices. The subjects’ brain activity was recorded by the devices, and the researchers then reconstructed digital images of the perceived faces using a technique based on machine learning algorithms.
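The pipeline described here — record brain activity while a subject views images, then learn a mapping from recordings to images — can be caricatured as a regression problem. The sketch below uses synthetic data and a plain least-squares decoder; it illustrates the general idea only and is not the Toronto team’s actual method.

```python
import numpy as np

# Simulate "EEG feature" vectors X and "face image feature" vectors Y that
# are linearly related, then learn the mapping from paired examples.
rng = np.random.default_rng(0)

n_trials, n_eeg, n_pix = 200, 16, 8
W_true = rng.normal(size=(n_eeg, n_pix))       # unknown brain-to-image map
X = rng.normal(size=(n_trials, n_eeg))         # recorded EEG features
Y = X @ W_true + 0.01 * rng.normal(size=(n_trials, n_pix))  # viewed images

# Fit a linear decoder by least squares: Y ~ X @ W
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Reconstruct the image features for a new, unseen EEG recording
x_new = rng.normal(size=(1, n_eeg))
reconstruction = x_new @ W_hat
error = np.abs(reconstruction - x_new @ W_true).max()
```

With enough paired trials and low noise, the decoder recovers the mapping closely; real EEG-to-image decoding works on the same train-on-pairs, predict-on-new-recordings principle, just with far noisier data and richer models.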

Researchers previously ran similar tests with expensive MRI equipment, but EEG devices are cheaper, more portable and more practical. They are now extending the technique to explore extracting more detail from subjects’ memories. The research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and a Connaught New Researcher Award.

An Israeli company wants to put chips in the human brain to help cure disease

The thought of implanting a chip in the human brain is enough to make even the most fanatical science-fiction fan hesitate.

According to Futurism, brain-machine interfaces are still a brand-new technology in the early stages of development, and we are not yet ready to fully integrate the human brain with computers. In the meantime, one company hopes to help stroke and spinal cord injury patients with a non-surgical electroencephalography (EEG) approach.

The revolutionary technology being developed by Neuralink, founded by American serial entrepreneur Elon Musk, and other brain-computer interface (BCI) companies may in the future help improve human intelligence, memory and communication. Yet however promising in principle, the idea of implanting a chip in the human brain is enough to make even the most fanatical science-fiction fan hesitate.

Headquartered in Israel, brain-tech startup BrainQ is taking a less invasive approach to combining the human brain with technology. Instead of implants, BrainQ uses a non-surgical EEG machine that records the brain’s electrical activity. EEG has been used with paralyzed patients before, and BrainQ hopes its technology will achieve similar goals and improve the lives of stroke and spinal cord injury patients.

However, the neurotech company also faces considerable obstacles before its technology can be used in medicine. First, the technology must successfully complete human clinical trials. It then needs FDA approval to be sold commercially in the United States. Ultimately, BrainQ’s hardest challenge will be staying ahead of other companies trying to build similar EEG-based technology.

While companies like NeuroLutions and NeuroPace are technically BrainQ’s competitors, BrainQ appears to lead in applications for stroke and spinal cord injury patients. The company hopes its technology will reach the U.S. market by 2020. After that, it will work to set BrainQ apart from other companies by developing applications for a broader range of diseases.

BrainQ spokesman Assaf Lifshitz said the company hopes to use the technology in the future to collect data, improve symptoms in Alzheimer’s patients and help treat several childhood illnesses.

BrainQ’s timeline may be realistic, since its less invasive technique (relative to brain implants) may win FDA approval much more easily than other BCI approaches. As the technology rolls out, BrainQ hopes to gather broader data on the electrical activity of the human brain. In the future, that data may support more accurate assessments of a patient’s condition and thus more effective treatment.