Microsoft’s focus is on “meaningful innovation for lasting impact,” Mitra Azizirad, corporate vice president, Microsoft AI, said at an “innovation keynote” at last week’s Ignite conference. She focused on what she called four major paradigm shifts: artificial intelligence, computing power, machine-human interaction, and biocomputing.
In various talks, researchers drilled down into research into “machine teaching,” optical computing, and biocomputing. In some of these areas, it appears that Microsoft is close to creating products, while in others, these are clearly just research projects a long way from becoming commercial offerings.
Azizirad talked about the responsibilities inherent in research saying, “Society is demanding that we have innovations that make our lives better but that can also be trusted, so we consistently are asking those questions of not just what technology can do, but what it should do.”
The first area she discussed was artificial intelligence. “Over the next decade, every company is going to become an AI company,” Azizirad said, adding that the future of computing will be redefined to meet the requirements of this AI-driven world. She said we need to rethink computing, networking, and storage to accommodate the exponential growth of data needed for AI. AI capabilities are evolving quickly. As examples, she talked about how Microsoft’s image-recognition software has achieved human-level accuracy on the ImageNet benchmark, and how the Snow Leopard Trust is using a residual neural network (ResNet) to identify, count, and protect these animals in Kyrgyzstan.
“These AI breakthroughs are amazing,” she said, “but they’re still constrained to these individual areas of speech and vision and understanding.” She said the future of AI means bringing these capabilities together so they can reason across all of these areas simultaneously. Today, AI requires huge amounts of data but “imagine being able to teach the machine from your own knowledge much like a teacher would with a student.”
This is the concept of “machine teaching,” which I first heard referenced in Satya Nadella’s opening keynote for the conference.
Mark Hammond, general manager for Autonomous Systems, described the concept of “machine teaching,” which he said is the natural complement to the machine learning techniques that much of current AI uses. When he teaches his seven-year-old son to hit a baseball, he doesn’t have to understand how the brain works to know that you don’t start by throwing fastballs at him. You start with the ball on a tee, then toss underhand, and gradually work up from there.
Today, he said, most applications are built with a single technique, huge amounts of data, but with more abstractions we can enable humans to infuse their knowledge into systems. We already do this when we mark a piece of email as junk; teaching the system directly what counts as junk makes that process far more efficient.
His autonomous systems group is focused on “empowering engineers to leverage their deep subject matter expertise to teach these systems.” This is then paired with technologies such as simulation and reinforcement learning to “build incredibly sophisticated solutions that just weren’t possible before.”
This builds up to a concept called “curriculum learning,” with Hammond showing a machine-teaching framework called Inkling being used to show an engine how to handle different forces. He said the concept works particularly well with simulations in which models can learn, such as a simulation of power-line inspection, and in situations where little real-world data exists, such as detecting poachers in the wild. As more real-world examples, he showed a simulation of Toyota robots on a factory floor and of robots from CMU and Oregon State that won the DARPA Subterranean Challenge, tackling search-and-rescue applications in mines.
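The staged-training idea behind curriculum learning can be illustrated with a toy example. This is a generic sketch in the spirit of Hammond’s baseball analogy, not Microsoft’s Inkling language or its autonomous-systems platform; the task, stage names, and numbers are all invented:

```python
import random

# Toy curriculum: a policy must learn a swing "gain" that matches the
# speed of an incoming pitch. Stages widen the range of pitch speeds,
# from a ball on a tee to a full pitch, mirroring how a coach ramps up
# difficulty rather than starting with fastballs.

def train_stage(policy, speed_range, episodes=2000, lr=0.05):
    """Nudge the policy's gain toward matching randomly sampled speeds."""
    for _ in range(episodes):
        speed = random.uniform(*speed_range)
        swing = policy["gain"] * speed
        error = speed - swing              # how far off the swing was
        policy["gain"] += lr * error / speed  # simple corrective update
    return policy

curriculum = [("tee", (1.0, 1.0)), ("toss", (1.0, 3.0)), ("pitch", (1.0, 9.0))]
policy = {"gain": 0.0}
for stage_name, speed_range in curriculum:
    policy = train_stage(policy, speed_range)
    print(stage_name, round(policy["gain"], 3))
```

By the final stage the gain has converged close to 1.0, i.e., the swing matches the pitch. In a real machine-teaching system the per-stage update would be a reinforcement learning algorithm running against a simulator, and advancement to the next stage would typically be gated on a success threshold.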
Hammond said this concept involves mechanisms for scaling human expertise. “We can teach a system what we’re looking for and use it for decision support.”
The second paradigm Azizirad talked about is increasing computing power. “A smartphone today has 300 times more computing power than a Cray-2 supercomputer in the mid-’80s, and yet our method of computing has not changed,” she said.
We have more data every day; we need to bring intelligence to where the data is being created. “With today’s technology, data centers will account for 20 percent of the world’s total electricity usage and 5 percent of carbon emissions by 2025. That’s untenable.” Therefore, she said, it was important to do research into new methods, particularly quantum computing and optical computing.
Ant Rowstron, deputy director of Microsoft Research, talked about research into optical computing at the company’s research center in Cambridge. He said scientists were trying to build software for the cloud and were frustrated by hardware limits. He talked about how technologies follow an S-curve, and that it is beginning to look like we may be near the end of that curve in several areas, including Moore’s Law for processors and the progress we’ve seen in networking and storage. Thus, he said, we have an opportunity to look at new technologies.
In networking, he talked about how today’s top-of-rack switches convert data from electrons to photons (to send over a fiber-optic link) while the switch at the other end converts it back. This, he said, is not very energy efficient and also introduces latency. In optical networking, he noted, a prism can separate light into its component wavelengths, and an optical chip can switch between various wavelengths in a nanosecond. He showed off such an optical chip.
Rowstron also talked about optical storage, showing a piece of glass that encoded data, much as Nadella showed one that held Superman The Movie in his keynote. He explained that a femtosecond laser could be focused inside a piece of glass, creating a voxel that could be interpreted as storage. He showed what this looked like under a microscope, and then showed how durable it was by boiling the glass on stage, and by subjecting it to a degausser. He then showed it again under the microscope, and the data was still there.
In computing, he said researchers were just beginning to reimagine computation with optical components. For instance, he showed a lens that performs a Fourier transform (a mathematical operation that decomposes a signal into its component frequencies) at the speed of light. He acknowledged that practical work in this area was still “some years away.”
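To see what that lens is doing, here is the same operation in software: a 2-D Fourier transform of a striped test pattern, which a converging lens produces optically at its focal plane in one step. This is only an illustrative analogy (using NumPy’s FFT), not a model of Microsoft’s optical hardware:

```python
import numpy as np

# Input "image": a vertical grating (cosine stripes) on a 64x64 field.
x = np.arange(64)
field = np.cos(2 * np.pi * 8 * x / 64)[None, :] * np.ones((64, 1))

# The 2-D Fourier transform -- computed step by step here, but performed
# instantaneously by a lens acting on an incident light field.
spectrum = np.fft.fftshift(np.fft.fft2(field))
power = np.abs(spectrum) ** 2

# The grating concentrates all its energy at two symmetric spatial
# frequencies, appearing as two bright spots offset +/-8 from center.
peaks = np.argwhere(power > power.max() * 0.5)
print(peaks)  # [[32 24] [32 40]]
```

The appeal of the optical version is that the transform costs essentially no time or energy, whereas the digital FFT above scales as O(N log N) operations per image.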
The third area that Azizirad discussed was “machine-human interaction.” She said that natural user interfaces such as eye-tracking, voice, and gesture tracking were now coming to market, and how the firm’s HoloLens 2 can adapt to your hand.
She said these kinds of technologies are particularly important to the 1 billion people with disabilities, citing the potential of things such as autonomous wheelchairs and 3D audio interfaces.
“The future will be about context-driven interactions,” she said, with systems that learn from our intentions and can free us from our screens.
Asta Roseway, principal research designer, said that today, “people are glued to their phones whether to play games, chat, or just by habit. People feel like they might be missing something but ironically, they could be missing the very magic that life provides simply by being unaware of it.”
“What if we could do everything we do today without the need of our screens, what if we could augment our senses to hear and see beyond the physical, how might that change our relationship with the world around us?,” Roseway asked. “We call this the next renaissance.”
She started by describing Soundscape, a technology that uses 3D audio to let people hear their way through new places by the use of different sounds. “The quality of our experience is enhanced because we are re-engaged with the world around us.”
She mentioned Project Eclipse, a network of low-cost air-quality sensors that uses 5G to send data to and retrieve it from Azure. It then plays different chimes to indicate good and bad air quality. A variation called Project Brookdale uses a connected scarf whose colors change in real time to show air quality.
Looking even further out, she described Project Florence, which “explores a potential future where humans and plants can converse.”
She said this uses natural language processing, text translation, and sensing technologies to convert our language into a light spectrum that plants can respond to electrochemically. She said this started out as an art-and-science project, but may have applications in “augmented agriculture.”
Finally, she described Ada, which projects a series of colors and light patterns to explore the concept of “living architecture.”
Most of these projects seem intriguing but not very practical, and Roseway acknowledged that the only one close to a real product is Soundscape. But, “they all strive to bring forth a future that is pushing us beyond our screens and back into the world, so that we can reconnect in ways that are more meaningful and profound.”
The final major area Azizirad discussed was biocomputing, but she started that section of the keynote by talking about “responsible innovation,” touching on Microsoft’s work in explainable AI, homomorphic encryption and differential privacy. She said it was important for innovation to be built “on a foundation of trust.”
“Societal change and taking on the toughest challenges like biodiversity and climate change and farming requires an even greater scale of innovative ideas,” she said. One example was taking algorithms used to identify constellations and applying them to identify whale sharks by their spots, then using the technology to track animal populations and thus better understand how biological systems work.
Microsoft senior scientist Sara-Jane Dunn explained how the software revolution was built on encoding ones and zeros, but the new “technological revolution” will instead be about encoding As, Gs, Cs, and Ts: the building blocks of DNA.
“This new revolution in programming biology is going to empower the world’s scientists to fight diseases that currently evade our defenses, to program crops to produce more food to feed a hungry planet, and to develop new strategies to power our world in sustainable climate-friendly ways,” Dunn said, “and it might be possible sooner than you think.”
She said the biotech industry already has many of the tools needed to program biology, such as CRISPR, a technique used to precisely edit genes. She said researchers can even build functioning synthetic circuits out of DNA.
“Programming biology is not yet as simple as programming your computer,” Dunn said, “but cells are essentially biological computers.”
She talked about how a plant doesn’t have a brain, so individual cells have to take in information from the environment and make decisions, such as whether to grow. Each cell must somehow run a program that responds to signals from outside, yet the cells need to work together in a distributed way so the plant as a whole can flourish. If we could read the programs running inside cells, we could better understand how they work, and debug them when things go wrong.
Her team, known as Station B, sits at the intersection of biology and computer science, and is working on developing a platform to allow scientists to program biology. “We want to make it as simple and robust as we’ve made programming electrons on silicon,” she said.
In transistor-based circuits, she said, we have switches and resistors, while in a biological circuit, we have receiver and signal proteins, with the logic encoded in DNA. Her team has developed a programming language to design such genetic circuits.
For instance, she said, you can encode a fluorescent response to an input signal. You can also compile and simulate these biological programs, and share the results on a knowledge base equivalent to Github.
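The fluorescent-reporter example can be sketched as a small simulation. This is not Microsoft’s actual circuit-design language, just a generic Hill-activation model of the kind such tools simulate, with made-up parameter values:

```python
# Minimal sketch of simulating a genetic circuit in which an input
# signal induces a fluorescent reporter protein. Hypothetical
# parameters; simple Euler integration of a standard activation model.

def simulate(signal, hours=10.0, dt=0.01,
             k_max=100.0,   # maximal expression rate (arbitrary units/hour)
             K=1.0,         # signal level giving half-maximal activation
             n=2.0,         # Hill coefficient (binding cooperativity)
             gamma=1.0):    # reporter degradation/dilution rate (1/hour)
    """Return the fluorescence level after `hours` of induction."""
    f = 0.0  # reporter (fluorescence) concentration
    for _ in range(int(hours / dt)):
        production = k_max * signal**n / (K**n + signal**n)
        f += dt * (production - gamma * f)
    return f

# Without the signal the cells stay dark; with it, they glow.
print(simulate(signal=0.0))  # 0.0
print(simulate(signal=5.0))  # settles near the production/degradation balance
```

A design tool would fit parameters like `k_max` and `gamma` to lab measurements (as Dunn describes below with robot-run experiments and machine learning), then reuse them to predict how new circuit designs will behave.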
Her team has already created such a knowledge base, as well as an inventory of genes and circuits built in the lab. Then you have to create the specific design you want. Unlike a CPU or FPGA that can be reused, creating a biological circuit is more like creating a PCB or an ASIC: a design for a specific use case, she said.
“We need to stitch together specific pieces of DNA and we use robots to do just that,” Dunn said, noting that each circuit requires the right combination of DNA pieces. The robots assemble them, then perform automated experiments with different chemical conditions. They run multiple tests in parallel, collect the data, and then use machine learning to create parameters for models that help with future design decisions.
She invited attendees to use an app to see what researchers see when they look at a DNA circuit that causes the cells to fluoresce in response to an input signal. At first, you can see colonies of bacteria that glow, but as you move in (with a microscope), you can see what individual cells are doing.
Dunn acknowledged that there was “still a lot of work ahead of us,” but echoed Azizirad in saying Microsoft is working with technology, commercial, and academic partners, and that every organization involved is committed to putting the necessary safeguards in place and asking the difficult questions upfront.
With these considerations in mind, she said, “Once we understand biological computation sufficiently, and once we’ve refined the tools to design, build, and test biological programs, then everything I spoke of earlier will be possible and more…rebuilding organs for transplantation; producing self-organizing, self-repairing materials; and fertilizers and chemicals that allow us to leave fossil fuels in the ground. So this can completely transform medicine, agriculture, and energy.”
Dunn concluded, “If we realize it, and we do need to realize it, its impact will be so enormous that it will make the first software revolution pale in comparison.”
Azizirad returned to summarize the keynote, which she said aimed to show, “alternative views and fresh and different perspectives around the challenges that we all jointly face.” She said there was a new showcase of these ideas online at Microsoft.com/innovation.
This is a syndicated post.