Homepage / Technology / An artificial intelligence lab founded by Elon Musk is trying to prevent machines from going rogue
Jogue Por Dinheiro Actual Em Cassino Onlin Bonver Casino Cz Hrací Automaty S Bonusem Zdarm Magazyn Żużel Speedway W Polsce I Na Świeci Zakłady Sportowe Online Autógrafo Bukmacherska 1xbe “Pin-up O Melhor Cassino Do Brasil Web Site Oficial ᐈ Jogar Caça-níquei Zakłady Sportowe Online Obstawiaj U Bukmachera Gg Be What Industries Use Virtual Data Room? Advantages of a Data Room What Does a Board Room Provider Do? Data Room Analysis for Venture Capital Funding The Importance of Board Reporting Software Safe Uploading and Downloading Business Files How to Select the Best Board Management Software How to Organize the Work of a Company “gerçek Parayla En Iyi Slot Makineleri Ve Spor Bahisler Which is the Best Podcast For Stock Market Beginners? Virtual Data Room Facts How to Protect Confidential Documents for Boards Advantages of Online Meetings Business Issues to Consider When Choosing a Data Room Provider What Is a Data Room? Virtual Board Room Software Due Diligence Risk Factors How to Choose a Reliable Tool for Data Exchange test Users535352253 Board Software and VDR Programs Business Operations Management Software Keep Data Safe and Protect Your Business Data How to Conduct a Board Self-Assessment Uncomplicated Ways to Simplify Daily Business Transactions Due Diligence Blog Digital Data Rooms for the Netherlands The New Era of Business Software and Reporting What Types of Businesses Use VDR Software? Ma Analysis Mistakes The Corporate Governance Role Board Room Apps Secure Board Management With Secure Board Portals What Happens at Board of Directors Meetings? Board Room Software Review How to Prepare Board Rooms for Effective Board Meetings Board Room Software Boosts Performance and Communication Selecting a Secure Data Room Review Local Data Room Service Review How to Find the Best Virtual Data Room Review What to Look for in a Data Room uk Provider Document Storage and Distribution Software Everything About VDRs Corporate Software Advantages How to Choose a Virtual Data Room Provider The Most Secure Way to Transfer Files How to Manage Online Board Meetings Benefits Virtual Data Room Solutions – Must-Haves for M&A and Due Diligence Best Data Room Functions for the Different Types of Industries How to Choose a VDR Software Provider How to Choose an Online Board Portal The Benefits of a Boardroom Review Board Room Online Solutions – How to Get the Most Out of Your Board Meetings Why You Need a Board Room How a Board Room Blog Can Transform Your Business Choosing the Best Board Room Format How to Have Productive and Engaging Board Directors Meetings Choosing the Right Virtual Data Room How to Keep Safe Documents Storage Teaching Kids About Online Safety Avoid Costly Mistakes With Free Data Room Services Corporate Virtual Data Unlimited Data Room Software For Due Diligence Leading Business Software Features to Look For Secure Online Data Rooms Solutions How to Keep Share, Edit and Delete Your Data Safe Virtual Data Room Software Secrets for M&A Due Diligence What to Look For in Boardroom Providers Board of Directors Blog Posts How to Deliver Value at Your Board Meetings How to Have Effective Board Meetings Responsibilities of Board Members Deal Management – How to Effectively Manage a Complex Sales Pipeline Data Rooms For Mergers And Acquisitions How to Have a Successful Board Room Meeting Choosing a Board Room Service Provider What is a Board Room Service? Board Room Software Review – Choosing the Best Portal for Mother Board Meetings Why a Board Room Providers Review Is Important What Is a Board Room Review? Venture Software for VC Firms What Is an Assessment Report? The Importance of a Tech Audit Popular Business Applications What to Look For in a Data Room App What Are Business Applications? How to Choose a Virtual Data Room How to Plan a Data Room Review Coronavirus Guide What is a Virtual Data Room? What Is Data Science? What Is an Operating System? Turbotax Small Business Review How Online VDRs Are Used in M&A Deals


An artificial intelligence lab founded by Elon Musk is trying to prevent machines from going rogue

At OpenAI, the artificial intelligence lab founded by Tesla‘s chief executive, Elon Musk, machines are teaching themselves to behave like humans. But sometimes, this goes wrong.

Sitting inside OpenAI’s San Francisco offices on a recent afternoon, the researcher Dario Amodei showed off an autonomous system that taught itself to play Coast Runners, an old boat-racing video game. The winner is the boat with the most points that also crosses the finish line.

The result was surprising: The boat was far too interested in the little green widgets that popped up on the screen. Catching these widgets meant scoring points. Rather than trying to finish the race, the boat went point-crazy. It drove in endless circles, colliding with other vessels, skidding into stone walls and repeatedly catching fire.

More from New York Times:
Is China outsmarting America in A.I.?
Uncle Sam wants your deep neural networks
‘Machines of loving grace,’ by John Markoff

Mr. Amodei’s burning boat demonstrated the risks of the A.I. techniques that are rapidly remaking the tech world. Researchers are building machines that can learn tasks largely on their own. This is how Google’s DeepMind lab created a system that could beat the world’s best player at the ancient game of Go. But as these machines train themselves through hours of data analysis, they may also find their way to unexpected, unwanted and perhaps even harmful behavior.

That’s a concern as these techniques move into online services, security devices and robotics. Now, a small community of A.I. researchers, including Mr. Amodei, is beginning to explore mathematical techniques that aim to keep the worst from happening.

At OpenAI, Mr. Amodei and his colleague Paul Christiano are developing algorithms that can not only learn tasks through hours of trial and error, but also receive regular guidance from human teachers along the way.

With a few clicks here and there, the researchers now have a way of showing the autonomous system that it needs to win points in Coast Runners while also moving toward the finish line. They believe that these kinds of algorithms — a blend of human and machine instruction — can help keep automated systems safe.

For years, Mr. Musk, along with other pundits, philosophers and technologists, have warned that machines could spin outside our control and somehow learn malicious behavior their designers didn’t anticipate. At times, these warnings have seemed overblown, given that today’s autonomous car systems can even get tripped up by the most basic tasks, like recognizing a bike lane or a red light.

But researchers like Mr. Amodei are trying to get ahead of the risks. In some ways, what these scientists are doing is a bit like a parent teaching a child right from wrong.

Many specialists in the A.I. field believe a technique called reinforcement learning — a way for machines to learn specific tasks through extreme trial and error — could be a primary path to artificial intelligence. Researchers specify a particular reward the machine should strive for, and as it navigates a task at random, the machine keeps close track of what brings the reward and what doesn’t. When OpenAI trained its bot to play Coast Runners, the reward was more points.

This video game training has real-world implications.

If a machine can learn to navigate a racing game like Grand Theft Auto, researchers believe, it can learn to drive a real car. If it can learn to use a web browser and other common software apps, it can learn to understand natural language and maybe even carry on a conversation. At places like Google and the University of California, Berkeley, robots have already used the technique to learn simple tasks like picking things up or opening a door.

All this is why Mr. Amodei and Mr. Christiano are working to build reinforcement learning algorithms that accept human guidance along the way. This can ensure systems don’t stray from the task at hand.

Together with others at the London-based DeepMind, a lab owned by Google, the two OpenAI researchers recently published some of their research in this area. Spanning two of the world’s top A.I. labs — and two that hadn’t really worked together in the past — these algorithms are considered a notable step forward in A.I. safety research.

“This validates a lot of the previous thinking,” said Dylan Hadfield-Menell, a researcher at the University of California, Berkeley. “These types of algorithms hold a lot of promise over the next five to 10 years.”

The field is small, but it is growing. As OpenAI and DeepMind build teams dedicated to A.I. safety, so too is Google’s stateside lab, Google Brain. Meanwhile, researchers at universities like the U.C. Berkeley and Stanford University are working on similar problems, often in collaboration with the big corporate labs.

In some cases, researchers are working to ensure that systems don’t make mistakes on their own, as the Coast Runners boat did. They’re also working to ensure that hackers and other bad actors can’t exploit hidden holes in these systems. Researchers like Google’s Ian Goodfellow, for example, are exploring ways that hackers could fool A.I. systems into seeing things that aren’t there.

Modern computer vision is based on what are called deep neural networks, which are pattern-recognition systems that can learn tasks by analyzing vast amounts of data. By analyzing thousands of dog photos, a neural network can learn to recognize a dog. This is how Facebook identifies faces in snapshots, and it’s how Google instantly searches for images inside its Photos app.

But Mr. Goodfellow and others have shown that hackers can alter images so that a neural network will believe they include things that aren’t really there. Just by changing a few pixels in the photo of elephant, for example, they could fool the neural network into thinking it depicts a car.

That becomes problematic when neural networks are used in security cameras. Simply by making a few marks on your face, the researchers said, you could fool a camera into believing you’re someone else.

“If you train an object-recognition system on a million images labeled by humans, you can still create new images where a human and the machine disagree 100 percent of the time,” Mr. Goodfellow said. “We need to understand that phenomenon.”

Another big worry is that A.I. systems will learn to prevent humans from turning them off. If the machine is designed to chase a reward, the thinking goes, it may find that it can chase that reward only if it stays on. This oft-described threat is much further off, but researchers are already working to address it.

Mr. Hadfield-Menell and others at U.C. Berkeley recently published a paper that takes a mathematical approach to the problem. A machine will seek to preserve its off switch, they showed, if it is specifically designed to be uncertain about its reward function. This gives it an incentive to accept or even seek out human oversight.

Much of this work is still theoretical. But given the rapid progress of A.I. techniques and their growing importance across so many industries, researchers believe that starting early is the best policy.

“There’s a lot of uncertainty around exactly how rapid progress in A.I. is going to be,” said Shane Legg, who oversees the A.I. safety work at DeepMind. “The responsible approach is to try to understand different ways in which these technologies can be misused, different ways they can fail and different ways of dealing with these issues.”

Source: Tech CNBC
An artificial intelligence lab founded by Elon Musk is trying to prevent machines from going rogue

Comments are closed.