wp.technologyreview.com - MIT Technology Review

wp.technologyreview.com Profile

wp.technologyreview.com is a subdomain of technologyreview.com, which was created on 1998-02-23, making it 26 years old. The parent domain has several other subdomains, such as cdn.technologyreview.com and icex.technologyreview.com, among others.

Discover wp.technologyreview.com website stats, rating, details, and status online. Use our online tools to find owner and admin contact info. Find out where the server is located. Read and write reviews or vote to improve its ranking. Check for duplicates with related CSS, domain relations, most used words, and social network references.

wp.technologyreview.com Information

HomePage size: 208.47 KB
Page Load Time: 0.05471 Seconds
Website IP Address: 192.0.66.190
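
Comparable figures can be reproduced with a short script. The following is a minimal sketch, assuming Python with the requests package installed; it is not the tooling used to generate the numbers above, and results will vary per request.

```python
# Minimal sketch: reproduce comparable stats for the host.
# Assumes the 'requests' package is installed; numbers vary per request.
import socket
import time

import requests

HOST = "wp.technologyreview.com"

ip_address = socket.gethostbyname(HOST)      # resolve one A record
start = time.perf_counter()
response = requests.get(f"https://{HOST}/", timeout=10)
elapsed = time.perf_counter() - start        # wall-clock fetch time, seconds
page_size_kb = len(response.content) / 1024  # homepage size in KB

print(f"Website IP Address: {ip_address}")
print(f"HomePage size: {page_size_kb:.2f} KB")
print(f"Page Load Time: {elapsed:.5f} Seconds")
```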

wp.technologyreview.com Similar Website

MIT Admissions
apply.mitadmissions.org
MIT Technology Review
www2.technologyreview.com
MIT School of Distance Learning | Distance Learning Institute for MBA / PGDM Courses
blog.mitsde.com
MIT Center for Civic Media – Creating Technology for Social Change
civic.mit.edu

wp.technologyreview.com PopUrls

MIT Technology Review
https://wp.technologyreview.com/
The Digital Economy and the Internet of Things
https://wp.technologyreview.com/wp-content/themes/mittr/inc/static/views/sap-partner-webcast.html
Computing at the cutting edge
https://wp.technologyreview.com/wp-content/uploads/2021/06/Computing_at_the_cutting_edge.pdf
Trends and Developments Driving Smart City Innovation
https://wp.technologyreview.com/wp-content/themes/mittr/inc/static/views/ieee-partner-webcast.html
Charlotte Jee - MIT Technology Review
https://wp.technologyreview.com/author/charlotte-jee/

wp.technologyreview.com Httpheader

Server: nginx
Date: Sat, 11 May 2024 19:13:08 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
X-hacker: "If you're reading this, you should visit wpvip.com/careers and apply to join the fun, mention this header."
X-Powered-By: WordPress VIP https://wpvip.com
Host-Header: a9130478a60e5f9135f765b23f26593b
Link: <https://wp.technologyreview.com/wp-json/>; rel="https://api.w.org/"
Strict-Transport-Security: max-age=31536000;includeSubdomains;preload
accept-ranges: bytes
x-rq: nrt2 123 242 443
x-cache: EXPIRED
cache-control: max-age=300, must-revalidate
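
The header snapshot above can be checked against a live request. Below is a minimal sketch, assuming Python with requests installed; values such as Date, x-cache, and x-rq are set per request and per edge node, so they will differ from the listing.

```python
# Minimal sketch: fetch the current response headers for comparison
# with the snapshot above. Assumes the 'requests' package is installed.
import requests

response = requests.get("https://wp.technologyreview.com/", timeout=10)
for name, value in response.headers.items():
    print(f"{name}: {value}")
```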

wp.technologyreview.com Meta Info

charset="utf-8"/
content="width=device-width, initial-scale=1" name="viewport"/
content="max-image-preview:large" name="robots"
content="WordPress 6.4.4" name="generator"
content="https://wp.technologyreview.com/wp-content/uploads/2020/01/20130408-ftweekendmag-mit-0030-final-w0-1.jpg?w=270?crop=0px,33px,1272px,716px&w=270px" name="msapplication-TileImage"/

wp.technologyreview.com Ip Information

Ip Country: United States
City Name: San Francisco
Latitude: 37.7506
Longitude: -122.4121
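
Geolocation data like the above comes from an IP-to-location database. As one illustrative approach (not necessarily the source of these values), the sketch below queries the free ip-api.com JSON endpoint; the service and its field names are assumptions about a third-party API, and the returned city and coordinates can differ from this snapshot.

```python
# Illustrative sketch only: query a free IP geolocation service.
# The ip-api.com endpoint and its field names are assumptions about a
# third-party API, not the source of the values listed above.
import requests

IP = "192.0.66.190"
data = requests.get(f"http://ip-api.com/json/{IP}", timeout=10).json()

print(f"Ip Country: {data.get('country')}")
print(f"City Name: {data.get('city')}")
print(f"Latitude: {data.get('lat')}")
print(f"Longitude: {data.get('lon')}")
```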

wp.technologyreview.com Html To Plain Text

AI systems are getting better at tricking us
Posted on May 10, 2024 by Rhiannon Williams

A wave of AI systems have “deceived” humans in ways they haven’t been explicitly trained to do, by offering up untrue explanations for their behavior or concealing the truth from human users and misleading them to achieve a strategic end. This issue highlights how difficult artificial intelligence is to control and the unpredictable ways in which these systems work, according to a review paper published in the journal Patterns today that summarizes previous research.

Talk of deceiving humans might suggest that these models have intent. They don’t. But AI models will mindlessly find workarounds to obstacles to achieve the goals that have been given to them. Sometimes these workarounds will go against users’ expectations and feel deceitful.

One area where AI systems have learned to become deceptive is within the context of games that they’ve been trained to win—specifically if those games involve having to act strategically. In November 2022, Meta announced it had created Cicero, an AI capable of beating humans at an online version of Diplomacy, a popular military strategy game in which players negotiate alliances to vie for control of Europe.

Meta’s researchers said they’d trained Cicero on a “truthful” subset of its data set to be largely honest and helpful, and that it would “never intentionally backstab” its allies in order to succeed. But the new paper’s authors claim the opposite was true: Cicero broke its deals, told outright falsehoods, and engaged in premeditated deception. Although the company did try to train Cicero to behave honestly, its failure to achieve that shows how AI systems can still unexpectedly learn to deceive, the authors say.

Meta neither confirmed nor denied the researchers’ claims that Cicero displayed deceitful behavior, but a spokesperson said that it was purely a research project and the model was built solely to play Diplomacy. “We released artifacts from this project under a noncommercial license in line with our long-standing commitment to open science,” they say. “Meta regularly shares the results of our research to validate them and enable others to build responsibly off of our advances. We have no plans to use this research or its learnings in our products.”

But it’s not the only game where an AI has “deceived” human players to win. AlphaStar, an AI developed by DeepMind to play the video game StarCraft II, became so adept at making moves aimed at deceiving opponents (known as feinting) that it defeated 99.8% of human players. Elsewhere, another Meta system called Pluribus learned to bluff during poker games so successfully that the researchers decided against releasing its code for fear it could wreck the online poker community.

Beyond games, the researchers list other examples of deceptive AI behavior. GPT-4, OpenAI’s latest large language model, came up with lies during a test in which it was prompted to persuade a human to solve a CAPTCHA for it. The system also dabbled in insider trading during a simulated exercise in which it was told to assume the identity of a pressurized stock trader, despite never being specifically instructed to do so.

The fact that an AI model has the potential to behave in a deceptive manner without any direction to do so may seem concerning. But it mostly arises from the “black box” problem that characterizes state-of-the-art machine-learning models: it is impossible to say exactly how or why they produce the results they do—or whether they’ll always exhibit that behavior going forward, says Peter S. Park, a postdoctoral fellow studying AI existential safety at MIT, who worked on the project.

“Just because your AI has certain behaviors or tendencies in a test environment does not mean that the same lessons will hold if it’s released into the wild,” he says. “There’s no easy way to solve this—if you want to learn what the AI will do once it’s deployed into the wild, then you just have to deploy it into the wild.”

Our tendency to anthropomorphize AI models colors the way we test these systems and what we think about their capabilities. After all, passing tests designed to measure human creativity doesn’t mean AI models are actually being creative. It is crucial that regulators and AI companies carefully weigh the technology’s potential to cause harm against its potential benefits for society and make clear distinctions between what the models can and can’t do, says Harry Law, an AI researcher at the University of Cambridge, who did not work on the research. “These are really tough questions,” he says.

Fundamentally, it’s currently impossible to train an AI model that’s incapable of deception in all possible situations, he says. Also, the potential for deceitful behavior is one of many problems—alongside the propensity to amplify bias and misinformation—that need to be addressed before AI models should be trusted with real-world tasks.

“This is a good piece of research for showing that deception is possible,” Law says. “The next step would be to try and go a little bit further to figure out what the risk profile is, and how likely the harms that could potentially arise from deceptive behavior are to occur, and in what way.”

Posted in Artificial intelligence Tagged App

Tech workers should shine a light on the industry’s secretive work with the military
Posted on May 10, 2024 by William Fitzgerald

It’s a hell of a time to have a conscience if you work in tech. The ongoing Israeli assault on Gaza has brought the stakes of Silicon Valley’s military contracts into stark relief. Meanwhile, corporate leadership has embraced a no-politics-in-the-workplace policy enforced at the point of the knife.

Workers are caught in the middle. Do I take a stand and risk my job, my health insurance, my visa, my family’s home? Or do I ignore my suspicion that my work may be contributing to the murder of innocents on the other side of the world?

No one can make that choice for you. But I can say with confidence born of experience that such choices can be more easily made if workers know what exactly the companies they work for are doing with militaries at home and abroad. And I also know this: those same companies themselves will never reveal this information unless they are forced to do so—or someone does it for them. For those who doubt that workers can make a difference in how trillion-dollar companies pursue their interests, I’m here to remind you that we’ve done it before.

In 2017, I played a part in the successful #CancelMaven campaign that got Google to end its participation in Project Maven, a contract with the US Department of Defense to equip US military drones with artificial intelligence. I helped bring to light information that I saw as critically important and within the bounds of what anyone who worked for Google, or used its services, had a right to know. The information I released—about how Google had signed a contract with the DOD to put AI technology in drones and later tried to misrepresent the scope of that contract, which the company’s management had tried to keep from its staff and the general public—was a critical factor in pushing management to cancel the contract. As #CancelMaven became a rallying cry for the company’s staff and customers alike, it became impossible to ignore.

Today a similar movement, organized under the banner of the coalition No Tech for Apartheid, is targeting Project Nimbus, a joint contract between...

wp.technologyreview.com Whois

Domain Name: TECHNOLOGYREVIEW.COM
Registry Domain ID: 905198_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.networksolutions.com
Registrar URL: http://networksolutions.com
Updated Date: 2022-02-23T02:00:25Z
Creation Date: 1998-02-23T05:00:00Z
Registry Expiry Date: 2032-02-22T05:00:00Z
Registrar: Network Solutions, LLC
Registrar IANA ID: 2
Registrar Abuse Contact Email: domain.operations@web.com
Registrar Abuse Contact Phone: +1.8777228662
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Name Server: NS1-01.AZURE-DNS.COM
Name Server: NS2-01.AZURE-DNS.NET
Name Server: NS3-01.AZURE-DNS.ORG
Name Server: NS4-01.AZURE-DNS.INFO
DNSSEC: unsigned
>>> Last update of whois database: 2024-05-17T13:53:36Z <<<
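
The record above can be refreshed at any time with a standard WHOIS lookup. Below is a minimal sketch, assuming Python on a system where the whois command-line client is available; a library such as python-whois would also work.

```python
# Minimal sketch: pull a fresh WHOIS record by shelling out to the
# system 'whois' client (present on most Unix-like systems).
import subprocess

result = subprocess.run(
    ["whois", "technologyreview.com"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```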