
OpenAI's ChatGPT - News and Discussion

Hamartia Antidote

Elite Member
Nov 17, 2013
ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot developed by OpenAI and launched on November 30, 2022. Based on a large language model, it enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive prompts and replies, a practice known as prompt engineering, are taken into account as context at each stage of the conversation.[2]
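The "context" mechanism described above can be sketched in a few lines: each successive prompt and reply is appended to a running message list, and the whole list is what the model sees on the next turn. A minimal illustration (the `fake_model` stand-in is hypothetical; a real client would call the OpenAI API at that point instead):

```python
# Sketch of how a chat session accumulates context: every turn's
# prompt and reply are appended to `history`, and the full list is
# passed to the model on each subsequent request.

def chat_turn(history, user_text, model):
    """Send one prompt; return the reply and the updated history."""
    history = history + [{"role": "user", "content": user_text}]
    reply = model(history)  # a real client would call the API here
    history = history + [{"role": "assistant", "content": reply}]
    return reply, history

# Hypothetical stand-in for the model: reports how many prompts it has seen.
def fake_model(messages):
    return f"(reply #{sum(1 for m in messages if m['role'] == 'user')})"

history = [{"role": "system", "content": "You are a helpful assistant."}]
_, history = chat_turn(history, "Summarize transformers.", fake_model)
reply, history = chat_turn(history, "Now make it shorter.", fake_model)
# `history` now holds 5 messages: the system prompt plus two prompt/reply pairs.
```

Because the model itself is stateless, "steering" a conversation works by resending this growing list each turn, which is why long conversations eventually hit the model's context-length limit.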

By January 2023, it had become what was then the fastest-growing consumer software application in history, gaining over 100 million users and contributing to the growth of OpenAI's valuation to $29 billion.[3][4] ChatGPT's release spurred the development of competing products, including Bard, Ernie Bot, LLaMA, Claude, and Grok.[5] Microsoft launched Bing Chat, based on OpenAI's GPT-4. Some observers raised concerns about the potential of ChatGPT and similar programs to displace or atrophy human intelligence, enable plagiarism, or fuel misinformation.[6][7]

ChatGPT is built upon either GPT-3.5 or GPT-4, both members of OpenAI's proprietary series of generative pre-trained transformer (GPT) models based on the transformer architecture developed by Google,[8] and is fine-tuned for conversational applications using a combination of supervised learning and reinforcement learning.[6] ChatGPT was released as a freely available research preview, but due to its popularity, OpenAI now operates the service on a freemium model: users on the free tier can access the GPT-3.5-based version, while the more advanced GPT-4-based version and priority access to newer features are provided to paid subscribers under the commercial name "ChatGPT Plus".

ChatGPT is credited with starting the AI boom, which has led to ongoing rapid and unprecedented development in the field of artificial intelligence.[9]
 

Hamartia Antidote


Sam Altman Says AGI Is Coming Soon and Will Help People Do a Lot More


Sam Altman believes that AGI will be created soon. He talked about his vision of an AGI "world of more."

He predicts AGI will not replace as many jobs as people believe, and that the world will change less than people expect.

It will be an incredible tool for productivity, magnifying what people can do by a factor of two or five.

It will enable some things we could not do at all before.

He sees a new vision of the future that OpenAI didn't really anticipate when it started.

Sam is very thankful the technology [AI] did go in this direction.

An LLM is a tool that magnifies what humans do. It lets people do their jobs better.

AI does parts of jobs. The jobs will change, and of course some jobs will go away entirely.

But human drives are very strong, and so is the way society works.

Sam thinks AGI will get developed in the reasonably close-ish future.

It will change the world much less than we all think.

It will change jobs much less than we all think

You hear a coder say, "OK, I'm two times more productive, or three times more productive."

They say they can never code again without this tool.

We will not run out of demand. People can just do more. Expect more.

 

Hamartia Antidote


OpenAI working on a new artificial intelligence model that will surpass GPT-4

The company also announced that it is establishing a new Safety Commission to evaluate how to manage the risks posed by the new model and future technologies


OpenAI is working on a new artificial intelligence model that will replace the GPT-4 technology powering the popular ChatGPT program, according to a post published on the blog of the San Francisco start-up, considered one of the most advanced artificial intelligence companies in the world. The new model, according to the company, is intended to take the capabilities of chatbots and digital assistants "to the next level." OpenAI also announced that it is establishing a new Safety Commission to evaluate how to manage the risks posed by the new model and future technologies.

"We are proud to build and publish models that drive the industry forward in both capability and safety, and we expect lively discussion at this important time," says company management. GPT-4 technology was presented in March 2023: it allows chatbots and other software to answer questions, write emails, generate documents, and analyze data. An updated version, called GPT-4o, was announced this month but has not yet been fully made available; it also features image generation and more conversational responses to questions and commands.
 

Hamartia Antidote


In a first, OpenAI removes influence operations tied to Russia, China and Israel

Online influence operations based in Russia, China, Iran, and Israel are using artificial intelligence in their efforts to manipulate the public, according to a new report from OpenAI.

Bad actors have used OpenAI’s tools, which include ChatGPT, to generate social media comments in multiple languages, make up names and bios for fake accounts, create cartoons and other images, and debug code.

OpenAI’s report is the first of its kind from the company, which has swiftly become one of the leading players in AI. ChatGPT has gained more than 100 million users since its public launch in November 2022.

But even though AI tools have helped the people behind influence operations produce more content, make fewer errors, and create the appearance of engagement with their posts, OpenAI says the operations it found didn’t gain significant traction with real people or reach large audiences. In some cases, the little authentic engagement their posts got was from users calling them out as fake.

“These operations may be using new technology, but they're still struggling with the old problem of how to get people to fall for it,” said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team.

That echoes Facebook owner Meta’s quarterly threat report published on Wednesday. Meta's report said several of the covert operations it recently took down used AI to generate images, video, and text, but that the use of the cutting-edge technology hasn’t affected the company’s ability to disrupt efforts to manipulate people.

The boom in generative artificial intelligence, which can quickly and easily produce realistic audio, video, images and text, is creating new avenues for fraud, scams and manipulation. In particular, the potential for AI fakes to disrupt elections is fueling fears as billions of people around the world head to the polls this year, including in the U.S., India, and the European Union.

In the past three months, OpenAI banned accounts linked to five covert influence operations, which it defines as attempts "to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them."

That includes two operations well known to social media companies and researchers: Russia’s Doppelganger and a sprawling Chinese network dubbed Spamouflage.

Doppelganger, which has been linked to the Kremlin by the U.S. Treasury Department, is known for spoofing legitimate news websites to undermine support for Ukraine. Spamouflage operates across a wide range of social media platforms and internet forums, pushing pro-China messages and attacking critics of Beijing. Last year, Facebook owner Meta said Spamouflage is the largest covert influence operation it's ever disrupted and linked it to Chinese law enforcement.

Both Doppelganger and Spamouflage used OpenAI tools to generate comments in multiple languages that were posted across social media sites. The Russian network also used AI to translate articles from Russian into English and French and to turn website articles into Facebook posts.

The Spamouflage accounts used AI to debug code for a website targeting Chinese dissidents, to analyze social media posts, and to research news and current events. Some posts from fake Spamouflage accounts only received replies from other fake accounts in the same network.

Another previously unreported Russian network banned by OpenAI focused its efforts on spamming the messaging app Telegram. It used OpenAI tools to debug code for a program that automatically posted on Telegram, and used AI to generate the comments its accounts posted on the app. Like Doppelganger, the operation's efforts were broadly aimed at undermining support for Ukraine, via posts that weighed in on politics in the U.S. and Moldova.

Another campaign that both OpenAI and Meta said they disrupted in recent months traced back to a political marketing firm in Tel Aviv called Stoic. Fake accounts posed as Jewish students, African-Americans, and concerned citizens. They posted about the war in Gaza, praised Israel’s military, and criticized college antisemitism and the U.N. relief agency for Palestinian refugees in the Gaza Strip, according to Meta. The posts were aimed at audiences in the U.S., Canada, and Israel. Meta banned Stoic from its platforms and sent the company a cease and desist letter.

OpenAI said the Israeli operation used AI to generate and edit articles and comments posted across Instagram, Facebook, and X, as well as to create fictitious personas and bios for fake accounts. It also found some activity from the network targeting elections in India.

None of the operations OpenAI disrupted only used AI-generated content. “This wasn't a case of giving up on human generation and shifting to AI, but of mixing the two,” Nimmo said.

He said that while AI does offer threat actors some benefits, including boosting the volume of what they can produce and improving translations across languages, it doesn’t help them overcome the main challenge of distribution.

“You can generate the content, but if you don't have the distribution systems to land it in front of people in a way that seems credible, then you're going to struggle getting it across,” Nimmo said. “And really what we're seeing here is that dynamic playing out.”

But companies like OpenAI must stay vigilant, he added. "This is not the time for complacency. History shows that influence operations which spent years failing to get anywhere can suddenly break out if nobody's looking for them."
 

Indos

INT'L MOD
Jul 25, 2013
Looks like the site is currently under maintenance. I wonder whether tomorrow it will be converted to the 4.0 version.
 
