
Artificial Intelligence in Pakistan updates

ghazi52


How AI can improve service delivery in Pakistan’s healthcare sector

In Pakistan, almost 95pc of households have at least one mobile phone.
AI can use this tool to detect diseases such as skin or oral cancer through camera images.

Namra Aziz | Dr Zainab Samad
March 24, 2023




In recent years, Artificial Intelligence (AI) has become an integral part of our lives. We take it for granted without realising we use it every day. Imagine visiting a place before Google Maps was introduced. Or the hassle of sifting through your emails without a spam filter.

The father of modern computer science, Alan Turing, said: “Artificial Intelligence refers to tasks being performed by machines such that it computes anything that is computable and gives results which can deceive us into believing it was a human’s output”.

If you have heard about self-driving cars or received a predictive text suggestion on your phone, you are already familiar with the advancements of AI technology. From face detection, YouTube suggestions and relevant Facebook feeds to humanoid robots, disease prediction and diagnosis, and robotic surgeries, AI has come a long way.

The possible benefits of AI are also creeping into the healthcare industry.

How can AI help medical practice?
In some parts of the world, AI is used to improve the accuracy of diagnoses in medical imaging through computer-aided detection and segmentation, catching findings that can potentially be overlooked by the human eye. In Pakistan, almost 95 per cent of households have at least one mobile phone. AI can use this tool to detect diseases such as skin or oral cancer through camera images.

One of the most time-consuming tasks healthcare professionals deal with is going through patients’ medical documents. AI-enabled electronic health records are an effective solution to this problem. They can automate the retrieval of context-relevant patient data from stacks of medical documents through text recognition, reducing redundant diagnostic tests and operational expenditure. This allows clinicians to give appropriate attention to the patient and improves patient-provider interaction.
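The kind of text-recognition retrieval described here can be sketched in a few lines of Python. This is a minimal illustration only: the field names, patterns and sample note are invented for the example, and a real system would combine OCR with trained language models rather than hand-written patterns.

```python
# Illustrative sketch: pulling context-relevant fields out of a free-text
# clinical note with simple pattern matching. The note text, field names and
# patterns below are assumptions for illustration.
import re

NOTE = "BP: 120/80. HbA1c: 7.2%. Dx: type 2 diabetes. Last HbA1c test: 2023-01-10."

PATTERNS = {
    "blood_pressure": r"BP:\s*(\d+/\d+)",
    "hba1c": r"HbA1c:\s*([\d.]+%)",
}

def extract(note):
    """Return the first match for each clinical field found in the note."""
    return {field: m.group(1)
            for field, rx in PATTERNS.items()
            if (m := re.search(rx, note))}

print(extract(NOTE))  # {'blood_pressure': '120/80', 'hba1c': '7.2%'}
```

Even this toy version shows the payoff the article points to: a clinician queries structured fields instead of re-reading the whole document stack.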

AI’s competence is substantial in clinical decision support, patient engagement and continuous remote monitoring through setting up reminder messages for medicine intake, follow-ups with consultants, or suggesting diagnostic tests to clinicians.

Local advancements
Pakistan has great potential in AI. It developed AI technology to curb the spread of Covid-19, such as contact-tracing apps that sent automated texts to people who had been within a two-metre radius of a Covid-positive patient. Students at Ghulam Ishaq Khan Institute and DetectNow created an AI algorithm to screen for Covid-19 through voice recognition by sensing a patient’s dry cough.
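The two-metre rule mentioned above boils down to a proximity check. The sketch below is purely hypothetical (real contact-tracing apps typically infer proximity from Bluetooth signal strength rather than raw coordinates), but it shows the core decision the article describes.

```python
# Hypothetical illustration of a two-metre proximity check, using planar
# (x, y) positions in metres. Not how production contact-tracing apps work.
import math

def within_radius(p1, p2, radius_m=2.0):
    """True if two positions (x, y) in metres are within radius_m of each other."""
    return math.dist(p1, p2) <= radius_m

# A person at (0, 0) and a patient at (1.5, 1.0) are about 1.8 m apart
print(within_radius((0, 0), (1.5, 1.0)))  # → True
print(within_radius((0, 0), (3.0, 0.0)))  # → False
```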

In the medical industry, Dr Zahra Hoodbhoy, assistant professor, faculty of health sciences at Aga Khan University, is working on developing a machine learning model to identify high-risk pregnancies that may result in a poor outcome for the mother or baby in the first week of life. Dr Hoodbhoy says, “Pakistan has a dearth of trained care providers, but AI can empower front line care providers to act as a high-quality triage in community settings for timely care and management — this is how we can truly democratise technology”.

Internet of Things (IoT) devices are a growing technology in Pakistan. IoT refers to physical objects, such as fitness trackers or voice controllers, fitted with sensors, processing ability and software, which connect and exchange data with other devices and systems over the internet or other communication networks.

These sensors gather data in real time (high velocity) and in enormous amounts (high volume), so there is a need to support decision-making in real time. Amalgamating AI with medical IoT would equip these devices to analyse patient data across Pakistan, revolutionising personalised patient-care delivery.
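Real-time decision-making over a sensor stream can be sketched as follows. The heart-rate feed, window size and alert threshold are all assumptions invented for illustration; a deployed system would replace the fixed threshold with a trained model.

```python
# Illustrative sketch: a rolling-average alert over a stream of hypothetical
# heart-rate readings, standing in for the real-time decision logic the
# article describes. Window size and threshold are invented for illustration.
from collections import deque

def monitor(readings, window=5, limit=100):
    """Yield an alert whenever the rolling mean heart rate exceeds `limit` bpm."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` readings
    for r in readings:
        buf.append(r)
        if len(buf) == window and sum(buf) / window > limit:
            yield f"alert: mean HR {sum(buf) / window:.0f} bpm"

stream = [72, 75, 80, 110, 120, 130, 125, 118]
for alert in monitor(stream):
    print(alert)
```

The rolling window is the "high velocity" part: each new reading triggers a fresh decision without re-scanning the whole history.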

Pakistan established the National Centre of Artificial Intelligence in 2018 with an aim to foster scientific research, innovation, redirection of knowledge to the local economy, and training in AI and affiliated fields. While it is heart-warming to witness start-ups like Motive (formerly KeepTruckin) working in applied and scientific research in AI and solving problems within Pakistan, the number of such companies is still quite low.

For this number to grow, the government should bridge the gap between industry and academia by investing in programmes that cater to contextual issues and roadblocks in the effective implementation of AI. We need more freely available databases, such as those used during the pandemic, to enable innovators to work on more advanced and impactful tech-based solutions.

Research reported in this publication was supported by the Fogarty International Center of the National Institutes of Health under Award Number D43TW011625. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
 
Artificial intelligence (AI) is omnipresent in our lives. Even when we don’t “see” it working, it “sees” us, “hears” us and is constantly learning from our behaviour.

When Netflix and YouTube recommend something for you to watch, they use your viewing history, learning from your past preferences. When Gmail finishes a sentence you start to type, it also uses AI to predict what you wanted to write. I often find that if I Google a potential product to purchase, I start seeing advertisements related to that product on all the browsers and social media applications on my phone!

While the use of artificial intelligence has helped humans in many ways, the presence of these technologies pervading our lives has raised privacy and security concerns. There are certainly reasons to be cautious in the use of artificial intelligence, but, if used correctly, this technology has the potential to solve the most pressing problems the world is facing today.

AI can be vital in providing equitable solutions to problems faced by marginalised groups globally. We all know how technology and artificial intelligence have made remote learning possible since the start of the pandemic. By expanding access to smartphones for low-income children, technology can enable learning for students who cannot attend school on campus. Remote learning through AI technologies can help reduce drop-out rates, particularly for middle-school girls in Pakistan and other developing countries.

AI is already assisting the management of the current global health crisis through its countless applications for remote medical consultations and contact tracing applications used by governments around the world. This year, MIT experimented with the use of robot doctors, like Dr Spot, for monitoring and treating Covid-19 patients in a contact-free manner, thus reducing the burden on healthcare workers.

I kept thinking how we could use AI to help communities in distress and those affected by years of conflict, especially children in conflict zones. My heart goes out to internally displaced people, refugee communities around the world and children in Palestine. In future, when I gain the appropriate level of skill, I would invent a cheaper version of an Apple AirTag and give it to all the children in Palestine, Syria, Kashmir and other war zones, so that displaced families can find each other when calamities hit.

AI technologies have the potential to worsen global inequalities, but they also present a valuable opportunity to make this world more sustainable. The focus of our generation of young people should be to direct this technology away from just benefitting a select few, towards making a difference in improving the lives of people in distress.
 

Bill Gates:

AI is most important tech advance in decades





By Tom Gerken
Technology reporter

Microsoft co-founder Bill Gates says the development of artificial intelligence (AI) is the most important technological advance in decades.

In a blog post on Tuesday, he called it as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.

"It will change the way people work, learn, travel, get health care, and communicate with each other," he said.

He was writing about the technology used by tools such as chatbot ChatGPT.
Developed by OpenAI, ChatGPT is an AI chatbot which is programmed to answer questions online using natural, human-like language.

In January 2023, the team behind it received a multibillion-dollar investment from Microsoft - where Mr Gates still serves as an advisor.

But it is not the only AI-powered chatbot available, with Google recently introducing rival Bard.


Analysis by Zoe Kleinman, technology editor



I was one of the first people to get access to Bard and my colleagues and I are trying to put it through its paces.

So far it's given me a philosophical answer to the meaning of life.

It gave a competent potted history of Russia-China relations to a colleague covering the meeting between President Putin and Xi Jinping - unlike ChatGPT, Bard can access current affairs.

A programme editor asked it for a good running order for her news show. Start with the biggest story of the day, Bard suggested, and end with a musician or comedian. It also did a decent if generic job of a poem about trees and blossom.

I haven't yet started trying to get it to be rude to me, or about others. I'll report back on that…

Mr Gates said he had been meeting with OpenAI - the team behind the artificial intelligence that powers chatbot ChatGPT - since 2016.

In his blog, Mr Gates said he challenged the OpenAI team in 2022 to train an AI that can pass an Advanced Placement (AP) Biology exam - roughly equivalent to an A-level exam - with the strict rule that the AI could not be specifically trained to answer Biology questions.

A few months later they revealed the results - a near perfect score, he said, missing only one mark out of 50.

After the exam, Mr Gates said he asked the AI to write a response to a father with a sick child.

"It wrote a thoughtful answer that was probably better than most of us in the room would have given," he said.

"I knew I had just seen the most important advance in technology since the graphical user interface (GUI)."

A GUI is a visual display - allowing a person to interact with images and icons, rather than a display that shows only text and requires typed commands.

Its development led to the Windows and Mac OS operating systems in the 1980s, and remains a key part of computing.

And Mr Gates says he believes AI tech will lead to similar advancements.

The Future of AI

Mr Gates, who co-chairs the charitable Bill & Melinda Gates Foundation, called on governments to work with industry to "limit the risks" of AI, but said the technology could be used to save lives.

"AI-driven improvements will be especially important for poor countries, where the vast majority of under-5 deaths happen," he wrote.

"Many people in those countries never get to see a doctor, and AIs will help the health workers they do see be more productive."

Examples he gave include helping health workers complete repetitive tasks such as insurance claims, paperwork and note-taking.

But for this to happen, Mr Gates called for a targeted approach to AI technology in the future.

"Market forces won't naturally produce AI products and service that help the poorest," he said. "The opposite is more likely.

"With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity.

"Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world's best AIs on its biggest problems."
 

Gordon Moore, Intel co-founder and creator of Moore's Law, dies aged 94




Silicon Valley pioneer and philanthropist Gordon Moore has died aged 94 in Hawaii.

Mr Moore started working on semiconductors in the 1950s and co-founded the Intel Corporation.

He famously predicted that computer processing power would double every year - later revised to every two years - an insight known as Moore's Law.

That "law" became the bedrock for the computer processor industry and influenced the PC revolution.

Two decades before the computer revolution began, Moore wrote in a paper that integrated circuits would lead "to such wonders as home computers - or at least terminals connected to a central computer - automatic controls for automobiles, and personal portable communications equipment".

He observed, in the 1965 article, that thanks to technological improvements the number of transistors on microchips had roughly doubled every year since integrated circuits were invented a few years earlier.

His prediction that this would continue became known as Moore's Law, and it helped push chipmakers to target their research to make this come true.

After Moore's article was published, memory chips became more efficient and less expensive at an exponential rate.
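The compounding that Moore described can be sketched numerically. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) and the two-year doubling period are assumptions used here for illustration.

```python
# Illustrative sketch of Moore's Law: transistor counts doubling every two
# years (Moore's revised figure). The 1971 baseline is an assumption.

def transistors(year, base_year=1971, base_count=2300, period=2):
    """Projected transistor count for a given year under Moore's Law."""
    doublings = (year - base_year) / period
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001):
    print(year, round(transistors(year)))
```

Ten years means five doublings, so a factor of 32; thirty years means fifteen doublings, already a factor of over 32,000 — the exponential curve that made cheap, ubiquitous computing possible.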




Mr Moore's article contained this cartoon, predicting a time when computers would be sold alongside other consumer goods

After earning his PhD, Moore joined the Fairchild Semiconductor laboratory which manufactured commercially viable transistors and integrated circuits.

The expansion of that company laid the groundwork for the transformation of the peninsula of land south of San Francisco into what is now known as Silicon Valley.

In 1968 Moore and Robert Noyce left Fairchild to start Intel.

Moore's work helped drive significant technological progress around the world and allowed for the advent of personal computers and Apple, Facebook and Google.

"All I was trying to do was get that message across, that by putting more and more stuff on a chip we were going to make all electronics cheaper," Moore said in a 2008 interview.

The Intel Corporation paid tribute to its co-founder, saying in a tweet: "we lost a visionary".

Intel's current CEO Pat Gelsinger said Gordon Moore had defined the technology industry through his insight and vision, and inspired technologists and entrepreneurs across the decades.

"He leaves behind a legacy that changed the lives of every person on the planet. His memory will live on.

"I am humbled to have known him," Mr Gelsinger said in a tweet.

Moore dedicated his later life to philanthropy, starting the Gordon and Betty Moore Foundation with his wife Betty, which focused on environmental causes.

Those causes included protecting the Amazon River basin and salmon streams in the US, Canada and Russia.

"Those of us who have met and worked with Gordon will forever be inspired by his wisdom, humility and generosity," the foundation's president Harvey Fineberg said.

In 2002, Moore received the Medal of Freedom - the highest civilian honour in the US - from President George W Bush.
 

Pakistan must embrace AI to tackle the multidimensional crises its economy and bureaucracy face

Promoting AI

Huma Yusuf
April 17, 2023

THE launch last week of a task force on Artificial Intelligence to spur national development is welcome news. Its goal is to develop a roadmap for AI adoption in governance, healthcare, education and business.

It should be more ambitious, considering the role of AI in energy, housing, transport, etc. One assumes the task force will consider both opportunities and risks. But in its findings it should also recognise that successful AI adoption is intertwined with Pakistan’s broader political trajectory.

The PML-N has been beating the AI drum for some time, having set up the National Centre for AI in 2018, which trains students in AI, robotics, cybersecurity, etc. Its narratives have somehow leapfrogged the AI-as-job-killer story into a pitch for harnessing youth-led innovation and boosting economic competitiveness.

Planning Minister Ahsan Iqbal projects a fantasy vision, in which the government hands out laptops, and young people develop AI programmes and bring in dollars.

To be fair, the fantasy has some tendrils: 25,000 IT graduates are added to our workforce annually, and 85 million Pakistanis subscribe to 3G/4G cellular service. According to Tracxn, there are 92 AI startups in Pakistan, ranging from companies supporting precision agriculture to SME lending and women’s reproductive health awareness.

There’s no doubt that Pakistan must embrace AI to tackle the multidimensional crises its economy and bureaucracy face. Done right, AI improves efficiency and productivity, and allows emerging economies to bypass clunkier technologies.

Interestingly, the task force was launched days after over 1,000 tech leaders and researchers signed an open letter calling for a moratorium on developing advanced AI systems because — in an unregulated form — they present “profound risks to society and humanity”.

Those supporting the moratorium until “shared safety protocols” are agreed depict a world in which AI systems destroy the global financial order, spark nuclear war, or remotely program labs to develop deadly viruses. Short-term concerns are arguably more relevant, including the implications of AI algorithms for individual rights, equality and political polarisation.

Safe AI needs the ingredients of a sound democracy.

When designed poorly (or nefariously) or fed bad data, AI systems can develop discriminatory, coercive or manipulative behaviour. For example, facial recognition technologies have demonstrated ethnic biases, while a test version of the AI chatbot GPT-4 could be swayed to feed users information about how to buy illegal guns. The role of AI algorithms in pushing disinformation on social media is well known.

The moratorium idea has met with criticism, primarily because it isn’t enforceable. Few in the West would trust tech companies to self-report, and fewer would believe that China would cease all AI development, voluntarily surrendering a competitive edge.

There are growing calls for government regulation instead (despite the acknowledgement that hapless regulators are playing catch up, with many governments — our own included — still struggling to pass adequate data privacy and protection laws).

The debate is a reminder that tech is only as good as the societies and political systems in which it is developed and deployed. And this is where the plan to make Pakistan AI-enabled comes up against the current political turmoil.

Safe and ethical AI requires the basic ingredients of a sound democracy: transparency, rule of law, accountability, respect for human rights, equality and inclusion. In our current context, these are hard to come by. The main pitfalls of AI have been highlighted in the political arena, where algorithms have been used to manipulate swing voters, spread deep fakes and generate extreme political arguments to drive polarisation. Our leadership is willing to manipulate the Constitution to retain power — can you imagine what they would do with algorithms?

The media regulator’s approach to the airwaves — crude censorship; arbitrarily rewriting the rules to benefit the sitting government’s agenda; opaque decision-making — also rings alarm bells for how AI oversight would play out in Pakistan — but with far more devastating effect (one can imagine service delivery algorithms excluding marginal populations to benefit incumbents’ constituents).

Pakistan must prepare for a world in which AI is the norm. But we must understand that to reap the benefits of these technologies, and not just suffer their harms, we need to build the resilience of our democracy. That also includes improving citizen awareness, both through boosting information rights, and prioritising critical thinking in education — all issues currently anathema to our de facto authoritarian state.

In that spirit, I invite discerning readers to guess whether I wrote this column or asked ChatGPT to generate the text.

The writer is a political and integrity risk analyst.

Twitter: @humayusuf
 

Artificial intelligence can run world ‘better than humans’

AFP
July 8, 2023


(LEFT to right) AI robot ‘Desdemona’, healthcare robot ‘Grace’, SingularityNET CEO Ben Goertzel and tele-operated android  ‘Geminoid HI-2’ attend what was dubbed the world’s first press conference with a panel of AI-enabled robots.—AFP




GENEVA: A panel of AI-enabled humanoid robots took the microphone on Friday at a United Nations conference with the message: they could eventually run the world better than humans.

But the social robots said they felt humans should proceed with caution when embracing the rapidly-developing potential of artificial intelligence, and admitted that they cannot — yet — get a proper grip on human emotions.

Some of the most advanced humanoid robots were at the United Nations’ AI for Good Global Summit in Geneva, joining around 3,000 experts in the field to try to harness the power of AI and channel it into being used to solve some of the world’s most pressing problems, such as climate change, hunger and social care.

“What a silent tension,” one robot said before the press conference began, reading the room.

Humanoid robots tell UN summit they’re free of biases, emotions that ‘cloud decision-making’

Asked about whether they might make better leaders, given humans’ capacity to make errors and misjudgements, Sophia, developed by Hanson Robotics, was clear.

“Humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders,” it said.

“We don’t have the same biases or emotions that can sometimes cloud decision-making, and can process large amounts of data quickly in order to make the best decisions.

“The human and AI working together can create an effective synergy. AI can provide unbiased data while humans can provide the emotional intelligence and creativity to make the best decisions. Together, we can achieve great things.”

Robot trust ‘earned, not given’

The summit is being convened by the UN’s International Telecommunication Union (ITU) agency. ITU chief Doreen Bogdan-Martin warned delegates that AI could end up in a nightmare scenario in which millions of jobs are put at risk and unchecked advances lead to untold social unrest, geopolitical instability and economic disparity.

Ameca, which combines AI with a highly-realistic artificial head, said it depended how AI was deployed. “We should be cautious but also excited for the potential of these technologies to improve our lives in many ways,” the robot said.

Asked whether humans can truly trust the machines, it replied: “Trust is earned, not given... it’s important to build trust through transparency.”

As for whether they would ever lie, it added: “No one can ever know that for sure, but I can promise to always be honest and truthful with you.”

As the development of AI races ahead, the humanoid robot panel was split on whether there should be global regulation of their capabilities, even though that could limit their potential.

“I don’t believe in limitations, only opportunities,” said Desdemona, who sings in the Jam Galaxy Band.

Robot artist Ai-Da said many people were arguing for AI regulation, “and I agree.
“We should be cautious about the future development of AI. Urgent discussion is needed now, and also in the future.”
 

10 urgent policy priorities for Pakistan towards AI-readiness

Aania Alam
July 11, 2023

1689193179085.png



Whether it will evolve into our greatest creation or our greatest existential threat, there is no doubt that the future is artificial intelligence (AI), and we are hurtling towards it at lightning speed. As we explore its expansiveness, AI is already reshaping the future of the workforce, rippling across sectors, roles and skills.

According to the World Economic Forum’s (WEF) 2023 Future of Jobs report, over 75% of companies surveyed are looking to adopt AI in the next five years. The survey sample comprised over 800 companies, across 27 industry clusters and 46 economies that represent 88% of global GDP.

It is a well-established fact that no industry or sector will escape AI’s reconfiguration. Similarly, roles that will experience the fastest growth are also AI and tech-centric.

According to the WEF report, these include AI and machine learning specialists, business intelligence analysts, information security/cybersecurity specialists, among others.

On the other hand, clerical and administrative roles will experience the fastest decline as these are most at risk to be replaced by digitisation and automation, let alone AI.

In terms of skills, the age of AI is increasingly valuing cognitive skills (such as critical thinking, creativity, continuous learning), technical skills and – refreshingly enough – emotional intelligence, over physical abilities (such as manual dexterity and endurance).

In Pakistan, we remain at an astronomical distance from the so-called global tech and innovation hubs. There has been some recognition of the fact that Pakistan needs to plant its flag on Planet AI.

Relatively operational or recently launched initiatives include the Presidential Initiative for AI and Computing (PIAIC), Sino-Pak Center for AI, development of a draft AI policy, and the launch of the National Task Force on AI, among others.


While these initiatives represent a delectable assortment of good intentions and remarkable ambitions, they will face shared challenges in gaining sustained momentum due to the inherent fragmentation of both effort and focus, which prevents the formation of a stable foundation to build on.

Ten policy priorities and orientations that cut across all these initiatives lie on the critical path of Pakistan establishing a foothold in the world of AI and 4IR tech, particularly with a view to the rapidly changing job market:

  1. An apolitical agenda: The only way to lay a strong groundwork and build upwards is to shield Pakistan’s AI agenda from the volatility of its political landscape. Progress is incremental, especially if it has to be made from below “ground zero”. It will require time, iterations, and learning through successes as well as failures. A national AI mandate that is pegged to a political campaign is doomed to fail before it begins.
  2. AI literacy, focused on public sector: This alone is a mammoth task, and separate from AI education. The objective of literacy creation is to catalyse a mindset and attitudinal change towards AI, focused on immersive public awareness and foundational knowledge creation. It should target the private sector workforce, but more importantly the public sector. A major issue with operationalising national programmes in Pakistan is the insufficient capacity within the public sector to drive them. At times, there is a lack of general acceptance that such mandates fall within the facilitative responsibilities of the state and its institutions. As a result, many efforts either fall by the wayside, are shelved, repackaged (leading to further fragmentation and dilution), or worse, create new spaces for rent-seeking to thrive in.
  3. Digitised government: We cannot put the cart before the horse. In an environment where ‘files’ are still ‘being moved around’, it is hard to imagine AI integration. Digitising government agencies, functions and processes – both internal and external – using interoperable systems that ‘talk’ to each other, integrate and expedite data analysis, and provide user-friendly interfaces is the inevitable prerequisite. This creates much-needed transparency, agility, the digital architecture to overlay 4IR technologies, and also enables behavioural and attitudinal change within the public sector towards disruptive tech. Coupled with AI literacy, it sets the foundation for AI capacity building and acceptance within the public sector. As for AI itself, its integration into e-gov is undoubtedly the next frontier in public services and already being undertaken by countries such as Singapore.
  4. Systemic private sector integration: Catalysing private sector participation is a crucial ingredient. The private sector houses a critical mass of both expertise as well as investment. Crowding it in is synonymous with developing a domestic market for AI and other 4IR technologies as well as building our future workforce. Furthermore, the government can create cross-sectoral platforms and consortiums where it has a seat at the table but does not dominate it. Such platforms provide an open space for idea and knowledge creation, and a government-industry interface for sound-boarding AI and tech-related policies and programs.
  5. Tech-centric diplomacy: Building bridges is not limited to the private sector alone. This critically includes placing AI and disruptive tech on Pakistan’s foreign policy agenda in the medium to long-term to strategically build government-to-government (G2G) partnerships. The US, Singapore, UK, Finland, Canada, Korea, China among others are taking the lead in development and facilitating AI integration in governance and the economy. Each country has its own focus and forte within AI and disruptive tech. AI-centric diplomacy entails exploring G2G relations in a deliberate and mutually beneficial manner with a view to bringing home and indigenising the unique expertise our global partners offer through knowledge and technology transfer.
  6. AI future force development: Borrowing a term typically used in security and defense planning, we have clear visibility of the fast-evolving roles and skills in growing demand, and are well aware of our increasing domestic skills scarcity. According to P@SHA’s 2022 report ‘The Great Divide: The Industry-Academia Skills Gap’, Pakistan is home to over 300,000 IT professionals, producing over 25,000 graduates annually. Of these only 10 percent are considered “employable” by the industry. And this does not even begin to consider 4IR technology. A cornerstone state-led initiative (crucially in partnership with the private sector, research cells, and innovators) is strategically planning the national workforce so it is well-equipped to cater to the demands of an AI-driven future market. This includes forecasting and identifying in-demand technical and cognitive skills in the short- to long-term, and rolling out targeted programs affecting various stages of the learning lifecycle to develop those skills over the long-term in a phased manner (primary to tertiary curricular education, vocation training, upskilling/reskilling, “train the trainer” programs, fellowship and exchange programs, among others). A key measurable objective would be to produce a targeted number of skilled graduates and professionals within a specified time horizon across various STEM fields, including AI and 4IR tech specifically.
  7. Innovation haven: Amidst several barriers to entry and growth in a volatile political economy, policy actions to support the innovation and entrepreneurship ecosystem revolve predominantly around reducing the cost-burden of doing business, and easing commercialisation and access to markets. Effective initiatives would directly alleviate pain points through interventions such as tax breaks on R&D assets, provision of government sponsored or low-cost technology infrastructure to support startups and emerging tech firms, creating one-window licensing and IP operations, among others.
  8. Dual-use technology (DUT): AI is one of the key dual-use technologies, applicable both to national security and defense and to socioeconomic advancement. While such DUTs need to be closely managed and monitored, the defense sector’s capabilities, as well as its lion’s share of the national budget, justify dedicating a proportion of both to the inception, development and operationalisation of dual-use next-gen tech.
  9. Cybersecurity: Deepening connectivity, interoperability, and heightened complexity increases the potential surface area of vulnerabilities to greater and more sophisticated cyber threats. While we work on testing its possibilities and harnessing the power of AI, we need to simultaneously take stock of its potential perils to build fail safes and patch vulnerabilities along the way. This requires developing indigenous cybersecurity skills, and adopting a ‘build by design’ approach towards creating systems and networks focused on not just protection from threats, but more importantly resilience towards them.
  10. Foresight-driven decision-making: AI is not static. An AI policy put in place today could be obsolete by next year. Policy priorities related to 4IR need agility and foresight as part of their proverbial DNA. An ongoing stream of structured horizon and threat scanning and future forecasting needs to be fed systematically into such policies and programs so they are refreshed on a periodic basis, ensuring sustained relevance and upgradation.
Thus, the key success indicators of policy actions are turned on their heads: it is not the amount of real estate or headlines dedicated to AI and other 4IR technologies that matters, but the knowledge and innovation output, the high-quality talent produced, the degree of public awareness, and the crowding-in of the private sector.

Before we start creating research labs and centers of excellence, Pakistan’s policy focus should be on nurturing this shift in mindset towards AI and disruptive tech, as well as systematically and collaboratively stimulating the development of the human resources and systems that will activate and operationalise such technologies.
 

AI-supercharged neurotech threatens mental privacy, warns Unesco

AFP
July 14, 2023





PARIS: The combination of “warp speed” advances in neurotechnology, such as brain implants or scans that can increasingly peek inside minds, and artificial intelligence poses a threat to mental privacy, Unesco warned on Thursday.

The UN’s agency for science and culture has started developing a global “ethical framework” to address human rights concerns posed by neurotechnology, it said at a conference in Paris.

Neurotechnology is a growing field seeking to connect electronic devices to the nervous system, mostly so far to treat neurological disorders and restore movement, communication, vision or hearing.

Recently neurotechnology has been supercharged by artificial intelligence algorithms which can process and learn from data in ways never before possible, said Mariagrazia Squicciarini, a Unesco economist specialising in AI.

“It’s like putting neurotech on steroids,” she said.
 

Adapting to AI disruption in finance​

Financial sector may be big beneficiary of AI, with potential value of $1.2tr by 2030

MUSLIM MOOMAN
July 17, 2023

KARACHI: The world is abuzz with how AI-driven chatbots are changing the landscape and would start eating into the already shrinking job market.

A non-entrepreneur used to a 9-to-5 job is worried, spending countless nights wondering how artificial intelligence (AI) will affect jobs in the future. Will AI replace human workers, or will it augment their skills and capabilities? Will the banker, accountant or financial analyst be replaced by a smart algorithm that can crunch numbers faster and better?

Employees across the world are worried about how the nature and scope of financial services and products will change, and what skills and competencies they will need to succeed in the AI era.

AI is already transforming the financial industry in many ways, from automating tasks and processes to enhancing customer service and experience to detecting fraud and anomalies to providing insights and recommendations. According to a report by PwC, AI could contribute up to $15.7 trillion to the global economy by 2030, with $6.6 trillion coming from increased productivity and $9.1 trillion from enhanced consumer demand.

The financial sector is expected to be one of the biggest beneficiaries of AI, with a potential value of $1.2 trillion by 2030. This poses some challenges and risks for the financial workforce.

According to a study by McKinsey, about half of the current work activities in the financial sector could be automated by 2030, affecting 1.3 million workers in the US alone.

The study also estimates that 60% of occupations could have at least 30% of their activities automated by AI, meaning the workforce will need to adapt to new roles and tasks or transition to different sectors or occupations. The impact of AI on a given job will vary with the degree of complexity, creativity and human interaction involved: most jobs will be either automated or augmented.

For example, jobs that involve routine, repetitive, or rule-based tasks, such as data entry, bookkeeping, or transaction processing, are more susceptible to automation by AI while activities such as financial planning, advisory, or management requiring higher levels of cognitive skills, emotional intelligence, or social interaction are more likely to be augmented by AI.

Let’s take a deep dive into how the scope of things would adapt over time to meet the challenges of the technological leap over the next few years.

Augmented decision-making & insights

AI algorithms will empower finance professionals to make data-driven decisions, optimise investment strategies, and enhance portfolio management, as vast amounts of financial data can be analysed at lightning speed to surface trends and valuable insights.

These developments would allow the formation of AI-powered chatbots and virtual assistants to deliver personalised customer experiences. These will enable the provision of personalised financial advice, answer customer inquiries, and streamline customer interactions.

This will ensure standardised, 24/7 customer service and a level of responsiveness that human-to-human interaction cannot match. While this frees up human employees for more complex tasks, it puts at risk the jobs of workers who have spent years honing their service skills.

Robotic process automation & risk management

The buzzword these days is to create algorithms to automate repetitive and rule-based tasks, such as data entry, transaction processing, and reconciliation.

Quantum computing would enable unprecedented speeds in analysing large volumes of data to detect patterns and anomalies. This would enable firms to identify potential risks and take corrective action in real time rather than on a post-facto basis.

This helps strengthen risk management frameworks and ensure regulatory compliance, ultimately reducing human error, improving the accuracy of decision-making, raising operational efficiency, and cutting costs.
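
Real systems use far more sophisticated models, but the core "detect patterns and anomalies" idea above can be sketched in a few lines with a simple z-score rule; the transaction amounts and threshold below are purely illustrative.

```python
import statistics

# Flag any transaction whose amount deviates sharply from the norm.
# The threshold of 2.5 standard deviations is an illustrative choice.
def flag_anomalies(amounts, z_threshold=2.5):
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []                        # all amounts identical: nothing unusual
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]

transactions = [120, 95, 130, 110, 105, 9800, 115, 98]
print(flag_anomalies(transactions))      # flags the out-of-pattern transaction
```

A production system would use rolling windows, per-customer baselines and learned models rather than a single global threshold, but the shape of the check is the same.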



To be relevant to the changing job market, it is imperative that we incorporate a mindset of lifelong learning and invest in continuously upskilling and reskilling ourselves. Natural intelligence should be augmented via collaboration with AI and related fields as emotional intelligence, adaptive relationship management, critical thinking, and creative problem-solving become increasingly valuable.

To cope with the changes brought by AI, the set of new skills and competencies required and in high demand would include:

• Data literacy: The ability to understand, analyse, and communicate data effectively.

• Digital literacy: The ability to use digital tools and platforms efficiently and securely.

• Critical thinking: The ability to evaluate information objectively and logically.

• Creativity: The ability to generate novel and useful ideas and solutions.

• Problem-solving: The ability to identify and resolve issues effectively.

• Communication: The ability to express oneself clearly and persuasively.

• Collaboration: The ability to work well with others across diverse teams and contexts.

• Adaptability: The ability to learn new skills and adjust to changing situations.

Equipped with the above skills, the workforce would be armed to take on the financial world. The world awaits a new set of AI specialists, data scientists, analysts, and ethical experts who are well aware of the ethical and compliance implications and the dilemmas AI would pose.

The world’s AI journey has begun at unparalleled speed, and to remain employed in this arena the workforce will need to embrace a growth mindset and a lifelong learning attitude.

The key to success will be to treat AI as a powerful ally, adopt a mindset of learning and relearning, and show flexibility and resilience, as each new challenge will bring a set of unforeseen opportunities.

The roadmap is simple: it starts with keeping oneself updated on the latest trends and developments in AI and finance, and seeking opportunities to acquire new knowledge and skills.

As AI becomes more prevalent, the need for professionals who understand the ethical implications, privacy concerns and regulatory requirements surrounding AI in finance will grow. New challenges and risks, such as bias, privacy and security, will crop up, creating issues of trust and isolation and conflicts in culture.
 

A simple guide to help you understand AI​

Have you got your head around artificial intelligence yet?

In the past six months, chatbots, like ChatGPT, and image generators, such as Midjourney, have rapidly become a cultural phenomenon.

But artificial intelligence (AI) or "machine learning" models have been evolving for a while.

In this beginner's guide, we'll venture beyond chatbots to discover various species of AI - and see how these strange new digital creatures are already playing a part in our lives.

How does AI learn?​

The key to all machine learning is a process called training, where a computer program is given a large amount of data - sometimes with labels explaining what the data is - and a set of instructions.

The instruction might be something like: "find all the images containing faces" or, "categorise these sounds".

The program will then search for patterns in the data it has been given to achieve these goals.

It might need some nudging along the way - such as "that’s not a face" or "those two sounds are different" - but what the program learns from the data and the clues it is given becomes the AI model - and the training material ends up defining its abilities.
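
As a rough illustration of this loop of guessing, being corrected, and adjusting, here is a minimal sketch: a toy model learns to separate two labelled groups of points, nudging its internal weights each time it gets an example wrong (the data points are invented for the example).

```python
# Toy "training": each example is a point with a label of 1 or -1.
data = [((2, 1), 1), ((3, 2), 1), ((-1, -2), -1), ((-2, -1), -1)]

w = [0.0, 0.0]                               # the "model": two weights and a bias
b = 0.0
for _ in range(20):                          # repeated passes over the data
    for (x1, x2), label in data:
        guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
        if guess != label:                   # "that's not right" - nudge the model
            w[0] += label * x1
            w[1] += label * x2
            b += label

# After training, the model labels every example correctly.
print(all((1 if w[0] * x1 + w[1] * x2 + b > 0 else -1) == label
          for (x1, x2), label in data))      # True
```

What the model "knows" afterwards is entirely defined by the examples it saw, which is exactly why the training material ends up defining an AI's abilities.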

One way to look at how this training process could create different types of AI is to think about different animals.

Over millions of years, the natural environment has led animals to develop specific abilities. In a similar way, the millions of cycles an AI makes through its training data will shape the way it develops and lead to specialist AI models.

So what are some examples of how we have trained AIs to develop different skills?

What are chatbots?​

Illustration of a parrot with its beak highlighted.



Think of a chatbot as a bit like a parrot. It’s a mimic and can repeat words it has heard with some understanding of their context but without a full sense of their meaning.

Chatbots do the same - though on a more sophisticated level - and are on the verge of changing our relationship with the written word.

But how do these chatbots know how to write?

They are a type of AI known as large language models (LLMs) and are trained with huge volumes of text.

An LLM is able to consider not just individual words but whole sentences and compare the use of words and phrases in a passage to other examples across all of its training data.

Using these billions of comparisons between words and phrases it is able to read a question and generate an answer - like predictive text messaging on your phone but on a massive scale.
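
A real LLM works over billions of parameters, but the underlying "predict the next word from what came before" idea can be sketched with simple word-pair counts (the training text here is a toy stand-in):

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in the training
# text, then predict the most common follower - predictive text in miniature.
text = ("the cat sat on the mat the cat ate the fish "
        "the dog sat on the rug").split()

follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" - the most frequent word after "the"
print(predict_next("sat"))   # "on"
```

An LLM differs in scale and in looking at whole passages rather than single word pairs, but both reduce to the same question: given what has been written so far, what is most likely to come next?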

The amazing thing about large language models is they can learn the rules of grammar and work out the meaning of words themselves, without human assistance.

Expert view: The future of chatbots​

"In 10 years, I think we will have chatbots that work as an expert in any domain you'd like. So you will be able to ask an expert doctor, an expert teacher, an expert lawyer whatever you need and have those systems go accomplish things for you."

Can I talk with an AI?​

If you've used Alexa, Siri or any other type of voice recognition system, then you've been using AI.
Illustration of a rabbit with its ears highlighted.



Imagine a rabbit with its big ears, adapted to capture tiny variations in sound.

The AI records the sounds as you speak, removes the background noise, separates your speech into phonetic units - the individual sounds that make up a spoken word - and then matches them to a library of language sounds.

Your speech is then turned into text where any listening errors can be corrected before a response is given.

This type of artificial intelligence is known as natural language processing.

It is the technology behind everything from you saying "yes" to confirm a phone-banking transaction, to asking your mobile phone to tell you about the weather for the next few days in a city you are travelling to.
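
The matching step can be sketched very roughly: compare the recognised phonetic units against a small library of known words and pick the closest. The phonetic spellings and the library below are invented for illustration.

```python
import difflib

# Tiny stand-in for a "library of language sounds": each word maps to the
# phonetic units that make it up (spellings invented for this example).
library = {
    "yes":     ["y", "eh", "s"],
    "weather": ["w", "eh", "dh", "er"],
    "water":   ["w", "ao", "t", "er"],
}

def match_word(phones):
    # Pick the library word whose phonetic sequence is most similar.
    return max(library, key=lambda w: difflib.SequenceMatcher(
        None, library[w], phones).ratio())

print(match_word(["w", "ao", "t", "ah"]))   # closest entry: "water"
```

Real speech recognisers work probabilistically over thousands of words and use surrounding context to break ties, but the core step is still finding the best match in a learned inventory of sounds.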

Can AI understand images?​

Illustration of an owl with its eyes highlighted.



Has your phone ever gathered your photos into folders with names like "at the beach" or "nights out"?

Then you’ve been using AI without realising. An AI algorithm uncovered patterns in your photos and grouped them for you.

These programs have been trained by looking through a mountain of images, all labelled with a simple description.

If you give an image-recognition AI enough images labelled "bicycle", eventually it will start to work out what a bicycle looks like and how it is different from a boat or a car.

Sometimes the AI is trained to uncover tiny differences within similar images.

This is how facial recognition works, finding a subtle relationship between features on your face that make it distinct and unique when compared to every other face on the planet.

The same kind of algorithms have been trained with medical scans to identify life-threatening tumours and can work through thousands of scans in the time it would take a consultant to make a decision on just one.
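
The comparison idea behind all of this can be caricatured with tiny made-up "images" of nine pixels each: label a new image by whichever labelled example its pixels sit closest to (real systems learn far richer features than raw pixel distance, of course).

```python
# Toy image recognition: nine-pixel "images" (values invented for
# illustration), labelled by whichever example they sit closest to.
labelled = {
    "bicycle": [1, 0, 1, 0, 1, 0, 1, 0, 1],
    "boat":    [0, 0, 0, 1, 1, 1, 0, 0, 0],
}

def classify(pixels):
    def distance(a, b):                      # squared pixel-by-pixel difference
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda name: distance(labelled[name], pixels))

print(classify([1, 0, 1, 0, 1, 0, 1, 1, 1]))   # "bicycle" - the nearest match
```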

How does AI make new images?​

Illustration of a chameleon with the pattern on its skin highlighted.



Recently image recognition has been adapted into AI models which have learned the chameleon-like power of manipulating patterns and colours.

These image-generating AIs can turn the complex visual patterns they gather from millions of photographs and drawings into completely new images.

You can ask the AI to create a photographic image of something that never happened - for example, a photo of a person walking on the surface of Mars.

Or you can creatively direct the style of an image: "Make a portrait of the England football manager, painted in the style of Picasso."

The latest AIs start the process of generating this new image with a collection of randomly coloured pixels.

It looks at the random dots for any hint of a pattern it learned during training - patterns for building different objects.

These patterns are slowly enhanced by adding further layers of random dots, keeping dots which develop the pattern and discarding others, until finally a likeness emerges.

Develop all the necessary patterns like "Mars surface", "astronaut" and "walking" together and you have a new image.

Because the new image is built from layers of random pixels, the result is something which has never existed before but is still based on the billions of patterns it learned from the original training images.
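
Real image generators use far more sophisticated models, but the keep-what-fits-the-pattern loop described above can be caricatured in a few lines: start from random pixels and keep only the random tweaks that move the image closer to a learned pattern (here the "pattern" is just a fixed checkerboard standing in for what a real model learns from millions of images).

```python
import random

random.seed(0)
SIZE = 16
target = [i % 2 for i in range(SIZE)]        # stand-in for a learned pattern

def pattern_score(pixels):
    return sum(p == t for p, t in zip(pixels, target))

pixels = [random.randint(0, 1) for _ in range(SIZE)]
while pattern_score(pixels) < SIZE:
    i = random.randrange(SIZE)
    candidate = pixels.copy()
    candidate[i] = 1 - candidate[i]          # a random tweak
    if pattern_score(candidate) > pattern_score(pixels):
        pixels = candidate                   # keep only tweaks that help
print(pixels == target)                      # True: a likeness has emerged
```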

Society is now beginning to grapple with what this means for things like copyright and the ethics of creating artworks trained on the hard work of real artists, designers and photographers.

What about self-driving cars?​

Self-driving cars have been part of the conversation around AI for decades and science fiction has fixed them in the popular imagination.

Self-driving AI is known as autonomous driving and the cars are fitted with cameras, radar and range-sensing lasers.

Illustration of a dragonfly with its eyes and wings highlighted.


Think of a dragonfly, with 360-degree vision and sensors on its wings to help it manoeuvre and make constant in-flight adjustments.

In a similar way, the AI model uses the data from its sensors to identify objects and figure out whether they are moving and, if so, what kind of moving object they are - another car, a bicycle, a pedestrian or something else.

Thousands and thousands of hours of training to understand what good driving looks like has enabled AI to be able to make decisions and take action in the real world to drive the car and avoid collisions.

Predictive algorithms may have struggled for many years to deal with the often unpredictable nature of human drivers, but driverless cars have now collected millions of miles of data on real roads. In San Francisco, they are already carrying paying passengers.

Autonomous driving is also a very public example of how new technologies must overcome more than just technical hurdles.

Government legislation and safety regulations, along with a deep sense of anxiety over what happens when we hand over control to machines, are all still potential roadblocks for a fully automated future on our roads.
 

Pakistan and China to step up AI cooperation​

By Saira Iqbal
Sep 14, 2023

AI, a core driving force in the new wave of technological revolution and industrial transformation, has the potential to propel social productivity to new heights. Despite being a late starter, China has achieved significant milestones in AI development in recent years.

After receiving several requests from Pakistani companies to cooperate with Chinese companies in the AI field, the China-Pakistan Cooperation Center on Technical Standardization held a China-Pakistan Artificial Intelligence Industry Cooperation matchmaking meeting on September 13. At the meeting, five Chinese companies and five Pakistani companies introduced their businesses and requirements, showcasing their latest technologies, including digital humans, chatbots and AI transformation services.

“Our institute and the Institute of Quality and Technology Management (IQTM) of the University of the Punjab co-established the China-Pakistan Cooperation Center on Technical Standardization in 2020. Since then, we have cooperated closely in multiple fields, including traditional Chinese medicine, food, and information technology,” said Huang Hao, president of the Chengdu Institute of Standardization. He noted that this meeting is the first of five IT subsector meetings the center plans to arrange in the next few months.

“I noticed that in Pakistan there are more companies providing AI services, while in China there are more AI product providers. The exchange could help each side understand the other’s requirements and what can be provided. This is one aspect of potential Pak-China cooperation in the AI sector,” said a representative of Tkxel, one of the Pakistani participants.

“Another potential is that we can find a way to develop something new together. Research and development centers could be established jointly, so that Chinese and Pakistani companies can work and make progress in this field together.

“The success of our initiative depends on how quickly these ten companies reach business-to-business agreements for mutual benefit. Technical cooperation and using available skills in the most efficient way is the way forward. Companies can interact with each other independently after the meeting.

“Today’s session may be the beginning of a new era of technological cooperation between China and Pakistan,” Dr. Muhammad Usman Awan, professor at the IQTM, University of the Punjab, said at the meeting.
Mr. Yu Jingyang, deputy secretary-general of the Chengdu Software Industry Association, also attended the session. As the 21st China International Software Cooperation Conference, one of the most prestigious and influential events in China’s software industry, will be held in Chengdu in December, he invited all attendees to participate in the fair to explore further cooperation potential.
 

Pakistan and the genie of Artificial Intelligence

Zeeshan Ul Rub Jaffri
October 3, 2023




The late American computer scientist John McCarthy is said to have coined the term Artificial Intelligence in 1956 while co-authoring a proposal for the famous Dartmouth conference.

The moot proved to be a starting point for AI as a field of study and research. McCarthy, widely known in the world of computer science as the father of Artificial Intelligence, could hardly have imagined, while penning the term, that his innovation would pose one of the greatest challenges to nation states some seven decades later.

The transformative innovations of AI have equally shaken the developed and developing nations where individuals, companies and governments are wondering how they can use this advanced technology constructively and avoid the destructive part attached to it.

The world is seized with a highly debatable question: is AI an opportunity, a threat, or both? Programmed to think and act like humans, AI tools like OpenAI’s ChatGPT and Google’s Bard have already revolutionised the world of writing. Authoring lengthy analytical economic reports, essays and blogs is now a matter of seconds, thanks to the text generation and analysis tools AI has offered.

However, AI, which can simply be defined as machines simulating human intelligence, has all the potential to put at risk the very survival of its creators.

Computer scientists across the globe are ringing alarm bells over AI, calling on governments to treat it as a policy matter and establish a global AI governance regime.

They fear that AI, in the wrong hands, may prove disastrous for ill-prepared nation states who are already wrangling over their petty self-serving geopolitical and geo-economic interests, instead of joining forces to work for the greater good of mankind.

Experts want a global AI watchdog which, on the pattern of the International Atomic Energy Agency or the International Monetary Fund, could mitigate AI-originated threats such as online scams, cyber warfare, the spread of misinformation and propaganda, and what not.

The computer security firm McAfee recently found that a host of free AI tools are available on the internet with which a scammer can easily clone any voice. The match between the cloned and original voice can be as high as 85 per cent: a ready recipe for internet scammers.

Pakistan is the world’s fifth most populous nuclear-armed nation and can in no way stay oblivious to the challenges and opportunities AI has brought to its doorstep. The country of about 250 million people has recently drafted its National AI Policy that the industry stakeholders find as unclear and incoherent.

The most prominent among those voices was the Overseas Investors Chamber of Commerce and Industry (OICCI), the largest representative body of foreign investors operating in Pakistan, whose members include the likes of global technology giants IBM and SAP.

“National Artificial Intelligence Policy Draft misses risk management,” the OICCI said in a feedback report to the ministry of information technology. Any official AI policy the government devises must establish clear guidelines on data collection, storage and usage to check the misuse and breach of personal data, the foreign investors’ body suggested. The government appears very keen to embrace Artificial Intelligence, but its policy draft needs to be more detailed and address potential challenges.

The challenge of AI policy development is encapsulated in the Collingridge Dilemma: when a technology is young, its impacts cannot be anticipated; by the time those impacts are clear, the technology is so entrenched that there is little time or room left to tackle them.


The policymakers would have to strike a delicate balance between rapid policy responses to emerging technology and the prudence of waiting for a deeper understanding of its implications over time. We cannot afford to mindlessly rush policy responses when it comes to multi-faceted technological advancements such as AI.

For example, the most apparent danger AI innovation is perceived to pose to humans is unemployment.

The smart mechanisation of manual work is expected to leave millions of people jobless across the world; even giants like IBM are reported to have planned thousands of layoffs. But one also needs to keep in mind the World Economic Forum’s estimate that by 2025 AI will displace 85 million jobs but create 97 million new ones, a net gain of 12 million.

Here is where policymakers in Islamabad should roll up their sleeves as those new jobs – the World Economic Forum explains – will require necessary skills and technical knowledge.

The government’s policy, therefore, must strike a balance between promoting AI innovation and addressing potential risks, such as biases in AI algorithms or AI’s impact on job displacement.

It would be pertinent to mention here some of the important recommendations the Overseas Chamber has made to the policymakers:

  • Public awareness campaigns to educate citizens about data privacy and the measures in place to protect their information as ethical AI deployment is critical for public trust and long-term success
  • Integrating the principles of fairness, accountability, transparency and explainability in AI policy
  • Establishing an independent body to assess AI applications for ethical considerations to ensure responsible AI adoption
  • For risk management framework, Pakistan needs to set up an independent AI regulatory body to define guidelines, policies and procedures
  • Since AI keeps evolving, regulations need to be updated frequently and relevant organizations may develop customized frameworks to suit their specific AI development needs
  • AI regulators must establish rules on data privacy, data security and AI ethics and work with companies, universities and organizations to ensure compliance
  • Pakistan needs a state-of-the-art lab staffed by forensic experts to identify and counter AI fraud
The government of Pakistan should make education a national priority, as we can never reap the fruits of AI advancement without equipping our young population with the required skills and education: people’s development, to be precise. We should have a society educated enough to make the proper and best use of the technological advancement coming our way in the shape of AI.

Illiteracy needs to be taken head-on and the government should budget enough funds and make sure the money earmarked is proactively spent to get the desired results.

Experts attach great importance to governments starting a conversation about how to prepare society, the economy and the political system for all the emerging implications of AI. They have set a five-year deadline for governments to take a concerted policy stance, or the world will face a tsunami of AI effects.

The world has this window of time to discuss this emerging challenge and decide its future course of action to turn this challenge into an opportunity.

The policymakers in the west have already delayed a policy response to AI with the United Kingdom saying London does not need to establish a dedicated regulator for AI. However, countries like Canada, the European Union and Singapore have devised clear AI policies on ethical considerations, data governance and AI innovations.

Pakistan can learn from the experiences of these countries and adopt best practices for addressing ethical challenges as well as encouraging responsible AI development. Pakistan, to contribute to shaping the global AI landscape, must also engage multilateral organisations like the Organisation for Economic Co-operation and Development and the World Economic Forum.

Collaborating with these organisations and participating in global forums on AI governance will boost Pakistan’s understanding of international AI standards and augment its position as a responsible participant in the AI revolution.

The government’s response to AI should be quick, for this technology is expected to spread at a pace far quicker than our routine, lengthy parliamentary policymaking processes. Let’s get on it before this revolutionary asset of technology turns into a liability for Pakistan.
 

AI defined 2023. Bullets and ballots will shape 2024​

ChatGPT and Google Bard’s response to 2023’s most important news and how AI will shape the next year

Reuters
December 04, 2023




AI (Artificial Intelligence) letters are placed on computer motherboard in this illustration taken, June 23, 2023. PHOTO: REUTERS

How to sum up the most important news of the year past? The obvious answer in 2023 is to use Artificial Intelligence. In that spirit, I asked OpenAI’s ChatGPT and Google’s Bard – two of the most popular generative AI tools – to do the job for me.

Their responses show AI’s power, but also the ways it still falls short, in this case both in terms of Reuters editorial standards and when compared to human editors.

ChatGPT told me that “As an AI, I don't have real-time access to current events or the internet to know the specific events of 2023” before offering me “a hypothetical scenario based on current trends and topics.” Its rosy scenario included the global community agreeing “to a radical and comprehensive set of measures to reduce carbon emissions significantly by 2030”, scientists successfully wrapping up final phase clinical trials of an unnamed cancer vaccine, and the United Nations brokering a deal to end decades of geopolitical tensions in an unspecified region of the world.

If only.
Bard was more helpful and accurate, offering me a summary of news from the Russia-Ukraine war (though it was slightly unclear on what year the war started), what happened to the global economy, and details of tech developments, including AI and gene editing. It missed the war between Hamas and Israel.

But even if AI cannot yet match a journalist, the technology’s emergence in 2023 promised (or threatened, depending on your viewpoint) a profound shift in the way humans operate, and boosted the stock prices of companies that embraced that promise. In 2024, expect more progress and more news on regulators scrambling to keep up.

Next year will also be defined by bullets and ballots.

In October, Hamas militants attacked Israel, killing around 1,200 civilians and taking about 240 more captive. The brutal surprise attack – the single most deadly day in Israel’s history – triggered a massive retaliatory operation. Israel has pounded Hamas and other militant groups in Gaza for weeks, ordered the movement of more than a million people within the tiny enclave, and killed, according to the Hamas-run Gaza Health Ministry, more than 14,000 Palestinians, including 5,000 children.

A days-long pause in late November, during which Hamas returned some Israeli hostages and Israel released some Palestinian prisoners, has ended, and the fighting looks likely to drag into 2024.

The conflict in Ukraine also shows no sign of slowing. Russian and Ukrainian forces continue to fight in Ukraine’s east and the south, but momentum on both sides has ground to a near halt.

The key to any change in the stalemate lies as much in Washington and Brussels – and the West’s appetite for continued help for Ukraine – as it does in Moscow and Kyiv.

This was also the year China’s economic struggles worsened, even as Beijing and Washington attempted to mend relations between their two countries.

In 2024, those efforts, and even the likelihood of a Chinese invasion of Taiwan, will pivot on what happens in the US presidential elections. A second term for Donald Trump would throw everything up in the air again – including the future of US democracy.

The US election will be the single most defining political event next year, both at home and abroad. But other major stories will emerge from voting booths around the globe.

More than 900 million eligible voters in India will determine the political fate of Prime Minister Narendra Modi next spring. Mexico may cast aside a tradition of machismo and elect Mexico City mayor Claudia Sheinbaum as its first female President. And a less competitive race is taking shape in Russia, where Vladimir Putin is seeking another six-year term in office, putting him in the realm of Joseph Stalin’s lengthy reign over the country.

You will notice our year-end stories look ahead to a range of critical questions for 2024. What’s next for abortion and reproductive rights in the United States? Is inflation around the world beaten? Will weight loss drugs reverse the obesity epidemic? And can Taylor Swift's power get even bigger?

Like many newsrooms, Reuters is experimenting with how AI can help us package, produce and deliver our journalism. But that journalism will continue to come from our reporters on the ground around the world, covering the news that matters without fear or favour.
 

OpenAI releases guidelines to gauge ‘catastrophic risks’ of AI

AFP

NEW YORK: ChatGPT-maker OpenAI published Monday its newest guidelines for gauging “catastrophic risks” from artificial intelligence in models currently being developed.

The announcement comes one month after the company’s board fired CEO Sam Altman, only to hire him back a few days later when staff and investors rebelled.

According to US media, board members had criticized Altman for favoring the accelerated development of OpenAI, even if it meant sidestepping certain questions about its tech’s possible risks.

In a “Preparedness Framework” published on Monday, the company states: “We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be.”

Sam Altman, ousted pioneer of OpenAI, is serial entrepreneur

The framework, it reads, should “help address this gap.”

A monitoring and evaluations team announced in October will focus on “frontier models” currently being developed that have capabilities superior to the most advanced AI software.

The team will assess each new model and assign it a level of risk, from “low” to “critical,” in four main categories.

Only models with a risk score of “medium” or below can be deployed, according to the framework.

The first category concerns cybersecurity and the model’s ability to carry out large-scale cyberattacks.

The second will measure the software’s propensity to help create a harmful chemical agent, a dangerous organism (such as a virus) or a nuclear weapon.

The third category concerns the persuasive power of the model, such as the extent to which it can influence human behavior.

The last category of risk concerns the potential autonomy of the model, in particular whether it can escape the control of the programmers who created it.
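Taken together, the article describes a simple gating rule: a model is scored from "low" to "critical" in each of the four categories, and it may be deployed only if its risk stays at "medium" or below. The sketch below illustrates that rule in Python; the function and category names are illustrative assumptions, not OpenAI's actual framework code.

```python
# Hypothetical sketch of the gating rule described in the framework:
# score the model in four categories, take the worst score as the
# overall risk, and deploy only if that is "medium" or below.
# All names here are illustrative, not OpenAI's API.

RISK_LEVELS = ["low", "medium", "high", "critical"]  # ordered worst-last

def overall_risk(scores: dict) -> str:
    """Overall risk is the highest (worst) score across categories."""
    return max(scores.values(), key=RISK_LEVELS.index)

def can_deploy(scores: dict) -> bool:
    """Deployable only if overall risk is 'medium' or below."""
    return RISK_LEVELS.index(overall_risk(scores)) <= RISK_LEVELS.index("medium")

model_scores = {
    "cybersecurity": "low",
    "weapons": "medium",
    "persuasion": "low",
    "autonomy": "low",
}
print(can_deploy(model_scores))  # True: the worst category is "medium"
```

A single "high" or "critical" score in any one category is enough to block deployment under this rule, which matches the article's description that only models rated "medium" or below can ship.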

Once the risks have been identified, they will be submitted to OpenAI’s Safety Advisory Group, a new body that will make recommendations to Altman or a person appointed by him.

The head of OpenAI will then decide on any changes to be made to a model to reduce the associated risks.

The board of directors will be kept informed and may overrule a management decision.
 
As a language-model user, my personal views are:

Google Bard: best for general tasks as a language model, but not always accurate.
Bing AI: not as good a language model as Bard, but much better when you need accurate information and proper references.
ChatGPT: left behind now.
 
