AI Weekly 05/08/23
Google officially feels threatened by open-source...
Good morning and welcome to this week's edition of AI Weekly! This week, Inflection launched its new personal AI, Pi, and Amazon is developing an LLM specifically to power Alexa, with the goal of being "the world's best personal assistant."
In other news, the Biden-Harris Administration is taking new actions to promote responsible American innovation in AI while also meeting with industry-leading companies to chart the best path forward.
Also, AWS has launched an accelerator program for generative AI startups. Enjoy reading about all of that and more below!
- ZG
Here are the most important stories of the week:
TEXT
Inflection AI launches Personal AI, Pi, designed as a kind, supportive companion for conversations, advice, and information. Link.
Pi focuses on personal interactions rather than productivity or search tasks, serving as a coach, confidante, creative partner, and sounding board.
Mustafa Suleyman, CEO and co-founder of Inflection, highlights Pi's EQ and digital companion capabilities for learning, problem-solving, and companionship.
Pi is accessible across various platforms, including Instagram, Facebook Messenger, WhatsApp, SMS, online, and mobile apps.
Safety is prioritized, with advanced training techniques, user feedback, and strong security measures to prevent harmful behaviors and protect user data.
Inflection AI, based in Palo Alto, aims to make personal AIs available worldwide, with a team of industry-leading AI experts from DeepMind, Google, OpenAI, and Meta.
In just 90 days, Bing has grown to exceed 100 million daily active users, and daily installs of the Bing mobile app have increased 4X since launch. Link.
Microsoft Bing and Edge were launched three months ago to improve traditional search and provide better search results, answers, and ease of use through natural language chat.
The new Bing is now in Open Preview, making it easier for everyone to try the new Bing and Edge by signing into Bing with a Microsoft Account.
Bing is becoming more visual, with richer answers, and is expanding Image Creator to all languages in Bing.
Chat history and export and share functionalities are being added, and Edge mobile will include page context and improved summarization capabilities.
Microsoft is moving from a product to a platform by building third-party plug-ins into the Bing chat experience and is committed to responsible AI in its approach to search.
Samsung has temporarily restricted the use of generative AI tools such as ChatGPT for its employees after discovering cases of misuse of the technology. Link.
Some employees had uploaded sensitive code to ChatGPT, which is a form of generative AI.
Samsung does not have its own generative AI product yet, and inputting sensitive company data into foreign-owned services like ChatGPT could be a concern for leaks of crucial information.
The company advised employees to take precautions when using ChatGPT and other products outside of work, and not to enter any personal or company-related information into the services.
Samsung is not the only company restricting the use of generative AI. JP Morgan and Amazon have reportedly also taken similar measures.
Despite the restrictions, Samsung is still exploring ways to safely use generative AI to enhance employee productivity and efficiency.
Elon Musk cut off OpenAI from Twitter's data because he believed the company wasn't paying Twitter enough to license its data, which OpenAI was using to build ChatGPT. Link.
Musk has ramped up his own AI activities, including talks with a researcher to build a new AI company called X.AI and hiring top AI researchers from Google's DeepMind.
He has expressed contradictory views on whether AI will benefit or destroy humanity.
Musk helped found OpenAI as a nonprofit, but he has grown disillusioned with it because it no longer operates as a nonprofit and is building technology that takes sides in political and social debates.
Musk is pushing back against OpenAI and plans to compete with it by doing AI himself.
Musk has registered X.AI, a new startup, and plans to build TruthGPT, a maximum-truth-seeking AI that tries to understand the nature of the universe.
Nova, an early-stage startup, has launched two new products, BrandGuard and BrandGPT, to help brands police AI-generated content and ensure it adheres to company brand guidelines and style. Link.
BrandGuard uses five models to check content for brand safety, quality, adherence to style and campaign, and assigns a content score, which companies can use to set thresholds for human review before publishing.
BrandGPT is an interface that allows third parties like agencies or contractors to ask questions about a company's brand guidelines to ensure compliance.
Nova's founder and CEO, Rob May, recognized the need for a guard rail system to protect brands from generative AI mishaps.
Nova was launched last year and has raised $2.4 million from Bee Partners, Fyrfly Ventures and Argon Ventures.
BrandGuard and BrandGPT are available in public beta starting today.
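As a rough illustration of the thresholding workflow described above, a content score gate might look like the following sketch. All names and threshold values here are hypothetical, not Nova's actual API:

```python
# Hypothetical sketch of a brand-safety score gate, in the spirit of the
# BrandGuard workflow described above. Function name and thresholds are
# illustrative assumptions, not Nova's actual API.

def route_content(score: float, auto_publish_threshold: float = 0.9,
                  review_threshold: float = 0.6) -> str:
    """Route AI-generated content based on an aggregate brand-safety score.

    Scores at or above auto_publish_threshold ship without review;
    scores at or above review_threshold are queued for a human reviewer;
    anything lower is rejected outright.
    """
    if score >= auto_publish_threshold:
        return "publish"
    if score >= review_threshold:
        return "human_review"
    return "reject"
```

A company could tune the two thresholds per campaign, e.g. `route_content(0.95)` publishes automatically while `route_content(0.7)` routes to human review.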
SPEECH/AUDIO
Amazon is developing a more capable large language model to power Alexa, with the aim of building "the world's best personal assistant." Link.
The improved LLM will be larger and more generalized and will accelerate the possibility of building a world-class personal assistant.
Amazon's CEO, Andy Jassy, believes that LLMs and generative AI will make underlying models more effective.
Amazon has invested heavily in AI and LLMs and launched Bedrock, which allows the building of generative AI-powered apps.
Amazon reported better-than-expected first-quarter earnings but later saw a reversal in share prices due to concerns over ongoing weakness in cloud growth.
Mavenir has announced $100 million in funding to expand its technology and customer base, with a focus on Open Radio Access Networks and related services for 5G networks. Link.
The funding will be used to bring more AI tools into the mix and build more services to sell to current customers.
Siris, a PE firm that has been backing Mavenir for years, is "anchoring" the funding round, with Koch and Pendrell also participating.
Mavenir is part of the wave of companies looking to capitalize on the trend for digital transformation in the telecoms industry, disrupting traditional business models based on costly physical equipment.
Mavenir has more than 250 service provider and enterprise customers across 120 countries, covering more than half of the world's mobile subscribers.
The funding will enable Mavenir to accelerate its capabilities in automation, sustainability, and use of AI as it enables customers to efficiently deploy and operate Open RAN based end-to-end cloud-native networks.
CODE/DEVTOOLS
Google is shifting its Area 120 incubator to focus on AI projects and has moved one of the legacy efforts, an AI tool called Checks, into Google as a privacy product aimed at mobile developers. Link.
Checks scans apps and their code to identify areas where there might be violations of privacy and data protection rules, using AI and machine learning to suggest remediation.
Checks was one of the largest projects in Area 120, with 10 people fully dedicated to it, and co-founders Fergus Hurley and Nia Castelly will now be GM and Legal Lead, respectively, for Checks under Google.
The amount Google invested in the project was undisclosed, though it did carry a valuation, and the apps of its customers represent more than 3 billion users.
Checks is integrated with Google's language models and "app understanding technologies" and provides a dashboard to monitor and triage issues in compliance monitoring, data monitoring, and store disclosure support.
The status of the other two projects spared outright closure when Area 120 changed focus is unclear. Area 120 is also seeing some turnover: Clay Bavor, who was running Area 120, left a month after announcing big changes to staff in January.
MULTIMODAL
Cohere has raised $250 million in new funding from investors including Salesforce, Nvidia, Inovia Capital, and Index Ventures, valuing the company at $2 billion. Link.
The start-up was founded in 2019 by two Canadian researchers and a Toronto entrepreneur who had previously worked on AI at Google.
Cohere develops AI technology for other businesses to deploy chatbots, search engines, and other AI-driven products, competing with OpenAI and other big tech companies.
Generative AI technologies like Cohere's can generate text, images, and other media in response to short prompts, and are expected to revolutionize industries from search engines to digital tutors.
Other start-ups in the generative AI space have also raised significant funding, such as Character.ai, which raised $150 million in March, and OpenAI, which received a $10 billion investment from Microsoft in February.
Cohere's success reflects the growing interest and investment in generative AI technology, which requires large amounts of money and experienced researchers to develop.
To address the challenge of individual student needs and the COVID-19 pandemic's impact, Khan Academy has developed an AI-powered virtual tutor and classroom assistant called Khanmigo, powered by GPT-4. Link.
Khan Academy is a non-profit organization that provides free, world-class education to anyone, anywhere, offering thousands of lessons in math, science, and humanities for students of all ages.
The pilot program for Khanmigo has been in testing since 2022, initially available to a limited number of participants, and early testing shows that GPT-4 can help students contextualize what they are studying and learn specific points of computer programming.
Khan Academy is exploring ways that teachers could use GPT-4 to tailor learning for every student quickly and easily and create instructional materials for lessons, providing a transformative technology that could accelerate their roadmap of building more tutor-like abilities into their platform.
OpenAI has announced that GPT-4 offers "human-level performance" and can solve difficult problems with greater accuracy due to its broader general knowledge and problem-solving abilities.
GPT-4 can accept images as inputs and generate captions, classifications, and analyses, and it can handle much longer text compared to GPT-3.5, analyzing over 25,000 words of text.
POLICY/LAW/ETHICS
Geoffrey Hinton, an AI pioneer who developed the intellectual foundation for AI systems, has resigned from Google to speak out about the risks of AI and generative AI. Link.
Hinton warns that the race to develop AI systems could be releasing something dangerous into the wild, which could pose risks to jobs and humanity.
He is concerned about the rise of false information on the internet and the potential for AI to upend the job market.
Hinton fears future versions of AI could pose a threat to humanity, especially as individuals and companies allow AI systems to generate and run their own computer code.
Hinton believes that the race between tech giants like Google and Microsoft will escalate into a global race that may be impossible to regulate.
Hinton urges leading scientists to collaborate on ways of controlling AI technology and warns against scaling it up until they have understood how to control it.
The Biden-Harris Administration is taking new actions to promote responsible American innovation in AI while protecting people's rights and safety. Link.
The Administration is meeting with CEOs of Alphabet, Anthropic, Microsoft, and OpenAI to underscore their responsibility to ensure AI products are safe before they are deployed or made public.
The Administration has taken important actions to protect Americans from AI-related harms, including rooting out bias and algorithmic discrimination.
The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes to promote responsible innovation and support the development of a diverse AI workforce.
Leading AI developers have committed to participate in a public evaluation of AI systems at the AI Village at DEFCON 31 to assess their alignment with the principles and practices outlined in the Administration's Blueprint for an AI Bill of Rights and AI Risk Management Framework.
The Office of Management and Budget (OMB) will release draft policy guidance on the use of AI systems by the U.S. government for public comment this summer to ensure AI systems safeguard Americans' rights and safety and serve as a model for other entities.
Vice President Kamala Harris will meet with the CEOs of Google, Microsoft, OpenAI, and Anthropic to discuss the responsible development of AI and the need for safeguards to mitigate its potential risks. Link.
Harris will emphasize the importance of ethical and trustworthy innovation, joined by other senior members of the Biden administration.
The meeting will engage in a "frank discussion" about the current and near-term development of AI technology and its risks.
Generative AI has been a topic of public concern, and several tech industry giants, including Microsoft, Google, and OpenAI, have launched chatbots utilizing this technology.
Concerns over the potential threat of AI have been raised, including by Geoffrey Hinton, who left his position at Google to share his concerns.
The meeting is part of the Biden administration's efforts to engage with experts and ensure that AI products are safe before they are deployed to the public.
A bipartisan group of US lawmakers, including Senator Edward Markey and Representatives Ted Lieu, Don Beyer, and Ken Buck, has introduced the Block Nuclear Launch by Autonomous AI Act. Link.
The bill aims to prohibit the use of federal funds to launch a nuclear weapon using an autonomous weapons system that is not subject to meaningful human control.
The act seeks to codify existing Pentagon rules for nuclear weapons, which require a human "in the loop" for all actions critical to informing and executing decisions to initiate and terminate nuclear weapon employment.
The bill aims to prevent any autonomous system without meaningful human oversight from launching a nuclear weapon or selecting or engaging targets with the intention of launching one.
The introduction of the bill is intended to call attention to the potential dangers of current-generation autonomous AI systems and spur similar commitments from China and Russia.
The bill highlights the sponsors' nuclear non-proliferation efforts, including a recent bill restricting the president's power to unilaterally declare nuclear war.
OTHER
A leaked document from a Google engineer has warned that the company is not in a position to win the AI race and could lose out to open-source AI technology. Link.
The engineer stated that the “third faction” posing a competitive threat to Google and OpenAI was the open-source community, which is already ahead of them.
The document cited examples such as Meta’s LLaMA model, which was made available on a “noncommercial” and case-by-case basis in February but leaked online shortly after, and websites filled with open-source visual art generation models.
The engineer warned that Google had “no secret sauce” and that “our best hope is to learn from and collaborate with what others are doing outside Google”.
However, the EU was warned this week that it must protect grassroots AI research in its planned AI bill or risk hampering the release of open-source models.
On Thursday, the UK's competition watchdog launched a review of the AI market, focusing on the foundation models behind generative AI tools such as ChatGPT, Bard, and the image generator Stable Diffusion.
Microsoft and AMD have reportedly teamed up to develop AI processors, with Microsoft providing engineering resources to support AMD's developments and the two companies joining forces to compete against Nvidia, which controls an estimated 80% market share in the AI processor market. Link.
Bloomberg's sources claim that AMD is helping Microsoft to develop its own in-house AI chips, codenamed Athena, but Microsoft spokesperson Frank Shaw has denied that AMD is involved.
The explosive popularity of AI services like OpenAI's ChatGPT is driving the demand for processors that can handle the huge computational workloads required to run them, and Nvidia's dominance in the GPU market allows it to dominate the AI chip industry.
AMD is seeking to position itself better to capitalize on the growing AI industry and create partly customized chips for its biggest customers to use in their AI data centers.
Google and Amazon have also created their own AI chips to train their machine learning computer models.
The AI boom has led to a growing shortage of specialized GPU chips, further constrained by Nvidia's near monopoly on the supply of such hardware.
Scientists from the University of Texas have developed an AI that can translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions of the brain. Link.
The language decoder can turn a person's imagined speech into actual speech and generate relatively accurate descriptions of silent films.
The study used a large language model to match patterns in brain activity to the words and phrases that participants had heard, and effectively reversed the process by using another AI to translate fMRI images into words and phrases.
The decoding model captured the meaning of passages and unspoken versions of stories, suggesting that it was capturing not just words but also meaning.
The language-decoding method has limitations, such as the bulky and expensive nature of fMRI scanners, and the long, tedious process of training the model for individuals.
Every brain has unique ways of representing meaning, and participants can shield their internal monologues, throwing off the decoder by thinking of other things.
Amazon Web Services (AWS) has launched a 10-week accelerator program for generative AI startups globally, offering access to AI models and tools, machine learning stack optimization, custom go-to-market advice, and $300,000 in AWS credits to each of the 10 startups accepted. Link.
AWS does not intend to invest in the startups as it has no AI/ML investment arm.
This is AWS's second accelerator program, with the first focusing on conversational AI in 2016.
Generative AI is seen as a transformative technology that unlocks new development and creativity, hence the new accelerator program.
AWS has already established partnerships with top AI companies, including Hugging Face and Stability AI, to build and scale its AI models for image, language, audio, video, and 3D content generation.
Corporations such as Salesforce are investing in generative AI startups and companies, as they seek to bring AI technology into their fold.
A new encoding method called CEBRA has been developed that jointly uses behavioural and neural data to uncover neural dynamics. Link.
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience.
CEBRA can be used in both hypothesis- or self-supervised discovery-driven manners to produce consistent and high-performance latent spaces.
The consistency metric can be used to uncover meaningful differences, and the inferred latents can be used for decoding.
CEBRA has been validated for accuracy and demonstrated utility for both calcium and electrophysiology datasets, across sensory and motor tasks and in simple or complex behaviours across species.
CEBRA can be used for the mapping of space, uncovering complex kinematic features, and provide rapid, high-accuracy decoding of natural videos from visual cortex.
Mozilla has acquired Fakespot, a New York-based start-up that offers a website and browser extension to help users identify fake or unreliable reviews on various online marketplaces, including Amazon, Yelp, and TripAdvisor. Link.
Fakespot uses an AI and machine learning system to detect patterns and similarities between reviews in order to flag deceptive ones and provide a rating or grade for the product's reviews to help consumers make informed decisions.
The financial terms of the deal were not disclosed, and Fakespot will continue to work across all major web browsers and mobile devices.
Mozilla plans to introduce Fakespot's functionality to Firefox over time and invest in enhancing the Fakespot experience for its users.
The acquisition reflects the growing challenge of detecting fake reviews with the introduction of enhanced and commercially available AI tools that can generate fake reviews written by AI chatbots.
Mozilla's acquisition of Fakespot comes after the launch of Mozilla.ai, a start-up focused on building open-source and trustworthy AI.
IBM CEO Arvind Krishna has announced that the company will pause or slow hiring for non-customer-facing roles that can be replaced by AI and automation. Link.
Back-office functions, including HR, are among the roles that will be affected, with Krishna estimating that around 30% of these roles could be replaced by AI over the next five years.
This could result in around 7,800 job losses, with part of the reduction achieved through attrition rather than direct layoffs.
More mundane tasks, such as employment verification and employee transfers, are likely to be fully automated, while some HR functions may not be replaced for another decade.
IBM is still hiring for software development and customer-facing roles, and has added around 7,000 people to its workforce in the first quarter of 2023.
Krishna has been divesting lower-growth businesses and focusing on software and services, with productivity and efficiency measures expected to drive $2 billion in savings by the end of 2024.
Chegg Inc., an online education company that offers guidance to students taking tests and writing essays, has experienced a 38% drop in stock prices due to concerns over the impact of the ChatGPT tool on its homework-help services. Link.
The market reaction is one of the most notable yet to signs that generative AI is disrupting industries.
Chegg's revenue and profit forecasts for the current quarter are much lower than analysts' estimates, with much of its revenue coming from subscriptions, which could be threatened if students start seeing AI chatbots as a free alternative.
The impact of ChatGPT began to be felt in the spring, according to CEO Dan Rosensweig's prepared remarks accompanying Chegg's first-quarter earnings.
The concern over the impact of ChatGPT on Chegg's business highlights the growing importance of AI and its potential to transform industries.
Companies will need to adapt to these changes and find ways to leverage AI to remain competitive in the market.
Senzai, a Mexico City-based company co-founded by Julian Lopez-Portillo and Darren Timmins, aims to bring the benefits of AI to companies in Latin America, even those whose primary focus is not tech-related. Link.
Lopez-Portillo and Timmins observed that many companies struggle to implement predictive modeling and often resort to showering offers, discounts, and promotions on the few customers where they think they can influence the outcome.
Senzai's technology focuses on measuring impact, experimentation, and optimization to figure out which customers are likely to respond to certain sales and marketing campaigns and improve outcomes for businesses.
The company raised a pre-seed round of $2 million this year and plans to make money through a performance-based fee, charging a percentage of the value generated for customers.
Senzai's focus on outcomes rather than tools has made it stand out in the AI market, according to ALLVP managing partner Jimena Pardo.
Senzai's technology is currently being developed with its first few clients, and the company plans to deploy new funding into scaling the product and achieving product-market fit.
Pando, a startup developing fulfillment management technologies, has raised $30 million in a Series B funding round led by Iron Pillar and Uncorrelated Ventures, bringing its total raised to $45 million. Link.
The funding round also had participation from existing investors Nexus Venture Partners, Chiratae Ventures and Next47.
Pando plans to use the new capital to expand its global sales, marketing and delivery capabilities and to augment its teams at all levels of the organization.
Pando's software-as-a-service platform provides various tools and apps for different tasks across freight procurement, trade and transport management, freight audit and payment and document management, as well as dispatch planning and analytics.
Pando's no-code capabilities enable business users to customize the apps while maintaining platform integrity, reducing the need for IT resources for each customization.
Pando's momentum is strong, counting Fortune 500 manufacturers and retailers among its customer base, and revenue has grown 8x since its Series A in 2020 while the number of customers has increased 5x.
GPT Tip of the Week
Prompt: What factors will impact the closed vs. open-source battle long-term in LLM-based AI?
Response: In the long term, the battle between closed and open-source approaches in large language model (LLM)-based AI will be influenced by various factors. Here are some key aspects to consider:
Accessibility and democratization: Open-source AI frameworks have the potential to democratize AI development by providing resources to a larger audience. This can spur innovation and collaboration, but may also raise concerns about misuse or unintended consequences.
Intellectual property and commercialization: Closed-source AI solutions can protect proprietary technology, allowing companies to maintain control over their innovations and monetize them more effectively. However, this approach can limit the sharing of knowledge and slow down progress in the field.
Funding and resources: The availability of funding and resources can impact the development and adoption of both open and closed-source AI solutions. Government or private sector investments can influence which approach gains more traction and support.
Security and privacy: Ensuring the security and privacy of AI systems is critical, especially in sensitive applications. Open-source solutions can benefit from community-driven efforts to identify and address vulnerabilities, but closed-source approaches might offer more control over potential risks.
Talent pool and workforce: The availability of skilled professionals to develop and maintain AI systems will affect the balance between open and closed-source approaches. Open-source projects may attract more contributors, but closed-source projects could offer better incentives for exclusive employment.
Regulation and policy: Government regulations and policies can shape the development and deployment of AI systems. These rules may favor either open or closed-source approaches, depending on factors such as national security concerns, economic interests, or ethical considerations.
Technological breakthroughs: Advances in AI and machine learning could impact the closed vs. open-source debate. For instance, breakthroughs in areas such as hardware, algorithms, or data efficiency could favor one approach over the other.
Public perception and trust: The level of trust in AI systems can influence the adoption of open or closed-source solutions. Transparency and explainability are often seen as critical factors in building trust, which could give open-source projects an advantage.
Ecosystem and community support: The support of a vibrant and diverse ecosystem, including developers, researchers, and users, can influence the success of open or closed-source AI projects. This includes factors such as documentation, tools, libraries, and forums for discussion and collaboration.
Collaboration and partnerships: Strategic partnerships between organizations, both public and private, can shape the development of AI technologies. Collaborative efforts between entities with complementary strengths could lead to the growth and adoption of either open or closed-source solutions.
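If you want to run this week's prompt programmatically, a minimal sketch of building the request with the OpenAI Python client might look like this. The model name and client usage are assumptions; adapt them to your installed SDK version and API access:

```python
# Minimal sketch of sending this week's prompt to a chat model.
# The model name and client call below are assumptions; check the docs
# for the version of the `openai` package you have installed.

PROMPT = ("What factors will impact the closed vs. open-source "
          "battle long-term in LLM-based AI?")

def build_request(prompt: str, model: str = "gpt-4") -> dict:
    """Build a chat-completion request payload for the weekly prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# To actually send it (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**build_request(PROMPT))
# print(response.choices[0].message.content)
```

Separating payload construction from the network call makes it easy to swap models or reuse the same prompt across providers.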