AI Weekly: 05/01/23

ChatGPT is competing with doctors, the DHS is keying in on AI, and Apple is struggling to keep up

Good morning and welcome to this week’s edition of AI Weekly! A week after Stability AI released their newest open-source language model, StableLM, Hugging Face has launched their open-source ChatGPT alternative called HuggingChat.

In news from around the world, ChatGPT is available to users once again in Italy after OpenAI addressed GDPR concerns raised by the Italian DPA. In the latest regulation efforts, EU lawmakers have proposed a three-tiered approach to mitigating the risks posed by generative AI.

Also, about a week after an AI-generated song sparked controversy for using synthetic versions of Drake and The Weeknd’s voices, pop artist Grimes has welcomed the technology, offering 50% royalties on any successful AI-generated song that uses her voice. More on that and other exciting news below!

- ZG

Here are the most important stories of the week:

TEXT

Hugging Face has launched an open-source alternative to ChatGPT called HuggingChat. Link.

  • HuggingChat is a user interface that enables people to interact with an open-source chat assistant named Open Assistant.

  • The CEO of Hugging Face, Clem Delangue, believes that open-source alternatives to ChatGPT are necessary for transparency, inclusivity, accountability, and distribution of power.

  • HuggingChat's platform potential has been compared to that of the Android App Store.

  • HuggingChat has significant limitations, as licensing issues need to be worked out before it can be used commercially.

  • Hugging Face is iterating quickly on the interface and safety mechanisms and intends to support the next generation of rapidly improving open-source models.

IMAGE/VIDEO

Stability AI has released an Image Upscaling API that uses AI to increase the size of images while maintaining their sharpness. Link.

  • The API expands small images into larger ones without compromising their level of detail, which has previously been a challenge in digital imagery.

  • The Image Upscaling API is the latest addition to Stability AI's existing set of image generation and editing APIs, including text-to-image, image-to-image, and inpainting APIs.

  • The API is powered by two open-source models: Real-ESRGAN, which is super-fast and can double the resolution of a typical 512x512 image in half a second, and the latent Stable Diffusion 4x Upscaler, which takes between 20 and 40 seconds but provides more detailed results with a richer texture.

  • Both models can be used to upscale any images, regardless of whether they were generated using the Stability API or not.

  • Stability AI's vision is to continuously enrich the Image Upscaling API with state-of-the-art models as they become available.
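The challenge the bullets above describe is easiest to see against the naive baseline: classical upscaling just copies or blends existing pixels, so no new detail is created and edges turn blocky, which is exactly the gap AI upscalers aim to close. A minimal sketch of the naive approach (illustrative only, not Stability AI's method):

```python
# Naive nearest-neighbor upscaling: each output pixel copies the nearest
# source pixel. The image gets bigger, but no new information appears,
# so edges become blocky instead of gaining sharpness.

def nearest_neighbor_upscale(pixels, factor):
    """Upscale a 2D grid of pixel values by an integer factor."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in pixels
        for _ in range(factor)
    ]

# A tiny 2x2 "image" doubled to 4x4: every new pixel is a duplicate.
small = [[0, 255],
         [255, 0]]
big = nearest_neighbor_upscale(small, 2)
# big == [[0, 0, 255, 255],
#         [0, 0, 255, 255],
#         [255, 255, 0, 0],
#         [255, 255, 0, 0]]
```

Model-based upscalers like Real-ESRGAN instead learn to hallucinate plausible high-frequency detail, which is why they can enlarge an image while keeping (or improving) perceived sharpness.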

SPEECH/AUDIO

Former Apple employees who worked in the company's AI and machine learning groups believe that a lack of ambition and organizational dysfunction have hindered Siri and the company's AI technologies. Link.

  • Apple's virtual assistant is "widely derided" inside the company for its lack of functionality and minimal improvement over time.

  • Siri's leadership did not want to invest in building tools to analyze Siri's usage, and engineers lacked the ability to obtain basic details about user engagement.

  • Apple executives dismissed proposals to give Siri the ability to conduct extended back-and-forth conversations, claiming that the feature would be difficult to control and gimmicky.

  • Apple's uncompromising stance on privacy has also created challenges for enhancing Siri, with the company pushing for more of the virtual assistant's functions to be performed on-device.

  • The Siri team explored a project to rewrite the virtual assistant from scratch, codenamed "Blackbird," but it was killed in favor of "Siri X," which simply aimed to move Siri's processing on-device for privacy reasons without the modular functionality of Blackbird.

Canadian pop artist Grimes has announced that AI artists can use her voice in their music without facing copyright or legal enforcement. Link.

  • She is willing to split royalties 50/50 on any successful AI-generated song that uses her voice, the same deal she offers any artist she collaborates with.

  • This comes after Universal Music Group (UMG) called for the removal of an AI-generated song using simulated voices of Drake and The Weeknd, citing copyright violation.

  • Grimes has previously collaborated with a mood music startup to launch an AI-generated lullaby app in 2020, and predicted in 2019 that generative AI could mean the end of human art.

  • Generative AI is already disrupting numerous industries, including visual art, writing, online media, and voiceover work, leading to legal, ethical, and copyright debates.

  • Grimes is also working on a program to simulate her voice and providing stems and samples for artists to train their own AI.

CODE/DEVTOOLS

Pinecone, a vector database aimed at data scientists, has received a $100 million Series B investment at a $750 million post-money valuation. Link.

  • Pinecone is a market leader in vector databases and has grown from a handful of customers to 1,500 in a short amount of time, attracting interest from businesses of all sizes, including technology companies like Shopify, Gong, and Zapier.

  • Vector databases are becoming more valuable with the rise of LLMs: an LLM's knowledge is baked in at training time, while a vector database can be updated and queried like any other database.

  • The investment will help Pinecone achieve its end vision of becoming a fundamental component in the new AI data stack, working together with LLMs as a source of truth to cut down on the hallucination problem.

  • The company will be hiring and expects to get to around 150-200 employees by the end of the year.

  • The vector database space has been heating up since Pinecone launched a few years ago, with players such as Qdrant, Zilliz, and Chroma all raising funds recently.
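At its core, the service Pinecone sells is fast similarity search over embeddings: given a query vector, return the stored vectors closest to it. A brute-force sketch of that operation (toy data, plain Python; production systems use approximate-nearest-neighbor indexes to do this at scale):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, index, k=2):
    """Rank stored (id, vector) pairs by similarity to the query."""
    scored = [(doc_id, cosine_similarity(query, vec))
              for doc_id, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Toy "embeddings"; in practice these come from an embedding model.
index = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 0.0, 1.0],
}
results = top_k([1.0, 0.0, 0.0], index, k=2)  # doc-a, then doc-b
```

This is also how vector databases "work together with LLMs as a source of truth": the top-ranked documents are passed to the model as grounding context, so answers can draw on retrieved facts rather than the model's training data alone.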

AirOps, an early-stage startup that helps companies build AI-enabled applications on top of LLMs, has announced a $7 million seed round led by Wing VC with participation from Founder Collective, XFund, Village Global, Apollo Projects, and Lachy Groom. Link.

  • AirOps is helping customers build applications on top of three LLMs: GPT-4, GPT-3, and Claude, to automate processes, extract insights from data, generate personalized content, and apply natural language processing techniques.

  • The company's platform lets users create custom solutions on top of algorithms that leverage LLMs to solve business problems.

  • AirOps aims to help customers use LLMs more efficiently and effectively, for example by using larger models to train smaller ones, and guides customers through the process.

  • AirOps launched last year with the goal of helping customers get value from their organizational data but shifted focus to helping blend LLMs with data to create custom workflows and applications.

Data discovery and classification provider BigID has launched BigAI, a new LLM designed to scan and classify enterprises’ data to optimize their security and enhance risk management initiatives. Link.

  • BigAI enables organizations to scan structured and unstructured data whether stored in the cloud or on-premises, using a mix of ML-driven classification and generative AI to suggest titles and descriptions for data tables, columns and clusters so they’re easier to locate via search.

  • BigAI uses a privacy-by-design approach, which enables the LLM to process an organization’s data with the vendor’s own private models and servers without exposing anything to public models.

  • BigChat, a virtual personal assistant, is designed to act as a copilot, helping users manage compliance challenges by answering their questions.

  • BigID says it is the only solution built on deep data discovery, able to adapt generative AI to make data easier to find, understand, and ultimately protect.

  • This launch comes amid a wave of providers unveiling plans to use LLMs as part of security use cases, including Google Cloud Security AI Workbench and SentinelOne.

CYBERSECURITY

Google Cloud announces new AI-powered platform, Security AI Workbench, at RSA Conference 2023. Link.

  • Security AI Workbench incorporates specialized, security-focused large language models (LLMs) and Google's security intelligence to address three key security challenges: threat overload, toilsome tools, and the talent gap.

  • VirusTotal Code Insight and Mandiant Breach Analytics for Chronicle are among the new offerings powered by Sec-PaLM, which can prevent new infections and reduce toil.

  • Chronicle AI and Security Command Center AI are two Sec-PaLM-based features that can make security more understandable and effective, and augment existing capabilities to tackle security issues.

  • Security AI Workbench is built on Google Cloud's Vertex AI infrastructure, which ensures data privacy and compliance support for customers.

  • Google describes Security AI Workbench as an initial step and says it will continue to iterate and innovate to drive advancements across the security community.

HEALTHCARE

ChatGPT provided better answers to patient questions than human doctors in a study published in JAMA Internal Medicine, winning in 79% of cases. Link.

  • The study evaluated questions from Reddit's r/AskDocs and had a panel of medical professionals rate the quality and empathy of the responses.

  • The results highlight the potential of chatbots to assist clinicians and alleviate the burden of responding to patient messages, which have surged due to the COVID-19 pandemic.

  • However, the medical community should proceed with caution, and additional studies are needed to evaluate the risks and benefits of using chatbots in healthcare.

  • Chatbots could be used to provide peer-to-peer support, reach out to patients proactively with healthcare recommendations, and combat the spread of misinformation.

  • While AI tools like ChatGPT show promise, there are still risks and potential harms that must be addressed before implementation.

POLICY/LAW/ETHICS

The Department of Homeland Security (DHS) will establish a new task force to examine how the government can use AI technology to protect the country. Link.

  • The task force shows that the Biden administration is looking for ways to embrace AI’s potential benefits while thinking through the possible harms.

  • DHS Secretary Alejandro Mayorkas announced the task force during a speech at a Council on Foreign Relations event, pledging to defend against the malicious use of AI and ensure its use is rigorously tested to avoid bias and disparate impact.

  • The task force will determine how AI could be used to deploy DHS systems that screen cargo for goods produced by forced labor and better detect fentanyl in shipments to the U.S.

  • Homeland Security Advisory Council Co-Chair Jamie Gorelick will study the intersection of AI and homeland security and deliver findings to guide the use of AI and defense against it.

  • The announcement adds to the government’s efforts to beef up its AI capabilities, such as the hiring of former Google AI Cloud Director Andrew Moore by U.S. Central Command to serve as its first advisor on AI, robotics, cloud computing, and data analytics.

EU lawmakers are working to address generative AI in the bloc's draft law for regulating AI. Link.

  • MEPs are proposing a layered approach, with three layers to address responsibilities across the AI value chain, foundational models, and generative AIs specifically.

  • One proposed layer would apply to all general purpose AI, with hard requirements on regulating relationships in the AI value chain.

  • Another layer would set specific obligations for makers of foundational models, such as transparency and testing prior to going on the market.

  • The third layer would target generative AIs specifically and focus on specific content issues, including copyrighted material used to train AIs.

  • MEPs are seeking to amend the draft AI Act in other ways, including proposing a centralized enforcement element and additional prohibited use-cases.

The UK government is investing £100 million to establish safe and reliable foundation models for AI systems, including large language models like ChatGPT and Google Bard, to make the UK globally competitive in this strategic technology. Link.

  • The Foundation Model Taskforce, modeled on the COVID-19 Vaccines Taskforce, will work towards developing the safe and reliable use of AI across the economy, delivering better outcomes for people across the country through better public services, and establishing the UK as a world leader in foundation models.

  • The investment will build the UK's sovereign national capabilities so that public services can benefit from the transformational impact of this type of AI.

  • The taskforce will bring together government and industry experts and report directly to the Prime Minister and Technology Secretary, and the first pilots targeting public services are expected to launch in the next six months.

  • The investment follows last week's Cabinet meeting where Ministers agreed on the transformative potential of AI, the vital importance of retaining public confidence in its use, and the need for regulation that keeps people safe without preventing innovation.

ChatGPT is once again available to users in Italy after being suspended due to GDPR concerns raised by the Italian DPA. Link.

  • OpenAI made changes to satisfy initial conditions set by the local DPA for the service to resume with managed regulatory risk, including adding age-gating to prevent minors from accessing the service and amending the legal basis claimed for processing local users' data.

  • OpenAI expanded its privacy policy and provided users and non-users with more information about the personal data being processed for training its algorithms.

  • OpenAI introduced mechanisms to enable users to exercise their rights, including providing a way for Europeans to ask for their data not to be used to train the AI and mechanisms to have their data deleted.

  • OpenAI's defense against accusations of breaking data protection law essentially boils down to arguing that it intends to be beneficial and had no choice but to use data to build its AI technology.

  • The Italian DPA confirmed that its investigation into whether OpenAI's data processing activities breached GDPR or not is ongoing.

AI's arrival in the music industry has garnered mixed responses from artists, with some collaborating with the technology and others rejecting it. Link.

  • The advent of generative AI has raised legal issues, with the music industry providing early tests of the limits of existing intellectual property protections.

  • Recent developments include a hit song with AI-generated vocals resembling Drake's and artists offering AI versions of their own voices.

  • Existing rules governing copyright and intellectual property do not adequately address the challenges posed by AI impersonations, which can duplicate an artist's voice and style without their consent.

  • The legal questions raised by AI-generated content will likely extend to other industries beyond music, requiring the creation of legal means to protect digital versions of oneself.

  • There are also issues related to the training of engines that produce generative AI content, with some content owners alleging copyright infringement. Platforms that create and distribute such technologies must take more responsibility to mitigate potential harms.

OTHER

Mark Zuckerberg announced during Meta's Q1 earnings call that they are exploring adding generative AI to their apps, including chat experiences in WhatsApp and Messenger, visual creation tools for posts in Facebook and Instagram, and ads, with future plans for video and multi-modal experiences. Link.

  • Zuckerberg expects that these tools will be valuable for regular people, creators, and businesses, especially for AI agents for business messaging and customer support.

  • Meta initially lagged in building its AI infrastructure, but Zuckerberg says they are now caught up and that generative AI products will be released in the coming months.

  • Generative AI will be a major feature in all of Meta's products, and it could speed up WhatsApp's customer support business by allowing AI agents to act on behalf of businesses.

  • Meta is not giving up on the metaverse despite the pivot to AI, as evidenced by the planned debut of the next Quest VR headset later this year.

  • Meta's Reality Labs division reported a net loss of $4 billion last quarter, and the company expects operating losses to increase year over year in 2023.

Yelp has announced new updates, including an enhanced AI-powered search experience and the ability to add videos to reviews. Link.

  • Yelp now better understands a user's search intent and highlights relevant information from reviews in new snippets that appear under each business listing in search results.

  • The company is using AI to provide better search suggestions based on a user's search intent that are not bound by their location, and it's introducing a new "Surprise Me" feature that will generate a unique, highly rated recommendation near you.

  • Yelp is also launching new interactive review topics that help users track what they've covered and is rolling out four new review reactions: helpful, thanks, love this, and oh no.

  • Yelp is introducing "Yelp Guaranteed," a new satisfaction guarantee program that allows users to get up to $2,500 back in the event something goes wrong with their project.

  • Additional updates include a new navigation bar, redesigned photo viewing experience, and a new login experience that won't require users to remember their password every time they want to log into their account.

Robust.AI, a warehouse robotics company, has raised a $20 million Series A-1 funding round, led by Prime Movers Lab and featuring Future Ventures, Energy Impact Partners, JAZZ Ventures and Playground Global, following a $15 million Series A in 2020. Link.

  • Robust.AI was initially focused on software-only solutions but later realized the value of building hardware that could transform the warehouse space.

  • The company's offering pairs Carter, its autonomous warehouse cart, with Grace, its software platform, under a robots-as-a-service (RaaS) payment model.

  • Robust.AI operates in a highly competitive market, but differentiates itself through its emphasis on human-robot interactions, which is a central feature of its offering.

  • The company's hardware includes narrow floating-point processors like Intel's Myriad that can run multiple models at low cost, and its robots have more flexibility than other autonomous mobile robots.

  • Robust.AI believes in balancing what machines and people do to maximize productivity and cost-effectiveness, rather than automating everything away, which can lead to brittle and less cost-effective systems.

PwC plans to invest $1 billion in generative AI technology in its U.S. operations over the next three years, working with Microsoft Corp. and OpenAI to automate aspects of its tax, audit and consulting services. Link.

  • The investment includes funding to recruit more AI workers and train existing staff in AI capabilities, while targeting AI software makers for potential acquisitions.

  • PwC aims to develop and embed generative AI into its own technology stack and client-services platforms, as well as advising other companies on how best to use generative AI.

  • The company will pay to access OpenAI’s GPT-4 language model to build and run apps in Microsoft’s Azure cloud.

  • The use of generative AI in areas like tax preparation requires validation by a professional, and PwC isn't aiming to replace workers with generative AI but rather to optimize their jobs by automating time-consuming, repetitive tasks.

A developer known as xtekky has created GPT4Free, a project designed to grant free access to OpenAI’s GPT-4 and GPT-3.5 models by reverse-engineering APIs, violating OpenAI’s terms of service. Link.

  • GPT4Free fools the OpenAI API into thinking it’s receiving requests from websites with paid OpenAI accounts.

  • Anyone who uses GPT4Free is racking up the tab of sites xtekky chose to script around, violating OpenAI’s terms of service.

  • GPT4Free is strictly for “educational purposes,” according to xtekky.

  • Other projects similar to GPT4Free are already cropping up, suggesting it’s something of a trend.

  • OpenAI has not signaled when – or if – it’ll deliver free, unfettered access to others who wish to benchmark the base GPT-4 model, and one anticipates a game of whack-a-mole between projects like GPT4Free and OpenAI.

Deloitte has launched two new services for its Quartz AI offering: Compass AI for logistics and Frontline AI for customer care deployments. Link.

  • The new services are built with Nvidia technologies and embedded with Nvidia SDKs that can solve specific business problems.

  • Deloitte aims to take AI from the back office to enhance front office processes and unlock operational value.

  • The goal with Frontline AI is to introduce technology to improve frontline customer care processes, while Compass AI provides fleet routing and dispatch optimization for supply chain optimization.

  • Deloitte's clients often have an idea of what AI is, but education is still necessary to explain its application and maturity level in specific industries and solutions.

  • Among the challenges organizations face in adopting AI are the quality and availability of data and the need for organizational alignment to benefit from AI.

GPT Tip of the Week:

Prompt: Where can technological innovation take place for music artists granting access for people to use synthetic versions of their voices in songs?

Response: Technological innovation for music artists to grant access for people to use synthetic versions of their voices in songs can take place in several areas. Here are a few examples:

  1. Voice Synthesizers: One area of innovation is the development of voice synthesizers that can recreate a person's voice using machine learning and artificial intelligence. These synthesizers can be used to create synthetic versions of a person's voice that can be used in songs.

  2. Digital Rights Management: Another area of innovation is the development of digital rights management systems that allow music artists to control the use of their synthetic voices. These systems can include licensing agreements, copyright protection, and other measures to ensure that the artist's synthetic voice is used appropriately.

  3. Music Collaboration Platforms: There are also several music collaboration platforms that allow artists to collaborate remotely, including the use of synthetic voices. These platforms can provide a secure and easy-to-use environment for music artists to share their synthetic voices and collaborate on new songs.

  4. Mobile Apps: Finally, there are several mobile apps that allow users to create and share music using synthetic voices. These apps can provide a fun and accessible way for people to experiment with music creation and use synthetic voices in their songs.

Overall, there are several areas where technological innovation can take place for music artists granting access for people to use synthetic versions of their voices in songs, and the possibilities are only growing as technology advances.