
The Application Layer of AI


Foundation models should be thought of as backends, abstracted from consumers.

Generative AI models can do all kinds of amazing things. Getting them to do what we want, on the other hand, is extremely difficult, requiring prompt engineering and lots of experimentation, which precludes wide-scale adoption by consumers.

Today, we are asked to think of AI models as general purpose chat interfaces, made available to us as consumers.

If we want help writing a blog post, we simply open up ChatGPT (or Bard, Claude, etc.), and start typing (mostly with our thumbs, as the majority of modern computing now happens on mobile).

“You are a blog post writer with 40 years of experience in copywriting. Please write me a 500 word blog post about ‘How AI will impact medicine’. Make the tone professional. The target audience are physicians who are concerned about disruption in their industry. Also, below is a sample of my writing style. Make sure the blog post adheres to my style. — <insert a few hundred words that accurately convey your writing style>”

Does it make sense for consumers to learn to craft these types of prompts, and type or copy/paste them into chat interfaces?

Seriously, think about it!

An alternative approach to AI interfaces

What if we thought of foundation models as backend services, rather than consumer facing interfaces?

What if:

  1. We allow people who are interested in the workings of AI models (let's call them "prompt engineers") to create simple, intuitive front-end interfaces for consumers to use, abstracting the AI models and eliminating the need for consumers to understand prompting.
  2. We teach consumers to approach AI the same way they approach mobile — whatever you want to do, “there’s an app for that”.

What would it look like?

Imagine a tool, an Integrated Development Environment (IDE), that allows prompt engineers to create purpose-built, specialized AIs — that consumers can use to get things done.

Let's call this tool MindStudio.

A prompt engineer could use MindStudio to build a “Personalized Blog Post Generator” — an AI that is purpose-built to generate blog posts for specific target audiences, in the unique writing style of each consumer.

To do so, the prompt engineer needs to:

  1. Define what type of information to collect from the user in order to write great blog posts, for example:
     • The user's writing style (let them upload documents they've written, or specify another style they prefer).
     • Blog post title / topic.
     • Target audience.
     • Number of words.
     • Tone (friendly, professional, provocative, etc.).
     • Any other relevant user inputs.
  2. Define how the blog post should be formatted.
  3. Craft the sophisticated prompt that generates the blog posts.
  4. Choose the right model to facilitate the best results.
  5. Adjust the various model parameters to fine-tune output.
  6. Test and refine until satisfied.
  7. Publish the Personalized Blog Post Generator and make it available for consumers to use.
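The input-collection and prompt-crafting steps above can be sketched in code. This is a minimal illustration only; the field names and the `build_blog_prompt` helper are hypothetical, not MindStudio's actual API.

```python
# Hypothetical sketch of the prompt a "Personalized Blog Post Generator"
# might assemble from the inputs collected in the steps above.
# Field names are illustrative, not MindStudio's actual API.

def build_blog_prompt(inputs: dict) -> str:
    """Assemble a blog-post prompt from user-supplied fields."""
    return (
        f"You are an experienced copywriter.\n"
        f"Write a {inputs['word_count']}-word blog post titled "
        f"'{inputs['title']}'.\n"
        f"Target audience: {inputs['audience']}.\n"
        f"Tone: {inputs['tone']}.\n"
        f"Match the following writing-style sample:\n"
        f"{inputs['style_sample']}\n"
    )

prompt = build_blog_prompt({
    "title": "How AI will impact medicine",
    "audience": "physicians concerned about disruption",
    "word_count": 500,
    "tone": "professional",
    "style_sample": "<sample of the user's writing>",
})
```

The consumer only ever fills in the fields; the prompt engineer owns the template and can refine it without the consumer noticing.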

Once they understand the steps required, prompt engineers can create new AIs in under 15 minutes — frequently under 5 minutes.

These AIs get published as web apps, addressable via a unique URL or custom domain name — making them easily sendable, sharable, embeddable, etc.

The prompt engineer can also choose to charge consumers a monthly subscription for usage of their AI — creating a recurring revenue stream.

In fact, there is no need to ‘imagine’ this tool: MindStudio exists, and you can simply go use it now!

Also, check out the over 2k AIs that have already been created by prompt engineers and used by tens of thousands of consumers.

Anatomy of the Application Layer

Personalization

Generative AI models can return different (personalized) results to each user — if they are provided with a personalization profile. Unfortunately, today’s chat interfaces make it difficult for users to provide such a profile. The closest thing we have are “custom instructions” in ChatGPT — which allows a user to specify some preferences that get injected into every request.

AIs built with MindStudio can collect personalization preferences from each user — typically as an onboarding wizard, on first use of the AI. These personalization preferences can then be leveraged in AI operation to produce personalized results.

Unlike ChatGPT Custom Instructions, which are global settings, each AI built with MindStudio can have its own personalization parameters, specific and relevant to its function.

This personalization data is associated with the user’s profile, and can be leveraged across AIs, with the explicit permission of the user. For example, one AI may ask the user for samples of their writing style, and other AIs can ask the user to simply grant them access to this personalization data — eliminating the need to provide it again.
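A minimal sketch of how such permission-gated profile sharing could work. The `ProfileStore` class and its methods are hypothetical stand-ins, not MindStudio's real internals.

```python
# Illustrative sketch (not MindStudio's real internals) of a shared
# personalization profile that other AIs may read only with consent.

class ProfileStore:
    def __init__(self):
        self._profiles = {}   # user_id -> {key: value}
        self._grants = set()  # (user_id, ai_name, key) tuples

    def save(self, user_id, key, value):
        """Store one piece of personalization data for a user."""
        self._profiles.setdefault(user_id, {})[key] = value

    def grant(self, user_id, ai_name, key):
        """User explicitly allows ai_name to read one profile key."""
        self._grants.add((user_id, ai_name, key))

    def read(self, user_id, ai_name, key):
        """Fail unless the user granted this AI access to this key."""
        if (user_id, ai_name, key) not in self._grants:
            raise PermissionError(f"{ai_name} has no access to '{key}'")
        return self._profiles[user_id][key]

store = ProfileStore()
# One AI collects the writing-style sample during onboarding...
store.save("u1", "writing_style", "concise, conversational")
# ...and a second AI reuses it after the user grants access:
store.grant("u1", "tweet-generator", "writing_style")
style = store.read("u1", "tweet-generator", "writing_style")
```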

Prompt

Prompting foundation models is more of an art than a science. There are evolving best practices that can be followed, but each AI use case tends to require experimentation and prompt tuning — prompt engineering work.

MindStudio allows prompt engineers to write advanced prompts that nuance the operations of the AI — utilizing personalization parameters, runtime variables, multi-step workflows, etc.

Model Parameters

MindStudio is model agnostic, supporting major models including OpenAI (GPT-3, 3.5, 3.5 Instruct, 4), Anthropic (Claude 1, 2), and Google (PaLM 2). New models are added frequently, as they become commercially available.

Prompt engineers choose the best model for the use case their AI addresses, and can then set custom parameters for the chosen model. Common parameters include:

  • Temperature — controls output randomness; lower values yield more focused, deterministic responses, while higher values yield more creative ones.
  • Response size — the maximum amount of data returned by the model per interaction.
  • Token overflow strategy — what to do if the token window is exceeded.
  • System introduction messages — parameters injected at the start of each conversation.
  • User message processing — Retrieval Augmented Generation (RAG) instructions, allowing the AI to leverage Data Sources other than what it was trained on.
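The parameters above map naturally onto a chat-completion request. The sketch below builds a generic request payload; it illustrates the concepts only, is not MindStudio's configuration format, and the parameter values and model name are arbitrary examples.

```python
# How the parameters above typically map onto a chat-completion request.
# Generic illustration; not MindStudio's configuration format.

def build_request(system_intro, user_message, history, max_context=8):
    # Token-overflow strategy (simplified): keep only the most recent
    # turns so the conversation fits in the model's context window.
    trimmed = history[-max_context:]
    return {
        "model": "gpt-3.5-turbo",   # chosen by the prompt engineer
        "temperature": 0.7,         # higher values = more creative output
        "max_tokens": 800,          # caps the response size
        "messages": (
            [{"role": "system", "content": system_intro}]  # injected at start
            + trimmed
            + [{"role": "user", "content": user_message}]
        ),
    }

req = build_request(
    system_intro="You are a personalized blog post generator.",
    user_message="Draft an outline for my next post.",
    history=[],
)
```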

Data Sources

Prompt engineers can upload different types of data into the AI at build time, and instruct the AI to reference these data sources in its responses. Currently supported formats include txt, docx, pdf, csv, xlsx, and html.

This capability facilitates creation of powerful new types of AIs. Some examples include:

  1. Book AIs — allowing authors to turn their books into interactive AIs.
  2. SMB Apps — allowing SMBs to upload their data and use AI to analyze it: AI data analytics without the need for a professional analyst.
  3. Enterprise Apps — allowing various functions in enterprises to leverage AIs for their custom needs.
  4. And countless other use cases.

Data uploaded is automatically parsed, chunked, vector embedded, and stored securely in a vector database.
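A toy sketch of that parse, chunk, embed, store, retrieve pipeline. The hash-based "embedding" stands in for a real embedding model, and a plain Python list stands in for the vector database; both are deliberate simplifications.

```python
# Toy sketch of the chunk -> embed -> store -> retrieve pipeline.
# The hash-based "embedding" and list-based "vector database" are
# simplified stand-ins for real models and stores.
import math

document = (
    "Our refund policy allows returns within 30 days of purchase. "
    "Refunds are issued to the original payment method. "
    "Shipping takes five business days and is free over fifty dollars. "
    "International shipping may take longer depending on customs. "
)

def chunk(text, size=100):
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text, dims=64):
    """Stand-in embedding: hash words into a fixed-size vector."""
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": (embedding, chunk) pairs built at upload time.
index = [(embed(c), c) for c in chunk(document)]

def retrieve(query, k=2):
    """Return the k chunks most similar to the query, to inject into the prompt."""
    q = embed(query)
    return [c for _, c in sorted(index, key=lambda e: -cosine(q, e[0]))[:k]]
```

At answer time, the retrieved chunks are prepended to the model prompt, so the AI can ground its response in the uploaded data.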

Without the use of MindStudio, this requires significant technical expertise (writing code) and large time investment. MindStudio democratizes this complex workflow, making it trivially easy for completely non-technical people to deploy AIs that utilize external data sources.

Automations

AIs built with MindStudio can orchestrate multi-step workflows. These workflows can do all kinds of things including:

  1. Collect runtime input from the user.
  2. Leverage external web endpoints (websites, APIs) in the workflow — for example, get content from a URL and then use it later in the workflow.
  3. Leverage Data Sources configured by the prompt engineer.
  4. Ask the user to upload various types of data.
  5. Leverage each user's personalization data to customize output.
  6. Produce threaded outputs, with rich text output editing, and much more.

New capabilities are being added constantly!
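Conceptually, such a workflow is a chain of steps passing a shared context along. A minimal sketch follows, with the URL fetch and model call stubbed out; the step names and context keys are hypothetical, not MindStudio's actual workflow engine.

```python
# Illustrative sketch of a multi-step workflow as a chain of step
# functions passing a shared context dict along. Hypothetical names;
# not MindStudio's actual workflow engine.

def collect_input(ctx):
    """Step 1: collect runtime input from the user."""
    ctx["topic"] = ctx["user_input"]["topic"]
    return ctx

def fetch_reference(ctx):
    """Step 2: in a real workflow this would GET ctx['url']; stubbed here."""
    ctx["reference"] = f"<content fetched from {ctx['url']}>"
    return ctx

def generate(ctx):
    """Step 3: in a real workflow this would call the chosen model with a
    prompt combining topic, reference material, and personalization data."""
    ctx["output"] = (
        f"Draft about {ctx['topic']}, informed by {ctx['reference']}, "
        f"in a {ctx['profile']['tone']} tone."
    )
    return ctx

workflow = [collect_input, fetch_reference, generate]

ctx = {
    "user_input": {"topic": "AI in medicine"},
    "url": "https://example.com/article",
    "profile": {"tone": "professional"},
}
for step in workflow:
    ctx = step(ctx)
```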

All of this capability is accessible to non-technical prompt engineers (no code required), who can create AIs in minutes that used to require many hours of professional developers writing custom code.

Metadata

Each AI created with MindStudio can be deployed with the push of a button, either privately or publicly. Prompt engineers can configure all kinds of metadata, including Name, Tagline, Thumbnail, Tags, Preview Video, Screenshots, Open-Graph metadata, Rich Text Description, External Links, Visibility, and Pricing Parameters (how much it costs for users to use the AI).

Debugger / Live Draft Previewing

MindStudio allows prompt engineers to test / preview their AI as they are developing it. This comes in the form of a live terminal (that can be used to interact directly with the AI during development), a built-in debugger (which allows for inspection of messages, variables, preamble, prompt, model, and performance), and a live draft preview prior to publishing.

Analytics

Each AI deployed via MindStudio automatically collects analytics data, allowing the prompt engineer to understand the performance and efficiency of their AI, and make relevant adjustments over time.

What’s next?

This is all just the beginning!

The capabilities of foundation models are accelerating — it is hard to point to any other technology that has evolved at the pace of AI. Each week we see massive new advancements.

As these capabilities evolve, it is clear that consumers, and even prompt engineers, can't possibly be expected to keep up. The application layer abstracts all of this ever-changing capability, allowing prompt engineers to focus on logic and interface without the need to constantly refactor backend intelligence.

Welcome to the Application Layer of AI — YOU can CREATE AIs to address all kinds of use cases — your own, consumer, SMB, and enterprise.

Getting started takes 30 minutes — read the documentation, watch some tutorials, and check out over 2k AIs that have already been created.
