How to Think About Integrating AI into Your Business

We are deep in the AI hype cycle. Some people in your org are probably worried that HAL 9000 is weeks away and their jobs are at risk. Some think LLMs are just the next in a long line of neat but useless tech baubles, sure to be replaced within a year. Others think that, if you miss the AI train, your competitors will gain a huge advantage. Here’s some perspective from the leading edge on how to think about integrating AI into your business.

TL;DR

My summary: Speed up your work with off-the-shelf AI tools. Revisit offering services that used to be too costly. Treat AI models as interchangeable commodities.

GPT-4’s summary: Integrate off-the-shelf AI tools into your business to speed up processes and introduce previously unaffordable services cost-effectively. Exercise caution when producing and sharing AI-generated content; human oversight is crucial to mitigate errors and biases, and always be mindful about sharing data with service providers.

Here’s how LLMs work, briefly.

The current generation of large language models (GPT-4, Gemini, LLaMA) work like an autocomplete system. During training, the model reads enormous amounts of text and repeatedly adjusts its numeric weights so that, given all the words so far, it gets better at predicting which word comes next. That’s the training phase. It builds the model.

To get something out of the model, you send it a message and ask it what the most plausible next word in the conversation would be. Rinse and repeat. That’s the inference phase.

The genius of an LLM is that if you train, not on one or two documents, but on a sizable portion of the web pages on the internet, you build a surprisingly robust list of numeric weights for which words should plausibly come next in a dialogue.
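To make the train-then-infer loop concrete, here’s a toy autocomplete in Python. It “trains” by counting which word follows which — a drastic simplification of the billions of learned weights in a real LLM, but the same shape: build statistics from text, then repeatedly predict the most plausible next word.

```python
from collections import Counter, defaultdict

def train(text):
    """Training phase: record, for each word, how often each next word follows it."""
    model = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, start, length=5):
    """Inference phase: repeatedly pick the most plausible next word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # nothing ever followed this word in training
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the ship sailed the sea and the ship sailed home"
model = train(corpus)
print(generate(model, "the"))
```

A real LLM replaces the word-pair counts with a neural network conditioned on the entire conversation so far, but the inference loop — predict a word, append it, repeat — is the same idea.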

Producing a better model means training on more data (time-consuming) or better data (costly).

You should not build your own LLM.

OpenAI reportedly spent upwards of $100 million training GPT-4. Unless you have an extraordinarily solid value-delivery plan, building your own LLM is a ruinous idea right now. However, LLMs are ridiculously cheap to use judiciously, especially when weighed against the cost of human time and opportunity cost.

You should build and share effective prompts.

LLMs use your entire conversation to decide what you want to hear next. Prompt engineering is nothing more than developing a careful framing for each question that you ask. Every LLM is a little different, and every conversation with them is going to include some context that comes before your actual questions. That context can be useful to share with your team.

Say you figure out that if you ask an LLM to talk like a pirate, you get more entertaining responses. You could share that with your team in a few ways:

  • Share prompts: Every LLM conversation begins with a behind-the-scenes prompt like, “You are an AI assistant trained by OpenAI…”. To produce more consistent results for your team, consider adding your own prefix like, “I am a manufacturer of forklift batteries, and you are a writing assistant that always talks like a pirate…” and share that among your team.
  • CustomGPTs: If you find value in shared prompts, consider using something like OpenAI’s CustomGPTs to share pre-built assistants with custom personalities. They’re simple to make and work like a shared prompt, but easier. They’re especially handy if you’ve got some folks who are super into prompt engineering and getting good results out of an LLM and others who just want to use the new thing without fussing. Have the former group build the CustomGPTs and the latter use them.
  • Regular old gossip: People in your organization are probably pretty good at sharing information like, “Hey, when you talk to so and so make sure you give them the bottom line up front,” or, “Don’t bring up [certain topic] around them.” That sort of thing works for LLMs too, and is how techniques for working effectively with LLMs will spread without intervention.
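If you want to bake a shared prefix into tooling rather than paste it by hand, the sketch below shows the general shape: prepend a team-wide system message to every conversation before sending it to a chat-style API. The `TEAM_PREFIX` text and the `build_messages` helper are illustrative names, not any vendor’s actual API.

```python
# A team-shared prompt prefix, checked into version control so everyone uses the same one.
TEAM_PREFIX = (
    "I am a manufacturer of forklift batteries, and you are a writing "
    "assistant that always talks like a pirate."
)

def build_messages(user_prompt):
    """Prepend the shared system prompt to a user's question.

    The returned list matches the chat-message shape most LLM APIs expect:
    a list of {"role": ..., "content": ...} dicts.
    """
    return [
        {"role": "system", "content": TEAM_PREFIX},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Draft a product description for our 48V battery line.")
```

Keeping the prefix in one shared file gives you the same benefit as a CustomGPT — consistent results across the team — while staying portable across providers.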

Be mindful of what you share with LLM service providers.

When you use any LLM that isn’t wholly offline and local, you’re sending information to a third party. If that’s an unacceptable risk, you probably don’t want to use LLM services yet.

Some, but not many, LLMs are small enough to run on your own hardware. Many organizations will have security requirements that rule out sending information to a third-party service provider.

We saw a similar interplay of concern and value play out during the transition to cloud computing. Storing data in someone else’s data center is now a widely accepted practice, with well-understood trade-offs and clear exceptions.

LLMs are having a moment like that. Service providers like OpenAI and Anthropic are generally pretty good about guaranteeing that they won’t use your requests to train the next version of their models. Check with your legal and compliance teams. As with the cloud, expect the innovative forces within your company to chafe at the pushback while the sustaining forces urge caution.

Explore widely, publish carefully.

We’re still early in the LLM development cycle. The risks are well understood: they make up facts, they encode historical biases, and their quality is wildly variable. I like to think about an LLM like GPT-4 as an absent-minded genius.

They’re genius because they are surprisingly good at some things. They’re absent-minded because you cannot trust them. You absolutely must have a way to test what they say if you’re asking about facts and to review what they produce if you’re asking for content.

When you’re thinking about using an LLM, try it on everything. You may be surprised how much value it adds to odd tasks, and it may open up new possibilities for you. But be cautious about publishing LLM output without human oversight. At best, it produces C+ content. I’d wager that in most places within your business, your bar for content is higher than that.

Use LLMs to make some things quicker, cheaper.

There are a ton of ways to integrate LLMs into your work.

Here are a few ideas to get you started with integrating AI into your business today. You can drop any of these prompts into ChatGPT with a ChatGPT Plus subscription or, if your org has API keys from OpenAI, you can use any client app (I like MindMac on macOS) to try these out.

Write C+ prose that a good editor can revise.

Try a prompt like: “Write me a three-paragraph history of the rise of fiber optics and provide three resources I can use to learn more.”

Draw C+ images to help you come up with different ways to visualize concepts.

Try a prompt like: “Draw 5 whiteboard-style diagrams that illustrate the concept of fast and slow feedback cycles in software development.”

Summarize well-understood business processes and techniques.

Try a prompt like: “Summarize how the theory of constraints applies to software development in 3 sentences or less and provide a resource I can use to learn more.”

Help developers write one-off scripts more quickly and thoroughly.

Try a prompt like: “Write me a bash script that uses JQ to extract a list of images with defined bounding boxes from a COCO JSON file and outputs them as an unadorned, newline-separated list.”
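For reference, here’s roughly what that extraction looks like in Python rather than bash/jq, assuming the standard COCO layout (images carry `id` and `file_name`; annotations reference images by `image_id` and carry a `bbox`). It’s handy for sanity-checking whatever script the LLM hands back.

```python
import json

def images_with_bboxes(coco):
    """Return file names of images that have at least one bounding-box annotation."""
    boxed_ids = {a["image_id"] for a in coco["annotations"] if a.get("bbox")}
    return [img["file_name"] for img in coco["images"] if img["id"] in boxed_ids]

# A minimal made-up COCO document for illustration.
coco = json.loads("""{
  "images": [{"id": 1, "file_name": "cat.jpg"}, {"id": 2, "file_name": "empty.jpg"}],
  "annotations": [{"image_id": 1, "bbox": [10, 20, 50, 40]}]
}""")
print("\n".join(images_with_bboxes(coco)))  # the unadorned, newline-separated list
```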

Help developers continue a pattern more quickly than typing it all by hand.

How about this prompt: “Extend this pattern by 10 more items: 202212, 202301, 202302.” This gets the rough idea across, but the real power is in bigger, more complex patterns. This is most useful when it’s built into a programmer’s text editor.
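This particular YYYYMM pattern is simple enough to check deterministically. Here’s a small sketch of the continuation the LLM should produce, useful for spot-checking its output before trusting it on fancier patterns.

```python
def extend_yyyymm(last, count):
    """Generate `count` more YYYYMM strings continuing from `last`."""
    year, month = int(last[:4]), int(last[4:])
    out = []
    for _ in range(count):
        month += 1
        if month > 12:  # roll over into January of the next year
            year, month = year + 1, 1
        out.append(f"{year}{month:02d}")
    return out

print(extend_yyyymm("202302", 10))  # ['202303', ..., '202312']
```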

Analyze sentiment in survey responses.

Try a prompt like: “Here’s a CSV of survey responses. For each response, add a new column indicating the emotional valence of the response on a 5-point scale, with 5 being very happy and 1 being very angry.”
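If you’re scripting this rather than pasting into a chat window, the usual move is to build the prompt from the CSV programmatically. A minimal sketch using only the standard library (the instruction text mirrors the prompt above; the survey data is made up):

```python
import csv, io

INSTRUCTION = (
    "Here's a CSV of survey responses. For each response, add a new column "
    "indicating the emotional valence of the response on a 5-point scale, "
    "with 5 being very happy and 1 being very angry.\n\n"
)

def build_sentiment_prompt(csv_text):
    """Validate that the CSV parses, then prepend the instruction for the LLM."""
    rows = list(csv.reader(io.StringIO(csv_text)))  # fail early on malformed input
    if not rows:
        raise ValueError("empty CSV")
    return INSTRUCTION + csv_text

survey = "response\nLove the new forklift batteries!\nStill waiting on my refund."
prompt = build_sentiment_prompt(survey)
```

Sending `prompt` to your LLM of choice and reviewing a sample of the scored rows by hand keeps the absent-minded-genius problem in check.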

Provide critique if you’re feeling overconfident.

Try a prompt like: “My startup idea is to build a two-sided marketplace specifically for suppliers of fill dirt and people with uneven lawns. Give me ten reasons that this isn’t a good business idea.”

Summarize long email threads or meeting notes.

Try a prompt like: “You are an extremely accurate note-taker. Summarize the key points in this email thread, and pull out quotes for any interesting details.”
