

Single-Line Suggestions

One of the earliest AI-native features was Microsoft's GitHub Copilot, with the simple feature of displaying inline suggestions for how to complete the current line, which can be accepted with the Tab key.

Format Example

// the cursor sits at the end of the typed code; the greyed-out part of the line is the inline AI suggestion
const thisIsTypedCode = 'greyed out is an inline ai suggestion';
const listOfPosts: Post[] = fetchPosts();
const foundPosts: number = listOfPosts.length;

The suggested code is not actually in the file; it's only displayed visually until the user accepts it, which inserts the code & places the cursor at the end of the suggestion.


Multi-Line Suggestions

Inline suggestions can not only do single-line completions, but also multi-line ones, up to entire blocks of code.

Multi Line Examples

Most modern inline suggestion features can predict that you're very likely to need certain lines of code in bundles, and will suggest those.

const postsResponse = await axios.get('posts.example.com');
const posts = postsResponse.data;

Because in 90% of cases you would want to access the response's data, it reaches slightly past its current-line completion to complete this block of thought.


This proactive reaching can go as far as entire function bodies or predictive boilerplate.

class Post {
 final String title;
 final String content;
 
 Post({required this.title, required this.content});
}

Types of Inline Suggestion Behavior

Line Completion

Regular Inline Suggestions only complete (add to) the current line(s). They don't change anything about the user's manually typed code, respecting the authority of user code.

This behavior is the easiest to understand & follow for the user.
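
To make the distinction concrete, here is a minimal sketch of a pure line completion, borrowing the posts list from the later examples; the exact split between typed code and suggestion is made up for illustration.

// typed by the user (cursor at the end of the line):
//   final goodPosts = posts.where(
// the suggestion is only appended after the cursor; nothing already typed is touched:
final goodPosts = posts.where((post) => post.rating > 5).toList();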

Line Replacement

More advanced Inline Suggestions, like those from Cursor, can go as far as replacing or removing entire lines/blocks of code, overwriting the user's initial input.

This behavior is more intrusive, initially often breaking the user's flow of thought. But once you've adjusted to it, it also enables much deeper possibilities, like custom emmets, refactoring & more, which we will explore in the upcoming sections.
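
As a hypothetical sketch of a line replacement (the concrete code is made up for illustration): the user's hand-written loop gets rewritten into a single expression instead of just being extended.

// typed by the user:
//   final titles = <String>[];
//   for (final post in posts) { titles.add(post.title); }
// replacement suggested by the tool, overwriting the lines above:
final titles = posts.map((post) => post.title).toList();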


Guiding Inline Suggestions

class Post {
 final String title;
 final String content;

In the real world, title and content are not the only properties that a Post class would need.

Obviously, trying to predict the entire set of class properties, constructors & methods just based off a simple class Post would be an impossible task even for the best human developers.

A good developer would first ask for a lot more information on what the class is supposed to be used for, then ask which properties the model needs to have, etc.

A good developer would ask for more context.

Inline suggestions don't have that luxury, since they're forced to answer most of the time and have no interface for a feedback loop.

Inline suggestions have to make the most informed guesses they can, based on the context available to them, usually in a one-shot attempt.

In recent updates, some tools have migrated their inline suggestions to a continuous context implementation, which allows for more informed, more complex & AI-native techniques like room scouting or cut & paste.

Find more in Inline Suggestions with Continuous Context; this article explores one-shot behavior for now.


Guiding through context

The more of your existing code the AI can base its suggestions on, the more aligned the suggestions will be with your existing codebase, code style & intention.

In a blank scenario, the AI can rarely know what the user wants from it. Just telling it to complete a joinedList will make it cycle through all kinds of possibilities for how this joined list could be created.

Since it has no surrounding context, variables or anything else to join the list from, it often tries to suggest a fully independent completion.

const joinedList = [...[1, 2, 3], ...[4, 5, 6]];

The moment we add some surrounding context (in this case, two very clearly labeled lists), probably even a 3-year-old could figure out what we want to create the joined list out of.

const listA = ['a'];
const listB = ['b', 'c'];
const joinedList = [...listA, ...listB];

That's the reason why early-stage "ChatGPT copy-paste" techniques of asking for big chunks of code, often with lacking context, usually led to unproductive workflows; and why having the completions integrated into your actual codebase or IDE, where the AI can pick up on a far larger amount of context, style & hints, is a much better approach.


Descriptive function names don't only keep your code more readable; they're also the easiest way of providing surrounding context & guiding the suggestions inside them.

You again don't need a PhD in software engineering to know that a function called fetchPosts & an imported API class will most likely use that class's methods to retrieve the posts.

It also figures that similarly named or identical properties will most likely be passed on & that the function will most likely return the posts in the end.

import 'package:api/api.dart';
 
Future<List<Post>> fetchPosts(int count) async {
    final posts = await Api.fetchPosts(count: count);
    return posts;
}

63 characters, compressed into a single tab press. 2-4s of typing depending on your speed & existing 'oldschool autocomplete' setup, compressed into a fraction of a second.


But what if you want to do more than just fetch?

For example, say you want to add some basic debugging information about the fetch duration in a single completion.

That's where priming the response helps to nudge the AI into the desired direction in its possibility space.

This is where we switch from expecting the AI to magically read our minds, to just asking it to complete our thoughts.

Starting to type the intended line works like placing words in the AI's thoughts, which allows it, more often than not, to easily catch onto the right idea.

If you create a new stopwatch called fetchDuration inside a function called fetchPosts, even YouTube Shorts users would likely figure out what to do with that stopwatch and its result.

import 'package:api/api.dart';
 
Future<List<Post>> fetchPosts(int count) async {
    Stopwatch fetchDuration = Stopwatch();
    fetchDuration.start();
    final posts = await Api.fetchPosts(count: count);
    fetchDuration.stop();
    print('${count} posts took ${fetchDuration.elapsedMilliseconds}ms');
    return posts;
}

We're again not expecting the AI to telepathically know that we want to add network speed debugging; we're just giving it enough context to describe the intention of our current line of thought.

This result also doesn't have to be the final answer; it can be treated as a directional step towards the solution, one that is faster or easier to work off of than writing everything out in full.


Details matter

When giving the model additional context, especially through priming, exact words and formulations can make a stark difference.

These two similar primings produce two very different results.

only good vs. no bad
 
List<Post> posts = [
   Post(title: 'A good post', rating: 10),
   Post(title: 'Decent mid post', rating: 5),
   Post(title: 'Some bad post', rating: 1),
];
 
List<Post> onlyGoodPosts = posts.where((post) => post.rating == 10).toList();
 
List<Post> noBadPosts = posts.where((post) => post.rating >= 5).toList();

Response Priming can also be used for formatting techniques, like quick Q & A chats without having to leave the editor.

Since these inline completions are trained to produce code in most scenarios, asking them a question about something will more often produce a code answer than a text answer.

This can be circumvented by priming with e.g. the Q: / A: format.

// Q: Does where alter the original list or return a new one?
// A: It returns a new list.
posts.where((post) => post.rating > 5);

With current (Dec 2024) AI models, the context understanding of simple inline suggestions can go as far as grasping data flow, error & happy paths, and more.

In this example, a simple if is enough for the AI to understand the existing data flow & catch the missing check for whether the user actually wants to delete the post.

Using just 2 characters, we placed a "hey, I think we should add some kind of check here" in the AI's thoughts, and in combination with the surrounding code, the AI picked up on what's missing.

Future<void> handleDeleteConfirmationModal(Post post) async {
    final userAction = await showDialog<DeleteConfirmationAction>(
        context: context,
        builder: (context) => DeleteConfirmationDialog(post: post),
    );
 
    if (userAction != DeleteConfirmationAction.delete) {
        return;
    }
 
    final deleteResponse = await Api.deletePost(post.id);
 
    if (!deleteResponse.success) {
        showSnackBar(context, 'Failed to delete post');
        return;
    }
 
    Navigator.of(context).pop();
}

Even if the initial suggestion is not picking up on our intention, either because a different check would've also been appropriate or because it's just getting it wrong:

    if (post.id == null) {
        print('No post id found');
        return;
    }
 
    final deleteResponse = await Api.deletePost(post.id);

adding another two characters limits the AI's possibility space further, and it catches on once again.

    if (userAction != DeleteConfirmationAction.delete) {
        return;
    }
 
    final deleteResponse = await Api.deletePost(post.id);

Exploring the Solution Space

Looking at this technique from a different angle allows us to see early signs of a shift away from directly interacting with code, towards a more abstract exploration of possibilities by walking through the options.


Guiding through comments

What if we could limit this possibility space before the AI even starts to think?

That's where we can use comments in our code to lay out our thoughts & intentions to the AI, directly in the surrounding context.

Future<void> handleDeleteConfirmationModal(Post post) async {
    final userAction = await showDialog<DeleteConfirmationAction>(
        context: context,
        builder: (context) => DeleteConfirmationDialog(post: post),
    );
 
    // if action is not delete, return
    if (userAction != DeleteConfirmationAction.delete) {
        return;
    }
 
    final deleteResponse = await Api.deletePost(post.id);

We've effectively turned 73 characters into 42 and produced the same code. A decrease of 42%, with a still quite detailed comment. But depending on the generated code, this can easily reach a reduction in typed out code of 90%+.

class Post {
    final String title;
    final String content;   
 
    Post({required this.title, required this.content});
 
    // copyWith
    Post copyWith({String? title, String? content}) {
        return Post(title: title ?? this.title, content: content ?? this.content);
    }
}

Turning 138 characters into 12 characters. Reducing by 91%.

Whether that ultimately makes you faster, more productive, or whatever else your metric of a helpful tool is, is up to you. But occasional reductions of 90% in time & mental effort can be a strong selling point.


These comments can include parameters & specific logic hints, too.

List<Post> posts = [
    Post(title: 'My fav post', rating: 10),
    Post(title: 'Decent mid post', rating: 5),
    Post(title: 'Some bad post', rating: 1),
];
 
// only posts with rating > 5
List<Post> onlyGoodPosts = posts.where((post) => post.rating > 5).toList();

This technique can also be used to lay out longer plans, allowing the AI to plan ahead in its suggestions.

Generating the createNewPost function through repeated few-line completions leads to "safe" suggestions for smaller parts of the function, often undesirable in real-world applications.

Each group represents an individual accepted completion.

Future<Post?> createNewPost({
    required String title,
    required String content,
}) async {
    final newPost = await Api.createPost(title, content);
    if (newPost == null) {
        return null;
    }
 
    return newPost;
}

Breaking up the process into smaller, logically grouped steps lets the AI know about intentions, produces more streamlined & cleaner code, and enables individual adjustments to sections of the completion & more.

Future<Post?> createNewPost({
    required String title,
    required String content,
}) async {
    // validate title & content, throw error if empty
    if (title.isEmpty || content.isEmpty) {
        throw Exception('Title and content are required');
    }
 
    // check if post with same title already exists
    // if yes, throw error
    final existingPost = await Api.getPostByTitle(title);
    if (existingPost != null) {
        throw Exception('Post with same title already exists');
    }
 
    // create post, check if post was created successfully
    // if yes, return new post, else log error & fail silently
    final newPost = await Api.createPost(title, content);
    if (newPost == null) {
        log('Failed to create post');
        return null;
    }
 
    return newPost;
}

Inline Editing

This type of section-based prompting through comments quickly evolves into the inline editing technique.


I often do this exact comment layout in my code anyway, even when I'm writing the code out myself, to explore possible solutions.

So these rough comment sketches, which used to be a discarded side effect, now act as jump-starter templates to expand the code from.


Pseudo Code

If you prefer to do exploratory coding while looking for solutions: instead of writing out natural-language comments, you can also write pseudo code & have the AI generate actual code from it.

List<Post> processPosts(List<Post> posts) {
  // if posts > 5 => posts.removePast(5)
  posts = posts.take(5).toList();
 
  // posts.sortBy(title, alphabetically)
  posts.sort((a, b) => a.title.compareTo(b.title));
 
  // for posts
  // if rating < 3
  // email.sendToAuthor("remove post"); add del link in body
  for (Post post in posts) {
      if (post.rating < 3) {
          emailService.sendEmail(
              recipient: post.authorId,
              subject: "Your post has reached a rating of ${post.rating}, please consider removing it.",
              body: "Remove with this link: ${getPostDeleteUrl(post)}",
          );
      }
  }
 
  return posts;
}

'Compute' Completions

Inline suggestions or similar techniques can then also be used for quick compute answers.

Simple conversions like rgb/hex or sometimes even string/base64 can be done in a single completion.

If you only have values in a specific format at hand, this can be a handy time saver.

Widget render(BuildContext context) {
   return Container(
       // rgb: 122, 250, 33 => hex: #7AFA21
       // base64: SGVsbG8gV29ybGQ= => string: "Hello World"
       color: Color(0xFF7AFA21),
       child: Text('Hello World'),
   );
}

The more accustomed you get to the AI's behavior, the more you can strip the comments (prompts) down to minimal bits of information, almost falling back to a broken level of language.

In the end it doesn't matter what you're giving the AI for context, as long as you and the AI know what you're both talking about.

List<Post> posts = [
    Post(title: 'My fav post', rating: 10),
    Post(title: 'Decent mid post', rating: 5),
    Post(title: 'Some bad post', rating: 1),
];
 
// del rating < 3
posts.removeWhere((post) => post.rating < 3);
 
// sort rating asc
posts.sort((a, b) => a.rating.compareTo(b.rating));
 
// UPP title
posts = posts.map((post) => post.copyWith(title: post.title.toUpperCase())).toList();
 
return posts;

Not only does this increase the speed of finding a matching suggestion by a lot; it also allows you to keep your mind on the important question of what to do, rather than trying to remember whether it was a > b or b > a to sort ascending, reducing context switching.


Once you drop down to the minimal bits of information, you've successfully reached the technique of custom emmets.

Widget render(BuildContext context) {
    return Container(
        // mb10
        margin: EdgeInsets.only(bottom: 10),
        // grd b/w top t b
        decoration: BoxDecoration(
            gradient: LinearGradient(
                colors: [Colors.black, Colors.white],
                begin: Alignment.topCenter,
                end: Alignment.bottomCenter,
            ),
        ),
        child: Text('Hello World'),
    );
}

Custom Emmets

Like existing emmets for e.g. HTML & CSS or IDE-specific emmets, inline suggestions can expand valid emmets (mb10 → margin bottom, 10px) as well as pseudo-emmets (grd b/w top t b → gradient black & white, top to bottom), completely language agnostic.

Guiding through system prompts

Most AI-enhanced IDEs & features allow for some kind of system prompt customization for their completions.

This happens either through dedicated files like .cursorrules for Cursor, or through settings like github.copilot.chat.codeGeneration.instructions in VS Code/Copilot.


These types of system prompts are great places to store globally available information & internal thoughts for the AI.

  • Include rules for code styles, naming conventions or best practices.
  • Define specific version numbers for languages or packages.
  • Dictate behavior like chain-of-thought reasoning or self-analysis.
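
As a sketch, a .cursorrules-style rules file for the Dart/Flutter examples in this article could look something like this; the individual rules are purely illustrative, not a recommended set.

# Illustrative project rules
- We use Dart 3 / Flutter; write null-safe, idiomatic Dart.
- Prefer final over var; avoid dynamic.
- Data classes like Post are immutable & expose copyWith.
- All network access goes through the Api class; never call HTTP clients directly.
- Prefer early returns for error paths; keep the happy path last.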

And if in doubt, just ask the AI what it thinks about your prompt.


– Be human. Be kind. Do better.