CoT - Chain of Thought

CoT is a very popular way of prompting AI language models. In very simple terms, it asks the model to take its time when answering. - ref

The goal isn't just for the model to take longer to process, but for it to reason explicitly as it works through each step.

A CoT example is as simple as adding an instruction to your prompt:

Without CoT:

Prompt: {BrainTeaser}

With CoT:

Prompt: Your task is to solve the brainteaser encapsulated between quotes. Please follow a step-by-step approach that reasons over every step of the process.

"{BrainTeaser}"

Although this isn’t fully proven, the longer the answer, the better.

Although nobody can yet fully explain why CoT works, researchers have proposed two possible explanations:

  • “Models need tokens to think”: As Andrej Karpathy put it at a recent Microsoft conference, the more tokens an answer contains, the stronger it tends to be, because each next-word prediction is grounded in more context.
  • More computational power: Alternatively, a longer answer means the model spends more compute on your question, and more compute generally means better answers.

Bottom line: although this isn’t fully proven, the longer the answer, the better. - ref