ChatGPT prompts for your PM work
These examples are sourced from Lenny's Newsletter (https://www.lennysnewsletter.com/p/how-to-use-chatgpt-in-your-pm-work). Go there for my best prompt hacks.
Prompt engineering plays a crucial role in enhancing the performance of AI language models and ensuring more accurate, relevant, and reliable outputs.
Duplicate this document and continue adding your own!
When you talk with a toddler, you start by explaining the concept and only make your point at the end, once you think they have grasped it. You can improve your prompts the same way:
- **Ask the model what it knows about a subject** -> What do you know about B2B SaaS pricing best practices?
- **Double down on one of the listed items, reiterating the key points you want answered** -> Let's focus on add-on pricing. Please give me some best practices and strategies to implement an add-on for a PLG B2B SaaS company with an average ACV of $YY.
- **Continue with more items if necessary** -> Let's focus on add-on item number X.
- **Final prompt** ->
Act like a pricing expert for B2B SaaS with 10 years of experience helping companies with pricing. Recommend the pricing strategies with the highest chance of improving the conversion rate from free trial user to customer. Consider that the add-on pricing model is preferred. List all the actions a pricing expert would take to improve the conversion rate from user to customer.
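The iterative flow above can be sketched as a growing message history for a chat-completions API. This is a minimal sketch, not the newsletter's implementation: the placeholder replies and the commented-out API call are illustrative assumptions.

```python
# Sketch of the iterative prompting flow as a chat message history.
# Each step appends a user turn, then the assistant's reply, so every
# later question is answered with the earlier context in view.

def build_conversation():
    """Build the multi-turn message list for the pricing example."""
    turns = [
        "What do you know about B2B SaaS pricing best practices?",
        "Let's focus on add-on pricing. Please give me some best practices "
        "and strategies to implement an add-on for a PLG B2B SaaS company.",
        "Act like a pricing expert for B2B SaaS with 10 years of experience "
        "helping companies with pricing. Recommend the pricing strategies "
        "with the highest chance of improving the conversion rate from "
        "free trial user to customer.",
    ]
    messages = []
    for question in turns:
        messages.append({"role": "user", "content": question})
        # In a live session the model's reply is appended here, e.g.:
        # reply = client.chat.completions.create(model=..., messages=messages)
        messages.append({"role": "assistant", "content": "<model reply>"})
    return messages

conversation = build_conversation()
print(len(conversation))  # 6 messages: three user turns, three replies
```

The point of keeping the full history is that the final expert prompt lands after the model has already enumerated and narrowed down the topic, rather than cold.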
This is the best video I found explaining how generative models work:
Prompt engineering is the process of designing, refining, and optimizing prompts to effectively interact with and elicit desired responses from AI language models, such as GPT-4. The main goal of prompt engineering is to improve the quality of AI-generated outputs by crafting well-structured, unambiguous, and context-rich prompts.
Since AI language models are trained on large datasets and learn to generate human-like text, the quality of their responses can be influenced by the way users phrase their queries or statements. Prompt engineering involves various techniques, such as:
- Being explicit: Clearly specifying the desired format or type of response.
- Adding context: Providing relevant background information or context to help the model understand the query better.
- Repeating important information: Restating key points or concepts to emphasize their importance.
- Requesting step-by-step explanations: Asking the model to provide detailed explanations or reasoning behind its response.
- Experimenting with different phrasings: Trying different ways of asking the same question to find the most effective prompt.
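The techniques listed above can be combined into a single prompt template. A minimal sketch follows; the function and field names are illustrative assumptions, not an established API.

```python
def build_prompt(role, context, task, output_format, step_by_step=True):
    """Assemble a prompt that applies the techniques above:
    explicit instructions, added context, and a step-by-step request."""
    parts = [
        f"Act as {role}.",               # adding context via a role
        f"Background: {context}",        # adding relevant background
        f"Task: {task}",                 # being explicit about the task
        f"Respond as {output_format}.",  # being explicit about the format
    ]
    if step_by_step:
        parts.append("Explain your reasoning step by step.")
    return "\n".join(parts)

prompt = build_prompt(
    role="a pricing expert for B2B SaaS",
    context="a PLG company considering an add-on pricing model",
    task="list actions to improve free-trial-to-customer conversion",
    output_format="a numbered list",
)
print(prompt)
```

Experimenting with different phrasings then becomes a matter of swapping field values and comparing outputs, rather than rewriting the whole prompt each time.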
Remember:
- LLMs are good at generating the next syntactically correct word
- they can give the false impression that they actually understand the meaning
- be careful of false narratives.