Take GPT-2, BERT, or any other attention-based language model and apply few-shot learning to some domain. You won't see a difference meaningful enough to matter — not even with GPT-3, which is far larger than the other models. The hype exists because people can do that kind of domain adaptation easily without serving huge ML models themselves; OpenAI already provides the serving. That makes language models easy to explore for developers who don't know enough ML to tune models on their own.
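To be clear about what "few-shot learning" means in this API-driven setting: it is just prompt construction, no gradient updates or fine-tuning at all. A minimal sketch (the sentiment task and labels here are made up for illustration):

```python
# Few-shot "domain adaptation" as pure prompt assembly: the labeled
# examples ride along in the prompt, and the hosted model is asked to
# continue the pattern. No model weights are touched.
examples = [
    ("great movie, loved it", "positive"),
    ("waste of two hours", "negative"),
]
query = "the acting was superb"

# Each example becomes a "Review: ... / Sentiment: ..." pair.
prompt = "\n".join(
    f"Review: {text}\nSentiment: {label}" for text, label in examples
)
# The query is appended with the label left blank for the model to fill in.
prompt += f"\nReview: {query}\nSentiment:"

print(prompt)
```

This string would then be sent to a hosted completion endpoint; the whole "adaptation" lives in the prompt, which is why no ML expertise or model serving is required.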