I'm interested in using an NLP model to produce short (one-sentence) abstractive summaries of web pages, giving the model a set of commonly occurring HTML content from each page (for example, heading tags, meta tags, the title, and so on).
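For concreteness, the per-page input I have in mind would be gathered with something like the sketch below (the tag and field choices are just an example, not a fixed schema):

```python
# Collect the "commonly occurring HTML content" for one page.
import requests
from bs4 import BeautifulSoup

def page_features(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta["content"].strip() if meta and meta.has_attr("content") else ""
    headings = [h.get_text(" ", strip=True) for h in soup.find_all(["h1", "h2", "h3"])]

    return {"title": title, "description": description, "headings": headings}
```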
In my experience, large language models can do this quite effectively. However, I'm interested in alternatives to LLMs -- how well they perform, how costly they are in terms of computational resources compared to LLMs, and how much setup they might require compared to the comparatively simple prompt-and-respond format of commercial LLMs.
The abilities of classical NLP methods are quite poor in comparison, but I don't know whether the gap is only in producing text (the abstractive part) or also in recognizing what a page is about.
But their time and memory costs are minimal in comparison; the algorithms are not huge number crunchers the way stacked neural networks are. The sketch below shows the kind of lightweight baseline I mean.
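A minimal sketch, assuming plain word-frequency scoring over whatever text you extracted (title, headings, meta description): it can only select an existing sentence, never write a new one, and it needs no trained weights at all.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on", "with"}

def extractive_one_liner(text: str) -> str:
    # Split into sentences on end punctuation, then score each sentence by the
    # average document-level frequency of its (non-stopword) words.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> float:
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    # Return the highest-scoring existing sentence -- nothing new is generated.
    return max(sentences, key=score) if sentences else ""
```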
Setting up a classical NLP pipeline is more involved, because the available toolsets/algorithms are rather unfinished/unpolished and you largely end up building your own, I guess, whereas the LLMs are already extremely polished and integrated for you by Google/Microsoft/Apple.
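For contrast, the hosted-LLM version of the whole task collapses into one prompt-and-respond call. This sketch assumes the OpenAI Python client and a placeholder model name; any commercial chat API looks much the same.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def llm_one_liner(features: dict) -> str:
    # Pack the extracted HTML fields into a single prompt and ask for one sentence.
    prompt = (
        "Summarize this web page in one sentence.\n"
        f"Title: {features['title']}\n"
        f"Meta description: {features['description']}\n"
        f"Headings: {'; '.join(features['headings'])}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
```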
I would probably just follow the industry and guess that only LLMs made this really feasible; there must be a reason why everyone is producing bots now, when before they were pretty sparse.