jamie_ca a day ago

As an RSS user, I would love an RSS feed of the main page content, with one entry per story; over 5.5 is a perfectly reasonable baseline.

Also: It'd be great if you had a feed tag in your HTML head, so RSS readers could pick it up straight out of your homepage URL instead of needing to manually hunt for the right RSS link.
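
For reference, the standard autodiscovery mechanism is a <link rel="alternate" type="application/rss+xml" href="..."> tag in the page's <head>. Here's a minimal sketch (my own illustration, not the site's actual markup) of how a reader can pick a feed up from a homepage URL, using only the Python standard library:

    # Hypothetical sketch: autodiscover RSS/Atom feeds advertised in a page's <head>.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class FeedLinkParser(HTMLParser):
        """Collects href values of <link rel="alternate" type="application/rss+xml"> tags."""
        def __init__(self):
            super().__init__()
            self.feeds = []

        def handle_starttag(self, tag, attrs):
            if tag != "link":
                return
            a = dict(attrs)
            rel = (a.get("rel") or "").lower()
            mime = (a.get("type") or "").lower()
            if "alternate" in rel and ("rss" in mime or "atom" in mime) and a.get("href"):
                self.feeds.append(a["href"])

    def discover_feeds(page_url: str) -> list[str]:
        """Fetch a page and return absolute URLs of any feeds it advertises."""
        html = urlopen(page_url).read().decode("utf-8", errors="replace")
        parser = FeedLinkParser()
        parser.feed(html)
        return [urljoin(page_url, href) for href in parser.feeds]

    # Example usage (URL chosen purely for illustration):
    # print(discover_feeds("https://www.newsminimalist.com/"))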

  • kevincox 14 hours ago

    100%. The current implementation is "RSS as a newsletter lover would want it", but there is already the newsletter for that. If I want batching or similar, my reader will handle it; I think it would be best to just have items appear on the feed as they happen.

  • yakhinvadim a day ago

    Ah, I didn't know it was a thing! I'll add it to the HTML head.

  • voisin a day ago

    I second this. It would be a great feature.

dvh a day ago

So where is the RSS feed of the most important news per day?

  • yakhinvadim a day ago

    I know it's not going to be popular, but to cover the cost of running ChatGPT on that many articles, I made it part of the premium subscription: https://www.newsminimalist.com/premium#rss

    • DrPhish a day ago

      Do you need realtime results, or is an ongoing queue of article analysis good enough? Have you considered running your own hardware with a frontier MoE model like deepseek v3? It can be done for relatively low cost on CPU depending on your inference speed needs. Maybe a hybrid approach could at least reduce your API spend?
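
      To make the hybrid idea concrete, here's a rough sketch of routing requests to a local OpenAI-compatible server first and falling back to the OpenAI API. The endpoint URL, model names, and prompt are my assumptions for illustration, not the site's actual setup:

          # Hypothetical hybrid routing: try a local OpenAI-compatible server
          # (e.g. llama.cpp or vLLM serving a DeepSeek model), fall back to the
          # OpenAI API if the local endpoint fails.
          from openai import OpenAI, OpenAIError

          local = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
          cloud = OpenAI()  # reads OPENAI_API_KEY from the environment

          def rate_article(text: str) -> str:
              messages = [
                  {"role": "system", "content": "Rate this article's significance from 0 to 10."},
                  {"role": "user", "content": text},
              ]
              try:
                  resp = local.chat.completions.create(model="deepseek-v3", messages=messages)
              except OpenAIError:
                  resp = cloud.chat.completions.create(model="gpt-4o-mini", messages=messages)
              return resp.choices[0].message.content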

      Source: I run inference locally and built the server for around $6k. I get upwards of 10 t/s on deepseek v3.

      PS: thank you for running this service. I've been using it casually since launch and find it much better for my mental health than any other source of news I've tried in the past.

      • yakhinvadim a day ago

        Thank you so much! Always glad to see long-time readers.

        There was a period when I considered switching to an open-source model, but every time I was ready for a switch, OpenAI released a smarter and often cheaper model that was just too good to pass up.

        Eventually I decided that the potential savings are not worth it in the long term - it looks like LLMs will only get cheaper over time and the cost of inference should become negligible.