Ask HN: What are you working on? (March 2025)
383 points by david927 4 days ago
What are you working on? Any new ideas that you're thinking about?
Hi Enis, this seems like a very interesting project. My team and I are currently working with non-stationary physiological and earthquake seismic public data, mainly based on time-frequency distributions, and the results are very promising.
Just wondering if the raw data you've mentioned is available publicly so we can test our techniques on it, or if it's only available through research collaborations. Either way, I'm very interested in the potential use of our techniques for polar research in the Arctic and/or Antarctica.
Hi teleforce, thanks! Your project sounds very interesting as well.
That actually reminds me, at one point, a researcher suggested looking into geophone or fiber optic Distributed Acoustic Sensing (DAS) data that oil companies sometimes collect in Alaska, potentially for tracking animal movements or impacts, but I never got the chance to follow up. Connecting seismic activity data (like yours) with potential effects on animal vocalizations or behaviour observed in acoustic recordings would be an interesting research direction!
Regarding data access:
Our labeled dataset (EDANSA, focused on specific sound events) is public here: https://zenodo.org/records/6824272. We will be releasing an updated version with more samples soon.
We are also actively working on releasing the raw, continuous audio recordings. These will eventually be published via the Arctic Data Center (arcticdata.io). If you'd like, feel free to send me an email (address should be in my profile), and I can ping you when that happens.
Separately, we have an open-source model (with updates coming) trained on EDANSA for predicting various animal sounds and human-generated noise. Let me know if you'd ever be interested in discussing whether running that model on other types of non-stationary sound data you might have access to could be useful or yield interesting comparisons.
You should train a GPT on the raw data, and then figure out how to reuse the DNN for various other tasks you're interested in (e.g. one-shot learning, fine-tuning, etc). This data setting is exactly the situation that people faced in the NLP world before GPT. I would guess that some people from the frontier labs would be willing to help you, I doubt even your large dataset would cost very much for their massive GPU fleets to handle.
Hi d_burfoot, really appreciate you bringing that up! The idea of pre-training a big foundation model on our raw data using self-supervised learning (SSL) methods (kind of like how GPT emerged in NLP) is definitely something we've considered and experimented with using transformer architectures.
The main hurdle we've hit is honestly the scale of relevant data needed to train such large models from scratch effectively. While our dataset's ~19.5-year total duration is massive for ecoacoustics, a significant portion of it is silence or ambient noise. This means the actual volume of distinct events or complex acoustic scenes is much lower than in the densely packed corpora typically used to train foundational speech or general audio models, making our effective dataset size smaller in that context.
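To make the sparsity point concrete, here's a hypothetical minimal sketch (not our actual pipeline) of the kind of energy-threshold filtering you might run before pre-training, to estimate how much of a recording is quiet/ambient:

```python
import numpy as np

def drop_quiet_windows(audio, sr, win_s=1.0, rel_db=-40.0):
    """Keep only windows whose RMS energy is within rel_db of the loudest window."""
    win = int(sr * win_s)
    windows = [audio[i:i + win] for i in range(0, len(audio) - win + 1, win)]
    rms = np.array([np.sqrt(np.mean(w ** 2)) + 1e-12 for w in windows])
    db = 20 * np.log10(rms / rms.max())
    return [w for w, d in zip(windows, db) if d >= rel_db]

# Synthetic example: 2 s of near-silence followed by 2 s of a 440 Hz tone.
sr = 16000
quiet = 1e-5 * np.random.randn(2 * sr)
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(2 * sr) / sr)
kept = drop_quiet_windows(np.concatenate([quiet, tone]), sr)
# Only the two tone windows survive; the quiet half is dropped.
```

In practice the threshold and window length would need tuning per deployment, since "ambient" in the Arctic is very different from mic to mic.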
We also tried leveraging existing pre-trained SSL models (like Wav2Vec 2.0, HuBERT for speech), but the domain gap is substantial. As you can imagine, raw ecoacoustic field recordings are characterized by significant non-stationary noise, overlapping sounds, sparse events we care about mixed with lots of quiet/noise, huge diversity, and variations from mics/weather.
This messes with the SSL pre-training tasks themselves. Predicting masked audio doesn't work as well when the surrounding context is just noise, and the data augmentations used in contrastive learning can sometimes accidentally remove the unique signatures of the animal calls we're trying to learn.
It's definitely an ongoing challenge in the field! People are trying different things, like initializing audio transformers with weights pre-trained on image models (ViT adapted for spectrograms) to give them a head start. Finding the best way forward for large models in these specialized, data-constrained domains is still key. Thanks again for the suggestion, it really hits on a core challenge!
If you’re asking whether multiple recorders were active at the same time, then yes, we had recorders at 98 different locations over four years, primarily during the summer months. However, these locations were far apart, so no two recorders captured the same exact area.
oh man that's awesome. I have been working for quite some time on big taxonomy/classification models for field research, especially for my old research area (pollination stuff). the #1 capability that I want to build is audio input modality, it would just be so useful in the field-- not only for low-resource (audio-only) field sensors, but also as a supplemental modality for measuring activity outside the FoV of an image sensor.
but as you mention, labeled data is the bottleneck. eventually I'll be able to skirt around this by just capturing more video data myself and learning sound features from the video component, but I have a hard time imagining how I can get the global coverage that I have in visual datasets. I would give anything to trade half of my labeled image data for labeled audio data!
Hi Caleb, thanks for the kind words and enthusiasm! You're absolutely right, audio provides that crucial omnidirectional coverage that can supplement fixed field-of-view sensors like cameras. We actually collect images too and have explored fusion approaches, though they definitely come with their own set of challenges, as you can imagine.
On the labeled audio data front: our Arctic dataset (EDANSA, linked in my original post) is open source. We've actually updated it with more samples since the initial release, and getting the new version out is on my to-do list.
Polli.ai looks fantastic! It's genuinely exciting to see more people tackling the ecological monitoring challenge with hardware/software solutions. While I know the startup path in this space can be tough financially, the work is incredibly important for understanding and protecting biodiversity. Keep up the great work!
I'd love to turn my spectrogram tool into something more of a scientific tool for sound labelling and analysis. Do you use a spectrograph for your project?
Hey thePhytochemist, cool tool! Yes, spectrograms are fundamental for us. Audacity is the classic for quick looks. For systematic analysis and ML inputs, it's mostly programmatic generation via libraries like torchaudio or librosa. Spectrograms are a common ML input, though other representations are being explored.
Enhancing frequenSee for scientific use (labelling/analysis) sounds like a good idea. But I am not sure what is missing from the current tooling. What functionalities were you thinking of adding?
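For reference, the core of "programmatic generation" is just a short-time Fourier transform; here's a toy version in plain NumPy (in practice we'd use librosa or torchaudio, which add mel scaling, padding, and many other options):

```python
import numpy as np

def spectrogram(audio, n_fft=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    # rfft keeps the n_fft // 2 + 1 non-redundant frequency bins
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

sr = 16000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))  # 1 kHz tone, 1 second
# spec has shape (freq_bins, time_frames); energy peaks at the bin for
# 1 kHz: 1000 / (sr / n_fft) = bin 32
```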
How do I download the sounds? Seems like a great resource for game developers and other artists.
Our labeled dataset (EDANSA, focused on specific sound events) is public here: https://zenodo.org/records/6824272. We will be releasing an updated version with more samples soon.
We are also actively working on releasing the raw, continuous audio recordings. These will eventually be published via the Arctic Data Center (arcticdata.io). If you'd like, feel free to send me an email (address should be in my profile), and I can ping you when that happens.
How can I actually search for audio? I'll check back in 6 months or so.
What's the licensing? Is it public domain?
Very interesting. All the best for your thesis. Mine is not nearly as interesting.
I've been putting together a no-nonsense free invoice generator for people (like myself) who only occasionally send invoices. It's more or less a WYSIWYG editor, and the state is stored in the URL, so you don't have to worry about keeping track of where you stored your copy: if you've sent someone an email with the link, you've got a copy. This project was born out of the frustration of trying to generate an invoice on my phone on the go; I found all the existing solutions quite awful (forced signups for paid subscriptions, clunky interfaces, etc.).
Would love to hear any feedback the HN crowd has. I'm aware of a couple of alignment issues and will fix them up tonight. Also, yes, there will be a "generate PDF" button; for now, if you want a PDF, I'd suggest using the Print dialog to "Save as PDF".
I built something similar for myself, but with a live PDF preview, support for downloading PDFs in multiple languages (English and Polish), VAT tax deductions, and multiple currencies.
Also it’s open-source https://github.com/VladSez/easy-invoice-pdf
Would love to receive any feedback as well :)
Most important is for it to produce the right number consistently, without room for error. You may also need multiple sequences: if you sell cars and crates of bananas you wouldn't want to mix those, but you would want to use a single tool.
Anything else can be corrected. It is important to be able to easily make corrections and/or credit notas, as those seem to happen at the worst time. These are usually the same as the invoice, but with negative amounts.
It is also nice to tie products into it so that you don't have to type them every time and get consistent naming. Same for an address book.
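To make the multiple-sequences point concrete, here's a hypothetical sketch of gap-free, per-sequence numbering (names and format are illustrative, not any real tool's API):

```python
from collections import defaultdict

class InvoiceNumbering:
    """Gap-free, per-sequence invoice numbers, e.g. CARS-2025-0001."""
    def __init__(self, year=2025):
        self.year = year
        self.counters = defaultdict(int)

    def next_number(self, sequence: str) -> str:
        self.counters[sequence] += 1  # never reuse or skip a number
        return f"{sequence}-{self.year}-{self.counters[sequence]:04d}"

gen = InvoiceNumbering()
a = gen.next_number("CARS")     # 'CARS-2025-0001'
b = gen.next_number("BANANAS")  # 'BANANAS-2025-0001'
c = gen.next_number("CARS")     # 'CARS-2025-0002'
```

In a real tool the counters would of course have to live in transaction-safe storage, since "no room for error" is precisely the hard part.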
I'm leaving this link here regarding the "EU e-invoicing standard CEN 16931"
https://ec.europa.eu/digital-building-blocks/sites/display/D...
I don't know how complicated or easy it would be to just create templates which satisfy this.
In Germany it's already required for B2B transactions [0]
In principle, the following formats will comply with CEN 16931:
ZUGFeRD: Hybrid format: Human-readable PDF/A-3 with an embedded XML file in the syntax "Cross-Industry Invoice" (CII)
XRechnung: XML file in the syntax "Cross-Industry Invoice" (CII)
XRechnung: XML file in the "Universal Business Language" (UBL) syntax
[0] https://www.bdo.global/en-gb/insights/tax/indirect-tax/germa...
You can use AI to convert your PDF invoice into XRechnung to comply with EN 16931 should that occur, e.g., https://www.invoice-converter.com/en
Nice work! Would totally have used this when I was freelancing. Honestly love the serif'd fonts, would love to see everything serif'd tbh.
Also back when I had to do these (I used Wave) having a notes section was very useful to include a few things (i.e. I used to include conversion rates). Would probably be pretty easy.
Useful and simple. The pattern could be applied to any templatized document we need to generate.
Yeah it'd be cool to consider this approach for some other domains. Sometime soon I'll make it so you can change the word in the top-left from "INVOICE" to "QUOTE" or "RECEIPT". The nice thing about an invoice is you know there's going to be a relatively small amount of data, so storing the state in the URL is a plausible approach (even if it looks obscene to discerning users).
Interesting, I’ve been using https://simpleinvoices.io for a couple of years and really like it. Integrates with my Stripe and super easy to configure. Best of luck!
It looks nice at a glance, but there's just no way I can justify $15/mth for the few times a year I need to send an invoice. If https://bestfreeinvoice.com gains some amount of traction I'd love to extend the feature set to justify a paid tier, but the basic experience (i.e. what's currently live) will always be free and genuinely useful.
Please add support for changing currency and deductions (in India, 10% TDS is supposed to be deducted on the amount, i.e., the amount before tax is added).
Overall it's a pretty good solution for occasional invoice generators.
I know the URLs are long compared to what people are used to. Part of the rationale for putting this out there in the current state is that I'm curious whether, in practice, people will share long URLs or not.
At some point I'd like to add shortlinks but at the moment everything is clientside, there's no persistence at all (beyond localStorage). I think that's a nice feature from a security perspective.
Ha, I was going to say: is that URL query string a SHA-256 or something?
Yeah it's compressed and encoded with `lz-string`, but I know it's hideous. I'm curious whether that will prove a practical problem for adoption, even if a technical crowd isn't fond of the choice. Surprisingly, most messaging apps you paste URLs into these days don't show the whole thing anyway: when you paste, they transform the link into a small preview card with just the domain portion of the URL visible.
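For anyone curious about the general technique: `lz-string` is a JavaScript library, but the round-trip idea can be sketched in Python with zlib plus URL-safe base64 (an analogous sketch, not the actual site's code):

```python
import base64
import json
import zlib

def encode_state(state: dict) -> str:
    """Compress JSON state into a URL-safe token."""
    raw = json.dumps(state, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(zlib.compress(raw, 9)).decode().rstrip("=")

def decode_state(token: str) -> dict:
    # Restore the base64 padding stripped for URL friendliness.
    padded = token + "=" * (-len(token) % 4)
    return json.loads(zlib.decompress(base64.urlsafe_b64decode(padded)))

invoice = {"from": "Acme Ltd", "to": "Globex",
           "items": [{"desc": "Consulting", "qty": 3, "rate": 120}]}
token = encode_state(invoice)
assert decode_state(token) == invoice  # round-trips losslessly
```

The token is long but inert: the server never sees the state, which is exactly the security property mentioned above.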
Not to be negative about bestfreeinvoice, or any other invoicing tool, but it always seems like developers and early-stage entrepreneurs don't understand invoicing.
The first major disconnect is that not every country uses invoices; some use receipts instead. This is true for the USA, for example, so many US devs (for example: Stripe in the early days) are not familiar with the concept of invoicing. Technically there is no difference between receipts and invoices, so if you're not familiar with the concept of invoicing, just read this post with s/invoice/receipt/ in mind.
The point about invoicing is to act as a non-mutable entry into the ledgers of both parties (seller and buyer). In most countries (especially EU) invoices are mandated by law for B2B transactions, and so is keeping accounts (aka bookkeeping). So for invoicing to be practical it needs to be tied to your books/accounts. Because of this, any business will use some bookkeeping/accounting software, which will have invoicing capabilities built-in. Invoicing as a standalone product doesn't make sense if you have to import it all into your ledgers later.
Then there is the 'design' trap, which many invoicing startups seem to fall for. Invoices are weird things. They are basically very, very inefficient artefacts from the past. An invoice is just a very small amount of transaction data exchanged between buyer and seller. In the days of physical bookkeeping (actual paper books) paper invoices made sense, but nowadays it is all done digitally. So the invoice is effectively a machine-2-machine interface, but for all sorts of legacy reasons we still wrap it in a PDF with a fancy design that looks great for humans but is effectively impossible for machines to read.
There are all sorts of attempts to improve upon this situation (like OCR, and nowadays AI, to extract data from PDF invoices). There are open structured data formats such as UBL to replace/augment PDF invoices, but due to all sorts of politics and lobbying the open standards have been doomed from the beginning. There is a lot of money made in accounting software, and it all relies on vendor lock-in. The major accounting software vendors have very strong incentives to keep us from adopting UBL et al, and most of the established accounting products suck, but you can't easily migrate, so you'll be stuck with them.
If you run or own a business, treat your books as an asset of your business, a very important asset for that matter. Books are kept in accounting software, which is typically part of a larger software suite which also features tax filing, HRM, asset management, invoicing, etc. In fancy business terms this is often called ERP. But think of ERP as just your central database, or your 'books'.
Choosing your accounting software is an important decision. Choose accounting software that allows exporting your data (very important!), that has an API (also very important), and preferably a web interface. It should be always available, so on-premise software is out. For entrepreneurs: choose your own accounting software, and do not be tempted to hire an external bookkeeper who keeps the books in 'their' systems (accountant lock-in). Don't let an accountant recommend your software either; they get huge kickbacks from the software vendors (vendor lock-in). Every sale—whether PoS, invoicing, or a payment integration like Stripe—should automatically be registered as a ledger entry in the books, preferably with an invoice document attached. Here you can see why an accountant who keeps your books in their systems won't work: you don't want to be stuck having to periodically send an email (or shoebox) filled with invoices for them to process. Your books should be owned by the business, should be automated (at least for the receivable side), and should always be up-to-date. You can then give an(y) accountant access to your books for them to do audits, tax filings, etc. For a business, the books are the central database; everything else revolves around them. Do not be tempted to write your own; instead, integrate with existing solutions while avoiding vendor lock-in as much as possible.
Integrating your business with the accounting software is an ever-ongoing part of your software development efforts, so do not underestimate it. Accepting payments is hard; making sure it is well registered in your books is equally hard. It takes _much_ more time than you'd think (most first-time entrepreneurs don't consider it at all). There are no silver bullets here.
In Italy, many of these invoicing challenges have already been tackled through a nationwide standardized system.
Every invoice—whether B2B or even B2C (receipts included)—must be sent electronically using a government-defined XML format. This invoice includes predefined metadata and is digitally signed by the issuing party. Once ready, it gets submitted to the national tax agency’s centralized system, called the Sistema di Interscambio (SdI), which validates and registers it before forwarding it to the recipient.
This system essentially acts as a clearinghouse: it ensures all invoices go through the same format, are verifiably issued, and are automatically recorded on both ends. For consumers (B2C), the invoice still goes through the same pipeline and is made available in their personal portal on the IRS website, while the seller can still email a copy (PDF) for convenience.
This centralized and machine-readable approach has eliminated a lot of the fragmentation seen elsewhere. There’s no vendor lock-in, no OCR, and no AI needed to parse PDFs—just a signed XML file going through a common pipeline. It’s not perfect, but it shows how much smoother things can be when the rules (and formats) are defined at the infrastructure level.
> Every invoice—whether B2B or even B2C (receipts included)—must be sent electronically using a government-defined XML format
So not a universal standard then. Imagine having to implement a different format for every country you do business with...
For the Netherlands there is a similar (but slightly different I believe) XML type format required if you want to do business with the government. Initially a company successfully lobbied to get their closed-specification version to be the mandated standard for government, to get the XML spec you had to become partner (I believe for €8k/year or something).
Luckily they are now performing an XKCD 927 and have defined a few new (this time open) standards, which they aim to consolidate into a new spec that complies with EN 16931. EN 16931 is the EU compliance standard for e-invoicing.
While that's all well and true, some scale is assumed.
There’s plenty of need for basic use invoicing like this. Generate an invoice as a way to bill someone or serve as an estimate for work/project cost. Not everyone is at a place where it needs to be so formal and integrated into a complete solution that tracks the dollars from invoice to balance sheet to income statement, etc. It’s a lot especially for people that are just freelancing and need a similar probably infrequent way to send a bill. They probably are just tracking things in a spreadsheet and not even big enough to use quickbooks or anything else. It would be a poor use of time and over engineering to put that all in place and setup things that cost subscription dollars in perpetuity just to bill for a one off charge. Or even a handful of them.
When I think of people I pay, my lawn guy and my housekeeper both just text me how much I owe them. Then I Zelle them. They both have dozens of clients at least, and I imagine they are doing it this way for all of them. If I were a business, I may insist on getting an invoice to load the AP into an accounting flow on my end, but they wouldn't really want to change their system of doing things just to comply with my request. So they may want something like this that just converts the text message info into an official-looking invoice.
I feel the real problem is everyone expecting this side-project-type thing to solve every edge case that exists in the world. Even the bigger guys like Stripe. That's the wrong take. They offer a solution; you have to evaluate if it fits your needs, and if not, use something else. If you're in a locale that mandates something completely different, use something else. This project is being very transparent about what it does and how it works, which should help if you have a requirements list to compare it to.
>First major disconnect is that not every country uses invoices, but may use receipts instead. This is true for the USA for example, so many US devs (for example: Stripe in the early days) are not familiar with the concept of invoicing. Technically there is no difference between receipts and invoices, so if you're not familiar with the concept of invoicing, just read this post with /s/invoice/receipt in mind.
I find this hard to believe. An invoice is a request for payment. A receipt is proof/confirmation of payment. Invoices sometimes double as receipts (or rather the other way around) when the payment is made immediately. But how can a country not have something that represents a formal request for payment by some future time?
I don't even understand this from an accounting perspective. What would accounts receivable and accounts payable even mean without this distinction? How would you date the respective journal entries if there is no distinction?
> But how can a country not have something that represents a formal request for payment by some future time?
There are plenty of countries where the vendor will charge the customer's account, like a 'pull' mechanism. In many countries they'll use (or used) checks/cheques for that, or a different payment account like a credit card. The agreement for this would have been a contract. They may still use invoices for larger transactions, but they aren't always required by law.
I remember that in the old days, Google, Stripe, etc wouldn't send invoices, sometimes you'd get a minimal receipt message by email, but that was about it. This was particularly annoying for EU-based companies where there are minimal requirements for invoices and/or receipts.
Times have changed though. Most companies, including US-based, will now offer invoices that comply with most international regulations.
Except PayPal of course, for some reason they still seem to get away with not offering invoices. You'll have to download your monthly account overview in PDF from their merchant portal, and they just slapped the following text on it: "This statement may serve as a receipt for accounting and tax related purposes.".
>choose your own accounting software, do not be tempted to hire an external bookkeeper that keeps the books in 'their' systems (accountant lock-in).
~30 years ago I worked at a very small business (3 employees) and they used and liked Quickbooks. The accountant convinced them to switch to some "better" system and for around 3 months they had no idea how much money they had, they just lost all visibility into the system because it didn't work in the way they expected. "If things didn't look right, we'd just go through every screen in the system and press Post." At the end of those 3 months they realized they had unexpectedly gotten into $70K in debt -- this was ~35 years ago when a house was around that much. They had to take a second mortgage on their house. Eventually, they figured out the accounting system, righted the ship, and paid back the second mortgage over a few years. Y2K really helped, with that giant bump in sales.
What accounting software would you recommend for first-time entrepreneurs? Are there any open-source solutions that can be self-hosted and integrate with existing solutions?
I am just starting my journey into entrepreneurship, and have yet to choose a bank or accounting software, and would appreciate guidance. I am based in the UK, and will only be conducting business in the UK to start off with.
Not OP, but there are a few open-source options. GnuCash is friendlier for beginners due to the GUI. I like plain-text accounting, specifically Beancount.
As far as integrations, GnuCash lets you import from various formats like Quicken, while Beancount has lots of community plugins, like importers for various banks. I don't believe either offers invoicing, but you could integrate it yourself or just record manually.
IMO, the hardest part of keeping your own books is learning double entry accounting.
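The core of double-entry is a single invariant: every transaction's debits equal its credits, so the books always net to zero. A toy illustration (hypothetical account names, amounts in cents):

```python
from collections import defaultdict

ledger = defaultdict(int)  # account -> balance in cents; debits positive, credits negative

def post(entries):
    """Post a transaction: a list of (account, amount) legs that must net to zero."""
    assert sum(amount for _, amount in entries) == 0, "debits must equal credits"
    for account, amount in entries:
        ledger[account] += amount

# Invoice a client 100.00: asset up (debit), income up (credit).
post([("accounts_receivable", 10_000), ("sales_income", -10_000)])
# Client pays: cash up (debit), receivable cleared (credit).
post([("bank", 10_000), ("accounts_receivable", -10_000)])

assert ledger["accounts_receivable"] == 0
assert sum(ledger.values()) == 0  # the books always balance
```

Once this clicks, "accounts receivable" stops being jargon and becomes just the account that tracks invoices issued but not yet paid.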
Thanks for the recommendation for GnuCash, will give that a look. What resources would you recommend for learning double-entry accounting?
Starling Bank as the bank, and FreeAgent as the accounting software - it'll handle personal tax (self-assessment), corporation tax, VAT, and payroll. If you need an accountancy practice, I very much recommend Maslins - they'll provide FreeAgent access in that case as part of their fee.
Thanks for the recommendation, will take a look at Starling Bank and FreeAgent.
Haha thank you, I was amazed the domain was available! And yeah jumping through hoops just to get to the invoice generator is something that frustrated me with existing alternatives, so dumping the user straight into it was one of the foremost decisions the design centered around.
I'm working on pure.md[1], which lets your scripts, APIs, apps, agents, etc reliably access web content in markdown format. Simply prefix any URL with `pure.md/` and you get the unblocked markdown content of that webpage. It avoids bot detection and renders JavaScript-heavy websites, and can convert HTML, PDFs, images, and more into pure markdown.
pure.md acts as a global caching layer between LLMs and web content. I like to think of it like a CDN for LLMs, similar to how Cloudinary is a CDN for images.
[1] https://pure.md
It seems to miss URLs?
At: https://willadams.gitbook.io/design-into-3d/2d-drawing the links for:
https://mathcs.clarku.edu/~djoyce/java/elements/elements.htm...
https://mathcs.clarku.edu/~djoyce/java/elements/bookI/bookI....
https://mathcs.clarku.edu/~djoyce/java/elements/bookI/defI1....
are rendered as:
_Elements_ _:_ _Book I_ _:_ _Definition 1_
Maybe detect when a page is on gitbook or some other site where there is .md source on github or some other site and grab the original instead?
By default, href values of <a> tags are removed, because they add significant token length without adding more context. Coming soon, you can specify a request header to set whether or not you want links removed from the response. Those underscores you mentioned are from the italics.
Cool project!
Recently discussed, too: https://news.ycombinator.com/item?id=43462894 (10 comments)
What a great idea, I will soon be a paying customer. This solves a problem of an app I'm using that I was hesitant to try to develop myself.
With AWS Athena, you can query the contents of someone else’s public S3 bucket. You pay per read, but if you craft your query the right way then it’s very inexpensive. Each query I run only scans about 1MB of data.
Since I happened to be looking at this recently, here are some examples of how to query at a ~cent-per-query cost level (just examples, but quite illustrative): https://commoncrawl.org/blog/index-to-warc-files-and-urls-in...
Works great on mobile thanks, helpful tool to bypass flaky websites, js and even some paywalls.
I have no skin in the game, and honestly I am wondering whether this idea contributes to enshittifying the web more?
This idea just seems like it provides the same content as visiting the site, in a different view, like reader mode?
The service seems designed to bypass anti-scraping measures. If site owners don't want their content scraped by AI this is subverting their will.
It also obfuscates responsibility between the AI vendor and the scraping service. One can imagine unethical AI providers using a series of ephemeral "gateways" to access content while avoiding any legal or reputational harm.
I built a machine learning library [1] (similar to PyTorch's API) entirely from scratch using only Python and NumPy. It was inspired by Andrej Karpathy's Micrograd project [2]. I slowly added more functionality and evolved it into a fully functional ML library that can build and train classical CNNs [3] to even a toy GPT-2 [4].
I wanted to understand how models learn, like literally bridging the gap between mathematical formulas and high-level API calls. I feel like, as a beginner in machine learning, it's important to strip away the abstractions and understand how these libraries work from the ground up before leveraging these "high-level" libraries such as PyTorch and Tensorflow. Oh I also wrote a blog post [5] on the journey.
[1] https://github.com/workofart/ml-by-hand
[2] https://github.com/karpathy/micrograd
[3] https://github.com/workofart/ml-by-hand/blob/main/examples/c...
[4] https://github.com/workofart/ml-by-hand/blob/main/examples/g...
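For anyone curious what the Micrograd starting point looks like, here is the core scalar-autograd idea in a few lines (a simplified sketch in the spirit of Micrograd, not the actual ml-by-hand code):

```python
class Value:
    """A scalar that remembers how it was computed, for reverse-mode autodiff."""
    def __init__(self, data, parents=(), grad_fns=()):
        self.data, self.grad = data, 0.0
        self._parents, self._grad_fns = parents, grad_fns

    def __add__(self, other):
        # d(a+b)/da = d(a+b)/db = 1, so gradients pass through unchanged
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data, (self, other),
                     (lambda g: g * other.data, lambda g: g * self.data))

    def backward(self, grad=1.0):
        self.grad += grad
        for parent, fn in zip(self._parents, self._grad_fns):
            parent.backward(fn(grad))

x, w, b = Value(2.0), Value(3.0), Value(1.0)
y = x * w + b   # y.data == 7.0
y.backward()    # dy/dx = 3, dy/dw = 2, dy/db = 1
```

(A real implementation topologically sorts the graph so shared subexpressions aren't double-visited; the naive recursion is fine for this tree-shaped example.)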
FOSS MTG inspired digital card game.
I love card games, but for digital card games the business model is beyond predatory. If you need a specific card, your option is to basically buy a pack. Let's say this is about $3, give or take. But if it's a specific rare card, you can open a dozen or so packs and still not get the specific card you want.
This can go on indefinitely, and apologists will claim you can just work around this by building a different deck. But the business model clearly wants you to drop $50 to $100 just to get a single card.
All for this to repeat every 3 months when they introduce new mechanics to nerf the old cards or just rotate out the dream deck you spent $100+ to build.
I’m under no impression I’ll directly compete, but it’s a fun FOSS game you can spin up with friends. Or even since it’s all MIT, you can fork and sell.
It also gives me an excuse to use Python; it looks like Django on the backend and Godot for the game client. The actual logic runs in Django, though, so you can always roll a different game client.
Eventually I’d like different devs to roll their own game clients in whatever framework they want.
Want to play from the CLI? Sure.
I started building a MtG competitor inspired by Altered and Netrunner. As weird as it sounds, I started with some bash scripts to see how the meta would play out to make sure the card values/strategies were balanced.
I would love to compare game development notes if you're interested in discussing this sometime.
Many years ago Decipher (who made the Star Trek and Star Wars TCGs) rolled out a web platform for playing their games. It was the same business model but with none of the advantages of the physical property. You would spend money on their platform to buy their digital cards, to play only there, and when you left, the cards just disappeared into the void.
Have you heard about Mindbug[0]? It's a recent MTG-inspired (co-created by one of the authors of MTG) card game. Plays quick and is full of interesting and consequential decisions.
[0]: https://boardgamegeek.com/boardgame/345584/mindbug-first-con...
Wait 3 months for me to finish.
So far it's basically just a Django server. You're responsible for self-hosting (although I imagine I'll put up a demo server), and you can define your own cards.
You can play the game by literally just using curl calls if you want to be hardcore about it.
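To make that concrete, here's roughly what driving a move over plain HTTP could look like; the endpoint paths, payload shape, and auth scheme are all made up, since the real API is whatever the Django server ends up exposing.

```python
import json
import urllib.request

def build_play(base_url, game_id, card_id, target=None):
    """Construct the URL and JSON body for a hypothetical 'play card' move."""
    url = f"{base_url}/api/games/{game_id}/moves/"
    body = {"action": "play_card", "card": card_id}
    if target is not None:
        body["target"] = target
    return url, json.dumps(body).encode()

def send(url, body, token):
    # The Python equivalent of something like:
    #   curl -X POST -H "Authorization: Token ..." -d '{...}' <url>
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", f"Token {token}")
    return urllib.request.urlopen(req)

url, body = build_play("http://localhost:8000", 42, "goblin_scout")
```

Anything that can speak HTTP — curl, a script, a fancy client — can be a front end.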
I *might* make a nice closed source game client one day with a bunch of cool effects, but that's not my focus right now.
My master's thesis[1] was half research, half dev project, exploring how we can more fully fuse traditional RPGs with computers. This goal is my life quest, my life's work.
I think virtual tabletops (VTTs) as they currently stand are barking up the wrong tree[2]. I want a computer-augmented RPG to allow the GM to do everything he does in the analog form of the game. On-the-fly addition of content to the game world, defining of new kinds of content, defining and editing rules, and many other things ... as well as the stuff VTTs do, of course. The closest we've gotten in the last 30 years is LambdaMOO and other MUDs.
The app I made for my thesis project was an experimental vertical slice of the kinds of functionality I want. The app I made after that last year is more practical and focused on the needs of my weekly game, in my custom system; I continue to develop it almost daily.
I'm itching to tackle the hardest problem of all, which is fully incorporating game rules in a not-totally-hardcoded way. I need rules to be first-class objects that can be programmatically reasoned about to do cool things like "use the Common Lisp condition system to present end user GMs with strategies for recovering from buggy rules." Inspirations include the Inform 7 rulebook system.
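As a toy illustration of rules-as-first-class-objects (my own sketch of the idea, not an actual design, and in Python rather than Lisp): each rule is an inspectable value the engine can list, filter, or hot-swap, which is the precondition for things like recovery strategies on buggy rules.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    applies_to: str              # the kind of event this rule governs
    body: Callable               # the rule's actual logic
    tags: set = field(default_factory=set)

class Rulebook:
    def __init__(self):
        self.rules = []

    def add(self, rule):
        self.rules.append(rule)

    def matching(self, event_kind):
        # Rules are plain data, so the engine can reason about them.
        return [r for r in self.rules if r.applies_to == event_kind]

    def replace(self, name, new_body):
        # A GM (or an error-recovery strategy) swaps out a buggy rule's
        # body at runtime without touching the engine.
        for r in self.rules:
            if r.name == name:
                r.body = new_body

book = Rulebook()
book.add(Rule("fall damage", "falling", lambda height: height // 3, {"combat"}))
assert book.matching("falling")[0].body(30) == 10
```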
[1] See my homepage, under Greatest Hits: https://www.mxjn.me
[2] Anything that requires physical equipment other than dice and a regular computer is also barking up the wrong tree. So no VR, no video-tracked physical miniatures, no custom-designed tabletop, no Microsoft Surface... Again, just my opinion.
I'm working on something similar. I'm building a MUD with an LLM playing the role of GM. Currently it just controls NPCs, but I eventually want it to be able to modify the game rules in real time. My end goal is a world that hundreds of players can play in simultaneously, but has the freedom and flexibility of a TTRPG (while still remaining balanced and fair).
That's really cool, Elias. I keep seeing people try to put LLMs into the role of the GM. But I think you're doing something new and important by working to have the rules available to it.
Is your project available anywhere? Best of luck!
If you're interested: because I kept seeing "LLM as GM" projects, I got curious about how well it would work to have LLMs as players instead. So I made this:
https://github.com/maxwelljoslyn/gm-trainer
It's a training ground for GMs to practice things like spontaneous description, with 4 AI players that are each fed what the others say, so they act in a reasonably consistent manner. It's not perfect, but I've gotten some good use out of it.
how do you feel about Talespire? it allows pretty fast on-the-fly map-making as long you’re not dealing with significant vertical distances, although it’s got very little in common with LambdaMOO. but MUDs generally seem to be MMORPG precursors at this point, unless there’s an underground community I’m unaware of.
I feel the same way about Talespire as I feel about pretty much every other VTT. They might do okay, even pretty well, at combat maps and/or character sheets, but what I want is the whole game world in the computer. Maps are just a fraction of what I need as a GM. I need data on economics and population numbers and power structures. And I need computation over all those things.
For instance, my game rules include an economic subsystem, which takes in the production of goods and services at hundreds of in-game cities, and computes prices for over a thousand player-purchasable goods. The "second app" that I referred to above allows players to (among many other things) purchase stuff at the market nearest their current location and have those items go straight into their character sheet. If the "item" is actually an animal, a hired mercenary, etc. then a different subsystem generates a new NPC with the right statistics and attaches the player to it as owner/liege.
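A drastically simplified sketch of that kind of price computation (the formula, goods, and numbers are all invented; the real subsystem aggregates production across hundreds of cities):

```python
BASE_PRICES = {"iron sword": 10.0, "wool cloak": 4.0}  # invented, in gold

def local_price(good, supply, demand):
    """Price scales with scarcity: high demand against low local supply
    pushes the price above base; a glut pushes it below."""
    scarcity = demand / max(supply, 1)   # avoid division by zero
    return round(BASE_PRICES[good] * scarcity, 2)

assert local_price("iron sword", supply=50, demand=100) == 20.0   # scarce
assert local_price("wool cloak", supply=200, demand=100) == 2.0   # glut
```

The point is that prices fall out of world data, so a market screen is just a view over computation.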
I could write an extension for a VTT that talks to my economic system over an API, and throws items up on screen, lets players purchase them, moves them into their character sheet using the right function calls in the VTT's extension library, etc. But every step of the way, I would be fighting to cram this subsystem into the VTT's conception that gameplay begins and ends with maps and char sheets.
I am full-time building LLM-based NPCs for a text-based MMORPG. Been doing a lot of work recently on allowing progression through scenarios with them where the rules are in a class, and the LLM takes care of communicating user intent to the rules engine, based on free-text, and writes back to the user with the results.
That's sweet! I think LLMs have incredible potential for descriptions and for NPC behavior, and I really like that you have this bridge between freeform intent and a rules engine. I'd like to pick your brain about it - I'll send an email.
I've been building mentions.us[1] - it sends you alerts when your keywords are mentioned on Hacker News, Reddit, Bluesky, LinkedIn and a few other places. For anyone who uses F5Bot, it's similar but with some extra data sources and a Slack integration.
It's been a fun project. Dealing with the scale of Reddit (~300 posts/second) creates some interesting technical challenges. It's also let me polish up my frontend development skills.
I don't think it will ever be a money spinner - it has ~70 folks using it but they're all on the free tier. It's felt really good to build something useful, though.
[1]: https://mentions.us
You just got a signup :) Free plan, I'll admit. I don't need or want anything other than email notifications, and the free plan for that is very generous. Thanks for building this.
For the social platforms, are you hooking up to their APIs or just using Google? I'm only interested in emails and would pay a small price for that (say $5-7/month). I've signed up and added my first keyword to test.
That being said, here's an additional feature idea: being able to track Discord/Slack/Telegram by providing my API key and having you stream the content of the groups I've signed up to.
This is really interesting, thanks for sharing! I'm keen to know how it compares to a tool like Pulsar. I've been quoted a huge amount to use their service, and it looks like mentions.us basically fulfills the same social listening function? If it does then I will definitely push my org to sign up!
Thanks! I haven't used Pulsar, but the general answer is that mentions.us is focussed on sending you alerts, whereas more sophisticated social listening tools provide a lot more analytics (e.g., sentiment analysis).
If your company just wants alerts when their keywords are mentioned on social media then mentions.us should work great for them. If you work for Coca Cola then you likely need something very different from your social listening tool!
Thanks for clarifying, we're a small org and so the few mentions we get could be analysed manually I'm sure. I will flag it to the marketing team!
Sounds very cool. I'm curious how you manage to monitor Linkedin though. The only tool that seems capable of monitoring Linkedin is https://kwatch.io , so if you manage to achieve that too it's impressive.
Your pricing is a little confusing: the free plan provides 100 keywords, and your most expensive plan also provides 100 keywords; in fact the only difference between the two is Slack notifications. What's the motivation behind this pricing plan?
I put more details in a reply to another comment, but basically I think the number of people willing to pay for email alerts is small, so I’ve made the service free for them. It’s only teams who want Slack notifications who have paid plans.
I’m not optimising to extract every possible $ from the market with that pricing strategy. Instead I hope it will maximise the number of users whilst breaking even on costs.
This seems very useful. Why not make it paid? Do you think your customers won’t buy? Have you tried?
What would your customers need to make them want to pay for it?
I think most of the people who sign up for email alerts would never pay. Lots of them are indie hackers or folks with a side project - I've been there, and know how price sensitive those communities are. I'd rather they use the service for free than not at all - I get valuable feedback from that, a marketing boost if they tell others about it, and the validation of having built something other people use.
I do have a paid plan for people who want Slack notifications, and I think those folks ought to be happy to pay. My hope is that I'll eventually get a few paid signups and that those will cover the costs of the service (which are minimal).
I know I lose a bit of revenue with the above approach, but it's a tradeoff I'm happy to make.
Through the API - in particular the info endpoint[1], combined with the fact that Reddit IDs are base36 encoded sequentially increasing integers[2]. You can get 100 objects at a time, so if you make ~3 requests a second it's enough to get all of the new posts and comments.
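Sketched in Python (auth, rate limiting, and error handling omitted), the trick is: base36-encode a running counter into fullnames and batch them 100 at a time for the info endpoint:

```python
import string

ALPHABET = string.digits + string.ascii_lowercase   # 0-9a-z, i.e. base36

def to_base36(n):
    s = ""
    while True:
        n, r = divmod(n, 36)
        s = ALPHABET[r] + s
        if n == 0:
            return s

def batch_fullnames(start_id, count, kind="t3"):
    """Turn sequential integer IDs into batches of <=100 fullnames,
    each batch suitable for one /api/info?id=... request."""
    names = [f"{kind}_{to_base36(i)}" for i in range(start_id, start_id + count)]
    return [names[i:i + 100] for i in range(0, len(names), 100)]

# Each batch becomes one call along the lines of:
#   GET https://oauth.reddit.com/api/info?id=t3_1jabcd,t3_1jabce,...
batches = batch_fullnames(int("1jabcd", 36), 250)
assert len(batches) == 3 and len(batches[0]) == 100
```

At ~300 posts/second, roughly 3 of these 100-object requests per second is just enough to keep up.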
Does “vibe coding” count? :-) I’m from West Africa and have lately been very interested in African fairy tales to read to my daughter. Ended up building a GPT-backed interface that can insert her into any African story she wants. We also have a list of African queens who’re not famous anymore but did amazing things (look up Queen Nzinga, for example). So I’m doing a series of little children’s books about each queen, exported to PDF so I can print them out and bind them for her: her own little collection of fairy tales. I plan to put it online later; even if you’re not African, I think it’s a great way to explore our history.
I love learning more history, thank you for the recommendation, I would have never discovered this myself.
There's a nice list of URL shortener APIs at https://publicapis.dev/category/url-shorteners that's worth a try.
Would love to do something similar. Have you written about the technicals/set up somewhere?
Not yet as I’m still finessing it to her needs. She has a problem getting rid of thumb sucking, and finally asked for a fairy tale with a thumb sucking princess who eventually stops sucking her thumb lol. It’s a fun activity and also letting me learn about LLMs.
We've used them to generate stories about our kids and their favourite characters, too. It's a great use case — your approach sounds excellent. Good luck to you!
I dusted off an old app (from 2012 or so) that I call "Date Night Movie Player", which I wrote for me and my girlfriend; we're in a long-distance relationship. It needs updates, and she mentioned that she missed using it. Basically it lets two people sync up watching a video/movie together while chatting in a transparent side overlay, and it has a remote control with interesting buttons like timed "beer break" and "bathroom break", along with pause, so you can draw an arrow on the screen or circle something of interest. There is also a button that might (it's random chance, and you can only try it a few times) let you steal the remote control from the other user. Only the person with the remote can really pause, after all! It gives the experience of watching a movie together and being able to comment on things as they happen, like when we are together.
Interesting to hear. One big thing I had to address was the fact that she lives in a rural area with very slow internet access and I don't. So I built in an option for us both to select a movie and a time for the date, and it would pre-download a high-quality version.
Would love to learn about this. I am building something similar for my family
I had this use case. Ended up using Google Meet and screen sharing.
We’ve been building a new social-enabled git collaboration platform on top of Bluesky’s AT Protocol: https://tangled.sh
You can read an intro here: https://blog.tangled.sh/intro (it’s publicly available now, not invite-only).
In short, at the core of Tangled is what we call “knots”: lightweight, headless servers that serve up your git repository, with the contents viewed and collaborated on via the “app view” at tangled.sh. All social data (issues, comments, PRs, other repo metadata) is stored “on-proto”, in your AT Protocol PDS.
We don’t plan to just reimplement GitHub, but to rethink and improve on the status quo. For instance, we plan to make stacked PRs and stacked-diff-based reviews first-class citizens of the platform.
Right now, our pull request feature is rather simplistic but very useful still: you paste your git diff output, and “resubmit” for a new round of review (if necessary). You can see an example here: https://tangled.sh/@tangled.sh/core/pulls/14
We’re fully open source: https://tangled.sh/@tangled.sh/core and are actively building. Come check it out!
How is the support for LFS? Also, what backend language? I have some Go code implementing an LFS server and auth, but I did not want to build a full code forge. All of the major Git hosts have woefully bad LFS management (e.g. if you want to purge a file from history, you have to delete the whole repository).
I built a no-AI, human-only social network focused on ONLY one thing - keeping people connected.
I'd stepped away from mainstream social media last year due to the overwhelming negativity, privacy violation, etc. Then around early this year, I started to feel I was missing updates from people who actually matter in my life. Instead of going back to traditional platforms, I decided to create a simple solution myself.
The platform emphasizes:
- No AI algorithms or content manipulation
- No infinite scrolling designed to trap your attention
- A simple interface for sharing life updates with close connections (text and photos only for now)
We've intentionally made connecting difficult: no user search and no friend suggestions - you only connect with people you already know and care about.
Web: https://aponlink.com/
Android app: https://play.google.com/store/apps/details?id=com.aponlink.a... (iOS version coming soon)
I'd love to hear how this approach resonates with the HN community, particularly from those who've also grown tired of traditional social media.
So I like what you are doing, but I think it might be worth having a larger think about this.
> no user search and no friend suggestions
I get the intentionality, but the reason Facebook was successful was that it found, for you, the people you intentionally wanted to communicate with.
The issue is that the social graph overstays its welcome. After it's done finding all the people you want to communicate with, it suggests a ton of people you don't.
I actually find this to be similar to netflix and spotify suggestions, both of which were able to find things I wanted to consume early on, but now just give me waves of shit.
Consider doing something a lot smaller, like an opt-in, depth-1 search, activated one month at a time, to find people you might want to connect with, without the hassle of having to swap details on another platform.
That’s a great insight. I totally agree that early Facebook’s ability to surface actual connections was valuable before it turned into an endless recommendation machine.
The challenge is figuring out how to offer just enough discoverability that doesn't creep users. I like your idea of an opt-in, time-limited, depth-1 search, it keeps things intentional while reducing friction. Definitely something to think about.
Curious: would you see value in a simple "import contacts" option, or do you think that would risk overstepping?
As long as it isn't intrusive. Both LinkedIn and Facebook have done this to me at some stage or another, and I get endless prompts to try again; there's also a bunch of users on those platforms that are now recommended to me because of the search.
It would be useful to identify my friends, but I don't want a loose thread of some guy I emailed 20 years ago constantly bugging me.
You need an about page with screenshots, etc. I'd like to know what I'm getting into before I invest my extremely valuable time and attention in your site.
---
The links at the bottom of the page (about, privacy policy, etc.) don't work.
In general: Non-functioning links / buttons are a huge no-no. When I encounter non-functioning links / buttons in software, I just assume I'm going to waste my time and move on.
I know that sometimes when designing a UI, you want to be able to "see" what the final product will look like. Leaving them in before they work is sloppy, and gives the impression that your product also has more loose ends.
Something like this has been on my mind for a while now -- take the useful, positive elements from across the socials (network of connections, media sharing, events, etc) and create mini-nets that let people who want to, stay in touch.
How do you envision onboarding? Do I join, and then try to convince a handful of people to join as well?
Glad this resonates with you! That’s exactly the goal—keeping the useful parts of social networking while removing the noise and AI-driven manipulation.
> How do you envision onboarding? Do I join, and then try to convince a handful of people to join as well?
Yes, that's been the idea so far for onboarding. But we’re also exploring ways to make the platform more organically discoverable and valuable from day one (without AI).
In my case, it's been easy to convince my network to move and I found they shared a similar level of dismay towards traditional networks.
Please let me know if you have any suggestions on the onboarding process
I wish I had suggestions! The daunting nature of the onboarding is what cooled my jets in the first place, and I never got past the ideation phase with this particular project.
The need is there (at least for some of us!) so the sell shouldn't be so hard, but I feel like I'm missing the "a-ha!" differentiator here. It's not enough to pull the good/useful remnants from the sludge socials are today; it would need an extra something to excite people enough to make the effort to engage with yet another online service.
Trying to sign up, got this error:
Error Firebase: Error (auth/network-request-failed).
Thanks for trying it out! That particular error usually happens due to a temporary network issue or if third-party cookies are blocked. Could you try refreshing or using a different browser?
I’ll also check on my end to make sure everything is running smoothly. Appreciate the heads-up!
I started a small company selling accessories that I design, 3d print, and build for old 16mm film cameras. I recently released a crystal synchronized motor for Arri cameras, which allows you to record sound and have it sync up properly later, that has actually been selling pretty well. My next goal is to get into CNC machining with metal and actually build a modern 16mm film camera.
For my day job I am currently working for an online education company. I have been learning about the concepts behind knowledge tracing and using knowledge components to get a fine grained perspective on what types of skills someone has acquired throughout their learning path. It is hard because our company hasn't really had any sort of basis to start from, so I have been reading a lot of research papers and trying to understand, from sort of first principles, how to approach this problem. It has been a fun challenge.
Hey! Seems a nice job, do you mind if I ask which company and if you found some interesting references on the subject?
Hi! I can't really share the company, but I do love the space and happy to discuss what I've been reading.
So the idea of Knowledge Tracing originated, from my understanding, with a paper in 1994: http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2012/1... This introduced the idea that you could model and understand a student's learning as they progress through a set of materials.
The concept of Knowledge Components was started, I believe, at Carnegie Mellon and University of Pittsburgh with the Learn Lab: https://learnlab.org/learnlab-research/ - in 2012 they authored a paper defining KLI (Knowledge Learning Instruction framework): https://pact.cs.cmu.edu/pubs/KLI-KoedingerCorbettPerfetti201... which provided the groundwork for the concept of Knowledge Components.
This sort of kicked things off with regards to really studying these things on a finer-grained level. They have a Wiki which covers some key concepts: https://learnlab.org/wiki/index.php?title=Main_Page like the Knowledge Component: https://learnlab.org/wiki/index.php?title=Knowledge_componen...
Going forward a few years you have a Stanford paper, Deep Knowledge Tracing (DKT): https://stanford.edu/~cpiech/bio/papers/deepKnowledgeTracing... which delves into utilizing RNN(recurrent neural networks) to aide in the task of modelling student knowledge over time.
Jumping really far forward to 2024, we have another paper from Carnegie Mellon & University of Pittsburgh: Automated Generation and Tagging of Knowledge Components from Multiple-Choice Questions: https://arxiv.org/pdf/2405.20526, and a very similar paper that I really enjoyed from Switzerland: Using Large Multimodal Models to Extract Knowledge Components for Knowledge Tracing from Multimedia Question Information https://arxiv.org/pdf/2409.20167
Overall the concept I've been sort of gathering is that, if you can break down the skills involved in smaller and smaller tasks, you can make much more intelligent decisions about what is best for the student.
The other thing I've been gathering is that skills taxonomies are only useful insofar as they help you make decisions about students. If you build a very rigid taxonomy that is unable to accommodate change, you can't easily adapt to new course material or make dynamic decisions about students. So the idea of a rigid taxonomy is quickly becoming outdated. Large language models are being used to generate fine-grained skills (Knowledge Components) from existing course material to help model a student's development based on performance, in a way that can be easily updated when materials change.
I have worked through and replicated some of the findings in these later papers using local models, for example using the open Gemma 2 27b models from Google to generate Knowledge components and using Sentence Embedding models and K-means clustering to gather them together and create groups of related Knowledge Components. It's been a really fun project and I've been learning quite a bit.
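The clustering step, in miniature: a hand-rolled k-means over toy 2-D points. In practice the inputs are high-dimensional sentence-embedding vectors and you'd reach for scikit-learn, but the grouping logic is the same.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Bare-bones k-means: assign each point to its nearest center,
    then move each center to its cluster's mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
            clusters[nearest].append(p)
        centers = [
            tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Pretend each point is the embedding of one generated Knowledge Component;
# nearby embeddings end up grouped together as related KCs.
kcs = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9)]
groups = kmeans(kcs, k=2)
assert sorted(len(g) for g in groups) == [2, 2]
```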
Thank you! I've had a similar idea for a long time and I'm interested in developing it, but I never found the time to dig deeper. With these references I can jump-start into the subject and refine it.
It's nice to know I'm not the only one thinking about that.
The trick for me is that it's a path in a graph for each student, so even if some component is not as strong for one student, he can fill the gap by taking another route. A good framework would be resilient if it finds many possible paths to reach the same result, and not forcing one path. But then, teaching in this way is more difficult.
I released an iOS app last October for users who use Apple Watch to record their workouts - https://mergefit.itwenty.me
It lets users merge two or more workouts into a single one. There have been times when I have been out riding, hiking or whatever and accidentally end the activity on my apple watch instead of pausing it. Starting a new workout means having your stats split across the two workouts.
The "usual" way to merge such workouts is to export all of them to individual FIT files, then use a tool like fitfiletools.com to merge the individual FIT files. You then have a merged FIT file, which is difficult to import back into Apple Health. This process also requires access to the internet, which is not always guaranteed when out in remote areas.
MergeFit makes this process easy by merging workouts right on device and without the need to deal with FIT files at all. It reads data directly from Apple Health and writes the merged data back to Apple Health.
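In spirit, the merge itself is simple; here's a toy model of it (Apple Health's real sample types carry much more structure than these made-up tuples):

```python
from datetime import datetime, timedelta

def merge_workouts(*workouts):
    """Concatenate each workout's time-stamped samples and sort by time,
    so the result reads as one continuous recording."""
    samples = [s for w in workouts for s in w]   # (timestamp, heart_rate)
    return sorted(samples, key=lambda s: s[0])

t0 = datetime(2025, 3, 1, 9, 0)
ride_a = [(t0 + timedelta(minutes=m), 120 + m) for m in range(3)]
ride_b = [(t0 + timedelta(minutes=10 + m), 130 + m) for m in range(3)]  # after the accidental stop
merged = merge_workouts(ride_b, ride_a)   # argument order doesn't matter
assert [hr for _, hr in merged] == [120, 121, 122, 130, 131, 132]
```

The hard part in practice is reading from and writing back to Apple Health with every sample type intact, which is what the app takes care of.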
The app reached a small milestone a few days ago, crossing $1,000 in total sales.
Great idea! One other thing that annoys me is the inability to trim workouts, mostly walking workouts which I forgot to stop. Even when the Apple Watch asks you if you want to end the workout, it doesn’t cut off the end where nothing happened anymore. It should be possible to do this (semi-)automatically using the heart rate and movement data.
That's a good problem to solve. Another pet peeve of mine is the inability to modify different streams of fitness data in a single workout. If I record a workout on the watch but use a different device just for heart rate, there's no good way to replace the watch-recorded HR data with this other device's HR data.
I am sharing what I learnt building electric cars.
On YouTube: https://youtube.com/@foxev-content
In a learning app: https://foxev.io/academy/
On a physical board where people can explore electric car tech on their desk: https://foxev.io/ev-mastermind-kit/
Backstory: from 2018-2023 I converted London taxis to be electric and built three prototypes. We also raised £300k and were featured in The Guardian. I have a day job again these days and am persisting what I learnt and sharing it. YouTube is super interesting for me because of the writing; similar for the web app, actually, because the code isn't that complicated. It's about how I present it in a way that engages users, so I am thinking mostly about UX.
Actually why not, here is the intro to the first module (100 questions about batteries - ends in a 404): https://foxev.io/academy/modules/1/
I really love the concept of the mastermind kit. An experimentation product that's geared to learning about a specific niche.
Not taking payments yet; the ETA was Q1, but it's slipping to Q2 as my day job has kept me busy.
You can sign up interest and I will send you an email when it's ready.
Working on Runno (https://runno.dev/) as a side project. It's a tool for running code in the browser for educational use.
[Edit]: I wrote a re-introduction to Runno: The WebComponent for Code over the weekend (https://runno.dev/articles/web-component/)
I've been playing around with turning it into a sandbox for running code in Python (https://runno.dev/articles/sandbox-python/). This would allow you to safely execute AI generated code.
Generally thinking about more ways to run code in sandbox environments, as I think this will become more important as we are generating a lot of "untrusted" code using Gen AI.
Awesome! Have you considered pyodide[1]? Pydantic uses this for sandboxing its AI agents [2].
1. https://pyodide.org/en/stable/
2. https://ai.pydantic.dev/mcp/run-python/
Thanks! Yeah I'm very aware of Pyodide and interested in adopting some of their techniques.
A big difference between my approach and their approach is that Runno is generic across programming languages. Pyodide only works for Python (and can only work for Python).
Big interesting development in this space is the announcement of Endor at WASM IO which I'd like to try out: https://endor.dev/
Myself.
Been a freelance dev for years, now going on "sabbatical" (love that word) shortly.
Planning to do a lot of learning, self-improvement, and projects. Tech-related and not. Preparing for the next volume (not chapter) of life. Refactoring, if you like, among other things.
I'm excited.
Yeah... I'm starting a LitRPG-style blog for my personal growth: skills/attributes tied to personal goals. Spirit as it relates to learning guitar (for instance), Mana as it relates to tech skills.
Right now it's all a flat index.html and simple.css hosted on GitHub Pages, but I'll get a proper site eventually. Writing a blog is part of my goals to become a profitable author. (Creativity ~= blogging, sketching, animation)
Good luck. I plan on doing something similar once I get my permanent residence later this year.
I've got a self-hosted personal library management app more or less done here: https://github.com/seanboyce/ubiblio
An electronic board game similar to Settlers of Catan (https://github.com/seanboyce/Calculus-the-game); I just received the much better full-sized boards. I will assemble and test them over the next few weeks, then document everything properly. I got the matte black PCBs; they look really cool.
A hardware quantum RNG. Made a mistake in the board power supply, but it still works well with cut trace and a bodge wire. Will probably fix the bug and put the results up in a few weeks. Can push out ~300 bytes of entropy a second, each as an MQTT message.
A hardware device that just does E2E encrypted chat (using Curve25519). Microcontrollers only, no OS, and nothing ever stored locally. HDMI output (1024x768), Wi-Fi, and USB keyboard support. I originally designed it to use a vanilla MQTT broker, but I'm probably going to move it to HTTP and just write a little self-hosted HTTP service that handles message routing and ephemeral key registration. Right now the encryption, video output, and USB host work, but it was a tangle of wires to fix the initial bugs, so I ordered new boards. Got to put those through testing, then move on to writing the server.
Iterating on hardware stuff is pretty slow. I try to test more than one bugfix in parallel on the same board. Iteration time is 2-3 weeks and $8, if I have all the parts in stock. I don't have very much free time right now due to work, so this suits me fine. A rule I live by is that I must always be creating, so I think this is a reasonable compromise.
I'm working on an emulator for a 16-bit computer I have designed for teaching students. It's designed to make low-level computing more accessible to modern students by making things as visual as possible: for example, blinkenlights for the registers like on the old PDPs, color-coded memory that shows where the code and data segments are and where the stack is, etc., and a small frame buffer that drives a 64x64 2-bit display using the same color palette as the original Game Boy. The instruction set is a mashup of MIPS, the Scott CPU, and JVM/Forth stack operations. I'm excited about it.
here's a screenshot:
https://gist.github.com/1cg/e99206f5d7b7b68ebcc8b813d54a0d38
Nice. I made an 8-bit AVR thing along those lines, 240x180 - 16 color. In browser emulator and assembler.
Can load source from gists https://k8.fingswotidun.com/static/ide/?gist=ad96329670965dc...
Never really did much with it, but it was interesting and fun.
Working on [redacted], a Chrome extension to make YouTube more time-efficient (and way more fun).
Its main differentiator: hover any thumbnail (homepage, search, shorts, etc.) for an instant mini-summary, like Wikipedia link previews. Also includes detailed summaries w/ timestamps, chat w/ video, chat w/ entire channels, and comment summaries.
Hover & Detailed summaries are free if you plug in your own OpenAI API key ("free for nerds" mode).
Aiming to be the best YouTube-specific AI tool. Would love your feedback. No signup needed for free tier/BYOK. If you try it and email me ([redacted]), happy to give you extended Pro access!
Love the idea and the name! Do you think this will worsen the YouTube experience, though, since it encourages consuming content at a faster pace than watching the videos themselves?
Thanks!
I think its impact on watch time depends on your goal for that session. When I'm in "looking for a specific answer" mode it does reduce my watch time, but there are plenty of times when I just want to watch YouTube, and when I do, it helps me find what to watch rather than reducing my watch time per se.
Me and a friend of mine are designing a HAB (high-altitude balloon) payload meant to go on Hack Club's Apex: https://apex.hackclub.com. It's designed to measure how altitude, temperature, and much more affect photosynthesis, and in turn, chlorophyll fluorescence in algae. We learned a while back that when you shine blue light on chlorophyll, it fluoresces red, and it's really quite a cool phenomenon. We're designing custom PCBs for powering and processing data. It'll all go to a Raspberry Pi Zero 2 W, which will beam data back to earth over 915 MHz LoRa when it's 100k feet in the sky.
I'm working on MailTrigger — a customizable SMTP server that turns any email notification into a message on platforms like LINE, Slack, Microsoft Teams, Telegram, SMS, or pretty much anything else.
The idea is simple: if your app can send an email, it can trigger notifications across multiple channels with no extra coding. Think of it as "SMTP Server to Anything."
One of the cool parts is that MailTrigger supports WebAssembly (WASM), so you can customize your own notification logic and automate workflows. I’ve used it for tasks like monitoring internal systems, forwarding alerts to different chat platforms, and even adding basic decision-making logic before sending notifications. It’s been a huge time saver.
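Since MailTrigger's actual rule API isn't shown here, a rough sketch of the core "email in, notification out" idea (the mailbox names, webhook URLs, and rule table are all made up for illustration):

```python
import json
from email import message_from_string

# Hypothetical rule table: match on the recipient mailbox, route to a channel.
RULES = {
    "alerts@example.com": {"channel": "slack", "webhook": "https://hooks.slack.com/..."},
    "billing@example.com": {"channel": "telegram", "webhook": "https://api.telegram.org/..."},
}

def route(raw_email):
    """Parse an inbound email and decide which notification channel gets it."""
    msg = message_from_string(raw_email)
    rcpt = (msg["To"] or "").strip().lower()
    rule = RULES.get(rcpt)
    if rule is None:
        return {"action": "drop", "reason": "no rule for %s" % rcpt}
    return {
        "action": "forward",
        "channel": rule["channel"],
        "payload": {"subject": msg["Subject"], "body": msg.get_payload()},
    }

raw = "To: alerts@example.com\nSubject: CPU high\n\nLoad average above 8 for 10 minutes."
print(json.dumps(route(raw), indent=2))
```

The WASM hook described above would presumably replace the static `RULES` lookup with user-supplied logic running in a sandbox.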
I’ve also experimented with using LLMs to assist in rule creation — you can configure notification rules using natural language instead of writing manual code. It’s like giving your infrastructure a smarter way to handle incidents.
At my company, I’m using MailTrigger for real-time price drop alerts and server health monitoring, along with integrations like Jenkins and Sentry to forward alerts to our DevOps Telegram channel.
It’s still super early, and things like the docs, pricing, and overall user experience are definitely a work in progress. But I’m iterating quickly and would love to hear feedback from this community!
Check it out here: https://mailtrigger.app/
Curious to hear your thoughts!
Thanks for checking it out and for the feedback — really appreciate it!
I’m sorry the loading took so long. I’m not entirely sure if the issue was with the main site or the Join Waiting List process. We’ll definitely investigate and get it fixed as soon as possible.
If it turns out that the waiting list form was the problem and you'd still like to join, feel free to shoot me an email at bear@nuwainfo.com — I'd be happy to add you directly!
Thanks again for flagging this. Your feedback means a lot and will help us improve!
Great question — thanks for bringing it up!
Right now, each MailTrigger mailbox requires SMTP authentication (username/password), so unless someone has the correct credentials, they can’t inject messages. That gives us a basic layer of protection against spoofing from the SMTP side.
For forwarded emails (e.g. from Gmail), we do validate SPF, DKIM, and DMARC on inbound messages. Each mailbox acts as a gated endpoint — only verified senders are allowed to trigger actions.
As for pricing — you nailed it, we’re still working that out. We have a few rough ideas, but I’d genuinely love to hear what kind of pricing model would feel fair or sustainable to you.
Would you lean towards usage-based (like number of messages/month), flat monthly per mailbox, or something else entirely? Have you seen pricing models you liked (or didn’t) in similar tools like Zapier or SendGrid?
Your feedback’s incredibly helpful at this stage — really appreciate it!
I’m finishing several esolangs for the first artist’s monograph of programming languages, out in Sept: https://mitpress.mit.edu/9780262553087/forty-four-esolangs/ including a hands-free (and not dictated) language.
I recently completed Valence: a language with polysemantic programs https://danieltemkin.com/Esolangs/Valence on GitHub: https://github.com/rottytooth/Valence
Older work includes Folders: code written as a pattern of folders: https://github.com/rottytooth/Folders , Entropy: where data decays each time it’s read from or written to: http://entropy-lang.org/ and Olympus: where code is written as prayer to Greek gods, who may or may not carry out your wishes: https://github.com/rottytooth/Olympus (a way to reverse the power structure of code, among other things).
I have three more to complete in the next few months.
I'm working on AssetRoom, a free service to email you noise-free, easy to digest summaries of SEC filings from companies you're interested in.
I often read about interesting public companies (from an investment perspective or otherwise), but fail to then keep up with them over time (sometimes reading many months/years later how successful they were - or not!). I built this to make an easy way for me to follow updates from said companies.
Looks nice, my only initial nit is to change “twitter” to “x” in the footer, some people get very touchy about that haha
I was working on a ray-casting game engine in C, with the focus on enabling the largest worlds yet seen in such an engine (Minecraft scale etc). [0]
A ray-casting engine is an old style of game engine (think early '90s: Wolfenstein or Duke Nukem). The most famous, well-known example is probably Wolfenstein 3D, created by id Software (John Carmack and John Romero, etc.). You don't see these engines used anymore in modern games.
So to me, they are novel and a great challenge to try and modernise. Especially as a solo dev! And for further context, raycasted levels are usually teeny tiny (Wolfenstein 3D or Shadow Warrior are the largest worlds I’ve seen, so nothing impressively scaled). I have never ever come across a raycaster with levels the scale of something like Minecraft. So that’s what my ambition is.
I spent a period of 2-3 months of roughly 8-10 hour days, every day, on this project, not knowing much C, knowing nothing about game engines or graphics, and being only average at mathematics.
But I’m on a break from the project and coding after my 7 year relationship broke down. Realised I had tunnel vision with my life and ambitions, and am now “touching grass” daily instead. It’s hard to put effort into your hobbies when you feel other areas of your life are suffering.
So now I’m lifting weights and doing cardio and reading books instead, trying to keep active and my mind occupied.
I do want to pick this project back up, I’m really proud of what I was able to achieve with no knowledge coming in and I think the project has good bones.
And I loved coding, still haven’t found a hobby that scratches a similar itch
[0] - https://github.com/con-dog/chunked-z-level-raycaster/blob/ma...
I'm continuing to develop Uncloud [1] — an open source tool for deploying and managing containerised applications across multiple Docker hosts. Includes WireGuard overlay network, ingress with automatic HTTPS, and simple Docker-like CLI. Unlike traditional orchestrators such as Kubernetes, Uncloud has no state reconciliation or quorum needs which makes it easy to understand and use. If you ever wanted something simpler than Kubernetes for self-hosting your apps on cloud VMs or in your homelab, Uncloud might be for you.
Key updates from the past month:
- New demo screencast [2]: Deploy a highly available web app with automatic HTTPS across cloud VMs and on-premises in just a couple minutes
- Added initial Docker Compose support for service deployment. The same Compose can be used for developing locally and deploying to your cluster
- Completely revamped how service and container specifications are stored, enabling proper implementation of the service 'scale' command and selective container recreation
My goal for Uncloud is to create a more capable and modern replacement for Docker Swarm, which is no longer being actively developed.
[1]: https://github.com/psviderski/uncloud
[2]: https://github.com/psviderski/uncloud?tab=readme-ov-file#-qu...
I’m working on https://pikku.dev
It’s a TypeScript web framework that’s runtime agnostic, so it can work on serverless and servers (similar to Hono).
What’s different is that the focus is primarily just on TypeScript. There’s a CLI tool that inspects all the project code and generates loads of metadata. This can include:
• services used
• all the HTTP routes, inputs and outputs
• OpenAPI documentation
• schemas to validate against
• typed fetch client
• typed WebSocket client (and more)
The design decision was also to make it follow a function-based approach, which means your product code is just functions (that get given all the services required). And you have controllers that wire them up to different transport protocols.
This allows interesting design concepts, like writing WebSocket code via serverless format, while allowing it to be deployed via a single process or distributed/serverless. Or progressive enhancement, allowing backend functions to work as HTTP, but also work via Server-Sent Events if a stream is provided.
It also allows functions to be called directly from Next.js and full-stack development frameworks without needing to run on a separate server or use their API endpoints (not a huge advocate, but it helps a lot with SSR). Gave a talk about that last week at the Node.js meetup in Berlin.
It’s still not 1.0 and there are breaking changes while I include more use cases.
Upcoming changes:
• use Request and Response objects to make it truly runtime agnostic (currently each adapter has its own thin layer, which tbf is still needed by many that don’t adopt the web spec)
• smarter code splitting (this will reduce bundle size by tree-shaking anything not needed on a per-function basis)
• queues (one more form of transport)
Check it out, would love feedback!
My pursuit of happiness. I'm fearful of quitting my current job and going on a working holiday to Australia. I'm excited, while still trying to overcome the fear of not having a stable, well-paying job, because I don't find any joy in this job anymore. So I am working on mentally getting past this; I want to truly let go of the "money is more important than my happiness" idea.
I embarked on a working holiday to the UK in 1998. Quit my job in South Africa and left with nothing but a backpack. Opened up my world, I worked for Coca Cola, the BBC, an investment bank, got wrapped up in the dot com boom, met my American wife, met the smartest people in the world, some invested in me, moved to the States, started a business that now has millions of customers and a team of 40 that my wife and I 100% own. So I guess I lean towards GO!! NOW!! :-) Best of luck.
I'm at a similar place to GP. Reading this comment and the employment[1] page for Defiant was a great reminder of some of the things I've been struggling with lately (and how alternatives can look). Congrats on taking the leap many years ago and setting up a company with a deliberate work culture that sounds brilliant to be part of!
Wordfence is pretty cool. Heard about it and your security research for years before I ever used the plugin, then started using it with a client in the past couple years. My time with it is ending shortly (see my comment elsewhere on this thread[0]), but it's been great. Thanks!
Thanks for the story. One of the reasons I want to do this is to let go of control and believe in myself so I can face an unpredictable future. I will never know whether the future will get better or not, but I'll definitely be mentally stronger for doing this. Cheers!
Not sure what your age and obligations are, but I did this years ago. Highly recommended.
I spent time kicking around, had a work visa through BUNAC[0] but didn't use it, went to some festivals, did some WWOOFing[1] and hiking and climbing. Also took the opportunity to visit other countries near AU that I wouldn't get to otherwise (NZ, Fiji).
One of my life highlights. Two thumbs up! I doubt my experience is super relevant any more, but feel free to send me an email (address in profile) if you want to chat about it.
That is a cool thing to do if you do not have a family yet. If you do not, then do it. Life is too short to worry about not having a job for a few months. If you have savings, then why not?
I am doing this myself. I turned in my two weeks' notice a few days ago
I am worried about my decision too, but I think about a few things:
- You got your current job (plus all of the previous roles), so why wouldn't you be able to do it again if and when the time comes?
- Job gaps don't look as bad as they used to. People understand burnout and being stuck in a bad job. Breaking away from those can be seen as a positive, especially if you are pursuing your own health and interests
- There's more to life than draining your useful waking hours for a paycheck at a place that offers little else. With more time and energy, you can explore interests and projects on your own terms
Best of luck in your future endeavors
I guess I'm somewhat worried about not being able to get back into tech again, but I just realized I can still do open-source contributions if I want to while on the working holiday.
I'm in a weird state: I have an okay amount of savings to do this, but I'm still worried, because I've been through having almost zero money and two years of unemployment, and I'm scared to end up back there again.
In the end, I know I have to let this thing go or I'll never be happy even if I'm making tons of money. Gotta enjoy life sometimes. Thanks for the words!
Best of luck in our journeys too!
I dove into using LLMs together with MCP servers for the first time this weekend. Absolutely incredible.
In addition to the code assistant, I configured Grafana's MCP server with Cline, so that I can chat with an LLM while having real-time metrics and logs.
For context, I self-host Grafana along with a bunch of services on a Raspberry Pi. Simple prompts such as "why has CPU been increasing this week?" resulted in a deep analysis of logs/metrics that uncovered correlations I had never been aware of.
Incredible. I can only imagine what this will all look like in a few years
I’ve been dissecting stock market data with code to see how dark pool and automated trading behaviors are evolving and impacting the market. The market is the most adversarial brain battle possible, where actors constantly try to outsmart and make money off each other: greed and fear playing out, with conditions changing constantly. AI is used as well, but it gets funny when the market adapts and plays against past behavior to take advantage of AI traders too.
> see how dark pool and automated trading behaviors are evolving and impacting the market.
How/where did you get the data? Also curious about what you find out. I also thought that the cool billionaire kids were using "private rooms"[0] now and dark pools were for poor boomer hundred-millionaires! :-)
[0]https://finance.yahoo.com/news/darker-dark-pool-welcome-wall...
My toddler has a LeapFrog talking dictionary that drives my teenage daughter crazy. So I just started a project to replace its logic board with a RPi Pico 2 and interfacing with all ~45 of the toy's buttons and LEDs via a couple of GPIO expansion cards.
We'll have about ~300 short audio clips to record, and we'll store/access them via an SPI SD card reader peripheral. Audio output via a MAX98357A combo DAC/class D amplifier that we'll talk to via I2S. Powered by 2 AA batteries. Programming will be in CircuitPython, which is a cool way to teach the kids programming. (There are easy libraries for talking to all those peripherals.)
I'm working on a plaintext, decentralized, distributed (multi-device), trustless (multi-user), immutable, schemaless database that's syncable via SyncThing. The trustless part, fundamentally, comes from records being signed by their authors, each device/user keeping track of signatures (ids) seen, and each device periodically publishing all ids it's aware of: that seems to defeat all attacks or accidental corruptions I care about. My most demanding use case is multi-user expense tracking. Legal/financial document storage (a la Google Drive) and family photo tracking (a la Google Photos) are also up there.
Also, I've recently picked up modeling my financial choices and circumstances in Python. Modeling uncertainty is especially interesting I've found. I might share the Jupyter notebook some time to get some feedback.
It's early days.
- It will work as a library that a tool (e.g. family finance tracker) would be based on;
- Every record will be immutable and undeletable; the whole thing is space-inefficient, though I have some mechanisms in mind for pruning away unnecessary records, and it's just plain text, so I'm not worried: it should compress well. I wouldn't envision something like this working well at a very large scale, though;
- Editing of preexisting records will be implemented as adding a new record that simply references the previous one as its previous version; also, you can implement a ledger by creating a parent-child chain (though the tracking of signatures I mentioned previously might be a simpler approach);
- I like the append-only model because it gives you history of edits for everything, and protects you in case of mistakes (or malice);
- You'll be able to collaborate on records (and the whole db) with other devices/people; every record will be signed by its author/device; conflicting edits (records marking the same record as their parent) will not be deconflicted automatically by the db: the high level app can choose how it wants to do that: throw up an alarm for the user, ignore the older fork, whatever;
- SyncThing-compatibility will be achieved by simply having each device only edit files specific to it; there won't be a single file that is edited by more than a single device/user, thus, SyncThing conflicts can't happen;
- The db will have fairly good guarantees: if it runs its checks and tells you all's good, you can be sure that records were not changed, there is no new record masquerading as an old record, no record was secretly deleted, records weren't reordered, another device didn't edit some other device's files, and every record's author is its author;
- It was important for me to make the database easily editable and readable via regular text editors, i.e. not only plaintext, but text-editing-centric, but I've not found a way to do that at the lowest level; right now I'm contemplating using some variant of JSON for the records; however, I do envision a text-interface where the database, or a database query/subset, is rendered as a text file, and edits are automatically picked up by a daemon and translated into new records in the actual db, but that would be just another tool using this db, not a feature of the db itself;
- Like for anything synced via SyncThing, the user (or an app using the db) will want to implement some kind of backup (SyncThing is not meant for backups), unless the data is not sensitive to loss.
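To make the record model above concrete, here is my own toy sketch of it (not the actual implementation; in particular, a real trustless design would use asymmetric signatures such as ed25519, whereas HMAC with a per-device secret stands in here just to show the structure):

```python
import hashlib
import hmac
import json

def make_record(device_key, author, payload, parent=None):
    """Create an immutable, signed record; the id is a hash of the content."""
    body = {"author": author, "payload": payload, "parent": parent}
    serialized = json.dumps(body, sort_keys=True).encode()
    record_id = hashlib.sha256(serialized).hexdigest()
    sig = hmac.new(device_key, serialized, hashlib.sha256).hexdigest()
    return {"id": record_id, "sig": sig, **body}

def verify(device_key, record):
    """Check that the record's content matches its id and signature."""
    body = {k: record[k] for k in ("author", "payload", "parent")}
    serialized = json.dumps(body, sort_keys=True).encode()
    return (hashlib.sha256(serialized).hexdigest() == record["id"]
            and hmac.compare_digest(
                hmac.new(device_key, serialized, hashlib.sha256).hexdigest(),
                record["sig"]))

key = b"device-alpha-secret"
r1 = make_record(key, "alice", {"expense": 42.0, "desc": "groceries"})
# An "edit" is a new record pointing at the old one, never a mutation.
r2 = make_record(key, "alice", {"expense": 45.5, "desc": "groceries"}, parent=r1["id"])
print(verify(key, r1), verify(key, r2), r2["parent"] == r1["id"])
```

Conflicting edits would simply be two records naming the same parent, which, as described above, the db surfaces rather than resolves.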
I’m slowly working on my first commercial game for Steam in my spare time, using my open-source 2D engine named Carimbo.
See here: https://imgur.com/a/e3Xo9Io
Carimbo source code: https://github.com/willtobyte/carimbo
More information: https://nullonerror.org/2024/10/08/my-first-game-with-carimb...
I can never wrap my head around game development. Respect. All the best.
Some time ago, I began searching for Python-related events and discovered that many PUGs (local Python User Groups) had disappeared sometime around COVID (at least my local PUGs). After analyzing the ones listed on the official Python website, I found only about 18% were still active, with most hosted on Meetup. This makes sense, as maintaining a community requires time and money, which small PUGs don't have. Meetup can be costly for those starting local Python User Groups, but it's very cheap for big communities. IMO Meetup is not the best place for PUGs, as they are not big by default.
PUGs need a way to communicate and broadcast, to be discovered, but it doesn't necessarily need all of Meetup's features. Also, PUGs probably don't want to be tied to Facebook or other social media platforms. It'd be best if they allowed a simple ownership transfer, once you get tired of organizing.
That's why I created https://pythonuser.group/ - a lightweight side project that, despite being rough around the edges, fulfills the core need: allowing people to discover PUGs worldwide for free. The platform costs me almost nothing to maintain. It also lets you subscribe to local PUGs via RSS (not sure if it works). I'll add "export all my PUG data" once someone requests this feature.
This is the first time I've shared it with the world. Please don't treat it as prod-ready. Feedback welcome at hn@{username}.com
I'm working on a new Event Sourcing database that elevates the WAL into a first-class application concept, like a message queue. So instead of standing up a PostgreSQL instance, a Kafka instance, and a bunch of custom event sourcing plumbing, you stand up this database and publish all your application events as messages. For the database part you just define the mappings from event to table row, and you get read models and snapshots for free.
The real key here is how migrations over time are handled seamlessly and effortlessly. Never again do you have to meet with half a dozen teams to see what a field does and whether you still need it: you can identify all the logic affecting the field and all the history of every change to the field, and create a mapping. Then deploy, and the system migrates data on the fly as needed.
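The event-to-row mapping idea can be sketched roughly like this (all names are illustrative, not the product's API): a read model is a fold over the event log, so a "migration" is just replaying the log through a new mapping, with no ALTER TABLE and no coordination meetings.

```python
# The event log is the source of truth; rows are derived, never authoritative.
EVENTS = [
    {"type": "UserCreated", "id": 1, "name": "Ada"},
    {"type": "EmailChanged", "id": 1, "email": "ada@example.com"},
    {"type": "UserCreated", "id": 2, "name": "Grace"},
]

def project(events, mapping):
    """Fold the event log into a dict of rows keyed by entity id."""
    rows = {}
    for ev in events:
        handler = mapping.get(ev["type"])
        if handler:
            handler(rows, ev)
    return rows

# v1 mapping: the read model tracks names only.
v1 = {"UserCreated": lambda rows, ev: rows.setdefault(ev["id"], {}).update(name=ev["name"])}

# v2 mapping: a new field appears; migrating is just replaying with a new fold.
v2 = dict(v1)
v2["EmailChanged"] = lambda rows, ev: rows.setdefault(ev["id"], {}).update(email=ev["email"])

print(project(EVENTS, v1))
print(project(EVENTS, v2))
```

Snapshots then fall out naturally as cached intermediate folds, which seems to be what the post means by getting read models and snapshots for free.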
Still in stealth mode and private github but the launch is coming.
I have done something similar for a few customers. I have found that a useful pattern is to have both raw queues (incoming data) and a clean queue (outgoing data). Outgoing data goes in a single queue only (so all changes are ordered and we avoid eventual consistency) that has a well-defined data model (a custom DSL for defining it) and tables/REST API that correspond 1-to-1 to the data model. Then we need mappings from the raw queues to the clean queue.
Interesting. Sounds like a similar premise to Supabase Realtime. I'll keep an eye out.
It was inspired by KSQL! But really KSQL isn't a database, it just gives you some SQL processing. Nobody stands up Kafka and KSQL when they want a database of any kind.
The difference is that I'm building a database and exposing the WAL to the application layer. What that means is that you can connect your legacy DB application and have it issuing insert and update queries which are now native messages on a distributed message queue. Suddenly you gain a complete audit trail for your entities without brittle tables and triggers to track this. Then instead of hooking up Qlik or Debezium, you can just stand up another instance of the DB, propagate the realtime WAL, and you've got streaming analytics - or whatever new application you want.
I'm working on tooling to turn kids from consumers into creators. I'm focusing on game development initially, but have plans for video production and hands on crafts.
For younger kids I've modified Overcooked 2, a traditionally co-op game. I've replaced the second player with a visual scripting platform that allows kids to code their way through levels — worth noting I haven't removed co-op, there's still room for 2 other players:
https://www.youtube.com/watch?v=ackD3G_D2Hc
For older kids I've been making contributions to GodotJS, which allows you to build games in Godot using TypeScript rather than GDScript. GDScript is pretty nice, but I want to be able to teach kids skills that are more directly transferable to different domains e.g. web development:
https://github.com/godotjs/GodotJS/pull/65
I used to be Head of Engineering at Ender, where we ran custom Minecraft servers for kids: https://joinender.com/ and prior to that I was Head of Engineering at Prequel / Beta Camp, where we ran courses that helped teenagers learn about entrepreneurship: https://www.beta.camp/. During peak COVID I also ran a social emotion development book subscription service with my wife, a primary school teacher.
Yes, absolutely, but we're talking about teenagers. I've no doubt they're capable of learning multiple languages. But teenagers are most constrained by the limited time they have available for extracurricular activities. If I were to teach interested kids a second language (and I'd like to), it would probably be lower-level so kids can learn about memory management etc.
I guess I did not explain myself well. The way I understand what you are saying is: let's not teach them GDScript but TS instead, the rationale being that they can then also do webdev.
But my impression of the Godot community is a lot of GDScript, some C#. So they would not easily grow within the Godot community and make games.
As for teenagers learning new languages: if I remember my teens, 200 years ago, learning new computer things was a thrill, not a chore.
And like I said earlier, I see the habit of picking up a new language as a wonderful skill.
Hope it is clearer. Good luck with your quest of teaching kids to make their own games with Godot.
I'm a synthetic biologist! And I think a big problem is physically executing biology programs.
I've been working on building a programming method for biology labs. Basically, it is a dynamic execution environment using Lua that has full rewind-ability (for complete execution tracing) and pausing - you can execute some code, then wait a week or so to continue execution. The idea is you can have a dynamic environment for executing biology experiments where the route changes on the basis of outcomes - something I haven't really seen anywhere else. Then I focused a bit on the pedagogy of LLMs, so that you can ask an LLM to generate a protocol, and then, when you execute it and get unexpected results, it can automatically debug its own protocol using a code sandbox.
It all sounds decent in theory, but the difference is I actually implemented it and ran a real biology experiment with it (albeit a simple one that I knew wouldn't work).
Demo here: https://github.com/koeng101/autodemo (probably watch the video)
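As a rough analogue of the pause/resume idea described above (the actual system uses Lua; this Python generator sketch is only illustrative, with made-up step names), each yield is a point where execution can stop for a week, outcomes are fed back in to steer branching, and the accumulated trace doubles as an execution log:

```python
def transformation_protocol():
    """A toy wet-lab protocol; each yield is a pausable step."""
    yield "thaw competent cells on ice"
    yield "add 1 uL plasmid, incubate 30 min"
    # The outcome observed after this step decides the next branch.
    ok = yield "heat shock 42C for 45 s, then plate"
    if not ok:
        yield "debug: re-plate with fresh antibiotic plates"
    yield "done"

def run(protocol, outcomes):
    """Drive the protocol, feeding in observed outcomes; return the step trace."""
    trace = [next(protocol)]
    try:
        for outcome in outcomes:
            trace.append(protocol.send(outcome))
    except StopIteration:
        pass
    return trace

# Colonies failed to grow (False), so the protocol branches into a debug step.
print(run(transformation_protocol(), [None, None, False, None]))
```

Full rewind-ability would additionally require persisting the trace and replaying it, which generators alone don't give you; that part is where the real system's design matters.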
I realized that many seasoned developers do not yet grasp the power and productivity gains that tools like Cursor can provide, not only in general, but ESPECIALLY to those who have a lot of experience and broad expertise.
Therefore, I‘m working on a mid- to longform blog post that details how precisely the competencies that senior developers and tech leads already have are the key to fully harness the potential of these tools.
And who knows, maybe I‘m going to develop this into some form of consulting or training side project.
...and here you go:
https://manuel.kiessling.net/2025/03/31/how-seasoned-develop...
Feedback more than welcome!
As per your request:
https://manuel.kiessling.net/2025/03/31/how-seasoned-develop...
I'm building CAIL – an AI that makes and answers phone calls for you. You type what needs to be done – it calls, talks, waits on hold, asks questions, and sends you a summary. Kind of like a voice assistant, but for actual phone calls. Also built an AI voicemail agent – it can answer missed or declined calls, figure out what they wanted, book a meeting in your free calendar slot, or even mess with spammers if needed.
Started as a personal pain after moving to the US. Now works for both people and businesses. Built our own voice infra. iOS, Android for B2C, and web dashboard for B2B. Building full-time with my wife – just pushed the first version of the mobile app this week.
Very cool, but isn’t this illegal? https://news.ycombinator.com/item?id=39304736
That's cool! I wondered about this recently -- I'd be pretty annoyed if an AI called me. I would most likely hang up immediately, regardless of what was said. Am I in the minority here? Is that feedback you've gotten before?
I do however think people would be more tolerant of an AI answering a phone call they made, I'm bullish on that half of the equation.
sweet. how would you hook into my inbound phone calls?
The app is in review right now in the App Store & Google Play! I will send you a link on LinkedIn on release, but for now you can try it in the web UI at https://app.cail.io (on the web platform you need to handle call forwarding on your own; in our app we have a bunch of instructions for each carrier).
As a native Spanish speaker, I often used ChatGPT to improve my emails in English. It was very time-consuming having to type the same prompt multiple times in a day to set my writing style and format. That's why I decided to build an app to simplify my process. You can set the style and format just once, and with a single click, you can improve your text. You have the option to include the email thread or any relevant context for a more personalized improvement. Additionally, I've included features designed specifically for non-native speakers, like tone detection and the ability to request a few different alternatives for any word/phrase. And of course, you can talk directly with the AI to create a draft or modify the text. Check it out: https://talktext-ai.web.app/
I experimented with it and was able to get everything I wanted, but it required many extra steps and prompts. For instance, my app automatically detects the tone of the message while I am typing, whereas in GPT, I need to manually request the tone after making any changes to the text.
Lots of things! I'm in a gap year.
To start with, there's https://nuenki.app. It's a browser extension that selectively translates sentences into the language you're learning, so you're constantly immersing yourself in text at your knowledge level.
I've also been working with a friend on a device to help blind people without light perception. I'm quite new to electronics. It's pretty simple, conceptually - a coin-sized device on the forehead that takes in the light intensity in a ~15 degree cone, then translates it into high resolution haptic feedback to the forehead.
The idea being that people without light perception can gain a sixth sense through neuroplasticity, which helps them navigate a room and understand their surroundings. We're planning on open-sourcing the files. My mum used to teach blind kids, and there's been quite a lot of interest!
As for Nuenki, I'm pretty bad at marketing, so I'm doing a final lot of work to see if I can make it work financially - seeing if an exceptionally generous affiliate program will do the trick - before putting it on maintenance mode, since I have a small group of users who really like it. I'm burning through my gap year fast, and really want to focus on the electronics project, tutoring, and practicing maths for my physics degree.
I wanted a better way to keep track of applications I sent out; a spreadsheet just seemed like a poor way of tracking that data. So over time I built a desktop application to track my job search activity for me. Most alternatives are web-based, but I didn’t love the idea of broadcasting my job search to third parties. This is a native desktop app (Windows/macOS) that keeps everything local.
Still working on code signing (so no scary "unknown publisher" warnings), but otherwise, v1 is ready.
Would love feedback—especially from others who’ve struggled with job-search tracking!
I had thought about it, but I found that in practice the majority of applications I send out wind up in the 'Applied' state and don't get much further, leading to a large list in a narrow swim lane.
That, and since I let users customize the flow of applications, the number of swim lanes would get a bit messy. I have my app set up to track interviews, phone screens, take-home assignments, and more.
I found that a smart sorting algorithm is best for displaying the applications.
But it's still early days for the app; possibly someday.
Been slowly chipping away at my browser MMO game http://everwilds.io. I've always been curious about MMO software/netcode architecture thanks to playing Guild Wars (the original) in my youth. It's the reason I became a programmer; I wanted to learn how to make games like those. For those interested in following the progress, I made a Discord channel: https://discord.gg/b3REbeavaT
I just finished building a Night Routine manager for my wife and me that helps us keep alternating who does the toddler's night routine. I needed this because we both have evening activities, and trying to figure out how to block out days for it while keeping the routine fair was hard.
https://github.com/Belphemur/night-routine
I wanted to build everything from the ground up in Go and have it fully integrated with Google Calendar where we have our family calendar.
It sets up a full-day event with the name of the parent in charge of the night routine. To override a routine, either of us can just rename the event with the other parent's name, and the software recalculates the following routines.
I also wanted to give Roo Code in VS Code a try; it only took me 2 days (evenings) to code the whole thing with a proper SQLite db.
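The override-and-recalculate rule can be sketched roughly like this (a hypothetical Python sketch of the scheduling logic, not the actual Go implementation — names and the override mechanism here are my own illustration):

```python
from datetime import date, timedelta

def plan_routine(start: date, days: int, parents=("A", "B"), overrides=None):
    """Assign the night routine by strict alternation; an override pins
    a day to a specific parent, and alternation resumes from that parent
    so all the following routines are recalculated."""
    overrides = overrides or {}
    schedule = {}
    idx = 0
    for i in range(days):
        day = start + timedelta(days=i)
        if day in overrides:
            # renaming the calendar event maps to an override entry here
            idx = parents.index(overrides[day])
        schedule[day] = parents[idx % len(parents)]
        idx += 1
    return schedule
```

Renaming a calendar event would then translate into an entry in `overrides`, and every day after it shifts accordingly.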
My wife and I alternate and handle conflicts by exception. Meaning, if we have a conflict, the other person subs in. I think calendaring things around your home schedule will be hard for the 'things'. Like, a lot of our evening activities are scheduled events that we can't control (concert, theater, etc.) or even a group (dad's dinner, etc.), and the occasional work stuff. Anyway, I just think it would be hard to plan home duties around a conflict, so instead we try to sub in for each other and shoot for something that 'feels fair' over time instead of quantitatively fair on a tracker. I'm all for building tools for yourself, so if you think this would be of use then that's awesome; just thought I'd share my 2 cents on the 'problem space' :)
Also, FWIW, I think I'm the one in the deficit of fair, although dads usually get a bad rap in this regard. I end up doing a lot more nighttime reps because she does frequent girls' nights, has multiple friend groups she's trying to stay engaged with, and is in a theater group and a mahjong group, etc. However, I balance it with my occasional "take an entire day" to myself. Stuff like this is hard to track, which is why I think it's important to shoot for what 'feels fair' and make sure you talk about it occasionally so nobody suddenly has repressed feelings of inequality.
This is great advice. In our case, we realized we can't trust our memory anymore; it became hard to remember who had done it the most.
We both wanted a system that keeps track of it for us. We want to be sure we can both have activities without leaving the other parent on the side, and without relying on feel.
In our case we also have recurring events, like sports in the evening that happen every week at the same time, so this helps us plan around them and not become unbalanced. We already put everything in the calendar :)
Still plugging away on my linear genetic programming experiments.
The big debate in my head right now is whether a next byte prediction architecture is better or worse than full sequence prediction.
The benefit of next byte prediction is that we only expect 1 byte of information to be produced per execution of the UTM program. The implication here is that the program probably doesn't need a whole lot of interpreter cycles to figure out this single byte each time (given reasonable context size). However, the downside is that you only get 256 levels of signal to work with at tournament selection time. There isn't much gradient when comparing candidates on a specific task.
The full sequence prediction architecture is expected to produce the entire output (i.e., context window size) for each UTM program invocation. This implies that we may need a much larger # of interpreter cycles to play with each time. However, we get a far richer gradient to work with at fitness compare time (100-1000 bytes).
Other options could involve bringing BPE into my life, but I really want to try to think different for now. If I take the bitter lesson as strongly as possible, tokenization and related techniques (next token prediction) could be framed as clever tricks that a different computational model could avoid.
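To make the gradient argument concrete, here's a toy illustration of the two fitness signals (a sketch under my own per-byte-distance scoring assumption, not the actual evaluator):

```python
def next_byte_fitness(predicted: int, target: int) -> int:
    """One predicted byte yields at most 256 distinct scores, so
    tournament comparisons between candidates are coarse: rivals
    often tie or differ by a single level."""
    return 255 - abs(predicted - target)

def full_sequence_fitness(predicted: bytes, target: bytes) -> int:
    """Scoring a few hundred output bytes at once gives a much finer
    gradient when comparing two candidate programs at selection time."""
    return sum(255 - abs(p - t) for p, t in zip(predicted, target))
```

With a 1000-byte context, the full-sequence score ranges over roughly 255,000 levels instead of 256, which is the richer signal described above, at the cost of more interpreter cycles per invocation.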
I'm building a website for watching TV news channels from around the world.
With the recent flurry of historic events unfolding, I want to see them from different perspectives (e.g. U.S., Europe, Russia, China, pro-Palestine vs. pro-Israel), so I included channels from all these areas, even channels that may be considered propaganda. So keep a critical eye when watching them.
And it's a way for me to try out Vidstack and SvelteKit. Feels like the routing can be improved though.
A website that provides transparency into the new/used car market in Australia: https://carvalue.app/
The goal is to allow anyone to know how much any car is worth at a given age/mileage, and eventually help people make better purchasing decisions.
Be warned: It's still a very buggy prototype at this stage and the data confidence for all but the most popular models is low!
I use CarGurus in the US to find used cars since you can very effectively filter on features and lifespan (age/mileage). Worth thinking about - used-car buyers might be after a specific year-range/model or only after a body type and set of features.
Have been building drawDB[1] for a while now. It's a database schema visualizer. Currently working on adding support for Oracle databases; wrote a parser[2] to allow importing from Oracle SQL. Have been struggling with motivation though. The pieces are pretty much there, but I've been procrastinating on putting it all together. This has been my main side project for almost 2 years now. I miss the feeling of novelty, but I can't come up with something else worth building. idk
I have recently started working on a SWI-Prolog library for unit-aware arithmetic[1]. It is still very bare-bones (especially the documentation), but I started writing some examples[2] to showcase the library. It is essentially a port of the mp-units[3] library from C++. It has been a lot of fun, and I found Prolog especially well suited to manipulating symbolic representations of units and quantities.
[1] https://github.com/kwon-young/units
[2] https://github.com/kwon-young/units/blob/master/examples/spe...
I love playing solitaire card games. But I got sick of all the privacy-intrusive, subscription-based, ad-riddled crap out there. So I combined learning macOS/iOS development with building my own games the way I want them (currently Klondike and Spider): https://menubar.games (originally I intended them to only be accessible through the menu bar, but got carried away when I realized how easy it was to make iOS versions).
Will continue to refine and possibly do more - love iterating and polishing as a way to learn.
We're increasing the brain's restorative function during sleep.
Over the last 4.5 years, we've been developing slow-wave enhancement technology which increases the effectiveness of deep sleep.
We've developed the full stack, our own hardware, soft conductive dry (no paste or gel) electrodes, comfortable EEG headband, embedded sleep stage classification and stimulation models, the list goes on and on.
We're currently ramping up for a pre-sale: getting the marketing in line, finalizing industrial design, prepping for manufacturing, etc.
Something I wrote about in a recent blog post[1]: we don't increase the amount of TIME in deep sleep. We increase the restorative function (synchronous firing of neurons) without altering sleep time.
We'll be launching a pre-sale once we complete our fit testing (end of Q2 '25) and shipping Q4.
[1] https://www.affectablesleep.com/blog/is-8-hours-of-sleep-the...
Good use of tech. Hardest thing will be corrosion on the fabric electrodes. Muse S has much to teach if you haven't looked at them yet.
We don't use fabric electrodes; we've developed our own conductive silicone. The other thing to learn from Muse S is that mastoid (behind-the-ear) electrodes are nearly impossible. We have comb electrodes similar to Dreem's.
Though, I don't think corrosion is the issue with Muse S, the electrodes tear as I understand it. Either way, they are not robust enough.
Very interesting, what do you estimate the price will be?
We'll be competitive to the high end of other wearable devices (think Oura/Whoop). We aim to make the starting price affordable by offering a subscription plan, though we are also considering a "full fare" for people who hate subscriptions, but that would be considerably more up front.
I'm currently trying to transition from a fairly rigid day job into working as an independent developer. The goal is to build useful online tools and hopefully create a sustainable income stream doing something I find more engaging.
One consistent annoyance in my professional work has been dealing with PDFs – specifically, extracting information into editable formats without losing structure. Copy-pasting often creates a mess.
So, my first project tackling this is an online PDF to Markdown converter: https://pdftomarkdown.pro/
I've focused heavily on trying to maintain good formatting for headings, text flow, formulas, and especially table structure (getting rows/columns right in Markdown). It also has an online editor for quick modifications after conversion.
A key aspect for me was privacy: the application explicitly does not save the content of uploaded PDFs or the generated Markdown files. It only stores minimal metadata (email, filename, page count) for registered users' plan limits.
It's very much a "scratching my own itch" project born out of that PDF frustration. Early days, but hoping it proves useful for others too.
Appreciate you sharing that requirement!
The need for batch processing to pull out targeted data points from PDFs (rather than converting the whole document) is a valuable insight.
While the current tool focuses on full conversion to Markdown, enhancing https://pdftomarkdown.pro/ to handle specific data extraction tasks like yours is definitely something I'll consider carefully for the future roadmap. Thanks for highlighting it!
Unfortunately, PDFs are right buggers to work with and there often isn't an "easy to find value" for anything
You're absolutely right, PDFs can be incredibly tricky. That lack of a consistent, easily parsable structure for arbitrary data is the core challenge.
I'm working on an order and production management system for a company that fulfills orders for print-on-demand ecommerce sites. It handles things like graphics manipulation, imposition layouts (gang sheets), cutlines, printing, order batching and tracking during the production and fulfillment stages (barcodes) and everything necessary from what an order is received via API to when it is shipped.
I've been re-energized to work on some programming things I'd mostly left on the shelf or only picked up once in a blue moon on some quiet weekends. Mostly this is all thanks to LLMs and how enjoyable tools like Claude Code can be to use (when they actually give you what you want!), and how much more social they make the experience for me.
I'm not a professional developer, so I suspect these projects look a lot like slogging through mud and navigating a maze in the pitch black. I've found these tools helpful, especially in small refactorings that would normally result in me slowly losing interest in the project.
Anyways, https://pianobooth.com is the latest one! Gave me an excuse to learn some Blender as well. It'll play your midi files (well, it might play your midi file) and show the notes being played on a keyboard. :)
Lots of room for improvement. It now works on most of the midi files I've tried, but it's still glitchy and buggy. But at least it works sometimes!
I am working on a guitar practice app[1]. It's a metronome that captures speed and duration as you practice. This data can then be visualized. It's designed to be used on a desktop/laptop as there are keyboard shortcuts for easily controlling the metronome while playing the instrument, but otherwise it works quite well in mobile browsers too.
It started as a personal project a few months back. Since then, I have been using it myself, alongside building the functionality I need. Lately I have been working on polishing it up in order to put it out there for others.
Based on my usage so far, I've come to realize some good second order effects too -
1. Having a list of exercises helps me quickly pick something meaningful to practice rather than noodling for most of the time.
2. At the end of a practice session, the total duration is just 15-20 mins yet it feels like quality practice. So now, even if I have just 20-30 mins of free time, I am motivated to squeeze in a quick practice session. Turns out, this is a game changer (at least for myself).
Feature wise, I'm quite happy with the current state of it although I have some ideas for premium features (if it generates enough interest). In the coming weeks, I am planning to switch gears a bit and focus more on marketing/promotion. I also need to play more, because ironically, my practice time has reduced in the last few weeks in the pursuit of "launching" it! Also, I've set a goal to publish one new exercise in the library every week until the end of this year.
https://github.com/Tombert/swanbar
https://blog.tombert.com/posts/2025-03-22-swaybar/
I wanted some extra functionality for the Swaybar in the Sway Window Manager. I got a basic thing working with Bash, but I wanted more stuff, and so I rewrote it in Clojure with GraalVM.
I think it's kind of cool, I ended up with a fairly elaborate async framework with multimethods and lots of configuration, and the entire thing has almost no "real" blocking, and can persist state so that it can survive restarts.
The reason for async support was so that I can easily have my Swaybar ping HTTP endpoints and display stuff, without it affecting the synchronous data in the bar. I have TTLs in order to rate-limit stuff, and also have click-event support.
Right now, I have it ping OpenAI to generate a stupid inspirational quote based on something from a list of about two hundred topics. Currently, at the top of my bar it says:
> "Let the flame-grilled passion of life's challenges be your fuel for success" - Patty Royale
I think it's kind of cool, it's building with Nix so it should be reproducible, and with GraalVM, it hovers at around 12-15 megs of RAM on my machine.
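The TTL-based rate limiting boils down to caching each fetched value and only refreshing it once its time-to-live expires, so the bar can redraw frequently without hammering an endpoint. A rough Python sketch of the pattern (the real thing is Clojure with core.async; this is just the idea):

```python
import time

class TTLValue:
    """Cache one expensive value (e.g. an HTTP response) and only
    re-fetch it after ttl_seconds have elapsed."""
    def __init__(self, fetch, ttl_seconds, clock=time.monotonic):
        self.fetch = fetch          # callable doing the expensive work
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self.value = None
        self.fetched_at = None

    def get(self):
        now = self.clock()
        if self.fetched_at is None or now - self.fetched_at >= self.ttl:
            self.value = self.fetch()
            self.fetched_at = now
        return self.value
```

Injecting the clock keeps the expiry logic testable without real sleeps.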
Really interesting. I'm bookmarking your comment for later.
One thing you can do is separate the information fetching from your bar completely. I have a service that runs every minute or so to fetch available updates from the Arch repositories (including the AUR); it writes its output to a file, and then my bar regularly updates its displayed information based on that file.
I don't have the service definition uploaded anywhere, but you can see how simple it is to integrate it with anything here [1]. This is a status bar I'm building with QML. It's not ready to be released yet; I'm at 0.7.0. Only tested on X11/i3wm so far. Last time I launched it in Wayland/Sway there were some issues, but it's been a while. Since it's built with Qt, complex and non-blocking interactions are available out of the box. For example, switching workspace by clicking an icon in the bar, or switching the format of the displayed date/time.
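For anyone wanting to copy the pattern, the split is just "fetcher writes a snapshot file atomically, bar reads it on redraw". A minimal sketch (the file path and JSON format are illustrative, not what my service actually uses):

```python
import json, os, tempfile

def write_status(path, data):
    """Fetcher side: write to a temp file and atomically rename it,
    so the bar never observes a half-written file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "w") as f:
        json.dump(data, f)
    os.replace(tmp, path)  # atomic on POSIX

def read_status(path, default=None):
    """Bar side: read the latest snapshot; tolerate a missing file
    (e.g. before the fetcher's first run)."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return default
```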
Yeah, the current design is kind of an iterative process; I started with a considerably simpler thing that just had the clock, the date, and the currently selected program, and then I thought it might be neat to add some async stuff, and it ballooned from there.
I do have the state persisted in a msgpack binary, but the data fetching is done within the app. I don’t know that separating it out would necessarily be better, I kind of like that I have the pipeline set up on such a way that the fetching for sync and async stuff can be reused.
I am debating rewriting this to use the slightly lower level NIO selector class instead of core.async, but the memory on this is low enough to where I am not sure that it’s worth it.
I have been writing a lot of helper apps in Rust for Sway as well, mostly as an excuse to play with Rust more [1] [2].
I will take a look at your stuff. I have wanted an excuse to learn a bit more about Qt.
I'm vibe coding a "social media engine."
Basically, I've noticed a bunch of social media protocols like ATProtocol, ActivityPub, and Nostr coming out, and I think that while having these protocols is a good idea, one thing limiting adoption is a lack of differentiated social media sites on them. Everyone keeps building Twitter on a new protocol without offering anything novel. One bottleneck, I thought, is that there isn't a set of utilities to help build a new social network easily, so everyone defaults to building Twitter as an MVP. I wanted to make an engine, kind of like a game engine, but for building a custom social media site on any particular protocol. Hypothetically this should make development of these kinds of projects go way faster. Basically, like a very opinionated Django REST framework. Hopefully developers will build more interesting, novel sites and increase adoption of these protocols.
I just released an iPhone app called "KIN: Family Calendar" (https://www.kincalendar.com/). It is a Voice-first, AI-Native Shared Family Calendar. Actually, it is simply a replacement for Fridge Calendar. It solves one problem really well:
Capture everything going on in the Family.
Let me elaborate. Families attempt to use a Shared Calendar when things get hectic, but they don't capture everything going on, because most of these events are just too tedious to enter into a traditional Calendar. Voice Assistants are not tailored to capture the kinds of Events that families have. Examples below. So, families either don't use a Calendar, or one person in the family ends up spending too much time keeping everything in sync.
Examples of flexible events that KIN can handle:
1. Aditi has a school board meeting on the second Tuesday of every month.
2. Aadi has chess on Tuesdays and Thursdays from 6 to 7, and Saturdays at 11.
3. Rushi has after-school soccer every Wednesday starting April 3rd, for 10 weeks.
4. Create events from screenshots or photos.
Coming in the next release:
- Send reminders to other members of the family! E.g., remind Aditi to pick up Rushi from school tomorrow at 3 pm.
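Under the hood, a flexible rule like "second Tuesday of every month" has to be expanded into concrete dates. In stdlib Python that expansion might look like this (an illustrative sketch, not KIN's actual implementation):

```python
import calendar
from datetime import date

def nth_weekday(year: int, month: int, weekday: int, n: int) -> date:
    """Return the n-th given weekday of a month, e.g. the second
    Tuesday (weekday=1, n=2) -- the kind of recurrence an event like
    'school board meeting on the second Tuesday' expands to."""
    first_weekday, _ = calendar.monthrange(year, month)  # weekday of the 1st
    offset = (weekday - first_weekday) % 7               # days until first such weekday
    return date(year, month, 1 + offset + 7 * (n - 1))
```

For example, `nth_weekday(2025, 3, 1, 2)` gives the second Tuesday of March 2025, which is March 11. The LLM's job is the messier half: mapping free-form speech onto parameters like these.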
Not much :)
Which I think is a valid answer. I have a job, family and some health issues.
The main thing I am looking at is blogging. Just posts on a problem I solved that week at work, that kind of thing. It seems like a low-time-cost way to promote myself. I might dip my toe in the cooking fat of LinkedIn engagement using the posts.
As much as that is yuck, I feel it may be beneficial. I just need one LinkedIn lurker to be impressed and hire me in a few years' time!
And by not posting about AI or working 200 hour weeks I might stand out :)
Considering quitting my mind-melting corporate job to have the freedom of mind to pursue another attempt at starting my own business. So that decision is what I'm working on. Background: they announced large-scale restructurings last week, and that could be my chance of getting some money for leaving (region-beta paradox in full swing here).
Anyone else?
Today is my second-to-last day of employment; I resigned recently for the same reason. Feels exciting and nerve-wracking at the same time.
I wish you luck :)
Just working through @munificent's excellent Crafting Interpreters book. I am currently plugging away at the end of the tree walking interpreter section (add your own feature).
(1) Photography: I did two events last month, an indoor track meet
https://www.yogile.com/strides-of-march-2025
and Dragon Day at Cornell, where I am spooling out pictures to
https://bsky.app/profile/up-8.bsky.social and https://mastodon.social/@UP8
I'm excited that I'm getting paid to do an event next week because that's been one of my goals. I feel like I'm really progressing at understanding event venues to pair up interesting foregrounds with meaningful backgrounds as well as painting events in a strongly positive light when other photographers might do otherwise.
(2) Coding: I have several applications that use ArangoDB, a document/graph database that unfortunately, like most innovative databases of the 2010s, has a terrible license. I don't feel I can either commercialize or open-source these things, so I am switching them over to use the JSONB support in Postgres. I am building an adaptation layer that works like "python-arango"
https://docs.python-arango.com/en/main/
this is not a complete replacement because I'm not using many features like Pregel or Foxx and in some ways it is more functional because it supports primary keys being integers or uuids. Out of about 50 AQL queries I think there is just 1 that might be challenging to write in SQLAlchemy. It's interesting in that I am triangulating between the implementation being simple, being able to modify my applications if necessary, and also developing the API that I really want in the long term.
I’m writing an interpreter for the Starlark language in pure Kotlin (no JVM dependencies).
I had this idea in mind for some time already; it began with me wanting to build a simple programming language (and learn in the process) and an interest in Bazel. I got started about a month ago by going through the Crafting Interpreters book by Bob Nystrom (it's crazy good), but now I'm straying further and further away from it.
Overall I find the project a great mixture of fun and challenging.
It's a private repo for now because it's in a pretty rough state and is still missing a lot of stuff, but I will release it as OSS at some point. That said, if someone would like to get involved, it could be fun :)
An entirely open-source, peer-based accounting package intended for accountants and end users who like the way that certain popular accounting software used to be, with control placed in the user's hands whether they want to run it locally like a desktop app, cloud-based, or some combination thereof. (You can insert a few buzzwords like "blockchain" if that appeals to you.)
After spending a few years developing it for internal purposes, a customer decided to contribute a significant amount to its ongoing, open-source development, plus additional closed-source commercial add-on modules for their own use: making the accounting software useful for very specific industries.
Currently it's being piloted running payroll and profit/loss/balance sheets for a handful of small businesses, with the rest of the usual modules (accounts payable, invoicing, quoting, etc.) slated for release this year.
The technology stack is currently Python plus Preact, with a "serverless" architecture where each node maintains a replica of the data involved and can replicate it to other nodes. The user interface is either CLI or web-based, with an eye towards eventually replicating the desktop user experience of popular accounting packages of times past. We are taking a hard look at shifting from Python to Rust, simply because we rely heavily on third-party packages and Rust is where a lot of the active development in that space is going.
The most fun I had was finding a module written in Perl and Postscript, porting the Perl part to Python, and realising the existing Postscript was excellent and needed no improvement. (Our team now has more Postscript competency than we ever planned to have.)
If you're interested, see my profile for an email and put "HN" in the subject.
I am building https://www.videocrawl.dev/, an AI companion web application that enhances the video-watching and learning experience. Since I primarily learn from videos, I built this to improve my learning workflow. It is free to use.
It offers standard features like chatting with videos, summarization, FAQs, note-taking, and extracting sources mentioned in the video. We're taking it a step further by extracting relevant information from video frames and making it easily accessible.
I've been working (on and off) on part 2 of my blog post series on rigid body collisions [0] for over a year now. I burned out on it a few times and got super mega stuck on one particular section for months on end. I think I've finally broken through the worst of my writer's block, so there might be light at the end of the tunnel!
RankPic (https://www.rankpic.info) is an app to help users crowdsource their best photo. I've been building it over the past 3 years. It's grown into a lovely community of people who help each other pick their best pictures for dating apps, professional photos, etc.
I've seen some pretty fun novel use cases, such as (multiple!) people using it to pick out glasses, wedding invites & so on -- https://apps.apple.com/us/app/rankpic-photo-ranking/id160299... (ios) -- https://play.google.com/store/apps/details?id=app.rankpic.ra... (android)
I work on a LLM driven 3d bin packer: https://3dpack.ing
Still in the earliest R&D phase, but working on a multiplayer voxel game. I don’t intend to share it widely, just something for me and family/friends to play.
I mostly wanted an excuse to play with shaders and WebRTC, but I also like the idea of being a sort of “dungeon master” but instead of writing a campaign, I populate the world through procedural rules, and adjust the rules based on how we all end up playing as we go, adding things to stumble upon and keep it fresh in an organic way.
I've been working on extending Postgres to run on top of FoundationDB, effectively turning Postgres into a distributed database with all the modern features one would expect. Hoping to release an initial version for people to try out very soon!
Sure! So my prototype is implemented as a Postgres extension that hooks into transaction as well table and index storage and implements them on top of FoundationDB.
This makes Postgres itself stateless and all data storage and transaction processing is handled by FoundationDB, turning Postgres into a fully distributed database akin to CockroachDB and others.
This has a number of advantages:
- Simple horizontal scaling just by adding more nodes, including automatic sharding (no need for Citus or similar)
- Distributed and strictly serializable transactions across all your data
- Automatic replication for durability and read performance. No need to set up read replicas or configure your client to route queries to them.
- Built-in fault tolerance that can handle node failures and full zone outages. No need to configure replication and failovers.
- Multi-tenancy through FoundationDB's built-in tenants feature
All this while not just maintaining Postgres compatibility but actually being Postgres, so hopefully all the features/extensions you know and love will be supported.
I'm planning on publishing to this repo if you want to keep an eye on it: https://github.com/fabianlindfors/pgfdb. Likely won't publish any of the source at first but just some instructions for testing it out.
I'm building https://www.themoonlight.io/en with a small team of four. It's an AI-aided PDF reader, designed to make research papers more accessible :)
Lately, I've seen a growing interest in academic content from non-researchers, so I'm focusing on features like automatic summarization and LaTeX math explanation to lower the barrier. I hope this helps more people engage with research and ultimately push global innovation forward.
If anyone is interested, here's a quick 1 minute demo: https://www.youtube.com/watch?v=L1i8Yp_APbg Feedback is always welcome!
Blogging by email. The best way to blog.
Pagecord makes blogging so effortless you'll want to write more. Publish posts by sending an email (or use the Pagecord app). Your readers can follow you by RSS, or subscribe to your posts by email.
Share long-form posts or short stream-of-consciousness thoughts. Both look great!
Pagecord is independent, open source and built to last :)
What happens if you combine a full Dungeons & Dragons rules engine with an LLM to act as the dungeon master? Until the recent wave of reasoning models, the answer was mostly 'nothing coherent', but Claude 3.7 with thinking is very good at this! It will probably get rate limited with minimal traffic, but I have a demo up at https://lairsandllamas.com
Working part-time on Lungy, the breathing app I developed whilst working as a doctor during COVID - https://www.lungy.app/
Also, very interested in synthetic biology atm, I’m taking HTGAA - https://www.media.mit.edu/courses/htgaa/
Working on a tool to remember things I read.
You select some text on your phone and share it with my app; the shared text is then reformulated into a flashcard (with the help of an LLM).
You can then browse your flashcards in the app, but I’m also working on ways to show the cards to you with less friction: Like on the phone lock screen or on the face of your watch.
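The core of the share-to-flashcard step is just prompting a model to split a passage into a question and an answer. A minimal sketch where the LLM call is injected as a callable (the prompt and all names here are illustrative, not the app's actual code):

```python
from dataclasses import dataclass

@dataclass
class Flashcard:
    question: str
    answer: str

def make_flashcard(shared_text: str, ask_llm) -> Flashcard:
    """Turn a highlighted passage into a question/answer card.
    ask_llm is any callable taking a prompt and returning text,
    so the LLM backend stays swappable and testable."""
    prompt = (
        "Rewrite the following passage as a single flashcard.\n"
        "First line: a question testing its key idea.\n"
        "Second line: the answer.\n\n" + shared_text
    )
    question, answer = ask_llm(prompt).split("\n", 1)
    return Flashcard(question.strip(), answer.strip())
```

Injecting `ask_llm` also makes it cheap to unit-test the parsing with a stub instead of a real model.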
Two things:
1. A Haskell client library for Tigerbeetle db: https://github.com/agentultra/tigerbeetle-hs
2. smolstore, the smallest possible event sourcing database: https://github.com/agentultra/smolstore
I recently launched SpiesInDC, a Cold War history subscription service that delivers reproductions of historical documents and speeches, maps, photos, and even coins and stamps to your real mailbox. I have always loved reading about Cold War history, and I'm an avid stamp collector, so I combined the two together and people really seem to love it.
I'm adding zsh shell completion to my CLI framework ( https://github.com/bbkane/warg/tree/bbkane/the-flattening-2-... )!
I'm writing bigger CLIs with it now and I want to tab through their subcommands and flags, as well as allow customization - suggest values for the current flag based on previous flags' values.
It's been a lot of work (9 months of quite limited side project time)- I had to rearchitect significant parts of the parsing to keep more state around, and learn how I want to approach communication with zsh, but I just need to add some tests and an option or two more and it'll be good enough for most CLIs I wrote.
Oddly enough, I'm procrastinating actually finishing it. I've really enjoyed the "grind" for this feature, and I'm also taking the time to clean up the API if I think of better versions. Being able to noodle around with no pressure (except internal) to deliver keeps the joy in programming going for me.
But after this is done and integrated into my CLIs, I plan to take a left turn and try to add really good OTEL tracing and visualization to my CLIs. I think I can output OTEL traces to log files and then embed logdy.dev subcommands for really nice searching and visualization.
I'm working on the next big version of my timer app[0]. I'm adding a Watch companion app.
While doing that, I realized that I actually have some fundamental issues with my architecture.
When I find myself playing whack-a-mole with bugs (especially with a Watch app), then I know the fundamentals are suspect (the app is one that has been in the App Store for over a decade, in one form or another, so it does have some bitrot).
So I'm redoing the engine, and will probably substantially rewrite the app, itself.
I'm working on Cali Challenge. A calisthenics workout app with a daily challenge that gradually increases in difficulty. It guides you through the different progression levels of core calisthenics moves like push-ups, pull-ups, and dips.
Thanks! I'll pop you a DM on here when I've got it up on the app stores.
Working on getting my life back together.
I've been more or less sick for the most of Q1/25. Always some cold, coughing, sometimes worse. Went to work nevertheless most of the time (stupid, I know), because... I don't know. Guess I think work is more important because it gives me the feeling of being good at something and worth it.
Didn't take much care about eating healthily, keeping up my gym routine, or even getting enough sleep. Lots of other stress, too.
Not sure how much I can change from Q2/2025 on but I'll have to start optimistic. Some clouds are clearing up, some problems and issues are gone or being taken care of, can only go up from here.
All the best for everyone.
Currently, when my brain lets me, I am working on redoing my entire home network infrastructure. My network cupboard is a mess, and I've been slowly CADing and test-printing parts for an 8.5 inch server rack system. I've managed to get some nice side rails, and I'm currently doing test prints for a mount for a Lenovo M700 computer, which I like for lightweight server stuff (the one I have is currently running my Home Assistant and a couple of other Docker things, and I'm going to buy another to use as a NAS). I'm also slowly rewriting my website backend to pull projects off external drives, reformat them into blog pages, and automatically update my website to use them. I'm still at the early testing stage and have quite a way to go, but the basic parts are there.
The 8.5 inch racking system was inspired by Jeff Geerling's video about 10 inch racks, but shrunk even further to allow me to fit it on my 3D printer bed (8.6 inch square). I currently have parts for the top and bottom of the rack, as well as 1U and 2U expansions that you can slot together to make the rack as tall as you want. I'm also thinking about making a side attachment system so you can clip fans onto the side or similar. Once I have a working rack with several units in it, I'll probably end up publishing the parts and a writeup about it.
are you using https://zoo.dev/ to help create the cad files? they have a text to cad tool
No, I'm using OnShape to model it manually. If I can't work out how to model something, I probably won't be able to describe it to an LLM for it to attempt to convert to CAD
Want to build a platform to alleviate chronic suffering that can't be understood by one's local doctor.
Suggestions by this platform wouldn't interfere with treatment protocol straight away; it wouldn't ask the patient to stop medicines their doctor has prescribed, or itself prescribe scheduled drugs.
It will suggest complementary interventions. Case in point: anxiety, depression, brain degeneration & other related diseases - there's Rhonda Patrick's protocol of HIIT exercises to breach the blood-brain barrier & deliver positive effects; there's Dr Chris Palmer's method of looking at metabolism & mental health jointly & benefits of a keto diet to solve such issues.
Likewise, there can be suggestions from Yoga-Pranayama, where deep breathing can solve insomnia and hence other diseases downstream, such as hypertension in many cases.
After being on such complementary protocols, the patient's suffering will be reduced, but also the body will heal enough to an extent that their local doctor could reduce/stop medication.
The tech is in the platform, combing through the wisdom of all such complementary protocols for a start. If it gains traction, we could start involving experts and have the system route some queries specifically to them.
I have experience building the ML/LLM part. Anyone want to join me and build the full-stack part?
A modern SQL client for web devs. Deeply integrated with your workflow (vscode, drizzle, supabase, etc), a nicer more schema-aware GUI (think Airtable), and smarter ways to save queries and export/share them.
A lot planned, not much built. Just started so follow along if it sounds useful! Also see my prior thoughts on the topic here: https://news.ycombinator.com/item?id=41286912
Sqratch - https://github.com/jkcorrea/sqratch
I am working on Navi, an open source digital twin that helps you review the digital notes you've taken in the past week:
https://github.com/Melvillian/navi
Check out the README; it gives you a straightforward idea of how Navi works. It currently only works for Notion, but the idea is to make any notetaking tool (Obsidian, Evernote, Google Docs, etc.) ingestable by Navi.
Next steps are to make an SRS plugin, and to make a HTMX-based website so it's useable beyond just the CLI.
https://gitlab.com/actions3/actions3 - desktop application for displaying AWS metrics and CloudWatch Logs.
I'm doing it for my personal use, but maybe someone will find it useful too ;)
A JVM written in Go. [0] We're committed to a high-quality implementation that works reliably, so our test code is more than 2x the size of our production code. We can already run lots of classes, but the finished product won't be ready for alpha testing until (we hope!) the end of this year.
[0] jacobin.org
I'm still working on Habitat. It's a free and open source, self-hosted social platform for local communities. The plan is for it to be federated, but that's a while off yet.
I finally cracked ansible/docker-compose provisioning on Ubuntu and plan to expand that out to support Debian also. The groundwork is there. I can finally see an official release on the distant horizon; I just need to put in those quality-of-life features now, like the ability to delete your own account, change your email address, notifications on comments and all that stuff.
- The idea: https://carlnewton.github.io/posts/location-based-social-net...
- A build update and plan: https://carlnewton.github.io/posts/building-habitat/
- The repository: https://github.com/carlnewton/habitat
- The project board: https://github.com/users/carlnewton/projects/2
Would you recommend your tool to use with a single instance for a local community that won't be interested in federating? I mean self hosting by one person, likely via docker, exposing to a few hundred people, not federating at all, all data should be kept within the community, not public.
If yes, do you think it's already mature enough to give it a spin?
> all data should be kept within the community, not public.
I would recommend it for everything you said except for this part. Everything posted is publicly visible by design. I'm afraid I have no intention of changing that. You're free to fork it if you want though.
RubyExamples.com [1]
Got inspired by Go by Example [2] while learning Go. Realised there's nothing like that for Ruby and decided to build it.
Goal is to have simple URLs for one-click navigation, where each page covers a single topic briefly with examples, with relevant links and historic artifacts (for example, it links to the "Programming with Nothing" talk on the procs/lambdas page [3]).
It's still a work in progress and I'm not rushing it.
The examples are all my own. It's easy to do with AI, but I'm not going that way. I'm explaining things based on my own 12+ years of ruby experience.
Like Go by Example, this is also desktop only, but mobile-friendly is on the roadmap. The CSS will also change to be something like Rails Guides. Might also add a video for each topic explaining the code.
[1] - https://rubyexamples.com/ [2] - https://gobyexample.com/ [3] - https://rubyexamples.com/p_and_l
GitGuard (https://gitguard.dev)
Basically it’s a way to enforce custom policy rules on GitHub PRs without writing any scripts or maintaining any custom Actions. It’s got all of the customization that you wish GitHub’s built in branch protections had.
Things like “make sure the frontend tests pass if there are frontend code changes” or “require 2 approvals only if there are no test files added” are trivial to write and enforce.
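To give a feel for the kind of rules being described, here is a minimal Python sketch of that logic. This is purely illustrative — GitGuard's actual rule language and engine are not shown in this thread, and the path prefixes are hypothetical:

```python
# Hypothetical sketch of the two example rules above.
# Path conventions ("frontend/", "tests/") are assumptions for illustration.

def approvals_ok(added_files, approval_count):
    """Require 2 approvals only if there are no test files added."""
    added_tests = any(f.startswith("tests/") for f in added_files)
    required = 1 if added_tests else 2
    return approval_count >= required

def frontend_tests_required(changed_files):
    """Frontend tests must pass if there are frontend code changes."""
    return any(f.startswith("frontend/") for f in changed_files)
```

The appeal of a dedicated tool is that rules like these become declarative configuration instead of scripts you have to maintain yourself.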
I’ve been building https://lowlow.bot, it tracks price changes on any website. I was inspired by https://camelcamelcamel.com, but wanted something that worked for more than just Amazon.
It’s been handy for big purchases I’m ok waiting for and stocking up on recurring non-perishable essentials when they go on sale. It also lets me know when something has come back in stock.
I am working on a FOSS Google Tasks alternative for Android with unix-like aesthetics, taking some inspiration from Windows Mobile app designs. It's just an 84kb APK with a reproducible build system, and it's also downloadable from the IzzyOnDroid F-Droid repo: https://apt.izzysoft.de/fdroid/index/apk/io.github.ronynn.ka...
It's privacy-focused and does not connect to the internet; your notes stay on device, which you can back up and import into other devices. I am working on a quick QR share feature too.
My idea was that there are almost no Android apps with this kind of design vision, even though there are fans of this design out there, as can be seen in the trend of people modifying their Termux startup screens and the huge downloads of Windows Mobile styled launchers. So I want to make a design system that provides an open-source, go-to foundation for others to make apps with this design.
Source code: https://github.com/ronynn/karui
I'm continuing to update my already published ebooks. Apart from catching up to new software versions, it also helps to address typos and other issues found by my readers.
Last week I published a new version for my awk ebook (https://learnbyexample.github.io/cli-text-processing-awk-ann...) and today I'll start working on sed ebook.
I am continuing work on https://reliquary.se - a VPN for the hackers - based on my fully privilege separated and sandboxed VPN sanctum (https://sanctum.se).
It is shaping up nicely towards an actual 1.0 release in the near future, with fewer Keccak-based AEADs this time around. It was a fun experiment, but in the end I have yet to do any cryptanalysis on it or provide security proofs for it - neither of which I have time for at this point - so the swap to AES was expected on my end.
For fun I also added a fully e2e p2p voice chat client on top of this, as the sanctum protocol is now available as a library (https://github.com/jorisvink/libkyrka) - this voice chat works with one or multiple peers and is available at https://github.com/jorisvink/confessions.
Either way, I guess you can say I'm having a little bit too much fun with this.
I'm building pgflow, a Postgres-native workflow orchestration engine that keeps the entire DAG's state and orchestration logic inside the database, while a dedicated task queue worker (Edge Worker) handles execution.
This split minimizes external dependencies and makes it easy to manage complex pipelines without leaving PostgreSQL.
I started pgflow because I wanted a fully integrated, Supabase-based system (no separate servers!) for reliable, parallel workflows which keep state in postgres so I can trigger flows from triggers and stream their progress via Supabase Realtime.
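The core orchestration idea — keep the DAG's step states in one place and mark a step runnable only once all of its dependencies have completed — can be sketched in a few lines. This is an illustrative toy, not pgflow's API; in pgflow this bookkeeping lives in Postgres tables and triggers rather than Python dicts:

```python
# Toy DAG: step -> list of steps it depends on.
# "store" fans in from two parallel branches, like the workflows described.
deps = {
    "fetch": [],
    "parse": ["fetch"],
    "embed": ["fetch"],
    "store": ["parse", "embed"],
}

def runnable(done):
    """Steps not yet run whose dependencies have all completed."""
    return sorted(s for s, d in deps.items()
                  if s not in done and all(x in done for x in d))
```

Keeping this state transactional in the database is what lets flows be triggered from Postgres triggers and their progress streamed out, without a separate orchestration server.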
Started prototyping it in early November, released the serverless task queue worker in January, and I'm currently polishing the flow orchestrator pieces, with an alpha version coming in the upcoming weeks.
If you're curious:
- More on Twitter/X: @pgflow_dev (https://x.com/pgflow_dev)
- Edge Worker docs (will get flow orchestration docs included soon): https://pgflow.dev
Reddit updates:
- https://www.reddit.com/r/Supabase/comments/1jfrky2/huge_mile...
- https://www.reddit.com/r/Supabase/comments/1ij9jcl/introduci...
Happy to discuss or collaborate if anyone's interested!
Committed some heresy last week while testing OpenBSD 7.7-beta snapshots on an Apple M1 MacBook Air.
https://www.linkedin.com/posts/brynet_openbsd-activity-73074...
I'm working on DeskPal. The project still needs a lot of polishing, so it's in private for now. Here is a part of its README.

# DESKPAL

## What is DeskPal?

DeskPal stands for Desktop Companion - exactly as it sounds: a companion for your desktop.

It's a gadget that will:
- Display your media info (from Spotify, etc. - more sources to be added)
- Display time
- Connect to your phone and display notifications (TBD)
- Function as a StreamDeck-like device (TBD)

All of these features should work right out of the box, without installing any new apps on phone or computer:
- You can use it with your company laptop, or even with no computer at all
- An app may be needed to configure some settings (like StreamDeck macros - TBD)

## Hardware & Software

Inside DeskPal is an ESP32S3 MCU that supports both WiFi and BLE. The current configuration has 16MB of Flash memory and 8MB of SRAM.
DeskPal runs on Zephyr RTOS, an RTOS that is heavily supported by many vendors. Using Zephyr allows us to leverage a vast ecosystem of hardware drivers and enables easy porting of this project to other MCUs if needed.
DeskPal's software architecture allows adding dynamic apps so that we can install or remove functionality without firmware updates.
I'm working on a small project to bring plain old telephones back into our children's lives. Screen-less connections with grandparents, cousins, and friends are as quick as picking up the phone and pressing one button. Imagine that.
Also consider the elder spectrum. As my father grows older with eye and coordination issues, he cannot handle the intricacies of a smartphone.
A GNSS receiver written in Rust. It takes as input IQ data from an rtl-sdr device (or equivalent). It decodes the GPS L1 C/A signal, but I hope to augment it so that it handles Galileo and the other constellations.
It's still a work in progress: https://github.com/mx4/gnss-rcv/
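The heart of acquisition in any GNSS receiver is correlating the received samples against a local replica of a satellite's spreading code and looking for a peak at the code phase. Here is a toy Python/NumPy sketch of that idea (the real project is in Rust, and a real receiver uses the 1023-chip C/A Gold codes; a random ±1 sequence stands in for one here):

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)   # stand-in for a C/A code

true_shift = 350                            # unknown code phase to recover
received = np.roll(code, true_shift) + 0.5 * rng.standard_normal(1023)

# Circular cross-correlation via FFT, as real acquisition engines do:
# the lag with the largest correlation is the estimated code phase.
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
estimated_shift = int(np.argmax(corr))
```

The same FFT-based search, extended with a sweep over Doppler bins, is the standard parallel code-phase acquisition used for GPS L1 C/A.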
We’re building AI PSY HELP – an AI-powered mental health assistant offering 24/7 anonymous support via voice and text, without appointments or waiting. It’s used by 100,000+ people in Ukraine, including veterans, teens, and first responders.
The AI is trained on 40,000+ hours of real psychotherapy sessions and provides individualized emotional guidance to help users manage stress, anxiety, and trauma. We partner with public institutions to deliver large-scale support and just launched a B2B program for employers.
Now preparing for EU expansion (starting with Germany), mobile app rollout, and voice interaction in Ukrainian. This is not just a chatbot – it’s scalable mental health infrastructure.
→ https://ai.psyhelp.info → https://chat.psyhelp.info → https://chat.dev.psyhelp.info (+voice)
How did you get people to agree to training a chatbot on their sessions? That strikes me as extremely intimate text. Is it a "it's in the T&Cs" deal, or did you seek a separate opt-in?
I'm asking because the answer will shed light on the level of privacy "the average consumer" is comfortable with.
Great question, and I fully agree — privacy in mental health is sacred.
We don’t train on user chats directly. Instead, we collaborate with a team of 42 certified psychologists who work with us to curate anonymized case structures, decision trees, and response strategies based on real but depersonalized therapeutic experience.
These professionals help us model how psychological support is provided — without ever using actual user conversations. Our system is trained on synthesized, anonymized session data that reflects best practices, not private logs.
It’s not buried in the T&Cs — we’re very explicit about our commitment to data ethics and user safety. No session data is used for model training, and user interaction is fully confidential and never stored in a way that links it to identities.
Our goal is to make high-quality support available without compromising trust. Let me know if you’d like more technical or ethical detail — happy to share!
We're building https://tripjam.com after being frustrated with planning trips in regular group chats and trying to organize information across multiple apps. It combines:
- Group chat that keeps all travel discussions in one thread
- Interactive maps where everyone can pin locations and add notes
- Collaborative itineraries that sync with your calendar
- AI travel assistant that suggests activities and helps optimize your plans
I couldn't find your Privacy Policy so I didn't register. Did I miss it?
Should be linked at the bottom of our landing page. Thanks for taking a look!
Got it, thanks. I appreciate no data is shared with third parties. Good on you.
I've used Notational Velocity for years, but couldn't get a working binary on the latest macOS/Apple silicon, and the newer variants I found seem to miss the mark on what made Notational Velocity great (imo).
So I'm working on an electron version[1] that has what I remember of the core UX. I wasn't the best user of NV – I'm sure it had features that I didn't use. If there are features that it had that you used, I'd certainly like to be aware of them.
I'm writing a fictional spy thriller book that takes place in the early 1990s in two countries. I'm about 100 pages in, and expect the end product to be about 200 pages long. Not sure how I'm going to publish it yet; suggestions welcome!
Here's a synopsis of the plot, redacted since I've already revealed too much :)
-----, a college student from ----- majoring in -----, graduates from university and is recruited to work for a mysterious company that has links to -----. Initially hired as a translator, his talent with electronics get noticed quickly and his superiors begin training him for a covert overseas operation in which he will visit ----- as an exchange student while really serving as a spy.
With a soft spot for ----- culture, he is excited to visit ----- for the first time. Although he is fully aware he could be killed or imprisoned there, his confidence in his ----- language skills, plus a bit of youthful naiveté, make him jump at the chance. As he carries out his mission in -----, he uncovers a tangled web of family secrets.
- Semi-automated job finder for a job site focused on non-mainstream programming languages [1]
- Eiffel-inspired programming language [2]
It's hard to juggle between the two.
[1] https://beyond-tabs.com [2] https://github.com/andreamancuso/rivar-lang
Very nice on both! I've long had an affinity for non-mainstream programming languages (mostly those with Wirth lineage). I wish there were more projects that used them.
We're building a repairable and fireproof e-bike battery! https://gouach.com
I'm learning FreeCAD and 3D printing. My goals are to be able to print involute gears, and eventually various mechanical arithmetic devices, leading up to a mechanical version of my BitGrid. Also, I want to make an Armstrong shaper.
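The involute tooth flank that gear work depends on has a simple closed form: for base radius r_b and roll angle t, the curve is x = r_b(cos t + t sin t), y = r_b(sin t − t cos t). A small Python sketch of sampling one flank (points like these can then be imported into a CAD sketch; the function name and parameters are just for illustration):

```python
import math

def involute_points(base_radius, steps=20, max_angle=0.8):
    """Sample points along one involute flank, starting on the base circle."""
    pts = []
    for i in range(steps):
        t = max_angle * i / (steps - 1)
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts
```

Mirroring the flank and repeating it around the pitch circle gives the full gear profile.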
A friend gifted me a large box of semiconductors, and I'll be testing 7400 and 4000 series chips for the next week once my T48 EPROM Burner/IC tester shows up.
I'm working on a geography guessing game. This is mostly in order to learn the technologies that the team I manage is working on, as I don't get as much hands-on time as I'd like at work. But it's a fun game and it's improving at a rapid clip. I'd love your feedback.
Really fun, I love playing GeoGuessr, but really like the idea of using videos.
Some notes on the UI:
* I found the 50/50 split between video / map a bit annoying, especially on a 13" MacBook
* The volume slider takes up a lot of valuable space, and felt like your normal scrubber to scroll through the video
* Once you confirm a location, the whole UI changes again
* Overall (especially on my smaller screen), there was a lot of scrolling involved to get to buttons
I’ve been building Velty[1] over the past year as a tool to tame my ever-growing YouTube subscriptions. It’s a web app (PWA) that lets you organize your YouTube channels and videos into folders (with sub-folders), and view all the latest videos in one chronological feed with powerful filtering/sorting.
I built it because I was frustrated with how hard it was to manage a large number of channel subscriptions using YouTube’s default interface.
This is great! It's very close to something I desperately want to exist: a way to peruse the YouTube videos my friends are watching.
My algo often gets bad, and the best recommendations for YouTube videos come from friends' suggestions. And sometimes I just want to be recommended something totally different than what my algo would know about.
Would it be possible to add that feature?
I've got a couple different things I've been hacking on on and off over the last few months.
The one that's furthest along is a database and (currently extremely crude) webapp for asking interesting questions about Lotus setlist data. I built a little scraper for Nugs and have all their setlists; I just need to take it further, get some of the queries I want implemented, and put some kind of halfway-decent interface in front.
I also built a little app that uses your Claude API key to generate "generative art," so you send in a prompt and it sends back some visualization code and renders it. It's fun to mess with but I haven't seen anything come out that's wowed me yet.
Got some other little hackeries going too, a lot of my recent hacking time has gone towards getting the -arr apps and their whole little ecosystem set up on my home server. I got a little N100 machine back in December and have been having tons of fun hosting little docker gewgaws.
A DNS zone management tool, made for having decent interface when using coredns as authoritative DNS: https://github.com/holysoles/zoneforge
Also considering working on a traefik plugin + helm chart for sending LLMs that ignore robots.txt to a tarpit like iocaine/nepenthes
I have a startup idea and I’m curious what others think.
Setting up AI agents is way too complicated. I am constantly being sent to GitHub pages with installation instructions that require way too many dependencies, API connections, and more. We’re talking hours of setup and config.
So what if there was an open-source marketplace where you could just search, find an agent, click deploy, get launched into an already configured agent, and just have it do its thing? Essentially a marketplace: discoverability, automated deployment infrastructure, and an interface to manage your agents.
I’d also probably create some kind of open-source solution, probably a custom Docker container, so developers can easily build agents and wrap them in a container and upload them for deployment.
Thoughts? Does anything like this already exist?
P.S. No, I don’t want to build or use another crappy AI agent builder. I want to deploy open-source agents already built by actual developers.
Currently rewiring some of my home's electricity so I can monitor my PV power production locally (without the shitty built-in Chinese cloud garbage with hardcoded wifi passwords).
Using the Shelly Pro EM for energy monitoring (it has 2 CT clamps, one is going on the PV output, the other on the grid input).
The data will be collected in Home Assistant on a HA Green device. Additionally, we have "smart" electricity meters here; these have a port which can be used for fine-grained power & gas monitoring, and it should be possible to integrate that into Home Assistant as well.
It's not anything particularly challenging, it's mostly refactoring my electrical distribution board to make room for the Shelly device, routing ethernet cables, and installing some power sockets and a network switch to tie everything together.
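For anyone curious about that meter port: on many European meters it's a DSMR-style P1 port that emits plain-text telegrams of OBIS-coded lines (I'm assuming a DSMR meter here; formats vary by country). A minimal Python sketch of parsing one such line:

```python
import re

# One telegram line looks like: 1-0:1.8.1(012345.678*kWh)
# i.e. an OBIS code, then a value and optional unit in parentheses.
LINE_RE = re.compile(r"^([0-9]-[0-9]:[0-9.]+)\(([\d.]+)\*?(\w*)\)$")

def parse_p1_line(line):
    """Return (obis_code, value, unit) from one telegram line, or None."""
    m = LINE_RE.match(line.strip())
    if not m:
        return None
    code, value, unit = m.groups()
    return code, float(value), unit
```

Home Assistant's built-in DSMR integration does this for you over serial, so in practice you'd only write something like this for custom logging.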
3D Drone Wargame.
I'm developing a wargame-like 3D simulator designed to train AI drones into elite stealth pilots. By integrating reinforcement learning techniques and utilizing real-life local landscape data, the simulator offers highly realistic mission scenarios.
I've been working since November on an integration between a Quest VR app called Fluid, and macOS and Windows hosts. The app itself is called Fluid Link, launched just after New Year's, and has been rolled into official Fluid offerings. It supports full desktop and individual window streaming into the Quest app, shared keyboard and mouse control, and unlike competitor apps also supports multiple hosts and cross-platform clipboard sync.
Fluid's website is https://fluid.so and Fluid Link is available for download at https://fluid.so/fluid-link
Fluid is currently free (until tomorrow?) and Fluid Link is free to try up to 15 minutes at a time, with no restrictions on functionality. There's a discord server and in-app support chat for support questions, and videos demonstrating Installation and usage on YouTube.
I'm just drawing stuff, including a comic book about a future run by horrible AIs that find they get the best response from humans when they present as horrible, unctuous clowns.
If you have lots of money to burn and want to support a queer artist in the Gulf South, I have a Patreon.
Finishing up my PhD thesis on low-resource audio classification for ecoacoustics. Our partners deployed 98 recorders in remote Arctic/sub-Arctic regions, collecting a massive (~19.5 years) dataset to monitor wildlife and human noise.
Labeled data is the bottleneck, so my work focuses on getting good results with less data. Key parts:
- Created EDANSA [1], the first public dataset of its kind from these areas, using an improved active learning method (ensemble disagreement) to efficiently find rare sounds.
- Explored other low-resource ML: transfer learning, data valuation (using Shapley values), cross-modal learning (using satellite weather data to train audio models), and testing the reasoning abilities of MLLMs on audio (spoiler: they struggle!).
[1] https://scholar.google.com/citations?user=AH-sLEkAAAAJ&hl=en
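The ensemble-disagreement idea mentioned above can be sketched generically: score unlabeled clips by how much an ensemble's predicted probabilities disagree, then send the highest-disagreement clips to annotators. This toy Python version shows the generic technique only, not the paper's exact method:

```python
import numpy as np

def select_for_labeling(ensemble_probs, k):
    """ensemble_probs: (n_models, n_samples) predicted P(event present).
    Returns indices of the k samples the ensemble disagrees on most."""
    disagreement = ensemble_probs.var(axis=0)   # variance across models
    return np.argsort(disagreement)[::-1][:k].tolist()

# Three models scoring four audio clips: they agree on clips 0, 1, and 3
# but split badly on clip 2, so clip 2 is the most informative to label.
probs = np.array([
    [0.9, 0.1, 0.5, 0.8],
    [0.9, 0.2, 0.1, 0.8],
    [0.8, 0.1, 0.9, 0.7],
])
picked = select_for_labeling(probs, 1)
```

Spending annotation budget only where models disagree is what makes this efficient for rare sound events in huge recording archives.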