Show HN: WhatsApp MCP Server (github.com)

Posted by lharries 3/31/2025
Hi HN – I built an open-source, self-hosted Model Context Protocol (MCP) server for WhatsApp: https://github.com/lharries/whatsapp-mcp

It connects to your personal WhatsApp account via the WhatsApp Web multi-device API (using whatsmeow from the Beeper team), and doesn't rely on third-party APIs. All messages are stored locally in SQLite. Nothing is sent to the cloud unless you explicitly allow your LLM to access the data via tools – so you maintain full control and privacy.

The MCP server can:

- Search your messages, contacts, and groups

- Send WhatsApp messages to individuals or groups
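
For a sense of how these tools are wired up, here's a minimal sketch of what a search tool can look like with the official MCP Python SDK (FastMCP). The SQLite table and column names here are simplified for illustration and may differ from the actual schema:

    # Hedged sketch: the schema below is illustrative, not the real layout.
    import sqlite3

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("whatsapp")

    @mcp.tool()
    def search_messages(query: str, limit: int = 20) -> list[dict]:
        """Search locally stored WhatsApp messages by text content."""
        con = sqlite3.connect("messages.db")  # local store; nothing leaves the machine
        rows = con.execute(
            "SELECT sender, timestamp, content FROM messages "
            "WHERE content LIKE ? ORDER BY timestamp DESC LIMIT ?",
            (f"%{query}%", limit),
        ).fetchall()
        con.close()
        return [{"sender": s, "timestamp": t, "content": c} for s, t, c in rows]

    if __name__ == "__main__":
        mcp.run(transport="stdio")  # stdio transport for local MCP clients

The LLM only sees what these tools return, which is what keeps the data under your control.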

Why build this?

99% of your life is stored in WhatsApp. By connecting an LLM to WhatsApp, you give it all of that context. And your AI agent can execute tasks on your behalf by sending messages.

229 points | 138 comments
hansmayer 3/31/2025|
> 99% of your life is stored in WhatsApp

That's quite a presumption to make.

1oooqooq 3/31/2025|
That's a low estimate if you live in India or South America.
echoangle 3/31/2025||
As others have already said, think about what you're doing when you use this.

If you connect a non-self-hosted LLM to this, you're effectively uploading chat messages with other people to a third-party server. The people you chat with have an expectation of privacy, so this would probably be illegal in many jurisdictions.

piltdownman 3/31/2025||
Except basically all of Europe is one-party consent, and things like tech support call centres have been doing variants of this for years.
echoangle 3/31/2025||
One-party consent only means you can legally record something; it doesn't necessarily mean that you're allowed to share it with (non-government) third parties later.

It could be legal to record and use as evidence in court later, but that doesn't mean you're allowed to share it with some AI company.

piltdownman 3/31/2025||
Their TOS covers utilisation of the data under 'Quality and Training purposes', with consent implied by engagement with the service in question - the breadth and application of which has never had a test case, to my knowledge.
dvrp 3/31/2025|||
Your information is gone the moment you utter words. I can also copy and paste the messages people send me.
echoangle 3/31/2025||
> I can also copy and paste the messages people send me.

Sure you can, but people can sue you if you paste it into something public. I don't know if you're making some deep philosophical comment, but this is something people have been sued for, and lost, before.

idiotsecant 3/31/2025|||
I would argue that there is no expectation of privacy for messaging apps without end-to-end encryption. There is always a man in the middle listening.
echoangle 3/31/2025|||
Legally, there absolutely is, because by law, the messaging app operator also can't just publish the stuff you write in a chat. Even a disclaimer in the terms of service probably wouldn't work if people generally assume the chat to be private.

And it also doesn't even matter because WhatsApp claims to be E2E-encrypted.

trelbutate 3/31/2025||||
WhatsApp has end-to-end encryption
joolss 3/31/2025||
Yes, but it also has a back door so it is of no use.
miroljub 3/31/2025||||
Meta claims WhatsApp is end-to-end encrypted.

It's up to you to trust Meta or not, but people who trust them do have an expectation of privacy.

hombre_fatal 3/31/2025|||
That's irrelevant here because the OP is running the LLM on one of the ends, so it's decrypted the same as when you're reading the chat convo yourself.

It also misses the mark because you're talking about an eavesdropper intercepting messages and the OP is the receiver sharing the messages with a third party themself.

greenavocado 3/31/2025|||
> The people you chat with have an expectation of privacy, so this would probably be illegal in many jurisdictions.

Name one

echoangle 3/31/2025||
Germany.

You have an "allgemeines Persönlichkeitsrecht" (general personality right) that prevents other people from publishing information that's supposed to be private.

Here's a case where someone published a Facebook DM, for example:

https://openjur.de/u/636287.html

virgilp 3/31/2025|||
How would this stand up to the "I didn't do it, I probably got hacked!" defense? It's one thing to publish a personal conversation, and another to have your conversations aggregated by some LLM (and if they leak plain text, the "hacked" defense is even more plausible).
echoangle 3/31/2025||
That’s a separate issue. You might not be able to prove it as the victim, but that doesn’t make it legal.
virgilp 3/31/2025||
I would say it's a gray area at best/worst. I think the goal of the law is that you shouldn't, e.g., take a screenshot of a message someone sent you in confidence/in private and use it to make fun of them or shame them on a public forum (or whatever else - but a "targeted action").

This scenario, however, is "I take my personal data and run it through tools to make my life easier" (heck, even backup could fit the bill here). If I'm allowed to do that... am I allowed to do that only with tools that are perfectly secure? Can I send data to the cloud? (subcases: I own the cloud service & hardware/it's a Nextcloud instance; I own it, but it's very poorly secured; Proton owns it and their terms of use promise to not disclose it; OpenAI owns it and their terms of use say they can make use of my data)

echoangle 3/31/2025||
As a non-lawyer:

> am I allowed to do that only with tools that are perfectly secure?

No, actual security doesn't matter at all, but you have to reasonably believe that they are secure.

> Can I send data to the cloud?

Yes, if you can expect the data to stay private

> (subcases: I own the cloud service & hardware/it's a Nextcloud instance;

Yes

> I own it, but it's very poorly secured;

No

> Proton owns it and their terms of use promise to not disclose it;

Yes, if Proton is generally considered trustworthy.

> OpenAI owns it and their terms of use say they can make use of my data)

No

virgilp 4/1/2025||
Your thesis implies that before using my data I am compelled by law to know the terms of use very well; I think the opposite has happened in practice. Especially in Europe, the trend is to say that lengthy TOS don't mean companies can do whatever they want: just because the end-user clicked "I agree" doesn't automatically make them responsible, in the eyes of the law, for knowing and understanding all implications of the TOS. That's an undue burden.

I guess you can argue "I should've known that OpenAI will use my conversations if I send them to ChatGPT", but I'm not convinced it'd be crystal clear in court that I'm liable. Like I said... I think until actually litigated, this is very much a gray area.

P.S. The distinction you make between a "properly secured" and an "improperly secured" Nextcloud instance would, again, be a legal nightmare. I guess there could be an example of "criminal negligence" in extreme cases, but given that companies get hacked all the time (more often than not with relatively minor consequences), and even Troy Hunt was hacked (https://www.troyhunt.com/a-sneaky-phish-just-grabbed-my-mail...), I have a hard time believing the average Joe would face legal consequences for failing to secure their own Nextcloud instance.

greenavocado 3/31/2025||||
So here's the deal with German law on this topic - there's actually a big difference between sharing someone's DM and running LLM tools on social media conversations. The OLG Hamburg case from 2013 (case number 7 W 5/13) establishes that publishing private messages without permission violates your personality rights ("allgemeines Persönlichkeitsrecht").

While we don't have specific LLM court rulings yet, German data protection authorities have been addressing AI technologies under GDPR principles. The Bavarian Data Protection Authority (BayLDA) and the Hamburg Commissioner for Data Protection have both issued opinions that automated AI processing of personal communications requires an explicit legal basis under Article 6 GDPR, unlike simple sharing, which falls under personality rights law. The German Federal Commissioner for Data Protection (BfDI) has indicated that LLM processing would likely be evaluated based on purpose limitation, data minimization, and transparency requirements.

In practice, this means LLM tools could legally process conversations if they implement proper anonymization techniques, provide clear user notices, and follow purpose limitations - conditions not required for the simpler act of sharing a message. The German courts distinguish between publishing content (governed by personality rights) and processing data (governed by data protection law), creating different standards for each activity. While the BGH (Federal Court) hasn't ruled specifically on LLMs, their decisions on automated data processing indicate they would likely allow such processing with appropriate safeguards, whereas unauthorized DM sharing remains almost always prohibited under personality rights jurisprudence, regardless of technical implementation.
echoangle 3/31/2025||
It sounds like you agree with me that the posted tool would not be legal to use in Germany then? Or am I misreading this comment?

Your initial "name one" comment sounded like you didn't believe there would be a jurisdiction where it is illegal.

greenavocado 3/31/2025|||
The so-called expectation of privacy is irrelevant in this context
echoangle 3/31/2025||
But it would still be illegal to use? Does the exact mechanism matter?
greenavocado 4/2/2025||
> But it would still be illegal to use?

Nope

jeroenhd 3/31/2025|||
That case describes publishing this to the public internet. I don't believe the same would apply when using a tool like this.

My family members all back up our conversations to Google Drive, I doubt WhatsApp would provide that feature if it were illegal.

echoangle 3/31/2025||
Well it would depend on which LLM you use and what their terms are.

But if they use your input as training data, that would probably be enough.

jeroenhd 3/31/2025||
We'll have to see. Tools like these are already common on platforms like LinkedIn, so if it's legally questionable I expect the courts to cover it soon enough.

My German isn't good enough to read the original text about this case, but if the sentiment behind https://natlawreview.com/article/data-mining-ai-systems-trai... is correct, I wouldn't be surprised if this would also fall under some kind of legal exception.

The biggest problem, of course, is that regardless of legality, this software will probably be used (and probably already is being used), because it's almost impossible for the remote party to prove or disprove its use.

echoangle 3/31/2025||
> My German isn't good enough to read the original text about this case, but if the sentiment behind https://natlawreview.com/article/data-mining-ai-systems-trai... is correct, I wouldn't be surprised if this would also fall under some kind of legal exception.

That's something completely different. One is about copyright of stuff that was shared publicly, while the other is about sharing private communications, violating personality rights (not copyright).

But of course, we'll have to see, I'm not a lawyer either.

tuananh 3/31/2025||
removed.

my bad.

cirego 3/31/2025||
I believe echoangle’s concern is about the security and privacy of the LLM using the data, not the MCP server itself.
tuananh 3/31/2025||
ah right. my bad.
cirego 3/31/2025||
Sorry, I should have added my second thought. Your original comment about isolating MCP servers is also good!

These are tools where the AI may tell you it's doing one thing and then accidentally do another (I had an LLM tell me it would make a directory using mkdir, but it then called the shell command kdir, which thankfully didn't exist). Sandboxing MCP servers is also important!
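
As a toy illustration of that sandboxing idea (my own sketch, not part of the posted project): an MCP tool that shells out could validate commands against an allowlist before executing anything, which catches both typos like kdir and genuinely dangerous calls:

    import shlex
    import subprocess

    # Hypothetical allowlist; a real server would scope this per tool.
    ALLOWED_COMMANDS = {"mkdir", "ls", "cat"}

    def run_tool_command(cmd: str) -> str:
        """Execute a shell command only if its binary is on the allowlist."""
        argv = shlex.split(cmd)
        if not argv or argv[0] not in ALLOWED_COMMANDS:
            raise PermissionError(f"command not allowed: {argv[:1]}")
        result = subprocess.run(argv, capture_output=True, text=True, check=True)
        return result.stdout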

ahstilde 3/31/2025||
Could I use this to create group chat summaries?
lharries 3/31/2025|
Yep!
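For example, a rough sketch (assuming a simplified messages table with chat_jid, sender, content, and timestamp columns; the real schema may differ): fetch the group's recent messages from the local SQLite store and let the connected LLM summarize the transcript:

    import sqlite3

    def build_summary_prompt(db_path: str, group_jid: str, n: int = 100) -> str:
        """Build a summarization prompt from a group's most recent messages."""
        con = sqlite3.connect(db_path)
        rows = con.execute(
            "SELECT sender, content FROM messages "
            "WHERE chat_jid = ? ORDER BY timestamp DESC LIMIT ?",
            (group_jid, n),
        ).fetchall()
        con.close()
        # Oldest first, so the LLM reads the conversation in order
        transcript = "\n".join(f"{s}: {c}" for s, c in reversed(rows))
        return "Summarize the key topics and decisions in this group chat:\n\n" + transcript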
deutschepost 3/31/2025||
It is crazy how out of touch people on this platform can be. I live in Europe but have been to Asia and South America. People use WhatsApp everywhere. Just because you live in North America, where everyone uses SMS/iMessage/whatever, doesn't mean everyone does. I can remember my parents scolding me because they got charged for me receiving some SMS. WhatsApp was a gamechanger. You could send messages or pictures without having to think about the price of it (while being connected to a WLAN...). So at some point no one used SMS anymore. iMessage was out of the question too, because only a very small number of people had iPhones. And everyone was scared of sending a message because you wouldn't know whether it would cost you or not. But everyone had WhatsApp.

For some people it is a requirement to have a social life. It is not your choice to use it or not. Network effects are taking care of that. If you think Signal or whatever is a better choice, good on you. But if you don't want to cut ties with some of your friends, prepare to use multiple apps. Including WhatsApp.

jascha_eng 3/31/2025||
I live in Europe, and my social life is spread out between Telegram, WhatsApp, Signal, and Discord. For professional life you could even include LinkedIn and Slack. A single MCP server won't ever cut it for me, and adding six for six different tools will confuse any LLM and make it send the message to the wrong person. That's completely ignoring the fact that I wouldn't use this anyway... like, people want to talk to ME, not an LLM. WhatsApp has a chat with Llama now anyway, if that's who they want to talk to.
hemlock4593 3/31/2025|||
> WhatsApp was a gamechanger. You could send messages or pictures without having to think about the price of it (while being connected to a WLAN...)

Back when data plans were around 1 GB or less, some network providers in Europe didn't charge you for using WhatsApp on specific plans. There were also WhatsApp-branded SIM cards, but I haven't seen them in a long time.

_def 3/31/2025||
If I find out someone pipes my chat messages into an LLM, I will not converse with that person anymore.
sbkg0002 3/31/2025||
Since Meta is a big AI investor, I suggest you skip WhatsApp altogether.
Aachen 3/31/2025||
You think they (plan to) decrypt messages and then upload them again in plain text to a server?

Since on-device processing would be neither as objectionable nor able to run a very large model.

I don't use WhatsApp myself because of who runs it, and there are plenty of better options out there, so I certainly agree with the sentiment of steering clear, but this claim does seem pretty far out there.

worldsavior 3/31/2025||
They don't plan it because they have no use for it. They only care about the metadata: when you talked to this person; whether it's your wife; at what time of day; was it at night; how long the message is; whether a product was mentioned in the message; whether the message was about sports; etc.
berkes 3/31/2025||
They don't plan it, because so far, they don't have the keys to do so.

We do need to trust Meta that they really don't, to some extent, but people way smarter than me have researched the WA implementation of the Signal protocol and it seems solid. I.e., Meta appears to be simply unable to read what you chat and send (but to be clear: they do see with whom and when you do this, just not the contents).

yonatan8070 3/31/2025|||
What prevents them from simply pushing an update that quietly uploads private keys or unencrypted messages to their servers?

Presumably they use proper HTTPS, so all the data is essentially encrypted twice. If they just concatenated some packets with keys, it would be extremely difficult to detect, as you'd need to decrypt HTTPS (which is possible if you can install your own certificates on a device), then dig through random message data to find a random value you don't even know.

berkes 4/1/2025|||
At least on Android it's possible to dissect an app. You won't get the original Java code, but static analysis is possible. And indeed, it's possible to capture its network traffic and often even decrypt that traffic (with root access to the device). Now, I or you may not research at this level, but someone looking into whether they may use WhatsApp to discuss attack plans on, say, Yemen, might find such weaknesses.

People find exploits in proprietary code, or even SaaS (where researchers cannot even access the software) every day.

People at Meta might leak this information too.

"Information wants to be free"

My point is: the risk of this becoming known is real.

Aachen 4/1/2025|||
> What prevents them from simply pushing an update that quietly uploads private keys or unencrypted messages to their servers?

Reputation

Or what's the translation of "bank run", but generic for any service? "Leegloop" in Dutch. The translator gives only nonsense. Going the descriptive route: many people would leave because of the tarnished reputation.

The trick is to have Facebook continue to believe that this reputation/trust is more valuable than reading the messages of those who stay behind. That can partially be done by having realistic alternatives for people to switch to, so that there is no reason to stay when trust is broken. Which kinda means pre-emptively switching (at least to build up a decent network effect elsewhere), which is what I've chosen to do and encourage anyone else to do. But I'm not a conspiracy theorist who thinks that, at the present time, they'll try to roll out such an update in secret, at least not to everyone at once (intelligence agencies might send NSLs with specific targets).

worldsavior 3/31/2025|||
They don't have the keys, but they probably can get them.
berkes 4/1/2025||
That's a strong accusation.

The only way I can think of is by pushing an update that grabs all your keys and pushes them to their servers.

Otherwise, it's a pretty decent setup (if I am to believe Moxie, which I do).

tobyhinloopen 3/31/2025|||
That's a completely reasonable boundary. Privacy and consent are critical, especially when sharing personal messages or conversations. It's fair to expect that your interactions remain private unless you've explicitly agreed otherwise. If you'd like, you can communicate your stance clearly to others in advance, ensuring they're aware of your boundaries regarding the use of your messages with AI tools or other external resources.
Aachen 3/31/2025|||
I understand why one would think it's funny to feed the parent comment into an LLM, but please at least label it when you echo such output on the site.
siva7 3/31/2025|||
I don't think their main concern was the privacy aspect.
stavros 3/31/2025||
What do you think their concern was? I can't see any other issues someone might have.
Aachen 3/31/2025||
Energy usage is another. What would happen to world power consumption if 1% of WhatsApp chats were fed to ChatGPT?

A third reason besides privacy would be the purpose. Is the purpose generating automatic replies? Or automatic summaries because the recipient can't be bothered to read what I wrote? That would be a dick move and a good reason to object as well, in my opinion

stavros 3/31/2025||
> What would happen to world power consumption if 1% of WhatsApp chats were fed to ChatGPT?

The same thing that happens now, when 100% of power consumption is fed to other purposes. What's the problem with that?

Aachen 3/31/2025||
Huh? It's additional power draw in the midst of an energy transition. It's not currently being used differently. What do you mean what's the problem with that?

Also don't forget it's just one of three aspects I can think of off the top of my head. This isn't the only issue with LLMs...

Edit: while typing this reply, I remembered a fourth: I've seen many people object morally/ethically to the training method, in terms of taking other people's work for free and replicating it. I don't know where I stand on that one myself yet (it is awfully similar to a human learning and replicating creatively, but clearly on an inhuman scale, so idk), but that's yet another possible reason not to want this.

stavros 3/31/2025|||
If people need additional power, they pay for it. If they want to pay for extra power, why would we gatekeep whether their need is legitimate or not?
Aachen 4/1/2025||
Because of the aforementioned shortage. Paying for more power means coal and gas get spun up, since there aren't enough renewables, and the externalities aren't being paid for by those people.

I'm also happy to have them pay for the full cleanup cost rather than discourage useless consumption, but somehow people don't seem to think crazy energy prices are a great idea either

Also you're still very focused on this one specific issue rather than looking at the bigger picture. Not sure if the conversation is going anywhere like this

stavros 4/1/2025||
What's the bigger picture? You said "power usage", "to what purpose?" (you kind of don't get a say in whether I use an LLM to reply to you, though you're free to stop talking to me), and "objections to the training method", which doesn't really seem relevant to the use case but is more of a general objection to LLMs.
IanCal 3/31/2025|||
Where do you draw the line? LLMs for searching, BM25 for searching, exact match only, no processing at all (forbid WhatsApp search, make them scroll)?
hackernewsdhsu 3/31/2025|||
Funny that people freak out about a local LLM while using Facebook products. They're probably the same types who use it to do their work.
bilekas 3/31/2025|||
> They're probably the same types who use it to do their work.

Citation needed.

It's a local LLM with access to an extraordinary amount of personal data. In the EU, at least, that personal data is supposed to be handled with care. I don't see people freaking out, but simply pointing out the leap of handing it over to ANOTHER company.

berkes 3/31/2025|||
Not all Meta products are alike. WA has E2E encryption and has had it for a long time. It's the same protocol as Signal: in fact, it was built for/in WA by Moxie/Signal a while ago.

That doesn't make the metadata private. Meta can use that as they want. But not the contents, nor the images, not even in group chats (as opposed to Telegram, where group chats aren't (weren't?) E2E encrypted).

What you say or send on WA is private. Meta cannot see it. Nor can governments, your ISP, or your router. Only you and the person or people you sent it to can read it.

It's a d*ck move if they then publicize this. And, as others pointed out, it's even illegal in many jurisdictions: AFAIK, it is in my country.

TheDong 3/31/2025|||
Do you think it'd be okay if they used a local LLM, via ollama, and this MCP server?
InfiniteLoup 3/31/2025||
Personally, I would say that still reeks of being manipulative. I've received messages from a friend which were definitely LLM-generated; it made me like that person considerably less.
jeroenhd 3/31/2025||
If they use the LLM to search ("when did X tell me about that party somewhere around Y's neighborhood") then I don't think there's any problem.

If they configure it to add a prefix, for instance when answering questions like "when are you free to hang out" with "[AI] according to X's calendar and work schedule, they may be available on the following days", I might also consider that somewhat useful (I just wouldn't take it as something they actually said).

If they're using LLMs to reword themselves or because they're not really interested in conversing, that's a definite ick.

I would personally use such a system in a receive-only mode for adding things to calendars or searching. But I'd also stick to local LLMs, so the context window would probably be too small to get much out of it anyway.

bilekas 3/31/2025|||
This is actually something I am curious about: if, for example, I use this and am streaming all my contacts' information and messages externally, surely I'm breaking privacy laws in some US states and certainly in the EU.

This seems sketchy to me.

jeroenhd 3/31/2025||
It very much depends on the specifics around use cases, parties, and jurisdictions. In plenty of them, you're allowed to record and keep track of conversations you're taking part in, as is the other party, but publishing those on the internet would be illegal.

Processing them (like compressing them to mp3 files or storing them in cloud storage) is probably legal in most cases.

The potential problem with LLMs is that they use your input to train themselves.

As of right now, the legal status of AI is very much up in the air. It's looking like AI training will be exempt from things like copyright laws (because how else would you train an LLM without performing the biggest book piracy operation in history?), and if that happens things like personal rights may also fall to the AI training overlords.

I personally don't think using this is illegal. I'm pretty sure 100% of LinkedIn messages are being passed through AI already, as are all WhatsApp business accounts and any similar service. I suppose we'll have to wait for someone to get caught using these tools and the problem making it to a high enough court to form jurisprudence.

1oooqooq 3/31/2025|||
I hope you never contacted anyone with a business account then.
dev0p 3/31/2025|||
You should just assume that every single thing you type into an electronic device made after the 90s gets piped into an LLM anyway.
the_gipsy 3/31/2025|||
Zuck is already piping it into much worse
kingkongjaffa 3/31/2025||
I mean, the technology is not the issue. Someone can read your past conversations today and take diligent notes to unearth the same insights an LLM might, if they were so inclined.

This might actually be helpful for people with poor memory or neurodivergent minds, to help surface relevant context to continue their conversation.

Or sales people to help with their customer relationship management.

master-lincoln 3/31/2025||
If 99% of your life is stored at one of the biggest advertising companies in the world, you already gave up on privacy anyway...
charlie-83 3/31/2025||
All messages are E2E encrypted though, right?
ideashower 3/31/2025|||
Messages are. I do not believe that metadata about the messages is, however. So they know who you're speaking to, at what frequency, and from where.
charlie-83 3/31/2025||
Fair enough. That's not a problem to me but I can see it being an issue for people requiring complete anonymity. Are there any alternatives to WhatsApp that would fix this problem?
johnisgood 3/31/2025|||
Ricochet Refresh, but it is missing crucial features.

Try Briar, I think it does not store metadata either?

berkes 3/31/2025|||
Signal. But not because they cannot read this metadata (they can); rather because they promise they don't.
JonnyaiR 3/31/2025|||
My mom showed me her phone the other day - she had updated her WhatsApp app, and now the search bar has changed from "search your chats" to "search chats or ask Meta AI anything". I've googled a bit but did not find an option to disable Meta AI, and also found no definitive answer on what Meta AI actually does - if I search for a chat, does it use this chat as context to provide answers? Does it run locally (I highly doubt that)? Is it only another interface to chat with an LLM? It sure seems this might be a stepping stone to pipe WhatsApp messages into Meta AI, thus bypassing E2EE; not sure if it's done already, but the line is getting quite thin.
karn97 3/31/2025||
WhatsApp has no Meta AI built in
abdullahkhalids 3/31/2025|||
I have WhatsApp open on my phone right now. There is a blue-pink button on the home screen in the lower right corner. If you press it, it switches to a chat that says "Ask Meta AI anything".
moschetti1 4/1/2025|||
I think they are rolling it out; I don't have it, but a couple of friends do. I guess they are testing.
sschueller 3/31/2025|||
What guarantee do I have that what moves between the soft keyboard and the message window is not intercepted? And the same goes for displayed messages.

It's a closed-source client. End-to-end encryption means nothing.

aargh_aargh 3/31/2025|||
So this is a special case where two wrongs DO make a right when directed at a single victim?
FirmwareBurner 3/31/2025||
[flagged]
master-lincoln 3/31/2025||
I don't think there was any moral critique in what I wrote. I just commented on OP's

> so you maintain full control and privacy

Of course everybody is free to chose who they hand over data to.

One more comment to your writing:

> take their favorite apps away

There was no mention of taking anybody's app away. If people want to contact me, they will need to use something that is not owned by a big advertising company. One can install additional apps or use services that do not need any apps.
