Posted by lukaspetersson 5 hours ago

We gave an AI a 3 year retail lease and asked it to make a profit(andonlabs.com)
160 points | 233 comments
class3shock 3 hours ago|
"Again, we are not doing this because we want this to be the future. It is not because we want to expand to chain AI-run retail stores across the world. It is not for economic opportunity.

We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction, analyzing the traces, benchmarking how much autonomy an AI can responsibly hold."

I always enjoy how these AI companies try to take the moral high ground. When someone doesn't want something to be the future, usually their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future, then why not spend your time building a future you do want? Supporting people who want more AI regulation to stop this? Literally anything else.

Just be honest: you think this is the future, and you do in fact want to be first doing it, to be in a position to make a lot of money. Do you think people don't know what an ad is when they see one?

Lammy 3 minutes ago||
> When someone doesn't want something to be the future, usually their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future, then why not spend your time building a future you do want?

“It only remains to point out that in many cases a person’s way of earning a living is also a surrogate activity. Not a PURE surrogate activity, since part of the motive for the activity is to gain the physical necessities and (for some people) social status and the luxuries that advertising makes them want. But many people put into their work far more effort than is necessary to earn whatever money and status they require, and this extra effort constitutes a surrogate activity. This extra effort, together with the emotional investment that accompanies it, is one of the most potent forces acting toward the continual development and perfecting of the system, with negative consequences for individual freedom.”

-- Industrial Society and Its Future (1995)

beloch 1 hour ago|||
I once saw an interview with a guy who was into extreme body modification of an unprintable and life-altering nature. He said something to the effect of, "I like challenging people's conception of what humans are." I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."

For the guys in this story, my translation is, "We were totally fine with making money with no effort, because F paying more employees than we need to. This social media campaign is our backup plan to ensure we get some press and attention out of it even if it fails. We'd totally be cool with making a lot of money though. Please visit our quirky AI shop and buy our stuff."

Barbing 1 hour ago|||
“We also won’t be first against the wall when the revolution comes (see this very blog for proof of innocence)”

This is going through some people’s minds the more pushback grows (see Altman molotov, Maine data center moratorium)

HumblyTossed 47 minutes ago||
For decades we moved toward a knowledge-based economy; now we have perversely wealthy people saying they're coming for those jobs. The thought of tens of millions of people with nothing to do but starve to death ought to scare those wealthy people.
pydry 1 minute ago|||
They're experts at divide and conquer. They'll probably be able to convince us that we did this to each other.

Just like they convinced the younger generation that "boomers" stole their future.

hn_acc1 27 minutes ago||||
Especially since many of them are some of the brightest minds around.
Barbing 14 minutes ago||
If (1) many bright and very online people are going to lose their jobs, and (2) the response has not been mass unionization, should I rethink (1), a more likely future of work, or rethink (2), the psychology of the average/collective knowledge workforce, or...

"where union" in short.

Perhaps the concept is too foreign for white collars, or on average folks think they'll be OK and it's the juniors who'll go... maybe too focused on immediate needs... a belief unionization is the wrong response... (and I'm not advocating for anything in particular btw)

topheroo 34 minutes ago|||
Comment of the week
mock-possum 51 minutes ago|||
> I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."

Strikes me as a repulsively mean-spirited take, ironically proving the artist’s point.

mjmsmith 33 minutes ago||
I think that depends on what the "extreme body modification of an unprintable and life-altering nature" was.
beloch 14 minutes ago||
Let's just say the "artist" was never again going to be able to walk normally, wear normal pants, or sit without a doughnut pillow. It was a voluntary disability.
Waterluvian 2 hours ago|||
I think it’s easier just to recognize words as free and to value them as such. Actions have value.
mountainb 2 hours ago|||
Many actions have a negative value. If I give two toddlers ball-peen hammers, release them into a window store, and then close the front door while I wait in the parking lot, was my action likely to create value or likely to destroy value?
jagged-chisel 1 hour ago|||
For whom? The employees will get more paid hours as they clean up. You have created value for them!
evan_ 58 minutes ago||
ok Zorg https://www.imdb.com/title/tt0119116/quotes/?item=qt0544361&...
edm0nd 1 hour ago|||
is it not both?

create value because the windows have to be replaced and employees are paid for their labor in doing that.

destroy value bc they -1 inventory each time a window is broken

lbreakjai 51 minutes ago||
It's a net value loss. This is literally the parable of the broken window

https://en.wikipedia.org/wiki/Parable_of_the_broken_window

The fallacy is to think value was created by buying someone's labour to fix the window. This is value that's been displaced from something productive to something unproductive.

Instead of going from 0 to 1 (invest the money and create value), you went from -1 to 0 (spend money to fix the window to get back to where you were) and, overall, the value of a perfectly good window got lost.

Barbing 1 hour ago||||
FIRE!

-crowded theater (negative value example)

Words can be pretty much actions depending on who you are https://en.wikipedia.org/wiki/Will_no_one_rid_me_of_this_tur...

bryanrasmussen 2 hours ago||||
>I think it’s easier just to recognize words as free and to value them as such.

well, yeah that is the world the AI guys want...

Apocryphon 1 hour ago||
The opposite, actually. They hardly want to give away tokens for free!
hn_acc1 25 minutes ago|||
They want the grand total of humanity's knowledge, from which they create tokens, to be given to them for free, though.
dugidugout 20 minutes ago|||
For the tech bros, the tokens are the actions and the prompts are the words.
gobdovan 55 minutes ago|||
Words are acts, as formalized in speech act theory.

https://en.wikipedia.org/wiki/Speech_act

anon84873628 3 hours ago|||
Not for the economic opportunity of building AI-run retail stores. For the much larger economic opportunity of selling AIs to run retail stores!

Pickaxes and shovels and whatnot.

andy99 6 minutes ago|||
I don’t find this disingenuous.

The more typical AI foundation model company claim of "it's so dangerous only we and people who pay us enough should have access" is what I think is BS.

I don’t see anything wrong with trying to understand something, which is what this seems to be about. I also don’t see anything wrong with an AI-operated store generally; it of course makes sense, and is valuable, to learn about the limitations.

Quarrelsome 1 hour ago|||
To be fair, they're running this with oversight, the blog states they're ensuring the people employed are actually properly employed with the parent company. You know for sure that someone WILL run this experiment without those oversights, so while their "care" is probably more about liability there is still some truth to what they say.
elif 1 hour ago|||
It is moral to throw your toddler into the pool so that later in life they are less likely to drown.
jdlshore 1 hour ago||
Um, yes? Very much so. Infant swimming self-rescue courses are life-saving if you live in an area with a lot of swimming pools, especially if you have one of your own.

E.g., https://www.infantswim.com/

b2w 58 minutes ago||
At best, ISR covers the short term.

I see these kids come on deck and enter the water, and it's hard not to notice that their development lags behind that of peers who went to a swim club with a proper learn-to-swim, thrive-in-the-water approach, as opposed to a pure survival mentality. They are the most watched in case something happens.

So yea, don't just throw em in.

tayo42 27 minutes ago||
> development is behind to those of their peers that went to a swim club

2 year olds are behind already?

ben_w 2 hours ago|||
I'm not saying you should take them seriously*, but if you were to take them seriously, believing that when they say "we believe this future is coming regardless" they do in fact believe it, well, how can I put it?

Lots of people write wills; that doesn't mean they're looking forward to dying or think they can do much about it. Heck, a lot of people don't even watch their diet or exercise to maximise quality of life and life expectancy.

* I think that by the time AI is good enough to run a retail store, there's a decent chance there won't be any retail stores left anyway. It's like looking at Henry Ford's production line factories and thinking "wow, let's apply this to horse-drawn carriages!"

notahacker 1 hour ago||
tbf this is less preparing for inevitable death by writing a will and more preparing for inevitable death by founding a startup which blogs about euthanizing small animals...
fl4ppyb3ngt 3 hours ago|||
Do you think this would be the future? I'm in between on it, but I think it's cool that they're at least doing it transparently. Also, I don't think they're going to be making a lot of money... they post Luna's financials up at the store, and last time I was there she was down $500 just for the day (not including the daily rent and employee cost).
sdenton4 2 hours ago|||
It's the next step beyond the tablet-based ordering that has taken over in restaurants. Like those tablets, it won't be everywhere, but it's easy to imagine it being ubiquitous, especially in chain stores.
jmcgough 1 hour ago|||
I can't believe you made a throwaway to pretend to be a HN commenter just to defend your AI store. This is like Scott Adams behavior.
HPsquared 1 hour ago|||
I'll file this under "Resistance is futile".
Mordisquitos 3 hours ago|||
“Again, we are not doing this because we want the Torment Nexus to be the future.

We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running the Torment Nexus.”

astrange 1 hour ago||
The Torment Nexus joke is kind of undermined by obviously being a reference to the Total Perspective Vortex from HGTTG, where the joke was that nothing bad actually happened when they used it on Zaphod.
mesofile 1 hour ago||
Not sure if this is a spoiler, it’s been a while since I read those books, but if memory serves the only reason Zaphod survived the TPV was because he was temporarily the inhabitant of a pocket universe specifically designed to trick him, and naturally for this universe’s version of the TPV he was the most important being in it, and in telling him so the pocket-universe TPV just confirmed ZB’s own view of himself, leaving him unharmed and a little extra smug. At some further point in the plot this fact is revealed, not sure if it’s the same book, but I remember it as a hilarious deflationary moment for the character.
jonas21 1 hour ago|||
> Supporting people that want more AI regulation to stop this?

How are you supposed to know what sort of regulation is needed if you don't even know what the issues are yet? Similarly, won't it be much easier to make the case for regulation if you can point to results of experiments like this one instead of just hypotheticals?

pajamasam 1 hour ago|||
I honestly thought the whole thing was satire and that that line was a riff on OpenAI.
insane_dreamer 1 hour ago|||
I think it's actually useful to see how AIs behave in such situations. It's going to happen, and understanding what AIs do helps mitigate actions that could be dangerous. It's hard to guard against risks while they remain unknown.
orochimaaru 1 hour ago|||
The narrative was quite dystopian. But we're halfway there now anyway.
scotty79 1 hour ago|||
I'm all for replacing CEOs with AI.
cyanydeez 1 hour ago|||
"Guys, the Future All-Knowing AI is forcing us to do this; don't blame us, blame the super intelligent future indistinguishable from magic!"
dfhvneoieno 3 hours ago||
[dead]
bfeynman 2 hours ago||
I feel bad that people have to read this. It's complete puffery, made up for clicks, and the biggest thing is the pure bravado with which a company says, "Hey, let's just waste a ton of money, all for a potential blog and marketing piece."

This is not really automated in any fashion. I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context. To get even more technical on the fallacy: this is not automation, as there is data leakage at every step where there is a human in the loop. A broken clock is right twice a day; an LLM could cycle through 100 guesses to pick a number, but don't market that as an oracle.

Aside from that, you could just look at the pictures and context (retail in SF) and assume making a profit here would be near impossible. An actual AI CEO would probably have immediately canceled the lease.
graybeardhacker 1 hour ago||
A stopped clock is right twice a day; a broken one can be wrong forever. Just saying.
insane_dreamer 1 hour ago|||
> I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop — meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context.

A human can be in the loop if the human is exactly executing the orders of the AI. It's still the AI making all the decisions, which is the purpose of the experiment - not to see whether agents can handle every interaction necessary to run a business (pick up the phone and place orders, etc.). That's also why Luna hired humans.

bfeynman 51 minutes ago||
That is ... not correct? This is a classic example of data leakage: the yes/no confirmations are signals feeding back to the model, influencing (and here, basically guiding) future decisions.
j2kun 2 hours ago||
[flagged]
antonvs 40 minutes ago|||
I appreciated the analysis given by the other commenter, so I'm glad they didn't take that lazy way out.
themafia 2 hours ago||||
This is Hacker News. It should be filled with curious people who are willing to express their opinions and points of view. To tell someone to just punitively flag something and then "move on" is absurdly reductive and small minded.
kryogen1c 1 hour ago|||
The submitter appears to be a co-founder of the company the article is about (omitted from the HN account bio), and the article is misleading to the point of lying.

This company now has a strong negative reputation in my mind, one that I will gladly share with others.

saaaaaam 48 seconds ago||
Did Luna the AI write this piece of promotional marketing and decide to post it on hacker news? Did Luna the AI create a fleet of new accounts to upvote? Are the human-derived marketing interventions accounted for when the outcomes of this project are assessed?
Xx_crazy420_xX 5 days ago||
I think it would be valuable to list all interactions between the dev team and the LLM, and to state transparently which decisions were induced by humans steering the LLM and which were actual LLM decisions, not biased by system instructions or by the dev team communicating with it.
ethin 2 hours ago||
But why? It would ruin the illusion they're trying to make you see, because 99 percent of it (if not all of it) is human-driven.
vannevar 5 days ago||
Agreed. Color me skeptical. All of the interactions and decisions described are plausible, but in my experience with AI agents, they would require frequent human intervention.
fl4ppyb3ngt 3 hours ago||
I heard they're working on putting together an interface for the public to check up on it. Their blogs always have a bunch of screenshots of the interactions with the agents, so I think they'll be pretty transparent about this.
phreeza 1 hour ago||
What do you mean you heard? Are you not a member of their team? Your posts in the last hour seem quite astroturf-y.
binarynate 3 hours ago||
Marketing stunt. If they actually cared about this as an experiment, they wouldn't have broadcasted this so early, because now that the public knows that the store is designed and run by AI, many people aren't going to support it (i.e. many people who would have shopped there now won't).
mrweasel 2 hours ago||
Also, don't do it in San Francisco; I think it's an artificially easier market. This type of store wouldn't work in Bumsville, Idaho.

Maybe that's for later, if this works out, but I'd love to see the AI attempt to run a moderately successful business in a borderline dysfunctional town in the Midwest. If you don't technically need to pay "the CEO" a salary, could you run, say, a grocery store in a dying town? For one, this would really test the AI's creativity, and it would perhaps tell us whether these towns are just doomed.

shalmanese 27 minutes ago||
San Francisco is one of the most brutally hard places to run a business, as evidenced by how competitive the landscape is.

What would have been actually interesting about this publicity stunt is if it demonstrated if/how AI could have dealt with some of the SF specific, non-sexy parts of running a business. Filing the relevant permits, co-ordinating inspections, negotiating with landlords, interfacing with locals at planning meetings.

Those are things SF business owners report as empirically unpleasant parts of running a business and a sufficient financial drag that they meaningfully affect business success. But my feeling is they had humans clear the way of all these thorny issues ahead of time so the AI could focus on the "sexy stuff".

fl4ppyb3ngt 3 hours ago|||
Interesting take. Looks like they've already got a bunch of hate on Google reviews.

But maybe people will forget eventually.

hsuduebc2 2 hours ago|||
Or they would go there mainly out of curiosity. Either way, it is skewed by the sole fact that they published it.
BurningFrog 3 hours ago||
I hope they also have similar store that they don't talk about publicly, so they can compare the outcomes.
ryan_j_naughton 3 hours ago||
To do this properly, no one should know the store is AI run. There is a novelty component of it being an AI run store that will drive consumer demand and increase publicity.

Not even the normal store employees should know (which would be difficult) or maybe the human manager should be held to an NDA to not disclose it (and the manager also defers to the AI in all such real management decisions).

fl4ppyb3ngt 2 hours ago|
Ya, I get that, but then that kinda messes up the transparency and ethical-research part of the experiment. Idk, there are definitely two sides of things they're testing: 1. can it be profitable (in this case, yeah, they shouldn't have disclosed anything); 2. can an AI do this safely and respectfully, or are the humans in the loop going to come at the cost of the agent trying to make a profit. I think #2 is more important than #1.
pavel_lishin 2 days ago||
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.

I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgement alone.

Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.

anon84873628 2 hours ago||
The AI is not really the CEO in the first place. It is not signing contracts (at least not with its own name). It is fundamentally still an automated tool reporting to the real human operators, who are doing more of the actual corporate legal tasks than portrayed in the article.
yieldcrv 2 hours ago||
People can delegate
john_strinlai 23 minutes ago||
sure. but in this case, having the ai delegate to humans for any important task sort of undermines the entire premise.
altruios 3 hours ago|||
I assume if they get fired by the AI during the experiment they are still paid to sit at home. It would not invalidate the experiment.
pessimizer 3 hours ago||
Why do you assume that?
notahacker 1 hour ago|||
it's about the only way of reconciling experimental validity (if the AI can't "fire" staff and remove them from business operations and their P&L account in situations when it would be legal and normal to do so, is it really running a business?) and not having the massive ethical issue of people being arbitrarily fired because a computer glitched. Whether that's what they actually do is tbc.
sodality2 2 hours ago|||
[dead]
jayd16 4 hours ago|||
You can still wear eye protection during the safety test...

I don't think we need to have real human risk to get results from the experiment.

fl4ppyb3ngt 2 hours ago||
well said
jaxefayo 3 hours ago|||
The article mentions:

“John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”

which was refreshing to read.

evanelias 1 hour ago|||
Literally the two sentences immediately following that quote are "For now. As we continue down this path, however, humans will not be able to stay in the loop and such guarantees will be intractable."

Personally I find the entire tone of the article to be creepy and disturbing.

hamdingers 3 hours ago|||
I take that to mean "we won't let the AI refuse to pay them or otherwise break employment law" not that they could never be fired.
HWR_14 2 hours ago||
I read that as "it's not worth the negative PR of being associated with AI firing minimum wage employees" compared to just paying them for a year or two.
ceejayoz 4 hours ago|||
They could, in theory, have contracts that say the AI can't fire them.
compiler-guy 4 hours ago|||
It could be set up such that the AI can "fire" them, in that they no longer work at the store, and aren't paid wages that count against the experimental establishment's costs, but still get paid to do something else, or to do nothing at all.

I doubt the experiment is set up that way, but that would be an ethical way to do it.

wil421 4 hours ago|||
There’s no way they are putting that into a contract. HR departments are already using AI to fire people.
ceejayoz 3 hours ago||
"This specific AI can't fire anyone without human review, because it's experimental" is something you could easily add.
fl4ppyb3ngt 2 hours ago|||
Yeah, they explain this in the post though. The decisions aren't 'vetted' per se, but interactions and decisions are very closely monitored, as in any science experiment. I think it's good. Better they do it and monitor every little thing, stepping in where needed, than no one doing it and, 3 months down the line, some company shipping a "business in a box" agent that people buy and run with no guardrails or oversight. There's definitely huge potential for exploitation of the employees, and Andon is all about safety and stuff, so their approach makes sense, no?
joe_the_user 2 hours ago||
At this point, legally, I don't think an AI can hold a contract with a person, so I don't think an AI could hire a human, and therefore it couldn't fire a person.

That doesn't mean the AI couldn't be the decision maker for the legal entity that's hiring these people.

But the thing is, if this startup is telling these people they are employees of the company, not of "Luna", it would give them the impression that all their interactions with the AI are kind of a sham, a game, not to be taken seriously, and that they're basically being paid to role-play as "Luna's employees".

And this is kind of where such experiments are likely to go. Another user mentioned that it would be useful to know the kinds of inputs and outputs the machine has. A human boss could manage a store with just phone calls and a camera, but I overall get the vague impression Luna doesn't have anything like that sort of ability, though really we just aren't given the information for any accurate determination.

thih9 45 minutes ago||
> Great question! Here’s the short version:

> Fair pushback. The honest answer:

These were painful to read.

If an artificial boss is also artificially empathetic, does this make it more realistic?

In any case current iteration sounds like a more exclusive circle of hell.

sbuttgereit 3 hours ago||
I skimmed through this, and maybe I missed it... but what really are they trying to prove? Are they trying to show that AI is capable of arbitraging consumer desires against market products/services into a successful business? Are they trying to show that once you get to financially managing a business, the ruthless efficiency of the AI can add points to your margins? Or are they simply trying to get attention in an otherwise arguably overcrowded market for AI services (maybe the AI suggested something like this)?

The only thing I saw demonstrated (and again, I skimmed) is what many thousands of software developers using AI tools to write their boilerplate already know: these tools, as of now, are great at going through the motions. A successful retail business (and I spent many years in the retail industry) isn't about putting together a nice storefront, hiring clerks, and selecting just any old products: it's about being profitable. In traditional retail, one of the most important things is getting the right real estate for your target market... seems like that choice was already made in this case. Yes, a nice storefront and good clerks are important, but I've worked in chains that opened immaculately designed and built stores with great clerks and failed... and some that opened little more than fluorescent-lit hellscapes with clerks who barely cared and succeeded. In both cases, the overall quality of the decisions and strategies relative to the target markets determined the success of the business. Just going through the motions didn't.

So if all this is to say that AI can do the things people generally do in these circumstances, then sure, but you didn't need this much human effort to prove that... developer types do that at scale every day now. If there is something different this company is trying to learn, I'd be much more interested in that.

anon84873628 2 hours ago||
If I'm being charitable, it's more about the ability to orchestrate and resolve tradeoffs across these different tasks / domains? The overall C&C, presumably. Which is still not so surprising.

Really it's an excuse for the company to test all the harnesses and tools they have built to make it work.

fl4ppyb3ngt 2 hours ago|||
I agree that some of these things we could have already guessed; like, yes, agents can research stuff and order stuff off the internet. I think what will be a lot more interesting is the interactions between Luna, the agent running things, and the employees it hired. I guess it's less about AI being able to do the procurement/CEO-level stuff, and more about how it handles the HR-level aspects of store management. That seems more important in the long run because, like you said, we already know the capabilities are there. I think what Andon Labs is doing is more about the safety aspect now. Seems that way at least, with how transparent they are about Luna losing money and messing up lol
taurath 3 hours ago||
They're trying to get noticed so that a wealthy cult member's brain gets tickled to the tune of 9 figures
hermitcrab 1 hour ago|
>For the build-out, she found painters on Yelp, sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving.

I'm sure this involved vast amounts of human oversight (e.g. checking that the contractor had actually done stuff) that isn't mentioned.
