Most people think of feature flags as boolean on/off switches, maybe per-user on/off switches.
If one is testing shades of colors for a "Buy Now!" button, that may be OK. For more complex tests, my experience is that not a lot of users tolerate experiments. Our solution was to represent feature flags as thresholds. We assigned a decimal number in [0.0, 1.0) to each user (we called it courage) and a decimal number in [0.0, 1.0] to each feature flag (we called it threshold). That way we not only enabled more experimental features for the most experiment-tolerant users, but these were the same users, so we could observe interactions between experimental features too. Deploying a feature was then as simple as raising its threshold to 1.0. User courage was 0.95 initially and could be updated manually. We tried to regenerate it daily based on surveys, but without much success.
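For what it's worth, here's a minimal sketch in Python of how I read that scheme; the exact enable rule (courage + threshold >= 1.0) and the names below are my guesses, not necessarily what the parent actually implemented:

    DEFAULT_COURAGE = 0.95  # initial per-user courage, per the comment above

    def is_enabled(courage: float, threshold: float) -> bool:
        # Assumed rule: a flag with threshold 1.0 is on for everyone
        # (courage is always < 1.0), and higher-courage users see
        # lower-threshold, more experimental flags earlier.
        return courage + threshold >= 1.0

Under that reading, shipping a feature really is just setting its threshold to 1.0.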
We ended up creating just ~100 versions of our app (~100 experiment buckets), and then you could join a bucket. Teams could even reserve sets of buckets for exclusive experimentation purposes. We also ended up reserving a set of buckets that always got the control group.
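A rough sketch of the bucket idea (the hashing, bucket counts, and reserved ranges here are my own illustration; as the parent says, users could also join a bucket explicitly):

    import hashlib

    NUM_BUCKETS = 100
    CONTROL_BUCKETS = set(range(0, 10))               # always get the control experience
    RESERVED = {"checkout-team": set(range(10, 20))}  # buckets a team has claimed

    def bucket_for(user_id: str) -> int:
        # Stable assignment so a user stays in the same bucket across sessions.
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return int(digest, 16) % NUM_BUCKETS

    def in_experiment(user_id: str, experiment_buckets: set) -> bool:
        b = bucket_for(user_id)
        return b in experiment_buckets and b not in CONTROL_BUCKETS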
You've approached it a different way, and probably a more sustainable way. It's interesting. How do you deal with the bias from your 'more courageous' people?
That's a great question. We had no general solution for that. We tried to survey people, but the results were inconclusive and not statistically significant.
Was this at Spotify by any chance? :)
Based on this ending, the courage bit sounds clever but is misguided. It adds complexity through a whole other variable, yet you have no way of measuring it or even assessing it well.
I thought you were going to describe calculating courage from the statistical usage of new features vs. old features when users are exposed to them, meaning people who keep using the product when it changes have more courage, so they see more changes more often. But surveying for courage (or how easily people deal with change) is probably the worst way to assess it.
But even then I don't know what purpose it would serve, because now you've destroyed your A/B test by selecting a very specific subpopulation, so your experiment / feature results won't be good. I'm assuming here that a product-experimentation approach is being used, not just "does it work or not" flags.
There's no standard that requires something to work for everyone, or that makes it less valuable if it doesn't.
Some granularity and agency for the user is valuable. Maybe let them opt in to everything at once, or just a few features at a time.
Do you mean "tolerate change"? But then you still eventually roll out the change to everyone anyway...
Or do you mean that users would see a different color for the "buy now" button every day?
From a purely statistical point of view, if you select users who "tolerate" your change before you measure how many users "like" your change, you can make up any outcome you want.
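To make that concrete with made-up numbers: say only 20% of users tolerate the change, 90% of those like it, but only 30% of everyone else does.

    tolerant, others = 0.20, 0.80                  # made-up split of the user base
    like_if_tolerant, like_if_not = 0.90, 0.30     # made-up approval rates

    overall = tolerant * like_if_tolerant + others * like_if_not
    print(overall)            # 0.42 -> a minority of all users actually like it
    print(like_if_tolerant)   # 0.90 -> what you'd report if you only measured the tolerant ones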
This kind of threshold adds some flexibility to the otherwise subjective process of finding the best cohort to test a feature with.
You can call this measure "courage" but that is not actually what you are measuring. What you measure is not that different from agreement.
I could have clarified as well that I was leaning more towards user tolerance... or, as I like to call it, a "user-guess" that this feature might be OK with them :)
Another thing I like about granular and flexible feature flag management is that you can really dial in and learn which features actually get used by whom... instead of building things that will collect dust.
the tolerance score wouldn't be tied to a specific change. it's an estimate of how tolerant a person is of changes generally.
it's not that different from asking people if they want to be part of a beta testers group or if they would be open to being surveyed by market researchers.
targeting like that usually doesn't have a significant impact on the results of individual experiments.
Plus you don't know what that correlates to. Maybe being "tolerant of changes" correlates with being particularly computer-savvy, and you're rolling out changes that are difficult to navigate. Maybe it correlates with people who use your site for only a single task: it would appear they don't mind changes across the platform, but they just don't see them. Maybe it correlates with people who hate your site now, and are happy you're changing it (but still hate it).
You can't use a selected subset unless it is obviously uncorrelated with your target variable. This is selection bias as a service.
Unless you're feature flagging to test the infra backing an expensive feature (in which case, in a load-balancer / containerised world, bucketing is going to be a much better approach than anything at the application level), you most likely want to collect data on acceptance of a feature. By skewing it toward a more accepting audience, you're getting less data on the userbase you're more likely to lose. It's like avoiding polling swing states in an election.
How did you measure "experiment tolerance"?
>> How did you measure "experiment tolerance"?
Feedback from CS mostly. No formal method. We tried to survey clients to calculate a courage metric, but failed to come up with anything useful.
If you have a huge userbase and deploy very frequently, FFs are great for experiments, but for the rest of us they're primarily a way to decouple deploys from releases. They help with the disconnect between "Marketing wants to make every release a big event; Engineering wants to make it a non-event". I also find that treating FFs as different from client toggles is very important for lifecycle management and proper use.
More than the binary nature, I think the bigger challenge is that FFs are almost always viewed as a one-way path, "Off -> On -> Out". But what if you need to turn them off and then back on again? That can be very hard to do properly if a feature is more than UI: it might create or update data that the old code then clobbers, or cause issues between subsystems, like microservices that aren't as "pure" as you thought.
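One common mitigation (not something the parent describes; the record shape and field names here are invented) is to keep writes backward compatible for as long as the flag might still be rolled back:

    def save_order(order: dict, new_pricing_enabled: bool) -> dict:
        # Always write the fields the old code expects, and make the new
        # representation purely additive, so turning the flag off again
        # doesn't leave records the old code clobbers or misreads.
        record = {"total": order["total"]}               # old code reads/writes this
        if new_pricing_enabled:
            record["line_items"] = order["line_items"]   # new, additive field
        return record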
If you're interested in this space I'd recommend lurking in their CNCF Slack Channel https://cloud-native.slack.com/archives/C0344AANLA1 or joining the bi-weekly community calls https://community.cncf.io/openfeature/.
This week I came across "Standardized Interface for SQL Database Drivers" (https://github.com/halvardssm/stdext/pull/6), for example, and then https://github.com/WICG/proposals/issues too.
It's huge work to get everybody on the same page (my previous example, for instance, hasn't had much engagement: https://github.com/nodejs/node/issues/55419), but when it's done, and done right, it's a huge win for developers.
PHP PSR, RFC & co are the way.
However, there are a lot of connected needs that most real-world usages run into:
- Per-user toggles of configuration values
- Per-user dynamic evaluation based on a set of rules
- Change history, to see what the flag value was at time of an incident
- A/B testing of features and associated setting of tracking parameters
- Should be controllable by e.g. a marketing/product manager and not only software engineers
That can quickly grow into something where it's a lot easier to reach for an existing, well-thought-out solution rather than trying to home-grow it.
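As a toy example, the "per-user dynamic evaluation based on a set of rules" bullet alone already tends to grow into something like this (the rule shape and attribute names are invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class Rule:
        attribute: str   # e.g. "country" or "plan"
        allowed: set     # attribute values for which the rule matches

    @dataclass
    class Flag:
        default: bool
        rules: list = field(default_factory=list)

    def evaluate(flag: Flag, user: dict) -> bool:
        # The flag is on when the user matches every rule; otherwise fall
        # back to the default value.
        if flag.rules and all(user.get(r.attribute) in r.allowed for r in flag.rules):
            return True
        return flag.default

    # evaluate(Flag(default=False, rules=[Rule("country", {"NO", "SE"})]),
    #          {"country": "NO"})  -> True

...and then you still need change history, a UI for non-engineers, and tracking hooks on top.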
We've got the core functionality pretty much down, so now there are some more interesting/challenging components to think about, like Event Tracking (https://github.com/open-feature/spec/issues/276) and the Remote Evaluation Protocol (https://github.com/open-feature/protocol).
(We are a big LD user at work.)
How much does the flagd sidecar cost? Seems like that could be a lot of overhead for this one bit of functionality.
Cool stuff
https://github.com/vhodges/ittybittyfeaturechecker
probably via https://openfeature.dev/specification/appendix-c (I don't have time to maintain a bunch of providers).
We are evaluating new solutions at work, and OpenFeature is something we're interested in. (I built the home-grown solution that's in use by one product line.)