Scaling zulily’s Infrastructure in a Pinch, with Salt

At zulily, we strive to delight our customers with the best possible experience, every day. That experience involves offering thousands of new products each morning, all of which come together thanks to our technology and impeccable coordination across the organization. As our product offerings change dramatically from day to day, quickly scaling our infrastructure to meet variable demand is of critical importance. In this article, we will provide an overview of zulily’s SaltStack implementation and its role in our infrastructure management, exploring patterns and practices which enhance our automation capabilities.


Let’s start with a bit of context, and not the jinja kind

Our technology team embraces a DevOps approach to solving technical challenges, and many of our engineers are “full stack”. We have several product teams developing and supporting both external and internal services, with a variety of application stacks.  All product teams have developers of course, and a few have dedicated DevOps engineers.  We also have a small, dedicated infrastructure team.


zulily has seen phenomenal growth since its inception: what was initially a tech team of one quickly became a tech team of a few, and rapidly evolved into the many product teams and engineers we have today. With this growth, it became apparent that our infrastructure team was perhaps not the ideal team for managing all components and configurations across the entire Technology organization.


To elaborate further on this point, our product teams have overlapping stacks with variations, and many teams have vastly different components comprising their stacks. Product teams know their application stacks best, so instead of attempting to have a small team of infrastructure engineers manage all configs and components, we needed to empower product teams to take ownership by providing them with self-service options.


Enter SaltStack to address our organizational growth. We have found Salt very approachable, with its simple-to-grasp state and pillar tree layouts, its use of yaml, and its customization possibilities with python. Salt is a key component in our technology stack, enabling our product teams to take control of their system configurations and keeping us moving forward quickly to accomplish our goals.


saltenv == tenant (mostly), and baseless?

As with many initiatives and projects at zulily, we’ve taken a unique approach to our use of salt environments. It has worked out exceptionally well for our tech organization, and we are excited to share our approach to multi-tenancy with salt.


Each product team has its own salt and pillar trees; salt environments essentially map to tenants. For example, we have environments with names such as “site”; we do not use salt environment names such as “dev” and “prod”.


But what about “real” environments?  We are able to manage those too, thanks to our strict, metadata-rich host-naming convention, paired with salt’s state and pillar tree layouts, top.sls files and targeting capabilities.  Our hostnames have the following format:




Also related to our host names, each minion has custom grains set for all of these fields, and these grains are quite useful in many of our states!
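
For illustration, here is a minimal sketch of how hostname-derived grains can be populated with a custom grains module (a small Python file synced to minions under _grains/). Since the exact hostname format is not reproduced in this post, the field names and the <datacenter>-<environment>-<role><NN> pattern below are hypothetical, not our actual scheme.

    # _grains/zu_hostname.py -- illustrative only; assumes a hypothetical
    # <datacenter>-<environment>-<role><NN> hostname convention.
    import socket


    def hostname_grains():
        """Derive custom grains from the minion's short hostname."""
        grains = {}
        host = socket.gethostname().split('.')[0]
        parts = host.split('-')
        if len(parts) == 3:
            grains['datacenter'] = parts[0]
            grains['environment'] = parts[1]
            # split trailing digits off the role, e.g. "web01" -> ("web", "01")
            role = parts[2].rstrip('0123456789')
            grains['role'] = role
            grains['host_number'] = parts[2][len(role):]
        return grains

Grains like these can then drive compound targeting in top.sls (e.g. 'G@environment:dev') or be looked up from jinja within states and map.jinja files.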


We have found that the majority of states are the same across (real) environments, and environment specifics can instead be managed through pillar targeting. By keeping all of a team’s states and pillar data within just two git repositories, we stay more DRY overall than we would have been with separate git repositories per real environment.


Additionally, salt states may be extended and overridden, which can be useful for different (real) environments when necessary.  So instead of having a flat state tree, we have sub-directories such as ‘core’, ‘dev’ and ‘prod’.  Our approach is to place just about everything under core, but use environment sub-directories when we must have environment-specific states, or when we simply wish to extend or override states residing in core. If parent states in core must be modified, it is important to consider the ramifications for any environment-specific children.  We generally don’t do a lot of extending and overriding at zulily, and instead focus on placing environment specifics within targeted pillar data, as previously mentioned.


We have the same layout in our pillar trees for consistency. Note, however, that pillar keys must be unique, and the directory hierarchy disappears when pillar data is retrieved; the hierarchy still matters for pillar top.sls targeting!


The following state tree example illustrates our layout approach for the “provision” environment:


├── core
│   ├── aliases
│   │   ├── files
│   │   │   └── aliases
│   │   ├── init.sls
│   │   └── map.jinja
├── dev
├── prod
└── top.sls


But wait: if a highstate is run, what happens, and couldn’t this be dangerous?  Running a highstate does have the potential to be dangerous; if a product team accidentally targets *their* very specific MySQL states to ‘*’, for example, the result could be a serious outage on another team’s database server.  To mitigate the risk of an incident like this, pushes to all of our state and pillar repositories are subject to inspection by a git push constraint that deserializes the top.sls yaml and evaluates all targets.  The targeting allowed in our top.sls files is very restrictive: only a subset of target types is allowed, and non-relevant environment references are disallowed.  Also worth noting is that only specific, authorized team members have write access to our product team salt and pillar repositories; a member of the site team may not write to the infrastructure team’s salt and pillar repositories.
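
To make that constraint concrete, here is a simplified sketch of the kind of check involved: deserialize the pushed top.sls and reject disallowed targets. The allowed match types and the single-environment ownership rule below are illustrative placeholders, not our exact policy or hook plumbing.

    # Simplified top.sls validation, e.g. invoked from a server-side push hook.
    # Illustrative only: the allowed matchers and owned environment are assumptions.
    import sys

    import yaml

    ALLOWED_MATCHERS = {'glob', 'grain', 'compound', 'list'}
    OWNED_SALTENV = 'site'  # the one environment this repository may target


    def validate_top(top_contents):
        errors = []
        top = yaml.safe_load(top_contents) or {}
        for saltenv, targets in top.items():
            if saltenv != OWNED_SALTENV:
                errors.append('environment %r is not owned by this repository' % saltenv)
            for target, entries in (targets or {}).items():
                if target.strip() == '*':
                    errors.append('wildcard target %r is not allowed' % target)
                for entry in entries or []:
                    if isinstance(entry, dict) and 'match' in entry:
                        if entry['match'] not in ALLOWED_MATCHERS:
                            errors.append('match type %r not allowed for %r' % (entry['match'], target))
        return errors


    if __name__ == '__main__':
        problems = validate_top(open(sys.argv[1]).read())
        for problem in problems:
            print('top.sls rejected: %s' % problem)
        sys.exit(1 if problems else 0)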


Also worth mentioning: one additional layer of risk mitigation we have in place is that all of our users always append “saltenv=<product_team>” to their salt-calls.


We do have additional environments which are not tied to any specific product team, known as base, provision and periodic.  The base environment is empty!  The latter two are critical to our operations; we’ll explain them next.


Less salt (highstate runs)

In our experience at zulily, we’ve learned that the vast majority of our salt states really only need to run once, or rather infrequently.  So our standard practice for product teams is to run highstates only once per week, or on an as-needed basis, which we do very cautiously.  This goes against the traditional wisdom of converging at least hourly, but in the end, we have had consistent environments and greater stability with this approach.  It is nearly inevitable that even the most senior automation engineer will make a bad push to master at some point, and a timed hourly run could pick that up, with potentially disastrous consequences.  Configuration management is a powerful thing, and we have found our approach to highstating to be the appropriate balance for zulily.


Now, getting to zulily’s two important non-product team “environments”…


The first is known as “provision”.  States in the provision environment provide the most basic packages and configurations with reasonable defaults, which work for most product teams, most of the time.  What is very particular about the provision environment is that a “provision highstate” is only run once!  That’s correct: we almost never re-run any of these states once an instance goes into production.  There just isn’t a need, and more importantly, there may be conflicts with subsequent customizations by product teams; we would rather avoid unnecessary configuration breakage.


To limit ourselves to a single provision highstate, our provision top.sls targeting requires that a grain known as “in_provisioning” be set to True.  When an instance has been provisioned, we remove the grain, and a provision highstate will never run again as long as the grain remains absent.  Very seldom, we have had to roll out updates to a few individual states within provision, which we accomplish very cautiously with specific state.sls jobs.
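
As a rough illustration of that grain lifecycle, a post-provision step along the following lines could clear the flag from the minion itself. The use of salt.client.Caller here is just one way to express it, not necessarily how our provisioning tooling removes the grain.

    # Illustrative post-provision step: drop the in_provisioning grain so the
    # provision top.sls targeting never matches this minion again.
    import salt.client

    caller = salt.client.Caller()
    if caller.cmd('grains.get', 'in_provisioning'):
        # destructive=True removes the key entirely instead of setting it to None
        caller.cmd('grains.delval', 'in_provisioning', destructive=True)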


We have recently open sourced a sampling of many of our basic states used at provision time; please have a look at our github project known as alkali.


The second non-product team “environment” is known as periodic.  While our standard is to run a full product team environment highstate once per week, some changes need to go out in near real time.  For zulily, these types of changes are limited to states addressing resources such as posix users and groups, sudoers, iptables rules, and ssh key management.  Periodic highstates are currently cron’d every few minutes, with saltenv=periodic of course.  We are, however, moving to triggered periodic highstates, as cron’d periodic highstate runs may block other jobs.


State development workflow

We have done a significant amount of state development at zulily, and for the most part, this has occurred within Vagrant environments.  Vagrant has worked very well for us, but more recently we have begun to leverage docker containers for this purpose.  For more information on how we are doing this, please check out a project we just released, known as buoyant.


Given our salt development environment, whether Vagrant or docker, we typically iterate on states out of our home directories (synced folders or docker volumes), preferably in a branch.  Once state and pillar files are ready, we merge into master and configure very restrictive and precise targeting at first, or simply remove or disable existing targeting.  This gives us full control over our rollout process across (real) environments, which limits the risk of a service disruption; we know exactly which hosts are executing which states, and when.


Pushes to master branches for all salt and pillar git repositories are integrated within just a few minutes with our current automation, and then ready for targeted execution across relevant minions.


zulily’s salt masters are controlled by a centralized infrastructure team, and product teams are restricted from running “salt” commands; they do not have access to our masters.  They do, however, have all the control they need, and only the control they need! Product teams use simple, custom scripts that leverage fabric to execute remote commands on their minions, most notably salt-call (with saltenv specified, of course!).
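
A stripped-down version of such a wrapper might look like the following fabric (1.x) sketch; the task names, the hard-coded saltenv and the host selection are illustrative only, not our actual tooling.

    # fabfile.py -- illustrative product-team wrapper around salt-call (fabric 1.x).
    from fabric.api import env, sudo, task

    env.use_ssh_config = True  # hosts are supplied at run time, e.g. fab -H web01,web02 <task>

    SALTENV = 'site'  # hypothetical product team environment


    @task
    def highstate():
        """Run a highstate on the targeted minions, pinned to the team's saltenv."""
        sudo('salt-call state.highstate saltenv={0}'.format(SALTENV))


    @task
    def sls(state_name):
        """Run a single state file, e.g. fab -H web01 sls:nginx"""
        sudo('salt-call state.sls {0} saltenv={1}'.format(state_name, SALTENV))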


Other salt-related open source projects zulily has released

Outside of the aforementioned alkali and buoyant projects, we have recently released four community formulas:



All of these projects are in their early stages, a bit heavy on the jinja in some cases, and for the most part very Ubuntu-specific at this time. They have, however, shown good promise for us at zulily, and we didn’t want to wait any longer to share them with the community. Our hope is that they will already be useful to some, and worthy of iterating on going forward.


Coloring outside of the lines

One of zulily’s core values is to “color outside of the lines,” and our use of SaltStack is no exception.  Many of the patterns we use are uncommon, and our approach to environments in particular may not be the first idea that comes to mind for the typical salt user.  Our use of salt, with its inherent simplicity and flexibility, has enabled us to decentralize our configuration management, providing multi-tenancy and product team isolation.  With self-service capabilities in place, our product teams are empowered to move at a quick cadence, keeping pace with what we call “zulily time” around the office. We’ve had great success with SaltStack at zulily, and we are pleased to share some of our projects and patterns with the community.


Happy salting!

Helping Moms make purchase decisions

zulily’s unique needs

“Something special every day” is our motto at zulily. Thousands of new products go live every morning. Most of these products are available for 72 hours, and a good number of them sell out before the sales event ends! Many of our engaged customers visit us every single day to discover what’s new. We want to make sure our customers don’t miss out on any items or events that may be special to them, while also giving them more confidence in their purchase decisions. A traditional eCommerce ratings and reviews model could help, but it is not the best fit for zulily’s unique business model, which offers customers a daily assortment of new products for a limited amount of time. We needed a different approach.

Our solution

Our specific business requirements demanded a more real-time and community-oriented approach. We came up with a set of signals that highlighted social proof and scarcity. Signals like “Only X left” and “Almost Gone” were designed to encourage users to act on a product they are interested in before it is gone. Signals like “Popular”, “X people viewing” and “Y just sold” were intended to give users more confidence in their purchase decision. We were able to bring these signals to life quickly, thanks to our real-time big data platform. These signals were shown on our product pages and in the shopping cart.

Product page

Shopping cart


We tested the feature on our web and m-web experiences. The results turned out to be better than our most optimistic expectations! It was interesting to note that the feature was almost twice as effective on mobile devices compared to desktop. In hindsight this made a lot of sense: our customers have many short sessions on their mobile devices during the day, and this information helped them make timely decisions. The social and scarcity signals turned out to be a perfect complement to zulily’s unique business model.

The Thrill-Ride Ahead for zulily Engineers

zulily is an e-commerce company that is changing the retail landscape through our ‘browse and discover’ business model.  Long term, there are tremendous international expansion opportunities as we change the way people shop across the planet. At zulily we’re building a future where the simultaneous release of 9,000 product styles across 100+ events can occur seamlessly in multiple languages across multiple platforms. From an engineering perspective, as we expand globally the size and scope of our technical challenge is nothing short of localization’s Mount Everest climb.

Navigating steep localization challenges is not new to companies expanding globally, yet zulily faces a particularly unique thrill-ride ahead. For me personally, this is incredibly exciting. Prior to coming to zulily, one role I held at my former company was global readiness: ensuring the products and services that the company delivered to customers were culturally, politically and geographically appropriate. I was in the unique role of mitigating the company’s risk of negative press, boycotts, protests, lawsuits or being banned by governments. The content my team reviewed was never considered life-threatening until the 2005 incident when the cultural editor of Jyllands-Posten in Denmark commissioned twelve cartoonists to draw cartoons of the Islamic prophet Muhammad. Those cartoons were published, and their publication led to the loss of life and property. Suddenly my team took notice. Having content thoughtfully considered and ready for global markets took on a grave seriousness, moving from a ‘nice-to-have’ to a ‘must-have’ risk management function. While my role was to ensure the political correctness (neutrality) of content across 500+ product groups, the group I led rarely dealt with the size and scope of technical localization challenges that zulily is facing as we expand globally.

As companies expand globally, it’s important for employees to shift their mindset from a U.S.-centric perspective to a global view. This paradigm leap in how the employees of an organization see themselves is significant. When people increasingly think globally about their role and the impact of their decisions on a global audience, specifically in the area of technology and how we enable our platforms, tools and systems to be ‘world-ready’, opportunities for growth and development naturally occur while cultural content risks are reduced. Today all of the content on our eight country-specific sites is in English, yet we are now thinking about how to tackle bigger challenges in the future, such as supporting multiple languages. The technical implementation for international expansion has enabled our developers and product managers to gain a new appreciation for the importance of global readiness and the challenge of ‘going international’.

zulily offers over 9,000 product styles through over 100 merchandising events on a typical day. As we expand into new markets around the globe, we face an extraordinary challenge from both an engineering and an operations perspective. Imagine: every day at 6 a.m. PT we’re publishing the content equivalent of a daily edition of The New York Times (about one half million new words per day, or over 100 million new words per year).  Another way to conceptualize the technical problem: each day we offer roughly the same number of SKUs as you would find in a typical Costco store. Launching a new Costco store every day is difficult enough in one language, yet as we scale our offerings globally, the complexity of simultaneously producing this extremely high volume of content in multiple languages grows exponentially. Arguably, no e-commerce company in the world is publishing the volume of text that zulily produces on a daily basis. Further, the technical challenge is amplified because the massive volume of content must be optimized to work across multiple platforms, e.g., iPhone, iPad, Android and web-based devices. In fact, over 56% of our orders are placed on mobile devices. From a user-experience perspective, across platforms and languages, text expansion and contraction become a significant issue. European languages such as French, German and Italian may require up to 30% more space than English, while double-byte scripts such as Chinese and Japanese will require less space.

The content on our sites is produced in-house each day by our own talented copywriters and editors. Not unlike publishing a newspaper, the deadline for a 6 a.m. launch of fresh, new product styles is typically the night before. Last-minute edits can happen as late as midnight! While this makes for an exciting and dynamic environment, it requires some of the brightest engineering and operational minds on the planet to bring it all together with the quality and performance we expect.

From a technical perspective, our recent global expansion to Mexico, Hong Kong and Singapore faced typical localization hurdles. We needed to implement standard solutions to accommodate additional address line fields in the shipping address, to ensure proper currency symbols are displayed, and to add coding rules that allow us to process orders without postal codes, which are not required in Hong Kong.
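
As a simple illustration of the postal code rule, this kind of per-market requirement can be expressed as data rather than scattered conditionals. The field names and country set below are hypothetical and not zulily’s actual checkout code.

    # Hypothetical shipping-address validation sketch: postal code requirements vary by market.
    POSTAL_CODE_OPTIONAL = {'HK'}  # Hong Kong addresses do not use postal codes


    def validate_shipping_address(address):
        """Return a list of validation errors for a shipping address dict."""
        errors = []
        country = address.get('country_code', 'US')
        if not address.get('address_line_1'):
            errors.append('address_line_1 is required')
        if not address.get('postal_code') and country not in POSTAL_CODE_OPTIONAL:
            errors.append('postal_code is required for %s' % country)
        return errors

For example, validate_shipping_address({'country_code': 'HK', 'address_line_1': '1 Example Road'}) passes, while the same address with country_code 'US' does not.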

Address Field Requirement Example:


Bold = unique as compared to U.S. requirement

As we continue to move into new markets we’ll be driven to apply new solutions to traditional localization problems simply based upon the sheer volume of content and arduous daily production requirements. These two forces alone combine to drive creativity, invention and technological breakthroughs that will accelerate our growth and expansion. Our team at zulily is now exploring various strategies in engineering and operations to tackle the local/regional cultural differences across markets while also bringing zulily to our customers in their own language. We continue to hire the brightest minds to help us invent solutions for a new way of shopping. At zulily, we tell our customers ‘something fresh every day’. Our engineers enable that reality by creating something fresh every day. Our ambition to grow globally will give everyone at zulily that opportunity.

Follow us on Twitter: @zulilytech | @jmstutz

zulily’s Kubernetes launch presentation at OSCON

In July Steve Reed from zulily presented at O’Reilly’s Open Source Convention (OSCON). He spoke to zulily’s pre-launch experience with Kubernetes. It was an honor for zulily to be asked to speak as part of the Kubernetes customer showcase, given the success we have had with Kubernetes.

Kubernetes launch announcement:

Leveraging Google Cloud Dataflow for clickstream processing

Clickstream is the largest dataset at zulily. As mentioned in zulily’s big data platform post, we store all our data in Google Cloud Storage and use a Hadoop cluster in Google Compute Engine to process it. Our data size has been growing at a significant pace, and this required us to reevaluate our design. The fastest-growing dataset was clickstream.

We had to make a decision: either increase the size of the cluster drastically, or try something new. We came across Cloud Dataflow and started evaluating it as an alternative.

Why did we move clickstream processing to Cloud Dataflow?

  1. Instead of increasing the size of the cluster to manage peak clickstream loads, Cloud Dataflow gave us the ability to have an on-demand cluster that can dynamically grow or shrink, reducing costs.
  2. Clickstream data was growing very fast, and the ability to isolate it from other operational data processing would help the whole system in the long run.
  3. Clickstream data processing did not require the complex correlations and joins which are a must in some of our other datasets; this made the transition easy.
  4. The programming model was extremely simple, and the transition from a development perspective was easy.
  5. Most importantly, we have always thought of our data processing clusters as transient entities, which meant that dropping in another entity, in this case Cloud Dataflow, which was different from Hadoop, was not a big deal. Our data would still reside in GCS, and we could still use clickstream data from the Hadoop cluster where required.

High Level Flow


One of the core principles of our clickstream system is to be self-serviceable. In short, teams can start recording new types of events at any time just by registering a schema with our service. Clickstream events are pushed to our data platform through our real-time system (more on that later) and through log shipping to GCS.

In the new process using Cloud Dataflow, we:

  1. Create a pipeline object and read data from GCS into a PCollectionTuple object
  2. Create multiple outputs from the data in the logs, based on event types and schema, using PCollectionTuple (see the sketch after this list)
  3. Apply ParDo transformations on the data to optimize it for querying by business users (e.g. transform all dates to PST)
  4. Store the data back into GCS, which is our central data store, but also for use by other Hadoop processes and for loading into BigQuery
  5. Load into BigQuery, not directly from Dataflow but with the bq load command, which we found to be much more performant
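
For flavor, here is an analogous multi-output sketch written with the Apache Beam Python SDK (the open source successor to the Dataflow SDK); our pipeline used the Java SDK and PCollectionTuple, and the bucket paths, event types and JSON parsing below are hypothetical.

    # Multi-output clickstream routing sketch (Apache Beam Python SDK).
    # Hypothetical paths, event types and parsing; the production pipeline
    # described above used the Java Dataflow SDK with PCollectionTuple.
    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    KNOWN_EVENT_TYPES = ('page_view', 'add_to_cart')


    class RouteByEventType(beam.DoFn):
        """Parse a raw clickstream log line and route it to a per-event-type output."""

        def process(self, line):
            event = json.loads(line)
            tag = event.get('event_type')
            if tag in KNOWN_EVENT_TYPES:
                yield beam.pvalue.TaggedOutput(tag, event)
            else:
                yield event  # everything else lands on the main output


    def run():
        with beam.Pipeline(options=PipelineOptions()) as p:
            routed = (
                p
                | 'ReadLogs' >> beam.io.ReadFromText('gs://example-bucket/clickstream/*.log')
                | 'Route' >> beam.ParDo(RouteByEventType()).with_outputs(
                    *KNOWN_EVENT_TYPES, main='other')
            )
            # further transforms (e.g. converting timestamps to PST) would be applied here
            for tag in KNOWN_EVENT_TYPES:
                (routed[tag]
                 | 'Format %s' % tag >> beam.Map(json.dumps)
                 | 'Write %s' % tag >> beam.io.WriteToText(
                     'gs://example-bucket/clickstream-by-type/%s/part' % tag))


    if __name__ == '__main__':
        run()

The per-event-type files written back to GCS can then be picked up by the separate bq load step mentioned in step 5.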

The next step for us is to identify other scenarios which are a good fit for transitioning to Cloud Dataflow: scenarios with characteristics similar to clickstream, i.e. high-volume, unstructured data not requiring heavy correlation with other datasets.

Building zulily’s big data platform

In July 2014 we started our journey toward building a new data platform that would allow us to use big data to drive business decisions. I would like to start with a quote from our 2015 Q2 earnings that was highlighted in various press outlets, and then share how we built a data platform that allows zulily to make decisions which were near impossible to make before.

“We compared two sets of customers from 2012 that both came in through the same channel, a display ad on a major homepage, but through two different types of ads,” [Darrell] Cavens said. “The first ad featured a globally distributed well-known shoe brand, and the second was a set of uniquely styled but unbranded women’s dresses. When we analyze customers coming in through the two different ad types, the shoe ad had more than twice the rate of customer activations on day one. But 2.5 years later, the spend from customers that came in through the women dresses ad was significantly higher than the shoe ad with the difference increasing over time.” –

Our vision is for every decision, at every level in the organization, to be driven by data. In early 2014 we realized the data platform we had, a combination of a SQL Server database for warehousing primarily structured operational data plus a Hadoop cluster for unstructured data, was too limiting. We started by defining core principles for our new data platform (v3).

Principles for new platform

  1. Break silos of unstructured and structured data: One of the key issues with our old infrastructure was that we had unstructured data in a Hadoop cluster and structured data in SQL Server, which was extremely limiting. Almost every major decision required analytics that correlated usage patterns on the site (unstructured) with demand, shipments or other transactions (structured). We had to come up with a solution that allowed us to correlate these different types of datasets.
  2. High scale at the lowest possible cost: We already had a Hadoop cluster, but our data was growing exponentially, and as storage needs grew we had to keep adding nodes to the cluster, which increased cost even when compute was not growing at the same pace. In short, we were paying at the rate of compute (which is far higher) at the growth rate of storage. We had to break this. Additionally, we wanted to build a highly scalable system which would support our growth.
  3. High availability, or extremely low recovery time, with an eye on cost: Simply put, we wanted high availability but did not want to pay for it.
  4. Enable real-time analytics: The existing data platform had no concept of real-time data processing, so anytime users needed to analyze things in real time they had to be given access to production operational systems, which in turn increases risk. This had to change.
  5. Self service: Our tools and systems/services need to be easy for our users to use, with no involvement from the data team.
  6. Protect the platform from users: Another challenge in the existing system was the lack of isolation between core data platform components and user-facing services. We wanted to make sure that in the new platform individual users could not bring down the entire system.
  7. Service (data) level agreements (SLAs): We wanted to be able to give SLAs for our datasets to our users, covering data accuracy and freshness. In the past this was a huge issue due to the design of the system and the lack of tools to monitor and report SLAs.

zulily data platform (v3)

The zulily data platform includes multiple tools, platforms and products. The combination is what makes it possible to solve complex problems at scale.

There are six key components in the stack:

  1. Centralized big data storage built on Google Cloud Storage (GCS). This allows us to have all our data in one place (principle 1) and share it across all systems and layers of the stack.
  2. The big data platform is primarily for batch processing of data at scale. We use Hadoop on Google Compute Engine for our big data processing.
    • All our data is stored in GCS, not HDFS; this enables us to decouple storage from compute and manage cost (principle 2).
    • Additionally, the Hadoop cluster is transient for us, as it holds no data: if our cluster completely goes away, we can build a new one on the fly. We think about our cluster as a transient entity (principle 3).
    • We have started looking at Google Dataflow for specific scenarios, but more on that soon. Our goal is to make all data available for analytics anywhere from 30 minutes to a few hours, based on the SLA.
  3. The real-time data platform is a zulily-built stack for high-scale data collection and processing (principle 4). We now collect more than 10k events per second, and that number is growing fast as we see value through our decision portal and other scenarios.
  4. The business data warehouse, built on Google BigQuery, enables us to provide a highly scalable analytics service to hundreds of users in the company.
    • We push all our data, structured and unstructured, real-time and batch, into Google BigQuery, breaking down all data silos (principle 1).
    • It also enables us to keep our big data platform and data warehouse completely isolated (principle 6), making sure issues in one environment don’t impact the other system.
    • Keeping the interface (SQL) the same for analysts between the old and new platforms lowered the barrier to adoption (principle 5).
    • We also have a high-speed, low-latency data store that hosts part of our business data warehouse for access through APIs.
  5. The data access and visualization component enables business users to make decisions based on the information available.
    • The real-time decision portal, zuPulse, gives the business insights in real time.
    • Tableau is our reporting and analytics visualization tool. Business users use it to create reports for their daily execution. This enables our engineering team to focus on core data and high-scale predictive scenarios, and makes reporting self-service (principle 5).
    • ZATA, our data access service, enables us to expose all our data through an API. Any data in the data warehouse or the low-latency data store can be automatically exposed through ZATA. This enables various internal applications at zulily to show critical information to users, including our vendors, who can see their historical sales on the vendor portal.
  6. Core data services are the backbone of everything we do, from moving data to the cloud to monitoring and making sure we abide by our SLAs. We continuously add new services and enhance existing ones. The goal of these services is to make developers and analysts across other areas more efficient.
    • zuSync is our data synchronization service, which enables us to get data from any source system and move it to any destination. A source or destination can be an operational database (SQL Server, MySQL, etc.), a queue (RabbitMQ), Google Cloud Storage (GCS), Google BigQuery, a service (http/tcp/udp), an ftp location…
    • zuflow is our workflow service built on top of Google BigQuery; it allows analysts to define and schedule query chains for batch analytics. It can also notify users and deliver data in different formats.
    • zuDataMon is our data monitoring service, which allows us to make sure data is consistent across all systems (principle 7). This helps us catch issues with our services or with the data early. Today more than 70% of issues are identified by our monitoring tool rather than reported by users; our goal is to get this number above 90%.

This is a very high level view of what our data@zulily team does. Our goal is to share more often, with a deeper dive on our various pieces of technology, in the coming weeks. Stay tuned…

Partnering with zulily – A Facebook Perspective

During F8, Facebook’s global developer conference (March 25-26, 2015), Facebook announced a new program with zulily as one of the launch partners: Messenger for Business. Just a month later, the service was live and in production! We on the zulily engineering team truly enjoyed the partnership with Facebook on this exciting product launch. It’s been a few months since the launch and we reached out to Rob Daniel, senior Product Manager at Messenger, to share his team’s experience on collaborating with zulily.

Here’s what Rob had to say:

How Messenger for Business Started

Messenger serves over 700 million active people with the mission of reinventing and improving everyday communication. We invest in performance, reliability, and the core aspects of a trusted and fast communication tool. Furthermore, we build features that allow people to better express themselves to their friends and contacts, including through stickers, free video and voice calling, GIFs, and other rich media content. We build the product to be as inclusive as possible – even supporting people that prefer to register for Messenger without a Facebook account – because we know that the most important feature of a messaging app is that all your friends and anyone you’d want to speak with is reachable, instantly.

In that vein, we found that some of people’s most important interactions are not just with friends and family, but with businesses they care about and interact with almost every day. However, until recently Messenger didn’t fully support a robust channel for communicating with these businesses. And when we looked at how people interact with businesses today, whether email, phone (typically…IVR phone trees) or text, there were obvious drawbacks. Either the channels were noisy (email), lacked richness (text), or were just really heavyweight and inefficient (voice).

Reinventing How People Communicate with Businesses

When we started talking with zulily about a partnership to reinvent how people communicate with businesses, under the mission of creating a delightful and high utility experience, we found we met an incredible match.

Why? Simply put, zulily thinks about Mom first. That’s the same people-centric approach to building products that’s at the heart of Messenger. And along the way, as we scoped out the interactions for launch, it was evident that we had the same mindset: build a product with incredible utility and value to people, establish a relationship and build trust, and over time build interactions that pay dividends via higher customer loyalty and lifetime value. And despite time pressures and the many opportunities to put business values over people values, we collectively held the line that people were first, business was second.

We also found that “zulily Speed” was no joke, and matched the Facebook mantra of “Move Fast.” Up to and through our announcement at F8 (Facebook’s two-day developer conference) and the launch later the next month, our two teams moved in sync at incredible speed. As an example, late one evening an issue was spotted by a Messenger engineer, and by 10:00 am the next morning the problem had been identified, fixed and pushed to production. That kind of turnaround time and coordination is just unheard of between companies at zulily’s and Facebook’s scale.

Speaking to the strength of the individual engineers on both sides, team sizes were small. This led to quick decision making and efficient, frequent communications between teams, forming a unique bond of trust and transparency. Despite the distance between Seattle and Menlo Park, we rarely had timing miscues and latency was incredibly low.

This Journey is 1% Finished

At Facebook we say “this journey is 1% finished,” highlighting that regardless of our past accomplishments we have a long journey ahead, and that we owe people that use our services so much more. And in that same spirit, we respect that Messenger is only beginning to provide zulily and other businesses the communication capabilities they need to form lasting, trusted, and valuable relationships with their customers. But we’re thrilled to be teamed up with a company, like zulily, that has the commitment and vision alignment to help shape the experience along the way and see the opportunity along the path ahead.