MageMojo News

Articles related to news about us :)
  • Stratus: The Legendary Journey

    The launch and improvement of Mojo Stratus has been a bumpy road. Stratus launched just before Meet Magento New York; it was our major release, plastered on every wall at MMNYC. We wanted to do something innovative and different with Mojo Stratus.

    Stratus is different. Rather than continuing down the path of traditional server offerings – i.e. you get a server, things are installed on it, and you have one big monolithic piece of hardware running whatever you need – we decided to use containers. Several years ago we had looked at Docker and containers for developing our panel, Mojo Host Manager. Back then, containers were an unstable parlor trick: great for production if you were OK with your production site constantly being on fire.

    Support for containers is now widespread, especially since Google released its Kubernetes technology. We decided to use containers to build the services for Mojo Stratus – all the services your average Magento 2 store would need – and our initial release worked despite several issues. We tweaked, and tweaked, and got through the Thanksgiving sales season unscathed. Then, in December, major systemic issues began to appear with no obvious explanation.

    First, we saw database problems. On Mojo Stratus, Amazon Aurora hosts all the databases. Aurora's main strength is scaling out read replicas. If you have ever tried to set up your own MySQL master-slave replication or other DIY clustering, you know it is not much fun. Aurora makes this easy, and we wanted read replicas for future scaling. It is still MySQL, though, and subject to the same problems you would expect from high usage and other bugs in MySQL. What we saw were patterns of locks in MySQL that would freeze all transactions on all stores for a few seconds, followed by a huge spike in active connections as all the traffic on Stratus backed up into Aurora. Site alarms would go off, the sky would fall, customers noticed, cats and dogs living together, mass hysteria. We needed deeper insight into Aurora, and fast.
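
    For illustration, here is the kind of quick-and-dirty probe that shows whether transactions are stacking up behind row locks. It is a minimal sketch, assuming a MySQL 5.6/5.7-compatible Aurora endpoint and the pymysql driver; the host and credentials are placeholders, not our real setup.

        # Minimal lock-wait probe (illustrative sketch, not our production tooling).
        # Assumes a MySQL 5.6/5.7-compatible Aurora endpoint and the pymysql driver.
        import pymysql

        conn = pymysql.connect(host="aurora-cluster.example.com",  # placeholder host
                               user="monitor", password="secret",
                               database="information_schema")

        with conn.cursor() as cur:
            # Which transactions are waiting on row locks, and who is blocking them?
            cur.execute("""
                SELECT r.trx_id    AS waiting_trx,
                       r.trx_query AS waiting_query,
                       b.trx_id    AS blocking_trx,
                       b.trx_query AS blocking_query
                FROM innodb_lock_waits w
                JOIN innodb_trx r ON r.trx_id = w.requesting_trx_id
                JOIN innodb_trx b ON b.trx_id = w.blocking_trx_id
            """)
            for row in cur.fetchall():
                print(row)
        conn.close()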

    Getting insight into Aurora was not easy. We needed something pre-built. Basic stats from AWS, or even stats you pull yourself from the MySQL engine, are not useful on their own. That isn't a fault in Aurora; for an application-specific problem you want to see exactly which queries are happening during a failure event. After some trial and error, we came across Vivid Cortex (https://www.vividcortex.com/) and hooked it into Stratus. Vivid Cortex provides tons of information about what queries are running and helped us answer questions like:

    • What queries are running?
    • How often do certain queries run?
    • Which databases run certain queries the most?
    • Which queries are the most time-consuming?

    We can't give Vivid Cortex enough love for watching database performance. (For a rough do-it-yourself approximation, see the sketch below.)
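
    If you do not have a hosted profiler, MySQL's own statement digests can approximate those answers. A minimal sketch, assuming performance_schema is enabled on the instance (host and credentials are again placeholders):

        # Approximate "top queries" stats from performance_schema statement digests.
        # Illustrative sketch; assumes performance_schema is enabled.
        import pymysql

        conn = pymysql.connect(host="aurora-cluster.example.com",  # placeholder
                               user="monitor", password="secret")
        with conn.cursor() as cur:
            cur.execute("""
                SELECT SCHEMA_NAME,                         -- which database runs it
                       DIGEST_TEXT,                         -- normalized query text
                       COUNT_STAR            AS executions, -- how often it runs
                       SUM_TIMER_WAIT / 1e12 AS total_secs  -- picoseconds -> seconds
                FROM performance_schema.events_statements_summary_by_digest
                ORDER BY SUM_TIMER_WAIT DESC
                LIMIT 10
            """)
            for row in cur.fetchall():
                print(row)
        conn.close()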

    After gathering a lot of data, we found a pattern. Stratus would lock up during certain types of queries, which occurred during certain actions on particular stores and locked everything up. On top of this, the Magento 2 crons were going haywire. Magento 2 has a bug (https://github.com/magento/magento2/issues/11002) where the cron_schedule table can inflate toward infinity. Crons start running all over each other, destroying your server in the process. Certain extensions ship particularly heavy cron tasks, whether out of necessity or because of inefficient code, and they all run at once.
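
    To make the bug concrete: a healthy cron_schedule table stays small because old history is pruned, but under this bug it balloons. A hypothetical health check along these lines (table and column names are stock Magento 2; connection details are placeholders):

        # Check for cron_schedule bloat from the Magento 2 bug referenced above.
        # Table/column names are stock Magento 2; connection details are placeholders.
        import pymysql

        conn = pymysql.connect(host="aurora-cluster.example.com",
                               user="app", password="secret", database="magento")
        with conn.cursor() as cur:
            # A healthy table holds hours of history; millions of rows means trouble.
            cur.execute("SELECT status, COUNT(*) FROM cron_schedule GROUP BY status")
            for status, count in cur.fetchall():
                print(f"{status}: {count}")

            # Emergency cleanup of stale entries (cron normally prunes these itself).
            cur.execute("""
                DELETE FROM cron_schedule
                WHERE scheduled_at < NOW() - INTERVAL 1 DAY
                  AND status <> 'running'
            """)
        conn.commit()
        conn.close()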

    With a bug causing locks, crons waging war on everything, and bad queries coming in, we had a recipe for poor performance, even with the massive resources of Amazon Aurora. We used multiple approaches to bring everything under control. First, we notified customers about problem extensions and site code. Second, we started limiting crons, ultimately creating our own extension to manage them. We have also pushed forward support for services like Elasticsearch; search in Magento should not run through MySQL if it can be avoided. And we are working on a MySQL reader extension for Magento 2 to take full advantage of Aurora's scaling.

    Those solutions helped, but we still had issues on the file system side, and they left us baffled: we had sailed through the busiest days of the season, Black Friday and Cyber Monday, without trouble. From the start we have been using ObjectiveFS, an amazing filesystem that stores data on S3 and pulls it locally. Files are cached to speed up performance; running anything directly off S3 would be very slow, especially for Magento, where thousands of files may be opened and read on a single request.

    ObjectiveFS would use a lot of CPU and spike iowait, affecting every customer. That issue started in December and had not been a problem in the months before. The iowait spikes became more frequent and severe, unrelated to any specific traffic, and we had to do something. We shopped around for other file system solutions and came across Weka.io, a high-performance file sharing solution that offers low latency and high throughput over the network. With most networked file systems, you can't get the low latencies needed for an application like Magento.

    Weka promised it all, with file system latencies in the microsecond range; well-known shared-storage technologies like Ceph all have times in the millisecond range. It used its own kernel driver and relied on i3 instances with NVMe storage. Again, when loading 1,000-plus files per page load, you need low latency – that's why the switch to SSDs was so important early on for MageMojo. It looked like a drop-in replacement file system, so we fired it up and got it working.
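
    The arithmetic behind that claim is simple: multiply per-file latency by the number of files a request touches. A quick back-of-the-envelope check (the 1,000-file figure comes from the paragraph above; the latencies are rough, assumed orders of magnitude):

        # Why per-file latency dominates Magento page loads: rough, assumed numbers.
        files_per_request = 1000  # typical Magento page load, per the text above

        for label, per_file_latency in [("local SSD/NVMe (~100 us)", 100e-6),
                                        ("millisecond network FS (~1 ms)", 1e-3)]:
            total_ms = files_per_request * per_file_latency * 1000
            print(f"{label}: {total_ms:.0f} ms of file I/O per request")
        # -> ~100 ms vs ~1000 ms: the difference between snappy and unusable.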

    The initial results were promising. Weka handled 300k file requests per second over the network without issues, and writes were no problem: you could write from one place and see the file nearly instantly from another source. Many frontend and backend parts of Magento write a file and then display it via an AJAX request (product image uploads, for example). On other systems, an image upload would work but the thumbnail would not appear, because the write had not propagated fast enough.
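
    A simple way to demonstrate that write-then-read behavior is to write through one mount and time how long the file takes to appear through another. A minimal sketch; the mount paths are hypothetical:

        # Probe write-to-read visibility across two mounts of a shared filesystem.
        # Mount paths are hypothetical placeholders.
        import os
        import time
        import uuid

        WRITER_MOUNT = "/mnt/shared-a"
        READER_MOUNT = "/mnt/shared-b"

        name = f"probe-{uuid.uuid4().hex}.tmp"
        start = time.monotonic()

        with open(os.path.join(WRITER_MOUNT, name), "w") as f:
            f.write("x")
            f.flush()
            os.fsync(f.fileno())  # make sure the write really left the page cache

        while not os.path.exists(os.path.join(READER_MOUNT, name)):
            time.sleep(0.001)

        print(f"file visible after {(time.monotonic() - start) * 1000:.1f} ms")
        os.remove(os.path.join(WRITER_MOUNT, name))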

    After more testing, we went ahead and moved everyone off ObjectiveFS to the Weka.io filesystem. We had a few issues with its configuration and worked with their team to get everything set up correctly. For a while, life was good. But Weka.io added significant latency to load times, even with its microsecond response over the network – on average about a full second compared to the original ObjectiveFS system. The load time was a trade-off for what we believed to be stability.

    In February, a few weeks after completing our migration to their filesystem, we had a critical failure on the Weka.io cluster. The system was designed to be redundant, so that two storage nodes could fail without data loss. In our case three nodes failed, putting the data on the ephemeral storage at risk of being unrecoverable. A bug in the Weka.io software caused the entire cluster to become unresponsive, and unfortunately we were never given a full explanation by the Weka.io team.

    We brought the stores online within 24 hours using an older copy of the data. In the following days we restored files where we could and helped bring stores back with their more recent data. We stabilized again on ObjectiveFS and got back to business. ObjectiveFS was not as bad as we recalled, now that we had fixed the other, Aurora-related issues. And not long before the Weka failure, we had learned about the Meltdown vulnerability.

    This is the real kicker on top of it all. Once Meltdown became public knowledge, we learned that Amazon had quietly patched all their systems in mid-December. The Meltdown patches coincided with the random systemic issues we had assumed were ObjectiveFS-specific. It was not until we went back to ObjectiveFS that we realized there could be a connection. We also had AWS Enterprise Support confirm the patching timeline; they had been under embargo not to reveal the vulnerability.

    In hindsight, that change severely impacted our file system performance. We now know the Meltdown patches hurt the specific load created by Magento, especially stat calls – and Magento makes thousands of them per request. Post Black Friday, multiple issues converged to create a suddenly unstable system. We failed to identify the cause correctly and tried to fix it with different technology, which was a major mistake on our part. A lot of sleepless nights went into paying off that debt.
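
    The stat-call sensitivity is easy to see for yourself. A rough microbenchmark (the per-request multiplier is an assumption in the spirit of the "thousands per request" figure above):

        # Rough stat() microbenchmark: the syscall pattern Meltdown mitigations
        # penalize, and the one Magento hammers hardest.
        import os
        import time

        path = "/etc/hostname"   # any existing file
        iterations = 100_000

        start = time.monotonic()
        for _ in range(iterations):
            os.stat(path)
        elapsed = time.monotonic() - start

        per_call_us = elapsed / iterations * 1e6
        stats_per_request = 2500  # assumed, per the "thousands per request" figure
        print(f"{per_call_us:.2f} us per stat() -> "
              f"{per_call_us * stats_per_request / 1000:.1f} ms per Magento request")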

    With the realization about Meltdown and a fresh look at ObjectiveFS, we resumed testing and making more tweaks. Performance was better, but not the best we had hoped for. More updates gave us incremental improvements. In the first iteration we used multiple ObjectiveFS mounts; each covered many stores on a given physical node, and those mounts existed on all workers in the Stratus cluster. As a store scaled out, the containers already had the files available, and requests would cache the files a container needed on the respective node over time. But with many stores sharing a mount, the cache sizes became very large relative to any one store, so any given request had to fetch its specific files from a large haystack. Testing confirmed this was a major bottleneck.

    For Stratus 2.5, the current generation, we moved to a single ObjectiveFS mount per store. Each store has its own file cache, local to the node running its containers, on disk and in memory. We launched Stratus 2.5 two weeks ago and it has solved every file system issue we had received complaints about, especially update slowness in the Magento admin. Site performance is faster than ever: according to our New Relic data, every store is 30% faster now, and stores with heavy file operations on load show even more improvement.

    We've also added a lesser-known feature called Stratus Cache. Stratus Cache adds most of your code base directly into the container images we use for scaling. It bypasses the file system for the majority of system calls, improving performance and making scaling for large sales a breeze. If you are planning a large promotion or traffic influx, please let us know and we'll help get that working for you.

    To contribute back to the community and improve Stratus, we've started making our own Magento 2 modules to address specific concerns we have about Magento 2 performance. Our first release is a complete rework of the cron system in Magento 2, on GitHub at https://github.com/magemojo/m2-ce-cron. By default, the Magento 2 crons can take a server down under the right conditions: they constantly fight each other and run the same task multiple times, which causes issues with stores and means vital cron tasks are missed. Our module eliminates that problem.
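
    The core idea behind the module is simple: never let two copies of the same job run at once. A minimal sketch of that idea in Python (the real module is PHP and lives at the link above; the lock path here is hypothetical):

        # Sketch of the "one runner per job" idea behind our cron module.
        # The real module is PHP (linked above); this only illustrates the concept.
        import fcntl
        import sys

        def run_exclusively(job_code, task):
            lock_path = f"/var/run/cron-{job_code}.lock"  # hypothetical lock location
            with open(lock_path, "w") as lock_file:
                try:
                    # Non-blocking exclusive lock: fails fast if the job is running.
                    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
                except BlockingIOError:
                    print(f"{job_code} is already running; skipping", file=sys.stderr)
                    return
                task()  # lock releases automatically when the file closes

        run_exclusively("indexer_reindex_all_invalid", lambda: print("reindexing..."))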

    Next we have our split-DB extension, viewable at https://github.com/magemojo/m2-ce-splitdb. Magento 1 CE let merchants easily use a master-slave database setup with a dedicated reader. Stratus uses Aurora, which scales by seamlessly running multiple readers in a cluster. Since M2 CE does not support this out of the box, we had to build our own solution. We believe Community should be able to scale just as well as Enterprise.
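
    In rough terms, the idea is to route read-only statements to Aurora's reader endpoint and everything else to the writer. A minimal sketch of that routing decision (endpoints are placeholders; the real extension does this inside Magento's database layer):

        # Sketch of Aurora read/write splitting: SELECTs go to the reader endpoint,
        # writes go to the cluster (writer) endpoint. Endpoints are placeholders.
        import pymysql

        WRITER = dict(host="mycluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
                      user="app", password="secret", database="magento")
        READER = dict(host="mycluster.cluster-ro-xxxx.us-east-1.rds.amazonaws.com",
                      user="app", password="secret", database="magento")

        def connect_for(sql: str):
            """Route read-only statements to replicas, everything else to the writer."""
            is_read = sql.lstrip().lower().startswith(("select", "show"))
            return pymysql.connect(**(READER if is_read else WRITER))

        # Usage: reads scale out across replicas while writes stay on the master.
        conn = connect_for("SELECT sku FROM catalog_product_entity LIMIT 5")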

    As we near Magento Imagine, we are working on improving the dev experience on Stratus. We provide free dev instances running the same CDN and stack as any production Stratus instance. Going forward, we want to include more tools, tests, and utilities to make a developer-friendly environment. The primary feature will be Live Preview: at the click of a button, customers can create exact copies of their production store, including the database. Developers can then go in, make changes, commit them, run tests, and push to production. Preview sites will be storable, so you can save different versions of the site and refer to them as needed. After the initial release of Live Preview, we will add tools to perform Selenium and unit tests.

    Stratus is now the premier platform for Magento hosting. Nothing can scale and run your Magento store better. We've come a long way, and we are grateful for our customers' patience. Now it's time to get back to business and stop worrying about your server.

  • Change Improvement Plan

    Many changes happen on our servers, from nightly yum updates to security scanner updates to server configuration updates. To improve our change management process, we promise to do the following:

    Classify change types into categories with specific notification requirements

    • Identify every change that is made and classify it
    • Write a change policy for each type of change
    • All changes must happen outside of business hours, Pacific Time
    • Green changes are routine maintenance and can be scheduled 12 hours in advance
    • Yellow changes might affect customers and must be scheduled 7 days in advance
    • Red changes will affect customers and must be scheduled 2 weeks in advance

    Standardize release days and times

    • Server releases will be tested on internal servers on Mondays during a standard time window, 4am to 8am
    • Server releases will be rolled out to customer servers on Tuesdays during a standard time window, 4am to 8am
    • Network changes will happen after 9pm

    Notify customers ahead of time

    • A status page has been created for both scheduled and unplanned events
    • Every customer's primary email, alert emails, Mojo developer emails, and additional emails are subscribed
    • Customers can also subscribe a phone number for text alerts
    • Customer emails and phone numbers are synced to all the status page components they use, including physical hosts and racks
    • Every time we touch the infrastructure, even for routine work, we will schedule it in advance based on its classification type

    Validate releases thoroughly

    • Server releases will be rolled out and tested on incremental groups of internal servers
    • Server releases will then be rolled out to incremental groups of customer servers

    Open communication during events

    • We will proactively notify affected customers
    • We will provide status updates every 30 minutes until resolution
    • For customers with special configurations (e.g., clusters) we will have a team lead available to communicate with directly

    Provide full transparency into all changes

    • Any change that applies to more than one customer will be listed on the status page
    • Nightly yum updates will be listed in the MHM log
    • Saltstack changes to servers will be listed in the MHM log
    • Commands run by internal and external server users will be listed in the MHM log
    • Server config change diffs will be listed in the MHM log
  • MAGETITANS UK 2016

    We recently hopped on a plane to the historic city of Manchester to rub shoulders with great names in the Magento world at MageTitansMCR in the UK. Technically, our CTO, Marty Pachol, hopped on a plane and promptly hopped off it, only to get on an annoying call with the airline that had cancelled his flight while it sat on the runway, ready to carry him to an amazing destination. The rest of our crew got to enjoy the posh hotel he so carefully selected and wait on his arrival the following day. The Principal on Oxford was a real treat, complete with a giant horse sculpture in the lobby, the most luxurious carpeting we have ever seen, and a shower that needed an instruction manual. Yes, at least two of us got cold water dumped on our jet-lagged heads from one of two shower heads placed directly overhead :) Isn't it incredible how many variations there are on a seemingly simple contraption? This begs for an entire blog post, website, coffee table book...

    We greeted Manchester with a trip past the University of Manchester down to the Curry Mile, which came highly recommended. We were not short on options, were happy to experiment with yummy food in a bustling atmosphere, and encouraged other Titans to head over. We were silenced as the smell of curry filled the air; we probably all had one thought in mind. A nap was in order to prepare us for the pre-party at Lock 91, Black Dog NWS, Grosvenor Casino and... well, you get the idea :) We met amazing folks at Lock 91 who were happy to share their experiences with Magento, the UK, and their amazing companies, and to prepare us for the day ahead. We had lots of laughs comparing American and English vocabulary; the funniest moment was one of our English colleagues referring to the popular rap artist Jay Zed (Jay Z). We also got to witness England beating Scotland in a nail-biting game. Lock 91's happy hour was helpful indeed.

    Back at the Principal, one of our rooms had a heating malfunction, which meant a move to a new room. Surprise, surprise: the shower controls were reversed in the new room, so, you guessed it, another cold overhead splash. Now if that doesn't wake you up..! Then it was over to the Comedy Club for day 2 of this adventure, MageTitansMCR 2016. Great talks, with lots of follow-up scheduled, from a star-studded cast including Fabian Schmengler, Fabrizio Branca and Anna Völkl. Space48 and Manchester Digital did a great job with an amazing speaker lineup. The Comedy Club also has balcony seating – a great way to make sure you get a good view and pics too. We thought it was fitting to end an amazing event by sponsoring the afterparty at Black Dog's Bunker. Great food and music, a gaming station, and a giveaway of 20 Raspberry Pi 3s made for an exciting few hours.

    Here are a few of our lucky winners.

    The Photo Booth came with props and allowed rockstar Magento developers to truly look like rockstars and capture this for all time. Like it or not, we have all the pictures, hahaha.

    We simply love after parties. This is a great time to unwind and truly get to know the Magento community. We have made some great friends. We also have more insight on who can stay standing the longest, but we will not mention names, Eric Hileman.
    We said goodnight to the Bunker at midnight, but the party was not over – on to the after-afterparty at the Arora Hotel bar. Time for some grown-up cocktails made by the self-proclaimed Mr Awesome! He truly was an incredible bartender with a great personality. We wish we could have stayed longer and spent more time in Manchester. Events like MageTitans bring together so many experts, and there is never enough time to share experiences and insights. It was wonderful to get to know these folks better. We truly look forward to hearing from them more often, even if it's just a random note to say hi. What an amazing community!


  • MEET MAGENTO VIETNAM 2016


    The second Meet Magento Vietnam conference took us to Ho Chi Minh City in October of 2016.

    We were grateful for a relatively uneventful trip through Shanghai, but we left the airport much lighter than expected – our luggage had not yet arrived! Not that we could have cleared the customs check anyway: the airport had a power outage which took all of the equipment down. That guaranteed a quick exit from the airport, without local currency, since the ATMs went down with the power. Yay for cellphones – we were still able to get an Uber to our hotel. The complimentary dressing gown in the closet was a blessing, since it was the only clean piece of clothing we had :)

    We finally got some Vietnamese dong and were instant millionaires. With the exchange rate at about 22,500 to the dollar, we were walking around with more than a million dong! We were all set for a colorful trip to the local markets to hunt for anything worthy of a conference attendee. We would have been happier exploring the city and meeting others in town for the event, but it was intriguing to pick up some local traditional wear for the event the next day. Besides, traditional gear was the only practical choice – it seemed that dresses in Vietnam were typically shorter and quite comical on someone of my height. The best items are usually ordered in advance and tailor-made to fit; no time for that, though. The quick and easy tourist options would have to do. I got all primped the next morning with my ao dai over a pair of jeans, only to get rained on on the way to the event. All that trouble, only to arrive looking disheveled anyway!

    It was amazing to meet a whole new subset of the Magento community, as well as lots of really keen students. We made great friends who also made wonderful networkers and tour guides to Ho Chi Minh City, its hottest club spots, dinner haunts and, of course, coffee shops. Vietnamese filter coffee is not for the faint-hearted! This flavorful, strong brew is bound to keep you alert for many hours! One of the highlights of this event was getting to know Thomas Goletz and watching him interact with the community – truly inspiring. Pictured here is Tra My Nguyen, one of the conference organizers, towering over Thomas Goletz. The conference venue was well set up, with enormous screens that were great for large audiences.

    Since we commented on the showers in our blog post covering MageTitans UK, we felt it fitting to offer a detailed account of the bathrooms we encountered in Vietnam. They are often appropriately called wet bathrooms: sure enough, taking a shower gets everything wet. Bathtubs and shower cubicles are not commonplace; many shower areas we encountered doubled as the area with the toilet and basin, with the shower head simply protruding from the wall – a really efficient use of space.

    We made sure we had time to explore the many charms of this amazing country. Apart from museums, street food and cafes, we wandered over to Bui Vien, a densely populated tourist and backpacker area which appeared to be alive all night. It was a great place to hang out outdoors and people-watch, and a great place to make friends and enjoy local delicacies prepared in front of you in a matter of minutes. We very quickly found our favorite locales for great coffee, meals and cocktails, and we were always amazed at the enterprising, friendly and hardworking nature of the folks there, whom we miss dearly. A warning, though: the electrical wiring would make anyone cringe. No, this is not your typical data-center wiring standard!

    Hahn, at the restaurant a few doors down from our hotel, was always happy to see us and kept surprising us with amazing dishes that were not even on her menu. She was even happy to leave her restaurant just to walk with me through winding alleyways off the main drag so we could find fresh fruit.

    I was able to establish a routine in the craziness of Bui Vien as if I had always been there. We found a charming hotel managed by two young men who filled the roles of concierge, porter, hotel reception, tour and travel office, cleaning staff, and all-round great guys. Our first-floor room was carefully chosen, with large windows, so we could always feel like we were a part of the daily activities, which were distinctively different at various parts of the day. The breakfast period was perhaps the calmest, but by no means quiet. Various storefronts and roadside vendors popped up almost everywhere, with motorcycles, bicycles, taxis, Ubers, and pedestrians buzzing around with no clear demarcation of sidewalk and roadway. Stores and street vendors set up little plastic stools or a rolling cart on the roadside for a breakfast of banh mi (sandwiches), soup, rice porridge or Vietnamese omelettes. Many of these vendors vanished until lunchtime and then vanished again until dinner, not that there was ever a shortage of delectable food options or a welcoming smile. We once popped our unannounced heads into a language institute that seemed to be preparing for Halloween. The students welcomed us as if they were expecting guests; we wandered through the scary maze they were setting up and left after great conversations and lots of pictures.

    We were always amazed by the incredible friendliness and perseverance of folks everywhere. One of our favorite things was offering larger-than-usual tips to various people when the opportunity arose. It was so easy to communicate with as little as a smile and gratitude, so it was wonderful to add a token of appreciation. The exchange rate was so heavily in our favor that what is a small token of gratitude in the US seemed to make someone's day in Vietnam. Our oarsman in Ha Long Bay certainly made our day, as did everyone we met in business and social interactions. He was great at finding the hot picture spots and directing us into appropriate poses using hilarious gestures. He would maneuver the boat with an oar in one hand while taking pictures of us with the other, and seemed skilled at using absolutely any phone or camera handed to him while still rowing with one arm :) Much to the amusement of us all, he also scolded me for my meek attempt at a smile when he tried to take my picture :D

    One rule of survival was to quickly learn how to safely cross the street. I thought I had conquered this as a kid :D Stepping into traffic and maintaining a constant pace was safest, so motorists could anticipate your next move. The throng of motorbikes, bicycles and larger vehicles simply buzzed around pedestrians, and we did not observe a single accident involving a pedestrian. It amazed us that even large and fragile items like construction material, floral bouquets, and multiple trays of eggs were all safely transported by bike. This was perhaps a common theme: things that might have seemed daunting or impossible back home were regular occurrences in daily life. We were sad to leave all this behind and hope to return many times. We keep in touch with friends made along the way and look forward to visiting in the future. Perhaps someday we can host some of them in the US.


