
Exploring container security: An overview


Containers are increasingly being used to deploy applications, and with good reason, given their portability, simple scalability and lower management burden. However, the security of containerized applications is still not well understood. How does container security differ from that of traditional VMs? How can we use the features of container management platforms to improve security?

This is the first in a series of blog posts that will cover container security on Google Cloud Platform (GCP), and how we help you secure your containers running in Google Kubernetes Engine. The posts in the series will cover the following topics:
  • Container networking security 
  • New security features in Kubernetes Engine 1.10
  • Image security 
  • The container software supply chain 
  • Container runtime security 
  • Multitenancy 
Container security is a huge topic. To kick off the series, here’s an overview of container security and how we think about it at Google.

At Google, we divide container security into three main areas:
  1. Infrastructure security, i.e., does the platform provide the necessary container security features? This is how you use Kubernetes security features to protect your identities, secrets, and network; and how Kubernetes Engine uses native GCP functionality, like IAM, audit logging and networking, to bring the best of Google security to your workloads. 
  2. Software supply chain, i.e., is my container image secure to deploy? This is how you make sure your container images are vulnerability-free, and that the images you built aren't modified before they're deployed. 
  3. Runtime security, i.e., is my container secure to run? This is how you identify a container acting maliciously in production, and take action to protect your workload.
Let’s dive a bit more into each of these.

Infrastructure security

Container infrastructure security is about ensuring that your developers have the tools they need to securely build containerized services. This covers a wide variety of areas, including:
  • Identity, authorization and authentication: How do my users assert their identities in my containers and prove they are who they say they are, and how do I manage these permissions?
    • In Kubernetes, Role-Based Access Control (RBAC) allows the use of fine-grained permissions to control access to resources such as the kubelet. (RBAC has been enabled by default since Kubernetes 1.8.)
    • In Kubernetes Engine, you can use IAM permissions to control access to Kubernetes resources at the project level. You can still use RBAC to restrict access to Kubernetes resources within a specific cluster.
  • Logging: How are changes to my containers logged, and can they be audited?
    • In Kubernetes, Audit Logging automatically captures API audit logs. You can configure audit logging based on whether the event is metadata, a request or a request response.
    • Kubernetes Engine integrates with Cloud Audit Logging, and you can view audit logs in Stackdriver Logging or in the GCP Activity console. The most commonly audited operations are logged by default, and you can view and filter these.
  • Secrets: How does Kubernetes store secrets, and how do containerized applications access them?
  • Networking: How should I segment containers in a network, and what traffic flows should I allow?
    • In Kubernetes, you can use network policies to specify how to segment the pod network. When created, network policies define with which pods and endpoints a particular pod can communicate.
    • In Kubernetes Engine, you can create a network policy, currently in beta, and manage these for your entire cluster. You can also create Private Clusters, in beta, to use only private IPs for your master and nodes.
These are just some of the tools that Kubernetes uses to secure your cluster the way you want, making it easier to maintain the security of your cluster.
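Two of the building blocks above, RBAC and network policies, are both expressed as ordinary Kubernetes manifests. Here is a minimal sketch; the names, namespace, and labels (`pod-reader`, `jane@example.com`, `app: db`, `app: api`) are hypothetical placeholders, not taken from the post:

```yaml
# A namespaced Role granting read-only access to pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a single user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
# A NetworkPolicy segmenting the pod network: only "api" pods
# may open connections to "db" pods; all other ingress is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: default
  name: allow-api-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
```

Applying these with `kubectl apply -f` gives you least-privilege API access and default-deny traffic to the selected pods, respectively.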

Software supply chain 

Managing the software supply chain, including container image layers that you didn't create, is about ensuring that you know exactly what’s being deployed in your environment, and that it belongs there. In particular, that means giving your developers access to images and packages that are known to be free of vulnerabilities, to avoid introducing known vulnerabilities into your environment.

A container runs on a server's OS kernel but in a sandboxed environment. A container's image typically includes its own operating system tools and libraries. So when you think about software security, there are in fact many layers of images and packages to secure:
  • The host OS, which is running the container 
  • The container image, and any other dependencies you need to run the container. Note that these are not necessarily images you built yourself—container images included from public repositories like Docker Hub also fall into this category 
  • The application code itself, which runs inside the container. This is outside of the scope of container security, but you should follow best practices and scan your code for known vulnerabilities. Be sure to review your code for security vulnerabilities and consider more advanced techniques such as fuzzing to find vulnerabilities. The OWASP Top Ten web application security risks is a good resource for knowing what to avoid. 
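One common way to ensure that the image you deploy is exactly the image you built and scanned is to reference it by its immutable content digest rather than a mutable tag. A minimal, hypothetical pod spec sketch (the project, image name, and digest are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    # Pinning by digest means the image cannot be silently
    # swapped out under the same tag after you vetted it.
    image: gcr.io/my-project/web@sha256:<digest-of-scanned-image>
```

Tags like `:latest` can be re-pointed at a different image at any time; a digest can only ever refer to one exact set of layers.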

Runtime security 

Lastly, runtime security is about ensuring that your security response team can detect and respond to security threats to containers running in your environment. There are a few desirable capabilities here:
  • Detection of abnormal behaviour from the baseline, leveraging syscalls, network calls and other available information 
  • Remediation of a potential threat, for example, via container isolation on a different network, pausing the container, or restarting it 
  • Forensics to identify the event, based on detailed logs and the containers’ image during the event 
  • Run-time policies and isolation, limiting what kinds of behaviour are allowed in your environment 
All of these capabilities are fairly nascent across the industry, and there are many different ways today to perform runtime security.
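The last capability in the list, run-time policy and isolation, can be expressed directly in the pod spec through a `securityContext`. A minimal sketch, assuming a hypothetical image name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: locked-down
spec:
  containers:
  - name: app
    image: gcr.io/my-project/app:1.0  # hypothetical image
    securityContext:
      runAsNonRoot: true              # refuse to start as UID 0
      readOnlyRootFilesystem: true    # block writes to the image filesystem
      allowPrivilegeEscalation: false # no setuid/sudo-style escalation
      capabilities:
        drop: ["ALL"]                 # shed every Linux capability
```

Constraints like these shrink the set of behaviours a compromised container can exhibit, which also makes anomalous behaviour easier to detect.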

A container isn’t a strong security boundary 

There’s one myth worth clearing up: containers do not provide an impermeable security boundary, nor do they aim to. They provide some restrictions on access to shared resources on a host, but they don’t necessarily prevent a malicious attacker from circumventing these restrictions. Although both containers and VMs encapsulate an application, a container is a boundary for the application alone, while a VM is a boundary for the application and its resources, including resource allocation.

If you're running an untrusted workload on Kubernetes Engine and need a strong security boundary, you should fall back on the isolation provided by the Google Cloud Platform project. For workloads sharing the same level of trust, you may get by with multi-tenancy, where a container runs on the same node as other containers, or on another node in the same cluster.

Upcoming talks at KubeCon EU 

In addition to this blog post series, we’ll be giving several talks on container security at KubeCon Europe in Copenhagen. If you’ll be at the show, make sure to add these to your calendar.
Note that everything discussed above is really just focused at the container level; you still need a secure platform underlying this infrastructure, and you need application security to protect the applications you build in containers. To learn more about Google Cloud’s security, see the Google Infrastructure Security Design Overview whitepaper.

Stay tuned for next week’s installment about image security!


Docker is Dead


To say that Docker had a very rough 2017 is an understatement. Aside from Uber, I can’t think of a more utilized, hyped, and well funded Silicon Valley startup (still in operation) fumbling as badly as Docker did in 2017. People will look back on 2017 as the year Docker, a great piece of software, was completely ruined by bad business practices leading to its end in 2018. This is an outside-facing retrospective on how and where Docker went wrong and how Docker’s efforts to fix it are far too little, way too late.

Subscribe to DevOps’ish for updates on Docker as well as other DevOps, Cloud Native, and Open Source news.

Docker is Good Software

To be clear, Docker has helped revolutionize software development. Taking Linux primitives like cgroups, namespaces, process isolation, etc. and putting them into a single tool is an amazing feat. In 2012, I was trying to figure out how development environments could be more portable. Docker’s rise allowed a development environment to become a simple, version-controllable Dockerfile. The tooling went from Packer, Vagrant, VirtualBox, and a ton of infrastructure to Docker. The Docker UI is actually pretty good too! It’s a good tool with many applications. The folks on the Docker team should be very proud of the tooling they built.

Docker is a Silicon Valley Darling

Docker’s early success led to the company building a big community around its product. That early success fueled funding round after funding round. Well known investors like Goldman Sachs, Greylock Partners, Sequoia Capital, and Insight Venture Partners lined up to give truckloads of money to Docker. To date, Docker has raised capital investments totaling between $242 million and over $250 million.

But, like most well funded, win-at-all-costs startups of the 2010s, Docker made some human resources missteps. Docker has protected some crappy people along its rise. This led to my personal dislike of the company’s leadership. The product is still quality but it doesn’t excuse the company’s behavior AT ALL. Sadly, this is the case for a lot of Silicon Valley darlings and it needs to change.

Kubernetes Dealt Damage to Docker

Docker’s doom has been accelerated by the rise of Kubernetes. Docker did itself no favors in its handling of Kubernetes, the open source community’s darling container orchestrator. Docker’s competing product, Docker Swarm, was the only container orchestrator in Docker’s mind. This decision was made despite Kubernetes preferring Docker containers at first. Off the record, Docker Captains confirmed early in 2017 that Kubernetes discussions in articles, at meetups, and at conferences were frowned upon by Docker.

Through dockercon17 in Austin this Kubernetes-less mantra held. Then, rather abruptly, at dockercon EU 17 Docker decided to go all in on Kubernetes. The sudden change was an obvious admission to Kubernetes’ rise and impending dominance. This is only exacerbated by the fact that Docker sponsored and had a booth at KubeCon + CloudNativeCon North America 2017.


No one understood what Docker was doing in April at dockercon17 when it announced Moby. Moby is described as the new upstream for the Docker project. But, the rollout of Moby was not announced in advance. It was as if millions of voices suddenly cried out in terror when the drastic shift from Docker to Moby occurred on GitHub as Solomon Hykes was speaking at dockercon17. This drastic and poorly thought through change required intervention from GitHub staff directly.

Not only was the change managed poorly, the messaging was given little consideration as well. This led to an apology and later hand drawn explanations of the change. This further muddies the already cloudy container space and Docker (or is it Moby?) ecosystem. The handling of the Moby rollout continues to baffle those working in the industry. The Docker brand is likely tarnished due to this.

The Cold Embrace of Kubernetes

Docker’s late and awkward embrace of Kubernetes at the last possible moment is a sign of an impending downfall. When asked if Docker Swarm was dead, Solomon Hykes tweeted, “Docker will continue to support both Kubernetes and Swarm as first-class citizens, and encourage cross-pollination. Openness and choice create a healthier ecosystem for everyone.” The problem here is that Docker Swarm isn’t fully baked and is quite far from it. The Docker Swarm product team and its handful of open source contributors will not be able to keep up with the Kubernetes community. As good as the Docker UI is, the Kubernetes UI is far superior. It’s almost as if Docker is conceding itself to being a marginal consulting firm in the container space.


The real problem with Docker is a lack of coherent leadership. There appears to have been a strategic focus around a singular person in the organization. This individual has been pushed further and further away from the core of the company but still remains. The company has reorganized and has shifted its focus to the enterprise. This shift makes sense for Docker’s investors (the company does have a fiduciary responsibility after all). But, this shift is going to reduce the brand’s cool factor that fueled its wild success. It is said that, “Great civilizations are not murdered. They commit suicide.” Docker has done just that.

Bonus: Conspiracy Theory

Conspiracy Theory: Docker knows it is over for them. The technical folk decided to roll out Moby drastically and embraced Kubernetes suddenly to make sure their work still lives on. #Docker #DevOps

— Chris Short (@ChrisShort) December 29, 2017

I floated out a theory on Twitter about the awkward moments for Docker in 2017. It is possible Docker knows the end is near for the company itself. As organizational changes have indicated a pending exit (likely through acquisition), the technical core of the company prioritized some changes. Donating containerd to CNCF, making Moby the upstream of Docker, and embracing Kubernetes will immortalize the good work done by the folks at Docker. This allows a large organization like Oracle or Microsoft to come along and acquire the company without worrying about the technological advances made by Docker employees being locked behind licenses. This provides the best of both worlds for the software teams and company itself. Needless to say, 2018 will be an interesting year for Docker.

Subscribe to DevOps’ish for updates on Docker as well as other DevOps, Cloud Native, and Open Source news.

See Also


Saturday Morning Breakfast Cereal - Language


Click here to go see the bonus panel!

It gets really bad when they start using loops instead of actively engaging in conversation.

New comic!
Today's News:

3 weeks left to submit your proposal for BAHFest MIT or BAHFest London!


How the climbing app Rakkup could provide backcountry skiers with fast, mobile guidebook access


Exploring new territory often requires finding a source for good beta. Maps, routes and terrain intricacies all present questions that even local experts need at times. And with modern technology, there are a number of resources available to help make the process easier. But it can be tricky to wade through the abundance of digital observation data and the latest avalanche apps. Enter Rakkup, an application that takes guidebook content and modifies it to be mobile.

Why haven’t you heard of Rakkup until now? It has, until recently, been geared primarily toward climbers. But Nate Greenberg, a dedicated tele-skier based in Mammoth Lakes, Calif., saw this as an opportunity.

Husted navigates the skintrack with the new Rakkup app. [Photo] Nate Greenberg

In 2006, Greenberg helped found the Eastern Sierra Avalanche Center and now sits as president of the board of directors. He is also co-author of Backcountry Skiing California’s Eastern Sierra, so when he came across Rakkup, he immediately saw the potential for adding skiing guidebooks to the app’s repertoire.

To do this, he teamed up with Rakkup, which now boasts five backcountry ski guidebooks: California’s Eastern Sierra, Washington’s Mt. Baker, Wyoming’s Teton Pass, and Colorado’s Silverton and Crested Butte. Each was converted to digital form from the original in-print book, and all are accessible through the app to purchase or rent. And while there are only a few guidebooks currently available, Greenberg hopes to add more in the future as he pursues authors and publications to join the network. But in the meantime, here’s what Rakkup offers.

Digital guidebooks could make tricky summits more accessible. [Photo] Nate Greenberg

The Purpose

Rakkup is designed to facilitate safety and travel by listing peaks, routes, photos and in-depth descriptions of difficulty, hazard level and slope aspect of a given location. It also includes information like tour length, elevation, best time of year to go, known slidepaths, headwalls that may be dangerous and gear-selection tips, like noting whether or not you may need a rope, ice axe or crampons.

The app provides the added benefit of filtering search results depending on what a user is looking to do as well as avoid. If the avalanche report calls for more danger on northeast-facing slopes, a filter can show all other slopes and slope angles to avoid. The Rakkup team is currently working to take this technology a step further by providing an automatic update for terrain choices based on daily avalanche-center reports, and they hope to incorporate user reports into the mix in the future.

“My feeling of the app’s main purpose is that it connects people with appropriate terrain,” Greenberg says. “In general, guidebooks provide people with options in terrain with the hopes of helping people make good decisions relative to ability and the conditions of avalanche danger.”

Rakkup vs. Other Apps

Competition lurks around every corner of the app world, and certain Rakkup features overlap with those of Mountain Hub—which crowd-sources maps and routes for various sports like mountain biking, skiing and trail running—and Fat Map, which provides 3D imagery of backcountry zones and routes.

Rakkup may offer maps and observations, but pow slaying is still up to the skier. [Photo] Nate Greenberg

What sets Rakkup apart is its ability to provide guide-based material. It isn’t just tourers posting their daily observations but professionals gathering reputable and accurate information on terrain, documenting it and making that information available to users.

“We provide not just the lines on the map; we also [provide] content,” Greenberg says. “The copy, text, as well as the photos are all integrated; the platform is rich with multimedia from approaches to routes to difficulty levels.” 

Rakkup vs. Traditional Guidebooks

While holding a physical book is appealing, guidebooks can be heavy and susceptible to damage from the elements—not ideal for bringing into the mountains. Rakkup is designed to accommodate flexibility and travel.

Greenberg describes Rakkup as “just one tool of many,” and continues: “I think a lot of people are relying on different [tools] in the backcountry. You can’t use it [Rakkup] in isolation, but it’s a great tool to be used in conjunction with a lot of things such as topographic maps, cell phones and compasses.”

Bottom Line: Rakkup may not be the one-stop app for tourers, but it’s a helpful resource when traveling unfamiliar territory or to get professional avalanche and weather observations on the fly.

Rakkup helps make remote objectives more accessible. [Photo] Nate Greenberg

The post How the climbing app Rakkup could provide backcountry skiers with fast, mobile guidebook access appeared first on Backcountry Magazine.


Mountain Skills: The tools and tricks to stay motivated in the skintrack


Mornings can be rough, and even the most diehard skiers experience days when it’s hard to get out of bed. You cringe at the thought of jamming your bruised feet into your wet-from-yesterday ski boots, and all you want is to pull the blanket over your head and snuggle into the depths of your down comforter.

Last year I skied 2.5-million human-powered vertical feet, and there were definitely times when I just didn’t feel like skinning. I often wanted to ski one less run or even lay down in the snow and cry. But I knew that, to reach my goal, I had to become a master of motivating myself to start earlier, go longer, go faster and stop later.

I came up with tricks and tools to keep my motivation up, and I never regretted skiing more. Here’s what I learned.

Aaron Rice applies his motivational tricks on the skintrack. [Photo] Madeline Cecilia

Pick Fun Objectives

The easiest way to be motivated for a morning dawn patrol is to pick an objective with high “wahoo” factor. Deep snow, a short approach and a scenic route are always a plus. When the tour plan calls for a five-mile skin on flat terrain through dense tree cover, it’s going to be harder to set an alarm. Add variable snow conditions to the mix and you’ll be pressing snooze until next week.

Set Big Goals

One of the tools that’s been most successful in boosting my time-spent-to-fun-had ratio is setting season-long goals. At the beginning of the winter, I spend time thinking about what I’d like to achieve during the coming season. These goals often include the number of days I wish to spend on skis or the amount of vertical feet I hope to climb. But I have friends who are less numerically driven, and they set different types of goals: pushing to ski things that scare them, skiing every month of the year or skiing in a new location at least once each week. It’s up to you to find and set goals for yourself that you truly want to achieve. The motivation will follow.

Set Small Goals

Grand goals are just the sum of their parts: smaller goals that, over time, have a big impact. And when it comes to setting small goals, the simpler the better. When I was exhausted last year, I would pick a tree 100 feet in front of me and just try to get there. Then I would pick another tree and aim for that one. Before I knew it, I was at the summit and ready to reap the rewards. Small, easily attainable goals—stacked one on top of another—soon become successful big goals.

Pick Good Partners

To find the right partner, you must first know yourself. Learn what type of skier you are. Do you like to go light and fast? Do you like to bring a big, heavy ski and frame binding and huck on the way down? Whatever it may be, know yourself and pick your partners accordingly. If my goal is to climb as many vertical feet as possible, there are some touring partners I would never ask to join in on my mission. On other days, I may want to explore a techy line, and I’ll probably call completely different people. I know that each of my touring partners makes the same call when they decide whether or not to invite me on a tour.

Pack the Night Before

This may seem insignificant, but it helped me wake up and be a better touring partner. If I’m all ready to go in the morning, I can sleep a bit longer and roll out of bed faster because I’m not stressed about tracking down clean socks, and I show up on time.

Sign Up for a Skimo Race

A skimo race may sound intimidating, but there’s so much to gain from doing just one. Many ski towns have a citizen-series weekly race where all levels are welcome. I remember the first skimo race I attended: I got crushed. In the hour-long race, I took off my pack at each transition to store my helmet, goggles, skins and layers. Meanwhile people were flying by me with packs never touching the ground. For me, the takeaway was not to buy lighter gear and get Spandex to shove my skins into, but rather to change my mentality. I realized that I needed to learn to rip skins with my skis still on, to kickturn correctly, to travel through the mountains efficiently and to use that knowledge in the backcountry to be safer, have more fun and ultimately ski more.

Have a Great Plan B

The best way to have a failed day is to pick just one objective and get shut down. While skiing in Colorado one spring, a good friend taught me to always have a plan B, C and D and to make sure that each of those plans is almost as good as plan A, if not better. When plan A falls through, the stoke remains just as high. Then, after the day is over, you can look back at the great time you had and not on a catastrophic slog, which will keep you excited for tomorrow.

Bottom Line: After nearly 10 years of backcountry skiing, I’ve learned that the only way to stay motivated is to have fun. Most of my tricks just facilitate a good time. If you’re doing the thing you love, you’ll be motivated to do more of it. All of the tactics for staying motivated to ski can apply to anything you do in life—if we have fun with the people we want to be around, our lives will be enriched.

This week, videographer Tyler Wilkinson-Ray premiered his film 2.5 Million, a Banff Mountain Film Festival select that documents Aaron Rice’s journey to ski 2.5-million vertical feet in 2016. The film is now available to viewers. To learn more about the project, visit airandrice.com.

2.5 Million from WILDER on Vimeo.

The post Mountain Skills: The tools and tricks to stay motivated in the skintrack appeared first on Backcountry Magazine.


QC Lab: Personal Anchor Systems Explained


I’m old-school. I clip in with draws when cleaning a sport anchor, I don’t wear a helmet when I’m sport climbing, and I use just the rope with a clove hitch to tie myself into the anchor when I get to the belay of a multi-pitch climb. When I’m rappelling off a route, I’ll use a couple of shoulder slings to tether myself in at each anchor. My personal philosophy is that I like having the least amount of stuff on me as possible, and the reality is I have the rope, and I have the slings—so why not use them?

[Images: Andy Earl]

Historically, people have used daisy chains (incorrectly) as tethers. The pockets on daisy chains are typically quite weak—between 2 and 5 kN—and the potential for mis-clipping a daisy (across the tack) is real: daisy chains should really be used for aid climbing and not as a personal tether.

So, what are people to do? Well, as climbing evolves, things change, and over the last several years “Personal Anchor Systems” (a nice descriptive term coined by o...Read More