Ansible vs. Kubernetes: Differences You Should Know

Last updated on 13th Jul 2020, Blog, General

About author

Siddharth (Sr. DevOps Engineer)

Siddharth has 6+ years of experience in the corresponding technical domain. He has also been a technology writer for the past 3 years and shares these informative blogs with us.

Approaches to Configuration Management: Chef, Ansible, and Kubernetes

Ansible is a great tool for both small and large environments. You can use Ansible to conveniently maintain the configuration state of a few machines and, at the other extreme, manage thousands of servers using inventory plugins that discover all your Amazon Web Services (AWS) machines and allow Ansible to apply configurations to them.

Ansible also helps you avoid a “snowflake” state of your fleet of servers. A snowflake state occurs when you’ve configured most of the packages and services manually, resulting in each of your machines having a unique final state with many inconsistent config files and settings amongst servers.

Conceptually, Ansible is very similar to Chef, except that Ansible “playbooks” are typically used without a single master server, because they can be run from anywhere. Ansible uses ssh to log in to a target server and configure it to match the desired state described declaratively in the playbook. All you need is the correct ssh key to log in to the target machine, so your laptop can serve as the master server if needed.
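As a sketch of that workflow, the desired state lives in a playbook and gets applied over ssh. The host group, package, and service names below are hypothetical, not taken from the article:

```yaml
# site.yml -- a minimal playbook sketch; the "web" group and nginx
# are hypothetical example choices
- hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

From any machine holding the right ssh key, a command such as `ansible-playbook -i inventory.ini site.yml` applies this state to every host in the `web` group.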


Common practice is to keep all playbooks in a central repository like git. This ensures your “infrastructure as code” is always backed up and kept in sync with all team members so everyone knows which configuration change was applied to a particular server or group of servers.

Ansible uses YAML for all resource definitions: “playbooks”, “roles”, “tasks”, and “handlers”. These correspond to Chef’s “cookbooks”, “recipes”, “attributes”, and so on. Ansible uses Jinja templating instead of Chef’s ERB (Embedded Ruby) templates. Jinja templates provide almost the same flexibility as ERB, allowing you to add loops and conditionals to your templates and use text manipulation and temporary variables for your convenience.
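For illustration, a Jinja template with a loop and a conditional might look like the following sketch (the `sites` variable and the nginx-style output are hypothetical):

```jinja
{# templates/vhosts.conf.j2 -- "sites" is a hypothetical playbook variable #}
{% for site in sites %}
server {
    listen 80;
    server_name {{ site.domain }};
{% if site.redirect_https | default(false) %}
    return 301 https://{{ site.domain }}$request_uri;
{% endif %}
}
{% endfor %}
```

Ansible’s `template` module renders a template like this with the playbook’s variables before copying the result to the target host.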

Ansible ships with modules and inventory plugins for a wide range of cloud and virtualization platforms, including:

  • Amazon Web Services (AWS)
  • Atomic
  • CenturyLink
  • Cloudscale
  • CloudStack
  • DigitalOcean
  • Dimension Data
  • Docker
  • Google Cloud Platform
  • KVM
  • Linode
  • LXC
  • LXD
  • Microsoft Azure
  • OpenStack
  • OVH
  • oVirt
  • Packet
  • Profitbricks
  • PubNub
  • Rackspace
  • Scaleway
  • SmartOS
  • SoftLayer
  • Univention
  • VMware
  • Webfaction
  • XenServer

Ansible’s features include:

  • Simplicity
    You don’t need any special coding skills to use Ansible’s playbooks. Ansible is easy to set up: just run the shell script once, and you’re good to go.
  • Power
    Ansible handles highly complex IT workflows. 
  • Zero Cost
    Ansible is a free, open-source software solution.
  • Flexibility
    You can orchestrate the entire application environment no matter where you want to deploy it. Since it has hundreds of modules available, you can customize Ansible to fit your unique needs.
  • Easy to Use Playbooks
    Playbooks are written in YAML, making them easy to read and edit.
  • Agentless Installation
    You can set Ansible up in minutes using OpenSSH. You also don’t need to set up agents on remote servers.
  • Efficiency
    Ansible doesn’t require you to install any extra software, so there are more resources to dedicate to your other applications.
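The agentless model means a plain inventory file plus ssh access is all Ansible needs to reach its targets. A minimal sketch with hypothetical host names:

```ini
# inventory.ini -- host names are hypothetical; no agent runs on these
# machines, Ansible only needs ssh (OpenSSH) access to them
[web]
web1.example.com
web2.example.com

[db]
db1.example.com ansible_user=admin
```

An ad-hoc check such as `ansible -i inventory.ini all -m ping` confirms connectivity without installing anything on the remote servers.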

Limitations with Ansible

Ansible is designed to install packages, copy configuration files, and provision cloud instances and services through APIs. The “overall cluster state” of many machines and their interaction with each other is out of scope for Ansible. Ansible will not run health checks every 10 seconds to see if your database is online or reachable by other services, and it will not autoscale containers or instances based on incoming HTTP requests or the latency of your web services. Kubernetes handles all of these tasks, and much more.

Kubernetes’ Approach to Configuration Management

Using Kubernetes to manage your fleet of services frees you from the periodic re-configuration of the target state for each machine and server. Instead, you build container images and deploy them with a container runtime such as Docker or rkt. In addition, support for any “Open Container Initiative”-conformant runtime is under active development, allowing Kubernetes to manage other types of containers that are not based on Docker or rkt.


After you initially deploy container images, Kubernetes will check their health and run status. The underlying worker nodes are monitored in real time for compatibility with running the desired workload (for example, ensuring enough CPU/RAM/disk capacity for a particular service), and the containers are monitored for status and resource utilization. The deployment (a single container or a set of many different containers) can be auto-scaled based on a very flexible set of parameters and metrics. You can plug in your own custom application metrics (for example, cache read latency) if the autoscaler’s built-in options do not cover your use case.
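As an illustration of such autoscaling, a HorizontalPodAutoscaler resource might be declared as follows. The deployment name and thresholds are hypothetical, and the `autoscaling` API version varies by cluster release:

```yaml
# hpa.yaml -- scales the hypothetical "web" deployment on CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Custom application metrics (such as the cache read latency mentioned above) can be plugged in through additional `metrics` entries backed by a metrics adapter.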

With Docker and Kubernetes, you still keep all your infrastructure as code. However, as opposed to Chef and Ansible, you can describe not just the state of a single “target” (in our case, a container, described with a Dockerfile and a “Pod definition”), but also its dependencies on other services, its scale-up and scale-down rules (the “HPA”, or horizontal pod autoscaler, definition), its health checks, its recovery policy, and many more attributes that help run a complex set of containers reliably and consistently.

For those migrating their instance-based (or server-based) infrastructure to containers for the first time, it might seem appropriate to reuse the same Chef cookbooks or Ansible playbooks to manage the containers from “inside” at run time. An example of this is installing chef-client inside a container so it connects to the Chef master, pulls all configurations to perform the setup, and then polls for changes periodically or elects to be notified of needed changes by the master. However, this is not a good idea: containers are built to scale up fast, initialize fast, and be ready to serve their workloads within seconds. If you add unnecessary agents and initialization steps to a container, you lose many of the benefits of using containers.

All Kubernetes resources can be described with YAML or JSON. The spec format is intuitive and easy to understand when you become familiar with a few basic resources like “pod template” and “service”. The pod template is used as a subsection “inside” many other resources like “deployment”, “replica set”, “stateful set”, and “daemon set”. These resources describe “how” the pod template will be running, the needed replicas, and extra parameters specific to each resource.
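A deployment that embeds a pod template, a replica count, and a health check might look like this sketch (the image name, port, and probe path are hypothetical):

```yaml
# deployment.yaml -- a minimal sketch of a deployment wrapping a pod template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # "how" the pod template runs: three replicas
  selector:
    matchLabels:
      app: web
  template:                    # the pod template "inside" the deployment
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
          livenessProbe:       # Kubernetes restarts the container if this fails
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
```

The same `template` subsection could sit inside a replica set, stateful set, or daemon set; only the surrounding parameters change.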

Kubernetes itself can run on a variety of platforms and managed services, including:

  • OpenStack
  • oVirt
  • Photon
  • vSphere
  • IBM Cloud Kubernetes Service
  • Baidu Cloud Container Engine

Kubernetes’ features include:

  • Container Balancing
    The Kubernetes platform calculates the best location for a given container without requiring any user interaction.
  • Flexibility
    Because Kubernetes is an open-source cloud-based tool, it’s portable and offers multiple environment flexibility, meaning it can run on public cloud systems, on-premises servers, or hybrid clouds.
  • Zero Cost
    Kubernetes is a free, open-source platform.
  • Process Automation
    Kubernetes can automatically decide which server will host any given container.
  • Self-Monitoring
    Kubernetes constantly checks the health of nodes and containers.
  • Scalability
    It provides horizontal scaling, allowing companies and organizations to quickly scale workloads out (and back in) to fit their needs.
