Explore topic-wise InterviewSolutions.

This section includes InterviewSolutions, each offering curated multiple-choice questions to sharpen your knowledge and support exam preparation. Choose a topic below to get started.

1.

How do you back up and restore Jenkins data and configurations?

Answer»

A backup of Jenkins is needed for disaster recovery, retrieving old configurations, and auditing.

The $JENKINS_HOME folder keeps all the Jenkins metadata.

That includes build logs, job configs, plugins, plugin configurations, etc.

Install the ‘thinBackup’ plugin in Jenkins and enable the backup from the settings tab. We have to specify the backup directory and what we want to back up.

Backup directory: $JENKINS_HOME/backup

Backup files, generated with a timestamp in their filenames, will be stored under the path we specified.

[divya@jenkins backup]$ pwd
/var/lib/Jenkins/backup
[uat@jenkins backup]$ ls
FULL-2019-02-4_07-14  FULL-2019-02-11_13-07

It is a good practice to version control this backup (using Git) and move it to the cloud.

Restoring:

Backup files are in the tar+gzip format.

Copy these over to another server, then unzip and un-tar them there.

cd $JENKINS_HOME
tar xvfz /backups/Jenkins/backup-project_1.01.tar.gz
config.xml
jobs/myjob/config.xml
…
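A simple way to automate such backups is a cron entry that archives $JENKINS_HOME on a schedule — a minimal sketch (the paths and schedule are illustrative, not from the original):

# Run daily at 02:00: archive $JENKINS_HOME into a timestamped file
0 2 * * * tar czf /backups/Jenkins/FULL-$(date +\%Y-\%m-\%d_\%H-\%M).tar.gz -C /var/lib/jenkins .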
2.

How does Jenkins handle a failed test case?

Answer»
  • Pipeline artifacts
  • Jenkins has built-in support to record and capture artifacts of the failures for analysis and investigation.
  • This needs to be mentioned in the Jenkinsfile pipeline:

Sample code:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './test_suite1 build'
            }
        }
        stage('Test') {
            steps {
                sh './test_suite1 test'
            }
        }
    }
    post {
        always {
            archiveArtifacts 'build/libs/**/*.jar'
        }
    }
}

This records the artifact path and filenames for every run, even when a stage fails.
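Besides archiving artifacts, failed test cases are commonly recorded with the junit step, so Jenkins can mark the build unstable and chart the results — a sketch, assuming JUnit-style XML reports under reports/ (the path is illustrative):

post {
    always {
        junit 'reports/**/*.xml'   // publish test results; failing tests mark the build unstable
    }
}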

3.

How can Jenkins facilitate Deployment in a DevOps practice?

Answer»

Jenkins auto-builds the source code from Git (or any VCS) at every check-in, tests it, and deploys the code to a Tomcat environment via Docker.

The webapp source code is then deployed by the Tomcat server on a production environment.

Pre-requisite:

  • Jenkins plugins: “Deploy to container” and “Git plugin”
  • Edit the ‘post-build’ actions to include the Tomcat details.
  • Under the SCM section, add your Git project repository URL.

Git project structure:

Divya1@Divya:myWeb [master] $
Dockerfile
webapp/
    WEB-INF/
        classes/
        lib/
        web.xml
    index.jsp

--Dockerfile content:

vi Dockerfile

FROM tomcat:9.0.1-jre8-alpine
ADD ./webapp /usr/local/tomcat/webapps/webapp
CMD ["catalina.sh","run"]

Add a new project in Jenkins and track your Git project URL under the SCM section. Have a Dockerfile with the instructions to build from the Tomcat image and deploy the webapp folder.

--Add the build section to ‘execute shell’ as below:

#!/bin/sh
echo "Build started..."
docker build -t webapp .
echo "Deploying webapp to tomcat"
docker run -p 8888:8080 webapp
echo http://localhost:8888/webapp

--Build the project from Jenkins:

(Screenshot: the Jenkins build output.)

--Click on the link: http://localhost:8888/webapp

4.

How is Jenkins workspace data shared between different jobs?

Answer»

Jenkins stores the metadata of every project under the $WORKSPACE path.

Two projects:

  • myProject and project_next can be chained to each other, and they can share the same data.
  • On a successful build of ‘myProject’, the build of ‘project_next’ is triggered.

The build step for project_next accesses the myProject/logs/db.log file and reads it for the pattern ‘prod’ (a sketch is shown below).
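Since the original screenshot is not reproduced here, this is a minimal sketch of such an ‘Execute shell’ build step, assuming the default workspace layout under $JENKINS_HOME/workspace:

#!/bin/sh
# Read another job's workspace file and search it for the pattern 'prod'
grep 'prod' "$JENKINS_HOME/workspace/myProject/logs/db.log"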

5.

Write a sample module to install LAMP on an existing Ubuntu Server Docker image.

Answer»
  • Step 1: Run the Docker container with the Puppet server installed.
  • Step 2: Write a basic manifest file:

vi /etc/puppet/manifests/lamp.pp

  • Step 3: Add the resources to install the Apache, MySQL and PHP servers:
exec { 'apt-update':
  command => '/usr/bin/apt-get update',
}

# install apache2 package
package { 'apache2':
  require => Exec['apt-update'],
  ensure  => installed,
}

# ensure apache2 service is running
service { 'apache2':
  ensure => running,
}

# install mysql-server package
package { 'mysql-server':
  require => Exec['apt-update'],
  ensure  => installed,
}

# ensure mysql service is running
service { 'mysql':
  ensure => running,
}

# install php5 package
package { 'php5':
  require => Exec['apt-update'],
  ensure  => installed,
}

# ensure info.php file exists
file { '/var/www/html/info.php':
  ensure  => file,
  content => '<?php phpinfo(); ?>', # phpinfo code
  require => Package['apache2'],
}

Save and exit.

  • Step 4: Apply the manifest:

puppet apply --test /etc/puppet/manifests/lamp.pp
6.

What are resources? How do you handle system resources in Puppet?

Answer»

System resources are the key elements of Puppet code that define the architecture and manage the configuration of a system infrastructure.

  • Puppet has its own DML (Declarative Modelling Language) to write code.
  • The main unit of code is called a resource.
  • Puppet uses various types of resources to define the different definitions and parameters of a system.

Here is how a resource is written:

resource_type { 'resource_name':
  attribute => value,
  attribute => value,
  ...
}

  • Each resource has 3 parts: the resource type, the resource name, and the attributes.

Example:

user { 'jack':
  ensure => present,
  groups => ['home'],
  shell  => '/bin/bash',
  home   => '/home/jack',
}

This code evaluates as:
The resource type ‘user’ with the resource name ‘jack’ has the attributes ‘ensure’, ‘groups’, ‘shell’ and ‘home’.

These attributes hold their respective values.

We can get a list of all the available resource types with the command:

  • puppet describe --list

Some of the common resource types are:

Example of resource type ‘service’. This resource ensures that the service ‘network’ is running:

service { 'network':
  ensure => running,
}

This resource ensures the package ‘apache’ is installed; as a prerequisite, it requires the ‘apt-update’ command to be executed first:

package { 'apache':
  require => Exec['apt-update'],
  ensure  => installed,
}
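Existing resources on a node can also be inspected with the ‘puppet resource’ command, which prints the live system state as Puppet code (a quick sketch):

# Show the current state of the 'root' user as Puppet code
puppet resource user root

# Show the current state of the 'sshd' service
puppet resource service sshd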
7.

When joining a new node to the swarm the following error occurs: “Error response from daemon: Timeout was reached before node joined.” How will you fix this?

Answer»

The above failure happens when the ‘manager’ docker machine is not active; as a result, the new node machine will not be able to join the swarm cluster.

To fix this:

  • Step 1: Check for the active machine hosts.

  • Step 2: Activate the ‘manager’ machine.

  • Step 3: Get the swarm join token as a worker.

  • Step 4: Connect to the worker machine, say worker2:

Divya1@Divya:~ $docker-machine ssh worker2

  • Step 5: Run the swarm join token command; it will be successful. (See the command sketch after this list.)
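The original screenshots are not reproduced here; the standard commands behind these steps look like this (a sketch, assuming the machine names above):

# Step 1: list the docker machines and check which one is active
docker-machine ls

# Step 2: point the docker client at the 'manager' machine
eval $(docker-machine env manager)

# Step 3: print the join command (with token) for a worker node
docker swarm join-token worker

# Step 4: connect to the worker machine
docker-machine ssh worker2

# Step 5: inside worker2, run the printed 'docker swarm join --token ...' command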

8.

Explain Docker Orchestration

Answer»

As the number of docker machines increases, there needs to be a system to manage them all. Docker orchestration acts as a virtual docker manager and allows us to start, stop, pause, unpause or kill the docker nodes (machines).

Docker has an in-built utility called “docker swarm”.

Kubernetes is another popular and versatile docker orchestration system. A cluster of docker engines is called a ‘swarm’. Swarm turns a collection of docker engines into a single virtual docker engine.

In a swarm orchestration arrangement, one machine acts as a swarm manager that controls all the other machines connected to the cluster, which act as swarm nodes.

This is how I created a swarm of dockers and managed them on my machine:

We need docker services, and docker machines to run these services on. Finally, we need a docker swarm to manage the docker nodes/machines.

  • Commands we need:
  • docker service
  • docker-machine
  • docker swarm

  • Task:
  • Create docker machines (nodes) and services.
  • Create a docker swarm and manage the services on different nodes and port numbers.

  • Step 1: Create docker machines as ‘manager’ and ‘worker’: manager, node1, node2, node3, node4
  • Step 2: Create a docker swarm.
  • Step 3: Add the nodes as workers (or another manager) to the swarm.
  • Step 4: From the manager, create docker services.
  • Step 5: List the docker services created; also use the ‘docker service ps’ command to view the node machines these services are running on.
  • Step 6: Open <ip_address>:<port_number> in the browser and confirm the services are running.

Step 1: Create docker machines: manager, node1, node2, node3, node4

docker-machine create --driver virtualbox manager
docker-machine create --driver virtualbox node1
docker-machine create --driver virtualbox node2
docker-machine create --driver virtualbox node3
docker-machine create --driver virtualbox node4

--Every node is started as a virtualbox machine.

--Set docker machine ‘manager’ as active:

eval $(docker-machine env manager)

--List the docker machines:

docker-machine ls

Step 2: Create a docker swarm

--Initialize a swarm and add ‘manager’ to the swarm cluster using its ip address: 192.168.99.100
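The initialization command behind this step is the standard one — a sketch, reusing the ip address from the text:

# Run on the 'manager' machine: make it the first swarm manager
docker swarm init --advertise-addr 192.168.99.100
# This prints a 'docker swarm join --token <token> 192.168.99.100:2377' command for the workers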

Step 3: Add the nodes as workers(or another manager) to the swarm

--Connect to each node and run the above swarm join command

There can be more than one ‘manager’ node in a swarm

--connect to node1 and join node1 to the swarm as a worker

docker-machine ssh node1

--List the nodes connected in the swarm; connect to the manager node first:

$docker-machine ssh manager
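From inside the manager, the nodes are listed with (a sketch):

# Run from the manager node: list all nodes in the swarm
docker node ls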

Step 4: From the ‘manager’ node create new docker services

docker-machine ssh manager

--Create services, replicating them on more than one node, and expose them on the mentioned ports.

These commands pull the docker images from docker hub.
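The service creation commands appeared as screenshots in the original; based on the services, replica counts and ports mentioned below, they would look roughly like this (a sketch — the values are inferred from the text):

docker service create --name httpd --replicas 3 -p 80:80 httpd
docker service create --name couchbase --replicas 2 -p 8091:8091 couchbase
docker service create --name my_nginx --replicas 2 -p 8080:80 nginx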

Step 5: List the docker services created; also use the ‘docker service ps’ command to view the node machines these services are running on.

--List the services that will be shared among different swarm nodes
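The listing commands themselves (a sketch):

# List all services in the swarm
docker service ls

# Show which node machines the 'httpd' service tasks are running on
docker service ps httpd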

 

Swarm randomly assigns nodes to the running services when we replicate the services.

--service ‘httpd’ running on 3 nodes: node1, node2 and node3

 

--service ‘couchbase’ is running on 2 nodes: node1 and manager at port: 8091

--The ‘couchbase’ service can be accessed via ‘node1’ (ip: 192.168.99.101) and ‘manager’ (ip: 192.168.99.100) at port 8091, as shown below:

  • http://192.168.99.101:8091/
  • http://192.168.99.100:8091
  • --’httpd’ service
  • http://192.168.99.101:80
  • http://192.168.99.102:80
  • http://192.168.99.103:80
  • --’my_nginx’ service
  • http://192.168.99.100:8080
  • http://192.168.99.104:8080

(Screenshots of the running services.)

‘manager’ node can create/inspect/list/scale or remove a service.

Refer to:

docker service --help

Conclusion:

A number of services are balanced over different nodes (machines) in a swarm cluster. A node declared as a ‘manager’ controls the other nodes. Basic docker commands work from within a ‘manager’ node.

9.

How do you prune data in a Docker?

Answer»

Docker provides a system prune command to remove stopped containers and dangling images. Dangling images are the ones not attached to any container.

Run the prune command as below:

docker system prune

WARNING! This will remove:

  • all stopped containers
  • all networks not used by at least one container
  • all dangling images
  • all dangling build cache

Are you sure you want to continue? [y/N]

There is also a better, more controlled way of removing containers and images using the commands below:

Step 1: Stop the containers

docker stop <container_id>

Step 2: Remove the stopped container

docker rm <container_id>
docker rm 6174664de09d

Step 3: Remove the images; first stop the containers using those images, and then:

docker rmi <image_name>:[<tag>]

--Give the image name and tag

docker rmi ubuntu:1.0

--give the image id

docker rmi 4431b2a715f3
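Docker also ships more targeted prune subcommands for finer-grained cleanup (these are standard subcommands):

docker container prune   # remove all stopped containers
docker image prune       # remove dangling images
docker image prune -a    # remove all images not used by any container
docker volume prune      # remove unused volumes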
10.

Develop your own custom test environment and publish it on the docker hub.

Answer»

Write the instructions in a Dockerfile.

  • Build the Dockerfile and create an image in the local registry:

docker build -t learn_docker dockerFiles/

  • Create a container by running this image

docker run -it learn_docker

  • Push the image to the docker hub.

--Tag the local image as:

<hub-user>/<repo-name>[:<tag>]

Examples:

docker tag learn_docker divyabhushan/learn_docker:dev
docker tag learn_docker divyabhushan/learn_docker:testing

--List the images:

Divya1@Divya:~ $docker images
REPOSITORY                  TAG       IMAGE ID       CREATED              SIZE
divyabhushan/learn_docker   develop   944b0a5d82a9   About a minute ago   88.1MB
learn_docker                dev1.1    944b0a5d82a9   About a minute ago   88.1MB
divyabhushan/learn_docker   dev       d3e93b033af2   16 minutes ago       88.1MB
divyabhushan/learn_docker   testing   d3e93b033af2   16 minutes ago       88.1MB

--Push the docker images to docker hub:

docker push divyabhushan/learn_docker:dev
docker push divyabhushan/learn_docker:develop
docker push divyabhushan/learn_docker:testing

The push refers to repository [docker.io/divyabhushan/learn_docker]
53ea43c3bcf4: Pushed
4b7d93055d87: Pushed
663e8522d78b: Pushed
283fb404ea94: Pushed
bebe7ce6215a: Pushed
latest: digest: sha256:ba05e9e13111b0f85858f9a3f2d3dc0d6b743db78880270524e799142664ffc6 size: 1362

  • Image: screenshot from the Docker hub.
  • Docker hub repository: learn_docker has different variations of images. Image names are tagged.
  • We can pull the tags from this repository as per the need.

Summarize:

Develop your application code and all the other dependencies, like the binaries, library files and downloadables required to run the application in the test environment, and bundle it all in a directory.

  • Edit the Dockerfile to run the downloadables and replicate the desired production environment as the test env.
  • Copy the entire application bundle to the test env in the docker container.
  • Build the Dockerfile, create a new docker image and tag it.
  • Push this docker image to the docker hub, where it can now be downloaded by other users to test.

NOTE: This docker image has your application bundle = application code + dependencies + test runtime environment, exactly similar to your machine. Your application bundle is highly portable with no hassles.

11.

Create a docker file and build a new image. Run the image and create a container.

Answer»

FROM divyabhushan/myrepo:latest
COPY hello.sh /home/hello.sh
CMD ["bash", "/home/hello.sh"]
CMD ["echo", "Dockerfile demo"]
RUN echo "dockerfile demo" >> logfile

(Note: when a Dockerfile has multiple CMD instructions, only the last one takes effect.)

--Build an image from the dockerfile, tag the image name as ‘mydocker’

docker build -t mydocker dockerFiles/
docker build --tag <imageName> <dockerfile location>

Divya1@Divya:~ $docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
mydocker     latest   aacc2e8eb26a   20 seconds ago   88.1MB

Divya1@Divya:~ $docker run mydocker
/home/divya
Hello Divya
Bye Divya
  • View the images:
  • docker images
  • docker images --all
  • Run a container from an image:
  • docker run -it ubuntu (imageName)
  • -i = interactive
  • -t = allocate a pseudo-TTY (i.e., provide a terminal for the remote container image)
12.

How do you create a new image in a container without using a dockerfile?

Answer»

Install a new package in a container

docker run -it ubuntu
root@851edd8fd83a:/# which yum
--returns nothing
root@851edd8fd83a:/# apt-get update
root@851edd8fd83a:/# apt-get install -y yum
root@851edd8fd83a:/# which yum
/usr/bin/yum

--Get the latest container id:

docker ps -a
CONTAINER ID   IMAGE    COMMAND       CREATED         STATUS                       PORTS   NAMES
851edd8fd83a   ubuntu   "/bin/bash"   6 minutes ago   Exited (127) 3 minutes ago

--The base image changed:

docker diff 851edd8fd83a

Commit the changes in the container to create a new image.

Divya1@Divya:~ $docker commit 851edd8fd83a mydocker/ubuntu_yum
sha256:630004da00cf8f0b8b074942caa0437034b0b6764d537a3a20dd87c5d7b25179

--Verify that the new image is listed:

Divya1@Divya:~ $docker images
REPOSITORY            TAG      IMAGE ID       CREATED          SIZE
mydocker/ubuntu_yum   latest   630004da00cf   20 seconds ago   256MB
13.

Define client level git hooks and their implementation.

Answer»

Git hooks are instruction scripts that get triggered before (pre) or after (post) certain actions or events, such as a git command run.

  • Git hooks can be implemented on both client (local machine) and server (remote) repositories.
  • Hook objects are stored under the .git/hooks directory. Hook scripts are written in shell script and made executable.
  • --Code snippet of a ‘pre-commit’ script that stops the commit if a file has been deleted from the project:

#!/bin/sh
# Library includes:
. .git/hooks/hooks_library.lib

# An example hook script to verify what is about to be committed.
# Called by "git commit" with no arguments. The hook should
# exit with non-zero status after issuing an appropriate message if
# it wants to stop the commit.
# Aim: Check for any deleted file in the staging area; if found, stop this snapshot from being committed.

set_variables 1 $0

if [ "$(git status --short | grep '^D')" ]; then
        echo "WARNING!!! Aborting the commit. Found deleted files in the staging area.\n" | tee -a $LOGFILE
        echo "`git status --short | grep '^D' | awk -F' ' '{print $2}'`\n" | tee -a $LOGFILE
        exit 1;
else
        echo "[OK]: No deleted files, proceed to commit." | tee -a $LOGFILE
        exit 0;
fi

Here is a scenario of how I implemented the hook scripts to enforce certain pre-commit and post-commit test cases:

Step 1: Running .git/hooks/pre-commit script.

[OK]: No deleted files, proceed to commit.
Thu Feb  7 12:10:02 CET 2019
--------------------------------------------

Step 2: Running .git/hooks/prepare-commit-msg script.

Get hooks scripts while cloning the repo.  ISSUE#7092
Enter your commit message here.
README
code/install_hooks.sh
code/runTests.sh
database.log
hooksScripts/commit-msg
hooksScripts/hooks_library.lib
hooksScripts/post-commit
hooksScripts/pre-commit
hooksScripts/pre-rebase
hooksScripts/prepare-commit-msg
newFile
Thu Feb  7 12:10:02 CET 2019
--------------------------------------------

Step 3: Running .git/hooks/commit-msg script.

[OK]: Commit message has an ISSUE number
Thu Feb  7 12:10:02 CET 2019
--------------------------------------------

Step 4: Running .git/hooks/post-commit script.

New commit made:

1c705d3 Get hooks scripts while cloning the repo. ISSUE#7092
  • A ‘pre-rebase’ script can stop a rebase on the ‘master’ branch.
  • Rebase the ‘topic’ branch on the ‘master’ branch:

hooksProj [dev] $git rebase master topic
WARNING!!! upstream branch is master. You are not allowed to rebase on master
The pre-rebase hook refused to rebase.

A ‘pre-receive’ hook, triggered on the server just before a ‘push’ request is accepted, can be written to reject or allow the push operation:

localRepo [dev] $git push
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (2/2), 272 bytes | 272.00 KiB/s, done.
Total 2 (delta 0), reused 0 (delta 0)
remote: pre-recieve hook script
remote: hooks/pre-receive: [NOK]- Abort the push command
remote: To /Users/Divya1/OneDrive/gitRepos/remoteRepo/
 ! [remote rejected] dev -> dev (pre-receive hook declined)
error: failed to push some refs to '/Users/Divya1/OneDrive/gitRepos/remoteRepo/'
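The server-side script itself is not shown above; a minimal ‘pre-receive’ sketch that rejects every push, producing output like the above, could look like this (illustrative only):

#!/bin/sh
# hooks/pre-receive on the server: git feeds one line per pushed ref
while read oldrev newrev refname
do
    echo "pre-receive hook script"
    echo "hooks/pre-receive: [NOK]- Abort the push command"
    exit 1    # any non-zero exit status rejects the push
done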
14.

How do you list the commits missing in your branch that are present in the remote tracking branch?

Answer»

git log --oneline <localBranch>..<origin/remoteBranch>

Your local git branch should be set up to track a remote branch.

Divya1@Divya:initialRepo [dev] $git branch -vv
* dev    b834dc2 [origin/dev] Add Jenkinsfile
  master b834dc2 [origin/master] Add Jenkinsfile

Reset ‘dev’ commit history to 3 commits behind using the command:

Divya1@Divya:initialRepo [dev] $git reset --soft HEAD~3
Divya1@Divya:initialRepo [dev] $git branch -vv
* dev    30760c5 [origin/dev: behind 3] add source code auto build at every code checkin using docker images

Compare and list the missing logs in local ‘dev’ branch that are present in ‘origin/dev’

Divya1@Divya:initialRepo [dev] $git log --oneline dev..origin/dev
b834dc2 (origin/master, origin/dev, master) Add Jenkinsfile
c5e476c Rename 'prod' to 'uat'-break the build in Jenkings
6770b16 Add database logs.

Use ‘git pull’ to sync local ‘dev’ branch with the remote ‘origin/dev’ branch.

15.

What is the key difference between a ‘git rebase’ and ‘git merge’

Answer»
  • ‘git merge’ takes the unique commits from the two branches and merges them together, creating another commit with the merged changes; whereas in a ‘git rebase’, the work on the current branch is replayed and placed at the tip of the other branch, rewriting the commit objects.
  • ‘git rebase’ is applied from the branch to be rebased, whereas ‘git merge’ is applied on the branch that needs to merge the feature branch.
  • ‘git merge’ preserves the history and makes it easier to track ownership or when code was broken in the project history; unlike ‘git rebase’, which changes the commit history by changing the commit object (SHA-1) ids.
  • ‘git rebase’ is often used locally for feature, quickfix and bugfix branches; ‘git merge’ is used for long-running stable branches.
16.

Explain a good branching structural strategy that you have used for your project code development.

Answer»

A good branching strategy is one that adapts to your project and business needs. Every organization has its own set of defined SDLC processes.
An example branching structural strategy that I have used in my project:

  • Diagram: Branching strategy
  • Clone the project available at github:
  • git clone http://github.com/divyabhushan/structuralStrategy.git structuralStrategy

 Guidelines: 

  • “master-prod”: accepts merges/code/commits only from the “prod” branch.
  • “prod”: perform only a merge --squash from the “release” branch.
  • Merge only when approved by “QA”.
  • Tag every merge in the format: v1.0, v1.1 … v1.*
  • “release”: merge from the branches “dev”, “uat” and “QA”.
  • Every release commit/project code version has to be approved by “QA”.
  • Tag every merge in the format: r1.0, r1.1 … r1.*
  • “dev” and “uat” never merge with each other.
  • “hotfix” branch commits are shared among any feature branches, such as “dev” and “uat”.
  • The “feature” branch is private to “dev” alone and is dropped after merging.
  • CI/CD DevOps tools can be used to automate the above development and deployment to “master-prod”.
  • Every project release (r1.0 .. r1.x) on the ‘release’ branch can be tracked by the Jenkins CI tool and will trigger a build; on a successful build, the continuous testing suite will be triggered on the code. If the tests pass, the release will be delivered to the ‘prod’ branch.
  • Every source code delivery to the ‘prod’ branch will be automatically deployed to the ‘master-prod’ branch.

All the steps will be mentioned in a Jenkinsfile, guarded by a branch ‘name’ condition (see the sketch below).
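A sketch of such a branch condition in a declarative Jenkinsfile (the stage name and script are illustrative, not from the original):

pipeline {
    agent any
    stages {
        stage('Deliver to prod') {
            when { branch 'release' }   // run this stage only for the 'release' branch
            steps {
                sh './deliver.sh'       // hypothetical delivery script
            }
        }
    }
}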

17.

How do you recover a deleted un-merged branch in your project source code?

Answer»

By default, git does not allow you to delete a branch whose work has not yet been merged into the main branch.

To see the list of branches not merged with the checked-out branch, run:

Divya1@Divya:initialRepo [master] $git branch --no-merged
  dev

--If you try to delete this branch, git displays a warning:

Divya1@Divya:initialRepo [master] $git branch -d dev
error: The branch 'dev' is not fully merged.
If you are sure you want to delete it, run 'git branch -D dev'.

--If it is deleted anyway, using the -D flag:

Divya1@Divya:initialRepo [master] $git branch -D dev

--See the references log information

Divya1@Divya:initialRepo [master] $git reflog
cb9da2b (HEAD -> master) HEAD@{0}: checkout: moving from dev to master
b834dc2 (origin/master, origin/dev) HEAD@{1}: checkout: moving from master to dev
cb9da2b (HEAD -> master) HEAD@{2}: checkout: moving from master to master
cb9da2b (HEAD -> master) HEAD@{3}: checkout: moving from dev to master
b834dc2 (origin/master, origin/dev) HEAD@{4}: checkout: moving from master to dev
cb9da2b (HEAD -> master) HEAD@{5}: checkout: moving from uat to master
03224ed (uat) HEAD@{6}: checkout: moving from dev to uat

b834dc2 is the commit id from when we jumped to the ‘dev’ branch.
Create a branch named ‘dev’ from this commit id again:

Divya1@Divya:initialRepo [master] $git checkout -b dev b834dc2
Switched to a new branch 'dev'
Divya1@Divya:initialRepo [dev]
18.

What is CI/CD pipeline?

Answer»

Continuous Integration is a development practice wherein developers regularly merge or integrate their code changes into a common shared repository, very frequently (*). Every code check-in is then verified by automated builds and automated test cases.

This approach helps to detect and fix bugs early, improve software quality, and reduce the validation and feedback loop time; hence increasing the overall product quality and enabling speedy product releases.

  • (*) Unlike the traditional SDLC process, wherein a developer would wait until the completion of the code before sharing the work on the shared repository.
  • Git is the best VCS tool for Continuous Integration in a DevOps environment, with its strong, easy and reliable branching and merging architecture.
  • Continuous Delivery is a software practice where every code check-in is automatically built, tested and made ready for a release (delivery) to production.
  • Every code check-in should be release/deployment ready.
  • CD is an extension of CI.
  • The CD phase delivers the code to a production-like environment, such as dev, uat or preprod, and runs automated tests.
  • On successful implementation of continuous delivery in the prod-like environment, the code is ready to be deployed to the main production server.
19.

Mention some post condition pipelines options that you used in Jenkinsfile?

Answer»

We can mention some test conditions to run after the completion of the stages in a pipeline.

Code snippet

post {
    always {
        echo "This block runs always !!!"
    }
    success {
        echo "This block runs when the stages have a success status"
    }
    unstable {
        echo "This block is run when the stages abort with an unstable status"
    }
}

Here are the post conditions reserved for a Jenkinsfile:

  • always:

Run the steps in the post section regardless of the completion status of the Pipeline’s or stage’s run.

  • unstable:

Only run the steps in post if the current Pipeline’s or stage’s run has an "unstable" status, usually caused by test FAILURES, code violations, etc.

  • aborted:

Only run the steps in post if the current Pipeline’s or stage’s run has an “aborted” status.

  • success:

Only run the steps in post if the current Pipeline’s or stage’s run has a "success" status.

  • failure:

Only run the steps in post if the current Pipeline’s or stage’s run has a "failed" status.

  • changed:

Only run the steps in post if the current Pipeline’s or stage’s run has a different completion status from its previous run.

  • cleanup:

Run the steps in this post condition after every other post condition has been evaluated, regardless of the Pipeline or stage’s status.

20.

How do you implement CI/CD using Jenkins?

Answer»
  • Continuous Integration using Jenkins and the Git plugin:
  • Create a new Jenkins item as a ‘Pipeline’.
  • Add ‘Git’ as the branch source and give the project repository URL.
  • Every time source code is pushed to the remote git repository from the local git repo, the Jenkins job gets started (triggered).
  • Jenkins builds the code, and the output is available under “Console output”.
  • In the git repository, add a ‘Jenkinsfile’, then commit and push the code to the git repository.
  • Add a Jenkinsfile in the git repository:
pipeline {
    agent { docker { image 'ubuntu:latest' } }
    stages {
        stage('build') {
            steps {
                sh 'uname -a'
            }
        }
        stage('Test') {
            steps {
                sh './jenkins/scripts/test.sh'
            }
        }
    }
}
21.

What is Jenkins used for in DevOps?

Answer»

Jenkins is a self-contained, open source automation server (tool) for continuous development.

Jenkins aids and automates the CI/CD process.

It gets the checked-in code from a VCS like Git using the ‘git plugin’, builds the source code, runs test cases in a production-like environment and makes the code release-ready using the ‘deploy’ plugin.

  • These continuous delivery pipelines are written in a ‘Jenkinsfile’, which is also checked into the project’s source code and version controlled by Git.
  • Pipelines are a continuous set of jobs that are run for continuous delivery and these jobs are integrated at every section of the workflow.
  • Jenkins pipelines easily connect to Docker images and containers to run inside.
  • Pipelines easily provide the desired test environment without having to configure the various system tools and dependencies.

Sample Jenkins file

pipeline {
    agent { docker { image 'ubuntu:latest' } }
    stages {
        stage('build') {
            steps {
                sh 'uname -a'
            }
        }
    }
}
  • Jenkins will start the container (ubuntu with the latest image) and execute the test case steps inside it.
  • The agent directive says where and how to execute the pipeline.
  • Jenkins saves us the trouble of debugging after a huge commit history if there was a code break.
22.

What does ‘Infrastructure as code’ means in terms of Puppet?

Answer»

The entire server infrastructure setup configuration is written in terms of code and re-used on all the Puppet agent nodes (machines) that are connected via a Puppet master server.

This is achieved by the use of code snippets called ‘manifests’; that are configuration files for every Server agent node.

  • Each manifest (a program file with the *.pp extension) consists of the resources and the code.
  • We can review, deploy and test the environment configuration for development, testing and production environments.
  • Puppet manifests written once can be deployed on any environment to build up the same infrastructure.
23.

What is Puppet? What is the need for it?

Answer»

Puppet is a Configuration Management and deployment tool for administrative tasks.

This tool helps in automating the provisioning, configuration, and management of infrastructure and systems.

In simple words:

  • Puppet helps administrators to automate the process of manually creating and configuring virtual machines.
  • Say you have to bring up ‘n’ number of servers with ‘x’ number of VMs (virtual machines) on them. Each VM needs to be configured for certain users, groups, services, applications, databases, etc.
  • The entire infrastructure can be loaded up with the help of Puppet programs, re-using the code on multiple servers.
  • Key feature: idempotency.
  • The same set of configurations can be run multiple times to build a machine on a server, as Puppet applies a configuration only where the system is not already in the desired state.
24.

How do you work on a container image?

Answer»

--Get docker images from the docker hub or your docker repository:

docker pull busybox
docker pull centos
docker pull divyabhushan/myrepo

Divya1@Divya:~ $docker pull divyabhushan/myrepo
Using default tag: latest
latest: Pulling from divyabhushan/myrepo
6cf436f81810: Pull complete
987088a85b96: Pull complete
b4624b3efe06: Pull complete
d42beb8ded59: Pull complete
d08b19d33455: Pull complete
80d9a1d33f81: Pull complete
Digest: sha256:c82b4b701af5301cc5d698d963eeed46739e67aff69fd1a5f4ef0aecc4bf7bbf
Status: Downloaded newer image for divyabhushan/myrepo:latest

--List the docker images

Divya1@Divya:~ $docker images
REPOSITORY            TAG      IMAGE ID       CREATED             SIZE
divyabhushan/myrepo   latest   72a21c221add   About an hour ago   88.1MB
busybox               latest   3a093384ac30   5 weeks ago         1.2MB
centos                latest   1e1148e4cc2c   2 months ago        202MB

--Create a docker container by running the docker image

--pass a shell argument  : `uname -a`

Divya1@Divya:~ $docker run centos uname -a
Linux c70fc2da749a 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

--Docker images can be built by reading a dockerfile

--Build a new image ‘newrepo’ with tag 1.1 from the dockerFiles/dockerfile:

docker build -t newrepo:1.1 dockerFiles/

--Now create a container from the above image:

--List all the containers

--start the container

--List only the running containers
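The original screenshots for these last steps are omitted; the standard commands are (a sketch — the container name is illustrative):

# --Create a container from the above image
docker run -d --name mycontainer newrepo:1.1

# --List all the containers
docker ps -a

# --Start the container
docker start mycontainer

# --List only the running containers
docker ps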

25.

What is a Docker image? How are they shared and accessed?

Answer»

A developer writes code instructions to define all the applications and their dependencies in a file called a “Dockerfile”. A Dockerfile is used to create a ‘Docker image’ using the ‘docker build <directory>’ command. The build command is run by the docker daemon.

When you run a Docker image “Containers” are created. Containers are runtime instances of a Docker image.

  • A single image can be used to create many containers.
  • Docker images are stored in a docker registry on the docker host.
  • Docker has a client-server architecture.
  • Docker images are generally pushed to and shared on Docker hub (a remote registry).

(Image credit: docs.docker.com)

  • Other developers ‘docker pull’ these registry images and create containers in their own environment.
  • Developers can run their applications in the same docker container as their peers.
  • This way you can get the same test environment on different servers with the same applications and dependencies.
26.

What is a Docker? Explain its role in DevOps.

Answer»

Every source code deployment needs to be portable and compatible on every device and environment.

Applications and their run-time environment, such as libraries and other dependencies like binaries, jar files, configuration files etc., are bundled up (packaged) in a container.

Containers as a whole are portable, consistent and compatible with any environment.

In development terms, a developer can run an application in any environment (dev, uat, preprod, production) without worrying about the run-time dependencies of the application.

  • Docker is a container platform.
  • Docker is a framework that provides an abstraction layer to manage containers.
  • Docker is a containerization engine, which automates packaging, shipping, and deployment of any software applications or Containers.
  • Docker also lets us test the code and then deploy it in production.
  • Docker along with Jenkins (a Continuous Integration tool) and Git plugin contributes in CI/CD process.
27.

How does ‘git rebase’ work? When should you rebase your work instead of a ‘git merge’?

Answer»

There are scenarios wherein one would like to merge a quickfix or feature branch without a huge commit history into another ‘dev’ or ‘uat’ branch and yet maintain a linear history.

A non-fast-forward ‘git merge’ would result in a diverged history. Also, when one wants the feature commits to be the latest commits, ‘git rebase’ is an appropriate way of merging the two branches.

‘git rebase’ replays the commits of the current branch and places them on the tip of the rebased branch. Since it replays the commits, rebase rewrites the commit objects and creates new object ids (SHA-1). Word of caution: do not use it if the history is on a release/production branch being shared on the central server. Limit rebase to your local repository, to rebase quickfix or feature branches only.

Steps:

Say there is a ‘dev’ branch that needs a quick feature to be added, along with the test cases from the ‘uat’ branch.

  • Step 1: Branch out ‘new-feature’ branch from ‘dev’.

Develop the new feature and make commits in ‘new-feature’ branch.

[dev] $git checkout -b new-feature

Divya1@Divya:rebase_project [new-feature] $git add lib/commonLibrary.sh && git commit -m 'Add commonLibrary file'
Divya1@Divya:rebase_project [new-feature] $git add feature1.txt && git commit -m 'Add feature1.txt'
Divya1@Divya:rebase_project [new-feature] $git add feature2.txt && git commit -m 'Add feature2.txt'
  • Step 2: Merge ‘uat’ branch into ‘dev’
[dev] $ git merge uat
  • Step 3: Rebase ‘new-feature’ on ‘dev’
Divya1@Divya:rebase_project [dev] $git checkout new-feature
Divya1@Divya:rebase_project [new-feature] $git rebase dev
First, rewinding head to replay your work on top of it...
Applying: Add commonLibrary file
Applying: Add feature1.txt
Applying: Add feature2.txt
  • Step 4: Switch to ‘dev’ branch and merge ‘new-feature’ branch, this is going to be a fast-forward merge as ‘new-feature’ has already incorporated ‘dev’+’uat’ commits.
Divya1@Divya:rebase_project [new-feature] $git checkout dev
Divya1@Divya:rebase_project [dev] $git merge new-feature
Updating 5044e24..3378815
Fast-forward
 feature1.txt         | 1 +
 feature2.txt         | 1 +
 lib/commonLibrary.sh | 16 ++++++++++++++++
 3 files changed, 18 insertions(+)
 create mode 100644 feature1.txt
 create mode 100644 feature2.txt
 create mode 100644 lib/commonLibrary.sh

This results in a linear history, with the ‘new-feature’ commits at the top and the ‘dev’ commits placed before them.

  • Step 5: View the history of ‘dev’ after merging ‘uat’ and ‘new-feature’ (rebase)
Divya1@Divya:rebase_project [dev] $git hist
* 3378815 2019-02-14 | Add feature2.txt (HEAD -> dev, new-feature) [divya bhushan]
* d3859c5 2019-02-14 | Add feature1.txt [divya bhushan]
* 93b76f7 2019-02-14 | Add commonLibrary file [divya bhushan]
*   5044e24 2019-02-14 | Merge branch 'uat' into dev [divya bhushan]
|\
| * bb13fb0 2019-02-14 | End of uat work. (uat) [divya bhushan]
| * 0ab2061 2019-02-14 | Start of uat work. [divya bhushan]
* | a96deb1 2019-02-14 | End of dev work. [divya bhushan]
* | 817544e 2019-02-14 | Start of dev work. [divya bhushan]
|/
* 01ad76b 2019-02-14 | Initial project structure. (tag: v1.0, master) [divya bhushan]

NOTE: ‘dev’ will show a diverged commit history for the ‘uat’ merge and a linear history for the ‘new-feature’ merge (rebase).

28.

What is the difference between a git reset and a git revert.

Answer»
  • git revert is used to record new commits that reverse the effect of earlier commits/snapshots of a project.
  • Instead of removing the commit from the project history, it figures out how to undo the changes introduced by the commit and appends a new commit with the resulting content to the current branch.

  • Usage: git revert <commit_id>
  • Use: to undo an entire commit from your project history; for example, removing a bug introduced by a commit.

Reset VS Revert

  • git “reset”: resets the project to a previous snapshot, erasing the changes.
  • git “revert” does not change the project history, unlike git “reset”.
  • git “revert” undoes the changes of a commit id and applies the undo work as a new commit object.
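A quick sketch of both commands (the commit id and counts are illustrative):

# Undo the changes of commit abc1234 by adding a new "revert" commit:
git revert abc1234

# Move the branch tip back one commit, keeping the changes staged:
git reset --soft HEAD~1

# Move the branch tip back one commit, discarding the changes entirely:
git reset --hard HEAD~1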
29.

What is a git commit object? How is it read?

Answer»

When a project repository is initialized to be a git repository, git stores all its metadata in a hidden folder “.git” under the project root directory.
Git repository is a collection of objects. 

Git has 4 types of objects – blobs, trees, tags, and commits.

Every commit creates a new commit object with a unique SHA-1 hash_id.
Each commit object has a pointer reference to the tree object, its parent object, the author, the committer and the commit message.

Diagram: Single Commit object

To see the commit log message along with the textual diff of the code, run:
git show <commit_id>

Divya1@Divya:initialRepo [master] $git show f9354cb
commit f9354cb08d91e80cabafd5b54d466b6055eb2927
Author: divya bhushan <divya_bhushan@hotmail.com>
Date:   Mon Feb 11 23:39:24 2019 +0100

    Add database logs.

diff --git a/logs/db.log b/logs/db.log
new file mode 100644
index 0000000..f8854b0
--- /dev/null
+++ b/logs/db.log
@@ -0,0 +1 @@
+database logs

To read a commit object git has ‘git cat-file’ utility.

Divya1@Divya:initialRepo [master] $git cat-file -p f9354cb
tree 2a85825b8d20918350cc316513edd9cc289f8349
parent 30760c59d661e129329acfba7e20c899d0d7d199
author divya bhushan <divya_bhushan@hotmail.com> 1549924764 +0100
committer divya bhushan <divya_bhushan@hotmail.com> 1549924764 +0100

Add database logs.

A tree object is like an OS directory: it stores references to other directories (tree type) and files (blob type).

Divya1@Divya:initialRepo [master] $git cat-file -p 2a85825b8d20918350cc316513edd9cc289f8349
100755 blob 054acd444517ad5a0c1e46d8eff925e061edf46c    README.md
040000 tree dfe42cbaf87e6a56b51dab97fc51ecedfc969f39    code
100644 blob e08d4579f39808f3e2830b5da8ac155f87c0621c    dockerfile
040000 tree 014e65a65532dc16a6d50e0d153c222a12df4742    logs
30.

What is Git?

Answer»

Git is a Distributed Version Control System, used to logically store and back up the entire history of how your project source code has developed, keeping track of every version change of the code.

Git facilitates very flexible and efficient branching and merging of your code with other collaborators. Being distributed, git is extremely fast and more reliable, as every developer has their own local copy of the entire repository.

Git allows you to undo the mistakes in the source code at different tiers of its architecture namely- Working directory, Staging (Index) area, Local repository, and Remote repository.

Using Git, we can always get an older version of our source code and work on it. Git tracks every bit of data as it checksums every file into unique hash codes, referring to them via pointers.

To summarize, Git is the most efficient and widely used VCS, used by major projects and companies like Linux, Google, Facebook, Microsoft, Twitter, LinkedIn, Netflix, Android, Amazon, IBM and Apple iOS, to name a few…

31.

What are some of the tools you have used in DevOps approach?

Answer»

Considering DevOps to be an ideology towards achieving a quality product, every organization has its own guidelines and approach towards it.
Some of the popular tools I have used are:

  • Git as a distributed VCS to manage the source code.
  • Jenkins (plus plugins) to achieve CI/CD (Continuous Integration and Continuous Delivery).
  • Puppet as a configuration management and deployment tool.
  • Nagios for continuous monitoring; and
  • Docker for containerization.
32.

What is DevOps?

Answer»

DevOps is an approach wherein the development and operations teams collaborate for better, bug-free continuous delivery and integration of the source code.
DevOps is about automating the entire SDLC (Software Development Life Cycle) process with the implementation of CI/CD practices.

CI/CD are the continuous integration and continuous deployment methodologies.
Every source code check-in will automatically build and unit test the entire code against a production-like environment, and the code is continuously deployed to the production environment after it passes its automated tests.
That eliminates the long feedback, bug-fix, and product enhancement loops between every release.

Every team takes accountability for the entire product, right from requirement analysis to documentation, coding, testing in development environments, code deployment, and continuous improvement in terms of bugs and feedback from reviewers and customers.

33.

How does the Service Management process of ITIL Service Transition phase map in a DevOps organization?

Answer»

Choose the correct option from the list below:
(1) Maintaining stable and fixed teams to avoid resource switching between projects
(2) Managing changes through the same mechanisms used for aligning the business with IT
(3) Bringing new software live in a matter of minutes through automation
(4) Aligning IT with the needs of business following Agile practices

Answer: (4) Aligning IT with the needs of business following Agile practices

34.

Which DevOps principle appreciates measuring processes, people and tools?

Answer»

Choose the correct option from the list below:
(1) People responsibility
(2) Create with the end in mind
(3) Cross-functional autonomous teams
(4) Continuous improvement

Answer: (4) Continuous improvement

35.

Which of these statements are true?

Answer»

Choose the correct option from the list below:
(1) DevOps looks to extend Dev into Production
(2) DevOps looks to embed Development into IT Operations
(3) DevOps looks to embed IT Operations into Development
(4) All of the above

Answer: (4) All of the above

36.

What is NOT an aspect related to governing work in an organization?

Answer»

Choose the correct option from the list below:
(1) Scrum of scrums
(2) Feature switches
(3) Scrum board
(4) Test automation

Answer: (4) Test automation

37.

What is the difference between Continuous Delivery and Continuous Deployment?

Answer»

Choose the correct option from the list below:
(1) Continuous Delivery is a manual task, while Continuous Deployment is an automated task
(2) Continuous Delivery has a manual release-to-production decision, while Continuous Deployment has releases automatically pushed to production
(3) Continuous Delivery includes all steps of the software development life cycle; Continuous Deployment may skip a few steps such as validation and testing
(4) Continuous Delivery means complete delivery of the application to the customer, while Continuous Deployment includes only the deployment of the application in a customer environment

Answer: (2) Continuous Delivery has a manual release-to-production decision, while Continuous Deployment has releases automatically pushed to production

38.

The DevOps movement has evolved to solve which problem?

Answer»

Choose the correct option from the list below:
(1) Increasingly complex, virtualized IT environments
(2) The need for multiple rapid software releases, sometimes many in one day
(3) The traditional siloed approach to app development and deployment
(4) All of the above

Answer: (4) All of the above

39.

DevOps is a

Answer»

Choose the correct option from the list below:
(1) Culture
(2) Role
(3) Team
(4) All of the above

Answer: (1) Culture

40.

The origins of DevOps trace back to when?

Answer»

Choose the correct option from the list below:
(1) 2010
(2) 2009
(3) 2008
(4) 2006

Answer: (2) 2009

41.

What is NOT an appropriate predictor of IT performance in a DevOps environment?

Answer»

Choose the correct option from the list below:
(1) Changes approved by external team members
(2) High-trust organizational culture
(3) Proactive monitoring
(4) Version control of all artifacts

Answer: (4) Version control of all artifacts

42.

Removing unplanned work is a key principle of DevOps

Answer»

Choose the correct option from the list below:
(1) True
(2) False

Answer: (1) True
43.

What is one of the most common differences between DevOps and the Agile SDLC?

Answer»

One of the main differences between DevOps and the Agile SDLC is:
(1) The Agile software development methodology mainly focuses on the development of software.
(2) DevOps, on the other hand, is responsible for both the development and the deployment of the software, and it also takes care that both transitions are done in the safest and most reliable way possible.

44.

The DevOps movement is an outgrowth of which software development methodology?

Answer»

Choose the correct option from the list below:
(1) Agile
(2) Waterfall
(3) Promise-based algorithms
(4) Test-driven development and model-driven development

Answer: (1) Agile

45.

What is not a challenge between the Development and Operations teams in a traditional organization?

Answer»

Choose the correct option from the list below:
(1) The blame game between Dev and Ops
(2) Different tools used between Dev and Ops
(3) No feedback loop between Dev and Ops
(4) Development and Operations are not maintained by the same person

Answer: (4) Development and Operations are not maintained by the same person

46.

What type of tasks are characterized by low task variability and high task analyzability?

Answer»

Choose the correct option from the list below:
(1) Craft
(2) Engineering
(3) Routine
(4) Non-routine

Answer: (3) Routine

47.

True or False: DevOps automation tools rely on coding skills.

Answer»

Choose the correct option from the list below:
(1) True
(2) False

Answer: (2) False
48.

What type of mindset is the core of a DevOps culture?

Answer»

Choose the correct option from the list below:
(1) Service mindset
(2) Skill mindset
(3) People mindset
(4) Process mindset

Answer: (1) Service mindset

49.

The goal of DevOps is not just to increase the rate of change but to successfully deploy features into production without causing chaos and disrupting other services.

Answer»

Choose the correct option from the list below:
(1) True
(2) False

Answer: (1) True

50.

Which of these tools is not associated with DevOps?

Answer»

Choose the correct option from the list below:
(1) Chef
(2) Liebert MPX
(3) Juju
(4) Puppet

Answer: (2) Liebert MPX