InterviewSolution
This section offers curated interview questions and answers to sharpen your knowledge and support exam preparation. Choose a topic below to get started.
| 1. |
How to backup and restore Jenkins data and configurations |
|
Answer» A backup of Jenkins is needed for disaster recovery, retrieving old configurations, and auditing. The $JENKINS_HOME folder keeps all the Jenkins metadata: build logs, job configs, plugins, plugin configurations, etc. Install the 'ThinBackup' plugin in Jenkins and enable the backup from the settings tab. We have to specify the backup directory and what we want to back up.
Backup directory: $JENKINS_HOME/backup. Backup files are generated with a timestamp in the filename and stored under the path we specified:
```
[divya@jenkins backup]$ pwd
/var/lib/Jenkins/backup
[uat@jenkins backup]$ ls
FULL-2019-02-4_07-14  FULL-2019-02-11_13-07
```
It is a good practice to version control this backup (using Git) and move it to the cloud.
Restoring: backup files are in tar+gzip format. Copy them over to another server, then unzip and untar them on that server:
```
cd $JENKINS_HOME
tar xvfz /backups/Jenkins/backup-project_1.01.tar.gz
config.xml
jobs/myjob/config.xml
...
```
|
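The same kind of timestamped archive that the ThinBackup plugin produces can also be scripted by hand. The function below is an illustrative sketch, not the plugin's actual implementation; the paths passed to it are up to you:

```shell
# backup_jenkins: tar+gzip a Jenkins home directory into a timestamped archive.
# Arguments: $1 = JENKINS_HOME to back up, $2 = destination backup directory.
backup_jenkins() {
  stamp=$(date +%Y-%m-%d_%H-%M)
  mkdir -p "$2"
  # archive job configs, plugins, logs: everything under $JENKINS_HOME
  tar czf "$2/FULL-$stamp.tar.gz" -C "$1" .
  echo "$2/FULL-$stamp.tar.gz"
}
```

Usage would look like `backup_jenkins "$JENKINS_HOME" /backups/jenkins`, with the destination kept outside $JENKINS_HOME so the archive does not include older backups.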
|
| 2. |
How does Jenkins handle a failed test case? |
Answer»
Sample code:
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './test_suite1 build' }
        }
        stage('Test') {
            steps { sh './test_suite1 test' }
        }
    }
    post {
        always {
            archiveArtifacts 'build/libs/**/*.jar'
        }
    }
}
```
A failing test step marks the stage and the build as failed, but the `post { always { ... } }` block still runs: the `archiveArtifacts` step is given the artifacts path and filename pattern, so the artifacts are archived regardless of the test outcome. |
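Beyond archiving, a failed run is usually surfaced through the `failure` (or `unstable`) post conditions. The sketch below is illustrative, and the mail recipient is an assumption:

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps { sh './test_suite1 test' }
        }
    }
    post {
        failure {
            // hypothetical recipient; this block runs only when the build fails
            mail to: 'team@example.com',
                 subject: "Build ${env.BUILD_NUMBER} failed",
                 body: "See ${env.BUILD_URL} for the failed test cases."
        }
    }
}
```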
|
| 3. |
How can Jenkins facilitate Deployment in a DevOps practice? |
|
Answer» Jenkins auto-builds the source code from Git (or any VCS) at every check-in, tests the source code, and deploys the code to a Tomcat environment via Docker. The webapp source code is then deployed by the Tomcat server on a production environment. Prerequisite:
Git project structure:
```
Divya1@DIVYA:myWeb [master] $
Dockerfile
webapp/
    WEB-INF/
        classes/
        lib/
        web.xml
    index.jsp
```
Dockerfile content, with the instructions to connect to the Tomcat docker image and deploy the webapp folder:
```dockerfile
FROM tomcat:9.0.1-jre8-alpine
ADD ./webapp /usr/local/tomcat/webapps/webapp
CMD ["catalina.sh", "run"]
```
Add a new project in Jenkins and track your git project URL under the SCM section. Then add the build section to 'Execute shell' as below:
```shell
#!/bin/sh
echo "Build started..."
docker build -t webapp .
echo "Deploying webapp to tomcat"
docker run -p 8888:8080 webapp
echo http://localhost:8888/webapp
```
Build the project from Jenkins (below is the screenshot of the output) and click on the link: http://localhost:8888/webapp |
|
| 4. |
How is Jenkins workspace data shared between different jobs? |
|
Answer» Jenkins stores the metadata of every project under the $WORKSPACE path. Consider two projects, 'myProject' and 'project_next':
The code screenshot below is for project_next; it accesses the myProject/logs/db.log file and reads it for the pattern 'prod'. |
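A minimal sketch of what such a cross-job read can look like in an 'Execute shell' build step. The job name `myProject` comes from the answer, while the Jenkins home path, helper name, and log contents are assumptions:

```shell
# read_other_workspace: hypothetical helper that greps a pattern out of
# another Jenkins job's workspace (job name and layout are assumptions).
read_other_workspace() {
  jenkins_home="${JENKINS_HOME:-/var/lib/jenkins}"
  other_ws="$jenkins_home/workspace/$1"   # e.g. myProject
  grep "$2" "$other_ws/logs/db.log"       # e.g. pattern 'prod'
}
```

Called as `read_other_workspace myProject prod`, it prints the matching lines from the other job's log.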
|
| 5. |
Write a sample module to install LAMP on an existing Ubuntu Server Docker image. |
Answer» Open the manifest file:
```shell
vi /etc/puppet/manifests/lamp.pp
```
Add the resource declarations for the LAMP packages and services, then save and exit.
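The manifest body is not shown above; here is a hedged sketch of what lamp.pp could contain for an Ubuntu node. The package and service names are the usual Ubuntu ones and are assumptions here:

```puppet
# lamp.pp -- minimal LAMP stack for an Ubuntu node (illustrative sketch)

# refresh the apt cache before installing anything
exec { 'apt-update':
  command => '/usr/bin/apt-get update',
}

# Apache web server
package { 'apache2':
  require => Exec['apt-update'],
  ensure  => installed,
}
service { 'apache2':
  ensure => running,
}

# MySQL server
package { 'mysql-server':
  require => Exec['apt-update'],
  ensure  => installed,
}
service { 'mysql':
  ensure => running,
}

# PHP
package { 'php':
  require => Exec['apt-update'],
  ensure  => installed,
}
```

Apply it on the node with `puppet apply /etc/puppet/manifests/lamp.pp`.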
|
|
| 6. |
What are resources? How do you handle system resources in Puppet? |
|
Answer» System resources are the key elements of Puppet code; they define the architecture and manage the configuration of a system infrastructure.
Here is how a resource is written:
```puppet
resource_type { 'resource_name':
  attribute => value,
  attribute => value,
  ...
}
```
Example:
```puppet
user { 'jack':
  ensure => present,
  home   => '/home/jack',
  shell  => '/bin/bash',
}
```
This code evaluates as: make sure the user 'jack' exists, with the given attribute values for its home directory and login shell.
We can get a list of all the available resource types with the command:
```shell
puppet describe --list
```
Some of the common resource types are: file, package, service, user, group, exec, and cron.
Example of resource_type 'service': this resource ensures that the service 'network' is running:
```puppet
service { 'network':
  ensure => running,
}
```
This resource ensures that the package 'apache' is installed; its prerequisite requires the 'apt-update' command to be executed first:
```puppet
package { 'apache':
  require => Exec['apt-update'],
  ensure  => installed,
}
```
|
|
| 7. |
When joining a new node to the swarm the following error occurs: “Error response from daemon: Timeout was reached before node joined.” How will you fix this? |
|
Answer» This failure happens when the 'manager' docker machine is not active; as a result, the new node machine is not able to join the swarm cluster. To fix this: check the machines with `docker-machine ls` and start the manager if it is stopped (`docker-machine start manager`); then, from the manager, print a fresh join command with `docker swarm join-token worker` and re-run it on the new node. Also make sure the swarm ports are reachable between the machines: 2377/tcp for cluster management, 7946/tcp and udp for node communication, and 4789/udp for overlay network traffic.
|
|
| 8. |
Explain Docker Orchestration |
|
Answer» As the number of docker machines increases, there needs to be a system to manage them all. Docker orchestration acts as a virtual docker manager and allows us to start, stop, pause, unpause, or kill the docker nodes (machines). Docker has an in-built utility called 'docker swarm'; Kubernetes is another popular and versatile docker orchestration system. A cluster of docker engines is called a 'swarm': swarm turns a collection of docker engines into a single virtual docker engine. In a swarm orchestration arrangement, one machine acts as the swarm manager and controls all the other machines connected to the cluster, which act as swarm nodes. This is how I created a swarm of dockers and managed them on my machine: we need docker services, docker machines to run these services on, and finally a docker swarm to manage the docker nodes/machines.
Create a docker swarm and manage the services on different nodes and port numbers.
Step 1: Create the docker machines: manager, node1, node2, node3, node4.
```shell
docker-machine create --driver virtualbox manager
docker-machine create --driver virtualbox node1
docker-machine create --driver virtualbox node2
docker-machine create --driver virtualbox node3
docker-machine create --driver virtualbox node4
```
Every node is started as a VirtualBox machine. Set the docker machine 'manager' as active and list the docker machines:
```shell
eval $(docker-machine env manager)
docker-machine ls
```
Step 2: Create a docker swarm. Initialize a swarm and add 'manager' to the swarm cluster using its IP address (192.168.99.100):
```shell
docker-machine ssh manager
docker swarm init --advertise-addr 192.168.99.100
```
Step 3: Add the nodes as workers (or as additional managers) to the swarm; there can be more than one 'manager' node in a swarm. Connect to each node and run the `docker swarm join` command printed by the previous step. For example, to join node1 to the swarm as a worker:
```shell
docker-machine ssh node1
docker swarm join --token <worker-token> 192.168.99.100:2377
```
List the nodes connected in the swarm from the manager node:
```shell
docker-machine ssh manager
docker node ls
```
Step 4: From the 'manager' node, create new docker services, replicating them on more than one node and exposing them on the mentioned port. The command pulls the docker image from Docker Hub; for example:
```shell
docker service create --name httpd --replicas 3 -p 8080:80 httpd
docker service create --name couchbase --replicas 2 -p 8091:8091 couchbase
```
Step 5: List the docker services that will be shared among the different swarm nodes, and use `docker service ps <service>` to view the node machines each service is running on:
```shell
docker service ls
docker service ps httpd
```
Swarm randomly assigns nodes to the running services when we replicate the services. The service 'httpd' runs on 3 nodes: node1, node2, and node3.
The service 'couchbase' runs on 2 nodes, node1 and manager, at port 8091; the 'couchbase' service can be accessed via 'node1' (ip: 192.168.99.101) and 'manager' (ip: 192.168.99.100) at port 8091, as shown below.
Screenshots of the running services: the 'manager' node can create, inspect, list, scale, or remove a service; refer to `docker service --help`. Conclusion: a number of services are balanced over different nodes (machines) in a swarm cluster; a node declared as a 'manager' controls the other nodes; and basic docker commands work from within a 'manager' node. |
|
| 9. |
How do you prune data in Docker? |
|
Answer» Docker provides a system prune command to remove stopped containers and dangling images; dangling images are the ones not attached to any container. Run the prune command as below:
```shell
docker system prune
```
```
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N]
```
There is also a better and more controlled way of removing containers and images:
Step 1: Stop the container:
```shell
docker stop <container_id>
```
Step 2: Remove the stopped container:
```shell
docker rm <container_id>
docker rm 6174664de09d
```
Step 3: Remove the images. First stop any container using the image, then remove the image by name and tag, or by image id:
```shell
docker rmi <image_name>:[<tag>]
docker rmi ubuntu:1.0
docker rmi 4431b2a715f3
```
|
|
| 10. |
Develop your own custom test environment and publish it on the docker hub. |
|
Answer» Write the instructions in a dockerfile, then build and run the image:
```shell
docker build -t learn_docker dockerFiles/
docker run -it learn_docker
```
Tag the local image as <hub-user>/<repo-name>[:<tag>]. Examples:
```shell
docker tag learn_docker divyabhushan/learn_docker:dev
docker tag learn_docker divyabhushan/learn_docker:testing
```
List the images for this container:
```
Divya1@Divya:~ $docker images
REPOSITORY                  TAG       IMAGE ID       CREATED              SIZE
divyabhushan/learn_docker   develop   944b0a5d82a9   About a minute ago   88.1MB
learn_docker                dev1.1    944b0a5d82a9   About a minute ago   88.1MB
divyabhushan/learn_docker   dev       d3e93b033af2   16 minutes ago       88.1MB
divyabhushan/learn_docker   testing   d3e93b033af2   16 minutes ago       88.1MB
```
Push the docker images to Docker Hub:
```shell
docker push divyabhushan/learn_docker:dev
docker push divyabhushan/learn_docker:develop
docker push divyabhushan/learn_docker:testing
```
```
The push refers to repository [docker.io/divyabhushan/learn_docker]
53ea43c3bcf4: Pushed
4b7d93055d87: Pushed
663e8522d78b: Pushed
283fb404ea94: Pushed
bebe7ce6215a: Pushed
latest: digest: sha256:ba05e9e13111b0f85858f9a3f2d3dc0d6b743db78880270524e799142664ffc6 size: 1362
```
To summarize: develop your application code and all the other dependencies required to run the application in the test environment (binaries, library files, downloadables), and bundle it all in a directory.
NOTE: This docker image holds your application bundle (application code + dependencies + test run-time environment) exactly similar to your machine's. Your application bundle is highly portable with no hassles. |
|
| 11. |
Create a docker file and build a new image. Run the image and create a container. |
Answer»
Build an image from the dockerfile and tag the image name as 'mydocker':
```shell
docker build -t mydocker dockerFiles/
# general form: docker build --tag <imageName> <dockerfile-location>
```
```
Divya1@DIVYA:~ $docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
mydocker     latest   aacc2e8eb26a   20 seconds ago   88.1MB
```
Run the image to create a container:
```
Divya1@Divya:~ $docker run mydocker
/home/divya
Hello Divya
Bye Divya
```
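The dockerfile content itself is not shown in the answer; below is a minimal sketch that would produce a run like the one above. The base image and commands are assumptions:

```dockerfile
# Hypothetical dockerFiles/Dockerfile
FROM alpine:latest
WORKDIR /home/divya
# print the working directory and a couple of messages when the container runs
CMD ["sh", "-c", "pwd && echo 'Hello Divya' && echo 'Bye Divya'"]
```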
|
|
| 12. |
How do you create a new image in a container without using a dockerfile? |
|
Answer» Install a new package in a container:
```
docker run -it ubuntu
root@851edd8fd83a:/# which yum          (returns nothing)
root@851edd8fd83a:/# apt-get update
root@851edd8fd83a:/# apt-get install -y yum
root@851edd8fd83a:/# which yum
/usr/bin/yum
```
Get the latest container id (e.g. with `docker ps -a`):
```
CONTAINER ID   IMAGE    COMMAND       CREATED         STATUS                       PORTS   NAMES
851edd8fd83a   ubuntu   "/bin/bash"   6 minutes ago   Exited (127) 3 minutes ago
```
The base image has changed; inspect the changes with:
```shell
docker diff 851edd8fd83a
```
Commit the changes in the container to create a new image:
```
Divya1@Divya:~ $docker commit 851edd8fd83a mydocker/ubuntu_yum
sha256:630004da00cf8f0b8b074942caa0437034b0b6764d537a3a20dd87c5d7b25179
```
List the images to confirm the new image is there:
```
Divya1@Divya:~ $docker images
REPOSITORY            TAG      IMAGE ID       CREATED          SIZE
mydocker/ubuntu_yum   latest   630004da00cf   20 seconds ago   256MB
```
|
|
| 13. |
Define client level git hooks and their implementation. |
|
Answer» Git hooks are instruction scripts that get triggered before (pre) or after (post) certain actions or events, such as a git command run.
Here is a scenario where I implemented hook scripts to enforce certain pre-commit and post-commit test cases:
```
Step 1: Running .git/hooks/pre-commit script.
[OK]: No deleted files, proceed to commit.
Thu Feb 7 12:10:02 CET 2019
--------------------------------------------
Step 2: Running .git/hooks/prepare-commit-msg script.
Get hooks scripts while cloning the repo. ISSUE#7092
Enter your commit message here.
README
code/install_hooks.sh
code/runTests.sh
database.log
hooksScripts/commit-msg
hooksScripts/hooks_library.lib
hooksScripts/post-commit
hooksScripts/pre-commit
hooksScripts/pre-rebase
hooksScripts/prepare-commit-msg
newFile
Thu Feb 7 12:10:02 CET 2019
--------------------------------------------
Step 3: Running .git/hooks/commit-msg script.
[OK]: Commit message has an ISSUE number
Thu Feb 7 12:10:02 CET 2019
--------------------------------------------
Step 4: Running .git/hooks/post-commit script.
New commit made:
1c705d3 Get hooks scripts while cloning the repo. ISSUE#7092
```
A 'pre-receive' hook is triggered on the server just before a 'push' request is processed, and can be written to reject or allow the push operation:
```
localRepo [dev] $git push
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (2/2), 272 bytes | 272.00 KiB/s, done.
Total 2 (delta 0), reused 0 (delta 0)
remote: pre-receive hook script
remote: hooks/pre-receive: [NOK]- Abort the push command
remote: To /Users/Divya1/OneDrive/gitRepos/remoteRepo/
! [remote rejected] dev -> dev (pre-receive hook declined)
error: failed to push some refs to '/Users/Divya1/OneDrive/gitRepos/remoteRepo/'
```
|
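A minimal sketch of the pre-commit check shown in Step 1 of the transcript; the function name and messages are illustrative:

```shell
# check_no_deletions: the core of a hypothetical pre-commit hook that aborts
# the commit whenever any staged change deletes a file.
check_no_deletions() {
  deleted=$(git diff --cached --name-only --diff-filter=D)
  if [ -n "$deleted" ]; then
    echo "[NOK]: deleted files staged: $deleted" >&2
    return 1
  fi
  echo "[OK]: No deleted files, proceed to commit."
}
```

Saved as `.git/hooks/pre-commit` (and made executable), a script calling this function makes git abort the commit when the check returns non-zero.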
|
| 14. |
How do you list the commits missing in your branch that are present in the remote tracking branch? |
|
Answer» Use:
```shell
git log --oneline <localBranch>..<origin/remoteBranch>
```
Your local git branch should be set up to track a remote branch:
```
Divya1@DIVYA:initialRepo [dev] $git branch -vv
* dev    b834dc2 [origin/dev] Add Jenkinsfile
  master b834dc2 [origin/master] Add Jenkinsfile
```
Reset the 'dev' commit history to 3 commits behind using the command:
```
Divya1@Divya:initialRepo [dev] $git reset --soft HEAD~3
Divya1@Divya:initialRepo [dev] $git branch -vv
* dev 30760c5 [origin/dev: behind 3] add source code auto build at every code checkin using docker images
```
Compare and list the missing logs in the local 'dev' branch that are present in 'origin/dev':
```shell
git log --oneline dev..origin/dev
```
Use 'git pull' to sync the local 'dev' branch with the remote 'origin/dev' branch. |
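The comparison can be sketched end-to-end with two throwaway repositories (paths and commit messages below are illustrative):

```shell
# Demonstrate listing commits present on the remote-tracking branch but
# missing locally, using two throwaway repositories.
work=$(mktemp -d)

# "origin": a repository with one commit
git init -q "$work/origin"
git -C "$work/origin" -c user.email=a@a -c user.name=a commit -q --allow-empty -m 'c1'

# clone it, then add a second commit on the origin only
git clone -q "$work/origin" "$work/local"
git -C "$work/origin" -c user.email=a@a -c user.name=a commit -q --allow-empty -m 'c2'

# fetch (not pull) so the remote-tracking branch moves ahead of the local one
git -C "$work/local" fetch -q origin
branch=$(git -C "$work/origin" symbolic-ref --short HEAD)
git -C "$work/local" log --oneline "$branch..origin/$branch"   # shows only c2
```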
|
| 15. |
What is the key difference between a 'git rebase' and a 'git merge'? |
Answer» 'git merge' joins two branches with a new merge commit and keeps the existing commits unchanged, so a non-fast-forward merge shows a diverged history. 'git rebase' replays the commits of one branch on top of the other, creating new commit ids (SHA-1) and a linear history; because it rewrites history, it should be limited to local or private branches.
|
|
| 16. |
Explain a good branching structural strategy that you have used for your project code development. |
|
Answer» A good branching strategy is one that adapts to your project and business needs. Every organization has a set of its own defined SDLC processes.
Guidelines:
All the steps will be specified in a Jenkinsfile, gated on a branch 'name' condition. |
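The branch 'name' condition mentioned above can be expressed with a `when { branch ... }` directive in a declarative Jenkinsfile; the stage names, branch names, and deploy script below are illustrative assumptions:

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy to UAT') {
            // run this stage only when building the 'uat' branch
            when { branch 'uat' }
            steps { sh './deploy.sh uat' }
        }
        stage('Deploy to Production') {
            when { branch 'master' }
            steps { sh './deploy.sh prod' }
        }
    }
}
```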
|
| 17. |
How do you recover a deleted un-merged branch in your project source code? |
|
Answer» By default, git does not allow you to delete a branch whose work has not yet been merged into the main branch. To see the list of branches not merged with the checked-out branch, run:
```
Divya1@Divya:initialRepo [master] $git branch --no-merged
  dev
```
If you try to delete this branch, git displays a warning:
```
Divya1@Divya:initialRepo [master] $git branch -d dev
error: The branch 'dev' is not fully merged.
If you are sure you want to delete it, run 'git branch -D dev'.
```
If it is still deleted using the -D flag:
```
Divya1@Divya:initialRepo [master] $git branch -D dev
```
See the references log information:
```
Divya1@Divya:initialRepo [master] $git reflog
cb9da2b (HEAD -> master) HEAD@{0}: checkout: moving from dev to master
b834dc2 (origin/master, origin/dev) HEAD@{1}: checkout: moving from master to dev
cb9da2b (HEAD -> master) HEAD@{2}: checkout: moving from master to master
cb9da2b (HEAD -> master) HEAD@{3}: checkout: moving from dev to master
b834dc2 (origin/master, origin/dev) HEAD@{4}: checkout: moving from master to dev
cb9da2b (HEAD -> master) HEAD@{5}: checkout: moving from uat to master
03224ed (uat) HEAD@{6}: checkout: moving from dev to uat
```
b834dc2 is the commit id from when we jumped to the 'dev' branch, so the deleted branch can be recovered from it:
```shell
git checkout -b dev b834dc2
```
|
|
| 18. |
What is CI/CD pipeline? |
|
Answer» Continuous Integration is a development practice wherein developers regularly merge or integrate their code changes into a common shared repository, very frequently. Every code check-in is then verified by an automated build and automated test cases. This approach helps detect and fix bugs early, improve software quality, and reduce the validation and feedback loop time, hence increasing the overall product quality and enabling speedy product releases. Continuous Delivery/Deployment extends this by automatically releasing every validated build to a staging or production environment, and together these form the CI/CD pipeline.
|
|
| 19. |
Mention some post condition pipelines options that you used in Jenkinsfile? |
|
Answer» We can mention some test conditions to run after the completion of the stages in a pipeline. Code snippet:
```groovy
post {
    always {
        echo 'This block runs always!!!'
    }
    success {
        echo 'This block runs when the stages finish with a success status'
    }
    unstable {
        echo 'This block runs when the stages abort with an unstable status'
    }
}
```
Here are the post conditions reserved for a Jenkinsfile:
always: Run the steps in the post section regardless of the completion status of the Pipeline's or stage's run.
unstable: Only run the steps in post if the current Pipeline's or stage's run has an "unstable" status, usually caused by test failures, code violations, etc.
aborted: Only run the steps in post if the current Pipeline's or stage's run has an "aborted" status.
success: Only run the steps in post if the current Pipeline's or stage's run has a "success" status.
failure: Only run the steps in post if the current Pipeline's or stage's run has a "failed" status.
changed: Only run the steps in post if the current Pipeline's or stage's run has a different completion status from its previous run.
cleanup: Run the steps in this post condition after every other post condition has been evaluated, regardless of the Pipeline's or stage's status. |
|
| 20. |
How do you implement CI/CD using Jenkins? |
Answer»
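Implementing CI/CD with Jenkins typically means a pipeline that checks out, builds, tests, and deploys the code at every push, triggered by a webhook or SCM polling. The sketch below is illustrative; the repository URL, build commands, and stage names are assumptions:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            // hypothetical repository URL
            steps { git url: 'https://github.com/example/myapp.git', branch: 'master' }
        }
        stage('Build') {
            steps { sh './gradlew build' }
        }
        stage('Test') {
            steps { sh './gradlew test' }
        }
        stage('Deploy') {
            // reached only when the build and test stages succeed
            steps { sh './deploy.sh' }
        }
    }
}
```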
|
|
| 21. |
What is Jenkins used for in DevOps? |
|
Answer» Jenkins is a self-contained, open-source automation server (tool) for continuous development. Jenkins aids and automates the CI/CD process: it gets the checked-in code from a VCS like Git using the 'git plugin', builds the source code, runs test cases in a production-like environment, and makes the code release-ready using a 'deploy' plugin.
Sample Jenkinsfile:
```groovy
pipeline {
    agent { docker { image 'ubuntu:latest' } }
    stages {
        stage('build') {
            steps { sh 'uname -a' }
        }
    }
}
```
|
|
| 22. |
What does ‘Infrastructure as code’ means in terms of Puppet? |
|
Answer» The entire server infrastructure setup and configuration are written as code and re-used on all the Puppet agent nodes (machines) that are connected via a Puppet master server. This is achieved by the use of code snippets called 'manifests', which are configuration files for every server agent node.
|
|
| 23. |
What is Puppet? What is the need for it? |
|
Answer» Puppet is a configuration management and deployment tool for administrative tasks. This tool helps in automating the provisioning, configuration, and management of infrastructure and systems.
|
|
| 24. |
How do you work on a container image? |
|
Answer» Get docker images from Docker Hub or your own docker repository:
```shell
docker pull busybox
docker pull centos
docker pull divyabhushan/myrepo
```
```
Divya1@Divya:~ $docker pull divyabhushan/myrepo
Using default tag: latest
latest: Pulling from divyabhushan/myrepo
6cf436f81810: Pull complete
987088a85b96: Pull complete
b4624b3efe06: Pull complete
d42beb8ded59: Pull complete
d08b19d33455: Pull complete
80d9a1d33f81: Pull complete
Digest: sha256:c82b4b701af5301cc5d698d963eeed46739e67aff69fd1a5f4ef0aecc4bf7bbf
Status: Downloaded newer image for divyabhushan/myrepo:latest
```
List the docker images:
```
Divya1@Divya:~ $docker images
REPOSITORY            TAG      IMAGE ID       CREATED             SIZE
divyabhushan/myrepo   latest   72a21c221add   About an hour ago   88.1MB
busybox               latest   3a093384ac30   5 weeks ago         1.2MB
centos                latest   1e1148e4cc2c   2 months ago        202MB
```
Create a docker container by running a docker image, passing a shell argument (`uname -a`):
```
Divya1@Divya:~ $docker run centos uname -a
Linux c70fc2da749a 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
```
Docker images can also be built by reading a dockerfile. Build a new image 'newrepo' with tag 1.1 from the dockerFiles/dockerfile:
```shell
docker build -t newrepo:1.1 dockerFiles/
```
Now create a container from the above image, list all the containers, start the container, and list only the running containers (e.g.):
```shell
docker run -d newrepo:1.1    # create a container from the above image
docker ps -a                 # list all the containers
docker start <container_id>  # start the container
docker ps                    # list only the running containers
```
|
|
| 25. |
What is a Docker image? How are they shared and accessed? |
|
Answer» A developer writes code instructions to define the application and all of its dependencies in a file called a 'Dockerfile'. The Dockerfile is used to create a 'Docker image' using the `docker build <directory>` command; the build command is run by the docker daemon. The resulting image is shared by pushing it to a registry such as Docker Hub with `docker push`, and accessed on any machine by pulling it with `docker pull` (or implicitly by `docker run`).
Image credit: docs.docker.com
|
|
| 26. |
What is a Docker? Explain its role in DevOps. |
|
Answer» Every source code deployment needs to be portable and compatible on every device and environment. Applications and their run-time environment, such as libraries and other dependencies like binaries, jar files, and configuration files, are bundled up (packaged) in a container. Containers as a whole are portable, consistent, and compatible with any environment. In development terms, a developer can run the application in any environment (dev, uat, preprod, and production) without worrying about the run-time dependencies of the application.
|
|
| 27. |
How does ‘git rebase’ work? When should you rebase your work instead of a ‘git merge’? |
|
Answer» There are scenarios wherein one would like to merge a quickfix or feature branch, without a huge commit history, into another 'dev' or 'uat' branch and yet maintain a linear history. A non-fast-forward 'git merge' would result in a diverged history; also, when one wants the merged feature commits to be the latest commits, 'git rebase' is an appropriate way of merging the two branches. 'git rebase' replays the commits of the current branch and places them over the tip of the rebased branch. Since it replays the commits, rebase rewrites the commit objects and creates new object ids (SHA-1). Word of caution: do not use it if the history is on a release/production branch shared on the central server; limit rebase to quickfix or feature branches in your local repository. Steps: say there is a 'dev' branch that needs a quick feature to be added, along with the test cases from the 'uat' branch.
Develop the new feature and make commits on the 'new-feature' branch:
```
[dev] $git checkout -b new-feature
Divya1@Divya:rebase_project [new-feature] $git add lib/commonLibrary.sh && git commit -m 'Add commonLibrary file'
Divya1@Divya:rebase_project [new-feature] $git add feature1.txt && git commit -m 'Add feature1.txt'
Divya1@Divya:rebase_project [new-feature] $git add feature2.txt && git commit -m 'Add feature2.txt'
```
Rebase the feature onto 'dev' (for example, run `git rebase dev` from 'new-feature' and then fast-forward 'dev' with `git merge new-feature`): this results in a linear history with the 'new-feature' commits at the top and the 'dev' commits placed before them.
NOTE: 'dev' will show a diverged commit history for the 'uat' merge and a linear history for the 'new-feature' merge. |
|
| 28. |
What is the difference between a git reset and a git revert? |
Answer»
Reset vs Revert: 'git reset' moves the current branch pointer back to an earlier commit (the --soft, --mixed, and --hard flags decide what happens to the index and working directory), rewriting the commit history; it should not be used on commits already shared with others. 'git revert' creates a new commit that undoes the changes introduced by an earlier commit, keeping the existing history intact, which makes it safe on shared branches.
|
|
| 29. |
What is a git commit object? How is it read? |
|
Answer» When a project repository is initialized as a git repository, git stores all of its metadata in a hidden folder ".git" under the project root directory. Git has 4 types of objects: blobs, trees, tags, and commits. Every commit creates a new commit object with a unique SHA-1 hash id. Diagram: single commit object. To see the commit log message along with the textual diff of the code, run `git log -p`. To read a commit object, git has the 'git cat-file' utility. A tree object is like an OS directory that stores references to other directories (trees) and files (blobs). |
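A runnable sketch of reading a commit object with 'git cat-file', using a throwaway repository created just for illustration:

```shell
# Create a throwaway repository with a single commit, then read the commit object.
work=$(mktemp -d)
git init -q "$work"
echo 'hello' > "$work/file.txt"
git -C "$work" add file.txt
git -C "$work" -c user.email=dev@example.com -c user.name=dev commit -q -m 'first commit'

# -t prints the object type; -p pretty-prints the object contents
git -C "$work" cat-file -t HEAD     # commit
git -C "$work" cat-file -p HEAD     # tree <sha>, author, committer, message
```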
|
| 30. |
What is Git? |
|
Answer» Git is a distributed version control system, used to logically store and back up the entire history of how your project source code has developed, keeping track of every version change of the code. Git facilitates very flexible and efficient branching and merging of your code with other collaborators. Being distributed, git is extremely fast and more reliable, as every developer has his own local copy of the entire repository. Git allows you to undo mistakes in the source code at the different tiers of its architecture, namely the working directory, staging (index) area, local repository, and remote repository. Using git we can always get an older version of our source code and work on it. Git tracks every bit of data, as it checksums every file into a unique hash code and refers to files via these pointers. To summarize, git is the most efficient and widely used VCS, used by major companies and projects like Linux, Google, Facebook, Microsoft, Twitter, LinkedIn, Netflix, Android, Amazon, IBM, and Apple iOS, to name a few. |
|
| 31. |
What are some of the tools you have used in DevOps approach? |
|
Answer» Considering DevOps to be an ideology towards achieving a quality product, every organization has its own guidelines and approach towards it.
|
|
| 32. |
What is DevOps? |
|
Answer» DevOps is an approach that brings the development and operations teams together for better, bug-free continuous delivery and integration of the source code. CI/CD refers to the continuous integration and continuous deployment methodologies. Every team takes accountability for the entire product, right from the requirement analysis to documentation, coding, testing in development environments, code deployment, and continuous improvement in terms of bugs and feedback from reviewers and customers. |
|
| 33. |
How does the Service Management process of ITIL Service Transition phase map in a DevOps organization? |
|
Answer» How does the Service Management process of ITIL Service Transition phase map in a DevOps organization? |
|
| 34. |
Which DevOps principle appreciates measuring processes, people and tools? |
|
Answer» Which DevOps principle appreciates measuring processes, people and tools? |
|
| 35. |
Which of these statements are true |
|
Answer» Which of these statements are true |
|
| 36. |
What is NOT an aspect related to governing work in an organization? |
|
Answer» What is NOT an aspect related to governing work in an organization? |
|
| 37. |
What is the difference between Continuous Delivery and Continuous Deployment? |
|
Answer» What is the difference between Continuous Delivery and Continuous Deployment? |
|
| 38. |
The DevOps movement has evolved to solve which problem? |
|
Answer» The DevOps movement has evolved to solve which problem? |
|
| 39. |
DevOps is a |
|
Answer» DevOps is a |
|
| 40. |
The origins of the DevOps trace back to when |
|
Answer» The origins of the DevOps trace back to when |
|
| 41. |
What is NOT an appropriate predictor of IT performance in a DevOps environment? |
|
Answer» What is NOT an appropriate predictor of IT performance in a DevOps environment? |
|
| 42. |
Removing unplanned work is a key principle of DevOps |
|
Answer» Removing unplanned work is a key principle of DevOps. Choose the correct option from the list below: (1) True (2) False. Answer: (1) True |
|
| 43. |
One of the most common differences between DevOps and Agile SDLC? |
|
Answer» One of the most common differences between DevOps and Agile SDLC? |
|
| 44. |
The DevOps movement is an outgrowth of which software development methodology? |
|
Answer» The DevOps movement is an outgrowth of which software development methodology? |
|
| 45. |
What is not a challenge between the Development and Operations teams in a traditional organization |
|
Answer» What is not a challenge between the Development and Operations teams in a traditional organization |
|
| 46. |
What type of tasks are characterized by low task variability and high task analyzability? |
|
Answer» What type of tasks are characterized by low task variability and high task analyzability? |
|
| 47. |
True or False: DevOps automation tools rely on coding skills. |
|
Answer» True or False: DevOps automation tools rely on coding skills. Choose the correct option from the list below: (1) True (2) False. Answer: (2) False |
|
| 48. |
What type of mindset is the core of a DevOps culture? |
|
Answer» What type of mindset is the core of a DevOps culture? |
|
| 49. |
The goal of DevOps is not just to increase the rate of change but to successfully deploy features into production without causing chaos and disrupting other services |
|
Answer» The goal of DevOps is not just to increase the rate of change but to successfully deploy features into production without causing chaos and disrupting other services |
|
| 50. |
Which of these tools is not associated with DevOps? |
|
Answer» Which of these tools is not associated with DevOps? |
|