
1.

What are the security-related features in OpenShift Container Platform that are based on Kubernetes?

Answer»

The following are security-related features in the OpenShift Container Platform, which are based on Kubernetes:

  • Multitenancy is a technique for isolating containers at many levels by combining Role-Based Access Controls with network restrictions.
  • Admission plug-ins act as a barrier between the API and the clients that make requests to it.
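The multitenancy point can be illustrated with an RBAC binding. The sketch below (all names — `my-project`, `alice` — are hypothetical) grants a user the built-in `edit` cluster role in a single namespace only, so access does not leak across tenants:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-edit
  namespace: my-project      # the binding is scoped to this project only
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in role: modify most objects, no RBAC changes
  apiGroup: rbac.authorization.k8s.io
```

Combined with network policies between namespaces, this is the isolation the answer describes.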

2.

On which platforms can you deploy an OpenShift Container Platform?

Answer»

On the following platforms, you can deploy an OpenShift Container Platform 4.9 cluster that employs installer-provisioned infrastructure:

  • Amazon Web Services (AWS).
  • Google Cloud Platform (GCP).
  • Microsoft Azure.
  • Red Hat OpenStack Platform (RHOSP) versions 13 and 16.

The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix.

  • Red Hat Virtualization (RHV).
  • VMware vSphere.
  • VMware Cloud (VMC) on AWS.
  • Bare metal.
3.

What is HAProxy?

Answer»

HAProxy is an open-source, software-defined load balancer and proxy application. In OpenShift, it accepts an application's URL route and proxies requests to the appropriate pod to return the needed data to the requesting user.

If our application is scaled out, HAProxy accepts all incoming connections and examines the HTTP protocol to determine which application instance each connection should be forwarded to. This is significant because it allows users to have sticky sessions.
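The URL route that HAProxy serves is defined by a Route object. A minimal sketch (hostname, service, and port are hypothetical; this requires a running cluster to apply):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend                       # hypothetical route name
spec:
  host: frontend.apps.example.com      # hypothetical external hostname
  to:
    kind: Service
    name: frontend                     # requests are proxied to this service's pods
  port:
    targetPort: 8080
```

The router (HAProxy) watches Route objects like this one and forwards matching HTTP traffic to the backing pods.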

4.

What are the benefits of using Docker And Kubernetes in Openshift?

Answer»

Docker aids in the creation of lightweight Linux-based containers, while Kubernetes aids in container orchestration and management. Docker and Kubernetes are the foundations of OpenShift. All of the containers are constructed on top of a Docker cluster, which is essentially a Kubernetes service running on top of Linux machines and employing Kubernetes orchestration. During this process, we create a Kubernetes master that manages all of the nodes and deploys containers to all of them. The basic purpose of Kubernetes is to control the OpenShift cluster and deployment flow through a different type of configuration file. In Kubernetes, we use kubectl to build and deploy containers on cluster nodes in the same way that we use the oc command-line interface in OpenShift.

5.

Define the terms "blue" and "green" deployments.

Answer»

The Blue/Green deployment approach reduces the time it takes to complete a deployment transfer by ensuring that two versions of the application stacks are available at the same time. We can leverage service and routing tiers to seamlessly swap between two running application stacks. As a result, performing a rollback is simple and rapid.
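One way to do the swap at the routing tier is with `oc set route-backends`, which reweights a route between two services. A sketch, assuming a route `myapp` and two stacks exposed as services `myapp-blue` and `myapp-green` (all names hypothetical; the commands require a running cluster):

```shell
# Send all traffic to the blue stack while green is being prepared:
oc set route-backends myapp myapp-blue=100 myapp-green=0

# Cut over to green in one step:
oc set route-backends myapp myapp-blue=0 myapp-green=100

# Rollback is equally fast: point the route back at blue.
oc set route-backends myapp myapp-blue=100 myapp-green=0
```

Because both stacks stay running, the switch and any rollback are just a change of route weights, not a redeployment.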

6.

What are platform operators?

Answer»

Operators are one of the most crucial parts of the OpenShift Container Platform. On the control plane, Operators are the preferred technique for packaging, deploying, and managing services. They can also be used to manage user applications.

All cluster functions in OpenShift Container Platform are organised into a set of default platform Operators, also known as cluster Operators. Each is responsible for a specific aspect of cluster functionality, such as cluster-wide application logging, Kubernetes control plane administration, or the machine provisioning system. Platform Operators are defined through a ClusterOperator object, which cluster administrators can inspect on the Administration Cluster Settings page of the OpenShift Container Platform web dashboard.
Each platform Operator provides a simple API for determining cluster functionality. The Operator hides the complexities of managing the component's lifecycle. Operators might manage a single component or tens of thousands of components, but the ultimate goal is to reduce operational strain by automating typical tasks.
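The ClusterOperator objects mentioned above can also be inspected from the CLI. A sketch (requires a running cluster; `authentication` is just one example operator name):

```shell
# List the default platform (cluster) Operators and their status:
oc get clusteroperators

# Inspect the ClusterOperator object for a single component,
# e.g. the authentication operator:
oc describe clusteroperator authentication
```

The `AVAILABLE`, `PROGRESSING`, and `DEGRADED` columns in the listing are the simple API each Operator exposes for determining the health of its slice of cluster functionality.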

7.

State the basic steps involved in the lifecycle of the OpenShift Container Platform.

Answer»

The lifecycle of the OpenShift Container Platform consists of the following basic steps:

  • Creating a cluster for the OpenShift Container Platform
  • Cluster management
  • Application development and deployment
  • Application scalability
8.

What do you understand by service mesh?

Answer»

In a distributed microservice architecture, a service mesh is the network of microservices that make up applications, as well as the interactions between those microservices. It might be difficult to comprehend and maintain a service mesh as it grows in size and complexity.

Red Hat OpenShift Service Mesh, which is based on the open-source Istio project, adds a transparent layer to existing distributed systems without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh functionality to services by deploying a special sidecar proxy alongside the relevant services in the mesh; the proxy intercepts all network communication between microservices. The control plane features are used to configure and administer the Service Mesh.

Red Hat OpenShift Service Mesh makes it simple to build a network of deployed services that do the following:

  • Discovery
  • Load balancing
  • Service-to-service authentication
  • Failure recovery
  • Metrics
  • Monitoring

Red Hat OpenShift Service Mesh also has more advanced operational features, such as:

  • A/B testing
  • Canary releases
  • Rate limiting
  • Access control
  • End-to-end authentication

Red Hat OpenShift Service Mesh is made up of a data plane and a control plane.

The data plane is a set of intelligent proxies that run alongside application containers in a pod and intercept and regulate all inbound and outbound network communication between microservices in the service mesh. The data plane is designed to intercept all network traffic, both inbound (ingress) and outbound (egress). The control plane manages and configures the data plane's proxies. It maintains access control and usage policies, collects metrics from the service mesh's proxies, and is the definitive source for configuration.
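In OpenShift Service Mesh, a workload opts into the data plane by annotating its pod template so the sidecar proxy is injected. A sketch of a Deployment (workload name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews                          # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: reviews
  template:
    metadata:
      labels:
        app: reviews
      annotations:
        sidecar.istio.io/inject: "true"  # ask the mesh to inject the sidecar proxy
    spec:
      containers:
        - name: reviews
          image: example.com/reviews:v1  # hypothetical application image
```

Once injected, the sidecar transparently carries all of this pod's service-to-service traffic, which is how the mesh adds its features without changing the service code.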

9.

What Is Openshift's Downward API?

Answer»

The downward API enables containers to access information about API objects without being coupled to the OpenShift Container Platform. The pod's name, namespace, and resource values are examples of such information. Containers can use environment variables or a volume plug-in to consume data from the downward API.

You can get the following metadata and use it to configure the running pods:

  • Labels
  • Annotations
  • Name of the pod, namespace, and IP address
  • Limit information and pod CPU/memory request
  • Certain data can be mounted as an environment variable in the pod, while other data can be accessed as files within a volume.
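The environment-variable path can be sketched with a pod that exposes its own name and namespace via `fieldRef` (pod name and image are hypothetical; applying this requires a cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo                            # hypothetical pod name
spec:
  containers:
    - name: main
      image: registry.access.redhat.com/ubi8/ubi # example base image
      command: ["sh", "-c", "echo $MY_POD_NAME in $MY_POD_NAMESPACE"]
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name           # injected by the downward API
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```

The container reads its own metadata from plain environment variables, with no API client or platform coupling in the application code.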
10.

What is a build configuration?

Answer»

A build configuration describes a single build definition and a set of triggers for when a new build is created. A BuildConfig is a REST object that may be used in a POST to the API server to construct a new instance. A BuildConfig is defined by a build strategy and one or more sources. The strategy determines the process, and the sources provide the input.

A BuildConfig is automatically generated for you if you use the web console or CLI to create your application with OpenShift Container Platform, and it may be updated at any time. Understanding the components of a BuildConfig and the choices available to them can be useful if you decide to update your configuration manually later. Thus, a build configuration contains information about a specific build strategy as well as the location of developer-supplied artefacts such as the output image.
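The strategy/sources/output/triggers structure described above can be sketched as a minimal BuildConfig (repository URL, names, and builder image are all hypothetical):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-build                              # hypothetical name
spec:
  source:                                        # developer-supplied input
    git:
      uri: https://example.com/myorg/myapp.git   # hypothetical repository
  strategy:                                      # how the build runs
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest                      # hypothetical builder image
  output:                                        # where the result goes
    to:
      kind: ImageStreamTag
      name: myapp:latest
  triggers:
    - type: ConfigChange                         # rebuild when this BuildConfig changes
```

Each top-level field maps directly onto the answer: `source` is the input, `strategy` is the process, `output` is the developer-facing artefact, and `triggers` decide when a new build is produced.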

11.

What distinguishes OpenShift from Kubernetes in terms of its components?

Answer»

The internal registry and the router, acting as ingress traffic control, are the components that distinguish OpenShift from Kubernetes.

12.

To provide connectivity for pods throughout a full cluster, name the network plugin.

Answer»

'ovs-subnet' is the network plugin that allows unrestricted connectivity for pods across the entire cluster.

13.

What are the toggles for the features?

Answer»

Feature toggles are another common topic in OpenShift interview questions. These are methods for integrating old and new versions of a feature into a single code base. The versions are surrounded by logic that selects which one executes, depending on elements such as a database switch or a property value. Feature toggles make it easier to separate deployment from use, and to manage single groups, legacy systems, and multiple server groups.
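The mechanism is language-agnostic; a minimal Python sketch shows both versions living in one code base behind a switch (the in-memory `FLAGS` dict stands in for a database switch or property value, and all names are hypothetical):

```python
# Hypothetical toggle store; in production this would be a database
# flag, a config property, or a feature-flag service.
FLAGS = {"new_checkout": False}

def is_enabled(flag: str) -> bool:
    """Look up a toggle; unknown flags default to off."""
    return FLAGS.get(flag, False)

def checkout(cart_total: float) -> str:
    if is_enabled("new_checkout"):
        # New version: apply a 5% promotional discount.
        return f"charged {cart_total * 0.95:.2f}"
    # Legacy version stays deployed but dormant until the toggle flips.
    return f"charged {cart_total:.2f}"
```

Flipping `FLAGS["new_checkout"] = True` switches behaviour at runtime without a redeployment, which is exactly the deployment-versus-use separation the answer describes.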

14.

What is Volume Security and how does it work?

Answer»

Volume security refers to ensuring that the persistent volumes (PV) and persistent volume claims (PVC) of OpenShift projects are secure. In OpenShift, there are primarily four segments that manage volume access:

  • supplementalGroups
  • fsGroup
  • runAsUser
  • seLinuxOptions
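All four segments appear in a pod's `securityContext`. A sketch (names, IDs, and the SELinux level are hypothetical values; what is permitted is ultimately governed by the project's security context constraints):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-security-demo                     # hypothetical name
spec:
  securityContext:
    supplementalGroups: [5555]                   # extra group IDs, e.g. for shared NFS storage
    fsGroup: 5555                                # group ownership applied to block-storage volumes
    runAsUser: 1000                              # UID the container processes run as
    seLinuxOptions:
      level: "s0:c123,c456"                      # SELinux MCS label applied to the pod
  containers:
    - name: main
      image: registry.access.redhat.com/ubi8/ubi # example base image
      command: ["sleep", "infinity"]
```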
15.

What are OAuth’s Identity Providers?

Answer»

The OAuth identity providers are as follows:

  • LDAP
  • Deny All
  • HTTP Passwords
  • Basic Authentication
  • Allow All
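An identity provider is configured by editing the cluster-wide OAuth resource. A sketch using the HTPasswd provider (the provider name and the secret name holding the htpasswd file are hypothetical):

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster                      # the cluster-wide OAuth configuration object
spec:
  identityProviders:
    - name: my_htpasswd_provider     # hypothetical provider name shown on the login page
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret        # hypothetical secret containing the htpasswd file
```

Swapping `type` (and the matching provider-specific block) to `LDAP`, `BasicAuth`, `AllowAll`, or `DenyAll` selects one of the other providers from the list above.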
16.

What do you know about OpenShift Kubernetes Engine?

Answer»

Red Hat's OpenShift Kubernetes Engine allows you to leverage an enterprise-class Kubernetes platform as a production platform for deploying containers. Because OpenShift Kubernetes Engine and OpenShift Container Platform are the same binary distribution, you can download and install them both. OpenShift Kubernetes Engine gives you full access to an enterprise-ready Kubernetes environment that is simple to set up and has a large compatibility test matrix with many of the software components in your data centre.

OpenShift Kubernetes Engine comes with the same SLAs, bug fixes, and protection against common vulnerabilities and exposures as the OpenShift Container Platform. Red Hat Enterprise Linux (RHEL) Virtual Datacenter and Red Hat Enterprise Linux CoreOS (RHCOS) entitlements are included with OpenShift Kubernetes Engine, allowing you to employ an integrated Linux operating system with container runtime from the same technology supplier.

17.

What do you mean by autoscaling in OpenShift?

Answer»

Autoscaling is an OpenShift feature that allows deployed applications to scale up and down as needed based on certain parameters. In the OpenShift application, autoscaling is also known as pod autoscaling. The following are the two types of application scaling.

  • Vertical scaling: Vertical scaling is the process of gradually increasing the processing power of a single machine, which entails adding extra CPU and hard disk. This is an older OpenShift technique that is no longer supported by newer OpenShift releases.
  • Horizontal scaling: This form of scaling is used when more requests need to be handled and the number of machines needs to be increased.

There are two ways to enable the scaling capability in OpenShift.

  • Using the deployment configuration file.
  • While deploying the image.
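Horizontal pod autoscaling can also be enabled from the CLI with `oc autoscale`, which creates a HorizontalPodAutoscaler for a workload. A sketch (the deployment name `frontend` is hypothetical; the command requires a running cluster):

```shell
# Scale deployment "frontend" between 2 and 10 replicas,
# targeting 80% average CPU utilisation:
oc autoscale deployment/frontend --min=2 --max=10 --cpu-percent=80
```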
18.

What are the advantages of using OpenShift?

Answer»

The advantages of using OpenShift are:

  • You can work with a variety of apps, including traditional, modernised, and cloud-native. AI/ML, Java, data analytics, databases, and other high-demand applications are all supported.
  • OpenShift allows simple connection with CI/CD pipeline building tools like Jenkins, thanks to automation at multiple stages of the application.
  • To avoid account compromise, OpenShift provides role-based access control (RBAC), which ensures that each developer only has access to the functionality they require. Other security rules, like IAM and OAuth, are set by default when you create a project with OpenShift. You simply need to add permissions to users as needed.
  • OpenShift allows web hosting and design organisations to connect developers and operations personnel to successfully build, test, and deploy applications by allowing them to cooperate quickly. Red Hat also improved the developer experience by providing additional CLI tooling and a web-based user interface that allows users to control all of the OpenShift platform's features.
  • OpenShift has monitoring and logging tools out of the box to streamline the development process and unify the deployment and operation of applications.
  • OpenShift allows for effective container orchestration, enabling quick provisioning, deployment, scaling, and administration of containers. Deployments are completed faster and at a lower cost.
19.

What is the procedure followed in Red Hat when dealing with a new incident?

Answer»

An incident is an occurrence that causes one or more Red Hat services to degrade or go down. An incident can be reported by a client or by a member of the Customer Experience and Engagement (CEE) team via a support case, by the centralised monitoring and alerting system, or by a member of the SRE team. The severity of an incident is determined by its impact on the service and the client. When dealing with a new incident, Red Hat follows this procedure:

  • When a new incident is reported, an SRE first responder is notified and begins an investigation.
  • After the initial inquiry, the incident is assigned an incident lead, who coordinates the recovery operations.
  • The incident lead is in charge of all recovery communication and coordination, as well as any necessary notifications and support case updates.
  • The incident is resolved.

Within three business days of the incident, the issue is documented and a root cause analysis (RCA) is conducted. Within seven business days of the incident, the customer receives a drafted RCA document.

20.

What systems are running on AWS in the OpenShift environment?

Answer»

One master node and one infrastructure node make up the OpenShift environment on Amazon Web Services. An NFS server and 24 application nodes are also included.