This chapter describes how to generate a working OpenShift template and which OpenShift commands are useful.
{{admon/important|This page is deprecated| All Fedora Modularity Documentation has moved to the new [https://docs.pagure.org/modularity/ Fedora Modularity Documentation website] with source hosted along side the code in the [https://pagure.io/modularity Fedora Modularity website git repository]}}
 
= OpenShift deployment possibilities =
OpenShift uses an abstraction called a deployment to deploy applications. A deployment can basically be thought of as a load balancer for pods.
 
A pod is the smallest deployable unit in OpenShift and is composed of one or more containers. These containers share an IP address and volumes, are always deployed together on a single host, and are scaled together as a single unit.
 
== Scenario: one pod and two containers ==
 
This scenario is useful when you want two containers in one pod, where one is open to everyone and the second serves as a “hidden” database, for example an internal registry with a hidden database.
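A minimal sketch of this scenario could look like the following pod definition with two containers (all names and images here are illustrative placeholders, not part of the original tooling):
<pre>
apiVersion: v1
kind: Pod
metadata:
  name: registry-with-db          # hypothetical pod name
  labels:
    name: registry-with-db
spec:
  containers:
  - name: frontend                # exposed to users
    image: internal-registry      # placeholder image name
    ports:
    - containerPort: 8080
  - name: database                # "hidden": reachable only from inside the pod
    image: some-database          # placeholder image name
</pre>
Because both containers live in one pod, the frontend can reach the database on <code>localhost</code>, and only the frontend port needs to be exposed.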
 
= OpenShift linter command =
Once you have written an OpenShift template, you will want to check whether all fields are filled in properly. For verifying templates, an '''oc lint''' (or '''oc_linter''') command would be welcome, but it does not exist yet.
 
A really basic YAML checker is ''yamllint <YAML_NAME>'', but it does not check anything OpenShift-specific.
 
I have already filed an RFE issue on GitHub: [https://github.com/openshift/origin/issues/12404 OpenShift issue #12404].
 
= How to generate working template for OpenShift =
We need the templates in order to test our containers on OpenShift, so we should simplify template generation.
I have already filed an RFE on OpenShift GitHub: [https://github.com/openshift/origin/issues/12402 GitHub RFE].
The set of scripts described below can help users test their containers together with OpenShift.
It may not be the canonical way, but it works for testing purposes.
 
== Creating template with oc command ==
In order to create a working template with the '''oc''' command, only two steps are needed.
* Run command:
<pre>
oc new-app <docker_image_name>
</pre>
* Run command:
<pre>
oc export dc/<service_name>
</pre>
The service name can be taken from the output of the previous command; it is identical to the image name.
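The exported definition can then be wrapped into a template object. A minimal template skeleton (a sketch; names are illustrative) looks roughly like:
<pre>
apiVersion: v1
kind: Template
metadata:
  name: my-image-template         # hypothetical template name
objects:
- apiVersion: v1
  kind: DeploymentConfig          # typically the output of 'oc export dc/<service_name>'
  metadata:
    name: my-image
  spec:
    replicas: 1
    ...                           # rest of the exported deploymentconfig
parameters: []
</pre>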
 
== Creating a template with our tool ==
===Prerequisites===
* Clone GitHub repository: [https://github.com/phracek/modularity_tools Petr Hracek modularity_tools]
** The tools will be moved soon into repository [https://pagure.io/modularity/modularity-tools Pagure modularity-tools]
* Switch into your container directory. The directory has to contain a '''Dockerfile''' (or a variant such as '''Dockerfile.RHEL''') and '''[https://github.com/container-images/container-image-template/blob/master/openshift-template.yml openshift.yml]'''
** Both files are needed for proper template generation.
** If the '''Dockerfile''' contains ''ENV'', ''VOLUME'' or ''EXPOSE'' directives, they are added into the OpenShift template.
* Build your container image with the '''docker build ...''' command. Do '''NOT''' use '_' in the image name.
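As an illustration of the ''ENV''/''VOLUME''/''EXPOSE'' mapping mentioned above, a Dockerfile fragment like the following (values are made up):
<pre>
EXPOSE 8080
ENV DB_HOST=localhost
VOLUME /var/lib/data
</pre>
would be reflected in the generated template's container spec roughly as:
<pre>
ports:
- containerPort: 8080
env:
- name: DB_HOST
  value: localhost
volumeMounts:
- mountPath: /var/lib/data
  name: data
</pre>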
 
===How to feed the template into OpenShift===
* From the [https://github.com/phracek/modularity_tools modularity_tools] repository, run command:<pre>get_oc_registry</pre>
** '''get_oc_registry''' gets the IP address of your OpenShift docker registry and stores it in the file '''~/.config/openshift_ip.ini'''
* In order to build OpenShift template from your container directory, run command:<pre>build_oc_template.py <IMAGE_NAME></pre>
** If your Dockerfile has a different name, such as '''Dockerfile.RHEL''', add the option '''--dockerfile Dockerfile.RHEL'''
** The template is stored in '''/tmp/<template_dir>/openshift-template.yml'''
* For tagging your built image into OpenShift internal docker registry, run command:<pre>tag_into_oc_registry <IMAGE_NAME></pre>
** The command adds the image into OpenShift internal docker registry
* For adding the template into OpenShift, run command:<pre>oc create -f /tmp/<template_dir>/openshift-template.yml</pre>
* The last step, deploying the '''template''' named ''IMAGE_NAME'', is done through the OpenShift UI. By default:<pre>"My Project" -> "Add to project" -> select your template named "IMAGE_NAME" in "Browsed Catalog" -> deploy it.</pre>
* To get a template back from a running pod/deploymentconfig/imagestream, run command:<pre>oc export {pod|dc|is}/<name> > output.yml</pre>
** The names can be obtained with <pre>oc get {pod|dc|is}</pre>
 
=How to run container as a root under OpenShift=
 
Nowadays, the OpenShift team provides a command for running a container under OpenShift with root privileges:
<pre>
oadm policy add-scc-to-user anyuid system:serviceaccount:<namespace>:default
</pre>
where ''namespace'' is the project name. The default one is ''myproject''.
 
The script [https://github.com/phracek/modularity_tools/blob/master/add_anyuid_to_project.sh add_anyuid_to_project.sh] does this automatically. The required argument is the project name, in our case '''myproject'''.
 
=General commands with examples for using OpenShift=
All commands in this section should be prefixed with '''sudo'''.
* To check whether OpenShift is running, run command:
<pre>
$ oc status
 
In project My Project (myproject) on server https://10.200.136.26:8443
dc/postfix-tls deploys istag/postfix-tls:latest
  deployment #1 deployed 42 minutes ago - 1 pod
2 warnings identified, use 'oc status -v' to see details.
</pre>
 
* To display all resources of one kind (pods, deploymentconfigs, or imagestreams), run command:
<pre>
$ oc get <pod|dc|is>
 
$ oc get pod
NAME                  READY    STATUS    RESTARTS  AGE
postfix-tls-1-kf0ud  1/1      Running  0          42m
$ oc get dc
NAME          REVISION  DESIRED  CURRENT  TRIGGERED BY
postfix-tls  1          1        1        image(postfix-tls:latest)
 
</pre>
 
* To list the services available on OpenShift, run command:
<pre>
$ oc get svc
</pre>
 
* To show details of a specific resource (pods, services, etc.), run command:
<pre>
oc describe {pod|dc|is|svc} <name>
 
$ oc describe pod postfix-tls-1-kf0ud
Name:            postfix-tls-1-kf0ud
Namespace:        myproject
Security Policy:    anyuid
Node:            10.200.136.26/10.200.136.26
Start Time:        Fri, 20 Jan 2017 12:55:41 +0100
Labels:            deployment=postfix-tls-1
            deploymentconfig=postfix-tls
            name=postfix-tls
Status:            Running
IP:            172.17.0.3
Controllers:        ReplicationController/postfix-tls-1
Containers:
  postfix-tls:
    Container ID:    docker://6664727b761de3498eb863457aa4554820645b21dbea7e5b9a8a4d0382b22e7f
    Image:        postfix-tls
[..snip..]
  43m        43m        1    {kubelet 10.200.136.26}   spec.containers{postfix-tls}   Normal        Created        Created container with docker id 6664727b761d
  43m        43m        1    {kubelet 10.200.136.26}    spec.containers{postfix-tls}    Normal        Started        Started container with docker id 6664727b761d
</pre>
 
* To restart a pod, scale its deploymentconfig to zero and back:
<pre>
oc scale --replicas=0 dc/<name>
oc scale --replicas=1 dc/<name>
</pre>
 
* For deploying a template, run command:
<pre>
oc deploy <deployment_name> --latest -n <project_name> # the default project is myproject
</pre>
 
* For creating new POD, run command:
<pre>
oc new-app <docker_image>
</pre>
 
* For switching into system:admin, run command:
<pre>
oc login -u system:admin
</pre>
 
* For switching to developer mode, run command (default password is developer):
<pre>
oc login -u developer
</pre>
 
* For modifying Security Context Constraints, switch to system:admin and run command:
<pre>
oc get scc -o json | jq …. | oc replace -f -
</pre>
Once this is done, switch back to developer mode.
 
* For getting Security Context Constraints, run command:
<pre>
oc get scc
NAME              PRIV      CAPS      SELINUX    RUNASUSER          FSGROUP    SUPGROUP    PRIORITY  READONLYROOTFS  VOLUMES
anyuid            false    []        MustRunAs  RunAsAny          RunAsAny    RunAsAny    10        false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]
[..snip..]
privileged        true      []        RunAsAny    RunAsAny          RunAsAny    RunAsAny    <none>    false            [*]
restricted        false    []        MustRunAs  MustRunAsRange    MustRunAs  RunAsAny    <none>    false            [configMap downwardAPI emptyDir persistentVolumeClaim secret]
</pre>
 
* To get the YAML definition of a specific ImageStream, run command:
<pre>
oc get -o yaml is/<name>
</pre>
 
* To get the YAML definition of a specific deploymentconfig, run command:
<pre>
oc get -o yaml dc/<name> # name is taken from oc get dc
</pre>
 
* For deleting a deployment, run command:
<pre>
oc delete dc/<name> # name is taken from oc get dc
</pre>
 
* For running a container as root, run command:
<pre>
oadm policy add-scc-to-user anyuid system:serviceaccount:<namespace>:default
</pre>
 
The command grants access for that namespace (only) to run pods as the root UID. It is less secure than ''restricted'', but recommended if you must run as root. It still does not allow privileged containers or host namespaces (network, pid, ipc). It only drops the ''mknod'' and ''sys_chroot'' capabilities (not ''kill'', ''setuid'', and ''setgid'' as ''restricted'' does).
 
=How to debug service from OpenShift point of view=
This URL shows how to [https://docs.openshift.com/enterprise/3.1/admin_guide/sdn_troubleshooting.html#debugging-a-service debug a service]. Basically it is a pod readiness issue, so the ''oc get pod'' command and the others mentioned below can help.
 
=Running your service in OpenShift environment=
OpenShift brings some security restrictions which make it tough to “just run” your containerized services. This means that your service may run easily in a docker container, but it may not be trivial to deploy it in an OpenShift environment. Here is a list of sample steps to start the process of integration:
 
* If your container expects some mounts and you would like to perform the mounting directly from host, here’s how to do it (by default this is forbidden):
** Login as system:admin <pre>$ oc login -u system:admin</pre>
** [https://docs.openshift.org/latest/admin_guide/manage_scc.html#use-the-hostpath-volume-plugin Change restricted security context to allow host mounts.]
** Login back as developer <pre>$ oc login -u developer</pre>
* Here is [https://gist.github.com/TomasTomecek/70853c1de07da7f4bd0c1c42526e8aca a simple, minimal pod spec] which takes your container image and runs bash inside so you can quickly iterate.
* Run it. <pre>oc create -f ./pod.yml</pre>
* Attach to shell within the container <pre>$ oc attach -t -i caching-dns-server</pre>
** And now you can directly run the service and see what’s happening
* In case something goes wrong, here’s how to get more info:
<pre>
$ oc logs caching-dns-server
$ oc describe pod caching-dns-server
</pre>
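The minimal pod spec linked above might look something like this sketch (the image is a placeholder; the idea is to override the entrypoint with a long-running shell):
<pre>
apiVersion: v1
kind: Pod
metadata:
  name: caching-dns-server
spec:
  containers:
  - name: caching-dns-server
    image: <your_image>           # placeholder: your container image
    command: ["/bin/bash"]        # run bash instead of the service
    stdin: true
    tty: true
</pre>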
 
=Links=
* [https://docs.openshift.org/latest/welcome/index.html Main OpenShift documentation]
* [https://success.docker.com/Datacenter/Apply/Introduction_to_User_Namespaces_in_Docker_Engine Introduction to user namespaces in the Docker engine]

Latest revision as of 07:56, 20 February 2017
