Dataset schema (per-question fields):
  question_id   int64    range 82.3k to 79.7M
  title_clean   string   length 15 to 158
  body_clean    string   length 62 to 28.5k
  full_text     string   length 95 to 28.5k (title_clean + body_clean)
  tags          string   length 4 to 80
  score         int64    range 0 to 1.15k
  view_count    int64    range 22 to 1.62M
  answer_count  int64    range 0 to 30
  link          string   length 58 to 125
30,182,984
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
I downloaded Laravel for OpenShift from [URL] and then pushed it to my repository on GitHub. The database connection code does not work. The problem is how the variables are loaded from the .env file located in the project root. To solve this I changed .env: # local environment only # for production, see .openshift/.env APP_ENV=APPLICATION_ENV APP_DEBUG=true APP_URL=OPENSHIFT_APP_DNS APP_KEY=OPENSHIFT_SECRET_TOKEN DB_DRIVER=mysql DB_HOST=OPENSHIFT_MYSQL_DB_HOST DB_PORT=OPENSHIFT_MYSQL_DB_PORT DB_DATABASE=OPENSHIFT_APP_NAME DB_USERNAME=OPENSHIFT_MYSQL_DB_USERNAME DB_PASSWORD=OPENSHIFT_MYSQL_DB_PASSWORD CACHE_DRIVER=apc SESSION_DRIVER=file My error: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known createConnection('mysql:host=OPENSHIFT_MYSQL_DB_HOST;port=OPENSHIFT_MYSQL_DB_PORT;dbname=OPENSHIFT_APP_NAME', array('driver' => 'mysql', 'host' => 'OPENSHIFT_MYSQL_DB_HOST', 'port' => 'OPENSHIFT_MYSQL_DB_PORT', 'database' => 'OPENSHIFT_APP_NAME', 'username' => 'OPENSHIFT_MYSQL_DB_USERNAME', 'password' => 'OPENSHIFT_MYSQL_DB_PASSWORD', 'charset' => 'utf8', 'collation' => 'utf8_unicode_ci', 'prefix' => '', 'strict' => false, 'name' => 'mysql'), array('0', '2', '0', false, '0')) in MySqlConnector.php line 20
Tags: mysql, laravel, laravel-5, openshift
Score: 71 | Views: 406,532 | Answers: 22
https://stackoverflow.com/questions/30182984/sqlstatehy000-2002-php-network-getaddresses-getaddrinfo-failed-name-or-ser
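The .env shown above assigns the literal placeholder names (DB_HOST=OPENSHIFT_MYSQL_DB_HOST and so on), so PDO tries to resolve the string "OPENSHIFT_MYSQL_DB_HOST" as a hostname, which is exactly what getaddrinfo reports. A hedged sketch of a build hook that writes the gear's real values into .env (the OPENSHIFT_* variables are standard OpenShift v2 gear variables; the hook path is an assumption, not taken from the question's repo):

```bash
#!/bin/bash
# Hypothetical .openshift/action_hooks/build: regenerate .env from the values
# OpenShift injects into the gear environment, so Laravel reads real endpoints.
cat > "${OPENSHIFT_REPO_DIR}/.env" <<EOF
APP_ENV=production
DB_DRIVER=mysql
DB_HOST=${OPENSHIFT_MYSQL_DB_HOST}
DB_PORT=${OPENSHIFT_MYSQL_DB_PORT}
DB_DATABASE=${OPENSHIFT_APP_NAME}
DB_USERNAME=${OPENSHIFT_MYSQL_DB_USERNAME}
DB_PASSWORD=${OPENSHIFT_MYSQL_DB_PASSWORD}
EOF
```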
47,181,821
Using Keycloak behind a reverse proxy: Could not open Admin loginpage because mixed Content
So I have a problem getting Keycloak 3.2.1 to work behind Kong (0.10.3), a reverse proxy based on nginx. The scenario is: I call Keycloak via my gateway route via [URL] and it shows me the entry point with the Keycloak logo, link to the admin console etc. - so far so good. But when clicking on administration console -> calling [URL] , Keycloak tries to load its CSS/JS via http (see screenshot below), which my browser blocks because of mixed content. I searched around and found this thread: keycloak apache server configuration with 'Mixed Content' problems which led to this github repo: [URL] From there on, I integrated its CLI script into my Dockerfile successfully (I did not change the files' contents, just copied them into my repo and add/run them from the Dockerfile). This is my Dockerfile right now: FROM jboss/keycloak-postgres:3.2.1.Final USER root ADD config.sh /tmp/ ADD batch.cli /tmp/ RUN bash /tmp/config.sh #Give correct permissions when used in an OpenShift environment. RUN chown -R jboss:0 $JBOSS_HOME/standalone && \ chmod -R g+rw $JBOSS_HOME/standalone USER jboss EXPOSE 8080 Sadly, my problem still exists. So I am out of ideas for now and hope you could help me out: How do I tell Keycloak to load its CSS files via https here? Do I have to change something in the CLI script? Here's the content of the script: config.sh: #!/bin/bash -x set -e JBOSS_HOME=/opt/jboss/keycloak JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh JBOSS_MODE=${1:-"standalone"} JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"} echo "==> Executing..." cd /tmp $JBOSS_CLI --file=$(dirname "$0")/batch.cli # cf. [URL] /bin/rm -rf ${JBOSS_HOME}/${JBOSS_MODE}/configuration/${JBOSS_MODE}_xml_history/current and batch.cli: embed-server --std-out=echo # [URL] # 3.2.7.2. Enable SSL on a Reverse Proxy # First add proxy-address-forwarding and redirect-socket to the http-listener element. # Then add a new socket-binding element to the socket-binding-group element. batch /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding,value=true) /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=redirect-socket,value=proxy-https) /socket-binding-group=standard-sockets/socket-binding=proxy-https:add(port=443) run-batch stop-embedded-server It may be of interest too, that Kong is deployed on OpenShift with a route using a redirect from http to https ( "insecureEdgeTerminationPolicy": "Redirect" ).
Tags: ssl, https, openshift, keycloak, kong
Score: 57 | Views: 112,703 | Answers: 10
https://stackoverflow.com/questions/47181821/using-keycloak-behind-a-reverse-proxy-could-not-open-admin-loginpage-because-mi
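The CLI batch above only makes Keycloak trust forwarded headers; the proxy in front must actually send X-Forwarded-Proto: https, or Keycloak keeps generating http:// asset URLs. A hedged check that the attribute really landed in the built image (the jboss-cli path follows the question's own config.sh):

```bash
# Inside the running container: read back the undertow listener attribute.
$JBOSS_HOME/bin/jboss-cli.sh --connect \
  '/subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=proxy-address-forwarding)'
```

If this returns true and the mixed content persists, the missing piece is usually the X-Forwarded-Proto header on the Kong/nginx side rather than in the Keycloak configuration.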
23,842,235
Wordpress JsonAPI - /wp-json/ was not found on this server
I am using the following plugin: Json Rest API . To test the plugin, the documentation states that I should just use: $ curl -i [URL] HTTP/1.1 404 Not Found Date: Sat, 24 May 2014 07:01:21 GMT Server: Apache/2.2.15 (Red Hat) Content-Length: 303 Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>404 Not Found</title> </head><body> <h1>Not Found</h1> <p>The requested URL /wp-json/ was not found on this server.</p> <hr> <address>Apache/2.2.15 (Red Hat) Server at testpress-maxximus.rhcloud.com Port 80</address> </body></html> As you can see, nothing is found at the URL. Any recommendations on whether the problem is with the API or with WordPress? I appreciate your reply.
Tags: json, wordpress, openshift
Score: 55 | Views: 80,180 | Answers: 10
https://stackoverflow.com/questions/23842235/wordpress-jsonapi-wp-json-was-not-found-on-this-server
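The /wp-json/ endpoint is registered through WordPress rewrite rules, so a 404 at the Apache level usually means pretty permalinks are disabled or the .htaccess rewrites were never written. A hedged check, assuming WP-CLI is available on the host:

```bash
# Enable a pretty permalink structure and flush the rules to .htaccess:
wp rewrite structure '/%postname%/' --hard
# Then retest the endpoint:
curl -i http://testpress-maxximus.rhcloud.com/wp-json/
```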
15,061,001
What does "#!/bin/env" mean (at the top of a node.js script)?
I found several node.js projects that have this at the top of their app.js (as in this openshift program ): #!/bin/env node What does this mean? How does this work? Where is it useful?
Tags: node.js, openshift, declare
Score: 54 | Views: 17,692 | Answers: 2
https://stackoverflow.com/questions/15061001/what-does-bin-env-mean-at-the-top-of-a-node-js-script
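The line is a shebang: the kernel executes the script through env, which looks node up on $PATH, so the script runs with whichever node is installed, wherever it lives. On most Linux systems env is at /usr/bin/env, which is the more portable form. A minimal illustration:

```bash
# Create an executable script whose interpreter is resolved via PATH:
cat > app.js <<'EOF'
#!/usr/bin/env node
console.log('started by whichever node is first on PATH');
EOF
chmod +x app.js
./app.js
```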
12,851,858
HTTPS request in NodeJS
I am trying to write a NodeJS app which will talk to the OpenShift REST API using the request method of the https package. Here is the code: var https = require('https'); var options = { host: 'openshift.redhat.com', port: 443, path: '/broker/rest/api', method: 'GET' }; var req = https.request(options, function(res) { console.log(res.statusCode); res.on('data', function(d) { process.stdout.write(d); }); }); req.end(); req.on('error', function(e) { console.error(e); }); But this is giving me an error (status code 500 is returned). When I did the same thing using curl on the command line, curl -k -X GET [URL] , I got the correct response from the server. Is there anything wrong with the code?
Tags: node.js, rest, https, openshift
Score: 54 | Views: 75,448 | Answers: 1
https://stackoverflow.com/questions/12851858/https-request-in-nodejs
53,960,516
Can I connect one service account to multiple namespaces in Kubernetes?
I have a couple of namespaces - assume NS1 and NS2 . I have service accounts created in those - sa1 in NS1 and sa2 in NS2 . I have created roles and rolebindings for sa1 to do stuff within NS1 and sa2 within NS2 . What I want is to give sa1 certain access within NS2 (say, only a pod-reader role). I am wondering if that's possible or not?
Tags: kubernetes, openshift, kubectl, rbac, kubernetes-namespace
Score: 50 | Views: 44,712 | Answers: 1
https://stackoverflow.com/questions/53960516/can-i-connect-one-service-account-to-multiple-namespaces-in-kubernetes
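This is possible: a RoleBinding's subjects may reference a ServiceAccount from another namespace. A hedged sketch using the question's names, lower-cased to ns1/ns2 since namespace names must be DNS-compatible (the pod-reader Role is assumed to already exist in ns2):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa1-pod-reader
  namespace: ns2            # grants access within ns2 only
subjects:
- kind: ServiceAccount
  name: sa1
  namespace: ns1            # the subject lives in ns1
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```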
51,646,957
Helm: could not find tiller
I'm getting this error message: ➜ ~ helm version Error: could not find tiller I've created a tiller project: ➜ ~ oc new-project tiller Now using project "tiller" on server "[URL] Then, I've installed tiller into the tiller namespace: ➜ ~ helm init --tiller-namespace tiller $HELM_HOME has been configured at /home/jcabre/.helm. Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster. Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy. To prevent this, run helm init with the --tiller-tls-verify flag. For more information on securing your installation see: [URL] Happy Helming! So, after that, I waited until the tiller pod was ready. ➜ ~ oc get pod -w NAME READY STATUS RESTARTS AGE tiller-deploy-66cccbf9cd-84swm 0/1 Running 0 18s NAME READY STATUS RESTARTS AGE tiller-deploy-66cccbf9cd-84swm 1/1 Running 0 24s ^C% Any ideas?
Tags: openshift, kubernetes-helm
Score: 47 | Views: 93,048 | Answers: 8
https://stackoverflow.com/questions/51646957/helm-could-not-find-tiller
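The Helm 2 client looks for Tiller in kube-system by default, so a Tiller deployed into a custom namespace is reported as not found even while its pod is Running. A hedged fix using the namespace from the question:

```bash
# Point the client at the namespace Tiller was installed into:
helm version --tiller-namespace tiller
# Or set it once for the shell:
export TILLER_NAMESPACE=tiller
helm version
```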
51,655,657
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
I am installing the OpenShift Origin All-in-One Server using the link below: [URL] After the download, when I ran: tar -xf openshift-origin-server-v3.10.0-rc.0-c20e215-linux-64bit.tar.gz -C /opt/redhat it threw the following warning, but the directory still got untarred into the desired directory.
Tags: openshift, openshift-origin
Score: 45 | Views: 56,333 | Answers: 5
https://stackoverflow.com/questions/51655657/tar-ignoring-unknown-extended-header-keyword-libarchive-xattr-security-selinux
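The warning means the archive carries SELinux extended-attribute headers written by a different tar implementation; GNU tar skips them and extraction is unaffected. If the noise matters, GNU tar can silence that specific warning class:

```bash
tar --warning=no-unknown-keyword \
    -xf openshift-origin-server-v3.10.0-rc.0-c20e215-linux-64bit.tar.gz -C /opt/redhat
```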
49,916,103
What is the difference between openshift deploymentconfig and kubernetes deployment
As far as I know: deploymentconfig → replicationcontroller → pod vs. deployment → replicaset → pod Otherwise, do these two resources have additional differences? The more detail the better.
Tags: kubernetes, openshift, openshift-origin
Score: 44 | Views: 33,844 | Answers: 2
https://stackoverflow.com/questions/49916103/what-is-the-different-between-openshift-deploymentconfig-and-kubernetes-deployme
17,980,385
Separate back-end and front-end apps on same domain?
We are building a fully RESTful back-end with the Play Framework. We are also building a separate web front-end with a different technology stack that will call the RESTful API. How do we deploy both apps so they have the same domain name, with some URLs used for the back-end API and some for the front-end views? For example, visiting MyDomain.example means the front-end displays the home page, but sending a GET to MyDomain.example/product/24 means the back-end returns a JSON object with the product information. A further possibility is if a web browser views MyDomain.example/product/24 , then the front-end displays an HTML page, and that webpage was built from a back-end call to the same URL. Finally, do we need two dedicated servers for this? Or can the front-end and back-end be deployed on the same server (e.g. OpenShift, Heroku)?
Tags: deployment, playframework, openshift
Score: 44 | Views: 48,229 | Answers: 5
https://stackoverflow.com/questions/17980385/separate-back-end-and-front-end-apps-on-same-domain
41,441,832
django.core.exceptions.ImproperlyConfigured: WSGI application 'application' could not be loaded
The scenario is, I cloned the Django code for OpenShift-V3 from here . When I ran the code with python manage.py runserver , I got this error: django.core.exceptions.ImproperlyConfigured: WSGI application 'application' could not be loaded; Error importing module: 'application doesn't look like a module path I didn't add anything to the code and the required packages are already satisfied.
Tags: python, django, openshift, wsgi, openshift-nextgen
Score: 40 | Views: 138,078 | Answers: 30
https://stackoverflow.com/questions/41441832/django-core-exceptions-improperlyconfigured-wsgi-application-application-coul
23,169,529
How to use naked GoDaddy domain with OpenShift hosting?
Desired Behaviour I have a GoDaddy domain name and am using OpenShift for hosting. I would like the following to be true: a) user enters www.mysitename.com > user sees mysitename.com b) user enters www.mysitename.com/about.html > user sees mysitename.com/about.html c) user enters mysitename.com or mysitename.com/about.html and they also see that url. d) to summarise, the www prefix is never displayed anywhere on the site. Constraints OpenShift hosting does not have a static IP, so it is not possible to adjust the A record at GoDaddy. The format for the OpenShift app is [URL] . You can set up a CNAME at GoDaddy with the following: www > appname-username.rhcloud.com This means the site is accessible at www.mydomain.com but not at mydomain.com . Suggested Solutions There are several posts on the topic that suggest the following, but for several reasons these are not adequate solutions: use a subdomain, i.e. blog.mydomain.com use wwwizer use forwarding and masking (causes the url to stay the same when navigating relative links) Question Can anyone think of another solution that satisfies the desired behaviour stated above? Or do I need to change to a registrar that allows "naked cname records"? Related Posts [URL] Naked domain with Openshift How do I add an alias for a naked domain with OpenShift? Edit This blog post sums up the scenario: [URL] Could I sign up for CloudFlare to resolve the issue? I haven't used it before and don't know how it works.
Tags: openshift, domain-name
Score: 39 | Views: 14,757 | Answers: 4
https://stackoverflow.com/questions/23169529/how-to-use-naked-godaddy-domain-with-openshift-hosting
21,518,074
Permission denied (publickey,gssapi-keyex,gssapi-with-mic) on openshift
I am having issues with committing changes to my gear. I have tried to run rhc setup; I also deleted my .ssh folder and executed rhc setup again, but that also didn't work. Not sure what changed, but it worked a couple of hours ago. >git push -u <GEAR_NAME> master Permission denied (publickey,gssapi-keyex,gssapi-with-mic). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. >git remote add devstage3 -f ssh://<GEAR_ID>@<GEAR_NAME>-<GEAR-DOMAIN>.rhcloud.com/~/git/<GEAR_DOMAIN>.git/ Also I have tried to start a different gear and commit to it, but I am getting the same error: Updating <GEAR_NAME> The authenticity of host '<GEAR_NAME>-<GEAR-DOMAIN>.rhcloud.com (<GEAR_IP>)' can't be established. RSA key fingerprint is <KEY_FINGERPRINT>. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '<GEAR_NAME>-<GEAR-DOMAIN>.rhcloud.com,<GEAR_IP>' (RSA) to the list of known hosts. Permission denied (publickey,gssapi-keyex,gssapi-with-mic). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. error: Could not fetch <GEAR_NAME>
Tags: git, openshift, git-push, public-key
Score: 37 | Views: 170,032 | Answers: 14
https://stackoverflow.com/questions/21518074/permission-denied-publickey-gssapi-keyex-gssapi-with-mic-on-openshift
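A hedged way to narrow this down is to watch which key the ssh client actually offers and compare its fingerprint with the keys registered on the account (paths are the usual defaults, not taken from the question):

```bash
# See which identities ssh offers and where authentication fails:
ssh -vvv GEAR_ID@GEAR_NAME-GEAR-DOMAIN.rhcloud.com 2>&1 | grep -Ei 'offering|denied'
# Compare the local key's fingerprint with what the account has on file:
ssh-keygen -lf ~/.ssh/id_rsa.pub
rhc sshkey list
```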
41,674,108
How to include script and run it into kubernetes yaml?
This is how to run a simple batch command in Kubernetes YAML (helloworld.yaml): ... image: "ubuntu:14.04" command: ["/bin/echo", "hello", "world"] ... In Kubernetes I can deploy that like this: $ kubectl create -f helloworld.yaml Suppose I have a batch script like this (script.sh): #!/bin/bash echo "Please wait...."; sleep 5 Is there a way to include script.sh in kubectl create -f so it can run the script? Suppose helloworld.yaml is now edited like this: ... image: "ubuntu:14.04" command: ["/bin/bash", "./script.sh"] ...
Tags: openshift, kubernetes, openshift-origin
Score: 34 | Views: 100,747 | Answers: 3
https://stackoverflow.com/questions/41674108/how-to-include-script-and-run-it-into-kubernetes-yaml
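A common pattern for this is to ship the script as a ConfigMap and mount it into the container, so kubectl create -f stays declarative and nothing is baked into the image. A hedged sketch reusing the question's names:

```bash
kubectl create configmap helloworld-script --from-file=script.sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: helloworld
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: ubuntu:14.04
    command: ["/bin/bash", "/scripts/script.sh"]
    volumeMounts:
    - name: script
      mountPath: /scripts
  volumes:
  - name: script
    configMap:
      name: helloworld-script
      defaultMode: 0755   # make the mounted file executable
EOF
```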
52,081,448
What is the benefit of putting multiple containers in a pod?
What's the benefit of having multiple containers in a pod versus having standalone containers?
Tags: docker, openshift, openshift-enterprise
Score: 33 | Views: 12,698 | Answers: 3
https://stackoverflow.com/questions/52081448/what-is-the-benefit-of-putting-multiple-containers-in-a-pod
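The short answer is that containers in one pod share a network namespace and volumes and are scheduled and scaled as a unit, which is what makes sidecar patterns work. A minimal hedged illustration (images are common defaults, not from the question):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: writer              # produces content
    image: busybox
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: web                 # serves it; both see the same volume and pod IP
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html
EOF
```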
35,364,367
Share persistent volume claims amongst containers in Kubernetes/OpenShift
This may be a dumb question, but I haven't found much online and want to clarify this. Given two deployments A and B, both with different container images: They're deployed in two different pods (different rc, svc etc.) in a K8s/OpenShift cluster. They both need to access the same volume to read files (let's leave locking out of this for now), or at least the same directory structure in that volume. The volume is mounted using a PVC (Persistent Volume Claim) backed by a PV (Persistent Volume) configured against an NFS share. Can I confirm that the above would actually be possible? I.e., two different pods connected to the same volume with the same PVC, so they are both reading from the same volume. Hope that makes sense...
Tags: openshift, kubernetes
Score: 33 | Views: 44,686 | Answers: 3
https://stackoverflow.com/questions/35364367/share-persistent-volume-claims-amongst-containers-in-kubernetes-openshift
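This works as long as the underlying volume supports the ReadWriteMany access mode, which NFS does; both deployments then reference the same claim name in their pod templates. A hedged sketch:

```bash
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany          # required for concurrent mounts across pods/nodes
  resources:
    requests:
      storage: 1Gi
EOF
# Deployments A and B both mount volumes with claimName: shared-data.
```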
17,727,788
Deploying Ruby on Rails - Is there a good alternative for Heroku?
I'm starting a new small venture, a POC if you will, and I want to deploy my Rails application for free somewhere. I found that there is Heroku; are there other options?
Tags: ruby-on-rails, deployment, heroku, openshift
Score: 32 | Views: 15,849 | Answers: 1
https://stackoverflow.com/questions/17727788/deploying-ruby-on-rails-is-there-a-good-alternative-for-heroku
72,414,081
Vite: Could not resolve entry module (index.html)
I am new to OpenShift 3.11 deployment. I created a multi-stage Dockerfile for a React application; the build runs correctly on my local machine, but when I run it on the OpenShift cluster I get the error below: > kncare-ui@0.1.0 build > tsc && vite build vite v2.9.9 building for production... ✓ 0 modules transformed. Could not resolve entry module (index.html). error during build: Error: Could not resolve entry module (index.html). at error (/app/node_modules/rollup/dist/shared/rollup.js:198:30) at ModuleLoader.loadEntryModule (/app/node_modules/rollup/dist/shared/rollup.js:22680:20) at async Promise.all (index 0) error: build error: running 'npm run build' failed with exit code 1 and this is my Dockerfile: FROM node:16.14.2-alpine as build-stage RUN mkdir -p /app/ WORKDIR /app/ RUN chmod -R 777 /app/ COPY package*.json /app/ COPY tsconfig.json /app/ COPY tsconfig.node.json /app/ RUN npm ci COPY ./ /app/ RUN npm run build FROM nginxinc/nginx-unprivileged #FROM bitnami/nginx:latest COPY --from=build-stage /app/dist/ /usr/share/nginx/html #CMD ["nginx", "-g", "daemon off;"] ENTRYPOINT ["nginx", "-g", "daemon off;"] EXPOSE 80
Tags: node.js, docker, dockerfile, openshift
Score: 32 | Views: 87,163 | Answers: 5
https://stackoverflow.com/questions/72414081/vite-could-not-resolve-entry-module-index-html
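Vite resolves index.html relative to the directory it builds in, so failing on the cluster but not locally usually means the file never reached the build context there. Hedged checks, assuming a git-driven OpenShift build:

```bash
# Is index.html excluded from the Docker build context?
cat .dockerignore 2>/dev/null
# Is it committed at all? A git-based build only ships tracked files:
git archive HEAD | tar -tf - | grep -i 'index.html'
```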
49,562,433
How to restart pod in OpenShift?
I updated a file (for debug output) in a running pod, but it isn't getting recognized. I was going to restart the pod to get it to take effect, but I only see oc stop and not oc start or oc restart . How would I force a refresh of the files in the pod? I am thinking maybe it is a Ruby thing (like opcache in PHP), but I figured a restart of the pod would handle it. I just can't figure out how to restart a pod.
Tags: openshift
Score: 31 | Views: 146,202 | Answers: 7
https://stackoverflow.com/questions/49562433/how-to-restart-pod-in-openshift
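There is no oc restart; a pod is either deleted (its controller recreates it) or replaced by a new rollout. Hedged equivalents, with names as placeholders:

```bash
oc delete pod <pod-name>               # the controller spins up a replacement
oc rollout latest dc/<name>            # force a new deployment (DeploymentConfig)
oc rollout restart deployment/<name>   # on newer clusters using Deployments
```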
54,360,223
Openshift Nginx permission problem [nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)]
I am currently running into a problem trying to set up nginx:alpine in OpenShift. My build runs just fine, but I am not able to deploy, with permission being denied with the following error: 2019/01/25 06:30:54 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied) nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied) Now I know OpenShift is a bit tricky when it comes to permissions, as the container is running without root privileges and the UID is generated at runtime, which means it's not available in /etc/passwd. But the user is part of the group root. How this is supposed to be handled is described here: [URL] I even went further and made the whole /var completely accessible (777) for testing purposes, but I still get the error. This is what my Dockerfile looks like: Dockerfile FROM nginx:alpine #Configure proxy settings ENV HTTP_PROXY=[URL] ENV HTTPS_PROXY=[URL] ENV HTTP_PROXY_AUTH=basic:*:username:password WORKDIR /app COPY . . # Install node.js RUN apk update && \ apk add nodejs npm python make curl g++ # Build Application RUN npm install RUN ./node_modules/@angular/cli/bin/ng build COPY ./dist/my-app /usr/share/nginx/html # Configure NGINX COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \ chmod -R 777 /var RUN sed -i.bak 's/^user/#user/' /etc/nginx/nginx.conf EXPOSE 8080 It's funny that this approach only seems to affect the alpine version of nginx. nginx:latest (based on Debian, I think) has no issues, and the way to set it up described here [URL] works (but I am having some other issues with that build, so I switched to alpine). Any ideas why this is still not working?
Tags: docker, nginx, openshift, root, alpine-linux
Score: 30 | Views: 71,860 | Answers: 9
https://stackoverflow.com/questions/54360223/openshift-nginx-permission-problem-nginx-emerg-mkdir-var-cache-nginx-cli
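Because OpenShift assigns an arbitrary UID, a hedged alternative to chmod 777 is to move every path nginx writes to into /tmp, which any UID can write; this is essentially what the unprivileged nginx images do. The directives are standard nginx configuration, shown here as a fragment to merge into the existing nginx.conf:

```bash
cat > nginx-writable-paths.fragment <<'EOF'
pid /tmp/nginx.pid;
# inside the http {} block:
client_body_temp_path /tmp/client_temp;
proxy_temp_path       /tmp/proxy_temp;
fastcgi_temp_path     /tmp/fastcgi_temp;
uwsgi_temp_path       /tmp/uwsgi_temp;
scgi_temp_path        /tmp/scgi_temp;
EOF
```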
53,012,798
Kubernetes ConfigMap size limitation
Though resourceQuotas may limit the number of ConfigMaps in a namespace, is there any such option to limit the size of an individual ConfigMap? I would not like users to start uploading large text files as ConfigMaps. What is the max ConfigMap size etcd supports? If there is a reasonable limit on the etcd side, that should be fine.
Tags: kubernetes, openshift
Score: 29 | Views: 45,675 | Answers: 3
https://stackoverflow.com/questions/53012798/kubernetes-configmap-size-limitation
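There is no ConfigMap-specific size knob; the effective cap comes from etcd's request size limit, about 1 MiB by default, enforced when the API server writes the object. An easy empirical check:

```bash
# ~2 MB of data base64-inflates well past the limit and is rejected:
head -c 2000000 /dev/urandom | base64 > big.txt
kubectl create configmap too-big --from-file=big.txt   # expect a "too long"/"request too large" error
```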
41,498,601
what's the difference between openshift route and k8s ingress?
I'm new to OpenShift and k8s. I'm not sure what the difference is between these two terms: OpenShift route vs. k8s ingress?
Tags: kubernetes, openshift, openshift-origin
Score: 28 | Views: 15,786 | Answers: 2
https://stackoverflow.com/questions/41498601/whats-the-difference-between-openshift-route-and-k8s-ingress
19,749,599
Openshift: How to remote access MySQL?
So I just finished setting up a JBoss application server gear on OpenShift, and I attached MySQL and phpMyAdmin cartridges. My question is whether there is a way to remotely access the database server using an app like MySQL Workbench.
Tags: mysql, mysql-workbench, openshift
Score: 27 | Views: 39,263 | Answers: 5
https://stackoverflow.com/questions/19749599/openshift-how-to-remote-access-mysql
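On OpenShift v2 the supported route is rhc port-forward, which tunnels the gear's MySQL port to localhost; Workbench then connects to 127.0.0.1 with the gear credentials. A hedged sketch (app name and user are placeholders):

```bash
rhc port-forward -a <appname>
# In another shell, using the forwarded local port it prints (often 3306):
mysql -h 127.0.0.1 -P 3306 -u adminXXXX -p
```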
48,423,398
What is a good way to deploy secret Java key stores in an OpenShift environment?
We have a Java web application that is supposed to be moved from a regular deployment model (install on a server) into an OpenShift environment (deployment as a docker container). Currently this application consumes a set of Java key stores (.jks files) for client certificates for communicating with third-party web interfaces. We have one key store per interface. These jks files get manually deployed on production machines and are occasionally updated when third-party certificates need to be updated. Our application has a setting with a path to the key store files, and on startup it will read certificates from them and then use them to communicate with the third-party systems. Now when moving to an OpenShift deployment, we have one docker image with the application that is going to be used for all environments (development, test and production). All configuration is given as environment variables. However, we cannot give jks files as environment variables; these need to be mounted into the docker container's file system. As these certificates are a secret, we don't want to bake them into the image. I scanned the OpenShift documentation for some clues on how to approach this and basically found two options: using Secrets or mounting a persistent volume claim (PVC). Secrets don't seem to work for us, as they are pretty much just key-value pairs that you can mount as a file or hand in as environment variables. They also have a size limit. Using a PVC would theoretically work; however, we'd need some way to get the JKS files into that volume in the first place. A simple way would be to just start a shell container mounting the PVC and copy the files into it manually using the OpenShift command-line tools, but I was hoping for a somewhat less manual solution. Have you found a clever solution to this or a similar problem where you needed to get files into a container?
Tags: openshift, client-certificates
Score: 25 | Views: 58,407 | Answers: 4
https://stackoverflow.com/questions/48423398/what-is-a-good-way-to-deploy-secret-java-key-stores-in-an-openshift-environment
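Secrets are not limited to short key-value pairs: values are arbitrary base64-encoded bytes (capped around 1 MiB each), so a JKS normally fits and can be mounted as a file. A hedged sketch with illustrative names:

```bash
oc create secret generic client-keystores \
  --from-file=partner-a.jks --from-file=partner-b.jks
# Mount them where the app's keystore-path setting points:
oc set volume dc/myapp --add --type=secret \
  --secret-name=client-keystores --mount-path=/etc/keystores
```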
54,459,447
Difference between API versions v2beta1 and v2beta2 in Horizontal Pod Autoscaler?
The Kubernetes Horizontal Pod Autoscaler walkthrough in [URL] explains that we can perform autoscaling on custom metrics. What I didn't understand is when to use the two API versions: v2beta1 and v2beta2. If anybody can explain, I would really appreciate it. Thanks in advance.
Tags: kubernetes, openshift, autoscaling
Score: 25 | Views: 26,897 | Answers: 4
https://stackoverflow.com/questions/54459447/difference-between-api-versions-v2beta1-and-v2beta2-in-horizontal-pod-autoscaler
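The visible difference is the metric spec shape: v2beta1 uses flat fields such as targetAverageUtilization, while v2beta2 nests them under a target with an explicit type, the form that later became stable autoscaling/v2. A hedged v2beta2 example:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:                      # v2beta1 would say targetAverageUtilization: 70
        type: Utilization
        averageUtilization: 70
EOF
```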
46,417,232
What server URL should be used for the oc login command when using OpenShift's PaaS?
What do I provide for the server URL in the oc login tool, when using the OpenShift PaaS? I'm trying to migrate my OpenShift Online v2 app to v3, following the instructions for PHP apps linked to from OpenShift's Migration Center . That page says to run something following the pattern oc new-app [URL] --name=<app-name> -e <ENV_VAR_NAME>=<env_var_value> . After tracking down a download for oc (which wasn't easy), I tried running that command with my repo URL*; this results in: $ oc new-app [URL] --name=PROJECTNAME error: Missing or incomplete configuration info. Please login or point to an existing, complete config file: 1. Via the command-line flag --config 2. Via the KUBECONFIG environment variable 3. In your home directory as ~/.kube/config To view or setup config directly use the 'config' command. Not knowing what subcommand of oc config to use, I searched and found Get Started with the CLI , which says to use oc login to start the configuration process. But when I run that, I get: Server [[URL] What do I provide for the URL here, when using the OpenShift PaaS (i.e. not a local installation)? I've tried things like [URL] and the URL of my web app, but both of them result in error: The server was unable to respond - verify you have provided the correct host and port and that the server is currently running. * I decided to use Bitbucket instead of GitHub; I'm not sure if this is unsupported, or (if it's supported) whether I should be providing USERNAME@bitbucket.org .
Tags: openshift
Score: 25 | Views: 41,255 | Answers: 6
https://stackoverflow.com/questions/46417232/what-server-url-should-be-used-for-the-oc-login-command-when-using-openshifts
26,871,381
Deploying a local django app using openshift
I've built a webapp using Django. In order to host it I'm trying to use OpenShift, but am having difficulty getting anything working. There seems to be a lack of step-by-step guides for this. So far I have git working fine, the app works in the local dev environment, and I've successfully created an app on OpenShift. Following the URL on OpenShift once created, I just get the standard page of "Welcome to your Openshift App". I've followed this [URL] to try changing the wsgi.py file. I changed it to hello world and pushed it, yet I still get the OpenShift default page. Is there a good comprehensive resource anywhere for getting local Django apps up and running on OpenShift? Most of what I can find on Google are just example apps, which aren't that useful as I already have mine built.
Tags: python, django, git, openshift
Score: 24 | Views: 13,507 | Answers: 4
https://stackoverflow.com/questions/26871381/deploying-a-local-django-app-using-openshift
61,387,510
How to solve liquibase waiting for changelog lock problem in several pods in OpenShift cluster?
We are supporting several microservices written in Java using Spring Boot and deployed in OpenShift. Some microservices communicate with databases. We often run a single microservice in multiple pods in a single deployment. When each microservice starts, it starts Liquibase, which tries to update the database. The problem is that sometimes one pod fails while waiting for the changelog lock. When this happens in our production OpenShift cluster, we expect other pods to fail while restarting because of the same changelog lock issue. So, in the worst-case scenario, all pods will wait for the lock to be lifted. We want Liquibase to automatically prepare our database schemas when each pod is starting. Is it good to store this logic in every microservice? How can we automatically solve the problem when the Liquibase changelog lock problem appears? Do we need to put the database preparation logic in a separate deployment? So maybe I should paraphrase my question: what is the best way to run DB migrations in a microservice architecture? Maybe we should not run DB migrations in each pod? Maybe it is better to do it with a separate deployment, or with some extra Jenkins job outside OpenShift altogether?
Tags: database, spring-boot, kubernetes, openshift, liquibase
Score: 24 | Views: 36,026 | Answers: 5
https://stackoverflow.com/questions/61387510/how-to-solve-liquibase-waiting-for-changelog-lock-problem-in-several-pods-in-ope
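One common way out is to take migration off the application's startup path and run it once per release, for example as a Job the pipeline gates the rollout on, so N replicas never race for the changelog lock. A hedged sketch (image and command are placeholders, not the question's code):

```bash
cat <<'EOF' | oc apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: myservice-db-migrate
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: liquibase
        image: registry.example.com/myservice-migrations:latest  # assumed image
        command: ["java", "-jar", "/app/migrations.jar"]         # illustrative only
EOF
# Deploy the app only after the Job completes, e.g. with:
#   oc wait --for=condition=complete job/myservice-db-migrate
```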
43,987,244
What is the "kube-system" namespace for?
In a default openshift install, there is an unused project titled kube-system . It seems like openshift-infra is for things like metrics, default is for the router and registry, and openshift is for global templates. What is the kube-system project used for though? I can't find any docs on it.
Tags: openshift, kubernetes, openshift-origin
Score: 23 | Views: 32,191 | Answers: 2
https://stackoverflow.com/questions/43987244/what-is-the-kube-system-namespace-for
51,489,955
How to obtain the enable admission controller list in kubernetes?
AFAIK, the admission controller is the last pass before submission to the database. However, I cannot tell which ones are enabled. Is there a way to know which ones are taking effect? Thanks.
Tags: kubernetes, plugins, openshift
Score: 22 | Views: 17,998 | Answers: 7
https://stackoverflow.com/questions/51489955/how-to-obtain-the-enable-admission-controller-list-in-kubernetes
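The enabled set is an API-server flag rather than an API object, so it is read off the running process or its static-pod manifest. Hedged checks (paths are common defaults and vary by distribution):

```bash
# From the process table on a control-plane node:
ps aux | grep -o 'enable-admission-plugins=[^ ]*'
# Or from a kubeadm-style static pod manifest:
grep -i admission /etc/kubernetes/manifests/kube-apiserver.yaml
```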
20,960,407
OpenShift: How to connect to postgresql from my PC
I have an openshift app, and I just installed a postgresql DB on the same cartridge. I have the postgresql DB installed, but now I want to connect to the DB from my PC so I can start creating new tables. Using port forwarding, I found my IP for the postgresql DB to be 127.3.146.2:5432. Under my web account I see my Database: txxx User: admixxx Password: xxxx Then, using RazorSQL, I try to set up a new connection, but it keeps failing with user/password incorrect. If I try to use the local IP to connect, such as 127.0.0.1, then I can connect fine. How can I resolve this issue? All I am trying to do is connect to this DB so that I can create new tables.
Tags: postgresql, openshift
Score: 21 | Views: 15,871 | Answers: 1
https://stackoverflow.com/questions/20960407/openshift-how-to-connect-to-postgresql-from-my-pc
34,666,148
Openshift v3 - update image stream to fetch changes from external docker registry
I seem to be running into a simple problem and have the feeling I'm missing something essential. We have a private Docker image registry at our company, which hosts all the Docker images we develop. This registry is constantly updated during our build process, and new images are pushed to it quite often. Now we are using an OpenShift system with a handful of images and its own registry. What would be the best way to synchronize images between these two systems? As an example, we have an app deployed like this: oc new-app myregistry.mydomain.edu/binbase/minix which is running nicely. We would now like to update this deployment with the latest changes, and for this I do: oc import-image minix Tag Spec Created PullSpec Image latest 23 hours ago myregistry.mydomain.edu/binbase/minix:latest f6646382cfa32da291e8380421ea656110090256cd195746a5be2fcf61e4edf1 which is the correct image. But executing oc deploy minix --latest still deploys the current image, not the newly updated image. Any idea why, or what we are doing wrong? All I would like to do is redeploy the newest image. Kind regards
Openshift v3 - update image stream to fetch changes from external docker registry I'm seeming to run into a simple problem and have the feeling I'm missing something essential. I'm having a private docker image registry at our company, which hosts all the docker images we develop. This registry is constantly updated during our build process and new images are pushed to it quite often. Now we are utilizing an openshift system, with a handful of images and it's own registry. What would be the best way to synchronize images between these 2 systems? As example, we have an app deployed like this: oc new-app myregistry.mydomain.edu/binbase/minix which is running nicely. We would now like to update this deployment with the latest changes and for this I do: oc import-image minix Tag Spec Created PullSpec Image latest 23 hours ago myregistry.mydomain.edu/binbase/minix:latest f6646382cfa32da291e8380421ea656110090256cd195746a5be2fcf61e4edf1 which is the correct image and now executing a oc deploy minix --latest but this still deploys the current image, not the newly updated image. Any idea why this, or what we are doing wrong? All I would like todo is to now redeploy the newest image. kind regards
docker, openshift
20
12,510
1
https://stackoverflow.com/questions/34666148/openshift-v3-update-image-stream-to-fetch-changes-from-external-docker-registr
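A hedged sketch of one way to make a fresh import roll out automatically, assuming the deployment config and image stream are both named minix as in the question: give the DC an image-change trigger on the stream tag, then re-import:

    oc set triggers dc/minix --from-image=minix:latest -c minix   # container name is an assumption
    oc import-image minix                                         # a changed image ID should now trigger a new deployment

Without such a trigger, oc deploy --latest simply redeploys whatever image ID the DC last resolved.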
42,363,105
permission denied, mkdir in container on openshift
I have a container with nodejs and pm2 as the start command, and on OpenShift I get this error on startup: Error: EACCES: permission denied, mkdir '/.pm2' I tried the same image on a Marathon hoster and it worked fine. Do I need to change something with user IDs? The Dockerfile: FROM node:7.4-alpine RUN npm install --global yarn pm2 RUN mkdir /src COPY . /src WORKDIR /src RUN yarn install --production EXPOSE 8100 CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"] Update the node image already creates a new user "node" with UID 1000 so as not to run the image as root. I also tried to fix permissions and added user "node" to the root group. Further, I told pm2 which dir to use with an ENV var: PM2_HOME=/home/node/app/.pm2 But I still get the error: Error: EACCES: permission denied, mkdir '/home/node/app/.pm2' Updated Dockerfile: FROM node:7.4-alpine RUN npm install --global yarn pm2 RUN adduser node root COPY . /home/node/app WORKDIR /home/node/app RUN chmod -R 755 /home/node/app RUN chown -R node:node /home/node/app RUN yarn install --production EXPOSE 8100 USER 1000 CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"] Update2 thanks to Graham Dumpleton I got it working FROM node:7.4-alpine RUN npm install --global yarn pm2 RUN adduser node root COPY . /home/node/app WORKDIR /home/node/app RUN yarn install --production RUN chmod -R 775 /home/node/app RUN chown -R node:root /home/node/app EXPOSE 8100 USER 1000 CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]
permission denied, mkdir in container on openshift I have a container with nodejs and pm2 as the start command, and on OpenShift I get this error on startup: Error: EACCES: permission denied, mkdir '/.pm2' I tried the same image on a Marathon hoster and it worked fine. Do I need to change something with user IDs? The Dockerfile: FROM node:7.4-alpine RUN npm install --global yarn pm2 RUN mkdir /src COPY . /src WORKDIR /src RUN yarn install --production EXPOSE 8100 CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"] Update the node image already creates a new user "node" with UID 1000 so as not to run the image as root. I also tried to fix permissions and added user "node" to the root group. Further, I told pm2 which dir to use with an ENV var: PM2_HOME=/home/node/app/.pm2 But I still get the error: Error: EACCES: permission denied, mkdir '/home/node/app/.pm2' Updated Dockerfile: FROM node:7.4-alpine RUN npm install --global yarn pm2 RUN adduser node root COPY . /home/node/app WORKDIR /home/node/app RUN chmod -R 755 /home/node/app RUN chown -R node:node /home/node/app RUN yarn install --production EXPOSE 8100 USER 1000 CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"] Update2 thanks to Graham Dumpleton I got it working FROM node:7.4-alpine RUN npm install --global yarn pm2 RUN adduser node root COPY . /home/node/app WORKDIR /home/node/app RUN yarn install --production RUN chmod -R 775 /home/node/app RUN chown -R node:root /home/node/app EXPOSE 8100 USER 1000 CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]
node.js, docker, openshift, pm2
20
42,046
5
https://stackoverflow.com/questions/42363105/permission-denied-mkdir-in-container-on-openshift
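The pattern that made Update2 work generalizes: OpenShift runs containers under a random UID that is always a member of the root group, so directories the app writes to need group-0 ownership and group write permission. A generic Dockerfile sketch of that convention (the path is an example):

    RUN chgrp -R 0 /home/node/app && \
        chmod -R g=u /home/node/app    # give the group the same rights as the owner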
23,894,323
JDK 8 support at DIY cartridge in OpenShift
I know the WildFly cartridge doesn't have JDK 8 support, but can I somehow install Java 8 on the experimental DIY cartridge? java-1.7.0 is the latest version available in /usr/lib.
JDK 8 support at DIY cartridge in OpenShift I know the WildFly cartridge doesn't have JDK 8 support, but can I somehow install Java 8 on the experimental DIY cartridge? java-1.7.0 is the latest version available in /usr/lib.
openshift, java-8
19
12,182
5
https://stackoverflow.com/questions/23894323/jdk-8-support-at-diy-cartridge-in-openshift
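A hedged sketch of the workaround commonly used on the DIY cartridge: unpack a Linux x64 JDK 8 into $OPENSHIFT_DATA_DIR over SSH and put it on the PATH (typically from the action hooks). The download URL and extracted directory name below are placeholders:

    cd $OPENSHIFT_DATA_DIR
    wget -O jdk8.tar.gz "https://example.com/jdk-8-linux-x64.tar.gz"   # placeholder URL
    tar xzf jdk8.tar.gz
    export JAVA_HOME=$OPENSHIFT_DATA_DIR/jdk1.8.0_xx                   # adjust to the extracted dir
    export PATH=$JAVA_HOME/bin:$PATH
    java -version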
31,358,992
Application 'appname' failed to start (port 8080 not available) on open shift node app
I have written a node restify server in CoffeeScript and I can't seem to get it running. While deploying I get the following error: Waiting for application port (8080) become available ... after which I get the following error: Application 'appname' failed to start (port 8080 not available) If CoffeeScript is the problem, is there a workaround for it? I would not want to change back to JS. My server code is: restify = require 'restify' Bunyan = require 'bunyan' server = restify.createServer name: 'APPNAME' version: '0.0.1' log: Bunyan.createLogger name: 'api' serializers: req: ()-> return "bad" # Usercontroller.access calls a function to process the request server.post '/user/access', UserController.access server = create.createServer() server.listen server_port, -> console.log "Http server listening on #{server_port}" require('./document')(server.router.mounts, 'restify') return
Application 'appname' failed to start (port 8080 not available) on open shift node app I have written a node restify server in CoffeeScript and I can't seem to get it running. While deploying I get the following error: Waiting for application port (8080) become available ... after which I get the following error: Application 'appname' failed to start (port 8080 not available) If CoffeeScript is the problem, is there a workaround for it? I would not want to change back to JS. My server code is: restify = require 'restify' Bunyan = require 'bunyan' server = restify.createServer name: 'APPNAME' version: '0.0.1' log: Bunyan.createLogger name: 'api' serializers: req: ()-> return "bad" # Usercontroller.access calls a function to process the request server.post '/user/access', UserController.access server = create.createServer() server.listen server_port, -> console.log "Http server listening on #{server_port}" require('./document')(server.router.mounts, 'restify') return
linux, node.js, coffeescript, openshift, restify
19
12,584
9
https://stackoverflow.com/questions/31358992/application-appname-failed-to-start-port-8080-not-available-on-open-shift-no
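On OpenShift v2 this error usually means the server bound to localhost or a hardcoded port instead of the gear-assigned address, which the platform exposes as environment variables. A quick check over SSH (the app name is a placeholder):

    rhc ssh appname
    env | grep OPENSHIFT_NODEJS   # shows OPENSHIFT_NODEJS_IP and OPENSHIFT_NODEJS_PORT

The listen call then has to use both values; note also that in the question's code server.listen is given only a port, and the later assignment server = create.createServer() discards the configured restify server.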
70,589,477
Error installing hyperkit on HomeBrew in M1 Silicon
I'm trying to install hyperkit on MacOS 12.1 (M1 Silicon) and I get the following error. % brew install hyperkit Error: hyperkit: no bottle available! You can try to install from source with: brew install --build-from-source hyperkit Please note building from source is unsupported. You will encounter build failures with some formulae. If you experience any issues please create pull requests instead of asking for help on Homebrew's GitHub, Twitter or any other official channels. With some research I found an incompatibility with M1 Silicon processors ( M1 Compatibility Issue ). Is there a workaround for this? I want to set up minishift on M1 Silicon, and Hyperkit is a prerequisite on MacOS.
Error installing hyperkit on HomeBrew in M1 Silicon I'm trying to install hyperkit on MacOS 12.1 (M1 Silicon) and I get the following error. % brew install hyperkit Error: hyperkit: no bottle available! You can try to install from source with: brew install --build-from-source hyperkit Please note building from source is unsupported. You will encounter build failures with some formulae. If you experience any issues please create pull requests instead of asking for help on Homebrew's GitHub, Twitter or any other official channels. With some research I found an incompatibility with M1 Silicon processors ( M1 Compatibility Issue ). Is there a workaround for this? I want to set up minishift on M1 Silicon, and Hyperkit is a prerequisite on MacOS.
macos, openshift, apple-m1, minishift, hyperkit
19
17,682
5
https://stackoverflow.com/questions/70589477/error-installing-hyperkit-on-homebrew-in-m1-silicon
35,909,771
OpenShift Origin vs OpenShift Enterprise
I'm searching for the main difference between OpenShift Origin and OpenShift Enterprise. I know that the first is open source and the latter is the commercial version. Does OpenShift Enterprise have other features compared to the open source version? Thanks in advance.
OpenShift Origin vs OpenShift Enterprise I'm searching for the main difference between OpenShift Origin and OpenShift Enterprise. I know that the first is open source and the latter is the commercial version. Does OpenShift Enterprise have other features compared to the open source version? Thanks in advance.
openshift-origin, openshift-enterprise
19
23,105
2
https://stackoverflow.com/questions/35909771/openshift-origin-vs-openshift-enterprise
60,540,032
Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew
I've created and pushed a cron job to deployment, but when I see it running in OpenShift, I get the following error message: Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew. From what I understand, this means a job failed to run. But I don't understand why it is failing. Why isn't that logged somewhere? - or if it is, where can I find it? The CronJob controller will keep trying to start a job according to the most recent schedule, but keeps failing, and obviously it has done so >100 times. I've checked the syntax of my cron job, which doesn't give any errors. Also, if there are any syntax messages, I'm not even allowed to push. Anyone know what's wrong? My cron job: apiVersion: batch/v1beta1 kind: CronJob metadata: name: my-cjob labels: job-name: my-cjob spec: schedule: "*/5 * * * *" # activeDeadlineSeconds: 180 # 3 min <<- should this help and why? jobTemplate: spec: template: metadata: name: my-cjob labels: job-name: my-cjob spec: containers: - name: my-cjob image: my-image-name restartPolicy: OnFailure Or should I be using startingDeadlineSeconds ? Anyone who has hit this error message and found a solution? Update as requested in a comment: When running kubectl get cronjob I get the following: NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE my-cjob */5 * * * * False 0 <none> 2d When running kubectl logs my-cjob I get the following: Error from server (NotFound): pods "my-cjob" not found When running kubectl describe cronjob my-cjob I get the following: Error from server (NotFound): the server could not find the requested resource When running kubectl logs <cronjob-pod-name> I get many lines of code... Very difficult for me to understand and sort out... When running kubectl describe pod <cronjob-pod-name> I also get a lot, but this is way easier to sort. Anything specific? Running kubectl get events I get a lot, but I think this is the related one: LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 1h 1h 2 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Pod spec.containers{apiproxy} Warning Unhealthy kubelet, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Liveness probe failed: Get [URL] dial tcp xxxx:8080: connect: connection refused
Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew I've created and pushed a cron job to deployment, but when I see it running in OpenShift, I get the following error message: Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew. From what I understand, this means a job failed to run. But I don't understand why it is failing. Why isn't that logged somewhere? - or if it is, where can I find it? The CronJob controller will keep trying to start a job according to the most recent schedule, but keeps failing, and obviously it has done so >100 times. I've checked the syntax of my cron job, which doesn't give any errors. Also, if there are any syntax messages, I'm not even allowed to push. Anyone know what's wrong? My cron job: apiVersion: batch/v1beta1 kind: CronJob metadata: name: my-cjob labels: job-name: my-cjob spec: schedule: "*/5 * * * *" # activeDeadlineSeconds: 180 # 3 min <<- should this help and why? jobTemplate: spec: template: metadata: name: my-cjob labels: job-name: my-cjob spec: containers: - name: my-cjob image: my-image-name restartPolicy: OnFailure Or should I be using startingDeadlineSeconds ? Anyone who has hit this error message and found a solution? Update as requested in a comment: When running kubectl get cronjob I get the following: NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE my-cjob */5 * * * * False 0 <none> 2d When running kubectl logs my-cjob I get the following: Error from server (NotFound): pods "my-cjob" not found When running kubectl describe cronjob my-cjob I get the following: Error from server (NotFound): the server could not find the requested resource When running kubectl logs <cronjob-pod-name> I get many lines of code... Very difficult for me to understand and sort out... When running kubectl describe pod <cronjob-pod-name> I also get a lot, but this is way easier to sort. Anything specific? Running kubectl get events I get a lot, but I think this is the related one: LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 1h 1h 2 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Pod spec.containers{apiproxy} Warning Unhealthy kubelet, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Liveness probe failed: Get [URL] dial tcp xxxx:8080: connect: connection refused
kubernetes, cron, openshift, devops
19
26,932
3
https://stackoverflow.com/questions/60540032/cannot-determine-if-job-needs-to-be-started-too-many-missed-start-time-100
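A hedged sketch of the fix the error message itself suggests: add startingDeadlineSeconds so the controller only looks back a bounded window for missed runs. Names reuse the question's; the 200-second deadline is an arbitrary example:

    oc apply -f - <<'EOF'
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: my-cjob
    spec:
      schedule: "*/5 * * * *"
      startingDeadlineSeconds: 200   # count misses only within the last 200s
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: my-cjob
                image: my-image-name
              restartPolicy: OnFailure
    EOF

With the deadline set, the ">100 missed start times" condition can no longer accumulate across long gaps, for example while the controller was down or the job was suspended.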
11,222,924
How to test an openshift application on local host
I recently started playing with Openshift and I am wondering if there is a way to deploy (test) your application on localhost before you upload it to Openshift. The thing is that every time I make a change to the code, it takes some time to push it to Openshift and check if it works. The Google App Engine SDK (for Eclipse), for example, includes a web server application (an App Engine simulator) that allows you to test your app locally before you deploy to Google. thnx Fotis
How to test an openshift application on local host I recently started playing with Openshift and I am wondering if there is a way to deploy (test) your application on localhost before you upload it to Openshift. The thing is that every time I make a change to the code, it takes some time to push it to Openshift and check if it works. The Google App Engine SDK (for Eclipse), for example, includes a web server application (an App Engine simulator) that allows you to test your app locally before you deploy to Google. thnx Fotis
localhost, openshift
19
10,674
1
https://stackoverflow.com/questions/11222924/how-to-test-an-openshift-application-on-local-host
54,819,381
Request vs limit cpu in kubernetes/openshift
I have a dilemma about choosing the right request and limit settings for a pod in Openshift. Some data: during start up, the application requires at least 600 millicores to be able to fulfill the readiness check within 150 seconds. after start up, 200 millicores should be sufficient for the application to stay in idle state. So my understanding from the documentation: CPU Requests Each container in a pod can specify the amount of CPU it requests on a node. The scheduler uses CPU requests to find a node with an appropriate fit for a container. The CPU request represents a minimum amount of CPU that your container may consume, but if there is no contention for CPU, it can use all available CPU on the node. If there is CPU contention on the node, CPU requests provide a relative weight across all containers on the system for how much CPU time the container may use. On the node, CPU requests map to Kernel CFS shares to enforce this behavior. Note that the scheduler refers to the requested CPU to perform allocation on the node, and it is then a guaranteed resource once allocated. On the other side, I might allocate extra CPU, as the 600 millicores might only be required during start up. So should I go for resources: limits: cpu: 1 requests: cpu: 600m for guaranteed resources or resources: limits: cpu: 1 requests: cpu: 200m for better cpu saving
Request vs limit cpu in kubernetes/openshift I have a dilemma about choosing the right request and limit settings for a pod in Openshift. Some data: during start up, the application requires at least 600 millicores to be able to fulfill the readiness check within 150 seconds. after start up, 200 millicores should be sufficient for the application to stay in idle state. So my understanding from the documentation: CPU Requests Each container in a pod can specify the amount of CPU it requests on a node. The scheduler uses CPU requests to find a node with an appropriate fit for a container. The CPU request represents a minimum amount of CPU that your container may consume, but if there is no contention for CPU, it can use all available CPU on the node. If there is CPU contention on the node, CPU requests provide a relative weight across all containers on the system for how much CPU time the container may use. On the node, CPU requests map to Kernel CFS shares to enforce this behavior. Note that the scheduler refers to the requested CPU to perform allocation on the node, and it is then a guaranteed resource once allocated. On the other side, I might allocate extra CPU, as the 600 millicores might only be required during start up. So should I go for resources: limits: cpu: 1 requests: cpu: 600m for guaranteed resources or resources: limits: cpu: 1 requests: cpu: 200m for better cpu saving
kubernetes, openshift
17
29,939
1
https://stackoverflow.com/questions/54819381/request-vs-limit-cpu-in-kubernetes-openshift
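A hedged middle ground is to request the steady-state figure and absorb the startup spike through the limit plus a tolerant readiness probe, since requests above steady state permanently reserve capacity the pod will not use. A sketch with the question's numbers (the probe path and timings are assumptions to tune):

    resources:
      requests:
        cpu: 200m      # what the scheduler reserves long-term
      limits:
        cpu: "1"       # startup may burst up to a core when the node has slack
    readinessProbe:
      httpGet:
        path: /health  # hypothetical endpoint
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
      failureThreshold: 15   # 30s + 15*10s comfortably covers the 150s startup

The caveat is that on a CPU-contended node the pod may start slowly, since 200m is all that is guaranteed.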
49,501,133
How to get Openshift session token using rest api calls
As part of an automated test suite I have to use OpenShift's REST APIs to send commands and get OpenShift's status. To authenticate these API calls I need to embed an authorization token in every call. Currently, I get this token by executing the following commands with ssh on the machine where OpenShift is installed: oc login --username=<uname> --password=<password> oc whoami --show-token I would like to stop using the oc tool completely and get this token using HTTP calls to the APIs, but am not really able to find a document that explains how to do it. If I use the option --loglevel=10 when calling oc commands I can see the HTTP calls made by oc when logging in, but it is quite difficult for me to reverse-engineer the process from these logs. Theoretically this is not something specific to OpenShift but rather to the OAuth protocol. I have found some documentation like the one posted here but I still find it difficult to implement without specific examples. If that helps, I am developing this tool using ruby (not rails). P.S. I know that normally for this type of job one should use Service Account Tokens, but since this is a testing environment the OpenShift installation gets removed and reinstalled fairly often. This would force me to re-create the service account every time with the oc command line tool and again prevent me from automating the process.
How to get Openshift session token using rest api calls As part of an automated test suite I have to use OpenShift's REST APIs to send commands and get OpenShift's status. To authenticate these API calls I need to embed an authorization token in every call. Currently, I get this token by executing the following commands with ssh on the machine where OpenShift is installed: oc login --username=<uname> --password=<password> oc whoami --show-token I would like to stop using the oc tool completely and get this token using HTTP calls to the APIs, but am not really able to find a document that explains how to do it. If I use the option --loglevel=10 when calling oc commands I can see the HTTP calls made by oc when logging in, but it is quite difficult for me to reverse-engineer the process from these logs. Theoretically this is not something specific to OpenShift but rather to the OAuth protocol. I have found some documentation like the one posted here but I still find it difficult to implement without specific examples. If that helps, I am developing this tool using ruby (not rails). P.S. I know that normally for this type of job one should use Service Account Tokens, but since this is a testing environment the OpenShift installation gets removed and reinstalled fairly often. This would force me to re-create the service account every time with the oc command line tool and again prevent me from automating the process.
oauth, openshift
17
49,475
5
https://stackoverflow.com/questions/49501133/how-to-get-openshift-session-token-using-rest-api-calls
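The challenging OAuth client that oc itself uses can be driven with curl; the token comes back in the Location header of a 302. A sketch, with the master URL as a placeholder:

    curl -u <uname>:<password> -kI \
      "https://<master>:8443/oauth/authorize?response_type=token&client_id=openshift-challenging-client"
    # The 302 response carries the token in its Location header, e.g.:
    # Location: https://<master>:8443/oauth/token/implicit#access_token=<TOKEN>&expires_in=86400&...

The extracted token is then sent as "Authorization: Bearer <TOKEN>" on subsequent API calls.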
35,105,267
How to set up Openshift with let's encrypt (letsencrypt)
How do I set up an Openshift app to work with Let's Encrypt? NB: Openshift does not work with a simple python webserver approach to serving; you need to use the correct port and bind to the correct IP address. Also, the app/gear does not necessarily have an html root. (A question to which I will post an answer below.)
How to set up Openshift with let's encrypt (letsencrypt) How do I set up an Openshift app to work with Let's Encrypt? NB: Openshift does not work with a simple python webserver approach to serving; you need to use the correct port and bind to the correct IP address. Also, the app/gear does not necessarily have an html root. (A question to which I will post an answer below.)
openshift, lets-encrypt
17
6,200
5
https://stackoverflow.com/questions/35105267/how-to-set-up-openshift-with-lets-encrypt-letsencrypt
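A hedged outline of the flow on OpenShift v2 (the asker posted their own detailed answer): obtain the certificate with certbot's manual challenge, then attach it to the app's custom-domain alias with rhc. Domain, app, and file names are placeholders:

    certbot certonly --manual -d www.example.com
    rhc alias update-cert myapp www.example.com \
        --certificate fullchain.pem --private-key privkey.pem

The manual challenge requires serving the validation file from the gear itself, which is where the port and IP-binding caveats in the question come in.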
46,773,727
OpenShift oc create fails with "already exists"
Attempting to create a set of resources based on a file via oc create fails if they already exist. According to the docs here oc create should: Parse a configuration file and create one or more OpenShift Enterprise objects ... Any existing resources are ignored. (emphasis mine). I can't see any config options for this command or globally that would alter this behaviour and it seems to me to be counter to the docs. The command I ran is oc create -f some.file The output is: Error from server: services 'my-app' already exists Error from server: buildconfigs 'my-app' already exists Error from server: imagestreams 'my-app' already exists Error from server: deploymentconfigs 'my-app' already exists Error from server: routes 'my-app' already exists Error from server: secrets 'my-app' already exists It also exits with a non-zero exit code, so it's not just a warning. Am I missing something obvious here or misunderstanding what the documentation is saying? I just want to be able to apply this file and ensure the state of the OpenShift project afterwards.
OpenShift oc create fails with "already exists" Attempting to create a set of resources based on a file via oc create fails if they already exist. According to the docs here oc create should: Parse a configuration file and create one or more OpenShift Enterprise objects ... Any existing resources are ignored. (emphasis mine). I can't see any config options for this command or globally that would alter this behaviour and it seems to me to be counter to the docs. The command I ran is oc create -f some.file The output is: Error from server: services 'my-app' already exists Error from server: buildconfigs 'my-app' already exists Error from server: imagestreams 'my-app' already exists Error from server: deploymentconfigs 'my-app' already exists Error from server: routes 'my-app' already exists Error from server: secrets 'my-app' already exists It also exits with a non-zero exit code, so it's not just a warning. Am I missing something obvious here or misunderstanding what the documentation is saying? I just want to be able to apply this file and ensure the state of the OpenShift project afterwards.
kubernetes, openshift
17
19,557
3
https://stackoverflow.com/questions/46773727/openshift-oc-create-fails-with-already-exists
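If the goal is "make the project match this file", the declarative verbs behave that way, while oc create by design errors on names that already exist. A minimal sketch:

    oc apply -f some.file     # creates missing objects and patches existing ones
    oc replace -f some.file   # alternative: fully replaces objects that already exist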
54,759,652
Spring boot cold start
I have a spring boot application which I'm running inside docker containers in an openshift cluster. In steady state, there are N instances of the application (say N=5) and requests are load balanced to these N instances. Everything runs fine and response time is low (~5ms with total throughput of ~60k). Whenever I add a new instance, response time goes up briefly (up to ~70ms) and then comes back to normal. Is there anything I can do to avoid this type of cold start? I tried pre-warming the app by making ~100 curl calls sequentially before sending traffic, but that did not help. Do I need a better warm-up script with high concurrency? Is there a better way to handle this? Thanks
Spring boot cold start I have a spring boot application which I'm running inside docker containers in an openshift cluster. In steady state, there are N instances of the application (say N=5) and requests are load balanced to these N instances. Everything runs fine and response time is low (~5ms with total throughput of ~60k). Whenever I add a new instance, response time goes up briefly (up to ~70ms) and then comes back to normal. Is there anything I can do to avoid this type of cold start? I tried pre-warming the app by making ~100 curl calls sequentially before sending traffic, but that did not help. Do I need a better warm-up script with high concurrency? Is there a better way to handle this? Thanks
java, spring, spring-boot, openshift
17
17,717
6
https://stackoverflow.com/questions/54759652/spring-boot-cold-start
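A hedged sketch of a more aggressive warm-up than 100 sequential curls: run batches in parallel with shell background jobs, so the JIT sees concurrent load before real traffic arrives (URL, counts, and endpoint are placeholders):

    for i in $(seq 1 20); do
      ( for j in $(seq 1 50); do curl -s -o /dev/null "http://pod-ip:8080/warmup"; done ) &
    done
    wait   # only report the pod ready (readiness probe) after this completes

Sequential warm-up exercises few code paths per unit time; concurrency is usually what triggers the remaining JIT compilation and connection-pool initialization.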
47,812,807
OpenShift Service Proxy timeout
I have an application deployed on OpenShift Container Platform v3.6 . It consists of multiple services interconnected with each other. The frontend service calls a time-consuming function of the backend service (through a REST call), but after 30 seconds it receives a "504 Gateway Timeout" message. The frontend runs over nginx , but I've already configured it with long proxy send/read timeouts, so the 504 message doesn't come from it. I think it comes from the Service Proxy component of the OpenShift Platform, but I can't find out where and how to configure a kind of service proxy timeout . I know about the HAProxy timeout for external routes, but my services live in the same cluster application and communicate with each other via OpenShift Container Platform DNS . Could it be a Service Proxy timeout issue? How can it be configured? Thanks!
OpenShift Service Proxy timeout I have an application deployed on OpenShift Container Platform v3.6 . It consists of multiple services interconnected with each other. The frontend service calls a time-consuming function of the backend service (through a REST call), but after 30 seconds it receives a "504 Gateway Timeout" message. The frontend runs over nginx , but I've already configured it with long proxy send/read timeouts, so the 504 message doesn't come from it. I think it comes from the Service Proxy component of the OpenShift Platform, but I can't find out where and how to configure a kind of service proxy timeout . I know about the HAProxy timeout for external routes, but my services live in the same cluster application and communicate with each other via OpenShift Container Platform DNS . Could it be a Service Proxy timeout issue? How can it be configured? Thanks!
rest, proxy, timeout, openshift, haproxy
16
36,008
2
https://stackoverflow.com/questions/47812807/openshift-service-proxy-timeout
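For completeness, the route-level knob mentioned in the question is set per route via an annotation; in-cluster service-to-service traffic does not pass through HAProxy, so a 504 on that path normally comes from an HTTP server in between (here the frontend's nginx or the backend itself):

    oc annotate route <route-name> --overwrite haproxy.router.openshift.io/timeout=120s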
38,585,783
OpenShift CLI: oc vs rhc?
What is the difference between the rhc and oc CLI tools? As far as I can see, they do almost the same thing: oc : The OpenShift CLI exposes commands for managing your applications, as well as lower level tools to interact with each component of your system. rhc does the same, no? What should I use to manage my containers on the OpenShift platform?
OpenShift CLI: oc vs rhc? What is the difference between the rhc and oc CLI tools? As far as I can see, they do almost the same thing: oc : The OpenShift CLI exposes commands for managing your applications, as well as lower level tools to interact with each component of your system. rhc does the same, no? What should I use to manage my containers on the OpenShift platform?
openshift
16
3,840
1
https://stackoverflow.com/questions/38585783/openshift-cli-oc-vs-rhc
27,694,783
Could not complete schema update: org.h2.jdbc.JdbcSQLException: Table "PG_CLASS" not found; SQL statement
I have the following problem when deploying an application on openshift. I use a WildFly application server and the PostgreSQL cartridge. In persistence.xml I set property "hibernate.hbm2ddl.auto" value="update". In the wildfly modules in org/main/postgresql I see that wildfly uses postgresql-9.3-1102-jdbc41.jar 12:12:14,760 ERROR [org.hibernate.tool.hbm2ddl.SchemaUpdate] (ServerService Thread Pool -- 62) HHH000319: Could not get database metadata: org.h2.jdbc.JdbcSQLException: Table "PG_CLASS" not found; SQL statement: select relname from pg_class where relkind='S' [42102-173] at org.h2.message.DbException.getJdbcSQLException(DbException.java:331) at org.h2.message.DbException.get(DbException.java:171) at org.h2.message.DbException.get(DbException.java:148) at org.h2.command.Parser.readTableOrView(Parser.java:4864) at org.h2.command.Parser.readTableFilter(Parser.java:1107) at org.h2.command.Parser.parseSelectSimpleFromPart(Parser.java:1713) at org.h2.command.Parser.parseSelectSimple(Parser.java:1821) at org.h2.command.Parser.parseSelectSub(Parser.java:1707) at org.h2.command.Parser.parseSelectUnion(Parser.java:1550) at org.h2.command.Parser.parseSelect(Parser.java:1538) at org.h2.command.Parser.parsePrepared(Parser.java:405) at org.h2.command.Parser.parse(Parser.java:279) at org.h2.command.Parser.parse(Parser.java:251) at org.h2.command.Parser.prepareCommand(Parser.java:218) at org.h2.engine.Session.prepareLocal(Session.java:428) at org.h2.engine.Session.prepareCommand(Session.java:377) at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1138) at org.h2.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:72) at org.jboss.jca.adapters.jdbc.WrappedStatement.executeQuery(WrappedStatement.java:344) at org.hibernate.tool.hbm2ddl.DatabaseMetadata.initSequences(DatabaseMetadata.java:178) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.tool.hbm2ddl.DatabaseMetadata.<init>(DatabaseMetadata.java:92) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.tool.hbm2ddl.DatabaseMetadata.<init>(DatabaseMetadata.java:84) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:196) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:178) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:522) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1859) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:852) [hibernate-entitymanager-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:845) [hibernate-entitymanager-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.withTccl(ClassLoaderServiceImpl.java:398) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:844) [hibernate-entitymanager-4.3.7.Final.jar:4.3.7.Final] at org.jboss.as.jpa.hibernate4.TwoPhaseBootstrapImpl.build(TwoPhaseBootstrapImpl.java:44) [jipijapa-hibernate4-3-1.0.1.Final.jar:] at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:154) [wildfly-jpa-8.2.0.Final.jar:8.2.0.Final] at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:117) [wildfly-jpa-8.2.0.Final.jar:8.2.0.Final] at java.security.AccessController.doPrivileged(Native Method) [rt.jar:1.8.0_05] at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:474) [wildfly-security-manager-1.0.0.Final.jar:1.0.0.Final] at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1.run(PersistenceUnitServiceImpl.java:182) [wildfly-jpa-8.2.0.Final.jar:8.2.0.Final] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_05] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_05] at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_05] at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final.jar:2.1.1.Final] I use Hibernate as JPA (hibernate-core-4.3.6), and I think this problem comes from the different PostgreSQL versions, 9.2 and 9.3. Can anybody explain how to change the hibernate dialect to the 9.2 Postgres version, or how to change the postgresql library module on openshift?
Could not complete schema update: org.h2.jdbc.JdbcSQLException: Table "PG_CLASS" not found; SQL statement I have the following problem when deploying an application on openshift. I use a WildFly application server and the PostgreSQL cartridge. In persistence.xml I set property "hibernate.hbm2ddl.auto" value="update". In the wildfly modules in org/main/postgresql I see that wildfly uses postgresql-9.3-1102-jdbc41.jar 12:12:14,760 ERROR [org.hibernate.tool.hbm2ddl.SchemaUpdate] (ServerService Thread Pool -- 62) HHH000319: Could not get database metadata: org.h2.jdbc.JdbcSQLException: Table "PG_CLASS" not found; SQL statement: select relname from pg_class where relkind='S' [42102-173] at org.h2.message.DbException.getJdbcSQLException(DbException.java:331) at org.h2.message.DbException.get(DbException.java:171) at org.h2.message.DbException.get(DbException.java:148) at org.h2.command.Parser.readTableOrView(Parser.java:4864) at org.h2.command.Parser.readTableFilter(Parser.java:1107) at org.h2.command.Parser.parseSelectSimpleFromPart(Parser.java:1713) at org.h2.command.Parser.parseSelectSimple(Parser.java:1821) at org.h2.command.Parser.parseSelectSub(Parser.java:1707) at org.h2.command.Parser.parseSelectUnion(Parser.java:1550) at org.h2.command.Parser.parseSelect(Parser.java:1538) at org.h2.command.Parser.parsePrepared(Parser.java:405) at org.h2.command.Parser.parse(Parser.java:279) at org.h2.command.Parser.parse(Parser.java:251) at org.h2.command.Parser.prepareCommand(Parser.java:218) at org.h2.engine.Session.prepareLocal(Session.java:428) at org.h2.engine.Session.prepareCommand(Session.java:377) at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1138) at org.h2.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:72) at org.jboss.jca.adapters.jdbc.WrappedStatement.executeQuery(WrappedStatement.java:344) at org.hibernate.tool.hbm2ddl.DatabaseMetadata.initSequences(DatabaseMetadata.java:178) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.tool.hbm2ddl.DatabaseMetadata.<init>(DatabaseMetadata.java:92) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.tool.hbm2ddl.DatabaseMetadata.<init>(DatabaseMetadata.java:84) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:196) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:178) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:522) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1859) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:852) [hibernate-entitymanager-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl$4.perform(EntityManagerFactoryBuilderImpl.java:845) [hibernate-entitymanager-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.withTccl(ClassLoaderServiceImpl.java:398) [hibernate-core-4.3.7.Final.jar:4.3.7.Final] at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:844) [hibernate-entitymanager-4.3.7.Final.jar:4.3.7.Final] at org.jboss.as.jpa.hibernate4.TwoPhaseBootstrapImpl.build(TwoPhaseBootstrapImpl.java:44) [jipijapa-hibernate4-3-1.0.1.Final.jar:] at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:154) [wildfly-jpa-8.2.0.Final.jar:8.2.0.Final] at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:117) [wildfly-jpa-8.2.0.Final.jar:8.2.0.Final] at java.security.AccessController.doPrivileged(Native Method) [rt.jar:1.8.0_05] at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:474) [wildfly-security-manager-1.0.0.Final.jar:1.0.0.Final] at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1.run(PersistenceUnitServiceImpl.java:182) [wildfly-jpa-8.2.0.Final.jar:8.2.0.Final] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_05] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_05] at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_05] at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final.jar:2.1.1.Final] I use Hibernate as JPA (hibernate-core-4.3.6), and I think this problem comes from the different PostgreSQL versions, 9.2 and 9.3. Can anybody explain how to change the hibernate dialect to the 9.2 Postgres version, or how to change the postgresql library module on openshift?
java, hibernate, postgresql, jdbc, openshift
16
15,359
6
https://stackoverflow.com/questions/27694783/could-not-complete-schema-update-org-h2-jdbc-jdbcsqlexception-table-pg-class
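The H2 frames in the trace suggest the persistence unit is still bound to WildFly's default ExampleDS (an H2 datasource) rather than the PostgreSQL one. A hedged persistence.xml fragment; the JNDI name is an assumption and must match a datasource actually defined in the server configuration:

    <persistence-unit name="myPU">
      <!-- hypothetical JNDI name; must match the datasource in standalone.xml -->
      <jta-data-source>java:jboss/datasources/PostgreSQLDS</jta-data-source>
      <properties>
        <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQL9Dialect"/>
        <property name="hibernate.hbm2ddl.auto" value="update"/>
      </properties>
    </persistence-unit>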
64,553,885
Restart Pod when secrets get updated
We are using secrets as environment variables in our pods, but every time we update the secrets, we have to redeploy the pods for the changes to take effect. We are looking for a mechanism where pods get restarted automatically whenever secrets get updated. Any help on this? Thanks in advance.
Restart Pod when secrets get updated We are using secrets as environment variables in our pods, but every time we update the secrets, we have to redeploy the pods for the changes to take effect. We are looking for a mechanism where pods get restarted automatically whenever secrets get updated. Any help on this? Thanks in advance.
kubernetes, openshift, kubernetes-pod
16
37,623
4
https://stackoverflow.com/questions/64553885/restart-pod-when-secrets-gets-updated
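One widely used pattern, sketched here with hypothetical resource names: stamp a checksum of the secret onto the pod template as an annotation, so any change to the secret changes the template and triggers a rolling restart (tools such as stakater's Reloader automate the same idea):

    CHECKSUM=$(oc get secret my-secret -o yaml | sha256sum | cut -d' ' -f1)
    oc patch deployment my-app -p \
      "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/secret\":\"$CHECKSUM\"}}}}}"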
55,044,623
Mixed Content error because of Keycloak default login redirection
INFORMATION NEEDED: I use Keycloak (the Docker version) behind a Spring project. (The client side of this project is React and communication between client and backend is provided by REST services.) The client side is secured and uses the "https" scheme. This is my Spring configuration: keycloak: auth-server-url: [URL] realm: master resource: clientname public-client: true THE ROOT OF THE PROBLEM: When I click a link from the client, it calls a Spring service normally. But before that, it redirects to the default login page of Keycloak, adding the path sso/login to the current "https" url but changing the scheme to "http". Redirecting from https to http creates a problem like this: Mixed Content: The page at '[URL] was loaded over HTTPS, but requested an insecure resource '[URL] This request has been blocked; the content must be served over HTTPS.
Mixed Content error because of Keycloak default login redirection INFORMATION NEEDED: I use Keycloak (the Docker version) behind a Spring project. (The client side of this project is React and communication between client and backend is provided by REST services.) The client side is secured and uses the "https" scheme. This is my Spring configuration: keycloak: auth-server-url: [URL] realm: master resource: clientname public-client: true THE ROOT OF THE PROBLEM: When I click a link from the client, it calls a Spring service normally. But before that, it redirects to the default login page of Keycloak, adding the path sso/login to the current "https" url but changing the scheme to "http". Redirecting from https to http creates a problem like this: Mixed Content: The page at '[URL] was loaded over HTTPS, but requested an insecure resource '[URL] This request has been blocked; the content must be served over HTTPS.
spring, openshift, keycloak
16
33,085
8
https://stackoverflow.com/questions/55044623/mixed-content-error-because-of-keycloak-default-login-redirection
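For the Docker image, a commonly cited setting is to enable proxy address forwarding so redirects preserve the original https scheme; the TLS-terminating proxy must also send the X-Forwarded-Proto and X-Forwarded-For headers. A sketch (container name and ports are examples):

    docker run -d --name keycloak \
      -e PROXY_ADDRESS_FORWARDING=true \
      -p 8080:8080 jboss/keycloak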
21,972,377
How to connect OpenShift with a private BitBucket Repository
I want to host a website on OpenShift, but I want my code to synchronize automatically with a "free but private" service like Git, so I found BitBucket. I tried to connect it by myself by pasting this key from my OpenShift app: ssh://530910bd5973ca01ea00007d@XXXXXXXXXX.rhcloud.com/~/git/XXXXXXXXXX.git/ into: BitBucket -> Repository -> Import Repository -> Old Repository. But I get this error: Unsupported protocol. Please use '[URL] '[URL] , 'svn://' or 'git://. I believe I have the same issue as this guy: [URL] But I don't understand how to apply this solution. Before you ask: yes, I'm new to GitHub, BitBucket, OpenShift, etc.
How to connect OpenShift with a private BitBucket Repository I want to host a website on OpenShift, but I want my code to synchronize automatically with a "free but private" service like Git, so I found BitBucket. I tried to connect it by myself by pasting this key from my OpenShift app: ssh://530910bd5973ca01ea00007d@XXXXXXXXXX.rhcloud.com/~/git/XXXXXXXXXX.git/ into: BitBucket -> Repository -> Import Repository -> Old Repository. But I get this error: Unsupported protocol. Please use '[URL] '[URL] , 'svn://' or 'git://. I believe I have the same issue as this guy: [URL] But I don't understand how to apply this solution. Before you ask: yes, I'm new to GitHub, BitBucket, OpenShift, etc.
git, svn, github, bitbucket, openshift
16
18,098
1
https://stackoverflow.com/questions/21972377/how-to-connect-openshift-with-a-private-bitbucket-repository
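Since BitBucket's importer only accepts http(s)/svn/git URLs, the usual workaround runs the other way: create an empty BitBucket repository, then push the local clone of the OpenShift repo to both remotes. A sketch with placeholder names:

    git clone <openshift-git-url> myapp    # the ssh://...rhcloud.com URL from the question
    cd myapp
    git remote add bitbucket git@bitbucket.org:<user>/<repo>.git
    git push bitbucket master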
45,272,608
how do I tail logs in open shift using oc client
From the question you can tell I'm a newb. At the moment, to get the logs of my pod I'm doing a ... oc logs -f api-myapp-v1-48-cdrs2 This shows me everything in the log. How can I tail them instead? Also, I was wondering if someone could point me to a nice cheat sheet of OpenShift commands? One that is good for beginners. thanks
how do I tail logs in open shift using oc client From the question you can tell I'm a newb. At the moment, to get the logs of my pod I'm doing a ... oc logs -f api-myapp-v1-48-cdrs2 This shows me everything in the log. How can I tail them instead? Also, I was wondering if someone could point me to a nice cheat sheet of OpenShift commands? One that is good for beginners. thanks
openshift
15
28,842
2
https://stackoverflow.com/questions/45272608/how-do-i-tail-logs-in-open-shift-using-oc-client
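oc logs accepts the usual kubectl flags for this; a couple of examples using the pod name from the question:

    oc logs -f --tail=100 api-myapp-v1-48-cdrs2   # follow, starting from the last 100 lines
    oc logs --since=10m api-myapp-v1-48-cdrs2     # only the last 10 minutes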
46,014,206
Kubernetes - Liveness and Readiness probe implementation
I'm developing a service using Spring and deploying it on OpenShift. Currently I'm using the Spring Actuator health endpoint to serve as a liveness and readiness probe for Kubernetes. However, I will add a call to another service in the Actuator health endpoint, and it looks to me that in that case I need to implement a new liveness probe for my service. If I don't do that, then a failure in the second service will result in the liveness probe failing, and Kubernetes will restart my service without any real need. Is it OK, for a liveness probe, to implement some simple REST controller which will always return HTTP status 200? If it works, the service can always be considered alive? Or is there any better way to do it?
Kubernetes - Liveness and Readiness probe implementation I'm developing a service using Spring and deploying it on OpenShift. Currently I'm using the Spring Actuator health endpoint to serve as a liveness and readiness probe for Kubernetes. However, I will add a call to another service in the Actuator health endpoint, and it looks to me that in that case I need to implement a new liveness probe for my service. If I don't do that, then a failure in the second service will result in the liveness probe failing, and Kubernetes will restart my service without any real need. Is it OK, for a liveness probe, to implement some simple REST controller which will always return HTTP status 200? If it works, the service can always be considered alive? Or is there any better way to do it?
spring, kubernetes, openshift
15
20,450
4
https://stackoverflow.com/questions/46014206/kubernetes-liveness-and-readiness-probe-implementation
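A common split, sketched here with hypothetical endpoint paths: liveness points at a trivial "process answers" endpoint so a downstream outage never restarts the pod, while readiness keeps the dependency-aware Actuator check so the pod merely drops out of rotation:

    livenessProbe:
      httpGet:
        path: /ping      # hypothetical endpoint that only proves the app responds
        port: 8080
      initialDelaySeconds: 30
    readinessProbe:
      httpGet:
        path: /health    # Actuator endpoint that also checks dependencies
        port: 8080
      periodSeconds: 10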
41,937,330
How to delete or overwrite a secret in OpenShift?
I'm trying to create a secret on OpenShift v3.3.0 using: oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties -n my-project Because I created the same secret earlier, I get this error message: Error from server: secrets "my-secret" already exists I looked at oc , oc create and oc create secret options and could not find an option to overwrite the secret when creating it. I then tried to delete the existing secret with oc delete . All the commands listed below return either No resources found or a syntax error. oc delete secrets -l my-secret -n my-project oc delete secret -l my-secret -n my-project oc delete secrets -l my-secret oc delete secret -l my-secret oc delete pods,secrets -l my-project oc delete pods,secrets -l my-secret oc delete secret generic -l my-secret Do you know how to delete a secret or overwrite a secret upon creation using the OpenShift console or the command line?
How to delete or overwrite a secret in OpenShift? I'm trying to create a secret on OpenShift v3.3.0 using: oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties -n my-project Because I created the same secret earlier, I get this error message: Error from server: secrets "my-secret" already exists I looked at oc , oc create and oc create secret options and could not find an option to overwrite the secret when creating it. I then tried to delete the existing secret with oc delete . All the commands listed below return either No resources found or a syntax error. oc delete secrets -l my-secret -n my-project oc delete secret -l my-secret -n my-project oc delete secrets -l my-secret oc delete secret -l my-secret oc delete pods,secrets -l my-project oc delete pods,secrets -l my-secret oc delete secret generic -l my-secret Do you know how to delete a secret or overwrite a secret upon creation using the OpenShift console or the command line?
openshift, openshift-origin
15
25,768
2
https://stackoverflow.com/questions/41937330/how-to-delete-or-overwrite-a-secret-in-openshift
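The -l flag is a label selector, which is why every command above matches nothing; deleting by name works directly, and create piped through replace gives an overwrite in one step:

    oc delete secret my-secret -n my-project
    # or overwrite without deleting first:
    oc create secret generic my-secret --from-file=... -n my-project \
      --dry-run -o yaml | oc replace -f -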
19,934,428
openshift application goes idle and halts its cron jobs
I use openshift to run a script from time to time with the cron cartridge . However, as my application has no web activity (yet), it goes idle and my process doesn't run. One could think of an ugly solution to generate fake web load by using another service (such as ifttt) to retrieve a page constantly, but this sounds wrong. Could there be a better solution?
openshift application goes idle and halts its cron jobs I use openshift to run a script from time to time with the cron cartridge . However, as my application has no web activity (yet), it goes idle and my process doesn't run. One could think of an ugly solution to generate fake web load by using another service (such as ifttt) to retrieve a page constantly, but this sounds wrong. Could there be a better solution?
openshift
15
5,373
6
https://stackoverflow.com/questions/19934428/openshift-application-goes-idle-and-halts-its-cron-jobs
23,264,517
Deploy WAR file to Openshift without using GIT?
I want to upload a WAR file to my Openshift account, but it forces me to use Git or GitHub ( here ). Please forgive me for saying this, but this is very very annoying. Is there any way to upload a WAR file straight to my application without using some third party? My application (in Openshift) consists of: Tomcat 7 (JBoss EWS 2.0), MySQL 5.5. Much appreciated
Deploy WAR file to Openshift without using GIT? I want to upload a WAR file to my Openshift account, but it forces me to use Git or GitHub ( here ). Please forgive me for saying this, but this is very very annoying. Is there any way to upload a WAR file straight to my application without using some third party? My application (in Openshift) consists of: Tomcat 7 (JBoss EWS 2.0), MySQL 5.5. Much appreciated
java, git, tomcat, jboss, openshift
15
11,590
2
https://stackoverflow.com/questions/23264517/deploy-war-file-to-openshift-without-using-git
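A hedged sketch of the binary-deployment feature rhc offered for exactly this case; the exact archive layout requirements are in the rhc deploy documentation, so treat the packaging step below as an assumption:

    rhc app-configure myapp --deployment-type binary
    tar czf app.tar.gz --transform 's|^|repo/|' deployments/ROOT.war   # WAR packaged under the expected repo/ prefix
    rhc deploy app.tar.gz -a myapp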
58,078,775
Openshift service is not available after short inactivity
We have our project hosted in OpenShift (OKD to be precise. We host it ourselves). The setup is as follows: Routing server (Spring Boot 1.5.8 with Zuul): This one takes all the incoming traffic and routes it to the correct services Multiple services (all with Spring Boot): Here is all the business logic We use SOAP for calling other services in this project. Currently, when we call the application, the call goes to the routing server, which then routes it to the main business service. After a short period of inactivity of about one hour, our main business service is not reachable via the external call. The edge server, however, is available and callable 100% of the time. We do get a 504 Gateway Timeout exception from the system when we call it. We already figured out that this is the timeout of the route in openshift ( haproxy.router.openshift.io/timeout in the route). The core problem is that OpenShift seems to hibernate the main business service after inactivity of about one hour. After a delay of 15 minutes, however, the calls seem to find their destination and the data gets processed correctly. How can we turn this behaviour off? Edit 1: We have the same application in normal "old fashioned" VMs in production. We don't have any problems there. We noticed that the services can be "kept alive" when we call them regularly. We built a small service which calls them regularly (every 15 min). This way it seems to work. But this is not a production ready workaround IMO. Edit 2: Our pod config (some names are anonymized): [URL] We do not use autoscaler. Edit 3: Our deployment configs (some names are anonymized): [URL] OpenShift Master: v3.11.0+1c3e643-87 Kubernetes Master: v1.11.0+d4cacc0 OpenShift Web Console: v3.11.0+ea42280 Edit 4: It seems that this is not a problem with OpenShift but rather our tech stack. I will update this question as soon as we have a solution.
Openshift service is not available after short inactivity We have our project hosted in OpenShift (OKD to be precise. We host it ourselves). The setup is as follows: Routing server (Spring Boot 1.5.8 with Zuul): This one takes all the incoming traffic and routes it to the correct services Multiple services (all with Spring Boot): Here is all the business logic We use SOAP for calling other services in this project. Currently, when we call the application, the call goes to the routing server, which then routes it to the main business service. After a short period of inactivity of about one hour, our main business service is not reachable via the external call. The edge server, however, is available and callable 100% of the time. We do get a 504 Gateway Timeout exception from the system when we call it. We already figured out that this is the timeout of the route in openshift ( haproxy.router.openshift.io/timeout in the route). The core problem is that OpenShift seems to hibernate the main business service after inactivity of about one hour. After a delay of 15 minutes, however, the calls seem to find their destination and the data gets processed correctly. How can we turn this behaviour off? Edit 1: We have the same application in normal "old fashioned" VMs in production. We don't have any problems there. We noticed that the services can be "kept alive" when we call them regularly. We built a small service which calls them regularly (every 15 min). This way it seems to work. But this is not a production ready workaround IMO. Edit 2: Our pod config (some names are anonymized): [URL] We do not use autoscaler. Edit 3: Our deployment configs (some names are anonymized): [URL] OpenShift Master: v3.11.0+1c3e643-87 Kubernetes Master: v1.11.0+d4cacc0 OpenShift Web Console: v3.11.0+ea42280 Edit 4: It seems that this is not a problem with OpenShift but rather our tech stack. I will update this question as soon as we have a solution.
java, spring-boot, openshift, okd
15
1,396
1
https://stackoverflow.com/questions/58078775/openshift-service-is-not-available-after-short-inactivity
57,700,073
Oc get pods - Command to just print pod names
I want to get a list of just the pod names, and the result should not include the status, number of instances, etc. I am using the command oc get pods It prints Pod1-qawer Running 1/1 2d Pod2g-bvch Running 1/1 3h Expected result Pod1-qawer Pod2g-bvch How do I avoid the extra details getting printed?
Oc get pods - Command to just print pod names I want to get a list of just the pod names, and the result should not include the status, number of instances, etc. I am using the command oc get pods It prints Pod1-qawer Running 1/1 2d Pod2g-bvch Running 1/1 3h Expected result Pod1-qawer Pod2g-bvch How do I avoid the extra details getting printed?
kubernetes, openshift, kubectl
14
27,833
4
https://stackoverflow.com/questions/57700073/oc-get-pods-command-to-just-print-pod-names
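Several output modes of oc get strip the extra columns; any of these prints just the names:

    oc get pods -o name                                              # prefixed form: pod/Pod1-qawer
    oc get pods -o custom-columns=NAME:.metadata.name --no-headers
    oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'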
31,473,292
ETIMEDOUT connect error using nodemailer in Nodejs Openshift application
I'm facing some problems using the nodemailer module in my node.js single gear non-scalable application on Openshift. As the documentation suggested, I initialized the transporter object and used the sendMail function. A simplified version of my code is var transporter = nodemailer.createTransport({ service: 'Gmail', auth: { user: 'my.mail@gmail.com', pass: 'mypassword' } }); var mail = { from: 'my.mail@gmail.com', to: 'test@mydomain.com', subject: 'Test mail', html: 'Test mail' }; transporter.sendMail(mail, function(error, info) { if(error){ console.log(error); }else{ console.log(info); } }); This code works correctly when I run it on my local machine, but when I try to execute it on the server I get an ETIMEDOUT error, as if the application couldn't connect to the smtp server. { [Error: connect ETIMEDOUT] code: 'ETIMEDOUT', errno: 'ETIMEDOUT', syscall: 'connect' } I've tried to increase the connection timeout parameter, but I got the same results. var transporter = nodemailer.createTransport({ service: 'Gmail', auth: { user: 'my.mail@gmail.com', pass: 'mypassword' }, connectionTimeout: 5 * 60 * 1000, // 5 min }); Is there any firewall or default Openshift setting or environment variable I'm missing?
ETIMEDOUT connect error using nodemailer in Nodejs Openshift application I'm facing some problems using the nodemailer module in my node.js single gear non-scalable application on Openshift. As the documentation suggested, I initialized the transporter object and used the sendMail function. A simplified version of my code is var transporter = nodemailer.createTransport({ service: 'Gmail', auth: { user: 'my.mail@gmail.com', pass: 'mypassword' } }); var mail = { from: 'my.mail@gmail.com', to: 'test@mydomain.com', subject: 'Test mail', html: 'Test mail' }; transporter.sendMail(mail, function(error, info) { if(error){ console.log(error); }else{ console.log(info); } }); This code works correctly when I run it on my local machine, but when I try to execute it on the server I get an ETIMEDOUT error, as if the application couldn't connect to the smtp server. { [Error: connect ETIMEDOUT] code: 'ETIMEDOUT', errno: 'ETIMEDOUT', syscall: 'connect' } I've tried to increase the connection timeout parameter, but I got the same results. var transporter = nodemailer.createTransport({ service: 'Gmail', auth: { user: 'my.mail@gmail.com', pass: 'mypassword' }, connectionTimeout: 5 * 60 * 1000, // 5 min }); Is there any firewall or default Openshift setting or environment variable I'm missing?
node.js, openshift, nodemailer
14
34,285
6
https://stackoverflow.com/questions/31473292/etimedout-connect-error-using-nodemailer-in-nodejs-openshift-application
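One thing worth ruling out here: the 'Gmail' well-known service hides the host and port choice, and PaaS firewalls frequently block some outbound SMTP ports, which shows up as exactly this kind of connect ETIMEDOUT. A sketch that makes the SMTP endpoint explicit so the port can be varied (465 with implicit TLS shown; 587 with secure: false is the other common choice). Whether either port is reachable from a gear depends on the provider:

var nodemailer = require('nodemailer');

// Explicit SMTP settings instead of service: 'Gmail', so the port is
// under our control and can be swapped if the platform blocks it.
var transporter = nodemailer.createTransport({
    host: 'smtp.gmail.com',
    port: 465,
    secure: true, // implicit TLS from the first byte
    auth: {
        user: 'my.mail@gmail.com',
        pass: 'mypassword'
    }
});
// transporter.sendMail(mail, callback) is used exactly as before.

Gmail may additionally reject logins from an unfamiliar server IP, so an application-specific password (or a transactional mail service) is worth trying if the timeout turns into an authentication error.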
23,064,765
Can't send email through openshift
I have a website hosted on OpenShift. I tried to send mail using PHP's mail function. It just returned true, but no mail was received by the recipient. Please tell me the procedure for sending mail. I searched a lot but none of the options worked.
Can't send email through openshift I have a website hosted on OpenShift. I tried to send mail using PHP's mail function. It just returned true, but no mail was received by the recipient. Please tell me the procedure for sending mail. I searched a lot but none of the options worked.
php, email, openshift
14
5,716
1
https://stackoverflow.com/questions/23064765/cant-send-email-through-openshift
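mail() returning true only means the message was handed to the local mailer, not that it was delivered, and OpenShift gears are not set up as trusted mail relays, so messages routinely vanish or get rejected downstream. The standard advice is to send through an authenticated external SMTP account instead. A sketch using PHPMailer 6 (installed via Composer); the host and credentials below are placeholders:

<?php
require 'vendor/autoload.php';

// Authenticated SMTP instead of the local mail() transport.
$mail = new PHPMailer\PHPMailer\PHPMailer(true);
$mail->isSMTP();
$mail->Host       = 'smtp.example.com';   // placeholder SMTP provider
$mail->SMTPAuth   = true;
$mail->Username   = 'user@example.com';
$mail->Password   = 'secret';
$mail->SMTPSecure = 'tls';
$mail->Port       = 587;

$mail->setFrom('user@example.com', 'My App');
$mail->addAddress('recipient@example.com');
$mail->Subject = 'Test';
$mail->Body    = 'Hello from OpenShift';
$mail->send(); // throws on failure because of the constructor's true flag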
39,266,277
Openshift java.net.SocketException: Permission denied
I am using Java 8 and have a Chat Server that works perfectly on my localhost, but when I deploy it to an OpenShift server, I get the following error: java.net.SocketException: Permission denied 2016-09-05 10:36:11,300 INFO [stdout] (Thread-125) Starting Chat server on localhost:8000 ... 2016-09-05 10:36:13,194 ERROR [stderr] (Thread-125) Exception in thread "Thread-125" java.net.SocketException: Permission denied 2016-09-05 10:36:13,194 ERROR [stderr] (Thread-125) at sun.nio.ch.Net.bind0(Native Method) 2016-09-05 10:36:13,195 ERROR [stderr] (Thread-125) at sun.nio.ch.Net.bind(Net.java:433) 2016-09-05 10:36:13,195 ERROR [stderr] (Thread-125) at sun.nio.ch.Net.bind(Net.java:425) 2016-09-05 10:36:13,195 ERROR [stderr] (Thread-125) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 2016-09-05 10:36:13,195 ERROR [stderr] (Thread-125) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) 2016-09-05 10:36:13,196 ERROR [stderr] (Thread-125) at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) 2016-09-05 10:36:13,196 ERROR [stderr] (Thread-125) at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:476) 2016-09-05 10:36:13,196 ERROR [stderr] (Thread-125) at io.netty.channel.DefaultChannelPipeline$HeadHandler.bind(DefaultChannelPipeline.java:1000) 2016-09-05 10:36:13,196 ERROR [stderr] (Thread-125) at io.netty.channel.DefaultChannelHandlerContext.invokeBind(DefaultChannelHandlerContext.java:463) 2016-09-05 10:36:13,197 ERROR [stderr] (Thread-125) at io.netty.channel.DefaultChannelHandlerContext.bind(DefaultChannelHandlerContext.java:448) 2016-09-05 10:36:13,197 ERROR [stderr] (Thread-125) at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:842) 2016-09-05 10:36:13,197 ERROR [stderr] (Thread-125) at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:195) 2016-09-05 10:36:13,197 ERROR [stderr] (Thread-125) at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:338) 2016-09-05 10:36:13,198 ERROR [stderr] (Thread-125) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:370) 2016-09-05 10:36:13,198 ERROR [stderr] (Thread-125) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353) 2016-09-05 10:36:13,198 ERROR [stderr] (Thread-125) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) 2016-09-05 10:36:13,198 ERROR [stderr] (Thread-125) at java.lang.Thread.run(Thread.java:745) I have looked at the OpenShift web sockets guide here, which says port 8000 should be used. But I still get the error. On OpenShift I am running my chat server on a WildFly Application Server 10 cartridge. Any advice appreciated.
Here is my code: WebAppInitializer.java try { new Thread() { public void run() { com.jobs.spring.chat.Server chatServer = new com.jobs.spring.chat.Server(); chatServer.startServer(); } }.start(); } catch (Exception e) { e.printStackTrace(); } Server.java import java.net.Socket; import com.corundumstudio.socketio.AckRequest; import com.corundumstudio.socketio.Configuration; import com.corundumstudio.socketio.SocketIOClient; import com.corundumstudio.socketio.SocketIOServer; import com.corundumstudio.socketio.listener.ConnectListener; import com.corundumstudio.socketio.listener.DataListener; import com.corundumstudio.socketio.listener.DisconnectListener; import com.fasterxml.jackson.databind.ObjectMapper; /** * [URL] * @author Richard * */ public class Server { //private static final String SERVER = "localhost"; private static final String SERVER = "jbosswildfly-easyjobs.rhcloud.com"; private static final Integer PORT = 8000; public static void main(String[] args) { startServer(); } public static void startServer() { Configuration config = new Configuration(); config.setHostname(SERVER); config.setPort(PORT); final SocketIOServer server = new SocketIOServer(config); server.addConnectListener(new ConnectListener() { @Override public void onConnect(SocketIOClient client) { System.out.println("onConnected"); client.sendEvent("chat_message:message", new Message("Welcome to the chat!")); } }); server.addDisconnectListener(new DisconnectListener() { @Override public void onDisconnect(SocketIOClient client) { System.out.println("onDisconnected"); } }); server.addEventListener("chat_message:send", String.class, new DataListener<String>() { @Override public void onData(SocketIOClient client, String data, AckRequest ackSender) throws Exception { Message message = null; try { message = new ObjectMapper().readValue(data.toString(), Message.class); } catch (Exception e) { e.printStackTrace(); } message.setDate(System.currentTimeMillis()); server.getBroadcastOperations().sendEvent("chat_message:message", message); } }); System.out.println("Starting Chat server on " + SERVER + ":" + PORT+" ..."); server.start(); System.out.println("Chat server started"); System.out.println("Chat server Environment Info: " + System.getenv()); try { Socket socket = new Socket(SERVER, PORT); printSocketInformation(socket); } catch (Exception e) { e.printStackTrace(); } } /** * Prints debug output (to stdout) for the given Java Socket. */ public static void printSocketInformation(Socket socket) { try { System.out.format("Port: %s\n", socket.getPort()); System.out.format("Canonical Host Name: %s\n", socket.getInetAddress().getCanonicalHostName()); System.out.format("Host Address: %s\n\n", socket.getInetAddress().getHostAddress()); System.out.format("Local Address: %s\n", socket.getLocalAddress()); System.out.format("Local Port: %s\n", socket.getLocalPort()); System.out.format("Local Socket Address: %s\n\n", socket.getLocalSocketAddress()); System.out.format("Receive Buffer Size: %s\n", socket.getReceiveBufferSize()); System.out.format("Send Buffer Size: %s\n\n", socket.getSendBufferSize()); System.out.format("Keep-Alive: %s\n", socket.getKeepAlive()); System.out.format("SO Timeout: %s\n", socket.getSoTimeout()); } catch (Exception e) { e.printStackTrace(); } } } In this link OpenShift talk about port binding and proxies. I don't really understand all of it. It looks like I should use port 8000 (which I am), but I am not clear what hostname I should use. I am using my application url name (jbosswildfly-easyjobs.rhcloud.com). 
Is that correct? If I change the address to [URL] (i.e. prefix http://) I get the following error: java.net.SocketException: Unresolved address
Openshift java.net.SocketException: Permission denied I am using Java 8 and have a Chat Server that works perfectly on my localhost, but when I deploy it to an OpenShift server, I get the following error: java.net.SocketException: Permission denied 2016-09-05 10:36:11,300 INFO [stdout] (Thread-125) Starting Chat server on localhost:8000 ... 2016-09-05 10:36:13,194 ERROR [stderr] (Thread-125) Exception in thread "Thread-125" java.net.SocketException: Permission denied 2016-09-05 10:36:13,194 ERROR [stderr] (Thread-125) at sun.nio.ch.Net.bind0(Native Method) 2016-09-05 10:36:13,195 ERROR [stderr] (Thread-125) at sun.nio.ch.Net.bind(Net.java:433) 2016-09-05 10:36:13,195 ERROR [stderr] (Thread-125) at sun.nio.ch.Net.bind(Net.java:425) 2016-09-05 10:36:13,195 ERROR [stderr] (Thread-125) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) 2016-09-05 10:36:13,195 ERROR [stderr] (Thread-125) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) 2016-09-05 10:36:13,196 ERROR [stderr] (Thread-125) at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) 2016-09-05 10:36:13,196 ERROR [stderr] (Thread-125) at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:476) 2016-09-05 10:36:13,196 ERROR [stderr] (Thread-125) at io.netty.channel.DefaultChannelPipeline$HeadHandler.bind(DefaultChannelPipeline.java:1000) 2016-09-05 10:36:13,196 ERROR [stderr] (Thread-125) at io.netty.channel.DefaultChannelHandlerContext.invokeBind(DefaultChannelHandlerContext.java:463) 2016-09-05 10:36:13,197 ERROR [stderr] (Thread-125) at io.netty.channel.DefaultChannelHandlerContext.bind(DefaultChannelHandlerContext.java:448) 2016-09-05 10:36:13,197 ERROR [stderr] (Thread-125) at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:842) 2016-09-05 10:36:13,197 ERROR [stderr] (Thread-125) at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:195) 2016-09-05 10:36:13,197 ERROR [stderr] (Thread-125) at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:338) 2016-09-05 10:36:13,198 ERROR [stderr] (Thread-125) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:370) 2016-09-05 10:36:13,198 ERROR [stderr] (Thread-125) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353) 2016-09-05 10:36:13,198 ERROR [stderr] (Thread-125) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) 2016-09-05 10:36:13,198 ERROR [stderr] (Thread-125) at java.lang.Thread.run(Thread.java:745) I have looked at the OpenShift web sockets guide here, which says port 8000 should be used. But I still get the error. On OpenShift I am running my chat server on a WildFly Application Server 10 cartridge. Any advice appreciated.
Here is my code: WebAppInitializer.java try { new Thread() { public void run() { com.jobs.spring.chat.Server chatServer = new com.jobs.spring.chat.Server(); chatServer.startServer(); } }.start(); } catch (Exception e) { e.printStackTrace(); } Server.java import java.net.Socket; import com.corundumstudio.socketio.AckRequest; import com.corundumstudio.socketio.Configuration; import com.corundumstudio.socketio.SocketIOClient; import com.corundumstudio.socketio.SocketIOServer; import com.corundumstudio.socketio.listener.ConnectListener; import com.corundumstudio.socketio.listener.DataListener; import com.corundumstudio.socketio.listener.DisconnectListener; import com.fasterxml.jackson.databind.ObjectMapper; /** * [URL] * @author Richard * */ public class Server { //private static final String SERVER = "localhost"; private static final String SERVER = "jbosswildfly-easyjobs.rhcloud.com"; private static final Integer PORT = 8000; public static void main(String[] args) { startServer(); } public static void startServer() { Configuration config = new Configuration(); config.setHostname(SERVER); config.setPort(PORT); final SocketIOServer server = new SocketIOServer(config); server.addConnectListener(new ConnectListener() { @Override public void onConnect(SocketIOClient client) { System.out.println("onConnected"); client.sendEvent("chat_message:message", new Message("Welcome to the chat!")); } }); server.addDisconnectListener(new DisconnectListener() { @Override public void onDisconnect(SocketIOClient client) { System.out.println("onDisconnected"); } }); server.addEventListener("chat_message:send", String.class, new DataListener<String>() { @Override public void onData(SocketIOClient client, String data, AckRequest ackSender) throws Exception { Message message = null; try { message = new ObjectMapper().readValue(data.toString(), Message.class); } catch (Exception e) { e.printStackTrace(); } message.setDate(System.currentTimeMillis()); server.getBroadcastOperations().sendEvent("chat_message:message", message); } }); System.out.println("Starting Chat server on " + SERVER + ":" + PORT+" ..."); server.start(); System.out.println("Chat server started"); System.out.println("Chat server Environment Info: " + System.getenv()); try { Socket socket = new Socket(SERVER, PORT); printSocketInformation(socket); } catch (Exception e) { e.printStackTrace(); } } /** * Prints debug output (to stdout) for the given Java Socket. */ public static void printSocketInformation(Socket socket) { try { System.out.format("Port: %s\n", socket.getPort()); System.out.format("Canonical Host Name: %s\n", socket.getInetAddress().getCanonicalHostName()); System.out.format("Host Address: %s\n\n", socket.getInetAddress().getHostAddress()); System.out.format("Local Address: %s\n", socket.getLocalAddress()); System.out.format("Local Port: %s\n", socket.getLocalPort()); System.out.format("Local Socket Address: %s\n\n", socket.getLocalSocketAddress()); System.out.format("Receive Buffer Size: %s\n", socket.getReceiveBufferSize()); System.out.format("Send Buffer Size: %s\n\n", socket.getSendBufferSize()); System.out.format("Keep-Alive: %s\n", socket.getKeepAlive()); System.out.format("SO Timeout: %s\n", socket.getSoTimeout()); } catch (Exception e) { e.printStackTrace(); } } } In this link OpenShift talk about port binding and proxies. I don't really understand all of it. It looks like I should use port 8000 (which I am), but I am not clear what hostname I should use. I am using my application url name (jbosswildfly-easyjobs.rhcloud.com). 
Is that correct? If I change the address to [URL] (i.e. prefix http://) I get the following error: java.net.SocketException: Unresolved address
java, sockets, openshift, openshift-cartridge
14
14,695
2
https://stackoverflow.com/questions/39266277/openshift-java-net-socketexception-permission-denied
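On OpenShift v2 a gear may only bind sockets to its own private address, which is exposed through environment variables; the public rhcloud.com hostname resolves to the routing layer, and binding to it is refused with Permission denied. A sketch of the usual fix; the exact variable name depends on the cartridge, so OPENSHIFT_WILDFLY_IP here is an assumption for the WildFly cartridge (check env | grep OPENSHIFT_ on the gear):

// Bind to the gear's private IP from the environment, falling back to
// localhost for local development. OPENSHIFT_WILDFLY_IP is assumed here;
// other cartridges expose a differently named *_IP variable.
String host = System.getenv("OPENSHIFT_WILDFLY_IP");
if (host == null) {
    host = "localhost";
}
Configuration config = new Configuration();
config.setHostname(host);
config.setPort(8000);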
13,581,762
How can I add private information to OpenShift environment variables?
Authentication information such as database connection strings or passwords should almost never be stored in version control systems. It looks like the only method of specifying environment variables for an app hosted on OpenShift is to commit them to the Git repository . There is a discussion about this on the OpenShift forums , but no useful suggested workarounds for the problem. Is there another approach I can use to add authentication information to my app without having to commit it to the repository?
How can I add private information to OpenShift environment variables? Authentication information such as database connection strings or passwords should almost never be stored in version control systems. It looks like the only method of specifying environment variables for an app hosted on OpenShift is to commit them to the Git repository . There is a discussion about this on the OpenShift forums , but no useful suggested workarounds for the problem. Is there another approach I can use to add authentication information to my app without having to commit it to the repository?
environment-variables, openshift
13
5,010
4
https://stackoverflow.com/questions/13581762/how-can-i-add-private-information-to-openshift-environment-variables
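Later versions of the rhc client added exactly this: per-application environment variables that live on the gear, outside the repository. A sketch, assuming a reasonably current rhc:

# Set a secret on the gear without committing it anywhere
rhc env set DB_PASSWORD=s3cret --app myapp

# Inspect and remove
rhc env list --app myapp
rhc env unset DB_PASSWORD --app myapp

The application then reads it like any other environment variable (getenv, os.environ, process.env, and so on).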
27,601,576
Openshift rhc setup "The OpenShift server is not responding correctly."
I would like to set up access to my OpenShift application. I have the rhc client installed and wanted to run rhc setup I'm asked to provide the server hostname: Enter the server hostname: |openshift.redhat.com| so I enter the hostname of my application: [URL] Unfortunately, I'm getting the following error message: The OpenShift server is not responding correctly. Check that ' [URL] ' is the correct URL for your server. The server may be offline or misconfigured Any ideas how to deal with this?
Openshift rhc setup "The OpenShift server is not responding correctly." I would like to set up access to my OpenShift application. I have the rhc client installed and wanted to run rhc setup I'm asked to provide the server hostname: Enter the server hostname: |openshift.redhat.com| so I enter the hostname of my application: [URL] Unfortunately, I'm getting the following error message: The OpenShift server is not responding correctly. Check that ' [URL] ' is the correct URL for your server. The server may be offline or misconfigured Any ideas how to deal with this?
openshift, ghost-blog, openshift-client-tools
13
7,128
2
https://stackoverflow.com/questions/27601576/openshift-rhc-setup-the-openshift-server-is-not-responding-correctly
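The prompt is asking for the broker (the OpenShift API server), not the application's own hostname; for OpenShift Online the suggested default is the right answer. Roughly:

# Accept the default broker for OpenShift Online: just press Enter at
#   Enter the server hostname: |openshift.redhat.com|
rhc setup

# Only point at a different broker when running your own OpenShift
# Origin/Enterprise installation
rhc setup --server broker.example.com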
15,921,169
how to connect to the database in openshift application
I did the following: MySQL 5.1 database added. Please make note of these credentials: Root User: xxxxxxx Root Password: xxxxxxx Database Name: php Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/ You can manage your new MySQL database by also embedding phpmyadmin-3.4. The phpmyadmin username and password will be the same as the MySQL credentials above. phpMyAdmin 3.4 added. Please make note of these MySQL credentials again: Root User: xxxxxxx Root Password: xxxxxxx URL: [URL] and I try to connect to the db using the PDO code below, but it does not work $dbh = new PDO('mysql:host=mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/;dbname=php', "xxxxxx, "xxxxxx"); I don't know what the connection URL means.
how to connect to the database in openshift application I did the following: MySQL 5.1 database added. Please make note of these credentials: Root User: xxxxxxx Root Password: xxxxxxx Database Name: php Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/ You can manage your new MySQL database by also embedding phpmyadmin-3.4. The phpmyadmin username and password will be the same as the MySQL credentials above. phpMyAdmin 3.4 added. Please make note of these MySQL credentials again: Root User: xxxxxxx Root Password: xxxxxxx URL: [URL] and I try to connect to the db using the PDO code below, but it does not work $dbh = new PDO('mysql:host=mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/;dbname=php', "xxxxxx, "xxxxxx"); I don't know what the connection URL means.
php, pdo, cloud, openshift
13
20,610
4
https://stackoverflow.com/questions/15921169/how-to-connect-to-the-database-in-openshift-application
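The "Connection URL" shown by OpenShift is a template: the $OPENSHIFT_* parts are environment variables set on the gear, not literal text (and PHP does not expand them inside single-quoted strings anyway). A sketch resolving them with getenv():

<?php
$host = getenv('OPENSHIFT_MYSQL_DB_HOST');
$port = getenv('OPENSHIFT_MYSQL_DB_PORT');

// dbname is the database name reported when the cartridge was added
$dbh = new PDO(
    "mysql:host=$host;port=$port;dbname=php",
    getenv('OPENSHIFT_MYSQL_DB_USERNAME'),
    getenv('OPENSHIFT_MYSQL_DB_PASSWORD')
);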
26,749,128
ERROR: column "id" is of type uuid but expression is of type bytea
My entity looks like @Entity public class Member { @Id private UUID id; @Column(name = "member_external_id", unique = true, nullable = false) private String memberExternalId; @Column(name = "client_id", unique = true, nullable = false) private String clientId; @Column(name = "client_secret", unique = true, nullable = false) private String clientSecret; @Column(unique = true, nullable = false) private String email; private boolean active; @Column(name = "created_at") private LocalDateTime createdAt; public Member() { // required by JPA } .... } When I deploy my application on OpenShift with PostgreSQL, I see following error in logs Caused by: org.hibernate.exception.SQLGrammarException: could not execute statement at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:123) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:112) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:190) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.jdbc.batch.internal.NonBatchingBatch.addToBatch(NonBatchingBatch.java:62) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3124) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3581) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.action.internal.EntityInsertAction.execute(EntityInsertAction.java:104) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:463) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:349) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:350) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:56) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1222) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.jpa.spi.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:1335) [hibernate-entitymanager-4.3.5.Final.jar:4.3.5.Final] ... 203 more Caused by: org.postgresql.util.PSQLException: ERROR: column "id" is of type uuid but expression is of type bytea Hint: You will need to rewrite or cast the expression. 
Position: 130 at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2062) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1795) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:479) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:367) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:321) [postgresql-9.2-1003-jdbc4.jar:] at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.executeUpdate(WrappedPreparedStatement.java:493) at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:187) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] ... 213 more OpenShift has PostgreSQL version 9.2. The dependency that I am using to connect to database looks like <dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> <version>9.2-1003-jdbc4</version> </dependency> Has anyone seen this issue? I don't know how to resolve it.
ERROR: column "id" is of type uuid but expression is of type bytea My entity looks like @Entity public class Member { @Id private UUID id; @Column(name = "member_external_id", unique = true, nullable = false) private String memberExternalId; @Column(name = "client_id", unique = true, nullable = false) private String clientId; @Column(name = "client_secret", unique = true, nullable = false) private String clientSecret; @Column(unique = true, nullable = false) private String email; private boolean active; @Column(name = "created_at") private LocalDateTime createdAt; public Member() { // required by JPA } .... } When I deploy my application on OpenShift with PostgreSQL, I see following error in logs Caused by: org.hibernate.exception.SQLGrammarException: could not execute statement at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:123) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:112) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:190) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.jdbc.batch.internal.NonBatchingBatch.addToBatch(NonBatchingBatch.java:62) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3124) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3581) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.action.internal.EntityInsertAction.execute(EntityInsertAction.java:104) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:463) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.engine.spi.ActionQueue.executeActions(ActionQueue.java:349) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.event.internal.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:350) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:56) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1222) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] at org.hibernate.jpa.spi.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:1335) [hibernate-entitymanager-4.3.5.Final.jar:4.3.5.Final] ... 203 more Caused by: org.postgresql.util.PSQLException: ERROR: column "id" is of type uuid but expression is of type bytea Hint: You will need to rewrite or cast the expression. 
Position: 130 at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2062) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1795) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:479) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:367) [postgresql-9.2-1003-jdbc4.jar:] at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:321) [postgresql-9.2-1003-jdbc4.jar:] at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.executeUpdate(WrappedPreparedStatement.java:493) at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:187) [hibernate-core-4.3.5.Final.jar:4.3.5.Final] ... 213 more OpenShift has PostgreSQL version 9.2. The dependency that I am using to connect to database looks like <dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> <version>9.2-1003-jdbc4</version> </dependency> Has anyone seen this issue? I don't know how to resolve it.
java, database, postgresql, openshift
13
28,730
3
https://stackoverflow.com/questions/26749128/error-column-id-is-of-type-uuid-but-expression-is-of-type-bytea
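By default Hibernate persists java.util.UUID in a binary form, which PostgreSQL sees as bytea, so the insert collides with the native uuid column. One fix that matches this Hibernate 4.3 setup is to pin the PostgreSQL-specific type on the field; a sketch:

import java.util.UUID;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Type;

@Entity
public class Member {

    // "pg-uuid" is Hibernate's registered name for PostgresUUIDType,
    // which maps java.util.UUID to PostgreSQL's native uuid type.
    @Id
    @Type(type = "pg-uuid")
    private UUID id;

    // ... remaining fields unchanged
}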
41,407,111
Moving a docker-compose container to openshift V3
I would like to move the Omnibus GitLab docker image to OpenShift V3, so I've got the dockerfile and docker-compose files @ [URL] . What is the best way to get a scalable OpenShift v3 pod? As the command oc import docker-compose is experimental, I am stuck and lost in the process of building a reliable solution. Thanks, Herve
Moving a docker-compose container to openshift V3 I would like to move the Omnibus GitLab docker image to OpenShift V3, so I've got the dockerfile and docker-compose files @ [URL] . What is the best way to get a scalable OpenShift v3 pod? As the command oc import docker-compose is experimental, I am stuck and lost in the process of building a reliable solution. Thanks, Herve
openshift, gitlab
13
9,927
2
https://stackoverflow.com/questions/41407111/moving-a-docker-compose-container-to-openshift-v3
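Rather than importing the compose file, the usual route is to point OpenShift at the image itself and let it generate the deployment objects; the omnibus image then needs the usual allowances for a root-running container plus persistent volumes. A sketch (exact flags vary by OpenShift 3.x release, and the GitLab volume layout is an assumption to verify against the image docs):

# Generate DeploymentConfig/Service/ImageStream straight from the image
oc new-app docker.io/gitlab/gitlab-ce

# The omnibus image expects root, so relax the SCC for the project's
# default service account (oadm instead of "oc adm" on older releases)
oc adm policy add-scc-to-user anyuid -z default

# GitLab keeps its state under /etc/gitlab, /var/opt/gitlab and
# /var/log/gitlab; mount persistent volumes there before scaling anything.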
34,025,660
openshift and let's encrypt certificates
Is there any integration for Let's Encrypt in OpenShift (or, is this planned)? Let's Encrypt is going to issue certs that expire in 90 days[1] -- and a big part of their plan is to have automated setups for the people who use their certs so that they're always updated with new certs. Given this, some integration from OpenShift would be necessary. Thanks, [1] [URL]
openshift and let's encrypt certificates Is there any integration for Let's Encrypt in OpenShift (or, is this planned)? Let's Encrypt is going to issue certs that expire in 90 days[1] -- and a big part of their plan is to have automated setups for the people who use their certs so that they're always updated with new certs. Given this, some integration from OpenShift would be necessary. Thanks, [1] [URL]
openshift
13
1,744
1
https://stackoverflow.com/questions/34025660/openshift-and-lets-encrypt-certificates
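There was no built-in integration on OpenShift v2 at the time; the workable approach was a manual loop, i.e. issue the certificate externally, upload it to the alias, and repeat before day 90. A sketch under those assumptions (command details worth double-checking; on OpenShift Online, custom certificates required a paid plan):

# Prove control of the domain and obtain the certificate with certbot
certbot certonly --manual -d www.example.com

# Attach it to the app's alias; re-run both steps at renewal time
rhc alias update-cert myapp www.example.com \
    --certificate fullchain.pem --private-key privkey.pem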
45,827,035
What is resetting the PATH variable at the last second during an OpenShift v2 push hook?
TL;DR: Working app, cloned it, clone doesn't start correctly from push hook (but works fine manually if I ssh in.) PATH has correct Node version added to it, but somewhere right in the last step, the incorrect Node version is prepended to the PATH again. Path is correct here: remote: PATH = /var/lib/openshift/.../app-root/data//node-v4.x.x-linux-x64/bin:/var/lib/openshift/.../app-root/runtime/repo/node_modules/.bin:/var/lib/openshift/...//.node_modules/.bin:/opt/rh/nodejs010/root/usr/bin:/bin:/usr/bin:/usr/sbin Then incorrect immediately after , somewhere in here: remote: Starting NodeJS cartridge remote: Tue Aug 22 2017 15:39:10 GMT-0400 (EDT): Starting application 'staging' ... So what scripts and hooks are represented in or before these last two lines? PATH isn't just adding lines to itself... I have a working OpenShift v2 app running a NodeJS version new enough to support fat arrow notation . It appears that it was set up per Custom node.js version on Openshift because the scripts from that repo (for using a marker file) are present in .openshift . I set up a second one using rhc create --from-app based on the working one, reset the repo, then re-deployed onto it. The second one worked great, except for the final step of starting node : remote: npm info ok remote: NOTE: The .openshift/action_hooks/build hook is not executable, to make it executable: remote: On Windows run: git update-index --chmod=+x .openshift/action_hooks/build remote: On Linux/OSX run: chmod +x .openshift/action_hooks/build remote: Preparing build for deployment remote: Deployment id is cedf7f51 remote: Activating deployment remote: NOTE: The .openshift/action_hooks/deploy hook is not executable, to make it executable: remote: On Windows run: git update-index --chmod=+x .openshift/action_hooks/deploy remote: On Linux/OSX run: chmod +x .openshift/action_hooks/deploy remote: remote: - pre_start_nodejs: Adding Node.js version 4.x.x binaries to path remote: - PATH set to include custom node version (4.x.x) from remote: /var/lib/openshift/.../app-root/data//node-v4.x.x-linux-x64/bin remote: PATH = /var/lib/openshift/.../app-root/data//node-v4.x.x-linux-x64/bin:/var/lib/openshift/.../app-root/runtime/repo/node_modules/.bin:/var/lib/openshift/...//.node_modules/.bin:/opt/rh/nodejs010/root/usr/bin:/bin:/usr/bin:/usr/sbin remote: Starting NodeJS cartridge remote: Tue Aug 22 2017 15:39:10 GMT-0400 (EDT): Starting application 'staging' ... remote: Waiting for application port (8080) become available ... (Everything up to this point is exactly as it is on the working app, except for names.) remote: Application 'staging' failed to start (port 8080 not available) remote: ------------------------- remote: Git Post-Receive Result: failure remote: Activation status: failure remote: Activation failed for the following gears: remote: ... (Error activating gear: CLIENT_ERROR: Failed to execute: 'control start' for /var/lib/openshift/.../nodejs remote: #<IO:0x00000001cd42d0> remote: #<IO:0x00000001cd4258> remote: ) remote: Deployment completed with status: failure remote: postreceive failed rhc env and rhc app show show the settings to be identical in all the ways that they can be. I know from other questions that the "port 8080" part above is a red herring. Plus, if I rhc ssh in and manually node www.js , it uses that port just fine and I can reach the app via browser. So I investigated using rhc tail . 
I can see it failing to start, repeatedly, due to fat arrow notation: pg.on('error', (err) => { ^ SyntaxError: Unexpected token > at Module._compile (module.js:439:25) at Object.Module._extensions..js (module.js:474:10) ... DEBUG: Program node ./www.js exited with code 8 DEBUG: Starting child process with 'node ./www.js' Yet, if I rhc ssh into this same server and run node --version , I get the newer version (the same version as the other server, which both pull it from the marker file in the .openshift directory I deployed.) I'm guessing somehow the final step in the push hook is using Node 0.10, since that's what the cartridge is named on both apps. However, if I check the path being added to PATH , a newer Node version does in fact live there. However , this is where things get interesting. The PATH reported above (where 4.x.x is prepended) is no longer the path by the time Node is launched . I changed www.js to just spit out process.env.PATH and compared the two. The latter has these two paths added to the beginning! /opt/rh/nodejs010/root/usr/bin /opt/rh/v8314/root/usr/bin What is doing that and how can I stop it? What even has a chance during these lines of output? remote: PATH = /var/lib/openshift/.../app-root/data//node-v4.x.x-linux-x64/bin:/var/lib/openshift/.../app-root/runtime/repo/node_modules/.bin:/var/lib/openshift/...//.node_modules/.bin:/opt/rh/nodejs010/root/usr/bin:/bin:/usr/bin:/usr/sbin remote: Starting NodeJS cartridge remote: Tue Aug 22 2017 15:39:10 GMT-0400 (EDT): Starting application 'staging' ... (And why just on my second app, when I used --from-app to create it and all other settings seem to be equal between the two?)
What is resetting the PATH variable at the last second during an OpenShift v2 push hook? TL;DR: Working app, cloned it, clone doesn't start correctly from push hook (but works fine manually if I ssh in.) PATH has correct Node version added to it, but somewhere right in the last step, the incorrect Node version is prepended to the PATH again. Path is correct here: remote: PATH = /var/lib/openshift/.../app-root/data//node-v4.x.x-linux-x64/bin:/var/lib/openshift/.../app-root/runtime/repo/node_modules/.bin:/var/lib/openshift/...//.node_modules/.bin:/opt/rh/nodejs010/root/usr/bin:/bin:/usr/bin:/usr/sbin Then incorrect immediately after , somewhere in here: remote: Starting NodeJS cartridge remote: Tue Aug 22 2017 15:39:10 GMT-0400 (EDT): Starting application 'staging' ... So what scripts and hooks are represented in or before these last two lines? PATH isn't just adding lines to itself... I have a working OpenShift v2 app running a NodeJS version new enough to support fat arrow notation . It appears that it was set up per Custom node.js version on Openshift because the scripts from that repo (for using a marker file) are present in .openshift . I set up a second one using rhc create --from-app based on the working one, reset the repo, then re-deployed onto it. The second one worked great, except for the final step of starting node : remote: npm info ok remote: NOTE: The .openshift/action_hooks/build hook is not executable, to make it executable: remote: On Windows run: git update-index --chmod=+x .openshift/action_hooks/build remote: On Linux/OSX run: chmod +x .openshift/action_hooks/build remote: Preparing build for deployment remote: Deployment id is cedf7f51 remote: Activating deployment remote: NOTE: The .openshift/action_hooks/deploy hook is not executable, to make it executable: remote: On Windows run: git update-index --chmod=+x .openshift/action_hooks/deploy remote: On Linux/OSX run: chmod +x .openshift/action_hooks/deploy remote: remote: - pre_start_nodejs: Adding Node.js version 4.x.x binaries to path remote: - PATH set to include custom node version (4.x.x) from remote: /var/lib/openshift/.../app-root/data//node-v4.x.x-linux-x64/bin remote: PATH = /var/lib/openshift/.../app-root/data//node-v4.x.x-linux-x64/bin:/var/lib/openshift/.../app-root/runtime/repo/node_modules/.bin:/var/lib/openshift/...//.node_modules/.bin:/opt/rh/nodejs010/root/usr/bin:/bin:/usr/bin:/usr/sbin remote: Starting NodeJS cartridge remote: Tue Aug 22 2017 15:39:10 GMT-0400 (EDT): Starting application 'staging' ... remote: Waiting for application port (8080) become available ... (Everything up to this point is exactly as it is on the working app, except for names.) remote: Application 'staging' failed to start (port 8080 not available) remote: ------------------------- remote: Git Post-Receive Result: failure remote: Activation status: failure remote: Activation failed for the following gears: remote: ... (Error activating gear: CLIENT_ERROR: Failed to execute: 'control start' for /var/lib/openshift/.../nodejs remote: #<IO:0x00000001cd42d0> remote: #<IO:0x00000001cd4258> remote: ) remote: Deployment completed with status: failure remote: postreceive failed rhc env and rhc app show show the settings to be identical in all the ways that they can be. I know from other questions that the "port 8080" part above is a red herring. Plus, if I rhc ssh in and manually node www.js , it uses that port just fine and I can reach the app via browser. So I investigated using rhc tail . 
I can see it failing to start, repeatedly, due to fat arrow notation: pg.on('error', (err) => { ^ SyntaxError: Unexpected token > at Module._compile (module.js:439:25) at Object.Module._extensions..js (module.js:474:10) ... DEBUG: Program node ./www.js exited with code 8 DEBUG: Starting child process with 'node ./www.js' Yet, if I rhc ssh into this same server and run node --version , I get the newer version (the same version as the other server, which both pull it from the marker file in the .openshift directory I deployed.) I'm guessing somehow the final step in the push hook is using Node 0.10, since that's what the cartridge is named on both apps. However, if I check the path being added to PATH , a newer Node version does in fact live there. However , this is where things get interesting. The PATH reported above (where 4.x.x is prepended) is no longer the path by the time Node is launched . I changed www.js to just spit out process.env.PATH and compared the two. The latter has these two paths added to the beginning! /opt/rh/nodejs010/root/usr/bin /opt/rh/v8314/root/usr/bin What is doing that and how can I stop it? What even has a chance during these lines of output? remote: PATH = /var/lib/openshift/.../app-root/data//node-v4.x.x-linux-x64/bin:/var/lib/openshift/.../app-root/runtime/repo/node_modules/.bin:/var/lib/openshift/...//.node_modules/.bin:/opt/rh/nodejs010/root/usr/bin:/bin:/usr/bin:/usr/sbin remote: Starting NodeJS cartridge remote: Tue Aug 22 2017 15:39:10 GMT-0400 (EDT): Starting application 'staging' ... (And why just on my second app, when I used --from-app to create it and all other settings seem to be equal between the two?)
node.js, git, ssh, openshift, openshift-cartridge
13
226
2
https://stackoverflow.com/questions/45827035/what-is-resetting-the-path-variable-at-the-last-second-during-an-openshift-v2-pu
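For what it's worth, /opt/rh/nodejs010 and /opt/rh/v8314 are the roots of the nodejs010 and V8 Software Collections, which is exactly what scl enable nodejs010 prepends, so the cartridge's start command appears to wrap node in an SCL environment after the hooks have finished. A blunt mitigation, sketched below, restores the custom bin directory for anything the app spawns; it cannot change the interpreter that is already running, so the real fix is keeping the SCL wrapper from re-prepending those paths (or having the supervisor launch the custom binary directly):

// Very first lines of www.js: put the custom Node's bin directory back
// at the front of PATH so child processes resolve the intended binaries.
// NOTE: this does not retroactively change the interpreter running this
// file. The directory name mirrors the marker-file layout from the
// question and is an assumption to adjust.
var path = require('path');
process.env.PATH = path.join(
    process.env.OPENSHIFT_DATA_DIR || '',
    'node-v4.x.x-linux-x64', 'bin'
) + ':' + process.env.PATH;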
33,023,299
Symfony2: You have requested a non-existent parameter
I have checked similar questions on SO, but they did not solve my issue. I am deploying a Symfony2 application on Openshift. It works well on my Windows 10 laptop, but I am getting the following error message on Openshift: Fatal error: Uncaught exception 'Symfony\Component\DependencyInjection\Exception\ParameterNotFoundException' with message 'You have requested a non-existent parameter "database_path". Did you mean one of these: "database_host", "database_port", "database_name", "database_user"?' in /var/lib/openshift/55eed4837628e1199f0000bb/app-root/runtime/repo/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ParameterBag/ParameterBag.php:106 Stack trace: #0 /var/lib/openshift/55eed4837628e1199f0000bb/app-root/runtime/repo/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ParameterBag/ParameterBag.php(248): Symfony\Component\DependencyInjection\ParameterBag\ParameterBag->get('database_path') #1 [internal function]: Symfony\Component\DependencyInjection\ParameterBag\ParameterBag->Symfony\Component\DependencyInjection\ParameterBag\{closure}(Array) #2 /var/lib/openshift/55eed4837628e1199f0000bb/app-root/runtime/repo/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ParameterBag/ParameterBag.php in /var/lib/openshift/55eed4837628e1199f0000bb/app-root/runtime/repo/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ParameterBag/ParameterBag.php on line 106 My config.yml is: imports: - { resource: parameters.yml } - { resource: security.yml } - { resource: services.yml } ... doctrine: dbal: driver: pdo_sqlite charset: UTF8 path: "%kernel.root_dir%/../%database_path%" ... My parameters.yml is: parameters: database_driver: pdo_sqlite database_host: localhost database_port: null database_name: demo.db database_user: root database_password: null database_path: /data/demo.db ... and my config_prod.yml is: imports: - { resource: config.yml } ... What am I doing wrong? Update I have changed my config.yml to: path: "%kernel.root_dir%/../data/demo.db" and the issue is gone, but I don't know why!
Symfony2: You have requested a non-existent parameter I have checked similar questions on SO, but they did not solve my issue. I am deploying a Symfony2 application on Openshift. It works well on my Windows 10 laptop, but I am getting the following error message on Openshift: Fatal error: Uncaught exception 'Symfony\Component\DependencyInjection\Exception\ParameterNotFoundException' with message 'You have requested a non-existent parameter "database_path". Did you mean one of these: "database_host", "database_port", "database_name", "database_user"?' in /var/lib/openshift/55eed4837628e1199f0000bb/app-root/runtime/repo/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ParameterBag/ParameterBag.php:106 Stack trace: #0 /var/lib/openshift/55eed4837628e1199f0000bb/app-root/runtime/repo/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ParameterBag/ParameterBag.php(248): Symfony\Component\DependencyInjection\ParameterBag\ParameterBag->get('database_path') #1 [internal function]: Symfony\Component\DependencyInjection\ParameterBag\ParameterBag->Symfony\Component\DependencyInjection\ParameterBag\{closure}(Array) #2 /var/lib/openshift/55eed4837628e1199f0000bb/app-root/runtime/repo/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ParameterBag/ParameterBag.php in /var/lib/openshift/55eed4837628e1199f0000bb/app-root/runtime/repo/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ParameterBag/ParameterBag.php on line 106 My config.yml is: imports: - { resource: parameters.yml } - { resource: security.yml } - { resource: services.yml } ... doctrine: dbal: driver: pdo_sqlite charset: UTF8 path: "%kernel.root_dir%/../%database_path%" ... My parameters.yml is: parameters: database_driver: pdo_sqlite database_host: localhost database_port: null database_name: demo.db database_user: root database_password: null database_path: /data/demo.db ... and my config_prod.yml is: imports: - { resource: config.yml } ... What am I doing wrong? Update I have changed my config.yml to: path: "%kernel.root_dir%/../data/demo.db" and the issue is gone, but I don't know why!
php, symfony, openshift
12
34,073
1
https://stackoverflow.com/questions/33023299/symfony2-you-have-requested-a-non-existent-parameter
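A likely explanation for the update: Symfony's standard Composer setup (the Incenteev ParameterHandler) regenerates app/config/parameters.yml on the server from parameters.yml.dist, so a key added only to the local parameters.yml never reaches the gear, and hard-coding the path merely stops config.yml from referencing the missing parameter. If that is the case here, the fix is to declare the key in the dist file as well; a sketch:

# app/config/parameters.yml.dist -- keep new keys here so the
# parameters.yml generated during composer install also contains them
parameters:
    database_driver: pdo_sqlite
    # ... existing keys ...
    database_path: /data/demo.db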
20,974,094
OpenShift node.js Error: listen EACCES
I have been using OpenShift with node.js and socket.io. My code is: server.listen(process.end.OPENSHIFT_NODEJS_PORT || 3000); My code says that it returns port 8080. However, I get this error: DEBUG: Starting child process with 'node server.is' Info: socket.io started warn:error raised: Error: listen EACCES How can I fix this? No other solution I can find works.
OpenShift node.js Error: listen EACCES I have been using OpenShift with node.js and socket.io. My code is: server.listen(process.end.OPENSHIFT_NODEJS_PORT || 3000); My code says that it returns port 8080. However, I get this error: DEBUG: Starting child process with 'node server.is' Info: socket.io started warn:error raised: Error: listen EACCES How can I fix this? No other solution I can find works.
node.js, socket.io, port, openshift
12
11,139
1
https://stackoverflow.com/questions/20974094/openshift-node-js-error-listen-eacces
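Note that process.end (as quoted) is undefined, so the expression always falls back to port 3000, an address/port combination the gear is not allowed to bind, hence EACCES. On OpenShift the server must also bind the gear's private IP, not just a port; a sketch of the standard pattern:

// process.env, not process.end -- otherwise the fallback port is used.
var port = process.env.OPENSHIFT_NODEJS_PORT || 3000;
var ip   = process.env.OPENSHIFT_NODEJS_IP   || '127.0.0.1';

server.listen(port, ip, function () {
    console.log('Listening on ' + ip + ':' + port);
});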
18,110,326
Best cartridge for deploying static html/javascript files to openshift?
I want to deploy a pure angular-js frontend application to openshift. This application contains only html/css/js files. What is the best cartridge that I can use: php5, tomcat, node.js? Thanks
Best cartridge for deploying static html/javascript files to openshift? I want to deploy a pure angular-js frontend application to openshift. This application contains only html/css/js files. What is the best cartridge that I can use: php5, tomcat, node.js? Thanks
angularjs, openshift
12
5,249
1
https://stackoverflow.com/questions/18110326/best-cartridge-for-deploying-static-html-javascript-files-to-openshift
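Any of the three will work, but a static site needs no runtime at all, so the lightest option is letting the PHP cartridge's Apache serve the files as-is. Roughly (cartridge names are OpenShift v2 vintage; rhc cartridge list shows what is current):

# Create an app on the PHP cartridge; Apache serves whatever is in the
# repository without any server-side code
rhc app create ngfrontend php-5.3

# Then commit index.html, css/ and js/ into the repo and git push.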
63,829,932
ArgoCD tracking subdirectories in a specified path
I'm using ArgoCD and I want to track files under different subdirectories. I've set the path as ./root_directory, but I would also like to track files in the subdirectories of root_directory. For instance /root_directory/dir1, /root_directory/dir2, but also /root_directory/dir1/dir1.1, etc. How can I do that? Thanks for your help
ArgoCD tracking subdirectories in a specified path I'm using ArgoCD and I want to track files under different subdirectories. I've set the path as ./root_directory, but I would also like to track files in the subdirectories of root_directory. For instance /root_directory/dir1, /root_directory/dir2, but also /root_directory/dir1/dir1.1, etc. How can I do that? Thanks for your help
kubernetes, openshift, kustomize, argocd, gitops
12
23,224
3
https://stackoverflow.com/questions/63829932/argocd-tracking-subdirectories-in-a-specified-path
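For a directory of plain YAML manifests, Argo CD's Application spec has a switch for exactly this; a sketch (repoURL and names are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/my/repo.git
    targetRevision: HEAD
    path: root_directory
    directory:
      recurse: true   # walk subdirectories instead of only the top level
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace

When the path contains a kustomization.yaml, Argo CD renders it with kustomize instead, and subdirectories are pulled in through that file's resources list rather than by recursion.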
19,489,771
What's an openshift gear? Can it be the equivalent of a web-worker?
Openshift's pricing model states that you can have 3 gears in the free tier. Other services normally explain their free tiers in terms of the number of "web workers" that you can have. What exactly is an OpenShift gear, then? I know that you can install a different programming environment in each gear, but if you install the same one (let's say: ruby) in all your 3 free-tier gears, do you have 3 web-workers running at the same time? (As in: improving scalability and redundancy; are they load-balanced?)
What's an openshift gear? Can it be the equivalent of a web-worker? Openshift's pricing model states that you can have 3 gears in the free tier. Other services normally explain their free tiers in terms of the number of "web workers" that you can have. What exactly is an OpenShift gear, then? I know that you can install a different programming environment in each gear, but if you install the same one (let's say: ruby) in all your 3 free-tier gears, do you have 3 web-workers running at the same time? (As in: improving scalability and redundancy; are they load-balanced?)
heroku, appharbor, openshift, paas
12
5,797
2
https://stackoverflow.com/questions/19489771/whats-an-openshift-gear-can-it-be-the-equivalent-of-a-web-worker
26,661,978
Unable to install using npm because permissions in openshift
I am trying to make npm work on openshift. When I try to install a package using npm install : $npm install bower npm ERR! Error: EACCES, mkdir '/var/lib/openshift/5425aaa04******0094/.npm' npm ERR! { [Error: EACCES, mkdir '/var/lib/openshift/5425aaa04******0094/.npm'] npm ERR! errno: 3, npm ERR! code: 'EACCES', npm ERR! path: '/var/lib/openshift/5425aaa04******0094/.npm' } npm ERR! npm ERR! Please try running this command again as root/Administrator. npm ERR! System Linux 2.6.32-431.29.2.el6.x86_64 npm ERR! command "node" "/usr/bin/npm" "install" "bower" npm ERR! cwd /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies npm ERR! node -v v0.6.20 npm ERR! npm -v 1.1.37 npm ERR! path /var/lib/openshift/5425aaa04******0094/.npm npm ERR! code EACCES npm ERR! message EACCES, mkdir '/var/lib/openshift/5425aaa04******0094/.npm' npm ERR! errno 3 npm ERR! 3 errno npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies/npm-debug.log npm ERR! not ok code undefined npm ERR! not ok code 3 This is because I don't have permissions to write in my home directory ( /var/lib/openshift/5425aaa04******0094/ ) This is what npm config looks like: $npm config list ; cli configs ; node install prefix = undefined ; node bin location = /usr/bin/node ; cwd = /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies ; HOME = /var/lib/openshift/5425aaa04******0094/ ; 'npm config ls -l' to show all defaults. So I tried to set the prefix setting: $npm config set prefix /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies/ npm ERR! Error: EACCES, open '/var/lib/openshift/5425aaa04******0094/.npmrc' npm ERR! { [Error: EACCES, open '/var/lib/openshift/5425aaa04******0094/.npmrc'] npm ERR! errno: 3, npm ERR! code: 'EACCES', npm ERR! path: '/var/lib/openshift/5425aaa04******0094/.npmrc' } npm ERR! npm ERR! Please try running this command again as root/Administrator. npm ERR! System Linux 2.6.32-431.29.2.el6.x86_64 npm ERR! command "node" "/usr/bin/npm" "config" "set" "prefix" "/var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies/" npm ERR! cwd /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies npm ERR! node -v v0.6.20 npm ERR! npm -v 1.1.37 npm ERR! path /var/lib/openshift/5425aaa04******0094/.npmrc npm ERR! code EACCES npm ERR! message EACCES, open '/var/lib/openshift/5425aaa04******0094/.npmrc' npm ERR! errno 3 npm ERR! 3 errno npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies/npm-debug.log npm ERR! not ok code undefined npm ERR! not ok code 3 As I don't have write permissions in my home directory and npm is trying to edit the file ~/.npmrc, I can't change the settings. Any ideas on how I can fix this? All I want to do is be able to install bower. Thanks! EDIT: I don't have sudo permissions on openshift
Unable to install using npm because permissions in openshift I am trying to make npm work on openshift. When I try to install a package using npm install : $npm install bower npm ERR! Error: EACCES, mkdir '/var/lib/openshift/5425aaa04******0094/.npm' npm ERR! { [Error: EACCES, mkdir '/var/lib/openshift/5425aaa04******0094/.npm'] npm ERR! errno: 3, npm ERR! code: 'EACCES', npm ERR! path: '/var/lib/openshift/5425aaa04******0094/.npm' } npm ERR! npm ERR! Please try running this command again as root/Administrator. npm ERR! System Linux 2.6.32-431.29.2.el6.x86_64 npm ERR! command "node" "/usr/bin/npm" "install" "bower" npm ERR! cwd /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies npm ERR! node -v v0.6.20 npm ERR! npm -v 1.1.37 npm ERR! path /var/lib/openshift/5425aaa04******0094/.npm npm ERR! code EACCES npm ERR! message EACCES, mkdir '/var/lib/openshift/5425aaa04******0094/.npm' npm ERR! errno 3 npm ERR! 3 errno npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies/npm-debug.log npm ERR! not ok code undefined npm ERR! not ok code 3 This is because I don't have permissions to write in my home directory ( /var/lib/openshift/5425aaa04******0094/ ) This is what npm config looks like: $npm config list ; cli configs ; node install prefix = undefined ; node bin location = /usr/bin/node ; cwd = /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies ; HOME = /var/lib/openshift/5425aaa04******0094/ ; 'npm config ls -l' to show all defaults. So I tried to set the prefix setting: $npm config set prefix /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies/ npm ERR! Error: EACCES, open '/var/lib/openshift/5425aaa04******0094/.npmrc' npm ERR! { [Error: EACCES, open '/var/lib/openshift/5425aaa04******0094/.npmrc'] npm ERR! errno: 3, npm ERR! code: 'EACCES', npm ERR! path: '/var/lib/openshift/5425aaa04******0094/.npmrc' } npm ERR! npm ERR! Please try running this command again as root/Administrator. npm ERR! System Linux 2.6.32-431.29.2.el6.x86_64 npm ERR! command "node" "/usr/bin/npm" "config" "set" "prefix" "/var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies/" npm ERR! cwd /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies npm ERR! node -v v0.6.20 npm ERR! npm -v 1.1.37 npm ERR! path /var/lib/openshift/5425aaa04******0094/.npmrc npm ERR! code EACCES npm ERR! message EACCES, open '/var/lib/openshift/5425aaa04******0094/.npmrc' npm ERR! errno 3 npm ERR! 3 errno npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /var/lib/openshift/5425aaa04******0094/app-root/runtime/dependencies/npm-debug.log npm ERR! not ok code undefined npm ERR! not ok code 3 As I don't have write permissions in my home directory and npm is trying to edit the file ~/.npmrc, I can't change the settings. Any ideas on how I can fix this? All I want to do is be able to install bower. Thanks! EDIT: I don't have sudo permissions on openshift
npm, openshift, bower
12
3,057
2
https://stackoverflow.com/questions/26661978/unable-to-install-using-npm-because-permissions-in-openshift
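Without root, the practical route is redirecting everything npm wants to write into the gear's writable data directory. A sketch of the usual overrides (npm reads NPM_CONFIG_* environment variables, which sidesteps the unwritable ~/.npmrc):

# Keep npm's per-user config and cache inside $OPENSHIFT_DATA_DIR
export NPM_CONFIG_USERCONFIG=$OPENSHIFT_DATA_DIR/.npmrc
export NPM_CONFIG_CACHE=$OPENSHIFT_DATA_DIR/.npm
npm install bower

# Or, for a one-off, pretend HOME is the data directory
HOME=$OPENSHIFT_DATA_DIR npm install bower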
42,678,061
Differences between OpenShift and Kubernetes
What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.
Differences between OpenShift and Kubernetes What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.
openshift, kubernetes
12
13,554
4
https://stackoverflow.com/questions/42678061/differences-between-openshift-and-kubernetes
35,479,304
Environment variable in PHP-docker container
I want to show an env var in my docker container. The PHP script looks like this: <html> <head> <title>Show Use of environment variables</title> </head> <body> <?php print "env is: ".$_ENV["USER"]."\n"; ?> </body> </html> I use OpenShift to start the container. The PHP container shows: env is: Now I change the dc config of my container: oc env dc/envar USER=Pieter deploymentconfig "envar" updated When I access the container, the env var USER is Pieter: docker exec -it 44a0f446ae36 bash bash-4.2$ echo $USER Pieter But my script keeps showing: " env is: " It does not fill in the variable.
Environment variable in PHP-docker container I want to show an env var in my docker container. The PHP script looks like this: <html> <head> <title>Show Use of environment variables</title> </head> <body> <?php print "env is: ".$_ENV["USER"]."\n"; ?> </body> </html> I use OpenShift to start the container. The PHP container shows: env is: Now I change the dc config of my container: oc env dc/envar USER=Pieter deploymentconfig "envar" updated When I access the container, the env var USER is Pieter: docker exec -it 44a0f446ae36 bash bash-4.2$ echo $USER Pieter But my script keeps showing: " env is: " It does not fill in the variable.
php, docker, environment-variables, openshift, openshift-origin
12
32,797
3
https://stackoverflow.com/questions/35479304/environment-variable-in-php-docker-container
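$_ENV is only populated when E appears in php.ini's variables_order, and many PHP images ship variables_order = "GPCS", so the variable is genuinely set (as the shell check shows) but invisible to $_ENV. getenv() reads the process environment regardless of that setting; a sketch:

<?php
// Works regardless of variables_order:
print "env is: " . getenv('USER') . "\n";

// Alternatively, set variables_order = "EGPCS" in the image's php.ini
// (or a conf.d drop-in) and $_ENV["USER"] will be populated again.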
23,807,969
Django serving media files (user uploaded files ) in openshift
I have successfully deployed my Django project in openshift. But I need to be able to serve files that are uploaded by users. I use MEDIA_ROOT and MEDIA_URL for that. I followed this tutorial here, but nothing happened. I had to change MEDIA_ROOT because the one suggested there isn't correct, I think. So my MEDIA_ROOT looks like MEDIA_ROOT = os.path.join(os.environ.get('OPENSHIFT_DATA_DIR', ''),'media') MEDIA_URL = '/media/' I added the .htaccess in the /wsgi folder as it says in the article RewriteEngine On RewriteRule ^application/media/(.+)$ /static/$1 [L] and created the build script to make a symbolic link of the media in static as the article says. #!/bin/bash if [ ! -d $OPENSHIFT_DATA_DIR/media ]; then mkdir $OPENSHIFT_DATA_DIR/media fi ln -sf $OPENSHIFT_DATA_DIR/media $OPENSHIFT_REPO_DIR/wsgi/static/media In my urls.py I have added the urlpatterns += static(settings.MEDIA_ROOT, document_root=settings.MEDIA_URL) but I still can't serve them. I also tried not to include the django static method in urls.py but got the same result. In another tutorial the .htaccess is placed inside the static folder. Am I doing something wrong?
python, django, openshift, django-media
12
3,424
3
https://stackoverflow.com/questions/23807969/django-serving-media-files-user-uploaded-files-in-openshift
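One detail worth flagging in the record above: the arguments to static() are swapped. Django's documented helper takes the URL prefix first and the filesystem path as document_root:

    # django.conf.urls.static.static(prefix, document_root=...) expects the URL
    # prefix as its first argument, not the filesystem path.
    from django.conf import settings
    from django.conf.urls.static import static

    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

Note that static() only serves files when DEBUG is enabled; in production the front-end server (here, Apache via the .htaccess rewrite) has to serve them.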
32,805,491
Can't connect through WiFi, but possible through Mobile Data
I have a PHP file named test.php stored on my OpenShift server ( [URL] ) with the below code. <?php echo "Hello"; Task I am trying to download the text from an Android application. Problem I am getting a java.net.UnknownHostException: Unable to resolve host "phpgear-shifz.rhcloud.com": No address associated with hostname while connecting through a WiFi network, but everything is fine with Mobile Data. Android Activity Code @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_test); final TextView tvTest = (TextView) findViewById(R.id.tvTest); new AsyncTask<Void, Void, String>() { @Override protected String doInBackground(Void... params) { try { final URL url = new URL("[URL] final BufferedReader br = new BufferedReader(new InputStreamReader(url.openStream())); final StringBuilder sb = new StringBuilder(); String line; while((line = br.readLine())!=null){ sb.append(line).append("\n"); } br.close(); return sb.toString(); } catch (IOException e) { e.printStackTrace(); return "Error: " + e.getMessage(); } } @Override protected void onPostExecute(String result) { tvTest.setText(result); } }.execute(); } RESPONSES on WiFi on Mobile Data Question 1) Why can't I connect through the WiFi network when Mobile Data is perfectly fine? 2) How do I solve this problem? NOTE: Sometimes it connects, sometimes it doesn't.
java, php, android, openshift, openshift-php-cartidges
12
4,457
2
https://stackoverflow.com/questions/32805491/cant-connect-through-wifi-but-possible-through-mobile-data
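The UnknownHostException points at name resolution on the WiFi network rather than at the PHP or Android code. A minimal Java sketch to isolate DNS from the rest of the request path; run it (or the equivalent) on both networks, using the hostname from the question:

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class DnsCheck {
        public static void main(String[] args) {
            try {
                // If this throws only on WiFi, that network's DNS resolver is
                // failing for the rhcloud.com name, not the app or the server.
                InetAddress addr = InetAddress.getByName("phpgear-shifz.rhcloud.com");
                System.out.println("Resolved to " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println("DNS lookup failed: " + e.getMessage());
            }
        }
    }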
22,485,166
Need to know Gear/Hour meaning in pricing section of Openshift
I am new to the OpenShift PaaS platform. I am still evaluating OpenShift and planning to upgrade to the "Bronze Plan", but first I want to understand: what does "gear/hour" mean in the pricing section? How will the billing amount be calculated?
openshift
12
2,635
1
https://stackoverflow.com/questions/22485166/need-to-know-gear-hour-meaning-in-pricinng-section-of-openshift
40,597,903
How to use image stream in deploy configuration for OpenShift
I want my deploy configuration to use an image that was the output of a build configuration. I am currently using something like this: - apiVersion: v1 kind: DeploymentConfig metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: app: myapp name: myapp spec: replicas: 1 selector: app: myapp deploymentconfig: myapp strategy: resources: {} template: metadata: annotations: openshift.io/container.myapp.image.entrypoint: '["python3"]' openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: app: myapp deploymentconfig: myapp spec: containers: - name: myapp image: 123.123.123.123/myproject/myapp-staging:latest resources: {} command: - scripts/start_server.sh ports: - containerPort: 8000 test: false triggers: [] status: {} I had to hard-code the integrated docker registry's IP address; otherwise Kubernetes/OpenShift is not able to find the image to pull down. I would like to not hard-code the integrated docker registry's IP address, and instead use something like this: - apiVersion: v1 kind: DeploymentConfig metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: app: myapp name: myapp spec: replicas: 1 selector: app: myapp deploymentconfig: myapp strategy: resources: {} template: metadata: annotations: openshift.io/container.myapp.image.entrypoint: '["python3"]' openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: app: myapp deploymentconfig: myapp spec: containers: - name: myapp from: kind: "ImageStreamTag" name: "myapp-staging:latest" resources: {} command: - scripts/start_server.sh ports: - containerPort: 8000 test: false triggers: [] status: {} But this causes Kubernetes/OpenShift to complain with: The DeploymentConfig "myapp" is invalid. spec.template.spec.containers[0].image: required value How can I specify the output of a build configuration as the image to use in a deploy configuration? Thank you for your time! Also, oddly enough, if I link the deploy configuration to the build configuration with a trigger, Kubernetes/OpenShift knows to look in the integrated docker for the image: - apiVersion: v1 kind: DeploymentConfig metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: app: myapp-staging name: myapp-staging spec: replicas: 1 selector: app: myapp-staging deploymentconfig: myapp-staging strategy: resources: {} template: metadata: annotations: openshift.io/container.myapp.image.entrypoint: '["python3"]' openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: app: myapp-staging deploymentconfig: myapp-staging spec: containers: - name: myapp-staging image: myapp-staging:latest resources: {} command: - scripts/start_server.sh ports: - containerPort: 8000 test: false triggers: - type: "ImageChange" imageChangeParams: automatic: true containerNames: - myapp-staging from: kind: ImageStreamTag name: myapp-staging:latest status: {} But I don't want the automated triggering... 
Update 1 (11/21/2016): Configuring the trigger but having the trigger disabled (hence manually triggering the deploy) still left the deployment unable to find the image: $ oc describe pod myapp-1-oodr5 Name: myapp-1-oodr5 Namespace: myproject Security Policy: restricted Node: node.url/123.123.123.123 Start Time: Mon, 21 Nov 2016 09:20:26 -1000 Labels: app=myapp deployment=myapp-1 deploymentconfig=myapp Status: Pending IP: 123.123.123.123 Controllers: ReplicationController/myapp-1 Containers: myapp: Container ID: Image: myapp-staging:latest Image ID: Port: 8000/TCP Command: scripts/start_server.sh State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Volume Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-goe98 (ro) Environment Variables: ALLOWED_HOSTS: myapp-myproject.url Conditions: Type Status Ready False Volumes: default-token-goe98: Type: Secret (a volume populated by a Secret) SecretName: default-token-goe98 QoS Tier: BestEffort Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 42s 42s 1 {scheduler } Scheduled Successfully assigned myapp-1-oodr5 to node.url 40s 40s 1 {kubelet node.url} implicitly required container POD Pulled Container image "openshift3/ose-pod:v3.1.1.7" already present on machine 40s 40s 1 {kubelet node.url} implicitly required container POD Created Created with docker id d3318e880e4a 40s 40s 1 {kubelet node.url} implicitly required container POD Started Started with docker id d3318e880e4a 40s 24s 2 {kubelet node.url} spec.containers{myapp} Pulling pulling image "myapp-staging:latest" 38s 23s 2 {kubelet node.url} spec.containers{myapp} Failed Failed to pull image "myapp-staging:latest": Error: image library/myapp-staging:latest not found 35s 15s 2 {kubelet node.url} spec.containers{myapp} Back-off Back-off pulling image "myapp-staging:latest" Update 2 (08/23/2017): In case this helps others, here's a summary of the solution. triggers: - type: "ImageChange" imageChangeParams: automatic: true # this is required to link the build and deployment containerNames: - myapp-staging from: kind: ImageStreamTag name: myapp-staging:latest With the trigger and automatic set to true , the deployment should use the build's image in the internal registry. The other comments about making the build not trigger a deploy relate to a separate requirement of wanting to manually deploy images from the internal registry. Here's more information about that portion: The build needs to trigger the deployment at least once before automatic is set to false . So for a while, I was: setting automatic to true, initiating a build and deploy, manually changing automatic to false after the deployment finishes, and manually triggering a deployment later (though I did not verify whether this deployed the older, out-of-date image or not). I was initially trying to use this manual deployment as a way for a non-developer to go into the web console and make deployments. But this requirement has since been removed, so having the build trigger deployments each time works just fine for us now. Builds can build different branches and then tag the images differently. Deployments can then just use the appropriately tagged images. Hope that helps!
openshift, kubernetes
12
9,099
1
https://stackoverflow.com/questions/40597903/how-to-use-image-stream-in-deploy-configuration-for-openshift
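For the manual-deployment requirement in Update 2, the 3.x oc client also supports rolling out by hand while keeping the ImageChange trigger (automatic: true) that resolves the internal-registry reference. A sketch using the names from the question:

    # Keep the trigger so the image reference resolves, then deploy only on demand:
    oc rollout latest dc/myapp-staging    # start a new deployment from the current config
    oc rollout status dc/myapp-staging    # follow the rollout until it completes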
30,098,652
Openshift, a web service that invokes another web service
I created a Tomcat 7 app in Openshift, and I deployed my web services there. the problem is that my web service is supposed to call another service. No results are displayed. I tested the service in localhost and it workedd fine but not in openshift!! Should I change the URL of the services to myapp-myDomain.rhcloud.com? or what's the problem? Update Does it have something to do with port forwarding, since my application trys to call another web service deployed in tomcat and anothe one external from wsdl url address (playing the role of a client web service), all deployed in Openshift? I receive the following exception which looks like some kind of Permission issue wrt Axis on Openshift . Complete StackTrace org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:568) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:460) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334) javax.servlet.http.HttpServlet.service(HttpServlet.java:727) org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) root cause org.apache.axis2.AxisFault: Permission denied org.apache.axis2.AxisFault.makeFault(AxisFault.java:430) org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:197) org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75) org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:404) org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:231) org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:443) org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:406) org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:229) org.apache.axis2.client.OperationClient.execute(OperationClient.java:165) org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) javax.servlet.http.HttpServlet.service(HttpServlet.java:727) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334) javax.servlet.http.HttpServlet.service(HttpServlet.java:727) org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) root cause java.net.BindException: Permission denied java.net.PlainSocketImpl.socketBind(Native Method) java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376) java.net.Socket.bind(Socket.java:631) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:606) org.apache.commons.httpclient.protocol.ReflectionSocketFactory.createSocket(ReflectionSocketFactory.java:139) org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:125) org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707) org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1361) org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387) 
org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171) org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397) org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:621) org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:193) org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75) org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:404) org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:231) org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:443) org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:406) org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:229) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334) javax.servlet.http.HttpServlet.service(HttpServlet.java:727) org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) A simple example with details is found here
java, openshift, apache-axis
12
844
2
https://stackoverflow.com/questions/30098652/openshift-a-web-service-that-invoke-another-web-service
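The deepest cause here (java.net.BindException: Permission denied out of socketBind) matches a known OpenShift v2 restriction: a gear's processes may only bind sockets to the gear's own assigned IP, which the cartridge exposes through environment variables, and Axis2's commons-httpclient runs into this when opening its client socket. A small diagnostic sketch to make the restriction visible; OPENSHIFT_JBOSSEWS_IP is the variable the Tomcat/EWS cartridge documents, and the rest is illustrative:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BindCheck {
        public static void main(String[] args) throws Exception {
            // Binding to the gear's assigned IP should succeed; binding to other
            // local addresses on a v2 gear fails with "Permission denied".
            String gearIp = System.getenv("OPENSHIFT_JBOSSEWS_IP");
            try (Socket s = new Socket()) {
                s.bind(new InetSocketAddress(gearIp, 0)); // 0 = any permitted ephemeral port
                System.out.println("Bound to " + s.getLocalSocketAddress());
            }
        }
    }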
42,747,507
'java.security.cert.CertificateExpiredException: NotAfter' upon connecting secure web socket
I am trying to connect to a secured web socket to consume a API. Below is the source code. Hosting environment configuration is JRE 1.7 and Tomcat 7 . import java.net.URI; import javax.websocket.ClientEndpoint; import javax.websocket.CloseReason; import javax.websocket.ContainerProvider; import javax.websocket.OnClose; import javax.websocket.OnMessage; import javax.websocket.OnOpen; import javax.websocket.Session; import javax.websocket.WebSocketContainer; import org.apache.log4j.Logger; @ClientEndpoint public final class SocketRateFeed { private static final Logger logger = Logger.getLogger(SocketRateFeed.class); private Session sessionWS; public static void startContainer() { try { URI wsURI = new URI("wss://websocket.abc.xyz/?api_key=qwerty&user_id=ASDF"); WebSocketContainer container = ContainerProvider.getWebSocketContainer(); container.connectToServer(new SocketRateFeed() , wsURI); } catch(Exception exp) { logger.error(exp.getMessage() , exp); } } ....other annotated methods } This is stack trace. 28-02-2017 11:24:37 localhost-startStop-1 ERROR SocketRateFeed.startContainer(30) : The HTTP request to initiate the WebSocket connection failed javax.websocket.DeploymentException: The HTTP request to initiate the WebSocket connection failed at org.apache.tomcat.websocket.WsWebSocketContainer.connectToServer(WsWebSocketContainer.java:325) at org.apache.tomcat.websocket.WsWebSocketContainer.connectToServer(WsWebSocketContainer.java:166) ............ Caused by: java.util.concurrent.ExecutionException: javax.net.ssl.SSLHandshakeException: General SSLEngine problem at org.apache.tomcat.websocket.AsyncChannelWrapperSecure$WrapperFuture.get(AsyncChannelWrapperSecure.java:511) at org.apache.tomcat.websocket.WsWebSocketContainer.connectToServer(WsWebSocketContainer.java:291) ... 
17 more Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1395) at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:516) at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1193) at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1165) at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469) at org.apache.tomcat.websocket.AsyncChannelWrapperSecure$WebSocketSslHandshakeThread.run(AsyncChannelWrapperSecure.java:371) Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1702) at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:281) at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:273) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1477) at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:213) at sun.security.ssl.Handshaker.processLoop(Handshaker.java:961) at sun.security.ssl.Handshaker$1.run(Handshaker.java:901) at sun.security.ssl.Handshaker$1.run(Handshaker.java:899) at java.security.AccessController.doPrivileged(Native Method) at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1333) at org.apache.tomcat.websocket.AsyncChannelWrapperSecure$WebSocketSslHandshakeThread.run(AsyncChannelWrapperSecure.java:397) Caused by: sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: timestamp check failed at sun.security.validator.PKIXValidator.doValidate(PKIXValidator.java:350) at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:260) at sun.security.validator.Validator.validate(Validator.java:260) at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:326) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:138) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1464) ... 7 more Caused by: java.security.cert.CertPathValidatorException: timestamp check failed at sun.security.provider.certpath.PKIXMasterCertPathValidator.validate(PKIXMasterCertPathValidator.java:159) at sun.security.provider.certpath.PKIXCertPathValidator.doValidate(PKIXCertPathValidator.java:353) at sun.security.provider.certpath.PKIXCertPathValidator.engineValidate(PKIXCertPathValidator.java:191) at java.security.cert.CertPathValidator.validate(CertPathValidator.java:279) at sun.security.validator.PKIXValidator.doValidate(PKIXValidator.java:345) ... 13 more Caused by: java.security.cert.CertificateExpiredException: NotAfter: Sat May 21 17:56:00 IST 2016 at sun.security.x509.CertificateValidity.valid(CertificateValidity.java:273) at sun.security.x509.X509CertImpl.checkValidity(X509CertImpl.java:576) at sun.security.provider.certpath.BasicChecker.verifyTimestamp(BasicChecker.java:184) at sun.security.provider.certpath.BasicChecker.check(BasicChecker.java:136) at sun.security.provider.certpath.PKIXMasterCertPathValidator.validate(PKIXMasterCertPathValidator.java:133) ... 17 more It would be very helpful if anyone could provide a solution/workaround or bypass trick for this issue. Thanks.
java, ssl, openshift, keystore, java-websocket
12
34,191
0
https://stackoverflow.com/questions/42747507/java-security-cert-certificateexpiredexception-notafter-upon-connecting-secur
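Since the failing check is the certificate's validity window (NotAfter in 2016), importing the expired certificate into a truststore typically will not help; the real fix is renewing the certificate on the server. If a temporary client-side bypass is unavoidable for testing, the usual (deliberately insecure) trick is a trust-everything TrustManager; a minimal sketch:

    import java.security.cert.X509Certificate;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManager;
    import javax.net.ssl.X509TrustManager;

    static SSLContext insecureSslContext() throws Exception {
        // INSECURE: accepts any certificate, including expired ones.
        // For local testing only; never ship this.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, new TrustManager[] { new X509TrustManager() {
            public void checkClientTrusted(X509Certificate[] chain, String authType) { }
            public void checkServerTrusted(X509Certificate[] chain, String authType) { }
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        } }, new java.security.SecureRandom());
        return ctx;
    }

Tomcat's WebSocket client reportedly accepts a custom SSLContext via the user property org.apache.tomcat.websocket.SSL_CONTEXT on the endpoint config; verify that against the Tomcat 7 documentation before relying on it.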
58,473,832
How do I change the permissions in openshift container platform?
I am new to Openshift. I have deployed an application in openshift. When I checked the logs, there is permission denied error for some files. Now, I want to change the permissions on the the container that is already deployed in Openshift, but I am getting, "Operation not permitted" warning. How do I fix this ? This is for linux running latest version of MongoDB. I have already tried executing RUN chmod 777 /path/to/directory in my docker file, created the image and pulled the same image in my yaml file, which I am deploying in my openshift. However, when I check my docker container, it shows that the permissions are changed for that directory, but when I deploy, I get the warning in my logs as "permission denied". FROM node:10.16.3 RUN apt update && apt install -y openjdk-8-jdk RUN useradd -ms /bin/bash admin # Set the workdir /var/www/myapp WORKDIR /var/www/myapp # Copy the package.json to workdir COPY package.json . # Run npm install - install the npm dependencies RUN npm install RUN npm install sqlite3 # Copy application source COPY . . RUN chown -R admin:admin /var/www/myapp RUN chmod 775 /var/www/myapp USER admin # Copy .env.docker to workdir/.env - use the docker env #COPY .env.docker ./.env # Expose application ports - (4300 - for API and 4301 - for front end) # EXPOSE 4300 4301 EXPOSE 52000 CMD [ "npm", "start" ] Athough, when I run my dockerifle, the permissions have changed, but when I try to deploy in my openshift, I get permission denied for some files in that directory.
linux, docker, openshift
11
31,676
4
https://stackoverflow.com/questions/58473832/how-do-i-change-the-permissions-in-openshift-container-platform
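The usual reason the Dockerfile's chmod/chown does not hold up at runtime: OpenShift runs containers under an arbitrary, non-root UID that is a member of the root group (GID 0), so ownership by a named user like admin never applies. A sketch of the commonly recommended Dockerfile adjustment, using the path from the question:

    # Give the root group the same rights as the owner, so whatever random UID
    # OpenShift assigns (always in GID 0) can read and write the app directory.
    RUN chgrp -R 0 /var/www/myapp && \
        chmod -R g=u /var/www/myapp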
37,723,401
How do you run an Openshift Docker container as something besides root?
I'm currently running Openshift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in openshift and I try to deploy it, I get the error message. I believe the problem is because I am trying to run commands inside of the container as root. (13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid My Docker file that I am deploying looks like this - FROM centos:7 MAINTAINER me<me@me> RUN yum update -y RUN yum install -y git [URL] RUN yum install -y ansible && yum clean all -y RUN git clone [URL] RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \ rm -f /lib/systemd/system/multi-user.target.wants/*;\ rm -f /etc/systemd/system/*.wants/*;\ rm -f /lib/systemd/system/local-fs.target.wants/*; \ rm -f /lib/systemd/system/sockets.target.wants/*udev*; \ rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \ rm -f /lib/systemd/system/basic.target.wants/*;\ rm -f /lib/systemd/system/anaconda.target.wants/*; COPY supervisord.conf /usr/etc/supervisord.conf RUN rm -rf supervisord.conf VOLUME [ "/sys/fs/cgroup" ] EXPOSE 80 443 #CMD ["/usr/bin/supervisord"] CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"] Ive run into a similar problem multiple times where it will say things like Permission Denied on file /supervisord.log or something similar. How can I set it up so that my container doesnt run all of the commands as root? It seems to be causing all of the problems that I am having.
docker, openshift
11
18,344
3
https://stackoverflow.com/questions/37723401/how-do-you-run-an-openshift-docker-container-as-something-besides-root
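A related sketch for the httpd case above, assuming the stock CentOS 7 httpd layout: besides dropping root, the paths httpd writes at startup (the pid file under /run/httpd, the logs) must be writable by the root group, and a non-root process cannot bind privileged port 80, so httpd has to Listen on something like 8080:

    # Make httpd's writable paths usable by any UID in the root group, then drop root.
    RUN chgrp -R 0 /run/httpd /var/log/httpd && \
        chmod -R g=u /run/httpd /var/log/httpd
    # Non-root users cannot bind port 80; configure httpd to Listen 8080 instead.
    USER 1001
    EXPOSE 8080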
32,035,472
Heroku vs OpenShift: which is the better PaaS?
I have been using Heroku since last 3-4 years and now I have clients wanted to use openshift for their ruby application deploy. I know heroku features and new to openshift. I know few features of openshift like gears, cartridges, marketplace etc. Both use aws for online service and same git deployment strategy. I wanted to know what is the advanteges of openshift over heroku to deploy ruby applications?
heroku, openshift, heroku-toolbelt, openshift-cartridge
11
8,581
1
https://stackoverflow.com/questions/32035472/heroku-vs-openshift-which-is-better-paas
34,388,165
Does Tomcat 7 support Java 8?
On the official Tomcat page it says that Tomcat 7 supports Java 8. If I download it and run it with Java 8, it works. But on OpenShift, Tomcat 7 is JBoss EWS 2.0, and this webpage says that EWS 2.0 doesn't support Java 8. If I deploy my Java 8 application to OpenShift (Tomcat 7), it isn't working. Why? I tried to install Java 8 on Tomcat 7 on OpenShift with this: [URL] But it isn't working for me. I get this error: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping': Initialization of bean failed; nested exception is org.springframework.beans.factory.CannotLoadBeanClassException: Error loading class [pl.xxx.controller.HomeController] for bean with name 'homeController' defined in file [/var/lib/openshift/xxx/app-root/runtime/dependencies/jbossews/webapps/web1/WEB-INF/classes/xxx/controller/HomeController.class]: problem with class file or dependent class; nested exception is java.lang.UnsupportedClassVersionError: xxx/controller/HomeController : Unsupported major.minor version 52.0 (unable to load class xxx.controller.HomeController) Unsupported major.minor version 52.0 says that the Java version is wrong (Java 7 instead of Java 8).
java, tomcat, jboss, openshift
11
51,091
2
https://stackoverflow.com/questions/34388165/does-tomcat-7-support-java-8
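If the gear's runtime really is limited to Java 7 (as the EWS 2.0 note suggests), one workaround sketch is to compile for Java 7 explicitly, so Maven stops emitting version-52.0 (Java 8) class files; the cost is giving up Java 8 language features:

    <!-- maven-compiler-plugin pinned to a Java 7 target -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.5.1</version>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>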
22,634,691
apache tomcat catalina as maven dependency for CORS filter
I'm using org.apache.catalina.filters.CorsFilter in my webapp. So I specify the maven dependency <dependency> <groupId>org.apache.tomcat</groupId> <artifactId>tomcat-catalina</artifactId> <version>7.0.50</version> </dependency> Now, If I say the scope is "provide" or "runtime" the server doesn't start, because of java.lang.ClassNotFoundException: org.apache.catalina.filters.CorsFilter This class is not available in the catalina jar from jbossews/lib which is 7.0.40 Is it easy to "upgrade" tomcat on openshift? or if anybody can suggest a solution, it is much appreciated. Many thanks,
apache, maven, tomcat, openshift, catalina
11
18,771
2
https://stackoverflow.com/questions/22634691/apache-tomcat-catalina-as-maven-dependency-for-cors-filter
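Two notes on the record above, as a sketch of the usual way out: container-provided jars like tomcat-catalina should stay at provided scope (bundling them into the war invites classloader conflicts), and the ClassNotFoundException under that scope is expected here because org.apache.catalina.filters.CorsFilter only shipped with Tomcat 7.0.41, one release after the 7.0.40 in jbossews/lib. Rather than upgrading the bundled Tomcat, a container-independent CORS filter can be packaged with the webapp; the coordinates and class name below are from the com.thetransactioncompany cors-filter project (check its latest version before use):

    <dependency>
      <groupId>com.thetransactioncompany</groupId>
      <artifactId>cors-filter</artifactId>
      <version>2.5</version>
    </dependency>

registered in web.xml:

    <filter>
      <filter-name>CORS</filter-name>
      <filter-class>com.thetransactioncompany.cors.CORSFilter</filter-class>
    </filter>
    <filter-mapping>
      <filter-name>CORS</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>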
50,642,453
Logs are not received in Hawkular APM from Zipkin Client
I have client application instrumented with Zipkin library with configuration in spring application.properties . camel.zipkin.host-name=hawkular-apm-server.com camel.zipkin.port=443 camel.zipkin.include-message-body-streams=true Maven dependency <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-zipkin-starter</artifactId> </dependency> The hawkular apm server console is reachable from the local machine. However, when the rest api exposed in the client application is invoked, the zipkin trace is logged but they are not collected at the hawkular apm server. 04:31:55.632 [http-nio-0.0.0.0-8080-exec-1] INFO o.a.c.c.s.CamelHttpTransportServlet - Initialized CamelHttpTransportServlet[name=CamelServlet, contextPath=] 04:31:55.668 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - clientRequest [service=MyCamelClient, traceId=-5541987202080201726, spanId=-5541987202080201726] 04:31:55.672 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - serverRequest [service=MyCamel, traceId=-5541987202080201726, spanId=-5541987202080201726] 04:31:55.676 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - serverResponse[service=MyCamel, traceId=-5541987202080201726, spanId=-5541987202080201726] 04:31:55.677 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - clientResponse[service=MyCamelClient, traceId=-5541987202080201726, spanId=-5541987202080201726] 04:31:55.758 [pool-1-thread-1] WARN o.a.t.transport.TIOStreamTransport - Error closing output stream. java.net.SocketException: Socket closed at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:118) at java.net.SocketOutputStream.write(SocketOutputStream.java:155) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) at java.io.FilterOutputStream.close(FilterOutputStream.java:158) at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110) at org.apache.thrift.transport.TSocket.close(TSocket.java:194) at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89) at com.github.kristofa.brave.scribe.ScribeClientProvider.close(ScribeClientProvider.java:96) at com.github.kristofa.brave.scribe.ScribeClientProvider.exception(ScribeClientProvider.java:75) at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:123) at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:109) at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:95) at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:35) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 04:31:55.763 [pool-1-thread-1] WARN c.g.k.b.scribe.SpanProcessingThread - Logging spans failed. 1 spans are lost! 
org.apache.thrift.transport.TTransportException: null at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129) at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84) at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378) at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297) at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) at com.twitter.zipkin.gen.scribe$Client.recv_Log(scribe.java:74) at com.twitter.zipkin.gen.scribe$Client.Log(scribe.java:61) at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:127) at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:109) at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:95) at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:35) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) I am not sure if it's a configuration issue in the client application, as the Hawkular APM UI opens properly. As per my understanding, a Zipkin client can be integrated with Hawkular APM by simply using the Hawkular URL in place of the Zipkin server, but this does not seem to work. Any suggestions on this? Unfortunately, I could not find any examples either.
apache-camel, openshift, zipkin, hawkular, distributed-tracing
11
250
0
https://stackoverflow.com/questions/50642453/logs-are-not-received-in-hawkular-apm-from-zipkin-client
56,581,578
CORS: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request
I am trying to integrate an AngularJS app with a Spring Boot backend, and I am running into "Redirect is not allowed for a preflight request". This is deployed on OpenShift. I have enabled CORS by adding a few annotations to the controller method, which solved the earlier CORS policy issue where the incoming request did not have the "Access-Control-Allow-Origin" header. @CrossOrigin(allowedHeaders = "*", origins = "*", exposedHeaders = "Access-Control-Allow-Origin", methods = { RequestMethod.POST, RequestMethod.GET, RequestMethod.PUT, RequestMethod.DELETE, RequestMethod.HEAD, RequestMethod.OPTIONS, RequestMethod.PATCH, RequestMethod.TRACE }) @RestController public class Controller { @Autowired Service botService; @Autowired Environment env; @CrossOrigin() @RequestMapping(value = "/jwtToken", method = { RequestMethod.POST }, produces = MediaType.APPLICATION_JSON_VALUE, consumes = MediaType.APPLICATION_JSON_VALUE) @ResponseStatus(HttpStatus.OK) public ResponseEntity<UnifiedService> botConntor( @RequestBody UnifiedInput input, HttpServletRequest request) { UnifiedBPMService output = botService.processBotRequest(input, request); return new ResponseEntity<UnifiedService>(output, HttpStatus.OK); } The error which I get in the actual Angular app is: Access to XMLHttpRequest at '[URL] from origin '[URL] has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request. The OPTIONS call has returned the below response: Request URL: [URL] Request Method: OPTIONS Status Code: 302 Found Remote Address: 10.235.222.220:80 Referrer Policy: no-referrer-when-downgrade
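The 302 Found on the OPTIONS call means the preflight is being redirected before it ever reaches Spring, most likely an HTTP-to-HTTPS redirect at the OpenShift route, and browsers refuse to follow redirects on preflights. A hedged sketch of a route that stops redirecting insecure traffic (the host and service names are placeholders, not from the question):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: backend
spec:
  host: backend.example.com          # placeholder host
  to:
    kind: Service
    name: backend-service            # placeholder service
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Allow   # serve plain HTTP as-is instead of answering with a 302 to HTTPS

Alternatively, leaving the route untouched and pointing the Angular app at the https:// URL avoids the redirect entirely.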
java, spring-boot, cors, openshift, same-origin-policy
11
25,106
2
https://stackoverflow.com/questions/56581578/cors-response-to-preflight-request-doesnt-pass-access-control-check-redirect
60,934,114
Openshift vs Rancher, what are the differences?
I am totally new to these two technologies (I know Docker and Kubernetes, by the way). I haven't found much on the web about this comparison topic. I have read that OpenShift is used by more companies, but is a nightmare to install, pricier, and that data loss can occur on upgrade. But nothing else. What should be the deciding factor for which one to use for Kubernetes cluster orchestration?
kubernetes, openshift, rancher
10
11,118
1
https://stackoverflow.com/questions/60934114/openshift-vs-rancher-what-are-the-differences
31,343,987
OpenShift: "Failed to execute control start" on node application
I realize in advance this is kind of a vague question, but I'm stumped as to what else I can try here... I've been going through other SO questions and following their recommendations, but so far nothing has solved my issue. Here's the specific error I'm getting. Stopping NodeJS cartridge Fri Jul 10 2015 10:36:28 GMT-0400 (EDT): Stopping application 'appname' ... Fri Jul 10 2015 10:36:29 GMT-0400 (EDT): Stopped Node application 'appname' Starting NodeJS cartridge Fri Jul 10 2015 10:36:30 GMT-0400 (EDT): Starting application 'appname' ... Waiting for application port (8080) become available ... Application 'appname' failed to start (port 8080 not available) Failed to execute: 'control restart' for /var/lib/openshift/MYID/nodejs My package.json file is up to date with all my dependencies, has the scripts: { start: 'node server.js' } property, and yet I'm still getting this error. If I SSH in and go to my current/repo directory and run node server.js it works fine. However, I can't just use screen to run it in the background forever. I've also tried stopping and restarting, git pushing, and restarting through the browser. I'm stumped as to what else I can try to get my (very simple) node application running on OpenShift. Any suggestions are much appreciated.
linux, node.js, openshift
10
7,548
2
https://stackoverflow.com/questions/31343987/openshift-failed-to-execute-control-start-on-node-application
22,844,905
How to create a directory using Ansible
How do you create a directory www at /srv on a Debian-based system using an Ansible playbook?
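A minimal sketch with the file module, assuming the Debian machine is reachable in inventory and privilege escalation is available; the group name, owner, group, and mode below are illustrative placeholders, not requirements:

- hosts: debian_servers     # placeholder inventory group
  become: yes
  tasks:
    - name: Ensure /srv/www exists
      file:
        path: /srv/www
        state: directory
        owner: www-data
        group: www-data
        mode: "0755"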
directory, filesystems, ansible
528
714,237
26
https://stackoverflow.com/questions/22844905/how-to-create-a-directory-using-ansible
30,662,069
How can I pass a variable to an Ansible playbook on the command line?
How can one pass a variable to an Ansible playbook on the command line? The following command didn't work: $ ansible-playbook -i '10.0.0.1,' yada-yada.yml --tags 'loaddata' django_fixtures="tile_colors" Where django_fixtures is my variable.
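Extra variables are passed with --extra-vars (short form -e); a sketch reusing the command and variable from the question:

$ ansible-playbook -i '10.0.0.1,' yada-yada.yml --tags 'loaddata' --extra-vars "django_fixtures=tile_colors"

# inside the playbook the value is then available like any other variable:
- debug:
    msg: "Loading fixtures: {{ django_fixtures }}"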
variables, command-line, command-line-arguments, ansible
308
471,884
11
https://stackoverflow.com/questions/30662069/how-can-i-pass-variable-to-ansible-playbook-in-the-command-line
21,870,083
Specify sudo password for Ansible
How do I specify a sudo password for Ansible in a non-interactive way? I'm running an Ansible playbook like this: $ ansible-playbook playbook.yml -i inventory.ini \ --user=username --ask-sudo-pass But I want to run it like this: $ ansible-playbook playbook.yml -i inventory.ini \ --user=username --sudo-pass=12345 Is there a way? I want to automate my project deployment as much as possible.
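There is no --sudo-pass flag, but the password can be supplied as a variable instead (a sketch; for anything real, prefer ansible-vault or --ask-sudo-pass over a plain-text password on the command line):

$ ansible-playbook playbook.yml -i inventory.ini --user=username \
    --extra-vars "ansible_sudo_pass=12345"

# on newer releases, where sudo support is generalised to "become",
# the equivalent variable is ansible_become_password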
ansible
306
515,243
27
https://stackoverflow.com/questions/21870083/specify-sudo-password-for-ansible
18,900,236
Run command on the Ansible host
Is it possible to run commands on the Ansible controller node? My scenario is that I want to take a checkout from a git server that is hosted internally (and isn't accessible outside the company firewall). Then I want to upload the checkout (tarballed) to the production server (hosted externally). At the moment, I'm looking at running a script that does the checkout, tarballs it, and then runs the deployment script - but if I could integrate this into Ansible that would be preferable.
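A sketch of the checkout-tarball-upload flow using delegate_to: localhost, so the first two tasks run on the controller behind the firewall; the repo URL and paths are placeholders:

- hosts: production
  tasks:
    - name: Check out from the internal git server (runs on the controller)
      git:
        repo: git@git.internal.example.com:project.git   # placeholder
        dest: /tmp/project
      delegate_to: localhost

    - name: Tarball the checkout (also on the controller)
      command: tar czf /tmp/project.tar.gz -C /tmp project
      delegate_to: localhost

    - name: Upload and unpack on the production server
      unarchive:
        src: /tmp/project.tar.gz
        dest: /srv/app   # placeholder

local_action is an equivalent shorthand for delegate_to: localhost on a single task.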
deployment, ansible, localhost, host
302
299,302
8
https://stackoverflow.com/questions/18900236/run-command-on-the-ansible-host
18,195,142
Safely limiting Ansible playbooks to a single machine?
I'm using Ansible for some simple user management tasks with a small group of computers. Currently, I have my playbooks set to hosts: all and my hosts file is just a single group with all machines listed: # file: hosts [office] imac-1.local imac-2.local imac-3.local I've found myself frequently having to target a single machine. The ansible-playbook command can limit plays like this: ansible-playbook --limit imac-2.local user.yml But that seems kind of fragile, especially for a potentially destructive playbook. Leaving out the limit flag means the playbook would be run everywhere. Since these tools only get used occasionally, it seems worth taking steps to make playbook runs foolproof so we don't accidentally nuke something months from now. Is there a best practice for limiting playbook runs to a single machine? Ideally the playbooks should be harmless if some important detail was left out.
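One common safeguard (a sketch): make the target an explicit variable with a harmless default, so a forgotten flag matches nothing instead of everything:

# file: user.yml
- hosts: "{{ target | default('no_hosts_selected') }}"
  tasks:
    # ... the user management tasks as before ...

$ ansible-playbook user.yml --extra-vars "target=imac-2.local"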
ansible
282
348,448
14
https://stackoverflow.com/questions/18195142/safely-limiting-ansible-playbooks-to-a-single-machine
24,162,996
How to move/rename a file using an Ansible task on a remote system
How is it possible to move/rename a file/directory using an Ansible module on a remote system? I don't want to use the command/shell tasks and I don't want to copy the file from the local system to the remote system.
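There is no dedicated move module; one sketch that honours both constraints (no command/shell, nothing copied from the local system) pairs copy with remote_src and file (the paths are placeholders):

- name: Copy the file to its new location on the remote host
  copy:
    src: /tmp/foo.conf
    dest: /etc/foo.conf
    remote_src: yes

- name: Remove it from the old location
  file:
    path: /tmp/foo.conf
    state: absent

Note this is two steps rather than an atomic rename, which matters if something reads the path mid-run.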
file, ansible, move
277
508,443
14
https://stackoverflow.com/questions/24162996/how-to-move-rename-a-file-using-an-ansible-task-on-a-remote-system
23,945,201
How to run only one task in an Ansible playbook?
Is there a way to run only one task in an Ansible playbook? For example, in roles/hadoop_primary/tasks/hadoop_master.yml I have the "start hadoop jobtracker services" task. Can I just run that one task? hadoop_master.yml file: # Playbook for Hadoop master servers - name: Install the namenode and jobtracker packages apt: name={{item}} force=yes state=latest with_items: - hadoop-0.20-mapreduce-jobtracker - hadoop-hdfs-namenode - hadoop-doc - hue-plugins - name: start hadoop jobtracker services service: name=hadoop-0.20-mapreduce-jobtracker state=started tags: debug
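Since that task already carries tags: debug, it can be selected on its own; site.yml below is a placeholder for whichever playbook pulls in the hadoop_primary role:

$ ansible-playbook site.yml --tags debug

# a related option begins execution at a named task and continues from there:
$ ansible-playbook site.yml --start-at-task "start hadoop jobtracker services"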
ansible
262
309,084
7
https://stackoverflow.com/questions/23945201/how-to-run-only-one-task-in-ansible-playbook
32,297,456
How to ignore ansible SSH authenticity checking?
Is there a way to ignore the SSH authenticity checking made by Ansible? For example, when I've just set up a new server I have to answer yes to this question: GATHERING FACTS *************************************************************** The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established. RSA key fingerprint is xx:yy:zz:.... Are you sure you want to continue connecting (yes/no)? I know that this is generally a bad idea, but I'm incorporating this in a script that first creates a new virtual server at my cloud provider and then automatically calls my Ansible playbook to configure it. I want to avoid any human intervention in the middle of the script execution.
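Host key checking can be disabled globally or per run (a sketch; this knowingly trades away protection against man-in-the-middle attacks, as the question acknowledges):

# ansible.cfg
[defaults]
host_key_checking = False

# or, for a single invocation:
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook playbook.yml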
ssh, ansible
261
316,005
18
https://stackoverflow.com/questions/32297456/how-to-ignore-ansible-ssh-authenticity-checking
38,200,732
Ansible: How to delete files and folders inside a directory?
The below code only deletes the first file it gets inside the web dir. I want to remove all the files and folders inside the web directory and retain the web directory. How can I do that? - name: remove web dir contents file: path='/home/mydata/web/{{ item }}' state=absent with_fileglob: - /home/mydata/web/* Note: I've tried rm -rf using command and shell, but they don't work. Perhaps I am using them wrongly. Any help in the right direction will be appreciated. I am using ansible 2.1.0.0
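with_fileglob expands the glob on the control machine, not on the target, which is why the loop misses the remote files. A sketch that enumerates on the remote host first and then deletes each match (file_type: any in the find module postdates the 2.1.0.0 mentioned, so a newer Ansible is assumed):

- name: Collect everything inside the web dir
  find:
    paths: /home/mydata/web
    file_type: any
  register: web_contents

- name: Remove the contents but keep the directory itself
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ web_contents.files }}"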
ansible, delete-file, delete-directory
235
630,190
25
https://stackoverflow.com/questions/38200732/ansible-how-to-delete-files-and-folders-inside-a-directory
35,654,286
How to check if a file exists in Ansible?
I have to check whether a file exists in /etc/. If the file exists, then I have to skip the task. Here is the code I am using: - name: checking the file exists command: touch file.txt when: $(! -s /etc/file.txt)
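A sketch with the stat module: register the result, then gate the task on it (the file name follows the question):

- name: Check whether /etc/file.txt exists
  stat:
    path: /etc/file.txt
  register: etc_file

- name: Create the file only when it is missing
  command: touch /etc/file.txt
  when: not etc_file.stat.exists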
ansible
223
446,120
13
https://stackoverflow.com/questions/35654286/how-to-check-if-a-file-exists-in-ansible