A preview of Stack Overflow questions tagged `redhat`. Column types and value ranges: question_id (int64, 82.3k–79.7M); title_clean (string, 15–158 chars); body_clean (string, 62–28.5k chars); full_text (string, 95–28.5k chars; title_clean and body_clean concatenated); tags (string, 4–80 chars); score (int64, 0–1.15k); view_count (int64, 22–1.62M); answer_count (int64, 0–30); link (string, 58–125 chars).

**66,264,713 · How to install nginx without the modules in RedHat**

I install nginx using Yum following these steps: yum install epel-release yum install nginx The following is the output of nginx -V : built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) built with OpenSSL 1.1.1c FIPS 28 May 2019 (running with OpenSSL 1.1.1g FIPS 21 Apr 2020) TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-stream_ssl_preread_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-http_auth_request_module --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' How can I install nginx without the --with*

Tags: nginx, redhat, nginx-config, nginx-module | Score: 1 | Views: 2,153 | Answers: 2
https://stackoverflow.com/questions/66264713/how-to-install-nginx-without-the-modules-in-redhat

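The EPEL package is pre-built with all of those configure flags, so the usual route to a build without them is compiling from source with a bare ./configure; individual default modules can then be dropped with --without-* flags. A minimal sketch (the version number and prefix are placeholders, not from the question):

```bash
# Build a minimal nginx from source instead of the pre-configured EPEL RPM.
yum install -y gcc make pcre-devel zlib-devel   # basic build prerequisites
curl -O http://nginx.org/download/nginx-1.18.0.tar.gz
tar xzf nginx-1.18.0.tar.gz && cd nginx-1.18.0
./configure --prefix=/opt/nginx    # no --with-* flags: default modules only
make && sudo make install
/opt/nginx/sbin/nginx -V           # verify the lean configure arguments
```
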
**64,538,258 · Laravel: how to specify the php version?**

I have a Laravel (5.7.19) app running with apache2 on a RedHat 7 machine. There are php5.6, php7.1 and php7.4 installed on the machine, and somehow apache uses php5.6 as the default; the app doesn't work with php5.6. Is there a way to specify a PHP version for a Laravel app, such as setting the php version in .env or some other config file of the app? I've googled around and couldn't find an answer. Right now the only way I can make the app work is to disable php5.6, so that apache uses php7.1 as the default.

Tags: apache, redhat | Score: 1 | Views: 3,118 | Answers: 1
https://stackoverflow.com/questions/64538258/laravel-how-to-specify-the-php-version

**61,966,332 · Does the "%defattr" directive affect the "%post" section also in rpm spec file?**

I have a .spec file with code somewhat like this: %files %defattr(-,xyz, xyz) %verify(md5 size mtime mode) %attr(755, xyz, xyz) /usr/bin/app1 %verify(md5 size mtime mode) %attr(755, xyz, xyz) /usr/bin/app2 %post mkdir -p /apps/1/logs mkdir -p /apps/2/logs mkdir -p /apps/3/logs mkdir -p /apps/4/logs mkdir -p /apps/5/logs ln -sf /usr/bin/app1 /usr/bin/app3 touch /home/xyz/abc.log Will the %defattr also affect the default attributes of the files and directories created in the %post section?

Tags: package, redhat, rpm, rpmbuild, rpm-spec | Score: 1 | Views: 284 | Answers: 1
https://stackoverflow.com/questions/61966332/does-the-defattr-directive-affect-the-post-section-also-in-rpm-spec-file

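For reference: %defattr and %attr are directives of the %files manifest only; %post runs as an ordinary root shell script, so anything it creates gets whatever ownership the commands themselves produce. A hedged sketch of setting the attributes explicitly inside %post (user/group names taken from the question):

```bash
# Inside %post: %defattr does not apply here, so set ownership and mode
# explicitly on everything the scriptlet creates.
for i in 1 2 3 4 5; do
    install -d -o xyz -g xyz -m 755 "/apps/$i/logs"
done
ln -sf /usr/bin/app1 /usr/bin/app3
touch /home/xyz/abc.log
chown xyz:xyz /home/xyz/abc.log
```
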
**59,892,389 · mysql not able to establish remote connection**

I am not able to connect to the remote mysql server using MySQL Workbench or the command line directly; when trying, it resolves a different hostname. When trying directly through the command line: mysql --host=10.37.1.92 --port=3306 --user=root --password=password Warning: Using a password on the command line interface can be insecure. ERROR 1045 (28000): Access denied for user 'root'@'myaapvm.local' (using password: YES) I tried with a password and without a password, no luck. I try to connect to the 10.37.1.92 server but my mysql client tries to connect to a different server. The only way I can proceed now is to log in to the machine directly and make the change in my DB. I have disabled the firewall on my mysql DB server. Has anyone faced this issue? Please help. This server is running a MariaDB installation.

Tags: mysql, mariadb, mysql-workbench, redhat | Score: 1 | Views: 3,112 | Answers: 1
https://stackoverflow.com/questions/59892389/mysql-not-able-to-establish-remote-connection

**52,001,418 · Need equivalent command for "oc secret new-basicauth" which is deprecated in openshift**

I am following this guide ( [URL] ) to create an environment with apicast and openshift. The guide says that I need to create a secret using "oc secret new-basicauth" but I get the following message after executing the command. > oc secret new-basicauth apicast-configuration-url-secret --password=[URL] > Command "new-basicauth" is deprecated, use oc create secret I understand that the command is deprecated and I need to use "oc create secret" instead. However, I do not know how to use the new command to achieve the same result, i.e. create the basic-auth secret.

Tags: openshift, redhat | Score: 1 | Views: 1,737 | Answers: 3
https://stackoverflow.com/questions/52001418/need-equivalent-command-for-oc-secret-new-basicauth-which-is-deprecated-in-ope

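A sketch of the equivalent with the newer CLI, assuming the old command produced a kubernetes.io/basic-auth secret whose only key is password (the URL placeholder stands in for the question's [URL]):

```bash
# Rough equivalent of: oc secret new-basicauth apicast-configuration-url-secret --password=<URL>
oc create secret generic apicast-configuration-url-secret \
  --type=kubernetes.io/basic-auth \
  --from-literal=password=https://ACCESS-TOKEN@ADMIN-PORTAL-DOMAIN
oc describe secret apicast-configuration-url-secret   # confirm type and keys
```
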
**51,889,016 · Unable to deploy simple nodejs + mysql application on RedHat OpenShift - running out of quota**

I have a simple nodejs + mysql application and am trying to migrate over to a Redhat OpenShift free account that provides 2 CPU cores and 1 GB memory. I'm not able to install both, as the nodejs install takes up both CPUs and all the memory (note that I needed to select 1 GB memory for the node app, as selecting a lower value causes issues). My node app may not need 1 GB after the initial build, but I am not able to downscale to release memory - I run into issues (please see below for more details). Also, is there a way to downscale to 1 CPU for the node app to create room for the mysql app? Any help appreciated. Further details: While installing the node app I selected 1 GB memory. The app builds and deploys fine, but after deployment I see the node app end up using both CPUs. When trying to install mysql, I run into an out-of-quota issue: You are at your quota for CPU (limit) on pods. You can still create deployment config 'mysql' but no pods will be created until resources are freed When I try to downscale the node app to reduce memory manually, the build fails - it gets stuck with the following: --> Scaling up dev3-2 from 0 to 1, scaling down dev3-1 from 1 to 0 (keep 1 pods available, don't exceed 2 pods) Scaling dev3-2 up to 1 --> FailedCreate: dev3-2 Error creating: pods "dev3-2-p6mlq" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 Error creating: pods "dev3-2-rcwxc" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 Error creating: pods "dev3-2-m667b" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 Error creating: pods "dev3-2-j28gz" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 Error creating: pods "dev3-2-dwsz5" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 Error creating: pods "dev3-2-2xrvz" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 Error creating: pods "dev3-2-hwk8k" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 Error creating: pods "dev3-2-hrjk8" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 Error creating: pods "dev3-2-8lts5" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited: limits.cpu=2,limits.memory=1Gi --> FailedCreate: dev3-2 (combined from similar events): Error creating: pods "dev3-2-74xzp" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi, used: limits.cpu=2,limits.memory=1Gi, limited:
limits.cpu=2,limits.memory=1Gi error: timed out waiting for "dev3-2" to be synced

Tags: openshift, redhat | Score: 1 | Views: 134 | Answers: 1
https://stackoverflow.com/questions/51889016/unable-to-deploy-simple-nodejs-mysql-application-on-redhat-openshit-running

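One possible angle, sketched under the assumption that this is OpenShift 3.x and the deployment config is the dev3 seen in the events; exact flags may differ by oc version:

```bash
# Shrink the node app's limits so a mysql pod can fit under the
# 2-CPU / 1 GiB project quota.
oc set resources dc/dev3 --limits=cpu=500m,memory=512Mi
# A Rolling strategy briefly runs old and new pods side by side, which can
# itself exceed a tight quota; Recreate avoids the overlap.
oc patch dc/dev3 -p '{"spec":{"strategy":{"type":"Recreate"}}}'
```
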
**49,250,819 · How to use git client with github on RedHat 6.4**

Does anyone know a way to run a git client with github on RedHat 6.4? It stopped working lately with github, see: [URL] RedHat < 6.8 is not compatible with the recent changes because it does not support TLSv1.2. Error: fatal: unable to access '[URL] SSL connect error We need to test software on 6.4, as this is what is on production, but right now, due to the above issue, we cannot clone a git repo directly on 6.4, and we cannot update RedHat. Maybe we can update just some crypto libs to newer versions and/or compile a newer git client from sources? Has anyone had a similar issue and can advise?

Tags: git, github, redhat, openssh, libcrypto | Score: 1 | Views: 160 | Answers: 2
https://stackoverflow.com/questions/49250819/how-to-use-git-client-with-github-on-redhat-6-4

**49,065,733 · Redhat or CentOS7 systemd may remove user IPC resources unexpectedly**

According to the comment below, linux systemd can remove my IPC resources without my consent. I already hit this problem during an important PoC (proof of concept) test, but I am having difficulty reproducing it on my desktop. Has anybody encountered this problem before and knows an easy way to reproduce it? In my case, the system removed my semaphore; most processes stayed up, but some processes using the removed semaphores crashed. ================================================================= [URL] RemoveIPC Directive A new option called RemoveIPC was introduced in RHEL 7.2 through Systemd v219. When set to yes, this option forces a cleanup of all allocated inter-process communication (IPC) resources linked to a user leaving his last session. If a daemon is running as a user with a uid number >=1000, it may crash. This option should always be set to no by default but, due to the logic of package upgrade, it is highly advisable to set RemoveIPC=no in the /etc/systemd/logind.conf file followed by # systemctl restart systemd-logind (source).
authentication, ipc, redhat, centos7, systemd
| 1
| 3,302
| 1
|
https://stackoverflow.com/questions/49065733/redhat-or-centos7-systemd-may-remove-user-ipc-resources-unexpectedly
|
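A repro sketch based purely on the quoted description (RHEL/CentOS 7.2+, systemd >= 219); ipcmk and ipcs are from util-linux:

```bash
# As root: turn the cleanup behaviour on explicitly.
sed -i 's/^#\?RemoveIPC=.*/RemoveIPC=yes/' /etc/systemd/logind.conf
systemctl restart systemd-logind

# In a separate SSH session, as a user with uid >= 1000:
ipcmk -S 5     # create a semaphore array
ipcs -s        # note its id
exit           # closing the user's LAST session should trigger the cleanup

# Back as root: the semaphore array should now be gone.
ipcs -s
```
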
**47,015,810 · Error syncing pod, failed for registry.access.redhat.com (Kubernetes)**

kubectl create -f web.yml kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE httpd 0/1 ContainerCreating 0 1h kube-node2 [root@kube-master pods]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE httpd 0/1 ContainerCreating 0 1h kube-node2 [root@kube-master pods]# kubectl describe pods httpd Name: httpd Namespace: default Node: kube-node2/10.10.0.102 Start Time: Mon, 30 Oct 2017 17:47:38 +0600 Labels: app=webserver Status: Pending IP: Controllers: Containers: httpd: Container ID: Image: webserver Image ID: Port: 80/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Volume Mounts: Environment Variables: Conditions: Type Status Initialized True Ready False PodScheduled True No volumes. QoS Class: BestEffort Tolerations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1h 5m 16 {kubelet kube-node2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)" 1h 8s 271 {kubelet kube-node2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \ "registry.access.redhat.com/rhel7/pod-infrastructure:latest\"" The registry should go to hub.docker, but here it says: Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)" Why? Please give me a solution.

Tags: docker, kubernetes, redhat, docker-swarm | Score: 1 | Views: 2,132 | Answers: 3
https://stackoverflow.com/questions/47015810/error-syncing-pod-failed-for-registry-access-redhat-com-kubernetes

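The error names a missing /etc/docker/certs.d/.../redhat-ca.crt, which on CentOS/RHEL is normally a symlink into the RHSM CA bundle. A commonly cited fix, sketched here with the package name used on CentOS 7 (it varies by release):

```bash
# Run on the node that fails the pull (kube-node2 in the output above).
yum install -y python-rhsm-certificates    # ships /etc/rhsm/ca/redhat-uep.pem
ln -sf /etc/rhsm/ca/redhat-uep.pem \
       /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
# Delete the stuck pod so the kubelet retries the image pull.
kubectl delete pod httpd
```
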
**45,714,267 · Enable Core Dump for already running processes in Redhat**

I understand that I can enable core dumps for new processes by adding the entry below to the /etc/profile file: ulimit -c unlimited >/dev/null 2>&1 Is it possible to enable core dumps for already running processes?

Tags: linux, redhat, coredump | Score: 1 | Views: 2,382 | Answers: 1
https://stackoverflow.com/questions/45714267/enable-core-dump-for-already-running-processes-in-redhat

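Two approaches that work on a live process, sketched with a placeholder PID; prlimit needs a reasonably recent util-linux (RHEL 7), and gcore ships with gdb:

```bash
# Raise RLIMIT_CORE on an already-running process (RHEL 7+).
prlimit --pid 1234 --core=unlimited:unlimited
prlimit --pid 1234 --core          # verify the new soft/hard limits

# Or just snapshot a core file right now, without touching limits:
gcore -o /tmp/myapp-core 1234
```
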
**45,563,149 · SSHD Config checking in bash script**

I'm currently making a simple security audit script, which will print OK/Error for every mismatch in configuration. The scenario is as follows - for example, for sshd_config I've put in this if case: if [ "grep -E ^Protocol /etc/ssh/sshd_config" == "Protocol 1" ]; then echo "Protocol should be set to 2"; fi The problem is: what if there is more than one space between some variable and its value? (Protocol 2 or PermitRootLogin No, for example); /*s, /s and similar tricks didn't help. Does anybody have a sample for checking the SSHD config in a bash script? Thanks in advance!

Tags: bash, security, ssh, redhat, openssh | Score: 1 | Views: 1,228 | Answers: 3
https://stackoverflow.com/questions/45563149/sshd-config-checking-in-bash-script

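Aside from the whitespace issue, note that the quoted test compares the literal string "grep -E ..." rather than the command's output (the $(...) substitution is missing). A whitespace- and case-tolerant sketch:

```bash
#!/bin/bash
# Flag Protocol 1 regardless of spacing or case; leading spaces allowed too.
if grep -Eiq '^[[:space:]]*Protocol[[:space:]]+1' /etc/ssh/sshd_config; then
    echo "Protocol should be set to 2"
fi

# awk variant: field splitting absorbs any amount of whitespace.
awk 'tolower($1)=="protocol" && $2!="2" {print "Protocol should be set to 2"}' \
    /etc/ssh/sshd_config
```
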
**45,054,925 · Install GCC 4.8 on Redhat 7 in offline mode**

How to install GCC 4.8 on RHEL 7 in offline mode? (not registered with Redhat). I searched for a solution but couldn't find a proper one.

Tags: gcc, redhat | Score: 1 | Views: 5,059 | Answers: 2
https://stackoverflow.com/questions/45054925/install-gcc-4-8-on-redhat-7-in-offline-mode

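A common pattern, sketched assuming access to a second, internet-connected machine running the same RHEL 7 release:

```bash
# On the connected machine: download gcc plus every dependency as RPMs.
yum install -y yum-utils
yumdownloader --resolve --destdir=/tmp/gcc-rpms gcc gcc-c++
# Copy /tmp/gcc-rpms to the offline server (USB, scp over LAN, ...), then:
yum localinstall -y /tmp/gcc-rpms/*.rpm
gcc --version
```
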
**45,029,553 · Ansible package warning when using sudo**

I was following this doc to install the shiny package on RedHat 7.3. The command provided in the doc is: $ sudo su - \ -c "R -e \"install.packages('shiny', repos='[URL] In Ansible, I wrote it like this: - name: Installing Shiny Packages shell: sudo su - -c "R -e \"install.packages('shiny', repos='[URL] #when: install_R|changed I get a warning when I run my playbook: TASK [Installing Shiny Packages] *********************************************** [WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than running sudo changed: [test] Please let me know how to write this in Ansible so that I can avoid the warning.

Tags: r, amazon-web-services, shiny, ansible, redhat | Score: 1 | Views: 862 | Answers: 1
https://stackoverflow.com/questions/45029553/ansible-package-warning-when-using-sudo

**44,234,535 · Yum Install is taking forever to read default repositories in Amazon Linux**

[root@ip-172-31-27-95 rpm]# yum -d 10 install munin-node Loading "priorities" plugin Loading "update-motd" plugin Loading "upgrade-helper" plugin Config time: 0.007 Yum version: 3.4.3 rpmdb time: 0.000 Setting up Package Sacks amzn-main/latest | 2.1 kB 00:00 amzn-main/latest/group | 35 kB 00:00 amzn-main/latest/primary_db | 3.6 MB 00:00 amzn-updates/latest | 2.3 kB 00:00 amzn-updates/latest/group | 35 kB 00:00 amzn-updates/latest/updateinfo | 384 kB 00:00 amzn-updates/latest/primary_db | 167 kB 00:00 pkgsack time: 0.767 I'm trying to install munin-node into my Amazon Linux. Unfortunately every time I run yum install munin-node , it'll get stuck on reading the repositories as shown in the snippet above. I already tried the following troubleshooting: Kill yum process rpm --rebuilddb rm /var/run/yum.pid yum clean all

Tags: linux, amazon-web-services, centos, redhat, amazon-linux | Score: 1 | Views: 6,871 | Answers: 3
https://stackoverflow.com/questions/44234535/yum-install-is-taking-forever-to-read-default-repositories-in-amazon-linux

**42,012,441 · JBOSS BPM Suite Installer Error: No 'jar' binary detected in PATH. Please make sure this binary exists before continuing the installation**

I am trying to install JBOSS BPM Suite but get this error. I have looked around the web for a solution but did not find anything that really explains how to resolve this. I have JBOSS EAP 7 installed but have no idea how to set the 'jar' binary in PATH.

Tags: java, jboss, drools, redhat, jbpm | Score: 1 | Views: 1,840 | Answers: 3
https://stackoverflow.com/questions/42012441/jboss-bpm-suite-installer-error-no-jar-binary-detected-in-path-please-make-s

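The jar tool ships with a JDK, not a JRE, so the usual fix is installing an OpenJDK -devel package and making sure its bin directory is on PATH. A sketch; the paths below are typical for RHEL but worth verifying locally:

```bash
yum install -y java-1.8.0-openjdk-devel   # provides /usr/bin/jar via alternatives
# If jar still is not found, add the JDK's bin directory to PATH:
command -v jar >/dev/null || export PATH="$PATH:/usr/lib/jvm/java-1.8.0-openjdk/bin"
command -v jar    # the BPM Suite installer checks for exactly this
```
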
**41,719,289 · Install Commands in Linux (useradd,ifconfig)**

On my system there is no "useradd" command. How can I install it? I tried $ sudo yum install useradd , but it says no package useradd available and nothing to do. If commands come from packages, how do I know which command is in which package?

Tags: linux, unix, ubuntu, redhat | Score: 1 | Views: 11,937 | Answers: 2
https://stackoverflow.com/questions/41719289/install-commands-in-linux-useradd-ifconfig

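yum can answer the "which package owns this command" question directly; on RHEL/CentOS the answers are typically shadow-utils and net-tools:

```bash
yum provides '*/useradd'    # -> shadow-utils
yum provides '*/ifconfig'   # -> net-tools
yum install -y shadow-utils net-tools
```
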
**40,085,266 · Using sed to replace text with slashes in crontab files**

I'm trying to comment out an entry in the crontab (RedHat) by replacing it with the # sign. I tried a sed command like this sed -i 's|35 15 * * * /tmp/vii/test.sh >/dev/null 2>&1|#|g' /var/spool/cron/root but it doesn't work. Any ideas?

Tags: linux, bash, sed, redhat | Score: 1 | Views: 4,960 | Answers: 2
https://stackoverflow.com/questions/40085266/using-sed-to-replace-text-with-slashes-in-crontab-files

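The likely culprit is the unescaped asterisks: * is a regex metacharacter, so the crontab schedule never matches. A sketch that escapes them and uses & (the whole match) to prepend the comment sign:

```bash
# \* matches a literal asterisk; & in the replacement re-inserts the match.
sed -i 's|^35 15 \* \* \* /tmp/vii/test\.sh >/dev/null 2>&1|#&|' /var/spool/cron/root
# Editing /var/spool/cron directly bypasses cron's bookkeeping; when possible,
# prefer: crontab -e (or crontab -u root -e).
```
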
**39,487,737 · $0 gives different results on Redhat versus Ubuntu?**

I have the following script created by some self-claimed bash expert: SCRIPT_LOCATION="$(readlink -f $0)" SCRIPT_DIRECTORY="$(dirname ${SCRIPT_LOCATION})" export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util" That runs nicely on my local Ubuntu 16.04. Now I wanted to use it on our RH 7.2 servers; and there I got an error message from readlink about being called with bad parameters. Then I figured: on Ubuntu, $0 gives "bash"; whereas on RH, it gives "-bash". EDIT: the script is invoked as . ourscript.sh Questions: Any idea why that is? When I change my script to use a hardcoded readlink -f bash the whole thing works. Are there "better" ways of fixing this? Feel free to also explain what readlink -f bash is actually doing ;-)

Tags: linux, bash, ubuntu, redhat | Score: 1 | Views: 119 | Answers: 1
https://stackoverflow.com/questions/39487737/0-gives-different-results-on-redhat-versus-ubuntu

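When a script is sourced (. ourscript.sh), $0 is the invoking shell's name - "-bash" for a login shell, hence readlink's complaint about a bad option. ${BASH_SOURCE[0]} names the sourced file instead (and readlink -f bash merely resolves a file called "bash" relative to the current directory, which only works by accident). A sketch:

```bash
# Works whether the script is executed or sourced; falls back to $0.
SCRIPT_LOCATION="$(readlink -f "${BASH_SOURCE[0]:-$0}")"
SCRIPT_DIRECTORY="$(dirname "${SCRIPT_LOCATION}")"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"
```
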
**39,217,467 · HAWQ installation on Redhat**

I am installing HAWQ on RedHat servers provisioned on Amazon EC2. I already have HDP 2.3 set up on the cluster. I have cloned HAWQ from Github. First I run ./configure --prefix=/opt/hawq . In the second step, I run make . The dependencies compile correctly when I run make from the root folder of incubator-hawq . The following error occurs when make moves on to compiling from the src folder in the root directory ( incubator-hawq ): make[2]: Entering directory `/root/incubator-hawq/src/port' gcc -O3 -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv -fno-aggressive-loop-optimizations -I/usr/include/libxml2 -I../../src/port -DFRONTEND -I../../src/include -D_GNU_SOURCE -I/root/incubator-hawq/depends/libhdfs3/build/install/usr/local/hawq/include -I/root/incubator-hawq/depends/libyarn/build/install/usr/local/hawq/include -c -o copydir.o copydir.c In file included from copydir.c:25:0: ../../src/include/storage/fd.h:61:23: fatal error: hdfs/hdfs.h: No such file or directory #include "hdfs/hdfs.h" ^ compilation terminated. make[2]: *** [copydir.o] Error 1 make[2]: Leaving directory `/root/incubator-hawq/src/port' make[1]: *** [all] Error 2 make[1]: Leaving directory `/root/incubator-hawq/src' make: *** [all] Error 2 I know the compiler cannot find hdfs/hdfs.h , but since the dependencies ( libhdfs3 ) compiled successfully, I don't understand why that particular file isn't found. Please help if somebody has come across the same problem, as I am pretty much stuck here.

Tags: c, hadoop, amazon-ec2, redhat, hawq | Score: 1 | Views: 319 | Answers: 2
https://stackoverflow.com/questions/39217467/hawq-installation-on-redhat

**38,399,140 · Install Gluon Scene Builder on RedHat 7**

When attempting to install Gluon Scene Builder (JavaFX Scene Builder) on RedHat Workstation 7, and CentOS 7, I get a large number of unresolved dependencies, most of which seem to be standard files that should already be installed. Any suggestions what is wrong: [ron@destiny-centos Downloads]$ sudo rpm -i scenebuilder-8.2.0-1.x86_64.rpm [sudo] password for ron: error: Failed dependencies: ld-linux.so.2 is needed by scenebuilder-8.2.0-1.x86_64 libX11.so.6 is needed by scenebuilder-8.2.0-1.x86_64 libXext.so.6 is needed by scenebuilder-8.2.0-1.x86_64 libXi.so.6 is needed by scenebuilder-8.2.0-1.x86_64 libXrender.so.1 is needed by scenebuilder-8.2.0-1.x86_64 libXtst.so.6 is needed by scenebuilder-8.2.0-1.x86_64 libasound.so.2 is needed by scenebuilder-8.2.0-1.x86_64 libc.so.6 is needed by scenebuilder-8.2.0-1.x86_64 libdl.so.2 is needed by scenebuilder-8.2.0-1.x86_64 libgcc_s.so.1 is needed by scenebuilder-8.2.0-1.x86_64 libm.so.6 is needed by scenebuilder-8.2.0-1.x86_64 libpthread.so.0 is needed by scenebuilder-8.2.0-1.x86_64 libthread_db.so.1 is needed by scenebuilder-8.2.0-1.x86_64

Tags: centos, redhat, scenebuilder | Score: 1 | Views: 1,101 | Answers: 1
https://stackoverflow.com/questions/38399140/install-gluon-scene-builder-on-redhat-7

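Every name in that list is the 32-bit spelling (ld-linux.so.2 rather than ld-linux-x86-64.so.2), i.e. the RPM declares its dependencies without the ()(64bit) markers even though the payload is x86_64. Two workarounds people use; both are sketches, so verify against your package:

```bash
# 1) The 64-bit libraries are usually already present, so skip the
#    (mis-declared) dependency check:
sudo rpm -i --nodeps scenebuilder-8.2.0-1.x86_64.rpm

# 2) Or satisfy the 32-bit names literally:
sudo yum install -y glibc.i686 libX11.i686 libXext.i686 libXi.i686 \
    libXrender.i686 libXtst.i686 alsa-lib.i686 libgcc.i686
```
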
**37,059,342 · Script not executing properly when using /bin/sh**

My script doesn't function properly when I execute it using just sh; it used to work fine until today. Even in a cronjob it used to execute without a problem. /bin/sh process_check.sh But it seems to execute fine when I run it the following way: ./process_check.sh Script (checks if a process is running, executes the process if it's not running): #/bin/sh $service=xxx if (( $(/bin/ps -ef | grep $service | wc -l) > 1 )) then true else echo "$service is not running!!!" /usr/sbin/xxx fi Also, are there any ways to make this much more efficient? I have a compiled program that I am trying to ensure is always running.

Tags: shell, redhat, ps | Score: 1 | Views: 2,179 | Answers: 2
https://stackoverflow.com/questions/37059342/script-not-executing-properly-when-using-bin-sh

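Two things stand out in the quoted script: the shebang is missing its ! (#/bin/sh is just a comment, so sh script and ./script can end up under different interpreters), and the assignment has a stray $ ($service=xxx assigns nothing). A cleaned-up sketch; the names are the question's placeholders:

```bash
#!/bin/sh
service=xxx                      # placeholder process name
# pgrep avoids the classic "ps | grep" pipeline matching itself.
if ! /usr/bin/pgrep -x "$service" >/dev/null; then
    echo "$service is not running!!!"
    /usr/sbin/xxx                # placeholder restart command
fi
```
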
**36,411,331 · Why won't upgrading from pip version 6.1.1 to 8.1.1 work?**

I'm running Red Hat Enterprise Linux (on AWS). Whenever I use pip, it warns me that my pip is out of date and that I need to upgrade it by doing pip install --upgrade pip . But when I do that it seemingly has no effect and simply tells me the same thing. It's circular! How can I fix this? See below $ pip install --upgrade pip You are using pip version 6.0.8, however version 8.1.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Requirement already up-to-date: pip in my-virtualenv/lib/python2.7/site-packages Since the aforementioned pip install --upgrade pip doesn't seem to work, I just tried uninstalling and reinstalling python-pip. When I did that it told me it would install pip 6.1.1-1.21.amzn1. How can I make it install the newer version? $ sudo yum install python-pip Loaded plugins: priorities, update-motd, upgrade-helper 5 packages excluded due to repository priority protections Resolving Dependencies --> Running transaction check ---> Package python26-pip.noarch 0:6.1.1-1.21.amzn1 will be installed --> Finished Dependency Resolution Dependencies Resolved ============================================================ Package Arch Version Repository Size ============================================================ Installing: python26-pip noarch 6.1.1-1.21.amzn1 amzn-main 1.9 M Transaction Summary ============================================================ Install 1 Package Total download size: 1.9 M Installed size: 6.4 M Is this ok [y/d/N]:

Tags: python, pip, virtualenv, redhat | Score: 1 | Views: 1,043 | Answers: 1
https://stackoverflow.com/questions/36411331/why-wont-upgrading-from-pip-version-6-1-1-to-8-1-1-work

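The version mismatch (the warning cites 6.0.8, yum cites 6.1.1) suggests the pip on PATH is not the virtualenv's copy. Running pip through the interpreter you actually care about sidesteps that; a sketch:

```bash
source my-virtualenv/bin/activate
python -m pip install --upgrade pip   # upgrades THIS interpreter's pip
python -m pip --version               # confirm which pip ran, and from where
hash -r                               # drop bash's cached path to the old pip
```
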
**34,918,709 · Offline Install of R on RedHat**

I am trying to install R on a RedHat server with no connectivity to the Internet. (sigh) Initially, I tried the R meta package from EPEL ( [URL] ). Due to dependency requirements, I downloaded the dependencies Rcore, libRmath, etc... Each time it prompts for a dependency, I download, transfer and install. This takes time and manual effort. Is there a cleaner way to do this than the manual download, transfer and install of every single dependency? Note: The server has no internet connection, so yum is not helpful.

Tags: r, redhat, epel | Score: 1 | Views: 1,801 | Answers: 3
https://stackoverflow.com/questions/34918709/offline-install-of-r-on-redhat

**34,915,514 · Mixing protocols while using Apache ActiveMQ**

I am exploring ActiveMQ for advanced messaging between heterogeneous applications based on different technologies - C, Java, Ruby and Python. While looking at the supported protocols, I struggled to understand the use case of mixing protocols while performing message exchange. I searched the ActiveMQ documentation but was unable to find any reference to this. My question is, say - a producer ( NewsPublisher ) is publishing news ( Sports, Finance, World ) to a topic ( NewsTopic ) using AMQP. After publishing, this topic stores the news under the respective queues ( Sports, Finance and World Queues ). In this situation, one client subscribed to the Sports queue is JMS based, another client subscribed to the Finance queue is Stomp based; will these clients be able to receive messages available on the queue that were published using AMQP by NewsPublisher ? I saw a somewhat related question posted earlier; however, the answers were unrelated to the original question, so I thought I'd double-check.

Tags: jms, activemq-classic, redhat, amqp | Score: 1 | Views: 937 | Answers: 1
https://stackoverflow.com/questions/34915514/mixing-protocols-while-using-apache-activemq

**33,543,235 · Create a Linux user with different profiles by command line**

I was wondering if it was possible to override the SKEL property when adding a user in Linux. Man page gives me everything to change dynamically any property (SHELL, HOME, ...) but not SKEL. What I want to do is find a way to assign one profile or another to the users I'm creating. For instance, I need to create a user with a .profile in which VAR=value1 , and another user with a .profile in which VAR=value2 . My idea was to create different SKEL, one for those who need VAR=value1 , another for those needing VAR=value2 , and simply execute adduser ... -D -magic_option /SKEL/for/value1 or adduser ... -D -magic_option /SKEL/for/value2 . But -magic_option doesn't seem to exist. Any suggestion?

Tags: linux, bash, shell, ubuntu, redhat | Score: 1 | Views: 829 | Answers: 1
https://stackoverflow.com/questions/33543235/create-a-linux-user-with-different-profiles-by-command-line

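The "magic option" does exist on RHEL's useradd: -k/--skel picks an alternate skeleton directory and takes effect together with -m. A sketch with hypothetical skeleton paths and user names:

```bash
# Prepare one skeleton per profile, each with its own .profile (VAR=value1/2).
sudo useradd -m -k /etc/skel-value1 alice
sudo useradd -m -k /etc/skel-value2 bob
# On RHEL, adduser is a symlink to useradd, so the same flags apply.
```
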
**33,263,247 · using lftp or rsync on zlinux**

quick question - can lftp or rsync be used on z/linux for file mirroring with subfolders, and if so what are the limitations? Which would you recommend? If not, what are some alternatives? I need to keep two folder-structures mirrored across sites (i.e. a two-way sync), by scripting (or configuring) something to either automatically check for changes and update, or do it regularly with a cron job. note: I wanted to tag this "zlinux" but it doesn't exist and I don't have the rep to create it

Tags: linux, redhat, rsync, mainframe, lftp | Score: 1 | Views: 3,101 | Answers: 2
https://stackoverflow.com/questions/33263247/using-lftp-or-rsync-on-zlinux

**31,103,362 · The use of need_resched flag and schedule() routine within Linux kernel [2.4]**

As per my understanding, when the kernel finds out that the currently running process should be stripped of the CPU, it enables the need_resched flag. The flag is then checked before returning to user space, and if the flag is enabled, the kernel initiates a call to schedule(). However, I have noticed that within the sys_sched_yield() routine we don't use the need_resched flag but explicitly call the schedule() routine. Why?

Tags: linux, linux-kernel, redhat, rhel | Score: 1 | Views: 4,126 | Answers: 1
https://stackoverflow.com/questions/31103362/the-use-of-need-resched-flag-and-schedule-routine-within-linux-kernel-2-4

**30,452,988 · How to do multiple searches in installed RPM packages?**

how to do multiple search in installed RPM packages? $ rpm -qa | grep 'mysql' 'jdk' 'jre' or $ rpm -qa | grep mysql && rpm -qa | grep jdk && rpm -qa | grep jre

Tags: linux, bash, package, redhat, sudo | Score: 1 | Views: 6,671 | Answers: 3
https://stackoverflow.com/questions/30452988/how-to-do-multiple-search-in-installed-rpm-packages

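Two standard one-liners; grep -E takes an alternation, and rpm -qa itself accepts glob patterns:

```bash
rpm -qa | grep -E 'mysql|jdk|jre'     # one grep with alternation, not three pipelines
rpm -qa '*mysql*' '*jdk*' '*jre*'     # or let rpm do the matching, no grep at all
```
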
**30,176,888 · How to upgrade CentOS 5.11 to 6.x**

I have tried executing yum update command but it only took me from 5.11 to 6.x. I need the server to be running CentOS 6.x for nagiosxi. [root@nagiosxi network-scripts]# uname -a Linux nagiosxi.inl.gov 2.6.18-371.8.1.el5 #1 SMP Thu Apr 24 18:23:07 EDT 2014 i686 i686 i386 GNU/Linux

Tags: centos, virtual-machine, redhat, rhel | Score: 1 | Views: 20,679 | Answers: 1
https://stackoverflow.com/questions/30176888/how-to-upgrade-centos-5-11-to-6-x

**28,057,585 · BLAS, ATLAS, LAPACK Shared library minimal example**

I installed the atlas, blas and lapack x86_64 packages via yum install atlas.x86_64 blas.x86_64 lapack.x86_64 on a Redhat 6.6 (ii) distro, which installs a shared library, but am having problems compiling and linking. For example, if I try to compile the minimal working example: program main print *, 'hello world' end program main using gfortran -L. main.f90 -llapack -lblas -o main the compiler doesn't find the libraries and I get the error message: /usr/bin/ld: cannot find -llapack collect2: ld returned 1 exit status I'm relatively new to fortran and linux so I'm probably missing something obvious. I've also lost hours compiling the libraries from source, unsuccessfully. Pointers much appreciated.
|
BLAS, ATLAS, LAPACK Shared library minimal example I installed the atlas, blas and lapack x86_64 packages via yum install atlas.x86_64 blas.x86_64 lapack.x86_64 on a Redhat 6.6 (ii) distro, which installs a shared library, but am having problems compiling and linking. For example, if I try to compile the minimal working example: program main print *, 'hello world' end program main using gfortran -L. main.f90 -llapack -lblas -o main the compiler doesn't find the libraries and I get the error message: /usr/bin/ld: cannot find -llapack collect2: ld returned 1 exit status I'm relatively new to fortran and linux so I'm probably missing something obvious. I've lost hours compiling the libraries from source unsuccessfully too. Pointers much appreciated.
|
fortran, redhat, lapack, blas, atlas
| 1
| 3,005
| 1
|
https://stackoverflow.com/questions/28057585/blas-atlas-lapack-shared-library-minimal-example
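For the question above: on RHEL the runtime blas/lapack packages ship only the versioned shared objects, while the unversioned liblapack.so that -llapack looks for comes from the -devel packages. A sketch of the likely fix:

    sudo yum install blas-devel lapack-devel
    gfortran main.f90 -llapack -lblas -o main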
|
27,942,603
|
How to get the version of the RPM from inside the %pre script?
|
How to get the version of the RPM from inside the %pre script?
|
How to get the version of the RPM from inside the %pre script? How to get the version of the RPM from inside the %pre script?
|
redhat, rpm
| 1
| 78
| 1
|
https://stackoverflow.com/questions/27942603/how-to-get-the-version-of-the-rpm-from-inside-the-pre-script
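For the question above: rpmbuild expands macros inside scriptlets at build time, so the version can be referenced directly in %pre. A minimal sketch:

    %pre
    echo "installing version %{version}-%{release}"
    # $1 additionally holds the install count (1 = fresh install, 2+ = upgrade)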
|
27,058,594
|
Monitor a Pacemaker Cluster with ocf:pacemaker:ClusterMon and/or external-agent
|
I'm trying to configure Pacemaker cluster event notifications via an external agent, to receive notifications when failover switching happens. I searched the links below [URL] [URL] but am not understanding how to actually do this. Could you please give a step-by-step explanation. Thank You, Ranjan.
|
Monitor a Pacemaker Cluster with ocf:pacemaker:ClusterMon and/or external-agent I'm trying to configure Pacemaker cluster event notifications via an external agent, to receive notifications when failover switching happens. I searched the links below [URL] [URL] but am not understanding how to actually do this. Could you please give a step-by-step explanation. Thank You, Ranjan.
|
linux, redhat
| 1
| 5,093
| 2
|
https://stackoverflow.com/questions/27058594/monitor-a-pacemaker-cluster-with-ocfpacemakerclustermon-and-or-external-agent
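A hedged sketch of the usual shape of such a setup with pcs; the notification script path is a placeholder for your own external agent, and the exact options should be verified against your Pacemaker version:

    pcs resource create ClusterMon-External ClusterMon --clone user=root update=30 \
        extra_options="-E /usr/local/bin/crm_notify.sh"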
|
26,922,870
|
Redhat Openshift - Deploying Multiple Applications Based on Single Git Repo
|
Is there any way to create two applications in OpenShift that use the same git repo (although perhaps different branches)? I am basically looking for a super simple way to create one "experimental" or "dev" application and one production one. Thanks!
|
Redhat Openshift - Deploying Multiple Applications Based on Single Git Repo Is there any way to create two applications in OpenShift that use the same git repo (although perhaps different branches)? I am basically looking for a super simple way to create one "experimental" or "dev" application and one production one. Thanks!
|
openshift, redhat
| 1
| 747
| 1
|
https://stackoverflow.com/questions/26922870/redhat-openshift-deploying-multiple-applications-based-on-single-git-repo
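One git-level sketch, assuming two separate OpenShift applications were already created (the remote URLs are hypothetical), so each app can track its own branch:

    git remote add prod ssh://<hash>@prod-myns.rhcloud.com/~/git/prod.git/
    git remote add dev  ssh://<hash>@dev-myns.rhcloud.com/~/git/dev.git/
    git push prod master        # production follows master
    git push dev  dev:master    # the dev app deploys the dev branch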
|
25,902,725
|
Can't install JSON gem on RedHat 7
|
On RedHat 7 (ec2 image provided by AWS), I'm unable to install the json gem: Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /usr/bin/ruby extconf.rb mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h I've tried all manner of packages libyaml-devel etc and nothing seems to work. I've already run through: Error while installing json gem 'mkmf.rb can't find header files for ruby'
|
Can't install JSON gem on RedHat 7 On RedHat 7 (ec2 image provided by AWS), I'm unable to install the json gem: Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /usr/bin/ruby extconf.rb mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h I've tried all manner of packages libyaml-devel etc and nothing seems to work. I've already run through: Error while installing json gem 'mkmf.rb can't find header files for ruby'
|
json, rubygems, redhat
| 1
| 1,644
| 3
|
https://stackoverflow.com/questions/25902725/cant-install-json-gem-on-redhat-7
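The mkmf.rb error usually means the Ruby development headers are missing; a sketch of the common fix:

    sudo yum install ruby-devel gcc make
    gem install json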
|
25,384,907
|
Permission denied on postgres COPY FROM but folders have read permissions
|
Running Postgres 9.2 on Red Hat Enterprise Linux Server release 6.5 (Santiago) . Communicating with the server using PGAdmin III . I'm trying to COPY FROM a CSV file at /home/foo_user/dir/bar.csv but get: could not open file "/home/foo_user/dir/bar.csv" for reading: Permission denied Running sudo setenforce 0 via SSH returns setenforce: SELinux is disabled but doesn't resolve the problem. As per this suggestion , permissions on the file are -rwxrwxrwx (overkill, I know, but just in case!). The containing folder /home/foo_user has drwxr--r-- and subfolder dir has drwxr--r-- . So it's not permissions, and it's not SELinux . What's left to try? (I'm assuming I don't have to restart the postgres service after any of these changes, but maybe that's not right?)
|
Permission denied on postgres COPY FROM but folders have read permissions Running Postgres 9.2 on Red Hat Enterprise Linux Server release 6.5 (Santiago) . Communicating with the server using PGAdmin III . I'm trying to COPY FROM a CSV file at /home/foo_user/dir/bar.csv but get: could not open file "/home/foo_user/dir/bar.csv" for reading: Permission denied Running sudo setenforce 0 via SSH returns setenforce: SELinux is disabled but doesn't resolve the problem. As per this suggestion , permissions on the file are -rwxrwxrwx (overkill, I know, but just in case!). The containing folder /home/foo_user has drwxr--r-- and subfolder dir has drwxr--r-- . So it's not permissions, and it's not SELinux . What's left to try? (I'm assuming I don't have to restart the postgres service after any of these changes, but maybe that's not right?)
|
postgresql, redhat, pgadmin
| 1
| 2,893
| 1
|
https://stackoverflow.com/questions/25384907/permission-denied-on-postgres-copy-from-but-folders-have-read-permissions
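One detail in the question's own listing is suspect: drwxr--r-- grants read but not execute (traverse) permission, and a server-side COPY FROM is read by the postgres OS user, which must be able to traverse every directory on the path. A sketch:

    chmod o+x /home/foo_user /home/foo_user/dir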
|
20,975,828
|
Bonding on RedHat 6 with LACP
|
I'm currently encountering an issue in RedHat 6.4. I have two physical NICs which I am trying to bond together using LACP. I have the corresponding configuration set up on my switch, and I have implemented the recommended configuration from the RedHat Install Guide on my NICs. However, when I start my network services, I'm seeing my LACP IP on the physical NICs as well as the bonding interface (respectively eth0, eth1 and bond0). I'm thinking I should only see my IP address on my bond0 interface? The connectivity with my network is not established. I don't know what is wrong with my configuration. Here are my ifcfg-eth0, eth1 and bond0 files (IP blanked for discretion purposes). ifcfg-eth0 : DEVICE=eth0 ONBOOT=yes MASTER=bond0 SLAVE=yes BOOTPROTO=none USERCTL=no TYPE=Ethernet NM_CONTROLLED=no ifcfg-eth1 : DEVICE=eth1 ONBOOT=yes MASTER=bond0 SLAVE=yes BOOTPROTO=none USERCTL=no TYPE=Ethernet NM_CONTROLLED=no ifcfg-bond0 : DEVICE=bond0 IPADDR=X.X.X.X NETMASK=255.255.255.0 ONBOOT=yes BOOTPROTO=none USERCTL=no NM_CONTROLLED=no BONDING_OPTS="mode=4" Thanks to anyone who can pinpoint my problem. Jeremy
|
Bonding on RedHat 6 with LACP I'm currently encountering an issue in RedHat 6.4. I have two physical NICs which I am trying to bond together using LACP. I have the corresponding configuration set up on my switch, and I have implemented the recommended configuration from the RedHat Install Guide on my NICs. However, when I start my network services, I'm seeing my LACP IP on the physical NICs as well as the bonding interface (respectively eth0, eth1 and bond0). I'm thinking I should only see my IP address on my bond0 interface? The connectivity with my network is not established. I don't know what is wrong with my configuration. Here are my ifcfg-eth0, eth1 and bond0 files (IP blanked for discretion purposes). ifcfg-eth0 : DEVICE=eth0 ONBOOT=yes MASTER=bond0 SLAVE=yes BOOTPROTO=none USERCTL=no TYPE=Ethernet NM_CONTROLLED=no ifcfg-eth1 : DEVICE=eth1 ONBOOT=yes MASTER=bond0 SLAVE=yes BOOTPROTO=none USERCTL=no TYPE=Ethernet NM_CONTROLLED=no ifcfg-bond0 : DEVICE=bond0 IPADDR=X.X.X.X NETMASK=255.255.255.0 ONBOOT=yes BOOTPROTO=none USERCTL=no NM_CONTROLLED=no BONDING_OPTS="mode=4" Thanks to anyone who can pinpoint my problem. Jeremy
|
network-programming, redhat, lacp
| 1
| 6,468
| 3
|
https://stackoverflow.com/questions/20975828/bonding-on-redhat-6-with-lacp
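A quick way to verify whether the bond actually formed and LACP negotiated with the switch, after restarting networking (interface names as in the question):

    service network restart
    cat /proc/net/bonding/bond0   # shows mode, slave state, and LACP partner info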
|
18,887,682
|
Tomcat catalina.out logrotate on redhat not working properly
|
I am trying to roll the log catalina.out on a webserver using logrotate. I have been able to roll the log and a log catalina.out-dateext is created. But I notice that the log is still being written to the rotated file catalina.out-dateext . Is there something that needs to be done on the webserver? Thank you
|
Tomcat catalina.out logrotate on redhat not working properly I am trying to roll the log catalina.out on a webserver using logrotate. I have been able to roll the log and a log catalina.out-dateext is created. But I notice that the log is still being written to the rotated file catalina.out-dateext . Is there something that needs to be done on the webserver? Thank you
|
linux, redhat, catalina, logrotate
| 1
| 7,763
| 1
|
https://stackoverflow.com/questions/18887682/tomcat-catalina-out-logrotate-on-redhat-not-working-properly
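Tomcat keeps catalina.out open, so after a plain rename it keeps writing to the old file handle, which is now the rotated file. The usual logrotate fix is copytruncate; a sketch with an assumed log path:

    /opt/tomcat/logs/catalina.out {
        daily
        rotate 7
        copytruncate
        compress
        missingok
    }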
|
15,929,978
|
RPM Build Error: File must begin with "/"
|
Here is my spec file: [URL] When I try to run rpmbuild -ba myfile.spec , I get the following errors: + /usr/lib/rpm/brp-python-bytecompile + /usr/lib/rpm/redhat/brp-java-repack-jars Processing files: PA_Connector-1.0-1.0 error: File must begin with "/": %{_initddir}/pa_connector error: File must begin with "/": attr(755,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) Checking for unpackaged file(s): /usr/lib/rpm/check-files /var/tmp/PA_Connector-1.0-1.0-root error: Installed (but unpackaged) file(s) found: /%{_initddir}/pa_connector /opt/pa_connector/config.xml /opt/pa_connector/lib/commons-logging-1.1.1.jar /opt/pa_connector/lib/log4j-1.2.17.jar /opt/pa_connector/lib/pa_connector.jar /opt/pa_connector/log4j.properties /opt/pa_connector/pa_connector.sh RPM build errors: File must begin with "/": %{_initddir}/pa_connector File must begin with "/": attr(755,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) Installed (but unpackaged) file(s) found: /%{_initddir}/pa_connector /opt/pa_connector/config.xml /opt/pa_connector/lib/commons-logging-1.1.1.jar /opt/pa_connector/lib/log4j-1.2.17.jar /opt/pa_connector/lib/pa_connector.jar /opt/pa_connector/log4j.properties /opt/pa_connector/pa_connector.sh I've read a few posts on SO about what BuildRoot should correctly be, and fixed it, but it still doesn't work.
|
RPM Build Error: File must begin with "/" Here is my spec file: [URL] When I try to run rpmbuild -ba myfile.spec , I get the following errors: + /usr/lib/rpm/brp-python-bytecompile + /usr/lib/rpm/redhat/brp-java-repack-jars Processing files: PA_Connector-1.0-1.0 error: File must begin with "/": %{_initddir}/pa_connector error: File must begin with "/": attr(755,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) error: File must begin with "/": attr(644,impulse,impulse) Checking for unpackaged file(s): /usr/lib/rpm/check-files /var/tmp/PA_Connector-1.0-1.0-root error: Installed (but unpackaged) file(s) found: /%{_initddir}/pa_connector /opt/pa_connector/config.xml /opt/pa_connector/lib/commons-logging-1.1.1.jar /opt/pa_connector/lib/log4j-1.2.17.jar /opt/pa_connector/lib/pa_connector.jar /opt/pa_connector/log4j.properties /opt/pa_connector/pa_connector.sh RPM build errors: File must begin with "/": %{_initddir}/pa_connector File must begin with "/": attr(755,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) File must begin with "/": attr(644,impulse,impulse) Installed (but unpackaged) file(s) found: /%{_initddir}/pa_connector /opt/pa_connector/config.xml /opt/pa_connector/lib/commons-logging-1.1.1.jar /opt/pa_connector/lib/log4j-1.2.17.jar /opt/pa_connector/lib/pa_connector.jar /opt/pa_connector/log4j.properties /opt/pa_connector/pa_connector.sh I've read a few posts on SO about what BuildRoot should correctly be, and fixed it, but it still doesn't work.
|
linux, centos, redhat, rpm, rpmbuild
| 1
| 17,479
| 1
|
https://stackoverflow.com/questions/15929978/rpm-build-error-file-must-begin-with
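The errors suggest the attr(...) entries in the %files section lost their leading %, and that %{_initddir} is not defined by this rpm version (older releases spell it %{_initrddir}). A hedged sketch of the intended shape:

    %files
    %defattr(644, impulse, impulse, 755)
    %attr(755, impulse, impulse) %{_initrddir}/pa_connector
    /opt/pa_connector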
|
8,087,184
|
Installing Python 3 on RHEL
|
I'm trying to install python3 on RHEL using the following steps: yum search python3 Which returned No matches found for: python3 Followed by: yum search python None of the search results contained python3. What should I try next?
|
Installing Python 3 on RHEL I'm trying to install python3 on RHEL using the following steps: yum search python3 Which returned No matches found for: python3 Followed by: yum search python None of the search results contained python3. What should I try next?
|
python, python-3.x, rhel
| 156
| 365,178
| 19
|
https://stackoverflow.com/questions/8087184/installing-python-3-on-rhel
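The base RHEL channels historically carried no python3 package, so it has to come from an add-on repo; a sketch using EPEL (the package name varies by EPEL release, e.g. python34 or python36):

    sudo yum install epel-release   # on CentOS; on RHEL, enable the EPEL repo manually
    sudo yum install python36
    python3.6 --version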
|
45,272,827
|
Docker CE on RHEL - Requires: container-selinux >= 2.9
|
I am trying to install Docker CE on RHEL using this link . This is my RHEL version: Red Hat Enterprise Linux Server release 7.3 (Maipo) When I execute this: sudo yum -y install docker-ce I am getting this error: Error: Package: docker-ce-17.06.0.ce-1.el7.centos.x86_64 (docker-ce-stable) Requires: container-selinux >= 2.9 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I tried using --skip-broken and rpm -Va --nofiles --nodigest but again getting the same error. Please let me know how to resolve this issue and install Docker CE in RHEL 7.3.
|
Docker CE on RHEL - Requires: container-selinux >= 2.9 I am trying to install Docker CE on RHEL using this link . This is my RHEL version: Red Hat Enterprise Linux Server release 7.3 (Maipo) When I execute this: sudo yum -y install docker-ce I am getting this error: Error: Package: docker-ce-17.06.0.ce-1.el7.centos.x86_64 (docker-ce-stable) Requires: container-selinux >= 2.9 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I tried using --skip-broken and rpm -Va --nofiles --nodigest but again getting the same error. Please let me know how to resolve this issue and install Docker CE in RHEL 7.3.
|
docker, unix, rhel
| 95
| 192,769
| 21
|
https://stackoverflow.com/questions/45272827/docker-ce-on-rhel-requires-container-selinux-2-9
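container-selinux lives in the RHEL 7 extras channel, so enabling that repo before installing usually resolves the dependency; a sketch:

    sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
    sudo yum install container-selinux
    sudo yum -y install docker-ce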
|
9,083,408
|
Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist
|
I am on a server that has a fresh install of RHEL 5. I was able to install Apache and PHP just fine, but I am having serious trouble with my MySQL installation. I tried the following: yum install mysql-server mysql And didn't get any errors or conflicts. Then I tried to start mysql with the following commands: chkconfig --levels 235 mysqld on service mysqld start And get Timeout error occurred trying to start MySQL Daemon. I checked my logs and see this error: [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist I'm not sure where to go from here. For reference I am using RHEL 5 and installed the latest versions of PHP 5 and Apache.
|
Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist I am on a server that has a fresh install of RHEL 5. I was able to install Apache and PHP just fine, but I am having serious trouble with my MySQL installation. I tried the following: yum install mysql-server mysql And didn't get any errors or conflicts. Then I tried to start mysql with the following commands: chkconfig --levels 235 mysqld on service mysqld start And get Timeout error occurred trying to start MySQL Daemon. I checked my logs and see this error: [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist I'm not sure where to go from here. For reference I am using RHEL 5 and installed the latest versions of PHP 5 and Apache.
|
mysql, linux, rhel, rhel5
| 86
| 304,475
| 19
|
https://stackoverflow.com/questions/9083408/fatal-error-cant-open-and-lock-privilege-tables-table-mysql-host-doesnt-ex
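That error typically means the system tables were never initialized; a sketch of the usual first-start sequence on RHEL 5:

    sudo mysql_install_db --user=mysql   # creates the mysql.* privilege tables
    sudo service mysqld start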
|
25,485,047
|
Hourly rotation of files using logrotate?
|
I tried to set up log rotation for log files located at /tmp/hr_logs/. For setting it up, I used logrotate in linux and I'm able to rotate on a daily basis using the following config in my /etc/logrotate.conf /tmp/hr_logs { daily rotate 4 With this config, the file will rotate on a daily basis and the system will keep 4 copies of the log file appended with the date [format: -YYYYMMDD] Now, I am trying to set up a different set of log files which I need to rotate on an hourly basis, and for it I have done this configuration in logrotate.conf: /tmp/last_logs { hourly rotate 4 But this is not working at all. Can anyone please guide me on this?
|
Hourly rotation of files using logrotate? I tried to set up log rotation for log files located at /tmp/hr_logs/. For setting it up, I used logrotate in linux and I'm able to rotate on a daily basis using the following config in my /etc/logrotate.conf /tmp/hr_logs { daily rotate 4 With this config, the file will rotate on a daily basis and the system will keep 4 copies of the log file appended with the date [format: -YYYYMMDD] Now, I am trying to set up a different set of log files which I need to rotate on an hourly basis, and for it I have done this configuration in logrotate.conf: /tmp/last_logs { hourly rotate 4 But this is not working at all. Can anyone please guide me on this?
|
linux, cron, rhel, logrotate
| 82
| 142,559
| 2
|
https://stackoverflow.com/questions/25485047/hourly-rotation-of-files-using-logrotate
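Two caveats for the question above: the hourly directive requires logrotate 3.8.5 or newer, and the stock logrotate job only runs from /etc/cron.daily, so an hourly config has to be driven by its own hourly cron entry. A sketch (file names are assumptions):

    #!/bin/sh
    # /etc/cron.hourly/logrotate-lastlogs
    /usr/sbin/logrotate /etc/logrotate-hourly.conf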
|
17,131,048
|
Error when using scp command "bash: scp: command not found"
|
I want to use the scp command to copy a local file to a remote server, but I get an error message after entering the password of the user on the remote server. ~]$ scp gitadmin.pub git@123.150.207.18: git@123.150.207.18's password: bash: scp: command not found lost connection I checked on the server using the git user and it seems the scp command can be found and openssh-clients was installed too. git@... ~]$ scp usage: scp [-1246BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file] [-l limit] [-o ssh_option] [-P port] [-S program] [[user@]host1:]file1 ... [[user@]host2:]file2 git@... ~]$ su root ...... root@... ~]# yum info openssh-clients Loaded plugins: product-id, subscription-manager Updating Red Hat repositories. Installed Packages Name : openssh-clients Arch : x86_64 Version : 5.3p1 Release : 52.el6 Size : 1.0 M Repo : installed From repo : anaconda-RedHatEnterpriseLinux-201105101844.x86_64 Summary : An open source SSH client applications URL : [URL] License : BSD Description : OpenSSH is a free version of SSH (Secure SHell), a program for : logging into and executing commands on a remote machine. This : package includes the clients necessary to make encrypted : connections to SSH servers. I'm confused by the situation. Am I missing some configuration on the server? (We are using RHEL6 as the server.) It was my fault in the PATH setting. I added 'custom.sh' in /etc/profile.d and added the following lines in it to add the /usr/local/node/bin directory to PATH. export PATH="/usr/local/node/bin:$PATH" But the format is wrong. I removed the pair of '"' and it works OK now. It should be: export PATH=$PATH:/usr/local/node/bin A probe mistake...^_^
|
Error when using scp command "bash: scp: command not found" I want to use the scp command to copy a local file to a remote server, but I get an error message after entering the password of the user on the remote server. ~]$ scp gitadmin.pub git@123.150.207.18: git@123.150.207.18's password: bash: scp: command not found lost connection I checked on the server using the git user and it seems the scp command can be found and openssh-clients was installed too. git@... ~]$ scp usage: scp [-1246BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file] [-l limit] [-o ssh_option] [-P port] [-S program] [[user@]host1:]file1 ... [[user@]host2:]file2 git@... ~]$ su root ...... root@... ~]# yum info openssh-clients Loaded plugins: product-id, subscription-manager Updating Red Hat repositories. Installed Packages Name : openssh-clients Arch : x86_64 Version : 5.3p1 Release : 52.el6 Size : 1.0 M Repo : installed From repo : anaconda-RedHatEnterpriseLinux-201105101844.x86_64 Summary : An open source SSH client applications URL : [URL] License : BSD Description : OpenSSH is a free version of SSH (Secure SHell), a program for : logging into and executing commands on a remote machine. This : package includes the clients necessary to make encrypted : connections to SSH servers. I'm confused by the situation. Am I missing some configuration on the server? (We are using RHEL6 as the server.) It was my fault in the PATH setting. I added 'custom.sh' in /etc/profile.d and added the following lines in it to add the /usr/local/node/bin directory to PATH. export PATH="/usr/local/node/bin:$PATH" But the format is wrong. I removed the pair of '"' and it works OK now. It should be: export PATH=$PATH:/usr/local/node/bin A probe mistake...^_^
|
scp, openssh, rhel
| 72
| 169,411
| 3
|
https://stackoverflow.com/questions/17131048/error-when-using-scp-command-bash-scp-command-not-found
|
42,001,338
|
Install ONLY mongo shell, not mongodb
|
As mentioned above, I need to install only the mongo shell on a RHEL instance (machine A). I have a mongodb server on a separate instance (machine B) and need to connect to that from A to run mongodump and mongorestore commands. I tried looking it up on the web but all I got was instructions to install the complete mongodb package. Any help appreciated.
|
Install ONLY mongo shell, not mongodb As mentioned above, I need to install only the mongo shell on a RHEL instance (machine A). I have a mongodb server on a separate instance (machine B) and need to connect to that from A to run mongodump and mongorestore commands. I tried looking it up on the web but all I got was instructions to install the complete mongodb package. Any help appreciated.
|
mongodb, shell, rhel
| 69
| 85,970
| 6
|
https://stackoverflow.com/questions/42001338/install-only-mongo-shell-not-mongodb
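With the MongoDB yum repository configured, the shell and the dump/restore tools are packaged separately from the server; a sketch:

    sudo yum install mongodb-org-shell   # just the mongo shell
    sudo yum install mongodb-org-tools   # mongodump / mongorestore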
|
33,362,904
|
Completely remove MariaDB or MySQL from CentOS 7 or RHEL 7
|
I installed MariaDB on CentOS 7 but I had some problems with some configuration; now it is completely misconfigured. Thus, I wanted to remove MariaDB with “yum remove mariadb mariadb-server”, after which I reinstalled it with “yum install mariadb mariadb-server”. Unfortunately, the configuration remains. It seems as if yum remove doesn't delete all MariaDB config files. How can I remove MariaDB completely from CentOS 7?
|
Completely remove MariaDB or MySQL from CentOS 7 or RHEL 7 I installed MariaDB on CentOS 7 but I had some problems with some configuration; now it is completely misconfigured. Thus, I wanted to remove MariaDB with “yum remove mariadb mariadb-server”, after which I reinstalled it with “yum install mariadb mariadb-server”. Unfortunately, the configuration remains. It seems as if yum remove doesn't delete all MariaDB config files. How can I remove MariaDB completely from CentOS 7?
|
mysql, centos, mariadb, yum, rhel
| 62
| 296,855
| 3
|
https://stackoverflow.com/questions/33362904/completely-remove-mariadb-or-mysql-from-centos-7-or-rhel-7
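yum remove leaves the data directory and configuration behind; a sketch of a full cleanup (this destroys all databases, so back up first):

    sudo yum remove mariadb mariadb-server
    sudo rm -rf /var/lib/mysql          # data directory
    sudo rm -f /etc/my.cnf ~/.my.cnf    # leftover configuration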
|
4,149,361
|
On linux SUSE or RedHat, how do I load Python 2.7
|
Can someone provide the steps needed to install Python version 2.7 on SUSE and RedHat? The version that is on there is 2.4, and I need at least 2.6 to make my script work. So after the install, I can type python in an xterm and get the Python 2.7 command line interface.
|
On linux SUSE or RedHat, how do I load Python 2.7 Can someone provide the steps needed to install Python version 2.7 on SUSE and RedHat? The version that is on there is 2.4, and I need at least 2.6 to make my script work. So after the install, I can type python in an xterm and get the Python 2.7 command line interface.
|
python, linux, rhel, suse
| 60
| 111,856
| 11
|
https://stackoverflow.com/questions/4149361/on-linux-suse-or-redhat-how-do-i-load-python-2-7
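A source-build sketch that leaves the distro's Python 2.4 untouched (the exact 2.7.x version is illustrative):

    wget https://www.python.org/ftp/python/2.7.18/Python-2.7.18.tgz
    tar xzf Python-2.7.18.tgz && cd Python-2.7.18
    ./configure --prefix=/usr/local
    make
    sudo make altinstall   # installs python2.7 without overwriting the system python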
|
58,011,088
|
Kibana server is not ready yet
|
I have just installed Kibana 7.3 on RHEL 8. The Kibana service is active (running). I receive the Kibana server is not ready yet message when I curl to [URL] . My Elasticsearch instance is on another server and it is responding with success to my requests. I have updated kibana.yml with elasticsearch.hosts:[" [URL] "] I can reach Elasticsearch from the internet, with this response: { "name" : "ip-172-31-21-240.ec2.internal", "cluster_name" : "elasticsearch", "cluster_uuid" : "y4UjlddiQimGRh29TVZoeA", "version" : { "number" : "7.3.1", "build_flavor" : "default", "build_type" : "rpm", "build_hash" : "4749ba6", "build_date" : "2019-08-19T20:19:25.651794Z", "build_snapshot" : false, "lucene_version" : "8.1.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" } The result of sudo systemctl status kibana : ● kibana.service - Kibana Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2019-09-19 12:22:34 UTC; 24min ago Main PID: 4912 (node) Tasks: 21 (limit: 4998) Memory: 368.8M CGroup: /system.slice/kibana.service └─4912 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size> Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:44 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0 The result of "sudo journalctl --unit kibana" : Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive > Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect> Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","task_manager"],"pid":1356,"message":"PollError No Living connec> Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive > Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect> Do you have any idea where the problem is?
|
Kibana server is not ready yet I have just installed Kibana 7.3 on RHEL 8. The Kibana service is active (running). I receive the Kibana server is not ready yet message when I curl to [URL] . My Elasticsearch instance is on another server and it is responding with success to my requests. I have updated kibana.yml with elasticsearch.hosts:[" [URL] "] I can reach Elasticsearch from the internet, with this response: { "name" : "ip-172-31-21-240.ec2.internal", "cluster_name" : "elasticsearch", "cluster_uuid" : "y4UjlddiQimGRh29TVZoeA", "version" : { "number" : "7.3.1", "build_flavor" : "default", "build_type" : "rpm", "build_hash" : "4749ba6", "build_date" : "2019-08-19T20:19:25.651794Z", "build_snapshot" : false, "lucene_version" : "8.1.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" } The result of sudo systemctl status kibana : ● kibana.service - Kibana Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2019-09-19 12:22:34 UTC; 24min ago Main PID: 4912 (node) Tasks: 21 (limit: 4998) Memory: 368.8M CGroup: /system.slice/kibana.service └─4912 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size> Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0> Sep 19 12:46:44 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0 The result of "sudo journalctl --unit kibana" : Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive > Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect> Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","task_manager"],"pid":1356,"message":"PollError No Living connec> Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive > Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect> Do you have any idea where the problem is?
|
elasticsearch, kibana, rhel
| 48
| 224,073
| 17
|
https://stackoverflow.com/questions/58011088/kibana-server-is-not-ready-yet
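The "No living connections" warnings mean Kibana itself cannot reach Elasticsearch, so the URL in kibana.yml and any firewall between the two hosts are the places to look. A minimal sketch of the relevant settings (the host is a placeholder):

    # /etc/kibana/kibana.yml
    server.host: "0.0.0.0"
    elasticsearch.hosts: ["http://<elasticsearch-host>:9200"]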
|
11,683,715
|
Suppressing the "Picked up _JAVA_OPTIONS" message
|
I'm using _JAVA_OPTIONS to set some defaults for Java on RHEL. It works fine, but now every time I start Java I get the following message: Picked up _JAVA_OPTIONS: -foo -bar -baz Is it possible to keep the options but suppress the display of this message?
|
Suppressing the "Picked up _JAVA_OPTIONS" message I'm using _JAVA_OPTIONS to set some defaults for Java on RHEL. It works fine, but now every time I start Java I get the following message: Picked up _JAVA_OPTIONS: -foo -bar -baz Is it possible to keep the options but suppress the display of this message?
|
java, rhel
| 48
| 70,649
| 2
|
https://stackoverflow.com/questions/11683715/suppressing-the-picked-up-java-options-message
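The notice is written to stderr, so one workaround (bash-specific, and it filters everything on stderr through grep) is to strip just that line; a sketch:

    java -version 2> >(grep -v 'Picked up _JAVA_OPTIONS' >&2)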
|
19,077,538
|
Check RPM dependencies
|
When you are installing a program using .deb packages on Ubuntu, you can check the dependencies of a package using Ubuntu Packages Search . For example I can see the dependencies of Wireshark from here . As you can see, dependencies are marked with a red bullet. If you know all the packages your program depends on, you can download and install them using dpkg . Is there any alternative website for RPM packages? Especially for RHEL? I know that I can get these packages' names by other methods, such as when installing an RPM package using rpm -i , but that is not user friendly and needs access to a running Linux system.
|
Check RPM dependencies When you are installing a program using .deb packages on Ubuntu, you can check the dependencies of a package using Ubuntu Packages Search . For example I can see the dependencies of Wireshark from here . As you can see, dependencies are marked with a red bullet. If you know all the packages your program depends on, you can download and install them using dpkg . Is there any alternative website for RPM packages? Especially for RHEL? I know that I can get these packages' names by other methods, such as when installing an RPM package using rpm -i , but that is not user friendly and needs access to a running Linux system.
|
linux, dependency-management, rpm, rhel
| 45
| 182,052
| 3
|
https://stackoverflow.com/questions/19077538/check-rpm-dependencies
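Without installing anything, rpm itself can list a package's requirements; a sketch:

    rpm -qpR some-package.rpm   # requirements of a local .rpm file
    rpm -qR installed-package   # requirements of an already installed package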
|
171,506
|
Make and build utilities on CentOS/RHEL?
|
I've been unsuccessfully searching for a way to install the make utility on my CentOS 5.2. I've looked through some RPM repositories and online, to no avail. Installing gcc , gcc-c++ didn't help! The build-essential package is not made for CentOS/RHEL. I have the RPMFORGE repo enabled in YUM.
|
Make and build utilities on CentOS/RHEL? I've been unsuccessfully searching for a way to install the make utility on my CentOS 5.2. I've looked through some RPM repositories and online, to no avail. Installing gcc , gcc-c++ didn't help! The build-essential package is not made for CentOS/RHEL. I have the RPMFORGE repo enabled in YUM.
|
build, centos, utilities, rhel
| 40
| 121,954
| 5
|
https://stackoverflow.com/questions/171506/make-and-build-utilities-on-centos-rhel
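The CentOS/RHEL counterpart of Debian's build-essential is the Development Tools group; a sketch:

    sudo yum groupinstall "Development Tools"   # gcc, g++, make, etc.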
|
17,956,151
|
How to run a command as a specific user in an init script?
|
I'm writing an init script which is supposed to execute a single command as a user different than root. This is how I'm doing it currently: sudo -u username command This generally works as expected on Ubuntu/Debian, but on RHEL the script which is executed as the command hangs. Is there another way to run the command as another user? (Note that I can't use lsb init functions as they're not available on RHEL/Centos 5.x.)
|
How to run a command as a specific user in an init script? I'm writing an init script which is supposed to execute a single command as a user different than root. This is how I'm doing it currently: sudo -u username command This generally works as expected on Ubuntu/Debian, but on RHEL the script which is executed as the command hangs. Is there another way to run the command as another user? (Note that I can't use lsb init functions as they're not available on RHEL/Centos 5.x.)
|
linux, bash, centos, init, rhel
| 40
| 181,818
| 6
|
https://stackoverflow.com/questions/17956151/how-to-run-a-command-as-a-specific-user-in-an-init-script
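Two common alternatives to sudo inside RHEL init scripts; a sketch (the daemon helper comes from the stock init functions file):

    su -s /bin/sh -c 'command' username

    # or, inside the init script itself:
    . /etc/rc.d/init.d/functions
    daemon --user=username command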
|
31,876,031
|
The command '/bin/sh -c returned a non-zero code: 127
|
I'm new to docker so I might be doing this wrong, but I'm trying to install Tomcat6 through a Dockerfile which looks like this: FROM rhel7:latest RUN cd /tmp RUN "wget", "[URL] RUN tar xzf apache-tomcat-6.0.44.tar.gz RUN mv apache-tomcat-6.0.44 /usr/local/tomcat6 RUN cd /usr/local/tomcat6 Run ./bin/start.sh It's failing on the 3rd line: RUN "wget", "[URL] When I run the docker build I get this: I'm using: Oracle Virtual Box V4.3.28 r100309 Docker on RHEL7
|
The command '/bin/sh -c returned a non-zero code: 127 I'm new to docker so I might be doing this wrong, but I'm trying to install Tomcat6 through a Dockerfile which looks like this: FROM rhel7:latest RUN cd /tmp RUN "wget", "[URL] RUN tar xzf apache-tomcat-6.0.44.tar.gz RUN mv apache-tomcat-6.0.44 /usr/local/tomcat6 RUN cd /usr/local/tomcat6 Run ./bin/start.sh It's failing on the 3rd line: RUN "wget", "[URL] When I run the docker build I get this: I'm using: Oracle Virtual Box V4.3.28 r100309 Docker on RHEL7
|
docker, tomcat6, rhel, dockerfile, rhel7
| 40
| 147,845
| 5
|
https://stackoverflow.com/questions/31876031/the-command-bin-sh-c-returned-a-non-zero-code-127
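RUN takes a plain shell command, not comma-separated strings, and each RUN starts in a fresh shell, so the cd does not persist between lines. A corrected sketch (the tarball URL was elided in the question, so a placeholder is used; the base image is also assumed to provide wget and tar):

    FROM rhel7:latest
    RUN cd /tmp && \
        wget <tomcat-tarball-url> && \
        tar xzf apache-tomcat-6.0.44.tar.gz && \
        mv apache-tomcat-6.0.44 /usr/local/tomcat6
    CMD ["/usr/local/tomcat6/bin/catalina.sh", "run"]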
|
38,188,896
|
Installing g++ 5 on Amazon Linux
|
I'm trying to install g++ 5.x on an EC2 instance running Amazon Linux; in Amazon's central repository the latest version is 4.8.3. What configuration can I make to allow yum to find a newer gcc-c++ package?
|
Installing g++ 5 on Amazon Linux I'm trying to install g++ 5.x on an EC2 instance running Amazon Linux; in Amazon's central repository the latest version is 4.8.3. What configuration can I make to allow yum to find a newer gcc-c++ package?
|
linux, amazon-ec2, g++, c++14, rhel
| 37
| 26,039
| 6
|
https://stackoverflow.com/questions/38188896/installing-g-5-on-amazon-linux
|
35,492,893
|
Unable to start postgresql service on CentOS 7
|
Unable to start postgresql-9.5 on CentOS 7. I followed this page - [URL] - for installing the database server on CentOS. I tried the same after setting setenforce 0 , and that did not help either. I am doing all operations as root . systemctl start postgresql-9.5.service Job for postgresql-9.5.service failed because the control process exited with error code. See "systemctl status postgresql-9.5.service" and "journalctl -xe" for details. And here is what I get for status - Redirecting to /bin/systemctl status postgresql-9.5.service ● postgresql-9.5.service - PostgreSQL 9.5 database server Loaded: loaded (/usr/lib/systemd/system/postgresql-9.5.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Thu 2016-02-18 15:20:30 EST; 2min 28s ago Process: 15041 ExecStartPre=/usr/pgsql-9.5/bin/postgresql95-check-db-dir ${PGDATA} (code=exited, status=1/FAILURE) Feb 18 15:20:30 myserver systemd[1]: Starting PostgreSQL 9.5 database server... Feb 18 15:20:30 myserver systemd[1]: postgresql-9.5.service: control process exited, code=exited status=1 Feb 18 15:20:30 myserver systemd[1]: Failed to start PostgreSQL 9.5 database server. Feb 18 15:20:30 myserver systemd[1]: Unit postgresql-9.5.service entered failed state. Feb 18 15:20:30 myserver systemd[1]: postgresql-9.5.service failed. And the contents of the different conf files are as follows - [root@myserver /]# cat /etc/ld.so.conf.d/postgresql-pgdg-libs.conf /usr/pgsql-9.5/lib/ [root@myserver /]# cat /usr/lib/tmpfiles.d/postgresql-9.5.conf d /var/run/postgresql 0755 postgres postgres - [root@myserver /]# cat /usr/pgsql-9.5/share/postgresql-9.5-libs.conf /usr/pgsql-9.5/lib/ [root@myserver /]# cat /etc/alternatives/pgsql-ld-conf /usr/pgsql-9.5/lib/ [root@myserver /]# cat /var/lib/alternatives/pgsql-ld-conf auto /etc/ld.so.conf.d/postgresql-pgdg-libs.conf /usr/pgsql-9.5/share/postgresql-9.5-libs.conf 950 Googled for the error that I am seeing. A number of folks have seen the same error, and the underlying cause is different in each case. Reading through those posts, it is not clear that I am seeing any of the already reported causes.
|
Unable to start postgresql service on CentOS 7 Unable to start postgresql-9.5 on CentOS 7. I followed this page - [URL] - for installing the database server on CentOS. I tried the same after setting setenforce 0 , and that did not help either. I am doing all operations as root . systemctl start postgresql-9.5.service Job for postgresql-9.5.service failed because the control process exited with error code. See "systemctl status postgresql-9.5.service" and "journalctl -xe" for details. And here is what I get for status - Redirecting to /bin/systemctl status postgresql-9.5.service ● postgresql-9.5.service - PostgreSQL 9.5 database server Loaded: loaded (/usr/lib/systemd/system/postgresql-9.5.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Thu 2016-02-18 15:20:30 EST; 2min 28s ago Process: 15041 ExecStartPre=/usr/pgsql-9.5/bin/postgresql95-check-db-dir ${PGDATA} (code=exited, status=1/FAILURE) Feb 18 15:20:30 myserver systemd[1]: Starting PostgreSQL 9.5 database server... Feb 18 15:20:30 myserver systemd[1]: postgresql-9.5.service: control process exited, code=exited status=1 Feb 18 15:20:30 myserver systemd[1]: Failed to start PostgreSQL 9.5 database server. Feb 18 15:20:30 myserver systemd[1]: Unit postgresql-9.5.service entered failed state. Feb 18 15:20:30 myserver systemd[1]: postgresql-9.5.service failed. And the contents of the different conf files are as follows - [root@myserver /]# cat /etc/ld.so.conf.d/postgresql-pgdg-libs.conf /usr/pgsql-9.5/lib/ [root@myserver /]# cat /usr/lib/tmpfiles.d/postgresql-9.5.conf d /var/run/postgresql 0755 postgres postgres - [root@myserver /]# cat /usr/pgsql-9.5/share/postgresql-9.5-libs.conf /usr/pgsql-9.5/lib/ [root@myserver /]# cat /etc/alternatives/pgsql-ld-conf /usr/pgsql-9.5/lib/ [root@myserver /]# cat /var/lib/alternatives/pgsql-ld-conf auto /etc/ld.so.conf.d/postgresql-pgdg-libs.conf /usr/pgsql-9.5/share/postgresql-9.5-libs.conf 950 Googled for the error that I am seeing. A number of folks have seen the same error, and the underlying cause is different in each case. Reading through those posts, it is not clear that I am seeing any of the already reported causes.
|
linux, postgresql, centos, centos7, rhel
| 33
| 130,216
| 5
|
https://stackoverflow.com/questions/35492893/unable-to-start-postgresql-service-on-centos-7
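The failing ExecStartPre is the data-directory check, which for the PGDG packages usually means initdb was never run; a sketch:

    sudo /usr/pgsql-9.5/bin/postgresql95-setup initdb
    sudo systemctl start postgresql-9.5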
|
65,878,769
|
Cannot install docker in a RHEL server
|
I am getting Requires: fuse-overlayfs >= 0.7 error while installing docker in RHEL-7. sudo yum install docker-ce Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager This system is not registered with an entitlement server. You can use subscription-manager to register. Loading mirror speeds from cached hostfile * epel: mirrors.syringanetworks.net Resolving Dependencies --> Running transaction check ---> Package docker-ce.x86_64 3:20.10.2-3.el7 will be installed --> Processing Dependency: containerd.io >= 1.4.1 for package: 3:docker-ce-20.10.2-3.el7.x86_64 --> Processing Dependency: docker-ce-cli for package: 3:docker-ce-20.10.2-3.el7.x86_64 --> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-20.10.2-3.el7.x86_64 --> Running transaction check ---> Package containerd.io.x86_64 0:1.4.3-3.1.el7 will be installed ---> Package docker-ce-cli.x86_64 1:20.10.2-3.el7 will be installed ---> Package docker-ce-rootless-extras.x86_64 0:20.10.2-3.el7 will be installed --> Processing Dependency: fuse-overlayfs >= 0.7 for package: docker-ce-rootless-extras-20.10.2-3.el7.x86_64 --> Finished Dependency Resolution Error: Package: docker-ce-rootless-extras-20.10.2-3.el7.x86_64 (docker-ce-stable) Requires: fuse-overlayfs >= 0.7 You could try using --skip-broken to work around I already tried sudo rpm -Uvh [URL] Retrieving [URL] warning: /var/tmp/rpm-tmp.TZLjHD: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY error: Failed dependencies: libfuse3.so.3()(64bit) is needed by fuse-overlayfs-0.7.2-6.el7_8.x86_64 libfuse3.so.3(FUSE_3.0)(64bit) is needed by fuse-overlayfs-0.7.2-6.el7_8.x86_64 libfuse3.so.3(FUSE_3.2)(64bit) is needed by fuse-overlayfs-0.7.2-6.el7_8.x86_64
|
Cannot install docker in a RHEL server I am getting Requires: fuse-overlayfs >= 0.7 error while installing docker in RHEL-7. sudo yum install docker-ce Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager This system is not registered with an entitlement server. You can use subscription-manager to register. Loading mirror speeds from cached hostfile * epel: mirrors.syringanetworks.net Resolving Dependencies --> Running transaction check ---> Package docker-ce.x86_64 3:20.10.2-3.el7 will be installed --> Processing Dependency: containerd.io >= 1.4.1 for package: 3:docker-ce-20.10.2-3.el7.x86_64 --> Processing Dependency: docker-ce-cli for package: 3:docker-ce-20.10.2-3.el7.x86_64 --> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-20.10.2-3.el7.x86_64 --> Running transaction check ---> Package containerd.io.x86_64 0:1.4.3-3.1.el7 will be installed ---> Package docker-ce-cli.x86_64 1:20.10.2-3.el7 will be installed ---> Package docker-ce-rootless-extras.x86_64 0:20.10.2-3.el7 will be installed --> Processing Dependency: fuse-overlayfs >= 0.7 for package: docker-ce-rootless-extras-20.10.2-3.el7.x86_64 --> Finished Dependency Resolution Error: Package: docker-ce-rootless-extras-20.10.2-3.el7.x86_64 (docker-ce-stable) Requires: fuse-overlayfs >= 0.7 You could try using --skip-broken to work around I already tried sudo rpm -Uvh [URL] Retrieving [URL] warning: /var/tmp/rpm-tmp.TZLjHD: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY error: Failed dependencies: libfuse3.so.3()(64bit) is needed by fuse-overlayfs-0.7.2-6.el7_8.x86_64 libfuse3.so.3(FUSE_3.0)(64bit) is needed by fuse-overlayfs-0.7.2-6.el7_8.x86_64 libfuse3.so.3(FUSE_3.2)(64bit) is needed by fuse-overlayfs-0.7.2-6.el7_8.x86_64
|
linux, docker, centos7, rhel, rhel7
| 32
| 58,531
| 5
|
https://stackoverflow.com/questions/65878769/cannot-install-docker-in-a-rhel-server
|
39,347,489
|
mount.nfs: requested NFS version or transport protocol is not supported
|
NFS mount is not working in my RHEL 7 AWS instance. When I do a mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/ I get the error: mount.nfs: requested NFS version or transport protocol is not supported Can anyone point out where I am going wrong? Thanks.
|
mount.nfs: requested NFS version or transport protocol is not supported NFS mount is not working in my RHEL 7 AWS instance. When I do a mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/ I get the error: mount.nfs: requested NFS version or transport protocol is not supported Can anyone point out where I am going wrong? Thanks.
|
mount, rhel, nfs
| 32
| 141,701
| 9
|
https://stackoverflow.com/questions/39347489/mount-nfs-requested-nfs-version-or-transport-protocol-is-not-supported
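Before changing mount options, it is worth asking the server which NFS versions it actually registers; a sketch:

    rpcinfo -p 10.10.11.10     # registered nfs/mountd versions and ports
    showmount -e 10.10.11.10   # exported paths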
|
24,056,102
|
why do I get "Suspended (tty output)" in one terminal but not in others?
|
Apparently I've done something strange/wrong in a tcsh shell, and now whenever I start an application in the background which prints to stdout, the application is suspended (stopped). Weird thing is, this behavior only happens in this terminal; if I do the same in another terminal, the application just keeps running in the background and prints its output to the terminal. In the "broken" terminal I have to put the suspended application back into the foreground (with fg ) to have it continue. Example: thehost:/tmp/test1(277)> ls -l & [3] 1454 thehost:/tmp/test1(278)> [3] + Suspended (tty output) ls --color=auto -l thehost:/tmp/test1(278)> fg ls --color=auto -l total 0 thehost:/tmp/test1(279)> The same command executed in another terminal works fine: thehost:/tmp/test1(8)> ls -l & [1] 2280 thehost:/tmp/test1(9)> total 0 [1] Done ls --color=auto -l thehost:/tmp/test1(9)> Starting a bash in the affected terminal doesn't solve this either: thehost:/tmp/test1(280)> bash oliver@thehost:/tmp/test1$ ls -l & [1] 2263 oliver@thehost:/tmp/test1$ [1]+ Stopped ls --color=auto -l oliver@thehost:/tmp/test1$ fg ls --color=auto -l total 0 oliver@thehost:/tmp/test1$ Getting a new login shell (with su - oliver ) doesn't solve this either. So: what did I do in this terminal to get this behavior, and what can I do to get back the normal behavior? It's not really an important problem (I could close the terminal and open a new one), but I'm curious :-) Happens on Linux RHEL 6.4 64bit, with KDE 4.11.5 and Konsole 2.11.3, and tcsh 6.17.00.
|
why do I get "Suspended (tty output)" in one terminal but not in others? Apparently I've done something strange/wrong in a tcsh shell, and now whenever I start an application in the background which prints to stdout, the application is suspended (stopped). Weird thing is, this behavior only happens in this terminal; if I do the same in another terminal, the application just keeps running in the background and prints its output to the terminal. In the "broken" terminal I have to put the suspended application back into the foreground (with fg ) to have it continue. Example: thehost:/tmp/test1(277)> ls -l & [3] 1454 thehost:/tmp/test1(278)> [3] + Suspended (tty output) ls --color=auto -l thehost:/tmp/test1(278)> fg ls --color=auto -l total 0 thehost:/tmp/test1(279)> The same command executed in another terminal works fine: thehost:/tmp/test1(8)> ls -l & [1] 2280 thehost:/tmp/test1(9)> total 0 [1] Done ls --color=auto -l thehost:/tmp/test1(9)> Starting a bash in the affected terminal doesn't solve this either: thehost:/tmp/test1(280)> bash oliver@thehost:/tmp/test1$ ls -l & [1] 2263 oliver@thehost:/tmp/test1$ [1]+ Stopped ls --color=auto -l oliver@thehost:/tmp/test1$ fg ls --color=auto -l total 0 oliver@thehost:/tmp/test1$ Getting a new login shell (with su - oliver ) doesn't solve this either. So: what did I do in this terminal to get this behavior, and what can I do to get back the normal behavior? It's not really an important problem (I could close the terminal and open a new one), but I'm curious :-) Happens on Linux RHEL 6.4 64bit, with KDE 4.11.5 and Konsole 2.11.3, and tcsh 6.17.00.
|
linux, shell, terminal, rhel
| 29
| 57,874
| 2
|
https://stackoverflow.com/questions/24056102/why-do-i-get-suspended-tty-output-in-one-terminal-but-not-in-others
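This is the classic symptom of the terminal's tostop flag being set, which makes background jobs receive SIGTTOU when they write to the tty; a sketch to check and clear it:

    stty -a | grep tostop   # 'tostop' (without a leading -) means background output suspends jobs
    stty -tostop            # clear it so background jobs can write freely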
|
32,316,707
|
RHEL 6 - how to install 'GLIBC_2.14' or 'GLIBC_2.15'?
|
I need these 2 packages installed on RHEL 6 linux system. They are required by several other programs. When I do: sudo yum install glibc-devel this is output: Loaded plugins: product-id, security Setting up Install Process Package glibc-devel-2.12-1.166.el6_7.1.x86_64 already installed and latest version Nothing to do Is there some EPEL with GLIBC_2.15 for RHEL? If not - what is a workaround here?
|
RHEL 6 - how to install 'GLIBC_2.14' or 'GLIBC_2.15'? I need these 2 packages installed on RHEL 6 linux system. They are required by several other programs. When I do: sudo yum install glibc-devel this is output: Loaded plugins: product-id, security Setting up Install Process Package glibc-devel-2.12-1.166.el6_7.1.x86_64 already installed and latest version Nothing to do Is there some EPEL with GLIBC_2.15 for RHEL? If not - what is a workaround here?
|
dependencies, glibc, yum, rhel
| 26
| 150,931
| 4
|
https://stackoverflow.com/questions/32316707/rhel-6-how-to-install-glibc-2-14-or-glibc-2-15
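Replacing the system glibc on RHEL 6 is unsafe; one known workaround is a parallel build under its own prefix, with the binary run explicitly against that loader. A sketch (version and paths illustrative):

    tar xzf glibc-2.15.tar.gz && cd glibc-2.15
    mkdir build && cd build
    ../configure --prefix=/opt/glibc-2.15
    make && sudo make install
    # run a binary against the parallel glibc:
    /opt/glibc-2.15/lib/ld-linux-x86-64.so.2 --library-path /opt/glibc-2.15/lib ./myprog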
|
35,634,795
|
No acceptable C compiler found in $PATH while installing the C compiler
|
I am facing the following error while installing the C compiler itself ( gcc gnu ). configure: error: in `/home/gcc-5.3.0': configure: error: no acceptable C compiler found in $PATH Note that I have tried the solutions listed in this question with no success. OS: RHEL6 and CentOS
|
No acceptable C compiler found in $PATH while installing the C compiler I am facing the following error while installing the C compiler itself ( gcc gnu ). configure: error: in `/home/gcc-5.3.0': configure: error: no acceptable C compiler found in $PATH Note that I have tried the solutions listed in this question with no success. OS: RHEL6 and CentOS
|
centos, rhel
| 26
| 56,192
| 1
|
https://stackoverflow.com/questions/35634795/no-acceptable-c-compiler-found-in-path-while-installing-the-c-compiler
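Building gcc from source still needs an existing C compiler to bootstrap, so the first step is the distro package; a sketch:

    sudo yum install gcc gcc-c++ make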
|
8,237,395
|
Installing tshark on RHEL
|
Is there a way to install tshark on RHEL machines using yum install? When I did: yum install tshark I got back: Setting up Install Process No package tshark available. Nothing to do When I did: yum list tshark I got back: Loaded plugins: rhnplugin, security Error: No matching Packages to list
|
Installing tshark on RHEL Is there a way to install tshark on RHEL machines using yum install? When I did: yum install tshark I got back: Setting up Install Process No package tshark available. Nothing to do When I did: yum list tshark I got back: Loaded plugins: rhnplugin, security Error: No matching Packages to list
|
linux, wireshark, yum, rhel, tshark
| 25
| 54,383
| 1
|
https://stackoverflow.com/questions/8237395/installing-tshark-on-rhel
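On RHEL there is no separate tshark package; the binary ships inside the wireshark package. A sketch:

    sudo yum install wireshark   # the tshark binary is included in this package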
|
59,849,538
|
Does PHP do any parsing on the php.ini file?
|
Running PHP Version 7.1.30 under RHEL 7.7. I'm wanting to bump memory_limit, but wasn't sure if I had the syntax right (i.e. 256M or 256MB). So to start with I put a bad value "Hugo" in as the memory_limit setting. The trouble with this is that the result of phpinfo() (run under httpd) literally has the string "Hugo" in place, i.e.: So this has me somewhat concerned that PHP doesn't actually do any sanity checking of the value(s). (If the value provided was bad I would expect it to revert to a default, e.g.) Can anyone comment on this - in particular, how do you know whether PHP will be enforcing things (if an arbitrary string can be provided)?
|
Does PHP do any parsing on the php.ini file? Running PHP Version 7.1.30 under RHEL 7.7. I'm wanting to bump memory_limit, but wasn't sure if I had the syntax right (i.e. 256M or 256MB). So to start with I put a bad value "Hugo" in as the memory_limit setting. The trouble with this is that the result of phpinfo() (run under httpd) literally has the string "Hugo" in place, i.e.: So this has me somewhat concerned that PHP doesn't actually do any sanity checking of the value(s). (If the value provided was bad I would expect it to revert to a default, e.g.) Can anyone comment on this - in particular, how do you know whether PHP will be enforcing things (if an arbitrary string can be provided)?
|
php, apache, rhel
| 25
| 513
| 2
|
https://stackoverflow.com/questions/59849538/does-php-do-any-parsing-on-the-php-ini-file
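PHP keeps the raw string for display and converts the size shorthand with a lenient numeric parse when the setting is applied, so an invalid string quietly becomes a number rather than an error. A sketch for inspecting what was actually loaded:

    php -r 'var_dump(ini_get("memory_limit"));'   # raw value as read from php.ini
    php -i | grep memory_limit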
|
19,358,194
|
mysql is dead but subsys locked
|
I am using PHP-mysql on Linux (RHEL 5.0). The first time I tried to connect to MySQL from my PHP script using mysqli_connect, it displayed the following error: Can't connect to local MySQL server through socket '/var/mysql/mysql.sock'(2) After googling for an hour I found a solution to this as stated here . When I followed that approach and issued the command: service mysqld status as the root user I got: mysql is dead but subsys locked Does anyone know how to solve this, and the cause of this error? Also, restarting, starting and stopping MySQL gives the output: FAILED But PHP is working fine; I've tested with phpinfo(); as a demo. I've installed MySQL in /usr/local/mysql/bin . Can anyone help me with this? Any help is appreciated. Thanks in advance.
|
mysql is dead but subsys locked I am using PHP-mysql on Linux (RHEL 5.0). The first time I tried to connect to MySQL from my PHP script using mysqli_connect, it displayed the following error: Can't connect to local MySQL server through socket '/var/mysql/mysql.sock'(2) After googling for an hour I found a solution to this as stated here . When I followed that approach and issued the command: service mysqld status as the root user I got: mysql is dead but subsys locked Does anyone know how to solve this, and the cause of this error? Also, restarting, starting and stopping MySQL gives the output: FAILED But PHP is working fine; I've tested with phpinfo(); as a demo. I've installed MySQL in /usr/local/mysql/bin . Can anyone help me with this? Any help is appreciated. Thanks in advance.
|
php, linux, rhel, mysql
| 23
| 54,986
| 9
|
https://stackoverflow.com/questions/19358194/mysql-is-dead-but-subsys-locked
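The "subsys locked" state usually means a stale lock file survived an unclean shutdown; a sketch (check the MySQL error log if the restart still fails):

    sudo rm -f /var/lock/subsys/mysqld
    sudo service mysqld restart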
|
20,267,339
|
Docker behind proxy that changes ssl certificate
|
I am trying to run the following docker command: docker run -i -t ubuntu /bin/bash But I get the error: Unable to find image 'ubuntu' (tag: latest) locally Pulling repository ubuntu 2013/11/28 14:00:24 Get [URL] x509: certificate signed by unknown authority I know that our company replaces the SSL Certificate on the fly for https requests. I tried to trust our company's CA certificate by putting it in: /etc/pki/tls/certs/ca-bundle.crt and /etc/pki/tls/cert.pem But it is still not working. Any ideas?
|
Docker behind proxy that changes ssl certificate I am trying to run the following docker command: docker run -i -t ubuntu /bin/bash But I get the error: Unable to find image 'ubuntu' (tag: latest) locally Pulling repository ubuntu 2013/11/28 14:00:24 Get [URL] x509: certificate signed by unknown authority I know that our company replaces the SSL Certificate on the fly for https requests. I tried to trust our company's CA certificate by putting it in: /etc/pki/tls/certs/ca-bundle.crt and /etc/pki/tls/cert.pem But it is still not working. Any ideas?
|
linux, ssl, rhel, docker
| 23
| 47,659
| 3
|
https://stackoverflow.com/questions/20267339/docker-behind-proxy-that-changes-ssl-certificate
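On RHEL the system-wide trust store is managed by update-ca-trust rather than by editing the bundle files directly, and the docker daemon must be restarted to pick up the new CA; a sketch (the certificate file name is hypothetical):

    sudo cp company-ca.crt /etc/pki/ca-trust/source/anchors/
    sudo update-ca-trust extract
    sudo service docker restart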
|
10,882,967
|
How to find which version of Oracle is installed on a Linux server (In terminal)
|
I am at a terminal on Red Hat 5.5 and I need to find out which version of Oracle is installed. I am pretty new to Linux, but I have searched Google for a while and I can't find what I need. I have to locate which version is installed via the terminal. I found the Oracle files, but I can't seem to find the version.
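A few ways to read the version from a terminal, assuming the oracle OS user's environment (ORACLE_HOME, PATH) is set up:

```bash
# The client banner includes the version:
sqlplus -V

# Or ask the database itself:
echo "SELECT banner FROM v\$version;" | sqlplus -s / as sysdba

# The SID-to-home mapping often encodes the version in the path:
cat /etc/oratab
```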
|
linux, oracle-database, rhel
| 21
| 338,203
| 7
|
https://stackoverflow.com/questions/10882967/how-to-find-which-version-of-oracle-is-installed-on-a-linux-server-in-terminal
|
14,030,797
|
what does the rpmbuild warning "File listed twice" ACTUALLY MEAN?
|
I need to specify common attributes for one of the major directories in the package, and special permissions for some of its subdirectories, e.g. %files %attr(-, myuser, mygroup) /opt/myapp %attr(750, myuser, mygroup) /opt/myapp/bin # no exec permission for others /etc # this is the reason I can't use %defattr(-, myuser, mygroup) I get the "file listed twice" warning on every file under /opt/myapp/bin, naturally. My question is, what does it actually mean? What does rpmbuild do with it? I can't find an answer anywhere. Can I just ignore it? What takes precedence, the first or the last occurrence? I prefer not to list everything under myapp explicitly to solve this. Is there any other way? Thanks
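To see how rpmbuild resolved the duplicate listings, you can inspect the built package directly; this sketch assumes a hypothetical package name and the default output path:

```bash
# List files with the ownership/modes rpm actually recorded:
rpm -qlvp ~/rpmbuild/RPMS/x86_64/myapp-1.0-1.x86_64.rpm | grep '/opt/myapp/bin'
```

Each duplicated path is packaged once; comparing the recorded mode (750 vs. the default) against the two %attr lines shows empirically which occurrence won on your rpm version.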
|
installation, rpm, rhel, rpmbuild, software-packaging
| 20
| 20,752
| 5
|
https://stackoverflow.com/questions/14030797/what-does-the-rpmbuild-warning-file-listed-twice-actually-mean
|
15,237,779
|
What's the root cause of error "Failed dependencies: /bin/sh is needed by xxx" on RHEL?
|
When I install an RPM package on RHEL using rpm, I get an error message like "Failed dependencies: /bin/sh is needed by xxx". I checked that /bin/sh is there: it links to /bin/bash and bash works well. I found a suggestion to add --nodeps to the rpm command to work around this problem, but I really want to know the root cause.
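Since /bin/sh exists, the usual suspect is the rpm database rather than the file; a hedged set of checks:

```bash
rpm -q --whatprovides /bin/sh   # should name the bash package
rpm -Vf /bin/sh                 # verify the package that owns /bin/sh

# If the query errors out or reports nothing, the database may be damaged:
sudo rpm --rebuilddb
```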
|
sh, rpm, rhel
| 20
| 70,632
| 3
|
https://stackoverflow.com/questions/15237779/whats-the-root-cause-of-error-failed-dependencies-bin-sh-is-needed-by-xxx-o
|
12,200,217
|
Can upstart expect/respawn be used on processes that fork more than twice?
|
I am using upstart to start/stop/automatically restart daemons. One of the daemons forks 4 times. The upstart cookbook states that it only supports forking twice. Is there a workaround? How it fails If I try to use expect daemon or expect fork , upstart uses the pid of the second fork. When I try to stop the job, nobody responds to upstart's SIGKILL signal and it hangs until you exhaust the pid space and loop back around. It gets worse if you add respawn: upstart thinks the job died and immediately starts another one. Bug acknowledged by upstream A bug has been entered for upstart. The solutions presented are to stick with the old sysvinit, rewrite your daemon, or wait for a rewrite of upstart. RHEL is close to 2 years behind the latest upstart package, so by the time the rewrite is released and we get updated, the wait will probably be 4 years. The daemon is written by a subcontractor of a subcontractor of a contractor, so it will not be fixed any time soon either.
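One workaround that is often suggested, sketched with a hypothetical daemon name and pid file (it assumes the daemon writes a pid file): run a non-forking wrapper under upstart with no expect stanza at all, and have the wrapper live exactly as long as the daemon:

```bash
#!/bin/bash
# Wrapper supervised by upstart instead of the daemon itself.
/usr/sbin/mydaemon                   # daemon forks four times and returns
sleep 2                              # give it time to write its pid file
pid=$(cat /var/run/mydaemon.pid)     # assumes the daemon records its final pid

# Stay alive as long as the daemon does, so upstart's view stays correct:
while kill -0 "$pid" 2>/dev/null; do
    sleep 5
done
```

A pre-stop stanza would still need to signal the real pid, since stopping the job only kills the wrapper.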
|
linux, rhel, upstart
| 19
| 8,446
| 1
|
https://stackoverflow.com/questions/12200217/can-upstart-expect-respawn-be-used-on-processes-that-fork-more-than-twice
|
38,695,594
|
nc: invalid option -- 'z'
|
On RHEL 7.2, I get the following error when trying to run the nc command nc -z -v -w1 host port : nc: invalid option -- 'z' Ncat: Try `--help' or man(1) ncat for more information, usage options and help. QUITTING. Is there any alternative to it?
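The nc on RHEL 7 is Ncat, which dropped -z; two equivalent port probes (host and port as in the question):

```bash
# Ncat exits 0 on a successful connect; EOF on stdin makes it return immediately:
nc -w1 host port < /dev/null && echo open

# Pure bash, no nc required at all:
timeout 1 bash -c 'cat < /dev/null > /dev/tcp/host/port' && echo open
```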
|
linux, rhel, netcat
| 19
| 19,085
| 3
|
https://stackoverflow.com/questions/38695594/nc-invalid-option-z
|
45,335,316
|
unable to configure the Docker daemon with file /etc/docker/daemon.json: EOF
|
I am new to docker and cannot understand these errors, so please let me know if any more information is needed. $ docker --version Docker version 1.12.6, build 88a4867/1.12.6 $ docker info Cannot connect to the Docker daemon. Is the docker daemon running on this host? $sudo dockerd FATA[0000] unable to configure the Docker daemon with file /etc/docker/daemon.json: EOF $sudo systemctl start docker Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details. $sudo systemctl status docker.service -l ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Wed 2017-07-26 14:30:21 EDT; 8min ago Docs: [URL] Process: 5835 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY (code=exited, status=1/FAILURE) Main PID: 5835 (code=exited, status=1/FAILURE) Jul 26 14:30:21: Starting Docker Application Container Engine... Jul 26 14:30:21 dockerd-current[5835]: time="2017-07-26T14:30:21-04:00" level=fatal msg="unable to configure the Docker daemon with file /etc/docker/daemon.json: EOF\n" Jul 26 14:30:21 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE Jul 26 14:30:21 systemd[1]: Failed to start Docker Application Container Engine. Jul 26 14:30:21 systemd[1]: Unit docker.service entered failed state. Jul 26 14:30:21 systemd[1]: docker.service failed. Please let me know if I need to check anything else.
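That particular "EOF" almost always means /etc/docker/daemon.json exists but is empty or truncated, so the JSON decoder hits end-of-file before any content; a sketch:

```bash
sudo cat /etc/docker/daemon.json              # empty output reproduces this exact error
python -m json.tool /etc/docker/daemon.json   # validates a non-empty file

# Either delete the file or give it a minimal valid body, then retry:
echo '{}' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
```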
|
linux, docker, devops, rhel
| 18
| 65,102
| 5
|
https://stackoverflow.com/questions/45335316/unable-to-configure-the-docker-daemon-with-file-etc-docker-daemon-json-eof
|
46,466,241
|
pip install failing with 407 Proxy Authentication Required
|
I am trying to use the below pip install command, but it's failing with a 407 Proxy Authentication Required error. I have already configured my proxies on my RHEL 7.x server. Command used: pip install --proxy [URL] --upgrade pip Logs: Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/pip/ Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/pip/ Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/pip/ Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/pip/ Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',))': /simple/pip/
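A 407 means the proxy itself wants credentials that the configured proxy URL is not carrying; a sketch with placeholder values:

```bash
# Embed the credentials in the proxy URL (USER/PASSWORD/proxyhost are placeholders):
pip install --proxy http://USER:PASSWORD@proxyhost:8080 --upgrade pip

# Or export them once for every tool in the session:
export http_proxy=http://USER:PASSWORD@proxyhost:8080
export https_proxy=$http_proxy
```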
|
authentication, proxy, pip, rhel, tunnel
| 17
| 109,359
| 5
|
https://stackoverflow.com/questions/46466241/pip-install-failing-with-407-proxy-authentication-required
|
40,393,643
|
Preserve timestamp in sed command
|
I'm using the following sed command to find and replace a string: find dir -name '*.xml' -exec sed -i -e 's/text1/text2/g' {} \; This changes the timestamps of all .xml files inside dir . However, how can I retain the old timestamps? Thanks
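sed -i always rewrites the file, so the mtime has to be captured and restored around the edit; a sketch using touch -r with a temporary reference file:

```bash
find dir -name '*.xml' -exec sh -c '
  for f; do
    ref=$(mktemp)
    touch -r "$f" "$ref"                  # remember the original timestamp
    sed -i -e "s/text1/text2/g" "$f"
    touch -r "$ref" "$f"                  # put it back
    rm -f "$ref"
  done
' sh {} +
```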
|
linux, sed, rhel
| 16
| 4,808
| 3
|
https://stackoverflow.com/questions/40393643/preserve-timestamp-in-sed-command
|
44,789,416
|
Python Build Error: failed to build modules _ssl and _hashlib
|
I am adding python 2.7.13 as an altinstall by installing the source code to my RHEL4 box with wget --no-check-certificate [URL] tar -xvzf Python2.7.13.tar.xz cd Python2.7.13 ./configure --with-ensurepip=install make make test make altinstall so that I do not overwrite the default python that is required for other use. Python 2.7.13 would successfully install but was missing several essential modules for the project I am working on. Originally the _ssl and _hashlib modules would error in this section. Python build finished, but the necessary bits to build these modules were not found: _bsddb _sqlite3 _tkinter bsddb185 dbm dl gdbm imageop sunaudiodev To find the necessary bits, look in setup.py in detect_modules() for the module's name. I installed openssl and ensured that it was in the default location where python was looking for it, so now I have the necessary bits, but then it ends with this message instead Failed to build these modules: _hashlib _ssl Below is the entire output of the python2.7 setup.py build from the unzipped python package. I have been scouring Google and anywhere else I can find, but I have been unsuccessful so far running build running build_ext INFO: Can't locate Tcl/Tk libs and/or headers building '_ssl' extension gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/local/ssl/include -I. -IInclude -I./Include -I/usr/local/include -I/usr/local/include/python2.7 -c /Python/Modules/_ssl.c -o build/temp.linux-x86_64-2.7/Python/Modules/_ssl.o /Python/Modules/_ssl.c:57: warning: ignoring #pragma GCC diagnostic /Python/Modules/_ssl.c: In function ‘_setup_ssl_threads’: /Python/Modules/_ssl.c:4012: warning: comparison is always false due to limited range of data type gcc -pthread -shared build/temp.linux-x86_64-2.7/Python/Modules/_ssl.o -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto -o build/lib.linux-x86_64-2.7/_ssl.so /usr/bin/ld: /usr/local/ssl/lib/libssl.a(s3_meth.o): relocation R_X86_64_32 against a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/ssl/lib/libssl.a: could not read symbols: Bad value collect2: ld returned 1 exit status building '_hashlib' extension gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/local/ssl/include -I. -IInclude -I./Include -I/usr/local/include -I/usr/local/include/python2.7 -c /Python/Modules/_hashopenssl.c -o build/temp.linux-x86_64-2.7/Python/Modules/_hashopenssl.o gcc -pthread -shared build/temp.linux-x86_64-2.7/Python/Modules/_hashopenssl.o -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto -o build/lib.linux-x86_64-2.7/_hashlib.so /usr/bin/ld: /usr/local/ssl/lib/libcrypto.a(o_names.o): relocation R_X86_64_32 against a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/ssl/lib/libcrypto.a: could not read symbols: Bad value collect2: ld returned 1 exit status Python build finished, but the necessary bits to build these modules were not found: _bsddb _sqlite3 _tkinter bsddb185 dbm dl gdbm imageop sunaudiodev To find the necessary bits, look in setup.py in detect_modules() for the module's name. Failed to build these modules: _hashlib _ssl running build_scripts When I attempt to use pip that is installed with my python 2.7.13 I get an SSL error so I have been installing all my packages and other modules from the source like cx_Oracle and CherryPy. 
pip2.7 install cffi pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. Collecting cffi Could not fetch URL [URL] There was a problem confirming the ssl certificate: Can't connect to HTTPS URL because the SSL module is not available. - skipping Could not find a version that satisfies the requirement cffi (from versions: ) No matching distribution found for cffi I also tried to add the ssl module manually with wget --no-check-certificate [URL] tar -xvzf ssl-1.16.tar.gz cd ssl-1.16 python2.7 setup.py build But I get an error that it should not be used with python past 2.6 Traceback (most recent call last): File "setup.py", line 12, in <module> + "or earlier.") ValueError: This extension should not be used with Python 2.6 or later (already built in), and has not been tested with Python 2.3.4 or earlier. EDIT I was looking around for solutions and after combing over the outputs of the setup.py build and found someone with a similar-ish problem that seems to be related to openssl here so I rebuilt my openssl with ./config enable-shared make make test make install and now I get a slightly different error about the ssl module, am I just screwing up my environment more and more? python2.7 setup.py build running build running build_ext INFO: Can't locate Tcl/Tk libs and/or headers building '_ssl' extension gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/local/ssl/include -I. -IInclude -I./Include -I/usr/local/include -I/usr/local/include/python2.7 -c /Python/Modules/_ssl.c -o build/temp.linux-x86_64-2.7/Python/Modules/_ssl.o /Python/Modules/_ssl.c:57: warning: ignoring #pragma GCC diagnostic /Python/Modules/_ssl.c: In function ‘_setup_ssl_threads’: /Python/Modules/_ssl.c:4012: warning: comparison is always false due to limited range of data type gcc -pthread -shared build/temp.linux-x86_64-2.7/Python/Modules/_ssl.o -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto -o build/lib.linux-x86_64-2.7/_ssl.so *** WARNING: renaming "_ssl" since importing it failed: libssl.so.1.0.0: cannot open shared object file: No such file or directory building '_hashlib' extension gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/local/ssl/include -I. -IInclude -I./Include -I/usr/local/include -I/usr/local/include/python2.7 -c /Python/Modules/_hashopenssl.c -o build/temp.linux-x86_64-2.7/Python/Modules/_hashopenssl.o gcc -pthread -shared build/temp.linux-x86_64-2.7/Python/Modules/_hashopenssl.o -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto -o build/lib.linux-x86_64-2.7/_hashlib.so *** WARNING: renaming "_hashlib" since importing it failed: libssl.so.1.0.0: cannot open shared object file: No such file or directory Python build finished, but the necessary bits to build these modules were not found: _bsddb _sqlite3 _tkinter bsddb185 dbm dl gdbm imageop sunaudiodev To find the necessary bits, look in setup.py in detect_modules() for the module's name. Failed to build these modules: _hashlib _ssl running build_scripts The new warning message in the output *** WARNING: renaming "_ssl" since importing it failed: libssl.so.1.0.0: cannot open shared object file: No such file or directory indicates that the file does not exist but I see it in /usr/local/ssl/lib/ as libssl.so.1.0.0 and can find it with a search find / -name libssl.so.1.0.0 /usr/local/ssl/lib/libssl.so.1.0.0 /tmp/openssl-1.0.2l/libssl.so.1.0.0
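The "cannot open shared object file" warning points at the runtime loader, which does not search /usr/local/ssl/lib by default; registering that directory and rebuilding is the usual fix (the ld.so.conf.d file name is arbitrary):

```bash
echo /usr/local/ssl/lib | sudo tee /etc/ld.so.conf.d/openssl-local.conf
sudo ldconfig

# Rebuild so _ssl and _hashlib now import cleanly (source dir as in the question):
cd Python2.7.13 && make && sudo make altinstall
```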
|
python, python-2.7, ssl, pip, rhel
| 16
| 29,170
| 1
|
https://stackoverflow.com/questions/44789416/python-build-error-failed-to-build-modules-ssl-and-hashlib
|
48,328,661
|
Can't start Cassandra after OS patch up
|
When I try to start Cassandra after patching my OS, I get this error: Exception (java.lang.AbstractMethodError) encountered during startup: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote; java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote; at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150) at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135) at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405) at org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104) at org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:143) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:188) at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:476) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:59 at com.datastax.bdp.DseModule.main(DseModule.java:93) ERROR [main] 2018-01-17 13:18:03,330 CassandraDaemon.java:705 - Exception encountered during startup java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote; Does anyone know why, with no other changes, I'm running into this error now?
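This exact AbstractMethodError is commonly reported after an OS patch drags in a newer JDK 8 build (8u162 and later changed the internal RMI exportObject signature that Cassandra's JMXServerUtils relied on); worth checking, with a hypothetical JDK path for the pin:

```bash
java -version                     # did the patch move you to 1.8.0_162 or later?
rpm -qa | grep -i -e jdk -e jre   # list the installed Java builds

# Temporary pin until Cassandra/DSE is upgraded to a build that copes:
export JAVA_HOME=/usr/java/jdk1.8.0_152   # hypothetical pre-patch JDK location
```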
|
cassandra, datastax, rhel
| 15
| 7,014
| 4
|
https://stackoverflow.com/questions/48328661/cant-start-cassandra-after-os-patch-up
|
12,403,031
|
Unable to yum install anything on RHEL
|
I am on a new RHEL system and I seem to be unable to install any package via yum install. yum install nmap The current repos in ls /etc/yum.repos.d/ google-chrome.repo redhat.repo rhel-source.repo What could be going wrong? OUTPUT OF YUM INSTALL: $ sudo yum install nmap [sudo] password for user: Loaded plugins: product-id, refresh-packagekit, security, subscription-manager Updating certificate-based repositories. Setting up Install Process No package nmap available. Error: Nothing to do
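With only redhat.repo present and no subscription attached, the RHEL repos serve no packages; a sketch of the registration steps (the username is a placeholder):

```bash
sudo subscription-manager register --username YOUR_RHN_USER
sudo subscription-manager attach --auto   # older tooling spells this 'subscribe --auto'

yum repolist   # the RHEL repos should now report a non-zero package count
```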
|
linux, yum, rhel
| 14
| 72,075
| 6
|
https://stackoverflow.com/questions/12403031/unable-to-yum-install-anything-on-rhel
|
6,370,014
|
Specifying alternatives in RPM dependencies
|
I've got a Java app that I am packaging as an RPM. Ideally I would like to specify Java as a dependency. I need to install in both Fedora and RHEL environments. The problem is that the RHEL Java package is called 'java', while Fedora doesn't provide Oracle/Sun's distribution, leaving one to manually download it from Oracle's website. Oracle's distribution of Java is called either 'jre' or 'jdk' depending on which package you select. Normally in an RPM spec file I would write: Depends: java >= 1.6 But since RHEL provides 'java', and Fedora via Sun/Oracle provides 'jre' or 'jdk' (and I can't use OpenJDK), I'm in a bit of a bind. Documentation so far hasn't shown a way to do 'java >= 1.6 || jre >= 1.6 || jdk >= 1.6' etc. Because the Java packages aren't under my control, I can't just change one or the other to specify a 'Provides: Java'. At present I see only two options: Omit Java as a dependency Create one RPM for Fedora, one for RHEL I'm not keen on either option. Are there any other ways to achieve a Java dependency where the providers all have different names? Edit: A third option - create my own Java virtual package for Fedora that has a dependency on Sun's JDK RPM.
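One vendor-neutral trick, hedged because it only works if the vendor package actually packages the path (Oracle's RPMs historically create /usr/bin/java via alternatives in a post-install script): RPM can depend on a file path instead of a package name, e.g. Requires: /usr/bin/java. You can check what would satisfy it:

```bash
# Which installed package claims the path a file-based Requires would use?
rpm -q --whatprovides /usr/bin/java

# What dependency names does each vendor's package export?
rpm -q --provides jdk 2>/dev/null | head
```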
|
java, fedora, rpm, rhel
| 14
| 5,569
| 1
|
https://stackoverflow.com/questions/6370014/specifying-alternatives-in-rpm-dependencies
|
23,965,958
|
How to change rpmbuild default directory from /root/rpmbuild directory to other
|
I have to build an RPM package for some drivers. I need to build the RPM from a .tar.gz archive. The tarball also contains the .spec file. I have set up my rpmbuild environment as described here: [URL] My directory structure is thus: /home/rpmbuild /home/rpmbuild/BUILD /home/rpmbuild/RPMS /home/rpmbuild/SOURCES /home/rpmbuild/SPECS /home/rpmbuild/SRPMS The .tar.gz file contains the spec file and is placed in /home/rpmbuild/SOURCES If I then navigate to that directory and run the following, the RPM package is built correctly, but is placed in /root/rpmbuild/RPMS instead of /home/rpmbuild/RPMS (where I expected it to be). sudo rpmbuild -ta driver.tar.gz I assume this is because I ran rpmbuild with sudo. Am I thinking of this correctly? Is there a way to direct it to build in /home/rpmbuild instead? I know it is bad practice to use rpmbuild as root, but if I don't run it as root I run into many errors (not having permissions to access directories owned by root - like /tmp/orbit-root). It seems like it would be much more difficult to change the permissions of each of these directories than to change them back. Is this the correct way to go about this? I greatly appreciate the help.
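rpmbuild reads _topdir from the invoking user's ~/.rpmmacros, and under sudo that is root's, hence /root/rpmbuild; overriding it per invocation sidesteps the problem:

```bash
sudo rpmbuild --define "_topdir /home/rpmbuild" -ta driver.tar.gz

# Or persist it (note: under sudo this would have to go in /root/.rpmmacros):
echo '%_topdir /home/rpmbuild' >> ~/.rpmmacros
```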
|
linux, centos, rpm, rhel, rpmbuild
| 13
| 22,012
| 4
|
https://stackoverflow.com/questions/23965958/how-to-change-rpmbuild-default-directory-from-root-rpmbuild-directory-to-other
|
47,633,870
|
rpm: /lib64/liblzma.so.5: version `XZ_5.1.2alpha' not found (required by /lib/librpmio.so.3)
|
I am stuck with this error and not able to install any RPMs. The OS is RHEL 6.9, 64-bit. Please help. Thanks in advance.
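A hedged diagnosis: this usually means a self-compiled xz replaced /lib64/liblzma.so.5 with a build that lacks Red Hat's XZ_5.1.2alpha symbol versions, and restoring the distro library is what fixes rpm ("goodhost" is a placeholder for any matching RHEL 6.9 machine or the install media):

```bash
# Confirm the installed library is missing the versioned symbols rpm wants:
objdump -p /lib64/liblzma.so.5 | grep -A3 'Version definitions'

# Restore the distro copy of liblzma, then refresh the loader cache:
# scp goodhost:/lib64/liblzma.so.5.0.99 /lib64/ && ldconfig
```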
|
rpm, rhel
| 13
| 25,999
| 6
|
https://stackoverflow.com/questions/47633870/rpm-lib64-liblzma-so-5-version-xz-5-1-2alpha-not-found-required-by-lib-li
|
23,229,682
|
Special escaping for crontab
|
I have the following user crontab entry on a RHEL 6 machine (sensitive values have been replaced): MAILTO=cron-errors@organisation.com 0 5 * * * ~/bin/app_state.sh host-arg 9200 > ~/state/app-state-$(hostname)-$(date +%F).json Which produces this entry in /var/log/cron : Apr 23 05:00:08 host CROND[13901]: (dbjobs) CMD (~/bin/app_state.sh host-arg 9200 > ~/state/app-state-$(hostname)-$(date +) But no file. After changing the statement to: 43 5 * * * ~/bin/app_state.sh host-arg 9200 > ~/state/app-state-static.json I get a better log entry and the file is created at ~/state/app-state-static.json I'm sure there's some issue with not escaping the +%F but can't for the life of me find details of how I should be escaping it. I could wrap the filename generation inside another shell script, but this is easier to read for people who come looking for the file.
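This is the classic crontab % gotcha: an unescaped % ends the command and cron feeds the remainder to it as stdin, which is exactly why the log shows the line cut off at "date +". Escaping the % with a backslash fixes it:

```bash
# Same entry with the % escaped so cron passes it through to date:
0 5 * * * ~/bin/app_state.sh host-arg 9200 > ~/state/app-state-$(hostname)-$(date +\%F).json
```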
|
bash, cron, rhel
| 13
| 13,272
| 1
|
https://stackoverflow.com/questions/23229682/special-escaping-for-crontab
|
62,250,381
|
Why dockerize a service or application when you could install it?
|
We have around 12 services and other applications such as presto. We are thinking about building Docker containers for each service and application. Is it right to dockerize all of them? When would a Docker container not be the ideal solution?
|
linux, docker, dockerfile, containers, rhel
| 13
| 2,745
| 4
|
https://stackoverflow.com/questions/62250381/why-dockerize-a-service-or-application-when-you-could-install-it
|
59,661,379
|
Discard old build in multi-branch pipeline job, doesn't really delete builds from server
|
I have a multi-branch pipeline job in Jenkins: [URL] and I checked the option Discard old builds as below: I would have expected each new run after this change to delete old builds from the server for each repository inside that pipeline. However, I see all builds are still there, which is causing a filesystem space issue. Jenkins link: [URL] From the server: jenkins@XXXXX:jenkins/jenkins-production/jobs/OC/jobs/productconfigurator-ms/branches/master/builds> I see builds from 541 to 1039 Jenkins ver. 2.176.1
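The discard setting is only evaluated when a branch finishes a new build, so history lingers on branches that have not built since the change; if disk pressure is urgent, old build records can be pruned on the controller (path taken from the question; back up first, and the number to keep is an assumption):

```bash
cd jenkins/jenkins-production/jobs/OC/jobs/productconfigurator-ms/branches/master/builds
ls -1 | grep -E '^[0-9]+$' | sort -n | head -n -30 | xargs -r rm -rf   # keep newest 30
```

Afterwards, reload the job state via Manage Jenkins -> Reload Configuration from Disk so the UI matches the filesystem.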
|
jenkins-pipeline, jenkins-plugins, rhel
| 12
| 15,226
| 2
|
https://stackoverflow.com/questions/59661379/discard-old-build-in-multi-branch-pipeline-job-doesnt-really-delete-builds-fro
|
25,809,430
|
How to fix locale issue in Red Hat distro?
|
I'm having a strange problem today on my RHEL system. My Python script is returning: >>> locale.setlocale(locale.LC_ALL, '') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python2.6/locale.py", line 513, in setlocale return _setlocale(category, locale) locale.Error: unsupported locale setting When I run... $ locale The output is... locale: Cannot set LC_ALL to default locale: No such file or directory LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" ... I have tried many suggestions but none of them has solved my issue yet. For example: Reinstall glibc-common. Export LC_ALL as an environment variable in ~/.bashrc. Change the file /etc/sysconfig/i18n. locale-gen does not exist in RHEL. Does anyone have a good suggestion to solve my issue? Remember that I'm using RHEL and not Ubuntu (there are many tutorials about locale issues on Ubuntu).
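RHEL's stand-in for locale-gen is localedef, which compiles the locale directly from the definitions shipped by glibc-common:

```bash
sudo localedef -v -c -i en_US -f UTF-8 en_US.UTF-8

# Confirm it now exists, then retry locale.setlocale() in Python:
locale -a | grep -i 'en_us.utf'
```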
|
python, locale, rhel
| 12
| 15,924
| 3
|
https://stackoverflow.com/questions/25809430/how-to-fix-locale-issue-in-red-hat-distro
|
31,610,801
|
How can I define the run order of vagrant middleware plugins?
|
I'm creating a Red Hat Enterprise Linux 7 VM in VirtualBox with vagrant. If I have a base box that both doesn't have the VirtualBox guest additions, and isn't registered, then I'd manually need to do the following: Register the box with subscription-manager Install guest additions The reason that I'd need to perform registration first, is that to install guest additions, I'd need to install some extra packages. Now, there are 3rd-party vagrant plugins for both of these tasks: vagrant-registration and vagrant-vbguest . The issue that I'm having is that the vagrant-vbguest plugin will always try to run first, and will fail to download the packages that it needs, because the vagrant-registration plugin hasn't yet had a chance to register the system. Is there a way to force one of them to be run before the other? Or any other alternative solution that I haven't thought of yet (I'm not a vagrant wizard (or is that just called a vagrant?) yet)?
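One workaround, assuming vagrant-vbguest's auto_update toggle (config.vbguest.auto_update = false in the Vagrantfile): let registration happen during the normal boot, then drive the guest additions install explicitly once the box is subscribed:

```bash
vagrant up                     # vagrant-registration registers the box on boot
vagrant vbguest --do install   # now the packages vbguest needs are downloadable
```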
|
vagrant, virtualbox, rhel, vagrant-plugin
| 12
| 1,877
| 2
|
https://stackoverflow.com/questions/31610801/how-can-i-define-the-run-order-of-vagrant-middleware-plugins
|
34,668,471
|
Deploying docker swarm without using docker machine
|
Currently I have a bunch of RHEL 7 VMs running on Rackspace and want to deploy Docker Swarm for testing purposes. The Docker docs only describe the method of deploying Docker Swarm by using docker-machine. Question: Since VirtualBox cannot be used inside VMs, are there any other ways to deploy Docker Swarm directly on my VMs without using docker-machine?
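docker-machine is only a convenience; the classic (pre swarm-mode) Swarm ships as a container image and can be pointed at existing hosts, provided each daemon listens on TCP (values below are placeholders):

```bash
docker run --rm swarm create                                  # prints a cluster token
docker run -d swarm join --addr=NODE_IP:2375 token://TOKEN    # run on every node
docker run -d -p 4000:2375 swarm manage token://TOKEN         # run on a manager host

docker -H tcp://MANAGER_IP:4000 info                          # talk to the cluster
```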
|
docker, virtual-machine, rhel, docker-swarm
| 12
| 3,687
| 2
|
https://stackoverflow.com/questions/34668471/deploying-docker-swarm-without-using-docker-machine
|
23,665,181
|
GCC toolchain for LLVM
|
I am running on an RHEL 6.x box, which of course has GCC 4.4 installed. I wish to have LLVM running on this machine. In order to do so, I must compile it from source. In order to do that, I need a more modern version of GCC. So, following the instructions, I have built GCC 4.8.2: [snip] % $PWD/../gcc-4.8.2/configure --prefix=$HOME/toolchains --enable-languages=c,c++ % make -j$(nproc) % make install I'm logged in as root, so $HOME/toolchains resolves to /root/toolchains . After satisfying prerequisites for LLVM, I'm ready to configure and build LLVM. root@dev06 /root/llvm-build # ~/llvm/configure --enable-optimized --prefix=$HOME/toolchains/ --with-gcc-toolchain=/root/toolchains/ checking for clang... no [snip] checking target architecture... x86_64 checking whether GCC is new enough... no configure: error: The selected GCC C++ compiler is not new enough to build LLVM. Please upgrade to GCC 4.7. You may pass --disable-compiler-version-checks to configure to bypass these sanity checks. root@dev06 /root/llvm-build # configure thinks I'm using GCC 4.4 still, even though I passed --with-gcc-toolchain=/root/toolchains/ to configure 1 . Let's make sure that I installed 4.8.2 properly. root@dev06 /root/llvm-build # $HOME/toolchains/bin/g++ --version g++ (GCC) 4.8.2 Copyright (C) 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. How do I convince configure to use the GCC I have at /root/toolchains ? 1 : I have tried a number of variations of the path I specify in --with-gcc-toolchain , including /root/toolchain/bin , /root/toolchain/bin/g++ , etc.
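--with-gcc-toolchain tells the *built* clang where to find GCC headers and libraries; it does not choose the compiler that builds LLVM. configure takes that from CC/CXX (or PATH):

```bash
CC=$HOME/toolchains/bin/gcc CXX=$HOME/toolchains/bin/g++ \
  ~/llvm/configure --enable-optimized --prefix=$HOME/toolchains

# The new libstdc++ must also be findable at build/run time:
export LD_LIBRARY_PATH=$HOME/toolchains/lib64:$LD_LIBRARY_PATH
```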
|
c++, llvm, configure, rhel
| 12
| 10,055
| 2
|
https://stackoverflow.com/questions/23665181/gcc-toolchain-for-llvm
|
30,209,642
|
How can I enable udev sync successfully in Docker?
|
I have downloaded and installed the statically linked docker 1.6.1 from this site, and run it on RHEL 7.1: [root@localhost bin]# ./docker -d WARN[0000] Udev sync is not supported. This will lead to unexpected behavior, data loss and errors INFO[0000] +job init_networkdriver() INFO[0000] +job serveapi(unix:///var/run/docker.sock) INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) INFO[0000] -job init_networkdriver() = OK (0) INFO[0000] Loading containers: start. INFO[0000] Loading containers: done. INFO[0000] docker daemon: 1.6.1 97cd073; execdriver: native-0.2; graphdriver: devicemapper INFO[0000] +job acceptconnections() INFO[0000] -job acceptconnections() = OK (0) INFO[0000] Daemon has completed initialization I can see there is a warning: " Udev sync is not supported. This will lead to unexpected behavior, data loss and errors ", and after checking the docker source code, I find the warning log is from deviceset.go : func (devices *DeviceSet) initDevmapper(doInit bool) error { ...... // [URL] if supported := devicemapper.UdevSetSyncSupport(true); !supported { log.Warnf("Udev sync is not supported. This will lead to unexpected behavior, data loss and errors") } log.Debugf("devicemapper: udev sync support: %v", devicemapper.UdevSyncSupported()) ...... } The devicemapper.UdevSetSyncSupport is like this: // UdevSyncSupported returns whether device-mapper is able to sync with udev // // This is essential otherwise race conditions can arise where both udev and // device-mapper attempt to create and destroy devices. func UdevSyncSupported() bool { return DmUdevGetSyncSupport() != 0 } // UdevSetSyncSupport allows setting whether the udev sync should be enabled. // The return bool indicates the state of whether the sync is enabled. func UdevSetSyncSupport(enable bool) bool { if enable { DmUdevSetSyncSupport(1) } else { DmUdevSetSyncSupport(0) } return UdevSyncSupported() } I can see the reason is that enabling udev sync failed. How can I enable udev sync successfully? Update: After checking the disassembly code of dm_udev_set_sync_support : (gdb) disassemble dm_udev_set_sync_support Dump of assembler code for function dm_udev_set_sync_support: => 0x0000000000a3e4e0 <+0>: repz retq End of assembler dump. It is an empty function that does nothing, let alone setting sync support. Does this mean this statically built docker binary is unusable?
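The empty dm_udev_set_sync_support is the tell: a statically linked binary carries a libdevmapper built without udev support, so this build can never enable the sync. Two hedged ways out:

```bash
# Use the distro's dynamically linked build, whose libdevmapper has udev sync:
sudo yum install docker

# Or keep the static binary but avoid the devicemapper driver entirely:
./docker -d --storage-driver=overlay
```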
|
linux, go, docker, rhel, rhel7
| 12
| 9,093
| 2
|
https://stackoverflow.com/questions/30209642/how-can-enable-udev-sync-successfully-in-docker
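A statically linked docker binary is exactly the case where udev sync cannot work: libdevmapper's udev-sync support is compiled out of static builds, which is why dm_udev_set_sync_support disassembles to a bare ret. A minimal shell sketch of how to confirm this and move to a dynamically linked build (the yum package name is an assumption about the RHEL 7 repositories):

# "statically linked" from file(1), or "not a dynamic executable" from ldd,
# confirms the binary cannot pick up libdevmapper's udev sync support
file ./docker
ldd ./docker

# install the distribution's dynamically linked docker package instead
sudo yum install -y docker
docker info 2>/dev/null | grep -i "udev sync"   # should now report "true"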
|
27,950,512
|
Can't install libffi-devel
|
I am trying to install libffi-devel on RHEL, but when I try I get this message: Transaction Check Error: package libffi-3.0.5-1.el5.6.z.x86_64 (which is newer than libffi-3.0.5-1.el5.i386) is already installed I am running this command: sudo yum install -y libffi-devel And here is the full output of the command: Loaded plugins: downloadonly, rhnplugin, security This system is receiving updates from RHN Classic or RHN Satellite. Excluding Packages in global exclude list Finished Excluding Packages from Rackspace - RHEL (v. 5 for 64-bit x86_64) - Common Finished Excluding Packages from Rackspace - RHEL (v. 5 for 64-bit x86_64) - MySQL 5.1 Finished Excluding Packages from Rackspace - RHEL (v. 5 for 64-bit x86_64) - PHP 5.2 Finished Excluding Packages from Red Hat Enterprise Linux (v. 5 for 64-bit x86_64) Finished Excluding Packages from Red Hat Network Tools for RHEL Server (v.5 64-bit x86_64) Finished Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package libffi-devel.x86_64 0:3.0.5-1.el5 set to be updated --> Processing Dependency: libffi = 3.0.5-1.el5 for package: libffi-devel --> Running transaction check ---> Package libffi.i386 0:3.0.5-1.el5 set to be updated --> Finished Dependency Resolution Dependencies Resolved ================================================================================================================================================================================================================================================================================ Package Arch Version Repository Size ================================================================================================================================================================================================================================================================================ Installing: libffi-devel x86_64 3.0.5-1.el5 epel 16 k Installing for dependencies: libffi i386 3.0.5-1.el5 epel 21 k Transaction Summary ================================================================================================================================================================================================================================================================================ Install 2 Package(s) Upgrade 0 Package(s) Total size: 37 k Downloading Packages: Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Check Error: package libffi-3.0.5-1.el5.6.z.x86_64 (which is newer than libffi-3.0.5-1.el5.i386) is already installed Error Summary -------------
|
Can't install libffi-devel I am trying to install libffi-devel on RHEL, but when I try I get this message: Transaction Check Error: package libffi-3.0.5-1.el5.6.z.x86_64 (which is newer than libffi-3.0.5-1.el5.i386) is already installed I am running this command: sudo yum install -y libffi-devel And here is the full output of the command: Loaded plugins: downloadonly, rhnplugin, security This system is receiving updates from RHN Classic or RHN Satellite. Excluding Packages in global exclude list Finished Excluding Packages from Rackspace - RHEL (v. 5 for 64-bit x86_64) - Common Finished Excluding Packages from Rackspace - RHEL (v. 5 for 64-bit x86_64) - MySQL 5.1 Finished Excluding Packages from Rackspace - RHEL (v. 5 for 64-bit x86_64) - PHP 5.2 Finished Excluding Packages from Red Hat Enterprise Linux (v. 5 for 64-bit x86_64) Finished Excluding Packages from Red Hat Network Tools for RHEL Server (v.5 64-bit x86_64) Finished Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package libffi-devel.x86_64 0:3.0.5-1.el5 set to be updated --> Processing Dependency: libffi = 3.0.5-1.el5 for package: libffi-devel --> Running transaction check ---> Package libffi.i386 0:3.0.5-1.el5 set to be updated --> Finished Dependency Resolution Dependencies Resolved ================================================================================================================================================================================================================================================================================ Package Arch Version Repository Size ================================================================================================================================================================================================================================================================================ Installing: libffi-devel x86_64 3.0.5-1.el5 epel 16 k Installing for dependencies: libffi i386 3.0.5-1.el5 epel 21 k Transaction Summary ================================================================================================================================================================================================================================================================================ Install 2 Package(s) Upgrade 0 Package(s) Total size: 37 k Downloading Packages: Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Check Error: package libffi-3.0.5-1.el5.6.z.x86_64 (which is newer than libffi-3.0.5-1.el5.i386) is already installed Error Summary -------------
|
yum, rhel
| 11
| 43,932
| 1
|
https://stackoverflow.com/questions/27950512/cant-install-libffi-devel
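This is yum's standard multilib conflict: resolving libffi-devel pulls in the i386 libffi at the base version, while a newer x86_64 z-stream build is already installed. A hedged sketch of the usual ways around it:

# request the devel package for the installed architecture only,
# so the older 32-bit libffi is never pulled in
sudo yum install -y libffi-devel.x86_64

# or keep the generic name but exclude the 32-bit package explicitly
sudo yum install -y libffi-devel --exclude='*.i386'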
|
13,184,384
|
Mono 3.0.0 build on CentOS 6
|
I recently found myself needing to build Mono 3.0 for CentOS 6, with a request from my infrastructure guy to otherwise keep the system as close to CentOS as possible (i.e. no 3rd-party packages if possible). Because there are currently no Mono 3.0 RPMs that I could find, I went through the exercise of building it from scratch, on a clean Minimal install of CentOS 6.3. It is possible to build Mono 3.0 with no external packages on CentOS 6.3.
|
Mono 3.0.0 build on CentOS 6 I recently found myself needing to build Mono 3.0 for CentOS 6, with a request from my infrastructure guy to otherwise keep the system as close to CentOS as possible (i.e. no 3rd-party packages if possible). Because there are currently no Mono 3.0 RPMs that I could find, I went through the exercise of building it from scratch, on a clean Minimal install of CentOS 6.3. It is possible to build Mono 3.0 with no external packages on CentOS 6.3.
|
build, mono, centos, rhel, centos6
| 11
| 9,352
| 2
|
https://stackoverflow.com/questions/13184384/mono-3-0-0-build-on-centos-6
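For reference, the build itself is the standard autotools flow once the stock CentOS toolchain packages are present; a rough sketch (the download URL, version, and prefix are illustrative, not taken from the post):

# toolchain from the stock CentOS repos only
sudo yum install -y gcc gcc-c++ make bison gettext autoconf automake libtool

# fetch, build, and install into an isolated prefix
curl -LO https://download.mono-project.com/sources/mono/mono-3.0.0.tar.bz2
tar xjf mono-3.0.0.tar.bz2 && cd mono-3.0.0
./configure --prefix=/opt/mono-3.0
make && sudo make install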
|
45,553,065
|
Default python /usr/bin/python instead of /usr/local/bin/python
|
I have both python2.6 and python2.7 installed on my CentOS box. python2.6 is installed at /usr/bin/python and I have installed python2.7 from source at /usr/local/bin/python . After the installation my default python changed to python2.7 instead of python2.6 at /usr/bin , but I want to use python 2.6 at /usr/bin/python . I have already tried the following things; nothing worked. I created a symlink and made it point to python 2.6 at /usr/bin . I modified my default python path in .bash_profile , but that still doesn't work. Please let me know how I can have python 2.7 installed alongside 2.6, with python 2.6 as my default version. I have the same thing working on my Arch Linux box, but it doesn't work on my CentOS box. Attaching my .bash_profile: # .bash_profile export _BASH_PROFILE=1 # Get the aliases and functions if [ -z "$_BASHRC" ]; then . ~/.bashrc fi unset _BASH_PROFILE # User specific environment and startup programs PATH=$PATH:$HOME/bin BASH_ENV=$HOME/.bashrc USERNAME="" export USERNAME BASH_ENV PATH export user=$(/usr/bin/whoami) export WK_PORT=8086 export WK_PATH=ADC # For DEV accounts change PYDOC_PORT value to 7400 + webkit number. For # example WK23 would be port number 7423 export PYDOC_PORT=7464 alias serve="python -m SimpleHTTPServer" unset _BASH_PROFILE # User specific environment and startup programs PATH=$PATH:$HOME/bin BASH_ENV=$HOME/.bashrc USERNAME="" export USERNAME BASH_ENV PATH export user=$(/usr/bin/whoami) export WK_PORT=8086 export WK_PATH=ADC # For DEV accounts change PYDOC_PORT value to 7400 + webkit number. For # example WK23 would be port number 7423 export PYDOC_PORT=7464 alias serve="python -m SimpleHTTPServer" PYTHONPATH="$PYTHONPATH:/usr/bin/python"
|
Default python /usr/bin/python instead of /usr/local/bin/python I have both python2.6 and python2.7 installed on my CentOS box. python2.6 is installed at /usr/bin/python and I have installed python2.7 from source at /usr/local/bin/python . After the installation my default python changed to python2.7 instead of python2.6 at /usr/bin , but I want to use python 2.6 at /usr/bin/python . I have already tried the following things; nothing worked. I created a symlink and made it point to python 2.6 at /usr/bin . I modified my default python path in .bash_profile , but that still doesn't work. Please let me know how I can have python 2.7 installed alongside 2.6, with python 2.6 as my default version. I have the same thing working on my Arch Linux box, but it doesn't work on my CentOS box. Attaching my .bash_profile: # .bash_profile export _BASH_PROFILE=1 # Get the aliases and functions if [ -z "$_BASHRC" ]; then . ~/.bashrc fi unset _BASH_PROFILE # User specific environment and startup programs PATH=$PATH:$HOME/bin BASH_ENV=$HOME/.bashrc USERNAME="" export USERNAME BASH_ENV PATH export user=$(/usr/bin/whoami) export WK_PORT=8086 export WK_PATH=ADC # For DEV accounts change PYDOC_PORT value to 7400 + webkit number. For # example WK23 would be port number 7423 export PYDOC_PORT=7464 alias serve="python -m SimpleHTTPServer" unset _BASH_PROFILE # User specific environment and startup programs PATH=$PATH:$HOME/bin BASH_ENV=$HOME/.bashrc USERNAME="" export USERNAME BASH_ENV PATH export user=$(/usr/bin/whoami) export WK_PORT=8086 export WK_PATH=ADC # For DEV accounts change PYDOC_PORT value to 7400 + webkit number. For # example WK23 would be port number 7423 export PYDOC_PORT=7464 alias serve="python -m SimpleHTTPServer" PYTHONPATH="$PYTHONPATH:/usr/bin/python"
|
python, linux, centos, rhel
| 11
| 49,953
| 3
|
https://stackoverflow.com/questions/45553065/default-python-usr-bin-python-instead-of-usr-local-bin-python
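The usual explanation is PATH order rather than the symlink: /usr/local/bin precedes /usr/bin in the default PATH, so the shell keeps finding the source-built 2.7 no matter where /usr/bin/python points. A sketch of the typical diagnosis and fix:

# list every python the shell can see, in PATH order
type -a python

# repoint the system symlink that yum and other system tools rely on
sudo ln -sf /usr/bin/python2.6 /usr/bin/python

# rename the source build so only an explicit python2.7 finds it
sudo mv /usr/local/bin/python /usr/local/bin/python2.7
hash -r && python -V    # should now report 2.6.x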
|
40,009,474
|
OpenSSL hangs at CONNECTED(00000003)
|
I am setting up the https connection of my AEM application on a RHEL server hosted in AWS. I followed the documentation provided by Adobe. For the 1st author instance it worked successfully, but on my 2nd and 3rd servers it didn't. I tried a couple of debugging steps to make sure that the connectivity is working and that no firewalls are blocking. When I ran openssl in debug mode I got the following: it just hangs and doesn't proceed to the next step like it does on the 1st server: 2nd Server (with Issue): openssl s_client -connect localhost:5433 -debug -msg CONNECTED(00000003) write to 0xfb16d0 [0xff5270] (249 bytes => 249 (0xF9)) 0000 - 16 03 01 00 f4 01 00 00-f0 03 03 57 fe bd 40 06 ...........W..@. 0010 - 00 bf 15 c5 e0 83 79 18-b4 a3 f8 f0 2f b6 a8 70 ......y...../..p 0020 - b7 4f fc 48 6f e6 c6 0a-ef 08 de 00 00 84 c0 30 .O.Ho..........0 0030 - c0 2c c0 28 c0 24 c0 14-c0 0a 00 a3 00 9f 00 6b .,.(.$.........k 0040 - 00 6a 00 39 00 38 00 88-00 87 c0 32 c0 2e c0 2a .j.9.8.....2...* 0050 - c0 26 c0 0f c0 05 00 9d-00 3d 00 35 00 84 c0 2f .&.......=.5.../ 0060 - c0 2b c0 27 c0 23 c0 13-c0 09 00 a2 00 9e 00 67 .+.'.#.........g 0070 - 00 40 00 33 00 32 c0 12-c0 08 00 9a 00 99 00 45 .@.3.2.........E 0080 - 00 44 00 16 00 13 c0 31-c0 2d c0 29 c0 25 c0 0e .D.....1.-.).%.. 0090 - c0 04 c0 0d c0 03 00 9c-00 3c 00 2f 00 96 00 41 .........<./...A 00a0 - 00 0a 00 07 c0 11 c0 07-c0 0c c0 02 00 05 00 04 ................ 00b0 - 00 ff 01 00 00 43 00 0b-00 04 03 00 01 02 00 0a .....C.......... 00c0 - 00 08 00 06 00 19 00 18-00 17 00 23 00 00 00 0d ...........#.... 00d0 - 00 22 00 20 06 01 06 02-06 03 05 01 05 02 05 03 .". ............ 00e0 - 04 01 04 02 04 03 03 01-03 02 03 03 02 01 02 02 ................ 00f0 - 02 03 01 01 00 0f 00 01-01 ......... >>> TLS 1.2 Handshake [length 00f4], ClientHello 01 00 00 f0 03 03 57 fe bd 40 06 00 bf 15 c5 e0 83 79 18 b4 a3 f8 f0 2f b6 a8 70 b7 4f fc 48 6f e6 c6 0a ef 08 de 00 00 84 c0 30 c0 2c c0 28 c0 24 c0 14 c0 0a 00 a3 00 9f 00 6b 00 6a 00 39 00 38 00 88 00 87 c0 32 c0 2e c0 2a c0 26 c0 0f c0 05 00 9d 00 3d 00 35 00 84 c0 2f c0 2b c0 27 c0 23 c0 13 c0 09 00 a2 00 9e 00 67 00 40 00 33 00 32 c0 12 c0 08 00 9a 00 99 00 45 00 44 00 16 00 13 c0 31 c0 2d c0 29 c0 25 c0 0e c0 04 c0 0d c0 03 00 9c 00 3c 00 2f 00 96 00 41 00 0a 00 07 c0 11 c0 07 c0 0c c0 02 00 05 00 04 00 ff 01 00 00 43 00 0b 00 04 03 00 01 02 00 0a 00 08 00 06 00 19 00 18 00 17 00 23 00 00 00 0d 00 22 00 20 06 01 06 02 06 03 05 01 05 02 05 03 04 01 04 02 04 03 03 01 03 02 03 03 02 01 02 02 02 03 01 01 00 0f 00 01 01 Server 1 (without issue): >>> TLS 1.2 Handshake [length 00f4], ClientHello 01 00 00 f0 03 03 57 fe cb 7b 28 ba ea e1 89 71 ad fb 1d 8b 97 e9 83 2b dc e4 53 c5 bf 75 8f 58 74 42 63 29 6b 20 00 00 84 c0 30 c0 2c c0 28 c0 24 c0 14 c0 0a 00 a3 00 9f 00 6b 00 6a 00 39 00 38 00 88 00 87 c0 32 c0 2e c0 2a c0 26 c0 0f c0 05 00 9d 00 3d 00 35 00 84 c0 2f c0 2b c0 27 c0 23 c0 13 c0 09 00 a2 00 9e 00 67 00 40 00 33 00 32 c0 12 c0 08 00 9a 00 99 00 45 00 44 00 16 00 13 c0 31 c0 2d c0 29 c0 25 c0 0e c0 04 c0 0d c0 03 00 9c 00 3c 00 2f 00 96 00 41 00 0a 00 07 c0 11 c0 07 c0 0c c0 02 00 05 00 04 00 ff 01 00 00 43 00 0b 00 04 03 00 01 02 00 0a 00 08 00 06 00 19 00 18 00 17 00 23 00 00 00 0d 00 22 00 20 06 01 06 02 06 03 05 01 05 02 05 03 04 01 04 02 04 03 03 01 03 02 03 03 02 01 02 02 02 03 01 01 00 0f 00 01 01 read from 0x17796d0 [0x17c27d0] (7 bytes => 7 (0x7)) 0000 - 16 03 03 06 35 02 ....5. 0007 - <SPACES/NULS> read from 0x17796d0 [0x17c27da] (1587 bytes => 1587 (0x633)) 0000 - 00 4d 03 03 57 fe cb 7b-51 64 70 bc 08 c8 91 24 .M..W..{Qdp....$ 0010 - c4 da 8c cf 94 94 7d c5-0f 45 ee 2c 86 99 1d ff ......}..E.,.... 0020 - b6 a9 3e 66 20 57 fe cb-7b e7 b2 a4 56 15 3b 46 ..>f W..{...V.;F 0030 - 98 92 b4 95 56 7f 95 4e-4e f3 cd ce d8 cd 98 29 ....V..NN......) 0040 - c7 fe 1e 6f 8b 00 9f 00-00 05 ff 01 00 01 00 0b ...o............ 0050 - 00 03 cd 00 03 ca 00 03-c7 30 82 03 c3 30 82 02 .........0...0.. 0060 - ab a0 03 02 01 02 02 04-6e 0d a4 0f 30 0d 06 09 ........n...0... 0070 - 2a 86 48 86 f7 0d 01 01-0b 05 00 30 81 91 31 0b *.H........0..1.
|
OpenSSL hangs at CONNECTED(00000003) I am setting up the https connection of my AEM application on a RHEL server hosted in AWS. I followed the documentation provided by Adobe. For the 1st author instance it worked successfully, but on my 2nd and 3rd servers it didn't. I tried a couple of debugging steps to make sure that the connectivity is working and that no firewalls are blocking. When I ran openssl in debug mode I got the following: it just hangs and doesn't proceed to the next step like it does on the 1st server: 2nd Server (with Issue): openssl s_client -connect localhost:5433 -debug -msg CONNECTED(00000003) write to 0xfb16d0 [0xff5270] (249 bytes => 249 (0xF9)) 0000 - 16 03 01 00 f4 01 00 00-f0 03 03 57 fe bd 40 06 ...........W..@. 0010 - 00 bf 15 c5 e0 83 79 18-b4 a3 f8 f0 2f b6 a8 70 ......y...../..p 0020 - b7 4f fc 48 6f e6 c6 0a-ef 08 de 00 00 84 c0 30 .O.Ho..........0 0030 - c0 2c c0 28 c0 24 c0 14-c0 0a 00 a3 00 9f 00 6b .,.(.$.........k 0040 - 00 6a 00 39 00 38 00 88-00 87 c0 32 c0 2e c0 2a .j.9.8.....2...* 0050 - c0 26 c0 0f c0 05 00 9d-00 3d 00 35 00 84 c0 2f .&.......=.5.../ 0060 - c0 2b c0 27 c0 23 c0 13-c0 09 00 a2 00 9e 00 67 .+.'.#.........g 0070 - 00 40 00 33 00 32 c0 12-c0 08 00 9a 00 99 00 45 .@.3.2.........E 0080 - 00 44 00 16 00 13 c0 31-c0 2d c0 29 c0 25 c0 0e .D.....1.-.).%.. 0090 - c0 04 c0 0d c0 03 00 9c-00 3c 00 2f 00 96 00 41 .........<./...A 00a0 - 00 0a 00 07 c0 11 c0 07-c0 0c c0 02 00 05 00 04 ................ 00b0 - 00 ff 01 00 00 43 00 0b-00 04 03 00 01 02 00 0a .....C.......... 00c0 - 00 08 00 06 00 19 00 18-00 17 00 23 00 00 00 0d ...........#.... 00d0 - 00 22 00 20 06 01 06 02-06 03 05 01 05 02 05 03 .". ............ 00e0 - 04 01 04 02 04 03 03 01-03 02 03 03 02 01 02 02 ................ 00f0 - 02 03 01 01 00 0f 00 01-01 ......... >>> TLS 1.2 Handshake [length 00f4], ClientHello 01 00 00 f0 03 03 57 fe bd 40 06 00 bf 15 c5 e0 83 79 18 b4 a3 f8 f0 2f b6 a8 70 b7 4f fc 48 6f e6 c6 0a ef 08 de 00 00 84 c0 30 c0 2c c0 28 c0 24 c0 14 c0 0a 00 a3 00 9f 00 6b 00 6a 00 39 00 38 00 88 00 87 c0 32 c0 2e c0 2a c0 26 c0 0f c0 05 00 9d 00 3d 00 35 00 84 c0 2f c0 2b c0 27 c0 23 c0 13 c0 09 00 a2 00 9e 00 67 00 40 00 33 00 32 c0 12 c0 08 00 9a 00 99 00 45 00 44 00 16 00 13 c0 31 c0 2d c0 29 c0 25 c0 0e c0 04 c0 0d c0 03 00 9c 00 3c 00 2f 00 96 00 41 00 0a 00 07 c0 11 c0 07 c0 0c c0 02 00 05 00 04 00 ff 01 00 00 43 00 0b 00 04 03 00 01 02 00 0a 00 08 00 06 00 19 00 18 00 17 00 23 00 00 00 0d 00 22 00 20 06 01 06 02 06 03 05 01 05 02 05 03 04 01 04 02 04 03 03 01 03 02 03 03 02 01 02 02 02 03 01 01 00 0f 00 01 01 Server 1 (without issue): >>> TLS 1.2 Handshake [length 00f4], ClientHello 01 00 00 f0 03 03 57 fe cb 7b 28 ba ea e1 89 71 ad fb 1d 8b 97 e9 83 2b dc e4 53 c5 bf 75 8f 58 74 42 63 29 6b 20 00 00 84 c0 30 c0 2c c0 28 c0 24 c0 14 c0 0a 00 a3 00 9f 00 6b 00 6a 00 39 00 38 00 88 00 87 c0 32 c0 2e c0 2a c0 26 c0 0f c0 05 00 9d 00 3d 00 35 00 84 c0 2f c0 2b c0 27 c0 23 c0 13 c0 09 00 a2 00 9e 00 67 00 40 00 33 00 32 c0 12 c0 08 00 9a 00 99 00 45 00 44 00 16 00 13 c0 31 c0 2d c0 29 c0 25 c0 0e c0 04 c0 0d c0 03 00 9c 00 3c 00 2f 00 96 00 41 00 0a 00 07 c0 11 c0 07 c0 0c c0 02 00 05 00 04 00 ff 01 00 00 43 00 0b 00 04 03 00 01 02 00 0a 00 08 00 06 00 19 00 18 00 17 00 23 00 00 00 0d 00 22 00 20 06 01 06 02 06 03 05 01 05 02 05 03 04 01 04 02 04 03 03 01 03 02 03 03 02 01 02 02 02 03 01 01 00 0f 00 01 01 read from 0x17796d0 [0x17c27d0] (7 bytes => 7 (0x7)) 0000 - 16 03 03 06 35 02 ....5. 0007 - <SPACES/NULS> read from 0x17796d0 [0x17c27da] (1587 bytes => 1587 (0x633)) 0000 - 00 4d 03 03 57 fe cb 7b-51 64 70 bc 08 c8 91 24 .M..W..{Qdp....$ 0010 - c4 da 8c cf 94 94 7d c5-0f 45 ee 2c 86 99 1d ff ......}..E.,.... 0020 - b6 a9 3e 66 20 57 fe cb-7b e7 b2 a4 56 15 3b 46 ..>f W..{...V.;F 0030 - 98 92 b4 95 56 7f 95 4e-4e f3 cd ce d8 cd 98 29 ....V..NN......) 0040 - c7 fe 1e 6f 8b 00 9f 00-00 05 ff 01 00 01 00 0b ...o............ 0050 - 00 03 cd 00 03 ca 00 03-c7 30 82 03 c3 30 82 02 .........0...0.. 0060 - ab a0 03 02 01 02 02 04-6e 0d a4 0f 30 0d 06 09 ........n...0... 0070 - 2a 86 48 86 f7 0d 01 01-0b 05 00 30 81 91 31 0b *.H........0..1.
|
ssl, openssl, aem, rhel
| 11
| 17,912
| 3
|
https://stackoverflow.com/questions/40009474/openssl-hangs-at-connected00000003
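A hang immediately after the ClientHello means the TCP connect succeeded but the listener never answered with a ServerHello, so the next checks belong on the server side rather than in openssl. A hedged shell checklist (the keystore path is illustrative):

# is anything listening on the port, and which process owns it?
ss -tlnp | grep 5433

# bound the probe so it fails fast instead of hanging
timeout 10 openssl s_client -connect localhost:5433

# compare the TLS listener configuration between the working and the
# failing servers, e.g. the keystore the connector references
keytool -list -keystore /path/to/keystore.jks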
|
58,907,655
|
I cannot install yum in my docker container
|
I have a docker container which was built from a keycloak image. I want to install vim in the container, but I found that I need yum in order to install vim. I tried to download yum from the internet and use rpm to install it, but the container didn't have sudo to let me change the file permissions. The following is my Linux version: NAME="Red Hat Enterprise Linux" VERSION="8.0 (Ootpa)" ID="rhel" ID_LIKE="fedora" VERSION_ID="8.0" PLATFORM_ID="platform:el8" PRETTY_NAME="Red Hat Enterprise Linux 8.0 (Ootpa)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:8.0:GA" HOME_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8" REDHAT_BUGZILLA_PRODUCT_VERSION=8.0 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="8.0" How can I get any text editor installed in the container?
|
I cannot install yum in my docker container I have a docker container which was built from a keycloak image. I want to install vim in the container, but I found that I need yum in order to install vim. I tried to download yum from the internet and use rpm to install it, but the container didn't have sudo to let me change the file permissions. The following is my Linux version: NAME="Red Hat Enterprise Linux" VERSION="8.0 (Ootpa)" ID="rhel" ID_LIKE="fedora" VERSION_ID="8.0" PLATFORM_ID="platform:el8" PRETTY_NAME="Red Hat Enterprise Linux 8.0 (Ootpa)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:8.0:GA" HOME_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8" REDHAT_BUGZILLA_PRODUCT_VERSION=8.0 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="8.0" How can I get any text editor installed in the container?
|
docker, rpm, yum, rhel
| 10
| 49,051
| 3
|
https://stackoverflow.com/questions/58907655/i-cannot-install-yum-in-my-docker-container
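Because a running container usually cannot gain root after the fact, the standard answer is to extend the image and install the editor at build time. A Dockerfile-driven sketch (the base image tag and the availability of microdnf are assumptions about the Keycloak image):

cat > Dockerfile <<'EOF'
FROM jboss/keycloak:latest
USER root
# RHEL 8 based images often ship microdnf even when yum is absent
RUN microdnf install -y vim || yum install -y vim
USER 1000
EOF
docker build -t keycloak-with-vim .
docker run -it keycloak-with-vim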
|
53,763,687
|
dos2unix: Binary symbol found, skipping binary file
|
I am currently having an issue where my script fails when trying to execute the dos2unix command on a file. This is what I have in the script: dos2unix -n data/file data/tmp_file dos2unix: Binary symbol found at line 21107611 dos2unix: Skipping binary file data/input/DATA.txt mv -f data/tmp_file data/input/DATA.txt mv: cannot stat ‘data/tmp_file’: No such file or directory I went to the line in question and found a "^@" there. What is this, and how do I get my script to work using the dos2unix command? {128392938928392838123129381298398129^@ Thanks
|
dos2unix: Binary symbol found, skipping binary file I am currently having an issue where my script fails when trying to execute the dos2unix command on a file. This is what I have in the script: dos2unix -n data/file data/tmp_file dos2unix: Binary symbol found at line 21107611 dos2unix: Skipping binary file data/input/DATA.txt mv -f data/tmp_file data/input/DATA.txt mv: cannot stat ‘data/tmp_file’: No such file or directory I went to the line in question and found a "^@" there. What is this, and how do I get my script to work using the dos2unix command? {128392938928392838123129381298398129^@ Thanks
|
linux, vim, rhel, dos2unix
| 10
| 26,147
| 1
|
https://stackoverflow.com/questions/53763687/dos2unix-binary-symbol-found-skipping-binary-file
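The "^@" is how vim displays a NUL byte (0x00), and any NUL makes dos2unix classify the file as binary and skip it. Two hedged ways past it:

# force the conversion despite the NUL byte
dos2unix -f -n data/file data/tmp_file

# or strip the NULs first, then convert the cleaned copy
tr -d '\000' < data/file > data/tmp_file
dos2unix data/tmp_file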
|
2,247,340
|
JVM crashes under stress on RHEL 5.2
|
I've got (the currently latest) jdk 1.6.0.18 crashing while running a web application on (the currently latest) tomcat 6.0.24, unexpectedly after 4 hours to 8 days of stress testing (30 threads hitting the app at 6 mil. pageviews/day). This is on RHEL 5.2 (Tikanga). The crash report is at [URL] and the consistent parts of the crash are: a SIGSEGV is thrown in libjvm.so; eden space is always full (100%); the JVM runs with the following options: CATALINA_OPTS="-server -Xms512m -Xmx1024m -Djava.awt.headless=true" I've also tested the memory for hardware problems using [URL] for 48 hours (14 passes of the whole memory) without any error. I've enabled -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps to inspect for any GC trends or space exhaustion, but there is nothing suspicious there. GC and full GC happen at predictable intervals, almost always freeing the same amount of memory. My application does not, directly, use any native code. Any ideas of where I should look next? Edit - more info : 1) There is no client vm in this JDK: [foo@localhost ~]$ java -version -server java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode) [foo@localhost ~]$ java -version -client java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode) 2) Changing the O/S is not possible. 3) I don't want to change the JMeter stress test variables since this could hide the problem. Since I've got a use case (the current stress test scenario) which crashes the JVM, I'd like to fix the crash and not change the test. 4) I've done static analysis on my application but nothing serious came up. 5) The memory does not grow over time. The memory usage equilibrates very quickly (after startup) at a very steady trend which does not seem suspicious. 6) /var/log/messages does not contain any useful information before or during the time of the crash. More info : Forgot to mention that there was an apache (2.2.14) fronting tomcat using mod_jk 1.2.28. Right now I'm running the test without apache just in case the JVM crash relates to the mod_jk native code which connects to the JVM (tomcat connector). After that (if the JVM crashes again) I'll try removing some components from my application (caching, lucene, quartz) and later on will try using jetty. Since the crash currently happens anytime between 4 hours and 8 days, it may take a lot of time to find out what's going on.
|
JVM crashes under stress on RHEL 5.2 I've got (the currently latest) jdk 1.6.0.18 crashing while running a web application on (the currently latest) tomcat 6.0.24, unexpectedly after 4 hours to 8 days of stress testing (30 threads hitting the app at 6 mil. pageviews/day). This is on RHEL 5.2 (Tikanga). The crash report is at [URL] and the consistent parts of the crash are: a SIGSEGV is thrown in libjvm.so; eden space is always full (100%); the JVM runs with the following options: CATALINA_OPTS="-server -Xms512m -Xmx1024m -Djava.awt.headless=true" I've also tested the memory for hardware problems using [URL] for 48 hours (14 passes of the whole memory) without any error. I've enabled -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps to inspect for any GC trends or space exhaustion, but there is nothing suspicious there. GC and full GC happen at predictable intervals, almost always freeing the same amount of memory. My application does not, directly, use any native code. Any ideas of where I should look next? Edit - more info : 1) There is no client vm in this JDK: [foo@localhost ~]$ java -version -server java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode) [foo@localhost ~]$ java -version -client java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode) 2) Changing the O/S is not possible. 3) I don't want to change the JMeter stress test variables since this could hide the problem. Since I've got a use case (the current stress test scenario) which crashes the JVM, I'd like to fix the crash and not change the test. 4) I've done static analysis on my application but nothing serious came up. 5) The memory does not grow over time. The memory usage equilibrates very quickly (after startup) at a very steady trend which does not seem suspicious. 6) /var/log/messages does not contain any useful information before or during the time of the crash. More info : Forgot to mention that there was an apache (2.2.14) fronting tomcat using mod_jk 1.2.28. Right now I'm running the test without apache just in case the JVM crash relates to the mod_jk native code which connects to the JVM (tomcat connector). After that (if the JVM crashes again) I'll try removing some components from my application (caching, lucene, quartz) and later on will try using jetty. Since the crash currently happens anytime between 4 hours and 8 days, it may take a lot of time to find out what's going on.
|
java, jvm, crash, segmentation-fault, rhel
| 10
| 2,921
| 7
|
https://stackoverflow.com/questions/2247340/jvm-crashes-under-stress-on-rhel-5-2
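With a crash that takes days to reproduce, the pragmatic step is to make each occurrence as informative as possible before the next run. A sketch of standard HotSpot flags and settings (log paths are illustrative):

# richer post-mortem data on the next SIGSEGV
CATALINA_OPTS="-server -Xms512m -Xmx1024m -Djava.awt.headless=true \
  -XX:ErrorFile=/var/log/tomcat/hs_err_%p.log \
  -XX:+HeapDumpOnOutOfMemoryError"

# allow core dumps so the crash can be inspected in gdb afterwards
ulimit -c unlimited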
|
52,401,710
|
Change system locale inside a CentOS/RHEL without using localectl?
|
I'm trying to build a Docker image based on oracle/database:11.2.0.2-xe (which is based on Oracle Linux, itself based on RHEL) and want to change the system locale in this image (using some RUN command inside a Dockerfile ). According to this guide I should use localectl set-locale <MYLOCALE> , but this command fails with a Failed to create bus connection: No such file or directory message. This is a known Docker issue for commands that require systemd to be running. I tried to start systemd anyway (using /usr/sbin/init as the first process, as well as using -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /run thanks to this help ), but then localectl set-locale failed with a Could not get properties: Connection timed out message. So I'm now trying to avoid using localectl to change my system global locale; how could I do this?
|
Change system locale inside a CentOS/RHEL without using localectl? I'm trying to build a Docker image based on oracle/database:11.2.0.2-xe (which is based on Oracle Linux, itself based on RHEL) and want to change the system locale in this image (using some RUN command inside a Dockerfile ). According to this guide I should use localectl set-locale <MYLOCALE> , but this command fails with a Failed to create bus connection: No such file or directory message. This is a known Docker issue for commands that require systemd to be running. I tried to start systemd anyway (using /usr/sbin/init as the first process, as well as using -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /run thanks to this help ), but then localectl set-locale failed with a Could not get properties: Connection timed out message. So I'm now trying to avoid using localectl to change my system global locale; how could I do this?
|
docker, locale, systemd, rhel
| 10
| 10,627
| 1
|
https://stackoverflow.com/questions/52401710/change-system-locale-inside-a-centos-rhel-without-using-localectl
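Since localectl is only a thin client that writes /etc/locale.conf over the systemd bus, the same result can be had by writing the file directly at build time. A Dockerfile-flavoured sketch (the locale name is an example):

# equivalent of `localectl set-locale LANG=en_US.UTF-8`, no bus needed
RUN echo 'LANG=en_US.UTF-8' > /etc/locale.conf
ENV LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8

# trimmed-down images may need the locale generated first
RUN localedef -i en_US -f UTF-8 en_US.UTF-8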
|
44,249,788
|
RODBC and Microsoft SQL Server: Truncating Long Character Strings
|
I am trying to query a variable from a Microsoft SQL Server database using R/RODBC. RODBC is truncating the character string at 8000 characters. Original code: truncates at 255 characters (as per the RODBC documentation) library(RODBC) con_string <- odbcConnect("DSN") query_string <- "SELECT text_var FROM table_name" dat <- sqlQuery(con_string, query_string, stringsAsFactors=FALSE) Partial solution: modifying the query string truncates text after 7999 characters. library(RODBC) con_string <- odbcConnect("DSN") query_string <- "SELECT [text_var]=CAST(text_var AS VARCHAR(8000)) FROM table_name" dat <- sqlQuery(con_string, query_string, stringsAsFactors=FALSE) The table/variable contains text strings as long as 250,000 characters. I really want to work with all the text in R. Is this possible? @BrianRipley discusses the problem (but no solution) on page 18 of the following document: [URL] @nutterb discusses similar issues with the RODBCext package on GitHub: [URL] I have seen similar discussion on SO, but no solution using RODBC with VARCHAR>8000: RODBC sqlQuery() returns varchar(255) when it should return varchar(MAX) RODBC string getting truncated Note: R 3.3.2 Microsoft SQL Server 2012 Linux RHEL 7.1 Microsoft ODBC Driver for SQL Server
|
RODBC and Microsoft SQL Server: Truncating Long Character Strings I am trying to query a variable from a Microsoft SQL Server database using R/RODBC. RODBC is truncating the character string at 8000 characters. Original code: truncates at 255 characters (as per the RODBC documentation) library(RODBC) con_string <- odbcConnect("DSN") query_string <- "SELECT text_var FROM table_name" dat <- sqlQuery(con_string, query_string, stringsAsFactors=FALSE) Partial solution: modifying the query string truncates text after 7999 characters. library(RODBC) con_string <- odbcConnect("DSN") query_string <- "SELECT [text_var]=CAST(text_var AS VARCHAR(8000)) FROM table_name" dat <- sqlQuery(con_string, query_string, stringsAsFactors=FALSE) The table/variable contains text strings as long as 250,000 characters. I really want to work with all the text in R. Is this possible? @BrianRipley discusses the problem (but no solution) on page 18 of the following document: [URL] @nutterb discusses similar issues with the RODBCext package on GitHub: [URL] I have seen similar discussion on SO, but no solution using RODBC with VARCHAR>8000: RODBC sqlQuery() returns varchar(255) when it should return varchar(MAX) RODBC string getting truncated Note: R 3.3.2 Microsoft SQL Server 2012 Linux RHEL 7.1 Microsoft ODBC Driver for SQL Server
|
sql-server, r, truncate, rhel, rodbc
| 10
| 3,381
| 2
|
https://stackoverflow.com/questions/44249788/rodbc-and-microsoft-sql-server-truncating-long-character-strings
|
70,443,432
|
docker: Error response from daemon: pull access denied for rhel7/rhel
|
I found this cheatsheet for Docker on the internet: [URL] I get this error: C:\Users\Administrator>docker run -it rhel7/rhel bash Unable to find image 'rhel7/rhel:latest' locally docker: Error response from daemon: pull access denied for rhel7/rhel, repository does not exist or may require 'docker login': denied: requested access to the resource is denied. See 'docker run --help'. C:\Users\Administrator> How do I fix it?
|
docker: Error response from daemon: pull access denied for rhel7/rhel I found this cheatsheet for Docker on the internet: [URL] I get this error: C:\Users\Administrator>docker run -it rhel7/rhel bash Unable to find image 'rhel7/rhel:latest' locally docker: Error response from daemon: pull access denied for rhel7/rhel, repository does not exist or may require 'docker login': denied: requested access to the resource is denied. See 'docker run --help'. C:\Users\Administrator> How do I fix it?
|
docker, rhel, rhel7
| 10
| 93,932
| 3
|
https://stackoverflow.com/questions/70443432/docker-error-response-from-daemon-pull-access-denied-for-rhel7-rhel
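rhel7/rhel is not a repository on Docker Hub; Red Hat ships its images from its own registries, which is why the pull is denied. A sketch of the two usual routes (the image paths reflect Red Hat's registries and are stated here as assumptions):

# freely redistributable UBI image, no login required
docker run -it registry.access.redhat.com/ubi7/ubi bash

# full RHEL images live on the terms-based registry and need an account
docker login registry.redhat.io
docker run -it registry.redhat.io/rhel7/rhel bash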
|
13,328,283
|
Equivalent in yum of apt-get update
|
In Debian derivatives, before installing software from apt using apt-get install xxx , it is necessary to run apt-get update . This refreshes the package lists so that dependencies are resolved correctly, etc. When using an RHEL instance, out of habit, I ran yum update before installing software. However, this was only to find that yum update upgraded all of my packages instead of, or possibly in addition to, updating package lists locally as in apt-get update . Is it necessary to update yum's package lists before running yum install xxx ? If so, how do you do it?
|
Equivalent in yum of apt-get update In Debian derivatives, before installing software from apt using apt-get install xxx , it is necessary to run apt-get update . This refreshes the package lists so that dependencies are resolved correctly, etc. When using an RHEL instance, out of habit, I ran yum update before installing software. However, this was only to find that yum update upgraded all of my packages instead of, or possibly in addition to, updating package lists locally as in apt-get update . Is it necessary to update yum's package lists before running yum install xxx ? If so, how do you do it?
|
yum, rhel, apt, package-managers
| 9
| 11,602
| 1
|
https://stackoverflow.com/questions/13328283/equivalent-in-yum-of-apt-get-update
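For completeness, the closest yum counterparts to apt-get update, none of which upgrade anything:

yum clean expire-cache   # mark cached repo metadata as expired
yum makecache            # re-download metadata for all enabled repos
yum check-update         # list available updates without installing any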
|
32,639,553
|
Upgrading Docker on Amazon Linux AMI
|
I want to upgrade Docker to v1.8 on Amazon Linux. At the time of writing their internal yum package repository has: Docker version 1.7.1, build 786b29d/1.7.1 . Things I have already tried Manually installing from the Docker project's repo Error: Package: docker-engine-1.8.2-1.el7.centos.x86_64 (dockerrepo) Requires: systemd-units
|
Upgrading Docker on Amazon Linux AMI I want to upgrade Docker to v1.8 on Amazon Linux. At the time of writing their internal yum package repository has: Docker version 1.7.1, build 786b29d/1.7.1 . Things I have already tried Manually installing from the Docker project's repo Error: Package: docker-engine-1.8.2-1.el7.centos.x86_64 (dockerrepo) Requires: systemd-units
|
linux, amazon-web-services, docker, yum, rhel
| 9
| 12,445
| 4
|
https://stackoverflow.com/questions/32639553/upgrading-docker-on-amazon-linux-ami
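The failure is expected: docker-engine's CentOS 7 RPM wants systemd, while the classic Amazon Linux AMI is upstart-based. Two hedged alternatives that sidestep the systemd-units requirement:

# take whatever version Amazon's own repository currently ships
sudo yum update -y docker

# or drop in an upstream static binary (the version and the historical
# get.docker.com layout are assumptions)
curl -fsSL https://get.docker.com/builds/Linux/x86_64/docker-1.8.2 -o docker
chmod +x docker && sudo mv docker /usr/bin/docker
sudo service docker restart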
|
58,492,050
|
How to fix -- bash: /usr/bin/python: Too many levels of symbolic links
|
I wanted to make python3 my default on RHEL, so I followed the instructions at How to set Python3.5.2 as default Python version on CentOS? sudo ln -fs /usr/bin/python3 /usr/bin/python It changed the default to 3.6.8: root@rhel:~# python -V Python 3.6.8 Then I tried yum install python-pip: root@rhel:~# yum install python-pip File "/usr/bin/yum", line 30 except KeyboardInterrupt, e: ^ SyntaxError: invalid syntax This happened when I tried a few other commands as well. I tried reverting the changes with root@rhel:~# sudo ln -fs /usr/bin/python /usr/bin/python but am now running into root@rhel:~# python -V bash: /usr/bin/python: Too many levels of symbolic links I guess from what I'm reading that I need to break the symlinks. The following is what's in my /usr/bin: ls -l /usr/bin | grep python lrwxrwxrwx 1 root root 15 Oct 21 14:12 python -> /usr/bin/python lrwxrwxrwx 1 root root 14 Aug 8 05:53 python-config -> python2-config lrwxrwxrwx 1 root root 9 Aug 8 05:51 python2 -> python2.7 lrwxrwxrwx 1 root root 16 Aug 8 05:53 python2-config -> python2.7-config -rwxr-xr-x 1 root root 7144 Jun 11 10:34 python2.7 -rwxr-xr-x 1 root root 1835 Jun 11 10:34 python2.7-config lrwxrwxrwx 1 root root 9 Aug 8 05:51 python3 -> python3.6 lrwxrwxrwx 1 root root 16 Aug 8 05:53 python3-config -> python3.6-config lrwxrwxrwx 1 root root 20 Aug 8 05:53 python3-debug -> /usr/bin/python3.6dm -rwxr-xr-x 2 root root 11336 Jun 11 15:17 python3.6 lrwxrwxrwx 1 root root 17 Aug 8 05:53 python3.6-config -> python3.6m-config -rwxr-xr-x 1 root root 11336 Jun 11 15:17 python3.6dm -rwxr-xr-x 1 root root 175 Jun 11 15:16 python3.6dm-config -rwxr-xr-x 1 root root 3396 Jun 11 14:54 python3.6dm-x86_64-config -rwxr-xr-x 2 root root 11336 Jun 11 15:17 python3.6m -rwxr-xr-x 1 root root 173 Jun 11 15:16 python3.6m-config -rwxr-xr-x 1 root root 3403 Jun 11 14:54 python3.6m-x86_64-config
|
How to fix -- bash: /usr/bin/python: Too many levels of symbolic links I wanted to make python3 my default on RHEL, so I followed the instructions at How to set Python3.5.2 as default Python version on CentOS? sudo ln -fs /usr/bin/python3 /usr/bin/python It changed the default to 3.6.8: root@rhel:~# python -V Python 3.6.8 Then I tried yum install python-pip: root@rhel:~# yum install python-pip File "/usr/bin/yum", line 30 except KeyboardInterrupt, e: ^ SyntaxError: invalid syntax This happened when I tried a few other commands as well. I tried reverting the changes with root@rhel:~# sudo ln -fs /usr/bin/python /usr/bin/python but am now running into root@rhel:~# python -V bash: /usr/bin/python: Too many levels of symbolic links I guess from what I'm reading that I need to break the symlinks. The following is what's in my /usr/bin: ls -l /usr/bin | grep python lrwxrwxrwx 1 root root 15 Oct 21 14:12 python -> /usr/bin/python lrwxrwxrwx 1 root root 14 Aug 8 05:53 python-config -> python2-config lrwxrwxrwx 1 root root 9 Aug 8 05:51 python2 -> python2.7 lrwxrwxrwx 1 root root 16 Aug 8 05:53 python2-config -> python2.7-config -rwxr-xr-x 1 root root 7144 Jun 11 10:34 python2.7 -rwxr-xr-x 1 root root 1835 Jun 11 10:34 python2.7-config lrwxrwxrwx 1 root root 9 Aug 8 05:51 python3 -> python3.6 lrwxrwxrwx 1 root root 16 Aug 8 05:53 python3-config -> python3.6-config lrwxrwxrwx 1 root root 20 Aug 8 05:53 python3-debug -> /usr/bin/python3.6dm -rwxr-xr-x 2 root root 11336 Jun 11 15:17 python3.6 lrwxrwxrwx 1 root root 17 Aug 8 05:53 python3.6-config -> python3.6m-config -rwxr-xr-x 1 root root 11336 Jun 11 15:17 python3.6dm -rwxr-xr-x 1 root root 175 Jun 11 15:16 python3.6dm-config -rwxr-xr-x 1 root root 3396 Jun 11 14:54 python3.6dm-x86_64-config -rwxr-xr-x 2 root root 11336 Jun 11 15:17 python3.6m -rwxr-xr-x 1 root root 173 Jun 11 15:16 python3.6m-config -rwxr-xr-x 1 root root 3403 Jun 11 14:54 python3.6m-x86_64-config
|
python, python-3.6, rhel, rhel7
| 9
| 26,387
| 4
|
https://stackoverflow.com/questions/58492050/how-to-fix-bash-usr-bin-python-too-many-levels-of-symbolic-links
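The revert command linked /usr/bin/python to itself, which is the loop the shell is complaining about. A recovery sketch (on RHEL 7, yum is written for Python 2, as the KeyboardInterrupt syntax error shows, so the system link should stay on 2.7):

# remove the self-referential link and restore the distro default
sudo rm /usr/bin/python
sudo ln -s /usr/bin/python2.7 /usr/bin/python
python -V    # Python 2.7.x again, and yum works again

# prefer calling python3 explicitly (or a venv) over repointing the
# system interpreter
python3 -m venv ~/py3env && source ~/py3env/bin/activate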
|
39,147,735
|
WatchService fires ENTRY_MODIFY sometimes twice and sometimes once
|
I am using this WatchService example from Oracle: import java.nio.file.*; import static java.nio.file.StandardWatchEventKinds.*; import static java.nio.file.LinkOption.*; import java.nio.file.attribute.*; import java.io.*; import java.util.*; public class WatchDir { private final WatchService watcher; private final Map<WatchKey,Path> keys; private final boolean recursive; private boolean trace = false; @SuppressWarnings("unchecked") static <T> WatchEvent<T> cast(WatchEvent<?> event) { return (WatchEvent<T>)event; } /** * Register the given directory with the WatchService */ private void register(Path dir) throws IOException { WatchKey key = dir.register(watcher, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY); if (trace) { Path prev = keys.get(key); if (prev == null) { System.out.format("register: %s\n", dir); } else { if (!dir.equals(prev)) { System.out.format("update: %s -> %s\n", prev, dir); } } } keys.put(key, dir); } /** * Register the given directory, and all its sub-directories, with the * WatchService. */ private void registerAll(final Path start) throws IOException { // register directory and sub-directories Files.walkFileTree(start, new SimpleFileVisitor<Path>() { @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { register(dir); return FileVisitResult.CONTINUE; } }); } /** * Creates a WatchService and registers the given directory */ WatchDir(Path dir, boolean recursive) throws IOException { this.watcher = FileSystems.getDefault().newWatchService(); this.keys = new HashMap<WatchKey,Path>(); this.recursive = recursive; if (recursive) { System.out.format("Scanning %s ...\n", dir); registerAll(dir); System.out.println("Done."); } else { register(dir); } // enable trace after initial registration this.trace = true; } /** * Process all events for keys queued to the watcher */ void processEvents() { for (;;) { // wait for key to be signalled WatchKey key; try { key = watcher.take(); } catch (InterruptedException x) { return; } Path dir = keys.get(key); if (dir == null) { System.err.println("WatchKey not recognized!!"); continue; } for (WatchEvent<?> event: key.pollEvents()) { WatchEvent.Kind kind = event.kind(); // TBD - provide example of how OVERFLOW event is handled if (kind == OVERFLOW) { continue; } // Context for directory entry event is the file name of entry WatchEvent<Path> ev = cast(event); Path name = ev.context(); Path child = dir.resolve(name); // print out event System.out.format("%s: %s\n", event.kind().name(), child); // if directory is created, and watching recursively, then // register it and its sub-directories if (recursive && (kind == ENTRY_CREATE)) { try { if (Files.isDirectory(child, NOFOLLOW_LINKS)) { registerAll(child); } } catch (IOException x) { // ignore to keep sample readbale } } } // reset key and remove from set if directory no longer accessible boolean valid = key.reset(); if (!valid) { keys.remove(key); // all directories are inaccessible if (keys.isEmpty()) { break; } } } } static void usage() { System.err.println("usage: java WatchDir [-r] dir"); System.exit(-1); } public static void main(String[] args) throws IOException { // parse arguments if (args.length == 0 || args.length > 2) usage(); boolean recursive = false; int dirArg = 0; if (args[0].equals("-r")) { if (args.length < 2) usage(); recursive = true; dirArg++; } // register directory and process its events Path dir = Paths.get(args[dirArg]); new WatchDir(dir, recursive).processEvents(); } } I am developing an app in Windows 7 , and 
deployment environment is RHEL 7.2. At first, on both OSes, whenever I copied a file, it fired one ENTRY_CREATE and then two ENTRY_MODIFY events. The first ENTRY_MODIFY was at the beginning of copying, and the second ENTRY_MODIFY was at the end of copying, so I was able to tell the copying process was over. However, it now fires only one ENTRY_MODIFY on RHEL 7.2. It still fires two ENTRY_MODIFY events on Windows 7 though. I have found this on Stack Overflow. That question asks why two ENTRY_MODIFY events are fired. It is not exactly my question, but one of its answers touches on what I'm asking. Sadly, there is no solution to my question in that discussion. Because no ENTRY_MODIFY is fired at the end, only at the beginning of copying, I cannot tell when the copying is finished. What do you think might be the cause of this? Can it be fixed, and how can I tell when the copying is finished? I can't change RHEL 7.2, but anything other than that I would gladly accept. Thanks in advance.
|
WatchService fires ENTRY_MODIFY sometimes twice and sometimes once I am using this WatchService example from Oracle: import java.nio.file.*; import static java.nio.file.StandardWatchEventKinds.*; import static java.nio.file.LinkOption.*; import java.nio.file.attribute.*; import java.io.*; import java.util.*; public class WatchDir { private final WatchService watcher; private final Map<WatchKey,Path> keys; private final boolean recursive; private boolean trace = false; @SuppressWarnings("unchecked") static <T> WatchEvent<T> cast(WatchEvent<?> event) { return (WatchEvent<T>)event; } /** * Register the given directory with the WatchService */ private void register(Path dir) throws IOException { WatchKey key = dir.register(watcher, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY); if (trace) { Path prev = keys.get(key); if (prev == null) { System.out.format("register: %s\n", dir); } else { if (!dir.equals(prev)) { System.out.format("update: %s -> %s\n", prev, dir); } } } keys.put(key, dir); } /** * Register the given directory, and all its sub-directories, with the * WatchService. */ private void registerAll(final Path start) throws IOException { // register directory and sub-directories Files.walkFileTree(start, new SimpleFileVisitor<Path>() { @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { register(dir); return FileVisitResult.CONTINUE; } }); } /** * Creates a WatchService and registers the given directory */ WatchDir(Path dir, boolean recursive) throws IOException { this.watcher = FileSystems.getDefault().newWatchService(); this.keys = new HashMap<WatchKey,Path>(); this.recursive = recursive; if (recursive) { System.out.format("Scanning %s ...\n", dir); registerAll(dir); System.out.println("Done."); } else { register(dir); } // enable trace after initial registration this.trace = true; } /** * Process all events for keys queued to the watcher */ void processEvents() { for (;;) { // wait for key to be signalled WatchKey key; try { key = watcher.take(); } catch (InterruptedException x) { return; } Path dir = keys.get(key); if (dir == null) { System.err.println("WatchKey not recognized!!"); continue; } for (WatchEvent<?> event: key.pollEvents()) { WatchEvent.Kind kind = event.kind(); // TBD - provide example of how OVERFLOW event is handled if (kind == OVERFLOW) { continue; } // Context for directory entry event is the file name of entry WatchEvent<Path> ev = cast(event); Path name = ev.context(); Path child = dir.resolve(name); // print out event System.out.format("%s: %s\n", event.kind().name(), child); // if directory is created, and watching recursively, then // register it and its sub-directories if (recursive && (kind == ENTRY_CREATE)) { try { if (Files.isDirectory(child, NOFOLLOW_LINKS)) { registerAll(child); } } catch (IOException x) { // ignore to keep sample readbale } } } // reset key and remove from set if directory no longer accessible boolean valid = key.reset(); if (!valid) { keys.remove(key); // all directories are inaccessible if (keys.isEmpty()) { break; } } } } static void usage() { System.err.println("usage: java WatchDir [-r] dir"); System.exit(-1); } public static void main(String[] args) throws IOException { // parse arguments if (args.length == 0 || args.length > 2) usage(); boolean recursive = false; int dirArg = 0; if (args[0].equals("-r")) { if (args.length < 2) usage(); recursive = true; dirArg++; } // register directory and process its events Path dir = Paths.get(args[dirArg]); new WatchDir(dir, 
recursive).processEvents(); } } I am developing an app on Windows 7 , and the deployment environment is RHEL 7.2. At first, on both OSes, whenever I copied a file, it fired one ENTRY_CREATE and then two ENTRY_MODIFY events. The first ENTRY_MODIFY was at the beginning of copying, and the second ENTRY_MODIFY was at the end of copying, so I was able to tell the copying process was over. However, it now fires only one ENTRY_MODIFY on RHEL 7.2. It still fires two ENTRY_MODIFY events on Windows 7 though. I have found this on Stack Overflow. That question asks why two ENTRY_MODIFY events are fired. It is not exactly my question, but one of its answers touches on what I'm asking. Sadly, there is no solution to my question in that discussion. Because no ENTRY_MODIFY is fired at the end, only at the beginning of copying, I cannot tell when the copying is finished. What do you think might be the cause of this? Can it be fixed, and how can I tell when the copying is finished? I can't change RHEL 7.2, but anything other than that I would gladly accept. Thanks in advance.
|
java, rhel, watchservice, nio2
| 9
| 5,510
| 2
|
https://stackoverflow.com/questions/39147735/watchservice-fires-entry-modify-sometimes-twice-and-sometimes-once
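On Linux the WatchService is built on inotify, which does not promise a trailing modification event; the kernel event that does mark completion (IN_CLOSE_WRITE) is simply not exposed by the Java API, so the Windows double-ENTRY_MODIFY pattern cannot be relied on. A shell sketch that makes the difference visible and shows the common size-stabilisation workaround (paths are illustrative):

# watch raw inotify events while copying into the directory;
# close_write fires exactly once, when the writer finishes
inotifywait -m -e create,modify,close_write /watched/dir

# portable workaround: consider the file complete once its size stops
# changing between two polls
f=/watched/dir/incoming.dat; prev=-1
while [ "$(stat -c %s "$f")" != "$prev" ]; do
  prev=$(stat -c %s "$f"); sleep 1
done
echo "copy finished: $f"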
|
36,034,208
|
LD_PRELOAD does not load on systemd
|
I am trying to inject an SO into a process that is started by the systemd init system (using LD_PRELOAD), but it does not get loaded into the new process. I compiled a basic SO (unrandom.c): int rand(){ return 42; //the most random number in the universe } with the command line: gcc -shared -fPIC unrandom.c -o unrandom.so I changed the .service file to include: Environment="LD_PRELOAD=/tmp/unrandom.so" After starting the service, the LD_PRELOAD environment variable exists in the process, but the SO is not injected according to cat /proc/<PID>/maps Am I missing something? My machine is RHEL7
|
LD_PRELOAD does not load on systemd I am trying to inject an SO into a process that is started by the systemd init system (using LD_PRELOAD), but it does not get loaded into the new process. I compiled a basic SO (unrandom.c): int rand(){ return 42; //the most random number in the universe } with the command line: gcc -shared -fPIC unrandom.c -o unrandom.so I changed the .service file to include: Environment="LD_PRELOAD=/tmp/unrandom.so" After starting the service, the LD_PRELOAD environment variable exists in the process, but the SO is not injected according to cat /proc/<PID>/maps Am I missing something? My machine is RHEL7
|
linux, rhel, systemd, ld-preload
| 9
| 5,904
| 1
|
https://stackoverflow.com/questions/36034208/ld-preload-does-not-loaded-on-systemd
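Two RHEL 7 specifics produce exactly this symptom: PrivateTmp=true gives the service a private, empty /tmp in which /tmp/unrandom.so does not exist, and SELinux can silently refuse to map a library that lacks a shared-library label. A hedged sketch of the checks and the fix ("the-service" stands in for the real unit name):

# does the unit get a private /tmp? if so, /tmp/unrandom.so is invisible
systemctl show the-service -p PrivateTmp

# move the library out of /tmp and label it as a shared library
sudo cp unrandom.so /usr/local/lib/unrandom.so
sudo chcon -t lib_t /usr/local/lib/unrandom.so

# point the unit at the new path:
#   Environment="LD_PRELOAD=/usr/local/lib/unrandom.so"
sudo systemctl daemon-reload && sudo systemctl restart the-service

# any SELinux denials will show up here
sudo ausearch -m avc -ts recent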
|