Column summary: question_id (int64, 82.3k-79.7M), title_clean (string, 15-158 chars), body_clean (string, 62-28.5k chars), full_text (string, 95-28.5k chars), tags (string, 4-80 chars), score (int64, 0-1.15k), view_count (int64, 22-1.62M), answer_count (int64, 0-30), link (string, 58-125 chars).

| question_id | title_clean | body_clean | full_text | tags | score | view_count | answer_count | link |
|---|---|---|---|---|---|---|---|---|
30,154,094
|
Data destruction using shred against an ext4 filesystem
|
I'm running shred against a block device with a couple of ext4 filesystems on it. The block devices are virtual drives - RAID-1 and RAID-5. The controller is a PERC H710P. Command: shred -v /dev/sda; shred -v /dev/sdc ... I understand from the shred man(info) page that shred might not be effective on journaled filesystems, but only when shredding individual files. Can anyone please explain whether shredding a block device is a safe way to destroy all data on it?
|
Data destroy using shred agains ext4 filesystem I'm running shred against blockdevice with couple of etx4 filesystems on it. The blockdevices are virtual drives - RAID-1 and RAID-5. Controller is PERC H710P. command shred -v /dev/sda; shred -v /dev/sdc ... I can understand from shred man(info) page that shred might be no effective on journal filesystems but only when shredding files. Anyone can please explain whether is shredding against blockdevice safe way to destruct all data on it?
|
linux, rhel, ext4, journal, shred
| 2
| 5,160
| 1
|
https://stackoverflow.com/questions/30154094/data-destroy-using-shred-agains-ext4-filesystem
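A side note on the journaling caveat: the warning in the shred documentation concerns shredding individual files through a journaled filesystem, where stale data can survive in the journal. When the whole block device is overwritten, the ext4 journal lives on that same device and is overwritten along with everything else, so the caveat does not apply in the same way; what can still hold data is the layer below the virtual drive (controller cache, remapped sectors on the physical disks). A minimal spot-check sketch, assuming /dev/sdX stands in for one of the wiped virtual drives:

```bash
# Read a sample region of the wiped device and look for any recognizable leftover text.
# /dev/sdX and the offsets are placeholders; repeat at a few different offsets.
sudo dd if=/dev/sdX bs=1M count=64 skip=1024 2>/dev/null | strings | head -n 20
```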
|
28,729,457
|
Use grub-md5-crypt in a script where user is prompted for password to encrypt
|
As you already know from title, I want to configure an encrypted password for grub in /etc/grub.conf . I want to use a single script, where I will use grub-md5-crypt , enter my password that I want to encrypt, and sed that encrypted output in /etc/grub.conf . Second part is fine for me. but how to handle password prompt in the script: [root@localhost ssh]# grub-md5-crypt Password: Retype password: $1$3L3j7$lkZs92MnlmQkVYiCH9dtJ. How can I automatically save the encrypted password in a variable or manage it somehow, so that in the next line of my script I can sed it like this: sed -i '/^[# ]*timeout.*/a $hashedpwd/' /etc/grub.conf Please help
|
Use grub-md5-crypt in a script where user is prompted for password to encrypt As you already know from title, I want to configure an encrypted password for grub in /etc/grub.conf . I want to use a single script, where I will use grub-md5-crypt , enter my password that I want to encrypt, and sed that encrypted output in /etc/grub.conf . Second part is fine for me. but how to handle password prompt in the script: [root@localhost ssh]# grub-md5-crypt Password: Retype password: $1$3L3j7$lkZs92MnlmQkVYiCH9dtJ. How can I automatically save the encrypted password in a variable or manage it somehow, so that in the next line of my script I can sed it like this: sed -i '/^[# ]*timeout.*/a $hashedpwd/' /etc/grub.conf Please help
|
scripting, rhel, grub
| 2
| 1,996
| 1
|
https://stackoverflow.com/questions/28729457/use-grub-md5-crypt-in-a-script-where-user-is-prompted-for-password-to-encrypt
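Two details usually decide whether a script like this works: capturing the hash into a variable, and the quoting of the sed expression (single quotes prevent $hashedpwd from ever expanding). A sketch, under the assumption that the grub-md5-crypt build on the system accepts the password on standard input (worth verifying; some builds insist on a terminal):

```bash
#!/bin/bash
# Prompt once, feed the password to grub-md5-crypt twice (password + retype),
# and keep only the last line of output, which is the MD5 hash.
read -s -p "GRUB password: " pw; echo
hashedpwd=$(printf '%s\n%s\n' "$pw" "$pw" | grub-md5-crypt 2>/dev/null | tail -n 1)

# Double quotes so $hashedpwd expands; "password --md5 <hash>" is the GRUB legacy directive.
sed -i "/^[# ]*timeout.*/a password --md5 $hashedpwd" /etc/grub.conf
```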
|
25,203,763
|
Crash in QApplication/QThread on RHEL / apache (__get_tls_addr returns 0)
|
Our app crashes only when running on RHEL6 under apache (even if started standalone as httpd -X). When running using custom small http server or under another machine with Ubuntu it works fine. Here's the stack trace: Program received signal SIGSEGV, Segmentation fault. 0x00007fffe8f96d9f in ?? () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtCore.so.4 (gdb) bt #0 0x00007fffe8f96d9f in ?? () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtCore.so.4 #1 0x00007fffe8f93a39 in QThread::currentThread() () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtCore.so.4 #2 0x00007fffe90a05bc in QCoreApplicationPrivate::QCoreApplicationPrivate(int&, char**, unsigned int) () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtCore.so.4 #3 0x00007fffe7cbec63 in QApplicationPrivate::QApplicationPrivate(int&, char**, QApplication::Type, int) () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtGui.so.4 #4 0x00007fffe7cc811c in QApplication::QApplication(int&, char**, int) () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtGui.so.4 There're several similar issues on the internet but none of them list any answer: [URL] [URL] [URL] According to Qt source, I think that QThreadData::current() return NULL. But why can this happen? Now, what I found via disassembly is that __get_tls_addr returns 0 (for Qt static threaddata var I suppose). Now, when I run custom http server, not apache, __get_tls_addr does return valid pointer. I found that if I change -ftls-model=initial-exec to -ftls-model=global-dynamic then it works in RHEL/apache. What I still don't understand is why, what is specific about apache.
|
Crash in QApplication/QThread on RHEL / apache (__get_tls_addr returns 0) Our app crashes only when running on RHEL6 under apache (even if started standalone as httpd -X). When running using custom small http server or under another machine with Ubuntu it works fine. Here's the stack trace: Program received signal SIGSEGV, Segmentation fault. 0x00007fffe8f96d9f in ?? () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtCore.so.4 (gdb) bt #0 0x00007fffe8f96d9f in ?? () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtCore.so.4 #1 0x00007fffe8f93a39 in QThread::currentThread() () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtCore.so.4 #2 0x00007fffe90a05bc in QCoreApplicationPrivate::QCoreApplicationPrivate(int&, char**, unsigned int) () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtCore.so.4 #3 0x00007fffe7cbec63 in QApplicationPrivate::QApplicationPrivate(int&, char**, QApplication::Type, int) () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtGui.so.4 #4 0x00007fffe7cc811c in QApplication::QApplication(int&, char**, int) () from /opt/QtSDK/Desktop/Qt/4.8.1/gcc/lib/libQtGui.so.4 There're several similar issues on the internet but none of them list any answer: [URL] [URL] [URL] According to Qt source, I think that QThreadData::current() return NULL. But why can this happen? Now, what I found via disassembly is that __get_tls_addr returns 0 (for Qt static threaddata var I suppose). Now, when I run custom http server, not apache, __get_tls_addr does return valid pointer. I found that if I change -ftls-model=initial-exec to -ftls-model=global-dynamic then it works in RHEL/apache. What I still don't understand is why, what is specific about apache.
|
linux, apache, qt, rhel
| 2
| 160
| 1
|
https://stackoverflow.com/questions/25203763/crash-in-qapplication-qthread-on-rhel-apache-get-tls-addr-returns-0
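A plausible explanation for the difference: the initial-exec TLS model assumes the variable's module is present when the program starts, so a library pulled in later via dlopen() (which is how httpd loads its modules) may not get a usable slot, whereas the global-dynamic model resolves the address at run time. One way to apply the flag the poster found without editing the .pro file is on the qmake command line (a sketch; adapt to however the project is actually built):

```bash
# Rebuild with the global-dynamic TLS model so the code tolerates being dlopen()ed by httpd.
qmake "QMAKE_CXXFLAGS += -ftls-model=global-dynamic" && make clean && make
```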
|
21,875,589
|
Error while extracting file from RPM (rpm2cpio)
|
I'm using the following command to extract a single file from an RPM package. rpm2cpio <RPM_NAME> | cpio -ivdm <FILE_NAME> It works fine for me, but on one system (RHEL 5.9) I'm getting this error: cpio: premature end of file I've googled it but couldn't find an appropriate answer or solution. Has anyone encountered this issue and can help?
|
Error while extracting file from RPM (rpm2cpio) I'm using the following command to extract a single file from RPM package. pm2cpio <RPM_NAME> | cpio -ivdm <FILE_NAME> It works fine for me, but on one system (RHEL5.9), I'm getting this error: cpio: premature end of file I've googled it but couldn't find any appropriate answer and solution. Can someone encountered this issue and can help?
|
linux, rpm, rhel
| 2
| 11,067
| 2
|
https://stackoverflow.com/questions/21875589/error-while-extracting-file-from-rpm-rpm2cpio
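One common cause of `cpio: premature end of file` on older hosts is a payload compressed with a format (typically xz) that the RHEL 5 rpm2cpio/cpio pair cannot decode; another is simply a truncated copy of the RPM. Both are cheap to check (a sketch; <RPM_NAME> as in the question):

```bash
# What compression does the payload use? (gzip is fine for RHEL 5; xz generally is not)
rpm -qp --qf '%{PAYLOADCOMPRESSOR}\n' <RPM_NAME>
# Is the file itself intact?
rpm -K --nosignature <RPM_NAME>
```

If the payload turns out to be xz-compressed, extracting on a newer box (or rebuilding the package with a gzip payload) is the usual way around it.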
|
17,754,307
|
cap deploy:cleanup fails with use_sudo=true
|
My capifony deployment works great, however the capifony cleanup command fails. I'm using private keys over ssh, with sudo to gain write permissions on the deployment directories. With extended logging the result of cap deploy:cleanup is this: $ cap deploy:cleanup * 2013-07-19 15:44:42 executing `deploy:cleanup' * executing "sudo -p 'sudo password: ' ls -1dt /var/www/html/releases/* | tail -n +4 | sudo -p 'sudo password: ' xargs rm -rf" Modifying permissions so that the deployment user has full write access to this directory is not an option in this instance. Has anyone seen/worked around this issue? (This is on a RHEL6 server)
|
cap deploy:cleanup fails with use_sudo=true My capifony deployment works great, however the capifony cleanup command fails. I'm using private keys over ssh, with sudo to gain write permissions on the deployment directories. With extended logging the result of cap deploy:cleanup is this: $ cap deploy:cleanup * 2013-07-19 15:44:42 executing `deploy:cleanup' * executing "sudo -p 'sudo password: ' ls -1dt /var/www/html/releases/* | tail -n +4 | sudo -p 'sudo password: ' xargs rm -rf" Modifying permissions so that the deployment user has full write access to this directory is not an option in this instance. Has anyone seen/worked around this issue? (This is on a RHEL6 server)
|
symfony, capistrano, rhel, capifony
| 2
| 1,230
| 1
|
https://stackoverflow.com/questions/17754307/cap-deploycleanup-fails-with-use-sudo-true
|
15,131,533
|
RHEL : JDK 7u15 installation error
|
I am getting following error while installing the JDK 7u15 version on Red Hat Enterprise Linux box. I am not able figure out what to do with it? I mean what is impact on running my programs? What are these pack files? charsets.pack deploy.pack javaws.pack jsse.pack localedata.pack plugin.pack rt.pack I have followed the steps given in oracle site: [URL] RHEL version: 6 Here is the exact message from putty: bash-4.1$ sudo rpm -ivh jdk-7u15-linux-i586.rpm Preparing... ########################################### [100%] 1:jdk ########################################### [100%] Unpacking JAR files... rt.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/rt.pack jsse.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/jsse.pack charsets.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/charsets.pack tools.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/lib/tools.pack localedata.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/ext/localedata.pack plugin.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/plugin.pack javaws.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/javaws.pack deploy.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/deploy.pack
|
RHEL : JDK 7u15 installation error I am getting following error while installing the JDK 7u15 version on Red Hat Enterprise Linux box. I am not able figure out what to do with it? I mean what is impact on running my programs? What are these pack files? charsets.pack deploy.pack javaws.pack jsse.pack localedata.pack plugin.pack rt.pack I have followed the steps given in oracle site: [URL] RHEL version: 6 Here is the exact message from putty: bash-4.1$ sudo rpm -ivh jdk-7u15-linux-i586.rpm Preparing... ########################################### [100%] 1:jdk ########################################### [100%] Unpacking JAR files... rt.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/rt.pack jsse.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/jsse.pack charsets.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/charsets.pack tools.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/lib/tools.pack localedata.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/ext/localedata.pack plugin.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/plugin.pack javaws.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/javaws.pack deploy.jar... Error: Could not open input file: /usr/java/jdk1.7.0_15/jre/lib/deploy.pack
|
java, java-7, rpm, rhel, rhel6
| 2
| 5,441
| 1
|
https://stackoverflow.com/questions/15131533/rhel-jdk-7u15-installation-error
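A hedged guess at the cause, given an x86_64 RHEL 6 host and the i586 JDK package: the post-install script runs the JDK's own 32-bit unpack200 binary to turn the .pack files into .jar files, and that binary cannot execute if the 32-bit C runtime is missing, leaving exactly these errors. A sketch of the check and fix under that assumption:

```bash
# Confirm unpack200 is a 32-bit binary, install the 32-bit runtime, then reinstall the JDK.
file /usr/java/jdk1.7.0_15/bin/unpack200     # expected: "ELF 32-bit LSB executable"
sudo yum install glibc.i686                  # provides /lib/ld-linux.so.2 on RHEL 6
sudo rpm -e jdk
sudo rpm -ivh jdk-7u15-linux-i586.rpm
```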
|
13,814,767
|
getting RHEL6 kernel code
|
I want to change some kernel source files for network stack testing. Is there any way to get the kernel source code for RHEL 6 Enterprise edition, and how do I apply my changes and test the results?
|
getting RHEL6 kernel code I want to change some kernel source files for network stack testing so is there any way to get kernel source code for RHEL6 Enterprise edition and how to apply the changes to get results?
|
c, linux, network-programming, linux-kernel, rhel
| 2
| 1,813
| 2
|
https://stackoverflow.com/questions/13814767/getting-rhel6-kernel-code
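Red Hat does not put the kernel sources in the binary repos, but the source RPM is published (the matching CentOS SRPM works the same way). A sketch of the usual workflow, assuming rpm-build and yum-utils are installed and the build is done as a normal user:

```bash
# Fetch the kernel source RPM and unpack it with all of Red Hat's patches applied.
yumdownloader --source kernel                    # or download kernel-*.src.rpm from the vendor/CentOS vault
rpm -ivh kernel-*.src.rpm                        # unpacks into ~/rpmbuild (SPECS/ and SOURCES/)
cd ~/rpmbuild/SPECS
rpmbuild -bp --target=$(uname -m) kernel.spec    # patched source tree lands under ../BUILD/
# Edit the network-stack files there, then rebuild with e.g. "rpmbuild -bb kernel.spec"
# and install/boot the resulting kernel RPM to test the changes.
```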
|
9,749,878
|
Can't install DBI.pm & DBD.pm on RHEL to run Perl scripts
|
I am trying to execute a Perl script that interacts with a MySQL database. I am on RHEL 5.5 and my MySQL version is 5.0.77. It returned an error saying it requires the DBI.pm module for Perl. I tried to install it via CPAN, using perl -MCPAN -e "install DBI.pm" . It tries to reach some FTP server, which times out. I then tried to install manually from the CPAN repo. DBI.pm installed properly and I proceeded with DBD.pm, but the CPAN repo has DBD.pm only for MySQL 4. I am lost... any pointers?
|
Cant install DBI.pm & DBD.pm for RHEL to run perl scripts I am trying to execute a perl script that interacts with mysql database. I am on RHEL 5.5 and my mysql version is 5.0.77 . And it returned error that it requires DBI.pm module for perl. I tried to install it via cpan , using perl -MCPAN -e "install DBI.pm" . It tries to some ftp server which times out. I tried manually install from repo of cpan. DBI.pm installed properly and I proceeded with DBD.pm but cpan repo has DBD.pm only for mysql 4. I am lost... any pointers?
|
perl, installation, rhel
| 2
| 7,786
| 3
|
https://stackoverflow.com/questions/9749878/cant-install-dbi-pm-dbd-pm-for-rhel-to-run-perl-scripts
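On RHEL 5.x both modules are packaged by Red Hat, so installing them from the distribution repos sidesteps the CPAN mirror timeouts and the mysql-version mismatch; the package names below are the standard RHEL ones (an RHN subscription or a mounted install DVD provides them):

```bash
# Install the Perl DBI layer and the MySQL driver from the RHEL repositories.
sudo yum install perl-DBI perl-DBD-MySQL

# Sanity check: both modules should load and report their versions.
perl -MDBI -MDBD::mysql -e 'print "DBI $DBI::VERSION, DBD::mysql $DBD::mysql::VERSION\n"'
```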
|
7,513,348
|
socket connect timeout in c using poll
|
I have an existing multi-threaded application which uses blocking connect() call. However, I want to introduce connect timeout for application where if server does not respond to our query in x milliseconds, application will stop trying and give error. However, I am not able to figure out how to do that using poll. @caf 's non blocking connect using select has been of great help. But I have read that select in slow compared to poll, hence I want to use poll. Could you please tell me if that is true? I am pasting his code from the post here int main(int argc, char **argv) { u_short port; /* user specified port number */ char *addr; /* will be a pointer to the address */ struct sockaddr_in address; /* the libc network address data structure */ short int sock = -1; /* file descriptor for the network socket */ fd_set fdset; struct timeval tv; if (argc != 3) { fprintf(stderr, "Usage %s <port_num> <address>\n", argv[0]); return EXIT_FAILURE; } port = atoi(argv[1]); addr = argv[2]; address.sin_family = AF_INET; address.sin_addr.s_addr = inet_addr(addr); /* assign the address */ address.sin_port = htons(port); /* translate int2port num */ sock = socket(AF_INET, SOCK_STREAM, 0); fcntl(sock, F_SETFL, O_NONBLOCK); connect(sock, (struct sockaddr *)&address, sizeof(address)); FD_ZERO(&fdset); FD_SET(sock, &fdset); tv.tv_sec = 10; /* 10 second timeout */ tv.tv_usec = 0; if (select(sock + 1, NULL, &fdset, NULL, &tv) == 1) { int so_error; socklen_t len = sizeof so_error; getsockopt(sock, SOL_SOCKET, SO_ERROR, &so_error, &len); if (so_error == 0) { printf("%s:%d is open\n", addr, port); } } close(sock); return 0; } Could you please help me to write similar functionality using poll. I am on RHEL and using gcc 4.5.x version. Update: For current code, how can change socket to blocking mode once app creates connections to server. I am not able to find a way to unset this O_NONBLOCK. Update 2: fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) & ~O_NONBLOCK); One post has pointed that we can do this with above command. Didn't get the login though.
|
socket connect timeout in c using poll I have an existing multi-threaded application which uses blocking connect() call. However, I want to introduce connect timeout for application where if server does not respond to our query in x milliseconds, application will stop trying and give error. However, I am not able to figure out how to do that using poll. @caf 's non blocking connect using select has been of great help. But I have read that select in slow compared to poll, hence I want to use poll. Could you please tell me if that is true? I am pasting his code from the post here int main(int argc, char **argv) { u_short port; /* user specified port number */ char *addr; /* will be a pointer to the address */ struct sockaddr_in address; /* the libc network address data structure */ short int sock = -1; /* file descriptor for the network socket */ fd_set fdset; struct timeval tv; if (argc != 3) { fprintf(stderr, "Usage %s <port_num> <address>\n", argv[0]); return EXIT_FAILURE; } port = atoi(argv[1]); addr = argv[2]; address.sin_family = AF_INET; address.sin_addr.s_addr = inet_addr(addr); /* assign the address */ address.sin_port = htons(port); /* translate int2port num */ sock = socket(AF_INET, SOCK_STREAM, 0); fcntl(sock, F_SETFL, O_NONBLOCK); connect(sock, (struct sockaddr *)&address, sizeof(address)); FD_ZERO(&fdset); FD_SET(sock, &fdset); tv.tv_sec = 10; /* 10 second timeout */ tv.tv_usec = 0; if (select(sock + 1, NULL, &fdset, NULL, &tv) == 1) { int so_error; socklen_t len = sizeof so_error; getsockopt(sock, SOL_SOCKET, SO_ERROR, &so_error, &len); if (so_error == 0) { printf("%s:%d is open\n", addr, port); } } close(sock); return 0; } Could you please help me to write similar functionality using poll. I am on RHEL and using gcc 4.5.x version. Update: For current code, how can change socket to blocking mode once app creates connections to server. I am not able to find a way to unset this O_NONBLOCK. Update 2: fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) & ~O_NONBLOCK); One post has pointed that we can do this with above command. Didn't get the login though.
|
c, sockets, asynchronous, rhel
| 2
| 8,109
| 2
|
https://stackoverflow.com/questions/7513348/socket-connect-timeout-in-c-using-poll
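The select()-to-poll() translation is mostly mechanical: wait for POLLOUT on the connecting socket, then read SO_ERROR, exactly as the quoted code does with select(). Below is a self-contained sketch (error handling trimmed for brevity); it also addresses the two updates by saving the original fcntl() flags and restoring them once the connection is established, which is the standard way to drop O_NONBLOCK again. Whether poll() is measurably faster than select() for a single descriptor is doubtful; the practical advantage is that poll() has no FD_SETSIZE limit.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Connect with a timeout (milliseconds) using poll(); returns 0 on success, -1 on failure. */
static int connect_with_timeout(int sock, const struct sockaddr *addr, socklen_t len, int timeout_ms)
{
    struct pollfd pfd;
    int flags, rc, so_error = 0;
    socklen_t olen = sizeof(so_error);

    flags = fcntl(sock, F_GETFL, 0);
    fcntl(sock, F_SETFL, flags | O_NONBLOCK);        /* start a non-blocking connect */

    rc = connect(sock, addr, len);
    if (rc < 0 && errno != EINPROGRESS)
        return -1;

    if (rc < 0) {                                    /* in progress: wait until writable or timed out */
        pfd.fd = sock;
        pfd.events = POLLOUT;
        rc = poll(&pfd, 1, timeout_ms);
        if (rc <= 0)                                 /* 0 = timeout, <0 = poll error */
            return -1;
        if (getsockopt(sock, SOL_SOCKET, SO_ERROR, &so_error, &olen) < 0 || so_error != 0)
            return -1;                               /* the connect itself failed */
    }

    fcntl(sock, F_SETFL, flags);                     /* restore the original (blocking) mode */
    return 0;
}

int main(int argc, char **argv)
{
    struct sockaddr_in address;
    int sock;

    if (argc != 3) {
        fprintf(stderr, "Usage: %s <port_num> <address>\n", argv[0]);
        return EXIT_FAILURE;
    }

    memset(&address, 0, sizeof(address));
    address.sin_family = AF_INET;
    address.sin_addr.s_addr = inet_addr(argv[2]);
    address.sin_port = htons((unsigned short)atoi(argv[1]));

    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (connect_with_timeout(sock, (struct sockaddr *)&address, sizeof(address), 10000) == 0)
        printf("%s:%s is open\n", argv[2], argv[1]);

    close(sock);
    return 0;
}
```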
|
5,993,079
|
rvm: Syntax error when I'm trying to install on a machine running SME Server
|
I'm trying to set up a Rails server to run an app that I just wrote on a box running SME Server 7.4. I've installed curl , zlib , and readline , but when I run the rvm script, I get this output: $ bash < <(curl -sk [URL] Cloning into rvm... remote: Counting objects: 4757, done. remote: Compressing objects: 100% (2427/2427), done. remote: Total 4757 (delta 3082), reused 3183 (delta 1672) Receiving objects: 100% (4757/4757), 1.57 MiB | 1.54 MiB/s, done. Resolving deltas: 100% (3082/3082), done. bash: line 298: syntax error near unexpected token `"--trace"' Any idea what could be causing this?
|
rvm: Syntax error when I'm trying to install on a machine running SME Server I'm trying to set up a Rails server to run an app that I just wrote on a box running SME Server 7.4. I've installed curl , zlib , and readline , but when I run the rvm script, I get this output: $ bash < <(curl -sk [URL] Cloning into rvm... remote: Counting objects: 4757, done. remote: Compressing objects: 100% (2427/2427), done. remote: Total 4757 (delta 3082), reused 3183 (delta 1672) Receiving objects: 100% (4757/4757), 1.57 MiB | 1.54 MiB/s, done. Resolving deltas: 100% (3082/3082), done. bash: line 298: syntax error near unexpected token `"--trace"' Any idea what could be causing this?
|
ruby-on-rails, ruby, centos, rvm, rhel
| 2
| 687
| 1
|
https://stackoverflow.com/questions/5993079/rvm-syntax-error-when-im-trying-to-install-on-a-machine-running-sme-server
|
76,528,999
|
Cannot load github.com/open-policy-agent/opa/capabilities: no Go source files
|
I was trying to install the OPA library using command go get github.com/open-policy-agent/opa/rego but installation is getting stuck with message build github.com/open-policy-agent/opa/capabilities: cannot load github.com/open-policy-agent/opa/capabilities: no Go source files Below is the o/p of command: -bash-4.2$ go get github.com/open-policy-agent/opa/rego go: finding github.com/open-policy-agent/opa/rego latest go: finding github.com/open-policy-agent/opa v0.53.1 go: downloading github.com/open-policy-agent/opa v0.53.1 go: extracting github.com/open-policy-agent/opa v0.53.1 go: downloading github.com/gorilla/mux v1.8.0 go: downloading github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 go: downloading github.com/sirupsen/logrus v1.9.2 go: downloading github.com/go-ini/ini v1.67.0 go: downloading go.opentelemetry.io/otel v1.14.0 go: extracting github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 go: downloading github.com/prometheus/client_golang v1.15.1 go: downloading github.com/OneOfOne/xxhash v1.2.8 go: downloading gopkg.in/yaml.v2 v2.4.0 go: downloading github.com/gobwas/glob v0.2.3 go: extracting github.com/gorilla/mux v1.8.0 go: downloading github.com/tchap/go-patricia v2.2.6+incompatible go: extracting github.com/prometheus/client_golang v1.15.1 go: extracting gopkg.in/yaml.v2 v2.4.0 go: extracting github.com/sirupsen/logrus v1.9.2 go: extracting github.com/OneOfOne/xxhash v1.2.8 go: extracting go.opentelemetry.io/otel v1.14.0 go: extracting github.com/tchap/go-patricia v2.2.6+incompatible go: extracting github.com/go-ini/ini v1.67.0 go: downloading github.com/yashtewari/glob-intersection v0.1.0 go: downloading github.com/tchap/go-patricia/v2 v2.3.1 go: downloading github.com/ghodss/yaml v1.0.0 go: downloading github.com/agnivade/levenshtein v1.1.1 go: extracting github.com/tchap/go-patricia/v2 v2.3.1 go: downloading github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 go: extracting github.com/yashtewari/glob-intersection v0.1.0 go: extracting github.com/ghodss/yaml v1.0.0 go: extracting github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 go: extracting github.com/gobwas/glob v0.2.3 go: downloading github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb go: downloading golang.org/x/sys v0.8.0 go: extracting github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb go: extracting golang.org/x/sys v0.8.0 go: downloading google.golang.org/protobuf v1.30.0 go: downloading github.com/prometheus/client_model v0.3.0 go: downloading github.com/prometheus/common v0.42.0 go: downloading github.com/beorn7/perks v1.0.1 go: downloading github.com/cespare/xxhash v1.1.0 go: extracting github.com/agnivade/levenshtein v1.1.1 go: extracting github.com/prometheus/common v0.42.0 go: extracting github.com/beorn7/perks v1.0.1 go: extracting github.com/prometheus/client_model v0.3.0 go: extracting github.com/cespare/xxhash v1.1.0 go: downloading github.com/prometheus/procfs v0.9.0 go: downloading github.com/cespare/xxhash/v2 v2.2.0 go: extracting google.golang.org/protobuf v1.30.0 go: downloading github.com/golang/protobuf v1.5.3 go: extracting github.com/cespare/xxhash/v2 v2.2.0 go: extracting github.com/golang/protobuf v1.5.3 go: extracting github.com/prometheus/procfs v0.9.0 go: downloading go.opentelemetry.io/otel/sdk v1.14.0 go: extracting go.opentelemetry.io/otel/sdk v1.14.0 go: downloading github.com/go-logr/logr v1.2.4 go: extracting github.com/go-logr/logr v1.2.4 go: downloading github.com/go-logr/stdr v1.2.2 
go: extracting github.com/go-logr/stdr v1.2.2 go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.4 go: extracting github.com/matttproud/golang_protobuf_extensions v1.0.4 go: downloading go.opentelemetry.io/otel/trace v1.14.0 go: extracting go.opentelemetry.io/otel/trace v1.14.0 go: finding github.com/OneOfOne/xxhash v1.2.8 build github.com/open-policy-agent/opa/capabilities: cannot load github.com/open-policy-agent/opa/capabilities: no Go source files Is there any pre-requisite before installation of OPA? or am I missing something during installation? -bash-4.2$ uname -a Linux dev-rh72-tas-1.in.intinfra.com 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
|
Cannot load github.com/open-policy-agent/opa/capabilities: no Go source files I was trying to install the OPA library using command go get github.com/open-policy-agent/opa/rego but installation is getting stuck with message build github.com/open-policy-agent/opa/capabilities: cannot load github.com/open-policy-agent/opa/capabilities: no Go source files Below is the o/p of command: -bash-4.2$ go get github.com/open-policy-agent/opa/rego go: finding github.com/open-policy-agent/opa/rego latest go: finding github.com/open-policy-agent/opa v0.53.1 go: downloading github.com/open-policy-agent/opa v0.53.1 go: extracting github.com/open-policy-agent/opa v0.53.1 go: downloading github.com/gorilla/mux v1.8.0 go: downloading github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 go: downloading github.com/sirupsen/logrus v1.9.2 go: downloading github.com/go-ini/ini v1.67.0 go: downloading go.opentelemetry.io/otel v1.14.0 go: extracting github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 go: downloading github.com/prometheus/client_golang v1.15.1 go: downloading github.com/OneOfOne/xxhash v1.2.8 go: downloading gopkg.in/yaml.v2 v2.4.0 go: downloading github.com/gobwas/glob v0.2.3 go: extracting github.com/gorilla/mux v1.8.0 go: downloading github.com/tchap/go-patricia v2.2.6+incompatible go: extracting github.com/prometheus/client_golang v1.15.1 go: extracting gopkg.in/yaml.v2 v2.4.0 go: extracting github.com/sirupsen/logrus v1.9.2 go: extracting github.com/OneOfOne/xxhash v1.2.8 go: extracting go.opentelemetry.io/otel v1.14.0 go: extracting github.com/tchap/go-patricia v2.2.6+incompatible go: extracting github.com/go-ini/ini v1.67.0 go: downloading github.com/yashtewari/glob-intersection v0.1.0 go: downloading github.com/tchap/go-patricia/v2 v2.3.1 go: downloading github.com/ghodss/yaml v1.0.0 go: downloading github.com/agnivade/levenshtein v1.1.1 go: extracting github.com/tchap/go-patricia/v2 v2.3.1 go: downloading github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 go: extracting github.com/yashtewari/glob-intersection v0.1.0 go: extracting github.com/ghodss/yaml v1.0.0 go: extracting github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 go: extracting github.com/gobwas/glob v0.2.3 go: downloading github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb go: downloading golang.org/x/sys v0.8.0 go: extracting github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb go: extracting golang.org/x/sys v0.8.0 go: downloading google.golang.org/protobuf v1.30.0 go: downloading github.com/prometheus/client_model v0.3.0 go: downloading github.com/prometheus/common v0.42.0 go: downloading github.com/beorn7/perks v1.0.1 go: downloading github.com/cespare/xxhash v1.1.0 go: extracting github.com/agnivade/levenshtein v1.1.1 go: extracting github.com/prometheus/common v0.42.0 go: extracting github.com/beorn7/perks v1.0.1 go: extracting github.com/prometheus/client_model v0.3.0 go: extracting github.com/cespare/xxhash v1.1.0 go: downloading github.com/prometheus/procfs v0.9.0 go: downloading github.com/cespare/xxhash/v2 v2.2.0 go: extracting google.golang.org/protobuf v1.30.0 go: downloading github.com/golang/protobuf v1.5.3 go: extracting github.com/cespare/xxhash/v2 v2.2.0 go: extracting github.com/golang/protobuf v1.5.3 go: extracting github.com/prometheus/procfs v0.9.0 go: downloading go.opentelemetry.io/otel/sdk v1.14.0 go: extracting go.opentelemetry.io/otel/sdk v1.14.0 go: downloading github.com/go-logr/logr v1.2.4 go: extracting 
github.com/go-logr/logr v1.2.4 go: downloading github.com/go-logr/stdr v1.2.2 go: extracting github.com/go-logr/stdr v1.2.2 go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.4 go: extracting github.com/matttproud/golang_protobuf_extensions v1.0.4 go: downloading go.opentelemetry.io/otel/trace v1.14.0 go: extracting go.opentelemetry.io/otel/trace v1.14.0 go: finding github.com/OneOfOne/xxhash v1.2.8 build github.com/open-policy-agent/opa/capabilities: cannot load github.com/open-policy-agent/opa/capabilities: no Go source files Is there any pre-requisite before installation of OPA? or am I missing something during installation? -bash-4.2$ uname -a Linux dev-rh72-tas-1.in.intinfra.com 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
|
go, rhel, rhel7, opa, open-policy-agent
| 2
| 243
| 1
|
https://stackoverflow.com/questions/76528999/cannot-load-github-com-open-policy-agent-opa-capabilities-no-go-source-files
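The `go: finding ... latest` lines suggest an old Go toolchain running in GOPATH mode. The opa/capabilities package is guarded by build constraints (it relies on go:embed), so an old toolchain sees no buildable Go files in it, which matches the error. Recent OPA releases expect a modern, module-aware Go, so the usual remedy is to upgrade the toolchain and pull OPA inside a module; a sketch (the module path below is a placeholder):

```bash
go version                                   # check the installed toolchain first (assumption: it is too old for OPA v0.53.x)
# after installing a current Go release:
mkdir opatest && cd opatest
go mod init example.com/opatest              # placeholder module path
go get github.com/open-policy-agent/opa/rego@v0.53.1
```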
|
68,978,130
|
This command has to be run under the root user. But I am root only
|
I have a simple playbook that tries to install packages. My task is failing(see output). I can ping the host, and manually I can run the command as the super user( tco ). my ansible.cfg [defaults] inventory = /Users/<myuser>/<automation>/ansible/inventory remote_user = tco packages packages: - yum-utils - sshpass playbook --- - hosts: all vars_files: - vars/packages.yml tasks: - name: testing connection ping: remote_user: tco - name: Installing packages yum: name: "{{ packages }}" state: present Running playbook: ansible-playbook my-playbook.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo Output: ansible-playbook register_sys_rh.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo BECOME password: PLAY [all] ****************************************************************************************************************************************************************** TASK [Gathering Facts] ****************************************************************************************************************************************************** ok: [xx.xxx.13.105] TASK [testing connection] *************************************************************************************************************************************************** ok: [xx.xxx.13.105] TASK [Installing packages] ************************************************************************************************************************************************** fatal: [xx.xxx.13.105]: FAILED! => {"changed": false, "msg": "This command has to be run under the root user.", "results": []} PLAY RECAP ****************************************************************************************************************************************************************** xx.xxx.13.105 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 inventory: ansible-inventory --list | jq '.master' { "hosts": [ "xx.xxx.13.105" ] } I have copied my id_rsa.pub to the host already. I cannot loging to the host without a password. I can log in and do sudo su or run any other command that needs root privilege. [tco@control-plane-0 ~]$ whoami tco [tco@control-plane-0 ~]$ hostname -I xx.xxx.13.105 192.168.122.1 [tco@control-plane-0 ~]$ sudo su [sudo] password for tco: [root@control-plane-0 tco]# I explicitly override user, sudo_method through ansible_cli, no idea what I am doing wrong here. Thanks in advance.
|
This command has to be run under the root user. But I am root only I have a simple playbook that tries to install packages. My task is failing(see output). I can ping the host, and manually I can run the command as the super user( tco ). my ansible.cfg [defaults] inventory = /Users/<myuser>/<automation>/ansible/inventory remote_user = tco packages packages: - yum-utils - sshpass playbook --- - hosts: all vars_files: - vars/packages.yml tasks: - name: testing connection ping: remote_user: tco - name: Installing packages yum: name: "{{ packages }}" state: present Running playbook: ansible-playbook my-playbook.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo Output: ansible-playbook register_sys_rh.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo BECOME password: PLAY [all] ****************************************************************************************************************************************************************** TASK [Gathering Facts] ****************************************************************************************************************************************************** ok: [xx.xxx.13.105] TASK [testing connection] *************************************************************************************************************************************************** ok: [xx.xxx.13.105] TASK [Installing packages] ************************************************************************************************************************************************** fatal: [xx.xxx.13.105]: FAILED! => {"changed": false, "msg": "This command has to be run under the root user.", "results": []} PLAY RECAP ****************************************************************************************************************************************************************** xx.xxx.13.105 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 inventory: ansible-inventory --list | jq '.master' { "hosts": [ "xx.xxx.13.105" ] } I have copied my id_rsa.pub to the host already. I cannot loging to the host without a password. I can log in and do sudo su or run any other command that needs root privilege. [tco@control-plane-0 ~]$ whoami tco [tco@control-plane-0 ~]$ hostname -I xx.xxx.13.105 192.168.122.1 [tco@control-plane-0 ~]$ sudo su [sudo] password for tco: [root@control-plane-0 tco]# I explicitly override user, sudo_method through ansible_cli, no idea what I am doing wrong here. Thanks in advance.
|
ansible, root, rhel
| 2
| 15,417
| 2
|
https://stackoverflow.com/questions/68978130/this-command-has-to-be-run-under-the-root-user-but-i-am-root-only
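The clue is in the flags: `--become-user=tco` tells Ansible to escalate *to* tco, i.e. straight back to the unprivileged user, so the yum task never actually runs as root. Leaving become_user at its default (root) is normally all that is required; a sketch of the same run without the extra flag:

```bash
# --become escalates to root (the default become_user) via sudo; only the sudo password is prompted for.
ansible-playbook my-playbook.yml --limit master --become --ask-become-pass
```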
|
60,373,118
|
How to add main-menu option in CentOS 8
|
In CentOS 7, custom menu items can be added via the Main Menu application. Now that I have upgraded to CentOS 8, this option is missing.
|
How to add main-menu option in CentOS 8 For CentOS 7, a custom menu items can be added via the Main-Menu application. Now I have upgraded to CentOS 8 this option is missing.
|
rhel, gnome-3, centos8
| 2
| 2,336
| 1
|
https://stackoverflow.com/questions/60373118/how-to-add-main-menu-option-in-centos-8
|
46,049,215
|
How to recover from routef command on rhel?
|
By mistake I executed the routef command on a Linux machine. Please help me recover from it before my manager blasts me.
|
How to recover from routef command on rhel? By mistake I have executed routef command on linux machine. Please help me recover it before my manager blasts at me.
|
linux, linux-kernel, command, rhel
| 2
| 2,048
| 2
|
https://stackoverflow.com/questions/46049215/how-to-recover-from-routef-command-on-rhel
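routef only flushes the in-kernel routing table; nothing on disk is touched, so recovery means re-creating the routes (or letting the network service rebuild them from the ifcfg-* files). A sketch, with placeholder gateway and interface names, assuming there is still console or out-of-band access to the box:

```bash
# Option 1: put the default route back by hand (values are examples).
ip route add default via 192.168.1.1 dev eth0

# Option 2: on RHEL 6, restart networking so routes are rebuilt from the ifcfg files.
service network restart
```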
|
30,239,359
|
IF statement not working in RHEL 6 (works in RHEL 5)
|
I have a simple if statement that works fine in RHEL 5, but for some inexplicable reason, fails in RHEL 6: if [[ ! $1 =~ "(one|two|three)" ]] ; then echo -e "\n***Invalid number" usage exit 1 else action=$1 fi I can use a case statement which works fine or re-write it but more than anything, I'm curious as to what has changed, assuming it is the version of RHEL and not something else?
|
IF statement not working in RHEL 6 (workd in RHEL 5) I have a simple if statement that works fine in RHEL 5, but for some inexplicable reason, fails in RHEL 6: if [[ ! $1 =~ "(one|two|three)" ]] ; then echo -e "\n***Invalid number" usage exit 1 else action=$1 fi I can use a case statement which works fine or re-write it but more than anything, I'm curious as to what has changed, assuming it is the version of RHEL and not something else?
|
regex, bash, if-statement, rhel
| 2
| 145
| 1
|
https://stackoverflow.com/questions/30239359/if-statement-not-working-in-rhel-6-workd-in-rhel-5
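What changed is bash, not RHEL as such: in the newer bash shipped with RHEL 6, a quoted right-hand side of `=~` is matched as a literal string rather than as a regular expression, whereas the older bash on RHEL 5 still treated it as a regex. Dropping the quotes, or putting the pattern in a variable, restores the old behaviour; a sketch of the variable form:

```bash
# Unquoted pattern (kept in a variable) so bash treats it as a regex again.
re='(one|two|three)'        # add anchors, e.g. '^(one|two|three)$', if an exact match is intended
if [[ ! $1 =~ $re ]]; then
    echo -e "\n***Invalid number"
    usage
    exit 1
else
    action=$1
fi
```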
|
29,787,006
|
Exporting X Display running JavaFX Scene3D
|
I have a linux box running RHEL 6.5 and I have a JAR of the following sample JavaFX application [URL] I can compile and run it just fine with no errors and everything displays correctly (identical to [URL] ). However when I try to export my display to localhost:0.0 I get the following error for each of my Material, Shape3D and Mesh objects: WARNING: System can't support ConditionalFeature.SCENE3D The JavaFX application opens but there are no 3D objects within the window. I have tried exporting to another RHEL 6.5 linux box and the same problem occurs. I have the 1.8.0_45 JDK installed and version 2.2.12 of the X11 Intel driver. Is there some configuration of either X or JavaFX which will allow me to correctly export the application to another machine? EDIT: So I ran JAR file using both of the following: java -jar -Dprism.order=sw MoleculeSampleApp.jar java -jar -Dprism.order=j2d MoleculeSampleApp.jar without exporting to either localhost or another display and ran into the same problem on my local machine. This leads me to believe that exporting the display is causing JavaFX to stop using hardware acceleration.
|
Exporting X Display running JavaFX Scene3D I have a linux box running RHEL 6.5 and I have a JAR of the following sample JavaFX application [URL] I can compile and run it just fine with no errors and everything displays correctly (identical to [URL] ). However when I try to export my display to localhost:0.0 I get the following error for each of my Material, Shape3D and Mesh objects: WARNING: System can't support ConditionalFeature.SCENE3D The JavaFX application opens but there are no 3D objects within the window. I have tried exporting to another RHEL 6.5 linux box and the same problem occurs. I have the 1.8.0_45 JDK installed and version 2.2.12 of the X11 Intel driver. Is there some configuration of either X or JavaFX which will allow me to correctly export the application to another machine? EDIT: So I ran JAR file using both of the following: java -jar -Dprism.order=sw MoleculeSampleApp.jar java -jar -Dprism.order=j2d MoleculeSampleApp.jar without exporting to either localhost or another display and ran into the same problem on my local machine. This leads me to believe that exporting the display is causing JavaFX to stop using hardware acceleration.
|
java, linux, javafx, x11, rhel
| 2
| 927
| 1
|
https://stackoverflow.com/questions/29787006/exporting-x-display-running-javafx-scene3d
|
27,405,031
|
RHEL Release 5.5 (Tikanga), df --total option
|
I have a RHEL (Redhat Enterprise Linux) v6.5 (Santiago) server. On this server if i do a df -help there are list of options available. I am interested in the option --total However there is an older version of RHEL (v5.5). In which there is no --total option. My question is, I have a command like this: df -h --total | grep total | awk 'NR==1{print$2}+NR==1{print$3}+NR==1{print$4}+NR==1{print$5}' which gives the output as 62G 39G 21G 66% Where 62G is Total size of the Disk 39G is Used 21G is remaining 61% Total usage % The above command is working fine in RHEL v6.5. But fails in RHEL v5.5 since it does not have a --total option for df command. When i run the same command on RHEL v5.5 i get the below error: df: unrecognized option --total' Try df --help' for more information. So is there a command that can give me the output in the following way: Total Disk Space Used Space Remaining Disk space Usage % Ex: 62G 39G 21G 66%
|
RHEL Release 5.5 (Tikanga), df --total option I have a RHEL (Redhat Enterprise Linux) v6.5 (Santiago) server. On this server if i do a df -help there are list of options available. I am interested in the option --total However there is an older version of RHEL (v5.5). In which there is no --total option. My question is, I have a command like this: df -h --total | grep total | awk 'NR==1{print$2}+NR==1{print$3}+NR==1{print$4}+NR==1{print$5}' which gives the output as 62G 39G 21G 66% Where 62G is Total size of the Disk 39G is Used 21G is remaining 61% Total usage % The above command is working fine in RHEL v6.5. But fails in RHEL v5.5 since it does not have a --total option for df command. When i run the same command on RHEL v5.5 i get the below error: df: unrecognized option --total' Try df --help' for more information. So is there a command that can give me the output in the following way: Total Disk Space Used Space Remaining Disk space Usage % Ex: 62G 39G 21G 66%
|
linux, awk, rhel
| 2
| 460
| 1
|
https://stackoverflow.com/questions/27405031/rhel-release-5-5-tikanga-df-total-option
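Where --total is missing, awk can do the summation over df's portable output (columns: 2 = size, 3 = used, 4 = available, all in 1K blocks). A sketch; the unit conversion is cruder than df -h, and adding -l restricts the sum to local filesystems:

```bash
# Print: total used available use%  summed over all mounted filesystems.
df -P -k | awk 'NR > 1 { t += $2; u += $3; a += $4 }
                END   { printf "%.0fG %.0fG %.0fG %d%%\n",
                        t/1024/1024, u/1024/1024, a/1024/1024, (u/t)*100 }'
```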
|
26,607,482
|
JBoss fails to start with JDK 1.7
|
Jboss 5.1 fails to start with jdk1.7.0_25 , but it starts fine with jdk1.6 With jdk1.7.0_25 , jboss start fails with the following error. JBoss Bootstrap Environment JBOSS_HOME: /home/zaman/jboss/jboss-eap-5.1/jboss-as JAVA: /usr/java/jdk1.7.0_25//bin/java JAVA_OPTS: -Dprogram.name=run.sh -server -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dsun.lang.ClassLoader.allowArraySyntax=true -Djava.net.preferIPv4Stack=true CLASSPATH: /home/zaman/jboss/jboss-eap-5.1/jboss-as/bin/run.jar:/usr/java/jdk1.7.0_25//lib/tools.jar ========================================================================= Exception in thread "main" java.lang.NoClassDefFoundError: org/jboss/bootstrap/BaseServerConfig at org.jboss.bootstrap.AbstractServerImpl.doInit(AbstractServerImpl.java:190) at org.jboss.bootstrap.AbstractServerImpl.init(AbstractServerImpl.java:173) at org.jboss.bootstrap.AbstractServerImpl.init(AbstractServerImpl.java:143) at org.jboss.Main.boot(Main.java:218) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:724) Caused by: java.lang.ClassNotFoundException: org.jboss.bootstrap.BaseServerConfig at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 6 more java.io.File entry is already there in constructor parameter in profile.xml file as suggested in [URL] grep "java.io" ../server/default/conf/bootstrap/profile.xml <constructor><parameter class="java.io.File"><inject bean="BootstrapProfileFactory" property="attachmentStoreRoot" /></parameter></constructor> How to fix this issue . We need to use JDK1.7 only.
|
Jbosss fails to start with jdk1.7 Jboss 5.1 fails to start with jdk1.7.0_25 , but it starts fine with jdk1.6 With jdk1.7.0_25 , jboss start fails with the following error. JBoss Bootstrap Environment JBOSS_HOME: /home/zaman/jboss/jboss-eap-5.1/jboss-as JAVA: /usr/java/jdk1.7.0_25//bin/java JAVA_OPTS: -Dprogram.name=run.sh -server -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dsun.lang.ClassLoader.allowArraySyntax=true -Djava.net.preferIPv4Stack=true CLASSPATH: /home/zaman/jboss/jboss-eap-5.1/jboss-as/bin/run.jar:/usr/java/jdk1.7.0_25//lib/tools.jar ========================================================================= Exception in thread "main" java.lang.NoClassDefFoundError: org/jboss/bootstrap/BaseServerConfig at org.jboss.bootstrap.AbstractServerImpl.doInit(AbstractServerImpl.java:190) at org.jboss.bootstrap.AbstractServerImpl.init(AbstractServerImpl.java:173) at org.jboss.bootstrap.AbstractServerImpl.init(AbstractServerImpl.java:143) at org.jboss.Main.boot(Main.java:218) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:724) Caused by: java.lang.ClassNotFoundException: org.jboss.bootstrap.BaseServerConfig at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 6 more java.io.File entry is already there in constructor parameter in profile.xml file as suggested in [URL] grep "java.io" ../server/default/conf/bootstrap/profile.xml <constructor><parameter class="java.io.File"><inject bean="BootstrapProfileFactory" property="attachmentStoreRoot" /></parameter></constructor> How to fix this issue . We need to use JDK1.7 only.
|
java, jboss, rhel
| 2
| 7,215
| 2
|
https://stackoverflow.com/questions/26607482/jbosss-fails-to-start-with-jdk1-7
|
24,657,229
|
Linux How to copy symbolic link and preserve date
|
I have a directory with this kind of file: 0 lrwxrwxrwx 1 utges_m gid36 12 May 17 2011 libedit.so -> libedit.so.2 0 lrwxrwxrwx 1 utges_m gid36 16 Apr 16 2009 libedit.so.2 -> libedit.so.2.0.9 352 -rw-r--r-- 1 utges_m gid36 358958 Mar 10 2010 libedit.so.2.0.9 I would like to copy file and symlink and preserve date. I started with this command: cp -dp sourceDir/* destinationDir and the result is: 0 lrwxrwxrwx 1 siri gid33 12 Jul 9 16:38 libedit.so -> libedit.so.2 0 lrwxrwxrwx 1 siri gid33 16 Jul 9 16:38 libedit.so.2 -> libedit.so.2.0.9 356 -rw-r--r-- 1 siri gid33 358958 Mar 10 2010 libedit.so.2.0.9 So, I wrote this simple bash script: cp -dp $OLDDIR/* $NEWDIR ls $OLDDIR | while read f; do { TS=$(stat -c '%Y' "$OLDDIR/$f") DATE=$(date -d "UTC 1970-01-01 $TS secs") echo "$f $DATE" touch -d "${DATE}" "$NEWDIR/$f" } done; The script output is: libedit.so Tue May 17 21:35:14 CEST 2011 libedit.so.2 Thu Apr 16 10:30:05 CEST 2009 libedit.so.2.0.9 Wed Mar 10 16:31:17 CET 2010 but unfortunately the result is: 0 lrwxrwxrwx 1 siri gid33 12 Jul 9 16:55 libedit.so -> libedit.so.2 0 lrwxrwxrwx 1 siri gid33 16 Jul 9 16:55 libedit.so.2 -> libedit.so.2.0.9 356 -rw-r--r-- 1 siri gid33 358958 Mar 10 2010 libedit.so.2.0.9 What's wrong with what I did? I'm using Red Hat Enterprise Linux ES release 4 (Nahant Update 3)
|
Linux How to copy symbolic link and preserve date I have a directory with this kind of file: 0 lrwxrwxrwx 1 utges_m gid36 12 May 17 2011 libedit.so -> libedit.so.2 0 lrwxrwxrwx 1 utges_m gid36 16 Apr 16 2009 libedit.so.2 -> libedit.so.2.0.9 352 -rw-r--r-- 1 utges_m gid36 358958 Mar 10 2010 libedit.so.2.0.9 I would like to copy file and symlink and preserve date. I started with this command: cp -dp sourceDir/* destinationDir and the result is: 0 lrwxrwxrwx 1 siri gid33 12 Jul 9 16:38 libedit.so -> libedit.so.2 0 lrwxrwxrwx 1 siri gid33 16 Jul 9 16:38 libedit.so.2 -> libedit.so.2.0.9 356 -rw-r--r-- 1 siri gid33 358958 Mar 10 2010 libedit.so.2.0.9 So, I wrote this simple bash script: cp -dp $OLDDIR/* $NEWDIR ls $OLDDIR | while read f; do { TS=$(stat -c '%Y' "$OLDDIR/$f") DATE=$(date -d "UTC 1970-01-01 $TS secs") echo "$f $DATE" touch -d "${DATE}" "$NEWDIR/$f" } done; The script output is: libedit.so Tue May 17 21:35:14 CEST 2011 libedit.so.2 Thu Apr 16 10:30:05 CEST 2009 libedit.so.2.0.9 Wed Mar 10 16:31:17 CET 2010 but unfortunately the result is: 0 lrwxrwxrwx 1 siri gid33 12 Jul 9 16:55 libedit.so -> libedit.so.2 0 lrwxrwxrwx 1 siri gid33 16 Jul 9 16:55 libedit.so.2 -> libedit.so.2.0.9 356 -rw-r--r-- 1 siri gid33 358958 Mar 10 2010 libedit.so.2.0.9 What's wrong with what I did? I'm using Red Hat Enterprise Linux ES release 4 (Nahant Update 3)
|
linux, bash, rhel
| 2
| 4,323
| 3
|
https://stackoverflow.com/questions/24657229/linux-how-to-copy-symbolic-link-and-preserve-date
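The likely snag: plain `touch -d` follows the symlink and stamps the target, so the links themselves keep their new dates. The timestamp on a link itself can only be set with `touch -h` (no-dereference), and only if the coreutils build supports it, which the very old coreutils on RHEL 4 may not; that caveat aside, a sketch:

```bash
# Copy without dereferencing, preserving what cp can, then restamp each symlink itself.
cp -dp "$OLDDIR"/* "$NEWDIR"
for f in "$OLDDIR"/*; do
    ts=$(stat -c '%Y' "$f")                            # mtime of the link itself (stat does not follow links by default)
    touch -h -d "@$ts" "$NEWDIR/$(basename "$f")"      # -h = act on the link, @<secs> = epoch timestamp
done
```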
|
20,757,940
|
A second getpwuid call appears to overwrite old value
|
Here's a small C program that prints (well, supposed to print) the real and effective IDs of a process when the file has the setuid flag set. In this program, when I call getpwuid a second time (L.No 38), it tends to overwrite the value of the variable realUserName that was obtained in L.No 24. I'm unable to explain this behavior. Is this the expected behavior and why? I'm trying this in a Linux box (RHEL 2.6.18-371.1.2.el5). 1 /* Filename: test.c 2 * Notes: 3 * 1] ./test owned by user cadmn (userID: 3585) 4 * 2] ./test run by user pmn (4471) 5 * 3] ./test has the setuid bit switched-on. 6 */ 7 #include <stdio.h> 8 #include <pwd.h> 9 #include <sys/types.h> 10 #include <unistd.h> 11 int main() 12 { 13 14 uid_t realId, effectiveId; 15 struct passwd *realUser, *effUser; 16 17 realId = getuid(); // realId = 4471 18 effectiveId = geteuid(); //effectiveId = 3585 19 20 printf("Real ID is %i and Effective ID is %i\n", (int)realId, (int)effectiveId); 21 //prints 4472 and 3585, respectively 22 23 realUser = getpwuid(realId); 24 char *realUserName = realUser->pw_name; //realUserName = pmn 25 26 printf("Real ID (name) at this point is %s\n", realUserName); 27 // prints pmn. 28 29 /* 30 ********************************************************* 31 * * 32 * everything works as expected up to this point * 33 * * 34 ********************************************************* 35 */ 36 37 // The value obtained from this call is not used anywhere in this program 38 effUser = getpwuid(effectiveId); 39 printf("\nCalled getpwuid with the effectiveId\n\n"); 40 41 printf("Real ID is %i and Effective ID is %i\n", (int)realId, (int)effectiveId); 42 //prints 4472 and 3585, respectively 43 44 printf("Real ID (name) at this point is %s.\n", realUserName); 45 // Expect to still see 'pmn' printed; though see 'cadmn' as the output! 46 // Why does this happen? 47 48 return 0; 49 } 50 Output: pmn@rhel /tmp/temp > id pmn uid=4471(pmn) gid=1000(nusers) groups=1000(nusers) pmn@rhel /tmp/temp > pmn@rhel /tmp/temp > id cadmn uid=3585(cadmn) gid=401(cusers) groups=401(cusers) pmn@rhel /tmp/temp > pmn@rhel /tmp/temp > ls -l ./test -r-sr-xr-x 1 cadmn cusers 9377 Dec 24 19:48 ./test pmn@rhel /tmp/temp > pmn@rhel /tmp/temp > ./test Real ID is 4471 and Effective ID is 3585 Real ID (name) at this point is pmn Called getpwuid with the effectiveId Real ID is 4471 and Effective ID is 3585 Real ID (name) at this point is cadmn. pmn@rhel /tmp/temp >
|
A second getpwuid call appears to overwrite old value Here's a small C program that prints (well, supposed to print) the real and effective IDs of a process when the file has the setuid flag set. In this program, when I call getpwuid a second time (L.No 38), it tends to overwrite the value of the variable realUserName that was obtained in L.No 24. I'm unable to explain this behavior. Is this the expected behavior and why? I'm trying this in a Linux box (RHEL 2.6.18-371.1.2.el5). 1 /* Filename: test.c 2 * Notes: 3 * 1] ./test owned by user cadmn (userID: 3585) 4 * 2] ./test run by user pmn (4471) 5 * 3] ./test has the setuid bit switched-on. 6 */ 7 #include <stdio.h> 8 #include <pwd.h> 9 #include <sys/types.h> 10 #include <unistd.h> 11 int main() 12 { 13 14 uid_t realId, effectiveId; 15 struct passwd *realUser, *effUser; 16 17 realId = getuid(); // realId = 4471 18 effectiveId = geteuid(); //effectiveId = 3585 19 20 printf("Real ID is %i and Effective ID is %i\n", (int)realId, (int)effectiveId); 21 //prints 4472 and 3585, respectively 22 23 realUser = getpwuid(realId); 24 char *realUserName = realUser->pw_name; //realUserName = pmn 25 26 printf("Real ID (name) at this point is %s\n", realUserName); 27 // prints pmn. 28 29 /* 30 ********************************************************* 31 * * 32 * everything works as expected up to this point * 33 * * 34 ********************************************************* 35 */ 36 37 // The value obtained from this call is not used anywhere in this program 38 effUser = getpwuid(effectiveId); 39 printf("\nCalled getpwuid with the effectiveId\n\n"); 40 41 printf("Real ID is %i and Effective ID is %i\n", (int)realId, (int)effectiveId); 42 //prints 4472 and 3585, respectively 43 44 printf("Real ID (name) at this point is %s.\n", realUserName); 45 // Expect to still see 'pmn' printed; though see 'cadmn' as the output! 46 // Why does this happen? 47 48 return 0; 49 } 50 Output: pmn@rhel /tmp/temp > id pmn uid=4471(pmn) gid=1000(nusers) groups=1000(nusers) pmn@rhel /tmp/temp > pmn@rhel /tmp/temp > id cadmn uid=3585(cadmn) gid=401(cusers) groups=401(cusers) pmn@rhel /tmp/temp > pmn@rhel /tmp/temp > ls -l ./test -r-sr-xr-x 1 cadmn cusers 9377 Dec 24 19:48 ./test pmn@rhel /tmp/temp > pmn@rhel /tmp/temp > ./test Real ID is 4471 and Effective ID is 3585 Real ID (name) at this point is pmn Called getpwuid with the effectiveId Real ID is 4471 and Effective ID is 3585 Real ID (name) at this point is cadmn. pmn@rhel /tmp/temp >
|
c, linux, rhel, setuid, getpwuid
| 2
| 256
| 1
|
https://stackoverflow.com/questions/20757940/a-second-getpwuid-call-appears-to-overwrite-old-value
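This is the documented behaviour of getpwuid(): it hands back a pointer into a single static buffer that the next call reuses, so the second lookup silently rewrites the pw_name the first one returned. Copy anything you need before calling it again (or use the reentrant getpwuid_r() with your own buffer). A minimal sketch of the copy approach, error checks omitted:

```c
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    struct passwd *pw;
    char *realUserName, *effUserName;

    pw = getpwuid(getuid());
    realUserName = strdup(pw->pw_name);   /* private copy, safe across later getpwuid() calls */

    pw = getpwuid(geteuid());             /* reuses the same static buffer internally */
    effUserName = strdup(pw->pw_name);

    printf("real user: %s, effective user: %s\n", realUserName, effUserName);

    free(realUserName);
    free(effUserName);
    return 0;
}
```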
|
12,143,049
|
Has anyone ever built omniORB on RHEL?
|
I am trying to build omniORB libraries on RHEL 5.5. I tried running configure with CC=gcc and CXX=g++ and PYTHON=bin/omnipython I run into this problem where it complains about gmake[3]: Entering directory `/home/local/NT/jayanthv/omniORB-4.1.4/src/lib/omniORB' ../../../bin/omniidl -bcxx -p../../../src/lib/omniORB -Wbdebug -Wba -p../../../src/lib/omniORB -Wbdebug -v -ComniORB4 ../../../idl/Naming.idl omniidl: ERROR! omniidl: Could not open IDL compiler module _omniidlmodule.so omniidl: Please make sure it is in directory /home/local/NT/jayanthv/omniORB-4.1.4/lib omniidl: (or set the PYTHONPATH environment variable) omniidl: (The error was '/home/local/NT/jayanthv/omniORB-4.1.4/lib/_omniidlmodule.so: wrong ELF class: ELFCLASS64') So, I tried to use the Intel C++ compiler instead, with export CXX=/opt/intel/Compiler/11.1/080/bin/ia32/icc export LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/080/lib/ia32 export PYTHON=/home/local/NT/jayanthv/omniORB-4.1.4/bin/omnipython But, now it complains about ../../../bin/omniidl -bcxx -p../../../src/lib/omniORB -Wbdebug -Wba -p../../../src/lib/omniORB -Wbdebug -v -ComniORB4 ../../../idl/Naming.idl omniidl: ERROR! omniidl: Could not open IDL compiler module _omniidlmodule.so omniidl: Please make sure it is in directory /home/local/NT/jayanthv/omniORB-4.1.4/lib omniidl: (or set the PYTHONPATH environment variable) omniidl: (The error was '/home/local/NT/jayanthv/omniORB-4.1.4/lib/_omniidlmodule.so: undefined symbol: __cxa_pure_virtual') The OS is RHEL 5.5 with x86_64 architecture, and I am trying to build the 32 bit binaries. Would appreciate any insight into this problem.
|
Has anyone ever built omniORB on RHEL? I am trying to build omniORB libraries on RHEL 5.5. I tried running configure with CC=gcc and CXX=g++ and PYTHON=bin/omnipython I run into this problem where it complains about gmake[3]: Entering directory `/home/local/NT/jayanthv/omniORB-4.1.4/src/lib/omniORB' ../../../bin/omniidl -bcxx -p../../../src/lib/omniORB -Wbdebug -Wba -p../../../src/lib/omniORB -Wbdebug -v -ComniORB4 ../../../idl/Naming.idl omniidl: ERROR! omniidl: Could not open IDL compiler module _omniidlmodule.so omniidl: Please make sure it is in directory /home/local/NT/jayanthv/omniORB-4.1.4/lib omniidl: (or set the PYTHONPATH environment variable) omniidl: (The error was '/home/local/NT/jayanthv/omniORB-4.1.4/lib/_omniidlmodule.so: wrong ELF class: ELFCLASS64') So, I tried to use the Intel C++ compiler instead, with export CXX=/opt/intel/Compiler/11.1/080/bin/ia32/icc export LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/080/lib/ia32 export PYTHON=/home/local/NT/jayanthv/omniORB-4.1.4/bin/omnipython But, now it complains about ../../../bin/omniidl -bcxx -p../../../src/lib/omniORB -Wbdebug -Wba -p../../../src/lib/omniORB -Wbdebug -v -ComniORB4 ../../../idl/Naming.idl omniidl: ERROR! omniidl: Could not open IDL compiler module _omniidlmodule.so omniidl: Please make sure it is in directory /home/local/NT/jayanthv/omniORB-4.1.4/lib omniidl: (or set the PYTHONPATH environment variable) omniidl: (The error was '/home/local/NT/jayanthv/omniORB-4.1.4/lib/_omniidlmodule.so: undefined symbol: __cxa_pure_virtual') The OS is RHEL 5.5 with x86_64 architecture, and I am trying to build the 32 bit binaries. Would appreciate any insight into this problem.
|
c++, linux, rhel, omniorb
| 2
| 2,318
| 2
|
https://stackoverflow.com/questions/12143049/has-anyone-ever-built-omniorb-on-rhel
|
9,118,664
|
Plotting multiple figures in Matlab
|
I am working with some matlab code that processes data (in a Kalman Filter) and creates a series of contour plots. It has been running on a RHEL 4 server in matlab 2006a for a few years, but my boss recently requested that all servers be updated to RHEL 6...and at least matlab 2007a. I have worked out all the depreciations between these versions, but I am still having one major problem. The code that creates and prints different contour plots is working for whichever of the three plots is created first. It looks like this: Unfortunately the next two plots look like this: The three figures are plotted independently in separate functions and I use clf("reset"); before and after creating each figure. Each function works in and of itself, but when all three are plotted the second and third figures are all messed up. Has anyone else had this problem? Here is the code that creates one of the figures. function Crd = TEC_plot(ITEC,RT,Param,Input,t,OutPath,RxExtAll,Time) % Generate TEC plot Function_for_Spline_Smoothing = [.05 .1 .05; .1 .4 .1; .05 .1 .05]; ITEC = conv2(ITEC,Function_for_Spline_Smoothing,'same'); % add more of these lines to make contours smoother ITEC = conv2(ITEC,Function_for_Spline_Smoothing,'same'); % add more of these lines to make contours smoother % ITEC = conv2(ITEC,Function_for_Spline_Smoothing,'same'); % add more of these lines to make contours smoother % ITEC = conv2(ITEC,Function_for_Spline_Smoothing,'same'); % add more of these lines to make contours smoother earth('CAMERA',RT.Camera,'FIG',1); figure; warning off; hold on; %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Changed 13 February 2007 to make plots prettier thinning_scale=2; % (1 to 10) increase this value to thin the density of the contour number labels [cscale,hgt]=m_contour(Param.Grid.LonAxis,Param.Grid.LatAxis,ITEC, ... round(RT.Levels(1:thinning_scale:end)/5)*5); hold on m_contourf(Param.Grid.LonAxis,Param.Grid.LatAxis,ITEC,RT.Levels); shading flat m_coast('line','color','y','LineWidth',1); clabel(cscale,hgt,'labelspacing',72,'rotation',0,'fontsize',10 ... ,'FontAngle','italic','color','w'); axis([-.65 .6 .25 1.32]) % hardwiring axis length since the coastline runs off of the plot % Changed 13 February 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % %-------------------- to include different station markers for different sources ------------------- if ~isempty(Input.Data) % Plot receivers used in this inversion Crd=uniquerows(double(cat(1,RxExtAll))); RxType=round(Crd(:,4)); Crd=cartsph(Crd)*180/pi; i1 = find( (Crd(:,3) > RT.Camera(3)).*(Crd(:,3) < RT.Camera(4)).*... (Crd(:,2) > RT.Camera(1)).*(Crd(:,2) < RT.Camera(2)).*... (RxType==1) ); i2 = find( (Crd(:,3) > RT.Camera(3)).*(Crd(:,3) < RT.Camera(4)).*... (Crd(:,2) > RT.Camera(1)).*(Crd(:,2) < RT.Camera(2)).*... (RxType==2) ); i3 = find( (Crd(:,3) > RT.Camera(3)).*(Crd(:,3) < RT.Camera(4)).*... (Crd(:,2) > RT.Camera(1)).*(Crd(:,2) < RT.Camera(2)).*... 
(RxType==3) ); m_plot(Crd(i1,3),Crd(i1,2),'ro','markersize',5,'LineWidth',2); % m_plot(Crd(i1,3),Crd(i1,2),'r.','markersize',6,'LineWidth',2); m_plot(Crd(i2,3),Crd(i2,2),'r^','markersize',5,'LineWidth',2); % m_plot(Crd(i2,3),Crd(i2,2),'r.','markersize',6,'LineWidth',2); m_plot(Crd(i3,3),Crd(i3,2),'rp','markersize',5,'LineWidth',2); plot(-.6,.45,'ro','markersize',5,'LineWidth',2);text(-.55,.45,'CORS','Color','k') plot(-.6,.39,'r^','markersize',5,'LineWidth',2);text(-.55,.39,'GPS/Met','Color','k') plot(-.6,.33,'rp','markersize',5,'LineWidth',2);text(-.55,.33,'RTIGS','Color','k') end % % ------------------------------------------------------------------------------- hold off; warning on;axis off; % caxis([RT.Levels(1),(RT.Levels(length(RT.Levels)))/2]); colorbar('vert'); %Color bar from 0 to 50 --- for Low Solar Activity caxis([RT.Levels(1),(RT.Levels(length(RT.Levels)))]); colorbar('vert'); %Color bar from 0 to 100 --- for High Solar Activity title(sprintf('Total Electron Content Units x 10^1^6 m^-^2'),'Fontsize',11) if size(Crd,1)==0, title(sprintf('Total Electron Content Units x 10^1^6 m^-^2 Caution: No Data Available, IRI95 Only'),'Fontsize',10) end whitebg('w') text(-0.6,0.22,sprintf('%s from %s to %s UT NOAA/SWPC Boulder, CO USA (op.ver. 1.0)',datestr(Time(t)+1E-8,1),datestr(Time(t)+1E-8,15),datestr(Time(t)+1E-7+15/1440,15)),'Fontsize',11) whitebg('w') % This option print to a file % set(gcf, 'Position', [0,0,1950,1467]); % print('-f1','-dpng','-painters',filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC'],Time(t))); % system(['convert ', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.png'],Time(t)),' -scale 650x489 ',' -colors 256 ', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.png'],Time(t))]); % Printing a postscript file because requirements for the automatic reboot print('-f1','-dpsc','-r1000','-painters', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC'],Time(t))); % Convert the postscript file to jpg using ghostscripts system(['gs -q -dBATCH -dNOPAUSE -r300 -sDEVICE=jpeg -sOutputFile=',filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.jpg'],Time(t)),' ', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.ps'],Time(t))]); % Converting from jpg to png and reducing the size of the figure. system(['convert ',filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.jpg'],Time(t)),' -crop 2050x1675+325+775 ',' -scale 650x489 ',' -colors 256 ', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.png'],Time(t))]); % Removing the jpg and ps files (ask Cliff how can we put both files in just one command) %system(['rm ',filename([OutPath,'*.jpg'],Time(t))]); %system(['rm ',filename([OutPath,'*.ps'],Time(t))]); end
|
Plotting multiple figures in Matlab I am working with some matlab code that processes data (in a Kalman Filter) and creates a series of contour plots. It has been running on a RHEL 4 server in matlab 2006a for a few years, but my boss recently requested that all servers be updated to RHEL 6...and at least matlab 2007a. I have worked out all the depreciations between these versions, but I am still having one major problem. The code that creates and prints different contour plots is working for whichever of the three plots is created first. It looks like this: Unfortunately the next two plots look like this: The three figures are plotted independently in separate functions and I use clf("reset"); before and after creating each figure. Each function works in and of itself, but when all three are plotted the second and third figures are all messed up. Has anyone else had this problem? Here is the code that creates one of the figures. function Crd = TEC_plot(ITEC,RT,Param,Input,t,OutPath,RxExtAll,Time) % Generate TEC plot Function_for_Spline_Smoothing = [.05 .1 .05; .1 .4 .1; .05 .1 .05]; ITEC = conv2(ITEC,Function_for_Spline_Smoothing,'same'); % add more of these lines to make contours smoother ITEC = conv2(ITEC,Function_for_Spline_Smoothing,'same'); % add more of these lines to make contours smoother % ITEC = conv2(ITEC,Function_for_Spline_Smoothing,'same'); % add more of these lines to make contours smoother % ITEC = conv2(ITEC,Function_for_Spline_Smoothing,'same'); % add more of these lines to make contours smoother earth('CAMERA',RT.Camera,'FIG',1); figure; warning off; hold on; %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Changed 13 February 2007 to make plots prettier thinning_scale=2; % (1 to 10) increase this value to thin the density of the contour number labels [cscale,hgt]=m_contour(Param.Grid.LonAxis,Param.Grid.LatAxis,ITEC, ... round(RT.Levels(1:thinning_scale:end)/5)*5); hold on m_contourf(Param.Grid.LonAxis,Param.Grid.LatAxis,ITEC,RT.Levels); shading flat m_coast('line','color','y','LineWidth',1); clabel(cscale,hgt,'labelspacing',72,'rotation',0,'fontsize',10 ... ,'FontAngle','italic','color','w'); axis([-.65 .6 .25 1.32]) % hardwiring axis length since the coastline runs off of the plot % Changed 13 February 2007 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % %-------------------- to include different station markers for different sources ------------------- if ~isempty(Input.Data) % Plot receivers used in this inversion Crd=uniquerows(double(cat(1,RxExtAll))); RxType=round(Crd(:,4)); Crd=cartsph(Crd)*180/pi; i1 = find( (Crd(:,3) > RT.Camera(3)).*(Crd(:,3) < RT.Camera(4)).*... (Crd(:,2) > RT.Camera(1)).*(Crd(:,2) < RT.Camera(2)).*... (RxType==1) ); i2 = find( (Crd(:,3) > RT.Camera(3)).*(Crd(:,3) < RT.Camera(4)).*... (Crd(:,2) > RT.Camera(1)).*(Crd(:,2) < RT.Camera(2)).*... (RxType==2) ); i3 = find( (Crd(:,3) > RT.Camera(3)).*(Crd(:,3) < RT.Camera(4)).*... (Crd(:,2) > RT.Camera(1)).*(Crd(:,2) < RT.Camera(2)).*... 
(RxType==3) ); m_plot(Crd(i1,3),Crd(i1,2),'ro','markersize',5,'LineWidth',2); % m_plot(Crd(i1,3),Crd(i1,2),'r.','markersize',6,'LineWidth',2); m_plot(Crd(i2,3),Crd(i2,2),'r^','markersize',5,'LineWidth',2); % m_plot(Crd(i2,3),Crd(i2,2),'r.','markersize',6,'LineWidth',2); m_plot(Crd(i3,3),Crd(i3,2),'rp','markersize',5,'LineWidth',2); plot(-.6,.45,'ro','markersize',5,'LineWidth',2);text(-.55,.45,'CORS','Color','k') plot(-.6,.39,'r^','markersize',5,'LineWidth',2);text(-.55,.39,'GPS/Met','Color','k') plot(-.6,.33,'rp','markersize',5,'LineWidth',2);text(-.55,.33,'RTIGS','Color','k') end % % ------------------------------------------------------------------------------- hold off; warning on;axis off; % caxis([RT.Levels(1),(RT.Levels(length(RT.Levels)))/2]); colorbar('vert'); %Color bar from 0 to 50 --- for Low Solar Activity caxis([RT.Levels(1),(RT.Levels(length(RT.Levels)))]); colorbar('vert'); %Color bar from 0 to 100 --- for High Solar Activity title(sprintf('Total Electron Content Units x 10^1^6 m^-^2'),'Fontsize',11) if size(Crd,1)==0, title(sprintf('Total Electron Content Units x 10^1^6 m^-^2 Caution: No Data Available, IRI95 Only'),'Fontsize',10) end whitebg('w') text(-0.6,0.22,sprintf('%s from %s to %s UT NOAA/SWPC Boulder, CO USA (op.ver. 1.0)',datestr(Time(t)+1E-8,1),datestr(Time(t)+1E-8,15),datestr(Time(t)+1E-7+15/1440,15)),'Fontsize',11) whitebg('w') % This option print to a file % set(gcf, 'Position', [0,0,1950,1467]); % print('-f1','-dpng','-painters',filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC'],Time(t))); % system(['convert ', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.png'],Time(t)),' -scale 650x489 ',' -colors 256 ', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.png'],Time(t))]); % Printing a postscript file because requirements for the automatic reboot print('-f1','-dpsc','-r1000','-painters', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC'],Time(t))); % Convert the postscript file to jpg using ghostscripts system(['gs -q -dBATCH -dNOPAUSE -r300 -sDEVICE=jpeg -sOutputFile=',filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.jpg'],Time(t)),' ', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.ps'],Time(t))]); % Converting from jpg to png and reducing the size of the figure. system(['convert ',filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.jpg'],Time(t)),' -crop 2050x1675+325+775 ',' -scale 650x489 ',' -colors 256 ', filename([OutPath,'{YYYY}{MM}{DD}{HRMN}_TEC.png'],Time(t))]); % Removing the jpg and ps files (ask Cliff how can we put both files in just one command) %system(['rm ',filename([OutPath,'*.jpg'],Time(t))]); %system(['rm ',filename([OutPath,'*.ps'],Time(t))]); end
|
matlab, contour, rhel, matlab-figure
| 2
| 3,060
| 2
|
https://stackoverflow.com/questions/9118664/plotting-multiple-figures-in-matlab
|
77,558,991
|
How to stop Network Manager creating connections under /run/NetworkManager/system-connections?
|
I'm trying to configure networking on a RHEL9 server with NetworkManager 1.42. I'm doing this by writing out a bunch of keyfiles to /etc/NetworkManager/system-connections as per here . The server I'm running on has a quirk that this is done in a chroot environment (sadly this is unavoidable, as it's run as part of an operating system upgrade), so I cannot run nmcli connection reload after writing the keyfiles. Despite that, my hope was that rebooting the entire machine would be enough to have NetworkManager reload the connections. Here's the file I write to /etc/NetworkManager/system-connections/ethMgmt : [connection] id=ethMgmt type=ethernet interface-name=ethMgmt autoconnect=true [ethernet] mac-address=00:0d:3a:aa:97:31 [ipv4] method=manual address1=10.60.4.101/27,10.60.4.97 [ipv6] method=disabled Then after reboot I can see the connection has been read, but several other connections have been created: $ nmcli -f TYPE,FILENAME,NAME connection TYPE FILENAME NAME loopback /run/NetworkManager/system-connections/lo.nmconnection lo // green - connected ethernet /run/NetworkManager/system-connections/ethMgmt.nmconnection ethMgmt // green - connected ethernet /etc/NetworkManager/system-connections/ethMgmt.nmconnection ethMgmt // grey - not connected I should mention I also have the NetworkManager-config-server RPM installed as well, so my NetworkManager is configured with: no-auto-default=* . Deleting the files under /run/NetworkManager/system-connections/* and restarting NetworkManager with systemctl restart NetworkManager brings them back, so I'm sure it's NetworkManager creating them. They look like this (the config is notably different): [connection] id=ethMgmt uuid=12899f61-26c1-4b62-826e-5ec70a545802 type=ethernet autoconnect=false interface-name=ethMgmt timestamp=1701105499 [ethernet] mac-address=00:0D:3A:AA:97:31 [ipv4] address1=10.60.4.101/27,10.60.4.97 method=manual route1=168.63.129.16/32,10.60.4.97,0 route2=169.254.169.254/32,10.60.4.97,0 [ipv6] addr-gen-mode=default method=link-local [proxy] [.nmmeta] nm-generated=true volatile=true external=true I seem to be able to resolve the problem by: rm -rf /run/NetworkManager/system-connections/* nmcli connection reload ...which results in: $ nmcli -f TYPE,FILENAME,NAME connection TYPE FILENAME NAME loopback /run/NetworkManager/system-connections/lo.nmconnection lo -- green / connected ethernet /etc/NetworkManager/system-connections/ethMgmt.nmconnection ethMgmt -- green / connected ...but that's quite an inelegant solution (in particular to have to remember to run some commands after reboot of the machine). Is there something I can to to prevent NetworkManager creating these files at all and to read the /etc/NetworkManager/system-connections/ethMgmt.nmconnection on start?
|
How to stop Network Manager creating connections under /run/NetworkManager/system-connections? I'm trying to configure networking on a RHEL9 server with NetworkManager 1.42. I'm doing this by writing out a bunch of keyfiles to /etc/NetworkManager/system-connections as per here . The server I'm running on has a quirk that this is done in a chroot environment (sadly this is unavoidable, as it's run as part of an operating system upgrade), so I cannot run nmcli connection reload after writing the keyfiles. Despite that, my hope was that rebooting the entire machine would be enough to have NetworkManager reload the connections. Here's the file I write to /etc/NetworkManager/system-connections/ethMgmt : [connection] id=ethMgmt type=ethernet interface-name=ethMgmt autoconnect=true [ethernet] mac-address=00:0d:3a:aa:97:31 [ipv4] method=manual address1=10.60.4.101/27,10.60.4.97 [ipv6] method=disabled Then after reboot I can see the connection has been read, but several other connections have been created: $ nmcli -f TYPE,FILENAME,NAME connection TYPE FILENAME NAME loopback /run/NetworkManager/system-connections/lo.nmconnection lo // green - connected ethernet /run/NetworkManager/system-connections/ethMgmt.nmconnection ethMgmt // green - connected ethernet /etc/NetworkManager/system-connections/ethMgmt.nmconnection ethMgmt // grey - not connected I should mention I also have the NetworkManager-config-server RPM installed as well, so my NetworkManager is configured with: no-auto-default=* . Deleting the files under /run/NetworkManager/system-connections/* and restarting NetworkManager with systemctl restart NetworkManager brings them back, so I'm sure it's NetworkManager creating them. They look like this (the config is notably different): [connection] id=ethMgmt uuid=12899f61-26c1-4b62-826e-5ec70a545802 type=ethernet autoconnect=false interface-name=ethMgmt timestamp=1701105499 [ethernet] mac-address=00:0D:3A:AA:97:31 [ipv4] address1=10.60.4.101/27,10.60.4.97 method=manual route1=168.63.129.16/32,10.60.4.97,0 route2=169.254.169.254/32,10.60.4.97,0 [ipv6] addr-gen-mode=default method=link-local [proxy] [.nmmeta] nm-generated=true volatile=true external=true I seem to be able to resolve the problem by: rm -rf /run/NetworkManager/system-connections/* nmcli connection reload ...which results in: $ nmcli -f TYPE,FILENAME,NAME connection TYPE FILENAME NAME loopback /run/NetworkManager/system-connections/lo.nmconnection lo -- green / connected ethernet /etc/NetworkManager/system-connections/ethMgmt.nmconnection ethMgmt -- green / connected ...but that's quite an inelegant solution (in particular to have to remember to run some commands after reboot of the machine). Is there something I can to to prevent NetworkManager creating these files at all and to read the /etc/NetworkManager/system-connections/ethMgmt.nmconnection on start?
|
rhel, networkmanager, rhel9
| 2
| 3,635
| 1
|
https://stackoverflow.com/questions/77558991/how-to-stop-network-manager-creating-connections-under-run-networkmanager-syste
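Editor's note: since the profiles under /run are volatile and the manual workaround above (remove them, then reload) works, one option is simply to automate that workaround with a oneshot unit. This is only a sketch of that idea, not a documented NetworkManager setting, and the unit name is made up:

# /etc/systemd/system/clean-nm-runtime.service  (hypothetical)
[Unit]
Description=Drop NetworkManager's auto-generated runtime profiles
After=NetworkManager.service
Wants=NetworkManager.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'rm -f /run/NetworkManager/system-connections/*.nmconnection && nmcli connection reload'

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable clean-nm-runtime.service; note it also removes the generated lo profile, which NetworkManager should recreate on its own.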
|
76,411,993
|
Error unable to run dnf localinstall *.rpm
|
I am trying to install Nginx on my RHEL 8.7 system. I am not able to run dnf localinstall *.rpm in a non-internet environment and it is showing some errors. I tried checking the /etc/yum.repos.d directory but am not sure what to do. Please advise.
|
Error unable to run dnf localinstall *.rpm I am trying to install Nginx on my RHEL 8.7 system. I am not able to run dnf localinstall *.rpm in a non-internet environment and it is showing some errors. I tried checking the /etc/yum.repos.d directory but am not sure what to do. Please advise.
|
nginx, rhel
| 2
| 3,630
| 1
|
https://stackoverflow.com/questions/76411993/error-unable-to-run-dnf-localinstall-rpm
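Editor's note: on a host with no internet access, dnf still tries to refresh every enabled repository before installing. A hedged sketch of two common workarounds (run from the directory holding the RPMs):

# keep dnf's dependency solving but skip all remote repositories
sudo dnf localinstall --disablerepo="*" ./*.rpm

# or, if every dependency RPM is already present in the directory
sudo rpm -Uvh ./*.rpm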
|
72,980,445
|
can't find OpenCV package on RHEL 8
|
Some sites say OpenCV can be installed on RHEL from the system repository: sudo yum install opencv opencv-devel opencv-python I run the RHEL UBI container redhat/ubi8 and tried to install OpenCV - the package is not found. Then I installed the EPEL repos from [URL] , same result. The only opencv-related package is libfreenect-opencv . I understand I could compile OpenCV from scratch, but I'd like to go with an already compiled package.
|
can't find OpenCV package on RHEL 8 Some sites say OpenCV can be installed on RHEL from the system repository: sudo yum install opencv opencv-devel opencv-python I run the RHEL UBI container redhat/ubi8 and tried to install OpenCV - the package is not found. Then I installed the EPEL repos from [URL] , same result. The only opencv-related package is libfreenect-opencv . I understand I could compile OpenCV from scratch, but I'd like to go with an already compiled package.
|
opencv, yum, rhel
| 2
| 1,606
| 1
|
https://stackoverflow.com/questions/72980445/cant-find-opencv-package-on-rhel-8
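Editor's note: a hedged sketch, assuming a registered RHEL 8 x86_64 host. The native opencv packages, where available, come from EPEL and typically need the CodeReady Builder repo for their dependencies, while the Python bindings are usually easier to take from PyPI:

sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf install opencv opencv-devel      # only works if EPEL ships opencv for your release

# pragmatic fallback inside a UBI container
pip3 install opencv-python-headless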
|
72,900,350
|
Changing JENKINS_HOME in /etc/sysconfig/jenkins not working
|
So I wanted to change the default JENKINS_HOME location. I pretty much always found the "edit the Jenkins configuration" solution, so I did that. I followed this guide [URL] Seemed legit, but it didn't work. /var/lib/jenkins is still JENKINS_HOME. I also tried setting an env var, but that too did not work... Does anyone have any clues as to why the configuration at /etc/sysconfig/jenkins is not being read/loaded?
|
Changing JENKINS_HOME in /etc/sysconfig/jenkins not working So I wanted to change the default JENKINS_HOME location. I pretty much always found the "edit the Jenkins configuration" solution, so I did that. I followed this guide [URL] Seemed legit, but it didn't work. /var/lib/jenkins is still JENKINS_HOME. I also tried setting an env var, but that too did not work... Does anyone have any clues as to why the configuration at /etc/sysconfig/jenkins is not being read/loaded?
|
jenkins, systemd, rhel
| 2
| 3,995
| 1
|
https://stackoverflow.com/questions/72900350/changing-jenkins-home-in-etc-sysconfig-jenkins-not-working
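Editor's note: recent Jenkins RPMs start Jenkins from a native systemd unit and ignore /etc/sysconfig/jenkins, so if that applies here the supported route is a systemd override; a sketch (the new path is just an example):

sudo systemctl edit jenkins
# in the drop-in that opens, add:
#   [Service]
#   Environment="JENKINS_HOME=/srv/jenkins_home"
sudo systemctl daemon-reload
sudo systemctl restart jenkins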
|
66,921,520
|
Add User RHEL ubi8/openjdk-8 Docker Image fails
|
While updating some Docker base images (which previously were based on the image openjdk/openjdk-8-rhel8 ) to this image: ubi8/openjdk-8 , I was (I suspect) unable to add a user with the useradd command. It appears inside the /etc/shadow file, but when I try to log in to the container I get this message: NWRAP_ERROR(4677) - nwrap_files_cache_reload: Unable to open '/home/jboss/passwd' readonly -1:Permission denied NWRAP_ERROR(4677) - nwrap_files_getpwuid: Error loading passwd file The Dockerfile, which worked well with the previous image, is: FROM xxxx.azurecr.io/ubi8/openjdk-8:1.3-9 ARG uid=60000 ARG gid=60000 ARG user=testuser ARG group=testuser ARG shell=/bin/bash ARG home=/home/$user ARG port=8080 USER root RUN mkdir -p $home \ && chown ${uid}:${gid} $home \ && groupadd -g ${gid} ${group} \ && useradd --uid ${uid} --gid ${gid} --shell ${shell} --home ${home} $user I don't know what could cause this problem, and searching for NWRAP_ERROR(4677) gave me no results. Has anyone had similar problems and can tell me what went wrong, and whether there is a different way to add the user in the Dockerfile?
|
Add User RHEL ubi8/openjdk-8 Docker Image fails While updating some Docker base images (which previously were based on the image openjdk/openjdk-8-rhel8 ) to this image: ubi8/openjdk-8 , I was (I suspect) unable to add a user with the useradd command. It appears inside the /etc/shadow file, but when I try to log in to the container I get this message: NWRAP_ERROR(4677) - nwrap_files_cache_reload: Unable to open '/home/jboss/passwd' readonly -1:Permission denied NWRAP_ERROR(4677) - nwrap_files_getpwuid: Error loading passwd file The Dockerfile, which worked well with the previous image, is: FROM xxxx.azurecr.io/ubi8/openjdk-8:1.3-9 ARG uid=60000 ARG gid=60000 ARG user=testuser ARG group=testuser ARG shell=/bin/bash ARG home=/home/$user ARG port=8080 USER root RUN mkdir -p $home \ && chown ${uid}:${gid} $home \ && groupadd -g ${gid} ${group} \ && useradd --uid ${uid} --gid ${gid} --shell ${shell} --home ${home} $user I don't know what could cause this problem, and searching for NWRAP_ERROR(4677) gave me no results. Has anyone had similar problems and can tell me what went wrong, and whether there is a different way to add the user in the Dockerfile?
|
linux, docker, rhel
| 2
| 4,287
| 1
|
https://stackoverflow.com/questions/66921520/add-user-rhel-ubi8-openjdk-8-docker-image-fails
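Editor's note: the NWRAP messages come from nss_wrapper, which these images appear to point at passwd/group files under /home/jboss; logging in as the new user then fails because that user cannot read them. A heavily hedged mitigation sketch for the same Dockerfile (the file names and permissions are assumptions about the image, hence the || true):

# hypothetical follow-up steps, appended after the useradd RUN line
RUN chmod 0755 /home/jboss && chmod 0644 /home/jboss/passwd /home/jboss/group || true
USER ${uid}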
|
64,138,270
|
rpmUtils.miscutils in python3.6
|
I am refactoring code from python2 (RHEL 7.6) to python3 (RHEL 8.2) and I have a problem with a missing library in python3.6. Problem: from rpmUtils.miscutils import splitFilename ModuleNotFoundError: No module named 'rpmUtils' I've tried installing the python3-dnf and python3-rpm packages on RHEL8, but it is still not working. Is there any way to use this library with python3.6 on RHEL8, or should I write a custom function myself? Thank you for your answer.
|
rpmUtils.miscutils in python3.6 I am refactoring code from python2 (RHEL 7.6) to python3 (RHEL 8.2) and I have a problem with a missing library in python3.6. Problem: from rpmUtils.miscutils import splitFilename ModuleNotFoundError: No module named 'rpmUtils' I've tried installing the python3-dnf and python3-rpm packages on RHEL8, but it is still not working. Is there any way to use this library with python3.6 on RHEL8, or should I write a custom function myself? Thank you for your answer.
|
python-3.6, rpm, rhel
| 2
| 1,674
| 1
|
https://stackoverflow.com/questions/64138270/rpmutils-miscutils-in-python3-6
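Editor's note: if actual .rpm files are at hand (rather than bare filename strings), rpm itself can report the pieces splitFilename used to return, which avoids the removed rpmUtils module entirely; a small sketch:

# print name, epoch, version, release and arch straight from the package header
rpm -qp --queryformat '%{NAME} %{EPOCH} %{VERSION} %{RELEASE} %{ARCH}\n' ./foo-1.2-3.el8.x86_64.rpm

For bare filename strings the usual answer is indeed a small custom splitter written by hand.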
|
56,764,052
|
grep stderr and stdout of previous command
|
In a bash script I need to check the stderr and stdout output of command 1 and run command 2 if a string is found in that output. Something like: command 1 if [ $? != 0 ] and grep stderr and stdout of command 1 and if it contains hello world; then run command 2 fi
|
grep stderr and stdout of previous command In a bash script I need to check the stderr and stdout output of command 1 and run command 2 if a string is found in that output. Something like: command 1 if [ $? != 0 ] and grep stderr and stdout of command 1 and if it contains hello world; then run command 2 fi
|
bash, boolean, stdout, rhel, stderr
| 2
| 2,326
| 1
|
https://stackoverflow.com/questions/56764052/grep-stderr-and-stdout-of-previous-command
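Editor's note: capturing both streams into a variable and grepping that is the simplest pattern; a minimal sketch (command1 and command2 are the placeholders from the question):

# capture stdout and stderr of command 1, then decide whether to run command 2
output=$(command1 2>&1)
status=$?

if [ "$status" -ne 0 ] && printf '%s\n' "$output" | grep -q 'hello world'; then
    command2
fi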
|
53,937,506
|
Intermittent loss of kubernetes cluster
|
I have been trying to diagnose a problem that just started a few days ago. Running kubelet , kubeadm version 1.13.1. Cluster has 5 nodes and had been fine for months until late last week. Running this on a RHEL 7.x box with adequate free resources. Having an odd issue that the cluster resources (api, scheduler, etcd) become unavailable. This eventually corrects itself and the cluster comes back for a while again. If I do a sudo systemctl restart kubelet everything within the cluster works fine again, until the intermittent oddness occurs. I am monitoring the journactl logs to see what is going on when this occurs, and the chunk that stands out is: Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.762004 I | etcdserver: skipped leadership transfer for single member cluster Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763648 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.Event ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.762788 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.762910 1 reflector.go:270] storage/cacher.go:/podsecuritypolicy: watch of *policy.PodSecurityPolicy ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763149 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763232 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763439 1 reflector.go:270] storage/cacher.go:/apiregistration.k8s.io/apiservices: watch of *apiregistration.APIService ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763719 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763786 1 reflector.go:270] storage/cacher.go:/daemonsets: watch of *apps.DaemonSet ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763937 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764016 1 reflector.go:270] storage/cacher.go:/cronjobs: watch of *batch.CronJob ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.764250 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.764324 1 
watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764386 1 reflector.go:270] storage/cacher.go:/services/endpoints: watch of *core.Endpoints ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764440 1 reflector.go:270] storage/cacher.go:/deployments: watch of *apps.Deployment ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: WARNING: 2018/12/26 21:28:06 grpc: addrConn.transportMonitor exits due to: context canceled Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.765201 W | etcdserver/api/v3rpc: failed to receive watch request from gRPC stream ("rpc error: code = Unavailable desc = body closed by handler") Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.765384 W | etcdserver/api/v3rpc: failed to receive watch request from gRPC stream ("rpc error: code = Unavailable desc = body closed by handler") ... Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.784805 1 reflector.go:270] storage/cacher.go:/controllerrevisions: watch of *apps.ControllerRevision ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.784871 1 reflector.go:270] storage/cacher.go:/pods: watch of *core.Pod ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.786587 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.786700 1 reflector.go:270] storage/cacher.go:/horizontalpodautoscalers: watch of *autoscaling.HorizontalPodAutoscaler ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.788274 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.788385 1 reflector.go:270] storage/cacher.go:/crd.projectcalico.org/clusterinformations: watch of *unstructured.Unstructured ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain oci-systemd-hook[9353]: systemdhook <debug>: 02cb55687848: Skipping as container command is etcd, not init or systemd Dec 26 15:28:06 thalia0.domain oci-umount[9355]: umounthook <debug>: 02cb55687848: only runs in prestart stage, ignoring Dec 26 15:28:07 thalia0.domain dockerd-current[1609]: time="2018-12-26T15:28:07.003175741-06:00" level=warning msg="02cb556878485b24e4705dd0efe1051c02f3e3bbbe7b8a7ab23ea71bd6d82b2f cleanup: failed to unmount secrets: invalid argument" Dec 26 15:28:07 thalia0.domain kubelet[24604]: E1226 15:28:07.006714 24604 pod_workers.go:190] Error syncing pod 0264932236d6afef396f466fc3bd3181 
("etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)" Dec 26 15:28:07 thalia0.domain kubelet[24604]: E1226 15:28:07.040361 24604 pod_workers.go:190] Error syncing pod 0264932236d6afef396f466fc3bd3181 ("etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)" In order to cut down on the noise in the logs, I cordoned off the other nodes. As noted, if I do a restart of the kubelet service, everything is fine for a while and then the intermittent behavior occurs. Any suggestions would be most welcome. I am working with our sys admin and he said it appears that etcd is doing frequent restarts. I think trouble begins when the CrashLoopBackOff starts to happen.
|
Intermittent loss of kubernetes cluster I have been trying to diagnose a problem that just started a few days ago. Running kubelet , kubeadm version 1.13.1. Cluster has 5 nodes and had been fine for months until late last week. Running this on a RHEL 7.x box with adequate free resources. Having an odd issue that the cluster resources (api, scheduler, etcd) become unavailable. This eventually corrects itself and the cluster comes back for a while again. If I do a sudo systemctl restart kubelet everything within the cluster works fine again, until the intermittent oddness occurs. I am monitoring the journactl logs to see what is going on when this occurs, and the chunk that stands out is: Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.762004 I | etcdserver: skipped leadership transfer for single member cluster Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763648 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.Event ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.762788 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.762910 1 reflector.go:270] storage/cacher.go:/podsecuritypolicy: watch of *policy.PodSecurityPolicy ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763149 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763232 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763439 1 reflector.go:270] storage/cacher.go:/apiregistration.k8s.io/apiservices: watch of *apiregistration.APIService ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763719 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763786 1 reflector.go:270] storage/cacher.go:/daemonsets: watch of *apps.DaemonSet ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763937 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764016 1 reflector.go:270] storage/cacher.go:/cronjobs: watch of *batch.CronJob ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.764250 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain 
dockerd-current[1609]: E1226 21:28:06.764324 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764386 1 reflector.go:270] storage/cacher.go:/services/endpoints: watch of *core.Endpoints ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764440 1 reflector.go:270] storage/cacher.go:/deployments: watch of *apps.Deployment ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: WARNING: 2018/12/26 21:28:06 grpc: addrConn.transportMonitor exits due to: context canceled Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.765201 W | etcdserver/api/v3rpc: failed to receive watch request from gRPC stream ("rpc error: code = Unavailable desc = body closed by handler") Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.765384 W | etcdserver/api/v3rpc: failed to receive watch request from gRPC stream ("rpc error: code = Unavailable desc = body closed by handler") ... Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.784805 1 reflector.go:270] storage/cacher.go:/controllerrevisions: watch of *apps.ControllerRevision ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.784871 1 reflector.go:270] storage/cacher.go:/pods: watch of *core.Pod ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.786587 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.786700 1 reflector.go:270] storage/cacher.go:/horizontalpodautoscalers: watch of *autoscaling.HorizontalPodAutoscaler ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.788274 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.788385 1 reflector.go:270] storage/cacher.go:/crd.projectcalico.org/clusterinformations: watch of *unstructured.Unstructured ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL Dec 26 15:28:06 thalia0.domain oci-systemd-hook[9353]: systemdhook <debug>: 02cb55687848: Skipping as container command is etcd, not init or systemd Dec 26 15:28:06 thalia0.domain oci-umount[9355]: umounthook <debug>: 02cb55687848: only runs in prestart stage, ignoring Dec 26 15:28:07 thalia0.domain dockerd-current[1609]: time="2018-12-26T15:28:07.003175741-06:00" level=warning msg="02cb556878485b24e4705dd0efe1051c02f3e3bbbe7b8a7ab23ea71bd6d82b2f cleanup: failed to unmount secrets: invalid argument" Dec 26 15:28:07 thalia0.domain kubelet[24604]: E1226 15:28:07.006714 24604 pod_workers.go:190] Error syncing pod 
0264932236d6afef396f466fc3bd3181 ("etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)" Dec 26 15:28:07 thalia0.domain kubelet[24604]: E1226 15:28:07.040361 24604 pod_workers.go:190] Error syncing pod 0264932236d6afef396f466fc3bd3181 ("etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)" In order to cut down on the noise in the logs, I cordoned off the other nodes. As noted, if I do a restart of the kubelet service, everything is fine for a while and then the intermittent behavior occurs. Any suggestions would be most welcome. I am working with our sys admin and he said it appears that etcd is doing frequent restarts. I think trouble begins when the CrashLoopBackOff starts to happen.
|
docker, kubernetes, rhel, kubeadm, kubelet
| 2
| 1,093
| 1
|
https://stackoverflow.com/questions/53937506/intermittent-loss-of-kubernetes-cluster
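Editor's note: with etcd in CrashLoopBackOff, the etcd container's own logs and the node's disk behaviour are the usual first checks; a hedged set of diagnostics (the pod and node names are taken from the log excerpt above, and iostat needs the sysstat package):

kubectl -n kube-system get pods -o wide
kubectl -n kube-system logs etcd-thalia0.domain --previous   # why the last etcd container died
journalctl -u kubelet --since "1 hour ago" | grep -i etcd

# slow fsync or a full disk under /var/lib/etcd is a common cause of etcd restarts
df -h /var/lib/etcd
iostat -x 5 3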
|
52,675,788
|
Efficient method to parse large number of files
|
I have incoming data which will be in the range of 130GB - 300GB, containing thousands (maybe millions) of small .txt files of size 2KB - 1MB in a SINGLE folder. I want to parse them efficiently. I'm looking at the following options (referred from 21209029): Using printf + xargs (followed by egrep & awk text processing) printf '%s\0' *.txt | xargs -0 cat | egrep -i -v 'pattern1|...|pattern8' | awk '{gsub(/"\t",",")}1' > all_in_1.out Using find + cat (followed by egrep & awk text processing) find . -name \*.txt -exec cat {} > all_in_1.tmp \; cat all_in_1.tmp | egrep -i -v 'pattern1|...|pattern8' | awk '{gsub(/"\t",",")}1' > all_in_1.out Using for loop for file in *.txt do cat "$file" | egrep -i -v 'pattern1|...|pattern8' | awk '{gsub(/"\t",",")}1' >> all_in_1.out done Which one of the above is the most efficient? Is there a better way to do it? Or is using shell commands not recommended at all for this amount of data processing (I do prefer a shell approach for this)? The server has RHEL 6.5 with 32 GB of memory and 16 cores (@2.2GHz).
|
Efficient method to parse large number of files I have incoming data which will be in the range of 130GB - 300GB, containing thousands (maybe millions) of small .txt files of size 2KB - 1MB in a SINGLE folder. I want to parse them efficiently. I'm looking at the following options (referred from 21209029): Using printf + xargs (followed by egrep & awk text processing) printf '%s\0' *.txt | xargs -0 cat | egrep -i -v 'pattern1|...|pattern8' | awk '{gsub(/"\t",",")}1' > all_in_1.out Using find + cat (followed by egrep & awk text processing) find . -name \*.txt -exec cat {} > all_in_1.tmp \; cat all_in_1.tmp | egrep -i -v 'pattern1|...|pattern8' | awk '{gsub(/"\t",",")}1' > all_in_1.out Using for loop for file in *.txt do cat "$file" | egrep -i -v 'pattern1|...|pattern8' | awk '{gsub(/"\t",",")}1' >> all_in_1.out done Which one of the above is the most efficient? Is there a better way to do it? Or is using shell commands not recommended at all for this amount of data processing (I do prefer a shell approach for this)? The server has RHEL 6.5 with 32 GB of memory and 16 cores (@2.2GHz).
|
shell, parsing, rhel, memory-efficient
| 2
| 699
| 1
|
https://stackoverflow.com/questions/52675788/efficient-method-to-parse-large-number-of-files
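Editor's note: all three variants funnel every file through a single sequential grep/awk, so on a 16-core box the bigger win is usually running several pipelines in parallel and skipping the intermediate cat; a hedged sketch (the pattern list and batch size are placeholders, and with parallel writers the output line order is not preserved):

find . -maxdepth 1 -name '*.txt' -print0 |
  xargs -0 -n 500 -P 16 sh -c '
    egrep -h -i -v "pattern1|pattern8" "$@" | awk "{ gsub(/\t/, \",\"); print }"
  ' sh >> all_in_1.out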
|
49,356,915
|
Completely uninstall Eclipse 4.7 version in RHEL 7.4 Maipo
|
I'm trying to uninstall the current version of the Eclipse IDE on my RHEL machine by simply deleting all the files, like: sudo rm -rf ~/.eclipse sudo rm -rf ~/eclipse-workspace I also tried sudo yum remove 'eclipse*' However, these didn't solve the problem. Any help will be appreciated, thanks!
|
Completely uninstall Eclipse 4.7 version in RHEL 7.4 Maipo I'm trying to uninstall the current version of the Eclipse IDE on my RHEL machine by simply deleting all the files, like: sudo rm -rf ~/.eclipse sudo rm -rf ~/eclipse-workspace I also tried sudo yum remove 'eclipse*' However, these didn't solve the problem. Any help will be appreciated, thanks!
|
eclipse, uninstallation, rhel
| 2
| 3,257
| 2
|
https://stackoverflow.com/questions/49356915/completely-uninstall-eclipse-4-7-version-in-rhel-7-4-maipo
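Editor's note: an Eclipse put on the machine by the installer or a tarball is not tracked by yum, so removal means deleting the install directory and per-user state by hand; a hedged sketch (the locations are only typical, verify each path before deleting):

which eclipse
ls -l "$(which eclipse)"                  # often a symlink into the real install directory

rm -rf ~/eclipse ~/.eclipse ~/.p2
sudo rm -rf /opt/eclipse /usr/local/eclipse
sudo rm -f /usr/local/bin/eclipse /usr/share/applications/eclipse.desktop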
|
47,077,828
|
XFS RHEL7.3 Cold Reboot, file truncate
|
We are upgrading our application from RHEL6.5 ext4 to RHEL7.3 XFS. We have observed that with the XFS file system, doing a cold reboot (from the system console - iLO) truncates some of our files (that are being written to disk every few seconds) to zero bytes. This affects not only our application: if we redirect output from one command to a file using ">", those files disappear after the cold reboot. We are aware of the recommendation about explicitly doing an fsync. But what about our code that is in Java? What about our python scripts? Now we are in a dilemma whether to stick with ext4 or XFS. XFS has advantages and would be our first preference. And we cannot believe that the rest of the world is not aware of this. Either this is something specific to RHEL (we see one similar issue fixed in RHEL6.5 [URL] ) or this is expected behavior for modern file systems?
|
XFS RHEL7.3 Cold Reboot, file truncate We are upgrading our application from RHEL6.5 ext4 to RHEL7.3 XFS. We have observed that with the XFS file system, doing a cold reboot (from the system console - iLO) truncates some of our files (that are being written to disk every few seconds) to zero bytes. This affects not only our application: if we redirect output from one command to a file using ">", those files disappear after the cold reboot. We are aware of the recommendation about explicitly doing an fsync. But what about our code that is in Java? What about our python scripts? Now we are in a dilemma whether to stick with ext4 or XFS. XFS has advantages and would be our first preference. And we cannot believe that the rest of the world is not aware of this. Either this is something specific to RHEL (we see one similar issue fixed in RHEL6.5 [URL] ) or this is expected behavior for modern file systems?
|
rhel, reboot, xfs
| 2
| 713
| 1
|
https://stackoverflow.com/questions/47077828/xfs-rhel7-3-cold-reboot-file-truncate
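Editor's note: this is less XFS-specific than "dirty data was never flushed before power loss"; for plain shell redirections a write-then-sync-then-rename pattern narrows the window, while Java and Python code need their own flush calls (FileDescriptor.sync() / os.fsync()). A rough sketch, with some_command and the paths as placeholders:

some_command > /data/report.tmp
sync                                      # coarse whole-system flush, but portable
mv /data/report.tmp /data/report.txt
sync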
|
46,684,151
|
Does Nexus 3 support container image signing?
|
Does Nexus 3 private docker registry support container image signing? The RHEL documentation here suggests not, but I'd like confirmation.
|
Does Nexus 3 support container image signing? Does Nexus 3 private docker registry support container image signing? The RHEL documentation here suggests not, but I'd like confirmation.
|
docker, containers, nexus, rhel, signing
| 2
| 1,823
| 1
|
https://stackoverflow.com/questions/46684151/does-nexus-3-support-container-image-signing
|
44,492,833
|
Adding YUM Repos With Variables
|
I am trying to add a YUM repo from the command line like so cat > /etc/yum.repos.d/my_stable_repo.repo << EOF [my_stable_repo] name=Stable Repo baseurl='[URL] enabled=1 gpgcheck=0 EOF However, when I do it this way and I take a look at /etc/yum.repos.d/my_stable_repo.repo , I do not see $releasever in the URL. Instead, /etc/yum.repos.d/my_stable_repo.repo looks like: [my_stable_repo] name=Stable Repo baseurl='[URL] enabled=1 gpgcheck=0 Notice that the releasever variable was deleted. I am assuming that this is because when I run the command to write the contents to the file from the shell, linux is evaluating the $releasever variable against the global environment, seeing that is empty, and replacing it with an empty string. But I actually want just the string $releasever to be in /etc/yum.repos.d/my_stable_repo.repo . So the file should look like this the below instead: [my_stable_repo] name=Stable Repo baseurl='[URL] enabled=1 gpgcheck=0 How can I write the file out like this with the $releasever in plain text from the shell? TLDR: How can I write a string that looks like it has a variable in it (i.e. $releasever ) to a file from the command line without actually evaluating the variable?
|
Adding YUM Repos With Variables I am trying to add a YUM repo from the command line like so cat > /etc/yum.repos.d/my_stable_repo.repo << EOF [my_stable_repo] name=Stable Repo baseurl='[URL] enabled=1 gpgcheck=0 EOF However, when I do it this way and I take a look at /etc/yum.repos.d/my_stable_repo.repo , I do not see $releasever in the URL. Instead, /etc/yum.repos.d/my_stable_repo.repo looks like: [my_stable_repo] name=Stable Repo baseurl='[URL] enabled=1 gpgcheck=0 Notice that the releasever variable was deleted. I am assuming that this is because when I run the command to write the contents to the file from the shell, linux is evaluating the $releasever variable against the global environment, seeing that is empty, and replacing it with an empty string. But I actually want just the string $releasever to be in /etc/yum.repos.d/my_stable_repo.repo . So the file should look like this the below instead: [my_stable_repo] name=Stable Repo baseurl='[URL] enabled=1 gpgcheck=0 How can I write the file out like this with the $releasever in plain text from the shell? TLDR: How can I write a string that looks like it has a variable in it (i.e. $releasever ) to a file from the command line without actually evaluating the variable?
|
bash, shell, yum, rhel
| 2
| 3,159
| 2
|
https://stackoverflow.com/questions/44492833/adding-yum-repos-with-variables
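Editor's note: quoting the here-doc delimiter stops the shell from expanding anything inside it, which is the usual fix; a sketch reusing the question's own file (the baseurl placeholder is kept as-is, and the closing EOF must start at the beginning of its line):

# quote the delimiter so $releasever, $basearch, etc. are written literally
cat > /etc/yum.repos.d/my_stable_repo.repo << 'EOF'
[my_stable_repo]
name=Stable Repo
baseurl=[URL]
enabled=1
gpgcheck=0
EOF

Alternatively keep an unquoted here-doc and escape only that variable as \$releasever.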
|
40,897,698
|
RVM Ruby Install Breaks Chef-Client On Bootstrapped Node
|
I have a Red Hat Enterprise Linux Server release 6.7 node that I have bootstrapped with CHEF. I've successfully executed multiple cookbooks/recipes on this node. Now I need to setup this node to run Ruby On Rails applications. I have a cookbook with recipes that successfully :: installs RVM installs Ruby v2.2 The Problem After RVM installs Ruby, the CHEF-Client on the bootstrapped node no longer works. Regardless of what Cookbook/Recipe(s) I try to run, I get the following output :: PS C:\Users\JW031544\workspace\CHEF\chef-repo> knife ssh dh2vrtooldev01 "chef-client -o recipe[MY_COOKBOOK::default]" --manual-list --ssh-user MY_USER --ssh-password "MY_PASS" dh2vrtooldev01 Ignoring executable-hooks-1.3.2 because its extensions are not built. Try: gem pristine executable-hooks --version 1.3.2 dh2vrtooldev01 Ignoring gem-wrappers-1.2.7 because its extensions are not built. Try: gem pristine gem-wrappers --version 1.2.7 dh2vrtooldev01 Ignoring nokogiri-1.6.8.1 because its extensions are not built. Try: gem pristine nokogiri --version 1.6.8.1 dh2vrtooldev01 /opt/chef/embedded/lib/ruby/site_ruby/2.3.0/rubygems/dependency.rb:308:in to_specs': Could not find 'addressable' (= 2.4.0) among 45 total gem(s) (Gem::MissingSpecError) dh2vrtooldev01 Checked in 'GEM_PATH=/usr/local/rvm/gems/ruby-2.2.4:/usr/local/rvm/gems/ruby-2.2.4@global', execute gem env for more information dh2vrtooldev01 from /opt/chef/embedded/lib/ruby/site_ruby/2.3.0/rubygems/dependency.rb:320:in to_spec' dh2vrtooldev01 from /opt/chef/embedded/lib/ruby/site_ruby/2.3.0/rubygems/core_ext/kernel_gem.rb:65:in gem' dh2vrtooldev01 from /usr/bin/chef-client:4:in <main>' If I go onto the node and tell RVM to remove that version of Ruby, then the CHEF-Client will begin working again just fine. The Question Does anyone have any idea why CHEF-Client suddenly forgets how to run once RVM installs a version of Ruby? Source Code (default.rb) include_recipe 'abl_rails::rvm_install' include_recipe 'abl_rails::ruby_install' (rvm_install.rb) # Install RVM (if it doesn't already exist) execute 'install_rvm' do cwd '/root/' command 'curl -sSL [URL] | gpg2 --import -; curl -L get.rvm.io | bash -s stable' not_if {::File.exists?('/etc/profile.d/rvm.sh')} end (ruby_install.rb) # Install Ruby bash 'install_ruby' do cwd '/root/' code <<-EOH source /etc/profile.d/rvm.sh; rvm install #{node['ruby_version']}; EOH not_if "source /etc/profile.d/rvm.sh; ruby --version | grep #{node['ruby_version']}", :cwd => '/root' notifies :run, "bash[set_default_rvm_ruby]", :immediately end # Set the default Ruby version in RVM bash "set_default_rvm_ruby" do cwd '/root' code <<-EOH source /etc/profile.d/rvm.sh; rvm use #{node['ruby_version']} --default; EOH action :run end
|
RVM Ruby Install Breaks Chef-Client On Bootstrapped Node I have a Red Hat Enterprise Linux Server release 6.7 node that I have bootstrapped with CHEF. I've successfully executed multiple cookbooks/recipes on this node. Now I need to setup this node to run Ruby On Rails applications. I have a cookbook with recipes that successfully :: installs RVM installs Ruby v2.2 The Problem After RVM installs Ruby, the CHEF-Client on the bootstrapped node no longer works. Regardless of what Cookbook/Recipe(s) I try to run, I get the following output :: PS C:\Users\JW031544\workspace\CHEF\chef-repo> knife ssh dh2vrtooldev01 "chef-client -o recipe[MY_COOKBOOK::default]" --manual-list --ssh-user MY_USER --ssh-password "MY_PASS" dh2vrtooldev01 Ignoring executable-hooks-1.3.2 because its extensions are not built. Try: gem pristine executable-hooks --version 1.3.2 dh2vrtooldev01 Ignoring gem-wrappers-1.2.7 because its extensions are not built. Try: gem pristine gem-wrappers --version 1.2.7 dh2vrtooldev01 Ignoring nokogiri-1.6.8.1 because its extensions are not built. Try: gem pristine nokogiri --version 1.6.8.1 dh2vrtooldev01 /opt/chef/embedded/lib/ruby/site_ruby/2.3.0/rubygems/dependency.rb:308:in to_specs': Could not find 'addressable' (= 2.4.0) among 45 total gem(s) (Gem::MissingSpecError) dh2vrtooldev01 Checked in 'GEM_PATH=/usr/local/rvm/gems/ruby-2.2.4:/usr/local/rvm/gems/ruby-2.2.4@global', execute gem env for more information dh2vrtooldev01 from /opt/chef/embedded/lib/ruby/site_ruby/2.3.0/rubygems/dependency.rb:320:in to_spec' dh2vrtooldev01 from /opt/chef/embedded/lib/ruby/site_ruby/2.3.0/rubygems/core_ext/kernel_gem.rb:65:in gem' dh2vrtooldev01 from /usr/bin/chef-client:4:in <main>' If I go onto the node and tell RVM to remove that version of Ruby, then the CHEF-Client will begin working again just fine. The Question Does anyone have any idea why CHEF-Client suddenly forgets how to run once RVM installs a version of Ruby? Source Code (default.rb) include_recipe 'abl_rails::rvm_install' include_recipe 'abl_rails::ruby_install' (rvm_install.rb) # Install RVM (if it doesn't already exist) execute 'install_rvm' do cwd '/root/' command 'curl -sSL [URL] | gpg2 --import -; curl -L get.rvm.io | bash -s stable' not_if {::File.exists?('/etc/profile.d/rvm.sh')} end (ruby_install.rb) # Install Ruby bash 'install_ruby' do cwd '/root/' code <<-EOH source /etc/profile.d/rvm.sh; rvm install #{node['ruby_version']}; EOH not_if "source /etc/profile.d/rvm.sh; ruby --version | grep #{node['ruby_version']}", :cwd => '/root' notifies :run, "bash[set_default_rvm_ruby]", :immediately end # Set the default Ruby version in RVM bash "set_default_rvm_ruby" do cwd '/root' code <<-EOH source /etc/profile.d/rvm.sh; rvm use #{node['ruby_version']} --default; EOH action :run end
|
ruby, chef-infra, rvm, rhel
| 2
| 523
| 2
|
https://stackoverflow.com/questions/40897698/rvm-ruby-install-breaks-chef-client-on-bootstrapped-node
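Editor's note: the traceback shows chef-client's embedded Ruby inheriting RVM's GEM_PATH/GEM_HOME from root's shell; a hedged workaround is to strip those variables when invoking chef-client (or to stop sourcing rvm.sh for non-interactive root shells):

# run chef-client without RVM's gem environment leaking into it
env -u GEM_HOME -u GEM_PATH -u RUBYOPT chef-client -o 'recipe[MY_COOKBOOK::default]'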
|
35,647,094
|
Infinite running server-side python script?
|
I want to replace the cron job that "keeps" my program alive, because it calls the program every XX interval whether or not the script is already running, creating duplicate entries. I investigated the issue and had a few approaches. One was to modify my program so it checks whether it is already running and closes itself. The one I went after was to detach it completely from cron by calling itself over and over again with execfile, which works exactly how I want except for the following problem: RuntimeError: maximum recursion depth exceeded Is there a way to keep the program in an "infinite loop" without exceeding the recursion limit? Here is my code; it's a program that checks mails and converts them into MySQL DB entries. imap = imaplib.IMAP4(hst) try: imap.login(usr, pwd) except Exception as e: errormsg = e time.sleep(30) print "IMAP error: " + str(errormsg) execfile('/var/www/html/olotool/converter.py') raise IOError(e) # Authentification & Fetch Step while True: time.sleep(5) ''' The script will always result in an error if there are no mails left to check in the inbox. It then goes into sleep mode and relaunches itself to check if new mails have arrived. ''' try: imap.select("Inbox") # Tell Imap where to go result, data = imap.uid('search', None, "ALL") latest = data[0].split()[-1] result, data = imap.uid('fetch', latest, '(RFC822)') raw = data[0][1] # This contains the Mail Data msg = email.message_from_string(raw) except Exception as e: disconnect(imap) time.sleep(60) execfile('/var/www/html/olotool/converter.py') raise IOError(e)
|
Infinite running server-side python script? I want to replace the cron job that "keeps" my program alive, because it calls the program every XX interval whether or not the script is already running, creating duplicate entries. I investigated the issue and had a few approaches. One was to modify my program so it checks whether it is already running and closes itself. The one I went after was to detach it completely from cron by calling itself over and over again with execfile, which works exactly how I want except for the following problem: RuntimeError: maximum recursion depth exceeded Is there a way to keep the program in an "infinite loop" without exceeding the recursion limit? Here is my code; it's a program that checks mails and converts them into MySQL DB entries. imap = imaplib.IMAP4(hst) try: imap.login(usr, pwd) except Exception as e: errormsg = e time.sleep(30) print "IMAP error: " + str(errormsg) execfile('/var/www/html/olotool/converter.py') raise IOError(e) # Authentification & Fetch Step while True: time.sleep(5) ''' The script will always result in an error if there are no mails left to check in the inbox. It then goes into sleep mode and relaunches itself to check if new mails have arrived. ''' try: imap.select("Inbox") # Tell Imap where to go result, data = imap.uid('search', None, "ALL") latest = data[0].split()[-1] result, data = imap.uid('fetch', latest, '(RFC822)') raw = data[0][1] # This contains the Mail Data msg = email.message_from_string(raw) except Exception as e: disconnect(imap) time.sleep(60) execfile('/var/www/html/olotool/converter.py') raise IOError(e)
|
python, python-2.7, loops, recursion, rhel
| 2
| 729
| 2
|
https://stackoverflow.com/questions/35647094/infinite-running-server-side-python-script
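Editor's note: rather than having the script re-exec itself (each execfile call grows the Python call stack), the usual patterns are a cron entry guarded by flock so a second copy simply refuses to start, or an external supervising loop; both are sketched below reusing the path from the question:

# option 1: cron line that never starts a second copy (flock is in util-linux)
*/5 * * * * /usr/bin/flock -n /var/lock/converter.lock /usr/bin/python /var/www/html/olotool/converter.py

# option 2: a plain supervising loop instead of self-re-execution
while true; do
    /usr/bin/python /var/www/html/olotool/converter.py
    sleep 60
done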
|
30,781,660
|
Unable to start postgresql service in Redhat linux 7
|
I have installed postgresql 9.4 on a Red Hat 7 server. It was installed through postgresql-9.4.3-1-linux-x64.run. It displayed a clear message "postgres is installed on your machine". Now when I log in with su - postgres it doesn't ask for a password and goes to a bash prompt. If I type psql it displays "command not found". When I tried starting the service as the root user with service postgresql initdb I get: The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl. I tried start postgres restart which didn't work. I tried searching and found nothing. I know the issue is with starting the service.
|
Unable to start postgresql service in Redhat linux 7 I have installed postgresql 9.4 on a Red Hat 7 server. It was installed through postgresql-9.4.3-1-linux-x64.run. It displayed a clear message "postgres is installed on your machine". Now when I log in with su - postgres it doesn't ask for a password and goes to a bash prompt. If I type psql it displays "command not found". When I tried starting the service as the root user with service postgresql initdb I get: The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl. I tried start postgres restart which didn't work. I tried searching and found nothing. I know the issue is with starting the service.
|
postgresql, rhel
| 2
| 9,554
| 1
|
https://stackoverflow.com/questions/30781660/unable-to-start-postgresql-service-in-redhat-linux-7
|
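A hedged sketch for the question above: on RHEL 7, initdb and service management go through systemd rather than the old service postgresql initdb action. The paths below assume the PGDG RPM layout; the EnterpriseDB .run installer puts its scripts under a different prefix (typically /opt/PostgreSQL/9.4), and psql usually just needs to be added to PATH.

sudo /usr/pgsql-9.4/bin/postgresql94-setup initdb     # one-time cluster initialisation (PGDG layout)
sudo systemctl enable --now postgresql-9.4            # start the service and enable it at boot
sudo systemctl status postgresql-9.4
export PATH=$PATH:/usr/pgsql-9.4/bin                  # makes psql resolvable for the postgres user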
27,435,259
|
pandas installation - numpy version is too old
|
At my workplace, I use a Virtual Machine (VM) with a better hardware setup than my laptop to work with data (cleaning, organizing, analysis, etc.). I am trying to install Pandas from source (i.e., tar.gz) because the VM is locked down (i.e., it does not have access to hosts outside the company network). I receive the following error when I try to build and install pandas from its source directory: sudo /usr/bin/python setup.py install Traceback (most recent call last): File "setup.py", line 606, in <module> **setuptools_kwargs) File "/usr/lib64/python2.6/distutils/core.py", line 113, in setup _setup_distribution = dist = klass(attrs) File "/usr/lib/python2.6/site-packages/setuptools/dist.py", line 221, in __init__ self.fetch_build_eggs(attrs.pop('setup_requires')) File "/usr/lib/python2.6/site-packages/setuptools/dist.py", line 245, in fetch_build_eggs parse_requirements(requires), installer=self.fetch_build_egg File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 550, in resolve raise VersionConflict(dist,req) # XXX put more info here pkg_resources.VersionConflict: (numpy 1.4.1 (/usr/lib64/python2.6/site-packages), Requirement.parse('numpy>=1.6.1')) However, when I enter the Python shell, import numpy, and check its version, I receive the following output: Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type "help", "copyright", "credits", or "license" for more information >>> import numpy >>> numpy.version.version '1.9.0' The VM is running Red Hat Enterprise Linux Server release 6.5 (Santiago) with Python 2.6.6 (path is /usr/bin/python ). I have sudo access on the VM. I have been able to install modules in the past (e.g., pyodbc) by downloading them on my Windows laptop, using WinSCP to copy files to the VM, and then installing from source on the VM. How should I begin to remedy this dependency issue?
|
pandas installation - numpy version is too old At my workplace, I use a Virtual Machine (VM) with a better hardware setup than my laptop to work with data (cleaning, organizing, analysis, etc.). I am trying to install Pandas from source (i.e., tar.gz) because the VM is locked down (i.e., it does not have access to hosts outside the company network). I receive the following error when I try to build and install pandas from its source directory: sudo /usr/bin/python setup.py install Traceback (most recent call last): File "setup.py", line 606, in <module> **setuptools_kwargs) File "/usr/lib64/python2.6/distutils/core.py", line 113, in setup _setup_distribution = dist = klass(attrs) File "/usr/lib/python2.6/site-packages/setuptools/dist.py", line 221, in __init__ self.fetch_build_eggs(attrs.pop('setup_requires')) File "/usr/lib/python2.6/site-packages/setuptools/dist.py", line 245, in fetch_build_eggs parse_requirements(requires), installer=self.fetch_build_egg File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 550, in resolve raise VersionConflict(dist,req) # XXX put more info here pkg_resources.VersionConflict: (numpy 1.4.1 (/usr/lib64/python2.6/site-packages), Requirement.parse('numpy>=1.6.1')) However, when I enter the Python shell, import numpy, and check its version, I receive the following output: Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type "help", "copyright", "credits", or "license" for more information >>> import numpy >>> numpy.version.version '1.9.0' The VM is running Red Hat Enterprise Linux Server release 6.5 (Santiago) with Python 2.6.6 (path is /usr/bin/python ). I have sudo access on the VM. I have been able to install modules in the past (e.g., pyodbc) by downloading them on my Windows laptop, using WinSCP to copy files to the VM, and then installing from source on the VM. How should I begin to remedy this dependency issue?
|
python, linux, pandas, installation, rhel
| 2
| 2,754
| 1
|
https://stackoverflow.com/questions/27435259/pandas-installation-numpy-version-is-too-old
|
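One hedged way to untangle the question above: setup.py's setup_requires resolves numpy through pkg_resources, which still sees the distro's numpy 1.4.1 RPM in /usr/lib64 even though a newer copy wins at import time. A sketch, assuming nothing else on the VM depends on the old RPM:

/usr/bin/python -c "import numpy; print numpy.__version__, numpy.__file__"   # confirm which copy wins
rpm -q numpy                            # the 1.4.1 package pkg_resources is complaining about
sudo rpm -e --nodeps numpy              # remove it so the newer 1.9.0 install is the only one visible
sudo /usr/bin/python setup.py install   # retry the pandas build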
24,243,615
|
unable to use cscope multi key stroke via vim editor
|
I am using VIM 7.0 on RHEL release 5.4 , and downloaded cscope plugin from: [URL] and copied it to path(one instance at a time): ~/.vim/plugin/cscope_maps.vim & /usr/share/vim/vim70/autoload and generated cscope -qbR from root dir of source files, & opening *.C files from same dir. According to this(point 5) & cscope_maps.vim , I should be able to do keyword search by multiple keystroke: CTRL-\ <option> or CTRL-@ <option> by placing the cursor under the keyword as we do for ctags. But I am able to access the cscope keyword search only through the vim's command line argument (ie., :cs f d or :cs f c ) and not with multiple key stroke shortcut. I've also tried pasting all the contents of cscope_maps.vim to ~/.vimrc , but it didn't help Is there something I am doing wrong/ any other way to make it work?
|
unable to use cscope multi key stroke via vim editor I am using VIM 7.0 on RHEL release 5.4 , and downloaded cscope plugin from: [URL] and copied it to path(one instance at a time): ~/.vim/plugin/cscope_maps.vim & /usr/share/vim/vim70/autoload and generated cscope -qbR from root dir of source files, & opening *.C files from same dir. According to this(point 5) & cscope_maps.vim , I should be able to do keyword search by multiple keystroke: CTRL-\ <option> or CTRL-@ <option> by placing the cursor under the keyword as we do for ctags. But I am able to access the cscope keyword search only through the vim's command line argument (ie., :cs f d or :cs f c ) and not with multiple key stroke shortcut. I've also tried pasting all the contents of cscope_maps.vim to ~/.vimrc , but it didn't help Is there something I am doing wrong/ any other way to make it work?
|
linux, vim, rhel, cscope
| 2
| 1,913
| 4
|
https://stackoverflow.com/questions/24243615/unable-to-use-cscope-multi-key-stroke-via-vim-editor
|
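Two quick checks for the question above, hedged since the plugin may simply not be loading: the CTRL-\ maps only work in a Vim built with cscope support, and the stock cscope_maps.vim attaches a database either from ./cscope.out or from the CSCOPE_DB environment variable. Adjust the path to your source tree:

vim --version | grep -o '[+-]cscope'          # must show +cscope (vim-minimal / plain vi will not)
export CSCOPE_DB=/path/to/source/cscope.out   # lets the plugin find the database from any directory
ls ~/.vim/plugin/cscope_maps.vim              # the plugin belongs here, not under autoload/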
20,627,864
|
Apache camel:bindy illegal argument exception
|
I am doing data format conversion between POJO to CSV and vice versa. In this while converting CSV to Object file(Unmarshalling) i am getting illegal argument exception for int data type. Only for string its working fine. Below is my POJO @CsvRecord(separator="//|",crlf="UNIX",generateHeaderColumns=false) public class EmployeeVO implements Serializable{ private static final long serialVersionUID = -663135747565879908L; @DataField(pos=1) private String name; @DataField(pos=3) private Integer age; @DataField(pos=2) private String grade; // getter setter } csv data sumit|4th standrad|22 the above csv is generated from the above POJO. But at the time of converting CSV to POJO i am getting folloing exception java.lang.IllegalArgumentException: Parsing error detected for field defined at the position: 3, line: 1 following are my camel - context file for your reference marshal <route> <from uri="cxf:bean:rtoemplyeeService"/> <convertBodyTo type="java.lang.String" id="stringInput"/> <bean ref="govtEmpBean" method="getEmployeeCSV" beanType="govtEmpBean" id="govtEmp"/> <log message="before marshalling ================== ${body}"/> <marshal ref="bindyDataformat"> <bindy type="Csv" packages="com.mycompany.converter.vo"/> </marshal> <log message="after marshalling ================== ${body}"/> <to uri="file://D:/JATO_WORK/repo_bkp/csv/"/> <setBody> <simple>CSV output is generated at file system </simple> </setBody> </route> un marshal <route id="csvtoobject"> <from uri="file://D:/JATO_WORK/repo_bkp/csv?delay=10000&initialDelay=10"/> <log message="csv string ============= ${body}"/> <unmarshal ref="bindyDataformat"/> <log message="${body}"/> <bean ref="govtEmpBean" method="printCSVObject" beanType="govtEmpBean" id="govtEmp"/> </route>
|
Apache camel:bindy illegal argument exception I am doing data format conversion between POJO to CSV and vice versa. In this while converting CSV to Object file(Unmarshalling) i am getting illegal argument exception for int data type. Only for string its working fine. Below is my POJO @CsvRecord(separator="//|",crlf="UNIX",generateHeaderColumns=false) public class EmployeeVO implements Serializable{ private static final long serialVersionUID = -663135747565879908L; @DataField(pos=1) private String name; @DataField(pos=3) private Integer age; @DataField(pos=2) private String grade; // getter setter } csv data sumit|4th standrad|22 the above csv is generated from the above POJO. But at the time of converting CSV to POJO i am getting folloing exception java.lang.IllegalArgumentException: Parsing error detected for field defined at the position: 3, line: 1 following are my camel - context file for your reference marshal <route> <from uri="cxf:bean:rtoemplyeeService"/> <convertBodyTo type="java.lang.String" id="stringInput"/> <bean ref="govtEmpBean" method="getEmployeeCSV" beanType="govtEmpBean" id="govtEmp"/> <log message="before marshalling ================== ${body}"/> <marshal ref="bindyDataformat"> <bindy type="Csv" packages="com.mycompany.converter.vo"/> </marshal> <log message="after marshalling ================== ${body}"/> <to uri="file://D:/JATO_WORK/repo_bkp/csv/"/> <setBody> <simple>CSV output is generated at file system </simple> </setBody> </route> un marshal <route id="csvtoobject"> <from uri="file://D:/JATO_WORK/repo_bkp/csv?delay=10000&initialDelay=10"/> <log message="csv string ============= ${body}"/> <unmarshal ref="bindyDataformat"/> <log message="${body}"/> <bean ref="govtEmpBean" method="printCSVObject" beanType="govtEmpBean" id="govtEmp"/> </route>
|
apache-camel, rhel, fuseesb, jbossfuse
| 2
| 2,249
| 1
|
https://stackoverflow.com/questions/20627864/apache-camelbindy-illegal-argument-exception
|
16,132,069
|
Hive doesn't show tables when started from another directory
|
I installed Hive cdh4 on RHEL. Whenever I start Hive from a directory, it creates a metastore_db dir in it and a derby.log file. Is this normal behaviour? Moreover, when I create a table after starting Hive from a particular directory, I'm unable to see that table when I start Hive from a different directory. For example, let's say I started Hive from my home dir, i.e. $HOME or ~ , and I create a table in Hive. But when I start Hive from /path/to/my/Hive/directory and do a show tables, the table I just created wouldn't show up. However, if I start Hive from my home directory again and look for tables, I'm able to see the table. Also, if I make some changes in hive-site.xml, they are simply being ignored by Hive. Please help me figure out where I am going wrong.
|
Hive doesn't show tables when started from another directory I installed Hive cdh4 on RHEL. Whenever I start Hive from a directory, it creates a metastore_db dir in it and a derby.log file. Is this normal behaviour? Moreover, when I create a table after starting Hive from a particular directory, I'm unable to see that table when I start Hive from a different directory. For example, let's say I started Hive from my home dir, i.e. $HOME or ~ , and I create a table in Hive. But when I start Hive from /path/to/my/Hive/directory and do a show tables, the table I just created wouldn't show up. However, if I start Hive from my home directory again and look for tables, I'm able to see the table. Also, if I make some changes in hive-site.xml, they are simply being ignored by Hive. Please help me figure out where I am going wrong.
|
hadoop, installation, hive, cloudera, rhel
| 2
| 2,429
| 1
|
https://stackoverflow.com/questions/16132069/hive-doesnt-show-tables-when-started-from-another-directory
|
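For the question above, a hedged explanation and check: with the default embedded Derby metastore, Hive creates metastore_db relative to the working directory, so every start directory gets its own private metastore, and hive-site.xml is only read from the directory named by HIVE_CONF_DIR (or the install's conf/ dir). A sketch, assuming a CDH4-style layout:

export HIVE_CONF_DIR=/etc/hive/conf      # make sure the edited hive-site.xml is the one Hive actually loads
grep -A2 'javax.jdo.option.ConnectionURL' $HIVE_CONF_DIR/hive-site.xml
# Pointing that URL at an absolute path, e.g.
#   jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true
# gives one shared metastore no matter where hive is started from.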
11,970,793
|
CF10 mod_jk.so won't load with RHEL 5.6 and Apache httpd 2.2.3
|
Does anyone have a solution to this... Running RHEL 5.6, with Apache httpd 2.2.3-65.el5_8 and get this error when trying to start the webserver: httpd: Syntax error on line 445 of /etc/httpd/conf/httpd.conf: Syntax error on line 2 of /etc/httpd/conf/mod_jk.conf: Cannot load /data/cf10/config/wsconfig/1/mod_jk.so into server: /data/cf10/config/wsconfig/1/mod_jk.so: undefined symbol: ap_get_server_description I've looked all over Google, and there are some recommendations to compile my own connector, but I need the one from Adobe for CF10. Also the adobe site lists CF10 compatibility w/ Apache HTTPD 2.2.21, well with RedHat Enterprise they don't move the version number up, it gets reverse patched in the app repo.... ANY help would be awesome. We are 50 days from going live with CF10 (or planning to), and really could use some help on getting this issue resolved. In response to one of the posters here, I have indeed verified I'm using the x64 connector in my x64 OS based system. Response from Adobe w/ SOLUTION! Here's the response and resolution: You may download the connector from the following “RHEL_mod_jk.zip” web-link at: [URL] Please note that you may proceed with the installation choosing not to configure the web server initially. Once CF is installed you may proceed to create the connector using the wsconfig tool at \ColdFusion10\cfusion\runtime\bin Find the instructions at [URL] Once the connector is in place you may simply navigate to \ColdFusion10\config\wsconfig\ folder and replace the mod_jk.so file with downloaded copy and restart Apache.
|
CF10 mod_jk.so won't load with RHEL 5.6 and Apache httpd 2.2.3 Does anyone have a solution to this... Running RHEL 5.6, with Apache httpd 2.2.3-65.el5_8 and get this error when trying to start the webserver: httpd: Syntax error on line 445 of /etc/httpd/conf/httpd.conf: Syntax error on line 2 of /etc/httpd/conf/mod_jk.conf: Cannot load /data/cf10/config/wsconfig/1/mod_jk.so into server: /data/cf10/config/wsconfig/1/mod_jk.so: undefined symbol: ap_get_server_description I've looked all over Google, and there are some recommendations to compile my own connector, but I need the one from Adobe for CF10. Also the adobe site lists CF10 compatibility w/ Apache HTTPD 2.2.21, well with RedHat Enterprise they don't move the version number up, it gets reverse patched in the app repo.... ANY help would be awesome. We are 50 days from going live with CF10 (or planning to), and really could use some help on getting this issue resolved. In response to one of the posters here, I have indeed verified I'm using the x64 connector in my x64 OS based system. Response from Adobe w/ SOLUTION! Here's the response and resolution: You may download the connector from the following “RHEL_mod_jk.zip” web-link at: [URL] Please note that you may proceed with the installation choosing not to configure the web server initially. Once CF is installed you may proceed to create the connector using the wsconfig tool at \ColdFusion10\cfusion\runtime\bin Find the instructions at [URL] Once the connector is in place you may simply navigate to \ColdFusion10\config\wsconfig\ folder and replace the mod_jk.so file with downloaded copy and restart Apache.
|
apache, tomcat, coldfusion, mod-jk, rhel
| 2
| 1,332
| 2
|
https://stackoverflow.com/questions/11970793/cf10-mod-jk-so-wont-load-with-rhel-5-6-and-apache-httpd-2-2-3
|
1,837,145
|
PCI Compliance + Magento + PHP version
|
I'm trying to get PCI Compliance for my dedicated server (Red Hat Enterprise Linux), which is running Magento. When I first installed Magento on the server, I realized that RHEL comes with a PHP version which is too old for Magento (5.1.6). So, I found a separate repo with PHP version 5.2.11, which got everything running fine, but now I'm in a bind. My PCI Compliance test says that since my PHP version is < 5.3.1 it has security issues. If I try to update to 5.3.1, Magento breaks. I don't want to edit the Magento core to fix those problems, so I guess what I need is a repo with PHP 5.2.11, but that I can confidently say/prove has back-ported to patch up the issues that the PCI Compliance scan identifies. I realize this is terribly convoluted, but if you have any suggestions/tips I'd be happy to hear them. Thanks.
|
PCI Compliance + Magento + PHP version I'm trying to get PCI Compliance for my dedicated server (Red Hat Enterprise Linux), which is running Magento. When I first installed Magento on the server, I realized that RHEL comes with a PHP version which is too old for Magento (5.1.6). So, I found a separate repo with PHP version 5.2.11, which got everything running fine, but now I'm in a bind. My PCI Compliance test says that since my PHP version is < 5.3.1 it has security issues. If I try to update to 5.3.1, Magento breaks. I don't want to edit the Magento core to fix those problems, so I guess what I need is a repo with PHP 5.2.11, but that I can confidently say/prove has back-ported to patch up the issues that the PCI Compliance scan identifies. I realize this is terribly convoluted, but if you have any suggestions/tips I'd be happy to hear them. Thanks.
|
php, yum, pci-dss, rhel
| 2
| 3,006
| 1
|
https://stackoverflow.com/questions/1837145/pci-compliance-magento-php-version
|
76,862,191
|
Installing CUnit on RHEL CUnit/CUnit.h: No such file or directory
|
I'm trying to set up CUnit for use on Red Hat Enterprise Linux. This is my test file in c: #include <CUnit/CUnit.h> int main() { return 0; } At first, I was getting an error when using gcc cunit-test.c -lcunit : /usr/bin/ld: cannot find -lcunit collect2: error: ld returned 1 exit status So I made a symlink so that /usr/lib64/libcunit.so points to /usr/lib64/libcunit.so.1 , which fixed that error. However, I'm still not able to use the CUnit files. When I compile my test file, with gcc cunit-test.c -lcunit I get the error cunit-test.c:1:10: fatal error: CUnit/CUnit.h: No such file or directory #include <CUnit/CUnit.h> ^~~~~~~~~~~~~~~ compilation terminated.
|
Installing CUnit on RHEL CUnit/CUnit.h: No such file or directory I'm trying to set up CUnit for use on Red Hat Enterprise Linux. This is my test file in c: #include <CUnit/CUnit.h> int main() { return 0; } At first, I was getting an error when using gcc cunit-test.c -lcunit : /usr/bin/ld: cannot find -lcunit collect2: error: ld returned 1 exit status So I made a symlink so that /usr/lib64/libcunit.so points to /usr/lib64/libcunit.so.1 , which fixed that error. However, I'm still not able to use the CUnit files. When I compile my test file, with gcc cunit-test.c -lcunit I get the error cunit-test.c:1:10: fatal error: CUnit/CUnit.h: No such file or directory #include <CUnit/CUnit.h> ^~~~~~~~~~~~~~~ compilation terminated.
|
c, linux, gcc, rhel, cunit
| 2
| 1,598
| 2
|
https://stackoverflow.com/questions/76862191/installing-cunit-on-rhel-cunit-cunit-h-no-such-file-or-directory
|
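A hedged fix for the question above, assuming the EPEL repository is enabled on the host: the runtime package only ships libcunit.so.1, while the headers and the unversioned .so link come from the development package, so installing that replaces the manual symlink.

sudo dnf install CUnit-devel        # provides /usr/include/CUnit/CUnit.h and the libcunit.so link
gcc cunit-test.c -lcunit -o cunit-test
./cunit-test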
68,947,461
|
How set static route in RHEL 8 with Ansible?
|
I have a requirement to maintain a list of static routes across many RHEL 8 VMs (~100) and am thinking of managing this with Ansible. I tried several methods suggested for different scenarios and still had no luck. Some options I tried were: net_static_route - Which is obviously for network appliances linux-system-roles.network ( Redhat Doc ) In my case, I don't want to disturb any network interfaces which are actively being used. I just want to add a static route to send the traffic via a different interface, not through the default route. The command I use to do this manually is: #sudo ip route add 192.168.1.2 via 192.168.100.1 dev ens224 proto static metric 100 Has anyone done something like this?
|
How set static route in RHEL 8 with Ansible? I have a requirement to maintain a list of static routes across many RHEL 8 VMs (~100) and am thinking of managing this with Ansible. I tried several methods suggested for different scenarios and still had no luck. Some options I tried were: net_static_route - Which is obviously for network appliances linux-system-roles.network ( Redhat Doc ) In my case, I don't want to disturb any network interfaces which are actively being used. I just want to add a static route to send the traffic via a different interface, not through the default route. The command I use to do this manually is: #sudo ip route add 192.168.1.2 via 192.168.100.1 dev ens224 proto static metric 100 Has anyone done something like this?
|
ansible, rhel
| 2
| 7,602
| 1
|
https://stackoverflow.com/questions/68947461/how-set-static-route-in-rhel-8-with-ansible
|
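For the question above, the per-host equivalent that NetworkManager will persist (and that can be pushed from Ansible with the command module or the community nmcli module) looks roughly like this; the connection name is an assumption and may differ from the device name on your hosts:

nmcli connection modify ens224 +ipv4.routes "192.168.1.2/32 192.168.100.1 100"   # dest next-hop metric
nmcli connection up ens224        # re-activates only this profile, other interfaces stay untouched
ip route show | grep 192.168.1.2  # verify the route is present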
68,173,833
|
Can't access airflow UI
|
I am trying to install Apache airflow 2.1.0 on red hat guest virtual machine. I am a newbie with airflow as I want to learn how to use it to create ETL. I am pretty stuck with airflow webserver. airflow db init works alright. Initialization done. I created an admin user after db initialization. Then problem arrives with airflow webserver command. [priya@localhost ~]$ airflow webserver -D ____________ _____________ ____ |__( )_________ __/__ /________ __ ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / / ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ / _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/ [2021-06-29 00:10:30,556] {dagbag.py:487} INFO - Filling up the DagBag from /dev/null Running the Gunicorn Server with: Workers: 4 sync Host: 0.0.0.0:8080 Timeout: 120 Logfiles: - - Access Logformat: ================================================================= After this I thought to access UI by [URL] and url is not locating. I again tried [priya@localhost ~]$ airflow webserver -D ____________ _____________ ____ |__( )_________ __/__ /________ __ ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / / ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ / _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/ [2021-06-29 00:19:30,546] {dagbag.py:487} INFO - Filling up the DagBag from /dev/null Traceback (most recent call last): File "/usr/local/bin/airflow", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.6/site-packages/airflow/__main__.py", line 40, in main args.func(args) File "/usr/local/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 48, in command return func(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/airflow/utils/cli.py", line 91, in wrapper return f(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/airflow/cli/commands/webserver_command.py", line 368, in webserver check_if_pidfile_process_is_running(pid_file=pid_file, process_name="webserver") File "/usr/local/lib/python3.6/site-packages/airflow/utils/process_utils.py", line 267, in check_if_pidfile_process_is_running raise AirflowException(f"The {process_name} is already running under PID {pid}.") airflow.exceptions.AirflowException: The webserver is already running under PID 3147. webserver seems to run on pid 3147 then why can't I access webserver UI? Also, I tried to do airflow webserver -p 8080 but then it keeps on giving me error logs workers initialize...workers timeout... in infinite loop. Kindly help. Thank you!
|
Can't access airflow UI I am trying to install Apache airflow 2.1.0 on red hat guest virtual machine. I am a newbie with airflow as I want to learn how to use it to create ETL. I am pretty stuck with airflow webserver. airflow db init works alright. Initialization done. I created an admin user after db initialization. Then problem arrives with airflow webserver command. [priya@localhost ~]$ airflow webserver -D ____________ _____________ ____ |__( )_________ __/__ /________ __ ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / / ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ / _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/ [2021-06-29 00:10:30,556] {dagbag.py:487} INFO - Filling up the DagBag from /dev/null Running the Gunicorn Server with: Workers: 4 sync Host: 0.0.0.0:8080 Timeout: 120 Logfiles: - - Access Logformat: ================================================================= After this I thought to access UI by [URL] and url is not locating. I again tried [priya@localhost ~]$ airflow webserver -D ____________ _____________ ____ |__( )_________ __/__ /________ __ ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / / ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ / _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/ [2021-06-29 00:19:30,546] {dagbag.py:487} INFO - Filling up the DagBag from /dev/null Traceback (most recent call last): File "/usr/local/bin/airflow", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.6/site-packages/airflow/__main__.py", line 40, in main args.func(args) File "/usr/local/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 48, in command return func(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/airflow/utils/cli.py", line 91, in wrapper return f(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/airflow/cli/commands/webserver_command.py", line 368, in webserver check_if_pidfile_process_is_running(pid_file=pid_file, process_name="webserver") File "/usr/local/lib/python3.6/site-packages/airflow/utils/process_utils.py", line 267, in check_if_pidfile_process_is_running raise AirflowException(f"The {process_name} is already running under PID {pid}.") airflow.exceptions.AirflowException: The webserver is already running under PID 3147. webserver seems to run on pid 3147 then why can't I access webserver UI? Also, I tried to do airflow webserver -p 8080 but then it keeps on giving me error logs workers initialize...workers timeout... in infinite loop. Kindly help. Thank you!
|
python, webserver, airflow, rhel
| 2
| 10,097
| 3
|
https://stackoverflow.com/questions/68173833/cant-access-airflow-ui
|
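Two hedged things to check for the question above: the traceback means a stale pid file is left over from the earlier daemonised start, and on RHEL the port may also be blocked by firewalld when browsing from outside the VM. Paths assume the default AIRFLOW_HOME of ~/airflow:

kill $(cat ~/airflow/airflow-webserver.pid) 2>/dev/null        # stop the old gunicorn master if it is still alive
rm -f ~/airflow/airflow-webserver.pid ~/airflow/airflow-webserver-monitor.pid
airflow webserver -p 8080                                      # run in the foreground first to see worker errors
sudo firewall-cmd --add-port=8080/tcp --permanent && sudo firewall-cmd --reload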
66,031,943
|
What is aws redhat root password
|
I am new to AWS. I launched a Red Hat instance on AWS with the free tier and logged in with an SSH client. My login looks like this: ec2-user@ec.....bla.com , which means I logged in as ec2-user . When I try to run some service inside the instance machine, it asks me for the root password. Can anyone tell me what the root password is? I couldn't figure this out yet. Here is an example: [ec2-user@ip-172--my-aws-ip---34 ~]$ systemctl start docker.service ==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ==== Authentication is required to start 'docker.service'. Authenticating as: root Password:
|
What is aws redhat root password I am new to AWS. I launched a Red Hat instance on AWS with the free tier and logged in with an SSH client. My login looks like this: ec2-user@ec.....bla.com , which means I logged in as ec2-user . When I try to run some service inside the instance machine, it asks me for the root password. Can anyone tell me what the root password is? I couldn't figure this out yet. Here is an example: [ec2-user@ip-172--my-aws-ip---34 ~]$ systemctl start docker.service ==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ==== Authentication is required to start 'docker.service'. Authenticating as: root Password:
|
amazon-web-services, amazon-ec2, rhel
| 2
| 4,561
| 1
|
https://stackoverflow.com/questions/66031943/what-is-aws-redhat-root-password
|
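For the question above: the RHEL AMIs ship with no root password set at all, which is why the polkit prompt can never succeed; the intended path is sudo, which ec2-user can use without a password. A minimal example:

sudo systemctl start docker.service      # run the privileged action through sudo instead of authenticating as root
sudo systemctl enable --now docker       # optional: also start it on every boot
sudo -i                                  # or open a root shell if several commands are needed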
63,282,196
|
3 different Openjkds on RHEL7 with YUM
|
I need to install 3 different versions of openjdk (11.0.5, 11.0.6, 11.0.7) on EL7. I see all 3 versions available in the RHEL7 repo, but there is an ask to install all package files for 0.5/0.6 in a user custom location while 0.7 should be installed in the default location. I need to use YUM to avoid altering the RPM DB outside of YUM, and I need to make sure that a YUM update will not upgrade 0.5/0.6. Tar.gz is no longer available since 0.6 so I want to use OpenJDK from the RHEL repo. Shall I use "update-alternatives --list", "yum --installroot= install " or some other way? I suggested using virtual_env but that was rejected. Thanks for your thoughts!
|
3 different Openjkds on RHEL7 with YUM I need to install 3 different versions of openjdk (11.0.5, 11.0.6, 11.0.7) on EL7. I see all 3 versions available in the RHEL7 repo, but there is an ask to install all package files for 0.5/0.6 in a user custom location while 0.7 should be installed in the default location. I need to use YUM to avoid altering the RPM DB outside of YUM, and I need to make sure that a YUM update will not upgrade 0.5/0.6. Tar.gz is no longer available since 0.6 so I want to use OpenJDK from the RHEL repo. Shall I use "update-alternatives --list", "yum --installroot= install " or some other way? I suggested using virtual_env but that was rejected. Thanks for your thoughts!
|
java, yum, rhel
| 2
| 95
| 1
|
https://stackoverflow.com/questions/63282196/3-different-openjkds-on-rhel7-with-yum
|
62,070,443
|
Is there a way to iterate through node.run_state data in a Chef recipe?
|
Is there a way to iterate through node.run_state data? This is in a RHEL environment with Active Directory users. I have a ruby block that populates node.run_state. I have to have this run at converge time, because the overall cookbook will be used for build automation. On the very first run, the cookbook installs Centrify, then later needs to run adquery to gather user info for populating home directories with SSH keys. On a chef-client run, I see this: Compiling Cookbooks... {} Obviously, that's the puts running in compile time against an empty hash. At converge time, nothing happens in the loop with the 2 directory and 1 template resource. Here is the relevant piece of the recipe: ruby_block 'set uid, gid, and homedir for users' do block do base_attr['ssh_keys'].each do |user, pubkeys| # next unless Dir.exist?(homedir) node.run_state[user] = {} puts "Checking user #{user}..." if local_users.key?(user) node.run_state[user]['homedir'] = local_users[user]['homedir'] node.run_state[user]['uid'] = local_users[user]['uid'].to_i node.run_state[user]['gid'] = local_users[user]['gid'].to_i elsif centrify_users.key?(user) node.run_state[user]['homedir'] = centrify_users[user]['homedir'] node.run_state[user]['uid'] = centrify_users[user]['uid'].to_i node.run_state[user]['gid'] = centrify_users[user]['gid'].to_i else puts "user #{user} not found." # Place holder values. node.run_state[user]['homedir'] = "/tmp/#{user}" node.run_state[user]['uid'] = 0 node.run_state[user]['gid'] = 0 end end end end # Dir.exist? guard should bypass compile-time error. # "name is a required property" # next unless Dir.exist?(homedir) puts node.run_state node.run_state.each do |user| directory node.run_state[user]['homedir'] do owner node.run_state[user]['uid'] group node.run_state[user]['gid'] mode '0700' end directory "#{node.run_state[user]['homedir']}/.ssh" do owner node.run_state[user]['uid'] group node.run_state[user]['gid'] mode '0700' end template "#{node.run_state[user]['homedir']}/.ssh/authorized_keys" do owner node.run_state[user]['uid'] group node.run_state[user]['gid'] mode '0600' source 'authorized_keys.erb' variables( sshkeys: base_attr['ssh_keys'][user] ) end end Any ideas how to make this work?
|
Is there a way to iterate through node.run_state data in a Chef recipe? Is there a way to iterate through node.run_state data? This is in a RHEL environment with Active Directory users. I have a ruby block that populates node.run_state. I have to have this run at converge time, because the overall cookbook will be used for build automation. On the very first run, the cookbook installs Centrify, then later needs to run adquery to gather user info for populating home directories with SSH keys. On a chef-client run, I see this: Compiling Cookbooks... {} Obviously, that's the puts running in compile time against an empty hash. At converge time, nothing happens in the loop with the 2 directory and 1 template resource. Here is the relevant piece of the recipe: ruby_block 'set uid, gid, and homedir for users' do block do base_attr['ssh_keys'].each do |user, pubkeys| # next unless Dir.exist?(homedir) node.run_state[user] = {} puts "Checking user #{user}..." if local_users.key?(user) node.run_state[user]['homedir'] = local_users[user]['homedir'] node.run_state[user]['uid'] = local_users[user]['uid'].to_i node.run_state[user]['gid'] = local_users[user]['gid'].to_i elsif centrify_users.key?(user) node.run_state[user]['homedir'] = centrify_users[user]['homedir'] node.run_state[user]['uid'] = centrify_users[user]['uid'].to_i node.run_state[user]['gid'] = centrify_users[user]['gid'].to_i else puts "user #{user} not found." # Place holder values. node.run_state[user]['homedir'] = "/tmp/#{user}" node.run_state[user]['uid'] = 0 node.run_state[user]['gid'] = 0 end end end end # Dir.exist? guard should bypass compile-time error. # "name is a required property" # next unless Dir.exist?(homedir) puts node.run_state node.run_state.each do |user| directory node.run_state[user]['homedir'] do owner node.run_state[user]['uid'] group node.run_state[user]['gid'] mode '0700' end directory "#{node.run_state[user]['homedir']}/.ssh" do owner node.run_state[user]['uid'] group node.run_state[user]['gid'] mode '0700' end template "#{node.run_state[user]['homedir']}/.ssh/authorized_keys" do owner node.run_state[user]['uid'] group node.run_state[user]['gid'] mode '0600' source 'authorized_keys.erb' variables( sshkeys: base_attr['ssh_keys'][user] ) end end Any ideas how to make this work?
|
chef-infra, rhel, centrify
| 2
| 1,155
| 3
|
https://stackoverflow.com/questions/62070443/is-there-a-way-to-iterate-through-node-run-state-data-in-a-chef-recipe
|
60,754,757
|
Docker + docker-compose up + Cannot start service
|
we have docker-compose.yml that contain configuration for Kafka , zookeeper and schema registry when we start the docker compose we get the following errors docker-compose up -d Starting kafka-docker-final_zookeeper3_1 ... error ERROR: for kafka-docker-final_zookeeper3_1 Cannot start service zookeeper3: network dd321821f3cb4a715c31e04b32bff2cf206c85ed5581b01b1c6a94ffa45f330e not found ERROR: for zookeeper3 Cannot start service zookeeper3: network dd321821f3cb4a715c31e04b32bff2cf206c85ed5581b01b1c6a94ffa45f330e not found ERROR: Encountered errors while bringing up the project. and systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled) Active: active (running) since Thu 2020-03-19 07:57:29 UTC; 1h 55min ago Docs: [URL] Main PID: 12105 (dockerd) Tasks: 30 Memory: 654.6M CGroup: /system.slice/docker.service └─12105 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock Mar 19 07:57:29 master3 dockerd[12105]: time="2020-03-19T07:57:29.610005717Z" level=info msg="Daemon has completed initialization" Mar 19 07:57:29 master3 dockerd[12105]: time="2020-03-19T07:57:29.631338594Z" level=info msg="API listen on /var/run/docker.sock" Mar 19 07:57:29 master3 systemd[1]: Started Docker Application Container Engine. Mar 19 07:58:12 master3 dockerd[12105]: time="2020-03-19T07:58:12.352833676Z" level=warning msg="Error getting v2 registry: Get [URL] net/http: re...ng headers)" Mar 19 07:58:12 master3 dockerd[12105]: time="2020-03-19T07:58:12.352916724Z" level=info msg="Attempting next endpoint for pull after error: Get [URL] headers)" Mar 19 07:58:12 master3 dockerd[12105]: time="2020-03-19T07:58:12.353019409Z" level=error msg="Handler for POST /v1.22/images/create returned error: Get [URL] headers)" Mar 19 08:03:47 master3 dockerd[12105]: time="2020-03-19T08:03:47.255058871Z" level=warning msg="error locating sandbox id 20ce3c5b6383ad92dae848c3de1d91bbfff9306ca86fdc90fae...c not found" Mar 19 08:03:47 master3 dockerd[12105]: time="2020-03-19T08:03:47.263976715Z" level=error msg="ef808aa411ae0aaef0920397c77b6d9a327bdd1651877402fe1fc142a513af8a cleanup: faile...h container" Mar 19 09:50:43 master3 dockerd[12105]: time="2020-03-19T09:50:43.920457464Z" level=warning msg="error locating sandbox id 20ce3c5b6383ad92dae848c3de1d91bbfff9306ca86fdc90fae...c not found" Mar 19 09:50:43 master3 dockerd[12105]: time="2020-03-19T09:50:43.927744636Z" level=error msg="ef808aa411ae0aaef0920397c77b6d9a327bdd1651877402fe1fc142a513af8a cleanup: faile...h container" Hint: Some lines were ellipsized, use -l to show in full. regarding to Cannot start service zookeeper3: network dd321821f3cb4a715c31e04b32bff2cf206c85ed5581b01b1c6a94ffa45f330e not found Cannot start service zookeeper3: network dd321821f3cb4a715c31e04b32bff2cf206c85ed5581b01b1c6a94ffa45f330e not found how to fix this issue? docker container ls -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 6c729cb0bb2c confluentinc/cp-schema-registry:latest "/etc/confluent/dock…" 3 months ago Exited (255) 2 hours ago 0.0.0.0:8081->8081/tcp kafka-docker-schemaregistry_1 ef808aa411ae confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 3 months ago Exited (255) 2 hours ago docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES docker network ls NETWORK ID NAME DRIVER SCOPE e5566ab8ca6d bridge bridge local 2467d9664593 host host local c509e32d0d67 kafka-docker-default bridge local 08966157382c none null local
|
Docker + docker-compose up + Cannot start service we have docker-compose.yml that contain configuration for Kafka , zookeeper and schema registry when we start the docker compose we get the following errors docker-compose up -d Starting kafka-docker-final_zookeeper3_1 ... error ERROR: for kafka-docker-final_zookeeper3_1 Cannot start service zookeeper3: network dd321821f3cb4a715c31e04b32bff2cf206c85ed5581b01b1c6a94ffa45f330e not found ERROR: for zookeeper3 Cannot start service zookeeper3: network dd321821f3cb4a715c31e04b32bff2cf206c85ed5581b01b1c6a94ffa45f330e not found ERROR: Encountered errors while bringing up the project. and systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled) Active: active (running) since Thu 2020-03-19 07:57:29 UTC; 1h 55min ago Docs: [URL] Main PID: 12105 (dockerd) Tasks: 30 Memory: 654.6M CGroup: /system.slice/docker.service └─12105 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock Mar 19 07:57:29 master3 dockerd[12105]: time="2020-03-19T07:57:29.610005717Z" level=info msg="Daemon has completed initialization" Mar 19 07:57:29 master3 dockerd[12105]: time="2020-03-19T07:57:29.631338594Z" level=info msg="API listen on /var/run/docker.sock" Mar 19 07:57:29 master3 systemd[1]: Started Docker Application Container Engine. Mar 19 07:58:12 master3 dockerd[12105]: time="2020-03-19T07:58:12.352833676Z" level=warning msg="Error getting v2 registry: Get [URL] net/http: re...ng headers)" Mar 19 07:58:12 master3 dockerd[12105]: time="2020-03-19T07:58:12.352916724Z" level=info msg="Attempting next endpoint for pull after error: Get [URL] headers)" Mar 19 07:58:12 master3 dockerd[12105]: time="2020-03-19T07:58:12.353019409Z" level=error msg="Handler for POST /v1.22/images/create returned error: Get [URL] headers)" Mar 19 08:03:47 master3 dockerd[12105]: time="2020-03-19T08:03:47.255058871Z" level=warning msg="error locating sandbox id 20ce3c5b6383ad92dae848c3de1d91bbfff9306ca86fdc90fae...c not found" Mar 19 08:03:47 master3 dockerd[12105]: time="2020-03-19T08:03:47.263976715Z" level=error msg="ef808aa411ae0aaef0920397c77b6d9a327bdd1651877402fe1fc142a513af8a cleanup: faile...h container" Mar 19 09:50:43 master3 dockerd[12105]: time="2020-03-19T09:50:43.920457464Z" level=warning msg="error locating sandbox id 20ce3c5b6383ad92dae848c3de1d91bbfff9306ca86fdc90fae...c not found" Mar 19 09:50:43 master3 dockerd[12105]: time="2020-03-19T09:50:43.927744636Z" level=error msg="ef808aa411ae0aaef0920397c77b6d9a327bdd1651877402fe1fc142a513af8a cleanup: faile...h container" Hint: Some lines were ellipsized, use -l to show in full. regarding to Cannot start service zookeeper3: network dd321821f3cb4a715c31e04b32bff2cf206c85ed5581b01b1c6a94ffa45f330e not found Cannot start service zookeeper3: network dd321821f3cb4a715c31e04b32bff2cf206c85ed5581b01b1c6a94ffa45f330e not found how to fix this issue? 
docker container ls -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 6c729cb0bb2c confluentinc/cp-schema-registry:latest "/etc/confluent/dock…" 3 months ago Exited (255) 2 hours ago 0.0.0.0:8081->8081/tcp kafka-docker-schemaregistry_1 ef808aa411ae confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 3 months ago Exited (255) 2 hours ago docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES docker network ls NETWORK ID NAME DRIVER SCOPE e5566ab8ca6d bridge bridge local 2467d9664593 host host local c509e32d0d67 kafka-docker-default bridge local 08966157382c none null local
|
docker, docker-compose, docker-machine, rhel, confluent-platform
| 2
| 4,659
| 2
|
https://stackoverflow.com/questions/60754757/docker-docker-compose-up-cannot-start-service
|
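A hedged recovery for the question above: the exited containers still reference a compose network ID that no longer exists (typically after a Docker daemon upgrade or a host reboot), so recreating the containers clears the stale reference:

docker-compose down --remove-orphans   # removes the old containers and their recorded network attachment
docker network prune -f                # optional: drops dangling networks (assumes none are still needed)
docker-compose up -d                   # recreates containers against a freshly created project network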
60,314,283
|
Error unable to find the package name called v8-devel on RHEL8
|
I am trying to install v8-devel rpm package on RHEL 8.1 but unable to find it in all repositories of RHEL as well as in EPEL. I tried all the possibilities as shown below: yum install v8-devel yum install v8-devel yum --enablerepo=* install v8* yum search v8-devel yum whatprovides v8-devel dnf install v8-devel dnf install v8-devel dnf install v8* i do have following EPEL for RHEL8 installed on my server. epel epel-modular epel-source I am also able to install similar package on RHEL 7 with the help of EPEL. I am just curious to know whether name of (v8-devel) got changed for RHEL8 or not? Can someone help me to find out v8-devel rpm and install it via EPEL on RHEL8.1. Thanks in advance.
|
Error unable to find the package name called v8-devel on RHEL8 I am trying to install v8-devel rpm package on RHEL 8.1 but unable to find it in all repositories of RHEL as well as in EPEL. I tried all the possibilities as shown below: yum install v8-devel yum install v8-devel yum --enablerepo=* install v8* yum search v8-devel yum whatprovides v8-devel dnf install v8-devel dnf install v8-devel dnf install v8* i do have following EPEL for RHEL8 installed on my server. epel epel-modular epel-source I am also able to install similar package on RHEL 7 with the help of EPEL. I am just curious to know whether name of (v8-devel) got changed for RHEL8 or not? Can someone help me to find out v8-devel rpm and install it via EPEL on RHEL8.1. Thanks in advance.
|
linux, rpm, rhel, libv8
| 2
| 1,414
| 1
|
https://stackoverflow.com/questions/60314283/error-unable-to-find-the-package-name-called-v8-devel-on-rhel8
|
56,675,272
|
Perl code crashes after upgrading from rhel6 to rhel8
|
I have been migrating from a rhel6 server with Perl 5.10 to a server with rhel8 and perl 5.24 and 5.26. Everything works fine except one Perl program crashes executing a warn statement. Using the built in debugger, I traced the error to the line below. Entering n to step over the subroutine terminated execution. File::Temp::cleanup(/usr/share/perl5/vendor_perl/File/Temp.pm:934): 934: @{ $dirs_to_unlink{$$} } = () DB<44> n The sub function with the warn statement that fails is: sub add_rec_to_db { my $info = shift; # Returns (errorcode, errormsg). No errorcode means GOOD. my $af_rec = Logs::stats_transform($info); my $lpd = Logs::db_lpd(); my $db_file = "$lpd/persistent.db"; my $dbh = LogsCommon::open_db($db_file); my $err = LogsCommon::sql_insert_or_update($dbh, $af_rec, 'all_recs', 'FID'); if ($err) { if ($err =~ /database is locked/) { return "DATABASE_IS_LOCKED"; } $err = strip_special_chars($err); warn "AddRecToDb: FID=$info->{REC_ID} UNRECOGNIZED_DB_ERROR: $err"; return "UNRECOGNIZED_DB_ERROR"; } return undef; } The Perl program executes without error if I change the parameters so it doesn't run this section of code. DBI and DBD seem to be installed and working correctly. Even for the sub function that crashes, the code does what it should, until the warn statement. After executing the warn statement, it should return "UNRECOGNIZED_DB_ERROR". Using module streams to switch from Perl 5.26 to 5.24 didn't resolve this issue.
|
Perl code crashes after upgrading from rhel6 to rhel8 I have been migrating from a rhel6 server with Perl 5.10 to a server with rhel8 and perl 5.24 and 5.26. Everything works fine except one Perl program crashes executing a warn statement. Using the built in debugger, I traced the error to the line below. Entering n to step over the subroutine terminated execution. File::Temp::cleanup(/usr/share/perl5/vendor_perl/File/Temp.pm:934): 934: @{ $dirs_to_unlink{$$} } = () DB<44> n The sub function with the warn statement that fails is: sub add_rec_to_db { my $info = shift; # Returns (errorcode, errormsg). No errorcode means GOOD. my $af_rec = Logs::stats_transform($info); my $lpd = Logs::db_lpd(); my $db_file = "$lpd/persistent.db"; my $dbh = LogsCommon::open_db($db_file); my $err = LogsCommon::sql_insert_or_update($dbh, $af_rec, 'all_recs', 'FID'); if ($err) { if ($err =~ /database is locked/) { return "DATABASE_IS_LOCKED"; } $err = strip_special_chars($err); warn "AddRecToDb: FID=$info->{REC_ID} UNRECOGNIZED_DB_ERROR: $err"; return "UNRECOGNIZED_DB_ERROR"; } return undef; } The Perl program executes without error if I change the parameters so it doesn't run this section of code. DBI and DBD seem to be installed and working correctly. Even for the sub function that crashes, the code does what it should, until the warn statement. After executing the warn statement, it should return "UNRECOGNIZED_DB_ERROR". Using module streams to switch from Perl 5.26 to 5.24 didn't resolve this issue.
|
perl, rhel
| 2
| 549
| 1
|
https://stackoverflow.com/questions/56675272/perl-code-crashes-after-upgrading-from-rhel6-to-rhel8
|
46,896,969
|
Postgresql 9.6 InitDB Fails
|
Whenever I Try running the command below it always fails on RHEL7. I've tried on another similar OS (Newer) and it doesn't do this and just works. I'v looked into permissions of directories, disabled selinux (Just incase) and also looked into locales, however these settings match that of my other server. /usr/pgsql-9.6/bin/postgresql96-setup initdb Log output: The database cluster will be initialized with locale "en_US.UTF-8". The default database encoding has accordingly been set to "UTF8". The default text search configuration will be set to "english". Data page checksums are disabled. fixing permissions on existing directory /var/lib/pgsql/9.6/data ... ok creating subdirectories ... ok selecting default max_connections ... 100 selecting default shared_buffers ... 64MB selecting dynamic shared memory implementation ... posix creating configuration files ... ok running bootstrap script ... < 2017-10-23 20:13:54.035 BST > LOG: invalid value for parameter "lc_messages": "en_US.UTF-8" < 2017-10-23 20:13:54.035 BST > LOG: invalid value for parameter "lc_monetary": "en_US.UTF-8" < 2017-10-23 20:13:54.035 BST > LOG: invalid value for parameter "lc_numeric": "en_US.UTF-8" < 2017-10-23 20:13:54.035 BST > LOG: invalid value for parameter "lc_time": "en_US.UTF-8" < 2017-10-23 20:13:54.035 BST > FATAL: configuration file "/var/lib/pgsql/9.6/data/postgresql.conf" contains errors child process exited with exit code 1 initdb: removing contents of data directory "/var/lib/pgsql/9.6/data"
|
Postgresql 9.6 InitDB Fails Whenever I Try running the command below it always fails on RHEL7. I've tried on another similar OS (Newer) and it doesn't do this and just works. I'v looked into permissions of directories, disabled selinux (Just incase) and also looked into locales, however these settings match that of my other server. /usr/pgsql-9.6/bin/postgresql96-setup initdb Log output: The database cluster will be initialized with locale "en_US.UTF-8". The default database encoding has accordingly been set to "UTF8". The default text search configuration will be set to "english". Data page checksums are disabled. fixing permissions on existing directory /var/lib/pgsql/9.6/data ... ok creating subdirectories ... ok selecting default max_connections ... 100 selecting default shared_buffers ... 64MB selecting dynamic shared memory implementation ... posix creating configuration files ... ok running bootstrap script ... < 2017-10-23 20:13:54.035 BST > LOG: invalid value for parameter "lc_messages": "en_US.UTF-8" < 2017-10-23 20:13:54.035 BST > LOG: invalid value for parameter "lc_monetary": "en_US.UTF-8" < 2017-10-23 20:13:54.035 BST > LOG: invalid value for parameter "lc_numeric": "en_US.UTF-8" < 2017-10-23 20:13:54.035 BST > LOG: invalid value for parameter "lc_time": "en_US.UTF-8" < 2017-10-23 20:13:54.035 BST > FATAL: configuration file "/var/lib/pgsql/9.6/data/postgresql.conf" contains errors child process exited with exit code 1 initdb: removing contents of data directory "/var/lib/pgsql/9.6/data"
|
postgresql, rhel, rhel7, postgresql-9.6
| 2
| 4,194
| 3
|
https://stackoverflow.com/questions/46896969/postgresql-9-6-initdb-fails
|
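A hedged check for the question above: the "invalid value for parameter lc_*" errors usually mean the en_US.UTF-8 locale simply is not built on that particular RHEL 7 host (minimal installs can strip locales), so initdb writes a postgresql.conf the server cannot validate. A sketch:

locale -a | grep -i 'en_US.utf8'                 # empty output means the locale is missing on this host
sudo localedef -i en_US -f UTF-8 en_US.UTF-8     # rebuild it (or reinstall glibc-common)
sudo /usr/pgsql-9.6/bin/postgresql96-setup initdb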
45,748,703
|
python-paramiko rpm has been deprecated in RHEL 7, is there any alternative for this?
|
I have to support some of the RHEL 6 developed scripts with RHEL 7 as well, but while installing the rpm for the features I am getting an error due to import paramiko in some of my feature scripts. I surfed for the error and found that RHEL has removed their support for python-paramiko. Can there be any alternative to this paramiko rpm?
|
python-paramiko rpm has been deprecated in RHEL 7, is there any alternative for this? I have to support some of the RHEL 6 developed scripts with RHEL 7 as well, but while installing the rpm for the features I am getting an error due to import paramiko in some of my feature scripts. I surfed for the error and found that RHEL has removed their support for python-paramiko. Can there be any alternative to this paramiko rpm?
|
python, rhel
| 2
| 4,749
| 1
|
https://stackoverflow.com/questions/45748703/python-paramiko-rpm-has-been-deprecated-in-rhel-7-is-there-any-alternative-for
|
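Two hedged options for the question above, depending on whether extra repos are allowed; the exact RPM name varies between python-paramiko and python2-paramiko depending on which repo (RHEL 7 Extras vs. EPEL 7) provides it:

sudo yum install python2-paramiko     # from EPEL 7 (or python-paramiko from the rhel-7-server-extras repo)
# or, keeping it out of RPM entirely:
sudo pip install paramiko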
43,288,658
|
How can I use the Python language bindings for RPM / RPM5 on MacOS?
|
On MacOS there is a homebrew formula to install the RPM development package from rpm5.org . However, this installs only the command line tools (rpm, rpmlint, rpmbuild, etc) and does not install any of the language bindings which are supported. I would like to use the Pascal language bindings. However, when I build them and attempt to import the rpm package into Python 2.7 I get this error: $ python -c "import rpm._rpm" Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python2.7/venv-default/lib/python2.7/site-packages/rpm/ init .py", line 7, in from _rpm import * ImportError: dlopen(/usr/local/lib/python2.7/venv-default/lib/python2.7/site-packages/rpm/_rpmmodule.so, 2): Symbol not found: _sqlite3_enable_load_extension Referenced from: /usr/local/Cellar/rpm/5.4.15_1/lib/librpmio-5.4.dylib Expected in: flat namespace in /usr/local/Cellar/rpm/5.4.15_1/lib/librpmio-5.4.dyl To build the Python bindings I re-installed the rpm package with homebrew using these commands: brew install -v --keep-tmp --build-from-source rpm 2>&1 | tee brew_install.log cd /tmp/rpm-20170408-18245-1u8nsbs/rpm-5.4.15 ./configure --prefix=/usr/local/Cellar/rpm/5.4.15_1 --localstatedir=/usr/local/var --with-path-cfg=/usr/local/etc/rpm --with-path-magic=/usr/local/share/misc/magic --with-path-sources=/usr/local/var/lib/rpmbuild --with-libiconv-prefix=/usr --disable-openmp --disable-nls --disable-dependency-tracking --with-db=external --with-sqlite=external --with-file=external --with-popt=external --with-beecrypt=internal --with-libtasn1=external --with-neon=internal --with-uuid=external --with-pcre=internal --with-lua=internal --with-syck=internal --without-apidocs varprefix=/usr/local/var --with-python cd python make make install Note the ./configure command is the same one as Homebrew used with the --with-python switch appended. How can I use the cross platform rpm5.org based source code to do Python language development on MacOS?
|
How can I use the Python language bindings for RPM / RPM5 on MacOS? On MacOS there is a homebrew formula to install the RPM development package from rpm5.org . However, this installs only the command line tools (rpm, rpmlint, rpmbuild, etc) and does not install any of the language bindings which are supported. I would like to use the Pascal language bindings. However, when I build them and attempt to import the rpm package into Python 2.7 I get this error: $ python -c "import rpm._rpm" Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python2.7/venv-default/lib/python2.7/site-packages/rpm/ init .py", line 7, in from _rpm import * ImportError: dlopen(/usr/local/lib/python2.7/venv-default/lib/python2.7/site-packages/rpm/_rpmmodule.so, 2): Symbol not found: _sqlite3_enable_load_extension Referenced from: /usr/local/Cellar/rpm/5.4.15_1/lib/librpmio-5.4.dylib Expected in: flat namespace in /usr/local/Cellar/rpm/5.4.15_1/lib/librpmio-5.4.dyl To build the Python bindings I re-installed the rpm package with homebrew using these commands: brew install -v --keep-tmp --build-from-source rpm 2>&1 | tee brew_install.log cd /tmp/rpm-20170408-18245-1u8nsbs/rpm-5.4.15 ./configure --prefix=/usr/local/Cellar/rpm/5.4.15_1 --localstatedir=/usr/local/var --with-path-cfg=/usr/local/etc/rpm --with-path-magic=/usr/local/share/misc/magic --with-path-sources=/usr/local/var/lib/rpmbuild --with-libiconv-prefix=/usr --disable-openmp --disable-nls --disable-dependency-tracking --with-db=external --with-sqlite=external --with-file=external --with-popt=external --with-beecrypt=internal --with-libtasn1=external --with-neon=internal --with-uuid=external --with-pcre=internal --with-lua=internal --with-syck=internal --without-apidocs varprefix=/usr/local/var --with-python cd python make make install Note the ./configure command is the same one as Homebrew used with the --with-python switch appended. How can I use the cross platform rpm5.org based source code to do Python language development on MacOS?
|
python, cross-platform, homebrew, rpm, rhel
| 2
| 748
| 1
|
https://stackoverflow.com/questions/43288658/how-can-i-use-the-python-language-bindings-for-rpm-rpm5-on-macos
|
42,285,257
|
PHP file_put_contents returning 'Permission Denied' (Due to SELinux setting)
|
I know this is a common issue but I haven't been able to single out the problem for my specific use case, so bear with me. I have a simple PHP script send_id which simply sends an ID number and saves it to a TXT file on my RHEL server running Apache 2.4.6 with PHP 5.4. The error message: Warning: file_put_contents(/var/www/html/id.txt): failed to open stream: Permission denied in /var/www/html/send_id.php on line 6 '1' written to server The PHP script itself: <?php $id=$_GET['id']; $stringData = "$id"; $file = file_put_contents('/var/www/html/id.txt', $stringData.PHP_EOL , FILE_APPEND |LOCK_EX); echo "'$stringData' written to server"; ?> chmodding to 777 didn't do anything. Additionally, I checked to see ownership rights and noticed that the id.txt file is owned by the root user at both user/group level, and PHP is being run at root level. Anyone have any suggestions? If its any help, this seems to have happened after a yum update
|
PHP file_put_contents returning 'Permission Denied' (Due to SELinux setting) I know this is a common issue but I haven't been able to single out the problem for my specific use case, so bear with me. I have a simple PHP script send_id which simply sends an ID number and saves it to a TXT file on my RHEL server running Apache 2.4.6 with PHP 5.4. The error message: Warning: file_put_contents(/var/www/html/id.txt): failed to open stream: Permission denied in /var/www/html/send_id.php on line 6 '1' written to server The PHP script itself: <?php $id=$_GET['id']; $stringData = "$id"; $file = file_put_contents('/var/www/html/id.txt', $stringData.PHP_EOL , FILE_APPEND |LOCK_EX); echo "'$stringData' written to server"; ?> chmodding to 777 didn't do anything. Additionally, I checked to see ownership rights and noticed that the id.txt file is owned by the root user at both user/group level, and PHP is being run at root level. Anyone have any suggestions? If its any help, this seems to have happened after a yum update
|
php, apache, rhel, selinux
| 2
| 5,868
| 3
|
https://stackoverflow.com/questions/42285257/php-file-put-contents-returning-permission-denied-due-to-selinux-setting
|
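A hedged sketch for the question above: with SELinux enforcing, httpd may read /var/www/html but can only write to files labelled with a read-write content type, so relabelling the target file is usually enough (and makes the 777 permissions unnecessary):

sudo ausearch -m avc -ts recent | grep id.txt                                  # confirm it is an SELinux denial
sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/id\.txt'   # persistent label rule
sudo restorecon -v /var/www/html/id.txt                                        # apply the new label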
36,474,619
|
Cassandra Nodes Going Down
|
I have a 3 node Cassandra cluster setup (replication set to 2) with Solr installed, each node having RHEL, 32 GB Ram, 1 TB HDD and DSE 4.8.3. There are lots of writes happening on my nodes and also my web application reads from my nodes. I have observed that all the nodes go down after every 3-4 days. I have to do a restart of every node and then they function quite well till the next 3-4 days and again the same problem repeats. I checked the server logs but they do not show any error even when the server goes down. I am unable to figure out why is this happening. In my application, sometimes when I connect to the nodes through the C# Cassandra driver, I get the following error Cassandra.NoHostAvailableException: None of the hosts tried for query are available (tried: 'node-ip':9042) at Cassandra.Tasks.TaskHelper.WaitToComplete(Task task, Int32 timeout) at Cassandra.Tasks.TaskHelper.WaitToComplete[T](Task`1 task, Int32 timeout) at Cassandra.ControlConnection.Init() at Cassandra.Cluster.Init() But when I check the OpsCenter, none of the nodes are down. All nodes status show perfectly fine. Could this be a problem with the driver? Earlier I was using Cassandra C# driver version 2.5.0 installed from nuget, but now I updated even that to version 3.0.3 still this errors persists. Any help on this would be appreciated. Thanks in advance.
|
Cassandra Nodes Going Down I have a 3 node Cassandra cluster setup (replication set to 2) with Solr installed, each node having RHEL, 32 GB Ram, 1 TB HDD and DSE 4.8.3. There are lots of writes happening on my nodes and also my web application reads from my nodes. I have observed that all the nodes go down after every 3-4 days. I have to do a restart of every node and then they function quite well till the next 3-4 days and again the same problem repeats. I checked the server logs but they do not show any error even when the server goes down. I am unable to figure out why is this happening. In my application, sometimes when I connect to the nodes through the C# Cassandra driver, I get the following error Cassandra.NoHostAvailableException: None of the hosts tried for query are available (tried: 'node-ip':9042) at Cassandra.Tasks.TaskHelper.WaitToComplete(Task task, Int32 timeout) at Cassandra.Tasks.TaskHelper.WaitToComplete[T](Task`1 task, Int32 timeout) at Cassandra.ControlConnection.Init() at Cassandra.Cluster.Init() But when I check the OpsCenter, none of the nodes are down. All nodes status show perfectly fine. Could this be a problem with the driver? Earlier I was using Cassandra C# driver version 2.5.0 installed from nuget, but now I updated even that to version 3.0.3 still this errors persists. Any help on this would be appreciated. Thanks in advance.
|
cassandra, datastax, datastax-enterprise, rhel, datastax-startup
| 2
| 4,323
| 1
|
https://stackoverflow.com/questions/36474619/cassandra-nodes-going-down
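A hedged first-pass checklist for the "OpsCenter says up, drivers say NoHostAvailableException" pattern: on write-heavy DSE nodes the usual culprits are long stop-the-world GC pauses or the kernel OOM killer, neither of which necessarily leaves an ERROR line in the server log. Log paths below assume a default DSE package install:

```bash
nodetool status                 # per-node view as Cassandra itself sees it
nodetool tpstats                # dropped/blocked mutations indicate overload

# Long GC pauses make a node unresponsive to client drivers
grep -i GCInspector /var/log/cassandra/system.log | tail -20

# Has the kernel been killing the JVM?
dmesg | grep -i -E 'killed process|out of memory'
```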
|
35,557,641
|
Proxy configuration for OpenShift Origin
|
I am setting up an OpenShift origin server. The configurations I do heavily relies on the walkthrough description: [URL] After creating a project, I add a new app like this (successfully): oc new-app centos/ruby-22-centos7~ [URL] OpenShift tries to build immediatelly, only to fail as follows: F0222 15:24:58.504626 1 builder.go:204] Error: build error: fatal: unable to access ' [URL] ': Failed connect to github.com:443; Connection refused I consulted the documentation about the proxy configuration: [URL] Concluded that I can simply edit the YAML descriptor for this specific app to include my corporate proxy. ... source: type: Git git: uri: "git://github.com/openshift/ruby-hello-world.git" httpProxy: [URL] httpsProxy: [URL] ... With that change the build proceeds. Can the HTTP proxy be configured system wide? Note : again, I simply downloaded the binaries (client, server), did not install via ansible. And I did not find relevant properties openshift.local.config folder, inside my server binary folder.
|
Proxy configuration for OpenShift Origin I am setting up an OpenShift origin server. The configurations I do heavily relies on the walkthrough description: [URL] After creating a project, I add a new app like this (successfully): oc new-app centos/ruby-22-centos7~ [URL] OpenShift tries to build immediatelly, only to fail as follows: F0222 15:24:58.504626 1 builder.go:204] Error: build error: fatal: unable to access ' [URL] ': Failed connect to github.com:443; Connection refused I consulted the documentation about the proxy configuration: [URL] Concluded that I can simply edit the YAML descriptor for this specific app to include my corporate proxy. ... source: type: Git git: uri: "git://github.com/openshift/ruby-hello-world.git" httpProxy: [URL] httpsProxy: [URL] ... With that change the build proceeds. Can the HTTP proxy be configured system wide? Note : again, I simply downloaded the binaries (client, server), did not install via ansible. And I did not find relevant properties openshift.local.config folder, inside my server binary folder.
|
rhel, openshift-origin
| 2
| 5,161
| 1
|
https://stackoverflow.com/questions/35557641/proxy-configuration-for-openshift-origin
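For a cluster-wide setting instead of per-BuildConfig edits, OpenShift Origin's BuildDefaults admission plugin injects the proxy into every build. A sketch of the stanza in the master configuration (master-config.yaml); the proxy host/port are placeholders, and with a plain binary install the config may first need to be generated with openshift start --write-config=openshift.local.config:

```yaml
admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        kind: BuildDefaultsConfig
        gitHTTPProxy: http://proxy.example.com:3128
        gitHTTPSProxy: http://proxy.example.com:3128
        gitNoProxy: localhost,.cluster.local
        env:
        - name: HTTP_PROXY
          value: http://proxy.example.com:3128
        - name: HTTPS_PROXY
          value: http://proxy.example.com:3128
```

Restart the master process after editing so the admission plugin picks the values up.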
|
34,871,568
|
ADMU0509I: The Application Server "server1" cannot be reached. It appears to be stopped
|
I have setup a WebSphere Application Server on my RHEL 7 virtual machine. When I start the server, it starts fine and I can access the admin console but when I try to stop or get the status of the server using the script sh stopServer.sh -server1 / sh serverStatus.sh server1 It gives the following message, ADMU0509I: The Application Server "server1" cannot be reached. It appears to be stopped. My wsadmin scripts are not working, when I run the script sh wsadmin.sh -user wasadmin -password Password I get the following error WASX7023E: Error creating "SOAP" connection to host "localhost"; exception information: com.ibm.websphere.management.exception.ConnectorNotA vailableException: [SOAPException: faultCode=SOAP-ENV:Protocol; msg=; targetException=java.net.MalformedURLException] WASX7213I: This scripting client is not connected to a server process; please refer to the log file /opt/IBM\WebSphere/AppServer /profiles/AppSrv01/logs/wsadmin.traceout for additional information. I can access the console on the browser without any issue.
|
ADMU0509I: The Application Server "server1" cannot be reached. It appears to be stopped I have setup a WebSphere Application Server on my RHEL 7 virtual machine. When I start the server, it starts fine and I can access the admin console but when I try to stop or get the status of the server using the script sh stopServer.sh -server1 / sh serverStatus.sh server1 It gives the following message, ADMU0509I: The Application Server "server1" cannot be reached. It appears to be stopped. My wsadmin scripts are not working, when I run the script sh wsadmin.sh -user wasadmin -password Password I get the following error WASX7023E: Error creating "SOAP" connection to host "localhost"; exception information: com.ibm.websphere.management.exception.ConnectorNotA vailableException: [SOAPException: faultCode=SOAP-ENV:Protocol; msg=; targetException=java.net.MalformedURLException] WASX7213I: This scripting client is not connected to a server process; please refer to the log file /opt/IBM\WebSphere/AppServer /profiles/AppSrv01/logs/wsadmin.traceout for additional information. I can access the console on the browser without any issue.
|
server, rhel, ibm-was
| 2
| 4,247
| 2
|
https://stackoverflow.com/questions/34871568/admu0509i-the-application-server-server1-cannot-be-reached-it-appears-to-be
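Two things worth checking, sketched with assumed default paths: the server name is passed without a leading dash, and once administrative security is enabled, stop/status need credentials and must reach the server's SOAP connector (default 8880):

```bash
# Which SOAP port do the admin scripts need? (default 8880)
grep -A3 SOAP_CONNECTOR_ADDRESS \
  /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/config/cells/*/nodes/*/serverindex.xml

# "server1", not "-server1", plus explicit credentials
./serverStatus.sh server1 -username wasadmin -password Password
./stopServer.sh   server1 -username wasadmin -password Password

# wsadmin's MalformedURLException usually means its SOAP host/port is wrong;
# override it on the command line to test
./wsadmin.sh -conntype SOAP -host localhost -port 8880 -user wasadmin -password Password
```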
|
33,293,140
|
Docker instance cannot run images anymore and unable to reclaim free space
|
I am trying to start any of my saved containers in docker but am unable to do it. I have started getting the Error response from daemon: Error running DeviceCreate (createSnapDevice) dm_task_run failed This started happening after committing a relatively big docker image and it seemed to have filled up all available docker data space, even though I had lots of space on the host machine. Now I am unable to free up the docker data space anymore, even after deleting the big image. Docker is unable to reclaim the space. I also tried the fix mentioned below so that I can start the docker container but was not successful. Is there anything I can do to fix existing Docker to run images again? Related question: Can't run Docker container due device mapper error Here is my host configuration. Data Space used and total has reached max and free is in 0. # docker info ========================================================= Containers: 49 Images: 23 Storage Driver: devicemapper Pool Name: docker-8:3-4998488-pool Pool Blocksize: 65.54 kB Backing Filesystem: extfs Data file: /dev/loop0 Metadata file: /dev/loop1 Data Space Used: 107.4 GB Data Space Total: 107.4 GB Data Space Available: 0 B Metadata Space Used: 60.36 MB Metadata Space Total: 2.147 GB Metadata Space Available: 2.087 GB Udev Sync Supported: true Deferred Removal Enabled: false Data loop file: /var/lib/docker/devicemapper/devicemapper/data Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata Library Version: 1.02.93-RHEL7 (2015-01-28) Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.10.0-229.el7.x86_64 Operating System: Red Hat Enterprise Linux CPUs: 4 Total Memory: 7.64 GiB docker version ========================================================= Client: Version: 1.8.2 API version: 1.20 Go version: go1.4.2 Git commit: 0a8c2e3 Built: Thu Sep 10 19:08:45 UTC 2015 OS/Arch: linux/amd64 Server: Version: 1.8.2 API version: 1.20 Go version: go1.4.2 Git commit: 0a8c2e3 Built: Thu Sep 10 19:08:45 UTC 2015 OS/Arch: linux/amd64
|
Docker instance cannot run images anymore and unable to reclaim free space I am trying to start any of my saved containers in docker but am unable to do it. I have started getting the Error response from daemon: Error running DeviceCreate (createSnapDevice) dm_task_run failed This started happening after committing a relatively big docker image and it seemed to have filled up all available docker data space, even though I had lots of space on the host machine. Now I am unable to free up the docker data space anymore, even after deleting the big image. Docker is unable to reclaim the space. I also tried the fix mentioned below so that I can start the docker container but was not successful. Is there anything I can do to fix existing Docker to run images again? Related question: Can't run Docker container due device mapper error Here is my host configuration. Data Space used and total has reached max and free is in 0. # docker info ========================================================= Containers: 49 Images: 23 Storage Driver: devicemapper Pool Name: docker-8:3-4998488-pool Pool Blocksize: 65.54 kB Backing Filesystem: extfs Data file: /dev/loop0 Metadata file: /dev/loop1 Data Space Used: 107.4 GB Data Space Total: 107.4 GB Data Space Available: 0 B Metadata Space Used: 60.36 MB Metadata Space Total: 2.147 GB Metadata Space Available: 2.087 GB Udev Sync Supported: true Deferred Removal Enabled: false Data loop file: /var/lib/docker/devicemapper/devicemapper/data Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata Library Version: 1.02.93-RHEL7 (2015-01-28) Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.10.0-229.el7.x86_64 Operating System: Red Hat Enterprise Linux CPUs: 4 Total Memory: 7.64 GiB docker version ========================================================= Client: Version: 1.8.2 API version: 1.20 Go version: go1.4.2 Git commit: 0a8c2e3 Built: Thu Sep 10 19:08:45 UTC 2015 OS/Arch: linux/amd64 Server: Version: 1.8.2 API version: 1.20 Go version: go1.4.2 Git commit: 0a8c2e3 Built: Thu Sep 10 19:08:45 UTC 2015 OS/Arch: linux/amd64
|
linux, docker, rhel
| 2
| 1,679
| 2
|
https://stackoverflow.com/questions/33293140/docker-instance-cannot-run-images-anymore-and-unable-to-reclaim-free-space
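With a loopback-backed devicemapper pool, "Data Space Available: 0 B" never recovers just because images are deleted, so the pool has to be recreated larger (or moved to direct-lvm). A destructive reset, sketched for RHEL 7 with docker 1.8; the sizes are examples and everything under /var/lib/docker is lost:

```bash
systemctl stop docker

# New pool limits for the loopback files
cat > /etc/sysconfig/docker-storage <<'EOF'
DOCKER_STORAGE_OPTIONS="--storage-opt dm.loopdatasize=500GB --storage-opt dm.basesize=20GB"
EOF

# Drop the exhausted pool so it is recreated at the new size on start
rm -rf /var/lib/docker/devicemapper

systemctl start docker
docker info | grep 'Data Space'
```

For anything long-lived, Red Hat's guidance for this docker version was direct-lvm (a real LVM thin pool) rather than loopback files.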
|
30,011,269
|
python socket.recv hangs even though I see the frame in tcpdump
|
I have below code to send command, but python script never exits. it hangs. i am using RHEL 6.4 x86_64. scapy srp1 also hangs. from socket import * from scapy.all import * from myproto import * def sendeth(REQUEST, interface = "eth0"): """Send raw Ethernet packet on interface.""" s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW) s.bind((interface, 0)) s.send(REQUEST) data = s.recv(2048) hexdump(data) p = Ether()/MYPROTO()/MYPROTO1() hexdump(p) if __name__ == "__main__": print "Sent %d-byte Ethernet packet on eth3" % sendeth(str(p), 'eth3') But after execution, I see the frame on tcpdump, but the python code never exits and need a control^C. tcpdump: WARNING: eth3: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth3, link-type EN10MB (Ethernet), capture size 65535 bytes 17:10:37.122445 00:00:00:00:00:00 (oui Ethernet) > Broadcast, ethertype Unknown (0x8096), length 34: 0x0000: ffff ffff ffff 0000 0000 0000 8096 0001 0x0010: 0001 1500 0000 0000 0000 0000 0000 ffff 0x0020: eafe 17:10:37.133248 00:04:25:1c:a0:02 (oui Unknown) > Broadcast, ethertype Unknown (0x8096), length 76: 0x0000: ffff ffff ffff 0004 251c a002 8096 0001 0x0010: 0001 9500 0028 0000 0000 0000 0000 0000 0x0020: 0000 f1f0 f100 0000 0000 0000 0000 0000 0x0030: 0000 0000 0000 0803 0087 1634 8096 8096 0x0040: e4f2 0000 0f21 fffc 5427 ffff
|
python socket.recv hangs, however I see the frame in tcpdump I have below code to send command, but python script never exits. it hangs. i am using RHEL 6.4 x86_64. scapy srp1 also hangs. from socket import * from scapy.all import * from myproto import * def sendeth(REQUEST, interface = "eth0"): """Send raw Ethernet packet on interface.""" s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW) s.bind((interface, 0)) s.send(REQUEST) data = s.recv(2048) hexdump(data) p = Ether()/MYPROTO()/MYPROTO1() hexdump(p) if __name__ == "__main__": print "Sent %d-byte Ethernet packet on eth3" % sendeth(str(p), 'eth3') But after execution, I see the frame on tcpdump, but the python code never exits and need a control^C. tcpdump: WARNING: eth3: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth3, link-type EN10MB (Ethernet), capture size 65535 bytes 17:10:37.122445 00:00:00:00:00:00 (oui Ethernet) > Broadcast, ethertype Unknown (0x8096), length 34: 0x0000: ffff ffff ffff 0000 0000 0000 8096 0001 0x0010: 0001 1500 0000 0000 0000 0000 0000 ffff 0x0020: eafe 17:10:37.133248 00:04:25:1c:a0:02 (oui Unknown) > Broadcast, ethertype Unknown (0x8096), length 76: 0x0000: ffff ffff ffff 0004 251c a002 8096 0001 0x0010: 0001 9500 0028 0000 0000 0000 0000 0000 0x0020: 0000 f1f0 f100 0000 0000 0000 0000 0000 0x0030: 0000 0000 0000 0803 0087 1634 8096 8096 0x0040: e4f2 0000 0f21 fffc 5427 ffff
|
python, sockets, rhel
| 2
| 1,639
| 2
|
https://stackoverflow.com/questions/30011269/python-socket-recv-hangs-however-i-see-the-frame-in-tcpdump
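Two likely reasons recv() never returns here: the packet socket is created with protocol 0, so the kernel never queues incoming frames on it, and there is no timeout, so a missing reply blocks forever. A minimal sketch with both addressed; ETH_P_ALL and the 0x8096 ethertype are assumptions taken from the tcpdump output:

```python
import socket

ETH_P_ALL = 0x0003      # all ethertypes; use 0x8096 instead to receive only this protocol

def sendeth(request, interface="eth3", timeout=5.0):
    """Send a raw Ethernet frame and wait (bounded) for a reply."""
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((interface, 0))
    s.settimeout(timeout)      # recv() now raises socket.timeout instead of hanging
    sent = s.send(request)
    try:
        data = s.recv(2048)
        # with ETH_P_ALL the first frame seen may be the one just sent
        print("received %d bytes" % len(data))
    except socket.timeout:
        print("no frame received within %.1f s" % timeout)
    finally:
        s.close()
    return sent
```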
|
29,658,711
|
Installing Laravel on RHEL with SCL PHP 5.4
|
So, RedHat Enterprise Linux doesn't support the MCrpyt module, but you can get it from the Fedora project through the EPL. This would work fine, except the fact that we're using SoftWare Collections (SCL) to get a newer version of PHP (5.4). So what is the best approach to getting the dependencies for laravell on RHEL 6.6 and keeping PHP 5.4 in the most vendor supported way possible?
|
Installing laravel on RHEL php54 So, RedHat Enterprise Linux doesn't support the MCrpyt module, but you can get it from the Fedora project through the EPL. This would work fine, except the fact that we're using SoftWare Collections (SCL) to get a newer version of PHP (5.4). So what is the best approach to getting the dependencies for laravell on RHEL 6.6 and keeping PHP 5.4 in the most vendor supported way possible?
|
php, laravel, rhel
| 2
| 271
| 1
|
https://stackoverflow.com/questions/29658711/installing-laravel-on-rhel-php54
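One possible route that keeps everything RPM-based: Remi's repository also ships extension packages built against the RHSCL php54 collection, so mcrypt can come from there without touching the base PHP. The repo and package names below are assumptions to verify against the configured repos:

```bash
yum --enablerepo=remi install php54-php-mcrypt

# confirm the module loads inside the collection
scl enable php54 'php -m' | grep -i mcrypt
```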
|
28,493,468
|
Unable to compile C++ programs on CentOS
|
I am getting the below error during ./configure. configure:3429: checking whether the C compiler works configure:3451: gcc -m32 -D_FILE_OFFSET_BITS=64 -m32 conftest.c >&5 /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.7/libgcc_s.so when searching for -lgcc_s /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.7/libgcc_s.so when searching for -lgcc_s /usr/bin/ld: cannot find -lgcc_s collect2: ld returned 1 exit status configure:3455: $? = 1 configure:3493: result: no configure:3498: error: in `/root/cjk/1.x/src/externals/mecab': configure:3500: error: C compiler cannot create executables I have tried couple of solutions mentioned in internet but to no avail. I have installed complete Developers package in the machine. I have installed glibc-devel.i686 package as well.
|
Unable to compile C++ programs on CentOS I am getting the below error during ./configure. configure:3429: checking whether the C compiler works configure:3451: gcc -m32 -D_FILE_OFFSET_BITS=64 -m32 conftest.c >&5 /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.7/libgcc_s.so when searching for -lgcc_s /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.7/libgcc_s.so when searching for -lgcc_s /usr/bin/ld: cannot find -lgcc_s collect2: ld returned 1 exit status configure:3455: $? = 1 configure:3493: result: no configure:3498: error: in `/root/cjk/1.x/src/externals/mecab': configure:3500: error: C compiler cannot create executables I have tried couple of solutions mentioned in internet but to no avail. I have installed complete Developers package in the machine. I have installed glibc-devel.i686 package as well.
|
c, linux, unix, centos, rhel
| 2
| 2,158
| 1
|
https://stackoverflow.com/questions/28493468/unable-to-compile-c-programs-on-centos
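The log shows the link step only finding the 64-bit libgcc, so configure's -m32 test program cannot link. Installing the 32-bit support libraries normally clears it; reproducing the failing test by hand confirms:

```bash
yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686

echo 'int main(void){return 0;}' > conftest.c
gcc -m32 conftest.c -o conftest && echo "32-bit toolchain OK"
```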
|
21,428,879
|
How to use an environment variable declared in /etc/profile in a script running through monit?
|
Environment variable declared in /etc/profile: export MYNAME=rhel Content of script which is running from monit is [/tmp/printmyname.sh]: echo "My Name is: "$MYNAME >> /var/log/env_variablefile.out Content of monit: check file PrintVariable with path /var/log/something.out start program = "/bin/sh /tmp/printmyname.sh" if timestamp > 1 minutes then start I want to print environment variable declared in /etc/profile to /var/log/env_variablefile.out when /var/log/something.out file is not updated since one minute. So my problem is when i directly run /tmp/printmyname.sh it append My Name is: rhel into /var/log/env_variablefile.out but when it is running from monit it only prints My Name is: . So I want to know the reason of such behavior and a way to solve problem. Note: monit is running every 10Seconds and above code is just example of my actual code.
|
How to use environment variable declared in /etc/profie into script running through monit? Environment variable declared in /etc/profile: export MYNAME=rhel Content of script which is running from monit is [/tmp/printmyname.sh]: echo "My Name is: "$MYNAME >> /var/log/env_variablefile.out Content of monit: check file PrintVariable with path /var/log/something.out start program = "/bin/sh /tmp/printmyname.sh" if timestamp > 1 minutes then start I want to print environment variable declared in /etc/profile to /var/log/env_variablefile.out when /var/log/something.out file is not updated since one minute. So my problem is when i directly run /tmp/printmyname.sh it append My Name is: rhel into /var/log/env_variablefile.out but when it is running from monit it only prints My Name is: . So I want to know the reason of such behavior and a way to solve problem. Note: monit is running every 10Seconds and above code is just example of my actual code.
|
shell, rhel, monit
| 2
| 823
| 1
|
https://stackoverflow.com/questions/21428879/how-to-use-environment-variable-declared-in-etc-profie-into-script-running-thro
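The reason for the behaviour: /etc/profile is only sourced by login shells, and monit starts programs with a minimal environment through a plain /bin/sh, so MYNAME is simply unset when monit runs the script. Sourcing the profile explicitly (or defining the variable inside the script or the monit control file) fixes it; a minimal sketch:

```sh
#!/bin/sh
# /tmp/printmyname.sh -- pull in the login-shell environment ourselves,
# since monit will not do it
. /etc/profile
echo "My Name is: $MYNAME" >> /var/log/env_variablefile.out
```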
|
18,524,285
|
How do you temporarily hide the output of a bash script that is running?
|
This question may or may not have been asked already, I couldn't find it anywhere & I'm a bit of a Linux beginner. I got a ./start.sh script that runs on my RHEL 4 server and sends out trace outputs every other second. Example: Trace: Server Running on alias 1, 2 etc... it's there to make sure the server didn't drop out. I need to run some of my DDoS protection tools and scanners as this is running all at the same time. Though I can't ever see what's going on because of all the trace output! How would I go about temporarily disabling output as it is ALREADY running, while keeping it running in the background then running my tools and then when I'm done re-enabling the output again?
|
How do you temporarily hide the output of a bash script that is running? This question may or may not have been asked already, I couldn't find it anywhere & I'm a bit of a Linux beginner. I got a ./start.sh script that runs on my RHEL 4 server and sends out trace outputs every other second. Example: Trace: Server Running on alias 1, 2 etc... it's there to make sure the server didn't drop out. I need to run some of my DDoS protection tools and scanners as this is running all at the same time. Though I can't ever see what's going on because of all the trace output! How would I go about temporarily disabling output as it is ALREADY running, while keeping it running in the background then running my tools and then when I'm done re-enabling the output again?
|
linux, bash, rhel
| 2
| 1,298
| 1
|
https://stackoverflow.com/questions/18524285/how-do-you-temporarily-hide-the-output-of-a-bash-script-that-is-running
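A running process's redirections cannot be changed cleanly from outside, so the robust answer is to start start.sh with its output going to a log file and tail it only when wanted. As a last-resort trick, gdb can be attached to re-point stdout/stderr at a file (and later back at the terminal); this is fragile and shown only as a sketch, where 02101 is O_WRONLY|O_CREAT|O_APPEND and the pgrep lookup is an example:

```bash
PID=$(pgrep -f start.sh)

gdb -p "$PID" <<'EOF'
call (int) open("/tmp/start.trace", 02101, 0644)
call (int) dup2($1, 1)
call (int) dup2($1, 2)
detach
quit
EOF
```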
|
17,865,959
|
pgpool-II connection pooling - ERROR: "MD5" authentication with pgpool failed
|
Using the following for just connection pooling no master_slave or replication: rhel 6, postgresql 9.1.9, & pgpool-II 3.1.3 (also tried 3.2.5) Followed solution suggested in [URL] After following the instructions for MD5 I also tried setting both pg_hba.conf and pool_hba.conf to trust for local and subnet, but still get the following error when attempting to connect to the pool locally: ERROR: "MD5" authentication with pgpool failed for user foo Tried locally on Fedora 18 with pg9.2 and pgpool from Fedora repo and worked right out of the box. At the end of all routes suggested everywhere I could find. Help would be greatly appreciated.
|
pgpool-II connection pooling - ERROR: "MD5" authentication with pgpool failed Using the following for just connection pooling no master_slave or replication: rhel 6, postgresql 9.1.9, & pgpool-II 3.1.3 (also tried 3.2.5) Followed solution suggested in [URL] After following the instructions for MD5 I also tried setting both pg_hba.conf and pool_hba.conf to trust for local and subnet, but still get the following error when attempting to connect to the pool locally: ERROR: "MD5" authentication with pgpool failed for user foo Tried locally on Fedora 18 with pg9.2 and pgpool from Fedora repo and worked right out of the box. At the end of all routes suggested everywhere I could find. Help would be greatly appreciated.
|
postgresql-9.1, rhel, pgpool
| 2
| 3,822
| 1
|
https://stackoverflow.com/questions/17865959/pgpool-ii-connection-pooling-error-md5-authentication-with-pgpool-failed
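Assuming a pgpool version with pool_passwd support (3.1 and later), md5 authentication against pgpool itself also needs the user in pgpool's own password file, generated with pg_md5, plus pool_hba enabled. A sketch with default RPM paths and the user from the error message:

```bash
# appends "foo:md5<hash>" to the pool_passwd file next to pgpool.conf
pg_md5 --md5auth --username=foo 'foo_password'

# pgpool.conf should contain:
#   enable_pool_hba = on
#   pool_passwd = 'pool_passwd'
grep -E 'enable_pool_hba|pool_passwd' /etc/pgpool-II/pgpool.conf

# pool_hba.conf and the backend pg_hba.conf both want md5 for the client subnet,
# then reload pgpool
pgpool reload
```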
|
15,834,443
|
how to install yum package on linux RHEL 4?
|
when i run root@localhost# yum install package_name command on linux terminal it gives: bash: yum: command not found because i don't have yello update and modifier package install on my linux . for that i mount my linux iso disc.and write command root@localhost# cd /meida/RHEL_4/i386/ Disk/ 1/ root@localhost RHEL_4 i386 Disk 1# ls but there is no package directory. and i didn't find any http url form downloading(wget) yum.x.x.x.rpm. i have linux RHEL 4 AS version installed. plz help
|
how to install yum package on linux RHEL 4? when i run root@localhost# yum install package_name command on linux terminal it gives: bash: yum: command not found because i don't have yello update and modifier package install on my linux . for that i mount my linux iso disc.and write command root@localhost# cd /meida/RHEL_4/i386/ Disk/ 1/ root@localhost RHEL_4 i386 Disk 1# ls but there is no package directory. and i didn't find any http url form downloading(wget) yum.x.x.x.rpm. i have linux RHEL 4 AS version installed. plz help
|
linux, vmware, yum, rhel
| 2
| 13,911
| 1
|
https://stackoverflow.com/questions/15834443/how-to-install-yum-package-on-linux-rhel-4
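RHEL 4 predates yum as the supplied updater (it shipped up2date), so there is no yum RPM on the install media, and the packages on the CD live under RedHat/RPMS rather than a "package" directory. Two hedged options:

```bash
# Option 1: install individual packages straight from the mounted media with rpm
mount /dev/cdrom /media/cdrom
cd /media/cdrom/RedHat/RPMS
rpm -ivh <package-name>*.rpm

# Option 2 (assumption): fetch a yum build for EL4, e.g. from the CentOS 4 vault
# (vault.centos.org), together with its python-elementtree, python-sqlite and
# sqlite dependencies, and install them in one transaction
rpm -ivh yum-*.noarch.rpm python-elementtree-*.rpm python-sqlite-*.rpm sqlite-*.rpm
```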
|
6,298,744
|
Setup oci8 on rhel 6 with REMI repository
|
I've done this before, but it was a long trial and error process that resulted with my test machine having multiple copies of php, oci8, and the instant client, and I'm still not sure what it was that I did that made it work. So far, i've set up yum to use the remi repository, done yum install php php-oci8 php-pdo , and downloaded the oracle instant client and done rpm -Uh oracle-instantclient11.2-basic-11.2.0.2.0.x86_64\ \(1\).rpm When I do phpinfo() on a page though, it still doesn't list oci8 as one of the modules. I think the error is with the ORACLE_HOME environment variable, but I'm not sure what it's supposed to be set to. right now i have: SetEnv LD_LIBRARY_PATH /usr/lib/oracle/11.2/client64/lib SetEnv ORACLE_HOME /usr/lib/oracle/11.2 in /etc/httpd/conf/httpd.conf The last time I got this working I think I just kept on uninstalling php and php-oci8 and re-installing until things worked. My working server has ORACLE_HOME set like this: But the new non working one has ORACLE_HOME set here: how do i set the ORACLE_HOME that is in the Enviroment section of phpinfo()?
|
Setup oci8 on rhel 6 with REMI repository I've done this before, but it was a long trial and error process that resulted with my test machine having multiple copies of php, oci8, and the instant client, and I'm still not sure what it was that I did that made it work. So far, i've set up yum to use the remi repository, done yum install php php-oci8 php-pdo , and downloaded the oracle instant client and done rpm -Uh oracle-instantclient11.2-basic-11.2.0.2.0.x86_64\ \(1\).rpm When I do phpinfo() on a page though, it still doesn't list oci8 as one of the modules. I think the error is with the ORACLE_HOME environment variable, but I'm not sure what it's supposed to be set to. right now i have: SetEnv LD_LIBRARY_PATH /usr/lib/oracle/11.2/client64/lib SetEnv ORACLE_HOME /usr/lib/oracle/11.2 in /etc/httpd/conf/httpd.conf The last time I got this working I think I just kept on uninstalling php and php-oci8 and re-installing until things worked. My working server has ORACLE_HOME set like this: But the new non working one has ORACLE_HOME set here: how do i set the ORACLE_HOME that is in the Enviroment section of phpinfo()?
|
php, apache, oracle-call-interface, rhel
| 2
| 5,505
| 2
|
https://stackoverflow.com/questions/6298744/setup-oci8-on-rhel-6-with-remi-repository
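SetEnv only affects variables handed to request handlers; it does not change the environment the httpd process itself (and therefore mod_php and the oci8 loader) starts with. On RHEL 6 the init script sources /etc/sysconfig/httpd, which is the usual place for these exports; the paths below assume the 64-bit 11.2 instant client from the question:

```bash
cat >> /etc/sysconfig/httpd <<'EOF'
export ORACLE_HOME=/usr/lib/oracle/11.2/client64
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
EOF
service httpd restart

# alternative for the library path only: register it with the dynamic linker
echo /usr/lib/oracle/11.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf
ldconfig
```

If oci8 still does not appear in phpinfo(), check that the php-oci8 build matches the installed instant client version.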
|
77,486,885
|
Docker push failing with /var/lib/docker/overlay2/<id>/merged/run/sisidsdaemon.pid: no such file or directory
|
When i try to push the image to artifactory, seeing below error. The docker build was successfull. The push refers to repository [jfrogartifactory.com/jenkins/k8stools] bd862c2c6862: Pushed fa9470006406: Pushing [==================================================>] 51.33MB/51.33MB 3416052442ea: Pushed 097ffe707280: Pushed 66902afc5923: Layer already exists 56c2913a98f0: Layer already exists 234119318760: Layer already exists 34f7184834b2: Layer already exists 5836ece05bfd: Layer already exists 72e830a4dff5: Layer already exists open /var/lib/docker/overlay2/jxl9gqgs0xj61kftk9b4casbo/merged/run/sisidsdaemon.pid: no such file or directory The corresponding Dockerfile is: FROM alpine:latest USER root ARG ARCH=amd64 RUN apk update && apk upgrade --no-cache && \ apk --no-cache add curl wget git jq yq ARG HELM_VERSION=3.13.0 ARG KUBECTL_VERSION=1.27.0 ARG KUBESEAL_VERSION=0.19.5 # Install kubectl RUN curl -sLO [URL] && chmod +x kubectl && mv kubectl /usr/local/bin #Install Helm RUN curl -LO [URL] && \ tar -xzvf helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz && \ mv linux-${ARCH}/helm /usr/local/bin/helm && \ chmod +x /usr/local/bin/helm && \ rm -rf linux-${ARCH} helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz USER openjdk ENTRYPOINT ["/bin/sh"] Tried all docker cleanup commands, but still no luck. docker rm -vf $(docker ps -aq) docker rmi -f $(docker images -aq) docker volume prune -f Can someone please help in this regard. The Instance i am running docker is on OS: RHEL 8 Docker version: 24.0.7
|
Docker push failing with /var/lib/docker/overlay2/<id>/merged/run/sisidsdaemon.pid: no such file or directory When i try to push the image to artifactory, seeing below error. The docker build was successfull. The push refers to repository [jfrogartifactory.com/jenkins/k8stools] bd862c2c6862: Pushed fa9470006406: Pushing [==================================================>] 51.33MB/51.33MB 3416052442ea: Pushed 097ffe707280: Pushed 66902afc5923: Layer already exists 56c2913a98f0: Layer already exists 234119318760: Layer already exists 34f7184834b2: Layer already exists 5836ece05bfd: Layer already exists 72e830a4dff5: Layer already exists open /var/lib/docker/overlay2/jxl9gqgs0xj61kftk9b4casbo/merged/run/sisidsdaemon.pid: no such file or directory The corresponding Dockerfile is: FROM alpine:latest USER root ARG ARCH=amd64 RUN apk update && apk upgrade --no-cache && \ apk --no-cache add curl wget git jq yq ARG HELM_VERSION=3.13.0 ARG KUBECTL_VERSION=1.27.0 ARG KUBESEAL_VERSION=0.19.5 # Install kubectl RUN curl -sLO [URL] && chmod +x kubectl && mv kubectl /usr/local/bin #Install Helm RUN curl -LO [URL] && \ tar -xzvf helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz && \ mv linux-${ARCH}/helm /usr/local/bin/helm && \ chmod +x /usr/local/bin/helm && \ rm -rf linux-${ARCH} helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz USER openjdk ENTRYPOINT ["/bin/sh"] Tried all docker cleanup commands, but still no luck. docker rm -vf $(docker ps -aq) docker rmi -f $(docker images -aq) docker volume prune -f Can someone please help in this regard. The Instance i am running docker is on OS: RHEL 8 Docker version: 24.0.7
|
linux, docker, rhel, docker-build, docker-push
| 2
| 1,162
| 2
|
https://stackoverflow.com/questions/77486885/docker-push-failing-with-var-lib-docker-overlay2-id-merged-run-sisidsdaemon-p
|
77,192,798
|
Error compiling Pro*C program on a Red Hat Linux machine. Error code PCC-S-02014
|
I am trying to compile proc c program in Red Hat version 8 machine. GCC version is 8.5.0 and my Oracle Client version is 19.0. When I compile the program it gives the below error. Please help. I tried to add parse and did not work. While compiling the program, I get below error. System default option values taken from: /u01/app/oracle/product/19.0.0/client_1/precomp/admin/pcscfg.cfg Syntax error at line 166, column 45, file /usr/include/sys/cdefs.h: Error at line 166, column 45 in file /usr/include/sys/cdefs.h #define __glibc_fortify(f, __l, __s, __osz, ...) \ ............................................1 PCC-S-02014, Encountered the symbol "..." when expecting one of the following: an identifier, define, elif, else, endif, error, if, ifdef, ifndef, include, include_next, line, pragma, undef, exec, sql, begin, end, var, type, oracle, an immediate preprocessor command, a C token, create, function, package, procedure, trigger, or, replace, Syntax error at line 168, column 9, file /usr/include/sys/cdefs.h: Error at line 168, column 9 in file /usr/include/sys/cdefs.h ? __ ## f ## _alias (__VA_ARGS__) \ ........1 PCC-S-02201, Encountered the symbol "__uint8_t" when expecting one of the follow ing: auto, char, const, double, enum, float, int, long, ulong_varchar, OCIBFileLocator OCIBlobLocator, OCIClobLocator, OCIDateTime, OCIExtProcContext, OCIInterval, OCIRowid, OCIDate, OCINumber, OCIRaw, OCIString, register, short, signed, sql_context, sql_cursor, static, struct, union, unsigned, utext, uvarchar, varchar, void, volatile, a typedef name, The symbol "enum," was substituted for "__uint8_t" to continue. Syntax error at line 53, column 9, file /usr/include/bits/types.h: Error at line 53, column 9 in file /usr/include/bits/types.h typedef __int16_t __int_least16_t; ........1 char, const, double, enum, float, int, long, ulong_varchar, OCIBFileLocator OCIBlobLocator, OCIClobLocator, OCIDateTime, OCIExtProcContext, OCIInterval, OCIRowid, OCIDate, OCINumber, OCIRaw, OCIString, short, signed, sql_context, sql_cursor, struct, union, unsigned, utext, uvarchar, varchar, void, volatile, a typedef name, Error at line 0, column 0 in file /home/Functions.pc PCC-F-02102, Fatal error while doing C preprocessing There are compilation errors step1!!
|
Error Compiling pro c program in Linux Red Hat machine. Error Code - PCC-S-02014 I am trying to compile proc c program in Red Hat version 8 machine. GCC version is 8.5.0 and my Oracle Client version is 19.0. When I compile the program it gives the below error. Please help. I tried to add parse and did not work. While compiling the program, I get below error. System default option values taken from: /u01/app/oracle/product/19.0.0/client_1/precomp/admin/pcscfg.cfg Syntax error at line 166, column 45, file /usr/include/sys/cdefs.h: Error at line 166, column 45 in file /usr/include/sys/cdefs.h #define __glibc_fortify(f, __l, __s, __osz, ...) \ ............................................1 PCC-S-02014, Encountered the symbol "..." when expecting one of the following: an identifier, define, elif, else, endif, error, if, ifdef, ifndef, include, include_next, line, pragma, undef, exec, sql, begin, end, var, type, oracle, an immediate preprocessor command, a C token, create, function, package, procedure, trigger, or, replace, Syntax error at line 168, column 9, file /usr/include/sys/cdefs.h: Error at line 168, column 9 in file /usr/include/sys/cdefs.h ? __ ## f ## _alias (__VA_ARGS__) \ ........1 PCC-S-02201, Encountered the symbol "__uint8_t" when expecting one of the follow ing: auto, char, const, double, enum, float, int, long, ulong_varchar, OCIBFileLocator OCIBlobLocator, OCIClobLocator, OCIDateTime, OCIExtProcContext, OCIInterval, OCIRowid, OCIDate, OCINumber, OCIRaw, OCIString, register, short, signed, sql_context, sql_cursor, static, struct, union, unsigned, utext, uvarchar, varchar, void, volatile, a typedef name, The symbol "enum," was substituted for "__uint8_t" to continue. Syntax error at line 53, column 9, file /usr/include/bits/types.h: Error at line 53, column 9 in file /usr/include/bits/types.h typedef __int16_t __int_least16_t; ........1 char, const, double, enum, float, int, long, ulong_varchar, OCIBFileLocator OCIBlobLocator, OCIClobLocator, OCIDateTime, OCIExtProcContext, OCIInterval, OCIRowid, OCIDate, OCINumber, OCIRaw, OCIString, short, signed, sql_context, sql_cursor, struct, union, unsigned, utext, uvarchar, varchar, void, volatile, a typedef name, Error at line 0, column 0 in file /home/Functions.pc PCC-F-02102, Fatal error while doing C preprocessing There are compilation errors step1!!
|
gcc, rhel, oracleclient
| 2
| 1,324
| 1
|
https://stackoverflow.com/questions/77192798/error-compiling-pro-c-program-in-linux-red-hat-machine-error-code-pcc-s-02014
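PCC-S-02014 here is the Pro*C precompiler failing to parse the newer glibc headers (the variadic macros in sys/cdefs.h on RHEL 8), not a problem in the .pc file itself. The usual workaround is to stop proc parsing system headers with parse=partial, which requires host variables to sit in an EXEC SQL DECLARE SECTION. A sketch; the file name and gcc include path are examples to adapt:

```bash
proc iname=Functions.pc code=ANSI_C parse=partial \
     sys_include='(/usr/include,/usr/lib/gcc/x86_64-redhat-linux/8/include)'

# or make it permanent for every precompile in
#   $ORACLE_HOME/precomp/admin/pcscfg.cfg :
#   sys_include=(/usr/include,/usr/lib/gcc/x86_64-redhat-linux/8/include)
#   parse=partial
```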
|
76,566,473
|
Unable to Install nfs-utils package in redhat/ubi8-micro:8.6 docker image
|
I'm customizing a Dockerfile for an application which is using redhat/ubi8-micro:8.6 as the base image and I want to install nfs-utils package on top of that. I enabled all the repos that was available within the image under /etc/yum.repos.d/ubi.repo . [ubi-8-baseos-rpms] [ubi-8-baseos-debug-rpms] [ubi-8-baseos-source] [ubi-8-appstream-rpms] [ubi-8-appstream-debug-rpms] [ubi-8-appstream-source] I can install other packages using microdnf package manger which is available within the image, but the nfs-utils package is not available. I tried below ways to install it. A multi-stage docker build to install nfs-utils package and tried copying over all the files but it didn't work, As some dependencies and libraries were missing Downloaded the nfs-utils-1.3.0-0.68.el7.x86_64.rpm rpm package directly from internet and tried installing with rpm -ivh nfs-utils-1.3.0-0.68.el7.x86_64.rpm but it fails to install as it expects other dependencies to installed before installing it. Is there a simple way to install the nfs-utils package inside redhat/ubi8-micro:8.6 docker image?
|
Unable to Install nfs-utils package in redhat/ubi8-micro:8.6 docker image I'm customizing a Dockerfile for an application which is using redhat/ubi8-micro:8.6 as the base image and I want to install nfs-utils package on top of that. I enabled all the repos that was available within the image under /etc/yum.repos.d/ubi.repo . [ubi-8-baseos-rpms] [ubi-8-baseos-debug-rpms] [ubi-8-baseos-source] [ubi-8-appstream-rpms] [ubi-8-appstream-debug-rpms] [ubi-8-appstream-source] I can install other packages using microdnf package manger which is available within the image, but the nfs-utils package is not available. I tried below ways to install it. A multi-stage docker build to install nfs-utils package and tried copying over all the files but it didn't work, As some dependencies and libraries were missing Downloaded the nfs-utils-1.3.0-0.68.el7.x86_64.rpm rpm package directly from internet and tried installing with rpm -ivh nfs-utils-1.3.0-0.68.el7.x86_64.rpm but it fails to install as it expects other dependencies to installed before installing it. Is there a simple way to install the nfs-utils package inside redhat/ubi8-micro:8.6 docker image?
|
docker, dockerfile, rhel, nfs, rhel8
| 2
| 2,218
| 0
|
https://stackoverflow.com/questions/76566473/unable-to-install-nfs-utils-package-in-redhat-ubi8-micro8-6-docker-image
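nfs-utils lives in the full RHEL BaseOS repository, not in the UBI repos, which is why microdnf cannot find it even with every section of ubi.repo enabled. The documented pattern for adding packages to ubi-micro is a multi-stage build that uses the full ubi8 image's dnf with --installroot and then copies the resulting rootfs in; note the builder stage still needs access to entitled RHEL repos (for example, building on a subscribed RHEL host). A sketch:

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi:8.6 AS builder
RUN mkdir -p /mnt/rootfs && \
    dnf install -y --installroot /mnt/rootfs nfs-utils \
        --releasever 8 --setopt install_weak_deps=false --nodocs && \
    dnf --installroot /mnt/rootfs clean all

FROM registry.access.redhat.com/ubi8/ubi-micro:8.6
COPY --from=builder /mnt/rootfs/ /
```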
|
76,016,939
|
Podman error on RHEL 8.6 - OCI runtime error: runc: runc create failed:
|
We are running a RHEL 8.6 VM with Podman 4.1.1 installed. $ podman version Client: Podman Engine Version: 4.1.1 API Version: 4.1.1 Go Version: go1.17.7 Built: Wed Oct 12 08:42:59 2022 OS/Arch: linux/amd64 When we try to start our container, we receive this error. $ ./start.sh Error: OCI runtime error: runc: runc create failed: unable to start container process: waiting for init preliminary setup: read init-p: connection reset by peer Whats interesting is this is intermittent. If we hit up arrow and try to run the script again, about 1 out of every 8 times, we can enter the container. podman info --debug shows: name: runc package: runc-1.1.3-3.module+el8.6.0+16986+c8760fe3.x86_64 path: /usr/bin/runc version: |- runc version 1.1.3 spec: 1.0.2-dev go: go1.17.12 libseccomp: 2.5.2 os: linux remoteSocket: path: /run/podman/podman.sock security: apparmorEnabled: false capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT rootless: false seccompEnabled: true seccompProfilePath: /usr/share/containers/seccomp.json selinuxEnabled: true serviceIsRemote: false slirp4netns: executable: /usr/bin/slirp4netns package: slirp4netns-1.2.0-2.module+el8.6.0+16771+28dfca77.x86_64 version: |- slirp4netns version 1.2.0 commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383 libslirp: 4.4.0 SLIRP_CONFIG_VERSION_MAX: 3 libseccomp: 2.5.2 swapFree: 17179865088 swapTotal: 17179865088 uptime: 44m 13.35s plugins: log: - k8s-file - none - passthrough - journald network: - bridge - macvlan - ipvlan volume: - local registries: search: - registry.access.redhat.com - registry.redhat.io - docker.io store: configFile: /etc/containers/storage.conf containerStore: number: 120 paused: 0 running: 0 stopped: 120 graphDriverName: overlay graphOptions: overlay.mountopt: nodev,metacopy=on graphRoot: /docker/storage graphRootAllocated: 442274021376 graphRootUsed: 266936029184 graphStatus: Backing Filesystem: xfs Native Overlay Diff: "false" Supports d_type: "true" Using metacopy: "true" imageCopyTmpDir: /var/tmp imageStore: number: 1 runRoot: /docker/storage volumePath: /docker/storage/volumes version: APIVersion: 4.1.1 Built: 1665582179 BuiltTime: Wed Oct 12 08:42:59 2022 GitCommit: "" GoVersion: go1.17.7 Os: linux OsArch: linux/amd64 Version: 4.1.1 Do you all have any insight into this error? Thanks!
|
Podman error on RHEL 8.6 - OCI runtime error: runc: runc create failed: We are running a RHEL 8.6 VM with Podman 4.1.1 installed. $ podman version Client: Podman Engine Version: 4.1.1 API Version: 4.1.1 Go Version: go1.17.7 Built: Wed Oct 12 08:42:59 2022 OS/Arch: linux/amd64 When we try to start our container, we receive this error. $ ./start.sh Error: OCI runtime error: runc: runc create failed: unable to start container process: waiting for init preliminary setup: read init-p: connection reset by peer Whats interesting is this is intermittent. If we hit up arrow and try to run the script again, about 1 out of every 8 times, we can enter the container. podman info --debug shows: name: runc package: runc-1.1.3-3.module+el8.6.0+16986+c8760fe3.x86_64 path: /usr/bin/runc version: |- runc version 1.1.3 spec: 1.0.2-dev go: go1.17.12 libseccomp: 2.5.2 os: linux remoteSocket: path: /run/podman/podman.sock security: apparmorEnabled: false capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT rootless: false seccompEnabled: true seccompProfilePath: /usr/share/containers/seccomp.json selinuxEnabled: true serviceIsRemote: false slirp4netns: executable: /usr/bin/slirp4netns package: slirp4netns-1.2.0-2.module+el8.6.0+16771+28dfca77.x86_64 version: |- slirp4netns version 1.2.0 commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383 libslirp: 4.4.0 SLIRP_CONFIG_VERSION_MAX: 3 libseccomp: 2.5.2 swapFree: 17179865088 swapTotal: 17179865088 uptime: 44m 13.35s plugins: log: - k8s-file - none - passthrough - journald network: - bridge - macvlan - ipvlan volume: - local registries: search: - registry.access.redhat.com - registry.redhat.io - docker.io store: configFile: /etc/containers/storage.conf containerStore: number: 120 paused: 0 running: 0 stopped: 120 graphDriverName: overlay graphOptions: overlay.mountopt: nodev,metacopy=on graphRoot: /docker/storage graphRootAllocated: 442274021376 graphRootUsed: 266936029184 graphStatus: Backing Filesystem: xfs Native Overlay Diff: "false" Supports d_type: "true" Using metacopy: "true" imageCopyTmpDir: /var/tmp imageStore: number: 1 runRoot: /docker/storage volumePath: /docker/storage/volumes version: APIVersion: 4.1.1 Built: 1665582179 BuiltTime: Wed Oct 12 08:42:59 2022 GitCommit: "" GoVersion: go1.17.7 Os: linux OsArch: linux/amd64 Version: 4.1.1 Do you all have any insight into this error? Thanks!
|
containers, rhel, podman
| 2
| 1,260
| 0
|
https://stackoverflow.com/questions/76016939/podman-error-on-rhel-8-6-oci-runtime-error-runc-runc-create-failed
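One low-cost way to narrow an intermittent runc failure down is to see whether it follows the OCI runtime: RHEL 8 also ships crun, and a single test run with it costs nothing. The image name below is just an example:

```bash
yum install -y crun
podman --runtime /usr/bin/crun run --rm registry.access.redhat.com/ubi8/ubi echo ok

# to make crun the default, set in /etc/containers/containers.conf:
#   [engine]
#   runtime = "crun"
```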
|
73,310,224
|
Java Object.notifyAll() blocks for long time while consuming high CPU
|
We have a large enterprise telecom application running over OpenJDK8 (25.171-b10) on RHEL6.10. We observe a very strange issue that occurs randomly in production servers. The process starts consuming high CPU and eventually we enter memory congestion due to Java heap being nearly full. After Analyzing the Java process using thread dumps, we noticed that a thread is blocked inside Java “notifyAll” method. The thread remains blocked inside this method for very long time, and the process consumes high CPU while the thread remains blocked inside this method. Below is the stack trace for the thread that remains blocked - "pool-5-thread-1" #432 prio=5 os_prio=0 tid=0x00007f7c85a57000 nid=0x37fe runnable [0x00007f7c3cba0000] java.lang.Thread.State: RUNNABLE at java.lang.Object.notifyAll(Native Method) at gov.nist.javax.sip.parser.Pipeline.close(Pipeline.java:165) - locked <0x00000005c16007b8> (a java.util.LinkedList) at gov.nist.javax.sip.stack.NioTcpMessageChannel.close(NioTcpMessageChannel.java:258) at gov.nist.javax.sip.stack.ConnectionOrientedMessageChannel.close(ConnectionOrientedMessageChannel.java:203) at gov.nist.javax.sip.stack.SocketTimeoutAuditor.runTask(SocketTimeoutAuditor.java:74) The thread remains stuck inside “notifyAll()” for several minutes while consuming high CPU. To provide more information on the Java wait/notify mechanism being used: We have a class “Pipeline” which has a private member “buffList” which is of type LinkedList. An instance of Pipeline starts a thread which is synchronized on “buffList” and calls ‘this.buffList.wait()’. public int read() throws IOException { synchronized (this.buffList) { ... this.buffList.wait(); ... } When the Pipeline instance has some data or needs to be closed, another independent thread would call notifyAll(). Example snippet shows close(): public void close() throws IOException { ... synchronized (this.buffList) { this.buffList.notifyAll(); } } Normally, this works fine but for some reason, (that currently we neither can understand, neither can replicate), the call to ‘notifyAll()’ above not only blocks for several minutes but also causes high CPU usage. Since we tried with multiple ways to replicate the issue but currently appears only in production does anyone faced similar issue? Any idea on how we can troubleshoot/find the root cause of this behavior?
|
Java Object.notifyAll() blocks for long time while consuming high CPU We have a large enterprise telecom application running over OpenJDK8 (25.171-b10) on RHEL6.10. We observe a very strange issue that occurs randomly in production servers. The process starts consuming high CPU and eventually we enter memory congestion due to Java heap being nearly full. After Analyzing the Java process using thread dumps, we noticed that a thread is blocked inside Java “notifyAll” method. The thread remains blocked inside this method for very long time, and the process consumes high CPU while the thread remains blocked inside this method. Below is the stack trace for the thread that remains blocked - "pool-5-thread-1" #432 prio=5 os_prio=0 tid=0x00007f7c85a57000 nid=0x37fe runnable [0x00007f7c3cba0000] java.lang.Thread.State: RUNNABLE at java.lang.Object.notifyAll(Native Method) at gov.nist.javax.sip.parser.Pipeline.close(Pipeline.java:165) - locked <0x00000005c16007b8> (a java.util.LinkedList) at gov.nist.javax.sip.stack.NioTcpMessageChannel.close(NioTcpMessageChannel.java:258) at gov.nist.javax.sip.stack.ConnectionOrientedMessageChannel.close(ConnectionOrientedMessageChannel.java:203) at gov.nist.javax.sip.stack.SocketTimeoutAuditor.runTask(SocketTimeoutAuditor.java:74) The thread remains stuck inside “notifyAll()” for several minutes while consuming high CPU. To provide more information on the Java wait/notify mechanism being used: We have a class “Pipeline” which has a private member “buffList” which is of type LinkedList. An instance of Pipeline starts a thread which is synchronized on “buffList” and calls ‘this.buffList.wait()’. public int read() throws IOException { synchronized (this.buffList) { ... this.buffList.wait(); ... } When the Pipeline instance has some data or needs to be closed, another independent thread would call notifyAll(). Example snippet shows close(): public void close() throws IOException { ... synchronized (this.buffList) { this.buffList.notifyAll(); } } Normally, this works fine but for some reason, (that currently we neither can understand, neither can replicate), the call to ‘notifyAll()’ above not only blocks for several minutes but also causes high CPU usage. Since we tried with multiple ways to replicate the issue but currently appears only in production does anyone faced similar issue? Any idea on how we can troubleshoot/find the root cause of this behavior?
|
java, multithreading, java-8, nio, rhel
| 2
| 193
| 0
|
https://stackoverflow.com/questions/73310224/java-object-notifyall-blocks-for-long-time-while-consuming-high-cpu
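Given the near-full heap, a plausible reading is that the CPU is going to the garbage collector rather than to notifyAll() itself, with application threads merely caught at a safepoint inside whatever native method the dump sampled. A quick way to test that on the next occurrence (the PID/TID values are placeholders):

```bash
PID=12345                                   # the Java process id
top -H -b -n1 -p "$PID" | head -25          # which native threads actually burn CPU
jstack -l "$PID" > /tmp/jstack.txt

TID=23456                                   # a hot thread id from the top output
printf 'look for nid=0x%x in /tmp/jstack.txt\n' "$TID"

# GC pressure over time: FGC climbing with O near 100 points at the collector
jstat -gcutil "$PID" 1000 10
```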
|
71,373,081
|
Show remote command output in CI job results
|
I have CI pipeline which have stages like this. As it shows most of the stuff here is done on remote machine which is working fine. The only issues I am unable to see the command outputs here. For e.g. scp is used with -v which if run manually on machine shows a lot of verbose information useful for debugging etc. same goes for cp -v but in job results it shows no such information. So is there a way I can re-route the command outputs from remote machine to local (gitlab job output) my job 1/6: rules: - changes: - ${LOCA_FILE_PATH} stage: prepare allow_failure: true script: | ssh ${USER}@${HOST} '([ -f "${PATH}/test_conf_1.txt" ] && cp -v "${PATH}/test_conf_1.txt" ${PATH}/test_yaml_$CI_COMMIT_TIMESTAMP.txt)' my job 2/6: rules: - changes: - ${LOCA_FILE_PATH} stage: scp script: scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/
|
Show remote command output in CI job results I have CI pipeline which have stages like this. As it shows most of the stuff here is done on remote machine which is working fine. The only issues I am unable to see the command outputs here. For e.g. scp is used with -v which if run manually on machine shows a lot of verbose information useful for debugging etc. same goes for cp -v but in job results it shows no such information. So is there a way I can re-route the command outputs from remote machine to local (gitlab job output) my job 1/6: rules: - changes: - ${LOCA_FILE_PATH} stage: prepare allow_failure: true script: | ssh ${USER}@${HOST} '([ -f "${PATH}/test_conf_1.txt" ] && cp -v "${PATH}/test_conf_1.txt" ${PATH}/test_yaml_$CI_COMMIT_TIMESTAMP.txt)' my job 2/6: rules: - changes: - ${LOCA_FILE_PATH} stage: scp script: scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/
|
gitlab-ci, rhel
| 2
| 2,519
| 2
|
https://stackoverflow.com/questions/71373081/show-remote-command-output-in-ci-job-results
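Two things commonly hide this output: ssh/scp write their -v diagnostics to stderr rather than stdout, and the '[ -f ... ] && cp -v ...' in job 1 short-circuits silently when the file is missing, so there is genuinely nothing to print. A sketch of the same jobs with stderr folded into stdout and an explicit message on the miss (variable names kept from the question, destination name simplified):

```yaml
my job 1/6:
  stage: prepare
  allow_failure: true
  script: |
    ssh ${USER}@${HOST} 'if [ -f "${PATH}/test_conf_1.txt" ]; then
        cp -v "${PATH}/test_conf_1.txt" "${PATH}/test_conf_1.bak" 2>&1
      else
        echo "source file not found, nothing copied"
      fi'

my job 2/6:
  stage: scp
  script:
    - scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/ 2>&1
```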
|
69,020,353
|
How do you configure the SQL Server Network Configuration protocols in a MSSQL Express Docker container on a Linux server?
|
The gist of the issue is that I am trying to connect to a MSSQL Express Docker container, living on a RHEL 7 server from my local Windows 10 machine using Microsoft SQL Server Management Studio. It is successfully connecting to the RHEL 7 server IP address and port (1433), using the username/password that was created for the container. However, it is throwing out an error that, after countless hours scouring Google, people have referenced back to needing to enable TCP/IP. This is easy in the Windows GUI. Not so much in a Linux environment. The error message from SSMS: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (Microsoft SQL Server, Error: 64) -> The specified network name is no longer available I know how to do this in the Windows environment: Run SQL Server Configuration Manager Expand SQL Server Network Configuration Select Properties for Protocols for MSSQLSERVER Enable TCP/IP I have also figured out how to use mssql-conf to modify various attributes in mssql.conf , which is where this change will take place. The issue is that I want to enable TCP/IP , but I am not seeing that option under the /opt/mssql/bin/mssql-conf list | more . Any suggestions? For reference, these are the parameters you can use with mssql-conf (the equivalent of SQL Server Configuration Manager on Linux). control.alternatewritethrough Enable optimized write through flush for O_DSYN C requests control.hestacksize Host extension stack size in KB control.stoponguestprocessfault Stops the process if any guest process reports unhandled exception control.writethrough Use O_DSYNC for file flag write through request s coredump.captureminiandfull Capture both mini and full core dumps coredump.coredumptype Core dump type to capture: mini, miniplus, filt ered, full distributedtransaction.allowonlysecurerpccalls Configure secure only rpc calls for distributed transactions distributedtransaction.fallbacktounsecurerpcifnecessary Configure security only rpc calls for distribut ed transactions distributedtransaction.maxlogsize DTC log file size in MB. Default is 64MB distributedtransaction.memorybuffersize Circular buffer size in which traces are stored . 
This size is in MB and default is 10MB distributedtransaction.servertcpport MSDTC rpc server port distributedtransaction.trace_cm Traces in the connection manager distributedtransaction.trace_contact Traces the contact pool and contacts distributedtransaction.trace_gateway Traces Gateway source distributedtransaction.trace_log Log tracing distributedtransaction.trace_misc Traces that cannot be categorized into the othe r categories distributedtransaction.trace_proxy Traces that are generated in the MSDTC proxy distributedtransaction.trace_svc Traces service and .exe file startup distributedtransaction.trace_trace The trace infrastructure itself distributedtransaction.trace_util Traces utility routines that are called from mu ltiple locations distributedtransaction.trace_xa XA Transaction Manager (XATM) tracing source distributedtransaction.tracefilepath Folder in which trace files should be stored distributedtransaction.turnoffrpcsecurity Enable or disable RPC security for distributed transactions filelocation.defaultbackupdir Default directory for backup files filelocation.defaultdatadir Default directory for data files filelocation.defaultdumpdir Default directory for crash dump files filelocation.defaultlogdir Default directory for log files filelocation.errorlogfile Error log file location filelocation.masterdatafile Master database data file location filelocation.masterlogfile Master database log file location hadr.hadrenabled Allow SQL Server to use availability groups for high availability and disaster recovery language.lcid Locale identifier for SQL Server to use (e.g. 1 033 for US - English) memory.memorylimitmb SQL Server memory limit (megabytes) network.disablesssd Disable querying SSSD for AD account informatio n and default to LDAP calls network.enablekdcfromkrb5conf Enable looking up KDC information from krb5.con f network.forceencryption Force encryption of incoming client connections network.forcesecureldap Force using LDAPS to contact domain controller network.ipaddress IP address for incoming connections network.kerberoskeytabfile Kerberos keytab file location network.privilegedadaccount Privileged AD user to use for AD authentication network.rpcport TCP port for Rpc endpoint mapper network.tcpport TCP port for incoming connections network.tlscert Path to certificate file for encrypting incomin g client connections network.tlsciphers TLS ciphers allowed for encrypted incoming clie nt connections network.tlskey Path to private key file for encrypting incomin g client connections network.tlsprotocols TLS protocol versions allowed for encrypted inc oming client connections sqlagent.databasemailprofile SQL Agent Database Mail profile name sqlagent.enabled Enable or disable SQLAgent sqlagent.errorlogfile SQL Agent log file path sqlagent.errorlogginglevel SQL Agent logging level bitmask - 1=Errors, 2=W arnings, 4=Info telemetry.customerfeedback Telemetry status telemetry.userrequestedlocalauditdirectory Directory for telemetry local audit cache Also for reference, this is the only thing in the mssql.conf file. If something is there be default, I have no way of knowing it, because all I have to go off of is what's listed in this file: [sqlagent] enabled = true
|
How do you configure the SQL Server Network Configuration protocols in a MSSQL Express Docker container on a Linux server? The gist of the issue is that I am trying to connect to a MSSQL Express Docker container, living on a RHEL 7 server from my local Windows 10 machine using Microsoft SQL Server Management Studio. It is successfully connecting to the RHEL 7 server IP address and port (1433), using the username/password that was created for the container. However, it is throwing out an error that, after countless hours scouring Google, people have referenced back to needing to enable TCP/IP. This is easy in the Windows GUI. Not so much in a Linux environment. The error message from SSMS: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (Microsoft SQL Server, Error: 64) -> The specified network name is no longer available I know how to do this in the Windows environment: Run SQL Server Configuration Manager Expand SQL Server Network Configuration Select Properties for Protocols for MSSQLSERVER Enable TCP/IP I have also figured out how to use mssql-conf to modify various attributes in mssql.conf , which is where this change will take place. The issue is that I want to enable TCP/IP , but I am not seeing that option under the /opt/mssql/bin/mssql-conf list | more . Any suggestions? For reference, these are the parameters you can use with mssql-conf (the equivalent of SQL Server Configuration Manager on Linux). control.alternatewritethrough Enable optimized write through flush for O_DSYN C requests control.hestacksize Host extension stack size in KB control.stoponguestprocessfault Stops the process if any guest process reports unhandled exception control.writethrough Use O_DSYNC for file flag write through request s coredump.captureminiandfull Capture both mini and full core dumps coredump.coredumptype Core dump type to capture: mini, miniplus, filt ered, full distributedtransaction.allowonlysecurerpccalls Configure secure only rpc calls for distributed transactions distributedtransaction.fallbacktounsecurerpcifnecessary Configure security only rpc calls for distribut ed transactions distributedtransaction.maxlogsize DTC log file size in MB. Default is 64MB distributedtransaction.memorybuffersize Circular buffer size in which traces are stored . 
This size is in MB and default is 10MB distributedtransaction.servertcpport MSDTC rpc server port distributedtransaction.trace_cm Traces in the connection manager distributedtransaction.trace_contact Traces the contact pool and contacts distributedtransaction.trace_gateway Traces Gateway source distributedtransaction.trace_log Log tracing distributedtransaction.trace_misc Traces that cannot be categorized into the othe r categories distributedtransaction.trace_proxy Traces that are generated in the MSDTC proxy distributedtransaction.trace_svc Traces service and .exe file startup distributedtransaction.trace_trace The trace infrastructure itself distributedtransaction.trace_util Traces utility routines that are called from mu ltiple locations distributedtransaction.trace_xa XA Transaction Manager (XATM) tracing source distributedtransaction.tracefilepath Folder in which trace files should be stored distributedtransaction.turnoffrpcsecurity Enable or disable RPC security for distributed transactions filelocation.defaultbackupdir Default directory for backup files filelocation.defaultdatadir Default directory for data files filelocation.defaultdumpdir Default directory for crash dump files filelocation.defaultlogdir Default directory for log files filelocation.errorlogfile Error log file location filelocation.masterdatafile Master database data file location filelocation.masterlogfile Master database log file location hadr.hadrenabled Allow SQL Server to use availability groups for high availability and disaster recovery language.lcid Locale identifier for SQL Server to use (e.g. 1 033 for US - English) memory.memorylimitmb SQL Server memory limit (megabytes) network.disablesssd Disable querying SSSD for AD account informatio n and default to LDAP calls network.enablekdcfromkrb5conf Enable looking up KDC information from krb5.con f network.forceencryption Force encryption of incoming client connections network.forcesecureldap Force using LDAPS to contact domain controller network.ipaddress IP address for incoming connections network.kerberoskeytabfile Kerberos keytab file location network.privilegedadaccount Privileged AD user to use for AD authentication network.rpcport TCP port for Rpc endpoint mapper network.tcpport TCP port for incoming connections network.tlscert Path to certificate file for encrypting incomin g client connections network.tlsciphers TLS ciphers allowed for encrypted incoming clie nt connections network.tlskey Path to private key file for encrypting incomin g client connections network.tlsprotocols TLS protocol versions allowed for encrypted inc oming client connections sqlagent.databasemailprofile SQL Agent Database Mail profile name sqlagent.enabled Enable or disable SQLAgent sqlagent.errorlogfile SQL Agent log file path sqlagent.errorlogginglevel SQL Agent logging level bitmask - 1=Errors, 2=W arnings, 4=Info telemetry.customerfeedback Telemetry status telemetry.userrequestedlocalauditdirectory Directory for telemetry local audit cache Also for reference, this is the only thing in the mssql.conf file. If something is there be default, I have no way of knowing it, because all I have to go off of is what's listed in this file: [sqlagent] enabled = true
|
sql-server, linux, docker, rhel, sql-server-config-manager
| 2
| 2,459
| 0
|
https://stackoverflow.com/questions/69020353/how-do-you-configure-the-sql-server-network-configuration-protocols-in-a-mssql-e
|
68,695,422
|
"ssh -q" pbrun exec last login still showing up even with .hushlogin
|
I'm running the below script on RHEL 7.9 and after connecting to the remote machine and execing pbrun, the last login message is getting displayed and sometimes it interferes with the command which is forcing me to do many more steps than necessary and it's just downright frustrating. As I stated in the subject, I have tried this with "ssh -q" as well as with "ssh -q" AND a .hushlogin file in both my home directory as well as root's home. I'm sure it has to do with the switching of users, but I can't figure out how to get rid of the last login message. **** MODIFYING SYSTEM FILES IS OUT OF THE QUESTION **** There appears to be another issue when trying to remove a file, it's the "CASCU050E Failed to send data (Reason=[send failed], Rc=[111])." errors in the below output. Any help greatly appreciated! Thanks Here's the script: if [ "$#" -ne 2 ]; then echo "You must enter exactly 2 command line arguments" echo "argument 1 = file containing all hosts to test, one fqdn per line." echo "argument 2 = full path to output file." exit 1 fi echo 'Please enter your AD passord for API authentication' echo -n "Password: " read -s passy echo "" sshpass="sshpass -p$passy ssh -q -oStrictHostKeyChecking=no" scppass="sshpass -p$passy scp -r -q -oStrictHostKeyChecking=no" hosts=$(cat $1) IAM=$(whoami) echo "*** Moving existing /tmp/perf directories out of the way. ***" for HOST in $hosts; do if ($sshpass $HOST ps waux | grep run-fio-tests | grep -v grep >/dev/null 2>&1) then echo "*** fio test is currently running on host $HOST, STOPPING THE FIO RUN. ***" break fi ##### #Tried with the two following lines uncommented, no change in behavior. ##### # $sshpass $HOST "touch .hushlogin" # $sshpass $HOST "echo touch .hushlogin | exec pbrun /bin/su -" dirs=$($sshpass $HOST "ls /tmp/| grep -i ^perf" | grep -v .tgz) for dir in $dirs; do echo "Moving existing /tmp/$dir directory to /tmp/$dir.date +%Y%m%d-%H%M.tgz on $HOST" $sshpass $HOST "tar czf /tmp/$dir.date +%Y%m%d-%H%M.tgz -P /tmp/$dir 2>/dev/null" $sshpass $HOST "echo chown -R $IAM /tmp/perf | pbrun /bin/su -" $sshpass $HOST "rm -rf /tmp/$dir" done done for HOST in $hosts; do echo "*** Cleaning up on $HOST ***" $sshpass $HOST "echo rm -rf /tmp/data/randread-raw-data-5G /tmp/data/seqread-raw-data-32G /tmp/data/seqwrite-raw-data-5G | exec pbrun /bin/su -" $sshpass $HOST "echo rm -rf /tmp/RUNFIO.sh | exec pbrun /bin/su -" done Here are the errors i'm getting: Please enter your AD passord for API authentication Password: *** Moving existing /tmp/perf directories out of the way. *** Last login: Fri Aug 6 22:51:55 MST 2021 Moving existing /tmp/perf directory to /tmp/perf.20210806-2255.tgz on host1.acme.com Last login: Fri Aug 6 22:55:38 MST 2021 *** End Moving old perfs. *** *** Cleaning up on host1.acme.com *** CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). Last login: Fri Aug 6 22:55:45 MST 2021
|
"ssh -q" pbrun exec last login still showing up even with .hushlogin I'm running the below script on RHEL 7.9 and after connecting to the remote machine and execing pbrun, the last login message is getting displayed and sometimes it interferes with the command which is forcing me to do many more steps than necessary and it's just downright frustrating. As I stated in the subject, I have tried this with "ssh -q" as well as with "ssh -q" AND a .hushlogin file in both my home directory as well as root's home. I'm sure it has to do with the switching of users, but I can't figure out how to get rid of the last login message. **** MODIFYING SYSTEM FILES IS OUT OF THE QUESTION **** There appears to be another issue when trying to remove a file, it's the "CASCU050E Failed to send data (Reason=[send failed], Rc=[111])." errors in the below output. Any help greatly appreciated! Thanks Here's the script: if [ "$#" -ne 2 ]; then echo "You must enter exactly 2 command line arguments" echo "argument 1 = file containing all hosts to test, one fqdn per line." echo "argument 2 = full path to output file." exit 1 fi echo 'Please enter your AD passord for API authentication' echo -n "Password: " read -s passy echo "" sshpass="sshpass -p$passy ssh -q -oStrictHostKeyChecking=no" scppass="sshpass -p$passy scp -r -q -oStrictHostKeyChecking=no" hosts=$(cat $1) IAM=$(whoami) echo "*** Moving existing /tmp/perf directories out of the way. ***" for HOST in $hosts; do if ($sshpass $HOST ps waux | grep run-fio-tests | grep -v grep >/dev/null 2>&1) then echo "*** fio test is currently running on host $HOST, STOPPING THE FIO RUN. ***" break fi ##### #Tried with the two following lines uncommented, no change in behavior. ##### # $sshpass $HOST "touch .hushlogin" # $sshpass $HOST "echo touch .hushlogin | exec pbrun /bin/su -" dirs=$($sshpass $HOST "ls /tmp/| grep -i ^perf" | grep -v .tgz) for dir in $dirs; do echo "Moving existing /tmp/$dir directory to /tmp/$dir.date +%Y%m%d-%H%M.tgz on $HOST" $sshpass $HOST "tar czf /tmp/$dir.date +%Y%m%d-%H%M.tgz -P /tmp/$dir 2>/dev/null" $sshpass $HOST "echo chown -R $IAM /tmp/perf | pbrun /bin/su -" $sshpass $HOST "rm -rf /tmp/$dir" done done for HOST in $hosts; do echo "*** Cleaning up on $HOST ***" $sshpass $HOST "echo rm -rf /tmp/data/randread-raw-data-5G /tmp/data/seqread-raw-data-32G /tmp/data/seqwrite-raw-data-5G | exec pbrun /bin/su -" $sshpass $HOST "echo rm -rf /tmp/RUNFIO.sh | exec pbrun /bin/su -" done Here are the errors i'm getting: Please enter your AD passord for API authentication Password: *** Moving existing /tmp/perf directories out of the way. *** Last login: Fri Aug 6 22:51:55 MST 2021 Moving existing /tmp/perf directory to /tmp/perf.20210806-2255.tgz on host1.acme.com Last login: Fri Aug 6 22:55:38 MST 2021 *** End Moving old perfs. *** *** Cleaning up on host1.acme.com *** CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). CASCU050E Failed to send data (Reason=[send failed], Rc=[111]). Last login: Fri Aug 6 22:55:45 MST 2021
|
shell, ssh, exec, rhel, pbrun
| 2
| 345
| 0
|
https://stackoverflow.com/questions/68695422/ssh-q-pbrun-exec-last-login-still-showing-up-even-with-hushlogin
|
66,798,441
|
Celery is continuously restarting after 2 min's in Supervisor
|
I am using Celery for excuting various tasks from django. The celery is managed using Supervisor . Rabbitmq is used the broker for celery . Celery and django are inside a single Docker container and broker in another. The entire application is working Docker-compose running in RHEL 7.9. Problem celery is continuously restarting after every 2min in supervisor. no log event about the same is captured. Code Supervisor config [program:django-celery] command=/usr/local/bin/celery -A CELERY_APP worker --loglevel=info --logfile=/var/log/supervisor/celery.log directory=/app user=root numprocs=1 stdout_logfile=/var/log/supervisor/celery_out.log stderr_logfile=/var/log/supervisor/celery_err.log autostart=true autorestart=true startsecs=10 stopwaitsecs = 600 killasgroup=true priority=998 environment=DJANGO_SETTINGS_MODULE='CELERY_APP.settings',CONFIG_TYPE='PROD',LD_LIBRARY_PATH='/usr/lib/oracle/18.3/client64/lib/' [program:django-celerybeat] command=/usr/local/bin/celery -A CELERY_APP beat --loglevel=info --logfile=/var/log/supervisor/celery_beat.log directory=/app user=root numprocs=1 stdout_logfile=/var/log/beat_out.log stderr_logfile=/var/log/supervisor/beat_err.log autostart=true autorestart=true startsecs=10 priority=999 environment=DJANGO_SETTINGS_MODULE='CELERY_APP.settings',CONFIG_TYPE='PROD',LD_LIBRARY_PATH='/usr/lib/oracle/18.3/client64/lib/' Dockerfile FROM python:3.8 ENV PYTHONUNBUFFERED=1 RUN mkdir /app WORKDIR /app EXPOSE 8000 5672 15672 COPY requirements.txt /app/ RUN pip install -r requirements.txt RUN apt-get update RUN apt-get install -y wget net-tools curl alien unzip telnet supervisor zlib1g RUN apt-get update && apt-get install -y libaio1 RUN wget [URL] RUN alien -i oracle-instantclient18.3-basic-18.3.0.0.0-3.x86_64.rpm ENV LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib/${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH} COPY . /app/ RUN mkdir -p /app/TEMP/PIXL RUN mkdir -p /app/TEMP/RUNNER RUN mkdir -p /app/TEMP/DUMP COPY docker/django/start /start RUN chmod +x /start COPY docker/django/supervisor_app.conf /etc/supervisor/conf.d/supervisor_app.conf COPY docker/django/supervisord.conf /etc/supervisor/supervisord.conf celery.py import os from celery import Celery from celery.schedules import crontab os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'CELERY_APP.settings') app = Celery("CELERY_APP") app.config_from_object('django.conf:settings', namespace='CELERY') app.conf.beat_schedule = { 'zip_and_send': { 'task': 'asyncManagment.tasks.create_zip_file_for_push', 'schedule': crontab(minute='*/30'), }, 'parse_response': { 'task': 'asyncManagment.tasks.response_manager', 'schedule': crontab(minute=0,hour='*/3'), #crontab(minute='*',hour='8-19') }, 'zip_and_resend': { 'task': 'asyncManagment.tasks.create_zip_file_for_repush', 'schedule': crontab(minute=6,hour='14'), }, 'value_check':{ 'task':'asyncManagment.views.db_check', 'schedule': crontab(minute=6,hour='12'), }, } app.autodiscover_tasks() Could someone please tell me the possible cause and solution? If I've missed out anything, over- or under-emphasized a specific point, let me know in the comments. Thank you so much in advance for your time.
|
Celery is continuously restarting after 2 min's in Supervisor I am using Celery for excuting various tasks from django. The celery is managed using Supervisor . Rabbitmq is used the broker for celery . Celery and django are inside a single Docker container and broker in another. The entire application is working Docker-compose running in RHEL 7.9. Problem celery is continuously restarting after every 2min in supervisor. no log event about the same is captured. Code Supervisor config [program:django-celery] command=/usr/local/bin/celery -A CELERY_APP worker --loglevel=info --logfile=/var/log/supervisor/celery.log directory=/app user=root numprocs=1 stdout_logfile=/var/log/supervisor/celery_out.log stderr_logfile=/var/log/supervisor/celery_err.log autostart=true autorestart=true startsecs=10 stopwaitsecs = 600 killasgroup=true priority=998 environment=DJANGO_SETTINGS_MODULE='CELERY_APP.settings',CONFIG_TYPE='PROD',LD_LIBRARY_PATH='/usr/lib/oracle/18.3/client64/lib/' [program:django-celerybeat] command=/usr/local/bin/celery -A CELERY_APP beat --loglevel=info --logfile=/var/log/supervisor/celery_beat.log directory=/app user=root numprocs=1 stdout_logfile=/var/log/beat_out.log stderr_logfile=/var/log/supervisor/beat_err.log autostart=true autorestart=true startsecs=10 priority=999 environment=DJANGO_SETTINGS_MODULE='CELERY_APP.settings',CONFIG_TYPE='PROD',LD_LIBRARY_PATH='/usr/lib/oracle/18.3/client64/lib/' Dockerfile FROM python:3.8 ENV PYTHONUNBUFFERED=1 RUN mkdir /app WORKDIR /app EXPOSE 8000 5672 15672 COPY requirements.txt /app/ RUN pip install -r requirements.txt RUN apt-get update RUN apt-get install -y wget net-tools curl alien unzip telnet supervisor zlib1g RUN apt-get update && apt-get install -y libaio1 RUN wget [URL] RUN alien -i oracle-instantclient18.3-basic-18.3.0.0.0-3.x86_64.rpm ENV LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib/${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH} COPY . /app/ RUN mkdir -p /app/TEMP/PIXL RUN mkdir -p /app/TEMP/RUNNER RUN mkdir -p /app/TEMP/DUMP COPY docker/django/start /start RUN chmod +x /start COPY docker/django/supervisor_app.conf /etc/supervisor/conf.d/supervisor_app.conf COPY docker/django/supervisord.conf /etc/supervisor/supervisord.conf celery.py import os from celery import Celery from celery.schedules import crontab os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'CELERY_APP.settings') app = Celery("CELERY_APP") app.config_from_object('django.conf:settings', namespace='CELERY') app.conf.beat_schedule = { 'zip_and_send': { 'task': 'asyncManagment.tasks.create_zip_file_for_push', 'schedule': crontab(minute='*/30'), }, 'parse_response': { 'task': 'asyncManagment.tasks.response_manager', 'schedule': crontab(minute=0,hour='*/3'), #crontab(minute='*',hour='8-19') }, 'zip_and_resend': { 'task': 'asyncManagment.tasks.create_zip_file_for_repush', 'schedule': crontab(minute=6,hour='14'), }, 'value_check':{ 'task':'asyncManagment.views.db_check', 'schedule': crontab(minute=6,hour='12'), }, } app.autodiscover_tasks() Could someone please tell me the possible cause and solution? If I've missed out anything, over- or under-emphasized a specific point, let me know in the comments. Thank you so much in advance for your time.
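A hedged first step, not from the original post, is to see what supervisord and the worker report around each restart, since the config above only captures the worker's own log file; the program names below come from the supervisor config shown.
supervisorctl status django-celery                # shows whether supervisord sees EXITED, FATAL or BACKOFF
supervisorctl tail -f django-celery stderr        # stream stderr while waiting for the next restart
/usr/local/bin/celery -A CELERY_APP worker --loglevel=debug   # run the identical command by hand inside the container
If the hand-run worker also dies after roughly two minutes, the cause sits in Celery or the RabbitMQ connection rather than in supervisord.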
|
docker, celery, rhel, django-celery, supervisord
| 2
| 1,097
| 0
|
https://stackoverflow.com/questions/66798441/celery-is-continuously-restarting-after-2-mins-in-supervisor
|
66,667,186
|
Azure Linux - OS filesystem to xfs
|
How can I change the filesystem type & partition layout for an Azure VM (Linux-based)? Currently all of the OS filesystems show up as ext4 when I deploy a RHEL 8 VM.
|
Azure Linux - OS filesystem to xfs How can I change the filesystem type & partition layout for an Azure VM (Linux-based)? Currently all of the OS filesystems show up as ext4 when I deploy a RHEL 8 VM.
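For what it is worth, the OS disk of a marketplace RHEL image keeps whatever layout was baked into that image, so an in-place ext4-to-xfs conversion is not offered; the usual patterns are a custom image or putting xfs on an attached data disk. A minimal sketch, assuming a newly attached data disk shows up as /dev/sdc:
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo mkdir -p /data
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdc1) /data xfs defaults,nofail 0 2" | sudo tee -a /etc/fstab
sudo mount /data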
|
azure, filesystems, virtual-machine, rhel
| 2
| 359
| 0
|
https://stackoverflow.com/questions/66667186/azure-linux-os-filesystem-to-xfs
|
66,254,342
|
RHEL 8.2 podman rootless container network bottleneck at 20Mbits/s
|
When I run a container with podman 1.6.4 on RHEL 8.2 (fresh install) as a rootless user, the maximum upload speed per container is around 22 Mbit/s (on a 1 Gbit/s network). After starting a second process, total network usage goes up to ~45 Mbit/s (~23 Mbit/s per container). After starting a third process, total network usage goes up to ~74 Mbit/s (~23 Mbit/s per container). So I suspect the network speed limitation comes from the podman container itself. Does anyone know how to remove this upload speed limit for rootless containers in RHEL? ==== Dockerfile FROM python:3.8-slim WORKDIR /app COPY requirements.txt . RUN pip install --no-cache-dir boto3 COPY . . ENTRYPOINT ["python","upload.py"] upload.py uploads a million files with multiple threads to AWS S3. After investigating, I found a similar bug report: Rootless network serious bandwidth reduction? #8834 [URL] After changing the MTU to 1500 or 9000, the upload speed is still limited to ~23 Mbit/s. I also tried changing the network to host with --network=host but the speed did not change. So it might not be related to that bug report.
|
RHEL 8.2 podman rootless container network bottleneck at 20Mbits/s When I run a container with podman 1.6.4 on RHEL 8.2 (fresh install) as a rootless user, the maximum upload speed per container is around 22 Mbit/s (on a 1 Gbit/s network). After starting a second process, total network usage goes up to ~45 Mbit/s (~23 Mbit/s per container). After starting a third process, total network usage goes up to ~74 Mbit/s (~23 Mbit/s per container). So I suspect the network speed limitation comes from the podman container itself. Does anyone know how to remove this upload speed limit for rootless containers in RHEL? ==== Dockerfile FROM python:3.8-slim WORKDIR /app COPY requirements.txt . RUN pip install --no-cache-dir boto3 COPY . . ENTRYPOINT ["python","upload.py"] upload.py uploads a million files with multiple threads to AWS S3. After investigating, I found a similar bug report: Rootless network serious bandwidth reduction? #8834 [URL] After changing the MTU to 1500 or 9000, the upload speed is still limited to ~23 Mbit/s. I also tried changing the network to host with --network=host but the speed did not change. So it might not be related to that bug report.
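A hedged way to confirm whether the cap really comes from the rootless network path (slirp4netns) rather than from the upload code: run the same throughput test rootless, rootless with --network=host, and rootful, then compare. The image and server names below are placeholders.
podman run --rm example-iperf-image iperf3 -c iperf-server.example.com                   # rootless, via slirp4netns
podman run --rm --network=host example-iperf-image iperf3 -c iperf-server.example.com    # rootless, host network
sudo podman run --rm example-iperf-image iperf3 -c iperf-server.example.com              # rootful, via the CNI bridge
If all three runs hit the same ceiling, as the --network=host test above already hints, the limit is more likely in the S3 upload path than in podman.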
|
python, rhel, podman
| 2
| 401
| 0
|
https://stackoverflow.com/questions/66254342/rhel-8-2-podman-rootless-container-network-bottleneck-at-20mbits-s
|
65,603,724
|
Stopping rpmbuild from re-writing Requires
|
I am building a binary RPM from a Python package. The package uses a custom .spec file, and this is a hard requirement. ( python setup.py bdist_rpm is simply not flexible enough.) For instance, a snippet of the .spec file looks like this: %else %if 0%{?rhel} >= 7 BuildRequires: python2-devel BuildRequires: python-setuptools Requires: python(abi) = 2.7 Requires: python2-boto >= 2.5.0 Requires: python2-cryptography < 3 The process is to: Create an sdist and put it under SOURCES ( %{_sourcedir} ) Call rpmbuild -bb <path-to.spec> The meat of the spec file looks like this: %prep %setup -q -n %{name}-%{unmangled_version} %build %if %{with_python2} %py2_build %endif %if %{with_python3} %py3_build %endif %install %if %{with_python2} %{__python2} setup.py install --single-version-externally-managed -O1 --root=%{buildroot} --record=INSTALLED_FILES %endif %if %{with_python3} %{__python3} setup.py install --single-version-externally-managed -O1 --root=%{buildroot} --record=INSTALLED_FILES %endif For simplification, we can forget about the conditional checks and just specify a single unconditional requires: Requires: PyYAML The problem is rpmbuild somewhere mangles the Requires: to a completely different set of requirements. $ rpm -qp --requires /root/rpmbuild/RPMS/noarch/path-to-project3.0.0-1.noarch.rpm /bin/bash /usr/bin/env /usr/bin/python /usr/bin/python2 python(abi) = 2.7 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsXz) <= 5.2-1 with the PyYAML requirement conspicuously missing. The build occurs on a CentOS 7 Docker container, and no matter what I try, I cannot get rpmbuild to leave the Requires: sections alone. I've attempted: %undefine __pythondist_requires setup.py --quiet egg_info --egg-base /tmp sdist All to no avail. How do I get rpmbuild to just stay true to the actual .spec file? MVCE I've built an MVCE project at [URL] . The README there contains complete instructions to reproduce, also shown here: Clone the project. Run container: docker container run -it --rm --volume "$(pwd)":/io --workdir /io --entrypoint bash centos:7 Set up and build RPM that will get put in dist/ : $ yum update -y $ yum install -y make $ make setup_build $ make rpm Check requires, see those from .spec file are ignored: [root@xxxxxx io]# rpm -qp --requires dist/myproj-0.1-1.noarch.rpm python(abi) = 2.7 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsXz) <= 5.2-1
|
Stopping rpmbuild from re-writing Requires I am building a binary RPM from a Python package. The package uses a custom .spec file, and this is a hard requirement. ( python setup.py bdist_rpm is simply not flexible enough.) For instance, a snippet of the .spec file looks like this: %else %if 0%{?rhel} >= 7 BuildRequires: python2-devel BuildRequires: python-setuptools Requires: python(abi) = 2.7 Requires: python2-boto >= 2.5.0 Requires: python2-cryptography < 3 The process is to: Create an sdist and put it under SOURCES ( %{_sourcedir} ) Call rpmbuild -bb <path-to.spec> The meat of the spec file looks like this: %prep %setup -q -n %{name}-%{unmangled_version} %build %if %{with_python2} %py2_build %endif %if %{with_python3} %py3_build %endif %install %if %{with_python2} %{__python2} setup.py install --single-version-externally-managed -O1 --root=%{buildroot} --record=INSTALLED_FILES %endif %if %{with_python3} %{__python3} setup.py install --single-version-externally-managed -O1 --root=%{buildroot} --record=INSTALLED_FILES %endif For simplification, we can forget about the conditional checks and just specify a single unconditional requires: Requires: PyYAML The problem is rpmbuild somewhere mangles the Requires: to a completely different set of requirements. $ rpm -qp --requires /root/rpmbuild/RPMS/noarch/path-to-project3.0.0-1.noarch.rpm /bin/bash /usr/bin/env /usr/bin/python /usr/bin/python2 python(abi) = 2.7 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsXz) <= 5.2-1 with the PyYAML requirement conspicuously missing. The build occurs on a CentOS 7 Docker container, and no matter what I try, I cannot get rpmbuild to leave the Requires: sections alone. I've attempted: %undefine __pythondist_requires setup.py --quiet egg_info --egg-base /tmp sdist All to no avail. How do I get rpmbuild to just stay true to the actual .spec file? MVCE I've built an MVCE project at [URL] . The README there contains complete instructions to reproduce, also shown here: Clone the project. Run container: docker container run -it --rm --volume "$(pwd)":/io --workdir /io --entrypoint bash centos:7 Set up and build RPM that will get put in dist/ : $ yum update -y $ yum install -y make $ make setup_build $ make rpm Check requires, see those from .spec file are ignored: [root@xxxxxx io]# rpm -qp --requires dist/myproj-0.1-1.noarch.rpm python(abi) = 2.7 rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(PayloadIsXz) <= 5.2-1
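One knob worth trying, sketched here rather than verified against this exact spec: switch off rpmbuild's automatic dependency generator so that only the hand-written Requires: lines survive, which at least shows where the rewrite happens.
AutoReqProv: no      # spec preamble; disables both generated Requires and generated Provides
AutoReq: no          # milder alternative; keeps generated Provides, drops generated Requires
If PyYAML then shows up in rpm -qp --requires, the generator was the moving part; if it still does not, the spec that rpmbuild actually consumed differs from the one being edited.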
|
python, rpm, rhel, rpmbuild
| 2
| 520
| 0
|
https://stackoverflow.com/questions/65603724/stopping-rpmbuild-from-re-writing-requires
|
64,323,249
|
My ec2 instance terminates after yum update and reboot
|
When I do a yum update I need to reboot the instance, but when I reboot, AWS terminates the instance. I am using a RHEL 7 AMI. Does anyone know how to fix this? I tried putting the instance into standby in the Auto Scaling group, with no effect. sudo yum update -y
|
My ec2 instance terminates after yum update and reboot When I do a yum update I need to reboot the instance, but when I reboot, AWS terminates the instance. I am using a RHEL 7 AMI. Does anyone know how to fix this? I tried putting the instance into standby in the Auto Scaling group, with no effect. sudo yum update -y
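Two hedged checks, with a placeholder instance id: whether the instance's shutdown behaviour is set to terminate, and whether the Auto Scaling group's health check is replacing the instance while it reboots.
aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute instanceInitiatedShutdownBehavior
aws autoscaling describe-auto-scaling-instances --query 'AutoScalingInstances[?InstanceId==`i-0123456789abcdef0`]'
The Auto Scaling activity history (aws autoscaling describe-scaling-activities) usually states outright when a failed health check triggered the termination.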
|
amazon-web-services, amazon-ec2, yum, rhel
| 2
| 212
| 0
|
https://stackoverflow.com/questions/64323249/my-ec2-instance-terminates-after-yum-update-and-reboot
|
63,138,112
|
Why is mlockall occasionally slow?
|
Background: we've recently updated our Operating System from RHEL 6.7 MRG to RHEL 7.7 (with RealTime). Our RealTime Ada Radar Applications run mlockall (c binding) to lock into memory at startup (yes, I understand this is rarely necessary, and likely isn't for all of the applications, but is required for many). The Problem: Since the upgrade, mlockall occasionally takes over 2 minutes, where it usually takes < 1 second. What could be causing this behavior? I was pointed at file cache/memory buffer, so we ran some tests after dropping caches, but it didn't seem to have a positive effect.
|
Why is mlockall occasionally slow? Background: we've recently updated our Operating System from RHEL 6.7 MRG to RHEL 7.7 (with RealTime). Our RealTime Ada Radar Applications run mlockall (c binding) to lock into memory at startup (yes, I understand this is rarely necessary, and likely isn't for all of the applications, but is required for many). The Problem: Since the upgrade, mlockall occasionally takes over 2 minutes, where it usually takes < 1 second. What could be causing this behavior? I was pointed at file cache/memory buffer, so we ran some tests after dropping caches, but it didn't seem to have a positive effect.
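A hedged way to narrow it down, with a placeholder binary name: time the syscall itself and snapshot memory pressure at startup, so the slow runs can be compared with the fast ones.
strace -f -T -e trace=mlockall ./radar_app           # -T prints the time spent inside each syscall
grep -E 'MemFree|MemAvailable|Dirty' /proc/meminfo    # memory pressure just before the application starts
If the slow runs coincide with low MemFree or a large Dirty count, the time is likely going into reclaiming and faulting in the pages that mlockall has to pin.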
|
linux, real-time, ada, rhel, timing
| 2
| 161
| 0
|
https://stackoverflow.com/questions/63138112/why-is-mlockall-occasionally-slow
|
62,052,595
|
python + how to debug python script on linux as bash with -x
|
We are running some Python scripts on Linux. Since Python failures are sometimes difficult to understand, we need a good way of debugging them; in the bash world, for example, it's bash -x. What is the equivalent for Python? Example without any Python debugging: python /lpp/airflow/.sec/security.py get_connection_string rmq Traceback (most recent call last): File "/lpp/airflow/.sec/security.py", line 105, in <module> get_connection_string(sys.argv[2]) File "/lpp/airflow/.sec/security.py", line 58, in get_connection_string pass_hash = open(rmq_pass_file, 'r') IOError: [Errno 2] No such file or directory: '/lpp/airflow/.sec/rmq_pass'
|
python + how to debug python script on linux as bash with -x We are running some Python scripts on Linux. Since Python failures are sometimes difficult to understand, we need a good way of debugging them; in the bash world, for example, it's bash -x. What is the equivalent for Python? Example without any Python debugging: python /lpp/airflow/.sec/security.py get_connection_string rmq Traceback (most recent call last): File "/lpp/airflow/.sec/security.py", line 105, in <module> get_connection_string(sys.argv[2]) File "/lpp/airflow/.sec/security.py", line 58, in get_connection_string pass_hash = open(rmq_pass_file, 'r') IOError: [Errno 2] No such file or directory: '/lpp/airflow/.sec/rmq_pass'
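Rough equivalents of bash -x for a Python script, shown against the command from the question: the trace module prints every line as it executes, and pdb drops into an interactive debugger instead.
python -m trace --trace /lpp/airflow/.sec/security.py get_connection_string rmq
python -m pdb /lpp/airflow/.sec/security.py get_connection_string rmq
Both work on the Python 2.7 interpreter used here; trace output is very verbose, so piping it through tail or grep is usually necessary.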
|
python, bash, python-2.7, debugging, rhel
| 2
| 748
| 1
|
https://stackoverflow.com/questions/62052595/python-how-to-debug-python-script-on-linux-as-bash-with-x
|
61,598,601
|
Nothing provides python needed by mssql-server-is
|
I'm trying to install SQL Server Integration Services on RHEL 8. I have SQL Server 2019 (version v15) installed and running successfully, but when I try and follow this: [URL] The RHEL 8 repo doesn't have the mssql-server-is package in it, and using the RHEL 7 repo throws this error: Error: Problem: conflicting requests - nothing provides python needed by mssql-server-is-15.0.2000.5-4.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) I have both Python2 and Python 3 installed, and when I run this: alternatives --config python I see that /usr/bin/python2 is selected Any ideas how I can get SSIS installed for RHEL 8 and SQL Server 2019?
|
Nothing provides python needed by mssql-server-is I'm trying to install SQL Server Integration Services on RHEL 8. I have SQL Server 2019 (version v15) installed and running successfully, but when I try and follow this: [URL] The RHEL 8 repo doesn't have the mssql-server-is package in it, and using the RHEL 7 repo throws this error: Error: Problem: conflicting requests - nothing provides python needed by mssql-server-is-15.0.2000.5-4.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) I have both Python2 and Python 3 installed, and when I run this: alternatives --config python I see that /usr/bin/python2 is selected Any ideas how I can get SSIS installed for RHEL 8 and SQL Server 2019?
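Two hedged workarounds, not verified against this exact package: RHEL 8 deliberately ships no package named plain python, so either something has to provide that name or the dependency has to be bypassed when installing the EL7 build.
sudo dnf download mssql-server-is                       # needs dnf-plugins-core; pulls the RPM from the enabled RHEL 7 repo
sudo rpm -ivh --nodeps mssql-server-is-*.x86_64.rpm     # installs while skipping the unresolvable "python" requirement
Skipping dependencies is a blunt instrument, so it is worth checking afterwards that the SSIS setup and runtime actually find the python2 interpreter.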
|
sql-server, rhel, sql-server-2019
| 2
| 2,685
| 1
|
https://stackoverflow.com/questions/61598601/nothing-provides-python-needed-by-mssql-server-is
|
61,101,680
|
install pandas on rhel using yum
|
If I want to install pandas on a RHEL server using the following command from the official pandas documentation ( pandas installation ): yum install python3-pandas I get the following error: No package python3-pandas available Nothing to do Below are the specs for my RHEL and Python configuration: $ cat /etc/redhat-release # Red Hat Enterprise Linux Server release 7.8 $ python -V # Python 3.6.9 $ yum install python3 # Package python3-3.6.8-13.el7.x86_64 already installed and latest version # Nothing to do $ python3 -V # Python 3.6.9 What am I doing wrong?
|
install pandas on rhel using yum If I want to install pandas on a RHEL server using the following command from the official pandas documentation ( pandas installation ): yum install python3-pandas I get the following error: No package python3-pandas available Nothing to do Below are the specs for my RHEL and Python configuration: $ cat /etc/redhat-release # Red Hat Enterprise Linux Server release 7.8 $ python -V # Python 3.6.9 $ yum install python3 # Package python3-3.6.8-13.el7.x86_64 already installed and latest version # Nothing to do $ python3 -V # Python 3.6.9 What am I doing wrong?
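For context, the yum error above just means python3-pandas is not in any repo configured on this box (the command in the pandas docs targets distributions that actually package it); a hedged fallback is pip inside the existing python3 stack.
sudo yum install -y python3-pip        # only if pip for python3 is not already present
python3 -m pip install --user pandas
Adding an EPEL repository, where a python3 pandas build may exist, is the other common route, but pip keeps the install independent of repo availability.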
|
pandas, installation, yum, rhel
| 2
| 12,930
| 1
|
https://stackoverflow.com/questions/61101680/install-pandas-on-rhel-using-yum
|
60,406,675
|
How to embed data in shared library?
|
For example, I want to embed dicmap.bin to a shared library libxxx.so . I write a program to verify it. test_dicmap.cpp #include <stdio.h> #include <stdint.h> extern "C" { extern const uint8_t _binary_dicmap_bin_start[]; extern const uint8_t _binary_dicmap_bin_end[]; extern const void* _binary_dicmap_bin_size; } int main() { size_t size = (size_t)&_binary_dicmap_bin_size; printf("start=%p, end=%p\nend-start=%zd, size=%zd\n", _binary_dicmap_bin_start, _binary_dicmap_bin_end, _binary_dicmap_bin_end - _binary_dicmap_bin_start, size); printf("data[0..8]=%02x %02x %02x %02x %02x %02x %02x %02x\n", _binary_dicmap_bin_start[0], _binary_dicmap_bin_start[1], _binary_dicmap_bin_start[2], _binary_dicmap_bin_start[3], _binary_dicmap_bin_start[4], _binary_dicmap_bin_start[5], _binary_dicmap_bin_start[6], _binary_dicmap_bin_start[7]); } But its _start , _end and _size is invalid. ]$ ls dicmap.bin -l -rw-rw-r-- 1 kirbyzhou kirbyzhou 198600798 Feb 26 10:58 dicmap.bin ]# objcopy -B i386 -I binary -O elf64-x86-64 dicmap.bin dicmap.o && g++ -o libxxx.so dicmap.o -shared && g++ -L. -lxxx test_dicmap.cpp /opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: warning: type and size of dynamic symbol _binary_dicmap_bin_size' are not defined /opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: warning: type and size of dynamic symbol _binary_dicmap_bin_start' are not defined /opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: warning: type and size of dynamic symbol `_binary_dicmap_bin_end' are not defined ]# ./a.out start=0x601034, end=0x601034 end-start=0, size=6295604 data[0..8]=00 00 00 00 00 00 00 00 end-start and size should be sizeof dicmap.bin (198600798). My objcopy is binutils-2.30-54.el7 of rhel7 with devtoolset-8 . I try to add share flags to the .o file, but a error happens: objcopy -B i386 -I binary -O elf64-x86-64 dicmap.bin dicmap.o --set-section-flag .data=share objcopy: BFD version 2.30-54.el7 internal error, aborting at elf.c:8869 in _bfd_elf_set_section_contents objcopy: Please report this bug. binutils-2.27-41.base.el7_7.1.x86_64 of rhel7 also have the same problem. Is there any method to help me?
|
How to embed data in shared library? For example, I want to embed dicmap.bin to a shared library libxxx.so . I write a program to verify it. test_dicmap.cpp #include <stdio.h> #include <stdint.h> extern "C" { extern const uint8_t _binary_dicmap_bin_start[]; extern const uint8_t _binary_dicmap_bin_end[]; extern const void* _binary_dicmap_bin_size; } int main() { size_t size = (size_t)&_binary_dicmap_bin_size; printf("start=%p, end=%p\nend-start=%zd, size=%zd\n", _binary_dicmap_bin_start, _binary_dicmap_bin_end, _binary_dicmap_bin_end - _binary_dicmap_bin_start, size); printf("data[0..8]=%02x %02x %02x %02x %02x %02x %02x %02x\n", _binary_dicmap_bin_start[0], _binary_dicmap_bin_start[1], _binary_dicmap_bin_start[2], _binary_dicmap_bin_start[3], _binary_dicmap_bin_start[4], _binary_dicmap_bin_start[5], _binary_dicmap_bin_start[6], _binary_dicmap_bin_start[7]); } But its _start , _end and _size is invalid. ]$ ls dicmap.bin -l -rw-rw-r-- 1 kirbyzhou kirbyzhou 198600798 Feb 26 10:58 dicmap.bin ]# objcopy -B i386 -I binary -O elf64-x86-64 dicmap.bin dicmap.o && g++ -o libxxx.so dicmap.o -shared && g++ -L. -lxxx test_dicmap.cpp /opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: warning: type and size of dynamic symbol _binary_dicmap_bin_size' are not defined /opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: warning: type and size of dynamic symbol _binary_dicmap_bin_start' are not defined /opt/rh/devtoolset-8/root/usr/libexec/gcc/x86_64-redhat-linux/8/ld: warning: type and size of dynamic symbol `_binary_dicmap_bin_end' are not defined ]# ./a.out start=0x601034, end=0x601034 end-start=0, size=6295604 data[0..8]=00 00 00 00 00 00 00 00 end-start and size should be sizeof dicmap.bin (198600798). My objcopy is binutils-2.30-54.el7 of rhel7 with devtoolset-8 . I try to add share flags to the .o file, but a error happens: objcopy -B i386 -I binary -O elf64-x86-64 dicmap.bin dicmap.o --set-section-flag .data=share objcopy: BFD version 2.30-54.el7 internal error, aborting at elf.c:8869 in _bfd_elf_set_section_contents objcopy: Please report this bug. binutils-2.27-41.base.el7_7.1.x86_64 of rhel7 also have the same problem. Is there any method to help me?
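One alternative worth sketching, rather than a fix for objcopy itself: generate the object from a tiny assembly stub with .incbin, which sidesteps objcopy and yields ordinary relocatable symbols that behave inside a shared library; the symbol names below are illustrative, not the objcopy-generated ones.
cat > dicmap.S <<'EOF'
.section .rodata
.globl dicmap_start
.globl dicmap_end
dicmap_start:
.incbin "dicmap.bin"
dicmap_end:
EOF
g++ -c -fPIC dicmap.S -o dicmap.o
g++ -shared -o libxxx.so dicmap.o
On the C++ side the data is then reached as extern const uint8_t dicmap_start[], dicmap_end[]; with the size computed as dicmap_end - dicmap_start.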
|
linux, shared-libraries, rhel, binutils
| 2
| 969
| 1
|
https://stackoverflow.com/questions/60406675/how-to-embed-data-in-shared-library
|
58,843,458
|
HP-UX cc uses a default setting to allow null dereferences, is that possible in gcc with RHEL?
|
From the HPUX cc, c89 - C compiler man page: -z Do not bind anything to address zero. This option allows runtime detection of null pointers. See the note on pointers below. -Z Allow dereferencing of null pointers. See the note on pointers below. The -z and -Z are linker options. See ld(1) for more details. Then from ld(1) : The default value of the -Z/-z option is -Z. This means that, by default, any C program compiled with this version of cc on HPUX that dereferences a null pointer will read the value as 0 and not segfault. There is no option for this with gcc or cc on RHEL. Does anyone know how I would compile a C program with an option like this on RHEL (to allow null dereferece)? I know this is a terrible coding practice and I will not be using it to create new code. Thank you.
|
HP-UX cc uses a default setting to allow null dereferences, is that possible in gcc with RHEL? From the HPUX cc, c89 - C compiler man page: -z Do not bind anything to address zero. This option allows runtime detection of null pointers. See the note on pointers below. -Z Allow dereferencing of null pointers. See the note on pointers below. The -z and -Z are linker options. See ld(1) for more details. Then from ld(1) : The default value of the -Z/-z option is -Z. This means that, by default, any C program compiled with this version of cc on HPUX that dereferences a null pointer will read the value as 0 and not segfault. There is no option for this with gcc or cc on RHEL. Does anyone know how I would compile a C program with an option like this on RHEL (to allow null dereferece)? I know this is a terrible coding practice and I will not be using it to create new code. Thank you.
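There is no gcc or ld switch equivalent to HP-UX -Z, but for completeness, and with the same health warning already given in the question: Linux can be told to permit a mapping at virtual address zero, after which the program itself must mmap a zero-filled page there so that NULL reads return zeros instead of faulting. A sketch only; SELinux may additionally block low mappings.
sudo sysctl -w vm.mmap_min_addr=0
# then, early in main(): mmap(0, pagesize, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0)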
|
c, gcc, rhel, hp-ux, cc
| 2
| 324
| 1
|
https://stackoverflow.com/questions/58843458/hp-ux-cc-uses-a-default-setting-to-allow-null-dereferences-is-that-possible-in
|
57,769,101
|
How to fix UnsatisfiedLinkError by making minimal changes?
|
I am trying to compile and run the following JNI Java file in RHEL: package com.isprint.am.util.hsm; public class LunaMDUtil { //bunch of constants public native int MD_Initialize(); //...and a bunch of other native methods static { System.loadLibrary("ethsm"); System.loadLibrary("LunaMDUtil"); } public static void main(String[] args) { // some test code code } } (I omitted the code because I think it is not relevant to the question.) In my dev folder I have the following files: // files used to build libLunaMDUtil.so com_isprint_am_util_hsm_LunaMDUtil.c com_isprint_am_util_hsm_LunaMDUtil.h ethsm.lib libLunaMDUtil.so LunaMDUtil.java (i.e the above Java source code file) To build libLunaMDUtil.so I run the following command: gcc -fPIC -I"$JAVA_HOME/include" -I"$JAVA_HOME/include/linux" -shared -o libLunaMDUtil.so com_isprint_am_util_hsm_LunaMDUtil.c No errors here. Next I compile my LunaMDUtil.java : javac -cp . LunaMDUtil.java No errors here. The final step here is where I start getting problems. Running java -Xdiag -cp . LunaMDUtil gives me this error: Error: Could not find or load main class LunaMDUtil java.lang.NoClassDefFoundError: LunaMDUtil (wrong name: com/isprint/am/util/hsm/LunaMDUtil) at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) at java.net.URLClassLoader.access$100(URLClassLoader.java:74) at java.net.URLClassLoader$1.run(URLClassLoader.java:369) at java.net.URLClassLoader$1.run(URLClassLoader.java:363) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:362) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495) Next, I run java -Xdiag -cp . com.isprint.am.util.hsm.LunaMDUtil and I get this error: Error: Could not find or load main class com.isprint.am.util.hsm.LunaMDUtil java.lang.ClassNotFoundException: com.isprint.am.util.hsm.LunaMDUtil at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495) I move my .class file to folder ./com/isprint/am/util/hsm (following the package name) and this time I get: Exception in thread "main" java.lang.UnsatisfiedLinkError: no ethsm in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at com.isprint.am.util.hsm.LunaMDUtil.<clinit>(LunaMDUtil.java:67) At this point my hands start to become tied because ethsm.lib is a library from an external party and I would like to avoid renaming it if possible. Also, I would like to avoid making source code changes if possible. (Of course if the one and only solution involves making those changes then at least I have some justification.) What changes should I make so that I can get my test code in LunaMDUtil.java to run?
|
How to fix UnsatisfiedLinkError by making minimal changes? I am trying to compile and run the following JNI Java file in RHEL: package com.isprint.am.util.hsm; public class LunaMDUtil { //bunch of constants public native int MD_Initialize(); //...and a bunch of other native methods static { System.loadLibrary("ethsm"); System.loadLibrary("LunaMDUtil"); } public static void main(String[] args) { // some test code code } } (I omitted the code because I think it is not relevant to the question.) In my dev folder I have the following files: // files used to build libLunaMDUtil.so com_isprint_am_util_hsm_LunaMDUtil.c com_isprint_am_util_hsm_LunaMDUtil.h ethsm.lib libLunaMDUtil.so LunaMDUtil.java (i.e the above Java source code file) To build libLunaMDUtil.so I run the following command: gcc -fPIC -I"$JAVA_HOME/include" -I"$JAVA_HOME/include/linux" -shared -o libLunaMDUtil.so com_isprint_am_util_hsm_LunaMDUtil.c No errors here. Next I compile my LunaMDUtil.java : javac -cp . LunaMDUtil.java No errors here. The final step here is where I start getting problems. Running java -Xdiag -cp . LunaMDUtil gives me this error: Error: Could not find or load main class LunaMDUtil java.lang.NoClassDefFoundError: LunaMDUtil (wrong name: com/isprint/am/util/hsm/LunaMDUtil) at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) at java.net.URLClassLoader.access$100(URLClassLoader.java:74) at java.net.URLClassLoader$1.run(URLClassLoader.java:369) at java.net.URLClassLoader$1.run(URLClassLoader.java:363) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:362) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495) Next, I run java -Xdiag -cp . com.isprint.am.util.hsm.LunaMDUtil and I get this error: Error: Could not find or load main class com.isprint.am.util.hsm.LunaMDUtil java.lang.ClassNotFoundException: com.isprint.am.util.hsm.LunaMDUtil at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495) I move my .class file to folder ./com/isprint/am/util/hsm (following the package name) and this time I get: Exception in thread "main" java.lang.UnsatisfiedLinkError: no ethsm in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at com.isprint.am.util.hsm.LunaMDUtil.<clinit>(LunaMDUtil.java:67) At this point my hands start to become tied because ethsm.lib is a library from an external party and I would like to avoid renaming it if possible. Also, I would like to avoid making source code changes if possible. (Of course if the one and only solution involves making those changes then at least I have some justification.) What changes should I make so that I can get my test code in LunaMDUtil.java to run?
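A hedged workaround that avoids renaming the vendor file: System.loadLibrary("ethsm") maps to libethsm.so on Linux, so if ethsm.lib is really an ELF shared object it can be exposed under that name with a symlink and found through java.library.path.
file ethsm.lib                                   # confirm it is an ELF 64-bit shared object rather than a Windows import library
ln -s "$(pwd)/ethsm.lib" "$(pwd)/libethsm.so"    # a symlink only; the vendor file itself stays untouched
java -Djava.library.path="$(pwd)" -cp . com.isprint.am.util.hsm.LunaMDUtil
If file reports something other than an ELF shared object, no amount of path tweaking will help and a Linux build of the library is needed.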
|
gcc, java-native-interface, javac, rhel, .so
| 2
| 201
| 1
|
https://stackoverflow.com/questions/57769101/how-to-fix-unsatisfiedlinkerror-by-making-minimal-changes
|
57,736,285
|
Issues connecting to server with sslv3 disabled in legacy perl
|
As the title says, I'm attempting to POST some JSON to a REST API using a pretty old version of perl, specifically 5.10.1. I'm working in a pretty old legacy codebase on RHEL 6. The handshake fails with error 500 because the server has sslv3 disabled. However I'm using versions of LWP and IO::Socket::SSL that I believe nominally support at least TLSv1, which the server supports so I'm unsure why the connection fails as it should just use an acceptable cipher. The server appears to be behind a cloudfront reverse proxy which may be relevant. I don't know enough about SSL to say for certain, but it appears that the issue is in the way the server I care about is set up and how the version of perl and libraries in use implement the cipher. Using an online tool I noticed that the handshake fails on several browsers that do not support SNI. The version of openssl available should support this, but neither of the perl SSL implementations appear to. Is there a way around this? this question seems to be dealing with a similar issue, but they solve it by installing a newer version of perl, which I'm not sure I can do given that this would likely break large portions of the codebase. This unresolved bug seems to accidentally document this behavior my problem is identical to the one listed, but I can also connect to the openssl host. I'm not sure that it's the same problem but I am on the same version of redhat with the same software and the same inconsistent behavior so it seems valuable. If it helps my software versions are as follows openssl-1.0.1e-58.el6_10.i686 perl-Crypt-SSLeay-0.57-17.el6.x86_64 perl-IO-Socket-SSL-1.31-3.el6_8.2.noarch perl-Net-SSLeay-1.35-10.el6_8.1.x86_64 If I run printf 'HTTP/1.0 200 Ok\r\n\r\n' | openssl s_server -accept 2000 -cert certificate.pem -key key.pem -no_ssl2 -no_ssl3 -no_tls1 the following script (test.pl) is able to connect using the same libraries my $ua = LWP::UserAgent->new(timeout => 600); my $req = HTTP::Request->new(POST => $ARGV[0]); $req->content_type('application/json'); $req->content_encoding('gzip'); $req->content(''); my $res = $ua->request($req); print $res->status_line, "\n"; >perl5.10.1 test.pl [URL] 200 Ok However, when I run an only slightly more complicated program using the same libraries and perl version I get errors relating to the sslv3 handshake 500 Can't connect to dev.superquoteapi.fis.rcuh.com:443 (SSL connect attempt failed with unknown errorerror:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) I've even tried to force the program to use only TLSv1, but to no avail, the following line seems to have no impact my $ua = LWP::UserAgent->new(ssl_opts => {verify_hostname => 0}, SSL_version => '!TLSv12:TLSv1:!SSLv2:!SSLv3', allowed_protocols => ['https']); The program I am actually working with is listed below use LWP; use IO::Socket::SSL; my $ua = LWP::UserAgent->new(ssl_opts => {verify_hostname => 0}, SSL_version => '!TLSv12:TLSv1:!SSLv2:!SSLv3', allowed_protocols => ['https']); my $req = HTTP::Request->new('POST', '[URL] $req->header('Content-Type' => 'application/json'); $req->header('Accept' => 'application/json'); #an api key is required and set in a header, but I won't list that here open(my $fd, '<', 'data,json') or die "Unable to open file, $!"; my @json=<$fd>; close($fd) or warn "Unable to close the file handle: $!"; $req->content(@json); my $resp = $ua->request($req); print $resp->as_string; I have tried removing IO::Socket::SSL and replacing it with Net::SSL as is referenced in this question How do I 
force LWP to use Crypt::SSLeay for HTTPS requests? but this does not seem to help. The code I used was use Net::SSL (); BEGIN { $Net::HTTPS::SSL_SOCKET_CLASS = "Net::SSL"; # Force use of Net::SSL } The two programs are more or less identical. The issue seems to be in the way the specific server [URL] handles ssl requests. However I do not know enough about SSL/TLS to verify this meaningfully, so I have included both for completeness. I apologize if this question is a bit rambling, but I really need help, and I just want to provide as much information as I can. If I can provide further information, or remove unhelpful information please let me know, I am absolutely at my wits end.
|
Issues connecting to server with sslv3 disabled in legacy perl As the title says, I'm attempting to POST some JSON to a REST API using a pretty old version of perl, specifically 5.10.1. I'm working in a pretty old legacy codebase on RHEL 6. The handshake fails with error 500 because the server has sslv3 disabled. However I'm using versions of LWP and IO::Socket::SSL that I believe nominally support at least TLSv1, which the server supports so I'm unsure why the connection fails as it should just use an acceptable cipher. The server appears to be behind a cloudfront reverse proxy which may be relevant. I don't know enough about SSL to say for certain, but it appears that the issue is in the way the server I care about is set up and how the version of perl and libraries in use implement the cipher. Using an online tool I noticed that the handshake fails on several browsers that do not support SNI. The version of openssl available should support this, but neither of the perl SSL implementations appear to. Is there a way around this? this question seems to be dealing with a similar issue, but they solve it by installing a newer version of perl, which I'm not sure I can do given that this would likely break large portions of the codebase. This unresolved bug seems to accidentally document this behavior my problem is identical to the one listed, but I can also connect to the openssl host. I'm not sure that it's the same problem but I am on the same version of redhat with the same software and the same inconsistent behavior so it seems valuable. If it helps my software versions are as follows openssl-1.0.1e-58.el6_10.i686 perl-Crypt-SSLeay-0.57-17.el6.x86_64 perl-IO-Socket-SSL-1.31-3.el6_8.2.noarch perl-Net-SSLeay-1.35-10.el6_8.1.x86_64 If I run printf 'HTTP/1.0 200 Ok\r\n\r\n' | openssl s_server -accept 2000 -cert certificate.pem -key key.pem -no_ssl2 -no_ssl3 -no_tls1 the following script (test.pl) is able to connect using the same libraries my $ua = LWP::UserAgent->new(timeout => 600); my $req = HTTP::Request->new(POST => $ARGV[0]); $req->content_type('application/json'); $req->content_encoding('gzip'); $req->content(''); my $res = $ua->request($req); print $res->status_line, "\n"; >perl5.10.1 test.pl [URL] 200 Ok However, when I run an only slightly more complicated program using the same libraries and perl version I get errors relating to the sslv3 handshake 500 Can't connect to dev.superquoteapi.fis.rcuh.com:443 (SSL connect attempt failed with unknown errorerror:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) I've even tried to force the program to use only TLSv1, but to no avail, the following line seems to have no impact my $ua = LWP::UserAgent->new(ssl_opts => {verify_hostname => 0}, SSL_version => '!TLSv12:TLSv1:!SSLv2:!SSLv3', allowed_protocols => ['https']); The program I am actually working with is listed below use LWP; use IO::Socket::SSL; my $ua = LWP::UserAgent->new(ssl_opts => {verify_hostname => 0}, SSL_version => '!TLSv12:TLSv1:!SSLv2:!SSLv3', allowed_protocols => ['https']); my $req = HTTP::Request->new('POST', '[URL] $req->header('Content-Type' => 'application/json'); $req->header('Accept' => 'application/json'); #an api key is required and set in a header, but I won't list that here open(my $fd, '<', 'data,json') or die "Unable to open file, $!"; my @json=<$fd>; close($fd) or warn "Unable to close the file handle: $!"; $req->content(@json); my $resp = $ua->request($req); print $resp->as_string; I have tried removing IO::Socket::SSL and replacing 
it with Net::SSL as is referenced in this question How do I force LWP to use Crypt::SSLeay for HTTPS requests? but this does not seem to help. The code I used was use Net::SSL (); BEGIN { $Net::HTTPS::SSL_SOCKET_CLASS = "Net::SSL"; # Force use of Net::SSL } The two programs are more or less identical. The issue seems to be in the way the specific server [URL] handles ssl requests. However I do not know enough about SSL/TLS to verify this meaningfully, so I have included both for completeness. I apologize if this question is a bit rambling, but I really need help, and I just want to provide as much information as I can. If I can provide further information, or remove unhelpful information please let me know, I am absolutely at my wits end.
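One hedged check that separates the SNI question from the protocol question, using the hostname already shown in the error above: compare a TLSv1 handshake with and without -servername.
openssl s_client -connect dev.superquoteapi.fis.rcuh.com:443 -tls1
openssl s_client -connect dev.superquoteapi.fis.rcuh.com:443 -tls1 -servername dev.superquoteapi.fis.rcuh.com
If only the second handshake completes, the CloudFront endpoint requires SNI, which the IO::Socket::SSL 1.31 / Net::SSLeay 1.35 stack listed above almost certainly cannot send; the realistic options are then a newer Net::SSLeay/IO::Socket::SSL pair installed locally from CPAN or an outbound proxy that adds SNI.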
|
perl, ssl, openssl, rhel, lwp
| 2
| 652
| 0
|
https://stackoverflow.com/questions/57736285/issues-connecting-to-server-with-sslv3-disabled-in-legacy-perl
|
56,668,136
|
rpm upgrade can't replace directory with file?
|
I have a previous version of a package I maintain, that contained a subdirectory with files in it. The upgrade is happening on RHEL/CentOS 7. For example my version 1.0 RPM contained: /opt/foo/etc/bar/x/y /opt/foo/etc/bar/z etc. In the newer version of this package, I must replace the entire /opt/foo/etc/bar directory with a file of that same name (unfortunately this is required by the tool, there's nothing I can do about it). So in the new version of the package, it will contain /opt/foo/etc/bar which is a file. If I run normal rpm --upgrade pkg-2.0.rpm , I get an error before any of my spec scriptlets are even invoked: file /opt/foo/etc/bar from install of pkg-2.0-1.x86_64 conflicts with file from package pkg-1.0-1.x86_64 In order to avoid this I must add the --replacefiles option to my rpm command line, which is gross. Even if I do that, it still fails, this time after my preinst scriptlet runs, with an error like this: error: unpacking of archive failed on file /opt/foo/etc/bar: cpio: rename failed - Is a directory error: pkg-2.0-1.x86_64: install failed error: pkg-1.0-1.x86_64: erase skipped The only way I can make this work, as far as I can tell, is to modify my preinst to remove the directory, AND to add the --replacefiles option to rpm . Even after I do all that, while the upgrade does succeed it throws a warning for every single file which is "missing" (because I removed the directory): warning: file /opt/foo/etc/bar/x/y: remove failed: Not a directory warning: file /opt/foo/etc/bar/z: remove failed: Not a directory I don't know why it's showing this error since these things are not directories and never were, but anyway. I've searched all around for info on this particular issue and while I've found lots of sort-of similar errors they are all for different situations, such as people trying to install two packages with overlapping files or similar. Here I'm definitely trying to upgrade one version of a package to a new version of that same package. There seems to be no possible way to make this work cleanly in RPM; is this just a deficiency of the RPM tool or am I missing something?
|
rpm upgrade can't replace directory with file? I have a previous version of a package I maintain, that contained a subdirectory with files in it. The upgrade is happening on RHEL/CentOS 7. For example my version 1.0 RPM contained: /opt/foo/etc/bar/x/y /opt/foo/etc/bar/z etc. In the newer version of this package, I must replace the entire /opt/foo/etc/bar directory with a file of that same name (unfortunately this is required by the tool, there's nothing I can do about it). So in the new version of the package, it will contain /opt/foo/etc/bar which is a file. If I run normal rpm --upgrade pkg-2.0.rpm , I get an error before any of my spec scriptlets are even invoked: file /opt/foo/etc/bar from install of pkg-2.0-1.x86_64 conflicts with file from package pkg-1.0-1.x86_64 In order to avoid this I must add the --replacefiles option to my rpm command line, which is gross. Even if I do that, it still fails, this time after my preinst scriptlet runs, with an error like this: error: unpacking of archive failed on file /opt/foo/etc/bar: cpio: rename failed - Is a directory error: pkg-2.0-1.x86_64: install failed error: pkg-1.0-1.x86_64: erase skipped The only way I can make this work, as far as I can tell, is to modify my preinst to remove the directory, AND to add the --replacefiles option to rpm . Even after I do all that, while the upgrade does succeed it throws a warning for every single file which is "missing" (because I removed the directory): warning: file /opt/foo/etc/bar/x/y: remove failed: Not a directory warning: file /opt/foo/etc/bar/z: remove failed: Not a directory I don't know why it's showing this error since these things are not directories and never were, but anyway. I've searched all around for info on this particular issue and while I've found lots of sort-of similar errors they are all for different situations, such as people trying to install two packages with overlapping files or similar. Here I'm definitely trying to upgrade one version of a package to a new version of that same package. There seems to be no possible way to make this work cleanly in RPM; is this just a deficiency of the RPM tool or am I missing something?
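The packaging-guideline recipe for exactly this situation, sketched here without having been tested against this package, is a %pretrans scriptlet written in rpm's embedded Lua: it runs before the file-conflict check, so it can move the old directory aside without --replacefiles and without the remove warnings. The path is taken from the question; the .rpmmoved suffix is the conventional one.
%pretrans -p <lua>
path = "/opt/foo/etc/bar"
st = posix.stat(path)
if st and st.type == "directory" then
  os.rename(path, path .. ".rpmmoved")
end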
|
linux, centos, rpm, rhel, rpm-spec
| 2
| 1,079
| 1
|
https://stackoverflow.com/questions/56668136/rpm-upgrade-cant-replace-directory-with-file
|
54,407,724
|
UnsatisfiedLinkError exception when lib is explicitly declared
|
Trying to execute my Java app yields a UnsatisfiedLinkError exception when the libmysqlclient.so.18 can not be found, even when it is explicitly declared in LD_LIBRARY_PATH , -Djava.library.path and /etc/ld.so.conf PROBLEM DESCRIPTION I am trying to make use of pcap4j ( [URL] ), a Java wrapper for libpcap, so I can sniff packets on my machine NIFs from a Java application. As libpcap requires superuser privileges to perform this task, I have to somehow give the non-privileged user executing this app the capacity of accessing the NIFs. The maintainer of pcap4j suggests granting capabilities CAP_NET_RAW and CAP_NET_ADMIN to the java command as follow: setcap cap_net_raw,cap_net_admin=eip /path/to/java Due to implementation limitations I am constrained as follow: Avoid giving sudo access to the non-privileged user due to security policies. The same reasoning could be applied to granting above mentioned capabilities to the java command (do not know if capability grantings are given per user/command pair), but, from my relatively scarce knowledge about security, the latter option looks like a more delimited permission granting method for what I want to achieve (solutions are welcome too should an alternative permission granting method looks more suitable for my purpose), and given that the pcapj4 developer, a likely more experienced professional, advices so, I have followed the capability granting path. User must be able to execute the app without being password-prompted Permission granting must be done only once, e.g. when creating the user for the first time. After granting CAP_NET_RAW and CAP_NET_ADMIN capabilites to the java command, the problem arose. I am getting the following exception when executing my app: Error creating entity java.lang.UnsatisfiedLinkError: /path/to/app/lib/libxpherejava.so: libmysqlclient.so.18: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1857) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) DETAILS JAVA: java-1.8.0-openjdk-1.8.0.171-8 OS: Linux user-me 3.10.0-862.6.3.el7.x86_64 #1 SMP Fri Jun 15 17:57:37 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux [Red Hat Enterprise Linux Server release 7.5 (Maipo)] LD_LIBRARY_PATH contains an explicit path to the not found library: LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/lwp:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre/lib/amd64/jli/libjli.so:/usr/lib64/mysql/libmysqlclient.so.18 export LD_LIBRARY_PATH LD_LIBRARY_PATH is passed to the JVM by making use of java.library.path : exec java -XshowSettings:properties -Djava.library.path=${LD_LIBRARY_PATH} -d64 ... The "-XshowSettings:properties" provides me with the following output when the java command is executed: java.library.path = /path/to/app/lib /path/to/app/lib/glib-2.0 /usr/lib/lwp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre/lib/amd64/jli/libjli.so /usr/lib64/mysql/libmysqlclient.so.18 where /usr/lib64/mysql/libmysqlclient.so.18 is a symlink to /usr/lib64/mysql/libmysqlclient.so.18.0.0 Seems like JVM (or whatever entity is requesting access to libmysqlclient.so.18 from libxpherejava.so ) does not find libmysqlclient.so.18 , even when its path is explicitly provided to the java.library.path and the file DOES exist. 
With LD_LIBRARY_PATH containing the path to libmysqlclient.so.18 (/usr/lib64/mysql/libmysqlclient.so.18), issuing ldd of libxpherejava.so yields that libmysqlclient.so.18 can not be found [user@user-me log]$ ldd /path/to/app/lib/libxpherejava.so linux-vdso.so.1 => (0x00007ffe1a73d000) libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007fd727df4000) libssl.so.10 => /lib64/libssl.so.10 (0x00007fd727b83000) libxphereS.so => /path/to/app/lib/libxphereS.so (0x00007fd727973000) libmysqlclient.so.18 => not found libz.so.1 => /lib64/libz.so.1 (0x00007fd72775d000) libnsl.so.1 => /lib64/libnsl.so.1 (0x00007fd727543000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fd727327000) libm.so.6 => /lib64/libm.so.6 (0x00007fd727025000) libglib-2.0.so.0 => /lib64/libglib-2.0.so.0 (0x00007fd726d11000) libc.so.6 => /lib64/libc.so.6 (0x00007fd726944000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fd726740000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007fd7264f3000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007fd72620b000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007fd726007000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007fd725dd4000) /lib64/ld-linux-x86-64.so.2 (0x00007fd728718000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fd725b72000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007fd725964000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007fd725760000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007fd725547000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fd725320000) This is the content of ld.so.conf : [user@user-me lib]$ cat /etc/ld.so.conf /path/to/app/lib/ /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre/lib/amd64/jli/libjli.so /usr/lib64/mysql/libmysqlclient.so.18 Both libs are 64-bit compiled: [user@user-me lib]$ file libxpherejava.so libxpherejava.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=82e1c673e732eb2d3770883b14facf3eff091243, not stripped [user@user-me lib]$ file /usr/lib64/mysql/libmysqlclient.so.18 /usr/lib64/mysql/libmysqlclient.so.18: symbolic link to libmysqlclient.so.18.0.0' [user@user-me lib]$ file /usr/lib64/mysql/libmysqlclient.so.18.0.0 /usr/lib64/mysql/libmysqlclient.so.18.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=79978c5f4fb259a5a146614e260ea0720dd31d3b, stripped A strace over the script that executes the "exec java" command yields this --> [URL] RELATED QUESTIONS JNI issue on Linux: cannot open shared object file --> Libs are 64-bit and LD_LIBRARY_PATH explicitly contains the path to the problematic lib [URL] --> pcap4j developer answer to a similar problem (solution applied to my environment. Error solved, but the current one arose right after) [URL] --> Post about capability granting flaws that could explain all this tweaking Does anyone know why is libmysqlclient.so.18 not being found?
|
UnsatisfiedLinkError exception when lib is explicitly declared Trying to execute my Java app yields a UnsatisfiedLinkError exception when the libmysqlclient.so.18 can not be found, even when it is explicitly declared in LD_LIBRARY_PATH , -Djava.library.path and /etc/ld.so.conf PROBLEM DESCRIPTION I am trying to make use of pcap4j ( [URL] ), a Java wrapper for libpcap, so I can sniff packets on my machine NIFs from a Java application. As libpcap requires superuser privileges to perform this task, I have to somehow give the non-privileged user executing this app the capacity of accessing the NIFs. The maintainer of pcap4j suggests granting capabilities CAP_NET_RAW and CAP_NET_ADMIN to the java command as follow: setcap cap_net_raw,cap_net_admin=eip /path/to/java Due to implementation limitations I am constrained as follow: Avoid giving sudo access to the non-privileged user due to security policies. The same reasoning could be applied to granting above mentioned capabilities to the java command (do not know if capability grantings are given per user/command pair), but, from my relatively scarce knowledge about security, the latter option looks like a more delimited permission granting method for what I want to achieve (solutions are welcome too should an alternative permission granting method looks more suitable for my purpose), and given that the pcapj4 developer, a likely more experienced professional, advices so, I have followed the capability granting path. User must be able to execute the app without being password-prompted Permission granting must be done only once, e.g. when creating the user for the first time. After granting CAP_NET_RAW and CAP_NET_ADMIN capabilites to the java command, the problem arose. I am getting the following exception when executing my app: Error creating entity java.lang.UnsatisfiedLinkError: /path/to/app/lib/libxpherejava.so: libmysqlclient.so.18: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1857) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) DETAILS JAVA: java-1.8.0-openjdk-1.8.0.171-8 OS: Linux user-me 3.10.0-862.6.3.el7.x86_64 #1 SMP Fri Jun 15 17:57:37 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux [Red Hat Enterprise Linux Server release 7.5 (Maipo)] LD_LIBRARY_PATH contains an explicit path to the not found library: LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/lwp:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre/lib/amd64/jli/libjli.so:/usr/lib64/mysql/libmysqlclient.so.18 export LD_LIBRARY_PATH LD_LIBRARY_PATH is passed to the JVM by making use of java.library.path : exec java -XshowSettings:properties -Djava.library.path=${LD_LIBRARY_PATH} -d64 ... The "-XshowSettings:properties" provides me with the following output when the java command is executed: java.library.path = /path/to/app/lib /path/to/app/lib/glib-2.0 /usr/lib/lwp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre/lib/amd64/jli/libjli.so /usr/lib64/mysql/libmysqlclient.so.18 where /usr/lib64/mysql/libmysqlclient.so.18 is a symlink to /usr/lib64/mysql/libmysqlclient.so.18.0.0 Seems like JVM (or whatever entity is requesting access to libmysqlclient.so.18 from libxpherejava.so ) does not find libmysqlclient.so.18 , even when its path is explicitly provided to the java.library.path and the file DOES exist. 
With LD_LIBRARY_PATH containing the path to libmysqlclient.so.18 (/usr/lib64/mysql/libmysqlclient.so.18), issuing ldd of libxpherejava.so yields that libmysqlclient.so.18 can not be found [user@user-me log]$ ldd /path/to/app/lib/libxpherejava.so linux-vdso.so.1 => (0x00007ffe1a73d000) libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007fd727df4000) libssl.so.10 => /lib64/libssl.so.10 (0x00007fd727b83000) libxphereS.so => /path/to/app/lib/libxphereS.so (0x00007fd727973000) libmysqlclient.so.18 => not found libz.so.1 => /lib64/libz.so.1 (0x00007fd72775d000) libnsl.so.1 => /lib64/libnsl.so.1 (0x00007fd727543000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fd727327000) libm.so.6 => /lib64/libm.so.6 (0x00007fd727025000) libglib-2.0.so.0 => /lib64/libglib-2.0.so.0 (0x00007fd726d11000) libc.so.6 => /lib64/libc.so.6 (0x00007fd726944000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fd726740000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007fd7264f3000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007fd72620b000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007fd726007000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007fd725dd4000) /lib64/ld-linux-x86-64.so.2 (0x00007fd728718000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fd725b72000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007fd725964000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007fd725760000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007fd725547000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fd725320000) This is the content of ld.so.conf : [user@user-me lib]$ cat /etc/ld.so.conf /path/to/app/lib/ /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/jre/lib/amd64/jli/libjli.so /usr/lib64/mysql/libmysqlclient.so.18 Both libs are 64-bit compiled: [user@user-me lib]$ file libxpherejava.so libxpherejava.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=82e1c673e732eb2d3770883b14facf3eff091243, not stripped [user@user-me lib]$ file /usr/lib64/mysql/libmysqlclient.so.18 /usr/lib64/mysql/libmysqlclient.so.18: symbolic link to libmysqlclient.so.18.0.0' [user@user-me lib]$ file /usr/lib64/mysql/libmysqlclient.so.18.0.0 /usr/lib64/mysql/libmysqlclient.so.18.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=79978c5f4fb259a5a146614e260ea0720dd31d3b, stripped A strace over the script that executes the "exec java" command yields this --> [URL] RELATED QUESTIONS JNI issue on Linux: cannot open shared object file --> Libs are 64-bit and LD_LIBRARY_PATH explicitly contains the path to the problematic lib [URL] --> pcap4j developer answer to a similar problem (solution applied to my environment. Error solved, but the current one arose right after) [URL] --> Post about capability granting flaws that could explain all this tweaking Does anyone know why is libmysqlclient.so.18 not being found?
|
rhel, java, shared-libraries
| 2
| 1,294
| 1
|
https://stackoverflow.com/questions/54407724/unsatisfiedlinkerror-exception-when-lib-is-explicitly-declared
|
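A plausible cause worth checking for the question above: once file capabilities are set on the java binary with setcap, the dynamic loader treats the process as running in secure-execution mode and ignores LD_LIBRARY_PATH, so the dependencies of libxpherejava.so are resolved only through that library's own RPATH/RUNPATH and the system ldconfig cache. -Djava.library.path only controls where System.loadLibrary finds the first .so, not its dependencies, and entries in /etc/ld.so.conf must be directories, not paths to individual .so files. A minimal sketch of a one-time, root-side workaround under those assumptions:

# does the JNI library carry its own RPATH/RUNPATH? (it probably does not)
readelf -d /path/to/app/lib/libxpherejava.so | grep -E 'RPATH|RUNPATH'

# register the MySQL client library directory with the loader cache (directories, not files)
echo '/usr/lib64/mysql' | sudo tee /etc/ld.so.conf.d/mysql-x86_64.conf
sudo ldconfig

# the dependency should now resolve even when LD_LIBRARY_PATH is ignored
ldd /path/to/app/lib/libxpherejava.so | grep libmysqlclient

If ldd still reports "not found" after ldconfig, the secure-execution theory is wrong for this system and the cause lies elsewhere.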
53,545,009
|
umask permissions for new files computing differently for GUI vs. ssh shell?
|
So I've been going through a really perplexing issue trying to decipher why my umask value is being applied differently depending on how I am creating new files in RHEL. My goal is to have new files created with 664 permissions so that my group which contains other users can also edit the file. I have tried editing: /etc/profile /etc/bashrc with: umask 002 I also tried editing: /etc/pam.d/common-session /etc/pam.d/sshd /etc/pam.d/login with: session optional pam_umask.so umask=002 When I create a new file in an SSH terminal using touch the permissions are perfect 664 with the group being able to edit the file. If I create a new file in a GUI editor such as Coda for Mac, the permissions on the file are 644. If I use that same Coda software and open up the ssh shell built in and touch a new file with the same account the permissions again are the correct 664. Am I missing something with how to correctly configure umask for all types of users regardless of how the file is being created? (interactive or not?) Edit: I got this fixed by finally realizing the files in the GUI were being created locally on the computer and then transferred with the bad permissions. I got it fixed by setting the rules in Coda to specify 664 for new files. Sorry!
|
umask permissions for new files computing differently for GUI vs. ssh shell? So I've been going through a really perplexing issue trying to decipher why my umask value is being applied differently depending on how I am creating new files in RHEL. My goal is to have new files created with 664 permissions so that my group which contains other users can also edit the file. I have tried editing: /etc/profile /etc/bashrc with: umask 002 I also tried editing: /etc/pam.d/common-session /etc/pam.d/sshd /etc/pam.d/login with: session optional pam_umask.so umask=002 When I create a new file in an SSH terminal using touch the permissions are perfect 664 with the group being able to edit the file. If I create a new file in a GUI editor such as Coda for Mac, the permissions on the file are 644. If I use that same Coda software and open up the ssh shell built in and touch a new file with the same account the permissions again are the correct 664. Am I missing something with how to correctly configure umask for all types of users regardless of how the file is being created? (interactive or not?) Edit: I got this fixed by finally realizing the files in the GUI were being created locally on the computer and then transferred with the bad permissions. I got it fixed by setting the rules in Coda to specify 664 for new files. Sorry!
|
permissions, rhel, pam, umask
| 2
| 183
| 0
|
https://stackoverflow.com/questions/53545009/umask-permissions-for-new-files-computing-differently-for-gui-vs-ssh-shell
|
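The edit at the end of the question (the GUI editor creates files locally and uploads them, so the server-side umask never applies) is easy to confirm from both vantage points; a small sketch, reusing the 002 value from the question and a placeholder user@host:

# interactive SSH session: /etc/profile and /etc/bashrc are read
umask                                            # expect 0002
touch via-ssh.txt; stat -c '%a %n' via-ssh.txt   # expect 664

# non-interactive SSH command: profile files may be skipped, pam_umask (if configured) still applies
ssh user@host 'umask; touch via-cmd.txt; stat -c "%a %n" via-cmd.txt'

# files that arrive over SFTP/SCP keep whatever mode the client sends, regardless of either value above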
53,190,992
|
Using HP Fortify SCA to scan Linux Kernel
|
I'm trying to use HP Fortify SCA to scan RHEL7.5 server kernel [linux-3.10.0-862.el7]. I'm on the RHEL workstation OS which is on virtualbox. In the working dir I'm doing: "sourceanalyzer -b mybuild touchless make" The kernel compiles using "make" by itself. The sourceanalyzer goes through some of the code but it seems to error out when it comes to: CC arch/x86/purgatory/purgatory.o Is Fortify capable of scanning the kernel? Do I need to use more complex commands to scan it? The output error segment is: touchless-script /home/sail/.fortify/sca18.1/build/myscan/build6382721854835965459/gcc called with args: -Wp,-MD,arch/x86/purgatory/.purgatory.o.d -nostdinc -isystem touchless-script /home/sail/.fortify/sca18.1/build/myscan/build6382721854835965459/gcc called with args: -print-file-name=include /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include -I./arch/x86/include -Iarch/x86/include/generated -Iinclude -I./arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I./include/uapi -Iinclude/generated/uapi -include ./include/linux/kconfig.h -D__KERNEL__ -fno-strict-aliasing -Wall -Wstrict-prototypes -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -c -MD -Os -mcmodel=large -m64 -DKBUILD_STR(s)=#s -DKBUILD_BASENAME=KBUILD_STR(purgatory) -DKBUILD_MODNAME=KBUILD_STR(purgatory) -c -o arch/x86/purgatory/.tmp_purgatory.o arch/x86/purgatory/purgatory.c /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include [warning]: File called not found [warning]: File with not found [warning]: File args: not found [warning]: File /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include not found gcc: error: called: No such file or directory gcc: error: with: No such file or directory gcc: error: args:: No such file or directory objdump: 'arch/x86/purgatory/.tmp_purgatory.o': No such file mv: cannot stat ‘arch/x86/purgatory/.tmp_purgatory.o’: No such file or directory make[1]: * [arch/x86/purgatory/purgatory.o] Error 1 make: * [archprepare] Error 2
|
Using HP Fortify SCA to scan Linux Kernel I'm trying to use HP Fortify SCA to scan RHEL7.5 server kernel [linux-3.10.0-862.el7]. I'm on the RHEL workstation OS which is on virtualbox. In the working dir I'm doing: "sourceanalyzer -b mybuild touchless make" The kernel compiles using "make" by itself. The sourceanalyzer goes through some of the code but it seems to error out when it comes to: CC arch/x86/purgatory/purgatory.o Is Fortify capable of scanning the kernel? Do I need to use more complex commands to scan it? The output error segment is: touchless-script /home/sail/.fortify/sca18.1/build/myscan/build6382721854835965459/gcc called with args: -Wp,-MD,arch/x86/purgatory/.purgatory.o.d -nostdinc -isystem touchless-script /home/sail/.fortify/sca18.1/build/myscan/build6382721854835965459/gcc called with args: -print-file-name=include /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include -I./arch/x86/include -Iarch/x86/include/generated -Iinclude -I./arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I./include/uapi -Iinclude/generated/uapi -include ./include/linux/kconfig.h -D__KERNEL__ -fno-strict-aliasing -Wall -Wstrict-prototypes -fno-zero-initialized-in-bss -fno-builtin -ffreestanding -c -MD -Os -mcmodel=large -m64 -DKBUILD_STR(s)=#s -DKBUILD_BASENAME=KBUILD_STR(purgatory) -DKBUILD_MODNAME=KBUILD_STR(purgatory) -c -o arch/x86/purgatory/.tmp_purgatory.o arch/x86/purgatory/purgatory.c /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include [warning]: File called not found [warning]: File with not found [warning]: File args: not found [warning]: File /usr/lib/gcc/x86_64-redhat-linux/4.8.5/include not found gcc: error: called: No such file or directory gcc: error: with: No such file or directory gcc: error: args:: No such file or directory objdump: 'arch/x86/purgatory/.tmp_purgatory.o': No such file mv: cannot stat ‘arch/x86/purgatory/.tmp_purgatory.o’: No such file or directory make[1]: * [arch/x86/purgatory/purgatory.o] Error 1 make: * [archprepare] Error 2
|
c, linux, gcc, rhel, fortify
| 2
| 1,017
| 1
|
https://stackoverflow.com/questions/53190992/using-hp-fortify-sca-to-scan-linux-kernel
|
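The "gcc: error: called: No such file or directory" lines suggest the touchless wrapper is printing its own "called with args:" banner on stdout while the kernel Makefile is capturing the output of gcc -print-file-name=include, so the banner text gets spliced into later compiler command lines. A hedged sketch of the ordinary (non-touchless) build integration, which avoids that wrapper; the build ID and output file are placeholders, and whether SCA copes with every kernel-specific GCC construct is not guaranteed:

# discard any partial translation for this build ID
sourceanalyzer -b mybuild -clean

# translate by wrapping the normal kernel build (no 'touchless' keyword)
make clean
sourceanalyzer -b mybuild make -j1

# analysis phase: produce an FPR to open in Audit Workbench
sourceanalyzer -b mybuild -scan -f kernel-scan.fpr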
53,083,926
|
Relation between RHEL_RELEASE_VERSION-macro and /etc/redhat-release
|
There is a special macro RHEL_RELEASE_CODE that helps with preprocessing when building modules, e.g. for the CentOS kernel. Should the value of that macro match the information in /etc/redhat-release or lsb_release ? In other words, if I see e.g. CentOS Linux release 7.3.1611 (Core) inside /etc/redhat-release , does that mean that when I build some module, RHEL_RELEASE_CODE will be equal to RHEL_RELEASE_VERSION(7,3) ?
|
Relation between RHEL_RELEASE_VERSION-macro and /etc/redhat-release There is a special macro RHEL_RELEASE_CODE that helps with preprocessing when building modules, e.g. for the CentOS kernel. Should the value of that macro match the information in /etc/redhat-release or lsb_release ? In other words, if I see e.g. CentOS Linux release 7.3.1611 (Core) inside /etc/redhat-release , does that mean that when I build some module, RHEL_RELEASE_CODE will be equal to RHEL_RELEASE_VERSION(7,3) ?
|
c, linux, linux-kernel, centos, rhel
| 2
| 659
| 0
|
https://stackoverflow.com/questions/53083926/relation-between-rhel-release-version-macro-and-etc-redhat-release
|
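An empirical way to check the relationship asked about above, assuming the kernel-headers / kernel-devel packages matching the running kernel are installed; the header path is where RHEL-family systems usually put it and may differ:

# what the release file claims
cat /etc/redhat-release

# what the build-time macros say (RHEL adds them to the packaged kernel headers)
grep -E 'RHEL_(MAJOR|MINOR|RELEASE_CODE|RELEASE_VERSION)' /usr/include/linux/version.h
# kernel-devel copy, the one actually used when building out-of-tree modules:
grep -rE 'RHEL_RELEASE_CODE' /usr/src/kernels/$(uname -r)/include/ 2>/dev/null | head

# RHEL_RELEASE_VERSION(a,b) is normally ((a) << 8) + (b), so 7,3 corresponds to 1795;
# compare that value against the RHEL_RELEASE_CODE printed above

Note that the macro reflects the kernel-devel headers the module is built against, not /etc/redhat-release directly; the two usually agree on CentOS, but they can diverge after a partial update.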
47,661,173
|
JavaFX from RHEL
|
Previously I was developing applications with JavaFX using the Oracle Java SE shipped by Red Hat, but it seems it is no longer offered, as described in [URL] . However, the OpenJDK coming from the rhel-7-server-rpms repository does not come with JavaFX. Is there a better way than installing packages from outside the repositories provided by Red Hat? I don't want to test each environment with a self-compiled OpenJFX binary one by one.
|
JavaFX from RHEL Previously I was developing applications with JavaFX using the Oracle Java SE shipped by Red Hat, but it seems it is no longer offered, as described in [URL] . However, the OpenJDK coming from the rhel-7-server-rpms repository does not come with JavaFX. Is there a better way than installing packages from outside the repositories provided by Red Hat? I don't want to test each environment with a self-compiled OpenJFX binary one by one.
|
rhel, rhel7, redhat-openjdk
| 2
| 4,480
| 1
|
https://stackoverflow.com/questions/47661173/javafx-from-rhel
|
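A quick check of what the installed runtimes actually ship, before deciding between external packages and a self-compiled OpenJFX (jfxrt.jar is where JavaFX lives in Java 8-era JDKs; the paths below are typical RHEL locations, not guaranteed):

# which java is active, and from which JVM directory
readlink -f "$(which java)"
alternatives --list | grep -i java

# does any installed JVM bundle JavaFX? (no output means none do)
find /usr/lib/jvm -name 'jfxrt.jar' 2>/dev/null

If nothing turns up, the realistic choices at the time were a JDK distribution that bundles JavaFX or an OpenJFX build layered on top of the Red Hat OpenJDK; neither is delivered through the rhel-7-server-rpms channel.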
44,421,312
|
Error installing php-magickwand on RHEL (conflict with php-common)
|
I am trying to install ImageMagic (for php7) on RHEL7 hosted on AWS. After running pecl install imagick I end up with the following error configure: error: not found. Please provide a path to MagickWand-config or Wand-config program. Thus I searched for magickwand which returned a possible match php-magickwand.x86_64 . Now comes the ultimate issue. When I try to install php-magickwand via yum, I receive the following error (check the 3rd last line). Loaded plugins: amazon-id, rhui-lb, search-disabled-repos Resolving Dependencies --> Running transaction check ---> Package php-magickwand.x86_64 0:1.0.9-6.el7 will be installed --> Processing Dependency: php(zend-abi) = 20100525-64 for package: php-magickwand-1.0.9-6.el7.x86_64 --> Processing Dependency: php(api) = 20100412-64 for package: php-magickwand-1.0.9-6.el7.x86_64 --> Running transaction check ---> Package php-common.x86_64 0:5.4.16-42.el7 will be installed --> Processing Dependency: libzip.so.2()(64bit) for package: php-common-5.4.16-42.el7.x86_64 --> Running transaction check ---> Package libzip.x86_64 0:0.10.1-8.el7 will be installed --> Processing Conflict: php70w-common-7.0.19-1.w7.x86_64 conflicts php-common < 7.0 --> Finished Dependency Resolution Error: php70w-common conflicts with php-common-5.4.16-42.el7.x86_64 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I have searched for php-common-5.4 on my machine but there is nothing like it. All the packages I have is related to php7 (cause I never installed php55 on this machine). Can someone please shed some light on where can I dig more? I at least Googled for an hour, no one seemed to fall into this trap.
|
Error on installing php-magicwand on RHEL (conflict between php-common) I am trying to install ImageMagic (for php7) on RHEL7 hosted on AWS. After running pecl install imagick I end up with the following error configure: error: not found. Please provide a path to MagickWand-config or Wand-config program. Thus I searched for magickwand which returned a possible match php-magickwand.x86_64 . Now comes the ultimate issue. When I try to install php-magickwand via yum, I receive the following error (check the 3rd last line). Loaded plugins: amazon-id, rhui-lb, search-disabled-repos Resolving Dependencies --> Running transaction check ---> Package php-magickwand.x86_64 0:1.0.9-6.el7 will be installed --> Processing Dependency: php(zend-abi) = 20100525-64 for package: php-magickwand-1.0.9-6.el7.x86_64 --> Processing Dependency: php(api) = 20100412-64 for package: php-magickwand-1.0.9-6.el7.x86_64 --> Running transaction check ---> Package php-common.x86_64 0:5.4.16-42.el7 will be installed --> Processing Dependency: libzip.so.2()(64bit) for package: php-common-5.4.16-42.el7.x86_64 --> Running transaction check ---> Package libzip.x86_64 0:0.10.1-8.el7 will be installed --> Processing Conflict: php70w-common-7.0.19-1.w7.x86_64 conflicts php-common < 7.0 --> Finished Dependency Resolution Error: php70w-common conflicts with php-common-5.4.16-42.el7.x86_64 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I have searched for php-common-5.4 on my machine but there is nothing like it. All the packages I have is related to php7 (cause I never installed php55 on this machine). Can someone please shed some light on where can I dig more? I at least Googled for an hour, no one seemed to fall into this trap.
|
imagemagick, php-7, rhel
| 2
| 1,183
| 1
|
https://stackoverflow.com/questions/44421312/error-on-installing-php-magicwand-on-rhel-conflict-between-php-common
|
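The pecl failure only means that MagickWand-config and the ImageMagick headers are missing; on EL7 those normally come from ImageMagick-devel, which avoids pulling in the old php-common that php-magickwand depends on. A sketch under those assumptions (package and ini paths are the stock RHEL/EPEL ones and may differ for other ImageMagick builds; check php --ini for the real scan directory):

# provide MagickWand-config plus headers, then build the PHP 7 extension via PECL
sudo yum install -y ImageMagick ImageMagick-devel
sudo pecl install imagick

# enable and verify for the php70w packages
echo 'extension=imagick.so' | sudo tee /etc/php.d/40-imagick.ini
php -m | grep -i imagick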
42,090,461
|
phpmyadmin forbidden in Red Hat Enterprise Linux running on Amazon Web Services EC2
|
I am unable to access phpmyadmin in my server. I'm getting Forbidden. Os version : Red Hat Enterprise Linux Server release 7.3 (Maipo) I have checked answers in stackoverflow but, I'm unable to resolve my issue. I installed phpmyadmin using command line. running phpmyadmin Version information: 4.5.4.1 I have modified values in /etc/httpd/conf.d/phpMyAdmin.conf My server is running on Amazon Web Services free tier. here is the content present in phpMyAdmin.conf Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin <Directory /usr/share/phpMyAdmin/> AddDefaultCharset UTF-8 # <IfModule mod_authz_core.c> # Apache 2.4 # <RequireAny> # Require ip 127.0.0.1 # Require ip ::1 # </RequireAny> # </IfModule> # <IfModule !mod_authz_core.c> # Apache 2.2 Order Deny,Allow # Deny from All Allow from all # Allow from 127.0.0.1 # Allow from ::1 # AllowOverride all # Require all granted # </IfModule> </Directory> <Directory /usr/share/phpMyAdmin/setup/> <IfModule mod_authz_core.c> # Apache 2.4 <RequireAny> Require ip ::1 </RequireAny> </IfModule> <IfModule !mod_authz_core.c> # Apache 2.2 Order Deny,Allow Deny from All Allow from ::1 </IfModule> </Directory> <Directory /usr/share/phpMyAdmin/libraries/> Order Deny,Allow Deny from All Allow from None </Directory> <Directory /usr/share/phpMyAdmin/setup/lib/> Order Deny,Allow Deny from All Allow from None </Directory> <Directory /usr/share/phpMyAdmin/setup/frames/> Order Deny,Allow Deny from All Allow from None </Directory>
|
phpmyadmin forbidden in Red Hat Enterprise Linux running on Amazon Web Services EC2 I am unable to access phpmyadmin in my server. I'm getting Forbidden. Os version : Red Hat Enterprise Linux Server release 7.3 (Maipo) I have checked answers in stackoverflow but, I'm unable to resolve my issue. I installed phpmyadmin using command line. running phpmyadmin Version information: 4.5.4.1 I have modified values in /etc/httpd/conf.d/phpMyAdmin.conf My server is running on Amazon Web Services free tier. here is the content present in phpMyAdmin.conf Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin <Directory /usr/share/phpMyAdmin/> AddDefaultCharset UTF-8 # <IfModule mod_authz_core.c> # Apache 2.4 # <RequireAny> # Require ip 127.0.0.1 # Require ip ::1 # </RequireAny> # </IfModule> # <IfModule !mod_authz_core.c> # Apache 2.2 Order Deny,Allow # Deny from All Allow from all # Allow from 127.0.0.1 # Allow from ::1 # AllowOverride all # Require all granted # </IfModule> </Directory> <Directory /usr/share/phpMyAdmin/setup/> <IfModule mod_authz_core.c> # Apache 2.4 <RequireAny> Require ip ::1 </RequireAny> </IfModule> <IfModule !mod_authz_core.c> # Apache 2.2 Order Deny,Allow Deny from All Allow from ::1 </IfModule> </Directory> <Directory /usr/share/phpMyAdmin/libraries/> Order Deny,Allow Deny from All Allow from None </Directory> <Directory /usr/share/phpMyAdmin/setup/lib/> Order Deny,Allow Deny from All Allow from None </Directory> <Directory /usr/share/phpMyAdmin/setup/frames/> Order Deny,Allow Deny from All Allow from None </Directory>
|
amazon-web-services, amazon-ec2, phpmyadmin, rhel
| 2
| 675
| 1
|
https://stackoverflow.com/questions/42090461/phpmyadmin-forbidden-in-red-hat-enterprise-linux-running-on-amazon-web-services
|
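On RHEL 7 httpd is Apache 2.4, so a 403 here usually comes down to which Require directives are actually in effect, a second config file shadowing the edited one, or SELinux. A cautious checklist, with the conf path taken from the question; the "Require all granted" lines are shown only as an assumption to test with, since they open phpMyAdmin to everyone and should be tightened again afterwards:

# confirm the config parses and which files reference phpMyAdmin
httpd -t
grep -rl phpMyAdmin /etc/httpd/conf.d/

# Apache 2.4-native access control for /usr/share/phpMyAdmin/ in phpMyAdmin.conf:
#   <Directory /usr/share/phpMyAdmin/>
#       Require all granted
#   </Directory>
sudo systemctl restart httpd

# check whether httpd or SELinux produced the 403
sudo tail -n 20 /var/log/httpd/error_log
getenforce
sudo ausearch -m AVC -ts recent 2>/dev/null | tail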
40,074,748
|
MSSQL with nginx on AWS EC2 running RHEL
|
I want to connect my Laravel application to a SQL Server instance. Locally it all works perfectly. However, following the same steps on AWS EC2 does not give the required result; instead I am stuck with a 'Could not find driver' exception when launching the application. Here are a few parameters to consider (and yes, I have tried all the solutions I could find): yum is used on the EC2 environment; nginx, not Apache, is the web server; the PHP version is 5.6.11; the RHEL version is 7.1. I need your help, please.
|
MSSQL with nginx on AWS EC2 running RHEL I want to connect my Laravel application to a SQL Server instance. Locally it all works perfectly. However, following the same steps on AWS EC2 does not give the required result; instead I am stuck with a 'Could not find driver' exception when launching the application. Here are a few parameters to consider (and yes, I have tried all the solutions I could find): yum is used on the EC2 environment; nginx, not Apache, is the web server; the PHP version is 5.6.11; the RHEL version is 7.1. I need your help, please.
|
sql-server, nginx, amazon-ec2, yum, rhel
| 2
| 205
| 0
|
https://stackoverflow.com/questions/40074748/mssql-with-nginx-on-aws-ec2-having-rhel
|
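"Could not find driver" is PDO reporting that no SQL Server driver is loaded in that PHP build, which makes this a PHP/driver question rather than an nginx one. A diagnostic sketch, assuming PHP-FPM behind nginx and PHP 5.6 as stated; no package name is asserted because it depends on which repository the PHP 5.6 build came from:

# which PDO drivers does the CLI PHP see?
php -m | grep -iE 'pdo|dblib|sqlsrv|odbc'
php -r 'print_r(PDO::getAvailableDrivers());'

# the FPM pool behind nginx can scan different ini files than the CLI; compare them
php --ini
# (or put a temporary <?php phpinfo(); page behind nginx and read its "PDO drivers" row)

On EL7 with PHP 5.6 the usual route to SQL Server was FreeTDS with the pdo_dblib extension (Microsoft's sqlsrv/pdo_sqlsrv Linux drivers target newer PHP versions); whichever driver is chosen has to appear in the list above before the exception will go away.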
37,483,938
|
Is it possible to make iptables on RHEL idempotent with Chef?
|
I want to enforce policy on my system's iptables rules so that if someone manually changes iptables, Chef brings it back to the proper state the next time it runs. Is this just a dream? One community cookbook, 'firewall', states that it cannot detect when someone else adds a firewall rule. EDIT: To make things clearer: I am asking how to write a Chef/Ruby resource that is idempotent for iptables.
|
Is it possible to make iptables on RHEL idempotent with Chef? I want to enforce policy on my system's iptables rules so that if someone manually changes iptables, Chef brings it back to the proper state the next time it runs. Is this just a dream? One community cookbook, 'firewall', states that it cannot detect when someone else adds a firewall rule. EDIT: To make things clearer: I am asking how to write a Chef/Ruby resource that is idempotent for iptables.
|
linux, configuration, centos, chef-infra, rhel
| 2
| 481
| 1
|
https://stackoverflow.com/questions/37483938/is-it-possible-to-make-iptables-on-rhel-idempotent-with-chef
|
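A common way to get the behaviour asked for above: keep the complete ruleset in a Chef-managed file and apply it only when the live rules differ, so a converged run is a no-op and any hand-made change is reverted on the next run. Sketched below in shell form; in a recipe this would typically be a template resource rendering the rules file plus an execute resource whose not_if guard performs the comparison. File paths are placeholders:

# canonical ruleset rendered by Chef (template resource)
RULES=/etc/sysconfig/iptables.chef

# normalize live and desired rules: drop comments and the volatile packet counters
live()   { iptables-save | grep -v '^#' | sed 's/\[[0-9]*:[0-9]*\]/[0:0]/'; }
wanted() { grep -v '^#' "$RULES" | sed 's/\[[0-9]*:[0-9]*\]/[0:0]/'; }

# apply only when they differ - this is exactly what the execute resource's guard would test
if ! diff -q <(live) <(wanted) >/dev/null; then
    iptables-restore < "$RULES"
fi

With the iptables-services package on RHEL, the same idea works against /etc/sysconfig/iptables itself, letting the system's own service load the file at boot while Chef keeps both the file and the live rules in line.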