Dataset columns (type and observed range):
    question_id: int64 (82.3k to 79.7M)
    title_clean: string (length 15 to 158)
    body_clean: string (length 62 to 28.5k)
    full_text: string (length 95 to 28.5k)
    tags: string (length 4 to 80)
    score: int64 (0 to 1.15k)
    view_count: int64 (22 to 1.62M)
    answer_count: int64 (0 to 30)
    link: string (length 58 to 125)
52,049,055
How to cleanly and safely uninstall mysql client on RHEL server?
On a RHEL server, if I remember correctly, I installed the mariadb client by running: yum install mariadb Now I want to safely and cleanly uninstall it; how do I do that? I know this task might be dangerous because uninstalling software cleanly requires uninstalling its dependencies as well, but other software might still be using those dependencies, and it can be hard to tell which dependencies are still being used and which aren't. I don't want to mess things up on the server, so I am asking here first for the correct procedure.
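A minimal sketch of one cautious way to approach this (the package name and transaction ID are assumptions; verify everything on your own system before removing anything):

    # Confirm the package and find the transaction that installed it
    rpm -q mariadb
    yum history list mariadb
    # Preview what a removal would drag along; --assumeno answers "no", so nothing is removed
    yum remove mariadb --assumeno
    # If the original install was a single transaction, undo it, dependencies included
    yum history undo <transaction-id>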
mariadb, uninstallation, yum, rhel
1
1,455
1
https://stackoverflow.com/questions/52049055/how-to-cleanly-and-safely-uninstall-mysql-client-on-rhel-server
51,579,838
Docker run rhscl/httpd-24-rhel7 error
After executing this command I got this error: Unable to find image 'rhscl/httpd-24-rhel7:latest' locally C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: pull access denied for rhscl/httpd-24-rhel7, repository does not exist or may require 'docker login'. See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'. I tried docker login but it does not help. Thanks.
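A sketch of the most likely fix: there is no rhscl/httpd-24-rhel7 repository on Docker Hub (the default registry), but the image is published on Red Hat's registry, so pulling it fully qualified may work without any login:

    docker pull registry.access.redhat.com/rhscl/httpd-24-rhel7
    docker run registry.access.redhat.com/rhscl/httpd-24-rhel7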
docker, rhel
1
501
1
https://stackoverflow.com/questions/51579838/docker-run-rhscl-httpd-24-rhel7-error
51,521,796
Can perl v5.28.0 compiled from src (with gcc 4.8.5) run on RHEL 5.5?
My question is whether it would be possible to run perl 5.28.0 compiled from source (with GCC 4.8.5 on CentOS 7) on RHEL 5.5 (Tikanga), where the GCC version is lower and so are the other libs like libc, glibc, etc. Our production environment is running a very old perl version (5.8.8) and, due to security concerns, it is under heavy lockdown, i.e. most of our servers lack make, gcc and related tools, and there is no root access available to anyone. I was wondering if it would be possible to compile perl from source, i.e. the latest 5.28.0, with GCC 4.8.5 AND try to use this compiled version on our production servers (with GCC 4.8.2). This will save me tonnes of headaches with slow bureaucracy and I can get going with my project with the new tools. I have not been able to find any discussion or hint about this subject. Can anyone shed some light? Thank you in advance. Update after 2 days: It seems Perl 5.28 compiled on RHEL 7 does not work on RHEL 5.5; you will have to compile it on RHEL 5.5 and make it relocatable for further usage on any server. So: (1) I downloaded the RHEL 5.5 and CentOS 5.5 ISOs and ran into bootable-ISO issues; I couldn't make a suitable bootable disk for either RHEL 5.5 or CentOS 5.5. (2) The RHEL 5.5 ISO was a single DVD image, and upon running file rhel5.5.iso at the command prompt it showed as bootable. I tried UNetbootin, the Rufus ISO creator and the dd command, created boot media and tried them all one by one, but couldn't get it to show a boot menu; I tried FAT and NTFS filesystems while making the boot disk. Stuck here now. (3) The CentOS 5.5 ISO came in 8 pieces of 600 MB files. I had to create a single ISO image out of them, found an online procedure to do it, and made one ISO file. I got a boot menu and it looked like it worked, but then it hung on some sort of source media check test and couldn't proceed further. I found an article with a related fix, that you imprint an md5sum on the ISO and it should work, but it didn't. (4) I just now found something on grokbase that mentions a new technique which could take me forward from the point of failure mentioned in point no. 3 above.
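For reference, a sketch of a relocatable build done on the oldest target OS itself, plus a way to see which glibc a binary actually demands (the Configure flags are real perl options; the paths are assumptions):

    # Build on RHEL 5.5 so the binary links against its glibc, and make @INC relocatable
    ./Configure -des -Dprefix=/opt/perl-5.28.0 -Duserelocatableinc
    make && make test && make install
    # Highest glibc symbol version a binary requires (run against the RHEL7-built perl to see why it fails)
    objdump -T /opt/perl-5.28.0/bin/perl | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -1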
gcc, redhat, rhel, perl
1
347
3
https://stackoverflow.com/questions/51521796/can-compiled-perl-v5-28-0-from-src-with-gcc-4-8-5-run-on-rhel-5-5
51,422,768
Python raw socket frames never received
I am using a RHEL6 computer and I am trying to communicate with a Windows XP computer via RAW sockets. When I receive a specific frame on my RHEL computer, a Python 2 script using RAW sockets processes the frame and changes the following fields before sending it to the Windows computer: Dest MAC, Dest IP, IP ID, Checksum. The packet arrives on my Windows XP computer, as I see the packet in Wireshark, but it never reaches the application layer, as the software that needs the packet doesn't react. This is how I create the sent packet: import socket, binascii, optparse s=socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(3)) s.bind(('eth1',0)) while True: result = s.recv(65535) if binascii.hexlify(result[30:34]).decode() == "<WANTED FRAME IP>": result2 = "<DEST_MAC>".decode("hex") + result[6:18] + "<IP_ID>".decode("hex") + result[20:24] + "<CHECKSUM>".decode("hex") + result[26:30] + "<DEST_IP>".decode("hex") + result[34:] s.send(result2) When I try using a "classic" socket, the target software correctly receives the packet, but that's not the behavior I want, as I have to use RAW sockets to send them. I tried sending other simple UDP packets with the same code and got the same result: the packet is correctly seen in Wireshark but never reaches the application layer on my Windows XP. Any idea why my RAW socket packets are not correctly processed by the target?
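One way to narrow this down, sketched here: Wireshark captures before the receiving stack validates anything, so a frame with a bad IP/UDP checksum still shows up in the capture yet is silently dropped before the application. tcpdump can flag the checksums of the patched frames (interface name taken from the question):

    # -vvv makes tcpdump verify checksums; look for "bad cksum" / "bad udp cksum" on the rewritten frames
    sudo tcpdump -i eth1 -vvv -n -l udp | grep -i cksum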
python, windows-xp, wireshark, rhel, raw-sockets
1
737
1
https://stackoverflow.com/questions/51422768/python-raw-socket-frames-never-received
51,398,395
RHEL unable to connect MySQL database via PHP code
I have been trying to find the issue but have failed to spot it so far on my RHEL Server 7.5 (Maipo). I have a remote database instance (RDS instance) on my AWS, which is in the same public subnet, and it can be accessed and connected to if I run mysql -h <remote-db-hostname> -u <username> -p in the above-mentioned instance's terminal. Moreover, if I run sudo telnet <remote-db-hostname> 3306 it returns success and says: "connected to xxxx.xxx. host". Installed yum packages: php-mysql.x86_64 (5.4.16-45.el7) mariadb.x86_64 (1:5.5.56-2.el7) mariadb-libs.x86_64 (1:5.5.56-2.el7) httpd.x86_64 (2.4.6-80.el7_5.1) httpd-tools.x86_64 (2.4.6-80.el7_5.1) PHP connection doesn't work But when I try to connect via simple PHP code, it doesn't work. It says: Database connection error (2): Could not connect to MySQL. or Could not connect host: Can't connect to MySQL server on 'xxxx.eu-west-2.rds.amazonaws.com' (13) I have tried to use the host name both ways, with the port and without the port, but no success. Test Connection File: $host = 'remote-db-hostname:3306'; // of course using the correct hostname $user = 'xxxxx'; $pswd = 'xxxxx'; $link = mysql_connect($host, $user, $pswd); if (!$link) { die('Could not connect host: ' . mysql_error()); } mysql_select_db('my-db-name', $link) or die('could not connect to the specified database'); mysql_close($link); Here is my /etc/my.cnf file: [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 # Settings user and group are ignored when systemd is used. # If you need to run mysqld under a different user or group, # customize your systemd unit file for mariadb according to the # instructions in [URL] [mysqld_safe] log-error=/var/log/mariadb/mariadb.log pid-file=/var/run/mariadb/mariadb.pid # # include all files from the config directory # !includedir /etc/my.cnf.d Note: There are two files in /etc/my.cnf.d ; 1) client.cnf 2) mysql-clients.cnf client.cnf # These two groups are read by the client library # Use it for options that affect all clients, but not the server [client] # This group is not read by mysql client library, # If you use the same .cnf file for MySQL and MariaDB, # use it for MariaDB-only client options [client-mariadb] mysql-clients.cnf # These groups are read by MariaDB command-line tools # Use it for options that affect only one utility # [mysql] [mysql_upgrade] [mysqladmin] [mysqlbinlog] [mysqlcheck] [mysqldump] [mysqlimport] [mysqlshow] [mysqlslap] /etc/selinux/config : # This file controls the state of SELinux on the system. # SELINUX= can take one of these three values: # enforcing - SELinux security policy is enforced. # permissive - SELinux prints warnings instead of enforcing. # disabled - No SELinux policy is loaded. SELINUX=enforcing # SELINUXTYPE= can take one of three values: # targeted - Targeted processes are protected, # minimum - Modification of targeted policy. Only selected processes are protected. # mls - Multi Level Security protection. SELINUXTYPE=targeted
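Worth noting as a sketch: error (13) is EACCES, and with SELINUX=enforcing the usual cause is that httpd is denied outbound database connections even though the same connection works from the CLI; the standard RHEL 7 booleans below address exactly that (whether it is the culprit here is an assumption):

    getsebool httpd_can_network_connect_db
    sudo setsebool -P httpd_can_network_connect_db on   # persistent
    # broader alternative if the app makes other outbound connections too
    sudo setsebool -P httpd_can_network_connect on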
php, mysql, apache, amazon-web-services, rhel
1
538
0
https://stackoverflow.com/questions/51398395/rhel-unable-to-connect-mysql-database-via-php-code
50,717,727
move_uploaded_file() permission denied even though permissions look correct
I am moving uploaded files to a directory inside the html folder (RHEL with Apache) using move_uploaded_file(). If I move to www/uploads this works, but if I use www/html/bla/documents I get a permission error. ...failed to open stream: Permission denied ... ...Unable to move ... File: ‘../../uploads/’ Size: 4096 Blocks: 8 IO Block: 4096 directory Device: 802h/2050d Inode: 2242820 Links: 2 Access: (0777/drwxrwxrwx) Uid: ( 1002/developer) Gid: ( 1002/developer) Context: unconfined_u:object_r:httpd_sys_rw_content_t:s0 Access: 2018-06-06 07:40:27.662812092 +0000 Modify: 2018-06-06 07:42:00.868583357 +0000 Change: 2018-06-06 07:42:00.868583357 +0000 Birth: - File: ‘documents’ Size: 6 Blocks: 0 IO Block: 4096 directory Device: 802h/2050d Inode: 35485731 Links: 2 Access: (0775/drwxrwxr-x) Uid: ( 1002/developer) Gid: ( 1002/developer) Context: unconfined_u:object_r:httpd_sys_content_t:s0 Access: 2018-06-06 09:54:08.855892855 +0000 Modify: 2018-06-06 09:54:07.803861541 +0000 Change: 2018-06-06 09:54:07.803861541 +0000 Birth: - [developer@vm-pomit01 pomit]$ groups apache apache : apache developer [developer@vm-pomit01 pomit]$ groups root root : root developer What am I missing here in terms of permissions?
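The stat output itself suggests a sketch of the answer: the working directory carries httpd_sys_rw_content_t while documents carries httpd_sys_content_t, which SELinux treats as read-only for Apache regardless of the 0775 mode bits. A possible relabel (the absolute path is an assumption based on the relative paths shown):

    # Quick, non-persistent test
    sudo chcon -t httpd_sys_rw_content_t /var/www/html/bla/documents
    # Persistent rule plus apply, so the label survives future restorecon runs
    sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/bla/documents(/.*)?'
    sudo restorecon -Rv /var/www/html/bla/documents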
php, apache, rhel
1
52
0
https://stackoverflow.com/questions/50717727/move-uploaded-file-permission-denied-even-though-permissions-look-correct
50,445,540
xsltproc merging xml files not working
Hi, I have two XML files to merge under the condition below: copy all the existing nodes of the new file and then merge in the old file's values. For example: File abc.xml <?xml version="1.0"?> <schedule> <Item Id="2"> <measurements> <measurement>Alpha</measurement> </measurements> </Item> <Item Id="9"> <measurements> <measurement>Gamma</measurement> </measurements> </Item> </schedule> File xyz.xml <?xml version="1.0"?> <schedule> <Item Id="1"> <measurements> <measurement>Alpha</measurement> </measurements> </Item> <Item Id="4"> <measurements> <measurement>Beta</measurement> </measurements> </Item> </schedule> xslt logic file: logic.xslt <xsl:stylesheet version="1.0" xmlns:xsl="[URL] <xsl:output method="xml" version="1.0" encoding="UTF-8" standalone="no" indent="yes"/> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()" /> </xsl:copy> </xsl:template> <xsl:template match="Item"> <xsl:variable name="match" select="document('./abc.xml')/schedule/Item[measurements/measurement=current()/measurements/measurement]"/> <xsl:choose> <xsl:when test="$match"> <xsl:copy-of select="$match"/> </xsl:when> </xsl:choose> </xsl:template> </xsl:stylesheet> Command used: xsltproc logic.xslt xyz.xml > output.xml Expected output: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <schedule> <Item Id="2"> <measurements> <measurement>Alpha</measurement> </measurements> </Item> <Item Id="4"> <measurements> <measurement>Beta</measurement> </measurements> </Item> <Item Id="9"> <measurements> <measurement>Gamma</measurement> </measurements> </Item> </schedule> But the actual output is different from the expected one, as follows: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <schedule> <Item Id="2"> <measurements> <measurement>Alpha</measurement> </measurements> </Item> <Item Id="9"> <measurements> <measurement>Gamma</measurement> </measurements> </Item> </schedule> It is missing the nodes from the new XML file.
xml, linux, xslt, merge, rhel
1
474
1
https://stackoverflow.com/questions/50445540/xsltproc-merging-xml-files-not-working
50,401,857
git add/stat extremely slow on RHEL
We have a large (~15 GB, ~40K files) and old (~5 years with daily updates) git repository full of media content. Recently the RHEL users started to complain that it takes a few minutes to perform routine operations such as add, stat and push. At the same time, on Ubuntu we don't encounter any problem. We had a similar issue a year ago; that time the cause was that a few very large (>500 MB each) files had been added, but now that's not the case. One note that may be important: the RHEL users use an old RHEL 6 with official packages only. Could you advise how to resolve the issue described?
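A sketch of what one might measure and tune on the RHEL 6 clients; RHEL 6 ships git 1.7.x, so modern features are unavailable, and while the settings below exist in that version line, treat them as a starting point rather than a known fix:

    time git status                      # establish a baseline
    git config core.preloadindex true    # stat() the index in parallel threads
    git count-objects -v                 # many loose objects => repacking may help
    git gc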
git, rhel
1
224
1
https://stackoverflow.com/questions/50401857/git-add-stat-extremely-slow-on-rhel
49,956,554
java processbuilder terminal width
So I have a shell script which, when executed, runs an executable that displays lines as wide as your terminal width. When I execute the script using ProcessBuilder, it returns only the first 80 characters. I've tried a variety of things, like adding stty rows 50 cols 132 to the script, or trying to set some environment variables in ProcessBuilder, but can't find a way to change the terminal width of the process. Running this on RHEL. ProcessBuilder p = new ProcessBuilder(new String[] {"/bin/ksh", "myscript.ksh"}); Process p2 = p.start(); BufferedReader br = new BufferedReader(new InputStreamReader(p2.getInputStream())); String line; StringBuffer output = new StringBuffer(); while ((line = br.readLine()) != null) { System.out.println(line); }
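For what it's worth, a sketch of the common workaround: a ProcessBuilder child has no controlling terminal, so stty has nothing to operate on and many tools default to 80 columns; some of them consult the COLUMNS environment variable instead (whether this particular executable honors it is an assumption):

    # Inside myscript.ksh, before invoking the width-sensitive executable
    export COLUMNS=132
    stty cols 132 2>/dev/null   # harmless no-op without a TTY; COLUMNS is the real fallback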
java, ksh, processbuilder, rhel
1
312
1
https://stackoverflow.com/questions/49956554/java-processbuilder-terminal-width
49,413,922
having problems making and installing gcc
I am having trouble installing gcc. Given the information below, what am I doing wrong? From $HOME/gcc on a Linux computer that I do not have root access to, I run the following: $ wget [URL] $ tar xvf gcc-7.3.0.tar.gz $ cd gcc-7.3.0 $ ./contrib/download_prerequisites gmp-6.1.0.tar.bz2: OK mpfr-3.1.4.tar.bz2: OK mpc-1.0.3.tar.gz: OK isl-0.16.1.tar.bz2: OK All prerequisites downloaded successfully. $ cd .. $ mkdir test $ cd test $ ../gcc-7.3.0/configure --prefix=$HOME/gcc/test --disable-multilib only a few lines of output are shown here: checking build system type... x86_64-pc-linux-gnu ... # lots of checking for, where, whether ... ... /home/clay.stevens/package/gcc/gcc-7.3.0/missing: line 81: makeinfo: command not found ... # lots of checking for, where, whether ... ... config.status: creating Makefile Attempt to make: $ make -j8 (the -j8 flag uses 8 CPUs) only a few lines of output are shown here: ... configure: WARNING: *** Makeinfo is missing. Info documentation will not be built. ... configure: WARNING: the "none" host is obsolete, use --disable-assembly ... configure: summary of build options: Version: GNU MP 6.1.0 Host type: none-pc-linux-gnu ABI: standard Install prefix: /home/user.name/gcc/test Compiler: gcc -std=gnu99 Static libraries: yes Shared libraries: no ... ... config.status: executing default commands make[2]: Leaving directory `/home/user.name/gcc/test' make[1]: *** [stage1-bubble] Error 2 make[1]: Leaving directory `/home/user.name/gcc/test' make: *** [all] Error 2 Attempt to install: $ make install errors: make[1]: Entering directory `/home/user.name/gcc/test' /bin/sh ../gcc-7.3.0/mkinstalldirs /home/user.name/gcc/test /home/user.name/gcc/test make[2]: Entering directory `/home/user.name/gcc/test/fixincludes' make[2]: *** No rule to make target `install'. Stop. make[2]: Leaving directory `/home/user.name/gcc/test/fixincludes' make[1]: *** [install-fixincludes] Error 2 make[1]: Leaving directory `/home/user.name/gcc/test' make: *** [install] Error 2 System info: $ cat /proc/version Linux version 3.10.0-514.el7.x86_64 (mockbuild@x86-039.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Wed Oct 19 11:24:13 EDT 2016 $ cat /etc/*release NAME="Red Hat Enterprise Linux Server" VERSION="7.3 (Maipo)" ID="rhel" ID_LIKE="fedora" $ lsb_release -a LSB Version: :core-4.1-amd64:core-4.1-noarch Distributor ID: RedHatEnterpriseServer
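A sketch of a retry that sidesteps two plausible problems here: the install prefix is the same directory as the build tree, and the interleaved -j8 output hides the first real error (setting MAKEINFO is a commonly suggested way around the missing makeinfo; treat the whole recipe as an assumption):

    mkdir $HOME/gcc/build && cd $HOME/gcc/build
    ../gcc-7.3.0/configure --prefix=$HOME/gcc/install --disable-multilib MAKEINFO=true
    make -j8 > make.log 2>&1 || grep -n -m1 'Error' make.log   # surface the first real failure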
gcc, makefile, rhel, rhel7, gcc7
1
3,143
1
https://stackoverflow.com/questions/49413922/having-problems-making-and-installing-gcc
49,307,058
Environment variable value loses backslash in database.yml
On a RHEL 7.2 server I've set up environment variables in /etc/environment similar to this: DB_LOGIN='DOMAIN\username' DB_PASSWORD='password' The file was then referenced as an EnvironmentFile in /etc/systemd/system/httpd.service.d/override.conf. After rebooting the server I can see that the env. var. is set properly (by running the env command). In my Rails environment, I have the following in my database.yml file based on what I learned from Failing to access environment variables within database.yml file : default: &default adapter: sqlserver ... username: <%= ENV['DB_LOGIN'] %> password: <%= ENV['DB_PASSWORD'] %> This setup works fine when executing my rake tasks. Also of note, when I run erb against the database.yml, it looks correct (the backslash is present). However in my Apache/Passenger configuration (httpd-2.4.6-40.el7_2.4.x86_64 and Phusion Passenger 5.1.7), the username loses the backslash (the value is DOMAINusername). I've changed the env var to have two backslashes (DOMAIN\\username) and then it works fine in the Apache/Passenger config, but my rake tasks fail due to the double backslash in the username. Is there a way to have an environment variable value that contains a backslash work in both of these situations? I'm not really looking to use a gem/plugin like Figaro or dotenv. Thanks for any guidance!
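A sketch of how one might verify where the backslash is actually lost, since the question already demonstrates that systemd's EnvironmentFile and a login shell parse the same line differently; the inspection commands are standard, and no conclusion is assumed here:

    # What systemd will hand to Apache/Passenger (after its own parsing)
    sudo systemctl show httpd -p Environment
    # What a login shell (and hence rake) sees
    printenv DB_LOGIN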
apache, environment-variables, ruby-on-rails-5, rhel
1
847
0
https://stackoverflow.com/questions/49307058/environment-variable-value-loses-backslash-in-database-yml
48,984,234
Running ASP.NET Core app from Amazon Linux 2 on Docker - Globalization
I have my ASP.NET Core app running beautifully (more or less) on microsoft/aspnetcore:2.0-jessie. Now I want to try to get it to deploy to amazonlinux:2. So far, the biggest hurdle has been libicu. I tried setting Globalization to Invariant, but this caused weird failures in, e.g., MySQL database calls. Here's the relevant step from my Dockerfile: RUN curl -L --http1.1 [URL] --output icu.tgz \ && tar -xf icu.tgz -C / \ && export LD_LIBRARY_PATH=/usr/local/lib \ && rm icu.tgz (SourceForge was down while I was trying to work on this yesterday, which didn't improve matters.) In any case, I still get the message of doom from .NET Core: FailFast: Couldn't find a valid ICU package installed on the system. Set the configuration flag System.Globalization.Invariant to true if you want to run with no globalization support. Any suggestions how to proceed?
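A sketch of two likely fixes: an export inside a RUN step dies with that step's shell and never reaches the running container, so the loader cannot see LD_LIBRARY_PATH at runtime; a Dockerfile ENV persists, and installing the distro package avoids the tarball entirely (package availability on amazonlinux:2 is an assumption):

    # Persist the library path into the final image
    ENV LD_LIBRARY_PATH=/usr/local/lib
    # ...or use the packaged ICU instead of a hand-unpacked tarball
    RUN yum install -y libicu && yum clean all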
docker, .net-core, rhel, icu, amazon-linux
1
923
1
https://stackoverflow.com/questions/48984234/running-asp-net-core-app-from-amazon-linux-2-on-docker-globalization
48,906,497
Unable to install rJava 3.3.0 on RHEL 6.7
Recently we downgraded R (latest) to 3.3.0 on RHEL, which requires installing xlsx and other rJava-dependent packages; I have deep-dived into every possible duplicate and tried all the options. Details: Command executed: R CMD javareconf Error: `.rodata' can not be used when making a shared object; recompile with -fPIC /usr/lib64/R/lib64/R/lib/libR.a(CommandLineArgs.o): could not read symbols: Bad value collect2: ld returned 1 exit status make[2]: *** [libjri.so] Error 1 make[2]: Leaving directory `/tmp/RtmpH1WhQR/R.INSTALL4a1266bbb309/rJava/jri/src' Net search: I searched and found the following link [URL] which suggests using export CXXFLAGS=-fPIC, but no luck; also, I do not see CMakeCache.txt in the R folder, so I am still not clear which piece I am missing. Any help will be highly appreciated. NOTE: This might seem to be a duplicate, but seriously, I have already tried all the related/relevant posts on stackoverflow.
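One reading of the error, sketched here: the link step is trying to fold the static libR.a into the shared libjri.so, which cannot work unless R provides a shared libR; rebuilding R with a shared library is the commonly suggested remedy (the configure flag is real, whether it resolves this exact setup is an assumption):

    # When building R 3.3.0 from source
    ./configure --enable-R-shlib
    make && make install
    # Then retry
    R CMD javareconf
    R -e 'install.packages("rJava")'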
r, linux, rhel
1
63
1
https://stackoverflow.com/questions/48906497/unable-to-install-rjava-3-3-0-on-rhel-6-7
48,812,044
NodeJS fs.unlink() deletes the files but the OS can still find them
I am using the following node.js code to delete files that I don't need. fs.unlink(downloadsFolder + '/' + file, function(err){ if (err) throw err; console.log("This file will be deleted." + file) }); When I list my directory, I can verify that the files are gone. However, when I run this command, I can still see them marked as (deleted) and still taking up disk space. dzdo lsof -L | grep -i deleted node 48782 root 600743243 403197165 /mnt/downloads/file_1516312894734.csv (deleted) node 48782 root 14999 403197166 /mnt/downloads/file_1516729327306.csv (deleted) As a temporary thing, I restart the node server to release all those "file handles" and they are gone. Why doesn't fs.unlink() do this automatically?
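This is standard POSIX behavior rather than a Node quirk: unlink() only removes the directory entry, and the kernel frees the blocks when the last open descriptor goes away; the lsof output shows the node process (PID 48782) still holding the files. A sketch of confirming it without a restart, using the PID from the question:

    # Descriptors the node process still holds on deleted files
    ls -l /proc/48782/fd | grep -i deleted
    # The in-application fix is to close every stream/fd on the file, not to change the unlink call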
node.js, rhel
1
348
0
https://stackoverflow.com/questions/48812044/nodejs-fs-unlink-deletes-the-files-but-the-os-can-still-find-them
48,386,714
error: command 'gcc' failed with exit status 1 while installing pycrypto on RHEL
I am trying to manually install Crypto module for Python (pycrypto) on RHEL. However, I seem to always get this error after I run the build command (i.e. python setup.py build): error: command 'gcc' failed with exit status 1 Does anyone know how to solve this problem?
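A sketch of the usual first steps: this generic distutils message hides the real compiler error further up in the output, and on RHEL the most frequent cause is missing Python headers (package names assumed for RHEL 7):

    sudo yum install -y gcc python-devel
    # Re-run and surface the first real compiler error instead of the summary line
    python setup.py build 2>&1 | grep -B2 -m1 'error:'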
python, gcc, rhel, pycrypto, rhel7
1
1,619
1
https://stackoverflow.com/questions/48386714/error-command-gcc-failed-with-exit-status-1-while-installing-pycrypto-on-rhel
47,744,381
Can't install OpenAL on Linux
I'm trying to install LÖVE2D v0.10.2 on my CentOS 7 VM, but when I run ./configure it just says "configure: error: LÖVE needs "OpenAL", please install "OpenAL" with development files and try again". I looked with yum search openal for something to install, but no match is found, although there should be one. How can I get rid of that error?
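A sketch of the likely cause: on CentOS 7 the OpenAL packages live in the EPEL repository rather than the base repos, so yum search finds nothing until EPEL is enabled (package names are the usual EPEL ones):

    sudo yum install -y epel-release
    yum search openal                                  # should now list openal-soft packages
    sudo yum install -y openal-soft openal-soft-devel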
linux, centos7, rhel, openal, love2d
1
1,430
0
https://stackoverflow.com/questions/47744381/cant-install-openal-on-linux
47,617,504
How to get a dual head configuration running on VirtualBox / RHEL 5.11
We have production machines running RedHat RHEL 5.11 / X.org version 7.1.1. The supplier of these machines provides a VirtualBox VM to allow software development on conventional computers, i.e. away from the production floor. I would like to configure this VM for dual head / dual monitor / dual screen usage. Step 1: The VM in VirtualBox Manager is configured for maximum video memory and 2 monitors. Step 2: VirtualBox GuestAdditions are installed, version corresponding to the Virtual Box application. Step 3: /etc/X11/xorg.conf is modified to support 2 devices, 2 screens and 2 monitors, all configured for the vboxvideo driver. The content is shown below: # Xorg configuration created by system-config-display Section "ServerLayout" Identifier "Multihead layout" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "kbd" Option "XkbModel" "pc105" Option "XkbLayout" "us" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Monitor Vendor" ModelName "LCD Panel 1600x1200" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Monitor Vendor" ModelName "LCD Panel 1600x1200" EndSection Section "Device" Identifier "Videocard0" Driver "vboxvideo" BusID "PCI:0:2:0" Screen 0 EndSection Section "Device" Identifier "Videocard1" Driver "vboxvideo" BusID "PCI:0:2:1" Screen 1 EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1600x1200" "1280x1024" "1280x960" "1280x800" "1152x864" "1024x768" "800x600" "640x480" EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Videocard1" Monitor "Monitor1" DefaultDepth 24 SubSection "Display" Depth 24 Modes "1600x1200" "1280x1024" "1280x960" "1280x800" "1152x864" "1024x768" "800x600" "640x480" EndSubSection EndSection Step 4: reboot the system I expect now that the system is set-up and ready to show two screens on my computer. Alas, it only shows Screen0. Observations: /var/log/Xorg.0.log shows no errors, but a couple of warnings: (WW) VBoxVideo(0): Failed to set up write-combining range (0xe0000000,0x8000000) In the same log file, only the instance VBoxVideo(0) is mentioned; there is no reference to VBoxVideo(1) lspci also shows only one instance of the VBox video driver: 00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter In VB Manager -> View only Virtual Screen 0 is said to be enabled; the other screen is available, but an "enable" option is presented, that appears to do nothing when selected. Everything else I have seen seems to be consistent: the second screen is nowhere found on my RHEL VM. Questions: What is the importance of the warning on write-combining? Is this related to this problem? Is vboxvideo driver / pseudo card a dual head card or should I instantiate a second card? What are the correct magic numbers for the BusID of the second "Device"? Any other recommendation to get this configuration up-n-running?
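Before touching xorg.conf further, a sketch of host-side checks worth making, since the guest can only enumerate a second display if the VM actually exposes one, and on older setups the GUI toggle can be unreliable (the VM name is a placeholder):

    VBoxManage showvminfo "RHEL5-VM" | grep -i 'monitor count'
    VBoxManage modifyvm "RHEL5-VM" --monitorcount 2    # the VM must be powered off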
virtualbox, rhel
1
160
0
https://stackoverflow.com/questions/47617504/how-to-get-a-dual-head-configuration-running-on-virtualbox-rhel-5-11
47,415,154
My installed GCC version is behind my installed Redhat developer toolset version
I have been searching for the answer to this for a while. I'm on RHEL 6.x and I am trying to upgrade gcc in order to install a package. Also, I have a super old version of gcc and it's time to upgrade anyway. (For now, let's assume that an OS upgrade is out of the question, so if this DOES require an OS upgrade, the package will have to wait.) I found that the best way to upgrade gcc is by using the Red Hat Developer Toolset. I looked into it and found that I already have devtoolset-4 installed, which, to my understanding, installs gcc version 5.2.1 with it. Yet, for some reason, my current gcc version is 4.4.7. Any idea why this would happen?
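A sketch of the usual explanation: devtoolset installs its toolchain under /opt/rh and deliberately leaves the system gcc untouched; it only takes effect when enabled through the Software Collections mechanism:

    gcc --version                       # system compiler: 4.4.7
    scl enable devtoolset-4 bash        # opens a shell with the toolset first on PATH
    gcc --version                       # now reports 5.2.1 inside that shell
    # or for the current shell / login scripts
    source /opt/rh/devtoolset-4/enable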
c++, gcc, rhel
1
1,380
1
https://stackoverflow.com/questions/47415154/my-installed-gcc-version-is-behind-my-installed-redhat-developer-toolset-version
47,296,889
Problems with using uuencode on RHEL to send mail attachment
I have a unique situation where I'm going from an AIX platform to RHEL for a vendor-supported application. Within the application, I can send a command (limited in the number of characters I can send) that will email me the results of a print job. To do this on AIX, I pass the following command: uuencode temp.txt | mail -s "test" email@address.com This sends standard output to the email as an attachment named temp.txt; temp.txt doesn't exist on the AIX server. I have downloaded the rpm for uuencode for Linux, but it doesn't work in the same fashion: it sends an email with what looks like garbage (it could be missing a MIME header). I'm looking for a similar command. To simulate outside of the application, I use the following on AIX: echo "testing mail" | uuencode temp.txt | mail -s "test" email@address.com I know mailx -a <filename> would work if I had a filename; unfortunately I'm working with standard output rather than a file. Any help will be appreciated.
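A sketch of one compact workaround that keeps the data on stdin but still gives mailx a real filename via a short-lived temp file (RHEL's heirloom mailx builds a proper MIME attachment for -a; whether this fits the application's character limit is the open question):

    # usage: some_print_job | sh -c '<the pipeline below>'
    t=$(mktemp); cat > "$t"; mailx -s "test" -a "$t" email@address.com < /dev/null; rm -f "$t"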
email, aix, rhel, mailx, uuencode
1
1,409
0
https://stackoverflow.com/questions/47296889/problems-with-using-uuencode-on-rhel-to-send-mail-attachment
47,266,305
ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found in RHEL 5.11
I am using Python 3.5.4 and trying to import tensorflow 1.3.0 in RHEL 5.11 (and also tried in RHEL 6.6) environment. I am getting an issue related to GLIBC 2.14. Stacktrace is given below. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.5/site-packages/tensorflow/__init__.py", line 24, in <module> from tensorflow.python import * File "/usr/local/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 49, in <module> from tensorflow.python import pywrap_tensorflow File "/usr/local/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in <module> raise ImportError(msg) ImportError: Traceback (most recent call last): File "/usr/local/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/usr/local/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/usr/local/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "/usr/local/lib/python3.5/imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "/usr/local/lib/python3.5/imp.py", line 343, in load_dynamic return _load(spec) ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /usr/local/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so) Failed to load the native TensorFlow runtime. The GLIBC version in RHEL 5.11 is 2.5 and in RHEL 6.6 is 2.12
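A sketch of confirming the mismatch: the prebuilt TensorFlow 1.3 binaries require glibc 2.14+, which is newer than RHEL 5.11's 2.5 or RHEL 6.6's 2.12, so the realistic options are building TensorFlow from source against the old glibc or running on a newer OS/container; the checks themselves are standard:

    # Highest glibc symbol version the extension demands
    objdump -T /usr/local/lib/python3.5/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -1
    # What the OS provides
    ldd --version | head -1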
python, tensorflow, glibc, rhel
1
2,663
0
https://stackoverflow.com/questions/47266305/importerror-lib64-libc-so-6-version-glibc-2-14-not-found-in-rhel-5-11
47,242,612
Updating node 6.2 to node > 8 in openshift 3
I created a Node 6.2 and MongoDB project on OpenShift 3. I don't have root access to the terminal. I can redeploy if needed. How can I update Node? I want to use the async/await functionality. Thank you in advance.
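A sketch of how this is typically done on OpenShift 3, where you point the s2i build at a newer builder image tag instead of upgrading node in place (tag availability in your cluster, and the repo URL, are assumptions):

    # See which nodejs builder tags the cluster offers
    oc get imagestreamtags -n openshift | grep nodejs
    # Recreate the app (or edit its BuildConfig) against a newer tag
    oc new-app nodejs:8~https://github.com/<user>/<repo>.git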
node.js, openshift, rhel, openshift-3
1
125
0
https://stackoverflow.com/questions/47242612/updating-node-6-2-to-node-8-in-openshift-3
47,103,310
Docker daemon throwing error while starting in Linux RHEL
I am trying to start my dockerd daemon with this command - dockerd & Then I start getting the error below - ERRO[0036] libcontainerd: failed to receive event from containerd: rpc error: code = 12 desc = unknown service types.API This keeps repeating again and again, and I am unable to start any container after that. If I close the session and open a new session, I can see that docker ps is accessible, but I am unable to start any container. While starting a container I get the error - docker run hello-world docker: Error response from daemon: unknown service types.API. ERRO[0000] error waiting for container: context canceled Please let me know if any logs are needed.
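A sketch of the usual suspicion: 'unknown service types.API' is the gRPC symptom of dockerd talking to a mismatched or stale containerd, which launching dockerd by hand in a shell easily produces; letting the packaged service manager start the pair is the first thing to try (unit names assumed for a packaged RHEL install):

    sudo systemctl restart docker    # restarts the matching containerd alongside it
    journalctl -u docker -e          # the daemon's own account of the failure
    docker version                   # check client/daemon version alignment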
Docker daemon throwing error while starting in Linux RHEL I am trying to start my dockerd daemon with this command - dockerd & Then I start getting the error below - ERRO[0036] libcontainerd: failed to receive event from containerd: rpc error: code = 12 desc = unknown service types.API This keeps repeating again and again, and I am unable to start any container after that. If I close the session and open a new session, I can see that docker ps is accessible, but I am unable to start any container. While starting a container I am getting this error - docker run hello-world docker: Error response from daemon: unknown service types.API. ERRO[0000] error waiting for container: context canceled Please let me know if any logs are needed.
linux, docker, containers, rhel
1
1,580
1
https://stackoverflow.com/questions/47103310/docker-daemon-throwing-error-while-starting-in-linux-rhel
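The unknown service types.API grpc error typically points at a dockerd talking to a containerd from a different version, which is easy to trigger by launching dockerd & by hand next to a stray containerd left over from an earlier install. A hedged first step is to check what is running and let systemd start the matched pair:
pgrep -af containerd                           # which containerd binaries are running, and from where
sudo systemctl restart docker                  # starts dockerd together with its matching containerd
journalctl -u docker --no-pager | tail -n 20   # inspect the fresh startup logs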
47,064,482
cygwin xterm to open multi session from script
I have .ksh script with below commands to open multiple session on different server but not able to execute file parallel same time.any help ? xterm -e ssh appuser@appserver1; cd /usr/bin/; ./script1.ksh sleep 5 & xterm -e ssh appusr@appserver2; cd /usr/bin/; ./script2.ksh where cd and script[12].ksh commands should run on remote host. Not able to view script running/executing on 2 different windows same time. I have tried below command ,but it's opening two sessions simultaneously not parallel.Like not opening 2 windows same time to execute sessions. xterm -hold -e ssh appuser@appserver1 'cd /usr/bin;./adauto.sh psswd &' xterm -hold -e ssh appuser@appserver2 'cd /usr/bin;./adauto.sh psswd;' Thanks, SM
cygwin xterm to open multi session from script I have a .ksh script with the commands below to open multiple sessions on different servers, but I am not able to execute them in parallel at the same time. Any help? xterm -e ssh appuser@appserver1; cd /usr/bin/; ./script1.ksh sleep 5 & xterm -e ssh appusr@appserver2; cd /usr/bin/; ./script2.ksh The cd and script[12].ksh commands should run on the remote host. I am not able to see the scripts running in 2 different windows at the same time. I have tried the commands below, but they open the two sessions one after the other, not in parallel - the 2 windows do not open at the same time. xterm -hold -e ssh appuser@appserver1 'cd /usr/bin;./adauto.sh psswd &' xterm -hold -e ssh appuser@appserver2 'cd /usr/bin;./adauto.sh psswd;' Thanks, SM
shell, cygwin, ksh, rhel
1
292
0
https://stackoverflow.com/questions/47064482/cygwin-xterm-to-open-multi-session-from-script
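As written, everything after the first ; (the cd and the script) runs locally once the xterm exits, and the first un-backgrounded xterm blocks the second. A sketch that runs the commands on the remote host and opens both windows at once (script names taken from the first attempt above):
xterm -hold -e ssh appuser@appserver1 'cd /usr/bin && ./script1.ksh' &
xterm -hold -e ssh appuser@appserver2 'cd /usr/bin && ./script2.ksh' &
wait   # keep the parent script alive until both windows are closed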
46,904,093
how to install pip and virtualenv of python2.7 on rhel6?
I have RHEL6-32bit installed with default python2.6 . In that I installed gcc : rpm -ivh ppl-0.10.2-11.el6.i686.rpm rpm -ivh cloog-ppl-0.15.7-1.2.el6.i686. rpm -ivh mpfr-2.4.1-6.el6.i686.rpm rpm -ivh cpp-4.4.7-18.el6.i686.rpm rpm -ivh libgomp-4.4.7-18.el6.i686.rpm rpm -ivh gcc-4.4.7-18.el6.i686.rpm & python2.7 as follows: cd /usr/src wget [URL] tar xzf Python-2.7.13.tgz cd Python-2.7.13 ./configure make altinstall Now I want to install pip and then virtualenv of python2.7. How do I enable python2.7 and install pip of python2.7?
how to install pip and virtualenv of python2.7 on rhel6? I have RHEL6 32-bit installed with the default python2.6. On it I installed gcc: rpm -ivh ppl-0.10.2-11.el6.i686.rpm rpm -ivh cloog-ppl-0.15.7-1.2.el6.i686. rpm -ivh mpfr-2.4.1-6.el6.i686.rpm rpm -ivh cpp-4.4.7-18.el6.i686.rpm rpm -ivh libgomp-4.4.7-18.el6.i686.rpm rpm -ivh gcc-4.4.7-18.el6.i686.rpm and python2.7 as follows: cd /usr/src wget [URL] tar xzf Python-2.7.13.tgz cd Python-2.7.13 ./configure make altinstall Now I want to install pip and then virtualenv for python2.7. How do I enable python2.7 and install pip for python2.7?
python-2.7, pip, virtualenv, rhel, rhel6
1
633
1
https://stackoverflow.com/questions/46904093/how-to-install-pip-and-virtualenv-of-python2-7-on-rhel6
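Since make altinstall normally puts the interpreter at /usr/local/bin/python2.7, pip can be bootstrapped for exactly that interpreter with pypa's get-pip.py; the 2.7-pinned URL and the /usr/local paths below are assumptions based on a default altinstall:
wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
/usr/local/bin/python2.7 get-pip.py              # installs pip for this interpreter only
/usr/local/bin/pip2.7 install virtualenv
/usr/local/bin/virtualenv -p /usr/local/bin/python2.7 ~/venv27
. ~/venv27/bin/activate                          # "enables" python2.7 for the current shell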
46,902,848
QTouchEvent Ubuntu 14.04 vs RHEL 7.4
When a touchscreen monitor is pressed, using Qt4.8, viewportEvent(QEvent *event) fires the correct signals ( QEvent::TouchBegin , QEvent::TouchUpdate ) on a PC with Ubuntu 14.04 installed. For the exact same source code compiled on a PC running RHEL 7.4, only mouse events are returned. dmesg and lsusb commands on both PCs return the same results, indicating to me no driver issues. Why would RHEL not return the correct signals in this case? I wish to use multitouch on RHEL PCs.
QTouchEvent Ubuntu 14.04 vs RHEL 7.4 When a touchscreen monitor is pressed, using Qt4.8, viewportEvent(QEvent *event) fires the correct signals ( QEvent::TouchBegin , QEvent::TouchUpdate ) on a PC with Ubuntu 14.04 installed. For the exact same source code compiled on a PC running RHEL 7.4, only mouse events are returned. dmesg and lsusb commands on both PCs return the same results, indicating to me no driver issues. Why would RHEL not return the correct signals in this case? I wish to use multitouch on RHEL PCs.
qt, rhel
1
42
0
https://stackoverflow.com/questions/46902848/qtouchevent-ubuntu-14-04-vs-rhel-7-4
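Qt 4.8 on X11 receives touch through the XInput extension, so a useful first comparison is whether the RHEL 7.4 X server exposes the monitor as a touch device at all, or only synthesizes mouse events. A quick check to run on both machines:
xinput list                          # the touchscreen should appear as its own input device
xdpyinfo | grep -i xinputextension   # confirm the server advertises XInput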
46,721,463
how to add labels in every docker remote build
I have a docker remote server to build all my images, now I need to add labels to every image build in the remote. I know that adding --label label=value works from local but when I call the remote docker I need to add specific labels to all image build in the remote docker. In other words I want to add the OS version and the docker version to every image even if I call from local without any label in the command line I want to do something like: docker -H <remote-docker> build -f ./Dockerfile -t image:version and when I do a docker inspect to the image stored in the remote docker the labels have: Labels: { "os_version": 0.0.1, "docker_version": 1.17.0 } Without I specify --label every time I build in remote docker. Someone know how to add this labels in every build?
how to add labels in every docker remote build I have a docker remote server to build all my images, now I need to add labels to every image build in the remote. I know that adding --label label=value works from local but when I call the remote docker I need to add specific labels to all image build in the remote docker. In other words I want to add the OS version and the docker version to every image even if I call from local without any label in the command line I want to do something like: docker -H <remote-docker> build -f ./Dockerfile -t image:version and when I do a docker inspect to the image stored in the remote docker the labels have: Labels: { "os_version": 0.0.1, "docker_version": 1.17.0 } Without I specify --label every time I build in remote docker. Someone know how to add this labels in every build?
docker, docker-compose, rhel, docker-machine
1
146
0
https://stackoverflow.com/questions/46721463/how-to-add-labels-in-every-docker-remote-build
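dockerd has no setting that stamps labels onto every image it builds (the daemon's --label option labels the engine itself), so this usually ends up as a thin client-side wrapper. A sketch in which the host, image name, and label keys are all placeholders, and docker info --format needs a reasonably recent client:
REMOTE=remote-docker:2376
OS_VERSION=$(docker -H "$REMOTE" info --format '{{.OperatingSystem}}')
DOCKER_VERSION=$(docker -H "$REMOTE" version --format '{{.Server.Version}}')
docker -H "$REMOTE" build --label "os_version=$OS_VERSION" --label "docker_version=$DOCKER_VERSION" -f ./Dockerfile -t image:version .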
46,110,069
postgres - pg_restore error cannot restore from compressed archive (compression not supported in this installation)
I got below error while restoring the dump file in postgresql 9.6.3 running on RHEL 7. Error - pg_restore: [archiver] WARNING: archive is compressed, but this installation does not support compression -- no data will be available pg_restore: [archiver] cannot restore from compressed archive (compression not supported in this installation) Can anyone please help in this regard ? Thanks in advance.
postgres - pg_restore error cannot restore from compressed archive (compression not supported in this installation) I got below error while restoring the dump file in postgresql 9.6.3 running on RHEL 7. Error - pg_restore: [archiver] WARNING: archive is compressed, but this installation does not support compression -- no data will be available pg_restore: [archiver] cannot restore from compressed archive (compression not supported in this installation) Can anyone please help in this regard ? Thanks in advance.
linux, postgresql, rhel
1
1,818
0
https://stackoverflow.com/questions/46110069/postgres-pg-restore-error-cannot-restore-from-compressed-archive-compression
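That warning means the pg_restore binary itself was compiled without zlib, not that the dump is damaged, so the first check is which binary answers and whether a zlib-enabled build (for example from the PGDG packages) exists on the box. If re-taking the dump is an option, compression can also be turned off at dump time:
which pg_restore && pg_restore --version        # which build is actually on PATH
pg_dump -Fc -Z0 -f mydb_nocompressed.dump mydb  # -Z0 keeps the custom format but disables compression (mydb is a placeholder)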
46,023,141
Java Compiler keeps saying "error while writing className: className.class (Permission denied)"
I'm working on a Tomcat WebApp for my university which enables students to compile their Java codes and see the trace. I'm installing it on a RHEL7 VM. But when I test the compilation function (this one is not implemented by me), the method I'm providing returns this: error while writing className: className.class (Permission denied) Error on line 1 in className.java I'll show you the method I think is generating this: public String compileJavaCode(String javaCode, String javaFileName, File workingDir) throws IOException, TimeoutException{ javax.tools.JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<JavaFileObject>(); StandardJavaFileManager fileManager = compiler.getStandardFileManager(diagnostics, null, null); this.createJavaFile(javaCode, javaFileName, workingDir); JavaFileObject file = new JavaSourceFromString(javaFileName, javaCode); Iterable<? extends JavaFileObject> compilationUnits = Arrays.asList(file); compiler.getTask(null, fileManager, diagnostics, null, null, compilationUnits).call(); String diagn = ""; for ( Diagnostic<? extends JavaFileObject> diagnostic : diagnostics.getDiagnostics()){ diagn+=diagnostic.getMessage(null)+"\n";//E.g. cannot find symbol symbol: variable variablename diagn+="Error on line "+Long.toString(diagnostic.getLineNumber())+" in "+diagnostic.getSource().toUri();//E.g. Error on line 22 in ClassName.java } fileManager.close(); compiler.run(null, null, null, workingDir.getAbsolutePath()+File.separator+javaFileName); return diagn; } Students will see the content of that diagn variable as a result for their code submission. Fun fact is that I manage to get the className.class in the workingDir directory but I keep getting that error from the for cycle above. Could the problem be compiler.getTask(...).call() ? I mean maybe compiler.run is able to generate the .class correctly but the compiler.getTask(...).call() is trying to write the .class somewhere else I don't have permission to write in. P.S. This is a pretty legacy code so please be merciful with it. :) As asked by @Alexander, this is the content of the Java file: public class Sommatore { public int somma(int i, int j) { return i+j; } public int differenza(int i, int j) { return i-j; } }
Java Compiler keeps saying "error while writing className: className.class (Permission denied)" I'm working on a Tomcat WebApp for my university which enables students to compile their Java codes and see the trace. I'm installing it on a RHEL7 VM. But when I test the compilation function (this one is not implemented by me), the method I'm providing returns this: error while writing className: className.class (Permission denied) Error on line 1 in className.java I'll show you the method I think is generating this: public String compileJavaCode(String javaCode, String javaFileName, File workingDir) throws IOException, TimeoutException{ javax.tools.JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<JavaFileObject>(); StandardJavaFileManager fileManager = compiler.getStandardFileManager(diagnostics, null, null); this.createJavaFile(javaCode, javaFileName, workingDir); JavaFileObject file = new JavaSourceFromString(javaFileName, javaCode); Iterable<? extends JavaFileObject> compilationUnits = Arrays.asList(file); compiler.getTask(null, fileManager, diagnostics, null, null, compilationUnits).call(); String diagn = ""; for ( Diagnostic<? extends JavaFileObject> diagnostic : diagnostics.getDiagnostics()){ diagn+=diagnostic.getMessage(null)+"\n";//E.g. cannot find symbol symbol: variable variablename diagn+="Error on line "+Long.toString(diagnostic.getLineNumber())+" in "+diagnostic.getSource().toUri();//E.g. Error on line 22 in ClassName.java } fileManager.close(); compiler.run(null, null, null, workingDir.getAbsolutePath()+File.separator+javaFileName); return diagn; } Students will see the content of that diagn variable as a result for their code submission. Fun fact is that I manage to get the className.class in the workingDir directory but I keep getting that error from the for cycle above. Could the problem be compiler.getTask(...).call() ? I mean maybe compiler.run is able to generate the .class correctly but the compiler.getTask(...).call() is trying to write the .class somewhere else I don't have permission to write in. P.S. This is a pretty legacy code so please be merciful with it. :) As asked by @Alexander, this is the content of the Java file: public class Sommatore { public int somma(int i, int j) { return i+j; } public int differenza(int i, int j) { return i-j; } }
java, web-applications, compiler-errors, compiler-warnings, rhel
1
2,409
1
https://stackoverflow.com/questions/46023141/java-compiler-keeps-saying-error-while-writing-classname-classname-class-perm
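With no output location configured, the compiler task writes className.class into the JVM's current working directory, which under Tomcat is often somewhere the service account cannot write; the later compiler.run call, by contrast, is handed the source under workingDir. Checking who compiles and where quickly narrows it down:
TOMCAT_PID=$(pgrep -f catalina | head -n 1)   # assumes a single Tomcat JVM
ps -o user= -p "$TOMCAT_PID"                  # the user the in-JVM compile runs as
readlink "/proc/$TOMCAT_PID/cwd"              # where javac drops .class files by default
Pointing the class output at workingDir, e.g. via fileManager.setLocation(StandardLocation.CLASS_OUTPUT, ...) before getTask, is a plausible fix.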
45,883,900
Connect SAS to AWS Athena
I am trying to establish a connection between SAS & AWS Athena. I am working on RHEL 6.7, java version is 1.8.0_71. Could someone advise how to configure that please? So far, after some reading on "Accessing Amazon Athena with JDBC" , I have tried a 'maybe it will work' naive approach with trying to set a DSN in odbc.ini files (outside of SAS): I have downloaded Athena JDBC jar file and tried configuring connection in a similar way I did it for EMR. odbc.ini: [ODBC] # Specify any global ODBC configuration here such as ODBC tracing. [ODBC Data Sources] ATHENA=Amazon Athena JDBC Driver [ATHENA] Driver=/opt/amazon/hiveodbc/lib/64/AthenaJDBC41-1.1.0.jar HOST=jdbc:awsathena://athena.eu-west-1.amazonaws.com:443?s3_staging_dir=s3://aws-athena-query-results/sas/ odbcinst.ini [ODBC Drivers] Amazon Athena JDBC Driver=Installed [Amazon Athena JDBC Driver] Description=Amazon Athena JDBC Driver Driver=/opt/amazon/hiveodbc/lib/64/AthenaJDBC41-1.1.0.jar ## The option below is for using unixODBC when compiled with -DSQL_WCHART_CONVERT. ## Execute 'odbc_config --cflags' to determine if you need to uncomment it. # IconvEncoding=UCS-4LE iODBC throws the following: iODBC Demonstration program This program shows an interactive SQL processor Driver Manager: 03.52.0709.0909 Enter ODBC connect string (? shows list): DSN=ATHENA 1: SQLDriverConnect = [iODBC][Driver Manager]/opt/amazon/hiveodbc/lib/64/AthenaJDBC41-1.1.0.jar: invalid ELF header (0) SQLSTATE=00000 2: SQLDriverConnect = [iODBC][Driver Manager]Specified driver could not be loaded (0) SQLSTATE=IM003 Any suggestion would be much appreciated!
Connect SAS to AWS Athena I am trying to establish a connection between SAS & AWS Athena. I am working on RHEL 6.7, java version is 1.8.0_71. Could someone advise how to configure that please? So far, after some reading on "Accessing Amazon Athena with JDBC" , I have tried a 'maybe it will work' naive approach with trying to set a DSN in odbc.ini files (outside of SAS): I have downloaded Athena JDBC jar file and tried configuring connection in a similar way I did it for EMR. odbc.ini: [ODBC] # Specify any global ODBC configuration here such as ODBC tracing. [ODBC Data Sources] ATHENA=Amazon Athena JDBC Driver [ATHENA] Driver=/opt/amazon/hiveodbc/lib/64/AthenaJDBC41-1.1.0.jar HOST=jdbc:awsathena://athena.eu-west-1.amazonaws.com:443?s3_staging_dir=s3://aws-athena-query-results/sas/ odbcinst.ini [ODBC Drivers] Amazon Athena JDBC Driver=Installed [Amazon Athena JDBC Driver] Description=Amazon Athena JDBC Driver Driver=/opt/amazon/hiveodbc/lib/64/AthenaJDBC41-1.1.0.jar ## The option below is for using unixODBC when compiled with -DSQL_WCHART_CONVERT. ## Execute 'odbc_config --cflags' to determine if you need to uncomment it. # IconvEncoding=UCS-4LE iODBC throws the following: iODBC Demonstration program This program shows an interactive SQL processor Driver Manager: 03.52.0709.0909 Enter ODBC connect string (? shows list): DSN=ATHENA 1: SQLDriverConnect = [iODBC][Driver Manager]/opt/amazon/hiveodbc/lib/64/AthenaJDBC41-1.1.0.jar: invalid ELF header (0) SQLSTATE=00000 2: SQLDriverConnect = [iODBC][Driver Manager]Specified driver could not be loaded (0) SQLSTATE=IM003 Any suggestion would be much appreciated!
amazon-web-services, jdbc, sas, rhel, amazon-athena
1
1,468
0
https://stackoverflow.com/questions/45883900/connect-sas-to-aws-athena
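The invalid ELF header line is the ODBC driver manager trying to dlopen the jar: an odbcinst Driver= entry must name a native shared object, while a JDBC driver is a Java archive, so the two stacks are not interchangeable. The file types make the mismatch visible:
file /opt/amazon/hiveodbc/lib/64/AthenaJDBC41-1.1.0.jar   # reports a Zip/Java archive, not ELF
file /opt/amazon/hiveodbc/lib/64/*.so                     # real ODBC drivers are ELF shared objects
From SAS that leaves either a genuine ODBC driver for Athena or a JDBC-based access route, whichever the SAS installation supports.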
43,725,908
Installing Jenkins on RHEL 6 getting error "No valid crumb was included in the request"
I've just installed Jenkins on an AWS EC2. However when I go to configure Jenkins from the browser I get the following error immediately after I select install recommended plugins: An error occurred during installation: No valid crumb was included in the request When I looked up this error, it seems it was a known issue but was resolved last year. [URL] However I am still encountering it on a stable build version. Installation on RHEL 6: sudo wget -O /etc/yum.repos.d/jenkins.repo [URL] sudo rpm --import [URL] sudo yum install jenkins Running Jenkins sudo service jenkins start Does anybody know a workaround or how to fix this problem?
Installing Jenkins on RHEL 6 getting error "No valid crumb was included in the request" I've just installed Jenkins on an AWS EC2. However when I go to configure Jenkins from the browser I get the following error immediately after I select install recommended plugins: An error occurred during installation: No valid crumb was included in the request When I looked up this error, it seems it was a known issue but was resolved last year. [URL] However I am still encountering it on a stable build version. Installation on RHEL 6: sudo wget -O /etc/yum.repos.d/jenkins.repo [URL] sudo rpm --import [URL] sudo yum install jenkins Running Jenkins sudo service jenkins start Does anybody know a workaround or how to fix this problem?
jenkins, amazon-ec2, rhel
1
365
0
https://stackoverflow.com/questions/43725908/installing-jenkins-on-rhel-6-getting-error-no-valid-crumb-was-included-in-the-r
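During the setup wizard this error is often caused by a reverse proxy or load balancer in front of Jenkins dropping the crumb header rather than by Jenkins itself; hitting the instance directly on port 8080 is a quick way to rule the proxy out (the hostname below is a placeholder):
curl -si http://ec2-host:8080/crumbIssuer/api/json | head -n 5   # direct access: expect a crumb, or a 403 asking for auth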
43,302,142
Docker nginx SELinux (centOS/RHEL) with 403 forbidden access
So my Dockerfile runs via docker-compose using: Dockerfile FROM nginx #COPY conf COPY myapp/ /usr/share/nginx/html RUN chmod -R 664 /usr/share/nginx/html RUN chown -R nginx /usr/share/nginx/html RUN chcon -R -t httpd_sys_content_t /usr/share/nginx/html This is on RHEL 6.x, Docker is old 1.7 or something as well. I don't even need "run chmod/chown/chcon" for most environments!! The dockerfile works just fine on windows. However, I still get 403 Forbidden errors whenever nginx tries to access ANY file in /usr/share/nginx/html. What is the correct way to setup nginx in a docker container and avoid these SElinux problems? (SElinux is on "Enforcing") In fact, if you do RUN/CMD ls -l we can see nginx is the user who owns that folder and it has the right permissions! So what the heck is going on?
Docker nginx SELinux (centOS/RHEL) with 403 forbidden access So my Dockerfile runs via docker-compose using: Dockerfile FROM nginx #COPY conf COPY myapp/ /usr/share/nginx/html RUN chmod -R 664 /usr/share/nginx/html RUN chown -R nginx /usr/share/nginx/html RUN chcon -R -t httpd_sys_content_t /usr/share/nginx/html This is on RHEL 6.x, and Docker is old, 1.7 or something, as well. I don't even need "run chmod/chown/chcon" for most environments!! The Dockerfile works just fine on Windows. However, I still get 403 Forbidden errors whenever nginx tries to access ANY file in /usr/share/nginx/html. What is the correct way to set up nginx in a docker container and avoid these SELinux problems? (SELinux is on "Enforcing") In fact, if you do RUN/CMD ls -l we can see nginx is the user who owns that folder and it has the right permissions! So what the heck is going on?
nginx, docker, dockerfile, rhel
1
959
1
https://stackoverflow.com/questions/43302142/docker-nginx-selinux-centos-rhel-with-403-forbidden-access
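One likely culprit here is independent of SELinux: chmod -R 664 strips the execute (search) bit from the directories, and a directory without x cannot be traversed even by its owner. Splitting file and directory modes is the usual fix; a sketch for the Dockerfile:
RUN find /usr/share/nginx/html -type d -exec chmod 755 {} +   # directories need the search bit
RUN find /usr/share/nginx/html -type f -exec chmod 644 {} +   # plain read is enough for files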
43,113,939
Is it possible to Migrate from RHEL 5.11 to RHEL 7.1?
I want to upgrade the current RHEL version on one of my production systems. The current version is RHEL 5.11. Is it possible to install RHEL 7.1 and migrate the old settings without any challenges?
Is it possible to Migrate from RHEL 5.11 to RHEL 7.1? I want to upgrade the current RHEL version on one of my production systems. The current version is RHEL 5.11. Is it possible to install RHEL 7.1 and migrate the old settings without any challenges?
rhel
1
2,576
1
https://stackoverflow.com/questions/43113939/is-it-possible-to-migrate-from-rhel-5-11-to-rhel-7-1
42,869,001
Authenticating SSH between CentOS & Cisco appliances
I am new to scripting, so please forgive this question. Having done extensive research am not able to find a suitable solution. Currently I have a Python script which is required to SSH from CentOS into Cisco (and other) firewall appliances. Once logged in, certain actions are carried out. Currently, we are storing our username and password in a plaintext file and calling that file when the Python script requires it. However, if the CentOS server is compromised, the attacker now has access to the plaintext credentials to the firewall and would be able to get the IP address from the Python script. I have looked at using SSH-Agent as a means of bypassing passwords, however, if the CentOS server is compromised won't the attacker still be able to access the firewall through the Python script? Really, I am looking for a way to protect the firewall from unauthorized access, should the CentOS server be compromised. I have also considered using obfuscation but doesn't seem suitable. Wondering what options I have here. Sorry for my bad english.
Authenticating SSH between CentOS & Cisco appliances I am new to scripting, so please forgive this question. Having done extensive research, I am not able to find a suitable solution. Currently I have a Python script which is required to SSH from CentOS into Cisco (and other) firewall appliances. Once logged in, certain actions are carried out. Currently, we are storing our username and password in a plaintext file and calling that file when the Python script requires it. However, if the CentOS server is compromised, the attacker now has access to the plaintext credentials for the firewall and would be able to get the IP address from the Python script. I have looked at using SSH-Agent as a means of bypassing passwords; however, if the CentOS server is compromised, won't the attacker still be able to access the firewall through the Python script? Really, I am looking for a way to protect the firewall from unauthorized access should the CentOS server be compromised. I have also considered using obfuscation, but it doesn't seem suitable. I am wondering what options I have here. Sorry for my bad English.
python, ssh, centos, ssh-keys, rhel
1
46
0
https://stackoverflow.com/questions/42869001/authenticating-ssh-between-centos-cisco-appliances
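Whatever the storage mechanism, a compromised host that is allowed to log in can be replayed by the attacker, so the realistic goals are keeping the secret out of plain text and limiting what the account can do on the appliance. A minimal step is key-based auth with a dedicated restricted user; ssh-copy-id only works where the device supports authorized_keys:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/fw_automation               # add a passphrase plus ssh-agent where the script allows it
ssh-copy-id -i ~/.ssh/fw_automation.pub appuser@firewall-host   # firewall-host and appuser are placeholders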
42,616,793
What does this alias do in Red Hat Linux?
cd = 'cd !* ;set prompt="! $host dirs % "' This alias is on a Red Hat Enterprise linux server I use commonly, and I can't figure out what it does for me. Any ideas?
What does this alias do in Red Hat Linux? cd = 'cd !* ;set prompt="! $host dirs % "' This alias is on a Red Hat Enterprise Linux server I use regularly, and I can't figure out what it does for me. Any ideas?
rhel, csh
1
214
2
https://stackoverflow.com/questions/42616793/what-does-this-alias-do-in-red-hat-linux
41,853,784
Carbon-cache fails to start with no error
Trying to install Graphite on CentOS 7, I'm met with this curious situation. When trying to start carbon-cache via service, it will report OK Starting carbon-cache (via systemctl): [ OK ] When checking its status, however I get FAILED: Usage: bin/carbon-cache.py {start|stop|restart|status} [FAILED] (Yes, failed is on another line). Carbon-cache log doesn't show anything - it doesn't even write to it. Any ideas of what might have gone wrong?
Carbon-cache fails to start with no error Trying to install Graphite on CentOS 7, I'm met with this curious situation. When trying to start carbon-cache via service, it will report OK Starting carbon-cache (via systemctl): [ OK ] When checking its status, however I get FAILED: Usage: bin/carbon-cache.py {start|stop|restart|status} [FAILED] (Yes, failed is on another line). Carbon-cache log doesn't show anything - it doesn't even write to it. Any ideas of what might have gone wrong?
centos, rhel, graphite
1
809
0
https://stackoverflow.com/questions/41853784/carbon-cache-fails-to-start-with-no-error
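The Usage: ... [FAILED] output is the carbon script printing its own usage text, which suggests the init wrapper is invoking carbon-cache.py with an argument it does not accept; running the daemon by hand usually surfaces the real error. Paths below assume a default /opt/graphite install:
sudo /opt/graphite/bin/carbon-cache.py --debug start   # foreground, errors go to the terminal
sudo /opt/graphite/bin/carbon-cache.py status          # what the init script should effectively be calling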
41,633,219
When can keytool and java directory be different
Can anyone please suggest me a scenario such that the which java and which keytool gives me two different directories(Results). I have done a lot of google but all of them say that the keytool and the jre dir are the same so both the above commands will give the same result. Any help will be much appreciated.
When can keytool and java directory be different Can anyone please suggest a scenario in which which java and which keytool give two different directories (results)? I have done a lot of googling, but everything I found says that keytool lives in the JRE directory, so both of the above commands should give the same result. Any help will be much appreciated.
unix, rhel
1
53
0
https://stackoverflow.com/questions/41633219/when-can-keytool-and-java-directory-be-different
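They diverge whenever PATH resolves the two names from different installations, for example java coming from a JRE wired through /etc/alternatives while a separately installed JDK earlier in PATH supplies keytool. That makes the situation easy to reproduce or to diagnose:
which java keytool                               # the raw PATH hits
readlink -f "$(which java)" "$(which keytool)"   # follow alternatives symlinks to the real homes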
41,135,750
Upgrading Python on RHEL 7
TL;DR How do I get Python 2.7.9+ on RHEL 7? I'm using Ansible for configuration management of a RHEL 7 server. I've run into a number of cases where an Ansible module (or whatever action it was trying to perform) requires Python 2.7.9+, but RHEL 7 (tested on 7.1 and 7.3) only come with Python 2.7.5. I see a few options: Installing through a different package repository. I was looking around for an easy way to upgrade though yum , but couldn't find one. Is there a repository that has Python 2.7.9+? Sub-question: Is there a site to search for a given version of a given package, and then find out what repository(s) it exists in? Installing Python 2.7.9+ from source. I tried to do this, but I apparently couldn't get the compilation to pick up system libraries like zlib, so a bunch of the Ansible modules ended up failing when using it as the ansible_python_interpreter . Possible that I could spend more time and get that working. Use Python 3. Ansible has experimental Python 3 support , but I'm worried about the unknown stability of this. Any advice on the easiest path forward? I'm relatively new to system administration, so very possible there's a straightforward solution I'm missing. Thanks!
Upgrading Python on RHEL 7 TL;DR How do I get Python 2.7.9+ on RHEL 7? I'm using Ansible for configuration management of a RHEL 7 server. I've run into a number of cases where an Ansible module (or whatever action it was trying to perform) requires Python 2.7.9+, but RHEL 7 (tested on 7.1 and 7.3) only come with Python 2.7.5. I see a few options: Installing through a different package repository. I was looking around for an easy way to upgrade though yum , but couldn't find one. Is there a repository that has Python 2.7.9+? Sub-question: Is there a site to search for a given version of a given package, and then find out what repository(s) it exists in? Installing Python 2.7.9+ from source. I tried to do this, but I apparently couldn't get the compilation to pick up system libraries like zlib, so a bunch of the Ansible modules ended up failing when using it as the ansible_python_interpreter . Possible that I could spend more time and get that working. Use Python 3. Ansible has experimental Python 3 support , but I'm worried about the unknown stability of this. Any advice on the easiest path forward? I'm relatively new to system administration, so very possible there's a straightforward solution I'm missing. Thanks!
python, python-2.7, ansible, rhel, rhel7
1
5,002
2
https://stackoverflow.com/questions/41135750/upgrading-python-on-rhel-7
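On RHEL the supported way to get a newer interpreter without touching /usr/bin/python is a Software Collection; whether the collection's 2.7 stream is new enough (2.7.9+) for the failing Ansible modules is worth verifying before pointing ansible_python_interpreter at it. A sketch, with the repo id depending on the subscription:
sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms
sudo yum install python27
scl enable python27 'python --version'    # check that it actually reports >= 2.7.9
ls /opt/rh/python27/root/usr/bin/python   # the path to hand to ansible_python_interpreter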
40,717,436
What is the greylist.txt in a kmod rpm package
Many device driver appear as a kmod-<drivername> rpm package in CentOS/RHEL, and there is always a greylist.txt in these package, with some kernel symbol as its contents. Here are an example: [root@localhost download]# rpm -ql kmod-qed /etc/depmod.d/qed.conf /lib/modules/3.10.0-327.36.2.el7.x86_64 /lib/modules/3.10.0-327.36.2.el7.x86_64/extra /lib/modules/3.10.0-327.36.2.el7.x86_64/extra/qed /lib/modules/3.10.0-327.36.2.el7.x86_64/extra/qed/qed.ko /usr/share/doc/kmod-qed/greylist.txt [root@localhost download]# cat /usr/share/doc/kmod-qed/greylist.txt bitmap_clear bitmap_set __fentry__ ioremap_wc pci_enable_msi_range pci_enable_msix_range pci_ioremap_bar pci_is_pcie pci_read_config_byte pci_read_config_word pci_save_state release_firmware request_firmware __smp_mb__after_atomic __stack_chk_fail synchronize_irq tasklet_init __tasklet_schedule vzalloc x86_dma_fallback_dev zlib_inflate zlib_inflateEnd zlib_inflateInit2 zlib_inflate_workspacesize I guess it must have something to do with the kernel ABI whitelist, but what exactly is the purpose of this greylist file? and how was it generated while packaging the kmod rpm. Thanks.
What is the greylist.txt in a kmod rpm package Many device drivers appear as kmod-<drivername> rpm packages in CentOS/RHEL, and there is always a greylist.txt in these packages, with some kernel symbols as its contents. Here is an example: [root@localhost download]# rpm -ql kmod-qed /etc/depmod.d/qed.conf /lib/modules/3.10.0-327.36.2.el7.x86_64 /lib/modules/3.10.0-327.36.2.el7.x86_64/extra /lib/modules/3.10.0-327.36.2.el7.x86_64/extra/qed /lib/modules/3.10.0-327.36.2.el7.x86_64/extra/qed/qed.ko /usr/share/doc/kmod-qed/greylist.txt [root@localhost download]# cat /usr/share/doc/kmod-qed/greylist.txt bitmap_clear bitmap_set __fentry__ ioremap_wc pci_enable_msi_range pci_enable_msix_range pci_ioremap_bar pci_is_pcie pci_read_config_byte pci_read_config_word pci_save_state release_firmware request_firmware __smp_mb__after_atomic __stack_chk_fail synchronize_irq tasklet_init __tasklet_schedule vzalloc x86_dma_fallback_dev zlib_inflate zlib_inflateEnd zlib_inflateInit2 zlib_inflate_workspacesize I guess it must have something to do with the kernel ABI whitelist, but what exactly is the purpose of this greylist file, and how is it generated when packaging the kmod rpm? Thanks.
centos, kernel, kernel-module, rhel
1
151
1
https://stackoverflow.com/questions/40717436/what-is-the-greylist-txt-in-a-kmod-rpm-package
39,211,511
Simulating RISCV on RHEL5 -- Missing libmpc-devel
My goal is to simulate RISCV in VCS on RHEL 5.11. I am following the lowRISC tagged memory tutorial, environment setup guide: [URL] The apt-get commands are substituted with yum and equivalent packages, per the GNU toolchain repo readme: [URL] sudo yum install autoconf automake libmpc-devel mpfr-devel gmp-devel gawk bison flex texinfo patchutils gcc gcc-c++ The libmpc-devel package cannot be found, and I cannot build GCC for RISCV without it. I believe this package can be found for RHEL6, but I do not have VCS available on a RHEL6 machine. Is there a workaround for this issue -- e.g. can I build GCC for RISCV on another machine, then do compilation and simulation on different boxes?
Simulating RISCV on RHEL5 -- Missing libmpc-devel My goal is to simulate RISCV in VCS on RHEL 5.11. I am following the lowRISC tagged memory tutorial, environment setup guide: [URL] The apt-get commands are substituted with yum and equivalent packages, per the GNU toolchain repo readme: [URL] sudo yum install autoconf automake libmpc-devel mpfr-devel gmp-devel gawk bison flex texinfo patchutils gcc gcc-c++ The libmpc-devel package cannot be found, and I cannot build GCC for RISCV without it. I believe this package can be found for RHEL6, but I do not have VCS available on a RHEL6 machine. Is there a workaround for this issue -- e.g. can I build GCC for RISCV on another machine, then do compilation and simulation on different boxes?
rhel, riscv
1
128
0
https://stackoverflow.com/questions/39211511/simulating-riscv-on-rhel5-missing-libmpc-devel
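When the distro repos simply lack libmpc-devel, GCC can instead be built with gmp/mpfr/mpc vendored into its source tree; GCC's own helper script downloads compatible versions, sidestepping the missing -devel packages. A sketch, where the directory name is an assumption about the toolchain checkout:
cd riscv-gnu-toolchain/riscv-gcc    # adjust to wherever the gcc sources live in the checkout
./contrib/download_prerequisites   # fetches gmp, mpfr, and mpc into the tree for an in-tree build
Cross-building the toolchain on another machine and copying it over can also work, provided that machine's glibc is no newer than RHEL 5.11's.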
39,131,120
heroku --version returns error
After installing Heroku CLI for RHEL, I am getting below error after running herolu --version command. What is the reason, how to solve this problem? [xxxxx@oc3651178580 ~]$ heroku --version /usr/local/heroku/lib/heroku/updater.rb:3:in require': /usr/local/heroku/lib/heroku/helpers.rb:303: syntax error, unexpected ':', expecting ')' (SyntaxError) ...de('utf-8', 'binary', invalid: :replace, undef: :replace).sp... ^ from /usr/local/heroku/lib/heroku/updater.rb:3 from /usr/local/heroku/bin/heroku:18:in require' from /usr/local/heroku/bin/heroku:18
heroku --version returns error After installing the Heroku CLI for RHEL, I am getting the error below after running the heroku --version command. What is the reason, and how do I solve this problem? [xxxxx@oc3651178580 ~]$ heroku --version /usr/local/heroku/lib/heroku/updater.rb:3:in require': /usr/local/heroku/lib/heroku/helpers.rb:303: syntax error, unexpected ':', expecting ')' (SyntaxError) ...de('utf-8', 'binary', invalid: :replace, undef: :replace).sp... ^ from /usr/local/heroku/lib/heroku/updater.rb:3 from /usr/local/heroku/bin/heroku:18:in require' from /usr/local/heroku/bin/heroku:18
linux, heroku, rhel
1
255
1
https://stackoverflow.com/questions/39131120/heroku-version-returns-error
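The failing line uses Ruby 1.9 hash syntax (invalid: :replace), so the toolbelt is almost certainly being executed by a system Ruby 1.8, which older RHEL releases ship. Confirming which interpreter answers first is quick:
ruby --version                           # a 1.8.x here would explain the SyntaxError
head -n 1 /usr/local/heroku/bin/heroku   # if it is a script, the shebang names the interpreter it wants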
38,484,732
Upgrading Python package in Redhat Software Collections (RHSCL)
I'm using SCL to manage different versions of Python on a machine, but I'm having trouble updating and installing packages at the SCL level. I am trying to upgrade pip. First I tried scl enable python27 'pip install -U pip' but I do not have permissions to touch the SCL python site-packages. Then I run sudo scl enable python27 'pip install -U pip' this completes successfully. However, this happens: $ scl enable python27 pip --version Traceback (most recent call last): File "/opt/rh/python27/root/usr/bin/pip", line 7, in <module> from pip import main ImportError: No module named pip. $ sudo scl enable python27 pip --version works, however. Upon further inspection, it looked like the site-packages/pip directory was created with the wrong permissions. What is the recommended way to manage parts of an SCL install that require root?
Upgrading Python package in Redhat Software Collections (RHSCL) I'm using SCL to manage different versions of Python on a machine, but I'm having trouble updating and installing packages at the SCL level. I am trying to upgrade pip. First I tried scl enable python27 'pip install -U pip' but I do not have permissions to touch the SCL python site-packages. Then I run sudo scl enable python27 'pip install -U pip' this completes successfully. However, this happens: $ scl enable python27 pip --version Traceback (most recent call last): File "/opt/rh/python27/root/usr/bin/pip", line 7, in <module> from pip import main ImportError: No module named pip. $ sudo scl enable python27 pip --version works, however. Upon further inspection, it looked like the site-packages/pip directory was created with the wrong permissions. What is the recommended way to manage parts of an SCL install that require root?
python, pip, rhel, software-collections
1
750
0
https://stackoverflow.com/questions/38484732/upgrading-python-package-in-redhat-software-collections-rhscl
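If the sudo upgrade ran under root's restrictive umask, the freshly written pip files in the collection's site-packages come out unreadable to ordinary users, which matches these symptoms. Re-opening the permissions, or redoing the install with umask 022, is a plausible fix; the path below is the standard python27 collection layout:
sudo chmod -R a+rX /opt/rh/python27/root/usr/lib/python2.7/site-packages/pip*
sudo sh -c 'umask 022 && scl enable python27 "pip install -U pip"'   # for future root installs into the collection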
38,253,619
Maven shade plugin detects os release with wrong output format: "os.detected.release.like."rhel"" is not legal for JDOM/XML elements
I am trying to use maven to compile the Apache Beam project. It works on OSX. But, when I switch to Linux, it produces the following error: [ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.1:shade (bundle-and-repackage) on project beam-runners-google-cloud-dataflow-java: Error creating shaded jar: The name "os.detected.release.like."rhel"" is not legal for JDOM/XML elements: XML names cannot contain the character """. -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.1:shade (bundle-and-repackage) on project beam-runners-google-cloud-dataflow-java: Error creating shaded jar: The name "os.detected.release.like."rhel"" is not legal for JDOM/XML elements: XML names cannot contain the character """. at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216) at org.apache.maven.cli.MavenCli.main(MavenCli.java:160) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:508) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415) at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356) Caused by: org.apache.maven.plugin.MojoExecutionException: Error creating shaded jar: The name "os.detected.release.like."rhel"" is not legal for JDOM/XML elements: XML names cannot contain the character """. at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:539) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) ... 19 more Caused by: org.jdom.IllegalNameException: The name "os.detected.release.like."rhel"" is not legal for JDOM/XML elements: XML names cannot contain the character """. 
at org.jdom.Element.setName(Element.java:207) at org.jdom.Element.<init>(Element.java:141) at org.jdom.DefaultJDOMFactory.element(DefaultJDOMFactory.java:134) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.updateElement(MavenJDOMWriter.java:1485) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.findAndReplaceSimpleElement(MavenJDOMWriter.java:164) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.findAndReplaceProperties(MavenJDOMWriter.java:125) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.updateProfile(MavenJDOMWriter.java:1929) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.iterateProfile(MavenJDOMWriter.java:878) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.updateModel(MavenJDOMWriter.java:1653) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.write(MavenJDOMWriter.java:2190) at org.apache.maven.plugins.shade.pom.PomWriter.write(PomWriter.java:75) at org.apache.maven.plugins.shade.mojo.ShadeMojo.rewriteDependencyReducedPomIfWeHaveReduction(ShadeMojo.java:1023) at org.apache.maven.plugins.shade.mojo.ShadeMojo.createDependencyReducedPom(ShadeMojo.java:957) at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:531) ... 21 more It seems that the maven shade plugin is trying to detect the os release id. But, the return value is wrapped with a quote mark ("rhel"), which is illegal in JDOM/XML. Below is the content in /etc/os-releases: NAME="Red Hat Enterprise Linux Server" VERSION="7.2 (Maipo)" ID="rhel" ID_LIKE="fedora" VERSION_ID="7.2" PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.2:GA:server" HOME_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.2 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.2" Unfortunately, it is a shared server, and I cannot change the content of the file. Is there any way to work around this problem?
Maven shade plugin detects os release with wrong output format: "os.detected.release.like."rhel"" is not legal for JDOM/XML elements I am trying to use maven to compile the Apache Beam project. It works on OSX. But, when I switch to Linux, it produces the following error: [ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.1:shade (bundle-and-repackage) on project beam-runners-google-cloud-dataflow-java: Error creating shaded jar: The name "os.detected.release.like."rhel"" is not legal for JDOM/XML elements: XML names cannot contain the character """. -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.4.1:shade (bundle-and-repackage) on project beam-runners-google-cloud-dataflow-java: Error creating shaded jar: The name "os.detected.release.like."rhel"" is not legal for JDOM/XML elements: XML names cannot contain the character """. at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155) at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584) at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:216) at org.apache.maven.cli.MavenCli.main(MavenCli.java:160) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) at java.lang.reflect.Method.invoke(Method.java:508) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289) at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415) at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356) Caused by: org.apache.maven.plugin.MojoExecutionException: Error creating shaded jar: The name "os.detected.release.like."rhel"" is not legal for JDOM/XML elements: XML names cannot contain the character """. at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:539) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) ... 19 more Caused by: org.jdom.IllegalNameException: The name "os.detected.release.like."rhel"" is not legal for JDOM/XML elements: XML names cannot contain the character """.
at org.jdom.Element.setName(Element.java:207) at org.jdom.Element.<init>(Element.java:141) at org.jdom.DefaultJDOMFactory.element(DefaultJDOMFactory.java:134) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.updateElement(MavenJDOMWriter.java:1485) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.findAndReplaceSimpleElement(MavenJDOMWriter.java:164) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.findAndReplaceProperties(MavenJDOMWriter.java:125) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.updateProfile(MavenJDOMWriter.java:1929) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.iterateProfile(MavenJDOMWriter.java:878) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.updateModel(MavenJDOMWriter.java:1653) at org.apache.maven.plugins.shade.pom.MavenJDOMWriter.write(MavenJDOMWriter.java:2190) at org.apache.maven.plugins.shade.pom.PomWriter.write(PomWriter.java:75) at org.apache.maven.plugins.shade.mojo.ShadeMojo.rewriteDependencyReducedPomIfWeHaveReduction(ShadeMojo.java:1023) at org.apache.maven.plugins.shade.mojo.ShadeMojo.createDependencyReducedPom(ShadeMojo.java:957) at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:531) ... 21 more It seems that the maven shade plugin is trying to detect the os release id. But, the return value is wrapped with a quote mark ("rhel"), which is illegal in JDOM/XML. Below is the content in /etc/os-releases: NAME="Red Hat Enterprise Linux Server" VERSION="7.2 (Maipo)" ID="rhel" ID_LIKE="fedora" VERSION_ID="7.2" PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.2:GA:server" HOME_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.2 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.2" Unfortunately, it is a shared server, and I cannot change the content of the file. Is there any way to work around this problem?
java, xml, linux, maven, rhel
1
138
0
https://stackoverflow.com/questions/38253619/maven-shade-plugin-detects-os-release-with-wrong-output-format-os-detected-rel
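The os.detected.* properties are injected by the os-maven-plugin build extension, and the doubled quotes suggest an extension release that copied the literal quotes from /etc/os-release ID values into the property name, which the shade plugin then cannot write back as XML. Since the file cannot be changed here, upgrading the extension pinned in the project's build is the likelier fix; the quoting behaviour itself is easy to see from the shell:
grep '^ID=' /etc/os-release | cut -d= -f2   # raw value, quotes included: "rhel"
. /etc/os-release && echo "$ID"             # a correct parser strips them: rhel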
37,219,399
Bash Script that gets a count of running processes and then emails if over a certain threshold
I am trying to create a script to grab the current number of running processes and the if that number is over 1000 then send me an email. I am trying to do this in a bash script that I will just use a cron job to call it. The code I am using is below and I am sure I just have something out of place and just need another set of eyes. PCOUNT=$(cat /proc/loadavg|awk '{print $4}'|awk -F/ '{print $2}') if [$PCOUNT > 100]; then mail -s "Process Count" email@example.com fi
Bash Script that gets a count of running processes and then emails if over a certain threshold I am trying to create a script to grab the current number of running processes and then, if that number is over 1000, send me an email. I am doing this in a bash script that I will call from a cron job. The code I am using is below; I am sure I just have something out of place and need another set of eyes. PCOUNT=$(cat /proc/loadavg|awk '{print $4}'|awk -F/ '{print $2}') if [$PCOUNT > 100]; then mail -s "Process Count" email@example.com fi
bash, rhel
1
224
3
https://stackoverflow.com/questions/37219399/bash-script-that-gets-a-count-of-running-processes-and-then-emails-if-over-a-cer
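A corrected sketch of the same script: the test needs spaces inside the brackets, -gt for numeric comparison (an unquoted > inside [ ] is a redirection), a body piped into mail, and the threshold set to the 1000 stated above:
PCOUNT=$(awk '{split($4, a, "/"); print a[2]}' /proc/loadavg)   # total process count in one awk
if [ "$PCOUNT" -gt 1000 ]; then
    echo "Process count is $PCOUNT" | mail -s "Process Count" email@example.com
fi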
37,205,778
SAS Python integration
I'm testing the python integration in sas,I clone the github project [URL] .I compiled the java class with java 7 then I tried to execute a python script in this way: *** WORKING DIRECTORY (----- USER UPDATE NEEDED -----); %let WORK_DIR = /tmp; *** SYSTEM PYTHON LOCATION (----- USER UPDATE NEEDED -----); %let PYTHON_EXEC_COMMAND = /usr/bin/python2.6; *** JAVA LIBRARIES/CLASS FILES LOCATION; %let JAVA_BIN_DIR = /tmp/SAS.jar; %put &JAVA_BIN_DIR; OPTIONS SET=CLASSPATH "&JAVA_BIN_DIR."; options linesize = MAX; ods html close; ods listing; *** VALIDATE JAVA CLASSPATH; data _null_; length _x1 $32767; _x1 = strip(sysget("CLASSPATH")); _x2 = index(upcase(trim(_x1)), %upcase("&JAVA_BIN_DIR")); if _x2 = 0 then put "ERROR: Invalid Java Classpath."; run; /*** Part I: Python ***/ data _null_; length rtn_val 8; *** Python program takes working directory as first argument; python_pgm = "&WORK_DIR./convert_ch.py"; python_arg1 = "&WORK_DIR"; python_call = cat('"', trim(python_pgm), '" "', trim(python_arg1), '"'); declare javaobj j("dev.SASJavaExec", "&PYTHON_EXEC_COMMAND", python_call); j.callIntMethod("executeProcess", rtn_val); run; It works perfectly but if I try to change the python version: %let PYTHON_EXEC_COMMAND = /opt/rh/python27/root/usr/bin/python27; It doesn't work: mag 13, 2016 10:48:35 AM dev.SASJavaExec executeProcess Informazioni: Executing [/opt/rh/python27/root/usr/bin/python27, /data/shareddata/projects/smartdata/flat_file/tmp/convert_ch.py, /data/shareddata/projects/smartdata/flat_file/tmp] ... mag 13, 2016 10:48:35 AM dev.SASJavaExec executeProcess Informazioni: Starting external process ... mag 13, 2016 10:48:35 AM dev.SASJavaExec executeProcess Grave: Failed to start external Process: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.forkAndExec(Native Method) at java.lang.UNIXProcess.<init>(UNIXProcess.java:135) at java.lang.ProcessImpl.start(ProcessImpl.java:130) at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022) at dev.SASJavaExec.executeProcess(SASJavaExec.java:151) I'm using RHEL 6.6(Santiago). By default the Python's version is 2.6 and I install the 2.7 in this way: sudo sh -c 'wget -qO- [URL] >> /etc/yum.repos.d/scl.repo' yum install python27 maybe I made a mistake during the installation of python 2.7?
SAS Python integration I'm testing the python integration in sas,I clone the github project [URL] .I compiled the java class with java 7 then I tried to execute a python script in this way: *** WORKING DIRECTORY (----- USER UPDATE NEEDED -----); %let WORK_DIR = /tmp; *** SYSTEM PYTHON LOCATION (----- USER UPDATE NEEDED -----); %let PYTHON_EXEC_COMMAND = /usr/bin/python2.6; *** JAVA LIBRARIES/CLASS FILES LOCATION; %let JAVA_BIN_DIR = /tmp/SAS.jar; %put &JAVA_BIN_DIR; OPTIONS SET=CLASSPATH "&JAVA_BIN_DIR."; options linesize = MAX; ods html close; ods listing; *** VALIDATE JAVA CLASSPATH; data _null_; length _x1 $32767; _x1 = strip(sysget("CLASSPATH")); _x2 = index(upcase(trim(_x1)), %upcase("&JAVA_BIN_DIR")); if _x2 = 0 then put "ERROR: Invalid Java Classpath."; run; /*** Part I: Python ***/ data _null_; length rtn_val 8; *** Python program takes working directory as first argument; python_pgm = "&WORK_DIR./convert_ch.py"; python_arg1 = "&WORK_DIR"; python_call = cat('"', trim(python_pgm), '" "', trim(python_arg1), '"'); declare javaobj j("dev.SASJavaExec", "&PYTHON_EXEC_COMMAND", python_call); j.callIntMethod("executeProcess", rtn_val); run; It works perfectly but if I try to change the python version: %let PYTHON_EXEC_COMMAND = /opt/rh/python27/root/usr/bin/python27; It doesn't work: mag 13, 2016 10:48:35 AM dev.SASJavaExec executeProcess Informazioni: Executing [/opt/rh/python27/root/usr/bin/python27, /data/shareddata/projects/smartdata/flat_file/tmp/convert_ch.py, /data/shareddata/projects/smartdata/flat_file/tmp] ... mag 13, 2016 10:48:35 AM dev.SASJavaExec executeProcess Informazioni: Starting external process ... mag 13, 2016 10:48:35 AM dev.SASJavaExec executeProcess Grave: Failed to start external Process: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.forkAndExec(Native Method) at java.lang.UNIXProcess.<init>(UNIXProcess.java:135) at java.lang.ProcessImpl.start(ProcessImpl.java:130) at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022) at dev.SASJavaExec.executeProcess(SASJavaExec.java:151) I'm using RHEL 6.6(Santiago). By default the Python's version is 2.6 and I install the 2.7 in this way: sudo sh -c 'wget -qO- [URL] >> /etc/yum.repos.d/scl.repo' yum install python27 maybe I made a mistake during the installation of python 2.7?
python, python-2.7, sas, rhel
1
951
0
https://stackoverflow.com/questions/37205778/sas-python-integration
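Two details of the software-collection layout are worth checking before blaming the install: the interpreter inside the collection is normally named python (or python2.7), not python27, which by itself would produce error=2, No such file or directory; and it expects the collection's environment, which scl enable sets up:
ls /opt/rh/python27/root/usr/bin/        # confirm the actual executable name
scl enable python27 'python --version'   # run it with the collection's LD_LIBRARY_PATH
If the name is the problem, pointing PYTHON_EXEC_COMMAND at .../usr/bin/python, or at a small wrapper script that calls scl enable, is a plausible fix.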
36,946,095
egrep working differently on SunOS and RHEL
I've an egrep statement(as a part of a script) purpose of which is to check for any characters other than ASCII characters 0-9, a-z, A-Z, space, enter, newline . Also if text is enclosed in [] , it can contain 0-9, a-z, A-Z, _, = which works perfectly as I expect on a RHEL box: uname -a Linux labeir1 2.6.18-371.el5 #1 SMP Thu Sep 5 21:21:44 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux cat /etc/redhat\-release Red Hat Enterprise Linux Server release 5.10 (Tikanga) The egrep statement: egrep -v "^[ ]*([a-zA-Z0-9\t\n\v\f\r ]*|\[{1}[_a-zA-Z0-9\t\n\v\f\r ]*(=[a-zA-Z0-9\t\n\v\f\r ]*)?\]{1})[ ]*$" file1 File1: [FEATURE_ID=2] [FEATURE_REV=1] [NO_OF_BYTES=18] 001203658080400160b9d0ae45000080 [CRC] c068 Output: Nothing, as the patterns of file1 matches But when I use this on a SunOS system, it fails to behave the same way: uname -a SunOS prodOTA1 5.9 Generic_118558-38 sun4us sparc FJSV,GPUZC-M Output on SUNOS: [FEATURE_ID=2] [FEATURE_REV=1] [NO_OF_BYTES=18] [CRC] Can you please help to use the exact equivalent of egrep -v "^[ ]*([a-zA-Z0-9\t\n\v\f\r ]*|\[{1}[_a-zA-Z0-9\t\n\v\f\r ]*(=[a-zA-Z0-9\t\n\v\f\r ]*)?\]{1})[ ]*$" in SunOS, so that I get the exact output as in RHEL box.
egrep working differently on SunOS and RHEL I have an egrep statement (as part of a script) whose purpose is to check for any characters other than the ASCII characters 0-9, a-z, A-Z, space, enter, newline. Also, if text is enclosed in [] it can contain 0-9, a-z, A-Z, _, =. This works perfectly, as I expect, on a RHEL box: uname -a Linux labeir1 2.6.18-371.el5 #1 SMP Thu Sep 5 21:21:44 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux cat /etc/redhat\-release Red Hat Enterprise Linux Server release 5.10 (Tikanga) The egrep statement: egrep -v "^[ ]*([a-zA-Z0-9\t\n\v\f\r ]*|\[{1}[_a-zA-Z0-9\t\n\v\f\r ]*(=[a-zA-Z0-9\t\n\v\f\r ]*)?\]{1})[ ]*$" file1 File1: [FEATURE_ID=2] [FEATURE_REV=1] [NO_OF_BYTES=18] 001203658080400160b9d0ae45000080 [CRC] c068 Output: Nothing, as the patterns in file1 match. But when I use this on a SunOS system, it fails to behave the same way: uname -a SunOS prodOTA1 5.9 Generic_118558-38 sun4us sparc FJSV,GPUZC-M Output on SUNOS: [FEATURE_ID=2] [FEATURE_REV=1] [NO_OF_BYTES=18] [CRC] Can you please help me find the exact equivalent of egrep -v "^[ ]*([a-zA-Z0-9\t\n\v\f\r ]*|\[{1}[_a-zA-Z0-9\t\n\v\f\r ]*(=[a-zA-Z0-9\t\n\v\f\r ]*)?\]{1})[ ]*$" on SunOS, so that I get the same output as on the RHEL box?
regex, grep, rhel, sunos
1
59
0
https://stackoverflow.com/questions/36946095/egrep-working-differently-on-sunos-and-rhel
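Classic Solaris /usr/bin/egrep predates POSIX EREs, so the {1} intervals and backslash escapes such as \t are not understood there, which is why the bracket expressions stop matching. Using the XPG4 egrep together with POSIX character classes is usually portable to both systems; a sketch of an equivalent pattern (the redundant {1} dropped):
/usr/xpg4/bin/egrep -v "^[ ]*([a-zA-Z0-9[:space:]]*|\[[_a-zA-Z0-9[:space:]]*(=[a-zA-Z0-9[:space:]]*)?\])[ ]*$" file1
On the RHEL box the same pattern works with the ordinary egrep, so one pattern can serve both if the egrep path is chosen per platform.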
36,773,681
pacemaker corosynce lsb resource script( Sybase Database ASE Server and Backup Server)
I'm trying to create a lsb resource with pcs on rhel7.2 The "sybase" script is about Sybase Database ASE Server and Backup Server Startup & Stop & Restart $ status script The script is all pass Init Script LSB Compliance , but it doesn't work! Error [root@pldbsv01 ~]# pcs resource create ase157_res lsb:sybase op monitor interval=30s [root@pldbsv01 ~]# pcs constraint order start gfs2sybasedata_res-clone then ase157_res Adding gfs2sybasedata_res-clone ase157_res (kind: Mandatory) (Options: first-action=start then-action=start) [root@pldbsv01 ~]# pcs constraint colocation add ase157_res with gfs2sybasedata_res-clone [root@pldbsv01 ~]# pcs status Cluster name: PLDBSV_CLUSTER Last updated: Wed Apr 20 12:22:51 2016 Last change: Wed Apr 20 12:20:18 2016 by root via cibadmin on pldbsv01-cr Stack: corosync Current DC: pldbsv01-cr (version 1.1.13-10.el7-44eb2dd) - partition with quorum 2 nodes and 10 resources configured Online: [ pldbsv01-cr pldbsv02-cr ] Full list of resources: Resource Group: PLDBSV_RESOURCE PLDBSV_VIP (ocf::heartbeat:IPaddr2): Started pldbsv02-cr idrac-pldbsv01 (stonith:fence_ipmilan): Started pldbsv01-cr idrac-pldbsv02 (stonith:fence_ipmilan): Started pldbsv01-cr Clone Set: dlm-clone [dlm] Started: [ pldbsv02-cr ] Stopped: [ pldbsv01-cr ] Clone Set: clvmd-clone [clvmd] Started: [ pldbsv02-cr ] Stopped: [ pldbsv01-cr ] Clone Set: gfs2sybasedata_res-clone [gfs2sybasedata_res] Started: [ pldbsv02-cr ] Stopped: [ pldbsv01-cr ] ase157_res (lsb:sybase): Stopped [B]Failed Actions: * ase157_res_start_0 on pldbsv02-cr 'unknown error' (1): call=41, status=Timed Out, exitreason='none', last-rc-change='Wed Apr 20 12:21:35 2016', queued=0ms, exec=20002ms * ase157_res_start_0 on pldbsv01-cr 'unknown error' (1): call=45, status=Timed Out, exitreason='none', last-rc-change='Wed Apr 20 12:19:56 2016', queued=0ms, exec=20002ms[/B] PCSD Status: pldbsv01-cr: Online pldbsv02-cr: Online Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled My script #!/bin/sh # # Startup script for Sybase ASE # Description: Service script for starting/stopping/monitoring \ # Sybase Adaptive Server on: \ # Red Hat Enterprise Linux 6 \ # Release date 2015-08-11 # Version 1 # Processname: dataserver # (1) Before running this script, we assume that user has installed # Sybase ASE 15.0.2 or higher version on the machine. # (2) This script should be put under /etc/init.d. Its owner should be "root" with # execution permission. # (3) You must define the variables "SYBASE" "SERVER" BACKUPSERVER" # (4) You can change Adaptive Server login name and password in line 96, # defaults are "sa" and "sybase" SYBASE=/opt/sybase SERVER=PLDBSV BACKUPSERVER=PLDBSV_BS # Source environment variables. . $SYBASE/SYBASE.sh # Find the name of the script NAME=basename $0 # For SELinux we need to use 'runuser' not 'su' if [ -x /sbin/runuser ] then SU=runuser else SU=su fi # Start sybase Adaptive Server and Backup Server start() { SYBASE_ASE_START=$"Starting ${NAME} Adaptive Server: " $SU sybase -c ". $SYBASE/SYBASE.sh; $SYBASE/$SYBASE_ASE/install/startserver \ -f $SYBASE/$SYBASE_ASE/install/RUN_${SERVER} > /dev/null" ret=$? if [ $ret -eq 0 ] then echo -n "$SYBASE_ASE_START [" echo -n -e "\033[32m Success \033[0m" echo "]" else echo -n "$SYBASE_ASE_START [" echo -n -e "\033[31m Failed \033[0m" echo "]" exit 1 fi for ((i=1; i<31; i++)) do sleep 1 echo "waitting $i sec to prepare Backup Server!" done SYBASE_BS_START=$"Starting ${NAME} Backup Server: " $SU sybase -c ". 
$SYBASE/SYBASE.sh; $SYBASE/$SYBASE_ASE/install/startserver \ -f $SYBASE/$SYBASE_ASE/install/RUN_${BACKUPSERVER} > /dev/null" ret=$? if [ $ret -eq 0 ] then echo -n "$SYBASE_BS_START [" echo -n -e "\033[32m Success \033[0m" echo "]" else echo -n "$SYBASE_BS_START [" echo -n -e "\033[31m Failed \033[0m" echo "]" exit 1 fi } # Stop sybase Adaptive Server and Backup Server stop() { SYBASE_BS_STOP=$"Stopping ${NAME} Backup Server" pid=$(pidof backupserver) if [ $pid > 0 ] then $SU root -c "kill -9 $pid > /dev/null" echo -n "$SYBASE_BS_STOP [" echo -n -e "\033[32m Success \033[0m" echo "]" else echo -n "$SYBASE_BS_STOP [" echo -n -e "\033[31m Failed \033[0m" echo "]" fi SYBASE_ASE_STOP=$"Stopping ${NAME} Adaptive Server: " $SU sybase -c ". $SYBASE/SYBASE.sh; isql -S $SERVER -U sa -P sybase < \ $SYBASE/$SYBASE_ASE/upgrade/shutdown.sql > /dev/null" ret=$? if [ $ret -eq 0 ] then echo -n "$SYBASE_ASE_STOP [" echo -n -e "\033[32m Success \033[0m" echo "]" else echo -n "$SYBASE_ASE_STOP [" echo -n -e "\033[31m Failed \033[0m" echo "]" exit 0 fi } restart() { stop start } # Check Sybase Adaptive Server and Backup Server status status() { dataserver=$(pidof dataserver) backupserver=$(pidof backupserver) if [ -n "$dataserver" ]; then echo "sybase Adaptive Server is running!" else echo "sybase Adaptive Server is stopped!" fi if [ -n "$backupserver" ]; then echo "sybase Backup Server is running!" && exit 0 || exit $? else echo "sybase Backup Server is stopped!" exit 3 fi } case "$1" in start) start ;; stop) stop ;; restart) restart ;; status) status ;; *) echo $"Usage: $0 {start|stop|restart|status}" exit 1 esac How can I solve this problem? Thank you.
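A hedged reading of the failure output: both failed actions show status=Timed Out with exec=20002ms, i.e. the start operation hit Pacemaker's default 20 s action timeout, while the script itself sleeps 30 s before launching the Backup Server. A minimal sketch of a fix that keeps the script unchanged (resource name taken from the question; the exact timeout value is illustrative):
pcs resource update ase157_res op start timeout=120s
pcs resource cleanup ase157_res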
bash, rhel, high-availability, pacemaker, lsb
1
747
0
https://stackoverflow.com/questions/36773681/pacemaker-corosynce-lsb-resource-script-sybase-database-ase-server-and-backup-s
35,853,061
Why don't my QIcons show in my QMenu when I run my PyQt application as root?
I have a PyQt4 application I am working on and I have an issue that I can't seem to solve. I have a bunch of QActions that have a valid QIcon set. These QActions are on a QToolBar, as well as a QMenu. The QIcons show on the QToolBar and the QMenu just fine when run as a normal user, but if I run the application with sudo or when logged in as root, the QIcons show on the QToolBar, but not in the QMenu. I am running this on RHEL 6.7 and have checked permissions of all my files. Any idea what might be happening?
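A hedged sketch: some desktop settings for the root account suppress icons in menus while leaving toolbars alone; if that is what is happening here, forcing menu icons per action is a quick test (menu is a hypothetical reference to the QMenu in question; QAction.setIconVisibleInMenu exists in Qt 4.4+):
for action in menu.actions():
    action.setIconVisibleInMenu(True)  # override the desktop's show-icons-in-menus setting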
python, linux, pyqt4, rhel, qicon
1
67
1
https://stackoverflow.com/questions/35853061/why-dont-my-qicons-show-in-my-qmenu-when-i-run-my-pyqt-application-as-root
35,679,679
invalid UTF8 string when I use QJsonDocument::fromJson
On the latest version of Ubuntu, coded with Qt Creator; the Qt version is 5.4.1. When I use QJsonDocument::fromJson to create JSON objects, the JSON object is created successfully. The code is as follows: #include <QJsonDocument> #include <QDebug> int main(int argc,char **argv){ QString str = "{\"chinese\":\"china 中国\"}"; QJsonDocument json; QJsonParseError error; json = QJsonDocument::fromJson(str.toUtf8(),&error); qDebug()<<json.toJson(); //In Rhel5 Result:NULL qDebug()<<error.errorString(); //In Rhel5 Result:invalid UTF8 string" str = "{\"english\":\"china english\"}"; json = QJsonDocument::fromJson(str.toUtf8(),&error); qDebug()<<json.toJson(); //In Rhel5 Result successful : // { // "english": "china english" // } qDebug()<<error.errorString();//In Rhel5 Result:no error occurred"" return 0; } However, when I run the exact same code on RHEL 5, I get this error description: invalid UTF8 string. On RHEL 5 I compiled the Qt 5 libraries myself, so Qt 5 programs can be executed on RHEL 5. When the JSON string contains only English, RHEL 5 runs properly and the JSON string is parsed correctly. In addition, RHEL 5 supports UTF-8 encoding.
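A hedged sketch: if the two toolchains treat the source literal's bytes differently (the older RHEL 5 build chain being the suspect), constructing the string explicitly from UTF-8 bytes removes that variable; \xE4\xB8\xAD\xE5\x9B\xBD is the UTF-8 encoding of 中国:
// build the literal from explicit UTF-8 bytes instead of relying on source encoding
QString str = QString::fromUtf8("{\"chinese\":\"china \xE4\xB8\xAD\xE5\x9B\xBD\"}");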
json, qt, utf-8, rhel, rhel5
1
1,652
0
https://stackoverflow.com/questions/35679679/invalid-utf8-string-when-i-use-qjsondocumentfromjson
35,002,007
Build and install RabbitMQ on RHEL
I need to install a RabbitMQ cluster on one of our campus clusters where I don't have root access. My plan is to build Erlang and RabbitMQ from sources, as installing the binary packages for both requires root permission. I downloaded Erlang version 18.2 and built it on a RHEL node. Then I tried to install the RabbitMQ 3.6.0 sources, but it fails with the following error. After googling, I found that the erlang-src package is a prerequisite of RabbitMQ. How can I provide that? (Remember I don't have root access, which means yum install erlang-src doesn't work.) make[1]: Entering directory `/N/u/syodage/tools/rabbitmq-server-3.6.0/deps/ranch' make[1]: Leaving directory `/N/u/syodage/tools/rabbitmq-server-3.6.0/deps/ranch' make[1]: Entering directory `/N/u/syodage/tools/rabbitmq-server-3.6.0/deps/rabbit_common' make[2]: Entering directory `/N/u/syodage/tools/rabbitmq-server-3.6.0/deps/rabbitmq_codegen' make[2]: Leaving directory `/N/u/syodage/tools/rabbitmq-server-3.6.0/deps/rabbitmq_codegen' ERLC app_utils.erl credit_flow.erl gen_server2.erl mirrored_supervisor.erl mochijson2.erl pmon.erl priority_queue.erl rabbit_amqqueue.erl rabbit_auth_mechanism.erl rabbit_authn_backend.erl rabbit_authz_backend.erl rabbit_backing_queue.erl rabbit_basic.erl rabbit_binary_generator.erl rabbit_binary_parser.erl rabbit_channel.erl rabbit_channel_interceptor.erl rabbit_command_assembler.erl rabbit_control_misc.erl rabbit_data_coercion.erl rabbit_event.erl rabbit_exchange_decorator.erl rabbit_exchange_type.erl rabbit_framing_amqp_0_8.erl rabbit_framing_amqp_0_9_1.erl rabbit_heartbeat.erl rabbit_misc.erl rabbit_msg_store_index.erl rabbit_net.erl rabbit_networking.erl rabbit_nodes.erl rabbit_password_hashing.erl rabbit_policy_validator.erl rabbit_queue_collector.erl rabbit_queue_decorator.erl rabbit_queue_master_locator.erl rabbit_reader.erl rabbit_runtime_parameter.erl rabbit_writer.erl ssl_compat.erl supervisor2.erl time_compat.erl src/rabbit_net.erl:27: can't find include lib "ssl/src/ssl_api.hrl" make[2]: *** [ebin/rabbit_common.app] Error 1
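A hedged sketch: the missing ssl/src/ssl_api.hrl usually means the Erlang in $PATH was installed without its library sources; building OTP from source into a home-directory prefix keeps them available and needs no root (paths illustrative):
cd otp_src_18.2
./configure --prefix=$HOME/otp-18.2 --with-ssl
make && make install
export PATH=$HOME/otp-18.2/bin:$PATH   # then rebuild rabbitmq-server with this Erlang first in PATH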
erlang, rabbitmq, rhel
1
609
0
https://stackoverflow.com/questions/35002007/build-and-install-rabbitmq-on-rhel
34,769,894
Segmentation fault in mono using pinvoke on liblvm2app.so
I'm trying to write a C# program in Mono to get information about the vgs (volume groups), lvs (logical volumes) and pvs (physical volumes). I'm using a CentOS 7 system. To get information about lvm, an API is defined ( [URL] ). The example program in C++ works perfectly ( [URL] ) with the API linked above. But I'm having trouble using P/Invoke for the API calls from my C# program. I have managed to get version data, therefore I assume P/Invoking the library is possible. lvm2app.h Wrapper class: public class liblvm2wrapper { [DllImport ("liblvm2app.so")] public static extern IntPtr lvm_init (IntPtr system_dir); [DllImport ("liblvm2app.so")] public static extern IntPtr lvm_errmsg (IntPtr libh); [DllImport ("liblvm2app.so", CharSet = CharSet.Ansi)] public static extern IntPtr lvm_library_get_version (); [DllImport ("liblvm2app.so", CharSet = CharSet.Ansi)] public static extern IntPtr lvm_vg_open(IntPtr libh, [MarshalAs (UnmanagedType.LPStr)]string vgname, [MarshalAs (UnmanagedType.LPStr)]string mode, int flags); [DllImport ("liblvm2app.so")] public static extern void lvm_quit (IntPtr libh); } Main class Program { static void Main(string[] args) { IntPtr lvmHandle = liblvm2wrapper.lvm_init(IntPtr.Zero); if(lvmHandle == IntPtr.Zero) return; IntPtr versionPtr = liblvm2wrapper.lvm_library_get_version(); string version = Marshal.PtrToStringAnsi(versionPtr); // works // segFault occurs at the next line of code, if the name describes an existing vg // bash: line 1: 43812 Segmentation fault (core dumped) '/usr/bin/mono' --debug --debugger-agent=transport=dt_socket,address=127.0.0.1:51779 "home/user/Documents/test/bin/x64/Debug/test.exe" IntPtr vgHandle = liblvm2wrapper.lvm_vg_open(lvmHandle , "volume_group", "r", 0); if(vgHandle == IntPtr.Zero) { // if the string ("volume_group") passed to the function lvm_vg_open does not describe an existing volume group, an empty pointer is returned and a valid error message is obtained by calling the function below (lvm_errmsg) Console.WriteLine(Marshal.PtrToStringAnsi(liblvm2wrapper.lvm_errmsg(lvmHandle))); } liblvm2wrapper.lvm_quit(lvmHandle); } } From what I have heard, segmentation faults should be analyzed using gdb. I followed the example from http: //pastebin.com/Kza9kemJ (can't post a third link because this is a new account) and got the following output: [root@vm user]# gdb --args mono '/home/user/Documents/test/bin/x64/Debug/test.exe' GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-64.el7 Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <[URL] This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". For bug reporting instructions, please see: <[URL] Reading symbols from /usr/bin/mono-sgen...(no debugging symbols found)...done. warning: File "/usr/bin/mono-sgen-gdb.py" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load:/usr/bin/mono-gdb.py". To enable execution of this file add add-auto-load-safe-path /usr/bin/mono-sgen-gdb.py line to your configuration file "/root/.gdbinit". To completely disable this security protection add set auto-load safe-path / line to your configuration file "/root/.gdbinit". For more information about this security protection see the "Auto-loading safe path" section in the GDB manual. 
E.g., run from the shell: info "(gdb)Auto-loading safe path" (gdb) r Starting program: /usr/bin/mono /home/user/Documents/test/bin/x64/Debug/test.exe [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". [New Thread 0x7fffee916700 (LWP 43646)] 2.02.130(2)-RHEL7 (2015-10-14) [Thread 0x7fffee916700 (LWP 43646) exited] [Inferior 1 (process 43645) exited normally] Program received signal SIGSEGV, Segmentation fault. 0x00007fffee3fda0c in lvm_lv_get_name () from /lib64/liblvm2app.so (gdb) bt #0 0x00007fffee3fda0c in lvm_lv_get_name () from /lib64/liblvm2app.so #1 0x00000000400164e0 in ?? () #2 0x0000000000b22d18 in ?? () #3 0x00007ffff000b378 in ?? () #4 0x0000000000000000 in ?? () (gdb) info sharedlibrary liblvm2app.so From To Syms Read Shared Object Library 0x00007fffee3fcc20 0x00007fffee4a02c8 Yes (*) /lib64/liblvm2app.so (*): Shared library is missing debugging information. (gdb) info register rax 0x0 0 rbx 0xb22d70 11677040 rcx 0xa12590 10560912 rdx 0x0 0 rsi 0x7ffff000bae8 140737219967720 rdi 0xb22d70 11677040 rbp 0x7fffffffdbb0 0x7fffffffdbb0 rsp 0x7fffffffda80 0x7fffffffda80 r8 0x0 0 r9 0xaabc 43708 r10 0x7ffff72947b8 140737340065720 r11 0x7fffee3fd9f0 140737190550000 r12 0x7ffff000b378 140737219965816 r13 0x0 0 r14 0x7fffffffdf10 140737488346896 r15 0xb22d70 11677040 rip 0x7fffee3fda0c 0x7fffee3fda0c <lvm_lv_get_name+28> eflags 0x10246 [ PF ZF IF RF ] cs 0x33 51 ss 0x2b 43 ds 0x0 0 es 0x0 0 fs 0x0 0 ---Type <return> to continue, or q <return> to quit--- gs 0x0 0 (gdb) info frame Stack level 0, frame at 0x7fffffffdaa0: rip = 0x7fffee3fda0c in lvm_lv_get_name; saved rip 0x400164e0 called by frame at 0x7fffffffdaa8 Arglist at 0x7fffffffda78, args: Locals at 0x7fffffffda78, Previous frame's sp is 0x7fffffffdaa0 Saved registers: rbx at 0x7fffffffda90, rip at 0x7fffffffda98 (gdb) From this output, a possible conclusion would be that the segmentation fault occurs in the call to lvm_lv_get_name. The strange thing about that is that I don't call this function. So I'm at a dead end right now and would appreciate your help. Regards
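A hedged sketch: when the version call works but vg calls crash inside the native library, the P/Invoke declaration details are the usual suspects; being explicit about the entry point, calling convention, and the width of the flags argument narrows things down (names mirror the question; whether this fixes this particular crash is an assumption):
// explicit entry point, calling convention, and unsigned 32-bit flags
[DllImport("liblvm2app.so", EntryPoint = "lvm_vg_open", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr lvm_vg_open(IntPtr libh, [MarshalAs(UnmanagedType.LPStr)] string vgname, [MarshalAs(UnmanagedType.LPStr)] string mode, uint flags);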
mono, pinvoke, rhel, lvm
1
458
0
https://stackoverflow.com/questions/34769894/segmentation-fault-in-mono-using-pinvoke-on-liblvm2app-so
34,398,139
How to deploy a custom UIPlugin.war with the oVirt engine?
I have been working on a project where I need to make a UI plugin that gets added to oVirt. The front end of the UI plugin has been added by following this URL: [URL] but I am not sure where I should put my backend code, which has been written using a Java servlet, so that the requests defined in my UI plugin can be handled by my servlet.
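A hedged sketch, with paths assumed rather than taken from oVirt documentation: a servlet backend packaged as a .war is typically hot-deployed by placing it in the engine's JBoss deployments directory and adding a marker file:
cp my-uiplugin-backend.war /path/to/jboss/standalone/deployments/
touch /path/to/jboss/standalone/deployments/my-uiplugin-backend.war.dodeploy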
servlets, jboss, rhel, kvm
1
43
0
https://stackoverflow.com/questions/34398139/how-to-deploy-custom-uiplugin-war-with-the-ovirt-engine
33,405,995
System V message queue sends are very slow with SELinux
My System V message queue sends have slowed down significantly with SELinux enabled. With SELinux disabled my test program can do 1000 sends (msgsnd) in about 1,500 microseconds. With SELinux enabled it takes about 25,000 microseconds to do 1,000 sends. Is there anything I can do to speed this up? #include <stdio.h> #include <sys/time.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/msg.h> unsigned long long gettime() { struct timeval gtime; unsigned long long usecs; gettimeofday (&gtime,NULL); usecs = (((unsigned long long)gtime.tv_sec) * 1000000) + gtime.tv_usec; return usecs; } key_t keyval = 102715; int qid; if((qid = msgget( keyval, IPC_CREAT | 0660 )) == -1) { printf("msgget failed\n"); return -1; } printf("qid = %d\n",qid); struct XmitMsg { long mtype; int data1; int data2; } msg; struct XmitMsg xmitmsg; xmitmsg.mtype=1; xmitmsg.data1=10; xmitmsg.data2=20; int length = sizeof(struct XmitMsg) - sizeof(long); unsigned long long beforeTime=gettime(); for (int i=0; i<1000; i++) { int result = msgsnd( qid, &xmitmsg, length, 0); if (result != 0 ) printf("result=%d\n",result); } unsigned long long afterTime=gettime(); printf("time diff=%llu\n",afterTime-beforeTime);
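A hedged diagnostic sketch: if each msgsnd is not only checked but also audited under the loaded policy, the audit path can dominate the cost; looking for per-send AVC records is a cheap first test (msgq_test is a hypothetical name standing in for the question's test program):
ausearch -m avc -ts recent | grep -i msgq
# temporarily surface dontaudit'ed checks while timing, then restore:
semodule -DB && time ./msgq_test && semodule -B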
ipc, rhel, selinux
1
229
0
https://stackoverflow.com/questions/33405995/system-v-message-queue-sends-are-very-slow-with-selinux
33,339,213
Kickstart configurations -- ignoredisk and bootloader usage
From the Red Hat website, I found the following link ( Red Hat website ) that shows how to express ignoredisk. ignoredisk_option From my understanding, if I set this option to ignoredisk --only-use=sda the image will be deployed and installed onto the sda drive on the client host. 1. If I expect this image to be installed on sdb instead, then I just change sda to sdb; is this correct? 2. bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sdb For the bootloader boot-drive, I should also change it to sdb if I expect the system to be installed onto sdb; is this correct? Thanks
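A hedged sketch of the two directives pointed at the same target disk (assumes device naming is stable on the client host, i.e. sdb does not change between boots):
ignoredisk --only-use=sdb
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sdb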
linux, centos, rhel
1
2,490
1
https://stackoverflow.com/questions/33339213/kickstart-configurations-ignoredisk-and-bootloader-usage
32,514,327
Compiling graph-tool on RHEL
I need to use Python's graph-tool on a RHEL6 server. The system administrator has not been able to install the boost library from a repository that he trusts (or whatever), so has installed it at /usr/local/boost_1_59_0/ Inside are two directories, boost/ and libs/ which I am taking to be the header and libs directories. I downloaded the source: wget [URL] And unpacking that, I tried: env CPPFLAGS='-I/home/foo/sw/include' LDFLAGS='-L/home/foo/sw/lib/' ./configure but that didn't give any different result than "./configure" by itself. It says: checking for boostlib >= 1.53.0... configure: error: We could not detect the boost libraries (version 1.53 or higher). If you have a staged boost library (still not installed) please specify $BOOST_ROOT in your environment and do not give a PATH to --with-boost option. If you are sure you have boost installed, then check your version number looking in <boost/version.hpp>. See [URL] for more documentation. I also tried: \env BOOST_ROOT='/usr/local/boost_1_59_0' CPPFLAGS='-I/usr/local/boost_1_59_0/boost' LDFLAGS='-L/usr/local/boost_1_59_0/libs/' ./configure but that got the same. I also tried: ./configure --with-boost-libdir=/usr/local/boost_1_59_0/libs/ --with-boost=/usr/local/boost_1_59_0 I obviously don't know what I'm doing. Is this enough to see what I've done wrong? Update: gcc version: gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11) Looking at config.log from graph-tool's failed configure, I find the following towards the end: configure:17873: g++ -c -Wall -Wextra -ftemplate-backtrace-limit=0 -DNDEBUG -std=gnu++0x -ftemplate-depth-250 -Wno-deprecated -Wno-unknown-pragmas -O3 -fvisibility=default -fvisibility-inlines-hidden -Wno-unknown-pragmas -I/usr/include/python2.6 conftest.cpp >&5 cc1plus: error: unrecognized command line option "-ftemplate-backtrace-limit=0" configure:17873: $? = 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "graph-tool" | #define PACKAGE_TARNAME "graph-tool" | #define PACKAGE_VERSION "2.2.44" | #define PACKAGE_STRING "graph-tool 2.2.44" | #define PACKAGE_BUGREPORT "[URL] | #define PACKAGE_URL "[URL] | #define PACKAGE "graph-tool" | #define VERSION "2.2.44" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define __EXTENSIONS__ 1 | #define _ALL_SOURCE 1 | #define _GNU_SOURCE 1 | #define _POSIX_PTHREAD_SEMANTICS 1 | #define _TANDEM_SOURCE 1 | #define STDC_HEADERS 1 | #define HAVE_DLFCN_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_PYTHON "2.6" | /* end confdefs.h. */ | | #include <boost/version.hpp> | | int | main () | { | | #if BOOST_VERSION >= 105300 | // Everything is okay | #else | # error Boost version is too old | #endif | | ; | return 0; | } configure:17972: g++ -c -Wall -Wextra -ftemplate-backtrace-limit=0 -DNDEBUG -std=gnu++0x -ftemplate-depth-250 -Wno-deprecated -Wno-unknown-pragmas -O3 -fvisibility=default -fvisibility-inlines-hidden -Wno-unknown-pragmas -I/usr/include/python2.6 -I/include/boost-0 conftest.cpp >&5 cc1plus: error: unrecognized command line option "-ftemplate-backtrace-limit=0" configure:17972: $? 
= 1 configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "graph-tool" | #define PACKAGE_TARNAME "graph-tool" | #define PACKAGE_VERSION "2.2.44" | #define PACKAGE_STRING "graph-tool 2.2.44" | #define PACKAGE_BUGREPORT "[URL] | #define PACKAGE_URL "[URL] | #define PACKAGE "graph-tool" | #define VERSION "2.2.44" | #define STDC_HEADERS 1 | #define HAVE_SYS_TYPES_H 1 | #define HAVE_SYS_STAT_H 1 | #define HAVE_STDLIB_H 1 | #define HAVE_STRING_H 1 | #define HAVE_MEMORY_H 1 | #define HAVE_STRINGS_H 1 | #define HAVE_INTTYPES_H 1 | #define HAVE_STDINT_H 1 | #define HAVE_UNISTD_H 1 | #define __EXTENSIONS__ 1 | #define _ALL_SOURCE 1 | #define _GNU_SOURCE 1 | #define _POSIX_PTHREAD_SEMANTICS 1 | #define _TANDEM_SOURCE 1 | #define STDC_HEADERS 1 | #define HAVE_DLFCN_H 1 | #define LT_OBJDIR ".libs/" | #define HAVE_PYTHON "2.6" | /* end confdefs.h. */ | | #include <boost/version.hpp> | | int | main () | { | | #if BOOST_VERSION >= 105300 | // Everything is okay | #else | # error Boost version is too old | #endif | | ; | return 0; | } configure:17991: error: We could not detect the boost libraries (version 1.53 or higher). If you have a staged boost library (still not installed) please specify $BOOST_ROOT in your environment and do not give a PATH to --with-boost option. If you are sure you have boost installed, then check your version number looking in <boost/version.hpp>. See [URL] for more documentation.
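A hedged sketch: before the boost probe even runs, config.log shows g++ 4.4.7 rejecting -ftemplate-backtrace-limit=0, a flag that needs a newer GCC; on RHEL 6 a software collection is the usual route (package availability on this particular server is an assumption):
sudo yum install devtoolset-2-gcc-c++   # from Red Hat Software Collections
scl enable devtoolset-2 bash
./configure --with-boost=/usr/local/boost_1_59_0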
boost, compilation, configure, rhel, graph-tool
1
669
1
https://stackoverflow.com/questions/32514327/compiling-graph-tool-on-rhel
31,622,991
RHEL6 installed wrong version of rpmforge
I installed the wrong version of rpmforge, the one for el7. Then I ran an update, which installed this package on my system: python-crypto-2.6.1-1.el7.rf.x86_64 Notice the el7, but I am on rhel6. I then realized this, removed the wrong repository, and installed the right one for el6. $ rpm -qa | grep rpmfor rpmforge-release-0.5.2-2.el6.rf.x86_64 But the above process has broken the update process, which I know I could work around using the --skip-broken option. How do I downgrade the above-mentioned package? I tried to uninstall and install it back again, but I get this error: Error: Trying to remove "c4ebpl", which is protected It shows me some protected packages which can't be removed. The update process using sudo yum update gives me this error: Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge) Requires: libgmp.so.10()(64bit) Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge) Requires: libc.so.6(GLIBC_2.14)(64bit) Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge) Requires: python(abi) = 2.7 Installed: python-2.6.6-52.el6.x86_64 (@el66/$releasever) python(abi) = 2.6 Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge) Requires: libpython2.7.so.1.0()(64bit) You could try using --skip-broken to work around the problem Would anyone know how to downgrade to the original packages? Is there a way to do a factory reset? Or do I need to reinstall Linux again? Some things I tried: I deleted the python-crypto.x86_64 package using this command: sudo rpm --nodeps -e python-crypto.x86_64 And the update went through. So I thought I should install the python-crypto.x86_64 package now, as I have the right el6 rpmforge repository. So I ran this command: sudo yum install python-crypto.x86_64 but I got the same error: Resolving Dependencies --> Running transaction check ---> Package python-crypto.x86_64 0:2.6.1-1.el7.rf will be installed --> Processing Dependency: python(abi) = 2.7 for package: python-crypto-2.6.1-1.el7.rf.x86_64 --> Processing Dependency: libc.so.6(GLIBC_2.14)(64bit) for package: python-crypto-2.6.1-1.el7.rf.x86_64 --> Processing Dependency: libpython2.7.so.1.0()(64bit) for package: python-crypto-2.6.1-1.el7.rf.x86_64 --> Processing Dependency: libgmp.so.10()(64bit) for package: python-crypto-2.6.1-1.el7.rf.x86_64 --> Finished Dependency Resolution Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge) Requires: libgmp.so.10()(64bit) Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge) Requires: libc.so.6(GLIBC_2.14)(64bit) Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge) Requires: python(abi) = 2.7 Installed: python-2.6.6-52.el6.x86_64 (@el66/$releasever) python(abi) = 2.6 Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge) Requires: libpython2.7.so.1.0()(64bit) You could try using --skip-broken to work around the problem I don't know why it's trying to find the el7 package. I have these libraries on my machine: $ rpm -qa | grep rpmfor rpmforge-release-0.5.3-1.el7.rf.x86_64
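A hedged sketch: the final rpm -qa shows the el7 rpmforge-release package is still the one installed, so its repo metadata keeps offering el7 builds; swapping the release package itself and flushing cached metadata usually clears this (the el6 release RPM filename is taken from the question; fetch it from the same source you used originally):
rpm -e rpmforge-release
rpm -Uvh rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
yum clean all
yum install python-crypto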
linux, redhat, rpm, yum, rhel
1
299
1
https://stackoverflow.com/questions/31622991/rhel6-installed-wrong-version-of-rpmforge
31,354,276
How to restrict specific hosts from connecting to pgbouncer?
I am running my postgres-9.2 on port 6432 and pgbouncer on port 5432. A few of my colleagues' client machines have firewall permission to connect to port 5432 on the server machine. But as a DB admin, I want to restrict some IP addresses from accessing the database. However, even though I block them in the pg_hba.conf file, they are still able to access the database because the pgbouncer port is allowed. I could block them at the OS firewall level, but I don't want to involve my system administrator. So, is there any way to restrict and deny IP addresses from accessing pgbouncer, as we generally do through pg_hba.conf for PostgreSQL? Please suggest.
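A hedged sketch: pgbouncer never consults PostgreSQL's pg_hba.conf, but versions with HBA support can enforce equivalent rules of their own (ini keys as documented for pgbouncer; whether the installed version supports them, and the IPs below, are assumptions):
; pgbouncer.ini
auth_type = hba
auth_hba_file = /etc/pgbouncer/hba.conf
# /etc/pgbouncer/hba.conf: first match wins, so deny lines go first
host all all 10.10.1.99/32 reject
host all all 10.10.1.0/24 md5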
postgresql, rhel, pgbouncer
1
2,924
1
https://stackoverflow.com/questions/31354276/how-to-restrict-specific-hosts-from-connecting-to-pgbouncer
30,063,218
Cannot open display while running X Apps as another user
I am currently using X11RDP to connect to a RHEL 6.5 endpoint as the root user. All X apps work fine and are directed properly to the right display (say 11.0). Now if I switch to another user ( su - user1 ) and then try to open any X app (say xterm), it is unable to open the display (even though it is also 11.0). I do not believe the problem is with xauth (magic cookies, etc.), the DISPLAY environment variable not being set correctly, or allowing connections with xhost + ; rather, it seems to be something with XRDP . Any help would be appreciated.
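A hedged diagnostic sketch: even when the cookies look right, explicitly handing root's cookie for the RDP display to the second user isolates whether xauth really is ruled out (display number from the question):
xauth extract - :11 | su - user1 -c 'xauth merge -'
su - user1 -c 'DISPLAY=:11.0 xterm'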
linux, x11, rhel, xterm, xrdp
1
1,083
1
https://stackoverflow.com/questions/30063218/cannot-open-display-while-running-x-apps-as-another-user
29,965,623
'pecl install ibm_db2' can't find library
I'm trying to install ibm_db2 through pecl: pecl install ibm_db2 Then when it asks for an install dir, I have tried various places only to have the same result. It goes through a bunch of checks etc., then tries to 'make' and gets an error: /usr/bin/ld: skipping incompatible /home/db2inst1/sqllib/lib32//libdb2.so when searching for -ldb2 /usr/bin/ld: cannot find -ldb2 It always looks there for the libraries. And they exist, because when I execute: locate libdb2.so It displays: /opt/ibm/db2/V10.1/lib32/libdb2.so /opt/ibm/db2/V10.1/lib32/libdb2.so.1 /opt/ibm/db2/V10.1/lib64/libdb2.so /opt/ibm/db2/V10.1/lib64/libdb2.so.1 But they're in /opt/ibm. I was looking at this link: pecl instal ibm_db2 fails , it seems like it's something to do with the headers/libraries being configured incorrectly? I am currently running RHEL 6.6.
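A hedged sketch: the linker is being pointed at the 32-bit sqllib tree; exporting the driver home before the build (IBM_DB_HOME is the variable the ibm_db2 build consults; the prefix comes from the locate output) steers it to the 64-bit libraries:
export IBM_DB_HOME=/opt/ibm/db2/V10.1
pecl install ibm_db2
# when prompted for the DB2 installation directory, give /opt/ibm/db2/V10.1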
linux, db2, rhel, pecl
1
1,165
1
https://stackoverflow.com/questions/29965623/pecl-install-ibm-db2-cant-find-library
28,640,117
HTML/CSS/JS desktop application runtime for RHEL6
I have a software user interface built with node-webkit, which works great on most platforms. However, on the Red Hat Linux 6 OS, Chromium is not supported, and node.js requires some hacky methods that I shouldn't be allowed to automate on other companies' nodes. This is because RHEL6 has locked the glibc version to 2.12 and the gtk version to 2.18 . Basically, I'd really like to avoid having to build an entire new UI (in a different language than HTML/CSS/JS) just for RHEL6 users. Is there a desktop runtime such as node-webkit for RHEL6 that I could use as a base for the front end of my software?
desktop-application, redhat, node-webkit, rhel, rhel6
1
81
0
https://stackoverflow.com/questions/28640117/html-css-js-desktop-application-runtime-for-rhel6
28,416,801
Can I specify how often a non-delivery report will get sent for a mail on the mail queue
BACKGROUND: I am running exim4 on RHEL 6.6. I have sent a message; it currently resides on exim's mail queue and will continue to be there for at least 5 days. I believe a non-delivery report will be sent to me once every 24 hours telling me that my mail has not been sent. Can I ask exim to send a non-delivery report to me once every 6 hours instead of once every 24 hours? So in a 24-hour period I would like to receive 4 non-delivery reports from my exim server telling me my mail is undelivered. Is it possible to do this? Any links or information surrounding this topic would be appreciated.
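A hedged sketch: while a message is still queued, exim sends delay warnings rather than final bounces, and their schedule is the main-configuration option delay_warning (the default is a single warning around 24h; the times below are illustrative):
# exim main configuration
delay_warning = 6h:12h:18h:24h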
email, rhel, exim, exim4
1
159
1
https://stackoverflow.com/questions/28416801/can-i-specify-how-often-a-non-delivery-report-will-get-sent-for-a-mail-on-the-ma
26,602,617
libfaketime and Java on RHEL 5 / RHEL 6
In order to test Java code with the date/time set into the past or future, I want to try libfaketime (currently we just adjust the system clock, but it causes much trouble, like non-working Kerberos, etc). I am trying with this small test program: $ cat time.java import java.util.*; class TimeTest { public static void main(String[] s) { long timeInMillis = System.currentTimeMillis(); Calendar cal = Calendar.getInstance(); cal.setTimeInMillis(timeInMillis); java.util.Date date = cal.getTime(); System.out.println("Date: " + date); } } And execute this: LD_ASSUME_KERNEL=2.6.18 LD_PRELOAD=/usr/lib64/libfaketime.so.1 FAKETIME="-15d" /opt/IBM/WebSphere/AppServer/java_1.7_64/bin/java TimeTest Invalid clock_id for clock_gettime: -172402[root@myhost ~]# But as you can see, I just get an error message. The test is performed on a RHEL 6.5 server, kernel 2.6.32-431, with libfaketime 0.9.6. Do you have any suggestions on how I can solve this? I'm also interested in hearing your experiences with libfaketime and Java on RHEL. I have also reported this issue at: [URL] Best regards, Erling
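A hedged sketch: the "Invalid clock_id for clock_gettime" abort points at CLOCK_MONOTONIC interception, which JVMs exercise heavily; if this libfaketime build honors it, disabling monotonic faking (variable name as in libfaketime's documentation; support in 0.9.6 is an assumption) is worth a try:
DONT_FAKE_MONOTONIC=1 LD_PRELOAD=/usr/lib64/libfaketime.so.1 FAKETIME="-15d" \
  /opt/IBM/WebSphere/AppServer/java_1.7_64/bin/java TimeTest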
java, linux, time, rhel
1
1,082
1
https://stackoverflow.com/questions/26602617/libfaketime-and-java-on-rhel-5-rhel-6
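The error suggests the JVM is calling clock_gettime() with a clock id (very likely CLOCK_MONOTONIC or a CPU-time clock) that this libfaketime build does not handle. One workaround that is sometimes suggested is telling libfaketime to leave the monotonic clock alone; a sketch, assuming your build supports the variable (the spelling varies between versions, with newer releases using FAKETIME_DONT_FAKE_MONOTONIC):

    # drop LD_ASSUME_KERNEL (it can itself break dynamic linking on RHEL 6)
    # and only fake the wall clock:
    DONT_FAKE_MONOTONIC=1 \
    LD_PRELOAD=/usr/lib64/libfaketime.so.1 \
    FAKETIME="-15d" \
    /opt/IBM/WebSphere/AppServer/java_1.7_64/bin/java TimeTest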
25,728,296
Strtotime giving different results
So I have a dev box running CentOS 6.5 and a server running RHEL 7. If I run the date command on both boxes I get "Mon Sep 8 09:36:50 MDT 2014". From PHP, if I do echo time(); on both I get 1410190731 within 20 seconds of each other. But if I do strtotime('Sep 8 2014 08:17:00:153AM'); on both boxes, I get 1410185820 on CentOS and 1410164220 on RHEL 7. What could cause such a time difference in strtotime() and how can I fix it? Thanks!
Strtotime giving different results So I have a dev box running CentOS 6.5 and a server running RHEL 7. If I run the date command on both boxes I get "Mon Sep 8 09:36:50 MDT 2014". From PHP, if I do echo time(); on both I get 1410190731 within 20 seconds of each other. But if I do strtotime('Sep 8 2014 08:17:00:153AM'); on both boxes, I get 1410185820 on CentOS and 1410164220 on RHEL 7. What could cause such a time difference in strtotime() and how can I fix it? Thanks!
php, linux, strtotime, rhel, centos6.5
1
98
1
https://stackoverflow.com/questions/25728296/strtotime-giving-different-results
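Two things are worth ruling out here: the trailing ":153AM" makes the string non-standard (different PHP versions guess differently about such strings), and the gap between the two results is exactly 21600 seconds, i.e. the 6-hour UTC offset of MDT, which points at differing default timezones between the two PHP installs. A sketch of a defensive version, assuming MDT is the intended zone:

    <?php
    // pin the timezone so php.ini differences between boxes stop mattering
    date_default_timezone_set('America/Denver');

    $raw = 'Sep 8 2014 08:17:00:153AM';
    // strip the millisecond field that confuses strtotime()
    $clean = preg_replace('/:(\d{3})(AM|PM)$/', ' $2', $raw); // "Sep 8 2014 08:17:00 AM"
    echo strtotime($clean), "\n";

Comparing date_default_timezone_get() on both boxes should confirm or refute the timezone theory.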
25,376,761
TOP command overall CPU
I have the below top command results on my RHEL 6 server, which is running PostgreSQL. I see 35.8% idle in Cpu(s) while all the per-process CPU usages below show ~100%. So how should I read the below output? top - 03:06:30 up 97 days, 20:15, 3 users, load average: 10.85, 10.51, 10.13 Tasks: 738 total, 14 running, 724 sleeping, 0 stopped, 0 zombie **Cpu(s): 53.3%us, 9.6%sy, 0.0%ni, 35.8%id, 0.6%wa, 0.0%hi, 0.7%si, 0.0%st** Mem: 32077620k total, 24335372k used, 7742248k free, 19084k buffers Swap: 81919992k total, 407968k used, 81512024k free, 18686780k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 19171 enterpri 20 0 8590m 966m 951m R 100.0 3.1 6:24.51 edb-postgres 19588 enterpri 20 0 8590m 956m 941m R 100.0 3.1 1:20.51 edb-postgres 18494 enterpri 20 0 8590m 959m 944m R 99.8 3.1 18:18.75 edb-postgres 18683 enterpri 20 0 8588m 984m 975m R 99.8 3.1 6:22.80 edb-postgres 19158 enterpri 20 0 8592m 1.0g 1.0g R 99.8 3.3 5:40.16 edb-postgres 19167 enterpri 20 0 8589m 959m 945m R 99.8 3.1 7:48.53 edb-postgres 19590 enterpri 20 0 8586m 945m 933m R 99.8 3.0 2:51.32 edb-postgres 19591 enterpri 20 0 8588m 950m 936m R 99.8 3.0 3:07.77 edb-postgres 19592 enterpri 20 0 8589m 948m 935m R 99.8 3.0 2:52.66 edb-postgres
TOP command overall CPU I have the below top command results on my RHEL 6 server, which is running PostgreSQL. I see 35.8% idle in Cpu(s) while all the per-process CPU usages below show ~100%. So how should I read the below output? top - 03:06:30 up 97 days, 20:15, 3 users, load average: 10.85, 10.51, 10.13 Tasks: 738 total, 14 running, 724 sleeping, 0 stopped, 0 zombie **Cpu(s): 53.3%us, 9.6%sy, 0.0%ni, 35.8%id, 0.6%wa, 0.0%hi, 0.7%si, 0.0%st** Mem: 32077620k total, 24335372k used, 7742248k free, 19084k buffers Swap: 81919992k total, 407968k used, 81512024k free, 18686780k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 19171 enterpri 20 0 8590m 966m 951m R 100.0 3.1 6:24.51 edb-postgres 19588 enterpri 20 0 8590m 956m 941m R 100.0 3.1 1:20.51 edb-postgres 18494 enterpri 20 0 8590m 959m 944m R 99.8 3.1 18:18.75 edb-postgres 18683 enterpri 20 0 8588m 984m 975m R 99.8 3.1 6:22.80 edb-postgres 19158 enterpri 20 0 8592m 1.0g 1.0g R 99.8 3.3 5:40.16 edb-postgres 19167 enterpri 20 0 8589m 959m 945m R 99.8 3.1 7:48.53 edb-postgres 19590 enterpri 20 0 8586m 945m 933m R 99.8 3.0 2:51.32 edb-postgres 19591 enterpri 20 0 8588m 950m 936m R 99.8 3.0 3:07.77 edb-postgres 19592 enterpri 20 0 8589m 948m 935m R 99.8 3.0 2:52.66 edb-postgres
performance, postgresql, load, cpu, rhel
1
190
1
https://stackoverflow.com/questions/25376761/top-command-overall-cpu
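A note on reading this: top normalises the per-process %CPU column to a single core, while the Cpu(s) summary line averages across all cores, so nine edb-postgres processes each pinning one core on a machine with enough cores is perfectly consistent with 35.8% overall idle. A quick check of how many cores the average is spread over (sketch):

    grep -c ^processor /proc/cpuinfo   # count of logical CPUs
    nproc                              # equivalent on RHEL 6 coreutils

With, say, 16 logical CPUs, the ~63% busy shown in Cpu(s) corresponds to roughly 10 fully loaded cores, which matches the process list.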
25,183,642
Ubuntu to RHEL C++ cross-compilation (64bits)
Not exactly sure where this question belongs; let me know if I should move it. Here is the issue: over the past years I have put together a relatively large personal codebase in C++11 that I'm using for work, but my workplace supposedly runs the latest version of RHEL on its cluster, which ships gcc 4.4.7, and that only partially supports C++11. I have battled to get a decent compiler installed, but apparently there is no easy way of making this happen, so instead I am wondering whether it would be possible to cross-compile C++11 sources on my own desktop and export the executable to the cluster at my workplace. I just don't know where to start in order to do that: Is it possible (and how) to cross-compile C++11 sources from Ubuntu to RHEL? Since I am at it, is it possible to rebuild gcc 4.8 or a suitable libstdc++ (or libc++) entirely from Ubuntu into an rpm package compatible with RHEL? (Note: I would preferably use clang (3.4) to compile on my desktop, but gcc (4.8) would be fine too.)
Ubuntu to RHEL C++ cross-compilation (64bits) Not exactly sure where this question belongs; let me know if I should move it. Here is the issue: over the past years I have put together a relatively large personal codebase in C++11 that I'm using for work, but my workplace supposedly runs the latest version of RHEL on its cluster, which ships gcc 4.4.7, and that only partially supports C++11. I have battled to get a decent compiler installed, but apparently there is no easy way of making this happen, so instead I am wondering whether it would be possible to cross-compile C++11 sources on my own desktop and export the executable to the cluster at my workplace. I just don't know where to start in order to do that: Is it possible (and how) to cross-compile C++11 sources from Ubuntu to RHEL? Since I am at it, is it possible to rebuild gcc 4.8 or a suitable libstdc++ (or libc++) entirely from Ubuntu into an rpm package compatible with RHEL? (Note: I would preferably use clang (3.4) to compile on my desktop, but gcc (4.8) would be fine too.)
c++, ubuntu, c++11, cross-compiling, rhel
1
491
0
https://stackoverflow.com/questions/25183642/ubuntu-to-rhel-c-cross-compilation-64bits
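One angle that is often simpler than a full cross-toolchain, offered as a sketch: glibc is backward compatible (binaries built against an older glibc run on newer systems, not the reverse), so the usual approach is to build on an environment whose glibc is no newer than the RHEL target - e.g. a CentOS chroot or VM matching the cluster - and to link the modern C++ runtime statically so the old cluster needs no new libstdc++:

    # with gcc 4.8; removes the runtime dependency on a newer libstdc++/libgcc
    g++ -std=c++11 -static-libstdc++ -static-libgcc -o myprog main.cpp

The remaining hard dependency is glibc itself, which cannot be statically linked portably, hence the old-build-environment requirement.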
24,944,340
Need some help in appending more than one line to a file in puppet
Using this manifest: file_line { 'sudo_rule': path => '/etc/sudoers', line => '%sudo ALL=(ALL) ALL', } Puppet adds a single line, but I want to append more than one line to the file.
Need some help in appending more than one line to a file in puppet Using this manifest: file_line { 'sudo_rule': path => '/etc/sudoers', line => '%sudo ALL=(ALL) ALL', } Puppet adds a single line, but I want to append more than one line to the file.
linux, puppet, rhel
1
1,820
1
https://stackoverflow.com/questions/24944340/need-some-help-in-appending-more-than-one-line-to-a-file-in-puppet
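file_line comes from puppetlabs-stdlib and manages exactly one line per resource, so the usual pattern is one resource per line, each with a unique title; a sketch with hypothetical rule names:

    file_line { 'sudo_rule_sudo_group':
      path => '/etc/sudoers',
      line => '%sudo ALL=(ALL) ALL',
    }

    file_line { 'sudo_rule_admin_group':
      path => '/etc/sudoers',
      line => '%admin ALL=(ALL) ALL',
    }

(For /etc/sudoers in particular, shipping a fragment into /etc/sudoers.d with an ordinary file resource is often cleaner than appending lines to the main file.)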
24,762,375
Modify SSH Warning: Root Login Not Permitted
I'm attempting to modify the SSH warning thrown out by an Amazon instance when logging in as root. This is what I get: "Please login as the user "ec2-user" rather than the user "root"." Where is this message stored on RHEL 6.5? Thank you!
Modify SSH Warning: Root Login Not Permitted I'm attempting to modify the SSH warning thrown out by an Amazon instance when logging in as root. This is what I get: "Please login as the user "ec2-user" rather than the user "root"." Where is this message stored on RHEL 6.5? Thank you!
linux, ssh, amazon-ec2, rhel
1
190
1
https://stackoverflow.com/questions/24762375/modify-ssh-warning-root-login-not-permitted
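On most EC2 images that message does not come from sshd_config at all; it is produced by a forced command embedded in root's authorized_keys entry, which is therefore the file to edit. An illustrative sketch of what such an entry typically looks like (options and key material vary per AMI; the key is truncated here):

    # /root/.ssh/authorized_keys
    no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"ec2-user\" rather than the user \"root\".';echo;sleep 10" ssh-rsa AAAA... my-key

Changing the command="..." string changes the message; removing the options entirely (where policy allows) re-enables direct root logins.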
24,520,354
several packages available from epel, but not git?
I've just installed the epels, and I've found that a lot of git-related projects are available, but git itself is not! GitPython.noarch 0.3.2-0.6.RC1.el6 epel git-annex.x86_64 3.20120522-2.1.el6 epel git-bugzilla.noarch 0-0.9.20091211git.el6 epel git-cal.noarch 0.9.1-3.el6 epel git-ftp.noarch 0.84-1.el6 epel git-review.noarch 1.17-2.el6 epel git2cl.noarch 2.0-0.1.git8373c9f.el6 epel gitflow.noarch 0.4.2.20120723git53e9c76-4.el6 epel gitolite.noarch 2.3.1-1.el6 epel gitolite3.noarch 1:3.6-1.el6 epel gitosis.noarch 0.2-9.20080825git.el6 epel gitso.noarch 0.6-10.el6 epel gitstats.noarch 0-0.4.20130723gita923085.el6 epel gitweb-caching.noarch 1.6.5.2-8.b1ab8b5.el6 epel Why is git not in the EPEL repositories, and what is the simplest way to get it installed, idiomatically, on a RHEL or Fedora machine?
several packages available from epel, but not git? I've just installed the epels, and I've found that a lot of git-related projects are available, but git itself is not! GitPython.noarch 0.3.2-0.6.RC1.el6 epel git-annex.x86_64 3.20120522-2.1.el6 epel git-bugzilla.noarch 0-0.9.20091211git.el6 epel git-cal.noarch 0.9.1-3.el6 epel git-ftp.noarch 0.84-1.el6 epel git-review.noarch 1.17-2.el6 epel git2cl.noarch 2.0-0.1.git8373c9f.el6 epel gitflow.noarch 0.4.2.20120723git53e9c76-4.el6 epel gitolite.noarch 2.3.1-1.el6 epel gitolite3.noarch 1:3.6-1.el6 epel gitosis.noarch 0.2-9.20080825git.el6 epel gitso.noarch 0.6-10.el6 epel gitstats.noarch 0-0.4.20130723gita923085.el6 epel gitweb-caching.noarch 1.6.5.2-8.b1ab8b5.el6 epel Why is git not in the EPEL repositories, and what is the simplest way to get it installed, idiomatically, on a RHEL or Fedora machine?
git, fedora, rhel, epel
1
360
1
https://stackoverflow.com/questions/24520354/several-packages-available-from-epel-but-not-git
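This is by design: EPEL policy is to never replace or duplicate packages shipped in RHEL itself, and git is in the base RHEL 6 channel (and in the main Fedora repositories), which is why only the add-on projects live in EPEL. The idiomatic install is therefore just:

    yum list git      # should show it coming from the base rhel repo, not epel
    yum install git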
24,203,879
RPM spec scripting compliance and how to test it
I maintain some spec files for rpms on RHEL 5 and RHEL 6, but I always struggle with more complex testing within the spec file. Often I resign myself to writing the config logic in a Perl or bash script that is executed by the rpm upon installation. Now I am struggling with this simple not-equal test; I believed the scripting logic was close to bash, or at least that what works in bash would work here. If I remove that if test, it all works. echo "verifying base:%{smm_public_base} against %{smm_public}"; # if smm_public is not the same as smm_public_base we must sublink if [ %{smm_public_base} -ne %{smm_public} ]; then echo "smm_public_base:%{smm_public_base} is not the same as smm_public:%{smm_public} so we need to soflink it" if [ -L %{smm_public_base ]; then rm -rf %{smm_public_base} fi ln -s %{smm_public} %{smm_public_base} || : fi I've tried: if [ %{smm_public_base} != %{smm_public} ]; then and if [ "%{smm_public_base}" != "%{smm_public}" ]; then Those give errors like this: verifying base:/opt/s3public against /opt/app/s3public /var/tmp/rpm-tmp.29711: line 60: syntax error: unexpected end of file error: %post(secana_smm2_SEB_PROD-r250-225.x86_64) scriptlet failed, exit status 2 So I've been searching for a primer on how to write scripting logic within a spec file, but found very little except for simple testing in the Fedora RPM Guide. Does anyone have some best practices here? OpenSUSE says to look at a Red Hat package. Thanks
RPM spec scripting compliance and how to test it I maintain some spec files for rpms on RHEL 5 and RHEL 6, but I always struggle with more complex testing within the spec file. Often I resign myself to writing the config logic in a Perl or bash script that is executed by the rpm upon installation. Now I am struggling with this simple not-equal test; I believed the scripting logic was close to bash, or at least that what works in bash would work here. If I remove that if test, it all works. echo "verifying base:%{smm_public_base} against %{smm_public}"; # if smm_public is not the same as smm_public_base we must sublink if [ %{smm_public_base} -ne %{smm_public} ]; then echo "smm_public_base:%{smm_public_base} is not the same as smm_public:%{smm_public} so we need to soflink it" if [ -L %{smm_public_base ]; then rm -rf %{smm_public_base} fi ln -s %{smm_public} %{smm_public_base} || : fi I've tried: if [ %{smm_public_base} != %{smm_public} ]; then and if [ "%{smm_public_base}" != "%{smm_public}" ]; then Those give errors like this: verifying base:/opt/s3public against /opt/app/s3public /var/tmp/rpm-tmp.29711: line 60: syntax error: unexpected end of file error: %post(secana_smm2_SEB_PROD-r250-225.x86_64) scriptlet failed, exit status 2 So I've been searching for a primer on how to write scripting logic within a spec file, but found very little except for simple testing in the Fedora RPM Guide. Does anyone have some best practices here? OpenSUSE says to look at a Red Hat package. Thanks
rpm, rhel
1
333
1
https://stackoverflow.com/questions/24203879/rpm-spec-scripting-compliance-and-how-to-test-it
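Two things stand out in the failing scriptlet, offered as a diagnosis sketch: [ a -ne b ] is an integer comparison (so it errors on paths, and != is correct for strings), and the -L test reads %{smm_public_base with no closing brace - rpm's macro expansion keeps consuming scriptlet text while looking for the closing }, which fits the "unexpected end of file" reported at a later line. A corrected version of the block:

    if [ "%{smm_public_base}" != "%{smm_public}" ]; then
      echo "linking %{smm_public_base} -> %{smm_public}"
      if [ -L "%{smm_public_base}" ]; then
        rm -f "%{smm_public_base}"
      fi
      ln -s "%{smm_public}" "%{smm_public_base}" || :
    fi

To see what the shell actually received, inspect the generated /var/tmp/rpm-tmp.* file named in the error message - it contains the scriptlet exactly as it looked after macro expansion.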
23,512,002
CVE-2011-1092 on Centos / PCI DSS compliance
A security scan of a client's site flagged up the fact that, since they were running PHP 5.3.3, they were vulnerable to CVE-2011-1092 (fixed in 5.3.6 and above). Normally I'd say that backporting would have dealt with this, as their PHP has had fixes backported over the years up to 5.3.27, but there's no indication in the changelog that this specific CVE has been addressed. Looking at [URL] and [URL] indicates expressly that this issue hasn't been addressed in the version of PHP shipped with RHEL and CentOS, because Red Hat doesn't consider it a security issue. That leaves the client with a dilemma - their PCI DSS compliance scanner company (Trustwave) won't accept Red Hat's statement of "this is not a security issue", saying "Visiting [the RHEL page linked to above] appears to show that RedHat has not addressed CVE-2011-1092. Since this finding affects PCI DSS Compliance, it does need to be confirmed to have been addressed in some fashion." Does anyone have any suggestions how to proceed on this? Is it possible to address the issue directly by patching the files in some way? Thanks in advance for any suggestions.
CVE-2011-1092 on Centos / PCI DSS compliance A security scan of a client's site flagged up the fact that, since they were running PHP 5.3.3, they were vulnerable to CVE-2011-1092 (fixed in 5.3.6 and above). Normally I'd say that backporting would have dealt with this, as their PHP has had fixes backported over the years up to 5.3.27, but there's no indication in the changelog that this specific CVE has been addressed. Looking at [URL] and [URL] indicates expressly that this issue hasn't been addressed in the version of PHP shipped with RHEL and CentOS, because Red Hat doesn't consider it a security issue. That leaves the client with a dilemma - their PCI DSS compliance scanner company (Trustwave) won't accept Red Hat's statement of "this is not a security issue", saying "Visiting [the RHEL page linked to above] appears to show that RedHat has not addressed CVE-2011-1092. Since this finding affects PCI DSS Compliance, it does need to be confirmed to have been addressed in some fashion." Does anyone have any suggestions how to proceed on this? Is it possible to address the issue directly by patching the files in some way? Thanks in advance for any suggestions.
php, centos, rhel, pci-dss, pci-compliance
1
415
2
https://stackoverflow.com/questions/23512002/cve-2011-1092-on-centos-pci-dss-compliance
23,064,124
How to use an older version of glibc, for Python script to work on RHEL 3, using PyInstaller on CentOS6
I have a RHEL 3 server with Python 2.2! I need to run some scripts on that machine using Python 2.6. So I also have a CentOS 6 box with Python 2.6. I wrote the code and used PyInstaller to give me a single executable. That works on the CentOS machine. However, on the RHEL 3 box I get this error: " /lib/tls/libc.so.6: version `GLIBC_2.4' not found /lib/tls/libc.so.6: version `GLIBC_2.3.4' not found " Understandable, as it's old vs new. I tried using PyInstaller on CentOS 3 but there were dependencies, yum doesn't work, and generally CentOS 3 is dead. I thought I could install glibc 2.4 and 2.3.4 on CentOS 6 in a different directory. That could work, but I wouldn't know how to make PyInstaller use that library. Then I thought, could I chroot? What are your ideas on this, as I am out of them!
How to use an older version of glibc, for Python script to work on RHEL 3, using PyInstaller on CentOS6 I have a RHEL 3 server with Python 2.2! I need to run some scripts on that machine using Python 2.6. So I also have a CentOS 6 box with Python 2.6. I wrote the code and used PyInstaller to give me a single executable. That works on the CentOS machine. However, on the RHEL 3 box I get this error: " /lib/tls/libc.so.6: version `GLIBC_2.4' not found /lib/tls/libc.so.6: version `GLIBC_2.3.4' not found " Understandable, as it's old vs new. I tried using PyInstaller on CentOS 3 but there were dependencies, yum doesn't work, and generally CentOS 3 is dead. I thought I could install glibc 2.4 and 2.3.4 on CentOS 6 in a different directory. That could work, but I wouldn't know how to make PyInstaller use that library. Then I thought, could I chroot? What are your ideas on this, as I am out of them!
python, linux, rhel, pyinstaller
1
2,441
1
https://stackoverflow.com/questions/23064124/how-to-use-an-older-version-of-glibc-for-python-script-to-work-on-rehl-3-using
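Direction matters with glibc: it is forward compatible only, so a PyInstaller bundle must be produced on a machine whose glibc is at least as old as every deployment target - in practice a RHEL/CentOS 3 chroot or VM - rather than pointing a CentOS 6 build at an older glibc. To see which symbol versions a given bundle actually requires, a diagnostic sketch (dist/myapp is a placeholder for the bundled executable):

    objdump -T dist/myapp | grep -o 'GLIBC_[0-9.]*' | sort -u

Every version printed must exist on the target; RHEL 3 shipped glibc 2.3.2 (if memory serves), which is exactly why the GLIBC_2.3.4 and GLIBC_2.4 symbol versions above come up missing.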
22,972,885
python regex doesn't match when run from cron
I'm having difficulty getting a regex to match when the job is run from cron. When I run it from the command line it works just fine. I have some output from an ssh session that I need to parse for some CPU data. parts = \x1b[7mlines 24-46 \x1b[27m\x1b[K\r\x1b[KCPU 9 parts = \x1b[7mlines 25-47 \x1b[27m\x1b[K\r\x1b[K Utilization: 57% parts = Peak Utilization Last Hour: 100% at 2014/04/09 07:07:12 parts = Avg. Utilization Last Hour: 67% parts = parts = \x1b[7mlines 48-70 \x1b[27m\x1b[K\r\x1b[K parts = \x1b[7mlines 49-71 \x1b[27m\x1b[K\r\x1b[KCPU 14 parts = Utilization: 46% The sample above just has the problem output. Call it "sshoutput". I need to remove \x1b[7mlines X-X \x1b[27m\x1b[K\r\x1b[K from any lines that contain it. This formats the output correctly for the parsing function that I made. regex used: \\x1b\[7mlines [0-9]*-[0-9]* \\x1b\[27m\\x1b\[K\\r\\x1b\[K Code that removes the problem characters: ansi_char = re.compile('\\x1b\[7mlines [0-9]*-[0-9]* \\x1b\[27m\\x1b\[K\\r\\x1b\[K') def strip_ansi(with_ansi): return ansi_char.sub('', with_ansi) strip_ansi(sshoutput.before) This is part of a larger script. When I run it from the prompt, it runs fine and creates the output with all data and correct formatting. When cron runs it, the lines with the ANSI escapes don't get matched and some CPU data goes missing. The output is formatted correctly, but due to some CPU lines not getting the ANSI characters stripped, the data gets shifted and is incorrect. I've been researching issues with cron and environment variables. The cron locale is POSIX and mine is en_US.UTF-8. I thought maybe this had something to do with how the regex would be handled. There are tons of jobs in the crontab file and I can't do something globally. I did try this: */1 * * * * export LANG=en_US.UTF-8; /pathtomyscript/myscript.py Is there something else I could try in the crontab file, or can my regex be changed? Thanks for any help/suggestions that the community may provide. PS, I'm also a Linux noob (RHEL 5.9) and Python noob (Python 2.7). I know enough to be dangerous :)
python regex doesn't match when run from cron I'm having difficulty getting a regex to match when the job is run from cron. When I run it from the command line it works just fine. I have some output from an ssh session that I need to parse for some CPU data. parts = \x1b[7mlines 24-46 \x1b[27m\x1b[K\r\x1b[KCPU 9 parts = \x1b[7mlines 25-47 \x1b[27m\x1b[K\r\x1b[K Utilization: 57% parts = Peak Utilization Last Hour: 100% at 2014/04/09 07:07:12 parts = Avg. Utilization Last Hour: 67% parts = parts = \x1b[7mlines 48-70 \x1b[27m\x1b[K\r\x1b[K parts = \x1b[7mlines 49-71 \x1b[27m\x1b[K\r\x1b[KCPU 14 parts = Utilization: 46% The sample above just has the problem output. Call it "sshoutput". I need to remove \x1b[7mlines X-X \x1b[27m\x1b[K\r\x1b[K from any lines that contain it. This formats the output correctly for the parsing function that I made. regex used: \\x1b\[7mlines [0-9]*-[0-9]* \\x1b\[27m\\x1b\[K\\r\\x1b\[K Code that removes the problem characters: ansi_char = re.compile('\\x1b\[7mlines [0-9]*-[0-9]* \\x1b\[27m\\x1b\[K\\r\\x1b\[K') def strip_ansi(with_ansi): return ansi_char.sub('', with_ansi) strip_ansi(sshoutput.before) This is part of a larger script. When I run it from the prompt, it runs fine and creates the output with all data and correct formatting. When cron runs it, the lines with the ANSI escapes don't get matched and some CPU data goes missing. The output is formatted correctly, but due to some CPU lines not getting the ANSI characters stripped, the data gets shifted and is incorrect. I've been researching issues with cron and environment variables. The cron locale is POSIX and mine is en_US.UTF-8. I thought maybe this had something to do with how the regex would be handled. There are tons of jobs in the crontab file and I can't do something globally. I did try this: */1 * * * * export LANG=en_US.UTF-8; /pathtomyscript/myscript.py Is there something else I could try in the crontab file, or can my regex be changed? Thanks for any help/suggestions that the community may provide. PS, I'm also a Linux noob (RHEL 5.9) and Python noob (Python 2.7). I know enough to be dangerous :)
python, regex, cron, rhel
1
617
2
https://stackoverflow.com/questions/22972885/python-regex-doesnt-match-when-run-from-cron
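Two hedged suggestions for the above. First, cron jobs run without a real terminal and usually without TERM set, so the pexpect/ssh session can cause the remote side's pager to emit different escape sequences than it does interactively; if the polled device has a command to disable output paging, that removes the junk at the source. Second, stripping any CSI escape sequence is more robust than matching one literal sequence; a sketch:

    import re

    # any ANSI CSI sequence (ESC [ params intermediates final-byte), plus bare \r
    csi = re.compile(r'\x1b\[[0-9;?]*[ -/]*[@-~]|\r')
    # the pager's own status text, left behind once its escapes are stripped
    pager = re.compile(r'lines \d+-\d+ ')

    def strip_ansi(with_ansi):
        return pager.sub('', csi.sub('', with_ansi))

This removes the formatting regardless of which sequences the remote end chooses, at the small risk of the pager pattern matching legitimate text of the same shape.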
22,847,699
What is the largest buffer size for ioctl?
I am using ioctl() to read data from a block device (SCSI). I have noticed that when I read 1024 sectors, ioctl finishes without a problem. When I read 2048, after a few long moments it returns ENOMEM (errno=12), which is not even listed in the list of possible errors (see [URL] ). I have triple-checked that I am passing the proper buffer size, so this cannot be the case - no buffer overrun. How can I learn the largest buffer size that can be read using ioctl, then? Edit 1 Some additional information that may be helpful: Enterprise Linux Enterprise Linux Server release 5.3 (Carthage) Red Hat Enterprise Linux Server release 5.3 (Tikanga) 2.6.18-128.el5
What is the largest buffer size for ioctl? I am using ioctl() to read data from a block device (SCSI). I have noticed that when I read 1024 sectors, ioctl finishes without a problem. When I read 2048, after a few long moments it returns ENOMEM (errno=12), which is not even listed in the list of possible errors (see [URL] ). I have triple-checked that I am passing the proper buffer size, so this cannot be the case - no buffer overrun. How can I learn the largest buffer size that can be read using ioctl, then? Edit 1 Some additional information that may be helpful: Enterprise Linux Enterprise Linux Server release 5.3 (Carthage) Red Hat Enterprise Linux Server release 5.3 (Tikanga) 2.6.18-128.el5
rhel, ioctl
1
2,789
1
https://stackoverflow.com/questions/22847699/what-is-the-largest-buffer-size-for-ioctl
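For SCSI pass-through ioctls (SG_IO and friends), ENOMEM generally means the kernel could not allocate or map a transfer of that size rather than anything being wrong with the user buffer; the ceiling varies with kernel, driver and HBA (on this kernel /sys/block/<dev>/queue/max_sectors_kb exposes one related block-layer limit). Since the limit is not portable, the practical answer is to split the transfer at a size known to work. A C sketch, with do_scsi_read() standing in for whatever ioctl-based read the code already performs:

    #include <stddef.h>

    #define SECTOR_SIZE   512
    #define CHUNK_SECTORS 1024  /* the size the question reports as working */

    /* hypothetical wrapper around the existing SG_IO/ioctl read */
    int do_scsi_read(int fd, unsigned long lba, unsigned int nsec, void *buf);

    int read_sectors(int fd, unsigned long lba, unsigned int nsec, char *buf)
    {
        while (nsec > 0) {
            unsigned int n = nsec > CHUNK_SECTORS ? CHUNK_SECTORS : nsec;
            if (do_scsi_read(fd, lba, n, buf) != 0)
                return -1;                 /* propagate the ioctl failure */
            lba  += n;
            nsec -= n;
            buf  += (size_t)n * SECTOR_SIZE;
        }
        return 0;
    }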
22,314,132
Redhat: php53-common conflicts with php-common
In Red Hat 5.10, when I do yum update, I get "php53-common conflicts with php-common". (Currently, I have CLI PHP 5.3.3 installed.) There is a solution for CentOS: [URL] , but I am not sure I can use it on Red Hat. I did a yum update --skip-broken , but it still doesn't help. Update : A solution that did not work for me: [URL] Update 1 : If I try to remove php53-common, many dependent php53 packages (e.g. php53-cli, php53-bcmath, etc.) will be removed as well. Do I need to install php-cli, php-bcmath, etc. individually?
Redhat: php53-common conflicts with php-common In Red Hat 5.10, when I do yum update, I get "php53-common conflicts with php-common". (Currently, I have CLI PHP 5.3.3 installed.) There is a solution for CentOS: [URL] , but I am not sure I can use it on Red Hat. I did a yum update --skip-broken , but it still doesn't help. Update : A solution that did not work for me: [URL] Update 1 : If I try to remove php53-common, many dependent php53 packages (e.g. php53-cli, php53-bcmath, etc.) will be removed as well. Do I need to install php-cli, php-bcmath, etc. individually?
php, rhel, rhel5
1
2,390
0
https://stackoverflow.com/questions/22314132/redhat-php53-common-conflicts-with-php-common
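Assuming the goal is to keep the 5.3 stack and drop the stale php-common, one approach is to resolve the swap inside a single yum transaction so the dependent packages are handled together rather than erased one by one; a sketch:

    rpm -q --whatrequires php-common   # first see what still needs the old package
    yum shell
    > remove php-common
    > update
    > run

If only obsolete php-* packages depend on php-common, a plain yum remove php-common followed by yum update should reach the same end state; the php53-* subpackages (php53-cli, php53-bcmath, ...) stand in for the old php-* ones rather than needing php-cli etc. reinstalled individually.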
22,217,952
Cloudstack:KVM:: Snapshot stuck in Creating Status forever.
I am working on CloudStack infrastructure service to build our organization framework. For this purpose, RHEL 6.4 (Kernel version 2.6.32 ), Cloudstack 4.2.1 and KVM (version 0.12.1) are being used. I have been working around for long to figure out the below mentioned Issue but unfortunately could not. Issue: Snapshot gets stuck in the state "Creating" forever. Impact: Couldn't get the snapshot consequently and thus, the instance from the CloudStack's UI could not be started/Stopped. Please see below trace from Management Server log which I think is the reason behind this. me":"ROOT-13","path":"ec5692cb-7758-4618-b589-09cb71e45dba","size":21474836480,"type":"ROOT","storagePoolType":"NetworkFilesystem","storagePoolUuid":"d41d4103-a11d-3160-8a3b-6469c574a93f","deviceId":0},{"id":19,"name":"DATA-13","path":"8017ac65-b52b-450b-937f-d92178de2f34","size":21474836480,"type":"DATADISK","storagePoolType":"NetworkFilesystem","storagePoolUuid":"d41d4103-a11d-3160-8a3b-6469c574a93f","deviceId":1}],"target":{"id":13,"snapshotName":"i-2-13-VM_VS_20140225120244","type":"Disk","current":false,"description":"new"},"vmName":"i-2-13-VM","guestOSType":"Red Hat Enterprise Linux 6.4 (64-bit)","wait":1800}}] } 2014-02-25 17:32:45,254 DEBUG [agent.transport.Request] (AgentManager-Handler-11:null) Seq 1-2016739514: Processing: { Ans: , MgmtId: 181122461670954, via: 1, Ver: v1, Flags: 10, [{"com.cloud.agent.api.UnsupportedAnswer":{"result":false,"details":"Unsupported command issued:com.cloud.agent.api.CreateVMSnapshotCommand. Are you sure you got the right type of server?","wait":0}}] } 2014-02-25 17:32:45,254 DEBUG [agent.transport.Request] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Seq 1-2016739514: Received: { Ans: , MgmtId: 181122461670954, via: 1, Ver: v1, Flags: 10, { UnsupportedAnswer } } 2014-02-25 17:32:45,254 WARN [agent.manager.AgentManagerImpl] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Unsupported Command: Unsupported command issued:com.cloud.agent.api.CreateVMSnapshotCommand. Are you sure you got the right type of server? 
2014-02-25 17:32:45,254 ERROR [vm.snapshot.VMSnapshotManagerImpl] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Create vm snapshot i-2-13-VM_VS_20140225120244 failed for vm: i-2-13-VM due to com.cloud.agent.api.UnsupportedAnswer cannot be cast to com.cloud.agent.api.CreateVMSnapshotAnswer 2014-02-25 17:32:45,265 DEBUG [cloud.api.ApiServlet] (catalina-exec-11:null) ===START=== 192.168.125.241 -- GET command=listOsTypes&response=json&sessionkey=bP1MluiO7KMYb3krOwtD1IpG0b0%3D&_=1393329472271 2014-02-25 17:32:45,324 DEBUG [cloud.api.ApiServlet] (catalina-exec-13:null) ===START=== 192.168.125.241 -- GET command=listTags&response=json&sessionkey=bP1MluiO7KMYb3krOwtD1IpG0b0%3D&resourceId=340c69ae-f3a0-4966-aa0b-c799e944404f&resourceType=UserVm&listAll=true&_=1393329472305 2014-02-25 17:32:45,330 DEBUG [cloud.api.ApiServlet] (catalina-exec-13:null) ===END=== 192.168.125.241 -- GET command=listTags&response=json&sessionkey=bP1MluiO7KMYb3krOwtD1IpG0b0%3D&resourceId=340c69ae-f3a0-4966-aa0b-c799e944404f&resourceType=UserVm&listAll=true&_=1393329472305 2014-02-25 17:32:45,373 DEBUG [cloud.api.ApiServlet] (catalina-exec-11:null) ===END=== 192.168.125.241 -- GET command=listOsTypes&response=json&sessionkey=bP1MluiO7KMYb3krOwtD1IpG0b0%3D&_=1393329472271 2014-02-25 17:32:45,419 ERROR [cloud.async.AsyncJobManagerImpl] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Unexpected exception while executing org.apache.cloudstack.api.command.user.vmsnapshot.CreateVMSnapshotCmd com.cloud.utils.exception.CloudRuntimeException: com.cloud.agent.api.UnsupportedAnswer cannot be cast to com.cloud.agent.api.CreateVMSnapshotAnswer at com.cloud.vm.snapshot.VMSnapshotManagerImpl.createVmSnapshotInternal(VMSnapshotManagerImpl.java:406) at com.cloud.vm.snapshot.VMSnapshotManagerImpl.creatVMSnapshot(VMSnapshotManagerImpl.java:356) at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125) at org.apache.cloudstack.api.command.user.vmsnapshot.CreateVMSnapshotCmd.execute(CreateVMSnapshotCmd.java:100) at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158) at com.cloud.async.AsyncJobManagerImpl$1.run(AsyncJobManagerImpl.java:531) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) 2014-02-25 17:32:45,420 DEBUG [cloud.async.AsyncJobManagerImpl] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Complete async job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ], jobStatus: 2, resultCode: 530, result: Error Code: 530 Error text: com.cloud.agent.api.UnsupportedAnswer cannot be cast to com.cloud.agent.api.CreateVMSnapshotAnswer 2014-02-25 17:32:50,526 DEBUG [storage.secondary.SecondaryStorageManagerImpl] (secstorage-1:null) Zone 1 is ready to launch secondary storage VM 2014-02-25 17:32:50,626 DEBUG [cloud.consoleproxy.ConsoleProxyManagerImpl] (consoleproxy-1:null) Zone 1 is ready to launch console proxy 2014-02-25 17:32:50,760 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-14:null) SeqA 3-1596: Processing Seq 3-1596: { Cmd , MgmtId: -1, via: 3, Ver: v1, Flags: 11, 
[{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] } 2014-02-25 17:32:50,835 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-14:null) SeqA 3-1596: Sending Seq 3-1596: { Ans: , MgmtId: 181122461670954, via: 3, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] } 2014-02-25 17:32:51,461 DEBUG [network.router.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:null) Found 0 routers to update status. 2014-02-25 17:32:51,462 DEBUG [network.router.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:null) Found 0 networks to update RvR status. 2014-02-25 17:32:54,001 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-13:null) Ping from 3 2014-02-25 17:32:54,082 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-12:null) Ping from 2 2014-02-25 17:32:54,340 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-15:null) Ping from 1 2014-02-25 17:32:55,800 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-2:null) SeqA 3-1598: Processing Seq 3-1598: { Cmd , MgmtId: -1, via: 3, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] } 2014-02-25 17:32:55,945 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-2:null) SeqA 3-1598: Sending Seq 3-1598: { Ans: , MgmtId: 181122461670954, via: 3, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] } 2014-02-25 17:33:05,762 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-1:null) SeqA 3-1599: Processing Seq 3-1599: { Cmd , MgmtId: -1, via: 3, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] } 2014-02-25 17:33:05,847 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-1:null) SeqA 3-1599: Sending Seq 3-1599: { Ans: , MgmtId: 181122461670954, via: 3, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] } 2014-02-25 17:33:12,091 DEBUG [cloud.server.StatsCollector] (StatsCollector-1:null) VmStatsCollector is running... 2014-02-25 17:33:12,185 DEBUG [agent.transport.Request] (StatsCollector-1:null) Seq 1-2016739515: Received: { Ans: , MgmtId: 181122461670954, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } } 2014-02-25 17:33:13,400 DEBUG [cloud.server.StatsCollector] (StatsCollector-2:null) StorageCollector is running... 2014-02-25 17:33:13,460 DEBUG [agent.transport.Request] (StatsCollector-2:null) Seq 2-1323958339: Received: { Ans: , MgmtId: 181122461670954, via: 2, Ver: v1, Flags: 10, { GetStorageStatsAnswer } } 2014-02-25 17:33:13,500 DEBUG [agent.transport.Request] (StatsCollector-2:null) Seq 1-2016739516: Received: { Ans: , MgmtId: 181122461670954, via: 1, Ver: v1, Flags: 10, { GetStorageStatsAnswer } } Please see below trace from agent log which was written at the moment I clicked on Snapshot button in the Cloudstack's UI: 2014-02-25 17:27:09,129 WARN [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-4:null) Unsupported command In order to restart the instance again, I manually delete the row from the table vm_snapshots in mysql database corresponding to this stuck snapshot. Please suggest me the fix with proper explanation for this issue. Thanks in advance. Regards, Rohit
Cloudstack:KVM:: Snapshot stuck in Creating Status forever.‏ I am working on CloudStack infrastructure service to build our organization framework. For this purpose, RHEL 6.4 (Kernel version 2.6.32 ), Cloudstack 4.2.1 and KVM (version 0.12.1) are being used. I have been working around for long to figure out the below mentioned Issue but unfortunately could not. Issue: Snapshot gets stuck in the state "Creating" forever. Impact: Couldn't get the snapshot consequently and thus, the instance from the CloudStack's UI could not be started/Stopped. Please see below trace from Management Server log which I think is the reason behind this. me":"ROOT-13","path":"ec5692cb-7758-4618-b589-09cb71e45dba","size":21474836480,"type":"ROOT","storagePoolType":"NetworkFilesystem","storagePoolUuid":"d41d4103-a11d-3160-8a3b-6469c574a93f","deviceId":0},{"id":19,"name":"DATA-13","path":"8017ac65-b52b-450b-937f-d92178de2f34","size":21474836480,"type":"DATADISK","storagePoolType":"NetworkFilesystem","storagePoolUuid":"d41d4103-a11d-3160-8a3b-6469c574a93f","deviceId":1}],"target":{"id":13,"snapshotName":"i-2-13-VM_VS_20140225120244","type":"Disk","current":false,"description":"new"},"vmName":"i-2-13-VM","guestOSType":"Red Hat Enterprise Linux 6.4 (64-bit)","wait":1800}}] } 2014-02-25 17:32:45,254 DEBUG [agent.transport.Request] (AgentManager-Handler-11:null) Seq 1-2016739514: Processing: { Ans: , MgmtId: 181122461670954, via: 1, Ver: v1, Flags: 10, [{"com.cloud.agent.api.UnsupportedAnswer":{"result":false,"details":"Unsupported command issued:com.cloud.agent.api.CreateVMSnapshotCommand. Are you sure you got the right type of server?","wait":0}}] } 2014-02-25 17:32:45,254 DEBUG [agent.transport.Request] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Seq 1-2016739514: Received: { Ans: , MgmtId: 181122461670954, via: 1, Ver: v1, Flags: 10, { UnsupportedAnswer } } 2014-02-25 17:32:45,254 WARN [agent.manager.AgentManagerImpl] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Unsupported Command: Unsupported command issued:com.cloud.agent.api.CreateVMSnapshotCommand. Are you sure you got the right type of server? 
2014-02-25 17:32:45,254 ERROR [vm.snapshot.VMSnapshotManagerImpl] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Create vm snapshot i-2-13-VM_VS_20140225120244 failed for vm: i-2-13-VM due to com.cloud.agent.api.UnsupportedAnswer cannot be cast to com.cloud.agent.api.CreateVMSnapshotAnswer 2014-02-25 17:32:45,265 DEBUG [cloud.api.ApiServlet] (catalina-exec-11:null) ===START=== 192.168.125.241 -- GET command=listOsTypes&response=json&sessionkey=bP1MluiO7KMYb3krOwtD1IpG0b0%3D&_=1393329472271 2014-02-25 17:32:45,324 DEBUG [cloud.api.ApiServlet] (catalina-exec-13:null) ===START=== 192.168.125.241 -- GET command=listTags&response=json&sessionkey=bP1MluiO7KMYb3krOwtD1IpG0b0%3D&resourceId=340c69ae-f3a0-4966-aa0b-c799e944404f&resourceType=UserVm&listAll=true&_=1393329472305 2014-02-25 17:32:45,330 DEBUG [cloud.api.ApiServlet] (catalina-exec-13:null) ===END=== 192.168.125.241 -- GET command=listTags&response=json&sessionkey=bP1MluiO7KMYb3krOwtD1IpG0b0%3D&resourceId=340c69ae-f3a0-4966-aa0b-c799e944404f&resourceType=UserVm&listAll=true&_=1393329472305 2014-02-25 17:32:45,373 DEBUG [cloud.api.ApiServlet] (catalina-exec-11:null) ===END=== 192.168.125.241 -- GET command=listOsTypes&response=json&sessionkey=bP1MluiO7KMYb3krOwtD1IpG0b0%3D&_=1393329472271 2014-02-25 17:32:45,419 ERROR [cloud.async.AsyncJobManagerImpl] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Unexpected exception while executing org.apache.cloudstack.api.command.user.vmsnapshot.CreateVMSnapshotCmd com.cloud.utils.exception.CloudRuntimeException: com.cloud.agent.api.UnsupportedAnswer cannot be cast to com.cloud.agent.api.CreateVMSnapshotAnswer at com.cloud.vm.snapshot.VMSnapshotManagerImpl.createVmSnapshotInternal(VMSnapshotManagerImpl.java:406) at com.cloud.vm.snapshot.VMSnapshotManagerImpl.creatVMSnapshot(VMSnapshotManagerImpl.java:356) at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125) at org.apache.cloudstack.api.command.user.vmsnapshot.CreateVMSnapshotCmd.execute(CreateVMSnapshotCmd.java:100) at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158) at com.cloud.async.AsyncJobManagerImpl$1.run(AsyncJobManagerImpl.java:531) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) 2014-02-25 17:32:45,420 DEBUG [cloud.async.AsyncJobManagerImpl] (Job-Executor-2:job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ]) Complete async job-143 = [ 8ebc7ede-829d-473b-b909-a397f4c83490 ], jobStatus: 2, resultCode: 530, result: Error Code: 530 Error text: com.cloud.agent.api.UnsupportedAnswer cannot be cast to com.cloud.agent.api.CreateVMSnapshotAnswer 2014-02-25 17:32:50,526 DEBUG [storage.secondary.SecondaryStorageManagerImpl] (secstorage-1:null) Zone 1 is ready to launch secondary storage VM 2014-02-25 17:32:50,626 DEBUG [cloud.consoleproxy.ConsoleProxyManagerImpl] (consoleproxy-1:null) Zone 1 is ready to launch console proxy 2014-02-25 17:32:50,760 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-14:null) SeqA 3-1596: Processing Seq 3-1596: { Cmd , MgmtId: -1, via: 3, Ver: v1, Flags: 11, 
[{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] } 2014-02-25 17:32:50,835 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-14:null) SeqA 3-1596: Sending Seq 3-1596: { Ans: , MgmtId: 181122461670954, via: 3, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] } 2014-02-25 17:32:51,461 DEBUG [network.router.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:null) Found 0 routers to update status. 2014-02-25 17:32:51,462 DEBUG [network.router.VirtualNetworkApplianceManagerImpl] (RouterStatusMonitor-1:null) Found 0 networks to update RvR status. 2014-02-25 17:32:54,001 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-13:null) Ping from 3 2014-02-25 17:32:54,082 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-12:null) Ping from 2 2014-02-25 17:32:54,340 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-15:null) Ping from 1 2014-02-25 17:32:55,800 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-2:null) SeqA 3-1598: Processing Seq 3-1598: { Cmd , MgmtId: -1, via: 3, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] } 2014-02-25 17:32:55,945 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-2:null) SeqA 3-1598: Sending Seq 3-1598: { Ans: , MgmtId: 181122461670954, via: 3, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] } 2014-02-25 17:33:05,762 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-1:null) SeqA 3-1599: Processing Seq 3-1599: { Cmd , MgmtId: -1, via: 3, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":2,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] } 2014-02-25 17:33:05,847 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-1:null) SeqA 3-1599: Sending Seq 3-1599: { Ans: , MgmtId: 181122461670954, via: 3, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] } 2014-02-25 17:33:12,091 DEBUG [cloud.server.StatsCollector] (StatsCollector-1:null) VmStatsCollector is running... 2014-02-25 17:33:12,185 DEBUG [agent.transport.Request] (StatsCollector-1:null) Seq 1-2016739515: Received: { Ans: , MgmtId: 181122461670954, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } } 2014-02-25 17:33:13,400 DEBUG [cloud.server.StatsCollector] (StatsCollector-2:null) StorageCollector is running... 2014-02-25 17:33:13,460 DEBUG [agent.transport.Request] (StatsCollector-2:null) Seq 2-1323958339: Received: { Ans: , MgmtId: 181122461670954, via: 2, Ver: v1, Flags: 10, { GetStorageStatsAnswer } } 2014-02-25 17:33:13,500 DEBUG [agent.transport.Request] (StatsCollector-2:null) Seq 1-2016739516: Received: { Ans: , MgmtId: 181122461670954, via: 1, Ver: v1, Flags: 10, { GetStorageStatsAnswer } } Please see below trace from agent log which was written at the moment I clicked on Snapshot button in the Cloudstack's UI: 2014-02-25 17:27:09,129 WARN [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-4:null) Unsupported command In order to restart the instance again, I manually delete the row from the table vm_snapshots in mysql database corresponding to this stuck snapshot. Please suggest me the fix with proper explanation for this issue. Thanks in advance. Regards, Rohit
cloud, snapshot, rhel, kvm
1
880
0
https://stackoverflow.com/questions/22217952/cloudstackkvm-snapshot-stuck-in-creating-status-forever
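The UnsupportedAnswer in the trace suggests the root cause is simply that the KVM agent in CloudStack 4.2.x does not implement CreateVMSnapshotCommand (VM snapshots of running instances were a hypervisor-dependent feature at the time), so the management server fails the job while the vm_snapshots row is left in Creating. For the cleanup the question already performs by hand, a slightly more conservative SQL sketch, assuming the stock schema (back up the database first; <stuck_id> is the stuck row's primary key):

    -- mark the row failed/removed instead of deleting it outright
    UPDATE vm_snapshots SET state = 'Error', removed = NOW() WHERE id = <stuck_id>;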
22,202,171
Chown NFS Volume Nobody for user Nobody for group
Every time I try to chown a file as root, it ends up as nobody:nobody on the NFS server. It works as root but it doesn't work with anything else. I'm on Red Hat Enterprise Linux 6.4. I use these settings in /etc/exports (this server is on an internal network only): /storage *(rw,async,no_all_squash,no_root_squash,anonuid=99,anongid=99) Can you help me? I already spent a complete day trying to fix it. Best Wishes, Thomas
Chown NFS Volume Nobody for user Nobody for group Every time I try to chown a file as root, it ends up as nobody:nobody on the NFS server. It works as root but it doesn't work with anything else. I'm on Red Hat Enterprise Linux 6.4. I use these settings in /etc/exports (this server is on an internal network only): /storage *(rw,async,no_all_squash,no_root_squash,anonuid=99,anongid=99) Can you help me? I already spent a complete day trying to fix it. Best Wishes, Thomas
nfs, rhel, chown
1
1,108
0
https://stackoverflow.com/questions/22202171/chown-nfs-volume-nobody-for-user-nobody-for-group
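With no_root_squash already in the export, everything collapsing to nobody:nobody is the classic NFSv4 id-mapping symptom rather than an export-option problem: RHEL 6 clients mount NFSv4 by default, and if the Domain in /etc/idmapd.conf differs between client and server, all owners map to the anonymous user. A sketch of the usual checks:

    # on BOTH client and server - the values must match:
    grep -i '^ *Domain' /etc/idmapd.conf
    # e.g.  Domain = example.com
    service rpcidmapd restart

    # or take NFSv4 out of the picture by forcing v3 on the client:
    mount -o vers=3 server:/storage /mnt/storage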
21,570,925
Update yum fails
I'm using CentOS 6.4. I'm getting this error when I use "yum update" in the terminal window: Could not retrieve mirrorlist [URL] error was 14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrorlist.centos.org'" Error: Cannot find a valid baseurl for repo: base Is there any alternative mirror for upgrading?
Update yum fails I'm using CentOS 6.4. I'm getting this error when I use "yum update" in the terminal window: Could not retrieve mirrorlist [URL] error was 14: PYCURL ERROR 6 - "Couldn't resolve host 'mirrorlist.centos.org'" Error: Cannot find a valid baseurl for repo: base Is there any alternative mirror for upgrading?
linux, centos, yum, centos6, rhel
1
1,330
1
https://stackoverflow.com/questions/21570925/update-yum-fails
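PYCURL ERROR 6 is a DNS-resolution failure, not a dead mirror, so pointing yum at an alternative mirror will not help until the box can resolve hostnames. A sketch of the usual checks:

    ping -c1 mirrorlist.centos.org    # reproduces the symptom
    cat /etc/resolv.conf              # must list a reachable nameserver
    # quick test only - add a public resolver and retry:
    echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
    yum clean all && yum update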
21,477,139
JBoss 5.1 - TopicConnection.createTopicSession hangs sometimes
TopicConnection.createTopicSession sometimes hangs in one environment while working fine in another. Env: JBoss 5.1, jdk1.6.0_45, RHEL 5.8, Dell VMware. The code is below. TopicConnectionFactory _factory = (TopicConnectionFactory)context.lookup("java:JmsXA"); TopicConnection _connection = _factory.createTopicConnection(); TopicSession _session = _connection.createTopicSession(false, 1); --This is the place where it hangs. Topic _topic = (Topic)context.lookup(s); TopicPublisher _publisher = _session.createPublisher(_topic); _connection.start(); I feel that it could be some server-configuration-related issue. Kindly provide your suggestions.
JBoss 5.1 - TopicConnection.createTopicSession hangs sometimes TopicConnection.createTopicSession sometimes hangs in one environment while working fine in another. Env: JBoss 5.1, jdk1.6.0_45, RHEL 5.8, Dell VMware. The code is below. TopicConnectionFactory _factory = (TopicConnectionFactory)context.lookup("java:JmsXA"); TopicConnection _connection = _factory.createTopicConnection(); TopicSession _session = _connection.createTopicSession(false, 1); --This is the place where it hangs. Topic _topic = (Topic)context.lookup(s); TopicPublisher _publisher = _session.createPublisher(_topic); _connection.start(); I feel that it could be some server-configuration-related issue. Kindly provide your suggestions.
java, jakarta-ee, jboss, jms, rhel
1
196
1
https://stackoverflow.com/questions/21477139/jboss-5-1-topicconnection-createtopicsession-hangs-sometimes
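When a JMS call hangs in only one environment, a thread dump of the stuck JVM is the quickest way to see what it is blocked on; since java:JmsXA is the JCA-pooled connection factory, an exhausted session pool (sessions opened but never closed) is a common suspect in JBoss 5.x. A sketch:

    jstack <jboss-pid> > /tmp/threads.txt   # or: kill -3 <jboss-pid> (dump goes to the console log)
    grep -B2 -A15 createTopicSession /tmp/threads.txt

If the dump shows the thread parked waiting for a managed connection, look at the JmsXA pool sizing (deploy/messaging/jms-ds.xml in a default JBoss 5 install, if memory serves) and, more importantly, make sure every session is closed in a finally block.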
20,944,633
Issue in installing gcc-c++
Hey guys, I am trying to install g++ on my VPN, which has no internet connectivity or yum access, so I am installing it from source tarballs. When I tried to run ./configure I got the following error: -bash-4.1# ./configure configure: error: cannot find install-sh or install.sh in . ./.. ./../.. Then I tried downloading autotools and installed them from [URL] [URL] My configure.ac file contains: # Process this file with autoreconf to produce a configure script. AC_PREREQ(2.59) AC_INIT(package-unused, version-unused,, libstdc++) AC_CONFIG_SRCDIR(src/ios.cc) AC_CONFIG_HEADER(config.h) # This works around the fact that libtool configuration may change LD # for this particular configuration, but some shells, instead of # keeping the changes in LD private, export them just because LD is # exported. Only used at the end of this file. ### am handles this now? ORIGINAL_LD_FOR_MULTILIBS=$LD # For libtool versioning info, format is CURRENT:REVISION:AGE libtool_VERSION=6:13:0 AC_SUBST(libtool_VERSION) # Find the rest of the source tree framework. AM_ENABLE_MULTILIB(, ..) # Gets build, host, target, *_vendor, *_cpu, *_os, etc. # # You will slowly go insane if you do not grok the following fact: when # building v3 as part of the compiler, the top-level /target/ becomes the # library's /host/. configure then causes --target to default to --host, # exactly like any other package using autoconf. Therefore, 'target' and # 'host' will always be the same. This makes sense both for native and # cross compilers, just think about it for a little while. :-) # # Also, if v3 is being configured as part of a cross compiler, the top-level # configure script will pass the "real" host as $with_cross_host. # # Do not delete or change the following two lines. For why, see # [URL] AC_CANONICAL_SYSTEM target_alias=${target_alias-$host_alias} # Handy for debugging: #AC_MSG_NOTICE($build / $host / $target / $host_alias / $target_alias); sleep 5 if test "$build" != "$host"; then # We are being configured with some form of cross compiler. GLIBCXX_IS_NATIVE=false case "$host","$target" in # Darwin crosses can use the host system's libraries and headers, # because of the fat library support. Of course, it must be the # same version of Darwin on both sides. Allow the user to # just say --target=foo-darwin without a version number to mean # "the version on this system". *-*-darwin*,*-*-darwin*) hostos=echo $host | sed 's/.*-darwin/darwin/' targetos=echo $target | sed 's/.*-darwin/darwin/' if test $hostos = $targetos -o $targetos = darwin ; then GLIBCXX_IS_NATIVE=true fi ;; *) GCC_NO_EXECUTABLES ;; esac else GLIBCXX_IS_NATIVE=true fi # Sets up automake. Must come after AC_CANONICAL_SYSTEM. Each of the # following is magically included in AUTOMAKE_OPTIONS in each Makefile.am. # 1.x: minimum required version # no-define: PACKAGE and VERSION will not be #define'd in config.h (a bunch # of other PACKAGE_* variables will, however, and there's nothing # we can do about that; they come from AC_INIT). # foreign: we don't follow the normal rules for GNU packages (no COPYING # file in the top srcdir, etc, etc), so stop complaining. # no-dependencies: turns off auto dependency generation (just for now) # -Wall: turns on all automake warnings... # -Wno-portability: ...except this one, since GNU make is now required.
AM_INIT_AUTOMAKE([1.9.3 no-define foreign no-dependencies no-dist -Wall -Wno-portability -Wno-override]) AH_TEMPLATE(PACKAGE, [Name of package]) AH_TEMPLATE(VERSION, [Version number of package]) # Runs configure.host, finds CC, CXX, and assorted other critical bits. Sets # up critical shell variables. GLIBCXX_CONFIGURE if test "x${with_newlib}" != "xyes"; then AC_LIBTOOL_DLOPEN fi AM_PROG_LIBTOOL AC_SUBST(enable_shared) AC_SUBST(enable_static) # Possibly disable most of the library. ## TODO: Consider skipping unncessary tests altogether in this case, rather ## than just ignoring the results. Faster /and/ more correct, win win. GLIBCXX_ENABLE_HOSTED # Enable compiler support that doesn't require linking. GLIBCXX_ENABLE_SJLJ_EXCEPTIONS GLIBCXX_ENABLE_PCH($is_hosted) GLIBCXX_ENABLE_THREADS GLIBCXX_ENABLE_ATOMIC_BUILTINS # Checks for compiler support that doesn't require linking. GLIBCXX_CHECK_COMPILER_FEATURES # Enable all the variable C++ runtime options that don't require linking. GLIBCXX_ENABLE_CSTDIO GLIBCXX_ENABLE_CLOCALE GLIBCXX_ENABLE_ALLOCATOR GLIBCXX_ENABLE_CHEADERS($c_model) dnl c_model from configure.host GLIBCXX_ENABLE_LONG_LONG([yes]) GLIBCXX_ENABLE_WCHAR_T([yes]) GLIBCXX_ENABLE_C99([yes]) GLIBCXX_ENABLE_CONCEPT_CHECKS([no]) GLIBCXX_ENABLE_DEBUG_FLAGS(["-g3 -O0"]) GLIBCXX_ENABLE_DEBUG([no]) GLIBCXX_ENABLE_PARALLEL([yes]) GLIBCXX_ENABLE_CXX_FLAGS GLIBCXX_ENABLE_FULLY_DYNAMIC_STRING([no]) # Checks for operating systems support that doesn't require linking. GLIBCXX_CHECK_SYSTEM_ERROR # For the streamoff typedef. GLIBCXX_CHECK_INT64_T # For LFS support. GLIBCXX_CHECK_LFS # For showmanyc_helper(). AC_CHECK_HEADERS(sys/ioctl.h sys/filio.h) GLIBCXX_CHECK_POLL GLIBCXX_CHECK_S_ISREG_OR_S_IFREG # For xsputn_2(). AC_CHECK_HEADERS(sys/uio.h) GLIBCXX_CHECK_WRITEV # For C99 support to TR1. GLIBCXX_CHECK_C99_TR1 # For common values of EOF, SEEK_CUR, SEEK_END. GLIBCXX_CHECK_STDIO_MACROS # For gettimeofday support. GLIBCXX_CHECK_GETTIMEOFDAY # For clock_gettime, nanosleep and sched_yield support. # NB: The default is [no], because otherwise it requires linking. GLIBCXX_ENABLE_LIBSTDCXX_TIME([no]) # For gthread support GLIBCXX_CHECK_GTHREADS AC_LC_MESSAGES # Check for available headers. AC_CHECK_HEADERS([endian.h float.h fp.h ieeefp.h inttypes.h locale.h \ machine/endian.h machine/param.h nan.h stdint.h stdlib.h string.h \ strings.h sys/ipc.h sys/isa_defs.h sys/machine.h sys/param.h \ sys/resource.h sys/sem.h sys/stat.h sys/time.h sys/types.h unistd.h \ wchar.h wctype.h]) # Only do link tests if native. Else, hardcode. if $GLIBCXX_IS_NATIVE; then # We can do more elaborate tests that assume a working linker. CANADIAN=no GLIBCXX_CHECK_LINKER_FEATURES GLIBCXX_CHECK_MATH_SUPPORT GLIBCXX_CHECK_STDLIB_SUPPORT # For /dev/random and /dev/urandom for TR1. GLIBCXX_CHECK_RANDOM_TR1 # For TLS support. GCC_CHECK_TLS # For iconv support. AM_ICONV else # This lets us hard-code the functionality we know we'll have in the cross # target environment. "Let" is a sugar-coated word placed on an especially # dull and tedious hack, actually. # # Here's why GLIBCXX_CHECK_MATH_SUPPORT, and other autoconf macros # that involve linking, can't be used: # "cannot open sim-crt0.o" # "cannot open crt0.o" # etc. All this is because there currently exists no unified, consistent # way for top level CC information to be passed down to target directories: # newlib includes, newlib linking info, libgloss versus newlib crt0.o, etc. # When all of that is done, all of this hokey, excessive AC_DEFINE junk for # crosses can be removed. 
# If Canadian cross, then don't pick up tools from the build directory. # Used only in GLIBCXX_EXPORT_INCLUDES. if test -n "$with_cross_host" && test x"$build_alias" != x"$with_cross_host" && test x"$build" != x"$target"; then CANADIAN=yes else CANADIAN=no fi # Construct crosses by hand, eliminating bits that need ld... # GLIBCXX_CHECK_MATH_SUPPORT # First, test for "known" system libraries. We may be using newlib even # on a hosted environment. if test "x${with_newlib}" = "xyes"; then os_include_dir="os/newlib" AC_DEFINE(HAVE_HYPOT) # GLIBCXX_CHECK_STDLIB_SUPPORT AC_DEFINE(HAVE_STRTOF) AC_DEFINE(HAVE_ACOSF) AC_DEFINE(HAVE_ASINF) AC_DEFINE(HAVE_ATAN2F) AC_DEFINE(HAVE_ATANF) AC_DEFINE(HAVE_CEILF) AC_DEFINE(HAVE_COSF) AC_DEFINE(HAVE_COSHF) AC_DEFINE(HAVE_EXPF) AC_DEFINE(HAVE_FABSF) AC_DEFINE(HAVE_FLOORF) AC_DEFINE(HAVE_FMODF) AC_DEFINE(HAVE_FREXPF) AC_DEFINE(HAVE_LDEXPF) AC_DEFINE(HAVE_LOG10F) AC_DEFINE(HAVE_LOGF) AC_DEFINE(HAVE_MODFF) AC_DEFINE(HAVE_POWF) AC_DEFINE(HAVE_SINF) AC_DEFINE(HAVE_SINHF) AC_DEFINE(HAVE_SQRTF) AC_DEFINE(HAVE_TANF) AC_DEFINE(HAVE_TANHF) AC_DEFINE(HAVE_ICONV) else GLIBCXX_CROSSCONFIG fi # At some point, we should differentiate between architectures # like x86, which have long double versions, and alpha/powerpc/etc., # which don't. For the time being, punt. if test x"long_double_math_on_this_cpu" = x"yes"; then AC_DEFINE(HAVE_ACOSL) AC_DEFINE(HAVE_ASINL) AC_DEFINE(HAVE_ATAN2L) AC_DEFINE(HAVE_ATANL) AC_DEFINE(HAVE_CEILL) AC_DEFINE(HAVE_COSL) AC_DEFINE(HAVE_COSHL) AC_DEFINE(HAVE_EXPL) AC_DEFINE(HAVE_FABSL) AC_DEFINE(HAVE_FLOORL) AC_DEFINE(HAVE_FMODL) AC_DEFINE(HAVE_FREXPL) AC_DEFINE(HAVE_LDEXPL) AC_DEFINE(HAVE_LOG10L) AC_DEFINE(HAVE_LOGL) AC_DEFINE(HAVE_MODFL) AC_DEFINE(HAVE_POWL) AC_DEFINE(HAVE_SINCOSL) AC_DEFINE(HAVE_SINL) AC_DEFINE(HAVE_SINHL) AC_DEFINE(HAVE_SQRTL) AC_DEFINE(HAVE_TANL) AC_DEFINE(HAVE_TANHL) fi fi # Check for _Unwind_GetIPInfo. GCC_CHECK_UNWIND_GETIPINFO GCC_LINUX_FUTEX([AC_DEFINE(HAVE_LINUX_FUTEX, 1, [Define if futex syscall is available.])]) GCC_HEADER_STDINT(include/gstdint.h) # This depends on GLIBCXX CHECK_LINKER_FEATURES, but without it assumes no. GLIBCXX_ENABLE_SYMVERS([yes]) GLIBCXX_ENABLE_VISIBILITY([yes]) ac_ldbl_compat=no case "$target" in powerpc*-*-linux* | \ powerpc*-*-gnu* | \ sparc*-*-linux* | \ s390*-*-linux* | \ alpha*-*-linux*) AC_TRY_COMPILE(, [ #if !defined __LONG_DOUBLE_128__ || (defined(__sparc__) && defined(__arch64__)) #error no need for long double compatibility #endif ], [ac_ldbl_compat=yes], [ac_ldbl_compat=no]) if test "$ac_ldbl_compat" = yes; then AC_DEFINE([_GLIBCXX_LONG_DOUBLE_COMPAT],1, [Define if compatibility should be provided for -mlong-double-64.]) port_specific_symbol_files="\$(top_srcdir)/config/os/gnu-linux/ldbl-extra.ver" fi esac GLIBCXX_CONDITIONAL(GLIBCXX_LDBL_COMPAT, test $ac_ldbl_compat = yes) # This depends on GLIBCXX_ENABLE_SYMVERS and GLIBCXX_IS_NATIVE. GLIBCXX_CONFIGURE_TESTSUITE # Propagate the target-specific source directories through the build chain. ATOMICITY_SRCDIR=config/${atomicity_dir} ATOMIC_WORD_SRCDIR=config/${atomic_word_dir} ATOMIC_FLAGS=${atomic_flags} CPU_DEFINES_SRCDIR=config/${cpu_defines_dir} OS_INC_SRCDIR=config/${os_include_dir} ERROR_CONSTANTS_SRCDIR=config/${error_constants_dir} ABI_TWEAKS_SRCDIR=config/${abi_tweaks_dir} AC_SUBST(ATOMICITY_SRCDIR) AC_SUBST(ATOMIC_WORD_SRCDIR) AC_SUBST(ATOMIC_FLAGS) AC_SUBST(CPU_DEFINES_SRCDIR) AC_SUBST(ABI_TWEAKS_SRCDIR) AC_SUBST(OS_INC_SRCDIR) AC_SUBST(ERROR_CONSTANTS_SRCDIR) # Determine cross-compile flags and AM_CONDITIONALs. 
#AC_SUBST(GLIBCXX_IS_NATIVE) #AM_CONDITIONAL(CANADIAN, test $CANADIAN = yes) GLIBCXX_EVALUATE_CONDITIONALS AC_CACHE_SAVE if test ${multilib} = yes; then multilib_arg="--enable-multilib" else multilib_arg= fi # Export all the install information. GLIBCXX_EXPORT_INSTALL_INFO # Export all the include and flag information to Makefiles. GLIBCXX_EXPORT_INCLUDES GLIBCXX_EXPORT_FLAGS if test "$enable_shared" = yes; then LIBSUPCXX_PICFLAGS="-prefer-pic" else LIBSUPCXX_PICFLAGS= fi AC_SUBST(LIBSUPCXX_PICFLAGS) dnl In autoconf 2.5x, AC_OUTPUT is replaced by four AC_CONFIG_* macros, dnl which can all be called multiple times as needed, plus one (different) dnl AC_OUTPUT macro. This one lists the files to be created: AC_CONFIG_FILES( \ Makefile \ AC_FOREACH([DIR], glibcxx_SUBDIRS, [DIR/Makefile ]) ) AC_CONFIG_FILES([scripts/testsuite_flags],[chmod +x scripts/testsuite_flags]) dnl These commands are run at the end of config.status: AC_CONFIG_COMMANDS([default], [if test -n "$CONFIG_FILES"; then # Multilibs need MULTISUBDIR defined correctly in certain makefiles so # that multilib installs will end up installed in the correct place. # The testsuite needs it for multilib-aware ABI baseline files. # To work around this not being passed down from config-ml.in -> # srcdir/Makefile.am -> srcdir/{src,libsupc++,...}/Makefile.am, manually # append it here. Only modify Makefiles that have just been created. # # Also, get rid of this simulated-VPATH thing that automake does. cat > vpsed << \_EOF s!test -f '$<' || echo '$(srcdir)/'!! _EOF for i in $SUBDIRS; do case $CONFIG_FILES in *${i}/Makefile*) #echo "Adding MULTISUBDIR to $i/Makefile" sed -f vpsed $i/Makefile > tmp grep '^MULTISUBDIR =' Makefile >> tmp mv tmp $i/Makefile ;; esac done rm vpsed fi (cd include && ${MAKE-make}) ], [ # Variables needed in config.status (file generation) which aren't already # passed by autoconf. SUBDIRS="$SUBDIRS" ]) dnl And this actually makes things happen: AC_OUTPUT still i don't know what is wrong with this.
Issue in installing gcc-c++ Hey guys I am trying to install G++ on my VPN which have no internet connectivity of YUM access. so I am installing it through source tarballs when i tried to run ./configure i got the following error -bash-4.1# ./configure configure: error: cannot find install-sh or install.sh in . ./.. ./../.. Then i tried downloading auto tools and installed it from [URL] [URL] My configure.ac file contains # Process this file with autoreconf to produce a configure script. AC_PREREQ(2.59) AC_INIT(package-unused, version-unused,, libstdc++) AC_CONFIG_SRCDIR(src/ios.cc) AC_CONFIG_HEADER(config.h) # This works around the fact that libtool configuration may change LD # for this particular configuration, but some shells, instead of # keeping the changes in LD private, export them just because LD is # exported. Only used at the end of this file. ### am handles this now? ORIGINAL_LD_FOR_MULTILIBS=$LD # For libtool versioning info, format is CURRENT:REVISION:AGE libtool_VERSION=6:13:0 AC_SUBST(libtool_VERSION) # Find the rest of the source tree framework. AM_ENABLE_MULTILIB(, ..) # Gets build, host, target, *_vendor, *_cpu, *_os, etc. # # You will slowly go insane if you do not grok the following fact: when # building v3 as part of the compiler, the top-level /target/ becomes the # library's /host/. configure then causes --target to default to --host, # exactly like any other package using autoconf. Therefore, 'target' and # 'host' will always be the same. This makes sense both for native and # cross compilers, just think about it for a little while. :-) # # Also, if v3 is being configured as part of a cross compiler, the top-level # configure script will pass the "real" host as $with_cross_host. # # Do not delete or change the following two lines. For why, see # [URL] AC_CANONICAL_SYSTEM target_alias=${target_alias-$host_alias} # Handy for debugging: #AC_MSG_NOTICE($build / $host / $target / $host_alias / $target_alias); sleep 5 if test "$build" != "$host"; then # We are being configured with some form of cross compiler. GLIBCXX_IS_NATIVE=false case "$host","$target" in # Darwin crosses can use the host system's libraries and headers, # because of the fat library support. Of course, it must be the # same version of Darwin on both sides. Allow the user to # just say --target=foo-darwin without a version number to mean # "the version on this system". *-*-darwin*,*-*-darwin*) hostos=echo $host | sed 's/.*-darwin/darwin/' targetos=echo $target | sed 's/.*-darwin/darwin/' if test $hostos = $targetos -o $targetos = darwin ; then GLIBCXX_IS_NATIVE=true fi ;; *) GCC_NO_EXECUTABLES ;; esac else GLIBCXX_IS_NATIVE=true fi # Sets up automake. Must come after AC_CANONICAL_SYSTEM. Each of the # following is magically included in AUTOMAKE_OPTIONS in each Makefile.am. # 1.x: minimum required version # no-define: PACKAGE and VERSION will not be #define'd in config.h (a bunch # of other PACKAGE_* variables will, however, and there's nothing # we can do about that; they come from AC_INIT). # foreign: we don't follow the normal rules for GNU packages (no COPYING # file in the top srcdir, etc, etc), so stop complaining. # no-dependencies: turns off auto dependency generation (just for now) # -Wall: turns on all automake warnings... # -Wno-portability: ...except this one, since GNU make is now required. 
AM_INIT_AUTOMAKE([1.9.3 no-define foreign no-dependencies no-dist -Wall -Wno-portability -Wno-override]) AH_TEMPLATE(PACKAGE, [Name of package]) AH_TEMPLATE(VERSION, [Version number of package]) # Runs configure.host, finds CC, CXX, and assorted other critical bits. Sets # up critical shell variables. GLIBCXX_CONFIGURE if test "x${with_newlib}" != "xyes"; then AC_LIBTOOL_DLOPEN fi AM_PROG_LIBTOOL AC_SUBST(enable_shared) AC_SUBST(enable_static) # Possibly disable most of the library. ## TODO: Consider skipping unncessary tests altogether in this case, rather ## than just ignoring the results. Faster /and/ more correct, win win. GLIBCXX_ENABLE_HOSTED # Enable compiler support that doesn't require linking. GLIBCXX_ENABLE_SJLJ_EXCEPTIONS GLIBCXX_ENABLE_PCH($is_hosted) GLIBCXX_ENABLE_THREADS GLIBCXX_ENABLE_ATOMIC_BUILTINS # Checks for compiler support that doesn't require linking. GLIBCXX_CHECK_COMPILER_FEATURES # Enable all the variable C++ runtime options that don't require linking. GLIBCXX_ENABLE_CSTDIO GLIBCXX_ENABLE_CLOCALE GLIBCXX_ENABLE_ALLOCATOR GLIBCXX_ENABLE_CHEADERS($c_model) dnl c_model from configure.host GLIBCXX_ENABLE_LONG_LONG([yes]) GLIBCXX_ENABLE_WCHAR_T([yes]) GLIBCXX_ENABLE_C99([yes]) GLIBCXX_ENABLE_CONCEPT_CHECKS([no]) GLIBCXX_ENABLE_DEBUG_FLAGS(["-g3 -O0"]) GLIBCXX_ENABLE_DEBUG([no]) GLIBCXX_ENABLE_PARALLEL([yes]) GLIBCXX_ENABLE_CXX_FLAGS GLIBCXX_ENABLE_FULLY_DYNAMIC_STRING([no]) # Checks for operating systems support that doesn't require linking. GLIBCXX_CHECK_SYSTEM_ERROR # For the streamoff typedef. GLIBCXX_CHECK_INT64_T # For LFS support. GLIBCXX_CHECK_LFS # For showmanyc_helper(). AC_CHECK_HEADERS(sys/ioctl.h sys/filio.h) GLIBCXX_CHECK_POLL GLIBCXX_CHECK_S_ISREG_OR_S_IFREG # For xsputn_2(). AC_CHECK_HEADERS(sys/uio.h) GLIBCXX_CHECK_WRITEV # For C99 support to TR1. GLIBCXX_CHECK_C99_TR1 # For common values of EOF, SEEK_CUR, SEEK_END. GLIBCXX_CHECK_STDIO_MACROS # For gettimeofday support. GLIBCXX_CHECK_GETTIMEOFDAY # For clock_gettime, nanosleep and sched_yield support. # NB: The default is [no], because otherwise it requires linking. GLIBCXX_ENABLE_LIBSTDCXX_TIME([no]) # For gthread support GLIBCXX_CHECK_GTHREADS AC_LC_MESSAGES # Check for available headers. AC_CHECK_HEADERS([endian.h float.h fp.h ieeefp.h inttypes.h locale.h \ machine/endian.h machine/param.h nan.h stdint.h stdlib.h string.h \ strings.h sys/ipc.h sys/isa_defs.h sys/machine.h sys/param.h \ sys/resource.h sys/sem.h sys/stat.h sys/time.h sys/types.h unistd.h \ wchar.h wctype.h]) # Only do link tests if native. Else, hardcode. if $GLIBCXX_IS_NATIVE; then # We can do more elaborate tests that assume a working linker. CANADIAN=no GLIBCXX_CHECK_LINKER_FEATURES GLIBCXX_CHECK_MATH_SUPPORT GLIBCXX_CHECK_STDLIB_SUPPORT # For /dev/random and /dev/urandom for TR1. GLIBCXX_CHECK_RANDOM_TR1 # For TLS support. GCC_CHECK_TLS # For iconv support. AM_ICONV else # This lets us hard-code the functionality we know we'll have in the cross # target environment. "Let" is a sugar-coated word placed on an especially # dull and tedious hack, actually. # # Here's why GLIBCXX_CHECK_MATH_SUPPORT, and other autoconf macros # that involve linking, can't be used: # "cannot open sim-crt0.o" # "cannot open crt0.o" # etc. All this is because there currently exists no unified, consistent # way for top level CC information to be passed down to target directories: # newlib includes, newlib linking info, libgloss versus newlib crt0.o, etc. # When all of that is done, all of this hokey, excessive AC_DEFINE junk for # crosses can be removed. 
# If Canadian cross, then don't pick up tools from the build directory. # Used only in GLIBCXX_EXPORT_INCLUDES. if test -n "$with_cross_host" && test x"$build_alias" != x"$with_cross_host" && test x"$build" != x"$target"; then CANADIAN=yes else CANADIAN=no fi # Construct crosses by hand, eliminating bits that need ld... # GLIBCXX_CHECK_MATH_SUPPORT # First, test for "known" system libraries. We may be using newlib even # on a hosted environment. if test "x${with_newlib}" = "xyes"; then os_include_dir="os/newlib" AC_DEFINE(HAVE_HYPOT) # GLIBCXX_CHECK_STDLIB_SUPPORT AC_DEFINE(HAVE_STRTOF) AC_DEFINE(HAVE_ACOSF) AC_DEFINE(HAVE_ASINF) AC_DEFINE(HAVE_ATAN2F) AC_DEFINE(HAVE_ATANF) AC_DEFINE(HAVE_CEILF) AC_DEFINE(HAVE_COSF) AC_DEFINE(HAVE_COSHF) AC_DEFINE(HAVE_EXPF) AC_DEFINE(HAVE_FABSF) AC_DEFINE(HAVE_FLOORF) AC_DEFINE(HAVE_FMODF) AC_DEFINE(HAVE_FREXPF) AC_DEFINE(HAVE_LDEXPF) AC_DEFINE(HAVE_LOG10F) AC_DEFINE(HAVE_LOGF) AC_DEFINE(HAVE_MODFF) AC_DEFINE(HAVE_POWF) AC_DEFINE(HAVE_SINF) AC_DEFINE(HAVE_SINHF) AC_DEFINE(HAVE_SQRTF) AC_DEFINE(HAVE_TANF) AC_DEFINE(HAVE_TANHF) AC_DEFINE(HAVE_ICONV) else GLIBCXX_CROSSCONFIG fi # At some point, we should differentiate between architectures # like x86, which have long double versions, and alpha/powerpc/etc., # which don't. For the time being, punt. if test x"long_double_math_on_this_cpu" = x"yes"; then AC_DEFINE(HAVE_ACOSL) AC_DEFINE(HAVE_ASINL) AC_DEFINE(HAVE_ATAN2L) AC_DEFINE(HAVE_ATANL) AC_DEFINE(HAVE_CEILL) AC_DEFINE(HAVE_COSL) AC_DEFINE(HAVE_COSHL) AC_DEFINE(HAVE_EXPL) AC_DEFINE(HAVE_FABSL) AC_DEFINE(HAVE_FLOORL) AC_DEFINE(HAVE_FMODL) AC_DEFINE(HAVE_FREXPL) AC_DEFINE(HAVE_LDEXPL) AC_DEFINE(HAVE_LOG10L) AC_DEFINE(HAVE_LOGL) AC_DEFINE(HAVE_MODFL) AC_DEFINE(HAVE_POWL) AC_DEFINE(HAVE_SINCOSL) AC_DEFINE(HAVE_SINL) AC_DEFINE(HAVE_SINHL) AC_DEFINE(HAVE_SQRTL) AC_DEFINE(HAVE_TANL) AC_DEFINE(HAVE_TANHL) fi fi # Check for _Unwind_GetIPInfo. GCC_CHECK_UNWIND_GETIPINFO GCC_LINUX_FUTEX([AC_DEFINE(HAVE_LINUX_FUTEX, 1, [Define if futex syscall is available.])]) GCC_HEADER_STDINT(include/gstdint.h) # This depends on GLIBCXX CHECK_LINKER_FEATURES, but without it assumes no. GLIBCXX_ENABLE_SYMVERS([yes]) GLIBCXX_ENABLE_VISIBILITY([yes]) ac_ldbl_compat=no case "$target" in powerpc*-*-linux* | \ powerpc*-*-gnu* | \ sparc*-*-linux* | \ s390*-*-linux* | \ alpha*-*-linux*) AC_TRY_COMPILE(, [ #if !defined __LONG_DOUBLE_128__ || (defined(__sparc__) && defined(__arch64__)) #error no need for long double compatibility #endif ], [ac_ldbl_compat=yes], [ac_ldbl_compat=no]) if test "$ac_ldbl_compat" = yes; then AC_DEFINE([_GLIBCXX_LONG_DOUBLE_COMPAT],1, [Define if compatibility should be provided for -mlong-double-64.]) port_specific_symbol_files="\$(top_srcdir)/config/os/gnu-linux/ldbl-extra.ver" fi esac GLIBCXX_CONDITIONAL(GLIBCXX_LDBL_COMPAT, test $ac_ldbl_compat = yes) # This depends on GLIBCXX_ENABLE_SYMVERS and GLIBCXX_IS_NATIVE. GLIBCXX_CONFIGURE_TESTSUITE # Propagate the target-specific source directories through the build chain. ATOMICITY_SRCDIR=config/${atomicity_dir} ATOMIC_WORD_SRCDIR=config/${atomic_word_dir} ATOMIC_FLAGS=${atomic_flags} CPU_DEFINES_SRCDIR=config/${cpu_defines_dir} OS_INC_SRCDIR=config/${os_include_dir} ERROR_CONSTANTS_SRCDIR=config/${error_constants_dir} ABI_TWEAKS_SRCDIR=config/${abi_tweaks_dir} AC_SUBST(ATOMICITY_SRCDIR) AC_SUBST(ATOMIC_WORD_SRCDIR) AC_SUBST(ATOMIC_FLAGS) AC_SUBST(CPU_DEFINES_SRCDIR) AC_SUBST(ABI_TWEAKS_SRCDIR) AC_SUBST(OS_INC_SRCDIR) AC_SUBST(ERROR_CONSTANTS_SRCDIR) # Determine cross-compile flags and AM_CONDITIONALs. 
#AC_SUBST(GLIBCXX_IS_NATIVE) #AM_CONDITIONAL(CANADIAN, test $CANADIAN = yes) GLIBCXX_EVALUATE_CONDITIONALS AC_CACHE_SAVE if test ${multilib} = yes; then multilib_arg="--enable-multilib" else multilib_arg= fi # Export all the install information. GLIBCXX_EXPORT_INSTALL_INFO # Export all the include and flag information to Makefiles. GLIBCXX_EXPORT_INCLUDES GLIBCXX_EXPORT_FLAGS if test "$enable_shared" = yes; then LIBSUPCXX_PICFLAGS="-prefer-pic" else LIBSUPCXX_PICFLAGS= fi AC_SUBST(LIBSUPCXX_PICFLAGS) dnl In autoconf 2.5x, AC_OUTPUT is replaced by four AC_CONFIG_* macros, dnl which can all be called multiple times as needed, plus one (different) dnl AC_OUTPUT macro. This one lists the files to be created: AC_CONFIG_FILES( \ Makefile \ AC_FOREACH([DIR], glibcxx_SUBDIRS, [DIR/Makefile ]) ) AC_CONFIG_FILES([scripts/testsuite_flags],[chmod +x scripts/testsuite_flags]) dnl These commands are run at the end of config.status: AC_CONFIG_COMMANDS([default], [if test -n "$CONFIG_FILES"; then # Multilibs need MULTISUBDIR defined correctly in certain makefiles so # that multilib installs will end up installed in the correct place. # The testsuite needs it for multilib-aware ABI baseline files. # To work around this not being passed down from config-ml.in -> # srcdir/Makefile.am -> srcdir/{src,libsupc++,...}/Makefile.am, manually # append it here. Only modify Makefiles that have just been created. # # Also, get rid of this simulated-VPATH thing that automake does. cat > vpsed << \_EOF s!test -f '$<' || echo '$(srcdir)/'!! _EOF for i in $SUBDIRS; do case $CONFIG_FILES in *${i}/Makefile*) #echo "Adding MULTISUBDIR to $i/Makefile" sed -f vpsed $i/Makefile > tmp grep '^MULTISUBDIR =' Makefile >> tmp mv tmp $i/Makefile ;; esac done rm vpsed fi (cd include && ${MAKE-make}) ], [ # Variables needed in config.status (file generation) which aren't already # passed by autoconf. SUBDIRS="$SUBDIRS" ]) dnl And this actually makes things happen: AC_OUTPUT still i don't know what is wrong with this.
c++, linux, gcc, rhel
1
529
0
https://stackoverflow.com/questions/20944633/issue-in-installing-gcc-c
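A hedged note on the configure failure in the question above: GCC subdirectories such as libstdc++-v3 are not meant to be configured standalone, and the missing install-sh is the classic symptom of trying. The usual cure is an out-of-tree build driven from the top-level source directory; a minimal sketch, with gcc-x.y.z standing in for whatever tarball is actually in use:

    tar xzf gcc-x.y.z.tar.gz
    mkdir gcc-build && cd gcc-build        # build tree kept separate from the source tree
    ../gcc-x.y.z/configure --prefix=/usr/local --enable-languages=c,c++
    make && make install

The top-level tree ships install-sh, and the sub-configures find it by walking up from their source directory — which only works when the whole tree is configured together, not when ./configure is run inside libstdc++.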
20,489,832
How to set password policy for specific user or group in RHEL using PAM configuration
How do I set a password policy for a specific user or group in RHEL using PAM configuration? I changed my "/etc/pam.d/system-auth" file as below, but I am still not able to set an easy password for user "dkumar" in group "dkumar": password [success=1 default=ignore] pam_succeed_if.so user ingroup dkumar password sufficient pam_cracklib.so try_first_pass retry=3 minlen=1 password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok password required pam_deny.so
How to set password policy for specific user or group in RHEL using PAM configuration How do I set a password policy for a specific user or group in RHEL using PAM configuration? I changed my "/etc/pam.d/system-auth" file as below, but I am still not able to set an easy password for user "dkumar" in group "dkumar": password [success=1 default=ignore] pam_succeed_if.so user ingroup dkumar password sufficient pam_cracklib.so try_first_pass retry=3 minlen=1 password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok password required pam_deny.so
linux, rhel, pam
1
1,567
1
https://stackoverflow.com/questions/20489832/how-to-set-password-policy-for-specific-user-or-group-in-rhel-using-pam-configur
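A hedged sketch of one plausible rearrangement, untested: as written, a dkumar user's [success=1] jump skips pam_cracklib and lands on pam_unix, whose use_authtok then demands a token that no earlier module ever set, so the change never takes. Dropping use_authtok and tightening cracklib to requisite may behave more like the intent:

    password [success=1 default=ignore] pam_succeed_if.so user ingroup dkumar
    password requisite  pam_cracklib.so try_first_pass retry=3
    password sufficient pam_unix.so sha512 shadow nullok try_first_pass
    password required   pam_deny.so

Without use_authtok, pam_unix can take the password itself when cracklib has been skipped; and requisite (rather than sufficient) keeps a passing cracklib check from ending the stack before pam_unix ever writes the new hash.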
20,165,380
Compatibility with matplotlib, python and pandas on RHEL6
I have a manual install of numpy, matplotlib and pandas; basic tests seem to work fine. Versions here: Numpy 1.8.0 Matplotlib 1.3.1 Python 2.6.6 Pandas 0.12.0 When I run this code on this platform (RHEL 6.4) I get the following stack trace. 'plot'.format(numeric_data.__class__.__name__)) TypeError: Empty 'DataFrame': no numeric data to plot The same code runs fine on Fedora 19 without having to deal with any dtype issues, and on that platform I have matplotlib 1.2.1, numpy 1.7.1 and python 2.7.4. So will this not work on the RHEL 6.4 Python version? Code snippet #!/usr/bin/python ### Get the libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt from pandas import * disk_data = read_csv('collectl.sD.fullday.clean', sep=' ', index_col=1, parse_dates=True) sda_io = disk_data[['sda-Reads','sda-Writes']] print sda_io[:50] sda_io[:1000].plot(grid='on') plt.show() Trace Traceback (most recent call last): File "./parse-collectl.py", line 19, in <module> sda_io[:1000].plot(grid='on') File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 1636, in plot_frame plot_obj.generate() File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 854, in generate self._compute_plot_data() File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 949, in _compute_plot_data 'plot'.format(numeric_data.__class__.__name__)) TypeError: Empty 'DataFrame': no numeric data to plot
Compatibility with matplotlib, python and pandas on RHEL6 I have a manual install of numpy, matplotlib and pandas; basic tests seem to work fine. Versions here: Numpy 1.8.0 Matplotlib 1.3.1 Python 2.6.6 Pandas 0.12.0 When I run this code on this platform (RHEL 6.4) I get the following stack trace. 'plot'.format(numeric_data.__class__.__name__)) TypeError: Empty 'DataFrame': no numeric data to plot The same code runs fine on Fedora 19 without having to deal with any dtype issues, and on that platform I have matplotlib 1.2.1, numpy 1.7.1 and python 2.7.4. So will this not work on the RHEL 6.4 Python version? Code snippet #!/usr/bin/python ### Get the libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt from pandas import * disk_data = read_csv('collectl.sD.fullday.clean', sep=' ', index_col=1, parse_dates=True) sda_io = disk_data[['sda-Reads','sda-Writes']] print sda_io[:50] sda_io[:1000].plot(grid='on') plt.show() Trace Traceback (most recent call last): File "./parse-collectl.py", line 19, in <module> sda_io[:1000].plot(grid='on') File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 1636, in plot_frame plot_obj.generate() File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 854, in generate self._compute_plot_data() File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 949, in _compute_plot_data 'plot'.format(numeric_data.__class__.__name__)) TypeError: Empty 'DataFrame': no numeric data to plot
python, matplotlib, pandas, rhel
1
1,027
1
https://stackoverflow.com/questions/20165380/compatibility-with-matplotlib-python-and-pandas-on-rhel6
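A hedged diagnostic for the trace above: on this older numpy/pandas pairing, the usual culprit is the two columns coming out of read_csv as object dtype rather than numbers, which is exactly what "no numeric data to plot" complains about. A throwaway check from the shell, assuming the same input file:

    python -c "from pandas import read_csv; df = read_csv('collectl.sD.fullday.clean', sep=' ', index_col=1, parse_dates=True); print df.dtypes"

If object shows up there, an explicit cast before plotting — sda_io = sda_io.astype(float) — is the usual workaround on pandas 0.12.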
19,774,612
Spool to CSV with SQL*Plus and then convert the CSV to PDF
I have a SQL script that spools some results to a CSV file, which then gets e-mailed out to customers. The way Microsoft Excel (usually the end users' default application for opening CSV files) displays the file is slightly confusing for some end users, in that the columns usually need to be made larger (otherwise #'s are shown, etc.) before it can be printed. Is it possible to spool straight to a PDF file, or to convert the CSV to an easier-to-read PDF before e-mailing it out? I searched online for a command-line tool to convert a CSV to a PDF but came up blank. This is generated on a Red Hat Enterprise Linux server without an RHN subscription, so anything with a lot of dependencies would be a nightmare to install.
Spool to CSV with SQL*Plus and then convert the CSV to PDF I have a SQL script that spools some results to a CSV file, which then gets e-mailed out to customers. The way Microsoft Excel (usually the end users' default application for opening CSV files) displays the file is slightly confusing for some end users, in that the columns usually need to be made larger (otherwise #'s are shown, etc.) before it can be printed. Is it possible to spool straight to a PDF file, or to convert the CSV to an easier-to-read PDF before e-mailing it out? I searched online for a command-line tool to convert a CSV to a PDF but came up blank. This is generated on a Red Hat Enterprise Linux server without an RHN subscription, so anything with a lot of dependencies would be a nightmare to install.
excel, bash, pdf, csv, rhel
1
969
2
https://stackoverflow.com/questions/19774612/spool-to-csv-with-sqlplus-and-then-convert-the-csv-to-pdf
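A hedged possibility that stays inside the stock package set: align the CSV into fixed-width columns and render it with enscript plus ghostscript, both of which normally sit on the base install media (no RHN needed). File names below are placeholders:

    column -s, -t < report.csv | enscript -B -o - | ps2pdf - report.pdf

column -s, -t pads every field to the widest value in its column, which sidesteps the Excel resize-before-print dance entirely; SQL*Plus could even spool aligned text directly and skip the CSV intermediate.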
19,436,387
PHP - is this PHP or CentOS, Red hat enterprise linux bug? When doing php://output it fails to output under Red hat linux but works in other OS
I have this Excel file creator library ( [URL] ), which works on my Ubuntu machine to make xlsx files. But when I put this working version on the main server (CentOS and RHEL 6.4), the file is not served via $objWriter->save('php://output'); and no error is thrown in the log files. It simply fails to create the file and dump it to the browser. (Trying under ZF1, PHP5.) $objPHPExcel = new PHPExcel(); ..... $objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel2007'); ob_end_clean(); $objWriter->save('php://output');
PHP - is this PHP or CentOS, Red hat enterprise linux bug? When doing php://output it fails to output under Red hat linux but works in other OS I have this Excel file creator library ( [URL] ), which works on my Ubuntu machine to make xlsx files. But when I put this working version on the main server (CentOS and RHEL 6.4), the file is not served via $objWriter->save('php://output'); and no error is thrown in the log files. It simply fails to create the file and dump it to the browser. (Trying under ZF1, PHP5.) $objPHPExcel = new PHPExcel(); ..... $objWriter = PHPExcel_IOFactory::createWriter($objPHPExcel, 'Excel2007'); ob_end_clean(); $objWriter->save('php://output');
php, zend-framework, centos, phpexcel, rhel
1
1,269
1
https://stackoverflow.com/questions/19436387/php-is-this-php-or-centos-red-hat-enterprise-linux-bug-when-doing-php-outp
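A hedged first round of checks for the symptom above: the Excel2007 writer leans on PHP's zip and xml extensions, which Ubuntu tends to have and a minimal RHEL/CentOS install often lacks, and a fatal error during save() can be swallowed by output buffering so nothing reaches the page. From a shell on the server:

    php -m | egrep -i 'zip|xml|gd'       # are the writer's prerequisites loaded?
    tail -f /var/log/httpd/error_log     # watch while reproducing the request

If ZipArchive is missing, Excel2007 output dies in precisely this silent-looking way.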
19,358,565
Installing php-memcached error
I'm trying to install php53u-pecl-memcached and the following error ensues: [root@host1]# yum install php53u-pecl-memcached Loaded plugins: priorities, update-motd, upgrade-helper 736 packages excluded due to repository priority protections Resolving Dependencies --> Running transaction check ---> Package php53u-pecl-memcached.x86_64 0:1.0.0-2.ius.el6 will be installed --> Processing Dependency: libmemcached.so.2(libmemcached_2)(64bit) for package: php53u-pecl-memcached-1.0.0-2.ius.el6.x86_64 --> Processing Dependency: libmemcached.so.2()(64bit) for package: php53u-pecl-memcached-1.0.0-2.ius.el6.x86_64 --> Finished Dependency Resolution Error: Package: php53u-pecl-memcached-1.0.0-2.ius.el6.x86_64 (ius) Requires: libmemcached.so.2(libmemcached_2)(64bit) Error: Package: php53u-pecl-memcached-1.0.0-2.ius.el6.x86_64 (ius) Requires: libmemcached.so.2()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Why is this happening? I have memcached and libmemcached installed: [root@host1]# yum list installed | grep memca libmemcached.x86_64 1.0.8-2.6.amzn1 @amzn-main memcached.x86_64 1.4.13-1.11.amzn1 @amzn-main OS is Amazon Linux AMI release 2013.09 on an aws instance.
Installing php-memcached error I'm trying to install php53u-pecl-memcached and the following error ensues: [root@host1]# yum install php53u-pecl-memcached Loaded plugins: priorities, update-motd, upgrade-helper 736 packages excluded due to repository priority protections Resolving Dependencies --> Running transaction check ---> Package php53u-pecl-memcached.x86_64 0:1.0.0-2.ius.el6 will be installed --> Processing Dependency: libmemcached.so.2(libmemcached_2)(64bit) for package: php53u-pecl-memcached-1.0.0-2.ius.el6.x86_64 --> Processing Dependency: libmemcached.so.2()(64bit) for package: php53u-pecl-memcached-1.0.0-2.ius.el6.x86_64 --> Finished Dependency Resolution Error: Package: php53u-pecl-memcached-1.0.0-2.ius.el6.x86_64 (ius) Requires: libmemcached.so.2(libmemcached_2)(64bit) Error: Package: php53u-pecl-memcached-1.0.0-2.ius.el6.x86_64 (ius) Requires: libmemcached.so.2()(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Why is this happening? I have memcached and libmemcached installed: [root@host1]# yum list installed | grep memca libmemcached.x86_64 1.0.8-2.6.amzn1 @amzn-main memcached.x86_64 1.4.13-1.11.amzn1 @amzn-main OS is Amazon Linux AMI release 2013.09 on an aws instance.
php, amazon-web-services, memcached, rhel
1
404
0
https://stackoverflow.com/questions/19358565/installing-php-memcached-error
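A hedged reading of the error above: the IUS package was built against the old libmemcached.so.2 soname, while the installed amzn libmemcached ships a newer one, so yum is being truthful rather than confused. To see whether anything in the enabled repos still provides the old soname:

    yum provides 'libmemcached.so.2()(64bit)'
    rpm -q --provides libmemcached | grep -i 'libmemcached.so'

If nothing does, the realistic options are a pecl-memcached package built against this distro's libmemcached, or rebuilding the php53u package from its SRPM.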
19,175,137
dependencies of python 27 for RHEL5
Essentially I am trying to get Python 2.7 to work on Linux EL5, but currently with no success. I would really appreciate some pointers from anyone who has done this before. Does anyone happen to know whether there is a repo to get all the dependencies (EL5 rpms if not sources) for Python 2.7? I could find the RPMs for Python27 EL5 at [URL] but the set is still not complete. m000772@hkl20030997 ~]$ rpm -ivh python27-2.7.2-5.2.el5.x86_64.rpm error: Failed dependencies: libpython2.7.so.1.0()(64bit) is needed by python27-2.7.2-5.2.el5.x86_64 python27-libs = 2.7.2-5.2.el5 is needed by python27-2.7.2-5.2.el5.x86_64 [m000772@hkl20030997 ~]$ rpm -ivh python27-libs-2.7.2-5.2.el5.x86_64.rpm error: Failed dependencies: libdb-4.8.so()(64bit) is needed by python27-libs-2.7.2-5.2.el5.x86_64 Now there is no RPM for libdb-4.8.
dependencies of python 27 for RHEL5 Essentially I am trying to get Python 2.7 to work on Linux EL5, but currently with no success. I would really appreciate some pointers from anyone who has done this before. Does anyone happen to know whether there is a repo to get all the dependencies (EL5 rpms if not sources) for Python 2.7? I could find the RPMs for Python27 EL5 at [URL] but the set is still not complete. m000772@hkl20030997 ~]$ rpm -ivh python27-2.7.2-5.2.el5.x86_64.rpm error: Failed dependencies: libpython2.7.so.1.0()(64bit) is needed by python27-2.7.2-5.2.el5.x86_64 python27-libs = 2.7.2-5.2.el5 is needed by python27-2.7.2-5.2.el5.x86_64 [m000772@hkl20030997 ~]$ rpm -ivh python27-libs-2.7.2-5.2.el5.x86_64.rpm error: Failed dependencies: libdb-4.8.so()(64bit) is needed by python27-libs-2.7.2-5.2.el5.x86_64 Now there is no RPM for libdb-4.8.
python, linux, rpm, rhel
1
999
1
https://stackoverflow.com/questions/19175137/dependencies-of-python-27-for-rhel5
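A hedged note on the two failures above: rpm resolves dependencies only among the packages named on one command line, so the python27 / python27-libs circular complaint clears when the set is installed as a single transaction; libdb-4.8 would still need its own rpm from the same repository (that file name below is a guess):

    rpm -ivh python27-2.7.2-5.2.el5.x86_64.rpm \
             python27-libs-2.7.2-5.2.el5.x86_64.rpm \
             libdb4.8-4.8.30-1.el5.x86_64.rpm    # hypothetical package name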
18,485,005
ssh2 not working for suse
I am trying to ssh from RHEL to SLES machine. I am using PHP function ssh2 to achieve that. Authentication is not happening even after passing correct username and password. But the same code is working fine for RHEL->RHEL. I am able to ssh to the SLES machine from RHEL terminal. But using ssh2 it is not happening. <?php if (!function_exists("ssh2_connect")) die("function ssh2_connect doesn't exist"); // log in at ip on port 22 if(!($con = ssh2_connect(ip, 22))){ echo "fail: unable to establish connection\n"; } else { // try to authenticate with username root, password password if(!ssh2_auth_password($con, "root", "password")) { echo "fail: unable to authenticate\n"; } else { // allright, we're in! echo "okay: logged in...\n"; // execute a command if (!($stream = ssh2_exec($con, 'date' ))) { echo "fail: unable to execute command\n"; } else { // collect returning data from command //echo "in else"; stream_set_blocking($stream, true); $data = ""; while ($buf = fread($stream,4096)) { $data .= $buf; } echo $data; fclose($stream); } } } ?> I am getting the output "fail:unable to authenticate". Why is it happening that way? Any solution to it?
ssh2 not working for suse I am trying to ssh from RHEL to SLES machine. I am using PHP function ssh2 to achieve that. Authentication is not happening even after passing correct username and password. But the same code is working fine for RHEL->RHEL. I am able to ssh to the SLES machine from RHEL terminal. But using ssh2 it is not happening. <?php if (!function_exists("ssh2_connect")) die("function ssh2_connect doesn't exist"); // log in at ip on port 22 if(!($con = ssh2_connect(ip, 22))){ echo "fail: unable to establish connection\n"; } else { // try to authenticate with username root, password password if(!ssh2_auth_password($con, "root", "password")) { echo "fail: unable to authenticate\n"; } else { // allright, we're in! echo "okay: logged in...\n"; // execute a command if (!($stream = ssh2_exec($con, 'date' ))) { echo "fail: unable to execute command\n"; } else { // collect returning data from command //echo "in else"; stream_set_blocking($stream, true); $data = ""; while ($buf = fread($stream,4096)) { $data .= $buf; } echo $data; fclose($stream); } } } ?> I am getting the output "fail:unable to authenticate". Why is it happening that way? Any solution to it?
php, rhel, suse, libssh2
1
1,567
2
https://stackoverflow.com/questions/18485005/ssh2-not-working-for-suse
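A hedged lead for the authentication failure: SLES commonly ships sshd with PasswordAuthentication turned off and relies on keyboard-interactive instead; a terminal ssh client falls back to that transparently, but ssh2_auth_password() does not — which would explain terminal-works-but-ssh2-fails. What the SLES side actually advertises can be read with:

    ssh -v -o PreferredAuthentications=password root@sles-host 2>&1 | grep -i 'authentications'

If 'password' is missing from the advertised list, setting PasswordAuthentication yes in the SLES /etc/ssh/sshd_config and restarting sshd should make the PHP code behave as it does against RHEL.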
18,170,025
Please explain the ps -aef response on RHEL
What does pts/2 indicate in the output below? Why is there no such entry for the other dd processes? $ ps -aef |grep dd root 6553672 15073352 3 02:32:19 - 0:01 dd of=/dev/lv01 bs=1024k padmin 9437410 16515110 1 02:43:32 **pts/2** 0:00 grep dd root 13828156 11010220 0 02:32:33 - 0:00 dd of=/dev/lv02 bs=1024k root 14155860 13828156 2 02:32:33 - 0:01 dd of=/dev/lv02 bs=1024k root 15073352 13762812 0 02:32:19 - 0:00 dd of=/dev/lv01 bs=1024k root 15532200 15925276 2 02:40:47 **pts/1** 0:03 dd of=/home/padmin/sample-dd-op bs=1024k
Please explain the ps -aef response on RHEL What does pts/2 indicate in the output below? Why is there no such entry for the other dd processes? $ ps -aef |grep dd root 6553672 15073352 3 02:32:19 - 0:01 dd of=/dev/lv01 bs=1024k padmin 9437410 16515110 1 02:43:32 **pts/2** 0:00 grep dd root 13828156 11010220 0 02:32:33 - 0:00 dd of=/dev/lv02 bs=1024k root 14155860 13828156 2 02:32:33 - 0:01 dd of=/dev/lv02 bs=1024k root 15073352 13762812 0 02:32:19 - 0:00 dd of=/dev/lv01 bs=1024k root 15532200 15925276 2 02:40:47 **pts/1** 0:03 dd of=/home/padmin/sample-dd-op bs=1024k
host, rhel, ps
1
530
1
https://stackoverflow.com/questions/18170025/please-explain-the-ps-aef-response-on-rhel
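For what it's worth, a hedged illustration: that column is the controlling terminal, pts/N being a pseudo-terminal (an ssh or xterm session), while "-" marks processes that have no controlling terminal at all, e.g. jobs started by init scripts or detached from their session. The grep dd line carries pts/2 simply because it is the command just typed in that interactive session:

    tty                        # prints e.g. /dev/pts/2 for the current shell
    ps -o pid,tty,comm -p $$   # the shell itself reports the same pts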
17,396,171
CentOS: why does Google App Engine not work, even after applying the last patch mentioned in their fix?
Google App Engine was working on my CentOS box with no changes made, but it suddenly stopped working yesterday. I then followed the patch for Google App Engine, but it still does not start up. How do I resolve this? [root@ip-10-59-143-73 tmp]# uname -a Linux ip-10-59-143-73 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 x86_64 x86_64 GNU/Linux According to the patch: [URL] [URL] Lines 56/57 are changed as in the patch, but the problem remains: 55 py_file = __file__.replace('.pyc', '.py') 56 #dir_paths = [os.path.abspath(os.path.dirname(os.path.realpath(py_file))), 57 # os.path.abspath(os.path.dirname(py_file))] 58 uncompiled_file = __file__.rstrip("c") if __file__.endswith("pyc") else __file__ 59 dir_paths = [os.path.abspath(os.path.dirname(os.path.realpath(uncompiled_file))), 60 os.path.abspath(os.path.dirname(uncompiled_file))] 61 for dir_path in dir_paths: 62 sibling_path = os.path.join(dir_path, sibling) 63 if os.path.exists(sibling_path): The patch is not helping: [root@ip-10-59-143-73 tmp]# Traceback (most recent call last): File "/var/tmp/google_appengine/dev_appserver.py", line 76, in <module> _DIR_PATH = _get_dir_path(os.path.join('lib', 'ipaddr')) File "/var/tmp/google_appengine/dev_appserver.py", line 66, in _get_dir_path 'file and %s.' % sibling) ValueError: Could not determine directory that contains both, this file and lib/ipaddr. EDIT: issues [URL]
CentOS: why does Google App Engine not work, even after applying the last patch mentioned in their fix? Google App Engine was working on my CentOS box with no changes made, but it suddenly stopped working yesterday. I then followed the patch for Google App Engine, but it still does not start up. How do I resolve this? [root@ip-10-59-143-73 tmp]# uname -a Linux ip-10-59-143-73 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 x86_64 x86_64 GNU/Linux According to the patch: [URL] [URL] Lines 56/57 are changed as in the patch, but the problem remains: 55 py_file = __file__.replace('.pyc', '.py') 56 #dir_paths = [os.path.abspath(os.path.dirname(os.path.realpath(py_file))), 57 # os.path.abspath(os.path.dirname(py_file))] 58 uncompiled_file = __file__.rstrip("c") if __file__.endswith("pyc") else __file__ 59 dir_paths = [os.path.abspath(os.path.dirname(os.path.realpath(uncompiled_file))), 60 os.path.abspath(os.path.dirname(uncompiled_file))] 61 for dir_path in dir_paths: 62 sibling_path = os.path.join(dir_path, sibling) 63 if os.path.exists(sibling_path): The patch is not helping: [root@ip-10-59-143-73 tmp]# Traceback (most recent call last): File "/var/tmp/google_appengine/dev_appserver.py", line 76, in <module> _DIR_PATH = _get_dir_path(os.path.join('lib', 'ipaddr')) File "/var/tmp/google_appengine/dev_appserver.py", line 66, in _get_dir_path 'file and %s.' % sibling) ValueError: Could not determine directory that contains both, this file and lib/ipaddr. EDIT: issues [URL]
python, google-app-engine, centos, rhel, centos6
1
204
1
https://stackoverflow.com/questions/17396171/centos-why-the-google-apps-engine-do-not-work-even-applying-the-last-patch-ment
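Two hedged things worth ruling out, since the patched lines look right but the old error persists: the copy being executed may not be the copy that was edited, and a stale dev_appserver.pyc next to the edited .py can keep serving pre-patch bytecode:

    ls -l /var/tmp/google_appengine/dev_appserver.py*    # is the edited copy the one being run?
    find /var/tmp/google_appengine -name '*.pyc' -delete
    python /var/tmp/google_appengine/dev_appserver.py --help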
17,078,073
Nodejs site cannot be shown in browser
I have set up a nodejs site on RHEL6. Everything looks fine but I cannot view our site in browsers (all fail in Chrome, Safari and IE9). I can use curl getting correct html response. Here is the test result with different tools. Did anyone meet the same situation before? ~]$ nmap -v -A 123.150.207.18 -p80 Starting Nmap 5.21 ( [URL] ) at 2013-06-13 07:51 CST NSE: Loaded 36 scripts for scanning. Initiating Ping Scan at 07:51 Scanning 123.150.207.18 [2 ports] Completed Ping Scan at 07:51, 0.00s elapsed (1 total hosts) Initiating Parallel DNS resolution of 1 host. at 07:51 Completed Parallel DNS resolution of 1 host. at 07:51, 4.00s elapsed Initiating Connect Scan at 07:51 Scanning 123.150.207.18 [1 port] Discovered open port 80/tcp on 123.150.207.18 Completed Connect Scan at 07:51, 0.00s elapsed (1 total ports) Initiating Service scan at 07:51 Scanning 1 service on 123.150.207.18 Completed Service scan at 07:51, 11.13s elapsed (1 service on 1 host) NSE: Script scanning 123.150.207.18. NSE: Starting runlevel 1 (of 1) scan. Initiating NSE at 07:51 Completed NSE at 07:51, 0.03s elapsed NSE: Script Scanning completed. Nmap scan report for 123.150.207.18 Host is up (0.00057s latency). PORT STATE SERVICE VERSION 80/tcp open http? |_http-favicon: Unknown favicon MD5: 1D0E785BFCEDDE5326C2460E9F9B261D 1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at [URL] : SF-Port80-TCP:V=5.21%I=7%D=6/13%Time=51B90999%P=x86_64-redhat-linux-gnu%r( SF:GetRequest,233B,"HTTP/1\.1\x20200\x20OK\r\nX-Powered-By:\x20Express\r\n SF:Content-Type:\x20text/html;\x20charset=utf-8\r\nContent-Length:\x208714 SF:\r\nETag:\x20\"891558458\"\r\nSet-Cookie:\x20connect\.sid=s%3AkL3pj-Tzt SF:rF-hh7Mtjhc85Br\.Nfkgt%2FzHRJ%2FIFiIIqNqW0sSQ7%2F%2Brx%2FWldcrVkNrhQLA; SF:\x20Path=/;\x20HttpOnly\r\nDate:\x20Wed,\x2012\x20Jun\x202013\x2023:51: SF:53\x20GMT\r\nConnection:\x20close\r\n\r\n<!DOCTYPE\x20html>\n<html\x20l SF:ang=\"en\">\n<head>\n\x20\x20\x20\x20<meta\x20charset=\"utf-8\">\n\x20\ SF:x20\x20\x20<meta\x20http-equiv=\"content-type\"\x20content=\"text/html; SF:\x20charset=UTF-8\">\n\x20\x20\x20\x20<meta\x20name=\"viewport\"\x20con SF:tent=\"width=device-width,\x20initial-scale=1\.0\">\n\x20\x20\x20\x20<m SF:eta\x20name=\"description\"\x20content=\"\">\n\x20\x20\x20\x20<meta\x20 SF:name=\"author\"\x20content=\"\">\n\x20\x20\x20\x20<title>\xe7\x95\xaa\x SF:e8\x8c\x84\xe5\xbf\xab\xe8\xb7\x91\x20--\x20\xe4\xb8\xad\xe5\x9b\xbd\xe SF:6\x9c\x80\xe5\xb0\x8f\xe5\xb7\xa7\xe7\x9a\x84\xe6\x97\xb6\xe9\x97\xb4\x SF:e7\xae\xa1\xe7\x90\x86\xe5\xb7\xa5\xe5\x85\xb7</title>\n\x20\x20\x20\x2 SF:0<link\x20rel=\"shortcut\x20icon\"\x20href=\"\./tomatodo/img/tomato_32\ SF:.png\">\n\x20\x20\x20\x20<link\x20rel=\"stylesheet\"\x20href=\"\./boots SF:trap/css/bootstrap\.css\">\n\x20\x20\x20\x20<link\x20rel=\"stylesheet\" SF:\x20href=\"\./bootstrap/css/bootstrap-responsive\.css\">\n\x20\x20\x20\ SF:x20<link\x20rel=\"styleshee")%r(HTTPOptions,109,"HTTP/1\.1\x20404\x20No SF:t\x20Found\r\nX-Powered-By:\x20Express\r\nContent-Type:\x20text/plain\r SF:\nSet-Cookie:\x20connect\.sid=s%3A_s6k4167c1xZ4bmi6GaUW0ld\.OcixsFK4HAI SF:53dXqebPJ%2FNp0EPpGtDMPsFRvTFkLj8A;\x20Path=/;\x20HttpOnly\r\nDate:\x20 SF:Wed,\x2012\x20Jun\x202013\x2023:51:53\x20GMT\r\nConnection:\x20close\r\ SF:n\r\nCannot\x20OPTIONS\x20/")%r(FourOhFourRequest,12A,"HTTP/1\.1\x20404 SF:\x20Not\x20Found\r\nX-Powered-By:\x20Express\r\nContent-Type:\x20text/p SF:lain\r\nSet-Cookie:\x20connect\.sid=s%3AiYSdu5oWddVC54Rergi65gAg\.TOE5n 
SF:nutt90l1Xjv%2BG28sy%2BA230zvU9ccDqNTgQEQco;\x20Path=/;\x20HttpOnly\r\nD SF:ate:\x20Wed,\x2012\x20Jun\x202013\x2023:51:53\x20GMT\r\nConnection:\x20 SF:close\r\n\r\nCannot\x20GET\x20/nice%20ports%2C/Tri%6Eity\.txt%2ebak"); Read data files from: /usr/share/nmap Service detection performed. Please report any incorrect results at [URL] . Nmap done: 1 IP address (1 host up) scanned in 15.36 seconds ~]$ sudo netstat -plunt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2183/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 2073/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2259/master tcp 0 0 0.0.0.0:56378 0.0.0.0:* LISTEN 1924/rpc.statd tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 20904/mongod tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 11590/redis-server tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1903/rpcbind tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 21220/node tcp 0 0 0.0.0.0:28017 0.0.0.0:* LISTEN 20904/mongod tcp 0 0 :::22 :::* LISTEN 2183/sshd tcp 0 0 ::1:631 :::* LISTEN 2073/cupsd tcp 0 0 ::1:25 :::* LISTEN 2259/master tcp 0 0 :::40334 :::* LISTEN 1924/rpc.statd tcp 0 0 :::111 :::* LISTEN 1903/rpcbind udp 0 0 0.0.0.0:55512 0.0.0.0:* 1924/rpc.statd udp 0 0 0.0.0.0:111 0.0.0.0:* 1903/rpcbind udp 0 0 0.0.0.0:631 0.0.0.0:* 2073/cupsd udp 0 0 0.0.0.0:806 0.0.0.0:* 1903/rpcbind udp 0 0 0.0.0.0:828 0.0.0.0:* 1924/rpc.statd udp 0 0 :::49640 :::* 1924/rpc.statd udp 0 0 :::111 :::* 1903/rpcbind udp 0 0 :::806 :::* 1903/rpcbind ~]$ sudo /sbin/service iptables status Table: nat Chain PREROUTING (policy ACCEPT) num target prot opt source destination Chain POSTROUTING (policy ACCEPT) num target prot opt source destination Chain OUTPUT (policy ACCEPT) num target prot opt source destination Table: filter Chain INPUT (policy ACCEPT) num target prot opt source destination 1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 5 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited 6 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport ports 80,8080 7 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 Chain FORWARD (policy ACCEPT) num target prot opt source destination 1 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT) num target prot opt source destination ~]$ curl 123.150.207.18:80 <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="content-type" content="text/html; charset=UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="description" content=""> <meta name="author" content=""> ......
Nodejs site cannot be shown in browser I have set up a nodejs site on RHEL6. Everything looks fine but I cannot view our site in browsers (all fail in Chrome, Safari and IE9). I can use curl getting correct html response. Here is the test result with different tools. Did anyone meet the same situation before? ~]$ nmap -v -A 123.150.207.18 -p80 Starting Nmap 5.21 ( [URL] ) at 2013-06-13 07:51 CST NSE: Loaded 36 scripts for scanning. Initiating Ping Scan at 07:51 Scanning 123.150.207.18 [2 ports] Completed Ping Scan at 07:51, 0.00s elapsed (1 total hosts) Initiating Parallel DNS resolution of 1 host. at 07:51 Completed Parallel DNS resolution of 1 host. at 07:51, 4.00s elapsed Initiating Connect Scan at 07:51 Scanning 123.150.207.18 [1 port] Discovered open port 80/tcp on 123.150.207.18 Completed Connect Scan at 07:51, 0.00s elapsed (1 total ports) Initiating Service scan at 07:51 Scanning 1 service on 123.150.207.18 Completed Service scan at 07:51, 11.13s elapsed (1 service on 1 host) NSE: Script scanning 123.150.207.18. NSE: Starting runlevel 1 (of 1) scan. Initiating NSE at 07:51 Completed NSE at 07:51, 0.03s elapsed NSE: Script Scanning completed. Nmap scan report for 123.150.207.18 Host is up (0.00057s latency). PORT STATE SERVICE VERSION 80/tcp open http? |_http-favicon: Unknown favicon MD5: 1D0E785BFCEDDE5326C2460E9F9B261D 1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at [URL] : SF-Port80-TCP:V=5.21%I=7%D=6/13%Time=51B90999%P=x86_64-redhat-linux-gnu%r( SF:GetRequest,233B,"HTTP/1\.1\x20200\x20OK\r\nX-Powered-By:\x20Express\r\n SF:Content-Type:\x20text/html;\x20charset=utf-8\r\nContent-Length:\x208714 SF:\r\nETag:\x20\"891558458\"\r\nSet-Cookie:\x20connect\.sid=s%3AkL3pj-Tzt SF:rF-hh7Mtjhc85Br\.Nfkgt%2FzHRJ%2FIFiIIqNqW0sSQ7%2F%2Brx%2FWldcrVkNrhQLA; SF:\x20Path=/;\x20HttpOnly\r\nDate:\x20Wed,\x2012\x20Jun\x202013\x2023:51: SF:53\x20GMT\r\nConnection:\x20close\r\n\r\n<!DOCTYPE\x20html>\n<html\x20l SF:ang=\"en\">\n<head>\n\x20\x20\x20\x20<meta\x20charset=\"utf-8\">\n\x20\ SF:x20\x20\x20<meta\x20http-equiv=\"content-type\"\x20content=\"text/html; SF:\x20charset=UTF-8\">\n\x20\x20\x20\x20<meta\x20name=\"viewport\"\x20con SF:tent=\"width=device-width,\x20initial-scale=1\.0\">\n\x20\x20\x20\x20<m SF:eta\x20name=\"description\"\x20content=\"\">\n\x20\x20\x20\x20<meta\x20 SF:name=\"author\"\x20content=\"\">\n\x20\x20\x20\x20<title>\xe7\x95\xaa\x SF:e8\x8c\x84\xe5\xbf\xab\xe8\xb7\x91\x20--\x20\xe4\xb8\xad\xe5\x9b\xbd\xe SF:6\x9c\x80\xe5\xb0\x8f\xe5\xb7\xa7\xe7\x9a\x84\xe6\x97\xb6\xe9\x97\xb4\x SF:e7\xae\xa1\xe7\x90\x86\xe5\xb7\xa5\xe5\x85\xb7</title>\n\x20\x20\x20\x2 SF:0<link\x20rel=\"shortcut\x20icon\"\x20href=\"\./tomatodo/img/tomato_32\ SF:.png\">\n\x20\x20\x20\x20<link\x20rel=\"stylesheet\"\x20href=\"\./boots SF:trap/css/bootstrap\.css\">\n\x20\x20\x20\x20<link\x20rel=\"stylesheet\" SF:\x20href=\"\./bootstrap/css/bootstrap-responsive\.css\">\n\x20\x20\x20\ SF:x20<link\x20rel=\"styleshee")%r(HTTPOptions,109,"HTTP/1\.1\x20404\x20No SF:t\x20Found\r\nX-Powered-By:\x20Express\r\nContent-Type:\x20text/plain\r SF:\nSet-Cookie:\x20connect\.sid=s%3A_s6k4167c1xZ4bmi6GaUW0ld\.OcixsFK4HAI SF:53dXqebPJ%2FNp0EPpGtDMPsFRvTFkLj8A;\x20Path=/;\x20HttpOnly\r\nDate:\x20 SF:Wed,\x2012\x20Jun\x202013\x2023:51:53\x20GMT\r\nConnection:\x20close\r\ SF:n\r\nCannot\x20OPTIONS\x20/")%r(FourOhFourRequest,12A,"HTTP/1\.1\x20404 SF:\x20Not\x20Found\r\nX-Powered-By:\x20Express\r\nContent-Type:\x20text/p 
SF:lain\r\nSet-Cookie:\x20connect\.sid=s%3AiYSdu5oWddVC54Rergi65gAg\.TOE5n SF:nutt90l1Xjv%2BG28sy%2BA230zvU9ccDqNTgQEQco;\x20Path=/;\x20HttpOnly\r\nD SF:ate:\x20Wed,\x2012\x20Jun\x202013\x2023:51:53\x20GMT\r\nConnection:\x20 SF:close\r\n\r\nCannot\x20GET\x20/nice%20ports%2C/Tri%6Eity\.txt%2ebak"); Read data files from: /usr/share/nmap Service detection performed. Please report any incorrect results at [URL] . Nmap done: 1 IP address (1 host up) scanned in 15.36 seconds ~]$ sudo netstat -plunt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2183/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 2073/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2259/master tcp 0 0 0.0.0.0:56378 0.0.0.0:* LISTEN 1924/rpc.statd tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 20904/mongod tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 11590/redis-server tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1903/rpcbind tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 21220/node tcp 0 0 0.0.0.0:28017 0.0.0.0:* LISTEN 20904/mongod tcp 0 0 :::22 :::* LISTEN 2183/sshd tcp 0 0 ::1:631 :::* LISTEN 2073/cupsd tcp 0 0 ::1:25 :::* LISTEN 2259/master tcp 0 0 :::40334 :::* LISTEN 1924/rpc.statd tcp 0 0 :::111 :::* LISTEN 1903/rpcbind udp 0 0 0.0.0.0:55512 0.0.0.0:* 1924/rpc.statd udp 0 0 0.0.0.0:111 0.0.0.0:* 1903/rpcbind udp 0 0 0.0.0.0:631 0.0.0.0:* 2073/cupsd udp 0 0 0.0.0.0:806 0.0.0.0:* 1903/rpcbind udp 0 0 0.0.0.0:828 0.0.0.0:* 1924/rpc.statd udp 0 0 :::49640 :::* 1924/rpc.statd udp 0 0 :::111 :::* 1903/rpcbind udp 0 0 :::806 :::* 1903/rpcbind ~]$ sudo /sbin/service iptables status Table: nat Chain PREROUTING (policy ACCEPT) num target prot opt source destination Chain POSTROUTING (policy ACCEPT) num target prot opt source destination Chain OUTPUT (policy ACCEPT) num target prot opt source destination Table: filter Chain INPUT (policy ACCEPT) num target prot opt source destination 1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 5 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited 6 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport ports 80,8080 7 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 Chain FORWARD (policy ACCEPT) num target prot opt source destination 1 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT) num target prot opt source destination ~]$ curl 123.150.207.18:80 <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="content-type" content="text/html; charset=UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="description" content=""> <meta name="author" content=""> ......
node.js, browser, iptables, rhel
1
492
1
https://stackoverflow.com/questions/17078073/nodejs-site-cannot-be-shown-in-browser
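A hedged reading of the iptables listing above: the INPUT chain matches top-down, and the catch-all REJECT sits at position 5, before the two port-80 ACCEPT rules at positions 6 and 7 — so remote browsers are rejected before their packets ever reach those rules, while curl run on the box itself slips through the earlier accept-all (likely loopback) rule. If that reading is right, inserting an ACCEPT above the REJECT is the fix:

    iptables -I INPUT 5 -p tcp -m state --state NEW --dport 80 -j ACCEPT
    service iptables save    # persist, then retry from an external browser

The now-shadowed duplicates at the bottom of the chain can be deleted afterwards for tidiness.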
16,080,176
Printing text file in Java? This document does not conform to the Adobe Document Structuring Conventions and may not print correctly
I have written the below code in Java for printing a simple text file: import java.io.FileInputStream; import java.io.IOException; import javax.print.Doc; import javax.print.DocFlavor; import javax.print.DocPrintJob; import javax.print.PrintException; import javax.print.PrintService; import javax.print.PrintServiceLookup; import javax.print.SimpleDoc; import javax.print.attribute.HashPrintRequestAttributeSet; import javax.print.attribute.PrintRequestAttributeSet; import javax.print.attribute.standard.Copies; public class PrintImage { static public void main(String args[]) throws Exception { try { PrintRequestAttributeSet pras = new HashPrintRequestAttributeSet(); pras.add(new Copies(1)); /* PrintService pss[] = PrintServiceLookup.lookupPrintServices(DocFlavor.INPUT_STREAM.TEXT_PLAIN_US_ASCII.AUTOSENSE, pras); if (pss.length == 0) throw new RuntimeException("No printer services available.");*/ PrintService ps = PrintServiceLookup.lookupDefaultPrintService(); System.out.println("Printing to " + ps); DocPrintJob job = ps.createPrintJob(); FileInputStream fin = new FileInputStream("test.txt"); Doc doc = new SimpleDoc(fin, DocFlavor.INPUT_STREAM.TEXT_PLAIN_US_ASCII.AUTOSENSE, null); job.print(doc, pras); fin.close(); } catch (IOException ie) { ie.printStackTrace(); System.out.println(ie.toString()); } catch (PrintException pe) { System.out.println(pe.toString()); pe.printStackTrace(); } } } I am using Red Hat Enterprise Server 6.0 with JDK 1.6.0_43 as the development platform. The code was executed without errors, and I have also verified in CUPS that the job is queued in the printer queue. But the printer is not printing the job. I got the following error in the CUPS log: W [18/Apr/2013:15:09:24 +0530] [Job 15] This document does not conform to the Adobe Document Structuring Conventions and may not print correctly! The same program works well for pdf files under RHEL 6.0; only txt and doc files are not printing. One more thing: the same code works perfectly under Ubuntu 12.04 and RHEL 4.7, but with RHEL 6.0 only pdf files print. Please help me to get the problem solved...
Printing text file in Java? This document does not conform to the Adobe Document Structuring Conventions and may not print correctly I have written the below code in Java for printing a simple text file: import java.io.FileInputStream; import java.io.IOException; import javax.print.Doc; import javax.print.DocFlavor; import javax.print.DocPrintJob; import javax.print.PrintException; import javax.print.PrintService; import javax.print.PrintServiceLookup; import javax.print.SimpleDoc; import javax.print.attribute.HashPrintRequestAttributeSet; import javax.print.attribute.PrintRequestAttributeSet; import javax.print.attribute.standard.Copies; public class PrintImage { static public void main(String args[]) throws Exception { try { PrintRequestAttributeSet pras = new HashPrintRequestAttributeSet(); pras.add(new Copies(1)); /* PrintService pss[] = PrintServiceLookup.lookupPrintServices(DocFlavor.INPUT_STREAM.TEXT_PLAIN_US_ASCII.AUTOSENSE, pras); if (pss.length == 0) throw new RuntimeException("No printer services available.");*/ PrintService ps = PrintServiceLookup.lookupDefaultPrintService(); System.out.println("Printing to " + ps); DocPrintJob job = ps.createPrintJob(); FileInputStream fin = new FileInputStream("test.txt"); Doc doc = new SimpleDoc(fin, DocFlavor.INPUT_STREAM.TEXT_PLAIN_US_ASCII.AUTOSENSE, null); job.print(doc, pras); fin.close(); } catch (IOException ie) { ie.printStackTrace(); System.out.println(ie.toString()); } catch (PrintException pe) { System.out.println(pe.toString()); pe.printStackTrace(); } } } I am using Red Hat Enterprise Server 6.0 with JDK 1.6.0_43 as the development platform. The code was executed without errors, and I have also verified in CUPS that the job is queued in the printer queue. But the printer is not printing the job. I got the following error in the CUPS log: W [18/Apr/2013:15:09:24 +0530] [Job 15] This document does not conform to the Adobe Document Structuring Conventions and may not print correctly! The same program works well for pdf files under RHEL 6.0; only txt and doc files are not printing. One more thing: the same code works perfectly under Ubuntu 12.04 and RHEL 4.7, but with RHEL 6.0 only pdf files print. Please help me to get the problem solved...
java, printing, rhel, cups
1
1,031
0
https://stackoverflow.com/questions/16080176/printing-text-file-in-javathis-document-does-not-conform-to-the-adobe-document
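A hedged way to split the problem in two from a shell on the RHEL 6.0 box, with <printer> standing for whatever lpstat -p lists:

    lp -d <printer> test.txt           # through the queue's normal text filter
    lp -d <printer> -o raw test.txt    # bypassing the filter chain entirely

If plain lp prints the file, the queue and its filters are healthy and the fault lies in how the Java side tags the stream; if lp triggers the same DSC warning, it is the RHEL 6.0 CUPS filter configuration (PPD / cupsd side), not the code, that needs attention.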
15,752,231
Possible to create a single source RPM which is compatible with RHEL5 and RHEL6?
I need to create a single source RPM which can be used on both RHEL5 and RHEL6 OSes. Starting with RHEL6, Red Hat switched to sha256 for the checksum. I know that I can create an RPM using either sha1 or sha256, but there is no way for rpm on either release to select a checksum algorithm. AFAIK, one can't even tell rpm to ignore the checksum. So, is it possible to create a single RPM which is usable on both RHEL5 and RHEL6 with the stock rpm command (i.e., I don't want to have to use cpio to extract)?
Possible to create a single source RPM which is compatible with RHEL5 and RHEL6? I need to create a single source RPM which can be used on both RHEL5 and RHEL6 OSes. Starting with RHEL6, Red Hat switched to sha256 for the checksum. I know that I can create an RPM using either sha1 or sha256, but there is no way for rpm on either release to select a checksum algorithm. AFAIK, one can't even tell rpm to ignore the checksum. So, is it possible to create a single RPM which is usable on both RHEL5 and RHEL6 with the stock rpm command (i.e., I don't want to have to use cpio to extract)?
rpm, rhel, rhel5
1
739
1
https://stackoverflow.com/questions/15752231/possible-to-create-a-single-source-rpm-which-is-compatible-with-rhel5-and-rhel6
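For the record, a hedged sketch of the usual workaround: keep one spec, but build with the pre-RHEL6 digest and payload settings forced, which rpm 4.4 on RHEL5 can verify (the value 1 selects MD5 file digests; mypkg.spec is a placeholder):

    rpmbuild -ba mypkg.spec \
        --define "_source_filedigest_algorithm 1" \
        --define "_binary_filedigest_algorithm 1" \
        --define "_source_payload w9.gzdio" \
        --define "_binary_payload w9.gzdio"

The same defines can live in ~/.rpmmacros instead of on the command line.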
13,304,416
Curl and Browser are serving different versions of file
I have a WSDL (essentially an XML) file on the server. When I access this file from the browser I get the latest version; however, when I use curl or wget to get this file I get an older version. Another interesting thing is that even after deleting the file, curl is still able to get it, while the browser correctly gets a 404. I am not sure why this is happening, since the protocol in both cases is HTTP and I see the requests coming in correctly in the Apache access logs. However, the file being served in the two cases is different. I am using the Apache web server on a RHEL machine. Any pointers as to why this could happen?
Curl and Browser are serving different versions of file I have a WSDL (essentially an XML) file on the server. When I access this file from the browser I get the latest version; however, when I use curl or wget to get this file I get an older version. Another interesting thing is that even after deleting the file, curl is still able to get it, while the browser correctly gets a 404. I am not sure why this is happening, since the protocol in both cases is HTTP and I see the requests coming in correctly in the Apache access logs. However, the file being served in the two cases is different. I am using the Apache web server on a RHEL machine. Any pointers as to why this could happen?
apache, file, version, rhel
1
89
0
https://stackoverflow.com/questions/13304416/curl-and-browser-are-serving-different-versions-of-file
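A hedged first suspect for the split behaviour above: an intermediate cache — mod_cache, a reverse proxy, or a transparent proxy between the curl host and the server — handing curl a stale copy while the browser's request bypasses or revalidates it. The response headers usually give it away (the URL is a placeholder):

    curl -sv http://server/path/service.wsdl -o /dev/null 2>&1 | egrep -i 'via|age|x-cache|etag|last-modified'

A Via header or a nonzero Age means something between curl and Apache is answering, which would also explain the 200 on a file Apache itself has already 404'd.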
10,928,932
HP-UX and RHEL ping difference
I have these two commands for different environments: for HP-UX, "ping someaddress 10"; for RHEL, "ping someaddress -c 10". I'm porting an application from HP-UX to RHEL and I can't tell if these two commands have the same result, since I don't have an HP-UX system. I have read the HP-UX manual ( [URL] ), but it doesn't help since no options were specified in the command. Any idea?
HP-UX and RHEL ping difference I have these two commands for different environments: for HP-UX, "ping someaddress 10"; for RHEL, "ping someaddress -c 10". I'm porting an application from HP-UX to RHEL and I can't tell if these two commands have the same result, since I don't have an HP-UX system. I have read the HP-UX manual ( [URL] ), but it doesn't help since no options were specified in the command. Any idea?
porting, ping, rhel, hp-ux
1
260
1
https://stackoverflow.com/questions/10928932/hp-ux-and-rhel-ping-difference
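Hedged, from memory of the HP-UX manpage (ping [options] host [packet-size] [[-n] count]): the bare trailing number on HP-UX is the packet size, not the count, so the two commands above are not equivalent — as written, the HP-UX one sends 10-byte packets until interrupted. The closer pairing would be:

    ping someaddress 64 -n 10    # HP-UX: 64-byte packets, stop after 10
    ping -c 10 someaddress       # RHEL: stop after 10 packets

Worth double-checking against the manual for the exact HP-UX release before porting.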
9,681,844
Curl error 77 on 64bit machine
When I run a 32-bit binary on 64-bit RHEL 6.2, I get the following output from curl with a return value of 77: * About to connect() to 10.30.10.164 port 443 (#0) * Trying 10.30.10.164... * connected * Connected to 10.30.10.164 (10.30.10.164) port 443 (#0) * Initializing NSS with certpath: /etc/pki/nssdb * Unable to initialize NSS database * NSS error -5977 * Closing connection #0 * Problem with the SSL CA cert (path? access rights?) I did a little research and found that the NSS library has problems. My requirement is to run a 32-bit binary that uses the libcurl dynamic library on a 64-bit machine. Does anybody have a workaround to achieve this on a RHEL 6.2 x64 machine? I can provide more details on request. Language: C Platform: Linux (RHEL 6.2 x64) Thanks in advance :)
Curl error 77 on 64bit machine When I run a 32-bit binary on 64-bit RHEL 6.2, I get the following output from curl with a return value of 77: * About to connect() to 10.30.10.164 port 443 (#0) * Trying 10.30.10.164... * connected * Connected to 10.30.10.164 (10.30.10.164) port 443 (#0) * Initializing NSS with certpath: /etc/pki/nssdb * Unable to initialize NSS database * NSS error -5977 * Closing connection #0 * Problem with the SSL CA cert (path? access rights?) I did a little research and found that the NSS library has problems. My requirement is to run a 32-bit binary that uses the libcurl dynamic library on a 64-bit machine. Does anybody have a workaround to achieve this on a RHEL 6.2 x64 machine? I can provide more details on request. Language: C Platform: Linux (RHEL 6.2 x64) Thanks in advance :)
curl, https, libcurl, rhel, nss
1
1,133
1
https://stackoverflow.com/questions/9681844/curl-error-77-on-64bit-machine
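A hedged first step for the NSS init failure: a 32-bit binary loads the 32-bit NSS stack, which a stock x86_64 RHEL 6.2 does not install. Pulling in the i686 variants is cheap to try:

    yum install nss.i686 nss-softokn.i686 nss-util.i686 libcurl.i686

If the error persists, pointing the process at a readable certificate database is the other common fix for NSS error -5977 — export SSL_DIR=/etc/pki/nssdb (libcurl's NSS backend honours it) and check that the running user can actually read that directory.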
9,397,105
rpmbuild differences in RHEL 5.7 and RHEL 6.1
I'm trying to build an RPM using rpmbuild which would work on both RHEL 5.7 machines and RHEL 6.1 machines, and I'm having some trouble understanding how to structure my rpmbuild/SOURCE directory. According to what I understood, if my package name is XXX, then I need to prepare rpmbuild/SOURCE/XXX.tar.gz, a tarball which contains: 1. A directory named XXX; 2. In it, all the directories and files I'm installing should be ordered as if their paths are relative to the root directory (i.e. /) For instance, if I want to install a file called foo.sh to /tmp/XXXdir/, I need to have rpmbuild/SOURCE/XXX.tar.gz contain XXX/tmp/XXXdir/foo.sh This is what I understood and this is what works when I install my RPM on my RHEL 5.7 machine (i.e. in the example above the file is installed to /tmp/XXXdir/foo.sh). However, on an RHEL 6.1 machine I get the undesired behaviour of having my files installed to a newly created /XXX directory, and from there I get the same tree structure I wanted for / (i.e. in the example above I get the file at /XXX/tmp/XXXdir/foo.sh). Any idea why this happens? Perhaps I've got it wrong and my rpmbuild/SOURCE structure is not as it should be? Any insights would be very helpful. Thanks a lot in advance, Lior
rpmbuild differences in RHEL 5.7 and RHEL 6.1 I'm trying to build an RPM using rpmbuild which would work on both RHEL 5.7 machines and RHEL 6.1 machines, and I'm having some trouble understanding how to structure my rpmbuild/SOURCE directory. According to what I understood, if my package name is XXX, then I need to prepare rpmbuild/SOURCE/XXX.tar.gz, a tarball which contains: 1. A directory named XXX; 2. In it, all the directories and files I'm installing should be ordered as if their paths are relative to the root directory (i.e. /) For instance, if I want to install a file called foo.sh to /tmp/XXXdir/, I need to have rpmbuild/SOURCE/XXX.tar.gz contain XXX/tmp/XXXdir/foo.sh This is what I understood and this is what works when I install my RPM on my RHEL 5.7 machine (i.e. in the example above the file is installed to /tmp/XXXdir/foo.sh). However, on an RHEL 6.1 machine I get the undesired behaviour of having my files installed to a newly created /XXX directory, and from there I get the same tree structure I wanted for / (i.e. in the example above I get the file at /XXX/tmp/XXXdir/foo.sh). Any idea why this happens? Perhaps I've got it wrong and my rpmbuild/SOURCE structure is not as it should be? Any insights would be very helpful. Thanks a lot in advance, Lior
rhel, rpmbuild
1
608
0
https://stackoverflow.com/questions/9397105/rpmbuild-differences-in-rhel-5-7-and-rhel-6-1
9,306,112
RHEL terminal basic syntax highlighting
I am new to RHEL, so pardon me if this is trivial... How do I get the RHEL terminal to show basic syntax highlighting? Currently it shows a dumb white screen with black text.
RHEL terminal basic syntax highlighting I am new to RHEL, so pardon me if this is trivial... How do I get the RHEL terminal to show basic syntax highlighting? Currently it shows a dumb white screen with black text.
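What is usually wanted here is colour output rather than syntax highlighting proper; the shell itself never highlights syntax. A few standard lines, shown as a sketch, colourise the prompt, ls, and grep, and enable vim's highlighting:

# ~/.bashrc -- enable colour in the shell
alias ls='ls --color=auto'        # colourise file listings
alias grep='grep --color=auto'    # highlight matches
# green user@host, blue working directory
PS1='\[\e[32m\]\u@\h\[\e[0m\]:\[\e[34m\]\w\[\e[0m\]\$ '

# ~/.vimrc -- enable syntax highlighting inside vim
syntax on

Run source ~/.bashrc (or open a new terminal) afterwards; whether the colours actually render also depends on the terminal emulator and its TERM setting supporting them.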
terminal, rhel
1
306
1
https://stackoverflow.com/questions/9306112/rhel-terminal-basic-syntax-highlighting
7,995,193
How to cheat *.so library into using missing @GLIBC_2.6 function?
I need to run a relatively new package on not-so-new RHEL 5.6. I have a 3rd-party library ( lib3rdparty.so ) which is compiled against glibc 2.6, while RHEL 5.6 has only 2.5 installed. But in the library there are only a couple of references to sched_getcpu@@GLIBC_2.6 . I've checked it like this readelf -s lib3rdparty.so | egrep "@GLIBC_2.[6-9]" to find references to anything newer than the GLIBC_2.5 which is installed. The output is 0 FUNC GLOBAL DEFAULT UND sched_getcpu@GLIBC_2.6 (62) 0 FUNC GLOBAL DEFAULT UND sched_getcpu@@GLIBC_2.6 So, I have only one function from GLIBC_2.6 . Now I want to make the library think that I have this function. For that purpose I forged a small library ( libcheat.so ) as mentioned here . Now I have a libcheat.so file which, if run through readelf , will show this string: 10 FUNC GLOBAL DEFAULT 11 sched_getcpu@@GLIBC_2.6 With this library I managed to successfully build an executable which is dynamically linked with lib3rdparty.so . Without this library I can't build anything, because ld can't find the reference to sched_getcpu . But the problem is with running this file: when I try to run it I get the following error: ./hello_world: version `GLIBC_2.6' not found (required by ./lib3rdparty.so) So, I believe there is one last step to make it work, but I don't know what to do. I've tried using /etc/ld.so.preload and exporting LD_LIBRARY_PATH so it would point to my library to load before others, but it won't run. Running it through strace gave no meaningful output. Any ideas?
How to cheat *.so library into using missing @GLIBC_2.6 function? I need to run a relatively new package on not-so-new RHEL 5.6. I have a 3rd-party library ( lib3rdparty.so ) which is compiled against glibc 2.6, while RHEL 5.6 has only 2.5 installed. But in the library there are only a couple of references to sched_getcpu@@GLIBC_2.6 . I've checked it like this readelf -s lib3rdparty.so | egrep "@GLIBC_2.[6-9]" to find references to anything newer than the GLIBC_2.5 which is installed. The output is 0 FUNC GLOBAL DEFAULT UND sched_getcpu@GLIBC_2.6 (62) 0 FUNC GLOBAL DEFAULT UND sched_getcpu@@GLIBC_2.6 So, I have only one function from GLIBC_2.6 . Now I want to make the library think that I have this function. For that purpose I forged a small library ( libcheat.so ) as mentioned here . Now I have a libcheat.so file which, if run through readelf , will show this string: 10 FUNC GLOBAL DEFAULT 11 sched_getcpu@@GLIBC_2.6 With this library I managed to successfully build an executable which is dynamically linked with lib3rdparty.so . Without this library I can't build anything, because ld can't find the reference to sched_getcpu . But the problem is with running this file: when I try to run it I get the following error: ./hello_world: version `GLIBC_2.6' not found (required by ./lib3rdparty.so) So, I believe there is one last step to make it work, but I don't know what to do. I've tried using /etc/ld.so.preload and exporting LD_LIBRARY_PATH so it would point to my library to load before others, but it won't run. Running it through strace gave no meaningful output. Any ideas?
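The missing step is almost certainly symbol versioning: the dynamic linker is looking for a GLIBC_2.6 version node, not merely a symbol of that name, so the stub library has to be linked with a version script that defines that node. A sketch, assuming a trivial stub body (always reporting CPU 0) is acceptable for this application:

/* cheat.c -- stub for the one GLIBC_2.6 symbol lib3rdparty.so needs */
int sched_getcpu(void)
{
    return 0; /* pretend we always run on CPU 0 */
}

/* vers.map -- define the GLIBC_2.6 version node and export the stub under it */
GLIBC_2.6 {
    global: sched_getcpu;
    local:  *;
};

Build with: gcc -shared -fPIC -Wl,--version-script=vers.map -o libcheat.so cheat.c

One caveat, hedged because it depends on how the requirement was recorded: lib3rdparty.so's version-needs entry most likely names libc.so.6 as the provider of GLIBC_2.6, in which case the loader will not accept the node from a preloaded libcheat.so and the version requirement inside lib3rdparty.so itself would need binary patching.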
shared-libraries, glibc, rhel
1
1,521
2
https://stackoverflow.com/questions/7995193/how-to-cheat-so-library-into-using-missing-glibc-2-6-function
4,010,282
Will app built with gcc 4.x on CentOS/RHEL 4.8 run on completely un-updated CentOS/RHEL 4?
We have a commercial application that we build on 32-bit CentOS 4.8 (equivalent to Red Hat Enterprise Linux (RHEL) 4 update 8). The default gcc compiler is at 3.4.6. We are able to run our binary on both 32- and 64-bit CentOS/RHEL 4 and 5, including completely un-updated RHEL 4. THE QUESTION: If we update to a newer gcc 4 version, will the binary still run on a completely un-updated RHEL 4? The newest gcc appears to be 4.5.1 and 4.4.5. (And yes, there are customers who install the initial version, run on an isolated network with no Internet access, and NEVER update from the as-shipped version.) This issue has arisen because we are now porting to 64-bit FreeBSD 8.1 for amd64, and the default gcc there is gcc 4.2.1
Will app built with gcc 4.x on CentOS/RHEL 4.8 run on completely un-updated CentOS/RHEL 4? We have a commercial application that we build on 32-bit CentOS 4.8 (equivalent to Red Hat Enterprise Linux (RHEL) 4 update 8). The default gcc compiler is at 3.4.6. We are able to run our binary on both 32- and 64-bit CentOS/RHEL 4 and 5, including completely un-updated RHEL 4. THE QUESTION: If we update to a newer gcc 4 version, will the binary still run on a completely un-updated RHEL 4? The newest gcc appears to be 4.5.1 and 4.4.5. (And yes, there are customers who install the initial version, run on an isolated network with no Internet access, and NEVER update from the as-shipped version.) This issue has arisen because we are now porting to 64-bit FreeBSD 8.1 for amd64, and the default gcc there is gcc 4.2.1
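The gcc version matters less than the glibc and libstdc++ symbol versions the finished binary ends up requiring: as long as the build stays on the CentOS 4.8 box, the glibc requirements stay at what that box ships, but a newer g++ can drag in newer GLIBCXX version nodes unless libstdc++ is linked statically (-static-libgcc, and -static-libstdc++ where the gcc release supports it). A quick audit, a sketch using standard binutils where ./myapp is a placeholder for the real binary:

# List every glibc/libstdc++ version node the binary references;
# each one must exist on the oldest target system.
objdump -T ./myapp | grep -oE 'GLIBC_[0-9.]+' | sort -u
objdump -T ./myapp | grep -oE 'GLIBCXX_[0-9.]+' | sort -u

# Compare against what an un-updated RHEL 4 box actually provides:
strings /lib/libc.so.6 | grep '^GLIBC_'
strings /usr/lib/libstdc++.so.6 | grep '^GLIBCXX_'

If every version the first pair of commands prints also appears in the second pair's output on a pristine RHEL 4 machine, the binary should load there.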
portability, centos, portable-applications, rhel, gcc4
1
667
1
https://stackoverflow.com/questions/4010282/will-app-built-with-gcc-4-x-on-centos-rhel-4-8-run-on-completely-un-updated-cent
70,731,277
`df' unexpected' checking for disk space inside a function using a while loop in a bash script
I am getting an issue where, if I call the function below, I get the error line 89: syntax error at line 117: `df' unexpected. If I take the code out of the function it works fine. Is there any reason for the error above? This is a bash script on RHEL. function testr{ df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output; do usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1) partition=$(echo $output | awk '{ print $2 }') (.. Sends alert via mail after) done }
`df' unexpected' checking for disk space inside a function using a while loop in a bash script I am getting an issue where, if I call the function below, I get the error line 89: syntax error at line 117: `df' unexpected. If I take the code out of the function it works fine. Is there any reason for the error above? This is a bash script on RHEL. function testr{ df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output; do usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1) partition=$(echo $output | awk '{ print $2 }') (.. Sends alert via mail after) done }
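Two likely culprits, offered as a diagnosis rather than a certainty: the message wording ("syntax error at line N: ... unexpected") is the POSIX-sh/ksh style rather than bash's, which suggests the script is being executed by sh, and function testr{ is missing the space before the opening brace, so the header never parses as a function definition. A corrected sketch, with a placeholder where the questioner elided the mail alert:

#!/bin/bash
# POSIX-style definition; note the space before the opening brace.
testr() {
    df -H | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' |
    while read -r output; do
        usep=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
        partition=$(echo "$output" | awk '{ print $2 }')
        # placeholder for the mail alert the questioner sends here
        echo "partition ${partition} is at ${usep}% usage"
    done
}

Invoke it with bash ./script.sh (or chmod +x and run ./script.sh so the shebang takes effect) rather than sh ./script.sh, since the function keyword form is not portable shell.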
bash, rhel
1
48
1
https://stackoverflow.com/questions/70731277/df-unexpected-checking-for-diskspace-inside-a-function-using-a-while-loop-bas
64,377,736
cx_Oracle.DatabaseError: ORA-12592: TNS:bad packet
I am writing a Flask API, and am seeing a lot of failures when load testing. Looking at the uwsgi logs, I am seeing something which looks a little nasty: cx_Oracle.DatabaseError: ORA-12592: TNS:bad packet The Oracle connection is working, as I am not seeing a complete failure, but this does seem to be what is terminating the HTTP REST call prematurely in most cases. What is causing this? I am using RHEL, with cx_Oracle 7.23, connecting to a 12c database. I am using the Oracle thin client. Exception on /api/read/maa [GET] Traceback (most recent call last): File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/ariel/anaconda3/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise raise value File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/ariel/anaconda3/lib/python3.7/site-packages/connexion/decorators/decorator.py", line 48, in wrapper response = function(request) File "/ariel/anaconda3/lib/python3.7/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper response = function(request) File "/ariel/anaconda3/lib/python3.7/site-packages/connexion/decorators/validation.py", line 384, in wrapper return function(request) File "/ariel/anaconda3/lib/python3.7/site-packages/connexion/decorators/parameter.py", line 121, in wrapper return function(**kwargs) File "./registrations.py", line 58, in read_maa_non_passive for row in cursor_ariel.fetchall(): cx_Oracle.DatabaseError: ORA-12592: TNS:bad packet Getting data and status code ----UPDATE--------- All my problems went away when I stopped connection pooling in cx_Oracle. I originally had a single connection to Oracle shared across the Flask application. This gave me failures in stress testing. So I tried to be clever and use session pooling, acquiring and releasing a connection at each service call. Finally I went back to "bad practice" and created a completely new connection to Oracle for every single function call (API endpoint), and I now get a 100% success rate across stress testing in Locust, even for the larger response calls which are 30 MB JSON payloads.
cx_Oracle.DatabaseError: ORA-12592: TNS:bad packet I am writing a Flask API, and am seeing a lot of failures when load testing. Looking at the uwsgi logs, I am seeing something which looks a little nasty: cx_Oracle.DatabaseError: ORA-12592: TNS:bad packet The Oracle connection is working, as I am not seeing a complete failure, but this does seem to be what is terminating the HTTP REST call prematurely in most cases. What is causing this? I am using RHEL, with cx_Oracle 7.23, connecting to a 12c database. I am using the Oracle thin client. Exception on /api/read/maa [GET] Traceback (most recent call last): File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/ariel/anaconda3/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise raise value File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/ariel/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/ariel/anaconda3/lib/python3.7/site-packages/connexion/decorators/decorator.py", line 48, in wrapper response = function(request) File "/ariel/anaconda3/lib/python3.7/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper response = function(request) File "/ariel/anaconda3/lib/python3.7/site-packages/connexion/decorators/validation.py", line 384, in wrapper return function(request) File "/ariel/anaconda3/lib/python3.7/site-packages/connexion/decorators/parameter.py", line 121, in wrapper return function(**kwargs) File "./registrations.py", line 58, in read_maa_non_passive for row in cursor_ariel.fetchall(): cx_Oracle.DatabaseError: ORA-12592: TNS:bad packet Getting data and status code ----UPDATE--------- All my problems went away when I stopped connection pooling in cx_Oracle. I originally had a single connection to Oracle shared across the Flask application. This gave me failures in stress testing. So I tried to be clever and use session pooling, acquiring and releasing a connection at each service call. Finally I went back to "bad practice" and created a completely new connection to Oracle for every single function call (API endpoint), and I now get a 100% success rate across stress testing in Locust, even for the larger response calls which are 30 MB JSON payloads.
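ORA-12592 under concurrency is the classic symptom of two threads interleaving writes on the same Oracle connection: connections are not safe for simultaneous calls. A session pool is the right tool, but it has to be created with threaded=True and each request must acquire and release its own connection. A sketch with placeholder credentials, DSN, and query:

import cx_Oracle

# threaded=True wraps the underlying OCI calls in a mutex; without it,
# concurrent workers can interleave and corrupt the TNS packet stream.
pool = cx_Oracle.SessionPool(
    user="appuser", password="secret", dsn="dbhost/orclpdb",  # placeholders
    min=2, max=10, increment=1, threaded=True,
)

def read_maa():
    conn = pool.acquire()              # one connection per request
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT 1 FROM dual")  # placeholder query
        return cursor.fetchall()
    finally:
        pool.release(conn)             # never let two requests share conn

If the uWSGI deployment runs multiple processes, each process needs its own pool (create it lazily, after the fork); a pool inherited across fork is another common source of corrupted-connection errors.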
python, oracle-database, rhel, cx-oracle
1
2,767
2
https://stackoverflow.com/questions/64377736/cx-oracle-databaseerror-ora-12592-tnsbad-packet
33,034,152
yum remove issues during clean uninstall of HDP
As mentioned in several links like this and this , I am trying to cleanly uninstall Hortonworks Data Platform 2.2. Being new to Linux, I am facing issues at the very first step: removing the installed HDP packages. [root@l1031lab ~]# yum list installed | grep HDP This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. bigtop-jsvc.x86_64 1.0.10.2.2.4.2-2.el6 @HDP-2.2 bigtop-tomcat.noarch 6.0.41-1.el6 @HDP-2.2 hadoop_2_2_4_2_2.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-client.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-doc.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-hdfs.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-httpfs.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-source.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-yarn.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-doc.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-master.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-rest.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-thrift.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-thrift2.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hdp-select.noarch 2.2.4.2-2.el6 @HDP-2.2 phoenix_2_2_4_2_2.noarch 4.2.0.2.2.4.2-2.el6 @HDP-2.2 0.4.0.2.2.4.2-2.el6 @HDP-2.2 0.4.0.2.2.4.2-2.el6 @HDP-2.2 zookeeper_2_2_4_2_2.noarch 3.4.6.2.2.4.2-2.el6 @HDP-2.2 3.4.6.2.2.4.2-2.el6 @HDP-2.2 I am trying to remove all these packages in one go: [root@l1031lab ~]# yum remove "HDP*" Loaded plugins: product-id, rhnplugin, security, subscription-manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. This system is receiving updates from RHN Classic or RHN Satellite. Setting up Remove Process No Match for argument: HDP* [URL] [Errno 12] Timeout on [URL] (28, 'connect() timed out!') Trying other mirror. [URL] [Errno 12] Timeout on [URL] (28, 'connect() timed out!') Trying other mirror. [URL] [Errno 12] Timeout on [URL] (28, 'connect() timed out!') Trying other mirror. No package HDP* available. * Maybe you meant: hdp-select, hdparm No Packages marked for removal I have two questions: Is Internet connectivity the issue here? If yes, is it mandatory to have Internet access for a clean removal of all these packages?
yum remove issues during clean uninstall of HDP As mentioned in several links like this and this , I am trying to cleanly uninstall Hortonworks Data Platform 2.2. Being new to Linux, I am facing issues at the very first step: removing the installed HDP packages. [root@l1031lab ~]# yum list installed | grep HDP This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. bigtop-jsvc.x86_64 1.0.10.2.2.4.2-2.el6 @HDP-2.2 bigtop-tomcat.noarch 6.0.41-1.el6 @HDP-2.2 hadoop_2_2_4_2_2.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-client.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-doc.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-hdfs.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-httpfs.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-source.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hadoop_2_2_4_2_2-yarn.x86_64 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 2.6.0.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-doc.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-master.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-rest.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-thrift.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hbase_2_2_4_2_2-thrift2.noarch 0.98.4.2.2.4.2-2.el6 @HDP-2.2 hdp-select.noarch 2.2.4.2-2.el6 @HDP-2.2 phoenix_2_2_4_2_2.noarch 4.2.0.2.2.4.2-2.el6 @HDP-2.2 0.4.0.2.2.4.2-2.el6 @HDP-2.2 0.4.0.2.2.4.2-2.el6 @HDP-2.2 zookeeper_2_2_4_2_2.noarch 3.4.6.2.2.4.2-2.el6 @HDP-2.2 3.4.6.2.2.4.2-2.el6 @HDP-2.2 I am trying to remove all these packages in one go: [root@l1031lab ~]# yum remove "HDP*" Loaded plugins: product-id, rhnplugin, security, subscription-manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. This system is receiving updates from RHN Classic or RHN Satellite. Setting up Remove Process No Match for argument: HDP* [URL] [Errno 12] Timeout on [URL] (28, 'connect() timed out!') Trying other mirror. [URL] [Errno 12] Timeout on [URL] (28, 'connect() timed out!') Trying other mirror. [URL] [Errno 12] Timeout on [URL] (28, 'connect() timed out!') Trying other mirror. No package HDP* available. * Maybe you meant: hdp-select, hdparm No Packages marked for removal I have two questions: Is Internet connectivity the issue here? If yes, is it mandatory to have Internet access for a clean removal of all these packages?
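Two separate things are going on, most likely: yum remove matches package names, and HDP-2.2 here is the repository name, so "HDP*" matches nothing; and the timeouts are only yum trying to refresh repository metadata, which a removal does not actually need. A sketch that removes everything that came from the HDP repo without touching the network; the grep pattern is assembled from the questioner's own installed-package list above and is an assumption to review before running:

# Package name patterns taken from the "yum list installed | grep HDP" output;
# --disablerepo='*' stops yum from contacting any mirrors during removal.
rpm -qa | grep -Ei 'hadoop_|hbase_|zookeeper_|phoenix_|bigtop-|hdp-select' \
  | xargs -r yum -y --disablerepo='*' remove

So: no, Internet access is not required to remove installed packages; disabling the repositories sidesteps the metadata timeouts entirely.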
linux, yum, rhel, hortonworks-data-platform
1
1,144
2
https://stackoverflow.com/questions/33034152/yum-remove-issues-during-clean-un-install-of-hdp