| question_id (int64, 82.3k–79.7M) | title_clean (string, 15–158 chars) | body_clean (string, 62–28.5k chars) | full_text (string, 95–28.5k chars) | tags (string, 4–80 chars) | score (int64, 0–1.15k) | view_count (int64, 22–1.62M) | answer_count (int64, 0–30) | link (string, 58–125 chars) |
|---|---|---|---|---|---|---|---|---|
25,713,368
|
Caching (Cache) directory in RHEL / CentOS
|
How can I cache a particular directory in RHEL / CentOS? Suppose I have a directory which contains 10 GB of data and I have 48 GB of RAM. How can I cache all the data inside this directory (only this specific directory) in memory for a specific amount of time, or indefinitely?
|
Caching (Cache) directory in RHEL / CentOS How can I cache a particular directory in RHEL / CentOS? Suppose I have a directory which contains 10 GB of data and I have 48 GB of RAM. How can I cache all the data inside this directory (only this specific directory) in memory for a specific amount of time, or indefinitely?
|
caching, centos, ram, rhel
| 1
| 841
| 1
|
https://stackoverflow.com/questions/25713368/caching-cache-directory-in-rhel-centos
|
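A minimal bash sketch of one common approach to the question above, not a definitive answer: warm the kernel page cache by reading the files once, or use the vmtouch tool (packaged in EPEL) to load and optionally pin the directory. The path /data/mydir is only a placeholder.

```bash
# Warm the page cache once: read every file and discard the output.
# The kernel keeps the pages cached for as long as memory pressure allows.
find /data/mydir -type f -exec cat {} + > /dev/null

# vmtouch can do the same and also pin the pages:
#   -t  touch (load) all pages into the page cache
#   -l  lock the pages with mlock so they are not evicted
#   -d  stay running as a daemon that holds the lock
vmtouch -t /data/mydir     # one-off warm-up
vmtouch -dl /data/mydir    # keep the directory pinned until the daemon is stopped
```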
14,038,845
|
same rpm package shows different file permissions on different machines?
|
rpm -qlvp package.rpm shows permissions of some executable files as 750 (i.e. rwxr-x--- ) as intended and defined in the spec with the %attr directive, on the building machine. when copied to the target test machine, the same command on the same file shows the default 755 (i.e. rwxr-xr-x ). These are indeed the permissions of the installed files. I can't find anything on the net to explain the mystery. Any ideas? both machines are RHEL 5.7 virtual machines BTW.
|
same rpm package shows different file permissions on different machines? rpm -qlvp package.rpm shows permissions of some executable files as 750 (i.e. rwxr-x--- ) as intended and defined in the spec with the %attr directive, on the building machine. when copied to the target test machine, the same command on the same file shows the default 755 (i.e. rwxr-xr-x ). These are indeed the permissions of the installed files. I can't find anything on the net to explain the mystery. Any ideas? both machines are RHEL 5.7 virtual machines BTW.
|
installation, file-permissions, packaging, rpm, rhel
| 1
| 417
| 1
|
https://stackoverflow.com/questions/14038845/same-rpm-package-shows-different-file-permissions-on-different-machines
|
63,934,963
|
Why would I need to list -ldl before a library that calls dlopen/dlclose/dlerror when linking
|
I am building an executable (foo.exe let's call it) on RHEL with gcc 6.2. It links against a few third-party libraries, libzzdesign.so, libyydesign.so. Yydesign uses dlopen/dlclose/dlerror. I would expect this command-line to work: g++ -Wall -fcheck-new -fno-strict-aliasing -msse2 -fno-omit-frame-pointer -pthread -O3 -Wl,--export-dynamic -o foo.exe foo.o -L/path/to/zzdesign -Wl,-rpath=/path/to/zzdesign -lzzdesign -L/path/to/yydesign -Wl,-rpath=/path/to/yydesign -lyydesign -ldl (I'm listing all the options used in case it matters) It produces the errors, /path/to/yydesign/libyydesign.so: undefined reference to 'dlclose' /path/to/yydesign/libyydesign.so: undefined reference to 'dlerror' If I change the command line to put -ldl before -lyydesign : g++ -Wall -fcheck-new -fno-strict-aliasing -msse2 -fno-omit-frame-pointer -pthread -O3 -Wl,--export-dynamic -o foo.exe foo.o -L/path/to/zzdesign -Wl,-rpath=/path/to/zzdesign -lzzdesign -L/path/to/yydesign -Wl,-rpath=/path/to/yydesign -ldl -lyydesign ... it works without error. This is the opposite of everything I thought I knew about order of libraries on the command line when linking. Why does -ldl have to come before -lyydesign ? Other than dumb luck to stumble across this solution, how could I troubleshoot the original error to understand what's going on? And since changing the build system to move -ldl first in all the places it's needed is kind of a pain, is there a way I can avoid having to put -ldl first?
|
Why would I need to list -ldl before a library that calls dlopen/dlclose/dlerror when linking I am building an executable (foo.exe let's call it) on RHEL with gcc 6.2. It links against a few third-party libraries, libzzdesign.so, libyydesign.so. Yydesign uses dlopen/dlclose/dlerror. I would expect this command-line to work: g++ -Wall -fcheck-new -fno-strict-aliasing -msse2 -fno-omit-frame-pointer -pthread -O3 -Wl,--export-dynamic -o foo.exe foo.o -L/path/to/zzdesign -Wl,-rpath=/path/to/zzdesign -lzzdesign -L/path/to/yydesign -Wl,-rpath=/path/to/yydesign -lyydesign -ldl (I'm listing all the options used in case it matters) It produces the errors, /path/to/yydesign/libyydesign.so: undefined reference to 'dlclose' /path/to/yydesign/libyydesign.so: undefined reference to 'dlerror' If I change the command line to put -ldl before -lyydesign : g++ -Wall -fcheck-new -fno-strict-aliasing -msse2 -fno-omit-frame-pointer -pthread -O3 -Wl,--export-dynamic -o foo.exe foo.o -L/path/to/zzdesign -Wl,-rpath=/path/to/zzdesign -lzzdesign -L/path/to/yydesign -Wl,-rpath=/path/to/yydesign -ldl -lyydesign ... it works without error. This is the opposite of everything I thought I knew about order of libraries on the command line when linking. Why does -ldl have to come before -lyydesign ? Other than dumb luck to stumble across this solution, how could I troubleshoot the original error to understand what's going on? And since changing the build system to move -ldl first in all the places it's needed is kind of a pain, is there a way I can avoid having to put -ldl first?
|
c++, linker, g++, rhel
| 1
| 312
| 1
|
https://stackoverflow.com/questions/63934963/why-would-i-need-to-list-ldl-before-a-library-that-calls-dlopen-dlclose-dlerror
|
25,151,290
|
Very simple perl script has massive memory leak
|
I'm using a perl script to transform a file and found just reading from stdin and writing to stdout is enough to cause a massive memory leak. It gets up to around 20gig and I presume gets killed by the OS. Here's a script that shows the problem. #!/usr/bin/perl use strict; use warnings; foreach my $line(<STDIN>) { print $line; } And I'm running it like this cat inputFile.x | perl test.pl > outputFile.x As soon as I run this memory heads upwards at about 0.5gig per second. The input file is 68gig so it looks like perl is never releasing memory. I've tried all sorts of stuff like undef $line, using a ref, defining $line outside the foreach. Is there a way to force perl to release the memory? EDIT: Note, running on Red Hat 6.5 64 bit, Perl 5.10.1
|
Very simple perl script has massive memory leak I'm using a perl script to transform a file and found just reading from stdin and writing to stdout is enough to cause a massive memory leak. It gets up to around 20gig and I presume gets killed by the OS. Here's a script that shows the problem. #!/usr/bin/perl use strict; use warnings; foreach my $line(<STDIN>) { print $line; } And I'm running it like this cat inputFile.x | perl test.pl > outputFile.x As soon as I run this memory heads upwards at about 0.5gig per second. The input file is 68gig so it looks like perl is never releasing memory. I've tried all sorts of stuff like undef $line, using a ref, defining $line outside the foreach. Is there a way to force perl to release the memory? EDIT: Note, running on Red Hat 6.5 64 bit, Perl 5.10.1
|
linux, perl, rhel
| 0
| 833
| 2
|
https://stackoverflow.com/questions/25151290/very-simple-perl-script-has-massive-memory-leak
|
20,796,823
|
Unable to pass char pointer to gethostname (Linux)
|
The following C program attempts to fetch and print the host name of the current RHEL host. It throws a segmentation fault on this machine. As per the definition of gethostname I should be able to pass a char pointer, shouldn't I? When I use a char array instead (like char hname[255] ), the call to gethostname works. (If I did this how would I return the array to main?) #include <stdio.h> #include <unistd.h> char * fetchHostname() { // using "char hname[255]" gets me around the issue; // however, I dont understand why I'm unable to use // a char pointer. char *hname; gethostname(hname, 255 ); return hname; } int main() { char *hostname = fetchHostname(); return 0; } Output: pmn@rhel /tmp/temp > gcc -g test.c -o test pmn@rhel /tmp/temp > pmn@rhel /tmp/temp > ./test Segmentation fault pmn@rhel /tmp/temp >
|
Unable to pass char pointer to gethostname (Linux) The following C program attempts to fetch and print the host name of the current RHEL host. It throws a segmentation fault on this machine. As per the definition of gethostname I should be able to pass a char pointer, shouldn't I? When I use a char array instead (like char hname[255] ), the call to gethostname works. (If I did this how would I return the array to main?) #include <stdio.h> #include <unistd.h> char * fetchHostname() { // using "char hname[255]" gets me around the issue; // however, I dont understand why I'm unable to use // a char pointer. char *hname; gethostname(hname, 255 ); return hname; } int main() { char *hostname = fetchHostname(); return 0; } Output: pmn@rhel /tmp/temp > gcc -g test.c -o test pmn@rhel /tmp/temp > pmn@rhel /tmp/temp > ./test Segmentation fault pmn@rhel /tmp/temp >
|
c, hostname, rhel, unistd.h
| 0
| 1,319
| 6
|
https://stackoverflow.com/questions/20796823/unable-to-pass-char-pointer-to-gethostname-linux
|
7,318,399
|
BASH problem to fill a file 'til a determined size
|
I wrote a script to fill a file for some Disk capacity testing. Could you please tell me why I have an error? #!/bin/bash COUNTER=0; FILE_SIZE_BITS=8589934592; FILE_NAME="fill_me"; while [ $COUNTER -eq 0 ]; do echo "Dummy text to fill the file" >> "$FILE_NAME"; SIZE='stat -c%s fill_me'; if [[ $SIZE -gt $FILE_SIZE_BITS ]]; then let COUNTER=COUNTER+1; fi done the error is: -bash: [[: stat -c%s fill_me: division by 0 (error token is "fill_me") Thanks
|
BASH problem to fill a file 'til a determined size I wrote a script to fill a file for some Disk capacity testing. Could you please tell me why I have an error? #!/bin/bash COUNTER=0; FILE_SIZE_BITS=8589934592; FILE_NAME="fill_me"; while [ $COUNTER -eq 0 ]; do echo "Dummy text to fill the file" >> "$FILE_NAME"; SIZE='stat -c%s fill_me'; if [[ $SIZE -gt $FILE_SIZE_BITS ]]; then let COUNTER=COUNTER+1; fi done the error is: -bash: [[: stat -c%s fill_me: division by 0 (error token is "fill_me") Thanks
|
bash, shell, rhel
| 0
| 1,029
| 3
|
https://stackoverflow.com/questions/7318399/bash-problem-to-fill-a-file-til-a-determined-size
|
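A sketch of the likely fix for the script above: SIZE='stat -c%s fill_me' stores the literal string instead of running stat (the stray % is then what trips the arithmetic test), so the size never updates. Command substitution returns the real byte count; variable names follow the original script.

```bash
#!/bin/bash
FILE_NAME="fill_me"
FILE_SIZE_BYTES=8589934592   # 8 GiB; stat reports bytes, not bits

while :; do
    echo "Dummy text to fill the file" >> "$FILE_NAME"
    SIZE=$(stat -c%s "$FILE_NAME")      # command substitution, not single quotes
    if (( SIZE > FILE_SIZE_BYTES )); then
        break
    fi
done
```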
55,750,256
|
How to simplify an if-else statement matching IP address
|
I have the following If-Else statement which I want to simplify. if [[ "$IP" == 192.* ]] || [[ "$IPAddr" == 193.* ]] then data="correct data set" fi I need to include more of [[ "$IP" == 192.* ]] and want to see if there is better way to do the same rather than using too many || statements
|
How to simplify an if-else statement matching IP address I have the following If-Else statement which I want to simplify. if [[ "$IP" == 192.* ]] || [[ "$IPAddr" == 193.* ]] then data="correct data set" fi I need to include more of [[ "$IP" == 192.* ]] and want to see if there is better way to do the same rather than using too many || statements
|
linux, bash, shell, sh, rhel
| 0
| 1,027
| 3
|
https://stackoverflow.com/questions/55750256/how-to-simplify-an-if-else-statement-matching-ip-address
|
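A minimal bash sketch of one way to collapse the chain of || tests from the question above into a single case pattern list; the extra prefixes are only examples.

```bash
# One pattern list instead of many [[ ... ]] || [[ ... ]] tests
case "$IP" in
    192.*|193.*|194.*|10.*)
        data="correct data set"
        ;;
    *)
        data=""
        ;;
esac
```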
48,513,463
|
Dockerfile FROM for rhel images
|
I would like to create a Dockerfile as below FROM <rhel6/7> # add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added RUN addgroup redis && useradd -g redis -ms /bin/bash redi RUN mkdir /data && chown redis:redis /data VOLUME /data WORKDIR /data # Copy the current directory contents into the container at /app ADD . /data # Run app.py when the container launches CMD ["/usr/software/rats/bedrock/bin/python2.7", "/data/test_redis.py"] What do I replace FROM <rhel6/7> with?
|
Dockerfile FROM for rhel images I would like to create a Dockerfile as below FROM <rhel6/7> # add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added RUN addgroup redis && useradd -g redis -ms /bin/bash redi RUN mkdir /data && chown redis:redis /data VOLUME /data WORKDIR /data # Copy the current directory contents into the container at /app ADD . /data # Run app.py when the container launches CMD ["/usr/software/rats/bedrock/bin/python2.7", "/data/test_redis.py"] What do I replace FROM <rhel6/7> with?
|
docker, dockerfile, rhel
| 0
| 8,736
| 2
|
https://stackoverflow.com/questions/48513463/dockerfile-from-for-rhel-images
|
26,503,233
|
How does Linux handle DST (daylight saving time)?
|
How does Linux handle daylight saving time (DST)? Does the switch happen instantly, like 3 o'clock switching instantly to 2 o'clock, or does it change gradually? I ask this because I have a large database on my servers, and if this switch happens instantly, one hour of entries in the database will be overwritten. How can I solve this problem?
|
How does Linux handle DST (daylight saving time)? How does Linux handle daylight saving time (DST)? Does the switch happen instantly, like 3 o'clock switching instantly to 2 o'clock, or does it change gradually? I ask this because I have a large database on my servers, and if this switch happens instantly, one hour of entries in the database will be overwritten. How can I solve this problem?
|
linux, database, dst, rhel
| 0
| 3,527
| 4
|
https://stackoverflow.com/questions/26503233/how-does-linux-handle-dst-daylight-saving-time
|
38,929,605
|
How to grep for a key in the file?
|
I have a text file that carries the following values Key 1: 0e3f02b50acfe57e21ba991b39d75170d80d98e831400250d3b4813c9b305fd801 Key 2: 8e3db2b4cdfc55d91512daa9ed31b348545f6ba80fcf2c3e1dbb6ce9405f959602 I am using the following grep command to extract value of Key 1 grep -Po '(?<=Key 1=)[^"]*' abc.txt However, it doesn't seem to work. Please help me figure out the correct grep command My output should be: 0e3f02b50acfe57e21ba991b39d75170d80d98e831400250d3b4813c9b305fd801
|
How to grep for a key in the file? I have a text file that carries the following values Key 1: 0e3f02b50acfe57e21ba991b39d75170d80d98e831400250d3b4813c9b305fd801 Key 2: 8e3db2b4cdfc55d91512daa9ed31b348545f6ba80fcf2c3e1dbb6ce9405f959602 I am using the following grep command to extract value of Key 1 grep -Po '(?<=Key 1=)[^"]*' abc.txt However, it doesn't seem to work. Please help me figure out the correct grep command My output should be: 0e3f02b50acfe57e21ba991b39d75170d80d98e831400250d3b4813c9b305fd801
|
linux, bash, grep, sh, rhel
| 0
| 1,754
| 5
|
https://stackoverflow.com/questions/38929605/how-to-grep-for-a-key-in-the-file
|
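A sketch of the likely fix for the command above: the file uses "Key 1: " (colon and space) while the look-behind searches for "Key 1=", so nothing matches. The file name abc.txt is taken from the question.

```bash
# GNU grep, look-behind matching the actual "Key 1: " prefix
grep -Po '(?<=Key 1: ).*' abc.txt

# Equivalent without PCRE, splitting the line on ": "
awk -F': ' '$1 == "Key 1" {print $2}' abc.txt
```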
36,400,366
|
Kafka configuration on existing CDH 5.5.2 cluster
|
I am installing Kafka-2.0 on my existing CDH 5.5.2 cluster, here is the procedure what i followed Add services from CM Selected Kafka (Before that i downloaded and distributes and activated kafka parcel on all the nodes) Selected 1 nodes for KafkaBroker and 4 nodes for Kafka MirrorMaker Then i updated my Destination Broker List (bootstrap.servers) property with one of the Mirror Maker node as well as Source Broker List (source.bootstrap.servers) with same node Below error i am getting (log file) Fatal error during KafkaServerStartable startup. Prepare to shutdown java.lang.OutOfMemoryError: Java heap space at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) at java.nio.ByteBuffer.allocate(ByteBuffer.java:331) at kafka.log.SkimpyOffsetMap.<init>(OffsetMap.scala:43) at kafka.log.LogCleaner$CleanerThread.<init>(LogCleaner.scala:186) at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:83) at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:83) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245) at scala.collection.immutable.Range.foreach(Range.scala:166) at scala.collection.TraversableLike$class.map(TraversableLike.scala:245) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at kafka.log.LogCleaner.<init>(LogCleaner.scala:83) at kafka.log.LogManager.<init>(LogManager.scala:64) at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:601) at kafka.server.KafkaServer.startup(KafkaServer.scala:180) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37) at kafka.Kafka$.main(Kafka.scala:67) at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:76) at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
|
Kafka configuration on existing CDH 5.5.2 cluster I am installing Kafka-2.0 on my existing CDH 5.5.2 cluster, here is the procedure what i followed Add services from CM Selected Kafka (Before that i downloaded and distributes and activated kafka parcel on all the nodes) Selected 1 nodes for KafkaBroker and 4 nodes for Kafka MirrorMaker Then i updated my Destination Broker List (bootstrap.servers) property with one of the Mirror Maker node as well as Source Broker List (source.bootstrap.servers) with same node Below error i am getting (log file) Fatal error during KafkaServerStartable startup. Prepare to shutdown java.lang.OutOfMemoryError: Java heap space at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) at java.nio.ByteBuffer.allocate(ByteBuffer.java:331) at kafka.log.SkimpyOffsetMap.<init>(OffsetMap.scala:43) at kafka.log.LogCleaner$CleanerThread.<init>(LogCleaner.scala:186) at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:83) at kafka.log.LogCleaner$$anonfun$1.apply(LogCleaner.scala:83) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245) at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245) at scala.collection.immutable.Range.foreach(Range.scala:166) at scala.collection.TraversableLike$class.map(TraversableLike.scala:245) at scala.collection.AbstractTraversable.map(Traversable.scala:104) at kafka.log.LogCleaner.<init>(LogCleaner.scala:83) at kafka.log.LogManager.<init>(LogManager.scala:64) at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:601) at kafka.server.KafkaServer.startup(KafkaServer.scala:180) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37) at kafka.Kafka$.main(Kafka.scala:67) at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:76) at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
|
hadoop, apache-kafka, hadoop2, rhel, cloudera-cdh
| 0
| 2,342
| 2
|
https://stackoverflow.com/questions/36400366/kafka-configuration-on-existing-cdh-5-5-2-cluster
|
60,195,697
|
How to create solr service for starting solr on reboot
|
I am trying to create a solr service script that I can use to start solr automatically on reboot. Here is a script I saw recommended: #!/bin/sh # Starts, stops, and restarts Apache Solr. # # chkconfig: 35 92 08 # description: Starts and stops Apache Solr SOLR_DIR="/var/www/html/fas/solr/solr-latest" JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8983 -DSTOP.KEY=mustard -jar /var/www/html/fas/solr/solr-latest/server/start.jar" LOG_FILE="/var/log/solr.log" JAVA="/bin/java" case $1 in start) echo "Starting Solr" cd $SOLR_DIR $JAVA $JAVA_OPTIONS 2> $LOG_FILE & ;; stop) echo "Stopping Solr" cd $SOLR_DIR $JAVA $JAVA_OPTIONS --stop ;; restart) $0 stop sleep 1 $0 start ;; *) echo "Usage: $0 {start|stop|restart}" >&2 exit 1 ;; esac I think I have set the appropriate values for the variables in the script. But when I try to run the script, I get "Connection refused." $ service solr stop Stopping Solr java.net.ConnectException: Connection refused (Connection refused) I get the same result whether I run the script as root or not. I can stop and start solr this way, though: /path/to/my/solr/bin/solr start So I also tried creating this script at /etc/init.d/solr-start #!/bin/sh # Starts Apache Solr. # # chkconfig: 35 92 08 # description: Starts Apache Solr /var/www/html/fas/solr/solr-latest/bin/solr start This script works at the command line, but it doesn't work on reboot. To try to make it run on reboot, I did... sudo systemctl enable solr-start But solr is not started on reboot. My versions: RHEL 7, Solr 6.6.6
|
How to create solr service for starting solr on reboot I am trying to create a solr service script that I can use to start solr automatically on reboot. Here is a script I saw recommended: #!/bin/sh # Starts, stops, and restarts Apache Solr. # # chkconfig: 35 92 08 # description: Starts and stops Apache Solr SOLR_DIR="/var/www/html/fas/solr/solr-latest" JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8983 -DSTOP.KEY=mustard -jar /var/www/html/fas/solr/solr-latest/server/start.jar" LOG_FILE="/var/log/solr.log" JAVA="/bin/java" case $1 in start) echo "Starting Solr" cd $SOLR_DIR $JAVA $JAVA_OPTIONS 2> $LOG_FILE & ;; stop) echo "Stopping Solr" cd $SOLR_DIR $JAVA $JAVA_OPTIONS --stop ;; restart) $0 stop sleep 1 $0 start ;; *) echo "Usage: $0 {start|stop|restart}" >&2 exit 1 ;; esac I think I have set the appropriate values for the variables in the script. But when I try to run the script, I get "Connection refused." $ service solr stop Stopping Solr java.net.ConnectException: Connection refused (Connection refused) I get the same result whether I run the script as root or not. I can stop and start solr this way, though: /path/to/my/solr/bin/solr start So I also tried creating this script at /etc/init.d/solr-start #!/bin/sh # Starts Apache Solr. # # chkconfig: 35 92 08 # description: Starts Apache Solr /var/www/html/fas/solr/solr-latest/bin/solr start This script works at the command line, but it doesn't work on reboot. To try to make it run on reboot, I did... sudo systemctl enable solr-start But solr is not started on reboot. My versions: RHEL 7, Solr 6.6.6
|
bash, solr, boot, rhel
| 0
| 5,637
| 2
|
https://stackoverflow.com/questions/60195697/how-to-create-solr-service-for-starting-solr-on-reboot
|
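A rough sketch of a native systemd unit for the installation described above, rather than a SysV-style script; the paths come from the question, but a real unit may also need User=, Environment=, or PIDFile= settings depending on how Solr was installed, so treat this as a starting point only.

```bash
# Create the unit file (sketch only), then enable it for boot
sudo tee /etc/systemd/system/solr.service > /dev/null <<'EOF'
[Unit]
Description=Apache Solr
After=network.target

[Service]
Type=forking
ExecStart=/var/www/html/fas/solr/solr-latest/bin/solr start
ExecStop=/var/www/html/fas/solr/solr-latest/bin/solr stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now solr.service
```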
22,648,968
|
default instance storage for m1.small does not exist
|
I ran df -h and got: /dev/xvde1 6.0G 1.9G 4.1G 32% / none 828M 0 828M 0% /dev/shm and cat /etc/fstab: LABEL=_/ / ext4 defaults 1 1 /dev/xvdb /mnt ext3 defaults,context=system_u:object_r:usr_t:s0 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 /dev/sda3 none swap sw,comment=cloudconfig 0 0 output of lsblk: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT xvde1 202:65 0 6G 0 disk / xvde3 202:67 0 896M 0 disk [SWAP] I suppose /dev/xvdb to be my instance storage of around 160 GB. However, I do not see this device when I run ls -a on /dev/. Does any one know how I can get this instance storage mounted? thanks so much
|
default instance storage for m1.small does not exist I ran df -h and got: /dev/xvde1 6.0G 1.9G 4.1G 32% / none 828M 0 828M 0% /dev/shm and cat /etc/fstab: LABEL=_/ / ext4 defaults 1 1 /dev/xvdb /mnt ext3 defaults,context=system_u:object_r:usr_t:s0 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 /dev/sda3 none swap sw,comment=cloudconfig 0 0 output of lsblk: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT xvde1 202:65 0 6G 0 disk / xvde3 202:67 0 896M 0 disk [SWAP] I suppose /dev/xvdb to be my instance storage of around 160 GB. However, I do not see this device when I run ls -a on /dev/. Does any one know how I can get this instance storage mounted? thanks so much
|
amazon-web-services, amazon-ec2, hosting, rhel
| 0
| 1,412
| 2
|
https://stackoverflow.com/questions/22648968/default-instance-storage-for-m1-small-does-not-exist
|
12,714,807
|
How can I compress a directory, and convert soft to hard links?
|
I would like to compress a directory. tar -cvzf mydir.tar.gz mydir but this retains symlinks so that it is not portable to a new system. How can I convert symlinks? I have tried tar -cvzfh since man tar says -h, --dereference don’t dump symlinks; dump the files they point to but this results in an error tar: Error exit delayed from previous errors and creates a file called "zh" My files are on a RHEL server.
|
How can I compress a directory, and convert soft to hard links? I would like to compress a directory. tar -cvzf mydir.tar.gz mydir but this retains symlinks so that it is not portable to a new system. How can I convert symlinks? I have tried tar -cvzfh since man tar says -h, --dereference don’t dump symlinks; dump the files they point to but this results in an error tar: Error exit delayed from previous errors and creates a file called "zh" My files are on a RHEL server.
|
bash, compression, symlink, tar, rhel
| 0
| 1,183
| 1
|
https://stackoverflow.com/questions/12714807/how-can-i-compress-a-directory-and-convert-soft-to-hard-links
|
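A sketch of the likely fix for the tar command above: in a clustered option string, f expects the archive name immediately after it, so letters placed after f get swallowed into the file name, which is likely where the stray "zh" file comes from. Putting h before f, or spelling out --dereference, avoids that.

```bash
# -h (--dereference) must come before -f, because -f takes the archive name
tar -chzvf mydir.tar.gz mydir

# equivalent long form
tar --dereference -czvf mydir.tar.gz mydir
```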
4,380,617
|
Seeing which jars classes are coming from
|
I am using a script to run a Java process that uses a TON of jars in RHEL. Is there an easy command to see which jars classes are being loaded from? For instance: com.asdf.asdf.DummyClass==> /path/to/arf.jar or something?
|
Seeing which jars classes are coming from I am using a script to run a Java process that uses a TON of jars in RHEL. Is there an easy command to see which jars classes are being loaded from? For instance: com.asdf.asdf.DummyClass==> /path/to/arf.jar or something?
|
java, linux, unix, jar, rhel
| 0
| 96
| 2
|
https://stackoverflow.com/questions/4380617/seeing-which-jars-classes-are-coming-from
|
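One common way to answer the question above from the shell is the JVM's class-loading trace; a sketch with a placeholder main class and classpath (on JDK 8 the output looks like [Loaded com.asdf.asdf.DummyClass from file:/path/to/arf.jar]):

```bash
# Print every class as it is loaded together with the jar it came from,
# then filter for the class of interest
java -verbose:class -cp "lib/*" com.example.Main | grep DummyClass
```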
3,916,603
|
will yum break if I use rpms from the ius community project?
|
I followed this tutorial: [URL] because I wanted to install python2.6 on a CentOS 5.5 machine without breaking yum, and I was able to install python2.6 successfully. My question is: after completing the above commands, the next time I try installing packages, will yum automatically use the one from ius if the package is conflicting? And if yes, will it break something else? I'm just worried that the next time someone runs yum it'll download a conflicting package from ius and break.
|
will yum break if I use rpms from the ius community project? I followed this tutorial: [URL] because I wanted to install python2.6 on a CentOS 5.5 machine without breaking yum, and I was able to install python2.6 successfully. My question is: after completing the above commands, the next time I try installing packages, will yum automatically use the one from ius if the package is conflicting? And if yes, will it break something else? I'm just worried that the next time someone runs yum it'll download a conflicting package from ius and break.
|
python, linux, rpm, centos5, rhel
| 0
| 740
| 2
|
https://stackoverflow.com/questions/3916603/will-yum-break-if-i-use-rpms-from-the-ius-community-project
|
69,294,593
|
Linux Bash Script Variable as command
|
I have to create a script on RHEL and was wondering if I am able to save the output of a command as a variable. For example: 4.2.2 Ensure logging is configured (Not Scored) : = OUTPUT (which comes from the command sudo cat /etc/rsyslog.conf ) This is what I have now. echo " ### 4.2.2 Ensure logging is configured (Not Scored) : ### " echo " " echo "Running command: " echo "#sudo cat /etc/rsyslog.conf" echo " " sudo cat /etc/rsyslog.conf Thank you!
|
Linux Bash Script Variable as command I have to create a script on RHEL and was wondering if I am able to save the output of a command as a variable. For example: 4.2.2 Ensure logging is configured (Not Scored) : = OUTPUT (which comes from the command sudo cat /etc/rsyslog.conf ) This is what I have now. echo " ### 4.2.2 Ensure logging is configured (Not Scored) : ### " echo " " echo "Running command: " echo "#sudo cat /etc/rsyslog.conf" echo " " sudo cat /etc/rsyslog.conf Thank you!
|
linux, bash, rhel
| 0
| 117
| 2
|
https://stackoverflow.com/questions/69294593/linux-bash-script-variable-as-command
|
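A minimal sketch for the script above: command substitution is the usual way to save a command's output in a variable.

```bash
#!/bin/bash
echo " ### 4.2.2 Ensure logging is configured (Not Scored) : ### "
echo "Running command: sudo cat /etc/rsyslog.conf"

# $( ... ) runs the command and stores everything it printed in the variable
OUTPUT=$(sudo cat /etc/rsyslog.conf)

echo "$OUTPUT"    # quote the expansion to preserve the newlines
```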
63,243,745
|
HDFS + cant copy file from HDFS to local folder
|
we are trying to copy the file from /hdp/apps/2.6.5.0-292/hive/hive.tar.gz to local folder /var/tmp as we can see we get hdfs.DFSClient: Could not obtain and No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException and finally the file not copied to local folder - /var/tmp we also try to copy other files under /hdp/apps/2.6.5.0-292 to local folder - /var/tmp but we get the same errors any idea what could be the reason for this issues? NOTE - we chacked the HDFS helth check from ambari and HDFS is fine hdfs dfs -copyToLocal /hdp/apps/2.6.5.0-292/hive/hive.tar.gz /var/tmp 20/08/04 09:07:12 INFO hdfs.DFSClient: No node available for BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz 20/08/04 09:07:12 INFO hdfs.DFSClient: Could not obtain BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 from any node: java.io.IOException: No live nodes contain block BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry... 20/08/04 09:07:12 WARN hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 916.7101213444472 msec. 20/08/04 09:07:12 INFO hdfs.DFSClient: No node available for BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz 20/08/04 09:07:12 INFO hdfs.DFSClient: Could not obtain BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 from any node: java.io.IOException: No live nodes contain block BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry... 20/08/04 09:07:12 WARN hdfs.DFSClient: DFS chooseDataNode: got # 2 IOException, will wait for 8364.841990287568 msec. 20/08/04 09:07:21 INFO hdfs.DFSClient: No node available for BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz 20/08/04 09:07:21 INFO hdfs.DFSClient: Could not obtain BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 from any node: java.io.IOException: No live nodes contain block BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry... 20/08/04 09:07:21 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 14554.977191829808 msec. 20/08/04 09:07:35 WARN hdfs.DFSClient: Could not obtain block: BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException 20/08/04 09:07:35 WARN hdfs.DFSClient: Could not obtain block: BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz No live nodes contain current block Block locations: Dead nodes: . 
Throwing a BlockMissingException 20/08/04 09:07:35 WARN hdfs.DFSClient: DFS Read org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:995) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:638) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:945) at java.io.DataInputStream.read(DataInputStream.java:100) at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:88) at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:62) at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:122) at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:467) at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:392) at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:329) at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:264) at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:249) at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317) at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289) at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:244) at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271) at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255) at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:221) at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119) at org.apache.hadoop.fs.shell.Command.run(Command.java:165) at org.apache.hadoop.fs.FsShell.run(FsShell.java:297) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.fs.FsShell.main(FsShell.java:356) copyToLocal: Could not obtain block: BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz
|
HDFS + cant copy file from HDFS to local folder we are trying to copy the file from /hdp/apps/2.6.5.0-292/hive/hive.tar.gz to local folder /var/tmp as we can see we get hdfs.DFSClient: Could not obtain and No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException and finally the file not copied to local folder - /var/tmp we also try to copy other files under /hdp/apps/2.6.5.0-292 to local folder - /var/tmp but we get the same errors any idea what could be the reason for this issues? NOTE - we chacked the HDFS helth check from ambari and HDFS is fine hdfs dfs -copyToLocal /hdp/apps/2.6.5.0-292/hive/hive.tar.gz /var/tmp 20/08/04 09:07:12 INFO hdfs.DFSClient: No node available for BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz 20/08/04 09:07:12 INFO hdfs.DFSClient: Could not obtain BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 from any node: java.io.IOException: No live nodes contain block BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry... 20/08/04 09:07:12 WARN hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 916.7101213444472 msec. 20/08/04 09:07:12 INFO hdfs.DFSClient: No node available for BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz 20/08/04 09:07:12 INFO hdfs.DFSClient: Could not obtain BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 from any node: java.io.IOException: No live nodes contain block BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry... 20/08/04 09:07:12 WARN hdfs.DFSClient: DFS chooseDataNode: got # 2 IOException, will wait for 8364.841990287568 msec. 20/08/04 09:07:21 INFO hdfs.DFSClient: No node available for BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz 20/08/04 09:07:21 INFO hdfs.DFSClient: Could not obtain BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 from any node: java.io.IOException: No live nodes contain block BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry... 20/08/04 09:07:21 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 14554.977191829808 msec. 20/08/04 09:07:35 WARN hdfs.DFSClient: Could not obtain block: BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException 20/08/04 09:07:35 WARN hdfs.DFSClient: Could not obtain block: BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz No live nodes contain current block Block locations: Dead nodes: . 
Throwing a BlockMissingException 20/08/04 09:07:35 WARN hdfs.DFSClient: DFS Read org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:995) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:638) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:888) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:945) at java.io.DataInputStream.read(DataInputStream.java:100) at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:88) at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:62) at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:122) at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:467) at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:392) at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:329) at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:264) at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:249) at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317) at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289) at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:244) at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271) at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255) at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:221) at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119) at org.apache.hadoop.fs.shell.Command.run(Command.java:165) at org.apache.hadoop.fs.FsShell.run(FsShell.java:297) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.fs.FsShell.main(FsShell.java:356) copyToLocal: Could not obtain block: BP-551390946-23.1.22.254-1596451810664:blk_1073741831_1007 file=/hdp/apps/2.6.5.0-292/hive/hive.tar.gz
|
hadoop, hdfs, rhel, ambari, hdp
| 0
| 662
| 1
|
https://stackoverflow.com/questions/63243745/hdfs-cant-copy-file-from-hdfs-to-local-folder
|
61,788,927
|
systemctl start service does not work in SPEC file
|
I met a problem that the command "sudo systemctl start xxx.service" in my SPEC file does not work when upgrading my RPM package, following is my %post script in SPEC file, %post echo "---------------------------- post $1 -------------------------------" # refresh installation if [ $1 == 1 ]; then sudo echo "Installation finished." # upgrade installation elif [ $1 -gt 1 ]; then sudo echo "Starting service xxx.service..." sudo /usr/bin/systemctl enable xxx.service > /dev/null 2>&1 sudo /usr/bin/systemctl start xxx.service sleep 10 sudo echo "Finished." fi exit 0 I'm sure that the service file already exists in directory /usr/lib/systemd/system, and I can start it manually using the command "sudo systemctl start xxx.service". And I found that the "sleep 10" command does not work too. Very appreciated if there is any suggestion about this issue, thanks.
|
systemctl start service does not work in SPEC file I met a problem that the command "sudo systemctl start xxx.service" in my SPEC file does not work when upgrading my RPM package, following is my %post script in SPEC file, %post echo "---------------------------- post $1 -------------------------------" # refresh installation if [ $1 == 1 ]; then sudo echo "Installation finished." # upgrade installation elif [ $1 -gt 1 ]; then sudo echo "Starting service xxx.service..." sudo /usr/bin/systemctl enable xxx.service > /dev/null 2>&1 sudo /usr/bin/systemctl start xxx.service sleep 10 sudo echo "Finished." fi exit 0 I'm sure that the service file already exists in directory /usr/lib/systemd/system, and I can start it manually using the command "sudo systemctl start xxx.service". And I found that the "sleep 10" command does not work too. Very appreciated if there is any suggestion about this issue, thanks.
|
rhel, rpm-spec, systemctl
| 0
| 2,817
| 2
|
https://stackoverflow.com/questions/61788927/systemctl-start-service-does-not-work-in-spec-file
|
51,593,289
|
Overall total from du command
|
I am trying to find out all directories and the overall size starting with pattern int-* For this I am using the below command $sudo ls -ld int-* | grep ^d | wc -l 3339 $ sudo ls -ld int-* | grep ^d | du -sh 204G . Are my commands correct ? Any other command combination to gather the above information ?
|
Overall total from du command I am trying to find out all directories and the overall size starting with pattern int-* For this I am using the below command $sudo ls -ld int-* | grep ^d | wc -l 3339 $ sudo ls -ld int-* | grep ^d | du -sh 204G . Are my commands correct ? Any other command combination to gather the above information ?
|
grep, rhel, ls, du
| 0
| 3,061
| 2
|
https://stackoverflow.com/questions/51593289/overall-total-from-du-command
|
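A sketch of one way to get both numbers asked about above: du does not read directory names from a pipe, so feeding it ls output has no effect; pass the directories as arguments instead (the trailing slash limits the glob to directories).

```bash
# How many int-* directories there are
ls -d int-*/ | wc -l

# Their combined size: -s summarises each argument, -c adds a grand total,
# -h prints it human readable; the last line is the overall figure
du -shc int-*/ | tail -n 1
```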
45,269,102
|
Working with lower version of GLIBC: version GLIBC_2.11 not found (required by g++)
|
I've installed GCC 7.1 on my machine and tried to use g++ on it, but it didn't work, saying this: g++: /lib64/libc.so.6: version GLIBC_2.11 not found (required by g++) So then I did these: $ strings /lib64/lib.so.6 | grep GLIB GLIBC_2.2.5 GLIBC_2.2.6 GLIBC_2.3 GLIBC_2.3.2 GLIBC_2.3.3 GLIBC_2.3.4 GLIBC_2.4 GLIBC_2.5 GLIBC_PRIVATE $ strings which g++ | grep GLIB GLIBC_2.3 GLIBC_2.11 GLIBC_2.2.5 Two things can be noted here: The string GLIBC_2.11 isn't common to both of these outputs. However, GLIBC_2.3 is common to both. Questions: 1. What exactly do these strings mean? Why are there more than one string in both of them? What do they tell us? 2. My guess is that the absence of GLIBC_2.11 in libc explains why g++ doesn't work, as g++ requires it (as the error says itself). However, I'm confused what does the presence of GLIBC_2.3 in both actually mean? Does it mean that g++ can be instructed to use this instead of GLIBC_2.11 ? If so, how exactly? What is the command?
|
Working with lower version of GLIBC: version GLIBC_2.11 not found (required by g++) I've installed GCC 7.1 on my machine and tried to use g++ on it, but it didn't work, saying this: g++: /lib64/libc.so.6: version GLIBC_2.11 not found (required by g++) So then I did these: $ strings /lib64/lib.so.6 | grep GLIB GLIBC_2.2.5 GLIBC_2.2.6 GLIBC_2.3 GLIBC_2.3.2 GLIBC_2.3.3 GLIBC_2.3.4 GLIBC_2.4 GLIBC_2.5 GLIBC_PRIVATE $ strings which g++ | grep GLIB GLIBC_2.3 GLIBC_2.11 GLIBC_2.2.5 Two things can be noted here: The string GLIBC_2.11 isn't common to both of these outputs. However, GLIBC_2.3 is common to both. Questions: 1. What exactly do these strings mean? Why are there more than one string in both of them? What do they tell us? 2. My guess is that the absence of GLIBC_2.11 in libc explains why g++ doesn't work, as g++ requires it (as the error says itself). However, I'm confused what does the presence of GLIBC_2.3 in both actually mean? Does it mean that g++ can be instructed to use this instead of GLIBC_2.11 ? If so, how exactly? What is the command?
|
linux, gcc, g++, glibc, rhel
| 0
| 2,511
| 1
|
https://stackoverflow.com/questions/45269102/working-with-lower-version-of-glibc-version-glibc-2-11-not-found-required-by
|
44,277,936
|
distro 'rhel7.2' does not exist in our dictionary
|
While installing a kvm via virt-install I have used following attribute os_variant=rhel7.2 . While installing I am getting following error : distro 'rhel7.2' does not exist in our dictionary When I do uname -r I am getting output as 3.10.0-327.el7.x86_64 It is a RHEL KVM host. Running osinfo-query os|grep 'Red Hat Enterprise Linux 7.2' returns following: rhel7.1 | Red Hat Enterprise Linux 7.2 | 7.2 | [URL] What can be solution to this problem?
|
distro 'rhel7.2' does not exist in our dictionary While installing a kvm via virt-install I have used following attribute os_variant=rhel7.2 . While installing I am getting following error : distro 'rhel7.2' does not exist in our dictionary When I do uname -r I am getting output as 3.10.0-327.el7.x86_64 It is a RHEL KVM host. Running osinfo-query os|grep 'Red Hat Enterprise Linux 7.2' returns following: rhel7.1 | Red Hat Enterprise Linux 7.2 | 7.2 | [URL] What can be solution to this problem?
|
virtual-machine, rhel, kvm, libvirt
| 0
| 1,888
| 2
|
https://stackoverflow.com/questions/44277936/distro-rhel7-2-does-not-exist-in-our-dictionary
|
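A sketch based on the question's own osinfo-query output: on that host the RHEL 7.2 entry is registered under the short id rhel7.1, so passing a short id the local database actually knows lets virt-install proceed. The guest name, sizes, and ISO path below are placeholders.

```bash
# See which short ids the local libosinfo database knows
osinfo-query os | grep -i 'Red Hat Enterprise Linux 7'

# Use one of those ids (here rhel7.1, which maps to RHEL 7.2 on this host)
virt-install --name rhel72-guest --ram 4096 --vcpus 2 \
    --disk size=20 \
    --cdrom /path/to/rhel-server-7.2-x86_64-dvd.iso \
    --os-variant rhel7.1
```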
42,691,773
|
how to specify arguments to unix command in perl
|
I am trying to find the processes which are not running through perl. It works for some processes using following code but not for cgred service. foreach $critproc (@critarray) { #system("/usr/bin/pgrep $critproc"); $var1=/usr/bin/pgrep $critproc; print "$var1"; print "exit status: $?\n:$critproc\n"; if ($? != 0) { $probs="$probs $critproc,"; $proccrit=1; } } For cgred I have to specify /usr/bin/pgrep -f cgred to check whether any pid is associated with it or not. But when I specify -f in above code it gives exit status 0 ( $? ) to all the processes even if its not running. Can you anyone tell me how to pass arguments to unix command in Perl. Thanks
|
how to specify arguments to unix command in perl I am trying to find the processes which are not running through perl. It works for some processes using following code but not for cgred service. foreach $critproc (@critarray) { #system("/usr/bin/pgrep $critproc"); $var1=/usr/bin/pgrep $critproc; print "$var1"; print "exit status: $?\n:$critproc\n"; if ($? != 0) { $probs="$probs $critproc,"; $proccrit=1; } } For cgred I have to specify /usr/bin/pgrep -f cgred to check whether any pid is associated with it or not. But when I specify -f in above code it gives exit status 0 ( $? ) to all the processes even if its not running. Can you anyone tell me how to pass arguments to unix command in Perl. Thanks
|
perl, unix, rhel
| 0
| 59
| 1
|
https://stackoverflow.com/questions/42691773/how-to-specify-arguments-to-unix-command-in-perl
|
20,799,032
|
Linux shell script logical error
|
I have got the following command to get the privileges list from mysql # mysql -u root -p -B -N -e"SHOW GRANTS FOR 'root'@localhost" where I want to replace 'root' with a Vuser variable and localhost with a VHost variable. I am not able to judge where the double quotes end and how to put in $Vuser and $Vhost. Could someone please guide me? Thanks
|
Linux shell script logical error I have got the following command to get the privileges list from mysql # mysql -u root -p -B -N -e"SHOW GRANTS FOR 'root'@localhost" where I want to replace 'root' with a Vuser variable and localhost with a VHost variable. I am not able to judge where the double quotes end and how to put in $Vuser and $Vhost. Could someone please guide me? Thanks
|
linux, unix, rhel
| 0
| 76
| 2
|
https://stackoverflow.com/questions/20799032/linux-shell-script-logical-error
|
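A minimal sketch for the command above: put the whole -e argument in double quotes so that $Vuser and $Vhost expand, while the single quotes inside are passed through to MySQL untouched; the example values are placeholders.

```bash
#!/bin/bash
Vuser="root"         # example values
Vhost="localhost"

# Double quotes let the shell expand the variables; the inner single quotes
# reach MySQL unchanged.
mysql -u root -p -B -N -e "SHOW GRANTS FOR '$Vuser'@'$Vhost'"
```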
12,259,330
|
CentOS 6.3 compilation and ksig
|
While compiling CentOS 6.3 kernel, make fails with: CC crypto/signature/dsa.o crypto/signature/ksign-publickey.c:2:17: error: key.h: No such file or directory crypto/signature/ksign-publickey.c: In function 'ksign_init': crypto/signature/ksign-publickey.c:10: error: 'ksign_def_public_key' undeclared (first use in this function) crypto/signature/ksign-publickey.c:10: error: (Each undeclared identifier is reported only once crypto/signature/ksign-publickey.c:10: error: for each function it appears in.) crypto/signature/ksign-publickey.c:11: error: 'ksign_def_public_key_size' undeclared (first use in this function) make[2]: *** [crypto/signature/ksign-publickey.o] Error 1 According to this it's related to Linux module signing (a.k.a. KSIG), which was dropped in RHEL 6.1. I'm trying to see if the proposed solution is indeed correct or whether there's another solution to the problem. A reference showing that RHEL abandoned KSIG would be helpful.
|
CentOS 6.3 compilation and ksig While compiling CentOS 6.3 kernel, make fails with: CC crypto/signature/dsa.o crypto/signature/ksign-publickey.c:2:17: error: key.h: No such file or directory crypto/signature/ksign-publickey.c: In function 'ksign_init': crypto/signature/ksign-publickey.c:10: error: 'ksign_def_public_key' undeclared (first use in this function) crypto/signature/ksign-publickey.c:10: error: (Each undeclared identifier is reported only once crypto/signature/ksign-publickey.c:10: error: for each function it appears in.) crypto/signature/ksign-publickey.c:11: error: 'ksign_def_public_key_size' undeclared (first use in this function) make[2]: *** [crypto/signature/ksign-publickey.o] Error 1 According to this it's related to Linux module signing (a.k.a. KSIG), which was dropped in RHEL 6.1. I'm trying to see if the proposed solution is indeed correct or whether there's another solution to the problem. A reference showing that RHEL abandoned KSIG would be helpful.
|
linux, linux-kernel, centos, rhel
| 0
| 2,183
| 2
|
https://stackoverflow.com/questions/12259330/centos-6-3-compilation-and-ksig
|
10,899,616
|
Compiling tcpsplice on a 64-bit machine
|
I am trying to compile a small utility called tcpslice . It's the typical GNU C application. When I run ./configure , here is the output: checking build system type... Invalid configuration x86_64-pc-linux-gnuoldld': machine x86_64-pc' not recognized configure: error: /bin/sh ./config.sub x86_64-pc-linux-gnuoldld failed It appears to not support compilation as a 64-bit Linux application. So I have a few questions: Is it possible to set some flags to compile the application as 32-bit AND be able to run it on my 64-bit operating system? Is it possible to update the configure script to support 64-bit Linux? If so, will I be making some serious code changes in the .c files as well? I noticed a 64-bit RHEL6 machine on my network has this utility installed and running with an identical version number (1.2a3). Could I somehow download the source that was used to build it? I can get access the to RHN if necessary.
|
Compiling tcpsplice on a 64-bit machine I am trying to compile a small utility called tcpslice . It's the typical GNU C application. When I run ./configure , here is the output: checking build system type... Invalid configuration x86_64-pc-linux-gnuoldld': machine x86_64-pc' not recognized configure: error: /bin/sh ./config.sub x86_64-pc-linux-gnuoldld failed It appears to not support compilation as a 64-bit Linux application. So I have a few questions: Is it possible to set some flags to compile the application as 32-bit AND be able to run it on my 64-bit operating system? Is it possible to update the configure script to support 64-bit Linux? If so, will I be making some serious code changes in the .c files as well? I noticed a 64-bit RHEL6 machine on my network has this utility installed and running with an identical version number (1.2a3). Could I somehow download the source that was used to build it? I can get access the to RHN if necessary.
|
c, linux, rhel
| 0
| 1,168
| 3
|
https://stackoverflow.com/questions/10899616/compiling-tcpsplice-on-a-64-bit-machine
|
78,200,687
|
Tkinter menu does not appear
|
I'm learning Tkinter using a tutorial and the menu isn't appearing. I've found a few other examples of this happening, but they all appear to be Mac or simple typos. Here's my minimum example: from tkinter import * # Create the root window root = Tk() root.title("DebugExample") # Menu bar menu_bar = Menu(root) item = Menu(menu_bar) item.add_command(label='New') item.add_cascade(label='File', menu=item) root.config(menu=menu_bar) # Task name label lTaskName = Label(root, text = "Just some text to give content for the window") lTaskName.grid() # Run the main loop root.mainloop() And the result: I can see that there's an extra line below the title that's not there if I take out the menu code altogether so I thought perhaps it was a spacing thing, but I can't drag the line lower or otherwise expand it. I'm running Python 3.6.8 on RHEL 8.9 if that's helpful. Thank you!
|
Tkinter menu does not appear I'm learning Tkinter using a tutorial and the menu isn't appearing. I've found a few other examples of this happening, but they all appear to be Mac or simple typos. Here's my minimum example: from tkinter import * # Create the root window root = Tk() root.title("DebugExample") # Menu bar menu_bar = Menu(root) item = Menu(menu_bar) item.add_command(label='New') item.add_cascade(label='File', menu=item) root.config(menu=menu_bar) # Task name label lTaskName = Label(root, text = "Just some text to give content for the window") lTaskName.grid() # Run the main loop root.mainloop() And the result: I can see that there's an extra line below the title that's not there if I take out the menu code altogether so I thought perhaps it was a spacing thing, but I can't drag the line lower or otherwise expand it. I'm running Python 3.6.8 on RHEL 8.9 if that's helpful. Thank you!
|
tkinter, menu, rhel
| 0
| 235
| 2
|
https://stackoverflow.com/questions/78200687/tkinter-menu-does-not-appear
|
77,830,203
|
sed + append text at end of a specific line but before specific character
|
I want to add the text scsi_mod.scan=sync on the end of the line that start with GRUB_CMDLINE_LINUX and append the text at the end of the line but before the character " . In case scsi_mod.scan=sync already exist then sed should not append an additional scsi_mod.scan=sync . We did the following sed -i '/GRUB_CMDLINE_LINUX/s/$/ scsi_mod.scan=sync/' /etc/default/grub GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" GRUB_DEFAULT=saved GRUB_DISABLE_SUBMENU=true GRUB_TERMINAL_OUTPUT="console" GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=VG/LV_root rd.lvm.lv=VG/lv_swap ipv6.disable=1 rhgb quiet scsi_mod.scan=sync" scsi_mod.scan=sync GRUB_DISABLE_RECOVERY="true" but as you can see from the example above, we not succeeded to append the text before " . The expected results should be like this GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" GRUB_DEFAULT=saved GRUB_DISABLE_SUBMENU=true GRUB_TERMINAL_OUTPUT="console" GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=VG/LV_root rd.lvm.lv=VG/lv_swap ipv6.disable=1 rhgb quiet scsi_mod.scan=sync" GRUB_DISABLE_RECOVERY="true"
|
sed + append text at end of a specific line but before specific character I want to add the text scsi_mod.scan=sync on the end of the line that start with GRUB_CMDLINE_LINUX and append the text at the end of the line but before the character " . In case scsi_mod.scan=sync already exist then sed should not append an additional scsi_mod.scan=sync . We did the following sed -i '/GRUB_CMDLINE_LINUX/s/$/ scsi_mod.scan=sync/' /etc/default/grub GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" GRUB_DEFAULT=saved GRUB_DISABLE_SUBMENU=true GRUB_TERMINAL_OUTPUT="console" GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=VG/LV_root rd.lvm.lv=VG/lv_swap ipv6.disable=1 rhgb quiet scsi_mod.scan=sync" scsi_mod.scan=sync GRUB_DISABLE_RECOVERY="true" but as you can see from the example above, we not succeeded to append the text before " . The expected results should be like this GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" GRUB_DEFAULT=saved GRUB_DISABLE_SUBMENU=true GRUB_TERMINAL_OUTPUT="console" GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=VG/LV_root rd.lvm.lv=VG/lv_swap ipv6.disable=1 rhgb quiet scsi_mod.scan=sync" GRUB_DISABLE_RECOVERY="true"
|
linux, bash, sed, rhel
| 0
| 147
| 3
|
https://stackoverflow.com/questions/77830203/sed-append-text-at-end-of-a-specific-line-but-before-specific-character
|
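A sketch of one way to do what the question above asks: anchor the substitution on the closing double quote instead of the end of the line, and guard it so a line that already contains scsi_mod.scan=sync is left untouched.

```bash
# Only touch GRUB_CMDLINE_LINUX lines that do not already carry the flag,
# and insert the text just before the closing quote
sed -i '/^GRUB_CMDLINE_LINUX=/{ /scsi_mod\.scan=sync/! s/"$/ scsi_mod.scan=sync"/; }' /etc/default/grub

# Afterwards regenerate the grub config, e.g.
#   grub2-mkconfig -o /boot/grub2/grub.cfg   (the path differs on EFI systems)
```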
76,771,442
|
basename throws error when a wildcard is specified
|
I want to get the filename from a file path. It works fine when one file is specified, such as: fname=$(basename -- /tmp/file1) However, if provided as an expression such as fname=$(basename -- /tmp/fi*) it throws basename: extra operand ‘/tmp/file1’ . In above case I don't expect it to resolve and expand the expression but rather just return fi* in fname , is it possible to do it using basename ?
|
basename throws error when a wildcard is specified I want to get the filename from a file path. It works fine when one file is specified, such as: fname=$(basename -- /tmp/file1) However, if provided as an expression such as fname=$(basename -- /tmp/fi*) it throws basename: extra operand ‘/tmp/file1’ . In above case I don't expect it to resolve and expand the expression but rather just return fi* in fname , is it possible to do it using basename ?
|
bash, rhel, basename
| 0
| 646
| 3
|
https://stackoverflow.com/questions/76771442/basename-throws-error-when-a-wildcard-is-specified
|
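A minimal sketch for the basename question above: the error comes from the shell expanding the unquoted glob into several arguments before basename ever runs. Quoting the pattern (or using parameter expansion, which needs no external command) keeps fi* literal; path is just an illustrative variable name.

    path='/tmp/fi*'
    fname=$(basename -- "$path")   # -> fi*
    fname=${path##*/}              # same result, no subshell or external command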
68,838,265
|
why does rpm reinstall executes %postun after %post section
|
I have a sample rpm spec file like Name: helloworld Version: 2.0 Release: 2%{?dist} Summary: Simple Hello World rpm License: Internal Source0: helloworld-src.tar.bz2 %description %prep %setup -c -q -T -D -a 0 %build %install echo "Install command ..." %post echo "post command..." %postun echo "postun command..." %files %doc %changelog When I execute rpm -i helloworld.rpm , the output is post command... But when I execute rpm --reinstall helloworld , the output is post command... postun command... Whys is this so? I was expecting postun to be called before post would be called. Where can I find which scriplets will be called during rpm --reinstall ?
|
why does rpm reinstall executes %postun after %post section I have a sample rpm spec file like Name: helloworld Version: 2.0 Release: 2%{?dist} Summary: Simple Hello World rpm License: Internal Source0: helloworld-src.tar.bz2 %description %prep %setup -c -q -T -D -a 0 %build %install echo "Install command ..." %post echo "post command..." %postun echo "postun command..." %files %doc %changelog When I execute rpm -i helloworld.rpm , the output is post command... But when I execute rpm --reinstall helloworld , the output is post command... postun command... Why is this so? I was expecting postun to be called before post. Where can I find which scriptlets will be called during rpm --reinstall ?
|
rpm, rhel, rpm-spec
| 0
| 893
| 1
|
https://stackoverflow.com/questions/68838265/why-does-rpm-reinstall-executes-postun-after-post-section
|
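Some context for the %post/%postun question above: rpm appears to treat --reinstall like an upgrade of the same version, so the new package's %post runs first and the old package's %postun runs afterwards, which matches the output shown. Scriptlets receive the remaining instance count in $1, which is the usual way to tell a plain erase from an upgrade/reinstall; a hedged sketch:

    %post
    # $1 is 1 on first install, 2 (or more) on upgrade/reinstall
    if [ "$1" -eq 1 ]; then echo "fresh install"; else echo "upgrade or reinstall"; fi

    %postun
    # $1 is 0 on a full erase, 1 on upgrade/reinstall
    if [ "$1" -eq 0 ]; then echo "package removed"; else echo "old copy cleaned up after upgrade"; fi

The distribution packaging guidelines document the full scriptlet ordering for install, upgrade, and erase transactions.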
62,984,767
|
How to kill all processes which have '/var/' string in the description?
|
I am using lsof to find out all processes which have /var/ in the description. [root@localhost ~]# lsof | grep /var/ | less systemd 1 root 105u unix 0xffff9b25fae49680 0t0 21311 /var/run/cups/cups.sock type=STREAM systemd 1 root 134u unix 0xffff9b25f509cd80 0t0 21334 /var/run/libvirt/virtlogd-sock type=STREAM systemd 1 root 135u unix 0xffff9b25fae49f80 0t0 21315 /var/run/.heim_org.h5l.kcm-socket type=STREAM systemd 1 root 137u unix 0xffff9b25fae48d80 0t0 21318 /var/run/libvirt/virtlockd-sock type=STREAM polkitd 745 polkitd mem REG 8,3 10406312 2107847 /var/lib/sss/mc/initgroups polkitd 745 polkitd mem REG 8,3 6406312 2107846 /var/lib/sss/mc/group I am now trying to kill all these processes using the following command: kill -9 $(lsof | grep /var/) But getting error: -bash: kill: root: arguments must be process or job IDs -bash: kill: 8w: arguments must be process or job IDs -bash: kill: REG: arguments must be process or job IDs -bash: kill: 8,3: arguments must be process or job IDs
|
How to kill all processes which have '/var/' string in the description? I am using lsof to find out all processes which have /var/ in the description. [root@localhost ~]# lsof | grep /var/ | less systemd 1 root 105u unix 0xffff9b25fae49680 0t0 21311 /var/run/cups/cups.sock type=STREAM systemd 1 root 134u unix 0xffff9b25f509cd80 0t0 21334 /var/run/libvirt/virtlogd-sock type=STREAM systemd 1 root 135u unix 0xffff9b25fae49f80 0t0 21315 /var/run/.heim_org.h5l.kcm-socket type=STREAM systemd 1 root 137u unix 0xffff9b25fae48d80 0t0 21318 /var/run/libvirt/virtlockd-sock type=STREAM polkitd 745 polkitd mem REG 8,3 10406312 2107847 /var/lib/sss/mc/initgroups polkitd 745 polkitd mem REG 8,3 6406312 2107846 /var/lib/sss/mc/group I am now trying to kill all these processes using the following command: kill -9 $(lsof | grep /var/) But getting error: -bash: kill: root: arguments must be process or job IDs -bash: kill: 8w: arguments must be process or job IDs -bash: kill: REG: arguments must be process or job IDs -bash: kill: 8,3: arguments must be process or job IDs
|
bash, rhel, lsof
| 0
| 1,086
| 1
|
https://stackoverflow.com/questions/62984767/how-to-kill-all-processes-which-have-var-string-in-the-description
|
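A hedged sketch for the lsof question above: kill needs bare PIDs, not whole lsof rows, so the pipeline has to keep only column 2. Review the list before killing anything, because the sample output includes PID 1 (systemd), which must not be killed.

    pids=$(lsof 2>/dev/null | grep '/var/' | awk '{print $2}' | sort -un)
    echo "$pids"        # inspect first; drop PID 1 and anything else essential
    # kill -9 $pids     # only after pruning the list; prefer plain kill (SIGTERM) over -9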
58,761,482
|
setting lower_case_table_names=1 in mysql installed on RHEL without reinstalling it
|
Can I declare lower_case_table_names=1 on my MySQL without reinstalling it? I have a web application that I want to deploy on a RHEL-based server with MySQL (8.0.18) installed on it. The problem is that I have tested my web app's sanity on Windows with MySQL installed and didn't face any problem. But when I run the same on RHEL, MySQL gives me an error while executing queries, like ' table USER_MASTER doesn't exist ', while the table exists in my database but in lowercase, as 'user_master'. I have done some searching and found that on UNIX case-sensitivity matters while on Windows it doesn't. So I tried setting lower_case_table_names=1 in my.cnf and thought this would do the trick, but soon found that this variable must be declared at MySQL server initialization time. I have gone through the following links: [URL] and [URL] and all of these say that I have to reinstall MySQL to make lower_case_table_names work. Below is the error I get if I use lower_case_table_names=1 directly. 2019-11-08T05:30:17.331505Z 1 [ERROR] [MY-011087] [Server] Different lower_case_table_names settings for server ('1') and data dictionary ('0'). I hope there is some other way to set that variable without reinstalling MySQL and without going through my code replacing table names with lowercase ones. Thanks.
|
setting lower_case_table_names=1 in mysql installed on RHEL without reinstalling it Can I declare lower_case_table_names=1 on my MySQL without reinstalling it? I have a web application that I want to deploy on a RHEL-based server with MySQL (8.0.18) installed on it. The problem is that I have tested my web app's sanity on Windows with MySQL installed and didn't face any problem. But when I run the same on RHEL, MySQL gives me an error while executing queries, like ' table USER_MASTER doesn't exist ', while the table exists in my database but in lowercase, as 'user_master'. I have done some searching and found that on UNIX case-sensitivity matters while on Windows it doesn't. So I tried setting lower_case_table_names=1 in my.cnf and thought this would do the trick, but soon found that this variable must be declared at MySQL server initialization time. I have gone through the following links: [URL] and [URL] and all of these say that I have to reinstall MySQL to make lower_case_table_names work. Below is the error I get if I use lower_case_table_names=1 directly. 2019-11-08T05:30:17.331505Z 1 [ERROR] [MY-011087] [Server] Different lower_case_table_names settings for server ('1') and data dictionary ('0'). I hope there is some other way to set that variable without reinstalling MySQL and without going through my code replacing table names with lowercase ones. Thanks.
|
mysql, rhel, mysql-8.0
| 0
| 1,517
| 1
|
https://stackoverflow.com/questions/58761482/setting-lower-case-table-names-1-in-mysql-installed-on-rhel-without-reinstalling
|
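For the lower_case_table_names question above, MySQL 8 refuses to change this setting against an existing data dictionary, so the usual workaround is to re-initialize the data directory rather than reinstall the packages. A heavily hedged outline, assuming the default RHEL paths /etc/my.cnf and /var/lib/mysql and that a full logical backup exists before anything is touched:

    mysqldump -u root -p --all-databases --routines --events > /root/all.sql
    systemctl stop mysqld
    # add under [mysqld] in /etc/my.cnf:  lower_case_table_names=1
    mv /var/lib/mysql /var/lib/mysql.bak     # keep the old datadir until the restore is verified
    mkdir /var/lib/mysql && chown mysql:mysql /var/lib/mysql
    mysqld --initialize --user=mysql --lower-case-table-names=1   # temporary root password lands in the error log
    systemctl start mysqld
    mysql -u root -p < /root/all.sql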
54,130,289
|
RHEL RPM spec post installation %post not sourcing shell script
|
I am trying to source a shell script in the root environment (as I am doing the installation as root) after successful installation of an rpm on a RHEL 7.4 box. The rpm spec %post section is as follows: %post source /etc/profile.d/env.sh The script env.sh is available after installation at the mentioned path and is used to set the PYTHONPATH environment variable like this: pypath="/opt/lib" if [ "$(echo $PYTHONPATH | grep $pypath)" == "" ]; then export PYTHONPATH="$PYTHONPATH:$pypath" fi However, the variable is not set after a successful rpm installation, even if I change the " source " command to " . " . When I source the script env.sh manually, the variable is set
|
RHEL RPM spec post installation %post not sourcing shell script I am trying to source a shell script in the root environment (as I am doing the installation as root) after successful installation of an rpm on a RHEL 7.4 box. The rpm spec %post section is as follows: %post source /etc/profile.d/env.sh The script env.sh is available after installation at the mentioned path and is used to set the PYTHONPATH environment variable like this: pypath="/opt/lib" if [ "$(echo $PYTHONPATH | grep $pypath)" == "" ]; then export PYTHONPATH="$PYTHONPATH:$pypath" fi However, the variable is not set after a successful rpm installation, even if I change the " source " command to " . " . When I source the script env.sh manually, the variable is set
|
shell, rhel, rpm-spec
| 0
| 1,268
| 1
|
https://stackoverflow.com/questions/54130289/rhel-rpm-spec-post-installation-post-not-sourcing-shell-script
|
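A small demonstration relevant to the %post question above: rpm runs %post in its own child shell, and a child shell can never export variables back into the parent process, so sourcing env.sh there cannot change the root shell that ran rpm; the profile.d script only takes effect at the next login. The snippet below shows the same parent/child behaviour in plain bash.

    bash -c 'source /etc/profile.d/env.sh; echo "inside the child shell: PYTHONPATH=$PYTHONPATH"'
    echo "back in the invoking shell: PYTHONPATH=${PYTHONPATH:-<unset>}"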
52,636,149
|
MATLAB loadlibrary error: Undefined symbol: _intel_fast_memmove
|
I am trying to load a 3rd-party library ( .so file) into MATLAB under RHEL. I am getting an error: undefined:symbol: _intel_fast_memmove. From Symbol lookup error: _FileName_: undefined symbol: _intel_fast_memmove it seems like I need to add a sub-directory of /opt/intel to my LD_LIBRARY_PATH environment variable, but I don't have an /opt/intel directory. UPDATE: MATLAB said that the problem came when trying to load libifcoremt.so.5 . I have since installed Intel's Redistributable Libraries and all that did was call Intel's version of libifcoremt.so.5 but still looked for _intel_fast_memmove (but still didn't find it). So now I'm thinking that there must be some other "definitions" file somewhere that I'm missing.
|
MATLAB loadlibrary error: Undefined symbol: _intel_fast_memmove I am trying to load a 3rd-party library ( .so file) into MATLAB under RHEL. I am getting an error: undefined:symbol: _intel_fast_memmove. From Symbol lookup error: _FileName_: undefined symbol: _intel_fast_memmove it seems like I need to add a sub-directory of /opt/intel to my LD_LIBRARY_PATH environment variable, but I don't have an /opt/intel directory. UPDATE: MATLAB said that the problem came when trying to load libifcoremt.so.5 . I have since installed Intel's Redistributable Libraries and all that did was call Intel's version of libifcoremt.so.5 but still looked for _intel_fast_memmove (but still didn't find it). So now I'm thinking that there must be some other "definitions" file somewhere that I'm missing.
|
matlab, intel, rhel, loadlibrary
| 0
| 378
| 1
|
https://stackoverflow.com/questions/52636149/matlab-loadlibrary-error-undefined-symbol-intel-fast-memmove
|
52,190,945
|
Bash checking if file exists and then check for a substring in the file
|
I want to check if file exists and then check for a substring in the file if [ -f /etc/abc.conf ]; then if [ grep 'abc.conf' -e 'host.com' ] test = 'PASS' else test = 'FAIL' fi else echo "File doesnot exist" fi echo $test Please let me know if there is a better way to do the same
|
Bash checking if file exists and then check for a substring in the file I want to check if file exists and then check for a substring in the file if [ -f /etc/abc.conf ]; then if [ grep 'abc.conf' -e 'host.com' ] test = 'PASS' else test = 'FAIL' fi else echo "File doesnot exist" fi echo $test Please let me know if there is a better way to do the same
|
linux, bash, sed, grep, rhel
| 0
| 1,039
| 2
|
https://stackoverflow.com/questions/52190945/bash-checking-if-file-exists-and-then-check-for-a-substring-in-the-file
|
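A minimal sketch for the file/substring question above: grep can serve as the if condition directly (its exit status is the test), assignments must not have spaces around = , and a name other than test avoids shadowing the shell utility of the same name. test_result is an illustrative variable name.

    test_result='FAIL'
    if [ -f /etc/abc.conf ]; then
        if grep -q 'host.com' /etc/abc.conf; then
            test_result='PASS'
        fi
    else
        echo "File does not exist"
    fi
    echo "$test_result"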
48,797,852
|
Can rhel-7-server-extras-rpm repo be added to yum without using subscription-manager?
|
On RHEL7, is there a way to run yum-config-manager --enable rhel-7-server-extras-rpm without using the subscription-manager? (i.e., without executing a command like subscription-manager register --username=<RH UserName> --auto-attach )
|
Can rhel-7-server-extras-rpm repo be added to yum without using subscription-manager? On RHEL7, is there a way to run yum-config-manager --enable rhel-7-server-extras-rpm without using the subscription-manager? (i.e., without executing a command like subscription-manager register --username=<RH UserName> --auto-attach )
|
rpm, yum, rhel, rhel7
| 0
| 2,734
| 1
|
https://stackoverflow.com/questions/48797852/can-rhel-7-server-extras-rpm-repo-be-added-to-yum-without-using-subscription-man
|
38,632,858
|
Permission Denied on mkdir php with nginx
|
I am struggling with an issue on a LEMP stack. I cannot get my nginx user to create a directory via a PHP script. My stack is RHEL 7.2 NGINX MariaDB PHP I installed the stack successfully and used the following code for creating a directory in index.php <?php echo(exec("whoami")); mkdir("test",0777,true); $error=error_get_last(); echo $error['mssage']; ?> Output nginx mkdir(): Permission denied Nginx executes PHP via the nginx user. Applied 'chown -R nginx: nginx <working folder>' Applied 'chmod -R 0777 <working folder>' But the above script gives the same permission denied error. My plan is to install WordPress and import sites to this web server. But since permission is denied on the working folder of nginx , WordPress is not able to create new directories or move content from one folder to another.
|
Permission Denied on mkdir php with nginx I am struggling with an issue on a LEMP stack. I cannot get my nginx user to create a directory via a PHP script. My stack is RHEL 7.2 NGINX MariaDB PHP I installed the stack successfully and used the following code for creating a directory in index.php <?php echo(exec("whoami")); mkdir("test",0777,true); $error=error_get_last(); echo $error['mssage']; ?> Output nginx mkdir(): Permission denied Nginx executes PHP via the nginx user. Applied 'chown -R nginx: nginx <working folder>' Applied 'chmod -R 0777 <working folder>' But the above script gives the same permission denied error. My plan is to install WordPress and import sites to this web server. But since permission is denied on the working folder of nginx , WordPress is not able to create new directories or move content from one folder to another.
|
php, nginx, mariadb, rhel
| 0
| 8,205
| 2
|
https://stackoverflow.com/questions/38632858/permission-denied-on-mkdir-php-with-nginx
|
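One angle worth checking for the mkdir question above: on RHEL 7 with SELinux enforcing, 0777 file modes are not enough, because the web-server domain also needs a writable SELinux context on the directory (the stock policy usually runs nginx in the same httpd domain as Apache). A hedged sketch, with /srv/www standing in for the actual working folder:

    getenforce                                           # "Enforcing" suggests SELinux is involved
    sudo grep -i denied /var/log/audit/audit.log | tail  # confirm the denial
    sudo semanage fcontext -a -t httpd_sys_rw_content_t '/srv/www(/.*)?'
    sudo restorecon -Rv /srv/www

semanage comes from the policycoreutils-python package; chcon -R -t httpd_sys_rw_content_t /srv/www is the quick, non-persistent variant.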
38,138,809
|
tcpdump: Couldn't find user 'pcap'
|
I am using RHEL 5.4; tcpdump is already installed and there is a user pcap , but I am getting the following message. [root@localhost ~]# tcpdump tcpdump: Couldn't find user 'pcap' [root@localhost ~]# I have already tried searching the web, but found no solution.
|
tcpdump: Couldn't find user 'pcap' I am using RHEL 5.4; tcpdump is already installed and there is a user pcap , but I am getting the following message. [root@localhost ~]# tcpdump tcpdump: Couldn't find user 'pcap' [root@localhost ~]# I have already tried searching the web, but found no solution.
|
network-programming, ip, pcap, rhel, tcpdump
| 0
| 11,226
| 3
|
https://stackoverflow.com/questions/38138809/tcpdump-couldnt-find-user-pcap
|
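A few hedged checks for the tcpdump question above: tcpdump drops privileges to the pcap account through the normal NSS lookup (getpwnam), so the message can appear even when the user is listed in /etc/passwd if name resolution is broken.

    getent passwd pcap                 # this is effectively the lookup tcpdump performs
    grep '^passwd' /etc/nsswitch.conf  # should normally read: passwd: files ...
    id pcap || useradd -r -s /sbin/nologin pcap   # recreate the account if it is genuinely missing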
31,969,818
|
Get a specific Redhat version with regard to Kernel version
|
I have to install RHEL (Red Hat Enterprise Linux) with kernel version 2.6.26.8. I don't know which RHEL release I have to download to get this kernel version. Any suggestions?
|
Get a specific Redhat version with regard to Kernel version I have to install RHEL (Red Hat Enterprise Linux) with kernel version 2.6.26.8. I don't know which RHEL release I have to download to get this kernel version. Any suggestions?
|
linux, redhat, rhel
| 0
| 1,593
| 2
|
https://stackoverflow.com/questions/31969818/get-a-specific-redhat-version-with-regard-to-kernel-version
|
28,984,007
|
Application restart is needed or not after patching on database server
|
I have installed an application on one server and database on other. Now, I'm planning to patch my DB server(OS).So can anyone tell me if I would need to stop and start the application during this activity as I'm going to stop my DB during this. Thanks in advance!!!
|
Application restart is needed or not after patching on database server I have installed an application on one server and database on other. Now, I'm planning to patch my DB server(OS).So can anyone tell me if I would need to stop and start the application during this activity as I'm going to stop my DB during this. Thanks in advance!!!
|
unix, web-applications, rhel, patch
| 0
| 799
| 1
|
https://stackoverflow.com/questions/28984007/application-restart-is-needed-or-not-after-patching-on-database-server
|
23,950,807
|
Instantaneous synchronization using NTP
|
I am working with two PCs: PC1 (server) and PC2 (client) and I am trying to synchronize their time with NTP. PC1 is not configured to get synchronized with any external time source. I just want whatever time is there on the PC1, PC2 should be synchronized in accordance to that. I've done following changes-: In PC1 (server)-: #vi /etc/ntp.conf server 127.127.1.0 fudge 127.127.1.0 stratum 1 restrict 127.0.0.1 restrict 192.168.50.0 mask 255.255.255.0 nomodify notrap driftfile /var/lib/ntp/drift :wq! #vi /etc/ntp/step-tickers # List of servers used for initial synchronization. 127.127.1.0 :wq! #vi /etc/init/rc.conf ntpd_enable=\"YES\" :wq! In PC2 (client)-: #vi /etc/ntp.conf server 192.168.50.201 fudge 127.127.1.0 stratum 2 restrict 127.0.0.1 driftfile /var/lib/ntp/drift restrict 192.168.50.201 mask 255.255.255.255 notrap nomodify :wq! #vi /etc/ntp/step-tickers 192.168.50.201 :wq! #crontab -e 1 * * * * ntpdate -s -b -u 192.168.50.201 :wq! #vi /etc/init/rc.conf ntpd_enable=\"YES\" :wq! I've also changed the firewall settings at both sides by adding these lines-: #vi /etc/sysconfig/iptables -I INPUT -p udp --dport 123 -j ACCEPT -I OUTPUT -p udp --sport 123 -j ACCEPT :wq! However, I am not able to synchronize PC2 date with PC1 date i.e. when I change date in PC1, changes are not immediately reflected in PC2. I am using RHEL 6.2 . Can anyone tell me where I am going wrong??
|
Instantaneous synchronization using NTP I am working with two PCs: PC1 (server) and PC2 (client) and I am trying to synchronize their time with NTP. PC1 is not configured to get synchronized with any external time source. I just want whatever time is there on the PC1, PC2 should be synchronized in accordance to that. I've done following changes-: In PC1 (server)-: #vi /etc/ntp.conf server 127.127.1.0 fudge 127.127.1.0 stratum 1 restrict 127.0.0.1 restrict 192.168.50.0 mask 255.255.255.0 nomodify notrap driftfile /var/lib/ntp/drift :wq! #vi /etc/ntp/step-tickers # List of servers used for initial synchronization. 127.127.1.0 :wq! #vi /etc/init/rc.conf ntpd_enable=\"YES\" :wq! In PC2 (client)-: #vi /etc/ntp.conf server 192.168.50.201 fudge 127.127.1.0 stratum 2 restrict 127.0.0.1 driftfile /var/lib/ntp/drift restrict 192.168.50.201 mask 255.255.255.255 notrap nomodify :wq! #vi /etc/ntp/step-tickers 192.168.50.201 :wq! #crontab -e 1 * * * * ntpdate -s -b -u 192.168.50.201 :wq! #vi /etc/init/rc.conf ntpd_enable=\"YES\" :wq! I've also changed the firewall settings at both sides by adding these lines-: #vi /etc/sysconfig/iptables -I INPUT -p udp --dport 123 -j ACCEPT -I OUTPUT -p udp --sport 123 -j ACCEPT :wq! However, I am not able to synchronize PC2 date with PC1 date i.e. when I change date in PC1, changes are not immediately reflected in PC2. I am using RHEL 6.2 . Can anyone tell me where I am going wrong??
|
linux, rhel, ntpd
| 0
| 1,197
| 2
|
https://stackoverflow.com/questions/23950807/instantaneous-synchronization-using-ntp
|
22,319,206
|
How to install Ruby on Red hat
|
I'm trying to install Ruby on Red Hat via an SSH connection, but it won't work. I can't use yum install ruby , because I don't have the needed repositories.
|
How to install Ruby on Red hat I'm trying to install Ruby on Red Hat via an SSH connection, but it won't work. I can't use yum install ruby , because I don't have the needed repositories.
|
ruby, linux, rhel
| 0
| 6,399
| 2
|
https://stackoverflow.com/questions/22319206/how-to-install-ruby-on-red-hat
|
20,906,403
|
Worklight Server 6.1 configuration-tool.sh error
|
I have installed Worklight Server on RHEL 6.4. The menu shortcut that was created for the Server Configuration Tool has the path: /opt/IBM/Worklight/shortcuts/configuration-tool.sh The actual script was installed to: /opt/IBM/Worklight/WorklightServer/install/shortcuts/configuration-tool.sh I changed the launcher to open the correct script but it silently fails to run. When running the script from the terminal, I get the following error: [root@lr60xm4 ~]# /opt/IBM/Worklight/WorklightServer/install/shortcuts/configuration-tool.sh Exception in thread "main" java.lang.NoClassDefFoundError: JavaVersion6 Caused by: java.lang.ClassNotFoundException: JavaVersion6 at java.net.URLClassLoader.findClass(URLClassLoader.java:434) at java.lang.ClassLoader.loadClass(ClassLoader.java:677) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:358) at java.lang.ClassLoader.loadClass(ClassLoader.java:643) Could not find the main class: JavaVersion6. Program will exit. Exception in thread "main" java.lang.NoClassDefFoundError: com.ibm.worklight.config.util.helper.JVMBitness Caused by: java.lang.ClassNotFoundException: com.ibm.worklight.config.util.helper.JVMBitness at java.net.URLClassLoader.findClass(URLClassLoader.java:434) at java.lang.ClassLoader.loadClass(ClassLoader.java:677) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:358) at java.lang.ClassLoader.loadClass(ClassLoader.java:643) Could not find the main class: com.ibm.worklight.config.util.helper.JVMBitness. Program will exit. /opt/IBM/Worklight/WorklightServer/install/shortcuts/configuration-tool.sh: fatal error: unsupported platform Is this expected behavior? Anything I can do to resolve? I know I can use Ant, but unfortunately I have almost no experience using it. Here's some system details: [root@lr60xm4 ~]# java -version java version "1.6.0" Java(TM) SE Runtime Environment (build pxz6460sr15-20131017_01(SR15)) IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux s390x-64 jvmxz6460sr15-20131016_170922 (JIT enabled, AOT enabled) J9VM - 20131016_170922 JIT - r9_20130920_46510ifx2 GC - GA24_Java6_SR15_20131016_1337_B170922) JCL - 20131015_01 [root@lr60xm4 ~]# uname -a; cat /etc/system* Linux lr60xm4.dal-ebit.ihost.com 2.6.32-358.el6.s390x #1 SMP Tue Jan 29 12:06:31 EST 2013 s390x s390x s390x GNU/Linux Red Hat Enterprise Linux Server release 6.4 (Santiago) cpe:/o:redhat:enterprise_linux:6server:ga:server Thanks in advance!
|
Worklight Server 6.1 configuration-tool.sh error I have installed Worklight Server on RHEL 6.4. The menu shortcut that was created for the Server Configuration Tool has the path: /opt/IBM/Worklight/shortcuts/configuration-tool.sh The actual script was installed to: /opt/IBM/Worklight/WorklightServer/install/shortcuts/configuration-tool.sh I changed the launcher to open the correct script but it silently fails to run. When running the script from the terminal, I get the following error: [root@lr60xm4 ~]# /opt/IBM/Worklight/WorklightServer/install/shortcuts/configuration-tool.sh Exception in thread "main" java.lang.NoClassDefFoundError: JavaVersion6 Caused by: java.lang.ClassNotFoundException: JavaVersion6 at java.net.URLClassLoader.findClass(URLClassLoader.java:434) at java.lang.ClassLoader.loadClass(ClassLoader.java:677) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:358) at java.lang.ClassLoader.loadClass(ClassLoader.java:643) Could not find the main class: JavaVersion6. Program will exit. Exception in thread "main" java.lang.NoClassDefFoundError: com.ibm.worklight.config.util.helper.JVMBitness Caused by: java.lang.ClassNotFoundException: com.ibm.worklight.config.util.helper.JVMBitness at java.net.URLClassLoader.findClass(URLClassLoader.java:434) at java.lang.ClassLoader.loadClass(ClassLoader.java:677) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:358) at java.lang.ClassLoader.loadClass(ClassLoader.java:643) Could not find the main class: com.ibm.worklight.config.util.helper.JVMBitness. Program will exit. /opt/IBM/Worklight/WorklightServer/install/shortcuts/configuration-tool.sh: fatal error: unsupported platform Is this expected behavior? Anything I can do to resolve? I know I can use Ant, but unfortunately I have almost no experience using it. Here's some system details: [root@lr60xm4 ~]# java -version java version "1.6.0" Java(TM) SE Runtime Environment (build pxz6460sr15-20131017_01(SR15)) IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux s390x-64 jvmxz6460sr15-20131016_170922 (JIT enabled, AOT enabled) J9VM - 20131016_170922 JIT - r9_20130920_46510ifx2 GC - GA24_Java6_SR15_20131016_1337_B170922) JCL - 20131015_01 [root@lr60xm4 ~]# uname -a; cat /etc/system* Linux lr60xm4.dal-ebit.ihost.com 2.6.32-358.el6.s390x #1 SMP Tue Jan 29 12:06:31 EST 2013 s390x s390x s390x GNU/Linux Red Hat Enterprise Linux Server release 6.4 (Santiago) cpe:/o:redhat:enterprise_linux:6server:ga:server Thanks in advance!
|
ibm-mobilefirst, rhel, worklight-server
| 0
| 267
| 1
|
https://stackoverflow.com/questions/20906403/worklight-server-6-1-configuration-tool-sh-error
|
20,188,035
|
NASM basic input-output program crashes
|
Following this thread, How do i read single character input from keyboard using nasm (assembly) under ubuntu? ,I'm trying to compile a program that echoes the input in NASM. I've made following files: my_load2.asm: %include "testio.inc" global _start section .text _start: mov eax, 0 call canonical_off call canonical_on testio.inc: termios: times 36 db 0 stdin: equ 0 ICANON: equ 1<<1 ECHO: equ 1<<3 canonical_off: call read_stdin_termios ; clear canonical bit in local mode flags push rax mov eax, ICANON not eax and [termios+12], eax pop rax call write_stdin_termios ret echo_off: call read_stdin_termios ; clear echo bit in local mode flags push rax mov eax, ECHO not eax and [termios+12], eax pop rax call write_stdin_termios ret canonical_on: call read_stdin_termios ; set canonical bit in local mode flags or dword [termios+12], ICANON call write_stdin_termios ret echo_on: call read_stdin_termios ; set echo bit in local mode flags or dword [termios+12], ECHO call write_stdin_termios ret read_stdin_termios: push rax push rbx push rcx push rdx mov eax, 36h mov ebx, stdin mov ecx, 5401h mov edx, termios int 80h pop rdx pop rcx pop rbx pop rax ret write_stdin_termios: push rax push rbx push rcx push rdx mov eax, 36h mov ebx, stdin mov ecx, 5402h mov edx, termios int 80h pop rdx pop rcx pop rbx pop rax ret Then I run: [root@localhost asm]# nasm -f elf64 my_load2.asm [root@localhost asm]# ld -m elfx86_64 my_load2.o -o my_load2 When I try to run it i get: [root@localhost asm]# ./my_load2 Segmentation fault Debugger says: (gdb) run Starting program: /root/asm/my_load2 Program received signal SIGSEGV, Segmentation fault. 0x00000000004000b1 in canonical_off () Can someone explain why is it crashing without on "import" step? Also, I am running RHEL in Virtualbox under Win7 64 bit. Can this cause problems with compilation?
|
NASM basic input-output program crashes Following this thread, How do i read single character input from keyboard using nasm (assembly) under ubuntu? ,I'm trying to compile a program that echoes the input in NASM. I've made following files: my_load2.asm: %include "testio.inc" global _start section .text _start: mov eax, 0 call canonical_off call canonical_on testio.inc: termios: times 36 db 0 stdin: equ 0 ICANON: equ 1<<1 ECHO: equ 1<<3 canonical_off: call read_stdin_termios ; clear canonical bit in local mode flags push rax mov eax, ICANON not eax and [termios+12], eax pop rax call write_stdin_termios ret echo_off: call read_stdin_termios ; clear echo bit in local mode flags push rax mov eax, ECHO not eax and [termios+12], eax pop rax call write_stdin_termios ret canonical_on: call read_stdin_termios ; set canonical bit in local mode flags or dword [termios+12], ICANON call write_stdin_termios ret echo_on: call read_stdin_termios ; set echo bit in local mode flags or dword [termios+12], ECHO call write_stdin_termios ret read_stdin_termios: push rax push rbx push rcx push rdx mov eax, 36h mov ebx, stdin mov ecx, 5401h mov edx, termios int 80h pop rdx pop rcx pop rbx pop rax ret write_stdin_termios: push rax push rbx push rcx push rdx mov eax, 36h mov ebx, stdin mov ecx, 5402h mov edx, termios int 80h pop rdx pop rcx pop rbx pop rax ret Then I run: [root@localhost asm]# nasm -f elf64 my_load2.asm [root@localhost asm]# ld -m elfx86_64 my_load2.o -o my_load2 When I try to run it i get: [root@localhost asm]# ./my_load2 Segmentation fault Debugger says: (gdb) run Starting program: /root/asm/my_load2 Program received signal SIGSEGV, Segmentation fault. 0x00000000004000b1 in canonical_off () Can someone explain why is it crashing without on "import" step? Also, I am running RHEL in Virtualbox under Win7 64 bit. Can this cause problems with compilation?
|
linux, assembly, nasm, rhel
| 0
| 733
| 1
|
https://stackoverflow.com/questions/20188035/nasm-basic-input-output-program-crashes
|
9,638,424
|
JBoss on cloud, any cheap way to get the enterprise version
|
Is it possible to get JBoss AS on cloud as part of the install. I don't want to maintain servers, just want to use JBoss AS (the fancy shmancy enterprise version). The options that I see are. -Use JBoss Community version on some VPS on even Amazon AWS RHEL image (not so cloudy) -Use OpenShfit -Buy all the ingredients and plug them into my wall. Any suggestions to easily/cheaply run a production ready Jboss AS instance would work. Not sure if this question belongs here. Mod, go ahead and suggest.
|
JBoss on cloud, any cheap way to get the enterprise version Is it possible to get JBoss AS on cloud as part of the install. I don't want to maintain servers, just want to use JBoss AS (the fancy shmancy enterprise version). The options that I see are. -Use JBoss Community version on some VPS on even Amazon AWS RHEL image (not so cloudy) -Use OpenShfit -Buy all the ingredients and plug them into my wall. Any suggestions to easily/cheaply run a production ready Jboss AS instance would work. Not sure if this question belongs here. Mod, go ahead and suggest.
|
amazon-web-services, jboss, openshift, rhel
| 0
| 813
| 3
|
https://stackoverflow.com/questions/9638424/jboss-on-cloud-any-cheap-way-to-get-the-enterprise-version
|
6,442,563
|
How do you install cx_Oracle for Python on RHEL?
|
I'm using Active Python since I don't want to be stuck with an old version of Python. I installed the instant client, and added the exports to my bash profile, but I'm getting this cryptic error: # apy setup.py install --no-compile --root=/tmp/tmpz0JuWASA/cx_Oracle-5.1/_pypminstroot running install running build running build_ext building 'cx_Oracle' extension gcc -pthread -fno-strict-aliasing -fPIC -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/lib/oracle/11.2/sdk/include -I/opt/ActivePython-2.7/include/python2.7 -c cx_Oracle.c -o build/temp.linux-x86_64-2.7-11g/cx_Oracle.o -DBUILD_VERSION=5.0.4 In file included from /opt/ActivePython-2.7/include/python2.7/Python.h:58, from cx_Oracle.c:6: /opt/ActivePython-2.7/include/python2.7/pyport.h:849:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." In file included from /usr/lib/oracle/11.2/sdk/include/oci.h:3029, from cx_Oracle.c:10: /usr/lib/oracle/11.2/sdk/include/ociap.h:10788: warning: function declaration isn’t a prototype /usr/lib/oracle/11.2/sdk/include/ociap.h:10794: warning: function declaration isn’t a prototype error: command 'gcc' failed with exit status 1
|
How do you install cx_Oracle for Python on RHEL? I'm using Active Python since I don't want to be stuck with an old version of Python. I installed the instant client, and added the exports to my bash profile, but I'm getting this cryptic error: # apy setup.py install --no-compile --root=/tmp/tmpz0JuWASA/cx_Oracle-5.1/_pypminstroot running install running build running build_ext building 'cx_Oracle' extension gcc -pthread -fno-strict-aliasing -fPIC -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/lib/oracle/11.2/sdk/include -I/opt/ActivePython-2.7/include/python2.7 -c cx_Oracle.c -o build/temp.linux-x86_64-2.7-11g/cx_Oracle.o -DBUILD_VERSION=5.0.4 In file included from /opt/ActivePython-2.7/include/python2.7/Python.h:58, from cx_Oracle.c:6: /opt/ActivePython-2.7/include/python2.7/pyport.h:849:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." In file included from /usr/lib/oracle/11.2/sdk/include/oci.h:3029, from cx_Oracle.c:10: /usr/lib/oracle/11.2/sdk/include/ociap.h:10788: warning: function declaration isn’t a prototype /usr/lib/oracle/11.2/sdk/include/ociap.h:10794: warning: function declaration isn’t a prototype error: command 'gcc' failed with exit status 1
|
python, oracle-database, cx-oracle, rhel
| 0
| 18,913
| 3
|
https://stackoverflow.com/questions/6442563/how-do-you-install-cx-oracle-for-python-on-rhel
|
4,647,212
|
Access X11 display
|
I'm using a CIS that automatically runs builds and tests. The tests work fine on Windows, but when run on RHEL it threw X11 Display variable not set. I made sure X11 is installed and I can get back the xclock by using Xmanager. The intention is not to push the display to a client; I only used Xmanager to verify that X11 is installed. When the display is not exported, the tests that use graphics fail, but when I set export DISPLAY=0.0 , it threw java.lang.NoClassDefFoundError: sun/awt/X11GraphicsEnvironment This source says the cNF exception is misleading. Here is the scrap of that content: Unfortunately, this error message is somewhat misleading. This message does not actually reflect a class loading problem. The error can be avoiding by setting the DISPLAY environment variable in the appropriate configuration file. This means the display is not exported properly, but the test does not report a missing X11 display. What am I missing? Suggestions are appreciated!
|
Access X11 display I'm using a CIS that automatically runs builds and tests. The tests work fine on Windows, but when run on RHEL it threw X11 Display variable not set. I made sure X11 is installed and I can get back the xclock by using Xmanager. The intention is not to push the display to a client; I only used Xmanager to verify that X11 is installed. When the display is not exported, the tests that use graphics fail, but when I set export DISPLAY=0.0 , it threw java.lang.NoClassDefFoundError: sun/awt/X11GraphicsEnvironment This source says the cNF exception is misleading. Here is the scrap of that content: Unfortunately, this error message is somewhat misleading. This message does not actually reflect a class loading problem. The error can be avoiding by setting the DISPLAY environment variable in the appropriate configuration file. This means the display is not exported properly, but the test does not report a missing X11 display. What am I missing? Suggestions are appreciated!
|
linux, x11, rhel
| 0
| 2,603
| 3
|
https://stackoverflow.com/questions/4647212/access-x11-display
|
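A hedged sketch for the DISPLAY question above: a display name uses the host:display.screen form, so DISPLAY=0.0 (no leading colon) is not a valid display, which is one reason AWT can fail while initialising X11GraphicsEnvironment. On a headless build box, a virtual framebuffer (packaged on RHEL as xorg-x11-server-Xvfb) or the JVM's headless mode are the usual alternatives; tests.jar below is just a placeholder.

    export DISPLAY=:0.0                      # note the leading colon
    # build host with no real X server:
    Xvfb :99 -screen 0 1280x1024x24 &
    export DISPLAY=:99
    # or, when the tests tolerate it:
    java -Djava.awt.headless=true -jar tests.jar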
77,971,079
|
Indirect reference to hashmap in bash without nameref
|
This is the same situation as a question I asked about a year ago , however I am now targeting RHEL7 and Bash 4.2.46. I am writing scripts to install packages from archive files, checking if the package is already installed first. I have the information for each package file in a config file like so: ### packages.config ### declare -A package_a=([name]="utility-blah" [ver]="2.4.1") declare -A package_b=([name]="tool-bleh" [ver]="1.3.9") # and so on and so forth My various install scripts source this config file to get package information. Each script contains an array of the packages to be installed. I would like to iterate over this array and install the package named. Ideally, the meat of this loop is in some function so I don't have to rewrite it in each script. In Ubuntu 18 with Bash 4.4, I was able to include this function in the config file like so: ### functions.config ### function install_package() { # dereference hashmap variable declare -n package="$1" if dpkg --list ${package[name]}; then echo "${package[name]} is installed" else dpkg -i $pathToPackages/${package[name]}_${package[ver]}.deb fi } This leaves only the loop for the actual script, which looks like this: ### install.sh ### source packages.config source functions.config declare -a packageList=("package_a" "package_b" "package_d") for package in ${packageList[@]}; do install_package $package done This worked perfectly for me on Bash 4.4. Unfortunately, the declare -n option was added in Bash 4.3, and as I am now targeting Bash 4.2, the nameref solution no longer works. Is there a way I can have this same behavior where the script will dereference the package variable? I've tried using indirection as this post suggests, but I don't think ${!package[name]} works the way I want it to. Is it possible to achieve the same functionality without using declare -n ?
|
Indirect reference to hashmap in bash without nameref This is the same situation as a question I asked about a year ago , however I am now targeting RHEL7 and Bash 4.2.46. I am writing scripts to install packages from archive files, checking if the package is already installed first. I have the information for each package file in a config file like so: ### packages.config ### declare -A package_a=([name]="utility-blah" [ver]="2.4.1") declare -A package_b=([name]="tool-bleh" [ver]="1.3.9") # and so on and so forth My various install scripts source this config file to get package information. Each script contains an array of the packages to be installed. I would like to iterate over this array and install the package named. Ideally, the meat of this loop is in some function so I don't have to rewrite it in each script. In Ubuntu 18 with Bash 4.4, I was able to include this function in the config file like so: ### functions.config ### function install_package() { # dereference hashmap variable declare -n package="$1" if dpkg --list ${package[name]}; then echo "${package[name]} is installed" else dpkg -i $pathToPackages/${package[name]}_${package[ver]}.deb fi } This leaves only the loop for the actual script, which looks like this: ### install.sh ### source packages.config source functions.config declare -a packageList=("package_a" "package_b" "package_d") for package in ${packageList[@]}; do install_package $package done This worked perfectly for me on Bash 4.4. Unfortunately, the declare -n option was added in Bash 4.3, and as I am now targeting Bash 4.2, the nameref solution no longer works. Is there a way I can have this same behavior where the script will dereference the package variable? I've tried using indirection as this post suggests, but I don't think ${!package[name]} works the way I want it to. Is it possible to achieve the same functionality without using declare -n ?
|
bash, rhel, rhel7
| 0
| 104
| 3
|
https://stackoverflow.com/questions/77971079/indirect-reference-to-hashmap-in-bash-without-nameref
|
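A minimal sketch for the Bash 4.2 question above: although declare -n is unavailable, ${!var} indirection still works when the variable holds a full subscript such as package_a[name], so each element can be dereferenced without eval. install_package below only echoes the values; the dpkg/rpm logic from the original function slots in where indicated.

    install_package() {
        local name_ref="${1}[name]" ver_ref="${1}[ver]"
        local name="${!name_ref}" ver="${!ver_ref}"
        echo "installing ${name} version ${ver}"
        # the original package-manager commands go here, using "$name" and "$ver"
    }

    for package in "${packageList[@]}"; do
        install_package "$package"
    done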
76,312,797
|
Ansible task is failing without any error
|
I have an Ansible task running on RHEL and CentOS machines. - name: Git version (Fedora) command: rpm -qi git args: warn: no register: fedora_git tags: - git But this task is getting failed without any error. FAILED! => {"changed": true, "cmd": ["rpm", "-qi", "git"], "delta": "0:00:00.066596", "end": "2023-05-23 01:28:16.302918", "msg": "non-zero return code", "rc": 1, "start": "2023-05-23 01:28:16.236322", "stderr": "", "stderr_lines": [], "stdout": "package git is not installed", "stdout_lines": ["package git is not installed"]} Tried with shell module, but getting the same error.
|
Ansible task is failing without any error I have an Ansible task running on RHEL and CentOS machines. - name: Git version (Fedora) command: rpm -qi git args: warn: no register: fedora_git tags: - git But this task is getting failed without any error. FAILED! => {"changed": true, "cmd": ["rpm", "-qi", "git"], "delta": "0:00:00.066596", "end": "2023-05-23 01:28:16.302918", "msg": "non-zero return code", "rc": 1, "start": "2023-05-23 01:28:16.236322", "stderr": "", "stderr_lines": [], "stdout": "package git is not installed", "stdout_lines": ["package git is not installed"]} Tried with shell module, but getting the same error.
|
ansible, centos, rpm, ansible-2.x, rhel
| 0
| 156
| 1
|
https://stackoverflow.com/questions/76312797/ansible-task-is-failing-without-any-error
|
74,487,206
|
Fetch a file from task in same Ansible playbook
|
How do I transfer a file I have created from a previous task in my Ansible playbook? Here is what I got so far: - name: Create Yum Report shell: | cd /tmp yum history info > $(hostname -s)_$(date "+%d-%m-%Y").txt register: after_pir - name: Transfer PIR fetch: src: /tmp/{{ after_pir }} dest: /tmp/ However, I receive this error message when I run my playbook. TASK [Transfer PIR] ************************************************************************************************************ failed: [x.x.x.x] (item=after_pir) => {"ansible_loop_var": "item", "changed": false, "item": "after_pir", "msg": "the remote file does not exist, not transferring, ignored"} I have tried to run different fetch, synchronzie and pull methods but I'm not sure what the issue is.
|
Fetch a file from task in same Ansible playbook How do I transfer a file I have created from a previous task in my Ansible playbook? Here is what I got so far: - name: Create Yum Report shell: | cd /tmp yum history info > $(hostname -s)_$(date "+%d-%m-%Y").txt register: after_pir - name: Transfer PIR fetch: src: /tmp/{{ after_pir }} dest: /tmp/ However, I receive this error message when I run my playbook. TASK [Transfer PIR] ************************************************************************************************************ failed: [x.x.x.x] (item=after_pir) => {"ansible_loop_var": "item", "changed": false, "item": "after_pir", "msg": "the remote file does not exist, not transferring, ignored"} I have tried to run different fetch, synchronzie and pull methods but I'm not sure what the issue is.
|
automation, ansible, centos, rhel
| 0
| 409
| 1
|
https://stackoverflow.com/questions/74487206/fetch-a-file-from-task-in-same-ansible-playbook
|
73,040,270
|
Code compiled on RHEL 8 using gcc not working on RHEL 6. Getting error `GLIBC_2.14' not found
|
We have an application which is built using gcc on RHEL 8 . When we run the executable for this application on RHEL 6 we get the error `GLIBC_2.14' not found , required by the application. What options can make an application built on RHEL 8 using gcc run on RHEL 6 ?
|
Code compiled on RHEL 8 using gcc not working on RHEL 6. Getting error `GLIBC_2.14' not found We have an application which is built using gcc on RHEL 8 . When we run the executable for this application on RHEL 6 we get the error `GLIBC_2.14' not found , required by the application. What options can make an application built on RHEL 8 using gcc run on RHEL 6 ?
|
gcc, glibc, rhel, libc
| 0
| 470
| 1
|
https://stackoverflow.com/questions/73040270/code-compiled-on-rhel-8-using-gcc-not-working-on-rhel-6-getting-error-glibc-2
|
72,182,709
|
Weird permissions podman docker-compose volume
|
I have specified docker-compose.yml file with some volumes to mount. Here is example: backend-move: container_name: backend-move environment: APP_ENV: prod image: backend-move:latest logging: options: max-size: 250m ports: - 8080:8080 tty: true volumes: - php_static_logos:/app/public/images/logos - ./volumes/nginx-php/robots.txt:/var/www/html/public/robots.txt - ./volumes/backend/mysql:/app/mysql - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf After I run podman-compose up -d and go to container through docker exec -it backend-move bash I have this crazy permissions (??????????) on mounted files: bash-4.4$ ls -la ls: cannot access 'welcome.conf': Permission denied total 28 drwxrwxrwx. 2 root root 114 Apr 21 12:29 . drwxrwxrwx. 5 root root 105 Apr 21 12:29 .. -rwxrwxrwx. 1 root root 400 Mar 21 17:33 README -rwxrwxrwx. 1 root root 2926 Mar 21 17:33 autoindex.conf -rwxrwxrwx. 1 root root 1517 Apr 21 12:29 php.conf -rwxrwxrwx. 1 root root 8712 Apr 21 12:29 ssl.conf -rwxrwxrwx. 1 root root 1252 Mar 21 17:27 userdir.conf -?????????? ? ? ? ? ? welcome.conf Any suggestions? [root@45 /]# podman-compose --version ['podman', '--version', ''] using podman version: 3.4.2 podman-composer version 1.0.3 podman --version podman version 3.4.2
|
Weird permissions podman docker-compose volume I have specified docker-compose.yml file with some volumes to mount. Here is example: backend-move: container_name: backend-move environment: APP_ENV: prod image: backend-move:latest logging: options: max-size: 250m ports: - 8080:8080 tty: true volumes: - php_static_logos:/app/public/images/logos - ./volumes/nginx-php/robots.txt:/var/www/html/public/robots.txt - ./volumes/backend/mysql:/app/mysql - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf After I run podman-compose up -d and go to container through docker exec -it backend-move bash I have this crazy permissions (??????????) on mounted files: bash-4.4$ ls -la ls: cannot access 'welcome.conf': Permission denied total 28 drwxrwxrwx. 2 root root 114 Apr 21 12:29 . drwxrwxrwx. 5 root root 105 Apr 21 12:29 .. -rwxrwxrwx. 1 root root 400 Mar 21 17:33 README -rwxrwxrwx. 1 root root 2926 Mar 21 17:33 autoindex.conf -rwxrwxrwx. 1 root root 1517 Apr 21 12:29 php.conf -rwxrwxrwx. 1 root root 8712 Apr 21 12:29 ssl.conf -rwxrwxrwx. 1 root root 1252 Mar 21 17:27 userdir.conf -?????????? ? ? ? ? ? welcome.conf Any suggestions? [root@45 /]# podman-compose --version ['podman', '--version', ''] using podman version: 3.4.2 podman-composer version 1.0.3 podman --version podman version 3.4.2
|
docker-compose, devops, rhel, podman, podman-compose
| 0
| 5,825
| 2
|
https://stackoverflow.com/questions/72182709/weird-permissions-podman-docker-compose-volume
|
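A hedged pointer for the podman question above: question-mark permissions on a bind mount are a common symptom of SELinux blocking the container from even stat-ing the host file. With podman the usual fix is to let it relabel the mount by appending :Z (private label) or :z (shared) to each volume entry, or to relabel the host directory once; ./volumes is the host path from the compose file and the run command is only illustrative.

    #   - ./volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf:Z   (compose volume syntax)
    podman run --rm -v "$PWD/volumes/backend/httpd/welcome.conf:/etc/httpd/conf.d/welcome.conf:Z" backend-move:latest true
    # or relabel on the host instead of per-mount:
    sudo chcon -Rt container_file_t ./volumes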
70,332,074
|
How can I set Dlog4j2.formatMsgNoLookups=true on RHEL6 and RHEL7
|
What is the command or steps to set Dlog4j2.formatMsgNoLookups=true across an entire server (vm) I found this command for applying to a specific var java -Dlog4j2.formatMsgNoLookups=true -jar myapp.jar but have been asked to deploy it across the entire OS not just to a single jar As its not a redhat OS variable, its part of Java, and I am not Java experienced I am not sure, it doesn't seem to be in any config file. On windows I understand it is a simple regedit, so does that mean it needs to go in bash profile as a env var or similar? EDIT the answer from elsewhere, incase someone else is looking is export _JAVA_TOOL_OPTIONS= -Dlog4j2.formatMsgNoLookups=true for tomcat its CATALINA_OPTS which may be found inside catalina.sh
|
How can I set Dlog4j2.formatMsgNoLookups=true on RHEL6 and RHEL7 What is the command or steps to set Dlog4j2.formatMsgNoLookups=true across an entire server (vm) I found this command for applying to a specific var java -Dlog4j2.formatMsgNoLookups=true -jar myapp.jar but have been asked to deploy it across the entire OS not just to a single jar As its not a redhat OS variable, its part of Java, and I am not Java experienced I am not sure, it doesn't seem to be in any config file. On windows I understand it is a simple regedit, so does that mean it needs to go in bash profile as a env var or similar? EDIT the answer from elsewhere, incase someone else is looking is export _JAVA_TOOL_OPTIONS= -Dlog4j2.formatMsgNoLookups=true for tomcat its CATALINA_OPTS which may be found inside catalina.sh
|
java, log4j, redhat, rhel
| 0
| 1,121
| 1
|
https://stackoverflow.com/questions/70332074/how-can-i-set-dlog4j2-formatmsgnolookups-true-on-rhel6-and-rhel7
|
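A hedged sketch for the log4j question above: there is no OS-wide Java setting as such, but the JVM honours the JAVA_TOOL_OPTIONS environment variable, so a profile.d drop-in covers everything started from a login shell; daemons need the variable in their own service definition because they never read profile.d. Note the flag only mitigates Log4j 2.10 and newer.

    echo 'export JAVA_TOOL_OPTIONS="-Dlog4j2.formatMsgNoLookups=true"' | sudo tee /etc/profile.d/log4j-mitigation.sh
    # systemd services (RHEL 7): add to the unit or a drop-in:
    #   Environment="JAVA_TOOL_OPTIONS=-Dlog4j2.formatMsgNoLookups=true"
    # Tomcat: extend CATALINA_OPTS in setenv.sh, as the question's edit notes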
69,088,543
|
RHEL 8.4 - Unable to install httpd
|
I'm currently trying to install a web solution on a RHEL 8 distribution, but I can't install apache2 (httpd). I searched on Google but didn't find anything, and I don't know RHEL; it's my first time using it. Thanks
|
RHEL 8.4 - Unable to install httpd I'm currently trying to install a web solution on a RHEL 8 distribution, but I can't install apache2 (httpd). I searched on Google but didn't find anything, and I don't know RHEL; it's my first time using it. Thanks
|
redhat, rhel
| 0
| 2,938
| 2
|
https://stackoverflow.com/questions/69088543/rhel-8-4-unable-to-install-httpd
|
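A hedged sketch for the RHEL 8 question above: the package is named httpd (not apache2) and lives in the AppStream repository, which is only reachable once the system has a subscription attached (cloud images with RHUI repos skip that step). <rh-user> is a placeholder.

    sudo subscription-manager register --username <rh-user> --auto-attach
    sudo dnf install httpd
    sudo systemctl enable --now httpd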
68,286,508
|
Getting unable to find 'distinguished_name' in config when generating CSR
|
I want to generate a CSR with a SAN portion and keep getting the following error: unable to find 'distinguished_name' in config As far as I can tell, the cnf is structured appropriately and being called as well. Noting the CN value doesn't match the SAN and is mandated by my organization. Here is the content of my cnf file: [me@server-5007749 ~]$ cat openssl.cnf [ req ] distinguished_name = req_distinguished_name req_extensions = req_ext [ req_distinguished_name ] C = CA 0.OU = SSL 1.OU = Device O = MyOrg CN = 43546323 [ req_ext ] subjectAltName = @alt_names [ alt_names ] DNS.1 = my_fqdn_here.ca Here is the content of my shell script containing the command creating the CSR: [me@server-5007749 ~]$ cat ssl.sh export OPENSSL_CONF=/home/me printenv OPENSSL_CONF openssl req -new -key /opt/rh/httpd24/root/etc/httpd/certs/private.key -out site_csr.csr Here is the output of my shell script: [me@server-5007749 ~]$ ./ssl.sh /home/me unable to find 'distinguished_name' in config problems making Certificate Request 140524933736336:error:0E06D06C:configuration file routines:NCONF_get_string:no value:conf_lib.c:324:group=req name=distinguished_name Edit #1 I tried with a few different OpenSSL versions thinking I might be better results: OpenSSL 1.0.2k-fips 26 Jan 2017 (RHEL default) OpenSSL 1.1.1f 31 Mar 2020 (Cygwin)
|
Getting unable to find 'distinguished_name' in config when generating CSR I want to generate a CSR with a SAN portion and keep getting the following error: unable to find 'distinguished_name' in config As far as I can tell, the cnf is structured appropriately and being called as well. Noting the CN value doesn't match the SAN and is mandated by my organization. Here is the content of my cnf file: [me@server-5007749 ~]$ cat openssl.cnf [ req ] distinguished_name = req_distinguished_name req_extensions = req_ext [ req_distinguished_name ] C = CA 0.OU = SSL 1.OU = Device O = MyOrg CN = 43546323 [ req_ext ] subjectAltName = @alt_names [ alt_names ] DNS.1 = my_fqdn_here.ca Here is the content of my shell script containing the command creating the CSR: [me@server-5007749 ~]$ cat ssl.sh export OPENSSL_CONF=/home/me printenv OPENSSL_CONF openssl req -new -key /opt/rh/httpd24/root/etc/httpd/certs/private.key -out site_csr.csr Here is the output of my shell script: [me@server-5007749 ~]$ ./ssl.sh /home/me unable to find 'distinguished_name' in config problems making Certificate Request 140524933736336:error:0E06D06C:configuration file routines:NCONF_get_string:no value:conf_lib.c:324:group=req name=distinguished_name Edit #1 I tried with a few different OpenSSL versions thinking I might be better results: OpenSSL 1.0.2k-fips 26 Jan 2017 (RHEL default) OpenSSL 1.1.1f 31 Mar 2020 (Cygwin)
|
apache, openssl, rhel, csr
| 0
| 2,167
| 1
|
https://stackoverflow.com/questions/68286508/getting-unable-to-find-distinguished-name-in-config-when-generating-csr
|
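A likely cause for the CSR question above: OPENSSL_CONF must point at the configuration file itself, and the script exports the directory /home/me, so openssl never finds the [req] section. A minimal sketch reusing the key path from the question:

    export OPENSSL_CONF=/home/me/openssl.cnf
    openssl req -new -key /opt/rh/httpd24/root/etc/httpd/certs/private.key -out site_csr.csr
    # or skip the variable and name the file explicitly:
    openssl req -new -config /home/me/openssl.cnf -key /opt/rh/httpd24/root/etc/httpd/certs/private.key -out site_csr.csr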
67,754,906
|
Unable to install createrepo.noarch
|
I created a new EC2 on AWS, using RHEL 8. I try to do sudo yum install createrepo.noarch But I get back this error: Updating Subscription Management repositories. Unable to read consumer identity This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Last metadata expiration check: 2:00:24 ago on Sat 29 May 2021 05:02:03 PM UTC. No match for argument: createrepo.noarch Error: Unable to find a match: createrepo.noarch Anyone know how I can get around this error? I thought about downloading one of them from [URL] , but no idea which one I'm supposed to download.
|
Unable to install createrepo.noarch I created a new EC2 on AWS, using RHEL 8. I try to do sudo yum install createrepo.noarch But I get back this error: Updating Subscription Management repositories. Unable to read consumer identity This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Last metadata expiration check: 2:00:24 ago on Sat 29 May 2021 05:02:03 PM UTC. No match for argument: createrepo.noarch Error: Unable to find a match: createrepo.noarch Anyone know how I can get around this error? I thought about downloading one of them from [URL] , but no idea which one I'm supposed to download.
|
linux, unix, rpm, yum, rhel
| 0
| 838
| 1
|
https://stackoverflow.com/questions/67754906/unable-to-install-createrepo-noarch
|
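A hedged note for the createrepo question above: RHEL 8 ships the C rewrite under the package name createrepo_c, so the classic createrepo.noarch name no longer matches anything; on an EC2 RHUI image the AppStream repo that carries it is normally already enabled, so the subscription warning can usually be ignored.

    sudo dnf install createrepo_c
    createrepo_c --version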
67,503,223
|
bash + how to set cli PATH with version in sub folder
|
Under the /usr/hdp folder we can have only one of the following sub-folders 2.6.5.0-292 2.6.4.0-91 2.6.0.3-8 example ls /usr/hdp/ 2.6.5.0-292 current stack file.txt I want to use the following CLI, and $VERSION could be one of the above versions /usr/hdp/$VERSION/kafka/bin/kafka-reassign-partitions.sh so I did the following in my bash script [[ -d /usr/hdp/2.6.5.0-292 ]] && VERSION=2.6.5.0-292 [[ -d /usr/hdp/2.6.4.0-91 ]] && VERSION=2.6.4.0-91 [[ -d /usr/hdp/2.6.0.3-8 ]] && VERSION=2.6.0.3-8 /usr/hdp/$VERSION/kafka/bin/kafka-reassign-partitions.sh Can we do this in a more efficient way, without the [[ -d ...... ]] tests?
|
bash + how to set cli PATH with version in sub folder Under the /usr/hdp folder we can have only one of the following sub-folders 2.6.5.0-292 2.6.4.0-91 2.6.0.3-8 example ls /usr/hdp/ 2.6.5.0-292 current stack file.txt I want to use the following CLI, and $VERSION could be one of the above versions /usr/hdp/$VERSION/kafka/bin/kafka-reassign-partitions.sh so I did the following in my bash script [[ -d /usr/hdp/2.6.5.0-292 ]] && VERSION=2.6.5.0-292 [[ -d /usr/hdp/2.6.4.0-91 ]] && VERSION=2.6.4.0-91 [[ -d /usr/hdp/2.6.0.3-8 ]] && VERSION=2.6.0.3-8 /usr/hdp/$VERSION/kafka/bin/kafka-reassign-partitions.sh Can we do this in a more efficient way, without the [[ -d ...... ]] tests?
|
linux, bash, shell, rhel
| 0
| 59
| 2
|
https://stackoverflow.com/questions/67503223/bash-how-to-set-cli-path-with-version-in-sub-folder
|
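A minimal sketch for the /usr/hdp question above: since only one versioned directory can exist (as the question states), a glob that matches names starting with a digit picks it up without hard-coding the list; the [0-9]* pattern assumes version directories always begin with a digit, which the examples do.

    VERSION=$(basename /usr/hdp/[0-9]*/)      # the lone versioned directory
    [[ -d /usr/hdp/$VERSION ]] || { echo "no HDP version directory found" >&2; exit 1; }
    /usr/hdp/"$VERSION"/kafka/bin/kafka-reassign-partitions.sh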
65,110,393
|
Pro*C program run properly in AIX but not in RHEL
|
I have some trouble running a binary done with pro C code. I can compile it (without warnings) in my new server RHEL (7.8) but it loops until it failed. I made a simple pro C script to test the compilation (see below). This script runs easily in my previous server AIX (6.1.0.0). The DB is Oracle 11c. Oracle client in AIX : .../oracle/10.2 Oracle client in RHEL : .../oracle/12102/cli64 Here my simple code I want to compile and execute : #include "demo.h" #include <stdio.h> #include <string.h> #include <stdlib.h> #include <sqlca.h> #include <oraca.h> int code_erreur; char libelle_erreur[200]; EXEC ORACLE OPTION (ORACA=YES); EXEC SQL BEGIN DECLARE SECTION; VARCHAR username[30]; VARCHAR password[30]; VARCHAR host[30]; VARCHAR dynstmt[80]; VARCHAR mData[81]; EXEC SQL END DECLARE SECTION; int connect() { printf("\nConnection to %s@%s\n",username.arr,host.arr); EXEC SQL CONNECT :username IDENTIFIED BY :password USING :host; printf("Connection established\n"); return 0; } void sqlerr() { code_erreur = sqlca.sqlcode; strcpy(libelle_erreur,sqlca.sqlerrm.sqlerrmc); printf("SQL ERROR : %d,%s ;\n", code_erreur,libelle_erreur); printf("sqlca.sqlerrd : %d\n",sqlca.sqlerrd[2]); } int loadData() { int lIter; strcpy(dynstmt.arr, "SELECT BANNER FROM V$VERSION\n"); dynstmt.len = strlen(dynstmt.arr); printf("%s", (char *)dynstmt.arr); EXEC SQL PREPARE S FROM :dynstmt; EXEC SQL DECLARE lCursorData CURSOR FOR S; EXEC SQL OPEN lCursorData; for (lIter=0;lIter<10;lIter++) { EXEC SQL WHENEVER NOT FOUND DO break; EXEC SQL FETCH lCursorData INTO :mData; printf("%s\n", mData.arr); } EXECL SQL CLOSE lCursorData; return 0; } void main() { int lRetour=0; printf("Start demo\n"); strcpy((char *)username.arr,"USER"); username.len=(unsigned short)strlen((char *)username.arr); strcpy((char *)password.arr,"PASS"); password.len=(unsigned short)strlen((char *)password.arr); strcpy((char *)host.arr,"HOST"); host.len=(unsigned short)strlen((char *)host.arr); EXEC SQL WHENEVER SQLERROR do sqlerr(); lRetour = connect(); lRetour = loadData(); EXEC SQL COMMIT WORK RELEASE; printf("End demo\n"); exit(0); } When I compile it and execute it on my AIX server, here what I got : Start demo Connection to USER@HOST Connection established SELECT BANNER FROM V$VERSION [...] End demo But when I compile it on RHEL and execute it, the binary loops until a memory fault : [...] Connection to USER@HOST Connection to USER@HOST Connection to USER@HOST Memory fault Of course I want the same AIX result in RHEL.
|
Pro*C program run properly in AIX but not in RHEL I have some trouble running a binary done with pro C code. I can compile it (without warnings) in my new server RHEL (7.8) but it loops until it failed. I made a simple pro C script to test the compilation (see below). This script runs easily in my previous server AIX (6.1.0.0). The DB is Oracle 11c. Oracle client in AIX : .../oracle/10.2 Oracle client in RHEL : .../oracle/12102/cli64 Here my simple code I want to compile and execute : #include "demo.h" #include <stdio.h> #include <string.h> #include <stdlib.h> #include <sqlca.h> #include <oraca.h> int code_erreur; char libelle_erreur[200]; EXEC ORACLE OPTION (ORACA=YES); EXEC SQL BEGIN DECLARE SECTION; VARCHAR username[30]; VARCHAR password[30]; VARCHAR host[30]; VARCHAR dynstmt[80]; VARCHAR mData[81]; EXEC SQL END DECLARE SECTION; int connect() { printf("\nConnection to %s@%s\n",username.arr,host.arr); EXEC SQL CONNECT :username IDENTIFIED BY :password USING :host; printf("Connection established\n"); return 0; } void sqlerr() { code_erreur = sqlca.sqlcode; strcpy(libelle_erreur,sqlca.sqlerrm.sqlerrmc); printf("SQL ERROR : %d,%s ;\n", code_erreur,libelle_erreur); printf("sqlca.sqlerrd : %d\n",sqlca.sqlerrd[2]); } int loadData() { int lIter; strcpy(dynstmt.arr, "SELECT BANNER FROM V$VERSION\n"); dynstmt.len = strlen(dynstmt.arr); printf("%s", (char *)dynstmt.arr); EXEC SQL PREPARE S FROM :dynstmt; EXEC SQL DECLARE lCursorData CURSOR FOR S; EXEC SQL OPEN lCursorData; for (lIter=0;lIter<10;lIter++) { EXEC SQL WHENEVER NOT FOUND DO break; EXEC SQL FETCH lCursorData INTO :mData; printf("%s\n", mData.arr); } EXECL SQL CLOSE lCursorData; return 0; } void main() { int lRetour=0; printf("Start demo\n"); strcpy((char *)username.arr,"USER"); username.len=(unsigned short)strlen((char *)username.arr); strcpy((char *)password.arr,"PASS"); password.len=(unsigned short)strlen((char *)password.arr); strcpy((char *)host.arr,"HOST"); host.len=(unsigned short)strlen((char *)host.arr); EXEC SQL WHENEVER SQLERROR do sqlerr(); lRetour = connect(); lRetour = loadData(); EXEC SQL COMMIT WORK RELEASE; printf("End demo\n"); exit(0); } When I compile it and execute it on my AIX server, here what I got : Start demo Connection to USER@HOST Connection established SELECT BANNER FROM V$VERSION [...] End demo But when I compile it on RHEL and execute it, the binary loops until a memory fault : [...] Connection to USER@HOST Connection to USER@HOST Connection to USER@HOST Memory fault Of course I want the same AIX result in RHEL.
|
compilation, rhel, oracle-pro-c
| 0
| 274
| 1
|
https://stackoverflow.com/questions/65110393/proc-program-run-properly-in-aix-but-not-in-rhel
|
63,358,702
|
Write output of an npm command to a variable, but only a specific part
|
On my root directory of my project, I have a package.json. While doing the CI build, I am trying to capture the version properties from the file. Since I am running on a node container, the following command is possible. node -p -e require('./package.json').version I included the above in scripts property of package.json "scripts": { "version": "node -p -e require('./package.json').version" }, I am capturing it in a variable using export VERSION=$(npm run version) which seems to capture a lot more than the result of the npm command. Build environment is a Nodejs10 container built on rhel7. jq is not available and something without it may be better Any suggestions?
|
Write input of a npm command to a variable but only a specific part On my root directory of my project, I have a package.json. While doing the CI build, I am trying to capture the version properties from the file. Since I am running on a node container, the following command is possible. node -p -e require('./package.json').version I included the above in scripts property of package.json "scripts": { "version": "node -p -e require('./package.json').version" }, I am capturing it in a variable using export VERSION=$(npm run version) which seems to capture a lot more than the result of the npm command. Build environment is a Nodejs10 container built on rhel7. jq is not available and something without it may be better Any suggestions?
|
node.js, bash, npm, rhel, npm-scripts
| 0
| 1,525
| 2
|
https://stackoverflow.com/questions/63358702/write-input-of-a-npm-command-to-a-variable-but-only-a-specific-part
|
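A hedged sketch for the npm entry above: the extra text in $VERSION is npm's own banner around the script output, so either call node directly or pass --silent so only the script's stdout is captured. Both commands assume they are run from the directory containing package.json.

    # capture only the version field, bypassing npm's wrapper output
    VERSION=$(node -p "require('./package.json').version")

    # or keep the npm script and silence npm's banner lines
    VERSION=$(npm run version --silent)

    echo "version: $VERSION"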
62,551,549
|
pip install + pip can't install the wheel file from a local flat directory
|
we are trying to install the following wheel file ls /tmp/test argparse-1.4.0-py2.py3-none-any.whl one option is: pip install --no-index --find-links=file:///tmp/test argparse-1.4.0-py2.py3-none-any.whl DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at [URL] WARNING: Requirement 'argparse-1.4.0-py2.py3-none-any.whl' looks like a filename, but the file does not exist Looking in links: file:///tmp/test Processing ./argparse-1.4.0-py2.py3-none-any.whl ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/tmp/argparse-1.4.0-py2.py3-none-any.whl' the second option is: pip install --no-index --find-links=/tmp/test argparse-1.4.0-py2.py3-none-any.whl DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at [URL] WARNING: Requirement 'argparse-1.4.0-py2.py3-none-any.whl' looks like a filename, but the file does not exist Looking in links: /tmp/test Processing ./argparse-1.4.0-py2.py3-none-any.whl ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/tmp/argparse-1.4.0-py2.py3-none-any.whl' on both option pip failed to find the .whl file what is interesting is that pip ignore the folder /tmp/test what is going here ? , why pip cant installed the wheel file and find it under /tmp/test folder? reference:
|
pip install + pip cant instelled the weel file from a local flat directory we are trying to install the following wheel file ls /tmp/test argparse-1.4.0-py2.py3-none-any.whl one option is: pip install --no-index --find-links=file:///tmp/test argparse-1.4.0-py2.py3-none-any.whl DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at [URL] WARNING: Requirement 'argparse-1.4.0-py2.py3-none-any.whl' looks like a filename, but the file does not exist Looking in links: file:///tmp/test Processing ./argparse-1.4.0-py2.py3-none-any.whl ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/tmp/argparse-1.4.0-py2.py3-none-any.whl' the second option is: pip install --no-index --find-links=/tmp/test argparse-1.4.0-py2.py3-none-any.whl DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at [URL] WARNING: Requirement 'argparse-1.4.0-py2.py3-none-any.whl' looks like a filename, but the file does not exist Looking in links: /tmp/test Processing ./argparse-1.4.0-py2.py3-none-any.whl ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/tmp/argparse-1.4.0-py2.py3-none-any.whl' on both option pip failed to find the .whl file what is interesting is that pip ignore the folder /tmp/test what is going here ? , why pip cant installed the wheel file and find it under /tmp/test folder? reference:
|
python, python-2.7, pip, rhel, python-wheel
| 0
| 954
| 1
|
https://stackoverflow.com/questions/62551549/pip-install-pip-cant-instelled-the-weel-file-from-a-local-flat-directory
|
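A sketch of the two usual fixes for the wheel entry above: pip treats an argument that looks like a filename as a path relative to the current directory, so either give the full path to the .whl, or pass the requirement name and let --find-links locate the wheel in /tmp/test.

    # install the wheel by its full path
    pip install /tmp/test/argparse-1.4.0-py2.py3-none-any.whl

    # or name the requirement and let --find-links resolve it from the flat directory
    pip install --no-index --find-links=/tmp/test argparse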
61,764,996
|
How to perform grep on curl output within a for loop?
|
I have created an array using the command IFS=', ' read -r -a array <<< "$(command)" The array has values: abc001 abc002 abc003 I want to loop through the array and run a curl command on each element. a) If the curl output contains the string Connected, then the curl command should time out and the for loop should exit. b) If the curl output does not contain the string Connected, then the curl command should time out and the for loop should move to the next element. I have written the following code. for element in "${array[@]}" do resp=$(curl -v [URL] echo resp done I am getting the following output: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to abc001 port 8888 (#0) * Trying 10.10.10.10... * Connected to abc001 port 8888 (#0)
|
How to perform grep on curl output within a for loop? I have created an array using the command IFS=', ' read -r -a array <<< "$(command)" The array has values: abc001 abc002 abc003 I want to loop through the array and run a curl command on each element. a) If the curl output contains the string Connected, then the curl command should time out and the for loop should exit. b) If the curl output does not contain the string Connected, then the curl command should time out and the for loop should move to the next element. I have written the following code. for element in "${array[@]}" do resp=$(curl -v [URL] echo resp done I am getting the following output: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to abc001 port 8888 (#0) * Trying 10.10.10.10... * Connected to abc001 port 8888 (#0)
|
linux, bash, shell, curl, rhel
| 0
| 959
| 2
|
https://stackoverflow.com/questions/61764996/how-to-perform-grep-on-curl-within-for-loop
|
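A hedged sketch for the curl-in-a-loop entry above. curl -v writes the connection details to stderr, so they have to be redirected into the captured variable; --max-time provides the timeout. The http://HOST:8888 URL and the 5-second timeout are assumptions (the real URL is redacted in the question, only port 8888 appears in its output).

    for element in "${array[@]}"; do
        # capture stdout and stderr; give curl at most 5 seconds per host
        resp=$(curl -sv --max-time 5 "http://${element}:8888" 2>&1)
        if grep -q "Connected" <<< "$resp"; then
            echo "$element: Connected found, stopping"
            break
        fi
        echo "$element: not connected, moving to the next element"
    done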
61,644,820
|
What are the differences between installing Docker Engine from binaries vs. installation by yum and pip?
|
We want to install Docker and Docker Compose on our RHEL machine. What are the differences between installing Docker Engine from binaries and installing it via yum and pip?
|
What are the differences between installing Docker Engine from binaries vs. installation by yum and pip? We want to install Docker and Docker Compose on our RHEL machine. What are the differences between installing Docker Engine from binaries and installing it via yum and pip?
|
docker, docker-compose, pip, yum, rhel
| 0
| 188
| 1
|
https://stackoverflow.com/questions/61644820/what-the-differences-between-install-docker-engine-from-binaries-vs-docker-insta
|
61,382,926
|
Can't install python by pyenv on RHEL
|
Installed pyenv on RHEL7.6. Installed necessary libraries as wiki sudo yum install @development zlib-devel bzip2 bzip2-devel readline-devel sqlite \ sqlite-devel openssl-devel xz xz-devel libffi-devel findutils Check the install available list $ pyenv install --list | grep 3.8.2 3.8.2 When install it got error: $ pyenv install 3.8.2 Downloading Python-3.8.2.tar.xz... -> [URL] error: failed to download Python-3.8.2.tar.xz BUILD FAILED (Red Hat Enterprise Linux Server 7.6 using python-build 1.2.18-7-gae4d489) Even tried to set a proxy but still the same error. The OS information is $ cat /etc/os-release NAME="Red Hat Enterprise Linux Server" VERSION="7.6 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.6" PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server" HOME_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.6 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
|
Can't install python by pyenv on RHEL Installed pyenv on RHEL7.6. Installed necessary libraries as wiki sudo yum install @development zlib-devel bzip2 bzip2-devel readline-devel sqlite \ sqlite-devel openssl-devel xz xz-devel libffi-devel findutils Check the install available list $ pyenv install --list | grep 3.8.2 3.8.2 When install it got error: $ pyenv install 3.8.2 Downloading Python-3.8.2.tar.xz... -> [URL] error: failed to download Python-3.8.2.tar.xz BUILD FAILED (Red Hat Enterprise Linux Server 7.6 using python-build 1.2.18-7-gae4d489) Even tried to set a proxy but still the same error. The OS information is $ cat /etc/os-release NAME="Red Hat Enterprise Linux Server" VERSION="7.6 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.6" PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server" HOME_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.6 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
|
python, linux, rhel, pyenv
| 0
| 3,569
| 1
|
https://stackoverflow.com/questions/61382926/cant-install-python-by-pyenv-on-rhel
|
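Two hedged workarounds for the failed pyenv download above, assuming the box needs a proxy to reach python.org: export the standard proxy variables before installing, or download the tarball separately and drop it into pyenv's cache directory (~/.pyenv/cache by default) so python-build skips the download. The proxy host and port are placeholders.

    # route the download through the corporate proxy (placeholder host/port)
    export http_proxy=http://proxy.example.com:3128
    export https_proxy=http://proxy.example.com:3128
    pyenv install 3.8.2

    # or pre-download the tarball into pyenv's cache and install from there
    mkdir -p ~/.pyenv/cache
    curl -L -o ~/.pyenv/cache/Python-3.8.2.tar.xz \
        https://www.python.org/ftp/python/3.8.2/Python-3.8.2.tar.xz
    pyenv install 3.8.2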
61,256,409
|
How to programmatically get the memory usage of the current program?
|
I would like to get some information about the memory usage of my C++ program. The way I do this is by accessing /proc/self/stat and printing the virtual and resident set size. You can find an example here . Is this a good way to go? How accurate is the information I am accessing*? Could someone recommend a better way to measure memory usage programmatically? *Asking, because I get unexpected, sudden jumps of mem usage. My expectation was that the information is perfectly accurate. OS: I am running inside a docker container, which is based on RHEL. Additional info: If I limit the memory usage of the container with docker run -m , the printed memory is greater than the limit I set.
|
How to programatically get the memory usage of the current program? I would like to get some information about the memory usage of my C++ program. The way I do this is by accessing /proc/self/stat and printing the virtual and resident set size. You can find an example here . Is this a good way to go? How accurate is the information I am accessing*? Could someone recommend a better way to measure memory usage programmatically? *Asking, because I get unexpected, sudden jumps of mem usage. My expectation was that the information is perfectly accurate. OS: I am running inside a docker container, which is based on RHEL. Additional info: If I limit the memory usage of the container with docker run -m , the printed memory is greater than the limit I set.
|
c++, linux, memory, rhel
| 0
| 297
| 1
|
https://stackoverflow.com/questions/61256409/how-to-programatically-get-the-memory-usage-of-the-current-program
|
61,143,527
|
Unable to install Pandas for Python in RHEL 8
|
I am trying to install pandas for Python in my RHEL 8 server. I tried listing pandas packages using yum list pandas and it gave me the below package python3-pandas.x86_64 under available packages. But, when I try to install this package using yum install python3-pandas.x86_64 , it shows the below error: **Problem: package python3-pandas-0.25.3-1.el8.x86_64 requires python3-matplotlib, but none of the providers can be installed conflicting requests nothing provides libqhull.so.7()(64bit) needed by python3-matplotlib-3.0.3-3.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)** Do I need to install libqhull.so.7()(64bit) explicitly? If so, can you please let me know how I can do that? I don't see any relevant packages for this. Also, as per the Red Hat documentation , it says libqhull is already available as part RHEL 8. But I don't see the package. Is there something that I am missing? Thanks in advance
|
Unable to install Pandas for Python in RHEL 8 I am trying to install pandas for Python in my RHEL 8 server. I tried listing pandas packages using yum list pandas and it gave me the below package python3-pandas.x86_64 under available packages. But, when I try to install this package using yum install python3-pandas.x86_64 , it shows the below error: **Problem: package python3-pandas-0.25.3-1.el8.x86_64 requires python3-matplotlib, but none of the providers can be installed conflicting requests nothing provides libqhull.so.7()(64bit) needed by python3-matplotlib-3.0.3-3.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)** Do I need to install libqhull.so.7()(64bit) explicitly? If so, can you please let me know how I can do that? I don't see any relevant packages for this. Also, as per the Red Hat documentation , it says libqhull is already available as part RHEL 8. But I don't see the package. Is there something that I am missing? Thanks in advance
|
python, pandas, rhel
| 0
| 4,897
| 1
|
https://stackoverflow.com/questions/61143527/unable-to-install-pandas-for-python-in-rhel-8
|
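Two hedged routes around the libqhull dependency problem above: skip the RPM stack and install pandas from PyPI, or make the matplotlib dependencies resolvable by enabling additional repositories. The repository IDs below are the usual RHEL 8 names, but treat them as assumptions for your particular subscription.

    # route 1: install pandas from PyPI for the current user
    sudo dnf install python3-pip
    pip3 install --user pandas

    # route 2: enable extra repositories so python3-matplotlib's dependencies resolve
    sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
    sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    sudo dnf install python3-pandas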
60,458,826
|
How to install ImageMagick, ImageMagick-devel and PECL imagick on RHEL 8
|
It appears ImageMagick and ImageMagick-devel are removed from RHEL 8. I have added 3rd party repositories such as EPEL, REMI, & RPMFusion. I am able to install the GraphicsMagick replacement for Imagemagick but it appears Gmagick is not compatible with ImageMagick at PHP library code level. So the below method does not help... for the plugin I am using which depends on PHP Imagemagick, while using Gmagick to convert something in the CLI it will work. $ dnf install GraphicsMagick GraphicsMagick-devel GraphicsMagick-perl ghostscript $ cd /usr/local/src $ wget [URL] $ tar xfvz gmagick $ cd gmagick-* $ phpize $ ./configure $ make $ make installl $ php --ini | grep 'Loaded Configuration File' $ nano /etc/php.ini // add extension to end of php.ini extension=gmagick.so When trying to install ImageMagick $ sudo yum install ImageMagick-devel No match for argument: ImageMagick-deval Error: Unable to find a match: ImageMagick-deval $ pecl install imagick checking whether to enable the imagick extension... yes, shared checking for pkg-config... /bin/pkg-config checking ImageMagick MagickWand API configuration program... checking Testing /usr/local/bin/MagickWand-config... Doesn't exist checking Testing /usr/bin/MagickWand-config... Doesn't exist checking Testing /usr/sbin/bin/MagickWand-config... Doesn't exist checking Testing /opt/bin/MagickWand-config... Doesn't exist checking Testing /opt/local/bin/MagickWand-config... Doesn't exist configure: error: not found. Please provide a path to MagickWand-config or Wand-config program. ERROR: `/var/tmp/imagick/configure --with-php-config=/bin/php-config --with-imagick' failed Is there a way to manually get ImageMagick, ImageMagick-devel and PECL Imagick installed on RHEL 8 (NOT Gmagick) EDIT # dnf repolist Updating Subscription Management repositories. Unable to read consumer identity This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Last metadata expiration check: 0:00:17 ago on Fri 28 Feb 2020 20:58:19 UTC. 
repo id repo name status *epel Extra Packages for Enterprise Linux 8 - x86_64 4,916 *epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64 0 remi-modular Remi's Modular repository for Enterprise Linux 8 - x86_64 16 remi-safe Safe Remi's RPM repository for Enterprise Linux 8 - x86_64 2,133 rhel-8-appstream-rhui-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream from RHUI (RPMs) 8,566 rhel-8-baseos-rhui-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS from RHUI (RPMs) 3,690 rhui-client-config-server-8 Red Hat Update Infrastructure 3 Client Configuration Server 8 6 rpmfusion-free-updates RPM Fusion for EL 8 - Free - Updates # yum install ImageMagick-devel Error: Problem: conflicting requests - nothing provides jasper-devel needed by ImageMagick-devel-6.9.10.86-1.el8.x86_64 - nothing provides OpenEXR-devel needed by ImageMagick-devel-6.9.10.86-1.el8.x86_64 - nothing provides ghostscript-devel needed by ImageMagick-devel-6.9.10.86-1.el8.x86_64 - nothing provides lcms2-devel needed by ImageMagick-devel-6.9.10.86-1.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) SOLVED - [URL] $ cd ImageMagick-7.0.9-26 $ ./configure $ make $ make install $ sudo ldconfig /usr/local/lib $ pecl install imagick $ nano /etc/php.ini // Add the following extension=imagick.so Works after removing Gmagick Clean try with $ sudo dnf install ImageMagick $ dnf install php73-php-pecl-imagick # php --ini | grep 'Loaded Configuration File' PHP Warning: PHP Startup: Unable to load dynamic library 'imagick.so' (tried: /usr/lib64/php/modules/imagick.so (/usr/lib64/php/modules/imagick.so: cannot open shared object file: No such file or directory), /usr/lib64/php/modules/imagick.so.so (/usr/lib64/php/modules/imagick.so.so: cannot open shared object file: No such file or directory)) in Unknown on line 0 Loaded Configuration File: /etc/php.ini # ls bz2.so curl.so fileinfo.so gettext.so json.so mysqli.so pdo_mysql.so phar.so simplexml.so sysvmsg.so tokenizer.so xml.so calendar.so dom.so ftp.so iconv.so mbstring.so mysqlnd.so pdo.so posix.so sockets.so sysvsem.so wddx.so xmlwriter.so ctype.so exif.so gd.so intl.so mcrypt.so opcache.so pdo_sqlite.so shmop.so sqlite3.so sysvshm.so xmlreader.so xsl.so nothing shown in php -m for imagick # php -v PHP Warning: PHP Startup: Unable to load dynamic library 'imagick.so' (tried: /usr/lib64/php/modules/imagick.so (/usr/lib64/php/modules/imagick.so: cannot open shared object file: No such file or directory), /usr/lib64/php/modules/imagick.so.so (/usr/lib64/php/modules/imagick.so.so: cannot open shared object file: No such file or directory)) in Unknown on line 0 PHP 7.3.5 (cli) (built: Apr 30 2019 08:37:17) ( NTS ) Copyright (c) 1997-2018 The PHP Group Zend Engine v3.3.5, Copyright (c) 1998-2018 Zend Technologies with Zend OPcache v7.3.5, Copyright (c) 1999-2018, by Zend Technologies
|
How to install ImageMagick, ImageMagick-devel and PECL imagick on RHEL 8 It appears ImageMagick and ImageMagick-devel are removed from RHEL 8. I have added 3rd party repositories such as EPEL, REMI, & RPMFusion. I am able to install the GraphicsMagick replacement for Imagemagick but it appears Gmagick is not compatible with ImageMagick at PHP library code level. So the below method does not help... for the plugin I am using which depends on PHP Imagemagick, while using Gmagick to convert something in the CLI it will work. $ dnf install GraphicsMagick GraphicsMagick-devel GraphicsMagick-perl ghostscript $ cd /usr/local/src $ wget [URL] $ tar xfvz gmagick $ cd gmagick-* $ phpize $ ./configure $ make $ make installl $ php --ini | grep 'Loaded Configuration File' $ nano /etc/php.ini // add extension to end of php.ini extension=gmagick.so When trying to install ImageMagick $ sudo yum install ImageMagick-devel No match for argument: ImageMagick-deval Error: Unable to find a match: ImageMagick-deval $ pecl install imagick checking whether to enable the imagick extension... yes, shared checking for pkg-config... /bin/pkg-config checking ImageMagick MagickWand API configuration program... checking Testing /usr/local/bin/MagickWand-config... Doesn't exist checking Testing /usr/bin/MagickWand-config... Doesn't exist checking Testing /usr/sbin/bin/MagickWand-config... Doesn't exist checking Testing /opt/bin/MagickWand-config... Doesn't exist checking Testing /opt/local/bin/MagickWand-config... Doesn't exist configure: error: not found. Please provide a path to MagickWand-config or Wand-config program. ERROR: `/var/tmp/imagick/configure --with-php-config=/bin/php-config --with-imagick' failed Is there a way to manually get ImageMagick, ImageMagick-devel and PECL Imagick installed on RHEL 8 (NOT Gmagick) EDIT # dnf repolist Updating Subscription Management repositories. Unable to read consumer identity This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Last metadata expiration check: 0:00:17 ago on Fri 28 Feb 2020 20:58:19 UTC. 
repo id repo name status *epel Extra Packages for Enterprise Linux 8 - x86_64 4,916 *epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64 0 remi-modular Remi's Modular repository for Enterprise Linux 8 - x86_64 16 remi-safe Safe Remi's RPM repository for Enterprise Linux 8 - x86_64 2,133 rhel-8-appstream-rhui-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream from RHUI (RPMs) 8,566 rhel-8-baseos-rhui-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS from RHUI (RPMs) 3,690 rhui-client-config-server-8 Red Hat Update Infrastructure 3 Client Configuration Server 8 6 rpmfusion-free-updates RPM Fusion for EL 8 - Free - Updates # yum install ImageMagick-devel Error: Problem: conflicting requests - nothing provides jasper-devel needed by ImageMagick-devel-6.9.10.86-1.el8.x86_64 - nothing provides OpenEXR-devel needed by ImageMagick-devel-6.9.10.86-1.el8.x86_64 - nothing provides ghostscript-devel needed by ImageMagick-devel-6.9.10.86-1.el8.x86_64 - nothing provides lcms2-devel needed by ImageMagick-devel-6.9.10.86-1.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) SOLVED - [URL] $ cd ImageMagick-7.0.9-26 $ ./configure $ make $ make install $ sudo ldconfig /usr/local/lib $ pecl install imagick $ nano /etc/php.ini // Add the following extension=imagick.so Works after removing Gmagick Clean try with $ sudo dnf install ImageMagick $ dnf install php73-php-pecl-imagick # php --ini | grep 'Loaded Configuration File' PHP Warning: PHP Startup: Unable to load dynamic library 'imagick.so' (tried: /usr/lib64/php/modules/imagick.so (/usr/lib64/php/modules/imagick.so: cannot open shared object file: No such file or directory), /usr/lib64/php/modules/imagick.so.so (/usr/lib64/php/modules/imagick.so.so: cannot open shared object file: No such file or directory)) in Unknown on line 0 Loaded Configuration File: /etc/php.ini # ls bz2.so curl.so fileinfo.so gettext.so json.so mysqli.so pdo_mysql.so phar.so simplexml.so sysvmsg.so tokenizer.so xml.so calendar.so dom.so ftp.so iconv.so mbstring.so mysqlnd.so pdo.so posix.so sockets.so sysvsem.so wddx.so xmlwriter.so ctype.so exif.so gd.so intl.so mcrypt.so opcache.so pdo_sqlite.so shmop.so sqlite3.so sysvshm.so xmlreader.so xsl.so nothing shown in php -m for imagick # php -v PHP Warning: PHP Startup: Unable to load dynamic library 'imagick.so' (tried: /usr/lib64/php/modules/imagick.so (/usr/lib64/php/modules/imagick.so: cannot open shared object file: No such file or directory), /usr/lib64/php/modules/imagick.so.so (/usr/lib64/php/modules/imagick.so.so: cannot open shared object file: No such file or directory)) in Unknown on line 0 PHP 7.3.5 (cli) (built: Apr 30 2019 08:37:17) ( NTS ) Copyright (c) 1997-2018 The PHP Group Zend Engine v3.3.5, Copyright (c) 1998-2018 Zend Technologies with Zend OPcache v7.3.5, Copyright (c) 1999-2018, by Zend Technologies
|
php, imagemagick, rpm, rhel, epel
| 0
| 14,733
| 2
|
https://stackoverflow.com/questions/60458826/how-to-install-imagemagick-imagemagick-devel-and-pecl-imagick-on-rhel-8
|
60,449,691
|
Rebuild RHEL source package (SRPM) for Libreoffice -- are the build dependencies not packaged?
|
I have a system running RHEL 8.1. This includes a packaged version of LibreOffice: % rpm -qi libreoffice-base Name : libreoffice-base Epoch : 1 Version : 6.0.6.1 Release : 19.el8 Architecture: x86_64 Install Date: Fri 21 Feb 2020 05:16:08 PM GMT Group : Unspecified Size : 7511388 License : (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 Signature : RSA/SHA256, Tue 20 Aug 2019 02:38:29 PM BST, Key ID 199e2f91fd431d51 Source RPM : libreoffice-6.0.6.1-19.el8.src.rpm [etc] I'd like to rebuild this RPM package from its source package. So I went to Red Hat's download page [URL] which unfortunately requires a login to access, and downloaded the source package libreoffice-6.0.6.1-19.el8.src.rpm . Then I installed the source package with rpm -Uvh as usual and it created files under the SPECS and SOURCES directories in my RPM build directory. Then as usual I went to build it: % cd SPECS % rpmbuild -ba libreoffice.spec error: Failed build dependencies: bsh is needed by libreoffice-1:6.0.6.1-19.el8.x86_64 glm-devel is needed by libreoffice-1:6.0.6.1-19.el8.x86_64 gpgmepp-devel is needed by libreoffice-1:6.0.6.1-19.el8.x86_64 [etc] Not too surprising, I hadn't expected all of the build dependencies to be already present. So I went to install them, starting with bsh : % sudo dnf -y install bsh Updating Subscription Management repositories. Last metadata expiration check: 0:39:20 ago on Fri 28 Feb 2020 09:39:22 AM GMT. No match for argument: bsh Error: Unable to find a match: bsh Now I'm a bit stumped. The package bsh is required to build libreoffice-base , which is a core package, yet bsh is not packaged? I hope there's something obvious I am missing here. The alternative would be that RHEL isn't self-hosting, which would be very depressing.
|
Rebuild RHEL source package (SRPM) for Libreoffice -- are the build dependencies not packaged? I have a system running RHEL 8.1. This includes a packaged version of LibreOffice: % rpm -qi libreoffice-base Name : libreoffice-base Epoch : 1 Version : 6.0.6.1 Release : 19.el8 Architecture: x86_64 Install Date: Fri 21 Feb 2020 05:16:08 PM GMT Group : Unspecified Size : 7511388 License : (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 Signature : RSA/SHA256, Tue 20 Aug 2019 02:38:29 PM BST, Key ID 199e2f91fd431d51 Source RPM : libreoffice-6.0.6.1-19.el8.src.rpm [etc] I'd like to rebuild this RPM package from its source package. So I went to Red Hat's download page [URL] which unfortunately requires a login to access, and downloaded the source package libreoffice-6.0.6.1-19.el8.src.rpm . Then I installed the source package with rpm -Uvh as usual and it created files under the SPECS and SOURCES directories in my RPM build directory. Then as usual I went to build it: % cd SPECS % rpmbuild -ba libreoffice.spec error: Failed build dependencies: bsh is needed by libreoffice-1:6.0.6.1-19.el8.x86_64 glm-devel is needed by libreoffice-1:6.0.6.1-19.el8.x86_64 gpgmepp-devel is needed by libreoffice-1:6.0.6.1-19.el8.x86_64 [etc] Not too surprising, I hadn't expected all of the build dependencies to be already present. So I went to install them, starting with bsh : % sudo dnf -y install bsh Updating Subscription Management repositories. Last metadata expiration check: 0:39:20 ago on Fri 28 Feb 2020 09:39:22 AM GMT. No match for argument: bsh Error: Unable to find a match: bsh Now I'm a bit stumped. The package bsh is required to build libreoffice-base , which is a core package, yet bsh is not packaged? I hope there's something obvious I am missing here. The alternative would be that RHEL isn't self-hosting, which would be very depressing.
|
rpm, rhel, libreoffice, rebuild
| 0
| 438
| 2
|
https://stackoverflow.com/questions/60449691/rebuild-rhel-source-package-srpm-for-libreoffice-are-the-build-dependencies
|
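Rather than installing BuildRequires one at a time, dnf's builddep plugin can resolve everything listed in the spec; several of those packages (bsh, glm-devel, ...) are not in BaseOS/AppStream, so CodeReady Builder and EPEL usually have to be enabled first. A hedged sketch assuming the default ~/rpmbuild layout.

    sudo dnf install dnf-plugins-core
    sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
    sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

    # pull in every BuildRequires named in the spec, then retry the build
    sudo dnf builddep ~/rpmbuild/SPECS/libreoffice.spec
    rpmbuild -ba ~/rpmbuild/SPECS/libreoffice.spec

If a build dependency such as bsh still cannot be found in any enabled repository, it has to be rebuilt from its own SRPM first.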
57,713,170
|
Set chrony.conf file to "server" instead of "pool"
|
I'm working with RHEL 8 and the ntp package is no longer supported and it is implemented by the chronyd daemon which is provided in the chrony package. The file is set up to use public servers from the pool.ntp.org project (pool 2.rhel.pool.ntp.org iburst). Is there a way to set server instead of pool? My chrony.conf file: # Use public servers from the pool.ntp.org project. # Please consider joining the pool ([URL] pool 2.rhel.pool.ntp.org iburst
|
Set chrony.conf file to "server" instead of "pool" I'm working with RHEL 8 and the ntp package is no longer supported and it is implemented by the chronyd daemon which is provided in the chrony package. The file is set up to use public servers from the pool.ntp.org project (pool 2.rhel.pool.ntp.org iburst). Is there a way to set server instead of pool? My chrony.conf file: # Use public servers from the pool.ntp.org project. # Please consider joining the pool ([URL] pool 2.rhel.pool.ntp.org iburst
|
rhel, rhel7, ntpd
| 0
| 6,594
| 1
|
https://stackoverflow.com/questions/57713170/set-chrony-conf-file-to-server-instead-of-pool
|
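A sketch of the change asked about above: chrony accepts one server directive per NTP host in place of the pool directive. The hostnames are placeholders.

    # /etc/chrony.conf -- replace the pool line with explicit servers
    server ntp1.example.com iburst
    server ntp2.example.com iburst

    # apply and verify
    sudo systemctl restart chronyd
    chronyc sources -v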
56,422,403
|
How to hide the port number in a URL
|
I have run an application on a cloud server. I am accessing the URL [URL] with a public IP. Is it possible to access the URL [URL] port number)?
|
How to hide the port number in a URL I have run an application on a cloud server. I am accessing the URL [URL] with a public IP. Is it possible to access the URL [URL] port number)?
|
java, apache, url, rhel, port-number
| 0
| 2,056
| 3
|
https://stackoverflow.com/questions/56422403/how-to-hide-port-number-in-a-url
|
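The port disappears from the URL only when the service answers on the protocol's default port (80 for http), so the usual options are binding the application to port 80, fronting it with Apache as a reverse proxy, or forwarding port 80 at the firewall. A hedged firewalld sketch, assuming the application listens on 8080 (the real port is redacted in the question).

    # forward incoming port 80 to the application's port
    sudo firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=8080
    # some setups also need masquerading enabled for forward-ports to take effect
    sudo firewall-cmd --permanent --add-masquerade
    sudo firewall-cmd --reload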
55,946,410
|
Cannot register with any organization. Registering a system in RHEL
|
subscription-manager register --username="USERNAME" --password="PASSWORD" When I run this command I'm getting the error: Cannot register with any organization.
|
Cannot register with any organization. Registering a system in RHEL subscription-manager register --username="USERNAME" --password="PASSWORD" When I run this command I'm getting the error: Cannot register with any organization.
|
redhat, openstack, rhel
| 0
| 4,463
| 2
|
https://stackoverflow.com/questions/55946410/cannot-register-with-any-organization-registering-a-system-in-rhel
|
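This error usually appears when the account maps to more than one organization (or to none that the credentials can use), so registration has to name one explicitly. A sketch; ORG_ID is a placeholder taken from the output of the orgs subcommand.

    # list the organizations the account can register against
    subscription-manager orgs --username="USERNAME" --password="PASSWORD"

    # register against a specific organization
    subscription-manager register --username="USERNAME" --password="PASSWORD" --org="ORG_ID"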
54,230,993
|
How to Implement a specific /etc/resolv.conf per Openshift project
|
I'm having a use case where each openshift project belongs to an own VLAN, which has more than just Openshift Nodes in it. Each VLAN has it's own independent DNS to resolve all the Hosts within that VLAN. The Openshift Cluster itself hosts more of such VLANs on the same time. To get the per-project dns resolution done, it is elementary to get a project-based DNS resolving implemented. Is there a way to change the pod's /etc/resolv.conf dependent on the Openshift project it runs in? The Cluster runs on RHEL 7.x, Openshift is 3.11
|
How to Implement a specific /etc/resolv.conf per Openshift project I'm having a use case where each openshift project belongs to an own VLAN, which has more than just Openshift Nodes in it. Each VLAN has it's own independent DNS to resolve all the Hosts within that VLAN. The Openshift Cluster itself hosts more of such VLANs on the same time. To get the per-project dns resolution done, it is elementary to get a project-based DNS resolving implemented. Is there a way to change the pod's /etc/resolv.conf dependent on the Openshift project it runs in? The Cluster runs on RHEL 7.x, Openshift is 3.11
|
kubernetes, dns, openshift, rhel, dnsmasq
| 0
| 1,286
| 1
|
https://stackoverflow.com/questions/54230993/how-to-implement-a-specific-etc-resolv-conf-per-openshift-project
|
53,368,822
|
Not able to create SSIS DB Catalog in Linux RHEL 7
|
I am not able to create SSIS DB Catalog on my MS SQL Server which is installed on Linux RHEL 7 Server. Though I have installed SSIS on Linux RHEL 7. Whenever I am trying to create SSIS DB Catalog i am getting this error. TITLE: Microsoft SQL Server Management Studio The path to the catalog backup file could not be determined. Integration Services might not be installed on this server, or the user may not have the appropriate access permissions. (Microsoft.SqlServer.IntegrationServices.Common.ObjectModel) For help, click: [URL]
|
Not able to create SSIS DB Catlog in linux RHEL 7 I am not able to create SSIS DB Catalog on my MS SQL Server which is installed on Linux RHEL 7 Server. Though I have installed SSIS on Linux RHEL 7. Whenever I am trying to create SSIS DB Catalog i am getting this error. TITLE: Microsoft SQL Server Management Studio The path to the catalog backup file could not be determined. Integration Services might not be installed on this server, or the user may not have the appropriate access permissions. (Microsoft.SqlServer.IntegrationServices.Common.ObjectModel) For help, click: [URL]
|
sql-server, linux, ssis, rhel, rhel7
| 0
| 844
| 1
|
https://stackoverflow.com/questions/53368822/not-able-to-create-ssis-db-catlog-in-linux-rhel-7
|
52,071,373
|
Fetching a specific range of numeric value from huge list of numbers using AWK
|
I want to extract a certain range of huge numeric values, 720000002774991000 to 720000002774991099, so I tried the command below: awk -F, ' { if (($1 >= 720000002774991000) && ($1 <= 720000002774991099)) print $0} ' VOUCHER_DUMP_REPORT.csv | head VOUCHER_DUMP_REPORT.csv is my input file and has only one column of those huge numbers, but the output I am getting is not accurate; it has some values outside the range I gave. output : 720000002774991065 720000002774991082 720000002774990985 720000002774991131 720000002774990919 720000002774991110 720000002774990947 720000002774991070 720000002774991042 720000002774991044
|
Fetching a specific range of numeric value from huge list of numbers using AWK I want to extract a certain range of huge numeric values, 720000002774991000 to 720000002774991099, so I tried the command below: awk -F, ' { if (($1 >= 720000002774991000) && ($1 <= 720000002774991099)) print $0} ' VOUCHER_DUMP_REPORT.csv | head VOUCHER_DUMP_REPORT.csv is my input file and has only one column of those huge numbers, but the output I am getting is not accurate; it has some values outside the range I gave. output : 720000002774991065 720000002774991082 720000002774990985 720000002774991131 720000002774990919 720000002774991110 720000002774990947 720000002774991070 720000002774991042 720000002774991044
|
awk, rhel, nawk
| 0
| 259
| 1
|
https://stackoverflow.com/questions/52071373/fetching-a-specific-range-of-numeric-value-from-huge-list-of-numbers-using-awk
|
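The stray values above come from awk storing numbers as double-precision floats: 18-digit integers cannot be represented exactly, so nearby values compare equal. Two hedged workarounds: compare as strings (safe here because every value has the same number of digits), or use a gawk built with MPFR/GMP for exact arithmetic.

    # string comparison -- works because all values are 18 digits wide
    awk -F, '$1 >= "720000002774991000" && $1 <= "720000002774991099"' VOUCHER_DUMP_REPORT.csv | head

    # or exact arithmetic, if the installed gawk was built with MPFR support
    gawk -M -F, '$1 >= 720000002774991000 && $1 <= 720000002774991099' VOUCHER_DUMP_REPORT.csv | head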
51,945,428
|
How should I pay RHEL licences for RHEL-based containers
|
I am evaluating OpenShift Container Platform for my company. We will install it on top of RHEL. Should I pay a licence for RHEL for each container that is built on top of RHEL?
|
How should I pay RHEL licences for RHEL-based containers I am evaluating OpenShift Container Platform for my company. We will install it on top of RHEL. Should I pay a licence for RHEL for each container that is built on top of RHEL?
|
docker, openshift, rhel
| 0
| 395
| 2
|
https://stackoverflow.com/questions/51945428/how-should-i-pay-rhel-licences-with-rhel-based-container
|
46,862,413
|
How to install python 2.7 on RHEL6 32 bit machine?
|
I am stuck installing python2.7 on my redhat6, which has the default python 2.6 installed. I also searched a lot for a repository providing ' python-deltarpm ' and ' deltarpm ', which are required for installing ' createrepo ', but didn't find one. I added a repo from the DVD and from [URL] and yum repolist gives repolist: 11,960 . I want to install pip & virtualenv using python2.7.
|
How to install python 2.7 on RHEL6 32 bit machine? I am stuck installing python2.7 on my redhat6, which has the default python 2.6 installed. I also searched a lot for a repository providing ' python-deltarpm ' and ' deltarpm ', which are required for installing ' createrepo ', but didn't find one. I added a repo from the DVD and from [URL] and yum repolist gives repolist: 11,960 . I want to install pip & virtualenv using python2.7.
|
python-2.7, repository, rpm, rhel, rhel6
| 0
| 1,827
| 2
|
https://stackoverflow.com/questions/46862413/how-to-install-python-2-7-on-rhel6-32-bit-machine
|
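Two hedged routes for getting 2.7 next to the system 2.6 on the entry above: the Red Hat Software Collections python27 package (the repo ID depends on the subscription), or an altinstall build from source, which leaves /usr/bin/python pointing at 2.6. The 2.7.18 tarball URL is the upstream one.

    # route 1: Software Collections (repo id may differ for your subscription)
    sudo yum-config-manager --enable rhel-server-rhscl-6-rpms
    sudo yum install python27
    scl enable python27 bash

    # route 2: build 2.7 from source alongside the system python 2.6
    sudo yum groupinstall "Development tools"
    sudo yum install zlib-devel openssl-devel bzip2-devel sqlite-devel
    curl -O https://www.python.org/ftp/python/2.7.18/Python-2.7.18.tgz
    tar xzf Python-2.7.18.tgz && cd Python-2.7.18
    ./configure --prefix=/usr/local && make && sudo make altinstall
    sudo /usr/local/bin/python2.7 -m ensurepip
    sudo /usr/local/bin/python2.7 -m pip install virtualenv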
46,301,041
|
Virtual Box failed to open a session. Exit code 1073741819
|
I'm trying to install a VM with RHEL7 using Virtual Box. This question has appeared several times before but no answer is definitive or even correct. I have tried all the suggestions on other posts. The VM doesn't boot after simply going through the installation process with Virtual Box. The error message is: Result Code: E_FAIL (0x80004005) Component: MachineWrap Interface: IMachine {b2547866-a0a1-4391-8b86-6952d82efaa0} Any idea what is going on here?
|
Virtual Box failed to open a session. Exit code 1073741819 I'm trying to install a VM with RHEL7 using Virtual Box. This question has appeared several times before but no answer is definitive or even correct. I have tried all the suggestions on other posts. The VM doesn't boot after simply going through the installation process with Virtual Box. The error message is: Result Code: E_FAIL (0x80004005) Component: MachineWrap Interface: IMachine {b2547866-a0a1-4391-8b86-6952d82efaa0} Any idea what is going on here?
|
virtual-machine, virtualbox, rhel
| 0
| 3,181
| 1
|
https://stackoverflow.com/questions/46301041/virtual-box-failed-to-open-a-session-exit-code-1073741819
|
43,759,081
|
CKAN distribution not found error while trying to serve CKAN from apache
|
I am trying to configure datapusher for ckan 2.7 and as a prerequisite i have installed datastore and apache http server. apache httpd version: Apache/2.4.25 (Unix) mod_wsgi package installed 4.5.15 the permissions for the conf files are as specified in the ckan documentation [URL] contents of ckan_default.conf are WSGIScriptAlias / /etc/ckan/default/apache.wsgi WSGIPassAuthorization On WSGIDaemonProcess ckan_default display-name=ckan_default processes=2 threads=15 WSGIProcessGroup ckan_default ErrorLog /var/log/apache2/ckan_default.error.log CustomLog /var/log/apache2/ckan_default.custom.log combined contents of apache.wsgi file : import os ckan_home = os.environ.get('CKAN_HOME', '/usr/lib/ckan/default') activate_this = os.path.join(ckan_home, 'bin/activate_this.py') execfile(activate_this, dict(__file__=activate_this)) from paste.deploy import loadapp config_filepath = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'production.ini') from paste.script.util.logging_config import fileConfig fileConfig(config_filepath) application = loadapp('config:%s' % config_filepath) when i start the apache httpd server I see the following error in logs . [Tue May 02 17:39:23.953718 2017] [wsgi:error] [pid 24744:tid 140135528146688] mod_wsgi (pid=24744): Target WSGI script '/etc/ckan/default/apache.wsgi' cannot be loaded as Python module. [Tue May 02 17:39:23.953836 2017] [wsgi:error] [pid 24744:tid 140135528146688] mod_wsgi (pid=24744): Exception occurred processing WSGI script '/etc/ckan/default/apache.wsgi'. [Tue May 02 17:39:23.953875 2017] [wsgi:error] [pid 24744:tid 140135528146688] Traceback (most recent call last): [Tue May 02 17:39:23.953912 2017] [wsgi:error] [pid 24744:tid 140135528146688] File "/etc/ckan/default/apache.wsgi", line 10, in <module> [Tue May 02 17:39:23.954043 2017] [wsgi:error] [pid 24744:tid 140135528146688] application = loadapp('config:%s' % config_filepath) [Tue May 02 17:39:23.954067 2017] [wsgi:error] [pid 24744:tid 140135528146688] File "/usr/lib/ckan/default/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp [Tue May 02 17:39:23.955927 2017] [wsgi:error] [pid 24744:tid 140135528146688] DistributionNotFound: The 'ckan' distribution was not found and is required by the application Could anyone please help me resolve this issue? Thanks in advance PS : CKAN site loads fine when using paster serve
|
CKAN distribution not found error while trying to serve CKAN from apache I am trying to configure datapusher for ckan 2.7 and as a prerequisite i have installed datastore and apache http server. apache httpd version: Apache/2.4.25 (Unix) mod_wsgi package installed 4.5.15 the permissions for the conf files are as specified in the ckan documentation [URL] contents of ckan_default.conf are WSGIScriptAlias / /etc/ckan/default/apache.wsgi WSGIPassAuthorization On WSGIDaemonProcess ckan_default display-name=ckan_default processes=2 threads=15 WSGIProcessGroup ckan_default ErrorLog /var/log/apache2/ckan_default.error.log CustomLog /var/log/apache2/ckan_default.custom.log combined contents of apache.wsgi file : import os ckan_home = os.environ.get('CKAN_HOME', '/usr/lib/ckan/default') activate_this = os.path.join(ckan_home, 'bin/activate_this.py') execfile(activate_this, dict(__file__=activate_this)) from paste.deploy import loadapp config_filepath = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'production.ini') from paste.script.util.logging_config import fileConfig fileConfig(config_filepath) application = loadapp('config:%s' % config_filepath) when i start the apache httpd server I see the following error in logs . [Tue May 02 17:39:23.953718 2017] [wsgi:error] [pid 24744:tid 140135528146688] mod_wsgi (pid=24744): Target WSGI script '/etc/ckan/default/apache.wsgi' cannot be loaded as Python module. [Tue May 02 17:39:23.953836 2017] [wsgi:error] [pid 24744:tid 140135528146688] mod_wsgi (pid=24744): Exception occurred processing WSGI script '/etc/ckan/default/apache.wsgi'. [Tue May 02 17:39:23.953875 2017] [wsgi:error] [pid 24744:tid 140135528146688] Traceback (most recent call last): [Tue May 02 17:39:23.953912 2017] [wsgi:error] [pid 24744:tid 140135528146688] File "/etc/ckan/default/apache.wsgi", line 10, in <module> [Tue May 02 17:39:23.954043 2017] [wsgi:error] [pid 24744:tid 140135528146688] application = loadapp('config:%s' % config_filepath) [Tue May 02 17:39:23.954067 2017] [wsgi:error] [pid 24744:tid 140135528146688] File "/usr/lib/ckan/default/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp [Tue May 02 17:39:23.955927 2017] [wsgi:error] [pid 24744:tid 140135528146688] DistributionNotFound: The 'ckan' distribution was not found and is required by the application Could anyone please help me resolve this issue? Thanks in advance PS : CKAN site loads fine when using paster serve
|
apache, mod-wsgi, rhel, ckan
| 0
| 910
| 1
|
https://stackoverflow.com/questions/43759081/ckan-distribution-not-found-error-while-trying-to-serve-ckan-from-apache
|
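DistributionNotFound inside the daemon process often means mod_wsgi is not running against the CKAN virtualenv's site-packages. With mod_wsgi 4.x the daemon can be pointed at the virtualenv directly; a hedged sketch using the paths from the question, with the diagnosis itself an assumption.

    # /etc/httpd/conf.d/ckan_default.conf -- run the daemon inside the virtualenv
    WSGIDaemonProcess ckan_default display-name=ckan_default processes=2 threads=15 \
        python-home=/usr/lib/ckan/default
    WSGIProcessGroup ckan_default

    # then restart Apache
    sudo apachectl restart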
43,689,098
|
Unable to run python with sudo but I can run python without sudo
|
I'm performing some maintenance on my linux box (rhel via ssh) but when I go to run a python script as such: asemani$ python3.6 get-pip.py File "get-pip.py", line 20061, in <module> main() File "get-pip.py", line 194, in main bootstrap(tmpdir=tmpdir) File "get-pip.py", line 82, in bootstrap import pip File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 646, in _load_unlocked File "<frozen importlib._bootstrap>", line 616, in _load_backward_compatible File "/tmp/tmp04ft8p5f/pip.zip/pip/__init__.py", line 26, in <module> File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 646, in _load_unlocked File "<frozen importlib._bootstrap>", line 616, in _load_backward_compatible File "/tmp/tmp04ft8p5f/pip.zip/pip/utils/__init__.py", line 27, in <module> File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 646, in _load_unlocked File "<frozen importlib._bootstrap>", line 616, in _load_backward_compatible File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 3018, in <module> File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 3004, in _call_aside File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 3031, in _initialize_master_working_set File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 651, in _build_master File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 644, in __init__ File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 700, in add_entry File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 2017, in find_on_path PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.6/site-packages/setuptools-28.8.0.dist-info' When I try to use sudo: asemani$ sudo python3.6 get-pip.py' [sudo] password for asemani: sudo: python3.6: command not found Python isn't recognized? What is going on? How can I use sudo on python3.6. I'm pretty sure I installed it correctly. Edit: Approach 1: [asemani@746c9prda5r asemani]$ sudo -E python3.6 get-pip.py sudo: python3.6: command not found Approach 2: [asemani@746c9prda5r asemani]$ sudo /usr/local/bin/python3.6 get-pip.py [sudo] password for asemani: Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f0c4b78d908>, 'Connection to pypi.python.org timed out. (connect timeout=15)')': /simple/pip/ ^COperation cancelled by user
|
Unable to sudo python but I can python without sudo I'm performing some maintenance on my linux box (rhel via ssh) but when I go to run a python script as such: asemani$ python3.6 get-pip.py File "get-pip.py", line 20061, in <module> main() File "get-pip.py", line 194, in main bootstrap(tmpdir=tmpdir) File "get-pip.py", line 82, in bootstrap import pip File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 646, in _load_unlocked File "<frozen importlib._bootstrap>", line 616, in _load_backward_compatible File "/tmp/tmp04ft8p5f/pip.zip/pip/__init__.py", line 26, in <module> File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 646, in _load_unlocked File "<frozen importlib._bootstrap>", line 616, in _load_backward_compatible File "/tmp/tmp04ft8p5f/pip.zip/pip/utils/__init__.py", line 27, in <module> File "<frozen importlib._bootstrap>", line 961, in _find_and_load File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 646, in _load_unlocked File "<frozen importlib._bootstrap>", line 616, in _load_backward_compatible File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 3018, in <module> File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 3004, in _call_aside File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 3031, in _initialize_master_working_set File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 651, in _build_master File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 644, in __init__ File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 700, in add_entry File "/tmp/tmp04ft8p5f/pip.zip/pip/_vendor/pkg_resources/__init__.py", line 2017, in find_on_path PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.6/site-packages/setuptools-28.8.0.dist-info' When I try to use sudo: asemani$ sudo python3.6 get-pip.py' [sudo] password for asemani: sudo: python3.6: command not found Python isn't recognized? What is going on? How can I use sudo on python3.6. I'm pretty sure I installed it correctly. Edit: Approach 1: [asemani@746c9prda5r asemani]$ sudo -E python3.6 get-pip.py sudo: python3.6: command not found Approach 2: [asemani@746c9prda5r asemani]$ sudo /usr/local/bin/python3.6 get-pip.py [sudo] password for asemani: Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f0c4b78d908>, 'Connection to pypi.python.org timed out. (connect timeout=15)')': /simple/pip/ ^COperation cancelled by user
|
python, linux, rhel
| 0
| 654
| 2
|
https://stackoverflow.com/questions/43689098/unable-to-sudo-python-but-i-can-python-without-sudo
|
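A sketch for the sudo entry above: sudo resets PATH to its secure_path, which typically does not include /usr/local/bin where a source-built 3.6 lands, so the interpreter has to be named by its full path or the caller's PATH preserved for that command. Preserving the environment also carries any proxy variables, which is relevant to the pip timeout seen in approach 2.

    # call the interpreter by its full path
    sudo /usr/local/bin/python3.6 get-pip.py

    # or keep the caller's PATH and proxy settings just for this command
    sudo env "PATH=$PATH" "http_proxy=$http_proxy" "https_proxy=$https_proxy" python3.6 get-pip.py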
43,537,237
|
Unable to pull docker image - Repository not found
|
I'm unable to pull docker images in my environment. I think it's blocked by company firewall, but I'm not sure why It gets layer info and later It prints that repository is not found. sudo docker pull hello-world latest: Pulling from hello-world 50a54e1f9180: Pulling fs layer 7a5a2d73abce: Pulling fs layer Pulling repository hello-world Repository not found Docker version: (I cannot upgrade to newest docker on RHEL 6.9) Docker version 1.7.1, build 786b29d/1.7.1 Could somebody explain me which protocols (https only?) are used during docker image pulling phase and what addresses are contacted (" [URL] " only?) ?
|
Unable to pull docker image - Repository not found I'm unable to pull docker images in my environment. I think it's blocked by company firewall, but I'm not sure why It gets layer info and later It prints that repository is not found. sudo docker pull hello-world latest: Pulling from hello-world 50a54e1f9180: Pulling fs layer 7a5a2d73abce: Pulling fs layer Pulling repository hello-world Repository not found Docker version: (I cannot upgrade to newest docker on RHEL 6.9) Docker version 1.7.1, build 786b29d/1.7.1 Could somebody explain me which protocols (https only?) are used during docker image pulling phase and what addresses are contacted (" [URL] " only?) ?
|
docker, firewall, rhel
| 0
| 10,378
| 2
|
https://stackoverflow.com/questions/43537237/unable-to-pull-docker-image-repository-not-found
|
42,193,576
|
tomcat manager in redhat enterprise linux (RHEL7)?
|
I want to use the Tomcat Manager web app ( link ). It is distributed by Apache (e.g. as part of their core tar.gz binary distribution in their download page ). However it seems that the web app is not packaged in any one of the RPM packages offered by Redhat Enterprise Linux 7 (RHEL7). tomcat-jsp-2.2-api.noarch : Apache Tomcat JSP API implementation classes tomcat-lib.noarch : Libraries needed to run the Tomcat Web container tomcat-servlet-3.0-api.noarch : Apache Tomcat Servlet API implementation classes tomcat.noarch : Apache Servlet/JSP Engine, RI for Servlet 3.0/JSP 2.2 API tomcat-el-2.2-api.noarch : Expression Language v2.2 API Is that true? (And Why?) Does that mean we will have to package our own RPM if we want to use the web app and get it deployed via RPM? We are using RHEL 7.3 and Tomcat 7.0.69
|
tomcat manager in redhat enterprise linux (RHEL7)? I want to use the Tomcat Manager web app ( link ). It is distributed by Apache (e.g. as part of their core tar.gz binary distribution in their download page ). However it seems that the web app is not packaged in any one of the RPM packages offered by Redhat Enterprise Linux 7 (RHEL7). tomcat-jsp-2.2-api.noarch : Apache Tomcat JSP API implementation classes tomcat-lib.noarch : Libraries needed to run the Tomcat Web container tomcat-servlet-3.0-api.noarch : Apache Tomcat Servlet API implementation classes tomcat.noarch : Apache Servlet/JSP Engine, RI for Servlet 3.0/JSP 2.2 API tomcat-el-2.2-api.noarch : Expression Language v2.2 API Is that true? (And Why?) Does that mean we will have to package our own RPM if we want to use the web app and get it deployed via RPM? We are using RHEL 7.3 and Tomcat 7.0.69
|
tomcat, tomcat7, rhel, rhel7
| 0
| 842
| 1
|
https://stackoverflow.com/questions/42193576/tomcat-manager-in-redhat-enterprise-linux-rhel7
|
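On RHEL/CentOS 7 the manager and host-manager applications ship in a separate sub-package rather than in the base tomcat package; the package name below is the one used there, but treat its availability in your channel as an assumption. A user with the manager roles still has to be added by hand.

    sudo yum install tomcat-admin-webapps tomcat-webapps

    # add inside <tomcat-users> in /etc/tomcat/tomcat-users.xml:
    #   <user username="admin" password="change-me" roles="manager-gui,manager-script"/>
    sudo systemctl restart tomcat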
40,382,412
|
Install packages without internet Red Hat
|
I need to install Hyperledger on a Red Hat Enterprise Linux server that won't be connected to the internet. What I'll need to install is likely the Go language and Docker; however, given I have no internet connection, I can't just use a package manager. I've read about methods to do this with Ubuntu that mention copying .deb files over or otherwise using tools - would this work with RHEL, and if not, would anyone be able to recommend a way of doing this? (Or any advice for achieving this generally.)
|
Install packages without internet Red Hat I need to install Hyperledger on a Red Hat Enterprise Linux server that won't be connected to the internet. What I'll need to install is likely the Go language and Docker; however, given I have no internet connection, I can't just use a package manager. I've read about methods to do this with Ubuntu that mention copying .deb files over or otherwise using tools - would this work with RHEL, and if not, would anyone be able to recommend a way of doing this? (Or any advice for achieving this generally.)
|
linux, package, redhat, rhel, hyperledger
| 0
| 12,786
| 1
|
https://stackoverflow.com/questions/40382412/install-packages-without-internet-red-hat
|
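The copy-the-packages approach works on RHEL as well: resolve and download the RPMs with their dependencies on a connected machine of the same release, carry them over, and install locally; Go itself can be unpacked from the upstream tarball. A hedged sketch; the package selection and the Go tarball name are placeholders to adapt.

    # on an internet-connected RHEL box of the same release
    sudo yum install yum-utils
    yumdownloader --resolve --destdir=/tmp/offline-rpms docker
    # (alternative: sudo yum install --downloadonly --downloaddir=/tmp/offline-rpms docker)

    # copy /tmp/offline-rpms to the offline server, then
    sudo yum localinstall /tmp/offline-rpms/*.rpm

    # Go from the upstream tarball (placeholder file name)
    sudo tar -C /usr/local -xzf go1.x.linux-amd64.tar.gz
    export PATH=$PATH:/usr/local/go/bin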
37,647,612
|
Installing mysql-devel on RHEL7
|
I have been given a RHEL7 system that has MySQL already installed and running, and I need to install mysql-devel on it. When I run: yum install mysql-devel I get: Transaction check error: file /usr/bin/mysql_config-64 from install of mysql-community-devel-5.7.13-1.el7.x86_64 conflicts with file from package mysql-community-client-5.7.12-1.el7.x86_64 What is the best way to resolve this? I do not want to uninstall the existing running MySQL server. I am not a sysadmin, I am a developer and there is no admin available at this company. Output of yum --showduplicates list mysql-* : $ sudo yum --showduplicates list mysql-* Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Installed Packages mysql-community-client.x86_64 5.7.12-1.el7 @mysql57-community mysql-community-common.x86_64 5.7.12-1.el7 @mysql57-community mysql-community-libs.x86_64 5.7.12-1.el7 @mysql57-community mysql-community-libs-compat.x86_64 5.7.12-1.el7 @mysql57-community mysql-community-server.x86_64 5.7.12-1.el7 @mysql57-community Available Packages MySQL-python.x86_64 1.2.3-11.el7 rhel-7-server-rpms MySQL-zrm.noarch 3.0-6.el7 epel mysql-community-client.i686 5.7.9-1.el7 mysql57-community mysql-community-client.x86_64 5.7.9-1.el7 mysql57-community mysql-community-client.i686 5.7.10-1.el7 mysql57-community mysql-community-client.x86_64 5.7.10-1.el7 mysql57-community mysql-community-client.i686 5.7.11-1.el7 mysql57-community mysql-community-client.x86_64 5.7.11-1.el7 mysql57-community mysql-community-client.i686 5.7.12-1.el7 mysql57-community mysql-community-client.x86_64 5.7.12-1.el7 mysql57-community mysql-community-client.i686 5.7.13-1.el7 mysql57-community mysql-community-client.x86_64 5.7.13-1.el7 mysql57-community mysql-community-common.i686 5.7.9-1.el7 mysql57-community mysql-community-common.x86_64 5.7.9-1.el7 mysql57-community mysql-community-common.i686 5.7.10-1.el7 mysql57-community mysql-community-common.x86_64 5.7.10-1.el7 mysql57-community mysql-community-common.i686 5.7.11-1.el7 mysql57-community mysql-community-common.x86_64 5.7.11-1.el7 mysql57-community mysql-community-common.i686 5.7.12-1.el7 mysql57-community mysql-community-common.x86_64 5.7.12-1.el7 mysql57-community mysql-community-common.i686 5.7.13-1.el7 mysql57-community mysql-community-common.x86_64 5.7.13-1.el7 mysql57-community mysql-community-devel.i686 5.7.9-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.9-1.el7 mysql57-community mysql-community-devel.i686 5.7.10-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.10-1.el7 mysql57-community mysql-community-devel.i686 5.7.11-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.11-1.el7 mysql57-community mysql-community-devel.i686 5.7.12-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.12-1.el7 mysql57-community mysql-community-devel.i686 5.7.13-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.13-1.el7 mysql57-community mysql-community-embedded.i686 5.7.9-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.9-1.el7 mysql57-community mysql-community-embedded.i686 5.7.10-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.10-1.el7 mysql57-community mysql-community-embedded.i686 5.7.11-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.11-1.el7 mysql57-community mysql-community-embedded.i686 5.7.12-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.12-1.el7 mysql57-community mysql-community-embedded.i686 5.7.13-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.13-1.el7 mysql57-community 
mysql-community-embedded-compat.i686 5.7.9-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.9-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.10-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.10-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.11-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.11-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.12-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.12-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.13-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.13-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.9-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.9-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.10-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.10-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.11-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.11-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.12-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.12-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.13-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.13-1.el7 mysql57-community mysql-community-libs.i686 5.7.9-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.9-1.el7 mysql57-community mysql-community-libs.i686 5.7.10-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.10-1.el7 mysql57-community mysql-community-libs.i686 5.7.11-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.11-1.el7 mysql57-community mysql-community-libs.i686 5.7.12-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.12-1.el7 mysql57-community mysql-community-libs.i686 5.7.13-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.13-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.9-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.9-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.10-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.10-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.11-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.11-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.12-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.12-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.13-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.13-1.el7 mysql57-community mysql-community-release.noarch el7-5 mysql-connectors-community mysql-community-release.noarch el7-5 mysql-tools-community mysql-community-release.noarch el7-7 mysql57-community mysql-community-server.x86_64 5.7.9-1.el7 mysql57-community mysql-community-server.x86_64 5.7.10-1.el7 mysql57-community mysql-community-server.x86_64 5.7.11-1.el7 mysql57-community mysql-community-server.x86_64 5.7.12-1.el7 mysql57-community mysql-community-server.x86_64 5.7.13-1.el7 mysql57-community mysql-community-test.x86_64 5.7.9-1.el7 mysql57-community mysql-community-test.x86_64 5.7.10-1.el7 mysql57-community mysql-community-test.x86_64 5.7.11-1.el7 mysql57-community mysql-community-test.x86_64 5.7.12-1.el7 mysql57-community mysql-community-test.x86_64 5.7.13-1.el7 mysql57-community mysql-connector-java.noarch 1:5.1.25-3.el7 rhel-7-server-rpms mysql-connector-odbc.x86_64 5.2.5-6.el7 rhel-7-server-rpms mysql-connector-odbc.x86_64 5.2.6-1.el7 
mysql-connectors-community mysql-connector-odbc.x86_64 5.2.7-1.el7 mysql-connectors-community mysql-connector-odbc.x86_64 5.3.2-1.el7 mysql-connectors-community mysql-connector-odbc.x86_64 5.3.4-1.el7 mysql-connectors-community mysql-connector-odbc.x86_64 5.3.6-1.el7 mysql-connectors-community mysql-connector-odbc-debuginfo.x86_64 5.3.6-1.el7 mysql-connectors-community mysql-connector-odbc-setup.x86_64 5.3.6-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.1.4-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.1.5-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.1.6-1.el7 epel mysql-connector-python.noarch 1.1.6-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.1.7-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.2.2-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.2.3-1.el7 mysql-connectors-community mysql-connector-python.noarch 2.0.1-1.el7 mysql-connectors-community mysql-connector-python.noarch 2.0.2-1.el7 mysql-connectors-community mysql-connector-python.noarch 2.0.3-1.el7 mysql-connectors-community mysql-connector-python.noarch 2.0.4-1.el7 mysql-connectors-community mysql-connector-python.x86_64 2.1.3-1.el7 mysql-connectors-community mysql-connector-python-cext.x86_64 2.1.3-1.el7 mysql-connectors-community mysql-connector-python-debuginfo.x86_64 2.1.3-1.el7 mysql-connectors-community mysql-mmm.noarch 2.2.1-14.el7 epel mysql-mmm-agent.noarch 2.2.1-14.el7 epel mysql-mmm-monitor.noarch 2.2.1-14.el7 epel mysql-mmm-tools.noarch 2.2.1-14.el7 epel mysql-proxy.x86_64 0.8.5-2.el7 epel mysql-proxy-devel.x86_64 0.8.5-2.el7 epel mysql-router.x86_64 2.0.2-1.el7 mysql-tools-community mysql-router.x86_64 2.0.3-1.el7 mysql-tools-community mysql-router-debuginfo.x86_64 2.0.2-1.el7 mysql-tools-community mysql-router-debuginfo.x86_64 2.0.3-1.el7 mysql-tools-community mysql-utilities.noarch 1.3.6-1.el7 epel mysql-utilities.noarch 1.3.6-1.el7 mysql-tools-community mysql-utilities.noarch 1.4.3-1.el7 mysql-tools-community mysql-utilities.noarch 1.4.4-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.2-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.3-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.4-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.5-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.6-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.4.3-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.4.4-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.2-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.3-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.4-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.5-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.6-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.2.3-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.2.4-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.2.5-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.3-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.4-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.5-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.6-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.6-2.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 6.3.3-1.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 6.3.4-1.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 
6.3.5-1.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 6.3.6-1.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 6.3.6-2.el7 mysql-tools-community
|
Installing mysql-devel on RHEL7 I have been given a RHEL7 system that has MySQL already installed and running, and I need to install mysql-devel on it. When I run: yum install mysql-devel I get: Transaction check error: file /usr/bin/mysql_config-64 from install of mysql-community-devel-5.7.13-1.el7.x86_64 conflicts with file from package mysql-community-client-5.7.12-1.el7.x86_64 What is the best way to resolve this? I do not want to uninstall the existing running MySQL server. I am not a sysadmin, I am a developer and there is no admin available at this company. Output of yum --showduplicates list mysql-* : $ sudo yum --showduplicates list mysql-* Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Installed Packages mysql-community-client.x86_64 5.7.12-1.el7 @mysql57-community mysql-community-common.x86_64 5.7.12-1.el7 @mysql57-community mysql-community-libs.x86_64 5.7.12-1.el7 @mysql57-community mysql-community-libs-compat.x86_64 5.7.12-1.el7 @mysql57-community mysql-community-server.x86_64 5.7.12-1.el7 @mysql57-community Available Packages MySQL-python.x86_64 1.2.3-11.el7 rhel-7-server-rpms MySQL-zrm.noarch 3.0-6.el7 epel mysql-community-client.i686 5.7.9-1.el7 mysql57-community mysql-community-client.x86_64 5.7.9-1.el7 mysql57-community mysql-community-client.i686 5.7.10-1.el7 mysql57-community mysql-community-client.x86_64 5.7.10-1.el7 mysql57-community mysql-community-client.i686 5.7.11-1.el7 mysql57-community mysql-community-client.x86_64 5.7.11-1.el7 mysql57-community mysql-community-client.i686 5.7.12-1.el7 mysql57-community mysql-community-client.x86_64 5.7.12-1.el7 mysql57-community mysql-community-client.i686 5.7.13-1.el7 mysql57-community mysql-community-client.x86_64 5.7.13-1.el7 mysql57-community mysql-community-common.i686 5.7.9-1.el7 mysql57-community mysql-community-common.x86_64 5.7.9-1.el7 mysql57-community mysql-community-common.i686 5.7.10-1.el7 mysql57-community mysql-community-common.x86_64 5.7.10-1.el7 mysql57-community mysql-community-common.i686 5.7.11-1.el7 mysql57-community mysql-community-common.x86_64 5.7.11-1.el7 mysql57-community mysql-community-common.i686 5.7.12-1.el7 mysql57-community mysql-community-common.x86_64 5.7.12-1.el7 mysql57-community mysql-community-common.i686 5.7.13-1.el7 mysql57-community mysql-community-common.x86_64 5.7.13-1.el7 mysql57-community mysql-community-devel.i686 5.7.9-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.9-1.el7 mysql57-community mysql-community-devel.i686 5.7.10-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.10-1.el7 mysql57-community mysql-community-devel.i686 5.7.11-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.11-1.el7 mysql57-community mysql-community-devel.i686 5.7.12-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.12-1.el7 mysql57-community mysql-community-devel.i686 5.7.13-1.el7 mysql57-community mysql-community-devel.x86_64 5.7.13-1.el7 mysql57-community mysql-community-embedded.i686 5.7.9-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.9-1.el7 mysql57-community mysql-community-embedded.i686 5.7.10-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.10-1.el7 mysql57-community mysql-community-embedded.i686 5.7.11-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.11-1.el7 mysql57-community mysql-community-embedded.i686 5.7.12-1.el7 mysql57-community mysql-community-embedded.x86_64 5.7.12-1.el7 mysql57-community mysql-community-embedded.i686 5.7.13-1.el7 mysql57-community mysql-community-embedded.x86_64 
5.7.13-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.9-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.9-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.10-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.10-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.11-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.11-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.12-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.12-1.el7 mysql57-community mysql-community-embedded-compat.i686 5.7.13-1.el7 mysql57-community mysql-community-embedded-compat.x86_64 5.7.13-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.9-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.9-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.10-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.10-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.11-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.11-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.12-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.12-1.el7 mysql57-community mysql-community-embedded-devel.i686 5.7.13-1.el7 mysql57-community mysql-community-embedded-devel.x86_64 5.7.13-1.el7 mysql57-community mysql-community-libs.i686 5.7.9-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.9-1.el7 mysql57-community mysql-community-libs.i686 5.7.10-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.10-1.el7 mysql57-community mysql-community-libs.i686 5.7.11-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.11-1.el7 mysql57-community mysql-community-libs.i686 5.7.12-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.12-1.el7 mysql57-community mysql-community-libs.i686 5.7.13-1.el7 mysql57-community mysql-community-libs.x86_64 5.7.13-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.9-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.9-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.10-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.10-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.11-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.11-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.12-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.12-1.el7 mysql57-community mysql-community-libs-compat.i686 5.7.13-1.el7 mysql57-community mysql-community-libs-compat.x86_64 5.7.13-1.el7 mysql57-community mysql-community-release.noarch el7-5 mysql-connectors-community mysql-community-release.noarch el7-5 mysql-tools-community mysql-community-release.noarch el7-7 mysql57-community mysql-community-server.x86_64 5.7.9-1.el7 mysql57-community mysql-community-server.x86_64 5.7.10-1.el7 mysql57-community mysql-community-server.x86_64 5.7.11-1.el7 mysql57-community mysql-community-server.x86_64 5.7.12-1.el7 mysql57-community mysql-community-server.x86_64 5.7.13-1.el7 mysql57-community mysql-community-test.x86_64 5.7.9-1.el7 mysql57-community mysql-community-test.x86_64 5.7.10-1.el7 mysql57-community mysql-community-test.x86_64 5.7.11-1.el7 mysql57-community mysql-community-test.x86_64 5.7.12-1.el7 mysql57-community mysql-community-test.x86_64 5.7.13-1.el7 mysql57-community mysql-connector-java.noarch 1:5.1.25-3.el7 rhel-7-server-rpms mysql-connector-odbc.x86_64 5.2.5-6.el7 rhel-7-server-rpms mysql-connector-odbc.x86_64 
5.2.6-1.el7 mysql-connectors-community mysql-connector-odbc.x86_64 5.2.7-1.el7 mysql-connectors-community mysql-connector-odbc.x86_64 5.3.2-1.el7 mysql-connectors-community mysql-connector-odbc.x86_64 5.3.4-1.el7 mysql-connectors-community mysql-connector-odbc.x86_64 5.3.6-1.el7 mysql-connectors-community mysql-connector-odbc-debuginfo.x86_64 5.3.6-1.el7 mysql-connectors-community mysql-connector-odbc-setup.x86_64 5.3.6-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.1.4-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.1.5-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.1.6-1.el7 epel mysql-connector-python.noarch 1.1.6-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.1.7-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.2.2-1.el7 mysql-connectors-community mysql-connector-python.noarch 1.2.3-1.el7 mysql-connectors-community mysql-connector-python.noarch 2.0.1-1.el7 mysql-connectors-community mysql-connector-python.noarch 2.0.2-1.el7 mysql-connectors-community mysql-connector-python.noarch 2.0.3-1.el7 mysql-connectors-community mysql-connector-python.noarch 2.0.4-1.el7 mysql-connectors-community mysql-connector-python.x86_64 2.1.3-1.el7 mysql-connectors-community mysql-connector-python-cext.x86_64 2.1.3-1.el7 mysql-connectors-community mysql-connector-python-debuginfo.x86_64 2.1.3-1.el7 mysql-connectors-community mysql-mmm.noarch 2.2.1-14.el7 epel mysql-mmm-agent.noarch 2.2.1-14.el7 epel mysql-mmm-monitor.noarch 2.2.1-14.el7 epel mysql-mmm-tools.noarch 2.2.1-14.el7 epel mysql-proxy.x86_64 0.8.5-2.el7 epel mysql-proxy-devel.x86_64 0.8.5-2.el7 epel mysql-router.x86_64 2.0.2-1.el7 mysql-tools-community mysql-router.x86_64 2.0.3-1.el7 mysql-tools-community mysql-router-debuginfo.x86_64 2.0.2-1.el7 mysql-tools-community mysql-router-debuginfo.x86_64 2.0.3-1.el7 mysql-tools-community mysql-utilities.noarch 1.3.6-1.el7 epel mysql-utilities.noarch 1.3.6-1.el7 mysql-tools-community mysql-utilities.noarch 1.4.3-1.el7 mysql-tools-community mysql-utilities.noarch 1.4.4-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.2-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.3-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.4-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.5-1.el7 mysql-tools-community mysql-utilities.noarch 1.5.6-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.4.3-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.4.4-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.2-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.3-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.4-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.5-1.el7 mysql-tools-community mysql-utilities-extra.noarch 1.5.6-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.2.3-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.2.4-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.2.5-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.3-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.4-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.5-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.6-1.el7 mysql-tools-community mysql-workbench-community.x86_64 6.3.6-2.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 6.3.3-1.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 6.3.4-1.el7 mysql-tools-community 
mysql-workbench-community-debuginfo.x86_64 6.3.5-1.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 6.3.6-1.el7 mysql-tools-community mysql-workbench-community-debuginfo.x86_64 6.3.6-2.el7 mysql-tools-community
|
mysql, rhel, rhel7
| 0
| 5,843
| 1
|
https://stackoverflow.com/questions/37647612/installing-mysql-devel-on-rhel7
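The transaction check error above is simple version skew: yum wants to pull mysql-community-devel 5.7.13 while the installed client/server packages are still 5.7.12, and both releases ship /usr/bin/mysql_config-64. Two hedged ways out (package names are taken from the listing above; which one fits depends on whether updating the running server is acceptable):

```bash
# Option A (assumes a minor in-place update of the running 5.7.12 server is acceptable):
sudo yum update mysql-community-*          # client/server/libs move to 5.7.13 together
sudo yum install mysql-community-devel     # devel now matches the installed release

# Option B: keep the server at 5.7.12 and pin devel to the same release instead
sudo yum install mysql-community-devel-5.7.12-1.el7
```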
|
37,281,173
|
Different mail from shell script and cron running the script
|
I have a Perl script that executes a command. I am saving the output of this command to an array and checking the content of the output to mail me a success or a failure message. This works fine when I run the script myself. But when I schedule the script to run as a cron job, I get a failure mail irrespective of the output. I am using absolute paths everywhere in the script, and I know cron is executing it, since I see in my logs that the command is executed successfully. Can someone tell me why there is a difference? One thing to notice: the command takes about a minute to finish executing, and the script waits until the command is executed before checking the output. So I see a little delay in the mail when I run the script myself. But from cron, I get a mail immediately at run time. I am assuming cron isn't waiting for the command to execute before checking the output. Could that be the case?
|
Different mail from shell script and cron running the script I have a Perl script that executes a command. I am saving the output of this command to an array and checking the content of the output to mail me a success or a failure message. This works fine when I run the script myself. But when I schedule the script to run as a cron job, I get a failure mail irrespective of the output. I am using absolute paths everywhere in the script, and I know cron is executing it, since I see in my logs that the command is executed successfully. Can someone tell me why there is a difference? One thing to notice: the command takes about a minute to finish executing, and the script waits until the command is executed before checking the output. So I see a little delay in the mail when I run the script myself. But from cron, I get a mail immediately at run time. I am assuming cron isn't waiting for the command to execute before checking the output. Could that be the case?
|
perl, email, cron, rhel
| 0
| 131
| 1
|
https://stackoverflow.com/questions/37281173/different-mail-from-shell-script-and-cron-running-the-script
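The usual suspects here are cron's stripped-down environment (minimal PATH, no profile) and a check that only looks at STDOUT while the real output goes to STDERR, rather than cron refusing to wait; Perl's backticks and system() do block until the child exits. A minimal bash wrapper sketch for the crontab (the command, the mail address and the 'SUCCESS' marker are placeholders) that captures both streams and only mails once the exit status is known:

```bash
#!/bin/bash
# cron-wrapper.sh - hypothetical wrapper: give the job a sane environment and
# report based on its combined output and exit status.
export PATH=/usr/local/bin:/usr/bin:/bin

OUTPUT=$(/usr/local/bin/long_running_command 2>&1)   # blocks until the command finishes
STATUS=$?

if [ "$STATUS" -eq 0 ] && grep -q 'SUCCESS' <<< "$OUTPUT"; then
    mail -s "job succeeded" me@example.com <<< "$OUTPUT"
else
    mail -s "job FAILED (exit $STATUS)" me@example.com <<< "$OUTPUT"
fi
```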
|
36,938,018
|
DSRA0304E: XAException occurred. Could not load the DLL sqljdbc.dll, or one of the DLLs it references
|
I am getting an error when hitting the application URL: Error Code: DSRA0304E: XAException occurred. XAException contents and details are: The cause is : null. DSRA0302E: XAException occurred. Error code is: XAER_RMERR (-3). Exception is: com.microsoft.sqlserver.jdbc.SQLServerException: Could not load the DLL sqljdbc.dll, or one of the DLLs it references. Reason: 126(The specified module could not be found.) Previously I was able to access the application, but after restarting the services I am no longer able to access it. I am using IBM WAS 8.5.5.8 on Linux RHEL 6.7.
|
DSRA0304E: XAException occurred. Could not load the DLL sqljdbc.dll, or one of the DLLs it references I am getting an error when hitting the application URL: Error Code: DSRA0304E: XAException occurred. XAException contents and details are: The cause is : null. DSRA0302E: XAException occurred. Error code is: XAER_RMERR (-3). Exception is: com.microsoft.sqlserver.jdbc.SQLServerException: Could not load the DLL sqljdbc.dll, or one of the DLLs it references. Reason: 126(The specified module could not be found.) Previously I was able to access the application, but after restarting the services I am no longer able to access it. I am using IBM WAS 8.5.5.8 on Linux RHEL 6.7.
|
rhel, ibm-was
| 0
| 1,585
| 2
|
https://stackoverflow.com/questions/36938018/dsra0304e-xaexception-occurred-could-not-load-the-dll-sqljdbc-dll-or-one-of-t
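On Linux the Microsoft JDBC driver is pure Java, so a complaint about sqljdbc.dll usually points at the WebSphere JDBC provider or data source carrying a native library path (or a Windows-only option such as integrated authentication) that references the DLL. A hedged first check, with example WAS paths that need adjusting to the actual profile/cell names, is to see what the provider definition actually points at:

```bash
# Paths below are examples only.
find /opt/IBM/WebSphere -name 'sqljdbc*' 2>/dev/null
grep -iR 'nativepath\|sqljdbc' \
    /opt/IBM/WebSphere/AppServer/profiles/*/config/cells/*/resources.xml
# On RHEL you would expect an empty native path and a classpath pointing at the
# driver jar (e.g. sqljdbc4.jar); anything referencing sqljdbc.dll is a leftover
# from a Windows-style configuration.
```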
|
36,655,281
|
Openshift Origin run-app against insecure registry yields stuck pod with "Error while pulling image"
|
I am using Openshift Origin in a Docker container and pulled in an image from the Docker registry (a container on the same RHEL host VM) using: oc new-app --insecure-registry=true --docker-image=mtl-vm375:5000/jenkins:1.0 That command seemed to work fine at the time. However, the pod stays as "ContainerCreating" and the result from kubectl describe pods: OPENSHIFT_DEPLOYMENT_NAME: jenkins-1 OPENSHIFT_DEPLOYMENT_NAMESPACE: default Conditions: Type Status Ready False Volumes: deployer-token-3bls9: Type: Secret (a volume populated by a Secret) SecretName: deployer-token-3bls9 Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 2h 4m 33 {kubelet mtl-vm375} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "Error while pulling image: Get [URL] dial tcp 10.230.22.20:443: connection refused" 2h 6s 652 {kubelet mtl-vm375} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-pod:v1.1.5\"" Has an error that shows it is trying to use https, which I am guessing (but am not sure) is the error, as I do not have https correctly set up with certificates yet: Per other advice on Stackoverflow, I have added these environment variables to the Origin image: KUBE_ENABLE_INSECURE_REGISTRY=true EXTRA_DOCKER_OPTS=--insecure-registry I have also had similar results with: KUBE_ENABLE_INSECURE_REGISTRY=true\ EXTRA_DOCKER_OPTS="--insecure-registry 10.230.22.20" Version information: [root@mtl-vm375 origin]# oc version oc v1.1.5-52-gd58f979 kubernetes v1.2.0-36-g4a3f9c5 and [root@mtl-vm375 ~]# docker version Client: Version: 1.8.2-el7.centos API version: 1.20 Package Version: docker-1.8.2-10.el7.centos.x86_64 Go version: go1.4.2 Git commit: a01dc02/1.8.2 Built: OS/Arch: linux/amd64 Server: Version: 1.8.2-el7.centos API version: 1.20 Package Version: Go version: go1.4.2 Git commit: a01dc02/1.8.2 Built: OS/Arch: linux/amd64 Thanks in advance, John
|
Openshift Origin run-app against insecure registry yields stuck pod with "Error while pulling image" I am using Openshift Origin in a Docker container and pulled in an image from the Docker registry (a container on the same RHEL host VM) using: oc new-app --insecure-registry=true --docker-image=mtl-vm375:5000/jenkins:1.0 That command seemed to work fine at the time. However, the pod stays as "ContainerCreating" and the result from kubectl describe pods: OPENSHIFT_DEPLOYMENT_NAME: jenkins-1 OPENSHIFT_DEPLOYMENT_NAMESPACE: default Conditions: Type Status Ready False Volumes: deployer-token-3bls9: Type: Secret (a volume populated by a Secret) SecretName: deployer-token-3bls9 Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 2h 4m 33 {kubelet mtl-vm375} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "Error while pulling image: Get [URL] dial tcp 10.230.22.20:443: connection refused" 2h 6s 652 {kubelet mtl-vm375} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-pod:v1.1.5\"" Has an error that shows it is trying to use https, which I am guessing (but am not sure) is the error, as I do not have https correctly set up with certificates yet: Per other advice on Stackoverflow, I have added these environment variables to the Origin image: KUBE_ENABLE_INSECURE_REGISTRY=true EXTRA_DOCKER_OPTS=--insecure-registry I have also had similar results with: KUBE_ENABLE_INSECURE_REGISTRY=true\ EXTRA_DOCKER_OPTS="--insecure-registry 10.230.22.20" Version information: [root@mtl-vm375 origin]# oc version oc v1.1.5-52-gd58f979 kubernetes v1.2.0-36-g4a3f9c5 and [root@mtl-vm375 ~]# docker version Client: Version: 1.8.2-el7.centos API version: 1.20 Package Version: docker-1.8.2-10.el7.centos.x86_64 Go version: go1.4.2 Git commit: a01dc02/1.8.2 Built: OS/Arch: linux/amd64 Server: Version: 1.8.2-el7.centos API version: 1.20 Package Version: Go version: go1.4.2 Git commit: a01dc02/1.8.2 Built: OS/Arch: linux/amd64 Thanks in advance, John
|
docker, rhel, openshift-origin
| 0
| 1,508
| 2
|
https://stackoverflow.com/questions/36655281/openshift-origin-run-app-against-insecure-registry-yields-stuck-pod-with-error
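The failing pull in the events (openshift/origin-pod:v1.1.5 via 10.230.22.20:443) goes through the node host's own Docker daemon, so the --insecure-registry flag has to reach that daemon's startup options; KUBE_ENABLE_INSECURE_REGISTRY and EXTRA_DOCKER_OPTS are conventions from other setups and, as far as I can tell, the RHEL/CentOS docker-1.8 package does not read them. A hedged sketch for the host (registry addresses taken from the question, file layout as shipped by the RHEL/CentOS docker package):

```bash
# /etc/sysconfig/docker on the RHEL 7 host running the node:
#   OPTIONS='--selinux-enabled --insecure-registry mtl-vm375:5000 --insecure-registry 10.230.22.20:5000'
# (the CentOS package also understands a separate INSECURE_REGISTRY= variable)
sudo systemctl restart docker

# Then remove the stuck pod so the deployment is retried:
oc get pods
oc delete pod jenkins-1-deploy   # pod name is an example - use whatever is stuck
```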
|
35,222,229
|
How to install Git on redhat RHEL 6.5
|
I'm struggling to install Git on my RHEL 6.5. First I tried 'yum install git' and it gave me "no package found"; then I tried adding some repos using the rpm -Uvh command, and none of that worked. Then I found this link [URL] and tried to follow it, but got nowhere, because I can't even do step 1: yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel keeps saying "no package found, nothing to do". Very frustrating, why so difficult... All I want to do is create a build server by installing Git, Node.js, Gulp, Bower and npm, and use Jenkins to auto-build the project when new code is committed to GitHub. Thanks for any help.
|
How to install Git on redhat RHEL 6.5 I'm struggling to install Git on my RHEL 6.5. First I tried 'yum install git' and it gave me "no package found"; then I tried adding some repos using the rpm -Uvh command, and none of that worked. Then I found this link [URL] and tried to follow it, but got nowhere, because I can't even do step 1: yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel keeps saying "no package found, nothing to do". Very frustrating, why so difficult... All I want to do is create a build server by installing Git, Node.js, Gulp, Bower and npm, and use Jenkins to auto-build the project when new code is committed to GitHub. Thanks for any help.
|
git, github, redhat, rhel, rhel6
| 0
| 5,355
| 1
|
https://stackoverflow.com/questions/35222229/how-to-install-git-on-redhat-rhel-6-5
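"No package found" for something as basic as curl-devel almost always means the machine has no enabled RHEL repositories at all (no subscription attached), rather than anything specific to Git. A hedged sequence, assuming a valid Red Hat subscription is available; otherwise a third-party repo or a source build is the fallback:

```bash
# See what yum can actually reach first:
yum repolist

# Attach the box to a subscription and enable the base channel (RHEL 6 names):
sudo subscription-manager register
sudo subscription-manager attach --auto
sudo subscription-manager repos --enable=rhel-6-server-rpms

sudo yum install git
# git alone is enough for Jenkins checkouts; curl-devel/expat-devel/... are only
# needed if you really do want to compile git from source.
```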
|
34,911,544
|
chef-client not starting with chef-client cookbook RHEL 6.7
|
I am using the 4.3.2 chef-client cookbook, chef-client 12.6, and my run list is - role chef-client, and my chef-client role is as follows: chef-client chef-client::config chef-client::delete_validation Link to the cookbook - [URL] My os is rhel 6.7 Also, if it matters, I am using Packer to create the image when this issue occurs. I log-on to the VM, and I am unable to start the service either. During the chef-client run, it will error with the following 'mazon-ebs: ================================================================================ amazon-ebs: Error executing action start on resource 'service[chef-client]' amazon-ebs: ================================================================================ amazon-ebs: amazon-ebs: Mixlib::ShellOut::ShellCommandFailed amazon-ebs: ------------------------------------ amazon-ebs: Expected process to exit with [0], but received '6' amazon-ebs: ---- Begin output of /sbin/service chef-client start ---- amazon-ebs: STDOUT: amazon-ebs: STDERR: amazon-ebs: ---- End output of /sbin/service chef-client start ---- amazon-ebs: Ran /sbin/service chef-client start returned 6 amazon-ebs: amazon-ebs: Resource Declaration: amazon-ebs: --------------------- amazon-ebs: # In /var/chef/cache/cookbooks/chef-client/recipes/init_service.rb amazon-ebs: amazon-ebs: 32: service 'chef-client' do amazon-ebs: 33: supports :status => true, :restart => true amazon-ebs: 34: action [:enable, :start] amazon-ebs: 35: end amazon-ebs: amazon-ebs: Compiled Resource: amazon-ebs: ------------------ amazon-ebs: # Declared in /var/chef/cache/cookbooks/chef-client/recipes/init_service.rb:32:in from_file' amazon-ebs: amazon-ebs: service("chef-client") do amazon-ebs: action [:enable, :start] amazon-ebs: updated true amazon-ebs: supports {:status=>true, :restart=>true} amazon-ebs: retries 0 amazon-ebs: retry_delay 2 amazon-ebs: default_guard_interpreter :default amazon-ebs: service_name "chef-client" amazon-ebs: enabled true amazon-ebs: pattern "chef-client" amazon-ebs: declared_type :service amazon-ebs: cookbook_name "chef-client" amazon-ebs: recipe_name "init_service" amazon-ebs: end amazon-ebs: amazon-ebs: [2016-01-20T16:49:04-05:00] INFO: Running queued delayed notifications before re-raising exception amazon-ebs: [2016-01-20T16:49:04-05:00] INFO: template[/etc/init.d/chef-client] sending restart action to service[chef-client] (delayed) amazon-ebs: * service[chef-client] action restart[2016-01-20T16:49:04-05:00] INFO: Processing service[chef-client] action restart (chef-client::init_service line 32) amazon-ebs: amazon-ebs: amazon-ebs: ================================================================================ amazon-ebs: Error executing action restart` on resource 'service[chef-client]' amazon-ebs: ================================================================================ amazon-ebs: amazon-ebs: Mixlib::ShellOut::ShellCommandFailed amazon-ebs: ------------------------------------ amazon-ebs: Expected process to exit with [0], but received '6' amazon-ebs: ---- Begin output of /sbin/service chef-client restart ---- amazon-ebs: STDOUT: Stopping chef-client: [FAILED] amazon-ebs: STDERR: amazon-ebs: ---- End output of /sbin/service chef-client restart ---- amazon-ebs: Ran /sbin/service chef-client restart returned 6 amazon-ebs: amazon-ebs: Resource Declaration: amazon-ebs: --------------------- amazon-ebs: # In /var/chef/cache/cookbooks/chef-client/recipes/init_service.rb amazon-ebs: amazon-ebs: 32: service 'chef-client' do amazon-ebs: 33: supports :status => true, 
:restart => true amazon-ebs: 34: action [:enable, :start] amazon-ebs: 35: end amazon-ebs: amazon-ebs: Compiled Resource: amazon-ebs: ------------------ amazon-ebs: # Declared in /var/chef/cache/cookbooks/chef-client/recipes/init_service.rb:32:in from_file amazon-ebs: amazon-ebs: service("chef-client") do amazon-ebs: action [:enable, :start] amazon-ebs: updated true amazon-ebs: supports {:status=>true, :restart=>true} amazon-ebs: retries 0 amazon-ebs: retry_delay 2 amazon-ebs: default_guard_interpreter :default amazon-ebs: service_name "chef-client" amazon-ebs: enabled true amazon-ebs: pattern "chef-client" amazon-ebs: declared_type :service amazon-ebs: cookbook_name "chef-client" amazon-ebs: recipe_name "init_service" amazon-ebs: end amazon-ebs: amazon-ebs:
|
chef-client not starting with chef-client cookbook RHEL 6.7 I am using the 4.3.2 chef-client cookbook, chef-client 12.6, and my run list is - role chef-client, and my chef-client role is as follows: chef-client chef-client::config chef-client::delete_validation Link to the cookbook - [URL] My os is rhel 6.7 Also, if it matters, I am using Packer to create the image when this issue occurs. I log-on to the VM, and I am unable to start the service either. During the chef-client run, it will error with the following 'mazon-ebs: ================================================================================ amazon-ebs: Error executing action start on resource 'service[chef-client]' amazon-ebs: ================================================================================ amazon-ebs: amazon-ebs: Mixlib::ShellOut::ShellCommandFailed amazon-ebs: ------------------------------------ amazon-ebs: Expected process to exit with [0], but received '6' amazon-ebs: ---- Begin output of /sbin/service chef-client start ---- amazon-ebs: STDOUT: amazon-ebs: STDERR: amazon-ebs: ---- End output of /sbin/service chef-client start ---- amazon-ebs: Ran /sbin/service chef-client start returned 6 amazon-ebs: amazon-ebs: Resource Declaration: amazon-ebs: --------------------- amazon-ebs: # In /var/chef/cache/cookbooks/chef-client/recipes/init_service.rb amazon-ebs: amazon-ebs: 32: service 'chef-client' do amazon-ebs: 33: supports :status => true, :restart => true amazon-ebs: 34: action [:enable, :start] amazon-ebs: 35: end amazon-ebs: amazon-ebs: Compiled Resource: amazon-ebs: ------------------ amazon-ebs: # Declared in /var/chef/cache/cookbooks/chef-client/recipes/init_service.rb:32:in from_file' amazon-ebs: amazon-ebs: service("chef-client") do amazon-ebs: action [:enable, :start] amazon-ebs: updated true amazon-ebs: supports {:status=>true, :restart=>true} amazon-ebs: retries 0 amazon-ebs: retry_delay 2 amazon-ebs: default_guard_interpreter :default amazon-ebs: service_name "chef-client" amazon-ebs: enabled true amazon-ebs: pattern "chef-client" amazon-ebs: declared_type :service amazon-ebs: cookbook_name "chef-client" amazon-ebs: recipe_name "init_service" amazon-ebs: end amazon-ebs: amazon-ebs: [2016-01-20T16:49:04-05:00] INFO: Running queued delayed notifications before re-raising exception amazon-ebs: [2016-01-20T16:49:04-05:00] INFO: template[/etc/init.d/chef-client] sending restart action to service[chef-client] (delayed) amazon-ebs: * service[chef-client] action restart[2016-01-20T16:49:04-05:00] INFO: Processing service[chef-client] action restart (chef-client::init_service line 32) amazon-ebs: amazon-ebs: amazon-ebs: ================================================================================ amazon-ebs: Error executing action restart` on resource 'service[chef-client]' amazon-ebs: ================================================================================ amazon-ebs: amazon-ebs: Mixlib::ShellOut::ShellCommandFailed amazon-ebs: ------------------------------------ amazon-ebs: Expected process to exit with [0], but received '6' amazon-ebs: ---- Begin output of /sbin/service chef-client restart ---- amazon-ebs: STDOUT: Stopping chef-client: [FAILED] amazon-ebs: STDERR: amazon-ebs: ---- End output of /sbin/service chef-client restart ---- amazon-ebs: Ran /sbin/service chef-client restart returned 6 amazon-ebs: amazon-ebs: Resource Declaration: amazon-ebs: --------------------- amazon-ebs: # In /var/chef/cache/cookbooks/chef-client/recipes/init_service.rb amazon-ebs: amazon-ebs: 32: service 
'chef-client' do amazon-ebs: 33: supports :status => true, :restart => true amazon-ebs: 34: action [:enable, :start] amazon-ebs: 35: end amazon-ebs: amazon-ebs: Compiled Resource: amazon-ebs: ------------------ amazon-ebs: # Declared in /var/chef/cache/cookbooks/chef-client/recipes/init_service.rb:32:in from_file amazon-ebs: amazon-ebs: service("chef-client") do amazon-ebs: action [:enable, :start] amazon-ebs: updated true amazon-ebs: supports {:status=>true, :restart=>true} amazon-ebs: retries 0 amazon-ebs: retry_delay 2 amazon-ebs: default_guard_interpreter :default amazon-ebs: service_name "chef-client" amazon-ebs: enabled true amazon-ebs: pattern "chef-client" amazon-ebs: declared_type :service amazon-ebs: cookbook_name "chef-client" amazon-ebs: recipe_name "init_service" amazon-ebs: end amazon-ebs: amazon-ebs:
|
ruby, chef-infra, rhel
| 0
| 515
| 1
|
https://stackoverflow.com/questions/34911544/chef-client-not-starting-with-chef-client-cookbook-rhel-6-7
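Exit status 6 from a Red Hat init script conventionally means "program is not configured", so the quickest route is to run the generated init script by hand and see which check trips. A hedged debugging sketch (the file locations are what the chef-client cookbook's init_service recipe normally writes, so treat them as assumptions):

```bash
# Trace the init script to see exactly where it bails out with 6:
sudo bash -x /etc/init.d/chef-client start

# The script sources its defaults from sysconfig - confirm the paths it expects exist:
cat /etc/sysconfig/chef-client      # typically CONFIG=/etc/chef/client.rb, LOGFILE, PIDFILE
ls -l /etc/chef/client.rb /var/log/chef /var/run/chef 2>/dev/null

# Finally run the daemon directly to surface any config/registration error
# (relevant for Packer builds, where the node may not be registered yet):
sudo /usr/bin/chef-client -c /etc/chef/client.rb -d -L /var/log/chef/client.log
```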
|
33,166,204
|
How to find commands ran in a particular day from a particular username
|
I am able to find out from the secure log who logged in from where using ssh, and the last command also confirmed that. Is there any way to find out what commands were run in that ssh session?
|
How to find commands ran in a particular day from a particular username I am able to find out from the secure log who logged in from where using ssh, and the last command also confirmed that. Is there any way to find out what commands were run in that ssh session?
|
linux, shell, unix, rhel
| 0
| 746
| 2
|
https://stackoverflow.com/questions/33166204/how-to-find-commands-ran-in-a-particular-day-from-a-particular-username
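By default RHEL only logs the ssh login itself; per-command history has to come from something that was already recording when the session happened (the user's shell history, process accounting, or auditd). Two hedged options for capturing it going forward, with a placeholder username/UID:

```bash
# 1) Process accounting - records every command with user and timestamp:
sudo yum install psacct
sudo service psacct start && sudo chkconfig psacct on
lastcomm --user someuser                 # later: commands run by that user

# 2) auditd - add a rule keyed on the login UID, then query it:
sudo auditctl -a exit,always -F arch=b64 -S execve -F auid=1000 -k usercmds
sudo ausearch -k usercmds -ts today -i   # interpreted list of execve events
```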
|
32,316,744
|
rpmbuild can't find the files I unpack from source
|
I am building a custom package which for the most part has a couple of dependencies and install my own files and scripts. I have those files in a tar file and after attempting the build I can see those files unpacked in the rpmbuild/BUILD directory. However there seems to be a problem with the %files directive as any file within is listed as error: File not found . The relevant section of my spec file looks like this: %prep %setup %install mkdir -p %{buildroot}/etc/collectd/ mkdir -p %{buildroot}/usr/bin/ mkdir -p %{buildroot}/usr/bin/collectd.conf.d/ install -m 777 collectd.conf.custom %{_builddir}/%{name}-%{version}/etc/collectd/ # list files owned by the package here %files %defattr(-,root,root) %config /etc/collectd.conf.custom %config /etc/collectd.d/http.conf %config /etc/collectd.d/csv.conf /usr/local/bin/my-plugin.py /usr/local/bin/my-script Like I said these files unpack to /BUILD but the builder fails in the %install directive after it executes the three mkdir statements. I am only trying to install one of the files in the above script so I can more easily tell that it succeeded. I consistently get the following error no matter what I try: + install -m 777 collectd.conf.turbine /home/vagrant/rpmbuild/BUILD/my-package-1.1/etc/collectd/ install: cannot stat `collectd.conf.custom': No such file or directory This file is in this directory as I checked using ls but for some reason I keep getting this error. EDIT: My %.spec file is as follows: Summary: my-package Collectd Name: my-package-collectd Version: 1.1 Release: Public Group: Applications/System License: Public Requires: collectd BuildArch: noarch BuildRoot: %{_tmppath}/%{name}-%{version} Source: %{name}-%{version}.tar.gz %prep %setup %install rm -rf %{buildroot} mkdir -p %{buildroot}/etc/collectd/ mkdir -p %{buildroot}/etc/collectd/collectd.conf.d/ mkdir -p %{buildroot}/usr/bin/ install -m 777 %{_builddir}/%{name}-%{version}/etc/collectd/collectd.conf.custom %{buildroot}/etc/collectd/ install -m 777 %{_builddir}/%{name}-%{version}/etc/collectd/collectd.conf.d/csv.conf %{buildroot}/etc/collectd/collectd.conf.d/ install -m 777 %{_builddir}/%{name}-%{version}/etc/collectd/collectd.conf.d/http.conf %{buildroot}/etc/collectd/collectd.conf.d/ # list files owned by the package here %files %defattr(-,root,root) %config /etc/collectd.conf.my-package %config /etc/collectd.d/http.conf %config /etc/collectd.d/csv.conf /usr/local/bin/cloudhealth.py /usr/local/bin/my-package-collectd
|
rpmbuild can't find the files I unpack from source I am building a custom package which for the most part has a couple of dependencies and install my own files and scripts. I have those files in a tar file and after attempting the build I can see those files unpacked in the rpmbuild/BUILD directory. However there seems to be a problem with the %files directive as any file within is listed as error: File not found . The relevant section of my spec file looks like this: %prep %setup %install mkdir -p %{buildroot}/etc/collectd/ mkdir -p %{buildroot}/usr/bin/ mkdir -p %{buildroot}/usr/bin/collectd.conf.d/ install -m 777 collectd.conf.custom %{_builddir}/%{name}-%{version}/etc/collectd/ # list files owned by the package here %files %defattr(-,root,root) %config /etc/collectd.conf.custom %config /etc/collectd.d/http.conf %config /etc/collectd.d/csv.conf /usr/local/bin/my-plugin.py /usr/local/bin/my-script Like I said these files unpack to /BUILD but the builder fails in the %install directive after it executes the three mkdir statements. I am only trying to install one of the files in the above script so I can more easily tell that it succeeded. I consistently get the following error no matter what I try: + install -m 777 collectd.conf.turbine /home/vagrant/rpmbuild/BUILD/my-package-1.1/etc/collectd/ install: cannot stat `collectd.conf.custom': No such file or directory This file is in this directory as I checked using ls but for some reason I keep getting this error. EDIT: My %.spec file is as follows: Summary: my-package Collectd Name: my-package-collectd Version: 1.1 Release: Public Group: Applications/System License: Public Requires: collectd BuildArch: noarch BuildRoot: %{_tmppath}/%{name}-%{version} Source: %{name}-%{version}.tar.gz %prep %setup %install rm -rf %{buildroot} mkdir -p %{buildroot}/etc/collectd/ mkdir -p %{buildroot}/etc/collectd/collectd.conf.d/ mkdir -p %{buildroot}/usr/bin/ install -m 777 %{_builddir}/%{name}-%{version}/etc/collectd/collectd.conf.custom %{buildroot}/etc/collectd/ install -m 777 %{_builddir}/%{name}-%{version}/etc/collectd/collectd.conf.d/csv.conf %{buildroot}/etc/collectd/collectd.conf.d/ install -m 777 %{_builddir}/%{name}-%{version}/etc/collectd/collectd.conf.d/http.conf %{buildroot}/etc/collectd/collectd.conf.d/ # list files owned by the package here %files %defattr(-,root,root) %config /etc/collectd.conf.my-package %config /etc/collectd.d/http.conf %config /etc/collectd.d/csv.conf /usr/local/bin/cloudhealth.py /usr/local/bin/my-package-collectd
|
rpm, yum, rhel, rpmbuild
| 0
| 3,287
| 1
|
https://stackoverflow.com/questions/32316744/rpmbuild-cant-find-the-files-i-unpack-from-source
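Two things are going on: %install runs with the unpacked source directory as its working directory, so each file has to be referenced by the relative path it has inside the tarball (etc/collectd/collectd.conf.custom, not the bare filename) and copied into %{buildroot}; and %files must then list exactly the paths that exist under the buildroot, including the two scripts, which are never installed at all in the spec above. A hedged reshuffle of the two sections, assuming the tarball layout implied by the later install lines:

```spec
%install
rm -rf %{buildroot}
mkdir -p %{buildroot}/etc/collectd/collectd.conf.d
mkdir -p %{buildroot}/usr/local/bin

install -m 644 etc/collectd/collectd.conf.custom      %{buildroot}/etc/collectd/
install -m 644 etc/collectd/collectd.conf.d/csv.conf  %{buildroot}/etc/collectd/collectd.conf.d/
install -m 644 etc/collectd/collectd.conf.d/http.conf %{buildroot}/etc/collectd/collectd.conf.d/
install -m 755 usr/local/bin/cloudhealth.py           %{buildroot}/usr/local/bin/
install -m 755 usr/local/bin/my-package-collectd      %{buildroot}/usr/local/bin/

%files
%defattr(-,root,root)
%config(noreplace) /etc/collectd/collectd.conf.custom
%config(noreplace) /etc/collectd/collectd.conf.d/csv.conf
%config(noreplace) /etc/collectd/collectd.conf.d/http.conf
/usr/local/bin/cloudhealth.py
/usr/local/bin/my-package-collectd
```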
|
31,039,029
|
Trying to read from file within existing for loop
|
Hello, I am having a hard time figuring out how to complete the existing script I have written. In a nutshell, I am trying to set disk quotas based on an existing (template) user for any user that doesn't already have a quota of 100MB (102400), as below in the script. The logic seems to work currently, but I have run out of ideas for how to populate the $USER variable. Any help would be appreciated. AWK=$(awk '{ print $4 }' test.txt) USER=$(awk '{ print $1 }' test.txt) for quota in $AWK; do if [ "$quota" = 102400 ]; then echo "Quota already set to 100MB for user: "$USER"" else echo "Setting quota from template for user: $USER " edquota -p username "$USER" fi done The test.txt file is below: user1 -- 245 0 0 user2 -- 245 102400 102400 user3 -- 234 102400 102400 user4 -- 234 1 0
|
Trying to read from file within existing for loop Hello, I am having a hard time figuring out how to complete the existing script I have written. In a nutshell, I am trying to set disk quotas based on an existing (template) user for any user that doesn't already have a quota of 100MB (102400), as below in the script. The logic seems to work currently, but I have run out of ideas for how to populate the $USER variable. Any help would be appreciated. AWK=$(awk '{ print $4 }' test.txt) USER=$(awk '{ print $1 }' test.txt) for quota in $AWK; do if [ "$quota" = 102400 ]; then echo "Quota already set to 100MB for user: "$USER"" else echo "Setting quota from template for user: $USER " edquota -p username "$USER" fi done The test.txt file is below: user1 -- 245 0 0 user2 -- 245 102400 102400 user3 -- 234 102400 102400 user4 -- 234 1 0
|
linux, bash, disk, rhel, quotas
| 0
| 51
| 1
|
https://stackoverflow.com/questions/31039029/trying-to-read-from-file-within-existing-for-loop
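The core problem is that $USER is filled once with every username in the file (and USER is also an environment variable the login shell already sets), so it never tracks the row being tested. Reading the username and the quota column from the same line keeps them paired; a minimal sketch, where "template_user" is a placeholder for the account whose quota is being copied:

```bash
#!/bin/bash
# Columns in test.txt: user, flags, blocks-used, soft-limit, hard-limit
while read -r user flags used soft hard; do
    [ -z "$user" ] && continue                 # skip blank lines
    if [ "$soft" -eq 102400 ]; then
        echo "Quota already set to 100MB for user: $user"
    else
        echo "Setting quota from template for user: $user"
        edquota -p template_user "$user"
    fi
done < test.txt
```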
|
27,885,543
|
Failed dependencies: error when installing openssl from rpm
|
I need to upgrade my openssl (my current version is OpenSSL 1.0.1e-fips 11 Feb 2013 ). My box is not connected to internet. So I download Openssl rpm and execute rpm -Uvh openssl-1.0.1e-40.fc20.x86_64.rpm Command. Then I got following error. warning: openssl-1.0.1e-40.fc20.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 246110c1: NOKEY error: Failed dependencies: libc.so.6(GLIBC_2.14)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libc.so.6(GLIBC_2.15)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10()(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10(OPENSSL_1.0.1)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10(OPENSSL_1.0.1_EC)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10(libcrypto.so.10)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libssl.so.10()(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libssl.so.10(libssl.so.10)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 openssl-libs(x86-64) = 1:1.0.1e-40.fc20 is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10()(64bit) is needed by (installed) qt-1:4.6.2-26.el6_4.x86_64 libcrypto.so.10()(64bit) is needed by (installed) libssh2-1.4.2-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) wget-1.12-1.8.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) bind-libs-32:9.8.2-0.17.rc1.el6_4.6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) bind-utils-32:9.8.2-0.17.rc1.el6_4.6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) mysql-libs-5.1.71-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) fipscheck-1.2.0-7.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) python-libs-2.6.6-51.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) m2crypto-0.20.2-9.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) pyOpenSSL-0.10-2.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) python-ldap-0:2.3.10-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) openssh-5.3p1-94.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) ntpdate-4.2.6p5-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) ntp-4.2.6p5-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) certmonger-0.61-3.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) cyrus-sasl-2.1.23-13.el6_3.1.x86_64 libcrypto.so.10()(64bit) is needed by (installed) postfix-2:2.6.6-2.2.el6_1.x86_64 libcrypto.so.10()(64bit) is needed by (installed) openssh-clients-5.3p1-94.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) openssh-server-5.3p1-94.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) tcpdump-14:4.0.0-3.20090921gitdf3cb4.2.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) perl-Crypt-SSLeay-0.57-16.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 libcrypto.so.10(OPENSSL_1.0.1_EC)(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 libcrypto.so.10(libcrypto.so.10)(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 libssl.so.10()(64bit) is needed by (installed) qt-1:4.6.2-26.el6_4.x86_64 libssl.so.10()(64bit) is needed by (installed) libssh2-1.4.2-1.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) wget-1.12-1.8.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) mysql-libs-5.1.71-1.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) python-libs-2.6.6-51.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) m2crypto-0.20.2-9.el6.x86_64 libssl.so.10()(64bit) is 
needed by (installed) pyOpenSSL-0.10-2.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) python-rhsm-1.9.6-1.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) python-ldap-0:2.3.10-1.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) postfix-2:2.6.6-2.2.el6_1.x86_64 libssl.so.10()(64bit) is needed by (installed) perl-Crypt-SSLeay-0.57-16.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 libssl.so.10(libssl.so.10)(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 I understood that is a collision with dependencies. What is the way to achieve me to update my openssl in offline mode ?
|
Failed dependencies: error when installing openssl from rpm I need to upgrade my openssl (my current version is OpenSSL 1.0.1e-fips 11 Feb 2013 ). My box is not connected to internet. So I download Openssl rpm and execute rpm -Uvh openssl-1.0.1e-40.fc20.x86_64.rpm Command. Then I got following error. warning: openssl-1.0.1e-40.fc20.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 246110c1: NOKEY error: Failed dependencies: libc.so.6(GLIBC_2.14)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libc.so.6(GLIBC_2.15)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10()(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10(OPENSSL_1.0.1)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10(OPENSSL_1.0.1_EC)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10(libcrypto.so.10)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libssl.so.10()(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 libssl.so.10(libssl.so.10)(64bit) is needed by openssl-1:1.0.1e-40.fc20.x86_64 openssl-libs(x86-64) = 1:1.0.1e-40.fc20 is needed by openssl-1:1.0.1e-40.fc20.x86_64 libcrypto.so.10()(64bit) is needed by (installed) qt-1:4.6.2-26.el6_4.x86_64 libcrypto.so.10()(64bit) is needed by (installed) libssh2-1.4.2-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) wget-1.12-1.8.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) bind-libs-32:9.8.2-0.17.rc1.el6_4.6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) bind-utils-32:9.8.2-0.17.rc1.el6_4.6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) mysql-libs-5.1.71-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) fipscheck-1.2.0-7.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) python-libs-2.6.6-51.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) m2crypto-0.20.2-9.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) pyOpenSSL-0.10-2.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) python-ldap-0:2.3.10-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) openssh-5.3p1-94.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) ntpdate-4.2.6p5-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) ntp-4.2.6p5-1.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) certmonger-0.61-3.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) cyrus-sasl-2.1.23-13.el6_3.1.x86_64 libcrypto.so.10()(64bit) is needed by (installed) postfix-2:2.6.6-2.2.el6_1.x86_64 libcrypto.so.10()(64bit) is needed by (installed) openssh-clients-5.3p1-94.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) openssh-server-5.3p1-94.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) tcpdump-14:4.0.0-3.20090921gitdf3cb4.2.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) perl-Crypt-SSLeay-0.57-16.el6.x86_64 libcrypto.so.10()(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 libcrypto.so.10(OPENSSL_1.0.1_EC)(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 libcrypto.so.10(libcrypto.so.10)(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 libssl.so.10()(64bit) is needed by (installed) qt-1:4.6.2-26.el6_4.x86_64 libssl.so.10()(64bit) is needed by (installed) libssh2-1.4.2-1.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) wget-1.12-1.8.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) mysql-libs-5.1.71-1.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) python-libs-2.6.6-51.el6.x86_64 libssl.so.10()(64bit) is needed by 
(installed) m2crypto-0.20.2-9.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) pyOpenSSL-0.10-2.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) python-rhsm-1.9.6-1.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) python-ldap-0:2.3.10-1.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) postfix-2:2.6.6-2.2.el6_1.x86_64 libssl.so.10()(64bit) is needed by (installed) perl-Crypt-SSLeay-0.57-16.el6.x86_64 libssl.so.10()(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 libssl.so.10(libssl.so.10)(64bit) is needed by (installed) nginx-1.6.2-1.el6.ngx.x86_64 I understood that is a collision with dependencies. What is the way to achieve me to update my openssl in offline mode ?
|
linux, openssl, rpm, rhel
| 0
| 3,744
| 1
|
https://stackoverflow.com/questions/27885543/failed-dependencies-error-when-installing-openssl-from-rpm
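The root cause isn't really the dependency list: openssl-1.0.1e-40.fc20 is a Fedora 20 build, which expects Fedora's newer glibc (GLIBC_2.14/2.15) and its openssl-libs split, so it can never install cleanly on RHEL 6. The offline-friendly route is to fetch the matching el6 errata build (plus anything it drags in) on a connected machine and carry the rpms over; a hedged sketch, with the exact el6 release left as a wildcard:

```bash
# On an internet-connected RHEL/CentOS 6 box with yum-utils installed:
yumdownloader --resolve openssl            # downloads openssl-1.0.1e-*.el6*.x86_64.rpm + deps

# Copy the downloaded rpms to the offline server, then in that directory:
sudo rpm -Uvh openssl-1.0.1e-*.el6*.x86_64.rpm
# (or "sudo yum localinstall *.rpm" to let yum order any dependencies)
```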
|
25,313,004
|
On AWS EC2 Linux RHEL all PHP files are executing, except for index.php
|
I am lost on this one. I have set up an AWS EC2 RHEL server and installed PHP and Apache. Everything seems to work except for the fact that the index.php file will not execute. All other *.php files will work if I call them directly. index.php contains: <?php echo "test"; ?> In /etc/httpd/conf/httpd.conf I have the settings: <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> Although if I remove the index.php, it still does not work. I have the permissions on index.php as -rw-r--r-- ec2-user ec2-user. I am not sure what other information would be useful here, seeing as all other PHP files work except for index.php. (e.g.: if I call /index.php it outputs nothing; if I move index.php to index2.php and call /index2.php it outputs test.) Any thoughts as to why this may be happening? EDIT: Checking the access logs, I am now realizing there is a 500 internal server error when index.php is called: "GET /index.php HTTP/1.1" 500 Maybe that will help point in the right direction? I am still unclear how to solve this issue. Again, changing the name of the file to anything other than index.php (e.g. index2.php) will execute the file correctly. Also, after enabling error reporting I am getting this error: PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0 PHP Fatal error: Unknown: Failed opening required '/var/www/html/index.php' (include_path='.:/usr/share/pear:/usr/share/php') in Unknown on line 0 I have tried changing permissions to all sorts of variants including 777, 755, 655, 644, 664, etc., and even tried changing the owner to apache. No luck.
|
On AWS EC2 Linux RHEL all PHP files are executing, except for index.php I am lost on this one. I have set up an AWS EC2 RHEL server and installed PHP and Apache. Everything seems to work except for the fact that the index.php file will not execute. All other *.php files will work if I call them directly. index.php contains: <?php echo "test"; ?> In /etc/httpd/conf/httpd.conf I have the settings: <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> Although if I remove the index.php, it still does not work. I have the permissions on index.php as -rw-r--r-- ec2-user ec2-user. I am not sure what other information would be useful here, seeing as all other PHP files work except for index.php. (e.g.: if I call /index.php it outputs nothing; if I move index.php to index2.php and call /index2.php it outputs test.) Any thoughts as to why this may be happening? EDIT: Checking the access logs, I am now realizing there is a 500 internal server error when index.php is called: "GET /index.php HTTP/1.1" 500 Maybe that will help point in the right direction? I am still unclear how to solve this issue. Again, changing the name of the file to anything other than index.php (e.g. index2.php) will execute the file correctly. Also, after enabling error reporting I am getting this error: PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0 PHP Fatal error: Unknown: Failed opening required '/var/www/html/index.php' (include_path='.:/usr/share/pear:/usr/share/php') in Unknown on line 0 I have tried changing permissions to all sorts of variants including 777, 755, 655, 644, 664, etc., and even tried changing the owner to apache. No luck.
|
php, linux, amazon-web-services, amazon-ec2, rhel
| 0
| 860
| 3
|
https://stackoverflow.com/questions/25313004/on-aws-ec2-linux-rhel-all-php-files-are-executing-except-for-index-php
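When every permission combination fails, the useful next step is to reproduce the denial as the apache user and to compare everything (owner, mode, SELinux label, file attributes) between the working and non-working file; RHEL's default enforcing SELinux is a frequent cause of exactly this "failed to open stream: Permission denied" wording from PHP. A hedged diagnostic sketch:

```bash
# Reproduce the denial outside PHP/Apache:
sudo -u apache cat /var/www/html/index.php

# Compare the broken file with one that works:
ls -lZ /var/www/html/index.php /var/www/html/index2.php   # -Z shows the SELinux context
lsattr /var/www/html/index.php

# If the contexts differ, or denials show up in the audit log, relabel:
getenforce
sudo grep index.php /var/log/audit/audit.log
sudo restorecon -v /var/www/html/index.php
```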
|
23,993,384
|
How to stop yum update from upgrading my centos 5.4 to 5.8
|
Our software has a requirement that we must have CentOS 5.4; any other CentOS version will not work. I installed CentOS 5.4 and then did a yum update, which upgraded my OS to CentOS 5.8. I have to re-install again, so how do I avoid the CentOS upgrade the next time I do yum update? I did find that putting exclude=kernel* in /etc/yum.conf will avoid the kernel upgrade, but I am not sure if this is enough.
|
How to stop yum update from upgrading my centos 5.4 to 5.8 Our software has a requirement that we must have CentOS 5.4, and any other CentOS version will not work. I installed CentOS 5.4 and then did a yum update which upgraded my OS to CentOS 5.8. I have to re-install again; how do I avoid the CentOS upgrade the next time I do yum update? I did find that doing exclude=kernel* in /etc/yum.conf will avoid the kernel upgrade, but I am not sure if this is enough.
|
linux, centos, rhel
| 0
| 1,106
| 2
|
https://stackoverflow.com/questions/23993384/how-to-stop-yum-update-from-upgrading-my-centos-5-4-to-5-8
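A hedged sketch for the question above. exclude=kernel* only stops kernel updates; the 5.4 to 5.8 jump happens because the regular CentOS 5 mirrors only carry the newest 5.x packages, so any yum update pulls in the later point release. The usual workaround is to disable the stock repos and point yum at the frozen 5.4 tree on vault.centos.org; the repository layout below is assumed from the public vault mirror and should be verified before use.

# /etc/yum.repos.d/CentOS-Vault-5.4.repo  (set enabled=0 in CentOS-Base.repo first)
[c54-base]
name=CentOS-5.4 - Base
baseurl=http://vault.centos.org/5.4/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
enabled=1

[c54-updates]
name=CentOS-5.4 - Updates
baseurl=http://vault.centos.org/5.4/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
enabled=1

Keeping exclude=centos-release* kernel* in /etc/yum.conf as well prevents the release package itself from being replaced if an enabled repo ever offers a newer one.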
|
23,905,886
|
How to delete Snapshots in oVirt/RHEV
|
I am writing a script for automatically generating snapshots, which works pretty well so far. I would also like to be able to delete snapshots with my script, but it seems like there is no method call documented to do that. I have already found out that with RHEV 3.3.0 you have to shut down VMs to delete their snapshots, but still I am not able to delete snapshots. This is what I have so far: def deleteSnapshot(self): VM = self.con.vms.get(self.hostname.replace('.','_')) VM_status= VM.status.state if VM_status == 'up': self.stopVM() time.sleep(10) elif VM_status == 'down': self.listSnapshotDescription() # This is where the deletion stuff should happen. But I am still not able to find a way to delete these snapshots. I have also searched online to find some usable Red Hat documentation on the topic, but haven't been able to find any that is of use. Can anybody give me a hint or something which points me in the right direction? Thank you in advance.
|
How to delete Snapshots in oVirt/RHEV I am writing a script for automatically generating snapshots, which works pretty well so far. I would also like to be able to delete snapshots with my script, but it seems like there is no method call documented to do that. I have already found out that with RHEV 3.3.0 you have to shut down VMs to delete their snapshots, but still I am not able to delete snapshots. This is what I have so far: def deleteSnapshot(self): VM = self.con.vms.get(self.hostname.replace('.','_')) VM_status= VM.status.state if VM_status == 'up': self.stopVM() time.sleep(10) elif VM_status == 'down': self.listSnapshotDescription() # This is where the deletion stuff should happen. But I am still not able to find a way to delete these snapshots. I have also searched online to find some usable Red Hat documentation on the topic, but haven't been able to find any that is of use. Can anybody give me a hint or something which points me in the right direction? Thank you in advance.
|
python, snapshot, rhel
| 0
| 2,926
| 1
|
https://stackoverflow.com/questions/23905886/how-to-delete-snapshots-in-ovirt-rhev
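A hedged sketch for the snapshot question above, shown with curl against the RHEV/oVirt 3.x REST API rather than the Python SDK the script uses (the SDK's snapshot objects should expose an equivalent delete operation, but the exact method name is not confirmed here). The engine URL, credentials, VM name and IDs are placeholders; the snapshot must not be the special 'Active VM' entry, and as the question notes the VM has to be down first.

# look up the VM and note its id (placeholder engine host and credentials)
curl -s -k -u admin@internal:password "https://engine.example.com/api/vms?search=name%3Dmyvm"
# list the VM's snapshots and note the id of the one to remove
curl -s -k -u admin@internal:password "https://engine.example.com/api/vms/VM_ID/snapshots"
# delete that snapshot; the engine runs the removal as an asynchronous task
curl -s -k -X DELETE -u admin@internal:password "https://engine.example.com/api/vms/VM_ID/snapshots/SNAPSHOT_ID"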
|
23,218,097
|
Cannot find JAVA_HOME in Red Hat Server
|
I'm trying to find the location where the JAVA_HOME env variable is set up. I tried looking in ~/.bash_profile , ~/.bashrc and /etc/profile but could not find the env variable. But when I run echo $JAVA_HOME it gives out the value /usr/local/java. Where else could the JAVA_HOME env variable be set? BTW it's a Red Hat Linux Server.
|
Cannot find JAVA_HOME in Red Hat Server I'm trying to find the location where the JAVA_HOME env variable is set up. I tried looking in ~/.bash_profile , ~/.bashrc and /etc/profile but could not find the env variable. But when I run echo $JAVA_HOME it gives out the value /usr/local/java. Where else could the JAVA_HOME env variable be set? BTW it's a Red Hat Linux Server.
|
java, redhat, rhel
| 0
| 1,795
| 4
|
https://stackoverflow.com/questions/23218097/cannot-find-java-home-in-red-hat-server
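A hedged sketch for the JAVA_HOME question above: besides the per-user and global profile files already checked, RHEL sources every script under /etc/profile.d/ at login, and the variable can also come from /etc/environment, an application's own init script, or simply be inherited from the parent process. A brute-force grep over the usual locations normally finds where the value /usr/local/java is exported.

# system-wide shell start-up locations
grep -Rn "JAVA_HOME" /etc/profile /etc/profile.d/ /etc/bashrc /etc/environment 2>/dev/null
# per-user files for the account in question
grep -n "JAVA_HOME" ~/.bash_profile ~/.bashrc ~/.bash_login ~/.profile 2>/dev/null
# if nothing matches, check whether the variable is simply inherited from the parent process
tr '\0' '\n' < /proc/$$/environ | grep JAVA_HOME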
|
22,793,620
|
Teradata 'Unable to get catalog string' error when using ODBC to connect
|
I am trying to reach a remote host that is running Teradata services via ODBC. The host that I am trying to connect from is 64-bit RHEL 6.x with the following Teradata software installed: bteq fastexp fastld jmsaxsmod mload mqaxsmod npaxsmod sqlpp tdodbc tdwallet tptbase tptstream tpump When I try to connect to the remote host via Python (interactive session), I receive a 'Unable to get catalog string' error: [@myhost:/path/to/scripts] ->python Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pyodbc >>> pyodbc.pooling = False >>> cn = pyodbc.connect("DRIVER={Teradata}; SERVER=12.245.67.255:1025;UID=usr;PWD=pwd", ANSI = True) Traceback (most recent call last): File "<stdin>", line 1, in <module> pyodbc.Error: ('28000', '[28000] [Teradata][ODBC Teradata Driver] Unable to get catalog string. (0) (SQLDriverConnect)') Furthermore, when I try to use isql (from the unixODBC yum package), I receive the same error [@my_host:/path/to/scripts] ->isql -v proddsn [28000][Teradata][ODBC Teradata Driver] Unable to get catalog string. [ISQL]ERROR: Could not SQLConnect
|
Teradata 'Unable to get catalog string' error when using ODBC to connect I am trying to reach a remote host that is running Teradata services via ODBC. The host that I am trying to connect from is 64-bit RHEL 6.x with the following Teradata software installed: bteq fastexp fastld jmsaxsmod mload mqaxsmod npaxsmod sqlpp tdodbc tdwallet tptbase tptstream tpump When I try to connect to the remote host via Python (interactive session), I receive a 'Unable to get catalog string' error: [@myhost:/path/to/scripts] ->python Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pyodbc >>> pyodbc.pooling = False >>> cn = pyodbc.connect("DRIVER={Teradata}; SERVER=12.245.67.255:1025;UID=usr;PWD=pwd", ANSI = True) Traceback (most recent call last): File "<stdin>", line 1, in <module> pyodbc.Error: ('28000', '[28000] [Teradata][ODBC Teradata Driver] Unable to get catalog string. (0) (SQLDriverConnect)') Furthermore, when I try to use isql (from the unixODBC yum package), I receive the same error [@my_host:/path/to/scripts] ->isql -v proddsn [28000][Teradata][ODBC Teradata Driver] Unable to get catalog string. [ISQL]ERROR: Could not SQLConnect
|
odbc, teradata, rhel, pyodbc, unixodbc
| 0
| 5,567
| 1
|
https://stackoverflow.com/questions/22793620/teradata-unable-to-get-catalog-string-error-when-using-odbc-to-connect
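A hedged checklist for the Teradata question above. "Unable to get catalog string" is commonly reported when the Teradata ODBC driver cannot read its own configuration (the odbc.ini/odbcinst.ini and message files shipped with tdodbc), not when the remote host is unreachable. The install prefix and version below are assumptions based on the default tdodbc layout and must be adjusted to the actual installation.

# locate the Teradata-supplied sample configuration files (path and version are assumptions)
ls /opt/teradata/client/*/odbc_64/
# point the environment at an odbc.ini that contains both the [ODBC] section
# (with InstallDir set to the driver directory) and the proddsn entry
export ODBCINI=/opt/teradata/client/14.00/odbc_64/odbc.ini
export LD_LIBRARY_PATH=/opt/teradata/client/14.00/odbc_64/lib:$LD_LIBRARY_PATH
# confirm which ini files unixODBC itself is using, then retest the DSN
odbcinst -j
isql -v proddsn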
|
22,252,162
|
Free space in /
|
I need to free space in '/dev/sda5' which is mounted on '/'. I also have some other devices that are mounted (e.g. '/dev/sdb1' on '/home' , '/dev/sdc1' on '/var/log' ... ). I have tried du -sh * but it takes too much time to check each directory. This is a production machine running RHEL. How can I get a list of the folders that belong to '/' (/dev/sda5) and not to other mounted devices, to find where I can free space? [root@myservername ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda5 27G 24G 2.4G 92% / /dev/mapper/vg--log-root00 25G 3.4G 21G 14% /var/log /dev/sda2 25G 11G 13G 45% /home /dev/sda1 99M 12M 82M 13% /boot tmpfs 47G 0 47G 0% /dev/shm Any other tip to free space will be appreciated.
|
Free space in / I need to free space in '/dev/sda5' which is mounted on '/'. I also have some other devices that are mounted (e.g. '/dev/sdb1' on '/home' , '/dev/sdc1' on '/var/log' ... ). I have tried du -sh * but it takes too much time to check each directory. This is a production machine running RHEL. How can I get a list of the folders that belong to '/' (/dev/sda5) and not to other mounted devices, to find where I can free space? [root@myservername ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda5 27G 24G 2.4G 92% / /dev/mapper/vg--log-root00 25G 3.4G 21G 14% /var/log /dev/sda2 25G 11G 13G 45% /home /dev/sda1 99M 12M 82M 13% /boot tmpfs 47G 0 47G 0% /dev/shm Any other tip to free space will be appreciated.
|
linux, rhel
| 0
| 149
| 2
|
https://stackoverflow.com/questions/22252162/free-space-in
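A hedged sketch for the disk-space question above: du and find can be told to stay on the filesystem that holds /, which skips /home, /var/log, /boot and the tmpfs mounts entirely, so only data that actually lives on /dev/sda5 is counted.

# -x (--one-file-system) keeps du on the / filesystem; sizes in MB, largest last
du -xm --max-depth=1 / 2>/dev/null | sort -n
# the same idea with find: list large files that live on / itself
find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null

Typical candidates on a root-only filesystem are /var/cache/yum, /tmp and core dumps; also note that files deleted while still held open by a process keep consuming space until that process is restarted (lsof | grep deleted shows them).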
|
21,734,208
|
Sybase 15.7 on Red Hat Enterprise Linux Server release 6.4 (Santiago)
|
I am migrating to Red Hat from Solaris. Previously on Solaris, we connected to Sybase datasources through Java by passing an interfaces file location to a JDBC library. When I checked the Sybase documentation, they only list interfaces files as an option for UNIX and Windows... Does anyone know what the equivalent is for Linux? I know our datasources have been transferred over, because when I connect with: isql -S <server> -U <user> -P <password> I can connect to the database. Unfortunately the team who installed the software on our new machines is unable to provide any details... In summary, I know our datasource profiles are listed somewhere, because I can specify a server by name and connect. I need to find out where that information is.
|
Sybase 15.7 on Red Hat Enterprise Linux Server release 6.4 (Santiago) I am migrating to Red Hat from Solaris. Previously on Solaris, we connected to Sybase datasources through Java by passing an interfaces file location to a JDBC library. When I checked the Sybase documentation, they only list interfaces files as an option for UNIX and Windows... Does anyone know what the equivalent is for Linux? I know our datasources have been transferred over, because when I connect with: isql -S <server> -U <user> -P <password> I can connect to the database. Unfortunately the team who installed the software on our new machines is unable to provide any details... In summary, I know our datasource profiles are listed somewhere, because I can specify a server by name and connect. I need to find out where that information is.
|
linux, datasource, sybase, rhel
| 0
| 1,700
| 1
|
https://stackoverflow.com/questions/21734208/sybase-15-7-on-red-hat-enterprise-linux-server-release-6-4-santiago
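A hedged note for the Sybase question above: in the Sybase documentation "UNIX" covers Linux as well, so server lookup still goes through an interfaces file, normally $SYBASE/interfaces, and isql resolves -S <server> through it. The install paths below are guesses; adjust them to wherever the 15.7 client was actually installed.

# the environment set up by the installer points at the interfaces file
echo $SYBASE
cat "$SYBASE/interfaces"
# if $SYBASE is empty in your shell, source the installer-generated environment script
ls -d /opt/sybase* /opt/sap 2>/dev/null        # common install locations (guesses)
. /opt/sybase/SYBASE.sh 2>/dev/null            # adjust path to the real install directory
# each server entry lists master/query lines with the host and port a JDBC URL needs
grep -A2 "SERVERNAME" "$SYBASE/interfaces"     # replace SERVERNAME with the name passed to isql -S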
|
21,066,161
|
How to add a user without using the 'useradd' or 'adduser' commands in RHEL6.3?
|
I want to add a user in a RHEL6.3 OS without using the useradd or adduser command. I know that I have to edit 4 files, i.e. passwd, group, shadow and gshadow. But please tell me exactly what I have to edit.
|
How to add a user without using the 'useradd' or 'adduser' commands in RHEL6.3? I want to add a user in a RHEL6.3 OS without using the useradd or adduser command. I know that I have to edit 4 files, i.e. passwd, group, shadow and gshadow. But please tell me exactly what I have to edit.
|
linux, redhat, rhel
| 0
| 3,469
| 1
|
https://stackoverflow.com/questions/21066161/how-to-add-a-user-without-using-the-useradd-or-adduser-commands-in-rhel6-3
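A hedged sketch for the question above on RHEL 6.3. Each of the four files takes one line for the new account; the lock-aware editors vipw/vigr are safer than editing the files directly. The UID/GID 1001, the user name and the shell below are placeholder values.

vipw        # /etc/passwd  : newuser:x:1001:1001:New User:/home/newuser:/bin/bash
vipw -s     # /etc/shadow  : newuser:!!:15000:0:99999:7:::
vigr        # /etc/group   : newuser:x:1001:
vigr -s     # /etc/gshadow : newuser:!::
# create the home directory and give the account a real password
mkdir /home/newuser
cp -r /etc/skel/. /home/newuser/
chown -R 1001:1001 /home/newuser
chmod 700 /home/newuser
passwd newuser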
|
20,171,796
|
A system call like scandir()
|
I want to scan a directory for all and only the file entries. When I try to do this I get all the directory entries ( /etc, /home, /selinux etc...) but not the file entries. Is there any system call which returns only a listing of files, not directories? Or can anyone suggest how to check for only files, not directories, inside a directory? For example... I want to access all files inside this folder /home/username/folderone/foldertwo/finalfolder , i.e. scan all files inside that folder.
|
A system call like scandir() I want to scan a directory for all and only the file entries. When I try to do this I get all the directory entries ( /etc, /home, /selinux etc...) but not the file entries. Is there any system call which returns only a listing of files, not directories? Or can anyone suggest how to check for only files, not directories, inside a directory? For example... I want to access all files inside this folder /home/username/folderone/foldertwo/finalfolder , i.e. scan all files inside that folder.
|
file, rhel, directory
| 0
| 577
| 1
|
https://stackoverflow.com/questions/20171796/a-system-call-like-scandir
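A hedged note for the scandir() question above: there is no system call that returns only regular files; readdir()/scandir() always return every directory entry, and the caller filters them, e.g. with a scandir() filter function that stat()s each name and keeps only S_ISREG entries. From the shell the same filtering looks like this (the folder path is taken from the question):

# list only regular files directly inside the folder, no subdirectories
find /home/username/folderone/foldertwo/finalfolder -maxdepth 1 -type f
# or test entry by entry in a loop
for entry in /home/username/folderone/foldertwo/finalfolder/*; do
    [ -f "$entry" ] && echo "$entry"
done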
|