question_id: int64, 82.3k to 79.7M
title_clean: stringlengths, 15 to 158
body_clean: stringlengths, 62 to 28.5k
full_text: stringlengths, 95 to 28.5k
tags: stringlengths, 4 to 80
score: int64, 0 to 1.15k
view_count: int64, 22 to 1.62M
answer_count: int64, 0 to 30
link: stringlengths, 58 to 125
29,659,999
Enabling FIPS on google compute engine instance
In trying to get FIPS enabled on a CentOS instance that is already up and running, I've looked at both the RHEL documentation and gcloud's limited documentation, but to no avail. [URL] [URL] simply points you to [URL]. /proc/sys/crypto/fips_enabled exists, with a numeric value of 0, but cannot be edited, even by root. Has anyone been able to enable FIPS?
centos, google-compute-engine, rhel, fips
1
1,334
1
https://stackoverflow.com/questions/29659999/enabling-fips-on-google-compute-engine-instance
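For context, a sketch of the usual RHEL/CentOS 6 procedure this question is chasing; nothing here is taken from the thread itself, and the GRUB path assumes GRUB legacy with a separate /boot partition. fips_enabled is a read-only flag set from the kernel command line at boot, which is why writing to /proc fails even as root:

    yum install dracut-fips                  # FIPS-capable initramfs support
    dracut -f                                # rebuild the initramfs
    # add fips=1 to the kernel line in /boot/grub/grub.conf
    # (plus boot=UUID=<uuid-of-/boot> when /boot is a separate partition)
    reboot
    cat /proc/sys/crypto/fips_enabled        # should now print 1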
29,624,366
I am unable to delete a user in Linux that I created for a gitolite installation
I am unable to delete a user in Linux (RHEL Santiago): userdel -r admin The error message is below: userdel: /home/admin not owned by admin, not removing. What should I do?
linux, rhel, gitolite
1
113
2
https://stackoverflow.com/questions/29624366/i-am-unable-to-delete-user-in-linux-i-created-for-gitolite-installation
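A sketch of the two usual ways out of this userdel complaint, assuming the goal is simply to get rid of the account and its home directory; the error only means /home/admin is no longer owned by admin, so userdel -r refuses to delete it:

    chown -R admin:admin /home/admin && userdel -r admin   # restore ownership, then remove
    # or remove the account and the directory separately:
    userdel admin && rm -rf /home/admin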
28,681,398
Rebuilding Python and mod_wsgi
I had a problem which I described here: mod_wsgi Apache error with django app. Finally I got to the point where I need to rebuild mod_wsgi. The server is RHEL 6, so it has Python 2.6 installed by default, and in order to run some things another admin additionally installed Python 2.7. Now I have this problem: /usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC According to [URL] the problem I have is a conflict between a Python that was compiled 32-bit and a mod_wsgi compiled 64-bit. Following this guide [URL] (and others) I'm trying to rebuild Python 2.7 with --enable-shared, but I get the same error when I run make: /usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/libpython2.7.a: could not read symbols: Bad value collect2: ld returned 1 exit status Python build finished, but the necessary bits to build these modules were not found: bsddb185 dl imageop I don't know if the problem is the previous installation of Python 2.7. Is there a safe way to remove the whole previous Python 2.7 installation? I don't know if that would solve the problem.
python, django, python-2.7, rhel
1
500
1
https://stackoverflow.com/questions/28681398/re-building-python-and-mod-wsgi
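A sketch of the rebuild that usually clears this relocation error, assuming the usual from-source layout under /usr/local; the source directory names are placeholders. The static libpython2.7.a was built without -fPIC, so it cannot be linked into mod_wsgi's shared object, and a shared libpython sidesteps that:

    cd Python-2.7.x                          # wherever the 2.7 source tree lives
    make distclean                           # drop objects built without -fPIC
    ./configure --prefix=/usr/local --enable-shared LDFLAGS='-Wl,-rpath,/usr/local/lib'
    make && make install
    cd ../mod_wsgi-3.x                       # then rebuild mod_wsgi against that interpreter
    ./configure --with-python=/usr/local/bin/python2.7
    make && make install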
27,387,836
What is the ~/.m2/repository/ structure for Maven/Tycho?
I downloaded the necessary pom.xml and jar files, now I need to organize them so Maven can find them, but I'm not exactly sure of the structure. Here's my guess: For example, the maven-clean-plugin would sit here: ~/.m2/repository/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.jar ~/.m2/repository/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.xml I realize the best way to go about this would be to download it on my computer and test it out, but I can't. Also, would the same rule/methodology described in the path above apply to any other xml and jar files? I'm asking this because when I run "mvn clean verify -o" I get the error "The POM for org.apache.maven.plugins:maven-clean-plugin:jar:2.5 is missing, no dependency information available", even though the POM is located in the path described above.
linux, maven, rhel, tycho
1
1,746
2
https://stackoverflow.com/questions/27387836/what-is-the-m2-repository-structure-for-maven-tycho
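For reference, a sketch of the layout Maven expects, and of letting Maven lay the files out itself; note the descriptor has to be named <artifactId>-<version>.pom rather than .xml, which matches the "POM is missing" error above:

    ~/.m2/repository/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.pom
    ~/.m2/repository/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.jar
    # letting Maven place a downloaded artifact avoids naming mistakes:
    mvn install:install-file -Dfile=maven-clean-plugin-2.5.jar -DpomFile=maven-clean-plugin-2.5.pom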
25,532,682
Can't write logs in linux using Yii::log?
In a Yii based web application I am writing log files using Yii::log('info', CLogger::INFO, $exception); It is working perfectly on Windows. But when I try this on RHEL 6.0 it is not writing and not showing any error response. I have changed the directory permissions as well: chmod 777 /var/www/html/yiiblog/protected/runtime but it is still not working and not showing anything in the Apache error log. What is the reason for this and how can I fix it?
php, linux, yii, rhel
1
229
1
https://stackoverflow.com/questions/25532682/cant-write-logs-in-linux-using-yiilog
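When chmod 777 on RHEL still leaves a web app unable to write, SELinux is the usual remaining suspect; a quick diagnostic sketch using the path from the question and the stock context type for writable web content (not a confirmed diagnosis for this case):

    getenforce                                            # "Enforcing" means SELinux is active
    ls -Z /var/www/html/yiiblog/protected/runtime         # show the current security context
    chcon -R -t httpd_sys_rw_content_t /var/www/html/yiiblog/protected/runtime
    tail /var/log/audit/audit.log                         # denials land here, not in Apache's error log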
25,504,878
RHEL7 Tomcat setenv.sh
I wanted to use setenv.sh in a standard RHEL 7 tomcat7 installation. However, the file is not used. I created it in /usr/share/tomcat/bin, but as there are no other script files in this directory, I suppose it's probably not the right place. What works is to set my values directly in the /usr/sbin/tomcat file, but that file might be overwritten by a future update.
tomcat, rhel, setenv
1
2,855
1
https://stackoverflow.com/questions/25504878/rhel7-tomcat-setenv-sh
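As I understand the RHEL packaging, its wrapper script does not source bin/setenv.sh at all; the environment comes from the files below, and /etc/sysconfig/tomcat is the conventional place for local overrides that survive package updates (a sketch, not taken from the question):

    # /etc/tomcat/tomcat.conf       package-wide defaults
    # /etc/sysconfig/tomcat         local overrides
    echo 'JAVA_OPTS="-Xms512m -Xmx1024m"' >> /etc/sysconfig/tomcat
    systemctl restart tomcat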
24,899,171
DOMJudge installation: compile script not generating proper permissions
I am trying to install DOMJudge on a version of Scientific Linux 6.5 (Carbon) adapted and managed by my university. I'm forced to use an RHEL-based Linux version in order to get support from my university for the machine, so switching to Debian-based Linux isn't an option. I got as far as getting the website up and running and getting the jury interface up. When I try to submit a solution to the 'hello world' example problem, the judgehost spits up the following compiler error (the text below is complete; nothing comes after the colon): Compiling failed: no executable was created; compiler output: Upon investigation into PREFIX/lib/domjudge/judge/compile.sh and into the judging directory created for the submission, I found that the compiled binary was given permissions that would not allow it to be seen by the compile script at all, causing the script to choke when checking for the existence of the binary after compilation. Here are the permissions and ownership that are granted to the binary when it is generated by compile.sh: $ ls -al /usr/local/var/lib/domjudge/judgings/domjudge/c2-s1-j11/compile total 1440 drwxrwxrwx. 2 domjudge domjudge 4096 Jul 22 15:07 . drwx------. 3 domjudge domjudge 4096 Jul 22 15:07 .. -rw-rw-r--. 1 domjudge domjudge 106 Jul 22 15:07 helloworld.cpp -rwxr-x---. 1 domjudge-run root 1461083 Jul 22 15:07 program My judgedaemon user is domjudge and my chroot user is domjudge-run. My question is this: how can I get compile.sh to set the proper permissions on the binary so it is readable by all users? If I'm barking up the wrong tree and this problem is indicative of a bigger problem in my configuration of DOMJudge, please let me know. I have had no small number of frustrations trying to get DOMJudge to work on Scientific Linux, and I don't doubt that I screwed something up along the way. Note: I tried to ignore the check within compile.sh to get the script to finish, but once the compilation finishes, the judgedaemon hiccups when trying to copy the binary to the chroot jail because it, again, doesn't have read permissions on the file. So it looks like I have to solve the permissions problem for the rest of the judgedaemon to work.
linux, permissions, rhel
1
1,138
1
https://stackoverflow.com/questions/24899171/domjudge-installation-compile-script-not-generating-proper-permissions
24,490,017
Wordpress Installation PHP Error
This is the output I get when I run the following command: php -v PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/curl.so' - /usr/lib/php/modules/curl.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/dbase.so' - /usr/lib/php/modules/dbase.so: wrong ELF class: ELFCLASS32 in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/fileinfo.so' - /usr/lib/php/modules/fileinfo.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/json.so' - /usr/lib/php/modules/json.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mysql.so' - /usr/lib/php/modules/mysql.so: wrong ELF class: ELFCLASS32 in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/mysqli.so' - /usr/lib/php/modules/mysqli.so: wrong ELF class: ELFCLASS32 in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/pdo.so' - /usr/lib/php/modules/pdo.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/pdo_mysql.so' - /usr/lib/php/modules/pdo_mysql.so: wrong ELF class: ELFCLASS32 in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/pdo_sqlite.so' - /usr/lib/php/modules/pdo_sqlite.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/phar.so' - /usr/lib/php/modules/phar.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/zip.so' - /usr/lib/php/modules/zip.so: cannot open shared object file: No such file or directory in Unknown on line 0 I'm trying to install WordPress on a RHEL machine. I initially had gotten MySQL up and running and then realized that my PHP was version 5.1.6. I added another repo, installed PHP 5.3 and removed the old php packages. The error I get on a browser when I open localhost/wp-admin/install.php is: Your PHP installation appears to be missing the MySQL extension which is required by WordPress. I have the package php-mysql.x86_64 installed and I can't figure out what the problem is. I don't know what to do! Any help would be greatly appreciated. Thanks.
php, mysql, wordpress, rhel
1
1,055
2
https://stackoverflow.com/questions/24490017/wordpress-installation-php-error
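A diagnostic sketch for the pattern above; the mix of "No such file" and "wrong ELF class: ELFCLASS32" under /usr/lib/php/modules usually points at leftover configuration from the removed PHP build, since 64-bit PHP on RHEL keeps its extensions in /usr/lib64/php/modules. Package names are the stock RHEL ones and may differ in other repos:

    php -i | grep extension_dir               # where PHP is actually looking for modules
    grep -r '^extension' /etc/php.ini /etc/php.d/
    yum list installed 'php*'                 # watch for lingering .i386/.i686 or old-version packages
    yum reinstall php-mysql php-pdo
    service httpd restart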
23,189,189
Apache 403 after /var/www backup
I'm running a WordPress site on a RHEL server and I do a backup every week of the WP MySQL database and the /var/www folder: mysqldump -uroot -p******** blog -l -F > /bak/blog.sql tar -jcpv -f /bak/www.tar.bz2 /var/www The latest backup was done at 6 a.m. (UTC+8) this morning and I upgraded WP to 3.9 at 7. Some features were not compatible with 3.9, so I decided to roll back. I restored the database and the folder: mysql -uroot -p blog < /bak/blog.sql tar -jxv -f /bak/www.tar.bz2 -C / and then the site gave me a 403. I restarted Apache and rebooted the server but it didn't help. The site was running before I upgraded WP, so I think the configs are the same before and after the backup/restore; therefore the problem might not be there. My homepage is redirected to my.si.te/blog/, and I can't visit a plain index.html at my.si.te/test/ (/var/www/html/test/) either. It's the same message: You don't have permission to access /(blog/test) on this server. [Mon Apr 21 08:42:48 2014] [crit] [client 144.*.*.*] (13)Permission denied: /var/www/html/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable [Mon Apr 21 08:42:56 2014] [crit] [client 157.*.*.*] (13)Permission denied: /var/www/html/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable [Mon Apr 21 08:42:58 2014] [crit] [client 178.*.*.*] (13)Permission denied: /var/www/html/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable, referer: [URL] What could be the problem and how can I fix it? Thanks!
wordpress, apache, rhel
1
284
1
https://stackoverflow.com/questions/23189189/apache-403-after-var-www-backup
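One RHEL-specific thing a tar restore of /var/www tends to lose is the SELinux labels, which can produce exactly this "(13)Permission denied" on .htaccess even when the Unix permissions look right; a sketch of the check and the relabel (not confirmed as the cause here):

    ls -ldZ /var/www /var/www/html /var/www/html/.htaccess   # inspect contexts and traversal bits
    restorecon -Rv /var/www                                   # relabel to the policy defaults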
23,079,096
how to impersonate in linux
I have developed a multi-platform desktop application in Python and PyQt, and in it I want to implement the concept of impersonation. I have a requirement where the user selects a file and the application checks it for naming conventions and other things. If everything is fine, it copies the file to a server where only the impersonated user, let's say (user123), has full permissions and others have only read permissions. I was able to achieve this in Windows by using win32security and win32con TO IMPERSONATE LOGIN self.handle = win32security.LogonUser(self.loginID, self.domain, self.password, win32con.LOGON32_LOGON_INTERACTIVE, win32con.LOGON32_PROVIDER_DEFAULT) win32security.ImpersonateLoggedOnUser(self.handle) AND TO REVERT BACK TO USER win32security.RevertToSelf() Can anyone suggest an approach to this under Linux (RHEL 6)?
python, linux, impersonation, rhel
1
3,196
1
https://stackoverflow.com/questions/23079096/how-to-impersonate-in-linux
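A sketch of two common Linux-side substitutes for the LogonUser/RevertToSelf pattern; user123 and the paths are illustrative, and the sudo route assumes a sudoers entry permitting exactly that command:

    # run only the privileged copy as the target account:
    sudo -u user123 cp /path/to/checked/file /server/share/
    # the in-process equivalent is seteuid()/setreuid() (os.seteuid from Python),
    # which only works when the process starts with root privileges or CAP_SETUID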
22,365,610
Chef Apache2 recipe on RHEL: ServerName not defined in the default recipe
I am using Chef to build out a virtual machine on Rackspace. The VM is a RHEL 6.5 box. I am running into problems building the default Apache2 /etc/httpd/conf/httpd.conf file cleanly for RHEL using the Apache2 recipe (it appears to default to an Ubuntu flavor configuration). In the recipe config template ( apache2/templates/default/apache2.conf.erb ) there is no place to define ServerName . Consequently when I test Apache is working properly I get the following % apachectl configtest httpd: Could not reliably determine the server's fully qualified domain name, using ##### for ServerName Syntax OK where ##### is my DNS, listed in my /etc/hosts and defined in my cookbook recipe attributes/default.rb file as servername . If I look in the recipe template I don't see any location for the variable ServerName (first 17 lines): # # Generated by Chef # # Based on the Ubuntu apache2.conf ServerRoot "<%= node['apache']['dir'] %>" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # <% if %w[debian].include?(node['platform_family']) -%> LockFile /var/lock/apache2/accept.lock <% elsif %w[freebsd].include?(node['platform_family']) -%> LockFile /var/log/accept.lock <% else %> LockFile logs/accept.lock <% end -%> Now, if I manually go and edit /etc/httpd/conf/httpd.conf , adding a ServerName variable, everything works. Relevant before and after below: BEFORE # # Generated by Chef # # Based on the Ubuntu apache2.conf ServerRoot "/etc/httpd" AFTER # # Generated by Chef # # Based on the Ubuntu apache2.conf ServerRoot "/etc/httpd" ServerName #####:80 Now when I test Apache: % apachectl configtest Syntax OK Obviously the whole point of using Chef is to not hand edit configuration files, and whenever I rerun my chef recipe with chef-solo I am going to blow this customization away. I am so new to Chef that I don't really want to fork the cookbook on Github and make a new template for RHEL, but maybe that's what I need to do? I am hoping there is just one configuration setting in my overall recipe I am not defining, that will add this variable to my core Apache httpd.conf file. Hopefully someone with more experience with Chef, and in particular the Apache2 cookbook, can help me. Thanks in advance. EDIT #1 A look at netstat -tulpn shows that I think Apache (httpd) is actually working, or at least listening in on port 80: % netstat -tulpn Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1274/sshd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1677/master tcp 0 0 **.***.*.***:3306 0.0.0.0:* LISTEN 1585/mysqld tcp 0 0 :::80 :::* LISTEN 6663/httpd tcp 0 0 :::22 :::* LISTEN 1274/sshd tcp 0 0 ::1:25 :::* LISTEN 1677/master Is it looking more and more like a networking (DNS) issue? EDIT #2 Based on the helpful comments to my original post I think I originally misdiagnosed this. I just built a new Ubuntu VM and installed Apache2 by hand using sudo apt-get install apache2 (ie. not using chef-solo ) and I see the same installed layout of apache2 that my Chef recipe created on my RHEL VM. I also get the same warning when running apachectl configtest : apache2: Could not reliably determine the server's fully qualified domain name, using ##### for ServerName Syntax OK When I point the browser on Ubuntu to 127.0.0.1:80 I see the 'It works!' standard Apache response. So, my issue is not really an issue. My thinking now is that this is a network problem. iptables ? 
EDIT #3 I just ssh'd into my RHEL VM and installed Firefox. I then opened it up and pointed it to 127.0.0.1:80 and I get the default page. I think this categorically confirms that I have a DNS issue. Time to speak to my networking admin.
apache, virtual-machine, chef-infra, rhel, rackspace
1
1,146
1
https://stackoverflow.com/questions/22365610/chef-apache2-recipe-on-rhel-servername-not-defined-in-the-default-recipe
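The configtest warning itself is harmless, but for reference it can be silenced without touching the cookbook's httpd.conf template, because the stock RHEL httpd.conf already includes conf.d/*.conf; a small recipe or template resource can manage the same one-line snippet (a sketch, not the apache2 cookbook's own mechanism):

    echo "ServerName $(hostname -f):80" > /etc/httpd/conf.d/servername.conf
    apachectl configtest && service httpd reload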
20,035,042
How to rollback an RPM to an older version?
What is the most efficient approach, when using RPM (RedHat Package Manager), to be able to "roll back" from a new version to an old version? Ex: On Monday I install v1.7 of my RPM. On Tuesday I upgrade to v1.8 of the same RPM. On Wednesday I discover a problem with v1.8 and would like to go back to v1.7. What's the best strategy to do that? An obvious solution is to keep copies of both package versions on the machine somewhere and do this by uninstalling v1.8 and installing 1.7. Might there be a better way of doing this? Might RPM have some better, built-in, way of managing the version archives?
redhat, rpm, rhel
1
6,138
1
https://stackoverflow.com/questions/20035042/how-to-rollback-an-rpm-to-an-older-version
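A sketch of the built-in options for this, using an illustrative package name mypkg; plain rpm can downgrade if the old .rpm file is still around, and yum on RHEL keeps a transaction history that can be undone:

    rpm -Uvh --oldpackage mypkg-1.7-1.x86_64.rpm   # explicit downgrade with the kept package file
    yum downgrade mypkg                            # or let yum pull the previous version from the repo
    yum history list mypkg
    yum history undo <transaction-id>              # roll back the upgrade transaction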
18,503,014
Can we only recompile a kernel module in kernel source tree?
Let's say we install a kernel 2.6.32.el6 and then download the 2.6.32.el6.src.rpm. Can we just install the source, modify some module, and use make -C 2.6.32.el6.src.source.directory -M$PWD in the module directory to compile the module, then copy it into /lib/modules/2.6.32.el6/kernel/moduledirectory, and the new module would work? I tried to modify the kvm modules and compile them, but when I recompile the module and copy it into the directory, the machine says when booting: kvm: no symbol version for module_layout kvm_intel: no symbol version for module_layout Does anyone know what is wrong?
c, linux, linux-kernel, rhel
1
371
1
https://stackoverflow.com/questions/18503014/can-we-only-recompile-a-kernel-module-in-kernel-source-tree
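"no symbol version for module_layout" generally means the module was built without the Module.symvers of the running kernel, so the CONFIG_MODVERSIONS check fails at load time. A sketch of the usual out-of-tree rebuild against the matching kernel-devel tree; the destination path is the usual el6 location for the kvm modules, and M=$PWD is the conventional form of the flag:

    yum install kernel-devel-$(uname -r)
    make -C /usr/src/kernels/$(uname -r) M=$PWD modules
    cp kvm.ko kvm-intel.ko /lib/modules/$(uname -r)/kernel/arch/x86/kvm/
    depmod -a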
17,026,379
RPM conflicts with Python Virtualenv
We use a RHEL5 cluster, and we make custom RPM's of our applications so we can deploy them in the field. I am having an unusual issue where a specific directory within our virtualenv is causing an RPM conflict and I can't figure out why. We have a package (python26-2.6.5-6.el5.x86_64) that provides our main Python executable. It's installed for other services, but not used by our project. Our project uses a virtualenv. For some reason the 'encodings' directory of the virtualenv is causing conflicts. Here is the files provided by the python26 package: [URL] Here is our spec file: [URL] Here is the output from yum: [URL] If someone could point me in the right direction it would be greatly appreciated.
python, virtualenv, rpm, rhel
1
242
1
https://stackoverflow.com/questions/17026379/rpm-conflicts-with-python-virtualenv
13,405,247
What ports does Apache Hadoop version 1.0.3 use for intracluster communication of the daemons
I know port 22 is only used for the control scripts. But I need to know what ports I should open for my 3-node cluster: 2 slaves, 1 namenode/jobtracker. On what ports do the daemons run? On what ports are the URLs displayed? The Hadoop distro is: Apache Hadoop version 1.0.3
hadoop, mapreduce, hbase, rhel, elastic-map-reduce
1
1,044
2
https://stackoverflow.com/questions/13405247/what-ports-does-apache-hadoop-version-1-0-3-use-for-intracluster-communicaion-of
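For reference, the commonly cited Hadoop 1.x defaults; the RPC ports follow whatever host:port is configured in fs.default.name and mapred.job.tracker, so they should be confirmed against the cluster's own *-site.xml files rather than taken from this list:

    #   NameNode            RPC 8020 or 9000       web UI 50070
    #   JobTracker          RPC 8021 or 9001       web UI 50030
    #   DataNode            data 50010, IPC 50020, web UI 50075
    #   TaskTracker                                web UI 50060
    #   SecondaryNameNode                          web UI 50090
    netstat -tlnp | grep java                # verify what the daemons actually bound to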
12,996,355
RHEL 5: Unable to apply patch
I was following this website for installing repcached on memcached 1.4.5: [URL] Initially I downloaded repcached-2.3-1.4.5.patch.gz (I am using RHEL 5). Then I ran gunzip repcached-2.3-1.4.5.patch.gz which generated a file named repcached-2.3-1.4.5.patch. Then I executed patch -p1 -i repcached-2.3-1.4.5.patch and it started producing this: patching file ChangeLog.repcached can't find file to patch at input line 66 Perhaps you used the wrong -p or --strip option? The text leading up to this was: -------------------------- |diff -urN memcached-1.4.5/Makefile.am memcached-1.4.5-repcached-2.3/Makefile.am |--- memcached-1.4.5/Makefile.am Sat Apr 3 11:07:16 2010 |+++ memcached-1.4.5-repcached-2.3/Makefile.am Thu Feb 11 19:51:30 2010 -------------------------- File to patch: please see the screenshot of the above.
linux, rhel
1
608
1
https://stackoverflow.com/questions/12996355/rhel-5-unable-to-apply-patch
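For reference, the hunk headers quoted above are relative to memcached-1.4.5/, so the patch has to be applied from inside that source tree with one leading path component stripped (or with -p0 from its parent directory); a sketch assuming the patch file sits one level up:

    cd memcached-1.4.5
    patch -p1 -i ../repcached-2.3-1.4.5.patch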
12,584,575
What is the Linux equivalent for termiox.h?
I have an archaic modem interface library, which was originally made for Solaris and Linux, and I am trying to see if it will work for Linux. While compiling on Linux, I saw: #if ! defined(WIN32) #include <string.h> #include <fcntl.h> #include <sys/stat.h> #include <sys/types.h> #include <stdlib.h> #include <stdio.h> #include <unistd.h> #include <sys/ioctl.h> #include <sys/time.h> #include <termios.h> #include <sys/termiox.h> It doesn't seem to be able to find termiox.h, and when I google for it, it only shows results for termios.h. I can't simply take away the reference, because there are a number of calls made to it. Would anyone happen to know where the termiox calls are defined under Linux? The OS version is RHEL 5.5. The code which references termiox is just telling it to ignore the termiox options: /home/local/NT/jayanthv/8.7/CallBur/lib/unix.c(556): error: struct "<unnamed>" has no field "termiox" if( modem_opt_ignore_termiox == No && ioctl( modem_handle, TCSETX, &mattr_current.termiox ) < 0 ) Should I just go ahead and add #if !defined() around the code?
c, linux, rhel
1
743
1
https://stackoverflow.com/questions/12584575/what-is-the-linux-equivalent-for-termiox-h
4,514,164
Errno::EEXIST File Exists error when installing 'ferret' gem from local .gem file
I am trying to install the ferret ruby gem on a RHEL zlinux (s390x architecture) machine, and am trying to install a .gem file after patching it so that it will compile . But even trying to install the pristine fetched gem, it fails as follows: [ me@s390x ]$ sudo gem fetch ferret Downloaded ferret-0.11.6 [ me@s390x ]$ sudo gem install -lV ferret-0.11.6.gem Installing gem ferret-0.11.6 Using local gem /home/rubyusr/rubygems/gems/cache/ferret-0.11.6.gem /home/rubyusr/rubygems/gems/gems/ferret-0.11.6/bin ERROR: While executing gem ... (Errno::EEXIST) File exists - /home/rubyusr/rubygems/gems/gems/ferret-0.11.6/bin None of the above-mentioned directories or files related to "ferret" existed before running this command. Also strange is that /home/rubyusr/rubygems/gems/gems/ferret-0.11.6/bin is a directory, although maybe that is a normal complaint. A final complicating factor is when I run the gem command I am actually running a shell script that sets the environment variables for my unusual rubygems directory (I haven't had any problems so far with this set up). Here is my gem shell script: #!/bin/bash export GEM_HOME=/home/rubyusr/rubygems/gems export GEM_PREFIX=/home/rubyusr/rubygems export RUBYLIB=$GEM_PREFIX/lib:/usr/lib/ruby:/usr/lib/ruby/site_ruby:/usr/lib/site_ruby export GEM_PATH=$GEM_HOME OUR_GEM_COMMAND=$GEM_PREFIX/bin/gem $OUR_GEM_COMMAND $@ EDIT: I forgot to add that running the gem install command normally does not seem to result in this error (but ferret fails to compile), with the error: posh.h:515:4: error: #error POSH cannot determine target CPU
ruby, linux, rubygems, rhel, ferret
1
1,440
1
https://stackoverflow.com/questions/4514164/errnoeexist-file-exists-error-when-installing-ferret-gem-from-local-gem-fil
4,255,197
send mail is not working
For example: I have an alias testuser. sendmail -v testuser@localhost < t.mail works, but when I try from another host, sendmail -v testuser@host.company.com < t.mail is not working; I am not receiving the mail. What is the issue and how do I resolve it?
linux, email, sendmail, rhel
1
1,183
1
https://stackoverflow.com/questions/4255197/send-mail-is-not-working
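A sketch of the usual reason stock RHEL sendmail accepts local mail but nothing from other hosts: it only listens on the loopback interface until the Addr=127.0.0.1 restriction in sendmail.mc is removed and the config is regenerated (which needs the sendmail-cf package); port 25 also has to be open in the firewall and the host name listed in /etc/mail/local-host-names:

    # in /etc/mail/sendmail.mc, comment out or widen:
    #   DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
    make -C /etc/mail                # regenerate sendmail.cf
    newaliases                       # rebuild the aliases database for testuser
    service sendmail restart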
3,282,581
Weird JVM Crashing Issue with CF 9 on RedHat Enterprise Linux
We're currently running ColdFusion 9 on a RedHat Enterprise Linux server and have found that, under certain circumstances, the JVM is crashing causing the CF server to be completely useless and requiring a full server restart. The only error that's being returned by the CF app just prior to the JVM completing its death spiral is a java.lang.IndexOutOfBoundsException and it doesn't give too much additional information in the stacktrace. Has anyone else encountered a similar problem to this? Based on some really old threads on House of Fusion (circa 2003) this was a problem that occasionally surfaced due to a caching problem. But, supposedly, that has been fixed in subsequent CF versions. Anyway, I apologize in advance for the vagueness of this question but the errors we're getting back just before it crashes aren't particularly helpful. We have not been able to replicate this problem on Windows, Mac or Ubuntu. Whenever the java.lang.IndexOutOfBoundsException error is thrown in any of those environments the JVM recovers just fine. Any help would be greatly appreciated. Edit: Suffered a serious brain cramp this morning, we're running Enterprise Redhat not Enterprise Tomcat.
jakarta-ee, coldfusion, jvm, redhat, rhel
1
268
1
https://stackoverflow.com/questions/3282581/weird-jvm-crashing-issue-with-cf-9-on-redhat-enterprise-linux
2,527,735
Trouble linking libboost libraries to compile sslsniff on RHEL
Trying to build sslsniff on a RHEL 5.2 system here. When compiling sslsniff on RHEL I hit the same errors when using libboost packages (from repositories like rpmforge) and compiling libboost from source (which appeared to be successful.) I tried this on a fresh system as well (no previous/failed/garbage installs of libboost etc.) # make g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SSLConnectionManager.o -MD -MP -MF .deps/SSLConnectionManager.Tpo -c -o SSLConnectionManager.o SSLConnectionManager.cpp mv -f .deps/SSLConnectionManager.Tpo .deps/SSLConnectionManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT FirefoxUpdater.o -MD -MP -MF .deps/FirefoxUpdater.Tpo -c -o FirefoxUpdater.o FirefoxUpdater.cpp mv -f .deps/FirefoxUpdater.Tpo .deps/FirefoxUpdater.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT Logger.o -MD -MP -MF .deps/Logger.Tpo -c -o Logger.o Logger.cpp mv -f .deps/Logger.Tpo .deps/Logger.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SessionCache.o -MD -MP -MF .deps/SessionCache.Tpo -c -o SessionCache.o SessionCache.cpp mv -f .deps/SessionCache.Tpo .deps/SessionCache.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SSLBridge.o -MD -MP -MF .deps/SSLBridge.Tpo -c -o SSLBridge.o SSLBridge.cpp mv -f .deps/SSLBridge.Tpo .deps/SSLBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT HTTPSBridge.o -MD -MP -MF .deps/HTTPSBridge.Tpo -c -o HTTPSBridge.o HTTPSBridge.cpp mv -f .deps/HTTPSBridge.Tpo .deps/HTTPSBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT sslsniff.o -MD -MP -MF .deps/sslsniff.Tpo -c -o sslsniff.o sslsniff.cpp mv -f .deps/sslsniff.Tpo .deps/sslsniff.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT FingerprintManager.o -MD -MP -MF .deps/FingerprintManager.Tpo -c -o FingerprintManager.o FingerprintManager.cpp mv -f .deps/FingerprintManager.Tpo .deps/FingerprintManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT AuthorityCertificateManager.o -MD -MP -MF .deps/AuthorityCertificateManager.Tpo -c -o AuthorityCertificateManager.o test -f 'certificate/AuthorityCertificateManager.cpp' || echo './'certificate/AuthorityCertificateManager.cpp mv -f .deps/AuthorityCertificateManager.Tpo .deps/AuthorityCertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT TargetedCertificateManager.o -MD -MP -MF .deps/TargetedCertificateManager.Tpo -c -o TargetedCertificateManager.o test -f 'certificate/TargetedCertificateManager.cpp' || echo './'certificate/TargetedCertificateManager.cpp mv -f .deps/TargetedCertificateManager.Tpo .deps/TargetedCertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT CertificateManager.o -MD -MP -MF .deps/CertificateManager.Tpo -c -o CertificateManager.o test -f 'certificate/CertificateManager.cpp' || echo './'certificate/CertificateManager.cpp mv -f .deps/CertificateManager.Tpo .deps/CertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT HttpBridge.o -MD -MP -MF .deps/HttpBridge.Tpo -c -o HttpBridge.o test -f 'http/HttpBridge.cpp' || echo './'http/HttpBridge.cpp mv -f .deps/HttpBridge.Tpo .deps/HttpBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT HttpConnectionManager.o -MD -MP -MF .deps/HttpConnectionManager.Tpo -c -o HttpConnectionManager.o test -f 'http/HttpConnectionManager.cpp' || echo './'http/HttpConnectionManager.cpp mv -f .deps/HttpConnectionManager.Tpo .deps/HttpConnectionManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT HttpHeaders.o -MD -MP -MF .deps/HttpHeaders.Tpo -c -o HttpHeaders.o test -f 'http/HttpHeaders.cpp' || echo './'http/HttpHeaders.cpp mv -f .deps/HttpHeaders.Tpo .deps/HttpHeaders.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT UpdateManager.o -MD -MP -MF .deps/UpdateManager.Tpo -c -o UpdateManager.o UpdateManager.cpp mv -f .deps/UpdateManager.Tpo .deps/UpdateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT OCSPDenier.o -MD -MP -MF .deps/OCSPDenier.Tpo -c -o OCSPDenier.o test -f 'http/OCSPDenier.cpp' || echo './'http/OCSPDenier.cpp mv -f .deps/OCSPDenier.Tpo .deps/OCSPDenier.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT FirefoxAddonUpdater.o -MD -MP -MF .deps/FirefoxAddonUpdater.Tpo -c -o FirefoxAddonUpdater.o FirefoxAddonUpdater.cpp mv -f .deps/FirefoxAddonUpdater.Tpo .deps/FirefoxAddonUpdater.Po g++ -ggdb -g -O2 -lssl -lboost_filesystem -lpthread -lboost_thread -llog4cpp -o sslsniff SSLConnectionManager.o FirefoxUpdater.o Logger.o SessionCache.o SSLBridge.o HTTPSBridge.o sslsniff.o FingerprintManager.o AuthorityCertificateManager.o TargetedCertificateManager.o CertificateManager.o HttpBridge.o HttpConnectionManager.o HttpHeaders.o UpdateManager.o OCSPDenier.o FirefoxAddonUpdater.o SSLConnectionManager.o: In function __static_initialization_and_destruction_0': /usr/local/include/boost/system/error_code.hpp:208: undefined reference to boost::system::get_system_category()' /usr/local/include/boost/system/error_code.hpp:209: undefined reference to boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:214: undefined reference to boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:215: undefined reference to boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:216: undefined reference to boost::system::get_system_category()' There's more, but I guess there's a post length limit.. Most of them appear related to boost::system so I added -lboost_system to the linker command and got farther: # g++ -ggdb -g -O2 -lssl -lboost_filesystem -lpthread -lboost_thread -llog4cpp -o sslsniff SSLConnectionManager.o FirefoxUpdater.o Logger.o SessionCache.o SSLBridge.o HTTPSBridge.o sslsniff.o FingerprintManager.o AuthorityCertificateManager.o TargetedCertificateManager.o CertificateManager.o HttpBridge.o HttpConnectionManager.o HttpHeaders.o UpdateManager.o OCSPDenier.o FirefoxAddonUpdater.o -lboost_system SSLConnectionManager.o: In function thread<boost::_bi::bind_t<void, boost::_mfi::mf3<void, SSLConnectionManager, boost::shared_ptr<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_service<boost::asio::ip::tcp> > >, boost::asio::ip::basic_endpoint<boost::asio::ip::tcp>, bool>, boost::_bi::list4<boost::_bi::value<SSLConnectionManager*>, boost::_bi::value<boost::shared_ptr<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_service<boost::asio::ip::tcp> > > >, boost::_bi::value<boost::asio::ip::basic_endpoint<boost::asio::ip::tcp> >, boost::_bi::value<bool> > > >': /usr/local/include/boost/thread/detail/thread.hpp:191: undefined reference to boost::thread::start_thread()' SSLConnectionManager.o: In function ~thread_data': /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to `boost::detail::thread_data_base::~thread_data_base()' Now the errors are related to boost::detail and boost::filesystem::detail . I've tried using boost 1.35 and 1.42 (latest). On my own Ubuntu system, I installed the libraries from Ubuntu repositories and I was able to compile+link sslsniff just fine. Thanks in advance.
Trouble linking libboost libraries to compile sslsniff on RHEL Trying to build sslsniff on a RHEL 5.2 system here. When compiling sslsniff on RHEL I hit the same errors when using libboost packages (from repositories like rpmforge) and compiling libboost from source (which appeared to be successful.) I tried this on a fresh system as well (no previous/failed/garbage installs of libboost etc.) # make g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SSLConnectionManager.o -MD -MP -MF .deps/SSLConnectionManager.Tpo -c -o SSLConnectionManager.o SSLConnectionManager.cpp mv -f .deps/SSLConnectionManager.Tpo .deps/SSLConnectionManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT FirefoxUpdater.o -MD -MP -MF .deps/FirefoxUpdater.Tpo -c -o FirefoxUpdater.o FirefoxUpdater.cpp mv -f .deps/FirefoxUpdater.Tpo .deps/FirefoxUpdater.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT Logger.o -MD -MP -MF .deps/Logger.Tpo -c -o Logger.o Logger.cpp mv -f .deps/Logger.Tpo .deps/Logger.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SessionCache.o -MD -MP -MF .deps/SessionCache.Tpo -c -o SessionCache.o SessionCache.cpp mv -f .deps/SessionCache.Tpo .deps/SessionCache.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SSLBridge.o -MD -MP -MF .deps/SSLBridge.Tpo -c -o SSLBridge.o SSLBridge.cpp mv -f .deps/SSLBridge.Tpo .deps/SSLBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT HTTPSBridge.o -MD -MP -MF .deps/HTTPSBridge.Tpo -c -o HTTPSBridge.o HTTPSBridge.cpp mv -f .deps/HTTPSBridge.Tpo .deps/HTTPSBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT sslsniff.o -MD -MP -MF .deps/sslsniff.Tpo -c -o sslsniff.o sslsniff.cpp mv -f .deps/sslsniff.Tpo .deps/sslsniff.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT FingerprintManager.o -MD -MP -MF .deps/FingerprintManager.Tpo -c -o FingerprintManager.o FingerprintManager.cpp mv -f .deps/FingerprintManager.Tpo .deps/FingerprintManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT AuthorityCertificateManager.o -MD -MP -MF .deps/AuthorityCertificateManager.Tpo -c -o AuthorityCertificateManager.o test -f 'certificate/AuthorityCertificateManager.cpp' || echo './'certificate/AuthorityCertificateManager.cpp mv -f .deps/AuthorityCertificateManager.Tpo .deps/AuthorityCertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT TargetedCertificateManager.o -MD -MP -MF .deps/TargetedCertificateManager.Tpo -c -o TargetedCertificateManager.o test -f 'certificate/TargetedCertificateManager.cpp' || echo './'certificate/TargetedCertificateManager.cpp mv -f .deps/TargetedCertificateManager.Tpo .deps/TargetedCertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT CertificateManager.o -MD -MP -MF .deps/CertificateManager.Tpo -c -o CertificateManager.o test -f 'certificate/CertificateManager.cpp' || echo './'certificate/CertificateManager.cpp mv -f .deps/CertificateManager.Tpo .deps/CertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT HttpBridge.o -MD -MP -MF .deps/HttpBridge.Tpo -c -o HttpBridge.o test -f 'http/HttpBridge.cpp' || echo './'http/HttpBridge.cpp mv -f .deps/HttpBridge.Tpo .deps/HttpBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT HttpConnectionManager.o -MD -MP -MF .deps/HttpConnectionManager.Tpo -c -o HttpConnectionManager.o test -f 'http/HttpConnectionManager.cpp' || echo './'http/HttpConnectionManager.cpp mv -f .deps/HttpConnectionManager.Tpo .deps/HttpConnectionManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT HttpHeaders.o -MD -MP -MF .deps/HttpHeaders.Tpo -c -o HttpHeaders.o test -f 'http/HttpHeaders.cpp' || echo './'http/HttpHeaders.cpp mv -f .deps/HttpHeaders.Tpo .deps/HttpHeaders.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT UpdateManager.o -MD -MP -MF .deps/UpdateManager.Tpo -c -o UpdateManager.o UpdateManager.cpp mv -f .deps/UpdateManager.Tpo .deps/UpdateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT OCSPDenier.o -MD -MP -MF .deps/OCSPDenier.Tpo -c -o OCSPDenier.o test -f 'http/OCSPDenier.cpp' || echo './'http/OCSPDenier.cpp mv -f .deps/OCSPDenier.Tpo .deps/OCSPDenier.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT FirefoxAddonUpdater.o -MD -MP -MF .deps/FirefoxAddonUpdater.Tpo -c -o FirefoxAddonUpdater.o FirefoxAddonUpdater.cpp mv -f .deps/FirefoxAddonUpdater.Tpo .deps/FirefoxAddonUpdater.Po g++ -ggdb -g -O2 -lssl -lboost_filesystem -lpthread -lboost_thread -llog4cpp -o sslsniff SSLConnectionManager.o FirefoxUpdater.o Logger.o SessionCache.o SSLBridge.o HTTPSBridge.o sslsniff.o FingerprintManager.o AuthorityCertificateManager.o TargetedCertificateManager.o CertificateManager.o HttpBridge.o HttpConnectionManager.o HttpHeaders.o UpdateManager.o OCSPDenier.o FirefoxAddonUpdater.o SSLConnectionManager.o: In function __static_initialization_and_destruction_0': /usr/local/include/boost/system/error_code.hpp:208: undefined reference to boost::system::get_system_category()' /usr/local/include/boost/system/error_code.hpp:209: undefined reference to boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:214: undefined reference to boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:215: undefined reference to boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:216: undefined reference to boost::system::get_system_category()' There's more, but I guess there's a post length limit.. Most of them appear related to boost::system so I added -lboost_system to the linker command and got farther: # g++ -ggdb -g -O2 -lssl -lboost_filesystem -lpthread -lboost_thread -llog4cpp -o sslsniff SSLConnectionManager.o FirefoxUpdater.o Logger.o SessionCache.o SSLBridge.o HTTPSBridge.o sslsniff.o FingerprintManager.o AuthorityCertificateManager.o TargetedCertificateManager.o CertificateManager.o HttpBridge.o HttpConnectionManager.o HttpHeaders.o UpdateManager.o OCSPDenier.o FirefoxAddonUpdater.o -lboost_system SSLConnectionManager.o: In function thread<boost::_bi::bind_t<void, boost::_mfi::mf3<void, SSLConnectionManager, boost::shared_ptr<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_service<boost::asio::ip::tcp> > >, boost::asio::ip::basic_endpoint<boost::asio::ip::tcp>, bool>, boost::_bi::list4<boost::_bi::value<SSLConnectionManager*>, boost::_bi::value<boost::shared_ptr<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_service<boost::asio::ip::tcp> > > >, boost::_bi::value<boost::asio::ip::basic_endpoint<boost::asio::ip::tcp> >, boost::_bi::value<bool> > > >': /usr/local/include/boost/thread/detail/thread.hpp:191: undefined reference to boost::thread::start_thread()' SSLConnectionManager.o: In function ~thread_data': /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to `boost::detail::thread_data_base::~thread_data_base()' Now the errors are related to boost::detail and boost::filesystem::detail . I've tried using boost 1.35 and 1.42 (latest). On my own Ubuntu system, I installed the libraries from Ubuntu repositories and I was able to compile+link sslsniff just fine. Thanks in advance.
boost, linker, boost-thread, rhel, rhel5
1
1,705
1
https://stackoverflow.com/questions/2527735/trouble-linking-libboost-libraries-to-compile-sslsniff-on-rhel
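One common cause of the remaining undefined references is that the link step is not using the Boost libraries that match the headers in /usr/local/include: for example, an older distro Boost in /usr/lib is found first, or /usr/local/lib is not on the library search path at all. A minimal sketch of a link line and runtime setup, assuming Boost 1.42 was built from source into /usr/local (the paths are assumptions):

# Object files first, then every library they depend on, then the search path
# pointing at the same Boost installation that provided the headers.
g++ -ggdb -g -O2 -o sslsniff \
    SSLConnectionManager.o FirefoxUpdater.o Logger.o SessionCache.o SSLBridge.o \
    HTTPSBridge.o sslsniff.o FingerprintManager.o AuthorityCertificateManager.o \
    TargetedCertificateManager.o CertificateManager.o HttpBridge.o \
    HttpConnectionManager.o HttpHeaders.o UpdateManager.o OCSPDenier.o \
    FirefoxAddonUpdater.o \
    -L/usr/local/lib \
    -lboost_system -lboost_filesystem -lboost_thread -lpthread -lssl -lcrypto -llog4cpp

# Let the runtime linker find the /usr/local Boost as well.
echo /usr/local/lib > /etc/ld.so.conf.d/local-boost.conf
ldconfig

# Confirm which Boost copies the binary actually resolves.
ldd ./sslsniff | grep boost

If ldd shows the much older RHEL 5 packaged Boost being picked up instead of the source build, forcing -L/usr/local/lib as above (or removing the old packages) usually clears the boost::system, boost::detail and boost::filesystem::detail undefined references.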
1,862,256
Running external code in a restricted environment (linux)
For reasons beyond the scope of this post, I want to run external (user submitted) code similar to the computer language benchmark game . Obviously this needs to be done in a restricted environment. Here are my restriction requirements: Can only read/write to current working directory (will be large tempdir) No external access (internet, etc) Anything else I probably don't care about (e.g., processor/memory usage, etc). I myself have several restrictions. A solution which uses standard *nix functionality (specifically RHEL 5.x) would be preferred, as then I could use our cluster for the backend. It is also difficult to get software installed there, so something in the base distribution would be optimal. Now, the questions: Can this even be done with externally compiled binaries? It seems like it could be possible, but also like it could just be hopeless. What about if we force the code itself to be submitted, and compile it ourselves. Does that make the problem easier or harder? Should I just give up on home directory protection, and use a VM/rollback? What about blocking external communication (isn't the VM usually talked to over a bridged LAN connection?) Something I missed? Possibly useful ideas: rssh . Doesn't help with compiled code though Using a VM with rollback after code finishes (can network be configured so there is a local bridge but no WAN bridge?). Doesn't work on cluster.
Running external code in a restricted environment (linux) For reasons beyond the scope of this post, I want to run external (user submitted) code similar to the computer language benchmark game . Obviously this needs to be done in a restricted environment. Here are my restriction requirements: Can only read/write to current working directory (will be large tempdir) No external access (internet, etc) Anything else I probably don't care about (e.g., processor/memory usage, etc). I myself have several restrictions. A solution which uses standard *nix functionality (specifically RHEL 5.x) would be preferred, as then I could use our cluster for the backend. It is also difficult to get software installed there, so something in the base distribution would be optimal. Now, the questions: Can this even be done with externally compiled binaries? It seems like it could be possible, but also like it could just be hopeless. What about if we force the code itself to be submitted, and compile it ourselves. Does that make the problem easier or harder? Should I just give up on home directory protection, and use a VM/rollback? What about blocking external communication (isn't the VM usually talked to over a bridged LAN connection?) Something I missed? Possibly useful ideas: rssh . Doesn't help with compiled code though Using a VM with rollback after code finishes (can network be configured so there is a local bridge but no WAN bridge?). Doesn't work on cluster.
linux, rhel
1
358
2
https://stackoverflow.com/questions/1862256/running-external-code-in-a-restricted-environment-linux
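If VMs are off the table, a rough baseline that only uses stock RHEL 5 tooling is to run each submission as a dedicated unprivileged user in a throwaway directory, cap it with ulimit, and block its network traffic with an iptables owner match. A sketch of the idea (the user name, limits and paths are placeholders, not a hardened design):

# One-time setup as root: a locked-down account and a per-UID network block.
useradd --create-home --shell /bin/bash sandbox
iptables -A OUTPUT -m owner --uid-owner sandbox -o lo -j ACCEPT   # keep loopback if needed
iptables -A OUTPUT -m owner --uid-owner sandbox -j REJECT         # everything else refused

# Per job: scratch directory owned by the sandbox user, then run the submitted
# binary with CPU time, address-space and file-size limits in the same shell.
JOB_DIR=$(mktemp -d)
cp submitted_binary "$JOB_DIR"/
chown -R sandbox:sandbox "$JOB_DIR"
su - sandbox -c "cd $JOB_DIR && ulimit -t 60 && ulimit -v 524288 && ulimit -f 102400 && ./submitted_binary"
rm -rf "$JOB_DIR"

This confines writes (ordinary permissions) and outbound traffic, but it does not stop the program from reading world-readable files elsewhere on the system; hiding the rest of the filesystem still needs a chroot, a VM or similar isolation.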
78,874,351
Prompt for username when starting wsl for the first time (like in Ubuntu)
We are creating a custom Linux image for WSL based on RHEL, and we want to create a user when someone logs in for the first time, just like the Ubuntu image does.
Prompt for username when starting wsl for the first time (like in Ubuntu) We are creating a custom Linux image for WSL based on RHEL, and we want to create a user when someone logs in for the first time, just like the Ubuntu image does.
windows-subsystem-for-linux, rhel
1
94
0
https://stackoverflow.com/questions/78874351/prompt-for-username-when-starting-wsl-for-the-first-time-like-in-ubuntu
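Ubuntu's image achieves this with a first-run (OOBE) step; for a custom RHEL-based image a portable approximation is a first-boot script shipped in the image that, on the first root shell, asks for a username, creates the account, and makes it the default WSL user via /etc/wsl.conf. A rough sketch (the script path and marker file are just conventions, not a documented WSL interface):

# /etc/profile.d/firstrun.sh  (shipped inside the custom image)
if [ "$(id -u)" -eq 0 ] && [ ! -f /etc/.firstrun_done ]; then
    read -p "Enter new UNIX username: " NEWUSER
    useradd -m -G wheel -s /bin/bash "$NEWUSER"
    passwd "$NEWUSER"

    # Make the new account the default user for this distribution.
    printf '[user]\ndefault=%s\n' "$NEWUSER" >> /etc/wsl.conf

    touch /etc/.firstrun_done
    echo "Run 'wsl --terminate <distro>' and start it again to log in as $NEWUSER."
fi

Recent WSL releases also provide a dedicated first-boot hook for custom distributions, which is closer to what the Ubuntu image does; the profile.d approach above simply has the advantage of not depending on a particular WSL version.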
78,764,164
Does the UBI 8 image not support OpenSSL versions beyond 1.1.1k?
I have a critical component that relies on OpenSSL version 1.1.1n with custom patches and fixes. Previously, this component was running on a Debian image, but now we are transitioning to UBI 8. Despite attempting upgrades and using the latest UBI 8.10 version, OpenSSL remains at version 1.1.1k, which is the default version shipped in UBI 8. Is there a specific reason why I cannot upgrade? Am I overlooking something, or does UBI 8 currently not support versions beyond 1.1.1k? I have tried downloading OpenSSL 1.1.1n directly from the OpenSSL project to resolve this issue. However, I also need the openssl-libs package, which is used in UBI 8 and which I believe cannot be downloaded manually. The version provided in UBI 8 only goes up to 1.1.1k, so even if I somehow manage to install the latest OpenSSL or the 1.1.1n version, I would still be left with the 1.1.1k openssl-libs. I need solutions that either work around this issue and achieve compatibility with OpenSSL version 1.1.1n or newer, or solid reasons why achieving this might not be possible. I've tried using the latest version of OpenSSL, downloaded manually into UBI 8, together with the older openssl-libs that is present in UBI 8 by default. Mainly, I need to figure out a way to get both the openssl and openssl-libs packages in UBI 8 at version 1.1.1n or newer.
Does the UBI 8 image not support OpenSSL versions beyond 1.1.1k? I have a critical component that relies on OpenSSL version 1.1.1n with custom patches and fixes. Previously, this component was running on a Debian image, but now we are transitioning to UBI 8. Despite attempting upgrades and using the latest UBI 8.10 version, OpenSSL remains at version 1.1.1k, which is the default version shipped in UBI 8. Is there a specific reason why I cannot upgrade? Am I overlooking something, or does UBI 8 currently not support versions beyond 1.1.1k? I have tried downloading OpenSSL 1.1.1n directly from the OpenSSL project to resolve this issue. However, I also need the openssl-libs package, which is used in UBI 8 and which I believe cannot be downloaded manually. The version provided in UBI 8 only goes up to 1.1.1k, so even if I somehow manage to install the latest OpenSSL or the 1.1.1n version, I would still be left with the 1.1.1k openssl-libs. I need solutions that either work around this issue and achieve compatibility with OpenSSL version 1.1.1n or newer, or solid reasons why achieving this might not be possible. I've tried using the latest version of OpenSSL, downloaded manually into UBI 8, together with the older openssl-libs that is present in UBI 8 by default. Mainly, I need to figure out a way to get both the openssl and openssl-libs packages in UBI 8 at version 1.1.1n or newer.
openssl, rhel, tpm, ubi
1
449
0
https://stackoverflow.com/questions/78764164/does-the-ubi-8-image-not-support-openssl-versions-beyond-1-1-1k
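Red Hat keeps UBI 8 on OpenSSL 1.1.1k and backports fixes rather than rebasing, so a newer upstream version will not appear from the UBI repositories, and openssl-libs cannot be swapped out without breaking the RPMs that link against it. If the component truly needs upstream 1.1.1n plus the custom patches, one workaround is to build that OpenSSL from source into a private prefix inside the image and point only this component at it, leaving the system openssl/openssl-libs untouched. A sketch of the idea (the version, prefix and patch step are assumptions):

# Build stage inside UBI 8: toolchain for compiling OpenSSL.
dnf install -y gcc make perl tar

# Unpack the upstream 1.1.1n tarball (fetched separately) and apply the custom patches.
tar xf openssl-1.1.1n.tar.gz && cd openssl-1.1.1n
# patch -p1 < ../custom-fixes.patch

# Install into /opt so the distro openssl-libs stays exactly as shipped.
./config --prefix=/opt/openssl-1.1.1n --openssldir=/opt/openssl-1.1.1n/ssl shared
make -j"$(nproc)"
make install_sw

# The component is then built and run against the private copy only.
export LD_LIBRARY_PATH=/opt/openssl-1.1.1n/lib
/opt/openssl-1.1.1n/bin/openssl version

The trade-off is that this privately built OpenSSL sits outside Red Hat's security update stream, so the patched 1.1.1n has to be maintained and rebuilt by hand.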
78,694,089
Can't get to Tomcat Manager
I am having trouble getting to the tomcat manager at [URL] in tomcat 10.1.25. I have tried with local web browsers and curl. I have a tomcat user created and it has ownership of everything in my tomcat directory. My end goal is to access remotely but would settle for locally. I am running RHEL 9. I can access [URL] both locally and remotely, but all of the internal links (/docs/*) from that page give a 403 error: "You are not authorized to view this page. By default the Manager is only accessible from a browser running on the same machine as Tomcat. If you wish to modify this restriction, you'll need to edit the Manager's context.xml file." Or error 401: You are not authorized to view this page. If you have not changed any configuration files, please examine the file conf/tomcat-users.xml in your installation. That file must contain the credentials to let you use this webapp. My tomcat-users.xml: <tomcat-users xmlns="[URL] xmlns:xsi="[URL] xsi:schemaLocation="[URL] tomcat-users.xsd" version="1.0"> <role rolename="manager-gui"/> <user username="fakeuser" password="fakepass" roles="manager-gui"/> <tomcat-users/> My webapps/manager/META-INF/context.xml: <Context antiResourceLocking="false" privileged="true" > <CookieProcessor className="org.apache.tomcat.util.http.Rfc6265CookieProcessor" sameSiteCookies="strict" /> <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" /> <Manager sessionAttributeValueClassNameFilter="java\.lang\.(?:Boolean|Integer|Long|Number|String)|org\.apache\.catalina\.filters\.CsrfPreventionFilter\$LruCache(?:\$1)?|java\.util\.(?:Linked)?HashMap"/> </Context> Things I have tried: Commenting out <Valve className... Changing allow="127.\d+.\d+.\d+|::1|0:0:0:0:0:0:0:1" to "127.\d+.\d+.\d+|::1|my ip here", "my ip here", "*", and ".*". Adding the role admin-gui to the tomcat-users.xml. Editing webapps/host-manager/META-INF/context.xml with the same as manager/META-INF/context.xml. making sure to restart tomcat and check the status with each change. using https. using [URL] I have followed every article I can find on here with no success.
Can&#39;t get to Tomcat Manager I am having trouble getting to the tomcat manager at [URL] in tomcat 10.1.25. I have tried with local web browsers and curl. I have a tomcat user created and it has ownership of everything in my tomcat directory. My end goal is to access remotely but would settle for locally. I am running RHEL 9. I can access [URL] both locally and remotely, but all of the internal links (/docs/*) from that page give a 403 error: "You are not authorized to view this page. By default the Manager is only accessible from a browser running on the same machine as Tomcat. If you wish to modify this restriction, you'll need to edit the Manager's context.xml file." Or error 401: You are not authorized to view this page. If you have not changed any configuration files, please examine the file conf/tomcat-users.xml in your installation. That file must contain the credentials to let you use this webapp. My tomcat-users.xml: <tomcat-users xmlns="[URL] xmlns:xsi="[URL] xsi:schemaLocation="[URL] tomcat-users.xsd" version="1.0"> <role rolename="manager-gui"/> <user username="fakeuser" password="fakepass" roles="manager-gui"/> <tomcat-users/> My webapps/manager/META-INF/context.xml: <Context antiResourceLocking="false" privileged="true" > <CookieProcessor className="org.apache.tomcat.util.http.Rfc6265CookieProcessor" sameSiteCookies="strict" /> <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" /> <Manager sessionAttributeValueClassNameFilter="java\.lang\.(?:Boolean|Integer|Long|Number|String)|org\.apache\.catalina\.filters\.CsrfPreventionFilter\$LruCache(?:\$1)?|java\.util\.(?:Linked)?HashMap"/> </Context> Things I have tried: Commenting out <Valve className... Changing allow="127.\d+.\d+.\d+|::1|0:0:0:0:0:0:0:1" to "127.\d+.\d+.\d+|::1|my ip here", "my ip here", "*", and ".*". Adding the role admin-gui to the tomcat-users.xml. Editing webapps/host-manager/META-INF/context.xml with the same as manager/META-INF/context.xml. making sure to restart tomcat and check the status with each change. using https. using [URL] I have followed every article I can find on here with no success.
tomcat, rhel
1
138
1
https://stackoverflow.com/questions/78694089/cant-get-to-tomcat-manager
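Two details in the configuration quoted above line up with the two errors. The tomcat-users.xml snippet ends with <tomcat-users/> rather than a real closing tag, which makes the file invalid XML as quoted, so the manager-gui credentials are never loaded (the 401); and the RemoteAddrValve only allows loopback addresses (the 403 from other machines). A sketch of both corrections (the 203.0.113.10 address is a placeholder for the client that should be allowed in):

<!-- conf/tomcat-users.xml: note the proper closing tag -->
<role rolename="manager-gui"/>
<user username="fakeuser" password="fakepass" roles="manager-gui"/>
</tomcat-users>

<!-- webapps/manager/META-INF/context.xml: extend the allow list instead of removing the valve -->
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
       allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1|203\.0\.113\.10" />

Restart Tomcat after both changes; if requests arrive through a reverse proxy, remember that the valve sees the proxy's address, not the browser's.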
78,197,859
Manual installation of gitlab-runner on RHEL fails
My environment is RHEL. I'm trying to install gitlab-runner on my Linux server, which does not have internet connectivity, so I cannot run: curl -L [URL] | bash Instead, I downloaded the Linux package from [URL] and copied it to the server from my local system. Could someone please tell me the next steps to install it? I'm trying to install gitlab-runner on a Linux server that has no internet access.
Manual installation of gitlab-runner on RHEL fails My environment is RHEL. I'm trying to install gitlab-runner on my Linux server, which does not have internet connectivity, so I cannot run: curl -L [URL] | bash Instead, I downloaded the Linux package from [URL] and copied it to the server from my local system. Could someone please tell me the next steps to install it? I'm trying to install gitlab-runner on a Linux server that has no internet access.
linux, server, gitlab, gitlab-ci-runner, rhel
1
258
0
https://stackoverflow.com/questions/78197859/manual-installation-of-gitlab-runner-on-rhel-fails
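On an air-gapped host the repository script is not needed at all: the runner can be installed from the standalone binary (or the RPM) copied over from another machine. A sketch of the binary route, assuming the download was copied to /tmp/gitlab-runner-linux-amd64:

# Put the binary on the PATH and make it executable.
install -m 0755 /tmp/gitlab-runner-linux-amd64 /usr/local/bin/gitlab-runner

# Dedicated service account for the runner.
useradd --comment 'GitLab Runner' --create-home --shell /bin/bash gitlab-runner

# Register it as a service and start it.
gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
gitlab-runner start
gitlab-runner status

# Registering the runner still needs network reachability to the GitLab instance
# itself (which can be an internal address):
# gitlab-runner register --url <gitlab-url> --token <runner-token>

If an RPM was downloaded instead of the raw binary, installing it with rpm -ivh on the server accomplishes the same thing.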
78,022,892
How to set an ipv4 default gateway with ansible nmcli in RHEL9?
I'm writing a playbook to set the default gateway on a server with RHEL9 and multiple network interfaces. This is part of a repeated process, we're cloning a VM from a template with default gateway on ens192 (the management interface) and during the customization we set up routing and change the default gateway interface to another, typically ens224 . However, after running the nmcli module with the gw4 setting, the routing table is not updated. This should theoretically work: - name: "unset default gw" community.general.nmcli: conn_name: "ens192" state: present never_default4: true dns4_ignore_auto: true - name: "set default gw" community.general.nmcli: conn_name: "ens224" state: present gw4: '10.59.41.1' dns4_ignore_auto: true - name: "reload changed NICs" shell: "/usr/bin/nmcli connection up {{ item }}" with_items: - ens192 - ens224 The nmcli connection up trick is the same we use to apply other routing changes. After running these tasks (ansible with --diff shows the proper changes) the routing table is not reloaded, we're just left with no default routes. nmcli shows that both NICs have lost the gateway setting. Some times, restarting NetworkManager will reload everything properly, but it's not always consistent. # nmcli con show ens192 | grep gateway ; nmcli con show ens224 | grep gateway connection.gateway-ping-timeout: 0 ipv4.gateway: -- ipv6.gateway: -- connection.gateway-ping-timeout: 0 ipv4.gateway: -- ipv6.gateway: -- We're running ansible [core 2.15.9] with these collections: Collection Version --------------------- ------- ansible.netcommon 6.0.0 ansible.posix 1.5.4 ansible.utils 3.1.0 community.crypto 2.17.1 community.general 8.3.0 community.hashi_vault 6.1.0 community.vmware 4.1.0 What is wrong here? Does the nmcli module not do what we are expecting from it?
How to set an ipv4 default gateway with ansible nmcli in RHEL9? I'm writing a playbook to set the default gateway on a server with RHEL9 and multiple network interfaces. This is part of a repeated process, we're cloning a VM from a template with default gateway on ens192 (the management interface) and during the customization we set up routing and change the default gateway interface to another, typically ens224 . However, after running the nmcli module with the gw4 setting, the routing table is not updated. This should theoretically work: - name: "unset default gw" community.general.nmcli: conn_name: "ens192" state: present never_default4: true dns4_ignore_auto: true - name: "set default gw" community.general.nmcli: conn_name: "ens224" state: present gw4: '10.59.41.1' dns4_ignore_auto: true - name: "reload changed NICs" shell: "/usr/bin/nmcli connection up {{ item }}" with_items: - ens192 - ens224 The nmcli connection up trick is the same we use to apply other routing changes. After running these tasks (ansible with --diff shows the proper changes) the routing table is not reloaded, we're just left with no default routes. nmcli shows that both NICs have lost the gateway setting. Some times, restarting NetworkManager will reload everything properly, but it's not always consistent. # nmcli con show ens192 | grep gateway ; nmcli con show ens224 | grep gateway connection.gateway-ping-timeout: 0 ipv4.gateway: -- ipv6.gateway: -- connection.gateway-ping-timeout: 0 ipv4.gateway: -- ipv6.gateway: -- We're running ansible [core 2.15.9] with these collections: Collection Version --------------------- ------- ansible.netcommon 6.0.0 ansible.posix 1.5.4 ansible.utils 3.1.0 community.crypto 2.17.1 community.general 8.3.0 community.hashi_vault 6.1.0 community.vmware 4.1.0 What is wrong here? Does the nmcli module not do what we are expecting from it?
ansible, rhel, networkmanager, nmcli, rhel9
1
1,166
0
https://stackoverflow.com/questions/78022892/how-to-set-an-ipv4-default-gateway-with-ansible-nmcli-in-rhel9
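One way to tell whether the problem is in the nmcli module or in NetworkManager itself is to drive nmcli directly for the same change; if the plain commands behave, the issue is in how the module rewrites the profiles. A sketch of the equivalent sequence for the interfaces and gateway used above:

# Management interface keeps its address but must never own the default route.
nmcli connection modify ens192 ipv4.never-default yes ipv4.ignore-auto-dns yes

# Data interface provides the default gateway.
nmcli connection modify ens224 ipv4.gateway 10.59.41.1 ipv4.ignore-auto-dns yes

# Re-activate both profiles so the routing table is rebuilt, then verify.
nmcli connection up ens192
nmcli connection up ens224
ip route show default

If ens224 gets its address via DHCP, the default route normally comes from the lease rather than from ipv4.gateway, so it is worth checking ipv4.method on both profiles as well.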
77,947,638
PUPPET ERROR expected chomping or indentation indicators
2024-02-06T12:50:38.562+01:00 ERROR [qtp488725606-52] [puppetserver] Puppet Server Error: Could not load external node results for "mynode": (<unknown>): expected chomping or indentation indicators, but found h(104) while scanning a block scalar at line 138 column 39 /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb:92:in `respond_to_errors' 2024-02-06T12:50:58.151+01:00 ERROR [qtp488725606-54] [puppetserver] Puppet Failed when searching for node "my node": Could not load external node results for "my node": (<unknown>): expected chomping or indentation indicators, but found h(104) while scanning a block scalar at line 138 column 39 Content of /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb def respond_to_errors(response) yield rescue Puppet::Network::HTTP::Error::HTTPError => e Puppet.info(e.message) respond_with_http_error(response, e) rescue StandardError => e http_e = Puppet::Network::HTTP::Error::HTTPServerError.new(e) Puppet.err([http_e.message, *e.backtrace].join("\n")) respond_with_http_error(response, http_e) end postgres --version postgres (PostgreSQL) 13.13 puppetdb --version puppetdb version: 7.13.0 puppetserver --version puppetserver version: 7.11.0 puppet --version 7.27.0 upgrade puppet, from 7.4 to 7.11 and puppetdb
PUPPET ERROR expected chomping or indentation indicators 2024-02-06T12:50:38.562+01:00 ERROR [qtp488725606-52] [puppetserver] Puppet Server Error: Could not load external node results for "mynode": (<unknown>): expected chomping or indentation indicators, but found h(104) while scanning a block scalar at line 138 column 39 /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb:92:in `respond_to_errors' 2024-02-06T12:50:58.151+01:00 ERROR [qtp488725606-54] [puppetserver] Puppet Failed when searching for node "my node": Could not load external node results for "my node": (<unknown>): expected chomping or indentation indicators, but found h(104) while scanning a block scalar at line 138 column 39 Content of /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb def respond_to_errors(response) yield rescue Puppet::Network::HTTP::Error::HTTPError => e Puppet.info(e.message) respond_with_http_error(response, e) rescue StandardError => e http_e = Puppet::Network::HTTP::Error::HTTPServerError.new(e) Puppet.err([http_e.message, *e.backtrace].join("\n")) respond_with_http_error(response, http_e) end postgres --version postgres (PostgreSQL) 13.13 puppetdb --version puppetdb version: 7.13.0 puppetserver --version puppetserver version: 7.11.0 puppet --version 7.27.0 upgrade puppet, from 7.4 to 7.11 and puppetdb
ruby, linux, puppet, rhel, rhel7
1
83
0
https://stackoverflow.com/questions/77947638/puppet-error-expected-chomping-or-indentation-indicators
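The failure is in parsing the YAML returned by the external node classifier, not in handler.rb itself: the ENC output contains an invalid block scalar (a | or > followed by something other than chomping/indentation indicators) around line 138, column 39. Running the ENC by hand for the node and validating its output usually shows the culprit immediately. A sketch (the ENC path is an assumption; use whatever external_nodes actually points at):

# Which script is the server configured to call?
puppet config print external_nodes --section server

# Run it for the failing node and capture the YAML it returns.
/etc/puppetlabs/puppet/enc.rb mynode > /tmp/mynode.yaml

# Validate with the same Ruby stack puppetserver uses.
/opt/puppetlabs/puppet/bin/ruby -ryaml -e 'YAML.load_file(ARGV[0])' /tmp/mynode.yaml

# Look at the position the parser complains about.
sed -n '130,145p' /tmp/mynode.yaml

The usual cause is a hand-built string (for example a value starting with "| something") rather than output produced by a YAML dumper; the upgraded puppet stack ships a newer Ruby/psych, which can reject output the old parser tolerated.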
77,916,354
Configure RHEL service so that it recognizes Amazon IP address of host EC2
Versions OS: RHEL 8.x I've installed a service -- call it foosvc -- to /usr/lib/systemd/system/foo.service , with configuration something like this: [Unit] ... After=syslog.target network.target [Service] ... ExecStart=/usr/sbin/foosvc fg ... [Install] WantedBy=multi-user.target When the host EC2 reboots, this service restarts as expected. However, there seems to be a race condition: namely, if I restart the service manually, sudo systemctl restart foosvc it listens on port 6000 as follows: foosvc 780 username 21u IPv4 22462 0t0 TCP 127.0.0.1:6000 (LISTEN) foosvc 780 username 21u IPv4 22462 0t0 TCP 172.30.41.149:6000 (LISTEN) where IP address 172.30.41.149 is the IP internal to AWS, and the external IP is the elastic IP (not listed here) attached to this EC2. However, if I reboot the EC2 from the dashboard, foosvc auto-starts as expected, but is listening to port 6000 only for the loopback IP address: foosvc 780 username 21u IPv4 22462 0t0 TCP 127.0.0.1:6000 (LISTEN) QUESTION: how can foosvc be configured so that on system reboots, foosvc listens to port 6000 on both localhost and the internal AWS IP address? NOTE: foosvc listens for both TCP and UDP on port 6000.
Configure RHEL service so that it recognizes Amazon IP address of host EC2 Versions OS: RHEL 8.x I've installed a service -- call it foosvc -- to /usr/lib/systemd/system/foo.service , with configuration something like this: [Unit] ... After=syslog.target network.target [Service] ... ExecStart=/usr/sbin/foosvc fg ... [Install] WantedBy=multi-user.target When the host EC2 reboots, this service restarts as expected. However, there seems to be a race condition: namely, if I restart the service manually, sudo systemctl restart foosvc it listens on port 6000 as follows: foosvc 780 username 21u IPv4 22462 0t0 TCP 127.0.0.1:6000 (LISTEN) foosvc 780 username 21u IPv4 22462 0t0 TCP 172.30.41.149:6000 (LISTEN) where IP address 172.30.41.149 is the IP internal to AWS, and the external IP is the elastic IP (not listed here) attached to this EC2. However, if I reboot the EC2 from the dashboard, foosvc auto-starts as expected, but is listening to port 6000 only for the loopback IP address: foosvc 780 username 21u IPv4 22462 0t0 TCP 127.0.0.1:6000 (LISTEN) QUESTION: how can foosvc be configured so that on system reboots, foosvc listens to port 6000 on both localhost and the internal AWS IP address? NOTE: foosvc listens for both TCP and UDP on port 6000.
amazon-web-services, amazon-ec2, rhel
1
43
1
https://stackoverflow.com/questions/77916354/configure-rhel-service-so-that-it-recognizes-amazon-ip-address-of-host-ec2
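network.target only means the network management stack has started, not that addresses are configured, so at boot the service can come up before the 172.30.x.x address exists and ends up bound to loopback only; a manual restart later naturally sees all addresses. The usual fix is to order the unit after network-online.target and make sure the wait-online service is enabled. A sketch using a drop-in so the packaged foo.service stays untouched (names follow the unit quoted above):

# Drop-in that delays the service until the network is actually online.
mkdir -p /etc/systemd/system/foo.service.d
printf '[Unit]\nWants=network-online.target\nAfter=network-online.target\n' \
    > /etc/systemd/system/foo.service.d/wait-for-network.conf

# network-online.target only waits if the wait-online service is enabled (RHEL 8 uses NetworkManager).
systemctl enable NetworkManager-wait-online.service

systemctl daemon-reload
systemctl restart foo.service

If the daemon supports it, binding to 0.0.0.0 (and ::) instead of enumerating addresses at startup sidesteps the ordering problem entirely.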
77,410,809
Python 3.10 on openshift with fips mode got error on pandarallel
run on docker with rhel 8 on openshift without root user this is the information about the OS that the docker is running NAME="Red Hat Enterprise Linux" VERSION="8.8 (Ootpa)" ID="rhel" ID_LIKE="fedora" VERSION_ID="8.8" PLATFORM_ID="platform:el8" PRETTY_NAME="Red Hat Enterprise Linux 8.8 (Ootpa)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos" HOME_URL="[URL] DOCUMENTATION_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8" REDHAT_BUGZILLA_PRODUCT_VERSION=8.8 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="8.8" This is the requerment.txt file python 3.10.11 asteroid==2.15.5 async-timeout==4.0.3 attrs==23.1.0 certify==2023.7.22 charset-normalizer==3.2.0 contourpy==1.1.0 coverage==7.2.7 cycler==0.11.0 debugpy==1.6.7 dill==0.3.6 exceptiongroup==1.1.1 execnet==1.9.0 fonttools==4.42.1 idna==3.4 iniconfig==2.0.0 isort==5.12.0 Jinja2==3.1.2 joblib==1.3.2 jsonschema==4.17.3 kiwisolver==1.4.5 lazy-object-proxy==1.9.0 MarkupSafe==2.1.3 matplotlib==3.7.2 mccabe==0.7.0 mlxtend==0.22.0 numpy==1.25.2 packaging==23.1 pandarallel==1.6.5 pandas==2.0.0 pika==1.3.1 Pillow==10.0.0 platformdirs==3.5.3 pluggy==1.0.0 psutil==5.9.5 py==1.11.0 py-cpuinfo==9.0.0 pylint==2.17.2 pyparsing==3.0.9 pyrsistent==0.19.3 pytest==7.3.1 pytest-benchmark==4.0.0 pytest-cov==4.0.0 pytest-html==3.2.0 pytest-metadata==3.0.0 pytest-mock==3.10.0 pytest-order==1.1.0 pytest-ordering==0.6 pytest-timeout==2.1.0 pytest-xdist==3.2.1 python-dateutil==2.8.2 pytz==2023.3 redis==4.5.4 requests==2.31.0 scikit-learn==1.2.2 scipy==1.10.1 seaborn==0.12.2 six==1.16.0 threadpoolctl==3.2.0 tomli==2.0.1 tomlkit==0.11.8 typing_extensions==4.6.3 tzdata==2023.3 urllib3==2.0.4 wrapt==1.15.0 OpenSSL 1.1.1k FIPS 25 Mar 2021 I give an example of usage of the lib my usage is different but this code provides the same issue that i handle with Try to Run: import pandas as pd from pandarallel import pandarallel # Initialize pandarallel pandarallel.initialize(use_memory_fs=False) # Create a sample DataFrame data = {'A': range(1, 11), 'B': range(11, 21)} df = pd.DataFrame(data) # Define a function that will be applied to each row in the DataFrame def custom_function(row): return row['A'] + row['B'] # Use pandarallel to apply the function in parallel result = df.parallel_apply(custom_function, axis=1) print(result) this issue seems like there is an issue with multiprocessing Result: INFO: Pandarallel will run on 2 workers. INFO: Pandarallel will use standard multiprocessing data transfer (pipe) to transfer data between the main process and workers. 
Traceback (most recent call last): File "/usr/app/x.py", line 17, in <module> result = df.parallel_apply(custom_function, axis=1) File "/usr/local/lib/python3.10/site-packages/pandarallel/core.py", line 368, in closure master_workers_queue = manager.Queue() File "/usr/local/lib/python3.10/multiprocessing/managers.py", line 723, in temp token, exp = self._create(typeid, *args, **kwds) File "/usr/local/lib/python3.10/multiprocessing/managers.py", line 606, in _create conn = self._Client(self._address, authkey=self._authkey) File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 508, in Client answer_challenge(c, authkey) File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 755, in answer_challenge digest = hmac.new(authkey, message, 'md5').digest() File "/usr/local/lib/python3.10/hmac.py", line 184, in new return HMAC(key, msg, digestmod) File "/usr/local/lib/python3.10/hmac.py", line 60, in __init__ self._init_hmac(key, msg, digestmod) File "/usr/local/lib/python3.10/hmac.py", line 67, in _init_hmac self._hmac = _hashopenssl.hmac_new(key, msg, digestmod=digestmod) ValueError: no reason supplied Thanks
Python 3.10 on openshift with fips mode got error on pandarallel run on docker with rhel 8 on openshift without root user this is the information about the OS that the docker is running NAME="Red Hat Enterprise Linux" VERSION="8.8 (Ootpa)" ID="rhel" ID_LIKE="fedora" VERSION_ID="8.8" PLATFORM_ID="platform:el8" PRETTY_NAME="Red Hat Enterprise Linux 8.8 (Ootpa)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos" HOME_URL="[URL] DOCUMENTATION_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8" REDHAT_BUGZILLA_PRODUCT_VERSION=8.8 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="8.8" This is the requerment.txt file python 3.10.11 asteroid==2.15.5 async-timeout==4.0.3 attrs==23.1.0 certify==2023.7.22 charset-normalizer==3.2.0 contourpy==1.1.0 coverage==7.2.7 cycler==0.11.0 debugpy==1.6.7 dill==0.3.6 exceptiongroup==1.1.1 execnet==1.9.0 fonttools==4.42.1 idna==3.4 iniconfig==2.0.0 isort==5.12.0 Jinja2==3.1.2 joblib==1.3.2 jsonschema==4.17.3 kiwisolver==1.4.5 lazy-object-proxy==1.9.0 MarkupSafe==2.1.3 matplotlib==3.7.2 mccabe==0.7.0 mlxtend==0.22.0 numpy==1.25.2 packaging==23.1 pandarallel==1.6.5 pandas==2.0.0 pika==1.3.1 Pillow==10.0.0 platformdirs==3.5.3 pluggy==1.0.0 psutil==5.9.5 py==1.11.0 py-cpuinfo==9.0.0 pylint==2.17.2 pyparsing==3.0.9 pyrsistent==0.19.3 pytest==7.3.1 pytest-benchmark==4.0.0 pytest-cov==4.0.0 pytest-html==3.2.0 pytest-metadata==3.0.0 pytest-mock==3.10.0 pytest-order==1.1.0 pytest-ordering==0.6 pytest-timeout==2.1.0 pytest-xdist==3.2.1 python-dateutil==2.8.2 pytz==2023.3 redis==4.5.4 requests==2.31.0 scikit-learn==1.2.2 scipy==1.10.1 seaborn==0.12.2 six==1.16.0 threadpoolctl==3.2.0 tomli==2.0.1 tomlkit==0.11.8 typing_extensions==4.6.3 tzdata==2023.3 urllib3==2.0.4 wrapt==1.15.0 OpenSSL 1.1.1k FIPS 25 Mar 2021 I give an example of usage of the lib my usage is different but this code provides the same issue that i handle with Try to Run: import pandas as pd from pandarallel import pandarallel # Initialize pandarallel pandarallel.initialize(use_memory_fs=False) # Create a sample DataFrame data = {'A': range(1, 11), 'B': range(11, 21)} df = pd.DataFrame(data) # Define a function that will be applied to each row in the DataFrame def custom_function(row): return row['A'] + row['B'] # Use pandarallel to apply the function in parallel result = df.parallel_apply(custom_function, axis=1) print(result) this issue seems like there is an issue with multiprocessing Result: INFO: Pandarallel will run on 2 workers. INFO: Pandarallel will use standard multiprocessing data transfer (pipe) to transfer data between the main process and workers. 
Traceback (most recent call last): File "/usr/app/x.py", line 17, in <module> result = df.parallel_apply(custom_function, axis=1) File "/usr/local/lib/python3.10/site-packages/pandarallel/core.py", line 368, in closure master_workers_queue = manager.Queue() File "/usr/local/lib/python3.10/multiprocessing/managers.py", line 723, in temp token, exp = self._create(typeid, *args, **kwds) File "/usr/local/lib/python3.10/multiprocessing/managers.py", line 606, in _create conn = self._Client(self._address, authkey=self._authkey) File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 508, in Client answer_challenge(c, authkey) File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 755, in answer_challenge digest = hmac.new(authkey, message, 'md5').digest() File "/usr/local/lib/python3.10/hmac.py", line 184, in new return HMAC(key, msg, digestmod) File "/usr/local/lib/python3.10/hmac.py", line 60, in __init__ self._init_hmac(key, msg, digestmod) File "/usr/local/lib/python3.10/hmac.py", line 67, in _init_hmac self._hmac = _hashopenssl.hmac_new(key, msg, digestmod=digestmod) ValueError: no reason supplied Thanks
python, openshift, rhel, fips, pandarallel
1
303
0
https://stackoverflow.com/questions/77410809/python-3-10-on-openshift-with-fips-mode-got-error-on-pandarallel
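The traceback bottoms out in hmac.new(authkey, message, 'md5'): Python's multiprocessing authenticates the connections between the parent process and its workers with HMAC-MD5, and MD5 is rejected by OpenSSL when the host is in FIPS mode, so the failure sits in the interpreter/OS combination rather than in pandarallel itself. Two quick shell checks confirm that this is what is happening in the container:

# 1 means the node / RHEL 8 crypto policy is enforcing FIPS mode.
cat /proc/sys/crypto/fips_enabled

# Reproduce the failure outside pandarallel: HMAC-MD5 should raise under FIPS...
python3 -c "import hmac; print(hmac.new(b'key', b'msg', 'md5').hexdigest())"

# ...while a FIPS-approved digest keeps working.
python3 -c "import hmac; print(hmac.new(b'key', b'msg', 'sha256').hexdigest())"

If both checks line up, the practical options are a Python build/version whose multiprocessing handshake is FIPS-compatible, scheduling this workload on non-FIPS nodes, or avoiding the Manager-based queue path that triggers the handshake.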
77,151,033
python cryptography can't be installed on RHEL 8.6 when python version is 3.9
we have tried to install the cryptography module on our RHEL 8.6 linux machine but without success here short details about our server pip3 --version pip 23.2.1 from /usr/local/lib/python3.9/site-packages/pip (python 3.9) python3 --version Python 3.9.16 pip3 list | grep setuptools setuptools 68.2.2 more /etc/redhat-release Red Hat Enterprise Linux release 8.6 (Ootpa) uname -a 4.18.0-372.9.1.el8.x86_64 #1 SMP Fri Apr 15 22:12:19 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux and examples from installation proccess pip3 install --no-cache-dir --no-index "/tmp/1/cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl" ERROR: cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl is not a supported wheel on this platform. and with diff version pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl" ERROR: cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl is not a supported wheel on this platform. or different version pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl" ERROR: cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl is not a supported wheel on this platform. or pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.3-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl" ERROR: cryptography-37.0.3-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl is not a supported wheel on this platform. or pip3 install --no-cache-dir --no-index "/tmp/cryptography-41.0.4.tar.gz" Processing /tmp/cryptography-41.0.4.tar.gz Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [2 lines of output] ERROR: Could not find a version that satisfies the requirement setuptools>=61.0.0 (from versions: none) ERROR: No matching distribution found for setuptools>=61.0.0 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I also downgrade the setuptools to version 38.4.0 but the same error and we reinstall the setuptools to original version - 68.2.2 so we not understand if cryptography cant installed on RHEL 8.6 version with python 3.9
python cryptography can&#39;t installed on RHEL 8.6 when python version is 3.9 we have tried to install the cryptography module on our RHEL 8.6 linux machine but without success here short details about our server pip3 --version pip 23.2.1 from /usr/local/lib/python3.9/site-packages/pip (python 3.9) python3 --version Python 3.9.16 pip3 list | grep setuptools setuptools 68.2.2 more /etc/redhat-release Red Hat Enterprise Linux release 8.6 (Ootpa) uname -a 4.18.0-372.9.1.el8.x86_64 #1 SMP Fri Apr 15 22:12:19 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux and examples from installation proccess pip3 install --no-cache-dir --no-index "/tmp/1/cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl" ERROR: cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl is not a supported wheel on this platform. and with diff version pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl" ERROR: cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl is not a supported wheel on this platform. or different version pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl" ERROR: cryptography-37.0.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl is not a supported wheel on this platform. or pip3 install --no-cache-dir --no-index "/tmp/cryptography-37.0.3-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl" ERROR: cryptography-37.0.3-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl is not a supported wheel on this platform. or pip3 install --no-cache-dir --no-index "/tmp/cryptography-41.0.4.tar.gz" Processing /tmp/cryptography-41.0.4.tar.gz Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [2 lines of output] ERROR: Could not find a version that satisfies the requirement setuptools>=61.0.0 (from versions: none) ERROR: No matching distribution found for setuptools>=61.0.0 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. I also downgrade the setuptools to version 38.4.0 but the same error and we reinstall the setuptools to original version - 68.2.2 so we not understand if cryptography cant installed on RHEL 8.6 version with python 3.9
python, python-3.x, pip, rhel
1
966
1
https://stackoverflow.com/questions/77151033/python-cryptography-cant-installed-on-rhel-8-6-when-python-version-is-3-9
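All of the wheels tried above are PyPy builds: the pp39-pypy39_pp73 part of the filename means "PyPy 3.9", and CPython 3.9 will always refuse them with "not a supported wheel on this platform". The CPython wheels carry cp39/abi3 manylinux tags instead, and the sdist route failed only because building from source needs setuptools>=61 (and, for cryptography 41.x, a Rust toolchain) fetched at build time, which --no-index blocks. A sketch of the offline-friendly route, assuming a second machine with the same CPython 3.9 and internet access:

# On the connected machine with CPython 3.9: grab the correct wheels plus dependencies.
pip3 download 'cryptography==41.0.4' -d /tmp/wheels

# Copy /tmp/wheels to the RHEL 8.6 host, then install strictly from those files.
pip3 install --no-index --find-links /tmp/wheels cryptography

The files that land in /tmp/wheels should have names containing cp39 or abi3 plus a manylinux tag; if they still say pp39, the downloading interpreter is PyPy rather than CPython.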
76,846,919
upgrading AWS Cli in RHEL servers
aws-cli/1.16.280 Python/2.7.5 Linux/3.10.0-1160.92.1.el7.x86_64 botocore/1.13.16 /usr/local/bin/aws -> /usr/local/aws/bin/aws above is the version before upgrading. After upgrading it shows as aws-cli/2.9.16 Python/3.9.11 Linux/3.10.0-1160.92.1.el7.x86_64 exe/x86_64.rhel.7 prompt/off /usr/local/bin/aws -> /usr/local/aws-cli/v2/current/bin/aws 1. What does prompt/off at the end mean? 2. While upgrading, I changed the install directory. Will that be a problem, or should I keep the install directory the same as in the old version? 3. Does an AWS CLI upgrade need a restart?
upgrading AWS Cli in RHEL servers aws-cli/1.16.280 Python/2.7.5 Linux/3.10.0-1160.92.1.el7.x86_64 botocore/1.13.16 /usr/local/bin/aws -> /usr/local/aws/bin/aws above is the version before upgrading. After upgrading it shows as aws-cli/2.9.16 Python/3.9.11 Linux/3.10.0-1160.92.1.el7.x86_64 exe/x86_64.rhel.7 prompt/off /usr/local/bin/aws -> /usr/local/aws-cli/v2/current/bin/aws 1. What does prompt/off at the end mean? 2. While upgrading, I changed the install directory. Will that be a problem, or should I keep the install directory the same as in the old version? 3. Does an AWS CLI upgrade need a restart?
linux, amazon-web-services, rhel
1
85
1
https://stackoverflow.com/questions/76846919/upgrading-aws-cli-in-rhel-servers
76,731,453
Unable to load library libPS9Parser.so library permissions denied
I'm trying to run an application and the debug log is throwing this error: ERROR: Unable to load /app/path/libPCL9Parser.so: 13 'Permission denied' ERROR: Unable to load /app/path/libPS9Parser.so: 13 'Permission denied'. The files exist and have 755 permissions on both files so I'm not sure why the libraries aren't loading. Do I need to install another package or is there something up with the library config possibly? Can I add the path to the library in one of the library configuration files like /etc/ld.so.conf.d/kernel-3.10.0-1160.92.1.el7.x86_64.conf?? Thanks for all of your input!
Unable to load library libPS9Parser.so library permissions denied I'm trying to run an application and the debug log is throwing this error: ERROR: Unable to load /app/path/libPCL9Parser.so: 13 'Permission denied' ERROR: Unable to load /app/path/libPS9Parser.so: 13 'Permission denied'. The files exist and have 755 permissions on both files so I'm not sure why the libraries aren't loading. Do I need to install another package or is there something up with the library config possibly? Can I add the path to the library in one of the library configuration files like /etc/ld.so.conf.d/kernel-3.10.0-1160.92.1.el7.x86_64.conf?? Thanks for all of your input!
redhat, libraries, rhel
1
22
0
https://stackoverflow.com/questions/76731453/unable-to-load-library-libps9parser-so-library-permissions-denied
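With 0755 on the files themselves, the usual next suspect on RHEL 7 is SELinux (a library under an application path often lacks a context the process may map, and dlopen then reports exactly this "Permission denied"), followed by a noexec mount or a non-traversable parent directory. A few checks and the corresponding relabel, with /app/path standing in for the real directory:

# SELinux mode and current labels on the libraries.
getenforce
ls -lZ /app/path/libPS9Parser.so /app/path/libPCL9Parser.so

# Recent denials mentioning them?
ausearch -m AVC -ts recent | grep -i parser

# Label them as shared libraries and make it persistent
# (semanage comes from policycoreutils-python on RHEL 7).
semanage fcontext -a -t lib_t '/app/path/.*\.so'
restorecon -Rv /app/path

# Every parent directory must be traversable by the service account,
# and the filesystem must not be mounted noexec.
namei -l /app/path/libPS9Parser.so
mount | grep /app

Adding the directory under /etc/ld.so.conf.d only changes how the dynamic linker searches for libraries; it does not fix permission or labeling problems on files the application already opens by full path.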
76,704,458
Port-forward drops connection to pod after first connection
Doing port forward for postgresql database through service : nohup kubectl --namespace test-0 port-forward service/operational-db 60000:5432 > /dev/null 2>&1 & Port-forward seems to be working till there is no connection from host machine. The moment Host machine tries to connect for the first time it throws this error - E0717 11:57:36.623431 15244 portforward.go:407] an error occurred forwarding 60000 -> 5432: error forwarding port 5432 to pod a2f34569140ff7aa7079e06aedca300e8776cb9a5b06806ed04fcf0feba018e8, uid : failed to execute portforward in network namespace "/var/run/netns/cni-6db2a2b7-68be-dade-ada5-32e00597e921": read tcp4 127.0.0.1:48064->127.0.0.1:5432: read: connection reset by peer kubectl/server version : > Client Version: v1.26.1 Kustomize Version: v4.5.7 Server Version: > v1.24.14-eks-c12679a Any pointer would be highly appreciated.
Port-forward drops connection to pod after first connection Doing port forward for postgresql database through service : nohup kubectl --namespace test-0 port-forward service/operational-db 60000:5432 > /dev/null 2>&1 & Port-forward seems to be working till there is no connection from host machine. The moment Host machine tries to connect for the first time it throws this error - E0717 11:57:36.623431 15244 portforward.go:407] an error occurred forwarding 60000 -> 5432: error forwarding port 5432 to pod a2f34569140ff7aa7079e06aedca300e8776cb9a5b06806ed04fcf0feba018e8, uid : failed to execute portforward in network namespace "/var/run/netns/cni-6db2a2b7-68be-dade-ada5-32e00597e921": read tcp4 127.0.0.1:48064->127.0.0.1:5432: read: connection reset by peer kubectl/server version : > Client Version: v1.26.1 Kustomize Version: v4.5.7 Server Version: > v1.24.14-eks-c12679a Any pointer would be highly appreciated.
kubernetes, amazon-ec2, kubectl, amazon-eks, rhel
1
3,462
1
https://stackoverflow.com/questions/76704458/port-forward-drops-connection-to-pod-after-first-connection
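"Connection reset by peer" on 127.0.0.1:5432 inside the pod means the forward reached the pod but whatever listens there dropped the connection, or the service forwards to a pod or port that is not actually serving PostgreSQL. A few checks that usually localize it, reusing the names from the command above (adjust deploy/ to the real workload type; pg_isready ships with the standard postgres image):

# What pod and targetPort does the service actually select?
kubectl -n test-0 get service operational-db -o wide
kubectl -n test-0 get endpoints operational-db

# Is PostgreSQL answering inside the pod itself?
kubectl -n test-0 exec deploy/operational-db -- pg_isready -h 127.0.0.1 -p 5432

# Bypass the service and forward straight to the pod to rule out selector/port issues.
kubectl -n test-0 port-forward pod/<operational-db-pod> 60000:5432

# Run the forward in the foreground (no nohup / output redirect) so errors are visible immediately.
kubectl -n test-0 port-forward service/operational-db 60000:5432

If pg_isready fails on 127.0.0.1 inside the pod, the database is only listening on a Unix socket or on another address, and the port-forward cannot work until listen_addresses is adjusted.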
76,615,462
fatal error: cstddef: No such file or directory #include <cstddef>
Help please. I am building the RPM package of the FINAL CUT library [URL] When building, I get an error: In file included from /usr/src/tmp/finalcut-buildroot/usr/include/final/fc.h:30: /usr/src/tmp/finalcut-buildroot/usr/include/final/ftypes.h:27:10: fatal error: cstddef: No such file or directory #include <cstddef> All dependencies are declared. My spec: BuildRequires: gcc-c++ BuildRequires: libstdc++-devel BuildRequires: automake BuildRequires: autoconf BuildRequires: autoconf-archive BuildRequires: libtool BuildRequires: pkg-config glib2 BuildRequires: glib2-devel libtinfo-devel libncurses-devel BuildRequires: libgpm-devel make etersoft-build-utils Requires: libfinal = %{version}-%{release} I have added every dependency I could think of, including newer versions. I even put the cstddef file directly into the project, but nothing helps. When building on Linux, the compiler does not seem to see cstddef, although I checked and it is there.
fatal error: cstddef: No such file or directory #include <cstddef> Help, please: I am building the RPM package of the FINAL CUT program [URL] During the build I get this error: In file included from /usr/src/tmp/finalcut-buildroot/usr/include/final/fc.h:30: /usr/src/tmp/finalcut-buildroot/usr/include/final/ftypes.h:27:10: fatal error: cstddef: No such file or directory #include <cstddef> All dependencies are pulled in. My spec: BuildRequires: gcc-c++ BuildRequires: libstdc++-devel BuildRequires: automake BuildRequires: autoconf BuildRequires: autoconf-archive BuildRequires: libtool BuildRequires: pkg-config glib2 BuildRequires: glib2-devel libtinfo-devel libncurses-devel BuildRequires: libgpm-devel make etersoft-build-utils Requires: libfinal = %{version}-%{release} I have added every dependency I could think of, at current versions. I even put the cstddef file directly into the project, but nothing helps. When building on Linux it doesn't seem to see cstddef, although the header is there - I checked.
rhel, rpmbuild, finalcut
1
1,850
0
https://stackoverflow.com/questions/76615462/fatal-error-cstddef-no-such-file-or-directory-include-cstddef
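For the cstddef question above, a quick shell sketch for checking that the C++ standard headers are actually visible inside the build environment, since cstddef comes from the libstdc++ development package rather than from the project sources; no project-specific paths are assumed.
# Confirm the compiler and C++ headers are installed in the build chroot/host.
rpm -q gcc-c++ libstdc++-devel
find /usr/include/c++ -name cstddef
# Show the include search path g++ really uses.
g++ -E -x c++ - -v < /dev/null 2>&1 | sed -n '/#include <...> search starts here/,/End of search list/p'
# Minimal compile test: if this fails, the toolchain in the chroot is broken, not the finalcut sources.
printf '#include <cstddef>\nint main() { return 0; }\n' > /tmp/t.cpp
g++ -c /tmp/t.cpp -o /tmp/t.o && echo "cstddef OK"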
76,224,405
Kafka containers getting auto-deleted on RHEL OS
We are experiencing some issues while using Kafka in our environments; I will explain the scenario below for better understanding. We are setting up Kafka using a Docker Compose file (running this compose file on a server with the RHEL operating system), and along with Kafka we are setting up Zookeeper and Kafdrop as well. Once we do the setup and check the end-to-end connectivity, it works fine. By fine, I mean that the Kafka consumer, monitor and producer container logs show live data when we do the testing, which I take to indicate that the Kafka broker node and all other components are working. This setup works fine for some random amount of time and then, suddenly, the containers get deleted. By deleted, I mean that the containers are created with a restart: always policy, so they are not even in an exited state - they just vanish, and we have to do the setup all over again. The reason we are raising this issue is that we are not able to pinpoint what exactly is causing the Kafka containers to get deleted: Kafdrop and Zookeeper are still there and only the Kafka broker container gets deleted. As for attempted solutions: we tried reinstalling the operating system, we tried using Kafka images from Bitnami and Wurstmeister, and we tried changing the internal and external port configuration in the Docker Compose file used to stand up the containers. None of the above worked.
Kafka containers getting auto-deleted on RHEL OS We are experiencing some issues while using Kafka in our environments; I will explain the scenario below for better understanding. We are setting up Kafka using a Docker Compose file (running this compose file on a server with the RHEL operating system), and along with Kafka we are setting up Zookeeper and Kafdrop as well. Once we do the setup and check the end-to-end connectivity, it works fine. By fine, I mean that the Kafka consumer, monitor and producer container logs show live data when we do the testing, which I take to indicate that the Kafka broker node and all other components are working. This setup works fine for some random amount of time and then, suddenly, the containers get deleted. By deleted, I mean that the containers are created with a restart: always policy, so they are not even in an exited state - they just vanish, and we have to do the setup all over again. The reason we are raising this issue is that we are not able to pinpoint what exactly is causing the Kafka containers to get deleted: Kafdrop and Zookeeper are still there and only the Kafka broker container gets deleted. As for attempted solutions: we tried reinstalling the operating system, we tried using Kafka images from Bitnami and Wurstmeister, and we tried changing the internal and external port configuration in the Docker Compose file used to stand up the containers. None of the above worked.
linux, docker, rhel
1
38
0
https://stackoverflow.com/questions/76224405/kafka-containers-getting-auto-deleted-on-rhel-os
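For the disappearing-container question above, a sketch of how one might find out what removes the broker container, since a restart policy by itself never deletes a container; service and grep terms are illustrative.
# Watch container lifecycle events; a "destroy" event means something explicitly removed the container.
docker events --since 24h --filter 'type=container' --filter 'event=destroy'
# Check the Docker daemon log around the time the broker vanished.
journalctl -u docker --since "24 hours ago" | grep -iE 'kafka|remove|prune'
# Look for scheduled cleanup jobs that run docker system prune or docker compose down.
grep -ri 'docker' /etc/cron* /var/spool/cron 2>/dev/null
# Check for OOM kills; an OOM kill alone leaves an exited container behind, so a vanished container points at an explicit removal.
dmesg -T | grep -i 'killed process'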
76,177,677
cron service isn't started after restart of my docker app
Hi, I have a docker compose file with a php-apache service. The image doesn't ship with cron, so I installed it inside the container. Whenever I shut down my lab and restart it, the cron service is down and I always have to start it manually with "service cron start". How can I make it persistent and have it autostart on restart? Thanks, Alen php-apache: image: ${phpapache} container_name: php-apache hostname: php-apache restart: always volumes: - /etc/app/latest:/var/www/html/ ports: - "8080:80"
cron service isn't started after restart of my docker app Hi, I have a docker compose file with a php-apache service. The image doesn't ship with cron, so I installed it inside the container. Whenever I shut down my lab and restart it, the cron service is down and I always have to start it manually with "service cron start". How can I make it persistent and have it autostart on restart? Thanks, Alen php-apache: image: ${phpapache} container_name: php-apache hostname: php-apache restart: always volumes: - /etc/app/latest:/var/www/html/ ports: - "8080:80"
docker, cron, rhel
1
180
0
https://stackoverflow.com/questions/76177677/cron-service-isnt-started-after-restart-of-my-docker-app
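For the cron question above, one common pattern is to start cron from the container's entrypoint so it comes back whenever the container restarts. A minimal sketch, assuming a Debian-based php:apache-style image where service cron start and apache2-foreground exist; the ${phpapache} image internals are unknown, so adjust for the real base image.
#!/bin/sh
# entrypoint.sh - start cron, then hand off to the stock Apache entrypoint
set -e
service cron start              # use crond on RHEL-based images
exec apache2-foreground         # keep Apache in the foreground as PID 1
# Wire it up in docker-compose.yml (sketch):
#   php-apache:
#     entrypoint: ["/bin/sh", "/usr/local/bin/entrypoint.sh"]
#     volumes:
#       - ./entrypoint.sh:/usr/local/bin/entrypoint.sh:ro
Also note that anything installed with apt/yum inside a running container is lost when the container is recreated, so cron should be installed in a Dockerfile layer rather than by hand.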
76,107,927
How do I resolve GLIBC error during compilation?
On Red Hat Enterprise Linux Server release 7.9, with ldd version ldd (GNU libc) 2.17, when I try to build my code using a makefile the following error is seen: //usr/lib64/libresolv.so.2: undefined reference to __resolv_context_get@GLIBC_PRIVATE' //usr/lib64/libresolv.so.2: undefined reference to __h_errno@GLIBC_PRIVATE' /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libpci.so: undefined reference to memcpy@GLIBC_2.14' //usr/lib64/libresolv.so.2: undefined reference to __resolv_context_get_override@GLIBC_PRIVATE' //usr/lib64/libresolv.so.2: undefined reference to __sendmmsg@GLIBC_PRIVATE' //usr/lib64/libresolv.so.2: undefined reference to __resolv_context_get_preinit@GLIBC_PRIVATE' //usr/lib64/libresolv.so.2: undefined reference to `__resolv_context_put@GLIBC_PRIVATE' collect2: error: ld returned 1 exit status How do I resolve this issue? The current ldd version is 2.17; should I try to upgrade glibc? This is a freshly installed server - am I missing any configuration?
How do I resolve GLIBC error during compilation? On Red Hat Enterprise Linux Server release 7.9, with ldd version ldd (GNU libc) 2.17, when I try to build my code using a makefile the following error is seen: //usr/lib64/libresolv.so.2: undefined reference to __resolv_context_get@GLIBC_PRIVATE' //usr/lib64/libresolv.so.2: undefined reference to __h_errno@GLIBC_PRIVATE' /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib64/libpci.so: undefined reference to memcpy@GLIBC_2.14' //usr/lib64/libresolv.so.2: undefined reference to __resolv_context_get_override@GLIBC_PRIVATE' //usr/lib64/libresolv.so.2: undefined reference to __sendmmsg@GLIBC_PRIVATE' //usr/lib64/libresolv.so.2: undefined reference to __resolv_context_get_preinit@GLIBC_PRIVATE' //usr/lib64/libresolv.so.2: undefined reference to `__resolv_context_put@GLIBC_PRIVATE' collect2: error: ld returned 1 exit status How do I resolve this issue? The current ldd version is 2.17; should I try to upgrade glibc? This is a freshly installed server - am I missing any configuration?
linux, cmake, makefile, glibc, rhel
1
993
1
https://stackoverflow.com/questions/76107927/how-do-i-resolve-glibc-error-during-compilation
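For the GLIBC_PRIVATE question above, undefined GLIBC_PRIVATE references usually mean the libresolv being linked does not belong to the installed glibc 2.17 (for example a copy taken from a newer system earlier on the link path). A sketch of how one might check, with no project-specific assumptions.
# Is the file on disk the packaged one, and from which package?
rpm -Vf /usr/lib64/libresolv.so.2        # no output means the packaged file is intact
rpm -qf /usr/lib64/libresolv.so.2        # should be glibc-2.17-*
# See which version requirements the library actually carries.
objdump -T /usr/lib64/libresolv.so.2 | grep GLIBC_PRIVATE | head
# Make sure the link isn't picking up a stray copy via LD_LIBRARY_PATH or -L directories.
echo "$LD_LIBRARY_PATH"
find / -name 'libresolv.so*' -not -path '/proc/*' 2>/dev/null
# If rpm -V flags the file, reinstalling glibc restores the matching library.
sudo yum reinstall -y glibc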
76,052,504
My custom yum repository cannot see RPM files
I have configured a custom repository with nginx RPMs. It is located on my local Artifactory: [URL] I can see that my RPMs are there: Index of rpm-nginx-prod-local/rhel8 Name Last modified Size ../ repodata/ 11-Apr-2023 12:47 - nginx-1.18.0-2.el8.ngx.x86_64.rpm 03-Dec-2020 15:00 798.63 KB nginx-1.20.2-1.el8.ngx.x86_64.rpm 03-Dec-2021 12:07 820.03 KB nginx-1.22.0-1.el8.ngx.x86_64.rpm 13-Jul-2022 14:28 826.56 KB nginx-1.22.1-1.el8.ngx.x86_64.rpm 02-Nov-2022 14:03 828.23 KB Artifactory/7.55.10 Server at artifactory Port 80 I also configured /etc/yum.repos.d/: cat /etc/yum.repos.d/nginx-rpms-rhel.repo [nginx-rpms-rhel] name=nginx - RHEL8 - x86_64 - REPO baseurl=[URL] enabled=1 fastestmirror_enabled=0 gpgcheck=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release proxy=_none_ But when I try to list anything from that repo I get: yum search * --enablerepo=nginx-rpms-rhel --disablerepo="*" No matches found. yum install nginx --disablerepo=* --enablerepo=nginx-rpms-rhel Error: Unable to find a match: nginx yum install --disablerepo=* --enablerepo=nginx-rpms-rhel nginx-1.22.1-1.el8.ngx.x86_64 Error: Unable to find a match: nginx-1.22.1-1.el8.ngx.x86_64 This looks really odd to me, because there are rpms in my Artifactory. I'm running on RHEL8. I've tried to do yum clean all, but it is not the case. Any ideas?
My custom yum repository cannot see RPM files I have configured a custom repository with nginx RPMs. It is located on my local Artifactory: [URL] I can see that my RPMs are there: Index of rpm-nginx-prod-local/rhel8 Name Last modified Size ../ repodata/ 11-Apr-2023 12:47 - nginx-1.18.0-2.el8.ngx.x86_64.rpm 03-Dec-2020 15:00 798.63 KB nginx-1.20.2-1.el8.ngx.x86_64.rpm 03-Dec-2021 12:07 820.03 KB nginx-1.22.0-1.el8.ngx.x86_64.rpm 13-Jul-2022 14:28 826.56 KB nginx-1.22.1-1.el8.ngx.x86_64.rpm 02-Nov-2022 14:03 828.23 KB Artifactory/7.55.10 Server at artifactory Port 80 I also configured /etc/yum.repos.d/: cat /etc/yum.repos.d/nginx-rpms-rhel.repo [nginx-rpms-rhel] name=nginx - RHEL8 - x86_64 - REPO baseurl=[URL] enabled=1 fastestmirror_enabled=0 gpgcheck=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release proxy=_none_ But when I try to list anything from that repo I get: yum search * --enablerepo=nginx-rpms-rhel --disablerepo="*" No matches found. yum install nginx --disablerepo=* --enablerepo=nginx-rpms-rhel Error: Unable to find a match: nginx yum install --disablerepo=* --enablerepo=nginx-rpms-rhel nginx-1.22.1-1.el8.ngx.x86_64 Error: Unable to find a match: nginx-1.22.1-1.el8.ngx.x86_64 This looks really odd to me, because there are rpms in my Artifactory. I'm running on RHEL8. I've tried to do yum clean all, but it is not the case. Any ideas?
yum, rhel
1
756
1
https://stackoverflow.com/questions/76052504/my-custom-yum-repository-cannot-see-rpm-files
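For the Artifactory repo question above, two things worth checking are whether the shell expands the bare * in yum search *, and whether the repodata actually lists the packages; the repo id comes from the question, the baseurl is left as a placeholder because it was elided.
# Quote the wildcard so the shell does not expand it against local filenames.
yum --disablerepo='*' --enablerepo=nginx-rpms-rhel search 'nginx*'
# Force fresh metadata for just this repo and list what it claims to contain.
yum clean metadata --disablerepo='*' --enablerepo=nginx-rpms-rhel
yum --disablerepo='*' --enablerepo=nginx-rpms-rhel repoquery --all
# Confirm the client can reach the metadata the repo advertises (use the exact baseurl from the .repo file).
curl -sI "<baseurl>/repodata/repomd.xml"
# If repoquery shows nothing, the repodata on the Artifactory side was likely generated without the .rpm files and needs to be recalculated there.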
75,722,792
wget only works with --secure-protocol option RHEL8
I am trying to call wget '[URL] on an AWS instance based on Red Hat Enterprise Linux 8.7 (Ootpa). It fails with GnuTLS: No or insufficient priorities were set. Unable to establish SSL connection. while it works with the option --secure-protocol TLSv1_2 or --secure-protocol TLSv1_3. Any ideas how to fix this, so that I can call luarocks, which in turn implicitly calls wget (without --secure-protocol) to download the package info it needs?
wget only works with --secure-protocol option RHEL8 I am trying to call wget '[URL] on an AWS instance based on Red Hat Enterprise Linux 8.7 (Ootpa). It fails with GnuTLS: No or insufficient priorities were set. Unable to establish SSL connection. while it works with the option --secure-protocol TLSv1_2 or --secure-protocol TLSv1_3. Any ideas how to fix this, so that I can call luarocks, which in turn implicitly calls wget (without --secure-protocol) to download the package info it needs?
wget, rhel, luarocks
1
1,344
0
https://stackoverflow.com/questions/75722792/wget-only-works-with-secure-protocol-option-rhel8
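For the wget/GnuTLS question above, on RHEL 8 the GnuTLS priority string comes from the system-wide crypto policy, so an empty or broken policy produces exactly "No or insufficient priorities were set". A sketch of what one might check; the policy name DEFAULT is the stock RHEL value and the URL is a placeholder because the original was elided.
# Show the active system crypto policy; it should normally be DEFAULT.
update-crypto-policies --show
# Re-apply a known-good policy and retry wget without --secure-protocol.
sudo update-crypto-policies --set DEFAULT
wget 'https://example.org/'
# If only wget is affected, check for stray overrides in wgetrc files.
grep -iR 'secure-protocol\|ciphers' /etc/wgetrc ~/.wgetrc 2>/dev/null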
75,642,937
How to set an env for the Perl 5.36 binary in Linux
We have an RHEL 8.6 server, and Perl is installed by default when we installed RHEL; the Perl version is 5.8.9. We need Perl 5.36, so we are trying to install it. We downloaded the Perl binary from [URL] and executed sh <(curl -q [URL] state activate --default saravraj-org/Perl-5.36.0-Linux-CentOS Also sh <(curl -q [URL] \ -c'state activate --default saravraj-org/Perl-5.36.0-Linux-CentOS' Now the perl -v command shows the version as 5.36: build@sarav~/Perl-5.36.0-Linux-CentOS ❯❯❯ perl -v | grep 'This is' This is perl 5, version 36, subversion 0 (v5.36.0) built for x86_64-linux But when we execute the OpenSSL configure script it still picks up Perl 5.8.9, so we need to set the environment for the Perl 5.36 version. Could you please provide the steps for this? build@sarav /a/m/p/o/a/ciscossl-1.1.1n.7.2.390 ❯❯❯ perl -v ✘ 1 This is perl 5, version 36, subversion 0 ( v5.36.0 ) built for x86_64-linux Copyright 1987-2022, Larry Wall build@sarav /a/m/p/o/a/ciscossl-1.1.1n.7.2.390 ❯❯❯ ./config --prefix=/auto/open/linux/openssl no-threads no-ecdh no-ec - fPIC Operating system: x86_64-whatever-linux2 Perl v5.10.0 required--this is only v5.8.9 , stopped at ./Configure line 12. BEGIN failed--compilation aborted at ./Configure line 12. Perl v5.10.0 required--this is only v5.8.9, stopped at ./Configure line 12. BEGIN failed--compilation aborted at ./Configure line 12. This system (linux-x86_64) is not supported. See file INSTALL for details. We expect the script to pick up Perl 5.36, but it is using 5.8.9.
How to set an env for the Perl 5.36 binary in Linux We have an RHEL 8.6 server, and Perl is installed by default when we installed RHEL; the Perl version is 5.8.9. We need Perl 5.36, so we are trying to install it. We downloaded the Perl binary from [URL] and executed sh <(curl -q [URL] state activate --default saravraj-org/Perl-5.36.0-Linux-CentOS Also sh <(curl -q [URL] \ -c'state activate --default saravraj-org/Perl-5.36.0-Linux-CentOS' Now the perl -v command shows the version as 5.36: build@sarav~/Perl-5.36.0-Linux-CentOS ❯❯❯ perl -v | grep 'This is' This is perl 5, version 36, subversion 0 (v5.36.0) built for x86_64-linux But when we execute the OpenSSL configure script it still picks up Perl 5.8.9, so we need to set the environment for the Perl 5.36 version. Could you please provide the steps for this? build@sarav /a/m/p/o/a/ciscossl-1.1.1n.7.2.390 ❯❯❯ perl -v ✘ 1 This is perl 5, version 36, subversion 0 ( v5.36.0 ) built for x86_64-linux Copyright 1987-2022, Larry Wall build@sarav /a/m/p/o/a/ciscossl-1.1.1n.7.2.390 ❯❯❯ ./config --prefix=/auto/open/linux/openssl no-threads no-ecdh no-ec - fPIC Operating system: x86_64-whatever-linux2 Perl v5.10.0 required--this is only v5.8.9 , stopped at ./Configure line 12. BEGIN failed--compilation aborted at ./Configure line 12. Perl v5.10.0 required--this is only v5.8.9, stopped at ./Configure line 12. BEGIN failed--compilation aborted at ./Configure line 12. This system (linux-x86_64) is not supported. See file INSTALL for details. We expect the script to pick up Perl 5.36, but it is using 5.8.9.
perl, rhel
1
558
1
https://stackoverflow.com/questions/75642937/how-to-set-a-env-for-perl-5-36-binary-in-linux
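For the Perl question above, the OpenSSL config script simply runs whatever perl it finds first, so putting the 5.36 binary ahead of /usr/bin on PATH for the build shell is the usual approach. A sketch; the install path is inferred from the prompt in the question and is therefore an assumption.
# Put the newer perl first on PATH for this shell (adjust the path to the real bin directory).
export PATH="$HOME/Perl-5.36.0-Linux-CentOS/bin:$PATH"
hash -r                               # drop bash's cached lookup of 'perl'
command -v perl && perl -v | grep 'This is'
# Now run the OpenSSL configuration from the same shell.
./config --prefix=/auto/open/linux/openssl no-threads no-ecdh no-ec -fPIC
# To make it permanent for the build user, append the export line to ~/.bash_profile.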
75,389,635
Nodemailer works in development in localhost, but not in production VPS. How do I figure out what is wrong?
I am using Nodemailer in an ExpressJS Node application on a RHEL VPS to send form submissions via email, from and to an external email server. My application runs as expected on my computer using nodemon to serve to localhost, but when I try to use pm2 to run my app on my RHEL server, it does not send the emails. The app works as expected besides the nodemailer form submission. I am able to access all the routes and I even get a status code 200 when submitting the form, but that may be due to the way I set up my Express server's POST route and callback. Port 465 is open and I opened up port 25 as well just in case, but I'm not sure if that was necessary. I am new to email servers and how they work, so I could use some help if anyone can point me in the right direction. Here's a copy of my express server JS code: require("dotenv").config(); console.log("token", process.env.PASS); const express = require("express"); const app = express(); const nodemailer = require("nodemailer"); const path = require('path') const PORT = process.env.PORT; // Middleware app.use([ express.static(path.join(__dirname, '../public')), express.urlencoded({ extended: false }), express.json(), ]); // Set routes to static pages app.get("/home", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/index.html")); }); app.get("/contact", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/contact.html")); }); app.get("/services", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/services.html")); }); app.get("/pricing", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/pricing.html")); }); app.get("/about", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/about.html")); }); app.get("/gallery", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/gallery.html")); }); // Post contact form data to SMTP server app.post("/", (req, res) => { console.log(req.body); const transporter = nodemailer.createTransport({ host: process.env.HOST, port: 465, secure: true, auth: { user: process.env.EMAIL, pass: process.env.PASS, }, tls: { rejectUnauthorized: false, }, }); const mailOptions = { from: process.env.EMAIL, to: process.env.RECIP, subject: Message from ${req.body.name} - ${req.body.email}, text: Customer Info: Name: ${req.body.name} Phone: ${req.body.phone} Email: ${req.body.email} Jobsite: ${req.body.jobsite} Message: ${req.body.message}, }; transporter.sendMail(mailOptions, (error, info) => { if (error) { console.log(error); res.send("error"); } else { console.log("Message Sent: " + info.response); res.send("success"); } }); }); app.listen(PORT, () => { console.log(Server running on port ${PORT}); }); I tried doing the form submissions in the same manner as I did in development in my localhost on my computer, but it does not work. It may have something to do with RHEL's security features or something else I'm completely unaware of. I've been searching for answers on this particular issue for a few days and haven't found anything as of yet. I'm unsure of what could be causing this issue with the ports opened and with the Express application seemingly working fine besides nodemailer on the VPS.
Nodemailer works in development in localhost, but not in production VPS. How do I figure out what is wrong? I am using Nodemailer in an ExpressJS Node application on a RHEL VPS to send form submissions via email, from and to an external email server. My application runs as expected on my computer using nodemon to serve to localhost, but when I try to use pm2 to run my app on my RHEL server, it does not send the emails. The app works as expected besides the nodemailer form submission. I am able to access all the routes and I even get a status code 200 when submitting the form, but that may be due to the way I set up my Express server's POST route and callback. Port 465 is open and I opened up port 25 as well just in case, but I'm not sure if that was necessary. I am new to email servers and how they work, so I could use some help if anyone can point me in the right direction. Here's a copy of my express server JS code: require("dotenv").config(); console.log("token", process.env.PASS); const express = require("express"); const app = express(); const nodemailer = require("nodemailer"); const path = require('path') const PORT = process.env.PORT; // Middleware app.use([ express.static(path.join(__dirname, '../public')), express.urlencoded({ extended: false }), express.json(), ]); // Set routes to static pages app.get("/home", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/index.html")); }); app.get("/contact", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/contact.html")); }); app.get("/services", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/services.html")); }); app.get("/pricing", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/pricing.html")); }); app.get("/about", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/about.html")); }); app.get("/gallery", (req, res) => { res.status(200).sendFile(path.join(__dirname, "../public/gallery.html")); }); // Post contact form data to SMTP server app.post("/", (req, res) => { console.log(req.body); const transporter = nodemailer.createTransport({ host: process.env.HOST, port: 465, secure: true, auth: { user: process.env.EMAIL, pass: process.env.PASS, }, tls: { rejectUnauthorized: false, }, }); const mailOptions = { from: process.env.EMAIL, to: process.env.RECIP, subject: Message from ${req.body.name} - ${req.body.email}, text: Customer Info: Name: ${req.body.name} Phone: ${req.body.phone} Email: ${req.body.email} Jobsite: ${req.body.jobsite} Message: ${req.body.message}, }; transporter.sendMail(mailOptions, (error, info) => { if (error) { console.log(error); res.send("error"); } else { console.log("Message Sent: " + info.response); res.send("success"); } }); }); app.listen(PORT, () => { console.log(Server running on port ${PORT}); }); I tried doing the form submissions in the same manner as I did in development in my localhost on my computer, but it does not work. It may have something to do with RHEL's security features or something else I'm completely unaware of. I've been searching for answers on this particular issue for a few days and haven't found anything as of yet. I'm unsure of what could be causing this issue with the ports opened and with the Express application seemingly working fine besides nodemailer on the VPS.
node.js, express, smtp, nodemailer, rhel
1
165
0
https://stackoverflow.com/questions/75389635/nodemailer-works-in-development-in-localhost-but-not-in-production-vps-how-do
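For the Nodemailer question above, the first thing worth ruling out is whether the VPS can make an outbound SMTPS connection at all, since many hosting providers block mail ports. A shell sketch run on the RHEL VPS; smtp.example.com stands in for the real host from the .env file.
# Raw TLS handshake to the mail server on 465; a hang or immediate reset means the network is blocking it, not the Node code.
openssl s_client -connect smtp.example.com:465 </dev/null
# Check SELinux for denials around the time of a form submission.
getenforce
sudo ausearch -m avc -ts recent 2>/dev/null | tail
# Check the local firewall configuration while you are at it.
sudo firewall-cmd --list-all
# Watch the pm2 logs while submitting the form; the console.log(error) branch will print the nodemailer error object.
pm2 logs --lines 100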
74,933,416
Podman on RHEL 8 running out of space during import
I am having issues with Podman running out of space when importing. This is happening on a RHEL 8 VM that has been deployed for our group. We do have a 80GB /docker partition available, but I am missing some Podman configuration that says to use /docker. This VM Can you all help me identify? Here is part of my /etc/containers/storage.conf: [storage] # Default Storage Driver, Must be set for proper operation. driver = "overlay" # Temporary storage location runroot = "/docker/temp" # Primary Read/Write location of container storage # When changing the graphroot location on an SELINUX system, you must # ensure the labeling matches the default locations labels with the # following commands: # semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH # restorecon -R -v /NEWSTORAGEPATH # graphroot = "/var/lib/containers/storage" graphroot = "/docker" We are running SELinux, so I did run these commands: semanage fcontext -a -e /var/lib/containers/storage /docker restorecon -R -v /docker and restart the podman service. However, if I run podman import docker.tar We receive the error: Getting image source signatures Copying blob 848eb673668a [=>------------------------------------] 1.8GiB / 41.3GiB Error: writing blob: storing blob to file "/var/tmp/storage2140624383/1": write /var/tmp/storage2140624383/1: no space left on device df -H shows: Filesystem Size Used Avail Use% Mounted on devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 84K 3.9G 1% /dev/shm tmpfs 3.9G 9.3M 3.9G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/mapper/rhel_rhel86--svr-root 38G 7.2G 31G 20% / /dev/mapper/rhel_rhel86--svr-tmp 4.7G 66M 4.6G 2% /tmp /dev/mapper/rhel_rhel86--svr-home 43G 1.4G 42G 4% /home /dev/sda2 495M 276M 220M 56% /boot /dev/sdb1 79G 42G 33G 56% /docker /dev/sda1 500M 5.9M 494M 2% /boot/efi /dev/mapper/rhel_rhel86--svr-var 33G 1.6G 32G 5% /var /dev/mapper/rhel_rhel86--svr-var_log 4.7G 109M 4.6G 3% /var/log /dev/mapper/rhel_rhel86--svr-var_tmp 1.9G 47M 1.9G 3% /var/tmp /dev/mapper/rhel_rhel86--svr-var_log_audit 9.4G 132M 9.2G 2% /var/log/audit tmpfs 785M 8.0K 785M 1% /run/user/42 tmpfs 785M 0 785M 0% /run/user/1000 Do you guys know what I'm missing to tell Podman to use /docker instead of /var/tmp/storage2140624383 ? ################################################ Edited December 29: I was able to change the tmpdir to /docker. However, upon import of this 54GB docker.tar file, it is still telling me I am running out of space. We were able to import a small .tar (around 800MB) successfully, so we know podman is working. $ podman import docker.tar Getting image source signatures Copying blob b45265b317a7 done Error: writing blob: adding layer with blob "sha256:b45265b317a7897670ff015b177bac7b9d5037b3cfb490d3567da959c7e2cf70": Error processing tar file(exit status 1): write /a65be6ac39ddadfec332b73d772c49d5f1b4fffbe7a3a419d00fd58fcb4bb752/layer.tar: no space left on device
Podman on RHEL 8 running out of space during import I am having issues with Podman running out of space when importing. This is happening on a RHEL 8 VM that has been deployed for our group. We do have a 80GB /docker partition available, but I am missing some Podman configuration that says to use /docker. This VM Can you all help me identify? Here is part of my /etc/containers/storage.conf: [storage] # Default Storage Driver, Must be set for proper operation. driver = "overlay" # Temporary storage location runroot = "/docker/temp" # Primary Read/Write location of container storage # When changing the graphroot location on an SELINUX system, you must # ensure the labeling matches the default locations labels with the # following commands: # semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH # restorecon -R -v /NEWSTORAGEPATH # graphroot = "/var/lib/containers/storage" graphroot = "/docker" We are running SELinux, so I did run these commands: semanage fcontext -a -e /var/lib/containers/storage /docker restorecon -R -v /docker and restart the podman service. However, if I run podman import docker.tar We receive the error: Getting image source signatures Copying blob 848eb673668a [=>------------------------------------] 1.8GiB / 41.3GiB Error: writing blob: storing blob to file "/var/tmp/storage2140624383/1": write /var/tmp/storage2140624383/1: no space left on device df -H shows: Filesystem Size Used Avail Use% Mounted on devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 84K 3.9G 1% /dev/shm tmpfs 3.9G 9.3M 3.9G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/mapper/rhel_rhel86--svr-root 38G 7.2G 31G 20% / /dev/mapper/rhel_rhel86--svr-tmp 4.7G 66M 4.6G 2% /tmp /dev/mapper/rhel_rhel86--svr-home 43G 1.4G 42G 4% /home /dev/sda2 495M 276M 220M 56% /boot /dev/sdb1 79G 42G 33G 56% /docker /dev/sda1 500M 5.9M 494M 2% /boot/efi /dev/mapper/rhel_rhel86--svr-var 33G 1.6G 32G 5% /var /dev/mapper/rhel_rhel86--svr-var_log 4.7G 109M 4.6G 3% /var/log /dev/mapper/rhel_rhel86--svr-var_tmp 1.9G 47M 1.9G 3% /var/tmp /dev/mapper/rhel_rhel86--svr-var_log_audit 9.4G 132M 9.2G 2% /var/log/audit tmpfs 785M 8.0K 785M 1% /run/user/42 tmpfs 785M 0 785M 0% /run/user/1000 Do you guys know what I'm missing to tell Podman to use /docker instead of /var/tmp/storage2140624383 ? ################################################ Edited December 29: I was able to change the tmpdir to /docker. However, upon import of this 54GB docker.tar file, it is still telling me I am running out of space. We were able to import a small .tar (around 800MB) successfully, so we know podman is working. $ podman import docker.tar Getting image source signatures Copying blob b45265b317a7 done Error: writing blob: adding layer with blob "sha256:b45265b317a7897670ff015b177bac7b9d5037b3cfb490d3567da959c7e2cf70": Error processing tar file(exit status 1): write /a65be6ac39ddadfec332b73d772c49d5f1b4fffbe7a3a419d00fd58fcb4bb752/layer.tar: no space left on device
rhel, podman
1
3,790
1
https://stackoverflow.com/questions/74933416/podman-on-rhel-8-running-out-of-space-during-import
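For the Podman question above, the image copy stage uses a temporary directory (by default under /var/tmp) that is separate from graphroot, so pointing that at the large /docker filesystem is the usual fix. A sketch; the /docker/tmp path is just an example and the containers.conf key is believed to exist in recent Podman releases, so treat it as an assumption to verify.
# One-off: point the blob/temp copy area at the large filesystem.
sudo mkdir -p /docker/tmp
sudo TMPDIR=/docker/tmp podman import docker.tar
# Persistent: recent Podman versions support an image copy temp dir in containers.conf.
# /etc/containers/containers.conf
#   [engine]
#   image_copy_tmp_dir = "/docker/tmp"
# Keep in mind the import also needs room under graphroot for the unpacked layer,
# so a 54 GB tar can need well over 54 GB free in /docker while importing.
df -h /docker /var/tmp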
73,990,199
Jittery movement of QScrollArea from custom touchscreen driver
So a few years ago, I wrote a custom touchscreen driver specifically for a particular application which ran on a Scientific Linux 6.4 (CentOS 6 based) OS, which did not have native touch support, but the kernel supported touch events, so I was able to directly read the raw data from the touchscreen in /dev/input/event * and read the event data to generate mouse events with Qt to control the application to mimic a multi-touch touchscreen. More recently, we've finally migrated to RedHat 8.4, but I had to disable the native touch driver, because as far as I know, the native Qt touch events didn't allow the same degree of control over the application that my driver did. Recently during a trial, a technician reported that when using the touchscreen to manipulate one of the application's image display areas, the movement was very "jittery". The image display area is simply a QScrollArea with a QImage inside, and hidden scroll bars. The way it works is that on a mouseMoveEvent, it manipulates the scrollbars according to the delta value on the mouse event. void PanArea::mouseMoveEvent(QMouseEvent* e) { SafeStr str; if (m_pw) { if (m_disable_panning == false && e->buttons().testFlag(Qt::RightButton)) { QPoint delta = e->pos() - m_last_pos; horizontalScrollBar()->setValue(horizontalScrollBar()->value() - delta.x()); verticalScrollBar()->setValue(verticalScrollBar()->value() - delta.y()); m_last_pos = e->pos(); emit signalScrollPositionChanged(getScrollPosition()); // This force update has been added because // when fast panning cause black to appear within the image // because some pan widget updates were being skipped. // Performance seems acceptable with this here. m_pw->update(); } else if (...) { // irrelevant code removed to save space } } } This works fine when simply right-click dragging on the scroll area. And then here is the relevant function that dispatches the mouse press and mouse move events from the touchscreen driver: void InputHandler::panStart(TouchPoint* tp, QWidget* widget) { QPoint pos(tp->cx(), tp->cy()); QWidget* target = widget; if (target == NULL) target = QApplication::widgetAt(pos); if (target != NULL) { QPoint local = target->mapFromGlobal(pos); QMouseEvent* press = new QMouseEvent(QEvent::MouseButtonPress, local, pos, Qt::RightButton, Qt::RightButton, Qt::NoModifier); QApplication::postEvent(widget, press); } } void InputHandler::panMove(TouchPoint* tp, QWidget* widget) { QPoint pos(tp->cx(), tp->cy()); QWidget* target = widget; if (target == NULL) target = QApplication::widgetAt(pos); if (target != NULL) { QPoint local = target->mapFromGlobal(pos); QMouseEvent* move = new QMouseEvent(QEvent::MouseMove, local, pos, Qt::NoButton, Qt::RightButton, Qt::NoModifier); QApplication::postEvent(widget, move); } } When I touch and drag on the widget, it DOES move, but it jumps all over the place as if I were rapidly dragging it in random directions. Is there some caveat of how Qt5 mouse events work that could explain this behavior? Or something to do with the way the widget is moving the image around?
Jittery movement of QScrollArea from custom touchscreen driver So a few years ago, I wrote a custom touchscreen driver specifically for a particular application which ran on a Scientific Linux 6.4 (CentOS 6 based) OS, which did not have native touch support, but the kernel supported touch events, so I was able to directly read the raw data from the touchscreen in /dev/input/event * and read the event data to generate mouse events with Qt to control the application to mimic a multi-touch touchscreen. More recently, we've finally migrated to RedHat 8.4, but I had to disable the native touch driver, because as far as I know, the native Qt touch events didn't allow the same degree of control over the application that my driver did. Recently during a trial, a technician reported that when using the touchscreen to manipulate one of the application's image display areas, the movement was very "jittery". The image display area is simply a QScrollArea with a QImage inside, and hidden scroll bars. The way it works is that on a mouseMoveEvent, it manipulates the scrollbars according to the delta value on the mouse event. void PanArea::mouseMoveEvent(QMouseEvent* e) { SafeStr str; if (m_pw) { if (m_disable_panning == false && e->buttons().testFlag(Qt::RightButton)) { QPoint delta = e->pos() - m_last_pos; horizontalScrollBar()->setValue(horizontalScrollBar()->value() - delta.x()); verticalScrollBar()->setValue(verticalScrollBar()->value() - delta.y()); m_last_pos = e->pos(); emit signalScrollPositionChanged(getScrollPosition()); // This force update has been added because // when fast panning cause black to appear within the image // because some pan widget updates were being skipped. // Performance seems acceptable with this here. m_pw->update(); } else if (...) { // irrelevant code removed to save space } } } This works fine when simply right-click dragging on the scroll area. And then here is the relevant function that dispatches the mouse press and mouse move events from the touchscreen driver: void InputHandler::panStart(TouchPoint* tp, QWidget* widget) { QPoint pos(tp->cx(), tp->cy()); QWidget* target = widget; if (target == NULL) target = QApplication::widgetAt(pos); if (target != NULL) { QPoint local = target->mapFromGlobal(pos); QMouseEvent* press = new QMouseEvent(QEvent::MouseButtonPress, local, pos, Qt::RightButton, Qt::RightButton, Qt::NoModifier); QApplication::postEvent(widget, press); } } void InputHandler::panMove(TouchPoint* tp, QWidget* widget) { QPoint pos(tp->cx(), tp->cy()); QWidget* target = widget; if (target == NULL) target = QApplication::widgetAt(pos); if (target != NULL) { QPoint local = target->mapFromGlobal(pos); QMouseEvent* move = new QMouseEvent(QEvent::MouseMove, local, pos, Qt::NoButton, Qt::RightButton, Qt::NoModifier); QApplication::postEvent(widget, move); } } When I touch and drag on the widget, it DOES move, but it jumps all over the place as if I were rapidly dragging it in random directions. Is there some caveat of how Qt5 mouse events work that could explain this behavior? Or something to do with the way the widget is moving the image around?
c++, linux, qt, qt5, rhel
1
80
0
https://stackoverflow.com/questions/73990199/jittery-movement-of-qscrollarea-from-custom-touchscreen-driver
73,940,835
NumPy producing different results in Ubuntu and RHEL
I am facing a platform reproducibility issue with NumPy. My development environment is in RHEL Linux and production is in PCF Cloud (Ubuntu). below calculation is giving different result in RHEL and Ubuntu. a=np.log(np.float32(0.042230162769556046)/np.float32(0.9577698111534119)) Result in RHEL: -3.1214728 Result in Ubuntu: -3.1214726 I see the floating point re producibility across platform is an issue. Any suggestions on how to handle this will be great help.
NumPy producing different results in Ubuntu and RHEL I am facing a platform reproducibility issue with NumPy. My development environment is in RHEL Linux and production is in PCF Cloud (Ubuntu). below calculation is giving different result in RHEL and Ubuntu. a=np.log(np.float32(0.042230162769556046)/np.float32(0.9577698111534119)) Result in RHEL: -3.1214728 Result in Ubuntu: -3.1214726 I see the floating point re producibility across platform is an issue. Any suggestions on how to handle this will be great help.
numpy, ubuntu, rhel
1
159
0
https://stackoverflow.com/questions/73940835/numpy-producing-different-results-in-ubuntu-and-rhel
73,564,263
gsettings proxy configuration out of sync with Network Manager GUI on RHEL/CentOS 8
I'm setting auto pac proxy settings (autoconfig-url) within a C++ program using gsettings and see that the Network Manager GUI Proxy Settings are out of sync with gsettings. In addition, after closing the Network Manager GUI it overrides the proxy settings in gsettings. Steps to Reproduce 1a. Programmatically set gsettings (System call) std::string cmd = "gsettings set org.gnome.system.proxy mode auto"; system(cmd.c_str()); cmd = "gsettings set org.gnome.system.proxy autoconfig-url [URL] system(cmd.c_str()); OR 1b. Programmatically set gsettings (GTK lib) GSettings *settings; settings = g_settings_new ("org.gnome.system.proxy"); g_settings_set_string(settings, "mode", "auto"); g_settings_set_string(settings, "autoconfig-url", "[URL] g_settings_sync(); Opened terminal and read back gsettings to verify $ gsettings get org.gnome.system.proxy mode 'auto' $ gsettings get org.gnome.system.proxy autoconfig-url '[URL] Opened Network Manager GUI in Settings and saw not in sync with gsettings. Close Network Manager GUI. Read back gsettings to find that the gsettings proxy settings have been overwritten by closing Network Manager GUI. Even if no updates were made. $ gsettings get org.gnome.system.proxy mode 'none' $ gsettings get org.gnome.system.proxy autoconfig-url '' I tested on Ubuntu 18.04 and 20.04 and I see the Network Manager GUI in sync with gsettings. I see this issue mainly on RHEL/CentOS 8.
gsettings proxy configuration out of sync with Network Manager GUI on RHEL/CentOS 8 I'm setting auto pac proxy settings (autoconfig-url) within a C++ program using gsettings and see that the Network Manager GUI Proxy Settings are out of sync with gsettings. In addition, after closing the Network Manager GUI it overrides the proxy settings in gsettings. Steps to Reproduce 1a. Programmatically set gsettings (System call) std::string cmd = "gsettings set org.gnome.system.proxy mode auto"; system(cmd.c_str()); cmd = "gsettings set org.gnome.system.proxy autoconfig-url [URL] system(cmd.c_str()); OR 1b. Programmatically set gsettings (GTK lib) GSettings *settings; settings = g_settings_new ("org.gnome.system.proxy"); g_settings_set_string(settings, "mode", "auto"); g_settings_set_string(settings, "autoconfig-url", "[URL] g_settings_sync(); Opened terminal and read back gsettings to verify $ gsettings get org.gnome.system.proxy mode 'auto' $ gsettings get org.gnome.system.proxy autoconfig-url '[URL] Opened Network Manager GUI in Settings and saw not in sync with gsettings. Close Network Manager GUI. Read back gsettings to find that the gsettings proxy settings have been overwritten by closing Network Manager GUI. Even if no updates were made. $ gsettings get org.gnome.system.proxy mode 'none' $ gsettings get org.gnome.system.proxy autoconfig-url '' I tested on Ubuntu 18.04 and 20.04 and I see the Network Manager GUI in sync with gsettings. I see this issue mainly on RHEL/CentOS 8.
centos, rhel, centos8, networkmanager, gsettings
1
355
0
https://stackoverflow.com/questions/73564263/gsettings-proxy-configuration-out-of-sync-with-network-manager-gui-on-rhel-cento
73,535,130
Kerberos Authentication Across Multiple Domains and Network Interfaces
I am trying to figure out how to set up passwordless Kerberos authentication with some unusual requirements. -- The Setup -- There are 2 networks: N1 and N2. There are 2 locations: the first location is where all the workstations are located and only has access to network N1, the second is where all the servers are located which are connected to N1 and N2. All of the workstations are windows machine, use the domain *.example.com, and are managed via a windows domain server. All of the servers are RHEL machines and have two domain names: *.a.example.com [the server's address on network N1] and *.b.example.com [the server's address on network N2]. -- The Need -- I need a user to be able to ssh into a server over network N1, this can be done via password or Kerberos. From that server, the user needs to be able to passwordless ssh into all the other servers using either of the server's domain names. -- What I've Tried -- A lot, I've been trying to make this work for 3-4 weeks now but I'll go over where I'm currently at as it seems to be the closest. I have set up 2 child domain servers for a.example.com and b.example.com I have added the servers to both domain controllers and their DNS servers I have used realmd to join the servers to the a.example.com domain server Updated my krb5.conf file to include all the domains (I'll include my configs at the bottom). I have tried every combination of dns_lookup_realm, rdns, dns_canonicalize_hostname, and ignore_acceptor_hostname In this setup I'm able to ssh via password from a workstation to server1.a.example.com and from there I can ssh without password to any other serverX.a.example.com all over network N1 as expected. The problem is when I try to ssh into serverX.b.example.com I get a "Server not found in Kerberos database". I was hoping by adding the b.example.com domain controller and manually adding the host I would add that machine to the database but it did not. I've reached the limit of my knowledge of Kerberos, Active Directory, and SSH so I have just been going in circles the last week or so. Any help would be greatly appreciated. 
-- Config Files -- krb5.conf includedir /var/lib/sss/pubconf/krb5.include.d/ includedir /etc/krb5.conf.d/ [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] dns_lookup_realm = true ticket_lifetime = 24h renew_lifetime = 7d forwardable = false rdns = false pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt spake_preauth_groups = edwards25519 default_realm = A.EXAMPLE.COM default_ccache_name = KEYRING:persistent:%{uid} dns_canonicalize_hostname = true ignore_acceptor_hostname = true [realms] A.EXAMPLE.COM = { kdc = aserver.a.example.com admin_server = aserver.a.example.com } B.EXAMPLE.COM = { kdc = bserver.b.example.com admin_server = bserver.b.example.com } EXAMPLE.COM = { kdc = server.example.com admin_server = server.example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM .a.example.com = A.EXAMPLE.COM a.example.com = A.EXAMPLE.COM .b.example.com = B.EXAMPLE.COM b.example.com = B.EXAMPLE.COM sssd.conf [sssd] default_domain_suffix = a.example.com domains = a.example.com, b.example.com config_file_version = 2 services = nss,pam,ssh [domain/a.example.com] ad_domain = a.example.com krb5_realm = A.EXAMPLE.COM realmd_tags = manages-system joined-with-adcli cache_credentials = True id_provider = ad krb5_store_password_if_offline = True default_shell = /bin/bash ldap_id_mapping = True use_fully_qualified_names = True fallback_homedir = /home/%u@%d access_provider = ad # I have tried both with and without this block [domain/b.example.com] ad_domain = b.example.com krb5_realm = B.EXAMPLE.COM realmd_tags = manages-system joined-with-adcli cache_credentials = True id_provider = ad krb5_store_password_if_offline = True default_shell = /bin/bash ldap_id_mapping = True use_fully_qualified_names = True fallback_homedir = /home/%u@%d access_provider = ad sshd_config HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_ecdsa_key HostKey /etc/ssh/ssh_host_ed25519_key SyslogFacility AUTHPRIV PermitRootLogin yes AuthorizedKeysFile .ssh/authorized_keys PasswordAuthentication yes ChallengeResponseAuthentication no GSSAPIAuthentication yes GSSAPICleanupCredentials no GSSAPIStrictAcceptorCheck no UsePam yes X11Forwarding yes PrintMotd yes AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE AcceptEnv XMODIFIERS Subsystem sftp /usr/libexec/openssh/sftp-server EDIT -- Additional Details -- Below is the keytab for serverx once I realm join a.example.com SERVERX$@A.EXAMPLE.COM SERVERX$@A.EXAMPLE.COM host/SERVERX@A.EXAMPLE.COM host/SERVERX@A.EXAMPLE.COM host/SERVERX.a.example.com@A.EXAMPLE.COM host/SERVERX.a.example.com@A.EXAMPLE.COM RestrictedKrbHost/SERVERX@A.EXAMPLE.COM RestrictedKrbHost/SERVERX@A.EXAMPLE.COM RestrictedKrbHost/SERVERX.a.example.com@A.EXAMPLE.COM RestrictedKrbHost/SERVERX.a.example.com@A.EXAMPLE.COM When I ssh from server1, I get a tgt from the b.example.com server but Kerberos is looking for host/serverx.b.example.com@B.EXAMPLE.COM which the host does not have in its keytab. Maybe if I manually add those entries it might work. EDIT 2 That will not work because it never even passes the ticket to serverx, the domain server returns the error "TGS request result: -1765328377/Server not found in Kerberos database" So I guess the question is how do I manually add a host into the windows domain Kerberos database. 
EDIT 3 I tried adding the SPN host/SERVERX@B.EXAMPLE.COM and RestrictedKrbHost/SERVERX@B.EXAMPLE.COM to SERVERX computer in the a.example.com domain controller but now I get a "Illegal cross-realm ticket" So I removed those and tried adding them to the SERVERX computer in the b.example.com domain controller but then I get a "The ticket isn't for us" error EDIT 4 Now I have added host/SERVERX@B.EXAMPLE.COM and RestrictedKrbHost/SERVERX@B.EXAMPLE.COM to the SERVERX computer in the b.example.com domain controller and host/host.b.example.com@B.EXAMPLE.COM to SERVERX keytab. This gets me close but the issue is the kvno number (and likely the encryption key) do not match. Because on serverx they are set via the a.example.com domain controller, but when server1 requests a ticket it gets it from the b.example.com domain controller. Now I have to figure out how to create the SPN in the b.example.com domain controller using the encryption key and kvno from the a.example.com domain controller. This sounds unlikely to be allowed.
Kerberos Authentication Across Multiple Domains and Network Interfaces I am trying to figure out how to set up passwordless Kerberos authentication with some unusual requirements. -- The Setup -- There are 2 networks: N1 and N2. There are 2 locations: the first location is where all the workstations are located and only has access to network N1, the second is where all the servers are located which are connected to N1 and N2. All of the workstations are windows machine, use the domain *.example.com, and are managed via a windows domain server. All of the servers are RHEL machines and have two domain names: *.a.example.com [the server's address on network N1] and *.b.example.com [the server's address on network N2]. -- The Need -- I need a user to be able to ssh into a server over network N1, this can be done via password or Kerberos. From that server, the user needs to be able to passwordless ssh into all the other servers using either of the server's domain names. -- What I've Tried -- A lot, I've been trying to make this work for 3-4 weeks now but I'll go over where I'm currently at as it seems to be the closest. I have set up 2 child domain servers for a.example.com and b.example.com I have added the servers to both domain controllers and their DNS servers I have used realmd to join the servers to the a.example.com domain server Updated my krb5.conf file to include all the domains (I'll include my configs at the bottom). I have tried every combination of dns_lookup_realm, rdns, dns_canonicalize_hostname, and ignore_acceptor_hostname In this setup I'm able to ssh via password from a workstation to server1.a.example.com and from there I can ssh without password to any other serverX.a.example.com all over network N1 as expected. The problem is when I try to ssh into serverX.b.example.com I get a "Server not found in Kerberos database". I was hoping by adding the b.example.com domain controller and manually adding the host I would add that machine to the database but it did not. I've reached the limit of my knowledge of Kerberos, Active Directory, and SSH so I have just been going in circles the last week or so. Any help would be greatly appreciated. 
-- Config Files -- krb5.conf includedir /var/lib/sss/pubconf/krb5.include.d/ includedir /etc/krb5.conf.d/ [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] dns_lookup_realm = true ticket_lifetime = 24h renew_lifetime = 7d forwardable = false rdns = false pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt spake_preauth_groups = edwards25519 default_realm = A.EXAMPLE.COM default_ccache_name = KEYRING:persistent:%{uid} dns_canonicalize_hostname = true ignore_acceptor_hostname = true [realms] A.EXAMPLE.COM = { kdc = aserver.a.example.com admin_server = aserver.a.example.com } B.EXAMPLE.COM = { kdc = bserver.b.example.com admin_server = bserver.b.example.com } EXAMPLE.COM = { kdc = server.example.com admin_server = server.example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM .a.example.com = A.EXAMPLE.COM a.example.com = A.EXAMPLE.COM .b.example.com = B.EXAMPLE.COM b.example.com = B.EXAMPLE.COM sssd.conf [sssd] default_domain_suffix = a.example.com domains = a.example.com, b.example.com config_file_version = 2 services = nss,pam,ssh [domain/a.example.com] ad_domain = a.example.com krb5_realm = A.EXAMPLE.COM realmd_tags = manages-system joined-with-adcli cache_credentials = True id_provider = ad krb5_store_password_if_offline = True default_shell = /bin/bash ldap_id_mapping = True use_fully_qualified_names = True fallback_homedir = /home/%u@%d access_provider = ad # I have tried both with and without this block [domain/b.example.com] ad_domain = b.example.com krb5_realm = B.EXAMPLE.COM realmd_tags = manages-system joined-with-adcli cache_credentials = True id_provider = ad krb5_store_password_if_offline = True default_shell = /bin/bash ldap_id_mapping = True use_fully_qualified_names = True fallback_homedir = /home/%u@%d access_provider = ad sshd_config HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_ecdsa_key HostKey /etc/ssh/ssh_host_ed25519_key SyslogFacility AUTHPRIV PermitRootLogin yes AuthorizedKeysFile .ssh/authorized_keys PasswordAuthentication yes ChallengeResponseAuthentication no GSSAPIAuthentication yes GSSAPICleanupCredentials no GSSAPIStrictAcceptorCheck no UsePam yes X11Forwarding yes PrintMotd yes AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE AcceptEnv XMODIFIERS Subsystem sftp /usr/libexec/openssh/sftp-server EDIT -- Additional Details -- Below is the keytab for serverx once I realm join a.example.com SERVERX$@A.EXAMPLE.COM SERVERX$@A.EXAMPLE.COM host/SERVERX@A.EXAMPLE.COM host/SERVERX@A.EXAMPLE.COM host/SERVERX.a.example.com@A.EXAMPLE.COM host/SERVERX.a.example.com@A.EXAMPLE.COM RestrictedKrbHost/SERVERX@A.EXAMPLE.COM RestrictedKrbHost/SERVERX@A.EXAMPLE.COM RestrictedKrbHost/SERVERX.a.example.com@A.EXAMPLE.COM RestrictedKrbHost/SERVERX.a.example.com@A.EXAMPLE.COM When I ssh from server1, I get a tgt from the b.example.com server but Kerberos is looking for host/serverx.b.example.com@B.EXAMPLE.COM which the host does not have in its keytab. Maybe if I manually add those entries it might work. EDIT 2 That will not work because it never even passes the ticket to serverx, the domain server returns the error "TGS request result: -1765328377/Server not found in Kerberos database" So I guess the question is how do I manually add a host into the windows domain Kerberos database. 
EDIT 3 I tried adding the SPN host/SERVERX@B.EXAMPLE.COM and RestrictedKrbHost/SERVERX@B.EXAMPLE.COM to SERVERX computer in the a.example.com domain controller but now I get a "Illegal cross-realm ticket" So I removed those and tried adding them to the SERVERX computer in the b.example.com domain controller but then I get a "The ticket isn't for us" error EDIT 4 Now I have added host/SERVERX@B.EXAMPLE.COM and RestrictedKrbHost/SERVERX@B.EXAMPLE.COM to the SERVERX computer in the b.example.com domain controller and host/host.b.example.com@B.EXAMPLE.COM to SERVERX keytab. This gets me close but the issue is the kvno number (and likely the encryption key) do not match. Because on serverx they are set via the a.example.com domain controller, but when server1 requests a ticket it gets it from the b.example.com domain controller. Now I have to figure out how to create the SPN in the b.example.com domain controller using the encryption key and kvno from the a.example.com domain controller. This sounds unlikely to be allowed.
ssh, active-directory, kerberos, rhel
1
5,735
1
https://stackoverflow.com/questions/73535130/kerberos-authentication-across-multiple-domains-and-network-interfaces
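For the Kerberos question above, a few klist/kvno checks help confirm whether the keytab on serverx and the SPN the KDC issues tickets for actually agree; the realm and host names are the placeholders from the question.
# On serverx: list keytab entries with key version numbers and encryption types.
sudo klist -k -e /etc/krb5.keytab
# On server1, as the user: ask for each service ticket and note which realm answers and the kvno it returns.
kvno host/serverx.a.example.com@A.EXAMPLE.COM
kvno host/serverx.b.example.com@B.EXAMPLE.COM
# Turn on KRB5 tracing for one ssh attempt to see exactly which principal is requested and from which KDC.
KRB5_TRACE=/dev/stderr ssh serverx.b.example.com true
# If the SPN exists but the kvno/enctype in the keytab doesn't match what the KDC returns, GSSAPI on serverx cannot decrypt the ticket.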
73,189,076
How to resolve permission issue inside docker container?
I have created a custom docker image which is working fine on centos but not on the Red Hat Enterprise Linux Server release 7.8 (Maipo). No uid and gid for available for the file system. so could not access any files. [root@568e47d62be4 /]# ls -l ls: cannot access 'bin': Operation not permitted ls: cannot access 'boot': Operation not permitted ls: cannot access 'dev': Operation not permitted ls: cannot access 'etc': Operation not permitted ls: cannot access 'home': Operation not permitted ls: cannot access 'lib': Operation not permitted ls: cannot access 'lib64': Operation not permitted ls: cannot access 'licenses': Operation not permitted ls: cannot access 'lost+found': Operation not permitted ls: cannot access 'media': Operation not permitted ls: cannot access 'mnt': Operation not permitted ls: cannot access 'opt': Operation not permitted ls: cannot access 'proc': Operation not permitted ls: cannot access 'root': Operation not permitted ls: cannot access 'run': Operation not permitted ls: cannot access 'sbin': Operation not permitted ls: cannot access 'srv': Operation not permitted ls: cannot access 'store': Operation not permitted ls: cannot access 'sys': Operation not permitted ls: cannot access 'tmp': Operation not permitted ls: cannot access 'usr': Operation not permitted ls: cannot access 'var': Operation not permitted total 0 l????????? ? ? ? ? ? bin d????????? ? ? ? ? ? boot d????????? ? ? ? ? ? dev d????????? ? ? ? ? ? etc d????????? ? ? ? ? ? home l????????? ? ? ? ? ? lib l????????? ? ? ? ? ? lib64 d????????? ? ? ? ? ? licenses d????????? ? ? ? ? ? lost+found d????????? ? ? ? ? ? media d????????? ? ? ? ? ? mnt d????????? ? ? ? ? ? opt d????????? ? ? ? ? ? proc d????????? ? ? ? ? ? root d????????? ? ? ? ? ? run l????????? ? ? ? ? ? sbin d????????? ? ? ? ? ? srv d????????? ? ? ? ? ? store d????????? ? ? ? ? ? sys d????????? ? ? ? ? ? tmp d????????? ? ? ? ? ? usr d????????? ? ? ? ? ? var What could be the issue?
How to resolve permission issue inside docker container? I have created a custom docker image which is working fine on centos but not on the Red Hat Enterprise Linux Server release 7.8 (Maipo). No uid and gid for available for the file system. so could not access any files. [root@568e47d62be4 /]# ls -l ls: cannot access 'bin': Operation not permitted ls: cannot access 'boot': Operation not permitted ls: cannot access 'dev': Operation not permitted ls: cannot access 'etc': Operation not permitted ls: cannot access 'home': Operation not permitted ls: cannot access 'lib': Operation not permitted ls: cannot access 'lib64': Operation not permitted ls: cannot access 'licenses': Operation not permitted ls: cannot access 'lost+found': Operation not permitted ls: cannot access 'media': Operation not permitted ls: cannot access 'mnt': Operation not permitted ls: cannot access 'opt': Operation not permitted ls: cannot access 'proc': Operation not permitted ls: cannot access 'root': Operation not permitted ls: cannot access 'run': Operation not permitted ls: cannot access 'sbin': Operation not permitted ls: cannot access 'srv': Operation not permitted ls: cannot access 'store': Operation not permitted ls: cannot access 'sys': Operation not permitted ls: cannot access 'tmp': Operation not permitted ls: cannot access 'usr': Operation not permitted ls: cannot access 'var': Operation not permitted total 0 l????????? ? ? ? ? ? bin d????????? ? ? ? ? ? boot d????????? ? ? ? ? ? dev d????????? ? ? ? ? ? etc d????????? ? ? ? ? ? home l????????? ? ? ? ? ? lib l????????? ? ? ? ? ? lib64 d????????? ? ? ? ? ? licenses d????????? ? ? ? ? ? lost+found d????????? ? ? ? ? ? media d????????? ? ? ? ? ? mnt d????????? ? ? ? ? ? opt d????????? ? ? ? ? ? proc d????????? ? ? ? ? ? root d????????? ? ? ? ? ? run l????????? ? ? ? ? ? sbin d????????? ? ? ? ? ? srv d????????? ? ? ? ? ? store d????????? ? ? ? ? ? sys d????????? ? ? ? ? ? tmp d????????? ? ? ? ? ? usr d????????? ? ? ? ? ? var What could be the issue?
linux, docker, file-permissions, rhel
1
353
0
https://stackoverflow.com/questions/73189076/how-to-resolve-permission-issue-inside-docker-container
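For the "Operation not permitted" question above, on RHEL 7 hosts this pattern often comes from the host's seccomp/SELinux policy or an old docker seccomp profile rejecting syscalls that newer coreutils use (statx is a classic). A sketch of how one might narrow it down; the image name is a placeholder and nothing is claimed about the custom image itself.
# Is it SELinux? Look for fresh AVC denials right after running ls inside the container.
getenforce
sudo ausearch -m avc -ts recent | tail
# Is it seccomp? Re-run with the default profile disabled, purely as a test.
docker run --rm -it --security-opt seccomp=unconfined <image> ls -l /
# Is it the kernel/runtime combination? Record both versions, since very old EL7 docker profiles do not whitelist statx().
uname -r
docker --version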
72,694,666
RHEL8 and Centos7 httpd - php processing differently
I'm having an issue with httpd (Apache) on RHEL8. I previously had this working and configured on CentOS 7 without any issues. Due to the CentOS 7 deprecation, I've started migrating to RHEL8, but I am running into a problem: PHP includes are not processed for any page other than pages housed in the root folder of the PHP directory. Notes: I've configured $_SERVER['DOCUMENT_ROOT'] on both servers and the variable shown in a phpinfo page is the same, so I know this is being parsed correctly. I'm worried this is a difference in the PHP modules between RHEL8 and CentOS 7; I know there are different ways to install the modules. On the RHEL8 server I do not have a section in my phpinfo page labeled "apache2handler" as I do on the CentOS 7 box. When I look on the servers themselves I see the following: php -v PHP 7.3.33 (cli) (built: XXX X XXXX 08:45:13) ( NTS ) Copyright (c) 1997-2018 The PHP Group Zend Engine v3.3.33, Copyright (c) 1998-2018 Zend Technologies with Zend OPcache v7.3.33, Copyright (c) 1999-2018, by Zend Technologies (identical output minus the build date). Same versions, etc. I do notice that php73 on CentOS 7 comes from the "remi-php73" repo, whereas the RHEL8 box is using remi-safe. Not sure if this is relevant.
RHEL8 and Centos7 httpd - php processing differently I'm having an issue with httpd (Apache) on RHEL8. I previously had this working and configured on CentOS7 without any issues. Due to CentOS7 deprecation, I've started migrating to RHEL8 but am having an issue. The first is that PHP includes do not process for any page other than pages housed in the root folder of the PHP directory. Notes: I've configured $_SERVER['DOCUMENT_ROOT'] on both servers and the variable in a phpinfo page gives the same result - so I know this is being parsed correctly. I'm worried this is a difference in the PHP modules between RHEL8 and CentOS7. I know there are different ways to install the modules. On the RHEL8 server I do not have a section in my phpinfo page labeled "apache2handler" as I do on the CentOS7 box. When I look on the servers themselves, I see the following: php -v PHP 7.3.33 (cli) (built: XXX X XXXX 08:45:13) ( NTS ) Copyright (c) 1997-2018 The PHP Group Zend Engine v3.3.33, Copyright (c) 1998-2018 Zend Technologies with Zend OPcache v7.3.33, Copyright (c) 1999-2018, by Zend Technologies (identical output minus the build date). Same versions, etc. I do notice the repo for php73 on CentOS7 is "remi-php73" whereas the RHEL8 box is using remi-safe. Not sure if this is relevant.
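One possible lead, stated as an assumption rather than a diagnosis: the missing "apache2handler" section usually means PHP is being served through php-fpm (FastCGI) instead of mod_php on the RHEL8 box, so include behaviour would be governed by the php-fpm pool and handler configuration rather than by Apache's PHP module. A quick way to confirm which path is in use (package and file locations may differ with the Remi packaging):
    # does Apache load a PHP module at all, or is php-fpm doing the work?
    httpd -M | grep -i php
    systemctl status php-fpm
    # the SetHandler/proxy rule that hands .php requests to php-fpm usually
    # lives in a conf.d snippet; its exact location varies with the Remi packages
    grep -ri "sethandler" /etc/httpd/conf.d/ 2>/dev/null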
php, linux, apache, centos, rhel
1
652
1
https://stackoverflow.com/questions/72694666/rhel8-and-centos7-httpd-php-processing-differently
72,611,428
Rhel osbuild-composer system repository override is not working
As per the document ( [URL] ) I tried to override the system repository with a custom base URL. But blueprint depsolve shows the error below: ##composer-cli blueprints depsolve Test1-blueprint 2022-06-09 08:06:58,841: Test1-blueprint: This system does not have any valid subscriptions. Subscribe it before specifying rhsm: true in sources. And after the next service restart, osbuild-composer does not start: ERROR: Info Error: Get "[URL] dial unix /run/weldr/api.socket: connect: connection refused Am I missing something here?
Rhel osbuild-composer system repository override is not working As per the document ( [URL] ) I tried to override the system repository with a custom base URL. But blueprint depsolve shows the error below: ##composer-cli blueprints depsolve Test1-blueprint 2022-06-09 08:06:58,841: Test1-blueprint: This system does not have any valid subscriptions. Subscribe it before specifying rhsm: true in sources. And after the next service restart, osbuild-composer does not start: ERROR: Info Error: Get "[URL] dial unix /run/weldr/api.socket: connect: connection refused Am I missing something here?
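A heavily hedged sketch of one thing to check: the depsolve message suggests the overriding source definition carries rhsm = true, which only works on a subscribed host, so on this system the override would need rhsm set to false. The file name, repo id, and URL below are placeholders and the exact schema should be verified against the osbuild-composer documentation:
    cat > custom-baseos.toml <<'EOF'
    id = "custom-baseos"
    name = "custom-baseos"
    type = "yum-baseurl"
    url = "http://repo.example.com/rhel8/baseos/"
    check_gpg = false
    check_ssl = false
    rhsm = false
    EOF
    composer-cli sources add custom-baseos.toml
    # if the service will not come back up, its log usually names the offending
    # repository or source file
    journalctl -u osbuild-composer --since "1 hour ago"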
rhel
1
1,500
1
https://stackoverflow.com/questions/72611428/rhel-osbuild-composer-system-repository-override-is-not-working
72,103,173
Linux Bash, Test command, why [ 0 -ne 0 ] is false instead of true?
When using the test command in Linux Bash, a numeric comparison of "zero" equal to "zero" fetches an exit code of 0 through echo $? $[ 0 -eq 0 ] $echo $? 0 However, when testing the same with NOT EQUAL, why does my exit code show false and exit with value 1? $[ 0 -ne 0 ] $echo $? 1 man Test INTEGER1 -ne INTEGER2 INTEGER1 is not equal to INTEGER2 Could someone explain the logic behind "not equal to" when comparing the same integer?
Linux Bash, Test command, why [ 0 -ne 0 ] is false instead of true? When using the test command in Linux Bash, a numeric comparison of "zero" equal to "zero" fetches an exit code of 0 through echo $? $[ 0 -eq 0 ] $echo $? 0 However, when testing the same with NOT EQUAL, why does my exit code show false and exit with value 1? $[ 0 -ne 0 ] $echo $? 1 man Test INTEGER1 -ne INTEGER2 INTEGER1 is not equal to INTEGER2 Could someone explain the logic behind "not equal to" when comparing the same integer?
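The short answer, with a runnable illustration: in the shell an exit status of 0 means "true"/success and any non-zero status means "false"/failure, the reverse of C-style boolean values, so [ 0 -eq 0 ] is true and reports 0 while [ 0 -ne 0 ] is false and reports 1:
    [ 0 -eq 0 ]; echo $?   # prints 0: the comparison is true, and "true" is exit status 0
    [ 0 -ne 0 ]; echo $?   # prints 1: the comparison is false, and "false" is any non-zero status
    # if/while only look at that status, so this takes the else branch
    if [ 0 -ne 0 ]; then echo "never runs"; else echo "0 is equal to 0"; fi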
linux, bash, shell, testing, rhel
1
840
2
https://stackoverflow.com/questions/72103173/linux-bash-test-command-why-0-ne-0-is-false-instead-of-true
71,948,391
RabbitMQ cluster on a single machine
I want to create a three node RabbitMQ cluster on a single RHEL8 machine for testing purposes. I tried instructions given in RabbitMQ official guide and also tried to follow this guide . The first node works fine and it's running. However, the second node cannot be started and throws up an error. I used below commands as mentioned in the guide. RABBITMQ_NODE_PORT=5672 RABBITMQ_NODENAME=rabbit rabbitmq-server -detached RABBITMQ_NODE_PORT=5673 RABBITMQ_NODENAME=hare rabbitmq-server -detached rabbitmqctl -n hare stop_app This command throws up below error. DIAGNOSTICS attempted to contact: [hare@localhost] hare@localhost: connected to epmd (port 4369) on localhost epmd reports: node 'hare' not running at all other nodes on localhost: [rabbit] On further inspection of logs, it seems like that this node tries to use the same ports used by the first node (e.g. MQTT port 1883). I think I might have to use the other option of declaring /etc/rabbitmq/rabbitmq.conf. Mainly because it seems to give more options to change ports etc. A sample config file resembling the one needed in my case or a link to a proper guide is highly appreciated.
RabbitMQ cluster on a single machine I want to create a three node RabbitMQ cluster on a single RHEL8 machine for testing purposes. I tried instructions given in RabbitMQ official guide and also tried to follow this guide . The first node works fine and it's running. However, the second node cannot be started and throws up an error. I used below commands as mentioned in the guide. RABBITMQ_NODE_PORT=5672 RABBITMQ_NODENAME=rabbit rabbitmq-server -detached RABBITMQ_NODE_PORT=5673 RABBITMQ_NODENAME=hare rabbitmq-server -detached rabbitmqctl -n hare stop_app This command throws up below error. DIAGNOSTICS attempted to contact: [hare@localhost] hare@localhost: connected to epmd (port 4369) on localhost epmd reports: node 'hare' not running at all other nodes on localhost: [rabbit] On further inspection of logs, it seems like that this node tries to use the same ports used by the first node (e.g. MQTT port 1883). I think I might have to use the other option of declaring /etc/rabbitmq/rabbitmq.conf. Mainly because it seems to give more options to change ports etc. A sample config file resembling the one needed in my case or a link to a proper guide is highly appreciated.
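A hedged sketch of one way around the port clash, following the pattern in the single-machine clustering guide: every enabled plugin that opens a listener (management, MQTT, and so on) needs a distinct port per node, passed through RABBITMQ_SERVER_START_ARGS, or the plugin has to be disabled for the extra nodes. The management listener syntax below is the one shown in that guide; the MQTT part is extended by analogy, so treat it as a starting point rather than verified configuration:
    RABBITMQ_NODE_PORT=5673 RABBITMQ_NODENAME=hare \
    RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15673}] -rabbitmq_mqtt tcp_listeners [1884]" \
    rabbitmq-server -detached
    # to disable plugins only for "hare", that node also needs its own
    # RABBITMQ_ENABLED_PLUGINS_FILE, otherwise the shared file affects both nodes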
rabbitmq, rhel, rabbitmqctl, rhel8
1
1,271
1
https://stackoverflow.com/questions/71948391/rabbitmq-cluster-on-a-single-machine
71,704,297
How to securely host file on RHEL server and enable download for user
I have programmed an application that users can use to process genome data. This application relies on a 10GB database file that users have to download in order to run the application. At the moment, I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will not work for others and they will get errors running the application. My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user. My question now is: is this safe to do, or are people potentially able to hack into our server? If this method would be a security risk, what would be a better way to provide this file? Thank you in advance!
How to securely host file on RHEL server and enable download for user I have programmed an application that users can use to process genome data. This application relies on a 10GB database file that users have to download in order to run the application. At the moment, I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will not work for others and they will get errors running the application. My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user. My question now is: is this safe to do, or are people potentially able to hack into our server? If this method would be a security risk, what would be a better way to provide this file? Thank you in advance!
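One common pattern, sketched here under the assumption that SSH stays the transport: give the download account no interactive shell, force SFTP only, and chroot it to a directory that contains nothing but the database file. User and path names below are placeholders.
    # download-only account with no login shell
    sudo useradd -m -s /sbin/nologin dbdownload
    # the chroot target must be root-owned and not writable by the account,
    # so the file goes into a readable subdirectory
    sudo mkdir -p /srv/dbdownload/data
    sudo cp genome.db /srv/dbdownload/data/
    # add to /etc/ssh/sshd_config, then reload sshd:
    #   Match User dbdownload
    #       ForceCommand internal-sftp
    #       ChrootDirectory /srv/dbdownload
    #       AllowTcpForwarding no
    #       X11Forwarding no
    sudo systemctl reload sshd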
security, download, rhel, large-files
1
295
4
https://stackoverflow.com/questions/71704297/how-to-securely-host-file-on-rhel-server-and-enable-download-for-user
71,544,770
Unable to install gstreamer development packages in redhat/ubi8 docker image
I want to install gstreamer library in a Redhat linux docker image. My Dockerfile is as simple as this: FROM redhat/ubi8 AS builder RUN dnf --disableplugin=subscription-manager -y install gstreamer1-devel gstreamer1-plugins-base-tools gstreamer1-doc gstreamer1-plugins-base-devel gstreamer1-plugins-good gstreamer1-plugins-good-extras gstreamer1-plugins-ugly gstreamer1-plugins-bad-free gstreamer1-plugins-bad-free-devel gstreamer1-plugins-bad-free-extras But I get this error: Error: Unable to find a match: gstreamer1-devel gstreamer1-plugins-base-tools gstreamer1-doc gstreamer1-plugins-base-devel gstreamer1-plugins-good gstreamer1-plugins-good-extras gstreamer1-plugins-ugly gstreamer1-plugins-bad-free gstreamer1-plugins-bad-free-devel gstreamer1-plugins-bad-free-extras According to this post , I guess I'm doing it right. What else should I do? I want to build a gstreamer-based app inside the container.
Unable to install gstreamer development packages in redhat/ubi8 docker image I want to install gstreamer library in a Redhat linux docker image. My Dockerfile is as simple as this: FROM redhat/ubi8 AS builder RUN dnf --disableplugin=subscription-manager -y install gstreamer1-devel gstreamer1-plugins-base-tools gstreamer1-doc gstreamer1-plugins-base-devel gstreamer1-plugins-good gstreamer1-plugins-good-extras gstreamer1-plugins-ugly gstreamer1-plugins-bad-free gstreamer1-plugins-bad-free-devel gstreamer1-plugins-bad-free-extras But I get this error: Error: Unable to find a match: gstreamer1-devel gstreamer1-plugins-base-tools gstreamer1-doc gstreamer1-plugins-base-devel gstreamer1-plugins-good gstreamer1-plugins-good-extras gstreamer1-plugins-ugly gstreamer1-plugins-bad-free gstreamer1-plugins-bad-free-devel gstreamer1-plugins-bad-free-extras According to this post , I guess I'm doing it right. What else should I do? I want to build a gstreamer-based app inside the container.
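A hedged note rather than a fix: the UBI repositories only carry a subset of RHEL, and most gstreamer1 -devel packages live in the full AppStream/CodeReady Builder channels (some of the listed plugins may not be packaged by Red Hat at all and would need a third-party repo), so the build generally needs to run on a subscribed RHEL host, where podman/docker pass the host entitlements into UBI containers, with the extra repo enabled. The repo id below is the usual RHEL 8 x86_64 name and should be verified:
    RUN dnf -y install dnf-plugins-core && \
        dnf config-manager --set-enabled codeready-builder-for-rhel-8-x86_64-rpms && \
        dnf repoquery gstreamer1-devel gstreamer1-plugins-base-devel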
docker, gstreamer, rhel
1
540
0
https://stackoverflow.com/questions/71544770/unable-to-install-gstreamer-development-packages-in-redhat-ubi8-docker-image
71,287,058
How to run commands in CI without being prompted for password
I want to include the following commands in different stages of GitLab CI. ... script: | ssh -t ${SOME_USER}@${REMOTE_HOST} '([ -f "${FILE_PATH}/someFile.yaml" ] && cp -v "${FILE_PATH}/someFile.yaml" ${FILE_PATH}/someFile_yaml_$CI_COMMIT_TIMESTAMP.txt)' ... script: scp -v ${TF_ROOT}${LOCAL_FILE_PATH} ${SOME_USER}@${REMOTE_HOST}:${FILE_PATH}/ ... script: ssh -t ${SOME_USER}@${REMOTE_HOST} sudo chown -cv runner:runner ${FILE_PATH}/someFile.yaml ... script: ssh -t ${SOME_USER}@${REMOTE_HOST} sudo systemctl restart some-service ... The issue is that when I execute these commands individually and manually, I have to provide a password, because they are remote ssh commands and some require sudo, and this approach wouldn't work in a non-interactive mode such as CI execution. So how can I execute these commands in GitLab CI without password prompts?
How to run commands in CI without being prompted for password I want to include the following commands in different stages of GitLab CI. ... script: | ssh -t ${SOME_USER}@${REMOTE_HOST} '([ -f "${FILE_PATH}/someFile.yaml" ] && cp -v "${FILE_PATH}/someFile.yaml" ${FILE_PATH}/someFile_yaml_$CI_COMMIT_TIMESTAMP.txt)' ... script: scp -v ${TF_ROOT}${LOCAL_FILE_PATH} ${SOME_USER}@${REMOTE_HOST}:${FILE_PATH}/ ... script: ssh -t ${SOME_USER}@${REMOTE_HOST} sudo chown -cv runner:runner ${FILE_PATH}/someFile.yaml ... script: ssh -t ${SOME_USER}@${REMOTE_HOST} sudo systemctl restart some-service ... The issue is that when I execute these commands individually and manually, I have to provide a password, because they are remote ssh commands and some require sudo, and this approach wouldn't work in a non-interactive mode such as CI execution. So how can I execute these commands in GitLab CI without password prompts?
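A sketch of the usual approach, assuming you can manage the remote host: authenticate with an SSH key held in a masked CI/CD variable instead of a password, and grant the remote account passwordless sudo for only the exact commands the pipeline needs. The variable names below reuse the ones from the question, SSH_PRIVATE_KEY is an assumed CI/CD variable, and the sudoers line is illustrative and should be edited with visudo:
    # in the job, before the ssh/scp calls
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa
    ssh-keyscan -H "$REMOTE_HOST" >> ~/.ssh/known_hosts
    # one-time, on the remote host (via visudo), limit passwordless sudo to the
    # specific commands the pipeline runs:
    #   someuser ALL=(root) NOPASSWD: /usr/bin/systemctl restart some-service
    # (add a similar entry for the chown call the pipeline makes)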
linux, unix, gitlab-ci, rhel
1
1,134
2
https://stackoverflow.com/questions/71287058/how-to-run-commands-in-ci-without-being-prompt-for-password
70,967,621
Terraform vsphere multiple disk - Identify on linux
We currently deploy virtual machines (vSphere) with a variable number of disks using a dynamic disk block. In Linux we have to format/mount them manually later. Now I want to automate that too with a script executed from Terraform on the operating system. But how can I identify the disks? Let's assume I attach 3 additional disks to a VM. I could hope and assume that disk 1 is /dev/sdb, disk 2 is /dev/sdc and so on. But that's not really how I want to do it. Is there a way to put a label on the disk that is actually seen in the operating system (RHEL Linux) so I can be 100% sure that I work with the disk I want? dynamic "disk" { for_each = var.data_disks content { label = "disk${disk.value["lun"]}" size = disk.value["disk_size_gb"] filesystem = disk.value["filesystem"] mountpoint = disk.value["mountpoint"] mountopts = disk.value["mountopts"] unit_number = disk.value["lun"] storage_policy_id = var.vsphere_storage_policy_id eagerly_scrub = var.eagerly_scrub thin_provisioned = var.thin_provisioned } } filesystem, mountpoint and mountopts are things I add via a JSON file and plan to use for the bash script. The "label" is not visible in Linux. Thanks in advance
Terraform vsphere multiple disk - Identify on linux We currently deploy virtual machines (vSphere) with a variable number of disks using a dynamic disk block. In Linux we have to format/mount them manually later. Now I want to automate that too with a script executed from Terraform on the operating system. But how can I identify the disks? Let's assume I attach 3 additional disks to a VM. I could hope and assume that disk 1 is /dev/sdb, disk 2 is /dev/sdc and so on. But that's not really how I want to do it. Is there a way to put a label on the disk that is actually seen in the operating system (RHEL Linux) so I can be 100% sure that I work with the disk I want? dynamic "disk" { for_each = var.data_disks content { label = "disk${disk.value["lun"]}" size = disk.value["disk_size_gb"] filesystem = disk.value["filesystem"] mountpoint = disk.value["mountpoint"] mountopts = disk.value["mountopts"] unit_number = disk.value["lun"] storage_policy_id = var.vsphere_storage_policy_id eagerly_scrub = var.eagerly_scrub thin_provisioned = var.thin_provisioned } } filesystem, mountpoint and mountopts are things I add via a JSON file and plan to use for the bash script. The "label" is not visible in Linux. Thanks in advance
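A possible way to correlate them from inside the guest, offered as a sketch: on the VMware paravirtual/LSI controllers the unit_number set in Terraform normally shows up as the SCSI target in Linux's host:channel:target:lun addressing, so the by-path symlinks (or lsblk's HCTL column) give a stable mapping without relying on /dev/sdX ordering. Verify the mapping on one VM before scripting it.
    # show every block device with its SCSI host:channel:target:lun address
    lsblk -o NAME,HCTL,SIZE,TYPE
    # a disk created with unit_number = 2 should appear with target 2, lun 0,
    # e.g. something like pci-0000:03:00.0-scsi-0:0:2:0 under by-path
    ls -l /dev/disk/by-path/ | grep -- '-scsi-0:0:2:0'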
linux, terraform, rhel, terraform-provider-vsphere
1
910
0
https://stackoverflow.com/questions/70967621/terraform-vsphere-multiple-disk-identify-on-linux
70,927,574
Logout lightdm session does not close session
I have lightdm running on RHEL7.9. But when I try to log a user out of a VNC session, it doesn't close the VNC session; instead it stays on a blank screen and the process does not exit. I have also tried "exit_on_failure=true" under the Seats configuration, but that closes all the VNC sessions. Any suggestions? TIA
Logout lightdm session does not close session I have lightdm running on RHEL7.9. But when I try to log a user out of a VNC session, it doesn't close the VNC session; instead it stays on a blank screen and the process does not exit. I have also tried "exit_on_failure=true" under the Seats configuration, but that closes all the VNC sessions. Any suggestions? TIA
centos, rhel, mate, gdm
1
178
0
https://stackoverflow.com/questions/70927574/logout-lightdm-session-does-not-close-session
70,711,850
need Bind9 architecture advice
I need your advice on a DNS architecture. DNS architecture proposal In my company, every desktop/laptop is configured with the DNS of the LAN (10.1.1.1), which is a Microsoft AD/DNS that I do not control. The other DNS servers are Bind9, where I am admin. My goal is to add other DNS servers for new projects (in a separate network) without changing anything on the laptops or on the LAN DNS, and of course I want developers' laptops (on the LAN) to be able to query and receive answers for the FQDNs of those new projects. From a DNS (FQDN) point of view, there is ONE domain (project.com) and MANY sub-domains (subX.project.com), and each sub-domain is in a separate network. Example: on each VLAN I will have a web server and I want it to answer on its DNS sub-domain: web.project.com for the web server of the project zone, web.sub1.project.com for the web server of the sub-project zone, web.sub2.project.com ... So, my understanding of Bind9 leads me to think that the LAN DNS server (10.1.1.1) can forward requests to the project DNS server (10.100.1.1), and the project DNS can forward requests to the sub-project DNS servers (10.200.1.1 / 10.250.1.1). Finally, all VMs of a network can resolve public FQDNs if the zone DNS forwards their requests to the upper-level DNS. I just want to repeat that I do not control the main DNS (in the LAN). Below you will find the named.conf.options file which represents the architecture described in the diagram: DNS project.com (10.100.1.1/10.100.1.2) { allow-query { 127.0.0.1; 10.1.1.1; 10.1.1.2; 10.200.1.1; 10.200.1.2; 10.250.1.1; 10.250.1.2; 10.100.1.0/24; }; recursion yes; notify yes; allow-transfer { 10.100.1.2; }; # the slave forwarders { 10.1.1.1; 10.1.1.2; }; } DNS sub1.project.com (10.200.1.1/10.200.1.2) { allow-query { 127.0.0.1; 10.100.1.1; 10.100.1.2; 10.200.1.0/24; }; queries from VMs in this network and DNS from upper zone recursion yes; notify yes; allow-transfer { 10.200.1.2; }; forwarders { 10.100.1.1; 10.100.1.2; }; } DNS sub2.project.com (10.250.1.1/10.250.1.2) { allow-query { 127.0.0.1; 10.100.1.1; 10.100.1.2; 10.250.1.0/24; }; queries from VMs in this network and DNS from upper zone recursion yes; notify yes; allow-transfer { 10.250.1.2; }; forwarders { 10.100.1.1; 10.100.1.2; }; } What do you think about this architecture? Do you see any drawbacks, mistakes, or misunderstandings? Regards.
need Bind9 architecture advice I need your advice on a DNS architecture. DNS architecture proposal In my company, every desktop/laptop is configured with the DNS of the LAN (10.1.1.1), which is a Microsoft AD/DNS that I do not control. The other DNS servers are Bind9, where I am admin. My goal is to add other DNS servers for new projects (in a separate network) without changing anything on the laptops or on the LAN DNS, and of course I want developers' laptops (on the LAN) to be able to query and receive answers for the FQDNs of those new projects. From a DNS (FQDN) point of view, there is ONE domain (project.com) and MANY sub-domains (subX.project.com), and each sub-domain is in a separate network. Example: on each VLAN I will have a web server and I want it to answer on its DNS sub-domain: web.project.com for the web server of the project zone, web.sub1.project.com for the web server of the sub-project zone, web.sub2.project.com ... So, my understanding of Bind9 leads me to think that the LAN DNS server (10.1.1.1) can forward requests to the project DNS server (10.100.1.1), and the project DNS can forward requests to the sub-project DNS servers (10.200.1.1 / 10.250.1.1). Finally, all VMs of a network can resolve public FQDNs if the zone DNS forwards their requests to the upper-level DNS. I just want to repeat that I do not control the main DNS (in the LAN). Below you will find the named.conf.options file which represents the architecture described in the diagram: DNS project.com (10.100.1.1/10.100.1.2) { allow-query { 127.0.0.1; 10.1.1.1; 10.1.1.2; 10.200.1.1; 10.200.1.2; 10.250.1.1; 10.250.1.2; 10.100.1.0/24; }; recursion yes; notify yes; allow-transfer { 10.100.1.2; }; # the slave forwarders { 10.1.1.1; 10.1.1.2; }; } DNS sub1.project.com (10.200.1.1/10.200.1.2) { allow-query { 127.0.0.1; 10.100.1.1; 10.100.1.2; 10.200.1.0/24; }; queries from VMs in this network and DNS from upper zone recursion yes; notify yes; allow-transfer { 10.200.1.2; }; forwarders { 10.100.1.1; 10.100.1.2; }; } DNS sub2.project.com (10.250.1.1/10.250.1.2) { allow-query { 127.0.0.1; 10.100.1.1; 10.100.1.2; 10.250.1.0/24; }; queries from VMs in this network and DNS from upper zone recursion yes; notify yes; allow-transfer { 10.250.1.2; }; forwarders { 10.100.1.1; 10.100.1.2; }; } What do you think about this architecture? Do you see any drawbacks, mistakes, or misunderstandings? Regards.
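The forwarding chain itself looks workable; one hedged suggestion is to verify each hop with dig before wiring clients to it, and to consider delegating the sub-zones with NS records in project.com instead of chained forwarders, which keeps resolution working even if a middle forwarder is down. The addresses below are the ones from the proposal:
    # walk the chain hop by hop from a developer laptop or a test host
    dig @10.1.1.1   web.project.com +short        # LAN DNS, forwards to 10.100.1.1
    dig @10.100.1.1 web.sub1.project.com +short   # project DNS, forwards to 10.200.1.1
    dig @10.200.1.1 web.sub1.project.com +short   # should be the authoritative answer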
dns, bind, rhel, self-hosting, bind9
1
156
1
https://stackoverflow.com/questions/70711850/need-bind9-architecture-advices
70,342,503
Openshift RHCOS 4.9-RHEL8.4 FirewallD
I am building an OCP 4.9 deployment in my lab with RHCOS 4.9 for the control plane and RHEL 8.4 for the worker nodes on VCenter 6.7. I am using RHEL 8.4 as the bastion host with Apache, HAProxy and the OCP install files, Bootstrap etc. Question #1 - Does anyone have the latest version of what the append-bootstrap.ign should look like as I heard the "append" is replaced with "merge" and the version is 3.2.0? Question #2 - Can the firewalld be disabled or is this required in RHEL 8.4 for OCP 4.9? Question #3 - If the firewalld is required, what should the configuration look like in addition to opening ports tcp ports 8080, 6443 and 22623 Thanks,
Openshift RHCOS 4.9-RHEL8.4 FirewallD I am building an OCP 4.9 deployment in my lab with RHCOS 4.9 for the control plane and RHEL 8.4 for the worker nodes on VCenter 6.7. I am using RHEL 8.4 as the bastion host with Apache, HAProxy and the OCP install files, Bootstrap etc. Question #1 - Does anyone have the latest version of what the append-bootstrap.ign should look like as I heard the "append" is replaced with "merge" and the version is 3.2.0? Question #2 - Can the firewalld be disabled or is this required in RHEL 8.4 for OCP 4.9? Question #3 - If the firewalld is required, what should the configuration look like in addition to opening ports tcp ports 8080, 6443 and 22623 Thanks,
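For question #1 only, a sketch of the shape commonly shown for Ignition spec v3.2.0, where the 2.x append stanza becomes config.merge; the bastion address is a placeholder and the file should be checked against the 4.9 installation docs before use:
    cat > merge-bootstrap.ign <<'EOF'
    {
      "ignition": {
        "config": {
          "merge": [
            { "source": "http://<bastion-ip>:8080/bootstrap.ign" }
          ]
        },
        "version": "3.2.0"
      }
    }
    EOF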
openshift, rhel, vcenter
1
244
0
https://stackoverflow.com/questions/70342503/openshift-rhcos-4-9-rhel8-4-firewalld
69,455,480
dotnet build failing inside container
I encountered a really annoying issue when trying to execute dotnet builds within a container for a standard CI/CD flow. The issues seem to occur 'randomly' (on average about 4/10 builds fail when ran from Jenkins without any changes made to the code or config in between). The issues I'm getting (It seems a bit random which one occurs when - stacktrace below): MSB4018: CreateAppHost task failed unexpectedly - occurs most frequently /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: The "CreateAppHost" task failed unexpectedly. [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: System.IO.IOException: The process cannot access the file '/opt/app-root/<project_path>/Pse.Spr.Reports.Web/obj/Release/netcoreapp3.1/Pse.Spr.Reports.Web' because it is being used by another process. [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at System.IO.FileStream.Init(FileMode mode, FileShare share, String originalPath) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.Unix.cs:line 120 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.cs:line 252 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at System.IO.FileSystem.CopyFile(String sourceFullPath, String destFullPath, Boolean overwrite) in /_/src/System.IO.FileSystem/src/System/IO/FileSystem.Unix.cs:line 29 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.NET.HostModel.AppHost.BinaryUtils.CopyFile(String sourcePath, String destinationPath) [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.NET.HostModel.AppHost.HostWriter.CreateAppHost(String appHostSourceFilePath, String appHostDestinationFilePath, String appBinaryFilePath, Boolean windowsGraphicalUserInterface, String assemblyToCopyResorcesFrom) [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.NET.Build.Tasks.CreateAppHost.ExecuteCore() [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.NET.Build.Tasks.TaskBase.Execute() [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() 
[/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask) [/opt/app-root/<project_path>/<project_csproj>] MSB4018: GenerateDepsFile task failed unexpectedly /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: The "GenerateDepsFile" task failed unexpectedly. [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: System.IO.IOException: The process cannot access the file '/opt/app-root/<project_release_path><*.deps.json>' because it is being used by another process. [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at System.IO.FileStream.Init(FileMode mode, FileShare share, String originalPath) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.Unix.cs:line 120 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.cs:line 252 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at Microsoft.NET.Build.Tasks.GenerateDepsFile.WriteDepsFile(String depsFilePath) [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at Microsoft.NET.Build.Tasks.TaskBase.Execute() [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask) NETSDK1029: Unable to use .../apphost as application host executable as it does not contain the expected placeholder byte sequence - my personal favorite :) /usr/lib64/dotnet/sdk/3.1.117/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error NETSDK1029: Unable to use '/usr/lib64/dotnet/packs/Microsoft.NETCore.App.Host.rhel.8-x64/3.1.17/runtimes/rhel.8-x64/native/apphost' as application host executable as it does not contain the expected placeholder byte sequence 
'63-33-61-62-38-66-66-31-33-37-32-30-65-38-61-64-39-30-34-37-64-64-33-39-34-36-36-62-33-63-38-39-37-34-65-35-39-32-63-32-66-61-33-38-33-64-34-61-33-39-36-30-37-31-34-63-61-65-66-30-63-34-66-32' that would mark where the application name would be written. [/opt/app-root/<project_csproj>] About now I think I've gone through all of the search results and none of the answers has helped so far - so maybe here I'll get lucky (and hopefully help someone in the process). Technical data: Jenkins agent Running on RHEL 8.4 with Podman v. 3.2.3 Container data OS - RHEL7.9 and RHEL8.4 (I tried both) Is run with a mounted volume from the Jenkins agent (the directory with the source code cloned from the repository). Basic Dockerfile setup where I use a RHEL8 base image from our internal repository - the RHEL7 one is modified by installing scl-utils and such but the general idea is the same ARG rhel_version=7.9 ARG build_version=1.0.0 FROM <baseRHELimage> ENV HOME=/opt/app-root/ \ DOTNET_APP_PATH=/opt/app-root/ #DOTNET_RUNNING_IN_CONTAINER=true # Don't download/extract docs for nuget packages ENV NUGET_XMLDOC_MODE=skip RUN <setup yum repos> dnf install -y dotnet-sdk-3.1 &&\ dnf update -y; dnf clean all .NET data Using dotnet core 3.1 SDK The issues occur when I try to run a dotnet build. The Jenkinsfile is firstly removing all bin/ and obj/ directories in the first stage and then invoking the build for the application. The most promising things I tried: Some people suggested that the folders bin/ and obj/ needed to be fully deleted - this is happening in the first stage prior to the run I tried splitting up the dotnet publish command into: dotnet restore dotnet build --no-restore dotnet publish --no-build ... keeps throwing one of those 3 errors randomly across builds directly within dotnet build I tried invoking that dotnet build with different flags like 'no-parallel' thinking maybe threads are somehow the root cause (I am getting a lot of 'process cannot access the file because it is being used by another process' warnings - but these happen for the successful builds as well - the default dotnet retries seem to be handling those Does anyone have any idea what I might be missing? The CI/CD pipeline was running previously on a RHEL7.7 agent directly (no containers used). Now we want to move to containerized builds with newer versions of RedHat - but these randomly recurring issues are blocking me so if anyone has any suggestions - greatly appreciated ;)
dotnet build failing inside container I encountered a really annoying issue when trying to execute dotnet builds within a container for a standard CI/CD flow. The issues seem to occur 'randomly' (on average about 4/10 builds fail when ran from Jenkins without any changes made to the code or config in between). The issues I'm getting (It seems a bit random which one occurs when - stacktrace below): MSB4018: CreateAppHost task failed unexpectedly - occurs most frequently /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: The "CreateAppHost" task failed unexpectedly. [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: System.IO.IOException: The process cannot access the file '/opt/app-root/<project_path>/Pse.Spr.Reports.Web/obj/Release/netcoreapp3.1/Pse.Spr.Reports.Web' because it is being used by another process. [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at System.IO.FileStream.Init(FileMode mode, FileShare share, String originalPath) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.Unix.cs:line 120 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.cs:line 252 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at System.IO.FileSystem.CopyFile(String sourceFullPath, String destFullPath, Boolean overwrite) in /_/src/System.IO.FileSystem/src/System/IO/FileSystem.Unix.cs:line 29 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.NET.HostModel.AppHost.BinaryUtils.CopyFile(String sourcePath, String destinationPath) [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.NET.HostModel.AppHost.HostWriter.CreateAppHost(String appHostSourceFilePath, String appHostDestinationFilePath, String appBinaryFilePath, Boolean windowsGraphicalUserInterface, String assemblyToCopyResorcesFrom) [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.NET.Build.Tasks.CreateAppHost.ExecuteCore() [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.NET.Build.Tasks.TaskBase.Execute() [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() 
[/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask) [/opt/app-root/<project_path>/<project_csproj>] MSB4018: GenerateDepsFile task failed unexpectedly /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: The "GenerateDepsFile" task failed unexpectedly. [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: System.IO.IOException: The process cannot access the file '/opt/app-root/<project_release_path><*.deps.json>' because it is being used by another process. [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at System.IO.FileStream.Init(FileMode mode, FileShare share, String originalPath) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.Unix.cs:line 120 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.cs:line 252 [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at Microsoft.NET.Build.Tasks.GenerateDepsFile.WriteDepsFile(String depsFilePath) [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at Microsoft.NET.Build.Tasks.TaskBase.Execute() [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [/opt/app-root/<project_path>/<project_csproj>] /opt/rh/rh-dotnet31/root/usr/lib64/dotnet/sdk/3.1.103/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(192,5): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask) NETSDK1029: Unable to use .../apphost as application host executable as it does not contain the expected placeholder byte sequence - my personal favorite :) /usr/lib64/dotnet/sdk/3.1.117/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(393,5): error NETSDK1029: Unable to use '/usr/lib64/dotnet/packs/Microsoft.NETCore.App.Host.rhel.8-x64/3.1.17/runtimes/rhel.8-x64/native/apphost' as application host executable as it does not contain the expected placeholder byte sequence 
'63-33-61-62-38-66-66-31-33-37-32-30-65-38-61-64-39-30-34-37-64-64-33-39-34-36-36-62-33-63-38-39-37-34-65-35-39-32-63-32-66-61-33-38-33-64-34-61-33-39-36-30-37-31-34-63-61-65-66-30-63-34-66-32' that would mark where the application name would be written. [/opt/app-root/<project_csproj>] About now I think I've gone through all of the search results and none of the answers has helped so far - so maybe here I'll get lucky (and hopefully help someone in the process). Technical data: Jenkins agent Running on RHEL 8.4 with Podman v. 3.2.3 Container data OS - RHEL7.9 and RHEL8.4 (I tried both) Is run with a mounted volume from the Jenkins agent (the directory with the source code cloned from the repository). Basic Dockerfile setup where I use a RHEL8 base image from our internal repository - the RHEL7 one is modified by installing scl-utils and such but the general idea is the same ARG rhel_version=7.9 ARG build_version=1.0.0 FROM <baseRHELimage> ENV HOME=/opt/app-root/ \ DOTNET_APP_PATH=/opt/app-root/ #DOTNET_RUNNING_IN_CONTAINER=true # Don't download/extract docs for nuget packages ENV NUGET_XMLDOC_MODE=skip RUN <setup yum repos> dnf install -y dotnet-sdk-3.1 &&\ dnf update -y; dnf clean all .NET data Using dotnet core 3.1 SDK The issues occur when I try to run a dotnet build. The Jenkinsfile is firstly removing all bin/ and obj/ directories in the first stage and then invoking the build for the application. The most promising things I tried: Some people suggested that the folders bin/ and obj/ needed to be fully deleted - this is happening in the first stage prior to the run I tried splitting up the dotnet publish command into: dotnet restore dotnet build --no-restore dotnet publish --no-build ... keeps throwing one of those 3 errors randomly across builds directly within dotnet build I tried invoking that dotnet build with different flags like 'no-parallel' thinking maybe threads are somehow the root cause (I am getting a lot of 'process cannot access the file because it is being used by another process' warnings - but these happen for the successful builds as well - the default dotnet retries seem to be handling those Does anyone have any idea what I might be missing? The CI/CD pipeline was running previously on a RHEL7.7 agent directly (no containers used). Now we want to move to containerized builds with newer versions of RedHat - but these randomly recurring issues are blocking me so if anyone has any suggestions - greatly appreciated ;)
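Two hedged mitigations sometimes tried for sharing-violation errors on bind-mounted sources, neither of which is a confirmed root cause here: serialize MSBuild and disable its long-lived build-server processes, or copy the sources off the mounted volume and build on container-local storage. Flag spellings are the usual MSBuild/dotnet ones and paths are placeholders, so double-check both against your SDK version and layout.
    # 1) single MSBuild node, no node reuse, no shared Roslyn compiler server
    dotnet publish -c Release -m:1 -nr:false -p:UseSharedCompilation=false
    # 2) or take the bind mount out of the equation entirely
    cp -r /opt/app-root/src /tmp/build && cd /tmp/build && dotnet publish -c Release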
.net, .net-core, containers, rhel, dotnet-build
1
1,338
0
https://stackoverflow.com/questions/69455480/dotnet-build-failing-inside-container
69,137,122
cloud-init is using domains given in DHCP for hostname
In AWS VPC, I am associating the following in the DHCP option set - Domain name server: 10.*.*.2 (VPC DNS) Domain Name: ec2.internal privatedomain_1.com privatedomain_2.com publicdomain.com As a result, the hostname is getting set to ip-10-5-*-*.privatedomain_1.comprivatedomain_2.compublicdomain.com - the above is the combination of all 3 domain names given in DHCP. The reason to add the domains to DHCP is to only affect /etc/resolv.conf and not the hostname, but it is not working as expected; it is also adding a combination of these entries in /etc/resolv.conf as well: # cat /etc/resolv.conf # Generated by NetworkManager search ec2.internal privatedomain_1.com privatedomain_2.com publicdomain.com privatedomain_1.comprivatedomain_2.compublicdomain.com nameserver 10.*.*.2 I have tried using the preserve hostname document provided by AWS and cloud-init [URL] How do I stop cloud-init from overwriting my hostname on AWS (CentOS) [URL] , but these didn't work out. Any suggestions?
cloud-init is using domains given in DHCP for hostname In AWS VPC, I am associating the following in the DHCP option set - Domain name server: 10.*.*.2 (VPC DNS) Domain Name: ec2.internal privatedomain_1.com privatedomain_2.com publicdomain.com As a result, the hostname is getting set to ip-10-5-*-*.privatedomain_1.comprivatedomain_2.compublicdomain.com - the above is the combination of all 3 domain names given in DHCP. The reason to add the domains to DHCP is to only affect /etc/resolv.conf and not the hostname, but it is not working as expected; it is also adding a combination of these entries in /etc/resolv.conf as well: # cat /etc/resolv.conf # Generated by NetworkManager search ec2.internal privatedomain_1.com privatedomain_2.com publicdomain.com privatedomain_1.comprivatedomain_2.compublicdomain.com nameserver 10.*.*.2 I have tried using the preserve hostname document provided by AWS and cloud-init [URL] How do I stop cloud-init from overwriting my hostname on AWS (CentOS) [URL] , but these didn't work out. Any suggestions?
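A hedged sketch of the usual way to stop cloud-init from rederiving the hostname from the DHCP-supplied domain list, independent of whatever fixes the resolv.conf concatenation: set preserve_hostname in a cloud.cfg.d snippet and pin the name once with hostnamectl (the hostname value below is a placeholder):
    # keep cloud-init from touching the hostname on subsequent boots
    echo 'preserve_hostname: true' | sudo tee /etc/cloud/cloud.cfg.d/99-preserve-hostname.cfg
    # set the desired name once, explicitly
    sudo hostnamectl set-hostname ip-10-5-x-x.ec2.internal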
linux, amazon-web-services, amazon-ec2, rhel, cloud-init
1
707
0
https://stackoverflow.com/questions/69137122/cloud-init-is-using-domains-given-in-dhcp-for-hostname
68,813,987
Unable to get Compliant state for OS policy assignment on GCP console
I created a RHEL7 VM and an OS Policy assignment with a simple config. Here is the YAML I'm using to validate. Below is the same shell script used in the YAML, for reference: export num=$(stat --format '%a' /etc/crontab); if [[ "$num" -eq 644 ]]; then exit 100; else exit 101; fi Since I get the output 644 when I SSH in and try the command manually, and the script checks the same thing, validation should give a compliant state as the result, but it's giving non-compliant. PS: I was getting the correct output, i.e. compliant, 3 days ago, but nothing is working now. It seems Google is making changes, as the service is still in Preview.
Unable to get Compliant state for OS policy assignment on GCP console I created a RHEL7 VM and an OS Policy assignment with a simple config. Here is the YAML I'm using to validate. Below is the same shell script used in the YAML, for reference: export num=$(stat --format '%a' /etc/crontab); if [[ "$num" -eq 644 ]]; then exit 100; else exit 101; fi Since I get the output 644 when I SSH in and try the command manually, and the script checks the same thing, validation should give a compliant state as the result, but it's giving non-compliant. PS: I was getting the correct output, i.e. compliant, 3 days ago, but nothing is working now. It seems Google is making changes, as the service is still in Preview.
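One thing worth ruling out, stated as an assumption: if the OS Config agent runs the validate step with /bin/sh rather than bash, the [[ ]] test would fail and the script would exit non-compliant even though it works in an interactive SSH session. A POSIX-sh rewrite of the same check avoids that dependency:
    #!/bin/sh
    num=$(stat --format '%a' /etc/crontab)
    if [ "$num" -eq 644 ]; then
      exit 100
    else
      exit 101
    fi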
shell, google-cloud-platform, google-compute-engine, rhel, google-cloud-console
1
343
0
https://stackoverflow.com/questions/68813987/unable-to-get-compliant-state-for-os-policy-assignment-on-gcp-console
68,719,733
Mongosqld runs fine, but ODBC fails on test and PowerBI throws error 10060. Connector not working. Windows to two separate RHEL Servers
The Goal I need to get data from a MongoDB updated every 15 minutes to use to build into a PowerBI report. The Gear I am connected from my windows machine via ssh to an RHEL server (server a). This server is running powerbi connector (SQLD) which is connected to my MongoDB that is running on a different server (server b). I'm also running MySQL on server b. My powerBI connector is installed on server b. Exactly where I'm at I am using the steps listed here (and all the associated pages) and have tried everything listed short of writing a config file, as the fact that things are working on mongosqld's end makes me think I don't need it... and if I can't get it working manually, having a config file won't exactly help. [URL] Using: mongosqld --mongo-uri="mongodb://10.xxx.xxx.xx" --auth --mongo-username="ThisGuy" --mongo-password="test" I successfully map the schema and show an active connection in the command window. I can also access my database from compass using an authorization enabled URL. When I set up an ODBC connector I use the IP of server a, the user and password from my url, and port 3307. Nothing shows up in the dropdown, when I click 'test' I get the following message: Connection Failed [MongoDB][ODBC 1.4(w) Driver]Can't connect to MySQL server4 on '10.xxx.xxx.xxx' (10060) I have also tried 3306, 27017, and 27015. Just to be safe I also added firewall rules for all traffic on these ports. I've tried this many times, including (just for the hell of it, and I'm kind of new to this stuff) the ip of server b, the ip of my machine, the credentials for MySQL, basically any combination of these things that I can think of. In powerBI, my odbc driver shows up, and when selected in the dropdown, it asks for a username and password. I have tried both mongo credentials and MySQL. Not sure which I should be using? regardless, I get the following error inside PowerBI: Details: "ODBC: ERROR [HY000] [MySQL][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10061) ERROR [HY000] [MySQL][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10061)" Thoughts I don't control either server, although I have root access, being new to this tech and company I am wary of screwing anything up that a co-worker will have to fix. I read in a different SO thread that maybe I need to downgrade the version of MySQL that is running on the server and that it could fix the problem, but I don't think that it will actually help and am afraid I might screw up something else on the server if I do this: The C Authentication plugin was developed against MySQL 5.7.18 Community Edition (64-bit), and tested with MySQL 5.7.18 Community Edition and the latest version of MongoDB Connector for BI. The plugin is not compatible with MySQL Server or Connector/ODBC driver version 8 and later. [URL] Maybe the problem is that server B is listening to server a on port 3307, and that there is another unknown port (not mentioned above) that my ODBC driver must be listening to? I'm not sure how to test for this when you get a step away like this. So that's it. I'm really stuck and would love some help, I am going to try the downgrade tomorrow if nothing else shakes loose and will keep this thread updated. Thank you for reading
Mongosqld runs fine, but ODBC fails on test and PowerBI throws error 10060. Connector not working. Windows to two seperate RHEL Servers The Goal I need to get data from a MongoDB updated every 15 minutes to use to build into a PowerBI report. The Gear I am connected from my windows machine via ssh to an RHEL server (server a). This server is running powerbi connector (SQLD) which is connected to my MongoDB that is running on a different server (server b). I'm also running MySQL on server b. My powerBI connector is installed on server b. Exactly where I'm at I am using the steps listed here (and all the associated pages) and have tried everything listed short of writing a config file, as the fact that things are working on mongosqld's end makes me think I don't need it... and if I can't get it working manually, having a config file won't exactly help. [URL] Using: mongosqld --mongo-uri="mongodb://10.xxx.xxx.xx" --auth --mongo-username="ThisGuy" --mongo-password="test" I successfully map the schema and show an active connection in the command window. I can also access my database from compass using an authorization enabled URL. When I set up an ODBC connector I use the IP of server a, the user and password from my url, and port 3307. Nothing shows up in the dropdown, when I click 'test' I get the following message: Connection Failed [MongoDB][ODBC 1.4(w) Driver]Can't connect to MySQL server4 on '10.xxx.xxx.xxx' (10060) I have also tried 3306, 27017, and 27015. Just to be safe I also added firewall rules for all traffic on these ports. I've tried this many times, including (just for the hell of it, and I'm kind of new to this stuff) the ip of server b, the ip of my machine, the credentials for MySQL, basically any combination of these things that I can think of. In powerBI, my odbc driver shows up, and when selected in the dropdown, it asks for a username and password. I have tried both mongo credentials and MySQL. Not sure which I should be using? regardless, I get the following error inside PowerBI: Details: "ODBC: ERROR [HY000] [MySQL][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10061) ERROR [HY000] [MySQL][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10061)" Thoughts I don't control either server, although I have root access, being new to this tech and company I am wary of screwing anything up that a co-worker will have to fix. I read in a different SO thread that maybe I need to downgrade the version of MySQL that is running on the server and that it could fix the problem, but I don't think that it will actually help and am afraid I might screw up something else on the server if I do this: The C Authentication plugin was developed against MySQL 5.7.18 Community Edition (64-bit), and tested with MySQL 5.7.18 Community Edition and the latest version of MongoDB Connector for BI. The plugin is not compatible with MySQL Server or Connector/ODBC driver version 8 and later. [URL] Maybe the problem is that server B is listening to server a on port 3307, and that there is another unknown port (not mentioned above) that my ODBC driver must be listening to? I'm not sure how to test for this when you get a step away like this. So that's it. I'm really stuck and would love some help, I am going to try the downgrade tomorrow if nothing else shakes loose and will keep this thread updated. Thank you for reading
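One likely culprit to check first, hedged because the topology here is unusual: mongosqld binds to 127.0.0.1:3307 by default, so a client on another machine cannot reach it at all and gets exactly this kind of 10060/10061 connection failure. Binding it to a reachable interface and testing the port from outside narrows things down (addresses and credentials below are the placeholders from the question):
    # on server a: listen on all interfaces instead of loopback
    mongosqld --addr 0.0.0.0:3307 \
              --mongo-uri "mongodb://10.xxx.xxx.xx" \
              --auth --mongo-username "ThisGuy" --mongo-password "test"
    # from the Windows box or any other host: is 3307 actually reachable?
    nc -vz <server-a-ip> 3307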
mongodb, powerbi, odbc, rhel, mongodb-biconnector
1
1,084
0
https://stackoverflow.com/questions/68719733/mongosqld-runs-fine-but-odbc-fails-on-test-and-powerbi-throws-error-10060-conn
68,388,083
KeyStore.setKeyEntry not working in FIPS mode
I am using RHEL 8.3 with FIPS mode on and openjdk 1.8.0.265. I am using the following token for initializing the Keystore. name = NSSfips nssLibraryDirectory = /usr/lib64 nssSecmodDirectory = <path to nssdb> nssModule = fips For keypair generation I am using: KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA", "SunJSSE"); If FIPS mode is disabled at the OS level everything works fine. Once I turn on the FIPS mode using fips-mode-set --enable I get the following error java.security.KeyStoreException: Cannot convert to PKCS11 keys at sun.security.pkcs11.P11KeyStore.storeSkey(P11KeyStore.java:1637) at sun.security.pkcs11.P11KeyStore.engineSetEntry(P11KeyStore.java:1127) at sun.security.pkcs11.P11KeyStore.engineSetKeyEntry(P11KeyStore.java:457) at java.security.KeyStore.setKeyEntry(KeyStore.java:1140) Caused by: java.security.InvalidKeyException: Could not create key at sun.security.pkcs11.P11SecretKeyFactory.createKey(P11SecretKeyFactory.java:274) at sun.security.pkcs11.P11SecretKeyFactory.convertKey(P11SecretKeyFactory.java:179) at sun.security.pkcs11.P11KeyStore.storeSkey(P11KeyStore.java:1634) ... 46 more Caused by: sun.security.pkcs11.wrapper.PKCS11Exception: CKR_ATTRIBUTE_VALUE_INVALID at sun.security.pkcs11.wrapper.PKCS11.C_CreateObject(Native Method) at sun.security.pkcs11.P11SecretKeyFactory.createKey(P11SecretKeyFactory.java:269) I am sort of unable to comprehend the reason why setKeyEntry is failing. Is there any possible workaround or solution?
KeyStore.setKeyEntry not working in FIPS mode I am using RHEL 8.3 with FIPS mode on and openjdk 1.8.0.265. I am using the following token for initializing the Keystore. name = NSSfips nssLibraryDirectory = /usr/lib64 nssSecmodDirectory = <path to nssdb> nssModule = fips For keypair generation I am using: KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA", "SunJSSE"); If FIPS mode is disabled at the OS level everything works fine. Once I turn on the FIPS mode using fips-mode-set --enable I get the following error java.security.KeyStoreException: Cannot convert to PKCS11 keys at sun.security.pkcs11.P11KeyStore.storeSkey(P11KeyStore.java:1637) at sun.security.pkcs11.P11KeyStore.engineSetEntry(P11KeyStore.java:1127) at sun.security.pkcs11.P11KeyStore.engineSetKeyEntry(P11KeyStore.java:457) at java.security.KeyStore.setKeyEntry(KeyStore.java:1140) Caused by: java.security.InvalidKeyException: Could not create key at sun.security.pkcs11.P11SecretKeyFactory.createKey(P11SecretKeyFactory.java:274) at sun.security.pkcs11.P11SecretKeyFactory.convertKey(P11SecretKeyFactory.java:179) at sun.security.pkcs11.P11KeyStore.storeSkey(P11KeyStore.java:1634) ... 46 more Caused by: sun.security.pkcs11.wrapper.PKCS11Exception: CKR_ATTRIBUTE_VALUE_INVALID at sun.security.pkcs11.wrapper.PKCS11.C_CreateObject(Native Method) at sun.security.pkcs11.P11SecretKeyFactory.createKey(P11SecretKeyFactory.java:269) I am sort of unable to comprehend the reason why setKeyEntry is failing. Is there any possible workaround or solution?
keystore, rhel, fips, nss
1
577
0
https://stackoverflow.com/questions/68388083/keystore-setkeyentry-not-working-in-fips-mode
67,984,934
GDB missing separate debuginfo after installing all debuginfo
I'm trying to use GDB with openssh on RHEL 8. I have installed all the openssh and openssl debuginfo packages and that works fine for finding breakpoints. However, as soon as gdb hits the first breakpoint, it throws an error listing about 30 missing debuginfo packages along with a yum command to install them. I quit, copy-paste the command, all the debuginfo packages install properly, and I try again. gdb then tells me all the packages I just installed are still missing. I'm not sure how to fix it and I'm stuck. Would anyone have a solution?
GDB missing separate debuginfo after installing all debuginfo I'm trying to use GDB with openssh on RHEL 8. I have installed all the openssh and openssl debuginfo packages and that works fine for finding breakpoints. However, as soon as gdb hits the first breakpoint, it throws an error listing about 30 missing debuginfo packages along with a yum command to install them. I quit, copy-paste the command, all the debuginfo packages install properly, and I try again. gdb then tells me all the packages I just installed are still missing. I'm not sure how to fix it and I'm stuck. Would anyone have a solution?
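A hedged line of investigation rather than a known fix: the same "missing separate debuginfo" prompt also appears when the installed debuginfo's build-id does not match the running binaries (for example after a package update), so it is worth confirming that the files landed where gdb looks and that the build-ids line up. Package and binary names below are examples and may differ on your system.
    # did the debuginfo really install, and where?
    rpm -ql openssh-server-debuginfo | head
    # where does gdb search for separate debug files?
    gdb -q /usr/sbin/sshd -ex 'show debug-file-directory' -ex quit
    # compare the binary's build id with the .build-id links the debuginfo provides
    eu-readelf -n /usr/sbin/sshd | grep -A1 'Build ID'
    ls /usr/lib/debug/.build-id/ | head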
gdb, rhel, openssh
1
408
1
https://stackoverflow.com/questions/67984934/gdb-missing-serparate-debuginfo-after-installing-all-debuginfo
67,898,823
500 Can't connect to localhost:443 (certificate verify failed)
sub getTracker { my ($self) = @_; my $url = HOST() . $URIFromSTDIN; my $req = HTTP::Request->new(GET => $url); $req->header('Accept', 'application/xml'); my @credentials = get_system_credentials(); $req->authorization_basic(@credentials[0], @credentials[1]); my $ua = LWP::UserAgent->new(ssl_opts => { verify_hostname => 0 },); $ua->agent("wfa/$VERSION"); my $resp = $ua->request($req); if ($resp->is_success) { print $resp->decoded_content; } else { die $resp->status_line; } my $output = $resp->decoded_content; my $ref = xml_in($output); $ref->{'userAgent'} = $ua; $ref->{'url'} = $url; } above code works fine on RHEL 7.x but on RHEL 8.x it is throwing an error: Unable to contact service url 500 Can't connect to localhost:443 (certificate verify failed)
500 Can't connect to localhost:443 (certificate verify failed) sub getTracker { my ($self) = @_; my $url = HOST() . $URIFromSTDIN; my $req = HTTP::Request->new(GET => $url); $req->header('Accept', 'application/xml'); my @credentials = get_system_credentials(); $req->authorization_basic(@credentials[0], @credentials[1]); my $ua = LWP::UserAgent->new(ssl_opts => { verify_hostname => 0 },); $ua->agent("wfa/$VERSION"); my $resp = $ua->request($req); if ($resp->is_success) { print $resp->decoded_content; } else { die $resp->status_line; } my $output = $resp->decoded_content; my $ref = xml_in($output); $ref->{'userAgent'} = $ua; $ref->{'url'} = $url; } above code works fine on RHEL 7.x but on RHEL 8.x it is throwing an error: Unable to contact service url 500 Can't connect to localhost:443 (certificate verify failed)
perl, rhel, perl-module, rhel7, rhel8
1
1,676
0
https://stackoverflow.com/questions/67898823/500-cant-connect-to-localhost443-certificate-verify-failed
67,396,081
How to configure a ODBC connection to teiid on rhel8? GSSAPI negotiation fails
I have a default Teiid 12.2 installation on RHEL 8. Now I'm trying to configure an ODBC connection that would be used by PHP. This always results in an error "[unixODBC]received invalid response to GSSAPI negotiation: R" This is my ODBC configuration [TEIID12] Driver = PostgreSQL Trace = No Description = PostgreSQL Data Source Servername = servername Port = 35432 Protocol = 7.4-1 UserName = someusername Password = xxxx Database = vdb ReadOnly = no ServerType = Postgres ConnSettings = UseServerSidePrepare=1 Debug=0 Fetch = 10000 A regular isql command also fails with the same information isql -v TEIID12 someusername xxxx Results in: [08001][unixODBC]received invalid response to GSSAPI negotiation: R [ISQL]ERROR: Could not SQLConnect Additional information: The same configuration used to work on a different Linux Distro (Ubuntu) but on this machine, kerberos was not configured. So I assume that this is influincing some sort of "preference". The standalone-teiid.xml configuration has <ssl mode="disabled" /> for the odbc transport. And yet the GSS API errors occur. In combination with the later, is it possible that the postgresql ODBC driver is requiring GSS to be used? Is there a setting in odbc.ini where this can be disabled? On Ubuntu the driver version was 10.01, on RHEL it is 10.03
How to configure a ODBC connection to teiid on rhel8? GSSAPI negotiation fails I have a default Teiid 12.2 installation on RHEL 8. Now I'm trying to configure an ODBC connection that would be used by PHP. This always results in an error "[unixODBC]received invalid response to GSSAPI negotiation: R" This is my ODBC configuration [TEIID12] Driver = PostgreSQL Trace = No Description = PostgreSQL Data Source Servername = servername Port = 35432 Protocol = 7.4-1 UserName = someusername Password = xxxx Database = vdb ReadOnly = no ServerType = Postgres ConnSettings = UseServerSidePrepare=1 Debug=0 Fetch = 10000 A regular isql command also fails with the same information isql -v TEIID12 someusername xxxx Results in: [08001][unixODBC]received invalid response to GSSAPI negotiation: R [ISQL]ERROR: Could not SQLConnect Additional information: The same configuration used to work on a different Linux Distro (Ubuntu) but on this machine, kerberos was not configured. So I assume that this is influincing some sort of "preference". The standalone-teiid.xml configuration has <ssl mode="disabled" /> for the odbc transport. And yet the GSS API errors occur. In combination with the later, is it possible that the postgresql ODBC driver is requiring GSS to be used? Is there a setting in odbc.ini where this can be disabled? On Ubuntu the driver version was 10.01, on RHEL it is 10.03
postgresql, odbc, rhel, teiid
1
632
1
https://stackoverflow.com/questions/67396081/how-to-configure-a-odbc-connection-to-teiid-on-rhel8-gssapi-negotiation-fails
66,975,679
Upgrade server running RHEL 7 with MySQL 5.7 to RHEL 8
We are looking at upgrading a backend server that runs RHEL 7 that hosts a MySQL 5.7 DB to RHEL 8. I read that the RHEL AppStream only supports MySQL 8 and that we would need to make some updates to disable AppStream for MySQL and manually install MySQL 5.7. Has anyone done this and what are your thoughts on the risk of using MySQL 5.7 on RHEL 8. On the other hand, is it just a better idea to move to MySQL 8 and deal with the data migration before upgrading the OS?
Upgrade server running RHEL 7 with MySQL 5.7 to RHEL 8 We are looking at upgrading a backend server that runs RHEL 7 that hosts a MySQL 5.7 DB to RHEL 8. I read that the RHEL AppStream only supports MySQL 8 and that we would need to make some updates to disable AppStream for MySQL and manually install MySQL 5.7. Has anyone done this and what are your thoughts on the risk of using MySQL 5.7 on RHEL 8. On the other hand, is it just a better idea to move to MySQL 8 and deal with the data migration before upgrading the OS?
mysql, rhel
1
570
0
https://stackoverflow.com/questions/66975679/upgrade-server-running-rhel-7-with-mysql-5-7-to-rhel-8
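For the question above, a minimal sketch of the usual route for staying on 5.7: hide the RHEL 8 AppStream module so dnf cannot pull MySQL 8, then install 5.7 from the MySQL community repository. The release-RPM URL and repo names below are assumptions to verify against dev.mysql.com, not something stated in the question.

# Stop the AppStream module from shadowing the community packages
sudo dnf -y module disable mysql
# Add Oracle's community repo (URL is an assumption -- check dev.mysql.com/downloads/repo/yum/)
sudo dnf -y install https://dev.mysql.com/get/mysql80-community-release-el8-1.noarch.rpm
# Install from the 5.7 sub-repo instead of the default 8.0 one
sudo dnf -y --disablerepo=mysql80-community --enablerepo=mysql57-community install mysql-community-server

The risk side is harder to hedge away: 5.7 runs on RHEL 8 this way, but it is on a fixed end-of-support clock, so the 8.0 migration is deferred rather than avoided.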
66,848,260
node process reload after server restart (no npm)
I want to reload/restart the node service automatically after server patching/restart. I can't install any npm package on the server because we don't give it access to the external network, so I copy all node_modules over through WinSCP. As a result npm install -g pm2 will not work; in fact no npm install of any kind will work.
node process reload after server restart (no npm) I want to reload/restart the node service automatically after server patching/restart. I can't install any npm package on the server because we don't give it access to the external network, so I copy all node_modules over through WinSCP. As a result npm install -g pm2 will not work; in fact no npm install of any kind will work.
node.js, centos, rhel
1
83
0
https://stackoverflow.com/questions/66848260/node-process-reload-after-after-server-restartno-npm
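pm2 is only a convenience; for the question above systemd alone can bring the Node process back after patching or reboot, and it needs nothing from npm. A sketch that writes and enables a unit; the service name, user, and paths are hypothetical placeholders.

# Write a unit file (name/user/paths are placeholders)
sudo tee /etc/systemd/system/mynodeapp.service > /dev/null <<'EOF'
[Unit]
Description=My Node service
After=network.target

[Service]
User=appuser
WorkingDirectory=/opt/mynodeapp
ExecStart=/usr/bin/node /opt/mynodeapp/server.js
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now mynodeapp.service   # starts now and on every boot

Restart=always also restarts the process if it crashes between reboots, which is most of what pm2 was wanted for.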
64,563,980
Why can I import numpy into my python interpreter but RHEL says numpy not installed?
Per below, rpm tells me numpy isn't installed, yet I have no problem importing numpy into my python interpreter. Can anyone explain why that may be? (I had to change the three carets (>>>) in the interpreter to an arrow for Stack to display what happened) x@red-hat-image install]$ rpm -q numpy package numpy is not installed x@red-hat-image yum]$ python Python 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2 Type "help", "copyright", "credits" or "license" for more information. ->import numpy ->
Why can I import numpy into my python interpreter but RHEL says numpy not installed? Per below, rpm tells me numpy isn't installed, yet I have no problem importing numpy into my python interpreter. Can anyone explain why that may be? (I had to change the three carets (>>>) in the interpreter to an arrow for Stack to display what happened) x@red-hat-image install]$ rpm -q numpy package numpy is not installed x@red-hat-image yum]$ python Python 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2 Type "help", "copyright", "credits" or "license" for more information. ->import numpy ->
python, numpy, rhel
1
153
1
https://stackoverflow.com/questions/64563980/why-can-i-import-numpy-into-my-python-interpreter-but-rhel-says-numpy-not-instal
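A quick way to resolve the rpm-versus-interpreter disagreement in the question above is to ask the interpreter where its numpy actually lives and who owns that path; typically it was installed with pip (or shipped some other way) rather than as the RPM named numpy, so rpm -q numpy knows nothing about it. A small diagnostic sketch, assuming the same python binary is used in both cases:

# Which interpreter and which numpy are actually in play?
python -c 'import sys, numpy; print(sys.executable); print(numpy.__version__, numpy.__file__)'
# Does any RPM own that directory? (non-zero exit means no package owns it)
rpm -qf "$(python -c 'import os, numpy; print(os.path.dirname(numpy.__file__))')" || echo "not installed via RPM"
# pip's view of it
pip list 2>/dev/null | grep -i numpy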
64,441,325
Log File Creation Issue from Jar Service
I have a spring boot application deployed on RHEL 8. The app creates log files in specific folder and files based on the setting in the logback.xml file. I am deploying the app as a service and I see that the logs are not getting created with the below configuration. The below script file is invoked from systemd process where this file is referenced. #!/bin/sh SERVICE_NAME=My_Service_Name PATH_TO_JAR=/usr/Name_of_User/MyJavaApplication.jar PID_PATH_NAME=/tmp/My_Service_Name-pid case $1 in start) echo "Starting $SERVICE_NAME ..." if [ ! -f $PID_PATH_NAME ]; then java -jar $PATH_TO_JAR echo $! > $PID_PATH_NAME echo "$SERVICE_NAME started ..." else echo "$SERVICE_NAME is already running ..." fi ;; stop) if [ -f $PID_PATH_NAME ]; then PID=$(cat $PID_PATH_NAME); echo "$SERVICE_NAME stoping ..." kill $PID; echo "$SERVICE_NAME stopped ..." rm $PID_PATH_NAME else echo "$SERVICE_NAME is not running ..." fi ;; restart) if [ -f $PID_PATH_NAME ]; then PID=$(cat $PID_PATH_NAME); echo "$SERVICE_NAME stopping ..."; kill $PID; echo "$SERVICE_NAME stopped ..."; rm $PID_PATH_NAME echo "$SERVICE_NAME starting ..." java -jar $PATH_TO_JAR echo $! > $PID_PATH_NAME echo "$SERVICE_NAME started ..." else echo "$SERVICE_NAME is not running ..." fi ;; esac In the abovue script i even tried replacing java -jar $PATH_TO_JAR with nohup java -jar $PATH_TO_JAR /tmp 2>> /dev/null >>/dev/null & but no luck. If i dont use this script and run the app from terminal like java -jar MyApplication.jar the log files do get created at the required location.
Log File Creation Issue from Jar Service I have a spring boot application deployed on RHEL 8. The app creates log files in specific folder and files based on the setting in the logback.xml file. I am deploying the app as a service and I see that the logs are not getting created with the below configuration. The below script file is invoked from systemd process where this file is referenced. #!/bin/sh SERVICE_NAME=My_Service_Name PATH_TO_JAR=/usr/Name_of_User/MyJavaApplication.jar PID_PATH_NAME=/tmp/My_Service_Name-pid case $1 in start) echo "Starting $SERVICE_NAME ..." if [ ! -f $PID_PATH_NAME ]; then java -jar $PATH_TO_JAR echo $! > $PID_PATH_NAME echo "$SERVICE_NAME started ..." else echo "$SERVICE_NAME is already running ..." fi ;; stop) if [ -f $PID_PATH_NAME ]; then PID=$(cat $PID_PATH_NAME); echo "$SERVICE_NAME stoping ..." kill $PID; echo "$SERVICE_NAME stopped ..." rm $PID_PATH_NAME else echo "$SERVICE_NAME is not running ..." fi ;; restart) if [ -f $PID_PATH_NAME ]; then PID=$(cat $PID_PATH_NAME); echo "$SERVICE_NAME stopping ..."; kill $PID; echo "$SERVICE_NAME stopped ..."; rm $PID_PATH_NAME echo "$SERVICE_NAME starting ..." java -jar $PATH_TO_JAR echo $! > $PID_PATH_NAME echo "$SERVICE_NAME started ..." else echo "$SERVICE_NAME is not running ..." fi ;; esac In the abovue script i even tried replacing java -jar $PATH_TO_JAR with nohup java -jar $PATH_TO_JAR /tmp 2>> /dev/null >>/dev/null & but no luck. If i dont use this script and run the app from terminal like java -jar MyApplication.jar the log files do get created at the required location.
java, linux, spring, spring-boot, rhel
1
330
1
https://stackoverflow.com/questions/64441325/log-file-creation-issue-from-jar-service
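Two things are worth checking in the start branch of the script above. First, java -jar runs in the foreground there, so the script blocks and echo $! records nothing useful as a PID. Second, when the script is launched by systemd the working directory is usually not the application directory, so any relative path in logback.xml resolves somewhere unexpected (or somewhere the service user cannot write). A hedged sketch of a start branch that pins the directory and backgrounds the JVM; APP_DIR is a hypothetical placeholder for wherever logback's relative paths are supposed to resolve.

SERVICE_NAME=My_Service_Name
PATH_TO_JAR=/usr/Name_of_User/MyJavaApplication.jar
PID_PATH_NAME=/tmp/My_Service_Name-pid
APP_DIR=/usr/Name_of_User               # placeholder: base dir for logback's relative paths

echo "Starting $SERVICE_NAME ..."
if [ ! -f "$PID_PATH_NAME" ]; then
    cd "$APP_DIR" || exit 1             # make relative logback paths predictable
    nohup java -jar "$PATH_TO_JAR" >/dev/null 2>&1 &
    echo $! > "$PID_PATH_NAME"          # $! is only meaningful because java was backgrounded
    echo "$SERVICE_NAME started ..."
else
    echo "$SERVICE_NAME is already running ..."
fi

An absolute log path in logback.xml, plus write permission for the service user, removes the working-directory question entirely.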
64,415,998
Rhel + Kernel header files not in any of the expected locations - when installing network driver
we try to installe the network driver - i40e on rhel server 7.2 version , ( kernel - 3.10.0-512 ) as the following # tar zxf i40e-2.12.6.tar.gz # cd i40e-2.12.6/src # make install common.mk:82: *** Kernel header files not in any of the expected locations. common.mk:83: *** Install the appropriate kernel development package, e.g. common.mk:84: *** kernel-devel, for building kernel modules and try again. Stop. on other server ( rhel 7.2 ) its installed successfully as the following ( this is the second make install , so the output is little different from the first time ) # cd src/ # make install *** The target kernel has CONFIG_MODULE_SIG_ALL enabled, but *** the signing key cannot be found. Module signing has been *** disabled for this build. make[1]: Entering directory /usr/src/kernels/3.10.0-327.el7.x86_64' Building modules, stage 2. MODPOST 1 modules make[1]: Leaving directory /usr/src/kernels/3.10.0-327.el7.x86_64' Installing modules... *** The target kernel has CONFIG_MODULE_SIG_ALL enabled, but *** the signing key cannot be found. Module signing has been *** disabled for this build. make[1]: Entering directory /usr/src/kernels/3.10.0-327.el7.x86_64' INSTALL /tmp/i40e-2.12.6/src/i40e.ko DEPMOD 3.10.0-327.el7.x86_64 make[1]: Leaving directory /usr/src/kernels/3.10.0-327.el7.x86_64' /sbin/depmod -e -F /lib/modules/3.10.0-327.el7.x86_64/source/System.map -a 3.10.0-327.el7.x86_64 Updating initramfs... make mandocs_install make[1]: Entering directory /tmp/i40e-2.12.6/src' Copying manpages... make[1]: Leaving directory /tmp/i40e-2.12.6/src' what is the reason ? that we get the following errors: common.mk:82: *** Kernel header files not in any of the expected locations. common.mk:83: *** Install the appropriate kernel development package, e.g. common.mk:84: *** kernel-devel, for building kernel modules and try again. Stop. reference - [URL]
Rhel + Kernel header files not in any of the expected locations - when installing network driver we try to installe the network driver - i40e on rhel server 7.2 version , ( kernel - 3.10.0-512 ) as the following # tar zxf i40e-2.12.6.tar.gz # cd i40e-2.12.6/src # make install common.mk:82: *** Kernel header files not in any of the expected locations. common.mk:83: *** Install the appropriate kernel development package, e.g. common.mk:84: *** kernel-devel, for building kernel modules and try again. Stop. on other server ( rhel 7.2 ) its installed successfully as the following ( this is the second make install , so the output is little different from the first time ) # cd src/ # make install *** The target kernel has CONFIG_MODULE_SIG_ALL enabled, but *** the signing key cannot be found. Module signing has been *** disabled for this build. make[1]: Entering directory /usr/src/kernels/3.10.0-327.el7.x86_64' Building modules, stage 2. MODPOST 1 modules make[1]: Leaving directory /usr/src/kernels/3.10.0-327.el7.x86_64' Installing modules... *** The target kernel has CONFIG_MODULE_SIG_ALL enabled, but *** the signing key cannot be found. Module signing has been *** disabled for this build. make[1]: Entering directory /usr/src/kernels/3.10.0-327.el7.x86_64' INSTALL /tmp/i40e-2.12.6/src/i40e.ko DEPMOD 3.10.0-327.el7.x86_64 make[1]: Leaving directory /usr/src/kernels/3.10.0-327.el7.x86_64' /sbin/depmod -e -F /lib/modules/3.10.0-327.el7.x86_64/source/System.map -a 3.10.0-327.el7.x86_64 Updating initramfs... make mandocs_install make[1]: Entering directory /tmp/i40e-2.12.6/src' Copying manpages... make[1]: Leaving directory /tmp/i40e-2.12.6/src' what is the reason ? that we get the following errors: common.mk:82: *** Kernel header files not in any of the expected locations. common.mk:83: *** Install the appropriate kernel development package, e.g. common.mk:84: *** kernel-devel, for building kernel modules and try again. Stop. reference - [URL]
network-programming, linux-kernel, kernel-module, rhel
1
3,046
0
https://stackoverflow.com/questions/64415998/rhel-kernel-header-files-not-in-any-of-the-expected-locations-when-installin
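The complaint in the question above normally just means that the kernel-devel tree matching the running kernel is absent under /usr/src/kernels on that host, which is why the identical build succeeds on the other box. A sketch of the usual check and fix, assuming the machine can reach a repo or has the matching RPM copied onto it:

# What the driver build is looking for
uname -r
ls /usr/src/kernels/
# Install the devel package for exactly the running kernel
sudo yum install -y "kernel-devel-uname-r == $(uname -r)"
# (offline alternative: copy the matching kernel-devel RPM over and run: sudo yum localinstall kernel-devel-*.rpm)
# Then retry the driver build
cd i40e-2.12.6/src && sudo make install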
64,282,829
Supported Linux OS version for Anthos GKE on-prem
I am referring to [URL] to install GKE on-prem by referring to [URL] . I will appreciate it if you can let me know the supported Linux OS version which needs to be installed as part of the Anthos GKE on-prem setup using VMware's vSphere Server Virtualization. Thanks in Advance. Best Regards, Kaushal
Supported Linux OS version for Anthos GKE on-prem I am referring to [URL] to install GKE on-prem by referring to [URL] . I will appreciate it if you can let me know the supported Linux OS version which needs to be installed as part of the Anthos GKE on-prem setup using VMware's vSphere Server Virtualization. Thanks in Advance. Best Regards, Kaushal
google-kubernetes-engine, rhel
1
731
2
https://stackoverflow.com/questions/64282829/supported-linux-os-version-for-anthos-gke-on-prem
63,959,508
Account or password is expired, reset your password and try again sudo: unable to change expired password: Authentication token manipulation error
I am creating one RHEL7.8 machine with s390x arch in Openstack using terraform and running init.sh file in user_data I got the below error log while executing any command in init.sh file. my init.sh file echo "Start executing bootstrap...\n" echo -e "${root_pwd}\n${root_pwd}" | sudo passwd root echo "Changed Root account password" sudo yum update -y i got below error msg Start executing bootstrap...\n sudo: Account or password is expired, reset your password and try again Changing password for root. sudo: no tty present and no askpass program specified sudo: unable to change expired password: Authentication token manipulation error Changed Root account password sudo: Account or password is expired, reset your password and try again sudo: no tty present and no askpass program specified sudo: unable to change expired password: Authentication token manipulation error When I login to machine manually and try sudo passwd root it is working fine. Why it is not working through terraform
Account or password is expired, reset your password and try again sudo: unable to change expired password: Authentication token manipulation error I am creating one RHEL7.8 machine with s390x arch in Openstack using terraform and running init.sh file in user_data I got the below error log while executing any command in init.sh file. my init.sh file echo "Start executing bootstrap...\n" echo -e "${root_pwd}\n${root_pwd}" | sudo passwd root echo "Changed Root account password" sudo yum update -y i got below error msg Start executing bootstrap...\n sudo: Account or password is expired, reset your password and try again Changing password for root. sudo: no tty present and no askpass program specified sudo: unable to change expired password: Authentication token manipulation error Changed Root account password sudo: Account or password is expired, reset your password and try again sudo: no tty present and no askpass program specified sudo: unable to change expired password: Authentication token manipulation error When I login to machine manually and try sudo passwd root it is working fine. Why it is not working through terraform
terraform, rhel, s390x
1
3,880
1
https://stackoverflow.com/questions/63959508/account-or-password-is-expired-reset-your-password-and-try-again-sudo-unable-t
63,958,928
How to build rootfs.tar from RHEL UBI image (pulled with podman)
If I pull a RHEL UBI image like so (On Windows using cygwin and podman), $podman pull registry.access.redhat.com/ubi8/ubi Is there a command I can run on the host system (Windows) to create a file on the host (Windows) that would be a rootfs.tar of the UBI image that was pulled? I want to then use that rootfs.tar to run on the Windows host under WSL2. If anyone has done this or if there is another way to fetch the UBI image as a rootfs.tar , it would be greatly helpful.
How to build rootfs.tar from RHEL UBI image (pulled with podman) If I pull a RHEL UBI image like so (On Windows using cygwin and podman), $podman pull registry.access.redhat.com/ubi8/ubi Is there a command I can run on the host system (Windows) to create a file on the host (Windows) that would be a rootfs.tar of the UBI image that was pulled? I want to then use that rootfs.tar to run on the Windows host under WSL2. If anyone has done this or if there is another way to fetch the UBI image as a rootfs.tar , it would be greatly helpful.
rhel, windows-subsystem-for-linux, podman, ubi
1
755
1
https://stackoverflow.com/questions/63958928/how-to-build-rootfs-tar-from-rhel-ubi-image-pulled-with-podman
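For the question above, a sketch of the usual trick: create a (never started) container from the pulled UBI image, export its filesystem as a tar, and hand that to WSL2. The podman and wsl subcommands are standard; the distro name and Windows paths are placeholders.

# Make a container from the image purely so its filesystem can be exported
podman create --name ubi8-export registry.access.redhat.com/ubi8/ubi
podman export -o rootfs.tar ubi8-export
podman rm ubi8-export

# Then, from PowerShell/cmd on the Windows host (placeholder distro name and path):
#   wsl --import UBI8 C:\wsl\ubi8 rootfs.tar --version 2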
63,850,706
Sharepoint RequestDigest is interpreted as parameter in cURL
I use the following sequence of cURL commands to upload files to Sharepoint. # Get Token curl -X POST -d "" --ntlm -u usr:pw [URL] # Checkout File curl -X POST -d "" -H "X-RequestDigest: 0x...,11 Sep 2020 14:45:30 -0000" --ntlm -u usr:pw "[URL] # Upload File curl --ntlm -u usr:pw --upload-file ... [URL] # Check In curl -X POST -d "" -H "X-RequestDigest: 0x...,11 Sep 2020 14:45:30 -0000" --ntlm -u usr:pw "[URL] That works fine when running locally using libcurl. It does not when running on RHEL7. Then the ' -0000' within the header's token is interpreted as parameter resulting in the error message below. curl: option -0000': is unkown I cannot remove the space or minus because it is part of the token. I already tried to escape the header in several ways without success. Do you have an idea on how to resolve this?
Sharepoint RequestDigest is interpreted as parameter in cURL I use the following sequence of cURL commands to upload files to Sharepoint. # Get Token curl -X POST -d "" --ntlm -u usr:pw [URL] # Checkout File curl -X POST -d "" -H "X-RequestDigest: 0x...,11 Sep 2020 14:45:30 -0000" --ntlm -u usr:pw "[URL] # Upload File curl --ntlm -u usr:pw --upload-file ... [URL] # Check In curl -X POST -d "" -H "X-RequestDigest: 0x...,11 Sep 2020 14:45:30 -0000" --ntlm -u usr:pw "[URL] That works fine when running locally using libcurl. It does not when running on RHEL7. Then the ' -0000' within the header's token is interpreted as parameter resulting in the error message below. curl: option -0000': is unkown I cannot remove the space or minus because it is part of the token. I already tried to escape the header in several ways without success. Do you have an idea on how to resolve this?
rest, curl, sharepoint, talend, rhel
1
406
1
https://stackoverflow.com/questions/63850706/sharepoint-requestdigest-is-interpreted-as-parameter-in-curl
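The usual reason curl sees '-0000' as an option in a situation like the one above is that the digest is expanded unquoted somewhere (a shell variable, or a command string assembled by a tool such as a Talend job), so the shell splits the header value at its spaces. A sketch of keeping it as a single argument; the URL is a placeholder since the real one is redacted:

# Keep the digest in a variable and quote every expansion of it
DIGEST='0x...,11 Sep 2020 14:45:30 -0000'
curl -X POST -d "" \
     -H "X-RequestDigest: ${DIGEST}" \
     --ntlm -u usr:pw \
     "https://sharepoint.example/_api/web/..."        # placeholder URL

# If the command line is built programmatically, a bash array avoids re-splitting:
args=(-X POST -d "" -H "X-RequestDigest: ${DIGEST}" --ntlm -u usr:pw "https://sharepoint.example/_api/...")
curl "${args[@]}"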
63,635,671
xml parsing result in tag with None value for some tag
I am trying to parse a data intensive xml file. I am using lxml to parse each tag: from lxml import etree sourceFile=sys.argv[1] events = ("start", "end") context=etree.iterparse(sourceFile,events=events) for eachEvent, eachElement in context: <the code goes here> I am facing issue with the below data: <QualityData> <Measure>Care for Older Adults - Functional Status Assessment</Measure> <Question>Patients, ages 66 years or older, should have a functional status assessment completed every calendar year.</Question> <Answer>Date:12/31/2019</Answer> <SubAnswer2>Completed comprehensive functional status assessment (not limited to an acute or single condition, event, or body system) today</SubAnswer2> <Measure>Care for Older Adults - Pain Screening</Measure> <Question>Patients, ages 66 years or older, should have a pain assessment at least annually</Question> <Answer>Date:12/31/2019</Answer> <SubAnswer2>Comprehensive pain assessment (not limited to an acute or single condition, event, or body system) completed today</SubAnswer2> </QualityData> The tag SubAnswer2 has 2 occurrences in it. The 2nd occurrence is getting None value. Point to note is that the 2nd occurrence of other tags are getting read properly. Also, I am getting issue with only this data. There are other examples where the tag SubAnswer2 has multiple occurrences and they are getting parsed successfully. The code that I am using to read the value for Subanswer2 is: if eachElement.tag=='SubAnswer2' and QDstart and eachEvent=='start': QDlist.append(eachElement.text) I tried to parse using ElementTree as well. However, I get None for some other tags when I use it. To debug the problem, I wrote a simple parse and print of the data. It looks like for the missing data, the eachElement.text gets value when the event is 'end'. The code that I used to print the data: for eachEvent, eachElement in context: print(eachElement.tag,eachEvent,eachElement.text,sep='::') The output that I got: QualityData::start::None Measure::start::Care for Older Adults - Functional Status Assessment Measure::end::Care for Older Adults - Functional Status Assessment Question::start::Patients, ages 66 years or older, should have a functional status assessment completed every calendar year. Question::end::Patients, ages 66 years or older, should have a functional status assessment completed every calendar year. Answer::start::Date:12/31/2019 Answer::end::Date:12/31/2019 SubAnswer2::start::Completed comprehensive functional status assessment (not limited to an acute or single condition, event, or body system) today SubAnswer2::end::Completed comprehensive functional status assessment (not limited to an acute or single condition, event, or body system) today Measure::start::Care for Older Adults - Pain Screening Measure::end::Care for Older Adults - Pain Screening Question::start::Patients, ages 66 years or older, should have a pain assessment at least annually Question::end::Patients, ages 66 years or older, should have a pain assessment at least annually Answer::start::Date:12/31/2019 Answer::end::Date:12/31/2019 SubAnswer2::start::None SubAnswer2::end::Comprehensive pain assessment (not limited to an acute or single condition, event, or body system) completed today QualityData::end::None Observe the text for SubAnswer2 when the event is 'end'. Is there something I can do to make sure that the text for a tag appears when the event is 'start'? Thanks in advance.
xml parsing result in tag with None value for some tag I am trying to parse a data intensive xml file. I am using lxml to parse each tag: from lxml import etree sourceFile=sys.argv[1] events = ("start", "end") context=etree.iterparse(sourceFile,events=events) for eachEvent, eachElement in context: <the code goes here> I am facing issue with the below data: <QualityData> <Measure>Care for Older Adults - Functional Status Assessment</Measure> <Question>Patients, ages 66 years or older, should have a functional status assessment completed every calendar year.</Question> <Answer>Date:12/31/2019</Answer> <SubAnswer2>Completed comprehensive functional status assessment (not limited to an acute or single condition, event, or body system) today</SubAnswer2> <Measure>Care for Older Adults - Pain Screening</Measure> <Question>Patients, ages 66 years or older, should have a pain assessment at least annually</Question> <Answer>Date:12/31/2019</Answer> <SubAnswer2>Comprehensive pain assessment (not limited to an acute or single condition, event, or body system) completed today</SubAnswer2> </QualityData> The tag SubAnswer2 has 2 occurrences in it. The 2nd occurrence is getting None value. Point to note is that the 2nd occurrence of other tags are getting read properly. Also, I am getting issue with only this data. There are other examples where the tag SubAnswer2 has multiple occurrences and they are getting parsed successfully. The code that I am using to read the value for Subanswer2 is: if eachElement.tag=='SubAnswer2' and QDstart and eachEvent=='start': QDlist.append(eachElement.text) I tried to parse using ElementTree as well. However, I get None for some other tags when I use it. To debug the problem, I wrote a simple parse and print of the data. It looks like for the missing data, the eachElement.text gets value when the event is 'end'. The code that I used to print the data: for eachEvent, eachElement in context: print(eachElement.tag,eachEvent,eachElement.text,sep='::') The output that I got: QualityData::start::None Measure::start::Care for Older Adults - Functional Status Assessment Measure::end::Care for Older Adults - Functional Status Assessment Question::start::Patients, ages 66 years or older, should have a functional status assessment completed every calendar year. Question::end::Patients, ages 66 years or older, should have a functional status assessment completed every calendar year. Answer::start::Date:12/31/2019 Answer::end::Date:12/31/2019 SubAnswer2::start::Completed comprehensive functional status assessment (not limited to an acute or single condition, event, or body system) today SubAnswer2::end::Completed comprehensive functional status assessment (not limited to an acute or single condition, event, or body system) today Measure::start::Care for Older Adults - Pain Screening Measure::end::Care for Older Adults - Pain Screening Question::start::Patients, ages 66 years or older, should have a pain assessment at least annually Question::end::Patients, ages 66 years or older, should have a pain assessment at least annually Answer::start::Date:12/31/2019 Answer::end::Date:12/31/2019 SubAnswer2::start::None SubAnswer2::end::Comprehensive pain assessment (not limited to an acute or single condition, event, or body system) completed today QualityData::end::None Observe the text for SubAnswer2 when the event is 'end'. Is there something I can do to make sure that the text for a tag appears when the event is 'start'? Thanks in advance.
python-3.x, rhel
1
131
1
https://stackoverflow.com/questions/63635671/xml-parsing-result-in-tag-with-none-value-for-some-tag
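The behaviour described above is expected with iterparse: the 'start' event fires as soon as the opening tag has been parsed, and because the file is read in chunks the element's text may or may not have arrived yet, so .text is only guaranteed at the matching 'end' event. Collecting values on 'end' removes the intermittent None. A minimal sketch, runnable as a shell script, assuming the XML from the question is saved as sample.xml:

#!/bin/sh
python3 - <<'PYEOF'
from lxml import etree

qd_list = []
for event, elem in etree.iterparse("sample.xml", events=("end",)):
    if elem.tag == "SubAnswer2":
        qd_list.append(elem.text)   # complete once 'end' has fired
        elem.clear()                # keep memory flat on large files
print(qd_list)
PYEOF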
63,458,546
Why can I not ssh into RHEL 8.2 when sshd is running and port 22 shows it&#39;s open?
I installed RHEL 8.2 with a free developer license (bare hardware), it looks like sshd is installed, running by default with port 22 already open, I did not have to do anything to install sshd or open the port. [root@<hostname> etc]# systemctl status sshd ● sshd.service - OpenSSH server daemon Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2020-08-17 13:35:12 MDT; 1h 7min ago ... but on Windows 10 Pro (with cygwin ssh client installed), ssh <user>@<ip-address> I get this error ssh: connect to host <ip-address> port 22: Permission denied On the RHEL 8.2 installation, in a bash terminal, I can successfully ssh locally: ssh <user>@<ip-address> and it works OK. Any ideas? This is what I am getting: From: 192.168.0.153 To: 192.168.0.106 $ssh -Tv <user>@<ip-address> OpenSSH_8.3p1, OpenSSL 1.1.1f 31 Mar 2020 debug1: Connecting to 192.168.0.106 [192.168.0.106] port 22. debug1: connect to address 192.168.0.106 port 22: Permission denied ssh: connect to host 192.168.0.106 port 22: Permission denied but on 192.168.0.106, it is showing sshd running and port 22 open. On the machine itself, I can ssh ( $ssh <user>@localhost works) On the server I want to reach, it shows port 22 as open, ssh service enabled (192.168.0.106) #firewall-cmd --list-all public (active) ... interfaces: enp37s0 services: cockpit dhcpv6-client http ssh ports: 22/tcp ...
Why can I not ssh into RHEL 8.2 when sshd is running and port 22 shows it&#39;s open? I installed RHEL 8.2 with a free developer license (bare hardware), it looks like sshd is installed, running by default with port 22 already open, I did not have to do anything to install sshd or open the port. [root@<hostname> etc]# systemctl status sshd ● sshd.service - OpenSSH server daemon Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2020-08-17 13:35:12 MDT; 1h 7min ago ... but on Windows 10 Pro (with cygwin ssh client installed), ssh <user>@<ip-address> I get this error ssh: connect to host <ip-address> port 22: Permission denied On the RHEL 8.2 installation, in a bash terminal, I can successfully ssh locally: ssh <user>@<ip-address> and it works OK. Any ideas? This is what I am getting: From: 192.168.0.153 To: 192.168.0.106 $ssh -Tv <user>@<ip-address> OpenSSH_8.3p1, OpenSSL 1.1.1f 31 Mar 2020 debug1: Connecting to 192.168.0.106 [192.168.0.106] port 22. debug1: connect to address 192.168.0.106 port 22: Permission denied ssh: connect to host 192.168.0.106 port 22: Permission denied but on 192.168.0.106, it is showing sshd running and port 22 open. On the machine itself, I can ssh ( $ssh <user>@localhost works) On the server I want to reach, it shows port 22 as open, ssh service enabled (192.168.0.106) #firewall-cmd --list-all public (active) ... interfaces: enp37s0 services: cockpit dhcpv6-client http ssh ports: 22/tcp ...
ssh, rhel, sshd
1
4,610
2
https://stackoverflow.com/questions/63458546/why-can-i-not-ssh-into-rhel-8-2-when-sshd-is-running-and-port-22-shows-its-open
61,823,603
yum clean all: cannot use yum on RHEL 7
I was facing an installation issue with yum which couldn't find mirrors for php. Hence i performed yum clean all and when i tried to install again, it gave me the following error. Not sure what to do. I am working on RHEL 7. Loaded plugins: product-id, search-disabled-repos, subscription-manager file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml" Trying other mirror. One of the configured repositories failed (Red Hat Enterprise Linux 7.6), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=InstallMedia ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable InstallMedia or subscription-manager repos --disable=InstallMedia 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=InstallMedia.skip_if_unavailable=true failure: repodata/repomd.xml from InstallMedia: [Errno 256] No more mirrors to try. file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml" How to i get this working again
yum clean all: cannot use yum on RHEL 7 I was facing an installation issue with yum which couldn't find mirrors for php. Hence i performed yum clean all and when i tried to install again, it gave me the following error. Not sure what to do. I am working on RHEL 7. Loaded plugins: product-id, search-disabled-repos, subscription-manager file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml" Trying other mirror. One of the configured repositories failed (Red Hat Enterprise Linux 7.6), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=InstallMedia ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable InstallMedia or subscription-manager repos --disable=InstallMedia 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=InstallMedia.skip_if_unavailable=true failure: repodata/repomd.xml from InstallMedia: [Errno 256] No more mirrors to try. file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml" How to i get this working again
linux, installation, yum, rhel
1
6,929
1
https://stackoverflow.com/questions/61823603/yum-clean-all-cannot-use-yum-on-rhel-7
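The repository that is actually failing above is the local 'InstallMedia' one, whose baseurl points at file:///mnt; after yum clean all there is no cached metadata left to hide the fact that nothing is mounted there. A sketch of the two usual ways out (the ISO path is an assumption, use whatever the repo was originally built from):

# Option 1: remount the installation media so file:///mnt resolves again
sudo mount -o loop /path/to/rhel-server-7.x-x86_64-dvd.iso /mnt
sudo yum install php

# Option 2: stop consulting the stale local repo (as yum's own hint suggests)
sudo yum-config-manager --disable InstallMedia
# or only for one transaction:
sudo yum --disablerepo=InstallMedia install php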
61,702,529
Install ansible off-line from binaries
We have a RHEL Linux machine without network access and we want to install Ansible on that machine, but from binaries (not via a pip/yum install), because we want to avoid any pip dependency issues. Is there any approach that is relevant? Example of the legacy way: Step 1: Update your Control Node Any time you are installing new software, it is a good idea to ensure your existing operating system software is up to date. Let’s start with that task first. yum update Step 2: Install the EPEL Repository Installing Ansible is pretty straightforward. First, we’ll need to install the CentOS 7 EPEL repository. yum install epel-release Step 3: Install Ansible Next, we install the Ansible package from the EPEL repository. yum install ansible
Install ansible off-line from binaries We have a RHEL Linux machine without network access and we want to install Ansible on that machine, but from binaries (not via a pip/yum install), because we want to avoid any pip dependency issues. Is there any approach that is relevant? Example of the legacy way: Step 1: Update your Control Node Any time you are installing new software, it is a good idea to ensure your existing operating system software is up to date. Let’s start with that task first. yum update Step 2: Install the EPEL Repository Installing Ansible is pretty straightforward. First, we’ll need to install the CentOS 7 EPEL repository. yum install epel-release Step 3: Install Ansible Next, we install the Ansible package from the EPEL repository. yum install ansible
linux, pip, ansible, yum, rhel
1
5,806
2
https://stackoverflow.com/questions/61702529/install-ansible-off-line-from-binaries
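If the real objection above is to needing network access rather than to RPMs as such, one workable pattern is to resolve and download Ansible plus every dependency on an internet-connected RHEL machine of the same release, carry the directory across, and install it locally; nothing is fetched (and no pip is involved) on the offline box. Sketch, assuming yum-utils and the EPEL repo are available on the connected machine:

# On the connected RHEL 7 machine
sudo yum install -y yum-utils
mkdir -p ~/ansible-rpms
yumdownloader --resolve --destdir ~/ansible-rpms ansible

# Copy ~/ansible-rpms to the offline machine (scp, USB, ...), then there:
cd ansible-rpms
sudo yum localinstall -y ./*.rpm    # resolves only against the copied files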
61,543,366
How to let git use https when clone repository?
Installed this package on OS yum install curl-devel Installed git by source wget [URL] tar zxvf v2.26.2.tar.gz cd git-2.26.2 make configure ./configure --prefix=/usr/local make make install Version git --version git version 2.26.2 Clone repo with https git clone [URL] Cloning into 'the_silver_searcher'... fatal: unable to find remote helper for 'https' Why? If this way doesn't work, how to uninstall git now?
How to let git use https when clone repository? Installed this package on OS yum install curl-devel Installed git by source wget [URL] tar zxvf v2.26.2.tar.gz cd git-2.26.2 make configure ./configure --prefix=/usr/local make make install Version git --version git version 2.26.2 Clone repo with https git clone [URL] Cloning into 'the_silver_searcher'... fatal: unable to find remote helper for 'https' Why? If this way doesn't work, how to uninstall git now?
git, https, centos, rhel, rhel7
1
141
0
https://stackoverflow.com/questions/61543366/how-to-let-git-use-https-when-clone-repository
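git only builds its https transport (the git-remote-https helper) when the libcurl headers are visible at configure time, so the likely fix for the question above is simply to rebuild now that curl-devel is present and confirm the helper lands under the chosen prefix. A hedged sketch; as far as I know the git source tree has no uninstall target, so removing a source build means deleting what make install placed under the prefix.

cd git-2.26.2
make distclean                      # discard the earlier configure results
make configure
./configure --prefix=/usr/local
make
make install                        # should now also install git-remote-https

# Verify the helper is where git looks for it
git --exec-path
ls "$(git --exec-path)"/git-remote-https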
60,884,885
Error compiling Apache with SSL on RHEL 6.10
I'm trying to compile Apache on RHEL 6.10 Requirements are: no root access and use native RHEL OpenSSL openssl version OpenSSL 1.0.1e-fips 11 Feb 2013 but when I try to ./configure Apache 2.4.39 with ... --prefix=/sb/sys1/apache --with-ssl=/opt/puppetlabs/puppet/include/openssl ... --enable-ssl --enable-so it brings SSL related errors: checking whether to enable mod_ssl... checking dependencies checking for OpenSSL... checking for user-provided OpenSSL base directory... /opt/puppetlabs/puppet/include/openssl adding "-I/opt/puppetlabs/puppet/include/openssl/include" to CPPFLAGS setting MOD_CFLAGS to "-I/opt/puppetlabs/puppet/include/openssl/include" setting ab_CFLAGS to "-I/opt/puppetlabs/puppet/include/openssl/include" adding "-L/opt/puppetlabs/puppet/include/openssl/lib" to LDFLAGS setting MOD_LDFLAGS to "-L/opt/puppetlabs/puppet/include/openssl/lib" checking for OpenSSL version >= 0.9.8a... FAILED adding "-lssl" to MOD_LDFLAGS adding "-lcrypto" to MOD_LDFLAGS adding "-lrt" to MOD_LDFLAGS adding "-lcrypt" to MOD_LDFLAGS adding "-lpthread" to MOD_LDFLAGS setting LIBS to "-lssl -lcrypto -lrt -lcrypt -lpthread" forcing ab_LIBS to "-L/opt/puppetlabs/puppet/include/openssl/lib -lssl -lcrypto -lrt -lcrypt -lpthread" checking openssl/engine.h usability... no checking openssl/engine.h presence... no checking for openssl/engine.h... no checking for SSL_CTX_new... no checking for ENGINE_init... no checking for ENGINE_load_builtin_engines... no checking for RAND_egd... no configure: WARNING: OpenSSL libraries are unusable yes Any ideas why Apache doesn't seem to accept OpenSSL, while the version looks ok?
Error compiling Apache with SSL on RHEL 6.10 I'm trying to compile Apache on RHEL 6.10 Requirements are: no root access and use native RHEL OpenSSL openssl version OpenSSL 1.0.1e-fips 11 Feb 2013 but when I try to ./configure Apache 2.4.39 with ... --prefix=/sb/sys1/apache --with-ssl=/opt/puppetlabs/puppet/include/openssl ... --enable-ssl --enable-so it brings SSL related errors: checking whether to enable mod_ssl... checking dependencies checking for OpenSSL... checking for user-provided OpenSSL base directory... /opt/puppetlabs/puppet/include/openssl adding "-I/opt/puppetlabs/puppet/include/openssl/include" to CPPFLAGS setting MOD_CFLAGS to "-I/opt/puppetlabs/puppet/include/openssl/include" setting ab_CFLAGS to "-I/opt/puppetlabs/puppet/include/openssl/include" adding "-L/opt/puppetlabs/puppet/include/openssl/lib" to LDFLAGS setting MOD_LDFLAGS to "-L/opt/puppetlabs/puppet/include/openssl/lib" checking for OpenSSL version >= 0.9.8a... FAILED adding "-lssl" to MOD_LDFLAGS adding "-lcrypto" to MOD_LDFLAGS adding "-lrt" to MOD_LDFLAGS adding "-lcrypt" to MOD_LDFLAGS adding "-lpthread" to MOD_LDFLAGS setting LIBS to "-lssl -lcrypto -lrt -lcrypt -lpthread" forcing ab_LIBS to "-L/opt/puppetlabs/puppet/include/openssl/lib -lssl -lcrypto -lrt -lcrypt -lpthread" checking openssl/engine.h usability... no checking openssl/engine.h presence... no checking for openssl/engine.h... no checking for SSL_CTX_new... no checking for ENGINE_init... no checking for ENGINE_load_builtin_engines... no checking for RAND_egd... no configure: WARNING: OpenSSL libraries are unusable yes Any ideas why Apache doesn't seem to accept OpenSSL, while the version looks ok?
linux, apache, ssl, openssl, rhel
1
226
0
https://stackoverflow.com/questions/60884885/error-compiling-apache-with-ssl-on-rhel-6-10
60,694,801
Packages install but not found
I have packages installed under /usr/local/lib and I added that in my PATH as well, but then I try to import it in any of my python scripts I get an error saying module not found. -bash-4.2$ pip2 list | grep pytest pytest-mock 2.0.0 My PATH: echo $PATH /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/bin ERROR: -bash-4.2$ python2 >>> import pytest Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named pytest Only if the packages is installed under my /users/user-name/.local/bin folder, it is reflected else it is not. My usecase is to use this machine as a slave for my Jenkins setup. I tried injecting this PATH directly to the job during build process as well. Didn't work for me. I have been stuck on this for quite some while, any help on this is greatly appreciated.
Packages install but not found I have packages installed under /usr/local/lib and I added that in my PATH as well, but then I try to import it in any of my python scripts I get an error saying module not found. -bash-4.2$ pip2 list | grep pytest pytest-mock 2.0.0 My PATH: echo $PATH /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/bin ERROR: -bash-4.2$ python2 >>> import pytest Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named pytest Only if the packages is installed under my /users/user-name/.local/bin folder, it is reflected else it is not. My usecase is to use this machine as a slave for my Jenkins setup. I tried injecting this PATH directly to the job during build process as well. Didn't work for me. I have been stuck on this for quite some while, any help on this is greatly appreciated.
linux, jenkins, path, package, rhel
1
1,138
1
https://stackoverflow.com/questions/60694801/packages-install-but-not-found
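The usual mismatch behind the symptom above is that pip2 and the python2 found on PATH belong to different interpreters, or pip installs into a site-packages directory that this interpreter never searches; PATH only influences which executables run, not Python's import path. A short diagnostic sketch:

# Which binaries are in play, and which python does pip2 belong to?
which python2 pip2
head -1 "$(which pip2)"                     # shebang reveals pip2's interpreter

# Where did pip put the package, and does this interpreter search there?
pip2 show pytest-mock | grep -i '^location'
python2 -c 'import sys; print("\n".join(sys.path))'

# If they disagree, install with the interpreter you intend to use
python2 -m pip install --user pytest pytest-mock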
60,265,691
RHEL - Environment variable
I have an environment file named .env337_dev . I need to run this file to set the environment before running another command. How to run this file? Inside the file, it contains several variables like this export AB_HOME=/et/dev/abinitio/sit1/abinitio-V2 #/gcc3p32 # for 32-bit export PATH=${AB_HOME}/bin:${PATH} Apart from . ./.env337_dev command which will run and set the environment, is there any other way to run this file ?
RHEL - Environment variable I have an environment file named .env337_dev . I need to run this file to set the environment before running another command. How to run this file? Inside the file, it contains several variables like this export AB_HOME=/et/dev/abinitio/sit1/abinitio-V2 #/gcc3p32 # for 32-bit export PATH=${AB_HOME}/bin:${PATH} Apart from . ./.env337_dev command which will run and set the environment, is there any other way to run this file ?
environment-variables, rhel, rhel6, ab-initio
1
896
2
https://stackoverflow.com/questions/60265691/rhel-environment-variable
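Besides sourcing the file into the current shell, the settings above can also be applied to just a single command so the caller's environment stays untouched. A sketch, assuming the file contains only export lines as shown; some_command is a placeholder:

# Subshell: the exports live only for the duration of the command
( . ./.env337_dev && some_command --its-args )

# Same idea via an explicit child shell (handy from cron or another script)
bash -c 'source ./.env337_dev && some_command --its-args'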
60,157,714
RHEL8 and docker-compose default network error EHOSTUNREACH
We have been using single container Docker images for some time without issues on RHEL8. We need to move toward integrating multiple services using docker-compose but have not been successful in even simple attempts. We are using Mongo (mongo:4.2.3-bionic) and NodeJS (node:alpine). We created a simple node application which is trying to add a single document to a MongoDB collection. The code for dbwrite.js is: var MongoClient = require('mongodb').MongoClient; MongoClient.connect("mongodb://mongo:27017/", function(err, mongodb) { if (err) throw err; var mongodbo = mongodb.db("test"); var doc = {"payload":"test doc"}; mongodbo.collection("test2").insertOne(doc, function(err, res) { if (err) throw err; }); mongodb.close(); }); The Dockerfile for dbwrite.js is: FROM node:alpine ADD . / CMD ["node", "dbwrite.js"] The Mongo container was pulled from DockerHub as was the Node container. The docker-compose.yaml file: version: '3.1' services: mongo: image: mongo:4.2.3-bionic container_name: mongo restart: always ports: - 27017:27017 volumes: - ./mongo_db:/data/db app: image: dbwrite:v0.1 container_name: dbwrite If we perform "docker-compose up" the dbwrite container throws an error: dbwrite | /node_modules/mongodb/lib/topologies/server.js:233 dbwrite | throw err; dbwrite | ^ dbwrite | dbwrite | MongoNetworkError: failed to connect to server [mongo:27017] on first connect [Error: connect EHOSTUNREACH 172.22.0.2:27017 dbwrite | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1137:16) { dbwrite | name: 'MongoNetworkError', dbwrite | [Symbol(mongoErrorContextSymbol)]: {} dbwrite | }] dbwrite | at Pool.<anonymous> (/node_modules/mongodb/lib/core/topologies/server.js:438:11) dbwrite | at Pool.emit (events.js:321:20) dbwrite | at /node_modules/mongodb/lib/core/connection/pool.js:561:14 dbwrite | at /node_modules/mongodb/lib/core/connection/pool.js:994:11 dbwrite | at /node_modules/mongodb/lib/core/connection/connect.js:31:7 dbwrite | at callback (/node_modules/mongodb/lib/core/connection/connect.js:264:5) dbwrite | at Socket.<anonymous> (/node_modules/mongodb/lib/core/connection/connect.js:294:7) dbwrite | at Object.onceWrapper (events.js:428:26) dbwrite | at Socket.emit (events.js:321:20) dbwrite | at emitErrorNT (internal/streams/destroy.js:84:8) { dbwrite | name: 'MongoNetworkError', dbwrite | [Symbol(mongoErrorContextSymbol)]: {} dbwrite | } dbwrite exited with code 1 Rebuilding the container (doing it the hard way -- I know -- but wanting to keep everything as identical as possible), and replacing the Dockerfile CMD line CMD ["node", "dbwrite.js"] with CMD ["ping", "-c", "20", "mongo"] yields normal ping responses from "mongo" so I believe the default network was created right and the DNS is happening as expected, yet my node application gets EHOSTUNREACH. dbwrite | 64 bytes from 172.22.0.2: seq=15 ttl=64 time=0.072 ms dbwrite | 64 bytes from 172.22.0.2: seq=16 ttl=64 time=0.080 ms dbwrite | 64 bytes from 172.22.0.2: seq=17 ttl=64 time=0.067 ms dbwrite | 64 bytes from 172.22.0.2: seq=18 ttl=64 time=0.121 ms dbwrite | 64 bytes from 172.22.0.2: seq=19 ttl=64 time=0.097 ms dbwrite | dbwrite | --- mongo ping statistics --- dbwrite | 20 packets transmitted, 20 packets received, 0% packet loss dbwrite | round-trip min/avg/max = 0.065/0.086/0.121 ms dbwrite exited with code 0 If we edit the dbwrite.js code and replace, "mongo" in the connect() method with "localhost" and execute "node dbwrite.js" from the localhost (outside a container), the Document to the Collection. 
The Mongo container log reports that it is listening on 0.0.0.0. mongo | 2020-02-10T19:35:26.337+0000 I NETWORK [listener] Listening on 0.0.0.0 mongo | 2020-02-10T19:35:26.337+0000 I NETWORK [listener] waiting for connections on port 27017 While I don't have the output captured, previous executions of "docker network inspect" showed both containers and their assigned IPv4 addresses on 172.22.0.x/16. IPAM showed using the default driver "bridge" on subnet 172.22.0.0/16 and a gateway of 172.22.0.1. Any suggestions on what could be wrong would be greatly appreciated. We are on the verge of down-grading off RHEL8 to see if that is related to our problem given that Red Hat so vocally claims NOT to support Docker. Seems like it is some network security issue since ICMP ping can traverse the bridge but TCP socket connection cannot.
RHEL8 and docker-compose default network error EHOSTUNREACH We have been using single container Docker images for some time without issues on RHEL8. We need to move toward integrating multiple services using docker-compose but have not been successful in even simple attempts. We are using Mongo (mongo:4.2.3-bionic) and NodeJS (node:alpine). We created a simple node application which is trying to add a single document to a MongoDB collection. The code for dbwrite.js is: var MongoClient = require('mongodb').MongoClient; MongoClient.connect("mongodb://mongo:27017/", function(err, mongodb) { if (err) throw err; var mongodbo = mongodb.db("test"); var doc = {"payload":"test doc"}; mongodbo.collection("test2").insertOne(doc, function(err, res) { if (err) throw err; }); mongodb.close(); }); The Dockerfile for dbwrite.js is: FROM node:alpine ADD . / CMD ["node", "dbwrite.js"] The Mongo container was pulled from DockerHub as was the Node container. The docker-compose.yaml file: version: '3.1' services: mongo: image: mongo:4.2.3-bionic container_name: mongo restart: always ports: - 27017:27017 volumes: - ./mongo_db:/data/db app: image: dbwrite:v0.1 container_name: dbwrite If we perform "docker-compose up" the dbwrite container throws an error: dbwrite | /node_modules/mongodb/lib/topologies/server.js:233 dbwrite | throw err; dbwrite | ^ dbwrite | dbwrite | MongoNetworkError: failed to connect to server [mongo:27017] on first connect [Error: connect EHOSTUNREACH 172.22.0.2:27017 dbwrite | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1137:16) { dbwrite | name: 'MongoNetworkError', dbwrite | [Symbol(mongoErrorContextSymbol)]: {} dbwrite | }] dbwrite | at Pool.<anonymous> (/node_modules/mongodb/lib/core/topologies/server.js:438:11) dbwrite | at Pool.emit (events.js:321:20) dbwrite | at /node_modules/mongodb/lib/core/connection/pool.js:561:14 dbwrite | at /node_modules/mongodb/lib/core/connection/pool.js:994:11 dbwrite | at /node_modules/mongodb/lib/core/connection/connect.js:31:7 dbwrite | at callback (/node_modules/mongodb/lib/core/connection/connect.js:264:5) dbwrite | at Socket.<anonymous> (/node_modules/mongodb/lib/core/connection/connect.js:294:7) dbwrite | at Object.onceWrapper (events.js:428:26) dbwrite | at Socket.emit (events.js:321:20) dbwrite | at emitErrorNT (internal/streams/destroy.js:84:8) { dbwrite | name: 'MongoNetworkError', dbwrite | [Symbol(mongoErrorContextSymbol)]: {} dbwrite | } dbwrite exited with code 1 Rebuilding the container (doing it the hard way -- I know -- but wanting to keep everything as identical as possible), and replacing the Dockerfile CMD line CMD ["node", "dbwrite.js"] with CMD ["ping", "-c", "20", "mongo"] yields normal ping responses from "mongo" so I believe the default network was created right and the DNS is happening as expected, yet my node application gets EHOSTUNREACH. 
dbwrite | 64 bytes from 172.22.0.2: seq=15 ttl=64 time=0.072 ms dbwrite | 64 bytes from 172.22.0.2: seq=16 ttl=64 time=0.080 ms dbwrite | 64 bytes from 172.22.0.2: seq=17 ttl=64 time=0.067 ms dbwrite | 64 bytes from 172.22.0.2: seq=18 ttl=64 time=0.121 ms dbwrite | 64 bytes from 172.22.0.2: seq=19 ttl=64 time=0.097 ms dbwrite | dbwrite | --- mongo ping statistics --- dbwrite | 20 packets transmitted, 20 packets received, 0% packet loss dbwrite | round-trip min/avg/max = 0.065/0.086/0.121 ms dbwrite exited with code 0 If we edit the dbwrite.js code and replace, "mongo" in the connect() method with "localhost" and execute "node dbwrite.js" from the localhost (outside a container), the Document to the Collection. The Mongo container log reports that it is listening on 0.0.0.0. mongo | 2020-02-10T19:35:26.337+0000 I NETWORK [listener] Listening on 0.0.0.0 mongo | 2020-02-10T19:35:26.337+0000 I NETWORK [listener] waiting for connections on port 27017 While I don't have the output captured, previous executions of "docker network inspect" showed both containers and their assigned IPv4 addresses on 172.22.0.x/16. IPAM showed using the default driver "bridge" on subnet 172.22.0.0/16 and a gateway of 172.22.0.1. Any suggestions on what could be wrong would be greatly appreciated. We are on the verge of down-grading off RHEL8 to see if that is related to our problem given that Red Hat so vocally claims NOT to support Docker. Seems like it is some network security issue since ICMP ping can traverse the bridge but TCP socket connection cannot.
docker-compose, rhel, ehostunreach
1
467
0
https://stackoverflow.com/questions/60157714/rhel8-and-docker-compose-default-network-error-ehostunreach
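One culprit frequently reported for exactly this symptom on RHEL 8 is firewalld (with its nftables backend) filtering traffic between containers on the bridge that docker-compose creates for the project. This is only a hypothesis to test, and the interface name below (br-xxxxxxxx) is a placeholder for whatever docker network ls / ip addr shows for the compose network:

# Identify the project's bridge
docker network ls
ip -br addr | grep '^br-'

# As an experiment, trust that bridge so intra-bridge TCP is not filtered
sudo firewall-cmd --zone=trusted --add-interface=br-xxxxxxxx
sudo firewall-cmd --permanent --zone=trusted --add-interface=br-xxxxxxxx
sudo firewall-cmd --reload

# Re-run and retest
docker-compose up

If that changes the behaviour, the longer-term fix belongs in the firewalld/docker zone configuration rather than in the compose file.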
60,028,089
Vsftp user doesn't see any data on Linux RHEL/CentOS
I tried creating a vsftp server on RHEL 8 and CentOS. My FTP users can log in to the server but only see the list of directories and are able to navigate to any directory. Users cannot create a directory or file. Users cannot see any files in any directory. I changed chmod 777 and changed ownership but nothing works.
Vsftp user doesn't see any data on Linux RHEL/CentOS I tried creating a vsftp server on RHEL 8 and CentOS. My FTP users can log in to the server but only see the list of directories and are able to navigate to any directory. Users cannot create a directory or file. Users cannot see any files in any directory. I changed chmod 777 and changed ownership but nothing works.
linux, centos, rhel, vsftpd
1
32
1
https://stackoverflow.com/questions/60028089/vsfp-user-dont-see-any-data-linux-rhel-centos
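Before reaching for more chmod in the situation above, two layers are worth checking: vsftpd's own configuration (writes are disabled unless write_enable is on) and, on RHEL/CentOS, SELinux, which will block the FTP daemon from reading or writing user files even when the mode bits are 777. A sketch; boolean names vary slightly between releases, so list them first:

# /etc/vsftpd/vsftpd.conf -- confirm at least:
#   local_enable=YES
#   write_enable=YES
sudo systemctl restart vsftpd

# Is SELinux denying the daemon?
getenforce
sudo ausearch -m avc -ts recent | grep -i ftp

# See which ftp-related booleans this policy offers, then enable what fits
getsebool -a | grep -i ftp
sudo setsebool -P ftpd_full_access on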
59,147,459
Cannot renew RHEL 7.6 Developer account subscription
I'm having problems renewing an RHEL Developer account subscription, which expired a few days ago. I performed the following steps, but still, subscription-manager notifies me that: Unable to find available subscriptions for all your installed products. [qa@brendan ~]$ sudo subscription-manager remove --all 0 subscriptions removed at the server. [qa@brendan ~]$ sudo subscription-manager unregister Unregistering from: subscription.rhsm.redhat.com:443/subscription System has been unregistered. [qa@brendan ~]$ sudo subscription-manager clean All local data removed [qa@brendan ~]$ sudo subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: brendanjsonar Password: The system has been registered with ID: 7fe89b83-6ec2-423c-9476-062ab20d286a The registered system name is: brendan.local [qa@brendan ~]$ sudo subscription-manager refresh All local data refreshed [qa@brendan ~]$ sudo subscription-manager attach --auto Installed Product Current Status: Product Name: Red Hat Enterprise Linux Server Status: Not Subscribed Unable to find available subscriptions for all your installed products. [qa@brendan ~]$ I also unregistered the system manually in [URL] , but the registration (with sudo subscription-manager register --username brendanjsonar --auto-attach ) still complains Unable to find available subscriptions for all your installed products. Any idea of how to renew my RHEL 7.6 system's registration?
Cannot renew RHEL 7.6 Developer account subscription I'm having problems renewing an RHEL Developer account subscription, which expired a few days ago. I performed the following steps, but still, subscription-manager notifies me that: Unable to find available subscriptions for all your installed products. [qa@brendan ~]$ sudo subscription-manager remove --all 0 subscriptions removed at the server. [qa@brendan ~]$ sudo subscription-manager unregister Unregistering from: subscription.rhsm.redhat.com:443/subscription System has been unregistered. [qa@brendan ~]$ sudo subscription-manager clean All local data removed [qa@brendan ~]$ sudo subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: brendanjsonar Password: The system has been registered with ID: 7fe89b83-6ec2-423c-9476-062ab20d286a The registered system name is: brendan.local [qa@brendan ~]$ sudo subscription-manager refresh All local data refreshed [qa@brendan ~]$ sudo subscription-manager attach --auto Installed Product Current Status: Product Name: Red Hat Enterprise Linux Server Status: Not Subscribed Unable to find available subscriptions for all your installed products. [qa@brendan ~]$ I also unregistered the system manually in [URL] , but the registration (with sudo subscription-manager register --username brendanjsonar --auto-attach ) still complains Unable to find available subscriptions for all your installed products. Any idea of how to renew my RHEL 7.6 system's registration?
subscription, rhel, rhel7
1
1,632
3
https://stackoverflow.com/questions/59147459/cannot-renew-rhel-7-6-developer-account-subscription
59,144,939
RHEL w/Tkinter &amp; Python3 - Changing the Activity name from &quot;Tk&quot;
I'm trying to set the Activity (not sure of the proper term, see screenshot) name for a Tkinter app. I'm not a Linux expert, novice, or really even beginner, but the system about dialog lists my test machine as Fedora 30. Window Manager is Gnome. Code can't get much simpler: import tkinter as tk master=tk.Tk() master.title("My lil' Application") master.mainloop() This is most likely either a very simple or impossible thing to accomplish. I've not been able to come up with the magical search terms that result in an answer yet. This seems like something that someone else must have run into though which means the answer has to be out there. Thank you for your time. Solution from stovfl: import tkinter as tk master=tk.Tk(className="My lil' Application") master.title("My lil' Application") master.mainloop()
RHEL w/Tkinter &amp; Python3 - Changing the Activity name from &quot;Tk&quot; I'm trying to set the Activity (not sure of the proper term, see screenshot) name for a Tkinter app. I'm not a Linux expert, novice, or really even beginner, but the system about dialog lists my test machine as Fedora 30. Window Manager is Gnome. Code can't get much simpler: import tkinter as tk master=tk.Tk() master.title("My lil' Application") master.mainloop() This is most likely either a very simple or impossible thing to accomplish. I've not been able to come up with the magical search terms that result in an answer yet. This seems like something that someone else must have run into though which means the answer has to be out there. Thank you for your time. Solution from stovfl: import tkinter as tk master=tk.Tk(className="My lil' Application") master.title("My lil' Application") master.mainloop()
python, tkinter, rhel, window-managers
1
194
0
https://stackoverflow.com/questions/59144939/rhel-w-tkinter-python3-changing-the-activity-name-from-tk
58,418,767
How to mount a drive in Linux that a non-super user can write to
I am attempting to mount an Azure Storage container on a RHEL server that can be written to by a regular user account. I am not the most familiar with Linux, but the command seems simple: mount -t cifs <account name> /mnt/disk -o umask=<umask>,uid=<uid>,username=<Containers master username>,password="<password>",vers=3.0 But this is throwing errors, and I'm assuming it's a syntax error. I have been searching all over, but I haven't been able to find a good resource for this.
How to mount a drive in Linux that a non-super user can write to I am attempting to mount an Azure Storage container on a RHEL server that can be written to by a regular user account. I am not the most familiar with Linux, but the command seems simple: mount -t cifs <account name> /mnt/disk -o umask=<umask>,uid=<uid>,username=<Containers master username>,password="<password>",vers=3.0 But this is throwing errors, and I'm assuming it's a syntax error. I have been searching all over, but I haven't been able to find a good resource for this.
mount, rhel
1
124
1
https://stackoverflow.com/questions/58418767/how-to-mount-a-drive-in-linux-that-a-non-super-user-can-write-to
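For context on the command in that question: the cifs filesystem has no umask mount option, which by itself would make the mount fail; per-user write access on a CIFS/Azure Files share is normally granted with uid/gid plus file_mode/dir_mode. A hedged sketch, with the storage account, share name and numeric IDs as placeholders:

    # <account>, <share>, <uid> and <gid> are placeholders
    sudo mount -t cifs //<account>.file.core.windows.net/<share> /mnt/disk \
        -o vers=3.0,username=<account>,password='<storage-key>',uid=<uid>,gid=<gid>,dir_mode=0775,file_mode=0664

With uid and gid set to the regular account, that user can write to the mount without being root.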
58,105,841
Cannot run ansible on RHEL 7 - Paramiko is not installed
I have a server running RHEL 7, and I have installed ansible but cannot run a playbook with error saying paramiko is not installed. I have verified that paramiko is installed and also tried to install paramiko using pip but still does not work. TASK [Show the Connection] ************************************************************************************************************************************************** fatal: [ASA]: FAILED! => {"msg": "paramiko is not installed: No module named paramiko"} Below are the versions I have: sh-4.2$ sudo yum install ansible Package ansible-2.8.5-2.el7ae.noarch already installed and latest version sh-4.2$ sudo yum install python-paramiko Package python-paramiko-2.1.1-9.el7.noarch already installed and latest version sh-4.2$
Cannot run ansible on RHEL 7 - Paramiko is not installed I have a server running RHEL 7, and I have installed ansible but cannot run a playbook with error saying paramiko is not installed. I have verified that paramiko is installed and also tried to install paramiko using pip but still does not work. TASK [Show the Connection] ************************************************************************************************************************************************** fatal: [ASA]: FAILED! => {"msg": "paramiko is not installed: No module named paramiko"} Below are the versions I have: sh-4.2$ sudo yum install ansible Package ansible-2.8.5-2.el7ae.noarch already installed and latest version sh-4.2$ sudo yum install python-paramiko Package python-paramiko-2.1.1-9.el7.noarch already installed and latest version sh-4.2$
python, ansible, paramiko, rhel, rhel7
1
5,374
3
https://stackoverflow.com/questions/58105841/cannot-run-ansible-on-rhel-7-paramiko-is-not-installed
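A hedged diagnostic for this case: the python-paramiko RPM installs the module only for the system Python 2.7, so if Ansible is actually running under a different interpreter (a pip install under /usr/local, a virtualenv, or Python 3) it will not see it. Checking which interpreter Ansible uses and installing paramiko for that same interpreter usually clears the error:

    # show the interpreter ansible itself runs under
    ansible --version | grep "python version"

    # install paramiko for that interpreter (the path is an example)
    sudo /usr/bin/python2 -m pip install paramiko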
57,566,312
Data Migration to RDS?
I'm trying to use impdp on RHEL without a GUI. I'm not sure if my direction is correct. Since RHEL has no GUI, what I did was install SQLcl from [URL] Basically my objective is to migrate an on-premise Oracle database to RDS, and this RDS can only be accessed by a particular EC2 instance. In other words, I would need to find a way to perform the data migration by SSHing into this EC2 instance. Wondering if anyone can shed some light on whether my direction is correct? Or is there any simpler way to do it?
Data Migration to RDS? I'm trying to use impdp on RHEL without a GUI. I'm not sure if my direction is correct. Since RHEL has no GUI, what I did was install SQLcl from [URL] Basically my objective is to migrate an on-premise Oracle database to RDS, and this RDS can only be accessed by a particular EC2 instance. In other words, I would need to find a way to perform the data migration by SSHing into this EC2 instance. Wondering if anyone can shed some light on whether my direction is correct? Or is there any simpler way to do it?
database, oracle-database, amazon-web-services, migration, rhel
1
285
1
https://stackoverflow.com/questions/57566312/data-migration-to-rds
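For the question above, one documented route is a network-mode Data Pump import: a database link is created on the RDS instance pointing back at the on-premise database, and impdp is then run from the EC2 instance (with an Oracle client installed) against the RDS endpoint, so no dump file has to be copied around. A hedged sketch of the impdp call only; the TNS alias, link name and schema are placeholders, and the database link must already exist on the RDS side:

    # run on the EC2 instance that can reach the RDS endpoint
    impdp admin@rds_alias schemas=MYSCHEMA network_link=ONPREM_LINK \
        directory=DATA_PUMP_DIR logfile=import_myschema.log

AWS also documents an S3 plus DBMS_DATAPUMP flow for larger datasets, so both are worth comparing before committing.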
57,189,611
fatal: repository does not exist when cloning shared repository
I'm trying to create a shared bare repository from an existing project so that multiple developers within the same group can have access. I have cloned the repository located in my project's directory to a bare shared repository in another centralized directory on the same machine and am able to create a clone of bare repo myself, however the clone fails with a fatal: repository does not exist error when logged in as another user within the group. I've tried following the instructions on the git website for creating a bare repository and for cloning a repository. [URL] [URL] Based on the instructions, I used the following commands to create the bare repository and make it shared. $ git clone --bare /path/to/my_dir/my_project /path/to/repos/project.git $ cd /path/to/repos/project.git $ git init --bare --shared I am able to successfully make a clone using the following commands when I'm logged in as myself. $ cd /path/to/their_dir $ git clone /path/to/repos/project.git their_project However, when logged in as a different user, the following error occurs when trying to use the same commands, even though I can do an ls on and cd into the project.git directory as the other user. fatal: repository '/path/to/repos/project.git' does not exist As far as permissions go, my account is listed as the owner of the repository and it's files, and the other user belongs to the group that is associated with the files. $ ls -l /path/to/repos drwxrws---. 6 me devs 4096 Jul 24 12:53 project.git $ ls -l /path/to/repos/project.git drwx------. 2 me devs 4096 Jul 24 12:46 branches -rw-rw----. 1 me devs 177 Jul 24 12:53 config -rw-------. 1 me devs 73 Jul 24 12:46 description -rw-------. 1 me devs 23 Jul 24 12:46 HEAD drwx------. 2 me devs 4096 Jul 24 12:52 hooks drwx------. 2 me devs 4096 Jul 24 12:46 info drwx------. 104 me devs 4096 Jul 24 12:46 objects -rw-------. 1 me devs 98 Jul 24 12:46 packed-refs drwxrws---. 4 me devs 4096 Jul 24 12:46 refs How can I fix this so that another user can clone the repository while logged in under his or her account?
fatal: repository does not exist when cloning shared repository I'm trying to create a shared bare repository from an existing project so that multiple developers within the same group can have access. I have cloned the repository located in my project's directory to a bare shared repository in another centralized directory on the same machine and am able to create a clone of bare repo myself, however the clone fails with a fatal: repository does not exist error when logged in as another user within the group. I've tried following the instructions on the git website for creating a bare repository and for cloning a repository. [URL] [URL] Based on the instructions, I used the following commands to create the bare repository and make it shared. $ git clone --bare /path/to/my_dir/my_project /path/to/repos/project.git $ cd /path/to/repos/project.git $ git init --bare --shared I am able to successfully make a clone using the following commands when I'm logged in as myself. $ cd /path/to/their_dir $ git clone /path/to/repos/project.git their_project However, when logged in as a different user, the following error occurs when trying to use the same commands, even though I can do an ls on and cd into the project.git directory as the other user. fatal: repository '/path/to/repos/project.git' does not exist As far as permissions go, my account is listed as the owner of the repository and it's files, and the other user belongs to the group that is associated with the files. $ ls -l /path/to/repos drwxrws---. 6 me devs 4096 Jul 24 12:53 project.git $ ls -l /path/to/repos/project.git drwx------. 2 me devs 4096 Jul 24 12:46 branches -rw-rw----. 1 me devs 177 Jul 24 12:53 config -rw-------. 1 me devs 73 Jul 24 12:46 description -rw-------. 1 me devs 23 Jul 24 12:46 HEAD drwx------. 2 me devs 4096 Jul 24 12:52 hooks drwx------. 2 me devs 4096 Jul 24 12:46 info drwx------. 104 me devs 4096 Jul 24 12:46 objects -rw-------. 1 me devs 98 Jul 24 12:46 packed-refs drwxrws---. 4 me devs 4096 Jul 24 12:46 refs How can I fix this so that another user can clone the repository while logged in under his or her account?
linux, git, rhel
1
1,325
0
https://stackoverflow.com/questions/57189611/fatal-repository-does-not-exist-when-cloning-shared-repository
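The directory listing above points at the likely cause: git init --shared only affects files git writes afterwards, so the objects/, hooks/, info/ and branches/ trees created by the earlier clone are still mode 0700/0600 with no group access, and git reports an unreadable repository as "does not exist". A hedged cleanup, assuming the devs group is the one that should have access:

    cd /path/to/repos/project.git
    sudo chgrp -R devs .
    sudo chmod -R g+rwX .                      # group read/write, execute on directories
    sudo find . -type d -exec chmod g+s {} +   # new files inherit the devs group
    git config core.sharedRepository group     # future objects stay group-accessible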
56,494,551
Can't find driver for sqlsrv when running a cron job
I run a Laravel application on a RHEL server. The database is on an external SQL SERVER. When running the web application there's no problem with database connection. When running the CRON job, which calls $ php artisan schedule:run the application can't find the driver for Sql server. Illuminate\Database\QueryException : could not find driver PHP Startup: Unable to load dynamic library 'sqlsrv.so
Can't find driver for sqlsrv when running a cron job I run a Laravel application on a RHEL server. The database is on an external SQL SERVER. When running the web application there's no problem with database connection. When running the CRON job, which calls $ php artisan schedule:run the application can't find the driver for Sql server. Illuminate\Database\QueryException : could not find driver PHP Startup: Unable to load dynamic library 'sqlsrv.so
php, laravel, pdo, rhel, sqlsrv
1
1,177
2
https://stackoverflow.com/questions/56494551/cant-find-driver-for-sqlsrv-when-running-a-cron-job
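A hedged explanation for this pattern: the web SAPI and the CLI php that cron invokes can be different builds reading different php.ini files, so sqlsrv being loaded for the web server says nothing about the binary cron runs. Checking which binary and ini cron's environment uses, and pinning the crontab to the PHP that has the extension (or loading it explicitly), is the usual fix; the paths below are examples:

    # from a shell similar to cron's environment
    which php && php --ini && php -m | grep -i sqlsrv

    # crontab entry calling a specific binary and forcing the extensions
    * * * * * /usr/bin/php -d extension=sqlsrv.so -d extension=pdo_sqlsrv.so /path/to/artisan schedule:run >> /dev/null 2>&1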
56,105,563
Error: repository epel, mysql, rhela and webtatic is listed more than once in the configuration
I currently have php53 installed redhat version 6.8 [rh@StorServIN ~]$ php -v PHP Warning: Module 'pgsql' already loaded in Unknown on line 0 PHP 5.3.3 (cli) (built: Dec 15 2015 04:52:58) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies but when I want to upgrade php 5.3 to php 7.2 in redhat 6.8, following error comes out: [rh@StorServIN etc]$ sudo yum install [URL] Loaded plugins: product-id, refresh-packagekit, search-disabled-repos, subscription-manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Setting up Install Process Repository epel-debuginfo is listed more than once in the configuration Repository epel-source is listed more than once in the configuration Repository mysql57-community is listed more than once in the configuration Repository rhel-source is listed more than once in the configuration Repository webtatic is listed more than once in the configuration Repository webtatic-debuginfo is listed more than once in the configuration Repository webtatic-source is listed more than once in the configuration epel-release-latest-6.noarch.rpm | 14 kB 00:00 Examining /var/tmp/yum-root-Pdnwyw/epel-release-latest-6.noarch.rpm: epel-release-6-8.noarch /var/tmp/yum-root-Pdnwyw/epel-release-latest-6.noarch.rpm: does not update installed package. Error: Nothing to do I'm new in RHEL. How can I solve this?
Error: repository epel, mysql, rhela and webtatic is listed more than once in the configuration I currently have php53 installed redhat version 6.8 [rh@StorServIN ~]$ php -v PHP Warning: Module 'pgsql' already loaded in Unknown on line 0 PHP 5.3.3 (cli) (built: Dec 15 2015 04:52:58) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies but when I want to upgrade php 5.3 to php 7.2 in redhat 6.8, following error comes out: [rh@StorServIN etc]$ sudo yum install [URL] Loaded plugins: product-id, refresh-packagekit, search-disabled-repos, subscription-manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Setting up Install Process Repository epel-debuginfo is listed more than once in the configuration Repository epel-source is listed more than once in the configuration Repository mysql57-community is listed more than once in the configuration Repository rhel-source is listed more than once in the configuration Repository webtatic is listed more than once in the configuration Repository webtatic-debuginfo is listed more than once in the configuration Repository webtatic-source is listed more than once in the configuration epel-release-latest-6.noarch.rpm | 14 kB 00:00 Examining /var/tmp/yum-root-Pdnwyw/epel-release-latest-6.noarch.rpm: epel-release-6-8.noarch /var/tmp/yum-root-Pdnwyw/epel-release-latest-6.noarch.rpm: does not update installed package. Error: Nothing to do I'm new in RHEL. How can I solve this?
php, repository, upgrade, rhel
1
37
0
https://stackoverflow.com/questions/56105563/error-repository-epel-mysql-rhela-and-webtatic-is-listed-more-than-once-in-th
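Two separate things appear in that output: the "listed more than once" warnings mean some repository ids are defined in two different .repo files under /etc/yum.repos.d, and the final "Nothing to do" only says that epel-release is already installed. A hedged way to find the duplicate definitions before retrying the PHP 7.2 (webtatic) install:

    # every [repoid] section and the file it lives in; ids appearing twice are the duplicates
    grep -H '^\[' /etc/yum.repos.d/*.repo | sort

    # example cleanup: the filename is a placeholder for whichever copy is redundant
    sudo mv /etc/yum.repos.d/duplicate-copy.repo /root/
    sudo yum clean all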
55,868,885
How can I use a variable in if condition in expect script?
I am executing expect script on a remote host(say A) and I want to fetch some environment variables from that remote host(A). Depending on the remote host's(A) environment variables, I would like to perform some conditional operations on that host(A) and on the host(B) from which the expect script is being run. I could fetch the remote variables and set values in the remote variables. I couldn't execute the if condition as may be I am having issues figuring out the right syntax/format. Tried some from the google references but couldn't really get any closer to a working solution. send "export vers=rpm -q --queryformat '%{RELEASE}' rpm | grep -o '.$'\r" send "echo \$vers\r" expect -re $prompt send "if [[ \$vers -lt 7 ]]; then echo 'RHEL Version is \$vers'; else echo 'RHEL Version is \$vers'; fi\r" Getting below error: invalid command name "$vers" while executing "\$vers -lt 7 " invoked from within "[ \$vers -lt 7 ]" invoked from within "send "if [[ \$vers -lt 7 ]]; then echo 'RHEL Version is \$vers'; else echo 'RHEL Version is \$vers'; fi\r"" Need the expect script to execute "if" condition correctly and pass the value to the remote host and my local.
How can I use a variable in if condition in expect script? I am executing expect script on a remote host(say A) and I want to fetch some environment variables from that remote host(A). Depending on the remote host's(A) environment variables, I would like to perform some conditional operations on that host(A) and on the host(B) from which the expect script is being run. I could fetch the remote variables and set values in the remote variables. I couldn't execute the if condition as may be I am having issues figuring out the right syntax/format. Tried some from the google references but couldn't really get any closer to a working solution. send "export vers=rpm -q --queryformat '%{RELEASE}' rpm | grep -o '.$'\r" send "echo \$vers\r" expect -re $prompt send "if [[ \$vers -lt 7 ]]; then echo 'RHEL Version is \$vers'; else echo 'RHEL Version is \$vers'; fi\r" Getting below error: invalid command name "$vers" while executing "\$vers -lt 7 " invoked from within "[ \$vers -lt 7 ]" invoked from within "send "if [[ \$vers -lt 7 ]]; then echo 'RHEL Version is \$vers'; else echo 'RHEL Version is \$vers'; fi\r"" Need the expect script to execute "if" condition correctly and pass the value to the remote host and my local.
bash, expect, rhel
1
968
2
https://stackoverflow.com/questions/55868885/how-can-i-use-a-variable-in-if-condition-in-expect-script
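The error in that trace comes from Tcl itself, not from the remote shell: inside a double-quoted send string, square brackets are Tcl command substitution, so expect tries to evaluate $vers -lt 7 locally before anything is sent. Escaping the brackets, while keeping the dollar signs escaped so they still reach the remote bash, is one hedged rewrite of that single line:

    send "if \[\[ \$vers -lt 7 \]\]; then echo 'RHEL version \$vers is older than 7'; else echo 'RHEL version is \$vers'; fi\r"

Wrapping the whole command in braces, which suppresses all Tcl substitution, followed by a separate send "\r", is an equivalent alternative.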
55,648,365
Can I get a working curl command to remove a system from RHEL subscription?
I want to automate the addition and removal of VMs from the RHEL Subscription. I want to use a curl command if possible and keep it simple. I tried executing curl commands on the api.access.redhat.com/management/v1/subscriptions endpoints but it is giving errors like "Authentication parameters missing". Below is an example command I am using: curl -X GET -s -k -u username:Password "[URL] -H "accept: application/json" Expected to see the list of Subscribed systems but getting the "Authentication parameters missing" message.
Can I get a working curl command to remove a system from RHEL subscription? I want to automate the addition and removal of VMs from the RHEL Subscription. I want to use a curl command if possible and keep it simple. I tried executing curl commands on the api.access.redhat.com/management/v1/subscriptions endpoints but it is giving errors like "Authentication parameters missing". Below is an example command I am using: curl -X GET -s -k -u username:Password "[URL] -H "accept: application/json" Expected to see the list of Subscribed systems but getting the "Authentication parameters missing" message.
redhat, rhel
1
262
1
https://stackoverflow.com/questions/55648365/can-i-get-a-working-curl-command-to-remove-a-system-from-rhel-subscription
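On the record above: the management API does not accept portal username/password basic auth, which is why it reports missing authentication parameters; it expects a bearer token obtained from an offline token generated in the customer portal. A hedged sketch follows; the token endpoint is the one documented for the RHSM API, but the management paths should be double-checked against the current API listing, and jq is assumed to be installed:

    # exchange the offline token for a short-lived access token
    TOKEN=$(curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
        -d grant_type=refresh_token -d client_id=rhsm-api -d refresh_token="$OFFLINE_TOKEN" | jq -r .access_token)

    # list registered systems, then remove one by UUID (placeholder)
    curl -s -H "Authorization: Bearer $TOKEN" https://api.access.redhat.com/management/v1/systems
    curl -s -X DELETE -H "Authorization: Bearer $TOKEN" https://api.access.redhat.com/management/v1/systems/<system-uuid>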
55,608,101
Java code OK in AIX 3.5, fails in RHEL 7.5: It makes corrupt DOC, DOCX, XLS, XLSX files but good HTML files
We have a Java process (see below) to generate DOC, DOCX, XLS, XLSX, and HTML and save it to our Linux PCs. It ran well in our old AIX 3.5 machine; but in our new RHEL 7.5 machine, Microsoft proprietary document formats (DOCX, DOC, XLSX, and XLS) it outputs are corrupt. HTML files are not corrupt. I have downloaded the output files into a Windows PC and try to open them; Exception errors say the DOC/DOCX/XLS/XSLX files are corrupt and cannot be opened. The Java code works well in the old Linux PC. I am assuming there might some libraries or software we need to install in the new PC. Below is my java code (Process to generate the file) int BUFFER_SIZE=1024*256; int bytesRead=0; Resultset rs=obj.resultSet(); DocumentClass = new DocumentClass(); ByteArrayOutputStream baos = new ByteArrayOutputStream(); byte[] buffer = new byte[BUFFER_SIZE]; BufferedInputStream bis = new BufferedInputStream(rs.getBinaryStream(1)); try { bytesRead = bis.read(buffer,0,BUFFER_SIZE); while (bytesRead > 0) { baos.write(buffer, 0, bytesRead); buffer = new byte[BUFFER_SIZE]; bytesRead = bis.read(buffer,0,BUFFER_SIZE); } } catch (IOException io ) { System.out.println(io.getMessage()); } DocumentClass.setFileBody(baos); BufferedWriter CreateDoc = new BufferedWriter(new FileWriter("/usr/Test.docx")); CreateDoc.write(DocumentClass.getFileBody().toString()); CreateDoc.close();
Java code OK in AIX 3.5, fails in RHEL 7.5: It makes corrupt DOC, DOCX, XLS, XLSX files but good HTML files We have a Java process (see below) to generate DOC, DOCX, XLS, XLSX, and HTML and save it to our Linux PCs. It ran well in our old AIX 3.5 machine; but in our new RHEL 7.5 machine, Microsoft proprietary document formats (DOCX, DOC, XLSX, and XLS) it outputs are corrupt. HTML files are not corrupt. I have downloaded the output files into a Windows PC and try to open them; Exception errors say the DOC/DOCX/XLS/XSLX files are corrupt and cannot be opened. The Java code works well in the old Linux PC. I am assuming there might some libraries or software we need to install in the new PC. Below is my java code (Process to generate the file) int BUFFER_SIZE=1024*256; int bytesRead=0; Resultset rs=obj.resultSet(); DocumentClass = new DocumentClass(); ByteArrayOutputStream baos = new ByteArrayOutputStream(); byte[] buffer = new byte[BUFFER_SIZE]; BufferedInputStream bis = new BufferedInputStream(rs.getBinaryStream(1)); try { bytesRead = bis.read(buffer,0,BUFFER_SIZE); while (bytesRead > 0) { baos.write(buffer, 0, bytesRead); buffer = new byte[BUFFER_SIZE]; bytesRead = bis.read(buffer,0,BUFFER_SIZE); } } catch (IOException io ) { System.out.println(io.getMessage()); } DocumentClass.setFileBody(baos); BufferedWriter CreateDoc = new BufferedWriter(new FileWriter("/usr/Test.docx")); CreateDoc.write(DocumentClass.getFileBody().toString()); CreateDoc.close();
rhel, java, aix
1
92
0
https://stackoverflow.com/questions/55608101/java-code-ok-in-aix-3-5-fails-in-rhel-7-5-it-makes-corrupt-doc-docx-xls-xls
55,502,674
Installing a specific version of GDAL within a Docker container?
I am trying to setup a Docker container for a Django site, but it relies on a version of GDAL that does not appear to be installable from the command line (1.8.1). In order to install this locally, I have to first run Configure and Make in the source folder. But I cannot find information online on how to run multi-step processes like this (perhaps running a bash script?) How do I instruct the Dockerfile to Configure/Make/Install from source? Also, is there a way to avoid having to containerize the source for this process? Specifics: Docker running Python2.7 alpine Django site RHEL 7.6 Thanks in advance UPDATE: Here's a specific log from what happens while trying to run 'pip install gdal==1.8.1' in the Dockerfile without GDAL installed Collecting GDAL==1.8.1 (from -r requirements.txt (line 45)) Downloading [URL] (400kB) Complete output from command python setup.py egg_info: running egg_info creating pip-egg-info/GDAL.egg-info writing pip-egg-info/GDAL.egg-info/PKG-INFO writing top-level names to pip-egg-info/GDAL.egg-info/top_level.txt writing dependency_links to pip-egg-info/GDAL.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/GDAL.egg-info/SOURCES.txt' Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-htaYeH/GDAL/setup.py", line 263, in <module> ext_modules = ext_modules ) File "/usr/local/lib/python2.7/site-packages/setuptools/__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "/usr/local/lib/python2.7/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/local/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 296, in run self.find_sources() File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 303, in find_sources mm.run() File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 534, in run self.add_defaults() File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 570, in add_defaults sdist.add_defaults(self) File "/usr/local/lib/python2.7/site-packages/setuptools/command/py36compat.py", line 36, in add_defaults self._add_defaults_ext() File "/usr/local/lib/python2.7/site-packages/setuptools/command/py36compat.py", line 119, in _add_defaults_ext build_ext = self.get_finalized_command('build_ext') File "/usr/local/lib/python2.7/distutils/cmd.py", line 312, in get_finalized_command cmd_obj.ensure_finalized() File "/usr/local/lib/python2.7/distutils/cmd.py", line 109, in ensure_finalized self.finalize_options() File "/tmp/pip-install-htaYeH/GDAL/setup.py", line 160, in finalize_options self.gdaldir = self.get_gdal_config('prefix') File "/tmp/pip-install-htaYeH/GDAL/setup.py", line 140, in get_gdal_config return fetch_config(option) File "/tmp/pip-install-htaYeH/GDAL/setup.py", line 89, in fetch_config raise gdal_config_error, e __main__.gdal_config_error: [Errno 2] No such file or directory
Installing a specific version of GDAL within a Docker container? I am trying to setup a Docker container for a Django site, but it relies on a version of GDAL that does not appear to be installable from the command line (1.8.1). In order to install this locally, I have to first run Configure and Make in the source folder. But I cannot find information online on how to run multi-step processes like this (perhaps running a bash script?) How do I instruct the Dockerfile to Configure/Make/Install from source? Also, is there a way to avoid having to containerize the source for this process? Specifics: Docker running Python2.7 alpine Django site RHEL 7.6 Thanks in advance UPDATE: Here's a specific log from what happens while trying to run 'pip install gdal==1.8.1' in the Dockerfile without GDAL installed Collecting GDAL==1.8.1 (from -r requirements.txt (line 45)) Downloading [URL] (400kB) Complete output from command python setup.py egg_info: running egg_info creating pip-egg-info/GDAL.egg-info writing pip-egg-info/GDAL.egg-info/PKG-INFO writing top-level names to pip-egg-info/GDAL.egg-info/top_level.txt writing dependency_links to pip-egg-info/GDAL.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/GDAL.egg-info/SOURCES.txt' Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-htaYeH/GDAL/setup.py", line 263, in <module> ext_modules = ext_modules ) File "/usr/local/lib/python2.7/site-packages/setuptools/__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "/usr/local/lib/python2.7/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/local/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 296, in run self.find_sources() File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 303, in find_sources mm.run() File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 534, in run self.add_defaults() File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 570, in add_defaults sdist.add_defaults(self) File "/usr/local/lib/python2.7/site-packages/setuptools/command/py36compat.py", line 36, in add_defaults self._add_defaults_ext() File "/usr/local/lib/python2.7/site-packages/setuptools/command/py36compat.py", line 119, in _add_defaults_ext build_ext = self.get_finalized_command('build_ext') File "/usr/local/lib/python2.7/distutils/cmd.py", line 312, in get_finalized_command cmd_obj.ensure_finalized() File "/usr/local/lib/python2.7/distutils/cmd.py", line 109, in ensure_finalized self.finalize_options() File "/tmp/pip-install-htaYeH/GDAL/setup.py", line 160, in finalize_options self.gdaldir = self.get_gdal_config('prefix') File "/tmp/pip-install-htaYeH/GDAL/setup.py", line 140, in get_gdal_config return fetch_config(option) File "/tmp/pip-install-htaYeH/GDAL/setup.py", line 89, in fetch_config raise gdal_config_error, e __main__.gdal_config_error: [Errno 2] No such file or directory
django, docker, makefile, gdal, rhel
1
1,669
1
https://stackoverflow.com/questions/55502674/installing-a-specific-version-of-gdal-within-a-docker-container
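On the Configure/Make part of that question: multi-step builds in a Dockerfile are normally written as one RUN instruction whose shell commands are chained with &&, or as a small build script COPYed into the image and executed. A hedged sketch of the shell steps such a RUN would chain for GDAL 1.8.1 on an Alpine-based Python 2.7 image; the download URL and package names are assumptions to verify, and a release this old may need patches to compile with a modern toolchain:

    # build dependencies (Alpine)
    apk add --no-cache build-base linux-headers curl
    # fetch, build and install GDAL so that gdal-config ends up on PATH
    curl -fsSL http://download.osgeo.org/gdal/gdal-1.8.1.tar.gz | tar -xz -C /tmp
    cd /tmp/gdal-1.8.1 && ./configure --prefix=/usr && make && make install
    # the pip build failed above because gdal-config was missing; it should now succeed
    pip install GDAL==1.8.1

The source tree does not need to stay in the image: removing /tmp/gdal-1.8.1 afterwards, or using a multi-stage build, keeps the final image small.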
53,315,289
Why is yum re-creating default nginx config file on yum update on RHEL7?
I installed nginx via yum package on RHEL7. I added my config as /etc/nginx/conf.d/my.conf and deleted the config file shipped with the package /etc/nginx/conf.d/default.conf Recently, nginx package was updated via yum update. Now, the default.conf file is present again. I would have expected that yum doesn't touch default config files if they were changed or deleted. Is this normal yum behavior? Here some information about the RHEL version and nginx package. root@host: [~]# yum info nginx Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos This system is receiving updates from RHN Classic or Red Hat Satellite. Installed Packages Name : nginx Arch : x86_64 Epoch : 1 Version : 1.14.1 Release : 1.el7_4.ngx Size : 2.6 M Repo : installed From repo : nginx Summary : High performance web server URL : [URL] License : 2-clause BSD-like license Description : nginx [engine x] is an HTTP and reverse proxy server, as well as : a mail proxy server. I upgrade the package from 1.14.0 to version 1.14.1 shown above. root@host: [~]# nginx -v nginx version: nginx/1.14.1 Redhat version: root@host: [~]# hostnamectl Static hostname: host.example.com Icon name: computer-vm Chassis: vm Machine ID: SOME-ID Boot ID: ANOTHER-ID Virtualization: vmware Operating System: Red Hat Enterprise Linux CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:GA:server Kernel: Linux 3.10.0-862.14.4.el7.x86_64 Architecture: x86-64 If I rename my.conf to default.conf, it doesn't get replaced on a yum update.
Why is yum re-creating default nginx config file on yum update on RHEL7? I installed nginx via yum package on RHEL7. I added my config as /etc/nginx/conf.d/my.conf and deleted the config file shipped with the package /etc/nginx/conf.d/default.conf Recently, nginx package was updated via yum update. Now, the default.conf file is present again. I would have expected that yum doesn't touch default config files if they were changed or deleted. Is this normal yum behavior? Here some information about the RHEL version and nginx package. root@host: [~]# yum info nginx Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos This system is receiving updates from RHN Classic or Red Hat Satellite. Installed Packages Name : nginx Arch : x86_64 Epoch : 1 Version : 1.14.1 Release : 1.el7_4.ngx Size : 2.6 M Repo : installed From repo : nginx Summary : High performance web server URL : [URL] License : 2-clause BSD-like license Description : nginx [engine x] is an HTTP and reverse proxy server, as well as : a mail proxy server. I upgrade the package from 1.14.0 to version 1.14.1 shown above. root@host: [~]# nginx -v nginx version: nginx/1.14.1 Redhat version: root@host: [~]# hostnamectl Static hostname: host.example.com Icon name: computer-vm Chassis: vm Machine ID: SOME-ID Boot ID: ANOTHER-ID Virtualization: vmware Operating System: Red Hat Enterprise Linux CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:GA:server Kernel: Linux 3.10.0-862.14.4.el7.x86_64 Architecture: x86-64 If I rename my.conf to default.conf, it doesn't get replaced on a yum update.
nginx, centos, yum, rhel, rhel7
1
461
0
https://stackoverflow.com/questions/53315289/why-is-yum-re-creating-default-nginx-config-file-on-yum-update-on-rhel7
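This is standard RPM behaviour rather than anything nginx-specific: a file marked %config(noreplace) is preserved when it has been modified, but one that has been deleted outright is reinstalled by the next upgrade of the package. A hedged workaround is to keep default.conf in place but empty instead of removing it:

    # an empty (i.e. modified) default.conf is left alone by future upgrades
    sudo truncate -s 0 /etc/nginx/conf.d/default.conf
    sudo nginx -t && sudo systemctl reload nginx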
52,264,818
RuntimeError: Unable to initialize SecretService: Environment variable DBUS_SESSION_BUS_ADDRESS is unset
Running into issue with keyring: RuntimeError: Unable to initialize SecretService: Environment variable DBUS_SESSION_BUS_ADDRESS is unset SecretService is installed. OS is RHEL Running: python -c "import keyring.backends.SecretService as SS; SS.Keyring.priority" gives the following error: Traceback (most recent call last): File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/secretstorage/__init__.py", line 41, in dbus_init return connect_and_authenticate() File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/jeepney/integrate/blocking.py", line 70, in connect_and_authenticate bus_addr = get_bus(bus) File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/jeepney/bus.py", line 53, in get_bus return find_session_bus() File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/jeepney/bus.py", line 42, in find_session_bus addr = os.environ['DBUS_SESSION_BUS_ADDRESS'] File "/home/webpage/.pyenv/versions/3.6.5/lib/python3.6/os.py", line 669, in __getitem__ raise KeyError(key) from None KeyError: 'DBUS_SESSION_BUS_ADDRESS' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/keyring/backends/SecretService.py", line 37, in priority bus = secretstorage.dbus_init() File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/secretstorage/__init__.py", line 45, in dbus_init raise SecretServiceNotAvailableException(reason) from ex secretstorage.exceptions.SecretServiceNotAvailableException: Environment variable DBUS_SESSION_BUS_ADDRESS is unset During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/keyring/util/properties.py", line 26, in __get__ return self.fget.__get__(None, owner)() File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/keyring/backends/SecretService.py", line 41, in priority "Unable to initialize SecretService: %s" % e) RuntimeError: Unable to initialize SecretService: Environment variable DBUS_SESSION_BUS_ADDRESS is unset How can I fix this?
RuntimeError: Unable to initialize SecretService: Environment variable DBUS_SESSION_BUS_ADDRESS is unset Running into issue with keyring: RuntimeError: Unable to initialize SecretService: Environment variable DBUS_SESSION_BUS_ADDRESS is unset SecretService is installed. OS is RHEL Running: python -c "import keyring.backends.SecretService as SS; SS.Keyring.priority" gives the following error: Traceback (most recent call last): File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/secretstorage/__init__.py", line 41, in dbus_init return connect_and_authenticate() File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/jeepney/integrate/blocking.py", line 70, in connect_and_authenticate bus_addr = get_bus(bus) File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/jeepney/bus.py", line 53, in get_bus return find_session_bus() File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/jeepney/bus.py", line 42, in find_session_bus addr = os.environ['DBUS_SESSION_BUS_ADDRESS'] File "/home/webpage/.pyenv/versions/3.6.5/lib/python3.6/os.py", line 669, in __getitem__ raise KeyError(key) from None KeyError: 'DBUS_SESSION_BUS_ADDRESS' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/keyring/backends/SecretService.py", line 37, in priority bus = secretstorage.dbus_init() File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/secretstorage/__init__.py", line 45, in dbus_init raise SecretServiceNotAvailableException(reason) from ex secretstorage.exceptions.SecretServiceNotAvailableException: Environment variable DBUS_SESSION_BUS_ADDRESS is unset During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/keyring/util/properties.py", line 26, in __get__ return self.fget.__get__(None, owner)() File "/home/webpage/.pyenv/versions/WEBPAGE/lib/python3.6/site-packages/keyring/backends/SecretService.py", line 41, in priority "Unable to initialize SecretService: %s" % e) RuntimeError: Unable to initialize SecretService: Environment variable DBUS_SESSION_BUS_ADDRESS is unset How can I fix this?
python, rhel, python-keyring
1
4,962
3
https://stackoverflow.com/questions/52264818/runtimeerror-unable-to-initialize-secretservice-environment-variable-dbus-sess
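A closing note on that traceback: the SecretService backend needs a D-Bus session bus plus a running secret service such as gnome-keyring, which a headless RHEL shell typically does not have. Two hedged workarounds are to give the process its own session bus, or to point keyring at a backend that does not use D-Bus at all (the keyrings.alt file backend stores secrets unencrypted, so treat it accordingly); script names are placeholders:

    # option 1: run the program inside a private D-Bus session
    # (a secret service daemon such as gnome-keyring-daemon must also be started inside it)
    dbus-run-session -- python your_script.py

    # option 2: select a non-D-Bus backend via the environment
    pip install keyrings.alt
    export PYTHON_KEYRING_BACKEND=keyrings.alt.file.PlaintextKeyring
    python your_script.py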