Dataset columns (with observed value/length ranges):

    question_id   int64    82.3k - 79.7M
    title_clean   string   length 15 - 158
    body_clean    string   length 62 - 28.5k
    full_text     string   length 95 - 28.5k
    tags          string   length 4 - 80
    score         int64    0 - 1.15k
    view_count    int64    22 - 1.62M
    answer_count  int64    0 - 30
    link          string   length 58 - 125
41,973,713
HTTPD server not starting after disabling these modules mod_include, mod_info, mod_autoindex and mod_userdir
According to the article "4. Disable Unnecessary Modules" (Ref: [URL]), it's always good to minimize the chances of being a victim of a web attack, so it's recommended to disable all modules that are not currently in use. I disabled these modules: mod_imap, mod_include, mod_info, mod_userdir, mod_autoindex. After that, the httpd server is not restarting. Can you please help me find the issue? I didn't get any errors in error_log or access_log. I got the following response when trying to restart:

    ● httpd.service - The Apache HTTP Server
       Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
       Active: failed (Result: exit-code) since Wed 2017-02-01 10:02:08 CET; 1min 15s ago
         Docs: man:httpd(8)
               man:apachectl(8)
      Process: 58603 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=1/FAILURE)
      Process: 58601 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
     Main PID: 58601 (code=exited, status=1/FAILURE)

apachectl configtest reports:

    AH00526: Syntax error on line 16 of /etc/httpd/conf.d/autoindex.conf:
    Invalid command 'IndexOptions', perhaps misspelled or defined by a module
    not included in the server configuration

The relevant part of /etc/httpd/conf.d/autoindex.conf:

    #
    # Directives controlling the display of server-generated directory listings.
    #
    # Required modules: mod_authz_core, mod_authz_host,
    #                   mod_autoindex, mod_alias
    #
    # To see the listing of a directory, the Options directive for the
    # directory must include "Indexes", and the directory must not contain
    # a file matching those listed in the DirectoryIndex directive.
    #

    #
    # IndexOptions: Controls the appearance of server-generated directory
    # listings.
    #
    IndexOptions FancyIndexing HTMLTable VersionSort

    # We include the /icons/ alias for FancyIndexed directory listings. If
    # you do not use FancyIndexing, you may comment this out.
    #
    Alias /icons/ "/usr/share/httpd/icons/"

    <Directory "/usr/share/httpd/icons">
        Options Indexes MultiViews FollowSymlinks
        AllowOverride None
        Require all granted
    </Directory>

The error points at this line, and I don't know what went wrong:

    IndexOptions FancyIndexing HTMLTable VersionSort
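A common way to make these stock conf.d snippets survive a module being disabled (a sketch, not the only possible fix) is to guard the module-specific directives with an IfModule block, or simply rename the file so httpd stops loading it:

```apacheconf
# /etc/httpd/conf.d/autoindex.conf -- guarded version (sketch).
# With mod_autoindex disabled, everything inside this block is skipped
# instead of triggering "Invalid command 'IndexOptions'".
<IfModule mod_autoindex.c>
    IndexOptions FancyIndexing HTMLTable VersionSort

    Alias /icons/ "/usr/share/httpd/icons/"

    <Directory "/usr/share/httpd/icons">
        Options Indexes MultiViews FollowSymlinks
        AllowOverride None
        Require all granted
    </Directory>
</IfModule>
```

Alternatively, `mv /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/autoindex.conf.disabled` keeps the file around for later. Either way, run `apachectl configtest` again before restarting the service.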
php, apache, httpd.conf, rhel
1
9,074
1
https://stackoverflow.com/questions/41973713/httpd-server-not-starting-after-disabling-these-modules-mod-include-mod-info-m
41,761,812
ORA-01804 error while trying to load "Oracle" dbDriver
I'm using ROracle and run the following commands in R:

    Sys.getenv()
    drv <- dbDriver("Oracle")

And here is the error I obtain after this last line:

    Error in .oci.Driver(.oci.drv(), interruptible = interruptible, unicode_as_utf8 = unicode_as_utf8, :
      Error while trying to retrieve text for error ORA-01804

I'm on RStudio Server working on a RHEL 5 server. How could I avoid this error?
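ORA-01804 means the Oracle client failed to initialize its timezone information, which on Linux usually traces back to ORACLE_HOME (or LD_LIBRARY_PATH) not being visible to the R process; note that RStudio Server does not inherit your login shell's environment. A minimal sketch, assuming an Instant Client install (the path below is an assumption, adjust it to your machine):

```shell
# Assumed Instant Client location -- replace with your actual install path.
export ORACLE_HOME=/usr/lib/oracle/11.2/client64
export LD_LIBRARY_PATH="$ORACLE_HOME/lib:$LD_LIBRARY_PATH"

# Sanity check before launching R / RStudio Server:
if [ -z "$ORACLE_HOME" ]; then
    echo "ORACLE_HOME is not set"
else
    echo "ORACLE_HOME=$ORACLE_HOME"
fi
```

For RStudio Server specifically, environment variables are typically set in Renviron.site or in the service's startup environment rather than in a shell profile, since the server process never sources your .bash_profile.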
r, oracle-database, rhel, roracle
1
16,899
2
https://stackoverflow.com/questions/41761812/ora-01804-error-while-trying-to-load-oracle-dbdriver
41,258,638
To check if a software/application is installed in Linux
I am working on RHEL 6 and would like to check whether Tomcat is installed on the system. The Tomcat process is not running. Basically, I'm looking for a Unix utility that can detect third-party software installed on the system.
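On RHEL the RPM database is usually the most reliable signal, with a PATH lookup as a fallback for software installed outside the package manager. A small sketch (the helper name and its messages are my own):

```shell
# is_installed: check the RPM database first, then fall back to a PATH lookup.
is_installed() {
    if rpm -qa 2>/dev/null | grep -qi "$1"; then
        echo "$1: found in RPM database"
    elif command -v "$1" >/dev/null 2>&1; then
        echo "$1: found on PATH"
    else
        echo "$1: not found"
        return 1
    fi
}

is_installed tomcat
```

One caveat: Tomcat is often unpacked from a tarball rather than installed via yum, in which case neither check fires; searching for typical install directories (e.g. `ls -d /opt/*tomcat* /usr/share/tomcat* 2>/dev/null`) covers that case.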
tomcat, rhel
1
4,100
2
https://stackoverflow.com/questions/41258638/to-check-if-a-software-application-is-installed-in-linux
41,192,868
Connecting the Docker Daemon inside the CDK on RHEL-based Docker images
I want to use the docker command-line tool, as in docker ps, docker build, and docker run. How can I connect docker to the Docker daemon inside the CDK, so I can create RHEL-based Docker images?
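The usual pattern (a sketch; the exact values depend on your CDK box) is to point the local docker client at the daemon inside the Vagrant box via environment variables. With the CDK's vagrant-service-manager plugin installed, the box can print the right settings for you:

```shell
# Ask the CDK box for the connection settings
# (requires the vagrant-service-manager plugin):
vagrant service-manager env docker

# It prints export lines roughly like the following, which you then eval
# or paste into your shell. IP, port, and cert path below are EXAMPLES,
# not guaranteed values:
export DOCKER_HOST=tcp://10.1.2.2:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/home/you/.vagrant.d/data/docker

docker ps                      # now talks to the daemon inside the CDK box
docker build -t my-rhel-image .
```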
docker, rhel, rhel7, redhat-containers
1
109
1
https://stackoverflow.com/questions/41192868/connecting-the-docker-daemon-insde-the-cdk-on-rhel-based-docker-images
39,301,252
ColdFusion 11 Server Update - Updates don't download on RHEL 7
The Server Update > Updates page will not let me download the updates. I can click the buttons but they don't do anything. The "Check for updates" button works. The CF 11 servers that are running Windows Server 2012 and RHEL 5 have tabs on the page (screenshot in the original post), whereas the RHEL 7 servers have each tab section on one page (screenshot in the original post).
coldfusion, rhel, rhel7
1
124
1
https://stackoverflow.com/questions/39301252/coldfusion-11-server-update-updates-dont-download-on-rhel-7
39,194,878
Docker hanging requiring reboot
We are running Docker 1.7.1, build 786b29d, on RHEL 6.7. Recently the Docker daemon has locked up multiple times and we had to reboot the machine to get it back. A typical scenario is that a container that has been running fine for weeks suddenly starts throwing errors. Sometimes we can restart the container and all is well. But other times all docker commands hang, restarting the daemon fails, and I see this in ps:

    4 Z root 4895 1 0 80 0 - 0 exit Aug23 ? 00:01:24 [docker]

Looking in the system log I've seen this:

    device-mapper: ioctl: unable to remove open device docker-253:6-1048578-317bb6ad40cded3fbfd752d95551861c2e4ef08dffc1186853fea0e85da6b12b
    INFO: task docker:16676 blocked for more than 120 seconds.
    Not tainted 2.6.32-573.12.1.el6.x86_64 #1
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    docker D 000000000000000b 0 16676 1 0x00000080
    ffff88035ef13ea8 0000000000000082 ffff88035ef13e70 ffff88035ef13e6c
    ffff88035ef13e28 ffff88062fc29a00 0000376c85170937 ffff8800283759c0
    0000000000000400 00000001039d40c7 ffff8803000445f8 ffff88035ef13fd8
    Call Trace:
    [] _mutexlock_slowpath+0x96/0x210
    [] ? wake_up_process+0x15/0x20
    [] mutex_lock+0x2b/0x50
    [] sync_filesystems+0x26/0x150
    [] sys_sync+0x17/0x40
    [] system_call_fastpath+0x16/0x1b

The latest Docker version is 1.12.1 and we are on 1.7.1. Can or should I install a newer version? 1.7.1 is the version yum installs. If I did want a newer version, how would I install it (sorry if that is a dumb question, I am not a sysadmin)? Googling, I found this on a Red Hat site: "Red Hat does not recommend running any version of Docker on any RHEL 6 releases." We have been running Docker on RHEL 6 for a few years, so this confuses me. Upgrading to RHEL 7 is not really an option for us right now. Can anyone shed any light on these issues? We need Docker to work reliably without having to reboot often.
docker, rhel, rhel6
1
862
2
https://stackoverflow.com/questions/39194878/docker-hanging-requiring-reboot
38,989,590
Ansible - Managing Windows 2003 and WinRM connection timeout
I'm trying to implement Ansible in our company. I have two big problems that may cause us to abandon this product, but before we give up I thought maybe someone could help us. Some background: we installed Ansible 2.1 on RHEL 6.5. We tried to use Ansible Tower but gave up because of the complexity (most of our use is ad-hoc commands).

The first issue is managing Windows Server 2003. When we want to manage Windows servers, we need to run the pre-script, but it only works with PowerShell v3 and above, while Windows Server 2003 does not support PowerShell v3 (it's almost impossible to install that version). In our company there are (unfortunately) still dozens of Windows Server 2003 machines. Is there a way to make Ansible able to manage those servers?

The second issue is the WinRM timeout. When we run an ad-hoc command on Windows servers, there are machines where Ansible succeeds in making a WinRM connection, but then hangs waiting for the command to run (even for a simple command like hostname). We reduced the Ansible timeout, but it still hangs, so we assume it makes the WinRM connection successfully but hangs afterwards. Is there a way to configure a timeout for the whole Ansible process per machine, or otherwise configure WinRM to time out after the connection has succeeded? Thanks, Afik
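For the timeout half of the question: the winrm connection plugin exposes per-host timeout knobs that can be set as inventory variables. A sketch (the hostname and values are examples; the read timeout must be larger than the operation timeout):

```ini
[windows]
winhost1.example.com

[windows:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_operation_timeout_sec=60
ansible_winrm_read_timeout_sec=70
```

For ad-hoc runs, `ansible windows -m win_command -a hostname -T 30` also caps the connection timeout (`-T`/`--timeout`). The Server 2003 half has no clean answer: Ansible's Windows support requires PowerShell 3.0 or later, which Server 2003 cannot run, so those machines fall outside what the winrm connection supports.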
windows, ansible, rhel, adhoc, winrm
1
1,887
1
https://stackoverflow.com/questions/38989590/ansible-managing-windows-2003-and-winrm-connection-timeout
34,317,233
Using the Vimeo api gives intermittent ssl error when getting video list
I am having issues using the PHP API here. It works, but every 2nd or 3rd request I get the following error:

    Unable to complete request.[SSL connect error]

This happens at Vimeo.php:154, right after curl has executed. I tried using curl on its own at the command line and got:

    curl: (35) SSL connect error

This references: "A problem occurred somewhere in the SSL/TLS handshake. You really want the error buffer and read the message there as it pinpoints the problem slightly more. Could be certificates (file formats, paths, permissions), passwords, and others." So I tried it with PHP file_get_contents, and the warning I get is:

    Warning: file_get_contents(): SSL: Connection reset by peer

I am at a loss for where this is coming from and don't know whether this is Vimeo rejecting my request sometimes or the server losing the connection sometimes. I am trying to find out if someone has had this issue before or has some steps I can use to get more of a clue as to what is causing it. Here is my code.

Using file_get_contents:

    $opts = array(
        'http' => array(
            'method' => "GET",
            'header' => "Authorization: bearer <Personal access token>\r\n"
        ),
        'ssl' => array(
            'allow_self_signed' => false,
            'verify_peer' => false,
        )
    );
    $context = stream_context_create($opts);
    $userAccount = '<user account>';
    $url = "[URL]";
    $unparsed_json = file_get_contents($url, false, $context);
    $json_object = json_decode($unparsed_json);
    var_dump($json_object); die();

Using cURL:

    $userAccount = '<user account>';
    $url = "[URL]";
    $curlHeader = [
        'Authorization: bearer <Personal access token>',
        'Accept: ' . self::VERSION_STRING,
    ];
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $curlHeader);
    curl_setopt($ch, CURLOPT_HTTPGET, true);
    $unparsed_json = curl_exec($ch);
    // Check if any error occurred
    if (curl_errno($ch)) {
        $info = curl_getinfo($ch);
        var_dump('Curl error: ', curl_error($ch), ' Curl error no: ', curl_errno($ch),
                 ' Unparsed json: ', $unparsed_json, ' Info: ', $info,
                 'DIR CERT: ', __DIR__ . '/CERT');
        die();
    }
    curl_close($ch);
    $json_object = json_decode($unparsed_json);
    var_dump($unparsed_json); die();

Using Vimeo.php:

    $userAccount = '<user account>';
    $url = "[URL]";
    $client_id = '<Client ID>';
    $client_secret = '<Secret>';
    $lib = new Vimeo\Vimeo($client_id, $client_secret);
    $lib->setToken('<Personal access token>');
    $response = $lib->request($url, [], 'GET');
    var_dump($response['body']); die();

I used verbose mode on curl at the Linux command line and saw this:

    About to connect() to api.vimeo.com port 443 (#0)
    Trying 104.156.85.217... connected
    Connected to api.vimeo.com (104.156.85.217) port 443 (#0)
    Initializing NSS with certpath: sql:/etc/pki/nssdb
    CAfile: 'cert file location'
    CApath: none
    NSS error -5961
    Closing connection #0
    SSL connect error
    curl: (35) SSL connect error
php, curl, https, rhel, vimeo-api
1
2,508
1
https://stackoverflow.com/questions/34317233/using-the-vimeo-api-gives-intermittent-ssl-error-when-getting-video-list
33,576,675
Could not load file or assembly 'Mono.Posix'
I'm attempting to run an exe on RHEL 6 using Mono. I've compiled Mono 4.0.2.4, and when I try to run my exe it crashes.

My command:

    /opt/mono/bin/mono /opt/mono/lib/mono/4.5/mono-service.exe -l:plexos.lock ./DALicenseServer.exe

The error:

    Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Mono.Posix, Version=4.0.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756' or one of its dependencies.
    File name: 'Mono.Posix, Version=4.0.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756'
    [ERROR] FATAL UNHANDLED EXCEPTION: System.IO.FileNotFoundException: Could not load file or assembly 'Mono.Posix, Version=4.0.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756' or one of its dependencies.
    File name: 'Mono.Posix, Version=4.0.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756'

I think I have the Mono.Posix.dll file:

    ls /opt/mono/lib/mono/4.0/Mono.Posix.dll
    /opt/mono/lib/mono/4.0/Mono.Posix.dll
mono, rhel, mono-posix
1
5,781
2
https://stackoverflow.com/questions/33576675/could-not-load-file-or-assembly-mono-posix
33,281,806
Upgraded OpenSSL - How do I get Apache HTTPD to Use?
I recently had a need to upgrade an old server. The server fulfills a very specific purpose and as such has not been kept up to date. With the recent push for SSL to use SHA256, I needed to upgrade a few packages.

Short background: the server is RHEL 3 (yes, that is correct). I downloaded and built OpenSSL 0.9.8q and ensured it was the only instance of OpenSSL on the server (moving the old instance to a backup directory). I then downloaded and built cURL 7.15.5 with ./configure --with-ssl=/usr/local/ssl, pointing --with-ssl at my new OpenSSL directory. Once cURL was built, I tested my connection to the resource that requires SHA256; the connection test was successful.

On to my problem and question: I downloaded httpd 2.0.59 and built it with --enable-ssl and --enable-so, but my tests did not work. I also tried to download and build httpd 2.0.63, but I was having trouble getting 2.0.63 working at all. I then took the mod_ssl built from 2.0.63 and put it into the 2.0.59 directory... no luck either. I feel I am missing some element that connects httpd to my newly installed OpenSSL. What do I need to do to ensure mod_ssl is using my new version of OpenSSL on the server? I understand I am quite a few releases behind with my httpd instances, but again, this is an old server with a specific purpose. My only goal is to get it working with SHA256, not to buy a new server with the latest RHEL. Thanks for any input/assistance.
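Like cURL, httpd's configure must be told explicitly where the new OpenSSL lives; otherwise mod_ssl links against whatever headers and libraries it finds first. A sketch of the build, assuming the /usr/local/ssl prefix from above (the apache2 prefix is an example):

```shell
# Point httpd's mod_ssl at the freshly built OpenSSL:
./configure --enable-ssl --enable-so \
            --with-ssl=/usr/local/ssl \
            --prefix=/usr/local/apache2
make && make install

# Verify which OpenSSL mod_ssl actually linked against:
ldd /usr/local/apache2/modules/mod_ssl.so | grep -i ssl

# mod_ssl also logs the OpenSSL version string at startup:
grep -i openssl /usr/local/apache2/logs/error_log
```

If ldd still shows the system libssl, the old shared libraries are being picked up at runtime; setting LD_LIBRARY_PATH for the httpd process (or linking OpenSSL statically) is the usual workaround.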
apache, ssl, rhel, sha256
1
1,042
1
https://stackoverflow.com/questions/33281806/upgraded-openssl-how-do-i-get-apache-httpd-to-use
32,196,269
gdb reports Segmentation fault - how to know where?
I'm running my program under gdb, with debugging information and without any optimizations. gdb reports:

    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7fffeffff700 (LWP 8875)]
    0x0000001000000001 in ?? ()

From this message I do not understand where the problem happened. Is it possible to extract a stack trace / the problem file and line number?
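A program counter like 0x0000001000000001 usually means the program jumped through a corrupted pointer, so the innermost frame is garbage, but the caller frames often survive. A standard gdb triage sequence (not specific to this program):

```
(gdb) bt                      # backtrace of the crashing thread; frame #1 and up may still be valid
(gdb) thread apply all bt     # backtraces of every thread
(gdb) info registers          # inspect pc/sp; a tiny or pattern-like pc suggests a wild jump
(gdb) x/8gx $sp               # raw words near the stack pointer; return addresses may be visible
(gdb) info symbol 0x0000001000000001   # ask gdb what (if anything) lives at that address
```

If `bt` is also garbage, the stack itself was overwritten; rebuilding with -fsanitize=address (on a newer toolchain) or running under valgrind typically catches the corruption earlier, at the write that damaged the pointer.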
c++, gdb, rhel
1
215
1
https://stackoverflow.com/questions/32196269/gdb-reports-segmentation-fault-how-to-know-where
31,023,115
How to execute script after system reboot by using kickstart
I'm using PXE to install an ISO. There are two scripts for environment configuration that I want to add to the kickstart file, so the environment is set up completely and automatically after the system is installed. However, my situation is:

- Script 1 must reboot (the reboot has been added to script1.sh)
- Script 2 depends on script 1

Here is part of the kickstart file:

    ...
    %post
    wget [URL]
    wget [URL]
    sh -x script1.sh | tee script1.log
    sh -x script2.sh | tee script2.log
    %end

So, is there any way that script 2 can be executed after the system reboot using the kickstart file? Or, alternatively, to execute script 2 just once after the reboot? Thanks.
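A common pattern (sketched below; the script URLs and paths are placeholders) is to have %post install a one-time hook that runs script2 on first boot and then removes itself, e.g. via /etc/rc.d/rc.local on RHEL 6:

```
%post
wget -P /root [URL]
wget -P /root [URL]

# Arrange for script2 to run once on first boot, then restore rc.local:
cp /etc/rc.d/rc.local /etc/rc.d/rc.local.orig
cat >> /etc/rc.d/rc.local <<'EOF'
sh -x /root/script2.sh | tee /root/script2.log
mv /etc/rc.d/rc.local.orig /etc/rc.d/rc.local
EOF

sh -x /root/script1.sh | tee /root/script1.log
%end
```

One wrinkle: %post runs inside the installer's chroot, so a reboot issued from inside script1.sh would interrupt the installation itself. It is usually safer to let kickstart's own `reboot` directive handle the first reboot and defer both scripts' reboot-dependent work to the rc.local hook.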
rhel, reboot, rhel6
1
8,864
1
https://stackoverflow.com/questions/31023115/how-to-execute-script-after-system-reboot-by-using-kickstart
30,982,842
While loop in bash using variable from txt file
I am new to bash and writing a script to read variables that are stored one per line in a text file (there are thousands of these variables). So I tried to write a script that reads the lines, automatically outputs the solution to the screen, and saves it into another text file:

    ./reader.sh > solution.text

The problem I encounter: currently I have only one variable stored in Sheetone.txt for testing purposes, which should take about 2 seconds to process, but the script is stuck in the while loop and does not output the solution.

    #!/bin/bash
    file=Sheetone.txt
    while IFS= read -r line
    do
        echo sh /usr/local/test/bin/test -ID $line -I
    done
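Two things stand out in the script above: the loop never reads from the file, so `read` waits on stdin forever (which is why it appears stuck), and `echo sh ...` prints the command instead of running it. A corrected sketch (the /usr/local/test/bin/test path comes from the question):

```shell
#!/bin/bash
file=Sheetone.txt
while IFS= read -r line
do
    sh /usr/local/test/bin/test -ID "$line" -I
done < "$file"          # feed the file into the loop; this redirect was missing
```

Quoting "$line" keeps values containing spaces intact, and dropping the `echo` actually executes the command rather than printing it.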
linux, bash, rhel
1
2,315
1
https://stackoverflow.com/questions/30982842/while-loop-in-bash-using-variable-from-txt-file
29,763,739
How to point yum to a local HTTP repo temporarily
I am in a situation where I need to point to an internally hosted yum repo and update specific packages found in those repos. There are two repos, one for Red Hat 5.x and the other for Red Hat 6.x. Before you ask: I can't add the repos through the normal method, as that requires upstream changes which I am not allowed to make. (That would be too simple!) Hence I am asking how to run a simple shell script that tells yum to point at the RHEL 5.x repo for a specific package and upgrade to the latest package found there. This is for security patching. The sad way I am doing this now is to run a pssh loop against a bunch of machines where I know they are all RHEL 5 or 6, and run yum update -y [URL]. But this is much harder to accomplish in a simple way if there is a mix of machines, as then I'd have to point at different repos based on the OS version, and also find the full RPM file names. Any ideas?
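One way to avoid permanent repo configuration (a sketch; the baseurl and package name are placeholders) is to drop a throwaway .repo file, run the update with that repo explicitly enabled, and delete the file afterwards. yum's `$releasever` variable lets the same file resolve to the RHEL 5 or RHEL 6 repo on each host:

```
cat > /etc/yum.repos.d/tmp-patch.repo <<'EOF'
[tmp-patch]
name=Temporary internal patch repo
baseurl=http://internal-repo.example.com/rhel/$releasever/
enabled=0
gpgcheck=0
EOF

yum --enablerepo=tmp-patch update -y some-package
rm -f /etc/yum.repos.d/tmp-patch.repo
```

Because `enabled=0` keeps the repo out of normal yum runs and the update is by package name rather than RPM file name, this wraps cleanly into the existing pssh loop even on a mixed RHEL 5/6 fleet.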
linux, yum, rhel, patch
1
3,060
1
https://stackoverflow.com/questions/29763739/how-to-point-yum-to-a-local-http-repo-to-temporary
28,981,456
puppet detect if a file changed but don't change it
I want to manage the config.xml file of the Jenkins service with Puppet. The problem is that if Puppet changes the config.xml file and then restarts the Jenkins service, the config.xml file gets overwritten by the currently loaded configuration of Jenkins, and the changes made by Puppet are lost. That's what I have now: file { '/var/lib/jenkins/config.xml': source => 'puppet:///modules/jenkins/config.xml', owner => jenkins, group => jenkins, mode => '0644' } service { 'jenkins': ensure => running, enable => true, subscribe => File['/var/lib/jenkins/config.xml'] } My approach is to stop the Jenkins service, then copy the config.xml and start the service again... naturally, the service should not be stopped and started again every time Puppet runs, but only if the config.xml changed. I don't know how to do this with Puppet, or even if it is possible. Any ideas? Any help would be much appreciated
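One workaround for this stop-before-change ordering problem (a sketch, not a built-in Puppet feature) is to manage a staged copy of the file that Jenkins never touches, and use an exec guarded by `unless` to stop Jenkins and install the config only when the staged copy actually differs from the live one. The staged path and the `/sbin/service` command are assumptions based on the question's RHEL environment:

```puppet
# Stage the desired config where Jenkins never writes.
file { '/var/lib/jenkins/config.xml.staged':
  source => 'puppet:///modules/jenkins/config.xml',
  owner  => 'jenkins',
  group  => 'jenkins',
  mode   => '0644',
}

# Only when the staged copy differs from the live file: stop Jenkins,
# copy the config into place, then let the service resource start it.
exec { 'install-jenkins-config':
  command => '/sbin/service jenkins stop && /bin/cp -p /var/lib/jenkins/config.xml.staged /var/lib/jenkins/config.xml',
  unless  => '/usr/bin/cmp -s /var/lib/jenkins/config.xml.staged /var/lib/jenkins/config.xml',
  require => File['/var/lib/jenkins/config.xml.staged'],
  notify  => Service['jenkins'],
}

service { 'jenkins':
  ensure => running,
  enable => true,
}
```

Because the `unless` check compares the two files, the service is only bounced on runs where config.xml actually changed; all other Puppet runs are no-ops.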
jenkins, puppet, rhel
1
3,551
3
https://stackoverflow.com/questions/28981456/puppet-detect-if-a-file-changed-but-dont-change-it
28,003,935
Cron job does not run in RHEL
I'm running RHEL and I'm trying to set up a cron job to run a shell script every 5 minutes. Following the directions here: [URL] I ran service crond start and chkconfig crond on . Then I edited /etc/crontab and added: */5 * * * * my-user /path/to/shell.sh I ran chmod +x shell.sh and made sure to add a newline character at the end. I'm expecting it to run every 5 minutes, but it never executes. What am I doing wrong?
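For debugging, it helps to capture the job's output: a script that fires but fails silently looks identical to a job that never fires. A sketch of the /etc/crontab entry with logging added (the log path is arbitrary):

```
# /etc/crontab entries need the user field; per-user crontabs do not.
# Cron's default PATH is minimal, so commands that work in a login
# shell can fail inside cron -- set SHELL and PATH explicitly.
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
*/5 * * * * my-user /path/to/shell.sh >> /tmp/shell-cron.log 2>&1
```

Also check /var/log/cron to confirm crond is attempting the job at all; if the job appears there but nothing happens, the failure is inside the script (often a PATH or permissions issue), not in cron itself.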
cron, rhel
1
7,210
1
https://stackoverflow.com/questions/28003935/cron-job-does-not-run-in-rhel
27,366,121
How to install ImageMagick header files of specific version?
I am on RHEL and installed ImageMagick from source using the following: yum install -y libpng libpng-devel curl -LO [URL] tar -xvzf ImageMagick.tar.gz cd ImageMagick-6.8.9-9/ ./configure --prefix=/usr/local make install I also need to install the header files. How do I do this? The latest version in the Yum repository is only 6.5.4, and if I install those packages I get version conflicts.
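Note that for a source build there is no separate -devel package to install: the same `make install` that installs the binaries also installs the headers under the configured prefix, so no yum package (and no version conflict) is involved. A quick way to confirm, assuming `--prefix=/usr/local` and the 6.x include layout used by this release:

```shell
# Headers installed by 'make install' from the source tree:
ls /usr/local/include/ImageMagick-6/
# Compile/link flags for this exact version come from the pkg-config
# files the source install also provides, not from any yum package:
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig pkg-config --cflags --libs MagickWand
```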
linux, imagemagick, redhat, rhel
1
1,064
1
https://stackoverflow.com/questions/27366121/how-to-install-imagemagick-header-files-of-specific-version
22,607,477
Linux "Default Permissions" for Future Files recursively?
I have a cache folder which needs 777 permissions on everything inside: the existing content, and any files, folders, sub-folders, etc. created in the future. Let's say: /var/www/html/cache How do I do this in Linux (RHEL)?
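`chmod -R 777` only fixes what already exists; classic mode bits are not inherited by newly created files. Default POSIX ACLs are inherited, so one sketch (assuming the filesystem is mounted with ACL support, which is the ext3/ext4 default on RHEL) combines the two:

```shell
# Fix everything that exists today:
chmod -R 777 /var/www/html/cache
# Add *default* ACL entries (the "d:" prefix) so anything created
# inside later inherits rwx for user, group, and other:
setfacl -R -m d:u::rwx,d:g::rwx,d:o::rwx /var/www/html/cache
# Verify: files created afterwards pick up the default entries.
touch /var/www/html/cache/newfile && getfacl /var/www/html/cache/newfile
```

One caveat: most programs create plain files with mode 0666, so the inherited entries grant rw rather than rwx on files; new directories do inherit the full rwx.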
linux, file-permissions, rhel
1
1,696
1
https://stackoverflow.com/questions/22607477/linux-default-permissions-for-future-files-recursively
22,281,853
Installing git-svn generates dependency errors
I am trying to install the git-svn package on different Red Hat releases, and it always fails with package dependency errors; I tried to resolve those dependencies, but nothing worked. I created a repo config with a URL, but it didn't solve the problem. Create the repository config file /etc/yum.repos.d/puias-computational.repo: [puias-computational] name=PUIAS Computational baseurl=[URL] enabled=1 gpgcheck=0 yum install git-svn This is the output on RHEL 6.4; I tried 6.3 and 6.5 and the same happens: Loaded plugins: product-id, security, subscription-manager This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package git-svn.noarch 0:1.7.11.4-3.puias6 will be installed --> Processing Dependency: git = 1.7.11.4-3.puias6 for package: git-svn-1.7.11.4-3.puias6.noarch --> Processing Dependency: perl(Term::ReadKey) for package: git-svn-1.7.11.4-3.puias6.noarch --> Processing Dependency: perl(Git::SVN::Ra) for package: git-svn-1.7.11.4-3.puias6.noarch --> Processing Dependency: perl(Git::SVN::Prompt) for package: git-svn-1.7.11.4-3.puias6.noarch --> Processing Dependency: perl(Git::SVN::Fetcher) for package: git-svn-1.7.11.4-3.puias6.noarch --> Processing Dependency: perl(Git::SVN::Editor) for package: git-svn-1.7.11.4-3.puias6.noarch --> Running transaction check ---> Package git.i686 0:1.7.11.4-3.puias6 will be installed --> Processing Dependency: perl-Git = 1.7.11.4-3.puias6 for package: git-1.7.11.4-3.puias6.i686 --> Processing Dependency: libz.so.1 for package: git-1.7.11.4-3.puias6.i686 --> Processing Dependency: libssl.so.10 for package: git-1.7.11.4-3.puias6.i686 --> Processing Dependency: libpcre.so.0 for package: git-1.7.11.4-3.puias6.i686 --> Processing Dependency: libexpat.so.1 for package: git-1.7.11.4-3.puias6.i686 --> Processing Dependency: libcurl.so.4 for package: git-1.7.11.4-3.puias6.i686 --> Processing Dependency: 
libcrypto.so.10 for package: git-1.7.11.4-3.puias6.i686 ---> Package perl-Git-SVN.noarch 0:1.8.3.1-1.sdl6 will be installed --> Processing Dependency: git = 1.8.3.1-1.sdl6 for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(YAML::Any) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Ra) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Delta) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Core) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Client) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch ---> Package perl-TermReadKey.x86_64 0:2.30-13.el6 will be installed --> Running transaction check ---> Package expat.i686 0:2.0.1-11.el6_2 will be installed ---> Package git.x86_64 0:1.7.1-2.el6_0.1 will be updated ---> Package git.i686 0:1.8.3.1-1.sdl6 will be an update --> Processing Dependency: perl-Git = 1.8.3.1-1.sdl6 for package: git-1.8.3.1-1.sdl6.i686 ---> Package libcurl.i686 0:7.19.7-35.el6 will be installed --> Processing Dependency: libssh2(x86-32) >= 1.4.2 for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libssl3.so(NSS_3.4) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libssl3.so(NSS_3.2) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libssl3.so(NSS_3.11.4) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libssl3.so for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libssh2.so.1 for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libsmime3.so for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libplds4.so for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libplc4.so for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnssutil3.so for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.9.3) for package: 
libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.9.2) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.5) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.4) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.3) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.2) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.12.5) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.12.1) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.12) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so(NSS_3.10) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnss3.so for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libnspr4.so for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libldap-2.4.so.2 for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libkrb5.so.3 for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libk5crypto.so.3 for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libidn.so.11(LIBIDN_1.0) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libidn.so.11 for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libgssapi_krb5.so.2(gssapi_krb5_2_MIT) for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libgssapi_krb5.so.2 for package: libcurl-7.19.7-35.el6.i686 --> Processing Dependency: libcom_err.so.2 for package: libcurl-7.19.7-35.el6.i686 ---> Package openssl.i686 0:1.0.0-27.el6 will be installed ---> Package pcre.i686 0:7.8-6.el6 will be installed --> Processing Dependency: libstdc++.so.6(GLIBCXX_3.4.9) for package: pcre-7.8-6.el6.i686 --> Processing Dependency: libstdc++.so.6(GLIBCXX_3.4) for package: pcre-7.8-6.el6.i686 --> Processing Dependency: 
libstdc++.so.6(CXXABI_1.3) for package: pcre-7.8-6.el6.i686 --> Processing Dependency: libstdc++.so.6 for package: pcre-7.8-6.el6.i686 ---> Package perl-Git.noarch 0:1.7.1-2.el6_0.1 will be updated ---> Package perl-Git.noarch 0:1.7.11.4-3.puias6 will be an update --> Processing Dependency: perl-Git = 1.7.11.4-3.puias6 for package: git-1.7.11.4-3.puias6.i686 ---> Package perl-Git-SVN.noarch 0:1.8.3.1-1.sdl6 will be installed --> Processing Dependency: perl(YAML::Any) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Ra) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Delta) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Core) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Client) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch ---> Package zlib.i686 0:1.2.3-29.el6 will be installed --> Running transaction check ---> Package krb5-libs.i686 0:1.10.3-10.el6 will be installed --> Processing Dependency: libselinux.so.1 for package: krb5-libs-1.10.3-10.el6.i686 --> Processing Dependency: libkeyutils.so.1(KEYUTILS_0.3) for package: krb5-libs-1.10.3-10.el6.i686 --> Processing Dependency: libkeyutils.so.1 for package: krb5-libs-1.10.3-10.el6.i686 ---> Package libcom_err.i686 0:1.41.12-14.el6 will be installed ---> Package libidn.i686 0:1.18-2.el6 will be installed ---> Package libssh2.i686 0:1.4.2-1.el6 will be installed ---> Package libstdc++.i686 0:4.4.7-3.el6 will be installed ---> Package nspr.i686 0:4.9.2-1.el6 will be installed ---> Package nss.i686 0:3.14.0.0-12.el6 will be installed --> Processing Dependency: nss-softokn(x86-32) >= 3.12.9 for package: nss-3.14.0.0-12.el6.i686 --> Processing Dependency: libsoftokn3.so for package: nss-3.14.0.0-12.el6.i686 ---> Package nss-util.i686 0:3.14.0.0-2.el6 will be installed ---> Package openldap.i686 0:2.4.23-31.el6 will be installed --> Processing Dependency: libsasl2.so.2 for 
package: openldap-2.4.23-31.el6.i686 ---> Package perl-Git.noarch 0:1.7.1-2.el6_0.1 will be updated ---> Package perl-Git.noarch 0:1.7.11.4-3.puias6 will be an update --> Processing Dependency: perl-Git = 1.7.11.4-3.puias6 for package: git-1.7.11.4-3.puias6.i686 ---> Package perl-Git.noarch 0:1.8.3.1-1.sdl6 will be an update ---> Package perl-Git-SVN.noarch 0:1.8.3.1-1.sdl6 will be installed --> Processing Dependency: perl(YAML::Any) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Ra) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Delta) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Core) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Client) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Running transaction check ---> Package cyrus-sasl-lib.i686 0:2.1.23-13.el6_3.1 will be installed --> Processing Dependency: libdb-4.7.so for package: cyrus-sasl-lib-2.1.23-13.el6_3.1.i686 ---> Package keyutils-libs.i686 0:1.4-4.el6 will be installed ---> Package libselinux.i686 0:2.0.94-5.3.el6 will be installed ---> Package nss-softokn.i686 0:3.12.9-11.el6 will be installed --> Processing Dependency: libsqlite3.so.0 for package: nss-softokn-3.12.9-11.el6.i686 ---> Package perl-Git.noarch 0:1.7.11.4-3.puias6 will be an update --> Processing Dependency: perl-Git = 1.7.11.4-3.puias6 for package: git-1.7.11.4-3.puias6.i686 ---> Package perl-Git-SVN.noarch 0:1.8.3.1-1.sdl6 will be installed --> Processing Dependency: perl(YAML::Any) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Ra) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Delta) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Core) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Client) for package: 
perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Running transaction check ---> Package db4.i686 0:4.7.25-17.el6 will be installed ---> Package perl-Git.noarch 0:1.7.11.4-3.puias6 will be an update --> Processing Dependency: perl-Git = 1.7.11.4-3.puias6 for package: git-1.7.11.4-3.puias6.i686 ---> Package perl-Git-SVN.noarch 0:1.8.3.1-1.sdl6 will be installed --> Processing Dependency: perl(YAML::Any) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Ra) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Delta) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Core) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Client) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch ---> Package sqlite.i686 0:3.6.20-1.el6 will be installed --> Processing Dependency: libreadline.so.6 for package: sqlite-3.6.20-1.el6.i686 --> Running transaction check ---> Package perl-Git.noarch 0:1.7.11.4-3.puias6 will be an update --> Processing Dependency: perl-Git = 1.7.11.4-3.puias6 for package: git-1.7.11.4-3.puias6.i686 ---> Package perl-Git-SVN.noarch 0:1.8.3.1-1.sdl6 will be installed --> Processing Dependency: perl(YAML::Any) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Ra) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Delta) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Core) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Client) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch ---> Package readline.i686 0:6.0-4.el6 will be installed --> Processing Dependency: libtinfo.so.5 for package: readline-6.0-4.el6.i686 --> Running transaction check ---> Package ncurses-libs.i686 0:5.7-3.20090208.el6 will be installed ---> Package perl-Git.noarch 0:1.7.11.4-3.puias6 will be an update --> Processing Dependency: 
perl-Git = 1.7.11.4-3.puias6 for package: git-1.7.11.4-3.puias6.i686 ---> Package perl-Git-SVN.noarch 0:1.8.3.1-1.sdl6 will be installed --> Processing Dependency: perl(YAML::Any) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Ra) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Delta) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Core) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Processing Dependency: perl(SVN::Client) for package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch --> Finished Dependency Resolution Error: Package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch (puias-computational) Requires: perl(SVN::Client) Error: Package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch (puias-computational) Requires: perl(YAML::Any) Error: Package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch (puias-computational) Requires: perl(SVN::Core) Error: Package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch (puias-computational) Requires: perl(SVN::Delta) Error: Package: git-1.7.11.4-3.puias6.i686 (puias-computational) Requires: perl-Git = 1.7.11.4-3.puias6 Removing: perl-Git-1.7.1-2.el6_0.1.noarch (@anaconda-RedHatEnterpriseLinux-201301301459.x86_64/6.4) perl-Git = 1.7.1-2.el6_0.1 Updated By: perl-Git-1.8.3.1-1.sdl6.noarch (puias-computational) perl-Git = 1.8.3.1-1.sdl6 Available: perl-Git-1.7.11.4-3.puias6.noarch (puias-computational) perl-Git = 1.7.11.4-3.puias6 Error: Package: perl-Git-SVN-1.8.3.1-1.sdl6.noarch (puias-computational) Requires: perl(SVN::Ra) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest
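The unresolved provides at the bottom map onto concrete packages: perl(SVN::Core), perl(SVN::Client), perl(SVN::Ra), and perl(SVN::Delta) come from subversion-perl in the base RHEL repo, and perl(YAML::Any) is shipped in the perl-YAML package (available in EPEL). A sketch of the usual order, assuming EPEL is enabled:

```shell
# Satisfy the Perl-side dependencies first, then retry git-svn:
yum install -y subversion-perl perl-YAML
yum install -y git-svn
```

The remaining perl-Git conflict comes from the PUIAS repo mixing two git builds (1.7.11.4-3.puias6 and 1.8.3.1-1.sdl6); if the repo carries matching builds for one of those versions, pinning git, perl-Git, and git-svn to the same version string should resolve it.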
linux, git, svn, yum, rhel
1
2,815
1
https://stackoverflow.com/questions/22281853/installing-git-svn-generates-errors-for-dependancies
22,131,301
Openstack: Error on Puppet while installing on redhat 6.4
I am installing OpenStack with the following steps: installed the OpenStack kernel; yum -y update (doesn't update anything); set up NTP; rebooted; installed openstack-packstack; generated the answer file and ran openstack --answer-file=/root/answerfile.txt. An error occurs during the Puppet configuration:
Adding Horizon manifest entries...           [ DONE ]
Preparing servers...                         [ DONE ]
Adding post install manifest entries...      [ DONE ]
Installing Dependencies...                   [ DONE ]
Copying Puppet modules and manifests...      [ DONE ]
Applying Puppet manifests...
Applying 192.168.170.143_prescript.pp        ERROR
ERROR : Error during puppet run : err: /Stage[main]//Package[openstack-selinux]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install openstack-selinux' returned 1: Error: Nothing to do
Please check log file /var/tmp/packstack/20140302-121029-B85D0p/openstack-setup.log for more information.
I have already installed the EPEL repo and the local CD repo, and I installed Puppet (yum install puppet) before the above installation. I have tried the steps from the manual and a book, but it didn't work. Any suggestions for this issue?
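"Error: Nothing to do" from yum here means openstack-selinux is not visible in any enabled repository; it is shipped by the RDO/OpenStack repo rather than EPEL or the base CD. A hedged sketch of the diagnostic commands (no repo URL is suggested here, since the correct one depends on the OpenStack release being installed):

```shell
# Is the package visible in any enabled repo at all?
yum info openstack-selinux

# Which repos are actually enabled on this host?
yum repolist enabled
```

If openstack-selinux does not appear, the RDO repository for the chosen release has to be enabled before re-running packstack.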
linux, puppet, openstack, rhel
1
1,667
1
https://stackoverflow.com/questions/22131301/openstack-error-on-puppet-while-installing-on-redhat-6-4
18,825,848
SSSD Authentication with Samba 4
I have recently upgraded from Samba 3.5 to Samba 4 on a RHEL 6.3 platform. It is pleasing that the new version can act as an AD DC and has its own built-in KDC and LDB database. My intention is to make Linux boxes authenticate against Samba 4 over LDAP, since Samba 4 works like a Kerberized LDAP server. I am able to connect to the LDAP database with Apache Directory Studio using the administrator DN. However, I am unable to properly configure sssd on the RHEL 6 client machines to authenticate against the Samba server via LDAP. Here is my sssd configuration file:
[sssd]
config_file_version = 2
reconnection_retries = 3
sbus_timeout = 30
services = nss, pam
domains = default

[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3

[pam]
reconnection_retries = 3

[domain/default]
ldap_default_authtok_type = password
ldap_id_use_start_tls = False
cache_credentials = True
ldap_group_object_class = group
ldap_search_base = <My Domain dn>
chpass_provider = krb5
ldap_default_authtok = <Administrator Password>
id_provider = ldap
auth_provider = krb5
ldap_default_bind_dn = cn=Administrator,cn=Users,<My Domain dn>
ldap_user_gecos = displayName
debug_level = 0
ldap_uri = ldap://<samba_server_hostname>/
krb5_realm = <krb auth realm(same as domain name)>
krb5_kpasswd = <samba_server_hostname>
ldap_schema = rfc2307bis
ldap_force_upper_case_realm = True
ldap_user_object_class = person
ldap_tls_cacertdir = /etc/openldap/cacerts
krb5_kdcip = <samba_server_hostname>

I can run kinit for Administrator on the client successfully, and I can run ldapsearch when binding as Administrator, but id and getent passwd do not work for any user. Any ideas?
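A usual reason id/getent fail against a Samba 4 AD while kinit and ldapsearch work is identity mapping: sssd's rfc2307bis defaults do not match AD's attribute names, and the AD entries may simply lack POSIX attributes. A hedged sketch of the attribute overrides for the [domain/default] section (the attribute names follow AD's schema and are an assumption here; verify them against your directory with ldapsearch):

```
[domain/default]
ldap_schema = rfc2307bis
ldap_user_object_class = user
ldap_user_name = sAMAccountName
ldap_user_uid_number = uidNumber
ldap_user_gid_number = gidNumber
ldap_user_home_directory = unixHomeDirectory
ldap_user_shell = loginShell
ldap_group_object_class = group
ldap_group_member = member
```

Note that getent passwd can only return users whose directory entries actually carry uidNumber/gidNumber values; Samba 4 does not populate them by default.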
linux, rhel, samba, sssd
1
6,336
1
https://stackoverflow.com/questions/18825848/sssd-authentication-with-samba-4
17,763,063
RHEL6 LDAP client not listing all the groups
Using sssd, I have a RHEL 6 client configured to log in via LDAPS. The login works, but if the logged-in user is assigned to more than one group at the LDAP level, groups returns only one group. Could I be missing a configuration somewhere? The group shown is the default group assigned to the user. On a RHEL 5 client, the groups command displays all the groups assigned to the user.
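A common cause is a mismatch between the schema sssd expects and the one the directory actually uses: with ldap_schema = rfc2307, sssd looks for memberUid values on the group entry, while rfc2307bis-style directories list members in the member attribute. A hedged sketch of the two variants for /etc/sssd/sssd.conf (pick the one matching your directory; the domain section name is an assumption):

```
[domain/default]
# RFC 2307 directories: groups carry memberUid = <username>
ldap_schema = rfc2307

# -- or, for rfc2307bis-style directories --
# ldap_schema = rfc2307bis
# ldap_group_member = member
```

After changing the schema setting, clear the sssd cache (remove /var/lib/sss/db/* and restart sssd) so stale cached entries do not mask the fix.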
linux, ldap, redhat, rhel
1
3,641
2
https://stackoverflow.com/questions/17763063/rhel6-ldap-client-not-listing-all-the-groups
17,397,034
aws ec2 iptables port 80 (http)
I recently set up an EC2 instance (RHEL) on Amazon Web Services. I am having trouble getting it to respond to HTTP requests. I understand that port 80 (HTTP) needs to be added to the inbound rules in the instance's security group. This hasn't worked so far: I've tried both adding it to the default security group and creating a new security group. Has anyone else had a similar experience? The next step is Amazon support, but I thought I would ask here first.
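On RHEL images the host firewall often blocks port 80 even when the security group allows it. A hedged sketch of the rule to add to /etc/sysconfig/iptables, placed above the final REJECT rule, assuming the stock RHEL 6 rule layout:

```
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
```

Then run service iptables restart, and confirm a web server is actually listening on port 80 (for example with netstat -tlnp | grep :80) before blaming the security group.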
linux, amazon-web-services, amazon-ec2, iptables, rhel
1
2,616
1
https://stackoverflow.com/questions/17397034/aws-ec2-iptables-port-80-http
17,238,617
Running Chef cookbooks on ExaData
I am trying to run a Chef cookbook on an ExaData server and I'm running into issues. I was able to bootstrap my ExaData servers. However, when I run chef-client on the target nodes, I get an error like this . Then I went back and did a verbose output of the error , and I still don't have any idea what the issue is. I am able to ping, traceroute, and nc to and from the ExaData server to the Chef server. None of the files transfer from the cookbook, and none of the files download from the remote Zabbix repository. The Chef run completes the role and recipes, but nothing is installed. Is there something different about ExaData from regular RHEL distributions that would cause issues? --EDIT - 2013-07-15-- Comparing against a "successful" chef-client run on a regular RHEL 6.2 OS (whereas ExaData runs RHEL 5.8), I saw fewer errors there. There seem to be a lot of libraries missing from ExaData that chef-client needs. From what I have heard and read in other posts, ExaData is a stripped-down version of RHEL 5.8, carrying only what is needed to run databases.
ruby, chef-infra, rhel, role, exadata
1
319
1
https://stackoverflow.com/questions/17238617/running-chef-cookbooks-on-exadata
16,906,276
How to detect a kernel panic on a remote machine?
I have software that monitors the health of several linux machines on a local network. One of the checks it does is ping all of the machines periodically to ensure that they are responsive. It has recently come to my attention that one or more machines can be in a kernel panic state yet still respond to ping. I'd like to know if there's some sort of check I can do in C++ that returns true when either: a) Remote machine is unresponsive (currently doing this with ping statements). b) Remote machine is responsive, but in a kernel panic state. The thing is, I don't know what works and what doesn't during a kernel panic. This is on RHEL 5.7 if that helps. Thanks in advance!
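Since ICMP echo can be answered by the kernel's network path without any working userspace, a stronger liveness check is whether a userspace daemon (sshd, or a small heartbeat service you run yourself) still accepts TCP connections. A minimal sketch of the idea in Python, easy to port to C++ with plain BSD sockets; the host and port values are placeholders, not something from the question:

```python
# Userspace liveness probe: a machine stuck in a kernel panic may still
# answer ping, but no userspace daemon can accept a new TCP connection.
# Probing a service such as sshd (port 22) therefore distinguishes
# case (b) "responsive but panicked" from a genuinely healthy host.
import socket

def userspace_alive(host, port, timeout=2.0):
    """Return True if a TCP service on host:port accepts a connection."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False
```

Combined with the existing ping check, this gives the two states the monitor needs: ping fails means unresponsive; ping succeeds but the TCP probe fails is a strong hint of a panicked (or at least userspace-dead) machine.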
c++, linux, kernel, rhel, panic
1
1,248
1
https://stackoverflow.com/questions/16906276/how-to-detect-a-kernel-panic-on-a-remote-machine
16,793,426
cannot initialize spring context due to "Error creating bean with name.." error
My Spring application runs on JBoss 4.2.3. When I start the application server, deployment of my application fails. I have already run this application on another computer. What seems to be different is that when I list the files there is a dot preceding the permission columns, whereas on the former computer I don't see any dot in the permission columns. The bean definition is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="[URL] xmlns:xsi="[URL] xmlns:jee="[URL] xsi:schemaLocation="[URL] [URL] [URL] [URL]
<!-- Service -->
<bean id="campaignCatalogService" class="com.commonutil.utils.CometCommonManager" factory-method="getCampaignCatalogService"/>
<bean id="campaignPropertyService" class="com.commonutil.utils.CometCommonManager" factory-method="getCampaignPropertyService"/>
<bean id="comboBoxOptionService" class="com.commonutil.utils.CometCommonManager" factory-method="getComboBoxOptionService"/>
<bean id="workspaceService" class="com.commonutil.utils.CometCommonManager" factory-method="getWorkspaceService"/>
<bean id="campaignGroupPropertyService" class="com.commonutil.utils.CometCommonManager" factory-method="getCampaignGroupPropertyService"/>
<bean id="campaignDefinitionService" class="com.commonutil.utils.CometCommonManager" factory-method="getCampaignDefinitionService"/>
The error log is as follows:
05-27@13:57 26 ERROR (ContextLoader.java:215) - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'campaignDefinitionService' defined in ServletContext resource [/WEB-INF/applicationContext-core.xml]: Initialization of bean failed; nested exception is java.lang.StringIndexOutOfBoundsException: String index out of range: 10 at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:478) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:220) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:729) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:381) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3856) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4361) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:790) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:770) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:553) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:296) at org.jboss.mx.server.RawDynamicInvoker.invoke(RawDynamicInvoker.java:164) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.apache.catalina.core.StandardContext.init(StandardContext.java:5312) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:296) at org.jboss.mx.server.RawDynamicInvoker.invoke(RawDynamicInvoker.java:164) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.web.tomcat.service.TomcatDeployer.performDeployInternal(TomcatDeployer.java:301) at org.jboss.web.tomcat.service.TomcatDeployer.performDeploy(TomcatDeployer.java:104) at org.jboss.web.AbstractWebDeployer.start(AbstractWebDeployer.java:375) at org.jboss.web.WebModule.startModule(WebModule.java:83) at org.jboss.web.WebModule.startService(WebModule.java:61) at org.jboss.system.ServiceMBeanSupport.jbossInternalStart(ServiceMBeanSupport.java:289) at org.jboss.system.ServiceMBeanSupport.jbossInternalLifecycle(ServiceMBeanSupport.java:245) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at 
org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.server.Invocation.invoke(Invocation.java:86) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.system.ServiceController$ServiceProxy.invoke(ServiceController.java:978) at sun.proxy.$Proxy0.start(Unknown Source) at org.jboss.system.ServiceController.start(ServiceController.java:417) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.server.Invocation.invoke(Invocation.java:86) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy45.start(Unknown Source) at org.jboss.web.AbstractWebContainer.start(AbstractWebContainer.java:466) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.interceptor.AbstractInterceptor.invoke(AbstractInterceptor.java:133) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.interceptor.ModelMBeanOperationInterceptor.invoke(ModelMBeanOperationInterceptor.java:142) at org.jboss.mx.interceptor.DynamicInterceptor.invoke(DynamicInterceptor.java:97) at 
org.jboss.system.InterceptorServiceMBeanSupport.invokeNext(InterceptorServiceMBeanSupport.java:238) at org.jboss.wsf.container.jboss42.DeployerInterceptor.start(DeployerInterceptor.java:87) at org.jboss.deployment.SubDeployerInterceptorSupport$XMBeanInterceptor.start(SubDeployerInterceptorSupport.java:188) at org.jboss.deployment.SubDeployerInterceptor.invoke(SubDeployerInterceptor.java:95) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy46.start(Unknown Source) at org.jboss.deployment.MainDeployer.start(MainDeployer.java:1025) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:819) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.interceptor.AbstractInterceptor.invoke(AbstractInterceptor.java:133) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.interceptor.ModelMBeanOperationInterceptor.invoke(ModelMBeanOperationInterceptor.java:142) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy10.deploy(Unknown Source) at org.jboss.deployment.scanner.URLDeploymentScanner.deploy(URLDeploymentScanner.java:421) at 
org.jboss.deployment.scanner.URLDeploymentScanner.scan(URLDeploymentScanner.java:634) at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.doScan(AbstractDeploymentScanner.java:263) at org.jboss.deployment.scanner.AbstractDeploymentScanner.startService(AbstractDeploymentScanner.java:336) at org.jboss.system.ServiceMBeanSupport.jbossInternalStart(ServiceMBeanSupport.java:289) at org.jboss.system.ServiceMBeanSupport.jbossInternalLifecycle(ServiceMBeanSupport.java:245) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.server.Invocation.invoke(Invocation.java:86) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.system.ServiceController$ServiceProxy.invoke(ServiceController.java:978) at sun.proxy.$Proxy0.start(Unknown Source) at org.jboss.system.ServiceController.start(ServiceController.java:417) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.server.Invocation.invoke(Invocation.java:86) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy4.start(Unknown Source) at org.jboss.deployment.SARDeployer.start(SARDeployer.java:304) 
at org.jboss.deployment.MainDeployer.start(MainDeployer.java:1025) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:819) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:766) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.interceptor.AbstractInterceptor.invoke(AbstractInterceptor.java:133) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.interceptor.ModelMBeanOperationInterceptor.invoke(ModelMBeanOperationInterceptor.java:142) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy5.deploy(Unknown Source) at org.jboss.system.server.ServerImpl.doStart(ServerImpl.java:482) at org.jboss.system.server.ServerImpl.start(ServerImpl.java:362) at org.jboss.Main.boot(Main.java:200) at org.jboss.Main$1.run(Main.java:508) at java.lang.Thread.run(Thread.java:679) Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: 10 at java.lang.String.charAt(String.java:694) at org.jboss.web.tomcat.service.WebAppClassLoader.findClass(WebAppClassLoader.java:81) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1325) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1204) at java.lang.Class.forName0(Native Method) at 
java.lang.Class.forName(Class.java:266) at org.aspectj.weaver.reflect.ReflectionBasedReferenceTypeDelegateFactory.createDelegate(ReflectionBasedReferenceTypeDelegateFactory.java:40) at org.aspectj.weaver.reflect.ReflectionWorld.resolveDelegate(ReflectionWorld.java:111) at org.aspectj.weaver.World.resolveToReferenceType(World.java:363) at org.aspectj.weaver.World.resolve(World.java:258) at org.aspectj.weaver.ResolvedType.getDeclaringType(ResolvedType.java:1336) at org.aspectj.weaver.patterns.WithinPointcut.isWithinType(WithinPointcut.java:50) at org.aspectj.weaver.patterns.WithinPointcut.fastMatch(WithinPointcut.java:67) at org.aspectj.weaver.internal.tools.PointcutExpressionImpl.couldMatchJoinPointsInType(PointcutExpressionImpl.java:78) at org.springframework.aop.aspectj.AspectJExpressionPointcut.matches(AspectJExpressionPointcut.java:234) at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:198) at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:253) at org.springframework.aop.support.AopUtils.findAdvisorsThatCanApply(AopUtils.java:287) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findAdvisorsThatCanApply(AbstractAdvisorAutoProxyCreator.java:113) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findEligibleAdvisors(AbstractAdvisorAutoProxyCreator.java:85) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(AbstractAdvisorAutoProxyCreator.java:66) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:345) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:309) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:361) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1342) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:471) ... 149 more 05-27@13:57 26 ERROR (Discovery.java:410) - failed sending discovery request java.lang.NullPointerException at org.apache.log4j.Category.isEnabledFor(Category.java:749) at org.apache.commons.logging.impl.Log4JLogger.isTraceEnabled(Log4JLogger.java:333) at org.jgroups.protocols.TP.down(TP.java:1167) at org.jgroups.protocols.PING.sendMcastDiscoveryRequest(PING.java:278) at org.jgroups.protocols.PING.sendGetMembersRequest(PING.java:259) at org.jgroups.protocols.Discovery$PingSenderTask$1.run(Discovery.java:406) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679)
org.jboss.system.InterceptorServiceMBeanSupport.invokeNext(InterceptorServiceMBeanSupport.java:238) at org.jboss.wsf.container.jboss42.DeployerInterceptor.start(DeployerInterceptor.java:87) at org.jboss.deployment.SubDeployerInterceptorSupport$XMBeanInterceptor.start(SubDeployerInterceptorSupport.java:188) at org.jboss.deployment.SubDeployerInterceptor.invoke(SubDeployerInterceptor.java:95) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy46.start(Unknown Source) at org.jboss.deployment.MainDeployer.start(MainDeployer.java:1025) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:819) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782) at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.interceptor.AbstractInterceptor.invoke(AbstractInterceptor.java:133) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.interceptor.ModelMBeanOperationInterceptor.invoke(ModelMBeanOperationInterceptor.java:142) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy10.deploy(Unknown Source) at org.jboss.deployment.scanner.URLDeploymentScanner.deploy(URLDeploymentScanner.java:421) at 
org.jboss.deployment.scanner.URLDeploymentScanner.scan(URLDeploymentScanner.java:634) at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.doScan(AbstractDeploymentScanner.java:263) at org.jboss.deployment.scanner.AbstractDeploymentScanner.startService(AbstractDeploymentScanner.java:336) at org.jboss.system.ServiceMBeanSupport.jbossInternalStart(ServiceMBeanSupport.java:289) at org.jboss.system.ServiceMBeanSupport.jbossInternalLifecycle(ServiceMBeanSupport.java:245) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.server.Invocation.invoke(Invocation.java:86) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.system.ServiceController$ServiceProxy.invoke(ServiceController.java:978) at sun.proxy.$Proxy0.start(Unknown Source) at org.jboss.system.ServiceController.start(ServiceController.java:417) at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.server.Invocation.invoke(Invocation.java:86) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy4.start(Unknown Source) at org.jboss.deployment.SARDeployer.start(SARDeployer.java:304) 
at org.jboss.deployment.MainDeployer.start(MainDeployer.java:1025) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:819) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782) at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:766) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:94) at org.jboss.mx.interceptor.AbstractInterceptor.invoke(AbstractInterceptor.java:133) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.interceptor.ModelMBeanOperationInterceptor.invoke(ModelMBeanOperationInterceptor.java:142) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659) at org.jboss.mx.util.MBeanProxyExt.invoke(MBeanProxyExt.java:210) at sun.proxy.$Proxy5.deploy(Unknown Source) at org.jboss.system.server.ServerImpl.doStart(ServerImpl.java:482) at org.jboss.system.server.ServerImpl.start(ServerImpl.java:362) at org.jboss.Main.boot(Main.java:200) at org.jboss.Main$1.run(Main.java:508) at java.lang.Thread.run(Thread.java:679) Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: 10 at java.lang.String.charAt(String.java:694) at org.jboss.web.tomcat.service.WebAppClassLoader.findClass(WebAppClassLoader.java:81) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1325) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1204) at java.lang.Class.forName0(Native Method) at 
java.lang.Class.forName(Class.java:266) at org.aspectj.weaver.reflect.ReflectionBasedReferenceTypeDelegateFactory.createDelegate(ReflectionBasedReferenceTypeDelegateFactory.java:40) at org.aspectj.weaver.reflect.ReflectionWorld.resolveDelegate(ReflectionWorld.java:111) at org.aspectj.weaver.World.resolveToReferenceType(World.java:363) at org.aspectj.weaver.World.resolve(World.java:258) at org.aspectj.weaver.ResolvedType.getDeclaringType(ResolvedType.java:1336) at org.aspectj.weaver.patterns.WithinPointcut.isWithinType(WithinPointcut.java:50) at org.aspectj.weaver.patterns.WithinPointcut.fastMatch(WithinPointcut.java:67) at org.aspectj.weaver.internal.tools.PointcutExpressionImpl.couldMatchJoinPointsInType(PointcutExpressionImpl.java:78) at org.springframework.aop.aspectj.AspectJExpressionPointcut.matches(AspectJExpressionPointcut.java:234) at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:198) at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:253) at org.springframework.aop.support.AopUtils.findAdvisorsThatCanApply(AopUtils.java:287) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findAdvisorsThatCanApply(AbstractAdvisorAutoProxyCreator.java:113) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findEligibleAdvisors(AbstractAdvisorAutoProxyCreator.java:85) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(AbstractAdvisorAutoProxyCreator.java:66) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:345) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:309) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:361) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1342) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:471) ... 149 more 05-27@13:57 26 ERROR (Discovery.java:410) - failed sending discovery request java.lang.NullPointerException at org.apache.log4j.Category.isEnabledFor(Category.java:749) at org.apache.commons.logging.impl.Log4JLogger.isTraceEnabled(Log4JLogger.java:333) at org.jgroups.protocols.TP.down(TP.java:1167) at org.jgroups.protocols.PING.sendMcastDiscoveryRequest(PING.java:278) at org.jgroups.protocols.PING.sendGetMembersRequest(PING.java:259) at org.jgroups.protocols.Discovery$PingSenderTask$1.run(Discovery.java:406) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:679)
spring, jakarta-ee, jboss, rhel, selinux
1
4,137
3
https://stackoverflow.com/questions/16793426/cannot-initialize-spring-context-due-to-error-creating-bean-with-name-error
12,825,978
Django syncdb behaving strangely with unicode strings
I'm going to apologize ahead of time for the length of this post. I just want to be sure I didn't leave any info out. I have an application which uses django's ORM outside of a django app, and which runs "syncdb" by calling call_command('syncdb') directly (note: I'm using a virtualenv for every scenario listed below). My unit tests in the application try to set up a "test" django database using SQLite as the backend (whereas the production environment uses MySQL). Each time one of the unit tests runs, it calls call_command('syncdb') using the same test django settings throughout each test. I am able to run these unit tests on 2 different environments (one with Windows 7/Python 2.7.3, another with Mac OS X ML/Python 2.7.2). There are no issues at all with the tests, but these are relatively clean Python installs on both. However, when I try to run this on a RHEL server, I get the following error when the unit tests try to run syncdb: DatabaseError: table "my_app_mytable" already exists After a lot of frustrating googling and debugging, I (think) I've eliminated the bugs reported here and here . I did a lot of hacking around, and I think I have narrowed the problem down to this statement in django's syncdb command file (as crazy as that sounds) (line 59): tables = connection.introspection.table_names() I set up a pdb.set_trace() inside django's syncdb source on both environments to take a look. Here is what I found: (Env that works) (Pdb) tables [u'my_app_mytable', u'my_app_myothertable'] Seems OK there. From the looks of the syncdb file, django uses the tables variable to check an app's models against what's already in the database. 
(Env that doesn't work) (Pdb) tables [u'm\x00y\x00_\x00a\x00p\x00p\x00_\x00m\x00y\x00t\x00a\x00', u'm\x00y\x00_\x00a\x00p\x00p\x00_\x00m\x00y\x00o\x00t\x00'] Unless I'm just going crazy, I think this is making the following statement in django's source return false: def model_installed(model): opts = model._meta converter = connection.introspection.table_name_converter return not ((converter(opts.db_table) in tables) or (opts.auto_created and converter(opts.auto_created._meta.db_table) in tables)) This method is called via filter a few lines after that definition, and it looks like it checks to see if converter(opts.db_table) is in the tables list. I ran them manually in both environments too: (Env that works) (Pdb) opts = all_models[0][1][0]._meta (Pdb) converter = connection.introspection.table_name_converter (Pdb) converter(opts.db_table) in tables True As you can see I (kind of) manually ran the model_installed function to see what converter(opts.db_table) returns, and it looks like a perfectly normal string on both environments. However: (Env that doesn't work) (Pdb) opts = all_models[0][1][0]._meta (Pdb) converter = connection.introspection.table_name_converter (Pdb) converter(opts.db_table) in tables False So it looks like since the tables variable is a list of crazy-looking crud on the broken environment, that method is falsely claiming that each model's table name isn't in the database, which gives me the original error I stated at the beginning. Just to make sure I really wasn't going nuts, I also tried manually inserting the correct list to compare: (Env that doesn't work) (Pdb) converter(opts.db_table) in [u'my_app_mytable', u'my_app_myothertable'] True Do I need to recompile Python on this environment? 
I read the following question on stackoverflow, and found my broken environment was exhibiting weird behavior: (myvirtualenv)[username@myserver]$ python Python 2.7.3 (default, Apr 12 2012, 10:40:11) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import StringIO, cStringIO, sys >>> StringIO.StringIO(u"fubar").getvalue() u'fubar' >>> cStringIO.StringIO(u"fubar").getvalue() 'fubar' >>> cStringIO.StringIO(u"\u0405\u0406").getvalue() Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128) >>> sys.maxunicode 65535 >>> sys.byteorder 'little' EDIT: OK, so I looked a little bit through the django source some more, and it looks like they're getting the table list this way: def get_table_list(self, cursor): "Returns a list of table names in the current database." # Skip the sqlite_sequence system table used for autoincrement key # generation. cursor.execute(""" SELECT name FROM sqlite_master WHERE type='table' AND NOT name='sqlite_sequence' ORDER BY name""") return [row[0] for row in cursor.fetchall()] So I connected to the sqlite file manually in the python interpreter, and ran that query: (Env that doesn't work) >>> import sqlite3 >>> conn = sqlite3.connect('/path/to/sqlite/file') >>> curs = conn.cursor() >>> curs.execute(""" ... SELECT name FROM sqlite_master ... WHERE type='table' AND NOT name='sqlite_sequence' ... ORDER BY name""") <sqlite3.Cursor object at 0xb7557500> >>> curs.fetchall() [(u'c\x00c\x00_\x00s\x00t\x00a\x00t\x00s\x00_\x00c\x00c\x00',), (u'c\x00c\x00_\x00s\x00t\x00a\x00t\x00s\x00_\x00c\x00c\x00s',)] So it looks like SQLite returns a UTF16-LE string for that query. 
On the working environment, it returned the following: (Env that works) >>> curs.fetchall() [(u'my_app_mytable',), (u'my_app_myothertable',)] Even without an encoding defined at the top, the "working" environment doesn't seem to have any trouble interpreting my models file and creating the tables appropriately. Is there some SQLite default setting that is causing this? Or is git converting the file to UTF-16LE on the broken environment, and sticking with UTF-8/ASCII in the working environment?
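The NUL-interleaved names in the broken environment are exactly what UTF-16-LE bytes look like when decoded one byte per character. A minimal sketch of that round trip, in plain Python and independent of SQLite or Django (the table name is taken from the question):

```python
# Sketch: how a UTF-16-LE encoded table name ends up looking like
# u'm\x00y\x00_\x00...' when each byte is decoded as one character,
# which is what a driver does if it assumes single-byte text.

name = "my_app_mytable"

# Encode as UTF-16-LE, then (wrongly) decode byte-by-byte:
mangled = name.encode("utf-16-le").decode("latin-1")
print(repr(mangled))  # 'm\x00y\x00_\x00a\x00p\x00p\x00...'

# The damage is reversible as long as no bytes were truncated:
recovered = mangled.encode("latin-1").decode("utf-16-le")
print(recovered == name)  # True
```

This matches the shape of the `tables` list in the failing pdb session, which points at the database file itself carrying a UTF-16 encoding rather than at the Python build.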
Django syncdb behaving strangely with unicode strings I'm going to apologize ahead of time for the length of this post. I just want to be sure I didn't leave any info out. I have an application which uses django's ORM outside of a django app, which uses "syncdb" by calling call_command('syncdb') directly (note: I'm using a virtualenv for every scenario listed below). My unit tests in the application try to setup a "test" django database using SQLite as the backend (whereas the production environment uses MySQL). Each time one of the unit tests runs, it calls call_command('syncdb') using the same test django settings throughout each test. I am able to run these unit tests on 2 different environments (one with Windows 7/Python 2.7.3, another with Mac OS X ML/Python 2.7.2). There are no issues at all with the tests; but these are relatively clean Python installs on both. However, when I try to run this on a RHEL server, I get the following error, when the unit tests try to run syncdb: DatabaseError: table "my_app_mytable" already exists After a lot of frustrating googling and debugging, I (think) I've eliminated the bugs reported here and here . I did a lot of hacking around, and I think I have narrowed the problem down to this statement in django's syncdb command file (as crazy as that sounds) (line 59): tables = connection.introspection.table_names() I setup a pdb.set_trace() inside django's syncdb source on both environments to take a look. Here is what I found: (Env that works) (Pdb) tables [u'my_app_mytable', u'my_app_myothertable'] Seems OK there. From the looks of the syncdb file, django uses the tables variable to check an app's models against what's already in the database. 
(Env that doesn't work) (Pdb) tables [u'm\x00y\x00_\x00a\x00p\x00p\x00_\x00m\x00y\x00t\x00a\x00', u'm\x00y\x00_\x00a\x00p\x00p\x00_\x00m\x00y\x00o\x00t\x00'] Unless I'm just going crazy, I think this is making the following statement in django's source return false: def model_installed(model): opts = model._meta converter = connection.introspection.table_name_converter return not ((converter(opts.db_table) in tables) or (opts.auto_created and converter(opts.auto_created._meta.db_table) in tables)) This method is called via filter a few lines after that definition, and it looks like it checks to see if converter(opts.db_table) is in the tables list. I ran them manually in both environments too: (Env that works) (Pdb) opts = all_models[0][1][0]._meta (Pdb) converter = connection.introspection.table_name_converter (Pdb) converter(opts.db_table) in tables True As you can see I (kind of) manually ran the model_installed function to see what converter(opts.db_table) returns, and it looks like a perfectly normal string on both environments. However: (Env that doesn't work) (Pdb) opts = all_models[0][1][0]._meta (Pdb) converter = connection.introspection.table_name_converter (Pdb) converter(opts.db_table) in tables False So it looks like since the tables variable is a list of crazy-looking crud on the broken environment, that method is falsely claiming that each model's table name isn't in the database, which gives me the original error I stated at the beginning. Just to make sure I really wasn't going nuts, I also tried manually inserting the correct list to compare: (Env that doesn't work) (Pdb) converter(opts.db_table) in [u'my_app_mytable', u'my_app_myothertable'] True Do I need to recompile Python on this environment? 
I read the following question on stackoverflow, and found my broken environment was exhibiting weird behavior: (myvirtualenv)[username@myserver]$ python Python 2.7.3 (default, Apr 12 2012, 10:40:11) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import StringIO, cStringIO, sys >>> StringIO.StringIO(u"fubar").getvalue() u'fubar' >>> cStringIO.StringIO(u"fubar").getvalue() 'fubar' >>> cStringIO.StringIO(u"\u0405\u0406").getvalue() Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128) >>> sys.maxunicode 65535 >>> sys.byteorder 'little' EDIT: OK, so I looked a little bit through the django source some more, and it looks like they're getting the table list this way: def get_table_list(self, cursor): "Returns a list of table names in the current database." # Skip the sqlite_sequence system table used for autoincrement key # generation. cursor.execute(""" SELECT name FROM sqlite_master WHERE type='table' AND NOT name='sqlite_sequence' ORDER BY name""") return [row[0] for row in cursor.fetchall()] So I connected to the sqlite file manually in the python interpreter, and ran that query: (Env that doesn't work) >>> import sqlite3 >>> conn = sqlite3.connect('/path/to/sqlite/file') >>> curs = conn.cursor() >>> curs.execute(""" ... SELECT name FROM sqlite_master ... WHERE type='table' AND NOT name='sqlite_sequence' ... ORDER BY name""") <sqlite3.Cursor object at 0xb7557500> >>> curs.fetchall() [(u'c\x00c\x00_\x00s\x00t\x00a\x00t\x00s\x00_\x00c\x00c\x00',), (u'c\x00c\x00_\x00s\x00t\x00a\x00t\x00s\x00_\x00c\x00c\x00s',)] So it looks like SQLite returns a UTF16-LE string for that query. 
On the working environment, it returned the following: (Env that works) >>> curs.fetchall() [(u'my_app_mytable',), (u'my_app_myothertable',)] Even without an encoding defined at the top, the "working" environment doesn't seem to have any trouble interpreting my models file and creating the tables appropriately. Is there some SQLite default setting that is causing this? Or is git converting the file to UTF-16LE on the broken environment, and sticking with UTF-8/ASCII in the working environment?
python, django, unicode, virtualenv, rhel
1
462
3
https://stackoverflow.com/questions/12825978/django-syncdb-behaving-strangely-with-unicode-strings
79,671,853
Appending "string" value to auditd rule file, if it doesn't exist on RHEL
I have an issue that I most likely cannot formulate in a way that any search engine will give me a proper result, as I am receiving the same result every time. I am trying to append a "string" value to a .rules file in /etc/audit/rules.d, if it does not exist. This is my current attempt: AUDIT_RULES_DIR="/etc/audit/rules.d" AUDIT_RULE_FILE="$AUDIT_RULES_DIR/rapid7.rules" AUDIT_RULE='-a always,exit -F arch=b64 -S execve -F key=execve' grep -qxF $AUDIT_RULE $AUDIT_RULE_FILE || echo $AUDIT_RULE >> $AUDIT_RULE_FILE But I am getting the following output: grep: invalid option -- 'S' Usage: grep [OPTION]... PATTERNS [FILE]... Try 'grep --help' for more information. To my understanding, this means grep is interpreting the contents of the rule string as its own command-line options. Another attempt has been the following (suggested by Copilot): if ! auditctl -l | grep -q "$AUDIT_RULE"; then echo "[+] Adding audit rule: $AUDIT_RULE" echo "$AUDIT_RULE" >> "$AUDIT_RULE_FILE" else echo "[=] Audit rule already exists." fi It does append the string to the file; however, it continues to add it every single time, as the grep never seems to match. System is RHEL 9.5, but it is supposed to work on all versions down to 7.0. Any suggestions as to how I can achieve this?
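For what it's worth, the grep error in the first attempt comes from the unquoted expansion: `$AUDIT_RULE` splits into words, and grep reads `-a`, `-F`, `-S`, ... as its own options. (The `auditctl -l` variant can also keep failing to match, since auditctl may print rules in a normalized form that is not byte-identical to the string you wrote.) A minimal sketch with the expansions quoted and `-e` marking the dash-leading pattern, writing to a temp file rather than the real rules path:

```shell
#!/bin/sh
# Sketch: pass the rule to grep as a single, fixed-string pattern.
# Uses a temp file here instead of /etc/audit/rules.d/rapid7.rules.

AUDIT_RULE_FILE="$(mktemp)"
AUDIT_RULE='-a always,exit -F arch=b64 -S execve -F key=execve'

append_rule() {
    # -q quiet, -x whole line, -F fixed string; -e marks the next argument
    # as the pattern even though it starts with '-'.
    # printf instead of echo, since some echos would eat the leading -a.
    grep -qxFe "$AUDIT_RULE" "$AUDIT_RULE_FILE" \
        || printf '%s\n' "$AUDIT_RULE" >> "$AUDIT_RULE_FILE"
}

append_rule   # appends the rule
append_rule   # second call is a no-op: the line is already present

wc -l < "$AUDIT_RULE_FILE"   # 1
```

These are all POSIX options, so the same line should behave identically from RHEL 7 through 9.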
Appending &quot;string&quot; value to auditd rule file, if it doesn&#39;t exist on RHEL I have an issue that I most likely cannot formulate in a way, that any search engine will give me a proper result, as I am receiving the same result every time. I am trying to append a "string" value to a .rules file in /etc/audit/rules.d, if it does not exist. This is my current attempt: AUDIT_RULES_DIR="/etc/audit/rules.d" AUDIT_RULE_FILE="$AUDIT_RULES_DIR/rapid7.rules" AUDIT_RULE='-a always,exit -F arch=b64 -S execve -F key=execve' grep -qxF $AUDIT_RULE $AUDIT_RULE_FILE || echo $AUDIT_RULE >> $AUDIT_RULE_FILE But I am getting the following output: grep: invalid option -- 'S' Usage: grep [OPTION]... PATTERNS [FILE]... Try 'grep --help' for more information. Which to my understanding seems like it is attempting to paste the output as a command. Another attempt has been the following (suggested by Copilot): if ! auditctl -l | grep -q "$AUDIT_RULE"; then echo "[+] Adding audit rule: $AUDIT_RULE" echo "$AUDIT_RULE" >> "$AUDIT_RULE_FILE" else echo "[=] Audit rule already exists." fi It does append the string to the file, however it will continue to add it every single time, as it seems like it cannot process the grep properly. System is RHEL 9.5, but it is supposed to work on all versions down to 7.0. Any suggestions as to how I can achieve this?
linux, bash, sh, rhel
1
139
2
https://stackoverflow.com/questions/79671853/appending-string-value-to-auditd-rule-file-if-it-doesnt-exist-on-rhel
79,442,615
Is there a way to preserve timestamps of modified files in rpm package?
I am trying to create an rpm with cumulative changes packaged as a tar ball, i.e.: the tar contains all the modified files from the GA release to the current patch. When I create the rpm build I use the cp -Rp command to preserve the timestamps of the files, but when I install using yum install, the timestamps change to the latest time, i.e.: installation time. I want to preserve the files' timestamps, as I want to know when the files were originally added to the tar ball. I have a few queries: Is there any way to retain the timestamps? When I install using yum install, will it replace the files like "tar extract" or skip the files which are the same as those in the installation location? Is there a way to give a custom location to yum install (or any other way)?
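As a sanity check on the copying step: `cp -p`/`-Rp` is what carries mtimes into the buildroot in the first place, and rpm records each file's mtime in the package header (it shows up as the third field of `rpm -q --dump` output); whether the recorded value is restored at install time is a separate question, so it is worth verifying both ends. A throwaway sketch of the cp half, assuming GNU `touch` and `stat`:

```shell
#!/bin/sh
# Sketch: show that cp -p preserves the source mtime while a plain cp
# stamps the copy with the current time. Uses a throwaway directory,
# not a real buildroot.

d="$(mktemp -d)"
touch -d '2020-01-01 00:00:00' "$d/src"

cp -p "$d/src" "$d/with_p"     # mtime preserved from src
cp    "$d/src" "$d/without_p"  # mtime reset to "now"

stat -c '%n %Y' "$d/src" "$d/with_p" "$d/without_p"
```

If the mtimes are already correct in the buildroot but wrong after `yum install`, the change is happening on the rpm/yum side rather than in the packaging step.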
Is there a way to preserve timestamps of modified files in rpm package? I am trying to create a rpm with cumulative changes which is a tar ball, i.e.: the tar contains all the modified files from GA release to current patch. When I try to create a rpm build I use cp -Rp command to preserve the timestamp of the files, but when I install using yum install then the time changed to to the latest, i.e.: installation time. I want to preserve the files' time stamps, as I want to know when the files were added in the tar ball originally. I have few queries: Is there anyway to retain the timestamp? when I install using yum install will it replace the files like "tar extract" or skip the files which are same as the installation location? Is there a way to give custom location while yum install (or any other way)?
rpm, yum, rhel, rpmbuild, rpm-spec
1
88
1
https://stackoverflow.com/questions/79442615/is-there-a-way-to-preserve-timestamps-of-modified-files-in-rpm-package
77,081,662
Why does the owner of the podman volume for a rootless container get changed to this particular UID?
I am brand new to podman, and trying to understand something I have observed with permissions / UIDs. I have created two podman named volumes as my non-root user (named qsd ), one to be mounted in a rootless InfluxDB container and one for a rootless Grafana container: $ podman volume create influxdb_volume $ podman volume create grafana_volume I can see that this creates two directories, and that upon creation they are all owned by me: $ tree -pud -L 2 /home/qsd/.local/share/containers/storage/volumes/ /home/qsd/.local/share/containers/storage/volumes/ ├── [drwx------ qsd ] grafana_volume │ └── [drwxr-xr-x qsd ] _data └── [drwx------ qsd ] influxdb_volume └── [drwxr-xr-x qsd ] _data Now I create the two containers, and mount each of the named volumes into the respective containers: $ podman run -d --rm --name grafana_container --publish 3000:3000 --mount type=volume,source=grafana_volume,destination=/var/lib/grafana grafana $ podman run -d --rm --name influxdb_container --publish 8086:8086 --mount type=volume,source=influxdb_volume,destination=/var/lib/influxdb influxdb:1.8 and I can now see two processes running on the host, one with owner 166007 and one as qsd : $ ps -ef | grep "grafana server" 166007 162111 162101 1 12:57 ? 00:00:06 grafana server $ ps -ef | grep influxd qsd 162184 162174 0 12:57 ? 
00:00:00 influxd Now I think I understand why this is: Even though the containers are run by me (non-root user qsd ) the InfluxDB container runs inside as root (UID=0) and the Grafana container runs inside as user grafana (UID=472): $ podman exec -it influxdb_container id uid=0(root) gid=0(root) groups=0(root) $ podman exec -it grafana_container id uid=472(grafana) gid=0(root) groups=0(root) I can see that the UID of my user is 1005: $ id uid=1005(qsd) gid=1005(qsd) groups=1005(qsd),100(users) and the subuid file looks like this: $ cat /etc/subuid qsd:165536:65536 So, for my user qsd , I should have a mapping which starts at UID 165536 if I have a rootless container running a process as non-root. In the Grafana case, the UID on the host should appear as 165536 + 472 - 1 = 166007, as it does. In the InfluxDB case, the host sees it as simply qsd, because the process is running as root inside the container. Now, what I don't understand is: why the InfluxDB named volume directory has changed to the following ownership (UID, GID = 166534) after the container has started up: $ ls -ltr /home/qsd/.local/share/containers/storage/volumes/influxdb_volume/ total 0 drwxr-xr-x 5 166534 166534 41 Sep 11 12:58 _data $ ls -ltr /home/qsd/.local/share/containers/storage/volumes/grafana_volume/ total 0 drwxrwxrwx 6 166007 qsd 77 Sep 11 13:12 _data I would expect it to still be owned by qsd (as the process was running as qsd), because the container is running as root user inside. Where does this number 166534 come from, and why is it not qsd? Also, is there a better way to do this, rather than having the directory ownership change to these other UID values? Thank you!
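The arithmetic in the question generalizes: with the default rootless mapping, container UID 0 maps to the invoking user, and container UIDs 1 through 65536 map onto the subuid range, so host = subuid_start + container_uid - 1. Under that formula, 166534 corresponds to container UID 999, which is plausibly a UID the influxdb image chowns its volume path to at startup; that last part is an assumption about the image, not something shown in the question. A small sketch:

```python
# Sketch of the default rootless podman UID mapping described above.
# Values are taken from the question's output; the claim that container
# UID 999 is what the influxdb image uses is an assumption.

OWNER_UID = 1005       # uid of user 'qsd' on the host
SUBUID_START = 165536  # from /etc/subuid: qsd:165536:65536

def host_uid(container_uid):
    """Map a UID inside the container to the UID the host sees."""
    if container_uid == 0:
        return OWNER_UID                      # container root == the invoking user
    return SUBUID_START + container_uid - 1   # rest of the range, shifted by one

print(host_uid(0))    # 1005   -> shows up as 'qsd' (the influxd process)
print(host_uid(472))  # 166007 -> the grafana server process
print(host_uid(999))  # 166534 -> the mystery owner of influxdb_volume/_data
```

Running the inverse (166534 - 165536 + 1 = 999) against the image's /etc/passwd would confirm or refute the assumption.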
Why does the owner of the podman volume for a rootless container get changed to this particular UID? I am brand new to podman, and trying to understand something I have observed with permissions / UIDs. I have created two podman named volumes as my non-root user (named qsd ), one to be mounted in a rootless InfluxDB container and one for a rootless Grafana container: $ podman volume create influxdb_volume $ podman volume create grafana_volume I can see that this creates two directories, and that upon creation they are all owned by me: $ tree -pud -L 2 /home/qsd/.local/share/containers/storage/volumes/ /home/qsd/.local/share/containers/storage/volumes/ ├── [drwx------ qsd ] grafana_volume │ └── [drwxr-xr-x qsd ] _data └── [drwx------ qsd ] influxdb_volume └── [drwxr-xr-x qsd ] _data Now I create the two containers, and mount each of the named volumes into the respective ones: $ podman run -d --rm --name grafana_container --publish 3000:3000 --mount type=volume,source=grafana_volume,destination=/var/lib/grafana grafana $ podman run -d --rm --name influxdb_container --publish 8086:8086 --mount type=volume,source=influxdb_volume,destination=/var/lib/influxdb influxdb:1.8 and I can now see two processes running on the host, one with owner 166007 and one as qsd : $ ps -ef | grep "grafana server" 166007 162111 162101 1 12:57 ? 00:00:06 grafana server $ ps -ef | grep influxd qsd 162184 162174 0 12:57 ?
00:00:00 influxd Now I think I understand why this is: Even though the containers are run by me (non-root user qsd ) the InfluxDB container runs inside as root (UID=0) and the Grafana container runs inside as user grafana (UID=472): $ podman exec -it influxdb_container id uid=0(root) gid=0(root) groups=0(root) $ podman exec -it grafana_container id uid=472(grafana) gid=0(root) groups=0(root) I can see that the UID of my user is 1005: $ id uid=1005(qsd) gid=1005(qsd) groups=1005(qsd),100(users) and the subuid file looks like this: $ cat /etc/subuid qsd:165536:65536 So, for my user qsd , I should have a mapping which starts at UID 165536 if I have a rootless container running a process as non-root. In the Grafana case, the UID on the host should appear as 165536 + 472 - 1 = 166007, as it does. In the InfluxDB case, the host sees it as simply qsd, because the process is running as root inside the container. Now, what I don't understand is: why the InfluxDB named volume directory has changed to the following ownership (UID, GID = 166534) after the container has started up: $ ls -ltr /home/qsd/.local/share/containers/storage/volumes/influxdb_volume/ total 0 drwxr-xr-x 5 166534 166534 41 Sep 11 12:58 _data $ ls -ltr /home/qsd/.local/share/containers/storage/volumes/grafana_volume/ total 0 drwxrwxrwx 6 166007 qsd 77 Sep 11 13:12 _data I would expect it to still be owned by qsd (as the process was running as qsd), because the container is running as root user inside. Where does this number 166534 come from, and why is it not qsd? Also, is there a better way to do this, rather than having the directory permissions change to these other UID values? Thank you!
permissions, rhel, docker-volume, podman, uid
1
1,302
1
https://stackoverflow.com/questions/77081662/why-does-the-owner-of-the-podman-volume-for-a-rootless-container-get-changed-to
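The subuid arithmetic this question walks through can be sketched in a few lines of Python. The constants 165536 and 1005 come straight from the question's /etc/subuid and id output; the mapping rule (container UID 0 maps to the invoking user, container UID N maps to subuid_start + N - 1) is the one the question itself derives from the Grafana case. Running the inverse on 166534 shows which in-container UID would produce that host owner under the same mapping:

```python
SUBUID_START = 165536   # first value in "qsd:165536:65536" (/etc/subuid)
OWNER_UID = 1005        # host UID of the user running podman ("qsd")

def to_host_uid(container_uid):
    """Host UID that a rootless container's UID appears as on the host."""
    if container_uid == 0:
        return OWNER_UID                      # container root == invoking user
    return SUBUID_START + container_uid - 1   # container 1 -> 165536, 2 -> 165537, ...

def to_container_uid(host_uid):
    """Inverse mapping: which in-container UID owns a host file."""
    if host_uid == OWNER_UID:
        return 0
    return host_uid - SUBUID_START + 1

print(to_host_uid(472))          # -> 166007, matching the grafana ps output
print(to_container_uid(166534))  # -> 999
```

Under this arithmetic, host UID 166534 corresponds to container UID 999, which would suggest the volume was chowned to some non-root in-image UID (not by the root process directly) - a hedged inference from the numbers alone, not a confirmed explanation.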
76,979,438
Denodo on RHEL 7 without gui
How do I install Denodo on RHEL 7 from the CLI, without a GUI? I have installed it using denodo_installer_cli.sh on RHEL 7 and the installation was successful, but it throws an "x11 variable not found" error when I run the script denodo_platform.sh. All the requirements have been fulfilled.
Denodo on RHEL 7 without gui How do I install Denodo on RHEL 7 from the CLI, without a GUI? I have installed it using denodo_installer_cli.sh on RHEL 7 and the installation was successful, but it throws an "x11 variable not found" error when I run the script denodo_platform.sh. All the requirements have been fulfilled.
installation, command-line-interface, rhel, denodo
1
95
1
https://stackoverflow.com/questions/76979438/denodo-on-rhel-7-without-gui
75,262,392
On el8/el9/newer, how do you get newer versions of software like python3, gcc, java, etc?
For example, on el7: to develop an nvidia CUDA application you need a newer gcc than the default gcc version 4.8.x, and to get the newer version you would use a software repo called "Software Collections" (SCL); the base python3 is 3.6, and when you need newer python modules you install python3.8 from SCL. Starting on el8 and el9, SCL is deprecated, and so there is a different method for installing and configuring newer versions of gcc and python3. On el8/el9/newer, how do you get newer versions of software like python3, gcc, java, etc.?
On el8/el9/newer, how do you get newer versions of software like python3, gcc, java, etc? For example, on el7: to develop an nvidia CUDA application you need a newer gcc than the default gcc version 4.8.x, and to get the newer version you would use a software repo called "Software Collections" (SCL); the base python3 is 3.6, and when you need newer python modules you install python3.8 from SCL. Starting on el8 and el9, SCL is deprecated, and so there is a different method for installing and configuring newer versions of gcc and python3. On el8/el9/newer, how do you get newer versions of software like python3, gcc, java, etc.?
linux, rhel, dnf
1
1,988
1
https://stackoverflow.com/questions/75262392/on-el8-el9-newer-how-do-you-get-newer-versions-of-software-like-python3-gcc-j
74,386,124
How to find if a network interface is using dhcp/static ip without relying on sh (e.g. ip route) using python?
I currently need to find out if any network interface (on RHEL) is configured to use a dynamic IP (DHCP) using python (python2 preferably, but if there's a solution using python3, I'd like to hear it). I don't want to spawn a bash subprocess and just parse the response from ip route or nmcli, for instance. What we have tried/found out so far: There's a python lib (currently archived) netifaces, but even with that we were not able to make it give us that answer. We're also having second thoughts about depending on a lib that was archived a while ago. On /sys/class/net it's possible to see the interfaces, and on them there are a lot of properties, but none of them seems to give what we want. Maybe there's another Linux interface that could give us that?! On /etc/sysconfig/network-scripts/ifcfg-** it's possible to see options of every interface (NetworkManager is the owner I guess) and one could parse the BOOTPROTO= option. However, that does not necessarily show the current connection method, because one could change that file or even modify the interface through nmcli, and that would only be read after the interface has been restarted (down/up). Note: Sorry if some specific network terms were not used correctly, I'm not an expert there :) Any thoughts? Any ideas?
How to find if a network interface is using dhcp/static ip without relying on sh (e.g. ip route) using python? I currently need to find out if any network interface (on RHEL) is configured to use a dynamic IP (DHCP) using python (python2 preferably, but if there's a solution using python3, I'd like to hear it). I don't want to spawn a bash subprocess and just parse the response from ip route or nmcli, for instance. What we have tried/found out so far: There's a python lib (currently archived) netifaces, but even with that we were not able to make it give us that answer. We're also having second thoughts about depending on a lib that was archived a while ago. On /sys/class/net it's possible to see the interfaces, and on them there are a lot of properties, but none of them seems to give what we want. Maybe there's another Linux interface that could give us that?! On /etc/sysconfig/network-scripts/ifcfg-** it's possible to see options of every interface (NetworkManager is the owner I guess) and one could parse the BOOTPROTO= option. However, that does not necessarily show the current connection method, because one could change that file or even modify the interface through nmcli, and that would only be read after the interface has been restarted (down/up). Note: Sorry if some specific network terms were not used correctly, I'm not an expert there :) Any thoughts? Any ideas?
python, linux, python-2.7, network-programming, rhel
1
689
1
https://stackoverflow.com/questions/74386124/how-to-find-if-a-network-interface-is-using-dhcp-static-ip-without-relying-on-sh
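For what it's worth, the third approach the question describes (parsing BOOTPROTO= out of the ifcfg-* files) can be sketched in pure stdlib Python, with the same caveat the question already raises: this reflects the on-disk configuration, not the live lease. The path glob and key name are taken from the question; the parsing itself is plain line splitting, and the code deliberately avoids python3-only syntax:

```python
import glob

def bootproto(text):
    """Return the BOOTPROTO value from ifcfg-style 'KEY=value' text
    (e.g. 'dhcp', 'static', 'none'), or None if the key is absent."""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("BOOTPROTO="):
            # value may be bare or quoted: BOOTPROTO=dhcp / BOOTPROTO="dhcp"
            return line.split("=", 1)[1].strip('"\'').lower()
    return None

def dhcp_interfaces(pattern="/etc/sysconfig/network-scripts/ifcfg-*"):
    """Names of interfaces whose config file says BOOTPROTO=dhcp."""
    found = []
    for path in glob.glob(pattern):
        with open(path) as fh:
            if bootproto(fh.read()) == "dhcp":
                found.append(path.rsplit("ifcfg-", 1)[-1])
    return found
```

Querying NetworkManager over D-Bus would be the way to get the *live* method instead, but that pulls in a non-stdlib dependency, which the question is trying to avoid.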
74,243,015
How to resolve kube-proxy stuck in container creating state?
I am trying to join an Ubuntu host to a RHEL Kubernetes master. I installed Kubernetes version 1.24.3 and am using the CRI-O runtime. If I join a RHEL VM to the RHEL Kubernetes master, there is no issue, but when I join the Ubuntu host to the RHEL Kubernetes master, kube-proxy in kube-system is stuck in the ContainerCreating state. Describing the node gives the following error: Failed to create pod sandbox: rpc error: code = Unknown desc = error creating pod sandbox with name "k8s_kube-proxy-s56kp_kube-system_(ID)": initializing source docker://registry.k8s.io/pause:3.6: pinging container registry registry.k8s.io: Get "[URL] dial tcp (ip):443: i/o timeout How to resolve this issue?
How to resolve kube-proxy stuck in container creating state? I am trying to join an Ubuntu host to a RHEL Kubernetes master. I installed Kubernetes version 1.24.3 and am using the CRI-O runtime. If I join a RHEL VM to the RHEL Kubernetes master, there is no issue, but when I join the Ubuntu host to the RHEL Kubernetes master, kube-proxy in kube-system is stuck in the ContainerCreating state. Describing the node gives the following error: Failed to create pod sandbox: rpc error: code = Unknown desc = error creating pod sandbox with name "k8s_kube-proxy-s56kp_kube-system_(ID)": initializing source docker://registry.k8s.io/pause:3.6: pinging container registry registry.k8s.io: Get "[URL] dial tcp (ip):443: i/o timeout How to resolve this issue?
ubuntu, kubernetes, rhel, cri-o
1
623
1
https://stackoverflow.com/questions/74243015/how-to-resolve-kube-proxy-stuck-in-container-creating-state
73,465,861
Apache server will not start, but works while it's "Activating" and does not give any meaningful errors
Apache 2.4.6 RHEL 7.9 PHP 7.4.30 Running this in a Google VM! Ran "apachectl configtest" and got a "Syntax OK" message. Turned debug logging on in httpd.conf and started the process. This is the output of systemctl status httpd httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) Active: activating (start) since Tue 2022-08-23 22:59:59 UTC; 7s ago Docs: man:httpd(8) man:apachectl(8) Main PID: 1866 (httpd) CGroup: /system.slice/httpd.service ├─1866 /usr/sbin/httpd -DFOREGROUND ├─1867 /usr/sbin/httpd -DFOREGROUND ├─1868 /usr/sbin/httpd -DFOREGROUND ├─1869 /usr/sbin/httpd -DFOREGROUND ├─1870 /usr/sbin/httpd -DFOREGROUND └─1871 /usr/sbin/httpd -DFOREGROUND Here is the output in error_log: [Tue Aug 23 22:59:59.144963 2022] [core:notice] [pid 1866] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0 [Tue Aug 23 22:59:59.146252 2022] [suexec:notice] [pid 1866] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Aug 23 22:59:59.183943 2022] [core:warn] [pid 1866] AH00098: pid file /etc/httpd/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run? 
[Tue Aug 23 22:59:59.187253 2022] [proxy:debug] [pid 1867] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 23 22:59:59.187346 2022] [proxy:debug] [pid 1867] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.187378 2022] [proxy:debug] [pid 1867] proxy_util.c(1939): AH00931: initialized single connection worker in child 1867 for (*) [Tue Aug 23 22:59:59.188948 2022] [proxy:debug] [pid 1868] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 23 22:59:59.189021 2022] [proxy:debug] [pid 1868] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.189047 2022] [proxy:debug] [pid 1868] proxy_util.c(1939): AH00931: initialized single connection worker in child 1868 for (*) [Tue Aug 23 22:59:59.190495 2022] [proxy:debug] [pid 1869] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 23 22:59:59.190562 2022] [proxy:debug] [pid 1869] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.190596 2022] [proxy:debug] [pid 1869] proxy_util.c(1939): AH00931: initialized single connection worker in child 1869 for (*) [Tue Aug 23 22:59:59.190866 2022] [mpm_prefork:notice] [pid 1866] AH00163: Apache/2.4.6 (Red Hat Enterprise Linux) PHP/7.4.30 configured -- resuming normal operations [Tue Aug 23 22:59:59.190886 2022] [mpm_prefork:info] [pid 1866] AH00164: Server built: Mar 22 2022 15:35:18 [Tue Aug 23 22:59:59.190901 2022] [core:notice] [pid 1866] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' [Tue Aug 23 22:59:59.190921 2022] [mpm_prefork:debug] [pid 1866] prefork.c(1005): AH00165: Accept mutex: sysvsem (default: sysvsem) [Tue Aug 23 22:59:59.192147 2022] [proxy:debug] [pid 1870] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 23 22:59:59.192152 2022] [proxy:debug] [pid 1871] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 
23 22:59:59.192217 2022] [proxy:debug] [pid 1871] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.192253 2022] [proxy:debug] [pid 1871] proxy_util.c(1939): AH00931: initialized single connection worker in child 1871 for (*) [Tue Aug 23 22:59:59.192298 2022] [proxy:debug] [pid 1870] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.192332 2022] [proxy:debug] [pid 1870] proxy_util.c(1939): AH00931: initialized single connection worker in child 1870 for (*) Plugging the server IP in a web browser loads the page with no issues. However, after a little while, the start process will timeout and the app will exit. What am I missing? ● httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) Active: failed (Result: signal) since Tue 2022-08-23 23:02:59 UTC; 15s ago Docs: man:httpd(8) man:apachectl(8) Process: 1866 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=killed, signal=KILL) Main PID: 1866 (code=killed, signal=KILL) Aug 23 22:59:59 instance-1 systemd[1]: Starting The Apache HTTP Server... Aug 23 23:01:29 instance-1 systemd[1]: httpd.service start operation timed out. Terminating. Aug 23 23:02:59 instance-1 systemd[1]: httpd.service stop-sigterm timed out. Killing. Aug 23 23:02:59 instance-1 systemd[1]: httpd.service: main process exited, code=killed, status=9/KILL Aug 23 23:02:59 instance-1 systemd[1]: Failed to start The Apache HTTP Server. Aug 23 23:02:59 instance-1 systemd[1]: Unit httpd.service entered failed state. Aug 23 23:02:59 instance-1 systemd[1]: httpd.service failed.
Apache server will not start, but works while it's "Activating" and does not give any meaningful errors Apache 2.4.6 RHEL 7.9 PHP 7.4.30 Running this in a Google VM! Ran "apachectl configtest" and got a "Syntax OK" message. Turned debug logging on in httpd.conf and started the process. This is the output of systemctl status httpd httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) Active: activating (start) since Tue 2022-08-23 22:59:59 UTC; 7s ago Docs: man:httpd(8) man:apachectl(8) Main PID: 1866 (httpd) CGroup: /system.slice/httpd.service ├─1866 /usr/sbin/httpd -DFOREGROUND ├─1867 /usr/sbin/httpd -DFOREGROUND ├─1868 /usr/sbin/httpd -DFOREGROUND ├─1869 /usr/sbin/httpd -DFOREGROUND ├─1870 /usr/sbin/httpd -DFOREGROUND └─1871 /usr/sbin/httpd -DFOREGROUND Here is the output in error_log: [Tue Aug 23 22:59:59.144963 2022] [core:notice] [pid 1866] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0 [Tue Aug 23 22:59:59.146252 2022] [suexec:notice] [pid 1866] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Aug 23 22:59:59.183943 2022] [core:warn] [pid 1866] AH00098: pid file /etc/httpd/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Tue Aug 23 22:59:59.187253 2022] [proxy:debug] [pid 1867] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 23 22:59:59.187346 2022] [proxy:debug] [pid 1867] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.187378 2022] [proxy:debug] [pid 1867] proxy_util.c(1939): AH00931: initialized single connection worker in child 1867 for (*) [Tue Aug 23 22:59:59.188948 2022] [proxy:debug] [pid 1868] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 23 22:59:59.189021 2022] [proxy:debug] [pid 1868] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.189047 2022] [proxy:debug] [pid 1868] proxy_util.c(1939): AH00931: initialized single connection worker in child 1868 for (*) [Tue Aug 23 22:59:59.190495 2022] [proxy:debug] [pid 1869] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 23 22:59:59.190562 2022] [proxy:debug] [pid 1869] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.190596 2022] [proxy:debug] [pid 1869] proxy_util.c(1939): AH00931: initialized single connection worker in child 1869 for (*) [Tue Aug 23 22:59:59.190866 2022] [mpm_prefork:notice] [pid 1866] AH00163: Apache/2.4.6 (Red Hat Enterprise Linux) PHP/7.4.30 configured -- resuming normal operations [Tue Aug 23 22:59:59.190886 2022] [mpm_prefork:info] [pid 1866] AH00164: Server built: Mar 22 2022 15:35:18 [Tue Aug 23 22:59:59.190901 2022] [core:notice] [pid 1866] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND' [Tue Aug 23 22:59:59.190921 2022] [mpm_prefork:debug] [pid 1866] prefork.c(1005): AH00165: Accept mutex: sysvsem (default: sysvsem) [Tue Aug 23 22:59:59.192147 2022] [proxy:debug] [pid 1870] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 23 22:59:59.192152 2022] [proxy:debug] [pid 1871] proxy_util.c(1843): AH00925: initializing worker proxy:reverse shared [Tue Aug 
23 22:59:59.192217 2022] [proxy:debug] [pid 1871] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.192253 2022] [proxy:debug] [pid 1871] proxy_util.c(1939): AH00931: initialized single connection worker in child 1871 for (*) [Tue Aug 23 22:59:59.192298 2022] [proxy:debug] [pid 1870] proxy_util.c(1888): AH00927: initializing worker proxy:reverse local [Tue Aug 23 22:59:59.192332 2022] [proxy:debug] [pid 1870] proxy_util.c(1939): AH00931: initialized single connection worker in child 1870 for (*) Plugging the server IP in a web browser loads the page with no issues. However, after a little while, the start process will timeout and the app will exit. What am I missing? ● httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) Active: failed (Result: signal) since Tue 2022-08-23 23:02:59 UTC; 15s ago Docs: man:httpd(8) man:apachectl(8) Process: 1866 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=killed, signal=KILL) Main PID: 1866 (code=killed, signal=KILL) Aug 23 22:59:59 instance-1 systemd[1]: Starting The Apache HTTP Server... Aug 23 23:01:29 instance-1 systemd[1]: httpd.service start operation timed out. Terminating. Aug 23 23:02:59 instance-1 systemd[1]: httpd.service stop-sigterm timed out. Killing. Aug 23 23:02:59 instance-1 systemd[1]: httpd.service: main process exited, code=killed, status=9/KILL Aug 23 23:02:59 instance-1 systemd[1]: Failed to start The Apache HTTP Server. Aug 23 23:02:59 instance-1 systemd[1]: Unit httpd.service entered failed state. Aug 23 23:02:59 instance-1 systemd[1]: httpd.service failed.
apache, redhat, httpd.conf, rhel, rhel7
1
666
1
https://stackoverflow.com/questions/73465861/apache-server-will-not-start-but-works-while-its-activating-and-does-not-giv
73,085,664
How do I reformat XML to work with MySQL LoadXML
I am on a red hat system and I have multiple XML files generated from various SOAP requests that are in a format that is not compatible with MySQL's LoadXML function. I need to load the data into MySQL tables. One table will be set up for each type of XML file, depending on the data received via the Soap XML API. The sample format of one of the files is as follows, but each file will have a different number of columns and different column names. I am trying to find a way to convert them to a compatible format in the most generic way possible, since I will have to create a customized solution for each API request/response. <soap:Envelope xmlns:soap="[URL] <soap:Body> <dbd:DataRetrievalRequestResponse xmlns:dbd="dbd.v1"> <DataObjects> <ObjectSelect> <mdNm>controller-ac</mdNm> <meNm>WALL-EQPT-A</meNm> </ObjectSelect> <DataInstances> <DataInstance> <instanceId>DSS1</instanceId> <Attribute> <name>Name</name> <value>DSS1</value> </Attribute> <Attribute> <name>Operational Mode</name> <value>mode-fast</value> </Attribute> <Attribute> <name>Rate - Down</name> <value>1099289</value> </Attribute> <Attribute> <name>Rate - Up</name> <value>1479899</value> </Attribute> </DataInstance> <DataInstance> <instanceId>DSS2</instanceId> <Attribute> <name>Name</name> <value>DSS2</value> </Attribute> <Attribute> <name>Operational Mode</name> <value>mode-fast</value> </Attribute> <Attribute> <name>Rate - Down</name> <value>1299433</value> </Attribute> <Attribute> <name>Rate - Up</name> <value>1379823</value> </Attribute> </DataInstance> </DataInstances> </DataObjects> </dbd:DataRetrievalRequestResponse> </soap:Body> </soap:Envelope> Of course I want the data to be entered into a mysql table with column names 'id, Name, Group' rows for each unique instance Name Operational Mode Rate - Down Rate - Up DSS1 mode-fast 1099289 1479899 DSS2 mode-fast 1299433 1379823 Do I need to create an XSLT and preprocess this XML data from the command line prior to feeding it to LoadXML, to get it into a format that the
MySQL LoadXML function will accept? This would not be a problem, but I am not familiar with XSLT transformations. Is there a way to reformat the above XML to straight CSV (preferred), or to another XML format that is compatible, such as the examples given in mysql documentation for loadxml? <row> <field name='column1'>value1</field> <field name='column2'>value2</field> </row> I tried doing LOAD DATA INFILE and using ExtractValue function, but some of the values have spaces in them, and the delimiter for ExtractValue is hard coded to single-space. This makes it unusable as a workaround.
How do I reformat XML to work with MySQL LoadXML I am on a red hat system and I have multiple XML files generated from various SOAP requests that are in a format that is not compatible with MySQL's LoadXML function. I need to load the data into MySQL tables. One table will be set up for each type of XML file, depending on the data received via the Soap XML API. The sample format of one of the files is as follows, but each file will have a different number of columns and different column names. I am trying to find a way to convert them to a compatible format in the most generic way possible, since I will have to create a customized solution for each API request/response. <soap:Envelope xmlns:soap="[URL] <soap:Body> <dbd:DataRetrievalRequestResponse xmlns:dbd="dbd.v1"> <DataObjects> <ObjectSelect> <mdNm>controller-ac</mdNm> <meNm>WALL-EQPT-A</meNm> </ObjectSelect> <DataInstances> <DataInstance> <instanceId>DSS1</instanceId> <Attribute> <name>Name</name> <value>DSS1</value> </Attribute> <Attribute> <name>Operational Mode</name> <value>mode-fast</value> </Attribute> <Attribute> <name>Rate - Down</name> <value>1099289</value> </Attribute> <Attribute> <name>Rate - Up</name> <value>1479899</value> </Attribute> </DataInstance> <DataInstance> <instanceId>DSS2</instanceId> <Attribute> <name>Name</name> <value>DSS2</value> </Attribute> <Attribute> <name>Operational Mode</name> <value>mode-fast</value> </Attribute> <Attribute> <name>Rate - Down</name> <value>1299433</value> </Attribute> <Attribute> <name>Rate - Up</name> <value>1379823</value> </Attribute> </DataInstance> </DataInstances> </DataObjects> </dbd:DataRetrievalRequestResponse> </soap:Body> </soap:Envelope> Of course I want the data to be entered into a mysql table with column names 'id, Name, Group' rows for each unique instance Name Operational Mode Rate - Down Rate - Up DSS1 mode-fast 1099289 1479899 DSS2 mode-fast 1299433 1379823 Do I need to create an XSLT and preprocess this XML data from the command line prior to
feeding it to LoadXML, to get it into a format that the MySQL LoadXML function will accept? This would not be a problem, but I am not familiar with XSLT transformations. Is there a way to reformat the above XML to straight CSV (preferred), or to another XML format that is compatible, such as the examples given in mysql documentation for loadxml? <row> <field name='column1'>value1</field> <field name='column2'>value2</field> </row> I tried doing LOAD DATA INFILE and using the ExtractValue function, but some of the values have spaces in them, and the delimiter for ExtractValue is hard coded to single-space. This makes it unusable as a workaround.
mysql, linux, xml, xslt, rhel
1
74
1
https://stackoverflow.com/questions/73085664/how-do-i-reformat-xml-to-work-with-mysql-loadxml
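Since the question prefers straight CSV and mentions unfamiliarity with XSLT, one hedged alternative for the preprocessing step is a small stdlib-Python script rather than a stylesheet. The element names (DataInstance, Attribute, name, value) are taken from the sample, whose data elements carry no namespace prefix, so a plain iter() finds them; whether this generalizes to the other SOAP responses depends on each response following the same name/value attribute pattern:

```python
import csv
import sys
import xml.etree.ElementTree as ET

def soap_to_rows(xml_text):
    """Turn each <DataInstance> into an ordered {column: value} dict,
    using its <Attribute><name>/<value> pairs as the columns."""
    root = ET.fromstring(xml_text)
    rows = []
    for inst in root.iter("DataInstance"):
        rows.append({a.findtext("name"): a.findtext("value")
                     for a in inst.iter("Attribute")})
    return rows

def write_csv(rows, out=sys.stdout):
    """Emit CSV with a header taken from the first row's columns;
    csv handles values containing spaces or commas by quoting."""
    writer = csv.DictWriter(out, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```

Each response type would then produce a CSV whose header matches that table's columns, which LOAD DATA INFILE can ingest without the single-space-delimiter problem the question hit with ExtractValue.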
73,037,866
kafka + what are the advantages of more kafka machines in the cluster
We have the following options in order to design a new Kafka production cluster (based on OS - RHEL 7.9): Option 1: 5 Kafka brokers with usable storage per broker of 96,000 GB (96 TB). Option 2: 18 Kafka brokers with usable storage per broker of 20,000 GB (20 TB). We can see that the second option is more expensive - 18 machines (with 20 TB of Kafka disk storage each), while the first option has only 5 machines, but with 96 TB of Kafka disk storage each. We are wondering what is best practice from the Kafka performance side: is it better to have 18 Kafka machines with less storage, or 5 Kafka machines with 96 TB of storage per machine? What are the advantages and disadvantages of options 1 and 2?
kafka + what are the advantages of more kafka machines in the cluster We have the following options in order to design a new Kafka production cluster (based on OS - RHEL 7.9): Option 1: 5 Kafka brokers with usable storage per broker of 96,000 GB (96 TB). Option 2: 18 Kafka brokers with usable storage per broker of 20,000 GB (20 TB). We can see that the second option is more expensive - 18 machines (with 20 TB of Kafka disk storage each), while the first option has only 5 machines, but with 96 TB of Kafka disk storage each. We are wondering what is best practice from the Kafka performance side: is it better to have 18 Kafka machines with less storage, or 5 Kafka machines with 96 TB of storage per machine? What are the advantages and disadvantages of options 1 and 2?
apache-kafka, rhel
1
165
1
https://stackoverflow.com/questions/73037866/kafka-what-are-the-advantages-of-more-kafka-machines-in-the-cluster
72,615,853
Script to extract switchport status via snmp
Good day, looking for some assistance, still quite new to scripting and don't have a clue where to start with this. Scenario: I have multiple Cisco switches on which I want to check the port status and alias and only output certain information from the output. So for example I would run snmpwalk -v 2c -c mycommunity myswitchname .1.3.6.1.2.1.2.2.1.8 and will get a list of all ports with their status, and then snmpwalk -v 2c -c mycommunity myswitchname .1.3.6.1.2.1.31.1.1.1.18 will give me a list of the interface aliases. These could be quite long lists if it's a stacked switch setup. Now I need to match the status with the alias and output something like "aliasname is up or aliasname is down" to a file, but I also want to exclude empty strings in the alias output to reduce file size. I would also be using a file containing all the switch names instead of having to run the snmp command manually on every switch, so I would probably need to reference the file to run the command on. Ideally the output would look something like: Switch1 interface "aliasname" is up Switch1 interface "aliasname" is down etc Sample output of SNMP commands snmpwalk with 1.3.6.1.2.1.31.1.1.1.18 will produce something like below IF_MIB::ifAlias.1 = STRING: IF_MIB::ifAlias.2 = STRING: IF_MIB::ifAlias.3 = STRING: IF_MIB::ifAlias.4 = STRING: IF_MIB::ifAlias.5 = STRING: IF_MIB::ifAlias.6 = STRING: IF_MIB::ifAlias.7 = STRING: IF_MIB::ifAlias.8 = STRING: IF_MIB::ifAlias.9 = STRING: IF_MIB::ifAlias.10 = STRING: IF_MIB::ifAlias.11 = STRING: IF_MIB::ifAlias.12 = STRING: IF_MIB::ifAlias.13 = STRING: IF_MIB::ifAlias.14 = STRING: IF_MIB::ifAlias.15 = STRING: IF_MIB::ifAlias.16 = STRING: IF_MIB::ifAlias.17 = STRING: IF_MIB::ifAlias.18 = STRING: IF_MIB::ifAlias.19 = STRING: IF_MIB::ifAlias.20 = STRING: AP034 IF_MIB::ifAlias.21 = STRING: IF_MIB::ifAlias.22 = STRING: IF_MIB::ifAlias.23 = STRING: IF_MIB::ifAlias.24 = STRING: AP031 IF_MIB::ifAlias.25 = STRING: IF_MIB::ifAlias.26 = STRING: AP022 IF_MIB::ifAlias.27 =
STRING: AP027 IF_MIB::ifAlias.28 = STRING: IF_MIB::ifAlias.29 = STRING: IF_MIB::ifAlias.30 = STRING: Time Clock IF_MIB::ifAlias.31 = STRING: IF_MIB::ifAlias.32 = STRING: IF_MIB::ifAlias.33 = STRING: IF_MIB::ifAlias.34 = STRING: Intercom Office IF_MIB::ifAlias.35 = STRING: IF_MIB::ifAlias.36 = STRING: AP018 IF_MIB::ifAlias.37 = STRING: IF_MIB::ifAlias.38 = STRING: SW002 IF_MIB::ifAlias.39 = STRING: IF_MIB::ifAlias.40 = STRING: snmpwalk with .1.3.6.1.2.1.2.2.1.8 will produce below output IF_MIB::ifOperStatus.1 = INTEGER: down(2) IF_MIB::ifOperStatus.2 = INTEGER: down(2) IF_MIB::ifOperStatus.3 = INTEGER: up(1) IF_MIB::ifOperStatus.4 = INTEGER: up(1) IF_MIB::ifOperStatus.5 = INTEGER: up(1) IF_MIB::ifOperStatus.6 = INTEGER: up(1) IF_MIB::ifOperStatus.7 = INTEGER: up(1) IF_MIB::ifOperStatus.8 = INTEGER: up(1) IF_MIB::ifOperStatus.9 = INTEGER: up(1) IF_MIB::ifOperStatus.10 = INTEGER: up(1) IF_MIB::ifOperStatus.11 = INTEGER: up(1) IF_MIB::ifOperStatus.12 = INTEGER: up(1) IF_MIB::ifOperStatus.13 = INTEGER: up(1) IF_MIB::ifOperStatus.14 = INTEGER: up(1) IF_MIB::ifOperStatus.15 = INTEGER: up(1) IF_MIB::ifOperStatus.16 = INTEGER: up(1) IF_MIB::ifOperStatus.17 = INTEGER: up(1) IF_MIB::ifOperStatus.18 = INTEGER: up(1) IF_MIB::ifOperStatus.19 = INTEGER: up(1) IF_MIB::ifOperStatus.20 = INTEGER: up(1) IF_MIB::ifOperStatus.21 = INTEGER: up(1) IF_MIB::ifOperStatus.22 = INTEGER: up(1) IF_MIB::ifOperStatus.23 = INTEGER: up(1) IF_MIB::ifOperStatus.24 = INTEGER: up(1) IF_MIB::ifOperStatus.25 = INTEGER: up(1) IF_MIB::ifOperStatus.26 = INTEGER: up(1) IF_MIB::ifOperStatus.27 = INTEGER: up(1) IF_MIB::ifOperStatus.28 = INTEGER: up(1) IF_MIB::ifOperStatus.29 = INTEGER: up(1) IF_MIB::ifOperStatus.30 = INTEGER: up(1) IF_MIB::ifOperStatus.31 = INTEGER: up(1) IF_MIB::ifOperStatus.32 = INTEGER: up(1) IF_MIB::ifOperStatus.33 = INTEGER: up(1) IF_MIB::ifOperStatus.34 = INTEGER: up(1) IF_MIB::ifOperStatus.35 = INTEGER: up(1) IF_MIB::ifOperStatus.36 = INTEGER: up(1) IF_MIB::ifOperStatus.37 = INTEGER: 
up(1) IF_MIB::ifOperStatus.38 = INTEGER: up(1) IF_MIB::ifOperStatus.39 = INTEGER: up(1) IF_MIB::ifOperStatus.40 = INTEGER: up(1) I have tried something simple just to see the output, but it just gives me the names listed and underneath them the status: cat $1 | while read line; do IFSTATUS=$(snmpwalk -v 2c -c mycommunity $line 1.3.6.1.2.1.2.2.1.8 | grep -o '[^ ]*$') IFNAME=$(snmpwalk -v 2c -c mycommunity $line 1.3.6.1.2.1.31.1.1.1.18 | grep -o '[^ ]*$') echo " $IFNAME = is $IFSTATUS " done
Script to extract switchport status via snmp Good day, looking for some assistance, still quite new to scripting and don't have a clue where to start with this. Scenario: I have multiple Cisco switches on which I want to check the port status and alias and only output certain information from the output. So for example I would run snmpwalk -v 2c -c mycommunity myswitchname .1.3.6.1.2.1.2.2.1.8 and will get a list of all ports with their status, and then snmpwalk -v 2c -c mycommunity myswitchname .1.3.6.1.2.1.31.1.1.1.18 will give me a list of the interface aliases. These could be quite long lists if it's a stacked switch setup. Now I need to match the status with the alias and output something like "aliasname is up or aliasname is down" to a file, but I also want to exclude empty strings in the alias output to reduce file size. I would also be using a file containing all the switch names instead of having to run the snmp command manually on every switch, so I would probably need to reference the file to run the command on. Ideally the output would look something like: Switch1 interface "aliasname" is up Switch1 interface "aliasname" is down etc Sample output of SNMP commands snmpwalk with 1.3.6.1.2.1.31.1.1.1.18 will produce something like below IF_MIB::ifAlias.1 = STRING: IF_MIB::ifAlias.2 = STRING: IF_MIB::ifAlias.3 = STRING: IF_MIB::ifAlias.4 = STRING: IF_MIB::ifAlias.5 = STRING: IF_MIB::ifAlias.6 = STRING: IF_MIB::ifAlias.7 = STRING: IF_MIB::ifAlias.8 = STRING: IF_MIB::ifAlias.9 = STRING: IF_MIB::ifAlias.10 = STRING: IF_MIB::ifAlias.11 = STRING: IF_MIB::ifAlias.12 = STRING: IF_MIB::ifAlias.13 = STRING: IF_MIB::ifAlias.14 = STRING: IF_MIB::ifAlias.15 = STRING: IF_MIB::ifAlias.16 = STRING: IF_MIB::ifAlias.17 = STRING: IF_MIB::ifAlias.18 = STRING: IF_MIB::ifAlias.19 = STRING: IF_MIB::ifAlias.20 = STRING: AP034 IF_MIB::ifAlias.21 = STRING: IF_MIB::ifAlias.22 = STRING: IF_MIB::ifAlias.23 = STRING: IF_MIB::ifAlias.24 = STRING: AP031 IF_MIB::ifAlias.25 = STRING:
IF_MIB::ifAlias.26 = STRING: AP022 IF_MIB::ifAlias.27 = STRING: AP027 IF_MIB::ifAlias.28 = STRING: IF_MIB::ifAlias.29 = STRING: IF_MIB::ifAlias.30 = STRING: Time Clock IF_MIB::ifAlias.31 = STRING: IF_MIB::ifAlias.32 = STRING: IF_MIB::ifAlias.33 = STRING: IF_MIB::ifAlias.34 = STRING: Intercom Office IF_MIB::ifAlias.35 = STRING: IF_MIB::ifAlias.36 = STRING: AP018 IF_MIB::ifAlias.37 = STRING: IF_MIB::ifAlias.38 = STRING: SW002 IF_MIB::ifAlias.39 = STRING: IF_MIB::ifAlias.40 = STRING: snmpwalk with .1.3.6.1.2.1.2.2.1.8 will produce below output IF_MIB::ifOperStatus.1 = INTEGER: down(2) IF_MIB::ifOperStatus.2 = INTEGER: down(2) IF_MIB::ifOperStatus.3 = INTEGER: up(1) IF_MIB::ifOperStatus.4 = INTEGER: up(1) IF_MIB::ifOperStatus.5 = INTEGER: up(1) IF_MIB::ifOperStatus.6 = INTEGER: up(1) IF_MIB::ifOperStatus.7 = INTEGER: up(1) IF_MIB::ifOperStatus.8 = INTEGER: up(1) IF_MIB::ifOperStatus.9 = INTEGER: up(1) IF_MIB::ifOperStatus.10 = INTEGER: up(1) IF_MIB::ifOperStatus.11 = INTEGER: up(1) IF_MIB::ifOperStatus.12 = INTEGER: up(1) IF_MIB::ifOperStatus.13 = INTEGER: up(1) IF_MIB::ifOperStatus.14 = INTEGER: up(1) IF_MIB::ifOperStatus.15 = INTEGER: up(1) IF_MIB::ifOperStatus.16 = INTEGER: up(1) IF_MIB::ifOperStatus.17 = INTEGER: up(1) IF_MIB::ifOperStatus.18 = INTEGER: up(1) IF_MIB::ifOperStatus.19 = INTEGER: up(1) IF_MIB::ifOperStatus.20 = INTEGER: up(1) IF_MIB::ifOperStatus.21 = INTEGER: up(1) IF_MIB::ifOperStatus.22 = INTEGER: up(1) IF_MIB::ifOperStatus.23 = INTEGER: up(1) IF_MIB::ifOperStatus.24 = INTEGER: up(1) IF_MIB::ifOperStatus.25 = INTEGER: up(1) IF_MIB::ifOperStatus.26 = INTEGER: up(1) IF_MIB::ifOperStatus.27 = INTEGER: up(1) IF_MIB::ifOperStatus.28 = INTEGER: up(1) IF_MIB::ifOperStatus.29 = INTEGER: up(1) IF_MIB::ifOperStatus.30 = INTEGER: up(1) IF_MIB::ifOperStatus.31 = INTEGER: up(1) IF_MIB::ifOperStatus.32 = INTEGER: up(1) IF_MIB::ifOperStatus.33 = INTEGER: up(1) IF_MIB::ifOperStatus.34 = INTEGER: up(1) IF_MIB::ifOperStatus.35 = INTEGER: up(1) 
IF_MIB::ifOperStatus.36 = INTEGER: up(1) IF_MIB::ifOperStatus.37 = INTEGER: up(1) IF_MIB::ifOperStatus.38 = INTEGER: up(1) IF_MIB::ifOperStatus.39 = INTEGER: up(1) IF_MIB::ifOperStatus.40 = INTEGER: up(1) If have tried something simple like just to see the output but it just gives me the names listed and underneath the status cat $1 | while read line; do IFSTATUS=snmpwalk -v 2c -c mycomunity $line 1.3.6.1.2.1.2.2.1.8 | grep -o '[^ ]*$' IFNAME=snmpwalk -v 2c -c mycommunity $line 1.3.6.1.2.1.31.1.1.1.18 | grep -o '[^ ]*$' echo " $IFNAME = is $IFSTATUS " done
bash, snmp, rhel
1
1,770
1
https://stackoverflow.com/questions/72615853/script-to-extract-switchport-status-via-snmp
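For reference, the matching step could be sketched as follows: walk each OID once per switch into a file, then join the two listings on the interface index with awk, printing only interfaces with a non-empty alias. This is a sketch working on captured snmpwalk output in the IF_MIB:: format shown above; the community string, OIDs and the while-read loop over a switch-list file are taken from the question.

```shell
#!/bin/sh
# pair_ports ALIAS_FILE STATUS_FILE SWITCH
# ALIAS_FILE:  snmpwalk output for 1.3.6.1.2.1.31.1.1.1.18 (ifAlias)
# STATUS_FILE: snmpwalk output for 1.3.6.1.2.1.2.2.1.8     (ifOperStatus)
pair_ports() {
    awk -v sw="$3" '
        # First file: remember each alias, keyed by interface index.
        FNR == NR {
            if (match($1, /ifAlias\.[0-9]+$/)) {
                split($1, a, "."); idx = a[2]
                sub(/^[^=]*= STRING: ?/, "")   # keep only the alias text
                name[idx] = $0
            }
            next
        }
        # Second file: print status only for indexes that had an alias.
        match($1, /ifOperStatus\.[0-9]+$/) {
            split($1, a, "."); idx = a[2]
            if (name[idx] != "") {
                status = ($NF ~ /up/) ? "up" : "down"
                printf "%s interface \"%s\" is %s\n", sw, name[idx], status
            }
        }
    ' "$1" "$2"
}

# Hypothetical driver, one switch name per line in $1:
# while read -r sw; do
#     snmpwalk -v 2c -c mycommunity "$sw" 1.3.6.1.2.1.31.1.1.1.18 > /tmp/alias.$$
#     snmpwalk -v 2c -c mycommunity "$sw" 1.3.6.1.2.1.2.2.1.8     > /tmp/oper.$$
#     pair_ports /tmp/alias.$$ /tmp/oper.$$ "$sw"
# done < "$1"
```

Joining on the index rather than running the walks line-by-line also means only two SNMP queries per switch instead of two per interface.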
72,435,462
Multiple &amp; different Variables for loops BASH
I would like to run a for loop on a list of two columns: the first column holds the name of a server, and the second column holds the UUID of the same server. I would like to run another loop on the same server name and UUID, and only after this second loop completes return to the first loop, this time for the next server name & UUID. All of these loops should run only while a specific condition holds. Any suggestions on how I can execute the following? I was thinking of doing it this way: sudo ceph -s | grep 'osds: XX up' | wc -l > ceph_s CEPH=$(cat ceph_s) SERVER=$(cat info/servernames) UUID=$(cat info/uuids) AUTH=$(cat info/token) SPT=$(cat info/SPT) while [ $CEPH -eq 1 ] do set -- $SERVER for i in $UUID do ## Poweroff # echo "UUID: ${i}/powerState" echo "TOKEN: ${AUTH}" echo "NAME: $1" echo "UUID: $i" echo "SPT: $SPT" sleep 2s echo applying SPT echo "TOKEN: $AUTH" echo "NAME: $1" echo "UUID: $i" # shift ping -c 1 "$1" > /dev/null 2>&1 if [ $? -eq 0 ]; then shift else echo "$1 is down" ### should be the same servername echoed in the first loop ### fi sleep 2m done done echo # apply SPT ## again supposed to run for the same servername and UUID ## echo "APPLY SPT" while [ $CEPH -eq 0 ] do set -- $SERVER for i in $UUID do echo "TOKEN: $AUTH" echo "NAME: $1" echo "UUID: $i" shift sleep 5s done done Example of the UUID and server files: cat uuid VDIVUIFDVNFDHYJ VDIVUIFDVNFDTRE VDIVUIFDVNFBVCN VDIVUIFDVNFDNAQ cat serversnames SERVER1 SERVER2 SERVER3 SERVER4 That means SERVER1 should get the UUID of the first line, and so on. Once all loops complete, I want the next server name and UUID to run. Any thoughts?
linux, bash, loops, rhel, shift
1
159
1
https://stackoverflow.com/questions/72435462/multiple-different-variables-for-loops-bash
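For what it's worth, the pairing itself (line N of serversnames with line N of uuids) can be done without set --/shift at all, by reading both files in lockstep with paste; the question's inner per-server work would then go inside this single loop. File names follow the question's info/ layout; this is a sketch of the pairing only, not the poweroff/SPT logic.

```shell
#!/bin/sh
# pair_servers NAMES_FILE UUIDS_FILE
# Emits one line per server with its matching UUID (the same line number
# in both files belongs to the same machine).
pair_servers() {
    paste "$1" "$2" | while read -r server uuid; do
        printf 'NAME: %s UUID: %s\n' "$server" "$uuid"
        # inner per-server work (poweroff, apply SPT, ping check) goes here
    done
}

# real use: pair_servers info/servernames info/uuids
```

Because the loop body sees exactly one (server, uuid) pair per iteration, the next pair is only reached after the body finishes, which is the sequencing the question asks for.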
72,264,662
How to wait for full cloud-initialization before VM is marked as running
I am currently configuring a virtual machine to work as an agent within Azure (with Ubuntu as the image), where the additional configuration runs through a cloud-init file. Among other things, I have the below 'fix' within bootcmd and multiple steps within runcmd. However, the machine already reports the state "running" within the Azure portal while still in the cloud configuration phase (cloud_config_modules). As a result, pipelines see the machine as ready for use while not everything is installed/configured yet, and break. I tried a couple of things which did not have the desired effect, after which I stumbled on the following article/bug. The proposed solution worked; however, I switched to a RHEL image and it stopped working. I noticed this image is not using walinuxagent as the solution states but waagent, so I tried replacing that, as in the example below, without any success. bootcmd: - mkdir -p /etc/systemd/system/waagent.service.d - echo "[Unit]\nAfter=cloud-final.service" > /etc/systemd/system/waagent.service.d/override.conf - sed "s/After=multi-user.target//g" /lib/systemd/system/cloud-final.service > /etc/systemd/system/cloud-final.service - systemctl daemon-reload After this, I also tried moving the runcmd steps to bootcmd. This resulted in a boot which took ages and eventually froze. Since I am not that familiar with RHEL and Linux overall, I wanted to ask if anyone might have suggestions I can additionally try. (Apply some other configuration to ensure a wait on cloud-final.service within waagent?)
azure-devops, rhel, cloud-init
1
3,250
3
https://stackoverflow.com/questions/72264662/how-to-wait-for-full-cloud-initialization-before-vm-is-marked-as-running
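One thing worth double-checking here (an observation, not a confirmed fix): in bootcmd, echo "[Unit]\nAfter=..." does not expand \n under bash/sh, so the override file ends up containing a literal backslash-n and systemd ignores the drop-in. Writing the file with printf avoids that. A sketch, with the target directory parameterized only so the logic can be exercised outside /etc:

```shell
#!/bin/sh
# Write a systemd drop-in ordering waagent after cloud-final.service.
# In real use pass /etc/systemd/system/waagent.service.d as DIR and then
# run `systemctl daemon-reload`.
write_waagent_override() {
    dir=$1
    mkdir -p "$dir"
    # printf expands \n to real newlines; plain echo here would not.
    printf '[Unit]\nAfter=cloud-final.service\n' > "$dir/override.conf"
}
```

After applying such an override it is worth confirming the result with `systemd-delta` or `systemctl cat waagent` to see that the drop-in is actually parsed.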
71,114,860
What is the use of systemd user instance for ssh logins
When you log in using SSH, the pam_systemd module automatically launches a systemd --user instance when the user logs in for the first time. We can mask user@.service to deactivate this. Even when we deactivate user@.service, there is no noticeable impact. Is there an impact when we mask the service?
ssh, systemd, rhel
1
1,889
1
https://stackoverflow.com/questions/71114860/what-is-the-use-of-systemd-user-instance-for-ssh-logins
70,984,108
Which is the right package in the yum repo for Ops Agent on RHEL7?
I am trying to install the Ops Agent from the GCP yum repo: [URL] I see there are 4 different packages for this agent: google-cloud-ops-agent-el7-x86_64-0 google-cloud-ops-agent-el7-x86_64-1 google-cloud-ops-agent-el7-x86_64-2 google-cloud-ops-agent-el7-x86_64-all What is the difference between the suffixes, -0, -1, -2, and -all? I've tried looking through the docs but couldn't find anything, and I am confused over which one I should get.
google-cloud-platform, yum, rhel, repo
1
478
1
https://stackoverflow.com/questions/70984108/which-is-the-right-package-in-the-yum-repo-for-ops-agent-on-rhel7
70,709,580
RHEL 7: missing cgroup after reboot instances
I'm trying to limit resources by using cgroup. It's working fine until I reboot the instance. I had checked and found that the cgroup was removed for some reason. This is my step to creating the cgroup: # Create a cgroup mkdir /sys/fs/cgroup/memory/my_cgroup # Add the process to it echo $PID > /sys/fs/cgroup/memory/my_cgroup/cgroup.procs # Set the limit to 40MB echo $((40 * 1024 * 1024)) > /sys/fs/cgroup/memory/my_cgroup/memory.limit_in_bytes I'm using AMI RHEL-7.5_HVM-20180813-x86_64, kernel version 3.10.0-862.11.6.el7.x86_64. Could you guys help me out with this problem? Thanks in advance.
amazon-ec2, rhel, cgroups
1
921
1
https://stackoverflow.com/questions/70709580/rhel-7-missing-cgroup-after-reboot-instances
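As background: /sys/fs/cgroup is a RAM-backed virtual filesystem, so any cgroup created by hand disappears on reboot and has to be recreated at boot (a systemd oneshot unit, rc.local, or libcgroup's cgconfig service are the usual vehicles). A minimal sketch of the recreation step, with the cgroup root parameterized purely so it can be exercised against a scratch directory; in the real /sys/fs/cgroup the memory.limit_in_bytes file is created by the kernel when the directory is made.

```shell
#!/bin/sh
# setup_memcg ROOT NAME LIMIT_MB
# Recreate a memory cgroup and set its limit. Run as root at boot with
# ROOT=/sys/fs/cgroup, e.g. from a systemd oneshot unit's ExecStart.
setup_memcg() {
    root=$1 name=$2 limit_mb=$3
    mkdir -p "$root/memory/$name"
    echo $((limit_mb * 1024 * 1024)) > "$root/memory/$name/memory.limit_in_bytes"
}

# real use (as root, every boot):  setup_memcg /sys/fs/cgroup my_cgroup 40
```

Note that the process PIDs written to cgroup.procs also do not survive a reboot, so whatever starts the workload needs to re-add it after the cgroup is recreated.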
69,531,487
bash/shell - How do I get the group name of a directory on RHEL and save it to a variable?
I'm working on a script to create users and grant them access to a directory, but they have to be added to the group that has permissions on the directory. I noticed that if I do a stat on the directory I can list out the group name: [root@pizzaServer myuser]# stat /ftp/PizzaFolder File: ‘/ftp/PizzaFolder’ Size: 22 Blocks: 0 IO Block: 4096 directory Device: fd03h/64771d Inode: 2108605 Links: 3 Access: (0750/drwxr-x---) Uid: ( 0/ root) Gid: (1934800276/PizzaGroup) And then I can grep it: [root@pizzaServer myuser]# stat /ftp/PizzaFolder | grep Gid Access: (0750/drwxr-x---) Uid: ( 0/ root) Gid: (1934800276/PizzaGroup) And then possibly use sed to crop out the group name? Is that the best method or is there another way? I only want the group name "PizzaGroup" as an output.
bash, shell, variables, permissions, rhel
1
950
1
https://stackoverflow.com/questions/69531487/bash-shell-how-do-i-get-the-group-name-of-a-directory-on-rhel-and-save-it-to-a
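A note for readers: GNU stat (the one shipped on RHEL) can print just the group name directly via a format string, so no grep/sed surgery is needed:

```shell
#!/bin/sh
# get_group PATH -> prints the owning group of PATH.
# GNU stat: %G is the group name, %g is the numeric GID.
get_group() {
    stat -c %G "$1"
}

# usage in the question's scenario (path is the asker's example):
#   GROUPNAME=$(get_group /ftp/PizzaFolder)
#   usermod -aG "$GROUPNAME" someuser
```

`stat --printf` with other sequences (%U for owner, %a for octal mode) covers most of the fields the question was grepping for.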
69,386,394
Systemd Script for creating Apache Tomcat Service in RHEL 8
Starting with version 8.0, Red Hat Enterprise Linux (RHEL) no longer provides any version of the Apache Tomcat Java webserver/servlet container as part of the RHEL distribution.[1] Therefore, we have to install Tomcat manually on RHEL systems. The problem which arises is that it becomes difficult to start, stop or restart Tomcat, as there is no service file installed through which we could simply use a command like service tomcat start. But there is a way we can create this service manually by writing a systemd unit file. By placing this file in the /etc/systemd/system/ directory, we can use the service commands for managing the Tomcat service. Please share the Tomcat service creation script.
java, tomcat, systemd, rhel, rhel8
1
6,975
1
https://stackoverflow.com/questions/69386394/systemd-script-for-creating-apache-tomcat-service-in-rhel-8
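Since the question asks for the unit itself, here is a commonly used shape for a Tomcat unit file, wrapped in a small generator so the output can be inspected. The paths, the dedicated tomcat user, and JAVA_HOME are assumptions for a /opt/tomcat tarball install and must be adjusted; the real target is /etc/systemd/system/tomcat.service, followed by systemctl daemon-reload && systemctl enable --now tomcat.

```shell
#!/bin/sh
# write_tomcat_unit OUTFILE -- write a tomcat.service unit file.
write_tomcat_unit() {
    cat > "$1" <<'EOF'
[Unit]
Description=Apache Tomcat Web Application Container
After=network.target

[Service]
Type=forking
User=tomcat
Group=tomcat
Environment=JAVA_HOME=/usr/lib/jvm/jre
Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}
```

Type=forking with CATALINA_PID matches startup.sh's behavior of backgrounding the JVM; if you launch via bin/catalina.sh run instead, Type=simple is the fit.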
69,095,911
Getting SOURCES from dynamic subdirectories
I need to build multiple RPM packages, but the build fails with a missing-file error because rpmbuild searches in the /SOURCES/ directory instead of in the respective subdirectory. Is there any way to solve this without touching the spec files, e.g. with rpm macros? After executing rpmbuild -bb ~/rpmbuild/SPECS/*.spec I get the following error: error: File /home/centos/rpmbuild/SOURCES/autoconf-2.69.tar.gz: No such file or directory Here is my ~/rpmbuild/SOURCES/ tree after symlinking the development code: ~/rpmbuild/SOURCES/ ├── autoconf -> /home/centos/Project/autoconf/ │ ├── autoconf-2.69.tar.gz │ ├── autoconf.spec │ └── config.site ├── autorespond-toaster -> /home/centos/Project/autorespond-toaster/ │ ├── autorespond-2.0.5.tar.bz2 │ ├── autorespond-toaster.spec │ └── autorespond_utf-8.patch ├── bind -> /home/centos/Project/bind/ │ ├── bind-9.3.1rc1-sdb_tools-Makefile.in │ ├── bind-9.9.9-P6.tar.gz │ ├── bind.spec │ ├── config-8.tar.bz2 │ ├── Copyright.caching-nameserver │ ├── dnszone.schema │ ├── flexible.m4 │ ├── ldap2zone.c │ ├── named.conf.sample │ ├── named.init │ ├── named.init.el4 │ ├── named.logrotate │ ├── named.NetworkManager │ ├── named.portreserve │ ├── named.sysconfig │ ├── README.sdb_pgsql │ └── rfc1912.txt And the ~/rpmbuild/SPECS/ tree: ~/rpmbuild/SPECS/ ├── autoconf.spec -> /home/centos/Project/autoconf/autoconf.spec ├── autorespond-toaster.spec -> /home/centos/Project/autorespond-toaster/autorespond-toaster.spec ├── bind.spec -> /home/centos/Project/bind/bind.spec REPOs [URL] [URL] [URL]
linux, rpm, rhel, rpmbuild, rpm-spec
1
457
1
https://stackoverflow.com/questions/69095911/getting-sources-from-dynamic-subdirectories
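One approach that leaves the spec files untouched: since every spec in SPECS/ is a symlink into its project directory, resolve that directory per spec and override the _sourcedir macro on the rpmbuild command line. A sketch; the rpmbuild invocation is shown but the directory-resolution helper is the testable part:

```shell
#!/bin/sh
# resolve_srcdir SPECLINK -> the directory the spec symlink really points into.
resolve_srcdir() {
    dirname "$(readlink -f "$1")"
}

# Build every spec with its sources taken from its own project directory,
# overriding _sourcedir per invocation instead of editing any spec.
build_all() {
    for spec in "$HOME/rpmbuild/SPECS"/*.spec; do
        rpmbuild --define "_sourcedir $(resolve_srcdir "$spec")" -bb "$spec"
    done
}
```

The same trick works for a single package: rpmbuild --define "_sourcedir /home/centos/Project/autoconf" -bb autoconf.spec.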
68,747,437
Can I identify and set network device names in an Anaconda kickstart file?
I'm creating a kickstart file for RHEL 8 and am trying to simplify network connectivity among machines. All of the machines will have onboard ethernet ports that will receive the eno1 and eno2 names, but also a separate card whose ports will get unique names when RHEL is installed. My question is: can I use the kickstart to identify copper or fiber connections and give them names there, or will I need to do this in the post-install (%post) section?
anaconda, rhel
1
2,229
1
https://stackoverflow.com/questions/68747437/can-i-identify-and-set-network-device-names-in-an-anaconda-kickstart-file
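Classifying copper vs. fiber is usually a %post job (for example by inspecting ethtool output for the device's Port: line), after which the chosen name can be pinned to the NIC's MAC with a systemd .link file. A %post-style sketch of the pinning step; the MAC address and interface name are placeholders, and the output path is parameterized only so the file generation can be tried outside /etc:

```shell
#!/bin/sh
# write_link_file OUTFILE MAC NAME
# Real target: /etc/systemd/network/10-NAME.link (applied at next boot).
# In %post you might find the MAC with `cat /sys/class/net/<dev>/address`
# and the media type with `ethtool <dev>`.
write_link_file() {
    printf '[Match]\nMACAddress=%s\n\n[Link]\nName=%s\n' "$2" "$3" > "$1"
}

# hypothetical: write_link_file /etc/systemd/network/10-fiber0.link \
#                               00:11:22:33:44:55 fiber0
```

Writing the .link files in %post keeps the kickstart's own network lines simple, since those only need to reference devices by the names Anaconda already sees.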
67,860,194
getting "grep: write error: pipe broken" when command executed through application installation
Below is the command which is executed when I install the application (this line is written in one of our application's scripts): PASS=$(strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 10 | tr -d '\n') Every time, I get the error "grep: write error: pipe broken". Here are a few points to note: When I install the application on RHEL 7.x, it runs without an issue. When I run the command directly on RHEL 8.x, it doesn't give an error. It throws an error only when installing the application on RHEL 8.x. Also, I have tried a few other ways to generate alphanumeric characters, like: X=$(strings /dev/urandom | grep -o -m15 '[[:alnum:]]') PASS=$(echo "$X" | head -n 10 | tr -d '\n') PASS=$(strings /dev/urandom | tr -dc A-Za-z0-9 | head -c10) PASS=$(cat /dev/urandom | tr -dc A-Za-z0-9 | head -c10) X=$(strings /dev/urandom | head -n 100) PASS=$(echo "$X" | grep -o '[[:alnum:]]' | head -n 10 | tr -d '\n') PASS=$(< /dev/urandom tr -dc '[[:alnum:]]' | head -c10) None of these worked on RHEL 8.x while installing the application. However, all these commands work fine when executed directly in a terminal.
shell, rhel, rhel7, rhel8
1
2,387
1
https://stackoverflow.com/questions/67860194/getting-grep-write-error-pipe-broken-when-command-executed-through-applicati
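For context on the symptom: head closes the pipe as soon as it has its bytes, so the upstream command always takes an EPIPE. Interactively the shell lets SIGPIPE kill it silently; my reading (not verified here) is that RPM scriptlets on RHEL 8 run with SIGPIPE ignored, which turns the silent kill into a visible write error. Either way the message is cosmetic and can be silenced; a sketch:

```shell
#!/bin/sh
# gen_pass [LEN] -> LEN random alphanumeric characters (default 10).
# 2>/dev/null hides tr's expected "write error" when head closes the pipe;
# the pipeline's exit status is head's, so the error does not propagate.
gen_pass() {
    LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom 2>/dev/null | head -c "${1:-10}"
}

PASS=$(gen_pass 10)
```

Reading /dev/urandom directly with tr also avoids the strings/grep stages, leaving only one writer in the pipeline to silence.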
67,809,316
OperatingSystemMXBean is not a valid MXBean interface error when deploying on WebSphere Application Server
When deploying and executing my aplication in WebSphere Application server the following error message is presented. [6/1/21 17:44:24:958 CDT] 0000004e webapp E com.ibm.ws.webcontainer.webapp.WebApp notifyServletContextCreated SRVE0283E: Exception caught while initializing context: {0} java.lang.ExceptionInInitializerError at java.lang.J9VMInternals.ensureError(J9VMInternals.java:141) at java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:130) at java.lang.Class.forNameImpl(Native Method) at java.lang.Class.forName(Class.java:278) at com.transform.cartridge.volpayreferencedatautil.VolPayReferenceDataUtilFunctions.addTimeLog(VolPayReferenceDataUtilFunctions.java:204) at com.transform.flow.logaudit.LogAudit.Custom3Activity(LogAudit.java:100) at com.transform.flow.logaudit.LogAudit.executeInternal(LogAudit.java:152) at com.transform.flow.logaudit.LogAudit.run0(LogAudit.java:115) at com.tplus.transform.runtime.AbstractMessageFlow.run(AbstractMessageFlow.java:157) at com.tplus.transform.runtime.volante.MessageFlowVolante$1.run(MessageFlowVolante.java:75) at com.tplus.transform.runtime.volante.DeclarativeTransactionalExecutor.start0(DeclarativeTransactionalExecutor.java:137) at com.tplus.transform.runtime.volante.DeclarativeTransactionalExecutor.start(DeclarativeTransactionalExecutor.java:113) at com.tplus.transform.runtime.volante.MessageFlowVolante.run(MessageFlowVolante.java:73) at com.tplus.transform.runtime.proxy.MessageFlowProxy.run(MessageFlowProxy.java:32) at com.transform.flow.logaudit_sys_debug_instr.LogAudit_Sys_Debug_Instr.Invoke1Activity(LogAudit_Sys_Debug_Instr.java:41) at com.transform.flow.logaudit_sys_debug_instr.LogAudit_Sys_Debug_Instr.executeInternal(LogAudit_Sys_Debug_Instr.java:82) at com.transform.flow.logaudit_sys_debug_instr.LogAudit_Sys_Debug_Instr.run0(LogAudit_Sys_Debug_Instr.java:53) at com.tplus.transform.runtime.AbstractMessageFlow.run(AbstractMessageFlow.java:157) at 
com.tplus.transform.runtime.volante.MessageFlowVolante$1.run(MessageFlowVolante.java:75) at com.tplus.transform.runtime.volante.DeclarativeTransactionalExecutor.start0(DeclarativeTransactionalExecutor.java:137) at com.tplus.transform.runtime.volante.DeclarativeTransactionalExecutor.start(DeclarativeTransactionalExecutor.java:113) at com.tplus.transform.runtime.volante.MessageFlowVolante.run(MessageFlowVolante.java:73) at com.tplus.transform.runtime.proxy.MessageFlowProxy.run(MessageFlowProxy.java:32) at com.transform.flow.volantepaymentenginestartofprocess.VolantePaymentEngineStartOfProcess.Invoke4Activity(VolantePaymentEngineStartOfProcess.java:77) at com.transform.flow.volantepaymentenginestartofprocess.VolantePaymentEngineStartOfProcess.executeInternal(VolantePaymentEngineStartOfProcess.java:325) at com.transform.flow.volantepaymentenginestartofprocess.VolantePaymentEngineStartOfProcess.run0(VolantePaymentEngineStartOfProcess.java:300) at com.tplus.transform.runtime.AbstractMessageFlow.run(AbstractMessageFlow.java:157) at com.tplus.transform.runtime.volante.MessageFlowVolante$1.run(MessageFlowVolante.java:75) at com.tplus.transform.runtime.volante.DeclarativeTransactionalExecutor.start0(DeclarativeTransactionalExecutor.java:137) at com.tplus.transform.runtime.volante.DeclarativeTransactionalExecutor.start(DeclarativeTransactionalExecutor.java:113) at com.tplus.transform.runtime.volante.MessageFlowVolante.run(MessageFlowVolante.java:73) at com.tplus.transform.runtime.proxy.MessageFlowProxy.run(MessageFlowProxy.java:32) at com.volantetech.volante.services.StartOfProcess.process(StartOfProcess.java:45) at com.volantetech.volante.services.ApplicationStartupListener.contextInitialized(ApplicationStartupListener.java:218) at com.ibm.ws.webcontainer.webapp.WebApp.notifyServletContextCreated(WebApp.java:1837) at com.ibm.ws.webcontainer.webapp.WebAppImpl.initialize(WebAppImpl.java:443) at com.ibm.ws.webcontainer.webapp.WebGroupImpl.addWebApplication(WebGroupImpl.java:88) 
at com.ibm.ws.webcontainer.VirtualHostImpl.addWebApplication(VirtualHostImpl.java:171) at com.ibm.ws.webcontainer.WSWebContainer.addWebApp(WSWebContainer.java:904) at com.ibm.ws.webcontainer.WSWebContainer.addWebApplication(WSWebContainer.java:789) at com.ibm.ws.webcontainer.component.WebContainerImpl.install(WebContainerImpl.java:427) at com.ibm.ws.webcontainer.component.WebContainerImpl.start(WebContainerImpl.java:719) at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:1249) at com.ibm.ws.runtime.component.DeployedApplicationImpl.fireDeployedObjectStart(DeployedApplicationImpl.java:1591) at com.ibm.ws.runtime.component.DeployedModuleImpl.start(DeployedModuleImpl.java:708) at com.ibm.ws.runtime.component.DeployedApplicationImpl.start(DeployedApplicationImpl.java:1162) at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:801) at com.ibm.ws.runtime.component.ApplicationMgrImpl$5.run(ApplicationMgrImpl.java:2325) at com.ibm.ws.security.auth.ContextManagerImpl.runAs(ContextManagerImpl.java:5536) at com.ibm.ws.security.auth.ContextManagerImpl.runAsSystem(ContextManagerImpl.java:5662) at com.ibm.ws.security.core.SecurityContext.runAsSystem(SecurityContext.java:255) at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:2330) at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.start(CompositionUnitMgrImpl.java:436) at com.ibm.ws.runtime.component.CompositionUnitImpl.start(CompositionUnitImpl.java:123) at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.start(CompositionUnitMgrImpl.java:379) at com.ibm.ws.runtime.component.CompositionUnitMgrImpl.access$500(CompositionUnitMgrImpl.java:127) at com.ibm.ws.runtime.component.CompositionUnitMgrImpl$CUInitializer.run(CompositionUnitMgrImpl.java:985) at com.ibm.wsspi.runtime.component.WsComponentImpl$_AsynchInitializer.run(WsComponentImpl.java:524) at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1909) Caused by: 
java.lang.IllegalArgumentException: com.sun.management.OperatingSystemMXBean is not a valid MXBean interface. at java.lang.management.ManagementFactory.getPlatformMXBean(ManagementFactory.java:469) at com.volantetech.volante.services.performance.log.LogTime.<init>(LogTime.java:23) at com.volantetech.volante.services.performance.log.LogTime.<clinit>(LogTime.java:20) ... 57 more The server is RHEL with WebSphere 9.0.5.5 and java openjdk version "1.8.0_292". The application is built to take messages from a message queue and process them; the message queue being used is IBM MQ 9.1.1.0. The application is taking the messages but not processing them correctly. The error appears on startup and also during execution. Any guidance would be much valued.
java, websphere, ibm-mq, rhel
1
251
2
https://stackoverflow.com/questions/67809316/operatingsystemmxbean-is-not-a-valid-mxbean-interface-error-when-deploying-on-we
62,896,138
Not able to install python3 on Redhat 8 offline mode
I created an offline repo for RHEL 8: I downloaded all the needed packages with dnf download and created the repodata with the createrepo command. I'm able to install most of the packages in offline mode, but python3 can't be installed. The error I'm receiving is: No available modular metadata for modular package 'python36-3.6.8-2.module+el8.1.0+3334+5cb623d7.x86_64', it cannot be installed on the system Error: No available modular metadata for modular package What can I do to fix this? Regards,
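The rpm named in the error belongs to a DNF module stream, and plain createrepo does not generate module metadata, so dnf refuses to install it. A common fix (a sketch — paths are placeholders) is to take the modules.yaml from the upstream repo's repodata and inject it after rebuilding the metadata:

```shell
# download the packages plus dependencies into the local repo directory
dnf download --resolve --destdir /srv/localrepo python36

# rebuild the plain rpm metadata
createrepo_c /srv/localrepo

# modules.yaml.gz must come from the source repo's repodata/
# (e.g. fetched with `dnf reposync --download-metadata`)
modifyrepo_c --mdtype=modules /path/to/modules.yaml.gz /srv/localrepo/repodata/
```

After that, dnf on the offline host should see the python36 module stream and allow the install; exact metadata filenames vary between repos.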
redhat, rhel
1
9,109
2
https://stackoverflow.com/questions/62896138/not-able-to-install-python3-on-redhat-8-offline-mode
62,777,295
how to copy file from remote server to HDFS
I have a remote server and an authenticated Hadoop environment on separate servers. I want to copy a file from the remote server to HDFS on the Hadoop machine. Please advise an efficient approach/HDFS command to copy files from the remote server to HDFS. Any example will be helpful. The ordinary way to copy a file from a remote server to the server itself is scp -rp file remote_server:/tmp but this approach does not support copying directly to hdfs
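One common pattern (a sketch — host names and paths are placeholders) avoids the intermediate local copy by streaming the file over ssh straight into hdfs dfs -put, which reads from stdin when the source argument is -:

```shell
# run from a Hadoop node: pull the remote file and stream it into HDFS
ssh user@remote_server "cat /tmp/data.csv" | hdfs dfs -put - /user/me/data.csv

# or run from the remote server, pushing through a Hadoop edge node
cat /tmp/data.csv | ssh user@hadoop_edge "hdfs dfs -put - /user/me/data.csv"
```

In a secured (e.g. Kerberized) cluster the hdfs command must run where valid credentials exist (kinit first); for many files, scp -rp to the edge node followed by hdfs dfs -put dir/ /dest is usually simpler.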
hadoop, hdfs, scp, rhel
1
1,445
2
https://stackoverflow.com/questions/62777295/how-to-copy-file-from-remote-server-to-hdfs
62,059,443
python + script failed regarding to cryptography under /usr/lib64/python2.7
I created the following python script. It will read the file /lpp/airflow/.sec/rmq_pass into the var pass_hash and will decrypt it to decrypted_pass more security_test.py import sys import os import base64 from cryptography.fernet import Fernet key_file = "/lpp/airflow/.sec/key" rmq_pass_file = "/lpp/airflow/.sec/rmq_pass" key = open(key_file, 'r') f = Fernet(key.read()) pass_hash = open(rmq_pass_file, 'r') #decrypting the password from "pass_file" file using the key from the "key_file". decrypted_pass = f.decrypt(pass_hash.read()) ConnStr = "amqp://airflow:" + decrypted_pass + "@localhost:5672//" When I run the script it fails inside /usr/lib64/python2.7/site-packages/cryptography/fernet.py , or elsewhere under /usr/lib64/python2.7/site-packages/cryptography . We tried to re-install the package cryptography , but this didn't help. Any idea what it could be? python security_test.py Traceback (most recent call last): File "security_test.py", line 14, in <module> decrypted_pass = f.decrypt(pass_hash.read()) File "/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 75, in decrypt return self._decrypt_data(data, timestamp, ttl) File "/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 117, in _decrypt_data self._verify_signature(data) File "/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 101, in _verify_signature h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend) File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/primitives/hmac.py", line 31, in __init__ self._ctx = self._backend.create_hmac_ctx(key, self.algorithm) File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 207, in create_hmac_ctx return _HMACContext(self, key, algorithm) File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/backends/openssl/hmac.py", line 34, in __init__ key_ptr = self._backend._ffi.from_buffer(key) TypeError: from_buffer() cannot return the address of the raw string within a str or unicode or bytearray object IMPORTANT NOTE - on the other machine this script works fine. What is the best way to resolve it? By removing all modules and installing them again, or by re-installing python?
python, python-2.7, cryptography, airflow, rhel
1
341
1
https://stackoverflow.com/questions/62059443/python-script-failed-regarding-to-cryptography-under-usr-lib64-python2-7
62,017,643
pip + how to download the latest version of the .whl files
Is it possible to download the latest version of .whl files? for example, first, we try with pip download some specific version of enum34 pip download enum34-1.1.10-py2-none-any.whl DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at [URL] WARNING: Requirement 'enum34-1.1.10-py2-none-any.whl' looks like a filename, but the file does not exist Processing ./enum34-1.1.10-py2-none-any.whl ERROR: Exception: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 188, in _main status = self.run(options, args) File "/usr/lib/python2.7/site-packages/pip/_internal/cli/req_command.py", line 185, in wrapper return func(self, options, args) File "/usr/lib/python2.7/site-packages/pip/_internal/commands/download.py", line 132, in run reqs, check_supported_wheels=True File "/usr/lib/python2.7/site-packages/pip/_internal/resolution/legacy/resolver.py", line 179, in resolve discovered_reqs.extend(self._resolve_one(requirement_set, req)) File "/usr/lib/python2.7/site-packages/pip/_internal/resolution/legacy/resolver.py", line 362, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "/usr/lib/python2.7/site-packages/pip/_internal/resolution/legacy/resolver.py", line 314, in _get_abstract_dist_for abstract_dist = self.preparer.prepare_linked_requirement(req) File "/usr/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 469, in prepare_linked_requirement hashes=hashes, File "/usr/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 264, in unpack_url unpack_file(file.path, location, file.content_type) File "/usr/lib/python2.7/site-packages/pip/_internal/utils/unpacking.py", line 252, in unpack_file flatten=not filename.endswith('.whl') File 
"/usr/lib/python2.7/site-packages/pip/_internal/utils/unpacking.py", line 112, in unzip_file zipfp = open(filename, 'rb') IOError: [Errno 2] No such file or directory: '/var/tmp/enum34-1.1.10-py2-none-any.whl' then we try to download the latest version of enum34* pip download enum34* DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at [URL] ERROR: Invalid requirement: 'enum34*'
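Both failures are expected: pip download takes a requirement specifier, not a wheel filename (which it treats as a local file to process) and not a shell glob like enum34*. To get the newest version, the bare project name is enough (a sketch; the package name is from the question):

```shell
# latest version compatible with the running interpreter
pip download enum34

# a specific version uses a requirement specifier, not the wheel filename
pip download 'enum34==1.1.10'

# collect wheels (with dependencies) into a directory for offline use
pip download --dest ./wheels enum34
```

Note that pip resolves "latest" relative to the interpreter running it, so a Python 2.7 pip will only fetch py2-compatible wheels.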
python, python-2.7, pip, ansible, rhel
1
1,674
1
https://stackoverflow.com/questions/62017643/pip-how-to-download-the-latest-version-of-the-whl-files
61,317,416
Podman: Changes made to podman config.json do not persist on container start
I am trying to add a network to a podman container after it has already been created. These are the steps I took: Create and start a container: podman run -it --name "container" --network=mgmtnet img_v1 /bin/bash The container starts. I now stop the container podman stop container I edit the podman config.json file at: /var/lib/containers/storage/overlay-containers/60dfc044f28b0b60f0490f351f44b3647531c245d1348084944feaea783a6ad5/userdata/config.json I add an extra netns path in the namespaces section. "namespaces": [ { "type": "pid" }, { "type": "network", >> "path": "/var/run/netns/cni-8231c733-6932-ed54-4dee-92477014da6e", >>[+] "path": "/var/run/netns/test_net" }, { "type": "ipc" }, { "type": "uts" }, { "type": "mount" } ], I start the container podman start container I expected the changes (an extra interface) in the container. But that doesn't happen. Also, checking the config.json, I find that my changes are gone. So starting the container removes the changes in config. How to overcome this? 
Extra info: [root@bng-ix-svr1 ~]# podman info host: BuildahVersion: 1.9.0 Conmon: package: podman-1.4.2-5.module+el8.1.0+4240+893c1ab8.x86_64 path: /usr/libexec/podman/conmon version: 'conmon version 2.0.1-dev, commit: unknown' Distribution: distribution: '"rhel"' version: "8.1" MemFree: 253316108288 MemTotal: 270097387520 OCIRuntime: package: runc-1.0.0-60.rc8.module+el8.1.0+4081+b29780af.x86_64 path: /usr/bin/runc version: 'runc version spec: 1.0.1-dev' SwapFree: 5368705024 SwapTotal: 5368705024 arch: amd64 cpus: 16 hostname: bng-ix-svr1.englab.juniper.net kernel: 4.18.0-147.el8.x86_64 os: linux rootless: false uptime: 408h 2m 41.08s (Approximately 17.00 days) registries: blocked: null insecure: null search: - registry.redhat.io - registry.access.redhat.com - quay.io - docker.io store: ConfigFile: /etc/containers/storage.conf ContainerStore: number: 4 GraphDriverName: overlay GraphOptions: null GraphRoot: /var/lib/containers/storage GraphStatus: Backing Filesystem: xfs Native Overlay Diff: "true" Supports d_type: "true" Using metacopy: "false" ImageStore: number: 2 RunRoot: /var/run/containers/storage VolumePath: /var/lib/containers/storage/volumes
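This behaviour is by design: config.json is an OCI runtime bundle that podman regenerates from its own database every time the container starts, so hand edits are discarded. A sketch of the supported alternatives (network and container names are from the question; note that podman network connect appeared in releases newer than the 1.4.2 shown in podman info above):

```shell
# newer podman: attach an additional network to an existing container
podman network connect test_net container

# on older podman: recreate the container with both networks attached
podman stop container && podman rm container
podman run -it --name container --network mgmtnet,test_net img_v1 /bin/bash
```

The multi-network syntax varies by version (comma-separated list vs. repeated --network flags), so check podman-run(1) for the installed release.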
network-programming, containers, rhel, podman, config.json
1
4,324
1
https://stackoverflow.com/questions/61317416/podman-changes-made-to-podman-config-json-do-not-persist-on-container-start
60,035,546
what package provides systemd-networkd in RHEL8?
I am unable to find systemd-networkd package for RHEL 8. I tried yum provides and yum search but to no avail. [ec2-user@server1 ~]$ sudo yum search systemd-networkd Last metadata expiration check: 0:06:33 ago on Mon 03 Feb 2020 08:16:21 UTC. No matches found. [ec2-user@server1 ~]$ sudo yum search systemd-resolved Last metadata expiration check: 0:06:43 ago on Mon 03 Feb 2020 08:16:21 UTC. No matches found. [ec2-user@server1 ~]$ sudo yum provides systemd-networkd Last metadata expiration check: 0:06:50 ago on Mon 03 Feb 2020 08:16:21 UTC. Error: No Matches found [ec2-user@server1 ~]$ sudo yum search systemd | grep networkd Last metadata expiration check: 0:11:47 ago on Mon 03 Feb 2020 08:16:21 UTC.
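systemd-networkd is not shipped in the RHEL 8 base repositories (Red Hat supports NetworkManager for network configuration instead); the split-out package is available from EPEL. A sketch:

```shell
# enable EPEL 8, then install the split-out package
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf install systemd-networkd

sudo systemctl enable --now systemd-networkd
```

Since EPEL packages are community-maintained, this configuration is not covered by Red Hat support.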
linux, rhel
1
1,992
1
https://stackoverflow.com/questions/60035546/what-package-provides-systemd-networkd-in-rhel8
58,858,201
MongoDB degrading write performance over time
I am importing a lot of data (18GB, 3 million documents) over time; almost all the data are indexed, so there is a lot of indexing going on. The system consists of a single client (a single process on a separate machine) establishing a single connection (using pymongo) and doing insertMany in batches of 1000 docs. MongoDB setup: single instance, journaling enabled, WiredTiger with default cache, RHEL 7, version 4.2.1 192GB RAM, 16 CPUs 1.5 TB SSD, cloud machine. When I start the server (after a full reboot) and insert the collection, it takes 1.5 hours. If the server runs for a while inserting some other data (from a single client) and finishes inserting it, then I delete the collection and run the same data insert again - it takes 6 hours (there is still sufficient disk, more than 60% free, and nothing else making connections to the db). It feels like the server performance degrades over time, maybe OS-specific. Any similar experience, ideas?
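Independent of the server-side cause, the client-side load loop is worth keeping cheap while investigating. A sketch (helper name hypothetical) of chunking a document stream into fixed-size insert_many batches, with ordered=False so one slow or duplicate document does not serialize the whole batch:

```python
def batches(docs, size=1000):
    """Yield lists of at most `size` docs from any iterable, without
    materializing the whole multi-million-document stream in memory."""
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # trailing partial batch
        yield batch

# with pymongo (connection details are placeholders):
# for chunk in batches(doc_stream, 1000):
#     coll.insert_many(chunk, ordered=False)
```

For repeated reloads of the same collection it can also help to drop the secondary indexes before the bulk insert and rebuild them once at the end, so the slow part becomes a single index build instead of 3 million incremental updates.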
mongodb, cloud, rhel
1
917
2
https://stackoverflow.com/questions/58858201/mongodb-degrading-write-performance-over-time
57,376,947
rpm postinstall for different versions of distribution
I am building an rpm which should be usable for both RHEL 6 and 7. I am able to find and install the correct files based on 0%{?rhel} . But is it possible to make the postinstall script work that way during installation? If I use 0%{?rhel} in postinstall, the corresponding scripts are made part of the rpm at build time. Is it possible to run distribution-based scripts at installation time in the postinstall section?
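%{?rhel} is expanded when rpmbuild runs, so a single rpm cannot branch on that macro at install time. What the %post scriptlet can do instead is probe the target system when it executes, e.g. (a sketch; the service name is a placeholder):

```
%post
# decide at install time, not build time
if [ -f /etc/os-release ] && grep -q 'VERSION_ID="7' /etc/os-release; then
    # RHEL 7 path (systemd)
    systemctl daemon-reload
else
    # RHEL 6 path (SysV init)
    /sbin/chkconfig --add myservice
fi
```

Testing for the feature itself (e.g. [ -d /run/systemd/system ]) is often more robust than parsing the release string; the alternative is building two rpms, one per %{?rhel} value.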
rpm, rhel, rpm-spec
1
249
1
https://stackoverflow.com/questions/57376947/rpm-postinstall-for-different-versions-of-distribution
57,056,968
SonarQube server not showing in browser
I have a Linux VM running with a Jenkins, Nexus and SonarQube server on it. The IP for the VM is 192.168.56.2 and I have no trouble accessing both Jenkins and Nexus on ports 8080 and 8081 respectively. However, when I try to access 192.168.56.2:9000 for SonarQube it just says 192.168.56.2 refused to connect . When I run systemctl status sonar in the terminal it shows that SonarQube is active and running. I have opened the firewall to port 9000 and I have not changed any of the default settings. Does anyone have any idea what might be the issue?
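"Connection refused" while systemd reports the unit active usually means SonarQube is listening only on loopback, or its web process died after startup. A sketch of checks (the property names are SonarQube's standard ones; install paths vary):

```shell
# is anything listening on 9000, and on which address?
ss -tlnp | grep 9000

# SonarQube's own logs say whether the web server actually came up
tail -n 50 /opt/sonarqube/logs/sonar.log /opt/sonarqube/logs/web.log

# to bind to all interfaces, set in conf/sonar.properties and restart:
# sonar.web.host=0.0.0.0
# sonar.web.port=9000
```

If 9000 is bound to 127.0.0.1 it will accept connections from the VM itself but refuse them from the host, which matches the symptom here.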
sonarqube, virtual-machine, rhel
1
700
1
https://stackoverflow.com/questions/57056968/sonarqube-server-not-showing-in-browser
56,959,460
Simple cgi script fails with permission denied error
Perl CGI script fails with Can't locate /home/testdir/first.pl: /home/testdir/first.pl: Permission denied at /var/www/cgi-bin/first.cgi line 2. and End of script output before headers: first.cgi in /etc/httpd/logs/error_log This is an rhel8 system with apache 2.4 I have tried moving first.pl around to different locations and modifying first.cgi to point to first.pl . first.cgi executes if I place first.pl in /var/www , but not /home/testdir , /var or other directories In httpd.conf , I set permissions for /home/testdir/ to the same as /var/www , shown below, and restarted apache <Directory "/home/testdir"> AllowOverride None # Allow open access: Require all granted </Directory> Out of frustration, I then changed the permissions for /var/www to Require all denied and restarted apache. first.cgi still successfully ran first.pl when I pointed it to /var/www with the permissions changed to Require all denied . I also disabled suexec and received the same errors when pointing first.cgi to /home/testdir The permissions for first.pl are 755 in /home/testdir as well as /var/www and the user and group are both root. The permissions for home , testdir , var and www are all 755 and the users and groups are all root first.cgi : #!/usr/bin/perl require '/home/testdir/first.pl'; test(); first.pl : #!/usr/bin/perl sub test{ print "Content-type: text/html\n\n"; print "Hello, World."; } first; The script should display "Hello, World." on the webpage. Instead, it displays: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator at root@localhost to inform them of the time this error occurred, and the actions you performed just before this error. More information about this error may be available in the server error log.
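On RHEL 8 the classic cause of "Permission denied" for httpd reading files under /home - with filesystem permissions already at 755 and Require all granted having no effect - is SELinux: the default user_home_t label on home directories is off-limits to the httpd_t domain, while /var/www carries httpd_sys_content_t. A sketch of diagnosis and two fixes (the boolean and type names are the standard RHEL ones):

```shell
# confirm it's SELinux, not DAC permissions
sudo ausearch -m AVC -ts recent | grep httpd

# option 1: allow httpd to read user content via the boolean
sudo setsebool -P httpd_read_user_content 1

# option 2: relabel the directory as web content
sudo semanage fcontext -a -t httpd_sys_content_t "/home/testdir(/.*)?"
sudo restorecon -Rv /home/testdir
```

Separately, once the file is readable, require expects first.pl to end with a true value - the trailing first; calls an undefined subroutine and should be 1; instead.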
apache, perl, cgi, rhel
1
2,411
1
https://stackoverflow.com/questions/56959460/simple-cgi-script-fails-with-permission-denied-error
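Not from the thread, but the symptoms above (works under /var/www , fails with Permission denied under /home despite 755 and Require all granted ) are the classic signature of SELinux file contexts on RHEL 8, where /var/www carries the httpd-readable label and /home does not. A sketch of how to check and relabel, assuming the default httpd_sys_content_t type ( semanage comes from policycoreutils-python-utils ):

```shell
# Is SELinux enforcing, and what labels do the two copies carry?
getenforce
ls -Z /var/www/cgi-bin/first.cgi /home/testdir/first.pl

# Recent AVC denials, if any, will name httpd and the blocked path
sudo ausearch -m avc -ts recent

# Relabel /home/testdir so httpd may read it (persists across relabels)
sudo semanage fcontext -a -t httpd_sys_content_t "/home/testdir(/.*)?"
sudo restorecon -Rv /home/testdir

# httpd access to home directories can also be gated by a boolean
sudo setsebool -P httpd_enable_homedirs on
```

If ls -Z shows something like user_home_t or default_t on the /home copy while the /var/www copy shows httpd_sys_content_t , that confirms the diagnosis.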
56,644,904
Feature (@javascript) testing doesn't work well
I'm setting up a simple check: when I type a keyword in an input field in the browser, I check the result. When the checking reaches the JavaScript part, the check takes a long time to finish and ends with an error. I'm using: Linux RedHat 7 Behat 3.5.0 Selenium Standalone Server 3.141.59 Mink 1.6 # behat.yml default: extensions: Behat\MinkExtension: browser_name: chrome goutte: ~ javascript_session: selenium2 selenium2: wd_host: [URL] capabilities: { "browser": "chrome", "version": "*", 'chrome': {'switches':['--start-maximized']}} base_url: [URL] suites: ui: contexts: [FeatureContext, WebContext] #Webcontext.php <?php use Behat\MinkExtension\Context\MinkContext; class WebContext extends MinkContext { /** *@When I wait for :arg1 seconds */ public function iWaitForSeconds($args) { $this->getSession()->wait($args * 1000); } /** * @When I fill in :arg1 with: :arg2 */ public function iFillInWith($value, $field) { $javascript = "window.onload = function () {var e = document.getElementById('$field').value='$value';}"; $this->getSession()->executeScript($javascript); } } # bing.feature @insulated Feature: Bing Scenario: Homepage Given I am on the homepage Then I should see "Bing" And I should see "Images" And I should see "Office Online" @javascript Scenario: Search Given I am on the homepage When I fill in "sb_form_q" with "grafikart" And I wait for 1 seconds Then I should see "Grafikart.fr" I expect the check to be quick and all of my lines to be green, but currently it doesn't work.
Feature (@javascript) testing doesn't work well I'm setting up a simple check: when I type a keyword in an input field in the browser, I check the result. When the checking reaches the JavaScript part, the check takes a long time to finish and ends with an error. I'm using: Linux RedHat 7 Behat 3.5.0 Selenium Standalone Server 3.141.59 Mink 1.6 # behat.yml default: extensions: Behat\MinkExtension: browser_name: chrome goutte: ~ javascript_session: selenium2 selenium2: wd_host: [URL] capabilities: { "browser": "chrome", "version": "*", 'chrome': {'switches':['--start-maximized']}} base_url: [URL] suites: ui: contexts: [FeatureContext, WebContext] #Webcontext.php <?php use Behat\MinkExtension\Context\MinkContext; class WebContext extends MinkContext { /** *@When I wait for :arg1 seconds */ public function iWaitForSeconds($args) { $this->getSession()->wait($args * 1000); } /** * @When I fill in :arg1 with: :arg2 */ public function iFillInWith($value, $field) { $javascript = "window.onload = function () {var e = document.getElementById('$field').value='$value';}"; $this->getSession()->executeScript($javascript); } } # bing.feature @insulated Feature: Bing Scenario: Homepage Given I am on the homepage Then I should see "Bing" And I should see "Images" And I should see "Office Online" @javascript Scenario: Search Given I am on the homepage When I fill in "sb_form_q" with "grafikart" And I wait for 1 seconds Then I should see "Grafikart.fr" I expect the check to be quick and all of my lines to be green, but currently it doesn't work.
php, selenium, rhel, behat
1
2,044
3
https://stackoverflow.com/questions/56644904/feature-javascript-testing-doesnt-work-well
55,071,283
How to install Cargo on a RHEL Linux server?
I tried installing Cargo on a RHEL server with: curl [URL] -sSf | sh but after finishing, I get the response: cargo -bash: cargo: command not found Is there a different way to install?
How to install Cargo on a RHEL Linux server? I tried installing Cargo on a RHEL server with: curl [URL] -sSf | sh but after finishing, I get the response: cargo -bash: cargo: command not found Is there a different way to install?
linux, rust, centos, rhel, rust-cargo
1
5,511
1
https://stackoverflow.com/questions/55071283/how-to-install-cargo-on-a-rhel-linux-server
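Not from the thread, but assuming the curl ... | sh line was the official rustup installer: it places cargo in ~/.cargo/bin and edits the shell profile, and that PATH change only takes effect in new shell sessions, which would explain command not found in the current one. A sketch:

```shell
# Pick up the PATH change in the current shell session
source "$HOME/.cargo/env"
cargo --version

# Or make it explicit for future sessions
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
```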
54,444,513
How to include the path (full or relative) in ls command results
This command supplies almost all of the information which I need in the proper CSV format (filename,dateModified,size), but I would also like to include the file's directory as a separate item within each line's output. Is there a way to do that using ls or another command available on RHEL? ls * -R --fu | awk '{ print $9","$6","$5}'
How to include the path (full or relative) in ls command results This command supplies almost all of the information which I need in the proper CSV format (filename,dateModified,size), but I would also like to include the file's directory as a separate item within each line's output. Is there a way to do that using ls or another command available on RHEL? ls * -R --fu | awk '{ print $9","$6","$5}'
rhel, ls
1
370
1
https://stackoverflow.com/questions/54444513/how-to-include-include-path-full-or-relative-in-ls-command-results
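As an alternative to parsing ls (whose columns shift with locale and file names), GNU find can emit each field directly, including the directory ( %h ) as its own CSV item. A sketch:

```shell
# filename, mtime, size, directory -- one CSV row per file, no ls parsing
find . -type f -printf '%f,%TY-%Tm-%Td %TH:%TM,%s,%h\n'
```

%f is the bare file name and %h the leading directory part; reorder the directives to taste.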
54,268,321
How to get python3 pypy on centos 7
I need a PyPy compatible with Python 3 (for Django 2.0) on CentOS 7. The portable version and the Ubuntu version do not work, and CentOS has only a 2.7 PyPy. The question is: how do I get a Python-3-compatible result? I got a 2.7-compatible tree when I fetched the source like this: hg clone [URL] pypy pypy get-pip.py /usr/lib64/pypy-5.0.1/bin/pip install virtualenv pypy -m virtualenv /tmp/pypy27_venv/ source /tmp/pypy27_venv/bin/activate pip install -r pypy/requirements.txt cd /usr/src/pypy/pypy/goal pypy ../../rpython/bin/rpython --opt=jit After the build completes I get /tmp/usession-default-19/build/pypy-3-centos7/bin/pypy Python 2.7.13 (0873ec79aa36, Jan 19 2019, 13:33:23) [PyPy 6.1.0-alpha0 with GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
How to get python3 pypy on centos 7 I need a PyPy compatible with Python 3 (for Django 2.0) on CentOS 7. The portable version and the Ubuntu version do not work, and CentOS has only a 2.7 PyPy. The question is: how do I get a Python-3-compatible result? I got a 2.7-compatible tree when I fetched the source like this: hg clone [URL] pypy pypy get-pip.py /usr/lib64/pypy-5.0.1/bin/pip install virtualenv pypy -m virtualenv /tmp/pypy27_venv/ source /tmp/pypy27_venv/bin/activate pip install -r pypy/requirements.txt cd /usr/src/pypy/pypy/goal pypy ../../rpython/bin/rpython --opt=jit After the build completes I get /tmp/usession-default-19/build/pypy-3-centos7/bin/pypy Python 2.7.13 (0873ec79aa36, Jan 19 2019, 13:33:23) [PyPy 6.1.0-alpha0 with GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
centos7, centos6, rhel, pypy, rhel7
1
2,944
1
https://stackoverflow.com/questions/54268321/how-to-get-python3-pypy-on-centos-7
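Not from the thread: the default branch of the PyPy Mercurial repository builds the 2.7-compatible interpreter, which matches the output above. A Python-3 build needs the Python-3 branch checked out before translating (the branch name py3.5 below is an assumption based on the PyPy 6.x era; confirm it with hg branches ). The commands are otherwise the question's own with that one step added:

```shell
cd /usr/src/pypy
hg branches          # confirm the Python-3 branch name (assumed py3.5 here)
hg update py3.5      # switch off the default (2.7) branch before translating
cd pypy/goal
pypy ../../rpython/bin/rpython --opt=jit
```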
53,124,607
Shellscript to monitor a log file if keyword triggers then run the snmptrap command
Is there a way to monitor a log file using a shell script, like tail -f /var/log/errorlog.txt , so that if something like the down keyword appears, it generates an SNMPTRAP to the SNMP manager and continues the monitoring? I have an SNMP script available to generate the SNMPTrap and it looks like snmptrap -v v2c -c community host "Error message" Let's say the script name is snmp.sh My question is how to perform the below operation: tail the logs; if keyword [down] matches, then use the snmp.sh script to send an alert, else leave. As per the suggestion, I tried this tail -F /data/log/test.log | egrep -io 'got signal 15 | now exiting' | while read -r line ; do case "$line" in "got signal 15") echo "hi" ;; "now exiting") echo "hi2" ;; *) esac done but the problem is that tail is not working here with the case statement: whenever new log details are added, they are not reaching the case statement and nothing is echoed. I can get the output if I use cat/less/more. Could someone please tell me what mistake I have made here? Thanks in advance
Shellscript to monitor a log file if keyword triggers then run the snmptrap command Is there a way to monitor a log file using a shell script, like tail -f /var/log/errorlog.txt , so that if something like the down keyword appears, it generates an SNMPTRAP to the SNMP manager and continues the monitoring? I have an SNMP script available to generate the SNMPTrap and it looks like snmptrap -v v2c -c community host "Error message" Let's say the script name is snmp.sh My question is how to perform the below operation: tail the logs; if keyword [down] matches, then use the snmp.sh script to send an alert, else leave. As per the suggestion, I tried this tail -F /data/log/test.log | egrep -io 'got signal 15 | now exiting' | while read -r line ; do case "$line" in "got signal 15") echo "hi" ;; "now exiting") echo "hi2" ;; *) esac done but the problem is that tail is not working here with the case statement: whenever new log details are added, they are not reaching the case statement and nothing is echoed. I can get the output if I use cat/less/more. Could someone please tell me what mistake I have made here? Thanks in advance
linux, bash, shell, rhel, snmp-trap
1
3,130
2
https://stackoverflow.com/questions/53124607/shellscript-to-monitor-a-log-file-if-keyword-triggers-then-run-the-snmptrap-comm
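Two things likely bite in the pipeline above, sketched below. First, grep block-buffers its output when writing into a pipe, so matches from tail -F sit in a buffer instead of reaching the while loop; --line-buffered fixes that. Second, the spaces around | in 'got signal 15 | now exiting' become part of the alternatives, so the extracted text ( "got signal 15 " with a trailing space) never equals the case labels. The sketch feeds two sample lines so it terminates; in the real script the producer stays tail -F /data/log/test.log :

```shell
# sample input stands in for: tail -F /data/log/test.log
printf 'daemon: got signal 15\napp: now exiting\n' \
  | grep --line-buffered -Eio 'got signal 15|now exiting' \
  | while read -r line; do
      case "$line" in
        "got signal 15") echo "hi" ;;
        "now exiting")   echo "hi2" ;;
      esac
    done
```

Note that with -i the match is printed in the input's original case, so if the log mixes case, normalize with tr '[:upper:]' '[:lower:]' before the case statement.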
52,228,696
Docker container receives permission denied in mounts
On RHEL, CentOS, Fedora, and other SELinux-enabled distributions, creating a docker image with mounts and turned-on SELinux returns permission denied: docker run --rm -it -v $(pwd):/home centos7 [root@4b348767653c ~]# ls /home ls: cannot open directory /home: Permission denied How do I continue using Docker images with mounted volumes without turning off SELinux?
Docker container receives permission denied in mounts On RHEL, CentOS, Fedora, and other SELinux-enabled distributions, creating a docker image with mounts and turned-on SELinux returns permission denied: docker run --rm -it -v $(pwd):/home centos7 [root@4b348767653c ~]# ls /home ls: cannot open directory /home: Permission denied How do I continue using Docker images with mounted volumes without turning off SELinux?
docker, centos, rhel, selinux
1
1,226
1
https://stackoverflow.com/questions/52228696/docker-container-receives-permission-denied-in-mounts
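Docker can relabel a bind mount for SELinux by itself via a volume-option suffix, so enforcing mode can stay on. A sketch using the question's own command ( centos7 is the image name from the question):

```shell
# :Z = private label for this one container; :z = shared label usable by
# several containers.  SELinux stays enforcing on the host.
docker run --rm -it -v "$(pwd)":/home:Z centos7
```

Avoid :z / :Z on broad system directories such as /home itself or /usr, since the relabel applies to the host side of the mount.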
51,469,237
Is it possible to enable/disable rules remotely from Wazuh server?
I have read about Centralized configuration in Wazuh. But can the rules be enabled/disabled on the server instead of changing them on all servers?
Is it possible to enable/disable rules remotely from Wazuh server? I have read about Centralized configuration in Wazuh. But can the rules be enabled/disabled on the server instead of changing them on all servers?
rhel, ossec
1
2,536
1
https://stackoverflow.com/questions/51469237/is-it-possible-to-enable-disable-rules-remotely-from-wazuh-server
51,150,138
docker error while loading shared libraries (RHEL 7.5)
I installed Docker on a Red Hat Enterprise Linux Server 7.5 (Maipo) system: docker version Version: 1.13.1 API version: 1.26 Package version: docker-1.13.1-58.git87f2fab.e17.x86_64 OS/Arch: linux/amd64 Now if I try to run a docker image, I get errors similar to this: docker run docker.io/jupyter/datascience-notebook tini: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory I have searched for help and have already taken a multitude of possible actions: libraries seem to be linked correctly all libraries are up to date Hello-World example works I also came across information saying that running containers from docker.io / hub.docker.com under RHEL is not supported - which I don't really get, as main purpose of docker is to enable running programs independent from their OS...? [URL] Does this mean using docker under RHEL does not really provide me with the possibility of easily deploying/sharing a docker-image with non-RHEL users? Also, does this mean I can only access and use official RHEL-docker images? [URL] As I wanted to use docker to have ready-to-go environments with R-Python/Jupyter/H2o (and similar), I'm disappointed because I could not find suitable images for RHEL there. So, my questions would be: Is it possible to run docker.io / hub.docker.com images under RHEL7.5? if not, could I share my own created docker images under RHEL7.5 to other users with different OS versions? Are there other projects / sites to share docker-images for data science purposes on RHEL? Would you agree that my next step would be: building my own docker-image, adding R/Python/jupyter step by step? Best regards, workah0lic
docker error while loading shared libraries (RHEL 7.5) I installed Docker on a Red Hat Enterprise Linux Server 7.5 (Maipo) system: docker version Version: 1.13.1 API version: 1.26 Package version: docker-1.13.1-58.git87f2fab.e17.x86_64 OS/Arch: linux/amd64 Now if I try to run a docker image, I get errors similar to this: docker run docker.io/jupyter/datascience-notebook tini: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory I have searched for help and have already taken a multitude of possible actions: libraries seem to be linked correctly all libraries are up to date Hello-World example works I also came across information saying that running containers from docker.io / hub.docker.com under RHEL is not supported - which I don't really get, as main purpose of docker is to enable running programs independent from their OS...? [URL] Does this mean using docker under RHEL does not really provide me with the possibility of easily deploying/sharing a docker-image with non-RHEL users? Also, does this mean I can only access and use official RHEL-docker images? [URL] As I wanted to use docker to have ready-to-go environments with R-Python/Jupyter/H2o (and similar), I'm disappointed because I could not find suitable images for RHEL there. So, my questions would be: Is it possible to run docker.io / hub.docker.com images under RHEL7.5? if not, could I share my own created docker images under RHEL7.5 to other users with different OS versions? Are there other projects / sites to share docker-images for data science purposes on RHEL? Would you agree that my next step would be: building my own docker-image, adding R/Python/jupyter step by step? Best regards, workah0lic
docker, rhel
1
2,517
1
https://stackoverflow.com/questions/51150138/docker-error-while-loading-shared-libraries-rhel-7-5
50,837,363
Protocol error when using Net:SSH:Perl module
I have a script that uses Net::SSH:Perl module. It is able to ssh to RHEL6.9 hosts but not RHEL7.4 hosts. I get the following error: Protocol error: expected packet type 91, got 80 at /Net/SSH/Perl/Packet.pm line 221 How do I remedy this?
Protocol error when using Net:SSH:Perl module I have a script that uses Net::SSH:Perl module. It is able to ssh to RHEL6.9 hosts but not RHEL7.4 hosts. I get the following error: Protocol error: expected packet type 91, got 80 at /Net/SSH/Perl/Packet.pm line 221 How do I remedy this?
perl, unix, ssh, rhel
1
781
1
https://stackoverflow.com/questions/50837363/protocol-error-when-using-netsshperl-module
50,395,131
Editing Jenkins sysconfig not changing home directory - RHEL
I've edited /etc/sysconfig/jenkins and changed the following from /var/lib/jenkins to /app/jenkins in order to move to a volume with more storage allocated. The problem is that even after a restart, the Jenkins home directory still shows /var/lib/jenkins. What do I need to change in order for this change to take effect? ## Path: Development/Jenkins ## Description: Jenkins Automation Server ## Type: string ## Default: "/app/jenkins" ## ServiceRestart: jenkins # # Directory where Jenkins store its configuration and working # files (checkouts, build reports, artifacts, ...). # JENKINS_HOME="/app/jenkins"
Editing Jenkins sysconfig not changing home directory - RHEL I've edited /etc/sysconfig/jenkins and changed the following from /var/lib/jenkins to /app/jenkins in order to move to a volume with more storage allocated. The problem is that even after a restart, the Jenkins home directory still shows /var/lib/jenkins. What do I need to change in order for this change to take effect? ## Path: Development/Jenkins ## Description: Jenkins Automation Server ## Type: string ## Default: "/app/jenkins" ## ServiceRestart: jenkins # # Directory where Jenkins store its configuration and working # files (checkouts, build reports, artifacts, ...). # JENKINS_HOME="/app/jenkins"
jenkins, rhel
1
607
1
https://stackoverflow.com/questions/50395131/editing-jenkins-sysconfig-not-changing-home-directory-rhel
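Some checks worth running, sketched below (not from the thread). Jenkins only uses the new location if the data has been copied there and is owned by the jenkins user, and on packages that ship a native systemd unit, /etc/sysconfig/jenkins is not read at all: the environment then has to be overridden in a systemd drop-in instead.

```shell
sudo systemctl stop jenkins
sudo rsync -a /var/lib/jenkins/ /app/jenkins/
sudo chown -R jenkins:jenkins /app/jenkins

# If the service is a native systemd unit, sysconfig is ignored; override it:
sudo systemctl edit jenkins    # in the drop-in, add:
                               #   [Service]
                               #   Environment="JENKINS_HOME=/app/jenkins"
sudo systemctl daemon-reload
sudo systemctl start jenkins

# Verify what the running process actually received
tr '\0' '\n' < /proc/"$(pgrep -f jenkins | head -1)"/environ | grep JENKINS_HOME
```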
48,969,497
How to install ChromeDriver in RedHat Linux?
I am trying to use selenium on RHEL(Red Hat). In order to use Chrome browser, I need to install the chromedriver. Does anyone know how to install this, or what command I need to use? I did not find any resources on this that worked. Really appreciate any help. Thank you.
How to install ChromeDriver in RedHat Linux? I am trying to use selenium on RHEL(Red Hat). In order to use Chrome browser, I need to install the chromedriver. Does anyone know how to install this, or what command I need to use? I did not find any resources on this that worked. Really appreciate any help. Thank you.
linux, google-chrome, selenium, selenium-chromedriver, rhel
1
17,279
1
https://stackoverflow.com/questions/48969497/how-to-install-chromedriver-in-redhat-linux
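There is no chromedriver package in the stock RHEL repos; it is a single static binary that you download and place on the PATH. A sketch (the version number below is only an example: match it to the installed google-chrome --version , and note Chrome itself must already be installed):

```shell
sudo yum install -y wget unzip
wget https://chromedriver.storage.googleapis.com/2.35/chromedriver_linux64.zip
unzip chromedriver_linux64.zip
sudo mv chromedriver /usr/local/bin/
sudo chmod +x /usr/local/bin/chromedriver
chromedriver --version
```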
48,448,875
How to print remote from Linux Server to Windows Printer Server
I need to print a document from an RHEL server to a Windows print server; the protocol to be used is IPP. The reason for protocol IPP is that on Windows Server 2012, the LPD and LPR services are deprecated, see [URL] Another good reason to use IPP: [URL] My question is: what is the best way, on an RHEL system, to send a document to be printed on a Windows print server through the IPP protocol?
How to print remote from Linux Server to Windows Printer Server I need to print a document from an RHEL server to a Windows print server; the protocol to be used is IPP. The reason for protocol IPP is that on Windows Server 2012, the LPD and LPR services are deprecated, see [URL] Another good reason to use IPP: [URL] My question is: what is the best way, on an RHEL system, to send a document to be printed on a Windows print server through the IPP protocol?
windows-server-2012, rhel, ipp-protocol
1
4,137
1
https://stackoverflow.com/questions/48448875/how-to-print-remote-from-linux-server-to-windows-printer-server
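CUPS speaks IPP natively, so the usual route is a local CUPS queue whose device URI points at the Windows share. A sketch (host, share name and queue name are placeholders):

```shell
sudo yum install -y cups
sudo systemctl enable --now cups

# Queue aimed at the Windows print server over IPP
sudo lpadmin -p winprinter -E -v ipp://winserver.example.com/printers/ShareName
lp -d winprinter /path/to/document.pdf
```

On newer CUPS (2.2+) adding -m everywhere can negotiate driverless IPP; on RHEL 7's older CUPS, a queue created this way passes data through raw and relies on the Windows side holding the printer driver.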
48,437,162
Replacing too many if statements in bash
I have written a bash script which works as follows 1) Check the status of service1. If it is not started, then check the status of service2. If service2 is not started, then display an error. 2) If service1 is started, then check the log file of service1 and grep for starting. If it matches, then success. 3) If service2 is started, then check the log file of service2 and grep for starting. If it matches, then success. My bash script looks as below, but it has too many if statements. I am trying to figure out how to reduce these many if statements #!/bin/bash service service1 status | grep "good" > /dev/null if [ $? -ne 0 ];then service service2 status | grep "good" > /dev/null if [ $? -ne 0 ];then var1="service1 is down" echo "$var1" else cat /dir/log2 | grep "abc" > /dev/null 2>&1 if [ $? -ne 0 ];then var2="exception" echo "$var2" fi fi else cat /dir/log1 | grep "abc" > /dev/null 2>&1 if [ $? -ne 0 ];then var3="exception" echo "$var3" fi fi
Replacing too many if statements in bash I have written a bash script which works as follows 1) Check the status of service1. If it is not started, then check the status of service2. If service2 is not started, then display an error. 2) If service1 is started, then check the log file of service1 and grep for starting. If it matches, then success. 3) If service2 is started, then check the log file of service2 and grep for starting. If it matches, then success. My bash script looks as below, but it has too many if statements. I am trying to figure out how to reduce these many if statements #!/bin/bash service service1 status | grep "good" > /dev/null if [ $? -ne 0 ];then service service2 status | grep "good" > /dev/null if [ $? -ne 0 ];then var1="service1 is down" echo "$var1" else cat /dir/log2 | grep "abc" > /dev/null 2>&1 if [ $? -ne 0 ];then var2="exception" echo "$var2" fi fi else cat /dir/log1 | grep "abc" > /dev/null 2>&1 if [ $? -ne 0 ];then var3="exception" echo "$var3" fi fi
linux, bash, service, sh, rhel
1
1,798
2
https://stackoverflow.com/questions/48437162/replacing-too-many-if-statements-in-bash
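The nesting collapses once each check lives in a small function and the three branches become a flat if/elif/else. A sketch preserving the original logic (service names, log paths and the good / abc patterns are taken from the question):

```shell
#!/bin/bash
# grep -q replaces the "> /dev/null; if [ $? -ne 0 ]" dance
is_up()  { service "$1" status 2>/dev/null | grep -q "good"; }
log_ok() { grep -q "abc" "$1" 2>/dev/null; }

if is_up service1; then
    log_ok /dir/log1 || echo "exception"
elif is_up service2; then
    log_ok /dir/log2 || echo "exception"
else
    echo "service1 is down"
fi
```

Each trailing "if the grep failed, echo" block becomes a one-line `cmd || echo`, and the intermediate var1/var2/var3 variables disappear.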
48,084,883
Tomcat SSL certificate authority invalid
Tried asking before but wasn't too good at it, so here's attempt two. I'm trying to get SSL running on a Tomcat 7 server under RHEL. The server works fine under HTTP, but when I try to access it with HTTPS I get a certificate error; looking into it further, Chrome tells me the certificate authority is invalid. I did some research. Added the certs to /etc/pki/ca-trust/source/anchors, update-ca-trust, still the same problem. Tried rebuilding the keystore from scratch and changing up the order in which they were imported, still nothing. Here's what's currently in my keystore: root, Dec 29, 2017, trustedCertEntry, tomcat, Dec 29, 2017, PrivateKeyEntry, intermed, Dec 29, 2017, trustedCertEntry, crm2.mydomain.org, Jan 3, 2018, trustedCertEntry, and what's in my server.xml <Connector port="443" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" keystoreFile="/opt/apache-tomcat-7.0.82/conf/store" keystorePass=[pass] clientAuth="false" sslProtocol="TLS"/> Other info: Certs obtained from GoDaddy. Used the guide for installation here. The GoDaddy SSL checker says I'm missing the intermediate certificate. Tomcat version 7 RHEL 7.4 java 1.8 Any help is appreciated
Tomcat SSL certificate authority invalid Tried asking before but wasn't too good at it, so here's attempt two. I'm trying to get SSL running on a Tomcat 7 server under RHEL. The server works fine under HTTP, but when I try to access it with HTTPS I get a certificate error; looking into it further, Chrome tells me the certificate authority is invalid. I did some research. Added the certs to /etc/pki/ca-trust/source/anchors, update-ca-trust, still the same problem. Tried rebuilding the keystore from scratch and changing up the order in which they were imported, still nothing. Here's what's currently in my keystore: root, Dec 29, 2017, trustedCertEntry, tomcat, Dec 29, 2017, PrivateKeyEntry, intermed, Dec 29, 2017, trustedCertEntry, crm2.mydomain.org, Jan 3, 2018, trustedCertEntry, and what's in my server.xml <Connector port="443" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" keystoreFile="/opt/apache-tomcat-7.0.82/conf/store" keystorePass=[pass] clientAuth="false" sslProtocol="TLS"/> Other info: Certs obtained from GoDaddy. Used the guide for installation here. The GoDaddy SSL checker says I'm missing the intermediate certificate. Tomcat version 7 RHEL 7.4 java 1.8 Any help is appreciated
tomcat, ssl, https, rhel
1
6,898
1
https://stackoverflow.com/questions/48084883/tomcat-ssl-certificate-authority-invalid
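One detail in the keystore listing stands out: crm2.mydomain.org is a trustedCertEntry, i.e. the signed certificate appears to have been imported under its own alias rather than onto the tomcat private-key entry. In that case Tomcat keeps serving the key pair's original self-signed certificate, which would explain both the authority-invalid error and the checker reporting a missing intermediate. A sketch of the usual import order (certificate file names are examples):

```shell
KS=/opt/apache-tomcat-7.0.82/conf/store
keytool -import -trustcacerts -alias root     -file gd_root.crt          -keystore "$KS"
keytool -import -trustcacerts -alias intermed -file gd_intermediate.crt  -keystore "$KS"
# The signed certificate must go onto the SAME alias as the private key,
# so keytool attaches the chain to the PrivateKeyEntry:
keytool -import -trustcacerts -alias tomcat   -file crm2.mydomain.org.crt -keystore "$KS"

# Verify: the tomcat entry should now report a certificate chain length > 1
keytool -list -v -keystore "$KS" -alias tomcat
```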
47,888,089
Issue when starting wiremock-standalone using crontab
I have a new regression suite that uses the Wiremock standalone JAR. In order to ensure this is running on the server, I have this script called checkwiremock.sh #!/bin/bash cnt=$(ps -eaflc --sort stime | grep wiremock-standalone-2.11.0.jar |grep -v grep | wc -l) if(test $cnt -eq 1); then echo "Service already running..." else echo "Starting Service" nohup java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 --verbose & fi The script works as expected when run manually ./checkwiremock.sh However when started using Crontab, * * * * * /bin/bash /etc/opt/wiremock/checkwiremock.sh Wiremock returns No response could be served as there are no stub mappings in this WireMock instance. The only difference I can see between the manually started process and cron process is the TTY root 31526 9.5 3.2 1309736 62704 pts/0 Sl 11:28 0:01 java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 root 31729 22.0 1.9 1294104 37808 ? Sl 11:31 0:00 java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 Can't figure out what is wrong here. Server details: Red Hat Enterprise Linux Server release 6.5 (Santiago) *Edit: corrected paths to ones actually used
Issue when starting wiremock-standalone using crontab I have a new regression suite that uses the Wiremock standalone JAR. In order to ensure this is running on the server, I have this script called checkwiremock.sh #!/bin/bash cnt=$(ps -eaflc --sort stime | grep wiremock-standalone-2.11.0.jar |grep -v grep | wc -l) if(test $cnt -eq 1); then echo "Service already running..." else echo "Starting Service" nohup java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 --verbose & fi The script works as expected when run manually ./checkwiremock.sh However when started using Crontab, * * * * * /bin/bash /etc/opt/wiremock/checkwiremock.sh Wiremock returns No response could be served as there are no stub mappings in this WireMock instance. The only difference I can see between the manually started process and cron process is the TTY root 31526 9.5 3.2 1309736 62704 pts/0 Sl 11:28 0:01 java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 root 31729 22.0 1.9 1294104 37808 ? Sl 11:31 0:00 java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 Can't figure out what is wrong here. Server details: Red Hat Enterprise Linux Server release 6.5 (Santiago) *Edit: corrected paths to ones actually used
linux, cron, rhel, wiremock
1
2,063
1
https://stackoverflow.com/questions/47888089/issue-when-starting-wiremock-standalone-using-crontab
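The TTY difference is a red herring; the working directory is the more likely culprit. WireMock loads ./mappings and ./__files relative to the process's current directory, which is the invoking shell when the script is run by hand but the cron user's home directory under cron, hence a server that starts cleanly yet holds no stubs. Two sketched fixes:

```shell
# Either cd into the stub root at the top of checkwiremock.sh ...
cd /etc/opt/wiremock || exit 1
nohup java -jar wiremock-standalone-2.11.0.jar --port 1324 --verbose &

# ... or pin the stub root explicitly so the cwd no longer matters:
nohup java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar \
    --port 1324 --root-dir /etc/opt/wiremock --verbose &
```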
45,729,175
Can't locate object method "exchange" via package "Net::SSH::Perl::Kex::C25519" at /usr/local/lib64/perl5/Net/SSH/Perl/Kex.pm line 107
I have been using the SSH package of Perl to connect to my RHEL systems. I recently upgraded one of my VM to redhat-release-server-7.2-9.el7.x86_64. Now when I am running my Perl script it is throwing the error: Can't locate object method "exchange" via package "Net::SSH::Perl::Kex::C25519" at /usr/local/lib64/perl5/Net/SSH/Perl/Kex.pm line 107. when making the ssh object. The same script is otherwise working on my 6.8 RHEL version. Any suggestions? Here is the code: #!/usr/local/bin/perl use strict; use warnings; use Net::SSH::Perl; my $ssh = Net::SSH::Perl->new($server_ip, debug=>1); $ssh->login("root","password"); Debug print: [root@Automation_linux_10]# perl temp.pl Automation_linux_[Auto_server_ip]: Reading configuration data /root/.ssh/config Automation_linux_[Auto_server_ip]: Reading configuration data /etc/ssh_config Automation_linux_[Auto_server_ip]: Allocated local port 1023. Automation_linux_[Auto_server_ip]: Connecting to [SERVER_IP], port 22. Automation_linux_[Auto_server_ip]: Remote version string: SSH-2.0-OpenSSH_6.6.1 Automation_linux_[Auto_server_ip]: Remote protocol version 2.0, remote software version OpenSSH_6.6.1 Automation_linux_[Auto_server_ip]: Net::SSH::Perl Version 2.12, protocol version 2.0. Automation_linux_[Auto_server_ip]: No compat match: OpenSSH_6.6.1. Automation_linux_[Auto_server_ip]: Connection established. Automation_linux_[Auto_server_ip]: Sent key-exchange init (KEXINIT), waiting for response. Automation_linux_[Auto_server_ip]: Using curve25519-sha256@libssh.org for key exchange Automation_linux_[Auto_server_ip]: Host key algorithm: ssh-ed25519 Automation_linux_[Auto_server_ip]: Algorithms, c->s: chacha20-poly1305@openssh.com <implicit> none Automation_linux_[Auto_server_ip]: Algorithms, s->c: chacha20-poly1305@openssh.com <implicit> none Can't locate object method "exchange" via package "Net::SSH::Perl::Kex::C25519" at /usr/local/lib64/perl5/Net/SSH/Perl/Kex.pm line 107.
Can't locate object method "exchange" via package "Net::SSH::Perl::Kex::C25519" at /usr/local/lib64/perl5/Net/SSH/Perl/Kex.pm line 107 I have been using the SSH package of Perl to connect to my RHEL systems. I recently upgraded one of my VM to redhat-release-server-7.2-9.el7.x86_64. Now when I am running my Perl script it is throwing the error: Can't locate object method "exchange" via package "Net::SSH::Perl::Kex::C25519" at /usr/local/lib64/perl5/Net/SSH/Perl/Kex.pm line 107. when making the ssh object. The same script is otherwise working on my 6.8 RHEL version. Any suggestions? Here is the code: #!/usr/local/bin/perl use strict; use warnings; use Net::SSH::Perl; my $ssh = Net::SSH::Perl->new($server_ip, debug=>1); $ssh->login("root","password"); Debug print: [root@Automation_linux_10]# perl temp.pl Automation_linux_[Auto_server_ip]: Reading configuration data /root/.ssh/config Automation_linux_[Auto_server_ip]: Reading configuration data /etc/ssh_config Automation_linux_[Auto_server_ip]: Allocated local port 1023. Automation_linux_[Auto_server_ip]: Connecting to [SERVER_IP], port 22. Automation_linux_[Auto_server_ip]: Remote version string: SSH-2.0-OpenSSH_6.6.1 Automation_linux_[Auto_server_ip]: Remote protocol version 2.0, remote software version OpenSSH_6.6.1 Automation_linux_[Auto_server_ip]: Net::SSH::Perl Version 2.12, protocol version 2.0. Automation_linux_[Auto_server_ip]: No compat match: OpenSSH_6.6.1. Automation_linux_[Auto_server_ip]: Connection established. Automation_linux_[Auto_server_ip]: Sent key-exchange init (KEXINIT), waiting for response. 
Automation_linux_[Auto_server_ip]: Using curve25519-sha256@libssh.org for key exchange Automation_linux_[Auto_server_ip]: Host key algorithm: ssh-ed25519 Automation_linux_[Auto_server_ip]: Algorithms, c->s: chacha20-poly1305@openssh.com <implicit> none Automation_linux_[Auto_server_ip]: Algorithms, s->c: chacha20-poly1305@openssh.com <implicit> none Can't locate object method "exchange" via package "Net::SSH::Perl::Kex::C25519" at /usr/local/lib64/perl5/Net/SSH/Perl/Kex.pm line 107.
perl, ssh, rhel
1
1,226
1
https://stackoverflow.com/questions/45729175/cant-locate-object-method-exchange-via-package-netsshperlkexc25519
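Not from the thread: the debug log shows the curve25519-sha256@libssh.org kex being negotiated, and Net::SSH::Perl implements that exchange via the optional XS module Crypt::Curve25519. When that module is missing or failed to build, Kex::C25519 loads without its methods and dies with exactly this "Can't locate object method" error. Two sketched workarounds (whether the KexAlgorithms directive is honored depends on the Net::SSH::Perl version, so treat option 2 as an assumption to verify):

```shell
# Option 1: install the dependency the curve25519 kex needs
cpan Crypt::Curve25519

# Option 2: pin a kex both sides support so C25519 is never selected
perl -MNet::SSH::Perl -e '
    my $ssh = Net::SSH::Perl->new($ARGV[0],
        options => ["KexAlgorithms diffie-hellman-group14-sha1"]);
    $ssh->login("root", "password");' -- some.rhel7.host
```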
45,244,476
MYSQL installation issue in RHEL7.3
While installing MySQL-server-5.6.36-1.el7.x86_64 I get the error below. Please advise how to proceed further. Also, I installed the client and devel .rpm files, though it got stuck when installing the server.
Transaction check error:
file /usr/share/mysql/charsets/README from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/czech/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/danish/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/dutch/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/english/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/estonian/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/french/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/german/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/greek/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/hungarian/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/italian/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/japanese/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/korean/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/norwegian-ny/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/norwegian/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/polish/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/portuguese/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/romanian/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/russian/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/serbian/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/slovak/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/spanish/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/swedish/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/ukrainian/errmsg.sys from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/Index.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/armscii8.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/ascii.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/cp1250.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/cp1251.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/cp1256.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/cp1257.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/cp850.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/cp852.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/cp866.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/dec8.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/geostd8.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/greek.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/hebrew.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/hp8.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/keybcs2.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/koi8r.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/koi8u.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
file /usr/share/mysql/charsets/latin1.xml from install of MySQL-server-5.6.36-1.el7.x86_64 conflicts with file from package mariadb-libs-1:5.5.52-1.el7.x86_64
mysql, linux, redhat, rhel
1
740
1
https://stackoverflow.com/questions/45244476/mysql-installation-issue-in-rhel7-3
44,628,098
Piwik not showing the images
I installed and configured Piwik on a RHEL 7 server (Apache 2.4, PHP 7.0), but the images are not displayed. The URL of the logo, for example, is [URL] or "[URL]". When I open the URL directly, I get this error:

Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator at (mail) to inform them of the time this error occurred, and the actions you performed just before this error. More information about this error may be available in the server error log. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

I would be thankful for any help. Best regards

"..plugins/.htaccess":

# This file is auto generated by Piwik, do not edit directly
# Please report any issue or improvement directly to the Piwik team.

# First, deny access to all files in this directory
<Files "*">
    <IfModule mod_version.c>
        <IfVersion < 2.4>
            Order Deny,Allow
            Deny from All
        </IfVersion>
        <IfVersion >= 2.4>
            Require all denied
        </IfVersion>
    </IfModule>
    <IfModule !mod_version.c>
        <IfModule !mod_authz_core.c>
            Order Deny,Allow
            Deny from All
        </IfModule>
        <IfModule mod_authz_core.c>
            Require all denied
        </IfModule>
    </IfModule>
</Files>

# Serve HTML files as text/html mime type - Note: requires mod_mime apache module!
<IfModule mod_mime.c>
    AddHandler text/html .html
    AddHandler text/html .htm
</IfModule>

# Allow to serve static files which are safe
<Files ~ "\.(gif|ico|jpg|png|svg|js|css|htm|html|swf|mp3|mp4|wav|ogg|avi|ttf|eot)$">
    <IfModule mod_version.c>
        <IfVersion < 2.4>
            Order Allow,Deny
            Allow from All
        </IfVersion>
        <IfVersion >= 2.4>
            Order Allow,Deny
            Allow from All
            #Require all granted
        </IfVersion>
    </IfModule>
    <IfModule !mod_version.c>
        <IfModule !mod_authz_core.c>
            Order Allow,Deny
            Allow from All
        </IfModule>
        <IfModule mod_authz_core.c>
            Order Allow,Deny
            Allow from All
            #Require all granted
        </IfModule>
    </IfModule>
</Files>
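One plausible cause, given the pasted .htaccess: its ">= 2.4" branch still uses the Apache 2.2 Order/Allow directives and keeps "Require all granted" commented out. On Apache 2.4 without mod_access_compat loaded, Order is an unknown directive, so every request the .htaccess matches fails with a 500 (the exact reason will be in /var/log/httpd/error_log on a default RHEL layout). A hypothetical corrected ">= 2.4" branch for the static-files section would be:

```apache
<IfVersion >= 2.4>
    Require all granted
</IfVersion>
```

Alternatively, loading mod_access_compat makes the old Order/Allow syntax valid again on 2.4.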
php, rhel, matomo
1
261
1
https://stackoverflow.com/questions/44628098/piwik-not-showing-the-images
44,498,914
Piwik - Setting up with Oracle DB instead of mysql
I can't get any further with my Piwik configuration. Is it currently possible to set up Piwik with an Oracle DB? I have been searching for an answer since last Monday with no success. Piwik 3.0.4, Apache 2.4, PHP 7.0, RHEL 7. I would be thankful for any help. Best regards, Burak
database, oracle-database, rhel, matomo
1
167
1
https://stackoverflow.com/questions/44498914/piwik-setting-up-with-oracle-db-instead-of-mysql
43,614,529
RHEL: cgroup change of group failed
When I run the following command, I get cgroup change of group failed:

cgexec --sticky -g *:/throttle some_task

Cgroup throttle is defined in cgconfig.conf, which looks like this:

# Configuration file generated by cgsnapshot
mount {
    cpuset = /cgroup/cpuset;
    cpu = /cgroup/cpu;
    cpuacct = /cgroup/cpuacct;
    memory = /cgroup/memory;
    devices = /cgroup/devices;
    freezer = /cgroup/freezer;
    net_cls = /cgroup/net_cls;
    blkio = /cgroup/blkio;
}

group throttle {
    cpu {
        cpu.rt_period_us="1000000";
        cpu.rt_runtime_us="0";
        cpu.cfs_period_us="1000000";
        cpu.cfs_quota_us="500000";
        cpu.shares="1024";
    }
}

group throttle {
    memory {
        memory.memsw.failcnt="0";
        memory.limit_in_bytes="1073741824";
        memory.memsw.max_usage_in_bytes="0";
        memory.move_charge_at_immigrate="0";
        memory.swappiness="60";
        memory.use_hierarchy="0";
        memory.failcnt="0";
        memory.soft_limit_in_bytes="134217728";
        memory.memsw.limit_in_bytes="1073741824";
        memory.max_usage_in_bytes="0";
    }
}

group throttle {
    blkio {
        blkio.throttle.write_iops_device="8:0 10";
        blkio.throttle.read_iops_device="8:0 10";
        blkio.throttle.write_bps_device="";
        blkio.throttle.read_bps_device="";
        blkio.weight="500";
        blkio.weight_device="";
    }
}

I have searched far and wide and haven't a clue how to start troubleshooting this. The error seems to be commonly associated with incorrect permissions, but I don't define any permissions (the cgroups documentation says that this is optional), and I'm running the process as root.
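Two things worth checking, as a sketch (not verified on this system): cgexec reports "cgroup change of group failed" when the target group does not actually exist in a mounted hierarchy or is not writable, so first confirm the group was created, and consider merging the three separate group throttle stanzas into one with an explicit perm block:

```
group throttle {
    perm {
        task  { uid = root; gid = root; }
        admin { uid = root; gid = root; }
    }
    # ...existing cpu, memory and blkio settings unchanged...
}
```

Then restart the cgconfig service and verify the group exists, e.g. with `service cgconfig restart` followed by `lscgroup | grep throttle`. Note that `-g *:/throttle` asks for the group in every mounted controller, including cpuset, devices, freezer and net_cls, where no throttle group is defined; listing only the configured controllers (`-g cpu,memory,blkio:/throttle`) may also avoid the failure.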
rhel, rhel6, cgroups
1
1,618
1
https://stackoverflow.com/questions/43614529/rhel-cgroup-change-of-group-failed
43,082,924
Web Server Installation Error (RHEL)
My friend has been trying to install a web server on RHEL, but this error popped up. Can anybody explain why this happens? Is it because the firewall/ports have not been opened, or am I using the wrong command?
webserver, rhel
1
54
1
https://stackoverflow.com/questions/43082924/web-server-installation-error-rhel
42,671,234
Creating an iso of a RHEL instance
I have an Amazon EC2 instance with RHEL 7.3 on it. I would like to convert this into an ISO so that I can migrate it wherever I want. What are the best tools to create an ISO of a virtual machine? Or how do I clone/back up this VM so that I can restore it anywhere I want?
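AWS does not export instances as bootable ISOs, but its VM Export feature can produce a VMDK/VHD/OVA disk image in S3, which most hypervisors can import. A sketch (instance ID and bucket name are placeholders; note that AWS refuses to export instances launched from Marketplace or other license-restricted images, which can affect RHEL AMIs):

```shell
aws ec2 create-instance-export-task \
    --instance-id i-0123456789abcdef0 \
    --target-environment vmware \
    --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=my-export-bucket,S3Prefix=rhel-export/
```

For in-AWS portability, creating an AMI of the instance (`aws ec2 create-image`) is the simpler clone/backup path.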
amazon-web-services, amazon-ec2, rhel, rhel7
1
563
3
https://stackoverflow.com/questions/42671234/creating-an-iso-of-a-rhel-instance
42,485,867
Openldap failover on RHEL
How do I implement OpenLDAP failover on RHEL? We have a couple of LDAP servers and need to know how to handle failover from one server to the other, and vice versa.
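For the client side, the OpenLDAP libraries accept a space-separated list of URIs and try them in order, which gives basic failover without any extra infrastructure. A sketch of /etc/openldap/ldap.conf (hostnames are placeholders):

```
# Clients try ldap1 first; if it is unreachable they fall back to ldap2.
URI  ldap://ldap1.example.com ldap://ldap2.example.com
BASE dc=example,dc=com
```

For true bidirectional failover the servers themselves also need to replicate (e.g. syncrepl in mirror mode), and a virtual IP or load balancer in front of both servers is a common complement.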
openldap, rhel, failover
1
359
1
https://stackoverflow.com/questions/42485867/openldap-failover-on-rhel
41,543,035
MobileFirst 7.0 When uploading an app in server farm mode, "Unresponsive" errors are received
Environment details:

OS: RHEL 6.8 64-bit
IBM MobileFirst Platform Server Version - 7.0.0.00.20160526-2153
Project WAR Version - 7.0.0.00.20161114-0240
Application Version - 7.0.0.00.20160526-2153
IBM WebSphere Liberty Version - 8.5.5.4
Java - IBM JRE 1.7.0

Created a farm configuration following the IBM Knowledge Center "MobileFirst 7.0 Configure Farm" instructions, with unique values for jndiName="ibm.worklight.admin.serverid", farm_member_1 and farm_member_2. At start, both servers show this in messages.log:

[1/8/17 18:53:14:313 CST] 00000034 SystemErr R 1111 WorklightManagementPU-db2 INFO [LargeThreadPool-thread-10] openjpa.Runtime - Starting OpenJPA 1.2.2
[1/8/17 18:53:14:315 CST] 00000034 SystemErr R 1113 WorklightManagementPU-db2 INFO [LargeThreadPool-thread-10] openjpa.jdbc.JDBC - Using dictionary class "org.apache.openjpa.jdbc.sql.DB2Dictionary" (DB2/LINUXX8664 SQL10053 ,IBM Data Server Driver for JDBC and SQLJ 4.17.29).
[1/8/17 18:53:14:364 CST] 00000034 SystemErr R 1162 WorklightManagementPU-db2 INFO [LargeThreadPool-thread-10] openjpa.Runtime - Though you are using optimistic transactions, OpenJPA is now beginning a datastore transaction because you have requested a lock on some data.

Almost every time an app is uploaded to the runtime, server 2 (farm_member_2) becomes unresponsive (web console Home --> Runtime --> Server Farm Nodes). When that occurs, server 2 updates all its resources, as if it were rebooting, loading all the apps and adapters from the runtime. Once it has finished loading all the apps, this error is shown again:

[1/9/17 1:43:19:868 CST] 00000b05 com.ibm.worklight.admin.jmx.ManagementMXBeanImpl I runtime01: server01///10.77.230.146: 2017-01-09T07:43:19.850Z: Transaction handler reset
[1/9/17 1:43:19:869 CST] 0000005e com.worklight.core.auth.impl.AuthenticationFilter I FWLSE0273I: Set sync required to 'false' [project runtime01]
[1/9/17 1:43:19:872 CST] 00000a86 SystemErr R 16601596 WorklightManagementPU-db2 INFO [LargeThreadPool-thread-1589] openjpa.Runtime - Though you are using optimistic transactions, OpenJPA is now beginning a datastore transaction because you have requested a lock on some data.
[1/9/17 1:43:19:880 CST] 00000a86 SystemErr R 16601604 WorklightManagementPU-db2 INFO [LargeThreadPool-thread-1589] openjpa.Runtime - Though you are using optimistic transactions, OpenJPA is now beginning a datastore transaction because you have requested a lock on some data.

Is this common with this type of configuration? If not, any ideas on how to avoid it?
ibm-mobilefirst, websphere-liberty, rhel, mobilefirst-server
1
152
1
https://stackoverflow.com/questions/41543035/mobilefirst-7-0-when-uploading-an-app-in-server-farm-mode-unresponsive-errors
41,409,572
How to sign/add a public key in RHEL?
I am trying to add a public key from Fusion Forge for RHEL. In Ubuntu, we use wget -O - [URL] | apt-key add to add a key. What is the command in RHEL for the same action?
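On RPM-based systems the keyring lives inside the RPM database, so the rough equivalent of the apt-key pipeline is a single import command (the URL below stands in for the repository's key URL):

```shell
# Import the GPG key into the RPM keyring (accepts a URL or a local file):
sudo rpm --import [URL]
# Verify it landed; imported keys appear as gpg-pubkey-* pseudo-packages:
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE} %{SUMMARY}\n'
```

If the key is for a yum repository, referencing it via `gpgkey=` in the repo's .repo file also works; yum then offers to import it on first use.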
ubuntu, rhel
1
126
1
https://stackoverflow.com/questions/41409572/how-to-sign-add-a-public-key-in-rhel
41,215,389
Execute a service with specific user - Ubuntu / CentOS
I'm trying to run a service as a specific user without using the sudo command. If I use sudo, the application creates temp files with root permissions, and when the service is later restarted as the correct user (the application user), that user can no longer access those temp files. Regardless of which user starts the service, the process needs to start as the application user. Is there any way to fix this? My OSes are Ubuntu 16 and CentOS 6/7. Thanks
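On systemd-based systems (Ubuntu 16.04, CentOS 7), the usual approach is to declare the account in the unit file, so the daemon is started as that user no matter who invokes systemctl. A sketch with made-up unit, path, and account names:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=My application

[Service]
User=appuser
Group=appgroup
ExecStart=/opt/myapp/run.sh

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, any invoker of `systemctl start myapp` gets a process owned by appuser, so its temp files are created with the right ownership. On CentOS 6 (SysV init) the equivalent is typically dropping privileges inside the init script, e.g. with `runuser -u appuser` or `su -s /bin/sh appuser -c '...'`.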
linux, ubuntu, service, centos, rhel
1
946
1
https://stackoverflow.com/questions/41215389/execute-a-service-with-specific-user-ubuntu-centos
41,192,902
Deploying multiple versions of Red Hat Developer Toolset to multiple RHEL versions
Can I deploy multiple versions of Red Hat Developer Toolset to multiple versions of Red Hat Enterprise Linux?
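In general, yes: Developer Toolset releases are Software Collections, installed side by side under /opt/rh, so several can coexist on one host and be selected per shell. A sketch (the toolset version numbers below are examples; each DTS release supports a documented range of RHEL releases, so check the compatibility matrix for your RHEL versions):

```shell
sudo yum install devtoolset-3-toolchain devtoolset-4-toolchain
# Run one command with a chosen toolset...
scl enable devtoolset-4 'gcc --version'
# ...or open a shell with that toolset's gcc/binutils on PATH:
scl enable devtoolset-4 bash
```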
redhat, rhel, rhel7, redhat-dts
1
165
1
https://stackoverflow.com/questions/41192902/deploying-multiple-versions-of-red-hat-developer-toolset-to-multiple-rhel-versio
40,703,207
RHEL Atomic Host 7 OpenShift Enterprise installation
The installation instructions describe installation on RHEL 7 (RPM and containerized) and on RHEL Atomic Host 7 (containerized only); the RPM path seems fine. But how do I install on Atomic Host? There are no clear instructions. Even this doesn't help:

-bash-4.2# atomic host install atomic-openshift-utils
Downloading metadata: [==================================================] 100%
Resolving dependencies... done
Checking out tree 90c9735... done
error: Package 'openshift-ansible-playbooks' has (currently) unsupported script of type '%pretrans'
atomic, host, rhel, openshift-enterprise
1
629
1
https://stackoverflow.com/questions/40703207/rhel-atomic-host-7-openshift-enterprise-installation
39,920,066
Receiving multicast data on RHEL 7 using c++
I am trying to receive multicast UDP data on a network interface on RHEL 7.2. About my setup:

NIC: Intel X540
IP: 192.168.42.100
Distro: RHEL 7.2
Multicast Address: 224.5.6.7
Port: 2002
Interface name: ens4f1

I have two interfaces up: the 1 Gbit on the motherboard and one of the 10 Gbit ports on the Intel card. As in many other posts, I have data coming in and visible in both Wireshark and tcpdump, but my recvfrom call just hangs. My code is a copy of a similar problem described here, which appears to be working for the OP.

Notes:
1) I run my code as root.
2) I have tried changing rp_filter in /proc/sys/net/ipv4/conf/ens4f1/rp_filter to 0. No change.
3) Disabling SELinux did not change anything.
4) Wireshark and tcpdump show the data just fine. Dump shown below.

[@localhost ~]$ sudo tcpdump -c 5 -i ens4f1 -v
tcpdump: listening on ens4f1, link-type EN10MB (Ethernet), capture size 65535 bytes
15:43:57.368470 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 7996) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 7968
15:43:57.368477 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 316) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 288
15:43:57.368869 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 7996) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 7968
15:43:57.368878 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 316) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 288
15:43:57.369264 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 7996) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 7968
5 packets captured
46 packets received by filter
9 packets dropped by kernel

Copy of the code:

#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <time.h>
#include <string.h>
#include <stdio.h>
#include <iostream>
#include <string>

using namespace std;

#define HELLO_PORT 2002
#define HELLO_GROUP "224.5.6.7"
#define MSGBUFSIZE 10000

int main(int argc, char *argv[])
{
    string source_iface("192.168.42.100");
    string group(HELLO_GROUP);
    int port(HELLO_PORT);

    cout << "group: " << group << " port: " << port
         << " source_iface: " << source_iface << endl;

    int fd;
    if ((fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    u_int yes = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0) {
        perror("Reusing ADDR failed");
        exit(1);
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = (group.empty() ? htonl(INADDR_ANY) : inet_addr(group.c_str()));

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }

    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr(group.c_str());
    mreq.imr_interface.s_addr = (source_iface.empty() ? htonl(INADDR_ANY) : inet_addr(source_iface.c_str()));

    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("setsockopt");
        exit(1);
    }

    socklen_t addrlen;
    int nbytes;
    char msgbuf[MSGBUFSIZE];

    while (1) {
        memset(&msgbuf, 0, MSGBUFSIZE);
        addrlen = sizeof(addr);
        if ((nbytes = recvfrom(fd, msgbuf, MSGBUFSIZE, 0, (struct sockaddr *)&addr, &addrlen)) < 0) {
            perror("recvfrom");
            exit(1);
        }
        cout.write(msgbuf, nbytes);
        cout.flush();
    }
    return 0;
}

All help and suggestions are most welcome.
Thanks, Henrik
Receiving multicast data on RHEL 7 using C++

I am trying to receive multicast UDP data on a network interface on RHEL 7.2. About my setup:

NIC: Intel X540
IP: 192.168.42.100
Distro: RHEL 7.2
Multicast Address: 224.5.6.7
Port: 2002
Interface name: ens4f1

I have 2 interfaces open, the 1 Gbit on the Mobo and one of the 10 Gbit on the Intel card. Like many other posts, I have data coming in and visible in both Wireshark and tcpdump, but my recvfrom call just hangs. My code is a copy of a similar problem described here, which appears to be working for the OP.

Notes:
1) I run my code as root
2) I have tried to change the rp_filter in /proc/sys/net/ipv4/conf/ens4f1/rp_filter to 0. No change
3) Disabling SELinux did not change anything
4) Wireshark and tcpdump show the data just fine. Dump shown below

```
[@localhost ~]$ sudo tcpdump -c 5 -i ens4f1 -v
tcpdump: listening on ens4f1, link-type EN10MB (Ethernet), capture size 65535 bytes
15:43:57.368470 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 7996) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 7968
15:43:57.368477 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 316) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 288
15:43:57.368869 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 7996) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 7968
15:43:57.368878 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 316) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 288
15:43:57.369264 IP (tos 0x0, ttl 255, id 6526, offset 0, flags [DF], proto UDP (17), length 7996) 192.168.42.44.62111 > 224.5.6.7.globe: UDP, length 7968
5 packets captured
46 packets received by filter
9 packets dropped by kernel
```

Copy of code:

```cpp
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <time.h>
#include <string.h>
#include <stdio.h>
#include <iostream>
#include <string>

using namespace std;

#define HELLO_PORT 2002
#define HELLO_GROUP "224.5.6.7"
#define MSGBUFSIZE 10000

int main(int argc, char *argv[])
{
    string source_iface("192.168.42.100");
    string group(HELLO_GROUP);
    int port(HELLO_PORT);

    cout << "group: " << group << " port: " << port
         << " source_iface: " << source_iface << endl;

    int fd;
    if ((fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        exit(1);
    }

    u_int yes = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0) {
        perror("Reusing ADDR failed");
        exit(1);
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = (group.empty() ? htonl(INADDR_ANY) : inet_addr(group.c_str()));

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }

    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr(group.c_str());
    mreq.imr_interface.s_addr = (source_iface.empty() ? htonl(INADDR_ANY) : inet_addr(source_iface.c_str()));

    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("setsockopt");
        exit(1);
    }

    socklen_t addrlen;
    int nbytes;
    char msgbuf[MSGBUFSIZE];

    while (1) {
        memset(&msgbuf, 0, MSGBUFSIZE);
        addrlen = sizeof(addr);
        if ((nbytes = recvfrom(fd, msgbuf, MSGBUFSIZE, 0, (struct sockaddr *)&addr, &addrlen)) < 0) {
            perror("recvfrom");
            exit(1);
        }
        cout.write(msgbuf, nbytes);
        cout.flush();
    }
    return 0;
}
```

All help and suggestions are most welcome.

Thanks, Henrik
c++, sockets, udp, multicast, rhel
1
1,026
1
https://stackoverflow.com/questions/39920066/receiving-multicast-data-on-rhel-7-using-c
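One detail worth checking, not stated in the question itself, so treat it as an assumption: Linux applies the stricter of net.ipv4.conf.all.rp_filter and the per-interface value, so clearing only conf/ens4f1/rp_filter has no effect while conf/all/rp_filter is still 1 or 2. A minimal sketch of that "effective value" rule:

```shell
# The effective rp_filter is the stricter (numerically larger) of the
# "all" setting and the per-interface setting.
effective_rp_filter() {
    if [ "$1" -gt "$2" ]; then echo "$1"; else echo "$2"; fi
}

effective_rp_filter 1 0   # all=1, ens4f1=0 -> prints 1 (filtering still on)
effective_rp_filter 0 0   # both cleared    -> prints 0
```

On the real box that would mean clearing both sysctls, e.g. `sysctl -w net.ipv4.conf.all.rp_filter=0` alongside the per-interface one.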
39,714,996
A new loop device won't be created properly in RHEL4
As the title says, I'm on RHEL 4. Currently there are /dev/loop0 ~ /dev/loop7 (eight devices) created that seem to work fine. When I try to create a new device, loop8, by calling

```
mknod /dev/loop8 b 7 8
chown --reference=/dev/loop0 /dev/loop8
chmod --reference=/dev/loop0 /dev/loop8
```

a device node seems to be created, but it doesn't work as intended. First, as shown in the link, we can see loop8. (Terminal output picture) However, running losetup on the two devices produces different output:

```
losetup /dev/loop0
loop: can't get info on device /dev/loop0: No such device or address
losetup /dev/loop8
loop: can't open device /dev/loop8: No such device or address
```

Now let me create two files and set up the two loop devices. As in the link, loop0 succeeds while loop8 fails. (Two device comparison) Why is this the case?

PS. On an extra note, if I restart the computer, it always has loop0~loop7, no matter which loop devices I create or remove. Furthermore, as shown in the first link, their modification times are all from bootup. Finally, even if you set up /dev/loop0 with a file, after a reboot /dev/loop0 still exists but is no longer set up. I have no idea why these things are happening.
A new loop device won't be created properly in RHEL4

As the title says, I'm on RHEL 4. Currently there are /dev/loop0 ~ /dev/loop7 (eight devices) created that seem to work fine. When I try to create a new device, loop8, by calling

```
mknod /dev/loop8 b 7 8
chown --reference=/dev/loop0 /dev/loop8
chmod --reference=/dev/loop0 /dev/loop8
```

a device node seems to be created, but it doesn't work as intended. First, as shown in the link, we can see loop8. (Terminal output picture) However, running losetup on the two devices produces different output:

```
losetup /dev/loop0
loop: can't get info on device /dev/loop0: No such device or address
losetup /dev/loop8
loop: can't open device /dev/loop8: No such device or address
```

Now let me create two files and set up the two loop devices. As in the link, loop0 succeeds while loop8 fails. (Two device comparison) Why is this the case?

PS. On an extra note, if I restart the computer, it always has loop0~loop7, no matter which loop devices I create or remove. Furthermore, as shown in the first link, their modification times are all from bootup. Finally, even if you set up /dev/loop0 with a file, after a reboot /dev/loop0 still exists but is no longer set up. I have no idea why these things are happening.
linux, redhat, rhel
1
508
1
https://stackoverflow.com/questions/39714996/a-new-loop-device-wont-properly-be-created-in-rhel4
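A plausible explanation (hedged, since RHEL 4 behavior can vary by update level): the loop driver only registers `max_loop` devices (default 8) no matter how many /dev nodes exist, and the standard eight nodes are recreated at boot, which would also explain the bootup timestamps and why loop0's association is gone after a reboot. A sketch of the usual fix, assuming the loop driver is built as a module; the value 64 is just an example:

```
# /etc/modprobe.conf (RHEL 4 style) -- raise the device count:
options loop max_loop=64

# then reload the module (only safe when no loop device is in use):
#   rmmod loop && modprobe loop
```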
39,682,265
SWI-Prolog Installation on RHEL 7
I am attempting to install SWI-Prolog on a server running RHEL 7. I have followed the instructions to build SWI-Prolog here: [URL]. The build completes without error; however, I see no trace of the application. In addition, the application does not seem to be accessible, because I am unable to invoke swipl to begin executing Prolog commands.
SWI-Prolog Installation on RHEL 7

I am attempting to install SWI-Prolog on a server running RHEL 7. I have followed the instructions to build SWI-Prolog here: [URL]. The build completes without error; however, I see no trace of the application. In addition, the application does not seem to be accessible, because I am unable to invoke swipl to begin executing Prolog commands.
prolog, redhat, rhel, swi-prolog
1
1,303
2
https://stackoverflow.com/questions/39682265/swi-prolog-installation-on-rhel-7
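Two common causes when a source build "vanishes" (assumptions, since the question doesn't show the build commands): the final `make install` step was skipped, or the install prefix's bin directory (often /usr/local/bin for source builds) is not on PATH. A small sketch of the PATH check; `on_path` is a hypothetical helper and the paths are examples:

```shell
# Hypothetical helper: report whether a directory is on $PATH.
on_path() { case ":$PATH:" in *":$1:"*) echo yes ;; *) echo no ;; esac; }

PATH="/usr/bin:/bin"
on_path /usr/local/bin    # prints no: a swipl installed there won't be found
PATH="/usr/local/bin:$PATH"
on_path /usr/local/bin    # prints yes
```

If swipl really was installed, `ls <prefix>/bin` should show it, and prepending that directory to PATH makes it invokable.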
39,598,466
rpm yum install : how to put conflicts on an already installed package?
I'm trying to stop installation of an rpm if I have a particular version of a package already installed. Say I have 2 packages, A-7.1.1.rpm & Main_package-1.0.rpm, to be installed, and I have A-1.4.0.rpm already installed on the machine. What I want to achieve is that, if the installed version of A < 7.1.0, the upgrade of Main_package should not happen. I tried to put a Conflicts tag in the spec file of Main_package as follows: Conflicts : A < 7.1.0 And executed yum install *.rpm Here yum finds the latest version of package 'A' in the directory (i.e., A-7.1.1.rpm), so it doesn't conflict. What I want to check is whether the machine already has a particular version of package A. I could not find any other tags that I can use within the spec file. Is there any way I can achieve this? Note: I can't have this check inside a script which then invokes yum install *.rpm. I can execute only yum install *.rpm, nothing else.
rpm yum install : how to put conflicts on an already installed package? I'm trying to stop installation of an rpm if I have a particular version of a package already installed. Say I have 2 packages, A-7.1.1.rpm & Main_package-1.0.rpm, to be installed, and I have A-1.4.0.rpm already installed on the machine. What I want to achieve is that, if the installed version of A < 7.1.0, the upgrade of Main_package should not happen. I tried to put a Conflicts tag in the spec file of Main_package as follows: Conflicts : A < 7.1.0 And executed yum install *.rpm Here yum finds the latest version of package 'A' in the directory (i.e., A-7.1.1.rpm), so it doesn't conflict. What I want to check is whether the machine already has a particular version of package A. I could not find any other tags that I can use within the spec file. Is there any way I can achieve this? Note: I can't have this check inside a script which then invokes yum install *.rpm. I can execute only yum install *.rpm, nothing else.
upgrade, rpm, yum, rhel, rpm-spec
1
2,811
2
https://stackoverflow.com/questions/39598466/rpm-yum-install-how-to-put-conflicts-on-an-already-installed-package
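The behavior described is actually expected: Conflicts is evaluated against the whole transaction, and `yum install *.rpm` puts A-7.1.1 into the same transaction, which satisfies `Conflicts: A < 7.1.0`. A gate on the *currently installed* version usually has to live in a %pre scriptlet that queries rpm and exits non-zero. The scriptlet itself can't be run here, but the version comparison it would need can be sketched; `ver_lt` is a hypothetical helper built on GNU `sort -V`, which matches rpm's ordering for simple dotted versions (not for Epoch or Release tags):

```shell
# ver_lt A B -> "yes" if version A sorts strictly before version B.
ver_lt() {
    [ "$1" = "$2" ] && { echo no; return; }
    first=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$first" = "$1" ]; then echo yes; else echo no; fi
}

ver_lt 1.4.0 7.1.0   # prints yes -> installed A is too old, block the upgrade
ver_lt 7.1.1 7.1.0   # prints no
```

In a %pre this would be fed by something like `rpm -q --qf '%{VERSION}' A` (hypothetical invocation) and end with `exit 1` when the check fails, which aborts the package install.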
37,046,471
Running Riak as a non-root user on RHEL
I am new to Riak and just installed Riak on a RHEL server. Does anyone know whether it is possible to run Riak without root or sudo permissions? Can I just change some file/folder permissions to allow this?
Running Riak as a non-root user on RHEL I am new to Riak and just installed Riak on a RHEL server. Does anyone know whether it is possible to run Riak without root or sudo permissions? Can I just change some file/folder permissions to allow this?
rhel, riak
1
124
1
https://stackoverflow.com/questions/37046471/running-riak-as-a-non-root-user-on-rhel
35,978,202
Which yum variable can give 6 or 7 as output
I can see the yum variables on the Red Hat page at Redhat docs. The variable $releasever resolves to 6Server or 7Server, etc. Is there any default variable which resolves to just 6 or 7? Thanks,
Which yum variable can give 6 or 7 as output I can see the yum variables on the Red Hat page at Redhat docs. The variable $releasever resolves to 6Server or 7Server, etc. Is there any default variable which resolves to just 6 or 7? Thanks,
yum, rhel
1
539
1
https://stackoverflow.com/questions/35978202/which-yum-variable-can-give-6-or-7-as-output
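As far as I know there is no separate stock yum variable for the bare major number on RHEL 6/7, so two common workarounds (both hedged): use the `%rhel` rpm macro where the dist macros define it (`rpm -E %rhel` often prints just 6 or 7), or strip the suffix from $releasever yourself. A sketch of the stripping, with `major_of` as a hypothetical helper:

```shell
# Keep only the leading digits of a $releasever-style string.
major_of() { printf '%s\n' "$1" | sed 's/[^0-9].*$//'; }

major_of 6Server    # prints 6
major_of 7Server    # prints 7
major_of 7          # prints 7 (already bare)
```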
35,802,475
Find which distribution an RPM belongs to? RHEL or CentOS or SUSE
How do I find to which distribution (for example, RHEL or Centos or Suse) a RPM file belongs to? I have a RHEL box, can I use RPM provided at link, [URL]
Find which distrubtion a RPM belongs to ? RHEL or Centos or Suse How do I find to which distribution (for example, RHEL or Centos or Suse) a RPM file belongs to? I have a RHEL box, can I use RPM provided at link, [URL]
unix, rpm, rhel
1
163
1
https://stackoverflow.com/questions/35802475/find-which-distrubtion-a-rpm-belongs-to-rhel-or-centos-or-suse
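The usual place to look is the package's own headers: `rpm -qpi <file>.rpm` prints Vendor, Distribution, and Build Host lines that normally identify the build origin. Since no real .rpm is available here, the sketch parses a sample of that output; the package name and values shown are hypothetical:

```shell
# Sample of `rpm -qpi some.rpm` output (hypothetical values).
sample='Name        : htop
Version     : 2.2.0
Vendor      : CentOS
Distribution: CentOS-7'

printf '%s\n' "$sample" | awk -F': *' '/^(Vendor|Distribution)/ {print $2}'
# prints:
#   CentOS
#   CentOS-7
```

Whether a CentOS-built RPM is safe on a RHEL box of the same major release is a judgment call; the two are usually binary-compatible, but that is not a guarantee.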
33,805,947
Failed to install rabbitmq-server on Linux (Oracle Linux Server release 6.7/RHEL)
I am getting the following error while installing.

```
$ sudo yum install rabbitmq-server
Loaded plugins: aliases, changelog, fastestmirror, kabi, presto, refresh-packagekit, security, tmprepo, ulninfo, verify, versionlock
Loading support for kernel ABI
Setting up Install Process
Loading mirror speeds from cached hostfile
 * epel: archive.linux.duke.edu
 * ol6_UEKR3: slcac475.us.oracle.com
 * ol6_latest: slcac475.us.oracle.com
Resolving Dependencies
--> Running transaction check
---> Package rabbitmq-server.noarch 0:3.1.5-1.el6 will be installed
--> Finished Dependency Resolution
... ... ...
groupadd: Can't get unique system GID (no more available GIDs)
useradd: group 'rabbitmq' does not exist
error: %pre(rabbitmq-server-3.1.5-1.el6.noarch) scriptlet failed, exit status 6
Error in PREIN scriptlet in rpm package rabbitmq-server-3.1.5-1.el6.noarch
error: install: %pre scriptlet failed (2), skipping rabbitmq-server-3.1.5-1.el6
Verifying : rabbitmq-server-3.1.5-1.el6.noarch
Failed rabbitmq-server.noarch 0:3.1.5-1.el6
Complete!
```

Note: I have already installed Erlang. I have already referred to a few posts: Installing rabbitmq-server on RHEL RabbitMQ install issue on Centos 5.5
Failed to install rabbitmq-server on Linux (Oracle Linux Server release 6.7/RHEL)

I am getting the following error while installing.

```
$ sudo yum install rabbitmq-server
Loaded plugins: aliases, changelog, fastestmirror, kabi, presto, refresh-packagekit, security, tmprepo, ulninfo, verify, versionlock
Loading support for kernel ABI
Setting up Install Process
Loading mirror speeds from cached hostfile
 * epel: archive.linux.duke.edu
 * ol6_UEKR3: slcac475.us.oracle.com
 * ol6_latest: slcac475.us.oracle.com
Resolving Dependencies
--> Running transaction check
---> Package rabbitmq-server.noarch 0:3.1.5-1.el6 will be installed
--> Finished Dependency Resolution
... ... ...
groupadd: Can't get unique system GID (no more available GIDs)
useradd: group 'rabbitmq' does not exist
error: %pre(rabbitmq-server-3.1.5-1.el6.noarch) scriptlet failed, exit status 6
Error in PREIN scriptlet in rpm package rabbitmq-server-3.1.5-1.el6.noarch
error: install: %pre scriptlet failed (2), skipping rabbitmq-server-3.1.5-1.el6
Verifying : rabbitmq-server-3.1.5-1.el6.noarch
Failed rabbitmq-server.noarch 0:3.1.5-1.el6
Complete!
```

Note: I have already installed Erlang. I have already referred to a few posts: Installing rabbitmq-server on RHEL RabbitMQ install issue on Centos 5.5
ruby-on-rails, linux, installation, rabbitmq, rhel
1
657
1
https://stackoverflow.com/questions/33805947/failed-to-install-rabbitmq-server-on-linuxoracle-linux-server-release-6-7-rhel
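The key line is `groupadd: Can't get unique system GID (no more available GIDs)`: the package's %pre scriptlet creates a rabbitmq system group, and the system GID range (SYS_GID_MIN..SYS_GID_MAX in /etc/login.defs on this family of distros) is exhausted. Pre-creating the group with an explicitly chosen free GID before reinstalling usually works around it. A sketch of finding a free GID from sample /etc/group data; the data and the 100-499 range are illustrative, not taken from the question:

```shell
# Sample /etc/group contents (hypothetical); real code would read /etc/group.
groups='root:x:0:
daemon:x:2:
wheel:x:10:'

used=$(printf '%s\n' "$groups" | cut -d: -f3)
for gid in $(seq 100 499); do
    printf '%s\n' "$used" | grep -qx "$gid" || { echo "free gid: $gid"; break; }
done
# prints: free gid: 100

# Then, as root (hypothetical):  groupadd -g <free-gid> rabbitmq
```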
33,267,682
How do I install collectd write_atsd plugin on RHEL 6?
I am using Axibase Time Series Database and I would like to install the collectd write_atsd plugin on RHEL 6. Is there an rpm I can use?
How do I install collectd write_atsd plugin on RHEL 6? I am using Axibase Time Series Database and I would like to install the collectd write_atsd plugin on RHEL 6. Is there an rpm I can use?
rhel, rhel6, collectd, axibase
1
226
1
https://stackoverflow.com/questions/33267682/how-do-i-install-collectd-write-atsd-plugin-on-rhel-6
32,508,911
HDP 2.1 to 2.2 upgrade RHEL6
I have a cluster with 1 NameNode and 4 DataNodes on Red Hat Enterprise Linux 6. My HDP version is 2.1. The Ambari version was 1.7, but I upgraded it to 2.1. I want to upgrade HDP to version 2.2. I read that if I want to upgrade HDP from 2.1 to 2.2, I have to do it before upgrading Ambari to 2.1. When I upgrade HDP to 2.2, Ambari does not see any changes and nothing works. I am using this tutorial: [URL] How can I do it? I tried to downgrade Ambari to 1.7 but got many errors. What if I now try to upgrade HDP to 2.2 and then Ambari from 2.1 to 2.1.1? Will that work? The problem is that I have very little time. Thank you in advance
HDP 2.1 to 2.2 upgrade RHEL6 I have a cluster with 1 NameNode and 4 DataNodes on Red Hat Enterprise Linux 6. My HDP version is 2.1. The Ambari version was 1.7, but I upgraded it to 2.1. I want to upgrade HDP to version 2.2. I read that if I want to upgrade HDP from 2.1 to 2.2, I have to do it before upgrading Ambari to 2.1. When I upgrade HDP to 2.2, Ambari does not see any changes and nothing works. I am using this tutorial: [URL] How can I do it? I tried to downgrade Ambari to 1.7 but got many errors. What if I now try to upgrade HDP to 2.2 and then Ambari from 2.1 to 2.1.1? Will that work? The problem is that I have very little time. Thank you in advance
hadoop, upgrade, rhel, ambari
1
447
1
https://stackoverflow.com/questions/32508911/hdp-2-1-to-2-2-upgrade-rhel6
31,310,629
Why can the printk console_loglevel be lower than minimum_console_loglevel?
My Linux distro is RHEL7, and the kernel version is 3.10.0. From the printk documentation, I know the minimum_console_loglevel definition:

minimum_console_loglevel: minimum (highest) value to which console_loglevel can be set

Querying the current log level of printk:

```
[root@localhost kernel]# cat /proc/sys/kernel/printk
7 4 1 7
```

Modifying the current console log level:

```
[root@localhost kernel]# echo 0 > /proc/sys/kernel/printk
[root@localhost kernel]# cat /proc/sys/kernel/printk
0 4 1 7
```

Per my understanding, the minimum_console_loglevel is 1, so setting console_loglevel to 0 should fail. But from the cat output, it seems to have succeeded. From the printk.c code:

```c
case SYSLOG_ACTION_CONSOLE_LEVEL:
    error = -EINVAL;
    if (len < 1 || len > 8)
        goto out;
    if (len < minimum_console_loglevel)
        len = minimum_console_loglevel;
    console_loglevel = len;
    /* Implicitly re-enable logging to console */
    saved_console_loglevel = -1;
    error = 0;
    break;
```

Based on this code, I would also expect console_loglevel not to be modified below the minimum.
Why can the printk console_loglevel be lower than minimum_console_loglevel?

My Linux distro is RHEL7, and the kernel version is 3.10.0. From the printk documentation, I know the minimum_console_loglevel definition:

minimum_console_loglevel: minimum (highest) value to which console_loglevel can be set

Querying the current log level of printk:

```
[root@localhost kernel]# cat /proc/sys/kernel/printk
7 4 1 7
```

Modifying the current console log level:

```
[root@localhost kernel]# echo 0 > /proc/sys/kernel/printk
[root@localhost kernel]# cat /proc/sys/kernel/printk
0 4 1 7
```

Per my understanding, the minimum_console_loglevel is 1, so setting console_loglevel to 0 should fail. But from the cat output, it seems to have succeeded. From the printk.c code:

```c
case SYSLOG_ACTION_CONSOLE_LEVEL:
    error = -EINVAL;
    if (len < 1 || len > 8)
        goto out;
    if (len < minimum_console_loglevel)
        len = minimum_console_loglevel;
    console_loglevel = len;
    /* Implicitly re-enable logging to console */
    saved_console_loglevel = -1;
    error = 0;
    break;
```

Based on this code, I would also expect console_loglevel not to be modified below the minimum.
c, linux, linux-kernel, rhel, printk
1
1,694
1
https://stackoverflow.com/questions/31310629/why-the-printk-console-loglevel-can-be-lower-than-minimum-console-loglevel
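The clamp quoted in the question sits in the syslog(2)/klogctl path (SYSLOG_ACTION_CONSOLE_LEVEL, the path `dmesg -n` uses). Writes to /proc/sys/kernel/printk go through the sysctl handler instead, which, at least on kernels of that era (treat this as a hedged reading of the source), only stores the integers without applying minimum_console_loglevel; that is why 0 sticks. A quick sketch of reading the four fields back:

```shell
# Field order in /proc/sys/kernel/printk:
#   console_loglevel default_message_loglevel minimum_console_loglevel default_console_loglevel
printk='0 4 1 7'    # the value the OP saw after writing 0
set -- $printk
echo "console_loglevel=$1 minimum_console_loglevel=$3"
# prints: console_loglevel=0 minimum_console_loglevel=1
```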
30,321,445
failed to connect to 127.0.0.1:7199: connection refused
I am getting the error failed to connect to 127.0.0.1:7199: connection refused when I run nodetool status on my RHEL machine. It was working fine until yesterday, but today it suddenly started giving this error. I did not make any changes to the configuration files. I have DSE installed and properly configured; it had been running fine for the past 3-4 months until yesterday. The cassandra.yaml has the cluster name, seed, rpc address, rpc port, and listen address all configured correctly. I also set -Djava.rmi.server.hostname=<server ip address>; in cassandra-env.sh. It still did not work. Nor am I able to connect via cqlsh, and my Solr is not accessible either after this. I have also allowed all ports in my machine's security group to rule out a port problem, but that made no difference. Any help would be appreciated.
failed to connect to 127.0.0.1:7199: connection refused I am getting the error failed to connect to 127.0.0.1:7199: connection refused when I run nodetool status on my RHEL machine. It was working fine until yesterday, but today it suddenly started giving this error. I did not make any changes to the configuration files. I have DSE installed and properly configured; it had been running fine for the past 3-4 months until yesterday. The cassandra.yaml has the cluster name, seed, rpc address, rpc port, and listen address all configured correctly. I also set -Djava.rmi.server.hostname=<server ip address>; in cassandra-env.sh. It still did not work. Nor am I able to connect via cqlsh, and my Solr is not accessible either after this. I have also allowed all ports in my machine's security group to rule out a port problem, but that made no difference. Any help would be appreciated.
solr, cassandra, rhel, datastax, datastax-enterprise
1
7,628
2
https://stackoverflow.com/questions/30321445/failed-to-connect-to-127-0-0-17199-connection-refused
29,669,487
Update packages policy on RHEL6
Do you know how frequently RHEL6 EPEL packages are updated? Is there a way to track this in order to have a timeline? I am mainly interested in the docker-io package. At the moment, the EPEL package is docker-io-1.4.1-3.el6.x86_64. However, Docker 1.5.0 was released on 2015-02-03. EPEL package version: [URL] Thanks
Update packages policy on RHEL6 Do you know how frequently RHEL6 EPEL packages are updated? Is there a way to track this in order to have a timeline? I am mainly interested in the docker-io package. At the moment, the EPEL package is docker-io-1.4.1-3.el6.x86_64. However, Docker 1.5.0 was released on 2015-02-03. EPEL package version: [URL] Thanks
docker, rhel, epel
1
169
1
https://stackoverflow.com/questions/29669487/update-packages-policy-on-rhel6