| question_id | title_clean | body_clean | tags | score | view_count | answer_count | link |
|---|---|---|---|---|---|---|---|
36,432,768
|
Windows disappearing behind a full screen undecorated JFrame
|
I have a JFrame which I am setting to be full screen like so: JFrame frame = new JFrame(); frame.setSize(1920, 1080); // Resolution of the monitor frame.setUndecorated(true); frame.setVisible(true); The problem is, any popups (e.g. JDialogs) spawned from this frame open up behind the frame and I can only access them by alt-tabbing to them. Furthermore, I am running a two monitor setup with each monitor treated as a separate display. If any windows are opened on top of the JFrame, and I move my mouse cursor to the second monitor, the windows disappear behind the JFrame. This is on RHEL 6.4, so perhaps it's a Linux window management issue? It should also be noted that I am running without gnome-panel, so it's a completely bare Linux desktop (no menu bar, no task bar). Things behave normally when my JFrame is decorated. That is, popups open on top of it and windows no longer disappear behind it when I move my mouse cursor to the second monitor. It's only when I set the JFrame to be undecorated and full screen that windows start getting lost behind it. It's like Linux is "locking" the undecorated JFrame to the monitor. I can't alt-drag it in this state either. If I set the JFrame to be slightly smaller than the monitor resolution (e.g. 1 pixel smaller), then it is no longer "locked". I can alt-drag it and windows no longer get lost behind it. Is there any way to prevent windows from getting lost behind the full screen, undecorated JFrame? I've tried all solutions listed here, but none of them work: JFrame full screen focusing. EDIT: The above problem happens when running under Java 7. Running under Java 8 fixes the dialog issue, but the screen crossing issue is still there.
|
java, linux, swing, jframe, rhel
| 2
| 770
| 1
|
https://stackoverflow.com/questions/36432768/windows-disappearing-behind-a-full-screen-undecorated-jframe
|
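A minimal sketch of the one-pixel workaround the asker describes, querying the monitor size via Toolkit instead of hard-coding 1920x1080 (class name and the EXIT_ON_CLOSE choice are illustrative):

```java
import java.awt.Dimension;
import java.awt.Toolkit;
import javax.swing.JFrame;

public class AlmostFullScreenFrame {
    public static void main(String[] args) {
        JFrame frame = new JFrame();
        frame.setUndecorated(true);
        // Query the actual screen resolution rather than hard-coding it,
        // then shave one pixel off the height so the window manager does
        // not treat the frame as a "locked" full-screen window.
        Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
        frame.setSize(screen.width, screen.height - 1);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```

An alternative worth comparing is Java's exclusive full-screen mode, GraphicsDevice.setFullScreenWindow(frame), which hands the window to the display directly instead of asking the window manager to manage an undecorated frame.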
36,000,189
|
Trouble using Meteor-Please to deploy to CentOS/RHEL
|
I've been trying for some time to use the mplz utility to deploy and run a Meteor app. It seems (well, now it seemed) simpler and more unified than Meteor Up, which says it is targeted towards Debian/Ubuntu distros. After running a successful mplz setup on a clean CentOS7 image, I cannot access the app. All I have ever gotten is an "nginx error!" page. In the nginx error log I saw this at first: 2016/03/14 17:14:47 [crit] 4997#0: *2 connect() to 127.0.0.1:3000 failed (13: Permission denied) while connecting to upstream, client: myLocalIP, server: domain.com, request: "GET / HTTP/1.1", upstream: "[URL] host: "domain.com" After doing some research I believe I fixed the permissions issue by changing the nginx user and adding the users to the appropriate groups. The website still only displayed the nginx error page, but had a new message in the error_log. I am now getting a connection refused error: 2016/03/14 18:15:59 [error] 2489#0: *2 connect() failed (111: Connection refused) while connecting to upstream, client: myLocalIp, server: domain.com, request: "GET / HTTP/1.1", upstream: "[URL] host: "domain.com" All I'm really trying to do is deploy a persistent copy of my Meteor app to the server. I am not at all familiar with nginx or server-ops kind of work; I've mainly added features to existing websites. I would love any suggestions on how to solve this issue OR how to better or more easily deploy Meteor to a public server.
|
meteor, nginx, rhel, centos7
| 2
| 124
| 0
|
https://stackoverflow.com/questions/36000189/trouble-using-meteor-please-to-deploy-to-centos-rhel
|
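A sketch of checks one might run for the two errors quoted above, assuming the default Meteor port 3000 from the log; the SELinux boolean addresses the earlier "13: Permission denied" variant, which is a classic nginx-on-CentOS symptom:

```bash
# Is anything actually listening where nginx proxies to?
# "111: Connection refused" usually means the app process is not running.
ss -tlnp | grep ':3000'
curl -v http://127.0.0.1:3000/    # bypass nginx and hit the app directly

# "13: Permission denied" on connect() is typically SELinux, not file perms:
getsebool httpd_can_network_connect
sudo setsebool -P httpd_can_network_connect 1   # allow nginx upstream connects
```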
35,301,606
|
Bash script to fill forms on an opened website
|
I've been struggling with this issue for a long time, and haven't found any 'elegant' way to achieve my goal. The idea is quite simple: After loading a webpage in the Firefox browser I want to be able to fill the available textboxes with a bash script, and leave it for further editing. Curl and wget are a 'no-go' as both of them fill forms and send data directly to a specified website. I've tried xdotool to go through all of the forms, but after filling every single one of the textboxes there is a slight time delay, which varies, before I'm able to jump to the next one. As you can imagine it is far more efficient for me to monitor for the right moment and fill the whole page by hand, rather than setting the maximum (observed) time to wait for every 'jump'. P.S. I'm aware of the available extensions for Firefox, but I'm not permitted to go this way with my project. The system (which is RHEL6) and browser are not to be modified for this task. External repositories and any guidance are more than welcome.
|
bash, firefox, rhel
| 2
| 452
| 0
|
https://stackoverflow.com/questions/35301606/bash-script-to-fill-forms-on-an-opened-website
|
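A sketch of the xdotool approach using the --sync flag, which waits for window activation to complete instead of sleeping a fixed maximum time; the window title and field values are illustrative:

```bash
#!/usr/bin/env bash
# Fill textboxes in an already-open Firefox window, tabbing between fields.
WIN=$(xdotool search --onlyvisible --name "Mozilla Firefox" | head -n1)
xdotool windowactivate --sync "$WIN"    # --sync blocks until activation is done

fields=("John" "Doe" "john@example.com")
for value in "${fields[@]}"; do
    xdotool type --delay 40 "$value"    # small per-keystroke delay
    xdotool key Tab                     # move to the next textbox
done
```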
35,213,448
|
R hangs while plotting on RHEL5
|
Any plotting package is hanging, even for very simple plots. Here is my sample code: user.desktop% R R version 3.2.1 (2015-06-18) -- "World-Famous Astronaut" Copyright (C) 2015 The R Foundation for Statistical Computing Platform: x86_64-redhat-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. Natural language support but running in an English locale R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > d <- data.frame(x = 1:10, y = (1:10)^2) > plot(d$x, d$y) The window hangs from here until I terminate the session. The code works fine on Windows with R 3.2.2. Any ideas what's going wrong here? Update: Reconstructed output from capabilities() from comments: > data.frame(capabilities()) capabilities.. jpeg TRUE png TRUE tiff TRUE tcltk TRUE X11 TRUE aqua FALSE http/ftp TRUE sockets TRUE libxml TRUE fifo TRUE cledit TRUE iconv TRUE NLS TRUE profmem FALSE cairo TRUE ICU FALSE long.double TRUE libcurl FALSE
|
r, linux, plot, ggplot2, rhel
| 2
| 47
| 0
|
https://stackoverflow.com/questions/35213448/r-hangs-while-plotting-on-rhel5
|
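Since the capabilities() output above shows png and cairo support, a quick sketch to isolate the hang to the interactive X11 device is to plot to a file device instead:

```r
d <- data.frame(x = 1:10, y = (1:10)^2)

# Write to a file device; if this works, only the X11 device is broken.
png("plot.png", width = 600, height = 400)
plot(d$x, d$y)
dev.off()

# The X11 device can also be forced onto a different backend, which
# sometimes behaves differently on older distributions:
# X11(type = "Xlib"); plot(d$x, d$y)
```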
33,181,637
|
MATLAB's quadprog is extremely slow on my strong local machine vs another remote machine
|
I am using MATLAB's quadprog and it runs extremely slowly on my local machine. When I run the exact code on a remote machine, it completes within 10 minutes. When I run it on my local machine, it doesn't terminate even after 24 hours (I kill it at some point). While the code runs, the memory usage on my local machine is ~10GB RAM (while my local machine has ~100 GB of free RAM). The usage on the remote machine is 20-30GB RAM. Any idea what to do to make it run faster on my local machine? Important EDIT, 18 Oct.: I executed a smaller scale problem on both machines. On the local machine it takes 1900 sec, on the remote it takes 8 sec, a gain of ~240. Both machines also have multiple multi-core processors. I noticed this time with htop that the remote machine uses all its processors, whilst the local machine uses only a single processor (although all the others are available). Any idea how I can make MATLAB use all processors on the local machine? Some side notes: 1: nnz for H, Aeq =~ 10e6, dimensions are approx 11e6 x 11e6 2: quadratic programming with only equality constraints has a closed form solution (see Boyd). When I solve it with the closed form solution, it takes ~10 minutes on my local machine vs 5 minutes on the remote machine, while both consume ~20-30GB of memory. Since I would like to add inequality constraints, I would like to be able to run quadprog quickly on my local machine. 3: Below is cat /proc/cpuinfo on my machine vs the remote machine (the remote machine is stronger; however, the local machine is also strong): 14 cores vs 4 cores is a gain of ~x3.5 (not counting multi-threading overhead), and AVX vs SSE is at most ~x2, so it doesn't explain the gain of ~240 that I see. Also, when I use the closed form solution (instead of quadprog), the remote machine has a gain of only ~x2 vs the local machine. 4: I am sure I am running the 64-bit version, because I see that the memory consumption is 10-15GB. 5: The local system runs RHEL, the remote runs Ubuntu. Local uname -a results: Linux hostname 2.6.32-573.7.1.el6.x86_64 #1 SMP Thu Sep 10 13:42:16 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux Remote uname -a results: Linux hostname 3.13.0-65-generic #105-Ubuntu SMP Mon Sep 21 18:50:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux 6: Hyper-threading is enabled on the machine. I checked it with this script. 7: Starting a parallel pool as someone suggested doesn't help. Thanks!
Local machine cpu info of a single processor (out of many) vendor_id : GenuineIntel cpu family : 6 model : 26 model name : Intel(R) Xeon(R) CPU E5520 @ 2.27GHz stepping : 5 microcode : 25 cpu MHz : 1600.000 cache size : 8192 KB physical id : 1 siblings : 8 core id : 3 cpu cores : 4 apicid : 23 initial apicid : 23 fpu : yes fpu_exception : yes cpui level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida dts tpr_shadow vnmi flexpriority ept vpid bogomips : 4532.68 clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: Remote machine cpu info of a single processor (out of many) vendor_id : GenuineIntel cpu family : 6 model : 63 model name : Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz stepping : 2 microcode : 0x2d cpu MHz : 1200.000 cache size : 35840 KB physical id : 1 siblings : 28 core id : 14 cpu cores : 14 apicid : 61 initial apicid : 61 fpu : yes fpu_exception : yes cpuid level : 15 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid bogomips : 5189.05 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management:
|
multithreading, matlab, rhel, quadprog, quadratic-programming
| 2
| 581
| 1
|
https://stackoverflow.com/questions/33181637/matlabs-quadprog-is-exteremely-slow-on-my-strong-local-machine-vs-another-remot
|
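A sketch of how to check whether the local MATLAB session is pinned to one computational thread, which would match the htop observation (the thread count below is illustrative):

```matlab
% How many threads is this session allowed to use, and how many cores
% does MATLAB see? A limit of 1 matches the single-busy-CPU symptom.
maxNumCompThreads
feature('numcores')

% Raise the limit for this session (e.g. if MATLAB was started with
% the -singleCompThread flag somewhere in a startup script):
oldLimit = maxNumCompThreads(8);
```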
33,167,527
|
Locating shared library libOS.so
|
I have downloaded a script that uses libOS.so, but I don't have it on CentOS 6.7 and can't find any info on which software package provides libOS.so. Can anyone tell me which package includes libOS.so?
|
linux, centos, shared-libraries, yum, rhel
| 2
| 291
| 1
|
https://stackoverflow.com/questions/33167527/locating-shared-library-libos-so
|
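A sketch of the usual way to answer "which package ships this library" on a yum-based system:

```bash
# Ask yum which package, in any configured repo, provides the file:
yum provides "*/libOS.so"
yum provides "libOS.so()(64bit)"   # match by SONAME for 64-bit libraries

# If the script bundled its own copy, search the filesystem instead:
find / -name 'libOS.so*' 2>/dev/null
```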
32,452,535
|
C++ program core dumps when redirecting input stream to string
|
I have recently discovered my c++ program is core dumping on Red Hat Linux when attempting to redirect from an input stream to a string. The program is provided a PID and attempts to obtain the process name from within /proc. Code is as follows: std::string processName; std::stringstream filename; filename << "/proc/" << pid << "/status"; std::ifstream f(filename.str().c_str()); if (f.good()) { std::string label; f >> label; // This causes the core dump if (label == "Name:") { f >> processName; } } f.close(); I've done some searching and discovered the following c++ bug which looks to be very similar to my issue (note the last comment which is specifically the operation I am performing). [URL] I have been able to reproduce the problem using a small test program performing the same operation within a tight loop, but still don't quite understand the reasoning behind the actual problem. Is anyone aware of any problems with attempting to read process names using the method shown above (from within /proc)? I am contemplating rewriting my code to use system calls instead of reading from the file system, but was hoping for some advice prior to making the change.
|
rhel, c++
| 2
| 468
| 1
|
https://stackoverflow.com/questions/32452535/c-program-core-dump-when-redirecting-input-stream-to-string
|
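A sketch of a line-based alternative that sidesteps the formatted operator>> extraction implicated in the linked bug, parsing /proc/<pid>/status with std::getline (the function name is illustrative):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <iostream>

// Read the "Name:" field of /proc/<pid>/status line by line,
// avoiding formatted stream extraction entirely.
std::string processName(int pid) {
    std::ostringstream path;
    path << "/proc/" << pid << "/status";
    std::ifstream f(path.str().c_str());
    std::string line;
    while (std::getline(f, line)) {
        if (line.compare(0, 5, "Name:") == 0) {
            // Skip the label and the whitespace that follows it.
            std::string::size_type pos = line.find_first_not_of(" \t", 5);
            return pos == std::string::npos ? "" : line.substr(pos);
        }
    }
    return "";
}

int main() {
    std::cout << processName(1) << "\n";   // e.g. "init"
    return 0;
}
```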
30,362,771
|
Jenkins CLI won't install; it just hangs
|
I think this is the right place for Jenkins CLI questions. I installed Jenkins on RHEL 7. I now want to try out the CLI. From the command prompt, I ran this: java -jar jenkins-cli.jar -s [URL] help When I am root, it just hangs. When I am a user that has permissions to install packages, I get prompted for a password for the private SSH key (id_rsa). I enter it, then it just hangs. The command I am running was lifted from my Jenkins installation. If you log into the GUI of Jenkins, then go to Manage Jenkins -> Jenkins CLI, there is a command that purports to download jenkins-cli.jar and install it. All I want to do is run it successfully. Why is it just hanging?
|
java, jenkins, command-line-interface, rhel
| 2
| 710
| 1
|
https://stackoverflow.com/questions/30362771/jenkins-cli-wont-install-it-just-hangs
|
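A sketch of a common diagnosis for the remoting-based CLI of that era, which can hang when the "TCP port for JNLP agents" (Manage Jenkins -> Configure Global Security) is disabled or firewalled; the URL and key path are illustrative:

```bash
# Is Jenkins itself answering? The X-Jenkins response header confirms it.
curl -sI http://localhost:8080/ | grep -i '^x-jenkins'

# Pass the SSH key explicitly instead of relying on an interactive prompt:
java -jar jenkins-cli.jar -s http://localhost:8080/ -i ~/.ssh/id_rsa help
```

If the JNLP port is blocked by a firewall, the old CLI can fetch the jar and reach the web UI fine, then wait indefinitely while negotiating the remoting channel, which looks exactly like a hang.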
25,957,150
|
Window always on top over another application in full screen mode in RHEL
|
I have an application that is displayed in full screen mode on a RHEL machine. Now I try to invoke kmagnifier and keep it on top of my application, but it always goes into the background. The same happens with any other system or user application. Is there any possible way to fix the position of the second application?
|
qt, rhel
| 2
| 353
| 0
|
https://stackoverflow.com/questions/25957150/window-always-on-top-over-another-application-in-full-screen-mode-in-rhel
|
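Since the question is tagged qt, a sketch of the Qt-level request for a window the asker controls; whether it wins over another application's full-screen window is ultimately the X11 window manager's decision:

```cpp
#include <QApplication>
#include <QLabel>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    QLabel label("always on top");
    // Ask the window manager to keep this window above the others.
    label.setWindowFlags(label.windowFlags() | Qt::WindowStaysOnTopHint);
    label.show();   // show() must come after changing the flags
    return app.exec();
}
```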
25,242,829
|
hadoop cause system crash with "soft lock" and "hard lock"
|
I am running hadoop2.2 on redhat6.3-6.5 ,and all of my machines crashed after a while. /var/log/messages shows repeatedly: Aug 11 06:30:42 jn4_73_128 kernel: BUG: soft lockup - CPU#1 stuck for 67s! [jsvc:11508] Aug 11 06:30:42 jn4_73_128 kernel: Modules linked in: bridge stp llc iptable_filter ip_tables mptctl mptbase xfs exportfs power_meter microcode dcdbas serio_raw iTCO_w dt iTCO_vendor_support i7core_edac edac_core sg bnx2 ext4 mbcache jbd2 sd_mod crc_t10dif wmi mpt2sas scsi_transport_sas raid_class dm_mirror dm_region_hash dm_log dm_m od [last unloaded: scsi_wait_scan] Aug 11 06:30:42 jn4_73_128 kernel: CPU 1 Aug 11 06:30:42 jn4_73_128 kernel: Modules linked in: bridge stp llc iptable_filter ip_tables mptctl mptbase xfs exportfs power_meter microcode dcdbas serio_raw iTCO_w dt iTCO_vendor_support i7core_edac edac_core sg bnx2 ext4 mbcache jbd2 sd_mod crc_t10dif wmi mpt2sas scsi_transport_sas raid_class dm_mirror dm_region_hash dm_log dm_m od [last unloaded: scsi_wait_scan] Aug 11 06:30:42 jn4_73_128 kernel: Aug 11 06:30:42 jn4_73_128 kernel: Pid: 11508, comm: jsvc Tainted: G W --------------- 2.6.32-279.el6.x86_64 #1 Dell Inc. PowerEdge R510/084YMW Aug 11 06:30:42 jn4_73_128 kernel: RIP: 0010:[<ffffffff8104d088>] [<ffffffff8104d088>] wait_for_rqlock+0x28/0x40 Aug 11 06:30:42 jn4_73_128 kernel: RSP: 0018:ffff8807786c3ee8 EFLAGS: 00000202 Aug 11 06:30:42 jn4_73_128 kernel: RAX: 00000000f6e9f6e1 RBX: ffff8807786c3ee8 RCX: ffff880028216680 Aug 11 06:30:42 jn4_73_128 kernel: RDX: 00000000fffff6e9 RSI: ffff88061cd29370 RDI: 0000000000000286 Aug 11 06:30:42 jn4_73_128 kernel: RBP: ffffffff8100bc0e R08: 0000000000000001 R09: 0000000000000001 Aug 11 06:30:42 jn4_73_128 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000286 Aug 11 06:30:42 jn4_73_128 kernel: R13: ffff8807786c3eb8 R14: ffffffff810e0f6e R15: ffff8807786c3e48 Aug 11 06:30:42 jn4_73_128 kernel: FS: 0000000000000000(0000) GS:ffff880028200000(0000) knlGS:0000000000000000 Aug 11 06:30:42 jn4_73_128 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 Aug 11 06:30:42 jn4_73_128 kernel: CR2: 0000000000e5bd70 CR3: 0000000001a85000 CR4: 00000000000006e0 Aug 11 06:30:42 jn4_73_128 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Aug 11 06:30:42 jn4_73_128 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Aug 11 06:30:42 jn4_73_128 kernel: Process jsvc (pid: 11508, threadinfo ffff8807786c2000, task ffff880c1def3500) Aug 11 06:30:42 jn4_73_128 kernel: Stack: Aug 11 06:30:42 jn4_73_128 kernel: ffff8807786c3f68 ffffffff8107091b 0000000000000000 ffff8807786c3f28 Aug 11 06:30:42 jn4_73_128 kernel: <d> ffff880701735260 ffff880c1def39c8 ffff880c1def39c8 0000000000000000 Aug 11 06:30:42 jn4_73_128 kernel: <d> ffff8807786c3f28 ffff8807786c3f28 ffff8807786c3f78 00007f092d0ad700 Aug 11 06:30:42 jn4_73_128 kernel: Call Trace: Aug 11 06:30:42 jn4_73_128 kernel: [<ffffffff8107091b>] ? do_exit+0x5ab/0x870 Aug 11 06:30:42 jn4_73_128 kernel: [<ffffffff81070ce7>] ? sys_exit+0x17/0x20 Aug 11 06:30:42 jn4_73_128 kernel: [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b Aug 11 06:30:42 jn4_73_128 kernel: Code: ff ff 90 55 48 89 e5 0f 1f 44 00 00 48 c7 c0 80 66 01 00 65 48 8b 0c 25 b0 e0 00 00 0f ae f0 48 01 c1 eb 09 0f 1f 80 00 00 00 00 <f3> 90 8b 01 89 c2 c1 fa 10 66 39 c2 75 f2 c9 c3 0f 1f 84 00 00 Aug 11 06:30:42 jn4_73_128 kernel: Call Trace: Aug 11 06:30:42 jn4_73_128 kernel: [<ffffffff8107091b>] ? do_exit+0x5ab/0x870 Aug 11 06:30:42 jn4_73_128 kernel: [<ffffffff81070ce7>] ? 
sys_exit+0x17/0x20 Aug 11 06:30:42 jn4_73_128 kernel: [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b and finally it crashed: crash /usr/lib/debug/lib/modules/2.6.32-431.5.1.el6.x86_64/vmlinux /opt/crash/127.0.0.1-2014-08-10-09\:47\:38/vmcore crash 6.1.0-5.el6 Copyright (C) 2002-2012 Red Hat, Inc. Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation Copyright (C) 1999-2006 Hewlett-Packard Co Copyright (C) 2005, 2006, 2011, 2012 Fujitsu Limited Copyright (C) 2006, 2007 VA Linux Systems Japan K.K. Copyright (C) 2005, 2011 NEC Corporation Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc. Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc. This program is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Enter "help copying" to see the conditions. This program has absolutely no warranty. Enter "help warranty" for details. GNU gdb (GDB) 7.3.1 Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <[URL] This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-unknown-linux-gnu"... please wait... (determining panic task) WARNING: active task ffff881071850040 on cpu 12 not found in PID hash KERNEL: /usr/lib/debug/lib/modules/2.6.32-431.5.1.el6.x86_64/vmlinux DUMPFILE: /opt/crash/127.0.0.1-2014-08-10-09:47:38/vmcore [PARTIAL DUMP] CPUS: 24 DATE: Sun Aug 10 09:47:32 2014 UPTIME: 7 days, 16:00:19 LOAD AVERAGE: 11.01, 3.11, 1.08 TASKS: 724 NODENAME: master1.otocyon.com RELEASE: 2.6.32-431.5.1.el6.x86_64 VERSION: #1 SMP Fri Jan 10 14:46:43 EST 2014 MACHINE: x86_64 (1895 Mhz) MEMORY: 64 GB PANIC: "Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 0" PID: 23976 COMMAND: "sh" TASK: ffff881071850aa0 [THREAD_INFO: ffff880a05c80000] CPU: 0 STATE: TASK_INTERRUPTIBLE (PANIC) crash> bt PID: 23976 TASK: ffff881071850aa0 CPU: 0 COMMAND: "sh" #0 [ffff880028207b50] machine_kexec at ffffffff81038f3b #1 [ffff880028207bb0] crash_kexec at ffffffff810c5d82 #2 [ffff880028207c80] panic at ffffffff8152751a #3 [ffff880028207d00] watchdog_overflow_callback at ffffffff810e696d #4 [ffff880028207d20] __perf_event_overflow at ffffffff8111c847 #5 [ffff880028207da0] perf_event_overflow at ffffffff8111ce14 #6 [ffff880028207db0] intel_pmu_handle_irq at ffffffff81022d87 #7 [ffff880028207e90] perf_event_nmi_handler at ffffffff8152bd69 #8 [ffff880028207ea0] notifier_call_chain at ffffffff8152d825 #9 [ffff880028207ee0] atomic_notifier_call_chain at ffffffff8152d88a #10 [ffff880028207ef0] notify_die at ffffffff810a153e #11 [ffff880028207f20] do_nmi at ffffffff8152b4eb #12 [ffff880028207f50] nmi at ffffffff8152adb0 [exception RIP: task_rq_unlock_wait+44] RIP: ffffffff810534fc RSP: ffff880a05c81dc8 RFLAGS: 00000016 RAX: 000000000ec70ebe RBX: ffff881071850040 RCX: ffff8800282d6840 RDX: 0000000000000ec7 RSI: 0000000000000000 RDI: ffff881071850040 RBP: ffff880a05c81dc8 R8: dead000000200200 R9: dead000000200200 R10: ffff8810734a42d0 R11: 0000000000000246 R12: 00000000000114b8 R13: ffff8810734a4180 R14: ffff881071fd3440 R15: ffff881071fd3c48 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 --- <NMI exception stack> --- #13 [ffff880a05c81dc8] task_rq_unlock_wait at ffffffff810534fc #14 [ffff880a05c81dd0] release_task at ffffffff81075454 #15 [ffff880a05c81e10] wait_consider_task at ffffffff81075fb6 #16 [ffff880a05c81e80] 
do_wait at ffffffff810763e6 #17 [ffff880a05c81ee0] sys_wait4 at ffffffff810765d3 #18 [ffff880a05c81f80] system_call_fastpath at ffffffff8100b072 RIP: 0000003e1a2ac8be RSP: 00007fffa58c6330 RFLAGS: 00010207 RAX: 000000000000003d RBX: ffffffff8100b072 RCX: 0000003e1a232be0 RDX: 0000000000000000 RSI: 00007fffa58c62ec RDI: ffffffffffffffff RBP: 00000000ffffffff R8: 000000000203b8d0 R9: 000000000203d590 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000005d00 ORIG_RAX: 000000000000003d CS: 0033 SS: 002b It happened on machines from different vendors, and I have tried updating to the latest kernel from Red Hat. Can anyone with the same experience help?
|
hadoop, crash, centos, rhel
| 2
| 783
| 0
|
https://stackoverflow.com/questions/25242829/hadoop-cause-system-crash-with-soft-lock-and-hard-lock
|
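Not a root-cause fix, but a frequently recommended mitigation for Hadoop on RHEL 6, where transparent huge pages have been implicated in stalls and lockups under heavy memory pressure (note the RHEL 6-specific "redhat_" path):

```bash
# Check and disable transparent huge pages (RHEL 6 spelling of the path):
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
```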
22,940,962
|
Unable to install PHP, MySQL on RHEL5
|
I'm trying to install PHP and MySql on an Apache Web Server on a RHEL 5.7 VM. I have tried to do it with the following yum Remi and EPEL repos: rpm -Uvh [URL] rpm -Uvh [URL] It looks like there are "Missing Dependencies" whenever I try to do this: yum --enablerepo=remi,remi-test install mysql mysql-server php php-common Then, I get the following error: --> Finished Dependency Resolution mysql-server-5.5.37-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) php-cli-5.4.27-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: libgmp.so.3()(64bit) is needed by package php-cli-5.4.27-1.el5.remi.x86_64 (remi) mysql-server-5.5.37-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: libaio.so.1()(64bit) is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) mysql-server-5.5.37-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: perl(DBI) is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) mysql-server-5.5.37-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) mysql-server-5.5.37-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: perl-DBI is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) php-5.4.27-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: httpd is needed by package php-5.4.27-1.el5.remi.x86_64 (remi) php-5.4.27-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: httpd-mmn = 20051115 is needed by package php-5.4.27-1.el5.remi.x86_64 (remi) php-5.4.27-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: libgmp.so.3()(64bit) is needed by package php-5.4.27-1.el5.remi.x86_64 (remi) mysql-server-5.5.37-1.el5.remi.x86_64 from remi has depsolving problems --> Missing Dependency: perl-DBD-MySQL is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) Error: Missing Dependency: httpd is needed by package php-5.4.27-1.el5.remi.x86_64 (remi) Error: Missing Dependency: libgmp.so.3()(64bit) is needed by package php-5.4.27-1.el5.remi.x86_64 (remi) Error: Missing Dependency: httpd-mmn = 20051115 is needed by package php-5.4.27-1.el5.remi.x86_64 (remi) Error: Missing Dependency: perl-DBI is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) Error: Missing Dependency: libaio.so.1()(64bit) is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) Error: Missing Dependency: libgmp.so.3()(64bit) is needed by package php-cli-5.4.27-1.el5.remi.x86_64 (remi) Error: Missing Dependency: perl-DBD-MySQL is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) Error: Missing Dependency: libaio.so.1(LIBAIO_0.1)(64bit) is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) Error: Missing Dependency: perl(DBI) is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) Error: Missing Dependency: libaio.so.1(LIBAIO_0.4)(64bit) is needed by package mysql-server-5.5.37-1.el5.remi.x86_64 (remi) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. 
I already have httpd (I can run service httpd start), and whenever I try to install libaio, I get this: [root@mod2014 rpm-gpg]# yum install --enablerepo=remi,remi-test libaio libaio-devel Setting up Install Process No package libaio available. No package libaio-devel available. Nothing to do
|
rpm, yum, rhel
| 2
| 2,450
| 2
|
https://stackoverflow.com/questions/22940962/unable-to-install-php-mysql-on-rhel5
|
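Every dependency listed as missing above (libaio, gmp, perl-DBI, perl-DBD-MySQL, httpd) lives in the RHEL 5 base channel, not in Remi or EPEL, so "No package libaio available" suggests the VM has no base repo configured. A sketch of the checks, assuming RHN Classic registration on RHEL 5:

```bash
yum repolist all    # is a base/os repository present and enabled at all?

# On RHEL 5 the base channel comes from RHN registration; without it,
# only add-on repos such as remi and epel are visible to yum.
rhn_register

# Once a base repo resolves, the original install should depsolve cleanly:
yum --enablerepo=remi,remi-test install mysql mysql-server php php-common
```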
18,771,015
|
Disruptor performance on a RHEL VM vs. RHEL installed on hardware?
|
It seems to me that this is an important question due to the increasing prevalence of "The Cloud" and for companies that use VMs for assorted needs. Specifically, the question is: how do the Disruptor libraries perform on a RHEL virtual machine vs. RHEL installed directly on hardware? It would also be very useful to know whether there is an improvement from using it vs. not using it on a VM.
|
virtual-machine, rhel, lmax
| 2
| 111
| 0
|
https://stackoverflow.com/questions/18771015/disruptor-performance-on-rhel-vm-vrs-rhel-installed-on-hardware
|
17,747,310
|
Unable to send mail using Exim 4.8 - Unrouteable address
|
I am unable to send mail using Exim 4.8. Please find below the logs and my Exim config file. Please help! I have tried all kinds of config changes but have not been able to find a solution. Thanks in advance. Below is the exigrep result: exigrep "xyz" /var/spool/exim/log/mainlog 2013-07-19 17:43:01 1V09YT-0006vZ-Ax <= root@192.168.4.14 U=root P=local S=419 2013-07-19 17:43:01 1V09YT-0006vZ-Ax ** yyy@yyy.yyy: Unrouteable address 2013-07-19 17:43:01 1V09YT-0006vZ-Ax Completed Below is my Exim config file: primary_hostname = 192.168.4.14 domainlist local_domains = @:localhost domainlist relay_to_domains = * hostlist relay_from_hosts = 127.0.0.1 acl_smtp_rcpt = acl_check_rcpt acl_smtp_data = acl_check_data rfc1413_hosts = * rfc1413_query_timeout = 5s ignore_bounce_errors_after = 1d timeout_frozen_after = 7d begin acl deny message = Restricted characters in address domains = +local_domains local_parts = ^[.] : ^.*[@%!/|] deny message = Restricted characters in address domains = !+local_domains local_parts = ^[./|] : ^.*[@%!] : ^.*/\\.\\./ accept local_parts = postmaster domains = +local_domains require verify = sender accept hosts = +relay_from_hosts control = submission control = dkim_disable_verify accept authenticated = * control = submission control = dkim_disable_verify require message = relay not permitted domains = +local_domains : +relay_to_domains require verify = recipient $dnslist_domain\n$dnslist_text $dnslist_domain accept acl_check_data: accept begin routers smart_route: driver=manualroute transport=remote_smtp route_data=bamail2.nova.local no_more system_aliases: driver = redirect allow_fail allow_defer data = ${lookup{$local_part}lsearch{/etc/aliases}} # user = exim file_transport = address_file pipe_transport = address_pipe userforward: driver = redirect check_local_user file = $home/.forward no_verify no_expn check_ancestor file_transport = address_file pipe_transport = address_pipe reply_transport = address_reply localuser: driver = accept check_local_user transport = local_delivery cannot_route_message = Unknown user begin transports remote_smtp: driver = smtp interface = 10.50.50.41 address_pipe: driver = pipe return_output address_file: driver = appendfile delivery_date_add envelope_to_add return_path_add address_reply: driver = autoreply begin retry * * F,2h,15m; G,16h,1h,1.5; F,4d,6h begin rewrite begin authenticators
|
rhel, exim
| 2
| 2,828
| 1
|
https://stackoverflow.com/questions/17747310/unable-to-sent-mail-using-exim4-8-unrouteable-address
|
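A sketch of Exim's built-in address tracing, which reports which router (if any) accepts a given address and is the quickest way to see why it is "Unrouteable":

```bash
# Trace routing for the failing recipient:
exim -bt yyy@yyy.yyy

# Same, with router-level debug output:
exim -d+route -bt yyy@yyy.yyy

# Also confirm which configuration file the running exim actually loads:
exim -bP configure_file
```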
16,098,336
|
JVM throws Exception
|
I am using RHEL 6.0 with JRE 1.6.0_43 installed. It's installed properly, and the Java plugin for the browser is also working fine. But I am facing a problem while running java_vm in the /jre/bin directory. When running it for the first time it throws: java_vm process: You need to set both JAVA_HOME and PLUGIN_HOME Then I set JAVA_HOME=/ and PLUGIN_HOME=/plugin; after that it throws the following exception while running java_vm: java_vm process: Couldn't find class sun/plugin/navig/motif/Plugin Exception in thread "main" java.lang.NoClassDefFoundError: sun/plugin/navig/motif/Plugin Caused by: java.lang.ClassNotFoundException: sun.plugin.navig.motif.Plugin at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) Then I modified PLUGIN_HOME=/ ; when I try to run it again it shows another exception: Java process: caught exception from sun.plugin.navig.motif.Plugin.start Exception in thread "main" java.lang.NullPointerException at sun.plugin.navig.motif.Plugin.start(Unknown Source) After this, I don't know what to do to get rid of this problem... I think the problem is with the PLUGIN_HOME variable. Please provide clear details about PLUGIN_HOME and tell me why we need this variable.
|
JVM throws Exception I am using RHEL 6.0 with JRE 1.6.0_43 installed. It is installed properly, and the Java browser plugin is also working fine. But I am facing a problem while running java_vm in the /jre/bin directory. On the first run it throws: java_vm process: You need to set both JAVA_HOME and PLUGIN_HOME Then I set JAVA_HOME=/ and PLUGIN_HOME=/plugin; after that it throws the following exception while running java_vm: java_vm process: Couldn't find class sun/plugin/navig/motif/Plugin Exception in thread "main" java.lang.NoClassDefFoundError: sun/plugin/navig/motif/Plugin Caused by: java.lang.ClassNotFoundException: sun.plugin.navig.motif.Plugin at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) Then I changed to PLUGIN_HOME=/ ; when I try to run again it shows another exception: Java process: caught exception from sun.plugin.navig.motif.Plugin.start Exception in thread "main" java.lang.NullPointerException at sun.plugin.navig.motif.Plugin.start(Unknown Source) After this, I don't know what to do to get rid of this problem. I think the problem is with the PLUGIN_HOME variable. Please provide clear details about PLUGIN_HOME and tell me why this variable is needed.
|
jvm, java, rhel
| 2
| 589
| 0
|
https://stackoverflow.com/questions/16098336/jvm-throws-exception
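A heavily hedged sketch only: in the old Sun JRE layout, the plugin classes (including sun.plugin.navig.motif.Plugin) live in lib/plugin.jar under the JRE root, so one assumption worth testing is that both variables should point at the JRE installation directory rather than at / . The install path below is hypothetical; adjust it and verify the class really is in the jar before drawing conclusions:

    # Assumption: JRE installed under /usr/java/jre1.6.0_43.
    export JAVA_HOME=/usr/java/jre1.6.0_43
    export PLUGIN_HOME="$JAVA_HOME"
    # Confirm the missing class is actually shipped in plugin.jar:
    unzip -l "$JAVA_HOME/lib/plugin.jar" | grep navig/motif/Plugin
    "$JAVA_HOME/bin/java_vm"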
|
14,334,234
|
Permissions to add SPN to computer account
|
I am trying to add some RHEL6 boxes to an S2008R2 domain. Part of the process is to create a computer account in AD, then add an SPN to it. Pretty much all guides say that you need Admin privileges on AD, but that is not available to me. I want to give the Linux Admins the minimum rights possible on AD. Can anyone tell me what rights they need on the target OU to do this? Adding Computer objects is obvious, but then they cannot add the SPN. If I take a step back and try it from the Linux side (using net ads join creatupn="host\jhgfjg" ) then it adds the object, but once again does not add the SPN.
|
Permissions to add SPN to computer account I am trying to add some RHEL6 boxes to an S2008R2 domain. Part of the process is to create a computer account in AD, then add an SPN to it. Pretty much all guides say that you need Admin privileges on AD, but that is not available to me. I want to give the Linux Admins the minimum rights possible on AD. Can anyone tell me what rights they need on the target OU to do this? Adding Computer objects is obvious, but then they cannot add the SPN. If I take a step back and try it from the Linux side (using net ads join creatupn="host\jhgfjg" ) then it adds the object, but once again does not add the SPN.
|
active-directory, rhel
| 2
| 1,517
| 1
|
https://stackoverflow.com/questions/14334234/permissions-to-add-spn-to-computer-account
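For the rights question, the AD permission usually involved is "Validated write to service principal name" on computer objects in the target OU, alongside Create Computer objects. As a sketch of exercising it from the Linux side once delegated, assuming openldap-clients with GSSAPI support and entirely hypothetical names:

    # Add an SPN directly on the computer object via LDAP.
    cat > add_spn.ldif <<'EOF'
    dn: CN=myhost,OU=LinuxServers,DC=example,DC=com
    changetype: modify
    add: servicePrincipalName
    servicePrincipalName: host/myhost.example.com
    EOF
    kinit linuxadmin@EXAMPLE.COM
    ldapmodify -H ldap://dc1.example.com -Y GSSAPI -f add_spn.ldif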
|
12,737,044
|
getimagesize does not work with files that contain latin/special chars PHP RHEL
|
We have a set of images, around 2500, that are stored on a local RHEL 6.2 server. We also have a MySQL DB with all the names of these files. The problem is, when we call html2pdf to create the PDF from an HTML generated through PHP, whenever it loads an image with special chars, such as áéíóúÁÉÍÓÚñÑ, the script throws an error because the file can't be read (because of the special chars, not because of permissions). So the question is: MySQL natively returns the filename as "imágen 1 00023884 otoño.jpg", and the files are stored in RHEL with the same name. How do you tell getimagesize() that it should read the filename AS IS? I've tried fopen but it doesn't work either; even if I rawurlencode the filename, it throws the can't-load-file error. ("allow_url_fopen" is on.) Everything else works fine with files with no special chars. Hope I made myself clear. Thanks!
|
getimagesize does not work with files that contain latin/special chars PHP RHEL We have a set of images, around 2500, that are stored on a local RHEL 6.2 server. We also have a MySQL DB with all the names of these files. The problem is, when we call html2pdf to create the PDF from an HTML generated through PHP, whenever it loads an image with special chars, such as áéíóúÁÉÍÓÚñÑ, the script throws an error because the file can't be read (because of the special chars, not because of permissions). So the question is: MySQL natively returns the filename as "imágen 1 00023884 otoño.jpg", and the files are stored in RHEL with the same name. How do you tell getimagesize() that it should read the filename AS IS? I've tried fopen but it doesn't work either; even if I rawurlencode the filename, it throws the can't-load-file error. ("allow_url_fopen" is on.) Everything else works fine with files with no special chars. Hope I made myself clear. Thanks!
|
php, html, linux, rhel, html2pdf
| 2
| 560
| 1
|
https://stackoverflow.com/questions/12737044/getimagesize-does-not-work-with-files-that-contain-latin-special-chars-php-rhel
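A common cause here is a filename-encoding mismatch: the bytes MySQL returns (say, UTF-8) differ from the bytes actually on disk (say, ISO-8859-1), so PHP asks the filesystem for a path that does not exist. A shell-level sketch for checking and, if confirmed, converting the on-disk names; the paths are hypothetical and convmv is available from EPEL:

    # Inspect the raw bytes of a suspect filename:
    ls /path/to/images | grep 'oto' | hexdump -C | head
    # convmv dry-runs by default; --notest actually renames:
    convmv -f iso-8859-1 -t utf-8 -r /path/to/images
    convmv -f iso-8859-1 -t utf-8 -r --notest /path/to/images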
|
3,859,876
|
Why am I getting a SegFault when I call pdftk from PHP/Apache but not PHP/CLI or directly
|
When I call /usr/local/bin/pdftk from PHP in Apache (via shell_exec() , exec() , system() , etc. ), it returns the SYNOPSIS message as expected. When I call /usr/local/bin/pdftk input.pdf fill_form input.fdf output output.pdf flatten via shell_exec() , nothing returns. When I copy and paste the exact same string to the same path in the shell (as the apache user), the output.pdf file is generated as expected. Moving the pdftk command into a PHP shell script (shebang is #!/usr/bin/php ) and executing it with php script.php works perfectly. Calling that shell script (with its stderr redirected to stdout) from PHP in Apache (via shell_exec(script.php); ) results in this line: sh: line 1: 32547 Segmentation fault /usr/local/bin/pdftk input.pdf fill_form input.fdf output output.pdf flatten 2>&1 Whenever I run the script from the command line (via PHP or directly), it works fine. Whenever I run the script through PHP via Apache, it either fails without any notification or gives the SegFault listed above. It's PHP 4.3.9 on RHEL4. Please don't shoot me. I've set memory to 512M with ini_set() and made sure that the apache user had read/write to all paths (with fopen()) and by logging in as apache ... Just went and checked /var/log/messages to find this: Oct 4 21:17:58 discovery kernel: audit(1286241478.692:1764638): avc: denied { read } for pid=32627 comm="pdftk" name="zero" dev=tmpfs ino=2161 scontext=root:system_r:httpd_sys_script_t tcontext=system_u:object_r:zero_device_t tclass=chr_file NOTE: Disabling SELinux "fixed" the problem. Has this moved into a ServerFault question? Can anybody give me the 30 second SELinux access controls primer here?
|
Why am I getting a SegFault when I call pdftk from PHP/Apache but not PHP/CLI or directly When I call /usr/local/bin/pdftk from PHP in Apache (via shell_exec() , exec() , system() , etc. ), it returns the SYNOPSIS message as expected. When I call /usr/local/bin/pdftk input.pdf fill_form input.fdf output output.pdf flatten via shell_exec() , nothing returns. When I copy and paste the exact same string to the same path in the shell (as the apache user), the output.pdf file is generated as expected. Moving the pdftk command into a PHP shell script (shebang is #!/usr/bin/php ) and executing it with php script.php works perfectly. Calling that shell script (with its stderr redirected to stdout) from PHP in Apache (via shell_exec(script.php); ) results in this line: sh: line 1: 32547 Segmentation fault /usr/local/bin/pdftk input.pdf fill_form input.fdf output output.pdf flatten 2>&1 Whenever I run the script from the command line (via PHP or directly), it works fine. Whenever I run the script through PHP via Apache, it either fails without any notification or gives the SegFault listed above. It's PHP 4.3.9 on RHEL4. Please don't shoot me. I've set memory to 512M with ini_set() and made sure that the apache user had read/write to all paths (with fopen()) and by logging in as apache ... Just went and checked /var/log/messages to find this: Oct 4 21:17:58 discovery kernel: audit(1286241478.692:1764638): avc: denied { read } for pid=32627 comm="pdftk" name="zero" dev=tmpfs ino=2161 scontext=root:system_r:httpd_sys_script_t tcontext=system_u:object_r:zero_device_t tclass=chr_file NOTE: Disabling SELinux "fixed" the problem. Has this moved into a ServerFault question? Can anybody give me the 30 second SELinux access controls primer here?
|
php, pdf-generation, segmentation-fault, rhel, selinux
| 2
| 1,296
| 1
|
https://stackoverflow.com/questions/3859876/why-am-i-getting-a-segfault-when-i-call-pdftk-from-php-apache-but-not-php-cli-or
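The 30-second primer in practice: an AVC denial like the one logged can be turned into a small local policy module instead of disabling SELinux wholesale. A sketch, assuming a reasonably recent audit2allow is installed:

    # Build and load a module allowing exactly what was denied:
    ausearch -c pdftk --raw | audit2allow -M pdftklocal
    semodule -i pdftklocal.pp
    # To test the SELinux hypothesis non-destructively first:
    setenforce 0   # temporary permissive mode; restore with setenforce 1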
|
75,440,767
|
RHEL - SELinux access control errors
|
I have a problem with SELinux privileges and Docker. In Docker I run mailcow, but now I get a blank screen, and it looks like it might be a privileges problem, because the diagnostic says this: SELinux is preventing /usr/local/bin/php from read access on the file /web/inc/init_db.inc.php . Audit log: type=AVC msg=audit(1676319004.771:1087): avc: denied { read } for pid=14555 comm="php-fpm" name="init_db.inc.php" dev="dm-0" ino=135058961 scontext=system_u:system_r:container_t:s0:c706,c972 tcontext=system_u:object_r:container_file_t:s0:c89,c575 tclass=file permissive=0 type=SYSCALL msg=audit(1676319004.771:1087): arch=c000003e syscall=2 success=no exit=-13 a0=7fffc4e15850 a1=8000 a2=0 a3=0 items=1 ppid=6637 pid=14555 auid=4294967295 uid=82 gid=82 euid=82 suid=82 fsuid=82 egid=82 sgid=82 fsgid=82 tty=(none) ses=4294967295 comm="php-fpm" exe="/usr/local/sbin/php-fpm" subj=system_u:system_r:container_t:s0:c706,c972 key=(null) type=CWD msg=audit(1676319004.771:1087): cwd="/web" type=PATH msg=audit(1676319004.771:1087): item=0 name="/web/inc/init_db.inc.php" inode=135058961 dev=fd:00 mode=0100666 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:container_file_t:s0:c89,c575 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 How can I solve it? This is what I tried: ausearch -c 'php' --raw | audit2allow -M my-php semodule -X 300 -i my-php.pp and the results are: compilation failed: my-php.te:15:ERROR 'syntax error' at token 'mlsconstrain' on line 15: # mlsconstrain file { ioctl read lock execute execute_no_trans } ((h1 dom h2 -Fail-) or (t1 != mcs_constrained_type -Fail-) ); Constraint DENIED mlsconstrain file { write setattr append unlink link rename } ((h1 dom h2 -Fail-) or (t1 != mcs_constrained_type -Fail-) ); Constraint DENIED /usr/bin/checkmodule: error(s) encountered while parsing configuration [root@rhel ~]# semodule -X 300 -i my-php.pp libsemanage.map_compressed_file: Unable to open my-php.pp (No such file or directory). libsemanage.semanage_direct_install_file: Unable to read file my-php.pp (No such file or directory). semodule: Failed on my-php.pp!
|
RHEL - SELinux access control errors I have a problem with SELinux privileges and Docker. In Docker I run mailcow, but now I get a blank screen, and it looks like it might be a privileges problem, because the diagnostic says this: SELinux is preventing /usr/local/bin/php from read access on the file /web/inc/init_db.inc.php . Audit log: type=AVC msg=audit(1676319004.771:1087): avc: denied { read } for pid=14555 comm="php-fpm" name="init_db.inc.php" dev="dm-0" ino=135058961 scontext=system_u:system_r:container_t:s0:c706,c972 tcontext=system_u:object_r:container_file_t:s0:c89,c575 tclass=file permissive=0 type=SYSCALL msg=audit(1676319004.771:1087): arch=c000003e syscall=2 success=no exit=-13 a0=7fffc4e15850 a1=8000 a2=0 a3=0 items=1 ppid=6637 pid=14555 auid=4294967295 uid=82 gid=82 euid=82 suid=82 fsuid=82 egid=82 sgid=82 fsgid=82 tty=(none) ses=4294967295 comm="php-fpm" exe="/usr/local/sbin/php-fpm" subj=system_u:system_r:container_t:s0:c706,c972 key=(null) type=CWD msg=audit(1676319004.771:1087): cwd="/web" type=PATH msg=audit(1676319004.771:1087): item=0 name="/web/inc/init_db.inc.php" inode=135058961 dev=fd:00 mode=0100666 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:container_file_t:s0:c89,c575 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 How can I solve it? This is what I tried: ausearch -c 'php' --raw | audit2allow -M my-php semodule -X 300 -i my-php.pp and the results are: compilation failed: my-php.te:15:ERROR 'syntax error' at token 'mlsconstrain' on line 15: # mlsconstrain file { ioctl read lock execute execute_no_trans } ((h1 dom h2 -Fail-) or (t1 != mcs_constrained_type -Fail-) ); Constraint DENIED mlsconstrain file { write setattr append unlink link rename } ((h1 dom h2 -Fail-) or (t1 != mcs_constrained_type -Fail-) ); Constraint DENIED /usr/bin/checkmodule: error(s) encountered while parsing configuration [root@rhel ~]# semodule -X 300 -i my-php.pp libsemanage.map_compressed_file: Unable to open my-php.pp (No such file or directory). libsemanage.semanage_direct_install_file: Unable to read file my-php.pp (No such file or directory). semodule: Failed on my-php.pp!
|
php, rhel, privileges, selinux, mailcow
| 2
| 1,357
| 1
|
https://stackoverflow.com/questions/75440767/rhel-selinux-access-control-errors
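The denial shows container_t reading a container_file_t file whose MCS category pair (c89,c575) does not match the container's own (c706,c972), which typically means the volume was labelled for a different container. Relabelling the bind mount is usually cleaner than a custom module; a sketch with hypothetical paths:

    # Let the runtime relabel the volume privately for this container
    # by appending :Z to the bind mount (:z for a shared label):
    docker run -v /opt/mailcow/web:/web:Z ...
    # or relabel once by hand, clearing the stale categories:
    chcon -R -t container_file_t -l s0 /opt/mailcow/web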
|
36,750,295
|
Install packages with yum to alt python location
|
I am currently running RHEL 6.6. This has python 2.6.6 pre-installed and it is the default. I installed python 2.7 using the altinstall method. The default python is still 2.6.6. I'm trying to install the python-devel packages using yum, which only runs as root. However, when I run yum install python-devel as root, it installs the packages for python 2.6.6. Is there a way to get yum to install packages using the alt python install?
|
Install packages with yum to alt python location I am currently running RHEL 6.6. This has python 2.6.6 pre-installed and it is the default. I installed python 2.7 using the altinstall method. The default python is still 2.6.6. I'm trying to install the python-devel packages using yum, which only runs as root. However, when I run yum install python-devel as root, it installs the packages for python 2.6.6. Is there a way to get yum to install packages using the alt python install?
|
python, python-2.7, rhel, rhel6
| 2
| 321
| 1
|
https://stackoverflow.com/questions/36750295/install-packages-with-yum-to-alt-python-location
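yum can only place files where the RPM dictates, and RHEL 6's python-devel package is built against the system 2.6.6, so it will always land there. For an altinstalled 2.7, the usual route is that interpreter's own pip; a sketch assuming the default /usr/local altinstall prefix:

    # ensurepip ships with python 2.7.9+; older 2.7.x needs get-pip.py.
    /usr/local/bin/python2.7 -m ensurepip
    /usr/local/bin/python2.7 -m pip install <package>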
|
21,998,745
|
Neo4j on Amazon EC2 - Not accessible from remote machines
|
I am currently attempting to get neo4j installed properly on an EC2 instance with RHEL. Currently I can not hit the server on port 7474 from a browser to see the neo4j webadmin or browser. As of right now I can successfully access localhost:7474 which leads me to believe it is some level of access issue with remote connections. What I have done so far: Installed Oracle Java 1.7 on the EC2 instance Installed neo4j-community-2.0.1 Added org.neo4j.server.webserver.address=0.0.0.0 to the neo4j-server.properties Added a custom TCP rule in EC2 UI for port 7474 allowing 0.0.0.0/0 Added 7474 to iptables Restarted the instance run neo4j start Looking at netstat I see the process listening on port 7474, so I am unsure what else would be preventing external traffic from hitting the public EC2 DNS for the server on port 7474. console.log 2014-02-24 20:25:24.572+0000 INFO [API] Setting startup timeout to: 120000ms based on -1 2014-02-24 20:25:27.226+0000 INFO [API] Successfully started database 2014-02-24 20:25:28.924+0000 INFO [API] Starting HTTP on port :7474 with 10 threads available 2014-02-24 20:25:29.387+0000 INFO [API] Enabling HTTPS on port :7473 2014-02-24 20:25:30.077+0000 INFO [API] Mounted discovery module at [/] 2014-02-24 20:25:30.088+0000 INFO [API] Mounted REST API at [/db/data/] 2014-02-24 20:25:30.097+0000 INFO [API] Mounted management API at [/db/manage/] 2014-02-24 20:25:30.099+0000 INFO [API] Mounted webadmin at [/webadmin] 2014-02-24 20:25:30.100+0000 INFO [API] Mounted Neo4j Browser at [/browser] 2014-02-24 20:25:30.202+0000 INFO [API] Mounting static content at [/webadmin] from [webadmin-html] 2014-02-24 20:25:30.326+0000 INFO [API] Mounting static content at [/browser] from [browser] 15:25:30.328 [main] WARN o.e.j.server.handler.ContextHandler - o.e.j.s.ServletContextHandler@164cc9b7{/,null,null} contextPath ends with / 15:25:30.328 [main] WARN o.e.j.server.handler.ContextHandler - Empty contextPath 15:25:30.331 [main] INFO org.eclipse.jetty.server.Server - jetty-9.0.5.v20130815 15:25:30.387 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.h.MovedContextHandler@5e9c3ce7{/,null,AVAILABLE} 15:25:30.780 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /webadmin, did not find org.apache.jasper.servlet.JspServlet 15:25:30.802 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@488a358e{/webadmin,jar:file:/opt/neo4j-community-2.0.1/system/lib/neo4j-server-2.0.1-static-web.jar!/webadmin-html,AVAILABLE} 15:25:31.964 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@750e589{/db/manage,null,AVAILABLE} 15:25:32.759 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@1480606f{/db/data,null,AVAILABLE} 15:25:32.787 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.apache.jasper.servlet.JspServlet 15:25:32.789 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@3f773163{/browser,jar:file:/opt/neo4j-community-2.0.1/system/lib/neo4j-browser-2.0.1.jar!/browser,AVAILABLE} 15:25:33.047 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@164cc9b7{/,null,AVAILABLE} 15:25:33.078 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector@14cfe45e{HTTP/1.1}{0.0.0.0:7474} 15:25:34.498 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector@44590060{SSL-HTTP/1.1}{0.0.0.0:7473} 2014-02-24 20:25:34.500+0000 INFO [API] Remote interface 
ready and available at [[URL] neo4j.0.0.log: Feb 24, 2014 3:25:24 PM org.neo4j.server.logging.Logger log INFO: Setting startup timeout to: 120000ms based on -1 Feb 24, 2014 3:25:27 PM org.neo4j.server.logging.Logger log INFO: Successfully started database Feb 24, 2014 3:25:28 PM org.neo4j.server.logging.Logger log INFO: Starting HTTP on port :7474 with 10 threads available Feb 24, 2014 3:25:29 PM org.neo4j.server.logging.Logger log INFO: Enabling HTTPS on port :7473 Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted discovery module at [/] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted REST API at [/db/data/] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted management API at [/db/manage/] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted webadmin at [/webadmin] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted Neo4j Browser at [/browser] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounting static content at [/webadmin] from [webadmin-html] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounting static content at [/browser] from [browser] Feb 24, 2014 3:25:31 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Feb 24, 2014 3:25:31 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Feb 24, 2014 3:25:32 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Feb 24, 2014 3:25:34 PM org.neo4j.server.logging.Logger log INFO: Remote interface ready and available at [[URL]
|
Neo4j on Amazon EC2 - Not accessible from remote machines I am currently attempting to get neo4j installed properly on an EC2 instance with RHEL. Currently I can not hit the server on port 7474 from a browser to see the neo4j webadmin or browser. As of right now I can successfully access localhost:7474 which leads me to believe it is some level of access issue with remote connections. What I have done so far: Installed Oracle Java 1.7 on the EC2 instance Installed neo4j-community-2.0.1 Added org.neo4j.server.webserver.address=0.0.0.0 to the neo4j-server.properties Added a custom TCP rule in EC2 UI for port 7474 allowing 0.0.0.0/0 Added 7474 to iptables Restarted the instance run neo4j start Looking at netstat I see the process listening on port 7474, so I am unsure what else would be preventing external traffic from hitting the public EC2 DNS for the server on port 7474. console.log 2014-02-24 20:25:24.572+0000 INFO [API] Setting startup timeout to: 120000ms based on -1 2014-02-24 20:25:27.226+0000 INFO [API] Successfully started database 2014-02-24 20:25:28.924+0000 INFO [API] Starting HTTP on port :7474 with 10 threads available 2014-02-24 20:25:29.387+0000 INFO [API] Enabling HTTPS on port :7473 2014-02-24 20:25:30.077+0000 INFO [API] Mounted discovery module at [/] 2014-02-24 20:25:30.088+0000 INFO [API] Mounted REST API at [/db/data/] 2014-02-24 20:25:30.097+0000 INFO [API] Mounted management API at [/db/manage/] 2014-02-24 20:25:30.099+0000 INFO [API] Mounted webadmin at [/webadmin] 2014-02-24 20:25:30.100+0000 INFO [API] Mounted Neo4j Browser at [/browser] 2014-02-24 20:25:30.202+0000 INFO [API] Mounting static content at [/webadmin] from [webadmin-html] 2014-02-24 20:25:30.326+0000 INFO [API] Mounting static content at [/browser] from [browser] 15:25:30.328 [main] WARN o.e.j.server.handler.ContextHandler - o.e.j.s.ServletContextHandler@164cc9b7{/,null,null} contextPath ends with / 15:25:30.328 [main] WARN o.e.j.server.handler.ContextHandler - Empty contextPath 15:25:30.331 [main] INFO org.eclipse.jetty.server.Server - jetty-9.0.5.v20130815 15:25:30.387 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.h.MovedContextHandler@5e9c3ce7{/,null,AVAILABLE} 15:25:30.780 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /webadmin, did not find org.apache.jasper.servlet.JspServlet 15:25:30.802 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@488a358e{/webadmin,jar:file:/opt/neo4j-community-2.0.1/system/lib/neo4j-server-2.0.1-static-web.jar!/webadmin-html,AVAILABLE} 15:25:31.964 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@750e589{/db/manage,null,AVAILABLE} 15:25:32.759 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@1480606f{/db/data,null,AVAILABLE} 15:25:32.787 [main] INFO o.e.j.w.StandardDescriptorProcessor - NO JSP Support for /browser, did not find org.apache.jasper.servlet.JspServlet 15:25:32.789 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.w.WebAppContext@3f773163{/browser,jar:file:/opt/neo4j-community-2.0.1/system/lib/neo4j-browser-2.0.1.jar!/browser,AVAILABLE} 15:25:33.047 [main] INFO o.e.j.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@164cc9b7{/,null,AVAILABLE} 15:25:33.078 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector@14cfe45e{HTTP/1.1}{0.0.0.0:7474} 15:25:34.498 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector@44590060{SSL-HTTP/1.1}{0.0.0.0:7473} 
2014-02-24 20:25:34.500+0000 INFO [API] Remote interface ready and available at [[URL] neo4j.0.0.log: Feb 24, 2014 3:25:24 PM org.neo4j.server.logging.Logger log INFO: Setting startup timeout to: 120000ms based on -1 Feb 24, 2014 3:25:27 PM org.neo4j.server.logging.Logger log INFO: Successfully started database Feb 24, 2014 3:25:28 PM org.neo4j.server.logging.Logger log INFO: Starting HTTP on port :7474 with 10 threads available Feb 24, 2014 3:25:29 PM org.neo4j.server.logging.Logger log INFO: Enabling HTTPS on port :7473 Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted discovery module at [/] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted REST API at [/db/data/] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted management API at [/db/manage/] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted webadmin at [/webadmin] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounted Neo4j Browser at [/browser] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounting static content at [/webadmin] from [webadmin-html] Feb 24, 2014 3:25:30 PM org.neo4j.server.logging.Logger log INFO: Mounting static content at [/browser] from [browser] Feb 24, 2014 3:25:31 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Feb 24, 2014 3:25:31 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Feb 24, 2014 3:25:32 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' Feb 24, 2014 3:25:34 PM org.neo4j.server.logging.Logger log INFO: Remote interface ready and available at [[URL]
|
java, amazon-web-services, amazon-ec2, neo4j, rhel
| 2
| 1,411
| 2
|
https://stackoverflow.com/questions/21998745/neo4j-on-amazon-ec2-not-accessible-from-remote-machines
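Given that the log shows the listener bound to 0.0.0.0:7474, the usual remaining suspect on RHEL is iptables rule ordering: an ACCEPT appended after the chain's catch-all REJECT never matches. A sketch for verifying and fixing:

    # Confirm the wildcard bind and inspect the rule order:
    sudo netstat -tlnp | grep 7474
    sudo iptables -L INPUT -n --line-numbers
    # Insert the ACCEPT above the REJECT rule instead of appending it:
    sudo iptables -I INPUT 1 -p tcp --dport 7474 -j ACCEPT
    sudo service iptables save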
|
512,063
|
Meaning of Path Structure on Unix
|
Does anybody have a reference to what the various path names mean on different flavors of Unix? Please include Solaris, RHEL, and SLES in the list if possible. e.g. From what I have gathered /lib is standard libraries for the distribution, which never change (is this correct? or do they get new versions from time to time?), /usr/local is for apps installed by the sysadmin, etc. But I am not sure that this is correct and I'm still unclear about the difference between /usr/lib and /lib (the former is for sysadmin installed libraries?) and /sbin and /bin and so on... Thanks.
|
Meaning of Path Structure on Unix Does anybody have a reference to what the various path names mean on different flavors of Unix? Please include Solaris, RHEL, and SLES in the list if possible. e.g. From what I have gathered /lib is standard libraries for the distribution, which never change (is this correct? or do they get new versions from time to time?), /usr/local is for apps installed by the sysadmin, etc. But I am not sure that this is correct and I'm still unclear about the difference between /usr/lib and /lib (the former is for sysadmin installed libraries?) and /sbin and /bin and so on... Thanks.
|
unix, path, suse, rhel
| 1
| 2,412
| 4
|
https://stackoverflow.com/questions/512063/meaning-of-path-structure-on-unix
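The layout being asked about is codified in the Filesystem Hierarchy Standard (FHS), and most Linux systems ship a local summary worth checking before any distro-specific docs:

    # Quick reference for /bin vs /sbin, /lib vs /usr/lib, /usr/local, etc.
    man hier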
|
38,862,430
|
Unterminated 's' command with 's/([\^][^])//g'
|
I am trying to find any caret (^) characters in my file, and delete them and the subsequent character whenever they exist. I am running this in bash. Any time I try and run the sed to do so: sed -i 's/([\^][^])//g' myfile.txt I get the below error: sed: -e expression #1, char 14: unterminated `s' command Any ideas?
|
Unterminated 's' command with 's/([\^][^])//g' I am trying to find any caret (^) characters in my file, and delete them and the subsequent character whenever they exist. I am running this in bash. Any time I try and run the sed to do so: sed -i 's/([\^][^])//g' myfile.txt I get the below error: sed: -e expression #1, char 14: unterminated `s' command Any ideas?
|
linux, bash, sed, rhel
| 1
| 839
| 3
|
https://stackoverflow.com/questions/38862430/unterminated-s-command-with-s-g
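The error follows from POSIX bracket-expression rules: in [^]) , the ] immediately after ^ counts as a literal member of the set, so the bracket expression swallows the rest of the command and the s never terminates. For "delete each caret plus the following character", a much simpler pattern does the job:

    # \^ matches a literal caret, . matches the one character after it:
    sed -i 's/\^.//g' myfile.txt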
|
25,004,021
|
telnet to determine open ports (shell script)
|
I am trying to write a shell script which takes an IP address and a port number as input and outputs whether the port is open on the host. My shell script looks like this #!/bin/bash name=$(echo exit | telnet $1 $2 | grep "Connected") if [ "$name" == "" ] then echo "Port $2 is not open on $1" else echo "Port $2 is open on $1" fi It works fine, but my output contains 2 lines, something like this: [root@ip-172-31-8-36 Scripts]# ./test.sh 172.31.35.246 7199 Connection closed by foreign host. Port 7199 is open on 172.31.35.246 OR [root@ip-172-31-8-36 Scripts]# ./test.sh 172.31.35.246 7200 telnet: connect to address 172.31.35.246: Connection refused Port 7200 is not open on 172.31.35.246 I want to suppress the 1st line from the output in both cases. Any idea how to do it?
|
telnet to determine open ports (shell script) I am trying to write a shell script which takes an IP address and a port number as input and outputs whether the port is open on the host. My shell script looks like this #!/bin/bash name=$(echo exit | telnet $1 $2 | grep "Connected") if [ "$name" == "" ] then echo "Port $2 is not open on $1" else echo "Port $2 is open on $1" fi It works fine, but my output contains 2 lines, something like this: [root@ip-172-31-8-36 Scripts]# ./test.sh 172.31.35.246 7199 Connection closed by foreign host. Port 7199 is open on 172.31.35.246 OR [root@ip-172-31-8-36 Scripts]# ./test.sh 172.31.35.246 7200 telnet: connect to address 172.31.35.246: Connection refused Port 7200 is not open on 172.31.35.246 I want to suppress the 1st line from the output in both cases. Any idea how to do it?
|
linux, bash, telnet, rhel
| 1
| 4,927
| 3
|
https://stackoverflow.com/questions/25004021/telnet-to-determine-open-ports-shell-script
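Both stray lines ("Connection closed by foreign host." and the "Connection refused" message) go to telnet's stderr, which the command substitution does not capture, so they land directly on the terminal. Redirecting stderr inside the substitution silences them; bash's /dev/tcp pseudo-device is an even quieter alternative (sketch):

    name=$(echo exit | telnet "$1" "$2" 2>/dev/null | grep "Connected")
    # or, with no telnet output at all:
    if (echo > "/dev/tcp/$1/$2") 2>/dev/null; then
        echo "Port $2 is open on $1"
    else
        echo "Port $2 is not open on $1"
    fi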
|
32,143,367
|
How to reliably check if I am above a certain CentOS version (CentOS 7) in a Python script?
|
I have been given a task to port a bunch of our internal applications from CentOS6 to CentOS7. With this move, we are changing our dependencies from external packages that we have repackaged ourselves to official upstream version of the packages. Because of this I am looking for a reliable piece of python2.7 code that will do this: if CentOS version >= 7: do things the new way else: do things the deprecated way It will be used for autogenerating .spec files to make RPMs. I've been looking at things like parsing /etc/redhat-release but that seems a little bit unreliable for what I want. Is there a better way? Thanks very much.
|
How to reliably check if I am above a certain CentOS version (CentOS 7) in a Python script? I have been given a task to port a bunch of our internal applications from CentOS6 to CentOS7. With this move, we are changing our dependencies from external packages that we have repackaged ourselves to official upstream version of the packages. Because of this I am looking for a reliable piece of python2.7 code that will do this: if CentOS version >= 7: do things the new way else: do things the deprecated way It will be used for autogenerating .spec files to make RPMs. I've been looking at things like parsing /etc/redhat-release but that seems a little bit unreliable for what I want. Is there a better way? Thanks very much.
|
python, python-2.7, centos, rhel, centos7
| 1
| 203
| 2
|
https://stackoverflow.com/questions/32143367/how-to-reliably-check-if-i-am-above-a-certain-centos-version-centos-7-in-a-pyt
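Since this feeds generated .spec files, it is worth noting that RPM already exposes the distribution major version as the %{?rhel} macro (e.g. %if 0%{?rhel} >= 7), which sidesteps runtime detection entirely. Where a script-level check is still wanted, the release package's version is a steadier source than parsing /etc/redhat-release; a shell sketch of the same idea:

    # centos-release on CentOS (redhat-release-server on RHEL):
    ver=$(rpm -q --qf '%{VERSION}' centos-release 2>/dev/null | cut -c1)
    if [ "${ver:-0}" -ge 7 ]; then
        echo "do things the new way"
    else
        echo "do things the deprecated way"
    fi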
|
25,801,396
|
gfortran doesn't understand C style comments?
|
I am trying to compile the WRF package, which is mostly written in Fortran. Most of the files in this package have a preamble using C-style "/* … */" comments. Unfortunately, when compiling, I have been receiving these errors frequently: /* Copyright (C) 1991-2012 Free Software Foundation, Inc. 1 Error: Invalid character in name at (1) ../dyn_em/module_big_step_utilities_em.f90:2.3: I believe my gfortran version simply does not understand this type of comment, and is failing. I find this very odd since I downloaded the compiler using "yum install" on Red Hat 7. Could someone please enlighten me?
|
gfortran doesn't understand C style comments? I am trying to compile the WRF package, which is mostly written in Fortran. Most of the files in this package have a preamble using C-style "/* … */" comments. Unfortunately, when compiling, I have been receiving these errors frequently: /* Copyright (C) 1991-2012 Free Software Foundation, Inc. 1 Error: Invalid character in name at (1) ../dyn_em/module_big_step_utilities_em.f90:2.3: I believe my gfortran version simply does not understand this type of comment, and is failing. I find this very odd since I downloaded the compiler using "yum install" on Red Hat 7. Could someone please enlighten me?
|
fortran, gfortran, rhel
| 1
| 2,225
| 1
|
https://stackoverflow.com/questions/25801396/gfortran-doesnt-understand-c-style-comments
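gfortran itself never reads C comments here: WRF pipes its .F sources through cpp first, and on newer glibc/GCC systems cpp pre-includes <stdc-predef.h>, whose FSF copyright header is what leaks into the generated .f90 that gfortran then rejects. A commonly reported fix, offered here as an assumption to verify against your own configure.wrf, is to inhibit that pre-include on the CPP line:

    # In configure.wrf; -nostdinc (or -ffreestanding) stops the
    # stdc-predef.h pre-include:
    CPP = /lib/cpp -P -nostdinc -traditional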
|
61,366,815
|
How Cassandra Cluster - Seed Provider Works?
|
I have a question about the Cassandra seed_provider assignment. In my environment, there are 3 Cassandra nodes that need to be set up as a cluster. How should I define it in the cassandra.yaml? I'm confused since most of the tutorials give different answers. Example: Host A - 192.168.1.1 Host B - 192.168.1.2 Host C - 192.168.1.3 The following is my current setup for Host A; is it correct? What about the configuration for Host B & Host C? # any class that implements the SeedProvider interface and has a # constructor that takes a Map<String, String> of parameters will do. seed_provider: # Addresses of hosts that are deemed contact points. # Cassandra nodes use this list of hosts to find each other and learn # the topology of the ring. You must change this if you are running # multiple nodes! - class_name: org.apache.cassandra.locator.SimpleSeedProvider parameters: # seeds is actually a comma-delimited list of addresses. # Ex: "<ip1>,<ip2>,<ip3>" - seeds: "192.168.1.1,192.168.1.2,192.168.1.3"
|
How Cassandra Cluster - Seed Provider Works? I have a question about the Cassandra seed_provider assignment. In my environment, there are 3 Cassandra nodes that need to be set up as a cluster. How should I define it in the cassandra.yaml? I'm confused since most of the tutorials give different answers. Example: Host A - 192.168.1.1 Host B - 192.168.1.2 Host C - 192.168.1.3 The following is my current setup for Host A; is it correct? What about the configuration for Host B & Host C? # any class that implements the SeedProvider interface and has a # constructor that takes a Map<String, String> of parameters will do. seed_provider: # Addresses of hosts that are deemed contact points. # Cassandra nodes use this list of hosts to find each other and learn # the topology of the ring. You must change this if you are running # multiple nodes! - class_name: org.apache.cassandra.locator.SimpleSeedProvider parameters: # seeds is actually a comma-delimited list of addresses. # Ex: "<ip1>,<ip2>,<ip3>" - seeds: "192.168.1.1,192.168.1.2,192.168.1.3"
|
database, cassandra, rhel, cassandra-2.0
| 1
| 3,485
| 1
|
https://stackoverflow.com/questions/61366815/how-cassandra-cluster-seed-provider-works
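Seeds are only gossip contact points for joining the ring, so the usual practice is that every node carries the same seed list (often a subset of the nodes rather than all of them), and a node may list itself. Hosts A, B and C can therefore share the block from the question verbatim; a minimal variant plus a sanity check after starting the nodes one at a time:

    # Identical on all three hosts in cassandra.yaml:
    #   - seeds: "192.168.1.1,192.168.1.2"
    # Then verify the ring formed:
    nodetool status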
|
58,773,246
|
As a root user can I execute commands in BASH as another user without requiring a password?
|
I have a root user account on an RHEL server. On that server I have a simple script called user.sh that just returns the current user: #!/bin/bash echo $USER When run from my root account, the output is bash user.sh >>>root From another script I would like to be able to temporarily switch between users without entering a password, storing the password in the script or a file, or modifying /etc/sudoers, execute user.sh, and then return to my initial root account. Is this at all possible? Here's what I've tried so far: 1. Using delimiters to execute a block of code #!/bin/bash bash /user.sh su other_user <<EOF echo Current user: $USER EOF output: root Current user: root 2. Switching to a user in bash, executing a command and then logging back out #!/bin/bash bash /user.sh su other_user bash /user.sh exit output: The script pauses execution and returns me to the terminal logged in as other_user; however, I will still be in my root account's directory that contains user.sh root [other_user@my_server]$ If I then type exit, I'm returned to my root account and the script completes execution. 3. Using su - <username> -c /path/to/the/shellscript.sh to execute a script as a different account and then return #!/bin/bash bash /user.sh su - other_user -c /path/user.sh output: root -bash: /path/user.sh: Permission denied 4. Using sudo -i -u other_user to log in as the user and execute the script, which yields the same problem experienced with attempt #2, but I am redirected to other_user 's home directory. It may be worth noting that if I use method 2, while I'm logged in as other_user I am able to run bash user.sh and yield the desired output: other_user
|
As a root user can I execute commands in BASH as another user without requiring a password? I have a root user account on an RHEL server. On that server I have a simple script called user.sh that just returns the current user: #!/bin/bash echo $USER When run from my root account, the output is bash user.sh >>>root From another script I would like to be able to temporarily switch between users without entering a password, storing the password in the script or a file, or modifying /etc/sudoers, execute user.sh, and then return to my initial root account. Is this at all possible? Here's what I've tried so far: 1. Using delimiters to execute a block of code #!/bin/bash bash /user.sh su other_user <<EOF echo Current user: $USER EOF output: root Current user: root 2. Switching to a user in bash, executing a command and then logging back out #!/bin/bash bash /user.sh su other_user bash /user.sh exit output: The script pauses execution and returns me to the terminal logged in as other_user; however, I will still be in my root account's directory that contains user.sh root [other_user@my_server]$ If I then type exit, I'm returned to my root account and the script completes execution. 3. Using su - <username> -c /path/to/the/shellscript.sh to execute a script as a different account and then return #!/bin/bash bash /user.sh su - other_user -c /path/user.sh output: root -bash: /path/user.sh: Permission denied 4. Using sudo -i -u other_user to log in as the user and execute the script, which yields the same problem experienced with attempt #2, but I am redirected to other_user 's home directory. It may be worth noting that if I use method 2, while I'm logged in as other_user I am able to run bash user.sh and yield the desired output: other_user
|
linux, bash, root, sudo, rhel
| 1
| 4,220
| 1
|
https://stackoverflow.com/questions/58773246/as-a-root-user-can-i-execute-commands-in-bash-as-another-user-without-requiring
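Two things are worth separating here. First, su run by root never prompts for a password, so the core constraint is already met. Second, the "Permission denied" in attempt 3 is other_user lacking read/execute rights along the script's path, and in attempt 1 the unquoted here-document let root's shell expand $USER before su ever ran. Two sketches that keep the calling script in control:

    # Feed the script over stdin, so other_user never needs to read the file:
    su - other_user -c 'bash -s' < /root/user.sh
    # Or install it somewhere world-readable first:
    install -m 755 /root/user.sh /usr/local/bin/user.sh
    su - other_user -c /usr/local/bin/user.sh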
|
58,208,150
|
How to use scl command as a script shebang?
|
If I want to run a specific command (with arguments) under Software Collections, I can use this command: scl enable python27 "ls /tmp" However, if I try to make a shell script that has a similar command as its shebang line, I get errors: $ cat myscript #!/usr/bin/scl enable python27 "ls /tmp" echo hello $ ./myscript Unable to open /etc/scl/prefixes/"ls! What am I doing wrong?
|
How to use scl command as a script shebang? If I want to run a specific command (with arguments) under Software Collections, I can use this command: scl enable python27 "ls /tmp" However, if I try to make a shell script that has a similar command as its shebang line, I get errors: $ cat myscript #!/usr/bin/scl enable python27 "ls /tmp" echo hello $ ./myscript Unable to open /etc/scl/prefixes/"ls! What am I doing wrong?
|
python-2.7, rhel, shebang, software-collections
| 1
| 6,303
| 3
|
https://stackoverflow.com/questions/58208150/how-to-use-scl-command-as-a-script-shebang
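The Linux kernel passes everything after the interpreter path as one single argument, so scl receives the literal string enable python27 "ls /tmp" and tries to open a prefix file named after its first word, hence the /etc/scl/prefixes/"ls error. A common workaround is the re-exec pattern; a sketch, where X_SCLS is the environment variable scl sets for enabled collections:

    #!/bin/bash
    # If not yet running inside the collection, re-run this same script under it.
    if [ -z "$X_SCLS" ]; then
        exec scl enable python27 -- "$0" "$@"
    fi
    echo hello
    ls /tmp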
|
50,493,487
|
openssl in RHEL - sign client cert with root without utilizing openssl.cnf?
|
Trying to achieve a sort of self-signed PKI setup utilizing openssl on RHEL, with a few caveats. I will attempt to provide as much information as possible here. Versions: RHEL 6.7 | OpenSSL 1.0.1e-fips 11 Feb 2013 Caveats/constraints on the script: this script will be utilized to create multiple key sets - by default, one root keypair and cert and two client keypairs and certs per run. User input is requested for file location, filename and passphrase on the client keys. All this was fairly straightforward, and I had a script that would run these commands on user request, and utilized the openssl.cnf file to point back to the root key to sign. I had used sed to change the location pointers in openssl.cnf based on filename originally, and was successfully able to sign the client cert. However, there are two major caveats to this: I was asked to change the script so that it does not dynamically modify the script or other files on each run, meaning openssl.cnf should not be edited on the fly if possible. If this is needed to function, however, then it should be fine. The user needs to be able to run multiple sets of this script ad hoc, especially with regard to the client keypairs (I have the root script and the client generation script separate in a user-selectable menu). That is to say, generating the client keypair requires having a root key to associate with, but can be done multiple times, and the client key script should ask the user with which root key to associate and sign from. Because of these constraints, editing openssl.cnf didn't seem like a prudent or very scalable option. So, given this info, the question I've been unable to figure out is simply: Is there a way to point a client key to a variable which would be the root key cert to sign? (Rather than utilizing openssl.cnf for the 'certificate' and 'private_key' entries?) As of now, I have: root key & cert: openssl req -config $dir/openssl.cnf -new -x509 -days 3652 -nodes -sha384 -newkey ec:ec-secp384r1.pem -keyout $userdir/${rootName}_private.key -out $userdir/${rootName}.crt -subj "stuff_here" ... export rootName client keys & certs: read -p "Which root key do you want to associate this client keypair with? Please type absolute filepath and filename (ending in .key); rkAssoc #STILL NEED TO USE THIS VARIABLE ##KEY GENERATION openssl req -newkey ec:ec-secp384rp1.pem -keyout $userdir/{$clientName}_privat.key -out $userdir/client/${clientName}.csr -subj "Stuff_here" ##SIGN CSR openssl ca -config $dir/openssl.cnf -policy policy_anything -extensions usr_cert -days 730 -notext -md sha384 -in $userdir/client/${clientName}.csr -out $userdir/client/${clientName}_signedprivatekey.pem && echo "Client key created." So I guess, 1) Did I do the client signing correctly (something seems off about it, but I'm not sure)? 2) Instead of referencing openssl.cnf, I presume there is probably some kind of flag where you could do something more like openssl ca ... -cert ${rkAssoc} ; is this remotely correct or am I way off? Thanks in advance to anyone who lends a hand.
|
openssl in RHEL - sign client cert with root without utilizing openssl.cnf? Trying to achieve a sort of self-signed PKI setup utilizing openssl on RHEL, with a few caveats. I will attempt to provide as much information as possible here. Versions: RHEL 6.7 | OpenSSL 1.0.1e-fips 11 Feb 2013 Caveats/constraints on the script: this script will be utilized to create multiple key sets - by default, one root keypair and cert and two client keypairs and certs per run. User input is requested for file location, filename and passphrase on the client keys. All this was fairly straightforward, and I had a script that would run these commands on user request, and utilized the openssl.cnf file to point back to the root key to sign. I had used sed to change the location pointers in openssl.cnf based on filename originally, and was successfully able to sign the client cert. However, there are two major caveats to this: I was asked to change the script so that it does not dynamically modify the script or other files on each run, meaning openssl.cnf should not be edited on the fly if possible. If this is needed to function, however, then it should be fine. The user needs to be able to run multiple sets of this script ad hoc, especially with regard to the client keypairs (I have the root script and the client generation script separate in a user-selectable menu). That is to say, generating the client keypair requires having a root key to associate with, but can be done multiple times, and the client key script should ask the user with which root key to associate and sign from. Because of these constraints, editing openssl.cnf didn't seem like a prudent or very scalable option. So, given this info, the question I've been unable to figure out is simply: Is there a way to point a client key to a variable which would be the root key cert to sign? (Rather than utilizing openssl.cnf for the 'certificate' and 'private_key' entries?) As of now, I have: root key & cert: openssl req -config $dir/openssl.cnf -new -x509 -days 3652 -nodes -sha384 -newkey ec:ec-secp384r1.pem -keyout $userdir/${rootName}_private.key -out $userdir/${rootName}.crt -subj "stuff_here" ... export rootName client keys & certs: read -p "Which root key do you want to associate this client keypair with? Please type absolute filepath and filename (ending in .key); rkAssoc #STILL NEED TO USE THIS VARIABLE ##KEY GENERATION openssl req -newkey ec:ec-secp384rp1.pem -keyout $userdir/{$clientName}_privat.key -out $userdir/client/${clientName}.csr -subj "Stuff_here" ##SIGN CSR openssl ca -config $dir/openssl.cnf -policy policy_anything -extensions usr_cert -days 730 -notext -md sha384 -in $userdir/client/${clientName}.csr -out $userdir/client/${clientName}_signedprivatekey.pem && echo "Client key created." So I guess, 1) Did I do the client signing correctly (something seems off about it, but I'm not sure)? 2) Instead of referencing openssl.cnf, I presume there is probably some kind of flag where you could do something more like openssl ca ... -cert ${rkAssoc} ; is this remotely correct or am I way off? Thanks in advance to anyone who lends a hand.
|
linux, scripting, openssl, rhel, pki
| 1
| 1,097
| 1
|
https://stackoverflow.com/questions/50493487/openssl-in-rhel-sign-client-cert-with-root-without-utilizing-openssl-cnf
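On question 2: yes, openssl ca accepts per-invocation overrides, so openssl.cnf can stay static. The -cert and -keyfile options shadow the certificate and private_key entries from the config. A sketch reusing the script's own variables (deriving the .crt path from the .key path is an assumption about the naming scheme):

    openssl ca -config "$dir/openssl.cnf" \
        -cert "${rkAssoc%.key}.crt" -keyfile "$rkAssoc" \
        -policy policy_anything -extensions usr_cert -days 730 -notext \
        -md sha384 -in "$userdir/client/${clientName}.csr" \
        -out "$userdir/client/${clientName}.crt"

One side note: the output of openssl ca is the signed certificate, not a private key, so a filename like ..._signedprivatekey.pem is misleading.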
|
45,567,476
|
Bash scripting - if cases $? > 0
|
Sorry for possible spam; I'm finishing an RHEL security hardening/auditing script, where I want an overall result at the end. For example, # PermitEmptyPasswords grep -E '^\s*PermitEmptyPasswords\s+no\s*' /etc/ssh/sshd_config &> /dev/null if [ $? = 0 ]; then echo "[ OK ] PermitEmptyPasswords is properly configured"; else echo "[ ERROR ] PermitEmptyPasswords is not properly configured"; fi Now, my idea for the overall result (Safe/Not safe) is to sum all of these $? cases: if all cases give a sum of 0, it will echo "This system is properly configured by hardening policy"; else it will echo "This system has errors" and reprint all errors where $? is > 0. How do I get this to work? I'm new at scripting, so any help will be appreciated. Thanks in advance.
|
Bash scripting - if cases $? > 0 Sorry for possible spam; I'm finishing an RHEL security hardening/auditing script, where I want an overall result at the end. For example, # PermitEmptyPasswords grep -E '^\s*PermitEmptyPasswords\s+no\s*' /etc/ssh/sshd_config &> /dev/null if [ $? = 0 ]; then echo "[ OK ] PermitEmptyPasswords is properly configured"; else echo "[ ERROR ] PermitEmptyPasswords is not properly configured"; fi Now, my idea for the overall result (Safe/Not safe) is to sum all of these $? cases: if all cases give a sum of 0, it will echo "This system is properly configured by hardening policy"; else it will echo "This system has errors" and reprint all errors where $? is > 0. How do I get this to work? I'm new at scripting, so any help will be appreciated. Thanks in advance.
|
bash, scripting, rhel, if-case
| 1
| 17,190
| 2
|
https://stackoverflow.com/questions/45567476/bash-scripting-if-cases-0
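One running counter plus a collected message list does it; a sketch wrapping each check in a helper so the summary logic lives in one place:

    #!/bin/bash
    errors=0
    failures=()
    check() {   # usage: check <description> <command...>
        if "${@:2}" &>/dev/null; then
            echo "[ OK ] $1 is properly configured"
        else
            echo "[ ERROR ] $1 is not properly configured"
            failures+=("$1")
            ((errors++))
        fi
    }
    check "PermitEmptyPasswords" grep -E '^\s*PermitEmptyPasswords\s+no' /etc/ssh/sshd_config

    if ((errors == 0)); then
        echo "This system is properly configured by hardening policy"
    else
        echo "This system has errors:"
        printf ' - %s\n' "${failures[@]}"
    fi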
|
38,575,798
|
Bash: Read in file, edit line, output to new file
|
I am new to linux and new to scripting. I am working in a linux environment using bash. I need to do the following things: 1. read a txt file line by line 2. delete the first line 3. remove the middle part of each line after the first 4. copy the changes to a new txt file Each line after the first has three sections, the first always ends in .pdf and the third always begins with R0 but the middle section has no consistency. Example of 2 lines in the file: R01234567_High Transcript_01234567.pdf High School Transcript R01234567 R01891023_Application_01891023127.pdf Application R01891023 Here is what I have so far. I'm just reading the file, printing it to screen and copying it to another file. #! /bin/bash cd /usr/local/bin; #echo "list of files:"; #ls; for index in *.txt; do echo "file: ${index}"; echo "reading..." exec<${index} value=0 while read line do #value='expr ${value} +1'; echo ${line}; done echo "read done for ${index}"; cp ${index} /usr/local/bin/test2; echo "file ${index} moved to test2"; done So my question is, how can I delete the middle bit of each line, after .pdf but before the R0...?
|
Bash: Read in file, edit line, output to new file I am new to linux and new to scripting. I am working in a linux environment using bash. I need to do the following things: 1. read a txt file line by line 2. delete the first line 3. remove the middle part of each line after the first 4. copy the changes to a new txt file Each line after the first has three sections, the first always ends in .pdf and the third always begins with R0 but the middle section has no consistency. Example of 2 lines in the file: R01234567_High Transcript_01234567.pdf High School Transcript R01234567 R01891023_Application_01891023127.pdf Application R01891023 Here is what I have so far. I'm just reading the file, printing it to screen and copying it to another file. #! /bin/bash cd /usr/local/bin; #echo "list of files:"; #ls; for index in *.txt; do echo "file: ${index}"; echo "reading..." exec<${index} value=0 while read line do #value='expr ${value} +1'; echo ${line}; done echo "read done for ${index}"; cp ${index} /usr/local/bin/test2; echo "file ${index} moved to test2"; done So my question is, how can I delete the middle bit of each line, after .pdf but before the R0...?
|
linux, bash, shell, rhel
| 1
| 4,915
| 4
|
https://stackoverflow.com/questions/38575798/bash-read-in-file-edit-line-output-to-new-file
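All four steps collapse into one sed pass: 1d drops the header line, and the substitution keeps only the ".pdf" name and the trailing R0... token while discarding the middle section. A sketch, assuming the final R0 token never contains spaces:

    # \1 keeps the ".pdf"; everything between it and the final R0 token
    # is dropped, and \2 keeps that token.
    sed -e '1d' -e 's/\(\.pdf\).*\(R0[0-9]*\)$/\1 \2/' input.txt > output.txt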
|
22,168,639
|
Unable to set just year with Linux date command
|
For some reason I fail to update year only using date command date Tue Mar 4 20:15:42 IST 2014 date '+%Y' -s '2013' date Tue Mar 4 20:13:01 IST 2014 I tried it on both RedHat and Ubuntu... NTP is not running...
|
Unable to set just year with Linux date command For some reason I fail to update year only using date command date Tue Mar 4 20:15:42 IST 2014 date '+%Y' -s '2013' date Tue Mar 4 20:13:01 IST 2014 I tried it on both RedHat and Ubuntu... NTP is not running...
|
linux, bash, shell, ubuntu, rhel
| 1
| 3,782
| 1
|
https://stackoverflow.com/questions/22168639/unable-to-set-just-year-with-linux-date-command
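Two things go wrong at once here: '+%Y' only formats date's output (it never restricts what -s sets), and GNU date parses the bare string 2013 as a time of day, 20:13, which is exactly where the clock lands in the transcript (20:13:01). Setting only the year therefore means rebuilding a full timestamp around the current date (sketch):

    # Keep month/day/time, swap in the wanted year:
    date -s "$(date +'2013-%m-%d %H:%M:%S')"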
|
22,166,614
|
Perl processes disappears after some days
|
I have a Perl file ( test.pl ). It works in a recurring manner. The purpose of the file is to send emails from a DB. Following is the code in test.pl: sub send_mail{ $db->connect(); # Some DB operations # # Send mail # $db->disconnect(); sleep(5); send_mail(); } send_mail(); I am executing 5 instances of this file, like below: perl test.pl >> /var/www/html/emailerrorlog/error1.log 2>&1 & perl test.pl >> /var/www/html/emailerrorlog/error2.log 2>&1 & perl test.pl >> /var/www/html/emailerrorlog/error3.log 2>&1 & perl test.pl >> /var/www/html/emailerrorlog/error4.log 2>&1 & perl test.pl >> /var/www/html/emailerrorlog/error5.log 2>&1 & If I execute the command ps -ef | grep perl | grep -v grep I can see 5 instances of the above-mentioned Perl file. The file works perfectly for some days, but after some days the Perl processes start to disappear one by one, until eventually all of them are gone. Now, if I execute the command ps -ef | grep perl | grep -v grep , I can't see any process, and I can't see any error log in the log files. So, what could be the reasons for the Perl processes disappearing? How can I debug it? Where can I see the Perl error log? It has the same issue on CentOS and Red Hat Linux. Anyone have an idea?
|
Perl processes disappears after some days I have a Perl file ( test.pl ). It works in a recurring manner. The purpose of the file is to send emails from a DB. Following is the code in test.pl: sub send_mail{ $db->connect(); # Some DB operations # # Send mail # $db->disconnect(); sleep(5); send_mail(); } send_mail(); I am executing 5 instances of this file, like below: perl test.pl >> /var/www/html/emailerrorlog/error1.log 2>&1 & perl test.pl >> /var/www/html/emailerrorlog/error2.log 2>&1 & perl test.pl >> /var/www/html/emailerrorlog/error3.log 2>&1 & perl test.pl >> /var/www/html/emailerrorlog/error4.log 2>&1 & perl test.pl >> /var/www/html/emailerrorlog/error5.log 2>&1 & If I execute the command ps -ef | grep perl | grep -v grep I can see 5 instances of the above-mentioned Perl file. The file works perfectly for some days, but after some days the Perl processes start to disappear one by one, until eventually all of them are gone. Now, if I execute the command ps -ef | grep perl | grep -v grep , I can't see any process, and I can't see any error log in the log files. So, what could be the reasons for the Perl processes disappearing? How can I debug it? Where can I see the Perl error log? It has the same issue on CentOS and Red Hat Linux. Anyone have an idea?
|
perl, centos, rhel
| 1
| 98
| 1
|
https://stackoverflow.com/questions/22166614/perl-processes-disappears-after-some-days
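A likely suspect given the code: send_mail() calls itself instead of looping, so the call stack (and memory) grows on every 5-second cycle until the kernel's OOM killer removes the process, which leaves nothing in the application logs. A shell-side sketch to confirm the hypothesis:

    # Watch resident memory climb over time:
    while sleep 60; do ps -C perl -o pid,rss,etime,args >> /tmp/perl_growth.log; done
    # OOM kills are logged by the kernel, not by Perl:
    grep -i 'out of memory\|killed process' /var/log/messages
    dmesg | grep -i 'killed process'

The fix on the Perl side is a plain while (1) { ... sleep 5; } loop in place of the tail call.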
|
63,899,323
|
Docker: Openjdk:14 RHEL based imaged, cannot install yum/wget/netstat
|
In my Dockerfile, I need a Maven builder (3.6 at least) working on an OpenJDK (Java 14 is required). FROM maven:3.6.3-openjdk-14 as builder The problem is simple: I need the netstat command because it is used in several scripts. The official OpenJDK image is RHEL based, so it comes without any of these packages installed. I tried to download netstat, or yum itself, via the wget command but, as you can guess, wget is not installed either. I feel trapped because it seems like you can't install any package on it.
|
Docker: Openjdk:14 RHEL based imaged, cannot install yum/wget/netstat In my Dockerfile, I need a Maven builder (3.6 at least) working on an OpenJDK (Java 14 is required). FROM maven:3.6.3-openjdk-14 as builder The problem is simple: I need the netstat command because it is used in several scripts. The official OpenJDK image is RHEL based, so it comes without any of these packages installed. I tried to download netstat, or yum itself, via the wget command but, as you can guess, wget is not installed either. I feel trapped because it seems like you can't install any package on it.
|
docker, rhel, redhat-openjdk
| 1
| 1,689
| 1
|
https://stackoverflow.com/questions/63899323/docker-openjdk14-rhel-based-imaged-cannot-install-yum-wget-netstat
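A hedged pointer: the openjdk:14 tags are built on Oracle Linux "slim" bases, which drop yum but commonly keep the minimal microdnf package manager, and net-tools is the package that provides netstat. Worth trying in the Dockerfile (assumption: microdnf is present on this particular base):

    FROM maven:3.6.3-openjdk-14 as builder
    RUN microdnf install net-tools && microdnf clean all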
|
58,790,277
|
Get /var/log/mesages for a particular time
|
I want to see the logs for a particular time, i.e., from 10:00 to 13:00. OS : redhat 6 Log : /var/log/messages Timestamp: Nov 10 10:00 to 13:00 I have tried the below command, but no luck: sed -n '/Nov 10 10:00:01/ , /Nov 10 13:30:09/p' /var/log/messages
|
Get /var/log/mesages for a particular time I want to see the logs for a particular time, i.e., from 10:00 to 13:00. OS : redhat 6 Log : /var/log/messages Timestamp: Nov 10 10:00 to 13:00 I have tried the below command, but no luck: sed -n '/Nov 10 10:00:01/ , /Nov 10 13:30:09/p' /var/log/messages
|
linux, sed, rhel
| 1
| 2,984
| 1
|
https://stackoverflow.com/questions/58790277/get-var-log-mesages-for-a-particular-time
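The sed range only works when both boundary timestamps occur verbatim in the file; if no line at exactly 10:00:01 exists, the range never opens. Comparing the time field instead is more forgiving, since syslog's fixed-width HH:MM:SS sorts correctly as a string within one day (sketch):

    awk '$1 == "Nov" && $2 == 10 && $3 >= "10:00:00" && $3 <= "13:00:00"' /var/log/messages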
|
43,650,716
|
Syntax error in Ansible playbook when using variables
|
I have written the following Ansible playbook to transfer files: --- - hosts: webservers vars: appname: myapp repofile: /etc/ansible/packagerepo/scripts/ become: yes tasks: - name: Copy tomcat template file. copy: src: "{{ repofile }}"/tomcat_template.sh dest: /apps/bin/tomcat_template.sh - name: Copy App template file copy: src: "{{ repofile }}"/app_template dest: /etc/init.d/app_template But it's giving the following error when using Ansible variables. If we do not use variables, it works absolutely fine. The offending line appears to be: #src: /etc/ansible/packagerepo/scripts/tomcat_template.sh src: "{{ repofile }}"/tomcat_template.sh ^ here We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. For instance: with_items: - {{ foo }} Should be written as: with_items: - "{{ foo }}" Please suggest.
|
Syntax error in Ansible playbook when using variables I have written the following Ansible playbook to transfer files: --- - hosts: webservers vars: appname: myapp repofile: /etc/ansible/packagerepo/scripts/ become: yes tasks: - name: Copy tomcat template file. copy: src: "{{ repofile }}"/tomcat_template.sh dest: /apps/bin/tomcat_template.sh - name: Copy App template file copy: src: "{{ repofile }}"/app_template dest: /etc/init.d/app_template But it's giving the following error when using Ansible variables. If we do not use variables, it works absolutely fine. The offending line appears to be: #src: /etc/ansible/packagerepo/scripts/tomcat_template.sh src: "{{ repofile }}"/tomcat_template.sh ^ here We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. For instance: with_items: - {{ foo }} Should be written as: with_items: - "{{ foo }}" Please suggest.
|
ansible, rhel
| 1
| 971
| 1
|
https://stackoverflow.com/questions/43650716/syntax-error-in-ansible-playbook-when-using-variables
|
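The interpreter's hint above points at the actual problem in the question: the value starts with a quoted Jinja2 expression and then continues after the closing quote, so the whole value must be quoted instead. A corrected sketch of one task (note that repofile already ends in a slash, so no extra separator is needed):
- name: Copy tomcat template file.
  copy:
    src: "{{ repofile }}tomcat_template.sh"
    dest: /apps/bin/tomcat_template.sh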
34,742,635
|
How to use amqsput command in shell script to push data into a queue?
|
I am using the following command to push some data into a message queue. amqsput QUEUE_NAME QUEUE_MANAGER_NAME Then, once the console is available, I push my required data (just as shown in the screenshot; I will copy my data now). Since there is a wait involved, I first need to make sure that it has connected with the queue through that queue manager, and then I push my data, which works successfully. How can I do it through a shell script? Update: In a shell script I can try the following #!/bin/ksh /opt/mqm/samp/bin/amqsput QUEUE_NAME QUEUE_MANAGER_NAME < /filepath/data.txt But I can't push a string; after < it expects a file. Any help?
|
How to use amqsput command in shell script to push data into a queue? I am using the following command to push some data into a message queue. amqsput QUEUE_NAME QUEUE_MANAGER_NAME Then, once the console is available, I push my required data (just as shown in the screenshot; I will copy my data now). Since there is a wait involved, I first need to make sure that it has connected with the queue through that queue manager, and then I push my data, which works successfully. How can I do it through a shell script? Update: In a shell script I can try the following #!/bin/ksh /opt/mqm/samp/bin/amqsput QUEUE_NAME QUEUE_MANAGER_NAME < /filepath/data.txt But I can't push a string; after < it expects a file. Any help?
|
shell, ibm-mq, rhel
| 1
| 15,280
| 2
|
https://stackoverflow.com/questions/34742635/how-to-use-amqsput-command-in-shell-script-to-push-data-into-a-queue
|
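On pushing a string instead of a file in the question above: amqsput reads its messages from stdin, so a pipe or here-document works. A minimal ksh sketch, reusing the path from the question:
#!/bin/ksh
# one message from a literal string
echo "my message text" | /opt/mqm/samp/bin/amqsput QUEUE_NAME QUEUE_MANAGER_NAME
# or several messages (one per line) from a here-document
/opt/mqm/samp/bin/amqsput QUEUE_NAME QUEUE_MANAGER_NAME <<EOF
first message
second message
EOF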
23,284,978
|
How to add exception to iptables?
|
I'm working on a RHEL server and I need to add an exception in the firewall, so that port 3000 is always accessible ... How can I do this? Thanks @ Rahul R Dhobi : -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT I have added this after my other rules, but I only get an error, iptables: Applying firewall rules: iptables-restore: line 13 failed [FAILED] when restarting the service. As I'm really new to Linux/iptables, I can't really tell if it's a syntax error or something else. I also tried -A INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT Now I don't get an error anymore, but I still can't access through port 3000. # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited -A INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT COMMIT
|
How to add exception to iptables? I'm working on a RHEL server and I need to add an exception in the firewall, so that port 3000 is always accessible ... How can I do this? Thanks @ Rahul R Dhobi : -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT I have added this after my other rules, but I only get an error, iptables: Applying firewall rules: iptables-restore: line 13 failed [FAILED] when restarting the service. As I'm really new to Linux/iptables, I can't really tell if it's a syntax error or something else. I also tried -A INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT Now I don't get an error anymore, but I still can't access through port 3000. # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited -A INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT COMMIT
|
linux, unix, iptables, rhel
| 1
| 11,447
| 1
|
https://stackoverflow.com/questions/23284978/how-to-add-exception-to-iptables
|
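On the ruleset above: iptables evaluates rules top to bottom, and the ACCEPT for port 3000 was appended after the catch-all REJECT, so it can never match. A sketch of the corrected order in the config file (then restart the service):
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
On a live system, iptables -I INPUT (insert at the top) rather than -A (append) achieves the same placement before the REJECT rule.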
22,120,207
|
Portable packaging technology
|
I am looking for a good portable technology that would allow for a unified way to package software that runs on five different platforms. The platforms are Solaris10/SPARC, Solaris10/X86, Solaris11/SPARC, Solaris11/X86, and RHEL6.4/X86. About 95% of the software is portable Java applications but there is a small amount of middleware that is platform-specific. Instead of building five different distributable images that are 95% identical (one for every platform), I want to produce a single universal image for all five. If I follow this route, it makes sense to unify the packaging format to have identical look and feel to the installation process and to keep everything neatly arranged in the distributable image. Oh, and different software components in the image are individual packages, much like a distro comprised of a bunch of RPMs. In a search for a portable packaging mechanism I looked at RPM, which is a pain to build for Solaris (and I need four different builds). I also read up on dpkg and a few other exotic packaging and installation suites. None of them seems to be as portable as I would like it to be. I wish there was a Python version of RPM that does not require anything but a Python interpreter, but I could not find anything of this kind. I would really appreciate any hints as to what existing freeware and portable packaging and installation solutions there are. I can always design and build my own but I would rather rely on somebody else who went this route before me. Thank you!
|
Portable packaging technology I am looking for a good portable technology that would allow for a unified way to package software that runs on five different platforms. The platforms are Solaris10/SPARC, Solaris10/X86, Solaris11/SPARC, Solaris11/X86, and RHEL6.4/X86. About 95% of the software is portable Java applications but there is a small amount of middleware that is platform-specific. Instead of building five different distributable images that are 95% identical (one for every platform), I want to produce a single universal image for all five. If I follow this route, it makes sense to unify the packaging format to have identical look and feel to the installation process and to keep everything neatly arranged in the distributable image. Oh, and different software components in the image are individual packages, much like a distro comprised of a bunch of RPMs. In a search for a portable packaging mechanism I looked at RPM, which is a pain to build for Solaris (and I need four different builds). I also read up on dpkg and a few other exotic packaging and installation suites. None of them seems to be as portable as I would like it to be. I wish there was a Python version of RPM that does not require anything but a Python interpreter, but I could not find anything of this kind. I would really appreciate any hints as to what existing freeware and portable packaging and installation solutions there are. I can always design and build my own but I would rather rely on somebody else who went this route before me. Thank you!
|
solaris, portability, rpm, rhel, package-managers
| 1
| 84
| 3
|
https://stackoverflow.com/questions/22120207/portable-packaging-technology
|
17,241,236
|
Writing a Sys::Getpagesize Perl module for system call getpagesize (man page GETPAGESIZE(2))
|
I have been tasked to write a Perl module that requires that I use Perl XS. I have never used Perl XS, but I have looked at the documentation here: [URL] and it looks like I will need the C source code for the getpagesize system call. I tried looking for getpagesize.c via yum like so ... $ sudo yum provides */getpagesize.c ... but I do not seem to have any RPMs that would provide getpagesize.c . Can anyone out there help me out of the ditch I seem to have driven into? Thanks
|
Writing a Sys::Getpagesize Perl module for system call getpagesize (man page GETPAGESIZE(2)) I have been tasked to write a Perl module that requires that I use Perl XS. I have never used Perl XS, but I have looked at the documentation here: [URL] and it looks like I will need the C source code for the getpagesize system call. I tried looking for getpagesize.c via yum like so ... $ sudo yum provides */getpagesize.c ... but I do not seem to have any RPMs that would provide getpagesize.c . Can anyone out there help me out of the ditch I seem to have driven into? Thanks
|
perl, system-calls, rhel, perl-xs
| 1
| 296
| 2
|
https://stackoverflow.com/questions/17241236/writing-a-sysgetpagesize-perl-module-for-system-call-getpagesize-man-page-get
|
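On the kernel-source hunt above: no C source file is needed, since getpagesize(2) is exposed by glibc through unistd.h and XS can call it directly. A minimal sketch of a hypothetical Getpagesize.xs (an XSUB declared with no body makes xsubpp generate a call to the C function of the same name):
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include <unistd.h>   /* declares getpagesize() */

MODULE = Sys::Getpagesize   PACKAGE = Sys::Getpagesize

int
getpagesize()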
16,351,595
|
Kernel source rpm CentOS 6.3
|
I have CentOS 6.3 installed. cat /etc/redhat-release CentOS release 6.3 (Final) uname -mrs Linux 2.6.32-279.el6.x86_64 x86_64 I am following the steps outlined in the following links to fetch and build the kernel (to enable certain features): [URL] [URL] In the CentOS 6.3 vault ( [URL] ), the following kernel source RPMs are listed: kernel-2.6.32-279.1.1.el6.src.rpm kernel-2.6.32-279.2.1.el6.src.rpm kernel-2.6.32-279.5.1.el6.src.rpm kernel-2.6.32-279.5.2.el6.src.rpm kernel-2.6.32-279.9.1.el6.src.rpm kernel-2.6.32-279.11.1.el6.src.rpm kernel-2.6.32-279.14.1.el6.src.rpm kernel-2.6.32-279.19.1.el6.src.rpm kernel-2.6.32-279.22.1.el6.src.rpm I am trying to find out which of the above source RPMs corresponds to kernel version 2.6.32-279.el6 (the output of "uname -r", which does not contain the additional 2 digits of the version numbers in the list above). Any help would be much appreciated. Thank you, Ahmed.
|
Kernel source rpm CentOS 6.3 I have CentOS 6.3 installed. cat /etc/redhat-release CentOS release 6.3 (Final) uname -mrs Linux 2.6.32-279.el6.x86_64 x86_64 I am following the steps outlined in the following links to fetch and build the kernel (to enable certain features): [URL] [URL] In the CentOS 6.3 vault ( [URL] ), the following kernel source RPMs are listed: kernel-2.6.32-279.1.1.el6.src.rpm kernel-2.6.32-279.2.1.el6.src.rpm kernel-2.6.32-279.5.1.el6.src.rpm kernel-2.6.32-279.5.2.el6.src.rpm kernel-2.6.32-279.9.1.el6.src.rpm kernel-2.6.32-279.11.1.el6.src.rpm kernel-2.6.32-279.14.1.el6.src.rpm kernel-2.6.32-279.19.1.el6.src.rpm kernel-2.6.32-279.22.1.el6.src.rpm I am trying to find out which of the above source RPMs corresponds to kernel version 2.6.32-279.el6 (the output of "uname -r", which does not contain the additional 2 digits of the version numbers in the list above). Any help would be much appreciated. Thank you, Ahmed.
|
centos, rhel, centos6
| 1
| 5,807
| 1
|
https://stackoverflow.com/questions/16351595/kernel-source-rpm-centos-6-3
|
8,376,883
|
crontab entry for individual user
|
On my RHEL5 box, I have so far set up cron jobs by placing entries in the /etc/crontab file, which, for safety reasons, is only editable by root. Are there other ways to set up cron jobs for individual users? Preferably, I would like each user to have their own cron file that they can edit at will without requiring root privileges. Can this be done?
|
crontab entry for individual user On my RHEL5 box, I have so far set up cron jobs by placing entries in the /etc/crontab file, which, for safety reasons, is only editable by root. Are there other ways to set up cron jobs for individual users? Preferably, I would like each user to have their own cron file that they can edit at will without requiring root privileges. Can this be done?
|
linux, cron, administration, rhel
| 1
| 1,596
| 2
|
https://stackoverflow.com/questions/8376883/crontab-entry-for-individual-user
|
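Yes — per-user crontabs are built into cron: each user edits their own (stored under /var/spool/cron) with the crontab command, no root needed. A quick sketch:
crontab -e    # edit the invoking user's crontab
crontab -l    # list it
crontab -r    # remove it
Access can be restricted via /etc/cron.allow and /etc/cron.deny if only some users should have this ability.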
73,242,249
|
How to install static libraries (eg libstdc++, libm, libc) on AWS official Rocky Linux?
|
Rocky Linux is a free distribution that repackages each release of RHEL (Red Hat Enterprise Linux). It is what CentOS used to be. On AWS there are Official releases of Rocky Linux 8 Green Obsidian (currently 8.6 = RHEL 8.6) and Rocky Linux 9 Blue Onyx (currently 9.0 = RHEL 9.0). I am using g++ (gcc). On Rocky Linux 8.6: g++ (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10) On Rocky Linux 9.0: g++ (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) Building with dynamic linking works fine. Making a statically linked build works fine elsewhere (e.g. Ubuntu). But it seems some static libraries are missing on the Rocky Linux platforms (8 or 9), which leads to error messages when trying to build with -static linking. /usr/bin/ld: cannot find -lstdc++ /usr/bin/ld: cannot find -lm /usr/bin/ld: cannot find -lc Looking on the whole system for any lib*.a for static linking, I do find /usr/lib/gcc/x86_64-redhat-linux/8/32/libstdc++.a However, I believe that is for "32" bit builds, not 64. I do find libm.so and libc.so for dynamic linking, but there are no libm.a or libc.a libraries for static linking. Using yum, I don't find any packages that are or that provide libstdc++-static. gcc.x86_64 and glibc-devel.x86_64 are already installed. What is needed to get the necessary static libraries for a static build? Thanks in advance!
|
How to install static libraries (eg libstdc++, libm, libc) on AWS official Rocky Linux? Rocky Linux is a free distribution that repackages each release of RHEL (Red Hat Enterprise Linux). It is what CentOS used to be. On AWS there are Official releases of Rocky Linux 8 Green Obsidian (currently 8.6 = RHEL 8.6) and Rocky Linux 9 Blue Onyx (currently 9.0 = RHEL 9.0). I am using g++ (gcc). On Rocky Linux 8.6: g++ (GCC) 8.5.0 20210514 (Red Hat 8.5.0-10) On Rocky Linux 9.0: g++ (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9) Building with dynamic linking works fine. Making a statically linked build works fine elsewhere (e.g. Ubuntu). But it seems some static libraries are missing on the Rocky Linux platforms (8 or 9), which leads to error messages when trying to build with -static linking. /usr/bin/ld: cannot find -lstdc++ /usr/bin/ld: cannot find -lm /usr/bin/ld: cannot find -lc Looking on the whole system for any lib*.a for static linking, I do find /usr/lib/gcc/x86_64-redhat-linux/8/32/libstdc++.a However, I believe that is for "32" bit builds, not 64. I do find libm.so and libc.so for dynamic linking, but there are no libm.a or libc.a libraries for static linking. Using yum, I don't find any packages that are or that provide libstdc++-static. gcc.x86_64 and glibc-devel.x86_64 are already installed. What is needed to get the necessary static libraries for a static build? Thanks in advance!
|
linux, gcc, amazon-ec2, static-libraries, rhel
| 1
| 10,533
| 2
|
https://stackoverflow.com/questions/73242249/how-to-install-static-libraries-eg-libstdc-libm-libc-on-aws-official-rocky
|
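On the missing archives above: on RHEL-family distributions the static C/C++ libraries ship in separate packages that live in repos disabled by default (PowerTools on the 8 line, CRB on 9). A sketch, with package names assumed from the RHEL packaging:
# Rocky Linux 8
sudo dnf --enablerepo=powertools install glibc-static libstdc++-static
# Rocky Linux 9
sudo dnf --enablerepo=crb install glibc-static libstdc++-static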
67,114,447
|
java -version giving error in Red Hat Enterprise Linux release 8.3
|
I have installed jdk1.6 on my RHEL 8.3 OS, and changed JAVA_HOME to point to this folder. However, if I do a java -version, it shows the below error: Error occurred during initialization of VM Unable to load native library: libnsl.so.1: cannot open shared object file: No such file or directory I have restarted the PuTTY session but the issue still persists. Any help is appreciated.
|
java -version giving error in Red Hat Enterprise Linux release 8.3 I have installed jdk1.6 on my RHEL 8.3 OS, and changed JAVA_HOME to point to this folder. However, if I do a java -version, it shows the below error: Error occurred during initialization of VM Unable to load native library: libnsl.so.1: cannot open shared object file: No such file or directory I have restarted the PuTTY session but the issue still persists. Any help is appreciated.
|
java, linux, unix, rhel, java-6
| 1
| 4,865
| 1
|
https://stackoverflow.com/questions/67114447/java-version-giving-error-in-red-hat-enterprise-linux-release-8-3
|
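On the error above: libnsl.so.1 was split out of the default glibc installation in RHEL 8, and old JDKs such as 1.6 still link against it. Installing the compatibility package should satisfy it; a sketch:
sudo yum install libnsl         # 64-bit
sudo yum install libnsl.i686    # additionally, if the JDK build is 32-bit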
66,172,076
|
What is the difference between UBI and Atomic Base Image
|
My company is a RHEL customer, and I need base container images for: Portable CI/CD build environments Running services with minimal dependencies (Go, Java, Python) Looking at these 2 Dockerfiles, they are identical. Is there any difference in the support model or maintenance? [URL] [URL]
|
What is the difference between UBI and Atomic Base Image My company is a RHEL customer, and I need base container images for: Portable CI/CD build environments Running services with minimal dependencies (Go, Java, Python) Looking at these 2 Dockerfiles, they are identical. Is there any difference in the support model or maintenance? [URL] [URL]
|
rhel, container-image
| 1
| 2,849
| 1
|
https://stackoverflow.com/questions/66172076/what-is-the-difference-between-ubi-and-atomic-base-image
|
62,724,133
|
Do AWS charge licence fee for an image based on RHEL AMI
|
The problem is I bake an AMI with Packer and I use RHEL as a source. When the image is being baked, AWS charges for the Packer VM on a pay-as-you-go basis, but when that image is ready and I start using it, does AWS know that it’s based on RHEL? Do they still charge a licence fee for it?
|
Do AWS charge licence fee for an image based on RHEL AMI The problem is I bake an AMI with Packer and I use RHEL as a source. When the image is being baked, AWS charges for the Packer VM on a pay-as-you-go basis, but when that image is ready and I start using it, does AWS know that it’s based on RHEL? Do they still charge a licence fee for it?
|
amazon-web-services, rhel, packer
| 1
| 651
| 1
|
https://stackoverflow.com/questions/62724133/do-aws-charge-licence-fee-for-an-image-based-on-rhel-ami
|
61,789,762
|
How to correctly use telnet in a bash script?
|
I am running the following command to test port connectivity curl -v telnet://target ip address:desired port number When the server connects successfully I see output as below: # curl -v telnet://127.0.0.1:22 * About to connect() to 127.0.0.1 port 22 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 22 (#0) When the server doesn't connect successfully I see output as below: # curl -v telnet://127.0.0.1:22 * About to connect() to 127.0.0.1 port 22 (#0) * Trying 127.0.0.1... Now, for a given list of servers, I am trying to automate it using a bash script. for element in "${array[@]}"; do timeout 2s curl -v telnet://"$element":22 >/dev/null 2>&1 if [ $? -eq 0 ]; then echo "'$element' connected" && break else echo "Connection with $element failed." fi done The array has values: abc001 abc002 abc003 I am always getting output from within the else statement Connection with abc001 failed. Connection with abc002 failed. Connection with abc003 failed. I think it is because the return code is always 124; the exit code is 124 for both success and failure. How can I modify my script to work correctly?
|
How to correctly use telnet in a bash script? I am running the following command to test port connectivity curl -v telnet://target ip address:desired port number When the server connects successfully I see output as below: # curl -v telnet://127.0.0.1:22 * About to connect() to 127.0.0.1 port 22 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 22 (#0) When the server doesn't connect successfully I see output as below: # curl -v telnet://127.0.0.1:22 * About to connect() to 127.0.0.1 port 22 (#0) * Trying 127.0.0.1... Now, for a given list of servers, I am trying to automate it using a bash script. for element in "${array[@]}"; do timeout 2s curl -v telnet://"$element":22 >/dev/null 2>&1 if [ $? -eq 0 ]; then echo "'$element' connected" && break else echo "Connection with $element failed." fi done The array has values: abc001 abc002 abc003 I am always getting output from within the else statement Connection with abc001 failed. Connection with abc002 failed. Connection with abc003 failed. I think it is because the return code is always 124; the exit code is 124 for both success and failure. How can I modify my script to work correctly?
|
linux, bash, curl, telnet, rhel
| 1
| 8,705
| 2
|
https://stackoverflow.com/questions/61789762/how-to-correctly-use-telnet-in-a-bash-script
|
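On the exit code above: 124 is what timeout returns when it has to kill the command, and curl's telnet session never exits by itself even on a successful connect, so timeout always kills it. Testing the TCP connect directly sidesteps this; a sketch using bash's /dev/tcp:
for element in "${array[@]}"; do
    # the subshell exits as soon as the connect succeeds or fails
    if timeout 2 bash -c "exec 3<>/dev/tcp/$element/22" 2>/dev/null; then
        echo "'$element' connected" && break
    else
        echo "Connection with $element failed."
    fi
done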
58,507,895
|
C program getenv() returns null despite terminal echo working
|
I have read all the other posts with the same title, but I'm not on an embedded system, and I have my includes and environment variable set correctly. Running on RHEL 7.5, my program with getenv wasn't working so I created a C application with the simple functionality of printing the JAVA_HOME environment variable. It returns null from the C program but the environment variable is set (has been set permanently) and reads fine in my putty terminal. This is exactly what I'm running (just the paths shortened): C: #include <stdio.h> #include <stdlib.h> int main () { printf("JAVA HOME : %s\n", getenv("JAVA_HOME")); return(0); } makefile: CC=gcc CFLAGS=-c -Wall -I/path/to/includes BIN=/path/to/bin INCLUDE=/path/to/includes default : $(BIN)/testEnv testEnv.o : testEnv.c $(CC) $(CFLAGS) testEnv.c #------------ Make testEnv------------------- $(BIN)/testEnv: testEnv.o $(CC) -o $@ \ testEnv.o Terminal: >bin/testEnv JAVA HOME : (null) >echo $JAVA_HOME /path/to/java Does anyone have anything new I should check or know what the issue could be? Thanks.
|
C program getenv() returns null despite terminal echo working I have read all the other posts with the same title, but I'm not on an embedded system, and I have my includes and environment variable set correctly. Running on RHEL 7.5, my program with getenv wasn't working so I created a C application with the simple functionality of printing the JAVA_HOME environment variable. It returns null from the C program but the environment variable is set (has been set permanently) and reads fine in my putty terminal. This is exactly what I'm running (just the paths shortened): C: #include <stdio.h> #include <stdlib.h> int main () { printf("JAVA HOME : %s\n", getenv("JAVA_HOME")); return(0); } makefile: CC=gcc CFLAGS=-c -Wall -I/path/to/includes BIN=/path/to/bin INCLUDE=/path/to/includes default : $(BIN)/testEnv testEnv.o : testEnv.c $(CC) $(CFLAGS) testEnv.c #------------ Make testEnv------------------- $(BIN)/testEnv: testEnv.o $(CC) -o $@ \ testEnv.o Terminal: >bin/testEnv JAVA HOME : (null) >echo $JAVA_HOME /path/to/java Does anyone have anything new I should check or know what the issue could be? Thanks.
|
c, linux, rhel, getenv
| 1
| 701
| 1
|
https://stackoverflow.com/questions/58507895/c-program-getenv-returns-null-despite-terminal-echo-working
|
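A common cause of the symptom above: the variable is set in the shell (so echo $JAVA_HOME works) but not exported, so child processes such as the C program never see it. A quick check and fix:
env | grep JAVA_HOME    # prints nothing if the variable is not exported
export JAVA_HOME        # export the already-set value
bin/testEnv             # should now print the path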
54,184,191
|
Ansible provisioning without internet access
|
I know that you can set up a proxy in Ansible to provision behind a corporate network: [URL] like this: environment: http_proxy: [URL] Unfortunately, in my case there is no access to the internet from the server at all. Downloading roles locally and putting them under the /roles folder seems to solve the role issue, but roles still download packages from the internet when using: package: name: package-name state: present I guess there is no way to make a dry/pre run so that Ansible downloads all the packages, then push those into a repo and run the Ansible provision using locally downloaded packages?
|
Ansible provisioning without internet access I know that you can set up a proxy in Ansible to provision behind a corporate network: [URL] like this: environment: http_proxy: [URL] Unfortunately, in my case there is no access to the internet from the server at all. Downloading roles locally and putting them under the /roles folder seems to solve the role issue, but roles still download packages from the internet when using: package: name: package-name state: present I guess there is no way to make a dry/pre run so that Ansible downloads all the packages, then push those into a repo and run the Ansible provision using locally downloaded packages?
|
ansible, rhel, ansible-role
| 1
| 3,227
| 3
|
https://stackoverflow.com/questions/54184191/ansible-provisioning-without-internet-access
|
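One hedged approach to the air-gapped case above: mirror the required packages on a machine that does have internet access, publish the mirror as a yum repo reachable by the target, and have Ansible drop a .repo file before its package tasks. A rough sketch (the repo id and paths are placeholders):
# on an internet-connected machine
reposync --repoid=rhel-7-server-rpms -p /srv/mirror
createrepo /srv/mirror
# copy /srv/mirror to the target network, serve it over HTTP or a file path,
# and point a yum .repo file at it, e.g. baseurl=file:///srv/mirror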
53,923,411
|
Permanently set environment variables for Multiple users
|
The Java build machine is on Red Hat 7.5 and runs mvn clean package deploy for building Java code. Multiple users log in with their usernames to this build machine to build Java code. To permanently set JAVA_HOME & update the PATH environment for all users, updating /etc/profile needs sudo source /etc/profile for every login, by every user. All users are part of the wheel group Edit: Using sudo on an internal command is invalid How to permanently set these variables for every login?
|
Permanently set environment variables for Multiple users The Java build machine is on Red Hat 7.5 and runs mvn clean package deploy for building Java code. Multiple users log in with their usernames to this build machine to build Java code. To permanently set JAVA_HOME & update the PATH environment for all users, updating /etc/profile needs sudo source /etc/profile for every login, by every user. All users are part of the wheel group Edit: Using sudo on an internal command is invalid How to permanently set these variables for every login?
|
java, bash, shell, rhel, user-management
| 1
| 2,964
| 2
|
https://stackoverflow.com/questions/53923411/permanently-set-environment-variables-for-multiple-users
|
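On the sourcing problem above: sudo cannot run source because it is a shell builtin, but none of that is needed — login shells automatically read every *.sh file in /etc/profile.d/, so a drop-in created once by root covers all users on every login. A sketch (the install path is an assumption):
cat <<'EOF' | sudo tee /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export PATH=$JAVA_HOME/bin:$PATH
EOF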
52,551,340
|
c++ static variable initialization problem - referencing on another static const
|
I tried to declare two static variables in two different .cpp files; one tries to use the other during initialization (e.g. Class B -> Class A). The code compiles if I have a main.cpp which includes a.h and b.h. It crashed during run time (Segmentation fault (core dumped)). I understand it is a problem with static variable initialization: static variable A::a might not yet be initialized during the initialization of static object b. May I ask what is the proper way to resolve this kind of problem, either by changing the way I code or through some design pattern? I have seen some posts saying to use "constexpr" to force A::a initialization during compile time, but I got stuck on a syntax error. static constexpr std::string a; // in a.h constexpr std::string A::a="AAA"; // in a.cpp errors: a.h:7:34: error: constexpr static data member ‘a’ must have an initializer static constexpr std::string a; a.cpp:4:26: error: redeclaration ‘A::a’ differs in ‘constexpr’ constexpr std::string A::a="AAA"; THE FULL CODE is below: a.h #include <string> using namespace std; class A { public: static const std::string a; A(); ~A(); }; a.cpp #include "a.h" using namespace std; const std::string A::a("AAA"); A::A(){}; A::~A(){}; b.h #include <string> using namespace std; class B { public: B(const std::string& a ); ~B(); }; b.cpp #include "b.h" #include "a.h" #include <iostream> static const B b(A::a); B::B(const std::string& s){ cout <<"B obj::" << s << endl; }; B::~B(){}; I have thought of creating a global getter function getA() { static std::string A::a; //hope that would force A::a initialization return A::a; } and then static B b(getA()) but it looks ugly...
|
c++ static variable initialization problem - referencing on another static const I tried to declare two static variables in two different .cpp files; one tries to use the other during initialization (e.g. Class B -> Class A). The code compiles if I have a main.cpp which includes a.h and b.h. It crashed during run time (Segmentation fault (core dumped)). I understand it is a problem with static variable initialization: static variable A::a might not yet be initialized during the initialization of static object b. May I ask what is the proper way to resolve this kind of problem, either by changing the way I code or through some design pattern? I have seen some posts saying to use "constexpr" to force A::a initialization during compile time, but I got stuck on a syntax error. static constexpr std::string a; // in a.h constexpr std::string A::a="AAA"; // in a.cpp errors: a.h:7:34: error: constexpr static data member ‘a’ must have an initializer static constexpr std::string a; a.cpp:4:26: error: redeclaration ‘A::a’ differs in ‘constexpr’ constexpr std::string A::a="AAA"; THE FULL CODE is below: a.h #include <string> using namespace std; class A { public: static const std::string a; A(); ~A(); }; a.cpp #include "a.h" using namespace std; const std::string A::a("AAA"); A::A(){}; A::~A(){}; b.h #include <string> using namespace std; class B { public: B(const std::string& a ); ~B(); }; b.cpp #include "b.h" #include "a.h" #include <iostream> static const B b(A::a); B::B(const std::string& s){ cout <<"B obj::" << s << endl; }; B::~B(){}; I have thought of creating a global getter function getA() { static std::string A::a; //hope that would force A::a initialization return A::a; } and then static B b(getA()) but it looks ugly...
|
c++, linux, g++, rhel
| 1
| 612
| 1
|
https://stackoverflow.com/questions/52551340/c-static-variable-initialization-problem-referencing-on-another-static-const
|
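The standard cure for the initialization-order problem above is the construct-on-first-use idiom the getter was groping towards: replace the namespace-scope constant with an accessor returning a function-local static, which the language guarantees is initialized the first time the function runs. A sketch:
// a.h
const std::string& getA();

// a.cpp
const std::string& getA() {
    static const std::string a("AAA");   // built on first call, before first use
    return a;
}

// b.cpp
static const B b(getA());   // safe regardless of translation-unit order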
49,102,287
|
Building application for multiple version of RedHat
|
Assume that in my company we have the same application written in C++, running on machines with RHEL5, 6 and 7. I want to build from one single build server (which is running RHEL7) to get an executable that runs on the older versions of RHEL. May I know if this is achievable? I expect that if I build on RHEL7 with the corresponding versions of gcc and glibc (and other libs) available on RHEL5, the resulting executable should run on RHEL5. Is my understanding correct? Or are there more things to pay attention to?
|
Building application for multiple version of RedHat Assume that in my company we have the same application written in C++, running on machines with RHEL5, 6 and 7. I want to build from one single build server (which is running RHEL7) to get an executable that runs on the older versions of RHEL. May I know if this is achievable? I expect that if I build on RHEL7 with the corresponding versions of gcc and glibc (and other libs) available on RHEL5, the resulting executable should run on RHEL5. Is my understanding correct? Or are there more things to pay attention to?
|
c++, build, rhel
| 1
| 1,062
| 2
|
https://stackoverflow.com/questions/49102287/building-application-for-multiple-version-of-redhat
|
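A caution on the expectation above: glibc compatibility runs the other way — binaries built against an old glibc run on newer ones, but a RHEL7-built binary will typically demand symbol versions that RHEL5's glibc lacks. Building on the oldest target (or in a RHEL5 chroot/container) is the usual practice. Which glibc versions a binary actually requires can be listed with:
objdump -T myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu
The highest version printed must exist on the oldest machine the binary should run on.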
48,451,695
|
Can you upgrade postgresql-server directly from 9.2 to 9.6?
|
Can you upgrade postgresql-server directly from 9.2 to 9.6? (Running on RHEL 7)
|
Can you upgrade postgresql-server directly from 9.2 to 9.6? Can you upgrade postgresql-server directly from 9.2 to 9.6? (Running on RHEL 7)
|
postgresql, upgrade, rhel
| 1
| 1,987
| 1
|
https://stackoverflow.com/questions/48451695/can-you-upgrade-postresql-server-directly-from-9-2-to-9-6
|
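Yes — pg_upgrade can jump major versions directly, so 9.2 to 9.6 works without stepping through 9.3/9.4/9.5. A sketch with both versions installed side by side (paths assume the PGDG packages on RHEL 7; run as postgres with both clusters stopped):
/usr/pgsql-9.6/bin/pg_upgrade \
    -b /usr/pgsql-9.2/bin -B /usr/pgsql-9.6/bin \
    -d /var/lib/pgsql/9.2/data -D /var/lib/pgsql/9.6/data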
45,876,462
|
Firewall error in RHEL 7.2: "No chain/target/match by that name."
|
I am trying to configure NRPE 3.2.0 and it uses port 5666 to run remote scripts. When I try to execute the command below, I get this error. firewall-cmd --zone=public --add-port=5666/tcp Error: COMMAND_FAILED: '/sbin/iptables -w2 -A IN_public_allow -t filter -m tcp -p tcp --dport 5666 -m conntrack --ctstate NEW -j ACCEPT' failed: iptables: No chain/target/match by that name. Failed to apply rules. A firewall reload might solve the issue if the firewall has been modified using ip*tables or ebtables. I understand that the chain I am trying to append to does not exist, but I don't have a clue about what this chain is or how to create it.
|
Firewall error in RHEL 7.2: "No chain/target/match by that name." I am trying to configure NRPE 3.2.0 and it uses port 5666 to run remote scripts. When I try to execute the command below, I get this error. firewall-cmd --zone=public --add-port=5666/tcp Error: COMMAND_FAILED: '/sbin/iptables -w2 -A IN_public_allow -t filter -m tcp -p tcp --dport 5666 -m conntrack --ctstate NEW -j ACCEPT' failed: iptables: No chain/target/match by that name. Failed to apply rules. A firewall reload might solve the issue if the firewall has been modified using ip*tables or ebtables. I understand that the chain I am trying to append to does not exist, but I don't have a clue about what this chain is or how to create it.
|
firewall, iptables, rhel, nagios, nrpe
| 1
| 2,845
| 1
|
https://stackoverflow.com/questions/45876462/firewall-error-in-rhel-7-2-no-chain-target-match-by-that-name
|
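As the error text above itself suggests, the firewalld-managed chains (such as IN_public_allow) have likely been wiped by a direct iptables restart or flush; reloading firewalld recreates them. A sketch:
firewall-cmd --reload
firewall-cmd --permanent --zone=public --add-port=5666/tcp
firewall-cmd --reload   # apply the permanent rule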
45,237,682
|
How to run multiple, distinct Fortran scripts in parallel on RHEL 6.9
|
Let's say I have N Fortran executables and M cores on my machine, where N is greater than M. I want to be able to run these executables in parallel. I am using RHEL 6.9. I have used both OpenMP and GNU Parallel in the past to run code in parallel. However, for my current purposes, neither of these two options would work: RHEL doesn't have a GNU Parallel distribution, and OpenMP applies to parallelizing blocks within a single executable, not multiple executables. What is the best way to run these N executables in parallel? Would a simple approach like executable_1 & executable_2 & ... & executable_N work?
|
How to run multiple, distinct Fortran scripts in parallel on RHEL 6.9 Let's say I have N Fortran executables and M cores on my machine, where N is greater than M. I want to be able to run these executables in parallel. I am using RHEL 6.9. I have used both OpenMP and GNU Parallel in the past to run code in parallel. However, for my current purposes, neither of these two options would work: RHEL doesn't have a GNU Parallel distribution, and OpenMP applies to parallelizing blocks within a single executable, not multiple executables. What is the best way to run these N executables in parallel? Would a simple approach like executable_1 & executable_2 & ... & executable_N work?
|
shell, parallel-processing, openmp, rhel, gnu-parallel
| 1
| 216
| 2
|
https://stackoverflow.com/questions/45237682/how-to-run-multiple-distinct-fortran-scripts-in-parallel-on-rhel-6-9
|
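On the simple approach above: launching all N with & starts everything at once and oversubscribes the M cores. xargs -P (standard findutils, so no GNU Parallel needed) caps how many run concurrently. A sketch:
printf '%s\n' ./executable_* | xargs -P "$(nproc)" -I{} {}
Here -P $(nproc) keeps at most one job per core running, starting the next executable as each one finishes.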
34,081,112
|
Building GCC 5.2 on RHEL6
|
Since I needed C++14 support for one of our projects, I was trying to build GCC5.2 on my RHEL6 instance using the steps described in [URL] . However, though these steps work well on a RHEL5 instance, I get the following error on AmazonLinux during the step where it builds libgomp: configure:3688: checking for C compiler default output file name configure:3710: /home/samudra/gcc/./gcc/xgcc -B/home/samudra/gcc/./gcc/ -B/home/samudra/gcc5/x86_64-redhat-linux/bin/ -B/home/samudra/gcc5/x86_64-redhat-linux/lib/ -isystem /home/samudra/gcc5/x86_64-redhat-linux/include -isystem /home/samudra/gcc5/x86_64-redhat-linux/sys-include -g -O2 conftest.c >&5 /usr/lib/../lib64/crt1.o: In function _start': (.text+0x12): undefined reference to __libc_csu_fini' /usr/lib/../lib64/crt1.o: In function _start': (.text+0x19): undefined reference to __libc_csu_init' collect2: error: ld returned 1 exit status configure:3714: $? = 1 configure:3751: result: configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "GNU Offloading and Multi Processing Runtime Library" | #define PACKAGE_TARNAME "libgomp" | #define PACKAGE_VERSION "1.0" | #define PACKAGE_STRING "GNU Offloading and Multi Processing Runtime Library 1.0" | #define PACKAGE_BUGREPORT "" | #define PACKAGE_URL "[URL] | #define PACKAGE "libgomp" | #define VERSION "1.0" | /* end confdefs.h. */ | | int | main () | { | | ; | return 0; | } configure:3757: error: in `/home/samudra/gcc/x86_64-redhat-linux/libgomp': configure:3761: error: C compiler cannot create executables Some hunting around ( [URL] ) suggests that the compiler is unable to find the libc library. However, I am unable to figure out a way to fix it. Has anyone faced a similar issue earlier? Any idea what is going on?
|
Building GCC 5.2 on RHEL6 Since I needed C++14 support for one of our projects, I was trying to build GCC5.2 on my RHEL6 instance using the steps described in [URL] . However, though these steps work well on a RHEL5 instance, I get the following error on AmazonLinux during the step where it builds libgomp: configure:3688: checking for C compiler default output file name configure:3710: /home/samudra/gcc/./gcc/xgcc -B/home/samudra/gcc/./gcc/ -B/home/samudra/gcc5/x86_64-redhat-linux/bin/ -B/home/samudra/gcc5/x86_64-redhat-linux/lib/ -isystem /home/samudra/gcc5/x86_64-redhat-linux/include -isystem /home/samudra/gcc5/x86_64-redhat-linux/sys-include -g -O2 conftest.c >&5 /usr/lib/../lib64/crt1.o: In function _start': (.text+0x12): undefined reference to __libc_csu_fini' /usr/lib/../lib64/crt1.o: In function _start': (.text+0x19): undefined reference to __libc_csu_init' collect2: error: ld returned 1 exit status configure:3714: $? = 1 configure:3751: result: configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "GNU Offloading and Multi Processing Runtime Library" | #define PACKAGE_TARNAME "libgomp" | #define PACKAGE_VERSION "1.0" | #define PACKAGE_STRING "GNU Offloading and Multi Processing Runtime Library 1.0" | #define PACKAGE_BUGREPORT "" | #define PACKAGE_URL "[URL] | #define PACKAGE "libgomp" | #define VERSION "1.0" | /* end confdefs.h. */ | | int | main () | { | | ; | return 0; | } configure:3757: error: in `/home/samudra/gcc/x86_64-redhat-linux/libgomp': configure:3761: error: C compiler cannot create executables Some hunting around ( [URL] ) suggests that the compiler is unable to find the libc library. However, I am unable to figure out a way to fix it. Has anyone faced a similar issue earlier? Any idea what is going on?
|
c++, gcc, rhel
| 1
| 796
| 1
|
https://stackoverflow.com/questions/34081112/building-gcc-5-2-on-rhel6
|
34,067,355
|
What is the syntax of regular expressions in bash?
|
I created a regex that finally works for my case :pkcs7-data\n.+\n\s+(.+?): You can have a look at how it works right here: REGEX101 link It has to find the first occurrence of a certain significant number. I built it using REGEX101, but I have to use it in a bash terminal. My idea is to use that regex in a grep command which takes a file as input. grep -Po ':pkcs7-data\n.+\n\s+(.+?):' file.txt My problem is that the REGEX101 syntax I used doesn't work with this bash bash --version GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu) Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <[URL] I looked up some tools ( tool1 ) and files ( file1 , file2 , file3 ) I found, but I'm still not able to get anything. I mean, every time I execute grep I don't get anything. I think the problem must be with some symbols like "\n" or "+", but I'm not succeeding. If I execute something like grep -Po ':pkcs7-data' file.txt I get good results. The problems begin once I start using symbols like end-of-line.
|
What is the syntax of regular expressions in bash? I created a regex that finally works for my case :pkcs7-data\n.+\n\s+(.+?): You can have a look at how it works right here: REGEX101 link It has to find the first occurrence of a certain significant number. I built it using REGEX101, but I have to use it in a bash terminal. My idea is to use that regex in a grep command which takes a file as input. grep -Po ':pkcs7-data\n.+\n\s+(.+?):' file.txt My problem is that the REGEX101 syntax I used doesn't work with this bash bash --version GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu) Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <[URL] I looked up some tools ( tool1 ) and files ( file1 , file2 , file3 ) I found, but I'm still not able to get anything. I mean, every time I execute grep I don't get anything. I think the problem must be with some symbols like "\n" or "+", but I'm not succeeding. If I execute something like grep -Po ':pkcs7-data' file.txt I get good results. The problems begin once I start using symbols like end-of-line.
|
regex, bash, grep, rhel
| 1
| 158
| 3
|
https://stackoverflow.com/questions/34067355/how-is-the-syntax-of-regular-expression-at-bash
|
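On the regex above: grep matches line by line, so a pattern containing \n can never match unless grep treats the whole file as one record; GNU grep's -z (NUL-separated records) does exactly that. A sketch:
grep -Pzo ':pkcs7-data\n.+\n\s+(.+?):' file.txt
The matches come out NUL-terminated, so piping through tr '\0' '\n' may help when viewing them.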
27,640,807
|
Show only TSTA* files in JFileChooser
|
So, inverse to the question: Open only .xml file in JFileChooser I'd like to open a JFileChooser that shows files starting with, for example, "TSTA" . I don't care about the rest of the file name.
|
Show only TSTA* files in JFileChooser So, inverse to the question: Open only .xml file in JFileChooser I'd like to open a JFileChooser that shows files starting with, for example, "TSTA" . I don't care about the rest of the file name.
|
java, linux, jfilechooser, rhel
| 1
| 28
| 1
|
https://stackoverflow.com/questions/27640807/show-only-tsta-files-in-jfilechooser
|
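A sketch of the prefix filter for the question above, keeping directories visible so the chooser stays navigable:
import java.io.File;
import javax.swing.JFileChooser;
import javax.swing.filechooser.FileFilter;

public class TstaChooserDemo {
    public static void main(String[] args) {
        JFileChooser chooser = new JFileChooser();
        chooser.setFileFilter(new FileFilter() {
            @Override
            public boolean accept(File f) {
                // directories must pass or the user cannot browse into them
                return f.isDirectory() || f.getName().startsWith("TSTA");
            }
            @Override
            public String getDescription() {
                return "TSTA* files";
            }
        });
        chooser.showOpenDialog(null);
    }
}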
24,833,816
|
why is ec2 free tier being charged
|
I launched a t2.micro instance which at the time of launch clearly stated free tier eligible . It is a RHEL7 system on which I then installed some usual software such as Java, Tomcat, nginx etc. In the billing section, I see that I am being charged for this instance. So far I have been charged $2.36 at the rate of $0.073 for 36 hours. How do I explain this?
|
why is ec2 free tier being charged I launched a t2.micro instance which at the time of launch clearly stated free tier eligible . It is a RHEL7 system on which I then installed some usual software such as Java, Tomcat, nginx etc. In the billing section, I see that I am being charged for this instance. So far I have been charged $2.36 at the rate of $0.073 for 36 hours. How do I explain this?
|
amazon-ec2, rhel
| 1
| 169
| 1
|
https://stackoverflow.com/questions/24833816/why-is-ec2-free-tier-being-charged
|
23,642,828
|
Troubleshooting warning from "make -j N"
|
When I issue a command like (for example) make -j 4 , I get the following error: warning: jobserver unavailable: using -j1. Add `+' to parent make rule I am using the developer toolkit from Scientific Linux 6 under RHEL6: $ gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/opt/rh/devtoolset-2/root/usr/libexec/gcc/x86_64-redhat-linux/4.8.2/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/opt/rh/devtoolset-2/root/usr --mandir=/opt/rh/devtoolset-2/root/usr/share/man --infodir=/opt/rh/devtoolset-2/root/usr/share/info --with-bugurl=[URL] --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-isl=/builddir/build/BUILD/gcc-4.8.2-20140120/obj-x86_64-redhat-linux/isl-install --with-cloog=/builddir/build/BUILD/gcc-4.8.2-20140120/obj-x86_64-redhat-linux/cloog-install --with-mpc=/builddir/build/BUILD/gcc-4.8.2-20140120/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.8.2 20140120 (Red Hat 4.8.2-15) (GCC) I have libmpc 0.8.3 installed: $ rpm -qa | grep mpc libmpcdec-1.2.6-6.1.el6.x86_64 libmpc-0.8-3.el6.x86_64 Is there a way to troubleshoot my project's makefile or gcc to determine what is causing the warning and fix it? I do not seem to need the + symbol added to rules when building from OS X via Clang/LLVM. Google searches on the warning text are not returning much information that is apparently useful.
|
Troubleshooting warning from "make -j N" When I issue a command like (for example) make -j 4 , I get the following error: warning: jobserver unavailable: using -j1. Add `+' to parent make rule I am using the developer toolkit from Scientific Linux 6 under RHEL6: $ gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/opt/rh/devtoolset-2/root/usr/libexec/gcc/x86_64-redhat-linux/4.8.2/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/opt/rh/devtoolset-2/root/usr --mandir=/opt/rh/devtoolset-2/root/usr/share/man --infodir=/opt/rh/devtoolset-2/root/usr/share/info --with-bugurl=[URL] --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,fortran,lto --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libgcj --with-isl=/builddir/build/BUILD/gcc-4.8.2-20140120/obj-x86_64-redhat-linux/isl-install --with-cloog=/builddir/build/BUILD/gcc-4.8.2-20140120/obj-x86_64-redhat-linux/cloog-install --with-mpc=/builddir/build/BUILD/gcc-4.8.2-20140120/obj-x86_64-redhat-linux/mpc-install --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.8.2 20140120 (Red Hat 4.8.2-15) (GCC) I have libmpc 0.8.3 installed: $ rpm -qa | grep mpc libmpcdec-1.2.6-6.1.el6.x86_64 libmpc-0.8-3.el6.x86_64 Is there a way to troubleshoot my project's makefile or gcc to determine what is causing the warning and fix it? I do not seem to need the + symbol added to rules when building from OS X via Clang/LLVM. Google searches on the warning text are not returning much information that is apparently useful.
|
gcc, makefile, rhel, rhel-scl
| 1
| 1,775
| 1
|
https://stackoverflow.com/questions/23642828/troubleshooting-warning-from-make-j-n
|
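On the warning above: it appears when a recipe starts a sub-make in a way the parent cannot recognize as make, so the jobserver file descriptors are not passed down. Invoking sub-makes via $(MAKE), or prefixing the command with +, fixes it. A sketch (recipe lines must begin with a tab):
# triggers the warning: make cannot tell this runs another make
build-bad:
	cd subdir && make
# fixed: $(MAKE) is recognized and inherits the jobserver
build-good:
	$(MAKE) -C subdir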
23,416,149
|
Cannot Start Eclipse. JVM terminated. Exit code=13. (RHEL)
|
Working on getting eclipse running on RHEL 5. I extracted the tarball, but I am getting the following errors. This is the error I get: JVM terminated. Exit code=13 /software/java64/jdk1.6.0_24/bin/java -Dosgi.requiredJavaVersion=1.6 -XX:MaxPermSize=256m -Xms40m -Xmx512m -jar /privdir/iasr160/software/eclipse//plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar -os linux -ws gtk -arch x86 -showsplash /privdir/iasr160/software/eclipse//plugins/org.eclipse.platform_4.3.2.v20140221-1700/splash.bmp -launcher /privdir/iasr160/software/eclipse/eclipse -name Eclipse --launcher.library /privdir/iasr160/software/eclipse//plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.200.v20140116-2212/eclipse_1508.so -startup /privdir/iasr160/software/eclipse//plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar --launcher.appendVmargs -exitdata 410006 -product org.eclipse.epp.package.cpp.product -vm /software/java64/jdk1.6.0_24/bin/java -vmargs -Dosgi.requiredJavaVersion=1.6 -XX:MaxPermSize=256m -Xms40m -Xmx512m -jar /privdir/iasr160/software/eclipse//plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar This is what my eclipse.ini looks like: -startup plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar --launcher.library plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.200.v20140116-2212 -product org.eclipse.epp.package.cpp.product --launcher.defaultAction openFile -showsplash org.eclipse.platform --launcher.XXMaxPermSize 256m --launcher.defaultAction openFile --launcher.appendVmargs -vm /software/java64/jdk1.6.0_24/bin/java -vmargs -Dosgi.requiredJavaVersion=1.6 -XX:MaxPermSize=256m -Xms40m -Xmx512m What additional details are needed?
|
Cannot Start Eclipse. JVM terminated. Exit code=13. (RHEL) Working on getting eclipse running on RHEL 5. I extracted the tarball, but I am getting the following errors. This is the error I get: JVM terminated. Exit code=13 /software/java64/jdk1.6.0_24/bin/java -Dosgi.requiredJavaVersion=1.6 -XX:MaxPermSize=256m -Xms40m -Xmx512m -jar /privdir/iasr160/software/eclipse//plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar -os linux -ws gtk -arch x86 -showsplash /privdir/iasr160/software/eclipse//plugins/org.eclipse.platform_4.3.2.v20140221-1700/splash.bmp -launcher /privdir/iasr160/software/eclipse/eclipse -name Eclipse --launcher.library /privdir/iasr160/software/eclipse//plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.200.v20140116-2212/eclipse_1508.so -startup /privdir/iasr160/software/eclipse//plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar --launcher.appendVmargs -exitdata 410006 -product org.eclipse.epp.package.cpp.product -vm /software/java64/jdk1.6.0_24/bin/java -vmargs -Dosgi.requiredJavaVersion=1.6 -XX:MaxPermSize=256m -Xms40m -Xmx512m -jar /privdir/iasr160/software/eclipse//plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar This is what my eclipse.ini looks like: -startup plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar --launcher.library plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.200.v20140116-2212 -product org.eclipse.epp.package.cpp.product --launcher.defaultAction openFile -showsplash org.eclipse.platform --launcher.XXMaxPermSize 256m --launcher.defaultAction openFile --launcher.appendVmargs -vm /software/java64/jdk1.6.0_24/bin/java -vmargs -Dosgi.requiredJavaVersion=1.6 -XX:MaxPermSize=256m -Xms40m -Xmx512m What additional details are needed?
|
linux, eclipse, rhel
| 1
| 4,229
| 1
|
https://stackoverflow.com/questions/23416149/cannot-start-eclipse-jvm-terminated-exit-code-13-rhel
|
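Exit code 13 is the classic 32/64-bit mismatch, and the launch line above pairs an x86 (32-bit) Eclipse (-arch x86, launcher.gtk.linux.x86) with a JDK under /software/java64. A quick check of both sides:
file /privdir/iasr160/software/eclipse/eclipse     # reports ELF 32-bit vs 64-bit
/software/java64/jdk1.6.0_24/bin/java -version    # a 64-bit JVM announces itself as such
The -vm entry in eclipse.ini should point at a JVM whose bitness matches the Eclipse build.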
21,512,833
|
SSH Xforwarding changing user accounts
|
I am trying to make a script to install, more or less automatically, an Oracle database as well as some other applications of my own. I haven't written a line yet because I want to perform all the steps manually first. So, my environment is the following. I have RHEL 5 with no graphic interface. I am connecting to the server from a Windows laptop through SSH as root. I have enabled X forwarding, so when I log in with the root account I can run xdpyinfo to check the X server configuration. I need X forwarding because the Oracle DB installation procedure requires an X server. However, Oracle requires the user oracle to perform the installation. I have already created the oracle user, but when changing the user from root to oracle I can no longer run the xdpyinfo command, so the Oracle installation procedure fails. I get the following error: Xlib: connection to "localhost:10.0" refused by server Xlib: PuTTY X11 proxy: wrong authorisation protocol attempted xdpyinfo: unable to open display "localhost:10.0". I have tried to use xhost to enable my laptop to access my server, but I have failed to do that as well.
|
SSH Xforwarding changing user accounts I am trying to make a script to install, more or less automatically, an Oracle database as well as some other applications of my own. I haven't written a line yet because I want to perform all the steps manually first. So, my environment is the following. I have RHEL 5 with no graphic interface. I am connecting to the server from a Windows laptop through SSH as root. I have enabled X forwarding, so when I log in with the root account I can run xdpyinfo to check the X server configuration. I need X forwarding because the Oracle DB installation procedure requires an X server. However, Oracle requires the user oracle to perform the installation. I have already created the oracle user, but when changing the user from root to oracle I can no longer run the xdpyinfo command, so the Oracle installation procedure fails. I get the following error: Xlib: connection to "localhost:10.0" refused by server Xlib: PuTTY X11 proxy: wrong authorisation protocol attempted xdpyinfo: unable to open display "localhost:10.0". I have tried to use xhost to enable my laptop to access my server, but I have failed to do that as well.
|
oracle-database, ssh, rhel, xserver
| 1
| 8,199
| 1
|
https://stackoverflow.com/questions/21512833/ssh-xforwarding-changing-user-accounts
|
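On the refused connection above: the X authorisation cookie ssh creates belongs to root, so after switching to oracle the forwarded display rejects the connection. Copying the cookie to the oracle user is the usual fix; a sketch:
# as root, before switching user
xauth list $DISPLAY      # copy the printed line (display, protocol, hex cookie)
su - oracle
export DISPLAY=localhost:10.0
xauth add <paste the line copied above>
xdpyinfo                 # should now succeed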
21,230,738
|
Installing zmq module on Red Hat Linux RHEL 6.2 64-bit machine
|
Hi, I am trying to install the zmq module on a Linux machine. Below are the steps that I followed: • Make sure you have the following packages installed (sudo apt-get install binutils libtool autoconf automake) • Get the latest POSIX tarball ([URL] and untar it • Run configure (./configure) • Run make (sudo make install) • npm install zmq or npm update if you already have it listed in your package.json • sudo ldconfig (otherwise you might get the error "cannot open shared object file") But when I ran the above steps I got the error below: Error: libzmq.so.3: cannot open shared object file: No such file or directory at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.require (module.js:364:17) at require (module.js:380:17) at Object.<anonymous> (/var/MLIDeployment/MLI/zeromq-4.0.3/node_modules/zmq/lib/index.js:8:11) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.require (module.js:364:17) I'm stuck here; any help regarding this would be much appreciated
|
Installing zmq module on Red Hat Linux RHEL 6.2 64-bit machine Hi, I am trying to install the zmq module on a Linux machine. Below are the steps that I followed: • Make sure you have the following packages installed (sudo apt-get install binutils libtool autoconf automake) • Get the latest POSIX tarball ([URL] and untar it • Run configure (./configure) • Run make (sudo make install) • npm install zmq or npm update if you already have it listed in your package.json • sudo ldconfig (otherwise you might get the error "cannot open shared object file") But when I ran the above steps I got the error below: Error: libzmq.so.3: cannot open shared object file: No such file or directory at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.require (module.js:364:17) at require (module.js:380:17) at Object.<anonymous> (/var/MLIDeployment/MLI/zeromq-4.0.3/node_modules/zmq/lib/index.js:8:11) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Module.require (module.js:364:17) I'm stuck here; any help regarding this would be much appreciated
|
installation, zeromq, rhel
| 1
| 1,910
| 1
|
https://stackoverflow.com/questions/21230738/installing-zmq-module-on-redhat-linux-rhel-6-2-64bit-machine
|
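On the libzmq.so.3 error above: ./configure installs under /usr/local/lib by default, and the RHEL dynamic linker does not search that path until it is registered. A sketch, run as root after make install:
echo /usr/local/lib > /etc/ld.so.conf.d/local.conf
ldconfig
ldconfig -p | grep libzmq    # confirm libzmq.so.3 is now in the cache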
20,603,773
|
SAS Install Fails from not Finding libXext
|
I'm not sure if this is the correct part of the stackoverflow family to be posting this question to so I apologize if this isn't the correct site. I'm trying to set up a new installation of SAS on Red Hat Enterprise Linux Server release 6.5 on a 64-bit machine and I keep getting an error when attempting to run the setup.sh file included with SAS. In the terminal I receive the following: An error occurred while launching Java. Please check the following log file: /home/user/.SASAppData/SASDeploymentWizard/deploywiz_2013-12-15-23.41.42.log /tmp/_setup21770/products/javaruntime__99185__lax__xx__sp0__1/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory If I look at the error log I receive the following: Sun Dec 15 23:16:36 EST 2013 Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/_setup13184/products/javaruntime__99185__lax__xx__sp0__1/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(Unknown Source) at java.lang.ClassLoader.loadLibrary(Unknown Source) at java.lang.Runtime.load0(Unknown Source) at java.lang.System.load(Unknown Source) at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(Unknown Source) at java.lang.ClassLoader.loadLibrary(Unknown Source) at java.lang.Runtime.loadLibrary0(Unknown Source) at java.lang.System.loadLibrary(Unknown Source) at sun.security.action.LoadLibraryAction.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at sun.awt.NativeLibLoader.loadLibraries(Unknown Source) at sun.awt.DebugHelper.<clinit>(Unknown Source) at sun.awt.X11GraphicsEnvironment.<clinit>(Unknown Source) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) at java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment(Unknown Source) at com.sas.ssn.Kit.main(Kit.java:1653) The strangest part is that I know libXtst (which from what I understand contains libXext) is installed properly. Running locate libXtst shows it in the following locations: /usr/lib64/libXtst.so /usr/lib64/libXtst.so.6 /usr/lib64/libXtst.so.6.1.0 and yum list libXtst returns libXtst.x86_64 1.2.1-2.el6 @rhel-x86_64-server-6 as being installed. I've Googled this pretty extensively and haven't come up with any reasonable solutions, even the previous admin's documentation on SAS installations didn't help here. I'd really appreciate a hand or pointer in the correct direction. Edit: I forgot to include that if I run setup.sh -console I'm able to step through the text-based installer.
|
SAS Install Fails from not Finding libXext I'm not sure if this is the correct part of the stackoverflow family to be posting this question to so I apologize if this isn't the correct site. I'm trying to set up a new installation of SAS on Red Hat Enterprise Linux Server release 6.5 on a 64-bit machine and I keep getting an error when attempting to run the setup.sh file included with SAS. In the terminal I receive the following: An error occurred while launching Java. Please check the following log file: /home/user/.SASAppData/SASDeploymentWizard/deploywiz_2013-12-15-23.41.42.log /tmp/_setup21770/products/javaruntime__99185__lax__xx__sp0__1/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory If I look at the error log I receive the following: Sun Dec 15 23:16:36 EST 2013 Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/_setup13184/products/javaruntime__99185__lax__xx__sp0__1/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(Unknown Source) at java.lang.ClassLoader.loadLibrary(Unknown Source) at java.lang.Runtime.load0(Unknown Source) at java.lang.System.load(Unknown Source) at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(Unknown Source) at java.lang.ClassLoader.loadLibrary(Unknown Source) at java.lang.Runtime.loadLibrary0(Unknown Source) at java.lang.System.loadLibrary(Unknown Source) at sun.security.action.LoadLibraryAction.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at sun.awt.NativeLibLoader.loadLibraries(Unknown Source) at sun.awt.DebugHelper.<clinit>(Unknown Source) at sun.awt.X11GraphicsEnvironment.<clinit>(Unknown Source) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) at java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment(Unknown Source) at com.sas.ssn.Kit.main(Kit.java:1653) The strangest part is that I know libXtst (which from what I understand contains libXext) is installed properly. Running locate libXtst shows it in the following locations: /usr/lib64/libXtst.so /usr/lib64/libXtst.so.6 /usr/lib64/libXtst.so.6.1.0 and yum list libXtst returns libXtst.x86_64 1.2.1-2.el6 @rhel-x86_64-server-6 as being installed. I've Googled this pretty extensively and haven't come up with any reasonable solutions, even the previous admin's documentation on SAS installations didn't help here. I'd really appreciate a hand or pointer in the correct direction. Edit: I forgot to include that if I run setup.sh -console I'm able to step through the text-based installer.
|
java, linux, sas, rhel
| 1
| 2,646
| 2
|
https://stackoverflow.com/questions/20603773/sas-install-fails-from-not-finding-libxext
|
15,337,123
|
How to Enable SSL/HTTPS on Tomcat 7 on RHEL
|
I have a Java application which I am running on an RHEL server. I want to enable SSL on Tomcat 7 on RHEL. I am following this tutorial . I used this command to create a self-signed certificate: keytool -genkey -alias mkyong -keyalg RSA -keystore c:\mkyongkeystore But on opening [URL] I am not getting anything, and I am unable to configure Tomcat to support SSL/HTTPS.
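For reference, a minimal sketch of the HTTPS connector this kind of setup ends with; it goes inside the <Service> element of conf/server.xml. The keystore path and password below are placeholders, not values from the question (note the question passes a Windows-style path to keytool on an RHEL box, so the keystore location needs checking first):

    <!-- Hypothetical values: point keystoreFile at the keystore keytool actually wrote -->
    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               keystoreFile="/opt/tomcat/conf/mkyongkeystore"
               keystorePass="changeit"
               clientAuth="false" sslProtocol="TLS" />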
|
How to Enable SSL/HTTPS on Tomcat 7 on RHEL I have a Java application which I am running on an RHEL server. I want to enable SSL on Tomcat 7 on RHEL. I am following this tutorial . I used this command to create a self-signed certificate: keytool -genkey -alias mkyong -keyalg RSA -keystore c:\mkyongkeystore But on opening [URL] I am not getting anything, and I am unable to configure Tomcat to support SSL/HTTPS.
|
java, linux, ssl-certificate, redhat, rhel
| 1
| 12,244
| 1
|
https://stackoverflow.com/questions/15337123/how-to-enable-ssl-https-on-tomcat-7-on-rhel
|
10,118,886
|
RHEL linker error
|
I am on RHEL 5.8 on x86_64 machine: $ uname -r 2.6.18-308.1.1.el5 $ uname -m x86_64 $ try to cross-compile the sources to be the i386 compatible: CFLAGS += -m32 LDFLAGS += -L/lib -lpthread -luuid but link stage fails with error: /usr/bin/ld: skipping incompatible /usr/lib64/libuuid.so when searching for -luuid /usr/bin/ld: skipping incompatible /usr/lib64/libuuid.a when searching for -luuid /usr/bin/ld: cannot find -luuid collect2: ld returned 1 exit status Actually, the host machine has the /lib/libuuid.so.1.2 : $ readelf -h /lib/libuuid.so.1.2 ELF Header: Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00 Class: ELF32 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: DYN (Shared object file) Machine: Intel 80386 Version: 0x1 Entry point address: 0xf90 Start of program headers: 52 (bytes into file) Start of section headers: 13352 (bytes into file) Flags: 0x0 Size of this header: 52 (bytes) Size of program headers: 32 (bytes) Number of program headers: 6 Size of section headers: 40 (bytes) Number of section headers: 28 Section header string table index: 27 $ Are there any ld or other's options to fix this link issue?
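A hedged sketch of the two usual fixes: ld resolves -luuid through the unversioned libuuid.so name, which /lib/libuuid.so.1.2 alone does not provide in a 32-bit flavour. The package name below is an assumption for RHEL 5, where libuuid ships as part of e2fsprogs:

    # Preferred: install the 32-bit development package that owns libuuid.so
    yum install e2fsprogs-devel.i386
    # Quick workaround: hand ld the unversioned name pointing at the 32-bit runtime
    ln -s /lib/libuuid.so.1.2 /lib/libuuid.so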
|
RHEL linker error I am on RHEL 5.8 on x86_64 machine: $ uname -r 2.6.18-308.1.1.el5 $ uname -m x86_64 $ try to cross-compile the sources to be the i386 compatible: CFLAGS += -m32 LDFLAGS += -L/lib -lpthread -luuid but link stage fails with error: /usr/bin/ld: skipping incompatible /usr/lib64/libuuid.so when searching for -luuid /usr/bin/ld: skipping incompatible /usr/lib64/libuuid.a when searching for -luuid /usr/bin/ld: cannot find -luuid collect2: ld returned 1 exit status Actually, the host machine has the /lib/libuuid.so.1.2 : $ readelf -h /lib/libuuid.so.1.2 ELF Header: Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00 Class: ELF32 Data: 2's complement, little endian Version: 1 (current) OS/ABI: UNIX - System V ABI Version: 0 Type: DYN (Shared object file) Machine: Intel 80386 Version: 0x1 Entry point address: 0xf90 Start of program headers: 52 (bytes into file) Start of section headers: 13352 (bytes into file) Flags: 0x0 Size of this header: 52 (bytes) Size of program headers: 32 (bytes) Number of program headers: 6 Size of section headers: 40 (bytes) Number of section headers: 28 Section header string table index: 27 $ Are there any ld or other's options to fix this link issue?
|
linux, rhel
| 1
| 1,878
| 2
|
https://stackoverflow.com/questions/10118886/rhel-linker-error
|
5,081,326
|
Issue logging in to Tomcat JMX Services via jconsole: javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:281)
|
We are running Tomcat 6 on RHEL 5 and Oracle JDK 1.6_24 and I am having a problem loggin into the jmx services remotely through jconsole. Here is my setenv.sh : JAVA_OPTS="-Xms512m -Xmx1152m -XX:MaxPermSize=512m" CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9888" CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false" CATALINA_OPTS="$CATALINA_OPTS -Djava.rmi.server.hostname=192.168.XX.XX" CATALINA_OPTS="$CATALINA_OPTS -Djava.rmi.server.logCalls=true" Here is the output from 'ps -elf | grep tomcat' (I also see the 9888 port listening when using netstat) : 0 S root 2930 1 1 85 0 - 500084 184466 20:47 ? 00:00:15 /usr/java/default/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Xms512m -Xmx1152m -XX:MaxPermSize=512m -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -server -Dbuild.compiler.emacs=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9888 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=192.168.XX.XX -Djava.rmi.server.logCalls=true -Djava.endorsed.dirs=/usr/local/tomcat/endorsed -classpath /usr/local/tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat -Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start Then when trying to connect through jconsole I am running the following command : jconsole -debug 192.168.XX.XX:9888 It throws the following error: java.lang.NullPointerException at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:281) at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:228) at sun.tools.jconsole.ProxyClient.tryConnect(ProxyClient.java:334) at sun.tools.jconsole.ProxyClient.connect(ProxyClient.java:296) at sun.tools.jconsole.VMPanel$2.run(VMPanel.java:281) I have run out of ideas for debugging this and can't seem to find any answers. Any thought or ideas? As a second problem, shutting down tomcat does not stop the jmx process and i cannot restart tomcat since it is still listening on 9888. Do I have to specify anything to stop the JMX process during shutdown? Thanks all Dustin Chesterman
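One hedged debugging step for the first problem: try the fully spelled-out JMX service URL rather than the host:port shorthand, which sometimes surfaces a clearer error than the bare NullPointerException:

    # Same endpoint as in the question, written as an explicit JMXServiceURL
    jconsole -debug service:jmx:rmi:///jndi/rmi://192.168.XX.XX:9888/jmxrmi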
|
Issue logging in into Tomcat JMX Services via jconsloe : javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:281) We are running Tomcat 6 on RHEL 5 and Oracle JDK 1.6_24 and I am having a problem loggin into the jmx services remotely through jconsole. Here is my setenv.sh : JAVA_OPTS="-Xms512m -Xmx1152m -XX:MaxPermSize=512m" CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9888" CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false" CATALINA_OPTS="$CATALINA_OPTS -Djava.rmi.server.hostname=192.168.XX.XX" CATALINA_OPTS="$CATALINA_OPTS -Djava.rmi.server.logCalls=true" Here is the output from 'ps -elf | grep tomcat' (I also see the 9888 port listening when using netstat) : 0 S root 2930 1 1 85 0 - 500084 184466 20:47 ? 00:00:15 /usr/java/default/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Xms512m -Xmx1152m -XX:MaxPermSize=512m -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -server -Dbuild.compiler.emacs=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9888 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=192.168.XX.XX -Djava.rmi.server.logCalls=true -Djava.endorsed.dirs=/usr/local/tomcat/endorsed -classpath /usr/local/tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat -Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start Then when trying to connect through jconsole I am running the following command : jconsole -debug 192.168.XX.XX:9888 It throws the following error: java.lang.NullPointerException at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:281) at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:228) at sun.tools.jconsole.ProxyClient.tryConnect(ProxyClient.java:334) at sun.tools.jconsole.ProxyClient.connect(ProxyClient.java:296) at sun.tools.jconsole.VMPanel$2.run(VMPanel.java:281) I have run out of ideas for debugging this and can't seem to find any answers. Any thought or ideas? As a second problem, shutting down tomcat does not stop the jmx process and i cannot restart tomcat since it is still listening on 9888. Do I have to specify anything to stop the JMX process during shutdown? Thanks all Dustin Chesterman
|
tomcat, jmx, jconsole, rhel
| 1
| 2,475
| 3
|
https://stackoverflow.com/questions/5081326/issue-logging-in-into-tomcat-jmx-services-via-jconsloe-javax-management-remote
|
78,847,261
|
How to tell rpmbuild to install package A only when packages B C D are not installed?
|
I try to build an RPM package on RHEL8 with rpmbuild version 4.14.3. This package shall have a dependency, that at least OpenJDK 1.8.0 is installed. This OpenJDK 1.8.0 shall not be installed if java-11-openjdk or java-17-openjdk or java-21-openjdk are installed. # rpm -qa | grep java java-11-openjdk-devel-11.0.23.0.9-3.el8.x86_64 tzdata-java-2024a-1.el8.noarch javapackages-filesystem-5.3.0-1.module+el8+2447+6f56d9a6.noarch java-11-openjdk-11.0.23.0.9-3.el8.x86_64 java-11-openjdk-headless-11.0.23.0.9-3.el8.x86_64 ... When yum install my package: Dependencies resolved. =================================== Package =================================== Installing: packageblabla Installing dependencies: java-1.8.0-openjdk java-1.8.0-openjdk-devel java-1.8.0-openjdk-headless Installing weak dependencies: gtk2 The SPEC-file: %{?_with_rhel8:Requires: (( java-1.8.0-openjdk-devel or java-11-openjdk-devel ) and ( java-1.8.0-openjdk-devel or java-17-openjdk-devel ) and ( java-1.8.0-openjdk-devel or java-21-openjdk-devel )) } I tried with "unless" following [URL] But rpmbuild complains that I shall use "or". error: line 150: Illegal context for 'unless', please use 'or' instead: Requires: (( java-1.8.0-openjdk-devel unless java-11-openjdk-devel ) and ( java-1.8.0-openjdk-devel unless java-17-openjdk-devel ) and ( java-1.8.0-openjdk-devel unless java-21-openjdk-devel )) Additionally the build is also then needed for RHEL7, which comes with rpm version 4.11. Kind regards, Marcus
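For context, a minimal sketch of what rpm's error message is steering towards. Two hedges: boolean (rich) dependencies need rpm >= 4.13, so none of this can work on the RHEL7 / rpm 4.11 build mentioned at the end, and when none of the listed JDKs is installed, which alternative the solver picks is solver-dependent rather than guaranteed to be 1.8.0:

    # Satisfied by whichever of the four JDKs is already on the system;
    # pulls one in only when none of them is installed
    Requires: (java-1.8.0-openjdk-devel or java-11-openjdk-devel or java-17-openjdk-devel or java-21-openjdk-devel)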
|
How to tell rpmbuild to install package A only when packages B C D are not installed? I try to build an RPM package on RHEL8 with rpmbuild version 4.14.3. This package shall have a dependency, that at least OpenJDK 1.8.0 is installed. This OpenJDK 1.8.0 shall not be installed if java-11-openjdk or java-17-openjdk or java-21-openjdk are installed. # rpm -qa | grep java java-11-openjdk-devel-11.0.23.0.9-3.el8.x86_64 tzdata-java-2024a-1.el8.noarch javapackages-filesystem-5.3.0-1.module+el8+2447+6f56d9a6.noarch java-11-openjdk-11.0.23.0.9-3.el8.x86_64 java-11-openjdk-headless-11.0.23.0.9-3.el8.x86_64 ... When yum install my package: Dependencies resolved. =================================== Package =================================== Installing: packageblabla Installing dependencies: java-1.8.0-openjdk java-1.8.0-openjdk-devel java-1.8.0-openjdk-headless Installing weak dependencies: gtk2 The SPEC-file: %{?_with_rhel8:Requires: (( java-1.8.0-openjdk-devel or java-11-openjdk-devel ) and ( java-1.8.0-openjdk-devel or java-17-openjdk-devel ) and ( java-1.8.0-openjdk-devel or java-21-openjdk-devel )) } I tried with "unless" following [URL] But rpmbuild complains that I shall use "or". error: line 150: Illegal context for 'unless', please use 'or' instead: Requires: (( java-1.8.0-openjdk-devel unless java-11-openjdk-devel ) and ( java-1.8.0-openjdk-devel unless java-17-openjdk-devel ) and ( java-1.8.0-openjdk-devel unless java-21-openjdk-devel )) Additionally the build is also then needed for RHEL7, which comes with rpm version 4.11. Kind regards, Marcus
|
dependencies, require, rpm, rhel, rpmbuild
| 1
| 89
| 1
|
https://stackoverflow.com/questions/78847261/how-to-tell-rpmbuild-to-install-package-a-only-when-packages-b-c-d-are-not-insta
|
78,714,773
|
Not able to install/compile perl module DBD::DB2 on Rocky Linux 9.4 with cpanm
|
I'm trying to install/compile the DBD::DB2 perl module on rocky linux 9.4 with the cpanm commando but it is not working.I tried even On RHEL and Oracle Linux but is not working anyway. I get the following error: [root@ol9vm-desk01 ~]# cpanm DBD::DB2 --> Working on DBD::DB2 Fetching [URL] ... OK Configuring DBD-DB2-1.89 ... OK Building and testing DBD-DB2-1.89 ... FAIL ! Installing DBD::DB2 failed. See /root/.cpanm/work/1720158884.3245/build.log for details. Retry with --force to force install it. What I did untill now: 1. Perl Installed: [root@ol9vm-desk01 ~]# perl -V Summary of my perl5 (revision 5 version 32 subversion 1) configuration: Platform: osname=linux osvers=5.15.0-5.76.5.1.el9uek.x86_64 archname=x86_64-linux-thread-multi uname='linux host-100-100-224-49 5.15.0-5.76.5.1.el9uek.x86_64 #2 smp fri dec 9 18:37:36 pst 2022 x86_64 x86_64 x86_64 gnulinux' config_args='-des -Doptimize=none -Dccflags=-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -Dldflags=-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Dccdlflags=-Wl,--enable-new-dtags -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Dlddlflags=-shared -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Dshrpdir=/usr/lib64 -DDEBUGGING=-g -Dversion=5.32.1 -Dmyhostname=localhost -Dperladmin=root@localhost -Dcc=gcc -Dcf_by=Red Hat, Inc. -Dprefix=/usr -Dvendorprefix=/usr -Dsiteprefix=/usr/local -Dsitelib=/usr/local/share/perl5/5.32 -Dsitearch=/usr/local/lib64/perl5/5.32 -Dprivlib=/usr/share/perl5 -Dvendorlib=/usr/share/perl5/vendor_perl -Darchlib=/usr/lib64/perl5 -Dvendorarch=/usr/lib64/perl5/vendor_perl -Darchname=x86_64-linux-thread-multi -Dlibpth=/usr/local/lib64 /lib64 /usr/lib64 -Duseshrplib -Dusethreads -Duseithreads -Dusedtrace=/usr/bin/dtrace -Duselargefiles -Dd_semctl_semun -Di_db -Ui_ndbm -Di_gdbm -Di_shadow -Di_syslog -Dman3ext=3pm -Duseperlio -Dinstallusrbinperl=n -Ubincompat5005 -Uversiononly -Dpager=/usr/bin/less -isr -Dd_gethostent_r_proto -Ud_endhostent_r_proto -Ud_sethostent_r_proto -Ud_endprotoent_r_proto -Ud_setprotoent_r_proto -Ud_endservent_r_proto -Ud_setservent_r_proto -Dscriptdir=/usr/bin -Dusesitecustomize -Duse64bitint' @INC: /usr/local/lib64/perl5/5.32 /usr/local/share/perl5/5.32 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 2. IBM DB2 Data Server Client installed tdbcli@ol9vm-desk01 ~]$ db2level DB21085I This instance or install (instance name, where applicable: "tdbcli") uses "64" bits and DB2 code release "SQL11059" with level identifier "060A010F". Informational tokens are "DB2 v11.5.9.0", "s2310270807", "DYN2310270807AMD64", and Fix Pack "0". Product is installed at "/opt/ibm/db2/V11.5". 3. DBI Perl module successfully installed as prerequisite: instmodsh Available commands are: l - List all installed modules m <module> - Select a module q - Quit the program cmd? l Installed modules are: DBI Perl 4. 
Try Installing the perl Module as root DBD::DB2: DB2_HOME=/opt/ibm/db2/V11.5 export DB2LIB=/opt/ibm/db2/V11.5/lib64 [root@ol9vm-desk01 ~]# cpanm DBD::DB2 --> Working on DBD::DB2 Fetching [URL] ... OK Configuring DBD-DB2-1.89 ... OK Building and testing DBD-DB2-1.89 ... FAIL ! Installing DBD::DB2 failed. See /root/.cpanm/work/1720158884.3245/build.log for details. Retry with --force to force install it. Here the final part of the compilation log error: dbdimp.c: In function ‘dbd_st_cancel’: /usr/local/lib64/perl5/5.32/auto/DBI/DBIXS.h:383:45: warning: unused variable ‘imp_dbh’ [-Wunused-variable] 383 | #define D_imp_dbh_from_sth D_imp_from_child(imp_dbh, imp_dbh_t, imp_sth) | ^~~~~~~ /usr/local/lib64/perl5/5.32/auto/DBI/DBIXS.h:381:39: note: in definition of macro ‘D_imp_from_child’ 381 | type *name = (type*)(DBIc_PARENT_COM(child)) | ^~~~ dbdimp.c:3000:9: note: in expansion of macro ‘D_imp_dbh_from_sth’ 3000 | D_imp_dbh_from_sth; | ^~~~~~~~~~~~~~~~~~ dbdimp.c: In function ‘db2_st_finish’: /usr/local/lib64/perl5/5.32/auto/DBI/DBIXS.h:383:45: warning: unused variable ‘imp_dbh’ [-Wunused-variable] 383 | #define D_imp_dbh_from_sth D_imp_from_child(imp_dbh, imp_dbh_t, imp_sth) | ^~~~~~~ /usr/local/lib64/perl5/5.32/auto/DBI/DBIXS.h:381:39: note: in definition of macro ‘D_imp_from_child’ 381 | type *name = (type*)(DBIc_PARENT_COM(child)) | ^~~~ dbdimp.c:3019:9: note: in expansion of macro ‘D_imp_dbh_from_sth’ 3019 | D_imp_dbh_from_sth; | ^~~~~~~~~~~~~~~~~~ dbdimp.c: In function ‘bind_lob_column_helper’: dbdimp.c:1081:1: warning: control reaches end of non-void function [-Wreturn-type] 1081 | } | ^ dbdimp.c: In function ‘db2_describe’: dbdimp.c:1318:20: warning: ‘db_codepage’ may be used uninitialized in this function [-Wmaybe-uninitialized] 1318 | if ( app_codepage != db_codepage) { | ^ dbdimp.c:1318:20: warning: ‘app_codepage’ may be used uninitialized in this function [-Wmaybe-uninitialized] cc1: some warnings being treated as errors make: *** [Makefile:351: dbdimp.o] Error 1 At the moment I am really lost... Thank you in advance for your help I tried sourcing the db2 environment in this way: source /home/tdbcli/sqllib/db2profile but I got the same failure. For a better understanding I post the complete build log: [URL]
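A possible workaround, hedged: the build log's "cc1: some warnings being treated as errors" matches the -Werror=format-security baked into the distro-hardened Perl ccflags shown by perl -V above. One way to test that theory is to rebuild the module with that promotion disabled; the flag handling below is an assumption, not verified against this exact dist:

    source /home/tdbcli/sqllib/db2profile
    cpanm --look DBD::DB2    # unpacks the dist and drops into a shell in its directory
    # Keep the stock ccflags but stop format-security warnings being fatal
    perl Makefile.PL CCFLAGS="$(perl -MConfig -e 'print $Config{ccflags}') -Wno-error=format-security"
    make && make test && make install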
|
Not able to install/compile perl module DBD::DB2 on Rocky Linux 9.4 with cpanm I'm trying to install/compile the DBD::DB2 perl module on rocky linux 9.4 with the cpanm commando but it is not working.I tried even On RHEL and Oracle Linux but is not working anyway. I get the following error: [root@ol9vm-desk01 ~]# cpanm DBD::DB2 --> Working on DBD::DB2 Fetching [URL] ... OK Configuring DBD-DB2-1.89 ... OK Building and testing DBD-DB2-1.89 ... FAIL ! Installing DBD::DB2 failed. See /root/.cpanm/work/1720158884.3245/build.log for details. Retry with --force to force install it. What I did untill now: 1. Perl Installed: [root@ol9vm-desk01 ~]# perl -V Summary of my perl5 (revision 5 version 32 subversion 1) configuration: Platform: osname=linux osvers=5.15.0-5.76.5.1.el9uek.x86_64 archname=x86_64-linux-thread-multi uname='linux host-100-100-224-49 5.15.0-5.76.5.1.el9uek.x86_64 #2 smp fri dec 9 18:37:36 pst 2022 x86_64 x86_64 x86_64 gnulinux' config_args='-des -Doptimize=none -Dccflags=-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -Dldflags=-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Dccdlflags=-Wl,--enable-new-dtags -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Dlddlflags=-shared -Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Dshrpdir=/usr/lib64 -DDEBUGGING=-g -Dversion=5.32.1 -Dmyhostname=localhost -Dperladmin=root@localhost -Dcc=gcc -Dcf_by=Red Hat, Inc. -Dprefix=/usr -Dvendorprefix=/usr -Dsiteprefix=/usr/local -Dsitelib=/usr/local/share/perl5/5.32 -Dsitearch=/usr/local/lib64/perl5/5.32 -Dprivlib=/usr/share/perl5 -Dvendorlib=/usr/share/perl5/vendor_perl -Darchlib=/usr/lib64/perl5 -Dvendorarch=/usr/lib64/perl5/vendor_perl -Darchname=x86_64-linux-thread-multi -Dlibpth=/usr/local/lib64 /lib64 /usr/lib64 -Duseshrplib -Dusethreads -Duseithreads -Dusedtrace=/usr/bin/dtrace -Duselargefiles -Dd_semctl_semun -Di_db -Ui_ndbm -Di_gdbm -Di_shadow -Di_syslog -Dman3ext=3pm -Duseperlio -Dinstallusrbinperl=n -Ubincompat5005 -Uversiononly -Dpager=/usr/bin/less -isr -Dd_gethostent_r_proto -Ud_endhostent_r_proto -Ud_sethostent_r_proto -Ud_endprotoent_r_proto -Ud_setprotoent_r_proto -Ud_endservent_r_proto -Ud_setservent_r_proto -Dscriptdir=/usr/bin -Dusesitecustomize -Duse64bitint' @INC: /usr/local/lib64/perl5/5.32 /usr/local/share/perl5/5.32 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 2. IBM DB2 Data Server Client installed tdbcli@ol9vm-desk01 ~]$ db2level DB21085I This instance or install (instance name, where applicable: "tdbcli") uses "64" bits and DB2 code release "SQL11059" with level identifier "060A010F". Informational tokens are "DB2 v11.5.9.0", "s2310270807", "DYN2310270807AMD64", and Fix Pack "0". Product is installed at "/opt/ibm/db2/V11.5". 3. DBI Perl module successfully installed as prerequisite: instmodsh Available commands are: l - List all installed modules m <module> - Select a module q - Quit the program cmd? l Installed modules are: DBI Perl 4. 
Try Installing the perl Module as root DBD::DB2: DB2_HOME=/opt/ibm/db2/V11.5 export DB2LIB=/opt/ibm/db2/V11.5/lib64 [root@ol9vm-desk01 ~]# cpanm DBD::DB2 --> Working on DBD::DB2 Fetching [URL] ... OK Configuring DBD-DB2-1.89 ... OK Building and testing DBD-DB2-1.89 ... FAIL ! Installing DBD::DB2 failed. See /root/.cpanm/work/1720158884.3245/build.log for details. Retry with --force to force install it. Here the final part of the compilation log error: dbdimp.c: In function ‘dbd_st_cancel’: /usr/local/lib64/perl5/5.32/auto/DBI/DBIXS.h:383:45: warning: unused variable ‘imp_dbh’ [-Wunused-variable] 383 | #define D_imp_dbh_from_sth D_imp_from_child(imp_dbh, imp_dbh_t, imp_sth) | ^~~~~~~ /usr/local/lib64/perl5/5.32/auto/DBI/DBIXS.h:381:39: note: in definition of macro ‘D_imp_from_child’ 381 | type *name = (type*)(DBIc_PARENT_COM(child)) | ^~~~ dbdimp.c:3000:9: note: in expansion of macro ‘D_imp_dbh_from_sth’ 3000 | D_imp_dbh_from_sth; | ^~~~~~~~~~~~~~~~~~ dbdimp.c: In function ‘db2_st_finish’: /usr/local/lib64/perl5/5.32/auto/DBI/DBIXS.h:383:45: warning: unused variable ‘imp_dbh’ [-Wunused-variable] 383 | #define D_imp_dbh_from_sth D_imp_from_child(imp_dbh, imp_dbh_t, imp_sth) | ^~~~~~~ /usr/local/lib64/perl5/5.32/auto/DBI/DBIXS.h:381:39: note: in definition of macro ‘D_imp_from_child’ 381 | type *name = (type*)(DBIc_PARENT_COM(child)) | ^~~~ dbdimp.c:3019:9: note: in expansion of macro ‘D_imp_dbh_from_sth’ 3019 | D_imp_dbh_from_sth; | ^~~~~~~~~~~~~~~~~~ dbdimp.c: In function ‘bind_lob_column_helper’: dbdimp.c:1081:1: warning: control reaches end of non-void function [-Wreturn-type] 1081 | } | ^ dbdimp.c: In function ‘db2_describe’: dbdimp.c:1318:20: warning: ‘db_codepage’ may be used uninitialized in this function [-Wmaybe-uninitialized] 1318 | if ( app_codepage != db_codepage) { | ^ dbdimp.c:1318:20: warning: ‘app_codepage’ may be used uninitialized in this function [-Wmaybe-uninitialized] cc1: some warnings being treated as errors make: *** [Makefile:351: dbdimp.o] Error 1 At the moment I am really lost... Thank you in advance for your help I tried sourcing the db2 environment in this way: source /home/tdbcli/sqllib/db2profile but I got the same failure. For a better understanding I post the complete build log: [URL]
|
perl, db2, rhel, cpanm, rocky-os
| 1
| 690
| 2
|
https://stackoverflow.com/questions/78714773/not-able-to-install-compile-perl-module-dbddb2-on-rocky-linux-9-4-with-cpanm
|
78,379,910
|
Varnish Using Significantly More Memory than Cache?
|
I'm using varnish as a web cache. I'm running on RHEL 9.2. My cache is sized at 1GB. My varnish process is using 3.7G memory. $ ps -p 1163 -o %mem,rss %MEM RSS 15.7 3886108 I'm running varnish with the -s malloc,1024M argument, which should set the cache size to 1GB. The full command, being invoked by systemd, is: /usr/sbin/varnishd -a 127.6.6.6:7480 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,1024M -P /var/run/varnish.pid -T 127.6.6.6:62 It appears that is working, I can see with varnishstat : $ varnishstat -1 | grep s0.g SMA.s0.g_alloc 22573 . Allocations outstanding SMA.s0.g_bytes 1073105012 . Bytes outstanding SMA.s0.g_space 636812 . Bytes available I find it hard to believe there is 2.7GB overhead for a 1GB cache, what am I missing here? Restarting with systemctl restart varnish drops the usage temporarily (as it empties the cache, but also the non-cache memory disappears), but it gradually builds up again to settle at 3.7GB. This is a test system under somewhat heavy load, is it possible this memory is genuinely in use serving requests? I can see this in the transient space, to my eye this looked uninteresting but after reading some other threads maybe it is. SMA.Transient.c_req 9720800 7.90 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 324751584558 263852.57 Bytes allocated SMA.Transient.c_freed 324751584558 263852.57 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available I've also seen a few suggestions that varnish is designed for use with jemalloc, I do not have jemalloc installed, it looks like varnish is using glibc's malloc. EDIT: Full varnishstat -1 output can be found here: [URL] , and varnishadm param.show here: [URL] . At the time of this stat dump, memory usage was at 3GB. ps -p 917285 -o %mem,vsz,rss %MEM VSZ RSS 12.8 7278920 3136620
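Two hedged things worth trying, given the glibc-malloc observation at the end: glibc keeps per-thread arenas that fragment badly under Varnish's thread counts, so capping the arena count often shrinks RSS, and preloading jemalloc (which Varnish is tuned for) is the other common fix. The jemalloc library path below is an assumption for RHEL 9:

    sudo systemctl edit varnish    # then add under [Service]:
    #   Environment=MALLOC_ARENA_MAX=2
    # ...or, to trial jemalloc instead:
    #   Environment=LD_PRELOAD=/usr/lib64/libjemalloc.so.2
    sudo systemctl restart varnish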
|
Varnish Using Significantly More Memory than Cache? I'm using varnish as a web cache. I'm running on RHEL 9.2. My cache is sized at 1GB. My varnish process is using 3.7G memory. $ ps -p 1163 -o %mem,rss %MEM RSS 15.7 3886108 I'm running varnish with the -s malloc,1024M argument, which should set the cache size to 1GB. The full command, being invoked by systemd, is: /usr/sbin/varnishd -a 127.6.6.6:7480 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,1024M -P /var/run/varnish.pid -T 127.6.6.6:62 It appears that is working, I can see with varnishstat : $ varnishstat -1 | grep s0.g SMA.s0.g_alloc 22573 . Allocations outstanding SMA.s0.g_bytes 1073105012 . Bytes outstanding SMA.s0.g_space 636812 . Bytes available I find it hard to believe there is 2.7GB overhead for a 1GB cache, what am I missing here? Restarting with systemctl restart varnish drops the usage temporarily (as it empties the cache, but also the non-cache memory disappears), but it gradually builds up again to settle at 3.7GB. This is a test system under somewhat heavy load, is it possible this memory is genuinely in use serving requests? I can see this in the transient space, to my eye this looked uninteresting but after reading some other threads maybe it is. SMA.Transient.c_req 9720800 7.90 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 324751584558 263852.57 Bytes allocated SMA.Transient.c_freed 324751584558 263852.57 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available I've also seen a few suggestions that varnish is designed for use with jemalloc, I do not have jemalloc installed, it looks like varnish is using glibc's malloc. EDIT: Full varnishstat -1 output can be found here: [URL] , and varnishadm param.show here: [URL] . At the time of this stat dump, memory usage was at 3GB. ps -p 917285 -o %mem,vsz,rss %MEM VSZ RSS 12.8 7278920 3136620
|
varnish, rhel
| 1
| 891
| 1
|
https://stackoverflow.com/questions/78379910/varnish-using-significantly-more-memory-than-cache
|
77,928,800
|
Column format separation using paste and an array
|
Using bash for this script. I'm trying to display values of a disk by gathering the information and outputting them using paste. Sometimes a single physical volume may be assigned to multiple logical volumes. The output of that when using paste comes out a little funky. Here is the function I created. function ShowVolumeAttr() { local lun_id="$1" # Show info on PV, VG, and LV PV=$(pvscan | grep $lun_id | awk '{print $2}') VG=$(pvscan | grep $lun_id | awk '{print $4}') declare -a LV for pv in $PV do LV[${#LV[@]}]+=$(pvdisplay -m $pv | grep "Logical volume" | awk '{print $3}') done echo -e "Physical Volume(s)\tVolume Group\tLogical Volume(s)" echo -e "------------------\t------------\t-----------------" paste <(printf %s "$PV") <(printf "\t%s" "$VG") <(printf "\t%s" "${LV[@]}") } And here is what the output looks like. I was hoping to have all of the Logical Volume entries line up under each other, but since it each entry after the 1st in the array gets its own line and there is no PV/VG in front of it, it tabs normally from the left side. Is there any way to line them up? Physical Volume(s) Volume Group Logical Volume(s) ------------------ ------------ ----------------- /dev/sda2 rhel /dev/rhel/root /dev/rhel/var /dev/rhel/home /dev/rhel/swap /dev/rhel/usr /dev/rhel/tmp /dev/rhel/opt /dev/rhel/var Thanks, Patrick Tried to separate them out with different printf commands as well. Also tried adding tabs to the last printf entry. EDIT Ended up doing the below and it comes out looking nice. Kept playing with it over the last couple of days. for pv in $PV do PVARRAY[${#PVARRAY[@]}]+=$pv done # Stack Overflow Addition [URL] for pv in "${PVARRAY[@]}" do # Add Logical Volumes for $pv into the array LV[${#LV[@]}]+=$(pvdisplay -m $pv | grep "Logical volume" | awk '{print $3}') # Loop through array and display contents for lv in "${!LV[@]}" do printf "%s\t%s\t%s\t%s\n" "Physical Volume(s)" "Volume Group" "Logical Volume(s)" "LV Size" printf "%s\t%s\t%s\t%s\n" "------------------" "------------" "-----------------" "-------" if [ "$lv" -ne 0 ] then printf "No physical volumes" fi PHYSV=$(lvs --segments ${LV[$lv]} -o+lv_size,devices | tail -n +2 | grep $pv | awk '{print $8}' | sed "s/([^)]*)/()/g" | tr -d '()') #echo -e "PHYSV: $PHYSV" LOGVOL=$(pvs $pv -o+lv_path,devices | grep ${LV[$lv]} | awk '{print $8}') #echo -e "LOGVOL: $LOGVOL" PHYSVOL=$(lvs --segments ${LV[$lv]} -o+lv_size,devices | tail -n +2 | grep $pv | awk '{print $6}') #echo -e "PHYSVOL: $PHYSVOL" VOLGRP=$(lvs --segments ${LV[$lv]} -o+lv_size,devices | tail -n +2 | grep $pv | awk '{print $2}') #echo -e "VOLGRP: $VOLGRP" paste <(printf "%s" "$PHYSV") <(printf "%s" "$VOLGRP") <(printf "%s" "${LV[$lv]}") <(printf "%s" "$PHYSVOL") done | column -ts $'\t' # Clear array LV=() done Couple of different tests: [root@lavulc01-new scripts]# ./dmcli.sh -l sdb Partition /dev/sdb1 exists but there are no physical volumes found! 
[root@lavulc01-new scripts]# ./dmcli.sh -l sda Physical Volume(s) Volume Group Logical Volume(s) LV Size ------------------ ------------ ----------------- ------- /dev/sda2 rhel /dev/rhel/root 8.00g /dev/sda2 rhel /dev/rhel/var 8.00g /dev/sda2 rhel /dev/rhel/home 8.00g /dev/sda2 rhel /dev/rhel/swap 32.00g /dev/sda2 rhel /dev/rhel/usr 8.00g /dev/sda2 rhel /dev/rhel/tmp 16.00g /dev/sda2 rhel /dev/rhel/opt 8.00g /dev/sda2 rhel /dev/rhel/var 4.00m Physical Volume(s) Volume Group Logical Volume(s) LV Size ------------------ ------------ ----------------- ------- /dev/sda3 rhel /dev/rhel/var <92.00g [root@lavulc01-new scripts]# ./dmcli.sh -l mpatha Physical Volume(s) Volume Group Logical Volume(s) LV Size ------------------ ------------ ----------------- ------- /dev/mapper/mpatha1 u01_vg /dev/u01_vg/u01_lv <1.50t LUN mpatha ID: 60:06:01:60:5e:26:47:00:96:82:74:5b:0b:5b:2c:8c Adapters Paths Backing Disk Array Port UUID -------------- ------------ ----------------------- 8:0:2:0 sdz 50:06:01:61:47:e4:45:bd 7:0:2:0 sdj 50:06:01:60:47:e4:45:bd 7:0:3:0 sdn 50:06:01:62:47:e4:45:bd 8:0:3:0 sdad 50:06:01:63:47:e4:45:bd 7:0:0:0 sdb 50:06:01:68:47:e4:45:bd 7:0:1:0 sdf 50:06:01:6a:47:e4:45:bd 8:0:0:0 sdr 50:06:01:69:47:e4:45:bd 8:0:1:0 sdv 50:06:01:6b:47:e4:45:bd [root@lavulc01-new scripts]# ./dmcli.sh -l mpatha1 Disk mpatha1 not found or is a partition device. [root@lavulc01-new scripts]# ./dmcli.sh -l mpathd Failed to find device for physical volume "/dev/mapper/mpathd?". LUN mpathd ID: 60:06:01:60:99:10:47:00:91:b9:c0:5c:8e:e3:29:86 Adapters Paths Backing Disk Array Port UUID -------------- ------------ ----------------------- 7:0:2:3 sdm 50:06:01:60:47:e4:45:bd 7:0:3:3 sdq 50:06:01:62:47:e4:45:bd 8:0:2:3 sdac 50:06:01:61:47:e4:45:bd 8:0:3:3 sdag 50:06:01:63:47:e4:45:bd 7:0:1:3 sdi 50:06:01:6a:47:e4:45:bd 7:0:0:3 sde 50:06:01:68:47:e4:45:bd 8:0:0:3 sdu 50:06:01:69:47:e4:45:bd 8:0:1:3 sdy 50:06:01:6b:47:e4:45:bd
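For readers skimming past the long EDIT, the core trick it lands on can be sketched in a few lines: emit one complete PV<TAB>VG<TAB>LV row per logical volume instead of pasting uneven columns, then let column(1) do the alignment. Variable names here are illustrative, not taken from the script:

    {
      printf '%s\t%s\t%s\n' "Physical Volume(s)" "Volume Group" "Logical Volume(s)"
      for lv in $LVS; do                  # $LVS assumed: whitespace-separated LV paths
        printf '%s\t%s\t%s\n' "$PV" "$VG" "$lv"
      done
    } | column -ts $'\t'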
|
Column format separation using paste and an array Using bash for this script. I'm trying to display values of a disk by gathering the information and outputting them using paste. Sometimes a single physical volume may be assigned to multiple logical volumes. The output of that when using paste comes out a little funky. Here is the function I created. function ShowVolumeAttr() { local lun_id="$1" # Show info on PV, VG, and LV PV=$(pvscan | grep $lun_id | awk '{print $2}') VG=$(pvscan | grep $lun_id | awk '{print $4}') declare -a LV for pv in $PV do LV[${#LV[@]}]+=$(pvdisplay -m $pv | grep "Logical volume" | awk '{print $3}') done echo -e "Physical Volume(s)\tVolume Group\tLogical Volume(s)" echo -e "------------------\t------------\t-----------------" paste <(printf %s "$PV") <(printf "\t%s" "$VG") <(printf "\t%s" "${LV[@]}") } And here is what the output looks like. I was hoping to have all of the Logical Volume entries line up under each other, but since it each entry after the 1st in the array gets its own line and there is no PV/VG in front of it, it tabs normally from the left side. Is there any way to line them up? Physical Volume(s) Volume Group Logical Volume(s) ------------------ ------------ ----------------- /dev/sda2 rhel /dev/rhel/root /dev/rhel/var /dev/rhel/home /dev/rhel/swap /dev/rhel/usr /dev/rhel/tmp /dev/rhel/opt /dev/rhel/var Thanks, Patrick Tried to separate them out with different printf commands as well. Also tried adding tabs to the last printf entry. EDIT Ended up doing the below and it comes out looking nice. Kept playing with it over the last couple of days. for pv in $PV do PVARRAY[${#PVARRAY[@]}]+=$pv done # Stack Overflow Addition [URL] for pv in "${PVARRAY[@]}" do # Add Logical Volumes for $pv into the array LV[${#LV[@]}]+=$(pvdisplay -m $pv | grep "Logical volume" | awk '{print $3}') # Loop through array and display contents for lv in "${!LV[@]}" do printf "%s\t%s\t%s\t%s\n" "Physical Volume(s)" "Volume Group" "Logical Volume(s)" "LV Size" printf "%s\t%s\t%s\t%s\n" "------------------" "------------" "-----------------" "-------" if [ "$lv" -ne 0 ] then printf "No physical volumes" fi PHYSV=$(lvs --segments ${LV[$lv]} -o+lv_size,devices | tail -n +2 | grep $pv | awk '{print $8}' | sed "s/([^)]*)/()/g" | tr -d '()') #echo -e "PHYSV: $PHYSV" LOGVOL=$(pvs $pv -o+lv_path,devices | grep ${LV[$lv]} | awk '{print $8}') #echo -e "LOGVOL: $LOGVOL" PHYSVOL=$(lvs --segments ${LV[$lv]} -o+lv_size,devices | tail -n +2 | grep $pv | awk '{print $6}') #echo -e "PHYSVOL: $PHYSVOL" VOLGRP=$(lvs --segments ${LV[$lv]} -o+lv_size,devices | tail -n +2 | grep $pv | awk '{print $2}') #echo -e "VOLGRP: $VOLGRP" paste <(printf "%s" "$PHYSV") <(printf "%s" "$VOLGRP") <(printf "%s" "${LV[$lv]}") <(printf "%s" "$PHYSVOL") done | column -ts $'\t' # Clear array LV=() done Couple of different tests: [root@lavulc01-new scripts]# ./dmcli.sh -l sdb Partition /dev/sdb1 exists but there are no physical volumes found! 
[root@lavulc01-new scripts]# ./dmcli.sh -l sda Physical Volume(s) Volume Group Logical Volume(s) LV Size ------------------ ------------ ----------------- ------- /dev/sda2 rhel /dev/rhel/root 8.00g /dev/sda2 rhel /dev/rhel/var 8.00g /dev/sda2 rhel /dev/rhel/home 8.00g /dev/sda2 rhel /dev/rhel/swap 32.00g /dev/sda2 rhel /dev/rhel/usr 8.00g /dev/sda2 rhel /dev/rhel/tmp 16.00g /dev/sda2 rhel /dev/rhel/opt 8.00g /dev/sda2 rhel /dev/rhel/var 4.00m Physical Volume(s) Volume Group Logical Volume(s) LV Size ------------------ ------------ ----------------- ------- /dev/sda3 rhel /dev/rhel/var <92.00g [root@lavulc01-new scripts]# ./dmcli.sh -l mpatha Physical Volume(s) Volume Group Logical Volume(s) LV Size ------------------ ------------ ----------------- ------- /dev/mapper/mpatha1 u01_vg /dev/u01_vg/u01_lv <1.50t LUN mpatha ID: 60:06:01:60:5e:26:47:00:96:82:74:5b:0b:5b:2c:8c Adapters Paths Backing Disk Array Port UUID -------------- ------------ ----------------------- 8:0:2:0 sdz 50:06:01:61:47:e4:45:bd 7:0:2:0 sdj 50:06:01:60:47:e4:45:bd 7:0:3:0 sdn 50:06:01:62:47:e4:45:bd 8:0:3:0 sdad 50:06:01:63:47:e4:45:bd 7:0:0:0 sdb 50:06:01:68:47:e4:45:bd 7:0:1:0 sdf 50:06:01:6a:47:e4:45:bd 8:0:0:0 sdr 50:06:01:69:47:e4:45:bd 8:0:1:0 sdv 50:06:01:6b:47:e4:45:bd [root@lavulc01-new scripts]# ./dmcli.sh -l mpatha1 Disk mpatha1 not found or is a partition device. [root@lavulc01-new scripts]# ./dmcli.sh -l mpathd Failed to find device for physical volume "/dev/mapper/mpathd?". LUN mpathd ID: 60:06:01:60:99:10:47:00:91:b9:c0:5c:8e:e3:29:86 Adapters Paths Backing Disk Array Port UUID -------------- ------------ ----------------------- 7:0:2:3 sdm 50:06:01:60:47:e4:45:bd 7:0:3:3 sdq 50:06:01:62:47:e4:45:bd 8:0:2:3 sdac 50:06:01:61:47:e4:45:bd 8:0:3:3 sdag 50:06:01:63:47:e4:45:bd 7:0:1:3 sdi 50:06:01:6a:47:e4:45:bd 7:0:0:3 sde 50:06:01:68:47:e4:45:bd 8:0:0:3 sdu 50:06:01:69:47:e4:45:bd 8:0:1:3 sdy 50:06:01:6b:47:e4:45:bd
|
bash, rhel
| 1
| 102
| 1
|
https://stackoverflow.com/questions/77928800/column-format-separation-using-paste-and-an-array
|
75,668,331
|
How to require RPM dependency from a certain module stream?
|
I have to package my Node.js app into an RPM, which will be installed on Oracle Linux 8 using dnf from a private registry. My app requires Node.js 16 to work properly. It looks like there are two ways to define requirements of my app: Add a note somewhere and hope that user will install Node.js 16 first. Use Requires field of RPM. I prefer the second one, so I added this into my RPM spec: Requires: nodejs >= 16.14 However, Node.js 10 was installed during the RPM installation. When I tried to reproduce the issue I figured out that Node.js 10 is exactly what dnf can find: $ dnf search nodejs -v ============================================= Name Exactly Matched: nodejs ============================================= nodejs.x86_64 : JavaScript runtime Repo : ol8_appstream Matched from: Provide : nodejs = 1:10.24.0-1.module+el8.3.0+9671+154373c8 I went to dnf docs and found out that there is a modularity concept in Fedora universe. As I understand, it's kind of “release channel”, or, as they call it, “stream”. Usually it's related to a major version of the package. So, it looks like first I have to switch the module stream, and then install Node.js. I checked what streams I have to choose from: $ sudo dnf module list nodejs Oracle Linux 8 Application Stream (x86_64) Name Stream Profiles Summary nodejs 10 [d] common [d], development, minimal, s2i Javascript runtime nodejs 12 common [d], development, minimal, s2i Javascript runtime nodejs 14 common [d], development, minimal, s2i Javascript runtime nodejs 16 common [d], development, minimal, s2i Javascript runtime nodejs 18 common [d], development, minimal, s2i Javascript runtime Oracle Linux 8 EPEL Modular Packages for Development (x86_64) Name Stream Profiles Summary nodejs 13 default, development, minimal Javascript runtime nodejs 16-epel default, development, minimal Javascript runtime Sure, I can switch the stream manually and install Node.js 16 by myself. But I would like to write my RPM in a way which will tell dnf to do it for me. My question is: is it even possible? Is there a way for an RPM to require exact module stream? I searched all over the internet and could not find anything. I feels like usually folks do not do things like that in Fedora/RHEL/OL world. If so, please, tell me what is the correct way to require a proper Node.js version in my case? Update: I've figured out the problem with Node.js 10 being installed when I required 16. It happened because nodejs in OL8 has Epoch set to 1, which makes the whole version of Node.js 10 1:10.24... , while I do not set Epoch, which means I'm asking for 0:16.14... . Hence, dnf makes a correct assumption installing 1:10.24 when I ask for >= 0:16.14 . Get more: Related issue in RPM repo Versioning doc
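Building on the Update at the end of the question, a hedged sketch: since nodejs in the EL8 streams carries Epoch 1, the version comparison has to name it, and it may still be necessary to enable the stream out-of-band, because dnf will not silently switch away from an enabled module stream:

    # In the spec: epoch-qualified, so 1:10.x no longer satisfies the range
    Requires: nodejs >= 1:16.14

    # On the target host (or in install docs), switch the stream explicitly:
    #   dnf module enable nodejs:16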
|
How to require RPM dependency from a certain module stream? I have to package my Node.js app into an RPM, which will be installed on Oracle Linux 8 using dnf from a private registry. My app requires Node.js 16 to work properly. It looks like there are two ways to define requirements of my app: Add a note somewhere and hope that user will install Node.js 16 first. Use Requires field of RPM. I prefer the second one, so I added this into my RPM spec: Requires: nodejs >= 16.14 However, Node.js 10 was installed during the RPM installation. When I tried to reproduce the issue I figured out that Node.js 10 is exactly what dnf can find: $ dnf search nodejs -v ============================================= Name Exactly Matched: nodejs ============================================= nodejs.x86_64 : JavaScript runtime Repo : ol8_appstream Matched from: Provide : nodejs = 1:10.24.0-1.module+el8.3.0+9671+154373c8 I went to dnf docs and found out that there is a modularity concept in Fedora universe. As I understand, it's kind of “release channel”, or, as they call it, “stream”. Usually it's related to a major version of the package. So, it looks like first I have to switch the module stream, and then install Node.js. I checked what streams I have to choose from: $ sudo dnf module list nodejs Oracle Linux 8 Application Stream (x86_64) Name Stream Profiles Summary nodejs 10 [d] common [d], development, minimal, s2i Javascript runtime nodejs 12 common [d], development, minimal, s2i Javascript runtime nodejs 14 common [d], development, minimal, s2i Javascript runtime nodejs 16 common [d], development, minimal, s2i Javascript runtime nodejs 18 common [d], development, minimal, s2i Javascript runtime Oracle Linux 8 EPEL Modular Packages for Development (x86_64) Name Stream Profiles Summary nodejs 13 default, development, minimal Javascript runtime nodejs 16-epel default, development, minimal Javascript runtime Sure, I can switch the stream manually and install Node.js 16 by myself. But I would like to write my RPM in a way which will tell dnf to do it for me. My question is: is it even possible? Is there a way for an RPM to require exact module stream? I searched all over the internet and could not find anything. I feels like usually folks do not do things like that in Fedora/RHEL/OL world. If so, please, tell me what is the correct way to require a proper Node.js version in my case? Update: I've figured out the problem with Node.js 10 being installed when I required 16. It happened because nodejs in OL8 has Epoch set to 1, which makes the whole version of Node.js 10 1:10.24... , while I do not set Epoch, which means I'm asking for 0:16.14... . Hence, dnf makes a correct assumption installing 1:10.24 when I ask for >= 0:16.14 . Get more: Related issue in RPM repo Versioning doc
|
fedora, rpm, rhel, dnf, oraclelinux
| 1
| 961
| 1
|
https://stackoverflow.com/questions/75668331/how-to-require-rpm-dependency-from-a-certain-module-stream
|
73,584,541
|
Why does the result of OpenSSL differ for TLS versions?
|
Our Java application is running on the RHEL 8.5 OS platform. The application makes use of Apache HTTPd and the Amazon Corretto JDK 1.8.0_322 release. And we have added the below code in the ssl.conf file to enhance the security level. SSLProtocol -ALL +TLSv1.2 SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:AES256-CCM:DHE-RSA-AES256-CCM In the above part, we have enabled Only TLSv1.2. The below OpenSSL commands are displaying different results. Scenario 1: [root@test ~]# openssl ciphers -v | awk '{print $2}' | sort | uniq SSLv3 TLSv1 TLSv1.2 TLSv1.3 Scenario 2: [root@test ~]# /usr/bin/openssl ciphers -v TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD TLS_AES_128_CCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESCCM(128) Mac=AEAD ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-ECDSA-AES256-CCM TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESCCM(256) Mac=AEAD ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-CCM TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESCCM(128) Mac=AEAD ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES256-SHA TLSv1 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES256-SHA TLSv1 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES128-SHA TLSv1 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-RSA-AES128-SHA TLSv1 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD AES256-CCM TLSv1.2 Kx=RSA Au=RSA Enc=AESCCM(256) Mac=AEAD AES128-GCM-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(128) Mac=AEAD AES128-CCM TLSv1.2 Kx=RSA Au=RSA Enc=AESCCM(128) Mac=AEAD AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256 AES128-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA256 AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1 AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=DH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD DHE-RSA-AES256-CCM TLSv1.2 Kx=DH Au=RSA Enc=AESCCM(256) Mac=AEAD DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-CCM TLSv1.2 Kx=DH Au=RSA Enc=AESCCM(128) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1 PSK-AES256-GCM-SHA384 TLSv1.2 Kx=PSK Au=PSK Enc=AESGCM(256) Mac=AEAD PSK-CHACHA20-POLY1305 TLSv1.2 Kx=PSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD PSK-AES256-CCM TLSv1.2 Kx=PSK Au=PSK Enc=AESCCM(256) Mac=AEAD PSK-AES128-GCM-SHA256 TLSv1.2 Kx=PSK Au=PSK Enc=AESGCM(128) Mac=AEAD PSK-AES128-CCM 
TLSv1.2 Kx=PSK Au=PSK Enc=AESCCM(128) Mac=AEAD PSK-AES256-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(256) Mac=SHA1 PSK-AES128-CBC-SHA256 TLSv1 Kx=PSK Au=PSK Enc=AES(128) Mac=SHA256 PSK-AES128-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(128) Mac=SHA1 DHE-PSK-AES256-GCM-SHA384 TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESGCM(256) Mac=AEAD DHE-PSK-CHACHA20-POLY1305 TLSv1.2 Kx=DHEPSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD DHE-PSK-AES256-CCM TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESCCM(256) Mac=AEAD DHE-PSK-AES128-GCM-SHA256 TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESGCM(128) Mac=AEAD DHE-PSK-AES128-CCM TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESCCM(128) Mac=AEAD DHE-PSK-AES256-CBC-SHA SSLv3 Kx=DHEPSK Au=PSK Enc=AES(256) Mac=SHA1 DHE-PSK-AES128-CBC-SHA256 TLSv1 Kx=DHEPSK Au=PSK Enc=AES(128) Mac=SHA256 DHE-PSK-AES128-CBC-SHA SSLv3 Kx=DHEPSK Au=PSK Enc=AES(128) Mac=SHA1 ECDHE-PSK-CHACHA20-POLY1305 TLSv1.2 Kx=ECDHEPSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-PSK-AES256-CBC-SHA TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(256) Mac=SHA1 ECDHE-PSK-AES128-CBC-SHA256 TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(128) Mac=SHA256 ECDHE-PSK-AES128-CBC-SHA TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(128) Mac=SHA1 [root@test ~]# Scenario 3: [root@test ~]# openssl s_client -connect <IP_ADDRESS>:8443 -tls1 CONNECTED(00000003) 139679030896448:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1544:SSL alert number 70 --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 104 bytes Verification: OK --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: PSK identity: None PSK identity hint: None SRP username: None Start Time: 1662128840 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no --- Scenario 4: [root@test ~]# nmap -sV --script ssl-enum-ciphers -p 8443 <IP_ADDRESS> Starting Nmap 7.70 ( [URL] ) at 2022-09-02 20:02 IST mass_dns: warning: Unable to open /etc/resolv.conf. Try using --system-dns or specify valid servers with --dns-servers mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns-servers Nmap scan report for XXXXX (IP_ADDRESS) Host is up (0.00067s latency). PORT STATE SERVICE VERSION 8443/tcp open ssl/http Apache httpd |_http-server-header: Apache | ssl-enum-ciphers: | TLSv1.2: | ciphers: | TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A | TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A | compressors: | NULL | cipher preference: client |_ least strength: A MAC Address: 00:50:56:A7:92:7B (VMware) Service detection performed. Please report any incorrect results at [URL] . Nmap done: 1 IP address (1 host up) scanned in 12.90 seconds In the above results, scenario 1 and scenario 2 is showing that TLSv1 is also enabled. But scenario 3 and scenario 4 is showing that only TLSv1.2 is enabled. We are confused with the above results because of whether TLSv1 is enabled or not. Please help us with the answers to the below queries. Why is the OpenSSL command displaying different results(Scenario 2 & 4)? How do I disable TLSv1 if it is enabled? How to confirm that TLSv1 is disabled on our server for Linux?
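A hedged way to reconcile the four scenarios before the answers: openssl ciphers only reports what the local libssl build supports, while s_client and nmap exercise the actual listener, so they answer different questions. A rough per-protocol probe of the server itself (host/port as in the question; a connection failure before the handshake would also print "refused" here):

    for proto in tls1 tls1_1 tls1_2 tls1_3; do
      printf '%-7s ' "$proto"
      # A refused protocol leaves s_client with no negotiated cipher
      if openssl s_client -connect <IP_ADDRESS>:8443 -$proto </dev/null 2>/dev/null \
           | grep -q 'Cipher is (NONE)'; then
        echo refused
      else
        echo accepted
      fi
    done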
|
Why does the result of OpenSSL differ for TLS versions? Our Java application is running on the RHEL 8.5 OS platform. The application makes use of Apache HTTPd and the Amazon Corretto JDK 1.8.0_322 release. And we have added the below code in the ssl.conf file to enhance the security level. SSLProtocol -ALL +TLSv1.2 SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:AES256-CCM:DHE-RSA-AES256-CCM In the above part, we have enabled Only TLSv1.2. The below OpenSSL commands are displaying different results. Scenario 1: [root@test ~]# openssl ciphers -v | awk '{print $2}' | sort | uniq SSLv3 TLSv1 TLSv1.2 TLSv1.3 Scenario 2: [root@test ~]# /usr/bin/openssl ciphers -v TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD TLS_AES_128_CCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESCCM(128) Mac=AEAD ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-ECDSA-AES256-CCM TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESCCM(256) Mac=AEAD ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-CCM TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESCCM(128) Mac=AEAD ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES256-SHA TLSv1 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES256-SHA TLSv1 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES128-SHA TLSv1 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-RSA-AES128-SHA TLSv1 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD AES256-CCM TLSv1.2 Kx=RSA Au=RSA Enc=AESCCM(256) Mac=AEAD AES128-GCM-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(128) Mac=AEAD AES128-CCM TLSv1.2 Kx=RSA Au=RSA Enc=AESCCM(128) Mac=AEAD AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256 AES128-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA256 AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1 AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=DH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD DHE-RSA-AES256-CCM TLSv1.2 Kx=DH Au=RSA Enc=AESCCM(256) Mac=AEAD DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-CCM TLSv1.2 Kx=DH Au=RSA Enc=AESCCM(128) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1 PSK-AES256-GCM-SHA384 TLSv1.2 Kx=PSK Au=PSK Enc=AESGCM(256) Mac=AEAD PSK-CHACHA20-POLY1305 TLSv1.2 Kx=PSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD PSK-AES256-CCM TLSv1.2 Kx=PSK Au=PSK Enc=AESCCM(256) Mac=AEAD PSK-AES128-GCM-SHA256 TLSv1.2 
Kx=PSK Au=PSK Enc=AESGCM(128) Mac=AEAD PSK-AES128-CCM TLSv1.2 Kx=PSK Au=PSK Enc=AESCCM(128) Mac=AEAD PSK-AES256-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(256) Mac=SHA1 PSK-AES128-CBC-SHA256 TLSv1 Kx=PSK Au=PSK Enc=AES(128) Mac=SHA256 PSK-AES128-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(128) Mac=SHA1 DHE-PSK-AES256-GCM-SHA384 TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESGCM(256) Mac=AEAD DHE-PSK-CHACHA20-POLY1305 TLSv1.2 Kx=DHEPSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD DHE-PSK-AES256-CCM TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESCCM(256) Mac=AEAD DHE-PSK-AES128-GCM-SHA256 TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESGCM(128) Mac=AEAD DHE-PSK-AES128-CCM TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESCCM(128) Mac=AEAD DHE-PSK-AES256-CBC-SHA SSLv3 Kx=DHEPSK Au=PSK Enc=AES(256) Mac=SHA1 DHE-PSK-AES128-CBC-SHA256 TLSv1 Kx=DHEPSK Au=PSK Enc=AES(128) Mac=SHA256 DHE-PSK-AES128-CBC-SHA SSLv3 Kx=DHEPSK Au=PSK Enc=AES(128) Mac=SHA1 ECDHE-PSK-CHACHA20-POLY1305 TLSv1.2 Kx=ECDHEPSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-PSK-AES256-CBC-SHA TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(256) Mac=SHA1 ECDHE-PSK-AES128-CBC-SHA256 TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(128) Mac=SHA256 ECDHE-PSK-AES128-CBC-SHA TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(128) Mac=SHA1 [root@test ~]# Scenario 3: [root@test ~]# openssl s_client -connect <IP_ADDRESS>:8443 -tls1 CONNECTED(00000003) 139679030896448:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1544:SSL alert number 70 --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 104 bytes Verification: OK --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: PSK identity: None PSK identity hint: None SRP username: None Start Time: 1662128840 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no --- Scenario 4: [root@test ~]# nmap -sV --script ssl-enum-ciphers -p 8443 <IP_ADDRESS> Starting Nmap 7.70 ( [URL] ) at 2022-09-02 20:02 IST mass_dns: warning: Unable to open /etc/resolv.conf. Try using --system-dns or specify valid servers with --dns-servers mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns-servers Nmap scan report for XXXXX (IP_ADDRESS) Host is up (0.00067s latency). PORT STATE SERVICE VERSION 8443/tcp open ssl/http Apache httpd |_http-server-header: Apache | ssl-enum-ciphers: | TLSv1.2: | ciphers: | TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A | TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A | compressors: | NULL | cipher preference: client |_ least strength: A MAC Address: 00:50:56:A7:92:7B (VMware) Service detection performed. Please report any incorrect results at [URL] . Nmap done: 1 IP address (1 host up) scanned in 12.90 seconds In the above results, scenario 1 and scenario 2 is showing that TLSv1 is also enabled. But scenario 3 and scenario 4 is showing that only TLSv1.2 is enabled. We are confused with the above results because of whether TLSv1 is enabled or not. Please help us with the answers to the below queries. Why is the OpenSSL command displaying different results(Scenario 2 & 4)? How do I disable TLSv1 if it is enabled? How to confirm that TLSv1 is disabled on our server for Linux?
|
java, ssl, openssl, tls1.2, rhel
| 1
| 669
| 1
|
https://stackoverflow.com/questions/73584541/why-does-the-result-of-openssl-differ-for-tls-versions
|
73,036,422
|
Configure keycloak.service to run keycloak 18.0.2 as a daemon process in rhel
|
I am trying to configure keycloak.service in systemd to run Keycloak 18.0.2 as a daemon process. There is a current folder which is a symlink to the kk folder. I am trying to start Keycloak in dev mode on port 8180: [Unit] Description=Keycloak After=network.target [Service] Type=idle User=keycloak Group=keycloak ExecStart=/opt/keycloak/current/bin/kc.sh start-dev --http-port=8180 TimeoutStartSec=600 TimeoutStopSec=600 [Install] WantedBy=multi-user.target But it didn't work. Also, if I just run bin/kc.sh start-dev --http-port=8180 it works correctly, but not as a daemon process.
|
Configure keycloak.service to run keycloak 18.0.2 as a daemon process in rhel I am trying to configure keycloak.service in systemd to run Keycloak 18.0.2 as a daemon process. There is a current folder which is a symlink to the kk folder. I am trying to start Keycloak in dev mode on port 8180: [Unit] Description=Keycloak After=network.target [Service] Type=idle User=keycloak Group=keycloak ExecStart=/opt/keycloak/current/bin/kc.sh start-dev --http-port=8180 TimeoutStartSec=600 TimeoutStopSec=600 [Install] WantedBy=multi-user.target But it didn't work. Also, if I just run bin/kc.sh start-dev --http-port=8180 it works correctly, but not as a daemon process.
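A debugging sketch, assuming the unit file lives at /etc/systemd/system/keycloak.service — the unit as posted gives no hint of the actual failure, so the first step is pulling Keycloak's own startup error from the journal:
sudo systemctl daemon-reload
sudo systemctl enable --now keycloak
systemctl status keycloak --no-pager   # did ExecStart launch at all?
journalctl -u keycloak -e              # Keycloak's startup errors land here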
|
keycloak, daemon, rhel
| 1
| 13,909
| 4
|
https://stackoverflow.com/questions/73036422/configure-keycloak-service-to-run-keycloak-18-0-2-as-a-daemon-process-in-rhel
|
72,651,083
|
run RHEL9 docker container on centos 8 host
|
My main developer box is running Centos8. I'm working on a project where I need to do some builds on RHEL7/8/9. I have docker installed on the host and pulling RHEL7 image from registry.redhat.io/rhel7:7.9-702.1655292978 , RHEL8 from docker hub ( redhat/ubi8:latest ) and RHEL9 also from docker hub ( redhat/ubi9:latest ). RHEL 7/8 work without issue but RHEL9 has the error: subscription-manager is disabled when running inside a container. Please refer to your host system for subscription management. I have a valid subscription but for some reason, it is not possible to actually run a RHEL9 image from a non RHEL host. I'm not sure I understand the reason for this but is there a workaround (other than changing the host to RHEL) so that I can register my RHEL9 container?
|
run RHEL9 docker container on centos 8 host My main developer box is running Centos8. I'm working on a project where I need to do some builds on RHEL7/8/9. I have docker installed on the host and pulling RHEL7 image from registry.redhat.io/rhel7:7.9-702.1655292978 , RHEL8 from docker hub ( redhat/ubi8:latest ) and RHEL9 also from docker hub ( redhat/ubi9:latest ). RHEL 7/8 work without issue but RHEL9 has the error: subscription-manager is disabled when running inside a container. Please refer to your host system for subscription management. I have a valid subscription but for some reason, it is not possible to actually run a RHEL9 image from a non RHEL host. I'm not sure I understand the reason for this but is there a workaround (other than changing the host to RHEL) so that I can register my RHEL9 container?
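One commonly cited workaround is to copy the entitlement certificates and rhsm configuration from a registered RHEL machine and mount them into the container; a sketch under that assumption (the ./entitlement and ./rhsm directories are hypothetical copies taken from a subscribed host):
docker run -it \
  -v $(pwd)/entitlement:/etc/pki/entitlement:Z \
  -v $(pwd)/rhsm:/etc/rhsm:Z \
  redhat/ubi9:latest bash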
|
docker, rhel
| 1
| 2,644
| 1
|
https://stackoverflow.com/questions/72651083/run-rhel9-docker-container-on-centos-8-host
|
71,591,219
|
ALBD SERVER ISSUES
|
I am working in Red Hat Linux, trying to build RPMs that are accessed via ClearCase VOBs, but I keep getting recurring errors which state: 'unable to find albd-server on host ', 'Unable to contact view - clearcase object not found' and 'Unknown host - Name or service not know'. Any guidance would be much appreciated.
|
ALBD SERVER ISSUES I am working in Red Hat Linux, trying to build RPMs that are accessed via ClearCase VOBs, but I keep getting recurring errors which state: 'unable to find albd-server on host ', 'Unable to contact view - clearcase object not found' and 'Unknown host - Name or service not know'. Any guidance would be much appreciated.
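The 'Unknown host - Name or service not known' part points at name resolution failing before anything ClearCase-specific; a first-pass diagnostic sketch, with <vobhost> standing in as a hypothetical placeholder for the VOB/view server name:
getent hosts <vobhost>    # does /etc/hosts or DNS know the server at all?
ping -c1 <vobhost>
nc -vz <vobhost> 371      # albd listens on port 371; is it reachable?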
|
clearcase, rhel
| 1
| 711
| 3
|
https://stackoverflow.com/questions/71591219/albd-server-issues
|
69,917,489
|
How to block the auto append of architecture(x86_64) in rpmbuild
|
I'm trying to build an RPM using rpmbuild on RHEL 8.4. All the details related to the RPM are given in a SPEC file. In the "Processing file" stage, the architecture (x86_64) is automatically appended at the end of the N-V-R: Processing file : Application-Server-10.2.0-3.x86-64. After that, I added " BuildArch: noarch " to the SPEC file to block the architecture. But the result was: Processing file : Application-Server-10.2.0-3.noarch This appended architecture leads to errors like "File Not found", and finally the build fails with "usr/bin/rpmbuild failed with exit code 1". The SPEC file is like: # Version Name : ApplicationServer Version : 10.2.0 Release : 3 License : xxxx BuildArch: noarch %description Application Server %files %defattr(0755,xxx,xxx) %attr(0755,xxx,xxx) /jboss %changelog And the final error was: CreateRPM: [copy] Copying 1 file to /root/xxxx/xxxx/xxx/xxx/work/SPECS [rpm] Building the RPM based on the xxxx.spec file [rpm] Processing files: ApplicationServer-10.2.0-3.x86_64 [rpm] [rpm] [rpm] RPM build errors: [rpm] error: File not found: /root/xxxx/xxxx/xxx/xxx/work/BUILDROOT/ApplicationServer-10.2.0-3.x86_64/jboss [rpm] File not found: /root/xxxx/xxxx/xxx/xxx/work/BUILDROOT/ApplicationServer-10.2.0-3.x86_64/jboss [rpm] '/usr/bin/rpmbuild' failed with exit code 1 Please let me know a workaround. Thanks
|
How to block the auto append of architecture(x86_64) in rpmbuild I'm trying to build an RPM using rpmbuild on RHEL 8.4. All the details related to the RPM are given in a SPEC file. In the "Processing file" stage, the architecture (x86_64) is automatically appended at the end of the N-V-R: Processing file : Application-Server-10.2.0-3.x86-64. After that, I added " BuildArch: noarch " to the SPEC file to block the architecture. But the result was: Processing file : Application-Server-10.2.0-3.noarch This appended architecture leads to errors like "File Not found", and finally the build fails with "usr/bin/rpmbuild failed with exit code 1". The SPEC file is like: # Version Name : ApplicationServer Version : 10.2.0 Release : 3 License : xxxx BuildArch: noarch %description Application Server %files %defattr(0755,xxx,xxx) %attr(0755,xxx,xxx) /jboss %changelog And the final error was: CreateRPM: [copy] Copying 1 file to /root/xxxx/xxxx/xxx/xxx/work/SPECS [rpm] Building the RPM based on the xxxx.spec file [rpm] Processing files: ApplicationServer-10.2.0-3.x86_64 [rpm] [rpm] [rpm] RPM build errors: [rpm] error: File not found: /root/xxxx/xxxx/xxx/xxx/work/BUILDROOT/ApplicationServer-10.2.0-3.x86_64/jboss [rpm] File not found: /root/xxxx/xxxx/xxx/xxx/work/BUILDROOT/ApplicationServer-10.2.0-3.x86_64/jboss [rpm] '/usr/bin/rpmbuild' failed with exit code 1 Please let me know a workaround. Thanks
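A note on the likely cause: rpmbuild appends the build target to the BUILDROOT directory name whether or not the package is noarch, and %files then expects /jboss to exist inside that buildroot. A hedged sketch of staging the payload in the spec's %install scriptlet (the source path is hypothetical):
%install
# stage the payload so %files can find %{buildroot}/jboss
mkdir -p %{buildroot}/jboss
cp -a %{_sourcedir}/jboss/* %{buildroot}/jboss/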
|
rpm, rhel, rpmbuild, rpm-spec
| 1
| 722
| 1
|
https://stackoverflow.com/questions/69917489/how-to-block-the-auto-append-of-architecturex86-64-in-rpmbuild
|
69,442,250
|
You do not have the PROCESS privilege at /usr/bin/pt-online-schema-change line 4456, <STDIN> line 1
|
I'm experimenting with the pt-online-schema-change tool with a MySQL 5.7 database hosted by AWS RDS. I have gotten past a few different configuration issues, and when I run a dry-run everything looks good, but I get this error immediately when I try to run the command with the execute flag. I'm running this on Amazon Linux 2, which is RHEL 8 based (I believe). I'm not sure if this is related to my Linux permissions or my database permissions. I'm running this as root on my workstation and my MySQL user has fairly elevated permissions. I found a Percona forum post on this, but it's from a long time ago with only one response, and the response doesn't provide any concrete advice on how to resolve this; I'm not sure the response is valid based on other internet research about Linux process permissions. Any advice on how to resolve this error? Do I need to add more permissions for my MySQL user or something on the Linux command line side? $ pt-online-schema-change D=my_db,t=my_table,h=my_host.com,u=my_user --alter="drop COLUMN ios_notification_token" --alter-foreign-keys-method="auto" --ask-pass --preserve-triggers --execute Enter MySQL password: You do not have the PROCESS privilege at /usr/bin/pt-online-schema-change line 4456, <STDIN> line 1.
|
You do not have the PROCESS privilege at /usr/bin/pt-online-schema-change line 4456, <STDIN> line 1 I'm experimenting with the pt-online-schema-change tool with a MySQL 5.7 database hosted by AWS RDS. I have gotten past a few different configuration issues, and when I run a dry-run everything looks good, but I get this error immediately when I try to run the command with the execute flag. I'm running this on Amazon Linux 2, which is RHEL 8 based (I believe). I'm not sure if this is related to my Linux permissions or my database permissions. I'm running this as root on my workstation and my MySQL user has fairly elevated permissions. I found a Percona forum post on this, but it's from a long time ago with only one response, and the response doesn't provide any concrete advice on how to resolve this; I'm not sure the response is valid based on other internet research about Linux process permissions. Any advice on how to resolve this error? Do I need to add more permissions for my MySQL user or something on the Linux command line side? $ pt-online-schema-change D=my_db,t=my_table,h=my_host.com,u=my_user --alter="drop COLUMN ios_notification_token" --alter-foreign-keys-method="auto" --ask-pass --preserve-triggers --execute Enter MySQL password: You do not have the PROCESS privilege at /usr/bin/pt-online-schema-change line 4456, <STDIN> line 1.
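This is a database-side privilege rather than a Linux one: the tool runs SHOW PROCESSLIST, which needs the global PROCESS privilege. A sketch of granting it from the shell, assuming an admin account that can grant (on RDS, the master user):
# PROCESS is a global privilege, so it must be granted ON *.*
mysql -h my_host.com -u admin_user -p \
  -e "GRANT PROCESS ON *.* TO 'my_user'@'%';"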
|
mysql, linux, amazon-rds, rhel, pt-online-schema-change
| 1
| 2,415
| 1
|
https://stackoverflow.com/questions/69442250/you-do-not-have-the-process-privilege-at-usr-bin-pt-online-schema-change-line-4
|
68,516,954
|
The stream or file could not be opened in append mode: failed to open stream: Permission denied
|
I'm hosting a Laravel project inside a RHEL 7 machine using NGINX and PHP 7.4-fpm. In my nginx.conf , I set the user as follows: user nexta; while in my .../.../php-fpm.d/www.conf user = nexta group = nexta listen = /var/run/php-fpm/php-fpm.sock listen.owner = nexta listen.group = nexta listen.mode = 0660 Using this setup, I managed to run the Laravel application; however, I am stuck with a permission denied error. It says: The stream or file "/home/nexta/mywebsite.com/storage/logs/laravel-2021-07-25.log" could not be opened in append mode: failed to open stream: Permission denied I have tried the following: Setting the chown and chmod sudo chown -R nexta:nexta /home/nexta/mywebsite.com sudo chown -R nexta:nexta /home/nexta/mywebsite.com/storage sudo chown -R nexta:nexta /home/nexta/mywebsite.com/bootstrap/cache sudo chmod -R 775 /home/nexta/mywebsite.com/storage sudo chmod -R 775 /home/nexta/mywebsite.com/bootstrap/cache Tried to use 777 (I know it's not a good practice) sudo chmod -R 777 /home/nexta/mywebsite.com/storage sudo chmod -R 777 /home/nexta/mywebsite.com/bootstrap/cache Changed the user in nginx.conf to nginx and chowned the site to nginx. The error still prevails and I am also unable to view my application. Ran the following: php artisan cache:clear composer dumpautoload What I also noticed: if I run php artisan cahce:clear (notice the typo), this triggers an error and the app is able to access the log file, yet when the app is accessed through an HTTP request, it is unable to access the log file. I also tried to echo whoami in the PHP request (using a non-Laravel application), and I got the following response: Returned with status 0 and output: Array ( [0] => nexta ) which means that when the app is executed by an HTTP request, the user is nexta. At this point, I'm not even sure what causes the error.
|
The stream or file could not be opened in append mode: failed to open stream: Permission denied I'm hosting a Laravel project inside a RHEL 7 machine using NGINX and PHP 7.4-fpm. In my nginx.conf , I set the user as follows: user nexta; while in my .../.../php-fpm.d/www.conf user = nexta group = nexta listen = /var/run/php-fpm/php-fpm.sock listen.owner = nexta listen.group = nexta listen.mode = 0660 Using this setup, I managed to run the Laravel application; however, I am stuck with a permission denied error. It says: The stream or file "/home/nexta/mywebsite.com/storage/logs/laravel-2021-07-25.log" could not be opened in append mode: failed to open stream: Permission denied I have tried the following: Setting the chown and chmod sudo chown -R nexta:nexta /home/nexta/mywebsite.com sudo chown -R nexta:nexta /home/nexta/mywebsite.com/storage sudo chown -R nexta:nexta /home/nexta/mywebsite.com/bootstrap/cache sudo chmod -R 775 /home/nexta/mywebsite.com/storage sudo chmod -R 775 /home/nexta/mywebsite.com/bootstrap/cache Tried to use 777 (I know it's not a good practice) sudo chmod -R 777 /home/nexta/mywebsite.com/storage sudo chmod -R 777 /home/nexta/mywebsite.com/bootstrap/cache Changed the user in nginx.conf to nginx and chowned the site to nginx. The error still prevails and I am also unable to view my application. Ran the following: php artisan cache:clear composer dumpautoload What I also noticed: if I run php artisan cahce:clear (notice the typo), this triggers an error and the app is able to access the log file, yet when the app is accessed through an HTTP request, it is unable to access the log file. I also tried to echo whoami in the PHP request (using a non-Laravel application), and I got the following response: Returned with status 0 and output: Array ( [0] => nexta ) which means that when the app is executed by an HTTP request, the user is nexta. At this point, I'm not even sure what causes the error.
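When ownership and even 777 don't help on RHEL while the CLI works, SELinux is the usual suspect: php-fpm behind nginx runs in a confined httpd context, while artisan on the command line does not. A hedged diagnostic/fix sketch (semanage ships in policycoreutils-python):
sudo ausearch -m AVC -ts recent | grep storage   # look for denials on the path
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/home/nexta/mywebsite.com/storage(/.*)?"
sudo restorecon -Rv /home/nexta/mywebsite.com/storage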
|
php, laravel, nginx, rhel
| 1
| 4,779
| 2
|
https://stackoverflow.com/questions/68516954/the-stream-or-file-could-not-be-opened-in-append-mode-failed-to-open-stream-pe
|
66,810,139
|
Issues installing Docker on RHEL 7 Linux Server
|
I have been constantly running into this issue more and more lately, and finally need some assistance because I'm completely stuck. I just got access to a RHEL EC2 Linux server and I am just simply trying to install Docker. This process has been extremely painful lately. Tons of 404 HTTP Not Found errors when trying to follow the processes mentioned online According to [URL] , you can just simply run one of the following two commands: sudo amazon-linux-extras install docker sudo yum install docker However, neither one of these comands work, as shown in the output below: [root@d8de679d27f2454 myuser]# sudo amazon-linux-extras install docker sudo: amazon-linux-extras: command not found [root@d8de679d27f2454 myuser]# yum install docker Loaded plugins: amazon-id, search-disabled-repos No package docker available. Error: Nothing to do [root@d8de679d27f2454 myuser]# Here is a list of things I've tried to do : First Attempt (RE: How to install docker on Amazon Linux2 ) The second answer proposed in that you can just run the following: sudo yum update -y sudo yum -y install docker However, that doesn't work either, as shown in the output below: [root@d8de679d27f2454 myuser]# yum update -y Loaded plugins: amazon-id, search-disabled-repos No packages marked for update [root@d8de679d27f2454 myuser]# yum -y install docker Loaded plugins: amazon-id, search-disabled-repos No package docker available. Error: Nothing to do [root@d8de679d27f2454 myuser]# Second Attempt : Installing via get.docker.com When running curl [URL] | bash , that doesn't work either Third Attempt : [URL] Part of this article suggests running the following two commands: sudo yum install -y [URL] sudo yum install -y yum-utils device-mapper-persistent-data lvm2 However, that doesn't work either: # yum install -y yum-utils device-mapper-persistent-data lvm2 Loaded plugins: amazon-id, product-id, search-disabled-repos, subscription-manager This system is not registered with an entitlement server. You can use subscription-manager to register. [URL] [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article [URL] If above article doesn't help to resolve this issue please open a ticket with Red Hat Support. One of the configured repositories failed (Docker CE Stable - x86_64), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=docker-ce-stable ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable docker-ce-stable or subscription-manager repos --disable=docker-ce-stable 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). 
If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=docker-ce-stable.skip_if_unavailable=true failure: repodata/repomd.xml from docker-ce-stable: [Errno 256] No more mirrors to try. [URL] [Errno 14] HTTPS Error 404 - Not Found Here's the output of my cat /etc/os-release command NAME="Red Hat Enterprise Linux Server" VERSION="7.9 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.9" PRETTY_NAME="Red Hat Enterprise Linux Server 7.9 (Maipo)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.9:GA:server" HOME_URL="[URL] BUG_REPORT_URL="[URL] Any help would be greatly appreciated. It seems nearly impossible to install docker at this point.
|
Issues installing Docker on RHEL 7 Linux Server I have been constantly running into this issue more and more lately, and finally need some assistance because I'm completely stuck. I just got access to a RHEL EC2 Linux server and I am just simply trying to install Docker. This process has been extremely painful lately. Tons of 404 HTTP Not Found errors when trying to follow the processes mentioned online According to [URL] , you can just simply run one of the following two commands: sudo amazon-linux-extras install docker sudo yum install docker However, neither one of these comands work, as shown in the output below: [root@d8de679d27f2454 myuser]# sudo amazon-linux-extras install docker sudo: amazon-linux-extras: command not found [root@d8de679d27f2454 myuser]# yum install docker Loaded plugins: amazon-id, search-disabled-repos No package docker available. Error: Nothing to do [root@d8de679d27f2454 myuser]# Here is a list of things I've tried to do : First Attempt (RE: How to install docker on Amazon Linux2 ) The second answer proposed in that you can just run the following: sudo yum update -y sudo yum -y install docker However, that doesn't work either, as shown in the output below: [root@d8de679d27f2454 myuser]# yum update -y Loaded plugins: amazon-id, search-disabled-repos No packages marked for update [root@d8de679d27f2454 myuser]# yum -y install docker Loaded plugins: amazon-id, search-disabled-repos No package docker available. Error: Nothing to do [root@d8de679d27f2454 myuser]# Second Attempt : Installing via get.docker.com When running curl [URL] | bash , that doesn't work either Third Attempt : [URL] Part of this article suggests running the following two commands: sudo yum install -y [URL] sudo yum install -y yum-utils device-mapper-persistent-data lvm2 However, that doesn't work either: # yum install -y yum-utils device-mapper-persistent-data lvm2 Loaded plugins: amazon-id, product-id, search-disabled-repos, subscription-manager This system is not registered with an entitlement server. You can use subscription-manager to register. [URL] [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article [URL] If above article doesn't help to resolve this issue please open a ticket with Red Hat Support. One of the configured repositories failed (Docker CE Stable - x86_64), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. Run the command with the repository temporarily disabled yum --disablerepo=docker-ce-stable ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable docker-ce-stable or subscription-manager repos --disable=docker-ce-stable 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). 
If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=docker-ce-stable.skip_if_unavailable=true failure: repodata/repomd.xml from docker-ce-stable: [Errno 256] No more mirrors to try. [URL] [Errno 14] HTTPS Error 404 - Not Found Here's the output of my cat /etc/os-release command NAME="Red Hat Enterprise Linux Server" VERSION="7.9 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.9" PRETTY_NAME="Red Hat Enterprise Linux Server 7.9 (Maipo)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.9:GA:server" HOME_URL="[URL] BUG_REPORT_URL="[URL] Any help would be greatly appreciated. It seems nearly impossible to install docker at this point.
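The 404 comes from the docker-ce-stable repo already configured on the box: Docker Inc. publishes no rhel/7 x86_64 tree, so the CentOS 7 repo is the commonly used route on RHEL 7, and amazon-linux-extras only exists on Amazon Linux, which this host is not. A sketch (if yum-config-manager is missing, the repo can be disabled by editing /etc/yum.repos.d/docker-ce.repo instead):
sudo yum-config-manager --disable docker-ce-stable
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker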
|
linux, docker, rhel, rhel7
| 1
| 2,907
| 2
|
https://stackoverflow.com/questions/66810139/issues-installing-docker-on-rhel-7-linux-server
|
66,448,941
|
Freeing some memory space on my Amazon Linux 2
|
I was doing testing on my server and always getting this error: file for here-document: No space left on device -bash: cannot create temp file for here-document: No space left on device So I checked it with the command "df". Filesystem 1K-blocks Used Available Use% Mounted on devtmpfs 1970540 0 1970540 0% /dev tmpfs 1988952 0 1988952 0% /dev/shm tmpfs 1988952 65980 1922972 4% /run tmpfs 1988952 0 1988952 0% /sys/fs/cgroup /dev/nvme0n1p1 8376300 8376280 20 100% / tmpfs 397792 16 397776 1% /run/user/1000 I saw that the filesystem /dev/nvme0n1p1 is using most of the space. Is there any way I can free that space? Or what can I do to free up some disk space?
|
Freeing some memory space on my Amazon Linux 2 I was doing testing on my server and always getting this error: file for here-document: No space left on device -bash: cannot create temp file for here-document: No space left on device So I checked it with the command "df". Filesystem 1K-blocks Used Available Use% Mounted on devtmpfs 1970540 0 1970540 0% /dev tmpfs 1988952 0 1988952 0% /dev/shm tmpfs 1988952 65980 1922972 4% /run tmpfs 1988952 0 1988952 0% /sys/fs/cgroup /dev/nvme0n1p1 8376300 8376280 20 100% / tmpfs 397792 16 397776 1% /run/user/1000 I saw that the filesystem /dev/nvme0n1p1 is using most of the space. Is there any way I can free that space? Or what can I do to free up some disk space?
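This is disk, not RAM: the 8 GB root filesystem is full. A triage sketch:
sudo du -xh / --max-depth=2 2>/dev/null | sort -rh | head -20   # biggest consumers (-x stays on one filesystem)
sudo yum clean all                    # package caches are a common culprit
sudo journalctl --vacuum-size=100M    # trim the systemd journal
# alternatively, grow the EBS volume and then the partition/filesystem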
|
linux, amazon-web-services, amazon-ec2, amazon-ecs, rhel
| 1
| 3,966
| 1
|
https://stackoverflow.com/questions/66448941/freeing-some-memory-space-on-my-amazon-linux-2
|
65,598,089
|
R only ever runs on a certain CPU in Linux
|
I have an 8-core RHEL Linux machine running R 4.0.2. If I ask R for the number of cores, I can confirm that 8 are available. > print(future::availableWorkers()) [1] "localhost" "localhost" "localhost" "localhost" "localhost" "localhost" [7] "localhost" "localhost" > print(parallel::detectCores()) [1] 8 However, if I run this simple example f <- function(out=0) { for (i in 1:1e10) out <- out + 1 } output <- parallel::mclapply(1:8, f, mc.cores = 8) my top indicates that only 1 core is being used (so that each worker is using 1/8th of that core, or 1/64th of the entire machine). %Cpu0 :100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu1 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu2 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu3 : 2.0 us, 0.0 sy, 0.0 ni, 98.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu4 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu5 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu6 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu7 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 32684632 total, 28211076 free, 2409992 used, 2063564 buff/cache KiB Swap: 16449532 total, 11475052 free, 4974480 used. 29213180 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3483 user 20 0 493716 57980 948 R 1.8 0.2 0:18.09 R 3479 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3480 user 20 0 493716 57980 948 R 1.5 0.2 0:18.08 R 3481 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3482 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3484 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3485 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3486 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R Does anyone know what might be going on here? Another StackOverflow question that documents similar behavior is here . It's clear that I messed up the install somehow. I followed these install instructions for RHEL 7. I'm guessing there is a dependency missing, but I have no idea where to look. If anyone has any ideas of diagnostics to run, etc., they would be most appreciated. For further context, I have R 3.4.1 also installed on my machine, and when I run this code, everything works fine. (I installed that version through yum .) I also installed R 4.0.3 yesterday using the same instructions linked above, and it suffers from the same problem.
|
R only ever runs on a certain CPU in Linux I have an 8-core RHEL Linux machine running R 4.0.2. If I ask R for the number of cores, I can confirm that 8 are available. > print(future::availableWorkers()) [1] "localhost" "localhost" "localhost" "localhost" "localhost" "localhost" [7] "localhost" "localhost" > print(parallel::detectCores()) [1] 8 However, if I run this simple example f <- function(out=0) { for (i in 1:1e10) out <- out + 1 } output <- parallel::mclapply(1:8, f, mc.cores = 8) my top indicates that only 1 core is being used (so that each worker is using 1/8th of that core, or 1/64th of the entire machine). %Cpu0 :100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu1 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu2 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu3 : 2.0 us, 0.0 sy, 0.0 ni, 98.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu4 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu5 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu6 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st %Cpu7 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 32684632 total, 28211076 free, 2409992 used, 2063564 buff/cache KiB Swap: 16449532 total, 11475052 free, 4974480 used. 29213180 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3483 user 20 0 493716 57980 948 R 1.8 0.2 0:18.09 R 3479 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3480 user 20 0 493716 57980 948 R 1.5 0.2 0:18.08 R 3481 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3482 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3484 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3485 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R 3486 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R Does anyone know what might be going on here? Another StackOverflow question that documents similar behavior is here . It's clear that I messed up the install somehow. I followed these install instructions for RHEL 7. I'm guessing there is a dependency missing, but I have no idea where to look. If anyone has any ideas of diagnostics to run, etc., they would be most appreciated. For further context, I have R 3.4.1 also installed on my machine, and when I run this code, everything works fine. (I installed that version through yum .) I also installed R 4.0.3 yesterday using the same instructions linked above, and it suffers from the same problem.
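The symptom — eight runnable R processes sharing one core — looks like an inherited CPU-affinity mask rather than an R bug; a quick check-and-override sketch using taskset (util-linux):
taskset -cp $$     # affinity of the current shell; a single CPU here explains the pile-up
taskset -c 0-7 R   # launch R with an explicit full mask as a workaround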
|
r, linux, parallel-processing, rhel
| 1
| 290
| 1
|
https://stackoverflow.com/questions/65598089/r-only-ever-runs-on-a-certain-cpu-in-linux
|
63,990,002
|
Jenkins can't find binary in PATH and doesn't appear to inherit from Linux
|
I've tried everything for this and it's exhausted my knowledge. I'm trying to get Jenkins to use the PATH from Linux, and it seems it's not working. I'm running Jenkins on RHEL, and Jenkins runs using the 'jenkins' user. If I sudo su as the 'jenkins' user, I can see items like Blaze and Firefox on the PATH: (Both Jenkins and the server have been restarted multiple times since these have been on the path) If I firefox -V while in a jenkins user bash, it works fine. In a Jenkins job that attempts to use the Firefox driver, it errors with the following: Cannot find firefox binary in PATH. Make sure firefox is installed. OS appears to be: LINUX In the System Info, it has PATH=/sbin:/usr/sbin:/bin:/usr/bin Running as SYSTEM [EnvInject] - Loading node environment variables. The settings for the nodes are default, which I understand means it should inherit from Linux. I also made a job that runs the following: whoami echo $PATH firefox -V This returns: + whoami jenkins + echo /sbin:/usr/sbin:/bin:/usr/bin /sbin:/usr/sbin:/bin:/usr/bin + firefox -V /tmp/jenkins4835013049839580673.sh: line 4: firefox: command not found No matter what I try, I can't seem to get Jenkins to use the 'correct' PATH value. Is there something I'm missing, or is it as confusing as I am feeling? Additional Stuff: Adding a symlink into /usr/bin also doesn't work for some reason. I can't add them as "Environment Variables" within the Jenkins config as it seems to break the pipeline jobs that don't need the additional binaries. Can anyone help?
|
Jenkins can't find binary in PATH and doesn't appear to inherit from Linux I've tried everything for this and it's exhausted my knowledge. I'm trying to get Jenkins to use the PATH from Linux, and it seems it's not working. I'm running Jenkins on RHEL, and Jenkins runs using the 'jenkins' user. If I sudo su as the 'jenkins' user, I can see items like Blaze and Firefox on the PATH: (Both Jenkins and the server have been restarted multiple times since these have been on the path) If I firefox -V while in a jenkins user bash, it works fine. In a Jenkins job that attempts to use the Firefox driver, it errors with the following: Cannot find firefox binary in PATH. Make sure firefox is installed. OS appears to be: LINUX In the System Info, it has PATH=/sbin:/usr/sbin:/bin:/usr/bin Running as SYSTEM [EnvInject] - Loading node environment variables. The settings for the nodes are default, which I understand means it should inherit from Linux. I also made a job that runs the following: whoami echo $PATH firefox -V This returns: + whoami jenkins + echo /sbin:/usr/sbin:/bin:/usr/bin /sbin:/usr/sbin:/bin:/usr/bin + firefox -V /tmp/jenkins4835013049839580673.sh: line 4: firefox: command not found No matter what I try, I can't seem to get Jenkins to use the 'correct' PATH value. Is there something I'm missing, or is it as confusing as I am feeling? Additional Stuff: Adding a symlink into /usr/bin also doesn't work for some reason. I can't add them as "Environment Variables" within the Jenkins config as it seems to break the pipeline jobs that don't need the additional binaries. Can anyone help?
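Jenkins started as a service never reads the jenkins user's shell profile, so PATH additions made there don't reach jobs; a sketch of setting PATH at the service level (assuming a systemd-managed install; older init-script installs take the same variable in /etc/sysconfig/jenkins):
sudo systemctl edit jenkins   # opens a drop-in override; add, adjusted to where firefox lives:
#   [Service]
#   Environment="PATH=/usr/local/bin:/sbin:/usr/sbin:/bin:/usr/bin"
sudo systemctl daemon-reload && sudo systemctl restart jenkins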
|
linux, jenkins, rhel
| 1
| 1,981
| 1
|
https://stackoverflow.com/questions/63990002/jenkins-cant-find-binary-in-path-and-doesnt-appear-to-inherit-from-linux
|
63,817,939
|
Ansible to check for Java and install if not already on RHEL machines
|
Trying to create a job that will check for java and install if it comes back as not installed. All works with the exception of the install part. Ansible is telling me a conditional has not been met so it skips the install. - name: fetch java version shell: java -version 2>&1 | grep version | awk '{print $3}' | sed 's/" //g' changed_when: False register: java_result failed_when: False - name: print java version debug: msg: " {{ java_result.stdout }} " when: java_result.rc==0 - name: install java version yum: name: java-1.8.0-openjdk.x86_64 present: yes when: java_result.rc!=0 The end result that works: - name: fetch java version shell: java -version changed_when: False register: java_result failed_when: False - name: install java version yum: name: java state: latest when: java_result.rc!=0 become: yes become_user: root thanks.
|
Ansible to check for Java and install if not already on RHEL machines Trying to create a job that will check for java and install if it comes back as not installed. All works with the exception of the install part. Ansible is telling me a conditional has not been met so it skips the install. - name: fetch java version shell: java -version 2>&1 | grep version | awk '{print $3}' | sed 's/" //g' changed_when: False register: java_result failed_when: False - name: print java version debug: msg: " {{ java_result.stdout }} " when: java_result.rc==0 - name: install java version yum: name: java-1.8.0-openjdk.x86_64 present: yes when: java_result.rc!=0 The end result that works: - name: fetch java version shell: java -version changed_when: False register: java_result failed_when: False - name: install java version yum: name: java state: latest when: java_result.rc!=0 become: yes become_user: root thanks.
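Worth noting why the first attempt skipped the install: the yum module has no 'present' parameter (it must be 'state: present'), so that task was broken independently of the when: condition. Since yum is idempotent anyway, the version pre-check is optional; an ad-hoc shell sketch of the equivalent:
ansible app -b -m yum -a "name=java-1.8.0-openjdk state=present"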
|
java, ansible, rhel
| 1
| 3,117
| 1
|
https://stackoverflow.com/questions/63817939/ansible-to-check-for-java-and-install-if-not-already-on-rhel-machines
|
62,959,322
|
RabbitMQ users are deleted after pod restart
|
I'm running RabbitMQ on a Kubernetes cluster. I have mounted only the log location and config location of RabbitMQ. When the pod restarts, all the users I have created are lost. Is there any way to mount the user details?
|
RabbitMQ users are deleted after pod restart I'm running RabbitMQ on a Kubernetes cluster. I have mounted only the log location and config location of RabbitMQ. When the pod restarts, all the users I have created are lost. Is there any way to mount the user details?
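Users live in RabbitMQ's mnesia database, not in the config or log directories, so they vanish with the container filesystem. A sketch of what to persist, assuming the image's default data directory:
# mount a PersistentVolumeClaim at /var/lib/rabbitmq (RABBITMQ_MNESIA_BASE lives under it)
kubectl get pod <rabbitmq-pod> -o jsonpath='{.spec.containers[0].volumeMounts}'   # check current mounts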
|
linux, kubernetes, rabbitmq, rhel
| 1
| 1,338
| 1
|
https://stackoverflow.com/questions/62959322/rabbitmq-users-are-deleted-after-pod-restart
|
62,716,348
|
Removing multiple rpms using ansible on RHEL machines
|
I wish to remove chef-related RPMs from a set of servers. Will this suffice in a playbook? 1st option: - name: Check if chef rpms exist shell: rpm -qa *chef* register: rpm_output - name: Remove chef rpms if they exist shell: rpm -e rpm_output when: rpm_output.stat.exists 2nd option: - name: remove the chef package yum: name: chef* state: absent Will the above two playbooks remove multiple RPMs if the output has more than one listed? Thanks in advance!
|
Removing multiple rpms using ansible on RHEL machines I wish to remove chef-related RPMs from a set of servers. Will this suffice in a playbook? 1st option: - name: Check if chef rpms exist shell: rpm -qa *chef* register: rpm_output - name: Remove chef rpms if they exist shell: rpm -e rpm_output when: rpm_output.stat.exists 2nd option: - name: remove the chef package yum: name: chef* state: absent Will the above two playbooks remove multiple RPMs if the output has more than one listed? Thanks in advance!
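As written, the 1st option has several problems: the unquoted *chef* glob can be mangled by the remote shell, rpm -e rpm_output passes the register name literally, and a shell result has no .stat attribute (that belongs to the stat module). The 2nd option's wildcard is supported by the yum module and removes every match. A one-liner shell sketch of the same cleanup:
rpm -qa 'chef*' | xargs -r sudo rpm -e   # -r: do nothing when nothing matches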
|
ansible, rpm, rhel
| 1
| 2,890
| 1
|
https://stackoverflow.com/questions/62716348/removing-multiple-rpms-using-ansible-on-rhel-machines
|
60,869,304
|
How do I specify the ruby 2.6 module in my spec file? ruby 2.6 is required for my rpm
|
I can manually install ruby with the commands: sudo dnf module enable ruby:2.6 sudo dnf module -y update ruby:2.6 How do I go about making the ruby:2.6 module a requirement in an RPM? This is on CentOS.
|
How do I specify the ruby 2.6 module in my spec file? ruby 2.6 is required for my rpm I can manually install ruby with the commands: sudo dnf module enable ruby:2.6 sudo dnf module -y update ruby:2.6 How do I go about making the ruby:2.6 module a requirement in an RPM? This is on CentOS.
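A plain RPM dependency cannot enable a DNF module stream; what the spec can do is require a new-enough interpreter so installation fails fast when the stream isn't enabled. A hedged sketch (spec fragment plus the host-side commands):
# in the spec file:
#   Requires: ruby >= 2.6
# on the host, the stream still has to be enabled once:
sudo dnf module enable -y ruby:2.6
sudo dnf install -y ./mypackage.rpm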
|
module, centos, dependencies, rhel, rpm-spec
| 1
| 315
| 2
|
https://stackoverflow.com/questions/60869304/how-do-i-specify-ruby-2-6-module-in-my-spec-file-ruby-2-6-is-required-for-my-rp
|
60,746,155
|
How do I check the apache config for validity in Red Hat Linux?
|
My operating system is Red Hat Enterprise Linux 7. This isn't working... # httpd -t bash: httpd: command not found In case it helps, this is the command I run to restart apache on this box... # systemctl restart httpd24-httpd But this doesn't work... # httpd24-httpd -t bash: httpd24-httpd: command not found This doesn't work either... # apachectl -t bash: apachetl: command not found Nor does this work... # apachectl configtest bash: apachetl: command not found
|
How do I check the apache config for validity in Red Hat Linux? My operating system is Red Hat Enterprise Linux 7. This isn't working... # httpd -t bash: httpd: command not found In case it helps, this is the command I run to restart apache on this box... # systemctl restart httpd24-httpd But this doesn't work... # httpd24-httpd -t bash: httpd24-httpd: command not found This doesn't work either... # apachectl -t bash: apachetl: command not found Nor does this work... # apachectl configtest bash: apachetl: command not found
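The httpd24 prefix means Apache came from Red Hat Software Collections, whose binaries sit outside the default PATH; a sketch of running the syntax check against that copy (standard SCL layout assumed):
scl enable httpd24 'apachectl configtest'
/opt/rh/httpd24/root/usr/sbin/apachectl -t   # or call the SCL binary directly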
|
apache, rhel
| 1
| 5,348
| 1
|
https://stackoverflow.com/questions/60746155/how-do-i-check-the-apache-config-for-validity-in-red-hat-linux
|
60,159,639
|
How to install Python 3.8.1 on RHEL 8?
|
This is really frustrating. I want to install the latest version of Python (at the time of this issue: Python 3.8.1) on RHEL 8, (RHEL being one of the most widely used distributions of Linux). I would like to type: #dnf install python and have it install the latest version of Python. I can't do this, and I do not know why. When I go to python.org and click on 'install for Linux' I get a link to the source code. There are no instructions there as to what to do with the source code. I do not understand why this is. I don't want the source code, I want to install python 3.8.1 executables for my platform (RHEL 8). I search on how to install python 3.8.1 from source and get a long list of dependencies that I have to install and a long list of steps. Is this because it is a very rare thing for companies to run Python on Linux? Can we get together here and make it easy for folks to install Python on Linux? I'm willing to pay money out of my daily earnings to setup a RHEL 8 repo to get Python 3.8 there if IBM/Redhat is not willing to do this. Why does the official Python organization hate Linux? Why does IBM / Redhat hate Python? Can we bring the two together in peace and harmony so that they just get along? This is very frustrating, I should be able to knock this task out in a few seconds, and it has turned into hours. The same amount of hours to figure out how to do this is probably done every day by developers all over the world that want to install/run the latest version of Python on Linux (CentOS / RHEL).
|
How to install Python 3.8.1 on RHEL 8? This is really frustrating. I want to install the latest version of Python (at the time of this issue: Python 3.8.1) on RHEL 8, (RHEL being one of the most widely used distributions of Linux). I would like to type: #dnf install python and have it install the latest version of Python. I can't do this, and I do not know why. When I go to python.org and click on 'install for Linux' I get a link to the source code. There are no instructions there as to what to do with the source code. I do not understand why this is. I don't want the source code, I want to install python 3.8.1 executables for my platform (RHEL 8). I search on how to install python 3.8.1 from source and get a long list of dependencies that I have to install and a long list of steps. Is this because it is a very rare thing for companies to run Python on Linux? Can we get together here and make it easy for folks to install Python on Linux? I'm willing to pay money out of my daily earnings to setup a RHEL 8 repo to get Python 3.8 there if IBM/Redhat is not willing to do this. Why does the official Python organization hate Linux? Why does IBM / Redhat hate Python? Can we bring the two together in peace and harmony so that they just get along? This is very frustrating, I should be able to knock this task out in a few seconds, and it has turned into hours. The same amount of hours to figure out how to do this is probably done every day by developers all over the world that want to install/run the latest version of Python on Linux (CentOS / RHEL).
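On RHEL 8, newer interpreters ship as Application Stream modules rather than as a rolling python package; a sketch of checking what your release offers (the python38 stream arrived with RHEL 8.2, so 3.8.1 exactly may not be packaged):
dnf module list 'python*'        # which streams does this release ship?
sudo dnf install -y python38     # once the python38 stream is available
python3.8 --version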
|
python-3.x, rhel, centos8
| 1
| 4,850
| 1
|
https://stackoverflow.com/questions/60159639/how-to-install-python-3-8-1-on-rhel-8
|
57,686,679
|
How to set the installing user's (whoever is installing the RPM) user and group privileges while packaging the RPM
|
I am just wondering if there is a way to apply the current user's (whoever is installing the RPM) user:group privileges to the installed folder of the RPM (/usr/lib/appfolder) in the spec (.spec) file. Example: Currently, while installing the RPM, it applies root:root privileges if we do not create the user and group and do not add them in the section (%defattr(777, maya, maya, 777)) in the spec file. If we add defattr, it uses maya as the user. What I expect: if the current user 'user1' is installing the RPM, the privileges should be under 'user1'; if 'user2' is installing the RPM, they should be under 'user2'. Why? I have an RPM package which installs to /usr/lib/app and runs as a service. And my application needs the current user's home directory to search for some package which is installed for the current user.
|
How to set the installing user's (whoever is installing the RPM) user and group privileges while packaging the RPM I am just wondering if there is a way to apply the current user's (whoever is installing the RPM) user:group privileges to the installed folder of the RPM (/usr/lib/appfolder) in the spec (.spec) file. Example: Currently, while installing the RPM, it applies root:root privileges if we do not create the user and group and do not add them in the section (%defattr(777, maya, maya, 777)) in the spec file. If we add defattr, it uses maya as the user. What I expect: if the current user 'user1' is installing the RPM, the privileges should be under 'user1'; if 'user2' is installing the RPM, they should be under 'user2'. Why? I have an RPM package which installs to /usr/lib/app and runs as a service. And my application needs the current user's home directory to search for some package which is installed for the current user.
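RPM installs always run as root and %files ownership must name fixed users, so there is no built-in 'installing user'. One fragile workaround sometimes used is reading SUDO_USER in %post — a sketch, with the caveat that it only works for sudo-driven installs and is empty for a real root login:
%post
# hedged: relies on sudo exporting SUDO_USER into rpm's environment
if [ -n "$SUDO_USER" ]; then
    chown -R "$SUDO_USER": /usr/lib/app
fi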
|
centos, rpm, rhel, packing
| 1
| 938
| 1
|
https://stackoverflow.com/questions/57686679/how-to-set-the-installedwho-is-installing-the-rpm-user-and-group-privileges-wh
|
56,559,235
|
iOS - Continuous Integration with Fastlane / Bamboo
|
We have an on-premise Bamboo server hosted on RHEL & we want to integrate fastlane to automate our mobile app CI/CD process. We have a Mac laptop which is used for iOS builds, but the plan is to automate the process. Can someone give any pointers on integrating fastlane with the on-prem Bamboo CI server?
|
iOS - Continuous Integration with Fastlane / Bamboo We have an on-premise Bamboo server hosted on RHEL & we want to integrate fastlane to automate our mobile app CI/CD process. We have a Mac laptop which is used for iOS builds, but the plan is to automate the process. Can someone give any pointers on integrating fastlane with the on-prem Bamboo CI server?
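Since Xcode tooling only runs on macOS, the usual shape is to keep the Bamboo server on RHEL and make the Mac a Bamboo remote agent that executes the iOS stage; that stage's script task then just drives fastlane locally. A sketch of the task (the lane name is hypothetical):
# runs on the Mac agent; the Gemfile pins the fastlane version
bundle install
bundle exec fastlane beta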
|
ios, continuous-integration, bamboo, rhel, fastlane
| 1
| 1,542
| 1
|
https://stackoverflow.com/questions/56559235/ios-continuous-integration-with-fastlane-bamboo
|
54,937,613
|
Installing OpenJDK 8 on RHEL 7 fails
|
Running into an issue where a scripted install that used to work started failing with this error. Error: Package: 1:java-1.8.0-openjdk-headless-1.8.0.201.b09-0.el7_6.x86_64 (rhui-REGION-rhel-server-releases) Requires: pcsc-lite-devel(x86-64) I'm guessing that it's a repo hiccup but wondering if there's a fix that I can roll out while we wait for the maintainers to get on this?
|
Installing OpenJDK 8 on RHEL 7 fails Running into an issue where a scripted install that used to work started failing with this error. Error: Package: 1:java-1.8.0-openjdk-headless-1.8.0.201.b09-0.el7_6.x86_64 (rhui-REGION-rhel-server-releases) Requires: pcsc-lite-devel(x86-64) I'm guessing that it's a repo hiccup but wondering if there's a fix that I can roll out while we wait for the maintainers to get on this?
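pcsc-lite-devel normally lives in the optional channel, which these RHUI images may not enable by default; a hedged sketch of pulling it in before retrying (the repo id is guessed from the rhui-REGION naming in the error):
sudo yum-config-manager --enable rhui-REGION-rhel-server-optional
sudo yum install -y pcsc-lite-devel java-1.8.0-openjdk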
|
java, rhel
| 1
| 696
| 1
|
https://stackoverflow.com/questions/54937613/installing-openjdk-8-on-rhel-7-fails
|
52,748,114
|
PIDL(or similar) on RHEL 7
|
I have a .idl (Interface Definition Language) file on my Windows machine. I would like to get the same file to compile on my RHEL 7 machine. I have looked everywhere to see if there is a compiler similar to MIDL available for RHEL. I came across a compiler called PIDL that ships with Samba; however, when I installed Samba, PIDL was not part of the installation. Does anybody know how to get PIDL on RHEL 7 and how to use it? Or is there any other resource I can use to compile .idl files on RHEL 7? Thank you for any help!
|
PIDL(or similar) on RHEL 7 I have a .idl (Interface Definition Language) file on my Windows machine. I would like to get the same file to compile on my RHEL 7 machine. I have looked everywhere to see if there is a compiler similar to MIDL available for RHEL. I came across a compiler called PIDL that ships with Samba; however, when I installed Samba, PIDL was not part of the installation. Does anybody know how to get PIDL on RHEL 7 and how to use it? Or is there any other resource I can use to compile .idl files on RHEL 7? Thank you for any help!
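pidl is a Perl tool maintained in the Samba source tree and also published to CPAN as Parse::Pidl, so it can be installed without building Samba; a sketch:
sudo yum install -y perl-CPAN gcc make
sudo cpan Parse::Pidl   # installs the 'pidl' compiler script
pidl --help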
|
rhel, samba, rhel7
| 1
| 88
| 1
|
https://stackoverflow.com/questions/52748114/pidlor-similar-on-rhel-7
|
50,934,492
|
rhel 7.1 octave gnuplot - resolving libGL error: failed to load driver: swrast
|
I have recently gone through a drill to resolve getting octave on rhel 7.1 to plot using gnuplot. Basically, I was getting the following ugly messages and no plot: $ export LIBGL_DEBUG=verbose $ octave $ GNU Octave, version 3.8.2 octave:1> x = -10:0.1:10; plot(x, sin(x)) libGL: OpenDriver: trying /usr/lib64/dri/tls/swrast_dri.so libGL: OpenDriver: trying /usr/lib64/dri/swrast_dri.so libGL: driver does not expose __driDriverGetExtensions_swrast(): /usr/lib64/dri/swrast_dri.so: undefined symbol: __driDriverGetExtensions_swrast libGL: Can't open configuration file /home/jsaari/.drirc: No such file or directory. libGL: Can't open configuration file /home/jsaari/.drirc: No such file or directory. libGL error: failed to load driver: swrast function is no-op function is no-op function is no-op . . .
|
rhel 7.1 octave gnuplot - resolving libGL error: failed to load driver: swrast I have recently gone through a drill to resolve getting octave on rhel 7.1 to plot using gnuplot. Basically, I was getting the following ugly messages and no plot: $ export LIBGL_DEBUG=verbose $ octave $ GNU Octave, version 3.8.2 octave:1> x = -10:0.1:10; plot(x, sin(x)) libGL: OpenDriver: trying /usr/lib64/dri/tls/swrast_dri.so libGL: OpenDriver: trying /usr/lib64/dri/swrast_dri.so libGL: driver does not expose __driDriverGetExtensions_swrast(): /usr/lib64/dri/swrast_dri.so: undefined symbol: __driDriverGetExtensions_swrast libGL: Can't open configuration file /home/jsaari/.drirc: No such file or directory. libGL: Can't open configuration file /home/jsaari/.drirc: No such file or directory. libGL error: failed to load driver: swrast function is no-op function is no-op function is no-op . . .
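Since the goal is plotting through gnuplot anyway, the libGL/swrast path can be bypassed entirely by selecting the gnuplot toolkit, which needs no OpenGL driver; a sketch:
echo 'graphics_toolkit("gnuplot")' >> ~/.octaverc   # make it the default for every session
octave --eval 'x = -10:0.1:10; plot(x, sin(x)); pause(5)'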
|
opengl, gnuplot, octave, rhel
| 1
| 1,534
| 1
|
https://stackoverflow.com/questions/50934492/rhel-7-1-octave-gnuplot-resolving-libgl-error-failed-to-load-driver-swrast
|
50,416,160
|
Cannot install Red Hat git
|
I have a strange situation in which I cannot install git on my new Red Hat Enterprise Linux 7.5 64-bit platform. $> yum install rh-git29 I get this: Loaded plugins: product-id, search-disabled-repos, subscription-manager Resolving Dependencies ...Error: Package: rh-git29-git-2.9.3-3.el7.x86_64 (rhel-server-rhscl-7-eus-rpms) Requires: perl(Term::ReadKey) Error: Package: rh-git29-perl-Git-2.9.3-3.el7.noarch (rhel-server-rhscl-7-eus-rpms) So I tried to install perl and got this now: $> yum install rh-perl526 Error: Package: 4:rh-perl526-perl-devel-5.26.1-404.el7.x86_64 (rhel-server-rhscl-7-eus-rpms) Requires: systemtap-sdt-devel What package is systemtap-sdt-devel in?
|
Cannot install Red Hat git I have a strange situation in which I cannot install git on my new Red Hat Enterprise Linux 7.5 64-bit platform. $> yum install rh-git29 I get this: Loaded plugins: product-id, search-disabled-repos, subscription-manager Resolving Dependencies ...Error: Package: rh-git29-git-2.9.3-3.el7.x86_64 (rhel-server-rhscl-7-eus-rpms) Requires: perl(Term::ReadKey) Error: Package: rh-git29-perl-Git-2.9.3-3.el7.noarch (rhel-server-rhscl-7-eus-rpms) So I tried to install perl and got this now: $> yum install rh-perl526 Error: Package: 4:rh-perl526-perl-devel-5.26.1-404.el7.x86_64 (rhel-server-rhscl-7-eus-rpms) Requires: systemtap-sdt-devel What package is systemtap-sdt-devel in?
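A sketch for tracking down missing dependencies like these — -devel and perl bits frequently live in the optional channel:
yum provides 'perl(Term::ReadKey)' systemtap-sdt-devel   # which repo/package supplies each?
sudo subscription-manager repos --enable rhel-7-server-optional-rpms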
|
git, perl, redhat, rhel
| 1
| 1,335
| 2
|
https://stackoverflow.com/questions/50416160/cannot-install-red-hat-git
|
49,894,692
|
Django installation fails using Ansible in RHEL
|
I have the following playbook: --- - hosts: app become: yes tasks: - name: Install MySQL-Python yum: name=MySQL-python state=present - name: Install Python Setup Tools yum: name=python-setuptools state=present - name: Install django easy_install: name=django state=present This fails with the error: This version of Django requires Python 3.4, but you're trying to\ninstall it on Python 2.7.\n\nThis may be because you are using a version of pip that doesn't\nunderstand the python_requires classifier. Make sure you\nhave pip >= 9.0 and setuptools >= 24.2, then try again:\n\n $ python -m pip install --upgrade pip setuptools\n $ python -m pip install django\n\nThis will install the latest version of Django which works on your\nversion of Python. If you can't upgrade your pip (or Python), request\nan older version of Django:\n\n $ python -m pip install \"django<2\"\nerror: Setup script exited with 1\n"} I followed this article to install Python 3 and also set python=python3 , yet I am facing the same error message when I run the playbook. Can anyone please suggest what to do? Also, how do I install a previous version of Django using Ansible?
|
Django installation fails using Ansible in RHEL I have the following playbook: --- - hosts: app become: yes tasks: - name: Install MySQL-Python yum: name=MySQL-python state=present - name: Install Python Setup Tools yum: name=python-setuptools state=present - name: Install django easy_install: name=django state=present This fails with the error: This version of Django requires Python 3.4, but you're trying to\ninstall it on Python 2.7.\n\nThis may be because you are using a version of pip that doesn't\nunderstand the python_requires classifier. Make sure you\nhave pip >= 9.0 and setuptools >= 24.2, then try again:\n\n $ python -m pip install --upgrade pip setuptools\n $ python -m pip install django\n\nThis will install the latest version of Django which works on your\nversion of Python. If you can't upgrade your pip (or Python), request\nan older version of Django:\n\n $ python -m pip install \"django<2\"\nerror: Setup script exited with 1\n"} I followed this article to install Python 3 and also set python=python3 , yet I am facing the same error message when I run the playbook. Can anyone please suggest what to do? Also, how do I install a previous version of Django using Ansible?
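easy_install always grabs the newest Django, which by this point required Python 3; pinning a 1.11-series release (the last to support Python 2.7) with the pip module sidesteps that. An ad-hoc shell sketch of the equivalent task:
ansible app -b -m pip -a "name=django version=1.11.29"   # any 1.11.x works on Python 2.7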
|
python, django, ansible, rhel
| 1
| 349
| 1
|
https://stackoverflow.com/questions/49894692/django-installation-fails-using-ansible-in-rhel
|
49,552,796
|
OS storage not released after moving a data file in Oracle 12c
|
Today I tried to move an Oracle data file online, which, as we know, is a capability of Oracle 12c. But after moving some data files, why does my mount point still show 100% usage? [oracle@DB myserver]$ df -h /oracle/oradata1/ Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_oradata1-lv_oradata1 296G 281G 21M 100% /oracle/oradata1 [oracle@DB myserver]$ [oracle@DB myserver]$ du -sh /oracle/oradata1/ 151G /oracle/oradata1/ There is supposed to be more than 100GB of free space in my /oradata1. I also checked that the data files in the old mount point oradata1 have already been moved to oradata2. But why is my OS (RHEL 6.5) storage not releasing the space? Can someone give me a clue about this? Thanks in advance
|
OS storage not released after moving a data file in Oracle 12c Today I tried to move an Oracle data file online, which, as we know, is a capability of Oracle 12c. But after moving some data files, why does my mount point still show 100% usage? [oracle@DB myserver]$ df -h /oracle/oradata1/ Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_oradata1-lv_oradata1 296G 281G 21M 100% /oracle/oradata1 [oracle@DB myserver]$ [oracle@DB myserver]$ du -sh /oracle/oradata1/ 151G /oracle/oradata1/ There is supposed to be more than 100GB of free space in my /oradata1. I also checked that the data files in the old mount point oradata1 have already been moved to oradata2. But why is my OS (RHEL 6.5) storage not releasing the space? Can someone give me a clue about this? Thanks in advance
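The df/du gap is the classic signature of deleted files still held open: Oracle's background processes keep the old datafile's handle until they close it, so the blocks stay allocated. A confirmation sketch:
sudo lsof +L1 /oracle/oradata1   # +L1 lists open files with link count 0 (deleted, not yet released)
# the space frees once the listed process closes the handle (or is bounced)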
|
linux, oracle-database, storage, oracle12c, rhel
| 1
| 1,763
| 1
|
https://stackoverflow.com/questions/49552796/os-storage-not-release-after-move-data-file-in-oracle-12c
|
45,263,986
|
RHEL import self-signed certificate
|
I am really new to certificates and find it quite difficult to achieve what I have in mind. Say I have a self-signed certificate generated with OpenSSL. What are the steps I should follow in RHEL in order to have that certificate become trusted? Is there any difference in procedure between .pfx and .crt? Can you also provide me with some details on TA and CA private/public keys and their role in the certification process?
|
RHEL import self-signed certificate I am really new to certificates and find it quite difficult to achieve what I have in mind. Say I have a self-signed certificate generated with OpenSSL. What are the steps I should follow in RHEL in order to have that certificate become trusted? Is there any difference in procedure between .pfx and .crt? Can you also provide me with some details on TA and CA private/public keys and their role in the certification process?
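A sketch of the RHEL-native trust store route; .pfx (PKCS#12) bundles the private key with the certificate, so the cert is extracted first, while a .crt drops in directly:
openssl pkcs12 -in mycert.pfx -clcerts -nokeys -out mycert.crt   # only if starting from .pfx
sudo cp mycert.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract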
|
ssl, https, rhel, self-signed
| 1
| 2,753
| 1
|
https://stackoverflow.com/questions/45263986/rhel-import-self-signed-certificate
|
43,618,690
|
Proper way to format timestamp in bash
|
So I need to create an output file for an outside contractor containing a list of filenames from a directory and creation dates in a specific format like this: FileName YYYYmmDDHHMMSS So far I've come up with: find ./ -type f -printf " %f %a\n" which returns: FileName Fri Apr 21 18:21:15.0458585800 2017 Or: ls -l | awk {'print $9" "$6" "$7" "$8'} which returns: FileName Apr 21 18:21 But neither is quite the output I need, as it needs to be purely numerical and include seconds. Keeping in mind that the list of files could be very large (so efficiency is a priority), how can I get this output?
|
Proper way to format timestamp in bash So I need to create an output file for an outside contractor containing a list of filenames from a directory and creation dates in a specific format like this: FileName YYYYmmDDHHMMSS So far I've come up with: find ./ -type f -printf " %f %a\n" which returns: FileName Fri Apr 21 18:21:15.0458585800 2017 Or: ls -l | awk {'print $9" "$6" "$7" "$8'} which returns: FileName Apr 21 18:21 But neither is quite the output I need, as it needs to be purely numerical and include seconds. Keeping in mind that the list of files could be very large (so efficiency is a priority), how can I get this output?
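find can format timestamps itself with strftime-style directives (GNU findutils), avoiding ls parsing entirely; note that Linux generally exposes modification time, not true creation time. A sketch — the sed strips the fractional seconds that %TS appends:
find . -type f -printf '%f %TY%Tm%Td%TH%TM%TS\n' | sed 's/\.[0-9]*$//'
# example output line:  FileName 20170421182115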
|
linux, bash, rhel
| 1
| 99
| 1
|
https://stackoverflow.com/questions/43618690/proper-way-to-format-timestamp-in-bash
|
42,544,740
|
Can we convert a live server to a Docker image?
|
We have a custom Linux (an RHEL spin-off) with security enhancements made to harden the box. Now we are planning to move to Docker for dev-ops. Is there a way to convert the running box / OVA / ISO to a Docker image? We are pretty new to Docker; we tried to install on a RHEL image step by step, but it is difficult to harden again as we depend on 3rd-party vendors.
|
Can we convert a live server to a Docker image? We have a custom Linux (an RHEL spin-off) with security enhancements made to harden the box. Now we are planning to move to Docker for dev-ops. Is there a way to convert the running box / OVA / ISO to a Docker image? We are pretty new to Docker; we tried to install on a RHEL image step by step, but it is difficult to harden again as we depend on 3rd-party vendors.
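The common route is tarring the live root filesystem and feeding it to docker import; the result is a files-only base image — the kernel, kernel-level hardening, and auto-started services do not carry over. A sketch:
sudo tar --numeric-owner -cpzf /tmp/rootfs.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/run --exclude=/tmp /
docker import /tmp/rootfs.tar.gz hardened-rhel:base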
|
linux, docker, rhel
| 1
| 8,048
| 2
|
https://stackoverflow.com/questions/42544740/can-we-convert-a-live-server-to-docker-image
|