| question_id (int64) | title_clean (string) | body_clean (string) | full_text (string) | tags (string) | score (int64) | view_count (int64) | answer_count (int64) | link (string) |
|---|---|---|---|---|---|---|---|---|
19,638,835
|
Execute JSP at Tomcat Startup
|
Please note that I am not the developer of this app and I realize that there are probably better ways to implement it. However, I have been given the following objective: I need to automatically execute a JSP page when Tomcat starts on an RHEL server. The only kicker is that there are three URL parameters that need to be passed. Here is an example of the URL: [URL] In my web.xml I have the following but it does not seem to be loading automatically. I suppose I could modify the start_tomcat script to include a wget but I was hoping to make this independent from the server it is deployed to. <servlet> <servlet-name>getlistdata</servlet-name> <jsp-file>/getlistdata.jsp</jsp-file> <init-param> <param-name>param1</param-name> <param-value>P1</param-value> </init-param> <init-param> <param-name>param2</param-name> <param-value>P2</param-value> </init-param> <init-param> <param-name>param3</param-name> <param-value>P3</param-value> </init-param> <load-on-startup>3</load-on-startup> </servlet>
|
Execute JSP at Tomcat Startup Please note that I am not the developer of this app and I realize that there are probably better ways to implement it. However, I have been given the following objective: I need to automatically execute a JSP page when Tomcat starts on an RHEL server. The only kicker is that there are three URL parameters that need to be passed. Here is an example of the URL: [URL] In my web.xml I have the following but it does not seem to be loading automatically. I suppose I could modify the start_tomcat script to include a wget but I was hoping to make this independent from the server it is deployed to. <servlet> <servlet-name>getlistdata</servlet-name> <jsp-file>/getlistdata.jsp</jsp-file> <init-param> <param-name>param1</param-name> <param-value>P1</param-value> </init-param> <init-param> <param-name>param2</param-name> <param-value>P2</param-value> </init-param> <init-param> <param-name>param3</param-name> <param-value>P3</param-value> </init-param> <load-on-startup>3</load-on-startup> </servlet>
|
java, jsp, tomcat, rhel
| 0
| 1,601
| 2
|
https://stackoverflow.com/questions/19638835/execute-jsp-at-tomcat-startup
|
18,276,996
|
How to run a command only on remote login to linux
|
I want my system to display available updates when I log in. What I did was set up a cron job that runs daily to save the output of yum list updates to a file. Now I need to find a place for a command to cat this file that is run only when I log in to the Linux machine. I'm running Red Hat Enterprise Linux and bash. I'd like solutions both for running it on every login or only on remote logins, and also for all users or for a specific user - so I can choose the best one and so that this information is here for others. Right now I most need it for all users and only on the first login.
|
How to run a command only on remote login to linux I want my system to display available updates when I log in. What I did was set up a cron job that runs daily to save the output of yum list updates to a file. Now I need to find a place for a command to cat this file that is run only when I log in to the Linux machine. I'm running Red Hat Enterprise Linux and bash. I'd like solutions both for running it on every login or only on remote logins, and also for all users or for a specific user - so I can choose the best one and so that this information is here for others. Right now I most need it for all users and only on the first login.
|
linux, bash, yum, rhel, login-script
| 0
| 438
| 2
|
https://stackoverflow.com/questions/18276996/how-to-run-a-command-only-on-remote-login-to-linux
|
15,814,641
|
Where is the RHEL version 5 Developer Guide?
|
I found the "Developer Guide" for RHEL version 6 located at: [URL] --> click on the Developer Guide link. However when you scroll down further to the RHEL version 5 documentation I can't find an equivalent Developer Guide. Is there an equivalent Developer Guide for RHEL 5 documentation? If there is not an equivalent Developer Guide in RHEL version 5 documentation, what RHEL 5 document/section talks about the 'Application Compatibility' (which covers things like API and ABI compatibility spanning major/minor releases*)? * NOTE: in RHEL 6 the Applicaiton Compatibility is covered in Section 3.2 of the Developer Guide. ( RHEL --> version 6 --> Developer Guide --> Section 3.2 ).
|
Where is the RHEL version 5 Developer Guide? I found the "Developer Guide" for RHEL version 6 located at: [URL] --> click on the Developer Guide link. However, when I scroll down further to the RHEL version 5 documentation, I can't find an equivalent Developer Guide. Is there an equivalent Developer Guide in the RHEL 5 documentation? If there is not an equivalent Developer Guide in the RHEL version 5 documentation, what RHEL 5 document/section talks about 'Application Compatibility' (which covers things like API and ABI compatibility spanning major/minor releases*)? * NOTE: in RHEL 6, Application Compatibility is covered in Section 3.2 of the Developer Guide. ( RHEL --> version 6 --> Developer Guide --> Section 3.2 ).
|
redhat, rhel, abi, rhel5
| 0
| 151
| 1
|
https://stackoverflow.com/questions/15814641/where-is-the-rhel-version-5-developer-guide
|
14,232,211
|
Is it possible in CVS: "Clients not allowed to commit if there is no log written"?
|
I am new to CVS. We recently started version control with CVS. We are using SmartCVS on our RHEL systems as our CVS client. Is it possible to make it so that my team members cannot commit if they have not put anything in the log message box?
|
Is it possible in CVS: "Clients not allowed to commit if there is no log written"? I am new to CVS. We recently started version control with CVS. We are using SmartCVS on our RHEL systems as our CVS client. Is it possible to make it so that my team members cannot commit if they have not put anything in the log message box?
|
linux, version-control, cvs, rhel
| 0
| 90
| 1
|
https://stackoverflow.com/questions/14232211/is-it-possible-in-cvs-clients-not-allowed-to-commit-if-there-is-no-log-written
|
13,536,788
|
Same piped call to isql works on Solaris but not on RHEL
|
Background: I need to port a ksh script from SunOS 5.10 to RHEL 5.8. It makes a call to isql to retrieve some data and, quite contrary to the intended application of final endpoint client utilities such as isql, it parses its output to be used by a variable in the shell script. Please note that I just inherited this and by no means did I design such a hack myself. I certainly never would be parsing isql output to assign a value to a var in shell -- if the script needed that info, I would use Perl with some API like DBD::DBI that is designed to marshal data between the application and the data store. But I have what I have and must work within the parameters. What is happening is that the following piped input does return data on SunOS but not on RHEL: echo "SELECT some_field FROM some_table WHERE some_crtra = 'X' \ngo" | isql -U$USER -P$PASS -D$DB -S$SERVER That output on Solaris being: some_field ------ Y (1 row affected) From that point, the script uses awk to extract just the field value from the above stream but let's ignore that because that's not the problem. Also please note that I am able to get the data executing the piped commands separately, i.e. by going manually into isql and running the SQL. So the SQL or the connection string are not the problem -- it is either how the piping streams data OR isql itself works differently on the different platforms. Can anybody see why there is a disparate response to the same input on the two systems? Any idea how I can change the piping to make it work? Thanks
|
Same piped call to isql works on Solaris but not on RHEL Background: I need to port a ksh script from SunOS 5.10 to RHEL 5.8. It makes a call to isql to retrieve some data and, quite contrary to the intended application of final endpoint client utilities such as isql, it parses its output to be used by a variable in the shell script. Please note that I just inherited this and by no means did I design such a hack myself. I certainly never would be parsing isql output to assign a value to a var in shell -- if the script needed that info, I would use Perl with some API like DBD::DBI that is designed to marshal data between the application and the data store. But I have what I have and must work within the parameters. What is happening is that the following piped input does return data on SunOS but not on RHEL: echo "SELECT some_field FROM some_table WHERE some_crtra = 'X' \ngo" | isql -U$USER -P$PASS -D$DB -S$SERVER That output on Solaris being: some_field ------ Y (1 row affected) From that point, the script uses awk to extract just the field value from the above stream but let's ignore that because that's not the problem. Also please note that I am able to get the data executing the piped commands separately, i.e. by going manually into isql and running the SQL. So the SQL or the connection string are not the problem -- it is either how the piping streams data OR isql itself works differently on the different platforms. Can anybody see why there is a disparate response to the same input on the two systems? Any idea how I can change the piping to make it work? Thanks
|
solaris, ksh, rhel, isql
| 0
| 270
| 1
|
https://stackoverflow.com/questions/13536788/same-piped-call-to-isql-works-on-solaris-but-not-on-rhel
|
13,516,858
|
nspr-devel for Debian Linux?
|
I was wondering if this is possible. I am able to install nspr on CentOS: yum install nspr-devel I saw this is really only meant for CentOS/RHEL/Fedora, so note [URL] Is there a way I can install this on the latest version of Debian with apt-get? If so, does anyone have step-by-step instructions? Thanks
|
nspr-devel for Debian Linux? I was wondering if this is possible. I am able to install nspr on CentOS: yum install nspr-devel I saw this is really only meant for CentOS/RHEL/Fedora, so note [URL] Is there a way I can install this on the latest version of Debian with apt-get? If so, does anyone have step-by-step instructions? Thanks
|
linux, ubuntu, centos, debian, rhel
| 0
| 2,316
| 1
|
https://stackoverflow.com/questions/13516858/nspr-devel-for-debian-linux
|
12,927,062
|
git password prompting in RHEL
|
I'm on an RHEL machine and it seems that git uses a GUI password prompt when I try to clone private repositories. I'd like it to use the terminal itself. How can I set git to behave in that way?
|
git password prompting in RHEL I'm on an RHEL machine and it seems that git uses a GUI password prompt when I try to clone private repositories. I'd like it to use the terminal itself. How can I set git to behave in that way?
|
linux, git, rhel, rhel5
| 0
| 238
| 1
|
https://stackoverflow.com/questions/12927062/git-password-prompting-in-rhel
|
12,621,336
|
mule service start script raises exception with log file
|
I'm trying to rewrite the mule start script so it works as a service on a RHEL. Currently I have it mostly done. It is starting and I have the most of the log files being successfully written where I want them. But there's a file named literally .log that I do not know what is for, neither where to configure (its name and path). Such file is adding the following nasty lines in the mule_ee.log upon start up: log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: .log (Permission denied) at java.io.FileOutputStream.openAppend(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:207) at java.io.FileOutputStream.<init>(FileOutputStream.java:131) at org.apache.log4j.FileAppender.setFile(FileAppender.java:294) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:547) at org.mule.module.launcher.log4j.ApplicationAwareRepositorySelector.configureFrom(ApplicationAwareRepositorySelector.java:166) at org.mule.module.launcher.log4j.ApplicationAwareRepositorySelector.getLoggerRepository(ApplicationAwareRepositorySelector.java:95) at org.apache.log4j.LogManager.getLoggerRepository(LogManager.java:208) at org.apache.log4j.LogManager.getLogger(LogManager.java:228) at org.mule.module.logging.MuleLoggerFactory.getLogger(MuleLoggerFactory.java:77) at org.mule.module.logging.DispatchingLogger.getLogger(DispatchingLogger.java:419) at org.mule.module.logging.DispatchingLogger.isInfoEnabled(DispatchingLogger.java:191) at org.apache.commons.logging.impl.SLF4JLog.isInfoEnabled(SLF4JLog.java:78) at org.mule.module.launcher.application.DefaultMuleApplication.init(DefaultMuleApplication.java:188) at org.mule.module.launcher.application.PriviledgedMuleApplication.init(PriviledgedMuleApplication.java:46) at org.mule.module.launcher.application.ApplicationWrapper.init(ApplicationWrapper.java:64) at org.mule.module.launcher.DefaultMuleDeployer.deploy(DefaultMuleDeployer.java:46) at org.mule.module.launcher.DeploymentService.guardedDeploy(DeploymentService.java:398) at org.mule.module.launcher.DeploymentService.start(DeploymentService.java:181) at org.mule.module.launcher.MuleContainer.start(MuleContainer.java:157) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mule.module.reboot.MuleContainerWrapper.start(MuleContainerWrapper.java:56) at org.tanukisoftware.wrapper.WrapperManager$12.run(WrapperManager.java:3925)` What is that .log file for? Where is the conf file to set it up to be written in a place where the mule user has permissions to write?
|
mule service start script raises exception with log file I'm trying to rewrite the mule start script so it works as a service on a RHEL. Currently I have it mostly done. It is starting and I have the most of the log files being successfully written where I want them. But there's a file named literally .log that I do not know what is for, neither where to configure (its name and path). Such file is adding the following nasty lines in the mule_ee.log upon start up: log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: .log (Permission denied) at java.io.FileOutputStream.openAppend(Native Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:207) at java.io.FileOutputStream.<init>(FileOutputStream.java:131) at org.apache.log4j.FileAppender.setFile(FileAppender.java:294) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:547) at org.mule.module.launcher.log4j.ApplicationAwareRepositorySelector.configureFrom(ApplicationAwareRepositorySelector.java:166) at org.mule.module.launcher.log4j.ApplicationAwareRepositorySelector.getLoggerRepository(ApplicationAwareRepositorySelector.java:95) at org.apache.log4j.LogManager.getLoggerRepository(LogManager.java:208) at org.apache.log4j.LogManager.getLogger(LogManager.java:228) at org.mule.module.logging.MuleLoggerFactory.getLogger(MuleLoggerFactory.java:77) at org.mule.module.logging.DispatchingLogger.getLogger(DispatchingLogger.java:419) at org.mule.module.logging.DispatchingLogger.isInfoEnabled(DispatchingLogger.java:191) at org.apache.commons.logging.impl.SLF4JLog.isInfoEnabled(SLF4JLog.java:78) at org.mule.module.launcher.application.DefaultMuleApplication.init(DefaultMuleApplication.java:188) at org.mule.module.launcher.application.PriviledgedMuleApplication.init(PriviledgedMuleApplication.java:46) at org.mule.module.launcher.application.ApplicationWrapper.init(ApplicationWrapper.java:64) at org.mule.module.launcher.DefaultMuleDeployer.deploy(DefaultMuleDeployer.java:46) at org.mule.module.launcher.DeploymentService.guardedDeploy(DeploymentService.java:398) at org.mule.module.launcher.DeploymentService.start(DeploymentService.java:181) at org.mule.module.launcher.MuleContainer.start(MuleContainer.java:157) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mule.module.reboot.MuleContainerWrapper.start(MuleContainerWrapper.java:56) at org.tanukisoftware.wrapper.WrapperManager$12.run(WrapperManager.java:3925)` What is that .log file for? Where is the conf file to set it up to be written in a place where the mule user has permissions to write?
|
service, mule, rhel
| 0
| 1,344
| 2
|
https://stackoverflow.com/questions/12621336/mule-service-start-script-raises-exception-with-log-file
|
11,798,601
|
How can I grant privileges to a Linux user?
|
For example, I will make a user called username and I need to grant all root privileges to user username.
|
How can I grant privileges to a Linux user? For example, I will make a user called username and I need to grant all root privileges to user username.
|
linux, debian, rhel
| 0
| 536
| 2
|
https://stackoverflow.com/questions/11798601/how-can-i-grant-privileges-to-linux-user
|
8,706,453
|
Replicate solaris commands in RHEL
|
Are there any equivalent commands such as pidmax , max_nprocs or maxuprc in RHEL? Or how can these values be derived? I mean, there must exist a formula, a balance between several variables like memory, CPU and running processes. Thanks in advance.
|
Replicate solaris commands in RHEL Are there any equivalent commands such as pidmax , max_nprocs or maxuprc in RHEL? Or how can these values be derived? I mean, there must exist a formula, a balance between several variables like memory, CPU and running processes. Thanks in advance.
|
multithreading, performance, bash, solaris, rhel
| 0
| 108
| 1
|
https://stackoverflow.com/questions/8706453/replicate-solaris-commands-in-rhel
|
6,057,308
|
java does not exit back to prompt when running jar file fails
|
When running java -jar myfile.jar on my RHEL 64 machine, if the install fails for some reason, I don't get back to the prompt - thus the process doesn't exit. On another RHEL 64 machine, when running the same jar file, if it fails, I do get back to the prompt. Both machines are running the same version of Java, 1.6.0_25. Does anyone know what can cause this behavior? Edit: the jar has an Ant build XML that fails right at the beginning (I've added a <fail/> task). When running the file I get this: Total time: 1 second validate failed org.tp23.antinstaller.InstallException: Error running the install, Ant run failed - examine the error logs for details at org.tp23.antinstaller.runtime.exe.AntLauncherFilter.exec(AntLauncherFilter.java:112) at org.tp23.antinstaller.runtime.exe.AntLauncherValidateFilter.exec(AntLauncherValidateFilter.java:53) at org.tp23.antinstaller.runtime.ExecInstall.exec(ExecInstall.java:89) at org.tp23.antinstaller.selfextract.SelfExtractor.main(SelfExtractor.java:372) Install failed Error running the install, Ant run failed - examine the error logs for details Failed but no prompt, process still running...
|
java does not exit back to prompt when running jar file fails When running java -jar myfile.jar on my RHEL 64 machine, if the install fails for some reason, I don't get back to the prompt - thus the process doesn't exit. On another RHEL 64 machine, when running the same jar file, if it fails, I do get back to the prompt. Both machines are running the same version of Java, 1.6.0_25. Does anyone know what can cause this behavior? Edit: the jar has an Ant build XML that fails right at the beginning (I've added a <fail/> task). When running the file I get this: Total time: 1 second validate failed org.tp23.antinstaller.InstallException: Error running the install, Ant run failed - examine the error logs for details at org.tp23.antinstaller.runtime.exe.AntLauncherFilter.exec(AntLauncherFilter.java:112) at org.tp23.antinstaller.runtime.exe.AntLauncherValidateFilter.exec(AntLauncherValidateFilter.java:53) at org.tp23.antinstaller.runtime.ExecInstall.exec(ExecInstall.java:89) at org.tp23.antinstaller.selfextract.SelfExtractor.main(SelfExtractor.java:372) Install failed Error running the install, Ant run failed - examine the error logs for details Failed but no prompt, process still running...
|
linux, jar, java, rhel
| 0
| 1,819
| 1
|
https://stackoverflow.com/questions/6057308/java-does-not-exit-back-to-prompt-when-running-jar-file-fails
|
5,026,734
|
Take undo tablespace datafile offline, which is in recovery mode?
|
I am trying to take an undo datafile into offline mode using the following command: alter database datafile '<datafile path>' offline; This datafile is in recovery mode. Oracle shows the message "database successfully altered". But after executing this command, when I select entries in v$datafile , the file that I just made offline remains in this table. Can somebody please tell me how to take an undo datafile offline? OS: RHEL Oracle version: 11g Datafile status: Recovery EDIT: I already tried the command alter database datafile '/your/data/file/name' offline drop . It says database altered successfully, but alter database open; fails with the message <my undo log file name> needs recovery of undo file. I cannot recover it as I have lost the archive files. It seems like the file is getting dropped logically, not physically. Now I just want my database to be up and running, and for that I want to take this file offline. When I check the v$datafile table it shows the entry for the file even though alter database datafile '<datafile path>' offline drop; ran successfully. Please help me resolve the issue. The database has been down since the morning and I could not get it started.
|
Take undo tablespace datafile offline, which is in recovery mode? I am trying to take an undo datafile into offline mode using the following command: alter database datafile '<datafile path>' offline; This datafile is in recovery mode. Oracle shows the message "database successfully altered". But after executing this command, when I select entries in v$datafile , the file that I just made offline remains in this table. Can somebody please tell me how to take an undo datafile offline? OS: RHEL Oracle version: 11g Datafile status: Recovery EDIT: I already tried the command alter database datafile '/your/data/file/name' offline drop . It says database altered successfully, but alter database open; fails with the message <my undo log file name> needs recovery of undo file. I cannot recover it as I have lost the archive files. It seems like the file is getting dropped logically, not physically. Now I just want my database to be up and running, and for that I want to take this file offline. When I check the v$datafile table it shows the entry for the file even though alter database datafile '<datafile path>' offline drop; ran successfully. Please help me resolve the issue. The database has been down since the morning and I could not get it started.
|
linux, oracle-database, rhel
| 0
| 6,555
| 1
|
https://stackoverflow.com/questions/5026734/take-undo-tablespace-datafile-offline-which-is-in-recovery-mode
|
3,648,528
|
samba share on rails
|
I am writing an XML file in rails(running on RHEL) and will then need to post this file across to a windows share. Sambala was installed so that we can SMB to the share, but after running some test code I get the error: uninitialized constant ApplicationController::Sambala samba = Sambala.new( :domain => 'myDOMAIN', :host => 'myHOST', :share => 'mySHARE', :user => 'myUSER', :password => 'myPASSWORD') samba.cd('mySHARE') # => true samba.put(:from => 'aLocalFile.txt') Is there a better way to connect to a windows share using rails on RHEL? or do I need to include a reference to sambala somewhere?
|
samba share on rails I am writing an XML file in rails(running on RHEL) and will then need to post this file across to a windows share. Sambala was installed so that we can SMB to the share, but after running some test code I get the error: uninitialized constant ApplicationController::Sambala samba = Sambala.new( :domain => 'myDOMAIN', :host => 'myHOST', :share => 'mySHARE', :user => 'myUSER', :password => 'myPASSWORD') samba.cd('mySHARE') # => true samba.put(:from => 'aLocalFile.txt') Is there a better way to connect to a windows share using rails on RHEL? or do I need to include a reference to sambala somewhere?
|
ruby-on-rails, windows, samba, rhel
| 0
| 751
| 2
|
https://stackoverflow.com/questions/3648528/samba-share-on-rails
|
79,395,095
|
TestFX and Monocle on RHEL not working in headless mode
|
I'm trying to run a simple TestFX application on RHEL 9.4. It works fine in 'normal' mode, but for CI I need to also run in headless mode. I'm using Java 1.8.0_431 and Monocle 1.8.0_20. Running from a CLI, I use the following: mvn clean install -Dtestfx.headless=true -Dtestfx.robot=glass -Dglass.platform=Monocle -Dmonocle.platform=Headless -Dprism.order=sw -Dprism.txt=t2K -Dheadless.geometry=16000x12000-32 -Djavafx.platform=linux -Djava.awt.headless=true -X This gives the following fault: Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.178 sec <<< FAILURE! com.example.TestFXExampleTest Time elapsed: 0.177 sec <<< ERROR! java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NoClassDefFoundError: com/sun/glass/ui/Robot at org.testfx.util.WaitForAsyncUtils.waitFor(WaitForAsyncUtils.java:301) at org.testfx.api.FxToolkit.registerPrimaryStage(FxToolkit.java:113)... If I run from IntelliJ, I'm passing "-Dheadless=true" which executes the following: @BeforeClass public static void setUpHeadlessMode() throws Exception { if (Boolean.getBoolean("headless")) { // System.setProperty("javafx.platform", "monocle"); System.setProperty("testfx.robot", "glass"); System.setProperty("testfx.headless", "true"); System.setProperty("glass.platform", "Monocle"); System.setProperty("monocle.platform", "Headless"); System.setProperty("prism.order", "sw"); System.setProperty("prism.text", "t2k"); System.setProperty("java.awt.headless", "true"); } registerPrimaryStage(); } And this gives me: java.lang.NoSuchMethodException: com.sun.glass.ui.Screen.<init>(long, int, int, int, int, int, int, int, int, int, int, int, float) at java.lang.Class.getConstructor0(Class.java:3082) at java.lang.Class.getDeclaredConstructor(Class.java:2178) at com.sun.glass.ui.monocle.MonocleApplication$2.run(MonocleApplication.java:234) ... Both of these errors seem to suggest that there is a missing library, although not the same one. I've seen numerous discussions about TestFX, but most are either for later java versions, or are for Windows, or are not about headless mode. The fact that the test app builds and runs in headed mode suggests the POM is probably ok, although I can provide (a redacted) version of that if this might be useful. The relevant bits are: <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-fxml</artifactId> <version>11</version> </dependency> <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-controls</artifactId> <version>11</version> </dependency> <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-graphics</artifactId> <version>11</version> </dependency> On Windows, the headless components are provided by the native 'glass.dll' and (providing you magically get the right version) this works fine. I'm assuming there should be a similar '.so' for Linux, but so far have not located one. (And I'm not 100% sure this is actually the correct diagnosis or fix anyway.) I'm expecting headless to work as it does in Windows. So far I've tried numerous combinations of Monocle version, mvn switches, and VMArgs. The machine is heavily locked-down, so I can't do things like easily install new software or grab thing directly from Github. But previous attempts on a similar machine have also failed to run headless mode tests.
|
TestFX and Monocle on RHEL not working in headless mode I'm trying to run a simple TestFX application on RHEL 9.4. It works fine in 'normal' mode, but for CI I need to also run in headless mode. I'm using Java 1.8.0_431 and Monocle 1.8.0_20. Running from a CLI, I use the following: mvn clean install -Dtestfx.headless=true -Dtestfx.robot=glass -Dglass.platform=Monocle -Dmonocle.platform=Headless -Dprism.order=sw -Dprism.txt=t2K -Dheadless.geometry=16000x12000-32 -Djavafx.platform=linux -Djava.awt.headless=true -X This gives the following fault: Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.178 sec <<< FAILURE! com.example.TestFXExampleTest Time elapsed: 0.177 sec <<< ERROR! java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NoClassDefFoundError: com/sun/glass/ui/Robot at org.testfx.util.WaitForAsyncUtils.waitFor(WaitForAsyncUtils.java:301) at org.testfx.api.FxToolkit.registerPrimaryStage(FxToolkit.java:113)... If I run from IntelliJ, I'm passing "-Dheadless=true" which executes the following: @BeforeClass public static void setUpHeadlessMode() throws Exception { if (Boolean.getBoolean("headless")) { // System.setProperty("javafx.platform", "monocle"); System.setProperty("testfx.robot", "glass"); System.setProperty("testfx.headless", "true"); System.setProperty("glass.platform", "Monocle"); System.setProperty("monocle.platform", "Headless"); System.setProperty("prism.order", "sw"); System.setProperty("prism.text", "t2k"); System.setProperty("java.awt.headless", "true"); } registerPrimaryStage(); } And this gives me: java.lang.NoSuchMethodException: com.sun.glass.ui.Screen.<init>(long, int, int, int, int, int, int, int, int, int, int, int, float) at java.lang.Class.getConstructor0(Class.java:3082) at java.lang.Class.getDeclaredConstructor(Class.java:2178) at com.sun.glass.ui.monocle.MonocleApplication$2.run(MonocleApplication.java:234) ... Both of these errors seem to suggest that there is a missing library, although not the same one. I've seen numerous discussions about TestFX, but most are either for later java versions, or are for Windows, or are not about headless mode. The fact that the test app builds and runs in headed mode suggests the POM is probably ok, although I can provide (a redacted) version of that if this might be useful. The relevant bits are: <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-fxml</artifactId> <version>11</version> </dependency> <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-controls</artifactId> <version>11</version> </dependency> <dependency> <groupId>org.openjfx</groupId> <artifactId>javafx-graphics</artifactId> <version>11</version> </dependency> On Windows, the headless components are provided by the native 'glass.dll' and (providing you magically get the right version) this works fine. I'm assuming there should be a similar '.so' for Linux, but so far have not located one. (And I'm not 100% sure this is actually the correct diagnosis or fix anyway.) I'm expecting headless to work as it does in Windows. So far I've tried numerous combinations of Monocle version, mvn switches, and VMArgs. The machine is heavily locked-down, so I can't do things like easily install new software or grab thing directly from Github. But previous attempts on a similar machine have also failed to run headless mode tests.
|
javafx, rhel, headless, testfx, monocle
| 0
| 152
| 1
|
https://stackoverflow.com/questions/79395095/testfx-and-monocle-on-rhel-not-working-in-headless-mode
|
78,847,167
|
IBM MQ 9.1 compatibility with RHEL 8.10
|
We are currently running IBM MQ 9.1.0.5 on RHEL 7.9. Since end of support (EOS) has been announced for this version of RHEL, we are planning to migrate to RHEL 8. Is IBM MQ 9.1.0.5 compatible with RHEL 8.10?
|
IBM MQ 9.1 compatibility with RHEL 8.10 We are currently running IBM MQ 9.1.0.5 on RHEL 7.9. Since end of support (EOS) has been announced for this version of RHEL, we are planning to migrate to RHEL 8. Is IBM MQ 9.1.0.5 compatible with RHEL 8.10?
|
ibm-mq, rhel
| 0
| 306
| 1
|
https://stackoverflow.com/questions/78847167/ibm-mq-9-1-compatibility-with-rhel-8-10
|
78,122,282
|
dockerfile - add a package to ubi minimal base image from private repository
|
I am trying to build my docker image from nodejs minimal as base image. nodejs minimal does not have shadow-utils which I need to add user group and user. The nodejs minimal image I am using is hosted in a private repository and this repo does not host the shadow-utils package. How do I install shadow-utils in the nodejs minimal image via my dockerfile? I have tried the following in my dockerfile FROM private.repo.com/node:20.9.0_ubi9 RUN microdnf install shadow-utils -y && \ microdnf clean all When I run docker build -t myimage:mytag . it fails saying the following. Not sure if this is a permission issue or a repository issue. => ERROR [6/7] RUN microdnf install shadow-utils -y && microdnf clean all 0.4s ------ > [6/7] RUN microdnf install shadow-utils -y && microdnf clean all: 0.289 error: Failed to create: /var/cache/yum/metadata ------ dockerfile:10 -------------------- 9 | 10 | >>> RUN microdnf install shadow-utils -y && \ 11 | >>> microdnf clean all 12 | -------------------- ERROR: failed to solve: process "/bin/sh -c microdnf install shadow-utils -y && microdnf clean all" did not complete successfully: exit code: 1
|
dockerfile - add a package to ubi minimal base image from private repository I am trying to build my docker image from nodejs minimal as base image. nodejs minimal does not have shadow-utils which I need to add user group and user. The nodejs minimal image I am using is hosted in a private repository and this repo does not host the shadow-utils package. How do I install shadow-utils in the nodejs minimal image via my dockerfile? I have tried the following in my dockerfile FROM private.repo.com/node:20.9.0_ubi9 RUN microdnf install shadow-utils -y && \ microdnf clean all When I run docker build -t myimage:mytag . it fails saying the following. Not sure if this is a permission issue or a repository issue. => ERROR [6/7] RUN microdnf install shadow-utils -y && microdnf clean all 0.4s ------ > [6/7] RUN microdnf install shadow-utils -y && microdnf clean all: 0.289 error: Failed to create: /var/cache/yum/metadata ------ dockerfile:10 -------------------- 9 | 10 | >>> RUN microdnf install shadow-utils -y && \ 11 | >>> microdnf clean all 12 | -------------------- ERROR: failed to solve: process "/bin/sh -c microdnf install shadow-utils -y && microdnf clean all" did not complete successfully: exit code: 1
|
docker, dockerfile, redhat, rhel, ubi
| 0
| 1,539
| 2
|
https://stackoverflow.com/questions/78122282/dockerfile-add-a-package-to-ubi-minimal-base-image-from-private-repository
|
77,179,970
|
powershell needed for a springboot project dependency
|
Hi. I'm working on a project where a Spring Boot project uses the Maven dependency jpowershell. My problem is that I have dockerized the project with a RHEL 7 base image, also installing PowerShell, but when I try to run my Docker image I get the following error and the Docker container exits: PowerShell not available com.profesorfalken.jpowershell.PowerShellNotAvailableException: Cannot execute PowerShell.exe. Please make sure that it is installed in your system This is my Dockerfile. Am I following the correct procedure or am I making some mistake? Any help will be much appreciated. FROM something.abc.com/test-base/rhel:7 ENV http_proxy [URL] ENV https_proxy [URL] ENV no_proxy localhost,127.0.0.1,.xyz.com ENV PATH="/usr/java/jdk-15.0.1/bin:$PATH" # Install common packages RUN yum install -y bzip2 RUN yum install -y wget RUN wget [URL] RUN yum install -y powershell-7.3.7-1.rh.x86_64.rpm RUN pwsh --version # Download and Install OpenJDK 15 RUN cd /tmp && wget [URL] && \ tar -xvzf openjdk-15.0.1_linux-x64_bin.tar.gz && \ mkdir -p /usr/java && mv /tmp/jdk-15.0.1 /usr/java VOLUME /tmp ADD target/xyz-*.jar /app/xyz/sample.jar EXPOSE 8080 ENTRYPOINT ["java","-jar","/app/xyz/sample.jar"] This is the error when I run the Docker container: Powershell paths
|
powershell needed for a springboot project dependency Hi. I'm working on a project where a Spring Boot project uses the Maven dependency jpowershell. My problem is that I have dockerized the project with a RHEL 7 base image, also installing PowerShell, but when I try to run my Docker image I get the following error and the Docker container exits: PowerShell not available com.profesorfalken.jpowershell.PowerShellNotAvailableException: Cannot execute PowerShell.exe. Please make sure that it is installed in your system This is my Dockerfile. Am I following the correct procedure or am I making some mistake? Any help will be much appreciated. FROM something.abc.com/test-base/rhel:7 ENV http_proxy [URL] ENV https_proxy [URL] ENV no_proxy localhost,127.0.0.1,.xyz.com ENV PATH="/usr/java/jdk-15.0.1/bin:$PATH" # Install common packages RUN yum install -y bzip2 RUN yum install -y wget RUN wget [URL] RUN yum install -y powershell-7.3.7-1.rh.x86_64.rpm RUN pwsh --version # Download and Install OpenJDK 15 RUN cd /tmp && wget [URL] && \ tar -xvzf openjdk-15.0.1_linux-x64_bin.tar.gz && \ mkdir -p /usr/java && mv /tmp/jdk-15.0.1 /usr/java VOLUME /tmp ADD target/xyz-*.jar /app/xyz/sample.jar EXPOSE 8080 ENTRYPOINT ["java","-jar","/app/xyz/sample.jar"] This is the error when I run the Docker container: Powershell paths
|
linux, spring-boot, docker, rhel
| 0
| 509
| 1
|
https://stackoverflow.com/questions/77179970/powershell-needed-for-a-springboot-project-dependency
|
76,924,701
|
Cloud-init 18.5 module execution fails before creating the SSH keys
|
I am provisioning a virtual machine which executes the cloud-init script on its first boot. I have a floating IP assigned to the VM and I can ping it, but I can't SSH into it. I suspect that the cloud-init script execution has failed somewhere. Even if I pass userdata with a password, the module may not execute the userdata script. How can I peek inside the VM to figure out the root cause? This doesn't happen frequently. Image: RHEL 8/9 Virtualization: KVM/Libvirt Cloud-init version: 18.5
|
Cloud-init 18.5 module execution fails before creating the SSH keys I am provisioning a virtual machine which executes the cloud-init script on its first boot. I have a floating IP assigned to the VM and I can ping it, but I can't SSH into it. I suspect that the cloud-init script execution has failed somewhere. Even if I pass userdata with a password, the module may not execute the userdata script. How can I peek inside the VM to figure out the root cause? This doesn't happen frequently. Image: RHEL 8/9 Virtualization: KVM/Libvirt Cloud-init version: 18.5
|
virtual-machine, virtualization, rhel, kvm, cloud-init
| 0
| 146
| 1
|
https://stackoverflow.com/questions/76924701/cloud-init-18-5-module-execution-fails-before-creating-the-ssh-keys
|
76,827,683
|
NGINX proxy forward with POST request gives 405 (Not Allow)
|
I am making a POST request from my frontend application to my Node application running on the server and listening on port 4001. The request looks like: [URL] - and works fine. I want to hide the port and set up forwarding in NGINX so that in my frontend application I can make a request like [URL] . I have tried this NGINX configuration but it doesn't work, I'm still getting error 405 (Not Allowed): server { listen 443 ssl default_server; listen [::]:443 ssl default_server; server_name domen.name.com; ssl_certificate /etc/nginx/certificate/domain.crt; ssl_certificate_key /etc/nginx/certificate/domain.key; root /mnt/storage/app/frontend/build; index index.php index.html index.htm index.nginx-debian.html; location ~ ^/ { try_files $uri $uri/ /index.html; } location /login { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_pass [URL] proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_method POST; proxy_set_header X-Forwarded-Proto $scheme; } } Adding error_page 405 =200 $uri; does not help.
|
NGINX proxy forward with POST request gives 405 (Not Allow) I am making a POST request from my frontend application to my Node application running on the server and listening on port 4001. The request looks like: [URL] - and works fine. I want to hide the port and set up forwarding in NGINX so that in my frontend application I can make a request like [URL] . I have tried this NGINX configuration but it doesn't work, I'm still getting error 405 (Not Allowed): server { listen 443 ssl default_server; listen [::]:443 ssl default_server; server_name domen.name.com; ssl_certificate /etc/nginx/certificate/domain.crt; ssl_certificate_key /etc/nginx/certificate/domain.key; root /mnt/storage/app/frontend/build; index index.php index.html index.htm index.nginx-debian.html; location ~ ^/ { try_files $uri $uri/ /index.html; } location /login { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_pass [URL] proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_method POST; proxy_set_header X-Forwarded-Proto $scheme; } } Adding error_page 405 =200 $uri; does not help.
|
node.js, reactjs, nginx, rhel
| 0
| 2,093
| 1
|
https://stackoverflow.com/questions/76827683/nginx-proxy-forward-with-post-request-gives-405-not-allow
|
76,638,008
|
Docker Nginx Django [emerg] host not found in upstream /etc/nginx/conf.d/nginx.conf
|
I'm new to web applications landscape and just trying to deploy a basic Django application with Gunicorn, Nginx and podman (version 4.4.1) on Red Hat Linux Enterprise 8.7. Dockerfile for Nginx is the official image from Docker Hub v 1.25.1. There's no docker-compose/podman-compose available on the server. I'm starting the build by creating a dedicated network: podman network create testapp-net The next component is Django application: podman build -t testapp-django -f src/testapp-django/compose/django/Dockerfile . Dockerfile for the app is based on Ubuntu base image and I'm exposing port 8000: FROM ubuntu:22.04 ... RUN addgroup --system django \ && adduser --system --ingroup django django ... WORKDIR /app ... RUN chmod +x /app ... EXPOSE 8000 ... COPY src/testapp-django/compose/django/entrypoint /entrypoint RUN sed -i -e 's/^M$//' /entrypoint RUN chmod +x /entrypoint RUN chown django /entrypoint COPY src/testapp-django/compose/django/start /start RUN sed -i -e 's/^M$//' /start RUN chmod +x /start RUN chown django /start RUN chown -R django:django /app USER django ENTRYPOINT ["/entrypoint"] /entrypoint: set -o errexit set -o pipefail set -o nounset exec "$@" /start: set -o errexit set -o pipefail set -o nounset python3 /app/manage.py migrate gunicorn testapp.wsgi:application --bind 0.0.0.0:8000 --chdir=/app Starting the Django app is successful: podman run -d -p 8010:8000 --name testapp-django --env-file src/testapp-django/.env --network testapp-net testapp-django /start Response: [2023-07-07 10:23:41 +0000] [24] [INFO] Starting gunicorn 20.1.0 [2023-07-07 10:23:41 +0000] [24] [INFO] Listening at: [URL] (24) [2023-07-07 10:23:41 +0000] [24] [INFO] Using worker: sync [2023-07-07 10:23:41 +0000] [26] [INFO] Booting worker with pid: 26 In the next step I want to start Nginx. Dockerfile: FROM nginx:1.25.1 RUN rm /etc/nginx/conf.d/default.conf COPY src/testapp-django/compose/nginx/nginx.conf /etc/nginx/conf.d My nginx.conf file: upstream testapp-django { server testapp-django:8000; } server { listen 80; location / { proxy_pass [URL] proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; client_max_body_size 20M; } } podman build -t testapp-nginx -f src/testapp-django/compose/nginx/Dockerfile . When I run the container though: podman run -p 1337:80 --name testapp-nginx --network testapp-net testapp-nginx I'm getting the following response: /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh /docker-entrypoint.sh: Configuration complete; ready for start up 2023/07/07 14:48:34 [emerg] 1#1: host not found in upstream "testapp-django:8000" in /etc/nginx/conf.d/nginx.conf:2 nginx: [emerg] host not found in upstream "testapp-django:8000" in /etc/nginx/conf.d/nginx.conf:2 I was looking for solution in similar posts on SO, but without any success. 
Inspect on app container, I see the following for the network: "NetworkSettings": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "Bridge": "", "SandboxID": "", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "8000/tcp": [ { "HostIp": "", "HostPort": "8000" } ] }, "SandboxKey": "/run/user/1632100669/netns/netns-10b5a628-1e92-a4ac-1800-2957e0edaf1c", "Networks": { "testapp-net": { "EndpointID": "", "Gateway": "10.89.1.1", "IPAddress": "10.89.1.17", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "16:cb:3d:2a:d1:43", "NetworkID": "testapp-net", "DriverOpts": null, "IPAMConfig": null, "Links": null, "Aliases": [ "2a14008a1c9d" ] } } } The same on nginx container: "NetworkSettings": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "Bridge": "", "SandboxID": "", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "80/tcp": [ { "HostIp": "", "HostPort": "1337" } ] }, "SandboxKey": "", "Networks": { "testapp-net": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "NetworkID": "testapp-net", "DriverOpts": null, "IPAMConfig": null, "Links": null, "Aliases": [ "a4a11b846dbc" ] } } }
|
Docker Nginx Django [emerg] host not found in upstream /etc/nginx/conf.d/nginx.conf I'm new to web applications landscape and just trying to deploy a basic Django application with Gunicorn, Nginx and podman (version 4.4.1) on Red Hat Linux Enterprise 8.7. Dockerfile for Nginx is the official image from Docker Hub v 1.25.1. There's no docker-compose/podman-compose available on the server. I'm starting the build by creating a dedicated network: podman network create testapp-net The next component is Django application: podman build -t testapp-django -f src/testapp-django/compose/django/Dockerfile . Dockerfile for the app is based on Ubuntu base image and I'm exposing port 8000: FROM ubuntu:22.04 ... RUN addgroup --system django \ && adduser --system --ingroup django django ... WORKDIR /app ... RUN chmod +x /app ... EXPOSE 8000 ... COPY src/testapp-django/compose/django/entrypoint /entrypoint RUN sed -i -e 's/^M$//' /entrypoint RUN chmod +x /entrypoint RUN chown django /entrypoint COPY src/testapp-django/compose/django/start /start RUN sed -i -e 's/^M$//' /start RUN chmod +x /start RUN chown django /start RUN chown -R django:django /app USER django ENTRYPOINT ["/entrypoint"] /entrypoint: set -o errexit set -o pipefail set -o nounset exec "$@" /start: set -o errexit set -o pipefail set -o nounset python3 /app/manage.py migrate gunicorn testapp.wsgi:application --bind 0.0.0.0:8000 --chdir=/app Starting the Django app is successful: podman run -d -p 8010:8000 --name testapp-django --env-file src/testapp-django/.env --network testapp-net testapp-django /start Response: [2023-07-07 10:23:41 +0000] [24] [INFO] Starting gunicorn 20.1.0 [2023-07-07 10:23:41 +0000] [24] [INFO] Listening at: [URL] (24) [2023-07-07 10:23:41 +0000] [24] [INFO] Using worker: sync [2023-07-07 10:23:41 +0000] [26] [INFO] Booting worker with pid: 26 In the next step I want to start Nginx. Dockerfile: FROM nginx:1.25.1 RUN rm /etc/nginx/conf.d/default.conf COPY src/testapp-django/compose/nginx/nginx.conf /etc/nginx/conf.d My nginx.conf file: upstream testapp-django { server testapp-django:8000; } server { listen 80; location / { proxy_pass [URL] proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; client_max_body_size 20M; } } podman build -t testapp-nginx -f src/testapp-django/compose/nginx/Dockerfile . When I run the container though: podman run -p 1337:80 --name testapp-nginx --network testapp-net testapp-nginx I'm getting the following response: /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh /docker-entrypoint.sh: Configuration complete; ready for start up 2023/07/07 14:48:34 [emerg] 1#1: host not found in upstream "testapp-django:8000" in /etc/nginx/conf.d/nginx.conf:2 nginx: [emerg] host not found in upstream "testapp-django:8000" in /etc/nginx/conf.d/nginx.conf:2 I was looking for solution in similar posts on SO, but without any success. 
Inspect on app container, I see the following for the network: "NetworkSettings": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "Bridge": "", "SandboxID": "", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "8000/tcp": [ { "HostIp": "", "HostPort": "8000" } ] }, "SandboxKey": "/run/user/1632100669/netns/netns-10b5a628-1e92-a4ac-1800-2957e0edaf1c", "Networks": { "testapp-net": { "EndpointID": "", "Gateway": "10.89.1.1", "IPAddress": "10.89.1.17", "IPPrefixLen": 24, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "16:cb:3d:2a:d1:43", "NetworkID": "testapp-net", "DriverOpts": null, "IPAMConfig": null, "Links": null, "Aliases": [ "2a14008a1c9d" ] } } } The same on nginx container: "NetworkSettings": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "Bridge": "", "SandboxID": "", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "80/tcp": [ { "HostIp": "", "HostPort": "1337" } ] }, "SandboxKey": "", "Networks": { "testapp-net": { "EndpointID": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "NetworkID": "testapp-net", "DriverOpts": null, "IPAMConfig": null, "Links": null, "Aliases": [ "a4a11b846dbc" ] } } }
|
django, docker, nginx, rhel, podman
| 0
| 374
| 2
|
https://stackoverflow.com/questions/76638008/docker-nginx-django-emerg-host-not-found-in-upstream-etc-nginx-conf-d-nginx-c
|
76,555,963
|
Linux read call behavior when a segment of the file is corrupted
|
Context : OS : Red Hat 8.X File systems : EXT4, XFS Storage Types : SSD, HDD Corruption : Meant here is any activity that results in written data that cannot be retrieved as it was written, e.g. disk device level corruption. The Linux read call signature is ssize_t read(int fd, void buf[.count], size_t count); . Say the file referred to by fd has corrupted segments (and NOT corrupted segments). If the read request goes through one or more corrupted segments (assume the segments are A(OK)--B(corrupted)--C(OK)--D(corrupted)--E(OK), fd 's file position is set before the beginning of A, and "count" is large enough to contain all A -> E segments), is there a possibility of read 's return value being larger than ZERO (and buf containing data)? If so, 1.1. What would be contained in buf ? Will it contain any data from the corrupted segments B and D? What could be the return value of read ? 1.2 What is the probability of this happening? What factors could increase the probability of this happening, e.g. a re-boot? Would the file size returned by fstat count any bytes from corrupted segments? Purpose : I am trying to decide (under the above OS and file system conditions) if I NEED to add an "application level calculated checksum" along with the written (binary) data and, when reading the same file, if read returns success (i.e. return value > 0), validate the (app level written) checksum before concluding the data is valid. Also, I am NOT worried about some intruder modifying the written data here. Only worried about things that can happen from system activity, e.g. a machine re-boot.
|
Linux read call behavior when a segment of the file is corrupted Context : OS : Red Hat 8.X File systems : EXT4, XFS Storage Types : SSD, HDD Corruption : Meant here is any activity that results in written data that cannot be retrieved as it was written, e.g. disk device level corruption. The Linux read call signature is ssize_t read(int fd, void buf[.count], size_t count); . Say the file referred to by fd has corrupted segments (and NOT corrupted segments). If the read request goes through one or more corrupted segments (assume the segments are A(OK)--B(corrupted)--C(OK)--D(corrupted)--E(OK), fd 's file position is set before the beginning of A, and "count" is large enough to contain all A -> E segments), is there a possibility of read 's return value being larger than ZERO (and buf containing data)? If so, 1.1. What would be contained in buf ? Will it contain any data from the corrupted segments B and D? What could be the return value of read ? 1.2 What is the probability of this happening? What factors could increase the probability of this happening, e.g. a re-boot? Would the file size returned by fstat count any bytes from corrupted segments? Purpose : I am trying to decide (under the above OS and file system conditions) if I NEED to add an "application level calculated checksum" along with the written (binary) data and, when reading the same file, if read returns success (i.e. return value > 0), validate the (app level written) checksum before concluding the data is valid. Also, I am NOT worried about some intruder modifying the written data here. Only worried about things that can happen from system activity, e.g. a machine re-boot.
|
c, linux, rhel, ext4, xfs
| 0
| 165
| 1
|
https://stackoverflow.com/questions/76555963/linux-read-call-behavior-when-a-segment-of-the-file-is-corrupted
|
76,489,955
|
403 Forbidden on rhel 8.4 using httpd userdir on a folder that is not /home
|
I'm in the process of configuring our new server on RedHat 8.4 and making userdir work with httpd has been bugging me. I've configured the file /etc/httpd/conf.d/userdir.conf as follows: <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir enabled # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # UserDir public_html </IfModule> # # Control access to UserDir directories. The following is an example # for a site where these directories are restricted to read-only. # <Directory "/user/*/public_html"> Options Indexes FollowSymLinks AllowOverride All Require all granted </Directory> I've used /user because we don't use the /home directory and I think that is part of the problem. I read online that SELinux configures the folder with some degree of accessibility and I tried adding the same rule as /home to the /user folder: /user/[^/]+/.+ all files system_u:object_r:user_home_t:s0 I also ran the command setsebool -P httpd_enable_homedirs 1 , but since /user is not the home directory, it did nothing. I also made sure to give access to the full path leading to /public_html and the folders inside. If you have any ideas of things I can do that I haven't done already, I'll be happy to hear about it. Edit 1 after running audit2allow -a #============= httpd_t ============== #!!!! This avc can be allowed using one of the these booleans: # httpd_use_nfs, use_nfs_home_dirs, git_system_use_nfs allow httpd_t nfs_t:dir read; #!!!! This avc can be allowed using one of the these booleans: # httpd_use_nfs, use_nfs_home_dirs, git_system_use_nfs allow httpd_t nfs_t:file getattr; #============= init_t ============== #!!!! This avc is allowed in the current policy allow init_t portmap_port_t:tcp_socket name_connect; #============= rhsmcertd_t ============== allow rhsmcertd_t gpg_exec_t:file execute; #============= sshd_t ============== #!!!! This avc can be allowed using the boolean 'use_nfs_home_dirs' allow sshd_t nfs_t:file read; #============= system_dbusd_t ============== #!!!! This avc has a dontaudit rule in the current policy allow system_dbusd_t hi_reserved_port_t:tcp_socket name_bind; #!!!! This avc is allowed in the current policy allow system_dbusd_t portmap_port_t:tcp_socket name_connect;
|
403 Forbidden on rhel 8.4 using httpd userdir on a folder that is not /home I'm in the process of configuring our new server on RedHat 8.4 and making userdir work with httpd has been bugging me. I've configured the file /etc/httpd/conf.d/userdir.conf as follows: <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir enabled # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # UserDir public_html </IfModule> # # Control access to UserDir directories. The following is an example # for a site where these directories are restricted to read-only. # <Directory "/user/*/public_html"> Options Indexes FollowSymLinks AllowOverride All Require all granted </Directory> I've used /user because we don't use the /home directory and I think that is part of the problem. I read online that SELinux configures the folder with some degree of accessibility and I tried adding the same rule as /home to the /user folder: /user/[^/]+/.+ all files system_u:object_r:user_home_t:s0 I also ran the command setsebool -P httpd_enable_homedirs 1 , but since /user is not the home directory, it did nothing. I also made sure to give access to the full path leading to /public_html and the folders inside. If you have any ideas of things I can do that I haven't done already, I'll be happy to hear about it. Edit 1 after running audit2allow -a #============= httpd_t ============== #!!!! This avc can be allowed using one of the these booleans: # httpd_use_nfs, use_nfs_home_dirs, git_system_use_nfs allow httpd_t nfs_t:dir read; #!!!! This avc can be allowed using one of the these booleans: # httpd_use_nfs, use_nfs_home_dirs, git_system_use_nfs allow httpd_t nfs_t:file getattr; #============= init_t ============== #!!!! This avc is allowed in the current policy allow init_t portmap_port_t:tcp_socket name_connect; #============= rhsmcertd_t ============== allow rhsmcertd_t gpg_exec_t:file execute; #============= sshd_t ============== #!!!! This avc can be allowed using the boolean 'use_nfs_home_dirs' allow sshd_t nfs_t:file read; #============= system_dbusd_t ============== #!!!! This avc has a dontaudit rule in the current policy allow system_dbusd_t hi_reserved_port_t:tcp_socket name_bind; #!!!! This avc is allowed in the current policy allow system_dbusd_t portmap_port_t:tcp_socket name_connect;
|
apache, rhel, selinux
| 0
| 805
| 1
|
https://stackoverflow.com/questions/76489955/403-forbidden-on-rhel-8-4-using-httpd-userdir-on-a-folder-that-is-not-home
|
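One way to attack the SELinux side of the question above is to label the non-standard web root explicitly rather than reusing the /home rule. The type and booleans below are a hedged suggestion, not taken from the original post, and assume the policycoreutils-python-utils package (which provides semanage) is installed.

# Label /user/*/public_html so httpd may read it; httpd_user_content_t is the
# type normally used for per-user web content:
sudo semanage fcontext -a -t httpd_user_content_t "/user/[^/]+/public_html(/.*)?"
sudo restorecon -Rv /user

# Allow httpd to read user content:
sudo setsebool -P httpd_read_user_content 1

# The audit2allow output in the question mentions nfs_t denials; if the directory
# is NFS-mounted, this boolean is usually needed as well:
sudo setsebool -P httpd_use_nfs 1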
76,456,526
|
How to get the jdk install path using shell programmatically when dual jdk are installed in RHEL?
|
I have a list of servers on which I need to find the JDK 8 install location programmatically. A few of the servers have a single JDK installation, which makes it easy to find the install location with the shell command readlink -f $(which java) from /usr/java . But certain RHEL servers have multiple JDKs installed, both 11 and 8, and alternatives --config java is set to JDK 11; in these cases it is difficult to get the install location of JDK 8 using the above command. I am looking for a shell script that finds the JDK 8 install location when multiple JDK versions are installed on a RHEL server, and that works for both single and multiple JDK installations.
|
How to get the jdk install path using shell programmatically when dual jdk are installed in RHEL? I have a list of servers on which I need to find the JDK 8 install location programmatically. A few of the servers have a single JDK installation, which makes it easy to find the install location with the shell command readlink -f $(which java) from /usr/java . But certain RHEL servers have multiple JDKs installed, both 11 and 8, and alternatives --config java is set to JDK 11; in these cases it is difficult to get the install location of JDK 8 using the above command. I am looking for a shell script that finds the JDK 8 install location when multiple JDK versions are installed on a RHEL server, and that works for both single and multiple JDK installations.
|
linux, bash, shell, rhel
| 0
| 166
| 1
|
https://stackoverflow.com/questions/76456526/how-to-get-the-jdk-install-path-using-shell-programmatically-when-dual-jdk-are-i
|
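A hedged sketch for the JDK question above. It assumes the JDKs were installed from RPMs named java-1.8.0-* (as is typical on RHEL); vendor tarball installs would need a different search, e.g. under /usr/lib/jvm or /usr/java. As a cross-check, alternatives --display java lists every registered java candidate, including the non-selected JDK 8 one.

#!/usr/bin/env bash
# Print the install root of every JDK 8 present, regardless of what
# 'alternatives --config java' currently points at.
rpm -qa 'java-1.8.0-*' --qf '%{NAME}\n' 2>/dev/null | sort -u | while read -r pkg; do
    rpm -ql "$pkg" 2>/dev/null | grep -E '/bin/java$' | while read -r bin; do
        # strip the trailing /bin/java to get the install root
        dirname "$(dirname "$(readlink -f "$bin")")"
    done
done | sort -u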
76,421,216
|
db2prereqcheck hangs on some linux systems
|
When running db2prereqcheck on a Linux system, it hangs and never comes back. The db2prereqcheck command is not only executed manually when one wants to install db2 components; it is also called by some diagnostic tools, for example db2support . -s . I experienced that it can hang and had no other choice but to kill the process. What can be a reason for this?
|
db2prereqcheck hangs on some linux systems When running db2prereqcheck on a Linux system, it hangs and never comes back. The db2prereqcheck command is not only executed manually when one wants to install db2 components; it is also called by some diagnostic tools, for example db2support . -s . I experienced that it can hang and had no other choice but to kill the process. What can be a reason for this?
|
db2, rhel, db2-luw, rhel8
| 0
| 153
| 1
|
https://stackoverflow.com/questions/76421216/db2prereqcheck-hangs-on-some-linux-systems
|
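Not an answer to the hang above, but a way to narrow it down: tracing system calls usually shows whether db2prereqcheck is blocked on a specific file, device or network lookup. Only standard strace options are used; no db2prereqcheck options are assumed.

# Run the check under strace and inspect the tail of the trace once it hangs:
strace -f -tt -o /tmp/db2prereq.trace db2prereqcheck
tail -n 50 /tmp/db2prereq.trace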
75,486,557
|
Error Linking a Static File for COBOL DB2 on RHEL
|
Compiling/Linking COBOL code for DB2 on a RHEL 8.6 server which is hitting an error. Command running: cob2 -F/etc/cob2.cfg -v myfile.cbl -L/opt/IBM/db2/V11.5/lib32 -I/opt/IBM/db2/V11.5/include/cobol_a -ldb2 -q"size(16384k)" -L. linkfile.a -o myfile.exe db2level Informational tokens are "DB2 v11.5.0.0", "s1906101300", "DYN1906101300AMD64", and Fix Pack "0". Product is installed at "/opt/IBM/db2/V11.5" cob2 -V Program cob2 Version 1.1.0 Built Mon Sep 27 10:39:30 2021 Error Message: /usr/bin/ld: /usr/lib/gcc/x86_64-redhat-linux/8/../../../../lib/Scrtl.0(.text+0x1c): unresolvable R_386_GOTOFF relocation against symbol '__libc_csu_fini' /usr/bin/ld: final link failed: Nonrepresentable section on output collect2: error ld returned 1 exit status. Tried changing various options but running out of options now. I was expecting/hoping for a clean compile and link, and the .exe file available. No CFLAGS and/or LDFLAGS set. gcc -v Using built-in specs. COLLECT GCC=gcc COLLECT LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/8/lto-wrapper OFFLOAD_TARGET_NAMES=nvptx-none OFFLOAD_TARGET_DEFAULT=1 Target: x86_64-redhat-linux Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=[URL] --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable- __cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-objext --enable-linkr-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl --disable-libmpx --enable-offload-targets=nvptx-none --without-cuda-driver --enable-gnu-indirect-function --enable-cet --with-tune-generic --with-arch_32=x86-64 --build=x86_64-redhat-linux Thread model: posix gcc version 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC) Incoming argument vector for cob2... [ 0] - 4: cpb2 [ 1] - 2: -# [ 2] - 15: -F/etc/cob2.cfg [ 3] - 2: -v [ 4] - 18: /tmp/out/myfile.cbl [ 5] - 10: linkfile.a [ 6] - 26: -L/opt/IBM/db2/V11.5/lib32 [ 7] - 36: -I/opt/IBM/db2/V11.5/include/cobol_a [ 8] - 5: -ldb2 [ 9] - 14: -qsize(16384K) [10] - 2: -o [11] - 20: /code/bin/myfile.exe Outgoing environment variables... PATH: /opt/ibm/cobol/1.1.0/usr/bin/:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/user1///bin/usr/bin/X11:/sbin:/:/code/bin:/code/scripts/home/user1//sqllib/bin LIBPATH: /home/user1//sqllib/lib:/usr/lib:/lib LD_LIBRARY_PATH: /opt/ibm/cobol/rte/usr/lib/:/opt/ibm/cobol/rte/:/opt/ibm/cobol1.1.0/usr/lib:home/user1/sqllib/lib32:/home/user1//sqllib/lib32/gskit NLSPATH: /opt/ibm/cobol/rte/usr/lib/usr/share/locale/%L/%N:/opt/ibm/cobol/rte/usr/share/locale/%L/%N:/opt/ibm/cobol/1.1.0/usr/share/locale/%L/%N:/opt/IBM/db2/V11.5/msg/%L/%N:/opt/IBM/db2/V11.5/msg/en_US/%N SYSLIB: /opt/IBM/db2/V11.5/include/cobol_a:/opt/IBM/db2/V11.5/include/cobol_a Outgoing argument vector for /opt/ibm/cobol/1.1.0/usr/bin/cob3 ... [ 0] - 33 /opt/ibm/cobol/1.1.0/usr/bin/cob3 [ 1] - 14 -qsize(16384K) exec: /opt/ibm/cobol/1.1.0/usr/bin/cob3 -qsize(16384K) /tmp/myfile.cbl Outgoing argument vector for /usr/bin/gcc ... 
[ 0] - 12: /usr/bin/gcc [ 1] - 4: -m32 [ 2] - 7: -shared [ 3] - 5: -fPIC [ 4] - 9: -rdynamic [ 5] - 28: -fasynchronous -unwind -tables [ 6] - 20: -W1, --hash-style=gnu [ 7] - 18: -W1, --export-dynam [ 8] - 14: -W1, -Bsymbolic [ 9] - 14: -W1, --build-id [10] - 22: -W1, --enable-new-dtags [11] - 11: -W1, -zrelro [12] - 9: -W1, -znow [13] - 10: -W1, -zdefs [14] - 18: -W1, -z,noexecstack [15] - 12: -W1, -znotext [16] - 27: -W1, ---allow-shlib-undefined [17] - 4: -pie [18] - 5: -fPIE [19] - 15: --fwhole-program [20] - 15: -W1, --as-needed [21] - 8: myfile.o [22] - 10: linkfile.a [23] - 26: -L/opt/IBM/db2/V11.5/lib32 [24] - 5: ldb2 [25] - 2: -o [26] - 19: /code/bin/myfile.exe [27] - 15: -W1, --no-omagic [28] - 13: -W1, Bdynamic [29] - 15: -W1, --as-needed [30] - 31: L/opt/ibm/cobol/1.1.0/usr/lib/ [31] - 29: L/opt/ibm/cobol/rte/usr/lib/ [32] - 10: -lcob2_32s [33] - 10: -lcob2_32r [34] - 9: -ldfp_32r [35] - 3: -lm [36] - 9: -lpthread [37] - 150: -W1, -rpath,/opt/ibm/cobol/rte/usr/lib/:/opt/ibm/cobol/rte/:/opt/ibm/cobol/1.1.0/usr/lib/:/home/user1//sqllib/lib32:/home/user1/sqllib/lib32/gskit exec: /usr/bin/gcc -m32 -shared -fPIC -rdynamic -fasynchronous -unwind -tables -W1, --hash-style=gnu -W1, --export-dynam -W1, -Bsymbolic -W1, --build-id -W1, --enable-new-dtags -W1, -zrelro -W1, -znow -W1, -zdefs -W1, -z,noexecstack -W1, -znotext -W1, ---allow-shlib-undefined -pie -fPIE --fwhole-program -W1, --as-needed myfile.o linkfile.a -L/opt/IBM/db2/V11.5/lib32 ldb2 -o /code/bin/myfile.exe -W1, --no-omagic -W1, Bdynamic -W1, --as-needed L/opt/ibm/cobol/1.1.0/usr/lib/ L/opt/ibm/cobol/rte/usr/lib/ -lcob2_32s -lcob2_32r -ldfp_32r -lm -lpthread -W1, -rpath,/opt/ibm/cobol/rte/usr/lib/:/opt/ibm/cobol/rte/:/opt/ibm/cobol/1.1.0/usr/lib/:/home/user1//sqllib/lib32:/home/user1/sqllib/lib32/gskit
|
Error Linking a Static File for COBOL DB2 on RHEL Compiling/Linking COBOL code for DB2 on a RHEL 8.6 server which is hitting an error. Command running: cob2 -F/etc/cob2.cfg -v myfile.cbl -L/opt/IBM/db2/V11.5/lib32 -I/opt/IBM/db2/V11.5/include/cobol_a -ldb2 -q"size(16384k)" -L. linkfile.a -o myfile.exe db2level Informational tokens are "DB2 v11.5.0.0", "s1906101300", "DYN1906101300AMD64", and Fix Pack "0". Product is installed at "/opt/IBM/db2/V11.5" cob2 -V Program cob2 Version 1.1.0 Built Mon Sep 27 10:39:30 2021 Error Message: /usr/bin/ld: /usr/lib/gcc/x86_64-redhat-linux/8/../../../../lib/Scrtl.0(.text+0x1c): unresolvable R_386_GOTOFF relocation against symbol '__libc_csu_fini' /usr/bin/ld: final link failed: Nonrepresentable section on output collect2: error ld returned 1 exit status. Tried changing various options but running out of options now. I was expecting/hoping for a clean compile and link, and the .exe file available. No CFLAGS and/or LDFLAGS set. gcc -v Using built-in specs. COLLECT GCC=gcc COLLECT LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/8/lto-wrapper OFFLOAD_TARGET_NAMES=nvptx-none OFFLOAD_TARGET_DEFAULT=1 Target: x86_64-redhat-linux Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=[URL] --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable- __cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-objext --enable-linkr-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl --disable-libmpx --enable-offload-targets=nvptx-none --without-cuda-driver --enable-gnu-indirect-function --enable-cet --with-tune-generic --with-arch_32=x86-64 --build=x86_64-redhat-linux Thread model: posix gcc version 8.5.0 20210514 (Red Hat 8.5.0-10) (GCC) Incoming argument vector for cob2... [ 0] - 4: cpb2 [ 1] - 2: -# [ 2] - 15: -F/etc/cob2.cfg [ 3] - 2: -v [ 4] - 18: /tmp/out/myfile.cbl [ 5] - 10: linkfile.a [ 6] - 26: -L/opt/IBM/db2/V11.5/lib32 [ 7] - 36: -I/opt/IBM/db2/V11.5/include/cobol_a [ 8] - 5: -ldb2 [ 9] - 14: -qsize(16384K) [10] - 2: -o [11] - 20: /code/bin/myfile.exe Outgoing environment variables... PATH: /opt/ibm/cobol/1.1.0/usr/bin/:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/user1///bin/usr/bin/X11:/sbin:/:/code/bin:/code/scripts/home/user1//sqllib/bin LIBPATH: /home/user1//sqllib/lib:/usr/lib:/lib LD_LIBRARY_PATH: /opt/ibm/cobol/rte/usr/lib/:/opt/ibm/cobol/rte/:/opt/ibm/cobol1.1.0/usr/lib:home/user1/sqllib/lib32:/home/user1//sqllib/lib32/gskit NLSPATH: /opt/ibm/cobol/rte/usr/lib/usr/share/locale/%L/%N:/opt/ibm/cobol/rte/usr/share/locale/%L/%N:/opt/ibm/cobol/1.1.0/usr/share/locale/%L/%N:/opt/IBM/db2/V11.5/msg/%L/%N:/opt/IBM/db2/V11.5/msg/en_US/%N SYSLIB: /opt/IBM/db2/V11.5/include/cobol_a:/opt/IBM/db2/V11.5/include/cobol_a Outgoing argument vector for /opt/ibm/cobol/1.1.0/usr/bin/cob3 ... [ 0] - 33 /opt/ibm/cobol/1.1.0/usr/bin/cob3 [ 1] - 14 -qsize(16384K) exec: /opt/ibm/cobol/1.1.0/usr/bin/cob3 -qsize(16384K) /tmp/myfile.cbl Outgoing argument vector for /usr/bin/gcc ... 
[ 0] - 12: /usr/bin/gcc [ 1] - 4: -m32 [ 2] - 7: -shared [ 3] - 5: -fPIC [ 4] - 9: -rdynamic [ 5] - 28: -fasynchronous -unwind -tables [ 6] - 20: -W1, --hash-style=gnu [ 7] - 18: -W1, --export-dynam [ 8] - 14: -W1, -Bsymbolic [ 9] - 14: -W1, --build-id [10] - 22: -W1, --enable-new-dtags [11] - 11: -W1, -zrelro [12] - 9: -W1, -znow [13] - 10: -W1, -zdefs [14] - 18: -W1, -z,noexecstack [15] - 12: -W1, -znotext [16] - 27: -W1, ---allow-shlib-undefined [17] - 4: -pie [18] - 5: -fPIE [19] - 15: --fwhole-program [20] - 15: -W1, --as-needed [21] - 8: myfile.o [22] - 10: linkfile.a [23] - 26: -L/opt/IBM/db2/V11.5/lib32 [24] - 5: ldb2 [25] - 2: -o [26] - 19: /code/bin/myfile.exe [27] - 15: -W1, --no-omagic [28] - 13: -W1, Bdynamic [29] - 15: -W1, --as-needed [30] - 31: L/opt/ibm/cobol/1.1.0/usr/lib/ [31] - 29: L/opt/ibm/cobol/rte/usr/lib/ [32] - 10: -lcob2_32s [33] - 10: -lcob2_32r [34] - 9: -ldfp_32r [35] - 3: -lm [36] - 9: -lpthread [37] - 150: -W1, -rpath,/opt/ibm/cobol/rte/usr/lib/:/opt/ibm/cobol/rte/:/opt/ibm/cobol/1.1.0/usr/lib/:/home/user1//sqllib/lib32:/home/user1/sqllib/lib32/gskit exec: /usr/bin/gcc -m32 -shared -fPIC -rdynamic -fasynchronous -unwind -tables -W1, --hash-style=gnu -W1, --export-dynam -W1, -Bsymbolic -W1, --build-id -W1, --enable-new-dtags -W1, -zrelro -W1, -znow -W1, -zdefs -W1, -z,noexecstack -W1, -znotext -W1, ---allow-shlib-undefined -pie -fPIE --fwhole-program -W1, --as-needed myfile.o linkfile.a -L/opt/IBM/db2/V11.5/lib32 ldb2 -o /code/bin/myfile.exe -W1, --no-omagic -W1, Bdynamic -W1, --as-needed L/opt/ibm/cobol/1.1.0/usr/lib/ L/opt/ibm/cobol/rte/usr/lib/ -lcob2_32s -lcob2_32r -ldfp_32r -lm -lpthread -W1, -rpath,/opt/ibm/cobol/rte/usr/lib/:/opt/ibm/cobol/rte/:/opt/ibm/cobol/1.1.0/usr/lib/:/home/user1//sqllib/lib32:/home/user1/sqllib/lib32/gskit
|
gcc, db2, linker-errors, rhel, cobol
| 0
| 116
| 1
|
https://stackoverflow.com/questions/75486557/error-linking-a-static-file-for-cobol-db2-on-rhel
|
75,285,374
|
Getting error during Jenkins build "fatal error: opensslconf-i386.h"
|
Jenkins pipeline fails with the below error while running on a RHEL 7.9 worker node: /usr/include/openssl/opensslconf.h:13:30: fatal error: opensslconf-i386.h: No such file or directory #include "opensslconf-i386.h" I tried to install libssl-dev:i386 on RHEL 7.9 but I am getting the error "No package libssl-dev:i386 available". This is the command that was executed in the pipeline: sudo make package ARCH=linux_32
|
Getting error during Jenkins build "fatal error: opensslconf-i386.h" Jenkins pipeline fails with the below error while running on a RHEL 7.9 worker node: /usr/include/openssl/opensslconf.h:13:30: fatal error: opensslconf-i386.h: No such file or directory #include "opensslconf-i386.h" I tried to install libssl-dev:i386 on RHEL 7.9 but I am getting the error "No package libssl-dev:i386 available". This is the command that was executed in the pipeline: sudo make package ARCH=linux_32
|
linux, openssl, rhel
| 0
| 601
| 1
|
https://stackoverflow.com/questions/75285374/getting-error-during-jenkins-build-fatal-error-opensslconf-i386-h
|
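For the 32-bit OpenSSL header error above: libssl-dev is a Debian/Ubuntu package name, so yum will never find it on RHEL. The usual RHEL 7 equivalents are the .i686 devel packages; the names below are the standard ones, but availability depends on the enabled repositories.

# 32-bit toolchain and OpenSSL headers on RHEL 7:
sudo yum install -y glibc-devel.i686 openssl-devel.i686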
75,112,067
|
Apache VFS SFTP Connection hangs
|
I am using Apache VFS to upload a file to an SFTP server, if the file is newer than the file on the server or doesn't exist there yet. The server connection uses SSH Keys for Authentication. I am using the following java code (plus error handling etc.) to connect to the server and check the file modification date-time: DefaultFileSystemManager manager = new DefaultFileSystemManager(); manager.addProvider("sftp", new SftpFileProvider()); manager.init(); FileSystemOptions opts = createDefaultOptions(); BytesIdentityInfo identityInfo = new BytesIdentityInfo(server.sshKey.getBytes(), null); SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, identityInfo); remoteFileObject = manager.resolveFile(new URI("sftp",server.UserName,server.HostName,server.Port,remoteFilePath,null,null).toString(), createDefaultOptions(server.Key)); FileContent content = remoteFileObject.getContent(); return content.getLastModifiedTime(); The SSH key is in the format -----BEGIN RSA PRIVATE KEY----- etc.; as exported by puttyGen under Conversions -> Export OpenSSH Key (i.e. the old format of OpenSSH key, not the new one). I have tested this code on Windows, with a locally hosted SFTP server (i.e. also on the same Windows machine), and it works successfully. I am now wanting to use this in a Linux environment (RHEL), connecting to an AWS Transfer SFTP server, secured using SSH keys as described. I can connect successfully using the SFTP command from the Linux OS shell: sftp -oIdentityFile=/path/to/test.ppk USER@xxx.xxx.xxx.xxx But, when I try to run the java code, the code hangs on the call to manager.resolveFile . After half an hour (I think - this might not be related), I get the following in /var/log/messages: systemd-logind[1297]: Session 115360 logged out. Waiting for processes to exit. systemd[1]: session-115360.scope: Succeeded. systemd-logind[1297]: Removed session 115360. I don't have SELinux enabled, so I don't think that's interfering in any way. Can anyone help suggest what might be causing this?
|
Apache VFS SFTP Connection hangs I am using Apache VFS to upload a file to an SFTP server, if the file is newer than the file on the server or doesn't exist there yet. The server connection uses SSH Keys for Authentication. I am using the following java code (plus error handling etc.) to connect to the server and check the file modification date-time: DefaultFileSystemManager manager = new DefaultFileSystemManager(); manager.addProvider("sftp", new SftpFileProvider()); manager.init(); FileSystemOptions opts = createDefaultOptions(); BytesIdentityInfo identityInfo = new BytesIdentityInfo(server.sshKey.getBytes(), null); SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, identityInfo); remoteFileObject = manager.resolveFile(new URI("sftp",server.UserName,server.HostName,server.Port,remoteFilePath,null,null).toString(), createDefaultOptions(server.Key)); FileContent content = remoteFileObject.getContent(); return content.getLastModifiedTime(); The SSH key is in the format -----BEGIN RSA PRIVATE KEY----- etc.; as exported by puttyGen under Conversions -> Export OpenSSH Key (i.e. the old format of OpenSSH key, not the new one). I have tested this code on Windows, with a locally hosted SFTP server (i.e. also on the same Windows machine), and it works successfully. I am now wanting to use this in a Linux environment (RHEL), connecting to an AWS Transfer SFTP server, secured using SSH keys as described. I can connect successfully using the SFTP command from the Linux OS shell: sftp -oIdentityFile=/path/to/test.ppk USER@xxx.xxx.xxx.xxx But, when I try to run the java code, the code hangs on the call to manager.resolveFile . After half an hour (I think - this might not be related), I get the following in /var/log/messages: systemd-logind[1297]: Session 115360 logged out. Waiting for processes to exit. systemd[1]: session-115360.scope: Succeeded. systemd-logind[1297]: Removed session 115360. I don't have SELinux enabled, so I don't think that's interfering in any way. Can anyone help suggest what might be causing this?
|
ssh, sftp, rhel, apache-commons-vfs
| 0
| 1,267
| 1
|
https://stackoverflow.com/questions/75112067/apache-vfs-sftp-connection-hangs
|
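A small diagnostic step for the SFTP hang above, separate from the Java code: comparing a fully verbose OpenSSH negotiation with what JSch (used by Commons VFS) offers often reveals a key-exchange or host-key algorithm mismatch on newer servers, which can show up as a silent hang in older JSch builds. This is a debugging suggestion, not a confirmed root cause.

# Verbose client-side view of the algorithms the server accepts:
sftp -vvv -oIdentityFile=/path/to/test.ppk USER@xxx.xxx.xxx.xxx 2>&1 | grep -i -E 'kex|hostkey|cipher'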
74,891,869
|
Yum dependency I'm trying to install doesn't have a provider
|
One of the dependencies of the rlwrap package I'm trying to install doesn't have a provider. Is there a command line option I can pass to the install to let it forget about the perl(File::Slurp) dependency? package: rlwrap-0.45.2-2.el8.x86_64 dependency: /usr/bin/perl provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: /usr/bin/python3 provider: python36-3.6.8-38.module+el8.5.0+12207+5c5719bc.x86_64 provider: python38-3.8.13-1.module+el8.7.0+15641+2ece4388.x86_64 provider: python39-3.9.13-2.module+el8.7.0+17195+44752b34.x86_64 dependency: libc.so.6(GLIBC_2.15)(64bit) provider: glibc-2.28-211.el8.x86_64 dependency: libreadline.so.7()(64bit) provider: readline-7.0-10.el8.x86_64 dependency: libtinfo.so.6()(64bit) provider: ncurses-libs-6.1-9.20180224.el8.x86_64 dependency: libutil.so.1()(64bit) provider: glibc-2.28-211.el8.x86_64 dependency: libutil.so.1(GLIBC_2.2.5)(64bit) provider: glibc-2.28-211.el8.x86_64 dependency: perl(:VERSION) >= 5.6.0 provider: perl-libs-4:5.26.3-421.el8.i686 provider: perl-libs-4:5.26.3-421.el8.x86_64 dependency: perl(AutoLoader) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(Carp) provider: perl-Carp-1.42-396.el8.noarch dependency: perl(Config) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(Data::Dumper) provider: perl-Data-Dumper-2.167-399.el8.x86_64 dependency: perl(Exporter) provider: perl-Exporter-5.72-396.el8.noarch dependency: perl(File::Slurp) dependency: perl(Getopt::Std) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(POSIX) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(RlwrapFilter) provider: rlwrap-0.45.2-2.el8.x86_64 dependency: perl(constant) provider: perl-constant-1.33-396.el8.noarch dependency: perl(lib) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(strict) provider: perl-libs-4:5.26.3-421.el8.i686 provider: perl-libs-4:5.26.3-421.el8.x86_64 dependency: perl(vars) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: rtld(GNU_HASH) provider: glibc-2.28-211.el8.i686 provider: glibc-2.28-211.el8.x86_64 I have installed the perl(File::Slurp) dependency separately. For reference, this is the command I am using to install the rlwrap package and error: yum install rlwrap Last metadata expiration check: 0:41:10 ago on Thu Dec 22 16:38:35 2022. Error: Problem: cannot install the best candidate for the job - nothing provides perl(File::Slurp) needed by rlwrap-0.45.2-2.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) I have tried the suggestions from the yum output.
|
Yum dependency I'm trying to install doesn't have a provider One of the dependencies of the rlwrap package I'm trying to install doesn't have a provider. Is there a command line option I can pass to the install to let it forget about the perl(File::Slurp) dependency? package: rlwrap-0.45.2-2.el8.x86_64 dependency: /usr/bin/perl provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: /usr/bin/python3 provider: python36-3.6.8-38.module+el8.5.0+12207+5c5719bc.x86_64 provider: python38-3.8.13-1.module+el8.7.0+15641+2ece4388.x86_64 provider: python39-3.9.13-2.module+el8.7.0+17195+44752b34.x86_64 dependency: libc.so.6(GLIBC_2.15)(64bit) provider: glibc-2.28-211.el8.x86_64 dependency: libreadline.so.7()(64bit) provider: readline-7.0-10.el8.x86_64 dependency: libtinfo.so.6()(64bit) provider: ncurses-libs-6.1-9.20180224.el8.x86_64 dependency: libutil.so.1()(64bit) provider: glibc-2.28-211.el8.x86_64 dependency: libutil.so.1(GLIBC_2.2.5)(64bit) provider: glibc-2.28-211.el8.x86_64 dependency: perl(:VERSION) >= 5.6.0 provider: perl-libs-4:5.26.3-421.el8.i686 provider: perl-libs-4:5.26.3-421.el8.x86_64 dependency: perl(AutoLoader) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(Carp) provider: perl-Carp-1.42-396.el8.noarch dependency: perl(Config) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(Data::Dumper) provider: perl-Data-Dumper-2.167-399.el8.x86_64 dependency: perl(Exporter) provider: perl-Exporter-5.72-396.el8.noarch dependency: perl(File::Slurp) dependency: perl(Getopt::Std) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(POSIX) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(RlwrapFilter) provider: rlwrap-0.45.2-2.el8.x86_64 dependency: perl(constant) provider: perl-constant-1.33-396.el8.noarch dependency: perl(lib) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: perl(strict) provider: perl-libs-4:5.26.3-421.el8.i686 provider: perl-libs-4:5.26.3-421.el8.x86_64 dependency: perl(vars) provider: perl-interpreter-4:5.26.3-421.el8.x86_64 dependency: rtld(GNU_HASH) provider: glibc-2.28-211.el8.i686 provider: glibc-2.28-211.el8.x86_64 I have installed the perl(File::Slurp) dependency separately. For reference, this is the command I am using to install the rlwrap package and error: yum install rlwrap Last metadata expiration check: 0:41:10 ago on Thu Dec 22 16:38:35 2022. Error: Problem: cannot install the best candidate for the job - nothing provides perl(File::Slurp) needed by rlwrap-0.45.2-2.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) I have tried the suggestions from the yum output.
|
linux, perl, yum, rhel
| 0
| 1,126
| 1
|
https://stackoverflow.com/questions/74891869/yum-dependency-im-trying-to-install-doesnt-have-a-provider
|
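Two hedged ways around the missing perl(File::Slurp) provider in the question above, given that the module is already installed outside of RPM (so rpm cannot see it). Package and file names are the standard EL8 ones.

# Option 1: take yum's own suggestion (note: this may simply skip rlwrap if the
# dependency stays unresolved):
sudo yum install --skip-broken rlwrap

# Option 2: download the package and install it while ignoring the RPM-level
# dependency, since File::Slurp is provided from CPAN instead of an RPM:
sudo yum install -y yum-utils          # provides yumdownloader on EL8
yumdownloader rlwrap
sudo rpm -ivh --nodeps rlwrap-*.el8.x86_64.rpm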
74,259,498
|
yum + how to identify the installed rpm with lower version
|
The goal is to compare the list of rpms under the folder /tmp/list_of_rpms to the installed rpms, and if a package is installed with a lower version, print this rpm. Whether an rpm is installed or not can easily be checked with rpm -qi gssproxy-0.7.0-30.el7_9.x86_64.rpm which reports package gssproxy-0.7.0-30.el7_9.x86_64.rpm is not installed , but this rpm is actually installed with a lower version, as rpm -qa | grep gssproxy shows gssproxy-0.7.0-29.el7.x86_64 . Can we also identify whether an rpm is installed with a lower version? The approach of taking the installed rpm with rpm -qa | grep gssproxy and then comparing it to the rpm file /tmp/gssproxy-0.7.0-30.el7_9.x86_64.rpm with a regex is very complicated.
|
yum + how to identify the installed rpm with lower version The goal is to compare the list of rpms under the folder /tmp/list_of_rpms to the installed rpms, and if a package is installed with a lower version, print this rpm. Whether an rpm is installed or not can easily be checked with rpm -qi gssproxy-0.7.0-30.el7_9.x86_64.rpm which reports package gssproxy-0.7.0-30.el7_9.x86_64.rpm is not installed , but this rpm is actually installed with a lower version, as rpm -qa | grep gssproxy shows gssproxy-0.7.0-29.el7.x86_64 . Can we also identify whether an rpm is installed with a lower version? The approach of taking the installed rpm with rpm -qa | grep gssproxy and then comparing it to the rpm file /tmp/gssproxy-0.7.0-30.el7_9.x86_64.rpm with a regex is very complicated.
|
regex, bash, rpm, yum, rhel
| 0
| 554
| 1
|
https://stackoverflow.com/questions/74259498/yum-how-to-identify-the-installed-rpm-with-lower-version
|
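A sketch for the comparison asked about above, using rpmdev-vercmp (from the rpmdevtools package) instead of hand-rolled regexes. The directory layout is taken from the question; epoch handling is deliberately ignored to keep the sketch short.

#!/usr/bin/env bash
# Report every package under /tmp/list_of_rpms that is installed with a lower version.
for f in /tmp/list_of_rpms/*.rpm; do
    name=$(rpm -qp --qf '%{NAME}' "$f" 2>/dev/null) || continue
    file_vr=$(rpm -qp --qf '%{VERSION}-%{RELEASE}' "$f")
    inst_vr=$(rpm -q --qf '%{VERSION}-%{RELEASE}' "$name" 2>/dev/null) || continue   # skip if not installed at all
    rpmdev-vercmp "$inst_vr" "$file_vr" >/dev/null
    # exit code 12 means the second argument (the file) is newer than what is installed
    if [ $? -eq 12 ]; then
        echo "$name: installed $inst_vr is older than $file_vr"
    fi
done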
73,304,595
|
python + how to pass variable in to os.system command
|
Here is a simple example of how to re-scan disk sda from a python script: #!/usr/bin/python3 import subprocess import os command = "echo 1 > /sys/block/sda/device/rescan" os.system(command) In case we want to set the disk as a variable disk_name : #!/usr/bin/python3 import subprocess import os disk_name = sda command = "echo 1 > /sys/block/disk_name/device/rescan" os.system(command) Then what is the right approach to pass the disk_name variable into the command, or maybe there is a better approach? We tried the following but without success: command = " 'echo 1 > /sys/block/' + str(disk_name) + '/device/rescan' " os.system(command)
|
python + how to pass variable in to os.system command Here is a simple example of how to re-scan disk sda from a python script: #!/usr/bin/python3 import subprocess import os command = "echo 1 > /sys/block/sda/device/rescan" os.system(command) In case we want to set the disk as a variable disk_name : #!/usr/bin/python3 import subprocess import os disk_name = sda command = "echo 1 > /sys/block/disk_name/device/rescan" os.system(command) Then what is the right approach to pass the disk_name variable into the command, or maybe there is a better approach? We tried the following but without success: command = " 'echo 1 > /sys/block/' + str(disk_name) + '/device/rescan' " os.system(command)
|
python, python-3.x, linux, rhel
| 0
| 1,501
| 1
|
https://stackoverflow.com/questions/73304595/python-how-to-pass-variable-in-to-os-system-command
|
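For reference on the question above, this is the operation the Python string ultimately has to reproduce, written directly in shell; the Python side then only needs to interpolate the same variable into the command string (for example by building it with an f-string or str.format) rather than quoting the concatenation as a literal.

# Plain-shell form of the rescan; disk_name here is an example value and the write needs root:
disk_name=sda
echo 1 > "/sys/block/${disk_name}/device/rescan"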
73,021,580
|
Some replica are not in sync when installing scratch Kafka cluster
|
we are installing new Apache Kafka - version 2.7 on Linux machines version RHEL 7.9 total Kafka machines in the cluster are - 5 machines now installation is completed , but we noticed that not all ISR are in Sync I want to share all the reason that maybe explain what cause replica to be not in Sync Slow replica: A follower replica that is consistently not able to catch up with the writes on the leader for a certain period of time. One of the most common reasons for this is an I/O bottleneck on the follower replica causing it to append the copied messages at a rate slower than it can consume from the leader. Stuck replica: A follower replica that has stopped fetching from the leader for a certain period of time. A replica could be stuck either due to a GC pause or because it has failed or died. Bootstrapping replica: When the user increases the replication factor of the topic, the new follower replicas are out-of-sync until they are fully caught up to the leader’s log. but since we are dealing with new scratch Kafka cluster , then I wonder if the problem with ISR that are not in sync maybe related to some parameters in Kafka server.properties that are not set as well here is example about __consumer_offsets topic we can see many missing ISR's Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer Topic: __consumer_offsets Partition: 0 Leader: 1003 Replicas: 1003,1001,1002 Isr: 1003,1001,1002 Topic: __consumer_offsets Partition: 1 Leader: 1001 Replicas: 1001,1002,1003 Isr: 1001,1003,1002 Topic: __consumer_offsets Partition: 2 Leader: 1003 Replicas: 1002,1003,1001 Isr: 1003,1001 Topic: __consumer_offsets Partition: 3 Leader: 1003 Replicas: 1003,1002,1001 Isr: 1003,1001 Topic: __consumer_offsets Partition: 4 Leader: 1001 Replicas: 1001,1003,1002 Isr: 1001,1003 Topic: __consumer_offsets Partition: 5 Leader: 1001 Replicas: 1002,1001,1003 Isr: 1003,1001,1002 Topic: __consumer_offsets Partition: 6 Leader: 1003 Replicas: 1003,1001,1002 Isr: 1003,1001,1002 Topic: __consumer_offsets Partition: 7 Leader: 1001 Replicas: 1001,1002,1003 Isr: 1001,1003,1002 Topic: __consumer_offsets Partition: 8 Leader: 1003 Replicas: 1002,1003,1001 Isr: 1003,1001 Topic: __consumer_offsets Partition: 9 Leader: 1003 Replicas: 1003,1002,1001 Isr: 1003,1001 Topic: __consumer_offsets Partition: 10 Leader: 1001 Replicas: 1001,1003,1002 Isr: 1001,1003 Topic: __consumer_offsets Partition: 11 Leader: 1001 Replicas: 1002,1001,1003 Isr: 1003 here is example to what we have in server.properties but after googled a while , we not found what can avoid the problem of ISR that are not in sync auto.create.topics.enable=false auto.leader.rebalance.enable=true background.threads=10 log.retention.bytes=-1 log.retention.hours=12 delete.topic.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 log.dir=/var/kafka/kafka-data log.flush.interval.messages=9223372036854775807 log.flush.interval.ms=1000 log.flush.offset.checkpoint.interval.ms=60000 log.flush.scheduler.interval.ms=9223372036854775807 log.flush.start.offset.checkpoint.interval.ms=60000 compression.type=producer log.roll.jitter.hours=0 log.segment.bytes=1073741824 log.segment.delete.delay.ms=60000 message.max.bytes=1000012 min.insync.replicas=1 num.io.threads=8 num.network.threads=3 num.recovery.threads.per.data.dir=1 num.replica.fetchers=1 offset.metadata.max.bytes=4096 offsets.commit.required.acks=-1 offsets.commit.timeout.ms=5000 
offsets.load.buffer.size=5242880 offsets.retention.check.interval.ms=600000 offsets.retention.minutes=10080 offsets.topic.compression.codec=0 offsets.topic.num.partitions=50 offsets.topic.replication.factor=3 offsets.topic.segment.bytes=104857600 queued.max.requests=500 quota.consumer.default=9223372036854775807 quota.producer.default=9223372036854775807 replica.fetch.min.bytes=1 replica.fetch.wait.max.ms=500 replica.high.watermark.checkpoint.interval.ms=5000 replica.lag.time.max.ms=10000 replica.socket.receive.buffer.bytes=65536 replica.socket.timeout.ms=30000 request.timeout.ms=30000 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 socket.send.buffer.bytes=102400 transaction.max.timeout.ms=900000 transaction.state.log.load.buffer.size=5242880 transaction.state.log.min.isr=2 transaction.state.log.num.partitions=50 transaction.state.log.replication.factor=3 transaction.state.log.segment.bytes=104857600 transactional.id.expiration.ms=604800000 unclean.leader.election.enable=false zookeeper.connection.timeout.ms=600000 zookeeper.max.in.flight.requests=10 zookeeper.session.timeout.ms=600000 zookeeper.set.acl=false broker.id.generation.enable=true connections.max.idle.ms=600000 connections.max.reauth.ms=0 controlled.shutdown.enable=true controlled.shutdown.max.retries=3 controlled.shutdown.retry.backoff.ms=5000 controller.socket.timeout.ms=30000 default.replication.factor=2 delegation.token.expiry.time.ms=86400000 delegation.token.max.lifetime.ms=604800000 delete.records.purgatory.purge.interval.requests=1 fetch.purgatory.purge.interval.requests=1000 group.initial.rebalance.delay.ms=3000 group.max.session.timeout.ms=1800000 group.max.size=2147483647 group.min.session.timeout.ms=6000 log.213`1234cleaner.backoff.ms=15000 log.cleaner.dedupe.buffer.size=134217728 log.cleaner.delete.retention.ms=86400000 log.cleaner.enable=true log.cleaner.io.buffer.load.factor=0.9 log.cleaner.io.buffer.size=524288 log.cleaner.io.max.bytes.per.second=1.7976931348623157e308 log.cleaner.max.compaction.lag.ms=9223372036854775807 log.cleaner.min.cleanable.ratio=0.5 log.cleaner.min.compaction.lag.ms=0 log.cleaner.threads=1 log.cleanup.policy=delete log.index.interval.bytes=4096 log.index.size.max.bytes=10485760 log.message.timestamp.difference.max.ms=9223372036854775807 log.message.timestamp.type=CreateTime log.preallocate=false log.retention.check.interval.ms=300000 max.connections=2147483647 max.connections.per.ip=2147483647 max.incremental.fetch.session.cache.slots=1000 num.partitions=1 producer.purgatory.purge.interval.requests=1000 queued.max.request.bytes=-1 replica.fetch.backoff.ms=1000 replica.fetch.max.bytes=1048576 replica.fetch.response.max.bytes=10485760 reserved.broker.max.id=1500 transaction.abort.timed.out.transaction.cleanup.interval.ms=60000 transaction.remove.expired.transaction.cleanup.interval.ms=3600000 zookeeper.sync.time.ms=2000 broker.rack=/default-rack we'll appreciate , to get suggestions to how to improve the replica to be in Sync links Fixing under replicated partitions in kafka [URL] What is a right value for replica.lag.time.max.ms? [URL] [URL] [URL] here are the options that we consider to do ( but only as suggestion not solution ) restart Kafka brokers , each Kafka step by step remove the non in SYNC replica by rm -rf , as example rm -rf TEST_TOPIC_1 , and hope that Kafka will create this replica and as results it will be in SYNC try to use the kafka-reassign-partitions maybe ISR will be in Sync after some time ? 
increase replica.lag.time.max.ms to higher value as 1 day and restart the brokers The definition of synchronization depends on the topic configuration, but by default, this means that the replica has been or has been fully synchronized with the leader in the last 10 seconds. The settings for this time period are:replica.lag.time.max.ms, and has a server default value, which can be overridden by each topic. What is the ISR? The ISR is simply all the replicas of a partition that are "in-sync" with the leader. The definition of "in-sync" depends on the topic configuration, but by default, it means that a replica is or has been fully caught up with the leader in the last 10 seconds. The setting for this time period is: replica.lag.time.max.ms and has a server default which can be overridden on a per topic basis. At a minimum the, ISR will consist of the leader replica and any additional follower replicas that are also considered in-sync. Followers replicate data from the leader to themselves by sending Fetch Requests periodically, by default every 500ms. If a follower fails, then it will cease sending fetch requests and after the default, 10 seconds will be removed from the ISR. Likewise, if a follower slows down, perhaps a network related issue or constrained server resources, then as soon as it has been lagging behind the leader for more than 10 seconds it is removed from the ISR. Some other important related parameters to be configured are: min.insync.replicas: Specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. offsets.retention.check.interval.ms: Frequency at which to check for stale Offsets. offsets.topic.segment.bytes: This should be kept relatively small in order to facilitate faster Log Compaction and Cache Loads. replica.lag.time.max.ms: If the follower has not consumed the Leaders log OR sent fetch requests, for at least this much time, it is removed from the ISR. replica.fetch.wait.max.ms : Max wait time for each fetcher request issued by follower replicas, must be less than the replica.lag.time.max.ms to avoid shrinking of ISR. transaction.max.timeout.ms: In case a client requests a timeout greater than this value, it’s not allowed so as to not stall other consumers. zookeeper.session.timeout.ms: Zookeeper session timeout. zookeeper.sync.time.ms: How far a follower can be behind a Leader, setting this too high can result in an ISR that has potentially many out-of-sync nodes.
|
Some replica are not in sync when installing scratch Kafka cluster we are installing new Apache Kafka - version 2.7 on Linux machines version RHEL 7.9 total Kafka machines in the cluster are - 5 machines now installation is completed , but we noticed that not all ISR are in Sync I want to share all the reason that maybe explain what cause replica to be not in Sync Slow replica: A follower replica that is consistently not able to catch up with the writes on the leader for a certain period of time. One of the most common reasons for this is an I/O bottleneck on the follower replica causing it to append the copied messages at a rate slower than it can consume from the leader. Stuck replica: A follower replica that has stopped fetching from the leader for a certain period of time. A replica could be stuck either due to a GC pause or because it has failed or died. Bootstrapping replica: When the user increases the replication factor of the topic, the new follower replicas are out-of-sync until they are fully caught up to the leader’s log. but since we are dealing with new scratch Kafka cluster , then I wonder if the problem with ISR that are not in sync maybe related to some parameters in Kafka server.properties that are not set as well here is example about __consumer_offsets topic we can see many missing ISR's Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer Topic: __consumer_offsets Partition: 0 Leader: 1003 Replicas: 1003,1001,1002 Isr: 1003,1001,1002 Topic: __consumer_offsets Partition: 1 Leader: 1001 Replicas: 1001,1002,1003 Isr: 1001,1003,1002 Topic: __consumer_offsets Partition: 2 Leader: 1003 Replicas: 1002,1003,1001 Isr: 1003,1001 Topic: __consumer_offsets Partition: 3 Leader: 1003 Replicas: 1003,1002,1001 Isr: 1003,1001 Topic: __consumer_offsets Partition: 4 Leader: 1001 Replicas: 1001,1003,1002 Isr: 1001,1003 Topic: __consumer_offsets Partition: 5 Leader: 1001 Replicas: 1002,1001,1003 Isr: 1003,1001,1002 Topic: __consumer_offsets Partition: 6 Leader: 1003 Replicas: 1003,1001,1002 Isr: 1003,1001,1002 Topic: __consumer_offsets Partition: 7 Leader: 1001 Replicas: 1001,1002,1003 Isr: 1001,1003,1002 Topic: __consumer_offsets Partition: 8 Leader: 1003 Replicas: 1002,1003,1001 Isr: 1003,1001 Topic: __consumer_offsets Partition: 9 Leader: 1003 Replicas: 1003,1002,1001 Isr: 1003,1001 Topic: __consumer_offsets Partition: 10 Leader: 1001 Replicas: 1001,1003,1002 Isr: 1001,1003 Topic: __consumer_offsets Partition: 11 Leader: 1001 Replicas: 1002,1001,1003 Isr: 1003 here is example to what we have in server.properties but after googled a while , we not found what can avoid the problem of ISR that are not in sync auto.create.topics.enable=false auto.leader.rebalance.enable=true background.threads=10 log.retention.bytes=-1 log.retention.hours=12 delete.topic.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 log.dir=/var/kafka/kafka-data log.flush.interval.messages=9223372036854775807 log.flush.interval.ms=1000 log.flush.offset.checkpoint.interval.ms=60000 log.flush.scheduler.interval.ms=9223372036854775807 log.flush.start.offset.checkpoint.interval.ms=60000 compression.type=producer log.roll.jitter.hours=0 log.segment.bytes=1073741824 log.segment.delete.delay.ms=60000 message.max.bytes=1000012 min.insync.replicas=1 num.io.threads=8 num.network.threads=3 num.recovery.threads.per.data.dir=1 num.replica.fetchers=1 offset.metadata.max.bytes=4096 
offsets.commit.required.acks=-1 offsets.commit.timeout.ms=5000 offsets.load.buffer.size=5242880 offsets.retention.check.interval.ms=600000 offsets.retention.minutes=10080 offsets.topic.compression.codec=0 offsets.topic.num.partitions=50 offsets.topic.replication.factor=3 offsets.topic.segment.bytes=104857600 queued.max.requests=500 quota.consumer.default=9223372036854775807 quota.producer.default=9223372036854775807 replica.fetch.min.bytes=1 replica.fetch.wait.max.ms=500 replica.high.watermark.checkpoint.interval.ms=5000 replica.lag.time.max.ms=10000 replica.socket.receive.buffer.bytes=65536 replica.socket.timeout.ms=30000 request.timeout.ms=30000 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 socket.send.buffer.bytes=102400 transaction.max.timeout.ms=900000 transaction.state.log.load.buffer.size=5242880 transaction.state.log.min.isr=2 transaction.state.log.num.partitions=50 transaction.state.log.replication.factor=3 transaction.state.log.segment.bytes=104857600 transactional.id.expiration.ms=604800000 unclean.leader.election.enable=false zookeeper.connection.timeout.ms=600000 zookeeper.max.in.flight.requests=10 zookeeper.session.timeout.ms=600000 zookeeper.set.acl=false broker.id.generation.enable=true connections.max.idle.ms=600000 connections.max.reauth.ms=0 controlled.shutdown.enable=true controlled.shutdown.max.retries=3 controlled.shutdown.retry.backoff.ms=5000 controller.socket.timeout.ms=30000 default.replication.factor=2 delegation.token.expiry.time.ms=86400000 delegation.token.max.lifetime.ms=604800000 delete.records.purgatory.purge.interval.requests=1 fetch.purgatory.purge.interval.requests=1000 group.initial.rebalance.delay.ms=3000 group.max.session.timeout.ms=1800000 group.max.size=2147483647 group.min.session.timeout.ms=6000 log.213`1234cleaner.backoff.ms=15000 log.cleaner.dedupe.buffer.size=134217728 log.cleaner.delete.retention.ms=86400000 log.cleaner.enable=true log.cleaner.io.buffer.load.factor=0.9 log.cleaner.io.buffer.size=524288 log.cleaner.io.max.bytes.per.second=1.7976931348623157e308 log.cleaner.max.compaction.lag.ms=9223372036854775807 log.cleaner.min.cleanable.ratio=0.5 log.cleaner.min.compaction.lag.ms=0 log.cleaner.threads=1 log.cleanup.policy=delete log.index.interval.bytes=4096 log.index.size.max.bytes=10485760 log.message.timestamp.difference.max.ms=9223372036854775807 log.message.timestamp.type=CreateTime log.preallocate=false log.retention.check.interval.ms=300000 max.connections=2147483647 max.connections.per.ip=2147483647 max.incremental.fetch.session.cache.slots=1000 num.partitions=1 producer.purgatory.purge.interval.requests=1000 queued.max.request.bytes=-1 replica.fetch.backoff.ms=1000 replica.fetch.max.bytes=1048576 replica.fetch.response.max.bytes=10485760 reserved.broker.max.id=1500 transaction.abort.timed.out.transaction.cleanup.interval.ms=60000 transaction.remove.expired.transaction.cleanup.interval.ms=3600000 zookeeper.sync.time.ms=2000 broker.rack=/default-rack we'll appreciate , to get suggestions to how to improve the replica to be in Sync links Fixing under replicated partitions in kafka [URL] What is a right value for replica.lag.time.max.ms? 
[URL] [URL] [URL] here are the options that we consider to do ( but only as suggestion not solution ) restart Kafka brokers , each Kafka step by step remove the non in SYNC replica by rm -rf , as example rm -rf TEST_TOPIC_1 , and hope that Kafka will create this replica and as results it will be in SYNC try to use the kafka-reassign-partitions maybe ISR will be in Sync after some time ? increase replica.lag.time.max.ms to higher value as 1 day and restart the brokers The definition of synchronization depends on the topic configuration, but by default, this means that the replica has been or has been fully synchronized with the leader in the last 10 seconds. The settings for this time period are:replica.lag.time.max.ms, and has a server default value, which can be overridden by each topic. What is the ISR? The ISR is simply all the replicas of a partition that are "in-sync" with the leader. The definition of "in-sync" depends on the topic configuration, but by default, it means that a replica is or has been fully caught up with the leader in the last 10 seconds. The setting for this time period is: replica.lag.time.max.ms and has a server default which can be overridden on a per topic basis. At a minimum the, ISR will consist of the leader replica and any additional follower replicas that are also considered in-sync. Followers replicate data from the leader to themselves by sending Fetch Requests periodically, by default every 500ms. If a follower fails, then it will cease sending fetch requests and after the default, 10 seconds will be removed from the ISR. Likewise, if a follower slows down, perhaps a network related issue or constrained server resources, then as soon as it has been lagging behind the leader for more than 10 seconds it is removed from the ISR. Some other important related parameters to be configured are: min.insync.replicas: Specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. offsets.retention.check.interval.ms: Frequency at which to check for stale Offsets. offsets.topic.segment.bytes: This should be kept relatively small in order to facilitate faster Log Compaction and Cache Loads. replica.lag.time.max.ms: If the follower has not consumed the Leaders log OR sent fetch requests, for at least this much time, it is removed from the ISR. replica.fetch.wait.max.ms : Max wait time for each fetcher request issued by follower replicas, must be less than the replica.lag.time.max.ms to avoid shrinking of ISR. transaction.max.timeout.ms: In case a client requests a timeout greater than this value, it’s not allowed so as to not stall other consumers. zookeeper.session.timeout.ms: Zookeeper session timeout. zookeeper.sync.time.ms: How far a follower can be behind a Leader, setting this too high can result in an ISR that has potentially many out-of-sync nodes.
|
apache-kafka, rhel
| 0
| 3,346
| 1
|
https://stackoverflow.com/questions/73021580/some-replica-are-not-in-sync-when-installing-scratch-kafka-cluster
|
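Two commands that help with the ISR question above on Kafka 2.7; the bootstrap address and the $KAFKA_HOME path are placeholders, not values from the original cluster.

# Show only the partitions whose ISR is smaller than the replica set:
"$KAFKA_HOME"/bin/kafka-topics.sh --bootstrap-server localhost:9092 \
    --describe --under-replicated-partitions

# After the lagging broker has caught up (or been restarted), move leadership back
# to the preferred replicas so load spreads across 1001/1002/1003 again:
"$KAFKA_HOME"/bin/kafka-leader-election.sh --bootstrap-server localhost:9092 \
    --election-type preferred --all-topic-partitions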
72,729,502
|
Docker build fails with Error: Unable to find a match: e2fsprogs xfsprogs mdadm parted
|
Base image is redhat ubi8/ubi Step 5/10 : RUN yum install ca-certificates e2fsprogs xfsprogs util-linux mdadm parted lvm2 libudev-devel -y ---> Running in 792zfd35b185 Updating Subscription Management repositories. Unable to read consumer identity This system is not registered with an entitlement server. You can use subscription-manager to register. Red Hat Universal Base Image 8 (RPMs) - BaseOS 2.1 MB/s | 803 kB 00:00 Red Hat Universal Base Image 8 (RPMs) - AppStre 5.9 MB/s | 3.0 MB 00:00 Red Hat Universal Base Image 8 (RPMs) - CodeRea 120 kB/s | 18 kB 00:00 Package ca-certificates-2021.2.50-80.0.el8_4.noarch is already installed. No match for argument: e2fsprogs No match for argument: xfsprogs Package util-linux-2.32.1-35.el8.x86_64 is already installed. No match for argument: mdadm No match for argument: parted Error: Unable to find a match: e2fsprogs xfsprogs mdadm parted The command '/bin/sh -c yum install ca-certificates e2fsprogs xfsprogs util-linux mdadm parted lvm2 libudev-devel -y' returned a non-zero code: 1 Docker file is below #Docker file FROM registry.access.redhat.com/ubi8/ubi ... ... ... RUN yum install ca-certificates e2fsprogs xfsprogs util-linux nvme-cli mdadm parted lvm2 libudev-devel -y How to resolve it. I have an active subscription on my host machine
|
Docker build fails with Error: Unable to find a match: e2fsprogs xfsprogs mdadm parted Base image is redhat ubi8/ubi Step 5/10 : RUN yum install ca-certificates e2fsprogs xfsprogs util-linux mdadm parted lvm2 libudev-devel -y ---> Running in 792zfd35b185 Updating Subscription Management repositories. Unable to read consumer identity This system is not registered with an entitlement server. You can use subscription-manager to register. Red Hat Universal Base Image 8 (RPMs) - BaseOS 2.1 MB/s | 803 kB 00:00 Red Hat Universal Base Image 8 (RPMs) - AppStre 5.9 MB/s | 3.0 MB 00:00 Red Hat Universal Base Image 8 (RPMs) - CodeRea 120 kB/s | 18 kB 00:00 Package ca-certificates-2021.2.50-80.0.el8_4.noarch is already installed. No match for argument: e2fsprogs No match for argument: xfsprogs Package util-linux-2.32.1-35.el8.x86_64 is already installed. No match for argument: mdadm No match for argument: parted Error: Unable to find a match: e2fsprogs xfsprogs mdadm parted The command '/bin/sh -c yum install ca-certificates e2fsprogs xfsprogs util-linux mdadm parted lvm2 libudev-devel -y' returned a non-zero code: 1 Docker file is below #Docker file FROM registry.access.redhat.com/ubi8/ubi ... ... ... RUN yum install ca-certificates e2fsprogs xfsprogs util-linux nvme-cli mdadm parted lvm2 libudev-devel -y How to resolve it. I have an active subscription on my host machine
|
docker, rhel
| 0
| 621
| 1
|
https://stackoverflow.com/questions/72729502/docker-build-fails-with-error-unable-to-find-a-match-e2fsprogs-xfsprogs-mdadm
|
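A hedged note on the UBI build failure above: e2fsprogs, xfsprogs, mdadm and parted live in the full RHEL BaseOS repository, not in the freely redistributable UBI repos, so the build has to see the host's entitlement. With podman/buildah on a subscribed RHEL 8 host that happens automatically; with Docker the entitlement certificates have to be passed into the build explicitly, and the details depend on the Docker/BuildKit version.

# On a registered RHEL 8 host, building with podman lets the UBI image use the
# host entitlement, so the same 'yum install' line succeeds:
sudo podman build -t myimage .

# Sanity check on the host that an entitlement certificate actually exists:
ls /etc/pki/entitlement/*.pem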
72,601,129
|
Error installing gem mysql2 -v 0.3.21 on RHEL 8
|
I am copying my application to a new server, as old one got corrupted, but while trying to run bundle install, gem mysql2 failed to install. [me@localhost redmine]$ gem install mysql2 -v 0.3.21 Building native extensions. This could take a while... ERROR: Error installing mysql2: ERROR: Failed to build gem native extension. current directory: /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/gems/mysql2-0.3.21/ext/mysql2 /home/x-mwojciechow4/.rvm/rubies/ruby-2.2.9/bin/ruby -I /home/x-mwojciechow4/.rvm/rubies/ruby-2.2.9/lib/ruby/site_ruby/2.2.0 -r ./siteconf20220613-3672195-xa12ap.rb extconf.rb checking for ruby/thread.h... yes checking for rb_thread_call_without_gvl() in ruby/thread.h... yes checking for rb_thread_blocking_region()... no checking for rb_wait_for_single_fd()... yes checking for rb_hash_dup()... yes checking for rb_intern3()... yes ----- Using mysql_config at /usr/bin/mysql_config ----- checking for mysql.h... yes checking for errmsg.h... yes checking for mysqld_error.h... yes ----- Setting libpath to /usr/lib64/mysql ----- creating Makefile current directory: /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/gems/mysql2-0.3.21/ext/mysql2 make "DESTDIR=" clean current directory: /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/gems/mysql2-0.3.21/ext/mysql2 make "DESTDIR=" compiling client.c client.c: In function ‘nogvl_read_query_result’: client.c:439:3: error: unknown type name ‘my_bool’; did you mean ‘bool’? my_bool res = mysql_read_query_result(client); ^~~~~~~ bool client.c: In function ‘_mysql_client_options’: client.c:762:3: error: unknown type name ‘my_bool’; did you mean ‘bool’? my_bool boolval; ^~~~~~~ bool client.c:797:10: error: ‘MYSQL_SECURE_AUTH’ undeclared (first use in this function); did you mean ‘MYSQL_DEFAULT_AUTH’? case MYSQL_SECURE_AUTH: ^~~~~~~~~~~~~~~~~ MYSQL_DEFAULT_AUTH client.c:797:10: note: each undeclared identifier is reported only once for each function it appears in client.c: In function ‘set_secure_auth’: client.c:1185:38: error: ‘MYSQL_SECURE_AUTH’ undeclared (first use in this function); did you mean ‘MYSQL_DEFAULT_AUTH’? return _mysql_client_options(self, MYSQL_SECURE_AUTH, value); ^~~~~~~~~~~~~~~~~ MYSQL_DEFAULT_AUTH client.c:1186:1: warning: control reaches end of non-void function [-Wreturn-type] } ^ make: *** [Makefile:238: client.o] Error 1 make failed, exit code 2 Gem files will remain installed in /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/gems/mysql2-0.3.21 for inspection. Results logged to /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/extensions/x86_64-linux/2.2.0/mysql2-0.3.21/gem_make.out The first error I notice is "checking for rb_thread_blocking_region()... no" -however I didnt find anything useful by googling this. It is important to me to stay on the same mysql2 version, as this is a testing server, so I want it to be as close to production server as possible. system: RHEL 8.1 Ruby 2.2.9p480 Rails 4.2.7.1 MySql 8.0.29 MySQL Community Server - GPL
|
Error installing gem mysql2 -v 0.3.21 on RHEL 8 I am copying my application to a new server, as old one got corrupted, but while trying to run bundle install, gem mysql2 failed to install. [me@localhost redmine]$ gem install mysql2 -v 0.3.21 Building native extensions. This could take a while... ERROR: Error installing mysql2: ERROR: Failed to build gem native extension. current directory: /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/gems/mysql2-0.3.21/ext/mysql2 /home/x-mwojciechow4/.rvm/rubies/ruby-2.2.9/bin/ruby -I /home/x-mwojciechow4/.rvm/rubies/ruby-2.2.9/lib/ruby/site_ruby/2.2.0 -r ./siteconf20220613-3672195-xa12ap.rb extconf.rb checking for ruby/thread.h... yes checking for rb_thread_call_without_gvl() in ruby/thread.h... yes checking for rb_thread_blocking_region()... no checking for rb_wait_for_single_fd()... yes checking for rb_hash_dup()... yes checking for rb_intern3()... yes ----- Using mysql_config at /usr/bin/mysql_config ----- checking for mysql.h... yes checking for errmsg.h... yes checking for mysqld_error.h... yes ----- Setting libpath to /usr/lib64/mysql ----- creating Makefile current directory: /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/gems/mysql2-0.3.21/ext/mysql2 make "DESTDIR=" clean current directory: /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/gems/mysql2-0.3.21/ext/mysql2 make "DESTDIR=" compiling client.c client.c: In function ‘nogvl_read_query_result’: client.c:439:3: error: unknown type name ‘my_bool’; did you mean ‘bool’? my_bool res = mysql_read_query_result(client); ^~~~~~~ bool client.c: In function ‘_mysql_client_options’: client.c:762:3: error: unknown type name ‘my_bool’; did you mean ‘bool’? my_bool boolval; ^~~~~~~ bool client.c:797:10: error: ‘MYSQL_SECURE_AUTH’ undeclared (first use in this function); did you mean ‘MYSQL_DEFAULT_AUTH’? case MYSQL_SECURE_AUTH: ^~~~~~~~~~~~~~~~~ MYSQL_DEFAULT_AUTH client.c:797:10: note: each undeclared identifier is reported only once for each function it appears in client.c: In function ‘set_secure_auth’: client.c:1185:38: error: ‘MYSQL_SECURE_AUTH’ undeclared (first use in this function); did you mean ‘MYSQL_DEFAULT_AUTH’? return _mysql_client_options(self, MYSQL_SECURE_AUTH, value); ^~~~~~~~~~~~~~~~~ MYSQL_DEFAULT_AUTH client.c:1186:1: warning: control reaches end of non-void function [-Wreturn-type] } ^ make: *** [Makefile:238: client.o] Error 1 make failed, exit code 2 Gem files will remain installed in /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/gems/mysql2-0.3.21 for inspection. Results logged to /home/x-mwojciechow4/.rvm/gems/ruby-2.2.9/extensions/x86_64-linux/2.2.0/mysql2-0.3.21/gem_make.out The first error I notice is "checking for rb_thread_blocking_region()... no" -however I didnt find anything useful by googling this. It is important to me to stay on the same mysql2 version, as this is a testing server, so I want it to be as close to production server as possible. system: RHEL 8.1 Ruby 2.2.9p480 Rails 4.2.7.1 MySql 8.0.29 MySQL Community Server - GPL
|
ruby, rubygems, rhel, mysql2
| 0
| 823
| 1
|
https://stackoverflow.com/questions/72601129/error-installing-gem-mysql2-v-0-3-21-on-rhel-8
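The my_bool and MYSQL_SECURE_AUTH compile errors above come from the MySQL 8 client headers, which dropped symbols that mysql2 0.3.x still expects. A minimal sketch of one commonly suggested workaround, assuming the MariaDB client headers are an acceptable substitute on a test box (the package name and the mariadb_config path are assumptions and may differ between RHEL 8 minor releases):

    # Install the MariaDB client development headers, which still define my_bool
    sudo dnf install -y mariadb-connector-c-devel

    # Point the gem's build at the MariaDB config script instead of MySQL 8's mysql_config
    gem install mysql2 -v 0.3.21 -- --with-mysql-config=/usr/bin/mariadb_config

If the box must keep building against the genuine MySQL 8 client library, the usual alternative is a newer mysql2 release or a vendor-patched 0.3.x, since upstream 0.3.21 predates the MySQL 8 API changes.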
|
72,360,585
|
Auto Start Script
|
So I am making a script that can run these commands whenever a server boot/reboot: sudo bash su - erp cd frappe-bench/ bench start >/tmp/bench_log & I found guides here and there about how can I change user in script I came out with the following script: #! /bin/sh sudo -u erp bash cd /home/erp/frappe-bench/ bench start >/tmp/bench_log & And, I have created a service at /etc/systemd/system/ and set it to run automatically when the server boots up. The problem is, whenever I run sudo systemctl start erpnextd.service and checked the status, it came up with this May 24 17:10:05 appbsystem2 systemd[1]: Started ERPNext | Auto Restart. May 24 17:10:05 appbsystem2 sudo[18814]: root : TTY=unknown ; PWD=/ ; USER=>erp ; COMMAND=/bin/bash May 24 17:10:05 appbsystem2 systemd[1]: erpnextd.service: Succeeded. But it still doesn't start up ERPNext. All I wanted to do is make a script that will start erpnext automatically everytime a server reboot. Note: I only install frappe-bench on user erp only
|
Auto Start Script So I am making a script that can run these commands whenever a server boot/reboot: sudo bash su - erp cd frappe-bench/ bench start >/tmp/bench_log & I found guides here and there about how can I change user in script I came out with the following script: #! /bin/sh sudo -u erp bash cd /home/erp/frappe-bench/ bench start >/tmp/bench_log & And, I have created a service at /etc/systemd/system/ and set it to run automatically when the server boots up. The problem is, whenever I run sudo systemctl start erpnextd.service and checked the status, it came up with this May 24 17:10:05 appbsystem2 systemd[1]: Started ERPNext | Auto Restart. May 24 17:10:05 appbsystem2 sudo[18814]: root : TTY=unknown ; PWD=/ ; USER=>erp ; COMMAND=/bin/bash May 24 17:10:05 appbsystem2 systemd[1]: erpnextd.service: Succeeded. But it still doesn't start up ERPNext. All I wanted to do is make a script that will start erpnext automatically everytime a server reboot. Note: I only install frappe-bench on user erp only
|
bash, rhel, erpnext
| 0
| 1,347
| 3
|
https://stackoverflow.com/questions/72360585/auto-start-script
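In the script above, sudo -u erp bash only opens a shell and returns, so the cd and bench lines never run as erp. A minimal unit-file sketch that lets systemd set the user and working directory itself (the bench command is assumed to be on erp's login PATH, and the log location mirrors the original commands):

    # /etc/systemd/system/erpnextd.service (sketch)
    [Unit]
    Description=ERPNext bench
    After=network.target

    [Service]
    Type=simple
    User=erp
    WorkingDirectory=/home/erp/frappe-bench
    # -l gives a login shell so erp's PATH (where bench is installed) is loaded
    ExecStart=/bin/bash -lc 'bench start >/tmp/bench_log 2>&1'
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

After editing, run systemctl daemon-reload and systemctl enable --now erpnextd.service, then check journalctl -u erpnextd for startup errors.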
|
72,245,857
|
Puppet agent disabled in Puppet master
|
In my environment, there is a RHEL Puppet master which is successfully managing over 500 nodes. When I run Puppet on the master server (puppet agent -t), I get the error below. It seems the Puppet agent is disabled on the master. Is there any impact if I enable the Puppet agent on the master? * [root@puppet-master]# puppet agent -t Notice: Skipping run of Puppet configuration client; administratively disabled (Reason: 'reason not specified'); Use 'puppet agent --enable' to re-enable.*
|
Puppet agent disabled in Puppet master In my environment, there is a RHEL Puppet master which is successfully managing over 500 nodes. When I run Puppet on the master server (puppet agent -t), I get the error below. It seems the Puppet agent is disabled on the master. Is there any impact if I enable the Puppet agent on the master? * [root@puppet-master]# puppet agent -t Notice: Skipping run of Puppet configuration client; administratively disabled (Reason: 'reason not specified'); Use 'puppet agent --enable' to re-enable.*
|
devops, puppet, rhel, puppet-enterprise
| 0
| 1,380
| 1
|
https://stackoverflow.com/questions/72245857/puppet-agent-disabled-in-puppet-master
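Re-enabling the agent on the master only affects how Puppet manages the master's own node; it does not change how the 500 agents are served by the server process. A small sketch of enabling it and previewing the effect before a real run:

    puppet agent --enable        # clears the administrative lock shown in the notice
    puppet agent -t --noop       # dry run: report what would change on the master itself
    puppet agent -t              # apply for real once the noop output looks acceptable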
|
71,511,238
|
PHP Warning: Module 'oci8' already loaded in Unknown on line 0
|
My applications are running on Linux RHEL 8 and it uses oracle DB. I tested the DB connection it's working properly. But now I am facing this error PHP Warning: Module 'oci8' already loaded in Unknown on line 0 I disabled extension=oci8.so in /etc/php.ini extension=oci8.so in /etc/php.d/20-oci8.ini file But none of this is working for me. How to solve it? Any clue?
|
PHP Warning: Module 'oci8' already loaded in Unknown on line 0 My applications are running on Linux RHEL 8 and it uses oracle DB. I tested the DB connection it's working properly. But now I am facing this error PHP Warning: Module 'oci8' already loaded in Unknown on line 0 I disabled extension=oci8.so in /etc/php.ini extension=oci8.so in /etc/php.d/20-oci8.ini file But none of this is working for me. How to solve it? Any clue?
|
php, oracle-database, rhel, oci8
| 0
| 2,987
| 1
|
https://stackoverflow.com/questions/71511238/php-warning-module-oci8-already-loaded-in-unknown-on-line-0
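That warning normally means the oci8 extension is being loaded twice, from two different ini files (or once from an ini file and once compiled in). A diagnostic sketch for finding every reference on a RHEL 8 box:

    php --ini                                   # list every ini file PHP actually parses
    grep -Rn "oci8" /etc/php.ini /etc/php.d/    # look for duplicate extension=oci8 lines
    php -m | grep -i oci8                       # after commenting one out, confirm it still loads exactly once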
|
71,408,778
|
How to back up acl to file then apply this to another folder?
|
Backup "myfolder" folder's permission with getfacl and apply those permissions to "/yourfolder" directory. getfacl file1 | setfacl --set-file=- file2 code above does not suite for me as I need 2 different commands
|
How to back up acl to file then apply this to another folder? Back up the "myfolder" folder's permissions with getfacl and apply those permissions to the "/yourfolder" directory. getfacl file1 | setfacl --set-file=- file2 The code above does not suit me, as I need two separate commands.
|
linux, centos, acl, rhel
| 0
| 1,522
| 1
|
https://stackoverflow.com/questions/71408778/how-to-back-up-acl-to-file-then-apply-this-to-another-folder
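The pipe can be split into the two separate commands asked for by saving the ACL dump to an intermediate file; setfacl --set-file reads the same format that getfacl writes. A sketch using the directories from the question:

    # Step 1: back up the ACL of /myfolder to a file
    getfacl --absolute-names /myfolder > /tmp/myfolder.acl

    # Step 2 (later, or in a different job): apply that ACL to /yourfolder
    setfacl --set-file=/tmp/myfolder.acl /yourfolder

For a whole tree, getfacl -R produces a multi-entry dump, but setfacl --restore re-applies it to the original paths, so copying a recursive dump onto a different directory would require rewriting the # file: lines first.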
|
71,352,496
|
Where is it possible to download some sample applications for jboss eap?
|
I have just started working with jboss eap 7 application servers, so I would like to have an environment where I can carry out various tests, I have created a small laboratory with standalone and cluster included, but I would like to have some applications where I can test deployments, status monitoring and the server logs and the applications, I have a graylog for that purpose of the logs, but I just need the ear war applications etc... where could I download some sample applications to deploy in my laboratory? Thanks in advance
|
Where is it possible to download some sample applications for jboss eap? I have just started working with jboss eap 7 application servers, so I would like to have an environment where I can carry out various tests, I have created a small laboratory with standalone and cluster included, but I would like to have some applications where I can test deployments, status monitoring and the server logs and the applications, I have a graylog for that purpose of the logs, but I just need the ear war applications etc... where could I download some sample applications to deploy in my laboratory? Thanks in advance
|
deployment, jboss, rhel, sample
| 0
| 1,384
| 1
|
https://stackoverflow.com/questions/71352496/where-is-it-possible-to-download-some-sample-applications-for-jboss-eap
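Red Hat publishes a set of small sample applications (the EAP quickstarts) that build into deployable WARs and EARs, which suit this kind of lab well. A sketch assuming Git and Maven are available and that the public quickstarts repository is acceptable (pick the branch that matches your EAP 7 minor version):

    git clone https://github.com/jboss-developer/jboss-eap-quickstarts.git
    cd jboss-eap-quickstarts/helloworld
    mvn clean package                                    # produces target/helloworld.war
    cp target/helloworld.war $JBOSS_HOME/standalone/deployments/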
|
71,311,556
|
In a ksh script find value of latest file/jar and save to a variable
|
I am a self-confessed beginner level when it comes to RHEL and I can navigate around, find things, run things etc. And I only have basic permissions on my DEV, INT, PROD and BCP servers at my client as most of my time is taken up in complex development. I use ksh scripts to launch Tomcat, and for the new version of my project it is a SpringBoot launchable JAR file. New deployments will feature a dynamically changing jar/file name deployed from Nexus such that instead of the fixed name (in the older Tomcat setup for exploded folder in /webapps) I will need to look it up as it'll have a changing version number at the end. I came up with this: find . -name bondac*.jar | awk '{print substr($1,3);}' Which works on CLI, so I added it to my launch ksh file (which worked previously with fixed jar name) like so: jartolaunch = find . -name bondac*.jar | awk '{print substr($1,3);}' java -jar $jartolaunch --spring.config.location=/data/chronos/activepivot/config/application.yml && sleep 1 && check_process $1 However this gives the error, when ksh is run, of 'jartolaunch not found'.
|
In a ksh script find value of latest file/jar and save to a variable I am a self-confessed beginner level when it comes to RHEL and I can navigate around, find things, run things etc. And I only have basic permissions on my DEV, INT, PROD and BCP servers at my client as most of my time is taken up in complex development. I use ksh scripts to launch Tomcat, and for the new version of my project it is a SpringBoot launchable JAR file. New deployments will feature a dynamically changing jar/file name deployed from Nexus such that instead of the fixed name (in the older Tomcat setup for exploded folder in /webapps) I will need to look it up as it'll have a changing version number at the end. I came up with this: find . -name bondac*.jar | awk '{print substr($1,3);}' Which works on CLI, so I added it to my launch ksh file (which worked previously with fixed jar name) like so: jartolaunch = find . -name bondac*.jar | awk '{print substr($1,3);}' java -jar $jartolaunch --spring.config.location=/data/chronos/activepivot/config/application.yml && sleep 1 && check_process $1 However this gives the error, when ksh is run, of 'jartolaunch not found'.
|
linux, awk, ksh, rhel
| 0
| 76
| 1
|
https://stackoverflow.com/questions/71311556/in-a-ksh-script-find-value-of-latest-file-jar-and-save-to-a-variable
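In ksh an assignment must have no spaces around = and capturing a pipeline's output needs command substitution; with the spaces, the shell tries to execute a command called jartolaunch, which matches the "not found" error. A sketch keeping the original pipeline:

    #!/bin/ksh
    # No spaces around '=', and $(...) captures the pipeline's output
    jartolaunch=$(find . -name 'bondac*.jar' | awk '{print substr($1,3);}')

    java -jar "$jartolaunch" --spring.config.location=/data/chronos/activepivot/config/application.yml \
      && sleep 1 && check_process "$1"

Quoting 'bondac*.jar' also stops the shell from expanding the glob before find sees it, which matters once more than one versioned jar is present.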
|
71,056,247
|
unzip -j -o not working through Ansible, but working directly on host
|
I am trying to grab one file from app.war called test.properties , this command works perfectly on a RHEL host when I run it directly on it: unzip -j -o /home/test/app.war "*test.properties*" But , when I run the same thing in Ansible, it does not work, it does not extract anything, there is no change: - name: Extract test.properties file shell: 'unzip -j -o /home/test/app.war "*test.properties*"' FYI, I CANNOT use Unarchive Module due to Ansible being version 2.9 . Am I doing anything wrong Ansible side? Maybe I am missing something extra like sudo or quotes?
|
unzip -j -o not working through Ansible, but working directly on host I am trying to grab one file from app.war called test.properties , this command works perfectly on a RHEL host when I run it directly on it: unzip -j -o /home/test/app.war "*test.properties*" But , when I run the same thing in Ansible, it does not work, it does not extract anything, there is no change: - name: Extract test.properties file shell: 'unzip -j -o /home/test/app.war "*test.properties*"' FYI, I CANNOT use Unarchive Module due to Ansible being version 2.9 . Am I doing anything wrong Ansible side? Maybe I am missing something extra like sudo or quotes?
|
linux, ansible, yaml, unzip, rhel
| 0
| 1,398
| 1
|
https://stackoverflow.com/questions/71056247/unzip-j-o-not-working-through-ansible-but-working-directly-on-host
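With the shell module the command runs in the remote user's default directory, so the relative extraction does not land in /home/test the way it does when run interactively there. A hedged sketch of the task above that pins the working directory and captures output so the behaviour can be inspected:

    - name: Extract test.properties file
      shell: 'unzip -j -o /home/test/app.war "*test.properties*"'
      args:
        chdir: /home/test          # extracted file lands here, not in Ansible's default cwd
      register: unzip_out

    - name: Show unzip output
      debug:
        var: unzip_out.stdout_lines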
|
71,020,916
|
Podman image not updating like it has to
|
I ran the following commands to change some lines in a file contained in a podman container: # RUN THE IMAGE podman run -it opensearchproject/opensearch-dashboards:1.2.0 /bin/bash # READ CONTENT cat config\opensearch_dashboards.yml # OLD CONTENT while IFS='' read -r a; do echo "${a//localhost/0.0.0.0}" done < opensearch_dashboards.yml > opensearch_dashboards.yml.t mv opensearch_dashboards.yml{.t,} # READ NEW CONTENT cat config\opensearch_dashboards.yml # NEW CONTENT LOOKS FINE, CLOSE SESSION exit # RUN IMAGE, AGAIN podman run -it opensearchproject/opensearch-dashboards:1.2.0 /bin/bash # READ CONTENT AGAIN cat config\opensearch_dashboards.yml # OLD CONTENT SHOWS UP What am I missing? I thought I could update the image, but it doesn't work. Everytime the replace works, it goes up in flames. I'm new to containers and I feel stuck.
|
Podman image not updating like it has to I ran the following commands to change some lines in a file contained in a podman container: # RUN THE IMAGE podman run -it opensearchproject/opensearch-dashboards:1.2.0 /bin/bash # READ CONTENT cat config\opensearch_dashboards.yml # OLD CONTENT while IFS='' read -r a; do echo "${a//localhost/0.0.0.0}" done < opensearch_dashboards.yml > opensearch_dashboards.yml.t mv opensearch_dashboards.yml{.t,} # READ NEW CONTENT cat config\opensearch_dashboards.yml # NEW CONTENT LOOKS FINE, CLOSE SESSION exit # RUN IMAGE, AGAIN podman run -it opensearchproject/opensearch-dashboards:1.2.0 /bin/bash # READ CONTENT AGAIN cat config\opensearch_dashboards.yml # OLD CONTENT SHOWS UP What am I missing? I thought I could update the image, but it doesn't work. Everytime the replace works, it goes up in flames. I'm new to containers and I feel stuck.
|
bash, unix, rhel, podman
| 0
| 1,224
| 1
|
https://stackoverflow.com/questions/71020916/podman-image-not-updating-like-it-has-to
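Edits made inside a running container live only in that container's writable layer; a fresh podman run from the original image always starts clean, which is why the old content reappears. One way to keep the change, sketched below, is to commit the modified container to a new local tag (the container and tag names are arbitrary):

    # Run with a name so the container can be referenced afterwards
    podman run --name osd-edit -it opensearchproject/opensearch-dashboards:1.2.0 /bin/bash
    #   ...edit config/opensearch_dashboards.yml inside, then exit...

    # Persist the edited filesystem as a new image and use it from now on
    podman commit osd-edit localhost/opensearch-dashboards:localhost-fix
    podman run -it localhost/opensearch-dashboards:localhost-fix /bin/bash

Bind-mounting an edited copy of the file into the container, or building a small derived image from a Containerfile, are the more repeatable alternatives.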
|
70,958,470
|
Restart jboss service or redeploy when modifying application configurations?
|
When the properties or .xml files of an application (EAR) in JBoss (RHEL) are modified, is it necessary to restart the JBoss service, or are the changes recognized simply by doing a redeploy (mv .deployed .dodeploy)? Thanks
|
Restart jboss service or redeploy when modifying application configurations? When the properties or .xml files of an application (EAR) in JBoss (RHEL) are modified, is it necessary to restart the JBoss service, or are the changes recognized simply by doing a redeploy (mv .deployed .dodeploy)? Thanks
|
java, jboss, rhel, restart
| 0
| 319
| 1
|
https://stackoverflow.com/questions/70958470/restart-jboss-service-or-redoploy-when-modifying-application-configurations
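For the marker-file workflow mentioned in the question, a tiny sketch of forcing a redeploy without restarting the service (whether new property values are picked up still depends on how the application reads them):

    cd $JBOSS_HOME/standalone/deployments
    touch myapp.ear.dodeploy      # the deployment scanner redeploys myapp.ear; watch server.log for the outcome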
|
70,163,428
|
install aws cli on rhel 7.x server running on Azure
|
I am trying to get AWS CLI installed on Azure RHEL 7.x server in python virtual environment. I am running into issues with it, this is what I have done so far pip install boto3 pip3 install boto3 aws s3 ls I am getting an error "Traceback (most recent call last): import awscli.clidriver File "/usr/lib/python2.7/site-packages/awscli/clidriver.py", line 17, in module I see how to fix the issue with changing the sed -i -e 's//lib///lib64//' /usr/lib/python2.7/site-packages/awscli/clidriver.py at [URL] My question, How can I point my virtual env to point to python3 Thanks
|
install aws cli on rhel 7.x server running on Azure I am trying to get AWS CLI installed on Azure RHEL 7.x server in python virtual environment. I am running into issues with it, this is what I have done so far pip install boto3 pip3 install boto3 aws s3 ls I am getting an error "Traceback (most recent call last): import awscli.clidriver File "/usr/lib/python2.7/site-packages/awscli/clidriver.py", line 17, in module I see how to fix the issue with changing the sed -i -e 's//lib///lib64//' /usr/lib/python2.7/site-packages/awscli/clidriver.py at [URL] My question, How can I point my virtual env to point to python3 Thanks
|
azure, pip, boto3, aws-cli, rhel
| 0
| 295
| 1
|
https://stackoverflow.com/questions/70163428/install-aws-cli-on-rhel-7-x-server-running-on-azure
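The traceback shows awscli being loaded from the system Python 2.7 site-packages, so the virtual environment is probably not built on python3 or not activated. A sketch of creating a clean python3 environment (paths are arbitrary):

    python3 -m venv ~/awscli-venv             # venv built from python3, not the system python2
    source ~/awscli-venv/bin/activate
    python -V                                 # should now report Python 3.x
    pip install --upgrade pip
    pip install awscli boto3
    which aws && aws --version                # should resolve inside ~/awscli-venv/bin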
|
69,921,199
|
Is it possible to determine how many messages are in a POSIX message queue?
|
I am working with POSIX running on a RHEL machine. Is there a way to check the number of messages that are remaining in a message queue (System V Preferably)? The purpose of this is simply a desire to know which queues have the most messages at a given time so that I can make a "managing" thread receive the messages in a Longest-Queue-First manner. I didn't see anything about this in the man pages (that were C/C++-specific and not tied to IPCs). Does anyone have an idea of how to do this?
|
Is it possible to determine how many messages are in a POSIX message queue? I am working with POSIX running on a RHEL machine. Is there a way to check the number of messages that are remaining in a message queue (System V Preferably)? The purpose of this is simply a desire to know which queues have the most messages at a given time so that I can make a "managing" thread receive the messages in a Longest-Queue-First manner. I didn't see anything about this in the man pages (that were C/C++-specific and not tied to IPCs). Does anyone have an idea of how to do this?
|
c++, queue, posix, message-queue, rhel
| 0
| 905
| 1
|
https://stackoverflow.com/questions/69921199/is-it-possible-to-determine-how-many-messages-are-in-a-posix-message-queue
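Both queue families expose their current depth without extra code: System V via ipcs, POSIX via the mqueue filesystem; programmatically, msgctl(IPC_STAT) returns msg_qnum for System V and mq_getattr() fills mq_curmsgs for POSIX queues. A quick shell sketch (the mount step is only needed if /dev/mqueue is not already mounted, and <queue-name> is a placeholder):

    # System V: the "messages" column is the number of messages currently queued
    ipcs -q

    # POSIX: each queue appears as a file; QSIZE is the total bytes currently queued
    mount -t mqueue none /dev/mqueue 2>/dev/null
    cat /dev/mqueue/<queue-name>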
|
69,212,795
|
Is it possible to get OS Support deadline for RHEL and Windows OS using API call
|
Is there any way we can get the OS support deadline using an API call? We have RHEL and Windows OS VMs.
|
Is it possible to get OS Support deadline for RHEL and Windows OS using API call Is there any way we can get the OS support deadline using an API call? We have RHEL and Windows OS VMs.
|
windows, windows-10, redhat, rhel
| 0
| 52
| 2
|
https://stackoverflow.com/questions/69212795/is-it-possible-to-get-os-support-deadline-for-rhel-and-windows-os-using-api-call
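Neither vendor exposes a single official "support deadline" endpoint, but the community-run endoflife.date service publishes end-of-life dates for both products over a plain JSON API; a hedged sketch (third-party data, so cross-check against the Red Hat and Microsoft lifecycle pages before relying on it):

    curl -s https://endoflife.date/api/rhel.json    | python3 -m json.tool | head -40
    curl -s https://endoflife.date/api/windows.json | python3 -m json.tool | head -40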
|
68,734,394
|
matplotlib subplots runs very slow only on some workstations
|
I'm using matplotlib in scripts on two different RHEL linux workstations (both RHEL 7.9). On one workstation it takes less than a second (0.18s) to call plt.subplots, while the other can take more than 45s!! Below is a simple timing script I used to check the timing. I've tried different versions of matplotlib (via miniconda), but it doesn't seem to make a difference. The workstation that loads plt.subplots slowly should be faster based on specs (see below). Any ideas of other things to check? Thanks in advance for any help! Andy import matplotlib.pyplot as plt import time fig_time_s = time.time() fig, ax = plt.subplots(figsize=(9.0, 4.0), dpi=200) fig_time_e = time.time() fig_time = fig_time_e - fig_time_s print('FIG TIME: {:7.2f}s'.format(fig_time)) version of matplotlib: |# Name |Version |Build |Channel |---|---|---|---| |matplotlib |3.2.2 | 1 |conda-forge |matplotlib-base |3.2.2 |py37h30547a4_0 |conda-forge These are the specs of the workstations: Property faster slower Architecture: x86_64 x86_64 CPU op-mode(s): 32-bit, 64-bit 32-bit, 64-bit Byte Order: Little Endian Little Endian CPU(s): 6 48 On-line CPU(s) list: 0-5 0-47 Thread(s) per core: 1 2 Core(s) per socket: 6 24 Socket(s): 1 1 NUMA node(s): 1 1 Vendor ID: GenuineIntel GenuineIntel CPU family: 6 6 Model: 63 85 Model name: Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz Stepping: 2 4 CPU MHz: 1899.884 1199.871 CPU max MHz: 1900.0000 3700.0000 CPU min MHz: 1200.0000 1200.0000 BogoMIPS: 3791.33 5400.00 Virtualization: VT-x VT-x L1d cache: 32K 32K L1i cache: 32K 32K L2 cache: 256K 1024K L3 cache: 15360K 33792K NUMA node0 CPU(s): 0-5 0-47 NUMA node0 CPU(s): 0-47
|
matplotlib subplots runs very slow only on some workstations I'm using matplotlib in scripts on two different RHEL linux workstations (both RHEL 7.9). On one workstation it takes less than a second (0.18s) to call plt.subplots, while the other can take more than 45s!! Below is a simple timing script I used to check the timing. I've tried different versions of matplotlib (via miniconda), but it doesn't seem to make a difference. The workstation that loads plt.subplots slowly should be faster based on specs (see below). Any ideas of other things to check? Thanks in advance for any help! Andy import matplotlib.pyplot as plt import time fig_time_s = time.time() fig, ax = plt.subplots(figsize=(9.0, 4.0), dpi=200) fig_time_e = time.time() fig_time = fig_time_e - fig_time_s print('FIG TIME: {:7.2f}s'.format(fig_time)) version of matplotlib: |# Name |Version |Build |Channel |---|---|---|---| |matplotlib |3.2.2 | 1 |conda-forge |matplotlib-base |3.2.2 |py37h30547a4_0 |conda-forge These are the specs of the workstations: Property faster slower Architecture: x86_64 x86_64 CPU op-mode(s): 32-bit, 64-bit 32-bit, 64-bit Byte Order: Little Endian Little Endian CPU(s): 6 48 On-line CPU(s) list: 0-5 0-47 Thread(s) per core: 1 2 Core(s) per socket: 6 24 Socket(s): 1 1 NUMA node(s): 1 1 Vendor ID: GenuineIntel GenuineIntel CPU family: 6 6 Model: 63 85 Model name: Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz Stepping: 2 4 CPU MHz: 1899.884 1199.871 CPU max MHz: 1900.0000 3700.0000 CPU min MHz: 1200.0000 1200.0000 BogoMIPS: 3791.33 5400.00 Virtualization: VT-x VT-x L1d cache: 32K 32K L1i cache: 32K 32K L2 cache: 256K 1024K L3 cache: 15360K 33792K NUMA node0 CPU(s): 0-5 0-47 NUMA node0 CPU(s): 0-47
|
python, matplotlib, rhel
| 0
| 339
| 1
|
https://stackoverflow.com/questions/68734394/matplotlib-subplots-runs-very-slow-only-on-some-workstations
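A 45-second first call to plt.subplots is rarely CPU-bound; the usual suspects are a font-cache rebuild, a home directory on a slow network filesystem, or an interactive backend stalling on the display. A diagnostic sketch, where fig_time.py stands in for the timing script above:

    rm -rf ~/.cache/matplotlib                      # force a one-off font-cache rebuild, then re-time
    MPLBACKEND=Agg python fig_time.py               # rules out X11/display-backend stalls
    strace -f -tt -o trace.log python fig_time.py   # the timestamps show which calls eat the 45 s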
|
68,254,545
|
Docker build end up with "no space left on the device" error, though there is enough space in one of the disk
|
There are 3 disks available for my Linux VM, sda (100 GB), sdb (32 GB), sdc (512 GB), for a total of 644 GB. Details below. The free space on the "/var" mount point reaches 0% and the Docker build fails with a "no space left on device" error; to observe this, "watch -n 0.1 df -H" was run in another tab while the Docker build was happening. Why does it show a "no space left" error even though there is enough space on sdc? Why can't the system use the free space on sdb or sdc for the "/var" mount point on sda?
|
Docker build end up with "no space left on the device" error, though there is enough space in one of the disk There are 3 disks available for my Linux VM, sda (100GB), sdb (32 GB), sdc(512GB), total 644 GB space available. Details below. Mount point "/var" free space reaching to 0% and Docker build failing with "no space left on device" error, to notice this, "watch -n 0.1 df -H" used in other tab while Docker build happening. Why it is showing "no space left" error, though there is enough space on sdc ? Why system unable to use sdb or sdc free space for sda "/var" mount point ?
|
linux, docker, rhel, rhel7
| 0
| 861
| 1
|
https://stackoverflow.com/questions/68254545/docker-build-end-up-with-no-space-left-on-the-device-error-though-there-is-en
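Docker keeps images and build layers under /var/lib/docker, so the build fills whatever filesystem /var lives on no matter how big sdc is. One option, sketched with an assumed mount point /data on the large disk, is to move the daemon's data-root there:

    systemctl stop docker
    mkdir -p /data/docker                        # /data assumed to be a filesystem on sdc
    # /etc/docker/daemon.json (create, or merge into an existing file):
    #   { "data-root": "/data/docker" }
    rsync -aHAX /var/lib/docker/ /data/docker/   # carry over existing images and containers
    systemctl start docker
    docker info | grep "Docker Root Dir"         # should now report /data/docker

Growing the /var logical volume onto the spare disks with LVM is the other common approach when the volume layout allows it.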
|
68,209,538
|
Failed to install Oracle database 18 XE in Amazon AWS with 'Amazon Linux 2' AMI
|
I am trying to install Oracle Database 18 XE on an AWS instance that uses Amazon Linux 2 as the AMI. However, it throws an error saying Requires: /etc/redhat-release (adding the whole screenshot here). What does this mean? Any help? I did not find any source saying that Oracle Database 18 XE does not support Amazon Linux 2 and only supports the Red Hat Linux distribution.
|
Failed to install Oracle database 18 XE in Amazon AWS with 'Amazon Linux 2' AMI I am trying to install Oracle Database 18 XE on an AWS instance that uses Amazon Linux 2 as the AMI. However, it throws an error saying Requires: /etc/redhat-release (adding the whole screenshot here). What does this mean? Any help? I did not find any source saying that Oracle Database 18 XE does not support Amazon Linux 2 and only supports the Red Hat Linux distribution.
|
linux, oracle-database, amazon-web-services, amazon-ec2, rhel
| 0
| 966
| 1
|
https://stackoverflow.com/questions/68209538/failed-to-install-oracle-database-18-xe-in-amazon-aws-with-amazon-linux-2-ami
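"Requires: /etc/redhat-release" is a dependency declared inside the Oracle XE RPM, and Amazon Linux 2 ships /etc/system-release instead, so yum refuses the install. A quick check of what the package actually demands (the RPM filename is a placeholder for whichever file was downloaded):

    rpm -qpR oracle-database-xe-18c.rpm | grep -i release   # list the declared dependencies
    cat /etc/system-release                                 # what Amazon Linux 2 provides instead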
|
67,644,900
|
What kind of a thing is the "_gateway" variable in CentOS/RHEL?
|
I already knew about the "_gateway" variable in CentOS/RHEL (I think up to 7 it was just "gateway", without the _ sign). Today I setup AdGuard DNS server in my home lab, on the same machine is a reverse proxy to serve some internal services, I added a second IP to the host because I needed multiple HTTP ports. Nevermind, I set a wildcard DNS entry on adguard. Like this: *.mydomain.com 172.16.20.60 (which is the IP of the reverse proxy mentioned above) - I didn't want to add all services manually, so I chose the wildcard method. A few hours later I noticed that the machine that hosts the dns server and reverse proxy wasn't able to connect to the internet. I did a traceroute and saw that it was trying to reach the internet over _gateway.mydomain.com (which leads to the machine itselfs). A ping on _gateway.mydomain.com returned the machines IP, so I remembered the wildcard dns entry on my dns server. I added _gateway.mydomain.com to it's correct IP and then it worked as expected. My question is what is the thing about this kind of environment variable "_gateway"/"gateway" - why does RedHat do this? And I wonder why the network tries to reach it's gateway via a dns name? I wasn't able to find any information about this and didn't find any setting on the machine itselfs to disable this behavior. Here's some output: ping _gateway PING _gateway.mydomain.com (172.16.20.10) 56(84) bytes of data. 64 bytes from _gateway (172.16.20.10): icmp_seq=1 ttl=64 time=0.112 ms 64 bytes from _gateway (172.16.20.10): icmp_seq=2 ttl=64 time=0.285 ms 64 bytes from _gateway (172.16.20.10): icmp_seq=3 ttl=64 time=0.190 ms 64 bytes from _gateway (172.16.20.10): icmp_seq=4 ttl=64 time=0.284 ms host _gateway _gateway.mydomain.com has address 172.16.20.10 Any ideas? Thanks Dan
|
What kind of a thing is the "_gateway" variable in CentOS/RHEL? I already knew about the "_gateway" variable in CentOS/RHEL (I think up to 7 it was just "gateway", without the _ sign). Today I setup AdGuard DNS server in my home lab, on the same machine is a reverse proxy to serve some internal services, I added a second IP to the host because I needed multiple HTTP ports. Nevermind, I set a wildcard DNS entry on adguard. Like this: *.mydomain.com 172.16.20.60 (which is the IP of the reverse proxy mentioned above) - I didn't want to add all services manually, so I chose the wildcard method. A few hours later I noticed that the machine that hosts the dns server and reverse proxy wasn't able to connect to the internet. I did a traceroute and saw that it was trying to reach the internet over _gateway.mydomain.com (which leads to the machine itselfs). A ping on _gateway.mydomain.com returned the machines IP, so I remembered the wildcard dns entry on my dns server. I added _gateway.mydomain.com to it's correct IP and then it worked as expected. My question is what is the thing about this kind of environment variable "_gateway"/"gateway" - why does RedHat do this? And I wonder why the network tries to reach it's gateway via a dns name? I wasn't able to find any information about this and didn't find any setting on the machine itselfs to disable this behavior. Here's some output: ping _gateway PING _gateway.mydomain.com (172.16.20.10) 56(84) bytes of data. 64 bytes from _gateway (172.16.20.10): icmp_seq=1 ttl=64 time=0.112 ms 64 bytes from _gateway (172.16.20.10): icmp_seq=2 ttl=64 time=0.285 ms 64 bytes from _gateway (172.16.20.10): icmp_seq=3 ttl=64 time=0.190 ms 64 bytes from _gateway (172.16.20.10): icmp_seq=4 ttl=64 time=0.284 ms host _gateway _gateway.mydomain.com has address 172.16.20.10 Any ideas? Thanks Dan
|
dns, centos, rhel, centos8, rhel8
| 0
| 4,823
| 1
|
https://stackoverflow.com/questions/67644900/what-kind-of-a-thing-is-the-gateway-variable-in-centos-rhel
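_gateway is not an environment variable; it is a synthetic host name answered locally by systemd's myhostname/resolve NSS modules, which map it to the default gateway, and that is why it normally resolves with no DNS record at all. The wildcard entry simply shadowed it once lookups started going to AdGuard. A quick check of where the name comes from:

    getent hosts _gateway            # resolved through nsswitch, normally by the myhostname module
    grep ^hosts /etc/nsswitch.conf   # shows whether myhostname / resolve sit in the lookup order
    ip route show default            # the address _gateway is meant to point at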
|
67,416,161
|
PKCS#11 interop convert in .net core error in RHEL
|
I am using PKCS#11 interop version 1.3, converted to .NET Core 3.1. The .NET Core application works fine in a Windows environment but gives an error on RHEL: Method C_FindObjectsInit returned CKR_TEMPLATE_INCONSISTENT I know that the latest version of the interop library supports .NET Standard 2.0, so it would be easy to create the application in .NET Core 3.1, but I have a few restrictions and have to use only PKCS#11 interop 1.3.
|
PKCS#11 interop convert in .net core error in RHEL I am using PKCS#11 interop version 1.3, converted to .NET Core 3.1. The .NET Core application works fine in a Windows environment but gives an error on RHEL: Method C_FindObjectsInit returned CKR_TEMPLATE_INCONSISTENT I know that the latest version of the interop library supports .NET Standard 2.0, so it would be easy to create the application in .NET Core 3.1, but I have a few restrictions and have to use only PKCS#11 interop 1.3.
|
.net-core, rhel, pkcs#11, pkcs11interop
| 0
| 191
| 1
|
https://stackoverflow.com/questions/67416161/pkcs11-interop-convert-in-net-core-error-in-rhel
|
67,382,978
|
Python requests using a proxy returns 'service not known, socket.getaddrinfo, socket.gaierror: [Errno -2] on a valid url
|
Error: Failed to establish a new connection: [Errno -2] Name or service not known File "/ / /****/lib64/python3.6/site-packages/urllib3/util/connection.py", line 73, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "a", line 745, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno -2] Name or service not known This problem is only happening on Redhat 7 in production. I have the exact same code running on ubuntu from a different network and I have no problems. Everything works perfect. My code calls 3 API's. 2 do not work on redhat but one does. I am using an anonymous proxy. The other strange thing is I can telnet to all theses api's on redhat through the same proxy and they work fine. But just not through requests. UPDATE: the server is on a corporate network which does not resolve dns. The dns is resolved on the external proxy To me it seems as though the requests library is not communicating with an updated dns resolver or something like this, but I am a bit hazy on dns to be honest. I don't think there is any point in posting the code as I said it all works perfect on ubuntu. Just for the sake of It I will post one of the calls. I am using ... requests==2.25.1 Python 3.6.12 Redhat 7 HTTP_PROXIES={'http': '[URL] headers = {'accept': 'application/json', 'Content-Type': 'application/json',} data = json.dumps({"key":list_values}) response = requests.post('[URL] proxies=HTTP_PROXIES, headers=headers, data=data) Any help , very much appreciated
|
Python requests using a proxy returns 'service not known, socket.getaddrinfo, socket.gaierror: [Errno -2] on a valid url Error: Failed to establish a new connection: [Errno -2] Name or service not known File "/ / /****/lib64/python3.6/site-packages/urllib3/util/connection.py", line 73, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "a", line 745, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno -2] Name or service not known This problem is only happening on Redhat 7 in production. I have the exact same code running on ubuntu from a different network and I have no problems. Everything works perfect. My code calls 3 API's. 2 do not work on redhat but one does. I am using an anonymous proxy. The other strange thing is I can telnet to all theses api's on redhat through the same proxy and they work fine. But just not through requests. UPDATE: the server is on a corporate network which does not resolve dns. The dns is resolved on the external proxy To me it seems as though the requests library is not communicating with an updated dns resolver or something like this, but I am a bit hazy on dns to be honest. I don't think there is any point in posting the code as I said it all works perfect on ubuntu. Just for the sake of It I will post one of the calls. I am using ... requests==2.25.1 Python 3.6.12 Redhat 7 HTTP_PROXIES={'http': '[URL] headers = {'accept': 'application/json', 'Content-Type': 'application/json',} data = json.dumps({"key":list_values}) response = requests.post('[URL] proxies=HTTP_PROXIES, headers=headers, data=data) Any help , very much appreciated
|
python-3.x, proxy, python-requests, dns, rhel
| 0
| 1,358
| 1
|
https://stackoverflow.com/questions/67382978/python-requests-using-a-proxy-returns-service-not-known-socket-getaddrinfo-so
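One common cause of this exact getaddrinfo failure on a network without local DNS is a proxies dict that only defines an "http" entry: any https:// API is then fetched directly and requests tries to resolve the host itself. A hedged sketch with both schemes routed through the proxy (hostnames are placeholders):

    import json
    import requests

    # Both schemes must go through the proxy, otherwise https URLs bypass it
    HTTP_PROXIES = {
        "http":  "http://proxy.example.com:3128",   # placeholder proxy URL
        "https": "http://proxy.example.com:3128",
    }

    headers = {"accept": "application/json", "Content-Type": "application/json"}
    data = json.dumps({"key": [1, 2, 3]})

    resp = requests.post(
        "https://api.example.com/endpoint",          # placeholder API URL
        proxies=HTTP_PROXIES,
        headers=headers,
        data=data,
        timeout=30,
    )
    print(resp.status_code)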
|
67,023,354
|
Continually output resource monitor information to files in linux
|
So I am attempting to sample the output of the resource monitor in order to get an idea of how specific test cases affect the machine over time, but I have not been able to find a way to continually output information from the resource monitor (using top command on RHEL) into a new file or existing log file (or is there a log file that exists already for this?). I am attempting to data mine the resource monitor to find the optimal load balancing for specific instances on this host. I do not want to cause too much variation in the resources by doing so, but i am aware that there will be some error in the resources caused by creating new files. (I will normalize accordingly)
|
Continually output resource monitor information to files in linux So I am attempting to sample the output of the resource monitor in order to get an idea of how specific test cases affect the machine over time, but I have not been able to find a way to continually output information from the resource monitor (using top command on RHEL) into a new file or existing log file (or is there a log file that exists already for this?). I am attempting to data mine the resource monitor to find the optimal load balancing for specific instances on this host. I do not want to cause too much variation in the resources by doing so, but i am aware that there will be some error in the resources caused by creating new files. (I will normalize accordingly)
|
linux-kernel, rhel
| 0
| 71
| 1
|
https://stackoverflow.com/questions/67023354/continually-output-resource-monitor-information-to-files-in-linux
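top has a batch mode meant for exactly this: it writes plain text that can be appended to a log at whatever interval is wanted (the interval and sample count below are examples), and sar from the sysstat package is a lower-overhead alternative that already keeps history under /var/log/sa:

    # One sample every 60 s, 1440 samples (24 h), appended to a log file
    top -b -d 60 -n 1440 >> /var/log/top_samples.log &

    # Lower-overhead alternative: CPU and memory every 60 s from sysstat
    sar -u -r 60 1440 >> /var/log/sar_samples.log &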
|
66,858,999
|
RHEL: allow user1 to launch a program that reads settings file owned by another user
|
The question is about configuring the RHEL operating system, or adding a custom script, I suppose. I want to allow user1 to launch my program, which reads a settings file owned by another user. The final goal is: to prevent user1 from being able to read the settings file; to allow user1 to launch my program executable. I supposed that my program and my settings file could be owned by root, giving user1 the right to execute the program. But if I do this, will the program be able to read a settings file owned by root? Is there a solution to this problem without customizing my program executable? Edit: The goal is to protect the settings file content but still allow the user to use the application. Another way to phrase the same problem as a different question is: suppose that I give root ownership to the exe and the settings file, and then start the exe automatically during boot. User1 will not be able to read the settings file (this is what I want). Suppose that the exe is a terminal application that prints to standard output and expects commands on standard input. Is there a way for user1 to read the standard output of, and write to the standard input of, the exe previously launched by root?
|
RHEL: allow user1 to launch a program that reads settings file owned by another user The question is about configuring the RHEL operating system, or adding a custom script, I suppose. I want to allow user1 to launch my program, which reads a settings file owned by another user. The final goal is: to prevent user1 from being able to read the settings file; to allow user1 to launch my program executable. I supposed that my program and my settings file could be owned by root, giving user1 the right to execute the program. But if I do this, will the program be able to read a settings file owned by root? Is there a solution to this problem without customizing my program executable? Edit: The goal is to protect the settings file content but still allow the user to use the application. Another way to phrase the same problem as a different question is: suppose that I give root ownership to the exe and the settings file, and then start the exe automatically during boot. User1 will not be able to read the settings file (this is what I want). Suppose that the exe is a terminal application that prints to standard output and expects commands on standard input. Is there a way for user1 to read the standard output of, and write to the standard input of, the exe previously launched by root?
|
executable, rhel, rhel7, rhel6, rhel5
| 0
| 123
| 1
|
https://stackoverflow.com/questions/66858999/rhel-allow-user1-to-launch-a-program-that-reads-settings-file-owned-by-another
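One standard pattern, sketched below, is a dedicated group that user1 is not a member of: the settings file is readable only by that group, and the executable carries the setgid bit so the running process gains the group and can read the file while user1 cannot open it directly (paths and names are placeholders, and setgid on the binary is ignored if the program is an interpreted script):

    groupadd appconf
    chown root:appconf /opt/myapp/settings.conf
    chmod 640          /opt/myapp/settings.conf    # group may read; user1, not in appconf, may not

    chown root:appconf /opt/myapp/myapp
    chmod 2755         /opt/myapp/myapp            # setgid: the process runs with group appconf

For the boot-time variant in the edit, the usual way to let user1 talk to a root-started terminal application is a pair of named pipes (mkfifo) or a small wrapper exposing a socket, rather than sharing the process's own stdin/stdout.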
|
66,640,694
|
httpd won't upgrade on RHEL 7: "Package(s) httpd available, but not installed."
|
When I attempt to upgrade Apache... cd /etc/yum.repos.d && wget [URL] yum install -y epel-release yum upgrade httpd ...the output says "Package(s) httpd available, but not installed." Actually, the above commands worked fine on my staging server, and I got the desired upgrade. But then when I tried the same steps on my production server, I see "Package(s) httpd available, but not installed." Variations of this issue appear elsewhere on stackoverflow and other forums, but it appears the proper solution changes frequently, and it is difficult to rely on past answers that in many cases appear to reference defunct mirrors.
|
httpd won't upgrade on RHEL 7: "Package(s) httpd available, but not installed." When I attempt to upgrade Apache... cd /etc/yum.repos.d && wget [URL] yum install -y epel-release yum upgrade httpd ...the output says "Package(s) httpd available, but not installed." Actually, the above commands worked fine on my staging server, and I got the desired upgrade. But then when I tried the same steps on my production server, I see "Package(s) httpd available, but not installed." Variations of this issue appear elsewhere on stackoverflow and other forums, but it appears the proper solution changes frequently, and it is difficult to rely on past answers that in many cases appear to reference defunct mirrors.
|
apache, yum, rhel, rhel7
| 0
| 1,324
| 1
|
https://stackoverflow.com/questions/66640694/httpd-wont-upgrade-on-rhel-7-packages-httpd-available-but-not-installed
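"Package(s) httpd available, but not installed" is yum saying there is no installed httpd RPM to upgrade on that box, which usually means production's Apache came from a different package name (for example a Software Collection) or was built from source. A quick inventory sketch:

    rpm -q httpd                         # is the stock RPM installed at all?
    rpm -qa | grep -i httpd              # any other Apache packages (httpd24*, custom builds)?
    which httpd apachectl; httpd -v      # where the running binary lives and its version
    yum list installed | grep -i httpd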
|
65,578,929
|
C: client program reports SSL Certificate verify failed error with localCA
|
I am writing a simple TLS client/server program to securely communicate over the network. Initially I am building and running both the client and server on the same machine running RHEL 8.2. First, I am using custom self signned ssl certificate and key for my programs. I have placed the rootCA.crt (my custom CA certificate in /root/CA/rootCA.crt). Also copied the rootCA.pem to /etc/pki/ca-trust/source/anchors/ and executed update-ca-trust enable then update-ca-trust extract to install the certificate to the system. (Not sure if I need to reboot the system for it to take effect.) Initially, the client and server were able to communicate usint TLS untill I added the certificate validation part of the code on the client side. Certificate Verification snippet: ctx = SSL_CTX_new(method); /* Create new context */ if ( ctx == NULL ) { ERR_print_errors_fp(stderr); abort(); } SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL); SSL_CTX_set_verify_depth(ctx, 4); const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 | SSL_OP_NO_TLSv1_1 | SSL_OP_NO_COMPRESSION; SSL_CTX_set_options(ctx, flags); if(SSL_CTX_load_verify_locations(ctx, NULL, "/root/CA/") == 0){ ERR_print_errors_fp(stderr); abort(); } ssl = SSL_new(ctx); /* create new SSL connection state */ SSL_set_fd(ssl, server); /* attach the socket descriptor */ if ( SSL_connect(ssl) == FAIL ) /* perform the connection */ ERR_print_errors_fp(stderr); else { sprintf(acClientRequest, "%s", cpRequestMessage); /* construct reply */ printf("\n\nConnected with %s encryption\n", SSL_get_ciphe } when I run the server and client programs I see the following error messafe => Onclient: 140736372886336:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1915: On Server: 140736022137664:error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:ssl/record/rec_layer_s3.c:1543:SSL alert number 48 Not sure, what is going wrong with the certificate validation process. Can anyone suggest me how to fix this error?
|
C: client program reports SSL Certificate verify failed error with localCA I am writing a simple TLS client/server program to securely communicate over the network. Initially I am building and running both the client and server on the same machine running RHEL 8.2. First, I am using custom self signned ssl certificate and key for my programs. I have placed the rootCA.crt (my custom CA certificate in /root/CA/rootCA.crt). Also copied the rootCA.pem to /etc/pki/ca-trust/source/anchors/ and executed update-ca-trust enable then update-ca-trust extract to install the certificate to the system. (Not sure if I need to reboot the system for it to take effect.) Initially, the client and server were able to communicate usint TLS untill I added the certificate validation part of the code on the client side. Certificate Verification snippet: ctx = SSL_CTX_new(method); /* Create new context */ if ( ctx == NULL ) { ERR_print_errors_fp(stderr); abort(); } SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL); SSL_CTX_set_verify_depth(ctx, 4); const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 | SSL_OP_NO_TLSv1_1 | SSL_OP_NO_COMPRESSION; SSL_CTX_set_options(ctx, flags); if(SSL_CTX_load_verify_locations(ctx, NULL, "/root/CA/") == 0){ ERR_print_errors_fp(stderr); abort(); } ssl = SSL_new(ctx); /* create new SSL connection state */ SSL_set_fd(ssl, server); /* attach the socket descriptor */ if ( SSL_connect(ssl) == FAIL ) /* perform the connection */ ERR_print_errors_fp(stderr); else { sprintf(acClientRequest, "%s", cpRequestMessage); /* construct reply */ printf("\n\nConnected with %s encryption\n", SSL_get_ciphe } when I run the server and client programs I see the following error messafe => Onclient: 140736372886336:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1915: On Server: 140736022137664:error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:ssl/record/rec_layer_s3.c:1543:SSL alert number 48 Not sure, what is going wrong with the certificate validation process. Can anyone suggest me how to fix this error?
|
c, linux, ssl, openssl, rhel
| 0
| 1,556
| 1
|
https://stackoverflow.com/questions/65578929/c-client-program-reports-ssl-certificate-verify-failed-error-with-localca
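Two things are worth checking outside the program: whether the server certificate really chains to rootCA.crt, and, because SSL_CTX_load_verify_locations is given a directory (CApath) rather than a file, whether that directory holds the hash-named links OpenSSL needs to find the CA. A sketch (the server certificate path and port are placeholders):

    # Does the server certificate verify against the custom CA at all?
    openssl verify -CAfile /root/CA/rootCA.crt /path/to/server.crt

    # CApath lookups need hash-named symlinks inside the directory
    openssl rehash /root/CA/          # c_rehash /root/CA/ on older OpenSSL builds
    ls -l /root/CA/                   # expect links like 9d66eef0.0 -> rootCA.crt

    # End-to-end check against the running server
    openssl s_client -connect 127.0.0.1:4433 -CAfile /root/CA/rootCA.crt -brief

Alternatively, passing the CA file itself as the second argument of SSL_CTX_load_verify_locations (with NULL for the directory) sidesteps the hashed-directory requirement entirely.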
|
65,547,308
|
Jmeter Script Execution hangs - The JVM should have exited but did not
|
We are running a jmeter performance tests script. It executes and produces results, but after that it just hangs (wait infinitely) showing The JVM should have exited but did not. Full execution logs - jmeter -n -t myScript01.jmx -l myScript01-results.jtl Creating summariser <summary> Created the tree successfully using myScript.jmx Starting standalone test @ Sun Jan 03 05:07:06 UTC 2021 (1609650426432) Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4447 summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%) Tidying up ... @ Sun Jan 03 05:19:01 UTC 2021 (1609651141694) ... end of run The JVM should have exited but did not. The following non-daemon threads are still running (DestroyJavaVM is OK): Thread[AWT-EventQueue-0,6,main], stackTrace:sun.misc.Unsafe#park java.util.concurrent.locks.LockSupport#park at line:175 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#await at line:2039 java.awt.EventQueue#getNextEvent at line:554 java.awt.EventDispatchThread#pumpOneEventForFilters at line:187 java.awt.EventDispatchThread#pumpEventsForFilter at line:116 java.awt.EventDispatchThread#pumpEventsForHierarchy at line:105 java.awt.EventDispatchThread#pumpEvents at line:101 java.awt.EventDispatchThread#pumpEvents at line:93 java.awt.EventDispatchThread#run at line:82 Thread[DestroyJavaVM,5,main], stackTrace: Thread[AWT-Shutdown,5,system], stackTrace:java.lang.Object#wait sun.awt.AWTAutoShutdown#run at line:314 java.lang.Thread#run at line:748 OS - $ cat /etc/os-release NAME="Red Hat Enterprise Linux Server" VERSION="7.8 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.8" PRETTY_NAME="Red Hat Enterprise Linux Server 7.8 (Maipo)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.8:GA:server" HOME_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.8 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.8" java version - $ java -version openjdk version "1.8.0_272" OpenJDK Runtime Environment (build 1.8.0_272-b10) OpenJDK 64-Bit Server VM (build 25.272-b10, mixed mode) jmeter version - 5.3 $ jmeter -v _ ____ _ ____ _ _ _____ _ __ __ _____ _____ _____ ____ / \ | _ \ / \ / ___| | | | ____| | | \/ | ____|_ _| ____| _ \ / _ \ | |_) / _ \| | | |_| | _| _ | | |\/| | _| | | | _| | |_) | / ___ \| __/ ___ \ |___| _ | |___ | |_| | | | | |___ | | | |___| _ < /_/ \_\_| /_/ \_\____|_| |_|_____| \___/|_| |_|_____| |_| |_____|_| \_\ 5.3 Copyright (c) 1999-2020 The Apache Software Foundation Any guidance is appreciated!
|
Jmeter Script Execution hangs - The JVM should have exited but did not We are running a jmeter performance tests script. It executes and produces results, but after that it just hangs (wait infinitely) showing The JVM should have exited but did not. Full execution logs - jmeter -n -t myScript01.jmx -l myScript01-results.jtl Creating summariser <summary> Created the tree successfully using myScript.jmx Starting standalone test @ Sun Jan 03 05:07:06 UTC 2021 (1609650426432) Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4447 summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%) Tidying up ... @ Sun Jan 03 05:19:01 UTC 2021 (1609651141694) ... end of run The JVM should have exited but did not. The following non-daemon threads are still running (DestroyJavaVM is OK): Thread[AWT-EventQueue-0,6,main], stackTrace:sun.misc.Unsafe#park java.util.concurrent.locks.LockSupport#park at line:175 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#await at line:2039 java.awt.EventQueue#getNextEvent at line:554 java.awt.EventDispatchThread#pumpOneEventForFilters at line:187 java.awt.EventDispatchThread#pumpEventsForFilter at line:116 java.awt.EventDispatchThread#pumpEventsForHierarchy at line:105 java.awt.EventDispatchThread#pumpEvents at line:101 java.awt.EventDispatchThread#pumpEvents at line:93 java.awt.EventDispatchThread#run at line:82 Thread[DestroyJavaVM,5,main], stackTrace: Thread[AWT-Shutdown,5,system], stackTrace:java.lang.Object#wait sun.awt.AWTAutoShutdown#run at line:314 java.lang.Thread#run at line:748 OS - $ cat /etc/os-release NAME="Red Hat Enterprise Linux Server" VERSION="7.8 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.8" PRETTY_NAME="Red Hat Enterprise Linux Server 7.8 (Maipo)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.8:GA:server" HOME_URL="[URL] BUG_REPORT_URL="[URL] REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.8 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.8" java version - $ java -version openjdk version "1.8.0_272" OpenJDK Runtime Environment (build 1.8.0_272-b10) OpenJDK 64-Bit Server VM (build 25.272-b10, mixed mode) jmeter version - 5.3 $ jmeter -v _ ____ _ ____ _ _ _____ _ __ __ _____ _____ _____ ____ / \ | _ \ / \ / ___| | | | ____| | | \/ | ____|_ _| ____| _ \ / _ \ | |_) / _ \| | | |_| | _| _ | | |\/| | _| | | | _| | |_) | / ___ \| __/ ___ \ |___| _ | |___ | |_| | | | | |___ | | | |___| _ < /_/ \_\_| /_/ \_\____|_| |_|_____| \___/|_| |_|_____| |_| |_____|_| \_\ 5.3 Copyright (c) 1999-2020 The Apache Software Foundation Any guidance is appreciated!
|
java, jmeter, performance-testing, rhel
| 0
| 514
| 1
|
https://stackoverflow.com/questions/65547308/jmeter-script-execution-hangs-the-jvm-should-have-exited-but-did-not
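The leftover AWT-EventQueue threads suggest some element of the plan, typically a listener or plugin, is instantiating GUI classes even in non-GUI mode and keeping the JVM alive after "end of run". A hedged first check is to look for visualizer-backed listeners in the plan and disable them for non-GUI runs; the summariser also reports 0 samples in 00:00:00, so it is worth confirming the thread group actually executed at all:

    grep -n 'guiclass="[^"]*Visualizer[^"]*"' myScript01.jmx   # listeners backed by GUI visualizer classes
    grep -n 'testclass="ResultCollector"' myScript01.jmx       # result collectors attached to those listeners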
|
64,880,499
|
yum + how to enable repo in yum when enabled=0
|
We have the following repo on our RHEL 7.2 server; as you can see, we set enabled=0 as the default. more infra-update.repo [infra-76-update] name=infra 76 update baseurl=[URL] gpgcheck=0 enabled=0 But when we want to use this repo - infra-7.6 - we write the following yum syntax: yum --disablerepo=* --enablerepo=infra-76 update -y but we get Error getting repository data for infra-76, repository not found However, when we set enabled=1, we can run the yum update successfully with ( yum --disablerepo=* --enablerepo=infra-76 update -y ). So the question is: is it possible to enable the repo with yum --disablerepo=* --enablerepo=infra-76 while enabled=0? NOTE - the goal is to install the RPMs from repo infra-76 even though the parameter in the repo config is enabled=0.
|
yum + how to enable repo in yum when enabled=0 We have the following repo on our RHEL 7.2 server; as you can see, we set enabled=0 as the default. more infra-update.repo [infra-76-update] name=infra 76 update baseurl=[URL] gpgcheck=0 enabled=0 But when we want to use this repo - infra-7.6 - we write the following yum syntax: yum --disablerepo=* --enablerepo=infra-76 update -y but we get Error getting repository data for infra-76, repository not found However, when we set enabled=1, we can run the yum update successfully with ( yum --disablerepo=* --enablerepo=infra-76 update -y ). So the question is: is it possible to enable the repo with yum --disablerepo=* --enablerepo=infra-76 while enabled=0? NOTE - the goal is to install the RPMs from repo infra-76 even though the parameter in the repo config is enabled=0.
|
linux, rpm, yum, rhel
| 0
| 3,427
| 1
|
https://stackoverflow.com/questions/64880499/yum-how-to-enable-repo-in-yum-when-enabled-0
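--enablerepo needs the repository id from the square brackets, which in the file shown is infra-76-update rather than infra-76; a repo with enabled=0 can still be switched on per command as long as that id matches. A sketch:

    yum repolist all | grep -i infra          # shows the exact repo id and its enabled/disabled state
    yum --disablerepo='*' --enablerepo=infra-76-update update -y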
|
64,454,140
|
Software to keymap between mac and rhel - swap the control and command key
|
Is there any good software out there that helps with keyboard mapping for Mac keyboards so they can be used easily on RHEL, specifically swapping the Control and Command keys to make basic copy/paste operations simple?
|
Software to keymap between mac and rhel - swap the control and command key Is there any good software out there that helps with keyboard mapping for Mac keyboards so they can be used easily on RHEL, specifically swapping the Control and Command keys to make basic copy/paste operations simple?
|
macos, keyboard, rhel, rhel7
| 0
| 350
| 1
|
https://stackoverflow.com/questions/64454140/software-to-keymap-between-mac-and-rhel-swap-the-control-and-command-key
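On RHEL with X11 this can be done without extra software by remapping the keys through XKB options; a per-session sketch (the option name comes from the standard xkeyboard-config set, so availability can vary slightly between releases):

    setxkbmap -option ""                    # clear any previously set options
    setxkbmap -option altwin:ctrl_win       # treat the Command/Super keys as additional Ctrl keys
    setxkbmap -query                        # confirm the option took effect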
|
64,416,720
|
Cannot set thread priority to real time despite using cap_sys_nice
|
I have an application that checks on POSIX environment whether thread priorities can be set to real time by calling struct sched_param param; param.sched_priority = 1; int canSetRealTimeThreadPriority = (pthread_setschedparam(pthread_self(), SCHED_FIFO, ¶m) == 0); On one system system A this works, but on another system B the check fails and I would like to find out why. On both systems: the application is started as a systemd service via a service startup script. calling getcap on the binary returns among others cap_sys_nice+eip . the service script defines that the application is run by a non root user via User=[non root user] the service scripts sets LimitRTPRIO=20 calling sysctl -n kernel.sched_rt_runtime_us returns 950000 , which should be the default calling sysctl -n kernel.sched_rt_period_us returns 1000000 , which should be the default systemctl show [serviceName] returns among others LimitRTPRIO=20 calling the limits on the running process of the application ( prlimit --pid [application_pid] ) will show among others: RESOURCE DESCRIPTION SOFT HARD UNITS NICE max nice prio allowed to raise 0 0 RTPRIO max real-time priority 20 20 RTTIME timeout for real-time tasks unlimited unlimited microsecs On system B where it doesn't allow real time thread priorities: etc/security/limits.conf contains line [non root user] - rtprio 20 kernel version is 3.10.0-862.el7.x86_64 and OS version Red Hat Enterprise Linux Server release 7.4 (Maipo) On system A where realtime thread priorities can be set: kernel version is 3.10.0-957.56.1.el7.x86_64 and OS version Red Hat Enterprise Linux Server release 7.6 (Maipo) When I test on system A and remove cap_sys_nice+eip from the binary via setcap '' [binary] I also cannot set real time thread priorities. I assume some setting on system B overrides the cap_sys_nice setting because it has a higher priority, so I wonder what that can be.
|
Cannot set thread priority to real time despite using cap_sys_nice I have an application that checks on POSIX environment whether thread priorities can be set to real time by calling struct sched_param param; param.sched_priority = 1; int canSetRealTimeThreadPriority = (pthread_setschedparam(pthread_self(), SCHED_FIFO, ¶m) == 0); On one system system A this works, but on another system B the check fails and I would like to find out why. On both systems: the application is started as a systemd service via a service startup script. calling getcap on the binary returns among others cap_sys_nice+eip . the service script defines that the application is run by a non root user via User=[non root user] the service scripts sets LimitRTPRIO=20 calling sysctl -n kernel.sched_rt_runtime_us returns 950000 , which should be the default calling sysctl -n kernel.sched_rt_period_us returns 1000000 , which should be the default systemctl show [serviceName] returns among others LimitRTPRIO=20 calling the limits on the running process of the application ( prlimit --pid [application_pid] ) will show among others: RESOURCE DESCRIPTION SOFT HARD UNITS NICE max nice prio allowed to raise 0 0 RTPRIO max real-time priority 20 20 RTTIME timeout for real-time tasks unlimited unlimited microsecs On system B where it doesn't allow real time thread priorities: etc/security/limits.conf contains line [non root user] - rtprio 20 kernel version is 3.10.0-862.el7.x86_64 and OS version Red Hat Enterprise Linux Server release 7.4 (Maipo) On system A where realtime thread priorities can be set: kernel version is 3.10.0-957.56.1.el7.x86_64 and OS version Red Hat Enterprise Linux Server release 7.6 (Maipo) When I test on system A and remove cap_sys_nice+eip from the binary via setcap '' [binary] I also cannot set real time thread priorities. I assume some setting on system B overrides the cap_sys_nice setting because it has a higher priority, so I wonder what that can be.
|
c++, pthreads, rhel, thread-priority
| 0
| 3,216
| 1
|
https://stackoverflow.com/questions/64416720/cannot-set-thread-priority-to-real-time-despite-using-cap-sys-nice
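Since getcap looks identical on both systems, the usual suspects are things that make the kernel ignore or strip file capabilities before exec: a nosuid mount on the filesystem holding the binary, or NoNewPrivileges/capability filtering in the systemd unit. A checking sketch (the service name and binary path are placeholders):

    findmnt -T /opt/myapp/mybinary -o TARGET,SOURCE,OPTIONS    # 'nosuid' here makes file capabilities ineffective
    systemctl show myservice.service -p NoNewPrivileges -p CapabilityBoundingSet -p SecureBits
    grep -E 'Cap(Prm|Eff|Bnd)' /proc/$(pidof mybinary)/status  # what the running process actually holds
    ulimit -r                                                  # RLIMIT_RTPRIO as seen from a shell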
|
64,314,138
|
Postgres, RHEL and Docker
|
I've been battling with this issue for the last 4 days of my life and it's driving me crazy. I'm trying to deploy a service that uses a postgres DB in RHEL 8. In order to do so, I'm deploying them both using docker-compose. The problem is that from the service container I can ping the postgres container, but the service is not able to connect the DB... I simplified the use case, and used a docker compose that uses the adminer docker image to connect any of the listed DB managers including postgres. It works great on my machine and on the test server that uses ubuntu 20, but when I try it in RHEL 8, I can't get access to the DB either! This are the docker and docker-compose versions: Docker: Client: Docker Engine - Community Version: 19.03.13 API version: 1.40 Go version: go1.13.15 Git commit: 4484c46d9d Built: Wed Sep 16 17:02:36 2020 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.13 API version: 1.40 (minimum version 1.12) Go version: go1.13.15 Git commit: 4484c46d9d Built: Wed Sep 16 17:01:11 2020 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.3.7 GitCommit: 8fba4e9a7d01810a393d5d25a3621dc101981175 runc: Version: 1.0.0-rc10 GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd docker-init: Version: 0.18.0 GitCommit: fec3683 Docker-compose docker-compose version 1.25.5, build 8a1c60f6 docker-py version: 4.1.0 CPython version: 3.7.5 OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019 The OS Red Hat Enterprise Linux release 8.1 (Ootpa) The actual docker-compose that I'm using is this: version: '3.1' services: db: image: postgres restart: always environment: POSTGRES_PASSWORD: example adminer: image: adminer restart: always ports: - 8080:8080 Logs for the postgres container seem to be fine when compared to the logs that I get in the other two systems: db_1 | The files belonging to this database system will be owned by user "postgres". db_1 | This user must also own the server process. db_1 | db_1 | The database cluster will be initialized with locale "en_US.utf8". db_1 | The default database encoding has accordingly been set to "UTF8". db_1 | The default text search configuration will be set to "english". db_1 | db_1 | Data page checksums are disabled. db_1 | db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok db_1 | creating subdirectories ... ok db_1 | selecting dynamic shared memory implementation ... posix db_1 | selecting default max_connections ... 100 db_1 | selecting default shared_buffers ... 128MB db_1 | selecting default time zone ... Etc/UTC db_1 | creating configuration files ... ok db_1 | running bootstrap script ... ok db_1 | performing post-bootstrap initialization ... ok db_1 | syncing data to disk ... ok db_1 | db_1 | db_1 | Success. You can now start the database server using: db_1 | db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start db_1 | db_1 | initdb: warning: enabling "trust" authentication for local connections db_1 | You can change this by editing pg_hba.conf or using the option -A, or db_1 | --auth-local and --auth-host, the next time you run initdb. 
db_1 | waiting for server to start....2020-10-12 08:18:28.489 UTC [46] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit db_1 | 2020-10-12 08:18:28.499 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" db_1 | 2020-10-12 08:18:28.545 UTC [47] LOG: database system was shut down at 2020-10-12 08:18:25 UTC db_1 | 2020-10-12 08:18:28.557 UTC [46] LOG: database system is ready to accept connections db_1 | done db_1 | server started db_1 | db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/* db_1 | db_1 | waiting for server to shut down...2020-10-12 08:18:28.572 UTC [46] LOG: received fast shutdown request db_1 | .2020-10-12 08:18:28.581 UTC [46] LOG: aborting any active transactions db_1 | 2020-10-12 08:18:28.582 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 db_1 | 2020-10-12 08:18:28.583 UTC [48] LOG: shutting down db_1 | 2020-10-12 08:18:28.648 UTC [46] LOG: database system is shut down db_1 | done db_1 | server stopped db_1 | db_1 | PostgreSQL init process complete; ready for start up. db_1 | db_1 | 2020-10-12 08:18:28.693 UTC [1] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit db_1 | 2020-10-12 08:18:28.694 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 db_1 | 2020-10-12 08:18:28.694 UTC [1] LOG: listening on IPv6 address "::", port 5432 db_1 | 2020-10-12 08:18:28.712 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" db_1 | 2020-10-12 08:18:28.751 UTC [55] LOG: database system was shut down at 2020-10-12 08:18:28 UTC db_1 | 2020-10-12 08:18:28.764 UTC [1] LOG: database system is ready to accept connections Did anybody encountered this problem before? Do you have nay suggestions that I could try? EDIT: The adminer container logs are all the same as with the other machines. They just throw this error when trying to connect, instead an ok message: Is the server running on host "postgres" (10.10.10.2) and accepting TCP/IP connections on port 5432?
|
Postgres, RHEL and Docker I've been battling with this issue for the last 4 days of my life and it's driving me crazy. I'm trying to deploy a service that uses a postgres DB in RHEL 8. In order to do so, I'm deploying them both using docker-compose. The problem is that from the service container I can ping the postgres container, but the service is not able to connect the DB... I simplified the use case, and used a docker compose that uses the adminer docker image to connect any of the listed DB managers including postgres. It works great on my machine and on the test server that uses ubuntu 20, but when I try it in RHEL 8, I can't get access to the DB either! This are the docker and docker-compose versions: Docker: Client: Docker Engine - Community Version: 19.03.13 API version: 1.40 Go version: go1.13.15 Git commit: 4484c46d9d Built: Wed Sep 16 17:02:36 2020 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.13 API version: 1.40 (minimum version 1.12) Go version: go1.13.15 Git commit: 4484c46d9d Built: Wed Sep 16 17:01:11 2020 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.3.7 GitCommit: 8fba4e9a7d01810a393d5d25a3621dc101981175 runc: Version: 1.0.0-rc10 GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd docker-init: Version: 0.18.0 GitCommit: fec3683 Docker-compose docker-compose version 1.25.5, build 8a1c60f6 docker-py version: 4.1.0 CPython version: 3.7.5 OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019 The OS Red Hat Enterprise Linux release 8.1 (Ootpa) The actual docker-compose that I'm using is this: version: '3.1' services: db: image: postgres restart: always environment: POSTGRES_PASSWORD: example adminer: image: adminer restart: always ports: - 8080:8080 Logs for the postgres container seem to be fine when compared to the logs that I get in the other two systems: db_1 | The files belonging to this database system will be owned by user "postgres". db_1 | This user must also own the server process. db_1 | db_1 | The database cluster will be initialized with locale "en_US.utf8". db_1 | The default database encoding has accordingly been set to "UTF8". db_1 | The default text search configuration will be set to "english". db_1 | db_1 | Data page checksums are disabled. db_1 | db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok db_1 | creating subdirectories ... ok db_1 | selecting dynamic shared memory implementation ... posix db_1 | selecting default max_connections ... 100 db_1 | selecting default shared_buffers ... 128MB db_1 | selecting default time zone ... Etc/UTC db_1 | creating configuration files ... ok db_1 | running bootstrap script ... ok db_1 | performing post-bootstrap initialization ... ok db_1 | syncing data to disk ... ok db_1 | db_1 | db_1 | Success. You can now start the database server using: db_1 | db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start db_1 | db_1 | initdb: warning: enabling "trust" authentication for local connections db_1 | You can change this by editing pg_hba.conf or using the option -A, or db_1 | --auth-local and --auth-host, the next time you run initdb. 
db_1 | waiting for server to start....2020-10-12 08:18:28.489 UTC [46] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit db_1 | 2020-10-12 08:18:28.499 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" db_1 | 2020-10-12 08:18:28.545 UTC [47] LOG: database system was shut down at 2020-10-12 08:18:25 UTC db_1 | 2020-10-12 08:18:28.557 UTC [46] LOG: database system is ready to accept connections db_1 | done db_1 | server started db_1 | db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/* db_1 | db_1 | waiting for server to shut down...2020-10-12 08:18:28.572 UTC [46] LOG: received fast shutdown request db_1 | .2020-10-12 08:18:28.581 UTC [46] LOG: aborting any active transactions db_1 | 2020-10-12 08:18:28.582 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1 db_1 | 2020-10-12 08:18:28.583 UTC [48] LOG: shutting down db_1 | 2020-10-12 08:18:28.648 UTC [46] LOG: database system is shut down db_1 | done db_1 | server stopped db_1 | db_1 | PostgreSQL init process complete; ready for start up. db_1 | db_1 | 2020-10-12 08:18:28.693 UTC [1] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit db_1 | 2020-10-12 08:18:28.694 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 db_1 | 2020-10-12 08:18:28.694 UTC [1] LOG: listening on IPv6 address "::", port 5432 db_1 | 2020-10-12 08:18:28.712 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" db_1 | 2020-10-12 08:18:28.751 UTC [55] LOG: database system was shut down at 2020-10-12 08:18:28 UTC db_1 | 2020-10-12 08:18:28.764 UTC [1] LOG: database system is ready to accept connections Did anybody encountered this problem before? Do you have nay suggestions that I could try? EDIT: The adminer container logs are all the same as with the other machines. They just throw this error when trying to connect, instead an ok message: Is the server running on host "postgres" (10.10.10.2) and accepting TCP/IP connections on port 5432?
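A hedged suggestion rather than a confirmed fix: on RHEL 8 the firewalld/nftables backend is a frequent culprit when container-to-container traffic pings fine but fails on TCP, so it may be worth checking whether the bridge interface used by the compose network sits in a zone that drops forwarded traffic. A rough sketch of commands to inspect and, as a test, trust the bridge (the interface name docker0 is an assumption — compose networks usually create br-... interfaces):
    firewall-cmd --get-active-zones
    ip link show type bridge                                   # find the bridge the compose network uses
    firewall-cmd --permanent --zone=trusted --add-interface=docker0
    firewall-cmd --reload
    docker-compose exec adminer getent hosts db                # if getent exists in the image: check what the service name resolves to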
|
postgresql, docker, docker-compose, rhel
| 0
| 1,834
| 2
|
https://stackoverflow.com/questions/64314138/postgres-rhel-and-docker
|
63,933,051
|
IOError: [Errno 32] Broken pipe
|
I usually observe this while running my automation scripts using the paramiko SSH module in Python. While executing some commands, it fails with the following error. I have observed this error in Tcl scripts as well, so it is not specific to a language: IOError: [Errno 32] Broken pipe We have also observed it while writing the stream of output to a file, as below: file_handle.write(line_data) We can handle the exception and add a retry block (Ref: IOError: [Errno 32] Broken pipe: Python ), but I am curious to know why it is happening in the first place, so that I can take the necessary precautions before running my job. My findings pointed to "network drop" or "recipient system not responding", but I am not really convinced by those points. Please let me know the root cause.

|
IOError: [Errno 32] Broken pipe I usually observe this while running my automation scripts using the paramiko SSH module in Python. While executing some commands, it fails with the following error. I have observed this error in Tcl scripts as well, so it is not specific to a language: IOError: [Errno 32] Broken pipe We have also observed it while writing the stream of output to a file, as below: file_handle.write(line_data) We can handle the exception and add a retry block (Ref: IOError: [Errno 32] Broken pipe: Python ), but I am curious to know why it is happening in the first place, so that I can take the necessary precautions before running my job. My findings pointed to "network drop" or "recipient system not responding", but I am not really convinced by those points. Please let me know the root cause.
|
python, linux, unix, pipe, rhel
| 0
| 5,274
| 1
|
https://stackoverflow.com/questions/63933051/ioerror-errno-32-broken-pipe
|
63,327,628
|
Running two podman/docker containers of PostgreSQL on a single host
|
I have two applications, each of which use several databases. Before the days of Docker, I would have just put all the databases on one host (due to resource consumption associated with running multiple physical hosts/VMs). Logically, it seems to me that separating these into groups (1 group of DBs per application) is the right thing to do and with containers the overhead is low and this seems possible. However, I have not seen this use case. I've seen multiple instances of containerized Postgres running so as to maintain multiple versions (hence different images). Is there a good technical reason why people do not do this (two or more containers of PostgreSQL instances using the same image for purposes of isolating groups of DBs)? When I tried to do this, I ran into errors having to do with the second instance trying to configure the postgres user. I had to pass in an option to ignore migration errors. I'm wondering if there is a good reason not to do this.
|
Running two podman/docker containers of PostgreSQL on a single host I have two applications, each of which use several databases. Before the days of Docker, I would have just put all the databases on one host (due to resource consumption associated with running multiple physical hosts/VMs). Logically, it seems to me that separating these into groups (1 group of DBs per application) is the right thing to do and with containers the overhead is low and this seems possible. However, I have not seen this use case. I've seen multiple instances of containerized Postgres running so as to maintain multiple versions (hence different images). Is there a good technical reason why people do not do this (two or more containers of PostgreSQL instances using the same image for purposes of isolating groups of DBs)? When I tried to do this, I ran into errors having to do with the second instance trying to configure the postgres user. I had to pass in an option to ignore migration errors. I'm wondering if there is a good reason not to do this.
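For what it's worth, running two independent PostgreSQL containers from the same image is routine as long as each gets its own data volume and host port; init-time conflicts like the one described usually come from two instances pointing at the same data directory. A rough podman sketch (names, passwords and ports are made up for illustration):
    podman run -d --name pg-app1 -e POSTGRES_PASSWORD=secret1 \
        -v pg_app1_data:/var/lib/postgresql/data -p 5433:5432 postgres:12
    podman run -d --name pg-app2 -e POSTGRES_PASSWORD=secret2 \
        -v pg_app2_data:/var/lib/postgresql/data -p 5434:5432 postgres:12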
|
postgresql, docker, containers, rhel, podman
| 0
| 578
| 1
|
https://stackoverflow.com/questions/63327628/running-two-podman-docker-containers-of-postgresql-on-a-single-host
|
63,159,493
|
How to install minishift in a customized directory in linux
|
While trying to start minishift, it automatically updates the cache in the home directory: /home/abc/.minishift/cache/..... However, I want minishift to use a custom directory instead of the default home directory, as I am running out of space. Can this be achieved by changing any parameters during ./minishift start ? I tried CodeReady Containers too, but it also copies into the default home directory: FATA Failed to copy embedded 'crc_libvirt_4.5.1.crcbundle' from /opt/data/crc-linux-1.13.0-amd64/crc to /home/abc/.crc/cache/crc_libvirt_4.5.1.crcbundle: write /home/abc/.crc/cache/crc_libvirt_4.5.1.crcbundle: no space left on device
|
How to install minishift in a customized directory in linux While trying to start minishift, it automatically updates the cache in the home directory: /home/abc/.minishift/cache/..... However, I want minishift to use a custom directory instead of the default home directory, as I am running out of space. Can this be achieved by changing any parameters during ./minishift start ? I tried CodeReady Containers too, but it also copies into the default home directory: FATA Failed to copy embedded 'crc_libvirt_4.5.1.crcbundle' from /opt/data/crc-linux-1.13.0-amd64/crc to /home/abc/.crc/cache/crc_libvirt_4.5.1.crcbundle: write /home/abc/.crc/cache/crc_libvirt_4.5.1.crcbundle: no space left on device
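A possible workaround, assuming the goal is just to move the state and cache off the home filesystem: minishift honours the MINISHIFT_HOME environment variable for its state directory, and for CodeReady Containers a commonly used workaround is to relocate ~/.crc and leave a symlink behind. Sketch (paths are placeholders):
    export MINISHIFT_HOME=/opt/data/.minishift
    ./minishift start
    # for crc: move the state dir to the larger disk and symlink it back
    mv ~/.crc /opt/data/.crc && ln -s /opt/data/.crc ~/.crc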
|
linux, openshift, rhel, minishift
| 0
| 287
| 2
|
https://stackoverflow.com/questions/63159493/how-to-install-minishift-in-a-customized-directory-in-linux
|
63,155,479
|
How to fix "No package docker-1.13.1 available" when trying to install Docker on RHEL Server 7.8?
|
I'm trying to install Docker on Red Hat Enterprise Linux Server release 7.8 (Maipo). I'm following the OKD host preparation instructions . When I run yum install docker-1.13.1 , I get: Loaded plugins: amazon-id, search-disabled-repos rhel-7-server-rhui-rh-common-rpms | 2.1 kB 00:00:00 rhel-7-server-rhui-rpms | 2.0 kB 00:00:00 rhui-client-config-server-7 | 2.1 kB 00:00:00 (1/6): rhel-7-server-rhui-rh-common-rpms/7Server/x86_64/updateinfo | 35 kB 00:00:00 (2/6): rhel-7-server-rhui-rh-common-rpms/7Server/x86_64/group | 124 B 00:00:00 (3/6): rhel-7-server-rhui-rh-common-rpms/7Server/x86_64/primary | 66 kB 00:00:00 (4/6): rhel-7-server-rhui-rpms/7Server/x86_64/group | 772 kB 00:00:00 (5/6): rhel-7-server-rhui-rpms/7Server/x86_64/updateinfo | 3.8 MB 00:00:00 (6/6): rhel-7-server-rhui-rpms/7Server/x86_64/primary | 45 MB 00:00:00 rhel-7-server-rhui-rh-common-rpms 243/243 rhel-7-server-rhui-rpms 29237/29237 No package docker-1.13.1 available. Error: Nothing to do How can I fix this?
|
How to fix "No package docker-1.13.1 available" when trying to install Docker on RHEL Server 7.8? I'm trying to install Docker on Red Hat Enterprise Linux Server release 7.8 (Maipo). I'm following the OKD host preparation instructions . When I run yum install docker-1.13.1 , I get: Loaded plugins: amazon-id, search-disabled-repos rhel-7-server-rhui-rh-common-rpms | 2.1 kB 00:00:00 rhel-7-server-rhui-rpms | 2.0 kB 00:00:00 rhui-client-config-server-7 | 2.1 kB 00:00:00 (1/6): rhel-7-server-rhui-rh-common-rpms/7Server/x86_64/updateinfo | 35 kB 00:00:00 (2/6): rhel-7-server-rhui-rh-common-rpms/7Server/x86_64/group | 124 B 00:00:00 (3/6): rhel-7-server-rhui-rh-common-rpms/7Server/x86_64/primary | 66 kB 00:00:00 (4/6): rhel-7-server-rhui-rpms/7Server/x86_64/group | 772 kB 00:00:00 (5/6): rhel-7-server-rhui-rpms/7Server/x86_64/updateinfo | 3.8 MB 00:00:00 (6/6): rhel-7-server-rhui-rpms/7Server/x86_64/primary | 45 MB 00:00:00 rhel-7-server-rhui-rh-common-rpms 243/243 rhel-7-server-rhui-rpms 29237/29237 No package docker-1.13.1 available. Error: Nothing to do How can I fix this?
|
docker, openshift, rhel, openshift-origin, epel
| 0
| 969
| 1
|
https://stackoverflow.com/questions/63155479/how-to-fix-no-package-docker-1-13-1-available-when-trying-to-install-docker-on
|
63,044,201
|
Is it possible to run a shell command inside the service?
|
Under /etc/systemd/system we have the service cc_check.service . In the service we start the script /home/cc_start_daemon.sh as follows: ExecStart=/home/cc_start_daemon.sh Is it possible to also add a shell command - bash /home/second_try.sh - inside the service, as in: [Service] Restart=on-failure StartLimitInterval=5min StartLimitBurst=4 LimitMEMLOCK=infinity LimitNOFILE=65535 Type=simple ExecStart=/home/cc_start_daemon.sh bash /home/second_try.sh [Install] WantedBy=multi-user.target The goal is to run another script after /home/cc_start_daemon.sh .
|
Is it possible to run a shell command inside the service? Under /etc/systemd/system we have the service cc_check.service . In the service we start the script /home/cc_start_daemon.sh as follows: ExecStart=/home/cc_start_daemon.sh Is it possible to also add a shell command - bash /home/second_try.sh - inside the service, as in: [Service] Restart=on-failure StartLimitInterval=5min StartLimitBurst=4 LimitMEMLOCK=infinity LimitNOFILE=65535 Type=simple ExecStart=/home/cc_start_daemon.sh bash /home/second_try.sh [Install] WantedBy=multi-user.target The goal is to run another script after /home/cc_start_daemon.sh .
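A sketch of two common ways to do this, assuming the second script should simply run after the first one has been launched. ExecStartPost runs once the ExecStart process has been started; wrapping both commands in one shell is the alternative when strict sequencing in a single process is wanted:
    [Service]
    Type=simple
    ExecStart=/home/cc_start_daemon.sh
    ExecStartPost=/bin/bash /home/second_try.sh
    # or, as a single ExecStart:
    ExecStart=/bin/bash -c '/home/cc_start_daemon.sh && /home/second_try.sh'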
|
bash, service, systemd, rhel, systemctl
| 0
| 307
| 1
|
https://stackoverflow.com/questions/63044201/is-it-possible-to-run-shell-command-inside-the-service
|
62,374,438
|
Unable to start Apache HTTPD after server reboot
|
We are unable to start Apache HTTPD after a server reboot. We have /etc/httpd/conf/ owned by a func. user/group, so we use scripts to start httpd with that configuration, and every time the server reboots we are unable to start HTTPD. We found that the ownership of /var/run/httpd gets changed to apache/root after a reboot, so the script fails to start HTTPD, which runs under the func. user/group. Please help!
|
Unable to start Apache HTTPD after server reboot We are unable to start Apache HTTPD after a server reboot. We have /etc/httpd/conf/ owned by a func. user/group, so we use scripts to start httpd with that configuration, and every time the server reboots we are unable to start HTTPD. We found that the ownership of /var/run/httpd gets changed to apache/root after a reboot, so the script fails to start HTTPD, which runs under the func. user/group. Please help!
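For context, /var/run is a symlink to /run, which is a tmpfs recreated on every boot, so any ownership set by hand disappears at reboot; the usual mechanism for recreating the directory with custom ownership is a tmpfiles.d drop-in. A sketch, with funcuser / funcgroup as placeholders for the functional account:
    # /etc/tmpfiles.d/httpd-func.conf (overrides the entry shipped in /usr/lib/tmpfiles.d/httpd.conf)
    d /run/httpd 0710 funcuser funcgroup -
    # apply immediately without rebooting
    systemd-tmpfiles --create /etc/tmpfiles.d/httpd-func.conf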
|
linux, apache, httpd.conf, rhel
| 0
| 3,738
| 1
|
https://stackoverflow.com/questions/62374438/unable-to-start-apache-httpd-after-server-reboot
|
62,190,715
|
python + No module named cryptography.fernet after upgrade cryptography pkg
|
We updated some Python packages and modules; one of them was the cryptography package. We upgraded cryptography from version 1.7.1 to 2.9.2, but when we open the Python shell we get ImportError: No module named cryptography.fernet even though cryptography is installed: pip list |grep cryptography cryptography (2.9.2) From the Python shell: python Python 2.7.5 (default, Sep 12 2018, 05:31:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from cryptography.fernet import Fernet Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named cryptography.fernet Note - on the previous version, cryptography (1.7.1), everything was OK, so what could be the problem with the module? Note that the same problem occurs from a Python script: import sys import os import base64 from cryptography.fernet import Fernet . . .
|
python + No module named cryptography.fernet after upgrade cryptography pkg We updated some Python packages and modules; one of them was the cryptography package. We upgraded cryptography from version 1.7.1 to 2.9.2, but when we open the Python shell we get ImportError: No module named cryptography.fernet even though cryptography is installed: pip list |grep cryptography cryptography (2.9.2) From the Python shell: python Python 2.7.5 (default, Sep 12 2018, 05:31:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from cryptography.fernet import Fernet Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named cryptography.fernet Note - on the previous version, cryptography (1.7.1), everything was OK, so what could be the problem with the module? Note that the same problem occurs from a Python script: import sys import os import base64 from cryptography.fernet import Fernet . . .
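A hedged first check, since this pattern often comes from the package landing under a different interpreter or from stale files left behind by the old version: confirm which installation the interpreter actually imports and which one pip manages, and if they differ, reinstall with that interpreter's own pip:
    python -c "import cryptography; print(cryptography.__version__, cryptography.__file__)"
    pip show cryptography                                 # compare the Location: line with the path printed above
    python -m pip install --force-reinstall cryptography==2.9.2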
|
python, python-3.x, python-2.7, pip, rhel
| 0
| 1,781
| 1
|
https://stackoverflow.com/questions/62190715/python-no-module-named-cryptography-fernet-after-upgrade-cryptography-pkg
|
61,780,171
|
Linux process/component sending frequent DNS queries to resolve the local hostname (but shouldn't)
|
I'm not a networking guru so could use some help. I am running a RHEL7 (Red Hat Enterprise Linux) VM (Virtual Machine) where some component on the OS is sending frequent DNS queries to resolve it's own local hostname to our main DNS server (which shouldn't be happening because the DNS server won't know anything about its address). Can anyone provide guidance as to how I can find out what component/service/process this is? It's filling our logs with 19k records over just hours and I need to find a way to fix this. The hostname for the RHEL VM is spe1.2v29999999.dev.local , there is a static IP on this VM and it is 10.70.49.61 . The /etc/hosts looks like: 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost4 localhost4.localdomain4 I suspected it might be a java jar we have running on the VM, but I stopped it via systemctl stop MyJavaJar but after running a tcp dump via tcpdump -i any udp port 53 , I could still see the queries happening. Here are some examples from different days/times in the logs (both A and TXT records): 2020-05-05T13:53:50.189178+00:00 dns.green.blue.mycompany.com 127.0.0.1 <daemon.info> dnsmasq[20886]: 739 10.70.49.61/65078 query[A] spe1.2v29999999.dev.local from 10.70.49.61 2020-05-07T00:01:39.934899+00:00 dns.green.blue.mycompany.com 127.0.0.1 <daemon.info> dnsmasq[8615]: 27827 10.70.49.61/57348 reply spe1.2v29999999.dev.local is NXDOMAIN 2020-05-11T00:01:20.674688+00:00 dns.green.blue.mycompany.com 127.0.0.1 <daemon.info> dnsmasq[8615]: 130345 10.70.49.61/53321 query[TXT] bootstrap.spe1.2v29999999.dev.local from 10.70.49.61 Would making any changes to /etc/hostname , /etc/sysconfig , /var/named .zone files, /var/named.conf or /etc/named help? Can I do more with tcpdump ? Thanks
|
Linux process/component sending frequent DNS queries to resolve the local hostname (but shouldn't) I'm not a networking guru so could use some help. I am running a RHEL7 (Red Hat Enterprise Linux) VM (Virtual Machine) where some component on the OS is sending frequent DNS queries to resolve it's own local hostname to our main DNS server (which shouldn't be happening because the DNS server won't know anything about its address). Can anyone provide guidance as to how I can find out what component/service/process this is? It's filling our logs with 19k records over just hours and I need to find a way to fix this. The hostname for the RHEL VM is spe1.2v29999999.dev.local , there is a static IP on this VM and it is 10.70.49.61 . The /etc/hosts looks like: 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost4 localhost4.localdomain4 I suspected it might be a java jar we have running on the VM, but I stopped it via systemctl stop MyJavaJar but after running a tcp dump via tcpdump -i any udp port 53 , I could still see the queries happening. Here are some examples from different days/times in the logs (both A and TXT records): 2020-05-05T13:53:50.189178+00:00 dns.green.blue.mycompany.com 127.0.0.1 <daemon.info> dnsmasq[20886]: 739 10.70.49.61/65078 query[A] spe1.2v29999999.dev.local from 10.70.49.61 2020-05-07T00:01:39.934899+00:00 dns.green.blue.mycompany.com 127.0.0.1 <daemon.info> dnsmasq[8615]: 27827 10.70.49.61/57348 reply spe1.2v29999999.dev.local is NXDOMAIN 2020-05-11T00:01:20.674688+00:00 dns.green.blue.mycompany.com 127.0.0.1 <daemon.info> dnsmasq[8615]: 130345 10.70.49.61/53321 query[TXT] bootstrap.spe1.2v29999999.dev.local from 10.70.49.61 Would making any changes to /etc/hostname , /etc/sysconfig , /var/named .zone files, /var/named.conf or /etc/named help? Can I do more with tcpdump ? Thanks
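Two hedged starting points rather than a definitive answer: mapping the hostname locally stops the lookups from ever reaching the DNS server (assuming a static /etc/hosts entry is acceptable in this environment), and capturing with process information can help pin down the sender:
    echo '10.70.49.61 spe1.2v29999999.dev.local spe1' >> /etc/hosts
    # in one terminal keep the capture running, and in another list UDP sockets talking to port 53;
    # the client socket is short-lived, so this may need to run in a tight loop to catch the PID
    tcpdump -i any -nn udp port 53
    ss -uapn | grep ':53'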
|
linux, dns, rhel
| 0
| 910
| 1
|
https://stackoverflow.com/questions/61780171/linux-process-component-sending-frequent-dns-queries-to-resolve-the-local-hostna
|
61,757,843
|
How to fetch output of command into a variable?
|
I am trying to run a command and store the values in a list: list = sed -n 's/^abc//p' /etc/filename I am getting a command not found error while running the above command. However, when I run the sed -n 's/^abc//p' /etc/filename command directly, the output comes out fine, as below: abc01 abc02 abc03
|
How to fetch output of command into a variable? I am trying to run a command and store the values in a list: list = sed -n 's/^abc//p' /etc/filename I am getting a command not found error while running the above command. However, when I run the sed -n 's/^abc//p' /etc/filename command directly, the output comes out fine, as below: abc01 abc02 abc03
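For reference, the error comes from the spaces around = (the shell then treats list as a command name); command substitution assigns the output, and mapfile keeps the lines as separate array elements. A small sketch:
    list=$(sed -n 's/^abc//p' /etc/filename)                 # one string containing all matching lines
    mapfile -t list < <(sed -n 's/^abc//p' /etc/filename)    # bash array, one element per line
    echo "${list[0]}"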
|
linux, bash, shell, sed, rhel
| 0
| 214
| 1
|
https://stackoverflow.com/questions/61757843/how-to-fetch-output-of-command-into-a-variable
|
61,645,228
|
installed the docker from binaries + docker startup
|
we installed the docker from binaries as the following ( according to [URL] ) wget [URL] --2020-05-06 20:39:22-- [URL] Resolving download.docker.com (download.docker.com)... 13.225.249.16, 13.225.249.45, 13.225.249.106, ... Connecting to download.docker.com (download.docker.com)|13.225.249.16|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 27816900 (27M) [application/x-tar] Saving to: ‘docker-17.03.0-ce.tgz’ 100%[===================================================================================================================================================>] 27,816,900 4.76MB/s in 3.7s 2020-05-06 20:39:26 (7.11 MB/s) - ‘docker-17.03.0-ce.tgz’ saved [27816900/27816900] now we tar it tar xzvf docker-17.03.0-ce.tgz docker/ docker/docker-containerd-ctr docker/docker-proxy docker/docker docker/docker-containerd docker/dockerd docker/docker-init docker/docker-containerd-shim docker/docker-runc the files after untar # ls docker docker-17.03.0-ce.tgz hsperfdata_root stable cd docker/ # ls docker docker-containerd docker-containerd-ctr docker-containerd-shim dockerd docker-init docker-proxy docker-runc now we need to start the dockerd but how to start it we try y /tmp/docker/dockerd Failed to connect to containerd. Please make sure containerd is installed in your PATH or you have specified the correct address. Got error: exec: "docker-containerd": executable file not found in $PATH so where we are wrong here ?
|
installed the docker from binaries + docker startup we installed the docker from binaries as the following ( according to [URL] ) wget [URL] --2020-05-06 20:39:22-- [URL] Resolving download.docker.com (download.docker.com)... 13.225.249.16, 13.225.249.45, 13.225.249.106, ... Connecting to download.docker.com (download.docker.com)|13.225.249.16|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 27816900 (27M) [application/x-tar] Saving to: ‘docker-17.03.0-ce.tgz’ 100%[===================================================================================================================================================>] 27,816,900 4.76MB/s in 3.7s 2020-05-06 20:39:26 (7.11 MB/s) - ‘docker-17.03.0-ce.tgz’ saved [27816900/27816900] now we tar it tar xzvf docker-17.03.0-ce.tgz docker/ docker/docker-containerd-ctr docker/docker-proxy docker/docker docker/docker-containerd docker/dockerd docker/docker-init docker/docker-containerd-shim docker/docker-runc the files after untar # ls docker docker-17.03.0-ce.tgz hsperfdata_root stable cd docker/ # ls docker docker-containerd docker-containerd-ctr docker-containerd-shim dockerd docker-init docker-proxy docker-runc now we need to start the dockerd but how to start it we try y /tmp/docker/dockerd Failed to connect to containerd. Please make sure containerd is installed in your PATH or you have specified the correct address. Got error: exec: "docker-containerd": executable file not found in $PATH so where we are wrong here ?
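The error suggests dockerd cannot find its helper binaries on $PATH; the binary-install instructions of that Docker era have you copy the whole extracted directory onto the PATH before starting the daemon. A sketch, using the paths from the transcript above:
    sudo cp /tmp/docker/* /usr/bin/
    sudo dockerd &
    # or, without copying, put the extracted directory on PATH for the daemon:
    # PATH=/tmp/docker:$PATH /tmp/docker/dockerd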
|
docker, docker-compose, containers, docker-machine, rhel
| 0
| 1,930
| 1
|
https://stackoverflow.com/questions/61645228/installed-the-docker-from-binaries-docker-startup
|
61,416,437
|
Running dotnet through cron on RedHat fails
|
We have a dotnet core script we use to index some files. We leverage RedHat Software Collection so items like dotnet can tie into our RHEL setup. To run the script, we do the following: source scl_source enable rh-dotnet30 /opt/rh/rh-dotnet30/root/usr/bin/dotnet /d/h/fileprocessor.dll 1 We want to run this in cron, but we can not get it to work. We have tried the following: Adding the 'source' command to the bash profile, but this doesn't seem to be reliable for us, and not run on the cron event. Running this directly in cron Running this as a shell script in cron We are at a loss, it seems we can never get the two commands to work together. If we don't include the source command, even if in our profile, it will not run and gives us the error " It was not possible to find any installed .NET Core SDKs Did you mean to run .NET Core SDK commands? Install a .NET Core SDK from: [URL] "
|
Running dotnet through cron on RedHat fails We have a dotnet core script we use to index some files. We leverage RedHat Software Collection so items like dotnet can tie into our RHEL setup. To run the script, we do the following: source scl_source enable rh-dotnet30 /opt/rh/rh-dotnet30/root/usr/bin/dotnet /d/h/fileprocessor.dll 1 We want to run this in cron, but we can not get it to work. We have tried the following: Adding the 'source' command to the bash profile, but this doesn't seem to be reliable for us, and not run on the cron event. Running this directly in cron Running this as a shell script in cron We are at a loss, it seems we can never get the two commands to work together. If we don't include the source command, even if in our profile, it will not run and gives us the error " It was not possible to find any installed .NET Core SDKs Did you mean to run .NET Core SDK commands? Install a .NET Core SDK from: [URL] "
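Cron jobs run in a minimal, non-login shell, so nothing from the interactive profile (including scl_source) is applied; a common pattern is to give the job its own shell that enables the collection first, or to run it through scl directly. A sketch of a crontab entry — the schedule and log path are placeholders, and the exact scl invocation syntax may vary slightly with the scl-utils version:
    0 2 * * * /bin/bash -lc 'source scl_source enable rh-dotnet30; dotnet /d/h/fileprocessor.dll 1' >> /tmp/fileprocessor.log 2>&1
    # or, roughly equivalently:
    0 2 * * * scl enable rh-dotnet30 'dotnet /d/h/fileprocessor.dll 1'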
|
.net-core, cron, rhel, rhel7, software-collections
| 0
| 732
| 2
|
https://stackoverflow.com/questions/61416437/running-dotnet-through-cron-on-redhat-fails
|
60,848,286
|
To exit out of the code if expect condition not met RHEL
|
I have written a small piece of code using expect in Red Hat Linux 8 to check whether a host is reachable using ICMP ping. My intention is that if the destination is not reachable, then the execution should break. Please find the code below: #!/usr/bin/expect lassign $argv 1 2 spawn ping -c 2 -i 3 -W 1 $1 expect { " 0%" {puts "Source is reachable!"} " 100%" {puts "Source is not reachable. Please restart IPSEC and check!"} eof {break\r } } However, with the above break statement, the execution continues to the next line. I am new to expect in bash scripting. Any help would be highly appreciated.
|
To exit out of the code if expect condition not met RHEL I have written a small piece of code using expect in Red Hat Linux 8 to check whether a host is reachable using ICMP ping. My intention is that if the destination is not reachable, then the execution should break. Please find the code below: #!/usr/bin/expect lassign $argv 1 2 spawn ping -c 2 -i 3 -W 1 $1 expect { " 0%" {puts "Source is reachable!"} " 100%" {puts "Source is not reachable. Please restart IPSEC and check!"} eof {break\r } } However, with the above break statement, the execution continues to the next line. I am new to expect in bash scripting. Any help would be highly appreciated.
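For context, #!/usr/bin/expect scripts are Tcl, and break is intended to leave an enclosing loop; there is no loop here, so it does not stop the script, whereas exit does. A sketch of the expect block only, written in the same expect/Tcl syntax as the question's script, with a non-zero status on the failure branches:
    expect {
        " 0%"   { puts "Source is reachable!" }
        " 100%" { puts "Source is not reachable. Please restart IPSEC and check!"; exit 1 }
        eof     { exit 1 }
    }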
|
bash, shell, expect, rhel
| 0
| 101
| 2
|
https://stackoverflow.com/questions/60848286/to-exit-out-of-the-code-if-expect-condition-not-met-rhel
|
60,340,771
|
Linux environment variable
|
Is there any command to get only the exported variables in RHEL? For example, if I export multiple variables line by line like below: export AIR_HOME=value1 export PATH=value2 Is there any command to see only these exported variables? I tried printenv and env and they didn't work for me; they print all of the environment variables. Also, echo $AIR_HOME will not work in my case, since the exported variable name will change in different situations.
|
Linux environment variable Is there any command to get only the exported variables in RHEL? For example, if I export multiple variables line by line like below: export AIR_HOME=value1 export PATH=value2 Is there any command to see only these exported variables? I tried printenv and env and they didn't work for me; they print all of the environment variables. Also, echo $AIR_HOME will not work in my case, since the exported variable name will change in different situations.
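One caveat worth noting: exported variables are the process environment, so printenv, env and export -p all necessarily show the same full set. If the goal is to see only the variables a particular script exported, one approach is to diff the environment before and after sourcing it — a sketch, where set_vars.sh is a hypothetical script doing the exports:
    env | sort > /tmp/env.before
    source ./set_vars.sh
    env | sort > /tmp/env.after
    comm -13 /tmp/env.before /tmp/env.after    # lines present only afterwards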
|
environment-variables, rhel
| 0
| 160
| 2
|
https://stackoverflow.com/questions/60340771/linux-environment-variable
|
59,905,177
|
Generating a default config SIMP file
|
I am on a RHEL 7.7 instance that uses SIMP . I am trying to generate a default configuration (YAML) file. Directly from the SIMP docs : You can use the --dry-run option to step through the questions without changing anything and then run simp config -a /root/.simp/simp_conf.yaml to apply the changes. And further down: If you want to understand what variables apply to your setup, run simp config --dry-run and examine the generated simp_conf.yaml file. That file will contain both the settings and their documentation. I've tried doing so via: simp config --dry-run simp config --dry-run -o default_simp_config.yaml simp config --dry-run -f -o default_simp_config.yaml No file is generated as a result of any of these commands. What am I missing? Info: # simp version 5.1.0 # cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.7 (Maipo)
|
Generating a default config SIMP file I am on a RHEL 7.7 instance that uses SIMP . I am trying to generate a default configuration (YAML) file. Directly from the SIMP docs : You can use the --dry-run option to step through the questions without changing anything and then run simp config -a /root/.simp/simp_conf.yaml to apply the changes. And further down: If you want to understand what variables apply to your setup, run simp config --dry-run and examine the generated simp_conf.yaml file. That file will contain both the settings and their documentation. I've tried doing so via: simp config --dry-run simp config --dry-run -o default_simp_config.yaml simp config --dry-run -f -o default_simp_config.yaml No file is generated as a result of any of these commands. What am I missing? Info: # simp version 5.1.0 # cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.7 (Maipo)
|
rhel, rhel7
| 0
| 49
| 1
|
https://stackoverflow.com/questions/59905177/generating-a-default-config-simp-file
|
59,641,119
|
Install Python Packages and dependencies on Linux without Internet
|
I have a dev Linux server (RHEL) which doesn't have any internet connectivity where we need to develop a python application. I can connect to this dev box from my local Windows server where I have internet connection. I would need to install python-3.75 and some other packages (some of which need gcc compilers and other dependencies) on this dev box. What is the best way to do this considering that some packages will have many dependencies and there is no internet on the dev box ? Some options that the internet research suggests for package installation are: Download the packages using PIP DOWNLOAD on the local server > copy the package tar to the dev server > pip install package download and unpack the source distribution > using the setup.py file of the package: run python setup.py install --user Install using Wheels: Find the wheel for the package > upload it to the dev server > run pip install SomePackage.whl Please let me know which one of these is good considering the limitations and kindly suggest if there is any other option as well.
|
Install Python Packages and dependencies on Linux without Internet I have a dev Linux server (RHEL) which doesn't have any internet connectivity where we need to develop a python application. I can connect to this dev box from my local Windows server where I have internet connection. I would need to install python-3.75 and some other packages (some of which need gcc compilers and other dependencies) on this dev box. What is the best way to do this considering that some packages will have many dependencies and there is no internet on the dev box ? Some options that the internet research suggests for package installation are: Download the packages using PIP DOWNLOAD on the local server > copy the package tar to the dev server > pip install package download and unpack the source distribution > using the setup.py file of the package: run python setup.py install --user Install using Wheels: Find the wheel for the package > upload it to the dev server > run pip install SomePackage.whl Please let me know which one of these is good considering the limitations and kindly suggest if there is any other option as well.
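A rough sketch of the wheel-based route (the third option), which usually avoids needing gcc on the offline box as long as binary wheels exist for the packages; the platform tag assumes a 64-bit, manylinux-compatible RHEL and the same Python minor version on both machines:
    # on the machine with internet access
    pip download -r requirements.txt -d ./pkgs --only-binary=:all: --platform manylinux1_x86_64
    # copy ./pkgs to the dev box, then install without touching the network
    pip install --no-index --find-links ./pkgs -r requirements.txt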
|
python, rhel
| 0
| 2,758
| 1
|
https://stackoverflow.com/questions/59641119/install-python-packages-and-dependencies-on-linux-without-internet
|
59,371,341
|
Installing uWSGI service on RHEL8
|
I am trying to install the uWSGI service on RHEL8. After which I should be able to do systemctl start uwsgi . As far as I can tell from online resources, this should work: yum install uwsgi (or dnf install uwsgi ), but gives Error: Unable to find a match: uwsgi . Most resources are on earlier versions of RHEL, but I have not been able to find anything specific for RHEL8. I have enabled the EPEL repository. yum repolist gives: repo id repo name status *epel Extra Packages for Enterprise Linux 8 - x86_64 3678 rhel-8-for-x86_64-appstream-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) 8289 rhel-8-for-x86_64-baseos-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) 3315 rhel-8-for-x86_64-supplementary-rpms Red Hat Enterprise Linux 8 for x86_64 - Supplementary (RPMs) 28 I have tried the 'manual' approach, as per [URL] , but this hasn't been very successful so far either, and I would prefer to use a system package if at all possible.
|
Installing uWSGI service on RHEL8 I am trying to install the uWSGI service on RHEL8. After which I should be able to do systemctl start uwsgi . As far as I can tell from online resources, this should work: yum install uwsgi (or dnf install uwsgi ), but gives Error: Unable to find a match: uwsgi . Most resources are on earlier versions of RHEL, but I have not been able to find anything specific for RHEL8. I have enabled the EPEL repository. yum repolist gives: repo id repo name status *epel Extra Packages for Enterprise Linux 8 - x86_64 3678 rhel-8-for-x86_64-appstream-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) 8289 rhel-8-for-x86_64-baseos-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) 3315 rhel-8-for-x86_64-supplementary-rpms Red Hat Enterprise Linux 8 for x86_64 - Supplementary (RPMs) 28 I have tried the 'manual' approach, as per [URL] , but this hasn't been very successful so far either, and I would prefer to use a system package if at all possible.
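If no distro package turns up, a commonly used fallback (an alternative to the packaged route asked about, not a replacement for it) is building uWSGI from PyPI and writing a small unit file by hand; a sketch:
    dnf install -y gcc python3-devel
    pip3 install uwsgi
    # then create /etc/systemd/system/uwsgi.service with ExecStart pointing at the installed uwsgi binary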
|
uwsgi, rhel
| 0
| 3,834
| 1
|
https://stackoverflow.com/questions/59371341/installing-uwsgi-service-on-rhel8
|
59,038,804
|
Issue running ASP.NET Core in RHEL 7
|
I have built and published an ASP.NET Core application within Visual Studio 2017 targeting ASP.NET Core 2.2.8 Runtime. When I deploy (xcopy) to RedHat and try to run in Kestrel I get the following error: Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'Microsoft.AspNetCore.Server.Kestrel.Core, Version=2.2.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The module was expected to contain an assembly manifest. This is what I have in my .csproj file: <Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp2.2</TargetFramework> <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel> <Platforms>x64</Platforms> <MicrosoftNETPlatformLibrary>Microsoft.NETCore.App</MicrosoftNETPlatformLibrary> </PropertyGroup> <ItemGroup> <Content Update="appsettings.json;web.config"> <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> </Content> </ItemGroup> <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.App" Version="2.2.8" /> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.2.8" /> <PackageReference Include="Microsoft.AspNetCore.Razor.Design" Version="2.2.0" PrivateAssets="All" /> </ItemGroup> <ItemGroup> <PackageReference Update="Microsoft.NETCore.App" Version="2.2.8"/> </ItemGroup> </Project>
|
Issue running ASP.NET Core in RHEL 7 I have built and published an ASP.NET Core application within Visual Studio 2017 targeting ASP.NET Core 2.2.8 Runtime. When I deploy (xcopy) to RedHat and try to run in Kestrel I get the following error: Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'Microsoft.AspNetCore.Server.Kestrel.Core, Version=2.2.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The module was expected to contain an assembly manifest. This is what I have in my .csproj file: <Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp2.2</TargetFramework> <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel> <Platforms>x64</Platforms> <MicrosoftNETPlatformLibrary>Microsoft.NETCore.App</MicrosoftNETPlatformLibrary> </PropertyGroup> <ItemGroup> <Content Update="appsettings.json;web.config"> <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> </Content> </ItemGroup> <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.App" Version="2.2.8" /> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.2.8" /> <PackageReference Include="Microsoft.AspNetCore.Razor.Design" Version="2.2.0" PrivateAssets="All" /> </ItemGroup> <ItemGroup> <PackageReference Update="Microsoft.NETCore.App" Version="2.2.8"/> </ItemGroup> </Project>
|
asp.net-core, rhel, rhel7
| 0
| 302
| 1
|
https://stackoverflow.com/questions/59038804/issue-running-asp-net-core-in-rhel-7
|
58,880,850
|
A path concatenated with os.path.join in a RHEL environment is using two different separators
|
I have some code that needs to work cross-platform and calls a subprocess of conda.exe. Prior to calling the subprocess, I create the path to conda.exe using the following code (and this path is then used to call conda.exe in the subprocess later on): install_dir = os.path.normpath(arcpy.GetInstallInfo()["InstallDir"]) conda = os.path.join(install_dir, "bin", "Python", "Scripts", "conda.exe") This works perfectly fine on Windows and Ubuntu, but on RHEL the path returned uses two different separators, example below (ellipsis is not part of path): z:\\...\\arcgis\\server\\framework\\runtime\\arcgis\\/bin/Python/Scripts/conda.exe Needless to say, when I try to call conda in the subprocess, I get a "No such file or directory" error. Any idea why the path is being put together using two different separators when running in RHEL? So far I have not been able to come up with a solution that works, thank you for any help that points me in the right direction!
|
A path concatenated with os.path.join in a RHEL environment is using two different separators I have some code that needs to work cross-platform and calls a subprocess of conda.exe. Prior to calling the subprocess, I create the path to conda.exe using the following code (and this path is then used to call conda.exe in the subprocess later on): install_dir = os.path.normpath(arcpy.GetInstallInfo()["InstallDir"]) conda = os.path.join(install_dir, "bin", "Python", "Scripts", "conda.exe") This works perfectly fine on Windows and Ubuntu, but on RHEL the path returned uses two different separators, example below (ellipsis is not part of path): z:\\...\\arcgis\\server\\framework\\runtime\\arcgis\\/bin/Python/Scripts/conda.exe Needless to say, when I try to call conda in the subprocess, I get a "No such file or directory" error. Any idea why the path is being put together using two different separators when running in RHEL? So far I have not been able to come up with a solution that works, thank you for any help that points me in the right direction!
|
python, path, rhel
| 0
| 42
| 1
|
https://stackoverflow.com/questions/58880850/a-path-concatenated-with-os-path-join-in-a-rhel-environment-is-using-two-differe
|
58,736,028
|
C strchr works with NULL value on HPUX but segfaults on RHEL
|
I'm moving some code from HPUX 11.11 to RHEL 7.5 and it includes a function that uses strchr. On HPUX it runs fine, and on RHEL there is a segmentation fault. I isolated the code and created the following simple test, with the subsequent results. It looks like HPUX strchr is returning an empty string rather than NULL when the character is not found. This is not what the man page says. I have found it might not be the strchr function but the difference in how HPUX handles NULL values, or a difference in compiler from cc to gcc. Does anyone actually know what's happening here? Why is the code not segfaulting on HPUX? C Code: #include <stdio.h> #include <stdlib.h> #include <string.h> int main(int argc, char *argv[]) { int i = 0; char* phrase = "Here is my phrase."; while (i < 5){ phrase = strchr(phrase, ' '); ++phrase; ++i; } return 0; } HPUX makefile: BIN=/path/to/bin/ CFLAGS=-c #----------Targets----------# default: $(BIN)strchrtest #----------Objects----------# OBJS = strchrtest.o #----------------BUILD----------------# $(BIN)strchrtest: $(OBJS) cc -o $@ $(OBJS) strchrtest.o: strchrtest.c cc $(CFLAGS) strchrtest.c RHEL makefile: CC=gcc BIN=/path/to/bin/ CFLAGS=-c -g -Wall #----------Targets----------# default: $(BIN)strchrtest #----------Objects----------# OBJS = strchrtest.o #----------------BUILD----------------# $(BIN)strchrtest: $(OBJS) $(CC) -o $@ $(OBJS) strchrtest.o: strchrtest.c $(CC) $(CFLAGS) strchrtest.c HPUX is just a successful result. No segmentation fault. RHEL results: (gdb) run Starting program: ../strchrtest Program received signal SIGSEGV, Segmentation fault. 0x00007ffff7b4c5f4 in __strchr_sse42 () from /lib64/libc.so.6 Copied form my comment below: The point is that the strchr function itself is causing the segfault when asked to find a character in the null pointer (that has been incremented), but on HPUX it doesn't. So it's either returning an empty string that is then passed strchr on the next loop, or strchr is handling the null pointer parameter differently by not segfaulting. Or I misunderstand what's happening.
|
C strchr works with NULL value on HPUX but segfaults on RHEL I'm moving some code from HPUX 11.11 to RHEL 7.5 and it includes a function that uses strchr. On HPUX it runs fine, and on RHEL there is a segmentation fault. I isolated the code and created the following simple test, with the subsequent results. It looks like HPUX strchr is returning an empty string rather than NULL when the character is not found. This is not what the man page says. I have found it might not be the strchr function but the difference in how HPUX handles NULL values, or a difference in compiler from cc to gcc. Does anyone actually know what's happening here? Why is the code not segfaulting on HPUX? C Code: #include <stdio.h> #include <stdlib.h> #include <string.h> int main(int argc, char *argv[]) { int i = 0; char* phrase = "Here is my phrase."; while (i < 5){ phrase = strchr(phrase, ' '); ++phrase; ++i; } return 0; } HPUX makefile: BIN=/path/to/bin/ CFLAGS=-c #----------Targets----------# default: $(BIN)strchrtest #----------Objects----------# OBJS = strchrtest.o #----------------BUILD----------------# $(BIN)strchrtest: $(OBJS) cc -o $@ $(OBJS) strchrtest.o: strchrtest.c cc $(CFLAGS) strchrtest.c RHEL makefile: CC=gcc BIN=/path/to/bin/ CFLAGS=-c -g -Wall #----------Targets----------# default: $(BIN)strchrtest #----------Objects----------# OBJS = strchrtest.o #----------------BUILD----------------# $(BIN)strchrtest: $(OBJS) $(CC) -o $@ $(OBJS) strchrtest.o: strchrtest.c $(CC) $(CFLAGS) strchrtest.c HPUX is just a successful result. No segmentation fault. RHEL results: (gdb) run Starting program: ../strchrtest Program received signal SIGSEGV, Segmentation fault. 0x00007ffff7b4c5f4 in __strchr_sse42 () from /lib64/libc.so.6 Copied form my comment below: The point is that the strchr function itself is causing the segfault when asked to find a character in the null pointer (that has been incremented), but on HPUX it doesn't. So it's either returning an empty string that is then passed strchr on the next loop, or strchr is handling the null pointer parameter differently by not segfaulting. Or I misunderstand what's happening.
|
c, null, rhel, hp-ux, strchr
| 0
| 309
| 1
|
https://stackoverflow.com/questions/58736028/c-strchr-works-with-null-value-on-hpux-but-segfaults-on-rhel
|
58,715,471
|
Getting segmentation fault(core dumped) after installing glibc through conda
|
I am trying to install glibc through conda on CentOS 6.5 using conda install -c dan_blanchard glibc It installs glibc-2.18. I am getting Segmentation fault (core dumped) after running the above command, when I try to open python in the terminal. I am working on a remote server with no admin privileges. On running echo $LD_LIBRARY_PATH , I get /share/opt/python/3.6.5/lib:/share/lsf/9.1/linux2.6-glibc2.3-x86_64/lib I need to update glibc in order to install pytorch=1.3 from source . Is it possible to update glibc? If yes, what can I do to make it right? I checked the answer below as well, but couldn't apply it to my use case as it requires root privileges. After updating glibc: Segmentation fault (core dumped)
|
Getting segmentation fault(core dumped) after installing glibc through conda I am trying to install glibc through conda on CentOS 6.5 using conda install -c dan_blanchard glibc It installs glibc-2.18. I am getting Segmentation fault (core dumped) after running the above command, when I try to open python in the terminal. I am working on a remote server with no admin privileges. On running echo $LD_LIBRARY_PATH , I get /share/opt/python/3.6.5/lib:/share/lsf/9.1/linux2.6-glibc2.3-x86_64/lib I need to update glibc in order to install pytorch=1.3 from source . Is it possible to update glibc? If yes, what can I do to make it right? I checked the answer below as well, but couldn't apply it to my use case as it requires root privileges. After updating glibc: Segmentation fault (core dumped)
|
python, centos, glibc, rhel
| 0
| 2,075
| 1
|
https://stackoverflow.com/questions/58715471/getting-segmentation-faultcore-dumped-after-installing-glibc-through-conda
|
58,660,083
|
Upgrade to php 7 and apache 2.4 from php 5.3 and apache 2.2 in Amazon EC2
|
I have a legacy system in which Apache 2.2.34 (linux) is installed along with php 5.3.29 (CLI). I just want to upgrade my apache to 2.4.x so that I will be able to use php 7. I have tried searching for the same but majority of sites provide solution for CentOS or Ubuntu. I'm new to Linux so I'm a bit confused when applying the same on Amazon EC2 instance. That would be really helpful if someone can provide me a step by step process to do the upgrade process. I just need to upgrade the server and I can do the configuration accordingly.
|
Upgrade to php 7 and apache 2.4 from php 5.3 and apache 2.2 in Amazon EC2 I have a legacy system in which Apache 2.2.34 (linux) is installed along with php 5.3.29 (CLI). I just want to upgrade my apache to 2.4.x so that I will be able to use php 7. I have tried searching for the same but majority of sites provide solution for CentOS or Ubuntu. I'm new to Linux so I'm a bit confused when applying the same on Amazon EC2 instance. That would be really helpful if someone can provide me a step by step process to do the upgrade process. I just need to upgrade the server and I can do the configuration accordingly.
|
linux, amazon-ec2, php-7, rhel, apache2.4
| 0
| 2,071
| 2
|
https://stackoverflow.com/questions/58660083/upgrade-to-php-7-and-apache-2-4-from-php-5-3-and-apache-2-2-in-amazon-ec2
|
57,805,047
|
Varnish 6 compatibility with RHEL 6
|
Does anyone know if the Varnish Cache 6 series is compatible with RHEL 6? I'm particularly interested in the compatibility of Varnish LTS v6.0.4 with RHEL v6.10 (Santiago). I couldn't find any official documentation about this.
|
Varnish 6 compatibility with RHEL 6 Does anyone know if the Varnish Cache 6 series is compatible with RHEL 6? I'm particularly interested in the compatibility of Varnish LTS v6.0.4 with RHEL v6.10 (Santiago). I couldn't find any official documentation about this.
|
compatibility, varnish, rhel
| 0
| 38
| 1
|
https://stackoverflow.com/questions/57805047/varnish-6-compatibility-with-rhel-6
|
57,724,504
|
RPM failing to install on RHEL
|
I am trying to install an RPM package on RHEL7. I am getting the following error; Fri Aug 30 05:36:55 UTC 2019--> Start Installing downloaded package... file /etc/rc.d from install of abc.x86_64 conflicts with file from package chkconfig-1.7.4-1.el7.x86_64 file /etc/rc.d/init.d from install of abc.x86_64 conflicts with file from package chkconfig-1.7.4-1.el7.x86_64 file /etc/rc.d from install of abc.x86_64 conflicts with file from package initscripts-9.49.47-1.el7.x86_64 file /etc/rc.d/init.d from install of abc.x86_64 conflicts with file from package initscripts-9.49.47-1.el7.x86_64 file /usr/lib/systemd/system from install of abc.x86_64 conflicts with file from package systemd-219-67.el7_7.1.x86_64 file /usr/lib/systemd from install of abc.x86_64 conflicts with file from package systemd-219-67.el7_7.1.x86_64 What does this error really mean? Does this mean that RPM abc is not allowed to make any changes to /etc/rc.d or chkconfig-1.7.4-1.el7.x86_64 is a conflicting package?
|
RPM failing to install on RHEL I am trying to install an RPM package on RHEL7. I am getting the following error; Fri Aug 30 05:36:55 UTC 2019--> Start Installing downloaded package... file /etc/rc.d from install of abc.x86_64 conflicts with file from package chkconfig-1.7.4-1.el7.x86_64 file /etc/rc.d/init.d from install of abc.x86_64 conflicts with file from package chkconfig-1.7.4-1.el7.x86_64 file /etc/rc.d from install of abc.x86_64 conflicts with file from package initscripts-9.49.47-1.el7.x86_64 file /etc/rc.d/init.d from install of abc.x86_64 conflicts with file from package initscripts-9.49.47-1.el7.x86_64 file /usr/lib/systemd/system from install of abc.x86_64 conflicts with file from package systemd-219-67.el7_7.1.x86_64 file /usr/lib/systemd from install of abc.x86_64 conflicts with file from package systemd-219-67.el7_7.1.x86_64 What does this error really mean? Does this mean that RPM abc is not allowed to make any changes to /etc/rc.d or chkconfig-1.7.4-1.el7.x86_64 is a conflicting package?
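Roughly speaking, the message means the abc package declares ownership of directories (/etc/rc.d, /etc/rc.d/init.d, /usr/lib/systemd/system, ...) that are already owned by core packages, and rpm refuses the install when the two packages claim the same path with differing attributes; the fix is usually in how abc's spec file packages those directories rather than in the system. A couple of commands to confirm which side owns what:
    rpm -qf /etc/rc.d /etc/rc.d/init.d /usr/lib/systemd/system   # who owns these paths now
    rpm -qlvp abc.rpm | grep -E 'rc\.d|systemd'                  # what abc wants to install there, with attributes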
|
rpm, rhel, rhel7
| 0
| 382
| 1
|
https://stackoverflow.com/questions/57724504/rpm-failing-to-install-on-rhel
|
57,144,507
|
Where can I find the source code of an RPM package?
|
I'm using CentOS 7; the versions of the docker RPMs are [root@node-6 ~]# rpm -qa | grep docker docker-common-1.13.1-63.git94f4240.el7.centos.x86_64 docker-client-1.13.1-63.git94f4240.el7.centos.x86_64 docker-1.13.1-63.git94f4240.el7.centos.x86_64 I downloaded the source code of Docker 1.13 from GitHub and found that it doesn't match the log printed on the server. It seems that RHEL/CentOS made a lot of modifications to the docker they provide. I searched a lot on Google and the CentOS rpm git , but no luck. Is the "CentOS edition" of docker open source? If so, where can I find the source code?
|
Where can I find the source code of an RPM package? I'm using CentOS 7; the versions of the docker RPMs are [root@node-6 ~]# rpm -qa | grep docker docker-common-1.13.1-63.git94f4240.el7.centos.x86_64 docker-client-1.13.1-63.git94f4240.el7.centos.x86_64 docker-1.13.1-63.git94f4240.el7.centos.x86_64 I downloaded the source code of Docker 1.13 from GitHub and found that it doesn't match the log printed on the server. It seems that RHEL/CentOS made a lot of modifications to the docker they provide. I searched a lot on Google and the CentOS rpm git , but no luck. Is the "CentOS edition" of docker open source? If so, where can I find the source code?
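The spec file, patches and upstream tarball for distro builds like this are published as a source RPM; locally, the matching src.rpm can be pulled and unpacked, assuming a source/vault repository is reachable (whether the exact NVR below is still served from the enabled repos is an assumption). A sketch:
    yum install -y yum-utils
    yumdownloader --source docker-1.13.1-63.git94f4240.el7.centos   # needs a source repo (e.g. the CentOS vault) enabled
    rpm2cpio docker-*.src.rpm | cpio -idmv                          # unpacks the spec, the patches and the upstream tarball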
|
package, centos, rpm, yum, rhel
| 0
| 936
| 1
|
https://stackoverflow.com/questions/57144507/where-can-i-find-the-souce-code-of-a-rpm-package
|
57,031,722
|
Used PowerShell to change my RHEL root passwords via PuTTY, but I don't know what I changed my password to
|
Basically the title. My friend provided me a script to batch change RHEL passwords via Powershell and PuTTY, but the new password I entered doesn't work when I try to log in. I think the issue is that it doesn't escape one of the special characters that's in the new password, but I can't figure out what the new password would have been. The "new password" I used was similar to this: a1b2c3d"4e5f6g7 I attempted to replace the secure strings for regular strings, or use telnet instead of SSH with a packet capture to determine what exactly is being sent, but none of that has worked thus far. Set-ExecutionPolicy -ExecutionPolicy RemoteSigned # Displays prompt Write-Host "This will update the root password on the Linux Servers" # Get the running directory $rundirectory = Split-Path $MyInvocation.MyCommand.Path #$rundirectory = Split-Path $rundirectory # Get old root credential $oldrootPassword = Read-Host "Enter old root password" -AsSecureString $oldrootCredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "root", $oldrootPassword # Get new root credential $newrootPassword = Read-Host "Enter new root password" -AsSecureString $newrootCredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "root", $newrootPassword $newrootPassword2 = Read-Host "Retype new root password" -AsSecureString $newrootCredential2 = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "root", $newrootPassword2 # $gc = get-content \linuxservers.txt if ($newrootCredential.GetNetworkCredential().Password -ceq $newrootCredential2.GetNetworkCredential().Password) { $templogfile = $rundirectory + "\Temp\log.txt" $tempchfile = $rundirectory + "\Temp\pwd_changes.txt" $log = $rundirectory + "\Logs\RHEL\Password_Changes_$(Get-Date -f MMddyyyy).log" $newrootPassword = $newrootCredential.GetNetworkCredential().Password $serverlist = $rundirectory + "\linuxservers.txt" Get-Content $serverlist | %{ # Connects to host and stores SSH key in case it does not have one already echo y | plink.exe -ssh -pw $oldrootCredential.GetNetworkCredential().Password root@$_ exit # Opens a session to the server to use for disaster recovery putty.exe -ssh -pw $oldrootCredential.GetNetworkCredential().Password root@$_ # Adds delay to complete login before password is changed Start-Sleep -Milliseconds 900 # Command sent to host to change password that is then logged echo y | plink.exe -ssh -v -pw $oldrootCredential.GetNetworkCredential().Password root@$_ "echo root:'$newrootPassword' | chpasswd" 2>&1 >> $templogfile # Parses file and stores output in variable $outpt = cat $templogfile | Select-String "Session sent command exit status" # Adds server name and variable to changes file echo n $_.ToUpper() n$outpt `n "------------------------------------" >> $tempchfile # Removes the log file to be used again in loop Remove-Item $templogfile # Opens second PuTTY session to make sure password works putty.exe -ssh -pw $newrootCredential.GetNetworkCredential().Password root@$_ } } else { $writehost = "ERROR: New root passwords do not match. Exiting..." } if ($writehost -ceq "ERROR: New root passwords do not match. Exiting...") { Write-Host "ERROR: New root passwords do not match. Exiting..." 
} else { # Places contents of results file in variable $pwresults = cat $tempchfile # Adds comment at top of file and creates new results file echo "Investigate all servers that do not have a command exit status of 0" $pwresults >> $log # Removes the changes file Remove-Item $tempchfile # Opens results file for administrator to investigate Invoke-Item $log } I expected the new password to be a1b2c3d"4e5f6g7; however, this does not work upon login.
|
Used PowerShell to change my RHEL root passwords via PuTTY, but I don't know what I changed my password to Basically the title. My friend provided me a script to batch change RHEL passwords via Powershell and PuTTY, but the new password I entered doesn't work when I try to log in. I think the issue is that it doesn't escape one of the special characters that's in the new password, but I can't figure out what the new password would have been. The "new password" I used was similar to this: a1b2c3d"4e5f6g7 I attempted to replace the secure strings for regular strings, or use telnet instead of SSH with a packet capture to determine what exactly is being sent, but none of that has worked thus far. Set-ExecutionPolicy -ExecutionPolicy RemoteSigned # Displays prompt Write-Host "This will update the root password on the Linux Servers" # Get the running directory $rundirectory = Split-Path $MyInvocation.MyCommand.Path #$rundirectory = Split-Path $rundirectory # Get old root credential $oldrootPassword = Read-Host "Enter old root password" -AsSecureString $oldrootCredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "root", $oldrootPassword # Get new root credential $newrootPassword = Read-Host "Enter new root password" -AsSecureString $newrootCredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "root", $newrootPassword $newrootPassword2 = Read-Host "Retype new root password" -AsSecureString $newrootCredential2 = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "root", $newrootPassword2 # $gc = get-content \linuxservers.txt if ($newrootCredential.GetNetworkCredential().Password -ceq $newrootCredential2.GetNetworkCredential().Password) { $templogfile = $rundirectory + "\Temp\log.txt" $tempchfile = $rundirectory + "\Temp\pwd_changes.txt" $log = $rundirectory + "\Logs\RHEL\Password_Changes_$(Get-Date -f MMddyyyy).log" $newrootPassword = $newrootCredential.GetNetworkCredential().Password $serverlist = $rundirectory + "\linuxservers.txt" Get-Content $serverlist | %{ # Connects to host and stores SSH key in case it does not have one already echo y | plink.exe -ssh -pw $oldrootCredential.GetNetworkCredential().Password root@$_ exit # Opens a session to the server to use for disaster recovery putty.exe -ssh -pw $oldrootCredential.GetNetworkCredential().Password root@$_ # Adds delay to complete login before password is changed Start-Sleep -Milliseconds 900 # Command sent to host to change password that is then logged echo y | plink.exe -ssh -v -pw $oldrootCredential.GetNetworkCredential().Password root@$_ "echo root:'$newrootPassword' | chpasswd" 2>&1 >> $templogfile # Parses file and stores output in variable $outpt = cat $templogfile | Select-String "Session sent command exit status" # Adds server name and variable to changes file echo n $_.ToUpper() n$outpt `n "------------------------------------" >> $tempchfile # Removes the log file to be used again in loop Remove-Item $templogfile # Opens second PuTTY session to make sure password works putty.exe -ssh -pw $newrootCredential.GetNetworkCredential().Password root@$_ } } else { $writehost = "ERROR: New root passwords do not match. Exiting..." } if ($writehost -ceq "ERROR: New root passwords do not match. Exiting...") { Write-Host "ERROR: New root passwords do not match. Exiting..." 
} else { # Places contents of results file in variable $pwresults = cat $tempchfile # Adds comment at top of file and creates new results file echo "Investigate all servers that do not have a command exit status of 0" $pwresults >> $log # Removes the changes file Remove-Item $tempchfile # Opens results file for administrator to investigate Invoke-Item $log } I expected the new password to be a1b2c3d"4e5f6g7; however, this does not work upon login.
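One way to narrow down what the new password actually became is to rebuild the same remote command string locally and look at it before it ever reaches plink; the embedded double quote in the password is the usual suspect when PowerShell hands arguments to a native executable. The snippet below is a hypothetical reproduction, not part of the original script.

```powershell
# Hypothetical check: print the exact remote command string the script builds.
# Older PowerShell versions require embedded double quotes to be escaped as \"
# when passed to a native program such as plink.exe, otherwise the quote can be
# dropped or the argument split.
$newrootPassword = 'a1b2c3d"4e5f6g7'
$remoteCmd = "echo root:'$newrootPassword' | chpasswd"
Write-Host $remoteCmd                       # what was intended
$escaped = $remoteCmd -replace '"', '\"'
Write-Host $escaped                         # what plink likely needed to receive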
|
powershell, redhat, putty, rhel
| 0
| 436
| 1
|
https://stackoverflow.com/questions/57031722/used-powershell-to-change-my-rhel-root-passwords-via-putty-but-i-dont-know-wha
|
55,361,976
|
.NET Core Runtime only on Linux
|
I'm trying to figure out a deployment strategy for our RHEL server with .NET Core microservices. I was hoping that we could use a .NET Core runtime on our production systems rather than an "SDK" version. The idea being that the runtime has less of an attack surface than perhaps the SDK would. When I investigate the two options it seems I am downloading the exact same package: yum install rh-dotnet22 -y Why does Microsoft even bother making the distinction here? Is there something I'm unaware of? Is there a way to get runtimes only as opposed to SDKs?
|
.NET Core Runtime only on Linux I'm trying to figure out a deployment strategy for our RHEL server with .NET Core microservices. I was hoping that we could use a .NET Core runtime on our production systems rather than an "SDK" version. The idea being that the runtime has less of an attack surface than perhaps the SDK would. When I investigate the two options it seems I am downloading the exact same package: yum install rh-dotnet22 -y Why does Microsoft even bother making the distinction here? Is there something I'm unaware of? Is there a way to get runtimes only as opposed to SDKs?
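Before settling on the SCL meta-package, it can help to inspect what it actually drags in and whether a runtime-only package exists in the enabled repositories. The commands below are a hedged sketch; exact package names differ between SCL and RHEL releases.

```bash
# Sketch: see what rh-dotnet22 depends on, and search for a runtime-only package.
yum deplist rh-dotnet22 | grep -i dotnet
yum search dotnet 2>/dev/null | grep -i runtime
```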
|
security, .net-core, rhel
| 0
| 271
| 2
|
https://stackoverflow.com/questions/55361976/net-core-runtime-only-on-linux
|
55,089,691
|
How to install Rust toolset on RHEL in EC2?
|
I tried installing Rust Toolset to get Cargo: yum install rust-toolset-7 No package rust-toolset-7 available. I also tried: subscription-manager repos --enable rhel-7-server-devtools-rpms Error: 'rhel-7-server-devtools-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
|
How to install Rust toolset on RHEL in EC2? I tried installing Rust Toolset to get Cargo: yum install rust-toolset-7 No package rust-toolset-7 available. I also tried: subscription-manager repos --enable rhel-7-server-devtools-rpms Error: 'rhel-7-server-devtools-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
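RHEL instances in EC2 typically use Amazon's RHUI repositories rather than a Red Hat subscription, which is why subscription-manager cannot enable the devtools repo there. A common fallback, assuming outbound HTTPS is allowed, is the upstream rustup installer, which provides cargo and rustc per-user without the rust-toolset-7 software collection.

```bash
# Fallback sketch: per-user toolchain via rustup instead of rust-toolset-7.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
cargo --version
```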
|
linux, amazon-ec2, rust, rhel
| 0
| 3,677
| 1
|
https://stackoverflow.com/questions/55089691/how-to-install-rust-toolset-on-rhel-in-ec2
|
54,322,418
|
Gunicorn throws error 403 when accessing static files
|
python==2.7.5 , django==1.11.10 , gunicorn==19.7.1 , RHEL 7.4 I have a django project at my job written not by me. It was in eventcat user's home directory and with time we ran out of available space on the disk. I was to move the project to /data/ . After I moved the project directory and set up a new environment I faced the problem that static files are not loaded and throwing 403 forbidden error. Well, I know that gunicorn is not supposed to serve static files on production, but this is an internal project with low load. I have to deal with it as is. The server is started with a selfwritten script (I changed the environment line to new path): #!/bin/sh . ~/.bash_profile . /data/eventcat/env/bin/activate exec gunicorn -c gunicorn.conf.py eventcat.wsgi:application The gunicorn.conf.py consists of: bind = '127.0.0.1:8000' backlog = 2048 workers = 1 worker_class = 'sync' worker_connections = 1000 timeout = 120 keepalive = 2 spew = False daemon = True pidfile = 'eventcat.pid' umask = 0 user = None group = None tmp_upload_dir = None errorlog = 'er.log' loglevel = 'debug' accesslog = 'ac.log' access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"' proc_name = None def post_fork(server, worker): server.log.info("Worker spawned (pid: %s)", worker.pid) def pre_fork(server, worker): pass def pre_exec(server): server.log.info("Forked child, re-executing.") def when_ready(server): server.log.info("Server is ready. Spawning workers") def worker_int(worker): worker.log.info("worker received INT or QUIT signal") import threading, sys, traceback id2name = dict([(th.identm, th.name) for th in threading.enumerate()]) code = [] for threadId, stack in sys._current_frames().items(): code.append("\n# Thread: %s(%d)" % (id2name.get(threadId, ""), threadId)) for filename, lineno, name, line in traceback.exctract_stack(stack): code.append('File: "%s", line %d, in %s' %(filename, lineno, name)) if line: code.append(" %s" % (line.strip())) worker.log.debug("\n".join(code)) def worker_abort(worker): worker.log.info("worker received SIGABRT signal") All the files in static directory are owned by eventcat user just like the directory itself. I couldn't find any useful information in er.log and ac.log . The server is running on https protocol and there is an ssl.conf in project directory. It has aliases for static and media pointing to previous project location and I changed all these entries to the new ones. Though I couldn't find where this config file is used. Please, advise how can I find out what is the cause of the issue. What config files or anything should I look into? UPDATE : Thanks to @ruddra, gunicorn wasn't serving static at all. It was httpd that was. After making changes in httpd config everything is working.
|
Gunicorn throws error 403 when accessing static files python==2.7.5 , django==1.11.10 , gunicorn==19.7.1 , RHEL 7.4 I have a django project at my job written not by me. It was in eventcat user's home directory and with time we ran out of available space on the disk. I was to move the project to /data/ . After I moved the project directory and set up a new environment I faced the problem that static files are not loaded and throwing 403 forbidden error. Well, I know that gunicorn is not supposed to serve static files on production, but this is an internal project with low load. I have to deal with it as is. The server is started with a selfwritten script (I changed the environment line to new path): #!/bin/sh . ~/.bash_profile . /data/eventcat/env/bin/activate exec gunicorn -c gunicorn.conf.py eventcat.wsgi:application The gunicorn.conf.py consists of: bind = '127.0.0.1:8000' backlog = 2048 workers = 1 worker_class = 'sync' worker_connections = 1000 timeout = 120 keepalive = 2 spew = False daemon = True pidfile = 'eventcat.pid' umask = 0 user = None group = None tmp_upload_dir = None errorlog = 'er.log' loglevel = 'debug' accesslog = 'ac.log' access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"' proc_name = None def post_fork(server, worker): server.log.info("Worker spawned (pid: %s)", worker.pid) def pre_fork(server, worker): pass def pre_exec(server): server.log.info("Forked child, re-executing.") def when_ready(server): server.log.info("Server is ready. Spawning workers") def worker_int(worker): worker.log.info("worker received INT or QUIT signal") import threading, sys, traceback id2name = dict([(th.identm, th.name) for th in threading.enumerate()]) code = [] for threadId, stack in sys._current_frames().items(): code.append("\n# Thread: %s(%d)" % (id2name.get(threadId, ""), threadId)) for filename, lineno, name, line in traceback.exctract_stack(stack): code.append('File: "%s", line %d, in %s' %(filename, lineno, name)) if line: code.append(" %s" % (line.strip())) worker.log.debug("\n".join(code)) def worker_abort(worker): worker.log.info("worker received SIGABRT signal") All the files in static directory are owned by eventcat user just like the directory itself. I couldn't find any useful information in er.log and ac.log . The server is running on https protocol and there is an ssl.conf in project directory. It has aliases for static and media pointing to previous project location and I changed all these entries to the new ones. Though I couldn't find where this config file is used. Please, advise how can I find out what is the cause of the issue. What config files or anything should I look into? UPDATE : Thanks to @ruddra, gunicorn wasn't serving static at all. It was httpd that was. After making changes in httpd config everything is working.
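Since the update notes that httpd, not gunicorn, was serving /static/, the fix amounts to pointing the old Alias at the new project location and allowing access to it. The snippet below is only illustrative of the shape of that change; the paths are assumptions based on the new /data/ location.

```apache
# Illustrative httpd (2.4) snippet: both the Alias and the Directory block must
# reference the moved static directory, otherwise httpd answers 403 for /static/.
Alias /static/ /data/eventcat/static/
<Directory /data/eventcat/static>
    Require all granted
</Directory>
```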
|
python, django, gunicorn, rhel, static-files
| 0
| 879
| 1
|
https://stackoverflow.com/questions/54322418/gunicorn-throws-error-403-when-accessing-static-files
|
54,038,558
|
How to get ansible to look in the container that it's running from instead of the servers provided in the inventory
|
Setup: Ansible initializes and runs from inside a docker container. A git repo which contains the playbooks and inventory files is loaded into this container. What the playbook does: The specific playbook that I'm working on runs a simple findmnt on all of the hosts listed in the inventory, and writes the output to a flat txt file, which I'm then fetching. Actual Issue: The ansible container isn't running in the detached mode and so when the run is done, there's no way to retrieve these result txt files. Since there is a git repo layered into the container, I tried stashing these into it, but the command again executes on the inventory-hosts instead of the container from where Ansible is running. I've tried multiple methods to do the same, but am unable to find a way to have ansible do a set of operations within the container that it is running from. How do I handle this situation. Playbook: --- - name: get the nfs mounts reports hosts: 2017_CI become: true vars : nfs_results: "/tmp/{{ host_name }}.txt" host_name: "{{ inventory_hostname }}" tasks: - name: "get the list of nfs mounts on {{ host_name }} " #shell: 'findmnt -lo source,target,fstype,label,options,used -t nfs' #AIX nfsstat -m shell: 'mount -l -t nfs' register: nfs_output failed_when: "'FAILED' in nfs_output.stderr" - name: Store the nfs report ouput file copy: content: "{{ nfs_output.stdout }}\n" dest: "{{ nfs_results }}" owner: root group: root mode: 0777 force: yes register: nfs_results - name: Fetching the output to ansible host fetch: src: "/tmp/{{ inventory_hostname }}.txt" dest: "/tmp/" flat: yes - pause: minutes: 2 - name: copying file with permissions copy: src: "/tmp/{{ inventory_hostname }}.txt" dest: "/data/web/nfsmountInfo/" owner: root group: root mode: 0777 # - name: Transfer file from ServerA to ServerB # synchronize: # src: "/tmp/{{ inventory_hostname }}.txt" # dest: "/data/web/nfsmountInfo/" # mode: push # delegate_to: "localhost" # become: yes # become_user: root # - pause: # minutes: 1 # - name: git configuration fo email setup # git_config: # name: user.email # scope: global # value: 'xxxx@x.com' # - name: git configuration fo email setup # git_config: # name: user.name # scope: global # value: 'myUser' # - name: Add the files into staging workspace # shell: git add . # args: # chdir: '/home/jenkins/workspace/TestPipelines/NFSTestAnsible/nfsmountInfo/' # - name: Commit the changes # shell: git commit -m "Update the nfsmount reports" # args: # chdir: '/home/jenkins/workspace/TestSaddamPipelines/NFSTestAnsible/nfsmountInfo/' # - name: Set origin to include username and password. # shell: "git remote set-url origin [URL] # - name: Push to origin. # shell: "git push origin nfs-mnt-testing"
|
How to get ansible to look in the container that it's running from instead of the servers provided in the inventory Setup: Ansible initializes and runs from inside a docker container. A git repo which contains the playbooks and inventory files is loaded into this container. What the playbook does: The specific playbook that I'm working on runs a simple findmnt on all of the hosts listed in the inventory, and writes the output to a flat txt file, which I'm then fetching. Actual Issue: The ansible container isn't running in the detached mode and so when the run is done, there's no way to retrieve these result txt files. Since there is a git repo layered into the container, I tried stashing these into it, but the command again executes on the inventory-hosts instead of the container from where Ansible is running. I've tried multiple methods to do the same, but am unable to find a way to have ansible do a set of operations within the container that it is running from. How do I handle this situation. Playbook: --- - name: get the nfs mounts reports hosts: 2017_CI become: true vars : nfs_results: "/tmp/{{ host_name }}.txt" host_name: "{{ inventory_hostname }}" tasks: - name: "get the list of nfs mounts on {{ host_name }} " #shell: 'findmnt -lo source,target,fstype,label,options,used -t nfs' #AIX nfsstat -m shell: 'mount -l -t nfs' register: nfs_output failed_when: "'FAILED' in nfs_output.stderr" - name: Store the nfs report ouput file copy: content: "{{ nfs_output.stdout }}\n" dest: "{{ nfs_results }}" owner: root group: root mode: 0777 force: yes register: nfs_results - name: Fetching the output to ansible host fetch: src: "/tmp/{{ inventory_hostname }}.txt" dest: "/tmp/" flat: yes - pause: minutes: 2 - name: copying file with permissions copy: src: "/tmp/{{ inventory_hostname }}.txt" dest: "/data/web/nfsmountInfo/" owner: root group: root mode: 0777 # - name: Transfer file from ServerA to ServerB # synchronize: # src: "/tmp/{{ inventory_hostname }}.txt" # dest: "/data/web/nfsmountInfo/" # mode: push # delegate_to: "localhost" # become: yes # become_user: root # - pause: # minutes: 1 # - name: git configuration fo email setup # git_config: # name: user.email # scope: global # value: 'xxxx@x.com' # - name: git configuration fo email setup # git_config: # name: user.name # scope: global # value: 'myUser' # - name: Add the files into staging workspace # shell: git add . # args: # chdir: '/home/jenkins/workspace/TestPipelines/NFSTestAnsible/nfsmountInfo/' # - name: Commit the changes # shell: git commit -m "Update the nfsmount reports" # args: # chdir: '/home/jenkins/workspace/TestSaddamPipelines/NFSTestAnsible/nfsmountInfo/' # - name: Set origin to include username and password. # shell: "git remote set-url origin [URL] # - name: Push to origin. # shell: "git push origin nfs-mnt-testing"
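A common way to run steps on the machine Ansible itself runs on (here, the container) is either `delegate_to: localhost` on individual tasks or a separate play targeting localhost with a local connection. The play below is a minimal sketch of the second approach; the paths and the idea of post-processing the fetched reports are illustrative.

```yaml
# Sketch: a follow-up play that targets the controller (the Ansible container)
# instead of the inventory hosts, e.g. to handle the reports fetched to /tmp.
- name: post-process fetched reports inside the Ansible container
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: list the reports fetched to the controller
      command: ls /tmp
      register: fetched_reports

    - name: show what was fetched
      debug:
        var: fetched_reports.stdout_lines
```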
|
docker, rhel, ansible-2.x
| 0
| 893
| 1
|
https://stackoverflow.com/questions/54038558/how-to-get-ansible-to-look-in-the-container-that-its-running-from-instead-of-th
|
53,205,669
|
Azure VM RHEL Custom Script Extension: script.sh not found error
|
I've been trying to deploy a Custom Script Extension to a RHEL VM in Azure. The script just contains 'mkdir testfolder', and 'command to execute' is 'sh script.sh'. I keep getting the error 'script.sh can not be found'. How do I properly run a Custom Script Extension on a RHEL Azure VM? On the menu blade of the RHEL Azure VM, I selected Extensions, chose Custom Script Extension from the list of extensions, uploaded the shell script file (script.sh), and set the command to execute to: sh script.sh
|
Azure VM RHEL Custom Script Extension: script.sh not found error I've been trying to deploy a Custom Script Extension to a RHEL VM in Azure. The script just contains 'mkdir testfolder', and 'command to execute' is 'sh script.sh'. I keep getting the error 'script.sh can not be found'. How do I properly run a Custom Script Extension on a RHEL Azure VM? On the menu blade of the RHEL Azure VM, I selected Extensions, chose Custom Script Extension from the list of extensions, uploaded the shell script file (script.sh), and set the command to execute to: sh script.sh
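When the extension is applied through the CLI or a template instead of the portal, the script is referenced by a fileUris URL and downloaded next to the extension's working directory before commandToExecute runs, which avoids the "file not found" ambiguity. The sketch below uses the Linux Custom Script Extension v2; the resource names and storage URL are placeholders.

```bash
# Hedged sketch via the Azure CLI; replace the placeholders with real values.
az vm extension set \
  --resource-group myRG \
  --vm-name myRhelVm \
  --publisher Microsoft.Azure.Extensions \
  --name customScript \
  --settings '{"fileUris": ["https://mystorage.blob.core.windows.net/scripts/script.sh"], "commandToExecute": "sh script.sh"}'
```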
|
azure, azure-virtual-machine, rhel
| 0
| 335
| 1
|
https://stackoverflow.com/questions/53205669/azure-vm-rhel-custom-script-extension-script-sh-not-found-error
|
52,836,874
|
How can I achieve processing of a message to a topic only once?
|
I have two instances of my app subscribing to a topic. Since there are two instances (i.e. two subscribers), two events (messages) will be generated and written to a queue. (Now I have duplicate messages in the queue, and each one will be processed in turn.) But I want a solution where only one event is processed, or only one message is written to the queue. How can I achieve that? I must keep two subscribers in case one goes down.
|
How can I achieve processing of a message to a topic only once? I have two instances of my app subscribing to a topic. Since there are two instances (i.e. two subscribers), two events (messages) will be generated and written to a queue. (Now I have duplicate messages in the queue, and each one will be processed in turn.) But I want a solution where only one event is processed, or only one message is written to the queue. How can I achieve that? I must keep two subscribers in case one goes down.
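If the broker supports JMS 2.0 (recent IBM MQ releases do), a shared subscription lets both app instances subscribe under the same subscription name so that each message is delivered to only one of them. The sketch below assumes a configured ConnectionFactory; the topic and subscription names are placeholders.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;

// Sketch (JMS 2.0): two instances calling createSharedConsumer with the same
// subscription name split the topic's messages between them, so each message
// is processed once across the pair while both stay available.
public class SharedSubscriberSketch {
    public static void listen(ConnectionFactory connectionFactory) {
        JMSContext ctx = connectionFactory.createContext();
        Topic topic = ctx.createTopic("EVENTS.TOPIC");
        JMSConsumer consumer = ctx.createSharedConsumer(topic, "event-processors");
        consumer.setMessageListener(msg -> {
            // process the message; the other instance will not receive it
        });
    }
}
```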
|
java, jms, publish-subscribe, rhel, mq
| 0
| 483
| 1
|
https://stackoverflow.com/questions/52836874/how-can-i-achieve-processing-of-a-mesaage-to-a-topic-only-once
|
51,703,341
|
Discrepancy between the information returned by df -h and ll -h
|
df -h shows that a disk is full (100% used), but when I use ll -h the total size of the files comes to only about 2% of that. Can someone explain what is happening?
|
Discrepancy between the information returned by df -h and ll -h df -h shows that a disk is full (100% used), but when I use ll -h the total size of the files comes to only about 2% of that. Can someone explain what is happening?
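The usual causes of this mismatch are deleted files that a running process still holds open (df counts them, ls cannot see them) or data hidden underneath a mount point. The commands below help confirm which case it is.

```bash
# Files that have been deleted but are still held open (link count 0):
lsof +L1
# Per-directory usage limited to this filesystem, so it won't cross mount points:
du -shx /* 2>/dev/null | sort -h
```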
|
linux, shell, rhel
| 0
| 214
| 1
|
https://stackoverflow.com/questions/51703341/discrepancy-between-df-h-and-ll-h-returned-information
|
51,434,289
|
Docker EE 2.0 - Supported OSes
|
Note to Moderators: As this is not a programming question, please delete it if it is considered inappropriate for Stack Overflow. I am unable to figure out which RHEL versions are supported for Docker EE 2.0. This article ( Compatibility matrix ) states that RHEL versions 7.3-7.5 are supported. However, the article Docker EE end to end Install states the following: Also, make sure the hosts are running one of these operating systems: a maintained version of CentOS 7 (archived versions aren't supported or tested); Red Hat Enterprise Linux 7.0, 7.1, 7.2, or 7.3; Ubuntu 14.04 LTS or 16.04 LTS; SUSE Linux Enterprise 12; Oracle Linux 7.3. The two don't seem to be consistent with each other.
|
Docker EE 2.0 - Supported OSes Note to Moderators: As this is not a programming question, please delete it if it is considered inappropriate for Stack Overflow. I am unable to figure out which RHEL versions are supported for Docker EE 2.0. This article ( Compatibility matrix ) states that RHEL versions 7.3-7.5 are supported. However, the article Docker EE end to end Install states the following: Also, make sure the hosts are running one of these operating systems: a maintained version of CentOS 7 (archived versions aren't supported or tested); Red Hat Enterprise Linux 7.0, 7.1, 7.2, or 7.3; Ubuntu 14.04 LTS or 16.04 LTS; SUSE Linux Enterprise 12; Oracle Linux 7.3. The two don't seem to be consistent with each other.
|
docker, rhel, docker-ee
| 0
| 116
| 2
|
https://stackoverflow.com/questions/51434289/docker-ee-2-0-supported-oses
|
35,531,661
|
Using env variable in Spring Boot's application.properties
|
We are working on a Spring Boot web application, and the database we are using is MySQL; the setup we have is we first test it locally (means we need to install MySQL on our PC); then we push to Bitbucket; Jenkins automatically detects the new push to Bitbucket and does a build on it (for Jenkins mvn build to pass we also need to install MySQL on the virtual machines that is running Jenkins). if Jenkins build passes we push the code to our application on OpenShift (using the Openshift deployment plugin on Jenkins). The problem we have, as you may have already figured it out, is that: in application.properties we can not hard code the MySQL info. Since our project will be running in 3 different places ( local , Jenkins , and OpenShift ), we need to make the datasource field dynamic in application.properties (we know there are different ways of doing it but we are working on this solution for now). spring.datasource.url = spring.datasource.username = spring.datasource.password = The solution we came up with is we create system environment variables locally and in the Jenkins VM (naming them the same way OpenShift names them), and assigning them the right values respectively: export OPENSHIFT_MYSQL_DB_HOST="jdbc:mysql://localhost" export OPENSHIFT_MYSQL_DB_PORT="3306" export OPENSHIFT_MYSQL_DB_USERNAME="root" export OPENSHIFT_MYSQL_DB_PASSWORD="123asd" We have done this and it works. We have also checked with Map<String, String> env = System.getenv(); that the environment variables can be made into java variables as such: String password = env.get("OPENSHIFT_MYSQL_DB_PASSWORD"); String userName = env.get("OPENSHIFT_MYSQL_DB_USERNAME"); String sqlURL = env.get("OPENSHIFT_MYSQL_DB_HOST"); String sqlPort = env.get("OPENSHIFT_MYSQL_DB_PORT"); Now the only thing left is we need to use these java variables in our application.properties , and that is what we are having trouble with. In which folder, and how, do we need to assign the password , userName , sqlURL , and sqlPort variables for application.properties to be able to see them and how do we include them in application.properties ? We have tried many things one of them being: spring.datasource.url = ${sqlURL}:${sqlPort}/"nameofDB" spring.datasource.username = ${userName} spring.datasource.password = ${password} No luck so far. We are probably not putting these environment variables in the right class/folder or are using them incorrectly in application.properties .
|
Using env variable in Spring Boot's application.properties We are working on a Spring Boot web application, and the database we are using is MySQL; the setup we have is we first test it locally (means we need to install MySQL on our PC); then we push to Bitbucket; Jenkins automatically detects the new push to Bitbucket and does a build on it (for Jenkins mvn build to pass we also need to install MySQL on the virtual machines that is running Jenkins). if Jenkins build passes we push the code to our application on OpenShift (using the Openshift deployment plugin on Jenkins). The problem we have, as you may have already figured it out, is that: in application.properties we can not hard code the MySQL info. Since our project will be running in 3 different places ( local , Jenkins , and OpenShift ), we need to make the datasource field dynamic in application.properties (we know there are different ways of doing it but we are working on this solution for now). spring.datasource.url = spring.datasource.username = spring.datasource.password = The solution we came up with is we create system environment variables locally and in the Jenkins VM (naming them the same way OpenShift names them), and assigning them the right values respectively: export OPENSHIFT_MYSQL_DB_HOST="jdbc:mysql://localhost" export OPENSHIFT_MYSQL_DB_PORT="3306" export OPENSHIFT_MYSQL_DB_USERNAME="root" export OPENSHIFT_MYSQL_DB_PASSWORD="123asd" We have done this and it works. We have also checked with Map<String, String> env = System.getenv(); that the environment variables can be made into java variables as such: String password = env.get("OPENSHIFT_MYSQL_DB_PASSWORD"); String userName = env.get("OPENSHIFT_MYSQL_DB_USERNAME"); String sqlURL = env.get("OPENSHIFT_MYSQL_DB_HOST"); String sqlPort = env.get("OPENSHIFT_MYSQL_DB_PORT"); Now the only thing left is we need to use these java variables in our application.properties , and that is what we are having trouble with. In which folder, and how, do we need to assign the password , userName , sqlURL , and sqlPort variables for application.properties to be able to see them and how do we include them in application.properties ? We have tried many things one of them being: spring.datasource.url = ${sqlURL}:${sqlPort}/"nameofDB" spring.datasource.username = ${userName} spring.datasource.password = ${password} No luck so far. We are probably not putting these environment variables in the right class/folder or are using them incorrectly in application.properties .
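Spring Boot treats OS environment variables as one of its property sources, so the exported OPENSHIFT_* names can be referenced directly as placeholders in application.properties; no Java code or extra class is needed. A minimal sketch, reusing the names from the question:

```properties
# Environment variables resolve as placeholders; "nameofDB" is from the
# original example and should be replaced with the real database name.
spring.datasource.url=${OPENSHIFT_MYSQL_DB_HOST}:${OPENSHIFT_MYSQL_DB_PORT}/nameofDB
spring.datasource.username=${OPENSHIFT_MYSQL_DB_USERNAME}
spring.datasource.password=${OPENSHIFT_MYSQL_DB_PASSWORD}
```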
|
java, mysql, spring, spring-mvc, openshift
| 389
| 870,656
| 11
|
https://stackoverflow.com/questions/35531661/using-env-variable-in-spring-boots-application-properties
|
34,848,422
|
How can I debug "ImagePullBackOff"?
|
All of a sudden, I cannot deploy some images which could be deployed before. I got the following pod status: [root@webdev2 origin]# oc get pods NAME READY STATUS RESTARTS AGE arix-3-yjq9w 0/1 ImagePullBackOff 0 10m docker-registry-2-vqstm 1/1 Running 0 2d router-1-kvjxq 1/1 Running 0 2d The application just won't start. The pod is not trying to run the container. From the Event page, I have got Back-off pulling image "172.30.84.25:5000/default/arix@sha256:d326 . I have verified that I can pull the image with the tag with docker pull . I have also checked the log of the last container. It was closed for some reason. I think the pod should at least try to restart it. I have run out of ideas to debug the issues. What can I check more?
|
How can I debug "ImagePullBackOff"? All of a sudden, I cannot deploy some images which could be deployed before. I got the following pod status: [root@webdev2 origin]# oc get pods NAME READY STATUS RESTARTS AGE arix-3-yjq9w 0/1 ImagePullBackOff 0 10m docker-registry-2-vqstm 1/1 Running 0 2d router-1-kvjxq 1/1 Running 0 2d The application just won't start. The pod is not trying to run the container. From the Event page, I have got Back-off pulling image "172.30.84.25:5000/default/arix@sha256:d326 . I have verified that I can pull the image with the tag with docker pull . I have also checked the log of the last container. It was closed for some reason. I think the pod should at least try to restart it. I have run out of ideas to debug the issues. What can I check more?
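The pod's event stream usually carries the registry's exact error (authentication, DNS, missing tag or digest), so describing the pod is the standard first step. The image reference below is a placeholder for whatever the event message reports.

```bash
# First stops for ImagePullBackOff: the Events section shows the actual pull error.
oc describe pod arix-3-yjq9w            # or: kubectl describe pod <pod-name>
oc get events --sort-by=.metadata.creationTimestamp | tail
# Then confirm the node can pull the exact reference the pod uses:
docker pull 172.30.84.25:5000/default/arix@sha256:<digest-from-the-event>
```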
|
kubernetes, openshift, openshift-origin
| 235
| 333,856
| 14
|
https://stackoverflow.com/questions/34848422/how-can-i-debug-imagepullbackoff
|
48,956,049
|
What is the difference between persistent volume (PV) and persistent volume claim (PVC) in simple terms?
|
What is the difference between persistent volume (PV) and persistent volume claim (PVC) in Kubernetes/ Openshift by referring to documentation? What is the difference between both in simple terms?
|
What is the difference between persistent volume (PV) and persistent volume claim (PVC) in simple terms? What is the difference between persistent volume (PV) and persistent volume claim (PVC) in Kubernetes/ Openshift by referring to documentation? What is the difference between both in simple terms?
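A minimal YAML pair makes the relationship concrete: the PV is the cluster-scoped piece of storage supplied by an administrator or a provisioner, while the PVC is a namespaced request that gets bound to a matching PV, and pods mount the claim rather than the volume. The NFS server and sizes below are placeholders.

```yaml
# Supply side: a PersistentVolume describing real backing storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-10g
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  nfs:
    server: nfs.example.com
    path: /exports/data
---
# Demand side: a PersistentVolumeClaim that binds to a matching PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```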
|
kubernetes, openshift, storage, persistent-volumes, persistent-volume-claims
| 160
| 87,818
| 11
|
https://stackoverflow.com/questions/48956049/what-is-the-difference-between-persistent-volume-pv-and-persistent-volume-clai
|
28,896,733
|
rhc setup gives error no such file dl/import
|
I'm installing openshift client tools as described: [URL] . On step 'Setting up Your Machine' I got error: rhc setup C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' : cannot load such file -- dl/import (LoadError) Full stack trace: C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require': cannot load such file -- dl/import (LoadError) from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/pageant.rb:1:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/agent/socket.rb:5:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/agent.rb:22:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/key_manager.rb:4:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/session.rb:4:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh.rb:11:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/ssh_helpers.rb:18:in <top (required)>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:77:in <class:Wizard>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:7:in <module:RHC>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:6:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/base.rb:4:in <top (required)>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/account.rb:2:in <module:Commands>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/account.rb:1:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:189:in block in load' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:188:in each' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:188:in load' from 
C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/cli.rb:36:in start' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/bin/rhc:20:in <top (required)>' from C:/Ruby22-x64/bin/rhc:23:in load' from C:/Ruby22-x64/bin/rhc:23:in `<main>' I found same problem: [URL] It's suggest to replace DL with Fiddle. How I can get working rhc?
|
rhc setup gives error no such file dl/import I'm installing openshift client tools as described: [URL] . On step 'Setting up Your Machine' I got error: rhc setup C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' : cannot load such file -- dl/import (LoadError) Full stack trace: C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require': cannot load such file -- dl/import (LoadError) from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/pageant.rb:1:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/agent/socket.rb:5:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/agent.rb:22:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/key_manager.rb:4:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/session.rb:4:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh.rb:11:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/ssh_helpers.rb:18:in <top (required)>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:77:in <class:Wizard>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:7:in <module:RHC>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:6:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/base.rb:4:in <top (required)>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/account.rb:2:in <module:Commands>' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/account.rb:1:in <top (required)>' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in require' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:189:in block in load' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:188:in each' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:188:in 
load' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/cli.rb:36:in start' from C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/bin/rhc:20:in <top (required)>' from C:/Ruby22-x64/bin/rhc:23:in load' from C:/Ruby22-x64/bin/rhc:23:in `<main>' I found same problem: [URL] It's suggest to replace DL with Fiddle. How I can get working rhc?
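The failure comes from net-ssh's pageant.rb requiring the old DL library, which Ruby 2.2 removed in favour of Fiddle. A hedged first step is simply to confirm which Ruby and net-ssh versions rhc is loading and try a newer net-ssh build; whether that is enough depends on what the installed rhc gem pins.

```bash
# Versions shown by these commands will vary; they only establish the baseline.
ruby -v
gem list net-ssh
gem list rhc
gem update net-ssh
rhc setup
```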
|
ruby, openshift
| 158
| 43,808
| 7
|
https://stackoverflow.com/questions/28896733/rhc-setup-gives-error-no-such-file-dl-import
|
12,657,168
|
Can I use my existing git repo with openshift?
|
Is it necessary to have the git repo on OpenShift only? I already have a Bitbucket / GitHub git repo and would prefer to push there only. Can I simply hook into it so that OpenShift gets notified? Or, for simplicity, can I push only to GitHub and, when I want to deploy, do something with OpenShift? I did check this, but it confused me: it talks about merging the existing and the new (OpenShift) git repos?
|
Can I use my existing git repo with openshift? Is it necessary to have the git repo on OpenShift only? I already have a Bitbucket / GitHub git repo and would prefer to push there only. Can I simply hook into it so that OpenShift gets notified? Or, for simplicity, can I push only to GitHub and, when I want to deploy, do something with OpenShift? I did check this, but it confused me: it talks about merging the existing and the new (OpenShift) git repos?
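A common pattern from the OpenShift v2 / rhc era is to keep GitHub or Bitbucket as "origin" and add the application's OpenShift git URL as a second remote, pushing to it only when deploying. The URL below is a placeholder for the Git URL reported by `rhc app show`.

```bash
# Sketch: deploy on demand by pushing to a second remote.
git remote add openshift ssh://<app-uuid>@myapp-mynamespace.rhcloud.com/~/git/myapp.git/
git push openshift master
```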
|
git, openshift
| 104
| 55,516
| 11
|
https://stackoverflow.com/questions/12657168/can-i-use-my-existing-git-repo-with-openshift
|
16,046,038
|
OpenShift rhc setup using multiple accounts
|
I have two accounts on Openshift platform. How can I setup my computer so that I can manage both of them with rhc ? I cannot find any relevant option in the command line arguments.
|
OpenShift rhc setup using multiple accounts I have two accounts on Openshift platform. How can I setup my computer so that I can manage both of them with rhc ? I cannot find any relevant option in the command line arguments.
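One hedged approach is to keep a separate configuration file per account and pass both the login and the config file explicitly on each rhc invocation; the file names below are illustrative and assume rhc's global --rhlogin and --config options.

```bash
rhc setup --rhlogin first@example.com  --config ~/.openshift/first.conf
rhc setup --rhlogin second@example.com --config ~/.openshift/second.conf
# Later commands pick the account the same way:
rhc apps  --rhlogin second@example.com --config ~/.openshift/second.conf
```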
|
ssh, installation, openshift, ssh-keys, openshift-client-tools
| 94
| 27,604
| 7
|
https://stackoverflow.com/questions/16046038/openshift-rhc-setup-using-multiple-accounts
|
16,840,342
|
How does docker compare to openshift?
|
Docker and OpenShift are both frameworks to implement a PaaS service. How do they compare in architecture and features?
|
How does docker compare to openshift? Docker and OpenShift are both frameworks to implement a PaaS service. How do they compare in architecture and features?
|
openshift, docker
| 78
| 70,136
| 7
|
https://stackoverflow.com/questions/16840342/how-does-docker-compare-to-openshift
|