| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
47,689,456
|
I was trying to connect to an Oracle database using Python, like below.
```
import cx_Oracle
conn = cx_Oracle.connect('user/password@host:port/database')
```
I faced an error when connecting:
DatabaseError: DPI-1047: 64-bit Oracle Client library cannot be loaded: "libclntsh.so: cannot open shared object file: No such file or directory". See <https://oracle.github.io/odpi/doc/installation.html#linux> for help.
I've been struggling to figure it out. I used my username, password, host, port and database ('orcl'), for example,
`'admin/admin@10.10.10.10:1010/orcl'`.
Why couldn't it connect?
Ahh, btw, I'm running all the code in Azure Notebooks.
|
2017/12/07
|
[
"https://Stackoverflow.com/questions/47689456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3176741/"
] |
That error indicates that you are missing a 64-bit Oracle client installation or it hasn't been configured correctly. Take a look at the link mentioned in the error message. It will give instructions on how to perform the Oracle client installation and configuration.
[Update on behalf of Anthony: his latest cx\_Oracle release doesn't need Oracle Client libraries, so you won't see the DPI-1047 error if you upgrade. The driver was renamed to python-oracledb, but the API still supports the Python DB API 2.0 specification. See the [homepage](https://oracle.github.io/python-oracledb/).]
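For reference, a minimal sketch using the renamed driver, python-oracledb, which runs in "thin" mode by default and needs no Oracle Client libraries (the credentials and address below are just the placeholders from the question):
```py
import oracledb

# Placeholder credentials from the question; replace with your own.
conn = oracledb.connect(user="admin", password="admin",
                        dsn="10.10.10.10:1010/orcl")
print(conn.version)  # prints the database version if the connection works
```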
|
This error comes when the Oracle Client is not installed, or `LD_LIBRARY_PATH` does not point to the directory where `libclntsh.so` is present.
If you have the Oracle Client installed, then search for `libclntsh.so` and set `LD_LIBRARY_PATH` accordingly, for example:
`export LD_LIBRARY_PATH=/app/bds/parcels/ORACLE_INSTANT_CLIENT/instantclient_11_2:$LD_LIBRARY_PATH`
|
47,689,456
|
I was trying to connect to an Oracle database using Python, like below.
```
import cx_Oracle
conn = cx_Oracle.connect('user/password@host:port/database')
```
I faced an error when connecting:
DatabaseError: DPI-1047: 64-bit Oracle Client library cannot be loaded: "libclntsh.so: cannot open shared object file: No such file or directory". See <https://oracle.github.io/odpi/doc/installation.html#linux> for help.
I've been struggling to figure it out. I used my username, password, host, port and database ('orcl'), for example,
`'admin/admin@10.10.10.10:1010/orcl'`.
Why couldn't it connect?
Ahh, btw, I'm running all the code in Azure Notebooks.
|
2017/12/07
|
[
"https://Stackoverflow.com/questions/47689456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3176741/"
] |
That error indicates that you are missing a 64-bit Oracle client installation or it hasn't been configured correctly. Take a look at the link mentioned in the error message. It will give instructions on how to perform the Oracle client installation and configuration.
[Update on behalf of Anthony: his latest cx\_Oracle release doesn't need Oracle Client libraries, so you won't see the DPI-1047 error if you upgrade. The driver was renamed to python-oracledb, but the API still supports the Python DB API 2.0 specification. See the [homepage](https://oracle.github.io/python-oracledb/).]
|
Here is a full program to connect to Oracle using Python.
First, you need to install cx\_Oracle. To install it, run the command below:
`pip install cx_Oracle`
```py
import cx_Oracle

def get_database_connection():
    con = None
    cursor = None
    try:
        host = 'hostName'
        port = 'portNumber'
        serviceName = 'service name of your database'
        user = 'userName'
        password = 'password'
        dsn = cx_Oracle.makedsn(host, port, service_name=serviceName)
        con = cx_Oracle.connect(user, password, dsn)
        cursor = con.cursor()
        query = "select * from table"
        cursor.execute(query)
        for c in cursor:
            print(c)
    except cx_Oracle.DatabaseError as e:
        print("There is a problem with Oracle", e)
    finally:
        # Initialized to None above, so these checks are safe even if
        # the connection failed before cursor/con were created.
        if cursor:
            cursor.close()
        if con:
            con.close()

get_database_connection()
```
|
47,689,456
|
I was trying to connect to an Oracle database using Python, like below.
```
import cx_Oracle
conn = cx_Oracle.connect('user/password@host:port/database')
```
I faced an error when connecting:
DatabaseError: DPI-1047: 64-bit Oracle Client library cannot be loaded: "libclntsh.so: cannot open shared object file: No such file or directory". See <https://oracle.github.io/odpi/doc/installation.html#linux> for help.
I've been struggling to figure it out. I used my username, password, host, port and database ('orcl'), for example,
`'admin/admin@10.10.10.10:1010/orcl'`.
Why couldn't it connect?
Ahh, btw, I'm running all the code in Azure Notebooks.
|
2017/12/07
|
[
"https://Stackoverflow.com/questions/47689456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3176741/"
] |
This error comes when the Oracle Client is not installed, or `LD_LIBRARY_PATH` does not point to the directory where `libclntsh.so` is present.
If you have the Oracle Client installed, then search for `libclntsh.so` and set `LD_LIBRARY_PATH` accordingly, for example:
`export LD_LIBRARY_PATH=/app/bds/parcels/ORACLE_INSTANT_CLIENT/instantclient_11_2:$LD_LIBRARY_PATH`
|
Here is a full program to connect to Oracle using Python.
First, you need to install cx\_Oracle. To install it, run the command below:
`pip install cx_Oracle`
```py
import cx_Oracle

def get_database_connection():
    con = None
    cursor = None
    try:
        host = 'hostName'
        port = 'portNumber'
        serviceName = 'service name of your database'
        user = 'userName'
        password = 'password'
        dsn = cx_Oracle.makedsn(host, port, service_name=serviceName)
        con = cx_Oracle.connect(user, password, dsn)
        cursor = con.cursor()
        query = "select * from table"
        cursor.execute(query)
        for c in cursor:
            print(c)
    except cx_Oracle.DatabaseError as e:
        print("There is a problem with Oracle", e)
    finally:
        # Initialized to None above, so these checks are safe even if
        # the connection failed before cursor/con were created.
        if cursor:
            cursor.close()
        if con:
            con.close()

get_database_connection()
```
|
53,649,039
|
I have a Databricks notebook setup that works as follows:
* pyspark connection details to Blob storage account
* Read file through spark dataframe
* convert to pandas Df
* data modelling on pandas Df
* convert to spark Df
* write to blob storage in single file
My problem is that you cannot name the output file; I need a static CSV filename.
Is there a way to rename this in PySpark?
```
## Blob Storage account information
storage_account_name = ""
storage_account_access_key = ""
## File location and File type
file_location = "path/.blob.core.windows.net/Databricks_Files/input"
file_location_new = "path/.blob.core.windows.net/Databricks_Files/out"
file_type = "csv"
## Connection string to connect to blob storage
spark.conf.set(
"fs.azure.account.key."+storage_account_name+".blob.core.windows.net",
storage_account_access_key)
```
Followed by outputting file after data transformation
```
dfspark.coalesce(1).write.format('com.databricks.spark.csv') \
.mode('overwrite').option("header", "true").save(file_location_new)
```
The file is then written as **"part-00000-tid-336943946930983.....csv"**, whereas the goal is to have **"Output.csv"**.
Another approach I looked at was just recreating this in Python, but I have not yet come across documentation on how to output the file back to Blob storage.
I know the method to retrieve from Blob storage is *.get\_blob\_to\_path* via [microsoft.docs](https://learn.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob#blob-dataexploration)
Any help here is greatly appreciated.
|
2018/12/06
|
[
"https://Stackoverflow.com/questions/53649039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6050134/"
] |
>
> I know the compiler is supposed to generate an error for templates that are erroneous for any template parameter even if not instantiated.
>
>
>
That is not the case, though. If no instantiation can be generated for a template, then the program is ill-formed, **no diagnostic required**(1). So the program is ill-formed regardless of whether you get an error or it compiles "successfully."
Looking at it from the other perspective, a compiler must not allow a warning-turned-error to affect SFINAE, as that could change the semantics of a valid program and would thus make the compiler non-conforming. So if a compiler wants to diagnose a warning as an error, it must do this by stopping compilation and not by introducing a substitution failure.
In other words, `-Werror` can make the compiler reject a well-formed program (that is its intended purpose, after all), but it would be a compiler bug if it changed the semantics of one.
---
(1) Quoting C++17 (N4659), [temp.res] 17.6/8:
>
> The program is
> ill-formed, no diagnostic required, if:
>
>
> * no valid specialization can be generated for a template ... and the template is not instantiated, or
> * ...
>
>
>
|
While it's largely a quality-of-implementation issue, `-Werror` can indeed (and does) interfere with SFINAE. Here is a more involved example to test it:
```
#include <type_traits>

template <typename T>
constexpr bool foo() {
    if (false) {
        T a;
    }
    return false;
}

template<typename T, typename = void> struct Check {};
template<typename T> struct Check<T, std::enable_if_t<foo<T>()>> {};

int main() {
    Check<int> c;
}
```
The line `T a;` can trigger that warning (and error), even though the branch is dead (it's dead on purpose, so that `foo` is a constexpr function mostly regardless of `T`). Now, according to the standard itself, that is a well-formed program.
But because [Clang](http://coliru.stacked-crooked.com/a/fb7678387e33b97b) and [GCC](http://coliru.stacked-crooked.com/a/d75bfb09c9eb92c9) cause an error there, and that error is in the non-immediate context of the `Check` specialization, we get a hard error. Even though according to the standard itself this should just fall back to the primary template due to substitution failure in the immediate context only.
|
40,476,046
|
I'm an amateur Python programmer trying to use the Django framework for an Android app backend. Everything is okay, but my problem is how to pass the image in the FileField to JSON. I have tried using SerializerMethodField as described in the REST framework documentation, but it didn't work. Sorry if this question is off track, but I seriously need help.
This is from my serializer class:
```
class DealSerializer(serializers.ModelSerializer):
    class Meta:
        model = Deal
        image = serializers.SerializerMethodField()
        fields = [
            'title',
            'description',
            'image'
        ]

    def get_image(obj):
        return obj.image.url
```
And this is my view:
```
class DealList(APIView):
    def get(self, request):
        deals = Deal.objects.all()
        serializer = DealSerializer(deals, many=True)
        return Response(serializer)
```
|
2016/11/07
|
[
"https://Stackoverflow.com/questions/40476046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6214350/"
] |
If you want to check if two files are equal, you can check the exit code of `diff -q` (or `cmp`). This is faster since it doesn't require finding the exact differences:
```
if diff -q file1 file2 > /dev/null
then
echo "The files are equal"
else
echo "The files are different or inaccessible"
fi
```
All Unix tools have an exit code, and it's usually faster, easier and more robust to check that than to capture and compare their output in a text-based way.
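For comparison, the same equal/not-equal check can be done from Python's standard library; this is a sketch using `filecmp` with hypothetical file names, where `shallow=False` forces a byte-by-byte comparison:
```py
import filecmp

# shallow=False compares file contents, not just os.stat() signatures.
if filecmp.cmp('file1', 'file2', shallow=False):
    print("The files are equal")
else:
    print("The files are different")
```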
|
You can chain commands with the logical AND operator (`&&`):
For one command:
```
diff -q file1 file2 > /dev/null && echo "The files are equal"
```
Or for multiple commands:
```
diff -q file1 file2 > /dev/null && {
echo "The files are equal"; echo "Other command"
echo "More other command"
}
```
|
56,181,987
|
I installed PySpark on Amazon AWS using instructions:
<https://medium.com/@josemarcialportilla/getting-spark-python-and-jupyter-notebook-running-on-amazon-ec2-dec599e1c297>
This works fine:
```py
import pyspark as SparkContext
```
This gives error:
```
sc = SparkContext()
TypeError Traceback (most recent call last)
<ipython-input-3-2dfc28fca47d> in <module>
----> 1 sc = SparkContext()
TypeError: 'module' object is not callable
```
|
2019/05/17
|
[
"https://Stackoverflow.com/questions/56181987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11270319/"
] |
You can just use the copy constructor of `ArrayList` which accepts a `Collection<? extends E>`:
```
List<GtbEtobsOYenibelge> listOnayStatu = servis.listOnayStatus4Belge(user.getBirimId().getId());
List<GtbEtobsOYenibelge> cloneOnayStatu = new ArrayList<>(listOnayStatu);
```
That way you create a copy of `listOnayStatu`. Also, you should not rely on `clone()` anymore, as it is widely considered to have been a bad design decision.
|
The method `servis.listOnayStatus4Belge` returns a [Vector](https://docs.oracle.com/javase/8/docs/api/index.html). A `Vector` implements the `List` interface but is not an `ArrayList`. Therefore you can't cast it to one.
Looking at the problematic statement:
```
cloneOnayStatu = ((List) ((ArrayList) listOnayStatu).clone());
```
You are copying a Vector and assigning it to `cloneOnayStatu`. You should be able to do it like this:
```
cloneOnayStatu = (List<GtbEtobsOYenibelge>) ((Vector<GtbEtobsOYenibelge>)listOnayStatu).clone();
```
The `clone()` method call will return another `Vector`, but its declared return type is `Object`. Therefore you need to cast it to a `List` for the assignment to work.
However, `clone()` is not used much these days. You can have more control over what kind of List you want the result to be by using a constructor, such as the following:
```
cloneOnayStatu = new ArrayList<>(listOnayStatu);
```
|
56,181,987
|
I installed PySpark on Amazon AWS using instructions:
<https://medium.com/@josemarcialportilla/getting-spark-python-and-jupyter-notebook-running-on-amazon-ec2-dec599e1c297>
This works fine:
```py
import pyspark as SparkContext
```
This gives error:
```
sc = SparkContext()
TypeError Traceback (most recent call last)
<ipython-input-3-2dfc28fca47d> in <module>
----> 1 sc = SparkContext()
TypeError: 'module' object is not callable
```
|
2019/05/17
|
[
"https://Stackoverflow.com/questions/56181987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11270319/"
] |
You can just use the copy constructor of `ArrayList` which accepts a `Collection<? extends E>`:
```
List<GtbEtobsOYenibelge> listOnayStatu = servis.listOnayStatus4Belge(user.getBirimId().getId());
List<GtbEtobsOYenibelge> cloneOnayStatu = new ArrayList<>(listOnayStatu);
```
That way you create a copy of `listOnayStatu`. Also, you should not rely on `clone()` anymore, as it is widely considered to have been a bad design decision.
|
You can save it as a new `ArrayList`:
```
List<GtbEtobsOYenibelge> listOnayStatu = new ArrayList<>();
List<GtbEtobsOYenibelge> cloneOnayStatu;
listOnayStatu = servis.listOnayStatus4Belge(user.getBirimId().getId());
cloneOnayStatu = new ArrayList<>(listOnayStatu);
```
Or you can use `addAll`, after initializing `cloneOnayStatu` to an empty list:
```
cloneOnayStatu = new ArrayList<>();
cloneOnayStatu.addAll(listOnayStatu);
```
|
41,492,878
|
I tried to install the "scholarly" package, but I keep receiving this error:
```
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_openssl.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o
build/temp.linux-x86_64-2.7/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools,tokenize;__file__='/tmp/pip-build-0OXGEx/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EdgZGB-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-0OXGEx/cryptography/
```
I already tried the solutions in the following post, but it didn't work:
[pip install lxml error](https://stackoverflow.com/questions/5178416/pip-install-lxml-error/5178444)
|
2017/01/05
|
[
"https://Stackoverflow.com/questions/41492878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5413088/"
] |
I had the same problem.
This one helped me:
```
sudo apt-get install build-essential libssl-dev libffi-dev python-dev
```
If you are using `python3`, try replacing `python-dev` with `python3-dev`.
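These dev packages provide the headers that the C extension build needs (`libssl-dev` for `openssl/opensslv.h`, `python-dev` for `Python.h`). A small sketch (standard library only) showing where the compiler expects to find `Python.h`:
```py
import sysconfig

# This directory should contain Python.h once python-dev/python3-dev is installed.
print(sysconfig.get_paths()['include'])
```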
|
Install lib32ncurses5-dev:
```
sudo apt-get install lib32ncurses5-dev
```
|
41,492,878
|
I tried to install the "scholarly" package, but I keep receiving this error:
```
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_openssl.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o
build/temp.linux-x86_64-2.7/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools,tokenize;__file__='/tmp/pip-build-0OXGEx/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EdgZGB-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-0OXGEx/cryptography/
```
I already tried the solutions in the following post, but it didn't work:
[pip install lxml error](https://stackoverflow.com/questions/5178416/pip-install-lxml-error/5178444)
|
2017/01/05
|
[
"https://Stackoverflow.com/questions/41492878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5413088/"
] |
I had the same problem.
This one helped me:
```
sudo apt-get install build-essential libssl-dev libffi-dev python-dev
```
If you are using `python3`, try replacing `python-dev` with `python3-dev`.
|
In my case the exception was:
```
#include <snappy-c.h>
^~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit
status 1
```
And I solved it by installing these libraries:
```
sudo apt-get install libsnappy-dev
pip3 install python-snappy
```
[Here](https://github.com/andrix/python-snappy/issues/58) is a great explanation of the cause of the exception and how to get rid of it.
|
41,492,878
|
I tried to install the "scholarly" package, but I keep receiving this error:
```
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_openssl.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o
build/temp.linux-x86_64-2.7/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools,tokenize;__file__='/tmp/pip-build-0OXGEx/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EdgZGB-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-0OXGEx/cryptography/
```
I already tried the solutions in the following post, but it didn't work:
[pip install lxml error](https://stackoverflow.com/questions/5178416/pip-install-lxml-error/5178444)
|
2017/01/05
|
[
"https://Stackoverflow.com/questions/41492878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5413088/"
] |
I had the same problem.
This one helped me:
```
sudo apt-get install build-essential libssl-dev libffi-dev python-dev
```
If you are using `python3`, try replacing `python-dev` with `python3-dev`.
|
In a newly created Python 3.6 virtual environment, while trying to run my module's `setup.py`, the following command solved the error:
`sudo apt-get install python3.6-dev`
For me the error was:
```
... Python.h: No such file or directory
18 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
```
The remaining packages (`build-essential`, `libssl-dev`, `libffi-dev`) were already installed from a previous time.
|
41,492,878
|
I tried to install the "scholarly" package, but I keep receiving this error:
```
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_openssl.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o
build/temp.linux-x86_64-2.7/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools,tokenize;__file__='/tmp/pip-build-0OXGEx/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EdgZGB-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-0OXGEx/cryptography/
```
I already tried the solutions in the following post, but it didn't work:
[pip install lxml error](https://stackoverflow.com/questions/5178416/pip-install-lxml-error/5178444)
|
2017/01/05
|
[
"https://Stackoverflow.com/questions/41492878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5413088/"
] |
Install lib32ncurses5-dev:
```
sudo apt-get install lib32ncurses5-dev
```
|
In my case the exception was:
```
#include <snappy-c.h>
^~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit
status 1
```
And I solved it by installing these libraries:
```
sudo apt-get install libsnappy-dev
pip3 install python-snappy
```
[Here](https://github.com/andrix/python-snappy/issues/58) is a great explanation of the cause of the exception and how to get rid of it.
|
41,492,878
|
I tried to install the "scholarly" package, but I keep receiving this error:
```
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_openssl.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o
build/temp.linux-x86_64-2.7/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools,tokenize;__file__='/tmp/pip-build-0OXGEx/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EdgZGB-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-0OXGEx/cryptography/
```
I already tried the solutions in the following post, but it didn't work:
[pip install lxml error](https://stackoverflow.com/questions/5178416/pip-install-lxml-error/5178444)
|
2017/01/05
|
[
"https://Stackoverflow.com/questions/41492878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5413088/"
] |
Install lib32ncurses5-dev:
```
sudo apt-get install lib32ncurses5-dev
```
|
In a newly created Python 3.6 virtual environment, while trying to run my module's `setup.py`, the following command solved the error:
`sudo apt-get install python3.6-dev`
For me the error was:
```
... Python.h: No such file or directory
18 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
```
The remaining packages (`build-essential`, `libssl-dev`, `libffi-dev`) were already installed from a previous time.
|
41,492,878
|
I tried to install the "scholarly" package, but I keep receiving this error:
```
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/_openssl.c -o build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_openssl.o
build/temp.linux-x86_64-2.7/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools,tokenize;__file__='/tmp/pip-build-0OXGEx/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-EdgZGB-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-0OXGEx/cryptography/
```
I already tried the solutions in the following post, but it didn't work:
[pip install lxml error](https://stackoverflow.com/questions/5178416/pip-install-lxml-error/5178444)
|
2017/01/05
|
[
"https://Stackoverflow.com/questions/41492878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5413088/"
] |
In a newly created Python 3.6 virtual environment, while trying to run my module's `setup.py`, the following command solved the error:
`sudo apt-get install python3.6-dev`
For me the error was:
```
... Python.h: No such file or directory
18 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
```
The remaining packages (`build-essential`, `libssl-dev`, `libffi-dev`) were already installed from a previous time.
|
In my case the exception was:
```
#include <snappy-c.h>
^~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit
status 1
```
And I solved it by installing these libraries:
```
sudo apt-get install libsnappy-dev
pip3 install python-snappy
```
[Here](https://github.com/andrix/python-snappy/issues/58) is a great explanation of the cause of the exception and how to get rid of it.
|
39,983,159
|
This is the code that results in an error message:
```
import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.parse(data)
```
The error:

I'm new to Python. I did read the documentation and a couple of tutorials, but clearly I've still done something wrong. I don't believe it's the XML file itself, because it does this with two different XML files.
|
2016/10/11
|
[
"https://Stackoverflow.com/questions/39983159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6938631/"
] |
`data` is a reference to the XML content as a string, but the [`parse()`](https://docs.python.org/2.7/library/xml.etree.elementtree.html#xml.etree.ElementTree.parse) function expects a filename or [file object](https://docs.python.org/2/glossary.html#term-file-object) as its argument. That's why there is an error.
`urlhandle` is a file object, so `tree = ET.parse(urlhandle)` should work for you.
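A minimal sketch of that fix, keeping the question's Python 2 code:
```py
import urllib
import xml.etree.ElementTree as ET

url = raw_input('Enter URL:')
urlhandle = urllib.urlopen(url)
tree = ET.parse(urlhandle)  # parse() accepts the file-like object directly
print tree.getroot().tag
```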
|
The error message indicates that your code is trying to open a file whose name is stored in the variable `source`.
It's failing to open that file (IOError) because the variable `source` contains a bunch of XML, not a file name.
|
39,983,159
|
This is the code that results in an error message:
```
import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.parse(data)
```
The error:

I'm new to Python. I did read the documentation and a couple of tutorials, but clearly I've still done something wrong. I don't believe it's the XML file itself, because it does this with two different XML files.
|
2016/10/11
|
[
"https://Stackoverflow.com/questions/39983159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6938631/"
] |
Consider using ElementTree's [`fromstring()`](https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.fromstring):
```
import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
# http://feeds.bbci.co.uk/news/rss.xml?edition=int
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.fromstring(data)
print ET.tostring(tree, encoding='utf8', method='xml')
```
|
The error message indicates that your code is trying to open a file whose name is stored in the variable `source`.
It's failing to open that file (IOError) because the variable `source` contains a bunch of XML, not a file name.
|
39,983,159
|
This is the code that results in an error message:
```
import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.parse(data)
```
The error:

I'm new to Python. I did read the documentation and a couple of tutorials, but clearly I've still done something wrong. I don't believe it's the XML file itself, because it does this with two different XML files.
|
2016/10/11
|
[
"https://Stackoverflow.com/questions/39983159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6938631/"
] |
Consider using ElementTree's [`fromstring()`](https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.fromstring):
```
import urllib
import xml.etree.ElementTree as ET
url = raw_input('Enter URL:')
# http://feeds.bbci.co.uk/news/rss.xml?edition=int
urlhandle = urllib.urlopen(url)
data = urlhandle.read()
tree = ET.fromstring(data)
print ET.tostring(tree, encoding='utf8', method='xml')
```
|
`data` is a reference to the XML content as a string, but the [`parse()`](https://docs.python.org/2.7/library/xml.etree.elementtree.html#xml.etree.ElementTree.parse) function expects a filename or [file object](https://docs.python.org/2/glossary.html#term-file-object) as its argument. That's why there is an error.
`urlhandle` is a file object, so `tree = ET.parse(urlhandle)` should work for you.
|
55,436,590
|
I am a beginner trying to learn Python. I wrote a program using Geany and would like to build and execute it, but I keep getting this error: "The system cannot find the path specified". I believe I added the right info to the Path, though:
```
Compile C:\Python373\python -m py_compile "%f"
Execute C:\Python373\python "%f"
```
This doesn't work. Can anyone help me figure it out? Thank you.
|
2019/03/30
|
[
"https://Stackoverflow.com/questions/55436590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10286420/"
] |
You can try this solution:
First, open `sdkmanager.bat` with any text editor.
Then find this line:
```
%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %SDKMANAGER_OPTS%
```
And change it to this line:
```
%JAVA_EXE%" %DEFAULT_JVM_OPTS% --add-modules java.xml.bind %JAVA_OPTS% %SDKMANAGER_OPTS%
```
I hope this solves your problem.
|
I had to do the following to fix this error on Windows 10:
1. Install JDK 8. I had JDK 12 installed but it did not seem to work with that version.
2. Add Java to my environment variable Path
To add Java to your environment variable Path, do the following:
`Go to Computer -> Advanced system settings -> Environment variables -> PATH -> and add the path to your local java bin directory. It looks like this: C:\Program Files\Java\jdk-versionyouhave\bin`
|
55,436,590
|
I am a beginner trying to learn Python. I wrote a program using Geany and would like to build and execute it, but I keep getting this error: "The system cannot find the path specified". I believe I added the right info to the Path, though:
```
Compile C:\Python373\python -m py_compile "%f"
Execute C:\Python373\python "%f"
```
This doesn't work. Can anyone help me figure it out? Thank you.
|
2019/03/30
|
[
"https://Stackoverflow.com/questions/55436590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10286420/"
] |
You can try this solution:
First, open `sdkmanager.bat` with any text editor.
Then find this line:
```
%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %SDKMANAGER_OPTS%
```
And change it to this line:
```
%JAVA_EXE%" %DEFAULT_JVM_OPTS% --add-modules java.xml.bind %JAVA_OPTS% %SDKMANAGER_OPTS%
```
I hope this solves your problem.
|
I had this issue because the default installation of Java was v11.
`java -version`
should report: `openjdk version "1.8.0_252"`
Fix:
`sudo apt-get install openjdk-8-jdk`
Don't worry, this won't overwrite your existing version.
Then switch to the correct version via
`sudo update-alternatives --config java`
Confirm the correct output from `java -version`,
then run `sdkmanager` again.
|
55,436,590
|
I am a beginner trying to learn Python. I wrote a program using Geany and would like to build and execute it, but I keep getting this error: "The system cannot find the path specified". I believe I added the right info to the Path, though:
```
Compile C:\Python373\python -m py_compile "%f"
Execute C:\Python373\python "%f"
```
This doesn't work. Can anyone help me figure it out? Thank you.
|
2019/03/30
|
[
"https://Stackoverflow.com/questions/55436590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10286420/"
] |
I had this issue because the default installation of Java was v11.
`java -version`
should report: `openjdk version "1.8.0_252"`
Fix:
`sudo apt-get install openjdk-8-jdk`
Don't worry, this won't overwrite your existing version.
Then switch to the correct version via
`sudo update-alternatives --config java`
Confirm the correct output from `java -version`,
then run `sdkmanager` again.
|
I had to do the following to fix this error on Windows 10:
1. Install JDK 8. I had JDK 12 installed but it did not seem to work with that version.
2. Add Java to my environment variable Path
To add Java to your environment variable Path, do the following:
`Go to Computer -> Advanced system settings -> Environment variables -> PATH -> and add the path to your local java bin directory. It looks like this: C:\Program Files\Java\jdk-versionyouhave\bin`
|
17,260,338
|
I'm trying to deploy a Flask app to Heroku; however, upon pushing the code I get the error:
```
2013-06-23T11:23:59.264600+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
```
I'm not sure what to try, I've tried changing the port from 5000 to 33507, but to no avail. My Procfile looks like this:
```
web: python main.py
```
`main.py` is the main Flask file which initiates the server.
Thanks.
|
2013/06/23
|
[
"https://Stackoverflow.com/questions/17260338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/970323/"
] |
In my Flask app hosted on Heroku, I use this code to start the server:
```py
if __name__ == '__main__':
    # Bind to PORT if defined, otherwise default to 5000.
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)
```
When developing locally, this will use port 5000; in production, Heroku will set the `PORT` environment variable.
(Side note: by default, Flask is only accessible from your own computer, not from any other machine on the network (see the [Quickstart](https://flask.palletsprojects.com/en/2.0.x/quickstart/)). Setting `host='0.0.0.0'` makes Flask available from the network.)
|
Your `main.py` script cannot bind to a specific port; it needs to bind to the port number set in the `$PORT` environment variable. Heroku sets the port it wants in that variable prior to invoking your application.
The error you are getting suggests you are binding to a port that is not the one Heroku expects.
|
17,260,338
|
I'm trying to deploy a Flask app to Heroku; however, upon pushing the code I get the error:
```
2013-06-23T11:23:59.264600+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
```
I'm not sure what to try, I've tried changing the port from 5000 to 33507, but to no avail. My Procfile looks like this:
```
web: python main.py
```
`main.py` is the main Flask file which initiates the server.
Thanks.
|
2013/06/23
|
[
"https://Stackoverflow.com/questions/17260338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/970323/"
] |
In addition to [msiemens](https://stackoverflow.com/users/997063/msiemens)'s answer:
```
import os
from run import app as application

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 5000))
    application.run(host='0.0.0.0', port=port)
```
Your Procfile should specify the port, which in this case is stored in the Heroku environment variable `${PORT}`:
`web: gunicorn --bind 0.0.0.0:${PORT} wsgi`
|
Your `main.py` script cannot bind to a specific port; it needs to bind to the port number set in the `$PORT` environment variable. Heroku sets the port it wants in that variable prior to invoking your application.
The error you are getting suggests you are binding to a port that is not the one Heroku expects.
|
17,260,338
|
I'm trying to deploy a Flask app to Heroku; however, upon pushing the code I get the error:
```
2013-06-23T11:23:59.264600+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
```
I'm not sure what to try, I've tried changing the port from 5000 to 33507, but to no avail. My Procfile looks like this:
```
web: python main.py
```
`main.py` is the main Flask file which initiates the server.
Thanks.
|
2013/06/23
|
[
"https://Stackoverflow.com/questions/17260338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/970323/"
] |
Your `main.py` script cannot bind to a specific port; it needs to bind to the port number set in the `$PORT` environment variable. Heroku sets the port it wants in that variable prior to invoking your application.
The error you are getting suggests you are binding to a port that is not the one Heroku expects.
|
This also fixes the problem of [H20: App boot timeout](https://devcenter.heroku.com/changelog-items/45).
My Procfile looks like this:
```
web: gunicorn -t 150 -c gunicorn_config.py main:app --bind 0.0.0.0:${PORT}
```
and main.py:
```
port = int(os.environ.get('PORT', 5000))
app.run(host='0.0.0.0', port=port)
```
|
17,260,338
|
I'm trying to deploy a Flask app to Heroku; however, upon pushing the code I get the error:
```
2013-06-23T11:23:59.264600+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
```
I'm not sure what to try, I've tried changing the port from 5000 to 33507, but to no avail. My Procfile looks like this:
```
web: python main.py
```
`main.py` is the main Flask file which initiates the server.
Thanks.
|
2013/06/23
|
[
"https://Stackoverflow.com/questions/17260338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/970323/"
] |
In my Flask app hosted on Heroku, I use this code to start the server:
```py
if __name__ == '__main__':
    # Bind to PORT if defined, otherwise default to 5000.
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)
```
When developing locally, this will use port 5000; in production, Heroku will set the `PORT` environment variable.
(Side note: by default, Flask is only accessible from your own computer, not from any other machine on the network (see the [Quickstart](https://flask.palletsprojects.com/en/2.0.x/quickstart/)). Setting `host='0.0.0.0'` makes Flask available from the network.)
|
In addition to [msiemens](https://stackoverflow.com/users/997063/msiemens)'s answer:
```
import os
from run import app as application

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 5000))
    application.run(host='0.0.0.0', port=port)
```
Your Procfile should specify the port, which in this case is stored in the Heroku environment variable `${PORT}`:
`web: gunicorn --bind 0.0.0.0:${PORT} wsgi`
|
17,260,338
|
I'm trying to deploy a Flask app to Heroku; however, upon pushing the code I get the error:
```
2013-06-23T11:23:59.264600+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
```
I'm not sure what to try, I've tried changing the port from 5000 to 33507, but to no avail. My Procfile looks like this:
```
web: python main.py
```
`main.py` is the main Flask file which initiates the server.
Thanks.
|
2013/06/23
|
[
"https://Stackoverflow.com/questions/17260338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/970323/"
] |
In my Flask app hosted on Heroku, I use this code to start the server:
```py
if __name__ == '__main__':
    # Bind to PORT if defined, otherwise default to 5000.
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)
```
When developing locally, this will use port 5000; in production, Heroku will set the `PORT` environment variable.
(Side note: by default, Flask is only accessible from your own computer, not from any other machine on the network (see the [Quickstart](https://flask.palletsprojects.com/en/2.0.x/quickstart/)). Setting `host='0.0.0.0'` makes Flask available from the network.)
|
This also fixes the problem of [H20: App boot timeout](https://devcenter.heroku.com/changelog-items/45).
My Procfile looks like this:
```
web: gunicorn -t 150 -c gunicorn_config.py main:app --bind 0.0.0.0:${PORT}
```
and main.py:
```
port = int(os.environ.get('PORT', 5000))
app.run(host='0.0.0.0', port=port)
```
|
17,260,338
|
I'm trying to deploy a Flask app to Heroku; however, upon pushing the code I get the error:
```
2013-06-23T11:23:59.264600+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
```
I'm not sure what to try, I've tried changing the port from 5000 to 33507, but to no avail. My Procfile looks like this:
```
web: python main.py
```
`main.py` is the main Flask file which initiates the server.
Thanks.
|
2013/06/23
|
[
"https://Stackoverflow.com/questions/17260338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/970323/"
] |
In addition to [msiemens](https://stackoverflow.com/users/997063/msiemens)'s answer:
```
import os
from run import app as application

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 5000))
    application.run(host='0.0.0.0', port=port)
```
Your Procfile should specify the port, which in this case is stored in the Heroku environment variable `${PORT}`:
`web: gunicorn --bind 0.0.0.0:${PORT} wsgi`
|
This also fixes the problem of [H20: App boot timeout](https://devcenter.heroku.com/changelog-items/45).
My Procfile looks like this:
```
web: gunicorn -t 150 -c gunicorn_config.py main:app --bind 0.0.0.0:${PORT}
```
and main.py:
```
port = int(os.environ.get('PORT', 5000))
app.run(host='0.0.0.0', port=port)
```
|
60,136,547
|
I can't figure out how to use multithreading/multiprocessing in Python to speed up this scraping process, which collects all the usernames from the hashtag 'cats' on Instagram.
My goal is to make this as fast as possible, because currently the process is kinda slow.
```
from instaloader import Instaloader

HASHTAG = 'cats'
loader = Instaloader(sleep=False)
users = []

for post in loader.get_hashtag_posts(HASHTAG):
    if post.owner_username not in users:
        users.append(post.owner_username)
        print(post.owner_username)
```
|
2020/02/09
|
[
"https://Stackoverflow.com/questions/60136547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12867155/"
] |
The `LockedIterator` is inspired from [here](https://stackoverflow.com/questions/1131430/are-generators-threadsafe).
```
import threading
from instaloader import Instaloader

class LockedIterator(object):
    def __init__(self, it):
        self.lock = threading.Lock()
        self.it = it.__iter__()

    def __iter__(self):
        return self

    def __next__(self):
        self.lock.acquire()
        try:
            return self.it.__next__()
        finally:
            self.lock.release()

HASHTAG = 'cats'
posts = Instaloader(sleep=False).get_hashtag_posts(HASHTAG)
posts = LockedIterator(posts)
users = set()

def worker():
    try:
        for post in posts:
            print(post.owner_username)
            users.add(post.owner_username)
    except Exception as e:
        print(e)
        raise

threads = []
for i in range(4):
    t = threading.Thread(target=worker)
    threads.append(t)
    t.start()

for t in threads:
    t.join()
```
|
**The goal is to have an input file and separate output .txt files; maybe you can help me here too.**
It should be something around line 45.
I'm not really advanced, so my attempt may contain some wrong code, I don't know.
As example hashtags for input.txt I used:
*wqddt & d2deltas*
```
from instaloader import Instaloader
import threading
import io
import time
import sys

class LockedIterator(object):
    def __init__(self, it):
        self.lock = threading.Lock()
        self.it = it.__iter__()

    def __iter__(self):
        return self

    def __next__(self):
        self.lock.acquire()
        try:
            return self.it.__next__()
        finally:
            self.lock.release()

f = open('input.txt', 'r', encoding='utf-8')
HASHTAG = f.read()
p = HASHTAG.split('\n')
PROFILE = p[:]

for ind in range(len(PROFILE)):
    pro = PROFILE[ind]
    posts = Instaloader(sleep=False).get_hashtag_posts(pro)
    posts = LockedIterator(posts)
    users = set()

start_time = time.time()
PROFILE = p[:]

def worker():
    for ind in range(len(PROFILE)):
        pro = PROFILE[ind]
        try:
            filename = 'downloads/' + pro + '.txt'
            fil = open(filename, 'a', newline='', encoding="utf-8")
            for post in posts:
                hashtags = post.owner_username
                fil.write(str(hashtags) + '\n')
        except:
            print('Skipping', pro)

threads = []
for i in range(4):  # Input Threads
    t = threading.Thread(target=worker)
    threads.append(t)
    t.start()

for t in threads:
    t.join()

end_time = time.time()
print("Done")
print("Time taken : " + str(end_time - start_time) + "sec")
```
|
20,763,448
|
EDITED HEAVILY with some new information (and a bounty)
I am trying to create a plug-in in Python for GIMP (on Windows).
This page <http://gimpbook.com/scripting/notes.html> suggests running it from the shell, or looking at ~/.xsession-errors.
Neither works.
I am able to run it from the cmd shell, as
>
> gimp-2.8.exe -c --verbose ## (as suggested by <http://gimpchat.com/viewtopic.php?f=9&t=751> )
>
>
>
This causes the output from "pdb.gimp\_message(...)" to go to a terminal.
BUT !!! this only works when everything is running as expected; I get no output on crashes.
I've tried print statements; they go nowhere.
This other guy had a similar problem, but the discussion got sidetracked:
[Plugins usually don't work, how do I debug?](https://stackoverflow.com/questions/18969820/plugins-usually-dont-work-how-do-i-debug)
---
In some places I saw recommendations to run it from within the python-fu console.
This gets me nowhere: I need to comment out import gimpfu, as it raises errors, and I don't get gtk working.
---
My current problem is that even if the plugin registers and shows on the menu, when there is some error and it does not behave as expected, I don't know where to start looking for hints.
(I've tried clicking in all sorts of contexts, with/without selection, with/without image.)
I was able to copy and execute example plugins from <http://gimpbook.com/scripting/>
and I got them working, but when a change I make breaks something, I know not what, and morphing an existing program line by line is tedious (GIMP has to be shut down and restarted each time).
---
So to sum up:
1- Can I refresh a plugin without restarting GIMP? (So at least my slow-morph will be faster.)
2- Can I run plug-ins from the python-fu shell (as opposed to just importing them to make sure they parse)?
3- Is there an error log I am missing, or something to that effect?
4- Is there a way to run GIMP on Windows from a shell to see output? (Am I better off under Cygwin (or VirtualBox)?)
5- I haven't yet looked up how to connect winpdb to an existing process. How would I go about connecting it to a Python process that runs inside GIMP?
Thanks
|
2013/12/24
|
[
"https://Stackoverflow.com/questions/20763448",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1456530/"
] |
>
> 1- can i refresh a plugin without restarting gimp ? (so at least my
> slow-morph will be faster )
>
>
>
You must restart GIMP when you add a script or change register().
No need to restart when changing other parts of the script -- it runs as a separate process and will be re-read from disk each time.
helpful source:
<http://gimpbook.com/scripting/notes.html>
>
> 2- can i run plug-ins from the python-fu shell. (as opposed to just
> importing them to make sure they parse.)
>
>
>
Yes, you can access your registered plug-in in the `python-fu` console as:
```
>>> pdb.name_of_registered_plug-in
```
And you can call it like:
```
>>> pdb.name_of_registered_plug-in(img, arg1, arg2, ...)
```
Also, in the `python-fu` console dialog, you can click the `Browse ..` option, find your registered plug-in,
and then click `Apply` to import it into the `python-fu` console.
helpful source:
<http://registry.gimp.org/node/28434>
>
> 3- is there an error-log i am missing, or something to that effect?
>
>
>
To log, you can define a function like this:
```
def gimp_log(text):
    pdb.gimp_message(text)
```
And use it in your code, whenever you want.
To see the log, open the `Error Console` from `Dockable Dialogs` in the `Windows` menu of GIMP; otherwise a message box will pop up every time you make a log entry.
You can also redirect `stderr` and `stdout` to a file:
```
import sys
sys.stderr = open('er.txt', 'a')
sys.stdout = open('log.txt', 'a')
```
When you do that, all exceptions will go to `er.txt` and all print output will go to `log.txt`.
Note that the files are opened with the `a` option instead of `w` to preserve the log across runs.
helpful sources:
[How do I output info to the console in a Gimp python script?](https://stackoverflow.com/questions/9955834/how-do-i-output-info-to-the-console-in-a-gimp-python-script)
<http://www.exp-media.com/content/extending-gimp-python-python-fu-plugins-part-2>
>
> 4- is there a way to run gimp on windows from a shell to see output ?
> (am i better off under cygwin (or virtualbox.. ))?
>
>
>
I got some errors with that, but may try again ...
>
> 5- i haven't yet looked up how to connect winpdb to an existing
> process. how would i go about connecting it to a python process that
> runs inside gimp?
>
>
>
First install [winpdb](http://winpdb.org/download/), and also [wxPython](http://www.wxpython.org/) (the Winpdb GUI depends on wxPython).
Note that GIMP has its own Python interpreter, and you may want to install `winpdb` into your default Python interpreter or into GIMP's Python interpreter.
If you install `winpdb` into your default Python interpreter, then you need to copy the installed `rpdb2.py` file into `..\Lib\site-packages` of the GIMP Python interpreter path.
After that you should be able to import the `rpdb2` module from the `Python-Fu` console of GIMP:
```
GIMP 2.8.10 Python Console
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)]
>>> import rpdb2
>>>
```
Now in your plug-in code, for example in your main function, add the following code:
```
import rpdb2 # may be included outside of the function
rpdb2.start_embedded_debugger("pass") # a password that will be asked for by winpdb
```
Next, go to GIMP and run your Python plug-in; it will run and then wait when it reaches the code above.
Now, to open the Winpdb GUI, go to `..\PythonXX\Scripts` and run `winpdb_.pyw`.
(Note that when using Winpdb for remote debugging make sure any [firewall](http://winpdb.org/docs/requirements/) on the way has TCP port 51000 open. Note that if port 51000 is taken Winpdb will search for an alternative port between 51000 and 51023.)
Then in the Winpdb GUI, from the `File` menu select `Attach` and give `pass` as the password; you will then see your plug-in script in the list. Select it and start debugging step by step.

helpful resource:
[Installing PyGIMP on Windows](https://stackoverflow.com/questions/14592607/installing-pygimp-on-windows)
Useful sources:
<http://wiki.gimp.org/index.php/Hacking:Plugins>
<http://www.gimp.org/docs/python/index.html>
<http://wiki.elvanor.net/index.php/GIMP_Scripting>
<http://www.exp-media.com/gimp-python-tutorial>
<http://coderazzi.net/python/gimp/pythonfu.html>
<http://www.ibm.com/developerworks/opensource/library/os-autogimp/os-autogimp-pdf.pdf>
|
As noted in [How do I output info to the console in a Gimp python script?](https://stackoverflow.com/questions/9955834/how-do-i-output-info-to-the-console-in-a-gimp-python-script/15637932#15637932),
add
```
import sys
sys.stderr = open( 'c:\\temp\\gimpstderr.txt', 'w')
sys.stdout = open( 'c:\\temp\\gimpstdout.txt', 'w')
```
at the beginning of the plug-in file.
|
20,763,448
|
EDITED HEAVILY with some new information (and a bounty)
I am trying to create a plug-in in Python for GIMP (on Windows).
This page <http://gimpbook.com/scripting/notes.html> suggests running it from the shell, or looking at ~/.xsession-errors.
Neither works.
I am able to run it from the cmd shell, as
>
> gimp-2.8.exe -c --verbose ## (as suggested by <http://gimpchat.com/viewtopic.php?f=9&t=751> )
>
>
>
This causes the output from "pdb.gimp\_message(...)" to go to a terminal.
BUT !!! this only works when everything is running as expected; I get no output on crashes.
I've tried print statements; they go nowhere.
This other guy had a similar problem, but the discussion got sidetracked:
[Plugins usually don't work, how do I debug?](https://stackoverflow.com/questions/18969820/plugins-usually-dont-work-how-do-i-debug)
---
In some places I saw recommendations to run it from within the python-fu console.
This gets me nowhere: I need to comment out import gimpfu, as it raises errors, and I don't get gtk working.
---
My current problem is that even if the plugin registers and shows on the menu, when there is some error and it does not behave as expected, I don't know where to start looking for hints.
(I've tried clicking in all sorts of contexts, with/without selection, with/without image.)
I was able to copy and execute example plugins from <http://gimpbook.com/scripting/>
and I got them working, but when a change I make breaks something, I know not what, and morphing an existing program line by line is tedious (GIMP has to be shut down and restarted each time).
---
So to sum up:
1- Can I refresh a plugin without restarting GIMP? (So at least my slow-morph will be faster.)
2- Can I run plug-ins from the python-fu shell (as opposed to just importing them to make sure they parse)?
3- Is there an error log I am missing, or something to that effect?
4- Is there a way to run GIMP on Windows from a shell to see output? (Am I better off under Cygwin (or VirtualBox)?)
5- I haven't yet looked up how to connect winpdb to an existing process. How would I go about connecting it to a Python process that runs inside GIMP?
Thanks
|
2013/12/24
|
[
"https://Stackoverflow.com/questions/20763448",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1456530/"
] |
>
> 1- can i refresh a plugin without restarting gimp ? (so at least my
> slow-morph will be faster )
>
>
>
You must restart GIMP when you add a script or change register().
No need to restart when changing other parts of the script -- it runs as a separate process and will be re-read from disk each time.
helpful source:
<http://gimpbook.com/scripting/notes.html>
>
> 2- can i run plug-ins from the python-fu shell. (as opposed to just
> importing them to make sure they parse.)
>
>
>
Yes, you can access your registered plug-in in the `python-fu` console as:
```
>>> pdb.name_of_registered_plug-in
```
And you can call it like:
```
>>> pdb.name_of_registered_plug-in(img, arg1, arg2, ...)
```
Also, in the `python-fu` console dialog, you can click the `Browse ..` option, find your registered plug-in,
and then click `Apply` to import it into the `python-fu` console.
helpful source:
<http://registry.gimp.org/node/28434>
>
> 3- is there an error-log i am missing, or something to that effect?
>
>
>
To log, you can define a function like this:
```
def gimp_log(text):
    pdb.gimp_message(text)
```
And use it in your code, whenever you want.
To see the log, open the `Error Console` from `Dockable Dialogs` in the `Windows` menu of GIMP; otherwise a message box will pop up every time you make a log entry.
You can also redirect `stderr` and `stdout` to a file:
```
import sys
sys.stderr = open('er.txt', 'a')
sys.stdout = open('log.txt', 'a')
```
When you do that, all exceptions will go to `er.txt` and all print output will go to `log.txt`.
Note that the files are opened with the `a` option instead of `w` to preserve the log across runs.
helpful sources:
[How do I output info to the console in a Gimp python script?](https://stackoverflow.com/questions/9955834/how-do-i-output-info-to-the-console-in-a-gimp-python-script)
<http://www.exp-media.com/content/extending-gimp-python-python-fu-plugins-part-2>
>
> 4- is there a way to run gimp on windows from a shell to see output ?
> (am i better off under cygwin (or virtualbox.. ))?
>
>
>
I got some errors with that, but may try again ...
>
> 5- i haven't yet looked up how to connect winpdb to an existing
> process. how would i go about connecting it to a python process that
> runs inside gimp?
>
>
>
First install [winpdb](http://winpdb.org/download/) , and also [wxPython](http://www.wxpython.org/) ( Winpdb GUI depends on wxPython)
Note that GIMP has its own Python interpreter, and you may want to install `winpdb` either to your default Python interpreter or to GIMP's Python interpreter.
If you install `winpdb` to your default Python interpreter, then you need to copy the installed `rpdb2.py` file to `..\Lib\site-packages` of GIMP's Python interpreter path.
After that you should be able to import the `rpdb2` module from the `Python-Fu` console of GIMP:
```
GIMP 2.8.10 Python Console
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)]
>>> import rpdb2
>>>
```
Now in your plug-in code, for example in your main function, add the following code:
```
import rpdb2  # may also be imported outside of the function
rpdb2.start_embedded_debugger("pass")  # the password that winpdb will ask for
```
Next, go to GIMP and run your Python plug-in; it will run and then wait when it reaches the code above.
Now to open `Winpdb GUI` go to `..\PythonXX\Scripts` and run `winpdb_.pyw`.
(Note that when using Winpdb for remote debugging make sure any [firewall](http://winpdb.org/docs/requirements/) on the way has TCP port 51000 open. Note that if port 51000 is taken Winpdb will search for an alternative port between 51000 and 51023.)
Then in the `Winpdb GUI`, select `attach` from the `File` menu and give `pass` as the password; you will then see your plug-in script in the list. Select it and start debugging step by step.

helpful resource:
[Installing PyGIMP on Windows](https://stackoverflow.com/questions/14592607/installing-pygimp-on-windows)
Useful sources:
<http://wiki.gimp.org/index.php/Hacking:Plugins>
<http://www.gimp.org/docs/python/index.html>
<http://wiki.elvanor.net/index.php/GIMP_Scripting>
<http://www.exp-media.com/gimp-python-tutorial>
<http://coderazzi.net/python/gimp/pythonfu.html>
<http://www.ibm.com/developerworks/opensource/library/os-autogimp/os-autogimp-pdf.pdf>
|
I am a newbie to python, but I would like to give a shout-out, first to winpdb, and then to this comment for integrating winpdb into GIMP.
This same procedure works as well for LibreOffice 4.
If I may be allowed to vent a little; I have a moderate amount of experience with Visual Basic, more or less at a hobbyist level, but I decided a few years ago to get into OpenOffice when Microsoft threatened to abandon VB for the Mac. I don't want to say that VB in OpenOffice was onerous, but the lack of anything resembling an IDE is tedious. Now, with winpdb, I will never be looking back. It's Python from here on out, baby.
Steps taken:
-- As suggested by Omid above, I first got winpdb running out of GIMP (relatively painless).
-- I copied the rpdb2.py file to C:\Program Files\LibreOffice 4\program\python-core-3.3.3\lib\site-packages\rpdb2.py. (Win 7, LibreOffice 4.4.03)
-- I edited the HelloWorld.py file in C:\Program Files\LibreOffice 4\share\Scripts\python directory (saved in WinPDb\_HelloWorld.py to same directory).
```
# HelloWorld python script for the scripting framework
# This file is part of the LibreOffice project.
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. blah, blah, blah ...
import rpdb2
#rpdb2.start_embedded_debugger("Your Password Here") # << DON'T start debugger here.
# It only gets you lost in the LO python wrapper when debugging.
def HelloWorldPython( ):
"""Prints the string 'Hello World(in Python)' into the current document"""
# start debugger INSIDE function, where it will be called from LO Macros -- Duh!!
rpdb2.start_embedded_debugger("YourPasswordHere")
#get the doc from the scripting context which is made available to all scripts
desktop = XSCRIPTCONTEXT.getDesktop()
#... etc., see HelloWorld.py
```
WinPDb\_HelloWorld appears under LibreOffice Macros in the Macro Selector (see <https://wiki.openoffice.org/wiki/Python_as_a_macro_language> for more on that).
(can't show you a picture - posting as a guest)
|
55,841,631
|
So I have a question about creating a matrix, but I'm unsure why the values are shared. Not sure if it's due to the sequence being a reference type or not?
If you write this code in pythontutor, you'll find that the main tuple's elements all point to the same 'row' tuple, which is shared. I understand that if I did `return row*n` it'd be shared, but **why is it that when you concatenate tuples, or append lists, it would then be shared (referred to the same memory address)**?
```
def make_matrix(n):
row = (0, )*n
board = ()
for i in range(n):
board += (row,)
return board
matrix = make_matrix(4)
print(matrix)
```
As compared to this code, where each row is separately (0,0,0,0) and not shared.
```
def make_board(n):
return tuple(tuple(0 for i in range(n)) for i in range(n))
matrix = make_board(4)
print(matrix)
```
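For what it's worth, here is a minimal sketch that makes the sharing visible with `id()` (illustrative only; it mirrors the first version above):
```
row = (0,) * 4
board = ()
for _ in range(4):
    board += (row,)  # each += stores another reference to the same `row` object

# All four elements are the very same tuple object:
print(all(id(r) == id(row) for r in board))  # True
# Because tuples are immutable, this sharing is harmless here; it only
# matters when the rows are mutable objects such as lists.
```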
|
2019/04/25
|
[
"https://Stackoverflow.com/questions/55841631",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11245768/"
] |
The reason your query isn't working as expected is that you are not actually targeting the specific array element you want to update.
Here's how I would write the query:
```
patients.findOneAndUpdate(
{_id: "5cb939a3ba1d7d693846136c"},
{$set: {"myArray.$[el].value": 424214 } },
{
arrayFilters: [{ "el.treatment": "beauty" }],
new: true
}
)
```
To break down what's happening:
1) First we're looking for the patient by ID
2) Then using `$set` we specify which updates are being applied. Notice the `$[el]` syntax: this refers to an individual element in the array. You can name it whatever you want, but it must match the variable name used in `arrayFilters`.
3) Then in the configuration object we specify `arrayFilters` which is saying to only target array elements with "treatment" equal to "beauty"
Note though that it will target and update all array elements with "treatment" equal to "beauty". If you haven't already, you'll want to ensure that the treatment value is unique. [`$addToSet`](https://docs.mongodb.com/manual/reference/operator/update/addToSet/) could be useful here, and would be used when **adding** elements to the array.
|
OK, I found out and managed to update, but the answer from Frank Rose is the better one; my version worked in my other projects but not the current one.
That's because I was using version 4.4 of mongoose, and only version 5 and above can use arrayFilters.
For mongoose version < 5:
```
patients.findOneAndUpdate(
{
_id: "5cb939a3ba1d7d693846136c",
'images.treatment': "beauty"
},
{$set: {"myArray.$.value": 424214 } },
{
new: true
}
)
```
|
50,913,172
|
Big hello to the Stackoverflow community,
I am trying to read in a .csv file with 1370 rows and two columns: `Time` and `Speed`.
```
Time Speed
0 1
1 4
2 7
3 8
```
I want to find the difference in `Speed` between two time steps (e.g. `Time` `2` and `1`, which is `3`) for the entire length of the data. I want to add a new column `dS` with the previously calculated difference. The data would now look like:
```
Time Speed dS
0 1 NaN
1 4 3
2 7 3
3 8 1
```
The code I am using is as follows:
```
import pandas as pd
from pandas import read_csv
df2 = pd.read_csv ('speed.csv')
dVV = []
for i, row in df2.iterrows():
dVV.append(df2.iloc[i+1,1] - df2.iloc[i,1])
break
df2['dVV']=dVV
```
The error I am getting is:
```
ValueError Traceback (most recent call last)
<ipython-input-29-4ed9fde37ff9> in <module>()
14 break
15
---> 16 df2['dVV']=dVV
17
18 #df2.to_csv('udds_test.csv', index=False, header=True)
~\Anaconda3\lib\site-packages\pandas\core\frame.py in __setitem__(self, key,
value)
2517 else:
2518 # set column
-> 2519 self._set_item(key, value)
2520
2521 def _setitem_slice(self, key, value):
~\Anaconda3\lib\site-packages\pandas\core\frame.py in _set_item(self, key,
value)
2583
2584 self._ensure_valid_index(value)
-> 2585 value = self._sanitize_column(key, value)
2586 NDFrame._set_item(self, key, value)
2587
~\Anaconda3\lib\site-packages\pandas\core\frame.py in _sanitize_column(self, key, value, broadcast)
2758
2759 # turn me into an ndarray
-> 2760 value = _sanitize_index(value, self.index, copy=False)
2761 if not isinstance(value, (np.ndarray, Index)):
2762 if isinstance(value, list) and len(value) > 0:
~\Anaconda3\lib\site-packages\pandas\core\series.py in _sanitize_index(data,
index, copy)
3119
3120 if len(data) != len(index):
-> 3121 raise ValueError('Length of values does not match length of '
'index')
3122
3123 if isinstance(data, PeriodIndex):
ValueError: Length of values does not match length of index
```
I am guessing that the code is breaking at the last (1370th) row. How can I tackle this?
|
2018/06/18
|
[
"https://Stackoverflow.com/questions/50913172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9957516/"
] |
You can just use [`pd.Series.diff`](http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.Series.diff.html):
```
df['ds'] = df['Speed'].diff()
print(df)
Time Speed ds
0 0 1 NaN
1 1 4 3.0
2 2 7 3.0
3 3 8 1.0
```
The loop method you've attempted is not recommended when vectorised solutions such as `pd.Series.diff` are available.
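If it helps intuition, `diff` is equivalent to subtracting a shifted copy of the column:
```
df['ds'] = df['Speed'] - df['Speed'].shift()  # identical to df['Speed'].diff()
```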
|
Use:
```
df['Speed_avg'] = df['Speed'].rolling(2, min_periods=2).mean()
df['ds'] = df['Speed'].diff()
```
Output:
```
Time Speed Speed_avg ds
0 0 1 NaN NaN
1 1 4 2.5 3.0
2 2 7 5.5 3.0
3 3 8 7.5 1.0
```
|
46,501,292
|
I'm building a data extract using [scrapy](https://scrapy.org/) and want to normalize a raw string pulled out of an HTML document. Here's an example string:
```
Sapphire RX460 OC 2/4GB
```
Notice the two groups of two spaces: one preceding the string literal and one between `OC` and `2`.
Python provides trim as described in [How do I trim whitespace with Python?](https://stackoverflow.com/questions/1185524/how-do-i-trim-whitespace-with-python) But that won't handle the two spaces between `OC` and `2`, which I need collapsed into a single space.
I've tried using [`normalize-space()`](http://devdocs.io/xslt_xpath/xpath/functions/normalize-space) from XPath while extracting data with my [scrapy Selector](https://doc.scrapy.org/en/latest/topics/selectors.html) and that works, but the assignment is verbose with strong rightward drift:
```
product_title = product.css('h3').xpath('normalize-space((text()))').extract_first()
```
Is there an elegant way to normalize whitespace using Python? If not a one-liner, is there a way I can break the above line into something easier to read without throwing an indentation error, e.g.
```
product_title = product.css('h3')
.xpath('normalize-space((text()))')
.extract_first()
```
|
2017/09/30
|
[
"https://Stackoverflow.com/questions/46501292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/712334/"
] |
You can use:
```
" ".join(s.split())
```
where `s` is your string.
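For example, with the sample string from the question, this both trims the ends and collapses interior runs of whitespace:
```
s = '  Sapphire RX460 OC  2/4GB'
print(" ".join(s.split()))  # -> 'Sapphire RX460 OC 2/4GB'
```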
|
You can use a function like the one below, with a regular expression that scans for consecutive spaces and replaces them with a single space:
```
import re

def clean_data(data):
    # strip leading/trailing whitespace, then collapse runs of two or more spaces
    return re.sub(" {2,}", " ", data.strip())

product_title = clean_data(product.css('h3::text').extract_first())
```
And then improve the clean function any way you like.
|
46,501,292
|
I'm building a data extract using [scrapy](https://scrapy.org/) and want to normalize a raw string pulled out of an HTML document. Here's an example string:
```
Sapphire RX460 OC 2/4GB
```
Notice the two groups of two spaces: one preceding the string literal and one between `OC` and `2`.
Python provides trim as described in [How do I trim whitespace with Python?](https://stackoverflow.com/questions/1185524/how-do-i-trim-whitespace-with-python) But that won't handle the two spaces between `OC` and `2`, which I need collapsed into a single space.
I've tried using [`normalize-space()`](http://devdocs.io/xslt_xpath/xpath/functions/normalize-space) from XPath while extracting data with my [scrapy Selector](https://doc.scrapy.org/en/latest/topics/selectors.html) and that works, but the assignment is verbose with strong rightward drift:
```
product_title = product.css('h3').xpath('normalize-space((text()))').extract_first()
```
Is there an elegant way to normalize whitespace using Python? If not a one-liner, is there a way I can break the above line into something easier to read without throwing an indentation error, e.g.
```
product_title = product.css('h3')
.xpath('normalize-space((text()))')
.extract_first()
```
|
2017/09/30
|
[
"https://Stackoverflow.com/questions/46501292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/712334/"
] |
Instead of using regexes for this, a more efficient solution is to use the join/split option; observe:
```
>>> timeit.Timer((lambda:' '.join(' Sapphire RX460 OC 2/4GB'.split()))).timeit()
0.7263979911804199
>>> def f():
return re.sub(" +", ' ', " Sapphire RX460 OC 2/4GB").split()
>>> timeit.Timer(f).timeit()
4.163465976715088
```
|
You can use a function like the one below, with a regular expression that scans for consecutive spaces and replaces them with a single space:
```
import re

def clean_data(data):
    # strip leading/trailing whitespace, then collapse runs of two or more spaces
    return re.sub(" {2,}", " ", data.strip())

product_title = clean_data(product.css('h3::text').extract_first())
```
And then improve the clean function any way you like.
|
46,501,292
|
I'm building a data extract using [scrapy](https://scrapy.org/) and want to normalize a raw string pulled out of an HTML document. Here's an example string:
```
Sapphire RX460 OC 2/4GB
```
Notice the two groups of two spaces: one preceding the string literal and one between `OC` and `2`.
Python provides trim as described in [How do I trim whitespace with Python?](https://stackoverflow.com/questions/1185524/how-do-i-trim-whitespace-with-python) But that won't handle the two spaces between `OC` and `2`, which I need collapsed into a single space.
I've tried using [`normalize-space()`](http://devdocs.io/xslt_xpath/xpath/functions/normalize-space) from XPath while extracting data with my [scrapy Selector](https://doc.scrapy.org/en/latest/topics/selectors.html) and that works, but the assignment is verbose with strong rightward drift:
```
product_title = product.css('h3').xpath('normalize-space((text()))').extract_first()
```
Is there an elegant way to normalize whitespace using Python? If not a one-liner, is there a way I can break the above line into something easier to read without throwing an indentation error, e.g.
```
product_title = product.css('h3')
.xpath('normalize-space((text()))')
.extract_first()
```
|
2017/09/30
|
[
"https://Stackoverflow.com/questions/46501292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/712334/"
] |
You can use:
```
" ".join(s.split())
```
where `s` is your string.
|
Instead of using regexes for this, a more efficient solution is to use the join/split option; observe:
```
>>> timeit.Timer((lambda:' '.join(' Sapphire RX460 OC 2/4GB'.split()))).timeit()
0.7263979911804199
>>> def f():
return re.sub(" +", ' ', " Sapphire RX460 OC 2/4GB").split()
>>> timeit.Timer(f).timeit()
4.163465976715088
```
|
37,336,875
|
I have 2 sets of data I crawled from an HTML table using a regex expression
data:
```
<div class = "info">
<div class="name"><td>random</td></div>
<div class="hp"><td>123456</td></div>
<div class="email"><td>random@mail.com</td></div>
</div>
<div class = "info">
<div class="name"><td>random123</td></div>
<div class="hp"><td>654321</td></div>
<div class="email"><td>random123@mail.com</td></div>
</div>
```
regex:
```
matchname = re.search('\<div class="name"><td>(.*?)</td>' , match3).group(1)
matchhp = re.search('\<div class="hp"><td>(.*?)</td>' , match3).group(1)
matchemail = re.search('\<div class="email"><td>(.*?)</td>' , match3).group(1)
```
So using the regex I can take out
```
random
123456
random@mail.com
```
So after saving this set of data into my database, I want to save the next set. How do I get the next set of data? I tried using findall and then inserting into my db, but everything was in 1 line. I need the data to be in the db set by set.
New to Python; please comment on which part is unclear and I will try to edit.
|
2016/05/20
|
[
"https://Stackoverflow.com/questions/37336875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3797825/"
] |
You should not be parsing HTML with regex. It's just a mess, do it with BS4. Doing it the right way:
```
soup = BeautifulSoup(match3, "html.parser")
names = []
allTds = soup.find_all("td")
for i,item in enumerate(allTds[::3]):
# firstname hp email
names.append((item.text, allTds[(i*3)+1].text, allTds[(i*3)+2].text))
```
And for the sake of answering the question asked I guess I'll include a horrible ugly regex that you should never use. ESPECIALLY because it's html, don't ever use regex for parsing html. (please don't use this)
```
for thisMatch in re.findall(r"<td>(.+?)</td>.+?<td>(.+?)</td>.+?<td>(.+?)</td>", match3, re.DOTALL):
print(thisMatch[0], thisMatch[1], thisMatch[2])
```
|
As @Racialz pointed out, you should look into [using HTML parsers instead of regular expressions](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags).
Let's take [`BeautifulSoup`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) as well as @Racialz did, but build a more robust solution. Find all `info` elements and locate all fields inside producing a list of dictionaries in the output:
```
from pprint import pprint
from bs4 import BeautifulSoup
data = """
<div>
<div class = "info">
<div class="name"><td>random</td></div>
<div class="hp"><td>123456</td></div>
<div class="email"><td>random@mail.com</td></div>
</div>
<div class = "info">
<div class="name"><td>random123</td></div>
<div class="hp"><td>654321</td></div>
<div class="email"><td>random123@mail.com</td></div>
</div>
</div>
"""
soup = BeautifulSoup(data, "html.parser")
fields = ["name", "hp", "email"]
result = [
{field: info.find(class_=field).get_text() for field in fields}
for info in soup.find_all(class_="info")
]
pprint(result)
```
Prints:
```
[{'email': 'random@mail.com', 'hp': '123456', 'name': 'random'},
{'email': 'random123@mail.com', 'hp': '654321', 'name': 'random123'}]
```
|
39,679,940
|
I have two lists:
```
list1=['lo0','lo1','te123','te234']
list2=['lo0','first','lo1','second','lo2','third','te123','fourth']
```
I want to write Python code to print the next element of list2 where an item of list1 is present in list2, else write "no-match", i.e., I want the output as:
```
first
second
no-match
fourth
```
I came up with the following code:
```
for i1 in range(len(list2)):
for i2 in range(len(list1)):
if list1[i2]==list2[i1]:
desc.write(list2[i1+1])
desc.write('\n')
```
but it gives the output as:
```
first
second
fourth
```
and I cannot figure out how to produce "no-match" where the elements aren't present in list2. Please guide! Thanks in advance.
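A minimal sketch of one possible fix, assuming list2 alternates name/description pairs as in the example (note that iterating list1 in its given order yields 'fourth' before 'no-match'):
```
list1 = ['lo0', 'lo1', 'te123', 'te234']
list2 = ['lo0', 'first', 'lo1', 'second', 'lo2', 'third', 'te123', 'fourth']

# Pair every even-indexed name with the element that follows it.
lookup = dict(zip(list2[::2], list2[1::2]))

for item in list1:
    print(lookup.get(item, 'no-match'))
# prints: first, second, fourth, no-match (te234 has no entry in list2)
```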
|
2016/09/24
|
[
"https://Stackoverflow.com/questions/39679940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6708941/"
] |
You're absolutely right - `messagePolling` is a function. However, `messagePolling()` is *not* a function. You can see that right in your console:
```
// assume messagePolling is a function that doesn't return anything
messagePolling() // -> undefined
```
So, when you do this:
```
setTimeout(messagePolling(), 1000)
```
You're really doing this:
```
setTimeout(undefined, 1000)
```
But when you do this:
```
setTimeout(messagePolling, 1000)
```
You're *actually passing the function* to `setTimeout`. Then `setTimeout` will know to run the function you passed - `messagePolling` - later on. It won't work if it decides to call `undefined` (the result of `messagePolling()`) later, right?
|
Written as
`setTimeout(messagePolling(),1000)` the function is executed **immediately** and a `setTimeout` is set to call `undefined` (the value returned by your function) after one second. (This should actually throw an error if run inside Node.js, as `undefined` is not a valid function.)
Written as `setTimeout(messagePolling,1000)` the `setTimeout` is set to call your function after one second.
|
39,679,940
|
I have two lists:
```
list1=['lo0','lo1','te123','te234']
list2=['lo0','first','lo1','second','lo2','third','te123','fourth']
```
I want to write Python code to print the next element of list2 where an item of list1 is present in list2, else write "no-match", i.e., I want the output as:
```
first
second
no-match
fourth
```
I came up with the following code:
```
for i1 in range(len(list2)):
for i2 in range(len(list1)):
if list1[i2]==list2[i1]:
desc.write(list2[i1+1])
desc.write('\n')
```
but it gives the output as:
```
first
second
fourth
```
and I cannot figure out how to produce "no-match" where the elements aren't present in list2. Please guide! Thanks in advance.
|
2016/09/24
|
[
"https://Stackoverflow.com/questions/39679940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6708941/"
] |
Written as
`setTimeout(messagePolling(),1000)` the function is executed **immediately** and a `setTimeout` is set to call `undefined` (the value returned by your function) after one second. (This should actually throw an error if run inside Node.js, as `undefined` is not a valid function.)
Written as `setTimeout(messagePolling,1000)` the `setTimeout` is set to call your function after one second.
|
When you type `messagePolling` you are passing the function to `setTimeout` as a parameter. This is the standard way to use setTimeout.
When you type `messagePolling()` you are executing the function and passing the return value to `setTimeout`
That being said, this code looks odd to me. This function just runs itself. It's going to keep running itself indefinitely if you do this.
|
39,679,940
|
I have two lists:
```
list1=['lo0','lo1','te123','te234']
list2=['lo0','first','lo1','second','lo2','third','te123','fourth']
```
I want to write Python code to print the next element of list2 where an item of list1 is present in list2, else write "no-match", i.e., I want the output as:
```
first
second
no-match
fourth
```
I came up with the following code:
```
for i1 in range(len(list2)):
for i2 in range(len(list1)):
if list1[i2]==list2[i1]:
desc.write(list2[i1+1])
desc.write('\n')
```
but it gives the output as:
```
first
second
fourth
```
and I cannot figure out how to produce "no-match" where the elements aren't present in list2. Please guide! Thanks in advance.
|
2016/09/24
|
[
"https://Stackoverflow.com/questions/39679940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6708941/"
] |
Written as
`setTimeout(messagePolling(),1000)` the function is executed **immediately** and a `setTimeout` is set to call `undefined` (the value returned by your function) after one second. (This should actually throw an error if run inside Node.js, as `undefined` is not a valid function.)
Written as `setTimeout(messagePolling,1000)` the `setTimeout` is set to call your function after one second.
|
Anywhere a function name is followed by "()" it is executed immediately, except when it is wrapped in quotes, i.e., is a string.
|
39,679,940
|
I have two lists:
```
list1=['lo0','lo1','te123','te234']
list2=['lo0','first','lo1','second','lo2','third','te123','fourth']
```
I want to write Python code to print the next element of list2 where an item of list1 is present in list2, else write "no-match", i.e., I want the output as:
```
first
second
no-match
fourth
```
I came up with the following code:
```
for i1 in range(len(list2)):
for i2 in range(len(list1)):
if list1[i2]==list2[i1]:
desc.write(list2[i1+1])
desc.write('\n')
```
but it gives the output as:
```
first
second
fourth
```
and I cannot figure out how to produce "no-match" where the elements aren't present in list2. Please guide! Thanks in advance.
|
2016/09/24
|
[
"https://Stackoverflow.com/questions/39679940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6708941/"
] |
You're absolutely right - `messagePolling` is a function. However, `messagePolling()` is *not* a function. You can see that right in your console:
```
// assume messagePolling is a function that doesn't return anything
messagePolling() // -> undefined
```
So, when you do this:
```
setTimeout(messagePolling(), 1000)
```
You're really doing this:
```
setTimeout(undefined, 1000)
```
But when you do this:
```
setTimeout(messagePolling, 1000)
```
You're *actually passing the function* to `setTimeout`. Then `setTimeout` will know to run the function you passed - `messagePolling` - later on. It won't work if it decides to call `undefined` (the result of `messagePolling()`) later, right?
|
When you type `messagePolling` you are passing the function to `setTimeout` as a parameter. This is the standard way to use setTimeout.
When you type `messagePolling()` you are executing the function and passing the return value to `setTimeout`
That being said, this code looks odd to me. This function just runs itself. It's going to keep running itself indefinitely if you do this.
|
39,679,940
|
I have two lists:
```
list1=['lo0','lo1','te123','te234']
list2=['lo0','first','lo1','second','lo2','third','te123','fourth']
```
I want to write Python code to print the next element of list2 where an item of list1 is present in list2, else write "no-match", i.e., I want the output as:
```
first
second
no-match
fourth
```
I came up with the following code:
```
for i1 in range(len(list2)):
for i2 in range(len(list1)):
if list1[i2]==list2[i1]:
desc.write(list2[i1+1])
desc.write('\n')
```
but it gives the output as:
```
first
second
fourth
```
and I cannot figure out how to produce "no-match" where the elements aren't present in list2. Please guide! Thanks in advance.
|
2016/09/24
|
[
"https://Stackoverflow.com/questions/39679940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6708941/"
] |
You're absolutely right - `messagePolling` is a function. However, `messagePolling()` is *not* a function. You can see that right in your console:
```
// assume messagePolling is a function that doesn't return anything
messagePolling() // -> undefined
```
So, when you do this:
```
setTimeout(messagePolling(), 1000)
```
You're really doing this:
```
setTimeout(undefined, 1000)
```
But when you do this:
```
setTimeout(messagePolling, 1000)
```
You're *actually passing the function* to `setTimeout`. Then `setTimeout` will know to run the function you passed - `messagePolling` - later on. It won't work if it decides to call `undefined` (the result of `messagePolling()`) later, right?
|
Anywhere a function name is followed by "()" it is executed immediately, except when it is wrapped in quotes, i.e., is a string.
|
52,608,069
|
A Python newbie here. I am currently trying to figure out how to parse all the msg files I have stored in a specific folder and then save the body text to a csv file.
```
import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
msg = outlook.OpenSharedItem(r"C:\Users\XY\Documents\Email Reader\test.msg")
print(msg.Body)
del outlook, msg
```
So far, I only found a way to open one specific msg file, but not all the files I stored in my folder. I think I should be able to handle storing the data in a csv file, but I just can't figure out how to read multiple msg files. Hope you can help me!
cheers
|
2018/10/02
|
[
"https://Stackoverflow.com/questions/52608069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10445933/"
] |
You can try something like this to iterate through every file with the '.msg' extension in a directory:
```
import os
pathname = os.fsencode('Pathname as string')
for file in os.listdir(pathname):
filename = os.fsdecode(file)
if filename.endswith(".msg"):
#Do something
continue
else:
continue
```
Hope this helps!
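Filling in the placeholder with the `OpenSharedItem` call from the question, the loop might look like this (path copied from the question):
```
import os
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
directory = r'C:\Users\XY\Documents\Email Reader'

for filename in os.listdir(directory):
    if filename.endswith(".msg"):
        msg = outlook.OpenSharedItem(os.path.join(directory, filename))
        print(msg.Body)
```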
|
You can use `pathlib` to iterate over the contents of the directory.
Try this:
```
from pathlib import Path
import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
# Assuming \Documents\Email Reader is the directory containg files
for p in Path(r'C:\Users\XY\Documents\Email Reader').iterdir():
if p.is_file() and p.suffix == '.msg':
msg = outlook.OpenSharedItem(p)
print(msg.Body)
```
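Since the question also mentions saving the body text to a CSV, here is a sketch of that last step (the output file name is hypothetical):
```
import csv

msg_bodies = []  # fill inside the loop above: msg_bodies.append((p.name, msg.Body))

with open('bodies.csv', 'w', newline='', encoding='utf-8') as fh:
    writer = csv.writer(fh)
    writer.writerow(['file', 'body'])
    writer.writerows(msg_bodies)
```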
|
39,280,060
|
So I was messing around in python, and developed a problem.
I start out with a string like the following:
```
a = "1523467aa252aaa98a892a8198aa818a18238aa82938a"
```
For every number, you have to add it to a `sum` variable. Also, with every encounter of a letter, the index iterator must move back 2. My program keeps crashing at `isinstance()`. This is the code I have so far:
```
def sum():
a = '93752aaa746a27a1754aa90a93aaaaa238a44a75aa08750912738a8461a8759383aa328a4a4935903a6a55503605350'
z = 0
for i in a:
if isinstance(a[i], int):
z = z + a[i]
elif isinstance(a[i], str):
a = a[:i] + a[(i+1):]
i = i - 2
continue
print z
return z
sum()
```
|
2016/09/01
|
[
"https://Stackoverflow.com/questions/39280060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6421595/"
] |
This part is not doing what you think:
```
for i in a:
if isinstance(a[i], int):
```
Since `i` is already each character of the string (not an index), there is no need to use `a[i]`; indexing a string with a string will confuse Python.
Also, since `a` is a string, no element of it will be an `int`; they will all be strings. You want something like this:
```
for i in a:
if i.isdigit():
z += int(i)
```
***EDIT:*** removing elements of an iterable while iterating over it is a common problem on SO; I would recommend creating a new string with only the elements you want to keep:
```
z = 0
b = ''
for i in a:
if i.isdigit():
z += int(i)
b += str(i)
a = b # set a back to b so the "original string" is set to a string with all non-numeric characters removed.
```
|
You have a few problems with your code. You don't seem to understand how `for... in` loops work, but @Will already addressed that problem in his answer. Furthermore, you have a misunderstanding of how `isinstance()` works. As the numbers are characters of a string, when you iterate over that string each character will also be a (one-length) string. `isinstance(a[i], int)` will fail for every character regardless of whether or not it can be converted to an `int`. What you actually want to do is just try converting each character to an `int` and adding it to the total. If it works, great, and if not just catch the exception and keep on going. You don't need to worry about non-numeric characters because when each one raises a `ValueError` it will simply be ignored and the next character in the string will be processed.
```
string = '93752aaa746a27a1754aa90a93aaaaa238a44a75aa08750912738a8461a8759383aa328a4a4935903a6a55503605350'
def sum_(string):
total = 0
for c in string:
try:
total += int(c)
except ValueError:
pass
return total
sum_(string)
```
Furthermore, this function is equivalent to the following one-liners:
```
sum(int(c) for c in string if c.isdigit())
```
Or the functional style...
```
sum(map(int, filter(str.isdigit, string)))
```
|
8,329,601
|
I am a beginner in Python and can't understand why this is happening:
```
from math import *
print "enter the number"
n=int(raw_input())
d=2
s=0
while d<n :
if n%d==0:
x=math.log(d)
s=s+x
print d
d=d+1
print s,n,float(n)/s
```
Running it in Python and inputting a non-prime gives the error
```
Traceback (most recent call last):
File "C:\Python27\mit ocw\pset1a.py", line 28, in <module>
x=math.log(d)
NameError: name 'math' is not defined
```
|
2011/11/30
|
[
"https://Stackoverflow.com/questions/8329601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/855763/"
] |
Change
```
from math import *
```
to
```
import math
```
Using `from X import *` is generally not a good idea as it uncontrollably pollutes the global namespace and could present other difficulties.
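Either style works, as long as the way you call the function matches the way you imported it; a quick illustration:
```
import math
print(math.log(8))   # qualified: the module name `math` is defined

from math import log
print(log(8))        # bare name: `log` is defined, `math` is not (by this import)
```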
|
You need to `import math` rather than `from math import *`.
|
8,329,601
|
I am a beginner in Python and can't understand why this is happening:
```
from math import *
print "enter the number"
n=int(raw_input())
d=2
s=0
while d<n :
if n%d==0:
x=math.log(d)
s=s+x
print d
d=d+1
print s,n,float(n)/s
```
Running it in Python and inputting a non-prime gives the error
```
Traceback (most recent call last):
File "C:\Python27\mit ocw\pset1a.py", line 28, in <module>
x=math.log(d)
NameError: name 'math' is not defined
```
|
2011/11/30
|
[
"https://Stackoverflow.com/questions/8329601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/855763/"
] |
Change
```
from math import *
```
to
```
import math
```
Using `from X import *` is generally not a good idea as it uncontrollably pollutes the global namespace and could present other difficulties.
|
You made a mistake.
When you wrote:
```
from math import *
# This imports all the functions and classes from math;
# the log function is also imported,
# but nothing named `math` is defined.
```
So, when you try using `math.log`, it gives you an error. So:
replace `math.log` with `log`
Or
replace `from math import *` with `import math`
This should solve the problem.
|
8,329,601
|
I am a beginner in Python and can't understand why this is happening:
```
from math import *
print "enter the number"
n=int(raw_input())
d=2
s=0
while d<n :
if n%d==0:
x=math.log(d)
s=s+x
print d
d=d+1
print s,n,float(n)/s
```
Running it in Python and inputting a non-prime gives the error
```
Traceback (most recent call last):
File "C:\Python27\mit ocw\pset1a.py", line 28, in <module>
x=math.log(d)
NameError: name 'math' is not defined
```
|
2011/11/30
|
[
"https://Stackoverflow.com/questions/8329601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/855763/"
] |
Change
```
from math import *
```
to
```
import math
```
Using `from X import *` is generally not a good idea as it uncontrollably pollutes the global namespace and could present other difficulties.
|
How about (when you need only `math.pi`):
```
from math import pi as PI
```
and then use it as the `PI` symbol?
|
8,329,601
|
I am a beginner in Python and can't understand why this is happening:
```
from math import *
print "enter the number"
n=int(raw_input())
d=2
s=0
while d<n :
if n%d==0:
x=math.log(d)
s=s+x
print d
d=d+1
print s,n,float(n)/s
```
Running it in Python and inputting a non-prime gives the error
```
Traceback (most recent call last):
File "C:\Python27\mit ocw\pset1a.py", line 28, in <module>
x=math.log(d)
NameError: name 'math' is not defined
```
|
2011/11/30
|
[
"https://Stackoverflow.com/questions/8329601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/855763/"
] |
You need to `import math` rather than `from math import *`.
|
How about (when you need only `math.pi`):
```
from math import pi as PI
```
and then use it as the `PI` symbol?
|
8,329,601
|
I am a beginner in Python and can't understand why this is happening:
```
from math import *
print "enter the number"
n=int(raw_input())
d=2
s=0
while d<n :
if n%d==0:
x=math.log(d)
s=s+x
print d
d=d+1
print s,n,float(n)/s
```
Running it in Python and inputting a non-prime gives the error
```
Traceback (most recent call last):
File "C:\Python27\mit ocw\pset1a.py", line 28, in <module>
x=math.log(d)
NameError: name 'math' is not defined
```
|
2011/11/30
|
[
"https://Stackoverflow.com/questions/8329601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/855763/"
] |
You made a mistake.
When you wrote:
```
from math import *
# This imports all the functions and classes from math;
# the log function is also imported,
# but nothing named `math` is defined.
```
So, when you try using `math.log`, it gives you an error. So:
replace `math.log` with `log`
Or
replace `from math import *` with `import math`
This should solve the problem.
|
How about (when you need only `math.pi`):
```
from math import pi as PI
```
and then use it as the `PI` symbol?
|
49,709,826
|
I am on Windows 10, and I run the following Python file:
```
import subprocess
subprocess.call("dir")
```
But I get the following error:
```
File "A:/python-tests/subprocess_test.py", line 10, in <module>
subprocess.call(["dir"])
File "A:\anaconda\lib\subprocess.py", line 267, in call
with Popen(*popenargs, **kwargs) as p:
File "A:\anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 210, in __init__
super(SubprocessPopen, self).__init__(*args, **kwargs)
File "A:\anaconda\lib\subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "A:\anaconda\lib\subprocess.py", line 997, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
```
Note that I am only using `dir` as an example here. I actually want to run a more complicated command, but I am getting the same error in that case too.
Note that I am not using `shell=True`, so the answer to this question is not applicable: [Cannot find the file specified when using subprocess.call('dir', shell=True) in Python](https://stackoverflow.com/q/20330385/3486684)
This is line 997 of `subprocess.py`:
```
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
# no special security
None, None,
int(not close_fds),
creationflags,
env,
os.fspath(cwd) if cwd is not None else None,
startupinfo)
```
When I run the debugger to check out the arguments being passed to CreateProcess, I notice that `executable` is `None`. Is that normal?
|
2018/04/07
|
[
"https://Stackoverflow.com/questions/49709826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3486684/"
] |
`dir` is a command implemented in cmd.exe, so there is no dir.exe Windows executable. You must call the command through cmd:
```
subprocess.call(['cmd', '/c', 'dir'])
```
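As an aside, the same built-in can also be reached with `shell=True`, which lets cmd.exe resolve it; a sketch with output capture for illustration:
```
import subprocess

# shell=True hands the command line to cmd.exe, which knows about `dir`.
output = subprocess.check_output('dir', shell=True)
print(output.decode(errors='replace'))
```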
|
You ***must*** set `shell=True` when calling `dir`, since `dir` isn't an executable (there's no such thing as dir.exe). `dir` is an [internal command](https://www.computerhope.com/jargon/i/intecomm.htm) that was loaded with cmd.exe.
As you can see from the [documentation](https://docs.python.org/dev/library/subprocess.html):
>
> On Windows with `shell=True`, the `COMSPEC` environment variable specifies
> the default shell. The only time you need to specify `shell=True` on
> Windows is when the command you wish to execute is built into the
> shell (e.g. **dir** or copy). You do not need `shell=True` to run a batch
> file or console-based executable.
>
>
>
|
48,213,605
|
So I have a csv file that looks like this..
```
1 a
2 b
3 c
```
And I want to make it look like this..
```
1 2 3
a b c
```
I'm at a loss for how to do this with Python 3; anyone have any ideas? Really appreciate it.
|
2018/01/11
|
[
"https://Stackoverflow.com/questions/48213605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8761965/"
] |
Are you reading the csv with pandas? You can always use numpy or pandas transpose:
```
import numpy as np
ar1 = np.array([[1,2,3], ['a','b','c']])
ar2 = np.transpose(ar1)
Out[22]:
array([['1', 'a'],
['2', 'b'],
['3', 'c']],
dtype='<U11')
```
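To go file-to-file with numpy alone, a rough sketch (the file names are placeholders, and this assumes whitespace-separated values as in the example):
```
import numpy as np

data = np.loadtxt('in.csv', dtype=str)   # e.g. [['1', 'a'], ['2', 'b'], ...]
np.savetxt('out.csv', data.T, fmt='%s', delimiter=' ')
```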
|
As others have mentioned, `pandas` and `transpose()` is the way to go here. Here is an example:
```
import pandas as pd
input_filename = "path/to/file"
# I am using space as the sep because that is what you have shown in the example
# Also, you need header=None since your file doesn't have a header
df = pd.read_csv(input_filename, header=None, sep=r"\s+")  # read into dataframe
output_filename = "path/to/output"
df.transpose().to_csv(output_filename, index=False, header=False)
```
**Explanation**:
`read_csv()` loads the contents of your file into a `dataframe` which I called `df`. This is what `df` looks like:
```
>>> print(df)
0 1
0 1 a
1 2 b
2 3 c
```
You want to switch the rows and columns, and we can do that by calling `transpose()`. Here is what the transposed `dataframe` looks like:
```
>>> print(df.transpose())
0 1 2
0 1 2 3
1 a b c
```
Now write the transposed `dataframe` to a file with the `to_csv()` method. By specifying `index=False` and `header=False`, we will avoid writing the header row and the index column.
|
5,188,285
|
I need some debugging libraries/tools to trace back the stack information and print it out to stdout.
Python's [traceback](http://docs.python.org/library/traceback.html) library can be an example.
What can be the C++ equivalent to Python's traceback library?
|
2011/03/04
|
[
"https://Stackoverflow.com/questions/5188285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
This is platform-specific, and also depends on how you're compiling code. If you compile code with gcc using `-fomit-frame-pointer` it's very hard to get a useful backtrace, generally requiring heuristics. If you're using any libraries that use that flag you'll also run into problems--it's often used for heavily optimized libraries (eg. nVidia's OpenGL libraries).
This isn't a self-contained solution, as it's part of a larger engine, but the code is helpful:
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Unix/Backtrace.cpp> (Linux, OSX)
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Win32/Crash.cpp> (CrashHandler::do\_backtrace for Win32)
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Darwin/DarwinThreadHelpers.cpp> (OSX)
This includes backtracing with the frame pointer with gcc when available and heuristic backtracing when it isn't; this can tend to give spurious entries in the trace, but for getting a backtrace for a crash report it's much better than losing the trace entirely.
There's other related code in those directories you'd want to look at to make use of that code (symbol lookups, signal handling); those links are a good starting point.
|
Try [google core dumper](http://code.google.com/p/google-coredumper/), it will give you a core dump when you need it.
|
5,188,285
|
I need some debugging libraries/tools to trace back the stack information and print it out to stdout.
Python's [traceback](http://docs.python.org/library/traceback.html) library can be an example.
What can be the C++ equivalent to Python's traceback library?
|
2011/03/04
|
[
"https://Stackoverflow.com/questions/5188285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
Try [google core dumper](http://code.google.com/p/google-coredumper/), it will give you a core dump when you need it.
|
I have had success with [libunwind](http://savannah.nongnu.org/projects/libunwind/) in the past. I know it works well with linux, but not sure how Windows is, although it claims to be portable.
|
5,188,285
|
I need some debugging libraries/tools to trace back the stack information and print it out to stdout.
Python's [traceback](http://docs.python.org/library/traceback.html) library can be an example.
What can be the C++ equivalent to Python's traceback library?
|
2011/03/04
|
[
"https://Stackoverflow.com/questions/5188285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
Try [google core dumper](http://code.google.com/p/google-coredumper/), it will give you a core dump when you need it.
|
If you are looking for getting a 'stack trace' in case of crash, try '[google breakpad](http://code.google.com/p/google-breakpad/)'
|
5,188,285
|
I need some debugging libraries/tools to trace back the stack information and print it out to stdout.
Python's [traceback](http://docs.python.org/library/traceback.html) library can be an example.
What can be the C++ equivalent to Python's traceback library?
|
2011/03/04
|
[
"https://Stackoverflow.com/questions/5188285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
This is platform-specific, and also depends on how you're compiling code. If you compile code with gcc using `-fomit-frame-pointer` it's very hard to get a useful backtrace, generally requiring heuristics. If you're using any libraries that use that flag you'll also run into problems--it's often used for heavily optimized libraries (eg. nVidia's OpenGL libraries).
This isn't a self-contained solution, as it's part of a larger engine, but the code is helpful:
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Unix/Backtrace.cpp> (Linux, OSX)
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Win32/Crash.cpp> (CrashHandler::do\_backtrace for Win32)
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Darwin/DarwinThreadHelpers.cpp> (OSX)
This includes backtracing with the frame pointer with gcc when available and heuristic backtracing when it isn't; this can tend to give spurious entries in the trace, but for getting a backtrace for a crash report it's much better than losing the trace entirely.
There's other related code in those directories you'd want to look at to make use of that code (symbol lookups, signal handling); those links are a good starting point.
|
I have had success with [libunwind](http://savannah.nongnu.org/projects/libunwind/) in the past. I know it works well with linux, but not sure how Windows is, although it claims to be portable.
|
5,188,285
|
I need some debugging libraries/tools to trace back the stack information and print it out to stdout.
Python's [traceback](http://docs.python.org/library/traceback.html) library can be an example.
What can be the C++ equivalent to Python's traceback library?
|
2011/03/04
|
[
"https://Stackoverflow.com/questions/5188285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
This is platform-specific, and also depends on how you're compiling code. If you compile code with gcc using `-fomit-frame-pointer` it's very hard to get a useful backtrace, generally requiring heuristics. If you're using any libraries that use that flag you'll also run into problems--it's often used for heavily optimized libraries (eg. nVidia's OpenGL libraries).
This isn't a self-contained solution, as it's part of a larger engine, but the code is helpful:
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Unix/Backtrace.cpp> (Linux, OSX)
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Win32/Crash.cpp> (CrashHandler::do\_backtrace for Win32)
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Darwin/DarwinThreadHelpers.cpp> (OSX)
This includes backtracing with the frame pointer with gcc when available and heuristic backtracing when it isn't; this can tend to give spurious entries in the trace, but for getting a backtrace for a crash report it's much better than losing the trace entirely.
There's other related code in those directories you'd want to look at to make use of that code (symbol lookups, signal handling); those links are a good starting point.
|
If you are looking for getting a 'stack trace' in case of crash, try '[google breakpad](http://code.google.com/p/google-breakpad/)'
|
5,188,285
|
I need some debugging libraries/tools to trace back the stack information and print it out to stdout.
Python's [traceback](http://docs.python.org/library/traceback.html) library can be an example.
What can be the C++ equivalent to Python's traceback library?
|
2011/03/04
|
[
"https://Stackoverflow.com/questions/5188285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
This is platform-specific, and also depends on how you're compiling code. If you compile code with gcc using `-fomit-frame-pointer` it's very hard to get a useful backtrace, generally requiring heuristics. If you're using any libraries that use that flag you'll also run into problems--it's often used for heavily optimized libraries (eg. nVidia's OpenGL libraries).
This isn't a self-contained solution, as it's part of a larger engine, but the code is helpful:
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Unix/Backtrace.cpp> (Linux, OSX)
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Win32/Crash.cpp> (CrashHandler::do\_backtrace for Win32)
* <https://svn.stepmania.com/svn/trunk/stepmania/src/archutils/Darwin/DarwinThreadHelpers.cpp> (OSX)
This includes backtracing with the frame pointer with gcc when available and heuristic backtracing when it isn't; this can tend to give spurious entries in the trace, but for getting a backtrace for a crash report it's much better than losing the trace entirely.
There's other related code in those directories you'd want to look at to make use of that code (symbol lookups, signal handling); those links are a good starting point.
|
There's now [cpp-traceback](https://code.google.com/p/cpp-traceback/); it provides exactly Python-style tracebacks for C++.
|
5,188,285
|
I need some debugging libraries/tools to trace back the stack information and print it out to stdout.
Python's [traceback](http://docs.python.org/library/traceback.html) library can be an example.
What can be the C++ equivalent to Python's traceback library?
|
2011/03/04
|
[
"https://Stackoverflow.com/questions/5188285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
There's now [cpp-traceback](https://code.google.com/p/cpp-traceback/); it provides exactly Python-style tracebacks for C++.
|
I have had success with [libunwind](http://savannah.nongnu.org/projects/libunwind/) in the past. I know it works well with linux, but not sure how Windows is, although it claims to be portable.
|
5,188,285
|
I need some debugging libraries/tools to trace back the stack information and print it out to stdout.
Python's [traceback](http://docs.python.org/library/traceback.html) library can be an example.
What can be the C++ equivalent to Python's traceback library?
|
2011/03/04
|
[
"https://Stackoverflow.com/questions/5188285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
There's now [cpp-traceback](https://code.google.com/p/cpp-traceback/); it provides exactly Python-style tracebacks for C++.
|
If you are looking for getting a 'stack trace' in case of crash, try '[google breakpad](http://code.google.com/p/google-breakpad/)'
|
14,962,289
|
I am running a django app with nginx & uwsgi. Here's how I run uwsgi:
```
sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499
```
& nginx configurations:
```
server {
listen 80;
server_name test.com
root /www/python/apps/pyapp/;
access_log /var/log/nginx/test.com.access.log;
error_log /var/log/nginx/test.com.error.log;
# https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production
location /static/ {
alias /www/python/apps/pyapp/static/;
expires 30d;
}
location /media/ {
alias /www/python/apps/pyapp/media/;
expires 30d;
}
location / {
uwsgi_pass unix:///tmp/pyapp.socket;
include uwsgi_params;
proxy_read_timeout 120;
}
# what to serve if upstream is not available or crashes
#error_page 500 502 503 504 /media/50x.html;
}
```
Here comes the problem. When running "ab" (ApacheBench) on the server I get the following results:
nginx version: nginx/1.2.6
uwsgi version: 1.4.5
```
Server Software: nginx/1.0.15
Server Hostname: pycms.com
Server Port: 80
Document Path: /api/nodes/mostviewed/8/?format=json
Document Length: 8696 bytes
Concurrency Level: 100
Time taken for tests: 41.232 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 8866000 bytes
HTML transferred: 8696000 bytes
Requests per second: 24.25 [#/sec] (mean)
Time per request: 4123.216 [ms] (mean)
Time per request: 41.232 [ms] (mean, across all concurrent requests)
Transfer rate: 209.99 [Kbytes/sec] received
```
While running at a concurrency level of 500:
```
Concurrency Level: 500
Time taken for tests: 2.175 seconds
Complete requests: 1000
Failed requests: 50
(Connect: 0, Receive: 0, Length: 50, Exceptions: 0)
Write errors: 0
Non-2xx responses: 950
Total transferred: 629200 bytes
HTML transferred: 476300 bytes
Requests per second: 459.81 [#/sec] (mean)
Time per request: 1087.416 [ms] (mean)
Time per request: 2.175 [ms] (mean, across all concurrent requests)
Transfer rate: 282.53 [Kbytes/sec] received
```
As you can see... all requests on the server fail with either timeout errors or "Client prematurely disconnected" or:
```
writev(): Broken pipe [proto/uwsgi.c line 124] during GET /api/nodes/mostviewed/9/?format=json
```
Here's a little bit more about my application:
Basically, it's a collection of models that reflect MySQL tables which contain all the content. At the frontend, I have django-rest-framework, which serves JSON content to the clients.
I've installed django-profiling & django-debug-toolbar to see what's going on. With django-profiling, here's what I get when running a single request:
```
Instance wide RAM usage
Partition of a set of 147315 objects. Total size = 20779408 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 63960 43 5726288 28 5726288 28 str
1 36887 25 3131112 15 8857400 43 tuple
2 2495 2 1500392 7 10357792 50 dict (no owner)
3 615 0 1397160 7 11754952 57 dict of module
4 1371 1 1236432 6 12991384 63 type
5 9974 7 1196880 6 14188264 68 function
6 8974 6 1076880 5 15265144 73 types.CodeType
7 1371 1 1014408 5 16279552 78 dict of type
8 2684 2 340640 2 16620192 80 list
9 382 0 328912 2 16949104 82 dict of class
<607 more rows. Type e.g. '_.more' to view.>
CPU Time for this request
11068 function calls (10158 primitive calls) in 0.064 CPU seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/django/views/generic/base.py:44(view)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/django/views/decorators/csrf.py:76(wrapped_view)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/views.py:359(dispatch)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/generics.py:144(get)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/mixins.py:46(list)
1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:348(data)
21/1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:273(to_native)
21/1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:190(convert_object)
11/1 0.000 0.000 0.036 0.036 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:303(field_to_native)
13/11 0.000 0.000 0.033 0.003 /usr/lib/python2.6/site-packages/django/db/models/query.py:92(__iter__)
3/1 0.000 0.000 0.033 0.033 /usr/lib/python2.6/site-packages/django/db/models/query.py:77(__len__)
4 0.000 0.000 0.030 0.008 /usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py:794(execute_sql)
1 0.000 0.000 0.021 0.021 /usr/lib/python2.6/site-packages/django/views/generic/list.py:33(paginate_queryset)
1 0.000 0.000 0.021 0.021 /usr/lib/python2.6/site-packages/django/core/paginator.py:35(page)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/core/paginator.py:20(validate_number)
3 0.000 0.000 0.020 0.007 /usr/lib/python2.6/site-packages/django/core/paginator.py:57(_get_num_pages)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/core/paginator.py:44(_get_count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:340(count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:394(get_count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:568(_prefetch_related_objects)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:1596(prefetch_related_objects)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/db/backends/util.py:36(execute)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:340(get_aggregation)
5 0.000 0.000 0.020 0.004 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:136(execute)
2 0.000 0.000 0.020 0.010 /usr/lib/python2.6/site-packages/django/db/models/query.py:1748(prefetch_one_level)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py:112(execute)
5 0.000 0.000 0.019 0.004 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:316(_query)
60 0.000 0.000 0.018 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:231(iterator)
5 0.012 0.002 0.015 0.003 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:278(_do_query)
60 0.000 0.000 0.013 0.000 /usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py:751(results_iter)
30 0.000 0.000 0.010 0.000 /usr/lib/python2.6/site-packages/django/db/models/manager.py:115(all)
50 0.000 0.000 0.009 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:870(_clone)
51 0.001 0.000 0.009 0.000 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:235(clone)
4 0.000 0.000 0.009 0.002 /usr/lib/python2.6/site-packages/django/db/backends/__init__.py:302(cursor)
4 0.000 0.000 0.008 0.002 /usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py:361(_cursor)
1 0.000 0.000 0.008 0.008 /usr/lib64/python2.6/site-packages/MySQLdb/__init__.py:78(Connect)
910/208 0.003 0.000 0.008 0.000 /usr/lib64/python2.6/copy.py:144(deepcopy)
22 0.000 0.000 0.007 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:619(filter)
22 0.000 0.000 0.007 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:633(_filter_or_exclude)
20 0.000 0.000 0.005 0.000 /usr/lib/python2.6/site-packages/django/db/models/fields/related.py:560(get_query_set)
1 0.000 0.000 0.005 0.005 /usr/lib64/python2.6/site-packages/MySQLdb/connections.py:8()
```
..etc
However, django-debug-toolbar shows the following:
```
Resource Usage
Resource Value
User CPU time 149.977 msec
System CPU time 119.982 msec
Total CPU time 269.959 msec
Elapsed time 326.291 msec
Context switches 11 voluntary, 40 involuntary
and 5 queries in 27.1 ms
```
The problem is that "top" shows the load average rising quickly and apache benchmark which i ran both on the local server and from a remote machine within the network shows that i am not serving many requests / second.
What is the problem? This is as far as i could reach when profiling the code, so it would be appreciated if someone can point out what i am doing wrong here.
**Edit (23/02/2013): Adding more details based on Andrew Alcock's answer:**
The points that require my attention / answer are
(3)(3) I've executed "show global variables" on MySQL and found out that the MySQL configuration has max\_connections set to 151, which is more than enough to serve the workers i am starting for uwsgi.
(3)(4)(2) The single request i am profiling is the heaviest one. It executes 4 queries according to django-debug-toolbar. What happens is that all queries run in:
3.71, 2.83, 0.88, 4.84 ms respectively.
(4) Here you're referring to memory paging? if so, how could i tell?
(5) On 16 workers, 100 concurrency rate, 1000 requests the load average goes up to ~ 12
I ran the tests on different number of workers (concurrency level is 100):
1. 1 worker, load average ~ 1.85, 19 reqs / second, Time per request: 5229.520, 0 non-2xx
2. 2 worker, load average ~ 1.5, 19 reqs / second, Time per request: 516.520, 0 non-2xx
3. 4 worker, load average ~ 3, 16 reqs / second, Time per request: 5929.921, 0 non-2xx
4. 8 worker, load average ~ 5, 18 reqs / second, Time per request: 5301.458, 0 non-2xx
5. 16 worker, load average ~ 19, 15 reqs / second, Time per request: 6384.720, 0 non-2xx
As you can see, the more workers we have, the more load we have on the system. I can see in uwsgi's daemon log that the response time in milliseconds increases when i increase the number of workers.
On 16 workers, running 500 concurrency level requests, uwsgi starts logging the errors:
```
writev(): Broken pipe [proto/uwsgi.c line 124]
```
Load goes up to ~ 10 as well, and the tests don't take much time because 923 out of 1000 responses are non-2xx, which is why the response here is quite fast - it's almost empty. This is also a reply to your point #4 in the summary.
Assuming that what i am facing here is OS latency based on I/O and networking, what is the recommended action to scale this up? New hardware? A bigger server?
Thanks
|
2013/02/19
|
[
"https://Stackoverflow.com/questions/14962289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/202690/"
] |
**EDIT 1** Having seen the comment that you have 1 virtual core, I'm adding commentary throughout on all relevant points
**EDIT 2** More information from Maverick, so I'm eliminating ideas ruled out and developing the confirmed issues.
**EDIT 3** Filled out more details about uwsgi request queue and scaling options. Improved grammar.
**EDIT 4** Updates from Maverick and minor improvements
Comments are too small, so here are some thoughts:
1. Load average is basically how many processes are running on or waiting for CPU attention. For a perfectly loaded system with 1 CPU core, the load average should be 1.0; for a 4 core system, it should be 4.0. The moment you run the web test, the number of threads rockets and you have a *lot* of processes waiting for CPU. Unless the load average exceeds the number of CPU cores by a significant margin, it is not a concern
2. The first 'Time per request' value of 4s correlates to the length of the request queue - 1000 requests dumped on Django nearly instantaneously and took on average 4s to service, about 3.4s of which were waiting in a queue. This is due to the very heavy mismatch between the number of requests (100) vs. the number of processors (16) causing 84 of the requests to be waiting for a processor at any one moment.
3. Running at a concurrency of 100, the tests take 41 seconds at 24 requests/sec. You have 16 processes (threads), so each request is processed in about 700ms. Given your type of transaction, that is a *long* time per request. This may be because:
1. The CPU cost of each request is high in Django (which is highly unlikely given the low CPU value from the debug toolbar)
2. The OS is task switching a lot (especially if the load average is higher than 4-8), and the latency is purely down to having too many processes.
3. There are not enough DB connections serving the 16 processes, so processes are waiting for one to become available. Do you have at least one connection available per process?
4. There is *considerable* latency around the DB, either:
1. Tens of small requests each taking, say, 10ms, most of which is networking overhead. If so, can you introduce caching or reduce the number of SQL calls? Or
2. One or a couple of requests are taking 100's of ms. To check this, run profiling on the DB. If so, you need to optimise that request.
4. The split between system and user CPU cost is unusually high in system, although the total CPU is low. This implies that most of the work in Django is kernel related, such as networking or disk. In this scenario, it might be network costs (eg receiving and sending HTTP requests and receiving and sending requests to the DB). Sometimes this will be high because of *paging*. If there's no paging going on, then you probably don't have to worry about this at all.
5. You have set the processes at 16, but have a high load average (how high you don't state). Ideally you should always have at least *one* process waiting for CPU (so that CPUs don't spin idly). Processes here don't seem CPU bound, but have a significant latency, so you need more processes than cores. How many more? Try running uwsgi with different numbers of processes (1, 2, 4, 8, 12, 16, 24, etc) until you have the best throughput. If you change the latency of the average process, you will need to adjust this again.
6. The 500 concurrency level definitely is a problem, but is it the client or the server? The report says 50 (out of 100) had the incorrect content-length, which implies a server problem. The non-2xx also seems to point there. Is it possible to capture the non-2xx responses for debugging - stack traces or the specific error message would be incredibly useful. (EDIT) This is caused by the uwsgi request queue running with its default value of 100.
So, in summary:

1. Django seems fine
2. Mismatch between concurrency of load test (100 or 500) vs. processes (16): You're pushing way too many concurrent requests into the system for the number of processes to handle. Once you are above the number of processes, all that will happen is that you will lengthen the HTTP Request queue in the web server
3. There is a large latency, so either
1. Mismatch between processes (16) and CPU cores (1): If the load average is >3, then it's probably too many processes. Try again with a smaller number of processes
1. Load average > 2 -> try 8 processes
2. Load average > 4 -> try 4 processes
3. Load average > 8 -> try 2 processes
2. If the load average <3, it may be in the DB, so profile the DB to see whether there are loads of small requests (additively causing the latency) or one or two SQL statements are the problem
4. Without capturing the failed response, there's not much I can say about the failures at 500 concurrency
**Developing ideas**
Your load averages of >10 on a single-core machine are *really* nasty and (as you observe) lead to a lot of task switching and general slow behaviour. I personally don't remember seeing a machine with a load average of 19 (which you have for 16 processes) - congratulations for getting it so high ;)
The DB performance is great, so I'd give that an all-clear right now.
**Paging**: To answer your question on how to see paging - you can detect OS paging in several ways. For example, in top, the header has page-ins and page-outs (see the last line):
```
Processes: 170 total, 3 running, 4 stuck, 163 sleeping, 927 threads 15:06:31
Load Avg: 0.90, 1.19, 1.94 CPU usage: 1.37% user, 2.97% sys, 95.65% idle SharedLibs: 144M resident, 0B data, 24M linkedit.
MemRegions: 31726 total, 2541M resident, 120M private, 817M shared. PhysMem: 1420M wired, 3548M active, 1703M inactive, 6671M used, 1514M free.
VM: 392G vsize, 1286M framework vsize, 1534241(0) pageins, 0(0) pageouts. Networks: packets: 789684/288M in, 912863/482M out. Disks: 739807/15G read, 996745/24G written.
```
**Number of processes**: In your current configuration, the number of processes is *way* too high. **Scale the number of processes back to 2**. We might bring this value up later, depending on shifting further load off this server.
**Location of Apache Benchmark**: The load average of 1.85 for one process suggests to me that you are running the load generator on the same machine as uwsgi - is that correct?
If so, you really need to run this from another machine otherwise the test runs are not representative of actual load - you're taking memory and CPU from the web processes for use in the load generator. In addition, the load generator's 100 or 500 threads will generally stress your server in a way that does not happen in real life. Indeed this might be the reason the whole test fails.
**Location of the DB**: The load average for one process also suggests that you are running the DB on the same machine as the web processes - is this correct?
If I'm correct about the DB, then the first and best way to start scaling is to move the DB to another machine. We do this for a couple of reasons:
1. A DB server needs a different hardware profile from a processing node:
1. Disk: DB needs a lot of fast, redundant, backed up disk, and a processing node needs just a basic disk
2. CPU: A processing node needs the fastest CPU you can afford whereas a DB machine can often make do without (often its performance is gated on disk and RAM)
3. RAM: a DB machine generally needs as much RAM as possible (and the fastest DB has *all* its data in RAM), whereas many processing nodes need much less (yours needs about 20MB per process - very small)
4. Scaling: **Atomic** DBs scale best by having monster machines with many CPUs whereas the web tier (not having state) can scale by plugging in many identical small boxen.
2. CPU affinity: It's better for the CPU to have a load average of 1.0 and processes to have affinity to a single core. Doing so maximizes the use of the CPU cache and minimizes task switching overheads. By separating the DB and processing nodes, you are enforcing this affinity in HW.
**500 concurrency with exceptions** The request queue in the diagram above is at most 100 - if uwsgi receives a request when the queue is full, the request is rejected with a 5xx error. I think this was happening in your 500 concurrency load test - basically the queue filled up with the first 100 or so threads, then the other 400 threads issued the remaining 900 requests and received immediate 5xx errors.
To handle 500 requests per second you need to ensure two things:
1. The Request Queue size is configured to handle the burst: Use the `--listen` argument to `uwsgi`
2. The system can handle a throughput at above 500 requests per second if 500 is a normal condition, or a bit below if 500 is a peak. See scaling notes below.
I imagine that uwsgi has the queue set to a smaller number to better handle DDoS attacks; if placed under huge load, most requests immediately fail with almost no processing allowing the box as a whole to still be responsive to the administrators.
**General advice for scaling a system**
Your most important consideration is probably to **maximize throughput**. Another possible goal is to minimize response time, but I won't discuss that here. In maximising throughput, you are trying to maximize the *system*, not individual components; some local decreases might improve overall system throughput (for example, making a change that happens to add latency in the web tier *in order to improve performance of the DB* is a net gain).
Onto specifics:
1. **Move the DB to a separate machine**. After this, profile the DB during your load test by running `top` and your favorite MySQL monitoring tool. You need to be able to profile it under load. Moving the DB to a separate machine will introduce some additional latency (several ms) per request, so expect to slightly increase the number of processes at the web tier to keep the same throughput.
2. Ensure that the `uwsgi` request queue is large enough to handle a burst of traffic using the `--listen` argument. This should be several times the maximum steady-state requests-per-second your system can handle.
3. On the web/app tier: **Balance the number of processes with the number of CPU cores** and the inherent latency in the process. Too many processes slow performance; too few means that you'll never fully utilize the system resources. There is no fixed balancing point, as every application and usage pattern is different, so benchmark and adjust. As a guide, use the processes' latency (see the sketch after this list); if each task has:
* 0% latency, then you need 1 process per core
* 50% latency (i.e. the CPU time is half the actual time), then you need 2 processes per core
* 67% latency, then you need 3 processes per core
4. Check `top` during the test to ensure that you are above 90% cpu utilisation (for every core) *and* you have a load average a little above 1.0. If the load average is higher, scale back the processes. If all goes well, at some point you won't be able to achieve this target, and DB might now be the bottleneck
5. At some point you will need more power in the web tier. You can either choose to add more CPU to the machine (relatively easy) and so add more processes, **and/or** you can add in more processing nodes (horizontal scalability). The latter can be achieved in uwsgi using the method discussed [here](https://stackoverflow.com/a/15050495/1395668) by [Łukasz Mierzwa](https://stackoverflow.com/users/1154047/ukasz-mierzwa)
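As a rough illustration of the latency guide in point 3 (a minimal sketch; the 700ms request time and 100ms CPU time are assumptions based on the figures discussed above, not measurements from this system):
```
def processes_per_core(latency_fraction):
    # With a fraction f of each request spent waiting (not using CPU),
    # each core can interleave roughly 1 / (1 - f) processes before
    # they start competing for CPU.
    return 1.0 / (1.0 - latency_fraction)

# A 700ms request doing ~100ms of CPU work waits ~86% of the time,
# suggesting roughly 7 processes per core before CPU contention:
print(processes_per_core(1 - 100 / 700.0))  # ~7.0

# Sanity checks against the guide above:
print(processes_per_core(0.5))   # 2.0 -> 2 processes per core
print(processes_per_core(0.67))  # ~3.0 -> 3 processes per core
```
As always, treat the result as a starting point for benchmarking, not a target.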
|
Adding more workers and getting fewer r/s means that your request "is pure CPU" and there are no IO waits that another worker can use to serve another request.
If you want to scale you will need to use another server with more (or faster) CPUs.
However, this is a synthetic test; the number of r/s you get is the upper bound for the exact request that you are testing. In production there are many more variables that can affect performance.
|
14,962,289
|
I am running a django app with nginx & uwsgi. Here's how i run uwsgi:
```
sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499
```
& nginx configurations:
```
server {
listen 80;
server_name test.com
root /www/python/apps/pyapp/;
access_log /var/log/nginx/test.com.access.log;
error_log /var/log/nginx/test.com.error.log;
# https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production
location /static/ {
alias /www/python/apps/pyapp/static/;
expires 30d;
}
location /media/ {
alias /www/python/apps/pyapp/media/;
expires 30d;
}
location / {
uwsgi_pass unix:///tmp/pyapp.socket;
include uwsgi_params;
proxy_read_timeout 120;
}
# what to serve if upstream is not available or crashes
#error_page 500 502 503 504 /media/50x.html;
}
```
Here comes the problem. When doing "ab" (ApacheBenchmark) on the server i get the following results:
nginx version: nginx/1.2.6
uwsgi version: 1.4.5
```
Server Software: nginx/1.0.15
Server Hostname: pycms.com
Server Port: 80
Document Path: /api/nodes/mostviewed/8/?format=json
Document Length: 8696 bytes
Concurrency Level: 100
Time taken for tests: 41.232 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 8866000 bytes
HTML transferred: 8696000 bytes
Requests per second: 24.25 [#/sec] (mean)
Time per request: 4123.216 [ms] (mean)
Time per request: 41.232 [ms] (mean, across all concurrent requests)
Transfer rate: 209.99 [Kbytes/sec] received
```
While running on 500 concurrency level
```
Concurrency Level: 500
Time taken for tests: 2.175 seconds
Complete requests: 1000
Failed requests: 50
(Connect: 0, Receive: 0, Length: 50, Exceptions: 0)
Write errors: 0
Non-2xx responses: 950
Total transferred: 629200 bytes
HTML transferred: 476300 bytes
Requests per second: 459.81 [#/sec] (mean)
Time per request: 1087.416 [ms] (mean)
Time per request: 2.175 [ms] (mean, across all concurrent requests)
Transfer rate: 282.53 [Kbytes/sec] received
```
As you can see... all requests on the server fail with either timeout errors or "Client prematurely disconnected" or:
```
writev(): Broken pipe [proto/uwsgi.c line 124] during GET /api/nodes/mostviewed/9/?format=json
```
Here's a little bit more about my application:
Basically, it's a collection of models that reflect MySQL tables which contain all the content. At the frontend, i have django-rest-framework which serves json content to the clients.
I've installed django-profiling & django debug toolbar to see what's going on. On django-profiling, here's what i get when running a single request:
```
Instance wide RAM usage
Partition of a set of 147315 objects. Total size = 20779408 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 63960 43 5726288 28 5726288 28 str
1 36887 25 3131112 15 8857400 43 tuple
2 2495 2 1500392 7 10357792 50 dict (no owner)
3 615 0 1397160 7 11754952 57 dict of module
4 1371 1 1236432 6 12991384 63 type
5 9974 7 1196880 6 14188264 68 function
6 8974 6 1076880 5 15265144 73 types.CodeType
7 1371 1 1014408 5 16279552 78 dict of type
8 2684 2 340640 2 16620192 80 list
9 382 0 328912 2 16949104 82 dict of class
<607 more rows. Type e.g. '_.more' to view.>
CPU Time for this request
11068 function calls (10158 primitive calls) in 0.064 CPU seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/django/views/generic/base.py:44(view)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/django/views/decorators/csrf.py:76(wrapped_view)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/views.py:359(dispatch)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/generics.py:144(get)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/mixins.py:46(list)
1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:348(data)
21/1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:273(to_native)
21/1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:190(convert_object)
11/1 0.000 0.000 0.036 0.036 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:303(field_to_native)
13/11 0.000 0.000 0.033 0.003 /usr/lib/python2.6/site-packages/django/db/models/query.py:92(__iter__)
3/1 0.000 0.000 0.033 0.033 /usr/lib/python2.6/site-packages/django/db/models/query.py:77(__len__)
4 0.000 0.000 0.030 0.008 /usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py:794(execute_sql)
1 0.000 0.000 0.021 0.021 /usr/lib/python2.6/site-packages/django/views/generic/list.py:33(paginate_queryset)
1 0.000 0.000 0.021 0.021 /usr/lib/python2.6/site-packages/django/core/paginator.py:35(page)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/core/paginator.py:20(validate_number)
3 0.000 0.000 0.020 0.007 /usr/lib/python2.6/site-packages/django/core/paginator.py:57(_get_num_pages)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/core/paginator.py:44(_get_count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:340(count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:394(get_count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:568(_prefetch_related_objects)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:1596(prefetch_related_objects)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/db/backends/util.py:36(execute)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:340(get_aggregation)
5 0.000 0.000 0.020 0.004 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:136(execute)
2 0.000 0.000 0.020 0.010 /usr/lib/python2.6/site-packages/django/db/models/query.py:1748(prefetch_one_level)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py:112(execute)
5 0.000 0.000 0.019 0.004 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:316(_query)
60 0.000 0.000 0.018 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:231(iterator)
5 0.012 0.002 0.015 0.003 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:278(_do_query)
60 0.000 0.000 0.013 0.000 /usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py:751(results_iter)
30 0.000 0.000 0.010 0.000 /usr/lib/python2.6/site-packages/django/db/models/manager.py:115(all)
50 0.000 0.000 0.009 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:870(_clone)
51 0.001 0.000 0.009 0.000 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:235(clone)
4 0.000 0.000 0.009 0.002 /usr/lib/python2.6/site-packages/django/db/backends/__init__.py:302(cursor)
4 0.000 0.000 0.008 0.002 /usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py:361(_cursor)
1 0.000 0.000 0.008 0.008 /usr/lib64/python2.6/site-packages/MySQLdb/__init__.py:78(Connect)
910/208 0.003 0.000 0.008 0.000 /usr/lib64/python2.6/copy.py:144(deepcopy)
22 0.000 0.000 0.007 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:619(filter)
22 0.000 0.000 0.007 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:633(_filter_or_exclude)
20 0.000 0.000 0.005 0.000 /usr/lib/python2.6/site-packages/django/db/models/fields/related.py:560(get_query_set)
1 0.000 0.000 0.005 0.005 /usr/lib64/python2.6/site-packages/MySQLdb/connections.py:8()
```
..etc
However, django-debug-toolbar shows the following:
```
Resource Usage
Resource Value
User CPU time 149.977 msec
System CPU time 119.982 msec
Total CPU time 269.959 msec
Elapsed time 326.291 msec
Context switches 11 voluntary, 40 involuntary
and 5 queries in 27.1 ms
```
The problem is that "top" shows the load average rising quickly and apache benchmark which i ran both on the local server and from a remote machine within the network shows that i am not serving many requests / second.
What is the problem? This is as far as i could reach when profiling the code, so it would be appreciated if someone can point out what i am doing wrong here.
**Edit (23/02/2013): Adding more details based on Andrew Alcock's answer:**
The points that require my attention / answer are
(3)(3) I've executed "show global variables" on MySQL and found out that the MySQL configuration has max\_connections set to 151, which is more than enough to serve the workers i am starting for uwsgi.
(3)(4)(2) The single request i am profiling is the heaviest one. It executes 4 queries according to django-debug-toolbar. What happens is that all queries run in:
3.71, 2.83, 0.88, 4.84 ms respectively.
(4) Here you're referring to memory paging? if so, how could i tell?
(5) On 16 workers, 100 concurrency rate, 1000 requests the load average goes up to ~ 12
I ran the tests on different number of workers (concurrency level is 100):
1. 1 worker, load average ~ 1.85, 19 reqs / second, Time per request: 5229.520, 0 non-2xx
2. 2 worker, load average ~ 1.5, 19 reqs / second, Time per request: 516.520, 0 non-2xx
3. 4 worker, load average ~ 3, 16 reqs / second, Time per request: 5929.921, 0 non-2xx
4. 8 worker, load average ~ 5, 18 reqs / second, Time per request: 5301.458, 0 non-2xx
5. 16 worker, load average ~ 19, 15 reqs / second, Time per request: 6384.720, 0 non-2xx
As you can see, the more workers we have, the more load we have on the system. I can see in uwsgi's daemon log that the response time in milliseconds increases when i increase the number of workers.
On 16 workers, running 500 concurrency level requests, uwsgi starts logging the errors:
```
writev(): Broken pipe [proto/uwsgi.c line 124]
```
Load goes up to ~ 10 as well, and the tests don't take much time because 923 out of 1000 responses are non-2xx, which is why the response here is quite fast - it's almost empty. This is also a reply to your point #4 in the summary.
Assuming that what i am facing here is OS latency based on I/O and networking, what is the recommended action to scale this up? New hardware? A bigger server?
Thanks
|
2013/02/19
|
[
"https://Stackoverflow.com/questions/14962289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/202690/"
] |
**EDIT 1** Having seen the comment that you have 1 virtual core, I'm adding commentary throughout on all relevant points
**EDIT 2** More information from Maverick, so I'm eliminating ideas ruled out and developing the confirmed issues.
**EDIT 3** Filled out more details about uwsgi request queue and scaling options. Improved grammar.
**EDIT 4** Updates from Maverick and minor improvements
Comments are too small, so here are some thoughts:
1. Load average is basically how many processes are running on or waiting for CPU attention. For a perfectly loaded system with 1 CPU core, the load average should be 1.0; for a 4 core system, it should be 4.0. The moment you run the web test, the number of threads rockets and you have a *lot* of processes waiting for CPU. Unless the load average exceeds the number of CPU cores by a significant margin, it is not a concern
2. The first 'Time per request' value of 4s correlates to the length of the request queue - 1000 requests dumped on Django nearly instantaneously and took on average 4s to service, about 3.4s of which were waiting in a queue. This is due to the very heavy mismatch between the number of requests (100) vs. the number of processors (16) causing 84 of the requests to be waiting for a processor at any one moment.
3. Running at a concurrency of 100, the tests take 41 seconds at 24 requests/sec. You have 16 processes (threads), so each request is processed in about 700ms. Given your type of transaction, that is a *long* time per request. This may be because:
1. The CPU cost of each request is high in Django (which is highly unlikely given the low CPU value from the debug toolbar)
2. The OS is task switching a lot (especially if the load average is higher than 4-8), and the latency is purely down to having too many processes.
3. There are not enough DB connections serving the 16 processes, so processes are waiting for one to become available. Do you have at least one connection available per process?
4. There is *considerable* latency around the DB, either:
1. Tens of small requests each taking, say, 10ms, most of which is networking overhead. If so, can you introduce caching or reduce the number of SQL calls? Or
2. One or a couple of requests are taking 100's of ms. To check this, run profiling on the DB. If so, you need to optimise that request.
4. The split between system and user CPU cost is unusually high in system, although the total CPU is low. This implies that most of the work in Django is kernel related, such as networking or disk. In this scenario, it might be network costs (eg receiving and sending HTTP requests and receiving and sending requests to the DB). Sometimes this will be high because of *paging*. If there's no paging going on, then you probably don't have to worry about this at all.
5. You have set the processes at 16, but have a high load average (how high you don't state). Ideally you should always have at least *one* process waiting for CPU (so that CPUs don't spin idly). Processes here don't seem CPU bound, but have a significant latency, so you need more processes than cores. How many more? Try running uwsgi with different numbers of processes (1, 2, 4, 8, 12, 16, 24, etc) until you have the best throughput. If you change the latency of the average process, you will need to adjust this again.
6. The 500 concurrency level definitely is a problem, but is it the client or the server? The report says 50 (out of 100) had the incorrect content-length, which implies a server problem. The non-2xx also seems to point there. Is it possible to capture the non-2xx responses for debugging - stack traces or the specific error message would be incredibly useful. (EDIT) This is caused by the uwsgi request queue running with its default value of 100.
So, in summary:

1. Django seems fine
2. Mismatch between concurrency of load test (100 or 500) vs. processes (16): You're pushing way too many concurrent requests into the system for the number of processes to handle. Once you are above the number of processes, all that will happen is that you will lengthen the HTTP Request queue in the web server
3. There is a large latency, so either
1. Mismatch between processes (16) and CPU cores (1): If the load average is >3, then it's probably too many processes. Try again with a smaller number of processes
1. Load average > 2 -> try 8 processes
2. Load average > 4 -> try 4 processes
3. Load average > 8 -> try 2 processes
2. If the load average <3, it may be in the DB, so profile the DB to see whether there are loads of small requests (additively causing the latency) or one or two SQL statements are the problem
4. Without capturing the failed response, there's not much I can say about the failures at 500 concurrency
**Developing ideas**
Your load averages of >10 on a single-core machine are *really* nasty and (as you observe) lead to a lot of task switching and general slow behaviour. I personally don't remember seeing a machine with a load average of 19 (which you have for 16 processes) - congratulations for getting it so high ;)
The DB performance is great, so I'd give that an all-clear right now.
**Paging**: To answer your question on how to see paging - you can detect OS paging in several ways. For example, in top, the header has page-ins and page-outs (see the last line):
```
Processes: 170 total, 3 running, 4 stuck, 163 sleeping, 927 threads 15:06:31
Load Avg: 0.90, 1.19, 1.94 CPU usage: 1.37% user, 2.97% sys, 95.65% idle SharedLibs: 144M resident, 0B data, 24M linkedit.
MemRegions: 31726 total, 2541M resident, 120M private, 817M shared. PhysMem: 1420M wired, 3548M active, 1703M inactive, 6671M used, 1514M free.
VM: 392G vsize, 1286M framework vsize, 1534241(0) pageins, 0(0) pageouts. Networks: packets: 789684/288M in, 912863/482M out. Disks: 739807/15G read, 996745/24G written.
```
**Number of processes**: In your current configuration, the number of processes is *way* too high. **Scale the number of processes back to 2**. We might bring this value up later, depending on shifting further load off this server.
**Location of Apache Benchmark**: The load average of 1.85 for one process suggests to me that you are running the load generator on the same machine as uwsgi - is that correct?
If so, you really need to run this from another machine otherwise the test runs are not representative of actual load - you're taking memory and CPU from the web processes for use in the load generator. In addition, the load generator's 100 or 500 threads will generally stress your server in a way that does not happen in real life. Indeed this might be the reason the whole test fails.
**Location of the DB**: The load average for one process also suggests that you are running the DB on the same machine as the web processes - is this correct?
If I'm correct about the DB, then the first and best way to start scaling is to move the DB to another machine. We do this for a couple of reasons:
1. A DB server needs a different hardware profile from a processing node:
1. Disk: DB needs a lot of fast, redundant, backed up disk, and a processing node needs just a basic disk
2. CPU: A processing node needs the fastest CPU you can afford whereas a DB machine can often make do without (often its performance is gated on disk and RAM)
3. RAM: a DB machine generally needs as much RAM as possible (and the fastest DB has *all* its data in RAM), whereas many processing nodes need much less (yours needs about 20MB per process - very small)
4. Scaling: **Atomic** DBs scale best by having monster machines with many CPUs whereas the web tier (not having state) can scale by plugging in many identical small boxen.
2. CPU affinity: It's better for the CPU to have a load average of 1.0 and processes to have affinity to a single core. Doing so maximizes the use of the CPU cache and minimizes task switching overheads. By separating the DB and processing nodes, you are enforcing this affinity in HW.
**500 concurrency with exceptions** The request queue in the diagram above is at most 100 - if uwsgi receives a request when the queue is full, the request is rejected with a 5xx error. I think this was happening in your 500 concurrency load test - basically the queue filled up with the first 100 or so threads, then the other 400 threads issued the remaining 900 requests and received immediate 5xx errors.
To handle 500 requests per second you need to ensure two things:
1. The Request Queue size is configured to handle the burst: Use the `--listen` argument to `uwsgi`
2. The system can handle a throughput at above 500 requests per second if 500 is a normal condition, or a bit below if 500 is a peak. See scaling notes below.
I imagine that uwsgi has the queue set to a smaller number to better handle DDoS attacks; if placed under huge load, most requests immediately fail with almost no processing allowing the box as a whole to still be responsive to the administrators.
**General advice for scaling a system**
Your most important consideration is probably to **maximize throughput**. Another possible goal is to minimize response time, but I won't discuss that here. In maximising throughput, you are trying to maximize the *system*, not individual components; some local decreases might improve overall system throughput (for example, making a change that happens to add latency in the web tier *in order to improve performance of the DB* is a net gain).
Onto specifics:
1. **Move the DB to a separate machine**. After this, profile the DB during your load test by running `top` and your favorite MySQL monitoring tool. You need to be able to profile it under load. Moving the DB to a separate machine will introduce some additional latency (several ms) per request, so expect to slightly increase the number of processes at the web tier to keep the same throughput.
2. Ensure that the `uwsgi` request queue is large enough to handle a burst of traffic using the `--listen` argument. This should be several times the maximum steady-state requests-per-second your system can handle.
3. On the web/app tier: **Balance the number of processes with the number of CPU cores** and the inherent latency in the process. Too many processes slow performance; too few means that you'll never fully utilize the system resources. There is no fixed balancing point, as every application and usage pattern is different, so benchmark and adjust. As a guide, use the processes' latency; if each task has:
* 0% latency, then you need 1 process per core
* 50% latency (i.e. the CPU time is half the actual time), then you need 2 processes per core
* 67% latency, then you need 3 processes per core
4. Check `top` during the test to ensure that you are above 90% cpu utilisation (for every core) *and* you have a load average a little above 1.0. If the load average is higher, scale back the processes. If all goes well, at some point you won't be able to achieve this target, and DB might now be the bottleneck
5. At some point you will need more power in the web tier. You can either choose to add more CPU to the machine (relatively easy) and so add more processes, **and/or** you can add in more processing nodes (horizontal scalability). The latter can be achieved in uwsgi using the method discussed [here](https://stackoverflow.com/a/15050495/1395668) by [Łukasz Mierzwa](https://stackoverflow.com/users/1154047/ukasz-mierzwa)
|
Please run benchmarks for much longer than a minute (5-10 minutes at least); You really won't get much information from such a short test. And use uWSGI's carbon plugin to push stats to a carbon/graphite server (You will need to have one); You will have much more information for debugging.
When You send 500 concurrent requests to Your app and it can't handle such load, the listen queue on each backend will fill up pretty quickly (it's 100 requests by default). You might want to increase that, but if workers can't process requests that fast and the listen queue (also known as backlog) is full, the linux network stack will drop requests and You will start getting errors.
Your first benchmark states that You can process a single request in ~42 ms, so a single worker could process at most 1000ms / 42ms = ~23 requests per second (if the db and other parts of the app stack didn't slow down as concurrency goes up). So to process 500 concurrent requests You would need at least 500 / 23 = 21 workers (but in reality I would say at least 40). You have only 16, no wonder it breaks under such load.
EDIT: I've mixed rate with concurrency - at least 21 workers will allow You to process 500 requests per second, not 500 concurrent requests. If You really want to handle 500 concurrent requests then You simply need 500 workers. Unless You run Your app in async mode - check the "Gevent" section in the uWSGI docs.
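To make the arithmetic above concrete, here is a minimal sketch (the 42ms figure comes from the first benchmark; the 40% headroom factor is an assumption, not a uWSGI recommendation):
```
import math

def workers_needed(target_rps, request_ms, headroom=1.4):
    # Each synchronous worker handles one request at a time,
    # so it can serve at most 1000 / request_ms requests per second.
    per_worker_rps = 1000.0 / request_ms  # ~23.8 req/s at 42ms
    return int(math.ceil(target_rps * headroom / per_worker_rps))

print(workers_needed(500, 42))  # ~30 workers, vs. the 16 configured
```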
PS. uWSGI comes with great load balancer with backend autoconfiguration (read docs under "Subscription Server" and "FastRouter"). You can setup it in a way that allows You to hot-plug new backend as needed, You just start workers on new node and they will subscribe to FastRouter and start getting requests. This is the best way to scale horizontally. And with backends on AWS You can automate this so that new backends will be started quickly when needed.
|
14,962,289
|
I am running a django app with nginx & uwsgi. Here's how i run uwsgi:
```
sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid --uid=220 --gid=499
```
& nginx configurations:
```
server {
listen 80;
server_name test.com
root /www/python/apps/pyapp/;
access_log /var/log/nginx/test.com.access.log;
error_log /var/log/nginx/test.com.error.log;
# https://docs.djangoproject.com/en/dev/howto/static-files/#serving-static-files-in-production
location /static/ {
alias /www/python/apps/pyapp/static/;
expires 30d;
}
location /media/ {
alias /www/python/apps/pyapp/media/;
expires 30d;
}
location / {
uwsgi_pass unix:///tmp/pyapp.socket;
include uwsgi_params;
proxy_read_timeout 120;
}
# what to serve if upstream is not available or crashes
#error_page 500 502 503 504 /media/50x.html;
}
```
Here comes the problem. When doing "ab" (ApacheBenchmark) on the server i get the following results:
nginx version: nginx/1.2.6
uwsgi version: 1.4.5
```
Server Software: nginx/1.0.15
Server Hostname: pycms.com
Server Port: 80
Document Path: /api/nodes/mostviewed/8/?format=json
Document Length: 8696 bytes
Concurrency Level: 100
Time taken for tests: 41.232 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 8866000 bytes
HTML transferred: 8696000 bytes
Requests per second: 24.25 [#/sec] (mean)
Time per request: 4123.216 [ms] (mean)
Time per request: 41.232 [ms] (mean, across all concurrent requests)
Transfer rate: 209.99 [Kbytes/sec] received
```
While running on 500 concurrency level
```
Concurrency Level: 500
Time taken for tests: 2.175 seconds
Complete requests: 1000
Failed requests: 50
(Connect: 0, Receive: 0, Length: 50, Exceptions: 0)
Write errors: 0
Non-2xx responses: 950
Total transferred: 629200 bytes
HTML transferred: 476300 bytes
Requests per second: 459.81 [#/sec] (mean)
Time per request: 1087.416 [ms] (mean)
Time per request: 2.175 [ms] (mean, across all concurrent requests)
Transfer rate: 282.53 [Kbytes/sec] received
```
As you can see... all requests on the server fail with either timeout errors or "Client prematurely disconnected" or:
```
writev(): Broken pipe [proto/uwsgi.c line 124] during GET /api/nodes/mostviewed/9/?format=json
```
Here's a little bit more about my application:
Basically, it's a collection of models that reflect MySQL tables which contain all the content. At the frontend, i have django-rest-framework which serves json content to the clients.
I've installed django-profiling & django debug toolbar to see what's going on. On django-profiling, here's what i get when running a single request:
```
Instance wide RAM usage
Partition of a set of 147315 objects. Total size = 20779408 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 63960 43 5726288 28 5726288 28 str
1 36887 25 3131112 15 8857400 43 tuple
2 2495 2 1500392 7 10357792 50 dict (no owner)
3 615 0 1397160 7 11754952 57 dict of module
4 1371 1 1236432 6 12991384 63 type
5 9974 7 1196880 6 14188264 68 function
6 8974 6 1076880 5 15265144 73 types.CodeType
7 1371 1 1014408 5 16279552 78 dict of type
8 2684 2 340640 2 16620192 80 list
9 382 0 328912 2 16949104 82 dict of class
<607 more rows. Type e.g. '_.more' to view.>
CPU Time for this request
11068 function calls (10158 primitive calls) in 0.064 CPU seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/django/views/generic/base.py:44(view)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/django/views/decorators/csrf.py:76(wrapped_view)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/views.py:359(dispatch)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/generics.py:144(get)
1 0.000 0.000 0.064 0.064 /usr/lib/python2.6/site-packages/rest_framework/mixins.py:46(list)
1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:348(data)
21/1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:273(to_native)
21/1 0.000 0.000 0.038 0.038 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:190(convert_object)
11/1 0.000 0.000 0.036 0.036 /usr/lib/python2.6/site-packages/rest_framework/serializers.py:303(field_to_native)
13/11 0.000 0.000 0.033 0.003 /usr/lib/python2.6/site-packages/django/db/models/query.py:92(__iter__)
3/1 0.000 0.000 0.033 0.033 /usr/lib/python2.6/site-packages/django/db/models/query.py:77(__len__)
4 0.000 0.000 0.030 0.008 /usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py:794(execute_sql)
1 0.000 0.000 0.021 0.021 /usr/lib/python2.6/site-packages/django/views/generic/list.py:33(paginate_queryset)
1 0.000 0.000 0.021 0.021 /usr/lib/python2.6/site-packages/django/core/paginator.py:35(page)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/core/paginator.py:20(validate_number)
3 0.000 0.000 0.020 0.007 /usr/lib/python2.6/site-packages/django/core/paginator.py:57(_get_num_pages)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/core/paginator.py:44(_get_count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:340(count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:394(get_count)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:568(_prefetch_related_objects)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/query.py:1596(prefetch_related_objects)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/db/backends/util.py:36(execute)
1 0.000 0.000 0.020 0.020 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:340(get_aggregation)
5 0.000 0.000 0.020 0.004 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:136(execute)
2 0.000 0.000 0.020 0.010 /usr/lib/python2.6/site-packages/django/db/models/query.py:1748(prefetch_one_level)
4 0.000 0.000 0.020 0.005 /usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py:112(execute)
5 0.000 0.000 0.019 0.004 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:316(_query)
60 0.000 0.000 0.018 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:231(iterator)
5 0.012 0.002 0.015 0.003 /usr/lib64/python2.6/site-packages/MySQLdb/cursors.py:278(_do_query)
60 0.000 0.000 0.013 0.000 /usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py:751(results_iter)
30 0.000 0.000 0.010 0.000 /usr/lib/python2.6/site-packages/django/db/models/manager.py:115(all)
50 0.000 0.000 0.009 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:870(_clone)
51 0.001 0.000 0.009 0.000 /usr/lib/python2.6/site-packages/django/db/models/sql/query.py:235(clone)
4 0.000 0.000 0.009 0.002 /usr/lib/python2.6/site-packages/django/db/backends/__init__.py:302(cursor)
4 0.000 0.000 0.008 0.002 /usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py:361(_cursor)
1 0.000 0.000 0.008 0.008 /usr/lib64/python2.6/site-packages/MySQLdb/__init__.py:78(Connect)
910/208 0.003 0.000 0.008 0.000 /usr/lib64/python2.6/copy.py:144(deepcopy)
22 0.000 0.000 0.007 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:619(filter)
22 0.000 0.000 0.007 0.000 /usr/lib/python2.6/site-packages/django/db/models/query.py:633(_filter_or_exclude)
20 0.000 0.000 0.005 0.000 /usr/lib/python2.6/site-packages/django/db/models/fields/related.py:560(get_query_set)
1 0.000 0.000 0.005 0.005 /usr/lib64/python2.6/site-packages/MySQLdb/connections.py:8()
```
..etc
However, django-debug-toolbar shows the following:
```
Resource Usage
Resource Value
User CPU time 149.977 msec
System CPU time 119.982 msec
Total CPU time 269.959 msec
Elapsed time 326.291 msec
Context switches 11 voluntary, 40 involuntary
and 5 queries in 27.1 ms
```
The problem is that "top" shows the load average rising quickly and apache benchmark which i ran both on the local server and from a remote machine within the network shows that i am not serving many requests / second.
What is the problem? This is as far as i could reach when profiling the code, so it would be appreciated if someone can point out what i am doing wrong here.
**Edit (23/02/2013): Adding more details based on Andrew Alcock's answer:**
The points that require my attention / answer are
(3)(3) I've executed "show global variables" on MySQL and found out that the MySQL configuration has max\_connections set to 151, which is more than enough to serve the workers i am starting for uwsgi.
(3)(4)(2) The single request i am profiling is the heaviest one. It executes 4 queries according to django-debug-toolbar. What happens is that all queries run in:
3.71, 2.83, 0.88, 4.84 ms respectively.
(4) Here you're referring to memory paging? if so, how could i tell?
(5) On 16 workers, 100 concurrency rate, 1000 requests the load average goes up to ~ 12
I ran the tests on different number of workers (concurrency level is 100):
1. 1 worker, load average ~ 1.85, 19 reqs / second, Time per request: 5229.520, 0 non-2xx
2. 2 worker, load average ~ 1.5, 19 reqs / second, Time per request: 516.520, 0 non-2xx
3. 4 worker, load average ~ 3, 16 reqs / second, Time per request: 5929.921, 0 non-2xx
4. 8 worker, load average ~ 5, 18 reqs / second, Time per request: 5301.458, 0 non-2xx
5. 16 worker, load average ~ 19, 15 reqs / second, Time per request: 6384.720, 0 non-2xx
As you can see, the more workers we have, the more load we have on the system. I can see in uwsgi's daemon log that the response time in milliseconds increases when i increase the number of workers.
On 16 workers, running 500 concurrency level requests, uwsgi starts logging the errors:
```
writev(): Broken pipe [proto/uwsgi.c line 124]
```
Load goes up to ~ 10 as well, and the tests don't take much time because 923 out of 1000 responses are non-2xx, which is why the response here is quite fast - it's almost empty. This is also a reply to your point #4 in the summary.
Assuming that what i am facing here is OS latency based on I/O and networking, what is the recommended action to scale this up? New hardware? A bigger server?
Thanks
|
2013/02/19
|
[
"https://Stackoverflow.com/questions/14962289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/202690/"
] |
Please run benchmarks for much longer than a minute (5-10 minutes at least); You really won't get much information from such a short test. And use uWSGI's carbon plugin to push stats to a carbon/graphite server (You will need to have one); You will have much more information for debugging.
When You send 500 concurrent requests to Your app and it can't handle such load, the listen queue on each backend will fill up pretty quickly (it's 100 requests by default). You might want to increase that, but if workers can't process requests that fast and the listen queue (also known as backlog) is full, the linux network stack will drop requests and You will start getting errors.
Your first benchmark states that You can process a single request in ~42 ms, so a single worker could process at most 1000ms / 42ms = ~23 requests per second (if the db and other parts of the app stack didn't slow down as concurrency goes up). So to process 500 concurrent requests You would need at least 500 / 23 = 21 workers (but in reality I would say at least 40). You have only 16, no wonder it breaks under such load.
EDIT: I've mixed rate with concurrency - at least 21 workers will allow You to process 500 requests per second, not 500 concurrent requests. If You really want to handle 500 concurrent requests then You simply need 500 workers. Unless You run Your app in async mode - check the "Gevent" section in the uWSGI docs.
PS. uWSGI comes with great load balancer with backend autoconfiguration (read docs under "Subscription Server" and "FastRouter"). You can setup it in a way that allows You to hot-plug new backend as needed, You just start workers on new node and they will subscribe to FastRouter and start getting requests. This is the best way to scale horizontally. And with backends on AWS You can automate this so that new backends will be started quickly when needed.
|
Adding more workers and getting fewer r/s means that your request "is pure CPU" and there are no IO waits that another worker can use to serve another request.
If you want to scale you will need to use another server with more (or faster) CPUs.
However, this is a synthetic test; the number of r/s you get is the upper bound for the exact request that you are testing. In production there are many more variables that can affect performance.
|
47,943,854
|
I'm new to waf build tool and I've googled for answers but very few unhelpful links.
Does anyone know?
As wscript is essentially a python script, I suppose I could use the `os` package?
|
2017/12/22
|
[
"https://Stackoverflow.com/questions/47943854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5556905/"
] |
Don't use the `os` module, instead use the `DEST_*` variables:
```py
ctx.load('compiler_c')
print (ctx.env.DEST_OS, ctx.env.DEST_CPU, ctx.env.DEST_BINFMT)
```
On my machine this would print `('linux', 'x86_64', 'elf')`. Then you can dispatch on that.
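For example, a minimal sketch of dispatching on those values inside a wscript (the specific defines and flags are illustrative assumptions, not waf requirements):
```py
def configure(ctx):
    ctx.load('compiler_c')
    # dispatch on the detected target platform
    if ctx.env.DEST_OS == 'linux':
        ctx.env.append_value('DEFINES', 'ON_LINUX=1')
    elif ctx.env.DEST_OS == 'win32':
        ctx.env.append_value('DEFINES', 'ON_WINDOWS=1')
    if ctx.env.DEST_CPU == 'x86_64':
        ctx.env.append_value('CFLAGS', '-m64')
```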
|
You can use `import` at every point where you could use it in any other python script.
I prefer using `platform` to write os-agnostic functions instead of evaluating attributes of `os`.
Writing the [Build-related commands](https://waf.io/book/#_build_related_commands) example from the [waf book](https://waf.io/book/) in an os-agnostic way could look something like this:
```py
import platform
top = '.'
out = 'build_directory'
def configure(ctx):
pass
def build(ctx):
if platform.system().lower().startswith('win'):
cp = 'copy'
else:
cp = 'cp'
ctx(rule=cp+' ${SRC} ${TGT}', source='foo.txt', target='bar.txt')
```
|
37,400,078
|
I am trying to translate an if-else statement written in c++ to a corresponding chunk of python code. For a C++ map dpt2, I am attempting to translate:
```
if (dpt2.find(key_t) == dpt2.end()) { dpt2[key_t] = rat; }
else { dpt2.find(key_t) -> second = dpt2.find(key_t) -> second + rat; }
```
I'm not super familiar with C++, but my understanding is that the -> operator is equivalent to a method call for a class that is being referenced by a pointer. My question is how do I translate this code into something that can be handled by an OrderedDict() object in python?
|
2016/05/23
|
[
"https://Stackoverflow.com/questions/37400078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3396878/"
] |
First of all, in C++ you'd write that as:
```
dpt[key_t] += rat;
```
That will do only one map lookup - as opposed to the code you wrote which does 2 lookups in the case that `key_t` isn't in the map and 3 lookups in the case that it is.
---
And in Python, you'd write it much the same way - assuming you declare `dpt` to be the right thing:
```
import collections
dpt = collections.defaultdict(int)
...
dpt[key_t] += rat
```
|
Something like this?
```
dpt2[key_t] = dpt2.get(key_t, 0) + rat
```
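A quick usage sketch with the OrderedDict from the question (the keys and values are made-up toy data):
```
from collections import OrderedDict

dpt2 = OrderedDict()
for key_t, rat in [("a", 1.5), ("b", 2.0), ("a", 0.5)]:
    # insert on first sight, accumulate afterwards
    dpt2[key_t] = dpt2.get(key_t, 0) + rat

print(dpt2)  # OrderedDict([('a', 2.0), ('b', 2.0)])
```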
|
17,093,322
|
I have a large data set of urls and I need a way to parse words from the urls eg:
```
realestatesales.com -> {"real","estate","sales"}
```
I would prefer to do it in python. This seems like it should be possible with some kind of english language dictionary. There might be some ambiguous cases, but I feel like there should be a solution out there somewhere.
|
2013/06/13
|
[
"https://Stackoverflow.com/questions/17093322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1893354/"
] |
This is a word segmentation problem, and an efficient dynamic programming solution exists. [This](http://thenoisychannel.com/2011/08/08/retiring-a-great-interview-problem/) page discusses how you could implement it. I have also answered this question on SO before, but I can't find a link to the answer. Please feel free to edit my post if you do.
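For concreteness, here is a minimal sketch of that dynamic programming approach (the toy `WORDS` set is an assumption; a real implementation would load a proper dictionary and score candidates by word frequency, as the linked page describes):
```
from functools import lru_cache

WORDS = {"real", "estate", "sales", "es", "tates"}  # toy dictionary

def segment(s):
    """Split s into dictionary words, preferring fewer (longer) words."""
    @lru_cache(maxsize=None)
    def best(i):
        # best segmentation of s[i:], or None if it can't be segmented
        if i == len(s):
            return ()
        options = []
        for j in range(i + 1, len(s) + 1):
            if s[i:j] in WORDS:
                rest = best(j)
                if rest is not None:
                    options.append((s[i:j],) + rest)
        return min(options, key=len) if options else None
    return best(0)

print(segment("realestatesales"))  # ('real', 'estate', 'sales')
```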
|
This might be of use to you: <http://www.clips.ua.ac.be/pattern>
It's a set of modules which, depending on your system, might already be installed. It does all kinds of interesting stuff, and even if it doesn't do exactly what you need it might get you started on the right path.
|
17,093,322
|
I have a large data set of urls and I need a way to parse words from the urls eg:
```
realestatesales.com -> {"real","estate","sales"}
```
I would prefer to do it in python. This seems like it should be possible with some kind of english language dictionary. There might be some ambiguous cases, but I feel like there should be a solution out there somewhere.
|
2013/06/13
|
[
"https://Stackoverflow.com/questions/17093322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1893354/"
] |
This is a word segmentation problem, and an efficient dynamic programming solution exists. [This](http://thenoisychannel.com/2011/08/08/retiring-a-great-interview-problem/) page discusses how you could implement it. I have also answered this question on SO before, but I can't find a link to the answer. Please feel free to edit my post if you do.
|
Ternary Search Trees when filled with a word-dictionary can find the most-complex set of matched terms (*words*) rather efficiently. This is the solution I've previously used.
You can get a C/Python implementation of a tst here: <http://github.com/nlehuen/pytst>
**Example:**
```
import tst
tree = tst.TST()
#note that tst.ListAction() assigns each matched term to a list
words = tree.scan("MultipleWordString", tst.ListAction())
```
**Other Resources:**
The open-source search engine called "Solr" uses what it calls a "[Word-Boundary-Filter](http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.WordDelimiterFilterFactory)" to deal with this problem; you might want to have a look at it.
|
14,441,412
|
I have python scripts and shell scripts in the same folder which both need configuration. I currently have a config.py for my python scripts but I was wondering if it is possible to have a single configuration file which can be easily read by both python scripts and also shell scripts.
Can anyone give an example of the format of a configuration file best suited to being read by both python and shell.
|
2013/01/21
|
[
"https://Stackoverflow.com/questions/14441412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1738522/"
] |
I think the simplest solution will be:
```
key1="value1"
key2="value2"
key3="value3"
```
in [shell](/questions/tagged/shell "show questions tagged 'shell'") you just have to source this env file and in Python, it's easy to parse.
Spaces are not allowed around `=`
For Python, see this post: [Emulating Bash 'source' in Python](https://stackoverflow.com/questions/3503719/emulating-bash-source-in-python)
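If you'd rather not emulate `source`, here is a minimal sketch of parsing such a file directly in Python (the file name `config.env` is hypothetical):
```
def read_config(path):
    # reads key="value" lines, skipping blanks and comments
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            config[key] = value.strip('"')
    return config

cfg = read_config("config.env")
print(cfg["key1"])  # value1
```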
|
This is valid in both shell and python:
```
NUMBER=42
STRING="Hello there"
```
what else do you need?
|
14,441,412
|
I have python scripts and shell scripts in the same folder which both need configuration. I currently have a config.py for my python scripts but I was wondering if it is possible to have a single configuration file which can be easily read by both python scripts and also shell scripts.
Can anyone give an example of the format of a configuration file best suited to being read by both python and shell.
|
2013/01/21
|
[
"https://Stackoverflow.com/questions/14441412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1738522/"
] |
I think the simplest solution will be:
```
key1="value1"
key2="value2"
key3="value3"
```
in [shell](/questions/tagged/shell "show questions tagged 'shell'") you just have to source this env file and in Python, it's easy to parse.
Spaces are not allowed around `=`
For Python, see this post: [Emulating Bash 'source' in Python](https://stackoverflow.com/questions/3503719/emulating-bash-source-in-python)
|
**configobj** lib can help with this.
```
from configobj import ConfigObj
cfg = ConfigObj('/home/.aws/config')
access_key_id = cfg['aws_access_key_id']
secret_access_key = cfg['aws_secret_access_key']
```
|
14,441,412
|
I have python scripts and shell scripts in the same folder which both need configuration. I currently have a config.py for my python scripts but I was wondering if it is possible to have a single configuration file which can be easily read by both python scripts and also shell scripts.
Can anyone give an example of the format of a configuration file best suited to being read by both python and shell.
|
2013/01/21
|
[
"https://Stackoverflow.com/questions/14441412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1738522/"
] |
I think the simplest solution will be:
```
key1="value1"
key2="value2"
key3="value3"
```
in [shell](/questions/tagged/shell "show questions tagged 'shell'") you just have to source this env file and in Python, it's easy to parse.
Spaces are not allowed around `=`
For Python, see this post: [Emulating Bash 'source' in Python](https://stackoverflow.com/questions/3503719/emulating-bash-source-in-python)
|
Keeping "config.py" rather than "config.sh" leads to some pretty code.
*config.py*
```
CONFIG_VAR = "value"
CONFIG_VAR2 = "value2"
```
*script.py*:
```
import config
CONFIG_VAR = config.CONFIG_VAR
CONFIG_VAR2 = config.CONFIG_VAR2
```
*script.sh*:
```
CONFIG_VAR="$(python-c 'import config;print(config.CONFIG_VAR)')"
CONFIG_VAR2="$(python-c 'import config;print(config.CONFIG_VAR2)')"
```
Plus, this is a lot more legible than trying to "Emulate Bash 'source' in Python" like Gilles' answer requires.
|
14,441,412
|
I have python scripts and shell scripts in the same folder which both need configuration. I currently have a config.py for my python scripts but I was wondering if it is possible to have a single configuration file which can be easily read by both python scripts and also shell scripts.
Can anyone give an example of the format of a configuration file best suited to being read by both python and shell.
|
2013/01/21
|
[
"https://Stackoverflow.com/questions/14441412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1738522/"
] |
**configobj** lib can help with this.
```
from configobj import ConfigObj
cfg = ConfigObj('/home/.aws/config')
access_key_id = cfg['aws_access_key_id']
secret_access_key = cfg['aws_secret_access_key']
```
|
This is valid in both shell and python:
```
NUMBER=42
STRING="Hello there"
```
what else do you need?
|
14,441,412
|
I have python scripts and shell scripts in the same folder which both need configuration. I currently have a config.py for my python scripts but I was wondering if it is possible to have a single configuration file which can be easily read by both python scripts and also shell scripts.
Can anyone give an example of the format of a configuration file best suited to being read by both python and shell.
|
2013/01/21
|
[
"https://Stackoverflow.com/questions/14441412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1738522/"
] |
This is valid in both shell and python:
```
NUMBER=42
STRING="Hello there"
```
what else do you need?
|
Keeping "config.py" rather than "config.sh" leads to some pretty code.
*config.py*
```
CONFIG_VAR = "value"
CONFIG_VAR2 = "value2"
```
*script.py*:
```
import config
CONFIG_VAR = config.CONFIG_VAR
CONFIG_VAR2 = config.CONFIG_VAR2
```
*script.sh*:
```
CONFIG_VAR="$(python-c 'import config;print(config.CONFIG_VAR)')"
CONFIG_VAR2="$(python-c 'import config;print(config.CONFIG_VAR2)')"
```
Plus, this is a lot more legible than trying to "Emulate Bash 'source' in Python" like Gilles' answer requires.
|
14,441,412
|
I have python scripts and shell scripts in the same folder which both need configuration. I currently have a config.py for my python scripts but I was wondering if it is possible to have a single configuration file which can be easily read by both python scripts and also shell scripts.
Can anyone give an example of the format of a configuration file best suited to being read by both python and shell.
|
2013/01/21
|
[
"https://Stackoverflow.com/questions/14441412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1738522/"
] |
**configobj** lib can help with this.
```
from configobj import ConfigObj
cfg = ConfigObj('/home/.aws/config')
access_key_id = cfg['aws_access_key_id']
secret_access_key = cfg['aws_secret_access_key']
```
|
Keeping "config.py" rather than "config.sh" leads to some pretty code.
*config.py*
```
CONFIG_VAR = "value"
CONFIG_VAR2 = "value2"
```
*script.py*:
```
import config
CONFIG_VAR = config.CONFIG_VAR
CONFIG_VAR2 = config.CONFIG_VAR2
```
*script.sh*:
```
CONFIG_VAR="$(python-c 'import config;print(config.CONFIG_VAR)')"
CONFIG_VAR2="$(python-c 'import config;print(config.CONFIG_VAR2)')"
```
Plus, this is a lot more legible than trying to "Emulate Bash 'source' in Python" like Gilles' answer requires.
|
680,320
|
Consider the following skeleton of a models.py for a space conquest game:
```
class Fleet(models.Model):
game = models.ForeignKey(Game, related_name='planet_set')
owner = models.ForeignKey(User, related_name='planet_set', null=True, blank=True)
home = models.ForeignKey(Planet, related_name='departing_fleet_set')
dest = models.ForeignKey(Planet, related_name='arriving_fleet_set')
ships = models.IntegerField()
class Planet(models.Model):
game = models.ForeignKey(Game, related_name='planet_set')
owner = models.ForeignKey(User, related_name='planet_set', null=True, blank=True)
name = models.CharField(max_length=250)
ships = models.IntegerField()
```
I have many such data models for a project I'm working on, and I change the state of the game based on somewhat complicated interactions between various data objects. I want to avoid lots of unnecessary calls to the database, so once per turn, I do something like
1. Query all the fleets, planets, and other objects from the database and cache them as python objects
2. Process the game objects, resolving the state of the game
3. Save them back in the database
This model seems to totally break down when using ForeignKey objects. For example, when a new fleet departs a planet, I have a line that looks something like this:
```
fleet.home.ships -= fleet.ships
```
After this line runs, I have other code that alters the number of ships at each of the planets, including the planet fleet.home. Unfortunately, the changes made in the above line are not reflected in the QuerySet of planets that I obtained earlier, so that when I save all the planets at the end of the turn, the changes to fleet.home's ships get overwritten.
Is there some better way of handling this situation? Or is this just how all ORMs are?
|
2009/03/25
|
[
"https://Stackoverflow.com/questions/680320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51100/"
] |
Django's ORM does not implement an [identity map](http://en.wikipedia.org/wiki/Identity_map) (it's in the [ticket tracker](http://code.djangoproject.com/ticket/17), but it isn't clear if or when it will be implemented; at least one core Django committer has [expressed opposition to it](http://spreadsheets.google.com/ccc?key=pSqnCvef6OXmGWQ9qbEVMeA)). This means that if you arrive at the same database object through two different query paths, you are working with different Python objects in memory.
This means that your design (load everything into memory at once, modify a lot of things, then save it all back at the end) is unworkable using the Django ORM. First because it will often waste lots of memory loading in duplicate copies of the same object, and second because of "overwriting" issues like the one you're running into.
You either need to rework your design to avoid these issues (either be careful to work with only one QuerySet at a time, saving anything modified before you make another query; or if you load several queries, look up all relations manually, don't ever traverse ForeignKeys using the convenient attributes for them), or use an alternative Python ORM that implements identity map. [SQLAlchemy](http://www.sqlalchemy.org/) is one option.
Note that this doesn't mean Django's ORM is "bad." It's optimized for the case of web applications, where these kinds of issues are rare (I've done web development with Django for years and never once had this problem on a real project). If your use case is different, you may want to choose a different ORM.
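For illustration only, the identity-map idea reduces to a cache keyed by (model, primary key), so every lookup returns the same in-memory object. This is a minimal plain-Python sketch, not Django or SQLAlchemy code:
```
_identity_map = {}

def get_object(model, pk, loader):
    # loader is whatever actually fetches the row from the database
    key = (model, pk)
    if key not in _identity_map:
        _identity_map[key] = loader(pk)  # hit the database only once
    return _identity_map[key]

# Two query paths now yield the *same* object, so in-place edits are shared.
```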
|
This is perhaps what you are looking for:
<https://web.archive.org/web/20121126091406/http://simonwillison.net/2009/May/7/mmalones/>
|
41,931,719
|
I am learning Python and I am reading the "Think Python" and doing some simple exercises included in the book.
I am asked "Define a new function called do\_four that takes a function object and a value and calls the function four times, passing the value as a parameter."
I am trying to compose this function with one statement by calling an already defined function called do\_twice() and test it with a function called print\_twice(). Here is the code:
```
def do_twice(f, x):
f(x)
f(x)
def do_four(f, v):
do_twice(do_twice(f, v), v)
def print_twice(s):
print s
print s
s = 'abc'
do_four(print_twice, s)
```
This code produces an error:
```
abc
abc
abc
abc
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-41-95b513e5e0ee> in <module>()
----> 1 do_four(print_twice, s)
<ipython-input-40-100f8587f50a> in do_four(f, v)
1 def do_four(f, v):
----> 2 do_twice(do_twice(f, v), v)
<ipython-input-38-7143620502ce> in do_twice(f, x)
1 def do_twice(f, x):
----> 2 f(x)
3 f(x)
TypeError: 'NoneType' object is not callable
```
In trying to understand what is happening I tried to construct a Stack Diagram as described in the book. Here it is:
[](https://i.stack.imgur.com/EsMCt.png)
Could you explain the error message and comment on the Stack Diagram?
Your advice will be appreciated.
|
2017/01/30
|
[
"https://Stackoverflow.com/questions/41931719",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7128498/"
] |
`do_twice` gets a function as its first argument, and doesn't return anything. So there is no reason to pass `do_twice` the result of `do_twice`. You need to pass it a function.
This would do what you meant:
```
def do_four(f, v):
do_twice(f, v)
do_twice(f, v)
```
Very similar to how you defined `do_twice` in terms of `f`.
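If you do want it in a single statement, the trick is to pass `do_twice` a function that itself calls `do_twice`. A sketch using a lambda (assuming `do_twice` as defined in the question; whether the book intends this form is an assumption):
```
def do_four(f, v):
    # do_twice calls the lambda twice; each call runs f(x) twice -> 4 calls
    do_twice(lambda x: do_twice(f, x), v)
```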
|
>
>
> ```
> do_twice(do_twice(f, v), v)
> ^^^^^^^^^^^^^^
>
> ```
>
>
Slightly rewritten:
```
result = do_twice(f, v)
do_twice(result, v)
```
You're passing the return value of `do_twice(...)` as the first parameter to `do_twice(...)`. That parameter is supposed to be a function object. `do_twice` does not *return* anything, so `result` is `None`, which you're passing instead of the expected function object.
There's no point in nesting the two `do_twice` in any way here.
|
24,029,634
|
I ran into this today and can't figure out why. I have several functions chained together that perform some time consuming operations as part of a larger pipeline. I've included these here, pared down to a test example, as best as I could. The issue is that when I call a function directly, I get the expected output (e.g., 5 different trees). However, when I call the same function in a multiprocessing pool with apply\_async (or apply, doesn't matter), I get 5 trees, but they are all the same.
I've documented this in an IPython notebook, which can be viewed here: <http://nbviewer.ipython.org/gist/cfriedline/0e275d528ff1a8d674c6>
In cell 91, I create 5 trees (each with 10 tips), and return two lists. The first containing the non-multiprocessing trees, and the second from apply\_async.
In cell 92, you can see the results of creating trees without multiprocessing, and in 93, with multiprocessing.
What I expect is that there would be a total of 10 different trees between the two tests, but instead all of the multiprocessing trees are identical. Makes little sense to me.
Relevant versions of things:
* Linux 2.6.18-238.12.1.el5 x86\_64 GNU/Linux
* Python 2.7.6 :: Anaconda 1.9.2 (64-bit)
* IPython 2.0.0
* Rpy2 2.3.9
Thanks!
Chris
|
2014/06/04
|
[
"https://Stackoverflow.com/questions/24029634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1027577/"
] |
I solved this one, with a point in the right direction from @mgilson. In fact, it was a random number problem, just not in python - in R (sigh). The state of R is copied when the Pool is created, meaning so is its random seed. To fix it, just a little rpy2 as below, calling R's set.seed function (with some process-specific stuff for good measure):
```
def create_tree(num_tips, type):
"""
creates the taxa tree in R
@param num_tips: number of taxa to create
@param type: type for naming (e.g., 'taxa')
@return: a dendropy Tree
@rtype: dendropy.Tree
"""
r = rpy2.robjects.r
set_seed = r('set.seed')
set_seed(int((time.time()+os.getpid()*1000)))
rpy2.robjects.globalenv['numtips'] = num_tips
rpy2.robjects.globalenv['treetype'] = type
name = _get_random_string(20)
if type == "T":
r("%s = rtree(numtips, rooted=T, tip.label=paste(treetype, seq(1:(numtips)), sep=''))" % name)
else:
r("%s = rtree(numtips, rooted=F, tip.label=paste(treetype, seq(1:(numtips)), sep=''))" % name)
tree = r[name]
return ape_to_dendropy(tree)
```
|
I'm not 100% familiar with these libraries, however, on Linux, (IIRC) `multiprocessing` uses `os.fork`. This means that the state of the random module (which you're using) will also be forked and that each of your processes will generate *the same sequence of random numbers* resulting in a not-so-random `_get_random_string` function.
If I'm right, and you make the pool smaller than the number of trees that you want, you should see that you get groups of N identical trees (where N is the number of processes in the pool).
I think that probably the ideal solution is to [re-seed](https://docs.python.org/2/library/random.html#random.seed) the random number generator inside of each of the processes. It's unlikely that they'll run at *exactly* the same time, so you should get differing results.
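A minimal sketch of re-seeding inside each worker via a `Pool` initializer (stand-in code, not the asker's rpy2 pipeline):
```
import os
import random
import time
from multiprocessing import Pool

def reseed():
    # mix time and pid so every forked worker gets a distinct seed
    random.seed(int(time.time() * 1000) ^ os.getpid())

def make_tree(num_tips):
    return [random.random() for _ in range(num_tips)]  # stand-in for rtree

if __name__ == '__main__':
    pool = Pool(4, initializer=reseed)
    print(pool.map(make_tree, [3] * 5))  # five differing "trees"
```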
|
1,770,789
|
I'm trying to write a python script which follows the common unix command line pattern of accepting input from stdin if no file name is given. This is what I've been using:
```
if __name__ == "__main__":
if len(sys.argv) > 1:
stream = open(sys.argv[1])
else:
stream = sys.stdin
```
Is there a more pythonic way to do that?
|
2009/11/20
|
[
"https://Stackoverflow.com/questions/1770789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170340/"
] |
The [fileinput](http://docs.python.org/library/fileinput.html) module is perfect for this.
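A minimal sketch of the pattern (it reads the files named on the command line, or stdin when none are given):
```
import fileinput
import sys

for line in fileinput.input():
    sys.stdout.write(line)
```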
|
A similar but one-line solution:
```
stream = open(sys.argv[1]) if len(sys.argv) > 1 else sys.stdin
```
|
1,770,789
|
I'm trying to write a python script which follows the common unix command line pattern of accepting input from stdin if no file name is given. This is what I've been using:
```
if __name__ == "__main__":
if len(sys.argv) > 1:
stream = open(sys.argv[1])
else:
stream = sys.stdin
```
Is there a more pythonic way to do that?
|
2009/11/20
|
[
"https://Stackoverflow.com/questions/1770789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170340/"
] |
A similar but one-line solution:
```
stream = open(sys.argv[1]) if len(sys.argv) > 1 else sys.stdin
```
|
I would suggest you make it more unixy instead:
```
if len(sys.argv) > 1:
sys.stdin = open(sys.argv[1])
```
|
1,770,789
|
I'm trying to write a python script which follows the common unix command line pattern of accepting input from stdin if no file name is given. This is what I've been using:
```
if __name__ == "__main__":
if len(sys.argv) > 1:
stream = open(sys.argv[1])
else:
stream = sys.stdin
```
Is there a more pythonic way to do that?
|
2009/11/20
|
[
"https://Stackoverflow.com/questions/1770789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170340/"
] |
A similar but one-line solution:
```
stream = open(sys.argv[1]) if len(sys.argv) > 1 else sys.stdin
```
|
how about this one?
```
stream = sys.argv[1:] and open(sys.argv[1]) or sys.stdin
```
|
1,770,789
|
I'm trying to write a python script which follows the common unix command line pattern of accepting input from stdin if no file name is given. This is what I've been using:
```
if __name__ == "__main__":
if len(sys.argv) > 1:
stream = open(sys.argv[1])
else:
stream = sys.stdin
```
Is there a more pythonic way to do that?
|
2009/11/20
|
[
"https://Stackoverflow.com/questions/1770789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170340/"
] |
The [fileinput](http://docs.python.org/library/fileinput.html) module is perfect for this.
|
I would suggest you make it more unixy instead:
```
if len(sys.argv) > 1:
sys.stdin = open(sys.argv[1])
```
|
1,770,789
|
I'm trying to write a python script which follows the common unix command line pattern of accepting input from stdin if no file name is given. This is what I've been using:
```
if __name__ == "__main__":
if len(sys.argv) > 1:
stream = open(sys.argv[1])
else:
stream = sys.stdin
```
Is there a more pythonic way to do that?
|
2009/11/20
|
[
"https://Stackoverflow.com/questions/1770789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170340/"
] |
The [fileinput](http://docs.python.org/library/fileinput.html) module is perfect for this.
|
how about this one?
```
stream = sys.argv[1:] and open(sys.argv[1]) or sys.stdin
```
|
1,770,789
|
I'm trying to write a python script which follows the common unix command line pattern of accepting input from stdin if no file name is given. This is what I've been using:
```
if __name__ == "__main__":
if len(sys.argv) > 1:
stream = open(sys.argv[1])
else:
stream = sys.stdin
```
Is there a more pythonic way to do that?
|
2009/11/20
|
[
"https://Stackoverflow.com/questions/1770789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/170340/"
] |
how about this one?
```
stream = sys.argv[1:] and open(sys.argv[1]) or sys.stdin
```
|
I would suggest you make it more unixy instead:
```
if len(sys.argv) > 1:
sys.stdin = open(sys.argv[1])
```
|
45,703,959
|
When trying to deploy a Flask application to my LAMP server, I got an error from [flipflop](https://github.com/Kozea/flipflop), a FastCGI/WSGI gateway which enables my application to speak the FastCGI protocol.
>
> ~/minimal/run.py
>
>
>
```
from flask import Flask
from flipflop import WSGIServer
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
if __name__ == '__main__':
WSGIServer(app).run()
```
Relevant part of the Apache configuration file, i.e. `/etc/httpd/conf/httpd.conf`:
```
<VirtualHost *:80>
ScriptAlias / /home/apps/minimal/run.py
ErrorLog /var/log/httpd/error_log
</VirtualHost>
```
Error report by Apache/2.2.15:
```
[apps@kernod0 ~]$ sudo head -n 20 /var/log/httpd/error_log
[sudo] password for apps:
[Wed Aug 16 16:39:16 2017] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Aug 16 16:39:16 2017] [notice] Digest: generating secret for digest authentication ...
[Wed Aug 16 16:39:16 2017] [notice] Digest: done
[Wed Aug 16 16:39:16 2017] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fcgid/2.3.9 configured -- resuming normal operations
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] Traceback (most recent call last):
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] File "/home/apps/minimal/run.py", line 12, in <module>
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] WSGIServer(app).run()
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flipflop.py", line 938, in run
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] sock.getpeername()
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] socket.error: [Errno 88] Socket operation on non-socket
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] Premature end of script headers: run.py
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] Traceback (most recent call last):
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] File "/home/apps/minimal/run.py", line 12, in <module>
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] WSGIServer(app).run()
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flipflop.py", line 938, in run
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] sock.getpeername()
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] socket.error: [Errno 88] Socket operation on non-socket
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] Premature end of script headers: run.py
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.205] Traceback (most recent call last):
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.205] File "/home/apps/minimal/run.py", line 12, in <module>
```
---
In addition, even without using `flipflop`, it still doesn't work:
>
> ~/minimal/run.py
>
>
>
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
if __name__ == '__main__':
app.run()
```
Error output:
```
[apps@kernod0 ~]$ sudo cat /var/log/httpd/error_log
[Wed Aug 16 20:47:24 2017] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Aug 16 20:47:24 2017] [notice] Digest: generating secret for digest authentication ...
[Wed Aug 16 20:47:24 2017] [notice] Digest: done
[Wed Aug 16 20:47:24 2017] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fcgid/2.3.9 configured -- resuming normal operations
[Wed Aug 16 20:47:33 2017] [error] [client 100.116.226.182] * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] Traceback (most recent call last):
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/run.py", line 11, in <module>
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] app.run()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flask/app.py", line 841, in run
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] run_simple(host, port, self, **options)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 739, in run_simple
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] inner()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 699, in inner
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] fd=fd)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 593, in make_server
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] passthrough_errors, ssl_context, fd=fd)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 504, in __init__
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] HTTPServer.__init__(self, (host, int(port)), handler)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/SocketServer.py", line 412, in __init__
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] self.server_bind()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/BaseHTTPServer.py", line 108, in server_bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] SocketServer.TCPServer.server_bind(self)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/SocketServer.py", line 423, in server_bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] self.socket.bind(self.server_address)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "<string>", line 1, in bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] socket
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] .
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] error
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] :
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Errno 98] Address already in use
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] Premature end of script headers: run.py
[Wed Aug 16 20:48:33 2017] [warn] [client 100.116.226.182] Timeout waiting for output from CGI script /home/apps/minimal/run.py
[Wed Aug 16 20:48:33 2017] [error] [client 100.116.226.182] Script timed out before returning headers: run.py
[Wed Aug 16 20:49:33 2017] [warn] [client 100.116.226.182] Timeout waiting for output from CGI script /home/apps/minimal/run.py
```
|
2017/08/16
|
[
"https://Stackoverflow.com/questions/45703959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5399734/"
] |
I've managed to run your example, but some tweaking is involved to make it work.
You might need to change paths on your system, because from your logs it seems that you're using a system that runs `python2.6` and an older `apache` version which still uses the `httpd.conf` file.
If it is possible, I would advise you to upgrade your environment.
**Here is a step-by-step working solution:**
1.Install `virtualenvwrapper`:
```
sudo -EH pip2 install virtualenvwrapper
```
2.Activate it:
```
source /usr/local/bin/virtualenvwrapper.sh
```
3.Create virtual env:
```
mkvirtualenv minimal
```
4.Install `flask` and `flup`:
```
pip install -U flask flup
```
`flipflop` is not working for me, but as its README states
>
> This module is a simplified fork of flup, written by Allan Saddi. It only has the FastCGI part of the original module.
>
>
>
so you can safely use `flup` instead.
5.Install `apache2`:
```
sudo apt-get install apache2
```
6.Install `libapache2-mod-fastcgi`:
```
sudo apt-get install libapache2-mod-fastcgi
```
7.Create `/var/www/minimal/run.py`:
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
```
8.Create `/var/www/minimal/minimal.fcgi`:
```
#!/usr/bin/python
import sys
import logging
logging.basicConfig(stream=sys.stderr)
activate_this = '/home/some_user/.virtualenvs/minimal/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
sys.path.insert(0,"/var/www/minimal/")
from flup.server.fcgi import WSGIServer
from run import app
if __name__ == '__main__':
WSGIServer(app).run()
```
9.Make `minimal.fcgi` executable:
```
sudo chmod +x minimal.fcgi
```
10.Create `minimal.conf` file (in `/etc/apache2/sites-available` on my server):
```
FastCgiServer /var/www/minimal/minimal.fcgi -idle-timeout 300 -processes 5
<VirtualHost *:80>
ServerName YOUR_IP_ADDRESS
DocumentRoot /var/www/minimal/
AddHandler fastcgi-script fcgi
ScriptAlias / /var/www/minimal/minimal.fcgi/
<Location />
SetHandler fastcgi-script
</Location>
</VirtualHost>
```
11.Enable new site:
```
sudo a2ensite minimal.conf
```
12.Change `/var/www/` ownership to `www-data` user:
```
sudo chown -R www-data:www-data /var/www/
```
13.Restart `apache2`:
```
sudo /etc/init.d/apache2 restart
```
And voila! :)
If you visit your server address you should see `hello, world` in your browser:
[](https://i.stack.imgur.com/F1GQN.png)
Also when restarting `apache` you can view `FastCGI` starting in apache's `error.log`:
```
[Thu Aug 24 16:33:09.354544 2017] [mpm_event:notice] [pid 17375:tid 139752788969344] AH00491: caught SIGTERM, shutting down
[Thu Aug 24 16:33:10.414829 2017] [mpm_event:notice] [pid 17548:tid 139700962228096] AH00489: Apache/2.4.18 (Ubuntu) mod_fastcgi/mod_fastcgi-SNAP-0910052141 configured -- resuming normal operations
[Thu Aug 24 16:33:10.415033 2017] [core:notice] [pid 17548:tid 139700962228096] AH00094: Command line: '/usr/sbin/apache2'
[Thu Aug 24 16:33:10.415651 2017] [:notice] [pid 17551:tid 139700962228096] FastCGI: process manager initialized (pid 17551)
[Thu Aug 24 16:33:10.416135 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17556)
[Thu Aug 24 16:33:11.416571 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17618)
[Thu Aug 24 16:33:12.422058 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17643)
[Thu Aug 24 16:33:13.422763 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17651)
[Thu Aug 24 16:33:14.423536 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17659)
```
|
You can't run the fastcgi script from the terminal. This script is supposed to be executed by Apache. Typically you have it configured in a `ScriptAlias` directive in your Apache config file.
|
45,703,959
|
When trying to deploy a Flask application to my LAMP server, I got an error from [flipflop](https://github.com/Kozea/flipflop), a FastCGI/WSGI gateway which enables my application to speak the FastCGI protocol.
>
> ~/minimal/run.py
>
>
>
```
from flask import Flask
from flipflop import WSGIServer
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
if __name__ == '__main__':
WSGIServer(app).run()
```
Relevant part of the Apache configuration file, i.e. `/etc/httpd/conf/httpd.conf`:
```
<VirtualHost *:80>
ScriptAlias / /home/apps/minimal/run.py
ErrorLog /var/log/httpd/error_log
</VirtualHost>
```
Error report by Apache/2.2.15:
```
[apps@kernod0 ~]$ sudo head -n 20 /var/log/httpd/error_log
[sudo] password for apps:
[Wed Aug 16 16:39:16 2017] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Aug 16 16:39:16 2017] [notice] Digest: generating secret for digest authentication ...
[Wed Aug 16 16:39:16 2017] [notice] Digest: done
[Wed Aug 16 16:39:16 2017] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fcgid/2.3.9 configured -- resuming normal operations
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] Traceback (most recent call last):
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] File "/home/apps/minimal/run.py", line 12, in <module>
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] WSGIServer(app).run()
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flipflop.py", line 938, in run
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] sock.getpeername()
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] socket.error: [Errno 88] Socket operation on non-socket
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] Premature end of script headers: run.py
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] Traceback (most recent call last):
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] File "/home/apps/minimal/run.py", line 12, in <module>
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] WSGIServer(app).run()
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flipflop.py", line 938, in run
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] sock.getpeername()
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] socket.error: [Errno 88] Socket operation on non-socket
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] Premature end of script headers: run.py
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.205] Traceback (most recent call last):
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.205] File "/home/apps/minimal/run.py", line 12, in <module>
```
---
In addition, even without using `flipflop`, it still doesn't work:
>
> ~/minimal/run.py
>
>
>
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
if __name__ == '__main__':
app.run()
```
Error output:
```
[apps@kernod0 ~]$ sudo cat /var/log/httpd/error_log
[Wed Aug 16 20:47:24 2017] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Aug 16 20:47:24 2017] [notice] Digest: generating secret for digest authentication ...
[Wed Aug 16 20:47:24 2017] [notice] Digest: done
[Wed Aug 16 20:47:24 2017] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fcgid/2.3.9 configured -- resuming normal operations
[Wed Aug 16 20:47:33 2017] [error] [client 100.116.226.182] * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] Traceback (most recent call last):
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/run.py", line 11, in <module>
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] app.run()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flask/app.py", line 841, in run
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] run_simple(host, port, self, **options)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 739, in run_simple
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] inner()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 699, in inner
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] fd=fd)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 593, in make_server
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] passthrough_errors, ssl_context, fd=fd)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 504, in __init__
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] HTTPServer.__init__(self, (host, int(port)), handler)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/SocketServer.py", line 412, in __init__
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] self.server_bind()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/BaseHTTPServer.py", line 108, in server_bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] SocketServer.TCPServer.server_bind(self)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/SocketServer.py", line 423, in server_bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] self.socket.bind(self.server_address)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "<string>", line 1, in bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] socket
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] .
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] error
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] :
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Errno 98] Address already in use
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] Premature end of script headers: run.py
[Wed Aug 16 20:48:33 2017] [warn] [client 100.116.226.182] Timeout waiting for output from CGI script /home/apps/minimal/run.py
[Wed Aug 16 20:48:33 2017] [error] [client 100.116.226.182] Script timed out before returning headers: run.py
[Wed Aug 16 20:49:33 2017] [warn] [client 100.116.226.182] Timeout waiting for output from CGI script /home/apps/minimal/run.py
```
|
2017/08/16
|
[
"https://Stackoverflow.com/questions/45703959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5399734/"
] |
I've managed to run your example, but some tweaking is involved to make it work.
You might need to change paths on your system, because from your logs it seems that you're using a system that runs `python2.6` and an older `apache` version which still uses the `httpd.conf` file.
If it is possible, I would advise you to upgrade your environment.
**Here is a step-by-step working solution:**
1.Install `virtualenvwrapper`:
```
sudo -EH pip2 install virtualenvwrapper
```
2.Activate it:
```
source /usr/local/bin/virtualenvwrapper.sh
```
3.Create virtual env:
```
mkvirtualenv minimal
```
4.Install `flask` and `flup`:
```
pip install -U flask flup
```
`flipflop` is not working for me, but as its README states
>
> This module is a simplified fork of flup, written by Allan Saddi. It only has the FastCGI part of the original module.
>
>
>
so you can safely use `flup` instead.
5.Install `apache2`:
```
sudo apt-get install apache2
```
6.Install `libapache2-mod-fastcgi`:
```
sudo apt-get install libapache2-mod-fastcgi
```
7.Create `/var/www/minimal/run.py`:
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
```
8.Create `/var/www/minimal/minimal.fcgi`:
```
#!/usr/bin/python
import sys
import logging
logging.basicConfig(stream=sys.stderr)
activate_this = '/home/some_user/.virtualenvs/minimal/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
sys.path.insert(0,"/var/www/minimal/")
from flup.server.fcgi import WSGIServer
from run import app
if __name__ == '__main__':
WSGIServer(app).run()
```
9.Make `minimal.fcgi` executable:
```
sudo chmod +x minimal.fcgi
```
10.Create `minimal.conf` file (in `/etc/apache2/sites-available` on my server):
```
FastCgiServer /var/www/minimal/minimal.fcgi -idle-timeout 300 -processes 5
<VirtualHost *:80>
ServerName YOUR_IP_ADDRESS
DocumentRoot /var/www/minimal/
AddHandler fastcgi-script fcgi
ScriptAlias / /var/www/minimal/minimal.fcgi/
<Location />
SetHandler fastcgi-script
</Location>
</VirtualHost>
```
11.Enable new site:
```
sudo a2ensite minimal.conf
```
12.Change `/var/www/` ownership to `www-data` user:
```
sudo chown -R www-data:www-data /var/www/
```
13.Restart `apache2`:
```
sudo /etc/init.d/apache2 restart
```
And voila! :)
If you visit your server address you should see `hello, world` in your browser:
[](https://i.stack.imgur.com/F1GQN.png)
Also when restarting `apache` you can view `FastCGI` starting in apache's `error.log`:
```
[Thu Aug 24 16:33:09.354544 2017] [mpm_event:notice] [pid 17375:tid 139752788969344] AH00491: caught SIGTERM, shutting down
[Thu Aug 24 16:33:10.414829 2017] [mpm_event:notice] [pid 17548:tid 139700962228096] AH00489: Apache/2.4.18 (Ubuntu) mod_fastcgi/mod_fastcgi-SNAP-0910052141 configured -- resuming normal operations
[Thu Aug 24 16:33:10.415033 2017] [core:notice] [pid 17548:tid 139700962228096] AH00094: Command line: '/usr/sbin/apache2'
[Thu Aug 24 16:33:10.415651 2017] [:notice] [pid 17551:tid 139700962228096] FastCGI: process manager initialized (pid 17551)
[Thu Aug 24 16:33:10.416135 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17556)
[Thu Aug 24 16:33:11.416571 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17618)
[Thu Aug 24 16:33:12.422058 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17643)
[Thu Aug 24 16:33:13.422763 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17651)
[Thu Aug 24 16:33:14.423536 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17659)
```
|
In general you should use `mod_fastcgi` and configuration simillar to:
```
<VirtualHost *:8091>
ServerName helloworld.local
DocumentRoot /home/fe/work/flipflop
FastCgiServer /home/fe/work/flipflop/run.py
ScriptAlias / /home/fe/work/flipflop/run.py
<Location />
Options none
</Location>
</VirtualHost>
```
So it will run your script as FastCGI, but I'm not familiar with `flipflop` and can't make it work.
But if you are not limited to `flipflop`, you could use `uwsgi` to run your application, `mod_wsgi` to run it with `Apache` (read more details in the [Flask documentation](http://flask.pocoo.org/docs/0.12/deploying/mod_wsgi/)), or use the `Flask-Script` `runserver` command to run your application in a debug server (see the example in the [Flask-Script documentation](https://flask-script.readthedocs.io/en/latest/)).
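For completeness, a `mod_wsgi` entry point can be this small (a sketch with hypothetical paths; `mod_wsgi` looks for an object named `application`):
```
# minimal.wsgi
import sys
sys.path.insert(0, '/home/apps/minimal')

from run import app as application  # mod_wsgi expects "application"
```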
|
45,703,959
|
When trying to deploy a Flask application to my LAMP server, I got an error from [flipflop](https://github.com/Kozea/flipflop), a FastCGI/WSGI gateway which enables my application to speak the FastCGI protocol.
>
> ~/minimal/run.py
>
>
>
```
from flask import Flask
from flipflop import WSGIServer
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
if __name__ == '__main__':
WSGIServer(app).run()
```
Relevant part of the Apache configuration file, i.e. `/etc/httpd/conf/httpd.conf`:
```
<VirtualHost *:80>
ScriptAlias / /home/apps/minimal/run.py
ErrorLog /var/log/httpd/error_log
</VirtualHost>
```
Error report by Apache/2.2.15:
```
[apps@kernod0 ~]$ sudo head -n 20 /var/log/httpd/error_log
[sudo] password for apps:
[Wed Aug 16 16:39:16 2017] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Aug 16 16:39:16 2017] [notice] Digest: generating secret for digest authentication ...
[Wed Aug 16 16:39:16 2017] [notice] Digest: done
[Wed Aug 16 16:39:16 2017] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fcgid/2.3.9 configured -- resuming normal operations
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] Traceback (most recent call last):
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] File "/home/apps/minimal/run.py", line 12, in <module>
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] WSGIServer(app).run()
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flipflop.py", line 938, in run
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] sock.getpeername()
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] socket.error: [Errno 88] Socket operation on non-socket
[Wed Aug 16 16:39:16 2017] [error] [client 100.116.224.219] Premature end of script headers: run.py
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] Traceback (most recent call last):
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] File "/home/apps/minimal/run.py", line 12, in <module>
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] WSGIServer(app).run()
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flipflop.py", line 938, in run
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] sock.getpeername()
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] socket.error: [Errno 88] Socket operation on non-socket
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.253] Premature end of script headers: run.py
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.205] Traceback (most recent call last):
[Wed Aug 16 16:39:17 2017] [error] [client 100.116.226.205] File "/home/apps/minimal/run.py", line 12, in <module>
```
---
In addition, even without using `flipflop`, it still doesn't work:
>
> ~/minimal/run.py
>
>
>
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
if __name__ == '__main__':
app.run()
```
Error output:
```
[apps@kernod0 ~]$ sudo cat /var/log/httpd/error_log
[Wed Aug 16 20:47:24 2017] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Aug 16 20:47:24 2017] [notice] Digest: generating secret for digest authentication ...
[Wed Aug 16 20:47:24 2017] [notice] Digest: done
[Wed Aug 16 20:47:24 2017] [notice] Apache/2.2.15 (Unix) DAV/2 mod_fcgid/2.3.9 configured -- resuming normal operations
[Wed Aug 16 20:47:33 2017] [error] [client 100.116.226.182] * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] Traceback (most recent call last):
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/run.py", line 11, in <module>
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] app.run()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/flask/app.py", line 841, in run
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] run_simple(host, port, self, **options)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 739, in run_simple
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] inner()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 699, in inner
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] fd=fd)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 593, in make_server
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] passthrough_errors, ssl_context, fd=fd)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/home/apps/minimal/flask/lib/python2.6/site-packages/werkzeug/serving.py", line 504, in __init__
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] HTTPServer.__init__(self, (host, int(port)), handler)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/SocketServer.py", line 412, in __init__
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] self.server_bind()
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/BaseHTTPServer.py", line 108, in server_bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] SocketServer.TCPServer.server_bind(self)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "/usr/lib64/python2.6/SocketServer.py", line 423, in server_bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] self.socket.bind(self.server_address)
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] File "<string>", line 1, in bind
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] socket
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] .
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] error
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] :
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] [Errno 98] Address already in use
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190]
[Wed Aug 16 20:47:37 2017] [error] [client 100.116.226.190] Premature end of script headers: run.py
[Wed Aug 16 20:48:33 2017] [warn] [client 100.116.226.182] Timeout waiting for output from CGI script /home/apps/minimal/run.py
[Wed Aug 16 20:48:33 2017] [error] [client 100.116.226.182] Script timed out before returning headers: run.py
[Wed Aug 16 20:49:33 2017] [warn] [client 100.116.226.182] Timeout waiting for output from CGI script /home/apps/minimal/run.py
```
|
2017/08/16
|
[
"https://Stackoverflow.com/questions/45703959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5399734/"
] |
I've managed to run your example, but some tweaking is involved to make it work.
You might need to change paths on your system, because from your logs it seems that you're using a system that runs `python2.6` and an older `apache` version which still uses the `httpd.conf` file.
If it is possible, I would advise you to upgrade your environment.
**Here is a step-by-step working solution:**
1.Install `virtualenvwrapper`:
```
sudo -EH pip2 install virtualenvwrapper
```
2.Activate it:
```
source /usr/local/bin/virtualenvwrapper.sh
```
3.Create virtual env:
```
mkvirtualenv minimal
```
4.Install `flask` and `flup`:
```
pip install -U flask flup
```
`flipflop` is not working for me, but as its README states
>
> This module is a simplified fork of flup, written by Allan Saddi. It only has the FastCGI part of the original module.
>
>
>
so you can safely use `flup` instead.
5.Install `apache2`:
```
sudo apt-get install apache2
```
6.Install `libapache2-mod-fastcgi`:
```
sudo apt-get install libapache2-mod-fastcgi
```
7.Create `/var/www/minimal/run.py`:
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'hello, world'
```
8.Create `/var/www/minimal/minimal.fcgi`:
```
#!/usr/bin/python
import sys
import logging
logging.basicConfig(stream=sys.stderr)
activate_this = '/home/some_user/.virtualenvs/minimal/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
sys.path.insert(0,"/var/www/minimal/")
from flup.server.fcgi import WSGIServer
from run import app
if __name__ == '__main__':
WSGIServer(app).run()
```
9.Make `minimal.fcgi` executable:
```
sudo chmod +x minimal.fcgi
```
10.Create `minimal.conf` file (in `/etc/apache2/sites-available` on my server):
```
FastCgiServer /var/www/minimal/minimal.fcgi -idle-timeout 300 -processes 5
<VirtualHost *:80>
ServerName YOUR_IP_ADDRESS
DocumentRoot /var/www/minimal/
AddHandler fastcgi-script fcgi
ScriptAlias / /var/www/minimal/minimal.fcgi/
<Location />
SetHandler fastcgi-script
</Location>
</VirtualHost>
```
11.Enable new site:
```
sudo a2ensite minimal.conf
```
12.Change `/var/www/` ownership to `www-data` user:
```
sudo chown -R www-data:www-data /var/www/
```
13.Restart `apache2`:
```
sudo /etc/init.d/apache2 restart
```
And voila! :)
If you visit your server address you should see `hello, world` in your browser:
[](https://i.stack.imgur.com/F1GQN.png)
Also when restarting `apache` you can view `FastCGI` starting in apache's `error.log`:
```
[Thu Aug 24 16:33:09.354544 2017] [mpm_event:notice] [pid 17375:tid 139752788969344] AH00491: caught SIGTERM, shutting down
[Thu Aug 24 16:33:10.414829 2017] [mpm_event:notice] [pid 17548:tid 139700962228096] AH00489: Apache/2.4.18 (Ubuntu) mod_fastcgi/mod_fastcgi-SNAP-0910052141 configured -- resuming normal operations
[Thu Aug 24 16:33:10.415033 2017] [core:notice] [pid 17548:tid 139700962228096] AH00094: Command line: '/usr/sbin/apache2'
[Thu Aug 24 16:33:10.415651 2017] [:notice] [pid 17551:tid 139700962228096] FastCGI: process manager initialized (pid 17551)
[Thu Aug 24 16:33:10.416135 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17556)
[Thu Aug 24 16:33:11.416571 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17618)
[Thu Aug 24 16:33:12.422058 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17643)
[Thu Aug 24 16:33:13.422763 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17651)
[Thu Aug 24 16:33:14.423536 2017] [:warn] [pid 17551:tid 139700962228096] FastCGI: server "/var/www/minimal/minimal.fcgi" started (pid 17659)
```
|
First things first, it looks like you already have some app running/listening on port 5000.
You might want to find out which one with `sudo sockstat | grep 5000` and then configure Apache accordingly, or kill the process/service using `localhost:5000`.
Second, it looks like your virtual host is not being taken into account, or is not fully configured.
|
33,874,089
|
I am trying to integrate Alipay Gateway with my website using [this](https://github.com/liuyug/django-alipay).
I am getting the payment form but on redirecting to Alipay's website I am getting the `ILLEGAL_PARTNER_EXTERFACE` (pic attached) error.
[](https://i.stack.imgur.com/3ppU1.png)
[A few responses](https://wordpress.org/plugins/alipay-for-woocommerce/faq/) to the error online say the `payment type` is different for the testing environment. Can anyone give any pointers on how to solve this? Any other kit for Alipay integration with a `Django(python)` based website?
|
2015/11/23
|
[
"https://Stackoverflow.com/questions/33874089",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3442820/"
] |
According to the official documentation [here](https://cshall.alipay.com/support/help_detail.htm?help_id=397107), the possible reasons for that error code are:
* You did not apply for this particular payment gateway type
* You did apply for this payment gateway type, but it has not been approved yet
* You did apply for this payment gateway type, but it has been suspended due to violation of ToS
In your case, I guess it should be the first one.
There are several gateway types:
* Alipay\_Express (Alipay Express Checkout)
* Alipay\_Secured (Alipay Secured Checkout)
* Alipay\_Dual (Alipay Dual Function Checkout)
* ...
You need to make sure that your AliPay account is a business one, because only with a business account can you use the Alipay Express gateway type.
Regarding examples, you can check `liuyug/django-alipay`, which is pretty similar to `spookylukey/django-paypal`, assuming you have experience integrating with PayPal.
*OT: Sorry for not providing the direct links to the GitHub repos mentioned above. StackOverflow kept saying that I need at least 10 reputation to post more than 2 links.*
|
Which Alipay gateway API do you use?
It appears you have not applied for the relevant interface privilege, or the **partner\_id** param is incorrect.
Whatever language you use, it's just based on a common HTTP request.
Alipay provides a sandbox environment, but they use a common **partner\_id**.
As far as I know, no one provides an Alipay Python SDK.
|
56,768,320
|
It often occurs to me when I try to manipulate data, for example **"UnicodeDecodeError: 'gbk' codec can't decode byte 0x91 in position 2196: illegal multibyte sequence".**
I have found a way to bypass this error but my curiosity drives me to investigate what is in position 2196.
### **Here comes the question**:
How should I understand the number 2196? I mean, what encoding should I use when counting 1, 2, ..., 2196: utf-8? gbk? binary? hex or something else?
And how can I see the byte at that position without throwing an error?
**Here is a code portion as an example:**
```
with open(r"G:\ETCData\6aMTC\2019-06-01.txt", "r") as fp:
for i, line in enumerate(fp):
if i == 6:
pass
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-2-6810d8c84b34> in <module>()
1 with open(r"G:\ETCData\6aMTC\2019-06-01.txt", "r") as fp:
----> 2 for i, line in enumerate(fp):
3 if i == 6:
4 pass
UnicodeDecodeError: 'gbk' codec can't decode byte 0x91 in position 2196: illegal multibyte sequence
```
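For what it's worth, position 2196 is a byte offset (counting from 0) into the file's raw byte stream, independent of any text encoding. A minimal sketch to inspect that byte without triggering the decode error (reusing the asker's path):
```py
# Binary mode: bytes come back raw, so nothing gets decoded
with open(r"G:\ETCData\6aMTC\2019-06-01.txt", "rb") as fp:
    raw = fp.read()

print(repr(raw[2196]))       # the offending byte (an int on Python 3, a 1-char str on Python 2)
print(repr(raw[2190:2203]))  # a little surrounding context
```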
|
2019/06/26
|
[
"https://Stackoverflow.com/questions/56768320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6632083/"
] |
You need to subscribe to the post observable returned by the `method` function. It is done like this.
```
this.method().subscribe(
res => {
// Handle success response here
},
err => {
// Handle error response here
}
);
```
|
You are getting the 400 Bad Request error because the payload keys are mismatched with the middleware. Please pass the correct params into the Request object.
|
56,768,320
|
It often occurs to me when I try to manipulate data, for example **"UnicodeDecodeError: 'gbk' codec can't decode byte 0x91 in position 2196: illegal multibyte sequence".**
I have found a way to bypass this error but my curiosity drives me to investigate what is in position 2196.
### **Here comes the question**:
How should I understand the number 2196? I mean, what encoding should I use when counting 1, 2, ..., 2196: utf-8? gbk? binary? hex or something else?
And how can I see the byte at that position without throwing an error?
**Here is a code portion as an example:**
```
with open(r"G:\ETCData\6aMTC\2019-06-01.txt", "r") as fp:
for i, line in enumerate(fp):
if i == 6:
pass
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-2-6810d8c84b34> in <module>()
1 with open(r"G:\ETCData\6aMTC\2019-06-01.txt", "r") as fp:
----> 2 for i, line in enumerate(fp):
3 if i == 6:
4 pass
UnicodeDecodeError: 'gbk' codec can't decode byte 0x91 in position 2196: illegal multibyte sequence
```
|
2019/06/26
|
[
"https://Stackoverflow.com/questions/56768320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6632083/"
] |
You should subscribe to the post method, because this method of the http class returns an observable.
You can rewrite your code as:
```
method() {
const url='/pathname/';
return this.http.post(url, this.Object).subscribe( resp=> {
    const data = resp; // response you get from the server
}, error => {
console.log(error); //error you get from server
});
}
```
|
You are getting the 400 Bad Request error because the payload keys are mismatched with the middleware. Please pass the correct params into the Request object.
|
3,331,850
|
I generated a SQL script from a C# application on Windows 7. The name entries have utf8 characters. It works fine on a Windows machine, where I use a Python script to populate the db. Now the same script fails on a Linux platform, complaining about those special characters.
A similar thing happened when I generated an XML file containing utf chars on Windows 7: it fails to show up in browsers (IE, Firefox).
I used to generate such scripts on Windows XP and they worked perfectly everywhere.
|
2010/07/26
|
[
"https://Stackoverflow.com/questions/3331850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/243655/"
] |
Please give a small example of a script with "utf8 characters" in the "name entries". Are you sure that they are `utf8` and not some Windows encoding like `cp1252`? What makes you sure? Try this in Python at the command prompt:
```
... python -c "print repr(open('small_script.sql', 'rb').read())"
```
The interesting parts of the output are where it uses `\xhh` (where h is any hex digit) to represent non-ASCII characters e.g. `\xc3\xa2` is the UTF-8 encoding of the small a with circumflex accent. Show us a representative sample of such output. Also tell us the exact error message(s) that you get from that sample script.
**Update:** It appears that you have data encoded in `cp1252` or similar (`Latin1` aka `ISO-8859-1` is as rare as hen's teeth on Windows). To get that into `UTF-8` using Python, you'd do `fixed_data = data.decode('cp1252').encode('utf8')`; I can't help you with C# -- you may like to ask a separate question about that.
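As a sketch of that suggested fix applied to a whole file (assuming the data really is cp1252; the filenames are illustrative):
```py
# Read raw bytes, reinterpret them as cp1252, and write them back out as UTF-8
with open('small_script.sql', 'rb') as f:
    data = f.read()

fixed_data = data.decode('cp1252').encode('utf8')

with open('small_script_utf8.sql', 'wb') as f:
    f.write(fixed_data)
```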
|
Assuming you're using python, make sure you are using [Unicode strings](http://evanjones.ca/python-utf8.html).
For example:
```
s = "Hello world" # Regular String
u = u"Hello Unicode world" # Unicdoe String
```
Edit:
Here's an example of reading from a UTF-8 file from the linked site:
```
import codecs
fileObj = codecs.open( "someFile", "r", "utf-8" )
u = fileObj.read() # Returns a Unicode string from the UTF-8 bytes in the file
```
|
63,397,618
|
I'm currently trying to run an application using Docker but get the following error message when I start the application:
```py
error while loading shared libraries: libopencv_highgui.so.4.4: cannot open shared object file: No such file or directory
```
I assume that something is going wrong in the Dockerfile and that the installation is not complete or correct. Therefore I have added the section about OpenCV at the end of the post.
Did I miss an important step, or is there an error in the Dockerfile?
```py
FROM nvidia/cuda:10.2-devel-ubuntu18.04 as TOOLKITS
RUN apt-get update && apt-get install -y apt-utils
# Install additional packages
RUN apt-get install -y \
build-essential \
bzip2 \
checkinstall \
cmake \
curl \
gcc \
gfortran \
git \
pkg-config \
python3-pip \
python3-dev \
python3-numpy \
nano \
openexr \
unzip \
wget \
yasm
FROM TOOLKITS as GIT_PULLS
WORKDIR /
RUN git clone https://github.com/opencv/opencv.git
RUN git clone https://github.com/opencv/opencv_contrib.git
FROM GIT_PULLS as OPENCV_PREPERATION
RUN apt-get install -y \
libgtk-3-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libatlas-base-dev \
libtbb2 \
libtbb-dev \
libdc1394-22-dev
FROM OPENCV_PREPERATION as OPENCV_CMAKE
WORKDIR /
RUN mkdir /opencv/build
WORKDIR /opencv/build
RUN cmake \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=/usr/local \
-DINSTALL_C_EXAMPLES=ON \
-DINSTALL_PYTHON_EXAMPLES=ON \
-DWITH_TBB=ON \
-DWITH_V4L=ON \
-DOPENCV_GENERATE_PKGCONFIG=ON \
-DWITH_OPENGL=ON \
-DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-DOPENCV_PC_FILE_NAME=opencv.pc \
-DBUILD_EXAMPLES=ON ..
FROM OPENCV_CMAKE as BUILD_OPENCV_MAKE
RUN make -j $(nproc)
RUN make install
FROM TOOLKITS
COPY --from=XXX /opencv /opencv
COPY --from=XXX /opencv_contrib /opencv_contrib
```
|
2020/08/13
|
[
"https://Stackoverflow.com/questions/63397618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13460282/"
] |
I was facing the same issue before when installing OpenCV in Docker with a Python image. You probably don't need this many dependencies, but it's an option; I also have a lightweight version below that fits my case. Please give the following code a try:
**Heavy-loaded version:**
```
FROM python:3.7
RUN apt-get update \
&& apt-get install -y \
build-essential \
cmake \
git \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
RUN pip install numpy
WORKDIR /
ENV OPENCV_VERSION="4.1.1"
# install opencv-python from its source
RUN wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip \
&& unzip ${OPENCV_VERSION}.zip \
&& mkdir /opencv-${OPENCV_VERSION}/cmake_binary \
&& cd /opencv-${OPENCV_VERSION}/cmake_binary \
&& cmake -DBUILD_TIFF=ON \
-DBUILD_opencv_java=OFF \
-DWITH_CUDA=OFF \
-DWITH_OPENGL=ON \
-DWITH_OPENCL=ON \
-DWITH_IPP=ON \
-DWITH_TBB=ON \
-DWITH_EIGEN=ON \
-DWITH_V4L=ON \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=$(python3.7 -c "import sys; print(sys.prefix)") \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_INCLUDE_DIR=$(python3.7 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-DPYTHON_PACKAGES_PATH=$(python3.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
.. \
&& make install \
&& rm /${OPENCV_VERSION}.zip \
&& rm -r /opencv-${OPENCV_VERSION}
RUN ln -s \
/usr/local/python/cv2/python-3.7/cv2.cpython-37m-x86_64-linux-gnu.so \
/usr/local/lib/python3.7/site-packages/cv2.so
RUN apt-get --fix-missing update && apt-get --fix-broken install && apt-get install -y poppler-utils && apt-get install -y tesseract-ocr && \
apt-get install -y libtesseract-dev && apt-get install -y libleptonica-dev && ldconfig && apt install -y libsm6 libxext6 && apt install -y python-opencv
```
**Lightweight version:**
```
FROM python:3.7
RUN apt-get update -y
RUN apt-get update && apt-get install -y libsm6 libxext6
```
For my case, I ended up using the heavy-loaded version just to save some hassle and both versions should work fine. For your reference, please also see [this link](https://stackoverflow.com/questions/63197519/tesseractnotfound-issue-when-containerizing-in-docker) and thanks to Neo Anderson's great help.
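Either way, a quick sanity check inside the built container (assuming the cv2 bindings ended up on the interpreter's path) is:
```py
import cv2

# If the shared libraries resolved correctly this prints the version
# instead of failing with "cannot open shared object file"
print(cv2.__version__)
```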
|
```sh
apt-get update -y
apt install -y libsm6 libxext6
apt update
pip install pyglview
apt install -y libgl1-mesa-glx
```
|
63,397,618
|
I'm currently trying to run an application using Docker but get the following error message when I start the application:
```py
error while loading shared libraries: libopencv_highgui.so.4.4: cannot open shared object file: No such file or directory
```
I assume that something is going wrong in the Dockerfile and that the installation is not complete or correct. Therefore I have added the section about OpenCV at the end of the post.
Did I miss an important step, or is there an error in the Dockerfile?
```py
FROM nvidia/cuda:10.2-devel-ubuntu18.04 as TOOLKITS
RUN apt-get update && apt-get install -y apt-utils
# Install additional packages
RUN apt-get install -y \
build-essential \
bzip2 \
checkinstall \
cmake \
curl \
gcc \
gfortran \
git \
pkg-config \
python3-pip \
python3-dev \
python3-numpy \
nano \
openexr \
unzip \
wget \
yasm
FROM TOOLKITS as GIT_PULLS
WORKDIR /
RUN git clone https://github.com/opencv/opencv.git
RUN git clone https://github.com/opencv/opencv_contrib.git
FROM GIT_PULLS as OPENCV_PREPERATION
RUN apt-get install -y \
libgtk-3-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libatlas-base-dev \
libtbb2 \
libtbb-dev \
libdc1394-22-dev
FROM OPENCV_PREPERATION as OPENCV_CMAKE
WORKDIR /
RUN mkdir /opencv/build
WORKDIR /opencv/build
RUN cmake \
-DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=/usr/local \
-DINSTALL_C_EXAMPLES=ON \
-DINSTALL_PYTHON_EXAMPLES=ON \
-DWITH_TBB=ON \
-DWITH_V4L=ON \
-DOPENCV_GENERATE_PKGCONFIG=ON \
-DWITH_OPENGL=ON \
-DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-DOPENCV_PC_FILE_NAME=opencv.pc \
-DBUILD_EXAMPLES=ON ..
FROM OPENCV_CMAKE as BUILD_OPENCV_MAKE
RUN make -j $(nproc)
RUN make install
FROM TOOLKITS
COPY --from=XXX /opencv /opencv
COPY --from=XXX /opencv_contrib /opencv_contrib
```
|
2020/08/13
|
[
"https://Stackoverflow.com/questions/63397618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13460282/"
] |
I also had lots of issues in this process, and found this repository:
<https://github.com/janza/docker-python3-opencv>
Clone or download it and add the additional dependencies and files according to your requirements.
|
```sh
apt-get update -y
apt install -y libsm6 libxext6
apt update
pip install pyglview
apt install -y libgl1-mesa-glx
```
|
40,828,531
|
This is a little bit of a newbie question, I know, but I couldn't find an answer to it.
I have made some websites that leverage automatic emailing functionality. I made these websites using PHP. In every website I do, I come across some "redundancies" in the mailing part. Let me give an example, from the examples of the [PHPMailer](https://github.com/PHPMailer/PHPMailer) library:
```
$mail = new PHPMailer;
$mail->isSMTP();
$mail->Host = 'mail.domail.com';
$mail->SMTPAuth = true;
$mail->Username = 'someuser@domain.com'; // SMTP username
$mail->Password = 'secret';
$mail->Port = 587;
$mail->setFrom('someuser@domain.com', 'Mailer');
$mail->addAddress('to@gmail.com', 'Joe User'); // Add a recipient
$mail->isHTML(true);
$mail->Subject = 'Here is the subject';
$mail->Body = 'This is the HTML message body <b>in bold!</b>';
$mail->AltBody = 'This is the body in plain text for non-HTML mail clients';
```
These two statements are where I thought there are redundancies: `$mail->Username = 'someuser@domain.com'; $mail->Password = 'secret';` and `$mail->setFrom('someuser@domain.com')`. Here is my question: why do I need to provide a "from" address if I have already given a username and password? Shouldn't it simply log in to my email account and send it? If I provide a username, why do I also provide a "from" address? And vice versa.
Could someone explain why mailing systems work like this? I have also seen a similar structure in Python's standard mailing library.
|
2016/11/27
|
[
"https://Stackoverflow.com/questions/40828531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4966877/"
] |
First comments on your net's way of working:
* there is no arrow back to the `off` state. So once you switch on your washing machine, won't you ever be able to switch it off again?
* `drain` and `dry` both lead back to `idle`. But when idle has a token, it will either go to delicate or to T1. The conditions ("program" chosen by the operator) don't vanish, so they would be triggered again and again.
Considering the last point, I'd suggest having a different idle for the end of the program to avoid this cycling. If you have to pass several times through the same state but take different actions depending on the progress, you have to work with more tokens.
Some remarks about the net's form:
* you don't need to put the 1 on every arc. You could make this more readable by leaving the 1 out and indicating a number on an arc only when more than one token is needed.
* usually, the transitions are not aligned with the arcs (although nothing forbids it) but rather perpendicular to the flow (here, horizontal)
* In principle, "places" (nodes) represent states or resources, and "transitions" (rectangles) represent an event that changes the state (or an action that consumes resources). Your naming convention should better reflect this
|
Apparently you're missing some condition to stop the process. As it stands, once you start, your washing will continue in an endless loop.
|
40,828,531
|
This is a little bit of a newbie question, I know, but I couldn't find an answer to it.
I have made some websites that leverage automatic emailing functionality. I made these websites using PHP. In every website I do, I come across some "redundancies" in the mailing part. Let me give an example, from the examples of the [PHPMailer](https://github.com/PHPMailer/PHPMailer) library:
```
$mail = new PHPMailer;
$mail->isSMTP();
$mail->Host = 'mail.domail.com';
$mail->SMTPAuth = true;
$mail->Username = 'someuser@domain.com'; // SMTP username
$mail->Password = 'secret';
$mail->Port = 587;
$mail->setFrom('someuser@domain.com', 'Mailer');
$mail->addAddress('to@gmail.com', 'Joe User'); // Add a recipient
$mail->isHTML(true);
$mail->Subject = 'Here is the subject';
$mail->Body = 'This is the HTML message body <b>in bold!</b>';
$mail->AltBody = 'This is the body in plain text for non-HTML mail clients';
```
These two statements are where I thought there are redundancies: `$mail->Username = 'someuser@domain.com'; $mail->Password = 'secret';` and `$mail->setFrom('someuser@domain.com')`. Here is my question: why do I need to provide a "from" address if I have already given a username and password? Shouldn't it simply log in to my email account and send it? If I provide a username, why do I also provide a "from" address? And vice versa.
Could someone explain why mailing systems work like this? I have also seen a similar structure in Python's standard mailing library.
|
2016/11/27
|
[
"https://Stackoverflow.com/questions/40828531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4966877/"
] |
First comments on your net's way of working:
* there is no arrow back to the `off` state. So once you switch on your washing machine, won't you ever be able to switch it off again?
* `drain` and `dry` both lead back to `idle`. But when idle has a token, it will either go to delicate or to T1. The conditions ("program" chosen by the operator) don't vanish, so they would be triggered again and again.
Considering the last point, I'd suggest having a different idle for the end of the program to avoid this cycling. If you have to pass several times through the same state but take different actions depending on the progress, you have to work with more tokens.
Some remarks about the net's form:
* you don't need to put the 1 on every arc. You could make this more readable by leaving the 1 out and indicating a number on an arc only when more than one token is needed.
* usually, the transitions are not aligned with the arcs (although nothing forbids it) but rather perpendicular to the flow (here, horizontal)
* In principle, "places" (nodes) represent states or resources, and "transitions" (rectangles) represent an event that changes the state (or an action that consumes resources). Your naming convention should better reflect this
|
I think it would be nice to leave the transition graphics unshaded or unfilled if a transition is not enabled. Personally I fill it green if it is enabled.
If you want someone to check whether you modeled the logic properly in your Petri net, it would help to include a description of your system's logic in prose.
|
48,108,469
|
I am doing some PCA using sklearn.decomposition.PCA. I found that if the input matrix X is big, the results of two different PCA instances for PCA.transform will not be the same. For example, when X is a 100x200 matrix, there is no problem. When X is a 1000x200 or a 100x2000 matrix, the results of two different PCA instances will be different. I am not sure what the cause of this is: I supposed there were no random elements in sklearn's PCA solver? I am using sklearn version 0.18.1 with Python 2.7.
The script below illustrates the issue.
```
import numpy as np
import sklearn.linear_model as sklin
from sklearn.decomposition import PCA
n_sample,n_feature = 100,200
X = np.random.rand(n_sample,n_feature)
pca_1 = PCA(n_components=10)
pca_1.fit(X)
X_transformed_1 = pca_1.transform(X)
pca_2 = PCA(n_components=10)
pca_2.fit(X)
X_transformed_2 = pca_2.transform(X)
print(np.sum(X_transformed_1 == X_transformed_2) )
print(np.mean((X_transformed_1 - X_transformed_2)**2) )
```
|
2018/01/05
|
[
"https://Stackoverflow.com/questions/48108469",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7439635/"
] |
There's an `svd_solver` param in PCA, and by default it has the value "auto". Depending on the input data size, it chooses the most efficient solver.
Now as for your case: when the size is larger than 500, it will choose `randomized`.
>
> svd\_solver : string {‘auto’, ‘full’, ‘arpack’, ‘randomized’}
>
>
> **auto** :
>
>
> the solver is selected by a default policy based on X.shape and n\_components: if the input data is larger than 500x500 and the
> number of components to extract is lower than 80% of the smallest
> dimension of the data, then the more efficient ‘randomized’ method is
> enabled. Otherwise the exact full SVD is computed and optionally
> truncated afterwards.
>
>
>
To control how the randomized solver behaves, you can set `random_state` param in PCA which will control the random number generator.
Try using
```
pca_1 = PCA(n_components=10, random_state=SOME_INT)
pca_2 = PCA(n_components=10, random_state=SOME_INT)
```
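A minimal self-contained check of this (assuming scikit-learn and numpy are installed; `SOME_INT` is just 42 here):
```py
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1000, 200)  # large enough to trigger the 'randomized' solver

pca_1 = PCA(n_components=10, random_state=42).fit(X)
pca_2 = PCA(n_components=10, random_state=42).fit(X)

# With a fixed random_state, both instances agree exactly
print(np.allclose(pca_1.transform(X), pca_2.transform(X)))  # True
```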
|
I had a similar problem: even with the same trial number but on different machines, I was getting different results. Setting the svd\_solver to '`arpack`' solved the problem.
|
45,890,001
|
I want to capture only the lines that end with two asterisks using the following code:
```
import re
total_lines = 0
processed_lines = 0
regexp = re.compile(r'[*][\s]+[*]$')
for line in open('testfile.txt', 'r'):
total_lines += 1
if regexp.search(line):
print'Line not parsed. Format not defined yet'
else:
processed_lines += 1
print "Total lines: {} - Processed lines: {}".format(total_lines, processed_lines)
```
On Windows it works fine. But when I used the code on CentOS, the regex does not work. This is the output for `testfile.txt` (a file with 40 lines):
Windows `re.__version__ = '2.2.1'`:
```
Line not parsed. Format not defined yet
Line not parsed. Format not defined yet
Line not parsed. Format not defined yet
Line not parsed. Format not defined yet
Line not parsed. Format not defined yet
Total lines: 40 - Processed lines: 35
```
Linux `re.__version__='2.2.1'`:
```
Total lines: 40 - Processed lines: 40
```
Both OSes use the same Python version. You can find the `testfile.txt` [here](http://txt.do/d6jb8) and [here](http://m.uploadedit.com/bbtc/1503698518189.txt):
|
2017/08/25
|
[
"https://Stackoverflow.com/questions/45890001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2579896/"
] |
Open the file in universal newline mode `rU`, which in Python 2.x supports I/O on files whose newline format is not the platform's native format; then the `$` in your regex will match at the EOL.
```
import re
total_lines = 0
processed_lines = 0
regexp = re.compile(r'[*][\s]+[*]$')
for line in open('testfile.txt', 'rU'):
total_lines += 1
if regexp.search(line):
print'Line not parsed. Format not defined yet'
else:
processed_lines += 1
print "Total lines: {} - Processed lines: {}".format(total_lines, processed_lines)
```
[PEP278](http://www.python.org/dev/peps/pep-0278/) explained what `rU` stands for:
>
> In a Python with universal newline support open() the mode parameter
> can also be "U", meaning "open for input as a text file with universal
> newline interpretation". Mode "rU" is also allowed, for symmetry with
> "rb".
>
>
>
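To see concretely why the `$` anchor fails on CRLF lines read in plain `'r'` mode on Linux, here is a small illustration with made-up sample lines:
```py
import re

regexp = re.compile(r'[*][\s]+[*]$')

unix_line = 'data * *\n'   # what 'rU' mode yields for every line ending
dos_line = 'data * *\r\n'  # what plain 'r' yields on Linux for a CRLF file

print(bool(regexp.search(unix_line)))  # True: '$' matches just before the trailing '\n'
print(bool(regexp.search(dos_line)))   # False: the '\r' sits between the final '*' and '$'
```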
|
The test file you offered doesn't contain any lines that end with two asterisks?
This regex should match all lines that end with two asterisks:
`.*\*{2}$`
|
62,131,355
|
I am trying to create all subsets of a given string **recursively**.
Given string = 'aab', we generate all subsets, treating the characters as distinct.
The answer is: `["", "b", "a", "ab", "ba", "a", "ab", "ba", "aa", "aa", "aab", "aab", "aba", "aba", "baa", "baa"]`.
I have been looking at several solutions, such as [this one](https://stackoverflow.com/questions/24318311/generate-all-subsets-of-a-string-using-recursion),
but I am trying to make the function accept a single variable - only the string - and work with that, and I can't figure out how.
I have also been looking at [this](https://stackoverflow.com/questions/26332412/python-recursive-function-to-display-all-subsets-of-given-set) solution to a similar problem, but as it deals with lists and not strings, I have some trouble transforming it to accept and generate strings.
Here is my code; in this example I can't connect the str to the list. Hence my question.
**I edited the input and the output.**
```
def gen_all_strings(word):
if len(word) == 0:
return ''
rest = gen_all_strings(word[1:])
return rest + [[ + word[0]] + dummy for dummy in rest]
```
|
2020/06/01
|
[
"https://Stackoverflow.com/questions/62131355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11469782/"
] |
```
from itertools import *
def recursive_product(s,r=None,i=0):
if r is None:
r = []
if i>len(s):
return r
for c in product(s, repeat=i):
r.append("".join(c))
return recursive_product(s,r,i+1)
print(recursive_product('ab'))
print(recursive_product('abc'))
```
Output:
`['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']`
`['', 'a', 'b', 'c', 'aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc', 'aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', 'bab', 'bac', 'bba', 'bbb', 'bbc', 'bca', 'bcb', 'bcc', 'caa', 'cab', 'cac', 'cba', 'cbb', 'cbc', 'cca', 'ccb', 'ccc']`
To be honest, it feels really forced to use recursion in this case; here is a much simpler version with the same results:
```
nonrecursive_product = lambda s: [''.join(c) for i in range(len(s)+1) for c in product(s, repeat=i)]
```
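For what the asker's broken list expression seems to have been reaching for, here is a sketch of a single-argument recursion; note it yields the ordered subsequences only, not the rearrangements present in the expected output:
```py
def gen_all_strings(word):
    # Base case: the empty string is the only subsequence of ''
    if not word:
        return ['']
    rest = gen_all_strings(word[1:])
    # Subsequences that skip word[0], plus those that keep it as a prefix
    return rest + [word[0] + tail for tail in rest]

print(gen_all_strings('aab'))
# ['', 'b', 'a', 'ab', 'a', 'ab', 'aa', 'aab']
```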
|
This is the [powerset](https://stackoverflow.com/questions/1482308/how-to-get-all-subsets-of-a-set-powerset) of the set of characters in the string.
```
from itertools import chain, combinations
s = set('ab') #split string into a set of characters
# combinations gives the elements of the powerset of a given length r
# from_iterable puts all these into an 'iterable'
# which is converted here to a list
list(chain.from_iterable(combinations(s, r) for r in range(len(s)+1)))
```
|
62,131,355
|
I am trying to create all subsets of a given string **recursively**.
Given string = 'aab', we generate all subsets, treating the characters as distinct.
The answer is: `["", "b", "a", "ab", "ba", "a", "ab", "ba", "aa", "aa", "aab", "aab", "aba", "aba", "baa", "baa"]`.
I have been looking at several solutions, such as [this one](https://stackoverflow.com/questions/24318311/generate-all-subsets-of-a-string-using-recursion),
but I am trying to make the function accept a single variable - only the string - and work with that, and I can't figure out how.
I have also been looking at [this](https://stackoverflow.com/questions/26332412/python-recursive-function-to-display-all-subsets-of-given-set) solution to a similar problem, but as it deals with lists and not strings, I have some trouble transforming it to accept and generate strings.
Here is my code; in this example I can't connect the str to the list. Hence my question.
**I edited the input and the output.**
```
def gen_all_strings(word):
if len(word) == 0:
return ''
rest = gen_all_strings(word[1:])
return rest + [[ + word[0]] + dummy for dummy in rest]
```
|
2020/06/01
|
[
"https://Stackoverflow.com/questions/62131355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11469782/"
] |
```
from itertools import *
def recursive_product(s,r=None,i=0):
if r is None:
r = []
if i>len(s):
return r
for c in product(s, repeat=i):
r.append("".join(c))
return recursive_product(s,r,i+1)
print(recursive_product('ab'))
print(recursive_product('abc'))
```
Output:
`['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']`
`['', 'a', 'b', 'c', 'aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc', 'aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', 'bab', 'bac', 'bba', 'bbb', 'bbc', 'bca', 'bcb', 'bcc', 'caa', 'cab', 'cac', 'cba', 'cbb', 'cbc', 'cca', 'ccb', 'ccc']`
To be honest, it feels really forced to use recursion in this case; here is a much simpler version with the same results:
```
nonrecursive_product = lambda s: [''.join(c) for i in range(len(s)+1) for c in product(s, repeat=i)]
```
|
```py
import itertools as it
def all_subsets(iterable):
s = list(iterable)
subsets = it.chain.from_iterable(it.permutations(s,r) for r in range(len(s) + 1))
return list(map("".join, list(subsets)))
print(all_subsets('aab'))
# ['', 'a', 'a', 'b', 'aa', 'ab', 'aa', 'ab', 'ba', 'ba', 'aab', 'aba', 'aab', 'aba', 'baa', 'baa']
print(all_subsets('abc'))
# ['', 'a', 'b', 'c', 'ab', 'ac', 'ba', 'bc', 'ca', 'cb', 'abc', 'acb', 'bac', 'bca', 'cab', 'cba']
```
|
40,279,577
|
Using the Python package "xlsxwriter", I want to highlight cells in the following conditional range:
value > 1 or value < -1
However, some cells have -inf/inf values and it fills them with color too (yellow). Is there any way to unhighlight them?
I tried the "conditional\_format" function to uncolor them, but it doesn't work.
[output example](https://i.stack.imgur.com/kMqhb.png)
```
format1 = workbook.add_format({'bg_color':'#FFBF00'}) #yellow
format2 = workbook.add_format({'bg_color':'#2E64FE'}) #blue
format3 = workbook.add_format({'bg_color':'#FFFFFF'}) #white
c_fold=[data.columns.get_loc(col) for col in data.columns if col.startswith("fold")]
c_fold.sort()
l=len(data)+1
worksheet.conditional_format(1,c_fold[0],l,c_fold[-1], {'type':'cell',
'criteria' : '>',
'value':1,
'format':format1,
})
worksheet.conditional_format(1,c_fold[0],l,c_fold[-1], {'type':'cell',
'criteria' : '<',
'value':-1,
'format':format2,
})
worksheet.conditional_format(1,c_fold[0],l,c_fold[-1], {'type':'text',
'criteria' : 'begins with',
'value':"-inf",
'format':format3,
})
```
Thanks in advance
|
2016/10/27
|
[
"https://Stackoverflow.com/questions/40279577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7079128/"
] |
required\_param means that the parameter must exist (or Moodle will throw an immediate, fatal error).
If the parameter is optional, then use optional\_param('name of param', 'default value', PARAM\_TEXT) instead. Then you can check to see if this has the 'default value' (I usually use null as the default value).
In either case, isset() does not make sense, as the variable always has a value assigned to it.
|
You should compare the result of `required_param('LType',PARAM_ALPHA)` with the value you expect, instead of using isset. For example:
```
if(required_param('LType',PARAM_ALPHA) != 'some value'){
echo "salaam";exit;
}
```
Or:
```
if(required_param('LType',PARAM_ALPHA) === false){
echo "salaam";exit;
}
```
|
54,360,408
|
I am writing a Python application that continuously sends UDP messages to a predefined network with other hosts and fixed IPs. I wrote the Python application and dockerized it. The application works fine in Docker, no problems there.
Unfortunately I am failing to send the UDP messages from my Docker container to the host so they can be sent to the other hosts in the network. The same goes for receiving messages. Right now I don't know how to set up my Docker container so it receives UDP messages from a host with a fixed IP address in the network.
I tried setting up my Docker network with `--net host` and sent all the UDP messages from my Docker container via localhost to my host. This worked fine, too. I am missing the link by which I can send the messages on to the "outside world". I tried to make a picture of my problem.
[](https://i.stack.imgur.com/a1ohK.jpg)
My question: how do I have to set up the network communication for my Docker container/host so it can receive messages via UDP from other hosts in the network?
Thanks
|
2019/01/25
|
[
"https://Stackoverflow.com/questions/54360408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7864140/"
] |
So I experimented a lot and figured out that I just need to run the Docker container with the network configuration set to host. The UDP socket in my container is bound to the IP address of my host and therefore just needs to be linked to the host's network. For everyone struggling with the same issue, just run
```
docker run --network=host <YOURCONTAINER>
```
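For reference, a minimal sketch of the two ends in Python (the peer address, port, and function names are all hypothetical):
```py
import socket

PEER_IP, PORT = '192.168.1.20', 5005  # hypothetical LAN peer and port

def send_once():
    # With --network=host the socket uses the host's interfaces directly
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b'hello from the container', (PEER_IP, PORT))

def receive_once():
    # Run this on the receiving host (or in another host-networked container)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', PORT))
    data, addr = sock.recvfrom(1024)
    print(data, addr)
```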
|
Build your own bridge
---------------------
1. Configure the new bridge.
```
$ sudo ip link add name bridge0 type bridge
$ sudo ip addr add 192.168.5.1/24 dev bridge0
$ sudo ip link set dev bridge0 up
```
Confirm the new bridge’s settings.
```
$ ip addr show bridge0
4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.5.1/24 scope global bridge0
       valid_lft forever preferred_lft forever
```
2. Configure Docker to use the new bridge by setting the option in the daemon.json file, which is located in `/etc/docker/` on Linux or `C:\ProgramData\docker\config\` on Windows Server. On Docker for Mac or Docker for Windows, click the Docker icon, choose **Preferences**, and go to **Daemon**.
If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should have the following contents:
```
{
"bridge": "bridge0"
}
```
Restart Docker for the changes to take effect.
3. Confirm that the new outgoing NAT masquerade is set up.
```
$ sudo iptables -t nat -L -n
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 192.168.5.0/24 0.0.0.0/0
```
4. Remove the now-unused `docker0` bridge.
```
$ sudo ip link set dev docker0 down
$ sudo ip link del name docker0
$ sudo iptables -t nat -F POSTROUTING
```
5. Create a new container, and verify that it is in the new IP address range.
([ref](https://docs.docker.com/v17.09/engine/userguide/networking/default_network/build-bridges/).)
|
54,524,124
|
I put together a VAE using dense neural networks in Keras. During `model.fit` I get a dimension mismatch, but I'm not sure what is throwing the code off. Below is what my code looks like:
```
from keras.layers import Lambda, Input, Dense
from keras.models import Model
from keras.datasets import mnist
from keras.losses import mse, binary_crossentropy
from keras.utils import plot_model
from keras import backend as K
import keras
import numpy as np
import matplotlib.pyplot as plt
import argparse
import os
(x_train, y_train), (x_test, y_test) = mnist.load_data()
image_size = x_train.shape[1]
original_dim = image_size * image_size
x_train = np.reshape(x_train, [-1, original_dim])
x_test = np.reshape(x_test, [-1, original_dim])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
# network parameters
input_shape = (original_dim, )
intermediate_dim = 512
batch_size = 128
latent_dim = 2
epochs = 50
x = Input(batch_shape=(batch_size, original_dim))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_sigma = Dense(latent_dim)(h)
def sampling(args):
z_mean, z_log_sigma = args
#epsilon = K.random_normal(shape=(batch, dim))
epsilon = K.random_normal(shape=(batch_size, latent_dim))
return z_mean + K.exp(z_log_sigma) * epsilon
# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_sigma])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_sigma])
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)
print('X Decoded Mean shape: ', x_decoded_mean.shape)
# end-to-end autoencoder
vae = Model(x, x_decoded_mean)
# encoder, from inputs to latent space
encoder = Model(x, z_mean)
# generator, from latent space to reconstructed inputs
decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
generator = Model(decoder_input, _x_decoded_mean)
def vae_loss(x, x_decoded_mean):
xent_loss = keras.metrics.binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1)
return xent_loss + kl_loss
vae.compile(optimizer='rmsprop', loss=vae_loss)
print('X train shape: ', x_train.shape)
print('X test shape: ', x_test.shape)
vae.fit(x_train, x_train,
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, x_test))
```
Here is the stack trace that I see when `model.fit` is called.
```
File "/home/asattar/workspace/projects/keras-examples/blogautoencoder/VariationalAutoEncoder.py", line 81, in <module>
validation_data=(x_test, x_test))
File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/engine/training.py", line 1047, in fit
validation_steps=validation_steps)
File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/engine/training_arrays.py", line 195, in fit_loop
outs = fit_function(ins_batch)
File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/backend/tensorflow_backend.py", line 2897, in __call__
return self._call(inputs)
File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/backend/tensorflow_backend.py", line 2855, in _call
fetched = self._callable_fn(*array_vals)
File "/home/asattar/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
run_metadata_ptr)
File "/home/asattar/.local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [128,784] vs. [96,784]
[[{{node training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/BroadcastGradientArgs}} = BroadcastGradientArgs[T=DT_INT32, _class=["loc:@train...ad/Reshape"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/Shape, training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/Shape_1)]]
```
Please note the "Incompatible shapes: [128,784] vs. [96,784]" towards the end of the stack trace.
|
2019/02/04
|
[
"https://Stackoverflow.com/questions/54524124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1491639/"
] |
According to [Keras: What if the size of data is not divisible by batch\_size?](https://stackoverflow.com/questions/37974340/keras-what-if-the-size-of-data-is-not-divisible-by-batch-size), it is better to use `model.fit_generator` rather than `model.fit` here.
To use `model.fit_generator`, one should define one's own generator object.
Following is an example:
```
from keras.utils import Sequence
import math
class Generator(Sequence):
# Class is a dataset wrapper for better training performance
def __init__(self, x_set, y_set, batch_size=256):
self.x, self.y = x_set, y_set
self.batch_size = batch_size
self.indices = np.arange(self.x.shape[0])
def __len__(self):
return math.floor(self.x.shape[0] / self.batch_size)
def __getitem__(self, idx):
inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
batch_x = self.x[inds]
batch_y = self.y[inds]
return batch_x, batch_y
def on_epoch_end(self):
np.random.shuffle(self.indices)
train_datagen = Generator(x_train, x_train, batch_size)
test_datagen = Generator(x_test, x_test, batch_size)
vae.fit_generator(train_datagen,
steps_per_epoch=len(x_train)//batch_size,
validation_data=test_datagen,
validation_steps=len(x_test)//batch_size,
epochs=epochs)
```
Code adopted from [How to shuffle after each epoch using a custom generator?](https://github.com/keras-team/keras/issues/9707).
|
Just tried to replicate this and found out that when you define
`x = Input(batch_shape=(batch_size, original_dim))`
you're hard-coding the batch size, which causes a mismatch as soon as an incomplete final batch comes through (the 60000 training samples leave 60000 % 128 = 96, hence [128,784] vs. [96,784]). Change it to
```
x = Input(shape=input_shape)
```
and you should be all set.
|
30,005,876
|
When creating a derived class, what is actually being inherited from `pygame.sprite.Sprite`? It's something that doesn't need to be set up anywhere else in a class, so what is it? Are there actual methods included with it, or does python/pygame just know what to do with it?
|
2015/05/02
|
[
"https://Stackoverflow.com/questions/30005876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4515529/"
] |
[Use the source, Luke!!!](https://www.youtube.com/watch?v=o2we_B6hDrY) @ [pygame.sprite.Sprite](https://bitbucket.org/pygame/pygame/src/dc57da440ac3415ff679c0e9a1d6d75d949b2db9/lib/sprite.py?at=default#cl-106) inherits `object`
|
Look it up on the original pygame website:
<http://www.pygame.org/docs/ref/sprite.html#pygame.sprite.Sprite>
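Concretely, the base class mainly provides Group membership bookkeeping: methods such as `add()`, `remove()`, `kill()`, `alive()`, `groups()`, plus a default no-op `update()`. A minimal sketch (`Block` is a made-up example class):
```py
import pygame

class Block(pygame.sprite.Sprite):
    def __init__(self):
        # The base initializer sets up the group-membership bookkeeping
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.Surface((10, 10))
        self.rect = self.image.get_rect()

block = Block()
group = pygame.sprite.Group(block)
group.update()        # calls update() on every sprite; the inherited default is a no-op
print(block.alive())  # True: the sprite belongs to at least one group
block.kill()          # inherited: removes the sprite from all of its groups
print(block.alive())  # False
```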
|
14,521,414
|
I'm currently working on a small Python script for controlling my home PC (really just a hobby project - nothing serious).
Inside the script, two threads run at the same time using thread (I might start using threading instead), like this:
```
thread.start_new_thread( Function, (Args) )
```
It works as intended when testing the script... but after compiling the code using PyInstaller there are two processes (one for each thread, I think).
How do I fix this?
|
2013/01/25
|
[
"https://Stackoverflow.com/questions/14521414",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1995290/"
] |
Just kill the loader from the main program if it really bothers you. Here's one way to do it.
```
import os
import win32com.client
proc_name = 'MyProgram.exe'
my_pid = os.getpid()
wmi = win32com.client.GetObject('winmgmts:')
all_procs = wmi.InstancesOf('Win32_Process')
for proc in all_procs:
if proc.Properties_("Name").Value == proc_name:
proc_pid = proc.Properties_("ProcessID").Value
if proc_pid != my_pid:
print "killed my loader %s\n" % (proc_pid)
os.kill(proc_pid, 9)
```
|
Python code does not need to be "compiled with PyInstaller".
Products like "PyInstaller" or "py2exe" are useful for creating a single executable file that you can distribute to third parties, or relocate inside your computer without worrying about the Python installation - however, they don't add "speed", nor is the resulting binary file any more "finished" than your original .py (or .pyw on Windows) file.
What these products do is create another copy of the Python interpreter, along with all the modules your program uses, and pack them inside a single file. It is likely that PyInstaller keeps a second process running to check things on the main script (like launching it; maybe there are options in it to keep the script running and so on). This is not part of a standard Python program.
It is not likely that PyInstaller splits the threads into 2 separate processes, as that would cause compatibility problems - threads run in the same process and can transparently access the same data structures.
How a "canonical" Python program runs: the main process, as seen by the O.S., is the Python binary (Python.exe on Windows) - it finds the Python script it was called for - if there is a ".pyc" file for it, that is loaded - else, it loads your ".py" file and compiles it to Python bytecode (not to a Windows executable). This compilation is automatic and transparent to people running the program. It is analogous to a Java compile from a .java file to a .class - but there is no explicit step needed by the programmer or user - it is done in place - and other factors control whether Python will store the resulting bytecode as a .pyc file or not.
To sum up: there is no performance impact in running the ".py" script directly instead of generating an .exe file with PyInstaller or another product. You do have a disk-space impact if you do, though, as you will have one copy of the Python interpreter and libraries for each of your scripts.
The URL pointed to by Janne Karila in the comment nails it - it's even worse than I thought:
in order to run your script, PyInstaller unpacks Python DLLs and modules into a temporary directory. The time and system resources needed to do that, compared with a single script run, are non-trivial.
<http://www.pyinstaller.org/export/v2.0/project/doc/Manual.html?format=raw#how-one-file-mode-works>
|
49,992,781
|
I have the following code in Python 2. I wanted to know whether inheritance, or a basic class, works if we don't pass 'self' or don't have an init method in the class.
Here is the code:
```
class Animal:
def whoAmi():
print "Animal"
>>> class Dog(Animal):
pass
...
>>> d= Dog()
>>> d.whoAmi
<bound method Dog.whoAmi of <__main__.Dog instance at 0x0000000004ED3348>>
>>> d.whoAmi()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: whoAmi() takes no arguments (1 given)
>>> d.whoAmi
<bound method Dog.whoAmi of <__main__.Dog instance at 0x0000000004ED3348>>
```
Why doesn't it print "Animal" here?
|
2018/04/24
|
[
"https://Stackoverflow.com/questions/49992781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7406832/"
] |
Let's first tackle why it doesn't print "Animal".
The clue is is in the error message:
>
> TypeError: whoAmi() takes no arguments (**1 given**)
>
>
>
When you do `d.whoAmi()`, really what Python is doing is `Dog.whoAmi(d)`. Since your method does not take any arguments, you get that exception.
By convention (as is it case with many style "rules" in Python), for those methods of classes that work on instances, the first argument is called *`self`*. However, it can be called anything you want. The key thing to remember is that there **must be at least one argument**. You can name it whatever you want, but the agreement in the Python community is to call it `self`.
Here is an example showing that it really doesn't matter what you call it:
```
>>> class Foo():
... def whoami(blah):
... print "Boo"
...
>>> a = Foo()
>>> a.whoami()
Boo
```
Inheritance works fine even if you don't have methods with `self`, as it is perfectly normal to have class-level methods in Python.
All methods that have double underscores (sometimes called "dunder" methods), like `__init__` are optional. You don't have to define them if the default functionality works for you.
The key thing to remember here is that the argument to `self` is passed implicitly by Python. You don't really "pass" `self`. Python knows that the method is being called on an instance, and passes the instance as the first argument to the method.
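A tiny demonstration of that implicit passing, using the question's classes with a `self` parameter added (Python 2 syntax, matching the question):
```py
class Animal:
    def whoAmI(self):
        print "Animal"

class Dog(Animal):
    pass

d = Dog()
d.whoAmI()        # Python effectively rewrites this as Animal.whoAmI(d)
Animal.whoAmI(d)  # the explicit form: prints "Animal" too
```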
|
Since you're effectively instantiating Dog, you're creating a `self`. So, when you write `d.whoAmi()`, the interpreter inserts `self` as a function argument.
If you tried:
```
d = Dog
d.whoAmi()
```
It should work as expected.
By the way, you should put the `@staticmethod` decorator on top of your `whoAmi` function for it to work the way you wrote it.
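A sketch of the `@staticmethod` variant, which makes the original call work unchanged (Python 2 syntax, matching the question):
```py
class Animal:
    @staticmethod
    def whoAmi():
        print "Animal"

class Dog(Animal):
    pass

d = Dog()
d.whoAmi()  # prints "Animal": no implicit instance argument is passed
```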
|
47,261,255
|
I'm trying to execute a DAG which needs to run only once, so I set the DAG's schedule interval to '@once'. However, I'm getting the error mentioned in this link -
<https://issues.apache.org/jira/browse/AIRFLOW-1400>
Now I'm trying to pass the exact date of execution as below:
```
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime(2017,11,13),
'email': ['airflow@airflow.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(seconds=5)
}
dag = DAG(
dag_id='dagNameTest', default_args=default_args, schedule_interval='12 09 13 11 2017',concurrency=1)
```
This is throwing error as:
```
File "/usr/lib/python2.7/site-packages/croniter/croniter.py", line 543, in expand
expr_format))
CroniterBadCronError: [12 09 13 11 2017] is not acceptable, out of range
```
Can someone help resolve this?
Thanks,
Arjun
|
2017/11/13
|
[
"https://Stackoverflow.com/questions/47261255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7229291/"
] |
Grouping by either "TransactionCategory" or "TranCatID" will give you the desired result, as follows:
```
SELECT TransactionCategory.TransCatName, SUM(`Value`) AS Value
FROM Transactions
JOIN TransactionCategory ON Transactions.TransactionCategory = TransactionCategory.TranCatID
GROUP BY Transactions.TransactionCategory;
-- or
SELECT TransactionCategory.TransCatName, SUM(`Value`) AS Value
FROM Transactions
JOIN TransactionCategory ON Transactions.TransactionCategory = TransactionCategory.TranCatID
GROUP BY TransactionCategory.TranCatID;
```
|
This should do the trick:
```
SELECT TransactionCategory.TransCatName,
       SUM(Transactions.Value) AS Value
FROM Transactions
LEFT JOIN TransactionCategory ON TransactionCategory.TranCatID = Transactions.TransactionCategory
GROUP BY TransactionCategory.TransCatName
```
|