| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
32,586,612
|
I am getting started with **AWS' Elastic Beanstalk**.
I am following this [tutorial](https://realpython.com/blog/python/deploying-a-django-app-to-aws-elastic-beanstalk/) to **deploy a Django/PostgreSQL app**.
I did everything up to the 'Configuring a Database' section. The deployment was successful, but I am getting an Internal Server Error.
Here's the traceback from the logs:
```
mod_wsgi (pid=30226): Target WSGI script '/opt/python/current/app/polly/wsgi.py' cannot be loaded as Python module.
[Tue Sep 15 12:06:43.472954 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] mod_wsgi (pid=30226): Exception occurred processing WSGI script '/opt/python/current/app/polly/wsgi.py'.
[Tue Sep 15 12:06:43.474702 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] Traceback (most recent call last):
[Tue Sep 15 12:06:43.474727 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] File "/opt/python/current/app/polly/wsgi.py", line 12, in <module>
[Tue Sep 15 12:06:43.474777 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] from django.core.wsgi import get_wsgi_application
[Tue Sep 15 12:06:43.474799 2015] [:error] [pid 30226] [remote 172.31.14.126:53947] ImportError: No module named django.core.wsgi
```
Any idea what's wrong?
|
2015/09/15
|
[
"https://Stackoverflow.com/questions/32586612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4201498/"
] |
The answer (<https://stackoverflow.com/a/47209268/6169225>) by [carl-g](https://stackoverflow.com/users/39396/carl-g) is correct. One thing that got me was that `requirements.txt` was in the wrong directory. Say you created a Django project called `mysite`: that is the directory from which you run the `eb` command(s), so make sure `requirements.txt` is in that directory.
|
If you forget the **.ebextensions** folder, you will get the same error.
I was following along with a good, simple (non-Elastic-Beanstalk) [tutorial](https://scotch.io/tutorials/build-your-first-python-and-django-application) and missed steps 3 & 4 of the [Elastic Beanstalk guide](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html).
I was using Django 1.11 and Python 2.7.
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
It may be because you are using a virtual environment inside PyCharm while you installed the library using the system's default pip.
Check `Preferences->Project->Python Interpreter` inside PyCharm and see whether your library is listed there. If not, install it using the **`+`** icon. Normally, PyCharm's built-in terminal already uses the same virtual environment as your project, so running pip there may help.
The usage syntax is as follows:
```
import mysql.connector
conn = mysql.connector.connect(
user='root',
password='#####',
host='127.0.0.1',
database='some_db')
conn.close()
```
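As a quick sanity check, here is a minimal sketch that prints which interpreter and which copy of the package the script actually uses (it assumes mysql-connector is importable for at least one interpreter on the machine):
```
import sys

import mysql.connector

# Interpreter actually running this script; compare it with the one
# selected under Preferences->Project->Python Interpreter.
print(sys.executable)

# Location of the installed connector package on disk.
print(mysql.connector.__file__)
```
If the two point at different Python installations, the package was installed for a different interpreter than the one PyCharm runs.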
|
Go to the project interpreter and download mysql-connector. You need to install it in PyCharm as well.
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
It may be because you are using a virtual environment inside PyCharm while you installed the library using the system's default pip.
Check `Preferences->Project->Python Interpreter` inside PyCharm and see whether your library is listed there. If not, install it using the **`+`** icon. Normally, PyCharm's built-in terminal already uses the same virtual environment as your project, so running pip there may help.
The usage syntax is as follows:
```
import mysql.connector
conn = mysql.connector.connect(
user='root',
password='#####',
host='127.0.0.1',
database='some_db')
conn.close()
```
|
I was having this exact issue as well. After a while I solved it by **simply changing the name of my script in PyCharm**: it turns out I had named my script mysql.py (because it was my first time attempting to connect it to Python) and it was interfering with the import.
TL;DR: **Rename the file if it is named mysql.py**, as it takes priority over the mysql-connector package and prevents it from being imported, even if the package is installed correctly.
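A minimal sketch for confirming this kind of shadowing (assuming the failing script sits next to the offending mysql.py):
```
import mysql

# A local mysql.py is a plain module; the installed connector is a package.
# If this prints the path of your own mysql.py instead of a path inside
# site-packages, the local file is shadowing the package: rename it.
print(getattr(mysql, "__file__", None))
```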
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
It may be because you are using a virtual environment inside PyCharm while you installed the library using the system's default pip.
Check `Preferences->Project->Python Interpreter` inside PyCharm and see whether your library is listed there. If not, install it using the **`+`** icon. Normally, PyCharm's built-in terminal already uses the same virtual environment as your project, so running pip there may help.
The usage syntax is as follows:
```
import mysql.connector
conn = mysql.connector.connect(
user='root',
password='#####',
host='127.0.0.1',
database='some_db')
conn.close()
```
|
People have commented with reasonable responses (and I'm sure the OP is set by now), but they weren't entirely clear to me...
Don't use "pip3 install" in the terminal of your PyCharm project. In fact, uninstall any MySQL connectors you have already installed this way.
Once you have verified there are no other MySQL connector packages, add the package "mysql-connector-python" through the Python Interpreter settings only (in Preferences) in your PyCharm project. The package should then work!
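A minimal sketch for checking which of the two distributions is installed for the current interpreter (this uses `importlib.metadata`, which assumes Python 3.8+):
```
from importlib.metadata import PackageNotFoundError, version

# Both distributions provide the same "mysql.connector" import name,
# so check which one is actually installed for this interpreter.
for dist in ("mysql-connector-python", "mysql-connector"):
    try:
        print(dist, version(dist))
    except PackageNotFoundError:
        print(dist, "not installed")
```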
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
You need to import the connector as
```
import mysql.connector
```
Check the examples in the [docs](https://dev.mysql.com/doc/connector-python/en/connector-python-example-connecting.html) for this.
If that doesn't work, there might be an inconsistency between the interpreter that PyCharm uses and the one you installed the package for. In PyCharm, go to
```
File->Settings->Project Interpreter
```
In the terminal, enter
```
where python #Windows
which python #Linux
```
and also
```
where/which pip
```
Make sure that the interpreter configured in PyCharm is the same one that appears when typing `which`/`where python` on the command line/shell. Also make sure that `pip` points to the same Python distribution.
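A minimal sketch for doing the same check from inside the interpreter itself (the paths printed should match the `where`/`which` output and PyCharm's configured interpreter):
```
import sys

# Path of the interpreter running this script; it should match the
# "where python" / "which python" output and PyCharm's project interpreter.
print(sys.executable)

# Directories this interpreter searches when importing packages.
for path in sys.path:
    print(path)
```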
|
Go to the project interpreter and download mysql-connector. You need to install it in PyCharm as well.
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
You need to import the connector as
```
import mysql.connector
```
Check the examples in the [docs](https://dev.mysql.com/doc/connector-python/en/connector-python-example-connecting.html) for this.
If that doesn't work, there might be an inconsistency between the interpreter that PyCharm uses and the one you installed the package for. In PyCharm, go to
```
File->Settings->Project Interpreter
```
In the terminal, enter
```
where python #Windows
which python #Linux
```
and also
```
where/which pip
```
Make sure that the interpreter configured in PyCharm is the same one that appears when typing `which`/`where python` on the command line/shell. Also make sure that `pip` points to the same Python distribution.
|
I was having this exact issue as well. After a while I solved it by **simply changing the name of my script in PyCharm**: it turns out I had named my script mysql.py (because it was my first time attempting to connect it to Python) and it was interfering with the import.
TL;DR: **Rename the file if it is named mysql.py**, as it takes priority over the mysql-connector package and prevents it from being imported, even if the package is installed correctly.
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
You need to import the connector as
```
import mysql.connector
```
Check the examples in the [docs](https://dev.mysql.com/doc/connector-python/en/connector-python-example-connecting.html) for this.
If that doesn't work, there might be an inconsistency between the interpreter that PyCharm uses and the one you installed the package for. In PyCharm, go to
```
File->Settings->Project Interpreter
```
In the terminal, enter
```
where python #Windows
which python #Linux
```
and also
```
where/which pip
```
Make sure that the interpreter configured in PyCharm is the same one that appears when typing `which`/`where python` on the command line/shell. Also make sure that `pip` points to the same Python distribution.
|
People have commented with reasonable responses (and I'm sure the OP is set by now), but they weren't entirely clear to me...
Don't use "pip3 install" in the terminal of your PyCharm project. In fact, uninstall any MySQL connectors you have already installed this way.
Once you have verified there are no other MySQL connector packages, add the package "mysql-connector-python" through the Python Interpreter settings only (in Preferences) in your PyCharm project. The package should then work!
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
Go to the project interpreter and download mysql-connector. You need to install it in PyCharm as well.
|
I was having this exact issue as well. After a while I solved it by **simply changing the name of my script in PyCharm**: it turns out I had named my script mysql.py (because it was my first time attempting to connect it to Python) and it was interfering with the import.
TL;DR: **Rename the file if it is named mysql.py**, as it takes priority over the mysql-connector package and prevents it from being imported, even if the package is installed correctly.
|
52,308,349
|
I have successfully installed mysql-connector using pip.
```
Installing collected packages: mysql-connector
Running setup.py install for mysql-connector ... done
Successfully installed mysql-connector-2.1.6
```
However, in PyCharm when I have a script that uses the line:
```
import mysql-connector
```
PyCharm gives me an error saying there isn't a package called **"mysql"** installed. Is there some sort of syntax that should be used to indicate that the entire package name contains the "-" and is not just "mysql"?
When I run my script in IDLE, mysql.connector imports just fine. (I changed it to mysql-connector after seeing the "-" in the name of the package and having trouble in PyCharm.)
EDIT: per @FlyingTeller's suggestions, in the terminal, "where python" returns C:...Programs\Python\Python36-32\python.exe. "where pip" returns ...Python\Python36-32\Scripts\pip.exe. The interpreter in PyCharm for this project is this same filepath & exe as "where python" in the terminal.
Per @Tushar's comment, this program isn't using a virtual environment and the mysql-connector library is already present in the Preferences->Project->Python Interpreter.
Thanks for the feedback and any additional guidance you may be able to provide.
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52308349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8207436/"
] |
Go to the project interpreter and download mysql-connector. You need to install it in PyCharm as well.
|
People have commented with reasonable responses (and I'm sure the OP is set by now), but they weren't entirely clear to me...
Don't use "pip3 install" in the terminal of your PyCharm project. In fact, uninstall any MySQL connectors you have already installed this way.
Once you have verified there are no other MySQL connector packages, add the package "mysql-connector-python" through the Python Interpreter settings only (in Preferences) in your PyCharm project. The package should then work!
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
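For context, here is a minimal sketch of the Python side of the pipeline described above; the connection string, query, and file name are placeholders, and it builds the header from pyodbc's `cursor.description` rather than from the SQL boilerplate:
```
import csv

import pyodbc

# Placeholder connection string, query, and output file name.
conn = pyodbc.connect("DSN=MyDsn;UID=user;PWD=secret")
cursor = conn.cursor()
cursor.execute("SELECT A, B, C FROM SomeTable;")

# cursor.description is a sequence of 7-item tuples; item 0 is the column
# name, in the same left-to-right order as the columns of the result set.
header = [column[0] for column in cursor.description]

with open("report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)             # header line taken from the result set
    writer.writerows(cursor.fetchall())

conn.close()
```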
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
**EDIT:**
This answer is incorrect, because it is still a form of recursion: indirect recursion (<https://en.wikipedia.org/wiki/Recursion_(computer_science)#Indirect_recursion>).
~~I think the simplest way to do this without recursion is the following:~~
```
import java.util.LinkedList;
import java.util.List;
interface Handler {
void handle(Chain chain);
}
interface Chain {
void process();
}
class FirstHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("first handler");
chain.process();
}
}
class SecondHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("second handler");
chain.process();
}
}
class Runner implements Chain {
private List<Handler> handlers;
private int size = 5000; // change this parameter to avoid stackoverflowerror
private int n = 0;
public static void main(String[] args) {
Runner runner = new Runner();
runner.setHandlers();
runner.process();
}
private void setHandlers() {
handlers = new LinkedList<>();
int i = 0;
while (i < size) {
// there can be different implementations of handler interface
handlers.add(new FirstHandler());
handlers.add(new SecondHandler());
i += 2;
}
}
public void process() {
if (n < size) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
```
At first glance this example looks a little crazy, but it's not as unrealistic as it seems.
The main idea of this approach is the **chain of responsibility** pattern. You can reproduce this exception in real life by implementing the chain of responsibility pattern: you have some objects, and each object, after doing some logic, calls the next object in the chain and passes it the results of its work.
You can see this in Java filters ([javax.servlet.Filter](https://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html)).
I don't know the detailed mechanism of this class, but it calls the next filter in the chain using the doFilter method, and after all filters/servlets have processed the request, it continues working in the same method below the doFilter call.
In other words, it intercepts the request/response before the servlets and before the response is sent to the client. This is a dangerous piece of code, because all the called methods are on the same stack in the same thread. Thus, **it may trigger a StackOverflowError if the chain is too big, or if you call the doFilter method at a deep level, which produces the same situation. During debugging you might see the whole chain of calls in one thread**, and that can be the cause of the StackOverflowError.
You can also take a chain of responsibility example from the links below, use a **collection of elements** instead of just a few, and you will get a StackOverflowError as well.
Links with the pattern:
[<https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java>](https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java)
[<https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern>](https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern)
I hope it was helpful for you.
|
Since the question is very interesting, I have tried to simplify *hide*'s answer:
```
import java.util.ArrayList;
import java.util.List;

public class Stackoverflow {
static class Handler {
void handle(Chain chain){
chain.process();
System.out.println("yeah");
}
}
static class Chain {
private List<Handler> handlers = new ArrayList<>();
private int n = 0;
private void setHandlers(int count) {
int i = 0;
while (i++ < count) {
handlers.add(new Handler());
}
}
public void process() {
if (n < handlers.size()) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
public static void main(String[] args) {
Chain chain = new Chain();
chain.setHandlers(10000);
chain.process();
}
}
```
It's important to note that if a StackOverflowError occurs, the string "yeah" will never be printed.
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
Since the question is very interesting, I have tried to simplify *hide*'s answer:
```
import java.util.ArrayList;
import java.util.List;

public class Stackoverflow {
static class Handler {
void handle(Chain chain){
chain.process();
System.out.println("yeah");
}
}
static class Chain {
private List<Handler> handlers = new ArrayList<>();
private int n = 0;
private void setHandlers(int count) {
int i = 0;
while (i++ < count) {
handlers.add(new Handler());
}
}
public void process() {
if (n < handlers.size()) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
public static void main(String[] args) {
Chain chain = new Chain();
chain.setHandlers(10000);
chain.process();
}
}
```
It's important to note that if a StackOverflowError occurs, the string "yeah" will never be printed.
|
Of course we can do it :) . No recursion at all!
```
public static void main(String[] args) {
throw new StackOverflowError();
}
```
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
**EDIT:**
This answer is incorrect, because it is still a form of recursion: indirect recursion (<https://en.wikipedia.org/wiki/Recursion_(computer_science)#Indirect_recursion>).
~~I think the simplest way to do this without recursion is the following:~~
```
import java.util.LinkedList;
import java.util.List;
interface Handler {
void handle(Chain chain);
}
interface Chain {
void process();
}
class FirstHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("first handler");
chain.process();
}
}
class SecondHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("second handler");
chain.process();
}
}
class Runner implements Chain {
private List<Handler> handlers;
private int size = 5000; // change this parameter to avoid stackoverflowerror
private int n = 0;
public static void main(String[] args) {
Runner runner = new Runner();
runner.setHandlers();
runner.process();
}
private void setHandlers() {
handlers = new LinkedList<>();
int i = 0;
while (i < size) {
// there can be different implementations of handler interface
handlers.add(new FirstHandler());
handlers.add(new SecondHandler());
i += 2;
}
}
public void process() {
if (n < size) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
```
At first glance this example looks a little crazy, but it's not as unrealistic as it seems.
The main idea of this approach is the **chain of responsibility** pattern. You can reproduce this exception in real life by implementing the chain of responsibility pattern: you have some objects, and each object, after doing some logic, calls the next object in the chain and passes it the results of its work.
You can see this in Java filters ([javax.servlet.Filter](https://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html)).
I don't know the detailed mechanism of this class, but it calls the next filter in the chain using the doFilter method, and after all filters/servlets have processed the request, it continues working in the same method below the doFilter call.
In other words, it intercepts the request/response before the servlets and before the response is sent to the client. This is a dangerous piece of code, because all the called methods are on the same stack in the same thread. Thus, **it may trigger a StackOverflowError if the chain is too big, or if you call the doFilter method at a deep level, which produces the same situation. During debugging you might see the whole chain of calls in one thread**, and that can be the cause of the StackOverflowError.
You can also take a chain of responsibility example from the links below, use a **collection of elements** instead of just a few, and you will get a StackOverflowError as well.
Links with the pattern:
[<https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java>](https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java)
[<https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern>](https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern)
I hope it was helpful for you.
|
Java stores primitive types on the stack. Objects created in local scope are allocated on the heap, with the reference to them on the stack.
You can overflow the stack without recursion by allocating too many primitive types in method scope. With normal stack size settings, you would have to allocate an excessive number of variables to overflow.
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
**EDIT:**
This answer is incorrect, because it is still a form of recursion: indirect recursion (<https://en.wikipedia.org/wiki/Recursion_(computer_science)#Indirect_recursion>).
~~I think the simplest way to do this without recursion is the following:~~
```
import java.util.LinkedList;
import java.util.List;
interface Handler {
void handle(Chain chain);
}
interface Chain {
void process();
}
class FirstHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("first handler");
chain.process();
}
}
class SecondHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("second handler");
chain.process();
}
}
class Runner implements Chain {
private List<Handler> handlers;
private int size = 5000; // change this parameter to avoid stackoverflowerror
private int n = 0;
public static void main(String[] args) {
Runner runner = new Runner();
runner.setHandlers();
runner.process();
}
private void setHandlers() {
handlers = new LinkedList<>();
int i = 0;
while (i < size) {
// there can be different implementations of handler interface
handlers.add(new FirstHandler());
handlers.add(new SecondHandler());
i += 2;
}
}
public void process() {
if (n < size) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
```
At first glance this example looks a little crazy, but it's not as unrealistic as it seems.
The main idea of this approach is the **chain of responsibility** pattern. You can reproduce this exception in real life by implementing the chain of responsibility pattern: you have some objects, and each object, after doing some logic, calls the next object in the chain and passes it the results of its work.
You can see this in Java filters ([javax.servlet.Filter](https://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html)).
I don't know the detailed mechanism of this class, but it calls the next filter in the chain using the doFilter method, and after all filters/servlets have processed the request, it continues working in the same method below the doFilter call.
In other words, it intercepts the request/response before the servlets and before the response is sent to the client. This is a dangerous piece of code, because all the called methods are on the same stack in the same thread. Thus, **it may trigger a StackOverflowError if the chain is too big, or if you call the doFilter method at a deep level, which produces the same situation. During debugging you might see the whole chain of calls in one thread**, and that can be the cause of the StackOverflowError.
You can also take a chain of responsibility example from the links below, use a **collection of elements** instead of just a few, and you will get a StackOverflowError as well.
Links with the pattern:
[<https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java>](https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java)
[<https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern>](https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern)
I hope it was helpful for you.
|
Here is an implementation of [Eric J.'s](https://stackoverflow.com/users/141172/eric-j) idea of generating an excessive number of local variables, using the javassist library:
```
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.CtNewMethod;
import lombok.SneakyThrows;

class SoeNonRecursive {
static final String generatedMethodName = "holderForVariablesMethod";
@SneakyThrows
Class<?> createClassWithLotsOfLocalVars(String generatedClassName, final int numberOfLocalVarsToGenerate) {
ClassPool pool = ClassPool.getDefault();
CtClass generatedClass = pool.makeClass(generatedClassName);
CtMethod generatedMethod = CtNewMethod.make(getMethodBody(numberOfLocalVarsToGenerate), generatedClass);
generatedClass.addMethod(generatedMethod);
return generatedClass.toClass();
}
private String getMethodBody(final int numberOfLocalVarsToGenerate) {
StringBuilder methodBody = new StringBuilder("public static long ")
.append(generatedMethodName).append("() {")
.append(System.lineSeparator());
StringBuilder antiDeadCodeEliminationString = new StringBuilder("long result = i0");
long i = 0;
while (i < numberOfLocalVarsToGenerate) {
methodBody.append(" long i").append(i)
.append(" = ").append(i).append(";")
.append(System.lineSeparator());
antiDeadCodeEliminationString.append("+").append("i").append(i);
i++;
}
antiDeadCodeEliminationString.append(";");
methodBody.append(" ").append(antiDeadCodeEliminationString)
.append(System.lineSeparator())
.append(" return result;")
.append(System.lineSeparator())
.append("}");
return methodBody.toString();
}
}
```
and tests:
```
import lombok.SneakyThrows;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class SoeNonRecursiveTest {
private final SoeNonRecursive soeNonRecursive = new SoeNonRecursive();
//Should be different for every case, or once generated class become
//"frozen" for javassist: http://www.javassist.org/tutorial/tutorial.html#read
private String generatedClassName;
@Test
void stackOverflowWithoutRecursion() {
generatedClassName = "Soe1";
final int numberOfLocalVarsToGenerate = 6000;
assertThrows(StackOverflowError.class, () -> soeNonRecursive
.createClassWithLotsOfLocalVars(generatedClassName, numberOfLocalVarsToGenerate));
}
@SneakyThrows
@Test
void methodGeneratedCorrectly() {
generatedClassName = "Soe2";
final int numberOfLocalVarsToGenerate = 6;
Class<?> generated = soeNonRecursive.createClassWithLotsOfLocalVars(generatedClassName, numberOfLocalVarsToGenerate);
//Arithmetic progression
long expected = Math.round((numberOfLocalVarsToGenerate - 1.0)/2 * numberOfLocalVarsToGenerate);
long actual = (long) generated.getDeclaredMethod(generatedMethodName).invoke(generated);
assertEquals(expected, actual);
}
}
```
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
Here is the implementation of [Eric J.](https://stackoverflow.com/users/141172/eric-j) idea of generating excessive number of local variables using javassist library:
```
class SoeNonRecursive {
static final String generatedMethodName = "holderForVariablesMethod";
@SneakyThrows
Class<?> createClassWithLotsOfLocalVars(String generatedClassName, final int numberOfLocalVarsToGenerate) {
ClassPool pool = ClassPool.getDefault();
CtClass generatedClass = pool.makeClass(generatedClassName);
CtMethod generatedMethod = CtNewMethod.make(getMethodBody(numberOfLocalVarsToGenerate), generatedClass);
generatedClass.addMethod(generatedMethod);
return generatedClass.toClass();
}
private String getMethodBody(final int numberOfLocalVarsToGenerate) {
StringBuilder methodBody = new StringBuilder("public static long ")
.append(generatedMethodName).append("() {")
.append(System.lineSeparator());
StringBuilder antiDeadCodeEliminationString = new StringBuilder("long result = i0");
long i = 0;
while (i < numberOfLocalVarsToGenerate) {
methodBody.append(" long i").append(i)
.append(" = ").append(i).append(";")
.append(System.lineSeparator());
antiDeadCodeEliminationString.append("+").append("i").append(i);
i++;
}
antiDeadCodeEliminationString.append(";");
methodBody.append(" ").append(antiDeadCodeEliminationString)
.append(System.lineSeparator())
.append(" return result;")
.append(System.lineSeparator())
.append("}");
return methodBody.toString();
}
}
```
and tests:
```
class SoeNonRecursiveTest {
private final SoeNonRecursive soeNonRecursive = new SoeNonRecursive();
//Should be different for every case, or once generated class become
//"frozen" for javassist: http://www.javassist.org/tutorial/tutorial.html#read
private String generatedClassName;
@Test
void stackOverflowWithoutRecursion() {
generatedClassName = "Soe1";
final int numberOfLocalVarsToGenerate = 6000;
assertThrows(StackOverflowError.class, () -> soeNonRecursive
.createClassWithLotsOfLocalVars(generatedClassName, numberOfLocalVarsToGenerate));
}
@SneakyThrows
@Test
void methodGeneratedCorrectly() {
generatedClassName = "Soe2";
final int numberOfLocalVarsToGenerate = 6;
Class<?> generated = soeNonRecursive.createClassWithLotsOfLocalVars(generatedClassName, numberOfLocalVarsToGenerate);
//Arithmetic progression
long expected = Math.round((numberOfLocalVarsToGenerate - 1.0)/2 * numberOfLocalVarsToGenerate);
long actual = (long) generated.getDeclaredMethod(generatedMethodName).invoke(generated);
assertEquals(expected, actual);
}
}
```
|
Looking at this answer below, not sure if this works for Java, but sounds like you can declare an array of pointers? Might be able to achieve Eric J's idea without requiring a generator.
[Is it on the Stack or Heap?](https://stackoverflow.com/questions/1056444/is-it-on-the-stack-or-heap)
```
int* x[LARGENUMBER]; // The addresses are held on the stack
int i; // On the stack
for(i = 0; i < LARGENUMBER; ++i)
x[i] = malloc(sizeof(int)*10); // Allocates memory on the heap
```
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
Java stores primitive types on the stack. Objects created in local scope are allocated on the heap, with the reference to them on the stack.
You can overflow the stack without recursion by allocating too many primitive types in method scope. With normal stack size settings, you would have to allocate an excessive number of variables to overflow.
|
Looking at this answer below, not sure if this works for Java, but sounds like you can declare an array of pointers? Might be able to achieve Eric J's idea without requiring a generator.
[Is it on the Stack or Heap?](https://stackoverflow.com/questions/1056444/is-it-on-the-stack-or-heap)
```
int* x[LARGENUMBER]; // The addresses are held on the stack
int i; // On the stack
for(i = 0; i < LARGENUMBER; ++i)
x[i] = malloc(sizeof(int)*10); // Allocates memory on the heap
```
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
Here is the implementation of [Eric J.](https://stackoverflow.com/users/141172/eric-j) idea of generating excessive number of local variables using javassist library:
```
class SoeNonRecursive {
static final String generatedMethodName = "holderForVariablesMethod";
@SneakyThrows
Class<?> createClassWithLotsOfLocalVars(String generatedClassName, final int numberOfLocalVarsToGenerate) {
ClassPool pool = ClassPool.getDefault();
CtClass generatedClass = pool.makeClass(generatedClassName);
CtMethod generatedMethod = CtNewMethod.make(getMethodBody(numberOfLocalVarsToGenerate), generatedClass);
generatedClass.addMethod(generatedMethod);
return generatedClass.toClass();
}
private String getMethodBody(final int numberOfLocalVarsToGenerate) {
StringBuilder methodBody = new StringBuilder("public static long ")
.append(generatedMethodName).append("() {")
.append(System.lineSeparator());
StringBuilder antiDeadCodeEliminationString = new StringBuilder("long result = i0");
long i = 0;
while (i < numberOfLocalVarsToGenerate) {
methodBody.append(" long i").append(i)
.append(" = ").append(i).append(";")
.append(System.lineSeparator());
antiDeadCodeEliminationString.append("+").append("i").append(i);
i++;
}
antiDeadCodeEliminationString.append(";");
methodBody.append(" ").append(antiDeadCodeEliminationString)
.append(System.lineSeparator())
.append(" return result;")
.append(System.lineSeparator())
.append("}");
return methodBody.toString();
}
}
```
and tests:
```
class SoeNonRecursiveTest {
private final SoeNonRecursive soeNonRecursive = new SoeNonRecursive();
//Should be different for every case, or once generated class become
//"frozen" for javassist: http://www.javassist.org/tutorial/tutorial.html#read
private String generatedClassName;
@Test
void stackOverflowWithoutRecursion() {
generatedClassName = "Soe1";
final int numberOfLocalVarsToGenerate = 6000;
assertThrows(StackOverflowError.class, () -> soeNonRecursive
.createClassWithLotsOfLocalVars(generatedClassName, numberOfLocalVarsToGenerate));
}
@SneakyThrows
@Test
void methodGeneratedCorrectly() {
generatedClassName = "Soe2";
final int numberOfLocalVarsToGenerate = 6;
Class<?> generated = soeNonRecursive.createClassWithLotsOfLocalVars(generatedClassName, numberOfLocalVarsToGenerate);
//Arithmetic progression
long expected = Math.round((numberOfLocalVarsToGenerate - 1.0)/2 * numberOfLocalVarsToGenerate);
long actual = (long) generated.getDeclaredMethod(generatedMethodName).invoke(generated);
assertEquals(expected, actual);
}
}
```
|
Of course we can do it :) . No recursion at all!
```
public static void main(String[] args) {
throw new StackOverflowError();
}
```
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
Java stores primitive types on the stack. Objects created in local scope are allocated on the heap, with the reference to them on the stack.
You can overflow the stack without recursion by allocating too many primitive types in method scope. With normal stack size settings, you would have to allocate an excessive number of variables to overflow.
|
Of course we can do it :) . No recursion at all!
```
public static void main(String[] args) {
throw new StackOverflowError();
}
```
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
**EDIT:**
The answer is incorrect, because it is a type of recursion. It is called indirect recursion: <https://en.wikipedia.org/wiki/Recursion_(computer_science)#Indirect_recursion>.
~~I think the simplest way to do this without recursion is the following:~~
```
import java.util.LinkedList;
import java.util.List;
interface Handler {
void handle(Chain chain);
}
interface Chain {
void process();
}
class FirstHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("first handler");
chain.process();
}
}
class SecondHandler implements Handler {
@Override
public void handle(Chain chain) {
System.out.println("second handler");
chain.process();
}
}
class Runner implements Chain {
private List<Handler> handlers;
private int size = 5000; // change this parameter to avoid stackoverflowerror
private int n = 0;
public static void main(String[] args) {
Runner runner = new Runner();
runner.setHandlers();
runner.process();
}
private void setHandlers() {
handlers = new LinkedList<>();
int i = 0;
while (i < size) {
// there can be different implementations of handler interface
handlers.add(new FirstHandler());
handlers.add(new SecondHandler());
i += 2;
}
}
public void process() {
if (n < size) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
```
At first glance this example looks a little crazy, but it's not as unrealistic as it seems.
The main idea of this approach is the **chain of responsibility** pattern. You can reproduce this exception in real life by implementing the chain of responsibility pattern. For instance, you have some objects, and every object, after doing some logic, calls the next object in the chain and passes the results of its work on to the next one.
You can see this in the Java filter ([javax.servlet.Filter](https://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html)).
I don't know the detailed inner workings of this class, but it calls the next filter in the chain using the doFilter method, and after all filters/servlets have processed the request, it continues working in the same method below doFilter.
In other words, it intercepts the request/response before the servlets and before sending the response to a client. It is a dangerous piece of code because all the called methods are on the same stack in the same thread. Thus, **it may trigger a stack overflow if the chain is too big, or if you call the doFilter method at a deep level, which produces the same situation. Perhaps, during debugging, you might see the chain of calls
in one thread** and it can potentially be the cause of a StackOverflowError.
Also, you can take the chain of responsibility pattern example from the links below, add a **collection of elements** instead of just a few, and you will also get a StackOverflowError.
Links with the pattern:
<https://www.journaldev.com/1617/chain-of-responsibility-design-pattern-in-java>
<https://en.wikipedia.org/wiki/Chain-of-responsibility_pattern>
I hope it was helpful for you.
|
Of course we can do it :) . No recursion at all!
```
public static void main(String[] args) {
throw new StackOverflowError();
}
```
|
49,076,648
|
Doing this seemingly trivial task should be simple and obvious using PIVOT - but isn't.
What is the cleanest way to do the conversion, not necessarily using pivot, **when limited to ONLY using "pure" SQL** (see other factors, below)?
It shouldn't affect the answer, but note that a Python 3.X front end is being used to run SQL queries on a MS SQL Server 2012 backend.
Background :
I need to create CSV files by calling SQL code from Python 3.x. The CSV header line is created from the field (column) names of the SQL table that holds the results of the query.
The following SQL code extracts the field names and returns them as N rows of 1 column - but I need them as 1 row of N columns. (In the example below, the final result must be "A", "B", "C" .)
```
CREATE TABLE #MyTable -- ideally the real code uses "DECLARE @MyTable TABLE"
(
A varchar( 32 ),
B varchar( 32 ),
C varchar( 32 )
) ;
CREATE TABLE #MetaData -- ideally the real code uses "DECLARE @MetaData TABLE"
(
NameOfField varchar( 32 ) not NULL
) ;
INSERT INTO #MetaData
SELECT name
FROM tempdb.sys.columns as X
WHERE ( object_id = Object_id( 'tempdb..#MyTable' ) )
ORDER BY column_id ; -- generally redundant, ensures correct order if results returned in random order
/*
OK so far, the field names are returned as 3 rows of 1 column (entitled "NameOfField").
Pivoting them into 1 row of 3 columns should be something simple like:
*/
SELECT NameOfField
FROM #MetaData AS Source
PIVOT
(
COUNT( [ NameOfField ] ) FOR [ NameOfField ]
IN ( #MetaData ) -- I've tried "IN (SELECT NameOfField FROM #Metadata)"
) AS Destination ;
```
This error gets raised twice, once for the COUNT and once for the "FOR" clause of the PIVOT statement:
```
Msg 207, Level 16, State 1, Line 32
Invalid column name ' NameOfField'.
```
How do I use the contents of #Metadata to get PIVOT to work? Or is there another simple way?
Other background factors to be aware of:
* ODBC (Python's pyodbc package) is being used to pass the SQL queries from - and return the results (a cursor) to - a Python 3.x front end. Consequently there is no opportunity to use any type of manual intervention before the result set is returned to Python.
* The above SQL code is intended to become standard boilerplate for every query passed to SQL. The code must dynamically "adapt" itself to the structure of #MyTable (e.g. if field B is removed while D and E are added after C, the end result must be "A", "C","D", "E"). This means that the field names of a table must never appear inside PIVOT's IN clause (the #MetaData table is intended to supply those values).
* "Standard" SQL must be used. ALL vendor specific (e.g. Microsoft) extensions/utilities (e.g. "bcp", sqlcmd) must be avoided unless there is a very compelling reason to use them (because "it's there" doesn't count).
* For known reasons the select clause (into #Metadata) doesn't work for temporary variables (@MyTable). Is there an equivalent Select that works for temporary variables(i.e. @MetaData)?
UPDATE: This problem is subtly different from that in [SQL Server dynamic PIVOT query?](https://stackoverflow.com/questions/10404348/sql-server-dynamic-pivot-query). In my case I have to preserve the order of the fields, *something not required by that question*.
WHY I NEED TO DO THIS:
* The Python code is a GUI for non-technical people. They use the GUI to pick & choose which (or even all) SQL reports to run from a HUGE number of reports.
* Apps like Excel are being used to view these files: to keep our users happy each CSV file must have a header line. The header line will consist of the field names from the SQL table that holds the results of the query.
* *These scripts can change at any time (e.g. add/delete a column) without any advance notice. To meet our users' needs the header line must automatically "adjust itself" to make the corresponding changes. The SQL code below accomplishes this.*
* The header line gets merged (using UNION) with the query results to form the result set (a cursor) that gets passed back to Python. Python then processes the returned data and creates the CSV file (including the header line) that gets used by our customers.
In a nutshell: We have many sites, many users, many queries. By having SQL "dynamically create" the header line we remove the headache of having to manually manage/coordinate/roll out the SQL changes to all affected parties.
|
2018/03/02
|
[
"https://Stackoverflow.com/questions/49076648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459519/"
] |
Since the question is very interesting, I have tried to simplify *hide*'s answer:
```
public class Stackoverflow {
static class Handler {
void handle(Chain chain){
chain.process();
System.out.println("yeah");
}
}
static class Chain {
private List<Handler> handlers = new ArrayList<>();
private int n = 0;
private void setHandlers(int count) {
int i = 0;
while (i++ < count) {
handlers.add(new Handler());
}
}
public void process() {
if (n < handlers.size()) {
Handler handler = handlers.get(n++);
handler.handle(this);
}
}
}
public static void main(String[] args) {
Chain chain = new Chain();
chain.setHandlers(10000);
chain.process();
}
}
```
It's important to note that if a stack overflow occurs, the string "yeah" will never be output.
|
Looking at this answer below, not sure if this works for Java, but sounds like you can declare an array of pointers? Might be able to achieve Eric J's idea without requiring a generator.
[Is it on the Stack or Heap?](https://stackoverflow.com/questions/1056444/is-it-on-the-stack-or-heap)
```
int* x[LARGENUMBER]; // The addresses are held on the stack
int i; // On the stack
for(i = 0; i < LARGENUMBER; ++i)
x[i] = malloc(sizeof(int)*10); // Allocates memory on the heap
```
|
59,707,234
|
I'm unable to install pygraphviz even after installing graphviz and ensuring that cgraph.h is present in the directory.
I've also manually specified the directory for install. e.g. install-path
fatal error C1083: Cannot open include file: 'graphviz/cgraph.h': No such file or directory
Looking for any and all suggestions. Using Windows.
```
C:\Users\mmcgown\Desktop\School\MSDS452\pygraphviz-1.5>python setup.py install --prefix=C:\Program_Files_(x86)\Graphviz2.38 --include-path=C:\Program_Files_(x86)\Graphviz2.38\include\ --library-path=C:\Program_Files_(x86)\Graphviz2.38\lib\
```
```
running install
running build
running build_py
running egg_info
writing pygraphviz.egg-info\PKG-INFO
writing dependency_links to pygraphviz.egg-info\dependency_links.txt
writing top-level names to pygraphviz.egg-info\top_level.txt
reading manifest file 'pygraphviz.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'doc'
warning: no files found matching '*.html' under directory 'doc'
warning: no files found matching '*.txt' under directory 'doc'
warning: no files found matching '*.css' under directory 'doc'
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.svn' found anywhere in distribution
no previously-included directories found matching 'doc\build'
writing manifest file 'pygraphviz.egg-info\SOURCES.txt'
running build_ext
building 'pygraphviz._graphviz' extension
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MT -IC:\Program_Files_(x86)\Graphviz2.38\include\ -IC:\Users\mmcgown\AppData\Local\Continuum\anaconda3\include -IC:\Users\mmcgown\AppData\Local\Continuum\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt" /Tcpygraphviz/graphviz_wrap.c /Fobuild\temp.win-amd64-3.7\Release\pygraphviz/graphviz_wrap.obj
graphviz_wrap.c
pygraphviz/graphviz_wrap.c(2987): fatal error C1083: Cannot open include file: 'graphviz/cgraph.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.24.28314\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
```
|
2020/01/12
|
[
"https://Stackoverflow.com/questions/59707234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9682236/"
] |
On Ubuntu please do
`sudo apt install graphviz-dev`
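Once the Graphviz development headers are in place and `pip install pygraphviz` succeeds, a quick smoke test such as the following (a minimal sketch, assuming a standard pygraphviz install) confirms the binding actually links against cgraph:
```
# Minimal smoke test for a freshly installed pygraphviz (assumes the install succeeded).
import pygraphviz as pgv

g = pgv.AGraph(directed=True)  # build a tiny directed graph in memory
g.add_edge("a", "b")           # exercises the underlying cgraph C library
print(g.string())              # prints the DOT source; a linking problem would fail before this
```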
|
For those who visit this page: you may have already come across this [fix](https://github.com/tan-wei/pygraphviz) or this [issue](https://github.com/pygraphviz/pygraphviz/issues/155) on GitHub and tried to install GraphViz 2.38 manually. But neither of them will work, since GraphViz and PyGraphViz are two different libraries.
Mac and Ubuntu already have their solutions on GitHub; however, Win10 64-bit has not received a fix since 2018. [Installing pygraphviz on Windows 10 64-bit, Python 3.6](https://stackoverflow.com/questions/45093811/installing-pygraphviz-on-windows-10-64-bit-python-3-6)
Someone has created a build of PyGraphviz 1.5 on his [Anaconda channel](https://anaconda.org/alubbock/pygraphviz) for Windows 64-bit running Python 3.6, Python 3.7 or Python 3.8. If you're running Anaconda, you can install it with:
```
conda install -c alubbock pygraphviz
```
Please mark this as a possible duplicate of [this question](https://stackoverflow.com/questions/40809758/howto-install-pygraphviz-on-windows-10-64bit) if someone sees it.
|
59,707,234
|
I'm unable to install pygraphviz even after installing graphviz and ensuring that cgraph.h is present in the directory.
I've also manually specified the directory for install. e.g. install-path
fatal error C1083: Cannot open include file: 'graphviz/cgraph.h': No such file or directory
Looking for any and all suggestions. Using Windows.
```
C:\Users\mmcgown\Desktop\School\MSDS452\pygraphviz-1.5>python setup.py install --prefix=C:\Program_Files_(x86)\Graphviz2.38 --include-path=C:\Program_Files_(x86)\Graphviz2.38\include\ --library-path=C:\Program_Files_(x86)\Graphviz2.38\lib\
```
```
running install
running build
running build_py
running egg_info
writing pygraphviz.egg-info\PKG-INFO
writing dependency_links to pygraphviz.egg-info\dependency_links.txt
writing top-level names to pygraphviz.egg-info\top_level.txt
reading manifest file 'pygraphviz.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'doc'
warning: no files found matching '*.html' under directory 'doc'
warning: no files found matching '*.txt' under directory 'doc'
warning: no files found matching '*.css' under directory 'doc'
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.svn' found anywhere in distribution
no previously-included directories found matching 'doc\build'
writing manifest file 'pygraphviz.egg-info\SOURCES.txt'
running build_ext
building 'pygraphviz._graphviz' extension
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MT -IC:\Program_Files_(x86)\Graphviz2.38\include\ -IC:\Users\mmcgown\AppData\Local\Continuum\anaconda3\include -IC:\Users\mmcgown\AppData\Local\Continuum\anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.24.28314\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt" /Tcpygraphviz/graphviz_wrap.c /Fobuild\temp.win-amd64-3.7\Release\pygraphviz/graphviz_wrap.obj
graphviz_wrap.c
pygraphviz/graphviz_wrap.c(2987): fatal error C1083: Cannot open include file: 'graphviz/cgraph.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.24.28314\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
```
|
2020/01/12
|
[
"https://Stackoverflow.com/questions/59707234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9682236/"
] |
I installed the latest graphviz using the latest win64 executable from graphviz.org
[graphviz 2.49 win64](https://gitlab.com/api/v4/projects/4207231/packages/generic/graphviz-releases/2.49.0/stable_windows_10_cmake_Release_x64_graphviz-install-2.49.0-win64.exe)
then installation using the following command worked for me
```
pip install --global-option=build_ext --global-option="-IC:\Program Files\Graphviz\include" --global-option="-LC:\Program Files\Graphviz\lib" pygraphviz
```
|
For those who visit this page: you may have already come across this [fix](https://github.com/tan-wei/pygraphviz) or this [issue](https://github.com/pygraphviz/pygraphviz/issues/155) on GitHub and tried to install GraphViz 2.38 manually. But neither of them will work, since GraphViz and PyGraphViz are two different libraries.
Mac and Ubuntu already have their solutions on GitHub; however, Win10 64-bit has not received a fix since 2018. [Installing pygraphviz on Windows 10 64-bit, Python 3.6](https://stackoverflow.com/questions/45093811/installing-pygraphviz-on-windows-10-64-bit-python-3-6)
Someone has created a build of PyGraphviz 1.5 on his [Anaconda channel](https://anaconda.org/alubbock/pygraphviz) for Windows 64-bit running Python 3.6, Python 3.7 or Python 3.8. If you're running Anaconda, you can install it with:
```
conda install -c alubbock pygraphviz
```
Please mark this as a possible duplicate of [this question](https://stackoverflow.com/questions/40809758/howto-install-pygraphviz-on-windows-10-64bit) if someone sees it.
|
14,119,978
|
I'm a newbie and was trying something in Python 2.7.2 with NumPy which wasn't working as expected, so I wanted to check whether there was something basic I was misunderstanding.
I was calculating a value for a triangle (trinormals) and then updating a value per point of the triangle (vertnormals) using an array of triangle indexes (trivertexidx). As a loop I was calculating:
```
for itri in range(ntriangles) :
vertnormals[(trivertidx[itri,0]),:] += trinormals[itri,:]
vertnormals[(trivertidx[itri,1]),:] += trinormals[itri,:]
vertnormals[(trivertidx[itri,2]),:] += trinormals[itri,:]
```
As this was a little slow I thought it could be modified to :
```
vertnormals[(trivertidx[:,0]),:] += trinormals[:,:]
vertnormals[(trivertidx[:,1]),:] += trinormals[:,:]
vertnormals[(trivertidx[:,2]),:] += trinormals[:,:]
```
However this doesn't give the same results. Is there another, simpler way to write the loop? Any pointers appreciated. Note the intent here was to get a single accumulated value for each entry in vertnormals and then normalise the result.
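(As an aside, the usual vectorised way to accumulate into repeated indices is `np.add.at`, which performs an unbuffered in-place add; plain fancy-indexed `+=` keeps only the last write for a duplicated index, which is why the second version gives different results. A minimal self-contained sketch, with made-up sizes and index values:)
```
import numpy as np

ntriangles, nverts = 4, 5
trinormals = np.arange(ntriangles * 3, dtype=float).reshape(ntriangles, 3)  # one normal per triangle
trivertidx = np.array([[0, 1, 2], [0, 2, 3], [0, 3, 4], [0, 4, 1]])         # vertex indices per triangle
vertnormals = np.zeros((nverts, 3))

# np.add.at adds without buffering, so a vertex shared by several triangles
# accumulates all of their contributions (fancy-indexed "+=" would not).
for corner in range(3):
    np.add.at(vertnormals, trivertidx[:, corner], trinormals)
```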
|
2013/01/02
|
[
"https://Stackoverflow.com/questions/14119978",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1942439/"
] |
First, the column in which you store the date should be a long type. This column will store the milliseconds from epoch for the date.
Now for the query:
```
Calendar calendar = Calendar.getInstance(); // This will give you the current time.
// Removing the timestamp from current time to point to todays date
calendar.set(Calendar.HOUR_OF_DAY, 0);
calendar.set(Calendar.MINUTE, 0);
calendar.set(Calendar.SECOND, 0);
calendar.set(Calendar.MILLISECOND, 0);
calendar.add(Calendar.DATE, -3); // Will subtract 3 days from today.
Date beforeThreeDays = calendar.getTime();
calendar.add(Calendar.DATE, 6); // Will be your 3 days after today
Date afterThreeDays = calendar.getTime();
db.query("Table", null, "YOUR_DATE_COLUMN >= ? AND YOUR_DATE_COLUMN <= ?", new String[] { beforeThreeDays.getTime() + "", afterThreeDays.getTime() + "" }, null, null, null);
```
|
```
Select *
From TABLE_NAME
WHERE DATEDIFF(day, GETDATE(), COLUMN_TABLE) <= 3
```
|
41,914,398
|
I've written this very short spider to go to a U.S. News link and take the names of the colleges listed there.
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
import scrapy
class CollegesSpider(scrapy.Spider):
name = "colleges"
start_urls = [
'http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities?_mode=list&acceptance-rate-max=20'
]
def parse(self, response):
for school in response.css('div.items'):
yield {
'name': school.xpath('//*[@id="view-1c4ddd8a-8b04-4c93-8b68-9b7b4e5d8969"]/div/div[1]/div[1]/h3/a').extract_first(),
}
```
However, when I run this spider and ask for the names to be stored in a file named schools.json, the file comes out blank. What am I doing wrong?
|
2017/01/28
|
[
"https://Stackoverflow.com/questions/41914398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5535448/"
] |
Got it! It is because of robot detection.
Encode
```
>>> r = requests.get('http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities?_mode=list&acceptance-rate-max=20', headers={'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'})
>>> r.status_code
200
```
Then you will have all the content you need. Do whatever parsing or extraction you need. The procedure to encode a header should be very similar in Scrapy.
[scrapy doc for request with headers](https://doc.scrapy.org/en/latest/topics/request-response.html)
User agent for Chrome
```
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36
```
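In Scrapy the equivalent is to attach that header to the request, for example by overriding `start_requests` (a minimal sketch of the same spider, with only the User-Agent handling added; the parsing logic is taken unchanged from the question):
```
import scrapy

class CollegesSpider(scrapy.Spider):
    name = "colleges"

    def start_requests(self):
        url = ('http://colleges.usnews.rankingsandreviews.com/best-colleges/'
               'rankings/national-universities?_mode=list&acceptance-rate-max=20')
        # Attach a browser-like User-Agent so the site does not reject the spider as a bot.
        yield scrapy.Request(url, headers={
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                          '(KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'})

    def parse(self, response):
        for school in response.css('div.items'):
            yield {
                'name': school.xpath('//*[@id="view-1c4ddd8a-8b04-4c93-8b68-9b7b4e5d8969"]/div/div[1]/div[1]/h3/a').extract_first(),
            }
```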
|
I am on my mobile so I don't remember the exact variable name, but it should be robots_follow.
Set it to False.
|
41,914,398
|
I've written this very short spider to go to a U.S. News link and take the names of the colleges listed there.
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
import scrapy
class CollegesSpider(scrapy.Spider):
name = "colleges"
start_urls = [
'http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities?_mode=list&acceptance-rate-max=20'
]
def parse(self, response):
for school in response.css('div.items'):
yield {
'name': school.xpath('//*[@id="view-1c4ddd8a-8b04-4c93-8b68-9b7b4e5d8969"]/div/div[1]/div[1]/h3/a').extract_first(),
}
```
However, when I run this spider and ask for the names to be stored in a file named schools.json, the file comes out blank. What am I doing wrong?
|
2017/01/28
|
[
"https://Stackoverflow.com/questions/41914398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5535448/"
] |
Got it! It is because of robot detection.
Encode
```
>>> r = requests.get('http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities?_mode=list&acceptance-rate-max=20', headers={'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'})
>>> r.status_code
200
```
Then you will have all the content you need. Do whatever parsing or extraction you need. The procedure to encode a header should be very similar in Scrapy.
[scrapy doc for request with headers](https://doc.scrapy.org/en/latest/topics/request-response.html)
User agent for Chrome
```
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36
```
|
The page you're referring to as the start URL doesn't contain any element with id `view-1c4ddd8a-8b04-4c93-8b68-9b7b4e5d8969` - it looks quite unique and doesn't seem to be a good choice for a universal XPath expression. I'd recommend using something like `school.xpath('.//div[@data-view="colleges-search-results-card"]//h3/a/text()').extract()`
|
28,610,556
|
I'm a beginner in Python, and I'm trying to write a program that makes a call to the Weibo (Chinese Twitter) API and receives a JSON response. It's just a basic example of searching by keyword and fetching the search results.
But the problem is I don't know how to make an API call from Python, so I keep getting error messages. The API I'm trying to use is <http://open.weibo.com/wiki/2/search/topics>
It's in Chinese but basically it gives the API URL, the method -> GET, and the list of parameters I need. My guess is that I messed up the parameters, and that method: GET shouldn't be treated as a parameter but handled in some other way which I don't know. Can somebody help??
Below is what I tried. I'm just pasting the relevant part; before this part there is API authorization code.
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# sudo pip install sinaweibopy
import sys
import urllib, urllib2
from weibo import APIClient
import webbrowser
APP_KEY = '1234' # there are real values here in the actual code
APP_SECRET = '1234'
CALLBACK_URL = 'http://111.111'
def get_auth():
# some code here, not pasted
def get_data():
access_token = '1234'
expires_in = '1234'
# This works fine
client = APIClient(app_key=APP_KEY, app_secret=APP_SECRET, redirect_uri=CALLBACK_URL)
client.set_access_token(access_token, expires_in)
r = client.statuses.user_timeline.get()
for st in r.statuses:
print st.text.encode('utf-8')
# This doesn't work
# statuses = client.search.topics.get(q=u'eland')
# This also doesn't work
# url = 'https://api.weibo.com/2/search/topics.json'
# params = {'method': GET, 'source': APP_KEY, 'access_token': access_token, 'q': 'new balance', 'count' : 50}
# request = urllib2.Request(url, urllib.urlencode(params))
# response = urllib2.urlopen(request)
```
error message (url call):
```
Traceback (most recent call last):
File "weibopr.py", line 85, in <module>
elif opt == '2': get_data()
File "weibopr.py", line 57, in get_data
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
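(Two details in the commented-out attempt stand out: `GET` is an undefined name inside the `params` dict, and passing `data` to `urllib2.Request` turns the request into a POST. A hedged sketch of what a plain GET against that endpoint would look like, with placeholder credentials:)
```
# -*- coding: utf-8 -*-
# Sketch only: APP_KEY and access_token are placeholders, not real credentials.
import urllib, urllib2

APP_KEY = 'your-app-key'
access_token = 'your-access-token'

url = 'https://api.weibo.com/2/search/topics.json'
params = {
    'source': APP_KEY,
    'access_token': access_token,
    'q': 'new balance',
    'count': 50,
}
# Put the parameters in the query string so the request stays a GET;
# urllib2.Request(url, data) would have sent a POST body instead.
request = urllib2.Request(url + '?' + urllib.urlencode(params))
response = urllib2.urlopen(request)
print(response.read())
```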
|
2015/02/19
|
[
"https://Stackoverflow.com/questions/28610556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4584513/"
] |
If "if clause" has to span more than one lines of code you have to surround it with curly brackets "{}". Change your code to :
```
using System;
class Program
{
static void Main(string[] args)
{
int tal1, tal2;
int slinga;
tal2 = Convert.ToInt32(Console.ReadLine());
for (slinga = 0; slinga < 2; slinga++)
{
if (tal1 == 56)
{
Console.WriteLine(Addera(slinga, tal1));
tal2--;
}
else
tal1 = 56;
}
}
static int Addera(int tal1, int tal2)
{
return tal1 + tal2;
}
}
```
|
The following code lines are wrong:
```
if (tal1 == 56)
Console.WriteLine(Addera(slinga, tal1));
tal2--;
else tal1 = 56;
```
You need to update it to
```
if (tal1 == 56){
Console.WriteLine(Addera(slinga, tal1));
tal2--;
}
else {
tal1 = 56;
}
```
Reason: You need `{ }` for multi-line `if-else` conditions.
[MSDN says](https://msdn.microsoft.com/en-us/library/5011f09h.aspx)
>
> Both the then-statement and the else-statement can consist of a single statement or multiple statements that are enclosed in braces ({}). For a single statement, the braces are optional but recommended.
>
>
>
So you need `{ }` since it's multi-lined
|
28,610,556
|
I'm a beginner in Python, and I'm trying to write a program that makes a call to the Weibo (Chinese Twitter) API and receives a JSON response. It's just a basic example of searching by keyword and fetching the search results.
But the problem is I don't know how to make an API call from Python, so I keep getting error messages. The API I'm trying to use is <http://open.weibo.com/wiki/2/search/topics>
It's in Chinese but basically it gives the API URL, the method -> GET, and the list of parameters I need. My guess is that I messed up the parameters, and that method: GET shouldn't be treated as a parameter but handled in some other way which I don't know. Can somebody help??
Below is what I tried. I'm just pasting the relevant part; before this part there is API authorization code.
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# sudo pip install sinaweibopy
import sys
import urllib, urllib2
from weibo import APIClient
import webbrowser
APP_KEY = '1234' # there are real values here in the actual code
APP_SECRET = '1234'
CALLBACK_URL = 'http://111.111'
def get_auth():
# some code here, not pasted
def get_data():
access_token = '1234'
expires_in = '1234'
# This works fine
client = APIClient(app_key=APP_KEY, app_secret=APP_SECRET, redirect_uri=CALLBACK_URL)
client.set_access_token(access_token, expires_in)
r = client.statuses.user_timeline.get()
for st in r.statuses:
print st.text.encode('utf-8')
# This doesn't work
# statuses = client.search.topics.get(q=u'eland')
# This also doesn't work
# url = 'https://api.weibo.com/2/search/topics.json'
# params = {'method': GET, 'source': APP_KEY, 'access_token': access_token, 'q': 'new balance', 'count' : 50}
# request = urllib2.Request(url, urllib.urlencode(params))
# response = urllib2.urlopen(request)
```
error message (url call):
```
Traceback (most recent call last):
File "weibopr.py", line 85, in <module>
elif opt == '2': get_data()
File "weibopr.py", line 57, in get_data
response = urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
|
2015/02/19
|
[
"https://Stackoverflow.com/questions/28610556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4584513/"
] |
If "if clause" has to span more than one lines of code you have to surround it with curly brackets "{}". Change your code to :
```
using System;
class Program
{
static void Main(string[] args)
{
int tal1, tal2;
int slinga;
tal2 = Convert.ToInt32(Console.ReadLine());
for (slinga = 0; slinga < 2; slinga++)
{
if (tal1 == 56)
{
Console.WriteLine(Addera(slinga, tal1));
tal2--;
}
else
tal1 = 56;
}
}
static int Addera(int tal1, int tal2)
{
return tal1 + tal2;
}
}
```
|
Change
```
if (tal1 == 56)
Console.WriteLine(Addera(slinga, tal1));
tal2--;
else tal1 = 56;
```
to
```
if (tal1 == 56)
{
Console.WriteLine(Addera(slinga, tal1));
tal2--;
}
else tal1 = 56;
```
The {} tells the compiler that the code between those brackets is part of the same logic branch.
|
27,270,530
|
I am worried that this might be a really stupid question. However, I can't find a solution.
I want to do the following operation in python without using a loop, because I am dealing with large size arrays.
Is there any suggestion?
```
import numpy as np
a = np.array([1,2,3,..., N]) # arbitrary 1d array
b = np.array([[1,2,3],[4,5,6],[7,8,9]]) # arbitrary 2d array
c = np.zeros((N,3,3))
c[0,:,:] = a[0]*b
c[1,:,:] = a[1]*b
c[2,:,:] = a[2]*b
c[3,:,:] = ...
...
...
c[N-1,:,:] = a[N-1]*b
```
|
2014/12/03
|
[
"https://Stackoverflow.com/questions/27270530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3683468/"
] |
To avoid Python-level loops, you could use `np.newaxis` to expand `a` (or None, which is the same thing):
```
>>> a = np.arange(1,5)
>>> b = np.arange(1,10).reshape((3,3))
>>> a[:,None,None]*b
array([[[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9]],
[[ 2, 4, 6],
[ 8, 10, 12],
[14, 16, 18]],
[[ 3, 6, 9],
[12, 15, 18],
[21, 24, 27]],
[[ 4, 8, 12],
[16, 20, 24],
[28, 32, 36]]])
```
Or [`np.einsum`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html), which is overkill here, but is often handy and makes it very explicit what you want to happen with the coordinates:
```
>>> c2 = np.einsum('i,jk->ijk', a, b)
>>> np.allclose(c2, a[:,None,None]*b)
True
```
|
Didn't understand this multiplication.. but here is a way to make matrix multiplication in python using numpy:
```
import numpy as np
a = np.matrix([1, 2])
b = np.matrix([[1, 2], [3, 4]])
result = a*b
print(result)
>>>result
matrix([[ 7, 10]])
```
|
27,270,530
|
I am worrying that this might be a really stupid question. However I can't find a solution.
I want to do the following operation in python without using a loop, because I am dealing with large size arrays.
Is there any suggestion?
```
import numpy as np
a = np.array([1,2,3,..., N]) # arbitrary 1d array
b = np.array([[1,2,3],[4,5,6],[7,8,9]]) # arbitrary 2d array
c = np.zeros((N,3,3))
c[0,:,:] = a[0]*b
c[1,:,:] = a[1]*b
c[2,:,:] = a[2]*b
c[3,:,:] = ...
...
...
c[N-1,:,:] = a[N-1]*b
```
|
2014/12/03
|
[
"https://Stackoverflow.com/questions/27270530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3683468/"
] |
My answer uses only `numpy` primitives, in particular for the array multiplication (what you want to do has a name, it is an *outer product*).
Due to a restriction in `numpy`'s outer multiplication function we have to reshape the result, but this is very cheap because the data block of the `ndarray` is not involved.
```
% python
Python 2.7.8 (default, Oct 18 2014, 12:50:18)
[GCC 4.9.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a = np.array((1,2))
>>> b = np.array([[n*m for m in (1,2,3,4,5,6)] for n in (10,100,1000)])
>>> print b
[[ 10 20 30 40 50 60]
[ 100 200 300 400 500 600]
[1000 2000 3000 4000 5000 6000]]
>>> print np.outer(a,b)
[[ 10 20 30 40 50 60 100 200 300 400 500 600
1000 2000 3000 4000 5000 6000]
[ 20 40 60 80 100 120 200 400 600 800 1000 1200
2000 4000 6000 8000 10000 12000]]
>>> print "Almost there!"
Almost there!
>>> print np.outer(a,b).reshape(a.shape[0],b.shape[0], b.shape[1])
[[[ 10 20 30 40 50 60]
[ 100 200 300 400 500 600]
[ 1000 2000 3000 4000 5000 6000]]
[[ 20 40 60 80 100 120]
[ 200 400 600 800 1000 1200]
[ 2000 4000 6000 8000 10000 12000]]]
>>>
```
|
Didn't understand this multiplication.. but here is a way to make matrix multiplication in python using numpy:
```
import numpy as np
a = np.matrix([1, 2])
b = np.matrix([[1, 2], [3, 4]])
result = a*b
print(result)
>>>result
matrix([[ 7, 10]])
```
|
36,774,171
|
While building python from source on a MacOS, I accidntally overwrote the python that came with MacOS, now it doesn't have SSL. I tried to build again by running `--with-ssl` option
```
./configure --with-ssl
```
but when I subsequently ran `make`, it said this
```
Python build finished, but the necessary bits to build these modules were not found:
_bsddb _ssl dl
imageop linuxaudiodev ossaudiodev
readline spwd sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
```
It's not clear to me from looking at `setup.py` what I'm supposed to do to find the "necessary bits". What can I do to build python with SSL on MacOS?
|
2016/04/21
|
[
"https://Stackoverflow.com/questions/36774171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/577455/"
] |
Just open `setup.py` and find method `detect_modules()`. It has some lines like (2.7.11 for me):
```
# Detect SSL support for the socket module (via _ssl)
search_for_ssl_incs_in = [
'/usr/local/ssl/include',
'/usr/contrib/ssl/include/'
]
ssl_incs = find_file('openssl/ssl.h', inc_dirs,
search_for_ssl_incs_in
)
if ssl_incs is not None:
krb5_h = find_file('krb5.h', inc_dirs,
['/usr/kerberos/include'])
if krb5_h:
ssl_incs += krb5_h
ssl_libs = find_library_file(self.compiler, 'ssl',lib_dirs,
['/usr/local/ssl/lib',
'/usr/contrib/ssl/lib/'
] )
if (ssl_incs is not None and
ssl_libs is not None):
exts.append( Extension('_ssl', ['_ssl.c'],
include_dirs = ssl_incs,
library_dirs = ssl_libs,
libraries = ['ssl', 'crypto'],
depends = ['socketmodule.h']), )
else:
missing.append('_ssl')
```
So it seems that you need SSL and Kerberos. Kerberos comes installed with the Mac. So you need to install `openssl`. You can do it with `brew`:
```
brew install openssl
```
The `openssl` headers could be installed in a path different from the one Python will search, so issue
```
locate ssl.h
```
and add the path to `search_for_ssl_incs_in`. For example for me it is:
```
/usr/local/Cellar/openssl/1.0.2d_1/include/openssl/ssl.h
```
So I should add `/usr/local/Cellar/openssl/1.0.2d_1/include/` to `search_for_ssl_incs_in`.
Don't forget that these are for Python 2.7.11, but the process should be the same.
Hope that helps.
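For concreteness, here is a minimal sketch of what the edited list inside `detect_modules()` might look like; the Homebrew Cellar path is just the example reported by `locate ssl.h` above and will differ on your machine:
```
# Sketch only: extend the include search list in setup.py's detect_modules()
# with the directory that `locate ssl.h` reported (the version in the path is an example).
search_for_ssl_incs_in = [
    '/usr/local/ssl/include',
    '/usr/contrib/ssl/include/',
    '/usr/local/Cellar/openssl/1.0.2d_1/include/',
]
```
After editing, re-run `make` so the `_ssl` module gets built against the new include path.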
|
First of all, MacOS only includes LibreSSL 2.2.7 libraries and no headers, you really want to install OpenSSL using homebrew:
```
$ brew install openssl
```
The openssl formula is a *keg-only* formula because the LibreSSL library is shadowing OpenSSL and Homebrew will not interfere with this. This means that you can find OpenSSL not in `/usr/local` but in `/usr/local/opt/openssl`. But Homebrew includes the necessary command-line tools to figure out what path to use.
You then need to tell `configure` about these. If you are building Python 3.7 or newer, use the `--with-openssl` switch:
```
./configure --with-openssl=$(brew --prefix openssl)
```
If you are building an older release, set the `CPPFLAGS` and `LDFLAGS` environment variables:
```
CPPFLAGS="-I$(brew --prefix openssl)/include" \
LDFLAGS="-L$(brew --prefix openssl)/lib" \
./configure
```
and the Python configuration infrastructure takes it from there.
Know that *now ancient* Python releases (2.6 or older, 3.0-3.4) only work with OpenSSL 1.0.x and before, which no longer is installable from homebrew core.
|
58,738,629
|
I'm trying to convert string to date using `arrow` module.
During the conversion, I received this error:
`arrow.parser.ParserMatchError: Failed to match '%A %d %B %Y %I:%M:%S %p %Z' when parsing 'Wednesday 06 November 2019 03:05:42 PM CDT'`
The conversion is done using one simple line according to this [documentation](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior):
`date = arrow.get(date, '%A %d %B %Y %I:%M:%S %p %Z')`
I also try to do this with `datetime` and got another error:
`ValueError: time data 'Wednesday 06 November 2019 03:27:33 PM CDT' does not match format '%A %d %B %Y %I:%M:%S %p %Z'`
What am I missing?
|
2019/11/06
|
[
"https://Stackoverflow.com/questions/58738629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4193208/"
] |
This looks like a [regex](https://www.regular-expressions.info/) kind of problem to me so use [Pattern](https://docs.oracle.com/javase/9/docs/api/java/util/regex/Pattern.html) class. By positively matching what we want, it implicitly ignores files that don't conform (like your `._` example)
```
final Pattern p = Pattern.compile(".*_(\\d{4})-(\\d{2})\\.pdf$");
for (File obj : contentsOfDirectory) {
    if (obj.isFile()) {
        String file = "this is the file directory";
        String pdfBills = file + obj.getName().toString();
        Matcher m = p.matcher(pdfBills);
        if (m.matches()) {
            int year = Integer.parseInt(m.group(1));
            int month = Integer.parseInt(m.group(2));
            // ... do stuff with year and month
        }
    }
}
```
|
What if instead of looking at the end of the filename, you inspected the beginning? It looks like the first part of the filename is consistently YYYY-MM, you could then parse out the year and the month using `.substring()` like so:
```
String year = pdfBills.substring(0, 4);
String month = pdfBills.substring(5, 7);
```
Then, you can convert your month numeric String to a human readable month String like this:
```
import java.text.DateFormatSymbols;
DateFormatSymbols symbols = new DateFormatSymbols();
int intMonth = Integer.parseInt(month);
String monthName = symbols.getMonths()[intMonth-1];
```
|
52,996,227
|
I have a JSON file that looks like this:
```
{
"authors": [
{
"name": "John Steinbeck",
"description": "An author from Salinas California"
},
{
"name": "Mark Twain",
"description": "An icon of american literature",
"publications": [
{
"book": "Huckleberry Fin"
},
{
"book": "The Mysterious Stranger"
},
{
"book": "Puddinhead Wilson"
}
]
},
{
"name": "Herman Melville",
"description": "Wrote about a famous whale.",
"publications": [
{
"book": "Moby Dick"
}
]
},
{
"name": "Edgar Poe",
"description": "Middle Name was Alan"
}
]
}
```
I'm using python to get the values of the publications elements.
my code looks like this:
```
import json
with open('derp.json') as f:
data = json.load(f)
for i in range (0, len (data['authors'])):
print data['authors'][i]['name']+data['authors'][i]['publications']
```
I'm able to get all the names if i just use a:
```
print data['authors'][i]['name']
```
But when I attempt to iterate through to return the publications, I get a keyError. I expect it's because the publications element isn't part of every author.
How can I get these values to return?
|
2018/10/25
|
[
"https://Stackoverflow.com/questions/52996227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4163962/"
] |
>
> "Use the force, Linq!" - Obi Enum Kenobi
>
>
>
```
using System.Linq;
List<Int32> numbers = new List<Int32>()
{
1,
2,
3,
4
};
String asString = String
.Join(
", ",
numbers.Select( n => n.ToString( CultureInfo.InvariantCulture ) )
);
List<Int32> fromString = asString
.Split( ',' )
.Select( c => Int32.Parse( c, CultureInfo.InvariantCulture ) )
.ToList();
```
When converting to and from strings that are read by machines, not humans, it's important to avoid using `ToString` and `Parse` without using `CultureInfo.InvariantCulture` to ensure consistent formatting regardless of a user's culture and formatting settings.
FWIW, I have my own helper library that adds this useful extension method:
```
public static String ToStringInvariant<T>( this T value )
where T : IConvertible
{
return value.ToString( CultureInfo.InvariantCulture );
}
public static String StringJoin( this IEnumerable<String> source, String separator )
{
return String.Join( separator, source );
}
```
Which tidies things up somewhat:
```
String asString = numbers
.Select( n => n.ToStringInvariant() )
.StringJoin( ", " );
List<Int32> fromString = asString
.Split( ',' )
.Select( c => Int32.Parse( c, CultureInfo.InvariantCulture ) )
.ToList();
```
|
The easiest way is to make a new list each time, casting each item as you iterate.
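The advice is language-agnostic; as a minimal Python sketch (the variable names are made up for illustration), it amounts to building a fresh list while casting each element:
```
# Hypothetical example: build a new list, casting each element as we iterate.
numbers = [1, 2, 3, 4]
as_strings = [str(n) for n in numbers]        # ['1', '2', '3', '4']
back_to_ints = [int(s) for s in as_strings]   # [1, 2, 3, 4]
```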
|
52,996,227
|
I have a JSON file that looks like this:
```
{
"authors": [
{
"name": "John Steinbeck",
"description": "An author from Salinas California"
},
{
"name": "Mark Twain",
"description": "An icon of american literature",
"publications": [
{
"book": "Huckleberry Fin"
},
{
"book": "The Mysterious Stranger"
},
{
"book": "Puddinhead Wilson"
}
]
},
{
"name": "Herman Melville",
"description": "Wrote about a famous whale.",
"publications": [
{
"book": "Moby Dick"
}
]
},
{
"name": "Edgar Poe",
"description": "Middle Name was Alan"
}
]
}
```
I'm using python to get the values of the publications elements.
my code looks like this:
```
import json
with open('derp.json') as f:
data = json.load(f)
for i in range (0, len (data['authors'])):
print data['authors'][i]['name']+data['authors'][i]['publications']
```
I'm able to get all the names if i just use a:
```
print data['authors'][i]['name']
```
But when I attempt to iterate through to return the publications, I get a keyError. I expect it's because the publications element isn't part of every author.
How can I get these values to return?
|
2018/10/25
|
[
"https://Stackoverflow.com/questions/52996227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4163962/"
] |
>
> "Use the force, Linq!" - Obi Enum Kenobi
>
>
>
```
using System.Linq;
List<Int32> numbers = new List<Int32>()
{
1,
2,
3,
4
};
String asString = String
.Join(
", ",
numbers.Select( n => n.ToString( CultureInfo.InvariantCulture ) )
);
List<Int32> fromString = asString
.Split( ',' )
.Select( c => Int32.Parse( c, CultureInfo.InvariantCulture ) )
.ToList();
```
When converting to and from strings that are read by machines, not humans, it's important to avoid using `ToString` and `Parse` without using `CultureInfo.InvariantCulture` to ensure consistent formatting regardless of a user's culture and formatting settings.
FWIW, I have my own helper library that adds this useful extension method:
```
public static String ToStringInvariant<T>( this T value )
where T : IConvertible
{
return value.ToString( CultureInfo.InvariantCulture );
}
public static String StringJoin( this IEnumerable<String> source, String separator )
{
return String.Join( separator, source );
}
```
Which tidies things up somewhat:
```
String asString = numbers
.Select( n => n.ToStringInvariant() )
.StringJoin( ", " );
List<Int32> fromString = asString
.Split( ',' )
.Select( c => Int32.Parse( c, CultureInfo.InvariantCulture ) )
.ToList();
```
|
String is the 2nd worst format you can have. Only binary is slightly worse. If you got an Int, keep it an int. Do not transform anything into a string unless you **really** need to (like sending it via XML, use IO). This does not seem like such a case.
The only reason I can think you want to turn them into strings to build a DB query. But do not build SQL Queries via string concatenation. You only get [SQL Injections](https://xkcd.com/327/) doing that. Use SQL Parameters. If you do, there is no need to convert them to string. Something between the SQL Classes and the DBMS will deal with getting it transferred reliably.
If for some reason the SQL is actually storing that Data as NVARCHARS: Replace that DB design. There is no reason to ever have a DB store a number as string. It has proper Number Field Types in anything this side of MS Access.
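The parameter-binding advice applies in any language; as a minimal sketch in Python with `sqlite3` (the table and column names are invented for illustration), binding replaces string concatenation entirely:
```
# Sketch: let the driver bind the integers instead of concatenating them into SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (value INTEGER)")

numbers = [1, 2, 3, 4]
conn.executemany("INSERT INTO readings (value) VALUES (?)",
                 [(n,) for n in numbers])
conn.commit()
print(conn.execute("SELECT value FROM readings").fetchall())  # [(1,), (2,), (3,), (4,)]
```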
|
52,996,227
|
I have a JSON file that looks like this:
```
{
"authors": [
{
"name": "John Steinbeck",
"description": "An author from Salinas California"
},
{
"name": "Mark Twain",
"description": "An icon of american literature",
"publications": [
{
"book": "Huckleberry Fin"
},
{
"book": "The Mysterious Stranger"
},
{
"book": "Puddinhead Wilson"
}
]
},
{
"name": "Herman Melville",
"description": "Wrote about a famous whale.",
"publications": [
{
"book": "Moby Dick"
}
]
},
{
"name": "Edgar Poe",
"description": "Middle Name was Alan"
}
]
}
```
I'm using python to get the values of the publications elements.
my code looks like this:
```
import json
with open('derp.json') as f:
data = json.load(f)
for i in range (0, len (data['authors'])):
print data['authors'][i]['name']+data['authors'][i]['publications']
```
I'm able to get all the names if i just use a:
```
print data['authors'][i]['name']
```
But when I attempt to iterate through to return the publications, I get a keyError. I expect it's because the publications element isn't part of every author.
How can I get these values to return?
|
2018/10/25
|
[
"https://Stackoverflow.com/questions/52996227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4163962/"
] |
String is the 2nd worst format you can have. Only binary is slightly worse. If you got an Int, keep it an int. Do not transform anything into a string unless you **really** need to (like sending it via XML, use IO). This does not seem like such a case.
The only reason I can think you want to turn them into strings to build a DB query. But do not build SQL Queries via string concatenation. You only get [SQL Injections](https://xkcd.com/327/) doing that. Use SQL Parameters. If you do, there is no need to convert them to string. Something between the SQL Classes and the DBMS will deal with getting it transferred reliably.
If for some reason the SQL is actually storing that Data as NVARCHARS: Replace that DB design. There is no reason to ever have a DB store a number as string. It has proper Number Field Types in anything this side of MS Access.
|
The easiest way is to make a new list each time, casting each item as you iterate.
|
1,168,687
|
Right now I'm working on a scripting language that doesn't yet have a FFI. I'd like to know what's the most convenient way to get it in, assuming that I'd like to write it like cool geeks do - I'd like to write the FFI in the scripting language itself.
The programming language I need to interface is C. So for basics I know that libdl.so is my best friend. Obviously it's not the only thing I'm going to need but the most important of them.
I only have slight ideas about what else do I need for it. I'd like to get similar behavior from the FFI as what python ctypes has.
What should I need to know in order to get this far? I know there's some serious magic with data structures I'll need to deal with. How do I manage it so that I could do most of that serious magic in the scripting language itself? I'd have use from such magic for much more than just the foreign function interface. For instance I might want to pass C-like binary data to files.
|
2009/07/22
|
[
"https://Stackoverflow.com/questions/1168687",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21711/"
] |
I think an appropiate answer requires a [detailed essay](http://vmathew.in/dnc.html).
Basically, there should be wrappers for the library loading and symbol searching facilities provided by the host OS. If the core datatypes of your language are internally represented with a single C data structure, then a requirement can be placed on the library developers that the parameters and the return type of the exported C functions should be objects of that data structure. This will make data exchange simpler to implement. If your language has some form of pattern expressions and first class functions, then the signature of C functions might be written in patterns and the library searched for a function matching an equivalent signature. Here is some pseudocode of a C function and its usage in script:
```
/* arith.dll */
/* A sample C function callable from the scripting language. */
#include "my_script.h" // Data structures used by the script interpreter.
My_Script_Object* add(My_Script_Object* num1, My_Script_Object* num2)
{
int a = My_Script_Object_To_Int(num1);
int b = My_Script_Object_To_Int(num2);
return Int_To_My_Script_Object(a + b);
}
/* End of arith.dll */
// Script using the dll
clib = open_library("arith.dll");
// if it has first-class functions
add_func = clib.find([add int int]);
if (add_func != null)
{
sum = add_func(10, 20);
print(sum);
}
// otherwise
print(clib.call("add", 10 20));
```
It is not possible to discuss more implementation details here. Note that
we haven't said anything about garbage collection, etc.
The sources available at the following links may help you move further:
<http://common-lisp.net/project/cffi/>
<http://www.nongnu.org/cinvoke/>
|
Check out <http://sourceware.org/libffi/>
Remember the calling conventions are going to be different on different architectures, i.e. what order function variables are popped onto the stack. I don't know about writing it in your own scripting language, I do know that Java JNI uses libffi.
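As a reference point for the ctypes-like behaviour the question mentions, here is a minimal Python sketch (it assumes a typical glibc system where the C library is `libc.so.6`); under the hood this is exactly the `dlopen`/`dlsym` plus libffi machinery discussed above:
```
# ctypes as a model: the library is dlopen'ed, the symbol is dlsym'ed, and
# libffi marshals the Python arguments into the C calling convention.
import ctypes

libc = ctypes.CDLL("libc.so.6")        # dlopen("libc.so.6")
libc.printf.restype = ctypes.c_int     # declare the C return type
libc.printf(b"value = %d\n", 42)       # arguments converted to C types
```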
|
63,966,342
|
```
import pandas as pd
import datetime as dt
from pandas_datareader import data as web
import yfinance as yf
yf.pdr_override()
```
filename = r'C:\Users\User\Desktop\from_python\data_from_python.xlsx'
```
yeah = pd.read_excel(filename, sheet_name='entry')
stock = []
stock = list(yeah['name'])
stock = [ s.replace('\xa0', '') for s in stock if not pd.isna(s) ]
adj_close=pd.DataFrame([])
high_price=pd.DataFrame([])
low_price=pd.DataFrame([])
volume=pd.DataFrame([])
print(stock)
['^GSPC', 'NQ=F', 'AAU', 'ALB', 'AOS', 'APPS', 'AQB', 'ASPN', 'ATHM', 'AZRE', 'BCYC', 'BGNE', 'CAT', 'CC', 'CLAR', 'CLCT', 'CMBM', 'CMT', 'CRDF', 'CYD', 'DE', 'DKNG', 'EARN', 'EMN', 'FBIO', 'FBRX', 'FCX', 'FLXS', 'FMC', 'FMCI', 'GME', 'GRVY', 'HAIN', 'HBM', 'HIBB', 'IEX', 'IOR', 'KFS', 'MAXR', 'MPX', 'MRTX', 'NSTG', 'NVCR', 'NVO', 'OESX', 'PENN', 'PLL', 'PRTK', 'RDY', 'REGI', 'REKR', 'SBE', 'SQM', 'TCON', 'TCS', 'TGB', 'TPTX', 'TRIL', 'UEC', 'VCEL', 'VOXX', 'WIT', 'WKHS', 'XNCR']
for symbol in stock:
adj_close[symbol] = web.get_data_yahoo([symbol],start,end)['Adj Close']
```
I have a list of tickers and I have already got the adj close price; how can I get these tickers' NAME and SECTOR?
For a single ticker I found on the web that it can be done as below:
```
sbux = yf.Ticker("SBUX")
tlry = yf.Ticker("TLRY")
print(sbux.info['sector'])
print(tlry.info['sector'])
```
How can I make it a `dataframe` so that I can put the data into Excel, as I am doing for the adj close price?
Thanks a lot!
|
2020/09/19
|
[
"https://Stackoverflow.com/questions/63966342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13933399/"
] |
You can try this answer using a package called [yahooquery](https://github.com/dpguthrie/yahooquery). Disclaimer: I am the author of the package.
```
from yahooquery import Ticker
import pandas as pd
symbols = ['^GSPC', 'NQ=F', 'AAU', 'ALB', 'AOS', 'APPS', 'AQB', 'ASPN', 'ATHM', 'AZRE', 'BCYC', 'BGNE', 'CAT', 'CC', 'CLAR', 'CLCT', 'CMBM', 'CMT', 'CRDF', 'CYD', 'DE', 'DKNG', 'EARN', 'EMN', 'FBIO', 'FBRX', 'FCX', 'FLXS', 'FMC', 'FMCI', 'GME', 'GRVY', 'HAIN', 'HBM', 'HIBB', 'IEX', 'IOR', 'KFS', 'MAXR', 'MPX', 'MRTX', 'NSTG', 'NVCR', 'NVO', 'OESX', 'PENN', 'PLL', 'PRTK', 'RDY', 'REGI', 'REKR', 'SBE', 'SQM', 'TCON', 'TCS', 'TGB', 'TPTX', 'TRIL', 'UEC', 'VCEL', 'VOXX', 'WIT', 'WKHS', 'XNCR']
# Create Ticker instance, passing symbols as first argument
# Optional asynchronous argument allows for asynchronous requests
tickers = Ticker(symbols, asynchronous=True)
data = tickers.get_modules("summaryProfile quoteType")
df = pd.DataFrame.from_dict(data).T
# flatten dicts within each column, creating new dataframes
dataframes = [pd.json_normalize([x for x in df[module] if isinstance(x, dict)]) for module in ['summaryProfile', 'quoteType']]
# concat dataframes from previous step
df = pd.concat(dataframes, axis=1)
# View columns
df.columns
Index(['address1', 'address2', 'city', 'state', 'zip', 'country', 'phone',
'fax', 'website', 'industry', 'sector', 'longBusinessSummary',
'fullTimeEmployees', 'companyOfficers', 'maxAge', 'exchange',
'quoteType', 'symbol', 'underlyingSymbol', 'shortName', 'longName',
'firstTradeDateEpochUtc', 'timeZoneFullName', 'timeZoneShortName',
'uuid', 'messageBoardId', 'gmtOffSetMilliseconds', 'maxAge'],
dtype='object')
# Data you're looking for
df[['symbol', 'shortName', 'sector']].head(10)
symbol shortName sector
0 NQZ20.CME Nasdaq 100 Dec 20 NaN
1 ALB Albemarle Corporation Basic Materials
2 AOS A.O. Smith Corporation Industrials
3 ASPN Aspen Aerogels, Inc. Industrials
4 AAU Almaden Minerals, Ltd. Basic Materials
5 ^GSPC S&P 500 NaN
6 ATHM Autohome Inc. Communication Services
7 AQB AquaBounty Technologies, Inc. Consumer Defensive
8 APPS Digital Turbine, Inc. Technology
9 BCYC Bicycle Therapeutics plc Healthcare
```
|
It processes stocks and sectors at the same time. However, some stocks do not have a sector, so an error countermeasure is added.
Since each column name consists of the sector and the ticker name, we change it to a hierarchical (MultiIndex) column and update the retrieved data frame. Finally, I save it in CSV format so it can be imported into Excel. I've only tried some of the tickers because there are so many, so there may be some issues.
```
import datetime
import pandas as pd
import yfinance as yf
import pandas_datareader.data as web
yf.pdr_override()
start = "2018-01-01"
end = "2019-01-01"
# symbol = ['^GSPC', 'NQ=F', 'AAU', 'ALB', 'AOS', 'APPS', 'AQB', 'ASPN', 'ATHM', 'AZRE', 'BCYC', 'BGNE', 'CAT',
#'CC', 'CLAR', 'CLCT', 'CMBM', 'CMT', 'CRDF', 'CYD', 'DE', 'DKNG', 'EARN', 'EMN', 'FBIO', 'FBRX', 'FCX', 'FLXS',
#'FMC', 'FMCI', 'GME', 'GRVY', 'HAIN', 'HBM', 'HIBB', 'IEX', 'IOR', 'KFS', 'MAXR', 'MPX', 'MRTX', 'NSTG', 'NVCR',
#'NVO', 'OESX', 'PENN', 'PLL', 'PRTK', 'RDY', 'REGI', 'REKR', 'SBE', 'SQM', 'TCON', 'TCS', 'TGB', 'TPTX', 'TRIL',
#'UEC', 'VCEL', 'VOXX', 'WIT', 'WKHS', 'XNCR']
stock = ['^GSPC', 'NQ=F', 'AAU', 'ALB', 'AOS', 'APPS']
adj_close = pd.DataFrame([])
for symbol in stock:
try:
sector = yf.Ticker(symbol).info['sector']
name = yf.Ticker(symbol).info['shortName']
except:
sector = 'None'
name = 'None'
adj_close[sector, symbol] = web.get_data_yahoo(symbol, start=start, end=end)['Adj Close']
idx = pd.MultiIndex.from_tuples(adj_close.columns)
adj_close.columns = idx
adj_close.head()
None Basic Materials Industrials Technology
^GSPC_None NQ=F_None AAU_None ALB_Albemarle Corporation AOS_A.O. Smith Corporation APPS_Digital Turbine, Inc.
2018-01-02 2695.810059 6514.75 1.03 125.321663 58.657742 1.79
2018-01-03 2713.060059 6584.50 1.00 125.569397 59.010468 1.87
2018-01-04 2723.989990 6603.50 0.98 124.073502 59.286930 1.86
2018-01-05 2743.149902 6667.75 1.00 125.502716 60.049587 1.96
2018-01-08 2747.709961 6688.00 0.95 130.962250 60.335583 1.96
# for excel
adj_close.to_csv('stock.csv', sep=',')
```
|
18,828,124
|
I am running my django web app in httpd .
In httpd.conf this is what I have.
```
Listen 8090
User ctaftest
Group ctaftest
```
And after starting httpd server when I do
`netstat -anp |grep httpd`
I get
```
root 31621 1 1 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31625 31621 5 17:23 ? 00:00:02 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31626 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31627 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31628 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31629 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31646 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
```
Note that other than 1 process all other httpd processes are running with ctaftest user
Now this is my problem.
Within my view, if I do
```
dir_path = os.path.expanduser("~/dir_path")
```
I am getting `/root/dirpath` where as I am expecting `/home/ctaftest/dirpath`
Note: When I use Django development server (runserver) I get the expected output,
`/home/ctaftest/dirpath`
Whats wrong when I run from httpd and how can I get the user `ctaftest` itself as the current user when I run my Django webapp from `httpd`
|
2013/09/16
|
[
"https://Stackoverflow.com/questions/18828124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1371989/"
] |
What does your WSGIDaemonProcess configuration in the rest of your Apache config look like? You can set the user there.
```
WSGIDaemonProcess mysite user=ctaftest group=ctaftest threads=5
```
|
To start with, if you call:
```
os.path.expanduser("dir_path")
```
it should return just:
```
dir_path
```
Did you instead mean:
```
os.path.expanduser("~/dir_path")
```
Anyway, when you use embedded mode of mod\_wsgi, your code runs in the Apache child worker processes. These processes can be shared with other Apache modules such as PHP and Perl modules. Because it is a shared environment, neither mod\_wsgi nor any web application code can be presumptuous in thinking it can change the current working directory of the process. As a result, the current working directory is inherited from whatever Apache is started with, which would be the root of the file system.
For similar reasons, you can't go overriding what environment variables may be set and as a result, if Apache passed through HOME as being that for the root user that Apache starts as, then when you use os.path.expanduser('~'), the tilde will be replaced with whatever HOME was set to.
So what you are seeing is quite normal, including one process still running as root, which is the parent Apache process, in which none of your requests are being run anyway, as it just acts as a process monitor to manage the child worker processes, handle restarts etc.
In general, in a web application it is regarded as bad practice to rely on things like the current working directory, the values of environment variables such as HOME, USERNAME, PATH etc, as they aren't always set to sensible things depending on the hosting environment.
That all said, if when using mod\_wsgi, you instead use the preferred daemon mode, then because at that point it is only running your Python web application, mod\_wsgi will override HOME to be the directory for the user that the daemon process runs as. If environment variables such as USER, USERNAME and LOGNAME are set, it will also similarly override those with a value corresponding to what user the daemon process runs as. It will even change the current working directory to be the home directory for that user.
In summary. You should not be building in such dependencies into a web application, but specify such things via configuration, otherwise you limit portability. If you for some reason don't want to do that, then use daemon mode of mod\_wsgi instead.
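As a minimal sketch of the configuration-driven alternative suggested above (the `DATA_DIR` setting name is made up for illustration), the view would read an explicit path instead of expanding `~`:
```
# Sketch: read an explicit, configured path rather than relying on HOME
# or on whichever user the Apache/mod_wsgi process happens to run as.
import os
from django.conf import settings

# settings.py would define, e.g.:  DATA_DIR = "/home/ctaftest/dirpath"
dir_path = os.path.join(settings.DATA_DIR, "dir_path")
```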
|
18,828,124
|
I am running my django web app in httpd .
In httpd.conf this is what I have.
```
Listen 8090
User ctaftest
Group ctaftest
```
And after starting httpd server when I do
`netstat -anp |grep httpd`
I get
```
root 31621 1 1 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31625 31621 5 17:23 ? 00:00:02 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31626 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31627 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31628 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31629 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31646 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
```
Note that other than 1 process all other httpd processes are running with ctaftest user
Now this is my problem.
Within my view, if I do
```
dir_path = os.path.expanduser("~/dir_path")
```
I am getting `/root/dirpath` where as I am expecting `/home/ctaftest/dirpath`
Note: When I use Django development server (runserver) I get the expected output,
`/home/ctaftest/dirpath`
Whats wrong when I run from httpd and how can I get the user `ctaftest` itself as the current user when I run my Django webapp from `httpd`
|
2013/09/16
|
[
"https://Stackoverflow.com/questions/18828124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1371989/"
] |
What does your WSGIDaemonProcess configuration in the rest of your Apache config look like? You can set the user there.
```
WSGIDaemonProcess mysite user=ctaftest group=ctaftest threads=5
```
|
To solve it, the accepted answer helped; however, I had to add the `WSGIProcessGroup` directive also.
So I configured something like this.
```
WSGIDaemonProcess ctaf.com user=ctaftest group=ctaftest threads=10 python-path=/home/ctaftest/virtualpython/CTAFWEB_PRODUCTION/ctafweb
WSGIProcessGroup ctaf.com
```
|
18,828,124
|
I am running my django web app in httpd .
In httpd.conf this is what I have.
```
Listen 8090
User ctaftest
Group ctaftest
```
And after starting httpd server when I do
`netstat -anp |grep httpd`
I get
```
root 31621 1 1 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31625 31621 5 17:23 ? 00:00:02 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31626 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31627 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31628 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31629 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31646 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
```
Note that other than 1 process all other httpd processes are running with ctaftest user
Now this is my problem.
Within my view, if I do
```
dir_path = os.path.expanduser("~/dir_path")
```
I am getting `/root/dirpath` where as I am expecting `/home/ctaftest/dirpath`
Note: When I use Django development server (runserver) I get the expected output,
`/home/ctaftest/dirpath`
Whats wrong when I run from httpd and how can I get the user `ctaftest` itself as the current user when I run my Django webapp from `httpd`
|
2013/09/16
|
[
"https://Stackoverflow.com/questions/18828124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1371989/"
] |
To start with, if you call:
```
os.path.expanduser("dir_path")
```
it should return just:
```
dir_path
```
Did you instead mean:
```
os.path.expanduser("~/dir_path")
```
Anyway, when you use embedded mode of mod\_wsgi, your code runs in the Apache child worker processes. These processes can be shared with other Apache modules such as PHP and Perl modules. Because it is a shared environment, neither mod\_wsgi nor any web application code can be presumptuous in thinking it can change the current working directory of the process. As a result, the current working directory is inherited from whatever Apache is started with, which would be the root of the file system.
For similar reasons, you can't go overriding what environment variables may be set and as a result, if Apache passed through HOME as being that for the root user that Apache starts as, then when you use os.path.expanduser('~'), the tilde will be replaced with whatever HOME was set to.
So what you are seeing is quite normal, including one process still running as root, which is the parent Apache process, in which none of your requests are being run anyway, as it just acts as a process monitor to manage the child worker processes, handle restarts etc.
In general, in a web application it is regarded as bad practice to rely on things like the current working directory, the values of environment variables such as HOME, USERNAME, PATH etc, as they aren't always set to sensible things depending on the hosting environment.
That all said, if when using mod\_wsgi, you instead use the preferred daemon mode, then because at that point it is only running your Python web application, mod\_wsgi will override HOME to be the directory for the user that the daemon process runs as. If environment variables such as USER, USERNAME and LOGNAME are set, it will also similarly override those with a value corresponding to what user the daemon process runs as. It will even change the current working directory to be the home directory for that user.
In summary. You should not be building in such dependencies into a web application, but specify such things via configuration, otherwise you limit portability. If you for some reason don't want to do that, then use daemon mode of mod\_wsgi instead.
|
To solve it, the accepted answer helped; however, I had to add the `WSGIProcessGroup` directive also.
So I configured something like this.
```
WSGIDaemonProcess ctaf.com user=ctaftest group=ctaftest threads=10 python-path=/home/ctaftest/virtualpython/CTAFWEB_PRODUCTION/ctafweb
WSGIProcessGroup ctaf.com
```
|
48,171,851
|
I can't find a command example for archiving a set of files from a given prefix in S3 into a given vault in Glacier using ONLY COMMAND LINE, i.e. no Lifecycles, no python+boto. Thanks.
This doc has a lot of examples but none fit my request:
<https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html>
|
2018/01/09
|
[
"https://Stackoverflow.com/questions/48171851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1872286/"
] |
That's because you can't. As described in the [Amazon's S3 Documentation](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html):
>
> You cannot specify GLACIER as the storage class at the time that you create an object. You create GLACIER objects by first uploading objects using STANDARD, RRS, or STANDARD\_IA as the storage class. Then, you transition these objects to the GLACIER storage class using lifecycle management.
>
>
>
|
You're looking for this:
<https://aws.amazon.com/premiumsupport/knowledge-center/restore-s3-object-glacier-storage-class/>
```
aws s3 cp s3://bucketname/key/file s3://bucketname/key/file --storage-class GLACIER
```
optionally use --recursive instead of a specific file name.
|
27,712,101
|
I am trying to sort dictionaries in MongoDB. However, I get the value error "too many values to unpack" because I think it's implying that there are too many values in each dictionary (there are 16 values in each one). This is my code:
```
FortyMinute.find().sort(['Rank', 1])
```
Anyone know how to get around this?
EDIT: Full traceback
```
Traceback (most recent call last):
File "main.py", line 33, in <module>
main(sys.argv[1:])
File "main.py", line 21, in main
fm.readFortyMinute(args[0])
File "/Users/Yih-Jen/Documents/Rowing Project/FortyMinute.py", line 71, in readFortyMinute
writeFortyMinute(FortyMinData)
File "/Users/Yih-Jen/Documents/Rowing Project/FortyMinute.py", line 104, in writeFortyMinute
FortyMinute.find().sort(['Rank', 1])
File "/Users/Yih-Jen/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 692, in sort
self.__ordering = helpers._index_document(keys)
File "/Users/Yih-Jen/anaconda/lib/python2.7/site-packages/pymongo/helpers.py", line 65, in _index_document
for (key, value) in index_list:
ValueError: too many values to unpack
```
|
2014/12/30
|
[
"https://Stackoverflow.com/questions/27712101",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4392607/"
] |
You pass the arguments and values in **unpacked** as so:
```
FortyMinute.find().sort('Rank', 1)
```
---
It is only when you're passing **multiple sort parameters** that you use a list, and then each field and direction must be wrapped in a tuple, like so:
```
FortyMinute.find().sort([('Rank', 1), ('Date', 1)])
```
---
**Pro-tip:** Even the `Cursor.sort` documentation linked below recommends using `pymongo.DESCENDING` and `pymongo.ASCENDING` instead of 1 and -1; in general, you should use descriptive variable names instead of magic constants in your code as so:
```
FortyMinute.find().sort('Rank',pymongo.DESCENDING)
```
---
Finally, if you are so inclined, you can sort the results using Python's built-in `sorted`, as another answerer mentioned; but even though `sorted` accepts iterators and not just sequences, it is likely to be less efficient and nonstandard:
```
sorted(FortyMinute.find(), key=key_function)
```
where you might define `key_function` to return the `Rank` column of a record.
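For example, a minimal sketch of such a `key_function` might be:
```
# Hypothetical key function: sort the fetched documents by their 'Rank' field.
def key_function(record):
    return record['Rank']

ranked = sorted(FortyMinute.find(), key=key_function)
```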
---
[Link to the official documentation](http://api.mongodb.org/python/current/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort)
|
If you want mongo/pymongo to sort:
```
FortyMinute.find().sort('Rank', 1)
```
If you want to sort using multiple fields:
```
FortyMinute.find().sort([('Rank', 1), ('other', -1)])
```
You also have constants to make it more clear what you're doing:
```
FortyMinute.find().sort('Rank',pymongo.DESCENDING)
```
If you want to sort in python first you have to return the result and use a sorting method in python:
```
sorted(FortyMinute.find(), key=<some key...>)
```
|
50,208,381
|
I'm experimenting with developing python flask app, and would like to configure the app to apache as a daemon, so I wouldn't need to restart apache after every change. The configuration is now like [instructed here](https://code.google.com/archive/p/modwsgi/wikis/QuickConfigurationGuide.wiki#Mounting_At_Root_Of_Site%20QuickConfigGuide):
httpd.conf
```
WSGIDaemonProcess /rapo threads=5 display-name=%{GROUP}
WSGIProcessGroup /rapo
WSGIScriptAlias /rapo /var/www/cgi-bin/pycgi/koe.wsgi
```
koe.wsgi contains just
```
import sys
sys.path.insert(0, "/var/www/cgi-bin/pycgi")
from koe2 import app as application
```
And in koe2.py there is
```
@app.route('/rapo')
def hello_world():
return 'Hello, World!'
```
that output I can see when I go to the webserver's /rapo/hello -path, so flask works, but the daemon configuration does not (I still need to restart to see any changes made). Here with [similar problem](https://stackoverflow.com/a/2116481/364931) it seems the key was that the names match, and they do. SW versions: Apache/2.4.6 (CentOS) PHP/5.4.16 mod\_wsgi/3.4.
We don't have any [virtual hosts](http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html) defined in the httpd.conf, which might be the missing thing, as that worked [in this case](https://stackoverflow.com/q/36733012/364931)? Thanks for any help!
|
2018/05/07
|
[
"https://Stackoverflow.com/questions/50208381",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/364931/"
] |
You can use `sapply` or `lapply` to accomplish it.
```
#supposing your data.frame is called 'df'
sapply(df, unique)
#$x1
#[1] 1 2 3 4 6 7 8
#
#$x2
#[1] 2 5 7 8 9 0
#
#$x3
#[1] 6 5 1 2 3 4
```
or
```
lapply(df, unique)
#$x1
#[1] 1 2 3 4 6 7 8
#
#$x2
#[1] 2 5 7 8 9 0
#
#$x3
#[1] 6 5 1 2 3 4
```
|
```
# Imagine D is your data.frame object
apply(D,1, function(x) rle(x)$values)
```
|
50,208,381
|
I'm experimenting with developing python flask app, and would like to configure the app to apache as a daemon, so I wouldn't need to restart apache after every change. The configuration is now like [instructed here](https://code.google.com/archive/p/modwsgi/wikis/QuickConfigurationGuide.wiki#Mounting_At_Root_Of_Site%20QuickConfigGuide):
httpd.conf
```
WSGIDaemonProcess /rapo threads=5 display-name=%{GROUP}
WSGIProcessGroup /rapo
WSGIScriptAlias /rapo /var/www/cgi-bin/pycgi/koe.wsgi
```
koe.wsgi contains just
```
import sys
sys.path.insert(0, "/var/www/cgi-bin/pycgi")
from koe2 import app as application
```
And in koe2.py there is
```
@app.route('/rapo')
def hello_world():
return 'Hello, World!'
```
that output I can see when I go to the webserver's /rapo/hello -path, so flask works, but the daemon configuration does not (I still need to restart to see any changes made). Here with [similar problem](https://stackoverflow.com/a/2116481/364931) it seems the key was that the names match, and they do. SW versions: Apache/2.4.6 (CentOS) PHP/5.4.16 mod\_wsgi/3.4.
We don't have any [virtual hosts](http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html) defined in the httpd.conf, which might be the missing thing, as that worked [in this case](https://stackoverflow.com/q/36733012/364931)? Thanks for any help!
|
2018/05/07
|
[
"https://Stackoverflow.com/questions/50208381",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/364931/"
] |
```
# Imagine D is your data.frame object
apply(D,1, function(x) rle(x)$values)
```
|
```
A=apply(dat,1,unique)
data.frame(t(sapply(A,`length<-`,max(lengths(A)))))
X1 X2 X3 X4 X5 X6 X7
1 1 2 3 4 6 7 8
2 2 5 7 8 9 0 NA
3 6 5 1 2 3 4 NA
```
|
50,208,381
|
I'm experimenting with developing python flask app, and would like to configure the app to apache as a daemon, so I wouldn't need to restart apache after every change. The configuration is now like [instructed here](https://code.google.com/archive/p/modwsgi/wikis/QuickConfigurationGuide.wiki#Mounting_At_Root_Of_Site%20QuickConfigGuide):
httpd.conf
```
WSGIDaemonProcess /rapo threads=5 display-name=%{GROUP}
WSGIProcessGroup /rapo
WSGIScriptAlias /rapo /var/www/cgi-bin/pycgi/koe.wsgi
```
koe.wsgi contains just
```
import sys
sys.path.insert(0, "/var/www/cgi-bin/pycgi")
from koe2 import app as application
```
And in koe2.py there is
```
@app.route('/rapo')
def hello_world():
return 'Hello, World!'
```
that output I can see when I go to the webserver's /rapo/hello -path, so flask works, but the daemon configuration does not (I still need to restart to see any changes made). Here with [similar problem](https://stackoverflow.com/a/2116481/364931) it seems the key was that the names match, and they do. SW versions: Apache/2.4.6 (CentOS) PHP/5.4.16 mod\_wsgi/3.4.
We don't have any [virtual hosts](http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html) defined in the httpd.conf, which might be the missing thing, as that worked [in this case](https://stackoverflow.com/q/36733012/364931)? Thanks for any help!
|
2018/05/07
|
[
"https://Stackoverflow.com/questions/50208381",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/364931/"
] |
You can use `sapply` or `lapply` to accomplish it.
```
#supposing your data.frame is called 'df'
sapply(df, unique)
#$x1
#[1] 1 2 3 4 6 7 8
#
#$x2
#[1] 2 5 7 8 9 0
#
#$x3
#[1] 6 5 1 2 3 4
```
or
```
lapply(df, unique)
#$x1
#[1] 1 2 3 4 6 7 8
#
#$x2
#[1] 2 5 7 8 9 0
#
#$x3
#[1] 6 5 1 2 3 4
```
|
```
A=apply(dat,1,unique)
data.frame(t(sapply(A,`length<-`,max(lengths(A)))))
X1 X2 X3 X4 X5 X6 X7
1 1 2 3 4 6 7 8
2 2 5 7 8 9 0 NA
3 6 5 1 2 3 4 NA
```
|
56,034,831
|
I am using Keras with `fit_generator()`. My generator connects to a Database (MongoDB in my case) to fetch data for each batch. If I use the multiprocessing flag of `fit_generator()` I get this Warning:
```
UserWarning: MongoClient opened before fork. Create MongoClient only after forking.
```
I am connecting to the Database during `__init__()`:
```
class MyCustomGenerator(tf.keras.utils.Sequence):
def __init__(self, ...):
collection = MagicMongoDBConnector()
def __len__(self):
...
def __getitem__(self, idx):
# Using collection to fetch data from mongoDB
...
def on_epoch_end(self):
...
```
I would assume I need to have a separate connection for each epoch, but unfortunately, there is no `on_epoch_begin(self)` callback available (as seen [here](https://github.com/tensorflow/tensorflow/blob/v2.0.0-alpha0/tensorflow/python/keras/utils/data_utils.py)).
So two questions:
How and when does Keras fork the Generator if multiprocessing is used?
How can I get rid of the MongoClient warning and connect inside each fork?
|
2019/05/08
|
[
"https://Stackoverflow.com/questions/56034831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3531894/"
] |
I don't have a mongo DB to test on but this might work - you can get the collection (connection?) on the first get-item of each process.
```py
class MyCustomGenerator(tf.keras.utils.Sequence):
def __init__(self, ...):
self.collection = None
def __len__(self):
...
def __getitem__(self, idx):
if self.collection is None:
self.collection = MagicMongoDBConnector()
# Continue with your code
# Using collection to fetch data from mongoDB
...
def on_epoch_end(self):
...
```
|
If you're using Python 3.7 or newer, you could use [os.register\_at\_fork](https://docs.python.org/3/library/os.html#os.register_at_fork) to trigger creating the database connection.
For example, you could do something like:
```
from os import register_at_fork
def reinit_dbcon():
    # Recreate the MongoDB connection in the freshly forked child process;
    # generator_obj is assumed to be the globally accessible generator instance.
    generator_obj.collection = MagicMongoDBConnector()
register_at_fork(after_in_child=reinit_dbcon)
```
somewhere before you call `fit_generator`, assuming the generator object is accessible globally.
|
13,927,122
|
Trying to get this line of code to work, I keep running into issues no matter how I change the formatting around:
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
```
(year, month, day) can be either ints or strings.
Traceback:
```
Traceback (most recent call last):
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/toolbar.py", line 117, in toolbar_tween
response = _handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/panels/performance.py", line 55, in resource_timer_handler
result = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/tweens.py", line 20, in excview_tween
response = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/router.py", line 161, in handle_request
response = view_callable(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 342, in rendered_view
result = view(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 456, in _class_requestonly_view
response = getattr(inst, attr)()
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 56, in view_process
return self.handle_file_upload(self.request.params['file'], shareID)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 101, in handle_file_upload
self.save(file, newFileName, isImage, uploadTime)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 166, in save
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
File "/home/tinyup/dev/lib/python2.7/posixpath.py", line 66, in join
if b.startswith('/'):
AttributeError: 'list' object has no attribute 'startswith'
```
|
2012/12/18
|
[
"https://Stackoverflow.com/questions/13927122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247832/"
] |
You are missing the '\*' here:
```
>>> os.path.join('foo', *['a','b'])
'foo/a/b'
```
You have to use the star operator here in order to pass the list items as unpacked variable argument list to the method.
|
add \* before `[str(x) for x in [year, month, day]]`
`*[str(x) for x in [year, month, day]]`
|
13,927,122
|
Trying to get this line of code to work, I keep running into issues no matter how I change the formatting around:
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
```
(year, month, day) can be either ints or strings.
Traceback:
```
Traceback (most recent call last):
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/toolbar.py", line 117, in toolbar_tween
response = _handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/panels/performance.py", line 55, in resource_timer_handler
result = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/tweens.py", line 20, in excview_tween
response = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/router.py", line 161, in handle_request
response = view_callable(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 342, in rendered_view
result = view(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 456, in _class_requestonly_view
response = getattr(inst, attr)()
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 56, in view_process
return self.handle_file_upload(self.request.params['file'], shareID)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 101, in handle_file_upload
self.save(file, newFileName, isImage, uploadTime)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 166, in save
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
File "/home/tinyup/dev/lib/python2.7/posixpath.py", line 66, in join
if b.startswith('/'):
AttributeError: 'list' object has no attribute 'startswith'
```
|
2012/12/18
|
[
"https://Stackoverflow.com/questions/13927122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247832/"
] |
You are missing the '\*' here:
```
>>> os.path.join('foo', *['a','b'])
'foo/a/b'
```
You have to use the star operator here in order to pass the list items as unpacked variable argument list to the method.
|
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, str(year), str(month), str(day))):
```
for readability:
```
fname = os.path.join(IncludeSettings.FILE_URL, str(year), str(month), str(day))
if not os.path.exists(fname):
```
|
13,927,122
|
Trying to get this line of code to work, I keep running into issues no matter how I change the formatting around:
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
```
(year, month, day) can be either ints or strings.
Traceback:
```
Traceback (most recent call last):
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/toolbar.py", line 117, in toolbar_tween
response = _handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/panels/performance.py", line 55, in resource_timer_handler
result = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/tweens.py", line 20, in excview_tween
response = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/router.py", line 161, in handle_request
response = view_callable(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 342, in rendered_view
result = view(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 456, in _class_requestonly_view
response = getattr(inst, attr)()
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 56, in view_process
return self.handle_file_upload(self.request.params['file'], shareID)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 101, in handle_file_upload
self.save(file, newFileName, isImage, uploadTime)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 166, in save
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
File "/home/tinyup/dev/lib/python2.7/posixpath.py", line 66, in join
if b.startswith('/'):
AttributeError: 'list' object has no attribute 'startswith'
```
|
2012/12/18
|
[
"https://Stackoverflow.com/questions/13927122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247832/"
] |
You are missing the '\*' here:
```
>>> os.path.join('foo', *['a','b'])
'foo/a/b'
```
You have to use the star operator here in order to pass the list items as unpacked variable argument list to the method.
|
@user1833746 had the answer first, so if you accept any of these, accept that one :)
In addition to the unpacking, if you aren't going to use the resulting list, you can change the `[`'s to `(`'s to make it a true generator (as opposed to creating a list and then iterating through that). The `*` operator 'unpacks', meaning that the individual components of the item will be passed to the function. As you can see in the code below, `os.path.join` accepts 'two' arguments: `a` (the path name) and `*p` (an arbitrary number of path components). You can supply any number of additional path name arguments with this syntax (i.e. there are no fixed `path_component1`, `path_component2` variables). In your case, once you get your generator of values, you are 'unpacking' them into individual values (not a single `list` or `generator` object), which the `os.path.join` function then handles:
```
In [1]: import os
In [2]: os.path.join('/home/myname', *(str(x) for x in ('one', 'two', 'three')))
Out[2]: '/home/myname/one/two/three'
In [3]: os.path.join??
Type: function
Base Class: <type 'function'>
String Form: <function join at 0x7f4944c31a28>
Namespace: Interactive
File: /usr/lib/python2.6/posixpath.py
Definition: os.path.join(a, *p)
Source:
def join(a, *p):
"""Join two or more pathname components, inserting '/' as needed.
If any component is an absolute path, all previous path components
will be discarded."""
path = a
for b in p:
if b.startswith('/'):
path = b
elif path == '' or path.endswith('/'):
path += b
else:
path += '/' + b
return path
```
|
13,927,122
|
Trying to get this line of code to work, I keep running into issues no matter how I change the formatting around:
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
```
(year, month, day) can be either ints or strings.
Traceback:
```
Traceback (most recent call last):
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/toolbar.py", line 117, in toolbar_tween
response = _handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/panels/performance.py", line 55, in resource_timer_handler
result = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/tweens.py", line 20, in excview_tween
response = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/router.py", line 161, in handle_request
response = view_callable(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 342, in rendered_view
result = view(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 456, in _class_requestonly_view
response = getattr(inst, attr)()
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 56, in view_process
return self.handle_file_upload(self.request.params['file'], shareID)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 101, in handle_file_upload
self.save(file, newFileName, isImage, uploadTime)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 166, in save
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
File "/home/tinyup/dev/lib/python2.7/posixpath.py", line 66, in join
if b.startswith('/'):
AttributeError: 'list' object has no attribute 'startswith'
```
|
2012/12/18
|
[
"https://Stackoverflow.com/questions/13927122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247832/"
] |
@user1833746 had the answer first, so if you accept any of these, accept that one :)
In addition to the unpacking, if you aren't going to use the resulting list, you can change the `[`'s to `(`'s to make it a true generator (as opposed to creating a list and then iterating through it). The `*` operator 'unpacks', meaning that the individual components of the item will be passed to the function. As you can see in the code below, `os.path.join` accepts 'two' arguments: `a` (the path name) and `*p` (an arbitrary number of path components). You can supply any number of additional path name arguments with this syntax (i.e. there are no fixed `path_component1`, `path_component2` variables). In your case, once you have your generator of values, you 'unpack' them into individual values (not a single `list` or `generator` object), which the `os.path.join` function then handles:
```
In [1]: import os
In [2]: os.path.join('/home/myname', *(str(x) for x in ('one', 'two', 'three')))
Out[2]: '/home/myname/one/two/three'
In [3]: os.path.join??
Type: function
Base Class: <type 'function'>
String Form: <function join at 0x7f4944c31a28>
Namespace: Interactive
File: /usr/lib/python2.6/posixpath.py
Definition: os.path.join(a, *p)
Source:
def join(a, *p):
"""Join two or more pathname components, inserting '/' as needed.
If any component is an absolute path, all previous path components
will be discarded."""
path = a
for b in p:
if b.startswith('/'):
path = b
elif path == '' or path.endswith('/'):
path += b
else:
path += '/' + b
return path
```
|
Add a \* before `[str(x) for x in [year, month, day]]` so that the list is unpacked into separate arguments:
`*[str(x) for x in [year, month, day]]`
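A small sketch with made-up values in place of the question's settings:
```
import os

year, month, day = 2012, 12, 18                   # hypothetical values
parts = [str(x) for x in [year, month, day]]

# Without the star the whole list is passed as one argument and join blows up
# inside posixpath (the AttributeError from the traceback).
# With the star each element becomes its own argument:
print(os.path.join('/srv/files', *parts))         # /srv/files/2012/12/18
```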
|
13,927,122
|
Trying to get this line of code to work, I keep running into issues no matter how I change the formatting around:
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
```
(year, month, day) can be either ints or strings.
Traceback:
```
Traceback (most recent call last):
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/toolbar.py", line 117, in toolbar_tween
response = _handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/panels/performance.py", line 55, in resource_timer_handler
result = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/tweens.py", line 20, in excview_tween
response = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/router.py", line 161, in handle_request
response = view_callable(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 342, in rendered_view
result = view(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 456, in _class_requestonly_view
response = getattr(inst, attr)()
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 56, in view_process
return self.handle_file_upload(self.request.params['file'], shareID)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 101, in handle_file_upload
self.save(file, newFileName, isImage, uploadTime)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 166, in save
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
File "/home/tinyup/dev/lib/python2.7/posixpath.py", line 66, in join
if b.startswith('/'):
AttributeError: 'list' object has no attribute 'startswith'
```
|
2012/12/18
|
[
"https://Stackoverflow.com/questions/13927122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247832/"
] |
@user1833746 had the answer first, so if you accept any of these, accept that one :)
In addition to the unpacking, if you aren't going to use the resulting list, you can change the `[`'s to `(`'s to make it a true generator (as opposed to creating a list and then iterating through it). The `*` operator 'unpacks', meaning that the individual components of the item will be passed to the function. As you can see in the code below, `os.path.join` accepts 'two' arguments: `a` (the path name) and `*p` (an arbitrary number of path components). You can supply any number of additional path name arguments with this syntax (i.e. there are no fixed `path_component1`, `path_component2` variables). In your case, once you have your generator of values, you 'unpack' them into individual values (not a single `list` or `generator` object), which the `os.path.join` function then handles:
```
In [1]: import os
In [2]: os.path.join('/home/myname', *(str(x) for x in ('one', 'two', 'three')))
Out[2]: '/home/myname/one/two/three'
In [3]: os.path.join??
Type: function
Base Class: <type 'function'>
String Form: <function join at 0x7f4944c31a28>
Namespace: Interactive
File: /usr/lib/python2.6/posixpath.py
Definition: os.path.join(a, *p)
Source:
def join(a, *p):
"""Join two or more pathname components, inserting '/' as needed.
If any component is an absolute path, all previous path components
will be discarded."""
path = a
for b in p:
if b.startswith('/'):
path = b
elif path == '' or path.endswith('/'):
path += b
else:
path += '/' + b
return path
```
|
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, str(year), str(month), str(day))):
```
For readability:
```
fname = os.path.join(IncludeSettings.FILE_URL, str(year), str(month), str(day))
if not os.path.exists(fname):
```
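If the existence check is there so the directory can be created when it is missing, one common follow-up (a sketch, assuming that is the intent) is:
```
import os

fname = os.path.join('/srv/files', str(2012), str(12), str(18))   # placeholder base path and date
if not os.path.exists(fname):
    os.makedirs(fname)    # creates the intermediate directories as needed
```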
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
I got the same error, but in my case I was subtracting a dict key from a dict value. I fixed it by subtracting the dict value for the corresponding key from the other dict value.
```
cosine_sim = cosine_similarity(e_b-e_a, w-e_c)
```
Here I got the error because e\_b, e\_a and e\_c are embedding vectors for the words a, b and c respectively, while 'w' is a string. Once I realised 'w' was a string, I fixed it with the following line:
```
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w]-e_c)
```
Instead of subtracting the dict key, I now subtract the corresponding value for that key.
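A minimal sketch of the same failure and fix (the vectors and lookup table here are made up, not the real embeddings):
```
import numpy as np

e_a = np.array([0.1, 0.2, 0.3])
w = 'queen'                                        # a string where a vector was expected

# e_a - w                                          # typically raises the same 'ufunc subtract' TypeError
word_to_vec_map = {'queen': np.array([0.4, 0.5, 0.6])}
print(word_to_vec_map[w] - e_a)                    # vector minus vector works
```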
|
I ran into the same issue, but in my case a plain Python list was being used instead of a NumPy array. Using two NumPy arrays solved the issue for me.
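For example (a tiny sketch):
```
import numpy as np

xs = [1, 2, 3]                        # plain Python lists
ys = [0, 1, 2]
# xs - ys                             # TypeError: unsupported operand type(s) for -: 'list' and 'list'

print(np.array(xs) - np.array(ys))    # [1 1 1] -- works once both sides are NumPy arrays
```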
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
I got the same error, but in my case I was subtracting a dict key from a dict value. I fixed it by subtracting the dict value for the corresponding key from the other dict value.
```
cosine_sim = cosine_similarity(e_b-e_a, w-e_c)
```
Here I got the error because e\_b, e\_a and e\_c are embedding vectors for the words a, b and c respectively, while 'w' is a string. Once I realised 'w' was a string, I fixed it with the following line:
```
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w]-e_c)
```
Instead of subtracting the dict key, I now subtract the corresponding value for that key.
|
I had a similar issue where an integer in a row of a DataFrame I was iterating over was of type `numpy.int64`. I got the
>
> `TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')`
>
>
>
error when trying to subtract a float from it.
The easiest fix for me was to convert the row using `pd.to_numeric(row)`.
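For example (a sketch with made-up data):
```
import pandas as pd

df = pd.DataFrame({'count': ['1', '2', '3']})     # values stored as strings/objects
for _, row in df.iterrows():
    row = pd.to_numeric(row)                      # convert the whole row to numbers
    print(row['count'] - 0.5)                     # subtraction now works
```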
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
I had a similar issue where an integer in a row of a DataFrame I was iterating over was of type `numpy.int64`. I got the
>
> `TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')`
>
>
>
error when trying to subtract a float from it.
The easiest fix for me was to convert the row using `pd.to_numeric(row)`.
|
I am fairly new to this myself, but I had a similar error and found that it is due to a type casting issue. I was trying to concatenate rather than take the difference but I think the principle is the same here. I provided a similar answer on another [question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop/36998261#36998261) so I hope that is OK.
In essence you need a different data type cast; in my case I needed str, not float, and I suspect yours is the same, so my suggested solution is below. I am sorry I cannot test it before suggesting it, as I am unclear from your example exactly what you were doing.
```
return diff(str(a[slice1])-str(a[slice2]), n-1, axis=axis)
```
Please see my example code below for the fix to my code, the change occurs on the third to last line. The code is to produce a basic random forest model.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
This leads to the following error:
```
Traceback (most recent call last):
File "min_example.py", line 40, in <module>
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')
```
The solution is to make each variable a str() type on the third-to-last line and then write to file. No other changes to the code have been made from the above.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(str(RFpreds[i])+",,"+str(yTest[i])+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
These examples are from a larger code so I hope the examples are clear enough.
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
I had a similar issue where an integer in a row of a DataFrame I was iterating over was of type `numpy.int64`. I got the
>
> `TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')`
>
>
>
error when trying to subtract a float from it.
The easiest fix for me was to convert the row using `pd.to_numeric(row)`.
|
I think @James is right. I got stuck on the same error while working with polyval(). The solution is to use the same type for all variables; you can typecast to bring them to the same type.
Below is an example:
```
import numpy
P = numpy.array(input().split(), float)
x = float(input())
print(numpy.polyval(P,x))
```
Here I used float as the output type, so even if the user inputs an int value (a whole number), the final answer will be typecast to float.
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
I am fairly new to this myself, but I had a similar error and found that it is due to a type casting issue. I was trying to concatenate rather than take the difference but I think the principle is the same here. I provided a similar answer on another [question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop/36998261#36998261) so I hope that is OK.
In essence you need a different data type cast; in my case I needed str, not float, and I suspect yours is the same, so my suggested solution is below. I am sorry I cannot test it before suggesting it, as I am unclear from your example exactly what you were doing.
```
return diff(str(a[slice1])-str(a[slice2]), n-1, axis=axis)
```
Please see my example code below for the fix to my code, the change occurs on the third to last line. The code is to produce a basic random forest model.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
This leads to the following error:
```
Traceback (most recent call last):
File "min_example.py", line 40, in <module>
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')
```
The solution is to make each variable a str() type on the third-to-last line and then write to file. No other changes to the code have been made from the above.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(str(RFpreds[i])+",,"+str(yTest[i])+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
These examples are from a larger code so I hope the examples are clear enough.
|
I ran into the same issue, but in my case a plain Python list was being used instead of a NumPy array. Using two NumPy arrays solved the issue for me.
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
Why is it applying `diff` to an array of strings?
I get an error at the same point, though with a different message:
```
In [23]: a=np.array([u'A' u'B' u'C' u'D' u'E'])
In [24]: np.diff(a)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-9d5a62fc3ff0> in <module>()
----> 1 np.diff(a)
C:\Users\paul\AppData\Local\Enthought\Canopy\User\lib\site-packages\numpy\lib\function_base.pyc in diff(a, n, axis)
1112 return diff(a[slice1]-a[slice2], n-1, axis=axis)
1113 else:
-> 1114 return a[slice1]-a[slice2]
1115
1116
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'numpy.ndarray'
```
Is this `a` array the `bins` parameter? What do the docs say `bins` should be?
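One way to see the same failure outside the test suite, plus a workaround if the strings are really category labels rather than numeric data (a sketch, not necessarily what the original suite intended):
```
from collections import Counter
import matplotlib
matplotlib.use('Agg')                  # headless backend for this sketch
import matplotlib.pyplot as plt

data = [u'A', u'B', u'B', u'C', u'E']

# plt.hist(data) hands the strings to np.histogram/np.diff and fails as above.
# For labels, count and bar-plot instead of histogramming:
counts = Counter(data)
plt.bar(range(len(counts)), list(counts.values()))
plt.xticks(range(len(counts)), list(counts.keys()))
plt.savefig('labels.png')
```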
|
I had a similar issue where an integer in a row of a DataFrame I was iterating over was of type `numpy.int64`. I got the
>
> `TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')`
>
>
>
error when trying to subtract a float from it.
The easiest fix for me was to convert the row using `pd.to_numeric(row)`.
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
I am fairly new to this myself, but I had a similar error and found that it is due to a type casting issue. I was trying to concatenate rather than take the difference but I think the principle is the same here. I provided a similar answer on another [question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop/36998261#36998261) so I hope that is OK.
In essence you need a different data type cast; in my case I needed str, not float, and I suspect yours is the same, so my suggested solution is below. I am sorry I cannot test it before suggesting it, as I am unclear from your example exactly what you were doing.
```
return diff(str(a[slice1])-str(a[slice2]), n-1, axis=axis)
```
Please see my example code below for the fix to my code, the change occurs on the third to last line. The code is to produce a basic random forest model.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
This leads to the following error:
```
Traceback (most recent call last):
File "min_example.py", line 40, in <module>
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')
```
The solution is to make each variable a str() type on the third-to-last line and then write to file. No other changes to the code have been made from the above.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(str(RFpreds[i])+",,"+str(yTest[i])+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
These examples are from a larger code so I hope the examples are clear enough.
|
I think @James is right. I got stuck on the same error while working with polyval(). The solution is to use the same type for all variables; you can typecast to bring them to the same type.
Below is an example:
```
import numpy
P = numpy.array(input().split(), float)
x = float(input())
print(numpy.polyval(P,x))
```
Here I used float as the output type, so even if the user inputs an int value (a whole number), the final answer will be typecast to float.
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
I got the same error, but in my case I was subtracting a dict key from a dict value. I fixed it by subtracting the dict value for the corresponding key from the other dict value.
```
cosine_sim = cosine_similarity(e_b-e_a, w-e_c)
```
Here I got the error because e\_b, e\_a and e\_c are embedding vectors for the words a, b and c respectively, while 'w' is a string. Once I realised 'w' was a string, I fixed it with the following line:
```
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w]-e_c)
```
Instead of subtracting the dict key, I now subtract the corresponding value for that key.
|
I am fairly new to this myself, but I had a similar error and found that it is due to a type casting issue. I was trying to concatenate rather than take the difference but I think the principle is the same here. I provided a similar answer on another [question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop/36998261#36998261) so I hope that is OK.
In essence you need a different data type cast; in my case I needed str, not float, and I suspect yours is the same, so my suggested solution is below. I am sorry I cannot test it before suggesting it, as I am unclear from your example exactly what you were doing.
```
return diff(str(a[slice1])-str(a[slice2]), n-1, axis=axis)
```
Please see my example code below for the fix to my code, the change occurs on the third to last line. The code is to produce a basic random forest model.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
This leads to the following error:
```
Traceback (most recent call last):
File "min_example.py", line 40, in <module>
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')
```
The solution is to make each variable a str() type on the third-to-last line and then write to file. No other changes to the code have been made from the above.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(str(RFpreds[i])+",,"+str(yTest[i])+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
These examples are from a larger code so I hope the examples are clear enough.
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
Why is it applying `diff` to an array of strings?
I get an error at the same point, though with a different message:
```
In [23]: a=np.array([u'A' u'B' u'C' u'D' u'E'])
In [24]: np.diff(a)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-9d5a62fc3ff0> in <module>()
----> 1 np.diff(a)
C:\Users\paul\AppData\Local\Enthought\Canopy\User\lib\site-packages\numpy\lib\function_base.pyc in diff(a, n, axis)
1112 return diff(a[slice1]-a[slice2], n-1, axis=axis)
1113 else:
-> 1114 return a[slice1]-a[slice2]
1115
1116
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'numpy.ndarray'
```
Is this `a` array the `bins` parameter? What do the docs say `bins` should be?
|
I am fairly new to this myself, but I had a similar error and found that it is due to a type casting issue. I was trying to concatenate rather than take the difference but I think the principle is the same here. I provided a similar answer on another [question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop/36998261#36998261) so I hope that is OK.
In essence you need a different data type cast; in my case I needed str, not float, and I suspect yours is the same, so my suggested solution is below. I am sorry I cannot test it before suggesting it, as I am unclear from your example exactly what you were doing.
```
return diff(str(a[slice1])-str(a[slice2]), n-1, axis=axis)
```
Please see my example code below for the fix to my code, the change occurs on the third to last line. The code is to produce a basic random forest model.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
This leads to the following error:
```
Traceback (most recent call last):
File "min_example.py", line 40, in <module>
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')
```
The solution is to make each variable a str() type on the third-to-last line and then write to file. No other changes to the code have been made from the above.
```
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation
Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean()) # replace the NA values with the mean of the descriptor
header = Data.columns.values # Use the column headers as the descriptor labels
Data.head()
test_name = "Test.csv"
npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1] .astype(float)
X = preprocessing.scale(X)
XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X,y, random_state=0)
# Predictions results initialised
RFpredictions = []
RF = RandomForestRegressor(n_estimators = 10, max_features = 5, max_depth = 5, random_state=0)
RF.fit(XTrain, yTrain) # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain,yTrain))
RFpreds = RF.predict(XTest)
with open(test_name,'a') as fpred :
lenpredictions = len(RFpreds)
lentrue = yTest.shape[0]
if lenpredictions == lentrue :
fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
for i in range(0,lenpredictions) :
fpred.write(str(RFpreds[i])+",,"+str(yTest[i])+",\n")
else :
print "ERROR - names, prediction and true value array size mismatch."
```
These examples are from a larger code so I hope the examples are clear enough.
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
Why is it applying `diff` to an array of strings?
I get an error at the same point, though with a different message:
```
In [23]: a=np.array([u'A' u'B' u'C' u'D' u'E'])
In [24]: np.diff(a)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-9d5a62fc3ff0> in <module>()
----> 1 np.diff(a)
C:\Users\paul\AppData\Local\Enthought\Canopy\User\lib\site-packages\numpy\lib\function_base.pyc in diff(a, n, axis)
1112 return diff(a[slice1]-a[slice2], n-1, axis=axis)
1113 else:
-> 1114 return a[slice1]-a[slice2]
1115
1116
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'numpy.ndarray'
```
Is this `a` array the `bins` parameter? What do the docs say `bins` should be?
|
I think @James is right. I got stuck on the same error while working with polyval(). The solution is to use the same type for all variables; you can typecast to bring them to the same type.
Below is an example:
```
import numpy
P = numpy.array(input().split(), float)
x = float(input())
print(numpy.polyval(P,x))
```
Here I used float as the output type, so even if the user inputs an int value (a whole number), the final answer will be typecast to float.
|
18,624,148
|
I'm struggling to find documentation on what the ^ does in python.
EX.
> 6^1 = 7
> 6^2 = 4
> 6^3 = 5
> 6^4 = 2
> 6^5 = 3
> 6^6 = 0
Help?
|
2013/09/04
|
[
"https://Stackoverflow.com/questions/18624148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2748552/"
] |
It is the [bitwise exclusive-or operator](https://en.wikipedia.org/wiki/Xor#Bitwise_operation), often called "xor". For each pair of corresponding bits in the operands, the corresponding bit in the result is 0 if the operand bits are the same, 1 if they are different.
Consider `6^4`:
```
6 = 0b0110
4 = 0b0100
6^4 = 0b0010 = 2
```
As you can see the least-significant bit (the one on the right, in the "one's" place) is zero in both numbers. Thus the least-significant bit in the answer is zero. The next bit is `1` in the first operand and `0` in the second, so the result is `1`.
XOR has some interesting properties:
```
a^b == b^a # xor is commutative
a^(b^c) == (a^b)^c # xor is associative
(a^b)^b == a # xor is reversible
0^a == a # 0 is the identity value
a^a == 0 # xor yourself and you go away.
```
You can change the oddness of a value with xor:
```
prev_even = odd ^ 1 (2 = 3 ^ 1)
next_odd = even ^ 1 (3 = 2 ^ 1)
```
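Working through the values from the question (and, in case exponentiation was what you were after, that is `**` in Python):
```
for n in range(1, 7):
    print('6 ^ %d = %d   (0b110 xor %s)' % (n, 6 ^ n, bin(n)))
# 6 ^ 1 = 7, 6 ^ 2 = 4, 6 ^ 3 = 5, 6 ^ 4 = 2, 6 ^ 5 = 3, 6 ^ 6 = 0

print(6 ** 2)   # 36 -- exponentiation uses **, not ^
```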
|
for more information on XOR , please react the documentation on Python.org at here:
<http://docs.python.org/2/library/operator.html>
|
38,943,673
|
I am new to python and I am working on a project that needs to write a dictionary into a text file. The format is like:
```
{'17': [('25', 5), ('23', 3)], '12': [('28', 3), ('22', 3)], '13': [('28', 3), ('23', 3)], '16': [('22', 3), ('21', 3)], '11': [('28', 3), ('29', 1)], '14': [('22', 3), ('23', 3)], '15': [('26', 2), ('24', 2)]}.
```
as you can see, the values in the dictionary are always lists. I would like to write the below into the text file:
17, 25, 5 \n
17, 23, 3 \n
12, 28, 3 \n
12, 22, 3 \n
13, 28, 3 \n
13, 23, 3 \n
...
\n stands for a new line
Which means the keys are to be repeated for each value inside the list that 'belongs' to those keys. The reason is that I need to read the text file back into a database for further analysis.
I have been searching for an answer for the past few days and have tried many ways, but I just cannot get it into this format. I'd appreciate it if any of you have a solution for this.
Thanks a lot!
|
2016/08/14
|
[
"https://Stackoverflow.com/questions/38943673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6715128/"
] |
You can achieve this using separate routes, or change your parameters to be optional.
When using 3 attributes, you add separate routes for each of the options that you have - when no parameters are specified, when only `movieId` is specified, and when all 3 parameters are specified.
```
[Route("Everything/MovieCustomer/")]
[Route("Everything/MovieCustomer/{movieId}")]
[Route("Everything/MovieCustomer/{movieId}/{customerId}")]
public ActionResult MovieCustomer(int? movieId, int? customerId)
{
// the rest of the code
}
```
Alternatively, you can change your route parameters to optional (by adding `?` in the route definition), and this should cover all 3 cases that you have:
```
[Route("Everything/MovieCustomer/{movieId?}/{customerId?}")]
public ActionResult MovieCustomer(int? movieId, int? customerId)
{
// the rest of the code
}
```
Keep in mind that neither sample supports the case where you provide only `customerId`.
|
>
> Keep in mind that neither sample supports the case where you provide only customerId.
>
>
>
Check it out. I think you can use the multiple route method with EVEN ANOTHER route like this if you do want to provide only customerId:
```
[Route("Everything/MovieCustomer/null/{customerId}")]
```
|
38,943,673
|
I am new to python and I am working on a project that needs to write a dictionary into a text file. The format is like:
```
{'17': [('25', 5), ('23', 3)], '12': [('28', 3), ('22', 3)], '13': [('28', 3), ('23', 3)], '16': [('22', 3), ('21', 3)], '11': [('28', 3), ('29', 1)], '14': [('22', 3), ('23', 3)], '15': [('26', 2), ('24', 2)]}.
```
as you can see, the values in the dictionary are always lists. I would like to write the below into the text file:
17, 25, 5 \n
17, 23, 3 \n
12, 28, 3 \n
12, 22, 3 \n
13, 28, 3 \n
13, 23, 3 \n
...
\n stands for a new line
Which means the keys are to be repeated for each value inside the list that 'belongs' to those keys. The reason is that I need to read the text file back into a database for further analysis.
I have been searching for an answer for the past few days and have tried many ways, but I just cannot get it into this format. I'd appreciate it if any of you have a solution for this.
Thanks a lot!
|
2016/08/14
|
[
"https://Stackoverflow.com/questions/38943673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6715128/"
] |
You can achieve this using separate routes, or change your parameters to be optional.
When using 3 attributes, you add separate routes for each of the options that you have - when no parameters are specified, when only `movieId` is specified, and when all 3 parameters are specified.
```
[Route("Everything/MovieCustomer/")]
[Route("Everything/MovieCustomer/{movieId}")]
[Route("Everything/MovieCustomer/{movieId}/{customerId}")]
public ActionResult MovieCustomer(int? movieId, int? customerId)
{
// the rest of the code
}
```
Alternatively, you can change your route parameters to optional (by adding `?` in the route definition), and this should cover all 3 cases that you have:
```
[Route("Everything/MovieCustomer/{movieId?}/{customerId?}")]
public ActionResult MovieCustomer(int? movieId, int? customerId)
{
// the rest of the code
}
```
Keep in mind that neither sample supports the case where you provide only `customerId`.
|
Interestingly, I had to add optional parameters to the signature as well for it to work from the Angular client, like so:
```
[HttpGet]
[Route("IsFooBar/{movieId?}/{customerId?}")]
[Route("IsFooBar/null/{customerId?}")]
public bool IsFooBar(int? movieId = null, int? customerId = null)
{
// the rest of the code
}
```
In Angular
```
public IsFoobar(movieId: number | null, customerId: number | null): Observable<boolean> {
return this.httpService.get<boolean>(`api/IsFooBar/${movieId}/${customerId}`);
}
```
|
47,117,625
|
I want to split any matrix (most likely it will be a 3x4) into two parts: the left-hand part, and the right-hand part, which is only the last column.
```
[[1,0,0,4], [[1,0,0], [4,
[1,0,0,2], ---> A= [1,0,0], B = 2,
[4,3,1,6]] [4,3,1]] 6]
```
Is there a way to do this in Python and assign them as A and B?
Thank you!
|
2017/11/05
|
[
"https://Stackoverflow.com/questions/47117625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8606331/"
] |
Yes, you could do it like this:
```
def split_last_col(mat):
"""returns a tuple of two matrices corresponding
to the Left and Right parts"""
A = [line[:-1] for line in mat]
B = [line[-1] for line in mat]
return A, B
split_last_col([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```
### output:
```
([[1, 2], [4, 5], [7, 8]], [3, 6, 9])
```
|
You could create A and B manually, like this:
```
def split(matrix):
a = list()
b = list()
for row in matrix:
row_length = len(row)
a_row = list()
for index, col in enumerate(row):
if index == row_length - 1:
b.append(col)
else:
a_row.append(col)
a.append(a_row)
return a, b
```
Or using list comprehensions:
```
def split(matrix):
a = [row[:len(row) - 1] for row in matrix]
b = [row[len(row) - 1] for row in matrix]
return a, b
```
Example:
```
matrix = [
[1, 0, 0, 4],
[1, 0, 0, 2],
[4, 3, 1, 6]
]
a, b = split(matrix)
print("A: %s" % str(a)) # Output ==> A: [[1, 0, 0], [1, 0, 0], [4, 3, 1]]
print("B: %s" % str(b)) # Output ==> B: [4, 2, 6]
```
|
59,363,950
|
I'm trying to get started with Tensorflow-Hub to extract feature vectors from images. However, I'm not sure how one is meant to convert Tensorflow-Hub outputs (Tensors) to numpy vectors. Here's a simple example:
```
from keras.preprocessing.image import load_img
import tensorflow_hub as hub
import tensorflow as tf
import numpy as np
im = load_img('sample.png')
im = np.expand_dims(im.resize((299,299)), 0)
module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1")
out = module(im)
o = np.add(out, 0)
type(o)
```
The [docs](https://www.tensorflow.org/tutorials/customization/basics) indicate that "NumPy operations automatically convert Tensors to NumPy ndarrays", but my `np.add()` call above returns object type `tensorflow.python.framework.ops.Tensor`. Does anyone know how I can obtain a numpy array from `out`? Any pointers would be appreciated!
**Versions**:
```
# output from `pip freeze | grep tensorflow`
tensorflow==1.14.0
tensorflow-estimator==1.14.0
tensorflow-hub==0.1.1
tensorflow-probability==0.6.0
```
|
2019/12/16
|
[
"https://Stackoverflow.com/questions/59363950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1727392/"
] |
The following should work. But I did not check if the output is meaningful. But it is returning consistent results over multiple runs.
```
im = load_img('sample.png')
im = np.expand_dims(im.resize((299,299)), 0)
module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1")
out = module(im)
with tf.Session() as sess:
tf.global_variables_initializer().run()
o = sess.run(out)
o = np.add(o, 0)
print(type(o))
```
|
You can use
```py
out.numpy()
type(out)
# <class 'tensorflow.python.framework.ops.EagerTensor'>
type(out.numpy())
# <class 'numpy.ndarray'>
```
|
44,805,535
|
I am an Anaconda user and Jupyter is a neat tool to run Python code. However, on my MacBook I can't open it in Chrome ("This page isn't working: localhost didn't send any data."), but it works in Safari. I have tried to reinstall Chrome, but I still can't fix it. My system is Mac OS 10.11.5.
Does anyone know how I can fix it?
I understand that the problem might not be specific enough, but I have been puzzled by it for quite a while.
|
2017/06/28
|
[
"https://Stackoverflow.com/questions/44805535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3013618/"
] |
You could change your approach to avoid fixed padding values:
```css
#secondary-menu {
background: #007dc5;
width: 80%;
}
ul#topnav {
list-style: none;
padding: 0;
height: 70px;
display: flex;
align-items: center;
justify-content: center;
}
ul#topnav li a {
text-transform: uppercase;
text-decoration: none;
color: #fff;
}
ul#topnav a.home:hover {
margin: 0px;
padding: 0px;
background: transparent;
border: 0px none;
}
a.cat,
ul#topnav li {
text-align: center;
cursor: pointer;
height: 100%;
display: flex;
align-items: center;
justify-content: center;
flex: 1;
}
ul#topnav li a:hover {
background-color: #e6e7eb;
color: #007dc5;
}
```
```html
<div id="secondary-menu">
<ul id="topnav">
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Long Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Another Long Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
</ul>
</div>
```
[fiddle](https://jsfiddle.net/z44eao79/1/)
|
Add `overflow:hidden` to your `ul#topnav` rules:
```
ul#topnav {
list-style: none;
margin: 0 auto;
height: 70px;
display: flex;
align-items: center;
overflow:hidden;
}
```
```css
#secondary-menu {
background: #007dc5;
width: 80%;
}
ul#topnav {
list-style: none;
margin: 0 auto;
height: 70px;
display: flex;
align-items: center;
overflow: hidden;
}
ul#topnav li a {
display: block;
text-transform: uppercase;
text-decoration: none;
color: #fff;
padding: 26px 20px;
}
ul#topnav a.home:hover {
margin: 0px;
padding: 0px;
background: transparent;
border: 0px none;
}
a.cat,
ul#topnav li {
text-align: center;
cursor: pointer;
}
ul#topnav li a:hover {
background-color: #e6e7eb;
color: #007dc5;
}
```
```html
<div id="secondary-menu">
<ul id="topnav">
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Long Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Another Long Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
</ul>
</div>
```
|
54,411,732
|
So far I'm able to print at the end if the user selects 'n' to not order another hard drive, but need to write to a file. I've tried running the code as 'python hdorders.py >> orders.txt', but it won't prompt for the questions; only shows a blank line and if I break out using Ctrl-C, it writes blank entries and while loops in the file. I hope this makes sense.
```
ui = raw_input("Would you like to order more hard drives?(y/n) ")
if ui == 'n':
print '\n','\n',"**** Order Summary ****",'\n',row,'\n',"Number of HD's:",b,'\n',"Disk Slot Position(s):",c,'\n',"Disk Size(s):",d,"GB",'\n',"Dimensions:",e,'\n','\n',
endFlag = True
```
I'd also like it so that if they select 'y', it will save to a file and start over for another disk order (saving the previous info to the file first). Then once they are done (for example going through the program twice) and select 'n', it will have the final details appended to the same file as the first order.
|
2019/01/28
|
[
"https://Stackoverflow.com/questions/54411732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9638138/"
] |
No. One team project has one process template. You can customize that process template however you wish, of course.
|
What you could do is create an inherited process in order to make your customizations, and change every Team Project to that Process.
You have to take into account that the customizations you have made to your Team Project could be affected when you change to an inherited process.
Test carefully with some test Team Projects before.
|
35,719,165
|
I have a python program with one main thread and let's say 2 other threads (or maybe even more, probably doesn't matter). I would like to let the main thread sleep until ONE of the other threads is finished. It's easy to do with polling (by calling t.join(1) and waiting for one second for every thread t).
Is it possible to do it without polling, just by
```
SOMETHING_LIKE_JOIN(1, [t1, t2])
```
where t1 and t2 are threading.Thread objects? The call must do the following: sleep 1 second, but wake up as soon as one of t1,t2 is finished. Quite similar to POSIX select(2) call with two file descriptors.
|
2016/03/01
|
[
"https://Stackoverflow.com/questions/35719165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2384856/"
] |
Here is an example of using condition object.
```
from threading import Thread, Condition, Lock
from time import sleep
from random import random
_lock = Lock()
def run(idx, condition):
sleep(random() * 3)
print('thread_%d is waiting for notifying main thread.' % idx)
_lock.acquire()
with condition:
print('thread_%d notifies main thread.' % idx)
condition.notify()
def is_working(thread_list):
for t in thread_list:
if t.is_alive():
return True
return False
def main():
condition = Condition(Lock())
thread_list = [Thread(target=run, kwargs={'idx': i, 'condition': condition}) for i in range(10)]
with condition:
with _lock:
for t in thread_list:
t.start()
while is_working(thread_list):
_lock.release()
if condition.wait(timeout=1):
print('do something')
sleep(1) # <-- Main thread is doing something.
else:
print('timeout')
for t in thread_list:
t.join()
if __name__ == '__main__':
main()
```
I don't think there is a race condition as you described in the comment. The condition object contains a Lock. When the main thread is working (sleep(1) in the example), it holds the lock and no thread can notify it until it finishes its work and releases the lock.
---
I just realized that there is a race condition in the previous example. I added a global _lock to ensure the condition never notifies the main thread until the main thread starts waiting. I don't like how it works, but I haven't figured out a better solution...
|
You can create a Thread class and have the main thread keep a reference to it. Then you can check whether the thread has finished and easily make your main thread continue again.
If that doesn't help you, I suggest you look at the **Queue** library!
```
import threading
import time, random
#THREAD CLASS#
class Thread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.daemon = True
self.state = False
#START THREAD (THE RUN METHODE)#
self.start()
#THAT IS WHAT THE THREAD ACTUALLY DOES#
def run(self):
#THREAD SLEEPS FOR A RANDOM TIME RANGE#
time.sleep(random.randrange(5, 10))
#AFTERWARDS IS HAS FINISHED (STORE IN VARIABLE)#
self.state = True
#RETURNS THE STATE#
def getState(self):
return self.state
#10 SEPERATE THREADS#
threads = []
for i in range(10):
threads.append(Thread())
#MAIN THREAD#
while True:
#RUN THROUGH ALL THREADS AND CHECK FOR ITS STATE#
for i in range(len(threads)):
if threads[i].getState():
print "WAITING IS OVER: THREAD ", i
#SLEEPS ONE SECOND#
time.sleep(1)
```
|
35,719,165
|
I have a python program with one main thread and let's say 2 other threads (or maybe even more, probably doesn't matter). I would like to let the main thread sleep until ONE of the other threads is finished. It's easy to do with polling (by calling t.join(1) and waiting for one second for every thread t).
Is it possible to do it without polling, just by
```
SOMETHING_LIKE_JOIN(1, [t1, t2])
```
where t1 and t2 are threading.Thread objects? The call must do the following: sleep 1 second, but wake up as soon as one of t1,t2 is finished. Quite similar to POSIX select(2) call with two file descriptors.
|
2016/03/01
|
[
"https://Stackoverflow.com/questions/35719165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2384856/"
] |
One solution is to use a `multiprocessing.dummy.Pool`; `multiprocessing.dummy` provides an API almost identical to `multiprocessing`, but backed by threads, so it gets you a thread pool for free.
For example, you can do:
```
from multiprocessing.dummy import Pool as ThreadPool
pool = ThreadPool(2) # Two workers
for res in pool.imap_unordered(some_func, list_of_func_args):
# res is whatever some_func returned
```
[`multiprocessing.Pool.imap_unordered`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.imap_unordered) returns results as they become available, regardless of which task finishes first.
If you can use Python 3.2 or higher (or install the `concurrent.futures` PyPI module for older Python) you can generalize to disparate task functions by creating one or more `Future`s from a `ThreadPoolExecutor`, then using [`concurrent.futures.wait`](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.wait) with `return_when=FIRST_COMPLETED`, or using `concurrent.futures.as_completed` for similar effect.
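A minimal sketch of that `concurrent.futures` approach (the worker function here is a placeholder of my own, not from the question):
```
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import time, random

def task(n):                      # placeholder worker
    time.sleep(random.random() * 3)
    return n

with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(task, i) for i in range(2)]
    # Sleep at most 1 second, but wake up as soon as one future finishes.
    done, pending = wait(futures, timeout=1, return_when=FIRST_COMPLETED)
    print([f.result() for f in done])
```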
|
You can create a Thread class and have the main thread keep a reference to it. Then you can check whether the thread has finished and easily make your main thread continue again.
If that doesn't help you, I suggest you look at the **Queue** library!
```
import threading
import time, random
#THREAD CLASS#
class Thread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.daemon = True
self.state = False
#START THREAD (THE RUN METHODE)#
self.start()
#THAT IS WHAT THE THREAD ACTUALLY DOES#
def run(self):
#THREAD SLEEPS FOR A RANDOM TIME RANGE#
time.sleep(random.randrange(5, 10))
#AFTERWARDS IS HAS FINISHED (STORE IN VARIABLE)#
self.state = True
#RETURNS THE STATE#
def getState(self):
return self.state
#10 SEPERATE THREADS#
threads = []
for i in range(10):
threads.append(Thread())
#MAIN THREAD#
while True:
#RUN THROUGH ALL THREADS AND CHECK FOR ITS STATE#
for i in range(len(threads)):
if threads[i].getState():
print "WAITING IS OVER: THREAD ", i
#SLEEPS ONE SECOND#
time.sleep(1)
```
|
35,719,165
|
I have a python program with one main thread and let's say 2 other threads (or maybe even more, probably doesn't matter). I would like to let the main thread sleep until ONE of the other threads is finished. It's easy to do with polling (by calling t.join(1) and waiting for one second for every thread t).
Is it possible to do it without polling, just by
```
SOMETHING_LIKE_JOIN(1, [t1, t2])
```
where t1 and t2 are threading.Thread objects? The call must do the following: sleep 1 second, but wake up as soon as one of t1,t2 is finished. Quite similar to POSIX select(2) call with two file descriptors.
|
2016/03/01
|
[
"https://Stackoverflow.com/questions/35719165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2384856/"
] |
One solution is to use a `multiprocessing.dummy.Pool`; `multiprocessing.dummy` provides an API almost identical to `multiprocessing`, but backed by threads, so it gets you a thread pool for free.
For example, you can do:
```
from multiprocessing.dummy import Pool as ThreadPool
pool = ThreadPool(2) # Two workers
for res in pool.imap_unordered(some_func, list_of_func_args):
# res is whatever some_func returned
```
[`multiprocessing.Pool.imap_unordered`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.imap_unordered) returns results as they become available, regardless of which task finishes first.
If you can use Python 3.2 or higher (or install the `concurrent.futures` PyPI module for older Python) you can generalize to disparate task functions by creating one or more `Future`s from a `ThreadPoolExecutor`, then using [`concurrent.futures.wait`](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.wait) with `return_when=FIRST_COMPLETED`, or using `concurrent.futures.as_completed` for similar effect.
|
Here is an example of using condition object.
```
from threading import Thread, Condition, Lock
from time import sleep
from random import random
_lock = Lock()
def run(idx, condition):
sleep(random() * 3)
print('thread_%d is waiting for notifying main thread.' % idx)
_lock.acquire()
with condition:
print('thread_%d notifies main thread.' % idx)
condition.notify()
def is_working(thread_list):
for t in thread_list:
if t.is_alive():
return True
return False
def main():
condition = Condition(Lock())
thread_list = [Thread(target=run, kwargs={'idx': i, 'condition': condition}) for i in range(10)]
with condition:
with _lock:
for t in thread_list:
t.start()
while is_working(thread_list):
_lock.release()
if condition.wait(timeout=1):
print('do something')
sleep(1) # <-- Main thread is doing something.
else:
print('timeout')
for t in thread_list:
t.join()
if __name__ == '__main__':
main()
```
I don't think there is a race condition as you described in the comment. The condition object contains a Lock. When the main thread is working (sleep(1) in the example), it holds the lock and no thread can notify it until it finishes its work and releases the lock.
---
I just realized that there is a race condition in the previous example. I added a global _lock to ensure the condition never notifies the main thread until the main thread starts waiting. I don't like how it works, but I haven't figured out a better solution...
|
35,667,252
|
I have installed the Python 3.5 interpreter on my device (Windows).
Can anybody guide me through the process of using packages such as `SublimeREPL` to run it?
|
2016/02/27
|
[
"https://Stackoverflow.com/questions/35667252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5987890/"
] |
Yes, you can use any Python version you want to run programs from Sublime - you just need to define a new [build system](http://sublimetext.info/docs/en/reference/build_systems.html). Select **`Tools -> Build System -> New Build System`**, then delete its contents and replace it with:
```js
{
"cmd": ["C:/Python35/python.exe", "-u", "$file"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python"
}
```
assuming that `C:/Python35/python.exe` is the correct path. If `python.exe` resides someplace else, just put in the correct path, using forward slashes `/` instead of the Windows standard backward slashes `\`.
Save the file as `Packages/User/Python3.sublime-build`, where `Packages` is the folder opened by selecting **`Preferences -> Browse Packages...`** - Sublime should already automatically save it in the right directory. Now, there will be a **`Tools -> Build System -> Python3`** option that you can select for running files with Python 3.
For details on setting up SublimeREPL with Python 3, please follow the instructions in [my answer here](https://stackoverflow.com/a/20861527/1426065).
|
If you have installed Python 3 and SublimeREPL, you can try setting up key bindings with the correct path to the Python 3 executable.
```
[
{
"keys":["super+ctrl+r"],
"command": "repl_open",
"caption": "Python 3.6 - Open File",
"id": "repl_python",
"mnemonic": "p",
"args": {
"type": "subprocess",
"encoding": "utf8",
"cmd": ["The directory to your python3.6 file", "-i", "$file"],
"cwd": "$file_path",
"syntax": "Packages/Python/Python.tmLanguage",
"external_id": "python",
"extend_env": {"PYTHONIOENCODING": "utf-8"}
}
}
]
```
You can try by copying this code into your /Sublime Text 3/Preferences/Key Bindings/
Hope this helps!
|
36,142,393
|
In the terminal, after I enter the python interpreter I use `help('modules')` to see which modules are installed but Numpy, matplotlib and scipy are not listed.
When I try to import them, I get the following:
>
> ImportError: no module named xxx.
>
>
>
However, when I try to install these modules using `apt-get install xxx` I get a message saying:
>
> python-xxx is already the newest version.
>
>
>
Is it possible I somehow have two versions of python one with matplotlib, the other without it? Could this be linked to a separate problem I'm having with Spyder where the interpreter no longer works? See [here](https://stackoverflow.com/questions/36072477/interpreter-stopped-working-in-spyder).
I am using Python 2.7. When I run `which python` I get: `/usr/local/bin/python`.
When I run `/usr/local/bin/python` I get:
```
Python 2.7.9 (default, Mar 18 2016, 20:34:01)
[GCC 4.8.4] on linux2
```
When I run `dpkg -l spyder` I get:
```
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig- aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-============-============- =================================
ii spyder 2.3.0+dfsg-4 all python IDE for scientists (Python
```
|
2016/03/21
|
[
"https://Stackoverflow.com/questions/36142393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5255941/"
] |
You have a typo in the iframe rule - that might be the cause, since the absolute positioning won't work as expected:
```
iframe{
...
possition: absolute; ---> must be "position"
}
```
|
You have position spelled wrong in your CSS.
|
58,724,581
|
Below is my playbook, which has a variable `running_processes` containing a list of pids (one or more).
Next, I read the user ids for each of the pids. All good so far.
I then try to print the list of user ids in the `curr_user_ids` variable using the `debug` module, and that is when I get the error: 'dict object' has no attribute 'stdout_lines'
I was expecting `curr_user_ids` to contain one or more entries, as is evident from the output shared below.
```
- name: Get running processes list from remote host
shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
changed_when: false
register: running_processes
- name: Gather USER IDs from processes id before killing.
shell: "id -nu `cat /proc/{{ running_processes.stdout }}/loginuid`"
register: curr_user_ids
with_items: "{{ running_processes.stdout_lines }}"
- debug: msg="USER ID LIST HERE:{{ curr_user_ids.stdout }}"
with_items: "{{ curr_user_ids.stdout_lines }}"
TASK [Get running processes list from remote host] **********************************************************************************************************
task path: /app/wls/startstop.yml:22
ok: [10.9.9.111] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "cmd": "ps -few | grep java | grep -v grep | awk '{print $2}'", "delta": "0:00:00.166049", "end": "2019-11-06 11:49:42.298603", "rc": 0, "start": "2019-11-06 11:49:42.132554", "stderr": "", "stderr_lines": [], "stdout": "24032", "stdout_lines": ["24032"]}
TASK [Gather USER IDS of processes id before killing.] ******************************************************************************************************
task path: /app/wls/startstop.yml:59
changed: [10.9.9.111] => (item=24032) => {"ansible_loop_var": "item", "changed": true, "cmd": "id -nu `cat /proc/24032/loginuid`", "delta": "0:00:00.116639", "end": "2019-11-06 11:46:41.205843", "item": "24032", "rc": 0, "start": "2019-11-06 11:46:41.089204", "stderr": "", "stderr_lines": [], "stdout": "user1", "stdout_lines": ["user1"]}
TASK [debug] ************************************************************************************************************************************************
task path: /app/wls/startstop.yml:68
fatal: [10.9.9.111]: FAILED! => {"msg": "'dict object' has no attribute 'stdout_lines'"}
```
Can you please suggest why I am getting the error and how I can resolve it?
|
2019/11/06
|
[
"https://Stackoverflow.com/questions/58724581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11143113/"
] |
A few points to note on why your solution didn't work.
The task `Get running processes list from remote host` returns a newline-separated (`\n`) string, so you will need to process this and turn the output into a proper list object first.
The task `Gather USER IDs from processes id before killing.` returns a dictionary containing the key `results`, whose value is a list, so you will need to iterate over it and fetch the `stdout` value for each element.
This is how I solved it.
```
---
- hosts: "localhost"
gather_facts: true
become: true
tasks:
- name: Set default values
set_fact:
process_ids: []
user_names: []
- name: Get running processes list from remote host
shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
changed_when: false
register: running_processes
- name: Register a list of Process ids (Split newline from output before)
set_fact:
process_ids: "{{ running_processes.stdout.split('\n') }}"
- name: Gather USER IDs from processes id before killing.
shell: "id -nu `cat /proc/{{ item }}/loginuid`"
register: curr_user_ids
with_items: "{{ process_ids }}"
- name: Register a list of User names (Out of result from before)
set_fact:
user_names: "{{ user_names + [item.stdout] | unique }}"
when: item.rc == 0
with_items:
- "{{ curr_user_ids.results }}"
- name: Set unique entries in User names list
set_fact:
user_names: "{{ user_names | unique }}"
- name: DEBUG
debug:
msg: "{{ user_names }}"
```
|
The variable *curr_user_ids* registers the results of each iteration:
```
register: curr_user_ids
with_items: "{{ running_processes.stdout_lines }}"
```
The list of the results is stored in
```
curr_user_ids.results
```
Take a look at the variable
```
- debug:
var: curr_user_ids
```
and loop over the stdout_lines:
```
- debug:
var: item.stdout_lines
loop: "{{ curr_user_ids.results }}"
```
|
29,634,019
|
I'm not sure what I'm doing wrong here, and am hoping someone else has the same problem. I don't get any error, and my json matches what should be correct both on Jira's docs and jira-python questions online. My versions are valid Jira versions. I also have no problem doing this directly through the API, but we are re-writing everything to go through jira-python for cleanliness/ease of use.
This just completely clears the fixVersions field in Jira.
```
issue=jira.issue("TKT-100")
issue.update(fields={'fixVersions':[{'add': {'name': 'add_me'}},{'remove': {'name': 'remove_me'}}]})
```
I can add a new version to fixVersions using issue.add_field_value(), but this won't work, because I need to add and remove in one request for the history of the ticket.
```
issue.add_field_value('fixVersions', {'name': 'add_me'})
```
Any ideas?
|
2015/04/14
|
[
"https://Stackoverflow.com/questions/29634019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797963/"
] |
Here's a code example of how I got it working for anyone who comes across this later...
```
fixVersions = []
issue = jira.issue('issue_key')
for version in issue.fields.fixVersions:
if version.name != 'version_to_remove':
fixVersions.append({'name': version.name})
fixVersions.append({'name': 'version_to_add'})
issue.update(fields={'fixVersions': fixVersions})
```
|
I did it another way:
1. Create the version in the target project.
2. Update the ticket.
```
ver = jira.create_version(name='version_name', project='PROJECT_NAME')
issue = jira.issue('ISSUE_NUM')
issue.update(fields={'fixVersions': [{'name': ver.name}]})
```
In my case that worked.
|
29,634,019
|
I'm not sure what I'm doing wrong here, and am hoping someone else has the same problem. I don't get any error, and my json matches what should be correct both on Jira's docs and jira-python questions online. My versions are valid Jira versions. I also have no problem doing this directly through the API, but we are re-writing everything to go through jira-python for cleanliness/ease of use.
This just completely clears the fixVersions field in Jira.
```
issue=jira.issue("TKT-100")
issue.update(fields={'fixVersions':[{'add': {'name': 'add_me'}},{'remove': {'name': 'remove_me'}}]})
```
I can add a new version to fixVersions using issue.add_field_value(), but this won't work, because I need to add and remove in one request for the history of the ticket.
```
issue.add_field_value('fixVersions', {'name': 'add_me'})
```
Any ideas?
|
2015/04/14
|
[
"https://Stackoverflow.com/questions/29634019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797963/"
] |
Here's a code example of how I got it working for anyone who comes across this later...
```
fixVersions = []
issue = jira.issue('issue_key')
for version in issue.fields.fixVersions:
if version.name != 'version_to_remove':
fixVersions.append({'name': version.name})
fixVersions.append({'name': 'version_to_add'})
issue.update(fields={'fixVersions': fixVersions})
```
|
A slightly more Pythonic version of user797963's solution may look like this:
```
def change_fix_version(ticket, remove_versions=[], add_versions=[]):
fix_versions={version.name for version in ticket.fields.fixVersions}
fix_versions.difference_update(set(remove_versions))
fix_versions.update(set(add_versions))
ticket.update(fields={'fixVersions':fix_versions})
```
You would call it like that:
```
change_fix_version(jira.issue('my_issue'), remove_versions=['draft'], add_versions=['master', 'release'])
```
|
29,634,019
|
I'm not sure what I'm doing wrong here, and am hoping someone else has the same problem. I don't get any error, and my json matches what should be correct both on Jira's docs and jira-python questions online. My versions are valid Jira versions. I also have no problem doing this directly through the API, but we are re-writing everything to go through jira-python for cleanliness/ease of use.
This just completely clears the fixVersions field in Jira.
```
issue=jira.issue("TKT-100")
issue.update(fields={'fixVersions':[{'add': {'name': 'add_me'}},{'remove': {'name': 'remove_me'}}]})
```
I can add a new version to fixVersions using issue.add_field_value(), but this won't work, because I need to add and remove in one request for the history of the ticket.
```
issue.add_field_value('fixVersions', {'name': 'add_me'})
```
Any ideas?
|
2015/04/14
|
[
"https://Stackoverflow.com/questions/29634019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797963/"
] |
I did it another way:
1. Create the version in the target project.
2. Update the ticket.
```
ver = jira.create_version(name='version_name', project='PROJECT_NAME')
issue = jira.issue('ISSUE_NUM')
issue.update(fields={'fixVersions': [{'name': ver.name}]})
```
In my case that worked.
|
A slightly more Pythonic version of user797963's solution may look like this:
```
def change_fix_version(ticket, remove_versions=[], add_versions=[]):
fix_versions={version.name for version in ticket.fields.fixVersions}
fix_versions.difference_update(set(remove_versions))
fix_versions.update(set(add_versions))
ticket.update(fields={'fixVersions':fix_versions})
```
You would call it like that:
```
change_fix_version(jira.issue('my_issue'), remove_versions=['draft'], add_versions=['master', 'release'])
```
|
16,209,640
|
Here is a strange issue I'm facing with wxPython on Mac, although it works completely fine with wxPython on Windows 7. I'm trying to update a wx.StaticText label before and after time.sleep(), like this:
```
self.lblStatus = wx.StaticText(self, label="", pos=(180, 80))
self.lblStatus.SetLabel("Processing....")
time.sleep(10)
```
With the above code, the label "Processing...." does not become visible until time.sleep() completes its 10 seconds, i.e. SetLabel takes effect only after 10 seconds.
On Windows 7, wxPython works as expected, but on Mac I'm facing this issue.
|
2013/04/25
|
[
"https://Stackoverflow.com/questions/16209640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1324914/"
] |
I have never seen time.sleep() NOT block the GUI on Windows. The sleep function blocks wx's main loop, plain and simple. As JHolta mentioned, you can put the sleep into a thread and update the GUI from there, assuming you use a threadsafe method, such as wx.CallAfter, wx.CallLater or wx.PostEvent.
But if you just want to arbitrarily reset a label every now and then, I think using a wx.Timer() is much simpler.
* <http://www.blog.pythonlibrary.org/2009/08/25/wxpython-using-wx-timers/>
* <http://wiki.wxpython.org/Timer>
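A rough sketch of the timer approach, assuming a frame with a `lblStatus` StaticText like in the question (the class name and the 10-second delay are just illustrative):
```
import wx

class MyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="Timer demo")
        self.lblStatus = wx.StaticText(self, label="Processing....", pos=(180, 80))
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)
        self.timer.Start(10000, wx.TIMER_ONE_SHOT)  # fire once after 10 seconds

    def on_timer(self, event):
        self.lblStatus.SetLabel("Done")  # label updates without blocking the GUI

app = wx.App()
MyFrame().Show()
app.MainLoop()
```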
|
The wxPython GUI is a loop; to make a part of the code sleep without causing the GUI to sleep, you need to multithread.
I would write a function that calls a threaded function. This is a dirty example, but it should show you what needs to be done:
```
import wx
from threading import Thread
import time
from wx.lib.pubsub import setuparg1
from wx.lib.pubsub import pub as Publisher
class Example(wx.Frame):
def __init__(self, *args, **kw):
super(Example, self).__init__(*args, **kw)
self.SetTitle('This is a threaded thing')
self.st1 = wx.StaticText(self, label='', pos=(30, 10))
self.SetSize((250, 180))
self.Centre()
self.Show(True)
self.Bind(wx.EVT_MOVE, self.OnMove)
# call updateUI when the thread returns something
Publisher.subscribe(self.updateUI, "update")
def OnMove(self, evt):
''' On frame movement call thread '''
x, y = evt.GetPosition()
C = SomeClass()
C.runthread(x, y)
def updateUI(self, evt):
''' Update label '''
data = evt.data
self.st1.SetLabel(data)
class SomeClass:
def runthread(self, x,y):
''' Start a new thread '''
t = Thread(target=self._runthread, args=(x,y,))
t.start()
def _runthread(self, x,y):
''' this is a threaded function '''
wx.CallAfter(Publisher.sendMessage, "update", "processing...")
time.sleep(3)
wx.CallAfter(Publisher.sendMessage, "update", "%d, %d" % (x, y))
def main():
ex = wx.App()
Example(None)
ex.MainLoop()
if __name__ == '__main__':
main()
```
Now the thread is initialized as soon as you try to move the "frame"/window, and will return the current position of the window.
wx.CallAfter() is a thread-safe call to the GUI thread, and only sends the data when the GUI thread is ready to receive it.
The Publisher module simplifies the task of sending the data to the GUI thread.
I will suggest reading this: <http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/>
|
39,024,816
|
```
#####################################
# Portscan TCP #
# #
#####################################
# -*- coding: utf-8 -*-
#!/usr/bin/python3
import socket
ip = input("Enter the IP or address: ")
ports = []
count = 0
while count < 10:
    ports.append(int(input("Enter the port: ")))
    count += 1
for port in ports:
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.settimeout(0.05)
    code = client.connect_ex((ip, port))  # connects and returns the error message
    # Like connect(address), but returns an error indicator instead of raising an exception for errors
    if code == 0:  # 0 = Success
        print(str(port) + " -> Port open")
    else:
        print(str(port) + " -> Port closed")
print("Scan finished")
```
The Python script above is a TCP connect scan. How can I change it into a TCP SYN scan? How do I create a port scanner using the TCP SYN method?
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39024816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6693417/"
] |
As @Upsampled mentioned, you might use raw sockets (<https://en.wikipedia.org/>) as you only need a subset of the TCP protocol (send **SYN** and receive **RST-ACK** or **SYN-ACK**).
While coding something like <http://www.binarytides.com/raw-socket-programming-in-python-linux/>
could be a good exercise, I would also suggest considering <https://github.com/secdev/scapy>
>
> Scapy is a powerful Python-based interactive packet manipulation
> program and library.
>
>
>
Here's the code sample that already implements a simple port scanner
<http://pastebin.com/YCR3vp9B> and a detailed article on what it does:
<http://null-byte.wonderhowto.com/how-to/build-stealth-port-scanner-with-scapy-and-python-0164779/>
The code is a little bit ugly but it works — I've checked it from my local Ubuntu PC against my VPS.
Here's the most important code snippet (slightly adjusted to conform to PEP8):
```
# Generate Port Number
srcport = RandShort()
# Send SYN and receive RST-ACK or SYN-ACK
SYNACKpkt = sr1(IP(dst=target) /
TCP(sport=srcport, dport=port, flags="S"))
# Extract flags of received packet
pktflags = SYNACKpkt.getlayer(TCP).flags
if pktflags == SYNACK:
# port is open
pass
else:
# port is not open
# ...
pass
```
|
First, you will have to generate your own SYN packets using RAW sockets. You can find an example [here](http://www.binarytides.com/raw-socket-programming-in-python-linux/)
Second, you will need to listen for SYN-ACKs from the scanned host in order to determine which ports actually try to start the TCP handshake (SYN, SYN-ACK, ACK). You should be able to detect and parse the TCP header from the applications that respond. From that header you can determine the origin port and thus figure out that a listening application was there.
Also, if you implement this, you have basically made a SYN flood utility as well, because you will be creating a ton of half-open TCP connections.
|
59,159,462
|
I want to find the largest value in a JSON file, using python (so it would be a dictionary).
My JSON has this shape:
```
[{
"probability": 0.623514056,
"boundingBox": { "left": 36, "top": 1, "width": 403, "height": 95 }
},
{
"probability": 0.850905955,
"boundingBox": { "left": 42, "top": 200, "width": 412, "height": 90 }
},
{
"probability": 0.308903724,
"boundingBox": { "left": 79, "top": 309, "width": 690, "height": 125 }
}]
```
And I want to find the maximum and the minimum width. Doing 2 "for" loops would take a lot of time (since the JSON is larger than what is shown here). Is there an optimal way to do that, like max(*something*)?
So the output I would like would be:
```
Max Width: 690
Min Width: 403
```
|
2019/12/03
|
[
"https://Stackoverflow.com/questions/59159462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11476888/"
] |
The cleanest solution is probably:
```
widths = [d['boundingBox']['width'] for d in json_file]
min_value = min(widths)
max_value = max(widths)
```
However, `min` and `max` just use loops under the hood, which you mentioned may be slow. Test the above solution first, and if that is too slow for your needs, you can combine the loops into one:
```
min_value, max_value = float('inf'), float('-inf')
for d in json_file:
value = d['boundingBox']['width']
if value < min_value:
min_value = value
if value > max_value:
max_value = value
```
EDIT: Performance difference is negligible. Go with the first one.
```
Python 3.7.2 (default, Dec 29 2018, 06:19:36)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import timeit
>>> test = """\
... values = [v for v in x]
... min_value = min(values)
... max_value = max(values)
... """
>>> timeit.timeit(stmt=test, number=10000, setup="""import numpy as np; x = np.random.rand(10000)""")
7.404742785263807
>>> test2 = """\
... min_value, max_value = float('inf'), float('-inf')
... for v in x:
... value = v
... if value < min_value:
... min_value = value
... if value > max_value:
... max_value = value
... """
>>> timeit.timeit(stmt=test2, number=10000, setup="""import numpy as np; x = np.random.rand(10000)""")
7.252437709830701
```
|
That's fairly easy to do:
```
max_width = max(d["boundingBox"]["width"] for d in dicts)
min_width = min(d["boundingBox"]["width"] for d in dicts)
```
|
59,159,462
|
I want to find the largest value in a JSON file, using python (so it would be a dictionary).
My JSON has this shape:
```
[{
"probability": 0.623514056,
"boundingBox": { "left": 36, "top": 1, "width": 403, "height": 95 }
},
{
"probability": 0.850905955,
"boundingBox": { "left": 42, "top": 200, "width": 412, "height": 90 }
},
{
"probability": 0.308903724,
"boundingBox": { "left": 79, "top": 309, "width": 690, "height": 125 }
}]
```
And I want to find the maximum and the minimum width. Doing 2 "for" loops would take a lot of time (since the JSON is larger than what is shown here). Is there an optimal way to do that, like max(*something*)?
So the output I would like would be:
```
Max Width: 690
Min Width: 403
```
|
2019/12/03
|
[
"https://Stackoverflow.com/questions/59159462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11476888/"
] |
That's fairly easy to do:
```
max_width = max(d["boundingBox"]["width"] for d in dicts)
min_width = min(d["boundingBox"]["width"] for d in dicts)
```
|
I would use a lambda function
```
max(data, key=lambda d: d['boundingBox']['width'])
min(data, key=lambda d: d['boundingBox']['width'])
```
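Note that these return the whole matching entry, so you would index into the result to get the number itself, e.g. (assuming the list from the question is loaded into `data`):
```
widest = max(data, key=lambda d: d['boundingBox']['width'])
print(widest['boundingBox']['width'])  # 690 for the sample data
```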
|
59,159,462
|
I want to find the largest value in a JSON file, using python (so it would be a dictionary).
My JSON has this shape:
```
[{
"probability": 0.623514056,
"boundingBox": { "left": 36, "top": 1, "width": 403, "height": 95 }
},
{
"probability": 0.850905955,
"boundingBox": { "left": 42, "top": 200, "width": 412, "height": 90 }
},
{
"probability": 0.308903724,
"boundingBox": { "left": 79, "top": 309, "width": 690, "height": 125 }
}]
```
And I want to find the maximum and the minimum width. Doing 2 "for" loops would take a lot of time (since the JSON is larger than what is shown here). Is there an optimal way to do that, like max(*something*)?
So the output I would like would be:
```
Max Width: 690
Min Width: 403
```
|
2019/12/03
|
[
"https://Stackoverflow.com/questions/59159462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11476888/"
] |
The cleanest solution is probably:
```
widths = [d['boundingBox']['width'] for d in json_file]
min_value = min(widths)
max_value = max(widths)
```
However, `min` and `max` just use loops under the hood, which you mentioned may be slow. Test the above solution first, and if that is too slow for your needs, you can combine the loops into one:
```
min_value, max_value = float('inf'), float('-inf')
for d in json_file:
value = d['boundingBox']['width']
if value < min_value:
min_value = value
if value > max_value:
max_value = value
```
EDIT: Performance difference is negligible. Go with the first one.
```
Python 3.7.2 (default, Dec 29 2018, 06:19:36)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import timeit
>>> test = """\
... values = [v for v in x]
... min_value = min(values)
... max_value = max(values)
... """
>>> timeit.timeit(stmt=test, number=10000, setup="""import numpy as np; x = np.random.rand(10000)""")
7.404742785263807
>>> test2 = """\
... min_value, max_value = float('inf'), float('-inf')
... for v in x:
... value = v
... if value < min_value:
... min_value = value
... if value > max_value:
... max_value = value
... """
>>> timeit.timeit(stmt=test2, number=10000, setup="""import numpy as np; x = np.random.rand(10000)""")
7.252437709830701
```
|
I would use a lambda function
```
max(data, key=lambda d: d['boundingBox']['width'])
min(data, key=lambda d: d['boundingBox']['width'])
```
|
25,060,752
|
Okay, I got a file container that is a product of a web crawler, containing a lot of different file types; most, but not all, are HTML, XML, JPG, PNG, and PDF. Most of the container is HTML text, so I tried to open it with:
```
with open(fname) as f:
content = f.readlines()
```
which basically fails when I hit a PDF. The files are structured in a way such that every file is preceded by a little piece of meta information telling me what kind of file type follows.
Is there a method similar to `.readlines()` in Python to read such files line by line? I don't need the PDFs; I will ignore them anyway, I just want to skip them.
Thanks in advance
Edit:
Example File: [GDrive Link](https://drive.google.com/file/d/0BwukDl7gHHdBeWNtcWRucXdGQ3c/edit?usp=sharing)
|
2014/07/31
|
[
"https://Stackoverflow.com/questions/25060752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3749379/"
] |
You can [retrieve changes](http://msdn.microsoft.com/en-us/library/thc1eetk.aspx) in a `DataTable` using `GetChanges`.
So you can use this code with a `DataGridView`:
```
CType(YourDataGridView.DataSource, DataTable).GetChanges(DataRowState.Modified).Rows
```
|
I have come up with a working solution in C# where I account for a user editing the current cell and then performing a Save/Update without moving out of the edited row. The call to `GetChanges()` won't recognize the currently edited row due to its `RowState` still being marked as "`Unchanged`". I also make a call to move to the next row in case the user stayed on the current cell being edited, as `GetChanges()` won't touch the last edited cell.
```
//Move to previous cell of current edited row in case user did not move from last edited cell
dgvMyDataGridView.CurrentCell = dgvMyDataGridView.Rows[dgvMyDataGridView.CurrentCell.RowIndex].Cells[dgvMyDataGridView.CurrentCell.ColumnIndex - 1];
//Attempts to end the current edit of dgvMyDataGridView for row being edited
BindingContext[dgvMyDataGridView.DataSource, dgvMyDataGridView.DataMember.ToString()].EndCurrentEdit();
//Move to next row in case user did not move from last edited row
dgvMyDataGridView.CurrentCell = dgvMyDataGridView.Rows[dgvMyDataGridView.CurrentCell.RowIndex + 1].Cells[0];
//Get all row changes from embedded DataTable of DataGridView's DataSource
DataTable changedRows = ((DataTable)((BindingSource)dgvMyDataGridView.DataSource).DataSource).GetChanges();
foreach (DataRow row in changedRows.Rows)
{
//row["columnName"].ToString();
//row[0].ToString();
//row[1].ToString();
}
```
|
25,060,752
|
Okay, I got a file container that is a product of a web crawler, containing a lot of different file types; most, but not all, are HTML, XML, JPG, PNG, and PDF. Most of the container is HTML text, so I tried to open it with:
```
with open(fname) as f:
content = f.readlines()
```
which basically fails when I hit a PDF. The files are structured in a way such that every file is preceded by a little piece of meta information telling me what kind of file type follows.
Is there a method similar to `.readlines()` in Python to read such files line by line? I don't need the PDFs; I will ignore them anyway, I just want to skip them.
Thanks in advance
Edit:
Example File: [GDrive Link](https://drive.google.com/file/d/0BwukDl7gHHdBeWNtcWRucXdGQ3c/edit?usp=sharing)
|
2014/07/31
|
[
"https://Stackoverflow.com/questions/25060752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3749379/"
] |
You can [retrieve changes](http://msdn.microsoft.com/en-us/library/thc1eetk.aspx) in a `DataTable` using `GetChanges`.
So you can use this code with a `DataGridView`:
```
CType(YourDataGridView.DataSource, DataTable).GetChanges(DataRowState.Modified).Rows
```
|
Here is a simple way to get all rows which have been modified in a DataGridView using C#:
```
DataRowCollection modifiedRows = ((DataTable)YourGridView.DataSource).GetChanges(DataRowState.Modified).Rows;
```
|
25,060,752
|
Okay, I got a file container that is a product of a web crawler, containing a lot of different file types; most, but not all, are HTML, XML, JPG, PNG, and PDF. Most of the container is HTML text, so I tried to open it with:
```
with open(fname) as f:
content = f.readlines()
```
which basically fails when I hit a PDF. The files are structured in a way such that every file is preceded by a little piece of meta information telling me what kind of file type follows.
Is there a method similar to `.readlines()` in Python to read such files line by line? I don't need the PDFs; I will ignore them anyway, I just want to skip them.
Thanks in advance
Edit:
Example File: [GDrive Link](https://drive.google.com/file/d/0BwukDl7gHHdBeWNtcWRucXdGQ3c/edit?usp=sharing)
|
2014/07/31
|
[
"https://Stackoverflow.com/questions/25060752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3749379/"
] |
I have come up with a working solution in C# where I account for a user editing the current cell and then performing a Save/Update without moving out of the edited row. The call to `GetChanges()` won't recognize the currently edited row due to its `RowState` still being marked as "`Unchanged`". I also make a call to move to the next row in case the user stayed on the current cell being edited, as `GetChanges()` won't touch the last edited cell.
```
//Move to previous cell of current edited row in case user did not move from last edited cell
dgvMyDataGridView.CurrentCell = dgvMyDataGridView.Rows[dgvMyDataGridView.CurrentCell.RowIndex].Cells[dgvMyDataGridView.CurrentCell.ColumnIndex - 1];
//Attempts to end the current edit of dgvMyDataGridView for row being edited
BindingContext[dgvMyDataGridView.DataSource, dgvMyDataGridView.DataMember.ToString()].EndCurrentEdit();
//Move to next row in case user did not move from last edited row
dgvMyDataGridView.CurrentCell = dgvMyDataGridView.Rows[dgvMyDataGridView.CurrentCell.RowIndex + 1].Cells[0];
//Get all row changes from embedded DataTable of DataGridView's DataSource
DataTable changedRows = ((DataTable)((BindingSource)dgvMyDataGridView.DataSource).DataSource).GetChanges();
foreach (DataRow row in changedRows.Rows)
{
//row["columnName"].ToString();
//row[0].ToString();
//row[1].ToString();
}
```
|
Here is a simple way to get all rows which have been modified in a DataGridView using C#:
```
DataRowCollection modifiedRows = ((DataTable)YourGridView.DataSource).GetChanges(DataRowState.Modified).Rows;
```
|
13,452,761
|
I have a table with add/remove buttons; those buttons add and remove rows from the table, and the buttons are also added with each new row.
Here is what I have as HTML:
```
<table>
<tr>
<th>catalogue</th>
<th>date</th>
<th>add</th>
<th>remove</th>
</tr>
<- target row ->
<tr id="cat_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
</- target row ->
</table>
```
JavaScript:
```
$("#Add").click(function() {
$('#cat_row').after('<- target row with ->'); // this is only a notation to prevent repeatation
id++;
});
$("#Remove").click(function() {
$('#cat_'+id+'_row').remove();
id--;
});
```
Please note that after each addition of a new row the `id` is also changed; for example, here is the table after clicking the "Add" button **1 time**:
```
<table>
<tr>
<th>catalogue</th>
<th>date</th>
<th>add</th>
<th>remove</th>
</tr>
<tr id="cat_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
<tr id="cat_1_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
</table>
```
Now the *newly added buttons* have no actions; I must always click on the **original buttons** (add/remove).
After this, I want to make the **Remove button** remove **ONLY** the row it is clicked in;
for example, if I click the button in row 2, row 2 will be deleted.
---
**Info:
I use web2py 2.2.1 with python 2.7 with the last version of jQuery**
|
2012/11/19
|
[
"https://Stackoverflow.com/questions/13452761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/747201/"
] |
**SOLVED**
Because my service was running in a separate process, I had to add this flag when accessing shared preferences:
```
private final static int PREFERENCES_MODE = Context.MODE_MULTI_PROCESS;
```
and change it like this:
```
sharedPrefs = this.getSharedPreferences("preference name", PREFERENCES_MODE);
```
|
Ensure you write your data to shared preferences correctly, specifically you `commit()` your changes, [as docs say](http://developer.android.com/reference/android/content/SharedPreferences.Editor.html):
>
> All changes you make in an editor are batched, and not copied back to
> the original SharedPreferences until you call commit() or apply()
>
>
>
Here is example code:
```
SharedPreferences.Editor editor = mPrefs.edit();
editor.putBoolean( key, value );
editor.commit();
```
|
13,452,761
|
I have a table with add/remove buttons; those buttons add and remove rows from the table, and the buttons are also added with each new row.
Here is what I have as HTML:
```
<table>
<tr>
<th>catalogue</th>
<th>date</th>
<th>add</th>
<th>remove</th>
</tr>
<- target row ->
<tr id="cat_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
</- target row ->
</table>
```
JavaScript:
```
$("#Add").click(function() {
$('#cat_row').after('<- target row with ->'); // this is only a notation to prevent repeatation
id++;
});
$("#Remove").click(function() {
$('#cat_'+id+'_row').remove();
id--;
});
```
Please note that after each addition of a new row the `id` is also changed; for example, here is the table after clicking the "Add" button **1 time**:
```
<table>
<tr>
<th>catalogue</th>
<th>date</th>
<th>add</th>
<th>remove</th>
</tr>
<tr id="cat_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
<tr id="cat_1_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
</table>
```
Now the *newly added buttons* have no actions; I must always click the **original buttons** (add/remove).
After that, I want the **Remove button** to remove **ONLY** the row it was clicked in;
for example, if I click the button in row 2, row 2 will be deleted.
---
**Info:
I use web2py 2.2.1 with Python 2.7 and the latest version of jQuery**
|
2012/11/19
|
[
"https://Stackoverflow.com/questions/13452761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/747201/"
] |
**SOLVED**
Because my service was running in a separate process, I had to add this flag when accessing shared preferences:
```
private final static int PREFERENCES_MODE = Context.MODE_MULTI_PROCESS;
```
and change it like this:
```
sharedPrefs = this.getSharedPreferences("preference name", PREFERENCES_MODE);
```
|
I think the error is on the line
```
sharedPrefs = PreferenceManager.getDefaultSharedPreferences(this);
```
where you are passing 'this' from inside a thread. Can you change it to the application context?
|
16,844,182
|
This is my first time delving into web development in python. My only other experience is PHP, and I never used a framework before, so I'm finding this very intimidating and confusing.
I'm interested in learning CherryPy/Jinja2 to make a ZFS Monitor for my NAS. I've read through the basics of the docs on CherryPy/Jinja2 but I find that the samples are disjointed and too simplistic, I don't really understand how to make these 2 things "come together" gracefully.
Some questions I have:
1. Is there a simple tutorial that shows how to make CherryPy and Jinja2 work together nicely? I'm either finding samples that are too simple, like the samples in the CherryPy / Jinja2 docs, or way too complex. (example: <https://github.com/jovanbrakus/cherrypy-example>).
2. Is there a standardized or "expected" way to create web applications for CherryPy? (example: What should my directory structure look like? Is there a way to declare static things; is it even necessary?)
3. Does anyone have recommended literature for this or is the online documentation the best resource?
|
2013/05/30
|
[
"https://Stackoverflow.com/questions/16844182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2437919/"
] |
Congratulations on choosing Python, I'm sure you'll learn to love it as have I.
Regarding CherryPy, I'm not an expert, but was also in the same boat as you a few days ago and I'd agree that the tutorials are a little disjointed in parts.
For integrating Jinja2, as on their [doc page](http://docs.cherrypy.org/stable/progguide/choosingtemplate.html), the snippet of HTML should have specified that it is the template file and, as such, is saved at the path /templates/index.html. They also used variables that didn't match up between the template code sample and the controller sample.
The below is instead a complete working sample of a simple hello world using CherryPy and Jinja2
**/main.py:**
```
import cherrypy
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('templates'))


class Root:
    @cherrypy.expose
    def index(self):
        tmpl = env.get_template('index.html')
        return tmpl.render(salutation='Hello', target='World')


cherrypy.config.update({'server.socket_host': '127.0.0.1',
                        'server.socket_port': 8080,
                       })

cherrypy.quickstart(Root())
```
**/templates/index.html:**
```
<h1>{{ salutation }} {{ target }}</h1>
```
Then in your shell/command prompt, serve the app using:
```
python main.py
```
And in your browser you should be able to see it at `http://localhost:8080`
That hopefully helps you to connect Jinja2 templating to your CherryPy app. CherryPy really is a lightweight and very flexible framework, where you can choose many different ways to structure your code and file structures.
|
Application structure
=====================
First, about the standard directory structure of a project: there is none, as CherryPy doesn't mandate it, nor does it tell you what data layer, form validation or template engine to use. It's all up to you and your requirements. And of course, as much as this is great flexibility, it also causes some confusion for beginners. Here's how a close-to-real-world application directory structure may look.
```
.                            — Python virtual environment
└── website                  — cherryd to add this to sys.path, -P switch
    ├── application
    │   ├── controller.py    — request routing, model use
    │   ├── model.py         — data access, domain logic
    │   ├── view             — template
    │   │   ├── layout
    │   │   ├── page
    │   │   └── part
    │   └── __init__.py      — application bootstrap
    ├── public
    │   └── resource         — static
    │       ├── css
    │       ├── image
    │       └── js
    ├── config.py            — configuration, environments
    └── serve.py             — bootstrap call, cherryd to import this, -i switch
```
Then, standing in the root of the [virtual environment](http://docs.python-guide.org/en/latest/dev/virtualenvs/), you usually do the following to start CherryPy in the development environment. [`cherryd`](https://cherrypy.readthedocs.org/en/3.3.0/deployguide/cherryd.html) is CherryPy's suggested way of running an application.
```
. bin/activate
cherryd -i serve -P website
```
Templating
==========
Now let's look closer at the template directory and what it can look like.
```
.
├── layout
│   └── main.html
├── page
│   ├── index
│   │   └── index.html
│   ├── news
│   │   ├── list.html
│   │   └── show.html
│   ├── user
│   │   └── profile.html
│   └── error.html
└── part
    └── menu.html
```
To harness Jinja2's nice feature of [template inheritance](http://jinja.pocoo.org/docs/dev/templates/#template-inheritance), there are layouts which define the structure of a page, i.e. the slots that can be filled in by a particular page. You may have a layout for a website and a layout for email notifications. There's also a directory for parts, reusable snippets used across different pages. Now let's see the code that corresponds to the structure above.
I've also made the following available as [a runnable](http://runnable.com/VGnoo-FACl9KWyPE), which makes it easier to navigate the files; you can run it and play with it. The paths start with `.` as in the first section's tree.
*website/config.py*
```
# -*- coding: utf-8 -*-

import os

path = os.path.abspath(os.path.dirname(__file__))

config = {
    'global' : {
        'server.socket_host'      : '127.0.0.1',
        'server.socket_port'      : 8080,
        'server.thread_pool'      : 8,
        'engine.autoreload.on'    : False,
        'tools.trailing_slash.on' : False
    },
    '/resource' : {
        'tools.staticdir.on'  : True,
        'tools.staticdir.dir' : os.path.join(path, 'public', 'resource')
    }
}
```
*website/serve.py*
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from application import bootstrap

bootstrap()

# debugging purpose, e.g. run with PyDev debugger
if __name__ == '__main__':
    import cherrypy
    cherrypy.engine.signals.subscribe()
    cherrypy.engine.start()
    cherrypy.engine.block()
```
*website/application/\_\_init\_\_.py*
The notable part here is a CherryPy tool which helps avoid boilerplate related to rendering templates. You just need to return a `dict` from a CherryPy page handler with data for the template. Following the convention-over-configuration principle, when the tool is not given a template name it will use `classname/methodname.html`, e.g. `user/profile.html`. To override the default template you can use `@cherrypy.tools.template(name = 'other/name')`. Also note that the tool exposes a method automatically, so you don't need to add `@cherrypy.expose` on top.
```
# -*- coding: utf-8 -*-

import os
import types

import cherrypy
import jinja2

import config


class TemplateTool(cherrypy.Tool):

    _engine = None
    '''Jinja environment instance'''

    def __init__(self):
        viewLoader = jinja2.FileSystemLoader(os.path.join(config.path, 'application', 'view'))
        self._engine = jinja2.Environment(loader = viewLoader)

        cherrypy.Tool.__init__(self, 'before_handler', self.render)

    def __call__(self, *args, **kwargs):
        if args and isinstance(args[0], (types.FunctionType, types.MethodType)):
            # @template
            args[0].exposed = True
            return cherrypy.Tool.__call__(self, **kwargs)(args[0])
        else:
            # @template()
            def wrap(f):
                f.exposed = True
                return cherrypy.Tool.__call__(self, *args, **kwargs)(f)
            return wrap

    def render(self, name = None):
        cherrypy.request.config['template'] = name
        handler = cherrypy.serving.request.handler
        def wrap(*args, **kwargs):
            return self._render(handler, *args, **kwargs)
        cherrypy.serving.request.handler = wrap

    def _render(self, handler, *args, **kwargs):
        template = cherrypy.request.config['template']
        if not template:
            parts = []
            if hasattr(handler.callable, '__self__'):
                parts.append(handler.callable.__self__.__class__.__name__.lower())
            if hasattr(handler.callable, '__name__'):
                parts.append(handler.callable.__name__.lower())
            template = '/'.join(parts)

        data = handler(*args, **kwargs) or {}
        renderer = self._engine.get_template('page/{0}.html'.format(template))
        return renderer.render(**data) if template and isinstance(data, dict) else data


def bootstrap():
    cherrypy.tools.template = TemplateTool()

    cherrypy.config.update(config.config)

    import controller
    cherrypy.config.update({'error_page.default': controller.errorPage})
    cherrypy.tree.mount(controller.Index(), '/', config.config)
```
*website/application/controller.py*
As you can see, with use of the tool, page handlers look rather clean and stay consistent with other tools, e.g. `json_out`.
```
# -*- coding: utf-8 -*-

import datetime

import cherrypy


class Index:

    news = None
    user = None

    def __init__(self):
        self.news = News()
        self.user = User()

    @cherrypy.tools.template
    def index(self):
        pass

    @cherrypy.expose
    def broken(self):
        raise RuntimeError('Pretend something has broken')


class User:

    @cherrypy.tools.template
    def profile(self):
        pass


class News:

    _list = [
        {'id': 0, 'date': datetime.datetime(2014, 11, 16), 'title': 'Bar', 'text': 'Lorem ipsum'},
        {'id': 1, 'date': datetime.datetime(2014, 11, 17), 'title': 'Foo', 'text': 'Ipsum lorem'}
    ]

    @cherrypy.tools.template
    def list(self):
        return {'list': self._list}

    @cherrypy.tools.template
    def show(self, id):
        return {'item': self._list[int(id)]}


def errorPage(status, message, **kwargs):
    return cherrypy.tools.template._engine.get_template('page/error.html').render()
```
In this demo app I used the [blueprint](http://www.blueprintcss.org/) CSS file to demonstrate how static resource handling works. Put it in `website/application/public/resource/css/blueprint.css`. The rest is less interesting, just Jinja2 templates for completeness.
*website/application/view/layout/main.html*
```
<!DOCTYPE html>
<html>
<head>
<meta http-equiv='content-type' content='text/html; charset=utf-8' />
<title>CherryPy Application Demo</title>
<link rel='stylesheet' media='screen' href='/resource/css/blueprint.css' />
</head>
<body>
<div class='container'>
<div class='header span-24'>
{% include 'part/menu.html' %}
</div>
<div class='span-24'>{% block content %}{% endblock %}</div>
</div>
</body>
</html>
```
*website/application/view/page/index/index.html*
```
{% extends 'layout/main.html' %}
{% block content %}
<div class='span-18 last'>
<p>Root page</p>
</div>
{% endblock %}
```
*website/application/view/page/news/list.html*
```
{% extends 'layout/main.html' %}
{% block content %}
<div class='span-20 last prepend-top'>
<h1>News</h1>
<ul>
{% for item in list %}
<li><a href='/news/show/{{ item.id }}'>{{ item.title }}</a> ({{ item.date }})</li>
{% endfor %}
</ul>
</div>
{% endblock %}
```
*website/application/view/page/news/show.html*
```
{% extends 'layout/main.html' %}
{% block content %}
<div class='span-20 last prepend-top'>
<h2>{{ item.title }}</h2>
<div class='span-5 last'>{{ item.date }}</div>
<div class='span-19 last'>{{ item.text }}</div>
</div>
{% endblock %}
```
*website/application/view/page/user/profile.html*
```
{% extends 'layout/main.html' %}
{% block content %}
<div class='span-18'>
<table>
<tr><td>First name:</td><td>John</td></tr>
<tr><td>Last name:</td><td>Doe</td></tr>
</table>
</div>
{% endblock %}
```
*website/application/view/page/error.html*
It's a 404-page.
```
{% extends 'layout/main.html' %}
{% block content %}
<h1>Error has happened</h1>
{% endblock %}
```
*website/application/view/part/menu.html*
```
<div class='span-4 prepend-top'>
<h2><a href='/'>Website</a></h2>
</div>
<div class='span-20 prepend-top last'>
<ul>
<li><a href='/news/list'>News</a></li>
<li><a href='/user/profile'>Profile</a></li>
<li><a href='/broken'>Broken</a></li>
</ul>
</div>
```
References
==========
The code above goes closely with the backend section of [qooxdoo-website-skeleton](https://bitbucket.org/saaj/qooxdoo-website-skeleton). For a full-blown Debian deployment of such an application, [cherrypy-webapp-skeleton](https://bitbucket.org/saaj/cherrypy-webapp-skeleton) may be useful.
|
39,971,929
|
Python 3.6 is about to be released. [PEP 494 -- Python 3.6 Release Schedule](https://www.python.org/dev/peps/pep-0494/) mentions the end of December, so I went through [What's New in Python 3.6](https://docs.python.org/3.6/whatsnew/3.6.html) to see they mention the *variable annotations*:
>
> [PEP 484](https://www.python.org/dev/peps/pep-0484) introduced standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables:
>
>
>
> ```
> primes: List[int] = []
>
> captain: str # Note: no initial value!
>
> class Starship:
>     stats: Dict[str, int] = {}
>
> ```
>
> Just as for function annotations, the Python interpreter does not attach any particular meaning to variable annotations and only stores them in a special attribute `__annotations__` of a class or module. In contrast to variable declarations in statically typed languages, the goal of annotation syntax is to provide an easy way to specify structured type metadata for third party tools and libraries via the abstract syntax tree and the `__annotations__` attribute.
>
>
>
So from what I read they are part of the type hints coming from Python 3.5, described in [What are Type hints in Python 3.5](https://stackoverflow.com/q/32557920/1983854).
I follow the `captain: str` and `class Starship` examples, but I'm not sure about the last one: how should `primes: List[int] = []` be read? Is it defining an empty list that will just allow integers?
|
2016/10/11
|
[
"https://Stackoverflow.com/questions/39971929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1983854/"
] |
Everything between `:` and the `=` is a type hint, so `primes` is indeed defined as `List[int]`, and initially set to an empty list (and `stats` is an empty dictionary initially, defined as `Dict[str, int]`).
`List[int]` and `Dict[str, int]` are not part of the new syntax, however; these were already defined in the Python 3.5 typing hints PEP. The 3.6 [PEP 526 – *Syntax for Variable Annotations*](https://www.python.org/dev/peps/pep-0526/) proposal *only* defines the syntax to attach the same hints to variables; before, you could only attach type hints to variables with comments (e.g. `primes = []  # type: List[int]`).
Both `List` and `Dict` are *Generic* types, indicating that you have a list or dictionary mapping with specific (concrete) contents.
For `List`, there is only one 'argument' (the elements in the `[...]` syntax), the type of every element in the list. For `Dict`, the first argument is the key type, and the second the value type. So *all* values in the `primes` list are integers, and *all* key-value pairs in the `stats` dictionary are `(str, int)` pairs, mapping strings to integers.
See the [`typing.List`](https://docs.python.org/3/library/typing.html#typing.List) and [`typing.Dict`](https://docs.python.org/3/library/typing.html#typing.Dict) definitions, the [section on *Generics*](https://docs.python.org/3/library/typing.html#generics), as well as [PEP 483 – *The Theory of Type Hints*](https://www.python.org/dev/peps/pep-0483).
Like type hints on functions, their use is optional and are also considered *annotations* (provided there is an object to attach these to, so globals in modules and attributes on classes, but not locals in functions) which you could introspect via the `__annotations__` attribute. You can attach arbitrary info to these annotations, you are not strictly limited to type hint information.
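For illustration, here is a minimal module-level sketch (Python 3.6+; the variable names are just examples, not from the question) of how such annotations end up in `__annotations__`:

```
from typing import Dict, List

primes: List[int] = [2, 3, 5]          # every element is an int
stats: Dict[str, int] = {"hits": 10}   # str keys mapped to int values
captain: str                           # annotation only, no value assigned

print(__annotations__)
# e.g. {'primes': typing.List[int], 'stats': typing.Dict[str, int], 'captain': <class 'str'>}
```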
You may want to read the [full proposal](https://www.python.org/dev/peps/pep-0526/); it contains some additional functionality above and beyond the new syntax; it specifies when such annotations are evaluated, how to introspect them and how to declare something as a class attribute vs. instance attribute, for example.
|
>
> *What are variable annotations?*
>
>
>
Variable annotations are just the next step from `# type` comments, as they were defined in `PEP 484`; the rationale behind this change is highlighted in the [respective section of PEP 526](https://www.python.org/dev/peps/pep-0526/#rationale).
So, instead of hinting the type with:
```
primes = [] # type: List[int]
```
*New syntax was introduced* to allow for directly annotating the type with an assignment of the form:
```
primes: List[int] = []
```
which, as @Martijn pointed out, denotes a list of integers by using types available in [`typing`](https://docs.python.org/3/library/typing.html) and initializing it to an empty list.
>
> *What changes does it bring?*
>
>
>
The first change introduced was [new syntax](https://docs.python.org/3.6/reference/simple_stmts.html#annotated-assignment-statements) that allows you to annotate a name with a type, either standalone after the `:` character or optionally annotate while also assigning a value to it:
```
annotated_assignment_stmt ::= augtarget ":" expression ["=" expression]
```
So the example in question:
```
primes: List[int] = [ ]
#  ^       ^          ^
#  augtarget          |
#          expression |
#                     expression (optionally initialize to empty list)
```
Additional changes were also introduced along with the new syntax; modules and classes now have an `__annotations__` attribute (as functions have had since *[PEP 3107 -- Function Annotations](https://www.python.org/dev/peps/pep-3107/)*) in which the type metadata is attached:
```
from typing import get_type_hints # grabs __annotations__
```
Now `__main__.__annotations__` holds the declared types:
```
>>> from typing import List, get_type_hints
>>> primes: List[int] = []
>>> captain: str
>>> import __main__
>>> get_type_hints(__main__)
{'primes': typing.List<~T>[int]}
```
`captain` won't currently show up through [`get_type_hints`](https://docs.python.org/3.6/library/typing.html#typing.get_type_hints) because `get_type_hints` only returns types that can also be accessed on a module; i.e., it needs a value first:
```
>>> captain = "Picard"
>>> get_type_hints(__main__)
{'primes': typing.List<~T>[int], 'captain': <class 'str'>}
```
Using `print(__annotations__)` will show `'captain': <class 'str'>` but you really shouldn't be accessing `__annotations__` directly.
Similarly, for classes:
```
>>> get_type_hints(Starship)
ChainMap({'stats': typing.Dict<~KT, ~VT>[str, int]}, {})
```
Where a `ChainMap` is used to grab the annotations for a given class (located in the first mapping) and all annotations defined in the base classes found in its `mro` (consequent mappings, `{}` for object).
Along with the new syntax, a new [`ClassVar`](https://docs.python.org/3.6/library/typing.html#typing.ClassVar) type has been added to denote class variables. Yup, `stats` in your example is actually an *instance variable*, not a `ClassVar`.
>
> *Will I be forced to use it?*
>
>
>
As with type hints from `PEP 484`, these are ***completely optional*** and are of main use for type checking tools (and whatever else you can build based on this information). The feature is provisional as of the stable Python 3.6 release, so small tweaks might be added in the future.
|
6,943,172
|
What does the [] mean?
Also how can I identify variables as empty arrays in python?
Thanks!
```
perl: xcoords = ()
```
How do I translate that?
|
2011/08/04
|
[
"https://Stackoverflow.com/questions/6943172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
`[]` is an empty list in Python and is the same as calling `list()`, e.g. `[] == list()`.
To check that a list is empty you can use `len(l)` or:
```
listV = []  # an empty list
if listV:
    pass  # do something if the list is not empty
else:
    pass  # do something if the list is really empty
```
To read more about lists you can use [the following link](http://docs.python.org/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange).
|
Lists are like C++ arrays, with some differences. One difference is that they can hold different types, even other lists. To check if a list is empty:
```
lists = []                  # an empty list
len(lists)                  # 0 -> the list is empty
lists.append("More of me")  # lists[0] = ... would raise IndexError on an empty list
len(lists)                  # 1 -> no longer empty
```
For more, check the [official Python](http://docs.python.org/tutorial/introduction.html#lists) tutorial and the docs above.
|
62,763,634
|
I currently have a dictionary that I imported from a csv file and converted into a list of variables. The original dictionary looks like this:
* server01, server01.fqdn:port
* server02, server02.fqdn:port
* server03, server03.fqdn:port
* server04, server04.fqdn:port
What I'd like to do is create another dictionary using the same key value as the existing (which would be the server name) and using the server's FQDN, use python requests to get a value. This would create a dictionary like this that I could then insert into MySQL:
* server01 0.0 0.0 2020-07-06 19:59:42
* server02 0.0 0.0 2020-07-06 19:59:42
* server03 0.0 0.0 2020-07-06 19:59:42
* server04 0.0 0.0 2020-07-06 19:59:42
I can print the results to screen using this, but how would I insert this into a new dictionary?
```
curtime = ('{:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.utcnow()))
for key, value in sorted(dict.items()):
    print key, fc_grab(value), fs_grab(value), curtime
```
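To make the target shape concrete, here is a rough, untested sketch of what I'm after (`servers` stands in for my existing dictionary, and `fc_grab`/`fs_grab` are my existing helpers):

```
import datetime

curtime = '{:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.utcnow())

# 'servers' is the dict read from the csv (name used here only for illustration)
# server name -> (first fetched value, second fetched value, timestamp)
results = {
    key: (fc_grab(value), fs_grab(value), curtime)
    for key, value in sorted(servers.items())
}
```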
Thank you,
Sean
|
2020/07/06
|
[
"https://Stackoverflow.com/questions/62763634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13795713/"
] |
The [`format`](https://trino.io/docs/current/functions/conversion.html#format) function accepts any of the Java [format string specifiers](https://docs.oracle.com/javase/8/docs/api/java/util/Formatter.html#syntax):
```
presto> select format('%.2f%%', 0.18932 * 100);
_col0
--------
18.93%
(1 row)
```
|
For those of you who, like me, came here looking to round a `decimal` or `double` field down to `n` decimal places but didn't care about the `%` sign, you have a few other options than Martin's answer.
If you want the output to be type `double` then [`truncate(x,n)`](https://prestodb.io/docs/current/functions/math.html) will do this. However, it requires `x` to be type `decimal`.
```
select truncate( cast( 1.23456789 as decimal(10,2)) , 2); -> 1.23 <DOUBLE>
```
If you feel that the truncate is kind of useless here, since the `decimal(10,2)` is really doing the work, then you're not alone. An arguably equivalent transformation would be:
```
select cast( cast( 1.23456789 as decimal(10,2)) as double); -> 1.23 <DOUBLE>
```
Whether this is syntactically better or more performant than `truncate()`, I have no idea; I guess you get to choose.
And if you don't give a hoot about what type results from the transformation, then I suppose the below is the most direct method.
`cast(1.23456789 as decimal(10,2));`
|
71,293,767
|
I need to get the value from one element using several others as filters using Selenium on a dynamic website ([LogTrail](https://github.com/sivasamyk/logtrail) using [Kibana](https://en.wikipedia.org/wiki/Kibana)).
I got this:
```python
from selenium import webdriver
import time
from selenium.webdriver.common.keys import Keys
import os
path2driver_ffox = os.path.join(os.path.abspath(os.getcwd()), "geckodriver")
path2driver_chr = os.path.join(os.path.abspath(os.getcwd()), "chromedriver")
try:
    driver = webdriver.Chrome(executable_path=path2driver_chr)
except:
    driver = webdriver.Firefox(executable_path=path2driver_ffox)
driver.get("https://log-viewer.mob.dev/app/logtrail#/?q=%22lw-00005%22&h=web-sockets&t=Now&i=filebeat-*&_g=()")
print(driver.title)
driver.maximize_window()
```
Using the example below, I need to get the value from the last action where time = 28-2-2022 and lbl-00005 appear in the *li*.
How can I do it?
```html
<li id="IavYP38BMeu2l4fa6DvW" ng-repeat="event in events" on-last-repeat="" infinite-scroll="">
<time>2022-02-28 10:20:49,864</time>
<span class="host"><a href="" ng-click="onHostSelected(event.hostname)">ws-web-sockets-pp</a></span>
<span class="program"><a ng-click="onProgramClick(event.program)">/web/serv/logs/ws/ws.log:</a></span>
<span class="message" ng-style="event.color? {color: event.color} : ''" ng-bind-html="event.message | ansiToHtml" compile-template="">2022-02-28 10:20:49,279 ws-web-sockets-pp-1 INFO [null:-1] (executor-thread-14) - stat : <span class="highlight">lbl-00005</span>:icifYWZuBe89EUYnMe-J3vIGOWQpG45-66vaB86d, MessageId: 894912413, request message: {"action":"act_VALUE","messageId":"894912413","type":"CALL","uniqueId":"894912413","payload":"{}"}</span>
</li>
```
This works, but:
```python
time = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "li[ng-repeat='event in events']>time"))).text
```
How do I know if this is the last (newest) record? How do I get the act\_VALUE?
This
```python
message = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "li[ng-repeat='event in events'] .message"))).text
```
It doesn't seem to return the message that belongs to the time we got above.
I can’t copy this and can only send an image :(
[](https://i.stack.imgur.com/XGMBC.png)
I need to be able to search this page like this.
Get the latest heartbeat (most recent record) and from that heartbeat, get the message id.
This print is with a filter only to show one record, and normally there are thousands of *li* elements.
I need to be able to put it into variables like this:
```json
req=" {"action":"Heartbeat","messageId":"33","type":"CALL","uniqueId":"33","payload":"{}"}"
type="heartbeat"
msgid="33"
```
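Something along these lines is roughly what I imagine (an untested sketch; it assumes the `li`/`time`/`.message` structure shown above and that the timestamps sort lexicographically):

```python
import re
from selenium.webdriver.common.by import By

# assumption: every event row matches this selector, as in the HTML above
rows = driver.find_elements(By.CSS_SELECTOR, "li[ng-repeat='event in events']")
newest = max(rows, key=lambda r: r.find_element(By.TAG_NAME, "time").text)
message = newest.find_element(By.CSS_SELECTOR, ".message").text

# pull the action and messageId out of the JSON-ish payload in the message text
match = re.search(r'"action":"([^"]+)","messageId":"(\d+)"', message)
if match:
    req_type, msgid = match.group(1), match.group(2)
```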
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71293767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17914605/"
] |
Given a dataframe with a `DatetimeIndex` which doesn't have any missing days like this
```
df = pd.DataFrame(
{"A": range(500)}, index=pd.date_range("2022-03-01", periods=500, freq="1D")
)
A
2022-03-01 0
2022-03-02 1
... ...
2023-07-12 498
2023-07-13 499
```
you could do the following
```
from dateutil.relativedelta import relativedelta
delta = relativedelta(months=1)
df["B"] = None # None instead of other NaNs - can be changed
idx = df.loc[df.index[0] + delta:].index
df.loc[idx, "B"] = df.loc[[day - delta for day in idx], "A"].values
```
and get
```
A B
2022-03-01 0 None
2022-03-02 1 None
... ... ...
2023-07-12 498 468
2023-07-13 499 469
```
The `idx` is there to make sure that the actual shifting doesn't fail. It's the part you're trying to address by `skip`. (Your `skip` is actually a bit imprecise because you're using 31/366 days for month/year lengths universally.)
But be prepared to run into strange phenomena when you're using months and/or years. For example
```
from datetime import date
delta = relativedelta(months=1)
date(2022, 3, 30) + delta == date(2022, 3, 31) + delta
```
is `True`.
|
We can use [`relativedelta`](https://dateutil.readthedocs.io/en/stable/relativedelta.html), [`pandas.to_datetime`](https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html) and [`pandas.DataFrame.apply`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html).
```
from dateutil.relativedelta import relativedelta
import pandas as pd
# Sample dataframe
>>> a = pd.DataFrame([('2021-01-01'), ('2021-01-02'), ('2022-01-01')], columns=['Date'])
# Contents of a
>>> a
Date
0 2021-01-01
1 2021-01-02
2 2022-01-01
# Ensuring Date is a datetime column
>>> a['Date'] = pd.to_datetime(a['Date'])
# Adding a month to all of the dates
>>> a.Date.apply(lambda x: x + relativedelta(months=1))
0 2021-02-01
1 2021-02-02
2 2022-02-01
Name: Date, dtype: datetime64[ns]
```
|
15,136,456
|
I want to parse a dxf file to obtain objects (line, point, text and so on) with the dxfgrabber library.
The code is as below
```
#!/usr/bin/env python
import dxfgrabber
dxf = dxfgrabber.readfile("1.dxf")
print ("DXF version : {}".format(dxf.dxfversion))
```
But it raises an error...
```
Traceback (most recent call last):
File "parsing.py", line 6, in <module>
dxf = dxfgrabber.readfile("1.dxf")
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/__init__.py", line 43, in readfile
with io.open(filename, encoding=get_encoding()) as fp:
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/__init__.py", line 39, in get_encoding
info = dxfinfo(fp)
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/tags.py", line 96, in dxfinfo
tag = next(tagreader)
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/tags.py", line 52, in __next__
return next_tag()
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/tags.py", line 45, in next_tag
raise StopIteration()
StopIteration
```
The simple 1.dxf file only contains a line.
file link is <https://docs.google.com/file/d/0BySHG7k180kETlQ2UnRxQmxoUk0/edit?usp=sharing>
Is this a bug in the dxfgrabber library?
Is there any good library for parsing dxf files in Python?
I am using dxfgrabber 0.4 and Python 2.7.3.
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15136456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761178/"
] |
I contacted the developer and he says that, in the current version 0.5.1, you should change line 49 of `__init__.py` to the following: `with io.open(filename) as fp:`.
Then it works (`io` was missing).
He will make this correction official in version 0.5.2 soon.
|
You can only read dxf files made in AutoCAD format!
Try "DraftSight", which is a free AutoCAD clone that exports dxf quite well. Try the dxf R12 format.
This will solve your problems.
|
35,737,178
|
Thanks in advance for the help.
I'm relatively new to python and am trying to write a python script to load partial csv files from 1000 files. For example, I have 1000 files that have this format
```
x,y
1,2
2,4
2,2
3,9
...
```
I would like to load only lines, for example, where `x=2`. I've seen a lot of posts on here about picking certain lines (ie lines 1,2,3), but not picking lines that fit certain criteria. One solution would be to simply open each file individually and iterate through each one, loading lines as I go. However, I would imagine there is a much better way of doing this (efficiency is somewhat of a concern as these files are not small).
One point that might speed things up is that the x column is sorted, ie once I see a value x = a, I will never see another x value less than a as I iterate through the lines from the beginning.
Is there a more efficient way of doing this rather than going through each file line by line?
Edit:
One approach that I have taken is
```
numpy.fromregex(file, r'^' + re.compile(str(mynum)) + r'\,\-\d$', dtype='f');
```
where mynum is the number I want, but this is not working
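I suspect the pattern has to be a plain string rather than a compiled regex glued into one; something like this is the direction I was aiming for (an untested sketch, assuming two numeric columns as in the sample above):

```
import re
import numpy

mynum = 2
# match rows whose first column equals mynum; capture the second column
pattern = r'^{},(-?\d+)$'.format(re.escape(str(mynum)))
rows = numpy.fromregex('file.csv', pattern, dtype=[('y', 'f8')])
```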
|
2016/03/02
|
[
"https://Stackoverflow.com/questions/35737178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2736423/"
] |
Try the [pandas](https://github.com/pydata/pandas) library. It has interoperability with numpy and is way more flexible. With this library you do the following:
```py
data = pandas.read_csv('file.csv')
# keep only rows with x equals to 2
data = data[data['x'] == 2]
# convert to numpy array
arr = numpy.asarray(data)
```
You can read more about selecting data [here](http://pandas.pydata.org/pandas-docs/stable/indexing.html).
|
The csv library comes with Python and allows for partial reading of a file. Since the x column is sorted, the loop can stop as soon as x passes the target value.
```
import csv

def partial_load(filename, target=2.0):
    ds = []
    c = csv.reader(open(filename))
    legend = next(c)                # skip the header row
    for row in c:
        row = [float(r) for r in row]
        if len(row) > 0:
            if row[0] > target:     # x is sorted, so nothing later can match
                break
            if row[0] == target:
                ds.append(row)
    return ds
```
|
17,406,453
|
I started a month ago with GAE and have successfully deployed our current startup via Flask on GAE. It works fantastically well. Now, being all too excited about GAE, I am thinking about porting a couple of my older Django apps to GAE as well.
To my surprise, the documentation for it is inconsistent and partially contradictory.
The official [google page](https://developers.google.com/appengine/articles/django-nonrel) recommends using `django-nonrel`, which itself is already [discontinued](http://www.allbuttonspressed.com/goodbye).
Django 1.5.1 doesn't even seem to be supported on GAE yet, nor is it clear to me how to use Django 1.4.3 on GAE.
I also found this more recent [solution](http://howto.pui.ch/post/39245389801/tutorial-django-on-appengine-using-google-cloud-sql) that utilizes Django and Google Cloud SQL (MySQL in the cloud) instead of the high replication datastore. I'm not sure if this is a good way to go since it's still experimental and subject to "breaking changes" in the future. (It also doesn't seem to include any free tier, unlike the high replication datastore.)
I was expecting Django - as perhaps the biggest Python web framework - to have far better documentation or tutorials about how to deploy it on GAE. So I wonder if it's even worth sticking with Django on GAE anymore.
If I am meant to make my own models manually and adjust my queries in views by utilizing `ndb` anyway, I could just as well stick with Flask+Jinja2; why should I use Django, where I can't even use its ORM anymore? Or am I overlooking something?
Thanks,
|
2013/07/01
|
[
"https://Stackoverflow.com/questions/17406453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/92153/"
] |
You can easily use `@request.getHeader("referer")` in your templates. For example, if you have a cancel button that should redirect you to the previous page, use this:
```
<a href="@request.getHeader("referer")">Cancel</a>
```
This way, you don't need to pass any extra information to your templates (tested with Play 2.3.4).
|
This is what I came up with in the end, although it isn't particularly elegant, and I'd be interested in better ways of doing it. I added a hidden input to my form with the current page URL:
```
@(implicit request: RequestHeader)
...
<form action="@routes.Controller.doStuff()" method="post">
<input type="hidden" name="previousURL" value="@request.uri"/>
...
</form>
```
Then in my controller:
```
def doStuff() = Action { implicit request =>
val previousURLOpt: Option[String] =
for {
requestMap <- request.body.asFormUrlEncoded
values <- requestMap.get("previousURL")
previousURL <- values.headOption
} yield previousURL
previousURLOpt match {
case Some(previousURL) =>
Redirect(new Call("GET", previousURL))
case None =>
Redirect(routes.Controller.somewhereElse)
}
}
```
|
17,406,453
|
I started a month ago with GAE and have successfully deployed our current startup via Flask on GAE. It works fantastically well. Now, being all too excited about GAE, I am thinking about porting a couple of my older Django apps to GAE as well.
To my surprise, the documentation for it is inconsistent and partially contradictory.
The official [google page](https://developers.google.com/appengine/articles/django-nonrel) recommends using `django-nonrel`, which itself is already [discontinued](http://www.allbuttonspressed.com/goodbye).
Django 1.5.1 doesn't even seem to be supported on GAE yet, nor is it clear to me how to use Django 1.4.3 on GAE.
I also found this more recent [solution](http://howto.pui.ch/post/39245389801/tutorial-django-on-appengine-using-google-cloud-sql) that utilizes Django and Google Cloud SQL (MySQL in the cloud) instead of the high replication datastore. I'm not sure if this is a good way to go since it's still experimental and subject to "breaking changes" in the future. (It also doesn't seem to include any free tier, unlike the high replication datastore.)
I was expecting Django - as perhaps the biggest Python web framework - to have far better documentation or tutorials about how to deploy it on GAE. So I wonder if it's even worth sticking with Django on GAE anymore.
If I am meant to make my own models manually and adjust my queries in views by utilizing `ndb` anyway, I could just as well stick with Flask+Jinja2; why should I use Django, where I can't even use its ORM anymore? Or am I overlooking something?
Thanks,
|
2013/07/01
|
[
"https://Stackoverflow.com/questions/17406453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/92153/"
] |
This is what I came up with in the end, although it isn't particularly elegant, and I'd be interested in better ways of doing it. I added a hidden input to my form with the current page URL:
```
@(implicit request: RequestHeader)
...
<form action="@routes.Controller.doStuff()" method="post">
<input type="hidden" name="previousURL" value="@request.uri"/>
...
</form>
```
Then in my controller:
```
def doStuff() = Action { implicit request =>
val previousURLOpt: Option[String] =
for {
requestMap <- request.body.asFormUrlEncoded
values <- requestMap.get("previousURL")
previousURL <- values.headOption
} yield previousURL
previousURLOpt match {
case Some(previousURL) =>
Redirect(new Call("GET", previousURL))
case None =>
Redirect(routes.Controller.somewhereElse)
}
}
```
|
The easiest way I've found to do this, is from within your controller method, use this:
```
String refererUrl = request().getHeader("referer");
```
So, you'd do something like:
```
public static Result query(String queryStr, int page, int offset) {
String refererUrl = request().getHeader("referer");
Logger.info("refererUrl: " + refererUrl);
if(queryStr.length() < 3) {
flash(Application.FLASH_ERROR_KEY, "type a longer search than '" + queryStr.trim() + "'");
return redirect(refererUrl);
}
return ok(listings.render(searchService.searchListings(queryStr)));
}
```
Keep in mind you need to do a redirect() and NOT a render() with a flash message.
|
17,406,453
|
I started a month ago with GAE and have successfully deployed our current startup via Flask on GAE. It works fantastically well. Now, being all too excited about GAE, I am thinking about porting a couple of my older Django apps to GAE as well.
To my surprise, the documentation for it is inconsistent and partially contradictory.
The official [google page](https://developers.google.com/appengine/articles/django-nonrel) recommends using `django-nonrel`, which itself is already [discontinued](http://www.allbuttonspressed.com/goodbye).
Django 1.5.1 doesn't even seem to be supported on GAE yet, nor is it clear to me how to use Django 1.4.3 on GAE.
I also found this more recent [solution](http://howto.pui.ch/post/39245389801/tutorial-django-on-appengine-using-google-cloud-sql) that utilizes Django and Google Cloud SQL (MySQL in the cloud) instead of the high replication datastore. I'm not sure if this is a good way to go since it's still experimental and subject to "breaking changes" in the future. (It also doesn't seem to include any free tier, unlike the high replication datastore.)
I was expecting Django - as perhaps the biggest Python web framework - to have far better documentation or tutorials about how to deploy it on GAE. So I wonder if it's even worth sticking with Django on GAE anymore.
If I am meant to make my own models manually and adjust my queries in views by utilizing `ndb` anyway, I could just as well stick with Flask+Jinja2; why should I use Django, where I can't even use its ORM anymore? Or am I overlooking something?
Thanks,
|
2013/07/01
|
[
"https://Stackoverflow.com/questions/17406453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/92153/"
] |
You can easily use `@request.getHeader("referer")` in your templates. For example, if you have a cancel button that should redirect you to the previous page, use this:
```
<a href="@request.getHeader("referer")">Cancel</a>
```
This way, you don't need to pass any extra information to your templates (tested with Play 2.3.4).
|
The easiest way I've found to do this, is from within your controller method, use this:
```
String refererUrl = request().getHeader("referer");
```
So, you'd do something like:
```
public static Result query(String queryStr, int page, int offset) {
String refererUrl = request().getHeader("referer");
Logger.info("refererUrl: " + refererUrl);
if(queryStr.length() < 3) {
flash(Application.FLASH_ERROR_KEY, "type a longer search than '" + queryStr.trim() + "'");
return redirect(refererUrl);
}
return ok(listings.render(searchService.searchListings(queryStr)));
}
```
Keep in mind you need to do a redirect() and NOT a render() with a flash message.
|
43,852,802
|
Python 3.6
I have a program that is generating a list of dictionaries.
If I print it to the screen with:
```
print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
```
It prints out exactly as I want to see it:
```
[
{
"runts": 0,
"giants": 0,
"throttles": 0,
"input errors": 0,
"CRC": 0,
"frame": 0,
"overrun": 0,
"ignored": 0,
"watchdog": 0,
"pause input": 0,
"input packets with dribble condition detected": 0,
"underruns": 0,
"output errors": 0,
"collisions": 0,
"interface resets": 2,
"babbles": 0,
"late collision": 0,
"deferred": 0,
"lost carrier": 0,
"no carrier": 0,
"PAUSE output": 0,
"output buffer failures": 0,
"output buffers swapped out": 0
},
{
"runts": 0,
"giants": 0,
"throttles": 0,
"input errors": 0,
"CRC": 0,
"frame": 0,
"overrun": 0,
"ignored": 0,
"watchdog": 0,
"pause input": 0,
"input packets with dribble condition detected": 0,
"underruns": 0,
"output errors": 0,
"collisions": 0,
"interface resets": 2,
"babbles": 0,
"late collision": 0,
"deferred": 0,
"lost carrier": 0,
"no carrier": 0,
"PAUSE output": 0,
"output buffer failures": 0,
"output buffers swapped out": 0
},
```
But if I try to print it to a file with:
```
outputfile = ("d:\\mark\\python\\Projects\\error_detect\\" + hostname)
# print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
output_lines.append(json.dumps(output_lines, indent=4, separators=(',', ': ')))
del output_lines[-1]
with open(outputfile, 'w') as f:
    json.dump(output_lines, f)
```
The file is one giant line of text.
I want the formatting in the file to look like it does when I print to the screen.
I do not understand why I am losing the formatting.
|
2017/05/08
|
[
"https://Stackoverflow.com/questions/43852802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7535419/"
] |
I think all you need is `json.dump` with `indent` and it should be fine:
```
outputfile = ("d:\\mark\\python\\Projects\\error_detect\\" + hostname)
# print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
# output_lines.append(json.dumps(output_lines, indent=4, separators=(',', ': ')))
# del output_lines[-1]
with open(outputfile, 'w') as f:
    json.dump(output_lines, f, indent=4, separators=(',', ': '))
```
It doesn't make much sense to me to format to a string and then re-run dump on the string.
|
Try simply outputting the formatted `json.dumps`, rather than running it through `json.dump` again.
```
with open(outputfile, 'w') as f:
    f.write(json.dumps(output_lines, indent=4, separators=(',', ': ')))
```
|
43,852,802
|
Python 3.6
I have a program that is generating a list of dictionaries.
If I print it to the screen with:
```
print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
```
It prints out exactly as I want to see it:
```
[
{
"runts": 0,
"giants": 0,
"throttles": 0,
"input errors": 0,
"CRC": 0,
"frame": 0,
"overrun": 0,
"ignored": 0,
"watchdog": 0,
"pause input": 0,
"input packets with dribble condition detected": 0,
"underruns": 0,
"output errors": 0,
"collisions": 0,
"interface resets": 2,
"babbles": 0,
"late collision": 0,
"deferred": 0,
"lost carrier": 0,
"no carrier": 0,
"PAUSE output": 0,
"output buffer failures": 0,
"output buffers swapped out": 0
},
{
"runts": 0,
"giants": 0,
"throttles": 0,
"input errors": 0,
"CRC": 0,
"frame": 0,
"overrun": 0,
"ignored": 0,
"watchdog": 0,
"pause input": 0,
"input packets with dribble condition detected": 0,
"underruns": 0,
"output errors": 0,
"collisions": 0,
"interface resets": 2,
"babbles": 0,
"late collision": 0,
"deferred": 0,
"lost carrier": 0,
"no carrier": 0,
"PAUSE output": 0,
"output buffer failures": 0,
"output buffers swapped out": 0
},
```
But if I try to print it to a file with:
```
outputfile = ("d:\\mark\\python\\Projects\\error_detect\\" + hostname)
# print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
output_lines.append(json.dumps(output_lines, indent=4, separators=(',', ': ')))
del output_lines[-1]
with open(outputfile, 'w') as f:
    json.dump(output_lines, f)
```
The file is one giant line of text.
I want the formatting in the file to look like it does when I print to the screen.
I do not understand why I am losing the formatting.
|
2017/05/08
|
[
"https://Stackoverflow.com/questions/43852802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7535419/"
] |
Try simply outputting the formatted `json.dumps`, rather than running it through `json.dump` again.
```
with open(outputfile, 'w') as f:
    f.write(json.dumps(output_lines, indent=4, separators=(',', ': ')))
```
|
Say your program generates this list of dictionaries
```
>>> list_of_dicts = [dict(zip(list(range(2)),list(range(2)))), dict(zip(list(range(2)),list(range(2))))]
>>> list_of_dicts
[{0: 0, 1: 1}, {0: 0, 1: 1}]
```
What you can do is
```
>>> import json
>>> str_object = json.dumps(list_of_dicts, indent=4)
>>> repr(str_object)
'[\n {\n "0": 0, \n "1": 1\n }, \n {\n "0": 0, \n "1": 1\n }\n]'
>>> str_object
[
{
"0": 0,
"1": 1
},
{
"0": 0,
"1": 1
}
]
```
Now you can write `str_object`
```
>>> with open(outputfile, 'w') as f:
f.write(str_object)
```
This makes the formatting in the file the same as when you print it to the screen.
|
43,852,802
|
Python 3.6
I have a program that is generating a list of dictionaries.
If I print it to the screen with:
```
print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
```
It prints out exactly as I want to see it:
```
[
{
"runts": 0,
"giants": 0,
"throttles": 0,
"input errors": 0,
"CRC": 0,
"frame": 0,
"overrun": 0,
"ignored": 0,
"watchdog": 0,
"pause input": 0,
"input packets with dribble condition detected": 0,
"underruns": 0,
"output errors": 0,
"collisions": 0,
"interface resets": 2,
"babbles": 0,
"late collision": 0,
"deferred": 0,
"lost carrier": 0,
"no carrier": 0,
"PAUSE output": 0,
"output buffer failures": 0,
"output buffers swapped out": 0
},
{
"runts": 0,
"giants": 0,
"throttles": 0,
"input errors": 0,
"CRC": 0,
"frame": 0,
"overrun": 0,
"ignored": 0,
"watchdog": 0,
"pause input": 0,
"input packets with dribble condition detected": 0,
"underruns": 0,
"output errors": 0,
"collisions": 0,
"interface resets": 2,
"babbles": 0,
"late collision": 0,
"deferred": 0,
"lost carrier": 0,
"no carrier": 0,
"PAUSE output": 0,
"output buffer failures": 0,
"output buffers swapped out": 0
},
```
But if I try to print it to a file with:
```
outputfile = ("d:\\mark\\python\\Projects\\error_detect\\" + hostname)
# print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
output_lines.append(json.dumps(output_lines, indent=4, separators=(',', ': ')))
del output_lines[-1]
with open(outputfile, 'w') as f:
    json.dump(output_lines, f)
```
The file is one giant line of text.
I want the formatting in the file to look like it does when I print to the screen.
I do not understand why I am losing the formatting.
|
2017/05/08
|
[
"https://Stackoverflow.com/questions/43852802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7535419/"
] |
I think all you need is `json.dump` with `indent` and it should be fine:
```
outputfile = ("d:\\mark\\python\\Projects\\error_detect\\" + hostname)
# print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
# output_lines.append(json.dumps(output_lines, indent=4, separators=(',', ': ')))
# del output_lines[-1]
with open(outputfile, 'w') as f:
    json.dump(output_lines, f, indent=4, separators=(',', ': '))
```
It doesn't make much sense to me to format to a string and then re-run dump on the string.
|
Say your program generates this list of dictionaries
```
>>> list_of_dicts = [dict(zip(list(range(2)),list(range(2)))), dict(zip(list(range(2)),list(range(2))))]
>>> list_of_dicts
[{0: 0, 1: 1}, {0: 0, 1: 1}]
```
What you can do is
```
>>> import json
>>> str_object = json.dumps(list_of_dicts, indent=4)
>>> repr(str_object)
'[\n {\n "0": 0, \n "1": 1\n }, \n {\n "0": 0, \n "1": 1\n }\n]'
>>> str_object
[
{
"0": 0,
"1": 1
},
{
"0": 0,
"1": 1
}
]
```
Now you can write `str_object`
```
>>> with open(outputfile, 'w') as f:
f.write(str_object)
```
This makes the formatting in the file the same as when you print it to the screen.
|
8,461,306
|
I'm tracking a Linux filesystem (which could be of any type) with the pyinotify module for Python (it's actually the Linux kernel doing the job behind the scenes). Many directories/folders/files (as many as the user wants) are being tracked by my application, and now I would like to track the md5sum of each file and store them in a database (covering every move, rename, new file, etc.).
I guess a database would be the best option to store all the md5sums of the files... But what would be the best database for that? Certainly a very performant one. I'm looking for a free one, because the application is going to be GPL.
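For a sense of scale, what I need per file is basically this (a rough sketch only; the file path, schema and the choice of SQLite are placeholders, not decisions):

```
import hashlib
import sqlite3

def md5sum(path, chunk_size=8192):
    # hash the file in chunks so large files don't blow up memory
    digest = hashlib.md5()
    with open(path, 'rb') as fp:
        for chunk in iter(lambda: fp.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

conn = sqlite3.connect('checksums.db')
conn.execute('CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, md5 TEXT)')
conn.execute('INSERT OR REPLACE INTO files VALUES (?, ?)',
             ('/tmp/example', md5sum('/tmp/example')))
conn.commit()
```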
|
2011/12/11
|
[
"https://Stackoverflow.com/questions/8461306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/952870/"
] |
You could use this:
```
string q = Regex.Replace(query, @"[:#/\\]", ".");
q = Regex.Replace(q, @""|['"",&?%\.*-]", " ");
```
EDIT:
=====
On closer inspection of what you're doing, your code is translating several characters into `.`, and *then* translating all `.` into spaces. So you could just do this:
```
string q = Regex.Replace(query, @""|['"",&?%\.*:#/\\-]", " ").Trim();
```
I'm not really sure what you're trying to do here, though. I feel like what you're **really** looking for is something like:
```
string q = Regex.Replace(query, @"[^\w\s]", "");
```
The presence of `"` in there throws me for a loop, and is why I'm not sure what you're doing. If you want to get rid of HTML entities, you could run `query` through `HttpUtility.HtmlDecode(string)` first and then apply the regex.
|
Try this.
```
string pattern = @"[^a-zA-Z0-9]";
string test = Regex.Replace("abc*&34567*opdldld(aododod';", pattern, " ");
```
|
11,191,946
|
I have spent many hours trying to build RDKit on ubuntu 11.10 for
Python 2.7 (rdkit\_201106+dfsg.orig.tar.gz) using a precompiled version
of boost 1.49. And I am failing miserably.
The recurring error is in the CMake GUI:
```
CMake Error at CMakeLists.txt:11 (install):
install FILES given no DESTINATION!
CMake Error at CMakeLists.txt:14 (add_pytest):
Unknown CMake command "add_pytest".
```
Any help please?
I solved the previous problem, but now I get this error when running Python, even though I installed RDKit following the installation procedure:
```
from rdkit import Chem
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named rdkit
```
|
2012/06/25
|
[
"https://Stackoverflow.com/questions/11191946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1395874/"
] |
Make sure you have the environment variables set (you might need to adjust the paths to match your install); using bash on a Mac:
```
export RDBASE=/usr/local/share/RDKit
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/site-packages
```
you might want to add those lines to a bash script to automate the process.
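After setting those, a quick sanity check from Python might look like this (a sketch; the site-packages path is an assumption, adjust it to your install):

```
import sys
sys.path.append('/usr/local/lib/python2.7/site-packages')  # adjust to your install

from rdkit import Chem
print(Chem.MolFromSmiles('CCO'))  # should print a Mol object instead of raising ImportError
```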
|
For Ubuntu 12.04.2 LTS, setting these environment variables works for me:
```
export RDBASE=/usr/share/RDKit
export PYTHONPATH=$PYTHONPATH:/usr/lib/pymodules/python2.7
```
|