| qid (int64, 46k-74.7M) | question (string, 54-37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 29-22k chars) | response_k (string, 26-13.4k chars) | __index_level_0__ (int64, 0-17.8k) |
|---|---|---|---|---|---|---|
58,738,629
|
I'm trying to convert string to date using `arrow` module.
During the conversion, I received this error:
`arrow.parser.ParserMatchError: Failed to match '%A %d %B %Y %I:%M:%S %p %Z' when parsing 'Wednesday 06 November 2019 03:05:42 PM CDT'`
The conversion is done using one simple line according to this [documentation](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior):
`date = arrow.get(date, '%A %d %B %Y %I:%M:%S %p %Z')`
I also tried to do this with `datetime` and got another error:
`ValueError: time data 'Wednesday 06 November 2019 03:27:33 PM CDT' does not match format '%A %d %B %Y %I:%M:%S %p %Z'`
What am I missing?
|
2019/11/06
|
[
"https://Stackoverflow.com/questions/58738629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4193208/"
] |
This looks like a [regex](https://www.regular-expressions.info/) kind of problem to me, so use the [Pattern](https://docs.oracle.com/javase/9/docs/api/java/util/regex/Pattern.html) class. By positively matching what we want, it implicitly ignores files that don't conform (like your `._` example).
```
final Pattern p = Pattern.compile(".*_(\\d{4})-(\\d{2})\\.pdf$");
for (File obj : contentsOfDirectory) {
    if (obj.isFile()) {
        // base directory the PDF bills live in
        String file = "this is the file directory";
        String pdfBills = file + obj.getName();
        Matcher m = p.matcher(pdfBills);
        if (m.matches()) {
            int year = Integer.parseInt(m.group(1));
            int month = Integer.parseInt(m.group(2));
            // ... do stuff with year and month
        }
    }
}
```
|
What if instead of looking at the end of the filename, you inspected the beginning? It looks like the first part of the filename is consistently YYYY-MM, so you could then parse out the year and the month using `.substring()` like so:
```
String year = pdfBills.substring(0, 4);
String month = pdfBills.substring(5, 7);
```
Then, you can convert your month numeric String to a human readable month String like this:
```
import java.text.DateFormatSymbols;
DateFormatSymbols symbols = new DateFormatSymbols();
int intMonth = Integer.parseInt(month);
String monthName = symbols.getMonths()[intMonth-1];
```
| 7,229
|
52,996,227
|
I have a JSON file that looks like this:
```
{
"authors": [
{
"name": "John Steinbeck",
"description": "An author from Salinas California"
},
{
"name": "Mark Twain",
"description": "An icon of american literature",
"publications": [
{
"book": "Huckleberry Fin"
},
{
"book": "The Mysterious Stranger"
},
{
"book": "Puddinhead Wilson"
}
]
},
{
"name": "Herman Melville",
"description": "Wrote about a famous whale.",
"publications": [
{
"book": "Moby Dick"
}
]
},
{
"name": "Edgar Poe",
"description": "Middle Name was Alan"
}
]
}
```
I'm using Python to get the values of the publications elements.
My code looks like this:
```
import json
with open('derp.json') as f:
data = json.load(f)
for i in range (0, len (data['authors'])):
print data['authors'][i]['name']+data['authors'][i]['publications']
```
I'm able to get all the names if I just use:
```
print data['authors'][i]['name']
```
But when I attempt to iterate through to return the publications, I get a KeyError. I expect it's because the publications element isn't part of every author.
How can I get these values to return?
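As an illustration of that reasoning, a minimal sketch that guards against the missing key with `dict.get` (using the JSON structure above; the empty-list default is an assumption, not from the original question):
```
import json

with open('derp.json') as f:
    data = json.load(f)

for author in data['authors']:
    # .get() falls back to [] when an author has no 'publications' key, avoiding the KeyError
    books = [p['book'] for p in author.get('publications', [])]
    print(author['name'], books)
```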
|
2018/10/25
|
[
"https://Stackoverflow.com/questions/52996227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4163962/"
] |
> "Use the force, Linq!" - Obi Enum Kenobi
```
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
List<Int32> numbers = new List<Int32>()
{
1,
2,
3,
4
};
String asString = String
.Join(
", ",
numbers.Select( n => n.ToString( CultureInfo.InvariantCulture ) )
);
List<Int32> fromString = asString
    .Split( ',' )
    .Select( c => Int32.Parse( c, CultureInfo.InvariantCulture ) )
    .ToList();
```
When converting to and from strings that are read by machines, not humans, it's important to avoid using `ToString` and `Parse` without using `CultureInfo.InvariantCulture` to ensure consistent formatting regardless of a user's culture and formatting settings.
FWIW, I have my own helper library that adds these useful extension methods:
```
public static String ToStringInvariant<T>( this T value )
where T : IConvertible
{
return value.ToString( CultureInfo.InvariantCulture );
}
public static String StringJoin( this IEnumerable<String> source, String separator )
{
return String.Join( separator, source );
}
```
Which tidies things up somewhat:
```
String asString = numbers
.Select( n => n.ToStringInvariant() )
.StringJoin( ", " );
List<Int32> fromString = asString
    .Split( ',' )
    .Select( c => Int32.Parse( c, CultureInfo.InvariantCulture ) )
    .ToList();
```
|
The easiest way is to make a new list each time and casting each item as you iterate.
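Since that answer is prose-only, here is a minimal sketch of the "new list, cast each item as you iterate" idea, written in Python for illustration (the variable names are assumptions, not from the original answer):
```
numbers = [1, 2, 3, 4]

# build a new list, casting each item as we iterate
as_strings = [str(n) for n in numbers]
back_to_ints = [int(s) for s in as_strings]

print(as_strings)    # ['1', '2', '3', '4']
print(back_to_ints)  # [1, 2, 3, 4]
```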
| 7,230
|
1,168,687
|
Right now I'm working on a scripting language that doesn't yet have a FFI. I'd like to know what's the most convenient way to get it in, assuming that I'd like to write it like cool geeks do - I'd like to write the FFI in the scripting language itself.
The programming language I need to interface is C. So for basics I know that libdl.so is my best friend. Obviously it's not the only thing I'm going to need but the most important of them.
I only have slight ideas about what else I need for it. I'd like to get similar behavior from the FFI as what python ctypes has.
What do I need to know in order to get this far? I know there's some serious magic with data structures I'll need to deal with. How do I manage it so that I could do most of that serious magic in the scripting language itself? I'd have use for such magic well beyond just the foreign function interface. For instance I might want to pass C-like binary data to files.
|
2009/07/22
|
[
"https://Stackoverflow.com/questions/1168687",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21711/"
] |
I think an appropriate answer requires a [detailed essay](http://vmathew.in/dnc.html).
Basically, there should be wrappers for the library loading and symbol searching facilities provided by the host OS.
If the core datatypes of your language are internally represented with a single C data structure, then a requirement can be placed on the library developers that the parameters and the return type of the exported C functions should be objects of that data structure. This will make data exchange simpler to implement.
If your language has some form of pattern expressions and first-class functions, then the signature of C functions might be written in patterns and the library searched for a function matching an equivalent signature. Here is some pseudocode of a C function and its usage in script:
```
/* arith.dll */
/* A sample C function callable from the scripting language. */
#include "my_script.h" // Data structures used by the script interpreter.
My_Script_Object* add(My_Script_Object* num1, My_Script_Object* num2)
{
int a = My_Script_Object_To_Int(num1);
int b = My_Script_Object_To_Int(num2);
return Int_To_My_Script_Object(a + b);
}
/* End of arith.dll */
// Script using the dll
clib = open_library("arith.dll");
// if it has first-class functions
add_func = clib.find([add int int]);
if (add_func != null)
{
sum = add_func(10, 20);
print(sum);
}
// otherwise
print(clib.call("add", 10, 20));
```
It is not possible to discuss more implementation details here. Note that
we haven't said anything about garbage collection, etc.
The sources available at the following links may help you move further:
<http://common-lisp.net/project/cffi/>
<http://www.nongnu.org/cinvoke/>
|
Check out <http://sourceware.org/libffi/>
Remember the calling conventions are going to be different on different architectures, i.e. what order function arguments are pushed onto the stack. I don't know about writing it in your own scripting language, but I do know that Java JNI uses libffi.
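For reference, a minimal ctypes sketch of the behaviour the question points at (dynamic loading plus a foreign call); it assumes a Linux system where `libc.so.6` is available and is only an illustration, not part of the original answer:
```
import ctypes

# load a shared library (libdl-style dynamic loading under the hood)
libc = ctypes.CDLL("libc.so.6")

# declare the foreign function's signature, then call it like a Python function
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int
print(libc.abs(-42))  # 42
```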
| 7,233
|
63,966,342
|
```
import pandas as pd
import datetime as dt
from pandas_datareader import data as web
import yfinance as yf
yf.pdr_override()
```
filename = r'C:\Users\User\Desktop\from_python\data_from_python.xlsx'
```
yeah = pd.read_excel(filename, sheet_name='entry')
stock = []
stock = list(yeah['name'])
stock = [ s.replace('\xa0', '') for s in stock if not pd.isna(s) ]
adj_close=pd.DataFrame([])
high_price=pd.DataFrame([])
low_price=pd.DataFrame([])
volume=pd.DataFrame([])
print(stock)
['^GSPC', 'NQ=F', 'AAU', 'ALB', 'AOS', 'APPS', 'AQB', 'ASPN', 'ATHM', 'AZRE', 'BCYC', 'BGNE', 'CAT', 'CC', 'CLAR', 'CLCT', 'CMBM', 'CMT', 'CRDF', 'CYD', 'DE', 'DKNG', 'EARN', 'EMN', 'FBIO', 'FBRX', 'FCX', 'FLXS', 'FMC', 'FMCI', 'GME', 'GRVY', 'HAIN', 'HBM', 'HIBB', 'IEX', 'IOR', 'KFS', 'MAXR', 'MPX', 'MRTX', 'NSTG', 'NVCR', 'NVO', 'OESX', 'PENN', 'PLL', 'PRTK', 'RDY', 'REGI', 'REKR', 'SBE', 'SQM', 'TCON', 'TCS', 'TGB', 'TPTX', 'TRIL', 'UEC', 'VCEL', 'VOXX', 'WIT', 'WKHS', 'XNCR']
for symbol in stock:
adj_close[symbol] = web.get_data_yahoo([symbol],start,end)['Adj Close']
```
I have a list of tickers, I have got the adj close price, how can get these tickers NAME and SECTORS?
for single ticker I found in web, it can be done like as below
```
sbux = yf.Ticker("SBUX")
tlry = yf.Ticker("TLRY")
print(sbux.info['sector'])
print(tlry.info['sector'])
```
How can I make it as a `dataframe` that I can put the data into excel as I am doing for `adj` price.
Thanks a lot!
|
2020/09/19
|
[
"https://Stackoverflow.com/questions/63966342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13933399/"
] |
You can try this answer using a package called [yahooquery](https://github.com/dpguthrie/yahooquery). Disclaimer: I am the author of the package.
```
from yahooquery import Ticker
import pandas as pd
symbols = ['^GSPC', 'NQ=F', 'AAU', 'ALB', 'AOS', 'APPS', 'AQB', 'ASPN', 'ATHM', 'AZRE', 'BCYC', 'BGNE', 'CAT', 'CC', 'CLAR', 'CLCT', 'CMBM', 'CMT', 'CRDF', 'CYD', 'DE', 'DKNG', 'EARN', 'EMN', 'FBIO', 'FBRX', 'FCX', 'FLXS', 'FMC', 'FMCI', 'GME', 'GRVY', 'HAIN', 'HBM', 'HIBB', 'IEX', 'IOR', 'KFS', 'MAXR', 'MPX', 'MRTX', 'NSTG', 'NVCR', 'NVO', 'OESX', 'PENN', 'PLL', 'PRTK', 'RDY', 'REGI', 'REKR', 'SBE', 'SQM', 'TCON', 'TCS', 'TGB', 'TPTX', 'TRIL', 'UEC', 'VCEL', 'VOXX', 'WIT', 'WKHS', 'XNCR']
# Create Ticker instance, passing symbols as first argument
# Optional asynchronous argument allows for asynchronous requests
tickers = Ticker(symbols, asynchronous=True)
data = tickers.get_modules("summaryProfile quoteType")
df = pd.DataFrame.from_dict(data).T
# flatten dicts within each column, creating new dataframes
dataframes = [pd.json_normalize([x for x in df[module] if isinstance(x, dict)]) for module in ['summaryProfile', 'quoteType']]
# concat dataframes from previous step
df = pd.concat(dataframes, axis=1)
# View columns
df.columns
Index(['address1', 'address2', 'city', 'state', 'zip', 'country', 'phone',
'fax', 'website', 'industry', 'sector', 'longBusinessSummary',
'fullTimeEmployees', 'companyOfficers', 'maxAge', 'exchange',
'quoteType', 'symbol', 'underlyingSymbol', 'shortName', 'longName',
'firstTradeDateEpochUtc', 'timeZoneFullName', 'timeZoneShortName',
'uuid', 'messageBoardId', 'gmtOffSetMilliseconds', 'maxAge'],
dtype='object')
# Data you're looking for
df[['symbol', 'shortName', 'sector']].head(10)
symbol shortName sector
0 NQZ20.CME Nasdaq 100 Dec 20 NaN
1 ALB Albemarle Corporation Basic Materials
2 AOS A.O. Smith Corporation Industrials
3 ASPN Aspen Aerogels, Inc. Industrials
4 AAU Almaden Minerals, Ltd. Basic Materials
5 ^GSPC S&P 500 NaN
6 ATHM Autohome Inc. Communication Services
7 AQB AquaBounty Technologies, Inc. Consumer Defensive
8 APPS Digital Turbine, Inc. Technology
9 BCYC Bicycle Therapeutics plc Healthcare
```
|
It processes the stocks and their sectors at the same time. However, some stocks do not have a sector, so error handling is added for that case.
Since each column name consists of the sector and the ticker name, we change it to a hierarchical (MultiIndex) column and update the retrieved data frame. Finally, I save it in CSV format to import it into Excel. I've only tried some of the tickers due to the large number of stocks, so there may be some issues.
```
import datetime
import pandas as pd
import yfinance as yf
import pandas_datareader.data as web
yf.pdr_override()
start = "2018-01-01"
end = "2019-01-01"
# symbol = ['^GSPC', 'NQ=F', 'AAU', 'ALB', 'AOS', 'APPS', 'AQB', 'ASPN', 'ATHM', 'AZRE', 'BCYC', 'BGNE', 'CAT',
#'CC', 'CLAR', 'CLCT', 'CMBM', 'CMT', 'CRDF', 'CYD', 'DE', 'DKNG', 'EARN', 'EMN', 'FBIO', 'FBRX', 'FCX', 'FLXS',
#'FMC', 'FMCI', 'GME', 'GRVY', 'HAIN', 'HBM', 'HIBB', 'IEX', 'IOR', 'KFS', 'MAXR', 'MPX', 'MRTX', 'NSTG', 'NVCR',
#'NVO', 'OESX', 'PENN', 'PLL', 'PRTK', 'RDY', 'REGI', 'REKR', 'SBE', 'SQM', 'TCON', 'TCS', 'TGB', 'TPTX', 'TRIL',
#'UEC', 'VCEL', 'VOXX', 'WIT', 'WKHS', 'XNCR']
stock = ['^GSPC', 'NQ=F', 'AAU', 'ALB', 'AOS', 'APPS']
adj_close = pd.DataFrame([])
for symbol in stock:
try:
sector = yf.Ticker(symbol).info['sector']
name = yf.Ticker(symbol).info['shortName']
except:
sector = 'None'
name = 'None'
adj_close[sector, symbol] = web.get_data_yahoo(symbol, start=start, end=end)['Adj Close']
idx = pd.MultiIndex.from_tuples(adj_close.columns)
adj_close.columns = idx
adj_close.head()
None Basic Materials Industrials Technology
^GSPC_None NQ=F_None AAU_None ALB_Albemarle Corporation AOS_A.O. Smith Corporation APPS_Digital Turbine, Inc.
2018-01-02 2695.810059 6514.75 1.03 125.321663 58.657742 1.79
2018-01-03 2713.060059 6584.50 1.00 125.569397 59.010468 1.87
2018-01-04 2723.989990 6603.50 0.98 124.073502 59.286930 1.86
2018-01-05 2743.149902 6667.75 1.00 125.502716 60.049587 1.96
2018-01-08 2747.709961 6688.00 0.95 130.962250 60.335583 1.96
# for excel
adj_close.to_csv('stock.csv', sep=',')
```
| 7,234
|
18,828,124
|
I am running my Django web app in httpd.
In httpd.conf this is what I have.
```
Listen 8090
User ctaftest
Group ctaftest
```
And after starting httpd server when I do
`netstat -anp |grep httpd`
I get
```
root 31621 1 1 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31625 31621 5 17:23 ? 00:00:02 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31626 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31627 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31628 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31629 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
ctaftest 31646 31621 0 17:23 ? 00:00:00 /usr/local/httpd-python/bin/httpd -k start
```
Note that, other than one process, all the other httpd processes are running as the ctaftest user.
Now this is my problem.
Within my view, if I do
```
dir_path = os.path.expanduser("~/dir_path")
```
I am getting `/root/dirpath` whereas I am expecting `/home/ctaftest/dirpath`
Note: When I use Django development server (runserver) I get the expected output,
`/home/ctaftest/dirpath`
What's wrong when I run from httpd, and how can I get the user `ctaftest` itself as the current user when I run my Django web app from `httpd`?
|
2013/09/16
|
[
"https://Stackoverflow.com/questions/18828124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1371989/"
] |
What does your WSGIDaemonProcess configuration in the rest of your Apache config look like? You can set the user there.
```
WSGIDaemonProcess mysite user=ctaftest group=ctaftest threads=5
```
|
To start with, if you call:
```
os.path.expanduser("dir_path")
```
it should return just:
```
dir_path
```
Did you instead mean:
```
os.path.expanduser("~/dir_path")
```
Anyway, when you use the embedded mode of mod\_wsgi, your code runs in the Apache child worker processes. These processes can be shared with other Apache modules such as the PHP and Perl modules. Because it is a shared environment, neither mod\_wsgi nor any web application code can be presumptuous in thinking it can change the current working directory of the process. As a result, the current working directory is inherited from whatever Apache is started with, which would be the root of the file system.
For similar reasons, you can't go overriding what environment variables may be set and as a result, if Apache passed through HOME as being that for the root user that Apache starts as, then when you use os.path.expanduser('~'), the tilde will be replaced with whatever HOME was set to.
So what you are seeing is quite normal, including one process still running as root, which is the parent Apache process, in which none of your requests are being run anyway, as it just acts as a process monitor to manage the child worker processes, handle restarts etc.
In general, in a web application it is regarded as bad practice to rely on things like the current working directory, the values of environment variables such as HOME, USERNAME, PATH etc, as they aren't always set to sensible things depending on the hosting environment.
That all said, if when using mod\_wsgi, you instead use the preferred daemon mode, then because at that point it is only running your Python web application, mod\_wsgi will override HOME to be the directory for the user that the daemon process runs as. If environment variables such as USER, USERNAME and LOGNAME are set, it will also similarly override those with a value corresponding to what user the daemon process runs as. It will even change the current working directory to be the home directory for that user.
In summary: you should not build such dependencies into a web application, but specify such things via configuration, otherwise you limit portability. If you for some reason don't want to do that, then use the daemon mode of mod\_wsgi instead.
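As a purely illustrative sketch of "specify such things via configuration" (the environment variable name and path below are assumptions, not from the original answer):
```
# settings.py, or any configuration module
import os

# read the data directory from configuration instead of deriving it from HOME
DATA_DIR = os.environ.get("APP_DATA_DIR", "/home/ctaftest/dir_path")
```
The view code would then use `DATA_DIR` rather than `os.path.expanduser("~/dir_path")`.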
| 7,235
|
48,171,851
|
I can't find a command example for archiving a set of files from a given prefix in S3 into a given vault in Glacier using ONLY COMMAND LINE, i.e. no Lifecycles, no python+boto. Thanks.
This doc has a lot of examples but none fit my request:
<https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html>
|
2018/01/09
|
[
"https://Stackoverflow.com/questions/48171851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1872286/"
] |
That's because you can't. As described in [Amazon's S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html):
> You cannot specify GLACIER as the storage class at the time that you create an object. You create GLACIER objects by first uploading objects using STANDARD, RRS, or STANDARD\_IA as the storage class. Then, you transition these objects to the GLACIER storage class using lifecycle management.
|
You're looking for this:
<https://aws.amazon.com/premiumsupport/knowledge-center/restore-s3-object-glacier-storage-class/>
```
aws s3 cp s3://bucketname/key/file s3://bucketname/key/file --storage-class GLACIER
```
optionally use --recursive instead of a specific file name.
| 7,238
|
27,712,101
|
I am trying to sort dictionaries in MongoDB. However, I get the value error "too many values to unpack" because I think it's implying that there are too many values in each dictionary (there are 16 values in each one). This is my code:
```
FortyMinute.find().sort(['Rank', 1])
```
Anyone know how to get around this?
EDIT: Full traceback
```
Traceback (most recent call last):
File "main.py", line 33, in <module>
main(sys.argv[1:])
File "main.py", line 21, in main
fm.readFortyMinute(args[0])
File "/Users/Yih-Jen/Documents/Rowing Project/FortyMinute.py", line 71, in readFortyMinute
writeFortyMinute(FortyMinData)
File "/Users/Yih-Jen/Documents/Rowing Project/FortyMinute.py", line 104, in writeFortyMinute
FortyMinute.find().sort(['Rank', 1])
File "/Users/Yih-Jen/anaconda/lib/python2.7/site-packages/pymongo/cursor.py", line 692, in sort
self.__ordering = helpers._index_document(keys)
File "/Users/Yih-Jen/anaconda/lib/python2.7/site-packages/pymongo/helpers.py", line 65, in _index_document
for (key, value) in index_list:
ValueError: too many values to unpack
```
|
2014/12/30
|
[
"https://Stackoverflow.com/questions/27712101",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4392607/"
] |
You pass the arguments and values **unpacked**, like so:
```
FortyMinute.find().sort('Rank', 1)
```
---
It is only when you're passing **multiple sort parameters** that you group each field and its direction in a tuple, and then you must wrap all of those tuples in a list, like so:
```
FortyMinute.find().sort([('Rank', 1), ('Date', 1)])
```
---
**Pro-tip:** Even the `Cursor.sort` documentation linked below recommends using `pymongo.DESCENDING` and `pymongo.ASCENDING` instead of 1 and -1; in general, you should use descriptive names instead of magic constants in your code, like so:
```
FortyMinute.find().sort('Rank',pymongo.DESCENDING)
```
---
Finally, if you are so inclined, you can sort the results using Python's built-in `sorted`, as another answerer mentioned; but even though `sorted` accepts iterators and not just sequences, it is likely to be less efficient and nonstandard:
```
sorted(FortyMinute.find(), key=key_function)
```
where you might define `key_function` to return the `Rank` column of a record.
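For instance, a minimal sketch of such a key function (illustrative only, assuming each document has a `Rank` field):
```
def key_function(record):
    # sort by the 'Rank' field of each document
    return record['Rank']

sorted(FortyMinute.find(), key=key_function)
```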
---
[Link to the official documentation](http://api.mongodb.org/python/current/api/pymongo/cursor.html#pymongo.cursor.Cursor.sort)
|
If you want mong/pymongo to sort:
```
FortyMinute.find().sort('Rank', 1)
```
If you want to sort using multiple fields:
```
FortyMinute.find().sort([('Rank', 1), ('other', -1)])
```
You also have constants to make it more clear what you're doing:
```
FortyMinute.find().sort('Rank',pymongo.DESCENDING)
```
If you want to sort in python first you have to return the result and use a sorting method in python:
```
sorted(FortyMinute.find(), key=<some key...>)
```
| 7,239
|
50,208,381
|
I'm experimenting with developing python flask app, and would like to configure the app to apache as a daemon, so I wouldn't need to restart apache after every change. The configuration is now like [instructed here](https://code.google.com/archive/p/modwsgi/wikis/QuickConfigurationGuide.wiki#Mounting_At_Root_Of_Site%20QuickConfigGuide):
httpd.conf
```
WSGIDaemonProcess /rapo threads=5 display-name=%{GROUP}
WSGIProcessGroup /rapo
WSGIScriptAlias /rapo /var/www/cgi-bin/pycgi/koe.wsgi
```
koe.wsgi contains just
```
import sys
sys.path.insert(0, "/var/www/cgi-bin/pycgi")
from koe2 import app as application
```
And in koe2.py there is
```
@app.route('/rapo')
def hello_world():
return 'Hello, World!'
```
that output I can see when I go to the webserver's /rapo/hello -path, so flask works, but the daemon configuration does not (I still need to restart to see any changes made). Here with [similar problem](https://stackoverflow.com/a/2116481/364931) it seems the key was that the names match, and they do. SW versions: Apache/2.4.6 (CentOS) PHP/5.4.16 mod\_wsgi/3.4.
We don't have any [virtual hosts](http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html) defined in the httpd.conf, which might be the missing thing, as that worked [in this case](https://stackoverflow.com/q/36733012/364931)? Thanks for any help!
|
2018/05/07
|
[
"https://Stackoverflow.com/questions/50208381",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/364931/"
] |
You can use `sapply` or `lapply` to accomplish it.
```
#supposing your data.frame is called 'df'
sapply(df, unique)
#$x1
#[1] 1 2 3 4 6 7 8
#
#$x2
#[1] 2 5 7 8 9 0
#
#$x3
#[1] 6 5 1 2 3 4
```
or
```
lapply(df, unique)
#$x1
#[1] 1 2 3 4 6 7 8
#
#$x2
#[1] 2 5 7 8 9 0
#
#$x3
#[1] 6 5 1 2 3 4
```
|
```
# Imagine D is your data.frame object
apply(D,1, function(x) rle(x)$values)
```
| 7,240
|
56,034,831
|
I am using Keras with `fit_generator()`. My generator connects to a Database (MongoDB in my case) to fetch data for each batch. If I use the multiprocessing flag of `fit_generator()` I get this Warning:
```
UserWarning: MongoClient opened before fork. Create MongoClient only after forking.
```
I am connecting to the Database during `__init__()`:
```
class MyCustomGenerator(tf.keras.utils.Sequence):
def __init__(self, ...):
collection = MagicMongoDBConnector()
def __len__(self):
...
def __getitem__(self, idx):
# Using collection to fetch data from mongoDB
...
def on_epoch_end(self):
...
```
I would assume I need to have a separate connection for each epoch, but unfortunately, there is no `on_epoch_begin(self)` callback available (as seen [here](https://github.com/tensorflow/tensorflow/blob/v2.0.0-alpha0/tensorflow/python/keras/utils/data_utils.py)).
So two questions:
How and when does Keras fork the Generator if multiprocessing is used?
How can I get rid of the MongoClient warning and connect inside each fork?
|
2019/05/08
|
[
"https://Stackoverflow.com/questions/56034831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3531894/"
] |
I don't have a MongoDB instance to test on, but this might work - you can create the collection (connection) lazily on the first `__getitem__` call of each process.
```py
class MyCustomGenerator(tf.keras.utils.Sequence):
def __init__(self, ...):
self.collection = None
def __len__(self):
...
def __getitem__(self, idx):
if self.collection is None:
self.collection = MagicMongoDBConnector()
# Continue with your code
# Using collection to fetch data from mongoDB
...
def on_epoch_end(self):
...
```
|
If you're using Python 3.7 or later, you could use [os.register\_at\_fork](https://docs.python.org/3/library/os.html#os.register_at_fork) to trigger recreating the database connection.
For example, you could do something like:
```
from os import register_at_fork
def reinit_dbcon():
generator_obj.collection = MagicMongoDBConnector()
register_at_fork(after_in_child=reinit_dbcon)
```
somewhere before you call `fit_generator`, assuming the generator object is accessible globally.
| 7,243
|
13,927,122
|
Trying to get this line of code to work, I keep running into issues no matter how I change the formatting around:
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
```
(year, month, day) can be either ints or strings.
Traceback:
```
Traceback (most recent call last):
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/toolbar.py", line 117, in toolbar_tween
response = _handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.3-py2.7.egg/pyramid_debugtoolbar/panels/performance.py", line 55, in resource_timer_handler
result = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/tweens.py", line 20, in excview_tween
response = handler(request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/router.py", line 161, in handle_request
response = view_callable(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 342, in rendered_view
result = view(context, request)
File "/home/tinyup/dev/lib/python2.7/site-packages/pyramid-1.4a3-py2.7.egg/pyramid/config/views.py", line 456, in _class_requestonly_view
response = getattr(inst, attr)()
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 56, in view_process
return self.handle_file_upload(self.request.params['file'], shareID)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 101, in handle_file_upload
self.save(file, newFileName, isImage, uploadTime)
File "/home/tinyup/dev/tinyuploads/tinyuploads/views/share.py", line 166, in save
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL, [str(x) for x in [year, month, day]])):
File "/home/tinyup/dev/lib/python2.7/posixpath.py", line 66, in join
if b.startswith('/'):
AttributeError: 'list' object has no attribute 'startswith'
```
|
2012/12/18
|
[
"https://Stackoverflow.com/questions/13927122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1247832/"
] |
You are missing the '\*' here:
```
>>> os.path.join('foo', *['a','b'])
'foo/a/b'
```
You have to use the star operator here in order to pass the list items as unpacked variable argument list to the method.
|
add \* before `[str(x) for x in [year, month, day]]`
`*[str(x) for x in [year, month, day]]`
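Applied to the call from the question, the corrected line would look roughly like this (a sketch, not tested against the original code):
```
if not os.path.exists(os.path.join(IncludeSettings.FILE_URL,
                                   *[str(x) for x in [year, month, day]])):
    ...
```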
| 7,244
|
36,637,428
|
Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though [this nltk question](https://stackoverflow.com/questions/35013726/typeerror-ufunc-add-did-not-contain-a-loop-with-signature-matching-types) and [this gdsCAD question](https://stackoverflow.com/questions/34264282/typeerror-ufunc-add-did-not-contain-a-loop) are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
```
if n > 1:
return diff(a[slice1]-a[slice2], n-1, axis=axis)
else:
> return a[slice1]-a[slice2]
E TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
```
|
2016/04/15
|
[
"https://Stackoverflow.com/questions/36637428",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6201350/"
] |
I got the same error, but in my case I was subtracting a dict key from a dict value. I fixed it by subtracting the dict value for the corresponding key from the other dict value.
```
cosine_sim = cosine_similarity(e_b-e_a, w-e_c)
```
Here I got the error because e\_b, e\_a and e\_c are the embedding vectors for the words a, b and c respectively. I didn't know that 'w' was a string; once I worked out that w was a string, I fixed it with the following line:
```
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w]-e_c)
```
Instead of subtracting the dict key, I now subtract the corresponding value for that key.
|
I ran into the same issue, but in my case a plain Python list was being used instead of a NumPy array. Using two NumPy arrays solved the issue for me.
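A minimal sketch of how that situation can arise and one way around it, assuming the underlying values are actually numeric (not from the original answer):
```
import numpy as np

values = ['1', '2', '3']   # a plain list of strings
arr = np.array(values)     # dtype('<U1'); subtracting from this raises the ufunc 'subtract' error

nums = np.array(values, dtype=float)  # convert to a numeric array instead
print(nums - 1.0)                     # [0. 1. 2.]
```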
| 7,249
|
18,624,148
|
I'm struggling to find documentation on what the `^` operator does in Python.
EX.
> 6^1 = 7
> 6^2 = 4
> 6^3 = 5
> 6^4 = 2
> 6^5 = 3
> 6^6 = 0
Help?
|
2013/09/04
|
[
"https://Stackoverflow.com/questions/18624148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2748552/"
] |
It is the [bitwise exclusive-or operator](https://en.wikipedia.org/wiki/Xor#Bitwise_operation), often called "xor". For each pair of corresponding bits in the operands, the corresponding bit in the result is 0 if the operand bits are the same, 1 if they are different.
Consider `6^4`:
```
6 = 0b0110
4 = 0b0100
6^4 = 0b0010 = 2
```
As you can see the least-significant bit (the one on the right, in the "one's" place) is zero in both numbers. Thus the least-significant bit in the answer is zero. The next bit is `1` in the first operand and `0` in the second, so the result is `1`.
XOR has some interesting properties:
```
a^b == b^a # xor is commutative
a^(b^c) == (a^b)^c # xor is associative
(a^b)^b == a # xor is reversible
0^a == a # 0 is the identity value
a^a == 0 # xor yourself and you go away.
```
You can change the oddness of a value with xor:
```
prev_even = odd ^ 1 (2 = 3 ^ 1)
next_odd = even ^ 1 (3 = 2 ^ 1)
```
|
For more information on XOR, please read the documentation on Python.org here:
<http://docs.python.org/2/library/operator.html>
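As a quick sanity check of the table from the question (a small sketch, not part of the original answer):
```
for n in range(1, 7):
    print('6^{} = {}'.format(n, 6 ^ n))  # 7, 4, 5, 2, 3, 0
```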
| 7,259
|
38,943,673
|
I am new to python and I am working on a project that needs to write a dictionary into a text file. The format is like:
```
{'17': [('25', 5), ('23', 3)], '12': [('28', 3), ('22', 3)], '13': [('28', 3), ('23', 3)], '16': [('22', 3), ('21', 3)], '11': [('28', 3), ('29', 1)], '14': [('22', 3), ('23', 3)], '15': [('26', 2), ('24', 2)]}.
```
as you can see, the values in the dictionary are always lists. I would like to write the below into the text file:
17, 25, 5 \n
17, 23, 3 \n
12, 28, 3 \n
12, 22, 3 \n
13, 28, 3 \n
13, 23, 3 \n
...
\n stands for a new line
This means the keys are to be repeated for each value inside the list that 'belongs' to those keys. The reason is that I need to read the text file back into a database for further analysis.
I have tried searching for an answer for the past few days and tried many ways, but I just cannot get it into this format. I'd appreciate it if any of you have a solution for this.
Thanks a lot!
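For what it's worth, a minimal sketch of the repeat-the-key-per-value idea described above (the file name and the shortened dict literal are assumptions):
```
data = {'17': [('25', 5), ('23', 3)], '12': [('28', 3), ('22', 3)]}

with open('output.txt', 'w') as f:
    for key, pairs in data.items():
        for a, b in pairs:
            # repeat the key once per (value, count) pair in its list
            f.write("{}, {}, {}\n".format(key, a, b))
```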
|
2016/08/14
|
[
"https://Stackoverflow.com/questions/38943673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6715128/"
] |
You can achieve this using separate routes, or change your parameters to be optional.
When using 3 attributes, you add separate routes for each of the options that you have - when no parameters are specified, when only `movieId` is specified, and when all 3 parameters are specified.
```
[Route("Everything/MovieCustomer/")]
[Route("Everything/MovieCustomer/{movieId}")]
[Route("Everything/MovieCustomer/{movieId}/{customerId}")]
public ActionResult MovieCustomer(int? movieId, int? customerId)
{
// the rest of the code
}
```
Alternatively, you can change your route parameters to optional (by adding `?` in the route definition) and this should cover all 3 cases that you have:
```
[Route("Everything/MovieCustomer/{movieId?}/{customerId?}")]
public ActionResult MovieCustomer(int? movieId, int? customerId)
{
// the rest of the code
}
```
Keep in mind that neither sample supports the case where you provide only `customerId`.
|
> Keep in mind that neither sample supports the case where you provide only customerId.
Check it out. I think you can use the multiple route method with EVEN ANOTHER route like this if you do want to provide only customerId:
```
[Route("Everything/MovieCustomer/null/{customerId}")]
```
| 7,260
|
47,117,625
|
I want to split any matrix (most likely a 3x4) into two parts: the left-hand side, and the right-hand side, which is only the last column.
```
[[1,0,0,4], [[1,0,0], [4,
[1,0,0,2], ---> A= [1,0,0], B = 2,
[4,3,1,6]] [4,3,1]] 6]
```
Is there a way to do this in Python and assign them as A and B?
Thank you!
|
2017/11/05
|
[
"https://Stackoverflow.com/questions/47117625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8606331/"
] |
Yes, you could do it like this:
```
def split_last_col(mat):
"""returns a tuple of two matrices corresponding
to the Left and Right parts"""
A = [line[:-1] for line in mat]
B = [line[-1] for line in mat]
return A, B
split_last_col([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```
### output:
```
([[1, 2], [4, 5], [7, 8]], [3, 6, 9])
```
|
You could create A and B manually, like this:
```
def split(matrix):
a = list()
b = list()
for row in matrix:
row_length = len(row)
a_row = list()
for index, col in enumerate(row):
if index == row_length - 1:
b.append(col)
else:
a_row.append(col)
a.append(a_row)
return a, b
```
Or using list comprehensions:
```
def split(matrix):
a = [row[:len(row) - 1] for row in matrix]
b = [row[len(row) - 1] for row in matrix]
return a, b
```
Example:
```
matrix = [
[1, 0, 0, 4],
[1, 0, 0, 2],
[4, 3, 1, 6]
]
a, b = split(matrix)
print("A: %s" % str(a)) # Output ==> A: [[1, 0, 0], [1, 0, 0], [4, 3, 1]]
print("B: %s" % str(b)) # Output ==> B: [4, 2, 6]
```
| 7,262
|
59,363,950
|
I'm trying to get started with Tensorflow-Hub to extract feature vectors from images. However, I'm not sure how one is meant to convert Tensorflow-Hub outputs (Tensors) to numpy vectors. Here's a simple example:
```
from keras.preprocessing.image import load_img
import tensorflow_hub as hub
import tensorflow as tf
import numpy as np
im = load_img('sample.png')
im = np.expand_dims(im.resize((299,299)), 0)
module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1")
out = module(im)
o = np.add(out, 0)
type(o)
```
The [docs](https://www.tensorflow.org/tutorials/customization/basics) indicate that "NumPy operations automatically convert Tensors to NumPy ndarrays", but my `np.add()` call above returns object type `tensorflow.python.framework.ops.Tensor`. Does anyone know how I can obtain a numpy array from `out`? Any pointers would be appreciated!
**Versions**:
```
# output from `pip freeze | grep tensorflow`
tensorflow==1.14.0
tensorflow-estimator==1.14.0
tensorflow-hub==0.1.1
tensorflow-probability==0.6.0
```
|
2019/12/16
|
[
"https://Stackoverflow.com/questions/59363950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1727392/"
] |
The following should work. I did not check whether the output is meaningful, but it returns consistent results over multiple runs.
```
im = load_img('sample.png')
im = np.expand_dims(im.resize((299,299)), 0)
module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1")
out = module(im)
with tf.Session() as sess:
tf.global_variables_initializer().run()
o = sess.run(out)
o = np.add(o, 0)
print(type(o))
```
|
You can use
```py
out.numpy()
type(out)
# <class 'tensorflow.python.framework.ops.EagerTensor'>
type(out.numpy())
# <class 'numpy.ndarray'>
```
| 7,263
|
44,805,535
|
I am an Anaconda user and Jupyter is a neat tool to run Python code. However, on my MacBook I can't open it in Chrome ("This page isn't working:
localhost didn't send any data."), but it works in Safari. I have tried to reinstall Chrome, but I still can't fix it. My system is Mac OS 10.11.5.
Who knows how I can fix it?
I can understand that the problem might be not specific enough, but I have been puzzled by this problem for quite a period of time.
|
2017/06/28
|
[
"https://Stackoverflow.com/questions/44805535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3013618/"
] |
You could change your approach to avoid fixed padding values:
```css
#secondary-menu {
background: #007dc5;
width: 80%;
}
ul#topnav {
list-style: none;
padding: 0;
height: 70px;
display: flex;
align-items: center;
justify-content: center;
}
ul#topnav li a {
text-transform: uppercase;
text-decoration: none;
color: #fff;
}
ul#topnav a.home:hover {
margin: 0px;
padding: 0px;
background: transparent;
border: 0px none;
}
a.cat,
ul#topnav li {
text-align: center;
cursor: pointer;
height: 100%;
display: flex;
align-items: center;
justify-content: center;
flex: 1;
}
ul#topnav li a:hover {
background-color: #e6e7eb;
color: #007dc5;
}
```
```html
<div id="secondary-menu">
<ul id="topnav">
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Long Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Another Long Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
</ul>
</div>
```
[fiddle](https://jsfiddle.net/z44eao79/1/)
|
Add `overflow:hidden` to your `ul#topnav` rules:
```
ul#topnav {
list-style: none;
margin: 0 auto;
height: 70px;
display: flex;
align-items: center;
overflow:hidden;
}
```
```css
#secondary-menu {
background: #007dc5;
width: 80%;
}
ul#topnav {
list-style: none;
margin: 0 auto;
height: 70px;
display: flex;
align-items: center;
overflow: hidden;
}
ul#topnav li a {
display: block;
text-transform: uppercase;
text-decoration: none;
color: #fff;
padding: 26px 20px;
}
ul#topnav a.home:hover {
margin: 0px;
padding: 0px;
background: transparent;
border: 0px none;
}
a.cat,
ul#topnav li {
text-align: center;
cursor: pointer;
}
ul#topnav li a:hover {
background-color: #e6e7eb;
color: #007dc5;
}
```
```html
<div id="secondary-menu">
<ul id="topnav">
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Long Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Another Long Heading</a></li>
<li class="col-sm-4 secondary-menu-dropdown col-md-2 col-lg-2 "><a href="#" class="cat">Heading</a></li>
</ul>
</div>
```
| 7,264
|
54,411,732
|
So far I'm able to print the summary at the end if the user selects 'n' to not order another hard drive, but I need to write it to a file. I've tried running the code as 'python hdorders.py >> orders.txt', but then it won't prompt for the questions; it only shows a blank line, and if I break out using Ctrl-C it writes blank entries and the while-loop output to the file. I hope this makes sense.
```
ui = raw_input("Would you like to order more hard drives?(y/n) ")
if ui == 'n':
print '\n','\n',"**** Order Summary ****",'\n',row,'\n',"Number of HD's:",b,'\n',"Disk Slot Position(s):",c,'\n',"Disk Size(s):",d,"GB",'\n',"Dimensions:",e,'\n','\n',
endFlag = True
```
I'd also like it so that if they select 'y', it will save to a file and start over for another disk order (saving the previous info to the file first). Then once they are done (for example going through the program twice) and select 'n', it will have the final details appended to the same file as the first order.
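A minimal sketch of appending each order summary to a file from inside the script instead of redirecting stdout (the file name and summary string are assumptions; `b` stands for a value already collected in the program):
```
summary = "**** Order Summary ****\nNumber of HD's: {}\n".format(b)
with open('orders.txt', 'a') as f:  # 'a' appends, so earlier orders are kept
    f.write(summary)
```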
|
2019/01/28
|
[
"https://Stackoverflow.com/questions/54411732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9638138/"
] |
No. One team project has one process template. You can customize that process template however you wish, of course.
|
What you could do is create an inherited process in order to make your customizations, and change every Team Project to that Process.
You have to take into account that the customizations you have made to your Team Project could be affected when you change to an inherited process.
Test carefully with some test Team Projects first.
| 7,265
|
35,719,165
|
I have a python program with one main thread and let's say 2 other threads (or maybe even more, probably doesn't matter). I would like to let the main thread sleep until ONE of the other threads is finished. It's easy to do with polling (by calling t.join(1) and waiting for one second for every thread t).
Is it possible to do it without polling, just by
```
SOMETHING_LIKE_JOIN(1, [t1, t2])
```
where t1 and t2 are threading.Thread objects? The call must do the following: sleep 1 second, but wake up as soon as one of t1,t2 is finished. Quite similar to POSIX select(2) call with two file descriptors.
|
2016/03/01
|
[
"https://Stackoverflow.com/questions/35719165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2384856/"
] |
Here is an example of using a condition object.
```
from threading import Thread, Condition, Lock
from time import sleep
from random import random
_lock = Lock()
def run(idx, condition):
sleep(random() * 3)
print('thread_%d is waiting for notifying main thread.' % idx)
_lock.acquire()
with condition:
print('thread_%d notifies main thread.' % idx)
condition.notify()
def is_working(thread_list):
for t in thread_list:
if t.is_alive():
return True
return False
def main():
condition = Condition(Lock())
thread_list = [Thread(target=run, kwargs={'idx': i, 'condition': condition}) for i in range(10)]
with condition:
with _lock:
for t in thread_list:
t.start()
while is_working(thread_list):
_lock.release()
if condition.wait(timeout=1):
print('do something')
sleep(1) # <-- Main thread is doing something.
else:
print('timeout')
for t in thread_list:
t.join()
if __name__ == '__main__':
main()
```
I don't think there is a race condition as you described in the comment. The condition object contains a Lock. When the main thread is working (sleep(1) in the example), it holds the lock and no thread can notify it until it finishes its work and releases the lock.
---
I just realized that there is a race condition in the previous example after all. I added a global \_lock to ensure the condition never notifies the main thread until the main thread starts waiting. I don't like how it works, but I haven't figured out a better solution...
|
You can create a Thread class and have the main thread keep a reference to it, so you can check whether the thread has finished and let your main thread continue easily.
If that doesn't help you, I suggest you look at the **Queue** library!
```
import threading
import time, random
#THREAD CLASS#
class Thread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self.daemon = True
self.state = False
#START THREAD (THE RUN METHODE)#
self.start()
#THAT IS WHAT THE THREAD ACTUALLY DOES#
def run(self):
#THREAD SLEEPS FOR A RANDOM TIME RANGE#
time.sleep(random.randrange(5, 10))
#AFTERWARDS IS HAS FINISHED (STORE IN VARIABLE)#
self.state = True
#RETURNS THE STATE#
def getState(self):
return self.state
#10 SEPERATE THREADS#
threads = []
for i in range(10):
threads.append(Thread())
#MAIN THREAD#
while True:
#RUN THROUGH ALL THREADS AND CHECK FOR ITS STATE#
for i in range(len(threads)):
if threads[i].getState():
print "WAITING IS OVER: THREAD ", i
#SLEEPS ONE SECOND#
time.sleep(1)
```
| 7,266
|
35,667,252
|
I have installed the Python 3.5 interpreter on my device (Windows).
Can anybody guide me through the process of using packages such as `SublimeREPL` to run it?
|
2016/02/27
|
[
"https://Stackoverflow.com/questions/35667252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5987890/"
] |
Yes, you can use any Python version you want to run programs from Sublime - you just need to define a new [build system](http://sublimetext.info/docs/en/reference/build_systems.html). Select **`Tools -> Build System -> New Build System`**, then delete its contents and replace it with:
```js
{
"cmd": ["C:/Python35/python.exe", "-u", "$file"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"selector": "source.python"
}
```
assuming that `C:/Python35/python.exe` is the correct path. If `python.exe` resides someplace else, just put in the correct path, using forward slashes `/` instead of the Windows standard backward slashes `\`.
Save the file as `Packages/User/Python3.sublime-build`, where `Packages` is the folder opened by selecting **`Preferences -> Browse Packages...`** - Sublime should already automatically save it in the right directory. Now, there will be a **`Tools -> Build System -> Python3`** option that you can select for running files with Python 3.
For details on setting up SublimeREPL with Python 3, please follow the instructions in [my answer here](https://stackoverflow.com/a/20861527/1426065).
|
If you have installed Python 3 and SublimeREPL, you can try setting up key bindings with the correct path to the Python 3 executable.
```
[
{
"keys":["super+ctrl+r"],
"command": "repl_open",
"caption": "Python 3.6 - Open File",
"id": "repl_python",
"mnemonic": "p",
"args": {
"type": "subprocess",
"encoding": "utf8",
"cmd": ["The directory to your python3.6 file", "-i", "$file"],
"cwd": "$file_path",
"syntax": "Packages/Python/Python.tmLanguage",
"external_id": "python",
"extend_env": {"PYTHONIOENCODING": "utf-8"}
}
}
]
```
You can try by copying this code into your /Sublime Text 3/Preferences/Key Bindings/
Hope this helps!
| 7,269
|
36,142,393
|
In the terminal, after I enter the python interpreter I use `help('modules')` to see which modules are installed but Numpy, matplotlib and scipy are not listed.
When I try to import them, I get the following:
> ImportError: no module named xxx.
However, when I try to install these modules using `apt-get install xxx` I get a message saying:
> python-xxx is already the newest version.
Is it possible I somehow have two versions of python, one with matplotlib and the other without it? Could this be linked to a separate problem I'm having with Spyder where the interpreter no longer works? See [here](https://stackoverflow.com/questions/36072477/interpreter-stopped-working-in-spyder).
I am using python 2.7. When I run which python I get: `/usr/local/bin/python`.
When I run `/usr/bin/local/python` I get:
```
Python 2.7.9 (default, Mar 18 2016, 20:34:01)
[GCC 4.8.4] on linux2
```
When I run `dpkg -l spyder` I get:
```
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig- aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-============-============- =================================
ii spyder 2.3.0+dfsg-4 all python IDE for scientists (Python
```
|
2016/03/21
|
[
"https://Stackoverflow.com/questions/36142393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5255941/"
] |
You have a typo in the iframe rule - that might be the cause, since the absolute positioning won't work as expected:
```
iframe{
...
possition: absolute; ---> must be "position"
}
```
|
You have position spelled wrong in your CSS.
| 7,270
|
58,724,581
|
Below is my playbook which has a variable `running_processes` which contains a list of pids(one or more)
Next, I read the user ids for each of the pids. All good so far.
I then try to print the list of user ids in `curr_user_ids` variable using `-debug module` is when i get the error: 'dict object' has no attribute 'stdout\_lines'
I was expecting the `curr_user_ids` to contain one or more entries as evident from the output shared below.
```
- name: Get running processes list from remote host
shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
changed_when: false
register: running_processes
- name: Gather USER IDs from processes id before killing.
shell: "id -nu `cat /proc/{{ running_processes.stdout }}/loginuid`"
register: curr_user_ids
with_items: "{{ running_processes.stdout_lines }}"
- debug: msg="USER ID LIST HERE:{{ curr_user_ids.stdout }}"
with_items: "{{ curr_user_ids.stdout_lines }}"
TASK [Get running processes list from remote host] **********************************************************************************************************
task path: /app/wls/startstop.yml:22
ok: [10.9.9.111] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "cmd": "ps -few | grep java | grep -v grep | awk '{print $2}'", "delta": "0:00:00.166049", "end": "2019-11-06 11:49:42.298603", "rc": 0, "start": "2019-11-06 11:49:42.132554", "stderr": "", "stderr_lines": [], "stdout": "24032", "stdout_lines": ["24032"]}
TASK [Gather USER IDS of processes id before killing.] ******************************************************************************************************
task path: /app/wls/startstop.yml:59
changed: [10.9.9.111] => (item=24032) => {"ansible_loop_var": "item", "changed": true, "cmd": "id -nu `cat /proc/24032/loginuid`", "delta": "0:00:00.116639", "end": "2019-11-06 11:46:41.205843", "item": "24032", "rc": 0, "start": "2019-11-06 11:46:41.089204", "stderr": "", "stderr_lines": [], "stdout": "user1", "stdout_lines": ["user1"]}
TASK [debug] ************************************************************************************************************************************************
task path: /app/wls/startstop.yml:68
fatal: [10.9.9.111]: FAILED! => {"msg": "'dict object' has no attribute 'stdout_lines'"}
```
Can you please suggest why I am getting the error and how I can resolve it?
|
2019/11/06
|
[
"https://Stackoverflow.com/questions/58724581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11143113/"
] |
A few points on why your solution didn't work.
The task `Get running processes list from remote host` returns a newline-separated (`\n`) string, so you will need to process this and turn the output into a proper list object first.
The task `Gather USER IDs from processes id before killing.` returns a dictionary containing the key `results`, whose value is a list, so you will need to iterate over it and fetch the `stdout` value of each element.
This is how I solved it.
```
---
- hosts: "localhost"
gather_facts: true
become: true
tasks:
- name: Set default values
set_fact:
process_ids: []
user_names: []
- name: Get running processes list from remote host
shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
changed_when: false
register: running_processes
- name: Register a list of Process ids (Split newline from output before)
set_fact:
process_ids: "{{ running_processes.stdout.split('\n') }}"
- name: Gather USER IDs from processes id before killing.
shell: "id -nu `cat /proc/{{ item }}/loginuid`"
register: curr_user_ids
with_items: "{{ process_ids }}"
- name: Register a list of User names (Out of result from before)
set_fact:
user_names: "{{ user_names + [item.stdout] | unique }}"
when: item.rc == 0
with_items:
- "{{ curr_user_ids.results }}"
- name: Set unique entries in User names list
set_fact:
user_names: "{{ user_names | unique }}"
- name: DEBUG
debug:
msg: "{{ user_names }}"
```
|
The variable *curr\_user\_ids* registers the results of each iteration
```
register: curr_user_ids
with_items: "{{ running_processes.stdout_lines }}"
```
The list of the results is stored in
```
curr_user_ids.results
```
Take a look at the variable
```
- debug:
var: curr_user_ids
```
and loop the stdout\_lines
```
- debug:
var: item.stdout_lines
loop: "{{ curr_user_ids.results }}"
```
| 7,271
|
29,634,019
|
I'm not sure what I'm doing wrong here, and am hoping someone else has the same problem. I don't get any error, and my json matches what should be correct both on Jira's docs and jira-python questions online. My versions are valid Jira versions. I also have no problem doing this directly through the API, but we are re-writing everything to go through jira-python for cleanliness/ease of use.
This just completely clears the fixVersions field in Jira.
```
issue=jira.issue("TKT-100")
issue.update(fields={'fixVersions':[{'add': {'name': 'add_me'}},{'remove': {'name': 'remove_me'}}]})
```
I can add a new version to fixVersions using issue.add\_field\_value(), but this won't work, because I need to add and remove in one request for the history of the ticket.
```
issue.add_field_value('fixVersions', {'name': 'add_me'})
```
Any ideas?
|
2015/04/14
|
[
"https://Stackoverflow.com/questions/29634019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797963/"
] |
Here's a code example of how I got it working for anyone who comes across this later...
```
fixVersions = []
issue = jira.issue('issue_key')
for version in issue.fields.fixVersions:
if version.name != 'version_to_remove':
fixVersions.append({'name': version.name})
fixVersions.append({'name': 'version_to_add'})
issue.update(fields={'fixVersions': fixVersions})
```
|
I did it another way:
1. Create the version in the target project.
2. Update the ticket.
`ver = jira.create_version(name='version_name', project='PROJECT_NAME')`
`issue = jira.issue('ISSUE_NUM')`
`issue.update(fields={'fixVersions': [{'name': ver.name}]})`
In my case that worked.
| 7,272
|
16,209,640
|
Here is a strange issue I'm facing with wxPython on Mac, though it works completely fine with wxPython on Windows 7. I'm trying to update a wx.StaticText label before and after time.sleep() like this:
```
self.lblStatus = wx.StaticText(self, label="", pos=(180, 80))
self.lblStatus.SetLabel("Processing....")
time.sleep(10)
```
In the above code, the label "Processing..." does not become visible until time.sleep() completes its 10 seconds, i.e. SetLabel only takes visible effect after 10 seconds.
On windows7/wxpython works as expected but on Mac I'm facing the issue.
|
2013/04/25
|
[
"https://Stackoverflow.com/questions/16209640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1324914/"
] |
I have never seen time.sleep() NOT block the GUI on Windows. The sleep function blocks wx's main loop, plain and simple. As JHolta mentioned, you can put the sleep into a thread and update the GUI from there, assuming you use a threadsafe method, such as wx.CallAfter, wx.CallLater or wx.PostEvent.
But if you just want to arbitrarily reset a label every now and then, I think using a wx.Timer() is much simpler.
* <http://www.blog.pythonlibrary.org/2009/08/25/wxpython-using-wx-timers/>
* <http://wiki.wxpython.org/Timer>
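Here is a minimal sketch of the timer approach, assuming a one-shot 10-second timer stands in for the `time.sleep(10)` from the question (the frame class name and labels are just placeholders of mine, not from the original code):

```
import wx

class MyFrame(wx.Frame):
    def __init__(self):
        super(MyFrame, self).__init__(None, title="Timer example")
        self.lblStatus = wx.StaticText(self, label="", pos=(180, 80))

        # A one-shot timer replaces the blocking time.sleep(10):
        # the label update is visible immediately, and on_timer runs 10 s later.
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)

        self.lblStatus.SetLabel("Processing....")
        self.timer.Start(10000, oneShot=True)

    def on_timer(self, event):
        self.lblStatus.SetLabel("Done")

if __name__ == "__main__":
    app = wx.App(False)
    MyFrame().Show()
    app.MainLoop()
```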
|
The wxPython GUI is a loop; to make a part of the code sleep without causing the GUI to sleep, one would need to use multithreading.
I would write a function that calls a threaded function. This is a dirty example, but it should show you what needs to be done:
```
import wx
from threading import Thread
import time
from wx.lib.pubsub import setuparg1
from wx.lib.pubsub import pub as Publisher
class Example(wx.Frame):
def __init__(self, *args, **kw):
super(Example, self).__init__(*args, **kw)
self.SetTitle('This is a threaded thing')
self.st1 = wx.StaticText(self, label='', pos=(30, 10))
self.SetSize((250, 180))
self.Centre()
self.Show(True)
self.Bind(wx.EVT_MOVE, self.OnMove)
# call updateUI when the thread returns something
Publisher.subscribe(self.updateUI, "update")
def OnMove(self, evt):
''' On frame movement call thread '''
x, y = evt.GetPosition()
C = SomeClass()
C.runthread(x, y)
def updateUI(self, evt):
''' Update label '''
data = evt.data
self.st1.SetLabel(data)
class SomeClass:
def runthread(self, x,y):
''' Start a new thread '''
t = Thread(target=self._runthread, args=(x,y,))
t.start()
def _runthread(self, x,y):
''' this is a threaded function '''
wx.CallAfter(Publisher.sendMessage, "update", "processing...")
time.sleep(3)
wx.CallAfter(Publisher.sendMessage, "update", "%d, %d" % (x, y))
def main():
ex = wx.App()
Example(None)
ex.MainLoop()
if __name__ == '__main__':
main()
```
Now the thread is initialized as soon as you try to move the "frame"/window, and will return the current position of the window.
wx.CallAfter() is a threadsafe call to the GUI-thread, and only sends the data when the GUI-thread is ready to receive.
The Publisher module simplifies the task of sending the data from our thread to the GUI.
I will suggest reading this: <http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/>
| 7,275
|
39,024,816
|
```
#####################################
# Portscan TCP #
# #
#####################################
# -*- coding: utf-8 -*-
#!/usr/bin/python3
import socket
ip = input("Digite o IP ou endereco: ")
ports = []
count = 0
while count < 10:
ports.append(int(input("Digite a porta: ")))
count += 1
for port in ports:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.settimeout(0.05)
code = client.connect_ex((ip, port)) #conecta e traz a msg de erro
#Like connect(address), but return an error indicator instead of raising an exception for errors
if code == 0: #0 = Success
print (str(port) + " -> Porta aberta")
else:
print (str(port) + " -> Porta fechada")
print ("Scan Finalizado")
```
The Python script above is a TCP connect scan. How can I change it into a TCP SYN scan? How do I create a port scanner that uses the TCP SYN method?
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39024816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6693417/"
] |
As @Upsampled mentioned, you might use raw sockets (<https://en.wikipedia.org/>) as you only need a subset of the TCP protocol (send **SYN** and receive **RST-ACK** or **SYN-ACK**).
While coding something like <http://www.binarytides.com/raw-socket-programming-in-python-linux/>
could be a good exercise, I would also suggest considering <https://github.com/secdev/scapy>
>
> Scapy is a powerful Python-based interactive packet manipulation
> program and library.
>
>
>
Here's the code sample that already implements a simple port scanner
<http://pastebin.com/YCR3vp9B> and a detailed article on what it does:
<http://null-byte.wonderhowto.com/how-to/build-stealth-port-scanner-with-scapy-and-python-0164779/>
The code is a little bit ugly but it works — I've checked it from my local Ubuntu PC against my VPS.
Here's the most important code snippet (slightly adjusted to conform to PEP8):
```
# Assumes: from scapy.all import IP, TCP, sr1, RandShort
# and that `target` (host) and `port` are set by the surrounding scanner loop.
SYNACK = 0x12  # TCP flags value for SYN+ACK

# Generate a random source port number
srcport = RandShort()
# Send SYN and receive RST-ACK or SYN-ACK
SYNACKpkt = sr1(IP(dst=target) /
                TCP(sport=srcport, dport=port, flags="S"))
# Extract flags of received packet
pktflags = SYNACKpkt.getlayer(TCP).flags
if pktflags == SYNACK:
    # port is open
    pass
else:
    # port is not open
    # ...
    pass
```
|
First, you will have to generate your own SYN packets using RAW sockets. You can find an example [here](http://www.binarytides.com/raw-socket-programming-in-python-linux/)
Second, you will need to listen for SYN-ACKs from the scanned host in order to determine which ports actually try to start the TCP handshake (SYN, SYN-ACK, ACK). You should be able to detect and parse the TCP header from the applications that respond. From that header you can determine the origin port and thus figure out that a listening application was there.
Also, if you implement this, you have basically made a SYN DDoS utility, because you will be creating a ton of half-open TCP connections.
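To make the second point a bit more concrete, here is a minimal sketch (my own illustration, not code from the linked article) of the receive-and-parse side on Linux with Python 3: it waits for a reply from the scanned host and checks the TCP flags for SYN-ACK or RST. It assumes root privileges and that the SYN packet itself is crafted and sent separately, e.g. as in the linked raw-socket example.

```
import socket
import struct

def wait_for_reply(target_ip, scanned_port, timeout=2.0):
    # Raw TCP socket: on Linux, received packets include the IP header.
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
    s.settimeout(timeout)
    try:
        while True:
            packet, addr = s.recvfrom(65535)
            if addr[0] != target_ip:
                continue
            ihl = (packet[0] & 0x0F) * 4          # IP header length in bytes
            src_port, dst_port, seq, ack, off_flags = struct.unpack(
                "!HHLLH", packet[ihl:ihl + 14])
            if src_port != scanned_port:
                continue
            flags = off_flags & 0x003F            # FIN/SYN/RST/PSH/ACK/URG bits
            if flags & 0x12 == 0x12:              # SYN + ACK -> port open
                return "open"
            if flags & 0x04:                      # RST -> port closed
                return "closed"
    except socket.timeout:
        return "filtered"
    finally:
        s.close()
```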
| 7,276
|
59,159,462
|
I want to find the largest value in a JSON file, using python (so it would be a dictionary).
My JSON has this shape:
```
[{
"probability": 0.623514056,
"boundingBox": { "left": 36, "top": 1, "width": 403, "height": 95 }
},
{
"probability": 0.850905955,
"boundingBox": { "left": 42, "top": 200, "width": 412, "height": 90 }
},
{
"probability": 0.308903724,
"boundingBox": { "left": 79, "top": 309, "width": 690, "height": 125 }
}]
```
I want to find the maximum and the minimum width. Doing 2 "for" loops would take a lot of time (since the JSON is larger than what is shown here). Is there an optimal way to do that, like max(*something*)?
So the output I would like would be:
```
Max Width: 690
Min Width: 403
```
|
2019/12/03
|
[
"https://Stackoverflow.com/questions/59159462",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11476888/"
] |
The cleanest solution is probably:
```
widths = [d['boundingBox']['width'] for d in json_file]
min_value = min(widths)
max_value = max(widths)
```
However, `min` and `max` just use loops under the hood, which you mentioned may be slow. Test the above solution first, and if that is too slow for your needs, you can combine the loops into one:
```
min_value, max_value = float('inf'), float('-inf')
for d in json_file:
value = d['boundingBox']['width']
if value < min_value:
min_value = value
if value > max_value:
max_value = value
```
EDIT: Performance difference is negligible. Go with the first one.
```
Python 3.7.2 (default, Dec 29 2018, 06:19:36)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import timeit
>>> test = """\
... values = [v for v in x]
... min_value = min(values)
... max_value = max(values)
... """
>>> timeit.timeit(stmt=test, number=10000, setup="""import numpy as np; x = np.random.rand(10000)""")
7.404742785263807
>>> test2 = """\
... min_value, max_value = float('inf'), float('-inf')
... for v in x:
... value = v
... if value < min_value:
... min_value = value
... if value > max_value:
... max_value = value
... """
>>> timeit.timeit(stmt=test2, number=10000, setup="""import numpy as np; x = np.random.rand(10000)""")
7.252437709830701
```
|
That's fairly easy to do:
```
max_width = max(d["boundingBox"]["width"] for d in dicts)
min_width = min(d["boundingBox"]["width"] for d in dicts)
```
| 7,277
|
25,060,752
|
Okay, I have a file container that is a product of a web crawler, containing a lot of different file types; likely, but not only, HTML, XML, JPG, PNG, and PDF. Most of the container is HTML text, so I tried to open it with:
```
with open(fname) as f:
content = f.readlines()
```
which basically fails when I hit a PDF. The files are structured in such a way that every file is preceded by a little meta information telling me what kind of file type follows.
Is there a method similar to `.readlines()` in Python to read such files line by line? I don't need the PDFs; I will ignore them anyway, I just want to skip them.
Thanks in advance
Edit:
Example File: [GDrive Link](https://drive.google.com/file/d/0BwukDl7gHHdBeWNtcWRucXdGQ3c/edit?usp=sharing)
|
2014/07/31
|
[
"https://Stackoverflow.com/questions/25060752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3749379/"
] |
You can [retrieve changes](http://msdn.microsoft.com/en-us/library/thc1eetk.aspx) in a `DataTable` using `GetChanges`.
So you can use this code with a `DataGridView`:
```
CType(YourDataGridView.DataSource, DataTable).GetChanges(DataRowState.Modified).Rows
```
|
I have come up with a working solution in C# where I account for a user editing the current cell then performing a Save/Update without moving out of the edited row. The call to `GetChanges()` won't recognize the current edited row due to its `RowState` still being marked as "`Unchanged`". I also make a call to move to the next row in case the user stayed on the current cell being edited as `GetChange()` won't touch the last edited cell.
```
//Move to previous cell of current edited row in case user did not move from last edited cell
dgvMyDataGridView.CurrentCell = dgvMyDataGridView.Rows[dgvMyDataGridView.CurrentCell.RowIndex].Cells[dgvMyDataGridView.CurrentCell.ColumnIndex - 1];
//Attempts to end the current edit of dgvMyDataGridView for row being edited
BindingContext[dgvMyDataGridView.DataSource, dgvMyDataGridView.DataMember.ToString()].EndCurrentEdit();
//Move to next row in case user did not move from last edited row
dgvMyDataGridView.CurrentCell = dgvMyDataGridView.Rows[dgvMyDataGridView.CurrentCell.RowIndex + 1].Cells[0];
//Get all row changes from embedded DataTable of DataGridView's DataSource
DataTable changedRows = ((DataTable)((BindingSource)dgvMyDataGridView.DataSource).DataSource).GetChanges();
foreach (DataRow row in changedRows.Rows)
{
//row["columnName"].ToString();
//row[0].ToString();
//row[1].ToString();
}
```
| 7,280
|
13,452,761
|
I have a table with add/remove buttons; those buttons add and remove rows from the table, and the buttons are also added with each new row.
Here is what I have as HTML:
```
<table>
<tr>
<th>catalogue</th>
<th>date</th>
<th>add</th>
<th>remove</th>
</tr>
<- target row ->
<tr id="cat_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
</- target row ->
</table>
```
JavaScript:
```
$("#Add").click(function() {
$('#cat_row').after('<- target row with ->'); // this is only a notation to prevent repetition
id++;
});
$("#Remove").click(function() {
$('#cat_'+id+'_row').remove();
id--;
});
```
Please note that after each addition of a new row the `id` is also changed; for example, here is the result after clicking the button "Add" **1 time**:
```
<table>
<tr>
<th>catalogue</th>
<th>date</th>
<th>add</th>
<th>remove</th>
</tr>
<tr id="cat_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
<tr id="cat_1_row">
<td>something</td>
<td>something</td>
<td><input id="Add" type="button" value="Add" /></td>
<td><input id="Remove" type="button" value="Remove" /></td>
</tr>
</table>
```
Now the *newly added buttons* have no actions; I must always click the **original buttons** - add/remove.
After this I want to make the **Remove Button** remove **ONLY** the row it is clicked in,
for example if I click the button in row 2, row 2 will be deleted.
---
**Info:
I use web2py 2.2.1 with python 2.7 with the last version of jQuery**
|
2012/11/19
|
[
"https://Stackoverflow.com/questions/13452761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/747201/"
] |
**SOLVED**
Because my service was running in separate process i had to add this flag when accesing shared preference
```
private final static int PREFERENCES_MODE = Context.MODE_MULTI_PROCESS;
```
and change like this
```
sharedPrefs = this.getSharedPreferences("preference name", PREFERENCES_MODE);
```
|
Ensure you write your data to shared preferences correctly, specifically you `commit()` your changes, [as docs say](http://developer.android.com/reference/android/content/SharedPreferences.Editor.html):
>
> All changes you make in an editor are batched, and not copied back to
> the original SharedPreferences until you call commit() or apply()
>
>
>
Here is example code:
```
SharedPreferences.Editor editor = mPrefs.edit();
editor.putBoolean( key, value );
editor.commit();
```
| 7,283
|
16,844,182
|
This is my first time delving into web development in python. My only other experience is PHP, and I never used a framework before, so I'm finding this very intimidating and confusing.
I'm interested in learning CherryPy/Jinja2 to make a ZFS monitor for my NAS. I've read through the basics of the docs on CherryPy/Jinja2, but I find that the samples are disjointed and too simplistic; I don't really understand how to make these 2 things "come together" gracefully.
Some questions I have:
1. Is there a simple tutorial that shows how to make CherryPy and Jinja2 work together nicely? I'm either finding samples that are too simple, like the samples in the CherryPy / Jinja2 docs, or way too complex. (example: <https://github.com/jovanbrakus/cherrypy-example>).
2. Is there a standardized or "expected" way to create web applications for CherryPy? (example: What should my directory structure look like? Is there a way to declare static things; is it even necessary?)
3. Does anyone have recommended literature for this or is the online documentation the best resource?
|
2013/05/30
|
[
"https://Stackoverflow.com/questions/16844182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2437919/"
] |
Congratulations on choosing Python, I'm sure you'll learn to love it as I have.
Regarding CherryPy, I'm not an expert, but I was also in the same boat as you a few days ago and I'd agree that the tutorials are a little disjointed in parts.
For integrating Jinja2, as in their [doc page](http://docs.cherrypy.org/stable/progguide/choosingtemplate.html), the snippet of HTML should have specified that it is the template file and as such is saved at the path /templates/index.html. They also used variables that didn't match up between the template code sample and the controller sample.
The below is instead a complete working sample of a simple hello world using CherryPy and Jinja2
**/main.py:**
```
import cherrypy
from jinja2 import Environment, FileSystemLoader
env = Environment(loader=FileSystemLoader('templates'))
class Root:
@cherrypy.expose
def index(self):
tmpl = env.get_template('index.html')
return tmpl.render(salutation='Hello', target='World')
cherrypy.config.update({'server.socket_host': '127.0.0.1',
'server.socket_port': 8080,
})
cherrypy.quickstart(Root())
```
**/templates/index.html:**
```
<h1>{{ salutation }} {{ target }}</h1>
```
Then in your shell/command prompt, serve the app using:
```
python main.py
```
And in your browser you should be able to see it at `http://localhost:8080`
That hopefully helps you to connect Jinja2 templating to your CherryPy app. CherryPy really is a lightweight and very flexible framework, where you can choose many different ways to structure your code and file structures.
|
Application structure
=====================
First about standard directory structure of a project. There is none, as CherryPy doesn't mandate it, neither it tells you what data layer, form validation or template engine to use. It's all up to you and your requirements. And of course as this is a great flexibility as it causes some confusion to beginners. Here's how a close to real-word application directory structure may look like.
```
. — Python virtual environment
└── website — cherryd to add this to sys.path, -P switch
├── application
│ ├── controller.py — request routing, model use
│ ├── model.py — data access, domain logic
│ ├── view — template
│ │ ├── layout
│ │ ├── page
│ │ └── part
│ └── __init__.py — application bootstrap
├── public
│ └── resource — static
│ ├── css
│ ├── image
│ └── js
├── config.py — configuration, environments
└── serve.py — bootstrap call, cherryd to import this, -i switch
```
Then, standing in the root of the [virtual environment](http://docs.python-guide.org/en/latest/dev/virtualenvs/), you usually do the following to start CherryPy in the development environment. [`cherryd`](https://cherrypy.readthedocs.org/en/3.3.0/deployguide/cherryd.html) is CherryPy's suggested way of running an application.
```
. bin/activate
cherryd -i serve -P website
```
Templating
==========
Now let's look closer to the template directory and what it can look like.
```
.
├── layout
│ └── main.html
├── page
│ ├── index
│ │ └── index.html
│ ├── news
│ │ ├── list.html
│ │ └── show.html
│ ├── user
│ │ └── profile.html
│ └── error.html
└── part
└── menu.html
```
To harness Jinja2's nice feature of [template inheritance](http://jinja.pocoo.org/docs/dev/templates/#template-inheritance), there are layouts which define the structure of a page, i.e. the slots that can be filled in a particular page. You may have a layout for a website and a layout for email notifications. There is also a directory for parts, reusable snippets used across different pages. Now let's see the code that corresponds to the structure above.
I've also made the following available as [a runnable](http://runnable.com/VGnoo-FACl9KWyPE), where it is easier to navigate the files and you can run and play with it. The paths start with `.` as in the first section's tree.
*website/config.py*
```
# -*- coding: utf-8 -*-
import os
path = os.path.abspath(os.path.dirname(__file__))
config = {
'global' : {
'server.socket_host' : '127.0.0.1',
'server.socket_port' : 8080,
'server.thread_pool' : 8,
'engine.autoreload.on' : False,
'tools.trailing_slash.on' : False
},
'/resource' : {
'tools.staticdir.on' : True,
'tools.staticdir.dir' : os.path.join(path, 'public', 'resource')
}
}
```
*website/serve.py*
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from application import bootstrap
bootstrap()
# debugging purpose, e.g. run with PyDev debugger
if __name__ == '__main__':
import cherrypy
cherrypy.engine.signals.subscribe()
cherrypy.engine.start()
cherrypy.engine.block()
```
*website/application/\_\_init\_\_.py*
The notable part here is a CherryPy tool which helps to avoid boilerplate related to rendering templates. You just need to return a `dict` from the CherryPy page handler with data for the template. Following the convention-over-configuration principle, the tool, when not provided with a template name, will use `classname/methodname.html`, e.g. `user/profile.html`. To override the default template you can use `@cherrypy.tools.template(name = 'other/name')`. Also note that the tool exposes a method automatically, so you don't need to add `@cherrypy.expose` on top.
```
# -*- coding: utf-8 -*-
import os
import types
import cherrypy
import jinja2
import config
class TemplateTool(cherrypy.Tool):
_engine = None
'''Jinja environment instance'''
def __init__(self):
viewLoader = jinja2.FileSystemLoader(os.path.join(config.path, 'application', 'view'))
self._engine = jinja2.Environment(loader = viewLoader)
cherrypy.Tool.__init__(self, 'before_handler', self.render)
def __call__(self, *args, **kwargs):
if args and isinstance(args[0], (types.FunctionType, types.MethodType)):
# @template
args[0].exposed = True
return cherrypy.Tool.__call__(self, **kwargs)(args[0])
else:
# @template()
def wrap(f):
f.exposed = True
return cherrypy.Tool.__call__(self, *args, **kwargs)(f)
return wrap
def render(self, name = None):
cherrypy.request.config['template'] = name
handler = cherrypy.serving.request.handler
def wrap(*args, **kwargs):
return self._render(handler, *args, **kwargs)
cherrypy.serving.request.handler = wrap
def _render(self, handler, *args, **kwargs):
template = cherrypy.request.config['template']
if not template:
parts = []
if hasattr(handler.callable, '__self__'):
parts.append(handler.callable.__self__.__class__.__name__.lower())
if hasattr(handler.callable, '__name__'):
parts.append(handler.callable.__name__.lower())
template = '/'.join(parts)
data = handler(*args, **kwargs) or {}
renderer = self._engine.get_template('page/{0}.html'.format(template))
return renderer.render(**data) if template and isinstance(data, dict) else data
def bootstrap():
cherrypy.tools.template = TemplateTool()
cherrypy.config.update(config.config)
import controller
cherrypy.config.update({'error_page.default': controller.errorPage})
cherrypy.tree.mount(controller.Index(), '/', config.config)
```
*website/application/controller.py*
As you can see, with use of the tool the page handlers look rather clean and stay consistent with other tools, e.g. `json_out`.
```
# -*- coding: utf-8 -*-
import datetime
import cherrypy
class Index:
news = None
user = None
def __init__(self):
self.news = News()
self.user = User()
@cherrypy.tools.template
def index(self):
pass
@cherrypy.expose
def broken(self):
raise RuntimeError('Pretend something has broken')
class User:
@cherrypy.tools.template
def profile(self):
pass
class News:
_list = [
{'id': 0, 'date': datetime.datetime(2014, 11, 16), 'title': 'Bar', 'text': 'Lorem ipsum'},
{'id': 1, 'date': datetime.datetime(2014, 11, 17), 'title': 'Foo', 'text': 'Ipsum lorem'}
]
@cherrypy.tools.template
def list(self):
return {'list': self._list}
@cherrypy.tools.template
def show(self, id):
return {'item': self._list[int(id)]}
def errorPage(status, message, **kwargs):
return cherrypy.tools.template._engine.get_template('page/error.html').render()
```
In this demo app I used the [blueprint](http://www.blueprintcss.org/) CSS file to demonstrate how static resource handling works. Put it in `website/public/resource/css/blueprint.css`. The rest is less interesting, just Jinja2 templates for completeness.
*website/application/view/layout/main.html*
```
<!DOCTYPE html>
<html>
<head>
<meta http-equiv='content-type' content='text/html; charset=utf-8' />
<title>CherryPy Application Demo</title>
<link rel='stylesheet' media='screen' href='/resource/css/blueprint.css' />
</head>
<body>
<div class='container'>
<div class='header span-24'>
{% include 'part/menu.html' %}
</div>
<div class='span-24'>{% block content %}{% endblock %}</div>
</div>
</body>
</html>
```
*website/application/view/page/index/index.html*
```
{% extends 'layout/main.html' %}
{% block content %}
<div class='span-18 last'>
<p>Root page</p>
</div>
{% endblock %}
```
*website/application/view/page/news/list.html*
```
{% extends 'layout/main.html' %}
{% block content %}
<div class='span-20 last prepend-top'>
<h1>News</h1>
<ul>
{% for item in list %}
<li><a href='/news/show/{{ item.id }}'>{{ item.title }}</a> ({{ item.date }})</li>
{% endfor %}
</ul>
</div>
{% endblock %}
```
*website/application/view/page/news/show.html*
```
{% extends 'layout/main.html' %}
{% block content %}
<div class='span-20 last prepend-top'>
<h2>{{ item.title }}</h2>
<div class='span-5 last'>{{ item.date }}</div>
<div class='span-19 last'>{{ item.text }}</div>
</div>
{% endblock %}
```
*website/application/view/page/user/profile.html*
```
{% extends 'layout/main.html' %}
{% block content %}
<div class='span-18'>
<table>
<tr><td>First name:</td><td>John</td></tr>
<tr><td>Last name:</td><td>Doe</td></tr>
<table>
</div>
{% endblock %}
```
*website/application/view/page/error.html*
It's a 404-page.
```
{% extends 'layout/main.html' %}
{% block content %}
<h1>Error has happened</h1>
{% endblock %}
```
*website/application/view/part/menu.html*
```
<div class='span-4 prepend-top'>
<h2><a href='/'>Website</a></h2>
</div>
<div class='span-20 prepend-top last'>
<ul>
<li><a href='/news/list'>News</a></li>
<li><a href='/user/profile'>Profile</a></li>
<li><a href='/broken'>Broken</a></li>
</ul>
</div>
```
References
==========
Code above goes closely with backend section of [qooxdoo-website-skeleton](https://bitbucket.org/saaj/qooxdoo-website-skeleton). For full-blown Debain deployment of such application, [cherrypy-webapp-skeleton](https://bitbucket.org/saaj/cherrypy-webapp-skeleton) may be useful.
| 7,285
|
39,971,929
|
Python 3.6 is about to be released. [PEP 494 -- Python 3.6 Release Schedule](https://www.python.org/dev/peps/pep-0494/) mentions the end of December, so I went through [What's New in Python 3.6](https://docs.python.org/3.6/whatsnew/3.6.html) to see they mention the *variable annotations*:
>
> [PEP 484](https://www.python.org/dev/peps/pep-0484) introduced standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables:
>
>
>
> ```
> primes: List[int] = []
>
> captain: str # Note: no initial value!
>
> class Starship:
> stats: Dict[str, int] = {}
>
> ```
>
> Just as for function annotations, the Python interpreter does not attach any particular meaning to variable annotations and only stores them in a special attribute `__annotations__` of a class or module. In contrast to variable declarations in statically typed languages, the goal of annotation syntax is to provide an easy way to specify structured type metadata for third party tools and libraries via the abstract syntax tree and the `__annotations__` attribute.
>
>
>
So from what I read they are part of the type hints coming from Python 3.5, described in [What are Type hints in Python 3.5](https://stackoverflow.com/q/32557920/1983854).
I follow the `captain: str` and `class Starship` examples, but I'm not sure about the last one: how does `primes: List[int] = []` work? Is it defining an empty list that will just allow integers?
|
2016/10/11
|
[
"https://Stackoverflow.com/questions/39971929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1983854/"
] |
Everything between `:` and the `=` is a type hint, so `primes` is indeed defined as `List[int]`, and initially set to an empty list (and `stats` is an empty dictionary initially, defined as `Dict[str, int]`).
`List[int]` and `Dict[str, int]` are not part of the new syntax, however; these were already defined in the Python 3.5 typing hints PEP. The 3.6 [PEP 526 – *Syntax for Variable Annotations*](https://www.python.org/dev/peps/pep-0526/) proposal *only* defines the syntax to attach the same hints to variables; before, you could only attach type hints to variables with comments (e.g. `primes = [] # type: List[int]`).
Both `List` and `Dict` are *Generic* types, indicating that you have a list or dictionary mapping with specific (concrete) contents.
For `List`, there is only one 'argument' (the elements in the `[...]` syntax), the type of every element in the list. For `Dict`, the first argument is the key type, and the second the value type. So *all* values in the `primes` list are integers, and *all* key-value pairs in the `stats` dictionary are `(str, int)` pairs, mapping strings to integers.
See the [`typing.List`](https://docs.python.org/3/library/typing.html#typing.List) and [`typing.Dict`](https://docs.python.org/3/library/typing.html#typing.Dict) definitions, the [section on *Generics*](https://docs.python.org/3/library/typing.html#generics), as well as [PEP 483 – *The Theory of Type Hints*](https://www.python.org/dev/peps/pep-0483).
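As a small, self-contained illustration of those generic parameters (a sketch of my own; the actual checking comes from an external type checker such as mypy, Python itself does not enforce the annotations at runtime):

```
from typing import Dict, List

primes: List[int] = []        # every element is expected to be an int
stats: Dict[str, int] = {}    # str keys mapping to int values

primes.append(11)             # fine
primes.append("eleven")       # still runs, but a type checker such as mypy flags it
stats["hull"] = 50            # fine
```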
Like type hints on functions, their use is optional and are also considered *annotations* (provided there is an object to attach these to, so globals in modules and attributes on classes, but not locals in functions) which you could introspect via the `__annotations__` attribute. You can attach arbitrary info to these annotations, you are not strictly limited to type hint information.
You may want to read the [full proposal](https://www.python.org/dev/peps/pep-0526/); it contains some additional functionality above and beyond the new syntax; it specifies when such annotations are evaluated, how to introspect them and how to declare something as a class attribute vs. instance attribute, for example.
|
>
> *What are variable annotations?*
>
>
>
Variable annotations are just the next step from `# type` comments, as they were defined in `PEP 484`; the rationale behind this change is highlighted in the [respective section of PEP 526](https://www.python.org/dev/peps/pep-0526/#rationale).
So, instead of hinting the type with:
```
primes = [] # type: List[int]
```
*New syntax was introduced* to allow for directly annotating the type with an assignment of the form:
```
primes: List[int] = []
```
which, as @Martijn pointed out, denotes a list of integers by using types available in [`typing`](https://docs.python.org/3/library/typing.html) and initializing it to an empty list.
>
> *What changes does it bring?*
>
>
>
The first change introduced was [new syntax](https://docs.python.org/3.6/reference/simple_stmts.html#annotated-assignment-statements) that allows you to annotate a name with a type, either standalone after the `:` character or optionally annotate while also assigning a value to it:
```
annotated_assignment_stmt ::= augtarget ":" expression ["=" expression]
```
So the example in question:
```
primes: List[int] = [ ]
# ^ ^ ^
# augtarget | |
# expression |
# expression (optionally initialize to empty list)
```
Additional changes were also introduced along with the new syntax; modules and classes now have an `__annotations__` attribute (as functions have had since *[PEP 3107 -- Function Annotations](https://www.python.org/dev/peps/pep-3107/)*) in which the type metadata is attached:
```
from typing import get_type_hints # grabs __annotations__
```
Now `__main__.__annotations__` holds the declared types:
```
>>> from typing import List, get_type_hints
>>> primes: List[int] = []
>>> captain: str
>>> import __main__
>>> get_type_hints(__main__)
{'primes': typing.List<~T>[int]}
```
`captain` won't currently show up through [`get_type_hints`](https://docs.python.org/3.6/library/typing.html#typing.get_type_hints) because `get_type_hints` only returns types that can also be accessed on a module; i.e., it needs a value first:
```
>>> captain = "Picard"
>>> get_type_hints(__main__)
{'primes': typing.List<~T>[int], 'captain': <class 'str'>}
```
Using `print(__annotations__)` will show `'captain': <class 'str'>` but you really shouldn't be accessing `__annotations__` directly.
Similarly, for classes:
```
>>> get_type_hints(Starship)
ChainMap({'stats': typing.Dict<~KT, ~VT>[str, int]}, {})
```
Where a `ChainMap` is used to grab the annotations for a given class (located in the first mapping) and all annotations defined in the base classes found in its `mro` (consequent mappings, `{}` for object).
Along with the new syntax, a new [`ClassVar`](https://docs.python.org/3.6/library/typing.html#typing.ClassVar) type has been added to denote class variables. Yup, `stats` in your example is actually an *instance variable*, not a `ClassVar`.
>
> *Will I be forced to use it?*
>
>
>
As with type hints from `PEP 484`, these are ***completely optional*** and are of main use for type checking tools (and whatever else you can build based on this information). It is to be provisional when the stable version of Python 3.6 is released so small tweaks might be added in the future.
| 7,286
|
6,943,172
|
What does the [] mean?
Also how can I identify variables as empty arrays in python?
Thanks!
```
perl: xcoords = ()
```
How do I translate that?
|
2011/08/04
|
[
"https://Stackoverflow.com/questions/6943172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
`[]` is an empty list in Python and is the same as calling list(), e.g. `[] == list()`.
To check whether a list is empty you can use len(l) or:
```
listV = [] # an empty list
if listV:
# do something if list is not empty
else:
# do something if list is really empty
```
To read more about list you can use [the following link](http://docs.python.org/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange)
|
Lists are like C++ arrays, with some differences. One difference is that they can carry different types, even other lists. To check whether a list is empty:
```
lists = []                    # an empty list
len(lists)                    # 0, so the list is empty
lists.append("More of me")    # lists[0] = ... would raise IndexError on an empty list
len(lists)                    # 1
```
For more, check the [Python official](http://docs.python.org/tutorial/introduction.html#lists) tutorial and the docs above.
| 7,287
|
62,763,634
|
I currently have a dictionary that I have imported from a csv file, that I have converted into a list of variables. The original dictionary looks like this:
* server01, server01.fqdn:port
* server02, server02.fqdn:port
* server03, server03.fqdn:port
* server04, server04.fqdn:port
What I'd like to do is create another dictionary using the same key as the existing one (which would be the server name) and, using the server's FQDN, use Python requests to get a value. This would create a dictionary like this that I could then insert into MySQL:
* server01 0.0 0.0 2020-07-06 19:59:42
* server02 0.0 0.0 2020-07-06 19:59:42
* server03 0.0 0.0 2020-07-06 19:59:42
* server04 0.0 0.0 2020-07-06 19:59:42
I can print the results to screen using this, but how would I insert this into a new dictionary?
```
curtime = ('{:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.utcnow()))
for key, value in sorted(dict.items()):
print key, fc_grab(value), fs_grab(value), curtime
```
Thank you,
Sean
|
2020/07/06
|
[
"https://Stackoverflow.com/questions/62763634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13795713/"
] |
The [`format`](https://trino.io/docs/current/functions/conversion.html#format) function accepts any of the Java [format string specifiers](https://docs.oracle.com/javase/8/docs/api/java/util/Formatter.html#syntax):
```
presto> select format('%.2f%%', 0.18932 * 100);
_col0
--------
18.93%
(1 row)
```
|
For those of you who, like me, came here looking to round a `decimal` or `double` field down to `n` decimal places but didn't care about the `%` sign, you have a few other options than Martin's answer.
If you want the output to be type `double` then [`truncate(x,n)`](https://prestodb.io/docs/current/functions/math.html) will do this. However, it requires `x` to be type `decimal`.
```
select truncate( cast( 1.23456789 as decimal(10,2)) , 2); -> 1.23 <DOUBLE>
```
If you feel that the truncate is kind of useless here since the `decimal(10,2)` is really doing the work, then you're not alone. An arguably equivalent transformation would be.
```
select cast( cast( 1.23456789 as decimal(10,2)) as double); -> 1.23 <DOUBLE>
```
Is this syntactically better or more performant than `truncate()`, I have no idea, I guess you get to choose.
And if you don't give a hoot about what type results from the transformation, then I suppose the below is the most direct method.
`cast(1.23456789 as decimal(10,2));`
| 7,288
|
71,293,767
|
I need to get the value from one element using several others as filters using Selenium on a dynamic website ([LogTrail](https://github.com/sivasamyk/logtrail) using [Kibana](https://en.wikipedia.org/wiki/Kibana)).
I got this:
```python
from selenium import webdriver
import time
from selenium.webdriver.common.keys import Keys
import os
path2driver_ffox = os.path.join(os.path.abspath(os.getcwd()), "geckodriver")
path2driver_chr = os.path.join(os.path.abspath(os.getcwd()), "chromedriver")
try:
driver = webdriver.Chrome(executable_path=path2driver_chr)
except:
driver = webdriver.Firefox(executable_path=path2driver_ffox)
driver.get("https://log-viewer.mob.dev/app/logtrail#/?q=%22lw-00005%22&h=web-sockets&t=Now&i=filebeat-*&_g=()")
print(driver.title)
driver.maximize_window()
```
Using the example below, I need to get the value from the last action where time = 28-2-2022 and lbl-00005 is in the *li* element.
How can I do it?
```html
<li id="IavYP38BMeu2l4fa6DvW" ng-repeat="event in events" on-last-repeat="" infinite-scroll="">
<time>2022-02-28 10:20:49,864</time>
<span class="host"><a href="" ng-click="onHostSelected(event.hostname)">ws-web-sockets-pp</a></span>
<span class="program"><a ng-click="onProgramClick(event.program)">/web/serv/logs/ws/ws.log:</a></span>
<span class="message" ng-style="event.color? {color: event.color} : ''" ng-bind-html="event.message | ansiToHtml" compile-template="">2022-02-28 10:20:49,279 ws-web-sockets-pp-1 INFO [null:-1] (executor-thread-14) - stat : <span class="highlight">lbl-00005</span>:icifYWZuBe89EUYnMe-J3vIGOWQpG45-66vaB86d, MessageId: 894912413, request message: {"action":"act_VALUE","messageId":"894912413","type":"CALL","uniqueId":"894912413","payload":"{}"}</span>
</li>
```
This works, but:
```python
time = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "li[ng-repeat='event in events']>time"))).text
```
How do I know if this is the last (newest) record? How do I get the act\_VALUE?
This
```python
message = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "li[ng-repeat='event in events'] .message"))).text
```
It doesn't seem to give the message that belongs to the time we got above.
I can’t copy this and can only send an image :(
[](https://i.stack.imgur.com/XGMBC.png)
I need to be able to search this page like this.
Get the latest heartbeat (most recent record) and from that heartbeat, get the message id.
This screenshot is with a filter to show only one record; normally there are thousands of *li* elements.
I need to be able to put it into variables like this:
```json
req=" {"action":"Heartbeat","messageId":"33","type":"CALL","uniqueId":"33","payload":"{}"}"
type="heartbeat"
msgid="33"
```
|
2022/02/28
|
[
"https://Stackoverflow.com/questions/71293767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17914605/"
] |
Given a dataframe with a `DatetimeIndex` which doesn't have any missing days like this
```
df = pd.DataFrame(
{"A": range(500)}, index=pd.date_range("2022-03-01", periods=500, freq="1D")
)
A
2022-03-01 0
2022-03-02 1
... ...
2023-07-12 498
2023-07-13 499
```
you could do the following
```
from dateutil.relativedelta import relativedelta
delta = relativedelta(months=1)
df["B"] = None # None instead of other NaNs - can be changed
idx = df.loc[df.index[0] + delta:].index
df.loc[idx, "B"] = df.loc[[day - delta for day in idx], "A"].values
```
and get
```
A B
2022-03-01 0 None
2022-03-02 1 None
... ... ...
2023-07-12 498 468
2023-07-13 499 469
```
The `idx` is there to make sure that the actual shifting doesn't fail. It's the part you're trying to address by `skip`. (Your `skip` is actually a bit imprecise because you're using 31/366 days for month/year lengths universally.)
But be prepared to run into strange phenomena when you're using months and/or years. For example
```
from datetime import date
delta = relativedelta(months=1)
date(2022, 3, 30) + delta == date(2022, 3, 31) + delta
```
is `True`.
|
We can use [`relativedelta`](https://dateutil.readthedocs.io/en/stable/relativedelta.html), [`pandas.to_datetime`](https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html) and [`pandas.DataFrame.apply`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html).
```
from dateutil.relativedelta import relativedelta
import pandas as pd
# Sample dataframe
>>> a = pd.DataFrame([('2021-01-01'), ('2021-01-02'), ('2022-01-01')], columns=['Date'])
# Contents of a
>>> a
Date
0 2021-01-01
1 2021-01-02
2 2022-01-01
# Ensuring Date is a datetime column
>>> a['Date'] = pd.to_datetime(a['Date'])
# Adding a month to all of the dates
>>> a.Date.apply(lambda x: x + relativedelta(months=1))
0 2021-02-01
1 2021-02-02
2 2022-02-01
Name: Date, dtype: datetime64[ns]
```
| 7,289
|
15,136,456
|
I want to parse dxf file for obtain objects (line, point, text and so on) with dxfgrabber library.
The code is as below
```
#!/usr/bin/env python
import dxfgrabber
dxf = dxfgrabber.readfile("1.dxf")
print ("DXF version : {}".format(dxf.dxfversion))
```
But it gets some error...
```
Traceback (most recent call last):
File "parsing.py", line 6, in <module>
dxf = dxfgrabber.readfile("1.dxf")
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/__init__.py", line 43, in readfile
with io.open(filename, encoding=get_encoding()) as fp:
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/__init__.py", line 39, in get_encoding
info = dxfinfo(fp)
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/tags.py", line 96, in dxfinfo
tag = next(tagreader)
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/tags.py", line 52, in __next__
return next_tag()
File "/usr/local/lib/python2.7/dist-packages/dxfgrabber/tags.py", line 45, in next_tag
raise StopIteration()
StopIteration
```
The simple 1.dxf file only contain line.
file link is <https://docs.google.com/file/d/0BySHG7k180kETlQ2UnRxQmxoUk0/edit?usp=sharing>
Is this bug of dxfgrabber library?
Is there any good library for parsing dxf file in the python?
I am using dxfgrabber 0.4 and python 2.7.3.
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15136456",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761178/"
] |
I contacted the developer and he says that in the current version 0.5.1 you should make line 49 of `__init__.py` the following: `with io.open(filename) as fp:`.
Then it works (`io` was missing).
He will make this correction official in version 0.5.2 soon.
|
You can only read dxf made in AutoCAD format!
Try "DraftSight" which is a free AutoCAD clone which exports dxf quite well. Try dxf R12 format.
This will solve your problems.
| 7,290
|
35,737,178
|
Thanks in advance for the help.
I'm relatively new to Python and am trying to write a Python script to partially load CSV data from 1000 files. For example, I have 1000 files that have this format
```
x,y
1,2
2,4
2,2
3,9
...
```
I would like to load only lines, for example, where `x=2`. I've seen a lot of posts on here about picking certain lines (ie lines 1,2,3), but not picking lines that fit certain criteria. One solution would be to simply open each file individually and iterate through each one, loading lines as I go. However, I would imagine there is a much better way of doing this (efficiency is somewhat of a concern as these files are not small).
One point that might speed things up is that the x column is sorted, ie once I see a value x = a, I will never see another x value less than a as I iterate through the lines from the beginning.
Is there a more efficient way of doing this rather than going through each file line by line?
Edit:
One approach that I have taken is
```
numpy.fromregex(file, r'^' + re.compile(str(mynum)) + r'\,\-\d$', dtype='f');
```
where mynum is the number I want, but this is not working
|
2016/03/02
|
[
"https://Stackoverflow.com/questions/35737178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2736423/"
] |
Try [pandas](https://github.com/pydata/pandas) library. It has an interoperability with numpy and way more flexible. With this library you do next thing:
```py
data = pandas.read_csv('file.csv')
# keep only rows with x equals to 2
data = data[data['x'] == 2]
# convert to numpy array
arr = numpy.asarray(data)
```
You can read more about selecting data [here](http://pandas.pydata.org/pandas-docs/stable/indexing.html).
|
The csv library comes with python and it allows for partial reading of a file.
```
import csv
def partial_load(filename, target=2):
    ds = []
    with open(filename) as f:
        c = csv.reader(f)
        legend = next(c)          # skip the header line
        for row in c:
            if not row:           # skip blank lines
                continue
            row = [float(r) for r in row]
            if row[0] > target:   # the x column is sorted, so we can stop early
                break
            if row[0] == target:  # keep only the rows we asked for
                ds.append(row)
    return ds
```
| 7,291
|
17,406,453
|
I started with GAE a month ago and have successfully deployed our current startup via Flask on GAE. It works fantastically well. Now, being all too excited about GAE, I am thinking about porting a couple of my older Django apps to GAE as well.
To my surprise, the documentation for it is inconsistent and partially contradictory.
The official [google page](https://developers.google.com/appengine/articles/django-nonrel) recommends using `django-nonrel`, which itself is already [discontinued](http://www.allbuttonspressed.com/goodbye).
Django 1.5.1 seems not even to be supported yet on GAE, nor is it clear to me how to use Django 1.4.3 on GAE.
I also found this more recent [solution](http://howto.pui.ch/post/39245389801/tutorial-django-on-appengine-using-google-cloud-sql) that utilizes Django and Google Cloud SQL (MySQL in the cloud) instead of the high replication datastore. I'm not sure if this is a good way to go since it's still experimental and subject to "breaking changes" in the future. (It also doesn't seem to include any free tier, unlike the high replication datastore.)
I was expecting Django - as perhaps the biggest Python web framework - to have far better documentation or tutorials about how to deploy it on GAE. So I wonder if it's even worth sticking with Django on GAE anymore.
If I am meant to manually make my own models and adjust my queries in views by utilizing `ndb` anyway, I could just as well stick with Flask+Jinja2; why should I use Django, where I can't even use its ORM anymore? Or am I overlooking something?
Thanks,
|
2013/07/01
|
[
"https://Stackoverflow.com/questions/17406453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/92153/"
] |
you can easily use `@request.getHeader("referer")` in your Templates, for example if you have a cancel button that should redirect you to the previous page, use this :
```
<a href="@request.getHeader("referer")">Cancel</a>
```
in this way, you don't need to pass any extra information to your templates. (tested with play 2.3.4)
|
This is what I came up with in the end, although it isn't particularly elegant, and I'd be interested in better ways of doing it. I added a hidden input to my form with the current page URL:
```
@(implicit request: RequestHeader)
...
<form action="@routes.Controller.doStuff()" method="post">
<input type="hidden" name="previousURL" value="@request.uri"/>
...
</form>
```
Then in my controller:
```
def doStuff() = Action { implicit request =>
val previousURLOpt: Option[String] =
for {
requestMap <- request.body.asFormUrlEncoded
values <- requestMap.get("previousURL")
previousURL <- values.headOption
} yield previousURL
previousURLOpt match {
case Some(previousURL) =>
Redirect(new Call("GET", previousURL))
case None =>
Redirect(routes.Controller.somewhereElse)
}
}
```
| 7,292
|
43,852,802
|
Python 3.6
I have a program that is generating a list of dictionaries.
If I print it to the screen with:
```
print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
```
It prints out exactly as I want to see it:
```
[
{
"runts": 0,
"giants": 0,
"throttles": 0,
"input errors": 0,
"CRC": 0,
"frame": 0,
"overrun": 0,
"ignored": 0,
"watchdog": 0,
"pause input": 0,
"input packets with dribble condition detected": 0,
"underruns": 0,
"output errors": 0,
"collisions": 0,
"interface resets": 2,
"babbles": 0,
"late collision": 0,
"deferred": 0,
"lost carrier": 0,
"no carrier": 0,
"PAUSE output": 0,
"output buffer failures": 0,
"output buffers swapped out": 0
},
{
"runts": 0,
"giants": 0,
"throttles": 0,
"input errors": 0,
"CRC": 0,
"frame": 0,
"overrun": 0,
"ignored": 0,
"watchdog": 0,
"pause input": 0,
"input packets with dribble condition detected": 0,
"underruns": 0,
"output errors": 0,
"collisions": 0,
"interface resets": 2,
"babbles": 0,
"late collision": 0,
"deferred": 0,
"lost carrier": 0,
"no carrier": 0,
"PAUSE output": 0,
"output buffer failures": 0,
"output buffers swapped out": 0
},
```
But if I try to print it to a file with:
```
outputfile = ("d:\\mark\\python\\Projects\\error_detect\\" + hostname)
# print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
output_lines.append(json.dumps(output_lines, indent=4, separators=(',', ': ')))
del output_lines[-1]
with open(outputfile, 'w') as f:
json.dump(output_lines, f)
```
The file is one giant line of text.
I want the formatting in the file to look like it does when I print to the screen.
I do not understand why I am losing the formatting.
|
2017/05/08
|
[
"https://Stackoverflow.com/questions/43852802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7535419/"
] |
I think all you need is `json.dump` with `indent` and it should be fine:
```
outputfile = ("d:\\mark\\python\\Projects\\error_detect\\" + hostname)
# print(json.dumps(output_lines, indent=4, separators=(',', ': ')))
# output_lines.append(json.dumps(output_lines, indent=4, separators=(',', ': ')))
# del output_lines[-1]
with open(outputfile, 'w') as f:
json.dump(output_lines, f, indent=4, separators=(',', ': '))
```
It doesn't make much sense to me to format to a string and then re-run dump on the string.
|
Try simply outputting the formatted `json.dumps`, rather than running it through `json.dump` again.
```
formatted = json.dumps(output_lines, indent=4, separators=(',', ': '))
with open(outputfile, 'w') as f:
    f.write(formatted)
```
| 7,295
|
8,461,306
|
I'm tracking a Linux filesystem (which could be of any type) with the pyinotify module for Python (it is actually the Linux kernel doing the job behind the scenes). Many directories/folders/files (as many as the user wants) are being tracked with my application, and now I would like to track the md5sum of each file and store them in a database (this includes every move, rename, new file, etc.).
I guess that a database would be the best option to store all the md5sums of each file... But which would be the best database for that? Certainly a very performant one. I'm looking for a free one, because the application is going to be GPL.
|
2011/12/11
|
[
"https://Stackoverflow.com/questions/8461306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/952870/"
] |
You could use this:
```
string q = Regex.Replace(query, @"[:#/\\]", ".");
q = Regex.Replace(q, @""|['"",&?%\.*-]", " ");
```
EDIT:
=====
On closer inspection of what you're doing, your code is translating several characters into `.`, and *then* translating all `.` into spaces. So you could just do this:
```
string q = Regex.Replace(query, @""|['"",&?%\.*:#/\\-]", " ").Trim();
```
I'm not really sure what you're trying to do here, though. I feel like what you're **really** looking for is something like:
```
string q = Regex.Replace(query, @"[^\w\s]", "");
```
The presence of `"` in there throws me for a loop, and is why I'm not sure what you're doing. If you want to get rid of HTML entities, you could run `query` through `HttpUtility.HtmlDecode(string)` first and then apply the regex.
|
Try this.
```
string pattern = @"[^a-zA-Z0-9]";
string test = Regex.Replace("abc*&34567*opdldld(aododod';", pattern, " ");
```
| 7,298
|
11,191,946
|
I have spent many hours trying to build RDKit on ubuntu 11.10 for
Python 2.7 (rdkit\_201106+dfsg.orig.tar.gz) using a precompiled version
of boost 1.49. And I am failing miserably.
The recurring error is in the CMake GUI:
```
CMake Error at CMakeLists.txt:11 (install):
install FILES given no DESTINATION!
CMake Error at CMakeLists.txt:14 (add_pytest):
Unknown CMake command "add_pytest".
```
Any help please?
I solved the previous problem, but now I get this error when running Python, even though I installed RDKit following the installation procedure:
```
from rdkit import Chem
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named rdkit
```
|
2012/06/25
|
[
"https://Stackoverflow.com/questions/11191946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1395874/"
] |
Make sure you have the environment variables set
(you might need to adjust the paths to match your installation). Using bash on a Mac:
```
export RDBASE=/usr/local/share/RDKit
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/site-packages
```
you might want to add those lines to a bash script to automate the process.
|
For Ubuntu 12.04.2 LTS setting this environment variables works for me
```
export RDBASE=/usr/share/RDKit
export PYTHONPATH=$PYTHONPATH:/usr/lib/pymodules/python2.7
```
| 7,299
|
6,966,205
|
I want to convert some base64 encoded png images to jpg using python. I know how to decode from base64 back to raw:
```
import base64
pngraw = base64.decodestring(png_b64text)
```
but how can I convert this now to jpg? Just writing pngraw to a file obviously only gives me a png file. **I know I can use PIL, but HOW exactly would I do it?** Thanks!
|
2011/08/06
|
[
"https://Stackoverflow.com/questions/6966205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/561766/"
] |
You can use [PIL](http://www.pythonware.com/products/pil/):
```
data = b'''iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAAAXNSR0IArs4c6QAAAIBJRE
FUOMvN08ENgCAMheG/TGniEo7iEiZuqTeiUkoLHORK++Ul8ODPZ92XS2ZiADITmwI+sWHwi
w2BGtYN1jCAZF1GMYDkGfJix3ZK8g57sJywteTFClBbjmAq+ESiGIBEX9nCqgl7sfyxIykt
7NUUD9rCiupZqAdTu6yhXgzgBtNFSXQ1+FPTAAAAAElFTkSuQmCC'''
import base64
from PIL import Image
from io import BytesIO
im = Image.open(BytesIO(base64.b64decode(data)))
im.save('accept.jpg', 'JPEG')
```
In very old Python versions (2.5 and older), replace `b'''` with `'''` and `from io import BytesIO` with `from StringIO import StringIO`.
|
Right from the PIL tutorial:
>
> To save a file, use the save method of the Image class. When saving files, the name becomes important. Unless you specify the format, the library uses the filename extension to discover which file storage format to use.
>
>
>
Convert files to JPEG
---------------------
```
import os, sys
import Image
for infile in sys.argv[1:]:
f, e = os.path.splitext(infile)
outfile = f + ".jpg"
if infile != outfile:
try:
Image.open(infile).save(outfile)
except IOError:
print "cannot convert", infile
```
So all you have to do is set the file extension to `.jpeg` or `.jpg` and it will convert the image automatically.
| 7,300
|
48,504,746
|
I am trying to add a **GUI** input box and I found out that the way you do that is by using a module called `tkinter`. While I was trying to install it on my arch linux machine through the `ActivePython` package I got the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/tkinter/__init__.py", line 36, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
shell returned 1
```
|
2018/01/29
|
[
"https://Stackoverflow.com/questions/48504746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9219560/"
] |
All you need to do is install the tkinter package. Universal precompiled packages such as ActivePython will not work; at least it didn't work for me. I don't know if this problem occurs in other OSes, but I know the solution for Linux: install the Tk package from the terminal.
In Arch, Tk is available in the Arch repository. You don't need the AUR for this; just type in the terminal:
```
sudo pacman -S tk
```
If you are on another Linux distro, such as Debian or a Debian-based distro, you may have to find a repository online; in Debian-based distros just type in the terminal:
```
sudo apt-get install tk
```
The same approach applies to all distros.
|
I'm on Manjaro and use GNOME 3 on Wayland. After installing `tk` I got an error about Xorg. So I used Google and found I needed to install `python-pygubu`, from [Visual editor for creating GUI in Python 3 tkinter](https://bbs.archlinux.org/viewtopic.php?id=221077).
And then another error like: [Gtk-WARNING \*\*: Unable to locate theme engine in module\_path: "murrine"](https://ubuntuforums.org/showthread.php?t=2061142). I also found a solution, to install `gtk-engine-murrine`, from that link.
| 7,301
|
58,702,300
|
I have a Python script for SharePoint login (using the office365-rest-python-client package) that downloads a file. I would like to convert the script to an executable file so I can share it with non-technical people. The Python code runs fine, but when I convert it to an exe using PyInstaller and try to run it, it gives me a FileNotFoundError.
I am relatively new to Python and I tried a couple of tutorials and solutions found online, but no luck. Any suggestions would be appreciated.
Thanks!
```
Traceback (most recent call last):
File "test.py", line 107, in <module>
File "test.py", line 35, in SPLogin
File "site-packages\office365\runtime\auth\authentication_context.py", line 18, in acquire_token_for_user
File "site-packages\office365\runtime\auth\saml_token_provider.py", line 57, in acquire_token
File "site-packages\office365\runtime\auth\saml_token_provider.py", line 82, in acquire_service_token
File "site-packages\office365\runtime\auth\saml_token_provider.py", line 147, in prepare_security_token_request
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\foo\\AppData\\Local\\Temp\\_MEI66362\\office365\\runtime\\auth\\SAML.xml'
[6664] Failed to execute script test
```
See below spec file.
SAML.xml location: C:\Users\Foo\AppData\Local\Programs\Python\Python37-32\Lib\site-packages\office365\runtime\auth\SAML.xml
```
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(['test.py'],
pathex=['C:\\Users\\Foo\\Downloads\\sptest\\newbuild'],
binaries=[],
datas=[],
hiddenimports=[],
hookspath=['.'],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='test',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True )
```
|
2019/11/04
|
[
"https://Stackoverflow.com/questions/58702300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10777463/"
] |
Create a copy of `SAML.xml` (in my test case, right next to my python script `test0.py`); you can copy/paste from [this page](https://raw.githubusercontent.com/vgrem/Office365-REST-Python-Client/master/office365/runtime/auth/providers/templates/SAML.xml). Then run:
```
pyinstaller --onefile --add-data "SAML.xml;office365/runtime/auth" test0.py
```
|
The `_MEI66362` folder is created when you execute a `Pyinstaller` created `.exe`. It will contain everything that `Pyinstaller` determined is needed by your application. However, it cannot deduce every file that your app needs. In some cases, you must tell `Pyinstaller` about needed resources. You can use the `-add-data` and `--add-binary` options (or the `datas` and `binaries` class members in a `.spec` file). See the documentation [here](https://pyinstaller.readthedocs.io/en/stable/usage.html#what-to-bundle-where-to-search) and [here](https://pyinstaller.readthedocs.io/en/stable/spec-files.html#adding-files-to-the-bundle).
In your `datas=` statement, the second argument is where the file should be saved in your executable. So you must place it in the folder where `saml_token_provider.py` is looking for it. I think you should use something like `datas=[ ('/pathtofolder/SAML.xml', 'office365/runtime/auth') ],`.
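For illustration, a sketch of how the `datas` entry could look in the asker's `.spec` file, reusing the SAML.xml path given in the question (adjust both paths for your own machine):

```
# excerpt of the .spec file -- only the datas entry changes
a = Analysis(['test.py'],
             pathex=['C:\\Users\\Foo\\Downloads\\sptest\\newbuild'],
             binaries=[],
             # (source file on disk, folder it should live in inside the bundle)
             datas=[('C:\\Users\\Foo\\AppData\\Local\\Programs\\Python\\Python37-32\\Lib\\site-packages\\office365\\runtime\\auth\\SAML.xml',
                     'office365/runtime/auth')],
             hiddenimports=[],
             hookspath=['.'],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)
```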
| 7,306
|
62,054,092
|
I have looked [here](https://stackoverflow.com/questions/1475123/easiest-way-to-turn-a-list-into-an-html-table-in-python) but the solution is still not working out for me...
I have `2 lists`
```py
list1 = ['src_table', 'error_response_code', 'error_count', 'max_dt']
list2 = ['src_periods_43200', 404, 21, datetime.datetime(2020, 5, 26, 21, 10, 7),
         'src_periods_86400', 404, 19, datetime.datetime(2020, 5, 25, 21, 10, 7)]
```
The `list1` carries the `column names` of the `HTML` table.
The second `list2` carries the `table data`.
How do I generate the `HTML table` out of these 2 lists so that the first list is used for `column names` and the second as the `table data` (row-wise)
the result should be:
```html
src_table | error_response_code | error_count | max_dt |
src_periods_43200 | 404 | 21 | 2020-5-26 21:10:7 |
src_periods_43200 | 404 | 19 | 2020-5-25 21:10:7 |
```
many thanks
|
2020/05/27
|
[
"https://Stackoverflow.com/questions/62054092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3107664/"
] |
This should do it:
```
import pandas as pd
import datetime
list1 = ['src_table', 'error_response_code', 'error_count', 'max_dt']
list2 = [
'src_periods_43200', 404, 21, datetime.datetime(2020, 5, 26, 21, 10, 7),
'src_periods_86400', 404, 19, datetime.datetime(2020, 5, 25, 21, 10, 7)
]
index_break = len(list1)
if len(list2) % index_break != 0:
raise Exception('Not enough data.')
staged_list = []
current_list = []
for idx in range(0, len(list2)):
current_list.append(list2[idx])
if len(current_list) == index_break:
staged_list.append(current_list.copy())
current_list = []
df = pd.DataFrame(data=staged_list, columns=list1)
print(df.to_html())
```
|
You can easily write your own function for that.
Something like:
```py
import datetime
list1 = ['src_table', 'error_response_code', 'error_count', 'max_dt']
list2 = ['src_periods_43200', 404, 21, datetime.datetime(2020, 5, 26, 21, 10, 7), 'src_periods_86400', 404, 19, datetime.datetime(2020, 5, 25, 21, 10, 7)]
print('<table>')
print('<thead><tr>')
for li in list1:
print(f'<th>{li}</th>')
print('</tr></thead>')
print('<tbody>')
for i in range(0, int(len(list2)/4)):
print('<tr>')
print(f'<td>{list2[4*i+0]}</td>')
print(f'<td>{list2[4*i+1]}</td>')
print(f'<td>{list2[4*i+2]}</td>')
print(f'<td>{list2[4*i+3]}</td>')
print('</tr>')
print('</tbody>')
print('</table>')
```
| 7,307
|
48,529,567
|
This is my code:
```
while True:
chosenUser = raw_input("Enter a username: ")
with open("userDetails.txt", "r") as userDetailsFile:
for line in userDetailsFile:
if chosenUser in line:
print "\n"
print chosenUser, "has taken previous tests. "
break
else:
print "That username is not registered."
```
Even after entering a username and it outputting results, the loop continues and asks me to input the username again.
I have recently asked a similar [question](https://stackoverflow.com/questions/48358686/while-loop-not-breaking-using-python) but got it working myself. This one is not working no matter what I try.
Any ideas how to fix it?
Note: `userDetailsFile` is a text file that is in the program earlier.
The problem might be obvious but I'm quite new to Python so I'm sorry if I'm wasting anyone's time.
|
2018/01/30
|
[
"https://Stackoverflow.com/questions/48529567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9244343/"
] |
As others have noted, the primary problem is that the `break` only breaks from the inner `for` loop, not from the outer `while` loop, which can be fixed using e.g. a boolean variable or `return`.
But unless you want the "not registered" line to be printed for *every* line that does not contain the user's name, you should use [`for/else`](https://docs.python.org/2/tutorial/controlflow.html#break-and-continue-statements-and-else-clauses-on-loops) instead of `if/else`. However, instead of using a `for` loop, you could also just use `next` with a generator expression to get the right line (if any).
```
while True:
chosenUser = raw_input("Enter a username: ")
with open("userDetails.txt", "r") as userDetailsFile:
lineWithUser = next((line for line in userDetailsFile if chosenUser in line), None)
if lineWithUser is not None:
print "\n"
print chosenUser, "has taken previous tests. "
break # will break from while now
else:
print "That username is not registered."
```
Or if you do not actually need the `lineWithUser`, just use `any`:
```
while True:
chosenUser = raw_input("Enter a username: ")
with open("userDetails.txt", "r") as userDetailsFile:
if any(chosenUser in line for line in userDetailsFile):
print "\n"
print chosenUser, "has taken previous tests. "
break # will break from while now
else:
print "That username is not registered."
```
This way, the code is also much more compact and easier to read/understand what it's doing.
|
The easiest way to work around this is to make the outer `while` alterable by using a boolean variable, and toggling it:
```
loop = True
while loop:
for x in y:
if exit_condition:
loop = False
break
```
That way you can stop the outer loop from within an inner one.
| 7,308
|
1,465,036
|
I have a shell that runs CentOS.
For a project I'm doing, I need python 2.5+, but centOS is pretty dependent on 2.4.
From what I've read, a number of things will break if you upgrade to 2.5.
I want to install 2.5 separately from 2.4, but I'm not sure how to do it. So far I've downloaded the source tarball, untarred it, and did a `./configure --prefix=/opt` which is where I want it to end up. Can I now just `make, make install` ? Or is there more?
|
2009/09/23
|
[
"https://Stackoverflow.com/questions/1465036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168500/"
] |
```
# yum groupinstall "Development tools"
# yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel
```
**Download and install Python 3.3.0**
```
# wget http://python.org/ftp/python/3.3.0/Python-3.3.0.tar.bz2
# tar xf Python-3.3.0.tar.bz2
# cd Python-3.3.0
# ./configure --prefix=/usr/local
# make && make altinstall
```
**Download and install Distribute for Python 3.3**
```
# wget http://pypi.python.org/packages/source/d/distribute/distribute-0.6.35.tar.gz
# tar xf distribute-0.6.35.tar.gz
# cd distribute-0.6.35
# python3.3 setup.py install
```
**Install and use virtualenv for Python 3.3**
```
# easy_install-3.3 virtualenv
# virtualenv-3.3 --distribute otherproject
New python executable in otherproject/bin/python3.3
Also creating executable in otherproject/bin/python
Installing distribute...................done.
Installing pip................done.
# source otherproject/bin/activate
# python --version
Python 3.3.0
```
|
I uninstalled the original version of python (2.6.6) and installed 2.7 (with `make && make altinstall`), but when I then tried to install something with yum it didn't work.
So I solved this issue as follow:
1. `# ln -s /usr/local/bin/python /usr/bin/python`
2. Download the RPM package python-2.6.6-36.el6.i686.rpm from <http://rpm.pbone.net/index.php3/stat/4/idpl/20270470/dir/centos_6/com/python-2.6.6-36.el6.i686.rpm.html>
3. Execute as root `rpm -Uvh python-2.6.6-36.el6.i686.rpm`
Done
| 7,310
|
71,036,475
|
I can open the page below in a browser without any problem, but python requests returns a 404 status:
<https://www.amazon.de/sp?marketplaceID=A1PA6795UKMFR9&seller=A135E02VGPPVQ&isAmazonFulfilled=1&ref=dp_merchant_link>
Until yesterday, this page also worked with python requests, but as of today it no longer works for me and returns a 404 error status.
```py
import requests
headers = {
'Connection': 'keep-alive',
'rtt': '300',
'downlink': '0.4',
'ect': '3g',
'sec-ch-ua': '" Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Windows"',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Sec-Fetch-Site': 'none',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-User': '?1',
'Sec-Fetch-Dest': 'document',
'Accept-Language': 'en-US,en;q=0.9,ko;q=0.8',
}
response = requests.get(
'https://www.amazon.de/sp?marketplaceID=A1PA6795UKMFR9&seller=A135E02VGPPVQ&isAmazonFulfilled=1&ref=dp_merchant_link',
headers=headers
)
print(response.status_code) # 404
```
I would really appreciate any help.
Regards.
|
2022/02/08
|
[
"https://Stackoverflow.com/questions/71036475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
It works just fine for me. Maybe you can try changing your user agent.
```
import requests
from fake_useragent import UserAgent # fake user agent library
# random user-agent
ua = UserAgent()
user_agent = ua.random
headers = {
'Connection': 'keep-alive',
'rtt': '300',
'downlink': '0.4',
'ect': '3g',
'sec-ch-ua': '" Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Windows"',
'Upgrade-Insecure-Requests': '1',
'User-Agent': user_agent,
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Sec-Fetch-Site': 'none',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-User': '?1',
'Sec-Fetch-Dest': 'document',
'Accept-Language': 'en-US,en;q=0.9,ko;q=0.8',
}
response = requests.get(
'https://www.amazon.de/sp?marketplaceID=A1PA6795UKMFR9&seller=A135E02VGPPVQ&isAmazonFulfilled=1&ref=dp_merchant_link',
headers=headers
)
print(response.status_code)
```
|
I suggest you add a **referer url in the header**. Generally, I don't think Amazon allows scraping, but use selenium if you want to scrape it more easily (it may be overkill, but it offers more customization).
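For illustration, adding a referer is just one extra key in the headers dict passed to `requests.get`; the referer value below is an assumption (any plausible Amazon page works as an example):

```
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36',
    # hypothetical referer -- e.g. a page that links to the seller page
    'Referer': 'https://www.amazon.de/',
}

response = requests.get(
    'https://www.amazon.de/sp?marketplaceID=A1PA6795UKMFR9&seller=A135E02VGPPVQ&isAmazonFulfilled=1&ref=dp_merchant_link',
    headers=headers,
)
print(response.status_code)
```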
| 7,320
|
67,464,761
|
I am using the python flask framework to develop APIs. I am planning to use mongodb as the backend database. How can I connect a mongodb database to python? Is there any built-in library?
|
2021/05/10
|
[
"https://Stackoverflow.com/questions/67464761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15882833/"
] |
You are almost there. It is better to keep control over appearance/disappearance inside the menu view. Find the fixed parts below; the places are highlighted with comments in the code.
Tested with Xcode 12.5 / iOS 14.5
*Note: demo prepared with turned on "Simulator > Debug > Slow Animations" for better visibility*
[](https://i.stack.imgur.com/J7kcP.gif)
```
struct LeftNavigationView:View {
@EnvironmentObject var viewModel:ViewModel
var body: some View {
ZStack {
if self.viewModel.isLeftMenuVisible { // << here !!
Color.black.opacity(0.8)
.ignoresSafeArea()
.transition(.opacity)
VStack {
Button(action: {
self.viewModel.isLeftMenuVisible.toggle()
}, label: {
Text("Close Me")
})
}
.frame(maxWidth:.infinity, maxHeight: .infinity)
.background(Color.white)
.cornerRadius(10)
.padding(.trailing)
.padding(.trailing)
.padding(.trailing)
.padding(.trailing)
.transition(
.asymmetric(
insertion: .move(edge: .leading),
removal: .move(edge: .leading)
)
).zIndex(1) // << force keep at top where removed!!
}
}
.frame(maxWidth: .infinity, maxHeight: .infinity)
.animation(.default, value: self.viewModel.isLeftMenuVisible) // << here !!
}
}
struct ContentView: View {
@StateObject var viewModel = ViewModel()
var body: some View {
ZStack {
NavigationView {
VStack(alignment:.leading) {
Button(action: {
self.viewModel.isLeftMenuVisible.toggle()
}, label: {
Text("Button")
})
}.padding(.horizontal)
.navigationTitle("ContentView")
}
// included here, everything else is managed inside (!) view
LeftNavigationView()
}.environmentObject(self.viewModel)
}
}
```
|
You should use this library [KYDrawerController](https://github.com/ykyouhei/KYDrawerController)
Declare ContentView and MenuView in SceneDelegate:
```
import KYDrawerController
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
var window: UIWindow?
let drawerController = KYDrawerController(drawerDirection: .left, drawerWidth: 300)
func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
drawerController.mainViewController = UIHostingController(rootView: ContentView())
drawerController.drawerViewController = UIHostingController(rootView: LeftNavigationView())
if let windowScene = scene as? UIWindowScene {
let window = UIWindow(windowScene: windowScene)
let vc = drawerController
vc.view.frame = window.bounds
window.rootViewController = vc
self.window = window
(UIApplication.shared.delegate as? AppDelegate)?.self.window = window
window.makeKeyAndVisible()
}
//...
}
//...
}
```
| 7,321
|
12,816,464
|
I am using web.py framework to create a simple web application
I want to create a radio button so I wrote the following code
```
from web import form
from web.contrib.auth import DBAuth
import MySQLdb as mdb
render = web.template.render('templates/')
urls = (
'/project_details', 'Project_Details',
)
class Project_Details:
project_details = form.Form(
form.Radio('Home Page'),
form.Radio('Content'),
form.Radio('Contact Us'),
form.Radio('Sitemap'),
)
def GET(self):
project_details = self.project_details()
return render.projectdetails(project_details)
```
When I run the code with url `localhost:8080` I am seeing the following error
```
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/web/application.py", line 237, in process
return p(lambda: process(processors))
File "/usr/lib/python2.7/site-packages/web/application.py", line 565, in processor
h()
File "/usr/lib/python2.7/site-packages/web/application.py", line 661, in __call__
self.check(mod)
File "/usr/lib/python2.7/site-packages/web/application.py", line 680, in check
reload(mod)
File "/home/local/user/python_webcode/index.py", line 68, in <module>
class Project_Details:
File "/home/local/user/python_webcode/index.py", line 72, in Project_Details
form.Radio('password'),
TypeError: __init__() takes at least 3 arguments (2 given)
```
What parameters need to be passed to the radio button in order to avoid this error?
|
2012/10/10
|
[
"https://Stackoverflow.com/questions/12816464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1342109/"
] |
Looking at the [source](https://github.com/webpy/webpy/blob/master/web/form.py#L291), it looks like you have to use one `Radio` constructor for all of your items as the same `Radio` object will actually generate multiple `<input>` elements.
Try something like::
```
project_details = form.Form(
form.Radio('details', ['Home Page', 'Content', 'Contact Us', 'Sitemap']),
)
```
|
Here's what I was able to decipher. checked='checked' seems to select a random (last?) item in the list. Without a default selection, my testing was coming back with a NoneType, if none of the radio-buttons got selected.
```
project_details = form.Form(
form.Radio('selections', ['Home Page', 'Content', 'Contact Us','Sitemap'], checked='checked'),
form.Button("Submit")
)
```
To access your user selection as a string...
```
result = project_details['selections'].value
```
If you want to use javascript while your template is active, you can add onchange='myFunction()' to the end of the Radio line-item. I'm also assigning an id to each element, to avoid frustration with my getElementById calls, so my declaration looks like this.
```
project_details = form.Form(
form.Radio('selections', ['Home Page', 'Content', 'Contact Us','Sitemap'], checked='checked', onchange='myFunction()', id='selections'),
form.Button("Submit")
)
```
| 7,322
|
30,226,891
|
I am new at both python and stackoverflow, so please keep that in mind. I tried to do this myself and managed to do it, but it works only if I hardcode the hash of the previous version, like the one in hash1, and then compare it with the hash of the current version. I would like the program to save the hash of the current version every time, and then on every run compare it with the newer version, and if the file has changed, do something.
This is my code
```
import hashlib
hash1 = '3379b3b9b9c82650831db2aba0cf4e99'
hasher = hashlib.md5()
with open('word.txt', 'rb') as afile:
buf = afile.read()
hasher.update(buf)
hash2 = hasher.hexdigest()
if hash1 == hash2:
print('same version')
else:
print('diffrent version')
```
|
2015/05/13
|
[
"https://Stackoverflow.com/questions/30226891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4897764/"
] |
Simply save the hash to a file like file.txt, and then when you need to compare, read the stored hash back from file.txt and compare the two strings.
Here is an example of how to read and write to files in python.
<http://www.pythonforbeginners.com/files/reading-and-writing-files-in-python>
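A minimal sketch of that idea, building on the code from the question (the filename `hash_store.txt` is just an assumption for illustration):

```
import hashlib
import os

HASH_FILE = 'hash_store.txt'   # hypothetical file used to remember the last hash

# hash the current contents of word.txt
hasher = hashlib.md5()
with open('word.txt', 'rb') as afile:
    hasher.update(afile.read())
current_hash = hasher.hexdigest()

# read the previously stored hash, if any
previous_hash = None
if os.path.exists(HASH_FILE):
    with open(HASH_FILE) as f:
        previous_hash = f.read().strip()

if previous_hash == current_hash:
    print('same version')
else:
    print('different version')

# remember the current hash for the next run
with open(HASH_FILE, 'w') as f:
    f.write(current_hash)
```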
|
For relatively simple comparisons, use [filecmp](https://docs.python.org/2/library/filecmp.html). For finer control and feedback, use [difflib](https://docs.python.org/2/library/difflib.html#module-difflib), which is similar to the \*nix utility, `diff`.
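For example, `filecmp.cmp` can compare the current file against a saved copy of the previous version (the file names below are assumptions for illustration):

```
import filecmp

# compare byte-by-byte rather than just by os.stat() signature
same = filecmp.cmp('word.txt', 'word_previous.txt', shallow=False)
print('same version' if same else 'different version')
```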
| 7,323
|
62,776,024
|
I simply want to reorder the rows of my pandas dataframe such that `col1` matches the order of the external list elements in `my_order`.
```
d = {'col1': ['A', 'B', 'C'], 'col2': [1,2,3]}
df = pd.DataFrame(data=d)
my_order = ['B', 'C', 'A']
```
This post [sorting by a custom list in pandas](https://stackoverflow.com/questions/23482668/sorting-by-a-custom-list-in-pandas) does the order work sorting by a custom list in pandas and using it for my data produces
```
d = {'col1': ['A', 'B', 'C'], 'col2': [1,2,3]}
df = pd.DataFrame(data=d)
my_order = ['B', 'C', 'A']
df.col1 = df.col1.astype("category")
df.col1.cat.set_categories(my_order, inplace=True)
df.sort_values(["col1"])
```
However, this seems to be a wasteful amount of code relative to an R process which would simply be
```
df = data.frame(col1 = c('A','B','C'), col2 = c(1,2,3))
my_order = c('B', 'C', 'A')
df[match(my_order, df$col1),]
```
Ordering is expensive and the python version above takes 3 steps where R takes only 1 using the match function. Can python not rival R in this case?
If this were simply done once in my real world example I wouldn't care much. But, this is a process that will be iterated on millions of times on a web server application and so a truly minimal, inexpensive path is the best approach
|
2020/07/07
|
[
"https://Stackoverflow.com/questions/62776024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7273115/"
] |
```
{{ json.Date.strftime('%Y-%m-%d') }}
```
|
strftime converts a date object into a string by specifying a format you want to use
you can do
```
from datetime import datetime  # assuming this import; needed for datetime.now()/strptime below
now = datetime.now() # <- datetime object
now_string = now.strftime('%d.%m.%Y')
```
however, if you already converted your datetime object into a string (or getting strings from an api or something), you can convert the string back to a datetime object using strptime
```
date_string = '2020-08-05T00:00:00'
date_obj = datetime.strptime(date_string, '%Y-%m-%dT%H:%M:%S') # <- the format the string is currently in
```
You can first to strptime and then strftime directly from that to effectively convert a string into a different string, but you should avoid this if possible
```
converted_string = datetime.strptime(original_string, 'old_format').strftime('new_format')
```
This topic may be confusing because when you print(now) (remember, the datetime object), python automatically "represents" it and converts it into a string. However, it isn't actually a string at that point, but rather a datetime object. Python can't print that directly, so it converts it, but you won't be able to jsonify it directly, for example.
| 7,324
|
5,635,054
|
I have a really large excel file and i need to delete about 20,000 rows, contingent on meeting a simple condition and excel won't let me delete such a complex range when using a filter. The condition is:
**If the first column contains the value, X, then I need to be able to delete the entire row.**
I'm trying to automate this using python and xlwt, but am not quite sure where to start. Seeking some code snippits to get me started...
Grateful for any help that's out there!
|
2011/04/12
|
[
"https://Stackoverflow.com/questions/5635054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704039/"
] |
You can use,
```
sh.Range(sh.Cells(1,1),sh.Cells(20000,1)).EntireRow.Delete()
```
will delete rows 1 to 20,000 in an open Excel spreadsheet so,
```
if sh.Cells(1,1).Value == 'X':
sh.Cells(1,1).EntireRow.Delete()
```
|
I have achieved this using Pandas package....
```
import pandas as pd
#Read from Excel
xl= pd.ExcelFile("test.xls")
#Parsing Excel Sheet to DataFrame
dfs = xl.parse(xl.sheet_names[0])
#Update DataFrame as per requirement
#(Here Removing the row from DataFrame having blank value in "Name" column)
dfs = dfs[dfs['Name'] != '']
#Updating the excel sheet with the updated DataFrame
dfs.to_excel("test.xls",sheet_name='Sheet1',index=False)
```
| 7,325
|
74,245,242
|
I want to perform the selection of a group of lines in a text file to get all jobs related to an ipref
The test file is like this :
job numbers : (1,2,3), ip ref : (10,12,10)
text file :
1
... (several lines of text)
xxx 10
2
... (several lines of text)
xxx 12
3
... (several lines of text)
xxx 10
I want to select job numbers for IPref=10.
Code :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
pattern='\n?\d.*?xxx 10'
result= re.findall(pattern,texte, re.DOTALL)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
match: 2
1
a
b
xxx 12
1
a
b
xxx 10
```
I have tried to replace `.*` by a negative lookahead assertion to only select a match if no expression like `"\n?xxx \d{2}\n"` appears before "xxx 10":
```
pattern='\n?\d(?!(?:\n?xxx \d{2}\n)*)xxx 10'
```
but it is not working ...
|
2022/10/29
|
[
"https://Stackoverflow.com/questions/74245242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20365230/"
] |
You can write the pattern in this way, repeating the newline and asserting not xxx followed by 1 or more digits:
```
^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$
```
The pattern matches:
* `^` Start of string
* `\d` Match a single digit (or `\d+` for 1 or more)
* `(?:` Non capture group
+ `\n` Match a newline
+ `(?!xxx \d+$)` Negative lookahead to assert that the string is not `xxx` followed by 1+ digits
+ `.*` If the assertion is true, match the whole line
* `)*` Close the group and optionally repeat it
* `\nxxx 10$` Match a newline, `xxx` and 10
[Regex demo](https://regex101.com/r/RaKVcM/1)
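A minimal usage sketch in Python, reusing the file name from the question; note that only `re.MULTILINE` is needed here, not `re.DOTALL`, since `.` must not be allowed to cross line boundaries for this pattern to work:

```
import re

with open('test2.xml') as fic:
    texte = fic.read()

pattern = r'^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$'
for i, match in enumerate(re.findall(pattern, texte, re.MULTILINE), start=1):
    print("\nmatch:", i)
    print(match)
```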
|
Good day to you :) and thank you very much for your quick response!!
I give the result below.
Note: I have replaced re.DOTALL with re.DOTALL|re.MULTILINE (because the result is empty without that... Sorry for the previous presentation, it was not very clear).
Text file :
```
1
a
b
xxx 10
1
a
b
xxx 12
1
a
b
xxx 10
```
Code With your pattern :
```
#!/usr/bin/python
import re
import sys
fic=open('test2.xml','r')
texte=fic.read()
fic.close()
print(texte)
#pattern='<\/?(?!(?:span|br|b)(?: [^>]*)?>)[^>\/]*>'
#pattern='\n?\d(?!(?:\n?xxx \d{2}\n?)*?)xxx 10'
#pattern='\n?\d.*?xxx 10'
pattern='^\d(?:\n(?!xxx \d+$).*)*\nxxx 10$'
result= re.findall(pattern,texte, re.DOTALL|re.MULTILINE)
i=1
for match in result:
print("\nmatch:",i)
i=i+1
print(match)
```
Result :
```
match: 1
1
a
b
xxx 10
1
a
b
xxx 12
1
a
b
xxx 10
```
but I am trying to obtain:
```
match: 1
1
a
b
xxx 10
match 2 :
1
a
b
xxx 10
```
| 7,335
|
24,309,586
|
This is similar to this [question](https://stackoverflow.com/questions/20403387/how-to-remove-a-package-from-pypi) with one exception. I want to remove a few specific versions of the package from our local pypi index, which I had uploaded with the following command in the past.
```
python setup.py sdist upload -r <index_name>
```
Any ideas?
|
2014/06/19
|
[
"https://Stackoverflow.com/questions/24309586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/251096/"
] |
Removing packages from local pypi index **depends on type of pypi index you use**.
removing package from `devpi` index
===================================
`devpi` allows [removing packages](http://doc.devpi.net/latest/userman/devpi_packages.html#removing-a-release-file-or-project) only from so called volatile indexes. Non-volatile indexes are "release like" indexes and removing from them is not allowed (as you would surprise users depending on a released package).
E.g. for package `pysober` version 0.2.0:
```
$ devpi remove -y pysober==0.2.0
```
removing package from public pypi
=================================
is described in the [answer](https://stackoverflow.com/questions/20403387/how-to-remove-a-package-from-pypi) you already referred to.
removing package from other indexes
===================================
Can vary, but in many cases you can manually delete the files (with proper care).
|
I'm using [pypiserver](https://pypi.org/project/pypiserver/) and had to remove a bad package so I just SSH'd in and removed the bad packages and restarted the service.
The commands were roughly:
```
ssh root@pypiserver
cd ~pypiserver/pypiserver/packages
rm bad-package*
systemctl restart pypiserver.service
```
That seemed to work fine for me, and you can just remove what you need using standard shell commands. Just be sure to restart the process so it refreshes its index.
| 7,340
|
17,321,910
|
I have a file with a correspondence key -> value:
```
sort keyFile.txt | head
ENSMUSG00000000001 ENSMUSG00000000001_Gnai3
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000003 ENSMUSG00000000003_Pbsn
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000028 ENSMUSG00000000028_Cdc45
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
ENSMUSG00000000031 ENSMUSG00000000031_H19
```
And I would like to replace every correspondence of "key" with the "value" in the temp.txt:
```
head temp.txt
ENSMUSG00000000001:001 515
ENSMUSG00000000001:002 108
ENSMUSG00000000001:003 64
ENSMUSG00000000001:004 45
ENSMUSG00000000001:005 58
ENSMUSG00000000001:006 63
ENSMUSG00000000001:007 46
ENSMUSG00000000001:008 11
ENSMUSG00000000001:009 13
ENSMUSG00000000003:001 0
```
The result should be:
```
out.txt
ENSMUSG00000000001_Gnai3:001 515
ENSMUSG00000000001_Gnai3:002 108
ENSMUSG00000000001_Gnai3:003 64
ENSMUSG00000000001_Gnai3:004 45
ENSMUSG00000000001_Gnai3:005 58
ENSMUSG00000000001_Gnai3:006 63
ENSMUSG00000000001_Gnai3:007 46
ENSMUSG00000000001_Gnai3:008 11
ENSMUSG00000000001_Gnai3:009 13
ENSMUSG00000000001_Gnai3:001 0
```
I have tried a few variations following [this AWK example](https://stackoverflow.com/a/13079914/1274242) but as you can see the result is not what I expected:
```
awk 'NR==FNR{a[$1]=$1;next}{$1=a[$1];}1' keyFile.txt temp.txt | head
515
108
64
45
58
63
46
11
13
0
```
My guess is that column 1 of temp does not match 'exactly' column 1 of keyValues. Could someone please help me with this?
R/python/sed solutions are also welcome.
|
2013/06/26
|
[
"https://Stackoverflow.com/questions/17321910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1274242/"
] |
Use awk command like this:
```
awk 'NR==FNR {a[$1]=$2;next} {
split($1, b, ":");
if (b[1] in a)
print a[b[1]] ":" b[2], $2;
else
print $0;
}' keyFile.txt temp.txt
```
|
Another awk option
```
awk -F: 'NR == FNR{split($0, a, " "); x[a[1]]=a[2]; next}{print x[$1]":"$2}' keyFile.txt temp.txt
```
| 7,342
|
34,994,130
|
Looking through several projects recently, I noticed some of them use the `platforms` argument to `setup()` in `setup.py`, though with only one value of `any`, i.e.
```
#setup.py file in project's package folder
...
setup(
...,
platforms=['any'],
...
)
```
OR
```
#setup.py file in project's package folder
...
setup(
...,
platforms='any',
...
)
```
From the name "platforms", I can make a guess about what this argument means, and it seems that the list variant is the right usage.
So I googled, looked through [setuptools docs](http://pythonhosted.org/setuptools/setuptools.html), but I failed to find any explanation to what are the possible values to `platforms` and what it does/affects in package exactly.
Please explain, or provide a link to an explanation of, what it does exactly and what values it accepts.
P.S. I also tried providing different values for it in my OS-independent package to see what changes when creating wheels, but it seems to do nothing.
|
2016/01/25
|
[
"https://Stackoverflow.com/questions/34994130",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5738152/"
] |
`platforms` is an argument the `setuptools` package inherits from `distutils`; see the [*Additional meta-data* section](https://docs.python.org/2/distutils/setupscript.html#additional-meta-data) in the `distutils` documentation:
>
> *Meta-Data*: `platforms`
>
> *Description*: a list of platforms
>
> *Value*: list of strings
>
>
>
So, yes, using a list is the correct syntax.
The field just provides metadata; what platforms does the package target. Use this to communicate to tools or people about where you expect the package to be used.
There is no further specification for the contents of this list, it is unstructured and free-form. If you want to use something more structured, use the [available Trove classifier strings](https://pypi.python.org/pypi?%3Aaction=list_classifiers) in the `classifiers` field, where tags under `Operating System`, `Environment` and others let you more strictly define a platform.
Wheels do not use this field other than to include it in the metadata, just like other fields like `author` or `license`.
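For illustration, a minimal `setup.py` sketch combining the free-form `platforms` field with the more structured classifiers (the package name and classifier choices here are just example assumptions):

```
from setuptools import setup

setup(
    name='example-package',          # hypothetical package name
    version='0.1.0',
    platforms=['any'],               # free-form metadata, purely informational
    classifiers=[
        # structured, machine-readable equivalents from the Trove classifier list
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 3',
    ],
)
```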
|
Just an update to provide more information for anyone interested.
I found an accurate description of `platforms` in a [PEP](https://www.python.org/dev/peps/pep-0345/#platform-multiple-use).
So, "*There's a PEP for that!*":
[PEP-0345](https://www.python.org/dev/peps/pep-0345/) lists all possible arguments to `setup()` in `setup.py`, but it's a little old.
[PEP-0426](https://www.python.org/dev/peps/pep-0426/) and [PEP-0459](https://www.python.org/dev/peps/pep-0459/) are newer versions describing metadata for Python packages.
| 7,347
|
48,465,325
|
I am trying to build my container image using docker\_image module of Ansible.
**My host machine details:**
```
OS: Lubuntu 17.10
ansible 2.4.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/myuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.14 (default, Sep 23 2017, 22:06:14) [GCC 7.2.0]
```
**My Remote machine details:**
```
Remote OS: CentOS 7.2
Pip modules: docker-py==1.2.3 , six==latest
```
**My tasks inside playbook:**
```
- name: Install dependent python modules
pip:
name: "{{item}}"
state: present
with_items:
- docker-py
- name: Build container image for api
docker_image:
name: api
path: /home/abc/api/ #location of my Dockerfile
```
However i am constantly getting the following error message:
```
"msg": "Failed to import docker-py - No module named 'requests.packages.urllib3'. Try `pip install docker-py`"
```
I see there is some issue with the docker-py module, and some solutions and fixes for ansible docker\_container have also been merged, per the following links:
<https://github.com/ansible/ansible/issues/20492>
<https://medium.com/dronzebot/ansible-and-docker-py-path-issues-and-resolving-them-e3834d5bb79a>
I even tried with the following command to run my playbook:
```
python3 ansible-playbook main.yml
```
None of the above has helped to resolve it successfully yet. How should I go about this now?
|
2018/01/26
|
[
"https://Stackoverflow.com/questions/48465325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3592502/"
] |
Your client (presumably) would not have Sequelize installed and cannot pass a Sequelize operator via JSON, so what you are trying to do is not exactly possible. You would probably need to get the client to send strings (e.g. `or`, `and`) and then map those strings to Sequelize operators [1].
```
"filters": {"week": 201740, "project": 8, "itemgroup": {"or": ["group1", "group2"]}}
```
Then in your server code you would need to maintain a map of strings to Sequelize operators:
```
const operatorsMap = {
or: [Op.or],
and: [Op.and],
etc
}
```
And for each request, you then need to loop over all keys and replace strings with the Sequelize operators.
```
function isObject(o) {
return o instanceof Object && o.constructor === Object;
}
function replacer(obj) {
var newObj = {};
for (key in obj) {
var value = obj[key];
if (isObject(value)) {
newObj[key] = replacer(value);
} else if (operatorsMap[key]) {
var op = operatorsMap[key];
newObj[op] = value;
} else {
newObj[key] = value
}
}
return newObj;
}
module.exports = {
getAOLDataCount: function (res, filters) {
let result = '';
wd = new WeeklyData(sequelize, Sequelize.DataTypes);
wd.count({where: replacer(filters)}).then(function (aolCount) {
res.send('the value ' + aolCount);
});
return result;
}
};
```
FYI, The code above has not been tested.
[1] Without knowing the specifics of your project, I would recommend you reconsider this approach. The client really should not send a JSON object that gets fed directly to the ORM. What about SQL injection? What if you upgrade Sequelize and old filters get deprecated? Instead, consider allowing filtering via query parameters and have your API create a filter object based on those.
|
In the end I found out from the sequelize slack group that the old operator method is still in the codebase. I don't know how long it will stay, but since it is the only way of sending via json, it may well stay.
So the way it works is that rather than using [Op.or] you can use $or.
Example:
```
"filters": {"week": 201740, "project": 8, "itemgroup": {"$or": ["group1", "group2"]}}
```
| 7,348
|
65,363,105
|
I am making use of some user\_types in my application, which are is\_admin and is\_landlord. I want that whenever a superuser is created using `python manage.py createsuperuser`, they are automatically assigned is\_admin.
models.py
```
class UserManager(BaseUserManager):
def _create_user(self, email, password, is_staff, is_superuser, **extra_fields):
if not email:
raise ValueError('Users must have an email address')
now = datetime.datetime.now(pytz.utc)
email = self.normalize_email(email)
user = self.model(
email=email,
is_staff=is_staff,
is_active=True,
is_superuser=is_superuser,
last_login=now,
date_joined=now,
**extra_fields
)
user.set_password(password)
user.save(using=self._db)
return user
def create_user(self, email=None, password=None, **extra_fields):
return self._create_user(email, password, False, False, **extra_fields)
def create_superuser(self, email, password, **extra_fields):
user = self._create_user(email, password, True, True, **extra_fields)
user.save(using=self._db)
return user
class User(AbstractBaseUser, PermissionsMixin):
email = models.EmailField(max_length=254, unique=True)
is_staff = models.BooleanField(default=False)
is_superuser = models.BooleanField(default=False)
is_active = models.BooleanField(default=True)
last_login = models.DateTimeField(null=True, blank=True)
date_joined = models.DateTimeField(auto_now_add=True)
# CUSTOM USER FIELDS
name = models.CharField(max_length=30, blank=True, null=True)
telephone = models.IntegerField(blank=True, null=True)
USERNAME_FIELD = 'email'
EMAIL_FIELD = 'email'
REQUIRED_FIELDS = []
objects = UserManager()
def get_absolute_url(self):
return "/users/%i/" % (self.pk)
def get_email(self):
return self.email
class user_type(models.Model):
is_admin = models.BooleanField(default=False)
is_landlord = models.BooleanField(default=False)
user = models.OneToOneField(User, on_delete=models.CASCADE)
def __str__(self):
if self.is_landlord == True:
return User.get_email(self.user) + " - is_landlord"
else:
return User.get_email(self.user) + " - is_admin"
```
**Note**: I want a superuser created to automatically have the is\_admin status
|
2020/12/18
|
[
"https://Stackoverflow.com/questions/65363105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14434294/"
] |
You can automatically construct a `user_type` model in the `create_superuser` function:
```
class UserManager(BaseUserManager):
# …
def create_superuser(self, email, password, **extra_fields):
user = self._create_user(email, password, True, True, **extra_fields)
user.save(using=self._db)
user_type.objects.update_or_create(
    user=user,
    defaults={
        'is_admin': True
    }
)
return user
```
It is however not clear to me why you use a `user_type` class, and do not just add extra fields to your `User` model. That would be more efficient since these fields will be fetched in the *same* query when you fetch the logged in user. It is also more convenient to just access an `.is_landlord` attribute on the user model.
---
>
> **Note**: Models in Django are written in *PascalCase*, not *snake\_case*,
> so you might want to rename the model from ~~`user_type`~~ to `UserType`.
>
>
>
|
So I have tried a method that works; it is not the most efficient, but it is manageable.
What I do is a simple trick.
views.py
```
def Dashboard(request, pk):
user = User.objects.get(id=request.user.pk)
landlord = user.is_staff & user.is_active
if not landlord:
reservations = Tables.objects.all()
available_ = Tables.objects.filter(status="available")
unavailable_ = Tables.objects.filter(status="unavailable")
bar_owners_ = user_type.objects.filter(is_landlord=True)
context = {"reservations":reservations, "available_":available_, "unavailable_":unavailable_, "bar_owners_":bar_owners_}
return render(request, "dashboard/super/admin/dashboard.html", context)
else:
"""Admin dashboard view."""
user = User.objects.get(pk=pk)
reservations = Tables.objects.filter(bar__user_id=user.id)
available =Tables.objects.filter(bar__user_id=user.id, status="available")
unavailable =Tables.objects.filter(bar__user_id=user.id, status="unavailable")
context = {"reservations":reservations, "available":available, "unavailable":unavailable}
return render(request, "dashboard/super/landlord/dashboard.html", context)
```
So I let Django know that a user is only sent to the landlord dashboard if they are both staff and active; otherwise they are sent to the admin dashboard.
| 7,350
|
68,286,395
|
I am trying to create a Toplevel window, however, this Toplevel is called from a different file in the same directory within a function.
Apologies I am by no means a tkinter or python guru. Here are the two parts of the code. (snippets)
#File 1 (Main)
```
import tkinter as tk
from tkinter import *
import comm1
from comm1 import com1
root = tk.Tk()
root.title("")
root.geometry("1900x1314")
#grid Center && 3x6 configuration for correct gui layout
root.grid_rowconfigure(0, weight=1)
root.grid_rowconfigure(11, weight=1)
root.grid_columnconfigure(0, weight=1)
root.grid_columnconfigure(11, weight=1)
#background image
canvas = Canvas(root, width=1900, height=1314)
canvas.place(x=0, y=0, relwidth=1, relheight=1)
bckground = PhotoImage(file='img.png')
canvas.create_image(20 ,20 ,anchor=NW, image=bckground)
#command to create new Toplevel
btn1 = tk.Button(root, text='Top', command=com1, justify='center', font=("Arial", 10))
btn1.config(anchor=CENTER)
btn1.grid(row=4, column=1)
```
#File 2 (Toplevel)
```
#command for new window
def com1():
newWindow1 = Toplevel(root)
newWindow1.title("")
newWindow1.geometry("500x500")
entry1 = tk.Entry(root, justify='center' , font=("Arial", 12), fg="Grey")
newWindow1.pack()
newWindow1.mainloop()
```
The weird part is this worked perfectly for a few minutes and without changing any code it just stopped working.
Where am I going wrong?
|
2021/07/07
|
[
"https://Stackoverflow.com/questions/68286395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11911902/"
] |
No, don't make it `static`. If you want you can make it an instance field, but making it a class field is not optimal. E.g. see the note on thread-safety on the `Random` class that it has been derived from:
>
> Instances of `java.util.Random` are threadsafe. However, the concurrent use of the same `java.util.Random` instance across threads may encounter contention and consequent poor performance. Consider instead using `ThreadLocalRandom` in multithreaded designs.
>
>
>
Beware though that the `ThreadLocalRandom` is **not** cryptographically secure, and therefore not a good option for you. In general, you should try and avoid using `static` class fields, **especially** when the instances are **stateful**.
If you only require the random instance in one or a few methods that are not in a tight loop then making it a local instance is perfectly fine (just using `var rng = new SecureRandom()` in other words, or even just `new SecureRandom()` if you have *a single* method call that requires it).
|
I totally agree with Maarten's answer. However one can notice that java.util classes create statics for SecureRandom themselves.
```
public final class UUID implements java.io.Serializable, Comparable<UUID> {
...
/*
* The random number generator used by this class to create random
* based UUIDs. In a holder class to defer initialization until needed.
*/
private static class Holder {
static final SecureRandom numberGenerator = new SecureRandom();
}
```
| 7,351
|
63,306,561
|
File/Directory Structure:
```
main/home/script.py
main/home/a/.init
main/home/b/.init
```
I want to set up my `gitignore` to exclude everything in the home directory but to include specific file types.
What I tried:
```
home/* #exclude everything in the home directory and subdirectories
!home/*.py #include python files immediately in the home directory
!**.init #include .init files in all directories and subdirectories.
```
The problem: I can't seem to make sure `.init` files are included. The purpose of this file is to ensure that git will track all my directories, even if they do not have files yet. As such I want to place an empty 0-byte .init file inside each directory to ensure the "empty" directory is committed by git.
Thanks.
|
2020/08/07
|
[
"https://Stackoverflow.com/questions/63306561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
If you want to create, e.g., `home/foo/.init` and have this file be put into Git's *index* (for more about the index, see below), you will need to tell Git *not* to cut off searches into the `home/*/` directories:
```
!home/*/
```
Then, as [Fady Adal noted](https://stackoverflow.com/a/63306691/1256452) (but I've adjusted slightly), you probably also want:
```
!**/.init
```
so that when Git searches `home/*/` it will find and de-ignore files named `.init`. Note that this de-ignores *all* `.init` files; perhaps you want:
```
!home/**/.init
```
here so that you can ignore a file named, e.g., `nothome/foo/.init`. (You might even ignore `home/**/*` while un-ignoring `home/**/*/` and `home/**/.init`.)
Long: what's going on here?
===========================
I like to say that Git stores only files, not directories, and this is true—but the *reason* it's true has to do with the way Git builds new commits, which is, from Git's *index*.
### Commits and the index
Each *commit* stores a full and complete copy of every file that Git knows about. This full-and-complete copy, however, is stored in a special, read-only, Git-only, frozen-for-all-time format in which duplicate files are automatically de-duplicated. That way, the fact that your first commit has (say) a `README.md` file that hardly ever changes, means that every commit just *shares* that `README.md` file. If it does change, the new commits begin sharing the new file. If it changes back, new commits after that go back to sharing the original file. So if there are only three *versions* of `README.md`, despite having 3 million commits, those 3 million commits all share the three *versions* of the file.
But note that these files are literally read-only. You can't change them. Not even *Git* can change them (for technical reasons having to do with hash IDs; this is true of all existing commits too). They're not in a format that most of your computer programs can use, either. That means that to *work on* the file, or even just to *look* at it, Git has to expand out the frozen-and-compressed, Git-only committed file, into an ordinary everyday form.
This means that when you pick some commit to work on it, Git has to *extract* all the files from that commit. So there are already two copies of each file: the frozen one, in the Git-only compressed-and-de-duplicated form, and the useful one, in your work-tree.
Most version control systems (VCS-es) have this same pattern: there's a committed copy of each file, in some VCS-specific form, saved inside the VCS, and there's a plain-text / ordinary-format version that you work on. Many VCSes stop here, with just the two active files (and one of them might be stored in some central repository, rather than on your computer; Git stores the VCS copy on your computer).
To make a *new* commit, then, the VCS obviously has to package up all of your work-tree (ordinary-format) files. Some version control systems literally do this. Most at least stick a cache in here to make this go faster, because doing things this way is painfully slow. Git, however, uses a sneaky trick.
In Git, there's a *third* copy of each active file. This third copy goes in what Git calls, variously, the *index*, the *staging area*, or—rarely these days—the *cache*. Technically this one isn't usually a *copy* as Git stores it in the internal, compressed-and-de-duplicated form, so it's really just a reference to a blob-hash-ID. This also means that it's ready to go into the *next* commit.
**What this means is that the index—or staging area, if you prefer that term—can be described as holding *the next commit you intend to make*.** The index takes on an expanded role during conflicted merges, so this is not a complete description, but it's good enough for thinking about it. When you use `git commit` to make a new commit, Git just packages up all the prepared, frozen-format, pre-de-duplicated files from the index. But the index holds only *files*—files with long names, like `home/a/.init` for instance, but *files*, not directories.
Checking out some commit, to work on it, means extracting the files from that commit. Git puts them—in their frozen format, but now change-able—into the index so that they're ready to make a *new* commit, and de-compresses them into ordinary format in your work-tree so that you can see and work on them. Then, when you use `git add`, you are telling Git: *Make the index copy of some file match the work-tree copy of that file.*
* If there's already an index copy, the index copy gets booted out (it's probably safely in some commit though) and Git de-duplicates the work-tree copy into the appropriate compressed, frozen-format copy and puts *that* in the index instead.
* If there *wasn't* an index copy, now there is. (It's also still de-duplicated: if you make a new file that has some old file's content, the old content from the old commit gets re-used.)
Either way, it's now ready to go into a new commit.
### This is where `.gitignore` comes in
The `.gitignore` files are somewhat misnamed. They do not literally make Git *ignore* a file. A file's presence, or absence, in new commits you make is determined strictly by whether the file was in the index at the time you ran `git commit`.
What `.gitignore` does instead is two-fold. First, when you use `git status`, Git will *complain* about files that exist in your work-tree, but are not in Git's index. This complaint comes in the form of telling you that some file is *untracked*. This is literally what untracked means: that there is a file in your work-tree, where you can see it and edit it and so on, that isn't in Git's index *right now*. That's *all* it means, since you can put a file into Git's index (`git add`) or take one out (`git rm` or `git rm --cached`) at any time. But because the index is the source of each *new* commit, it's important to know whether some file is in the index or not—which is why Git complains if it's not.
Sometimes, though, this complaint is just annoying: *Yes, I know this compiled object-code file is not in the index. Don't tell me! I **know** already and it's not important!* So, to keep Git from complaining, you list the file in another file that probably should be called `.git-do-not-complain-about-these-untracked-files`.
But that's not the only thing that you get by listing the file in `.gitgnore`. It not only shuts up `git status`, it also makes `git add` *not* actually add the file. So `git add *` or `git add .` *won't* add the object-code file, or whatever. So to keep Git from adding, you list the file in a file that perhaps should be called `.git-do-not-auto-add-these-files`.
Hence `.gitignore` might be called `.git-do-not-complain-about-these-untracked-files-and-do-not-automatically-add-them-either`. But once those files *are* in the index, a `.gitignore` entry has no effect, so maybe it should be `.git-do-not-complain-about-these-untracked-files-and-do-not-automatically-add-them-either-but-if-they-are-in-the-index-go-ahead-and-commit-them`. But that's just ridiculous, so `.gitignore` it is.
### Scanning directories is slow
When you have a massive Git repository, with millions1 of files in it, some of the things that Git normally does very quickly start really bogging down. Even at just a few hundred thousand files, some things can be slow. One of the slowest is scanning through a directory (or folder) to look for untracked files.2
By listing a *directory*, such as `home/a/`, in a `.gitignore` file, you give Git permission to take a shortcut. Normally, Git would say to itself: *Ah, here is a directory `home/a`. I must open it and read out every file in it, and see if those files are in the index or not, in order to decide whether these files are untracked and/or need to be added.* But if the entire directory is to be ignored, Git can stop a little short: *Wait! I see `home/a` is to be ignored! I can skip it entirely!* And so it goes on to `home/b/` instead of looking inside `home/a/`.
To make sure that Git *doesn't* skip a directory, you must make sure that it's not ignored. This is where trailing slashes in the `.gitignore` entries come in.
---
1Most aren't even this big, but Microsoft are working on making Git perform with repositories of this size.
2The usual trick for these kinds of speed issues is to insert a cache. The problem here is that untracked files are, by definition, not in the index. Git's index does have an extension to do some untracked caching but this can never catch everything.
---
### The `.gitignore` line format
The format of lines in `.gitignore` is:
* blank and comment lines are ignored;
* lines that start with `!` are negations;
* lines that *end* with `/` refer to directories; and
* the rest of the line names the file, complete with leading and/or embedded slashes.
A negation only makes sense to undo the effect of an earlier line. In general, later lines override earlier lines, but there is that one big exception having to do with skipping entire directories.
A line that—after any `!` marking a negation—*starts* with a slash provides a *rooted* or *anchored* path.3 So `/home` for instance means just that—`/home`—and not something like `a/home`. A line that *contains an embedded slash* is also rooted, so that `home/a` and `/home/a` both mean the same thing.
The final slash, if there, gets *removed* from the "is rooted/anchored" test. That is, `home/` and `/home/` are different, because `home` is non-rooted/non-anchored but `/home` is rooted/anchored.
As Git scans through directories (folders) and subdirectories (sub-folders), it will try matching each *file or directory name* it finds at each level to all the non-rooted / non-anchored names. Only those at the level of that particular `.gitignore` get matched against the rooted / anchored names, though.
A trailing slash in the pattern means *match only if this is a directory*. So if `home/a` is a directory, it matches both `home/*` and `home/*/`; if `home/xyz` is a file, it matches only `home/*`, not `home/*/`.
Hence, if we want to ignore all *files* underneath `home`, we use:
```
home/*
```
to ignore them. This has an embedded slash so it is rooted/anchored. Unfortunately it gives Git permission to *skip* all subdirectories, so we must counter that with:
```
!home/*/
```
which has a trailing slash so that it applies only to directories. It too is anchored.
---
3I'm borrowing the term *anchored* from regular expression descriptions here. *Rooted* refers to the top level of the Git repository work-tree. Both terms should convey the right idea; use whichever you like better.
|
It should be
```
home/* #exclude everything in the home directory and subdirectories
!home/*.py #include python files immediately in the home directory
!**/*.init #include .init files in all directories and subdirectories.
```
| 7,352
|
58,187,677
|
I've worked with tensorflow for a while and everything worked properly until I tried to switch to the gpu version.
Uninstalled previous tensorflow,
pip installed tensorflow-gpu (v2.0)
downloaded and installed visual studio community 2019
downloaded and installed CUDA 10.1
downloaded and installed cuDNN
tested with CUDA sample "deviceQuery\_vs2019" and got positive result.
test passed
Nvidia GeForce rtx 2070
run test with previous working file and get the error
tensorflow.python.framework.errors\_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
After some research I found that the supported CUDA version is 10.0,
so I downgraded the version and changed the CUDA path, but nothing changed.
using this code
```
import tensorflow as tf
print("Num GPUs Available: ",
len(tf.config.experimental.list_physical_devices('GPU')))
```
i get
```
2019-10-01 16:55:03.317232: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-10-01 16:55:03.420537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
Num GPUs Available: 1
name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
2019-10-01 16:55:03.421029: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-10-01 16:55:03.421849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
[Finished in 2.01s]
```
CUDA seems to recognize the card, and so does tensorflow, but I cannot get rid of the error:
tensorflow.python.framework.errors\_impl.InternalError: cudaGetDevice() failed. Status: cudaGetErrorString symbol not found.
What am I doing wrong? Should I stick with CUDA 10.0? Am I missing a piece of the installation?
|
2019/10/01
|
[
"https://Stackoverflow.com/questions/58187677",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12149136/"
] |
SOLVED, it's mostly an alchemy of versions to avoid conflicts.
Here's what i've done (order matters as far as i know)
1. uninstall everything (tf, cuda, visual studio)
2. pip install tensorflow-gpu
3. download and install visual studio community 2017 (2019 won't work)
4. I also have installed the c++ workload from visual studio (not sure if it's necessary but it has the required compiler visual c++ 15.x)
5. download and install cuda 10.0 (the one i have is 10.0.130)
6. go to system environment variables (search it in the windows bar) > advanced > click Environment Variables...
7. create New user variables (do not confuse with system var)
8. Variable name: CUDA\_PATH,
9. Variable value: browse to the cuda directory down to the version directory (mine is C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0)
10. The guide says you need cuDNN 7.4.1, but I got an error saying the expected version is 7.6 minimum. Go to the NVIDIA Developer cuDNN archive and download "cuDNN v7.6.0 for CUDA 10.0" (be sure you get the right file). Unzip, then put the cuDNN files into the corresponding CUDA directories (lib, include, bin).
From there everything worked like a charm. I haven't been able to build the cuda sample file from visual studio (devicequery) but it's not a vital step.
Almost every error was due to incompatible versions of the files; it took me 3-4 days to figure out the right mix. Hope that helps :)
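As a quick sanity check after the reinstall (a minimal sketch, not part of the original steps), you can ask TensorFlow 2.x whether it sees the GPU and run a small op on it:

```python
import tensorflow as tf

# List the physical GPUs TensorFlow can see
gpus = tf.config.experimental.list_physical_devices('GPU')
print("Num GPUs Available:", len(gpus))

# Run a small matrix multiplication; with a working CUDA/cuDNN setup this
# should execute on the GPU without raising the cudaGetDevice InternalError
with tf.device('/GPU:0'):
    a = tf.random.normal([1000, 1000])
    b = tf.random.normal([1000, 1000])
    print(tf.reduce_sum(tf.matmul(a, b)).numpy())
```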
|
tensorflow-gpu v2.0.0 is [now available on conda](https://anaconda.org/anaconda/tensorflow-gpu), and is very easy to install with:
`conda install -c anaconda tensorflow-gpu`. No additional downloads or cuda installs required.
| 7,353
|
63,887,379
|
I'm using **Visual Studio Code** with **Cloud Code** extension. When I try to "**Deploy to Cloud Run**", I'm having this error:
>
> Automatic image build detection failed. Error: Component map not
> found. Consider adding a Dockerfile to the workspace and try again.
>
>
>
But I already have a Dockerfile:
```
# Python image to use.
FROM python:3.8
# Set the working directory to /app
WORKDIR /app
# copy the requirements file used for dependencies
COPY requirements.txt .
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Install ptvsd for debugging
RUN pip install ptvsd
# Copy the rest of the working directory contents into the container at /app
COPY . .
# Run app.py when the container launches
ENTRYPOINT ["python", "-m", "ptvsd", "--port", "3000", "--host", "0.0.0.0", "manage.py", "runserver", "--noreload"]
```
How to solve that, please?
|
2020/09/14
|
[
"https://Stackoverflow.com/questions/63887379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5411494/"
] |
I wasn't able to repro the issue, but I checked in with the Cloud Code team and it sounds like there could have been an underlying issue with gcloud that wasn't your fault.
I don't think you'll see this error again, but if you do, it would be awesome if you could file an issue at the [Cloud Code VS Code repo](https://github.com/GoogleCloudPlatform/cloud-code-vscode) so that we can gather more info and take a closer look.
This does show that we need to improve our error messages though. I've filed a bug to fix messaging regarding this scenario.
|
I don't know why, but after connecting on a different network and running the commands below, the error is gone.
```
gcloud auth revoke
gcloud auth login
gcloud init
```
| 7,356
|
63,544,127
|
I'm occasionally learning Java. As a person from a Python background, I'd like to know whether there exists something like Python's `sorted(iterable, key=function)` in Java.
For example, in Python I can sort a list by a specific character of each element, such as
```
>>> a_list = ['bob', 'kate', 'jaguar', 'mazda', 'honda', 'civic', 'grasshopper']
>>> s=sorted(a_list) # sort all elements in ascending order first
>>> s
['bob', 'civic', 'grasshopper', 'honda', 'jaguar', 'kate', 'mazda']
>>> sorted(s, key=lambda x: x[1]) # sort by the second character of each element
['jaguar', 'kate', 'mazda', 'civic', 'bob', 'honda', 'grasshopper']
```
So `a_list` is sorted in ascending order first, and then by each element's 1-indexed (second) character.
My question is, if I want to sort elements by a specific character in ascending order in Java, how can I achieve that?
Below is the Java code that I wrote:
```
import java.util.Arrays;
public class sort_list {
public static void main(String[] args)
{
String [] a_list = {"bob", "kate", "jaguar", "mazda", "honda", "civic", "grasshopper"};
Arrays.sort(a_list);
        System.out.println(Arrays.toString(a_list));
    }
}
```
and the result is like this:
```
[bob, civic, grasshopper, honda, jaguar, kate, mazda]
```
Here I only achieved sorting the array in ascending order. I want the Java array result to be the same as the Python list result.
Java is new to me, so any suggestion would be highly appreciated.
Thank you in advance.
|
2020/08/23
|
[
"https://Stackoverflow.com/questions/63544127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11790764/"
] |
Use a custom comparator to compare two strings.
```
Arrays.sort(a_list, Comparator.comparing(s -> s.charAt(1)));
```
This compares two strings by the string's second character.
This will result in
```
[kate, jaguar, mazda, civic, bob, honda, grasshopper]
```
I see that `jaguar` and `kate` are switched in your output. I'm not sure how Python orders two strings that compare as equal. `Arrays.sort` does a *stable* sort.
>
> This sort is guaranteed to be *stable*: equal elements will
> not be reordered as a result of the sort.
>
>
>
|
You can supply a lambda function to [`Arrays.sort`](https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(T%5B%5D,%20java.util.Comparator)). For your example, you could use:
```
Arrays.sort(a_list, (String a, String b) -> a.charAt(1) - b.charAt(1));
```
Assuming you have first sorted the array alphabetically (using `Arrays.sort(a_list)`) this will give you your desired result of:
```
['jaguar', 'kate', 'mazda', 'civic', 'bob', 'honda', 'grasshopper']
```
| 7,357
|
33,241,842
|
I'm using TextBlob for Python to do some sentiment analysis on tweets. The default analyzer in TextBlob is the PatternAnalyzer, which works reasonably well and is appreciably fast.
```
sent = TextBlob(tweet.decode('utf-8')).sentiment
```
I have now tried to switch to the NaiveBayesAnalyzer and found the runtime to be impractical for my needs. (Approaching 5 seconds per tweet.)
```
sent = TextBlob(tweet.decode('utf-8'), analyzer=NaiveBayesAnalyzer()).sentiment
```
I have used the scikit learn implementation of the Naive Bayes Classifier before and did not find it to be this slow, so I'm wondering if I'm using it right in this case.
I am assuming the analyzer is pretrained; at least the documentation states "Naive Bayes analyzer that is trained on a dataset of movie reviews." But then it also has a function train() which is described as "Train the Naive Bayes classifier on the movie review corpus." Does it internally train the analyzer before each run? I hope not.
Does anyone know of a way to speed this up?
|
2015/10/20
|
[
"https://Stackoverflow.com/questions/33241842",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3174668/"
] |
Yes, TextBlob will train the analyzer on each run. You can use the following code to avoid training the analyzer every time.
```
from textblob import Blobber
from textblob.sentiments import NaiveBayesAnalyzer
tb = Blobber(analyzer=NaiveBayesAnalyzer())
print tb("sentence you want to test")
```
|
Adding to Alan's very useful answer: if you have table data in a dataframe and want to use textblob's NaiveBayesAnalyzer, then this works. Just swap out `word_list` for your relevant series of strings.
```
import textblob
from textblob.sentiments import NaiveBayesAnalyzer
import pandas as pd

tb = textblob.Blobber(analyzer=NaiveBayesAnalyzer())
for index, row in df.iterrows():
sent = tb(row['word_list']).sentiment
df.loc[index, 'classification'] = sent[0]
df.loc[index, 'p_pos'] = sent[1]
df.loc[index, 'p_neg'] = sent[2]
```
Above splits the tuple that `sentiment` returns into three separate series.
This works if the series is all strings, but if it has mixed datatypes (as can happen in pandas with the `object` dtype) you might want to put a try/except block around it to catch exceptions.
Timing-wise it does 1000 rows in around 4.7 seconds in my tests.
Hope this is helpful.
| 7,359
|
68,441,299
|
I'm trying to get the number of commits of github repos using python and beautiful soup
html code:
```
<div class="flex-shrink-0">
<h2 class="sr-only">Git stats</h2>
<ul class="list-style-none d-flex">
<li class="ml-0 ml-md-3">
<a data-pjax href="..." class="pl-3 pr-3 py-3 p-md-0 mt-n3 mb-n3 mr-n3 m-md-0 Link--primary no-underline no-wrap">
<span class="d-none d-sm-inline">
<strong>26</strong>
<span aria-label="Commits on master" class="color-text-secondary d-none d-lg-inline">
commits
</span>
</span>
</a>
</li>
</ul>
</div>
```
my code:
```
r = requests.get(source_code_link)
soup = bs(r.content, 'lxml')
spans = soup.find_all('span', class_='d-none d-sm-inline')
for span in spans:
number = span.select_one('strong')
```
It sometimes works but sometimes doesn't, because there is more than one `span` tag with class `d-none d-sm-inline`.
How can I solve this?
|
2021/07/19
|
[
"https://Stackoverflow.com/questions/68441299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
```
from bs4 import BeautifulSoup as bs
html="""<span class="d-none d-sm-inline">
<strong>26</strong>
<span aria-label="Commits on master" class="color-text-secondary d-none d-lg-inline">
commits
</span>
</span>
<div class="flex-shrink-0">
<h2 class="sr-only">Git stats</h2>
<ul class="list-style-none d-flex">
<li class="ml-0 ml-md-3">
<a data-pjax href="..." class="pl-3 pr-3 py-3 p-md-0 mt-n3 mb-n3 mr-n3 m-md-0 Link--primary no-underline no-wrap">
<span class="d-none d-sm-inline">
<strong>23</strong>
<span aria-label="Commits on master" class="color-text-secondary d-none d-lg-inline">
commits
</span>
</span>
</a>
</li>
</ul>
</div>"""
```
I have combined both examples: the code looks for the `strong` tag and, based on that, prints the data using the `.contents` method.
```
soup = bs(html, 'lxml')
spans = soup.find_all('span', class_='d-none d-sm-inline')
for span in spans:
for tag in span.contents:
if tag.name=="strong" :
print(tag.get_text())
```
Using a list comprehension:
```
for span in spans:
data=[tag for tag in span.contents if tag.name=="strong"]
print(data[0].get_text())
```
Output for both cases:
```
26
23
```
|
You can use the [`find_next()`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-and-find-next) method to look for a `<strong>` after the class `d-none d-sm-inline`.
In your case:
```
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html.parser")
for tag in soup.find_all("span", class_="d-none d-sm-inline"):
print(tag.find_next("strong").text)
```
| 7,361
|
23,184,410
|
I have jobs scheduled through `apscheduler`. I have 3 jobs so far, but will soon have many more. I'm looking for a way to scale my code.
Currently, each job is its own `.py` file, and in the file, I have turned the script into a function with `run()` as the function name. Here is my code.
```
from apscheduler.scheduler import Scheduler
import logging
import job1
import job2
import job3
logging.basicConfig()
sched = Scheduler()
@sched.cron_schedule(day_of_week='mon-sun', hour=7)
def runjobs():
job1.run()
job2.run()
job3.run()
sched.start()
```
This works, right now the code is just stupid, but it gets the job done. But when I have 50 jobs, the code will be stupid long. How do I scale it?
note: the actual names of the jobs are arbitrary and don't follow a pattern. The name of the file is `scheduler.py` and I run it using `execfile('scheduler.py')` in the Python shell.
|
2014/04/20
|
[
"https://Stackoverflow.com/questions/23184410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1744744/"
] |
```
import urllib
import threading
import datetime
pages = ['http://google.com', 'http://yahoo.com', 'http://msn.com']
#------------------------------------------------------------------------------
# Getting the pages WITHOUT threads
#------------------------------------------------------------------------------
def job(url):
response = urllib.urlopen(url)
html = response.read()
def runjobs():
for page in pages:
job(page)
start = datetime.datetime.now()
runjobs()
end = datetime.datetime.now()
print "jobs run in {} microseconds WITHOUT threads" \
.format((end - start).microseconds)
#------------------------------------------------------------------------------
# Getting the pages WITH threads
#------------------------------------------------------------------------------
def job(url):
response = urllib.urlopen(url)
html = response.read()
def runjobs():
threads = []
for page in pages:
t = threading.Thread(target=job, args=(page,))
t.start()
threads.append(t)
for t in threads:
t.join()
start = datetime.datetime.now()
runjobs()
end = datetime.datetime.now()
print "jobs run in {} microsecond WITH threads" \
.format((end - start).microseconds)
```
|
Look @
<http://furius.ca/pubcode/pub/conf/bin/python-recursive-import-test>
This will help you import all Python / .py files.
While importing you can build a list that keeps a reference to each job's `run` function, for example:
`[job1.run, job2.run]`
Then iterate through the list and call each function :) (a minimal sketch follows below)
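A minimal sketch of that idea (the module names and the `JOB_MODULES` list are hypothetical; each job module is assumed to expose a `run()` function as in the question):

```python
import importlib
import logging
from apscheduler.scheduler import Scheduler

logging.basicConfig()
sched = Scheduler()

# Listing the job module names once keeps adding a job a one-line change
JOB_MODULES = ['job1', 'job2', 'job3']  # extend as more jobs appear
jobs = [importlib.import_module(name) for name in JOB_MODULES]

@sched.cron_schedule(day_of_week='mon-sun', hour=7)
def runjobs():
    for job in jobs:
        job.run()

sched.start()
```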
Thanks Arjun
| 7,363
|
29,413,719
|
When I input the variable a
```
a = int(1388620800)*1000
```
(of course) this variable is returned
```
1388620800000
```
But on the Google App Engine server, this variable is changed (see <https://cloud.google.com/appengine/docs/python/datastore/typesandpropertyclasses#int>) to
```
1388620800000L
```
How to convert 1388620800000L to int type in Appengine?
**EDIT**
I will print this number as JSON. In local Python 2.7, the variable 'a' is just an integer. But on the App Engine server, the variable 'a' has a trailing 'L'. I solved it using str() and .replace('L',''), but I wonder how to solve it by changing the number type.
|
2015/04/02
|
[
"https://Stackoverflow.com/questions/29413719",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1474183/"
] |
Maven has a [standard directory layout](https://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html). The directory `src/main/resources` is intended for such application resources. Place your text files into it.
You now basically have two options where exactly to place your files:
1. The resource file belongs to a class.
An example for this is a class representing a GUI element (a panel) that needs to also show some images.
In this case place the resource file into the same directory (package) as the corresponding class. E.g. for a class named `your.pkg.YourClass` place the resource file into the directory `your/pkg`:
```
src/main
+-- java/
| +-- your/pkg/
| | +-- YourClass.java
+-- resources/
+-- your/pkg/
+-- resource-file.txt
```
You now load the resource via the corresponding class. Inside the class `your.pkg.YourClass` you have the following code snippet for loading:
```
String resource = "resource-file.txt"; // the "file name" without any package or directory
Class<?> clazz = this.getClass(); // or YourClass.class
URL resourceUrl = clazz.getResource(resource);
if (resourceUrl != null) {
try (InputStream input = resourceUrl.openStream()) {
// load the resource here from the input stream
}
}
```
Note: You can also load the resource via the class' class loader:
```
String resource = "your/pkg/resource-file.txt";
ClassLoader loader = this.getClass().getClassLoader(); // or YourClass.class.getClassLoader()
URL resourceUrl = loader.getResource(resource);
if (resourceUrl != null) {
try (InputStream input = resourceUrl.openStream()) {
// load the resource here from the input stream
}
}
```
Choose, what you find more convenient.
2. The resource belongs to the application at whole.
In this case simply place the resource directly into the `src/main/resources` directory or into an appropriate sub directory. Let's look at an example with your lookup file:
```
src/main/resources/
+-- LookupTables/
+-- LookUpTable1.txt
```
You then must load the resource via a class loader, using either the current thread's context class loader or the application class loader (whatever is more appropriate - go and search for articles on this issue if interested). I will show you both ways:
```
String resource = "LookupTables/LookUpTable1.txt";
ClassLoader ctxLoader = Thread.currentThread().getContextClassLoader();
ClassLoader sysLoader = ClassLoader.getSystemClassLoader();
URL resourceUrl = ctxLoader.getResource(resource); // or sysLoader.getResource(resource)
if (resourceUrl != null) {
try (InputStream input = resourceUrl.openStream()) {
// load the resource here from the input stream
}
}
```
As a first suggestion, use the current thread's context class loader. In a standalone application this will be the system class loader or have the system class loader as a parent. (The distinction between these class loaders will become important for libraries that also load resources.)
You should always use a class loader for loading resources. This way you make loading independent of location (just take care that the files are inside the class path when launching the application) and you can package the whole application into a JAR file which still finds the resources.
|
I tried to reproduce your problem given the MWE you provided, but did not succeed. I uploaded my project including a pom.xml (you mentioned you used maven) here: <http://www.filedropper.com/stackoverflow>
This is what my lookup class looks like (also showing how to use the getResourceAsStream method):
```
public class LookUpClass {
final static String tableName = "resources/LookUpTables/LookUpTable1.txt";
public static String getLineFromLUT(final int line) {
final URL url = LookUpClass.class.getResource(tableName);
if (url.toString().startsWith("jar:")) {
try (final URLClassLoader loader = URLClassLoader
.newInstance(new URL[] { url })) {
return getLineFromLUT(
new InputStreamReader(
loader.getResourceAsStream(tableName)), line);
} catch (final IOException e) {
e.printStackTrace();
}
} else {
return getLineFromLUT(
new InputStreamReader(
LookUpClass.class.getResourceAsStream(tableName)),
line);
}
return null;
}
public static String getLineFromLUT(final Reader reader, final int line) {
try (final BufferedReader br = new BufferedReader(reader)) {
for (int i = 0; i < line; ++i)
br.readLine();
return br.readLine();
} catch (final IOException e) {
e.printStackTrace();
}
return null;
}
}
```
| 7,364
|
74,127,611
|
I am unsure if this is one of those problems that is impossible or not, in my mind it seems like it should be possible. ***Edit** - We more or less agree it is impossible*
Given a range specified by two integers (i.e. `n1 ... n2`), is it possible to create a python generator that yields a random integer from the range WITHOUT repetitions and WITHOUT loading the list of options into memory (i.e. `list(range(n1, n2))`).
Expected usage would be something like this:
```
def random_range_generator(n1, n2):
...
gen = random_range_generator(1, 6)
for n in gen:
print(n)
```
Output:
```
4
1
5
3
2
```
|
2022/10/19
|
[
"https://Stackoverflow.com/questions/74127611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8627756/"
] |
As I stated in the comment above, what you are seeking is some of the facilities of the reactive programming paradigm (not to be confused with the JavaScript library that borrows its name from it).
It is possible to instrument objects in Python to do so - I think the minimum setup here would be a specialized target mapping, and a special object type you set as the values in it, that would fetch the target value.
Python can do this in more straightforward ways with direct *attribute* access (using the dot notation: `myinstance.value`) than by using the key-retrieving notation used in dictionaries `mydata['value']` due to the fact a class is already a template to a certain data group, and class attributes can define mechanisms to access each instance's attribute value. That is called the "descriptor protocol" and is bound into the language model itself.
Nonetheless a minimalist Mapping based version can be implemented as such:
```
FOO = {'foo': 1}
from collections.abc import MutableMapping
class LazyValue:
def __init__(self, source, key):
self.source = source
self.key = key
def get(self):
return self.source[self.key]
def __repr__(self):
return f"<LazyValue {self.get()!r}>"
class LazyDict(MutableMapping):
def __init__(self, *args, **kw):
self.data = dict(*args, **kw)
def __getitem__(self, key):
value = self.data[key]
if isinstance(value, LazyValue):
value = value.get()
return value
def __setitem__(self, key, value):
self.data[key] = value
    def __delitem__(self, key):
        del self.data[key]
    def __iter__(self):
        return iter(self.data)
    def __len__(self):
        return len(self.data)
    def __repr__(self):
        return repr({key: value for key, value in self.items()})
BAR = LazyDict({'test': LazyValue(FOO, 'foo')})
# ...complex logic here which ultimately updates the value of Foo['foo']...
FOO['foo'] = 2
print(BAR['test']) # Outputs 2
```
The reason this much code is needed is that there are several ways to retrieve data from a dictionary or mapping (`.values`, `.items`, `.get`, `.setdefault`) and simply inheriting from `dict` and implementing `__getitem__` would "leak" the special lazy object in any of the other methods. Going through this MutableMapping approach ensure a single point of reading of the value in the `__getitem__` method - and the resulting instance can be used reliably anywhere a mapping is expected.
However, notice that if you are using normal classes and instances rather than dictionaries, this can be much simpler - you can just use plain Python "property" and have a getter that will fetch the value. The main factor you should ponder is whether your referenced data keys are fixed, and can be hard-coded when writting the source code, or if they are dynamic, and which keys will work as lazy-references are only known at runtime. In this last case, the custom mapping approach, as above, will be usually better:
```
FOO = {'foo': 1}
class LazyStuff:
def __init__(self, source):
self.source = source
@property
def test(self):
return self.source["foo"]
BAR = LazyStuff(FOO)
FOO["foo"] = 2
print(BAR.test)
```
Perceive that in this way you have to hardcode the keys "foo" and "test" in the class body, but it is just plain code, and there is no need for the intermediary "LazyValue" class. Also, if you need this data as a dictionary, you could add an `.as_dict` method to `LazyStuff` that would collect all attributes at the moment it is called and yield a snapshot of those values as a dictionary (a sketch follows below).
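A minimal sketch of what that hypothetical `.as_dict` snapshot method could look like (assuming you only want the properties defined on the class):

```python
class LazyStuff:
    def __init__(self, source):
        self.source = source

    @property
    def test(self):
        return self.source["foo"]

    def as_dict(self):
        # Snapshot every property defined on the class at call time
        cls = type(self)
        return {name: getattr(self, name)
                for name, attr in vars(cls).items()
                if isinstance(attr, property)}

FOO = {'foo': 1}
BAR = LazyStuff(FOO)
FOO['foo'] = 2
print(BAR.as_dict())  # {'test': 2}
```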
|
You can try using lambdas and calling the value on return. Like this:
```
FOO = {'foo': 1}
BAR = {'test': lambda: FOO['foo'] }
FOO['foo'] = 2
print(BAR['test']()) # Outputs 2
```
| 7,365
|
48,903,304
|
I am new to Big Data; I am trying to connect Kafka to Spark.
Here is my producer code
```python
import os
import sys
import pykafka
def get_text():
## This block generates my required text.
text_as_bytes=text.encode(text)
producer.produce(text_as_bytes)
if __name__ == "__main__":
client = pykafka.KafkaClient("localhost:9092")
print ("topics",client.topics)
producer = client.topics[b'imagetext'].get_producer()
get_text()
```
This is printing my generated text on console consumer when I do
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic imagetext --from-beginning
Now I want this text to be consumed using Spark and this is my Jupyter code
```python
import findspark
findspark.init()
import os
from pyspark import SparkContext, SparkConf
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
import json
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /spark-2.1.1-bin-hadoop2.6/spark-streaming-kafka-0-8-assembly_2.11-2.1.0.jar pyspark-shell'
conf = SparkConf().setMaster("local[2]").setAppName("Streamer")
sc = SparkContext(conf=conf)
ssc = StreamingContext(sc,5)
print('ssc =================== {} {}')
kstream = KafkaUtils.createDirectStream(ssc, topics = ['imagetext'],
kafkaParams = {"metadata.broker.list": 'localhost:9092'})
print('contexts =================== {} {}')
lines = kstream.map(lambda x: x[1])
lines.pprint()
ssc.start()
ssc.awaitTermination()
ssc.stop(stopGraceFully = True)
```
But this is producing output on my Jupyter as
```
Time: 2018-02-21 15:03:25
-------------------------------------------
-------------------------------------------
Time: 2018-02-21 15:03:30
-------------------------------------------
```
Not the text that is on my console consumer..
Please help, unable to figure out the mistake.
|
2018/02/21
|
[
"https://Stackoverflow.com/questions/48903304",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9369585/"
] |
The same issue happened to me. I was unable to solve it using JSONP. What I ended up doing was to make an action in the controller that received the JSON from the external URL and sent it back to the AJAX call.
For example:
```
return $.ajax({
type: "post",
url: "/ActionInProject/ProjectController",
});
```
Then in the controller it will be different for whichever server-side language is being used. For me it was C#, so I did something like:
```
[HttpPost]
public JsonResult ActionInProject()
{
    using (HttpClient client = new HttpClient())
    {
        // Fetch the external resource server-side, then hand it back to the AJAX caller
        var response = client.GetAsync("someothersite.com/api/link").Result;
        var json = response.Content.ReadAsStringAsync().Result;
        return Json(json);
    }
}
```
|
I tried your request with Postman
and I found that the response is not valid JSON. Below you can find what is returned by the server:
```
<!DOCTYPE html><html dir="ltr"><head><title>Duolingo</title><meta charset="utf-8"><meta name="viewport" content="width=device-width,initial-scale=1,user-scalable=no"><meta name="robots" content="NOODP"><noscript><meta http-equiv="refresh" content="0; url=/nojs/splash"></noscript><meta name="apple-mobile-web-app-capable" content="yes"><meta name="apple-mobile-web-app-status-bar-style" content="black"><meta name="apple-mobile-web-app-title" content="Duolingo"><meta name="google" content="notranslate"><meta name="mobile-web-app-capable" content="yes"><meta name="apple-itunes-app" content="app-id=570060128"><meta name="google-site-verification" content="nWyTCDRw4VJS_b6YSRZiFmmj56EawZpZMhHtKXt7lkU"><link rel="chrome-webstore-item" href="https://chrome.google.com/webstore/detail/aiahmijlpehemcpleichkcokhegllfjl"><link rel="apple-touch-icon" href="//d35aaqx5ub95lt.cloudfront.net/images/duolingo-touch-icon.png"><link rel="shortcut icon" type="image/x-icon" href="//d35aaqx5ub95lt.cloudfront.net/favicon.ico"><link href="//d35aaqx5ub95lt.cloudfront.net/css/ltr-9f45956e.css" rel="stylesheet"> <link rel="manifest" href="/gcm/manifest.json"></head><body><div id="root"></div><script>window.duo={"detUiLanguages":["en","es","pt","it","fr","de","ja","zs","zt","ko","ru","hi","hu","tr"],"troubleshootingForumId":647,"uiLanguage":"en","unsupportedDirections":[],"oldWebUrlWhitelist":["^/activity_stream$","^/admin_tools$","^/c/$","^/certification/","^/comment/","^/course/","^/course_announcement/","^/courses$","^/courses/","^/debug/","^/dictionary/","^/discussion$","^/event/","^/forgot_password$","^/guidelines$","^/help$","^/j$","^/mobile$","^/p/$","^/pm/","^/power_practice/","^/preview/.+/","^/quit_classroom_session","^/redirect/","^/referred","^/register_user$","^/reset_password","^/settings/reset_lang","^/skill_practice/","^/team/","^/teams$","^/topic/","^/translation/","^/translations$","^/translations/","^/troubleshooting$","^/ui_strings/","^/upload$","^/vocab","^/welcome$","^/welcome/","^/word","^/words$"]}</script><script>window.duo.version="c89bfb9"</script><script>!function(r){function n(t){if(e[t])return e[t].exports;var s=e[t]={i:t,l:!1,exports:{}};return r[t].call(s.exports,s,s.exports,n),s.l=!0,s.exports}var t=window.webpackJsonp;window.webpackJsonp=function(e,i,o){for(var c,a,f,d=0,u=[];d<e.length;d++)a=e[d],s[a]&&u.push(s[a][0]),s[a]=0;for(c in i)Object.prototype.hasOwnProperty.call(i,c)&&(r[c]=i[c]);for(t&&t(e,i,o);u.length;)u.shift()();if(o)for(d=0;d<o.length;d++)f=n(n.s=o[d]);return f};var e={},s={31:0};n.e=function(r){function t(){c.onerror=c.onload=null,clearTimeout(a);var n=s[r];0!==n&&(n&&n[1](new Error("Loading chunk "+r+" failed.")),s[r]=void 0)}var e=s[r];if(0===e)return new Promise(function(r){r()});if(e)return e[2];var i=new Promise(function(n,t){e=s[r]=[n,t]});e[2]=i;var 
o=document.getElementsByTagName("head")[0],c=document.createElement("script");c.type="text/javascript",c.charset="utf-8",c.async=!0,c.timeout=12e4,n.nc&&c.setAttribute("nonce",n.nc),c.src=n.p+""+({0:"js/vendor",1:"js/app",2:"strings/zh-TW",3:"strings/zh-CN",4:"strings/vi",5:"strings/uk",6:"strings/tr",7:"strings/tl",8:"strings/th",9:"strings/te",10:"strings/ta",11:"strings/ru",12:"strings/ro",13:"strings/pt",14:"strings/pl",15:"strings/pa",16:"strings/nl-NL",17:"strings/ko",18:"strings/ja",19:"strings/it",20:"strings/id",21:"strings/hu",22:"strings/hi",23:"strings/fr",24:"strings/es",25:"strings/en",26:"strings/el",27:"strings/de",28:"strings/cs",29:"strings/bn",30:"strings/ar"}[r]||r)+"-"+{0:"2b9feda7",1:"662ee5e7",2:"c444b0a9",3:"a5658bf8",4:"3ea447d8",5:"1573893a",6:"c32ed883",7:"52cac8bc",8:"2c58adbb",9:"681aaba6",10:"d05b78c6",11:"f4071afb",12:"a1349f5c",13:"6a57ec9f",14:"762dfc94",15:"8a02897a",16:"4e429b1e",17:"8e921ddf",18:"524cc86b",19:"8df42324",20:"7d8a8fc5",21:"4fde5d79",22:"509b8809",23:"9f09bcfb",24:"77da48d4",25:"44cfb321",26:"13b268cc",27:"c0cac402",28:"3ecdeec1",29:"dfd2b224",30:"074ffddd"}[r]+".js";var a=setTimeout(t,12e4);return c.onerror=c.onload=t,o.appendChild(c),i},n.m=r,n.c=e,n.d=function(r,t,e){n.o(r,t)||Object.defineProperty(r,t,{configurable:!1,enumerable:!0,get:e})},n.n=function(r){var t=r&&r.__esModule?function(){return r.default}:function(){return r};return n.d(t,"a",t),t},n.o=function(r,n){return Object.prototype.hasOwnProperty.call(r,n)},n.p="/",n.oe=function(r){throw console.error(r),r}}([])</script><script src="//d35aaqx5ub95lt.cloudfront.net/js/vendor-2b9feda7.js"></script> <script src="//d35aaqx5ub95lt.cloudfront.net/strings/en-44cfb321.js"></script> <script src="//d35aaqx5ub95lt.cloudfront.net/js/app-662ee5e7.js"></script></body></html>
```
As you can see, the JSON is wrapped inside an HTML page, which is not valid JSON syntax; it is something related to the API itself. It returns a page instead of a JSON object.
| 7,367
|
26,106,358
|
I have this code at the top of my Google App Engine program:
```
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(60)
```
I am using an opener to load stuff:
```
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor( cj ) )
opener.addheaders = [ ( 'User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64)' ) ]
resp = opener.open( 'http://www.example.com/' )
```
an exception is being thrown after 5 seconds:
```
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 1222, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib2.py", line 1187, in do_open
r = h.getresponse(buffering=True)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/gae_override/httplib.py", line 524, in getresponse
raise HTTPException(str(e))
HTTPException: Deadline exceeded while waiting for HTTP response from URL: http://www.example.com
```
How can I avoid the error?
|
2014/09/29
|
[
"https://Stackoverflow.com/questions/26106358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/847663/"
] |
Did you try setting the timeout on the `.open()` call?
```
resp = opener.open('http://example.com', None, 60)
```
If you reach the timeout as specified by `set_default_fetch_deadline`, Python will throw a `DownloadError` or `DeadlineExceededError` exception: <https://cloud.google.com/appengine/docs/python/urlfetch/exceptions>
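A minimal sketch of catching those errors, assuming you call the urlfetch API directly instead of going through the urllib2 opener:

```python
from google.appengine.api import urlfetch
from google.appengine.api import urlfetch_errors

urlfetch.set_default_fetch_deadline(60)

try:
    result = urlfetch.fetch('http://www.example.com/', deadline=60)
    print result.status_code
except urlfetch_errors.DeadlineExceededError:
    # the remote server did not answer within the deadline
    pass
except urlfetch_errors.DownloadError:
    # transient network error while fetching
    pass
```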
|
You can also patch the httplib2 library and set the deadline to 60 seconds
```
# httplib2/__init__.py
def new_fixed_fetch(validate_certificate):
    def fixed_fetch(url, payload=None, method="GET", headers={},
                    allow_truncated=False, follow_redirects=True,
                    deadline=60):
        return fetch(url, payload=payload, method=method, headers=headers,
                     allow_truncated=allow_truncated,
                     follow_redirects=follow_redirects, deadline=60,
                     validate_certificate=validate_certificate)
    return fixed_fetch
```
It is a workaround.
| 7,368
|
13,583,649
|
We're using EngineYard which has Python installed by default. But when we enabled SSL we received the following error message from our logentries chef recipe.
"WARNING: The "ssl" module is not present. Using unreliable workaround, host identity cannot be verified. Please install "ssl" module or newer version of Python (2.6) if possible."
I'm looking for a way to install the SSL module with chef recipe but I simply don't have enough experience. Could someone point me in the right direction?
Resources:
Logentries chef recipe: <https://github.com/logentries/le_chef>
Logentries EY docs: <https://logentries.com/doc/engineyard/>
SSL Module: <http://pypi.python.org/pypi/ssl/>
|
2012/11/27
|
[
"https://Stackoverflow.com/questions/13583649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/421709/"
] |
I just wrote a recipe for this, and now am able to run the latest Logentries client on EngineYard. Here you go:
```
file_dir = "/mnt/src/python-ssl"
file_name = "ssl-1.15.tar.gz"
file_path = File.join(file_dir,file_name)
uncompressed_file_dir = File.join(file_dir, file_name.split(".tar.gz").first)
directory file_dir do
owner "deploy"
group "deploy"
mode "0755"
recursive true
action :create
end
remote_file file_path do
source "http://pypi.python.org/packages/source/s/ssl/ssl-1.15.tar.gz"
mode "0644"
not_if { File.exists?(file_path) }
end
execute "gunzip ssl" do
command "gunzip -c #{file_name} | tar xf -"
cwd file_dir
not_if { File.exists?(uncompressed_file_dir) }
end
installed_file_path = File.join(uncompressed_file_dir, "installed")
execute "install python ssl module" do
command "python setup.py install"
cwd uncompressed_file_dir
not_if { File.exists?(installed_file_path) }
end
execute "touch #{installed_file_path}" do
action :run
end
```
|
You could install a new Python using PythonBrew: <https://github.com/utahta/pythonbrew>. Just make sure you install libssl before you build, or it still won't be able to use SSL. However, based on the warning, it seems that SSL *might* work, but it won't be able to verify the host. Of course, that is one of the major purposes of SSL, so that is likely a non-starter.
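A quick check you can run with whichever interpreter you end up with, to see whether the module is actually available (a minimal sketch):

```python
try:
    import ssl  # stdlib in Python 2.6+, a separate package on older interpreters
    print "ssl module available"
except ImportError:
    print "ssl module missing - host identity cannot be verified"
```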
HTH
| 7,369
|
65,534,980
|
This is a little niche, but I want to click on a discord reaction with selenium (python), but only the reaction that has a specific img src.
I had it working where it would click on reactions; however, it was clicking on every reaction, not just the one I wanted.
I've tried to make it only click on the certain element using the aria-label, however this didn't work either.
```
if i.get_attribute('aria-label') == "️, press to react":
```
My function:
```
def Bot():
elements = driver.find_elements_by_xpath("//div[contains(@class, 'message-2qnXI6')]/div[2]/div/div[1]/div/div/div")
for i in elements:
if i.find_element_by_xpath("//div[contains(@class, 'message-2qnXI6')]/div[2]/div/div[1]/div/div/div/img").get_attribute("src") == "https://discord.com/assets/e14bea9653868307f4d1e70fa17e2535.svg":
i.click()
continue
else:
break
```
The HTML:
```
<div class="reactionInner-15NvIl" aria-label="️, press to react" aria-pressed="true" role="button" tabindex="0">
<img src="/assets/e14bea9653868307f4d1e70fa17e2535.svg" alt="️" draggable="false" class="emoji">
<div class="reactionCount-2mvXRV" style="min-width: 9px;">2</div></div>
```
I'm just not exactly sure how to go about this and "verify" that the element I'm clicking on has the specific img src I want.
Any help is appreciated.
|
2021/01/02
|
[
"https://Stackoverflow.com/questions/65534980",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14924745/"
] |
The characters allowed in XML element names are given by the [W3C XML BNF for component names](https://www.w3.org/TR/xml/#NT-NameStartChar):
>
>
> ```
> NameStartChar ::= ":" | [A-Z] | "_" | [a-z] | [#xC0-#xD6] | [#xD8-#xF6] |
> [#xF8-#x2FF] | [#x370-#x37D] | [#x37F-#x1FFF] |
> [#x200C-#x200D] | [#x2070-#x218F] | [#x2C00-#x2FEF] |
> [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD] |
> [#x10000-#xEFFFF]
> NameChar ::= NameStartChar | "-" | "." | [0-9] | #xB7 | [#x0300-#x036F] |
> [#x203F-#x2040]
> Name ::= NameStartChar (NameChar)*
>
> ```
>
>
See also
--------
* [Is a colon a legal first character in an XML tag name?](https://stackoverflow.com/q/40445735/290085)
* [Represent space and tab in XML tag](https://stackoverflow.com/q/514635/290085)
* [How to include ? and / in XML tag](https://stackoverflow.com/q/59621748/290085)
* [Encoding space character in XML name](https://stackoverflow.com/q/46634193/290085)
---
|
Right: Letter, digit, hyphen, underscore and period.
One may use any Unicode letter.
And of course one may prefix the names with *name space* + colon.
| 7,372
|
8,281,119
|
So I am trying to do this problem where I have to find the most frequent 6-letter string within some lines in python, so I realize one could do something like this:
```
>>> from collections import Counter
>>> x = Counter("ACGTGCA")
>>> x
Counter({'A': 2, 'C': 2, 'G': 2, 'T': 1})
```
Now then, the data I'm using is DNA files and the format of a file goes something like this:
```
> name of the protein
ACGTGCA ... < more sequences>
ACGTGCA ... < more sequences>
ACGTGCA ... < more sequences>
ACGTGCA ... < more sequences>
> another protein
AGTTTCAGGAC ... <more sequences>
AGTTTCAGGAC ... <more sequences>
AGTTTCAGGAC ... <more sequences>
AGTTTCAGGAC ... <more sequences>
```
We can start by going one protein at a time, but then how does one modify the block of code above to search for the most frequent 6-character string pattern? Thanks.
|
2011/11/26
|
[
"https://Stackoverflow.com/questions/8281119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1067233/"
] |
Let us go through a slightly modified version of your test code as it is seen by `irb` and as a stand alone script:
```
def test_method;end
symbols = Symbol.all_symbols # This is already a "fixed" array, no need for map
puts symbols.include?(:test_method)
puts symbols.include?('test_method_nonexistent'.to_sym)
puts symbols.include?(:test_method_nonexistent)
eval 'puts symbols.include?(:really_not_there)'
```
When you try this in `irb`, each line will be parsed and evaluated before the next line. When you hit the second line, `symbols` will contain `:test_method` because `def test_method;end` has already been evaluated. But, the `:test_method_nonexistent` symbol hasn't been seen anywhere when we hit line 2 so lines 4 and 5 will say "false". Line 6 will, of course, give us another false because `:really_not_there` doesn't exist until after `eval` returns. So `irb` says this:
```
true
false
false
false
```
If you run this as a Ruby script, things happen in a slightly different order. First Ruby will parse the script into an internal format that the Ruby VM understands and then it goes back to the first line and starts executing the script. When the script is being parsed, the `:test_method` symbol will exist after the first line is parsed and `:test_method_nonexistent` will exist after the fifth line has been parsed; so, before the script runs, two of the symbols we're interested in are known. When we hit line six, Ruby just sees an `eval` and a string but it doesn't yet know that the `eval` cause a symbol to come into existence.
Now we have two of our symbols (`:test_method` and `:test_method_nonexistent`) and a simple string that, when fed to `eval`, will create a symbol (`:really_not_there`). Then we go back to the beginning and the VM starts running code. When we run line 2 and cache our symbols array, both `:test_method` and `:test_method_nonexistent` will exist and appear in the `symbols` array because the parser created them. So lines 3 through 5:
```
puts symbols.include?(:test_method)
puts symbols.include?('test_method_nonexistent'.to_sym)
puts symbols.include?(:test_method_nonexistent)
```
will print "true". Then we hit line 6:
```
eval 'puts symbols.include?(:really_not_there)'
```
and "false" is printed because `:really_not_there` is created by the `eval` at run-time rather than during parsing. The result is that Ruby says:
```
true
true
true
false
```
If we add this at the end:
```
symbols = Symbol.all_symbols
puts symbols.include?('really_not_there'.to_sym)
```
Then we'll get another "true" out of both `irb` and the stand-alone script because `eval` will have created `:really_not_there` and we will have grabbed a fresh copy of the symbol list.
|
The reason you have to convert symbols to strings when checking for the existence of a symbol is that it will always return true otherwise. The argument being passed to the `include?` method gets evaluated first, so if you pass it a symbol, a new symbol is instantiated and added to the heap, so `Symbol.all_symbols` does, in fact, have a copy of the symbol.
```
Symbol.all_symbols.include? :the_crow_flies_at_midnight #=> true
```
However, converting thousands of symbols to strings for comparison (which is much faster with symbols) is a poor solution. A better method would be to change the order that these statements get evaluated:
```
symbols = Symbol.all_symbols
symbols.include? :the_crow_flies_at_midnight #=> false
```
This "snapshot" of what symbols are in the dictionary is taken before our tested symbol is inserted onto the heap, so even though our argument exists on the heap at the time the `include?` method is called, we are getting the result we expect.
I don't know why it isn't working in your IRB console. Perhaps you mistyped.
| 7,373
|
9,917,628
|
I'm attempting to follow Heroku's python quickstart guide but am running into repeated problems. At the moment, "git push heroku master" is failing because it cannot install Bonjour. Does anyone know if this is a truly necessary requirement, and whether I can change the version required, or somehow otherwise fix this? Full text of push follows below.
```
(venv)172-26-12-64:helloflask Spike$ git push heroku master
Counting objects: 488, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (444/444), done.
Writing objects: 100% (488/488), 1.43 MiB, done.
Total 488 (delta 33), reused 0 (delta 0)
-----> Heroku receiving push
-----> Python app detected
-----> Preparing virtualenv version 1.7
New python executable in ./bin/python
Installing distribute.............................................................................................................................................................................................done.
Installing pip...............done.
-----> Activating virtualenv
-----> Installing dependencies using pip version 1.0.2
Downloading/unpacking Flask==0.8 (from -r requirements.txt (line 1))
Creating supposed download cache at /app/tmp/repo.git/.cache/pip_downloads
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FF%2FFlask%2FFlask-0.8.tar.gz
Running setup.py egg_info for package Flask
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_themes/.git'
Downloading/unpacking IMAPClient==0.8 (from -r requirements.txt (line 2))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Ffreshfoo.com%2Fprojects%2FIMAPClient%2FIMAPClient-0.8.zip
Running setup.py egg_info for package IMAPClient
no previously-included directories found matching 'doc/doctrees/'
Downloading/unpacking Jinja2==2.6 (from -r requirements.txt (line 3))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FJ%2FJinja2%2FJinja2-2.6.tar.gz
Running setup.py egg_info for package Jinja2
warning: no previously-included files matching '*' found under directory 'docs/_build'
warning: no previously-included files matching '*.pyc' found under directory 'jinja2'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'jinja2'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
Downloading/unpacking PIL==1.1.7 (from -r requirements.txt (line 4))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Feffbot.org%2Fmedia%2Fdownloads%2FPIL-1.1.7.tar.gz
Running setup.py egg_info for package PIL
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
Downloading/unpacking PyRSS2Gen==1.0.0 (from -r requirements.txt (line 5))
Downloading PyRSS2Gen-1.0.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyRSS2Gen%2FPyRSS2Gen-1.0.0.tar.gz
Running setup.py egg_info for package PyRSS2Gen
Downloading/unpacking PyYAML==3.10 (from -r requirements.txt (line 6))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FP%2FPyYAML%2FPyYAML-3.10.tar.gz
Running setup.py egg_info for package PyYAML
Downloading/unpacking Twisted==11.0.0 (from -r requirements.txt (line 7))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FT%2FTwisted%2FTwisted-11.0.0.tar.bz2
Running setup.py egg_info for package Twisted
Downloading/unpacking WebOb==1.1.1 (from -r requirements.txt (line 8))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWebOb%2FWebOb-1.1.1.zip
Running setup.py egg_info for package WebOb
no previously-included directories found matching '*.pyc'
no previously-included directories found matching '*.pyo'
Downloading/unpacking Werkzeug==0.8.3 (from -r requirements.txt (line 9))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FW%2FWerkzeug%2FWerkzeug-0.8.3.tar.gz
Running setup.py egg_info for package Werkzeug
warning: no files found matching '*' under directory 'werkzeug/debug/templates'
warning: no files found matching '*' under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'docs'
warning: no previously-included files matching '*.pyo' found under directory 'docs'
warning: no previously-included files matching '*.pyc' found under directory 'tests'
warning: no previously-included files matching '*.pyo' found under directory 'tests'
warning: no previously-included files matching '*.pyc' found under directory 'examples'
warning: no previously-included files matching '*.pyo' found under directory 'examples'
no previously-included directories found matching 'docs/_build'
Downloading/unpacking altgraph==0.7.2 (from -r requirements.txt (line 10))
Downloading altgraph-0.7.2.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Faltgraph%2Faltgraph-0.7.2.tar.gz
Running setup.py egg_info for package altgraph
warning: no files found matching '*.txt'
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
Downloading/unpacking apipkg==1.0 (from -r requirements.txt (line 11))
Downloading apipkg-1.0.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fa%2Fapipkg%2Fapipkg-1.0.tar.gz
Running setup.py egg_info for package apipkg
no previously-included directories found matching '.svn'
no previously-included directories found matching '.hg'
Downloading/unpacking bdist-mpkg==0.4.4 (from -r requirements.txt (line 12))
Downloading bdist_mpkg-0.4.4.tar.gz
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fa.pypi.python.org%2Fpackages%2Fsource%2Fb%2Fbdist_mpkg%2Fbdist_mpkg-0.4.4.tar.gz
Running setup.py egg_info for package bdist-mpkg
Downloading/unpacking beautifulsoup4==4.0.1 (from -r requirements.txt (line 13))
Storing download in cache at /app/tmp/repo.git/.cache/pip_downloads/http%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fb%2Fbeautifulsoup4%2Fbeautifulsoup4-4.0.1.tar.gz
Running setup.py egg_info for package beautifulsoup4
Downloading/unpacking bonjour-py==0.3 (from -r requirements.txt (line 14))
Could not find any downloads that satisfy the requirement bonjour-py==0.3 (from -r requirements.txt (line 14))
No distributions at all found for bonjour-py==0.3 (from -r requirements.txt (line 14))
Storing complete log in /app/.pip/pip.log
! Heroku push rejected, failed to compile Python app
To git@heroku.com:radiant-night-5176.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:radiant-night-5176.git'
```
|
2012/03/29
|
[
"https://Stackoverflow.com/questions/9917628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/723212/"
] |
Are you trying to push the example app from the quickstart? Many of the requirements you're trying to install aren't required at all.
I suspect that you created your requirements file outside of the recommended virtualenv, and that twisted and bonjour-py are packages install in your system python installation.
|
Why is `bonjour-py` in your requirements.txt file? Removing it should fix your problem.
Furthermore, I can't seem to install it either, so it's no wonder Heroku fails there.
```
(so)modocache $ pip install bonjour-py
Downloading/unpacking bonjour-py
Could not find any downloads that satisfy the requirement bonjour-py
No distributions at all found for bonjour-py
Storing complete log in /Users/modocache/.pip/pip.log
```
| 7,374
|
6,632,954
|
[Why is else clause needed for try statement in python ?](https://stackoverflow.com/questions/4836139/why-is-else-clause-needed-for-try-statement-in-python)
Taking it forward:
```
try:
f = open('foo', 'r')
except IOError as e:
error_log.write('Unable to open foo : %s\n' % e)
else:
data = f.read()
f.close()
```
It occurs to me that the corner case solved by the `else` clause can still be handled by a nested `try...except`, avoiding the need for `else`:
```
try:
f = open('foo', 'r')
try:
data = f.read()
f.close()
except:
pass
except IOError as e:
error_log.write('Unable to open foo : %s\n' % e)
```
|
2011/07/09
|
[
"https://Stackoverflow.com/questions/6632954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1140524/"
] |
`try..except..else` may not be *needed*, but it can be nice. In this case, the `try..except..else` form is distinctly nicer, in my opinion.
Just because you can do without an element of syntax, doesn't make it useless. Decorator syntax is purely syntax sugar (the most obvious example, I think), `for` loops are just glorified `while` loops, et cetera. There's a good place for `try..except..else` and I would say this is one such place.
Besides, those two blocks of code are far from equivalent. If `f.read()` raises an exception (disk read error, data corruption inside the file or some other such problem), the first will raise an exception as it should but the second will lose it and think that everything has worked. For myself, I would prefer something more along these lines, which is shorter and still easier to understand:
```
try:
with open('foo', 'r') as f:
data = f.read()
except IOError as e:
error_log.write('Unable to open foo : %s\n' % e)
```
(This assumes that you want to catch errors in `file.read` and `file.close`. And I can't really see why you wouldn't.)
|
Actually, it is not always needed; you can simply do:
```
f = None
try:
f = open('foo', 'r')
except IOError:
error_log.write('Unable to open foo\n')
if f:
data = f.read()
f.close()
```
| 7,379
|
70,108,696
|
So i was solving a question that is in my Lab practical Syllabus. Below is the question:-
>
> Write a python class to reverse a sentence (initialized via
> constructor) word by word. Example: “I am here” should be reversed as
> “here am I”. Create instances of this class for each of the three
> strings input by the user and display the reversed string for each, in
> descending order of number of vowels in the string.
>
>
>
Below is code for the implementation of above question:-
```
class sentenceReverser:
vowels = ['a','e','i','o','u']
vowelCount =0
sentence=""
reversed_string = ""
def __init__(self,sentence):
self.sentence = sentence
self.reverser()
def reverser(self):
self.reversed_string = " ".join(reversed(self.sentence.split()))
return self.reversed_string
def getVowelCount(self):
for i in self.sentence:
if i.lower() in self.vowels:
self.vowelCount += 1
return self.vowelCount
inp = []
for i in range(2):
temp = input("Enter string:- ")
ob = sentenceReverser(temp)
inp.append(ob)
sorted_item = sorted(inp,key = lambda inp:inp.getVowelCount(),reverse=True)
for i in range (len(sorted_item)):
print('Reversed String: ',sorted_item[i].reverser(),'Vowel count: ',sorted_item[i].getVowelCount())
```
Below is the output I am getting for the above code:
[](https://i.stack.imgur.com/Ksjli.png)
**issue:-**
Could someone tell me why I am getting double the vowel count?
Any help would be appreciated!!
|
2021/11/25
|
[
"https://Stackoverflow.com/questions/70108696",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16369432/"
] |
You are calling `getVowelCount()` twice. Instead you can use the variable instead of calling in the print command
```
for i in range (len(sorted_item)):
print('Reversed String: ',sorted_item[i].reverser(),'Vowel count: ',sorted_item[i].vowelCount)
```
|
This is because you don't reset the vowel count in the method. So if you execute the method once (here, in the sort), you'll get the correct count. If you execute it twice (in the printing), you will get twice as much. If you execute it once more, you'll get 3x the correct amount. And so on.
The solution is to reset the number:
```py
def getVowelCount(self):
self.vowelCount = 0
for i in self.sentence:
if i.lower() in self.vowels:
self.vowelCount += 1
return self.vowelCount
```
Or to calculate it only once - set it to None, then calculate only `if self.vowelCount is None`, otherwise return existing value.
| 7,380
|
2,971,198
|
In one of my Django projects that use MySQL as the database, I need to have a *date* fields that accept also "partial" dates like only year (YYYY) and year and month (YYYY-MM) plus normal date (YYYY-MM-DD).
The *date* field in MySQL can deal with that by accepting *00* for the month and the day. So *2010-00-00* is valid in MySQL and it represent 2010. Same thing for *2010-05-00* that represent May 2010.
So I started to create a `PartialDateField` to support this feature. But I hit a wall because, by default, and Django use the default, MySQLdb, the python driver to MySQL, return a `datetime.date` object for a *date* field AND `datetime.date()` support only real date. So it's possible to modify the converter for the *date* field used by MySQLdb and return only a string in this format *'YYYY-MM-DD'*. Unfortunately the converter use by MySQLdb is set at the connection level so it's use for all MySQL *date* fields. But Django `DateField` rely on the fact that the database return a `datetime.date` object, so if I change the converter to return a string, Django is not happy at all.
Someone have an idea or advice to solve this problem? How to create a `PartialDateField` in Django ?
EDIT
----
Also I should add that I already thought of 2 solutions, create 3 integer fields for year, month and day (as mention by *Alison R.*) or use a *varchar* field to keep date as string in this format *YYYY-MM-DD*.
But in both solutions, if I'm not wrong, I will loose the *special* properties of a *date* field like doing query of this kind on them: *Get all entries after this date*. I can probably re-implement this functionality on the client side but that will not be a valid solution in my case because the database can be query from other systems (mysql client, MS Access, etc.)
|
2010/06/04
|
[
"https://Stackoverflow.com/questions/2971198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
You could store the partial date as an integer (preferably in a field named for the portion of the date you are storing, such as `year,` `month` or `day`) and do validation and conversion to a date object in the model.
**EDIT**
If you need real date functionality, you probably need real, not partial, dates. For instance, does "get everything after 2010-0-0" return dates inclusive of 2010 or only dates in 2011 and beyond? The same goes for your other example of May 2010. The ways in which different languages/clients deal with partial dates (if they support them at all) are likely to be highly idiosyncratic, and they are unlikely to match MySQL's implementation.
On the other hand, if you store a `year` integer such as 2010, it is easy to ask the database for "all records with year > 2010" and understand exactly what the result should be, from any client, on any platform. You can even combine this approach for more complicated dates/queries, such as "all records with year > 2010 AND month > 5".
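For example, with a hypothetical model named `Entry` that has separate integer fields `year` and `month`, the queries could look like this:
```
# assuming a model named Entry with separate IntegerField columns "year" and "month"
Entry.objects.filter(year__gt=2010)                # everything after 2010
Entry.objects.filter(year__gt=2010, month__gt=5)   # year > 2010 AND month > 5
```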
**SECOND EDIT**
Your only other (and perhaps best) option is to store truly valid dates and come up with a convention in your application for what they mean. A DATETIME field named like `date_month` could have a value of 2010-05-01, but you would treat that as representing all dates in May, 2010. You would need to accommodate this when programming. If you had `date_month` in Python as a datetime object, you would need to call a function like `date_month.end_of_month()` to query dates following that month. (That is pseudocode, but could be easily implemented with something like the [calendar](http://docs.python.org/library/calendar.html) module.)
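A rough sketch of that helper using the calendar module (the function name is just the pseudocode's, not an existing API):
```
import calendar
import datetime

def end_of_month(d):
    # last calendar day of the month that the "partial" date stands for
    last_day = calendar.monthrange(d.year, d.month)[1]
    return datetime.date(d.year, d.month, last_day)

# "dates after May 2010" -> filter on dates > end_of_month(datetime.date(2010, 5, 1))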
|
Can you store the date together with a flag that tells how much of the date is valid?
Something like this:
```
YEAR_VALID = 0x04
MONTH_VALID = 0x02
DAY_VALID = 0x01
Y_VALID = YEAR_VALID
YM_VALID = YEAR_VALID | MONTH_VALID
YMD_VALID = YEAR_VALID | MONTH_VALID | DAY_VALID
```
Then, if you have a date like 2010-00-00, convert that to 2010-01-01 and set the flag to Y\_VALID. If you have a date like 2010-06-00, convert that to 2010-06-01 and set the flag to YM\_VALID.
So, then, `PartialDateField` would be a class that bundles together a date and the date-valid flag described above.
P.S. You don't actually need to use the flags the way I showed it; that's the old C programmer in me coming to the surface. You could use Y\_VALID, YM\_VALID, YMD\_VALID = range(3) and it would work about as well. The key is to have some kind of flag that tells you how much of the date to trust.
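To make that concrete, here is a small sketch of how the MySQL-style strings could be normalised into a (date, flag) pair, using the flag constants defined above:
```
import datetime

def parse_partial(s):
    # '2010-00-00' -> (date(2010, 1, 1), Y_VALID); '2010-06-00' -> (date(2010, 6, 1), YM_VALID)
    year, month, day = (int(part) for part in s.split('-'))
    flag = YEAR_VALID
    if month:
        flag |= MONTH_VALID
    if day:
        flag |= DAY_VALID
    return datetime.date(year, month or 1, day or 1), flag
```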
| 7,381
|
50,389,852
|
My Visual Studio Code's Intellisense is not working properly. Every time I try to use it with `Ctrl + Shift`, it only displays a loading message. I'm using Python (with Django) and have installed `ms-python.python`. I also have `Djaneiro`. It is still not working.
[](https://i.stack.imgur.com/2aejV.png)
What seems to be the problem here?
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50389852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6463232/"
] |
In my case, what worked was reinstalling the **Python extension in VS Code**, which auto-installs **Pylance**.
Then I hit **Ctrl + ,** on my keyboard to open the VS Code settings,
typed **pylance**,
clicked *Edit in settings.json* and changed `"python.languageServer"` to Default:
```
},
"terminal.integrated.sendKeybindingsToShell": true,
"python.defaultInterpreterPath": "C:source\\env\\Scripts\\python.exe",
"python.disableInstallationCheck": true,
"terminal.integrated.defaultProfile.windows": "Command Prompt",
"python.languageServer": "Default",
"python.analysis.completeFunctionParens": true,
"[jsonc]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"python.autoComplete.addBrackets": true,
"diffEditor.ignoreTrimWhitespace": false
```
I hope this works for you as well; otherwise you can try reinstalling the IntelliCode and Python extensions and then restarting VS Code.
I hope you find this useful :)
|
Assuming you're using `Pylance` as your language server, what worked for me (based on [this](https://github.com/microsoft/pylance-release/issues/275) conversation) was adding
```
"python.analysis.extraPaths": [
"./src/lib"
],
```
(where `./src/lib/` contains your modules and an `__init__.py` file) to your `settings.json`.
**Note:** the setting is called `python.analysis.extraPaths` and not `python.autoComplete.extraPaths`!
| 7,389
|
54,493,369
|
I am running a DB job using the Maven Liquibase plugin on CircleCI. I need to read parameters like the username, password, DB URL, etc. from the AWS Parameter Store. But when I try to set the value returned by the AWS CLI to a custom variable, it is always blank/empty. I know the value exists because the same command on a Mac terminal returns a value.
I am using a Bash script to install the AWS CLI within the CircleCI job. When I echo the password within the .sh file I see the value, but when I echo it in my config.yml I see a blank/empty value. I also tried to get the value using aws ssm within the config.yml file, but even there the value is blank/empty.
My Config.yml
```
version: 2

references:
  defaults: &defaults
    working_directory: ~/tmp
    environment:
      PROJECT_NAME: DB Job

  build-filters: &filters
    filters:
      tags:
        only: /^v[0-9]{1,2}.[0-9]{1,2}.[0-9]{1,2}-(dev)/
      branches:
        ignore: /.*/

jobs:
  checkout-code:
    <<: *defaults
    docker:
      - image: circleci/openjdk:8-jdk-node
    steps:
      - attach_workspace:
          at: ~/tmp
      - checkout
      - restore_cache:
          key: cache-{{ checksum "pom.xml" }}
      - save_cache:
          paths:
            - ~/.m2
          key: cache-{{ checksum "pom.xml" }}
      - persist_to_workspace:
          root: ~/tmp
          paths: ./

  build-app:
    <<: *defaults
    docker:
      - image: circleci/openjdk:8-jdk-node
    steps:
      - attach_workspace:
          at: ~/tmp
      - restore_cache:
          key: cache-{{ checksum "pom.xml" }}
      - run: chmod 700 resources/circleci/*.sh
      - run:
          name: Getting DB password
          command: resources/circleci/env-setup.sh
      - run: echo 'export ENV="$(echo $CIRCLE_TAG | cut -d '-' -f 2)"' >> $BASH_ENV
      - run: echo $ENV
      - run: echo $dbPasswordDev
      - run: export PASS=$(aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" | tr -d '"') >> $BASH_ENV
      - run: echo $PASS
      - run: mvn resources:resources liquibase:update -P$ENV,pre-release

workflows:
  version: 2
  build-deploy:
    jobs:
      - checkout-code:
          <<: *filters
      - build-app:
          requires:
            - checkout-code
          <<: *filters
```
env-setup.sh
```
sudo apt-get update
sudo apt-get install -y python-pip python-dev
sudo pip install awscli
aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
aws configure set aws_region $AWS_DEFAULT_REGION
dbPassword=`aws ssm get-parameters --names "/enterprise/org/dev/spring.datasource.password" --with-decryption --query "Parameters[0].Value" | tr -d '"' `
echo "dbPassword = ${dbPassword}"
export dbPasswordDev=$dbPassword >> $BASH_ENV
echo $"Custom = $dbPasswordDev"
```
When I echo $dbPasswordDev within env-setup.sh I see the value; however, in config.yml I don't see the value, just a blank/empty string. Also, when I try to echo $PASS within config.yml I expect to see the value, but I see a blank/empty string.
|
2019/02/02
|
[
"https://Stackoverflow.com/questions/54493369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3553141/"
] |
According to
[their official documentation](https://circleci.com/docs/2.0/env-vars/), you need to **echo** the "export foo=bar" into $BASH\_ENV (which is just a file that runs when a bash session starts):
so in your env-setup.sh file:
```sh
echo "export dbPasswordDev=$dbPassword" >> $BASH_ENV
```
|
Here is a more advanced case where variables can be loaded from `.env` file.
```
version: 2
aliases:
- &step_process_dotenv
run:
name: Process .env file variables
command: echo "export $(grep -v '^#' .env | xargs)" >> $BASH_ENV
jobs:
build:
working_directory: ~/project
docker:
- image: php
steps:
- checkout
- *step_process_dotenv
```
Source repository with tests: <https://github.com/integratedexperts/circleci-sandbox/tree/feature/dotenv>
CI run result: <https://circleci.com/gh/integratedexperts/circleci-sandbox/14>
| 7,399
|
71,637,890
|
I use Python and pandas to analyze a big data set. I have several arrays with different lengths. I need to insert values into specific columns. If some value is not present for a column, it should be 'not defined'. The input data looks like a row of a dataframe, with the fields in different positions.
Expected output:

Examples of input data:
```
# Example 1
{'Water Solubility': 'Insoluble ', 'Melting Point': '135-138 °C', 'logP': '4.68'}
# Example 2
{'Melting Point': '71 °C (whole mAb)', 'Hydrophobicity': '-0.529', 'Isoelectric Point': '7.89', 'Molecular Weight': '51234.9', 'Molecular Formula': 'C2224H3475N621O698S36'}
# Example 3
{'Water Solubility': '1E+006 mg/L (at 25 °C)', 'Melting Point': '204-205 °C', 'logP': '1.1', 'pKa': '6.78'}
```
I have tried to add 'Not defined' to the array, but I couldn't find the right approach.
|
2022/03/27
|
[
"https://Stackoverflow.com/questions/71637890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18598283/"
] |
I too was facing the same issue, but using `window` worked for me:
```
const lineHeight= window
.getComputedStyle(descriptionRef.current, null)
.getPropertyValue("line-height");
```
|
You need to set the `line-height` via the `style` prop to get it by script. But bear in mind that `15` and `15px` are different things for the `line-height` attribute.
If we remove the style attribute, then even if we specify the `line-height` in a CSS class, we cannot get its value as `12px`; it will be empty, the same as in your case.
```css
/* It doesn't matter you specify or not in CSS */
.div-class {
line-height: 12px;
}
```
```
(
<div ref={divRef} className="div-class" style={{ lineHeight: '15px' }}>
</div>
)
```
```
useEffect(() => {
console.log({ lineHeight: divRef.current.style?.lineHeight }); // {lineHeight: '15px'}
}, []);
```
| 7,400
|
35,092,571
|
I am trying to create a dashboard where I can analyse my model's data (Article) using the library [plotly](https://plot.ly/python/).
The Plotly bar chart is not showing on my template, I am wondering if I am doing something wrong since there's no error with the code below :
**models.py**
```
from django.db import models
from django.contrib.auth.models import User
import plotly.plotly as py
import plotly.graph_objs as go
class Article(models.Model):
user = models.ForeignKey(User, default='1')
titre = models.CharField(max_length=100, unique=True)
slug = models.SlugField(max_length=40)
likes = models.ManyToManyField(User, related_name="likes")
def __str__(self):
return self.titre
@property
def article_chart(self):
data = [
go.Bar(
x=[self.titre], #title of the article
y=[self.likes.count()] #number of likes on an article
)
]
plot_url = py.plot(data, filename='basic-bar')
return plot_url
```
**dashboard.html**
```
<div>{{ article.article_chart }}</div>
```
Why is the bar chart not visible? Any suggestion ?
|
2016/01/29
|
[
"https://Stackoverflow.com/questions/35092571",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4859971/"
] |
The result of
```
py.plot(data, filename='basic-bar')
```
is to generate an offline HTML file and return a local URL to this file,
e.g. file:///your\_project\_pwd/temp-plot.html
If you want to render it in the Django framework, you need to
* use an `<iframe>` and restructure your folders in the Django settings
OR
* use the plotly.offline method to generate the HTML code from your input `data`
Here is an example which I had tried:
```
from plotly.graph_objs import Scatter
from plotly.offline.offline import _plot_html  # private helper available in older plotly versions

figure_or_data = [Scatter(x=[1, 2, 3], y=[3, 1, 6])]

plot_html, plotdivid, width, height = _plot_html(
    figure_or_data, True, 'test', True,
    '100%', '100%')

resize_script = ''
if width == '100%' or height == '100%':
    resize_script = (
        ''
        '<script type="text/javascript">'
        'window.removeEventListener("resize");'
        'window.addEventListener("resize", function(){{'
        'Plotly.Plots.resize(document.getElementById("{id}"));}});'
        '</script>'
    ).format(id=plotdivid)

html = ''.join([
    plot_html,
    resize_script])

return render(request, 'dashboard.html', {'html': html,})
```
|
The above answer was very useful. I am in fact watching for a parent resize; I am working in Angular and used the code below to achieve the resize. I was having a similar problem and this line of code was useful:
```
<div class="col-lg-12" ng-if="showME" style="padding:0px">
<div id="graphPlot" ng-bind-html="myHTML"></div>
</div>
```
The graph will be inserted through the variable myHTML
All I did was watch for the parent resize, get the div id using jQuery, and pass it to Plotly, and it worked.
```
$scope.$on("angular-resizable.resizeEnd", function (event, args){
Plotly.Plots.resize(document.getElementById($('#graphPlot').children().eq(0).attr("id")));
});
```
| 7,401
|
1,718,251
|
I am using the macports version of python on a Snow Leopard computer, and using cmake to build a cross-platform extension to it. I search for the python interpreter and libraries on the system using the following commands in CMakeLists.txt
```
include(FindPythonInterp)
include(FindPythonLibs )
```
However, while cmake identified the correct interpreter in `/opt/local/bin`, it tries to link against the wrong framework - namely the system Python framework.
```
-- Found PythonInterp: /opt/local/bin/python2.6
-- Found PythonLibs: -framework Python
```
And this causes the following runtime error
```
Fatal Python error: Interpreter not initialized (version mismatch?)
```
As soon as I replace `-framework Python` with `/opt/local/Library/Frameworks/Python.framework/Python` things seem to work as expected.
How can I make cmake link against the correct Python framework found in
```
/opt/local/Library/Frameworks/Python.framework/Python
```
rather than the system one in
```
/System/Library/Frameworks/Python.framework/Python
```
?
|
2009/11/11
|
[
"https://Stackoverflow.com/questions/1718251",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/134397/"
] |
Adding the following in `~/.bash_profile`
```
export DYLD_FRAMEWORK_PATH=/opt/local/Library/Frameworks
```
fixes the problem at least temporarily. Apparently, this inconsistency between the python interpreter and the python framework used by cmake is a bug that should be hopefully fixed in the new version.
|
I am not intimately familiar with CMake, but with the Apple version of gcc/ld, you can pass the `-F` flag to specify a new framework search path. For example, `-F/opt/local/Library/Frameworks` will search in MacPorts' frameworks directory. If you can specify such a flag using CMake, it may solve your problem.
| 7,402
|
69,102,892
|
I have a class object which has the task of running a file. When I was developing, my class object was in the same file as the code I used to run the file.
Now I am refactoring and making this a real package so I moved the code to a file called `class_objects.py`.
I have installed this package locally, but now when I use the class object `Naive` it looks for the file in the directory I am currently working in as opposed to looking for the file which is part of the package. I have read up on absolute paths, relative paths, and verifying that `__init__.py` exists. I am stumped on this one.
How can I make sure my package looks for the `file.ext` within its own directories as opposed to looking for `file.ext` where I am running from?
Here is how I call my package:
```py
# Trying to use my package installed locally
from my-package.class_objects import Naive
a = Naive()
a.find_and_run()
```
>
> Error
>
>
>
```
ValueError: no such file /home/user/tutorial/dir/file.ext
```
This is my directory
```py
My-Package
drwxrwxr-x - user 8 Sep 1:27 .
drwxrwxr-x - user 8 Sep 1:22 └── python
drwxrwxr-x - user 8 Sep 1:25 ├── dist
.rw-r--r-- 1.6M user 8 Sep 1:25 │ ├── my-package-0.1.0-py3-none-any.whl
.rw-rw-r-- 1.6M user 8 Sep 1:25 │ └── my-package-0.1.0.tar.gz
.rw-rw-r-- 480 user 8 Sep 1:22 ├── pyproject.toml
drwxrwxr-x - user 8 Sep 1:24 └── my-package
.rw-rw-r-- 0 user 8 Sep 1:14 ├── __init__.py
.rw-rw-r-- 2.6k user 8 Sep 1:13 ├── class_objects.py
drwxrwxr-x - user 8 Sep 1:19 ├── dir
.rw-rw-r-- 0 user 8 Sep 1:19 │ ├── __init__.py
.rwxrwxr-x 1.5M user 7 Sep 22:39 │ ├── file.exe
```
This is what is inside of `class_objects.py`
```py
import os

class Naive(object):
    ...
    def find_and_run(self):
        out_dir = os.path.join("dir", "file.ext")
        naive_model = RunThing(stan_file=out_dir)
```
|
2021/09/08
|
[
"https://Stackoverflow.com/questions/69102892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2444023/"
] |
Just use boolean logic:
```
WHERE (:IP_TYPE = 'HIGH' AND (TYPE = 'HIGH' OR TYPE = '' OR TYPE IS NULL)
) OR
(:IP_TYPE = 'LOW' AND TYPE = 'LOW')
```
Or more succinctly:
```
WHERE :IP_TYPE = TYPE OR
(:IP_TYPE = 'HIGH' AND (TYPE = '' OR TYPE IS NULL))
```
|
In Oracle, an empty string `''` is the same as `NULL`; so your filter can simply be:
```sql
SELECT *
FROM PAYRECORDS
WHERE :ip_type = type
OR (:ip_type = 'HIGH' AND type IS NULL);
```
| 7,403
|
21,529,118
|
I'm trying to use flask-migrate to version my database locally and then reflect the changes in production (Heroku). So far I managed to successfully version the local database and upgrade it, so now I wanted to reflect this on Heroku. To do this I pushed the latest code state to Heroku together with the newly created **migrations** folder and updated requirements.txt. I saw the dependencies were successfully installed:
```
Successfully installed Flask-Migrate alembic Flask-Script Mako
```
Then, I tried:
```
$ heroku run python app/hello.py db upgrade
```
Unfortunately I got this in response:
```
Running `python app/hello.py db upgrade` attached to terminal... up, run.4322
Traceback (most recent call last):
File "app/hello.py", line 37, in <module>
manager.run()
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 405, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/__init__.py", line 384, in handle
return handle(app, *positional_args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_script/commands.py", line 145, in handle
return self.run(*args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 97, in upgrade
config = _get_config(directory)
File "/app/.heroku/python/lib/python2.7/site-packages/flask_migrate/__init__.py", line 37, in _get_config
config.set_main_option('script_location', directory)
File "/app/.heroku/python/lib/python2.7/site-packages/alembic/config.py", line 142, in set_main_option
self.file_config.set(self.config_ini_section, name, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 753, in set
ConfigParser.set(self, section, option, value)
File "/app/.heroku/python/lib/python2.7/ConfigParser.py", line 396, in set
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'alembic'
```
I googled to find what this might be and it looks like the config file can't be opened, however I have no idea what can be done to fix that. How come this works locally but not on Heroku?
|
2014/02/03
|
[
"https://Stackoverflow.com/questions/21529118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/703809/"
] |
I was struggling with this for some time and even posted on the Heroku python forums, but no replies so far. To solve the issue I decided not to run the migration remotely on Heroku, but to run the migration on my development machine and pass the production database address instead. So I do this:
1. Sync the development db with production (when using Heroku you can easily do this with *heroku pg:pull*, you have to drop your local db prior to calling this method though)
2. Assuming your models are already updated, run the *python app.py db migrate*. **Important**: I started getting the original error on my local too, I figured out I have to be in the exact same directory where my app.py is, otherwise I get the error.
3. Review your auto-generated migration scripts
4. Upgrade your local db with *python app.py db upgrade*
5. Change the settings for your app to use the production db instead of your local development db and then run *python app.py db upgrade* again
After some thinking it struck me that this might have been the way this tool was designed to work. Although it still would be nice to be able to run the migrations remotely from Heroku, I'll settle for my solution as it is quicker and does the job.
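For step 5, one way to avoid editing the settings by hand is to read the database URL from an environment variable. This is just a sketch; the variable name and URLs are made up, and it assumes the usual Flask `app` object:
```
import os

# use the production database only when PROD_DATABASE_URL is set in the environment
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
    'PROD_DATABASE_URL', 'postgresql://localhost/dev_db')
```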
|
I haven't tried this with Heroku, but ran into the same error and symptoms. The issue for me was that when running locally, my current working directory was set to the project root directory, and when running remotely, it was set to the user's home directory.
Try either cd'ing to the right starting directory first, or pass the --directory parameter to the flask-migrate command with the absolute path to your migrations folder.
| 7,406
|
10,524,842
|
I have a multithreaded mergesorting program in C, and a program for benchmark testing it with 0, 1, 2, or 4 threads. I also wrote a program in Python to do multiple tests and aggregate the results.
The weird thing is that when I run the Python, the tests always run in about half the time compared to when I run them directly in the shell.
For example, when I run the testing program by itself with 4 million integers to sort (the last two arguments are the seed and modulus for generating integers):
```
$ ./mergetest 4000000 4194819 140810581084
0 threads: 1.483485s wall; 1.476092s user; 0.004001s sys
1 threads: 1.489206s wall; 1.488093s user; 0.000000s sys
2 threads: 0.854119s wall; 1.608100s user; 0.008000s sys
4 threads: 0.673286s wall; 2.224139s user; 0.024002s sys
```
Using the python script:
```
$ ./mergedata.py 1 4000000
Average runtime for 1 runs with 4000000 items each:
0 threads: 0.677512s wall; 0.664041s user; 0.016001s sys
1 threads: 0.709118s wall; 0.704044s user; 0.004001s sys
2 threads: 0.414058s wall; 0.752047s user; 0.028001s sys
4 threads: 0.373708s wall; 1.24008s user; 0.024002s sys
```
This happens no matter how many I'm sorting, or how many times I run it. The python program calls the tester with the subprocess module, then parses and aggregates the output. Any ideas why this would happen? Is Python somehow optimizing the execution? or is there something slowing it down when I run it directly that I'm not aware of?
Code: <https://gist.github.com/2650009>
|
2012/05/09
|
[
"https://Stackoverflow.com/questions/10524842",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1325447/"
] |
Turns out I was passing sys.maxint to the subprocess as the modulus for generating random numbers. C was truncating the 64-bit integer and interpreting it as signed, i.e., -1 in two's complement, so every random number was being mod'd by that and becoming 0. So, sorting all the same values seems to take about half as much time as random data.
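A quick way to see how the numbers line up (assuming the C side stored the value in a 32-bit signed int; anything mod -1 is 0):
```
import sys

modulus = sys.maxint            # 0x7fffffffffffffff on a 64-bit Python 2 build
low32 = modulus & 0xFFFFFFFF    # what a 32-bit int would keep: 0xffffffff
signed = low32 - 2**32          # reinterpret those bits as signed: -1
print signed                    # -1
print 12345 % -1                # 0  -> every "random" value collapses to 0
```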
|
Wrapping this in a shell script will probably have the same effect. If so, it's the console operations.
|
20,694,338
|
I am trying to play around with some of the functional programming parts of Python, and as a test I thought I would print out the sum of the first n integers for all numbers between 1 and 100.
```
for i in map(lambda n: (n*(n+1))/2, range(1,101)):
    print "sum of the first %d integers: %d" % (i,i)
```
The last line prints out as:
```
sum of the first 5050 integers: 5050
```
It should read "sum of the first 100 integers: 5050" (I may have an off-by-one error, but I'll fix that).
My question is: what is the variable that holds the index?
|
2013/12/20
|
[
"https://Stackoverflow.com/questions/20694338",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1761521/"
] |
You can return a tuple of (index, value) from your lambda, like this:
```
for i,s in map(lambda n: (n,(n*(n+1))/2), range(1,101)):
    print "sum of the first %d integers: %d" % (i,s)
```
|
Your code doesn't define a variable that holds an index. In the outermost scope, there is just the variable (sometimes called a "name" when talking about Python) "i".
If you'd like an index, you can use the built-in function enumerate()
```
for i,x in enumerate([5,10,15]):
    print i, x
```
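Applied to your example (enumerate counts from 0, hence the `+ 1`):
```
for i, total in enumerate(map(lambda n: (n*(n+1))/2, range(1, 101))):
    print "sum of the first %d integers: %d" % (i + 1, total)
```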
| 7,412
|
62,670,991
|
I'm trying to read multiple CSV files from blob storage using python.
The code that I'm using is:
```
blob_service_client = BlobServiceClient.from_connection_string(connection_str)
container_client = blob_service_client.get_container_client(container)
blobs_list = container_client.list_blobs(folder_root)
for blob in blobs_list:
    blob_client = blob_service_client.get_blob_client(container=container, blob="blob.name")
    stream = blob_client.download_blob().content_as_text()
```
I'm not sure what the correct way is to store the CSV files I read into a pandas dataframe.
I tried to use:
```
df = df.append(pd.read_csv(StringIO(stream)))
```
But this shows me an error.
Any idea how can I to do this?
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62670991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6153466/"
] |
You could download the file from blob storage, then read the data into a pandas DataFrame from the downloaded file.
```
from azure.storage.blob import BlockBlobService
import pandas as pd
import tables
STORAGEACCOUNTNAME= <storage_account_name>
STORAGEACCOUNTKEY= <storage_account_key>
LOCALFILENAME= <local_file_name>
CONTAINERNAME= <container_name>
BLOBNAME= <blob_name>
#download from blob
t1=time.time()
blob_service=BlockBlobService(account_name=STORAGEACCOUNTNAME,account_key=STORAGEACCOUNTKEY)
blob_service.get_blob_to_path(CONTAINERNAME,BLOBNAME,LOCALFILENAME)
t2=time.time()
print(("It takes %s seconds to download "+BLOBNAME) % (t2 - t1))
# LOCALFILE is the file path
dataframe_blobdata = pd.read_csv(LOCALFILENAME)
```
For more details, see [here](https://learn.microsoft.com/en-us/azure/machine-learning/team-data-science-process/explore-data-blob).
---
If you want to do the conversion directly, the code will help. You need to get content from the blob object and in the `get_blob_to_text` there's no need for the local file name.
```
from io import StringIO
blobstring = blob_service.get_blob_to_text(CONTAINERNAME,BLOBNAME).content
df = pd.read_csv(StringIO(blobstring))
```
|
```
import pandas as pd
data = pd.read_csv('blob_sas_url')
```
The Blob SAS Url can be found by right clicking on the azure portal's blob file that you want to import and selecting Generate SAS. Then, click Generate SAS token and URL button and copy the SAS url to above code in place of blob\_sas\_url.
| 7,418
|
936,933
|
If you raise a KeyboardInterrupt while trying to acquire a semaphore, the threads that also try to release the same semaphore object hang indefinitely.
Code:
```
import threading
import time

def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    sema.release()

sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
    sema.acquire()
    t = threading.Thread(target=worker, args=(x, sema))
    t.start()
    threads.append(t)
```
Start this up and then ^C as it is running. It will hang and never exit.
```
0 finished
3 finished
1 finished
2 finished
4 finished
^C5 finished
Traceback (most recent call last):
File "/tmp/proof.py", line 15, in <module>
sema.acquire()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 290, in acquire
self.__cond.wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/threading.py", line 214, in wait
waiter.acquire()
KeyboardInterrupt
6 finished
7 finished
8 finished
9 finished
```
How can I get it to let the last few threads die natural deaths and then exit normally? (which it does if you don't try to interrupt it)
|
2009/06/01
|
[
"https://Stackoverflow.com/questions/936933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/41613/"
] |
You can use the signal module to set a flag that tells the main thread to stop processing:
```
import threading
import time
import signal
import sys

sigint = False

def sighandler(num, frame):
    global sigint
    sigint = True

def worker(i, sema):
    time.sleep(2)
    print i, "finished"
    sema.release()

signal.signal(signal.SIGINT, sighandler)

sema = threading.BoundedSemaphore(value=5)
threads = []
for x in xrange(100):
    sema.acquire()
    if sigint:
        sys.exit()
    t = threading.Thread(target=worker, args=(x, sema))
    t.start()
    t.join()
    threads.append(t)
```
|
In this case, it looks like you might just want to use a thread pool to control the starting and stopping of your threads. You could use [Chris Arndt's threadpool library](http://www.chrisarndt.de/projects/threadpool/) in a manner something like this:
```
pool = ThreadPool(5)
try:
    # enqueue 100 worker threads
    pool.wait()
except KeyboardInterrupt, k:
    pool.dismiss(5)
# the program will exit after all running threads are complete
```
| 7,425
|
48,561,126
|
I installed opencv on my Ubuntu 14.04 system system with
```
pip install python-opencv
```
my Python version is 2.7.14
```
import cv2
cv2.__version__
```
tells me that I have the OpenCV version 3.4.0.
After that I wanted to follow the tutorial on the OpenCV website
```
import numpy as np
import cv2 as cv
img = cv.imread('messi5.jpg',0)
print img
```
It works fine until this point, but then I am supposed to enter
```
cv.imshow('image',img)
```
and I get the following error:
```
QObject::moveToThread: Current thread (0x233cdb0) is not the object's thread (0x2458430).
Cannot move to target thread (0x233cdb0)
QObject::moveToThread: Current thread (0x233cdb0) is not the object's thread (0x2458430).
Cannot move to target thread (0x233cdb0)
QPixmap: Must construct a QApplication before a QPaintDevice
```
Does anyone know what the problem is?
|
2018/02/01
|
[
"https://Stackoverflow.com/questions/48561126",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3977420/"
] |
Apparently
```
pip install python-opencv
```
does not work at all and should not be used. After I installed OpenCV from their website, it worked.
|
Try checking if the image you are reading is loading
```
image = cv2.imread(filepath, 0)  # 0 for gray scale
if image is None:
    print "Cant Load Image"
else:
    cv2.imshow("Image", image)
    cv2.waitKey(0)
```
| 7,432
|
30,324,474
|
**Using "re" I compile the data of a handshake like this:**
```
piece_request_handshake = re.compile('13426974546f7272656e742070726f746f636f6c(?P<reserved>\w{16})(?P<info_hash>\w{40})(?P<peer_id>\w{40})')
handshake = piece_request_handshake.findall(hex_data)
```
*Then I print it.*
**I'm unable to add an image because I'm new, so this is the output:**
```
root@debian:/home/florian/Téléchargements# python script.py
[('0000000000100005', '606d4759c464c8fd0d4a5d8fc7a223ed70d31d7b', '2d5452323532302d746d6e6a657a307a6d687932')]
```
**My question is: how can I take only the second piece of this data, that is to say the "info\_hash" (the "606d47...")?**
**I already tried with re's named groups, using the following line:**
```
print handshake.group('info_hash')
```
**But the result is an error (sorry, again I can't show the screen...):**
```
*root@debian:/home/florian/Téléchargements# python script.py
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "script.py", line 122, in run
self.p.dispatch(0, PieceRequestSniffer.cb)
File "script.py", line 82, in cb
print handshake.group('info_hash')
AttributeError: 'list' object has no attribute 'group'*
```
This is the start of my full code for the curious:
```
import pcapy
import dpkt
from threading import Thread
import re
import binascii
import socket
import time
liste=[]
prefix = '13426974546f7272656e742070726f746f636f6c'
hash_code = re.compile('%s(?P<reserved>\w{16})(?P<info_hash>\w{40})(?P<peer_id>\w{40})' % prefix)
match = hash_code.match()
piece_request_handshake = re.compile('13426974546f7272656e742070726f746f636f6c(?P<aaa>\w{16})(?P<bbb>\w{40})(?P<ccc>\w{40})')
piece_request_tcpclose = re.compile('(?P<start>\w{12})5011')
#-----------------------------------------------------------------INIT------------------------------------------------------------
class PieceRequestSniffer(Thread):
def __init__(self, dev='eth0'):
Thread.__init__(self)
self.expr = 'udp or tcp'
self.maxlen = 65535 # max size of packet to capture
self.promiscuous = 1 # promiscuous mode?
self.read_timeout = 100 # in milliseconds
self.max_pkts = -1 # number of packets to capture; -1 => no limit
self.active = True
self.p = pcapy.open_live(dev, self.maxlen, self.promiscuous, self.read_timeout)
self.p.setfilter(self.expr)
@staticmethod
def cb(hdr, data):
eth = dpkt.ethernet.Ethernet(str(data))
ip = eth.data
#------------------------------------------------------IPV4 AND TCP PACKETS ONLY---------------------------------------------------
#Select Ipv4 packets because of problem with the .p in Ipv6
if eth.type == dpkt.ethernet.ETH_TYPE_IP6:
return
else:
#Select only TCP protocols
if ip.p == dpkt.ip.IP_PROTO_TCP:
tcp = ip.data
src_ip = socket.inet_ntoa(ip.src)
dst_ip = socket.inet_ntoa(ip.dst)
fin_flag = ( tcp.flags & dpkt.tcp.TH_FIN ) != 0
#if fin_flag:
#print "TH_FIN src:%s dst:%s" % (src_ip,dst_ip)
try:
#Return hexadecimal representation
hex_data = binascii.hexlify(tcp.data)
except:
return
#-----------------------------------------------------------HANDSHAKE-------------------------------------------------------------
handshake = piece_request_handshake.findall(hex_data)
if handshake and (src_ip+" "+dst_ip) not in liste and (dst_ip+" "+src_ip) not in liste and handshake != '':
liste.append(src_ip+" "+dst_ip)
print match.group('info_hash')
```
|
2015/05/19
|
[
"https://Stackoverflow.com/questions/30324474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872475/"
] |
`re.findall()` returns a list of tuples, each containing the matching strings that correspond to the named groups in the re pattern. This example (using a simplified pattern) demonstrates that you can access the required item with indexing:
```
import re
prefix = 'prefix'
pattern = re.compile('%s(?P<reserved>\w{4})(?P<info_hash>\w{10})(?P<peer_id>\w{10})' % prefix)
handshake = 'prefix12341234567890ABCDEF1234' # sniffed data
match = pattern.findall(handshake)
>>> print match
[('1234', '1234567890', 'ABCDEF1234')]
>>> info_hash = match[0][1]
>>> print info_hash
1234567890
```
But the point of named groups is to provide a way to access the matched values for a named group by name. You can use `re.match()` instead:
```
import re
prefix = 'prefix'
pattern = re.compile('%s(?P<reserved>\w{4})(?P<info_hash>\w{10})(?P<peer_id>\w{10})' % prefix)
handshake = 'prefix12341234567890ABCDEF1234' # sniffed data
match = pattern.match(handshake)
>>> print match
<_sre.SRE_Match object at 0x7fc201efe918>
>>> print match.group('reserved')
1234
>>> print match.group('info_hash')
1234567890
>>> print match.group('peer_id')
ABCDEF1234
```
The values are also available using dictionary access:
```
>>> d = match.groupdict()
>>> d
{'peer_id': 'ABCDEF1234', 'reserved': '1234', 'info_hash': '1234567890'}
>>> d['info_hash']
'1234567890'
```
Finally, if there are multiple handshake sequences in the input data, you can use `re.finditer()`:
```
import re
prefix = 'prefix'
pattern = re.compile('%s(?P<reserved>\w{4})(?P<info_hash>\w{10})(?P<peer_id>\w{10})' % prefix)
handshake = 'blahprefix12341234567890ABCDEF1234|randomjunkprefix12349876543210ABCDEF1234,more random junkprefix1234hellothereABCDEF1234...' # sniffed data
for match in pattern.finditer(handshake):
    print match.group('info_hash')
```
Output:
```
1234567890
9876543210
hellothere
```
|
`re.findall` will return a list of tuples. The `group()` call works on `Match` objects, returned by some other functions in `re`:
```
for match in re.finditer(needle, haystack):
    print match.group('info_hash')
```
Also, you might not need `findall` if you're just matching a single handshake.
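For a single handshake, something like this would be enough (reusing the compiled `piece_request_handshake` pattern from the question, in which the info hash group is named `bbb`):
```
match = piece_request_handshake.search(hex_data)
if match:
    print match.group('bbb')  # the info_hash group in the question's pattern
```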
| 7,435
|
30,542,336
|
I am new to Python and trying to learn recursion.
I'm trying to display all possible outcomes of changing 'a' to either the number 7 or 8.
For example,
```
user_type = 40aa
```
so it will display:
```
4077
4078
4087
4088
```
thank you
It doesn't have to be 40aa; it can be a4a0, aaa0, etc.
This code only replaces with 7; how can I fix that?
```
user_type = '40aa'

def replace(string, a, b):
    if not string:
        return ""
    elif string[:len(b)] == b:
        return a + replace(string[len(b):], a, b)
    else:
        return string[0] + replace(string[1:], a, b)

print(replace(user_type, '7', 'a'))
```
|
2015/05/30
|
[
"https://Stackoverflow.com/questions/30542336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4955371/"
] |
```
pattern = "40aa"
options = [7, 8]

def replace(left, right):
    if len(right) > 0:
        if right[0] == "a":
            results = []
            for i in options:
                results.extend(replace(left + str(i), right[1:]))
            return results
        else:
            return replace(left + right[0], right[1:])
    else:
        return [left]

print replace("", pattern)
```
In other words, the function is called with the already processed part of the pattern, and remaining part of the pattern. If the next pattern's character is a digit, it's passed from the pattern to the result. If it's "a" it's replaced with all options step by step and the remaining pattern is processed recursively.
|
I don't know Python very well, but I can help with the recursion.
The basic idea is that you will loop through each character in the string, and each time you hit an 'a', you will replace it with a 7 and an 8, and pass both of those values to your recursive method.
Here is an example:
Suppose you have the string "Basttaa".
Loop until you hit an a, so you are at the second character. Replace it with a '7' and an '8'. Now you have two separate strings, and you can pass both to your recursive method.
We now have "B7sttaa" and "B8sttaa". We pass both to our function.
In the first string, we get to the 6th character and replace it with a '7' and an '8' and repeat the process. After that replacement, we have "B7stt7a", "B7stt8a", and "B8sttaa".
Now with the second string that was passed, we get to the 6th character again and do the process of replacing. Now we have four strings: "B7stt7a", "B7stt8a", "B8stt7a", and "B8stt8a".
Those four strings are again passed to the recursive method and we get our final 8 strings, after the last character on each is replaced with both a '7' and an '8'.
Our four strings: "B7stt7a", "B7stt8a", "B8stt7a", and "B8stt8a" are again passed to our recursive method. The method gets to the last character of each, and replaces the a of each with a '7' and '8'. Then, because it is at the end of each string, it adds each to the list.
"B7stt7a" becomes "B7stt77" and "B7stt78" and both are added to the list.
"B7stt8a" becomes "B7stt87" and "B7stt88" and both are added to the list.
"B8stt7a" becomes "B8stt77" and "B8stt78" and both are added to the list.
"B8stt8a" becomes "B8stt87" and "B8stt88" and both are added to the list.
Now the list has ["B7stt77", "B7stt78", "B7stt87", "B7stt88", "B8stt77", "B8stt78", "B8stt87", "B8stt88"]
The psuedo-code looks something like this:
```
list[];

recursion(string str)
    for each char
        if char is 'a'
            return recursion(str replace char with 7)
            return recursion(str replace char with 8)
    if at end
        add str to list
        return;
```
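A rough Python translation of that pseudo-code, just as a sketch:
```
results = []

def recursion(s, i=0):
    # walk the string; branch on every 'a', collect finished strings
    if i == len(s):
        results.append(s)
        return
    if s[i] == 'a':
        recursion(s[:i] + '7' + s[i+1:], i + 1)
        recursion(s[:i] + '8' + s[i+1:], i + 1)
    else:
        recursion(s, i + 1)

recursion("40aa")
print(results)   # ['4077', '4078', '4087', '4088']
```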
| 7,436
|
22,770,352
|
I am trying to predict weekly sales using ARMA ARIMA models. I could not find a function for tuning the order(p,d,q) in `statsmodels`. Currently R has a function `forecast::auto.arima()` which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
2014/03/31
|
[
"https://Stackoverflow.com/questions/22770352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1483927/"
] |
```
# imports assumed by this snippet (ARIMA here is the older statsmodels API)
import warnings
from datetime import datetime
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error

def evaluate_arima_model(X, arima_order):
    # prepare training dataset
    train_size = int(len(X) * 0.90)
    train, test = X[0:train_size], X[train_size:]
    history = [x for x in train]
    # make predictions
    predictions = list()
    for t in range(len(test)):
        model = ARIMA(history, order=arima_order)
        model_fit = model.fit(disp=0)
        yhat = model_fit.forecast()[0]
        predictions.append(yhat)
        history.append(test[t])
    # calculate out of sample error
    error = mean_squared_error(test, predictions)
    return error

# evaluate combinations of p, d and q values for an ARIMA model
def evaluate_models(dataset, p_values, d_values, q_values):
    dataset = dataset.astype('float32')
    best_score, best_cfg = float("inf"), None
    for p in p_values:
        for d in d_values:
            for q in q_values:
                order = (p, d, q)
                try:
                    mse = evaluate_arima_model(dataset, order)
                    if mse < best_score:
                        best_score, best_cfg = mse, order
                    print('ARIMA%s MSE=%.3f' % (order, mse))
                except:
                    continue
    print('Best ARIMA%s MSE=%.3f' % (best_cfg, best_score))

# load dataset
def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

p_values = [4, 5, 6, 7, 8]
d_values = [0, 1, 2]
q_values = [2, 3, 4, 5, 6]
warnings.filterwarnings("ignore")
evaluate_models(train, p_values, d_values, q_values)
```
This will give you the p,d,q values, then use the values for your ARIMA model
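Once the grid search has printed the best order, fitting the final model is just (a sketch; the order shown is only a placeholder for whatever `evaluate_models` reported):
```
best_order = (5, 1, 2)                      # whatever evaluate_models reported as best
model_fit = ARIMA(train, order=best_order).fit(disp=0)
forecast = model_fit.forecast(steps=10)[0]  # e.g. the next 10 weekly values
```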
|
I wrote these utility functions to directly calculate pdq values
*get\_PDQ\_parallel* takes `data`, a Series with a timestamp (datetime) index, and `n_jobs`, the number of parallel processes. The output is a DataFrame with the AIC and BIC values, indexed by order=(P,D,Q).
The p and q range is [0,12] while d is [0,1].
```
# numpy and pandas imports added; ARIMA here is the older statsmodels API
import numpy as np
import pandas as pd
import statsmodels
from statsmodels import api as sm
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.utils import check_array
from functools import partial
from multiprocessing import Pool

def get_aic_bic(order, series):
    aic = np.nan
    bic = np.nan
    #print(series.shape,order)
    try:
        arima_mod = statsmodels.tsa.arima_model.ARIMA(series, order=order, freq='H').fit(transparams=True, method='css')
        aic = arima_mod.aic
        bic = arima_mod.bic
        print(order, aic, bic)
    except:
        pass
    return aic, bic

def get_PDQ_parallel(data, n_jobs=7):
    p_val = 13
    q_val = 13
    d_vals = 2
    pdq_vals = [(p, d, q) for p in range(p_val) for d in range(d_vals) for q in range(q_val)]
    get_aic_bic_partial = partial(get_aic_bic, series=data)
    p = Pool(n_jobs)
    res = p.map(get_aic_bic_partial, pdq_vals)
    p.close()
    return pd.DataFrame(res, index=pdq_vals, columns=['aic', 'bic'])
```
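Usage would be roughly like this (assuming `series` is your hourly pandas Series):
```
results = get_PDQ_parallel(series, n_jobs=4)
print(results.sort_values('aic').head())   # (p, d, q) orders with the lowest AIC first
```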
| 7,438
|
68,562,020
|
This is my code
```
import pandas as pd
keys = ['phone match', 'account match']
d = {k: [] for k in keys}
df = pd.DataFrame(data=[[1,2,3],[4,5,6]],columns=['A','B','C'])
df['D'] = [d for _ in range(df.shape[0])]
df.at[0, 'D']['phone match'].append(4)
```
But instead of appending only on the dictionary at index 0 it appends to all the dictionaries and therefore the output is:
```
A B C D
0 1 2 3 {'phone match': [4], 'account match': []}
1 4 5 6 {'phone match': [4], 'account match': []}
```
While the desired output is:
```
A B C D
0 1 2 3 {'phone match': [4], 'account match': []}
1 4 5 6 {'phone match': [], 'account match': []}
```
I think this is because python is linking to the same dictionary, but how can I avoid that?
|
2021/07/28
|
[
"https://Stackoverflow.com/questions/68562020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16546771/"
] |
You need to create multiple `dict` objects so that each of them has a different object ID:
```
keys = ['phone match', 'account match']
df = pd.DataFrame(data=[[1,2,3],[4,5,6]],columns=['A','B','C'])
df['D'] = [{k: [] for k in keys} for _ in range(df.shape[0])] # Change here
df.at[0, 'D']['phone match'].append(4)
df
Out[65]:
A B C D
0 1 2 3 {'phone match': [4], 'account match': []}
1 4 5 6 {'phone match': [], 'account match': []}
```
|
`dict` objects are passed by reference in Python.
In order to achieve what you want, you can use the following line, which creates a copy of `d` for every row:
```
df['D'] = [d.copy() for _ in range(df.shape[0])]
```
| 7,448
|
31,518,864
|
I am currently generating 8 random values each time I run a program on Python. These 8 values are different each time I run the program, and I would like to be able to now save these 8 values each time I run the program to a text file in 8 separate columns. When saving these values for future runs, though, I would like to still be able to keep previous values. For example: after run 1, the text file will be 8x1, after run 2, the text file will be 8x2, and after run n, the text file will be 8xn.
I have been looking at solutions like this: [save output values in txt file in columns python](https://stackoverflow.com/questions/25967479/save-output-values-in-txt-file-in-columns-python)
And it seems using 'a' instead of 'w' will append my new values instead of overwriting previous values. I've been trying to follow the documentation on the .write method but just don't quite see how I can write to a particular column using it. I have been able to simply write each column to its own text file, but I'd rather be able to write the columns together in the same text file for future runs I do with this program.
Edit: my outputs will be 8 floating point numbers and to reiterate, they will be random each time.
So after 1 run, I will create 8 floating point values: Run11, Run12, Run13, Run14, Run15, Run16, Run17, Run18. After my second run, I will create another set of values (8 entries long): Run21, Run22, Run23, Run24, Run25, Run26, Run27, Run28.
In the text file, I would like these values to be placed in specific columns like this: [http://imgur.com/zxoxaKM](https://imgur.com/zxoxaKM) (this is what it would look like after 2 runs).
The "Value n:" titles are the headers for each column.
|
2015/07/20
|
[
"https://Stackoverflow.com/questions/31518864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5135338/"
] |
```
import csv
from tempfile import NamedTemporaryFile
from shutil import move
from itertools import chain

with open("in.csv") as f, NamedTemporaryFile(dir=".", delete=False) as temp:
    r = csv.reader(f)
    new = [9, 10, 11, 12, 13, 14, 15, 16]
    wr = csv.writer(temp)
    wr.writerows(zip(chain.from_iterable(r), new))
move(temp.name, "in.csv")
```
Input:
```
1
2
3
4
5
6
7
8
```
Output:
```
1,9
2,10
3,11
4,12
5,13
6,14
7,15
8,16
```
To take the header into account:
```
with open("in.csv") as f, NamedTemporaryFile(dir=".", delete=False) as temp:
    r = csv.reader(f)
    header = next(r)
    new = [9, 10, 11, 12, 13, 14, 15, 16]
    wr = csv.writer(temp)
    wr.writerow(header + ["Value {}:".format(len(header) + 1)])
    wr.writerows(zip(chain.from_iterable(r), new))
move(temp.name, "in.csv")
```
Input:
```
Value 1:
1
2
3
4
5
6
7
8
```
Output:
```
Value 1:,Value 2:
1,9
2,10
3,11
4,12
5,13
6,14
7,15
8,16
```
If you are adding an actual row each time and not a column, then just append:
```
with open("in.csv", "a") as f:
    new = [9, 10, 11, 12, 13, 14, 15, 16]
    wr = csv.writer(f)
    wr.writerow(new)
```
Input:
```
value 1:,value 2:,value 3:,value 4:,value 5:,value 6:,value 7:,value 8:
1,2,3,4,5,6,7,8
```
Output:
```
value 1:,value 2:,value 3:,value 4:,value 5:,value 6:,value 7:,value 8:
1,2,3,4,5,6,7,8
9,10,11,12,13,14,15,16
```
|
what about
```
a = [1, 2, 3, 4, 5, 6, 7, 8]
f = open('myFile.txt', 'a')
for n in a:
    f.write('%d\t' % n)
f.write('\n')
f.close()
```
and you get as file content after running it 4 times
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
======= EDIT =========
Try this, its ugly but works ;-)
```
import os.path

h = ['Header 1', 'Hea 2', 'Header 3', 'Header 4', 'H 5', 'Header', 'Header 7', 'Header 8']
a = [1, 2, 3, 4, 5, 6, 7, 8]
fileName = 'myFile.txt'

# write header
withHeader = not os.path.isfile(fileName)
f = open(fileName, 'a')
if withHeader:
    print 'Writing header'
    for s in h:
        f.write('%s\t' % s)
    f.write('\n')

# write numbers
for i in range(0, len(a)):
    space = len(h[i]) / 2
    n = a[i]
    for c in range(0, space):
        f.write(' ')
    print 'Writing %d' % n
    f.write('%d' % n)
    for c in range(0, space):
        f.write(' ')
    f.write('\t')
f.write('\n')
f.close()
```
result:
```
Header 1 Hea 2 Header 3 Header 4 H 5 Header Header 7 Header 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
```
| 7,449
|
66,602,674
|
I have two e2e automation frameworks; one is Python based, the other is Protractor based. I need to write a docker-compose file to run these two projects in different containers and fetch the reports and their console output to my local system.
below are the contents of my docker-compose.yml file
```
version: '3'
services:
  e2e-Tests:
    build: ./c8y/DockerFile
    image: e2etests
    command: npm run e2e
    container_name: cn-e2eTests
  py-Tests:
    build: ./py/DockerFile
    image: pytests
    command: npm run e2e
    container_name: cn-pyTests
```
When I run docker-compose up , I get the below error :
```
Building e2e-Tests
failed to get console mode for stdout: The handle is invalid.
[+] Building 0.0s (0/1)
[+] Building 0.0s (1/2) om sender: walk \\?\C:\Users\xxx\e2e\Dockerfile: The system cannot find the path specified.
=> ERROR [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 105B 0.0s
------
> [internal] load build definition from Dockerfile:
------
ERROR: Service 'e2e-Tests' failed to build
```
|
2021/03/12
|
[
"https://Stackoverflow.com/questions/66602674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7074479/"
] |
Try to rename: `Dockerfile` instead of `DockerFile`.
**Edit:**
When the Docker file name is not provided explicitly, it will look for the file `Dockerfile` with a small `f`. My issue was as simple as that.
|
My issue was related to registry access issues. It works fine now.
| 7,450
|
53,402,349
|
I built a regex expression that matches
2 letters, or 2 letters followed by '/' and another 2 letters, for example:
```
rt bl/ws se gn/wd wk bl/rt
/^(((\s+)?[a-zA-Z]{2}(\/[a-zA-Z]{2})?)(\s+|$))+$/i
```
and that works without problems.
The next problem I have is to match every "word" not containing the '/' character
and replace each match with its value duplicated and separated by '/'. For the above example the expected output should be:
```
rt/rt bl/ws se/se gn/wd wk/wk bl/rt
```
I have tried for some time but without success. Could you help me with that?
**Update 1:**
I've started with regex that matches words not containing 'at'
```
(\b((?!(at))\w)+\b)
```
At the end I want to replace the matched elements in Python like
```
re.sub(r'(\b((?!(at))\w)+\b)', r'\1/\1', text)
```
but first I have to find the right elements...
|
2018/11/20
|
[
"https://Stackoverflow.com/questions/53402349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1743688/"
] |
A list comprehension should do the trick:
```
>>> NUM_ITEMS = 5
>>> my_array = [[0, 1] for _ in range(NUM_ITEMS)]
>>> my_array
[[0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
```
|
Since you tagged arrays, here's an alternative `numpy` solution using [`numpy.tile`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.tile.html).
```
>>> import numpy as np
>>> NUM_ITEMS = 10
>>> np.tile([0, 1], (NUM_ITEMS, 1))
array([[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1],
[0, 1]])
```
| 7,451
|
49,695,050
|
I'm trying to write a csv file into an S3 bucket using AWS Lambda, and for this I used the following code:
```
data = [[1, 2, 3], [23, 56, 98]]
with open("s3://my_bucket/my_file.csv", "w") as f:
    f.write(data)
```
And this raises the following error:
```
[Errno 2] No such file or directory: u's3://my_bucket/my_file.csv': IOError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 51, in lambda_handler
with open("s3://my_bucket/my_file.csv", "w") as f:
IOError: [Errno 2] No such file or directory: u's3://my_bucket/my_file.csv'
```
Can I have some help with this please ?
**PS: I'm using python 2.7**
Thanking you in advance
|
2018/04/06
|
[
"https://Stackoverflow.com/questions/49695050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6598781/"
] |
Better to answer later than never. There are four steps to get your data in S3:
* Call the S3 bucket
* Load the data into Lambda using the requests library (if you don't have it installed, you are gonna have to load it as a layer)
* Write the data into the Lambda '/tmp' file
* Upload the file into s3
Something like this:
```
import csv
import boto3
import requests
# all other appropriate libs should already be loaded in Lambda

# properly call your s3 bucket
s3 = boto3.resource('s3')
bucket = s3.Bucket('your-bucket-name')
key = 'yourfilename.txt'

# you would need to grab the file from somewhere. Use this incomplete line below to get started:
with requests.Session() as s:
    getfile = s.get('yourfilelocation')

# Only then you can write the data into the '/tmp' folder.
with open('/tmp/yourfilename.txt', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerows(filelist)

# upload the data into s3
bucket.upload_file('/tmp/yourfilename.txt', key)
Hope it helps.
|
```
with open("s3://my_bucket/my_file.csv", "w+") as f:
```
instead of
```
with open("s3://my_bucket/my_file.csv", "w") as f:
```
notice the "w" has changed to "w+" this means that it will write to the file, and if it does not exist it will create it.
| 7,452
|
51,400,332
|
I want the insertion query to do nothing if there is nothing new in the CSV file. In case there is, I want to insert only that row and not the whole CSV again. Any suggestion would be great!
PS: it's not a duplicate of other questions because here we have "%s" placeholders, not fixed values, and in Python the syntax is different!
```
cursorobject=connection.cursor()
sql2="CREATE DATABASE IF NOT EXISTS mydb"
cursorobject.execute(sql2)
sql1="CREATE TABLE IF NOT EXISTS users(id int(11) NOT NULL AUTO_INCREMENT,first_name varchar(255),last_name varchar(255),company_name varchar(255),address varchar(255),city varchar(255),country varchar(255),postal varchar(255),phone1 varchar(255),phone2 varchar(255),email varchar(255),web varchar(255),PRIMARY KEY(id))"
cursorobject.execute(sql1)
csvfile=open('Entries.csv','r')
reader = csv.reader(csvfile,delimiter=',')
for row in reader:
cursorobject.execute("INSERT INTO users(first_name,last_name,company_name,address,city,country,postal,phone1,phone2,email,web) VALUES (%s ,%s, %s,%s,%s,%s,%s,%s,%s,%s,%s)",row)
```
|
2018/07/18
|
[
"https://Stackoverflow.com/questions/51400332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9855183/"
] |
You should do something like this:
```
class eventCell: UICollectionViewCell {
@IBOutlet private weak var eventTitle: UILabel!
@IBOutlet private weak var descriptionLabel:UILabel!
@IBOutlet private weak var eventImage: UIImageView!
typealias Event = (title:String, location:String, lat:CLLocationDegrees, long:CLLocationDegrees)
var eventArray = [Event]()
override func prepareForReuse() {
eventImage.image = nil
}
func lool() {
var event = Event(title: "a", location:"b", lat:5, long:4)
eventArray.append(event)
eventTitle.text = eventArray[0].title
}
}
```
|
Why not creating some `Struct`?
Simple like this:
```
struct Event {
var title: String
var location: String
var lat: CLLocationDegrees
var long: CLLocationDegrees
}
```
Then just do that:
```
var eventArray = [Event]()
```
And call it like that:
```
for event in eventArray{
event.title = eventTitle.text
}
```
| 7,454
|
68,714,450
|
I have 2 dataframes:
**users**
```
user_id position
0 201 Senior Engineer
1 207 Senior System Architect
2 223 Senior account manage
3 212 Junior Manager
4 112 junior Engineer
5 311 junior python developer
```
```
df1 = pd.DataFrame({'user_id': ['201', '207', '223', '212', '112', '311'],
'position': ['Senior Engineer', 'Senior System Architect', 'Senior account manage', 'Junior Manager', 'junior Engineer', 'junior python developer']})
```
**roles**
```
role_id role_position
0 10 %senior%
1 20 %junior%
```
```
df2 = pd.DataFrame({'role_id': ['10', '20'],
'role_position': ['%senior%', '%junior%']})
```
I want to join them to get role\_id for each row in df1 using condition something like this:
```
lower(df1.position) LIKE df2.role_position
```
I want to use operator LIKE (like in SQL).
So it would look like this (or without role\_position - it would be even better):
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
How can i make this?
Thank you for your help!
|
2021/08/09
|
[
"https://Stackoverflow.com/questions/68714450",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16580145/"
] |
You can use `str.extract()`+`merge()`:
```
pat='('+'|'.join(df2['role_position'].str.strip('%').unique())+')'
df1['role_position']='%'+df1['position'].str.lower().str.extract(pat,expand=False)+'%'
df1=df1.merge(df2,on='role_position',how='left')
```
output of `df1`:
```
user_id position role_id role_position
0 201 Senior Engineer 10 %senior%
1 207 Senior System Architect 10 %senior%
2 223 Senior account manage 10 %senior%
3 212 Junior Manager 20 %junior%
4 112 junior Engineer 20 %junior%
5 311 junior python developer 20 %junior%
```
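If you need something closer to a literal SQL `LIKE` — for example, to also handle positions that contain no role keyword — a cross join plus a substring test is another option. This is only a sketch starting from the original `df1`/`df2`, not part of the answer above, and `how='cross'` requires pandas 1.2+:
```
# pair every user with every role, then keep rows where the keyword occurs in the position
cross = df1.merge(df2, how='cross')
keep = cross.apply(lambda r: r['role_position'].strip('%') in r['position'].lower(), axis=1)
result = cross[keep].reset_index(drop=True)
```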
|
Possibilities:
* [fuzzy words](https://www.google.com/search?q=fuzzy%20in%20pandas&rlz=1C5CHFA_enPL889PL889&oq=fuzzy%20in%20pandas&aqs=chrome..69i57j0i10i22i30j0i22i30.2529j0j7&sourceid=chrome&ie=UTF-8)
* [Sequence Matcher](https://towardsdatascience.com/sequencematcher-in-python-6b1e6f3915fc)
* [.extract](https://www.geeksforgeeks.org/python-pandas-series-str-extract/#:%7E:text=extract()%20function%20is%20used,match%20of%20regular%20expression%20pat.)
---
```
from difflib import SequenceMatcher

def similar(a, b):
    # ratio() returns a similarity score between 0 and 1
    return SequenceMatcher(None, a, b).ratio()

df1['Similarity'] = 0
df1['Role'] = 0

for index, row in df1.iterrows():
    for x in df2['role_position']:
        z = similar(row['position'], x)
        if z >= 0.20:   # keep the role once the score passes the threshold
            df1.loc[index, "Similarity"] = z
            df1.loc[index, "Role"] = x
```
[](https://i.stack.imgur.com/S7FFY.png)
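A possible follow-up (not in the original answer): once the best-matching `Role` string is stored, the corresponding `role_id` can be pulled in with a merge against `df2`:
```
df1 = df1.merge(df2, left_on='Role', right_on='role_position', how='left')
```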
| 7,455
|
61,253,507
|
I am parsing a JSON file that has the following data subset.
```
"title": "Revert \"testcase for check\""
```
In my Python script I do the following:
```
with open('%s/staging_area/pr_info.json' % cwd) as data_file:
    pr_info = json.load(data_file)
pr_title = pr_info["title"]
```
pr\_title will contain the following information after getting the title from the JSON object.
```
Revert "testcase for check"
```
It seems that the escape characters (\) are not part of the string after assignment. Is there any way to retain the entire string, including the escape characters? Thank you so much!
|
2020/04/16
|
[
"https://Stackoverflow.com/questions/61253507",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9828901/"
] |
If you really need it, you can escape it again with json and remove the first and last quotes:
```py
pr_title = json.dumps(pr_title)[1:-1]
```
but escape characters are only for escaping; the raw value of the string is still `Revert "testcase for check"`. So the escaping function to use will depend on where your data is applied (DB, HTML, XML, etc.).
To explain `[1:-1]`: `dumps` escapes the raw string so that it is valid JSON, which adds `\` characters and surrounds the string with quotation marks `"`. You have to remove these quotes from the resulting string. Since Python strings can be sliced like lists, `[1:-1]` takes everything from the second character to the second-to-last one, which literally removes the first and last quotes:
```
>>> escaped = json.dumps(pr_info["title"])
>>> print(escaped)
"Revert \"testcase for check\""
>>> print(escaped[1:-1])
Revert \"testcase for check\"
```
|
If your goal is to print pr\_title, then you can probably use json.dumps() to print the original text.
```
>>> import json
>>> j = '{"name": "\"Bob\""}'
>>> print(j)
{"name": ""Bob""}
>>> json.dumps(j)
'"{\\"name\\": \\"\\"Bob\\"\\"}"'
```
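For a small self-contained round trip with the OP's actual title (the JSON literal below is just a stand-in for `pr_info.json`):
```
import json

raw = '{"title": "Revert \\"testcase for check\\""}'   # what the file contains
title = json.loads(raw)["title"]                       # Revert "testcase for check"
escaped = json.dumps(title)[1:-1]                      # Revert \"testcase for check\"
print(escaped)
```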
| 7,459
|
62,618,261
|
I have 4 figures (y1, y2, y3, y4) that I want to plot on a common x axis (yr1, yr2, yr3, m1, m2, m3, m4, m5). In this code, however, I have kept the x axis separate since I am trying to get the basics right first.
```
import matplotlib.pyplot as plt
import numpy as np
plt.figure(1)
xaxis = ['y1','y2','y3','m1','m2','m3', 'm4', 'm5']
y1 = np.array([.73,.74,.71,.75,.72,.75,.74,.74])
y2 = np.array([.82,.80,.77,.81,.72,.81,.77,.77])
y3 = np.array([.35,.36,.45,.43,.44,.45,.48,.45])
y4 = np.array([.49,.52,.59,.58,.61,.65,.61,.58])
plt.subplot(221)
plt.plot(xaxis,y1)
plt.subplot(222)
plt.plot(xaxis,y2)
plt.subplot(223)
plt.subplot(xaxis,y3)
plt.subplot(224)
plt.subplot(xaxis,y4)
plt.show()
```
Getting this error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-dfe04cc8c6c4> in <module>
14 plt.plot(xaxis,y2)
15 plt.subplot(223)
---> 16 plt.subplot(xaxis,y3)
17 plt.subplot(224)
18 plt.subplot(xaxis,y4)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\pyplot.py in subplot(*args, **kwargs)
1074
1075 fig = gcf()
-> 1076 a = fig.add_subplot(*args, **kwargs)
1077 bbox = a.bbox
1078 byebye = []
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\figure.py in add_subplot(self, *args, **kwargs)
1412 self._axstack.remove(ax)
1413
-> 1414 a = subplot_class_factory(projection_class)(self, *args, **kwargs)
1415
1416 return self._add_axes_internal(key, a)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\axes\_subplots.py in __init__(self, fig, *args, **kwargs)
62 # num - 1 for converting from MATLAB to python indexing
63 else:
---> 64 raise ValueError(f'Illegal argument(s) to subplot: {args}')
65
66 self.update_params()
ValueError: Illegal argument(s) to subplot: (['y1', 'y2', 'y3', 'm1', 'm2', 'm3', 'm4', 'm5'], array([0.35, 0.36, 0.45, 0.43, 0.44, 0.45, 0.48, 0.45]))
```
Please help me understand the issue here!
|
2020/06/28
|
[
"https://Stackoverflow.com/questions/62618261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5866905/"
] |
Small mistake. You have put `plt.subplot` instead of `plt.plot`. This should work now:
```
import matplotlib.pyplot as plt
import numpy as np
plt.figure(1)
xaxis = ['y1','y2','y3','m1','m2','m3', 'm4', 'm5']
y1 = np.array([.73,.74,.71,.75,.72,.75,.74,.74])
y2 = np.array([.82,.80,.77,.81,.72,.81,.77,.77])
y3 = np.array([.35,.36,.45,.43,.44,.45,.48,.45])
y4 = np.array([.49,.52,.59,.58,.61,.65,.61,.58])
plt.subplot(221)
plt.plot(xaxis,y1)
plt.subplot(222)
plt.plot(xaxis,y2)
plt.subplot(223)
plt.plot(xaxis,y3)
plt.subplot(224)
plt.plot(xaxis,y4)
plt.show()
```
Hope this helps :)
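Since all four series share the same x values, a slightly more compact variant (just a sketch, reusing `xaxis` and `y1`–`y4` from above) is `plt.subplots` with `sharex=True` and a loop:
```
fig, axs = plt.subplots(2, 2, sharex=True)
for ax, y in zip(axs.flat, [y1, y2, y3, y4]):
    ax.plot(xaxis, y)
plt.show()
```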
|
Try this:
```
fig, ax = plt.subplots(4, 1, sharex=True, gridspec_kw={'height_ratios': [3, 1, 1, 1]})
ax[0].plot(xaxis, y1)
ax[1].plot(xaxis, y2)
ax[2].plot(xaxis, y3)
ax[3].plot(xaxis, y4)
```
This gives 4 figures stacked on top of each other with a shared x-axis.
For a 2x2 grid:
```
fig, ax = plt.subplots(2, 2)
ax[0, 0].plot(xaxis, y1)
ax[0, 1].plot(xaxis, y2)
ax[1, 0].plot(xaxis, y3)
ax[1, 1].plot(xaxis, y4)
```
and then `plt.show()` to see the results.
For more, see this example: [spectrum demo](https://matplotlib.org/gallery/lines_bars_and_markers/spectrum_demo.html#sphx-glr-gallery-lines-bars-and-markers-spectrum-demo-py)
| 7,461
|
45,063,974
|
I have an SQLite table with 3 columns named ID (integer), N (integer) and V (real). The pair (ID, N) is unique.
Using the Python module sqlite3, I would like to perform a recursive selection of the form
```
select ID from TABLE where N = 0 and V between ? and ? and ID in
(select ID from TABLE where N = 7 and V between ? and ? and ID in
(select ID from TABLE where N = 8 and V between ? and ? and ID in
(...)
)
)
```
I get the following error, probably because the maximum recursion depth was exceeded (?). I need about 20 to 50 recursion levels:
```
sqlite3.OperationalError: parser stack overflow
```
I also tried to join the subselections, like this:
```
select ID from
(select ID from TABLE where N = 0 and V between ? and ?)
join (select ID from TABLE where N = 7 and V between ? and ?) using (ID)
join (select ID from TABLE where N = 8 and V between ? and ?) using (ID)
join ...
```
but this approach is surprisingly slow, even with only a few subselections.
Is there a better way to perform the same selection?
Note: the table is indexed on (N, V).
Below is an example to show how the selection works.
```
ID N V
0 0 0.1
0 1 0.2
0 2 0.3
1 0 0.5
1 1 0.6
1 2 0.7
2 0 0.8
2 1 0.9
2 2 1.2
Step 0
```
select ID from TABLE where N = 0 and V between 0 and 0.6
```
ID is in (0,1)
Step 1
```
select ID from TABLE where N = 1 and V between 0 and 1 and ID in (0, 1)
```
ID is still in (0,1)
Step 2
```
select ID from TABLE where N = 2 and V between 0.5 and 1 and ID in (0, 1)
```
ID is 1
|
2017/07/12
|
[
"https://Stackoverflow.com/questions/45063974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2660966/"
] |
Unwrap the recursion, do it in reverse order, and do it in Python. For this I created a table consisting of 100 records, each with an Id between 0 and 99, N=3 and V=5. Arbitrarily, I selected the entire collection of records as the innermost selection.
You need to imagine having a list of values for N and V, ordered so that the values at the head of the list are selected by the last SQL SELECT. What the loop does is simply take the list of IDs resulting from an inner SELECT and feed it, as part of the IN clause, to the next SELECT.
Without any indexes this is all over in the blink of an eye.
```
>>> import sqlite3
>>> conn = sqlite3.connect('recur.db')
>>> c = conn.cursor()
>>> previous_ids = str(tuple(range(0,100)))
>>> for it in range(50):
...     rows = c.execute('''SELECT ID FROM the_table WHERE N=3 AND V BETWEEN 2 AND 7 AND ID IN %s''' % previous_ids)
...     previous_ids = str(tuple([int(_[0]) for _ in rows.fetchall()]))
...
>>> previous_ids
'(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99)'
```
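One caveat with building the IN clause via `str(tuple(...))` (not mentioned in the original answer): a single-element result gives `'(5,)'` and an empty result gives `'()'`, both of which SQLite is likely to reject as a syntax error. A sketch of a placeholder-based variant, with the bound values assumed for illustration:
```
ids = [int(r[0]) for r in rows.fetchall()]
placeholders = ','.join('?' * len(ids))
query = 'SELECT ID FROM the_table WHERE N=? AND V BETWEEN ? AND ? AND ID IN (%s)' % placeholders
rows = c.execute(query, [3, 2, 7] + ids)
```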
Edit: this variant avoids building the long strings, though it takes somewhat longer than the blink of an eye. It's essentially the same idea, implemented with temporary tables.
```
>>> import sqlite3
>>> conn = sqlite3.connect('recur.db')
>>> c = conn.cursor()
>>> N_V = [
... (0, (0,6)),
... (0, (0, 1)),
... (1, (0, 2)),
... (2, (0, 3)),
... (0, (0, 5)),
... (1, (0, 6)),
... (2, (0, 7)),
... (0, (0, 8)),
... (1, (0, 9)),
... (2, (1, 2))
... ]
>>> r = c.execute('''CREATE TABLE essentials AS SELECT ID, N, V FROM the_table WHERE N=0 AND V BETWEEN 0 AND 6''')
>>> for n_v in N_V[1:]:
...     r = c.execute('''CREATE TABLE next AS SELECT * FROM essentials WHERE essentials.ID IN (SELECT ID FROM the_table WHERE N=%s AND V BETWEEN %s AND %s)''' % (n_v[0], n_v[1][0], n_v[1][1]))
...     r = c.execute('''DROP TABLE essentials''')
...     r = c.execute('''ALTER TABLE next RENAME TO essentials''')
...
```
|
Indexing the triplet (ID, N, V) instead of only the (N, V) pair made the join approach fast enough to be considered:
```
create index I on TABLE(ID, N, V)
```
and then
```
select ID from
(select ID from TABLE where N = 0 and V between ? and ?)
join (select ID from TABLE where N = 7 and V between ? and ?) using (ID)
join (select ID from TABLE where N = 8 and V between ? and ?) using (ID)
join ...
```
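To check that the new index is actually picked up, one option (a sketch, assuming a sqlite3 cursor `c` and a table named `the_table` as in the other answer) is to inspect the query plan:
```
plan = c.execute(
    "EXPLAIN QUERY PLAN SELECT ID FROM the_table WHERE N = 0 AND V BETWEEN 0 AND 0.6"
).fetchall()
for row in plan:
    print(row)   # the plan should mention an index rather than a full table scan
```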
| 7,463