| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (lengths) | 54 | 37.8k |
| date | string (lengths) | 10 | 10 |
| metadata | list (lengths) | 3 | 3 |
| response_j | string (lengths) | 17 | 26k |
| response_k | string (lengths) | 26 | 26k |
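A hedged sketch of iterating rows with this schema via the Hugging Face `datasets` library; the dataset path below is a placeholder, not the real identifier:

```
from datasets import load_dataset

# placeholder path - substitute the actual dataset identifier
ds = load_dataset("org/so-pairs-placeholder", split="train")
for row in ds.select(range(3)):
    # each row pairs a preferred answer (response_j) with a rejected one (response_k)
    print(row["qid"], row["date"], row["question"][:80])
    print("  j:", row["response_j"][:60])
    print("  k:", row["response_k"][:60])
```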
52,810,422
I am working through chapter one of the book "Malware Data Science: Attack Detection and Attribution" and using the pefile Python module to check the AddressOfEntryPoint. For the sample ircbot.exe, `pe.dump_info()` reports an AddressOfEntryPoint of 0xCC00FFEE. This value is quite large and looks wrong. [ircbot.exe's OPTIONAL header](https://i.stack.imgur.com/X5ez7.png) md5: 17fa7ec63b129f171511a9f96f90d0d6 How can I fix this AddressOfEntryPoint?
2018/10/15
[ "https://Stackoverflow.com/questions/52810422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5742815/" ]
Try this:

```
SELECT * FROM yourtable WHERE yourcolumn LIKE '%''%'
```
I hope this solves your problem. `%'%` finds any values that have a `'` in any position. Executing the query below returns the rows that contain a single quote (note that double-quoted string literals like `"%'%"` work in MySQL; standard SQL uses single quotes):

```
SELECT * FROM TABLE_NAME WHERE COLUMN_NAME LIKE "%'%"
```

I have executed the same query: [screenshot of the query result](https://i.stack.imgur.com/vQR6q.jpg)
52,810,422
I am working through chapter one of the book "Malware Data Science: Attack Detection and Attribution" and using the pefile Python module to check the AddressOfEntryPoint. For the sample ircbot.exe, `pe.dump_info()` reports an AddressOfEntryPoint of 0xCC00FFEE. This value is quite large and looks wrong. [ircbot.exe's OPTIONAL header](https://i.stack.imgur.com/X5ez7.png) md5: 17fa7ec63b129f171511a9f96f90d0d6 How can I fix this AddressOfEntryPoint?
2018/10/15
[ "https://Stackoverflow.com/questions/52810422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5742815/" ]
```
CREATE TABLE #TEST (Test_Column VARCHAR(MAX));

INSERT INTO #TEST VALUES ('10011-RIO MARE EXTRA''')

SELECT * FROM #TEST WHERE Test_Column LIKE '%''%'
```

The escape for `'` is the character itself used twice: `''`.
Try this:

```
SELECT * FROM yourtable WHERE yourcolumn LIKE '%''%'
```
52,810,422
I am working through chapter one of the book "Malware Data Science: Attack Detection and Attribution" and using the pefile Python module to check the AddressOfEntryPoint. For the sample ircbot.exe, `pe.dump_info()` reports an AddressOfEntryPoint of 0xCC00FFEE. This value is quite large and looks wrong. [ircbot.exe's OPTIONAL header](https://i.stack.imgur.com/X5ez7.png) md5: 17fa7ec63b129f171511a9f96f90d0d6 How can I fix this AddressOfEntryPoint?
2018/10/15
[ "https://Stackoverflow.com/questions/52810422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5742815/" ]
Find the records that contain single quotes in the table:

```
SELECT * FROM [tableName] WHERE Name LIKE '%''%'
```
I hope this solves your problem. `%'%` finds any values that have a `'` in any position. Executing the query below returns the rows that contain a single quote (note that double-quoted string literals like `"%'%"` work in MySQL; standard SQL uses single quotes):

```
SELECT * FROM TABLE_NAME WHERE COLUMN_NAME LIKE "%'%"
```

I have executed the same query: [screenshot of the query result](https://i.stack.imgur.com/vQR6q.jpg)
52,810,422
I am working through chapter one of the book "Malware Data Science: Attack Detection and Attribution" and using the pefile Python module to check the AddressOfEntryPoint. For the sample ircbot.exe, `pe.dump_info()` reports an AddressOfEntryPoint of 0xCC00FFEE. This value is quite large and looks wrong. [ircbot.exe's OPTIONAL header](https://i.stack.imgur.com/X5ez7.png) md5: 17fa7ec63b129f171511a9f96f90d0d6 How can I fix this AddressOfEntryPoint?
2018/10/15
[ "https://Stackoverflow.com/questions/52810422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5742815/" ]
```
CREATE TABLE #TEST (Test_Column VARCHAR(MAX));

INSERT INTO #TEST VALUES ('10011-RIO MARE EXTRA''')

SELECT * FROM #TEST WHERE Test_Column LIKE '%''%'
```

The escape for `'` is the character itself used twice: `''`.
I hope this solves your problem. `%'%` finds any values that have a `'` in any position. Executing the query below returns the rows that contain a single quote (note that double-quoted string literals like `"%'%"` work in MySQL; standard SQL uses single quotes):

```
SELECT * FROM TABLE_NAME WHERE COLUMN_NAME LIKE "%'%"
```

I have executed the same query: [screenshot of the query result](https://i.stack.imgur.com/vQR6q.jpg)
52,810,422
I am working through chapter one of the book "Malware Data Science: Attack Detection and Attribution" and using the pefile Python module to check the AddressOfEntryPoint. For the sample ircbot.exe, `pe.dump_info()` reports an AddressOfEntryPoint of 0xCC00FFEE. This value is quite large and looks wrong. [ircbot.exe's OPTIONAL header](https://i.stack.imgur.com/X5ez7.png) md5: 17fa7ec63b129f171511a9f96f90d0d6 How can I fix this AddressOfEntryPoint?
2018/10/15
[ "https://Stackoverflow.com/questions/52810422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5742815/" ]
```
CREATE TABLE #TEST (Test_Column VARCHAR(MAX));

INSERT INTO #TEST VALUES ('10011-RIO MARE EXTRA''')

SELECT * FROM #TEST WHERE Test_Column LIKE '%''%'
```

The escape for `'` is the character itself used twice: `''`.
Find the records that contain single quotes in the table:

```
SELECT * FROM [tableName] WHERE Name LIKE '%''%'
```
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
Lev, pandas has rewritten `to_csv` to make a big improvement in native speed. The process is now I/O-bound, and it accounts for many subtle dtype and quoting cases. Here are our performance results vs. 0.10.1 (in the upcoming 0.11 release). These are in `ms`; a lower ratio is better.

```
Results:
                                        t_head  t_baseline   ratio
name
frame_to_csv2 (100k rows)             190.5260   2244.4260  0.0849
write_csv_standard (10k rows)          38.1940    234.2570  0.1630
frame_to_csv_mixed (10k rows, mixed)  369.0670   1123.0412  0.3286
frame_to_csv (3k rows, wide)          112.2720    226.7549  0.4951
```

So throughput for a single dtype (e.g. floats), not too wide, is about 20M rows/min. Here is your example from above:

```
In [12]: df = pd.DataFrame({'A': np.array(np.arange(45000000), dtype='float64')})

In [13]: df['B'] = df['A'] + 1.0

In [14]: df['C'] = df['A'] + 2.0

In [15]: df['D'] = df['A'] + 2.0

In [16]: %timeit -n 1 -r 1 df.to_csv('test.csv')
1 loops, best of 1: 119 s per loop
```
Your `df2csv` function is very nice, except that it makes a lot of assumptions and doesn't work for the general case. If it works for you, that's good, but be aware that it is not a general solution. CSV fields can contain commas, so what happens if this tuple is to be written? `('a,b','c')` The Python `csv` module would quote that value so that no confusion arises, and would escape quotes if quotes are present in any of the values. Of course, generating something that works in all cases is much slower. But I suppose you only have a bunch of numbers. You could try this and see if it is faster:

```
# data is a tuple containing tuples
for row in data:
    for col in xrange(len(row)):
        f.write('%d' % row[col])
        if col < len(row) - 1:
            f.write(',')
    f.write('\n')
```

I don't know if that would be faster. If not, it's because too many system calls are being made, so you might use `StringIO` instead of direct output and then dump it to a real file every once in a while.
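A quick, self-contained illustration of that quoting behaviour with the standard `csv` module (the output is shown in the comment):

```
import csv
import io

buf = io.StringIO()
csv.writer(buf).writerow(('a,b', 'c'))
print(buf.getvalue())  # "a,b",c  -- the field containing a comma gets quoted
```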
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
Use `chunksize`. I have found that it makes a hell of a lot of difference. If you have memory in hand, use a good chunksize (number of rows) to get into memory, and then write once.
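One reading of this suggestion is the `chunksize` argument that `to_csv` itself accepts; a minimal sketch, with an illustrative (not tuned) value:

```
# let pandas buffer and write 100,000 rows at a time
df.to_csv('out.csv', chunksize=100000)
```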
Your `df2csv` function is very nice, except that it makes a lot of assumptions and doesn't work for the general case. If it works for you, that's good, but be aware that it is not a general solution. CSV fields can contain commas, so what happens if this tuple is to be written? `('a,b','c')` The Python `csv` module would quote that value so that no confusion arises, and would escape quotes if quotes are present in any of the values. Of course, generating something that works in all cases is much slower. But I suppose you only have a bunch of numbers. You could try this and see if it is faster:

```
# data is a tuple containing tuples
for row in data:
    for col in xrange(len(row)):
        f.write('%d' % row[col])
        if col < len(row) - 1:
            f.write(',')
    f.write('\n')
```

I don't know if that would be faster. If not, it's because too many system calls are being made, so you might use `StringIO` instead of direct output and then dump it to a real file every once in a while.
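A quick, self-contained illustration of that quoting behaviour with the standard `csv` module (the output is shown in the comment):

```
import csv
import io

buf = io.StringIO()
csv.writer(buf).writerow(('a,b', 'c'))
print(buf.getvalue())  # "a,b",c  -- the field containing a comma gets quoted
```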
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
In 2019, for cases like this it may be better to just use numpy. Look at the timings:

```
aa.to_csv('pandas_to_csv', index=False)
# 6.47 s

df2csv(aa, 'code_from_question', myformats=['%d', '%.1f', '%.1f', '%.1f'])
# 4.59 s

from numpy import savetxt

savetxt(
    'numpy_savetxt', aa.values, fmt='%d,%.1f,%.1f,%.1f',
    header=','.join(aa.columns), comments=''
)
# 3.5 s
```

So you can cut the time by a factor of two using numpy. This, of course, comes at the cost of reduced flexibility (compared to `aa.to_csv`). Benchmarked with Python 3.7, pandas 0.23.4, numpy 1.15.2 (`xrange` was replaced by `range` to make the posted function from the question work in Python 3).

PS. If you need to include the index, `savetxt` will work fine - just pass `df.reset_index().values` and adjust the formatting string accordingly.

2021 update: as pointed out in the comments, pandas performance has improved greatly. `savetxt` is still the fastest option, but only by a narrow margin: when benchmarked with `pandas` 1.3.0 and `numpy` 1.20.3, `aa.to_csv()` took 2.64 s while `savetxt` took 2.53 s. The code from the question (`df2csv`) took 2.98 s, making it the slowest option nowadays. Your mileage may vary - the 2021 test was performed on an SSD with a very fast CPU, while in 2019 I was using an HDD and a slower CPU.
Your `df2csv` function is very nice, except that it makes a lot of assumptions and doesn't work for the general case. If it works for you, that's good, but be aware that it is not a general solution. CSV fields can contain commas, so what happens if this tuple is to be written? `('a,b','c')` The Python `csv` module would quote that value so that no confusion arises, and would escape quotes if quotes are present in any of the values. Of course, generating something that works in all cases is much slower. But I suppose you only have a bunch of numbers. You could try this and see if it is faster:

```
# data is a tuple containing tuples
for row in data:
    for col in xrange(len(row)):
        f.write('%d' % row[col])
        if col < len(row) - 1:
            f.write(',')
    f.write('\n')
```

I don't know if that would be faster. If not, it's because too many system calls are being made, so you might use `StringIO` instead of direct output and then dump it to a real file every once in a while.
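A quick, self-contained illustration of that quoting behaviour with the standard `csv` module (the output is shown in the comment):

```
import csv
import io

buf = io.StringIO()
csv.writer(buf).writerow(('a,b', 'c'))
print(buf.getvalue())  # "a,b",c  -- the field containing a comma gets quoted
```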
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
I had the same question earlier today. Using to_csv took 1 hr 27 min for my dataframe. I found a package called pyarrow that reduced this to about 10 min. This seemed like the most straightforward solution to me. To use it:

```
# install with conda, then import
import pyarrow as pa
import pyarrow.csv as csv

# convert format - "old_pd_dataframe" is your "aa"
new_pa_dataframe = pa.Table.from_pandas(old_pd_dataframe)

# write csv
csv.write_csv(new_pa_dataframe, 'output.csv')
```
Your `df2csv` function is very nice, except that it makes a lot of assumptions and doesn't work for the general case. If it works for you, that's good, but be aware that it is not a general solution. CSV fields can contain commas, so what happens if this tuple is to be written? `('a,b','c')` The Python `csv` module would quote that value so that no confusion arises, and would escape quotes if quotes are present in any of the values. Of course, generating something that works in all cases is much slower. But I suppose you only have a bunch of numbers. You could try this and see if it is faster:

```
# data is a tuple containing tuples
for row in data:
    for col in xrange(len(row)):
        f.write('%d' % row[col])
        if col < len(row) - 1:
            f.write(',')
    f.write('\n')
```

I don't know if that would be faster. If not, it's because too many system calls are being made, so you might use `StringIO` instead of direct output and then dump it to a real file every once in a while.
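A quick, self-contained illustration of that quoting behaviour with the standard `csv` module (the output is shown in the comment):

```
import csv
import io

buf = io.StringIO()
csv.writer(buf).writerow(('a,b', 'c'))
print(buf.getvalue())  # "a,b",c  -- the field containing a comma gets quoted
```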
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
Lev, pandas has rewritten `to_csv` to make a big improvement in native speed. The process is now I/O-bound, and it accounts for many subtle dtype and quoting cases. Here are our performance results vs. 0.10.1 (in the upcoming 0.11 release). These are in `ms`; a lower ratio is better.

```
Results:
                                        t_head  t_baseline   ratio
name
frame_to_csv2 (100k rows)             190.5260   2244.4260  0.0849
write_csv_standard (10k rows)          38.1940    234.2570  0.1630
frame_to_csv_mixed (10k rows, mixed)  369.0670   1123.0412  0.3286
frame_to_csv (3k rows, wide)          112.2720    226.7549  0.4951
```

So throughput for a single dtype (e.g. floats), not too wide, is about 20M rows/min. Here is your example from above:

```
In [12]: df = pd.DataFrame({'A': np.array(np.arange(45000000), dtype='float64')})

In [13]: df['B'] = df['A'] + 1.0

In [14]: df['C'] = df['A'] + 2.0

In [15]: df['D'] = df['A'] + 2.0

In [16]: %timeit -n 1 -r 1 df.to_csv('test.csv')
1 loops, best of 1: 119 s per loop
```
Use `chunksize`. I have found that it makes a hell of a lot of difference. If you have memory in hand, use a good chunksize (number of rows) to get into memory, and then write once.
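One reading of this suggestion is the `chunksize` argument that `to_csv` itself accepts; a minimal sketch, with an illustrative (not tuned) value:

```
# let pandas buffer and write 100,000 rows at a time
df.to_csv('out.csv', chunksize=100000)
```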
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
Lev, pandas has rewritten `to_csv` to make a big improvement in native speed. The process is now I/O-bound, and it accounts for many subtle dtype and quoting cases. Here are our performance results vs. 0.10.1 (in the upcoming 0.11 release). These are in `ms`; a lower ratio is better.

```
Results:
                                        t_head  t_baseline   ratio
name
frame_to_csv2 (100k rows)             190.5260   2244.4260  0.0849
write_csv_standard (10k rows)          38.1940    234.2570  0.1630
frame_to_csv_mixed (10k rows, mixed)  369.0670   1123.0412  0.3286
frame_to_csv (3k rows, wide)          112.2720    226.7549  0.4951
```

So throughput for a single dtype (e.g. floats), not too wide, is about 20M rows/min. Here is your example from above:

```
In [12]: df = pd.DataFrame({'A': np.array(np.arange(45000000), dtype='float64')})

In [13]: df['B'] = df['A'] + 1.0

In [14]: df['C'] = df['A'] + 2.0

In [15]: df['D'] = df['A'] + 2.0

In [16]: %timeit -n 1 -r 1 df.to_csv('test.csv')
1 loops, best of 1: 119 s per loop
```
I had the same question earlier today. Using to_csv took 1 hr 27 min for my dataframe. I found a package called pyarrow that reduced this to about 10 min. This seemed like the most straightforward solution to me. To use it:

```
# install with conda, then import
import pyarrow as pa
import pyarrow.csv as csv

# convert format - "old_pd_dataframe" is your "aa"
new_pa_dataframe = pa.Table.from_pandas(old_pd_dataframe)

# write csv
csv.write_csv(new_pa_dataframe, 'output.csv')
```
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
In 2019, for cases like this it may be better to just use numpy. Look at the timings:

```
aa.to_csv('pandas_to_csv', index=False)
# 6.47 s

df2csv(aa, 'code_from_question', myformats=['%d', '%.1f', '%.1f', '%.1f'])
# 4.59 s

from numpy import savetxt

savetxt(
    'numpy_savetxt', aa.values, fmt='%d,%.1f,%.1f,%.1f',
    header=','.join(aa.columns), comments=''
)
# 3.5 s
```

So you can cut the time by a factor of two using numpy. This, of course, comes at the cost of reduced flexibility (compared to `aa.to_csv`). Benchmarked with Python 3.7, pandas 0.23.4, numpy 1.15.2 (`xrange` was replaced by `range` to make the posted function from the question work in Python 3).

PS. If you need to include the index, `savetxt` will work fine - just pass `df.reset_index().values` and adjust the formatting string accordingly.

2021 update: as pointed out in the comments, pandas performance has improved greatly. `savetxt` is still the fastest option, but only by a narrow margin: when benchmarked with `pandas` 1.3.0 and `numpy` 1.20.3, `aa.to_csv()` took 2.64 s while `savetxt` took 2.53 s. The code from the question (`df2csv`) took 2.98 s, making it the slowest option nowadays. Your mileage may vary - the 2021 test was performed on an SSD with a very fast CPU, while in 2019 I was using an HDD and a slower CPU.
Use `chunksize`. I have found that it makes a hell of a lot of difference. If you have memory in hand, use a good chunksize (number of rows) to get into memory, and then write once.
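One reading of this suggestion is the `chunksize` argument that `to_csv` itself accepts; a minimal sketch, with an illustrative (not tuned) value:

```
# let pandas buffer and write 100,000 rows at a time
df.to_csv('out.csv', chunksize=100000)
```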
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
Use `chunksize`. I have found that it makes a hell of a lot of difference. If you have memory in hand, use a good chunksize (number of rows) to get into memory, and then write once.
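One reading of this suggestion is the `chunksize` argument that `to_csv` itself accepts; a minimal sketch, with an illustrative (not tuned) value:

```
# let pandas buffer and write 100,000 rows at a time
df.to_csv('out.csv', chunksize=100000)
```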
I had the same question earlier today. Using to_csv took 1 hr 27 min for my dataframe. I found a package called pyarrow that reduced this to about 10 min. This seemed like the most straightforward solution to me. To use it:

```
# install with conda, then import
import pyarrow as pa
import pyarrow.csv as csv

# convert format - "old_pd_dataframe" is your "aa"
new_pa_dataframe = pa.Table.from_pandas(old_pd_dataframe)

# write csv
csv.write_csv(new_pa_dataframe, 'output.csv')
```
15,417,574
For Python/pandas I find that `df.to_csv(fname)` works at a speed of roughly 1 million rows per minute. I can sometimes improve performance by a factor of 7, like this:

```
def df2csv(df, fname, myformats=[], sep=','):
    """
    # function is faster than to_csv
    # 7 times faster for numbers if formats are specified,
    # 2 times faster for strings.
    # Note - be careful. It doesn't add quotes and doesn't check
    # for quotes or separators inside elements
    # We've seen output time going down from 45 min to 6 min
    # on a simple numeric 4-col dataframe with 45 million rows.
    """
    if len(df.columns) <= 0:
        return
    Nd = len(df.columns)
    Nd_1 = Nd - 1
    formats = myformats[:]  # take a copy to modify it
    Nf = len(formats)
    # make sure we have formats for all columns
    if Nf < Nd:
        for ii in range(Nf, Nd):
            coltype = df[df.columns[ii]].dtype
            ff = '%s'
            if coltype == np.int64:
                ff = '%d'
            elif coltype == np.float64:
                ff = '%f'
            formats.append(ff)
    fh = open(fname, 'w')
    fh.write(','.join(df.columns) + '\n')
    for row in df.itertuples(index=False):
        ss = ''
        for ii in xrange(Nd):
            ss += formats[ii] % row[ii]
            if ii < Nd_1:
                ss += sep
        fh.write(ss + '\n')
    fh.close()

aa = DataFrame({'A': range(1000000)})
aa['B'] = aa.A + 1.0
aa['C'] = aa.A + 2.0
aa['D'] = aa.A + 3.0

timeit -r1 -n1 aa.to_csv('junk1')                                             # 52.9 sec
timeit -r1 -n1 df2csv(aa, 'junk3', myformats=['%d', '%.1f', '%.1f', '%.1f'])  # 7.5 sec
```

Note: the performance increase depends on dtypes, but it is always true (at least in my tests) that `to_csv()` performs much more slowly than non-optimized Python. If I have a 45-million-row CSV file, then:

```
aa = read_csv(infile)   # 1.5 min
aa.to_csv(outfile)      # 45 min
df2csv(aa, ...)         # ~6 min
```

Questions:

- What are the ways to make the output even faster?
- What's wrong with `to_csv()`? Why is it so slow?

Note: my tests were done using pandas 0.9.1 on a local drive on a Linux server.
2013/03/14
[ "https://Stackoverflow.com/questions/15417574", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1442475/" ]
In 2019, for cases like this it may be better to just use numpy. Look at the timings:

```
aa.to_csv('pandas_to_csv', index=False)
# 6.47 s

df2csv(aa, 'code_from_question', myformats=['%d', '%.1f', '%.1f', '%.1f'])
# 4.59 s

from numpy import savetxt

savetxt(
    'numpy_savetxt', aa.values, fmt='%d,%.1f,%.1f,%.1f',
    header=','.join(aa.columns), comments=''
)
# 3.5 s
```

So you can cut the time by a factor of two using numpy. This, of course, comes at the cost of reduced flexibility (compared to `aa.to_csv`). Benchmarked with Python 3.7, pandas 0.23.4, numpy 1.15.2 (`xrange` was replaced by `range` to make the posted function from the question work in Python 3).

PS. If you need to include the index, `savetxt` will work fine - just pass `df.reset_index().values` and adjust the formatting string accordingly.

2021 update: as pointed out in the comments, pandas performance has improved greatly. `savetxt` is still the fastest option, but only by a narrow margin: when benchmarked with `pandas` 1.3.0 and `numpy` 1.20.3, `aa.to_csv()` took 2.64 s while `savetxt` took 2.53 s. The code from the question (`df2csv`) took 2.98 s, making it the slowest option nowadays. Your mileage may vary - the 2021 test was performed on an SSD with a very fast CPU, while in 2019 I was using an HDD and a slower CPU.
I had the same question earlier today. Using to_csv took 1 hr 27 min for my dataframe. I found a package called pyarrow that reduced this to about 10 min. This seemed like the most straightforward solution to me. To use it:

```
# install with conda, then import
import pyarrow as pa
import pyarrow.csv as csv

# convert format - "old_pd_dataframe" is your "aa"
new_pa_dataframe = pa.Table.from_pandas(old_pd_dataframe)

# write csv
csv.write_csv(new_pa_dataframe, 'output.csv')
```
34,336,040
I am trying to extract the ranking text number from this link [link example: kaggle user ranking no1](https://www.kaggle.com/titericz). It is clearer in an image: [![ranking shown on the profile page](https://i.stack.imgur.com/sClUu.png)](https://i.stack.imgur.com/sClUu.png) I am using the following code:

```
def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    soup = BeautifulSoup(plainText)
    for item_name in soup.findAll('h4', {'data-bind': "text: rankingText"}):
        print(item_name.string)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

The result is `None`. The problem is that `soup.findAll('h4',{'data-bind':"text: rankingText"})` outputs `[<h4 data-bind="text: rankingText"></h4>]`, but when inspecting the page in the browser the element looks like `<h4 data-bind="text: rankingText">1st</h4>`. It can be seen in this image: [![the populated h4 in the browser inspector](https://i.stack.imgur.com/8i76M.png)](https://i.stack.imgur.com/8i76M.png) It's clear that the text is missing. How can I get around that?

Edit: Printing the `soup` variable in the terminal, I can see that this value exists: [![the value present in the page source](https://i.stack.imgur.com/BFyuz.png)](https://i.stack.imgur.com/BFyuz.png) So there should be a way to access it through `soup`.

Edit 2: I tried unsuccessfully to use the most voted answer from this [stackoverflow question](https://stackoverflow.com/questions/24118337/fetch-data-of-variables-inside-script-tag-in-python-or-content-added-from-js). A solution could be around there.
2015/12/17
[ "https://Stackoverflow.com/questions/34336040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4157666/" ]
The data is data-bound using javascript, as the "data-bind" attribute suggests. However, if you download the page with e.g. `wget`, you'll see that the rankingText value is actually there inside a script element on initial load:

```
<script type="text/javascript"
  profile: {
    ...
    "ranking": 96,
    "rankingText": "96th",
    "highestRanking": 3,
    "highestRankingText": "3rd",
    ...
```

So you could use that instead.
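A hedged sketch of pulling that value straight out of the downloaded page, keying off the `"rankingText"` entry shown in the script excerpt above:

```
import re

import requests

html = requests.get('https://www.kaggle.com/titericz').text
# grab the quoted value that follows "rankingText": in the embedded script
match = re.search(r'"rankingText":\s*"([^"]+)"', html)
if match:
    print(match.group(1))  # e.g. '96th'
```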
This could be because of dynamic data filling. Some javascript code fills this tag after the page loads, so if you fetch the html using requests it is not filled yet:

```
<h4 data-bind="text: rankingText"></h4>
```

Please take a look at [Selenium WebDriver](http://www.seleniumhq.org/projects/webdriver/). Using this driver you can fetch the complete page and run the js as normal.
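A minimal sketch of that Selenium approach, assuming a local Chrome/chromedriver setup (the CSS selector mirrors the `h4` from the question):

```
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.kaggle.com/titericz')
# once the page's javascript has run, the h4 is populated
element = driver.find_element(By.CSS_SELECTOR, 'h4[data-bind="text: rankingText"]')
print(element.text)
driver.quit()
```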
34,336,040
I am trying to extract the ranking text number from this link [link example: kaggle user ranking no1](https://www.kaggle.com/titericz). It is clearer in an image: [![ranking shown on the profile page](https://i.stack.imgur.com/sClUu.png)](https://i.stack.imgur.com/sClUu.png) I am using the following code:

```
def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    soup = BeautifulSoup(plainText)
    for item_name in soup.findAll('h4', {'data-bind': "text: rankingText"}):
        print(item_name.string)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

The result is `None`. The problem is that `soup.findAll('h4',{'data-bind':"text: rankingText"})` outputs `[<h4 data-bind="text: rankingText"></h4>]`, but when inspecting the page in the browser the element looks like `<h4 data-bind="text: rankingText">1st</h4>`. It can be seen in this image: [![the populated h4 in the browser inspector](https://i.stack.imgur.com/8i76M.png)](https://i.stack.imgur.com/8i76M.png) It's clear that the text is missing. How can I get around that?

Edit: Printing the `soup` variable in the terminal, I can see that this value exists: [![the value present in the page source](https://i.stack.imgur.com/BFyuz.png)](https://i.stack.imgur.com/BFyuz.png) So there should be a way to access it through `soup`.

Edit 2: I tried unsuccessfully to use the most voted answer from this [stackoverflow question](https://stackoverflow.com/questions/24118337/fetch-data-of-variables-inside-script-tag-in-python-or-content-added-from-js). A solution could be around there.
2015/12/17
[ "https://Stackoverflow.com/questions/34336040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4157666/" ]
If you aren't going to try browser automation through `selenium` as @Ali suggested, you would have to *parse the javascript containing the desired information*. You can do this in different ways. Here is working code that locates the `script` by a [regular expression pattern](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-regular-expression), then extracts the `profile` object, loads it with [`json`](https://docs.python.org/2/library/json.html) into a Python dictionary, and prints out the desired ranking:

```
import re
import json

from bs4 import BeautifulSoup
import requests

response = requests.get("https://www.kaggle.com/titericz")
soup = BeautifulSoup(response.content, "html.parser")

pattern = re.compile(r"profile: ({.*}),", re.MULTILINE | re.DOTALL)
script = soup.find("script", text=pattern)

profile_text = pattern.search(script.text).group(1)
profile = json.loads(profile_text)

print profile["ranking"], profile["rankingText"]
```

Prints:

```
1 1st
```
This could be because of dynamic data filling. Some javascript code fills this tag after the page loads, so if you fetch the html using requests it is not filled yet:

```
<h4 data-bind="text: rankingText"></h4>
```

Please take a look at [Selenium WebDriver](http://www.seleniumhq.org/projects/webdriver/). Using this driver you can fetch the complete page and run the js as normal.
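A minimal sketch of that Selenium approach, assuming a local Chrome/chromedriver setup (the CSS selector mirrors the `h4` from the question):

```
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.kaggle.com/titericz')
# once the page's javascript has run, the h4 is populated
element = driver.find_element(By.CSS_SELECTOR, 'h4[data-bind="text: rankingText"]')
print(element.text)
driver.quit()
```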
34,336,040
I am trying to extract the ranking text number from this link [link example: kaggle user ranking no1](https://www.kaggle.com/titericz). It is clearer in an image: [![ranking shown on the profile page](https://i.stack.imgur.com/sClUu.png)](https://i.stack.imgur.com/sClUu.png) I am using the following code:

```
def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    soup = BeautifulSoup(plainText)
    for item_name in soup.findAll('h4', {'data-bind': "text: rankingText"}):
        print(item_name.string)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

The result is `None`. The problem is that `soup.findAll('h4',{'data-bind':"text: rankingText"})` outputs `[<h4 data-bind="text: rankingText"></h4>]`, but when inspecting the page in the browser the element looks like `<h4 data-bind="text: rankingText">1st</h4>`. It can be seen in this image: [![the populated h4 in the browser inspector](https://i.stack.imgur.com/8i76M.png)](https://i.stack.imgur.com/8i76M.png) It's clear that the text is missing. How can I get around that?

Edit: Printing the `soup` variable in the terminal, I can see that this value exists: [![the value present in the page source](https://i.stack.imgur.com/BFyuz.png)](https://i.stack.imgur.com/BFyuz.png) So there should be a way to access it through `soup`.

Edit 2: I tried unsuccessfully to use the most voted answer from this [stackoverflow question](https://stackoverflow.com/questions/24118337/fetch-data-of-variables-inside-script-tag-in-python-or-content-added-from-js). A solution could be around there.
2015/12/17
[ "https://Stackoverflow.com/questions/34336040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4157666/" ]
I have solved your problem using a regex on the plain text:

```
import re

import requests

def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    # soup = BeautifulSoup(plainText, "html.parser")
    pattern = re.compile("ranking\": [0-9]+")
    name = pattern.search(plainText)
    ranking = name.group().split()[1]
    print(ranking)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

This returns only the rank number, but I think it will help you, since from what I see rankingText just adds 'st', 'th', etc. to the right of the number.
This could be because of dynamic data filling. Some javascript code fills this tag after the page loads, so if you fetch the html using requests it is not filled yet:

```
<h4 data-bind="text: rankingText"></h4>
```

Please take a look at [Selenium WebDriver](http://www.seleniumhq.org/projects/webdriver/). Using this driver you can fetch the complete page and run the js as normal.
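A minimal sketch of that Selenium approach, assuming a local Chrome/chromedriver setup (the CSS selector mirrors the `h4` from the question):

```
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.kaggle.com/titericz')
# once the page's javascript has run, the h4 is populated
element = driver.find_element(By.CSS_SELECTOR, 'h4[data-bind="text: rankingText"]')
print(element.text)
driver.quit()
```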
34,336,040
I am trying to extract the ranking text number from this link [link example: kaggle user ranking no1](https://www.kaggle.com/titericz). It is clearer in an image: [![ranking shown on the profile page](https://i.stack.imgur.com/sClUu.png)](https://i.stack.imgur.com/sClUu.png) I am using the following code:

```
def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    soup = BeautifulSoup(plainText)
    for item_name in soup.findAll('h4', {'data-bind': "text: rankingText"}):
        print(item_name.string)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

The result is `None`. The problem is that `soup.findAll('h4',{'data-bind':"text: rankingText"})` outputs `[<h4 data-bind="text: rankingText"></h4>]`, but when inspecting the page in the browser the element looks like `<h4 data-bind="text: rankingText">1st</h4>`. It can be seen in this image: [![the populated h4 in the browser inspector](https://i.stack.imgur.com/8i76M.png)](https://i.stack.imgur.com/8i76M.png) It's clear that the text is missing. How can I get around that?

Edit: Printing the `soup` variable in the terminal, I can see that this value exists: [![the value present in the page source](https://i.stack.imgur.com/BFyuz.png)](https://i.stack.imgur.com/BFyuz.png) So there should be a way to access it through `soup`.

Edit 2: I tried unsuccessfully to use the most voted answer from this [stackoverflow question](https://stackoverflow.com/questions/24118337/fetch-data-of-variables-inside-script-tag-in-python-or-content-added-from-js). A solution could be around there.
2015/12/17
[ "https://Stackoverflow.com/questions/34336040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4157666/" ]
If you aren't going to try browser automation through `selenium` as @Ali suggested, you would have to *parse the javascript containing the desired information*. You can do this in different ways. Here is working code that locates the `script` by a [regular expression pattern](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-regular-expression), then extracts the `profile` object, loads it with [`json`](https://docs.python.org/2/library/json.html) into a Python dictionary, and prints out the desired ranking:

```
import re
import json

from bs4 import BeautifulSoup
import requests

response = requests.get("https://www.kaggle.com/titericz")
soup = BeautifulSoup(response.content, "html.parser")

pattern = re.compile(r"profile: ({.*}),", re.MULTILINE | re.DOTALL)
script = soup.find("script", text=pattern)

profile_text = pattern.search(script.text).group(1)
profile = json.loads(profile_text)

print profile["ranking"], profile["rankingText"]
```

Prints:

```
1 1st
```
The data is data-bound using javascript, as the "data-bind" attribute suggests. However, if you download the page with e.g. `wget`, you'll see that the rankingText value is actually there inside a script element on initial load:

```
<script type="text/javascript"
  profile: {
    ...
    "ranking": 96,
    "rankingText": "96th",
    "highestRanking": 3,
    "highestRankingText": "3rd",
    ...
```

So you could use that instead.
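A hedged sketch of pulling that value straight out of the downloaded page, keying off the `"rankingText"` entry shown in the script excerpt above:

```
import re

import requests

html = requests.get('https://www.kaggle.com/titericz').text
# grab the quoted value that follows "rankingText": in the embedded script
match = re.search(r'"rankingText":\s*"([^"]+)"', html)
if match:
    print(match.group(1))  # e.g. '96th'
```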
34,336,040
I am trying to extract the ranking text number from this link [link example: kaggle user ranking no1](https://www.kaggle.com/titericz). It is clearer in an image: [![ranking shown on the profile page](https://i.stack.imgur.com/sClUu.png)](https://i.stack.imgur.com/sClUu.png) I am using the following code:

```
def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    soup = BeautifulSoup(plainText)
    for item_name in soup.findAll('h4', {'data-bind': "text: rankingText"}):
        print(item_name.string)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

The result is `None`. The problem is that `soup.findAll('h4',{'data-bind':"text: rankingText"})` outputs `[<h4 data-bind="text: rankingText"></h4>]`, but when inspecting the page in the browser the element looks like `<h4 data-bind="text: rankingText">1st</h4>`. It can be seen in this image: [![the populated h4 in the browser inspector](https://i.stack.imgur.com/8i76M.png)](https://i.stack.imgur.com/8i76M.png) It's clear that the text is missing. How can I get around that?

Edit: Printing the `soup` variable in the terminal, I can see that this value exists: [![the value present in the page source](https://i.stack.imgur.com/BFyuz.png)](https://i.stack.imgur.com/BFyuz.png) So there should be a way to access it through `soup`.

Edit 2: I tried unsuccessfully to use the most voted answer from this [stackoverflow question](https://stackoverflow.com/questions/24118337/fetch-data-of-variables-inside-script-tag-in-python-or-content-added-from-js). A solution could be around there.
2015/12/17
[ "https://Stackoverflow.com/questions/34336040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4157666/" ]
The data is data-bound using javascript, as the "data-bind" attribute suggests. However, if you download the page with e.g. `wget`, you'll see that the rankingText value is actually there inside a script element on initial load:

```
<script type="text/javascript"
  profile: {
    ...
    "ranking": 96,
    "rankingText": "96th",
    "highestRanking": 3,
    "highestRankingText": "3rd",
    ...
```

So you could use that instead.
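A hedged sketch of pulling that value straight out of the downloaded page, keying off the `"rankingText"` entry shown in the script excerpt above:

```
import re

import requests

html = requests.get('https://www.kaggle.com/titericz').text
# grab the quoted value that follows "rankingText": in the embedded script
match = re.search(r'"rankingText":\s*"([^"]+)"', html)
if match:
    print(match.group(1))  # e.g. '96th'
```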
I have solved your problem using a regex on the plain text:

```
import re

import requests

def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    # soup = BeautifulSoup(plainText, "html.parser")
    pattern = re.compile("ranking\": [0-9]+")
    name = pattern.search(plainText)
    ranking = name.group().split()[1]
    print(ranking)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

This returns only the rank number, but I think it will help you, since from what I see rankingText just adds 'st', 'th', etc. to the right of the number.
34,336,040
I am trying to extract the ranking text number from this link [link example: kaggle user ranking no1](https://www.kaggle.com/titericz). It is clearer in an image: [![ranking shown on the profile page](https://i.stack.imgur.com/sClUu.png)](https://i.stack.imgur.com/sClUu.png) I am using the following code:

```
def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    soup = BeautifulSoup(plainText)
    for item_name in soup.findAll('h4', {'data-bind': "text: rankingText"}):
        print(item_name.string)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

The result is `None`. The problem is that `soup.findAll('h4',{'data-bind':"text: rankingText"})` outputs `[<h4 data-bind="text: rankingText"></h4>]`, but when inspecting the page in the browser the element looks like `<h4 data-bind="text: rankingText">1st</h4>`. It can be seen in this image: [![the populated h4 in the browser inspector](https://i.stack.imgur.com/8i76M.png)](https://i.stack.imgur.com/8i76M.png) It's clear that the text is missing. How can I get around that?

Edit: Printing the `soup` variable in the terminal, I can see that this value exists: [![the value present in the page source](https://i.stack.imgur.com/BFyuz.png)](https://i.stack.imgur.com/BFyuz.png) So there should be a way to access it through `soup`.

Edit 2: I tried unsuccessfully to use the most voted answer from this [stackoverflow question](https://stackoverflow.com/questions/24118337/fetch-data-of-variables-inside-script-tag-in-python-or-content-added-from-js). A solution could be around there.
2015/12/17
[ "https://Stackoverflow.com/questions/34336040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4157666/" ]
If you aren't going to try browser automation through `selenium` as @Ali suggested, you would have to *parse the javascript containing the desired information*. You can do this in different ways. Here is working code that locates the `script` by a [regular expression pattern](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-regular-expression), then extracts the `profile` object, loads it with [`json`](https://docs.python.org/2/library/json.html) into a Python dictionary, and prints out the desired ranking:

```
import re
import json

from bs4 import BeautifulSoup
import requests

response = requests.get("https://www.kaggle.com/titericz")
soup = BeautifulSoup(response.content, "html.parser")

pattern = re.compile(r"profile: ({.*}),", re.MULTILINE | re.DOTALL)
script = soup.find("script", text=pattern)

profile_text = pattern.search(script.text).group(1)
profile = json.loads(profile_text)

print profile["ranking"], profile["rankingText"]
```

Prints:

```
1 1st
```
I have solved your problem using a regex on the plain text:

```
import re

import requests

def get_single_item_data(item_url):
    sourceCode = requests.get(item_url)
    plainText = sourceCode.text
    # soup = BeautifulSoup(plainText, "html.parser")
    pattern = re.compile("ranking\": [0-9]+")
    name = pattern.search(plainText)
    ranking = name.group().split()[1]
    print(ranking)

item_url = 'https://www.kaggle.com/titericz'
get_single_item_data(item_url)
```

This returns only the rank number, but I think it will help you, since from what I see rankingText just adds 'st', 'th', etc. to the right of the number.
46,465,389
I have a Python string like this:

```
input_str = "2548,0.8987,0.8987,0.1548"
```

I want to remove the sub-string at the end, after the last comma, including the comma itself. The output string should look like this:

```
output_str = "2548,0.8987,0.8987"
```

I am using Python v3.6.
2017/09/28
[ "https://Stackoverflow.com/questions/46465389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3848207/" ]
With `split` and `join`
=======================

```
','.join(input_str.split(',')[:-1])
```

### Explanation

```
# Split the string on the commas
>>> input_str.split(',')
['2548', '0.8987', '0.8987', '0.1548']

# Take all but the last part
>>> input_str.split(',')[:-1]
['2548', '0.8987', '0.8987']

# Join the parts with commas
>>> ','.join(input_str.split(',')[:-1])
'2548,0.8987,0.8987'
```

---

With `rsplit`
=============

```
input_str.rsplit(',', maxsplit=1)[0]
```

---

With `re`
=========

```
re.sub(r',[^,]*$', '', input_str)
```

If you are going to use it multiple times, make sure to compile the regex:

```
LAST_ELEMENT_REGEX = re.compile(r',[^,]*$')
LAST_ELEMENT_REGEX.sub('', input_str)
```
There's [the split function](https://www.tutorialspoint.com/python/string_split.htm) for Python:

```
print(input_str.split(','))
```

will return:

```
['2548', '0.8987', '0.8987', '0.1548']
```

But to split only once, at the last comma, [rsplit is there for that](https://docs.python.org/2/library/stdtypes.html#str.rsplit):

```
s = '123,456,789'
print(s.rsplit(',', 1))
```

will return:

```
['123,456', '789']
```
46,465,389
I have a Python string like this:

```
input_str = "2548,0.8987,0.8987,0.1548"
```

I want to remove the sub-string at the end, after the last comma, including the comma itself. The output string should look like this:

```
output_str = "2548,0.8987,0.8987"
```

I am using Python v3.6.
2017/09/28
[ "https://Stackoverflow.com/questions/46465389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3848207/" ]
With `split` and `join`
=======================

```
','.join(input_str.split(',')[:-1])
```

### Explanation

```
# Split the string on the commas
>>> input_str.split(',')
['2548', '0.8987', '0.8987', '0.1548']

# Take all but the last part
>>> input_str.split(',')[:-1]
['2548', '0.8987', '0.8987']

# Join the parts with commas
>>> ','.join(input_str.split(',')[:-1])
'2548,0.8987,0.8987'
```

---

With `rsplit`
=============

```
input_str.rsplit(',', maxsplit=1)[0]
```

---

With `re`
=========

```
re.sub(r',[^,]*$', '', input_str)
```

If you are going to use it multiple times, make sure to compile the regex:

```
LAST_ELEMENT_REGEX = re.compile(r',[^,]*$')
LAST_ELEMENT_REGEX.sub('', input_str)
```
You can try this simple one. Here we are using `split`, `pop`, and `join` to achieve the desired result. [**Try this code snippet here**](https://eval.in/870194)

```
input_str = "2548,0.8987,0.8987,0.1548"
parts = input_str.split(",")   # split the string on ','
parts.pop()                    # drop the last element
print(",".join(parts))         # join the list back together over ','
```
46,465,389
I have a Python string like this:

```
input_str = "2548,0.8987,0.8987,0.1548"
```

I want to remove the sub-string at the end, after the last comma, including the comma itself. The output string should look like this:

```
output_str = "2548,0.8987,0.8987"
```

I am using Python v3.6.
2017/09/28
[ "https://Stackoverflow.com/questions/46465389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3848207/" ]
With `split` and `join`
=======================

```
','.join(input_str.split(',')[:-1])
```

### Explanation

```
# Split the string on the commas
>>> input_str.split(',')
['2548', '0.8987', '0.8987', '0.1548']

# Take all but the last part
>>> input_str.split(',')[:-1]
['2548', '0.8987', '0.8987']

# Join the parts with commas
>>> ','.join(input_str.split(',')[:-1])
'2548,0.8987,0.8987'
```

---

With `rsplit`
=============

```
input_str.rsplit(',', maxsplit=1)[0]
```

---

With `re`
=========

```
re.sub(r',[^,]*$', '', input_str)
```

If you are going to use it multiple times, make sure to compile the regex:

```
LAST_ELEMENT_REGEX = re.compile(r',[^,]*$')
LAST_ELEMENT_REGEX.sub('', input_str)
```
Assuming that there is definitely a comma in your string: ``` output_str = input_str[:input_str.rindex(',')] ``` That is "Take everything from the start of the string up to the last index of a comma".
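If the comma might be missing, a small sketch using `rfind` (which returns `-1` instead of raising `ValueError`) keeps the same idea but degrades gracefully; the sample string is just the one from the question:

```
input_str = "2548,0.8987,0.8987,0.1548"

pos = input_str.rfind(',')                       # -1 when no comma is present
output_str = input_str[:pos] if pos != -1 else input_str
print(output_str)                                # 2548,0.8987,0.8987
```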
46,465,389
I have a Python string like this: ``` input_str = "2548,0.8987,0.8987,0.1548" ``` I want to remove the sub-string at the end after the last comma, including the comma itself. The output string should look like this: ``` output_str = "2548,0.8987,0.8987" ``` I am using Python v3.6
2017/09/28
[ "https://Stackoverflow.com/questions/46465389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3848207/" ]
With `split` and `join` ======================= ``` ','.join(input_str.split(',')[:-1]) ``` ### Explanation ``` # Split string by the commas >>> input_str.split(',') ['2548', '0.8987', '0.8987', '0.1548'] # Take all but last part >>> input_str.split(',')[:-1] ['2548', '0.8987', '0.8987'] # Join the parts with commas >>> ','.join(input_str.split(',')[:-1]) '2548,0.8987,0.8987' ``` --- With `rsplit` ============= ``` input_str.rsplit(',', maxsplit=1)[0] ``` --- With `re` ========= ``` re.sub(r',[^,]*$', '', input_str) ``` If you are going to use it multiple times, make sure to compile the regex: ``` LAST_ELEMENT_REGEX = re.compile(r',[^,]*$') LAST_ELEMENT_REGEX.sub('', input_str) ```
Here you go

```
sep = ','
parts = input_str.split(sep)    # all the comma-separated pieces
count = input_str.count(sep)    # number of separators, i.e. len(parts) - 1
i = 0
output = ''
while i < count:
    output += parts[i]          # keep every piece except the last
    i += 1
    if i < count:
        output += sep           # re-insert the separators between kept pieces
input_str = output
```
46,465,389
I have a Python string like this: ``` input_str = "2548,0.8987,0.8987,0.1548" ``` I want to remove the sub-string at the end after the last comma, including the comma itself. The output string should look like this: ``` output_str = "2548,0.8987,0.8987" ``` I am using Python v3.6
2017/09/28
[ "https://Stackoverflow.com/questions/46465389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3848207/" ]
There's [the split function](https://www.tutorialspoint.com/python/string_split.htm) for Python: ``` print(input_str.split(',')) ``` Will return: ``` ['2548', '0.8987', '0.8987', '0.1548'] ``` But since you only want to drop the part after the *last* comma, [rsplit is here for that](https://docs.python.org/3/library/stdtypes.html#str.rsplit): ``` s = '123,456,789' print(s.rsplit(',', 1)) ``` Will return: ``` ['123,456', '789'] ``` so `s.rsplit(',', 1)[0]` gives the string you want.
Here you go

```
sep = ','
parts = input_str.split(sep)    # all the comma-separated pieces
count = input_str.count(sep)    # number of separators, i.e. len(parts) - 1
i = 0
output = ''
while i < count:
    output += parts[i]          # keep every piece except the last
    i += 1
    if i < count:
        output += sep           # re-insert the separators between kept pieces
input_str = output
```
46,465,389
I have a Python string like this: ``` input_str = "2548,0.8987,0.8987,0.1548" ``` I want to remove the sub-string at the end after the last comma, including the comma itself. The output string should look like this: ``` output_str = "2548,0.8987,0.8987" ``` I am using Python v3.6
2017/09/28
[ "https://Stackoverflow.com/questions/46465389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3848207/" ]
You can try this simple approach using `split`, `pop` and `join` to achieve the desired result. [**Try this code snippet here**](https://eval.in/870194) ``` input_str = "2548,0.8987,0.8987,0.1548" parts = input_str.split(",")  # split the string on commas parts.pop()  # drop the last element print(",".join(parts))  # join the remaining parts with commas ```
Here you go

```
sep = ','
parts = input_str.split(sep)    # all the comma-separated pieces
count = input_str.count(sep)    # number of separators, i.e. len(parts) - 1
i = 0
output = ''
while i < count:
    output += parts[i]          # keep every piece except the last
    i += 1
    if i < count:
        output += sep           # re-insert the separators between kept pieces
input_str = output
```
46,465,389
I have a Python string like this: ``` input_str = "2548,0.8987,0.8987,0.1548" ``` I want to remove the sub-string at the end after the last comma, including the comma itself. The output string should look like this: ``` output_str = "2548,0.8987,0.8987" ``` I am using Python v3.6
2017/09/28
[ "https://Stackoverflow.com/questions/46465389", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3848207/" ]
Assuming that there is definitely a comma in your string: ``` output_str = input_str[:input_str.rindex(',')] ``` That is "Take everything from the start of the string up to the last index of a comma".
Here you go

```
sep = ','
parts = input_str.split(sep)    # all the comma-separated pieces
count = input_str.count(sep)    # number of separators, i.e. len(parts) - 1
i = 0
output = ''
while i < count:
    output += parts[i]          # keep every piece except the last
    i += 1
    if i < count:
        output += sep           # re-insert the separators between kept pieces
input_str = output
```
71,310,217
My goal is to use gpu(GE FORCE GTX 850M). I have tried according to this guide(<https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/>). Tensoflow 2.8 is installed and keras also. But when I execute the test code, the output is as below. It must be that the code does not or cannot use gpu. What went wrong? ``` (my_tf) PS C:\WINDOWS\system32> python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))" 2022-03-01 23:30:30.626829: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. ``` ※ Below is all the installed packages(Name/Version/Build/Channel) at my C:\Anaconda3 ``` _ipyw_jlab_nb_ext_conf 0.1.0 py38_0 _py-xgboost-mutex 2.0 cpu_0 absl-py 0.15.0 pyhd3eb1b0_0 anaconda-client 1.9.0 py38haa95532_0 anaconda-navigator 2.1.2 py38haa95532_0 anyio 3.5.0 py38haa95532_0 argon2-cffi 21.3.0 pyhd3eb1b0_0 argon2-cffi-bindings 21.2.0 py38h2bbff1b_0 astunparse 1.6.3 pypi_0 pypi attrs 21.4.0 pyhd3eb1b0_0 babel 2.9.1 pyhd3eb1b0_0 backcall 0.2.0 pyhd3eb1b0_0 backports 1.1 pyhd3eb1b0_0 backports.functools_lru_cache 1.6.4 pyhd3eb1b0_0 backports.tempfile 1.0 pyhd3eb1b0_1 backports.weakref 1.0.post1 py_1 beautifulsoup4 4.10.0 pyh06a4308_0 blas 1.0 mkl bleach 4.1.0 pyhd3eb1b0_0 bottleneck 1.3.2 py38h2a96729_1 brotli 1.0.9 ha925a31_2 brotlipy 0.7.0 py38h2bbff1b_1003 bs4 0.0.1 pypi_0 pypi bzip2 1.0.8 he774522_0 ca-certificates 2021.10.26 haa95532_4 cachetools 5.0.0 pypi_0 pypi certifi 2021.10.8 py38haa95532_2 cffi 1.15.0 py38h2bbff1b_1 chardet 4.0.0 py38haa95532_1003 charset-normalizer 2.0.4 pyhd3eb1b0_0 chime 0.6.0 pypi_0 pypi click 8.0.3 pyhd3eb1b0_0 clyent 1.2.2 py38_1 colorama 0.4.4 pyhd3eb1b0_0 conda 4.11.0 py38haa95532_0 conda-build 3.21.7 py38haa95532_1 conda-content-trust 0.1.1 pyhd3eb1b0_0 conda-env 2.6.0 haa95532_1 conda-package-handling 1.7.3 py38h8cc25b3_1 conda-repo-cli 1.0.4 pyhd3eb1b0_0 conda-token 0.3.0 pyhd3eb1b0_0 conda-verify 3.4.2 py_1 console_shortcut 0.1.1 4 cryptography 3.4.8 py38h71e12ea_0 cupy 7.0.0 pypi_0 pypi cycler 0.11.0 pyhd3eb1b0_0 datetime 4.3 pypi_0 pypi debugpy 1.5.1 py38hd77b12b_0 decorator 5.1.1 pyhd3eb1b0_0 defusedxml 0.7.1 pyhd3eb1b0_0 descartes 1.1.0 pyhd3eb1b0_4 entrypoints 0.3 py38_0 fastrlock 0.8 pypi_0 pypi filelock 3.4.2 pyhd3eb1b0_0 flatbuffers 2.0 pypi_0 pypi fonttools 4.25.0 pyhd3eb1b0_0 freetype 2.10.4 hd328e21_0 future 0.18.2 py38_1 gast 0.5.3 pyhd3eb1b0_0 glob2 0.7 pyhd3eb1b0_0 google-auth 2.6.0 pypi_0 pypi google-auth-oauthlib 0.4.6 pypi_0 pypi google-pasta 0.2.0 pypi_0 pypi graphviz 2.38 hfd603c8_2 grpcio 1.44.0 pypi_0 pypi icc_rt 2019.0.0 h0cc432a_1 icu 58.2 ha925a31_3 idna 3.3 pyhd3eb1b0_0 imap-tools 0.34.0 pypi_0 pypi importlib-metadata 4.11.1 pypi_0 pypi importlib_metadata 4.8.2 hd3eb1b0_0 intel-openmp 2021.4.0 haa95532_3556 ipykernel 6.4.1 py38haa95532_1 ipython 7.31.1 py38haa95532_0 ipython_genutils 0.2.0 pyhd3eb1b0_1 ipywidgets 7.6.5 pyhd3eb1b0_1 jedi 0.18.1 py38haa95532_1 jinja2 2.11.3 pyhd3eb1b0_0 joblib 1.1.0 pyhd3eb1b0_0 jpeg 9d h2bbff1b_0 json5 0.9.6 pyhd3eb1b0_0 jsonschema 3.2.0 pyhd3eb1b0_2 jupyter_client 7.1.2 pyhd3eb1b0_0 jupyter_core 4.9.1 py38haa95532_0 jupyter_server 1.13.5 pyhd3eb1b0_0 jupyterlab 3.2.9 pyhd3eb1b0_0 jupyterlab_pygments 0.1.2 py_0 jupyterlab_server 2.10.3 pyhd3eb1b0_1 jupyterlab_widgets 1.0.0 pyhd3eb1b0_1 keras 2.8.0 pypi_0 pypi 
keras-preprocessing 1.1.2 pypi_0 pypi kiwisolver 1.3.2 py38hd77b12b_0 libarchive 3.4.2 h5e25573_0 libclang 13.0.0 pypi_0 pypi libiconv 1.15 h1df5818_7 liblief 0.10.1 ha925a31_0 libpng 1.6.37 h2a8f88b_0 libtiff 4.2.0 hd0e1b90_0 libwebp 1.2.2 h2bbff1b_0 libxgboost 1.5.0 hd77b12b_1 libxml2 2.9.12 h0ad7f3c_0 lz4-c 1.9.3 h2bbff1b_1 markdown 3.3.6 pypi_0 pypi markupsafe 2.0.1 py38h2bbff1b_0 matplotlib 3.5.1 py38haa95532_0 matplotlib-base 3.5.1 py38hd77b12b_0 matplotlib-inline 0.1.2 pyhd3eb1b0_2 menuinst 1.4.18 py38h59b6b97_0 mistune 0.8.4 py38he774522_1000 mizani 0.7.3 pyhd8ed1ab_0 conda-forge mkl 2021.4.0 haa95532_640 mkl-service 2.4.0 py38h2bbff1b_0 mkl_fft 1.3.1 py38h277e83a_0 mkl_random 1.2.2 py38hf11a4ad_0 mouseinfo 0.1.3 pypi_0 pypi multitasking 0.0.10 pypi_0 pypi munkres 1.1.4 py_0 navigator-updater 0.2.1 py38_1 nbclassic 0.3.5 pyhd3eb1b0_0 nbclient 0.5.11 pyhd3eb1b0_0 nbconvert 6.1.0 py38haa95532_0 nbformat 5.1.3 pyhd3eb1b0_0 nest-asyncio 1.5.1 pyhd3eb1b0_0 notebook 6.4.8 py38haa95532_0 numexpr 2.8.1 py38hb80d3ca_0 numpy 1.22.2 pypi_0 pypi numpy-base 1.21.5 py38hc2deb75_0 oauthlib 3.2.0 pypi_0 pypi olefile 0.46 pyhd3eb1b0_0 openssl 1.1.1m h2bbff1b_0 opt-einsum 3.3.0 pypi_0 pypi packaging 21.3 pyhd3eb1b0_0 palettable 3.3.0 pyhd3eb1b0_0 pandas 1.4.1 py38hd77b12b_0 pandas-datareader 0.10.0 pypi_0 pypi pandocfilters 1.5.0 pyhd3eb1b0_0 parso 0.8.3 pyhd3eb1b0_0 patsy 0.5.2 py38haa95532_1 pickleshare 0.7.5 pyhd3eb1b0_1003 pillow 8.4.0 py38hd45dc43_0 pip 22.0.3 pypi_0 pypi pkginfo 1.8.2 pyhd3eb1b0_0 plotnine 0.8.0 pyhd8ed1ab_0 conda-forge powershell_shortcut 0.0.1 3 prometheus_client 0.13.1 pyhd3eb1b0_0 prompt-toolkit 3.0.20 pyhd3eb1b0_0 protobuf 3.19.4 pypi_0 pypi psutil 5.8.0 py38h2bbff1b_1 py-lief 0.10.1 py38ha925a31_0 py-xgboost 1.5.0 py38haa95532_1 pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pyautogui 0.9.52 pypi_0 pypi pybithumb 1.0.21 pypi_0 pypi pycosat 0.6.3 py38h2bbff1b_0 pycparser 2.21 pyhd3eb1b0_0 pygetwindow 0.0.9 pypi_0 pypi pygments 2.11.2 pyhd3eb1b0_0 pyjwt 2.1.0 py38haa95532_0 pykorbit 0.1.10 pypi_0 pypi pymsgbox 1.0.9 pypi_0 pypi pyopenssl 19.1.0 py38_0 pyparsing 3.0.4 pyhd3eb1b0_0 pyperclip 1.8.1 pypi_0 pypi pyqt 5.9.2 py38hd77b12b_6 pyqt5 5.15.4 pypi_0 pypi pyqt5-qt5 5.15.2 pypi_0 pypi pyqt5-sip 12.9.0 pypi_0 pypi pyqtchart 5.15.4 pypi_0 pypi pyqtchart-qt5 5.15.2 pypi_0 pypi pyrect 0.1.4 pypi_0 pypi pyrsistent 0.18.0 py38h196d8e1_0 pyscreeze 0.1.26 pypi_0 pypi pysocks 1.7.1 py38haa95532_0 python 3.8.12 h6244533_0 python-dateutil 2.8.2 pyhd3eb1b0_0 python-graphviz 0.16 pyhd3eb1b0_1 python-libarchive-c 2.9 pyhd3eb1b0_1 pytweening 1.0.3 pypi_0 pypi pytz 2021.3 pyhd3eb1b0_0 pyupbit 0.2.21 pypi_0 pypi pywin32 302 py38h827c3e9_1 pywinauto 0.6.8 pypi_0 pypi pywinpty 2.0.2 py38h5da7b33_0 pyyaml 6.0 py38h2bbff1b_1 pyzmq 22.3.0 py38hd77b12b_2 qt 5.9.7 vc14h73c81de_0 qtpy 1.11.2 pyhd3eb1b0_0 requests 2.27.1 pyhd3eb1b0_0 requests-oauthlib 1.3.1 pypi_0 pypi rsa 4.8 pypi_0 pypi ruamel_yaml 0.15.100 py38h2bbff1b_0 scikit-learn 1.0.2 py38hf11a4ad_1 scipy 1.7.3 py38h0a974cb_0 selenium 3.141.0 pypi_0 pypi send2trash 1.8.0 pyhd3eb1b0_1 setuptools 58.0.4 py38haa95532_0 sip 4.19.13 py38hd77b12b_0 six 1.16.0 pyhd3eb1b0_1 sniffio 1.2.0 py38haa95532_1 soupsieve 2.3.1 pyhd3eb1b0_0 sqlite 3.37.2 h2bbff1b_0 statsmodels 0.13.0 py38h2bbff1b_0 tensorboard 2.8.0 pypi_0 pypi tensorboard-data-server 0.6.1 pypi_0 pypi tensorboard-plugin-wit 1.8.1 pypi_0 pypi tensorflow 2.8.0 pypi_0 pypi tensorflow-io-gcs-filesystem 0.24.0 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi terminado 0.13.1 py38haa95532_0 
testpath 0.5.0 pyhd3eb1b0_0 tf-estimator-nightly 2.8.0.dev2021122109 pypi_0 pypi threadpoolctl 2.2.0 pyh0d69192_0 tk 8.6.11 h2bbff1b_0 tornado 6.1 py38h2bbff1b_0 tqdm 4.62.3 pyhd3eb1b0_1 traitlets 5.1.1 pyhd3eb1b0_0 typing-extensions 3.10.0.2 hd3eb1b0_0 typing_extensions 3.10.0.2 pyh06a4308_0 ujson 4.0.2 py38hd77b12b_0 urllib3 1.26.8 pyhd3eb1b0_0 utils 1.0.1 pypi_0 pypi vc 14.2 h21ff451_1 vs2015_runtime 14.27.29016 h5e58377_2 wcwidth 0.2.5 pyhd3eb1b0_0 webencodings 0.5.1 py38_1 websocket-client 0.58.0 py38haa95532_4 websockets 9.1 pypi_0 pypi wheel 0.37.1 pyhd3eb1b0_0 widgetsnbextension 3.5.2 py38haa95532_0 win_inet_pton 1.1.0 py38haa95532_0 wincertstore 0.2 py38haa95532_2 winpty 0.4.3 4 xgboost 1.5.0 py38haa95532_1 xz 5.2.5 h62dcd97_0 yaml 0.2.5 he774522_0 yfinance 0.1.66 pypi_0 pypi zipp 3.7.0 pyhd3eb1b0_0 zlib 1.2.11 h8cc25b3_4 zstd 1.4.9 h19a0ad4_0 ```
2022/03/01
[ "https://Stackoverflow.com/questions/71310217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5554839/" ]
The Anaconda guide looks outdated, because since TensorFlow 2.0 the `tensorflow` and `tensorflow-gpu` packages have been merged into one. If you are using TensorFlow 2, I would advise you to follow this guide <https://www.tensorflow.org/install/gpu>, which has worked for me (do not download the latest cuDNN SDK version, which is buggy; use the version indicated in the tutorial instead).
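Before digging into driver versions, a quick sanity check that TensorFlow actually sees the GPU can save time. This is a minimal sketch using the stable TF 2.x API; an empty list means TensorFlow is running CPU-only:

```
import tensorflow as tf

# Devices TensorFlow can use; GPUs only show up when CUDA/cuDNN are found.
print(tf.config.list_physical_devices('GPU'))
# Whether this TensorFlow build was compiled with CUDA support at all.
print(tf.test.is_built_with_cuda())
```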
This guide <https://www.tensorflow.org/install/source_windows> has a table of tested version configurations. It helped.
9,430,644
``` class test: def __init__(self): test_dict = {'1': 'one', '2': 'two'} def test_function(self): print self.test_dict if __name__ == '__main__': t = test() print t.test_dict ``` Error: ``` AttributeError: test instance has no attribute 'test_dict' ``` Also, if I execute `t.test_function()` instead of `print t.test_dict`, the error occurs too: ``` AttributeError: test instance has no attribute 'test_dict' ``` Why? I have defined test\_dict in the function `__init__`, so it should be initialized for each instance, but why does Python tell me it cannot find the dict?
2012/02/24
[ "https://Stackoverflow.com/questions/9430644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1031769/" ]
You forgot `self`. Change this: ``` def __init__(self): test_dict = {'1': 'one', '2': 'two'} ``` to: ``` def __init__(self): self.test_dict = {'1': 'one', '2': 'two'} ``` `self` is your *instance* inside the methods in your class. That is not because `self` is a special keyword but because `self` is usually the word chosen to be a method's first argument. If you want to know more about `self`, there's a good answer [here](https://stackoverflow.com/a/625098/1132524). Finally, notice that you got an `AttributeError` when you tried to access ``` t.test_dict ``` because the `test_dict` attribute was not defined.
You made an error in your `__init__`. This: ``` def __init__(self): test_dict = {'1': 'one', '2': 'two'} ``` Should be: ``` def __init__(self): self.test_dict = {'1': 'one', '2': 'two'} ```
9,430,644
``` class test: def __init__(self): test_dict = {'1': 'one', '2': 'two'} def test_function(self): print self.test_dict if __name__ == '__main__': t = test() print t.test_dict ``` Error: ``` AttributeError: test instance has no attribute 'test_dict' ``` Also, if I execute `t.test_function()` instead of `print t.test_dict`, the error occurs too: ``` AttributeError: test instance has no attribute 'test_dict' ``` Why? I have defined test\_dict in the function `__init__`, so it should be initialized for each instance, but why does Python tell me it cannot find the dict?
2012/02/24
[ "https://Stackoverflow.com/questions/9430644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1031769/" ]
You forgot `self`. Change this: ``` def __init__(self): test_dict = {'1': 'one', '2': 'two'} ``` to: ``` def __init__(self): self.test_dict = {'1': 'one', '2': 'two'} ``` `self` is your *instance* inside the methods in your class. That is not because `self` is a special keyword but because `self` is usually the word chosen to be a method's first argument. If you want to know more about `self`, there's a good answer [here](https://stackoverflow.com/a/625098/1132524). Finally, notice that you got an `AttributeError` when you tried to access ``` t.test_dict ``` because the `test_dict` attribute was not defined.
Think of classes/instances as dictionaries. Whenever you create an instance and call any of its methods, those functions automatically receive the instance as their first argument (unless the function is a static or class method). So, if you want some variable to be stored on the instance and accessed later, put it on that first argument (by convention, it is called **self**). The class constructor is no exception to this rule; that's why all the answers point out a change to the **test\_dict** assignment in the constructor. Think of: ``` self.test_dict = ... ``` as ``` self.__dict__["test_dict"] = ... ``` Like all variables in Python, you cannot access it if it was not assigned first. This is the case in your original class: `__init__` created a variable local to the method, while **test\_function** tries to access an instance attribute that does not exist.
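A minimal sketch illustrating the analogy above; the attribute really does land in the instance's `__dict__` (the class is renamed to `Test` here only for readability):

```
class Test(object):
    def __init__(self):
        self.test_dict = {'1': 'one', '2': 'two'}

t = Test()
print(t.__dict__)              # {'test_dict': {'1': 'one', '2': 'two'}}

# Assigning through __dict__ is equivalent to normal attribute assignment:
t.__dict__['other'] = 42
print(t.other)                 # 42
```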
13,174,127
I want to be able to capture the value of an HTML attribute with a Python regexp. Currently I use ``` re.compile( r'=(["\'].*?["\'])', re.IGNORECASE | re.DOTALL ) ``` My problem is that I want the regular expression to "remember" whether the attribute started with a single or a double quote. I found the bug in my current approach with the following attribute: ``` href="javascript:foo('bar')" ``` My regex catches: ``` "javascript:foo(' ```
2012/11/01
[ "https://Stackoverflow.com/questions/13174127", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1616042/" ]
You can capture the first quote and then use a backreference: ``` r'=((["\']).*?\2)' ``` However, regular expressions are [not the proper approach to parsing HTML](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454). You should consider using a DOM parser instead.
The following would be more efficient in theory: ``` regex = r'"[^"]*"|\'[^\']*\'' ``` For reference, here's Jeffrey Friedl's [expression](http://regex.info/dlisting.cgi?ed=3&id=36481) for HTML tags (from the owl book): ``` < # Opening "<" ( # Any amount of . . . "[^"]*" # double-quoted string, | # or . . . '[^']*' # single-quoted string, | # or . . . [^'">] # "other stuff" )* # > # Closing ">" ```
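Applied back to the question's Python code, a sketch of the alternation in use; the sample tag is made up for illustration, and a real HTML parser is still the safer tool:

```
import re

html = """<a href="javascript:foo('bar')" title='x'>"""
pattern = re.compile(r'''=("[^"]*"|'[^']*')''')
print(pattern.findall(html))
# ['"javascript:foo(\'bar\')"', "'x'"]
```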
21,067,443
I can use the following code to change a string to a variable and then call functions of the library that was previously imported. ``` >>> import sys >>> x = 'sys' >>> globals()[x] <module 'sys' (built-in)> >>> globals()[x].__doc__ ``` Without first importing the module, I can get a string-to-variable mapping, but I can't use the same `globals()[var]` syntax with `import`: ``` >>> y = 'os' >>> globals()[y] <module 'os' from '/usr/lib/python2.7/os.pyc'> >>> import globals()[y] File "<stdin>", line 1 import globals()[y] ^ SyntaxError: invalid syntax >>> z = globals()[y] >>> import z Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named z ``` **Is it possible to take a string input and import the library that has the same name as the string input?** If so, how? @ndpu and @paulobu have answered that `__import__()` allows string-based access to a library as a variable. But is there a problem with using the same variable name rather than using an alternate for the library? E.g.: ``` >>> x = 'sys' >>> sys = __import__(x) ```
2014/01/11
[ "https://Stackoverflow.com/questions/21067443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/610569/" ]
Most Python coders prefer using [`importlib.import_module`](http://docs.python.org/2.7/library/importlib.html#importlib.import_module) instead of `__import__`: ``` >>> from importlib import import_module >>> mod = raw_input(":") :sys >>> sys = import_module(mod) >>> sys <module 'sys' (built-in)> >>> sys.version_info # Just to demonstrate sys.version_info(major=2, minor=7, micro=5, releaselevel='final', serial=0) >>> ``` You can read about the preference of `importlib.import_module` over `__import__` [here](http://docs.python.org/2.7/library/functions.html#__import__).
You are looking for the [`__import__`](http://docs.python.org/2/library/functions.html#__import__) built-in function, which takes the module *name* as a string: ``` os = __import__(y) ``` Basic usage: ``` >>> math = __import__('math') >>> print math.e 2.718281828459045 ``` You can also look into `importlib.import_module` as suggested in another answer and in the `__import__` documentation.
21,067,443
I can use the following code to change a string to a variable and then call functions of the library that was previously imported. ``` >>> import sys >>> x = 'sys' >>> globals()[x] <module 'sys' (built-in)> >>> globals()[x].__doc__ ``` Without first importing the module, I can get a string-to-variable mapping, but I can't use the same `globals()[var]` syntax with `import`: ``` >>> y = 'os' >>> globals()[y] <module 'os' from '/usr/lib/python2.7/os.pyc'> >>> import globals()[y] File "<stdin>", line 1 import globals()[y] ^ SyntaxError: invalid syntax >>> z = globals()[y] >>> import z Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named z ``` **Is it possible to take a string input and import the library that has the same name as the string input?** If so, how? @ndpu and @paulobu have answered that `__import__()` allows string-based access to a library as a variable. But is there a problem with using the same variable name rather than using an alternate for the library? E.g.: ``` >>> x = 'sys' >>> sys = __import__(x) ```
2014/01/11
[ "https://Stackoverflow.com/questions/21067443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/610569/" ]
Use the `__import__` function: ``` x = 'sys' sys = __import__(x) ```
You are looking for the [`__import__`](http://docs.python.org/2/library/functions.html#__import__) built-in function, which takes the module *name* as a string: ``` os = __import__(y) ``` Basic usage: ``` >>> math = __import__('math') >>> print math.e 2.718281828459045 ``` You can also look into `importlib.import_module` as suggested in another answer and in the `__import__` documentation.
21,067,443
I can use the following code to change a string to a variable and then call functions of the library that was previously imported. ``` >>> import sys >>> x = 'sys' >>> globals()[x] <module 'sys' (built-in)> >>> globals()[x].__doc__ ``` Without first importing the module, I can get a string-to-variable mapping, but I can't use the same `globals()[var]` syntax with `import`: ``` >>> y = 'os' >>> globals()[y] <module 'os' from '/usr/lib/python2.7/os.pyc'> >>> import globals()[y] File "<stdin>", line 1 import globals()[y] ^ SyntaxError: invalid syntax >>> z = globals()[y] >>> import z Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named z ``` **Is it possible to take a string input and import the library that has the same name as the string input?** If so, how? @ndpu and @paulobu have answered that `__import__()` allows string-based access to a library as a variable. But is there a problem with using the same variable name rather than using an alternate for the library? E.g.: ``` >>> x = 'sys' >>> sys = __import__(x) ```
2014/01/11
[ "https://Stackoverflow.com/questions/21067443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/610569/" ]
Most Python coders prefer using [`importlib.import_module`](http://docs.python.org/2.7/library/importlib.html#importlib.import_module) instead of `__import__`: ``` >>> from importlib import import_module >>> mod = raw_input(":") :sys >>> sys = import_module(mod) >>> sys <module 'sys' (built-in)> >>> sys.version_info # Just to demonstrate sys.version_info(major=2, minor=7, micro=5, releaselevel='final', serial=0) >>> ``` You can read about the preference of `importlib.import_module` over `__import__` [here](http://docs.python.org/2.7/library/functions.html#__import__).
Use the `__import__` function: ``` x = 'sys' sys = __import__(x) ```
46,963,157
I'm trying to implement an efficient way of creating a frequency table in Python, with a rather large numpy input array of `~30 million` entries. Currently I am using a `for-loop`, but it's taking far too long. The input is an ordered `numpy array` of the form ``` Y = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9..... etc]) ``` And I would like to have an output of the form: ``` Z = {4:3, 5:0, 6:2, 7:1,8:1,9:3..... etc} (as any data type) ``` Currently I am using the following implementation: ``` Z = pd.Series(index = np.arange(Y.min(), Y.max())) for i in range(Y.min(), Y.max()): Z[i] = (Y == i).sum() ``` Is there a quicker way of doing this, or a way without iterating through a loop? Thanks for helping, and sorry if this has been asked before!
2017/10/26
[ "https://Stackoverflow.com/questions/46963157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8840045/" ]
You can simply do this using Counter from the collections module. Please see the below code I ran for your test case. ``` import numpy as np from collections import Counter Y = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9,10,5,5,5]) print(Counter(Y)) ``` It gave the following output ``` Counter({4: 3, 9: 3, 5: 3, 6: 2, 7: 1, 8: 1, 10: 1}) ``` You can easily use this object for further processing. I hope this helps.
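Since the desired output in the question also lists values that never occur (e.g. `5: 0`), a small sketch that fills the whole range from the `Counter`:

```
import numpy as np
from collections import Counter

Y = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9])
counts = Counter(Y)
# Dict over the full value range, defaulting missing values to 0
Z = {i: counts.get(i, 0) for i in range(Y.min(), Y.max() + 1)}
print(Z)   # {4: 3, 5: 0, 6: 2, 7: 1, 8: 1, 9: 3}
```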
I think numpy.unique is your solution. <https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.unique.html> ``` import numpy as np t = np.random.randint(0, 1000, 100000000) print(np.unique(t, return_counts=True)) ``` This takes ~4 seconds for me. The collections.Counter approach takes ~10 seconds. But numpy.unique returns the frequencies in an array while collections.Counter returns a dictionary, so it's a matter of convenience. Edit: I cannot comment on other posts, so I'll write here that @lomereiter's solution is lightning fast (linear) and should be the accepted one.
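If a dict like the one in the question is wanted, the two arrays from `np.unique` zip together directly; a short sketch:

```
import numpy as np

Y = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9])
values, counts = np.unique(Y, return_counts=True)
Z = dict(zip(values, counts))
print(Z)   # {4: 3, 6: 2, 7: 1, 8: 1, 9: 3}
```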
46,963,157
I'm trying to implement an efficient way of creating a frequency table in Python, with a rather large numpy input array of `~30 million` entries. Currently I am using a `for-loop`, but it's taking far too long. The input is an ordered `numpy array` of the form ``` Y = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9..... etc]) ``` And I would like to have an output of the form: ``` Z = {4:3, 5:0, 6:2, 7:1,8:1,9:3..... etc} (as any data type) ``` Currently I am using the following implementation: ``` Z = pd.Series(index = np.arange(Y.min(), Y.max())) for i in range(Y.min(), Y.max()): Z[i] = (Y == i).sum() ``` Is there a quicker way of doing this, or a way without iterating through a loop? Thanks for helping, and sorry if this has been asked before!
2017/10/26
[ "https://Stackoverflow.com/questions/46963157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8840045/" ]
You can simply do this using Counter from the collections module. Please see the below code I ran for your test case. ``` import numpy as np from collections import Counter Y = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9,10,5,5,5]) print(Counter(Y)) ``` It gave the following output ``` Counter({4: 3, 9: 3, 5: 3, 6: 2, 7: 1, 8: 1, 10: 1}) ``` You can easily use this object for further processing. I hope this helps.
If your input array `x` is sorted, you can do the following to get the counts in linear time: ``` diff1 = np.diff(x) # get indices of the elements at which jumps occurred jumps = np.concatenate([[0], np.where(diff1 > 0)[0] + 1, [len(x)]]) unique_elements = x[jumps[:-1]] counts = np.diff(jumps) ```
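A quick usage check of this sketch on an array shaped like the one in the question:

```
import numpy as np

x = np.array([4, 4, 4, 6, 6, 7, 8, 9, 9, 9])   # must be sorted
diff1 = np.diff(x)
jumps = np.concatenate([[0], np.where(diff1 > 0)[0] + 1, [len(x)]])
unique_elements = x[jumps[:-1]]
counts = np.diff(jumps)
print(unique_elements)   # [4 6 7 8 9]
print(counts)            # [3 2 1 1 3]
```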
36,433,011
At this moment I have a game that drops falling colored blocks (*obstacles*) from the top of the screen, and the objective is for the player to dodge said (*obstacles*) by moving either left or right. I currently have it set up where every time the user runs the script, the blocks will be a different color, but the problem is, they will **only** be that color for the duration of game play, and for the color to be different, the user would have to exit and re-run the script. The code I have for this: ``` col1 = randint(1, 255) col2 = randint(1, 255) col3 = randint(1, 255) block_color = (col1, col2, col3) ``` Once the script is executed, a random color is defined by the three randints above, and it's applied later in the script. I'm looking for advice on how I might be able to change the color of **every single block that falls**. So, for example, one block falls and its randcolor is red, and then the second block falls and its randcolor is blue, etc. I imagine it would function along the lines of defining 3 random integers every time a block falls and applying those three RGB values to the new block. I just cannot figure out how to actually write that in Python. Any help would be greatly appreciated. Thank you.
2016/04/05
[ "https://Stackoverflow.com/questions/36433011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3777680/" ]
Add relative positioning to the container and absolute positioning to the icon ``` #menui { left: 0; position:absolute; } .smalltop { background-color: #FFF; list-style-type: none; margin: 0 auto; position:relative; } ``` **[jsFiddle example](https://jsfiddle.net/j08691/pzLbruhx/1/)**
Just give the property `float:left` to `#menui` and see the result
36,433,011
At this moment I have a game that drops falling colored blocks (*obstacles*) from the top of the screen, and the objective is for the player to dodge said (*obstacles*) by moving either left or right. I currently have it set up where every time the user runs the script, the blocks will be a different color, but the problem is, they will **only** be that color for the duration of game play, and for the color to be different, the user would have to exit and re-run the script. The code I have for this: ``` col1 = randint(1, 255) col2 = randint(1, 255) col3 = randint(1, 255) block_color = (col1, col2, col3) ``` Once the script is executed, a random color is defined by the three randints above, and it's applied later in the script. I'm looking for advice on how I might be able to change the color of **every single block that falls**. So, for example, one block falls and its randcolor is red, and then the second block falls and its randcolor is blue, etc. I imagine it would function along the lines of defining 3 random integers every time a block falls and applying those three RGB values to the new block. I just cannot figure out how to actually write that in Python. Any help would be greatly appreciated. Thank you.
2016/04/05
[ "https://Stackoverflow.com/questions/36433011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3777680/" ]
Add relative positioning to the container and absolute positioning to the icon ``` #menui { left: 0; position:absolute; } .smalltop { background-color: #FFF; list-style-type: none; margin: 0 auto; position:relative; } ``` **[jsFiddle example](https://jsfiddle.net/j08691/pzLbruhx/1/)**
Use [**`display:table\table-row\table-cell`**](http://colintoh.com/blog/display-table-anti-hero) and [**`vertical-align`**](https://developer.mozilla.org/en-US/docs/Web/CSS/vertical-align) on the icon: ``` #smalllogo { max-height: 50px; width: auto; display: table-cell; margin: 0 auto; border: 5px solid black; margin-top: 5px; } .w3-opennav { display: table-cell; vertical-align: middle; } .smalltop { background-color: #FFF; list-style-type: none; margin: 0 auto; display: table-row; } ``` [**Fiddle**](https://jsfiddle.net/zer00ne/xqzks64t/1/)
71,775,713
I'm trying to do some interesting integration problems for my Calculus I students under Anaconda Python 3.8.5 and SymPy version 1.9. Question 1 is: integrate(sin(m \* x)\* cos(n \* x), x) [![enter image description here](https://i.stack.imgur.com/lOzzI.png)](https://i.stack.imgur.com/lOzzI.png) where x is the integration variable and m and n are two unequal, non-complementary (m != -n) real constants. Question 2 is: integrate((a \*\* 2 - x \*\* 2) \*\* (1/2), x) [![enter image description here](https://i.stack.imgur.com/XCTsK.png)](https://i.stack.imgur.com/XCTsK.png) where for my Calculus I students we have to assume that |a| > |x|, otherwise they won't even be able to interpret the results. The following solution works for question 1: ``` m, n = symbols("m n", real=True, nonzero=True) integrate(sin(m * x)* cos(n * x), x).args[2][0] ``` [![enter image description here](https://i.stack.imgur.com/7hcEI.png)](https://i.stack.imgur.com/7hcEI.png) but for question 2 it obviously gives me results beyond what my Calculus I students can understand: [![enter image description here](https://i.stack.imgur.com/bWKWD.png)](https://i.stack.imgur.com/bWKWD.png) whereas I only want: [![enter image description here](https://i.stack.imgur.com/xBuub.png)](https://i.stack.imgur.com/xBuub.png) instead. Since I already know in question 1 that m != n and m != -n, and in question 2 that |a| > |x|, is there a way I can tell SymPy this so that I don't have to dig through the Piecewise stuff (or interpret the complex-range results) and get the answer directly? Thanks.
2022/04/07
[ "https://Stackoverflow.com/questions/71775713", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8438488/" ]
If you want to say that `m` and `n` are not equal (and get the answer you gave) make one odd and one even: ``` >>> var('m',odd=True) m >>> var('n',even=True) n >>> integrate(sin((m) * x)* cos(n * x), x) -m*cos(m*x)*cos(n*x)/(m**2 - n**2) - n*sin(m*x)*sin(n*x)/(m**2 - n**2) ``` (You get an interesting result if you just make `m,n,p` positive and let `m=n+p`, do the integral, and replace `p` with `m - n` and simplify. Haven't been able to investigate, though.) If you want `x` > `a`, let's integrate from `a` to `x` with real variables: ``` >>> var('a x',positive=1) (a, x) >>> integrate((a ** 2 - x ** 2) ** (S.Half), (x,a,x)) a**2*asin(x/a)/2 - pi*a**2/4 + x*sqrt(a**2 - x**2)/2 ``` If you want to get rid of the constant you can do ``` >>> _.as_independent(x)[1] + Symbol("C") C + a**2*asin(x/a)/2 + x*sqrt(a**2 - x**2)/2 ``` This will only change sign if `x < a < 0`, I b
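For the parenthetical `m = n + p` experiment above, a hedged sketch of what that might look like; per the answer's own caveat it has not been investigated further, so treat the output with care:

```
from sympy import symbols, sin, cos, integrate, simplify

x = symbols('x')
n, p = symbols('n p', positive=True)   # p stands in for m - n, so m != n and m != -n

res = integrate(sin((n + p) * x) * cos(n * x), x)
m = symbols('m', positive=True)
print(simplify(res.subs(p, m - n)))    # rewrite the result back in terms of m and n
```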
First of all, what version of SymPy are you using? You can verify that with: ```py import sympy as sp print(sp.__version__) ``` If you are using older versions, maybe the solver is having trouble. The following solution has been tested on SymPy 1.9 and 1.10.1. ```py # define a, x as ordinary symbols with no assumptions. Sadly, it is not # possible to make assumptions similar to |a| > |x|. a, x = symbols("a, x") # integrate the expression res = integrate(sqrt(a**2 - x**2), x) print(type(res)) # Piecewise ``` [![enter image description here](https://i.stack.imgur.com/tpatB.png)](https://i.stack.imgur.com/tpatB.png) The result is a `Piecewise` object. As you can see, SymPy computed the complete solution. You can then extract the interested piece with the following command: ```py res.args[1][0] ``` Here, `res.args[1]` extracts the second piece, which is a tuple, `(expr, condition)`. With `res.args[1][0]` we extract the expression from the second piece.
7,774,740
This is an extension question of [PHP pass in $this to function outside class](https://stackoverflow.com/questions/7774444/php-pass-in-this-to-function-outside-class) And I believe this is what I'm looking for, but it's in Python, not PHP: [Programmatically determining amount of parameters a function requires - Python](https://stackoverflow.com/questions/741950/programatically-determining-amount-of-parameters-a-function-requires-python) Let's say I have a function like this: ``` function client_func($cls, $arg){ } ``` and when I'm ready to call this function I might do something like this in pseudo code: ``` if function's first parameter equals '$cls', then call client_func(instanceof class, $arg) else call client_func($arg) ``` So basically, is there a way to look ahead at a function and see what parameters are required before calling the function? I guess this would be like `debug_backtrace()`, but the other way around. `func_get_args()` can only be called from within a function, which doesn't help me here. Any thoughts?
2011/10/14
[ "https://Stackoverflow.com/questions/7774740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/266763/" ]
Use [Reflection](http://php.net/book.reflection), especially [ReflectionFunction](http://php.net/class.reflectionfunction) in your case. ``` $fct = new ReflectionFunction('client_func'); echo $fct->getNumberOfRequiredParameters(); ``` As far as I can see you will find [getParameters()](http://php.net/reflectionfunctionabstract.getparameters) useful too
The only way is with reflection; see <http://us3.php.net/manual/en/book.reflection.php> ``` class foo { function bar ($arg1, $arg2) { // ... } } $method = new ReflectionMethod('foo', 'bar'); $num = $method->getNumberOfParameters(); ```
60,973,894
[Open Street Map (pyproj). How to solve syntax issue?](https://stackoverflow.com/questions/59596835/open-street-map-pyproj-how-to-solve-syntax-issue) has a similar question and the answers there did not help me. I am using the helper class below a few hundred times and my console gets flooded with warnings: ``` /opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyproj/crs/crs.py:53: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6 return _prepare_from_string(" ".join(pjargs)) ``` <https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6> When I try to follow the hint by using: ``` return transform(Proj('epsg:4326'), Proj('epsg:3857'), lon,lat) ``` **I get some (inf,inf) results in cases where the original code worked. What is the proper way to avoid the deprecated syntax but get the same results?** * <https://gis.stackexchange.com/questions/164043/how-to-create-a-projection-from-a-crs-string-using-pyproj> shows the old syntax but no code example for a compatible new statement. <https://github.com/pyproj4/pyproj/issues/224> states: ``` *What is the preferred way of loading EPSG CRSes now? use "EPSG:XXXX" in source_crs or target_crs arguments of proj_create_crs_to_crs() when creating a transformation, or as argument of proj_create() to instanciate a CRS object* ``` **What does this mean as a code example?** ``` from pyproj import Proj, transform class Projection: @staticmethod def wgsToXy(lon,lat): return transform(Proj(init='epsg:4326'), Proj(init='epsg:3857'), lon,lat) @staticmethod def pointToXy(point): xy=point.split(",") return Projection.wgsToXy(float(xy[0]),float(xy[1])) ```
2020/04/01
[ "https://Stackoverflow.com/questions/60973894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1497139/" ]
In order to keep using the old syntax (feeding the transformer `(Lon,Lat)` pairs) you can use the `always_xy=True` parameter when creating the transformer object: ```py from pyproj import Transformer transformer = Transformer.from_crs(4326, 3857, always_xy=True) points = [ (6.783333, 51.233333), # Dusseldorf (-122.416389, 37.7775) # San Francisco ] for pt in transformer.itransform(points): print(pt) ``` Output ``` (755117.1754412088, 6662671.876828446) (-13627330.088231295, 4548041.532457043) ```
This is my current guess for the fix: ``` #e4326=Proj(init='epsg:4326') e4326=CRS('EPSG:4326') #e3857=Proj(init='epsg:3857') e3857=CRS('EPSG:3857') ``` **Projection helper class** ``` from pyproj import Proj, CRS,transform class Projection: ''' helper to project lat/lon values to map ''' #e4326=Proj(init='epsg:4326') e4326=CRS('EPSG:4326') #e3857=Proj(init='epsg:3857') e3857=CRS('EPSG:3857') @staticmethod def wgsToXy(lon,lat): t1=transform(Projection.e4326,Projection.e3857, lon,lat) #t2=transform(Proj('epsg:4326'), Proj('epsg:3857'), lon,lat) return t1 @staticmethod def pointToXy(point): xy=point.split(",") return Projection.wgsToXy(float(xy[0]),float(xy[1])) ```
66,514,262
i want to plot a graphs of my csv file data now i want epoch as x axis and on y axis the label "acc" and "val\_acc" is plot i try the following code but it gives blank graph ` ``` x = [] y = [] with open('trainSelfVGG.csv','r') as csvfile: plots = csv.reader(csvfile, delimiter=',') for row in plots: x.append('epoch') y.append('acc') plt.plot(x,y, label='Loaded from file!') plt.xlabel('x') plt.ylabel('y') plt.title('Accuracy VS Val_acc') plt.legend() plt.show()` ``` i am new to python please help the data of csv file look like this ``` epoch| acc| | loss |lr |val_acc |val_loss 0 0.712187529 0.923782527 5.00E-05 0.734799922 0.865529358 1 0.746874988 0.845359206 5.00E-05 0.733945608 0.870365739 2 0.739687502 0.853801966 5.00E-05 0.734799922 0.869380653 3 0.734375 0.872551799 5.00E-05 0.734799922 0.818775356 4 0.735000014 0.817328095 5.00E-05 0.744980752 0.782691181 5 0.738125026 0.813450873 5.00E-05 0.743200898 0.756890059 6 0.749842465 0.769637883 5.00E-05 0.746404648 0.761445224 7 0.740312517 0.779146731 5.00E-05 0.750605166 0.74676168 8 0.745937526 0.77233541 5.00E-05 0.738217294 0.754457355 9 0.760239422 0.717389286 5.00E-05 0.756656706 0.719709456 10 0.758437514 0.727203131 5.00E-05 0.753880084 0.766058266 11 0.756562471 0.718854547 5.00E-05 0.764060915 0.699205279 12 0.751874983 0.735785842 5.00E-05 0.76099956 0.711962938 13 0.762187481 0.709208548 5.00E-05 0.762850642 0.701643765 14 0.766250014 0.689858377 5.00E-05 0.771037996 0.698576272 15 0.791562498 0.642151952 5.00E-05 0.775665641 0.674562693 16 0.773750007 0.672213078 5.00E-05 0.77153641 0.683691561 17 0.785312474 0.657182395 5.00E-05 0.778015077 0.670122385 18 0.770951509 0.685499191 5.00E-05 0.774384141 0.670817852 19 0.777812481 0.673273861 5.00E-05 0.785134554 0.652816713 20 0.80250001 0.626691639 5.00E-05 0.783141136 0.66740793 21 0.787500024 0.64432466 5.00E-05 0.788053513 0.651966989 22 0.7890625 0.621332884 5.00E-05 0.775096118 0.663884819 23 0.787500024 0.637105942 5.00E-05 0.785775304 0.657734036 24 0.794580996 0.616357446 5.00E-05 0.771749973 0.670413017 25 0.803717732 0.599221408 5.00E-05 0.788195908 0.64291203 26 0.811874986 0.587966204 5.00E-05 0.791186094 0.653984845 27 0.804062486 0.591458261 5.00E-05 0.792538822 0.642165542 28 0.797187507 0.602103412 5.00E-05 0.78812474 0.635053933 29 0.807187498 0.595692158 5.00E-05 0.77474016 0.661368072 30 0.811909258 0.577990949 5.00E-05 0.774526536 0.668637931 31 0.820625007 0.546454251 5.00E-05 0.783212304 0.650670886 32 0.82593751 0.53596288 5.00E-05 0.778655827 0.651631236 33 0.805608094 0.582103312 5.00E-05 0.792823553 0.635468125 34 0.822621286 0.555304945 5.00E-05 0.783924222 0.647240341 35 0.823125005 0.551530778 5.00E-05 0.783141136 0.662788212 ```
2021/03/07
[ "https://Stackoverflow.com/questions/66514262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12673562/" ]
You are correct, a string cannot be returned as an array via the `return` statement. When you pass or return an array, only a pointer to its first element is used. Hence the function `get_name()` returns a pointer to the array defined locally with automatic storage (aka *on the stack*). This is incorrect as this array is discarded as soon as it goes out of scope, ie: when the function returns. There are several ways for `get_name()` to provide the name to its caller: * you can pass a destination array and its length, and let the function fill the name in the array, taking care not to write beyond the end of the array and making sure it has a null terminator: ``` char *get_name(char *dest, size_t size, int num) { if (num == 1) { snprintf(dest, size, "Jake Peralta"); } else { snprintf(dest, size, "John Doe"); } // return the destination pointer for convenience. return dest; } int main() { char name[30]; int num = 1; get_name(name, sizeof name, num); printf("%s\n", name); return 0; } ``` * you can allocate memory in `get_name()` and return a pointer to the allocated array where you copy the string. It will be the caller's responsibility to free this object with `free()` when it is no longer used. ``` char *get_name(int num) { if (num == 1) { return strdup("Jake Peralta"); } else { return strdup("John Doe"); } } int main() { int num = 1; char *name = get_name(num); printf("%s\n", name); free(name); return 0; } ``` * you can return a constant string, but you can only do this if all names are known at compile time. ``` const char *get_name(int num) { if (num == 1) { return "Jake Peralta"; } else { return "John Doe"; } } int main() { int num = 1; const char *name = get_name(num); printf("%s\n", name); return 0; } ```
You are returning the address of `real_name` from the `get_name` function, which goes out of scope when the function returns. Instead, allocate the string on the heap and return its address. Note that the caller would then need to free the heap-allocated string to avoid memory leaks.
66,514,262
i want to plot a graphs of my csv file data now i want epoch as x axis and on y axis the label "acc" and "val\_acc" is plot i try the following code but it gives blank graph ` ``` x = [] y = [] with open('trainSelfVGG.csv','r') as csvfile: plots = csv.reader(csvfile, delimiter=',') for row in plots: x.append('epoch') y.append('acc') plt.plot(x,y, label='Loaded from file!') plt.xlabel('x') plt.ylabel('y') plt.title('Accuracy VS Val_acc') plt.legend() plt.show()` ``` i am new to python please help the data of csv file look like this ``` epoch| acc| | loss |lr |val_acc |val_loss 0 0.712187529 0.923782527 5.00E-05 0.734799922 0.865529358 1 0.746874988 0.845359206 5.00E-05 0.733945608 0.870365739 2 0.739687502 0.853801966 5.00E-05 0.734799922 0.869380653 3 0.734375 0.872551799 5.00E-05 0.734799922 0.818775356 4 0.735000014 0.817328095 5.00E-05 0.744980752 0.782691181 5 0.738125026 0.813450873 5.00E-05 0.743200898 0.756890059 6 0.749842465 0.769637883 5.00E-05 0.746404648 0.761445224 7 0.740312517 0.779146731 5.00E-05 0.750605166 0.74676168 8 0.745937526 0.77233541 5.00E-05 0.738217294 0.754457355 9 0.760239422 0.717389286 5.00E-05 0.756656706 0.719709456 10 0.758437514 0.727203131 5.00E-05 0.753880084 0.766058266 11 0.756562471 0.718854547 5.00E-05 0.764060915 0.699205279 12 0.751874983 0.735785842 5.00E-05 0.76099956 0.711962938 13 0.762187481 0.709208548 5.00E-05 0.762850642 0.701643765 14 0.766250014 0.689858377 5.00E-05 0.771037996 0.698576272 15 0.791562498 0.642151952 5.00E-05 0.775665641 0.674562693 16 0.773750007 0.672213078 5.00E-05 0.77153641 0.683691561 17 0.785312474 0.657182395 5.00E-05 0.778015077 0.670122385 18 0.770951509 0.685499191 5.00E-05 0.774384141 0.670817852 19 0.777812481 0.673273861 5.00E-05 0.785134554 0.652816713 20 0.80250001 0.626691639 5.00E-05 0.783141136 0.66740793 21 0.787500024 0.64432466 5.00E-05 0.788053513 0.651966989 22 0.7890625 0.621332884 5.00E-05 0.775096118 0.663884819 23 0.787500024 0.637105942 5.00E-05 0.785775304 0.657734036 24 0.794580996 0.616357446 5.00E-05 0.771749973 0.670413017 25 0.803717732 0.599221408 5.00E-05 0.788195908 0.64291203 26 0.811874986 0.587966204 5.00E-05 0.791186094 0.653984845 27 0.804062486 0.591458261 5.00E-05 0.792538822 0.642165542 28 0.797187507 0.602103412 5.00E-05 0.78812474 0.635053933 29 0.807187498 0.595692158 5.00E-05 0.77474016 0.661368072 30 0.811909258 0.577990949 5.00E-05 0.774526536 0.668637931 31 0.820625007 0.546454251 5.00E-05 0.783212304 0.650670886 32 0.82593751 0.53596288 5.00E-05 0.778655827 0.651631236 33 0.805608094 0.582103312 5.00E-05 0.792823553 0.635468125 34 0.822621286 0.555304945 5.00E-05 0.783924222 0.647240341 35 0.823125005 0.551530778 5.00E-05 0.783141136 0.662788212 ```
2021/03/07
[ "https://Stackoverflow.com/questions/66514262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12673562/" ]
You are correct, a string cannot be returned as an array via the `return` statement. When you pass or return an array, only a pointer to its first element is used. Hence the function `get_name()` returns a pointer to the array defined locally with automatic storage (aka *on the stack*). This is incorrect as this array is discarded as soon as it goes out of scope, ie: when the function returns. There are several ways for `get_name()` to provide the name to its caller: * you can pass a destination array and its length, and let the function fill the name in the array, taking care not to write beyond the end of the array and making sure it has a null terminator: ``` char *get_name(char *dest, size_t size, int num) { if (num == 1) { snprintf(dest, size, "Jake Peralta"); } else { snprintf(dest, size, "John Doe"); } // return the destination pointer for convenience. return dest; } int main() { char name[30]; int num = 1; get_name(name, sizeof name, num); printf("%s\n", name); return 0; } ``` * you can allocate memory in `get_name()` and return a pointer to the allocated array where you copy the string. It will be the caller's responsibility to free this object with `free()` when it is no longer used. ``` char *get_name(int num) { if (num == 1) { return strdup("Jake Peralta"); } else { return strdup("John Doe"); } } int main() { int num = 1; char *name = get_name(num); printf("%s\n", name); free(name); return 0; } ``` * you can return a constant string, but you can only do this if all names are known at compile time. ``` const char *get_name(int num) { if (num == 1) { return "Jake Peralta"; } else { return "John Doe"; } } int main() { int num = 1; const char *name = get_name(num); printf("%s\n", name); return 0; } ```
Like you said, "However, as far as I know, string cannot be in the form `string_name = call_function()`." To understand the logic behind this, just look at your `get_name()` function: ``` char *get_name(int num) { char real_name[30]; if (num==1) strcpy(real_name,"Jake Peralta"); return real_name; } ``` Here, you try to return the starting address of `real_name`, but `real_name` is destroyed when it goes out of scope (in this case, when the function returns). There are two ways I think you could fix this. One is to add the string that should hold the return value as a parameter. In your case, it would be: ``` void get_name(int num, char *dest) { char real_name[30]; if (num==1) { strcpy(real_name,"Jake Peralta"); strcpy(dest, real_name); } } ``` Or, just avoid using `real_name` at all to make the function shorter: ``` void get_name(int num, char *dest) { if (num==1) strcpy(dest, "Jake Peralta"); } ``` The other way is to allocate the return value on the heap, but I wouldn't recommend this; you would have to keep track of each allocated string and eventually free all of them. Nevertheless, here is how it would look in your case: ``` char *get_name(int num) { char *real_name = calloc(30, sizeof(char)); // Using calloc to avoid returning an uninitialized string if num is not 1 if (num==1) strcpy(real_name, "Jake Peralta"); return real_name; } ``` By the way, I kept your `strcpy` here, but you might want to avoid using it in the future, as it can cause buffer overruns. [Here's a post with useful answers as to why it's bad](https://stackoverflow.com/questions/1258550/why-should-you-use-strncpy-instead-of-strcpy). In fact, even `strncpy` isn't fully safe (See [here](https://randomascii.wordpress.com/2013/04/03/stop-using-strncpy-already/) (credit to @chqrlie for providing the link)). @chqrlie provides a clean and safe alternative using `snprintf` in his own answer, I'd suggest you use that.
27,088,984
I am learning Python, so this may be a simple question. I am creating a list of cars and their details as below: ``` car_specs = [("1. Ford Fiesta - Studio", ["3", "54mpg", "Manual", "£9,995"]), ("2. Ford Focous - Studio", ["5", "48mpg", "Manual", "£17,295"]), ("3. Vauxhall Corsa STING", ["3", "53mpg", "Manual", "£8,995"]), ("4. VW Golf - S", ["5", "88mpg", "Manual", "£17,175"]) ] ``` I have then created a part for adding another car as follows: ``` new_name = input("What is the name of the new car?") new_doors = input("How many doors does it have?") new_efficency = input("What is the fuel efficency of the new car?") new_gearbox = input("What type of gearbox?") new_price = input("How much does the new car cost?") car_specs.insert(len(car_specs), (new_name[new_doors, new_efficency, new_gearbox, new_price])) ``` It isn't working though and comes up with this error: ``` Would you like to add a new car?(Y/N)Y What is the name of the new car?test How many doors does it have?123456 What is the fuel efficency of the new car?23456 What type of gearbox?234567 How much does the new car cost?234567 Traceback (most recent call last): File "/Users/JagoStrong-Wright/Documents/School Work/Computer Science/car list.py", line 35, in <module> car_specs.insert(len(car_specs), (new_name[new_doors, new_efficency, new_gearbox, new_price])) TypeError: string indices must be integers >>> ``` Anyone's help would be greatly appreciated, thanks.
2014/11/23
[ "https://Stackoverflow.com/questions/27088984", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4284218/" ]
Just append the tuple to the list making sure to separate new\_name from the list with a `,`: ``` new_name = input("What is the name of the new car?") new_doors = input("How many doors does it have?") new_efficency = input("What is the fuel efficency of the new car?") new_gearbox = input("What type of gearbox?") new_price = input("How much does the new car cost?") car_specs.append(("{}. {}".format(len(car_specs) + 1,new_name),[new_doors, new_efficency, new_gearbox, new_price])) ``` I would use a dict to store the data instead: ``` car_specs = {'2. Ford Focous - Studio': ['5', '48mpg', 'Manual', '\xc2\xa317,295'], '1. Ford Fiesta - Studio': ['3', '54mpg', 'Manual', '\xc2\xa39,995'], '3. Vauxhall Corsa STING': ['3', '53mpg', 'Manual', '\xc2\xa38,995'], '4. VW Golf - S': ['5', '88mpg', 'Manual', '\xc2\xa317,175']} ``` Then add new cars using: ``` car_specs["{}. {}".format(len(car_specs)+1,new_name)] = [new_doors, new_efficency, new_gearbox, new_price] ```
You are not setting the first element of your tuple correctly, and `insert` expects a position index as its first argument, not a string. Also, `new_name` is a string; when you do `new_name[x]` you're asking Python for the character at index `x` in that string, which is what raises the `TypeError`. ``` new_name = input("What is the name of the new car?") new_doors = input("How many doors does it have?") new_efficency = input("What is the fuel efficency of the new car?") new_gearbox = input("What type of gearbox?") new_price = input("How much does the new car cost?") car_specs.append((str(len(car_specs) + 1) + '. - ' + new_name, [new_doors, new_efficency, new_gearbox, new_price])) ```
41,474,163
I was doing a Singly-Linked List implementation and I remember Linus Torvalds talking about it [here](https://youtu.be/o8NPllzkFhE?t=890). In a singly-linked list, in order to remove a node we need access to the previous node so we can change the node it is currently pointing to, like this: [![enter image description here](https://i.stack.imgur.com/rVCdE.png)](https://i.stack.imgur.com/rVCdE.png) So either way we need access to the previous node. But Linus Torvalds removed the special case by using the idea of an address in C: the head also has a 'previous thing', which is the address of the head pointer itself. So he used C's pointers and addresses to remove the special case. The normal code with the special case: [![enter image description here](https://i.stack.imgur.com/gvF0j.png)](https://i.stack.imgur.com/gvF0j.png) The code with the special case becoming the normal case: [![enter image description here](https://i.stack.imgur.com/7XBSb.png)](https://i.stack.imgur.com/7XBSb.png) I think this kind of special-case removal in a singly linked list cannot be done in Python, because we don't have the concept of pointers (and hence I will not be able to go one step before head). Am I right?
2017/01/04
[ "https://Stackoverflow.com/questions/41474163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1597944/" ]
Sure you can do this in Python. What he's saying is that you have some data structure that represents the list itself and points to the head of the list, and you manipulate that just as you would the pointer in a list item when you're dealing with the first list item. Now Python is not C so the implementation would be different, but the principle applies. The list itself is not the same object as its first item, and list items should not have the same methods as the list as a whole, so it makes sense to use separate kinds of objects for them. Both of them can, however, use an attribute of the same name (e.g. `next`) to point to the next item. So when you iterate through the list, and you are at the first item, the "previous" item is the list itself, and you are manipulating its `next` attribute if you need to remove the first item. In the real world, of course, you would never write your own Python linked list class except as an exercise. The built-in `list` is more efficient.
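A minimal sketch of that idea, with hypothetical `Node`/`LinkedList` classes sharing a `next` attribute so that deleting the first item is no longer a special case:

```
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.next = None            # same attribute name as Node.next

    def remove(self, value):
        prev = self                 # the list object itself acts as "previous" for the head
        while prev.next is not None:
            if prev.next.value == value:
                prev.next = prev.next.next   # same code path for head and interior nodes
                return True
            prev = prev.next
        return False

lst = LinkedList()
lst.next = Node(1, Node(2, Node(3)))
lst.remove(1)                       # removing the first item needs no special case
```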
You cannot use Linus's specific trick in Python, because, as you well know, Python does not have pointers (as such) or an address-of operator. You can still, however, eliminate a special case for the list head by giving the list a dummy head node. You can do that as an inherent part of the design of your list, or you can do it on the fly just by creating an extra node and making it refer to the first data-bearing node as its next node. Either way, all the nodes you might want to delete are then interior nodes, not special cases.
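A sketch of the on-the-fly dummy-node variant, assuming plain nodes with `value` and `next` attributes:

```
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def remove(head, value):
    dummy = Node(None, head)        # temporary extra node in front of the real head
    prev = dummy
    while prev.next is not None:
        if prev.next.value == value:
            prev.next = prev.next.next
            break
        prev = prev.next
    return dummy.next               # new head; changes only if the old head was removed
```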
41,474,163
I was doing a Singly-Linked List implementation and I remember Linus Torvalds talking about it [here](https://youtu.be/o8NPllzkFhE?t=890). In a singly-linked list, in order to remove a node we need access to the previous node so we can change the node it is currently pointing to, like this: [![enter image description here](https://i.stack.imgur.com/rVCdE.png)](https://i.stack.imgur.com/rVCdE.png) So either way we need access to the previous node. But Linus Torvalds removed the special case by using the idea of an address in C: the head also has a 'previous thing', which is the address of the head pointer itself. So he used C's pointers and addresses to remove the special case. The normal code with the special case: [![enter image description here](https://i.stack.imgur.com/gvF0j.png)](https://i.stack.imgur.com/gvF0j.png) The code with the special case becoming the normal case: [![enter image description here](https://i.stack.imgur.com/7XBSb.png)](https://i.stack.imgur.com/7XBSb.png) I think this kind of special-case removal in a singly linked list cannot be done in Python, because we don't have the concept of pointers (and hence I will not be able to go one step before head). Am I right?
2017/01/04
[ "https://Stackoverflow.com/questions/41474163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1597944/" ]
Sure you can do this in Python. What he's saying is that you have some data structure that represents the list itself and points to the head of the list, and you manipulate that just as you would the pointer in a list item when you're dealing with the first list item. Now Python is not C so the implementation would be different, but the principle applies. The list itself is not the same object as its first item, and list items should not have the same methods as the list as a whole, so it makes sense to use separate kinds of objects for them. Both of them can, however, use an attribute of the same name (e.g. `next`) to point to the next item. So when you iterate through the list, and you are at the first item, the "previous" item is the list itself, and you are manipulating its `next` attribute if you need to remove the first item. In the real world, of course, you would never write your own Python linked list class except as an exercise. The built-in `list` is more efficient.
My first thought upon reading your question was: why would you want to build a singly linked list in Python? Python offers a wealth of collection types, and you can use them without having to worry about whether they are implemented as a singly linked list, as a doubly linked list, or as some non-recursive data structure (which is usually easier to handle).

But the answer to your question is: Python does of course let you build a singly linked list. For example, the following code does just that:

```
class Node:
    def __init__(self, x, next):
        self.x = x
        self.next = next

    def __str__(self):
        return "<{}, {!s}>".format(self.x, self.next)

n = Node(1, None)
n = Node(2, n)
n = Node(3, n)
print(n)
# output: <3, <2, <1, None>>>

n.next.next = n.next.next.next
print(n)
# output: <3, <2, None>>
```

The difference from C is that we did not have to `malloc()` or work with pointers, because Python handles memory for us. Python has references instead of pointers; they are similar, but much safer and easier to use. However, before implementing a linked list, you should consider what your requirements are regarding your collection; maybe you can pick a good one from the built-ins or from the `collections` module.
41,474,163
I was working on a singly-linked list implementation, and I remembered Linus Torvalds talking about it [here](https://youtu.be/o8NPllzkFhE?t=890). In a singly-linked list, in order to remove a node we need access to the previous node, so that we can change what it currently points to. Like this

[![enter image description here](https://i.stack.imgur.com/rVCdE.png)](https://i.stack.imgur.com/rVCdE.png)

So either way we need access to the previous node. But Linus Torvalds removed the special case by using the idea of an address in C: the head also has a 'previous thing', namely the address of the head pointer, which points to the head. So he used C's pointers and addresses to remove the special case.

The normal code with the special case

[![enter image description here](https://i.stack.imgur.com/gvF0j.png)](https://i.stack.imgur.com/gvF0j.png)

The code with the special case becoming the normal case

[![enter image description here](https://i.stack.imgur.com/7XBSb.png)](https://i.stack.imgur.com/7XBSb.png)

I think this kind of special-case removal cannot be done in Python, because we don't have the concept of pointers (and hence I cannot go one step before the head). Am I right?
2017/01/04
[ "https://Stackoverflow.com/questions/41474163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1597944/" ]
Sure you can do this in Python. What he's saying is that you have some data structure that represents the list itself and points to the head of the list, and you manipulate that just as you would the pointer in a list item when you're dealing with the first list item. Now Python is not C so the implementation would be different, but the principle applies. The list itself is not the same object as its first item, and list items should not have the same methods as the list as a whole, so it makes sense to use separate kinds of objects for them. Both of them can, however, use an attribute of the same name (e.g. `next`) to point to the next item. So when you iterate through the list, and you are at the first item, the "previous" item is the list itself, and you are manipulating its `next` attribute if you need to remove the first item. In the real world, of course, you would never write your own Python linked list class except as an exercise. The built-in `list` is more efficient.
You need two levels of indirection to do it the way Linus suggests, but you can potentially do it in Python by having a reference to an object which stores a reference to an object, or something of this sort (an index to an index?). That said, I don't think it maps so elegantly or efficiently to Python, and it would probably be quite wasteful to use an object just to represent a single link in a linked structure. In Python's case I'd just do the additional branching to check for cases where you're removing from the head, unless there's some trick I'm missing.

As for implementing linked lists yourself, I actually find many use cases where the standard libraries don't suffice. Here's one example:

[![enter image description here](https://i.stack.imgur.com/S9i9A.png)](https://i.stack.imgur.com/S9i9A.png)

... where the grid might have 10,000 cells. Most linked lists provided by standard libraries aren't optimized to store 10,000+ linked lists at the cost of just a 32-bit index per list, since they're trying to provide interfaces that allow the linked list to be used in isolation (not using a separate backing data structure, like an array, for storage).

Typically the most efficient use of a linked list is one that doesn't own memory or manage any resources. It just links data in an auxiliary fashion that is already allocated and managed in another data structure, like this for a 128-bit (16-byte) tree node in an n-ary tree where elements can be stored at any level of the hierarchy:

```
struct TreeNode
{
    int32 parent;       // parent index or -1 for no parent
    int32 first_child;  // first child of this node or -1
    int32 next_sibling; // next child for the parent node or -1
    int32 element;      // element data stored in this node or -1
                        // if no data is associated
};
```

So there are a lot of use cases for implementing your own linked lists and other linked structures which are significantly more efficient for a more narrowly-applicable use case (grid data structures, octrees, quad-trees, graphs, etc). But again, I don't think you can use this trick in languages that don't easily allow you to utilize two or more levels of pointer indirection. Python inherently only has one for objects -- same with Java and C#. You'd need something like a *"reference to a reference to an object"*, an *"index to an index to an object"*, or *"an index to an object reference to an object"*.

Also, linked lists generally aren't so useful in languages that don't let you manage where everything is stored in memory, since you can otherwise get cache misses galore when iterating through a linked list whose nodes are fragmented in memory, as would often be the case after a GC cycle, e.g. Making linked lists really efficient, as in the case of the Linux kernel, requires fine control over where each node resides in memory, so that list traversal is mostly, if not entirely, just iterating through contiguous chunks of memory. Otherwise you're generally better off using small arrays, even if that implies linear-time removals and insertions to/from the middle.
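For what it's worth, the index-linked idea translates to Python too; here is my own small sketch (not from the answer) where per-cell heads and the per-element `next` links live in flat integer arrays, so each extra list costs a single integer slot:

```
import array

NUM_CELLS = 4
head = array.array('l', [-1] * NUM_CELLS)  # first-element index per cell; -1 = empty
nxt = array.array('l')                     # "next" link per element, stored as an index


def push(cell):
    """Link a new element into `cell`'s list; returns the new element's index."""
    nxt.append(head[cell])     # the new element points at the old front
    head[cell] = len(nxt) - 1  # and becomes the new front
    return head[cell]


def iter_cell(cell):
    i = head[cell]
    while i != -1:
        yield i
        i = nxt[i]


push(2)
push(2)
push(0)
print(list(iter_cell(2)))  # [1, 0] -- the most recently pushed element comes first
```

In CPython this won't match the cache behavior the answer describes for C, but it does show the storage layout: no node objects, just parallel arrays of indices.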
41,474,163
I was working on a singly-linked list implementation, and I remembered Linus Torvalds talking about it [here](https://youtu.be/o8NPllzkFhE?t=890). In a singly-linked list, in order to remove a node we need access to the previous node, so that we can change what it currently points to. Like this

[![enter image description here](https://i.stack.imgur.com/rVCdE.png)](https://i.stack.imgur.com/rVCdE.png)

So either way we need access to the previous node. But Linus Torvalds removed the special case by using the idea of an address in C: the head also has a 'previous thing', namely the address of the head pointer, which points to the head. So he used C's pointers and addresses to remove the special case.

The normal code with the special case

[![enter image description here](https://i.stack.imgur.com/gvF0j.png)](https://i.stack.imgur.com/gvF0j.png)

The code with the special case becoming the normal case

[![enter image description here](https://i.stack.imgur.com/7XBSb.png)](https://i.stack.imgur.com/7XBSb.png)

I think this kind of special-case removal cannot be done in Python, because we don't have the concept of pointers (and hence I cannot go one step before the head). Am I right?
2017/01/04
[ "https://Stackoverflow.com/questions/41474163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1597944/" ]
You cannot use Linus's specific trick in Python, because, as you well know, Python does not have pointers (as such) or an address-of operator. You can still, however, eliminate a special case for the list head by giving the list a dummy head node. You can do that as an inherent part of the design of your list, or you can do it on the fly just by creating an extra node and making it refer to the first data-bearing node as its next node. Either way, all the nodes you might want to delete are then interior nodes, not special cases.
You need two levels of indirection to do it the way Linus suggests, but you can potentially do it in Python by having a reference to an object which stores a reference to an object, or something of this sort (an index to an index?). That said, I don't think it maps so elegantly or efficiently to Python, and it would probably be quite wasteful to use an object just to represent a single link in a linked structure. In Python's case I'd just do the additional branching to check for cases where you're removing from the head, unless there's some trick I'm missing.

As for implementing linked lists yourself, I actually find many use cases where the standard libraries don't suffice. Here's one example:

[![enter image description here](https://i.stack.imgur.com/S9i9A.png)](https://i.stack.imgur.com/S9i9A.png)

... where the grid might have 10,000 cells. Most linked lists provided by standard libraries aren't optimized to store 10,000+ linked lists at the cost of just a 32-bit index per list, since they're trying to provide interfaces that allow the linked list to be used in isolation (not using a separate backing data structure, like an array, for storage).

Typically the most efficient use of a linked list is one that doesn't own memory or manage any resources. It just links data in an auxiliary fashion that is already allocated and managed in another data structure, like this for a 128-bit (16-byte) tree node in an n-ary tree where elements can be stored at any level of the hierarchy:

```
struct TreeNode
{
    int32 parent;       // parent index or -1 for no parent
    int32 first_child;  // first child of this node or -1
    int32 next_sibling; // next child for the parent node or -1
    int32 element;      // element data stored in this node or -1
                        // if no data is associated
};
```

So there are a lot of use cases for implementing your own linked lists and other linked structures which are significantly more efficient for a more narrowly-applicable use case (grid data structures, octrees, quad-trees, graphs, etc). But again, I don't think you can use this trick in languages that don't easily allow you to utilize two or more levels of pointer indirection. Python inherently only has one for objects -- same with Java and C#. You'd need something like a *"reference to a reference to an object"*, an *"index to an index to an object"*, or *"an index to an object reference to an object"*.

Also, linked lists generally aren't so useful in languages that don't let you manage where everything is stored in memory, since you can otherwise get cache misses galore when iterating through a linked list whose nodes are fragmented in memory, as would often be the case after a GC cycle, e.g. Making linked lists really efficient, as in the case of the Linux kernel, requires fine control over where each node resides in memory, so that list traversal is mostly, if not entirely, just iterating through contiguous chunks of memory. Otherwise you're generally better off using small arrays, even if that implies linear-time removals and insertions to/from the middle.
41,474,163
I was working on a singly-linked list implementation, and I remembered Linus Torvalds talking about it [here](https://youtu.be/o8NPllzkFhE?t=890). In a singly-linked list, in order to remove a node we need access to the previous node, so that we can change what it currently points to. Like this

[![enter image description here](https://i.stack.imgur.com/rVCdE.png)](https://i.stack.imgur.com/rVCdE.png)

So either way we need access to the previous node. But Linus Torvalds removed the special case by using the idea of an address in C: the head also has a 'previous thing', namely the address of the head pointer, which points to the head. So he used C's pointers and addresses to remove the special case.

The normal code with the special case

[![enter image description here](https://i.stack.imgur.com/gvF0j.png)](https://i.stack.imgur.com/gvF0j.png)

The code with the special case becoming the normal case

[![enter image description here](https://i.stack.imgur.com/7XBSb.png)](https://i.stack.imgur.com/7XBSb.png)

I think this kind of special-case removal cannot be done in Python, because we don't have the concept of pointers (and hence I cannot go one step before the head). Am I right?
2017/01/04
[ "https://Stackoverflow.com/questions/41474163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1597944/" ]
My first thought upon reading your question was: why would you want to build a singly linked list in Python? Python offers a wealth of collection types, and you can use them without having to worry about whether they are implemented as a singly linked list, as a doubly linked list, or as some non-recursive data structure (which is usually easier to handle).

But the answer to your question is: Python does of course let you build a singly linked list. For example, the following code does just that:

```
class Node:
    def __init__(self, x, next):
        self.x = x
        self.next = next

    def __str__(self):
        return "<{}, {!s}>".format(self.x, self.next)

n = Node(1, None)
n = Node(2, n)
n = Node(3, n)
print(n)
# output: <3, <2, <1, None>>>

n.next.next = n.next.next.next
print(n)
# output: <3, <2, None>>
```

The difference from C is that we did not have to `malloc()` or work with pointers, because Python handles memory for us. Python has references instead of pointers; they are similar, but much safer and easier to use. However, before implementing a linked list, you should consider what your requirements are regarding your collection; maybe you can pick a good one from the built-ins or from the `collections` module.
You need two levels of indirection to do it the way Linus suggests, but you can potentially do it in Python by having a reference to an object which stores a reference to an object, or something of this sort (an index to an index?). That said, I don't think it maps so elegantly or efficiently to Python, and it would probably be quite wasteful to use an object just to represent a single link in a linked structure. In Python's case I'd just do the additional branching to check for cases where you're removing from the head, unless there's some trick I'm missing.

As for implementing linked lists yourself, I actually find many use cases where the standard libraries don't suffice. Here's one example:

[![enter image description here](https://i.stack.imgur.com/S9i9A.png)](https://i.stack.imgur.com/S9i9A.png)

... where the grid might have 10,000 cells. Most linked lists provided by standard libraries aren't optimized to store 10,000+ linked lists at the cost of just a 32-bit index per list, since they're trying to provide interfaces that allow the linked list to be used in isolation (not using a separate backing data structure, like an array, for storage).

Typically the most efficient use of a linked list is one that doesn't own memory or manage any resources. It just links data in an auxiliary fashion that is already allocated and managed in another data structure, like this for a 128-bit (16-byte) tree node in an n-ary tree where elements can be stored at any level of the hierarchy:

```
struct TreeNode
{
    int32 parent;       // parent index or -1 for no parent
    int32 first_child;  // first child of this node or -1
    int32 next_sibling; // next child for the parent node or -1
    int32 element;      // element data stored in this node or -1
                        // if no data is associated
};
```

So there are a lot of use cases for implementing your own linked lists and other linked structures which are significantly more efficient for a more narrowly-applicable use case (grid data structures, octrees, quad-trees, graphs, etc). But again, I don't think you can use this trick in languages that don't easily allow you to utilize two or more levels of pointer indirection. Python inherently only has one for objects -- same with Java and C#. You'd need something like a *"reference to a reference to an object"*, an *"index to an index to an object"*, or *"an index to an object reference to an object"*.

Also, linked lists generally aren't so useful in languages that don't let you manage where everything is stored in memory, since you can otherwise get cache misses galore when iterating through a linked list whose nodes are fragmented in memory, as would often be the case after a GC cycle, e.g. Making linked lists really efficient, as in the case of the Linux kernel, requires fine control over where each node resides in memory, so that list traversal is mostly, if not entirely, just iterating through contiguous chunks of memory. Otherwise you're generally better off using small arrays, even if that implies linear-time removals and insertions to/from the middle.
34,522,741
A request comes to Tornado's GET handler of a web app. From the `GET` function, a `blocking_task` function is called. This `blocking_task` function has the `@run_on_executor` decorator. But the execution fails; it seems that the Motor db call is not able to run on that thread.

```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop

import argparse
from common.config import APIConfig
import sys
import os

import motor

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config-file", dest='config_file',
                    help="Config file location")
args = parser.parse_args()
CONF = APIConfig().parse(args.config_file)

client = motor.MotorClient(CONF.mongo_url)
db = client[CONF.mongo_dbname]

class Handler(web.RequestHandler):
    executor = ThreadPoolExecutor(10)

    def initialize(self):
        """ Prepares the database for the entire class """
        self.db = self.settings["db"]

    @gen.coroutine
    def get(self):
        self.blocking_task()

    @run_on_executor
    def blocking_task(self):
        mongo_dict = self.db.test_cases.find_one({"name": "Ping"})

if __name__ == "__main__":
    app = web.Application([
        (r"/", Handler),
    ],
        db=db,
        debug=CONF.api_debug_on,
    )

    app.listen(8888)
    IOLoop.current().start()
```

The error:

```
ERROR:tornado.application:Exception in callback <functools.partial object at 0x7f72dfbe48e8>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.3-py2.7-linux-x86_64.egg/tornado/ioloop.py", line 600, in _run_callback
    ret = callback()
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.3-py2.7-linux-x86_64.egg/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/motor-0.5-py2.7.egg/motor/frameworks/tornado.py", line 231, in callback
    child_gr.switch(future.result())
error: cannot switch to a different thread
```

Could you please help with this?
2015/12/30
[ "https://Stackoverflow.com/questions/34522741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5722595/" ]
Finally, the following code works; thank you @kwarunek. I also added parameters to the callback function.

```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop

import argparse
from common.config import APIConfig
import sys
import os

import motor

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config-file", dest='config_file',
                    help="Config file location")
args = parser.parse_args()
CONF = APIConfig().parse(args.config_file)

client = motor.MotorClient(CONF.mongo_url)
db = client[CONF.mongo_dbname]

class Handler(web.RequestHandler):
    executor = ThreadPoolExecutor(10)

    def initialize(self):
        """ Prepares the database for the entire class """
        self.db = self.settings["db"]

    @gen.coroutine
    def get(self):
        self.blocking_task("Ping", "Void-R")

    @run_on_executor
    def blocking_task(self, name, status):
        IOLoop.instance().add_callback(callback=lambda: self.some_update(name, status))

    @gen.coroutine
    def some_update(self, name, status):
        mongo_dict = yield self.db.test_cases.find_one({"name": name})
        self.db.test_cases.update({"name": name}, {"$set": {"status": status}})

if __name__ == "__main__":
    app = web.Application([
        (r"/", Handler),
    ],
        db=db,
        debug=CONF.api_debug_on,
    )

    app.listen(8888)
    IOLoop.current().start()
```
From [docs](http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.run_on_executor)

> IOLoop and executor to be used are determined by the io\_loop and executor attributes of self. To use different attributes, pass keyword arguments to the decorator

You have to provide an initialized `ThreadPoolExecutor`:

```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop

class Handler(web.RequestHandler):
    executor = ThreadPoolExecutor(10)

    @gen.coroutine
    def get(self):
        self.blocking_task()

    @run_on_executor
    def blocking_task(self):
        time.sleep(10)

if __name__ == "__main__":
    app = web.Application([
        (r"/", Handler),
    ])
    app.listen(8888)
    IOLoop.current().start()
```

By default `run_on_executor` searches for the thread pool in the `executor` attribute, unless you pass another one explicitly, e.g.

```
_thread_pool = ThreadPoolExecutor(10)

@run_on_executor(executor='_thread_pool')
def blocking_task(self):
    pass
```

**edit**

Basically the IOLoop should be used in a single-threaded environment (you can run a separate IOLoop on each thread, but that is not your case). To communicate with the IOLoop you should use [add\_callback](http://www.tornadoweb.org/en/stable/ioloop.html#tornado.ioloop.IOLoop.add_callback), which is the only thread-safe function. You can use it like:

```
@run_on_executor
def blocking_task(self):
    IOLoop.instance().add_callback(some_update)

@gen.coroutine
def some_update():
    db.test_cases.update({"name": "abc"}, {"$set": {"status": "xyz"}})
```

But do you really need threading at all? What is the purpose of the separate thread if you schedule the update on the main (IOLoop's) thread?
34,522,741
A request comes to Tornado's GET handler of a web app. From the `GET` function, a `blocking_task` function is called. This `blocking_task` function has the `@run_on_executor` decorator. But the execution fails; it seems that the Motor db call is not able to run on that thread.

```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop

import argparse
from common.config import APIConfig
import sys
import os

import motor

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config-file", dest='config_file',
                    help="Config file location")
args = parser.parse_args()
CONF = APIConfig().parse(args.config_file)

client = motor.MotorClient(CONF.mongo_url)
db = client[CONF.mongo_dbname]

class Handler(web.RequestHandler):
    executor = ThreadPoolExecutor(10)

    def initialize(self):
        """ Prepares the database for the entire class """
        self.db = self.settings["db"]

    @gen.coroutine
    def get(self):
        self.blocking_task()

    @run_on_executor
    def blocking_task(self):
        mongo_dict = self.db.test_cases.find_one({"name": "Ping"})

if __name__ == "__main__":
    app = web.Application([
        (r"/", Handler),
    ],
        db=db,
        debug=CONF.api_debug_on,
    )

    app.listen(8888)
    IOLoop.current().start()
```

The error:

```
ERROR:tornado.application:Exception in callback <functools.partial object at 0x7f72dfbe48e8>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.3-py2.7-linux-x86_64.egg/tornado/ioloop.py", line 600, in _run_callback
    ret = callback()
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.3-py2.7-linux-x86_64.egg/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/motor-0.5-py2.7.egg/motor/frameworks/tornado.py", line 231, in callback
    child_gr.switch(future.result())
error: cannot switch to a different thread
```

Could you please help with this?
2015/12/30
[ "https://Stackoverflow.com/questions/34522741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5722595/" ]
From [docs](http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.run_on_executor)

> IOLoop and executor to be used are determined by the io\_loop and executor attributes of self. To use different attributes, pass keyword arguments to the decorator

You have to provide an initialized `ThreadPoolExecutor`:

```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop

class Handler(web.RequestHandler):
    executor = ThreadPoolExecutor(10)

    @gen.coroutine
    def get(self):
        self.blocking_task()

    @run_on_executor
    def blocking_task(self):
        time.sleep(10)

if __name__ == "__main__":
    app = web.Application([
        (r"/", Handler),
    ])
    app.listen(8888)
    IOLoop.current().start()
```

By default `run_on_executor` searches for the thread pool in the `executor` attribute, unless you pass another one explicitly, e.g.

```
_thread_pool = ThreadPoolExecutor(10)

@run_on_executor(executor='_thread_pool')
def blocking_task(self):
    pass
```

**edit**

Basically the IOLoop should be used in a single-threaded environment (you can run a separate IOLoop on each thread, but that is not your case). To communicate with the IOLoop you should use [add\_callback](http://www.tornadoweb.org/en/stable/ioloop.html#tornado.ioloop.IOLoop.add_callback), which is the only thread-safe function. You can use it like:

```
@run_on_executor
def blocking_task(self):
    IOLoop.instance().add_callback(some_update)

@gen.coroutine
def some_update():
    db.test_cases.update({"name": "abc"}, {"$set": {"status": "xyz"}})
```

But do you really need threading at all? What is the purpose of the separate thread if you schedule the update on the main (IOLoop's) thread?
Motor is a non-blocking library, designed to be used from the single `IOLoop` thread. You would use a `ThreadPoolExecutor` with a blocking library like PyMongo, but you must not use other threads with Motor. Instead, you should call the Motor methods with `yield` directly:

```
@gen.coroutine
def get(self):
    yield self.non_blocking_task()

@gen.coroutine
def non_blocking_task(self):
    motor_dict = yield self.db.test_cases.find_one({"name": "Ping"})
```

Also note that if you do use `@run_on_executor` with a blocking library like PyMongo, the decorator makes blocking functions non-blocking, so the decorated function must be called with `yield`.
34,522,741
A request comes to Tornado's GET handler of a web app. From the `GET` function, a `blocking_task` function is called. This `blocking_task` function has the `@run_on_executor` decorator. But the execution fails; it seems that the Motor db call is not able to run on that thread.

```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop

import argparse
from common.config import APIConfig
import sys
import os

import motor

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config-file", dest='config_file',
                    help="Config file location")
args = parser.parse_args()
CONF = APIConfig().parse(args.config_file)

client = motor.MotorClient(CONF.mongo_url)
db = client[CONF.mongo_dbname]

class Handler(web.RequestHandler):
    executor = ThreadPoolExecutor(10)

    def initialize(self):
        """ Prepares the database for the entire class """
        self.db = self.settings["db"]

    @gen.coroutine
    def get(self):
        self.blocking_task()

    @run_on_executor
    def blocking_task(self):
        mongo_dict = self.db.test_cases.find_one({"name": "Ping"})

if __name__ == "__main__":
    app = web.Application([
        (r"/", Handler),
    ],
        db=db,
        debug=CONF.api_debug_on,
    )

    app.listen(8888)
    IOLoop.current().start()
```

The error:

```
ERROR:tornado.application:Exception in callback <functools.partial object at 0x7f72dfbe48e8>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.3-py2.7-linux-x86_64.egg/tornado/ioloop.py", line 600, in _run_callback
    ret = callback()
  File "/usr/local/lib/python2.7/dist-packages/tornado-4.3-py2.7-linux-x86_64.egg/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/motor-0.5-py2.7.egg/motor/frameworks/tornado.py", line 231, in callback
    child_gr.switch(future.result())
error: cannot switch to a different thread
```

Could you please help with this?
2015/12/30
[ "https://Stackoverflow.com/questions/34522741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5722595/" ]
Finally, the following code works; thank you @kwarunek. I also added parameters to the callback function.

```
import time
from concurrent.futures import ThreadPoolExecutor
from tornado import gen, web
from tornado.concurrent import run_on_executor
from tornado.ioloop import IOLoop

import argparse
from common.config import APIConfig
import sys
import os

import motor

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--config-file", dest='config_file',
                    help="Config file location")
args = parser.parse_args()
CONF = APIConfig().parse(args.config_file)

client = motor.MotorClient(CONF.mongo_url)
db = client[CONF.mongo_dbname]

class Handler(web.RequestHandler):
    executor = ThreadPoolExecutor(10)

    def initialize(self):
        """ Prepares the database for the entire class """
        self.db = self.settings["db"]

    @gen.coroutine
    def get(self):
        self.blocking_task("Ping", "Void-R")

    @run_on_executor
    def blocking_task(self, name, status):
        IOLoop.instance().add_callback(callback=lambda: self.some_update(name, status))

    @gen.coroutine
    def some_update(self, name, status):
        mongo_dict = yield self.db.test_cases.find_one({"name": name})
        self.db.test_cases.update({"name": name}, {"$set": {"status": status}})

if __name__ == "__main__":
    app = web.Application([
        (r"/", Handler),
    ],
        db=db,
        debug=CONF.api_debug_on,
    )

    app.listen(8888)
    IOLoop.current().start()
```
Motor is a non-blocking library, designed to be used from the single `IOLoop` thread. You would use a `ThreadPoolExecutor` with a blocking library like PyMongo, but you must not use other threads with Motor. Instead, you should call the Motor methods with `yield` directly:

```
@gen.coroutine
def get(self):
    yield self.non_blocking_task()

@gen.coroutine
def non_blocking_task(self):
    motor_dict = yield self.db.test_cases.find_one({"name": "Ping"})
```

Also note that if you do use `@run_on_executor` with a blocking library like PyMongo, the decorator makes blocking functions non-blocking, so the decorated function must be called with `yield`.
61,257,025
I'm new to Python and tkinter, and I'm trying to create a tool which loops over a directory every 5 seconds to list all the files. In my code, the filenames appear in the list only after I interrupt the loop. My goal is to have a Start button that kicks off the endless loop listing the files, and a Stop button that stops the loop.

```
from tkinter import filedialog
import tkinter as tk
import time
import os

global dateiListe

def browse_button():
    global pfad
    global dateiname
    dateiname = filedialog.askdirectory()
    pfad.set(dateiname)
    if len(dateiname) > 0:
        print( len(dateiname) )
        btn_schleifeStart['state'] = tk.NORMAL
    else:
        print( len(dateiname) )
        btn_schleifeStart['state'] = tk.DISABLED

def start_schleife():
    btn_ordnerWählen['state'] = tk.DISABLED
    btn_schleifeStart['state'] = tk.DISABLED
    while True:
        dateiListe = []
        for datei in os.listdir(dateiname):
            if datei.lower().endswith(('.png', '.jpg', '.jpeg')):
                listBox.insert(1, datei)
                listBox.insert(2, datei)
                print(datei)
        time.sleep(5)

root = tk.Tk()
root.geometry("500x400")

pfad = tk.StringVar()
btn_ordnerWählen = tk.Button(text="Ordner wählen", command=browse_button)
btn_schleifeStart = tk.Button(text="Start", command=start_schleife, state=tk.DISABLED)
txt_pfad = tk.Label(master=root, textvariable=pfad, fg="blue")
listBox = tk.Listbox(root)

btn_ordnerWählen.grid(row=0, column=0, sticky="sw")
txt_pfad.grid(row=1, column=0)
btn_schleifeStart.grid(row=3, column=0)
listBox.grid(row=4, column=0)

root.mainloop()
```
2020/04/16
[ "https://Stackoverflow.com/questions/61257025", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10474530/" ]
Unary means one, so what they are talking about is a constructor with a single parameter. The standard name for such a thing is a [conversion constructor](https://stackoverflow.com/questions/15077466/what-is-a-converting-constructor-in-c-what-is-it-for).
Unary refers to one or singular, so a 'unary constructor' refers to a constructor that takes a single parameter.
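The term is language-agnostic; as a minimal Python illustration (class name made up), a unary constructor is simply an `__init__` that takes exactly one argument besides `self`:

```
class Celsius:
    def __init__(self, degrees):  # one parameter besides self -> "unary"
        self.degrees = degrees


t = Celsius(21.5)
print(t.degrees)  # 21.5
```

In C++, such a single-argument constructor is what the converting-constructor rules apply to, which is why the two terms often come up together.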
45,628,813
Previously I was working without unit tests, and I had this structure for my project:

```
-main.py
-folderFunctions:
    -functionA.py
```

Just using an `__init__` file in folderFunctions and importing

```
from folderFunctions import functionA
```

everything was working well. Now I also have unit tests, which I organized this way:

```
-main.py
-folderFunctions:
    -functionA.py
    -folderTest:
        -testFunctionA.py
```

So I had to add these 2 lines to both functionA.py and testFunctionA.py (in order to run testFunctionA.py) to import the path:

```
myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '../..')
```

This way the tests work properly. But it looks ugly to me, and I guess it's also not very Pythonic. Is there a way to make it more elegant?
2017/08/11
[ "https://Stackoverflow.com/questions/45628813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5178905/" ]
If you want your library/application to become bigger and easy to package, I highly recommend separating the source code from the test code, because test code shouldn't be packaged in binary distributions (egg or wheel). You can follow this tree structure:

```
+-- src/
|   +-- main.py
|   \-- folder_functions/   # <- Python package
|       +-- __init__.py
|       \-- function_a.py
\-- tests/
    \-- folder_functions/
        +-- __init__.py
        \-- test_function_a.py
```

Note: according to [PEP8](https://www.python.org/dev/peps/pep-0008/), Python package and module names should be in "snake case" (lowercase + underscores).

The **src** directory could be avoided if you have a main package (and you should). As explained in other comments, the **setup.py** file should stand next to the **src** and **tests** folders (root level).

Read the [Python Packaging User Guide](https://packaging.python.org)

**edit**

The next step is to create a **setup.py**, for instance:

```
from setuptools import find_packages
from setuptools import setup

setup(
    name='Your-App',
    version='0.1.0',
    author='Your Name',
    author_email='your@email',
    url='URL of your project home page',
    description="one line description",
    long_description='long description ',
    classifiers=[
        'Development Status :: 4 - Beta',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Python Software Foundation License',
        'Operating System :: MacOS :: MacOS X',
        'Operating System :: Microsoft :: Windows',
        'Operating System :: POSIX',
        'Programming Language :: Python',
        'Topic :: Software Development',
    ],
    platforms=["Linux", "Windows", "OS X"],
    license="MIT License",
    keywords="keywords",
    packages=find_packages("src"),
    package_dir={'': 'src'},
    entry_points={
        'console_scripts': [
            'cmd_name = main:main',
        ],
    })
```

Once your project is configured, you can create a virtualenv and install your application inside it:

```
virtualenv your-app
source your-app/bin/activate
pip install -e .
```

You can run your tests with the standard `unittest` module. To import your module in your **test\_function\_a.py**, just proceed as usual:

```
from folder_functions import function_a
```
The more elegant way is `from folderFunctions.folderTest import testFunctionA` and make sure that you have an `__init__.py` file in the `folderTest` directory. You may also look at this [question](https://stackoverflow.com/questions/8953844/import-module-from-subfolder)
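For completeness, a sketch of what the test module might look like under that layout (file names follow the question; running `python -m unittest discover` from the project root is assumed, so the package-absolute import resolves without any `sys.path` tweaking):

```
# folderFunctions/folderTest/testFunctionA.py
import unittest

from folderFunctions import functionA  # resolvable when tests run from the project root


class TestFunctionA(unittest.TestCase):
    def test_module_is_importable(self):
        # placeholder assertion; replace with real checks against functionA
        self.assertTrue(hasattr(functionA, "__name__"))


if __name__ == "__main__":
    unittest.main()
```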
49,054,768
I'm going to optimize three variable `x`, `alpha` and `R`. `X` is a one dimensional vector, `alpha` is a two dimensional vector and `R` is a scalar value. How can I maximize this function? I write below code: ``` #from scipy.optimize import least_squares from scipy.optimize import minimize import numpy as np sentences_lengths =[6, 3] length_constraint=5 sentences_idx=[0, 1] sentences_scores=[.1,.2] damping=1 pairwise_idx=[(0,0),(0,1),(1,0),(1,1)] overlap_matrix=[[0,.01],[.02,0]] def func(x, R, alpha, sign=1.0): """ Objective function """ return sign*(sum(x[i] * sentences_scores[i] for i in sentences_idx) - damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx)) x0=np.array([1,0]) R0=.1 alpha0=np.array([1,0,0,0]) def func_deriv(x, R, alpha, sign=1.0): """ Derivative of objective function """ #Partial derivative to x dfdX = sign*(sum(sentences_scores[i] for i in sentences_idx)) #Partial derivative to R dfdR= sign*(- damping * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx)) #Partial derivative to alpha dfdAlpha= sign*(- damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx)) return [ dfdX, dfdR, dfdAlpha] cons = ({'type': 'ineq', ## Constraints: one constraint for the size + consistency constraints #sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint 'fun' : lambda x: length_constraint - sum(x[i] * sentences_lengths[i] for i in sentences_idx) , 'jac' : lambda x: [-sum(sentences_lengths[i] for i in sentences_idx), 0, 0]} ,{'type': 'ineq', #alpha[i][j] - x[i] <= 0 'fun' : lambda x: [x[i]-alpha[i][j] for i,j in pairwise_idx], 'jac' : lambda x: [1.0, 0.0, -1.0]} ,{'type': 'ineq', #alpha[i][j] - x[j] <= 0 'fun' : lambda x: [x[j]-alpha[i][j] for i,j in pairwise_idx], 'jac' : lambda x: [1.0, 0.0, -1.0]} ,{'type': 'ineq', #x[i] + x[j] - alpha[i][j] <= 1 'fun' : lambda x: [1+alpha[i][j]-x[i]-x[j] for i,j in pairwise_idx], 'jac' : lambda x: [-1.0-1.0, 0.0, 1.0]}) res = minimize(func, (x0,R0,alpha0) , args=(sentences_lengths ,length_constraint ,sentences_idx ,sentences_scores ,damping ,pairwise_idx ,overlap_matrix,) , jac=func_deriv , constraints=cons , method='SLSQP' , options={'disp': True}) ``` I get Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-a1a91fdf2d13> in <module>() 55 , constraints=cons 56 , method='SLSQP' ---> 57 , options={'disp': True}) 58 59 #res = least_squares(fun, (x,R,alpha), jac=jac, bounds=bounds, args=(sentences_scores, damping,overlap_matrix), verbose=1) /usr/local/lib/python3.5/dist-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options) 456 elif meth == 'slsqp': 457 return _minimize_slsqp(fun, x0, args, jac, bounds, --> 458 constraints, callback=callback, **options) 459 elif meth == 'dogleg': 460 return _minimize_dogleg(fun, x0, args, jac, hess, /usr/local/lib/python3.5/dist-packages/scipy/optimize/slsqp.py in _minimize_slsqp(func, x0, args, jac, bounds, constraints, maxiter, ftol, iprint, disp, eps, callback, **unknown_options) 305 306 # Transform x0 into an array. 
--> 307 x = asfarray(x0).flatten() 308 309 # Set the parameters that SLSQP will need /usr/local/lib/python3.5/dist-packages/numpy/lib/type_check.py in asfarray(a, dtype) 102 if not issubclass(dtype, _nx.inexact): 103 dtype = _nx.float_ --> 104 return asarray(a, dtype=dtype) 105 106 /usr/local/lib/python3.5/dist-packages/numpy/core/numeric.py in asarray(a, dtype, order) 529 530 """ --> 531 return array(a, dtype, copy=False, order=order) 532 533 ValueError: setting an array element with a sequence. ```
2018/03/01
[ "https://Stackoverflow.com/questions/49054768", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2742177/" ]
I find the solution. ``` from scipy.optimize import least_squares from scipy.optimize import minimize import numpy as np def func(x_f, *args, sign=1.0): """ Objective function """ sentences_lengths, length_constraint, sentences_idx, sentences_scores, damping, pairwise_idx, overlap_matrix\ , x_ini_size, R0_size, alpha0_shape = args x=(x_f[:x_ini_size]) R=x_f[x_ini_size:x_ini_size+R0_size] alpha=(x_f[x_ini_size+R0_size:].reshape(alpha0_shape)) return sign*(sum((x[i]) * sentences_scores[i] for i in sentences_idx) - damping * R * sum((alpha[i][j]) * overlap_matrix[i][j] for i,j in pairwise_idx)) def func_deriv(x, R, alpha, sign=1.0): """ Derivative of objective function """ #Partial derivative to x dfdX = sign*(sum(sentences_scores[i] for i in sentences_idx)) #Partial derivative to R dfdR= sign*(- damping * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx)) #Partial derivative to alpha dfdAlpha= sign*(- damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx)) return [ dfdX, dfdR, dfdAlpha] """print(list(x_ini)) a = np.array([list(x_ini),list(R0),list(alpha0)]) print(a) ccc=[x_ini,R0,alpha0] print(x_ini) print(list(ccc)) x0=np.concatenate([x_ini,R0,alpha0]) print(x0.flatten())""" """ pairwise_idx-------->>> array([[0, 0], [0, 1], [1, 0], [1, 1]]) overlap_matrix----------->> array([[ 0. , 0.01], [ 0.02, 0. ]]) alpha0--->>> array([[1, 0], [0, 0]]) """ sentences_lengths =[6, 3] length_constraint=5 sentences_idx=[0, 1] sentences_scores=[.1,.2] damping=1.0 pairwise_idx=np.array([[0, 0],[0, 1],[1, 0],[1, 1]]) overlap_matrix=np.array([[0,.01],[.02,0]]) x_ini=np.array([0,0]) R0=np.array([.1]) alpha0=np.array([[0,0],[0,0]]) x_ini_size = x_ini.size R0_size = R0.size alpha0_shape = alpha0.shape x0 = np.concatenate([x_ini, R0, alpha0.flatten()]) #x1bnds = [int(s) for s in range(0,2)] #x1bnds=np.array([0,1]) #x1bnds=np.array([0,2], dtype=int) #x1bnds = ((0,0),(1,1)) #x1bnds =np.arange(0,2, 1) x1bnds = (0, 1) x2bnds = (0, 1) Rbnds = (0, 1) alpha1bnds= (0, 1) alpha2bnds= (0, 1) alpha3bnds= (0, 1) alpha4bnds= (0, 1) bnds = (x1bnds, x2bnds, Rbnds, alpha1bnds, alpha2bnds, alpha3bnds, alpha4bnds) #x=x_f[:x_ini_size] #alpha=x_f[x_ini_size+R0_size:].reshape(alpha0_shape) """cons = ({'type': 'ineq', ## Constraints: one constraint for the size + consistency constraints #sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint 'fun' : lambda x_f: np.array([length_constraint - sum(x_f[:x_ini_size][i] * sentences_lengths[i] for i in sentences_idx)]) , 'jac' : lambda x_f: np.array([-sum(sentences_lengths[i] for i in sentences_idx), 0, 0])} ,{'type': 'ineq', #alpha[i][j] - x[i] <= 0 'fun' : lambda x_f: np.array([x_f[:x_ini_size][i]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx]) , 'jac' : lambda x_f: np.array([1.0, 0.0, -1.0])} ,{'type': 'ineq', #alpha[i][j] - x[j] <= 0 'fun' : lambda x_f: np.array([x_f[:x_ini_size][j]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx]) , 'jac' : lambda x_f: np.array([1.0, 0.0, -1.0])} ,{'type': 'ineq', #x[i] + x[j] - alpha[i][j] <= 1 'fun' : lambda x_f: np.array([1+x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]-x_f[:x_ini_size][i]-x_f[:x_ini_size][j] for i,j in pairwise_idx]) , 'jac' : lambda x_f: np.array([-1.0-1.0, 0.0, 1.0])}) """ cons = ({'type': 'ineq', ## Constraints: one constraint for the size + consistency constraints #sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint 'fun' : lambda x_f: np.array([length_constraint - 
sum(x_f[:x_ini_size][i] * sentences_lengths[i] for i in sentences_idx)]) } ,{'type': 'ineq', #alpha[i][j] - x[i] <= 0 'fun' : lambda x_f: np.array([(x_f[:x_ini_size][i])-(x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]) for i,j in pairwise_idx]) } ,{'type': 'ineq', #alpha[i][j] - x[j] <= 0 'fun' : lambda x_f: np.array([(x_f[:x_ini_size][j])-(x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]) for i,j in pairwise_idx]) } ,{'type': 'ineq', #x[i] + x[j] - alpha[i][j] <= 1 'fun' : lambda x_f: np.array([1+(x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j])-(x_f[:x_ini_size][i])-(x_f[:x_ini_size][j]) for i,j in pairwise_idx]) } ,{'type':'eq' ,'fun': lambda x_f : np.array([(x_f[:x_ini_size][i]-int(x_f[:x_ini_size][i])) for i in sentences_idx])}) res = minimize(func , x0 , args=(sentences_lengths , length_constraint , sentences_idx , sentences_scores , damping, pairwise_idx , overlap_matrix , x_ini_size , R0_size , alpha0_shape) , method='SLSQP' #, jac=func_deriv , constraints=cons , bounds=bnds , options={'disp': True}) #res = least_squares(fun, (x,R,alpha), jac=jac, bounds=bounds, args=(sentences_scores, damping,overlap_matrix), verbose=1) print(res) ``` The result is: ``` Optimization terminated successfully. (Exit mode 0) Current function value: 0.0 Iterations: 1 Function evaluations: 9 Gradient evaluations: 1 fun: 0.0 jac: array([ 0.1 , 0.2 , 0. , 0. , -0.001, -0.002, 0. ]) message: 'Optimization terminated successfully.' nfev: 9 nit: 1 njev: 1 status: 0 success: True x: array([ 0. , 0. , 0.1, 0. , 0. , 0. , 0. ]) ``` The result is the same initial values. Is it not wonderful?
I can do this task. ``` from scipy.optimize import least_squares from scipy.optimize import minimize import numpy as np def func(x_f, *args, sign=1.0): """ Objective function """ sentences_lengths, length_constraint, sentences_idx, sentences_scores, damping, pairwise_idx, overlap_matrix\ , x_ini_size, R0_size, alpha0_shape = args x=x_f[:x_ini_size] R=x_f[x_ini_size:x_ini_size+R0_size] alpha=x_f[x_ini_size+R0_size:].reshape(alpha0_shape) return sign*(sum(x[i] * sentences_scores[i] for i in sentences_idx) - damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx)) def func_deriv(x, R, alpha, sign=1.0): """ Derivative of objective function """ #Partial derivative to x dfdX = sign*(sum(sentences_scores[i] for i in sentences_idx)) #Partial derivative to R dfdR= sign*(- damping * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx)) #Partial derivative to alpha dfdAlpha= sign*(- damping * R * sum(alpha[i][j] * overlap_matrix[i][j] for i,j in pairwise_idx)) return [ dfdX, dfdR, dfdAlpha] """print(list(x_ini)) a = np.array([list(x_ini),list(R0),list(alpha0)]) print(a) ccc=[x_ini,R0,alpha0] print(x_ini) print(list(ccc)) x0=np.concatenate([x_ini,R0,alpha0]) print(x0.flatten())""" """ pairwise_idx-------->>> array([[0, 0], [0, 1], [1, 0], [1, 1]]) overlap_matrix----------->> array([[ 0. , 0.01], [ 0.02, 0. ]]) alpha0--->>> array([[1, 0], [0, 0]]) """ sentences_lengths =[6, 3] length_constraint=5 sentences_idx=[0, 1] sentences_scores=[.1,.2] damping=1.0 pairwise_idx=np.array([[0, 0],[0, 1],[1, 0],[1, 1]]) overlap_matrix=np.array([[0,.01],[.02,0]]) x_ini=np.array([1,0]) R0=np.array([.1]) alpha0=np.array([[1,0],[0,0]]) x_ini_size = x_ini.size R0_size = R0.size alpha0_shape = alpha0.shape x0 = np.concatenate([x_ini, R0, alpha0.flatten()]) #x1bnds = [int(s) for s in range(0,2)] #x1bnds=np.array([0,1]) #x1bnds=np.array([0,2], dtype=int) #x1bnds = ((0,0),(1,1)) x1bnds =np.arange(0,2, 1) x2bnds = (0, 1) Rbnds = (0, 1) alpha1bnds= [0, 1] alpha2bnds= [0, 1] alpha3bnds= [0, 1] alpha4bnds= np.array([0,2], dtype=int) bnds = (x1bnds, x2bnds, Rbnds, alpha1bnds, alpha2bnds, alpha3bnds, alpha4bnds) #x=x_f[:x_ini_size] #alpha=x_f[x_ini_size+R0_size:].reshape(alpha0_shape) """cons = ({'type': 'ineq', ## Constraints: one constraint for the size + consistency constraints #sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint 'fun' : lambda x_f: np.array([length_constraint - sum(x_f[:x_ini_size][i] * sentences_lengths[i] for i in sentences_idx)]) , 'jac' : lambda x_f: np.array([-sum(sentences_lengths[i] for i in sentences_idx), 0, 0])} ,{'type': 'ineq', #alpha[i][j] - x[i] <= 0 'fun' : lambda x_f: np.array([x_f[:x_ini_size][i]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx]) , 'jac' : lambda x_f: np.array([1.0, 0.0, -1.0])} ,{'type': 'ineq', #alpha[i][j] - x[j] <= 0 'fun' : lambda x_f: np.array([x_f[:x_ini_size][j]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx]) , 'jac' : lambda x_f: np.array([1.0, 0.0, -1.0])} ,{'type': 'ineq', #x[i] + x[j] - alpha[i][j] <= 1 'fun' : lambda x_f: np.array([1+x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]-x_f[:x_ini_size][i]-x_f[:x_ini_size][j] for i,j in pairwise_idx]) , 'jac' : lambda x_f: np.array([-1.0-1.0, 0.0, 1.0])}) """ cons = ({'type': 'ineq', ## Constraints: one constraint for the size + consistency constraints #sum(x[i] * sentences_lengths[i] for i in sentences_idx) <= length_constraint 'fun' : lambda x_f: np.array([length_constraint - sum(x_f[:x_ini_size][i] 
* sentences_lengths[i] for i in sentences_idx)]) } ,{'type': 'ineq', #alpha[i][j] - x[i] <= 0 'fun' : lambda x_f: np.array([x_f[:x_ini_size][i]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx]) } ,{'type': 'ineq', #alpha[i][j] - x[j] <= 0 'fun' : lambda x_f: np.array([x_f[:x_ini_size][j]-x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j] for i,j in pairwise_idx]) } ,{'type': 'ineq', #x[i] + x[j] - alpha[i][j] <= 1 'fun' : lambda x_f: np.array([1+x_f[x_ini_size+R0_size:].reshape(alpha0_shape)[i][j]-x_f[:x_ini_size][i]-x_f[:x_ini_size][j] for i,j in pairwise_idx]) }) res = minimize(func , x0 , args=(sentences_lengths , length_constraint , sentences_idx , sentences_scores , damping, pairwise_idx , overlap_matrix , x_ini_size , R0_size , alpha0_shape) , method='SLSQP' #, jac=func_deriv , constraints=cons , bounds=bnds , options={'disp': True}) #res = least_squares(fun, (x,R,alpha), jac=jac, bounds=bounds, args=(sentences_scores, damping,overlap_matrix), verbose=1) print(res) ```
34,586,114
In Django, the convention is to put all of your static files (i.e. css, js) specific to your app into a folder called **static**. So the structure would look like this:

```
mysite/
    manage.py
    mysite/ --> (settings.py, etc)
    myapp/ --> (models.py, views.py, etc)
        static/
```

In `mysite/settings.py` I have:

```
STATIC_ROOT = 'staticfiles'
```

So when I run the command:

```
python manage.py collectstatic
```

it creates a folder called `staticfiles` at the root level (so the same directory as `myapp/`).

What's the point of this? Isn't it just creating a copy of all my static files?
2016/01/04
[ "https://Stackoverflow.com/questions/34586114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Collect static files from multiple apps into a single path
----------------------------------------------------------

Well, a single Django *project* may use several *apps*, so while there you only have one `myapp`, it may actually be `myapp1`, `myapp2`, etc.

By copying them from inside the individual apps into a single folder, you can point your frontend web server (e.g. nginx) to that single folder `STATIC_ROOT` and serve static files from a single location, rather than configure your web server to serve static files from multiple paths.

Persistent URLs with [ManifestStaticFilesStorage](https://docs.djangoproject.com/en/3.2/ref/contrib/staticfiles/#manifeststaticfilesstorage)
--------------------------------------------------------------------------------------------------------------------------------------------

A note about the MD5 hash being appended to the filename for versioning: It's not part of the default behavior of `collectstatic`, as `settings.STATICFILES_STORAGE` defaults to `StaticFilesStorage` (which doesn't do that).

The MD5 hash will kick in e.g. if you set it to use `ManifestStaticFilesStorage`, which adds that behavior.

> The purpose of this storage is to keep serving the old files in case some pages still refer to those files, e.g. because they are cached by you or a 3rd party proxy server. Additionally, it's very helpful if you want to apply far future Expires headers to the deployed files to speed up the load time for subsequent page visits.
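For reference, a minimal sketch of how the hashed-filename behavior might be switched on in settings (this uses the pre-Django-4.2 `STATICFILES_STORAGE` setting; `BASE_DIR` is assumed to be the `pathlib.Path` that `startproject` generates):

```
# settings.py -- sketch
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"
STATICFILES_STORAGE = "django.contrib.staticfiles.storage.ManifestStaticFilesStorage"
```

With that in place, `collectstatic` writes both `css/site.css` and a hashed copy such as `css/site.55e7cbb9ba48.css` (hash value illustrative), plus a manifest mapping one to the other.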
It's useful when there are multiple django apps within the site. `collectstatic` will then collect static files from all the apps in one place - so that it could be served up in a production environment.
34,586,114
In Django, the convention is to put all of your static files (i.e. css, js) specific to your app into a folder called **static**. So the structure would look like this:

```
mysite/
    manage.py
    mysite/ --> (settings.py, etc)
    myapp/ --> (models.py, views.py, etc)
        static/
```

In `mysite/settings.py` I have:

```
STATIC_ROOT = 'staticfiles'
```

So when I run the command:

```
python manage.py collectstatic
```

it creates a folder called `staticfiles` at the root level (so the same directory as `myapp/`).

What's the point of this? Isn't it just creating a copy of all my static files?
2016/01/04
[ "https://Stackoverflow.com/questions/34586114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Collect static files from multiple apps into a single path
----------------------------------------------------------

Well, a single Django *project* may use several *apps*, so while there you only have one `myapp`, it may actually be `myapp1`, `myapp2`, etc.

By copying them from inside the individual apps into a single folder, you can point your frontend web server (e.g. nginx) to that single folder `STATIC_ROOT` and serve static files from a single location, rather than configure your web server to serve static files from multiple paths.

Persistent URLs with [ManifestStaticFilesStorage](https://docs.djangoproject.com/en/3.2/ref/contrib/staticfiles/#manifeststaticfilesstorage)
--------------------------------------------------------------------------------------------------------------------------------------------

A note about the MD5 hash being appended to the filename for versioning: It's not part of the default behavior of `collectstatic`, as `settings.STATICFILES_STORAGE` defaults to `StaticFilesStorage` (which doesn't do that).

The MD5 hash will kick in e.g. if you set it to use `ManifestStaticFilesStorage`, which adds that behavior.

> The purpose of this storage is to keep serving the old files in case some pages still refer to those files, e.g. because they are cached by you or a 3rd party proxy server. Additionally, it's very helpful if you want to apply far future Expires headers to the deployed files to speed up the load time for subsequent page visits.
In a production installation, you want persistent URLs: the URL doesn't change unless the file content changes. This prevents clients from keeping the wrong version of a CSS or JS file on their computer when opening a web page from Django. Django staticfiles detects file changes and updates URLs accordingly, so that if a CSS or JS file changes, the web browser downloads the new version. This is usually achieved by appending an MD5 hash to the filename during the `collectstatic` run.

Edit: Also see the related answer about multiple apps.
34,586,114
In Django, the convention is to put all of your static files (i.e. css, js) specific to your app into a folder called **static**. So the structure would look like this:

```
mysite/
    manage.py
    mysite/ --> (settings.py, etc)
    myapp/ --> (models.py, views.py, etc)
        static/
```

In `mysite/settings.py` I have:

```
STATIC_ROOT = 'staticfiles'
```

So when I run the command:

```
python manage.py collectstatic
```

it creates a folder called `staticfiles` at the root level (so the same directory as `myapp/`).

What's the point of this? Isn't it just creating a copy of all my static files?
2016/01/04
[ "https://Stackoverflow.com/questions/34586114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Collect static files from multiple apps into a single path ---------------------------------------------------------- Well, a single Django *project* may use several *apps*, so while here you only have one `myapp`, it may actually be `myapp1`, `myapp2`, etc. By copying them from inside the individual apps into a single folder, you can point your frontend web server (e.g. nginx) to that single folder `STATIC_ROOT` and serve static files from a single location, rather than configuring your web server to serve static files from multiple paths. Persistent URLs with [ManifestStaticFilesStorage](https://docs.djangoproject.com/en/3.2/ref/contrib/staticfiles/#manifeststaticfilesstorage) -------------------------------------------------------------------------------------------------------------------------------------------- A note about the MD5 hash being appended to the filename for versioning: it's not part of the default behavior of `collectstatic`, as `settings.STATICFILES_STORAGE` defaults to `StaticFilesStorage` (which doesn't do that). The MD5 hash will kick in e.g. if you set it to use `ManifestStaticFilesStorage`, which adds that behavior. > > The purpose of this storage is to keep serving the old files in case > some pages still refer to those files, e.g. because they are cached by > you or a 3rd party proxy server. Additionally, it’s very helpful if > you want to apply far future Expires headers to the deployed files to > speed up the load time for subsequent page visits. > > >
Django static files can be in many places. A file that is served as `/static/img/icon.png` could [come from many places](https://docs.djangoproject.com/en/3.2/ref/settings/#staticfiles-finders). By default: * `FileSystemFinder` will look for `img/icon.png` in each of `STATICFILES_DIRS`, * `AppDirectoriesFinder` will look for `img/icon.png` in the `static` subfolder in each of your `INSTALLED_APPS`. This allows libraries like Django Admin to add their own static files to your app. Now: this only works if you run `manage.py runserver` with `DEBUG = True`. When you go live, the Django process will no longer serve the static assets. It would be inefficient to use Django for serving these; there are more specialised tools specifically for that. Instead, you should do something like this: * find all of the static files from every app * build a single directory that contains all of them * upload them somewhere (a `static` directory somewhere on your webserver or a third-party file storage) * configure your webserver (such as nginx) to serve `/static/*` directly from that directory and redirect any other requests to Django. `collectstatic` is a ready-made script that prepares this directory for you, so that you can connect it directly to your deployment script.
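For that last bullet, the web-server side might look roughly like the sketch below; the paths and port are assumptions that have to match your own `STATIC_URL`, `STATIC_ROOT`, and app server:

```nginx
# Hypothetical nginx config: serve collected files directly,
# and pass everything else on to the Django app server.
location /static/ {
    alias /srv/mysite/staticfiles/;    # your STATIC_ROOT
}
location / {
    proxy_pass http://127.0.0.1:8000;  # e.g. gunicorn running Django
}
```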
34,586,114
In Django, the convention is to put all of your static files (i.e css, js) specific to your app into a folder called **static**. So the structure would look like this: ``` mysite/ manage.py mysite/ --> (settings.py, etc) myapp/ --> (models.py, views.py, etc) static/ ``` In `mysite/settings.py` I have: ``` STATIC_ROOT = 'staticfiles' ``` So when I run the command: ``` python manage.py collectstatic ``` It creates a folder called `staticfiles` at the root level (so same directory as `myapp/`) What's the point of this? Isn't it just creating a copy of all my static files?
2016/01/04
[ "https://Stackoverflow.com/questions/34586114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In the production installation, you want to have persistent URLs. The URL doesn't change unless the file content changes. This prevents clients from having the wrong version of a CSS or JS file on their computer when opening a web page from Django. Django staticfiles detects file changes and updates URLs accordingly, so that if a CSS or JS file changes, the web browser downloads the new version. This is usually achieved by adding an MD5 hash to the filename during the `collectstatic` run. Edit: Also see the related answer about multiple apps.
It's useful when there are multiple django apps within the site. `collectstatic` will then collect static files from all the apps in one place - so that they can be served up in a production environment.
34,586,114
In Django, the convention is to put all of your static files (i.e css, js) specific to your app into a folder called **static**. So the structure would look like this: ``` mysite/ manage.py mysite/ --> (settings.py, etc) myapp/ --> (models.py, views.py, etc) static/ ``` In `mysite/settings.py` I have: ``` STATIC_ROOT = 'staticfiles' ``` So when I run the command: ``` python manage.py collectstatic ``` It creates a folder called `staticfiles` at the root level (so same directory as `myapp/`) What's the point of this? Isn't it just creating a copy of all my static files?
2016/01/04
[ "https://Stackoverflow.com/questions/34586114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Django static files can be in many places. A file that is served as `/static/img/icon.png` could [come from many places](https://docs.djangoproject.com/en/3.2/ref/settings/#staticfiles-finders). By default: * `FileSystemFinder` will look for `img/icon.png` in each of `STATICFILES_DIRS`, * `AppDirectoriesFinder` will look for `img/icon.png` in the `static` subfolder in each of your `INSTALLED_APPS`. This allows libraries like Django Admin to add their own static files to your app. Now: this only works if you run `manage.py runserver` with `DEBUG = True`. When you go live, the Django process will no longer serve the static assets. It would be inefficient to use Django for serving these; there are more specialised tools specifically for that. Instead, you should do something like this: * find all of the static files from every app * build a single directory that contains all of them * upload them somewhere (a `static` directory somewhere on your webserver or a third-party file storage) * configure your webserver (such as nginx) to serve `/static/*` directly from that directory and redirect any other requests to Django. `collectstatic` is a ready-made script that prepares this directory for you, so that you can connect it directly to your deployment script.
It's useful when there are multiple django apps within the site. `collectstatic` will then collect static files from all the apps in one place - so that they can be served up in a production environment.
34,586,114
In Django, the convention is to put all of your static files (i.e css, js) specific to your app into a folder called **static**. So the structure would look like this: ``` mysite/ manage.py mysite/ --> (settings.py, etc) myapp/ --> (models.py, views.py, etc) static/ ``` In `mysite/settings.py` I have: ``` STATIC_ROOT = 'staticfiles' ``` So when I run the command: ``` python manage.py collectstatic ``` It creates a folder called `staticfiles` at the root level (so same directory as `myapp/`) What's the point of this? Isn't it just creating a copy of all my static files?
2016/01/04
[ "https://Stackoverflow.com/questions/34586114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Django static files can be in many places. A file that is served as `/static/img/icon.png` could [come from many places](https://docs.djangoproject.com/en/3.2/ref/settings/#staticfiles-finders). By default: * `FileSystemFinder` will look for `img/icon.png` in each of `STATICFILES_DIRS`, * `AppDirectoriesFinder` will look for `img/icon.png` in the `static` subfolder in each of your `INSTALLED_APPS`. This allows libraries like Django Admin to add their own static files to your app. Now: this only works if you run `manage.py runserver` with `DEBUG = True`. When you go live, the Django process will no longer serve the static assets. It would be inefficient to use Django for serving these; there are more specialised tools specifically for that. Instead, you should do something like this: * find all of the static files from every app * build a single directory that contains all of them * upload them somewhere (a `static` directory somewhere on your webserver or a third-party file storage) * configure your webserver (such as nginx) to serve `/static/*` directly from that directory and redirect any other requests to Django. `collectstatic` is a ready-made script that prepares this directory for you, so that you can connect it directly to your deployment script.
In the production installation, you want to have persistent URLs. The URL doesn't change unless the file content changes. This prevents clients from having the wrong version of a CSS or JS file on their computer when opening a web page from Django. Django staticfiles detects file changes and updates URLs accordingly, so that if a CSS or JS file changes, the web browser downloads the new version. This is usually achieved by adding an MD5 hash to the filename during the `collectstatic` run. Edit: Also see the related answer about multiple apps.
60,418,192
Julia newbe here, transitioning from python. So, I want to build what in Python I would call list, made of lists made of lists. In my case, it's a 1000 long list whose element is a list of 3 lists. Until now, I have done it this way: ``` BIG_LIST = collect(Array{Int64,1}[[],[],[]] for i in 1:1000) ``` This served my purpose when all three most inner lists where made of integers. Now I need 2 of them to be of integers, while the third of Float. Is this possible? How do I do it? If you could also explain better how to properly initialize these objects that would be great. I am aware that collect is not the best choice here. Note that the length of the 3 inner lists is the same among the 3, but can vary during the process.
2020/02/26
[ "https://Stackoverflow.com/questions/60418192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10139617/" ]
First, if you know that intermediate lists always have 3 elements, you'll probably be better off using [`Tuple` types](https://docs.julialang.org/en/v1/manual/types/#Tuple-Types-1) for those. And tuples can specify independently the types of their elements. So something like this might suit your purposes: ``` julia> l = [(Int64[], Int64[], Float64[]) for _ in 1:10] 10-element Array{Tuple{Array{Int64,1},Array{Int64,1},Array{Float64,1}},1}: ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) julia> push!(l[1][3], 5) 1-element Array{Float64,1}: 5.0 julia> l 10-element Array{Tuple{Array{Int64,1},Array{Int64,1},Array{Float64,1}},1}: ([], [], [5.0]) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ([], [], []) ``` A few details to note here, that might be of interest to you: * Empty but typed lists can be constructed using `T[]`, where `T` is the element type. * `collect(f(i) for i in 1:n)` is essentially equivalent to a simple comprehension (like you're used to in python): `[f(i) for i in 1:n]`. Note that since variable `i` plays no role here, you can replace it with a `_` placeholder so that it more immediately appears to the reader that you're essentially creating a collection of similar objects (but not identical, in the sense that they don't share the same underlying memory; modifying one won't affect the others). * I don't know of any better way to initialize such a collection and I wouldn't think that using `collect`(or a comprehension) is a bad idea here. For collections of identical objects, [`fill`](https://docs.julialang.org/en/v1/base/arrays/#Base.fill) provides a useful shortcut, but it wouldn't apply here because all sub-lists would be linked. --- Now, if all inner sublists have the same length, you might want to switch to a slightly different data structure: a vector of vectors of tuples: ``` julia> l2 = [Tuple{Int64,Int64,Float64}[] for _ in 1:10] 10-element Array{Array{Tuple{Int64,Int64,Float64},1},1}: [] [] [] [] [] [] [] [] [] [] julia> push!(l2[2], (1,2,pi)) 1-element Array{Tuple{Int64,Int64,Float64},1}: (1, 2, 3.141592653589793) julia> l2 10-element Array{Array{Tuple{Int64,Int64,Float64},1},1}: [] [(1, 2, 3.141592653589793)] [] [] [] [] [] [] [] [] ```
Francois has given you a great answer. I just wanted to raise one other possibility. It sounds like your data has a fairly complicated, but specific, structure. For example, the fact that your outer list has 1000 elements, and your inner list always has 3 lists... Sometimes in these situations it can be more intuitive to just build your own type(s), and write a couple of accessor functions. That way you don't end up doing things like `mylist[3][2][6]` and forgetting which index refers to which dimension of your data. For example: ``` struct MyInnerType field1::Vector{Int} field2::Vector{Int} field3::Vector{Float64} end struct MyOuterType x::Vector{MyInnerType} function MyOuterType(x::Vector{MyInnerType}) length(x) != 1000 && error("This vector should always have length of 1000") new(x) end end ``` I'm guessing here, but perhaps accessor functions like this would be useful for, e.g. `field3`: ``` get_field3(y::MyInnerType, i::Int)::Float64 = y.field3[i] get_field3(z::MyOuterType, iouter::Int, iinner::Int)::Float64 = get_field3(z.x[iouter], iinner) ``` Remember that there is no performance penalty to using your own types in Julia. One other thing, I've included all type information in my functions above for clarity, but this is not actually necessary for getting maximum performance either.
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
Since you are using a custom User model, you can do 4 steps: 1. Comment out django.contrib.admin in your INSTALLED\_APPS settings > > > ``` > INSTALLED_APPS = [ > ... > #'django.contrib.admin', > ... > ] > > ``` > > 2. Comment out the admin path in urls.py > > > ``` > urlpatterns = [ > ... > #path('admin/', admin.site.urls) > ... > ] > > ``` > > 3. Then run > > > ``` > python manage.py migrate > > ``` > > 4. **When done, uncomment everything again**
If you set **AUTH\_USER\_MODEL** in **settings.py** like this: ``` AUTH_USER_MODEL = 'custom_user_app_name.User' ``` you should comment this line out before running the **makemigrations** and **migrate** commands. Then you can uncomment it again.
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
The django\_migrations table in your database is the cause of the inconsistency, and deleting all the migrations just from the local path won't work. You have to truncate the django\_migrations table in your database and then try applying the migrations again. It should work, but if it does not, run makemigrations again and then migrate. Note: don't forget to take a backup of your data.
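A minimal sketch of that truncation, run from `manage.py shell` after taking the backup; `django_migrations` is the table name Django uses by default:

```python
# Sketch: clear Django's recorded migration history (back up first!).
from django.db import connection

with connection.cursor() as cursor:
    # On SQLite use "DELETE FROM django_migrations;" since TRUNCATE is unsupported.
    cursor.execute("TRUNCATE TABLE django_migrations;")
```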
This problem will come up most of the time if you extend the User model after the initial migration. Whenever you extend AbstractUser, it creates the basic fields that were present in the base model, like email, first\_name, etc. This is applicable to any abstract model in Django. So a very simple solution is to either create a new database and then apply migrations, or delete the same database ***[all your data will be deleted in this case]*** and reapply migrations.
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
Let's start off by addressing the issue with most of the answers on this page: **You never *have* to drop your database if you are using Django's migration system correctly and you *should* never delete migrations once they are committed** Now the best solution for you depends on a number of factors which include how experienced you are with Django, what level of understanding you have of the migration system, and how valuable the data in your database is. In short, there are two ways you can address any migration error. 1. Take the *nuclear* option. **Warning:** this is only an option if you are working alone. If other people depend on existing migrations you *cannot* just delete them. * Delete all of your migrations, and rebuild a fresh set with `python3 -m manage makemigrations`. This should remove any problems you had with dependencies or inconsistencies in your migrations. * Drop your entire database. This will remove any problems you had with inconsistencies between your actual database schema and the schema you should have based on your migration history, and will remove any problems you had with inconsistencies between your migration history and your previous migration files [this is what the `InconsistentMigrationHistory` is complaining about]. * Recreate your database schema with `python3 -m manage migrate` 2. Determine the cause of the error and resolve it, because (speaking from experience) the cause is almost certainly something silly *you* did. (Generally as a result of not understanding how to use the migration system correctly). Based on the errors I've caused, there are three categories. 1. *Inconsistencies with migration files.* This is a pretty common one when multiple people are working on a project. Hopefully your changes do not conflict and `makemigrations --merge` can solve this one, otherwise someone is going to have to roll back their migrations to the branching point in order to resolve this. 2. *Inconsistencies between your schema and your migration history.* To manage this someone will have either edited the database schema manually, or deleted migrations. If they deleted a migration, then revert their changes and yell at them; you should *never* delete migrations if others depend on them. If they edited the database schema manually, revert their changes and then yell at them; Django is managing the database schema, no one else. 3. *Inconsistencies between your migration history and your migration files.* [This is the `InconsistentMigrationHistory` issue the asker suffers from, and the one I suffered from when I arrived at this page]. To manage this someone has either manually messed with the `django_migrations` table or deleted a migration *after* it was applied. To resolve this you are going to have to work out how the inconsistency came about and manually resolve it. If your database schema is correct, and it is just your migration history that is wrong, you can manually edit the `django_migrations` table to resolve this. If your database schema is wrong then you will also have to manually edit that to bring it in line with what it should be. Based on your description of the problem and the answer you selected I'm going to assume you are working alone, are new to Django, and don't care about your data. So the nuclear option may be right for you.
If you are not in this situation and the above text looks like gibberish, then I suggest asking the [Django User's Mailing List](https://docs.djangoproject.com/en/dev/internals/mailing-lists/#django-users) for help. There are very helpful people there who can help walk you through resolving the specific mess you are in. Have faith, you can resolve this error without going nuclear!
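For the third category, a small sketch of how you might inspect what the `django_migrations` table actually recorded before editing anything, e.g. from `manage.py shell`:

```python
# Sketch: list applied migrations in the order Django recorded them,
# so they can be compared against the migration files on disk.
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SELECT app, name, applied FROM django_migrations ORDER BY applied;")
    for app, name, applied in cursor.fetchall():
        print(app, name, applied)
```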
First of all, back up your data (copy your db file). **Delete db.sqlite3 and also the migrations folder**. Then run these commands: ``` ./manage.py makemigrations APP_NAME ./manage.py migrate APP_NAME ``` After deleting the DB file and the migrations folder, make sure to write the application name after the migration commands.
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
Here's how to solve this properly. Follow these steps in the migrations folder inside the project: 1. Delete the `__pycache__` folder and the 0001\_initial files. 2. Delete db.sqlite3 from the root directory (be careful, all your data will go away). 3. On the terminal run: ``` python manage.py makemigrations python manage.py migrate ``` Voilà.
When you create a new project with no apps and run ``` python manage.py migrate ``` Django will create 10 tables by default. If you want to create a custom user model which inherits from `AbstractUser` after that, you will encounter this problem, with the following message: > > django.db.migrations.exceptions.InconsistentMigrationHistory: > Migration admin.0001\_initial is applied before its dependency > account.0001\_initial on database 'default'. > > > Finally, I dropped my entire database and ran `python manage.py migrate` again.
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
When you make changes to the default user model, or create a custom user model with AbstractUser, you will often face this error. 1: Remember that when we create a superuser, we log in with a username and password. But if you set USERNAME\_FIELD = 'email', you can no longer log in with a username and password, because your username field has been converted into email. [![So Now it will show like this :](https://i.stack.imgur.com/7JITH.png)](https://i.stack.imgur.com/7JITH.png) And if you try to make another superuser, it will not ask for a username; it will only ask for an email and password. After creating a superuser with only email and password, when you try to log in to the admin panel it will throw that error, because there is no username and the username field is required. [![Error while creating superuser](https://i.stack.imgur.com/fhTM3.png)](https://i.stack.imgur.com/fhTM3.png) 2: That's why, after creating a custom user model, migrate will throw this error. To resolve it, **first add AUTH\_USER\_MODEL = 'appname.custommodelname' in your settings.py** (appname is the app where you defined your custom user model, and custommodelname is the name you gave that model). 3: Then delete the migrations folder of the app where you created the custom user model, and delete the project's db.sqlite3 database. 4: Now run the migrations: python manage.py makemigrations appname (the app where you defined your custom user model). 5: Then migrate with python manage.py migrate. 6: That's it, now it is done.
Since you are using a custom User model, you can first comment out ``` INSTALLED_APPS = [ ... #'django.contrib.admin', ... ] ``` in your INSTALLED\_APPS settings. Also comment out ``` urlpatterns = [ # path('admin/', admin.site.urls) .... .... ] ``` in your base urls.py. Then run ``` python manage.py migrate ``` When done, uncomment ``` 'django.contrib.admin' ``` and ``` path('admin/', admin.site.urls) ```
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
Just **delete all** the `migrations` folders, `__pycache__` folders, and `.pyc` files: ``` find . | grep -E "(__pycache__|\.pyc|\.pyo$|migrations)" | xargs rm -rf ``` Then run: ``` python manage.py makemigrations python manage.py migrate ```
First of all, back up your data (copy your db file). **Delete db.sqlite3 and also the migrations folder**. Then run these commands: ``` ./manage.py makemigrations APP_NAME ./manage.py migrate APP_NAME ``` After deleting the DB file and the migrations folder, make sure to write the application name after the migration commands.
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
Let's start off by addressing the issue with most of the answers on this page: **You never *have* to drop your database if you are using Django's migration system correctly and you *should* never delete migrations once they are committed** Now the best solution for you depends on a number of factors which include how experienced you are with Django, what level of understanding you have of the migration system, and how valuable the data in your database is. In short, there are two ways you can address any migration error. 1. Take the *nuclear* option. **Warning:** this is only an option if you are working alone. If other people depend on existing migrations you *cannot* just delete them. * Delete all of your migrations, and rebuild a fresh set with `python3 -m manage makemigrations`. This should remove any problems you had with dependencies or inconsistencies in your migrations. * Drop your entire database. This will remove any problems you had with inconsistencies between your actual database schema and the schema you should have based on your migration history, and will remove any problems you had with inconsistencies between your migration history and your previous migration files [this is what the `InconsistentMigrationHistory` is complaining about]. * Recreate your database schema with `python3 -m manage migrate` 2. Determine the cause of the error and resolve it, because (speaking from experience) the cause is almost certainly something silly *you* did. (Generally as a result of not understanding how to use the migration system correctly). Based on the errors I've caused, there are three categories. 1. *Inconsistencies with migration files.* This is a pretty common one when multiple people are working on a project. Hopefully your changes do not conflict and `makemigrations --merge` can solve this one, otherwise someone is going to have to roll back their migrations to the branching point in order to resolve this. 2. *Inconsistencies between your schema and your migration history.* To manage this someone will have either edited the database schema manually, or deleted migrations. If they deleted a migration, then revert their changes and yell at them; you should *never* delete migrations if others depend on them. If they edited the database schema manually, revert their changes and then yell at them; Django is managing the database schema, no one else. 3. *Inconsistencies between your migration history and your migration files.* [This is the `InconsistentMigrationHistory` issue the asker suffers from, and the one I suffered from when I arrived at this page]. To manage this someone has either manually messed with the `django_migrations` table or deleted a migration *after* it was applied. To resolve this you are going to have to work out how the inconsistency came about and manually resolve it. If your database schema is correct, and it is just your migration history that is wrong, you can manually edit the `django_migrations` table to resolve this. If your database schema is wrong then you will also have to manually edit that to bring it in line with what it should be. Based on your description of the problem and the answer you selected I'm going to assume you are working alone, are new to Django, and don't care about your data. So the nuclear option may be right for you.
If you are not in this situation and the above text looks like gibberish, then I suggest asking the [Django User's Mailing List](https://docs.djangoproject.com/en/dev/internals/mailing-lists/#django-users) for help. There are very helpful people there who can help walk you through resolving the specific mess you are in. Have faith, you can resolve this error without going nuclear!
In my case the problem showed up when starting pytest. I just changed `--reuse-db` to `--create-db`, ran pytest, and then changed it back. This fixed my problem.
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
You can directly delete db.sqlite3, then migrate; a new database is automatically generated. It should fix it. ``` rm db.sqlite3 python manage.py makemigrations python manage.py migrate ```
**django.db.migrations.exceptions.InconsistentMigrationHistory #On Creating Custom User Model** I had that same issue today, and none of the above solutions worked, then I thought to erase all the data from my local PostgreSQL database using this following command ``` -- Drop everything from the PostgreSQL database. DO $$ DECLARE q TEXT; r RECORD; BEGIN -- triggers FOR r IN (SELECT pns.nspname, pc.relname, pt.tgname FROM pg_catalog.pg_trigger pt, pg_catalog.pg_class pc, pg_catalog.pg_namespace pns WHERE pns.oid=pc.relnamespace AND pc.oid=pt.tgrelid AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') AND pt.tgisinternal=false ) LOOP EXECUTE format('DROP TRIGGER %I ON %I.%I;', r.tgname, r.nspname, r.relname); END LOOP; -- constraints #1: foreign key FOR r IN (SELECT pns.nspname, pc.relname, pcon.conname FROM pg_catalog.pg_constraint pcon, pg_catalog.pg_class pc, pg_catalog.pg_namespace pns WHERE pns.oid=pc.relnamespace AND pc.oid=pcon.conrelid AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') AND pcon.contype='f' ) LOOP EXECUTE format('ALTER TABLE ONLY %I.%I DROP CONSTRAINT %I;', r.nspname, r.relname, r.conname); END LOOP; -- constraints #2: the rest FOR r IN (SELECT pns.nspname, pc.relname, pcon.conname FROM pg_catalog.pg_constraint pcon, pg_catalog.pg_class pc, pg_catalog.pg_namespace pns WHERE pns.oid=pc.relnamespace AND pc.oid=pcon.conrelid AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') AND pcon.contype<>'f' ) LOOP EXECUTE format('ALTER TABLE ONLY %I.%I DROP CONSTRAINT %I;', r.nspname, r.relname, r.conname); END LOOP; -- indicēs FOR r IN (SELECT pns.nspname, pc.relname FROM pg_catalog.pg_class pc, pg_catalog.pg_namespace pns WHERE pns.oid=pc.relnamespace AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') AND pc.relkind='i' ) LOOP EXECUTE format('DROP INDEX %I.%I;', r.nspname, r.relname); END LOOP; -- normal and materialised views FOR r IN (SELECT pns.nspname, pc.relname FROM pg_catalog.pg_class pc, pg_catalog.pg_namespace pns WHERE pns.oid=pc.relnamespace AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') AND pc.relkind IN ('v', 'm') ) LOOP EXECUTE format('DROP VIEW %I.%I;', r.nspname, r.relname); END LOOP; -- tables FOR r IN (SELECT pns.nspname, pc.relname FROM pg_catalog.pg_class pc, pg_catalog.pg_namespace pns WHERE pns.oid=pc.relnamespace AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') AND pc.relkind='r' ) LOOP EXECUTE format('DROP TABLE %I.%I;', r.nspname, r.relname); END LOOP; -- sequences FOR r IN (SELECT pns.nspname, pc.relname FROM pg_catalog.pg_class pc, pg_catalog.pg_namespace pns WHERE pns.oid=pc.relnamespace AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') AND pc.relkind='S' ) LOOP EXECUTE format('DROP SEQUENCE %I.%I;', r.nspname, r.relname); END LOOP; -- extensions (only if necessary; keep them normally) FOR r IN (SELECT pns.nspname, pe.extname FROM pg_catalog.pg_extension pe, pg_catalog.pg_namespace pns WHERE pns.oid=pe.extnamespace AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') ) LOOP EXECUTE format('DROP EXTENSION %I;', r.extname); END LOOP; -- aggregate functions first (because they depend on other functions) FOR r IN (SELECT pns.nspname, pp.proname, pp.oid FROM pg_catalog.pg_proc pp, pg_catalog.pg_namespace pns, pg_catalog.pg_aggregate pagg WHERE pns.oid=pp.pronamespace AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast') AND pagg.aggfnoid=pp.oid ) LOOP EXECUTE 
format('DROP AGGREGATE %I.%I(%s);', r.nspname, r.proname, pg_get_function_identity_arguments(r.oid)); END LOOP; -- routines (functions, aggregate functions, procedures, window functions) IF EXISTS (SELECT * FROM pg_catalog.pg_attribute WHERE attrelid='pg_catalog.pg_proc'::regclass AND attname='prokind' -- PostgreSQL 11+ ) THEN q := 'CASE pp.prokind WHEN ''p'' THEN ''PROCEDURE'' WHEN ''a'' THEN ''AGGREGATE'' ELSE ''FUNCTION'' END'; ELSIF EXISTS (SELECT * FROM pg_catalog.pg_attribute WHERE attrelid='pg_catalog.pg_proc'::regclass AND attname='proisagg' -- PostgreSQL ≤10 ) THEN q := 'CASE pp.proisagg WHEN true THEN ''AGGREGATE'' ELSE ''FUNCTION'' END'; ELSE q := '''FUNCTION'''; END IF; FOR r IN EXECUTE 'SELECT pns.nspname, pp.proname, pp.oid, ' || q || ' AS pt FROM pg_catalog.pg_proc pp, pg_catalog.pg_namespace pns WHERE pns.oid=pp.pronamespace AND pns.nspname NOT IN (''information_schema'', ''pg_catalog'', ''pg_toast'') ' LOOP EXECUTE format('DROP %s %I.%I(%s);', r.pt, r.nspname, r.proname, pg_get_function_identity_arguments(r.oid)); END LOOP; -- nōn-default schemata we own; assume to be run by a not-superuser FOR r IN (SELECT pns.nspname FROM pg_catalog.pg_namespace pns, pg_catalog.pg_roles pr WHERE pr.oid=pns.nspowner AND pns.nspname NOT IN ('information_schema', 'pg_catalog', 'pg_toast', 'public') AND pr.rolname=current_user ) LOOP EXECUTE format('DROP SCHEMA %I;', r.nspname); END LOOP; -- voilà RAISE NOTICE 'Database cleared!'; END; $$; ``` After this you can run django command for migrations ``` python manage.py makemigrations python manage.py migrate ``` And Absolutely that will work . Thank You.
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
Your error is essentially: ``` Migration "B" is applied before its dependency "A" on database 'default'. ``` **Sanity Check**: First, open your database and look at the records in the 'django\_migrations' table. Records should be listed in chronological order (e.g. A, B, C, D...). Make sure that the name of the "A" migration listed in the error matches the name of the "A" migration listed in the database. (They can differ if you previously edited, deleted, or renamed migration files manually.) **To Fix This**, rename migration A, either in the database or by renaming the file. BUT make sure the changes match up with what other developers on your team have in their databases (or with what is on your production database).
First delete all the migrations and the db.sqlite3 file, then follow these steps: ``` $ ./manage.py makemigrations myapp $ ./manage.py squashmigrations myapp 0001 # the number may differ ``` Delete the old migration file and finally: ``` $ ./manage.py migrate ```
44,651,760
When I run `python manage.py migrate` on my Django project, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/hari/project/env/local/lib/python2.7/site- packages/django/core/management/__init__.py", line 363, in execute_from_command_line utility.execute() File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 355, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 86, in handle executor.loader.check_consistent_history(connection) File "/home/hari/project/env/local/lib/python2.7/site-packages/django/db/migrations/loader.py", line 298, in check_consistent_history connection.alias, django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency account.0001_initial on database 'default'. ``` I have a user model like below: ``` class User(AbstractUser): place = models.CharField(max_length=64, null=True, blank=True) address = models.CharField(max_length=128, null=True, blank=True) ``` How can I solve this problem?
2017/06/20
[ "https://Stackoverflow.com/questions/44651760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3186922/" ]
You can directly delete db.sqlite3, then migrate; a new database is automatically generated. It should fix it. ``` rm db.sqlite3 python manage.py makemigrations python manage.py migrate ```
I encountered this when migrating from Wagtail 2.0 to 2.4, but have seen it a few other times when a third party app squashes a migration *after* your current version but before the version you’re migrating to. The shockingly simple solution in this case at least is: ``` ./manage.py migrate ./manage.py makemigrations ./manage.py migrate ``` i.e. run a single migrate before trying to makemigrations.
54,235,347
I am implementing a GUI in Python/Flask. The way flask is designed, the local host along with the port number has to be "manually" opened. Is there a way to automate it so that upon running the code, browser(local host) is automatically opened? I tried using webbrowser package but it opens the webpage after the session is killed. I also looked at the following posts but they are going over my head. [Shell script opening flask powered webpage opens two windows](https://stackoverflow.com/questions/28056360/shell-script-opening-flask-powered-webpage-opens-two-windows) [python webbrowser.open(url)](https://stackoverflow.com/questions/2634235/python-webbrowser-openurl) Problem occurs when html pages are rendered based on user inputs. Thanks in advance. ``` import webbrowser from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello World!" if __name__ == "__main__": webbrowser.open_new('http://127.0.0.1:2000/') app.run(port=2000) ```
2019/01/17
[ "https://Stackoverflow.com/questions/54235347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9557881/" ]
Use a timer to open the web browser from a new thread once the server is starting. Note the URL must point at the same port the app runs on: ``` import webbrowser from threading import Timer from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello World!" def open_browser(): webbrowser.open_new("http://127.0.0.1:2000/") if __name__ == "__main__": Timer(1, open_browser).start() app.run(port=2000) ```
**I'd suggest the following improvement to allow for loading of the browser when in debug mode:** *Inspired by [this answer](https://stackoverflow.com/a/9476701/10521959), this will only load the browser on the first run...* ``` import os import webbrowser def main(): # The reloader has not yet run - open the browser if not os.environ.get("WERKZEUG_RUN_MAIN"): webbrowser.open_new('http://127.0.0.1:2000/') # Otherwise, continue as normal app.run(host="127.0.0.1", port=2000) if __name__ == '__main__': main() ```
42,506,954
I'm calling `curl` from a Perl script to POST a file: ```perl my $cookie = 'Cookie: _appwebSessionId_=' . $sessionid; my $reply = `curl -s -H "Content-type:application/x-www-form-urlencoded" -H "$cookie" --data \@portports.txt http://$ipaddr/remote_api.esp`; ``` I want to use the Python [requests](http://docs.python-requests.org/en/master/) module instead. I've tried the following Python code: ```py files = {'file': ('portports.txt', open('portports.txt', 'rb'))} headers = { 'Content-type' : 'application/x-www-form-urlencoded', 'Cookie' : '_appwebSessionId_=%s' % sessionid } r = requests.post('http://%s/remote_api.esp' % ip, headers=headers, files=files) print(r.text) ``` But I always get the response "ERROR no data found in request." How can I fix this?
2017/02/28
[ "https://Stackoverflow.com/questions/42506954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153606/" ]
The `files` parameter encodes your file as a multipart message, which is not what you want. Use the `data` parameter instead: ``` import requests url = 'http://www.example.com/' headers = {'Content-Type': 'application/x-www-form-urlencoded'} cookies = {'_appwebSessionId_': '1234'} with open('foo', 'rb') as file: response = requests.post(url, headers=headers, data=file, cookies=cookies) print(response.text) ``` This generates a request like: ```none POST / HTTP/1.1 Connection: keep-alive Accept: */* Accept-Encoding: gzip, deflate Host: www.example.com User-Agent: python-requests/2.13.0 Content-Length: 15 Content-Type: application/x-www-form-urlencoded Cookie: _appwebSessionId_=1234 content of foo ``` Note that in both this version and in your original `curl` command, the file must already be URL encoded.
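If you would rather not URL-encode the payload yourself, a hedged alternative is to hand `requests` a dict, which it form-encodes for you. Note this produces a key=value body rather than the raw file content, and the field name `data` below is hypothetical, so use whatever key the endpoint actually expects:

```python
# Sketch: let requests perform the application/x-www-form-urlencoded encoding.
with open('portports.txt') as fh:
    response = requests.post(url, data={'data': fh.read()},  # hypothetical field name
                             cookies=cookies)
print(response.text)
```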
First UTF-8 decode your URL. Put the headers and files in a single object, let's say all\_data. Now your code should look like this. ```python import json all_data = { 'file': ('portports.txt', open('portports.txt', 'rb').read().decode('utf-8')), 'Content-type' : 'application/x-www-form-urlencoded', 'Cookie' : '_appwebSessionId_=%s' % sessionid } all_data = json.dumps(all_data) requests.post(url, data = all_data) ```
57,812,562
I want to see the full trace of the code till a particular point so i do ``` ... import traceback traceback.print_stack() ... ``` Then it will show ``` File ".venv/lib/python3.7/site-packages/django/db/models/query.py", line 144, in __iter__ return compiler.results_iter(tuple_expected=True, chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) File ".venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1052, in results_iter results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size) File ".venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1100, in execute_sql cursor.execute(sql, params) File ".venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 110, in execute extra={'duration': duration, 'sql': sql, 'params': params} File "/usr/lib64/python3.7/logging/__init__.py", line 1371, in debug self._log(DEBUG, msg, args, **kwargs) File "/usr/lib64/python3.7/logging/__init__.py", line 1519, in _log self.handle(record) File "/usr/lib64/python3.7/logging/__init__.py", line 1528, in handle if (not self.disabled) and self.filter(record): File "/usr/lib64/python3.7/logging/__init__.py", line 762, in filter result = f.filter(record) File "basic_django/settings.py", line 402, in filter traceback.print_stack() ``` How to make this output more colorful using pygments. Generally to colorize a json string in python i do ``` from pygments import highlight from pygments.lexers import JsonLexer from pygments.formatters import TerminalTrueColorFormatter json_str = '{ "name":"John" }' print(highlight(json_str, JsonLexer(), TerminalTrueColorFormatter())) ``` Similarly how to do that with `traceback.print_stack()` **Answer I Used based on Alexander Huszagh** 1) we have to use `Python3TracebackLexer` 2) we have to use `traceback.format_stack()` which gives a `list` and then concatenate them as a `string` using `''.join(traceback.format_stack())`. ``` import traceback import pygments from pygments.lexers import Python3TracebackLexer from pygments.formatters import TerminalTrueColorFormatter traceback_color = pygments.highlight(''.join(traceback.format_stack()),Python3TracebackLexer(),TerminalTrueColorFormatter(style='trac')) # trac or rainbow_dash i prefer print(traceback_color) ```
2019/09/05
[ "https://Stackoverflow.com/questions/57812562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2897115/" ]
Pygments lists the available [lexers](http://pygments.org/docs/lexers/). You can do this with `Python3TracebackLexer`.

```
from pygments import highlight
from pygments.lexers import Python3TracebackLexer
from pygments.formatters import TerminalTrueColorFormatter

err_str = '''
File ".venv/lib/python3.7/site-packages/django/db/models/query.py", line 144, in __iter__
    return compiler.results_iter(tuple_expected=True, chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File ".venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1052, in results_iter
    results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
File ".venv/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1100, in execute_sql
    cursor.execute(sql, params)
File ".venv/lib/python3.7/site-packages/django/db/backends/utils.py", line 110, in execute
    extra={'duration': duration, 'sql': sql, 'params': params}
File "/usr/lib64/python3.7/logging/__init__.py", line 1371, in debug
    self._log(DEBUG, msg, args, **kwargs)
File "/usr/lib64/python3.7/logging/__init__.py", line 1519, in _log
    self.handle(record)
File "/usr/lib64/python3.7/logging/__init__.py", line 1528, in handle
    if (not self.disabled) and self.filter(record):
File "/usr/lib64/python3.7/logging/__init__.py", line 762, in filter
    result = f.filter(record)
File "basic_django/settings.py", line 402, in filter
    traceback.print_stack()
'''

print(highlight(err_str, Python3TracebackLexer(), TerminalTrueColorFormatter()))
```

In order to get `err_str`, replace `print_stack` with `format_stack` (and join the resulting list into a string), then:

```
import traceback

def colorize_traceback(err_str):
    return highlight(err_str, Python3TracebackLexer(), TerminalTrueColorFormatter())

try:
    ...  # Some logic
except Exception:  # Or a more narrow exception
    # traceback.print_stack()
    print(colorize_traceback(''.join(traceback.format_stack())))
```
Alternatively, use the [rich](https://github.com/willmcgugan/rich) library. [With just two lines of code](https://rich.readthedocs.io/en/latest/traceback.html), it will prettify your tracebacks... and then some! ```py from rich.traceback import install install() ``` How does it look afterwards? Take a gander: [![Fabulous traceback output](https://i.stack.imgur.com/RbV8i.png)](https://i.stack.imgur.com/RbV8i.png) And the beauty of it? [It supports Pygment themes](https://rich.readthedocs.io/en/latest/reference/traceback.html#rich.traceback.install)!
40,081,601
I have an instance of Django deployed on Heroku as follows, Procfile:

```
web: python manage.py collectstatic --noinput ; gunicorn MY_APP.wsgi --log-file -
worker: celery -A MY_APP worker
beat: celery -A MY_APP beat
```

This instance can receive 2000-4000 requests per minute, and sometimes that is too much. I know I should change the communications... but can I change something in the configuration to gain 10-30% in server efficiency?
2016/10/17
[ "https://Stackoverflow.com/questions/40081601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1073310/" ]
The first thing that springs to mind is to check out connection pooling and/or persistent database connections. Depending on how much database access your app is using, this could significantly increase the number of RPM your app is able to handle. Check out [this StackOverflow question](https://stackoverflow.com/questions/1125504/django-persistent-database-connection) for some good ideas, in particular the following answers: * [persistent connections, available since Django 1.6](https://stackoverflow.com/a/19438241/3231557) * [PgBouncer, a lightweight connection pooler for PostgreSQL](https://stackoverflow.com/a/1698102/3231557)
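For the persistent-connections option, the change is a single setting; a sketch of what the `settings.py` entry could look like (the engine and database name here are placeholders, not from the question):

```python
# settings.py (Django >= 1.6): reuse database connections for up to 60 seconds
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # placeholder engine
        'NAME': 'mydb',                                      # placeholder name
        'CONN_MAX_AGE': 60,  # 0 = close after each request, None = keep forever
    }
}
```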
The whole point of Heroku is that you can dynamically scale your app. You can spin up new web workers with `heroku ps:scale web+1` for example.
6,261,459
Here is a test case I've created for a problem I found out. For some reason the dict() 'l' in B() does not seem to hold the correct value. See the output below on my Linux 11.04 Ubuntu, python 2.7.1+. ``` class A(): name = None b = None def __init__(self, name, bname, cname, dname): self.name = name print "A: name", name self.b = B(name, bname, cname, dname) print "A self.b:", self.b class B(): name = None l = dict() c = None def __init__(self, name, bname, cname, dname): self.aname = name self.name = bname print " B: name", bname self.c = C(bname, cname, dname) self.l["bb"] = self.c print " B self:", self print " B self.c:", self.c print " B self.l[bb]:", self.l["bb"], "<<< OK >>>" def dump(self): print " A: name", self.aname print " B: name", self.name for i in self.l: print " B: i=", i, "self.l[i]", self.l[i], "<<< ERROR >>>" class C(): name = None l = dict() d = None def __init__(self, bname, cname, dname): self.bname = bname self.cname = cname print " B: name", bname print " C: name", cname print " C self:", self def dump(self): print " B name:", self.bname print " C name:", self.cname a1 = A("a1", "b1", "c1", "d1") a2 = A("a2", "b2", "c2", "d2") a3 = A("a3", "b3", "c3", "d3") a1.b.dump() a1.b.c.dump() a2.b.dump() a2.b.c.dump() a3.b.dump() a3.b.c.dump() ``` Output on my machine: ``` $ python bedntest.py A: name a1 B: name b1 B: name b1 C: name c1 C self: <__main__.C instance at 0xb76f3a6c> B self: <__main__.B instance at 0xb76f388c> B self.c: <__main__.C instance at 0xb76f3a6c> B self.l[bb]: <__main__.C instance at 0xb76f3a6c> <<< OK >>> A self.b: <__main__.B instance at 0xb76f388c> A: name a2 B: name b2 B: name b2 C: name c2 C self: <__main__.C instance at 0xb76f3acc> B self: <__main__.B instance at 0xb76f3aac> B self.c: <__main__.C instance at 0xb76f3acc> B self.l[bb]: <__main__.C instance at 0xb76f3acc> <<< OK >>> A self.b: <__main__.B instance at 0xb76f3aac> A: name a3 B: name b3 B: name b3 C: name c3 C self: <__main__.C instance at 0xb76f3b2c> B self: <__main__.B instance at 0xb76f3b0c> B self.c: <__main__.C instance at 0xb76f3b2c> B self.l[bb]: <__main__.C instance at 0xb76f3b2c> <<< OK >>> A self.b: <__main__.B instance at 0xb76f3b0c> A: name a1 B: name b1 B: i= bb self.l[i] <__main__.C instance at 0xb76f3b2c> <<< ERROR >>> B name: b1 C name: c1 A: name a2 B: name b2 B: i= bb self.l[i] <__main__.C instance at 0xb76f3b2c> <<< ERROR >>> B name: b2 C name: c2 A: name a3 B: name b3 B: i= bb self.l[i] <__main__.C instance at 0xb76f3b2c> <<< ERROR >>> B name: b3 C name: c3 ``` To my understanding, the lines above: ``` B: i= bb self.l[i] <__main__.C instance at 0xb76f3b2c> <<< ERROR >>> ``` should all hold a unique instance of C(), as seen at initialization time - not the last instance that was created (see <<< OK >>> lines). What happened here?
2011/06/07
[ "https://Stackoverflow.com/questions/6261459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
What happened is that you created a class attribute. Create an instance attribute instead by instantiating in `__init__()`.
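A minimal sketch of that fix, using a stripped-down version of the question's `B` class:

```python
class B:
    def __init__(self, name, bname, cname, dname):
        self.l = dict()  # instance attribute: every B instance gets its own dict
        self.c = C(bname, cname, dname)
        self.l["bb"] = self.c
```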
It looks like you are trying to "declare" instance attributes at the class level. Class attributes have their own specific uses in Python, and it is wrong to put them there if you are not intending to ever use the class attributes.

```
class A():
    name = None # Don't do this
    b = None # Don't do this

    def __init__(self, name, bname, cname, dname):
        self.name = name
        print "A: name", name
        self.b = B(name, bname, cname, dname)
        print "A self.b:", self.b
```

In `class B` you have created a class attribute `l`. Since the instance doesn't have its own attribute `l`, it uses the class's attribute. You could just write your class B like this instead:

```
class B():
    def __init__(self, name, bname, cname, dname):
        self.aname = name
        self.name = bname
        self.l = dict()
        print " B: name", bname
        self.c = C(bname, cname, dname)
        self.l["bb"] = self.c

        print " B self:", self
        print " B self.c:", self.c
        print " B self.l[bb]:", self.l["bb"], "<<< OK >>>"
    ...
```
30,969,533
I have a task to draw a potential graph with 3 variables: x, y, and z. I don't think we can draw the function U(x, y, z) directly with matplotlib. So what I'm planning to do is draw cross-sectional plots of x-y and y-z. I believe this is enough because the function U(x, y, z) has periodic behavior. I'm quite new to Python, so could you recommend where I should start or which method I can use for this? Thank you.
2015/06/21
[ "https://Stackoverflow.com/questions/30969533", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4358807/" ]
String literals in SQL are denoted by single quotes (`'`). Without them, a string would be treated as an object name. Here, you generate a where clause `title = Test`. Both sides are interpreted as column names, and the query fails since there's no column `Test`. To solve this, you could surround the value with quotes:

```
String query = "SELECT * FROM "+ GROUPS +" WHERE "+ TITLE_GROUPS + " = '" + title + "'";
```
Change your WHERE clause to be... ``` ... title = 'test' ``` The way it is written it is looking for a column named Test.
7,360,654
I am trying to *generate* self signed SSL certificates using Python, so that it is platform independent. My target is the \*.pem format. I found [this script](http://sunh11373.blogspot.com/2007/04/python-utility-for-converting.html) that generates certificates, but no information how to self-sign them.
2011/09/09
[ "https://Stackoverflow.com/questions/7360654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/722291/" ]
The script you've linked doesn't create a self-signed certificate; it only creates a request. To create a self-signed certificate you could use [`openssl`](https://www.openssl.org/docs/faq.html#USER4); it is available on all major OSes.

```
$ openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095
```

If you'd like to do it using [M2Crypto](https://pypi.python.org/pypi/M2Crypto) then take a look at the [`X509TestCase.test_mkcert()` method](https://gitlab.com/m2crypto/m2crypto/blob/17f7ca77afa75cedaa60bf3db767119adba4a2ec/tests/test_x509.py#L237).
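If you'd rather stay in pure Python today, here is a sketch using the `cryptography` package (a library that postdates this answer, so treat it as an alternative rather than the method described above):

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)   # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=1095))
    .sign(key, hashes.SHA256())
)
with open("cacert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```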
You could run the openssl command that J.F. Sebastian stated from within Python. Import the os module and call the command like this:

```
os.system("openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095")
```

If it requires user interaction, it might work better if you run it via a subprocess pipe and feed in input to answer any prompts.
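A sketch of the same call through `subprocess`, which avoids shell parsing and raises an exception on failure:

```python
import subprocess

subprocess.check_call([
    "openssl", "req", "-new", "-x509",
    "-key", "privkey.pem", "-out", "cacert.pem", "-days", "1095",
])
```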
47,441,401
I have an 8\*4 numpy array with floats (myarray) and would like to transform it into a dictionary of dataframes (and eventually concatenate it into one dataframe) with pandas in python. I'm coming across the error "ValueError: DataFrame constructor not properly called!" though. Here is the way I attempt it: ``` mydict={} for i, y in enumerate(np.arange(2015,2055,5)): for j, s in enumerate(['Winter', 'Spring', 'Summer', 'Fall']): mydict[(y,s)]=pd.DataFrame(myarray[i,j]) mydict ``` Any ideas? Thanks! As requested, some sample data: ``` array([[ 29064908.33333333, 33971366.66666667, 37603508.33333331, 37105916.66666667], [ 25424991.66666666, 30156625. , 32103324.99999999, 31705075. ], [ 26972666.66666666, 28182699.99999995, 30614324.99999999, 29673008.33333333], [ 26923466.66666666, 27573075. , 28308725. , 27834291.66666666], [ 26015216.66666666, 28709191.66666666, 30807833.33333334, 27183991.66666684], [ 25711475. , 32861633.33333332, 35784916.66666666, 28748891.66666666], [ 26267299.99999999, 35030583.33333331, 37863808.33333329, 29931858.33333332], [ 28871674.99999998, 38477549.99999999, 40171374.99999999, 33853750. ]]) ``` and expected output: ``` 2015 2020 2025 2030 2035 2040 2045 2050 Winter 2.9e+07 2.5e+07 2.6e+07 2.6e+07 2.6e+07 2.5e+07 2.6e+07 2.8e+07 Spring 3.3e+07 3.0e+07 2.8e+07 2.7e+07 2.8e+07 3.2e+07 3.5e+07 3.8e+07 Summer 3.7e+07 3.2e+07 3.0e+07 2.8e+07 3.0e+07 3.5e+07 3.7e+07 4.0e+07 Fall 3.7e+07 3.1e+07 2.9e+07 2.7e+07 2.7e+07 2.8e+07 2.9e+07 3.3e+07 ```
2017/11/22
[ "https://Stackoverflow.com/questions/47441401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8938572/" ]
You are trying to cast a list of 2 numbers to int. [int only takes a number or a string as its argument](https://docs.python.org/3/library/functions.html#int). What you want is to [map](http://book.pythontips.com/en/latest/map_filter.html#map) the int function to each item in the list. ``` >>> w, h = map(int, input().split(" ")) 5 10 >>> w 5 >>> h 10 ```
`int(...)` constructs an integer which cannot be unpacked to a tuple `W, H`. What you probably want is ``` W, H = (int(x) for x in input().split(" ")) ```
40,853,556
I have a list of tuples in python containing 3-dimensional data, where each tuple is in the form: (x, y, z, data\_value), i.e., I have data values at each (x, y, z) coordinate. I would like to make a 3D discrete heatmap plot where the colors represent the value of data\_values in my list of tuples. Here, I give an example of such a heatmap for a 2D dataset where I have a list of (x, y, data\_value) tuples:

```
import matplotlib.pyplot as plt
from matplotlib import colors
import numpy as np
from random import randint

# x and y coordinates
x = np.array(range(10))
y = np.array(range(10,15))
data = np.zeros((len(y),len(x)))

# Generate some discrete data (1, 2 or 3) for each (x, y) pair
for i,yy in enumerate(y):
    for j, xx in enumerate(x):
        data[i,j] = randint(1,3)

# Map 1, 2 and 3 to 'Red', 'Green' and 'Blue', respectively
colormap = colors.ListedColormap(['Red', 'Green', 'Blue'])
colorbar_ticklabels = ['1', '2', '3']

# Use matshow to create a heatmap
fig, ax = plt.subplots()
ms = ax.matshow(data, cmap = colormap, vmin=data.min() - 0.5, vmax=data.max() + 0.5, origin = 'lower')

# x and y axis ticks
ax.set_xticklabels([str(xx) for xx in x])
ax.set_yticklabels([str(yy) for yy in y])
ax.xaxis.tick_bottom()

# Put the x- and y-axis ticks at the middle of each cell
ax.set_xticks(np.arange(data.shape[1]), minor = False)
ax.set_yticks(np.arange(data.shape[0]), minor = False)

# Set custom ticks and ticklabels for color bar
cbar = fig.colorbar(ms,ticks = np.arange(np.min(data),np.max(data)+1))
cbar.ax.set_yticklabels(colorbar_ticklabels)

plt.show()
```

This generates a plot like this: [![enter image description here](https://i.stack.imgur.com/rwH59.jpg)](https://i.stack.imgur.com/rwH59.jpg)

How can I make a similar plot in 3D-space (i.e., having a z-axis), if my data have a third dimension? For example, if

```
# x and y and z coordinates
x = np.array(range(10))
y = np.array(range(10,15))
z = np.array(range(15,20))
data = np.zeros((len(y),len(x), len(y)))

# Generate some random discrete data (1, 2 or 3) for each (x, y, z) triplet.
# Am I defining i, j and k correctly here?
for i,yy in enumerate(y):
    for j, xx in enumerate(x):
        for k, zz in enumerate(z):
            data[i,j, k] = randint(1,3)
```

It sounds like [plot\_surface in mplot3d](http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html) should be able to do this, but z in the input of this function is essentially the value of data at (x, y) coordinate, i.e., (x, y, z = data\_value), which is different from what I have, i.e., (x, y, z, data\_value).
2016/11/28
[ "https://Stackoverflow.com/questions/40853556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3076813/" ]
### New answer: It seems we really want to have a 3D Tetris game here ;-) So here is a way to plot cubes of different color to fill the space given by the arrays `(x,y,z)`. ``` from mpl_toolkits.mplot3d import Axes3D import numpy as np import matplotlib.pyplot as plt import matplotlib.cm import matplotlib.colorbar import matplotlib.colors def cuboid_data(center, size=(1,1,1)): # code taken from # http://stackoverflow.com/questions/30715083/python-plotting-a-wireframe-3d-cuboid?noredirect=1&lq=1 # suppose axis direction: x: to left; y: to inside; z: to upper # get the (left, outside, bottom) point o = [a - b / 2 for a, b in zip(center, size)] # get the length, width, and height l, w, h = size x = [[o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in bottom surface [o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in upper surface [o[0], o[0] + l, o[0] + l, o[0], o[0]], # x coordinate of points in outside surface [o[0], o[0] + l, o[0] + l, o[0], o[0]]] # x coordinate of points in inside surface y = [[o[1], o[1], o[1] + w, o[1] + w, o[1]], # y coordinate of points in bottom surface [o[1], o[1], o[1] + w, o[1] + w, o[1]], # y coordinate of points in upper surface [o[1], o[1], o[1], o[1], o[1]], # y coordinate of points in outside surface [o[1] + w, o[1] + w, o[1] + w, o[1] + w, o[1] + w]] # y coordinate of points in inside surface z = [[o[2], o[2], o[2], o[2], o[2]], # z coordinate of points in bottom surface [o[2] + h, o[2] + h, o[2] + h, o[2] + h, o[2] + h], # z coordinate of points in upper surface [o[2], o[2], o[2] + h, o[2] + h, o[2]], # z coordinate of points in outside surface [o[2], o[2], o[2] + h, o[2] + h, o[2]]] # z coordinate of points in inside surface return x, y, z def plotCubeAt(pos=(0,0,0), c="b", alpha=0.1, ax=None): # Plotting N cube elements at position pos if ax !=None: X, Y, Z = cuboid_data( (pos[0],pos[1],pos[2]) ) ax.plot_surface(X, Y, Z, color=c, rstride=1, cstride=1, alpha=0.1) def plotMatrix(ax, x, y, z, data, cmap="jet", cax=None, alpha=0.1): # plot a Matrix norm = matplotlib.colors.Normalize(vmin=data.min(), vmax=data.max()) colors = lambda i,j,k : matplotlib.cm.ScalarMappable(norm=norm,cmap = cmap).to_rgba(data[i,j,k]) for i, xi in enumerate(x): for j, yi in enumerate(y): for k, zi, in enumerate(z): plotCubeAt(pos=(xi, yi, zi), c=colors(i,j,k), alpha=alpha, ax=ax) if cax !=None: cbar = matplotlib.colorbar.ColorbarBase(cax, cmap=cmap, norm=norm, orientation='vertical') cbar.set_ticks(np.unique(data)) # set the colorbar transparent as well cbar.solids.set(alpha=alpha) if __name__ == '__main__': # x and y and z coordinates x = np.array(range(10)) y = np.array(range(10,15)) z = np.array(range(15,20)) data_value = np.random.randint(1,4, size=(len(x), len(y), len(z)) ) print data_value.shape fig = plt.figure(figsize=(10,4)) ax = fig.add_axes([0.1, 0.1, 0.7, 0.8], projection='3d') ax_cb = fig.add_axes([0.8, 0.3, 0.05, 0.45]) ax.set_aspect('equal') plotMatrix(ax, x, y, z, data_value, cmap="jet", cax = ax_cb) plt.savefig(__file__+".png") plt.show() ``` [![enter image description here](https://i.stack.imgur.com/mVsjM.png)](https://i.stack.imgur.com/mVsjM.png) I find it really hard to see anything here, but that may be a question of taste and now hopefully also answers the question. --- ### Original Answer: *It seems I misunderstood the question. Therefore the following does not answer the question. 
For the moment I leave it here, to keep the comments below available for others.* I think [`plot_surface`](http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#surface-plots) is fine for the specified task. Essentially you would plot a surface with the shape given by your points `X,Y,Z` in 3D and colorize it using the values from `data_values` as shown in the code below. ``` from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import matplotlib.pyplot as plt import numpy as np fig = plt.figure() ax = fig.gca(projection='3d') # as plot_surface needs 2D arrays as input x = np.arange(10) y = np.array(range(10,15)) # we make a meshgrid from the x,y data X, Y = np.meshgrid(x, y) Z = np.sin(np.sqrt(X**2 + Y**2)) # data_value shall be represented by color data_value = np.random.rand(len(y), len(x)) # map the data to rgba values from a colormap colors = cm.ScalarMappable(cmap = "viridis").to_rgba(data_value) # plot_surface with points X,Y,Z and data_value as colors surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=colors, linewidth=0, antialiased=True) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/oqFab.png)](https://i.stack.imgur.com/oqFab.png)
I've updated the code above to be compatible with a newer version of matplotlib.

```py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colorbar
from matplotlib import cm

viridis = cm.get_cmap('plasma', 8) #Our color map

def cuboid_data(center, size=(1,1,1)):
    # code taken from
    # http://stackoverflow.com/questions/30715083/python-plotting-a-wireframe-3d-cuboid?noredirect=1&lq=1
    # suppose axis direction: x: to left; y: to inside; z: to upper
    # get the (left, outside, bottom) point
    o = [a - b / 2 for a, b in zip(center, size)]
    # get the length, width, and height
    l, w, h = size
    x = np.array([[o[0], o[0] + l, o[0] + l, o[0], o[0]],  # x coordinate of points in bottom surface
         [o[0], o[0] + l, o[0] + l, o[0], o[0]],  # x coordinate of points in upper surface
         [o[0], o[0] + l, o[0] + l, o[0], o[0]],  # x coordinate of points in outside surface
         [o[0], o[0] + l, o[0] + l, o[0], o[0]]])  # x coordinate of points in inside surface
    y = np.array([[o[1], o[1], o[1] + w, o[1] + w, o[1]],  # y coordinate of points in bottom surface
         [o[1], o[1], o[1] + w, o[1] + w, o[1]],  # y coordinate of points in upper surface
         [o[1], o[1], o[1], o[1], o[1]],          # y coordinate of points in outside surface
         [o[1] + w, o[1] + w, o[1] + w, o[1] + w, o[1] + w]])  # y coordinate of points in inside surface
    z = np.array([[o[2], o[2], o[2], o[2], o[2]],                      # z coordinate of points in bottom surface
         [o[2] + h, o[2] + h, o[2] + h, o[2] + h, o[2] + h],  # z coordinate of points in upper surface
         [o[2], o[2], o[2] + h, o[2] + h, o[2]],              # z coordinate of points in outside surface
         [o[2], o[2], o[2] + h, o[2] + h, o[2]]])             # z coordinate of points in inside surface
    return x, y, z

def plotCubeAt(pos=(0,0,0), c="b", alpha=0.1, ax=None):
    # Plotting N cube elements at position pos
    if ax !=None:
        X, Y, Z = cuboid_data( (pos[0],pos[1],pos[2]) )
        ax.plot_surface(X, Y, Z, color=c, rstride=1, cstride=1, alpha=0.1)

def plotMatrix(ax, x, y, z, data, cmap=viridis, cax=None, alpha=0.1):
    # plot a Matrix
    norm = matplotlib.colors.Normalize(vmin=data.min(), vmax=data.max())
    colors = lambda i,j,k : matplotlib.cm.ScalarMappable(norm=norm,cmap = cmap).to_rgba(data[i,j,k])
    for i, xi in enumerate(x):
        for j, yi in enumerate(y):
            for k, zi, in enumerate(z):
                plotCubeAt(pos=(xi, yi, zi), c=colors(i,j,k), alpha=alpha, ax=ax)

    if cax !=None:
        cbar = matplotlib.colorbar.ColorbarBase(cax, cmap=cmap,
                                                norm=norm,
                                                orientation='vertical')
        cbar.set_ticks(np.unique(data))
        # set the colorbar transparent as well
        cbar.solids.set(alpha=alpha)

if __name__ == '__main__':
    # x and y and z coordinates
    x = np.array(range(10))
    y = np.array(range(10,15))
    z = np.array(range(15,20))
    data_value = np.random.randint(1,4, size=(len(x), len(y), len(z)) )
    print(data_value.shape)

    fig = plt.figure(figsize=(10,4))
    ax = fig.add_axes([0.1, 0.1, 0.7, 0.8], projection='3d')
    ax_cb = fig.add_axes([0.8, 0.3, 0.05, 0.45])
    ax.set_aspect('auto')

    plotMatrix(ax, x, y, z, data_value, cmap=viridis, cax = ax_cb)

    plt.savefig(__file__+".png")
    plt.show()
```
13,855,056
I have a list of numbers, let's say `[1091, 2053, 4099, 4909, 5023, 9011]`. Here every number has its permutation in the list too. Now I want to group these permutations of each other, so the list becomes `[[1091, 9011], [2053, 5023], [4099, 4909]]`. I know how to use [`groupby`](http://docs.python.org/2/library/itertools.html#itertools.groupby) and [`permutations`](http://docs.python.org/2/library/itertools.html#itertools.permutations), but have no idea what the key for `groupby` should be, or how I should solve the problem some other way. Note: the numbers should be exact permutations: 112 and 121 count, but 112 and 122 don't. How do I group permutations of a number in a list?
2012/12/13
[ "https://Stackoverflow.com/questions/13855056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/596361/" ]
``` import itertools as it a = [1091, 2053, 4099, 4909, 5023, 9011] sort_string = lambda x: sorted(str(x)) [[int(x) for x in v] for k,v in it.groupby(sorted(a, key=sort_string), key=sort_string)] # [[1091, 9011], [2053, 5023], [4099, 4909]] ```
You can use `collections.Counter` to represent each number as a tuple of `integer, total_occurrences` and then store all the data in instances in a dictionary: ``` from collections import Counter, defaultdict dest = defaultdict(list) data = [1091, 2053, 4099, 4909, 5023, 9011] data = ((Counter([int(x) for x in str(datum)]), datum) for datum in data) for numbers, value in data: numbers = tuple(sorted(numbers.items())) dest[numbers].append(value) print dest.values() # [[1091, 9011], [2053, 5023], [4099, 4909]] ```
13,855,056
I have a list of numbers, let's say `[1091, 2053, 4099, 4909, 5023, 9011]`. Here every number has its permutation in the list too. Now I want to group these permutations of each other, so the list becomes `[[1091, 9011], [2053, 5023], [4099, 4909]]`. I know how to use [`groupby`](http://docs.python.org/2/library/itertools.html#itertools.groupby) and [`permutations`](http://docs.python.org/2/library/itertools.html#itertools.permutations), but have no idea what the key for `groupby` should be, or how I should solve the problem some other way. Note: the numbers should be exact permutations: 112 and 121 count, but 112 and 122 don't. How do I group permutations of a number in a list?
2012/12/13
[ "https://Stackoverflow.com/questions/13855056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/596361/" ]
``` import itertools as it a = [1091, 2053, 4099, 4909, 5023, 9011] sort_string = lambda x: sorted(str(x)) [[int(x) for x in v] for k,v in it.groupby(sorted(a, key=sort_string), key=sort_string)] # [[1091, 9011], [2053, 5023], [4099, 4909]] ```
Represent each number with a normalization which fits your purpose. For your example, a suitable canonical form could be `"".join(sorted(str(n)))`; that is, map each number to a string made from a sorted list of its individual digits.
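A short sketch of one way to apply that canonical form for the grouping:

```python
from collections import defaultdict

nums = [1091, 2053, 4099, 4909, 5023, 9011]
groups = defaultdict(list)
for n in nums:
    groups["".join(sorted(str(n)))].append(n)  # canonical form as the key
print(list(groups.values()))  # [[1091, 9011], [2053, 5023], [4099, 4909]]
```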
14,087,547
**Conclusion:** It's impossible to override or disable Python's built-in escape sequence processing, such that you can skip using the raw prefix specifier. I dug into Python's internals to figure this out. So if anyone tries designing objects that work on complex strings (like regex) as part of some kind of framework, make sure to specify in the docstrings that string arguments to the object's `__init__()` **MUST** include the `r` prefix!

**Original question:** I am finding it a bit difficult to force Python to not "change" anything about a user-inputted string, which may contain, among other things, regex or escaped hexadecimal sequences. I've already tried various combinations of raw strings, `.encode('string-escape')` (and its decode counterpart), but I can't find the right approach.

Given an escaped, hexadecimal representation of the Documentation IPv6 address `2001:0db8:85a3:0000:0000:8a2e:0370:7334`, using `.encode()`, this small script (called `x.py`):

```
#!/usr/bin/env python

class foo(object):
    __slots__ = ("_bar",)

    def __init__(self, input):
        if input is not None:
            self._bar = input.encode('string-escape')
        else:
            self._bar = "qux?"

    def _get_bar(self): return self._bar
    bar = property(_get_bar)
#
x = foo("\x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34")
print x.bar
```

Will yield the following output when executed:

```
$ ./x.py
 \x01\r\xb8\x85\xa3\x00\x00\x00\x00\x8a.\x03ps4
```

Note the `\x20` got converted to an ASCII space character, along with a few others. This is basically correct due to Python processing the escaped hex sequences and converting them to their printable ASCII values.

This can be solved if the initializer to `foo()` was treated as a raw string (and the `.encode()` call removed), like this:

```
x = foo(r"\x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34")
```

However, my end goal is to create a kind of framework that can be used and I want to hide these kinds of "implementation details" from the end user. If they called `foo()` with the above IPv6 address in escaped hexadecimal form (without the raw specifier) and immediately printed it back out, they should get back *exactly* what they put in w/o knowing or using the raw specifier. So I need to find a way to have `foo`'s `__init__()` do whatever processing is necessary to enable that.

**Edit:** Per [this SO question](https://stackoverflow.com/a/647787), it seems it's a defect of Python, in that it **always** performs some kind of escape sequence processing. There does not appear to be any kind of facility to completely turn off escape sequence processing, even temporarily. Sucks. I guess I am going to have to research subclassing `str` to create something like `rawstr` that intelligently determines what escape sequences Python processed in a string, and converts them back to their original format. This is not going to be fun...

**Edit2:** Another example, given the sample regex below:

```
"^.{0}\xcb\x00\x71[\x00-\xff]"
```

If I assign this to a var or pass it to a function **without** using the raw specifier, the `\x71` gets converted to the letter `q`. Even if I add `.encode('string-escape')` or `.replace('\\', '\\\\')`, the escape sequences **are still processed**, thus resulting in this output:

```
"^.{0}\xcb\x00q[\x00-\xff]"
```

How can I stop this, again, without using the raw specifier? Is there some way to "turn off" the escape sequence processing or "revert" it after the fact so that the `q` turns back into `\x71`?
Is there a way to process the string and escape the backslashes **before** the escape sequence processing happens?
2012/12/30
[ "https://Stackoverflow.com/questions/14087547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/482691/" ]
I think you have an understandable confusion about the difference between Python string literals (source code representation), Python string objects in memory, and how those objects can be printed (in what format they can be represented in the output).

If you read some bytes from a file into a bytestring you can write them back as is. `r""` exists only in source code; there is no such thing at runtime, i.e., `r"\x"` and `"\\x"` are equal, they may even be the exact same string object in memory.

To see that input is not corrupted, you could print each byte as an integer:

```
print " ".join(map(str, map(ord, raw_input("input something"))))
```

Or just echo as is (there could be a difference but it is unrelated to your `"string-escape"` issue):

```
print raw_input("input something")
```

---

Identity function:

```
def identity(obj):
    return obj
```

If you **do nothing** to the string then your users will receive **the exact same object** back. You can provide examples in the docs of what you consider a concise readable way to represent an input string as a Python literal. If you find it confusing to work with binary strings such as `"\x20\x01"` then you could accept an ascii hex-representation instead: `"2001"` (you could use binascii.hexlify/unhexlify to convert one to another).

---

The regex case is more complex because there are two languages:

1. Escape sequences are interpreted by Python according to its string literal syntax
2. The regex engine interprets the string object as a regex pattern that also has its own escape sequences
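A quick sketch of that hex round-trip (written for Python 2, matching the rest of this answer):

```python
import binascii

raw = "\x20\x01\x0d\xb8"                    # four input bytes
hex_form = binascii.hexlify(raw)            # '20010db8'
assert binascii.unhexlify(hex_form) == raw  # lossless round-trip
```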
I think you will have to go the join route. Here's an example: ``` >>> m = {chr(c): '\\x{0}'.format(hex(c)[2:].zfill(2)) for c in xrange(0,256)} >>> >>> x = "\x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34" >>> print ''.join(map(m.get, x)) \x20\x01\x0d\xb8\x85\xa3\x00\x00\x00\x00\x8a\x2e\x03\x70\x73\x34 ``` --- I'm not entirely sure *why* you need that though. If your code needs to interact with other pieces of code, I'd suggest that you agree on a defined format, and stick to it.
45,625,042
I have a master Python script that automates configuring nodes in parallel in a distributed system setup in our lab. I run multiple instances of kickstart.py and it configures all nodes in parallel. How do I create a log handler such that each instance of kickstart.py configures each node separately in parallel and each instance logs into a different log file? I want to use the Python logging module. Any help is appreciated. Thanks
2017/08/10
[ "https://Stackoverflow.com/questions/45625042", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8448115/" ]
A solution using `st_distance` from the `sf` package. `my_df_final` is the final output. ``` # Load packages library(tidyverse) library(sp) library(sf) # Create ID for my_df_1 and my_df_2 based on row id # This step is not required, just help me to better distinguish each point my_df_1 <- my_df_1 %>% mutate(ID1 = row.names(.)) my_df_2 <- my_df_2 %>% mutate(ID2 = row.names(.)) # Create spatial point data frame my_df_1_sp <- my_df_1 coordinates(my_df_1_sp) <- ~START_LONG + START_LAT my_df_2_sp <- my_df_2 coordinates(my_df_2_sp) <- ~longitude + latitude # Convert to simple feature my_df_1_sf <- st_as_sf(my_df_1_sp) my_df_2_sf <- st_as_sf(my_df_2_sp) # Set projection based on the epsg code st_crs(my_df_1_sf) <- 4326 st_crs(my_df_2_sf) <- 4326 # Calculate the distance m_dist <- st_distance(my_df_1_sf, my_df_2_sf) # Filter for the nearest near_index <- apply(m_dist, 1, order)[1, ] # Based on the index in near_index to select the rows in my_df_2 # Combine with my_df_1 my_df_final <- cbind(my_df_1, my_df_2[near_index, ]) ```
Based on this [answer](https://stackoverflow.com/questions/31668163/geographic-distance-between-2-lists-of-lat-lon-coordinates) you could do ``` library(geosphere) mat <- distm(my_df_1[2:1], my_df_2[2:1], fun = distVincentyEllipsoid) cbind(my_df_1, my_df_2[max.col(-mat),]) ``` Which gives: ``` # START_LAT START_LONG ID1 latitude longitude depth_top ID2 #10 -33.15000 163.0000 1175 -31.8482 173.2424 1303 144 #10.1 -35.60000 165.1833 528 -31.8482 173.2424 1303 144 #10.2 -34.08333 162.8833 1328 -31.8482 173.2424 1303 144 #10.3 -34.13333 162.5833 870 -31.8482 173.2424 1303 144 #10.4 -34.31667 162.7667 672 -31.8482 173.2424 1303 144 #6 -47.38333 148.9833 707 -44.6570 174.6950 555 1481 #6.1 -47.53333 148.6667 506 -44.6570 174.6950 555 1481 #10.5 -34.08333 162.9000 981 -31.8482 173.2424 1303 144 #6.2 -47.38333 148.9833 756 -44.6570 174.6950 555 1481 #6.3 -47.15000 148.7167 210 -44.6570 174.6950 555 1481 ```
58,351,041
I am trying to install a virtualenv in Windows 10 using a step process I found on some website. The steps are as follows, but only steps 1-4 matter for now:

1. Run Windows PowerShell as Administrator
2. pip install virtualenv
3. pip install virtualenvwrapper-win
4. mkvirtualenv ‘C:\Users\username\Documents\Virtualenv’
5. cd Test
6. Set-ExecutionPolicy AllSigned | Press Y and Enter
7. Set-ExecutionPolicy RemoteSigned | Press Y and Enter
8. .\Scripts\activate
9. deactivate

Steps 1-3 work fine, but when I try step four I get the following response:

```
PS C:\WINDOWS\system32> mkvirtualenv 'C:\Users\username\Documents\Virtualenv'
Using base prefix 'c:\users\username\appdata\local\programs\python\python37-32'
New python executable in C:\Users\DANIEL~1\DOCUME~1\VIRTUA~1\Scripts\python.exe
Installing setuptools, pip, wheel...
done.
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
The filename, directory name, or volume label syntax is incorrect.
```

The cd step that follows right afterwards does not work either. I am pretty new to python/programming in general so I might be missing some basic things. Running step 5 gives the following error message:

```
cd : Cannot find path 'C:\WINDOWS\system32\Virtualenv' because it does not exist.
At line:1 char:1
+ cd Virtualenv
+ ~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (C:\WINDOWS\system32\Virtualenv:String) [Set-Location], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
```

How do I fix this? Thanks in advance.
2019/10/12
[ "https://Stackoverflow.com/questions/58351041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12204393/" ]
The compiler changes the name of the local function, preventing you from calling it using its original name from the debugger. See [this question](https://stackoverflow.com/questions/45337983/why-local-functions-generate-il-different-from-anonymous-methods-and-lambda-expr) for examples. What you can do is temporarily modify the code to save a reference to the local function in a delegate variable. After recompiling, you can invoke the function through the delegate variable from Quick Watch or the Immediate Window. In your case, add this code to the beginning of the method: ``` Func<string,Task> f = ResetPasswordLocal; ``` Now you can invoke `f` in Quick Watch.
I'll have to say that I haven't tried it, and I won't bother to, because there's a lot more to local functions than you might think; I would put this very low on the priority list for the debugger. Try putting your code in [sharplab.io](https://sharplab.io/) and see what it takes to make that local function.
53,539,612
I have a big `tab separated` file like this:

```
chr1 9507728 9517729 0 chr1 9507728 9517729 5S_rRNA
chr1 9537731 9544392 0 chr1 9537731 9547732 5S_rRNA
chr1 9497727 9507728 0 chr1 9497727 9507728 5S_rRNA
chr1 9517729 9527730 0 chr1 9517729 9527730 5S_rRNA
chr8 1118560 1118591 1 chr8 1112435 1122474 AK128400
chr8 1118591 1121351 0 chr8 1112435 1122474 AK128400
chr8 1121351 1121382 1 chr8 1112435 1122474 AK128400
chr8 1132513 1142552 0 chr8 1132513 1142552 AK128400
chr19 53436277 53446295 0 chr19 53436277 53446295 AK128361
chr19 53456313 53465410 0 chr19 53456313 53466331 AK128361
chr19 53465410 53465441 1 chr19 53456313 53466331 AK128361
chr19 53466331 53476349 0 chr19 53466331 53476349 AK128361
```

According to the last column there are 3 groups, and every group has 4 rows. Based on the value of the 4th column, I want to get the average of the 1st row of every group, the 2nd row of every group, the 3rd row of every group, and the 4th row of every group. So, in the expected output I would have 4 rows (since there are 4 rows per group) and 2 columns. The 1st column is an ID and in this example would have 1, 2, 3 and 4. The 2nd column would be the average values, calculated as described above.

`expected output`:

```
1 0.33
2 0
3 0.66
4 0
```

I am trying to do that in Python 2.7 using the following code:

```
file = open('myfile.txt', 'r')
average = []
for i in file:
    ave = i[3]/3
    average.append(ave)
```

This returns only one number, which is wrong. Do you know how to fix it to get the expected output?
2018/11/29
[ "https://Stackoverflow.com/questions/53539612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10657934/" ]
If there is no common data between different clients/orgs, there is no point in having a shared channel between them. Taking care of permissions over data will complicate your network setup. It would be better to abstract that detail out of the network design. You should have one org corresponding to each client. In each org there will be a single channel which all the peers in that org will use to communicate.
I think you could encrypt every client's data by passing a transient key to the chaincode and just manage the keys; this may be lightweight and feasible for your scenario.
22,650,001
I have a Django application running on [Dotcloud](http://dotcloud.com/ "Dotcloud"). I have tried to add [Logentries](http://logentries.com/ "Logentries") logging which works in normal usage for my site, but causes my cron jobs to fail with this error -

```
Traceback (most recent call last):
  File "/home/dotcloud/current/my_project/manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
    utility.execute()
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 261, in fetch_command
    commands = get_commands()
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 107, in get_commands
    apps = settings.INSTALLED_APPS
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__
    self._setup(name)
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/conf/__init__.py", line 50, in _setup
    self._configure_logging()
  File "/home/dotcloud/env/local/lib/python2.7/site-packages/django/conf/__init__.py", line 80, in _configure_logging
    logging_config_func(self.LOGGING)
  File "/usr/lib/python2.7/logging/config.py", line 777, in dictConfig
    dictConfigClass(config).configure()
  File "/usr/lib/python2.7/logging/config.py", line 575, in configure
    '%r: %s' % (name, e))
ValueError: Unable to configure handler 'logentries_handler': expected string or buffer
```

This is one of the scripts being run from cron -

```
#!/bin/bash
echo "Loading definitions - check /var/log/supervisor/availsserver.log for results"
. /etc/profile
/home/dotcloud/env/bin/python /home/dotcloud/current/my_project/manage.py load_definitions
```

These are my settings for Logentries -

```
'logentries_handler': {
    'token': os.getenv("LOGENTRIES_TOKEN"),
    'class': 'logentries.LogentriesHandler'
}
....
'logentries': {
    'handlers': ['logentries_handler'],
    'level': 'INFO',
},
```

The LOGENTRIES\_TOKEN is there when I do `dotcloud env list`.

**This is a summary of the symptoms -**

- Logentries logging works from the site in normal usage.
- If I manually run the script - `dotcloud run www ~/current/scripts/load_definitions.sh` it works.
- If I remove the Logentries settings from my `settings.py` the cron jobs work.
- The cron jobs fail if Logentries entries are in my `settings.py`

I have spent hours trying to find a solution. Can anyone help?
2014/03/26
[ "https://Stackoverflow.com/questions/22650001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3458323/" ]
This is what you want: ``` List<Card> cards = IntStream.rangeClosed(1, 4) .boxed() .flatMap(value -> IntStream.rangeClosed(1, 13) .mapToObj(suit -> new Card(value, suit)) ) .collect(Collectors.toList()); ``` Points to note: * you have to box the ints because `flatMap` on primitives doesn't have any type-conversion overloads like `flatMapToObj` (doesn't exist); * assign to `List` rather than `ArrayList` as the `Collectors` methods make no guarantee as to the specific type they return (as it happens it currently is `ArrayList`, but you can't rely on that); * use `rangeClosed` for this kind of situation.
Another way to get what you want (based on Maurice Naftalin's answer):

```
List<Card> cards = IntStream.rangeClosed(1, 4)
    .mapToObj(value -> IntStream.rangeClosed(1, 13)
        .mapToObj(suit -> new Card(value, suit))
    )
    .flatMap(Function.identity())
    .collect(Collectors.toList())
;
```

Additional points to note:

* you have to map the int values to `Stream<Card>` streams, then flatMap said streams via Function.identity(), since flatMapToObj does not exist, yet said operation is translatable to a map to a stream, then an identity flatMap.
51,634,841
I have a little project in Python to do. I have to parse 4 arguments in my program. So the commands are:

-i (store the source\_file)
-d (store the destination\_file)
-a (store a folder named: i386, x64\_86 or all)
-p (store a folder named: Linux, Windows or all)

The folder Linux has 2 folders in it: i386 and x64\_86; the folder Windows has those 2 folders too. My script has to copy the folders as I tell it; there are 9 combinations, for example:

Python exemple.py -i -d -a i386 -p windows

So in this example I have to copy just the folder windows containing just the folder i386. To copy the files I use shutil.copytree(source\_file, destination, ignore=ignore\_patterns(.....)). I manage to access the input and the output (args.input, args.output), but for arch and platform I have to access the choices and I don't know how. Any idea please?

```
pars = argparse.ArgumentParser(prog='copy dirs script')
a1 = pars.add_argument("-i", "--input", required=True, nargs="?",
                       help="the source dirctory is /""X:/.......")
a2 = pars.add_argument("-o", "--output", required=True, nargs="?",
                       help="the destination dirctory is the curently working dirctory")
pars.add_argument("-a", "--arch", choices=["all", "i386", "x86_64"], required=True,
                  help="Targeted check architecture: 32b, 64b, All")
pars.add_argument("-p", "--platform", choices=["all", "windows", "linux"], required=True,
                  help="Targeted check platform: Windows, Linux, All")
```

Any idea please?
2018/08/01
[ "https://Stackoverflow.com/questions/51634841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10123257/" ]
After peeking into [the PHP source code](https://github.com/php/php-src/blob/master/ext/gd/gd.c#L2287), to have some insights about the "[imagecreatefromstring](http://php.net/manual/en/function.imagecreatefromstring.php)" function, I've discovered that it handles only the following image formats: * JPEG * PNG * GIF * WBM * GD2 * BMP * WEBP PHP recognizes the format of the image contained in the argument of the "imagecreatefromstring" function by checking the image signature, as explained [here](https://oroboro.com/image-format-magic-bytes/). When an unknown signature is detected, the warning "Data is not in a recognized format" is raised. Therefore, the only plausible explanation for the error that you are experiencing is that **your PPTX file contains an image that is not in one of the above formats**. You can view the format of the images inside your PPTX file by changing its extension from ".pptx" to ".zip" and then opening it. You should see something like this: ``` Archive: sample.pptx Length Date Time Name --------- ---------- ----- ---- 5207 1980-01-01 00:00 [Content_Types].xml ... 6979 1980-01-01 00:00 ppt/media/image1.jpeg 6528 1980-01-01 00:00 ppt/media/image2.jpeg 178037 1980-01-01 00:00 ppt/media/image3.jpeg 229685 1980-01-01 00:00 ppt/media/image4.jpeg 164476 1980-01-01 00:00 ppt/media/image5.jpeg 6802 1980-01-01 00:00 ppt/media/image6.png 19012 1980-01-01 00:00 ppt/media/image7.png 32146 1980-01-01 00:00 ppt/media/image8.png ... --------- ------- 795623 74 files ``` As you can see, my **sample.pptx** file contains some images in JPEG and PNG format. Maybe your sample file contains some slides with images in a vector format (WMF or EMF); it's unclear to me (since I didn't find any reference in [the docs](https://media.readthedocs.org/pdf/phppowerpoint/latest/phppowerpoint.pdf)) if those formats are supported or not. Eventually you should try with other PPTX files, just to make sure that the problem is not related to a specific one (you can find some under "[test/resources/files](https://github.com/PHPOffice/PHPPresentation/tree/develop/tests/resources/files)"). I've searched for a list of the supported image formats for PowerPoint files, but I haven't been able to find a precise response. The only relevant links that I've found are the following: * [ECMA 376 Open Office XML 1st Edition - Image Part](https://c-rex.net/projects/samples/ooxml/e1/Part1/OOXML_P1_Fundamentals_Image_topic_ID0EGXDO.html#topic_ID0EGXDO) * [Office Implementation Information for ISO/IEC 29500 Standards Support](https://interoperability.blob.core.windows.net/files/MS-OI29500/[MS-OI29500].pdf) (2.1.32 Part 1 Section 15.2.14, Image Part, pages 57/58) * [Images in Open XML documents](https://blogs.msdn.microsoft.com/dmahugh/2006/12/10/images-in-open-xml-documents/) (read the comments at the end of the page) * [Question on OpenXML Developer Forum](http://openxmldeveloper.org/discussions/formats/f/15/p/418/944.aspx#944) This means that also the presence in the PPTX file of an image in the TIFF or PICT (QuickDraw) format could lead to the error under consideration.
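If you prefer not to rename the file by hand, a short Python sketch can list the embedded media the same way (a PPTX file is just a ZIP archive; `sample.pptx` is a placeholder name):

```python
import zipfile

with zipfile.ZipFile("sample.pptx") as pptx:
    for name in pptx.namelist():
        if name.startswith("ppt/media/"):
            print(name)  # e.g. ppt/media/image1.jpeg
```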
Save your pptx again in PPT 2007 format in OpenOffice or MS PowerPoint. It's a format issue. You are opening a very recent PPT format with a 2007 reader.
56,581,577
Python says that TrackerMedianFlow\_create() is no longer an attribute of cv2. I've looked here but it's not the same: [OpenCV, How to pass parameters into cv2.TrackerMedianFlow\_create function?](https://stackoverflow.com/questions/47723349/opencv-how-to-pass-parameters-into-cv2-trackermedianflow-create-function) I've asked on several discord servers without success. I've copied this code directly from my textbook with ctrl + c so it should be exact. ``` import cv2 import numpy as np cap = cv2.VideoCapture("../data/traffic.mp4") _, frame = cap.read() bbox = cv2.selectROI(frame, False, True) cv2.destroyAllWindows() tracker = cv2.TrackerMedianFlow_create() status_tracker = tracker.init(frame, bbox) fps = 0 while True: status_cap, frame = cap.read() if not status_cap: break if status_tracker: timer = cv2.getTickCount() status_tracker, bbox = tracker.update(frame) if status_tracker: x, y, w, h = [int(i) for i in bbox] cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 15) fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer); cv2.putText(frame, "FPS: %.0f" % fps, (0, 80), cv2.FONT_HERSHEY_SIMPLEX, 3.5, (0, 0, 0), 8); else: cv2.putText(frame, "Tracking failure detected", (0, 80), cv2.FONT_HERSHEY_SIMPLEX, 3.5, (0,0,255), 8) cv2.imshow("MedianFlow tracker", frame) k = cv2.waitKey(1) if k == 27: break cv2.destroyAllWindows() ``` My line that causes the problem is: ``` tracker = cv2.TrackerMedianFlow_create() ``` Up until there the code runs. ``` Traceback (most recent call last): File "D:/Documents/E-Books/Comp Vision/opencv3computervisionwithpythoncookbook_ebook/OpenCV3ComputerVisionwithPythonCookbook_Code/Chapter04/myPart5.py", line 11, in <module> tracker = cv2.TrackerMedianFlow_create() AttributeError: module 'cv2.cv2' has no attribute 'TrackerMedianFlow_create' ``` I expected it to work without an error.
2019/06/13
[ "https://Stackoverflow.com/questions/56581577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7174600/" ]
For OpenCV 4.5.1, use opencv-contrib-python:

```
import cv2

cv2.legacy_TrackerMedianFlow()
```
`TrackerMedianFlow` is a [module within the opencv-contrib package](https://github.com/opencv/opencv_contrib/tree/master/modules/tracking/src), and does not come standard with the official OpenCV distribution. You will need to install the opencv-contrib package to access `TrackerMedianFlow_create()` Per the [documentation](https://pypi.org/project/opencv-contrib-python/), you should uninstall the package without the additional modules and proceed to reinstall opencv with the additional modules you need. ``` pip uninstall opencv-python pip install opencv-contrib-python ```
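After reinstalling, a quick sanity check along these lines should confirm the module is available (a sketch; it assumes an opencv-contrib build that still exposes this pre-4.5 factory function):

```python
import cv2

print(cv2.__version__)
tracker = cv2.TrackerMedianFlow_create()  # should no longer raise AttributeError
```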
56,581,577
Python says that TrackerMedianFlow\_create() is no longer an attribute of cv2. I've looked here but it's not the same: [OpenCV, How to pass parameters into cv2.TrackerMedianFlow\_create function?](https://stackoverflow.com/questions/47723349/opencv-how-to-pass-parameters-into-cv2-trackermedianflow-create-function) I've asked on several discord servers without success. I've copied this code directly from my textbook with ctrl + c so it should be exact. ``` import cv2 import numpy as np cap = cv2.VideoCapture("../data/traffic.mp4") _, frame = cap.read() bbox = cv2.selectROI(frame, False, True) cv2.destroyAllWindows() tracker = cv2.TrackerMedianFlow_create() status_tracker = tracker.init(frame, bbox) fps = 0 while True: status_cap, frame = cap.read() if not status_cap: break if status_tracker: timer = cv2.getTickCount() status_tracker, bbox = tracker.update(frame) if status_tracker: x, y, w, h = [int(i) for i in bbox] cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 15) fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer); cv2.putText(frame, "FPS: %.0f" % fps, (0, 80), cv2.FONT_HERSHEY_SIMPLEX, 3.5, (0, 0, 0), 8); else: cv2.putText(frame, "Tracking failure detected", (0, 80), cv2.FONT_HERSHEY_SIMPLEX, 3.5, (0,0,255), 8) cv2.imshow("MedianFlow tracker", frame) k = cv2.waitKey(1) if k == 27: break cv2.destroyAllWindows() ``` My line that causes the problem is: ``` tracker = cv2.TrackerMedianFlow_create() ``` Up until there the code runs. ``` Traceback (most recent call last): File "D:/Documents/E-Books/Comp Vision/opencv3computervisionwithpythoncookbook_ebook/OpenCV3ComputerVisionwithPythonCookbook_Code/Chapter04/myPart5.py", line 11, in <module> tracker = cv2.TrackerMedianFlow_create() AttributeError: module 'cv2.cv2' has no attribute 'TrackerMedianFlow_create' ``` I expected it to work without an error.
2019/06/13
[ "https://Stackoverflow.com/questions/56581577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7174600/" ]
`TrackerMedianFlow` is a [module within the opencv-contrib package](https://github.com/opencv/opencv_contrib/tree/master/modules/tracking/src), and does not come standard with the official OpenCV distribution. You will need to install the opencv-contrib package to access `TrackerMedianFlow_create()` Per the [documentation](https://pypi.org/project/opencv-contrib-python/), you should uninstall the package without the additional modules and proceed to reinstall opencv with the additional modules you need. ``` pip uninstall opencv-python pip install opencv-contrib-python ```
I have installed opencv-contrib-python version 4.5.4.60, but again I get AttributeError: module 'cv2' has no attribute 'TrackerMedianFlow\_create'. What is the reason?
56,581,577
Python says that TrackerMedianFlow\_create() is no longer an attribute of cv2. I've looked here but it's not the same: [OpenCV, How to pass parameters into cv2.TrackerMedianFlow\_create function?](https://stackoverflow.com/questions/47723349/opencv-how-to-pass-parameters-into-cv2-trackermedianflow-create-function) I've asked on several discord servers without success. I've copied this code directly from my textbook with ctrl + c so it should be exact. ``` import cv2 import numpy as np cap = cv2.VideoCapture("../data/traffic.mp4") _, frame = cap.read() bbox = cv2.selectROI(frame, False, True) cv2.destroyAllWindows() tracker = cv2.TrackerMedianFlow_create() status_tracker = tracker.init(frame, bbox) fps = 0 while True: status_cap, frame = cap.read() if not status_cap: break if status_tracker: timer = cv2.getTickCount() status_tracker, bbox = tracker.update(frame) if status_tracker: x, y, w, h = [int(i) for i in bbox] cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 15) fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer); cv2.putText(frame, "FPS: %.0f" % fps, (0, 80), cv2.FONT_HERSHEY_SIMPLEX, 3.5, (0, 0, 0), 8); else: cv2.putText(frame, "Tracking failure detected", (0, 80), cv2.FONT_HERSHEY_SIMPLEX, 3.5, (0,0,255), 8) cv2.imshow("MedianFlow tracker", frame) k = cv2.waitKey(1) if k == 27: break cv2.destroyAllWindows() ``` My line that causes the problem is: ``` tracker = cv2.TrackerMedianFlow_create() ``` Up until there the code runs. ``` Traceback (most recent call last): File "D:/Documents/E-Books/Comp Vision/opencv3computervisionwithpythoncookbook_ebook/OpenCV3ComputerVisionwithPythonCookbook_Code/Chapter04/myPart5.py", line 11, in <module> tracker = cv2.TrackerMedianFlow_create() AttributeError: module 'cv2.cv2' has no attribute 'TrackerMedianFlow_create' ``` I expected it to work without an error.
2019/06/13
[ "https://Stackoverflow.com/questions/56581577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7174600/" ]
For OpenCV 4.5.1, use opencv-contrib-python:

```
import cv2

cv2.legacy_TrackerMedianFlow()
```
I have installed opencv-contrib-python version 4.5.4.60, but again I get AttributeError: module 'cv2' has no attribute 'TrackerMedianFlow\_create'. What is the reason?
55,643,507
I am getting this valid error while preprocessing some data:

```
9:46:56.323 PM default_model Function execution took 6008 ms, finished with status: 'crash'
9:46:56.322 PM default_model Traceback (most recent call last):
  File "/user_code/main.py", line 31, in default_model
    train, endog, exog, _, _, rawDf = preprocess(ledger, apps)
  File "/user_code/Wrangling.py", line 73, in preprocess
    raise InsufficientTimespanError(args=(appDf, locDf))
```

That's occurring here:

```
async def default_model(request):
    request_json = request.get_json()

    if not request_json:
        return '{"error": "empty body." }'

    if 'transaction_id' in request_json:
        transaction_id = request_json['transaction_id']

        apps = []  # array of apps whose predictions we want, or empty for all
        if 'apps' in request_json:
            apps = request_json['apps']

        modelUrl = None
        if 'files' in request_json:
            try:
                files = request_json['files']
                modelUrl = getModelFromFiles(files)
            except:
                return package(transaction_id, error="no model to execute")
        else:
            return package(transaction_id, error="no model to execute")

        if 'ledger' in request_json:
            ledger = request_json['ledger']

            try:
                train, endog, exog, _, _, rawDf = preprocess(ledger, apps)
                # ...
            except InsufficientTimespanError as err:
                return package(transaction_id, error=err.message, appDf=err.args[0], locDf=err.args[1])
```

And preprocess is correctly throwing my custom error:

```
def preprocess(ledger, apps=[]):
    """
    convert ledger from the server, which comes in as an array of csv entries.
    normalize/resample timeseries, returning dataframes
    """
    appDf, locDf = splitLedger(ledger)
    if len(appDf) < 3 or len(locDf) < 3:
        raise InsufficientDataError(args=(appDf, locDf))

    endog = appDf['app_id'].unique().tolist()
    exog = locDf['location_id'].unique().tolist()

    rawDf = normalize(appDf, locDf)
    trainDf = cutoff(rawDf.copy(), apps)
    rawDf = cutoff(rawDf.copy(), apps, trim=False)

    # TODO - uncomment when on realish data
    if len(trainDf) < 2 * WEEKS:
        raise InsufficientTimespanError(args=(appDf, locDf))
```

The thing is, it is in a `try`/`except` block precisely because I want to trap the error and return a payload with the error, rather than crashing with a 500 error. But it's crashing on my custom error, in the try block, anyway. Right on that line calling `preprocess`.

This must be a failure on my part to conform to proper Python code. But I'm not sure what I am doing wrong. The environment is Python 3.7.

Here's where that error is defined, in Wrangling.py:

```
class WranglingError(Exception):
    """Base class for other exceptions"""
    pass


class InsufficientDataError(WranglingError):
    """insufficient data to make a prediction"""

    def __init__(self, message='insufficient data to make a prediction', args=None):
        super().__init__(message)
        self.message = message
        self.args = args


class InsufficientTimespanError(WranglingError):
    """insufficient timespan to make a prediction"""

    def __init__(self, message='insufficient timespan to make a prediction', args=None):
        super().__init__(message)
        self.message = message
        self.args = args
```

And here is how main.py declares (imports) it:

```
from Wrangling import preprocess, InsufficientDataError, InsufficientTimespanError, DataNotNormal, InappropriateValueToPredict
```
2019/04/12
[ "https://Stackoverflow.com/questions/55643507", "https://Stackoverflow.com", "https://Stackoverflow.com/users/732570/" ]
Your `preprocess` function is declared `async`. This means the code in it isn't actually run where you call `preprocess`, but instead when it is eventually `await`ed or passed to an event loop (like `asyncio.run`). Because the place where it is run is no longer inside the try block in `default_model`, the exception is not caught. You could fix this in one of two ways: * make `preprocess` not async * make `default_model` async too, and `await` on `preprocess`.
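A minimal, self-contained sketch of that behaviour (the function and message are illustrative stand-ins, not the question's real code): calling an `async def` function only creates a coroutine, so the exception is raised where the coroutine is driven, not at the call site.

```python
import asyncio

async def preprocess_stub():
    # stand-in for the real preprocess()
    raise ValueError("insufficient timespan")

def caller():
    try:
        coro = preprocess_stub()  # nothing raised here; this only builds a coroutine
    except ValueError:
        print("caught at call time")  # never reached
        return
    try:
        asyncio.run(coro)  # the body executes (and raises) only now
    except ValueError:
        print("caught when the coroutine is driven")  # this is what prints

caller()
```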
Do the line numbers in the error match up with the line numbers in your code? If not, is it possible that you are seeing the error from a version of the code before you added the try...except?
54,836,440
There is a chance this is still a problem and the Pyinstaller and/or Folium people have no interest in fixing it, but I'll post it again here in case someone out there has discovered a workaround. I have a program that creates maps, geocodes etc and recently added the folium package to create some interactive maps in html format. I always compile my code using pyinstaller so that others at my company can just use the executable rather than running the python code. If I run my code in an IDE, it loads, runs and performs exactly as expected. However, when I attempt to compile while I have `import folium` somewhere in my script, I get an error when trying to run the executable that pyinstaller creates. The error text reads something like this: ``` Traceback (most recent call last): File "analysisSuite.py", line 58, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\folium\__init__.py", line 8, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\branca\__init__.py", line 5, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\branca\colormap.py", line 29, in <module> File "site-packages\pkg_resources\__init__.py", line 1143, in resource_stream File "site-packages\pkg_resources\__init__.py", line 1390, in get_resource_stream File "site-packages\pkg_resources\__init__.py", line 1393, in get_resource_string File "site-packages\pkg_resources\__init__.py", line 1469, in _get File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 479, in get_data with open(path, 'rb') as fp: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\natha\\AppData\\Local\\Temp\\_MEI309082\\branca\\_cnames.json' [30956] Failed to execute script analysisSuite ``` I am still relatively new to Python, so trying to decipher what the issue is by this text is pretty overwhelming. I have no idea if there is a workaround, where I just need to edit a file, add a file or add some parameter to pyinstaller, but perhaps someone else out there can read this and has an idea of what could be causing this problem. Thanks in advance to anyone that has suggestions. EDIT: The problem seems to be with branca, which is a dependency of folium. 
It looks for that \_cnames.json file which is in the site-packages\branca folder but either doesn't get copied as it should or perhaps I need to somehow identify in my script where it should look for those files and then just manually copy them into a folder that I choose. ADDITIONAL UPDATE: I've been testing and testing and have determined the heart of the problem. When you run your exe, it gets unpacked in a temp folder. One of the modules within `branca` is `colormap.py` In the `colormap` file, there are essentially three lines that keep `branca` from loading correctly. ``` resource_package = __name__ resource_path_schemes = '/_schemes.json' resource_path_cnames = '/_cnames.json' ``` So, when the executable gets unpacked in this temp folder and branca tries to load up, because of these above lines, it expects these two files to also be in this temp folder, but of course, they won't be because they're being told to always and only be in the folder where the colormap module lives. The key here is figuring out a way so that the path reference can be relative, so that it doesn't look in the temp folder but also that the reference is dynamic, so that wherever you have your executable, as long as you have those json files present in some folder that it "knows" about, then you'll be good. Now I just need to figure out how to do that.
2019/02/22
[ "https://Stackoverflow.com/questions/54836440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9431874/" ]
I had the same problem. Pyinstaller could not work with the Python Folium package. I could not get your cx\_Freeze solution to work due to issues with Python 3.7 and cx\_Freeze, but after a day of stress I found a Pyinstaller solution, which I am sharing with the community. Firstly, you have to edit these 3 files: 1. \folium\folium.py 2. \folium\raster\_layers.py 3. \branca\element.py Make the following changes, commenting out the existing ENV line and replacing it with the code below:

```
#ENV = Environment(loader=PackageLoader('folium', 'templates'))
import os, sys
from jinja2 import FileSystemLoader
if getattr(sys, 'frozen', False):
    # we are running in a bundle
    templatedir = sys._MEIPASS
else:
    # we are running in a normal Python environment
    templatedir = os.path.dirname(os.path.abspath(__file__))
ENV = Environment(loader=FileSystemLoader(templatedir + '\\templates'))
```

Create this spec file in your root folder; obviously your pathex and project name will be different:

```
# -*- mode: python -*-

block_cipher = None

a = Analysis(['time_punch_map.py'],
             pathex=['C:\\Users\\XXXX\\PycharmProjects\\TimePunchMap'],
             binaries=[],
             datas=[
                 (".\\venv\\Lib\\site-packages\\branca\\*.json", "branca"),
                 (".\\venv\\Lib\\site-packages\\branca\\templates", "templates"),
                 (".\\venv\\Lib\\site-packages\\folium\\templates", "templates"),
             ],
             hiddenimports=[],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          a.binaries,
          a.zipfiles,
          a.datas,
          [],
          name='time_punch_map',
          debug=False,
          bootloader_ignore_signals=False,
          strip=False,
          upx=True,
          runtime_tmpdir=None,
          console=True)
```

Finally generate the single exe with this command from the terminal:

```
pyinstaller time_punch_map.spec
```
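The frozen-versus-source check used in that patch is often factored into a small reusable helper; a minimal sketch (the name `resource_path` is illustrative, not part of folium, branca, or PyInstaller's API; `sys._MEIPASS` is the temp folder PyInstaller unpacks into):

```python
import os
import sys

def resource_path(relative):
    # Under PyInstaller, bundled data files are unpacked below sys._MEIPASS;
    # when running from source, resolve paths next to this file instead.
    if getattr(sys, 'frozen', False):
        base = sys._MEIPASS
    else:
        base = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(base, relative)
```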
I could not get this to work using pyinstaller. I had to instead use cx\_Freeze. `pip install cx_Freeze` cx\_Freeze requires that a setup.py file is created, typically in the same folder as the main script that is being converted to an exe. My setup.py file looks like this: ``` import sys from cx_Freeze import setup, Executable import os.path PYTHON_INSTALL_DIR = os.path.dirname(os.path.dirname(os.__file__)) os.environ['TCL_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tcl8.6') os.environ['TK_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tk8.6') # Dependencies are automatically detected, but it might need fine tuning. build_exe_options = {"packages": ["pkg_resources","asyncio","os","pandas","numpy","idna","folium","branca","jinja2","matplotlib"]} # GUI applications require a different base on Windows (the default is for a # console application). base = None if sys.platform == "win32": base = "Win32GUI" options = { 'build_exe': { 'include_files':[ os.path.join(PYTHON_INSTALL_DIR, 'DLLs', 'tk86t.dll'), os.path.join(PYTHON_INSTALL_DIR, 'DLLs', 'tcl86t.dll'), # 'C:\\Users\\natha\\AppData\\Local\\Programs\\Python\\Python36-32\\Lib\\site-packages\\branca\\_cnames.json', # 'C:\\Users\\natha\\AppData\\Local\\Programs\\Python\\Python36-32\\Lib\\site-packages\\branca\\_schemes.json' ], }, } setup( name = "MyProgram", version = "0.1", description = "MyProgram that I created", options = {"build_exe": build_exe_options}, executables = [Executable("myProgram.py", base=base)]) ``` Notice I had to add various folium dependencies to the "packages" dictionary, such as branca, asyncio and pkg\_resources. Also, I did independent updates for asyncio, pkg\_resources and even setuptools using pip - for example: `pip install --upgrade setuptools` Once those were in place, I would open a command prompt from the directory where my setup.py file is saved and just type `python setup.py build` Once this runs, I have a new folder in my directory called `build` and inside of that is another folder, inside of which is my exe, which ran perfectly. Hope this helps someone else that may encounter this problem.
54,836,440
There is a chance this is still a problem and the Pyinstaller and/or Folium people have no interest in fixing it, but I'll post it again here in case someone out there has discovered a workaround. I have a program that creates maps, geocodes etc and recently added the folium package to create some interactive maps in html format. I always compile my code using pyinstaller so that others at my company can just use the executable rather than running the python code. If I run my code in an IDE, it loads, runs and performs exactly as expected. However, when I attempt to compile while I have `import folium` somewhere in my script, I get an error when trying to run the executable that pyinstaller creates. The error text reads something like this: ``` Traceback (most recent call last): File "analysisSuite.py", line 58, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\folium\__init__.py", line 8, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\branca\__init__.py", line 5, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\branca\colormap.py", line 29, in <module> File "site-packages\pkg_resources\__init__.py", line 1143, in resource_stream File "site-packages\pkg_resources\__init__.py", line 1390, in get_resource_stream File "site-packages\pkg_resources\__init__.py", line 1393, in get_resource_string File "site-packages\pkg_resources\__init__.py", line 1469, in _get File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 479, in get_data with open(path, 'rb') as fp: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\natha\\AppData\\Local\\Temp\\_MEI309082\\branca\\_cnames.json' [30956] Failed to execute script analysisSuite ``` I am still relatively new to Python, so trying to decipher what the issue is by this text is pretty overwhelming. I have no idea if there is a workaround, where I just need to edit a file, add a file or add some parameter to pyinstaller, but perhaps someone else out there can read this and has an idea of what could be causing this problem. Thanks in advance to anyone that has suggestions. EDIT: The problem seems to be with branca, which is a dependency of folium. 
It looks for that \_cnames.json file which is in the site-packages\branca folder but either doesn't get copied as it should or perhaps I need to somehow identify in my script where it should look for those files and then just manually copy them into a folder that I choose. ADDITIONAL UPDATE: I've been testing and testing and have determined the heart of the problem. When you run your exe, it gets unpacked in a temp folder. One of the modules within `branca` is `colormap.py` In the `colormap` file, there are essentially three lines that keep `branca` from loading correctly. ``` resource_package = __name__ resource_path_schemes = '/_schemes.json' resource_path_cnames = '/_cnames.json' ``` So, when the executable gets unpacked in this temp folder and branca tries to load up, because of these above lines, it expects these two files to also be in this temp folder, but of course, they won't be because they're being told to always and only be in the folder where the colormap module lives. The key here is figuring out a way so that the path reference can be relative, so that it doesn't look in the temp folder but also that the reference is dynamic, so that wherever you have your executable, as long as you have those json files present in some folder that it "knows" about, then you'll be good. Now I just need to figure out how to do that.
2019/02/22
[ "https://Stackoverflow.com/questions/54836440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9431874/" ]
With PyInstaller it works using the trick above. If you need a one-folder build (via `COLLECT`) instead of a single exe, this spec file can be used.

```
import platform

block_cipher = None

a = Analysis(['Test_Beta.py'],
             pathex=['C:\\Old desktop\\test\\esky\\fileserver\\test11'],
             binaries=[(winsparkle, '.')],  # winsparkle is a variable defined elsewhere in the original spec
             datas=[
                 ("C:\\Users\\kv\\AppData\\Local\\Continuum\\Anaconda3\\Lib\\site-packages\\branca\\*.json", "branca"),
                 ("C:\\Users\\kv\\AppData\\Local\\Continuum\\Anaconda3\\Lib\\site-packages\\branca\\templates", "templates"),
                 ("C:\\Users\\kv\\AppData\\Local\\Continuum\\Anaconda3\\Lib\\site-packages\\folium\\templates", "templates"),
             ],
             hiddenimports=[],
             hookspath=[],
             runtime_hooks=[],
             excludes=['scipy', 'zmq', '_gtkagg', '_tkagg', 'bsddb', 'curses',
                       'pywin.debugger', 'pywin.debugger.dbgcon', 'pywin.dialogs',
                       'tcl', 'Tkconstants', 'Tkinter'],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          [],
          exclude_binaries=True,
          name='Outview',
          debug=False,
          strip=False,
          upx=True,
          console=False)
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               name='Outview')
```
I could not get this to work using pyinstaller. I had to instead use cx\_Freeze. `pip install cx_Freeze` cx\_Freeze requires that a setup.py file is created, typically in the same folder as the main script that is being converted to an exe. My setup.py file looks like this: ``` import sys from cx_Freeze import setup, Executable import os.path PYTHON_INSTALL_DIR = os.path.dirname(os.path.dirname(os.__file__)) os.environ['TCL_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tcl8.6') os.environ['TK_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tk8.6') # Dependencies are automatically detected, but it might need fine tuning. build_exe_options = {"packages": ["pkg_resources","asyncio","os","pandas","numpy","idna","folium","branca","jinja2","matplotlib"]} # GUI applications require a different base on Windows (the default is for a # console application). base = None if sys.platform == "win32": base = "Win32GUI" options = { 'build_exe': { 'include_files':[ os.path.join(PYTHON_INSTALL_DIR, 'DLLs', 'tk86t.dll'), os.path.join(PYTHON_INSTALL_DIR, 'DLLs', 'tcl86t.dll'), # 'C:\\Users\\natha\\AppData\\Local\\Programs\\Python\\Python36-32\\Lib\\site-packages\\branca\\_cnames.json', # 'C:\\Users\\natha\\AppData\\Local\\Programs\\Python\\Python36-32\\Lib\\site-packages\\branca\\_schemes.json' ], }, } setup( name = "MyProgram", version = "0.1", description = "MyProgram that I created", options = {"build_exe": build_exe_options}, executables = [Executable("myProgram.py", base=base)]) ``` Notice I had to add various folium dependencies to the "packages" dictionary, such as branca, asyncio and pkg\_resources. Also, I did independent updates for asyncio, pkg\_resources and even setuptools using pip - for example: `pip install --upgrade setuptools` Once those were in place, I would open a command prompt from the directory where my setup.py file is saved and just type `python setup.py build` Once this runs, I have a new folder in my directory called `build` and inside of that is another folder, inside of which is my exe, which ran perfectly. Hope this helps someone else that may encounter this problem.
54,836,440
There is a chance this is still a problem and the Pyinstaller and/or Folium people have no interest in fixing it, but I'll post it again here in case someone out there has discovered a workaround. I have a program that creates maps, geocodes etc and recently added the folium package to create some interactive maps in html format. I always compile my code using pyinstaller so that others at my company can just use the executable rather than running the python code. If I run my code in an IDE, it loads, runs and performs exactly as expected. However, when I attempt to compile while I have `import folium` somewhere in my script, I get an error when trying to run the executable that pyinstaller creates. The error text reads something like this: ``` Traceback (most recent call last): File "analysisSuite.py", line 58, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\folium\__init__.py", line 8, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\branca\__init__.py", line 5, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module exec(bytecode, module.__dict__) File "site-packages\branca\colormap.py", line 29, in <module> File "site-packages\pkg_resources\__init__.py", line 1143, in resource_stream File "site-packages\pkg_resources\__init__.py", line 1390, in get_resource_stream File "site-packages\pkg_resources\__init__.py", line 1393, in get_resource_string File "site-packages\pkg_resources\__init__.py", line 1469, in _get File "c:\users\natha\appdata\local\programs\python\python36-32\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 479, in get_data with open(path, 'rb') as fp: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\natha\\AppData\\Local\\Temp\\_MEI309082\\branca\\_cnames.json' [30956] Failed to execute script analysisSuite ``` I am still relatively new to Python, so trying to decipher what the issue is by this text is pretty overwhelming. I have no idea if there is a workaround, where I just need to edit a file, add a file or add some parameter to pyinstaller, but perhaps someone else out there can read this and has an idea of what could be causing this problem. Thanks in advance to anyone that has suggestions. EDIT: The problem seems to be with branca, which is a dependency of folium. 
It looks for that \_cnames.json file which is in the site-packages\branca folder but either doesn't get copied as it should or perhaps I need to somehow identify in my script where it should look for those files and then just manually copy them into a folder that I choose. ADDITIONAL UPDATE: I've been testing and testing and have determined the heart of the problem. When you run your exe, it gets unpacked in a temp folder. One of the modules within `branca` is `colormap.py` In the `colormap` file, there are essentially three lines that keep `branca` from loading correctly. ``` resource_package = __name__ resource_path_schemes = '/_schemes.json' resource_path_cnames = '/_cnames.json' ``` So, when the executable gets unpacked in this temp folder and branca tries to load up, because of these above lines, it expects these two files to also be in this temp folder, but of course, they won't be because they're being told to always and only be in the folder where the colormap module lives. The key here is figuring out a way so that the path reference can be relative, so that it doesn't look in the temp folder but also that the reference is dynamic, so that wherever you have your executable, as long as you have those json files present in some folder that it "knows" about, then you'll be good. Now I just need to figure out how to do that.
2019/02/22
[ "https://Stackoverflow.com/questions/54836440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9431874/" ]
I had the same problem. Pyinstaller could not work with the Python Folium package. I could not get your cx\_Freeze solution to work due to issues with Python 3.7 and cx\_Freeze, but after a day of stress I found a Pyinstaller solution, which I am sharing with the community. Firstly, you have to edit these 3 files: 1. \folium\folium.py 2. \folium\raster\_layers.py 3. \branca\element.py Make the following changes, commenting out the existing ENV line and replacing it with the code below:

```
#ENV = Environment(loader=PackageLoader('folium', 'templates'))
import os, sys
from jinja2 import FileSystemLoader
if getattr(sys, 'frozen', False):
    # we are running in a bundle
    templatedir = sys._MEIPASS
else:
    # we are running in a normal Python environment
    templatedir = os.path.dirname(os.path.abspath(__file__))
ENV = Environment(loader=FileSystemLoader(templatedir + '\\templates'))
```

Create this spec file in your root folder; obviously your pathex and project name will be different:

```
# -*- mode: python -*-

block_cipher = None

a = Analysis(['time_punch_map.py'],
             pathex=['C:\\Users\\XXXX\\PycharmProjects\\TimePunchMap'],
             binaries=[],
             datas=[
                 (".\\venv\\Lib\\site-packages\\branca\\*.json", "branca"),
                 (".\\venv\\Lib\\site-packages\\branca\\templates", "templates"),
                 (".\\venv\\Lib\\site-packages\\folium\\templates", "templates"),
             ],
             hiddenimports=[],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             noarchive=False)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          a.binaries,
          a.zipfiles,
          a.datas,
          [],
          name='time_punch_map',
          debug=False,
          bootloader_ignore_signals=False,
          strip=False,
          upx=True,
          runtime_tmpdir=None,
          console=True)
```

Finally generate the single exe with this command from the terminal:

```
pyinstaller time_punch_map.spec
```
With PyInstaller it works using the trick above. If you need a one-folder build (via `COLLECT`) instead of a single exe, this spec file can be used.

```
import platform

block_cipher = None

a = Analysis(['Test_Beta.py'],
             pathex=['C:\\Old desktop\\test\\esky\\fileserver\\test11'],
             binaries=[(winsparkle, '.')],  # winsparkle is a variable defined elsewhere in the original spec
             datas=[
                 ("C:\\Users\\kv\\AppData\\Local\\Continuum\\Anaconda3\\Lib\\site-packages\\branca\\*.json", "branca"),
                 ("C:\\Users\\kv\\AppData\\Local\\Continuum\\Anaconda3\\Lib\\site-packages\\branca\\templates", "templates"),
                 ("C:\\Users\\kv\\AppData\\Local\\Continuum\\Anaconda3\\Lib\\site-packages\\folium\\templates", "templates"),
             ],
             hiddenimports=[],
             hookspath=[],
             runtime_hooks=[],
             excludes=['scipy', 'zmq', '_gtkagg', '_tkagg', 'bsddb', 'curses',
                       'pywin.debugger', 'pywin.debugger.dbgcon', 'pywin.dialogs',
                       'tcl', 'Tkconstants', 'Tkinter'],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          [],
          exclude_binaries=True,
          name='Outview',
          debug=False,
          strip=False,
          upx=True,
          console=False)
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               name='Outview')
```
20,364,207
Hey, I've been trying to add Python 3.3 to Windows PowerShell by replacing 27 with 33 in the path. I tried to post a screenshot, but it turns out I need 10 rep, so I'll just copy and paste what I've attempted:

```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33", "User")
```

```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33")
```

```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33\python.exe", "User")
```

```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33;C:\Python33\Scripts", "User")
```

```
[Enviroment]::SetEnviromentVariable("Path", "$env:Path;C:\Python33\", "User")
```

The path to the folder where python.exe resides is: C:\Python33 Somewhere I'm doing something wrong but am not sure where. Help a fellow out with his foray into programming? Thanks.
2013/12/03
[ "https://Stackoverflow.com/questions/20364207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3063721/" ]
Python 3.3 comes with PyLauncher (py.exe), which is installed in the C:\Windows directory (already on the path) and enables any installed Python to be executed via the command line as follows:

```
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.

PS C:\> py
Python 3.3.3 (v3.3.3:c3896275c0f6, Nov 18 2013, 21:19:30) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ^Z

PS C:\> py -2
Python 2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ^Z

PS C:\> py -3
Python 3.3.3 (v3.3.3:c3896275c0f6, Nov 18 2013, 21:19:30) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

Note that the default Python if both 2.X and 3.X are installed is 2.X (3.X in later versions of Python), but this can be overridden with the `-3` switch, or the default changed by setting the `PY_PYTHON` environment variable. Also, if you install Python 3.3 last and register extensions, PyLauncher will be the default program for .py files, and adding a special `#!` comment to the top of a script will specify the version of Python to use for the script. This allows you to have Python 2 and Python 3 files on the desktop and just double-click them to run the correct version of Python for that script. See [Python Launcher for Windows](http://docs.python.org/3/using/windows.html?highlight=launcher#python-launcher-for-windows) in the [Python 3 docs](http://docs.python.org/3).
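For example, a script whose first line is a version-selecting shebang will run under Python 3 even when Python 2 is the system default; a minimal sketch (the script body is illustrative):

```python
#! python3
# PyLauncher (py.exe) reads this first line and picks an installed Python 3.
import sys

print("Running under", sys.version)
```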
The Windows environment variable `Path` is searched left to right. If the path to the 2.7 binaries is still in the variable, it will never find the 3.3 binaries, whose path you are appending to the end of the variable. Also, you are not adding the path to PowerShell. The Windows Python binaries are what PowerShell considers legacy executables. What you are doing is telling the OS where executable binaries are; PowerShell knows how to use that info to execute those binaries without an absolute path. To do what you are looking to do in PowerShell, try something like this:

```
$env:Path = ((($env:Path -split ";") | Where-Object { $_ -notlike "*Python*"}) -join ";") + ";C:\Python33"
```

To make it persist, do this next:

```
[Environment]::SetEnvironmentVariable("Path",$env:Path, "User")
```
48,272,511
I have a CLI tool I need to open ([indy](https://github.com/hyperledger/indy-node/blob/stable/getting-started.md)), and then execute some commands in it. So I want to write a bash script to do this for me. Using Python as an example, it might look like:

```
#!/bin/bash
python
print ("hello world")
```

But of course all this does is open Python; it doesn't enter the commands. How would I make this work? My development environment is Windows, and the runtime environment will be a Linux docker container. Edit: It looks like this approach will work for what I'm actually doing; it seems like Python doesn't like it, though. Any clues why?
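One common pattern for this is a here-document, which feeds the lines to the tool's standard input; a sketch (whether indy's interactive prompt accepts piped input this way is an assumption you would need to test):

```bash
#!/bin/bash
# The here-document below becomes the program's stdin, line by line.
# python stands in for the CLI tool, as in the question's example.
python <<'EOF'
print("hello world")
EOF
```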
2018/01/16
[ "https://Stackoverflow.com/questions/48272511", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068446/" ]
You forgot to add the date's current minutes in the `setMinutes` call:

```
function getUserTimeZoneDateTime() {
    var currentUtcDateTime = moment.utc().toDate();
    var mod_start = new Date(currentUtcDateTime.setMinutes(currentUtcDateTime.getMinutes() + GlobalValues.OffsetMinutesFromUTC - currentUtcDateTime.getTimezoneOffset()));
    var currentUserDateTime = moment(mod_start).format('MM/DD/YYYY h:mm A');
    return currentUserDateTime;
};
```

The value you were passing in the parentheses was only the offset difference, so the date's own minutes were being discarded. You can also use an easier way, `moment(obj).utcOffset(OffsetMinutesFromUTC);`, to set the offset:

```
function getUserTimeZoneDateTime() {
    var currentUtcDateTime = moment.utc().toDate();
    return moment(currentUtcDateTime).utcOffset(GlobalValues.OffsetMinutesFromUTC - currentUtcDateTime.getTimezoneOffset()).format('MM/DD/YYYY h:mm A');
};
```
```
var now = new Date().getTime();
```

This gets the current time as milliseconds since the Unix epoch and stores it in the variable, here called `now`. The value is timezone-independent, so it is the same wherever the user is; other `Date` methods then render that moment in the user's local time zone. Hope this helps!
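A small sketch of that distinction (runnable in any browser console):

```js
const now = new Date();
console.log(now.getTime());        // epoch milliseconds, identical in every time zone
console.log(now.toLocaleString()); // the same moment rendered in the user's local zone
```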
73,064,635
I was trying to capture a video in kivy/android using camera4kivy, but it seems that this function won't work. I tried `capture_video` with location, subdir and name keyword arguments, but still nothing happened.

```
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.image import Image

from camera4kivy.preview import Preview

class CamApp(App):
    def build(self):
        self.cam = Preview()
        self.cam.connect_camera(enable_analyze_pixels=True)
        self.cam.select_camera('1')
        box1 = BoxLayout()
        box1.add_widget(self.cam)
        try:
            self.cam.capture_video(location = 'shared', subdir='myapp', name='myvid')
        except Exception as e:
            print(e)
        return box1

    def on_stop(self):
        self.cam.disconnect_camera()
        return super().on_stop()

if __name__ == '__main__':
    CamApp().run()
```

```
07-21 16:17:14.405 28320 29758 I python : JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.core.VideoCapture.startRecording(androidx.camera.core.VideoCapture$OutputFileOptions, java.util.concurrent.Executor, androidx.camera.core.VideoCapture$OnVideoSavedCallback)' on a null object reference java.lang.NullPointerException
07-21 16:17:14.406 28320 28320 I python : Traceback (most recent call last):
07-21 16:17:14.406 28320 28320 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/android/runnable.py", line 38, in run
07-21 16:17:14.407 28320 28320 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 289, in do_select_camera
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_export_class.pxi", line 857, in jnius.jnius.JavaMethod.__call__
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_export_class.pxi", line 954, in jnius.jnius.JavaMethod.call_method
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_utils.pxi", line 91, in jnius.jnius.check_exception
07-21 16:17:14.407 28320 28320 I python : jnius.jnius.JavaException: JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.lifecycle.ProcessCameraProvider.unbindAll()' on a null object reference java.lang.NullPointerException
07-21 16:17:14.408 28320 29758 I python : [WARNING] [Base        ] Unknown provider
07-21 16:17:14.408 28320 29758 I python : [INFO   ] [Base        ] Start application main loop
07-21 16:17:14.411 28320 29758 I python : [INFO   ] [Base        ] Leaving application in progress...
07-21 16:17:14.412 28320 29758 I python : Traceback (most recent call last):
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/app/main.py", line 31, in <module>
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/app.py", line 955, in run
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 574, in runTouchApp
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 339, in mainloop
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 391, in idle
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/clock.py", line 783, in tick_draw
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 662, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 708, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 704, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 218, in kivy._clock.ClockEvent.tick
07-21 16:17:14.414 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/uix/anchorlayout.py", line 122, in do_layout
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 520, in kivy.properties.Property.__set__
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 1478, in kivy.properties.ReferenceListProperty.set
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 606, in kivy.properties.Property._dispatch
07-21 16:17:14.415 28320 29758 I python :   File "kivy/_event.pyx", line 1307, in kivy._event.EventObservers.dispatch
07-21 16:17:14.416 28320 29758 I python :   File "kivy/_event.pyx", line 1213, in kivy._event.EventObservers._dispatch
07-21 16:17:14.416 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 159, in on_size
07-21 16:17:14.416 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 217, in stop_capture_video
07-21 16:17:14.416 28320 29758 I python :   File "jnius/jnius_export_class.pxi", line 857, in jnius.jnius.JavaMethod.__call__
07-21 16:17:14.417 28320 29758 I python :   File "jnius/jnius_export_class.pxi", line 954, in jnius.jnius.JavaMethod.call_method
07-21 16:17:14.417 28320 29758 I python :   File "jnius/jnius_utils.pxi", line 91, in jnius.jnius.check_exception
07-21 16:17:14.417 28320 29758 I python : jnius.jnius.JavaException: JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.core.VideoCapture.stopRecording()' on a null object reference java.lang.NullPointerException
07-21 16:17:14.417 28320 29758 I python : Python for android ended.
07-21 16:17:14.540 28320 29758 F com.moria.test: mutex.cc:340] destroying mutex with owner or contenders. Owner:29737
07-21 16:17:14.541 28320 29737 F com.moria.test: debugger_interface.cc:356] Check failed: removed_it == removed_entries.end()
```
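The null `VideoCapture` in the log suggests `capture_video()` runs before the Android camera provider has finished binding. A sketch of one workaround under that assumption, reusing the imports from the snippet above (the two-second delay is arbitrary, and connecting in `on_start` rather than `build` follows camera4kivy's examples):

```python
from kivy.clock import Clock

class CamApp(App):
    def build(self):
        self.cam = Preview()
        box1 = BoxLayout()
        box1.add_widget(self.cam)
        return box1

    def on_start(self):
        # connect after the app window exists, then give the provider time to bind
        self.cam.connect_camera(enable_analyze_pixels=True)
        Clock.schedule_once(self.start_recording, 2)

    def start_recording(self, dt):
        self.cam.capture_video(location='shared', subdir='myapp', name='myvid')

    def on_stop(self):
        self.cam.disconnect_camera()
```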
2022/07/21
[ "https://Stackoverflow.com/questions/73064635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19499522/" ]
You can use the `Map` collection:

```
new Map(fooArr.map(i => [i.name, i.surname]));
```

As [MDN says about the `Map` collection](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map):

> The Map object holds key-value pairs and remembers the original insertion order of the keys. Any value (both objects and primitive values) may be used as either a key or a value.

An example:

```js
let fooArr = [
    { name: 'name 1', surname: 'surname 1' },
    { name: 'name 2', surname: 'surname 2' }
];
let result = new Map(fooArr.map(i => [i.name, i.surname]));
console.log(JSON.stringify([...result]));
```

As an alternative, you can use `Set` or just create a simple object; objects hold key-value pairs too. Let me show an example:

```js
let fooArr = [
    { name: 'foo', surname: 'bar' },
    { name: 'hello', surname: 'world' }
];
let object = fooArr.reduce(
    (obj, item) => Object.assign(obj, { [item.name]: item.surname }), {});
console.log(object)
```
To add to the previous answer: this React guide explains lists and how to set keys for them: <https://reactjs.org/docs/lists-and-keys.html#keys> I would recommend making a `<div>` for each result, or making a `<table>` with `<tr>` and `<td>` to store the individual items. Give each div or row a key and it is a lot easier to work with afterwards.
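A minimal sketch of such a keyed table (the `name`/`surname` fields echo the snippet above; with real data a stable id is a better key than a name):

```jsx
function Results({ fooArr }) {
  // each row gets a key so React can track it across re-renders
  return (
    <table>
      <tbody>
        {fooArr.map(item => (
          <tr key={item.name}>
            <td>{item.name}</td>
            <td>{item.surname}</td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}
```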
73,064,635
I was trying to capture a video in kivy/android using camera4kivy, but it seems that this function won't work. I tried `capture_video` with location, subdir and name keyword arguments, but still nothing happened.

```
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.image import Image

from camera4kivy.preview import Preview

class CamApp(App):
    def build(self):
        self.cam = Preview()
        self.cam.connect_camera(enable_analyze_pixels=True)
        self.cam.select_camera('1')
        box1 = BoxLayout()
        box1.add_widget(self.cam)
        try:
            self.cam.capture_video(location = 'shared', subdir='myapp', name='myvid')
        except Exception as e:
            print(e)
        return box1

    def on_stop(self):
        self.cam.disconnect_camera()
        return super().on_stop()

if __name__ == '__main__':
    CamApp().run()
```

```
07-21 16:17:14.405 28320 29758 I python : JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.core.VideoCapture.startRecording(androidx.camera.core.VideoCapture$OutputFileOptions, java.util.concurrent.Executor, androidx.camera.core.VideoCapture$OnVideoSavedCallback)' on a null object reference java.lang.NullPointerException
07-21 16:17:14.406 28320 28320 I python : Traceback (most recent call last):
07-21 16:17:14.406 28320 28320 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/android/runnable.py", line 38, in run
07-21 16:17:14.407 28320 28320 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 289, in do_select_camera
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_export_class.pxi", line 857, in jnius.jnius.JavaMethod.__call__
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_export_class.pxi", line 954, in jnius.jnius.JavaMethod.call_method
07-21 16:17:14.407 28320 28320 I python :   File "jnius/jnius_utils.pxi", line 91, in jnius.jnius.check_exception
07-21 16:17:14.407 28320 28320 I python : jnius.jnius.JavaException: JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.lifecycle.ProcessCameraProvider.unbindAll()' on a null object reference java.lang.NullPointerException
07-21 16:17:14.408 28320 29758 I python : [WARNING] [Base        ] Unknown provider
07-21 16:17:14.408 28320 29758 I python : [INFO   ] [Base        ] Start application main loop
07-21 16:17:14.411 28320 29758 I python : [INFO   ] [Base        ] Leaving application in progress...
07-21 16:17:14.412 28320 29758 I python : Traceback (most recent call last):
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/app/main.py", line 31, in <module>
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/app.py", line 955, in run
07-21 16:17:14.412 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 574, in runTouchApp
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 339, in mainloop
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/base.py", line 391, in idle
07-21 16:17:14.413 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/clock.py", line 783, in tick_draw
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 662, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 708, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 704, in kivy._clock.CyClockBase._process_events_before_frame
07-21 16:17:14.414 28320 29758 I python :   File "kivy/_clock.pyx", line 218, in kivy._clock.ClockEvent.tick
07-21 16:17:14.414 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/kivy/uix/anchorlayout.py", line 122, in do_layout
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 520, in kivy.properties.Property.__set__
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 1478, in kivy.properties.ReferenceListProperty.set
07-21 16:17:14.415 28320 29758 I python :   File "kivy/properties.pyx", line 606, in kivy.properties.Property._dispatch
07-21 16:17:14.415 28320 29758 I python :   File "kivy/_event.pyx", line 1307, in kivy._event.EventObservers.dispatch
07-21 16:17:14.416 28320 29758 I python :   File "kivy/_event.pyx", line 1213, in kivy._event.EventObservers._dispatch
07-21 16:17:14.416 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 159, in on_size
07-21 16:17:14.416 28320 29758 I python :   File "/home/testapp/.buildozer/android/platform/build-arm64-v8a/build/python-installs/test/arm64-v8a/camera4kivy/preview_camerax.py", line 217, in stop_capture_video
07-21 16:17:14.416 28320 29758 I python :   File "jnius/jnius_export_class.pxi", line 857, in jnius.jnius.JavaMethod.__call__
07-21 16:17:14.417 28320 29758 I python :   File "jnius/jnius_export_class.pxi", line 954, in jnius.jnius.JavaMethod.call_method
07-21 16:17:14.417 28320 29758 I python :   File "jnius/jnius_utils.pxi", line 91, in jnius.jnius.check_exception
07-21 16:17:14.417 28320 29758 I python : jnius.jnius.JavaException: JVM exception occurred: Attempt to invoke virtual method 'void androidx.camera.core.VideoCapture.stopRecording()' on a null object reference java.lang.NullPointerException
07-21 16:17:14.417 28320 29758 I python : Python for android ended.
07-21 16:17:14.540 28320 29758 F com.moria.test: mutex.cc:340] destroying mutex with owner or contenders. Owner:29737
07-21 16:17:14.541 28320 29737 F com.moria.test: debugger_interface.cc:356] Check failed: removed_it == removed_entries.end()
```
2022/07/21
[ "https://Stackoverflow.com/questions/73064635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19499522/" ]
You can use the `Map` collection:

```
new Map(fooArr.map(i => [i.name, i.surname]));
```

As [MDN says about the `Map` collection](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map):

> The Map object holds key-value pairs and remembers the original insertion order of the keys. Any value (both objects and primitive values) may be used as either a key or a value.

An example:

```js
let fooArr = [
    { name: 'name 1', surname: 'surname 1' },
    { name: 'name 2', surname: 'surname 2' }
];
let result = new Map(fooArr.map(i => [i.name, i.surname]));
console.log(JSON.stringify([...result]));
```

As an alternative, you can use `Set` or just create a simple object; objects hold key-value pairs too. Let me show an example:

```js
let fooArr = [
    { name: 'foo', surname: 'bar' },
    { name: 'hello', surname: 'world' }
];
let object = fooArr.reduce(
    (obj, item) => Object.assign(obj, { [item.name]: item.surname }), {});
console.log(object)
```
You can use `Object.entries`, which is similar to Python's `dict.items()`:

```js
const data = {a: 1, b: 2}
console.log(Object.entries(data))
```