Columns: qid (int64), question (string), date (string), metadata (list), response_j (string), response_k (string)
47,659,731
My code runs fine for the first iteration, but after that it outputs the following error: ``` ValueError: matrix must be 2-dimensional ``` To the best of my knowledge (which is not much in Python), my code is correct, but I don't know why it does not run correctly for all of the given iterations. Could anyone help me with this problem? ``` from __future__ import division import numpy as np import math import matplotlib.pylab as plt import sympy as sp from numpy.linalg import inv #initial guesses x = -2 y = -2.5 i1 = 0 while i1<5: F= np.matrix([[(x**2)+(x*y**3)-9],[(3*y*x**2)-(y**3)-4]]) theta = np.sum(F) J = np.matrix([[(2*x)+y**3, 3*x*y**2],[6*x*y, (3*x**2)-(3*y**2)]]) Jinv = inv(J) xn = np.array([[x],[y]]) xn_1 = xn - (Jinv*F) x = xn_1[0] y = xn_1[1] #~ print theta print xn i1 = i1+1 ```
2017/12/05
[ "https://Stackoverflow.com/questions/47659731", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5507715/" ]
In a comment, you said, > > Yes that is the structure however const items, references, and class items can't be initialized in the body of constructors or in a non-constructor method. > > > A [delegating constructor](http://www.stroustrup.com/C++11FAQ.html#delegating-ctor) can be used to initialize reference member variables. Expanding your example code a bit, I can see something like: ``` class Obj { static const AType defaultAType; const AType &aRef; static const BType defaultBType; const BType &bRef; public: // Delegate with default values for both references Obj() : Obj(defaultAType, defaultBType) {} // Delegate with default value for the B reference Obj(AType &aType) : Obj(aType, defaultBType) {} // Delegate with default value for the A reference Obj(BType &bType) : Obj(defaultAType, bType) {} // A constructor that has all the arguments. Obj(AType& aType, BType& bType) : aRef(aType), bRef(bType) {} }; ```
No, unless you are using C++11, where you can initialize some of them in the class definition: ``` struct B { B(int) {} constexpr B(double) {} }; class A { const B b1 = 1; static constexpr B b2 = 2.0; }; ``` For const values constructed from constructor input parameters, you need to use a [member initializer list](http://en.cppreference.com/w/cpp/language/initializer_list).
26,978,891
Using Maven I want to create 1) a JAR file for my current project with the current version included in the file name, myproject-version.jar, and 2) an overall artifact in tar.gz format containing the project's JAR file and all dependency JARs in a lib directory and various driver scripts in a bin directory, but without the version number or an arbitrary ID in the name. I have this working somewhat using the assembly plugin, in that if I use the below pom.xml and assembly.xml then when I run 'mvn package' I can get a tar.gz file with the JARs and scripts included as desired; however, I don't seem to be able to get the naming/versioning correct -- either I get both the project's JAR file and the tar.gz file with the version number or not, depending on the build/finalName used. How can I specify these separately, and is it impossible to build the final artifact without an ID appended to the artifact name? For example, I'd like to have my project's JAR file be named myproject-0.0.1-SNAPSHOT.jar and the overall "uber" artifact be named myproject.tar.gz (no version number or additional ID appended to the name). Is this possible? My current pom.xml and assembly.xml are included below. pom.xml: ``` <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>mygroup</groupId> <artifactId>myproject</artifactId> <packaging>jar</packaging> <version>0.0.1-SNAPSHOT</version> <name>myproject</name> <url>http://maven.apache.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> <dependency> <groupId>joda-time</groupId> <artifactId>joda-time</artifactId> <version>2.3</version> </dependency> </dependencies> <build> <finalName>${project.artifactId}-${project.version}</finalName> <plugins> <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.4.1</version> <configuration> <descriptors> <descriptor>src/main/maven/assembly.xml</descriptor> </descriptors> </configuration> <executions> <execution> <id>make-assembly</id> <!-- this is used for inheritance merges --> <phase>package</phase> <!-- bind to the packaging phase --> <goals> <goal>single</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> ``` assembly.xml: ``` <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd"> <id>bin</id> <formats> <format>tar.gz</format> </formats> <fileSets> <!-- the following file set includes the Python scripts/modules used to drive the monthly processing runs --> <fileSet> <directory>src/main/python</directory> <outputDirectory>bin</outputDirectory> <includes> <include>indices_processor.py</include> <include>concat_timeslices.py</include> </includes> </fileSet> <!-- the following file set includes the JAR artifact built by the package goal --> <fileSet> <directory>target</directory> <outputDirectory>lib</outputDirectory> <includes> <include>*.jar</include> </includes> </fileSet> </fileSets> <dependencySets> <!-- the following dependency set includes the dependency JARs needed by the main Java
executable for indicator processing --> <dependencySet> <outputDirectory>lib</outputDirectory> <useProjectArtifact>true</useProjectArtifact> <scope>runtime</scope> <unpack>false</unpack> </dependencySet> </dependencySets> </assembly> ``` Thanks in advance for any suggestions.
2014/11/17
[ "https://Stackoverflow.com/questions/26978891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/85248/" ]
Just use `${project.artifactId}` as the value for your `finalName` in your assembly configuration. Example derived from your config (note the `finalName` element inside configuration): ``` <plugin> <artifactId>maven-assembly-plugin</artifactId> <version>2.4.1</version> <configuration> <descriptors> <descriptor>src/main/maven/assembly.xml</descriptor> </descriptors> <finalName>${project.artifactId}</finalName> </configuration> ... </plugin> ``` `finalName` for an assembly defaults to `${project.build.finalName}` if you don't change it. The default value of `${project.build.finalName}` is `${project.artifactId}-${project.version}`.
I think you are looking for this: <http://maven.apache.org/plugins/maven-assembly-plugin/single-mojo.html#finalName> Just put it in the configuration of the plugin. However, I think you shouldn't remove the version if you're planning to upload it to some repository.
612,253
I'm using parallel LINQ, and I'm trying to download many URLs concurrently using essentially code like this: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel( threads ).ToDictionary( url => url, url => GetPage( url ) ); ``` Since downloading web pages is network bound rather than CPU bound, using more threads than my number of processors/cores is very beneficial, since most of the time in each thread is spent waiting for the network to catch up. However, judging from the fact that running the above with threads = 2 has the same performance as threads = 10 on my dual core machine, I'm thinking that the number of threads sent to AsParallel is limited to the number of cores. Is there any way to override this behavior? Is there a similar library available that doesn't have this limitation? (I've found such a library for Python, but need something that works in .Net)
2009/03/04
[ "https://Stackoverflow.com/questions/612253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/30529/" ]
Do the URLs refer to the same server? If so, it could be that you are hitting the HTTP connection limit instead of the threading limit. There's an easy way to tell - change your code to: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel(threads) .ToDictionary(url => url, url => { Console.WriteLine("On thread {0}", Thread.CurrentThread.ManagedThreadId); return GetPage(url); }); ``` EDIT: Hmm. I can't get `ToDictionary()` to parallelise *at all* with a bit of sample code. It works fine for `Select(url => GetPage(url))` but not `ToDictionary`. Will search around a bit. EDIT: Okay, I still can't get `ToDictionary` to parallelise, but you can work around that. Here's a short but complete program: ``` using System; using System.Collections.Generic; using System.Threading; using System.Linq; using System.Linq.Parallel; public class Test { static void Main() { var urls = Enumerable.Range(0, 100).Select(i => i.ToString()); int threads = 10; Dictionary<string, string> results = urls.AsParallel(threads) .Select(url => new { Url=url, Page=GetPage(url) }) .ToDictionary(x => x.Url, x => x.Page); } static string GetPage(string x) { Console.WriteLine("On thread {0} getting {1}", Thread.CurrentThread.ManagedThreadId, x); Thread.Sleep(2000); return x; } } ``` So, how many threads does this use? 5. Why? Goodness knows. I've got 2 processors, so that's not it - and we've specified 10 threads, so that's not it. It still uses 5 even if I change `GetPage` to hammer the CPU. If you only need to use this for one particular task - and you don't mind slightly smelly code - you might be best off implementing it yourself, to be honest.
Monitor your network traffic. If the URLs are from the same domain it may be limiting the bandwidth. More connections might not actually provide any speed-up.
612,253
I'm using parallel LINQ, and I'm trying to download many URLs concurrently using essentially code like this: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel( threads ).ToDictionary( url => url, url => GetPage( url ) ); ``` Since downloading web pages is network bound rather than CPU bound, using more threads than my number of processors/cores is very beneficial, since most of the time in each thread is spent waiting for the network to catch up. However, judging from the fact that running the above with threads = 2 has the same performance as threads = 10 on my dual core machine, I'm thinking that the number of threads sent to AsParallel is limited to the number of cores. Is there any way to override this behavior? Is there a similar library available that doesn't have this limitation? (I've found such a library for Python, but need something that works in .Net)
2009/03/04
[ "https://Stackoverflow.com/questions/612253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/30529/" ]
Do the URLs refer to the same server? If so, it could be that you are hitting the HTTP connection limit instead of the threading limit. There's an easy way to tell - change your code to: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel(threads) .ToDictionary(url => url, url => { Console.WriteLine("On thread {0}", Thread.CurrentThread.ManagedThreadId); return GetPage(url); }); ``` EDIT: Hmm. I can't get `ToDictionary()` to parallelise *at all* with a bit of sample code. It works fine for `Select(url => GetPage(url))` but not `ToDictionary`. Will search around a bit. EDIT: Okay, I still can't get `ToDictionary` to parallelise, but you can work around that. Here's a short but complete program: ``` using System; using System.Collections.Generic; using System.Threading; using System.Linq; using System.Linq.Parallel; public class Test { static void Main() { var urls = Enumerable.Range(0, 100).Select(i => i.ToString()); int threads = 10; Dictionary<string, string> results = urls.AsParallel(threads) .Select(url => new { Url=url, Page=GetPage(url) }) .ToDictionary(x => x.Url, x => x.Page); } static string GetPage(string x) { Console.WriteLine("On thread {0} getting {1}", Thread.CurrentThread.ManagedThreadId, x); Thread.Sleep(2000); return x; } } ``` So, how many threads does this use? 5. Why? Goodness knows. I've got 2 processors, so that's not it - and we've specified 10 threads, so that's not it. It still uses 5 even if I change `GetPage` to hammer the CPU. If you only need to use this for one particular task - and you don't mind slightly smelly code - you might be best off implementing it yourself, to be honest.
By default, .Net has a limit of 2 concurrent connections to an end service point (IP:port). That's why you would not see a difference if all URLs point to one and the same server. It can be controlled using the [ServicePointManager.DefaultPersistentConnectionLimit](http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.defaultpersistentconnectionlimit.aspx) property.
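For reference, a minimal C# sketch of raising that limit — note it is the settable `ServicePointManager.DefaultConnectionLimit` property that actually controls the cap (the `DefaultPersistentConnectionLimit` constant linked above merely supplies its default value of 2); the value 10 below is an arbitrary choice for illustration:

```csharp
using System.Net;

class Program
{
    static void Main()
    {
        // Raise the per-host HTTP connection cap before issuing any requests;
        // the framework default is 2 concurrent connections per endpoint.
        ServicePointManager.DefaultConnectionLimit = 10;
    }
}
```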
612,253
I'm using parallel LINQ, and I'm trying to download many URLs concurrently using essentially code like this: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel( threads ).ToDictionary( url => url, url => GetPage( url ) ); ``` Since downloading web pages is network bound rather than CPU bound, using more threads than my number of processors/cores is very beneficial, since most of the time in each thread is spent waiting for the network to catch up. However, judging from the fact that running the above with threads = 2 has the same performance as threads = 10 on my dual core machine, I'm thinking that the number of threads sent to AsParallel is limited to the number of cores. Is there any way to override this behavior? Is there a similar library available that doesn't have this limitation? (I've found such a library for Python, but need something that works in .Net)
2009/03/04
[ "https://Stackoverflow.com/questions/612253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/30529/" ]
Do the URLs refer to the same server? If so, it could be that you are hitting the HTTP connection limit instead of the threading limit. There's an easy way to tell - change your code to: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel(threads) .ToDictionary(url => url, url => { Console.WriteLine("On thread {0}", Thread.CurrentThread.ManagedThreadId); return GetPage(url); }); ``` EDIT: Hmm. I can't get `ToDictionary()` to parallelise *at all* with a bit of sample code. It works fine for `Select(url => GetPage(url))` but not `ToDictionary`. Will search around a bit. EDIT: Okay, I still can't get `ToDictionary` to parallelise, but you can work around that. Here's a short but complete program: ``` using System; using System.Collections.Generic; using System.Threading; using System.Linq; using System.Linq.Parallel; public class Test { static void Main() { var urls = Enumerable.Range(0, 100).Select(i => i.ToString()); int threads = 10; Dictionary<string, string> results = urls.AsParallel(threads) .Select(url => new { Url=url, Page=GetPage(url) }) .ToDictionary(x => x.Url, x => x.Page); } static string GetPage(string x) { Console.WriteLine("On thread {0} getting {1}", Thread.CurrentThread.ManagedThreadId, x); Thread.Sleep(2000); return x; } } ``` So, how many threads does this use? 5. Why? Goodness knows. I've got 2 processors, so that's not it - and we've specified 10 threads, so that's not it. It still uses 5 even if I change `GetPage` to hammer the CPU. If you only need to use this for one particular task - and you don't mind slightly smelly code - you might be best off implementing it yourself, to be honest.
I think there are already good answers to the question, but I'd like to make one important point. Using PLINQ for tasks that are not CPU bound is in principle wrong design. Not to say that it won't work - it will, but using multiple threads when it is unnecessary can cause trouble. Unfortunately, there is no good way to solve this problem in C#. In F# you could use asynchronous workflows that run in parallel, but don't block the thread when performing asynchronous calls (under the covers, this uses `BeginOperation` and `EndOperation` methods). You can find more information here: * [Concurrency in F# – Part I – The Asynchronous Workflow](http://strangelights.com/blog/archive/2007/09/29/1597.aspx) The same idea can to some extent be used in C#, but it looks a bit weird (though it is more efficient). I wrote an article about that, and there is also a library that should be slightly more evolved than my original idea: * [Asynchronous Programming in C# using Iterators](http://tomasp.net/blog/csharp-async.aspx) * [EasyAsync library](http://www.codeplex.com/EasyAsync)
612,253
I'm using parallel LINQ, and I'm trying to download many URLs concurrently using essentially code like this: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel( threads ).ToDictionary( url => url, url => GetPage( url ) ); ``` Since downloading web pages is network bound rather than CPU bound, using more threads than my number of processors/cores is very beneficial, since most of the time in each thread is spent waiting for the network to catch up. However, judging from the fact that running the above with threads = 2 has the same performance as threads = 10 on my dual core machine, I'm thinking that the number of threads sent to AsParallel is limited to the number of cores. Is there any way to override this behavior? Is there a similar library available that doesn't have this limitation? (I've found such a library for Python, but need something that works in .Net)
2009/03/04
[ "https://Stackoverflow.com/questions/612253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/30529/" ]
By default, .Net has a limit of 2 concurrent connections to an end service point (IP:port). That's why you would not see a difference if all URLs point to one and the same server. It can be controlled using the [ServicePointManager.DefaultPersistentConnectionLimit](http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.defaultpersistentconnectionlimit.aspx) property.
Monitor your network traffic. If the URLs are from the same domain it may be limiting the bandwidth. More connections might not actually provide any speed-up.
612,253
I'm using parallel LINQ, and I'm trying to download many URLs concurrently using essentially code like this: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel( threads ).ToDictionary( url => url, url => GetPage( url ) ); ``` Since downloading web pages is network bound rather than CPU bound, using more threads than my number of processors/cores is very beneficial, since most of the time in each thread is spent waiting for the network to catch up. However, judging from the fact that running the above with threads = 2 has the same performance as threads = 10 on my dual core machine, I'm thinking that the number of threads sent to AsParallel is limited to the number of cores. Is there any way to override this behavior? Is there a similar library available that doesn't have this limitation? (I've found such a library for Python, but need something that works in .Net)
2009/03/04
[ "https://Stackoverflow.com/questions/612253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/30529/" ]
I think there are already good answers to the question, but I'd like to make one important point. Using PLINQ for tasks that are not CPU bound is in principle wrong design. Not to say that it won't work - it will, but using multiple threads when it is unnecessary can cause trouble. Unfortunately, there is no good way to solve this problem in C#. In F# you could use asynchronous workflows that run in parallel, but don't block the thread when performing asynchronous calls (under the covers, this uses `BeginOperation` and `EndOperation` methods). You can find more information here: * [Concurrency in F# – Part I – The Asynchronous Workflow](http://strangelights.com/blog/archive/2007/09/29/1597.aspx) The same idea can to some extent be used in C#, but it looks a bit weird (though it is more efficient). I wrote an article about that, and there is also a library that should be slightly more evolved than my original idea: * [Asynchronous Programming in C# using Iterators](http://tomasp.net/blog/csharp-async.aspx) * [EasyAsync library](http://www.codeplex.com/EasyAsync)
Monitor your network traffic. If the URLs are from the same domain it may be limiting the bandwidth. More connections might not actually provide any speed-up.
612,253
I'm using parallel LINQ, and I'm trying to download many URLs concurrently using essentially code like this: ``` int threads = 10; Dictionary<string, string> results = urls.AsParallel( threads ).ToDictionary( url => url, url => GetPage( url ) ); ``` Since downloading web pages is network bound rather than CPU bound, using more threads than my number of processors/cores is very beneficial, since most of the time in each thread is spent waiting for the network to catch up. However, judging from the fact that running the above with threads = 2 has the same performance as threads = 10 on my dual core machine, I'm thinking that the number of threads sent to AsParallel is limited to the number of cores. Is there any way to override this behavior? Is there a similar library available that doesn't have this limitation? (I've found such a library for Python, but need something that works in .Net)
2009/03/04
[ "https://Stackoverflow.com/questions/612253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/30529/" ]
By default, .Net has a limit of 2 concurrent connections to an end service point (IP:port). That's why you would not see a difference if all URLs point to one and the same server. It can be controlled using the [ServicePointManager.DefaultPersistentConnectionLimit](http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.defaultpersistentconnectionlimit.aspx) property.
I think there are already good answers to the question, but I'd like to make one important point. Using PLINQ for tasks that are not CPU bound is in principle wrong design. Not to say that it won't work - it will, but using multiple threads when it is unnecessary can cause trouble. Unfortunately, there is no good way to solve this problem in C#. In F# you could use asynchronous workflows that run in parallel, but don't block the thread when performing asynchronous calls (under the covers, this uses `BeginOperation` and `EndOperation` methods). You can find more information here: * [Concurrency in F# – Part I – The Asynchronous Workflow](http://strangelights.com/blog/archive/2007/09/29/1597.aspx) The same idea can to some extent be used in C#, but it looks a bit weird (though it is more efficient). I wrote an article about that, and there is also a library that should be slightly more evolved than my original idea: * [Asynchronous Programming in C# using Iterators](http://tomasp.net/blog/csharp-async.aspx) * [EasyAsync library](http://www.codeplex.com/EasyAsync)
20,424,426
I have recently moved from Ubuntu to Mac OS X, and my first task was to bring my vim with me. I downloaded the source from vim.org and compiled it with gcc. (I'll put the version output at the bottom of my post.) I added pathogen.vim to the ~/.vim/autoload directory. But when I add this code to ~/.vim/vimrc: ``` execute pathogen#infect() ``` I get errors when trying to start vim; here is the error output: ``` Error detected while processing /Users/jack/.vim/vimrc: line 3: E117: Unknown function: pathogen#infect E15: Invalid expression: pathogen#infect() Press ENTER or type command to continue ``` First I thought perhaps vim did not load pathogen.vim, but :scriptnames showed it did load! ``` 1: ~/.vim/vimrc 2: ~/.vim/bundle/vim-pathogen/autoload/pathogen.vim ``` After I ran :function, something caught my attention: there is an "abort" after the infect function. I googled around, but that did not solve my problem either: ``` function pathogen#legacyjoin(...) abort function pathogen#runtime_append_all_bundles(...) abort function pathogen#surround(path) abort function <SNR>2_Findcomplete(A, L, P) function pathogen#uniq(list) abort function pathogen#incubate(...) abort function pathogen#glob(pattern) abort function <SNR>2_warn(msg) function pathogen#runtime_findfile(file, count) abort function pathogen#separator() abort function pathogen#runtime_prepend_subdirectories(path) function pathogen#glob_directories(pattern) abort function pathogen#infect(...) abort function pathogen#is_disabled(path) function pathogen#join(...) abort function pathogen#cycle_filetype() function pathogen#split(path) abort function <SNR>2_find(count, cmd, file, lcd) function pathogen#fnameescape(string) abort function pathogen#execute(...) abort function pathogen#helptags() abort ``` Can anyone help point out what I should do to solve this problem? Here is the output of "vim --version": ``` JacktekiMac-Pro:.vim$ vim --version VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Dec 6 2013 17:01:30) MacOS X (unix) version Huge version without GUI. Features included (+) or not (-): +arabic +file_in_path +mouse_sgr +tag_binary +autocmd +find_in_path -mouse_sysmouse +tag_old_static -balloon_eval +float +mouse_urxvt -tag_any_white -browse +folding +mouse_xterm -tcl ++builtin_terms -footer +multi_byte +terminfo +byte_offset +fork() +multi_lang +termresponse +cindent -gettext -mzscheme +textobjects -clientserver -hangul_input +netbeans_intg +title +clipboard +iconv +path_extra -toolbar +cmdline_compl +insert_expand -perl +user_commands +cmdline_hist +jumplist +persistent_undo +vertsplit +cmdline_info +keymap +postscript +virtualedit +comments +langmap +printer +visual +conceal +libcall +profile +visualextra +cryptv +linebreak +python/dyn +viminfo -cscope +lispindent -python3 +vreplace +cursorbind +listcmds +quickfix +wildignore +cursorshape +localmap +reltime +wildmenu +dialog_con -lua +rightleft +windows +diff +menu -ruby +writebackup +digraphs +mksession +scrollbind -X11 -dnd +modify_fname +signs -xfontset -ebcdic +mouse +smartindent -xim +emacs_tags -mouseshape -sniff -xsmp +eval +mouse_dec +startuptime -xterm_clipboard +ex_extra -mouse_gpm +statusline -xterm_save +extra_search -mouse_jsbterm -sun_workshop +farsi +mouse_netterm +syntax system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" 2nd user vimrc file: "~/.vim/vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/local/share/vim" Compilation: gcc -c -I.
-Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -no-cpp-precomp -O2 -fno-strength-reduce -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 Linking: gcc -o vim -lm -lncurses -liconv -framework Cocoa ```
2013/12/06
[ "https://Stackoverflow.com/questions/20424426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1914683/" ]
I found the problem. system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" 2nd user vimrc file: "~/.vim/vimrc" user exrc file: "$HOME/.exrc" I had set $VIM to ~/.vim, which is the same directory as the 2nd user vimrc file, so the vimrc file was loaded twice. After I changed $VIM to /etc/vim, everything worked.
I had a similar problem and found that I had not created the ~/.vim directory correctly. I had created it in the root by changing directory there and typing mkdir /.vim, but for some reason it was not working. Then I deleted this folder, did mkdir ~/.vim, and was able to install and use pathogen.
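For anyone following along, the sequence below is roughly what pathogen's README suggests for a fresh setup (the download URL is taken from that README — verify it before use):

```sh
mkdir -p ~/.vim/autoload ~/.vim/bundle
curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim
```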
33,512,243
I am trying to understand what the better design choice is when we have functions in a class that do a bunch of things and should either return a string or raise a custom exception when a particular check fails. Example: Suppose I have a class like: ``` # Division only for +ve numbers class DivisionError(Exception): pass class Division(object): def __init__(self, divisor, dividend): self.divisor = divisor self.dividend = dividend def divide(self): if self.divisor<0: #return "-ve_divisor_error" or #raise DivisionError.divisorError if self.dividend<0: #return "-ve_dividend_error" or #raise DivisionError.dividendError return self.dividend/self.divisor ``` 1. Which is better: to return a custom string or to raise an exception, especially when writing a Python library? 2. Do we need to write separate classes for all custom exceptions that we raise, or is there a way to have an enum of some kind on a single custom exception class?
2015/11/04
[ "https://Stackoverflow.com/questions/33512243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1982483/" ]
Your problem was due to your adding the JScrollPane and the JTextArea both to the `thePanel` JPanel, and so you see both: a JTextArea **without** JScrollPanes and an empty JScrollPane. * Don't add the textArea itself both to the JScrollPane and to a JPanel, since you can only add a component to **one** container. Instead, add it to one component, the JScrollPane (actually you're adding it to its viewport view, but if you pass it into the JScrollPane's constructor, then you're doing this), and then add that JScrollPane to something else. * Also, **NEVER** set a text component's preferred size. You constrain the JTextArea so that it will never expand as more text is added, and so you'll never see scrollbars, and text beyond this size will be hidden. Set the visible columns and rows instead. e.g., `textArea1 = new JTextArea(rows, columns);` Note that this doesn't make much sense: ``` thePanel.setLayout(null); thePanel.setLayout(new FlowLayout(FlowLayout.LEFT)); ``` I'm not sure what you are trying to do here since 1) you want to set a container's layout only once, and 2) in general you will want to avoid use of `null` layouts. For example: ``` import java.awt.BorderLayout; import javax.swing.*; public class MyProgram extends JPanel { private static final int T_FIELD_COLS = 20; private static final int TXT_AREA_ROWS = 15; private static final int TXT_AREA_COLS = 20; private JButton button1 = new JButton("Button 1"); private JButton button2 = new JButton("Button 2"); private JTextField textField = new JTextField(T_FIELD_COLS); private JTextArea textArea = new JTextArea(TXT_AREA_ROWS, TXT_AREA_COLS); public MyProgram() { // Create a JPanel to hold your top line of components JPanel topPanel = new JPanel(); int gap = 3; topPanel.setBorder(BorderFactory.createEmptyBorder(gap, gap, gap, gap)); // set this JPanel's layout. Here I use BoxLayout. topPanel.setLayout(new BoxLayout(topPanel, BoxLayout.LINE_AXIS)); topPanel.add(button1); topPanel.add(Box.createHorizontalStrut(gap)); topPanel.add(textField); topPanel.add(Box.createHorizontalStrut(gap)); topPanel.add(button2); // so the JTextArea will wrap words textArea.setLineWrap(true); textArea.setWrapStyleWord(true); // add the JTextArea to the JScrollPane's viewport: JScrollPane scrollPane = new JScrollPane(textArea); scrollPane.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS); // set the layout of the main JPanel. setLayout(new BorderLayout()); add(topPanel, BorderLayout.PAGE_START); add(scrollPane, BorderLayout.CENTER); } private static void createAndShowGui() { MyProgram mainPanel = new MyProgram(); JFrame frame = new JFrame("My Program"); frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); frame.getContentPane().add(mainPanel); frame.pack(); // don't set the JFrame's size, preferred size or bounds frame.setLocationByPlatform(true); frame.setVisible(true); } public static void main(String[] args) { // start your program on the event thread SwingUtilities.invokeLater(new Runnable() { public void run() { createAndShowGui(); } }); } } ```
Try this: ``` textArea1 = new JTextArea(); textArea1.setColumns(20); textArea1.setRows(5); scroller.setViewportView(textArea1); ```
67,503,532
When I try to run my localhost server I get the following error: `FileNotFoundError: [Errno 2] No such file or directory: '/static/CSV/ExtractedTweets.csv'` This error is due to the line `with open(staticfiles_storage.url('/CSV/ExtractedTweets.csv'), 'r', newline='', encoding="utf8") as csvfile:` This line of code can be found in a custom python module within my app folder. I have copies of /static/CSV/ExtractedTweets.csv in my project root folder, my app folder, and in the folder enclosing my project root and app folders. I also have an additional copy of ExtractedTweets.csv within my app folder. settings.py ``` STATIC_URL = '/static/' STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage' ``` urls.py ``` from django.conf import settings urlpatterns = [ ... ] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) ``` I have placed the file in all possible locations, yet Django cannot seem to find it. Interestingly, my templates have no problem finding my static CSS files. If anyone has any idea how to resolve this error, please let me know.
2021/05/12
[ "https://Stackoverflow.com/questions/67503532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9403355/" ]
I never found a solution for getting the static file path; however, the `find()` function seems to be a workable alternative. custommodule.py `from django.contrib.staticfiles.finders import find` `with open(find('CSV/ExtractedTweets.csv'), 'r', newline='', encoding="utf8") as csvfile:`
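Putting that together, a minimal sketch of how `find()` might be combined with the `csv` module — the CSV path matches the question, but the helper name and column handling are assumptions:

```python
import csv

from django.contrib.staticfiles.finders import find


def load_tweets():
    # find() resolves the file through Django's static file finders and
    # returns an absolute filesystem path (or None if nothing matches).
    path = find('CSV/ExtractedTweets.csv')
    with open(path, 'r', newline='', encoding='utf8') as csvfile:
        return list(csv.reader(csvfile))
```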
If you are not looking to deploy this project, you can add: ``` from django.conf import settings urlpatterns = [ path(....), path(....), ]+ static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) ``` or you can try adding: ``` STATICFILES_DIRS = [ BASE_DIR / "static", ] ``` to your settings.py
64,483,271
I'm trying to install packages through pip, but every package I try to install fails with ``` ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy ``` When running the same command with `-vvv`, like `pip install numpy -vvv`, it gives the following output. ``` Using pip 20.2.1 from c:\program files\python38\lib\site-packages\pip (python 3.8) Defaulting to user installation because normal site-packages is not writeable Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-ephem-wheel-cache-bo4luxtk Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a Initialized build tracking at C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a Created build tracker: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a Entered build tracker: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-install-2se6s0ld 1 location(s) to search for versions of numpy: * https://pypi.org/simple/numpy/ Fetching project page and analyzing links: https://pypi.org/simple/numpy/ Getting page https://pypi.org/simple/numpy/ Found index url https://pypi.org/simple Looking up "https://pypi.org/simple/numpy/" in the cache Request header has "max_age" as 0, cache bypassed Starting new HTTPS connection (1): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=4, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (2): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=3, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (3): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (4): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (5): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (6): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Could not fetch URL https://pypi.org/simple/numpy/: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/numpy/ (Caused by ResponseError('too many 500 error responses')) - skipping Given no hashes to check 0 links for project 'numpy': discarding no candidates ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy Exception information: Traceback (most recent call last): File "c:\program files\python38\lib\site-packages\pip\_internal\cli\base_command.py", line 216, in _main status = self.run(options, args) File "c:\program
files\python38\lib\site-packages\pip\_internal\cli\req_command.py", line 182, in wrapper return func(self, options, args) File "c:\program files\python38\lib\site-packages\pip\_internal\commands\install.py", line 324, in run requirement_set = resolver.resolve( File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 183, in resolve discovered_reqs.extend(self._resolve_one(requirement_set, req)) File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 388, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 339, in _get_abstract_dist_for self._populate_link(req) File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 305, in _populate_link req.link = self._find_requirement_link(req) File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 270, in _find_requirement_link best_candidate = self.finder.find_requirement(req, upgrade) File "c:\program files\python38\lib\site-packages\pip\_internal\index\package_finder.py", line 926, in find_requirement raise DistributionNotFound( pip._internal.exceptions.DistributionNotFound: No matching distribution found for numpy 1 location(s) to search for versions of pip: * https://pypi.org/simple/pip/ Fetching project page and analyzing links: https://pypi.org/simple/pip/ Getting page https://pypi.org/simple/pip/ Found index url https://pypi.org/simple Looking up "https://pypi.org/simple/pip/" in the cache Request header has "max_age" as 0, cache bypassed Starting new HTTPS connection (1): pypi.org:443 https://pypi.org:443 "GET /simple/pip/ HTTP/1.1" 500 655 Could not fetch URL https://pypi.org/simple/pip/: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by ResponseError('too many 500 error responses')) - skipping Given no hashes to check 0 links for project 'pip': discarding no candidates Removed build tracker: 'C:\\Users\\d\\AppData\\Local\\Temp\\pip-req-tracker-8z32xx1a' ``` My pip.ini file looks like ``` [global] trusted-host = pypi.python.org pypi.org files.pythonhosted.org ``` How can the 'too many 500 error responses' mentioned in the error be fixed? Edit: Reinstalling Python has not fixed the issue and I am using Python 3.8.6. I've also tried restarting my computer.
2020/10/22
[ "https://Stackoverflow.com/questions/64483271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4601149/" ]
The main issue is the `alpha` argument together with `geom_line`. If you want the keys to show up as lines, you can set alpha to 1 in the legend via `guides(color = guide_legend(override.aes = list(alpha = c(1, 1, 1, 1))))`. If you want colored rectangles for the keys, this can be achieved by adding `key_glyph = "rect"` to your `geom_line` layers. Using the `economics` dataset as example data: ```r library(ggplot2) ggplot(economics, aes(x=date)) + geom_line(aes(y=`psavert`/100, color="Less Than HS Diploma"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`uempmed`/100, color="HS Diploma"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`psavert`/10, color="Some College / Associate's Degree"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`uempmed`/10, color="Bachelor's Degree and Higher"), size=2, alpha=0.5, linetype=1) + scale_color_manual(name="Educational Attainment", values = c("Less Than HS Diploma"="deepskyblue", "HS Diploma" = "firebrick1", "Some College / Associate's Degree"="mediumpurple", "Bachelor's Degree and Higher"="springgreen4")) + guides(color = guide_legend(override.aes = list(alpha = c(1, 1, 1, 1)))) + ggtitle("Unemployment Rate by Educational Attainment") + xlab("Time") + ylab("Unemployment Rate") + scale_y_continuous(labels = scales::percent) + theme(plot.title = element_text(hjust = 0.5), legend.position="bottom") ``` ![](https://i.imgur.com/Ltxkim2.png) And with `key_glyph="rect"`: ```r library(ggplot2) ggplot(economics, aes(x=date)) + geom_line(aes(y=`psavert`/100, color="Less Than HS Diploma"), size=2,alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`uempmed`/100, color="HS Diploma"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`psavert`/10, color="Some College / Associate's Degree"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`uempmed`/10, color="Bachelor's Degree and Higher"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + scale_color_manual(name="Educational Attainment", values = c("Less Than HS Diploma"="deepskyblue", "HS Diploma" = "firebrick1", "Some College / Associate's Degree"="mediumpurple", "Bachelor's Degree and Higher"="springgreen4")) + ggtitle("Unemployment Rate by Educational Attainment") + xlab("Time") + ylab("Unemployment Rate") + scale_y_continuous(labels = scales::percent) + theme(plot.title = element_text(hjust = 0.5), legend.position="bottom") ``` ![](https://i.imgur.com/9gjMBAX.png) Created on 2020-10-22 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)
The `values` argument of `scale_color_manual` should contain color names rather than the line names, which you don't need to pass. Example: ``` scale_color_manual(name="Educational Attainment", values = c("red","yellow","white",...)) ```
64,483,271
I'm trying to install packages through pip, but every package I try to install fails with ``` ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy ``` When running the same command with `-vvv`, like `pip install numpy -vvv`, it gives the following output. ``` Using pip 20.2.1 from c:\program files\python38\lib\site-packages\pip (python 3.8) Defaulting to user installation because normal site-packages is not writeable Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-ephem-wheel-cache-bo4luxtk Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a Initialized build tracking at C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a Created build tracker: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a Entered build tracker: C:\Users\d\AppData\Local\Temp\pip-req-tracker-8z32xx1a Created temporary directory: C:\Users\d\AppData\Local\Temp\pip-install-2se6s0ld 1 location(s) to search for versions of numpy: * https://pypi.org/simple/numpy/ Fetching project page and analyzing links: https://pypi.org/simple/numpy/ Getting page https://pypi.org/simple/numpy/ Found index url https://pypi.org/simple Looking up "https://pypi.org/simple/numpy/" in the cache Request header has "max_age" as 0, cache bypassed Starting new HTTPS connection (1): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=4, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (2): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=3, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (3): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=2, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (4): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=1, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (5): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Incremented Retry for (url='/simple/numpy/'): Retry(total=0, connect=None, read=None, redirect=None, status=None) Retry: /simple/numpy/ Resetting dropped connection: pypi.org Starting new HTTPS connection (6): pypi.org:443 https://pypi.org:443 "GET /simple/numpy/ HTTP/1.1" 500 655 Could not fetch URL https://pypi.org/simple/numpy/: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/numpy/ (Caused by ResponseError('too many 500 error responses')) - skipping Given no hashes to check 0 links for project 'numpy': discarding no candidates ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy Exception information: Traceback (most recent call last): File "c:\program files\python38\lib\site-packages\pip\_internal\cli\base_command.py", line 216, in _main status = self.run(options, args) File "c:\program
files\python38\lib\site-packages\pip\_internal\cli\req_command.py", line 182, in wrapper return func(self, options, args) File "c:\program files\python38\lib\site-packages\pip\_internal\commands\install.py", line 324, in run requirement_set = resolver.resolve( File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 183, in resolve discovered_reqs.extend(self._resolve_one(requirement_set, req)) File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 388, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 339, in _get_abstract_dist_for self._populate_link(req) File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 305, in _populate_link req.link = self._find_requirement_link(req) File "c:\program files\python38\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 270, in _find_requirement_link best_candidate = self.finder.find_requirement(req, upgrade) File "c:\program files\python38\lib\site-packages\pip\_internal\index\package_finder.py", line 926, in find_requirement raise DistributionNotFound( pip._internal.exceptions.DistributionNotFound: No matching distribution found for numpy 1 location(s) to search for versions of pip: * https://pypi.org/simple/pip/ Fetching project page and analyzing links: https://pypi.org/simple/pip/ Getting page https://pypi.org/simple/pip/ Found index url https://pypi.org/simple Looking up "https://pypi.org/simple/pip/" in the cache Request header has "max_age" as 0, cache bypassed Starting new HTTPS connection (1): pypi.org:443 https://pypi.org:443 "GET /simple/pip/ HTTP/1.1" 500 655 Could not fetch URL https://pypi.org/simple/pip/: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by ResponseError('too many 500 error responses')) - skipping Given no hashes to check 0 links for project 'pip': discarding no candidates Removed build tracker: 'C:\\Users\\d\\AppData\\Local\\Temp\\pip-req-tracker-8z32xx1a' ``` My pip.ini file looks like ``` [global] trusted-host = pypi.python.org pypi.org files.pythonhosted.org ``` How can the 'too many 500 error responses' mentioned in the error be fixed? Edit: Reinstalling Python has not fixed the issue and I am using Python 3.8.6. I've also tried restarting my computer.
2020/10/22
[ "https://Stackoverflow.com/questions/64483271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4601149/" ]
The main issue is the `alpha` argument together with `geom_line`. If you want the keys to show up as lines, you can set alpha to 1 in the legend via `guides(color = guide_legend(override.aes = list(alpha = c(1, 1, 1, 1))))`. If you want colored rectangles for the keys, this can be achieved by adding `key_glyph = "rect"` to your `geom_line` layers. Using the `economics` dataset as example data: ```r library(ggplot2) ggplot(economics, aes(x=date)) + geom_line(aes(y=`psavert`/100, color="Less Than HS Diploma"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`uempmed`/100, color="HS Diploma"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`psavert`/10, color="Some College / Associate's Degree"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`uempmed`/10, color="Bachelor's Degree and Higher"), size=2, alpha=0.5, linetype=1) + scale_color_manual(name="Educational Attainment", values = c("Less Than HS Diploma"="deepskyblue", "HS Diploma" = "firebrick1", "Some College / Associate's Degree"="mediumpurple", "Bachelor's Degree and Higher"="springgreen4")) + guides(color = guide_legend(override.aes = list(alpha = c(1, 1, 1, 1)))) + ggtitle("Unemployment Rate by Educational Attainment") + xlab("Time") + ylab("Unemployment Rate") + scale_y_continuous(labels = scales::percent) + theme(plot.title = element_text(hjust = 0.5), legend.position="bottom") ``` ![](https://i.imgur.com/Ltxkim2.png) And with `key_glyph="rect"`: ```r library(ggplot2) ggplot(economics, aes(x=date)) + geom_line(aes(y=`psavert`/100, color="Less Than HS Diploma"), size=2,alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`uempmed`/100, color="HS Diploma"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`psavert`/10, color="Some College / Associate's Degree"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + geom_line(aes(y=`uempmed`/10, color="Bachelor's Degree and Higher"), size=2, alpha=0.5, linetype=1, key_glyph = "rect") + scale_color_manual(name="Educational Attainment", values = c("Less Than HS Diploma"="deepskyblue", "HS Diploma" = "firebrick1", "Some College / Associate's Degree"="mediumpurple", "Bachelor's Degree and Higher"="springgreen4")) + ggtitle("Unemployment Rate by Educational Attainment") + xlab("Time") + ylab("Unemployment Rate") + scale_y_continuous(labels = scales::percent) + theme(plot.title = element_text(hjust = 0.5), legend.position="bottom") ``` ![](https://i.imgur.com/9gjMBAX.png) Created on 2020-10-22 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)
This should work (no output included, as no data was shared). If you want the legend filled, you must also map the `fill` option inside `aes()`. After that you can scale the colors for filling with `scale_fill_manual()` and use `labs()` to give them a common name. Here is the code: ``` library(ggplot2) #Code ggplot(df, aes(x=Month)) + geom_line(aes(y=`Less than a high school diploma`/100, color="Less Than HS Diploma", fill="Less Than HS Diploma"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`High school graduates, no college`/100, color="HS Diploma", fill="HS Diploma"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`Some college or associate degree`/100, color="Some College / Associate's Degree", fill="Some College / Associate's Degree"), size=2, alpha=0.5, linetype=1) + geom_line(aes(y=`Bachelor's degree and higher`/100, color="Bachelor's Degree and Higher", fill="Bachelor's Degree and Higher"), size=2, alpha=0.5, linetype=1) + scale_color_manual(name="Educational Attainment", values = c("Less Than HS Diploma"="deepskyblue", "HS Diploma" = "firebrick1", "Some College / Associate's Degree"="mediumpurple", "Bachelor's Degree and Higher"="springgreen4")) + scale_fill_manual(name="Educational Attainment", values = c("Less Than HS Diploma"="deepskyblue", "HS Diploma" = "firebrick1", "Some College / Associate's Degree"="mediumpurple", "Bachelor's Degree and Higher"="springgreen4"))+ ggtitle("Unemployment Rate by Educational Attainment") + xlab("Time") + ylab("Unemployment Rate") + scale_y_continuous(labels = scales::percent) + theme(plot.title = element_text(hjust = 0.5), legend.position="bottom")+ labs(color='Class',fill='Class') ```
32,400,048
I am trying to edit a .reg file in Python to replace strings in the file. I can do this for any other file type, such as .txt. Here is the Python code: ``` with open ("C:/Users/UKa51070/Desktop/regFile.reg", "r") as myfile: data=myfile.read() print data ``` It returns an empty string.
2015/09/04
[ "https://Stackoverflow.com/questions/32400048", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3473280/" ]
I am not sure why you are not seeing any output, perhaps you could try: `print len(data)` Depending on your version of Windows, your `REG` file will be saved using UTF-16 encoding, unless you specifically export it using the `Win9x/NT4` format. You could try using the following script: ``` import codecs with codecs.open("C:/Users/UKa51070/Desktop/regFile.reg", encoding='utf-16') as myfile: data = myfile.read() print data ```
It's probably not a good idea to edit `.reg` files manually. My suggestion is to search for a Python package that handles it for you. I think the [\_winreg](https://docs.python.org/2/library/_winreg.html) Python built-in library is what you are looking for.
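As a minimal sketch of that suggestion, in the Python 2 style the question uses — the key path and value name below are hypothetical placeholders, not taken from the question's .reg file:

```python
import _winreg

# Read a value from the registry directly instead of parsing the .reg file
key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, r"Software\MyApp", 0, _winreg.KEY_READ)
value, value_type = _winreg.QueryValueEx(key, "SomeValue")
print value
_winreg.CloseKey(key)
```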
64,256,474
I have to deploy a Python project to an AWS Lambda function. When I create its zip package, it takes up around 80 MB (Lambda allows up to 50 MB). I also cannot upload it via S3 because the size of the uncompressed package is around 284 MB (the limit there is 250 MB). Any idea how to tackle this problem, or is there an alternative?
2020/10/08
[ "https://Stackoverflow.com/questions/64256474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9920934/" ]
Just include the jQuery, Popper.js, and Bootstrap JS CDNs and it will work. Note that jQuery must come first, then Popper.js, and then our JavaScript plugins. For more info click [here](https://getbootstrap.com/docs/4.5/getting-started/download/) ``` <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script> <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.1/dist/umd/popper.min.js" integrity="sha384-9/reFTGAW83EW2RDu2S0VKaIzap3H66lZH81PoYlFhbGU+6BZp6G7niu735Sk7lN" crossorigin="anonymous"></script> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script> ``` You can add these scripts in the **head** tag or at the bottom of the **body**.
You forgot to add the Bootstrap CDN or to link your Bootstrap JavaScript at the bottom of the body. Here: ``` <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script> ```
60,197,890
I'm new to Python and am running the command: > > pip install pysam > > > Which results in: ``` Collecting pysam Using cached https://files.pythonhosted.org/packages/25/7e/098753acbdac54ace0c6dc1f8a74b54c8028ab73fb027f6a4215487d1fea/pysam-0.15.4.tar.gz ERROR: Command errored out with exit status 1: command: 'c:\path\programs\python\python38\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\path\\Local\\Temp\\pip-install-qzuue1yz\\pysam\\setup.py'"'"'; __file__='"'"'C:\\path\\Temp\\pip-install-qzuue1yz\\pysam\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info Complete output (23 lines): # pysam: cython is available - using cythonize if necessary # pysam: htslib mode is shared # pysam: HTSLIB_CONFIGURE_OPTIONS=None '.' is not recognized as an internal or external command, operable program or batch file. '.' is not recognized as an internal or external command, operable program or batch file. File "<string>", line 1, in <module> File "C:\path\Local\Temp\pip-install-qzuue1yz\pysam\setup.py", line 241, in <module> htslib_make_options = run_make_print_config() File "C:\path\\Local\Temp\pip-install-qzuue1yz\pysam\setup.py", line 68, in run_make_print_config stdout = subprocess.check_output(["make", "-s", "print-config"]) File "c:\path\programs\python\python38\lib\subprocess.py", line 411, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "c:\path\programs\python\python38\lib\subprocess.py", line 489, in run File "c:\path\programs\python\python38\lib\subprocess.py", line 854, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "c:\path\programs\python\python38\lib\subprocess.py", line 1307, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified # pysam: htslib configure options: None ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ``` What is the problem here? Originally I got an error about cython not being installed, so I ran `pip install cython`, which completed without issue.
2020/02/12
[ "https://Stackoverflow.com/questions/60197890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1308743/" ]
There are many binary wheels [at PyPI](https://pypi.org/project/pysam/#files), but only for Linux and Mac OS X. [The package at bioconda](https://anaconda.org/bioconda/pysam) is also compiled only for Linux and OS X.

When you try to install pysam on Windows, `pip` downloads the source distribution `pysam-0.15.4.tar.gz`, unpacks it and runs `setup.py`.

pysam's `setup.py` [configures](https://github.com/pysam-developers/pysam/blob/c818db502b8f8334e7bf29060685114dd9af9530/setup.py#L221) the library `htslib` by [running](https://github.com/pysam-developers/pysam/blob/c818db502b8f8334e7bf29060685114dd9af9530/setup.py#L56) the script [`htslib/configure`](https://github.com/pysam-developers/pysam/blob/master/htslib/configure). This is a shell script; it cannot be run on Windows without a Unix emulation layer. Hence the error.

Bottom line: like many pieces of software related to genetics (I have some experience with software written in Python and Java), pysam seems to be usable only on Unix, preferably Linux or OS X.
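To see concretely why the build fails, here is a minimal sketch (not part of pysam; it just replays the call from the traceback in the question) showing what happens when `make` is not on the `PATH`, as is typical on Windows:

```python
import subprocess

# pysam's setup.py effectively runs this to query htslib's build settings.
# Without a Unix toolchain (make, sh) on PATH, Windows' CreateProcess cannot
# find the executable and raises the same WinError 2 seen in the pip log.
try:
    subprocess.check_output(["make", "-s", "print-config"])
except FileNotFoundError as exc:
    print("make not found:", exc)
```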
If you have Anaconda, try this: `conda install -c bioconda pysam`
57,921,006
I have a Flask application written in Python. On my page there are three images, but Flask only shows one of them. I could not figure out where the problem is. Here is my code.

HTML
====

```
<div class="col-xs-4">
 <img style="width:40%;padding:5px" src="static/tomato.png"/>
 <br>
 <button class="btn btn-warning"><a style="color:white;" href="http://127.0.0.1:5000/detect">Tomato Analysis</a></button>
 </div>
 <div class="col-xs-4">
 <img style="width:40%;padding:5px" src="static/grapes.png"/>
 <br>
 <button class="btn btn-warning"><a style="color:white;" href="http://127.0.0.1:5000/detect">Grape Analysis</a></button>
 </div>
```

PYTHON
======

```
@app.route("/main")
def index():
 return render_template('gui2.html')
```

It shows tomato.png, but it does not show grapes.png. What is the problem and how can I solve it? Also, I am using Electron.js. After running the Python script, I run `npm start`.

The error output is:
=======================

`GET /%7B%7B%20url_for('static',%20filename%20=%20'image/corn2.png')%20%7D%7D HTTP/1.1" 404 -`

Any help is appreciated... Thanks
2019/09/13
[ "https://Stackoverflow.com/questions/57921006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11697825/" ]
You can change the route `main` to `mainPage`. Try the code below:

```
@app.route("/mainPage")
def index():
    return render_template('gui2.html')
```
The 404 error message clearly tells you that the resource you are looking for was not found at the given location. Make sure that the file exists at the path you give. Since the tomato.png file is displayed correctly, simply make sure that the other files are in the same location as tomato.png.

Try opening the page in an incognito or private browser window.
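For instance, a quick sanity check run from the Flask app's working directory (file names taken from the question; adjust as needed):

```python
import os

# Verify that each image the template references actually exists under static/
for name in ('tomato.png', 'grapes.png'):
    path = os.path.join('static', name)
    print(path, '->', 'found' if os.path.exists(path) else 'MISSING')
```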
32,075,662
I'm facing a nearly-textbook diamond inheritance problem. The (rather artificial!) example below captures all its essential features: ``` # CAVEAT: error-checking omitted for simplicity class top(object): def __init__(self, matrix): self.matrix = matrix # matrix must be non-empty and rectangular! def foo(self): '''Sum all matrix entries.''' return sum([sum(row) for row in self.matrix]) class middle_0(top): def foo(self): '''Sum all matrix entries along (wrap-around) diagonal.''' matrix = self.matrix n = len(matrix[0]) return sum([row[i % n] for i, row in enumerate(matrix)]) class middle_1(top): def __init__(self, m, n): data = range(m * n) matrix = [[1 + data[i * n + j] for j in range(n)] for i in range(m)] super(middle_1, self).__init__(matrix) ``` In summary, classes `middle_0` and `middle_1` are both subclasses of class `top`, where `middle_0` overrides method `foo` and `middle_1` overrides method `__init__`. Basically, the classic diamond inheritance set up. The one elaboration on the basic pattern is that `middle_1.__init__` actually invokes the parent class's `__init__`. (The demo below shows these classes in action.) I want to define a class `bottom` that "gets"1 `foo` from `middle_0` and `__init__` from `middle_1`. What's the "pythonic way" to implement such a `bottom` class? --- Demo: ``` matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] print top(matrix).foo() # 45 print middle_0(matrix).foo() # 15 print middle_1(3, 3).foo() # 45 # print bottom(3, 3).foo() # 15 ``` --- 1I write "gets" instead of "inherits" because I suspect this problem can't be solved easily using standard Python inheritance.
2015/08/18
[ "https://Stackoverflow.com/questions/32075662", "https://Stackoverflow.com", "https://Stackoverflow.com/users/559827/" ]
`bottom` simply inherits from both; there is nothing specific about your classes that would make this case special: ``` class bottom(middle_0, middle_1): pass ``` Demo: ``` >>> class bottom(middle_0, middle_1): ... pass ... >>> bottom(3, 3).foo() 15 ``` This works as expected because Python arranges both `middle_0` and `middle_1` to be searched for methods before `top` is: ``` >>> bottom.__mro__ (<class '__main__.bottom'>, <class '__main__.middle_0'>, <class '__main__.middle_1'>, <class '__main__.top'>, <type 'object'>) ``` This shows the *Method Resolution Order* of the class; it is that order that is used to find methods. So `bottom.__init__` is found on `middle_1`, and `bottom.foo` is found on `middle_0`, as both are listed before `top`.
I think the goal of

> a class bottom that "gets"1 foo from middle\_0 and \_\_init\_\_ from middle\_1

can be achieved simply by

```
class bottom(middle_0, middle_1):
    pass
```
33,771,929
**Definition**:

> [Bag or Multiset](https://xlinux.nist.gov/dads/HTML/bag.html) is a set data structure which allows duplicate elements, provided the order of retrieval is not significant.

The Python documentation says that a [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) behaves as a Bag data structure. But I am confused about whether we can use a List or Tuple as an alternative.

One possible flaw I can see is that `removing` an element is not allowed in a Bag. Also, retrieving an element in a List or Tuple normally takes O(n) time, while a Bag can be implemented via hashing to allow constant-time removal.

**Question**: Can we use a List or Tuple as a Bag data structure?
2015/11/18
[ "https://Stackoverflow.com/questions/33771929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/867461/" ]
> > Can we use List or Tuple as a Bag data structure? > > > Yes. It would require some code to get the structure correct, and you'd likely want a list as they are mutable. But you can add duplicates to a list, count them and remove them.
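As a rough illustration of "some code to get the structure correct" (the class and method names here are made up, not from any standard library), a minimal Bag backed by a plain list might look like this:

```python
class ListBag:
    """A minimal Bag/Multiset sketch backed by a list: duplicates are
    allowed and retrieval order is not significant."""

    def __init__(self, items=()):
        self._items = list(items)

    def add(self, item):
        self._items.append(item)

    def count(self, item):
        # O(n) scan; a hash-based bag like collections.Counter does better
        return self._items.count(item)

    def __len__(self):
        return len(self._items)


bag = ListBag([1, 2, 2, 3])
bag.add(2)
print(bag.count(2))  # -> 3
print(len(bag))      # -> 5
```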
No. * The elements of a bag are unordered and non-unique. * The elements of a Counter are unordered and non-unique. * The elements of a set are unordered and unique. * The elements of a list (and tuple) are ordered and non-unique. A Counter behaves like a bag of m&m's. A list behaves like a pez dispenser - the order of its elements is significant. ``` > a = {1, 2, 3} > b = {1, 2, 3} > c = {1, 3, 2} > a == b True > a == c True > a = Counter((1, 2, 3)) > b = Counter((1, 2, 3)) > c = Counter((1, 3, 2)) > a == b True > a == c True > a = [1, 2, 3] > b = [1, 2, 3] > c = [1, 3, 2] > a == b True > a == c False ```
54,483,013
I am using a ScanSnap scanner which generates PDF-1.3 where it will auto-correct the orientation (rotate 0 or 180 degrees) of scanned documents when the PDF is viewed within Adobe Reader. OCR is done by the scanning software and I am assuming the orientation is determined then and encoded into the PDF.

Note that I know I can use Tesseract or other OCR tools to determine if rotation is needed, but I do not want to use them, as the scanner software seems to have already determined it and is telling PDF viewers whether rotation is needed (or not).

When I use image extraction tools (like xpdf pdfimages, Python libraries) they do not properly rotate JPEG images 180 degrees (if needed).

> NB: pdfimages extracts the raw image data from the PDF file, without performing any additional transforms. Any rotation, clipping, color inversion, etc. done by the PDF content stream is ignored.

I have scanned a document twice with rotation (0 degrees and 180 degrees). I cannot seem to reverse engineer what is telling Adobe/Foxit to rotate (or not) the image when viewing. I have looked at the PDF-1.3 specification doc, and compared the PDF binary data between the orientation-corrected and not-corrected files. I cannot determine what is correcting the orientation:

* No /Page/Rotate (defaults to 0) in PDF
* No EXIF orientation in JPEG
* I do not see any transformation matrix (cm operator) in PDF

In both cases the PDF binary looks like the following (stopped at the JPEG streamed data)

**UPDATED:** links to PDF files [rotated-180](http://s000.tinyupload.com/?file_id=03294969585737255560) [rotated-0](http://s000.tinyupload.com/?file_id=00344136391322927294)

```
%PDF-1.3
%âãÏÓ
1 0 obj
<</Metadata 20 0 R/Pages 2 0 R/Type/Catalog>>
endobj
2 0 obj
<</MediaBox[0.0 0.0 606.6 794.88]/Count 1/Type/Pages/Kids[4 0 R]>>
endobj
4 0 obj
<</Parent 2 0 R/Contents 18 0 R/PieceInfo<</PSL<</Private<</V(3.2.9)>>/LastModified(D:20190201125524-00'00')>>>>/MediaBox[0.0 0.0 606.6 794.88]/Resources<</XObject<</Im0 5 0 R>>/Font<</C0_0 11 0 R/T1_0 16 0 R>>/ProcSet[/PDF/Text/ImageC]>>/Type/Page/LastModified(D:20190201085524-04'00')>>
endobj
5 0 obj
<</Subtype/Image/Length 433576/Filter/DCTDecode/Name/X/BitsPerComponent 8/ColorSpace/DeviceRGB/Width 1685/Height 2208/Type/XObject>>stream
```

**Does anyone know how PDF viewers know to rotate an image 180 degrees (or not)? Is it metadata within the PDF or JPEG image which can be extracted?**

Do Adobe and other viewers do something dynamically on opening a document to determine if orientation correction is needed? I'm no expert on the PDF specification, but I was hoping someone may have already found a solution to this problem.
2019/02/01
[ "https://Stackoverflow.com/questions/54483013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/297500/" ]
The image **Im0** in the resources of the page in "internetfile-180.pdf" is not rotated: [![internetfile-180.pdf image](https://i.stack.imgur.com/DS43A.jpg?s=256)](https://i.stack.imgur.com/DS43A.jpg) But the image **Im0** in the resources of the page in "internetfile.pdf" is rotated: [![enter image description here](https://i.stack.imgur.com/LXGif.jpg?s=256)](https://i.stack.imgur.com/LXGif.jpg) In the viewer both look upright, so in "internetfile.pdf" a technique must be used that rotates the image. There are two major techniques for this: * Setting the **Rotate** property of the page accordingly, i.e. here to 180. * Applying a rotation transformation to the current transformation matrix in the content stream of the page. Let's look at the page dictionary first, a bit pretty-printed: ``` 4 0 obj << /Parent 2 0 R /Contents 13 0 R /PieceInfo << /PSL << /Private <</V (3.2.9)>> /LastModified (D:20190204142537-00'00') >> >> /MediaBox [0.0 0.0 608.64 792.24] /Resources << /XObject <</Im0 5 0 R>> /Font <</T1_0 11 0 R>> /ProcSet [/PDF /Text /ImageC] >> /Type /Page /LastModified (D:20190204102537-04'00') >> ``` As we see, there is no **Rotate** entry present. Thus, we'll have to look at the page content stream. According to the page dictionary it's in object 13, generation 0. That object is a stream object with deflated stream data: ``` 13 0 obj << /Length 4014 /Filter /FlateDecode >> stream H‰”WÛŽÛF}Ÿ¯Ð[lÀÓÓ÷˾e½ [...] ÿüòÛÿ ´ß endstream endobj ``` After inflating the stream data, they start like this: ``` q -608.3999939 0 0 -792.9600067 608.3999939 792.9600067 cm /Im0 Do Q [...] ``` And this is indeed an application of the second technique, the **cm** instruction applies the rotation and the **Do** instruction paints the image with the rotation active! In detail, the **cm** instruction applies the affine transformation represented by the matrix ``` -608.3999939 0 0 0 -792.9600067 0 608.3999939 792.9600067 1 ``` In other words: ``` x' = -608.3999939 * x + 608.3999939 y' = -792.9600067 * y + 792.9600067 ``` This transformation actually is a combination of a rotation by 180°, a horizontal scaling by 608.3999939 and a vertical scaling by 792.9600067, and a translation by 608.3999939 horizontally and 792.9600067 vertically. The **Do** instruction now paints the image. Here one needs to know that this instruction first scales the image to fit into the unit 1×1 square at the origin and then applies the current transformation matrix. Thus, the image is drawn rotated by 180°, effectively filling the whole 608.64×792.24 **MediaBox** of the page.
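As a quick numeric sanity check (plain Python, with the six matrix operands copied from the content stream above), applying the transformation to the unit square's corners makes the 180° flip visible:

```python
# cm operands from "-608.3999939 0 0 -792.9600067 608.3999939 792.9600067 cm"
a, b, c, d, e, f = -608.3999939, 0, 0, -792.9600067, 608.3999939, 792.9600067

def apply_cm(x, y):
    # PDF maps (x, y) -> (a*x + c*y + e, b*x + d*y + f)
    return a * x + c * y + e, b * x + d * y + f

print(apply_cm(0, 0))  # (608.4..., 792.96...): the image's lower-left corner lands top-right
print(apply_cm(1, 1))  # (0.0, 0.0): the image's upper-right corner lands at the page origin
```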
**mkl** answered the question correctly, doing all the hard work of decoding the PDF for me. I thought I would add my Python (PyPDF2) code to search for the rotation condition he found, in case it helps someone else.

```py
import re
import PyPDF2

input1 = PyPDF2.PdfFileReader(open(filepath, "rb"))
totalPages = input1.getNumPages()

for pgNum in range(0, totalPages):
    page0 = input1.getPage(pgNum)

    # Let's look to see if the page contains a transformation matrix to rotate it 180 degrees
    # (ScanSnap iX500 encoded the PDF with a cm transformation matrix to rotate 180 degrees in PDF viewers)
    # @see https://stackoverflow.com/questions/54483013/how-to-extract-rotation-transformation-information-for-pdf-extracted-images-i-e
    # @see 'PDF 1.3 Reference Manual March 11, 1999' Section 3.10 Transformation matrices, which is applied to the scanned image
    # [[a b 0]
    #  [c d 0]
    #  [e f 1]]
    isPageRotated180 = False
    pgContent = page0['/Contents'].getData().decode('utf-8')
    FLOAT_REG = r'([-+]?\d*\.\d+|\d+)'
    m = re.search('{} {} {} {} {} {} cm'.format(FLOAT_REG, FLOAT_REG, FLOAT_REG, FLOAT_REG, FLOAT_REG, FLOAT_REG), pgContent)
    if m:
        (a, b, c, d, e, f) = list(map(float, m.groups()))
        isPageRotated180 = (a == -e and d == -f)
```
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
### Use a command-line tool By far the most efficient solution I've found is to use a specialist command-line tool to replace `";"` with `","` and *then* read into Pandas. Pandas or pure Python solutions do not come close in terms of efficiency. Essentially, using CPython or a tool written in C / C++ is likely to outperform Python-level manipulations. For example, using [Find And Replace Text](http://fart-it.sourceforge.net/): ``` import os os.chdir(r'C:\temp') # change directory location os.system('fart.exe -c file.csv ";" ","') # run FART with character to replace df = pd.read_csv('file.csv', usecols=[3, 4, 5], header=None) # read file into Pandas ```
If this is an option, substituting the character `;` with `,` in the string is faster. I have written the string `x` to a file `test.dat`. ``` def csv_reader_4(x): with open(x, 'r') as f: a = f.read() return pd.read_csv(StringIO(unicode(a.replace(';', ','))), usecols=[3, 4, 5]) ``` The `unicode()` function was necessary to avoid a TypeError in Python 2. Benchmarking: ``` %timeit csv_reader_2('test.dat') # 1.6 s per loop %timeit csv_reader_4('test.dat') # 1.2 s per loop ```
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
How about using a generator to do the replacement, and combining it with an appropriate wrapper to get a file-like object suitable for pandas?

```
import io
import pandas as pd

# strings in first 3 columns are of arbitrary length
x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432
MNOP,QRST,UVWX,56.23;63.45;625.234
'''*10**6

def iterstream(iterable, buffer_size=io.DEFAULT_BUFFER_SIZE):
    """
    http://stackoverflow.com/a/20260030/190597 (Mechanical snail)
    Lets you use an iterable (e.g. a generator) that yields bytestrings as a
    read-only input stream.

    The stream implements Python 3's newer I/O API (available in Python 2's io
    module).
    For efficiency, the stream is buffered.
    """
    class IterStream(io.RawIOBase):
        def __init__(self):
            self.leftover = None
        def readable(self):
            return True
        def readinto(self, b):
            try:
                l = len(b)  # We're supposed to return at most this much
                chunk = self.leftover or next(iterable)
                output, self.leftover = chunk[:l], chunk[l:]
                b[:len(output)] = output
                return len(output)
            except StopIteration:
                return 0    # indicate EOF
    return io.BufferedReader(IterStream(), buffer_size=buffer_size)

def replacementgenerator(haystack, needle, replace):
    for s in haystack:
        if s == needle:
            yield str.encode(replace)
        else:
            yield str.encode(s)

csv = pd.read_csv(iterstream(replacementgenerator(x, ";", ",")), usecols=[3, 4, 5])
```

Note that we convert the string (or its constituent characters) to bytes through str.encode, as this is required for use by Pandas.

This approach is functionally identical to the answer by Daniele, except that we replace values "on-the-fly", as they are requested, instead of all in one go.
If this is an option, substituting the character `;` with `,` in the string is faster. I have written the string `x` to a file `test.dat`. ``` def csv_reader_4(x): with open(x, 'r') as f: a = f.read() return pd.read_csv(StringIO(unicode(a.replace(';', ','))), usecols=[3, 4, 5]) ``` The `unicode()` function was necessary to avoid a TypeError in Python 2. Benchmarking: ``` %timeit csv_reader_2('test.dat') # 1.6 s per loop %timeit csv_reader_4('test.dat') # 1.2 s per loop ```
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
### Use a command-line tool By far the most efficient solution I've found is to use a specialist command-line tool to replace `";"` with `","` and *then* read into Pandas. Pandas or pure Python solutions do not come close in terms of efficiency. Essentially, using CPython or a tool written in C / C++ is likely to outperform Python-level manipulations. For example, using [Find And Replace Text](http://fart-it.sourceforge.net/): ``` import os os.chdir(r'C:\temp') # change directory location os.system('fart.exe -c file.csv ";" ","') # run FART with character to replace df = pd.read_csv('file.csv', usecols=[3, 4, 5], header=None) # read file into Pandas ```
How about using a generator to do the replacement, and combining it with an appropriate wrapper to get a file-like object suitable for pandas?

```
import io
import pandas as pd

# strings in first 3 columns are of arbitrary length
x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432
MNOP,QRST,UVWX,56.23;63.45;625.234
'''*10**6

def iterstream(iterable, buffer_size=io.DEFAULT_BUFFER_SIZE):
    """
    http://stackoverflow.com/a/20260030/190597 (Mechanical snail)
    Lets you use an iterable (e.g. a generator) that yields bytestrings as a
    read-only input stream.

    The stream implements Python 3's newer I/O API (available in Python 2's io
    module).
    For efficiency, the stream is buffered.
    """
    class IterStream(io.RawIOBase):
        def __init__(self):
            self.leftover = None
        def readable(self):
            return True
        def readinto(self, b):
            try:
                l = len(b)  # We're supposed to return at most this much
                chunk = self.leftover or next(iterable)
                output, self.leftover = chunk[:l], chunk[l:]
                b[:len(output)] = output
                return len(output)
            except StopIteration:
                return 0    # indicate EOF
    return io.BufferedReader(IterStream(), buffer_size=buffer_size)

def replacementgenerator(haystack, needle, replace):
    for s in haystack:
        if s == needle:
            yield str.encode(replace)
        else:
            yield str.encode(s)

csv = pd.read_csv(iterstream(replacementgenerator(x, ";", ",")), usecols=[3, 4, 5])
```

Note that we convert the string (or its constituent characters) to bytes through str.encode, as this is required for use by Pandas.

This approach is functionally identical to the answer by Daniele, except that we replace values "on-the-fly", as they are requested, instead of all in one go.
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
### Use a command-line tool By far the most efficient solution I've found is to use a specialist command-line tool to replace `";"` with `","` and *then* read into Pandas. Pandas or pure Python solutions do not come close in terms of efficiency. Essentially, using CPython or a tool written in C / C++ is likely to outperform Python-level manipulations. For example, using [Find And Replace Text](http://fart-it.sourceforge.net/): ``` import os os.chdir(r'C:\temp') # change directory location os.system('fart.exe -c file.csv ";" ","') # run FART with character to replace df = pd.read_csv('file.csv', usecols=[3, 4, 5], header=None) # read file into Pandas ```
A very fast option: `3.51` s is the result. Simply make `csv_reader_4` the below; it converts the `StringIO` to `str`, replaces `;` with `,`, and reads the dataframe with `sep=','`:

```
def csv_reader_4(x):
    with x as fin:
        reader = pd.read_csv(StringIO(fin.getvalue().replace(';', ',')), sep=',', header=None)
    return reader
```

The benchmark:

```
%timeit csv_reader_4(StringIO(x)) # 3.51 s per loop
```
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
### Use a command-line tool By far the most efficient solution I've found is to use a specialist command-line tool to replace `";"` with `","` and *then* read into Pandas. Pandas or pure Python solutions do not come close in terms of efficiency. Essentially, using CPython or a tool written in C / C++ is likely to outperform Python-level manipulations. For example, using [Find And Replace Text](http://fart-it.sourceforge.net/): ``` import os os.chdir(r'C:\temp') # change directory location os.system('fart.exe -c file.csv ";" ","') # run FART with character to replace df = pd.read_csv('file.csv', usecols=[3, 4, 5], header=None) # read file into Pandas ```
In my environment (Ubuntu 16.04, 4GB RAM, Python 3.5.2) the fastest method was (the prototypical1) `csv_reader_5` (taken from [U9-Forward's answer](https://stackoverflow.com/a/54166567/6394138)), which ran less than 25% slower than reading the entire CSV file with no conversions. I improved that approach by implementing a filter/wrapper that replaces the char in the `read()` method:

```
class SingleCharReplacingFilter:

    def __init__(self, reader, oldchar, newchar):
        def proxy(obj, attr):
            a = getattr(obj, attr)
            if attr == 'read':
                def f(*args):
                    return a(*args).replace(oldchar, newchar)
                return f
            else:
                return a

        for a in dir(reader):
            if not a.startswith("_") or a == '__iter__':
                setattr(self, a, proxy(reader, a))

def csv_reader_6(x):
    with x as fin:
        return pd.read_csv(SingleCharReplacingFilter(fin, ";", ","),
                            sep=',', header=None, usecols=[3, 4, 5])
```

The result is a little better performance compared to reading the entire CSV file with no conversions:

```
In [3]: %timeit pd.read_csv(StringIO(x))
605 ms ± 3.24 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [4]: %timeit csv_reader_5(StringIO(x))
733 ms ± 3.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [5]: %timeit csv_reader_6(StringIO(x))
568 ms ± 2.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

---

1 I call it prototypical because it assumes that the input stream is of `StringIO` type (since it calls `.getvalue()` on it).
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
### Use a command-line tool By far the most efficient solution I've found is to use a specialist command-line tool to replace `";"` with `","` and *then* read into Pandas. Pandas or pure Python solutions do not come close in terms of efficiency. Essentially, using CPython or a tool written in C / C++ is likely to outperform Python-level manipulations. For example, using [Find And Replace Text](http://fart-it.sourceforge.net/): ``` import os os.chdir(r'C:\temp') # change directory location os.system('fart.exe -c file.csv ";" ","') # run FART with character to replace df = pd.read_csv('file.csv', usecols=[3, 4, 5], header=None) # read file into Pandas ```
Python has powerful features for manipulating data, but don't expect performance from pure Python. When performance is needed, C and C++ are your friends. Any fast library in Python is written in C/C++. It is quite easy to use C/C++ code in Python; have a look at the SWIG utility (<http://www.swig.org/tutorial.html>). You can write a C++ class that contains some fast utilities to use in your Python code when needed.
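As a tiny taste of the call-compiled-C-from-Python idea (using `ctypes` from the standard library rather than SWIG, purely for illustration; this runs as-is on Linux/macOS, while on Windows the library lookup may need adjusting):

```python
import ctypes
import ctypes.util

# Load the platform's C library and call a C function directly from Python.
libc = ctypes.CDLL(ctypes.util.find_library('c'))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # -> 5, computed by C's strlen
```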
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
How about using a generator to do the replacement, and combining it with an appropriate wrapper to get a file-like object suitable for pandas?

```
import io
import pandas as pd

# strings in first 3 columns are of arbitrary length
x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432
MNOP,QRST,UVWX,56.23;63.45;625.234
'''*10**6

def iterstream(iterable, buffer_size=io.DEFAULT_BUFFER_SIZE):
    """
    http://stackoverflow.com/a/20260030/190597 (Mechanical snail)
    Lets you use an iterable (e.g. a generator) that yields bytestrings as a
    read-only input stream.

    The stream implements Python 3's newer I/O API (available in Python 2's io
    module).
    For efficiency, the stream is buffered.
    """
    class IterStream(io.RawIOBase):
        def __init__(self):
            self.leftover = None
        def readable(self):
            return True
        def readinto(self, b):
            try:
                l = len(b)  # We're supposed to return at most this much
                chunk = self.leftover or next(iterable)
                output, self.leftover = chunk[:l], chunk[l:]
                b[:len(output)] = output
                return len(output)
            except StopIteration:
                return 0    # indicate EOF
    return io.BufferedReader(IterStream(), buffer_size=buffer_size)

def replacementgenerator(haystack, needle, replace):
    for s in haystack:
        if s == needle:
            yield str.encode(replace)
        else:
            yield str.encode(s)

csv = pd.read_csv(iterstream(replacementgenerator(x, ";", ",")), usecols=[3, 4, 5])
```

Note that we convert the string (or its constituent characters) to bytes through str.encode, as this is required for use by Pandas.

This approach is functionally identical to the answer by Daniele, except that we replace values "on-the-fly", as they are requested, instead of all in one go.
A very fast option: `3.51` s is the result. Simply make `csv_reader_4` the below; it converts the `StringIO` to `str`, replaces `;` with `,`, and reads the dataframe with `sep=','`:

```
def csv_reader_4(x):
    with x as fin:
        reader = pd.read_csv(StringIO(fin.getvalue().replace(';', ',')), sep=',', header=None)
    return reader
```

The benchmark:

```
%timeit csv_reader_4(StringIO(x)) # 3.51 s per loop
```
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
How about using a generator to do the replacement, and combining it with an appropriate wrapper to get a file-like object suitable for pandas?

```
import io
import pandas as pd

# strings in first 3 columns are of arbitrary length
x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432
MNOP,QRST,UVWX,56.23;63.45;625.234
'''*10**6

def iterstream(iterable, buffer_size=io.DEFAULT_BUFFER_SIZE):
    """
    http://stackoverflow.com/a/20260030/190597 (Mechanical snail)
    Lets you use an iterable (e.g. a generator) that yields bytestrings as a
    read-only input stream.

    The stream implements Python 3's newer I/O API (available in Python 2's io
    module).
    For efficiency, the stream is buffered.
    """
    class IterStream(io.RawIOBase):
        def __init__(self):
            self.leftover = None
        def readable(self):
            return True
        def readinto(self, b):
            try:
                l = len(b)  # We're supposed to return at most this much
                chunk = self.leftover or next(iterable)
                output, self.leftover = chunk[:l], chunk[l:]
                b[:len(output)] = output
                return len(output)
            except StopIteration:
                return 0    # indicate EOF
    return io.BufferedReader(IterStream(), buffer_size=buffer_size)

def replacementgenerator(haystack, needle, replace):
    for s in haystack:
        if s == needle:
            yield str.encode(replace)
        else:
            yield str.encode(s)

csv = pd.read_csv(iterstream(replacementgenerator(x, ";", ",")), usecols=[3, 4, 5])
```

Note that we convert the string (or its constituent characters) to bytes through str.encode, as this is required for use by Pandas.

This approach is functionally identical to the answer by Daniele, except that we replace values "on-the-fly", as they are requested, instead of all in one go.
In my environment (Ubuntu 16.04, 4GB RAM, Python 3.5.2) the fastest method was (the prototypical1) `csv_reader_5` (taken from [U9-Forward's answer](https://stackoverflow.com/a/54166567/6394138)), which ran less than 25% slower than reading the entire CSV file with no conversions. I improved that approach by implementing a filter/wrapper that replaces the char in the `read()` method:

```
class SingleCharReplacingFilter:

    def __init__(self, reader, oldchar, newchar):
        def proxy(obj, attr):
            a = getattr(obj, attr)
            if attr == 'read':
                def f(*args):
                    return a(*args).replace(oldchar, newchar)
                return f
            else:
                return a

        for a in dir(reader):
            if not a.startswith("_") or a == '__iter__':
                setattr(self, a, proxy(reader, a))

def csv_reader_6(x):
    with x as fin:
        return pd.read_csv(SingleCharReplacingFilter(fin, ";", ","),
                            sep=',', header=None, usecols=[3, 4, 5])
```

The result is a little better performance compared to reading the entire CSV file with no conversions:

```
In [3]: %timeit pd.read_csv(StringIO(x))
605 ms ± 3.24 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [4]: %timeit csv_reader_5(StringIO(x))
733 ms ± 3.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [5]: %timeit csv_reader_6(StringIO(x))
568 ms ± 2.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

---

1 I call it prototypical because it assumes that the input stream is of `StringIO` type (since it calls `.getvalue()` on it).
54,044,022
I have an awkward CSV file which has multiple delimiters: the delimiter for the non-numeric part is `','`, for the numeric part `';'`. I want to construct a dataframe only out of the numeric part as efficiently as possible. I have made 5 attempts: among them, utilising the `converters` argument of `pd.read_csv`, using regex with `engine='python'`, using `str.replace`. They are all more than 2x slower than reading the entire CSV file with no conversions. This is prohibitively slow for my use case. I understand the comparison isn't like-for-like, but it does demonstrate the overall poor performance is *not* driven by I/O. Is there a more efficient way to read in the data into a numeric Pandas dataframe? Or the equivalent NumPy array? The below string can be used for benchmarking purposes. ``` # Python 3.7.0, Pandas 0.23.4 from io import StringIO import pandas as pd import csv # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def csv_reader_1(x): df = pd.read_csv(x, usecols=[3], header=None, delimiter=',', converters={3: lambda x: x.split(';')}) return df.join(pd.DataFrame(df.pop(3).values.tolist(), dtype=float)) def csv_reader_2(x): df = pd.read_csv(x, header=None, delimiter=';', converters={0: lambda x: x.rsplit(',')[-1]}, dtype=float) return df.astype(float) def csv_reader_3(x): return pd.read_csv(x, usecols=[3, 4, 5], header=None, sep=',|;', engine='python') def csv_reader_4(x): with x as fin: reader = csv.reader(fin, delimiter=',') L = [i[-1].split(';') for i in reader] return pd.DataFrame(L, dtype=float) def csv_reader_5(x): with x as fin: return pd.read_csv(StringIO(fin.getvalue().replace(';',',')), sep=',', header=None, usecols=[3, 4, 5]) ``` Checks: ``` res1 = csv_reader_1(StringIO(x)) res2 = csv_reader_2(StringIO(x)) res3 = csv_reader_3(StringIO(x)) res4 = csv_reader_4(StringIO(x)) res5 = csv_reader_5(StringIO(x)) print(res1.head(3)) # 0 1 2 # 0 34.23 562.45 213.5432 # 1 56.23 63.45 625.2340 # 2 34.23 562.45 213.5432 assert all(np.array_equal(res1.values, i.values) for i in (res2, res3, res4, res5)) ``` Benchmarking results: ``` %timeit csv_reader_1(StringIO(x)) # 5.31 s per loop %timeit csv_reader_2(StringIO(x)) # 6.69 s per loop %timeit csv_reader_3(StringIO(x)) # 18.6 s per loop %timeit csv_reader_4(StringIO(x)) # 5.68 s per loop %timeit csv_reader_5(StringIO(x)) # 7.01 s per loop %timeit pd.read_csv(StringIO(x)) # 1.65 s per loop ``` Update ------ I'm open to using command-line tools as a last resort. To that extent, I have included such an answer. My hope is there is a pure-Python or Pandas solution with comparable efficiency.
2019/01/04
[ "https://Stackoverflow.com/questions/54044022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9209546/" ]
How about using a generator to do the replacement, and combining it with an appropriate decorator to get a file-like object suitable for pandas? ``` import io import pandas as pd # strings in first 3 columns are of arbitrary length x = '''ABCD,EFGH,IJKL,34.23;562.45;213.5432 MNOP,QRST,UVWX,56.23;63.45;625.234 '''*10**6 def iterstream(iterable, buffer_size=io.DEFAULT_BUFFER_SIZE): """ http://stackoverflow.com/a/20260030/190597 (Mechanical snail) Lets you use an iterable (e.g. a generator) that yields bytestrings as a read-only input stream. The stream implements Python 3's newer I/O API (available in Python 2's io module). For efficiency, the stream is buffered. """ class IterStream(io.RawIOBase): def __init__(self): self.leftover = None def readable(self): return True def readinto(self, b): try: l = len(b) # We're supposed to return at most this much chunk = self.leftover or next(iterable) output, self.leftover = chunk[:l], chunk[l:] b[:len(output)] = output return len(output) except StopIteration: return 0 # indicate EOF return io.BufferedReader(IterStream(), buffer_size=buffer_size) def replacementgenerator(haystack, needle, replace): for s in haystack: if s == needle: yield str.encode(replace); else: yield str.encode(s); csv = pd.read_csv(iterstream(replacementgenerator(x, ";", ",")), usecols=[3, 4, 5]) ``` Note that we convert the string (or its constituent characters) to bytes through str.encode, as this is required for use by Pandas. This approach is functionally identical to the answer by Daniele except for the fact that we replace values "on-the-fly", as they are requested instead of all in one go.
Python has powerful features for manipulating data, but don't expect performance from pure Python. When performance is needed, C and C++ are your friends. Any fast library in Python is written in C/C++. It is quite easy to use C/C++ code in Python; have a look at the SWIG utility (<http://www.swig.org/tutorial.html>). You can write a C++ class that contains some fast utilities to use in your Python code when needed.
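As a tiny taste of the call-compiled-C-from-Python idea (using `ctypes` from the standard library rather than SWIG, purely for illustration; this runs as-is on Linux/macOS, while on Windows the library lookup may need adjusting):

```python
import ctypes
import ctypes.util

# Load the platform's C library and call a C function directly from Python.
libc = ctypes.CDLL(ctypes.util.find_library('c'))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # -> 5, computed by C's strlen
```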
21,616,994
I apologize if this question has been answered elsewhere. I haven't been able to find an answer yet through the search here or in the Pandas documentation (quite possible I've just missed it though).

I'm trying to import an HTML file into Python through pandas and am unsure how to obtain the data I need from the result. I'm working on Windows 7 and using Python 3.3 along with Pandas.

Using the read\_html function in pandas appears to work and returns a list of dataframes. I'm new to Python (migrating from Matlab) and am unsure how to use a list of dataframes. The documentation describes how to use and manipulate dataframes, but how do I get a dataframe from a list of them?

Some of the other answers on this site suggest using the lxml functions directly to parse HTML files; however, it seems the read\_html is working fine in my case.

Here is the code I entered:

```
import pandas as pd
file = 'F:\\Documents\\Python\\EA Performance Manager\\History.html'
History = pd.read_html(file, header=0, infer_types=False)
```

Which gives:

```
>>> History
[<class 'pandas.core.frame.DataFrame'>
Int64Index: 428 entries, 1 to 428
Data columns (total 13 columns):
Ticket 428 non-null values
Strategy 428 non-null values
Symbol 428 non-null values
B/S 428 non-null values
Amount (k) 428 non-null values
Open Time 428 non-null values
Open Price 428 non-null values
Close Time 428 non-null values
Close Price 428 non-null values
High/Low 428 non-null values
Rollover 428 non-null values
Gross P/L 428 non-null values
Pips 428 non-null values
dtypes: object(13)]
```

I need to access the individual data columns for analysis (preferably storing them in array-like structures - still learning to use Python properly; I will have to convert the data somehow as infer\_types is false, but I think that is another issue). The question is how do I do this?

Note: The History.html file was downloaded from a web-based trading platform as History.xls; only after trying to use the Excel reading functions to no avail did I find out it was actually an HTML file. The content of the file is the history of trade opens and closes for an automated trading system. The first row gives the heading for each column.
2014/02/07
[ "https://Stackoverflow.com/questions/21616994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3264279/" ]
`History[0]` will give you the first element. FYI, capitalized names are generally used for classes, and variable names are `like_this`. These are just conventions; `History` is a legal identifier.
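For example, a minimal sketch building on the question's own `read_html` call (the column names are taken from the output shown above):

```
import pandas as pd

file = 'F:\\Documents\\Python\\EA Performance Manager\\History.html'
History = pd.read_html(file, header=0, infer_types=False)

df = History[0]              # the single DataFrame inside the list
tickets = df['Ticket']       # one column, as a pandas Series
pips = df['Pips'].tolist()   # or as a plain Python list
```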
For each dataframe column you wish to convert to a list, you can transpose the values and then convert the result to a list as follows. Here is an arbitrary DataFrame with one column (if there is more than one column, slice into columns and do this for each one):

```
import random
from pandas import DataFrame

s = DataFrame({'column 1': random.sample(range(10), 10)})
```

Then obtain the values using `.values`, transpose using `.T`, and convert to a list using `.tolist()`:

```
s.values.T.tolist()
```

However, that might give you all of the values as longs (with an L at the end of each). If that's the case, then you can use a simple datatype conversion to obtain an integer or floating point, or whatever is desirable. I hope that helps! Let me know if not.
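As a sketch of the datatype conversion mentioned above (reusing the single-column DataFrame `s` from the snippet), casting each element with `int()` strips the long/numpy wrapper:

```
values = s.values.T.tolist()[0]    # first (and only) column as a list
values = [int(v) for v in values]  # plain Python ints, no trailing L
```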
56,867,659
While debugging, `cmd is not recognized` is displayed and the program is not debugged. What can be the problem? I have already checked the `path` and `pythonpath` variables and those seem to be just fine.

```bash
C:\Users\rahul\Desktop\vscode\.vscode>cd c:\Users\rahul\Desktop\vscode\.vscode && cmd /C "set "PYTHONIOENCODING=UTF-8" && set "PYTHONUNBUFFERED=1" && C:\Users\rahul\AppData\Local\Programs\Python\Python37-32\python.exe c:\Users\rahul\.vscode\extensions\ms-python.python-2019.6.22090\pythonFiles\ptvsd_launcher.py --default --client --host localhost --port 50265 c:\Users\rahul\Desktop\vscode\.vscode\s.py "
'cmd' is not recognized as an internal or external command,
operable program or batch file.
```
2019/07/03
[ "https://Stackoverflow.com/questions/56867659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8306141/" ]
> TL;DR: `cmd` is not in your Windows Environment Path.
> [![enter image description here](https://i.stack.imgur.com/9hZxD.png)](https://i.stack.imgur.com/9hZxD.png)
> Add `%SystemRoot%\system32` to your *System Variables* and restart VSCode.

---

Visual Studio Code has actually brought native support for selecting your terminal, so adding cmd to your path is no longer necessary.

* Press `CTRL + SHIFT + P` -> `Terminal: Select default shell` -> select your terminal.

This will add the following line to your settings.json: `"terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe"`. Or, if you chose PowerShell, it will look like this: `"terminal.integrated.shell.windows": "C:\\windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"`

To view your settings.json file, simply press `Ctrl + ,`, scroll down to `Files: Associations`, and click `Edit in settings.json`.
It means that `cmd` is not in your path. Either: * Add the path to the system or user variables in the control panel * Use the full path to `cmd` instead (typically `C:\Windows\System32\cmd.exe`), meaning something like: `cd c:\Users\rahul\Desktop\vscode\.vscode && C:\Windows\System32\cmd.exe /C "set "PYTHONIOENCODING=UTF-8" && set "PYTHONUNBUFFERED=1" && C:\Users\rahul\AppData\Local\Programs\Python\Python37-32\python.exe c:\Users\rahul\.vscode\extensions\ms-python.python-2019.6.22090\pythonFiles\ptvsd_launcher.py --default --client --host localhost --port 50265 c:\Users\rahul\Desktop\vscode\.vscode\s.py "`
56,867,659
While debugging, `cmd is not recognized` is displayed and the program is not debugged. What can be the problem? I have already checked the `path` and `pythonpath` variables and those seem to be just fine.

```bash
C:\Users\rahul\Desktop\vscode\.vscode>cd c:\Users\rahul\Desktop\vscode\.vscode && cmd /C "set "PYTHONIOENCODING=UTF-8" && set "PYTHONUNBUFFERED=1" && C:\Users\rahul\AppData\Local\Programs\Python\Python37-32\python.exe c:\Users\rahul\.vscode\extensions\ms-python.python-2019.6.22090\pythonFiles\ptvsd_launcher.py --default --client --host localhost --port 50265 c:\Users\rahul\Desktop\vscode\.vscode\s.py "
'cmd' is not recognized as an internal or external command,
operable program or batch file.
```
2019/07/03
[ "https://Stackoverflow.com/questions/56867659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8306141/" ]
> TL;DR: `cmd` is not in your Windows Environment Path.
> [![enter image description here](https://i.stack.imgur.com/9hZxD.png)](https://i.stack.imgur.com/9hZxD.png)
> Add `%SystemRoot%\system32` to your *System Variables* and restart VSCode.

---

Visual Studio Code has actually brought native support for selecting your terminal, so adding cmd to your path is no longer necessary.

* Press `CTRL + SHIFT + P` -> `Terminal: Select default shell` -> select your terminal.

This will add the following line to your settings.json: `"terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe"`. Or, if you chose PowerShell, it will look like this: `"terminal.integrated.shell.windows": "C:\\windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"`

To view your settings.json file, simply press `Ctrl + ,`, scroll down to `Files: Associations`, and click `Edit in settings.json`.
If cmd is in your Windows Environment Path, then probably your default integrated shell is set to WSL bash. Change it by setting `"terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe"` in your settings.json. You may need to restart VSCode for this to take effect.
14,506,717
I need to print some information directly (without user confirmation) and I'm using Python and the `win32print` module. I've already read the whole [Tim Golden win32print page](http://timgolden.me.uk/python/win32_how_do_i/print.html) (even read the [win32print doc](http://timgolden.me.uk/pywin32-docs/win32print.html), which is small) and I'm using the same example he wrote there himself, but I just print nothing. If I go to the interactive shell and make one step at a time, I get the document on the printer queue (after the `StartDocPrinter`), then I get the document size (after the `StartPagePrinter, WritePrinter, EndPagePrinter` block), and then the document disappears from the queue (after the `EndDocPrinter`) without printing. I'm aware of the `ShellExecute` method Tim Golden showed. It works here, but it needs to create a temp file and it prints this filename, two things I don't want. Any ideas? Thanks in advance. This is the code I'm testing (copy and paste of Tim Golden's):

```
import os, sys
import win32print
import time

printer_name = win32print.GetDefaultPrinter()

if sys.version_info >= (3,):
    raw_data = bytes ("This is a test", "utf-8")
else:
    raw_data = "This is a test"

hPrinter = win32print.OpenPrinter (printer_name)
try:
    hJob = win32print.StartDocPrinter (hPrinter, 1, ("test of raw data", None, "RAW"))
    try:
        win32print.StartPagePrinter (hPrinter)
        win32print.WritePrinter (hPrinter, raw_data)
        win32print.EndPagePrinter (hPrinter)
    finally:
        win32print.EndDocPrinter (hPrinter)
finally:
    win32print.ClosePrinter (hPrinter)
```

[EDIT] I installed a PDF printer on my computer to test with another printer (CutePDF Writer) and I could generate the `test of raw data.pdf` file, but when I look inside there is nothing. Meaning: all commands except `WritePrinter` appear to be doing what they were supposed to do. But again, as I said in the comments, `WritePrinter` returns the correct number of bytes that were supposed to be written to the printer. I have no other idea how to solve this, but have just proved there is nothing wrong with my printer.
2013/01/24
[ "https://Stackoverflow.com/questions/14506717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1814970/" ]
I'm still looking for the best way to do this, but I found an answer that satisfies me for the problem that I have. On Tim Golden's site (linked in the question) you can find this example:

```
import win32ui
import win32print
import win32con

INCH = 1440

hDC = win32ui.CreateDC ()
hDC.CreatePrinterDC (win32print.GetDefaultPrinter ())
hDC.StartDoc ("Test doc")
hDC.StartPage ()

hDC.SetMapMode (win32con.MM_TWIPS)
hDC.DrawText ("TEST", (0, INCH * -1, INCH * 8, INCH * -2), win32con.DT_CENTER)

hDC.EndPage ()
hDC.EndDoc ()
```

I adapted it a little bit after reading a lot of the documentation. I'll be using the `win32ui` library and [`TextOut`](http://timgolden.me.uk/pywin32-docs/PyCDC__TextOut_meth.html) (a device context method):

```
import win32ui

# X from the left margin, Y from top margin
# both in pixels
X=50; Y=50

multi_line_string = input_string.splitlines()  # one list entry per line

hDC = win32ui.CreateDC ()
hDC.CreatePrinterDC (your_printer_name)
hDC.StartDoc (the_name_will_appear_on_printer_spool)
hDC.StartPage ()

for line in multi_line_string:
    hDC.TextOut(X,Y,line)
    Y += 100

hDC.EndPage ()
hDC.EndDoc ()
```

I searched meta Stack Overflow before answering my own question, and [here](https://meta.stackexchange.com/questions/9933/is-there-a-convention-for-accepting-my-own-answer-to-my-own-question) I found that it is encouraged behavior, therefore I'm doing it. I'll wait a little longer to see if I get any other answers.
```
# You must install pywin32 and import these modules:
import win32print, win32ui, win32con

# X from the left margin, Y from top margin
# both in pixels
X=50; Y=50

# Split your string (for example: input_string) into lines
# and store them in a new list (for example: multi_line_string)
multi_line_string = input_string.splitlines()

hDC = win32ui.CreateDC ()

# Set default printer from Windows:
hDC.CreatePrinterDC (win32print.GetDefaultPrinter ())

hDC.StartDoc (the_name_will_appear_on_printer_spool)
hDC.StartPage ()

for line in multi_line_string:
    hDC.TextOut(X,Y,line)
    Y += 100

hDC.EndPage ()
hDC.EndDoc ()
# I like Python
```
14,506,717
I need to print some information directly (without user confirmation) and I'm using Python and the `win32print` module. I've already read the whole [Tim Golden win32print page](http://timgolden.me.uk/python/win32_how_do_i/print.html) (even read the [win32print doc](http://timgolden.me.uk/pywin32-docs/win32print.html), which is small) and I'm using the same example he wrote there himself, but I just print nothing. If I go to the interactive shell and make one step at a time, I get the document on the printer queue (after the `StartDocPrinter`), then I get the document size (after the `StartPagePrinter, WritePrinter, EndPagePrinter` block), and then the document disappears from the queue (after the `EndDocPrinter`) without printing. I'm aware of the `ShellExecute` method Tim Golden showed. It works here, but it needs to create a temp file and it prints this filename, two things I don't want. Any ideas? Thanks in advance. This is the code I'm testing (copy and paste of Tim Golden's):

```
import os, sys
import win32print
import time

printer_name = win32print.GetDefaultPrinter()

if sys.version_info >= (3,):
    raw_data = bytes ("This is a test", "utf-8")
else:
    raw_data = "This is a test"

hPrinter = win32print.OpenPrinter (printer_name)
try:
    hJob = win32print.StartDocPrinter (hPrinter, 1, ("test of raw data", None, "RAW"))
    try:
        win32print.StartPagePrinter (hPrinter)
        win32print.WritePrinter (hPrinter, raw_data)
        win32print.EndPagePrinter (hPrinter)
    finally:
        win32print.EndDocPrinter (hPrinter)
finally:
    win32print.ClosePrinter (hPrinter)
```

[EDIT] I installed a PDF printer on my computer to test with another printer (CutePDF Writer) and I could generate the `test of raw data.pdf` file, but when I look inside there is nothing. Meaning: all commands except `WritePrinter` appear to be doing what they were supposed to do. But again, as I said in the comments, `WritePrinter` returns the correct number of bytes that were supposed to be written to the printer. I have no other idea how to solve this, but have just proved there is nothing wrong with my printer.
2013/01/24
[ "https://Stackoverflow.com/questions/14506717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1814970/" ]
I'm still looking for the best way to do this, but I found an answer that satisfies me for the problem that I have. On Tim Golden's site (linked in the question) you can find this example:

```
import win32ui
import win32print
import win32con

INCH = 1440

hDC = win32ui.CreateDC ()
hDC.CreatePrinterDC (win32print.GetDefaultPrinter ())
hDC.StartDoc ("Test doc")
hDC.StartPage ()

hDC.SetMapMode (win32con.MM_TWIPS)
hDC.DrawText ("TEST", (0, INCH * -1, INCH * 8, INCH * -2), win32con.DT_CENTER)

hDC.EndPage ()
hDC.EndDoc ()
```

I adapted it a little bit after reading a lot of the documentation. I'll be using the `win32ui` library and [`TextOut`](http://timgolden.me.uk/pywin32-docs/PyCDC__TextOut_meth.html) (a device context method):

```
import win32ui

# X from the left margin, Y from top margin
# both in pixels
X=50; Y=50

multi_line_string = input_string.splitlines()  # one list entry per line

hDC = win32ui.CreateDC ()
hDC.CreatePrinterDC (your_printer_name)
hDC.StartDoc (the_name_will_appear_on_printer_spool)
hDC.StartPage ()

for line in multi_line_string:
    hDC.TextOut(X,Y,line)
    Y += 100

hDC.EndPage ()
hDC.EndDoc ()
```

I searched meta Stack Overflow before answering my own question, and [here](https://meta.stackexchange.com/questions/9933/is-there-a-convention-for-accepting-my-own-answer-to-my-own-question) I found that it is encouraged behavior, therefore I'm doing it. I'll wait a little longer to see if I get any other answers.
The problem is the driver version. If the version is 4, you need to give XPS\_PASS instead of RAW; here is a sample.

```
drivers = win32print.EnumPrinterDrivers(None, None, 2)
hPrinter = win32print.OpenPrinter(printer_name)
printer_info = win32print.GetPrinter(hPrinter, 2)
for driver in drivers:
    if driver["Name"] == printer_info["pDriverName"]:
        printer_driver = driver
raw_type = "XPS_PASS" if printer_driver["Version"] == 4 else "RAW"
try:
    hJob = win32print.StartDocPrinter(hPrinter, 1, ("test of raw data", None, raw_type))
    try:
        win32print.StartPagePrinter(hPrinter)
        win32print.WritePrinter(hPrinter, raw_data)
        win32print.EndPagePrinter(hPrinter)
    finally:
        win32print.EndDocPrinter(hPrinter)
finally:
    win32print.ClosePrinter(hPrinter)
```
14,506,717
I need to print some information directly (without user confirmation) and I'm using Python and the `win32print` module. I've already read the whole [Tim Golden win32print page](http://timgolden.me.uk/python/win32_how_do_i/print.html) (even read the [win32print doc](http://timgolden.me.uk/pywin32-docs/win32print.html), which is small) and I'm using the same example he wrote there himself, but I just print nothing. If I go to the interactive shell and make one step at a time, I get the document on the printer queue (after the `StartDocPrinter`), then I get the document size (after the `StartPagePrinter, WritePrinter, EndPagePrinter` block), and then the document disappears from the queue (after the `EndDocPrinter`) without printing. I'm aware of the `ShellExecute` method Tim Golden showed. It works here, but it needs to create a temp file and it prints this filename, two things I don't want. Any ideas? Thanks in advance. This is the code I'm testing (copy and paste of Tim Golden's):

```
import os, sys
import win32print
import time

printer_name = win32print.GetDefaultPrinter()

if sys.version_info >= (3,):
    raw_data = bytes ("This is a test", "utf-8")
else:
    raw_data = "This is a test"

hPrinter = win32print.OpenPrinter (printer_name)
try:
    hJob = win32print.StartDocPrinter (hPrinter, 1, ("test of raw data", None, "RAW"))
    try:
        win32print.StartPagePrinter (hPrinter)
        win32print.WritePrinter (hPrinter, raw_data)
        win32print.EndPagePrinter (hPrinter)
    finally:
        win32print.EndDocPrinter (hPrinter)
finally:
    win32print.ClosePrinter (hPrinter)
```

[EDIT] I installed a PDF printer on my computer to test with another printer (CutePDF Writer) and I could generate the `test of raw data.pdf` file, but when I look inside there is nothing. Meaning: all commands except `WritePrinter` appear to be doing what they were supposed to do. But again, as I said in the comments, `WritePrinter` returns the correct number of bytes that were supposed to be written to the printer. I have no other idea how to solve this, but have just proved there is nothing wrong with my printer.
2013/01/24
[ "https://Stackoverflow.com/questions/14506717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1814970/" ]
```
# You must install pywin32 and import these modules:
import win32print, win32ui, win32con

# X from the left margin, Y from top margin
# both in pixels
X=50; Y=50

# Split your string (for example: input_string) into lines
# and store them in a new list (for example: multi_line_string)
multi_line_string = input_string.splitlines()

hDC = win32ui.CreateDC ()

# Set default printer from Windows:
hDC.CreatePrinterDC (win32print.GetDefaultPrinter ())

hDC.StartDoc (the_name_will_appear_on_printer_spool)
hDC.StartPage ()

for line in multi_line_string:
    hDC.TextOut(X,Y,line)
    Y += 100

hDC.EndPage ()
hDC.EndDoc ()
# I like Python
```
The problem is the driver version. If the version is 4, you need to give XPS\_PASS instead of RAW; here is a sample.

```
drivers = win32print.EnumPrinterDrivers(None, None, 2)
hPrinter = win32print.OpenPrinter(printer_name)
printer_info = win32print.GetPrinter(hPrinter, 2)
for driver in drivers:
    if driver["Name"] == printer_info["pDriverName"]:
        printer_driver = driver
raw_type = "XPS_PASS" if printer_driver["Version"] == 4 else "RAW"
try:
    hJob = win32print.StartDocPrinter(hPrinter, 1, ("test of raw data", None, raw_type))
    try:
        win32print.StartPagePrinter(hPrinter)
        win32print.WritePrinter(hPrinter, raw_data)
        win32print.EndPagePrinter(hPrinter)
    finally:
        win32print.EndDocPrinter(hPrinter)
finally:
    win32print.ClosePrinter(hPrinter)
```
56,612,386
I am trying to use the pre-made estimator `tf.estimator.DNNClassifier` on the MNIST dataset. I load the dataset from `tensorflow_dataset`. I pursue the following four steps: first, building the dataset pipeline and defining the input function:

```py
## Step 1
mnist, info = tfds.load('mnist', with_info=True)
ds_train_orig, ds_test = mnist['train'], mnist['test']

def train_input_fn(dataset, batch_size):
    dataset = dataset.map(lambda x: ({'image-pixels': tf.reshape(x['image'], (-1,))},
                                     x['label']))
    return dataset.shuffle(1000).repeat().batch(batch_size)
```

Then, in step 2, I define the feature column with a single key and shape 784:

```py
## Step 2:
image_feature_column = tf.feature_column.numeric_column(key='image-pixels',
                                                        shape=(28*28))

image_feature_column
NumericColumn(key='image-pixels', shape=(784,), default_value=None, dtype=tf.float32, normalizer_fn=None)
```

In step 3, I instantiate the estimator as follows:

```py
## Step 3:
dnn_classifier = tf.estimator.DNNClassifier(
    feature_columns=image_feature_column,
    hidden_units=[16, 16],
    n_classes=10)
```

And finally, in step 4, I use the estimator by calling the `.train()` method:

```py
## Step 4:
dnn_classifier.train(
    input_fn=lambda:train_input_fn(ds_train_orig, batch_size=32),
    #lambda:iris_data.train_input_fn(train_x, train_y, args.batch_size),
    steps=20)
```

But this results in the following error. It looks like the problem has arisen from the dataset.

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-21-95736cd65e45> in <module>
      2 dnn_classifier.train(
      3     input_fn=lambda: train_input_fn(ds_train_orig, batch_size=32),
----> 4     steps=20)

~/anaconda3/envs/tf2.0-beta/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accept_symbolic_tensors, accept_composite_tensors)
   1183       graph = get_default_graph()
   1184       if not graph.building_function:
-> 1185         raise RuntimeError("Attempting to capture an EagerTensor without "
   1186                            "building a function.")
   1187       return graph.capture(value, name=name)

RuntimeError: Attempting to capture an EagerTensor without building a function.
```
2019/06/15
[ "https://Stackoverflow.com/questions/56612386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2191236/" ]
I think the graph construction gets weird if you load a tensorflow\_datasets dataset outside the `input_fn`. I followed the TF 2.0 migration guide example and this does not give errors. Please note that I have not tested for model correctness, and you will have to modify the `input_fn` logic a bit to get the function for eval.

```
# Define the estimator's input_fn
def input_fn():
  datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
  mnist_train, mnist_test = datasets['train'], datasets['test']

  dataset = mnist_train.map(lambda x, y: ({'image-pixels': tf.reshape(x, (-1,))}, y))
  return dataset.shuffle(1000).repeat().batch(32)

image_feature_column = tf.feature_column.numeric_column(key='image-pixels', shape=(28*28))

dnn_classifier = tf.estimator.DNNClassifier(
    feature_columns=[image_feature_column],
    hidden_units=[16, 16],
    n_classes=10)

dnn_classifier.train(input_fn=input_fn, steps=200)
```

I get a bunch of deprecation warnings at this point, but it seems like the estimator is trained.
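For completeness, an eval counterpart could look roughly like this (a sketch only, untested; it follows the same pattern as `input_fn` above but without shuffling or repeating):

```
def eval_input_fn():
    datasets = tfds.load(name='mnist', as_supervised=True)
    ds = datasets['test'].map(
        lambda x, y: ({'image-pixels': tf.reshape(x, (-1,))}, y))
    return ds.batch(32)  # no shuffle()/repeat() for evaluation

metrics = dnn_classifier.evaluate(input_fn=eval_input_fn, steps=100)
```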
The answer by @dgumo is correct. I just wanted to add a basic example: all tensors returned by the input function must be created within the input function.

```py
# Raw data can live outside the input function
data_x = [0.0, 1.0, 2.0, 3.0, 4.0]
data_y = [3.0, 4.9, 7.3, 8.65, 10.75]

def supply_input():
    # Tensors must be created inside the function
    train_x = tf.constant(data_x)
    train_y = tf.constant(data_y)
    feature = { 'x': train_x }
    return feature, train_y
```
17,363,611
My code works perfectly, but I want it to write the values to a text file. When I try to do it, I get 'invalid syntax'. When I use a Python shell, it works. So I don't understand why it isn't working in my script. I bet it's something silly, but why won't it output the data to a text file?

```
#!/usr/bin/env python

#standard module, needed as we deal with command line args
import sys
from fractions import Fraction
import pyexiv2

#checking whether we got enough args, if not, tell how to use, and exits
#if len(sys.argv) != 2 :
#    print "incorrect argument, usage: " + sys.argv[0] + ' <filename>'
#    sys.exit(1)

#so the argument seems to be ok, we use it as an imagefile
imagefilename = sys.argv[1]

#trying to catch the exceptions in case of problem with the file reading
try:
    metadata = pyexiv2.metadata.ImageMetadata(imagefilename)
    metadata.read();

    #trying to catch the exceptions in case of problem with the GPS data reading
    try:
        latitude = metadata.__getitem__("Exif.GPSInfo.GPSLatitude")
        latitudeRef = metadata.__getitem__("Exif.GPSInfo.GPSLatitudeRef")
        longitude = metadata.__getitem__("Exif.GPSInfo.GPSLongitude")
        longitudeRef = metadata.__getitem__("Exif.GPSInfo.GPSLongitudeRef")

        # get the value of the tag, and make it float number
        alt = float(metadata.__getitem__("Exif.GPSInfo.GPSAltitude").value)

        # get human readable values
        latitude = str(latitude).split("=")[1][1:-1].split(" ");
        latitude = map(lambda f: str(float(Fraction(f))), latitude)
        latitude = latitude[0] + u"\u00b0" + latitude[1] + "'" + latitude[2] + '"' + " " + str(latitudeRef).split("=")[1][1:-1]
        longitude = str(longitude).split("=")[1][1:-1].split(" ");
        longitude = map(lambda f: str(float(Fraction(f))), longitude)
        longitude = longitude[0] + u"\u00b0" + longitude[1] + "'" + longitude[2] + '"' + " " + str(longitudeRef).split("=")[1][1:-1]

        ## Printing out, might need to be modified if other format needed
        ## i just simple put tabs here to make nice columns
        print " \n A text file has been created with the following information \n"
        print "GPS EXIF data for " + imagefilename
        print "Latitude:\t" + latitude
        print "Longitude:\t" + longitude
        print "Altitude:\t" + str(alt) + " m"

    except Exception, e:
        # complain if the GPS reading went wrong, and print the exception
        print "Missing GPS info for " + imagefilename
        print e

    # Create a new file or **overwrite an existing file**
    text_file = open('textfile.txt', 'w')
    text_file.write("Latitude" + latitude)
    # Close the output file
    text_file.close()

except Exception, e:
    # complain if the GPS reading went wrong, and print the exception
    print "Error processing image " + imagefilename
    print e;
```

The error I see says:

```
text_file = open('textfile.txt','w')
          ^
SyntaxError: invalid syntax
```
2013/06/28
[ "https://Stackoverflow.com/questions/17363611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2519572/" ]
`lis` is an empty list; *any* index will raise an exception. If you wanted to add elements to that list, use `lis.append()` instead. Note that you can loop over sequences *directly*; there is no need to keep your own counter:

```
def front_x(words):
    lis = []
    words.sort()
    for word in words:
        if word.startswith("x"):
            lis.append(word)
    for entry in lis:
        print(entry)
```

You can reduce this further by immediately printing all words that start with `x`; there is no need to build a separate list:

```
def front_x(words):
    for word in sorted(words):
        if word.startswith("x"):
            print(word)
```

If you wanted to sort the list with all `x` words coming first, use a custom sort key:

```
def front_x(words):
    return sorted(words, key=lambda w: (not w.startswith('x'), w))
```

This sorts the words first by the boolean flag for `.startswith('x')` (`False` sorts before `True`, so we negate that test) and then by the words themselves. Demo:

```
>>> words = ['foo', 'bar', 'xbaz', 'eggs', 'xspam', 'xham']
>>> sorted(words, key=lambda w: (not w.startswith('x'), w))
['xbaz', 'xham', 'xspam', 'bar', 'eggs', 'foo']
```
> i need to sort the list but the words starting with x should be the first ones.

Complementary to the custom search key in @Martijn's extended answer, you could also try this, which is closer to your original approach and might be easier to understand:

```
def front_x(words):
    has_x, hasnt = [], []
    for word in sorted(words):
        if word.startswith('x'):
            has_x.append(word)
        else:
            hasnt.append(word)
    return has_x + hasnt
```

Concerning what was wrong with your original code, there are actually *three* problems with the line

```
lis[j]=words.pop()[i]
```

1. `lis[j]` only works if the list already has a `j`th element, but as you are adding items to an initially empty list, you should use `lis.append(...)` instead.
2. You want to remove the word starting with "x" at index `i` from the list, but `pop()` will always remove the *last* item. `pop()` is for stacks; never remove items from a list while looping over it by index!
3. You apply the `[i]` operator *after* you've popped the item from the list, i.e., you are accessing the `i`th *letter of the word*, which may be much shorter; thus the `IndexError`.
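Putting those three fixes together, the loop from the question could be written as follows (a minimal sketch; the function name is just illustrative, and it returns only the x-words, as the original loop collected):

```
def collect_x_words(words):
    lis = []
    for word in sorted(words):   # iterate directly, no manual index
        if word.startswith('x'):
            lis.append(word)     # fix 1: append instead of lis[j] = ...
    return lis                   # fixes 2 and 3: no pop(), no stray [i]
```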
9,570,637
Working on getting Celery set up (following the basic tutorial) with a mongodb broker as backend. Following the configuration guidelines set out in the official docs, my `celeryconfig.py` is set up as follows:

```
CELERY_RESULT_BACKEND = "mongodb"
BROKER_BACKEND = "mongodb"
BROKER_URL = "mongodb://user:pass@subdomain.mongolab.com:123456/testdb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host":"subdomain.mongolab.com",
    "port":123456,
    "database":"testdb",
    "taskmeta_collection":"taskmeta",
    "user":"user",
    "pass":"pass",
}
CELERY_IMPORTS = ("tasks",)
```

Running the celeryd with `--loglevel=INFO` returns the following exception, originating in pymongo but bubbling through both kombu and celery.

```
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/__init__.py", line 230, in start
    component.start()
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 338, in start
    self.reset_connection()
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 596, in reset_connection
    on_decode_error=self.on_decode_error)
  File "/usr/local/lib/python2.7/dist-packages/celery/app/amqp.py", line 335, in get_task_consumer
    **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kombu/compat.py", line 187, in __init__
    super(ConsumerSet, self).__init__(self.backend, queues, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 285, in __init__
    self.declare()
  File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 295, in declare
    queue.declare()
  File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 388, in declare
    self.queue_declare(nowait, passive=False)
  File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 408, in queue_declare
    nowait=nowait)
  File "/usr/local/lib/python2.7/dist-packages/kombu/transport/virtual/__init__.py", line 380, in queue_declare
    return queue, self._size(queue), 0
  File "/usr/local/lib/python2.7/dist-packages/kombu/transport/mongodb.py", line 74, in _size
    return self.client.messages.find({"queue": queue}).count()
  File "/usr/local/lib/python2.7/dist-packages/kombu/transport/mongodb.py", line 171, in client
    self._client = self._open()
  File "/usr/local/lib/python2.7/dist-packages/kombu/transport/mongodb.py", line 97, in _open
    mongoconn = Connection(host=conninfo.hostname, port=conninfo.port)
  File "/usr/local/lib/python2.7/dist-packages/pymongo/connection.py", line 325, in __init__
    nodes.update(uri_parser.split_hosts(entity, port))
  File "/usr/local/lib/python2.7/dist-packages/pymongo/uri_parser.py", line 198, in split_hosts
    nodes.append(parse_host(entity, default_port))
  File "/usr/local/lib/python2.7/dist-packages/pymongo/uri_parser.py", line 127, in parse_host
    raise ConfigurationError("Reserved characters such as ':' must be "
ConfigurationError: Reserved characters such as ':' must be escaped according RFC 2396. An IPv6 address literal must be enclosed in '[' and ']' according to RFC 2732.
```

Something about the way Celery is handling the mongouri is not encoding correctly, since it is the uri parser within `pymongo` that is throwing this error. I have tried escaping the `:` characters in the uri string, but all this achieves is resetting the transport back to the default AMQP with a mangled connection string.

```
amqp://guest@localhost:5672/mongodb\http://user\:password@subdomain.mongolab.com\:29217/testdb
```

Which clearly isn't right. I've tried entering the uri in the config as a raw string using `r` and nothing changes.
I know this kind of connection configuration has been supported in Celery since 2.4 (I'm using 2.5.1, pymongo 2.1.1) and the official docs all cite it as the preferred method to connect to a mongodb broker. Could this be a bug, perhaps an incompatibility with the latest pymongo build? If this approach doesn't work, how would one attach the task queue to a replica set, since I assume these have to be passed in the mongouri using the `?replicaSet` parameter? I should note that I'd rather not switch to using a RabbitMQ broker, since Mongo is already in the stack for the app in question and it just seems more intuitive to use what's already there. If there is a concrete reason why Mongo would be less effective for this purpose (the number of tasks per day would be relatively small), I'd love to know! Thanks in advance.
2012/03/05
[ "https://Stackoverflow.com/questions/9570637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/215608/" ]
I think it's a bug. Celery passes the hostname instead of the server URI to kombu, which causes this problem. After tracing the code, I found the following conf to bypass the bug until it is fixed.

```
CELERY_RESULT_BACKEND = 'mongodb'

BROKER_HOST = "subdomain.mongolab.com"
BROKER_PORT = 123456
BROKER_TRANSPORT = 'mongodb'
BROKER_VHOST = 'testdb'

CELERY_IMPORTS = ('tasks',)
CELERY_MONGODB_BACKEND_SETTINGS = {
    'host': 'subdomain.mongolab.com',
    'port': 123456,
    'database': 'testdb',
    'user': user,
    'password': password,
    'taskmeta_collection': 'teskmeta'
}
```

This just repeats the connection settings as individual options instead of a single URL.
Would it help if you remove "user", "pass", "port", and "database" from the CELERY\_MONGODB\_BACKEND\_SETTINGS dict, and do: ``` BROKER_URL = "mongodb://user:pass@subdomain.mongolab.com:123456/testdb" CELERY_MONGODB_BACKEND_SETTINGS = { "host":BROKER_URL, "taskmeta_collection":"taskmeta", } ```
23,320,954
How can I replace '1c' with '\x1c' in Python? I have a list with elements like '12', '13' etc. and want to replace them with '\x12', '\x13' etc. Here is what I tried, which failed:

```
letters=[]
for i in range(10,128,1):
    a=(str(hex(i))).replace('0x','\x')
    letters.append(a)
print letters
```

**What I need is '31' to be replaced by '\x31' ---> '1', not '\x31' or \x31**
2014/04/27
[ "https://Stackoverflow.com/questions/23320954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3559830/" ]
You need to use the built-in function [`chr`](https://docs.python.org/2/library/functions.html#chr) to return the character for each code point (which is the string you are after):

```
>>> [chr(i) for i in range(10,20,1)]
['\n', '\x0b', '\x0c', '\r', '\x0e', '\x0f', '\x10', '\x11', '\x12', '\x13']
```
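Applied to the full range from the question, the whole list can be built in one line; this yields the actual characters (so code point 0x31 becomes `'1'`), which is what the bold note in the question asks for:

```
letters = [chr(i) for i in range(10, 128)]  # characters for code points 10..127
```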
Your code is fine, you just need to escape the `\` with a `\`.

```
letters=[]
for i in range(10,128,1):
    a=(str(hex(i))).replace('0x','\\x') #you have to escape the \
    letters.append(a)
print letters
```

[DEMO](http://repl.it/Rvl/1)
23,320,954
How can I replace '1c' with '\x1c' in Python? I have a list with elements like '12', '13' etc. and want to replace them with '\x12', '\x13' etc. Here is what I tried, which failed:

```
letters=[]
for i in range(10,128,1):
    a=(str(hex(i))).replace('0x','\x')
    letters.append(a)
print letters
```

**What I need is '31' to be replaced by '\x31' ---> '1', not '\x31' or \x31**
2014/04/27
[ "https://Stackoverflow.com/questions/23320954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3559830/" ]
You need to use the built-in function [`chr`](https://docs.python.org/2/library/functions.html#chr) to return the character for each code point (which is the string you are after):

```
>>> [chr(i) for i in range(10,20,1)]
['\n', '\x0b', '\x0c', '\r', '\x0e', '\x0f', '\x10', '\x11', '\x12', '\x13']
```
The best way to do this is with string formatting; then you don't have to replace anything, and the code looks clearer:

```
letters = ['\\x%x' % i for i in range(10, 128)]
print letters
```
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
**Minimal example** ``` from setuptools import setup, find_packages setup( name="foo", version="1.0", packages=find_packages(), ) ``` More info in [docs](https://packaging.python.org/tutorials/packaging-projects/)
Here you will find the simplest possible example of using distutils and setup.py: <https://docs.python.org/2/distutils/introduction.html#distutils-simple-example> It assumes that all your code is in a single file and shows how to package a project containing a single module.
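For reference, that single-module example looks roughly like this (a sketch assuming your code lives in a file `foo.py` next to `setup.py`):

```
from distutils.core import setup

setup(
    name='foo',
    version='1.0',
    py_modules=['foo'],  # packages the single module foo.py
)
```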
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
Complete walkthrough of writing `setup.py` scripts [here](http://docs.python.org/distutils/setupscript.html). (with some examples) If you'd like a real-world example, I could point you towards the `setup.py` scripts of a couple major projects. Django's is [here](http://code.djangoproject.com/browser/django/trunk/setup.py), pyglet's is [here](https://github.com/pyglet/pyglet/blob/master/setup.py). You can just browse the source of other projects for a file named setup.py for more examples. These aren't simple examples; the tutorial link I gave has those. These are more complex, but also more practical.
Look at this complete example <https://github.com/marcindulak/python-mycli> of a small python package. It is based on packaging recommendations from <https://packaging.python.org/en/latest/distributing.html>, uses setup.py with distutils, and in addition shows how to create RPM and deb packages. The project's setup.py is included below (see the repo for the full source):

```
#!/usr/bin/env python

import os
import sys

from distutils.core import setup

name = "mycli"

rootdir = os.path.abspath(os.path.dirname(__file__))

# Restructured text project description read from file
long_description = open(os.path.join(rootdir, 'README.md')).read()

# Python 2.4 or later needed
if sys.version_info < (2, 4, 0, 'final', 0):
    raise SystemExit, 'Python 2.4 or later is required!'

# Build a list of all project modules
packages = []
for dirname, dirnames, filenames in os.walk(name):
    if '__init__.py' in filenames:
        packages.append(dirname.replace('/', '.'))

package_dir = {name: name}

# Data files used e.g. in tests
package_data = {name: [os.path.join(name, 'tests', 'prt.txt')]}

# The current version number - MSI accepts only version X.X.X
exec(open(os.path.join(name, 'version.py')).read())

# Scripts
scripts = []
for dirname, dirnames, filenames in os.walk('scripts'):
    for filename in filenames:
        if not filename.endswith('.bat'):
            scripts.append(os.path.join(dirname, filename))

# Provide bat executables in the tarball (always for Win)
if 'sdist' in sys.argv or os.name in ['ce', 'nt']:
    for s in scripts[:]:
        scripts.append(s + '.bat')

# Data_files (e.g. doc) needs (directory, files-in-this-directory) tuples
data_files = []
for dirname, dirnames, filenames in os.walk('doc'):
    fileslist = []
    for filename in filenames:
        fullname = os.path.join(dirname, filename)
        fileslist.append(fullname)
    data_files.append(('share/' + name + '/' + dirname, fileslist))

setup(name='python-' + name,
      version=version,  # PEP440
      description='mycli - shows some argparse features',
      long_description=long_description,
      url='https://github.com/marcindulak/python-mycli',
      author='Marcin Dulak',
      author_email='X.Y@Z.com',
      license='ASL',
      # https://pypi.python.org/pypi?%3Aaction=list_classifiers
      classifiers=[
          'Development Status :: 1 - Planning',
          'Environment :: Console',
          'License :: OSI Approved :: Apache Software License',
          'Natural Language :: English',
          'Operating System :: OS Independent',
          'Programming Language :: Python :: 2',
          'Programming Language :: Python :: 2.4',
          'Programming Language :: Python :: 2.5',
          'Programming Language :: Python :: 2.6',
          'Programming Language :: Python :: 2.7',
          'Programming Language :: Python :: 3',
          'Programming Language :: Python :: 3.2',
          'Programming Language :: Python :: 3.3',
          'Programming Language :: Python :: 3.4',
      ],
      keywords='argparse distutils cli unittest RPM spec deb',
      packages=packages,
      package_dir=package_dir,
      package_data=package_data,
      scripts=scripts,
      data_files=data_files,
      )
```

and an RPM spec file, which more or less follows Fedora/EPEL packaging guidelines, may look like:

```
# Failsafe backport of Python2-macros for RHEL <= 6
%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
%{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}
%{!?python_version: %global python_version %(%{__python} -c "import sys; sys.stdout.write(sys.version[:3])")}
%{!?__python2: %global __python2 %{__python}}
%{!?python2_sitelib: %global python2_sitelib %{python_sitelib}}
%{!?python2_sitearch: %global python2_sitearch %{python_sitearch}} %{!?python2_version: %global python2_version %{python_version}} %{!?python2_minor_version: %define python2_minor_version %(%{__python} -c "import sys ; print sys.version[2:3]")} %global upstream_name mycli Name: python-%{upstream_name} Version: 0.0.1 Release: 1%{?dist} Summary: A Python program that demonstrates usage of argparse %{?el5:Group: Applications/Scientific} License: ASL 2.0 URL: https://github.com/marcindulak/%{name} Source0: https://github.com/marcindulak/%{name}/%{name}-%{version}.tar.gz %{?el5:BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)} BuildArch: noarch %if 0%{?suse_version} BuildRequires: python-devel %else BuildRequires: python2-devel %endif %description A Python program that demonstrates usage of argparse. %prep %setup -qn %{name}-%{version} %build %{__python2} setup.py build %install %{?el5:rm -rf $RPM_BUILD_ROOT} %{__python2} setup.py install --skip-build --prefix=%{_prefix} \ --optimize=1 --root $RPM_BUILD_ROOT %check export PYTHONPATH=`pwd`/build/lib export PATH=`pwd`/build/scripts-%{python2_version}:${PATH} %if 0%{python2_minor_version} >= 7 %{__python2} -m unittest discover -s %{upstream_name}/tests -p '*.py' %endif %clean %{?el5:rm -rf $RPM_BUILD_ROOT} %files %doc LICENSE README.md %{_bindir}/* %{python2_sitelib}/%{upstream_name} %{?!el5:%{python2_sitelib}/*.egg-info} %changelog * Wed Jan 14 2015 Marcin Dulak <X.Y@Z.com> - 0.0.1-1 - initial version ```
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
Complete walkthrough of writing `setup.py` scripts [here](http://docs.python.org/distutils/setupscript.html). (with some examples) If you'd like a real-world example, I could point you towards the `setup.py` scripts of a couple major projects. Django's is [here](http://code.djangoproject.com/browser/django/trunk/setup.py), pyglet's is [here](https://github.com/pyglet/pyglet/blob/master/setup.py). You can just browse the source of other projects for a file named setup.py for more examples. These aren't simple examples; the tutorial link I gave has those. These are more complex, but also more practical.
**Minimal example** ``` from setuptools import setup, find_packages setup( name="foo", version="1.0", packages=find_packages(), ) ``` More info in [docs](https://packaging.python.org/tutorials/packaging-projects/)
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
**READ THIS FIRST** <https://packaging.python.org/en/latest/current.html>

> Installation Tool Recommendations
> =================================
>
> 1. Use pip to install Python packages from PyPI.
> 2. Use virtualenv, or pyvenv to isolate application specific dependencies from a shared Python installation.
> 3. Use pip wheel to create a cache of wheel distributions, for the purpose of speeding up subsequent installations.
> 4. If you’re looking for management of fully integrated cross-platform software stacks, consider buildout (primarily focused on the web development community) or Hashdist, or conda (both primarily focused on the scientific community).
>
> Packaging Tool Recommendations
> ==============================
>
> 1. Use setuptools to define projects and create Source Distributions.
> 2. Use the bdist\_wheel setuptools extension available from the wheel project to create wheels. This is especially beneficial, if your project contains binary extensions.
> 3. Use twine for uploading distributions to PyPI.

---

This answer has aged, and indeed there is a rescue plan for the python packaging world, called the

wheels way
==========

I quote [pythonwheels.com](http://pythonwheels.com/) here:

> **What are wheels?**
>
> Wheels are the new standard of python distribution and are intended to replace eggs. Support is offered in pip >= 1.4 and setuptools >= 0.8.

Advantages of wheels

1. Faster installation for pure python and native C extension packages.
2. Avoids arbitrary code execution for installation. (Avoids setup.py)
3. Installation of a C extension does not require a compiler on Windows or OS X.
4. Allows better caching for testing and continuous integration.
5. Creates .pyc files as part of installation to ensure they match the python interpreter used.
6. More consistent installs across platforms and machines.

The full story of correct python packaging (and about wheels) is covered at [packaging.python.org](https://packaging.python.org/en/latest/distributing.html)

---

conda way
=========

For scientific computing (this is also recommended on packaging.python.org, see above) I would consider using [CONDA packaging](http://conda.pydata.org/docs/), which can be seen as a 3rd-party service built on top of PyPI and pip tools. It also works great for setting up your own version of [binstar](https://binstar.org/), so I would imagine it can do the trick for sophisticated custom enterprise package management.

Conda can be installed into a user folder (no super user permissions) and works like magic with

> conda install

and powerful virtual env expansion.

---

eggs way
========

*This option was related to python-distribute.org and is largely outdated (as well as the site), so let me point you to one of the ready-to-use yet compact setup.py examples I like:*

* A very practical example/implementation of mixing scripts and single python files into setup.py is given [here](https://stackoverflow.com/questions/10458158/python-setup-py-configuration-to-install-files-in-custom-directories)
* An even better one from [hyperopt](https://github.com/hyperopt/hyperopt-convnet/blob/master/setup.py)

This quote was taken from the guide on the **state of setup.py** and still applies:

* setup.py gone!
* distutils gone!
* distribute gone!
* pip and virtualenv here to stay!
* eggs ... gone!

I add one more point (from me)

* **wheels**!
I would recommend getting some understanding of the [packaging-ecosystem](http://guide.python-distribute.org/introduction.html#the-packaging-ecosystem) (from the guide pointed to by gotgenes) before attempting mindless copy-pasting. Most of the examples out there on the Internet start with

```
from distutils.core import setup
```

but this, for example, does not support building an egg with **python setup.py bdist\_egg** (as well as some other *old* features), which are available with

```
from setuptools import setup
```

And the reason is that they are **deprecated**. Now according to the guide

> Warning
>
> Please use the Distribute package rather than the Setuptools package because there are problems in this package that can and will not be fixed.

the deprecated setuptools is to be replaced by [distutils2](http://alexis.notmyidea.org/distutils2/tutorial.html), which "will be part of the standard library in Python 3.3". I must say I liked setuptools and eggs and have not yet been completely convinced by the convenience of distutils2. It requires

```
pip install Distutils2
```

and to install

```
python -m distutils2.run install
```

PS: Packaging never was trivial (one learns this by trying to develop a new one), so I assume a lot of things have gone for a reason. I just hope this time it will be done correctly.
Here you will find the simplest possible example of using distutils and setup.py: <https://docs.python.org/2/distutils/introduction.html#distutils-simple-example> It assumes that all your code is in a single file and shows how to package a project containing a single module.
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
Complete walkthrough of writing `setup.py` scripts [here](http://docs.python.org/distutils/setupscript.html). (with some examples) If you'd like a real-world example, I could point you towards the `setup.py` scripts of a couple major projects. Django's is [here](http://code.djangoproject.com/browser/django/trunk/setup.py), pyglet's is [here](https://github.com/pyglet/pyglet/blob/master/setup.py). You can just browse the source of other projects for a file named setup.py for more examples. These aren't simple examples; the tutorial link I gave has those. These are more complex, but also more practical.
**READ THIS FIRST** <https://packaging.python.org/en/latest/current.html>

> Installation Tool Recommendations
> =================================
>
> 1. Use pip to install Python packages from PyPI.
> 2. Use virtualenv, or pyvenv to isolate application specific dependencies from a shared Python installation.
> 3. Use pip wheel to create a cache of wheel distributions, for the purpose of speeding up subsequent installations.
> 4. If you’re looking for management of fully integrated cross-platform software stacks, consider buildout (primarily focused on the web development community) or Hashdist, or conda (both primarily focused on the scientific community).
>
> Packaging Tool Recommendations
> ==============================
>
> 1. Use setuptools to define projects and create Source Distributions.
> 2. Use the bdist\_wheel setuptools extension available from the wheel project to create wheels. This is especially beneficial, if your project contains binary extensions.
> 3. Use twine for uploading distributions to PyPI.

---

This answer has aged, and indeed there is a rescue plan for the python packaging world, called the

wheels way
==========

I quote [pythonwheels.com](http://pythonwheels.com/) here:

> **What are wheels?**
>
> Wheels are the new standard of python distribution and are intended to replace eggs. Support is offered in pip >= 1.4 and setuptools >= 0.8.

Advantages of wheels

1. Faster installation for pure python and native C extension packages.
2. Avoids arbitrary code execution for installation. (Avoids setup.py)
3. Installation of a C extension does not require a compiler on Windows or OS X.
4. Allows better caching for testing and continuous integration.
5. Creates .pyc files as part of installation to ensure they match the python interpreter used.
6. More consistent installs across platforms and machines.

The full story of correct python packaging (and about wheels) is covered at [packaging.python.org](https://packaging.python.org/en/latest/distributing.html)

---

conda way
=========

For scientific computing (this is also recommended on packaging.python.org, see above) I would consider using [CONDA packaging](http://conda.pydata.org/docs/), which can be seen as a 3rd-party service built on top of PyPI and pip tools. It also works great for setting up your own version of [binstar](https://binstar.org/), so I would imagine it can do the trick for sophisticated custom enterprise package management.

Conda can be installed into a user folder (no super user permissions) and works like magic with

> conda install

and powerful virtual env expansion.

---

eggs way
========

*This option was related to python-distribute.org and is largely outdated (as well as the site), so let me point you to one of the ready-to-use yet compact setup.py examples I like:*

* A very practical example/implementation of mixing scripts and single python files into setup.py is given [here](https://stackoverflow.com/questions/10458158/python-setup-py-configuration-to-install-files-in-custom-directories)
* An even better one from [hyperopt](https://github.com/hyperopt/hyperopt-convnet/blob/master/setup.py)

This quote was taken from the guide on the **state of setup.py** and still applies:

* setup.py gone!
* distutils gone!
* distribute gone!
* pip and virtualenv here to stay!
* eggs ... gone!

I add one more point (from me)

* **wheels**!
I would recommend getting some understanding of the [packaging-ecosystem](http://guide.python-distribute.org/introduction.html#the-packaging-ecosystem) (from the guide pointed to by gotgenes) before attempting mindless copy-pasting. Most of the examples out there on the Internet start with

```
from distutils.core import setup
```

but this, for example, does not support building an egg with **python setup.py bdist\_egg** (as well as some other *old* features), which are available with

```
from setuptools import setup
```

And the reason is that they are **deprecated**. Now according to the guide

> Warning
>
> Please use the Distribute package rather than the Setuptools package because there are problems in this package that can and will not be fixed.

the deprecated setuptools is to be replaced by [distutils2](http://alexis.notmyidea.org/distutils2/tutorial.html), which "will be part of the standard library in Python 3.3". I must say I liked setuptools and eggs and have not yet been completely convinced by the convenience of distutils2. It requires

```
pip install Distutils2
```

and to install

```
python -m distutils2.run install
```

PS: Packaging never was trivial (one learns this by trying to develop a new one), so I assume a lot of things have gone for a reason. I just hope this time it will be done correctly.
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
Look at this complete example <https://github.com/marcindulak/python-mycli> of a small python package. It is based on packaging recommendations from <https://packaging.python.org/en/latest/distributing.html>, uses setup.py with distutils, and in addition shows how to create RPM and deb packages. The project's setup.py is included below (see the repo for the full source):

```
#!/usr/bin/env python

import os
import sys

from distutils.core import setup

name = "mycli"

rootdir = os.path.abspath(os.path.dirname(__file__))

# Restructured text project description read from file
long_description = open(os.path.join(rootdir, 'README.md')).read()

# Python 2.4 or later needed
if sys.version_info < (2, 4, 0, 'final', 0):
    raise SystemExit, 'Python 2.4 or later is required!'

# Build a list of all project modules
packages = []
for dirname, dirnames, filenames in os.walk(name):
    if '__init__.py' in filenames:
        packages.append(dirname.replace('/', '.'))

package_dir = {name: name}

# Data files used e.g. in tests
package_data = {name: [os.path.join(name, 'tests', 'prt.txt')]}

# The current version number - MSI accepts only version X.X.X
exec(open(os.path.join(name, 'version.py')).read())

# Scripts
scripts = []
for dirname, dirnames, filenames in os.walk('scripts'):
    for filename in filenames:
        if not filename.endswith('.bat'):
            scripts.append(os.path.join(dirname, filename))

# Provide bat executables in the tarball (always for Win)
if 'sdist' in sys.argv or os.name in ['ce', 'nt']:
    for s in scripts[:]:
        scripts.append(s + '.bat')

# Data_files (e.g. doc) needs (directory, files-in-this-directory) tuples
data_files = []
for dirname, dirnames, filenames in os.walk('doc'):
    fileslist = []
    for filename in filenames:
        fullname = os.path.join(dirname, filename)
        fileslist.append(fullname)
    data_files.append(('share/' + name + '/' + dirname, fileslist))

setup(name='python-' + name,
      version=version,  # PEP440
      description='mycli - shows some argparse features',
      long_description=long_description,
      url='https://github.com/marcindulak/python-mycli',
      author='Marcin Dulak',
      author_email='X.Y@Z.com',
      license='ASL',
      # https://pypi.python.org/pypi?%3Aaction=list_classifiers
      classifiers=[
          'Development Status :: 1 - Planning',
          'Environment :: Console',
          'License :: OSI Approved :: Apache Software License',
          'Natural Language :: English',
          'Operating System :: OS Independent',
          'Programming Language :: Python :: 2',
          'Programming Language :: Python :: 2.4',
          'Programming Language :: Python :: 2.5',
          'Programming Language :: Python :: 2.6',
          'Programming Language :: Python :: 2.7',
          'Programming Language :: Python :: 3',
          'Programming Language :: Python :: 3.2',
          'Programming Language :: Python :: 3.3',
          'Programming Language :: Python :: 3.4',
      ],
      keywords='argparse distutils cli unittest RPM spec deb',
      packages=packages,
      package_dir=package_dir,
      package_data=package_data,
      scripts=scripts,
      data_files=data_files,
      )
```

and an RPM spec file, which more or less follows Fedora/EPEL packaging guidelines, may look like:

```
# Failsafe backport of Python2-macros for RHEL <= 6
%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
%{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}
%{!?python_version: %global python_version %(%{__python} -c "import sys; sys.stdout.write(sys.version[:3])")}
%{!?__python2: %global __python2 %{__python}}
%{!?python2_sitelib: %global python2_sitelib %{python_sitelib}}
%{!?python2_sitearch: %global python2_sitearch %{python_sitearch}} %{!?python2_version: %global python2_version %{python_version}} %{!?python2_minor_version: %define python2_minor_version %(%{__python} -c "import sys ; print sys.version[2:3]")} %global upstream_name mycli Name: python-%{upstream_name} Version: 0.0.1 Release: 1%{?dist} Summary: A Python program that demonstrates usage of argparse %{?el5:Group: Applications/Scientific} License: ASL 2.0 URL: https://github.com/marcindulak/%{name} Source0: https://github.com/marcindulak/%{name}/%{name}-%{version}.tar.gz %{?el5:BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)} BuildArch: noarch %if 0%{?suse_version} BuildRequires: python-devel %else BuildRequires: python2-devel %endif %description A Python program that demonstrates usage of argparse. %prep %setup -qn %{name}-%{version} %build %{__python2} setup.py build %install %{?el5:rm -rf $RPM_BUILD_ROOT} %{__python2} setup.py install --skip-build --prefix=%{_prefix} \ --optimize=1 --root $RPM_BUILD_ROOT %check export PYTHONPATH=`pwd`/build/lib export PATH=`pwd`/build/scripts-%{python2_version}:${PATH} %if 0%{python2_minor_version} >= 7 %{__python2} -m unittest discover -s %{upstream_name}/tests -p '*.py' %endif %clean %{?el5:rm -rf $RPM_BUILD_ROOT} %files %doc LICENSE README.md %{_bindir}/* %{python2_sitelib}/%{upstream_name} %{?!el5:%{python2_sitelib}/*.egg-info} %changelog * Wed Jan 14 2015 Marcin Dulak <X.Y@Z.com> - 0.0.1-1 - initial version ```
Here is the utility I wrote to generate a simple *setup.py* file (template) with useful comments and links. I hope it will be useful.

Installation
------------

```sh
sudo pip install setup-py-cli
```

Usage
-----

To generate a *setup.py* file, just type in the terminal:

```sh
setup-py
```

Now a *setup.py* file should appear in the current directory.

Generated setup.py
------------------

```py
from distutils.core import setup
from setuptools import find_packages
import os


# User-friendly description from README.md
current_directory = os.path.dirname(os.path.abspath(__file__))
try:
    with open(os.path.join(current_directory, 'README.md'), encoding='utf-8') as f:
        long_description = f.read()
except Exception:
    long_description = ''


setup(
    # Name of the package
    name=<name of current directory>,
    # Packages to include into the distribution
    packages=find_packages('.'),
    # Start with a small number and increase it with every change you make
    # https://semver.org
    version='1.0.0',
    # Choose a license from here: https://help.github.com/articles/licensing-a-repository
    # For example: MIT
    license='',
    # Short description of your library
    description='',
    # Long description of your library
    long_description = long_description,
    long_description_content_type = 'text/markdown',
    # Your name
    author='',
    # Your email
    author_email='',
    # Either the link to your github or to your website
    url='',
    # Link from which the project can be downloaded
    download_url='',
    # List of keyword arguments
    keywords=[],
    # List of packages to install with this one
    install_requires=[],
    # https://pypi.org/classifiers/
    classifiers=[]
)
```

Content of the generated *setup.py*:

* automatically filled-in package name based on the name of the current directory.
* some basic fields to fill in.
* clarifying comments and links to useful resources.
* automatically inserted description from *README.md* or an empty string if there is no *README.md*.

Here is the [link](https://github.com/VoIlAlex/setup-py-cli) to the repository. Feel free to enhance the solution.
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
**READ THIS FIRST** <https://packaging.python.org/en/latest/current.html>

> 
> Installation Tool Recommendations
> =================================
> 
> 
> 1. Use pip to install Python packages from PyPI.
> 2. Use virtualenv, or pyvenv to isolate application specific dependencies from a shared Python installation.
> 3. Use pip wheel to create a cache of wheel distributions, for the purpose of speeding up subsequent installations.
> 4. If you’re looking for management of fully integrated cross-platform software stacks, consider buildout (primarily focused on the web development community) or Hashdist, or conda (both primarily focused on the scientific community).
> 
> 
> Packaging Tool Recommendations
> ==============================
> 
> 
> 1. Use setuptools to define projects and create Source Distributions.
> 2. Use the bdist\_wheel setuptools extension available from the wheel project to create wheels. This is especially beneficial if your project contains binary extensions.
> 3. Use twine for uploading distributions to PyPI.
> 
> 

---

This answer has aged, and indeed there is a rescue plan for the python packaging world, called

wheels way
==========

I quote [pythonwheels.com](http://pythonwheels.com/) here:

> 
> **What are wheels?**
> 
> 
> Wheels are the new standard of python distribution
> and are intended to replace eggs. Support is offered in pip >= 1.4 and
> setuptools >= 0.8.
> 
> 

Advantages of wheels

1. Faster installation for pure python and native C extension packages.
2. Avoids arbitrary code execution for installation. (Avoids setup.py)
3. Installation of a C extension does not require a compiler on Windows or OS X.
4. Allows better caching for testing and continuous integration.
5. Creates .pyc files as part of installation to ensure they match the python interpreter used.
6. More consistent installs across platforms and machines.

The full story of correct python packaging (and about wheels) is covered at [packaging.python.org](https://packaging.python.org/en/latest/distributing.html)

---

conda way
=========

For scientific computing (this is also recommended on packaging.python.org, see above) I would consider using [CONDA packaging](http://conda.pydata.org/docs/), which can be seen as a 3rd party service built on top of PyPI and pip tools. It also works great for setting up your own version of [binstar](https://binstar.org/), so I would imagine it can do the trick for sophisticated custom enterprise package management.

Conda can be installed into a user folder (no super user permissions) and works like magic with

> 
> conda install
> 
> 

and powerful virtual env expansion.

---

eggs way
========

*This option was related to python-distribute.org and is largely outdated (as well as the site), so let me point you to one of the ready-to-use yet compact setup.py examples I like:*

* A very practical example/implementation of mixing scripts and single python files into setup.py is given [here](https://stackoverflow.com/questions/10458158/python-setup-py-configuration-to-install-files-in-custom-directories)
* An even better one from [hyperopt](https://github.com/hyperopt/hyperopt-convnet/blob/master/setup.py)

This quote was taken from the guide on the **state of setup.py** and still applies:

* setup.py gone!
* distutils gone!
* distribute gone!
* pip and virtualenv here to stay!
* eggs ... gone!

I add one more point (from me)

* **wheels**!
I would recommend getting some understanding of the [packaging-ecosystem](http://guide.python-distribute.org/introduction.html#the-packaging-ecosystem) (from the guide pointed to by gotgenes) before attempting mindless copy-pasting.

Most examples out there on the Internet start with

```
from distutils.core import setup
```

but this, for example, does not support building an egg **python setup.py bdist\_egg** (as well as some other *old* features), which were available in

```
from setuptools import setup
```

And the reason is that they are **deprecated**.

Now according to the guide

> 
> Warning
> 
> 
> Please use the Distribute package rather than the Setuptools package
> because there are problems in this package that can and will not be
> fixed.
> 
> 

deprecated setuptools are to be replaced by [distutils2](http://alexis.notmyidea.org/distutils2/tutorial.html), which "will be part of the standard library in Python 3.3". I must say I liked setuptools and eggs and have not yet been completely convinced by the convenience of distutils2.

It requires

```
pip install Distutils2
```

and to install

```
python -m distutils2.run install
```

PS
==

Packaging never was trivial (one learns this by trying to develop a new one), so I assume a lot of things have gone for a reason. I just hope this time it will be done correctly.
Look at this complete example <https://github.com/marcindulak/python-mycli> of a small python package. It is based on packaging recommendations from <https://packaging.python.org/en/latest/distributing.html>, uses setup.py with distutils and in addition shows how to create RPM and deb packages. The project's setup.py is included below (see the repo for the full source):

```
#!/usr/bin/env python

import os
import sys

from distutils.core import setup

name = "mycli"

rootdir = os.path.abspath(os.path.dirname(__file__))

# Long project description read from file (README.md)
long_description = open(os.path.join(rootdir, 'README.md')).read()

# Python 2.4 or later needed
if sys.version_info < (2, 4, 0, 'final', 0):
    raise SystemExit, 'Python 2.4 or later is required!'

# Build a list of all project modules
packages = []
for dirname, dirnames, filenames in os.walk(name):
    if '__init__.py' in filenames:
        packages.append(dirname.replace('/', '.'))

package_dir = {name: name}

# Data files used e.g. in tests
package_data = {name: [os.path.join(name, 'tests', 'prt.txt')]}

# The current version number - MSI accepts only version X.X.X
exec(open(os.path.join(name, 'version.py')).read())

# Scripts
scripts = []
for dirname, dirnames, filenames in os.walk('scripts'):
    for filename in filenames:
        if not filename.endswith('.bat'):
            scripts.append(os.path.join(dirname, filename))

# Provide bat executables in the tarball (always for Win)
if 'sdist' in sys.argv or os.name in ['ce', 'nt']:
    for s in scripts[:]:
        scripts.append(s + '.bat')

# Data_files (e.g. doc) needs (directory, files-in-this-directory) tuples
data_files = []
for dirname, dirnames, filenames in os.walk('doc'):
    fileslist = []
    for filename in filenames:
        fullname = os.path.join(dirname, filename)
        fileslist.append(fullname)
    data_files.append(('share/' + name + '/' + dirname, fileslist))

setup(name='python-' + name,
      version=version,  # PEP440
      description='mycli - shows some argparse features',
      long_description=long_description,
      url='https://github.com/marcindulak/python-mycli',
      author='Marcin Dulak',
      author_email='X.Y@Z.com',
      license='ASL',
      # https://pypi.python.org/pypi?%3Aaction=list_classifiers
      classifiers=[
          'Development Status :: 1 - Planning',
          'Environment :: Console',
          'License :: OSI Approved :: Apache Software License',
          'Natural Language :: English',
          'Operating System :: OS Independent',
          'Programming Language :: Python :: 2',
          'Programming Language :: Python :: 2.4',
          'Programming Language :: Python :: 2.5',
          'Programming Language :: Python :: 2.6',
          'Programming Language :: Python :: 2.7',
          'Programming Language :: Python :: 3',
          'Programming Language :: Python :: 3.2',
          'Programming Language :: Python :: 3.3',
          'Programming Language :: Python :: 3.4',
      ],
      keywords='argparse distutils cli unittest RPM spec deb',
      packages=packages,
      package_dir=package_dir,
      package_data=package_data,
      scripts=scripts,
      data_files=data_files,
      )
```

and an RPM spec file which more or less follows Fedora/EPEL packaging guidelines may look like:

```
# Failsafe backport of Python2-macros for RHEL <= 6
%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
%{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}
%{!?python_version: %global python_version %(%{__python} -c "import sys; sys.stdout.write(sys.version[:3])")}
%{!?__python2: %global __python2 %{__python}}
%{!?python2_sitelib: %global python2_sitelib %{python_sitelib}}
%{!?python2_sitearch: %global python2_sitearch %{python_sitearch}} %{!?python2_version: %global python2_version %{python_version}} %{!?python2_minor_version: %define python2_minor_version %(%{__python} -c "import sys ; print sys.version[2:3]")} %global upstream_name mycli Name: python-%{upstream_name} Version: 0.0.1 Release: 1%{?dist} Summary: A Python program that demonstrates usage of argparse %{?el5:Group: Applications/Scientific} License: ASL 2.0 URL: https://github.com/marcindulak/%{name} Source0: https://github.com/marcindulak/%{name}/%{name}-%{version}.tar.gz %{?el5:BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)} BuildArch: noarch %if 0%{?suse_version} BuildRequires: python-devel %else BuildRequires: python2-devel %endif %description A Python program that demonstrates usage of argparse. %prep %setup -qn %{name}-%{version} %build %{__python2} setup.py build %install %{?el5:rm -rf $RPM_BUILD_ROOT} %{__python2} setup.py install --skip-build --prefix=%{_prefix} \ --optimize=1 --root $RPM_BUILD_ROOT %check export PYTHONPATH=`pwd`/build/lib export PATH=`pwd`/build/scripts-%{python2_version}:${PATH} %if 0%{python2_minor_version} >= 7 %{__python2} -m unittest discover -s %{upstream_name}/tests -p '*.py' %endif %clean %{?el5:rm -rf $RPM_BUILD_ROOT} %files %doc LICENSE README.md %{_bindir}/* %{python2_sitelib}/%{upstream_name} %{?!el5:%{python2_sitelib}/*.egg-info} %changelog * Wed Jan 14 2015 Marcin Dulak <X.Y@Z.com> - 0.0.1-1 - initial version ```
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
Complete walkthrough of writing `setup.py` scripts [here](http://docs.python.org/distutils/setupscript.html) (with some examples).

If you'd like a real-world example, I could point you towards the `setup.py` scripts of a couple of major projects. Django's is [here](http://code.djangoproject.com/browser/django/trunk/setup.py), pyglet's is [here](https://github.com/pyglet/pyglet/blob/master/setup.py). You can just browse the source of other projects for a file named setup.py for more examples.

These aren't simple examples; the tutorial link I gave has those. These are more complex, but also more practical.
I recommend the [setup.py](https://github.com/pypa/sampleproject/blob/master/setup.py) of the [Python Packaging User Guide](https://packaging.python.org/)'s example project. The Python Packaging User Guide "aims to be the authoritative resource on how to package, publish and install Python distributions using current tools".
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
**Minimal example** ``` from setuptools import setup, find_packages setup( name="foo", version="1.0", packages=find_packages(), ) ``` More info in [docs](https://packaging.python.org/tutorials/packaging-projects/)
Here is the utility I wrote to generate a simple *setup.py* file (template) with useful comments and links. I hope it will be useful.

Installation
------------

```sh
sudo pip install setup-py-cli
```

Usage
-----

To generate a *setup.py* file, just type in the terminal:

```sh
setup-py
```

Now a *setup.py* file should appear in the current directory.

Generated setup.py
------------------

```py
from distutils.core import setup
from setuptools import find_packages
import os


# User-friendly description from README.md
current_directory = os.path.dirname(os.path.abspath(__file__))
try:
    with open(os.path.join(current_directory, 'README.md'), encoding='utf-8') as f:
        long_description = f.read()
except Exception:
    long_description = ''


setup(
    # Name of the package
    name=<name of current directory>,
    # Packages to include into the distribution
    packages=find_packages('.'),
    # Start with a small number and increase it with every change you make
    # https://semver.org
    version='1.0.0',
    # Choose a license from here: https://help.github.com/articles/licensing-a-repository
    # For example: MIT
    license='',
    # Short description of your library
    description='',
    # Long description of your library
    long_description = long_description,
    long_description_content_type = 'text/markdown',
    # Your name
    author='',
    # Your email
    author_email='',
    # Either the link to your github or to your website
    url='',
    # Link from which the project can be downloaded
    download_url='',
    # List of keyword arguments
    keywords=[],
    # List of packages to install with this one
    install_requires=[],
    # https://pypi.org/classifiers/
    classifiers=[]
)
```

Content of the generated *setup.py*:

* automatically filled-in package name based on the name of the current directory.
* some basic fields to fill in.
* clarifying comments and links to useful resources.
* automatically inserted description from *README.md* or an empty string if there is no *README.md*.

Here is the [link](https://github.com/VoIlAlex/setup-py-cli) to the repository. Feel free to enhance the solution.
4,740,473
After studying this page: <http://docs.python.org/distutils/builtdist.html> I am hoping to find some setup.py files to study so as to make my own (with the goal of making a fedora rpm file). Could the s.o. community point me towards some good examples?
2011/01/19
[ "https://Stackoverflow.com/questions/4740473", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62255/" ]
**READ THIS FIRST** <https://packaging.python.org/en/latest/current.html>

> 
> Installation Tool Recommendations
> =================================
> 
> 
> 1. Use pip to install Python packages from PyPI.
> 2. Use virtualenv, or pyvenv to isolate application specific dependencies from a shared Python installation.
> 3. Use pip wheel to create a cache of wheel distributions, for the purpose of speeding up subsequent installations.
> 4. If you’re looking for management of fully integrated cross-platform software stacks, consider buildout (primarily focused on the web development community) or Hashdist, or conda (both primarily focused on the scientific community).
> 
> 
> Packaging Tool Recommendations
> ==============================
> 
> 
> 1. Use setuptools to define projects and create Source Distributions.
> 2. Use the bdist\_wheel setuptools extension available from the wheel project to create wheels. This is especially beneficial if your project contains binary extensions.
> 3. Use twine for uploading distributions to PyPI.
> 
> 

---

This answer has aged, and indeed there is a rescue plan for the python packaging world, called

wheels way
==========

I quote [pythonwheels.com](http://pythonwheels.com/) here:

> 
> **What are wheels?**
> 
> 
> Wheels are the new standard of python distribution
> and are intended to replace eggs. Support is offered in pip >= 1.4 and
> setuptools >= 0.8.
> 
> 

Advantages of wheels

1. Faster installation for pure python and native C extension packages.
2. Avoids arbitrary code execution for installation. (Avoids setup.py)
3. Installation of a C extension does not require a compiler on Windows or OS X.
4. Allows better caching for testing and continuous integration.
5. Creates .pyc files as part of installation to ensure they match the python interpreter used.
6. More consistent installs across platforms and machines.

The full story of correct python packaging (and about wheels) is covered at [packaging.python.org](https://packaging.python.org/en/latest/distributing.html)

---

conda way
=========

For scientific computing (this is also recommended on packaging.python.org, see above) I would consider using [CONDA packaging](http://conda.pydata.org/docs/), which can be seen as a 3rd party service built on top of PyPI and pip tools. It also works great for setting up your own version of [binstar](https://binstar.org/), so I would imagine it can do the trick for sophisticated custom enterprise package management.

Conda can be installed into a user folder (no super user permissions) and works like magic with

> 
> conda install
> 
> 

and powerful virtual env expansion.

---

eggs way
========

*This option was related to python-distribute.org and is largely outdated (as well as the site), so let me point you to one of the ready-to-use yet compact setup.py examples I like:*

* A very practical example/implementation of mixing scripts and single python files into setup.py is given [here](https://stackoverflow.com/questions/10458158/python-setup-py-configuration-to-install-files-in-custom-directories)
* An even better one from [hyperopt](https://github.com/hyperopt/hyperopt-convnet/blob/master/setup.py)

This quote was taken from the guide on the **state of setup.py** and still applies:

* setup.py gone!
* distutils gone!
* distribute gone!
* pip and virtualenv here to stay!
* eggs ... gone!

I add one more point (from me)

* **wheels**!
I would recommend getting some understanding of the [packaging-ecosystem](http://guide.python-distribute.org/introduction.html#the-packaging-ecosystem) (from the guide pointed to by gotgenes) before attempting mindless copy-pasting.

Most examples out there on the Internet start with

```
from distutils.core import setup
```

but this, for example, does not support building an egg **python setup.py bdist\_egg** (as well as some other *old* features), which were available in

```
from setuptools import setup
```

And the reason is that they are **deprecated**.

Now according to the guide

> 
> Warning
> 
> 
> Please use the Distribute package rather than the Setuptools package
> because there are problems in this package that can and will not be
> fixed.
> 
> 

deprecated setuptools are to be replaced by [distutils2](http://alexis.notmyidea.org/distutils2/tutorial.html), which "will be part of the standard library in Python 3.3". I must say I liked setuptools and eggs and have not yet been completely convinced by the convenience of distutils2.

It requires

```
pip install Distutils2
```

and to install

```
python -m distutils2.run install
```

PS
==

Packaging never was trivial (one learns this by trying to develop a new one), so I assume a lot of things have gone for a reason. I just hope this time it will be done correctly.
I recommend the [setup.py](https://github.com/pypa/sampleproject/blob/master/setup.py) of the [Python Packaging User Guide](https://packaging.python.org/)'s example project. The Python Packaging User Guide "aims to be the authoritative resource on how to package, publish and install Python distributions using current tools".
46,229,543
I have written a fraction adder in Python for my computer science class. However, I am running into problems with the final answer reduction procedure. The procedure uses the "not equal" comparison operator **!=** at the start of a **for** loop to test whether, when dividing the numerator and denominator, there will be a remainder. If there will be a remainder (numerator % denominator ≠ 0), the procedure executes: each gets divided by **n**, then **n** increments and the **for** loop runs again. This continues until they divide evenly into each other.

Firstly, I am receiving a syntax error:

```
python FractionAdder.py 2 4 6 8
  File "FractionAdder.py", line 23
    for ansnum % n != 0 and ansdenom % n != 0:
               ^
SyntaxError: invalid syntax
```

Secondly, the **for** loop is not fully robust. My intended purpose was to have it reduce the final answer to its simplest form, but right now, it is only continuing to increment **n** and reduce until the numerator and denominator divide into each other evenly. This is a problem: 3 divides evenly into 6, but 3/6 is not in its simplest form.

May I have some suggestions as to how to improve the robustness of my procedure, such that **n** continues to increment and the loop keeps cycling until the simplest form has been achieved? (Is there a better way to structure my conditional to achieve this?)

Full Code:

```
import sys

num1 = int(sys.argv[1])
denom1 = int(sys.argv[2])
num2 = int(sys.argv[3])
denom2 = int(sys.argv[4])

n = 1

# Find common denominators and adjust both fractions accordingly.

while denom1 != denom2:
    denom1 = denom1 * denom2
    num1 = num1 * denom2
    denom2 = denom2 * denom1
    num2 = num2 * denom2

# Add the numerators and set the ansdenom (denom1 and denom2 should be equal by this point if LCD function worked)

ansnum = num1 + num2
ansdenom = denom1

# Reduce the answer.

n = 2

for ansnum % n != 0 and ansdenom % n != 0:
    ansnum = ansnum / n
    ansdenom = ansdenom / n
    n += 1

print("The sum of the two fractions is:" + str(ansnum) + "//" + str(ansdenom))
```

Thanks in advance!
2017/09/14
[ "https://Stackoverflow.com/questions/46229543", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8580749/" ]
The error you see is caused by using `for` where `while` is the right type of loop (`for` is for iterating over a sequence, `while` is for looping on a condition). Beyond that, your logic for finding a common denominator is flawed and leads to an infinite loop.

Please read about the [least common multiple](https://en.wikipedia.org/wiki/Least_common_multiple), and consider the following pseudocode for determining the "new" numerators:

```
lcm = lcm(den1, den2)
num1 *= lcm / den1
num2 *= lcm / den2
```
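For concreteness, here is a minimal Python sketch of that pseudocode, assuming `math.gcd` is available (Python 3.5+) and building the LCM from it since older versions lack `math.lcm`; the function name `add_fractions` is just for illustration:

```
from math import gcd

def add_fractions(num1, denom1, num2, denom2):
    # lcm(a, b) == a * b // gcd(a, b)
    lcm = denom1 * denom2 // gcd(denom1, denom2)
    # Scale each numerator up to the common denominator.
    num = num1 * (lcm // denom1) + num2 * (lcm // denom2)
    # Reduce the result to lowest terms.
    div = gcd(num, lcm)
    return num // div, lcm // div

print(add_fractions(2, 4, 6, 8))  # (5, 4), i.e. 2/4 + 6/8 = 5/4
```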
You are trying to write a greatest-common-divisor finder, and your terminating condition is wrong. [Euclid's Algorithm](https://en.wikipedia.org/wiki/Euclidean_algorithm) repeatedly takes the modulo difference of the two numbers until the result is 0; then the next-to-last result is the GCD.

The standard python implementation looks like

```
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a
```

There is an implementation already in the standard library, `math.gcd`.

```
from math import gcd
import sys

def add_fractions(n1, d1, n2, d2):
    """
    Return the result of n1/d1 + n2/d2
    """
    num = n1 * d2 + n2 * d1
    denom = d1 * d2
    div = gcd(num, denom)
    return num // div, denom // div

if __name__ == "__main__":
    if len(sys.argv) != 5:
        print("Usage: {} num1 denom1 num2 denom2".format(sys.argv[0]))
    else:
        n1, d1, n2, d2 = [int(i) for i in sys.argv[1:]]
        num, denom = add_fractions(n1, d1, n2, d2)
        print("{}/{} + {}/{} = {}/{}".format(n1, d1, n2, d2, num, denom))
```
16,514,570
I can get matplotlib to work in pylab (ipython --pylab), but when I execute the same command in a python script a plot does not appear. My workspace focus changes from a fullscreened terminal to a Desktop when I run my script, which suggests that it is trying to plot something but failing. The following code works in `ipython --pylab` but not in my script. ``` import matplotlib.pyplot as plt plt.plot(arange(10)) ``` I am on Mac OS X Mountain Lion. **What is causing this to fail when I run a script but not in the interactive prompt?**
2013/05/13
[ "https://Stackoverflow.com/questions/16514570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3749393/" ]
I believe you need `plt.show()`.
You need to add `plt.show()` after `plt.plot(...)`. `plt.plot()` just makes the plot, `plt.show()` takes the plot you made and displays it on the screen.
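As a minimal sketch, the script from the question only needs an explicit `numpy` import (a plain script doesn't get pylab's implicit `arange`) and the final `plt.show()`:

```
import numpy as np
import matplotlib.pyplot as plt

plt.plot(np.arange(10))  # build the figure in memory
plt.show()               # open a window and display it
```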
44,057,032
My Python program isn't working properly; it's something with the submit button, and it gives me an error saying:

```
TypeError: 'str' object is not callable
```

Help please. Here is the part of the code that doesn't work:

```
def submit():
    g_name = ent0.get()
    g_surname = ent1.get()
    g_dob = ent2.get()
    g_tutorg = ent3.get() #Gets all the entry boxes
    g_email = ent4.get()

    cursor = db.cursor()
    sql = '''INSERT into Students, (g_name, g_surname, g_dob, g_tutorg, g_email)
    VALUES (?,?,?,?,?)'''
    cursor.execute(sql (g_name, g_surname, g_dob, g_tutorg, g_email)) #Puts it all on to SQL
    db.commit()
    mlabe2=Label(mGui,text="Form submitted, press exit to exit").place(x=90,y=0)
```

I'm not sure what else you need, so here's the rest of the SQL part that creates the table:

```
cursor = db.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS Students(
StudentID integer,
Name text,
Surname text,
DOB blob,
Tutor_Grop blob,
Email blob,
Primary Key(StudentID));
""") #Will create if it doesn't exist
db.commit()
```

I've been trying for so long and couldn't find a solution to this problem, so if you can help that would be great. Thanks!
2017/05/18
[ "https://Stackoverflow.com/questions/44057032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8033270/" ]
`childByAutoId()` is for the iOS SDK. For `admin.database()`, use [push()](https://firebase.google.com/docs/reference/admin/node/admin.database.Reference#push).

```
var reference = admin.database().ref(path).push();
```
It should work like this: ``` exports.addPersonalRecordHistory = functions.database.ref('/personalRecords/{userId}/current/{exerciseId}').onWrite(event => { var path = 'personalRecords/' + event.params.userId + '/history/' + event.params.exerciseId; return admin.database().ref(path).set({ username: "asd", email: "asd" }); }); ```
14,626,189
> 
> **Possible Duplicate:**
> 
> [python looping seems to not follow sequence?](https://stackoverflow.com/questions/4123266/python-looping-seems-to-not-follow-sequence)
> 
> [In what order does python display dictionary keys?](https://stackoverflow.com/questions/4458169/in-what-order-does-python-display-dictionary-keys)
> 
> 

```
d = {'x': 9, 'y': 10, 'z': 20}
for key in d:
    print d[key]
```

The above code gives different outputs every time I run it. Not exactly different outputs, but output in a different sequence. I executed the code multiple times using Aptana 3.

**First Execution Gave: 10 9 20**

**Second Execution Gave: 20 10 9**

**I also executed the code in an online IDE <http://labs.codecademy.com>. There the output was always 10 9 20**

I just wanted to know why this is. Ideally it should have printed 9 10 20 every time I execute the above code. Please explain.
2013/01/31
[ "https://Stackoverflow.com/questions/14626189", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1990185/" ]
A dictionary is a mapping of keys to values; it does not have an order. You want a `collections.OrderedDict`:

```
import collections

d = collections.OrderedDict([('x', 9), ('y', 10), ('z', 20)])
for key in d:
    print d[key]
```

Note, however, that dictionary ordering *is* deterministic within a single run -- if you iterate over the same dictionary twice in the same process, you will get the same results.
A dictionary is a collection that is not ordered. So in theory the order of the elements may change on each operation you perform on it. If you want the keys to be printed in order, you will have to sort them before printing (i.e. collect the keys, sort them, and then look up the values).
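A minimal sketch of that approach, using the dictionary from the question (`sorted()` here orders the keys alphabetically):

```
d = {'x': 9, 'y': 10, 'z': 20}
for key in sorted(d):  # 'x', 'y', 'z' on every run
    print(d[key])      # 9, 10, 20
```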
70,023,042
I was wondering if anyone can help. I'm trying to take a CSV from a GCP bucket, read it into a dataframe, and then output the file to another bucket in the project. However, using this method my DAG runs, but I'm not getting any output into my designated bucket, and the DAG just takes ages to run. Any insight into this issue?

```
import gcsfs
from airflow.operators import python_operator
from airflow import models
import pandas as pd
import logging
import csv
import datetime

fs = gcsfs.GCSFileSystem(project='project-goes-here')
with fs.open('gs://path/file.csv') as f:
    gas_data = pd.read_csv(f)


def make_csv():
    # Creates the CSV file with a datetime with no index, and adds the map, collection and collection address to the CSV
    # Calisto changed their mind on the position of where the conversion factor and multiplication factor should go
    gas_data['Asset collection'] = 'Distribution'
    gas_data['Asset collection address 1'] = 'Distribution'
    gas_data['Asset collection address 2'] = 'Units1+2 Central City'
    gas_data['Asset collection address 3'] = 'ind Est'
    gas_data['Asset collection city'] = 'Coventry'
    gas_data['Asset collection postcode'] = 'CV6 5RY'
    gas_data['Multiplication Factor'] = '1.000'
    gas_data['Conversion Factor'] = '1.022640'
    gas_data.to_csv('gs://path/' 'Clean_zenos_data_' + datetime.datetime.today().strftime('%m%d%Y%H%M%S''.csv'),
                    index=False, quotechar='"', sep=',',
                    quoting=csv.QUOTE_NONNUMERIC)
    logging.info('Added Map, Asset collection, Asset collection address and Saved CSV')


make_csv_function = python_operator.PythonOperator(
    task_id='make_csv',
    python_callable=make_csv
)
```
2021/11/18
[ "https://Stackoverflow.com/questions/70023042", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12176250/" ]
With broadcasting

```
res = np.where(arr0[...,None] == entries, arr1[...,None], 0).max(axis=(0, 1))
```

The result of `np.where(...)` is a (3, 3, 4) array, where slicing `[...,0]` would give you the same 3x3 array you get by manually doing the `np.where` with just `entries[0]`, etc.

Then taking the max of each 3x3 subarray leaves you with the desired result.

Timings
-------

Apparently this method doesn't scale well for bigger arrays. The other answer using `np.unique` is more efficient because it reduces the maximum operation down to a few unique values regardless of how big the original arrays are.

```
import timeit
import matplotlib.pyplot as plt
import numpy as np

def loops():
    return [np.where(arr0==index,arr1,0).max() for index in entries]

def broadcast():
    return np.where(arr0[...,None] == entries, arr1[...,None], 0).max(axis=(0, 1))

def numpy_1d():
    arr0_1D = arr0.ravel()
    arr1_1D = arr1.ravel()
    arg_idx = np.argsort(arr0_1D)
    u, idx = np.unique(arr0_1D[arg_idx], return_index=True)
    return np.maximum.reduceat(arr1_1D[arg_idx], idx)

sizes = (3, 10, 25, 50, 100, 250, 500, 1000)
lengths = (4, 10, 25, 50, 100)
methods = (loops, broadcast, numpy_1d)

fig, ax = plt.subplots(len(lengths), sharex=True)
for i, M in enumerate(lengths):
    entries = np.arange(M)
    times = [[] for _ in range(len(methods))]
    for N in sizes:
        arr0 = np.random.randint(1000, size=(N, N))
        arr1 = np.random.randint(1000, size=(N, N))
        for j, method in enumerate(methods):
            times[j].append(np.mean(timeit.repeat(method, number=1, repeat=10)))
    for t in times:
        ax[i].plot(sizes, t)
    ax[i].legend(['loops', 'broadcasting', 'numpy_1d'])
    ax[i].set_title(f'Entries size {M}')

plt.xticks(sizes)
fig.text(0.5, 0.04, 'Array size (NxN)', ha='center')
fig.text(0.04, 0.5, 'Time (s)', va='center', rotation='vertical')
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/gNOmv.png)](https://i.stack.imgur.com/gNOmv.png)
It's more convenient to work in the 1D case. You need to sort your `arr0`, then find the starting index of every group and use `np.maximum.reduceat`.

```
arr0_1D = np.array([[0,3,0],[1,3,2],[1,2,0]]).ravel()
arr1_1D = np.array([[4,5,6],[6,2,4],[3,7,9]]).ravel()
arg_idx = np.argsort(arr0_1D)
>>> arr0_1D[arg_idx]
array([0, 0, 0, 1, 1, 2, 2, 3, 3])
u, idx = np.unique(arr0_1D[arg_idx], return_index=True)
>>> idx
array([0, 3, 5, 7], dtype=int64)
>>> np.maximum.reduceat(arr1_1D[arg_idx], idx)
array([9, 6, 7, 5], dtype=int32)
```
35,346,971
I'm having some problems with inheritance. I need to import simplejson, or install it if it can't be found and then import it. I'm doing this in another class and sending it via inheritance where needed. The way I'm doing it here works in python 2.6+ but not in 2.4.

```
# This class will hold all things needed over in all classes
import subprocess


class Global(object):
    def __init__(self):
        # Making sure simple json is installed and accessible
        try:
            import simplejson as json
            self.json = json
        except ImportError:
            subprocess.Popen(['apt-get -y install python-simplejson'], shell=True, stdout=subprocess.PIPE).wait()
            import simplejson as json
            self.json = json
```

And I'm passing it to this class

```
class Init(Global):
    # Holds json object
    INFO_OBJECT = {
        'filesystem': {
            'root': {},
            'archive': {},
            'buffer': {}
        },
        'mysql': {
            'is_corrupt': False,
            'corrupt_files': {},
            'version': ''
        }
    }

    def __init__(self):
        super(Init, self).__init__()
        self.create_log_folder()
        self.create_object()
        self.gather_info()

    # if json object not found in file create a empty on and save it
    def create_object(self):
        try:
            f = open('/usr/local/careview/video/archive/rcpchecker/info/info.txt')
            info_object = self.json.load(f)
            f.close()
            self.INFO_OBJECT = info_object
        except self.json.JSONDecodeError:
            f = open('/usr/local/careview/video/archive/rcpchecker/info/info.txt', 'wb')
            self.json.dump(self.INFO_OBJECT, f, sort_keys=True, indent=4)
            f.close()
```

This is my error:

```
Traceback (most recent call last):
  File "Main.py", line 42, in ?
    start()
  File "Main.py", line 11, in start
    Init()
  File "/home/careview/ibarron/rcptester/Init/init.py", line 29, in __init__
    self.create_object()
  File "/home/careview/ibarron/rcptester/Init/init.py", line 43, in create_object
    except self.json.JSONDecodeError:
AttributeError: 'module' object has no attribute 'JSONDecodeError'
```
2016/02/11
[ "https://Stackoverflow.com/questions/35346971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4840281/" ]
I know this is not the question, but you should not be using subprocess.popen for this. Use pip. It's great. ``` try: import simplejson as json except ImportError: import pip try: import os isAdmin = os.getuid() == 0 except AttributeError: import ctypes isAdmin = ctypes.windll.shell32.IsUserAnAdmin() != 0 if isAdmin: c = pip.main(['install', 'simplejson']) else: c = pip.main(['install', '--user', 'simplejson']) if c: print("Could not install 'simplejson'.") exit(c) # => or desired error code... don't use 0 (thanks to Håken Lid for pointing this out) because it indicates success import simplejson as json self.json = json ``` As for your error, open python2.4 interpreter and just simply run: ``` >>> import simplejson as json >>> 'JSONDecodeError' in dir(json) True # => or false? ``` If it does not exist (perhaps the 2.4 version does not support it?), you can easily grab the source code from the 2.7 module: ``` >>> from inspect import getsourcelines as gsl >>> import simplejson as json >>> json.JSONDecodeError <class 'simplejson.scanner.JSONDecodeError'> >>> x, _ = gsl(json.scanner.JSONDecodeError) >>> print(''.join(x)) class JSONDecodeError(ValueError): """Subclass of ValueError with the following additional properties: msg: The unformatted error message doc: The JSON document being parsed pos: The start index of doc where parsing failed end: The end index of doc where parsing failed (may be None) lineno: The line corresponding to pos colno: The column corresponding to pos endlineno: The line corresponding to end (may be None) endcolno: The column corresponding to end (may be None) """ # Note that this exception is used from _speedups def __init__(self, msg, doc, pos, end=None): ValueError.__init__(self, errmsg(msg, doc, pos, end=end)) self.msg = msg self.doc = doc self.pos = pos self.end = end self.lineno, self.colno = linecol(doc, pos) if end is not None: self.endlineno, self.endcolno = linecol(doc, end) else: self.endlineno, self.endcolno = None, None def __reduce__(self): return self.__class__, (self.msg, self.doc, self.pos, self.end) ``` Now, in the python2.4 error, you can easily check if it has the attribute. If not, add it. ``` if not hasattr('json.scanner', 'JSONDecodeError'): class myJSONDecodeError(ValueError): """Subclass of ValueError with the following additional properties: msg: The unformatted error message doc: The JSON document being parsed pos: The start index of doc where parsing failed end: The end index of doc where parsing failed (may be None) lineno: The line corresponding to pos colno: The column corresponding to pos endlineno: The line corresponding to end (may be None) endcolno: The column corresponding to end (may be None) """ # Note that this exception is used from _speedups def __init__(self, msg, doc, pos, end=None): ValueError.__init__(self, errmsg(msg, doc, pos, end=end)) self.msg = msg self.doc = doc self.pos = pos self.end = end self.lineno, self.colno = linecol(doc, pos) if end is not None: self.endlineno, self.endcolno = linecol(doc, end) else: self.endlineno, self.endcolno = None, None def __reduce__(self): return self.__class__, (self.msg, self.doc, self.pos, self.end) self.json.JSONDecodeError = self.json.scanner.JSONDecodeError = myJSONDecodeError ```
`apt-get install` won't guarantee that you are installing `simplejson` for all versions of python. It will only work for the *system installed* version of Python which may or may not be 2.4. That's going to depend highly on what underlying version of Linux or Ubuntu or Debian you are using. If you want to be portable across multiple Python versions, you should be using Python's method of managing dependencies instead of trying to do it via `apt-get`.
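As an illustration, a hedged sketch of declaring the dependency in `setup.py` so that whichever interpreter runs the install resolves it for itself (the project name `mytool` is hypothetical):

```
from setuptools import setup

setup(
    name='mytool',                    # hypothetical project name
    version='0.1',
    py_modules=['mytool'],
    install_requires=['simplejson'],  # resolved per interpreter, not system-wide
)
```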
35,346,971
I'm having some problems with inheritance. I need to import simplejson, or install it if it can't be found and then import it. I'm doing this in another class and sending it via inheritance where needed. The way I'm doing it here works in python 2.6+ but not in 2.4.

```
# This class will hold all things needed over in all classes
import subprocess


class Global(object):
    def __init__(self):
        # Making sure simple json is installed and accessible
        try:
            import simplejson as json
            self.json = json
        except ImportError:
            subprocess.Popen(['apt-get -y install python-simplejson'], shell=True, stdout=subprocess.PIPE).wait()
            import simplejson as json
            self.json = json
```

And I'm passing it to this class

```
class Init(Global):
    # Holds json object
    INFO_OBJECT = {
        'filesystem': {
            'root': {},
            'archive': {},
            'buffer': {}
        },
        'mysql': {
            'is_corrupt': False,
            'corrupt_files': {},
            'version': ''
        }
    }

    def __init__(self):
        super(Init, self).__init__()
        self.create_log_folder()
        self.create_object()
        self.gather_info()

    # if json object not found in file create a empty on and save it
    def create_object(self):
        try:
            f = open('/usr/local/careview/video/archive/rcpchecker/info/info.txt')
            info_object = self.json.load(f)
            f.close()
            self.INFO_OBJECT = info_object
        except self.json.JSONDecodeError:
            f = open('/usr/local/careview/video/archive/rcpchecker/info/info.txt', 'wb')
            self.json.dump(self.INFO_OBJECT, f, sort_keys=True, indent=4)
            f.close()
```

This is my error:

```
Traceback (most recent call last):
  File "Main.py", line 42, in ?
    start()
  File "Main.py", line 11, in start
    Init()
  File "/home/careview/ibarron/rcptester/Init/init.py", line 29, in __init__
    self.create_object()
  File "/home/careview/ibarron/rcptester/Init/init.py", line 43, in create_object
    except self.json.JSONDecodeError:
AttributeError: 'module' object has no attribute 'JSONDecodeError'
```
2016/02/11
[ "https://Stackoverflow.com/questions/35346971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4840281/" ]
I know this is not the question, but you should not be using subprocess.popen for this. Use pip. It's great. ``` try: import simplejson as json except ImportError: import pip try: import os isAdmin = os.getuid() == 0 except AttributeError: import ctypes isAdmin = ctypes.windll.shell32.IsUserAnAdmin() != 0 if isAdmin: c = pip.main(['install', 'simplejson']) else: c = pip.main(['install', '--user', 'simplejson']) if c: print("Could not install 'simplejson'.") exit(c) # => or desired error code... don't use 0 (thanks to Håken Lid for pointing this out) because it indicates success import simplejson as json self.json = json ``` As for your error, open python2.4 interpreter and just simply run: ``` >>> import simplejson as json >>> 'JSONDecodeError' in dir(json) True # => or false? ``` If it does not exist (perhaps the 2.4 version does not support it?), you can easily grab the source code from the 2.7 module: ``` >>> from inspect import getsourcelines as gsl >>> import simplejson as json >>> json.JSONDecodeError <class 'simplejson.scanner.JSONDecodeError'> >>> x, _ = gsl(json.scanner.JSONDecodeError) >>> print(''.join(x)) class JSONDecodeError(ValueError): """Subclass of ValueError with the following additional properties: msg: The unformatted error message doc: The JSON document being parsed pos: The start index of doc where parsing failed end: The end index of doc where parsing failed (may be None) lineno: The line corresponding to pos colno: The column corresponding to pos endlineno: The line corresponding to end (may be None) endcolno: The column corresponding to end (may be None) """ # Note that this exception is used from _speedups def __init__(self, msg, doc, pos, end=None): ValueError.__init__(self, errmsg(msg, doc, pos, end=end)) self.msg = msg self.doc = doc self.pos = pos self.end = end self.lineno, self.colno = linecol(doc, pos) if end is not None: self.endlineno, self.endcolno = linecol(doc, end) else: self.endlineno, self.endcolno = None, None def __reduce__(self): return self.__class__, (self.msg, self.doc, self.pos, self.end) ``` Now, in the python2.4 error, you can easily check if it has the attribute. If not, add it. ``` if not hasattr('json.scanner', 'JSONDecodeError'): class myJSONDecodeError(ValueError): """Subclass of ValueError with the following additional properties: msg: The unformatted error message doc: The JSON document being parsed pos: The start index of doc where parsing failed end: The end index of doc where parsing failed (may be None) lineno: The line corresponding to pos colno: The column corresponding to pos endlineno: The line corresponding to end (may be None) endcolno: The column corresponding to end (may be None) """ # Note that this exception is used from _speedups def __init__(self, msg, doc, pos, end=None): ValueError.__init__(self, errmsg(msg, doc, pos, end=end)) self.msg = msg self.doc = doc self.pos = pos self.end = end self.lineno, self.colno = linecol(doc, pos) if end is not None: self.endlineno, self.endcolno = linecol(doc, end) else: self.endlineno, self.endcolno = None, None def __reduce__(self): return self.__class__, (self.msg, self.doc, self.pos, self.end) self.json.JSONDecodeError = self.json.scanner.JSONDecodeError = myJSONDecodeError ```
I have a ton to say about what's going on here, but I think the other comments have that covered. It looks like `simplejson` simply isn't supported in python versions below 2.5: <https://github.com/simplejson/simplejson>. And I'm sure if you're using apt-get, you're installing the latest. Try just using python 2.4's normal json package (if there is one - I would assume there is).
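If a standard-library module turns out to exist on the target interpreter, the usual guarded-import idiom covers both cases; a sketch, assuming at least one of the two modules is installed:

```
try:
    import json  # standard library on newer interpreters
except ImportError:
    import simplejson as json  # fall back to the third-party package
```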
35,346,971
I'm having some problems with inheritance. I need to import simplejson, or install it if it can't be found and then import it. I'm doing this in another class and sending it via inheritance where needed. The way I'm doing it here works in python 2.6+ but not in 2.4.

```
# This class will hold all things needed over in all classes
import subprocess


class Global(object):
    def __init__(self):
        # Making sure simple json is installed and accessible
        try:
            import simplejson as json
            self.json = json
        except ImportError:
            subprocess.Popen(['apt-get -y install python-simplejson'], shell=True, stdout=subprocess.PIPE).wait()
            import simplejson as json
            self.json = json
```

And I'm passing it to this class

```
class Init(Global):
    # Holds json object
    INFO_OBJECT = {
        'filesystem': {
            'root': {},
            'archive': {},
            'buffer': {}
        },
        'mysql': {
            'is_corrupt': False,
            'corrupt_files': {},
            'version': ''
        }
    }

    def __init__(self):
        super(Init, self).__init__()
        self.create_log_folder()
        self.create_object()
        self.gather_info()

    # if json object not found in file create a empty on and save it
    def create_object(self):
        try:
            f = open('/usr/local/careview/video/archive/rcpchecker/info/info.txt')
            info_object = self.json.load(f)
            f.close()
            self.INFO_OBJECT = info_object
        except self.json.JSONDecodeError:
            f = open('/usr/local/careview/video/archive/rcpchecker/info/info.txt', 'wb')
            self.json.dump(self.INFO_OBJECT, f, sort_keys=True, indent=4)
            f.close()
```

This is my error:

```
Traceback (most recent call last):
  File "Main.py", line 42, in ?
    start()
  File "Main.py", line 11, in start
    Init()
  File "/home/careview/ibarron/rcptester/Init/init.py", line 29, in __init__
    self.create_object()
  File "/home/careview/ibarron/rcptester/Init/init.py", line 43, in create_object
    except self.json.JSONDecodeError:
AttributeError: 'module' object has no attribute 'JSONDecodeError'
```
2016/02/11
[ "https://Stackoverflow.com/questions/35346971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4840281/" ]
`apt-get install` won't guarantee that you are installing `simplejson` for all versions of python. It will only work for the *system installed* version of Python which may or may not be 2.4. That's going to depend highly on what underlying version of Linux or Ubuntu or Debian you are using. If you want to be portable across multiple Python versions, you should be using Python's method of managing dependencies instead of trying to do it via `apt-get`.
I have a ton to say about what's going on here, but I think the other comments have that covered. It looks like `simplejson` simply isn't supported in python versions below 2.5: <https://github.com/simplejson/simplejson>. And I'm sure if you're using apt-get, you're installing the latest. Try just using python 2.4's normal json package (if there is one - I would assume there is).
1,900,956
Let's say I have the following dictionary in a small application.

```
dict = {'one': 1, 'two': 2}
```

What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Not a problem to convert it, but maybe there is an easier way.

I do not need a way to convert it to a string; that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is:

```
write_to_file("dict = {'one': 1, 'two': 2}")
```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
The `repr` function will return a string which is the exact definition of your dict (except for the order of the elements; dicts are unordered in Python). Unfortunately, I can't think of a way to automatically get a string which represents the variable name.

```
>>> dict = {'one': 1, 'two': 2}
>>> repr(dict)
"{'two': 2, 'one': 1}"
```

Writing to a file is pretty standard stuff, like any other file write:

```
f = open( 'file.py', 'w' )
f.write( 'dict = ' + repr(dict) + '\n' )
f.close()
```
You could do: ``` import inspect mydict = {'one': 1, 'two': 2} source = inspect.getsourcelines(inspect.getmodule(inspect.stack()[0][0]))[0] print([x for x in source if x.startswith("mydict = ")]) ``` Also: make sure not to shadow the dict builtin!
1,900,956
Let's say I have the following dictionary in a small application.

```
dict = {'one': 1, 'two': 2}
```

What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Not a problem to convert it, but maybe there is an easier way.

I do not need a way to convert it to a string; that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is:

```
write_to_file("dict = {'one': 1, 'two': 2}")
```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
You could do: ``` import inspect mydict = {'one': 1, 'two': 2} source = inspect.getsourcelines(inspect.getmodule(inspect.stack()[0][0]))[0] print([x for x in source if x.startswith("mydict = ")]) ``` Also: make sure not to shadow the dict builtin!
The default string representation for a dictionary seems to be just right: ``` >>> a={3: 'foo', 17: 'bar' } >>> a {17: 'bar', 3: 'foo'} >>> print a {17: 'bar', 3: 'foo'} >>> print "a=", a a= {17: 'bar', 3: 'foo'} ``` Not sure if you can get at the "variable name", since variables in Python are just labels for values. See [this question](https://stackoverflow.com/questions/592746/how-can-you-print-a-variable-name-in-python).
1,900,956
Let's say I have the following dictionary in a small application.

```
dict = {'one': 1, 'two': 2}
```

What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Not a problem to convert it, but maybe there is an easier way.

I do not need a way to convert it to a string; that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is:

```
write_to_file("dict = {'one': 1, 'two': 2}")
```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
You can use `pickle` ``` import pickle data = {'one': 1, 'two': 2} file = open('dump.txt', 'wb') pickle.dump(data, file) file.close() ``` and to read it again ``` file = open('dump.txt', 'rb') data = pickle.load(file) ``` EDIT: Guess I misread your question, sorry ... but pickle might help all the same. :)
Do you just want to know how to write a line to a [file](http://docs.python.org/library/stdtypes.html#file-objects)? First, you need to open the file: ``` f = open("filename.txt", 'w') ``` Then, you need to write the string to the file: ``` f.write("dict = {'one': 1, 'two': 2}" + '\n') ``` You can repeat this for each line (the `+'\n'` adds a newline if you want it). Finally, you need to close the file: ``` f.close() ``` You can also be slightly more clever and use [`with`](http://www.python.org/dev/peps/pep-0343/): ``` with open("filename.txt", 'w') as f: f.write("dict = {'one': 1, 'two': 2}" + '\n') ### repeat for all desired lines ``` This will automatically close the file, even if exceptions are raised. But I suspect this is not what you are asking...
1,900,956
Let's say I have the following dictionary in a small application.

```
dict = {'one': 1, 'two': 2}
```

What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Not a problem to convert it, but maybe there is an easier way.

I do not need a way to convert it to a string; that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is:

```
write_to_file("dict = {'one': 1, 'two': 2}")
```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
Is something like this what you're looking for? ``` def write_vars_to_file(f, **vars): for name, val in vars.items(): f.write("%s = %s\n" % (name, repr(val))) ``` Usage: ``` >>> import sys >>> write_vars_to_file(sys.stdout, dict={'one': 1, 'two': 2}) dict = {'two': 2, 'one': 1} ```
The default string representation for a dictionary seems to be just right: ``` >>> a={3: 'foo', 17: 'bar' } >>> a {17: 'bar', 3: 'foo'} >>> print a {17: 'bar', 3: 'foo'} >>> print "a=", a a= {17: 'bar', 3: 'foo'} ``` Not sure if you can get at the "variable name", since variables in Python are just labels for values. See [this question](https://stackoverflow.com/questions/592746/how-can-you-print-a-variable-name-in-python).
1,900,956
Let's say I have the following dictionary in a small application.

```
dict = {'one': 1, 'two': 2}
```

What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Not a problem to convert it, but maybe there is an easier way.

I do not need a way to convert it to a string; that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is:

```
write_to_file("dict = {'one': 1, 'two': 2}")
```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
Do you just want to know how to write a line to a [file](http://docs.python.org/library/stdtypes.html#file-objects)? First, you need to open the file: ``` f = open("filename.txt", 'w') ``` Then, you need to write the string to the file: ``` f.write("dict = {'one': 1, 'two': 2}" + '\n') ``` You can repeat this for each line (the `+'\n'` adds a newline if you want it). Finally, you need to close the file: ``` f.close() ``` You can also be slightly more clever and use [`with`](http://www.python.org/dev/peps/pep-0343/): ``` with open("filename.txt", 'w') as f: f.write("dict = {'one': 1, 'two': 2}" + '\n') ### repeat for all desired lines ``` This will automatically close the file, even if exceptions are raised. But I suspect this is not what you are asking...
1) Make the dictionary: ``` X = {'a': 1} ``` 2) Write it to a new file: ``` file = open('X_Data.py', 'w') file.write(str(X)) file.close() ``` 3) Lastly, in the script where you want the variable, read that file back and rebuild the variable from the saved data: ``` import ast file = open('X_Data.py', 'r') f = file.read() file.close() X = ast.literal_eval(f) ```
1,900,956
Let's say I have the following dictionary in a small application. ``` dict = {'one': 1, 'two': 2} ``` What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Converting it is not a problem, but maybe there is an easier way. I do not need a way to convert it to a string, that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is: ``` write_to_file("dict = {'one': 1, 'two': 2}") ```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
Is something like this what you're looking for? ``` def write_vars_to_file(f, **vars): for name, val in vars.items(): f.write("%s = %s\n" % (name, repr(val))) ``` Usage: ``` >>> import sys >>> write_vars_to_file(sys.stdout, dict={'one': 1, 'two': 2}) dict = {'two': 2, 'one': 1} ```
I found an easy way to get the dictionary value, **and its name as well**! I'm not sure yet about reading it back; I'm going to continue to do research and see if I can figure that out. Here is the code: ``` your_dict = {'one': 1, 'two': 2} variables = [var for var in dir() if var[0:2] != "__" and var[-2:] != "__"] file = open("your_file","w") for var in variables: if isinstance(locals()[var], dict): file.write(str(var) + " = " + str(locals()[var]) + "\n") file.close() ``` The only problem here is that this will output every dictionary in your namespace to the file; maybe you can sort them out by values? `locals()[var] == your_dict` for reference. You can also remove `if isinstance(locals()[var], dict):` to output EVERY variable in your namespace, regardless of type. Your output looks exactly like your declaration `your_dict = {'one': 1, 'two': 2}`. Hopefully this gets you one step closer! I'll make an edit if I can figure out how to read them back into the namespace :) **---EDIT---** Got it! I've added a few variables (and variable types) for proof of concept. Here is what my "testfile.txt" looks like: ``` string_test = Hello World integer_test = 42 your_dict = {'one': 1, 'two': 2} ``` And here is the code that processes it: ``` import ast file = open("testfile.txt", "r") data = file.readlines() file.close() for line in data: var_name, var_val = line.split(" = ") for possible_num_types in range(3): # range(3) is the number of types we will try casting to try: var_val = int(var_val) break except (TypeError, ValueError): try: var_val = ast.literal_eval(var_val) break except (TypeError, ValueError, SyntaxError): var_val = str(var_val).replace("\n","") break locals()[var_name] = var_val print("string_test =", string_test, " : Type =", type(string_test)) print("integer_test =", integer_test, " : Type =", type(integer_test)) print("your_dict =", your_dict, " : Type =", type(your_dict)) ``` This is what that outputs: ``` string_test = Hello World : Type = <class 'str'> integer_test = 42 : Type = <class 'int'> your_dict = {'two': 2, 'one': 1} : Type = <class 'dict'> ``` I really don't like how the casting here works; the try-except block is bulky and ugly. Even worse, you cannot accept just any type! You have to know what you are expecting to take in. This wouldn't be nearly as bad if you only cared about dictionaries, but I really wanted something a bit more universal. If anybody knows how to better cast these input vars I would **LOVE** to hear about it! Regardless, this should still get you there :D I hope I've helped out!
1,900,956
Let's say I have the following dictionary in a small application. ``` dict = {'one': 1, 'two': 2} ``` What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Converting it is not a problem, but maybe there is an easier way. I do not need a way to convert it to a string, that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is: ``` write_to_file("dict = {'one': 1, 'two': 2}") ```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
The `repr` function will return a string which is the exact definition of your dict (except for the order of the elements; dicts are unordered in Python). Unfortunately, I can't think of a way to automatically get a string which represents the variable name. ``` >>> dict = {'one': 1, 'two': 2} >>> repr(dict) "{'two': 2, 'one': 1}" ``` Writing to a file is pretty standard stuff, like any other file write: ``` f = open( 'file.py', 'w' ) f.write( 'dict = ' + repr(dict) + '\n' ) f.close() ```
Is something like this what you're looking for? ``` def write_vars_to_file(f, **vars): for name, val in vars.items(): f.write("%s = %s\n" % (name, repr(val))) ``` Usage: ``` >>> import sys >>> write_vars_to_file(sys.stdout, dict={'one': 1, 'two': 2}) dict = {'two': 2, 'one': 1} ```
1,900,956
Let's say I have the following dictionary in a small application. ``` dict = {'one': 1, 'two': 2} ``` What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Converting it is not a problem, but maybe there is an easier way. I do not need a way to convert it to a string, that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is: ``` write_to_file("dict = {'one': 1, 'two': 2}") ```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
The `repr` function will return a string which is the exact definition of your dict (except for the order of the elements; dicts are unordered in Python). Unfortunately, I can't think of a way to automatically get a string which represents the variable name. ``` >>> dict = {'one': 1, 'two': 2} >>> repr(dict) "{'two': 2, 'one': 1}" ``` Writing to a file is pretty standard stuff, like any other file write: ``` f = open( 'file.py', 'w' ) f.write( 'dict = ' + repr(dict) + '\n' ) f.close() ```
Do you just want to know how to write a line to a [file](http://docs.python.org/library/stdtypes.html#file-objects)? First, you need to open the file: ``` f = open("filename.txt", 'w') ``` Then, you need to write the string to the file: ``` f.write("dict = {'one': 1, 'two': 2}" + '\n') ``` You can repeat this for each line (the `+'\n'` adds a newline if you want it). Finally, you need to close the file: ``` f.close() ``` You can also be slightly more clever and use [`with`](http://www.python.org/dev/peps/pep-0343/): ``` with open("filename.txt", 'w') as f: f.write("dict = {'one': 1, 'two': 2}" + '\n') ### repeat for all desired lines ``` This will automatically close the file, even if exceptions are raised. But I suspect this is not what you are asking...
1,900,956
Let's say I have the following dictionary in a small application. ``` dict = {'one': 1, 'two': 2} ``` What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Converting it is not a problem, but maybe there is an easier way. I do not need a way to convert it to a string, that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is: ``` write_to_file("dict = {'one': 1, 'two': 2}") ```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
The `repr` function will return a string which is the exact definition of your dict (except for the order of the elements; dicts are unordered in Python). Unfortunately, I can't think of a way to automatically get a string which represents the variable name. ``` >>> dict = {'one': 1, 'two': 2} >>> repr(dict) "{'two': 2, 'one': 1}" ``` Writing to a file is pretty standard stuff, like any other file write: ``` f = open( 'file.py', 'w' ) f.write( 'dict = ' + repr(dict) + '\n' ) f.close() ```
I found an easy way to get the dictionary value, **and its name as well**! I'm not sure yet about reading it back; I'm going to continue to do research and see if I can figure that out. Here is the code: ``` your_dict = {'one': 1, 'two': 2} variables = [var for var in dir() if var[0:2] != "__" and var[-2:] != "__"] file = open("your_file","w") for var in variables: if isinstance(locals()[var], dict): file.write(str(var) + " = " + str(locals()[var]) + "\n") file.close() ``` The only problem here is that this will output every dictionary in your namespace to the file; maybe you can sort them out by values? `locals()[var] == your_dict` for reference. You can also remove `if isinstance(locals()[var], dict):` to output EVERY variable in your namespace, regardless of type. Your output looks exactly like your declaration `your_dict = {'one': 1, 'two': 2}`. Hopefully this gets you one step closer! I'll make an edit if I can figure out how to read them back into the namespace :) **---EDIT---** Got it! I've added a few variables (and variable types) for proof of concept. Here is what my "testfile.txt" looks like: ``` string_test = Hello World integer_test = 42 your_dict = {'one': 1, 'two': 2} ``` And here is the code that processes it: ``` import ast file = open("testfile.txt", "r") data = file.readlines() file.close() for line in data: var_name, var_val = line.split(" = ") for possible_num_types in range(3): # range(3) is the number of types we will try casting to try: var_val = int(var_val) break except (TypeError, ValueError): try: var_val = ast.literal_eval(var_val) break except (TypeError, ValueError, SyntaxError): var_val = str(var_val).replace("\n","") break locals()[var_name] = var_val print("string_test =", string_test, " : Type =", type(string_test)) print("integer_test =", integer_test, " : Type =", type(integer_test)) print("your_dict =", your_dict, " : Type =", type(your_dict)) ``` This is what that outputs: ``` string_test = Hello World : Type = <class 'str'> integer_test = 42 : Type = <class 'int'> your_dict = {'two': 2, 'one': 1} : Type = <class 'dict'> ``` I really don't like how the casting here works; the try-except block is bulky and ugly. Even worse, you cannot accept just any type! You have to know what you are expecting to take in. This wouldn't be nearly as bad if you only cared about dictionaries, but I really wanted something a bit more universal. If anybody knows how to better cast these input vars I would **LOVE** to hear about it! Regardless, this should still get you there :D I hope I've helped out!
1,900,956
Let's say I have the following dictionary in a small application. ``` dict = {'one': 1, 'two': 2} ``` What if I would like to write the exact code line, with the dict name and all, to a file? Is there a function in Python that lets me do it? Or do I have to convert it to a string first? Converting it is not a problem, but maybe there is an easier way. I do not need a way to convert it to a string, that I can do. But if there is a built-in function that does this for me, I would like to know. To make it clear, what I would like to write to the file is: ``` write_to_file("dict = {'one': 1, 'two': 2}") ```
2009/12/14
[ "https://Stackoverflow.com/questions/1900956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55366/" ]
Is something like this what you're looking for? ``` def write_vars_to_file(f, **vars): for name, val in vars.items(): f.write("%s = %s\n" % (name, repr(val))) ``` Usage: ``` >>> import sys >>> write_vars_to_file(sys.stdout, dict={'one': 1, 'two': 2}) dict = {'two': 2, 'one': 1} ```
1) Make the dictionary: ``` X = {'a': 1} ``` 2) Write it to a new file: ``` file = open('X_Data.py', 'w') file.write(str(X)) file.close() ``` 3) Lastly, in the script where you want the variable, read that file back and rebuild the variable from the saved data: ``` import ast file = open('X_Data.py', 'r') f = file.read() file.close() X = ast.literal_eval(f) ```
62,933,026
I am new to Python and I am trying to loop through the list of urls in a `csv` file and grab each website's `title` using `BeautifulSoup`, which I would then like to save to a file `Headlines.csv`. But I am unable to grab the webpage `title`. If I use a variable with a single url as follows: ``` url = 'https://www.space.com/japan-hayabusa2-asteroid-samples-landing-date.html' resp = req.get(url) soup = BeautifulSoup(resp.text, 'lxml') print(soup.title.text) ``` it works just fine and I get the title `Japanese capsule carrying pieces of asteroid Ryugu will land on Earth Dec. 6 | Space`. But when I use the loop, ``` import csv with open('urls_file2.csv', newline='', encoding='utf-8') as f: reader = csv.reader(f) for url in reader: print(url) resp = req.get(url) soup = BeautifulSoup(resp.text, 'lxml') print(soup.title.text) ``` I get the following `['\ufeffhttps://www.foxnews.com/us/this-day-in-history-july-16']` and an error message `InvalidSchema: No connection adapters were found for "['\\ufeffhttps://www.foxnews.com/us/this-day-in-history-july-16']"`. I am not sure what I am doing wrong.
2020/07/16
[ "https://Stackoverflow.com/questions/62933026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13122812/" ]
As the previous answer already mentioned regarding the "\ufeff", you need to change the encoding. The second issue is that when you read a CSV file, you get a list containing all the columns for each row. The keyword here is list. You are passing the request a list instead of a string. Based on the example you have given, I would assume that your urls are in the first column of the csv. Python lists start at index 0, not 1. So to extract the url, you need to use index 0, which refers to the first column. ``` import csv with open('urls_file2.csv', newline='', encoding='utf-8-sig') as f: reader = csv.reader(f) for url in reader: print(url[0]) ``` To read up more on lists, you can refer [here](https://www.w3schools.com/python/python_lists.asp). You can add more columns to the CSV file and experiment to see how the results appear. If you would like to refer to the column name while reading each row, you can refer [here](https://stackoverflow.com/questions/41567508/read-csv-items-with-column-name).
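Putting both fixes together, a minimal sketch of the full loop might look like this (the file name is taken from the question; that `req` is the `requests` library is my assumption):

```
import csv

import requests as req  # assumption: `req` in the question is the requests library
from bs4 import BeautifulSoup

# utf-8-sig strips the BOM, and row[0] picks the URL out of each row's list
with open('urls_file2.csv', newline='', encoding='utf-8-sig') as f:
    for row in csv.reader(f):
        resp = req.get(row[0])
        soup = BeautifulSoup(resp.text, 'lxml')
        print(soup.title.text)
```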
You have a byte order mark `\\ufeff` on the URL you parse from your file. It looks like your file was saved with a BOM signature, i.e. with an encoding like utf-8-sig. You need to read the file with `encoding='utf-8-sig'`. Read more [here](https://stackoverflow.com/a/49150749/7502914).
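For a quick check, here is a minimal sketch of both ways to get rid of the mark (the file name and URL are taken from the question):

```
# reading with the signature-aware codec strips the BOM up front
with open('urls_file2.csv', newline='', encoding='utf-8-sig') as f:
    first_line = f.readline()  # no leading '\ufeff'

# alternatively, strip the mark from a string you already have
url = '\ufeffhttps://www.foxnews.com/us/this-day-in-history-july-16'
print(url.lstrip('\ufeff'))  # https://www.foxnews.com/us/this-day-in-history-july-16
```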
31,039,972
I am trying to run a Python script from another Python script and get its `pid` so I can kill it later. I tried `subprocess.Popen()` with the argument `shell=True`, but the `pid` attribute returns the `pid` of the parent script, so when I try to kill the subprocess, it kills the parent. Here is my code: ```py proc = subprocess.Popen(" python ./script.py", shell=True) pid_ = proc.pid . . . # later in my code os.system('kill -9 %s'%pid_) #IT KILLS THE PARENT :( ```
2015/06/25
[ "https://Stackoverflow.com/questions/31039972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4759209/" ]
`shell=True` starts a new shell process. `proc.pid` is the pid of that shell process. `kill -9` kills the shell process making the grandchild python process into an orphan. If the grandchild python script can spawn its own child processes and you want to kill the whole process tree then see [How to terminate a python subprocess launched with shell=True](https://stackoverflow.com/q/4789837/4279): ``` #!/usr/bin/env python import os import signal import subprocess proc = subprocess.Popen("python script.py", shell=True, preexec_fn=os.setsid) # ... os.killpg(proc.pid, signal.SIGTERM) ``` If `script.py` does not spawn any processes then use [@icktoofay suggestion](https://stackoverflow.com/a/31040013/4279): drop `shell=True`, use a list argument, and call `proc.terminate()` or `proc.kill()` -- the latter always works eventually: ``` #!/usr/bin/env python import subprocess proc = subprocess.Popen(["python", "script.py"]) # ... proc.terminate() ``` If you want to run your parent script from a different directory; you might need [`get_script_dir()` function](https://stackoverflow.com/a/22881871/4279). Consider importing the python module and running its functions, using its object (perhaps via `multiprocessing`) instead of running it as a script. Here's [code example that demonstrates `get_script_dir()` and `multiprocessing` usage](https://stackoverflow.com/a/30165768/4279).
So run it directly without a shell: ``` proc = subprocess.Popen(['python', './script.py']) ``` By the way, you may want to consider changing the hardcoded `'python'` to [`sys.executable`](https://docs.python.org/3.5/library/sys.html#sys.executable). Also, you can use [`proc.kill()`](https://docs.python.org/3.5/library/subprocess.html#subprocess.Popen.kill) to kill the process rather than extracting the PID and using that; furthermore, even if you did need to kill by PID, you could use [`os.kill`](https://docs.python.org/3.5/library/os.html#os.kill) to kill the process rather than spawning another command.
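Putting those suggestions together, a minimal sketch (the script name is taken from the question) could look like this:

```
import subprocess
import sys

# without shell=True, proc.pid is the pid of the child python process itself
proc = subprocess.Popen([sys.executable, './script.py'])

# ... later, no need to shell out to `kill`
proc.terminate()  # polite SIGTERM; use proc.kill() to force it
proc.wait()       # reap the child so it doesn't linger as a zombie
```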
47,403,218
I am successfully logged into my Virtual Machine and I have uploaded my files to AWS as well (Amazon EC2). What I wish to do is execute my Python code on the server, but it says that the dependencies are not installed. When I run a pip install command, it returns the following error: PermissionError: [Errno 13] Permission denied: '/usr/local/lib64/python3.4/site-packages/apiclient How do I fix this? Is it even possible to install packages using pip? If yes, how?
2017/11/21
[ "https://Stackoverflow.com/questions/47403218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8888799/" ]
Assuming you have the number 3 in cell A1 on Sheet2, the following will display the value of column A in the row that has rank 3 in Sheet1. This can be copied down in Sheet2 if you have other numbers in the rows below: ``` =INDEX(Sheet1!A:AH,MATCH($A1,Sheet1!$AH:$AH,0),1) ```
Sounds like you need `INDEX/MATCH` like this `=INDEX(Sheet1!A:A,MATCH(3,Sheet1!AH:AH,0))` The `MATCH` function finds the position of 3 in column `AH` and then the `INDEX` function returns the value from column `A` in the same row. Is that what you need?
45,046,601
I have this weird problem that can be reproduced with the [simple tutorial](https://docs.docker.com/compose/django/) from Docker. If I follow the tutorial exactly, everything would work fine, i.e. after `docker-compose up` command, the web container would run and connect nicely to the db container. However, if I choose to create the same Django project on the host, change its settings for the postgres db, and copy it over to the web image in its Dockerfile, instead of mounting the host directory to the container and doing those things there as shown in the tutorial (using the command `docker-compose run web django-admin.py startproject composeexample .` and then change the settings file generated and located in the mounted directory on the host), the first time I run `docker-compose up`, the web container would have problems connecting to the db, with the error as below > > web\_1 | psycopg2.OperationalError: could not connect to server: Connection refused > web\_1 | Is the server running on host "db" (172.18.0.2) and accepting > web\_1 | TCP/IP connections on port 5432? > > > However, if I stop the compose with docker-compose down and then run it again with docker-compose up, the web container would connect to the db successfully with no problems. 'Connection refused' seems to be not an uncommon problem here but I have checked and verified that all the settings are correct and the usual causes like wrong port number, port not exposed or setting host as 'local' instead of 'db', etc. are not the problems in this case. Note: FWIW, I use CNTLM as the system proxy in the host and have to set the environment variables for the web image, and it works fine for other scenarios. EDIT: Please find additional info as below. In the host directory I have the following files and directories * composeexample (generated by another container following the same tutorial and copied over to here) * manage.py (generated by another container and copied over to here) * requirements.txt (exactly as the one in the tutorial) * Dockerfile (slightly modified from the one in the tutorial) * docker-compose.yml (slightly modified from the one in the tutorial) composeexample/settings.py: ``` ......... DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'postgres', 'USER': 'postgres', 'HOST': 'db', 'PORT': 5432, } } ......... ``` Dockerfile (mostly the same, with the added env vars): ``` FROM python:3.5 ENV PYTHONUNBUFFERED 1 ENV http_proxy "http://172.17.0.1:3128" ENV https_proxy "http://172.17.0.1:3128" ENV HTTP_PROXY "http://172.17.0.1:3128" ENV HTTPS_PROXY "http://172.17.0.1:3128" RUN mkdir /code WORKDIR /code ADD requirements.txt /code/ RUN pip install -r requirements.txt ADD . /code/ ``` docker-compose (I removed the mounted volume .:/code as the project files have already been copied to the web image when it's built. I tested with leaving it as in the original file and it made no difference): ``` version: '3' services: db: image: postgres web: build: . command: python3 manage.py runserver 0.0.0.0:8000 ports: - "8000:8000" depends_on: - db ```
2017/07/12
[ "https://Stackoverflow.com/questions/45046601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3814824/" ]
Use [wait-for-it.sh](https://github.com/vishnubob/wait-for-it) to wait for Postgres to be ready. Download this well-known script: <https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh> ``` version: '3' services: db: image: postgres web: build: . command: /wait-for-it.sh db:5432 -- python3 manage.py runserver 0.0.0.0:8000 volumes: - ./wait-for-it.sh:/wait-for-it.sh ports: - "8000:8000" depends_on: - db ``` It will wait until the db port is open and won't waste any further time.
You can use [healthcheck](https://docs.docker.com/compose/compose-file/#healthcheck). example from: [peter-evans/docker-compose-healthcheck: How to wait for container X before starting Y using docker-compose healthcheck](https://github.com/peter-evans/docker-compose-healthcheck#waiting-for-postgresql-to-be-healthy) ``` version: '3' services: db: image: postgres healthcheck: test: ["CMD-SHELL", "pg_isready -U postgres"] interval: 3s timeout: 30s retries: 3 web: build: . command: python3 manage.py runserver 0.0.0.0:8000 ports: - "8000:8000" depends_on: db: condition: service_healthy ```
45,046,601
I have this weird problem that can be reproduced with the [simple tutorial](https://docs.docker.com/compose/django/) from Docker. If I follow the tutorial exactly, everything would work fine, i.e. after `docker-compose up` command, the web container would run and connect nicely to the db container. However, if I choose to create the same Django project on the host, change its settings for the postgres db, and copy it over to the web image in its Dockerfile, instead of mounting the host directory to the container and doing those things there as shown in the tutorial (using the command `docker-compose run web django-admin.py startproject composeexample .` and then change the settings file generated and located in the mounted directory on the host), the first time I run `docker-compose up`, the web container would have problems connecting to the db, with the error as below > > web\_1 | psycopg2.OperationalError: could not connect to server: Connection refused > web\_1 | Is the server running on host "db" (172.18.0.2) and accepting > web\_1 | TCP/IP connections on port 5432? > > > However, if I stop the compose with docker-compose down and then run it again with docker-compose up, the web container would connect to the db successfully with no problems. 'Connection refused' seems to be not an uncommon problem here but I have checked and verified that all the settings are correct and the usual causes like wrong port number, port not exposed or setting host as 'local' instead of 'db', etc. are not the problems in this case. Note: FWIW, I use CNTLM as the system proxy in the host and have to set the environment variables for the web image, and it works fine for other scenarios. EDIT: Please find additional info as below. In the host directory I have the following files and directories * composeexample (generated by another container following the same tutorial and copied over to here) * manage.py (generated by another container and copied over to here) * requirements.txt (exactly as the one in the tutorial) * Dockerfile (slightly modified from the one in the tutorial) * docker-compose.yml (slightly modified from the one in the tutorial) composeexample/settings.py: ``` ......... DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'postgres', 'USER': 'postgres', 'HOST': 'db', 'PORT': 5432, } } ......... ``` Dockerfile (mostly the same, with the added env vars): ``` FROM python:3.5 ENV PYTHONUNBUFFERED 1 ENV http_proxy "http://172.17.0.1:3128" ENV https_proxy "http://172.17.0.1:3128" ENV HTTP_PROXY "http://172.17.0.1:3128" ENV HTTPS_PROXY "http://172.17.0.1:3128" RUN mkdir /code WORKDIR /code ADD requirements.txt /code/ RUN pip install -r requirements.txt ADD . /code/ ``` docker-compose (I removed the mounted volume .:/code as the project files have already been copied to the web image when it's built. I tested with leaving it as in the original file and it made no difference): ``` version: '3' services: db: image: postgres web: build: . command: python3 manage.py runserver 0.0.0.0:8000 ports: - "8000:8000" depends_on: - db ```
2017/07/12
[ "https://Stackoverflow.com/questions/45046601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3814824/" ]
As the documentation on [depends\_on](https://docs.docker.com/compose/compose-file/#depends_on) says, `depends_on` expresses a dependency between containers, but that does not mean that one container will wait for the other to be ready. A possible solution is to add a short sleep within `docker-compose`, something like this: ``` command: /bin/bash -c "sleep 7; python3 manage.py runserver -h 0.0.0.0 -p 9000 -r -d" ```
You can use [healthcheck](https://docs.docker.com/compose/compose-file/#healthcheck). example from: [peter-evans/docker-compose-healthcheck: How to wait for container X before starting Y using docker-compose healthcheck](https://github.com/peter-evans/docker-compose-healthcheck#waiting-for-postgresql-to-be-healthy) ``` version: '3' services: db: image: postgres healthcheck: test: ["CMD-SHELL", "pg_isready -U postgres"] interval: 3s timeout: 30s retries: 3 web: build: . command: python3 manage.py runserver 0.0.0.0:8000 ports: - "8000:8000" depends_on: db: condition: service_healthy ```
40,062,854
I want to see and get some info about my OS with Python as in my tutorial, but I actually can't run this code: ``` import os F = os.popen('dir') ``` and this: ``` F.readline() ' Volume in drive C has no label.\n' F = os.popen('dir') # Read by sized blocks F.read(50) ' Volume in drive C has no label.\n Volume Serial Nu' os.popen('dir').readlines()[0] # Read all lines: index ' Volume in drive C has no label.\n' os.popen('dir').read()[:50] # Read all at once: slice ' Volume in drive C has no label.\n Volume Serial Nu' for line in os.popen('dir'): # File line iterator loop ... print(line.rstrip()) ``` This is the error for the first one in the terminal (in IDLE it returns just an error): ``` f = open('dir') Traceback (most recent call last): File "<stdin>", line 1, in <module> FileNotFoundError: [Errno 2] No such file or directory: 'dir' ``` I know on Mac it should be different, but how do I get the same result using macOS Sierra?
2016/10/15
[ "https://Stackoverflow.com/questions/40062854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4716040/" ]
The problem is you can't save your custom array in NSUserDefaults. To do that, you should convert it to NSData and then save that in NSUserDefaults. Here is the code I used in my project; it's in Swift 2 syntax, and I don't think it will be hard to convert it to Swift 3: ``` let data = NSKeyedArchiver.archivedDataWithRootObject(yourObject); NSUserDefaults.standardUserDefaults().setObject(data, forKey: "yourKey") NSUserDefaults.standardUserDefaults().synchronize() ``` And for the get part, use this combination: ``` if let data = NSUserDefaults.standardUserDefaults().objectForKey("yourKey") as? NSData { let myItem = NSKeyedUnarchiver.unarchiveObjectWithData(data) as? yourType } ``` Hope this helps.
The closest type to a Swift struct that UserDefaults supports might be an NSDictionary. You could copy the struct elements into an Objective-C NSDictionary object before saving the data.
40,062,854
I want to see and get some info about my OS with Python as in my tutorial, but I actually can't run this code: ``` import os F = os.popen('dir') ``` and this: ``` F.readline() ' Volume in drive C has no label.\n' F = os.popen('dir') # Read by sized blocks F.read(50) ' Volume in drive C has no label.\n Volume Serial Nu' os.popen('dir').readlines()[0] # Read all lines: index ' Volume in drive C has no label.\n' os.popen('dir').read()[:50] # Read all at once: slice ' Volume in drive C has no label.\n Volume Serial Nu' for line in os.popen('dir'): # File line iterator loop ... print(line.rstrip()) ``` This is the error for the first one in the terminal (in IDLE it returns just an error): ``` f = open('dir') Traceback (most recent call last): File "<stdin>", line 1, in <module> FileNotFoundError: [Errno 2] No such file or directory: 'dir' ``` I know on Mac it should be different, but how do I get the same result using macOS Sierra?
2016/10/15
[ "https://Stackoverflow.com/questions/40062854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4716040/" ]
The problem is you can't save your custom array in NSUserDefaults. To do that, you should convert it to NSData and then save that in NSUserDefaults. Here is the code I used in my project; it's in Swift 2 syntax, and I don't think it will be hard to convert it to Swift 3: ``` let data = NSKeyedArchiver.archivedDataWithRootObject(yourObject); NSUserDefaults.standardUserDefaults().setObject(data, forKey: "yourKey") NSUserDefaults.standardUserDefaults().synchronize() ``` And for the get part, use this combination: ``` if let data = NSUserDefaults.standardUserDefaults().objectForKey("yourKey") as? NSData { let myItem = NSKeyedUnarchiver.unarchiveObjectWithData(data) as? yourType } ``` Hope this helps.
I was able to program a solution based on @ahruss ([How to save an array of custom struct to NSUserDefault with swift?](https://stackoverflow.com/questions/38406457/how-to-save-an-array-of-custom-struct-to-nsuserdefault-with-swift?rq=1)). However, I modified it for Swift 3, and it also shows how to implement this solution in a UITableView. I hope it can help someone in the future: 1. Add the extension from below to your structure (adjust it to your own variables). 2. Save the required array item like this: ``` let encoded = riskEntry.map { $0.encode() } riskItemDefaults.set(encoded, forKey: "consequences") riskItemDefaults.synchronize() ``` 3. Load your item like this: ``` let dataArray = riskItemDefaults.object(forKey: "consequences") as! [NSData] let savedFoo = dataArray.map { RiskEntry(data: $0)! } ``` 4. If you'd like to show the saved array item in your cells, proceed this way: ``` cell.consequences.text = savedFoo[indexPath.row].consequences as String ``` Here is the complete code, modified for **Swift 3**. Structure: ``` // ---------------- structure for table row content ----------------- struct RiskEntry { let title: String var consequences: String } ``` Extension: ``` extension RiskEntry { init?(data: NSData) { if let coding = NSKeyedUnarchiver.unarchiveObject(with: data as Data) as? Encoding { title = coding.title as String consequences = (coding.consequences as String?)! } else { return nil } } func encode() -> NSData { return NSKeyedArchiver.archivedData(withRootObject: Encoding(self)) as NSData } private class Encoding: NSObject, NSCoding { let title : NSString let consequences : NSString? init(_ RiskEntry: RiskEntry) { title = RiskEntry.title as NSString consequences = RiskEntry.consequences as NSString? } public required init?(coder aDecoder: NSCoder) { if let title = aDecoder.decodeObject(forKey: "title") as? NSString { self.title = title } else { return nil } consequences = aDecoder.decodeObject(forKey: "consequences") as? NSString } public func encode(with aCoder: NSCoder) { aCoder.encode(title, forKey: "title") aCoder.encode(consequences, forKey: "consequences") } } } ```
40,062,854
I want to see and get some info about my OS with Python as in my tutorial, but I actually can't run this code: ``` import os F = os.popen('dir') ``` and this: ``` F.readline() ' Volume in drive C has no label.\n' F = os.popen('dir') # Read by sized blocks F.read(50) ' Volume in drive C has no label.\n Volume Serial Nu' os.popen('dir').readlines()[0] # Read all lines: index ' Volume in drive C has no label.\n' os.popen('dir').read()[:50] # Read all at once: slice ' Volume in drive C has no label.\n Volume Serial Nu' for line in os.popen('dir'): # File line iterator loop ... print(line.rstrip()) ``` This is the error for the first one in the terminal (in IDLE it returns just an error): ``` f = open('dir') Traceback (most recent call last): File "<stdin>", line 1, in <module> FileNotFoundError: [Errno 2] No such file or directory: 'dir' ``` I know on Mac it should be different, but how do I get the same result using macOS Sierra?
2016/10/15
[ "https://Stackoverflow.com/questions/40062854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4716040/" ]
Saving objects in `UserDefaults` has very specific restrictions: > > [set(\_:forKey:) reference:](https://developer.apple.com/reference/foundation/userdefaults/1414067-set) > > > The value parameter can be only property list objects: NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary. For NSArray and NSDictionary objects, their contents must be property list objects. > > > You need to serialize your model, either using NSCoding or, as an alternative, JSON, to map it to a value supported by `UserDefaults`.
The closest type to a Swift struct that UserDefaults supports might be an NSDictionary. You could copy the struct elements into an Objective-C NSDictionary object before saving the data.
40,062,854
I want to see and get some info about my OS with Python as in my tutorial, but I actually can't run this code: ``` import os F = os.popen('dir') ``` and this: ``` F.readline() ' Volume in drive C has no label.\n' F = os.popen('dir') # Read by sized blocks F.read(50) ' Volume in drive C has no label.\n Volume Serial Nu' os.popen('dir').readlines()[0] # Read all lines: index ' Volume in drive C has no label.\n' os.popen('dir').read()[:50] # Read all at once: slice ' Volume in drive C has no label.\n Volume Serial Nu' for line in os.popen('dir'): # File line iterator loop ... print(line.rstrip()) ``` This is the error for the first one in the terminal (in IDLE it returns just an error): ``` f = open('dir') Traceback (most recent call last): File "<stdin>", line 1, in <module> FileNotFoundError: [Errno 2] No such file or directory: 'dir' ``` I know on Mac it should be different, but how do I get the same result using macOS Sierra?
2016/10/15
[ "https://Stackoverflow.com/questions/40062854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4716040/" ]
Saving objects in `UserDefaults` has very specific restrictions: > > [set(\_:forKey:) reference:](https://developer.apple.com/reference/foundation/userdefaults/1414067-set) > > > The value parameter can be only property list objects: NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary. For NSArray and NSDictionary objects, their contents must be property list objects. > > > You need to serialize your model, either using NSCoding or, as an alternative, JSON, to map it to a value supported by `UserDefaults`.
I was able to program a solution based on @ahruss ([How to save an array of custom struct to NSUserDefault with swift?](https://stackoverflow.com/questions/38406457/how-to-save-an-array-of-custom-struct-to-nsuserdefault-with-swift?rq=1)). However, I modified it for Swift 3, and it also shows how to implement this solution in a UITableView. I hope it can help someone in the future: 1. Add the extension from below to your structure (adjust it to your own variables). 2. Save the required array item like this: ``` let encoded = riskEntry.map { $0.encode() } riskItemDefaults.set(encoded, forKey: "consequences") riskItemDefaults.synchronize() ``` 3. Load your item like this: ``` let dataArray = riskItemDefaults.object(forKey: "consequences") as! [NSData] let savedFoo = dataArray.map { RiskEntry(data: $0)! } ``` 4. If you'd like to show the saved array item in your cells, proceed this way: ``` cell.consequences.text = savedFoo[indexPath.row].consequences as String ``` Here is the complete code, modified for **Swift 3**. Structure: ``` // ---------------- structure for table row content ----------------- struct RiskEntry { let title: String var consequences: String } ``` Extension: ``` extension RiskEntry { init?(data: NSData) { if let coding = NSKeyedUnarchiver.unarchiveObject(with: data as Data) as? Encoding { title = coding.title as String consequences = (coding.consequences as String?)! } else { return nil } } func encode() -> NSData { return NSKeyedArchiver.archivedData(withRootObject: Encoding(self)) as NSData } private class Encoding: NSObject, NSCoding { let title : NSString let consequences : NSString? init(_ RiskEntry: RiskEntry) { title = RiskEntry.title as NSString consequences = RiskEntry.consequences as NSString? } public required init?(coder aDecoder: NSCoder) { if let title = aDecoder.decodeObject(forKey: "title") as? NSString { self.title = title } else { return nil } consequences = aDecoder.decodeObject(forKey: "consequences") as? NSString } public func encode(with aCoder: NSCoder) { aCoder.encode(title, forKey: "title") aCoder.encode(consequences, forKey: "consequences") } } } ```
65,849,470
I am writing a unit test in Python for a function that takes an object from an S3 bucket as the input parameter. The input parameter is of type `boto3.resources.factory.s3.ObjectSummary`. I don't want my unit test to access S3. I am writing a test that reads a .csv file into an object of type `pandas.core.frame.DataFrame`. Does anyone know how I can create an object of type `boto3.resources.factory.s3.ObjectSummary` from it? Thanks for your response.
2021/01/22
[ "https://Stackoverflow.com/questions/65849470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15061060/" ]
The answer is you shouldn't have a `loadingData()` Redux action in the first place. Loading or not is, as you correctly pointed out, every component's "local" state, so you should store it appropriately - inside each component's "normal" state. Redux store is designed for storing the data that is mutual to several components. And whether some component is ready or not is certainly NOT that.
It is good practice to have a `loading` for each `subject` for which you call a backend `api`: for example, a `loading` for calling the `books` api, a `loading` for calling the `movies` api, and so on. I recommend you create a `loadings` object in your state and fill it with the different loadings that you need, like this: ``` loadings: { books_loading, movie_loading } ``` That way, your components wouldn't read a general `loading` state which affects a lot of components; only those that need the specific `loading` will use it, and you will solve the problem you have.
65,849,470
I am writing a unit test in Python for a function that takes an object from an S3 bucket as the input parameter. The input parameter is of type `boto3.resources.factory.s3.ObjectSummary`. I don't want my unit test to access S3. I am writing a test that reads a .csv file into an object of type `pandas.core.frame.DataFrame`. Does anyone know how I can create an object of type `boto3.resources.factory.s3.ObjectSummary` from it? Thanks for your response.
2021/01/22
[ "https://Stackoverflow.com/questions/65849470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15061060/" ]
The answer is you shouldn't have a `loadingData()` Redux action in the first place. Loading or not is, as you correctly pointed out, every component's "local" state, so you should store it appropriately - inside each component's "normal" state. Redux store is designed for storing the data that is mutual to several components. And whether some component is ready or not is certainly NOT that.
It is perfectly fine to handle a loading state either in local component state, in the part of your redux state where you will finally store the data, or in a completely different part. There is no "one size fits all" solution, and different applications handle it differently. If you want to track that state globally, it is a fairly common pattern to have a `yourApi/pending` action followed either by a `yourApi/fulfilled` or `yourApi/rejected` action - this is how [createAsyncThunk](https://redux-toolkit.js.org/api/createAsyncThunk) of the official redux toolkit handles it. But of course, if you have two components sharing the same data, then they also share the same loading state. Maybe you should check if the data is already present and fetch it only when it is not, because why fetch it twice in the first place? Or, if the loading state really describes a different endpoint, split that up into multiple loading states.
65,849,470
I am writing a unit test in Python for a function that takes an object from an S3 bucket as the input parameter. The input parameter is of type `boto3.resources.factory.s3.ObjectSummary`. I don't want my unit test to access S3. I am writing a test that reads a .csv file into an object of type `pandas.core.frame.DataFrame`. Does anyone know how I can create an object of type `boto3.resources.factory.s3.ObjectSummary` from it? Thanks for your response.
2021/01/22
[ "https://Stackoverflow.com/questions/65849470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15061060/" ]
It is perfectly fine to handle a loading state either in local component state, in the part of your redux state where you will finally store the data, or in a completely different part. There is no "one size fits all" solution, and different applications handle it differently. If you want to track that state globally, it is a fairly common pattern to have a `yourApi/pending` action followed either by a `yourApi/fulfilled` or `yourApi/rejected` action - this is how [createAsyncThunk](https://redux-toolkit.js.org/api/createAsyncThunk) of the official redux toolkit handles it. But of course, if you have two components sharing the same data, then they also share the same loading state. Maybe you should check if the data is already present and fetch it only when it is not, because why fetch it twice in the first place? Or, if the loading state really describes a different endpoint, split that up into multiple loading states.
It is good practice to have a `loading` for each `subject` for which you call a backend `api`: for example, a `loading` for calling the `books` api, a `loading` for calling the `movies` api, and so on. I recommend you create a `loadings` object in your state and fill it with the different loadings that you need, like this: ``` loadings: { books_loading, movie_loading } ``` That way, your components wouldn't read a general `loading` state which affects a lot of components; only those that need the specific `loading` will use it, and you will solve the problem you have.
10,572,671
I'm new to C/C++ and I've been working with Python for a long time. I didn't take any tutorials, but I got this error when I tried to declare an array of strings. code: ``` QString months[12]={'Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'}; ``` error: invalid conversion from 'int' to 'const char\*' What does that error mean?
2012/05/13
[ "https://Stackoverflow.com/questions/10572671", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1164958/" ]
Use double quotes for strings (`"`). `'` is for character literals.
In Python there is no difference between `'` and `"` (both delimit strings), but in C++ they are different: ``` char c = 'c'; string str = "string"; ``` Also don't forget that C++ has no `'''`, which in Python is a string delimiter. Your code should be: ``` ... "Oct", "Nov", "Dec"}; ```
63,381,325
I am using a Python script with the regex module, trying to process 2 files and create a final output as required, but I am getting some errors. cat links.txt ``` https://videos-a.jwpsrv.com/content/conversions/7kHOkkQa/videos/XXXXJD8C-32313922.mp4.m3u8?hdnts=exp=1596554537~acl=*/bGxpJD8C-32313922.mp4.m3u8~hmac=2ac95222f1693d11e7fd8758eb0a18d6d2ee187bb10e3c27311e627785687bd5 https://videos-a.jwpsrv.com/content/conversions/7kHOkkQa/videos/XXXXkxI1-32313922.mp4.m3u8?hdnts=exp=1596554733~acl=*/bM07kxI1-32313922.mp4.m3u8~hmac=dd0fc6f433a8ac74c9eaa2a376fa4324a65ae7c410cdcf8e869c6961f1a5b5ea https://videos-a.jwpsrv.com/content/conversions/7kHOkkQa/videos/XXXXpGKZ-32313922.mp4.m3u8?hdnts=exp=1596554748~acl=*/onhIpGKZ-32313922.mp4.m3u8~hmac=d4030cf7813cef02a58ca17127a0bc6b19dc93cccd6add4edc72a2ee5154f236 https://videos-a.jwpsrv.com/content/conversions/7kHOkkQa/videos/XXXXLbgy-32313922.mp4.m3u8?hdnts=exp=1596554871~acl=*/xGXCLbgy-32313922.mp4.m3u8~hmac=7c515306c033c88d32072d54ba1d6aa4abf1be23070d1bb14d1311e4e74cc1d7 ``` cat name.txt ``` Introduction Lecture 1 Questions Lecture 1B Theory Lecture 2 Labour Costing Lecture 352 (Classroom Lecture) ``` Expected ( final.txt ) ``` https://cdn.jwplayer.com/vidoes/XXXXJD8C-32313922.mp4 out=Lecture 001- Introduction.mp4 https://cdn.jwplayer.com/vidoes/XXXXkxI1-32313922.mp4 out=Lecture 001B- Questions.mp4 https://cdn.jwplayer.com/vidoes/XXXXpGKZ-32313922.mp4 out=Lecture 002- Theory.mp4 https://cdn.jwplayer.com/vidoes/XXXXLbgy-32313922.mp4 out=Lecture 352- Labour Costing (Classroom Lecture).mp4 ``` cat sort.py ( my existing script ) ``` import re final = open('final.txt','w') a = open('links.txt','r') b = open('name.txt','r') base = 'https://cdn.jwplayer.com/videos/' kek = re.compile(r'(?<=\/)[\w\-\.]+(?=.m3u8)') # find max lecture number n = None for line in b: b_n = int(''.join([c for c in line.rpartition(' ')[2] if c in '1234567890'])) if n is None or b_n > n: n = b_n n = len(str(n)) # string len of the max lecture number b = open('name.txt','r') for line in a: final.write(base + kek.search(line).group() + '\n') b_line = b.readline().rstrip() line_before_lecture, _, lecture = b_line.partition('Lecture') line_before_lecture = line_before_lecture.strip() lecture_no = lecture.rpartition(' ')[2] lecture_str = lecture_no.rjust(n, '0') + '-' + " " + line_before_lecture final.write(' out=' + 'Lecture ' + lecture_str + '.mp4\n') ``` Traceback ``` Traceback (most recent call last): File "sort.py", line 11, in <module> b_n = int(''.join([c for c in line.rpartition(' ')[2] if c in '1234567890'])) ValueError: invalid literal for int() with base 10: '' ``` **Edit** - It seems that the error is due to the last line in name.txt, as my script assumes all lines in name.txt end in the format Lecture X. One way to fix it, I guess, is to edit the script and add an **if** condition as follows: if any line in name.txt doesn't end in the format Lecture X, then shift the text succeeding Lecture X to before the word Lecture. For example, the 4th line of name.txt, `Labour Costing Lecture 352 (Classroom Lecture)`, could be converted to `Labour Costing (Classroom Lecture) Lecture 352`, and edit the below line in my script to match only the last occurrence of "Lecture" in a line in name.txt: ``` line_before_lecture, _, lecture = b_line.partition('Lecture') ``` I basically need the expected output ( final.txt ) from those 2 files ( name.txt and links.txt ) using the script; if there's a better/smarter way to do it, I would definitely be happy to use it.
I just theoretically suggested one way of doing it; I have no clue how to do it myself.
2020/08/12
[ "https://Stackoverflow.com/questions/63381325", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13516930/" ]
If you are using regular expressions anyway, why not use them to pull out this information, too? ```py import re base = 'https://cdn.jwplayer.com/videos/' kek = re.compile(r'(?<=\/)[\w\-\.]+(?=.m3u8)') nre = re.compile(r'(.*)\s+Lecture (\d+)(.*)') with open('name.txt') as b: lecture = [] for line in b: parsed = nre.match(line) if parsed: lecture.append((int(parsed.group(2)), parsed.group(3), parsed.group(1))) else: raise ValueError('Unable to parse %r' % line) n = len(str(lecture[-1][0])) with open('links.txt','r') as a: for idx, line in enumerate(a): print(base + kek.search(line).group()) fmt=' out=Lecture {0:0' + str(n) + 'n}{1}- {2}.mp4' print(fmt.format(*lecture[idx])) ``` This only traverses the contents of name.txt once, and stores the results in a variable `lecture` which contains a tuple of the pieces we pulled out (number, suffix, title). I also changed this to write to standard output; redirect to a file if you like, or switch back to explicitly hard-coding the output file in the script itself. The splat syntax `*lecture[idx]` is just a shorthand to avoid having to write `lecture[idx][0], lecture[idx][1], lecture[idx][2]` explicitly. Demo: <https://repl.it/repls/TatteredInexperiencedFibonacci#main.py>
The issue is with the last line of name.txt. ``` >>> line = "Labour Costing Lecture 352 (Classroom Lecture)" >>> [c for c in line.rpartition(' ')[2]] ['L', 'e', 'c', 't', 'u', 'r', 'e', ')'] ``` Clearly not what you are intending to extract. Since none of these characters is a digit, the join returns an empty string, which cannot be cast to an int. If you are looking to extract the int, I would suggest looking at this question: [How to extract numbers from a string in Python?](https://stackoverflow.com/questions/4289331/how-to-extract-numbers-from-a-string-in-python)
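For instance, a minimal sketch of that approach against the problematic line (anchoring the pattern on the literal word "Lecture" is my assumption, since the question always numbers lectures that way):

```
import re

line = "Labour Costing Lecture 352 (Classroom Lecture)"

# grab the digits that directly follow the word "Lecture"
match = re.search(r'Lecture\s+(\d+)', line)
if match:
    print(int(match.group(1)))  # 352
```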
45,026,566
I was trying to use a Python API, but it's not working when I try to use multiple parameters. **Not working** ``` from flask import Flask, request @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] UserPassword = req_json['password'] return str(UserName) ``` **Working** ``` from flask import Flask, request @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] return str(UserName) ``` **Error** <https://www.herokucdn.com/error-pages/application-error.html> **Logs** ``` State changed from crashed to starting 2017-07-11T06:44:13.760404+00:00 heroku[web.1]: Starting process with command `python server.py` 2017-07-11T06:44:16.078195+00:00 app[web.1]: File "server.py", line 29 2017-07-11T06:44:16.078211+00:00 app[web.1]: account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) 2017-07-11T06:44:16.078211+00:00 app[web.1]: ^ 2017-07-11T06:44:16.078213+00:00 app[web.1]: IndentationError: unexpected indent 2017-07-11T06:44:16.179785+00:00 heroku[web.1]: Process exited with status 1 2017-07-11T06:44:16.192829+00:00 heroku[web.1]: State changed from starting to crashed ``` **Server.py** ``` import os from flask import Flask, request from twilio.jwt.access_token import AccessToken, VoiceGrant from twilio.rest import Client import twilio.twiml ACCOUNT_SID = 'accountsid' API_KEY = 'apikey' API_KEY_SECRET = 'apikeysecret' PUSH_CREDENTIAL_SID = 'pushsid' APP_SID = 'appsid' app = Flask(__name__) @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] Password = req_json['password'] return str(UserName) @app.route('/accessToken') def token(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) grant = VoiceGrant( push_credential_sid=push_credential_sid, outgoing_application_sid=app_sid ) token = AccessToken(account_sid, api_key, api_key_secret, IDENTITY) token.add_grant(grant) return str(token) @app.route('/outgoing', methods=['GET', 'POST']) def outgoing(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have made your first outbound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/incoming', methods=['GET', 'POST']) def incoming(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have received your first inbound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/placeCall', methods=['GET', 'POST']) def placeCall(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] CALLER_ID = req_json['callerid'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) client = Client(api_key, api_key_secret, account_sid) call = client.calls.create(url=request.url_root + 'incoming', to='client:' + CALLER_ID, from_='client:' + IDENTITY) return str(call.sid) @app.route('/', methods=['GET', 'POST']) def welcome(): resp = twilio.twiml.Response() resp.say("Welcome") return str(resp) if __name__ == "__main__": port = int(os.environ.get("PORT", 5000)) app.run(host='0.0.0.0', port=port, debug=True) ```
2017/07/11
[ "https://Stackoverflow.com/questions/45026566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6472692/" ]
As you can see in the logs, the app crashed due to an indentation error. Please check the indentation of the `account_sid` variable in your code.
The hint is in your logs. ``` 2017-07-11T06:44:16.078195+00:00 app[web.1]: File "server.py", line 29 2017-07-11T06:44:16.078211+00:00 app[web.1]: account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) 2017-07-11T06:44:16.078211+00:00 app[web.1]: ^ 2017-07-11T06:44:16.078213+00:00 app[web.1]: IndentationError: unexpected indent ``` You have bad indentation in server.py on line 29. ``` req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) ``` should look like: ``` req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) ``` It looks like you have loads of other badly indented lines as well.
45,026,566
I was trying to use a Python API, but it's not working when I try to use multiple parameters. **Not working** ``` from flask import Flask, request @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] UserPassword = req_json['password'] return str(UserName) ``` **Working** ``` from flask import Flask, request @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] return str(UserName) ``` **Error** <https://www.herokucdn.com/error-pages/application-error.html> **Logs** ``` State changed from crashed to starting 2017-07-11T06:44:13.760404+00:00 heroku[web.1]: Starting process with command `python server.py` 2017-07-11T06:44:16.078195+00:00 app[web.1]: File "server.py", line 29 2017-07-11T06:44:16.078211+00:00 app[web.1]: account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) 2017-07-11T06:44:16.078211+00:00 app[web.1]: ^ 2017-07-11T06:44:16.078213+00:00 app[web.1]: IndentationError: unexpected indent 2017-07-11T06:44:16.179785+00:00 heroku[web.1]: Process exited with status 1 2017-07-11T06:44:16.192829+00:00 heroku[web.1]: State changed from starting to crashed ``` **Server.py** ``` import os from flask import Flask, request from twilio.jwt.access_token import AccessToken, VoiceGrant from twilio.rest import Client import twilio.twiml ACCOUNT_SID = 'accountsid' API_KEY = 'apikey' API_KEY_SECRET = 'apikeysecret' PUSH_CREDENTIAL_SID = 'pushsid' APP_SID = 'appsid' app = Flask(__name__) @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] Password = req_json['password'] return str(UserName) @app.route('/accessToken') def token(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) grant = VoiceGrant( push_credential_sid=push_credential_sid, outgoing_application_sid=app_sid ) token = AccessToken(account_sid, api_key, api_key_secret, IDENTITY) token.add_grant(grant) return str(token) @app.route('/outgoing', methods=['GET', 'POST']) def outgoing(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have made your first outbound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/incoming', methods=['GET', 'POST']) def incoming(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have received your first inbound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/placeCall', methods=['GET', 'POST']) def placeCall(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] CALLER_ID = req_json['callerid'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) client = Client(api_key, api_key_secret, account_sid) call = client.calls.create(url=request.url_root + 'incoming', to='client:' + CALLER_ID, from_='client:' + IDENTITY) return str(call.sid) @app.route('/', methods=['GET', 'POST']) def welcome(): resp = twilio.twiml.Response() resp.say("Welcome") return str(resp) if __name__ == "__main__": port = int(os.environ.get("PORT", 5000)) app.run(host='0.0.0.0', port=port, debug=True) ```
2017/07/11
[ "https://Stackoverflow.com/questions/45026566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6472692/" ]
I honestly can't tell where the issue w/ your indents is and whether that is a misunderstanding how whitespacing works in python or posting code blocks on stackoverflow (my guess is a combo of both). So I took your code and put it in PyCharm and properly indented it and pasted that code into this nice [tool](http://wittman.org/projects/stackoverflowindentfourspaces/) I just found so I could properly submit it. This should hopefully resolve your issues. Just copy and paste it then change all the necessary values. ``` import os from flask import Flask, request from twilio.jwt.access_token import AccessToken, VoiceGrant from twilio.rest import Client import twilio.twiml ACCOUNT_SID = 'accountsid' API_KEY = 'apikey' API_KEY_SECRET = 'apikeysecret' PUSH_CREDENTIAL_SID = 'pushsid' APP_SID = 'appsid' app = Flask(__name__) @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] Password = req_json['password'] return str(UserName) @app.route('/accessToken') def token(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) grant = VoiceGrant( push_credential_sid=push_credential_sid, outgoing_application_sid=app_sid ) token = AccessToken(account_sid, api_key, api_key_secret, IDENTITY) token.add_grant(grant) return str(token) @app.route('/outgoing', methods=['GET', 'POST']) def outgoing(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have made your first oubound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/incoming', methods=['GET', 'POST']) def incoming(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have received your first inbound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/placeCall', methods=['GET', 'POST']) def placeCall(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] CALLER_ID = req_json['callerid'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) client = Client(api_key, api_key_secret, account_sid) call = client.calls.create(url=request.url_root + 'incoming', to='client:' + CALLER_ID, from_='client:' + IDENTITY) return str(call.sid) @app.route('/', methods=['GET', 'POST']) def welcome(): resp = twilio.twiml.Response() resp.say("Welcome") return str(resp) if __name__ == "__main__": port = int(os.environ.get("PORT", 5000)) app.run(host='0.0.0.0', port=port, debug=True) ```
As you can see in the logs, the app crashed due to an indentation error. Please check the indentation of the account\_sid variable in your code.
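If it helps, one way to catch this locally before pushing to Heroku is to byte-compile the file without running it (a small sketch; it assumes the file is named server.py and sits in the current directory):

```
# Raises py_compile.PyCompileError on syntax/indentation errors
import py_compile
py_compile.compile('server.py', doraise=True)
```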
45,026,566
I was trying to use the Python API, but it's not working if I try to use multiple parameters. **Not working** ``` from flask import Flask, request @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] UserPassword = req_json['password'] return str(UserName) ``` **Working** ``` from flask import Flask, request @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] return str(UserName) ``` **Error** <https://www.herokucdn.com/error-pages/application-error.html> **Logs** ``` State changed from crashed to starting 2017-07-11T06:44:13.760404+00:00 heroku[web.1]: Starting process with command `python server.py` 2017-07-11T06:44:16.078195+00:00 app[web.1]: File "server.py", line 29 2017-07-11T06:44:16.078211+00:00 app[web.1]: account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) 2017-07-11T06:44:16.078211+00:00 app[web.1]: ^ 2017-07-11T06:44:16.078213+00:00 app[web.1]: IndentationError: unexpected indent 2017-07-11T06:44:16.179785+00:00 heroku[web.1]: Process exited with status 1 2017-07-11T06:44:16.192829+00:00 heroku[web.1]: State changed from starting to crashed ``` **Server.py** ``` import os from flask import Flask, request from twilio.jwt.access_token import AccessToken, VoiceGrant from twilio.rest import Client import twilio.twiml ACCOUNT_SID = 'accountsid' API_KEY = 'apikey' API_KEY_SECRET = 'apikeysecret' PUSH_CREDENTIAL_SID = 'pushsid' APP_SID = 'appsid' app = Flask(__name__) @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] Password = req_json['password'] return str(UserName) @app.route('/accessToken') def token(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) grant = VoiceGrant( push_credential_sid=push_credential_sid, outgoing_application_sid=app_sid ) token = AccessToken(account_sid, api_key, api_key_secret, IDENTITY) token.add_grant(grant) return str(token) @app.route('/outgoing', methods=['GET', 'POST']) def outgoing(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have made your first oubound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/incoming', methods=['GET', 'POST']) def incoming(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have received your first inbound call! Good bye.") resp.say("Thanks for Calling! 
Please try again later.") return str(resp) @app.route('/placeCall', methods=['GET', 'POST']) def placeCall(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] CALLER_ID = req_json['callerid'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) client = Client(api_key, api_key_secret, account_sid) call = client.calls.create(url=request.url_root + 'incoming', to='client:' + CALLER_ID, from_='client:' + IDENTITY) return str(call.sid) @app.route('/', methods=['GET', 'POST']) def welcome(): resp = twilio.twiml.Response() resp.say("Welcome") return str(resp) if __name__ == "__main__": port = int(os.environ.get("PORT", 5000)) app.run(host='0.0.0.0', port=port, debug=True) ```
2017/07/11
[ "https://Stackoverflow.com/questions/45026566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6472692/" ]
I honestly can't tell where the issue w/ your indents is and whether that is a misunderstanding how whitespacing works in python or posting code blocks on stackoverflow (my guess is a combo of both). So I took your code and put it in PyCharm and properly indented it and pasted that code into this nice [tool](http://wittman.org/projects/stackoverflowindentfourspaces/) I just found so I could properly submit it. This should hopefully resolve your issues. Just copy and paste it then change all the necessary values. ``` import os from flask import Flask, request from twilio.jwt.access_token import AccessToken, VoiceGrant from twilio.rest import Client import twilio.twiml ACCOUNT_SID = 'accountsid' API_KEY = 'apikey' API_KEY_SECRET = 'apikeysecret' PUSH_CREDENTIAL_SID = 'pushsid' APP_SID = 'appsid' app = Flask(__name__) @app.route('/test', methods=['GET', 'POST']) def test(): req_json = request.get_json(force=True) UserName = req_json['username'] Password = req_json['password'] return str(UserName) @app.route('/accessToken') def token(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) grant = VoiceGrant( push_credential_sid=push_credential_sid, outgoing_application_sid=app_sid ) token = AccessToken(account_sid, api_key, api_key_secret, IDENTITY) token.add_grant(grant) return str(token) @app.route('/outgoing', methods=['GET', 'POST']) def outgoing(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have made your first oubound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/incoming', methods=['GET', 'POST']) def incoming(): resp = twilio.twiml.Response() #resp.say("Congratulations! You have received your first inbound call! Good bye.") resp.say("Thanks for Calling! Please try again later.") return str(resp) @app.route('/placeCall', methods=['GET', 'POST']) def placeCall(): req_json = request.get_json(force=True) IDENTITY = req_json['identity'] CALLER_ID = req_json['callerid'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) client = Client(api_key, api_key_secret, account_sid) call = client.calls.create(url=request.url_root + 'incoming', to='client:' + CALLER_ID, from_='client:' + IDENTITY) return str(call.sid) @app.route('/', methods=['GET', 'POST']) def welcome(): resp = twilio.twiml.Response() resp.say("Welcome") return str(resp) if __name__ == "__main__": port = int(os.environ.get("PORT", 5000)) app.run(host='0.0.0.0', port=port, debug=True) ```
The hint is in your logs. ``` 2017-07-11T06:44:16.078195+00:00 app[web.1]: File "server.py", line 29 2017-07-11T06:44:16.078211+00:00 app[web.1]: account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) 2017-07-11T06:44:16.078211+00:00 app[web.1]: ^ 2017-07-11T06:44:16.078213+00:00 app[web.1]: IndentationError: unexpected indent ``` You have bad indentation in server.py on line 29. ``` req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) ``` should look like: ``` req_json = request.get_json(force=True) IDENTITY = req_json['identity'] account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID) api_key = os.environ.get("API_KEY", API_KEY) api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET) push_credential_sid = os.environ.get("PUSH_CREDENTIAL_SID", PUSH_CREDENTIAL_SID) app_sid = os.environ.get("APP_SID", APP_SID) ``` It looks like you have loads of other badly indented lines as well.
28,619,302
I'm using PyCharm (Python) (and mapnik) on Windows 7. I just wanted to test if everything is in place after installation. I used an example from the net (here it is), and I have a frame error. Could it be an installation problem? The compiler? I'm very new to Python. Thanks in advance for your time. ``` """ This is a simple wxPython application demonstrates how to integrate mapnik, it do nothing but draw the map from the World Poplulation XML example: https://github.com/mapnik/mapnik/wiki/GettingStartedInXML Victor Lin. (bornstub@gmail.com) Blog http://blog.ez2learn.com """ import mapnik import wx class Frame(wx.Frame): def __init__(self, *args, **kwargs): wx.Frame.__init__(self, size=(800, 500) ,*args, **kwargs) self.Bind(wx.EVT_PAINT, self.onPaint) self.mapfile = "population.xml" self.width = 800 self.height = 500 self.createMap() self.drawBmp() def createMap(self): """Create mapnik object """ self.map = mapnik.Map(self.width, self.height) mapnik.load_map(self.map, self.mapfile) bbox = mapnik.Envelope(mapnik.Coord(-180.0, -75.0), mapnik.Coord(180.0, 90.0)) self.map.zoom_to_box(bbox) def drawBmp(self): """Draw map to Bitmap object """ # create a Image32 object image = mapnik.Image(self.width, self.height) # render map to Image32 object mapnik.render(self.map, image) # load raw data from Image32 to bitmap self.bmp = wx.BitmapFromBufferRGBA(self.width, self.height, image.tostring()) def onPaint(self, event): dc = wx.PaintDC(self) memoryDC = wx.MemoryDC(self.bmp) # draw map to dc dc.Blit(0, 0, self.width, self.height, memoryDC, 0, 0) if __name__ == '__main__': app = wx.App() frame = frame(None, title="wxPython Mapnik Demo") frame.Show() app.MainLoop() ``` Here is the error message: ``` Traceback (most recent call last): File "C:/Python27/example.py", line 16, in <module> class Frame(wx.Frame): File "C:/Python27/example.py", line 56, in Frame frame = frame(None, title="wxPython Mapnik Demo") NameError: name 'frame' is not defined Process finished with exit code 1 ```
2015/02/19
[ "https://Stackoverflow.com/questions/28619302", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4586167/" ]
A little insight on your code first: `all` will fetch ALL records from the database and pass them to your ruby code; this is resource- and time-consuming. Then the `shuffle`, `sort_by` and `reverse` are all executed by ruby. You will quickly hit performance issues as your database grows. Your solution is to let your database server do that work. DB servers are very optimized for all sorting operations. So if you are for example using MySQL you should use this instead: ``` @articles = Article.order('`articles`.`date_published` DESC, RAND()') ``` Which will sort primarily by date\_published in reverse order, and secondarily randomly for all articles of the same date
Hmm, here's a fun hack that *should* work: ``` @articles = Article. all. sort_by{|t| (t.date_published.beginning_of_day.to_i * 1000) + rand(100)} ``` This works by forcing all the dates to be the beginning of the day (so that everything published on '2015-02-19' for example will have the same `to_i` value). Then you multiply by 1000 and add a random number between 0 and 100 for the sort (any number less than 1000 would work).
12,121,260
I've run into a specific problem and thought of a solution. But since the solution is pretty involved, I was wondering if others have encountered something similar and could comment on best practices or propose alternatives. The problem is as follows: I have a webapp written in Django which has some screen in which data from multiple tables is collected, grouped and aggregated in time intervals. It's basically a big excel-like matrix where we have data aggregated in time intervals on one axis, against resources for the aggregated data per interval on the other axis. It involves many inner and left joins to gather all data, and because of the "report"-like character of the presented data, I use raw sql to query everything together. The problem is that multiple users can concurrently view & edit data in these intervals. They can also edit data on finer or coarser granularities than other users working with the same data, but in sub/overlapping intervals. Currently, when a user edits some data, a django request is fired, the data is altered, the affected intervals are aggregated & grouped again and presented back. But because of the volatile nature of this data, other users might have changed something before them. Also, grouping/aggregating and rerendering the table each time is a very heavy operation (depending on the amount of data and the range of the intervals). This gets worse with concurrent users editing. My proposed solution: It's clear an http request/response mechanism is not really ideal for this kind of thing; the grouping/aggregation is pretty heavyweight, not ideal to do per request, the concurrency would ideally be channeled amongst users, and feedback should be realtime like googledocs instead of full page refreshes. I was thinking about making a daemon process which reads in *flat* data of interest from the dbms on request and caches this in memory. All changes to the data would then occur in memory with a write-through to the dbms. This daemon channels access to the data through a lock, so the daemon can handle which users can overwrite others' changes. The flat data is aggregated and grouped using python code and only the slices required by the user are returned; user/daemon communication would run over websockets. This daemon could be implemented using a framework like twisted. But I'm not sure an event driven approach would work here, as we want to "channel" all incoming requests... Maybe these should be put in a queue and be run in a separate thread? Would it be better to have twisted run in a thread next to my scheduler, or should the twisted main loop spin off a thread that works on this queue? My understanding is that threading works best for IO, and python-heavy code basically blocks other threads. I have both (websockets/dbms and processing data), would that work? Has anyone done something similar before? Thanks in advance! Karl
2012/08/25
[ "https://Stackoverflow.com/questions/12121260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1227852/" ]
I tried something similar and you might be interested in the solution. Here is my question: [python Socket.IO client for sending broadcast messages to TornadIO2 server](https://stackoverflow.com/questions/10950365/python-socket-io-client-for-sending-broadcast-messages-to-tornadio2-server) And this is the answer: <https://stackoverflow.com/a/10950702/675065> He also wrote a blog post about the solution: <http://blog.y3xz.com/blog/2012/06/08/a-modern-python-stack-for-a-real-time-web-application/> The software stack consists of: * [SockJS Client](https://github.com/sockjs/sockjs-client) * [SockJS Tornado Server](https://github.com/MrJoes/sockjs-tornado) * [Redis Pub/Sub](http://redis.io/commands#pubsub) * [Django Redis Client: Brukva](https://github.com/evilkost/brukva) I implemented this myself and it works like a charm.
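For what it's worth, the Redis Pub/Sub piece of that stack is small enough to sketch with the plain redis-py client (the channel name here is hypothetical, and the Tornado/brukva wiring is omitted):

```
import redis

r = redis.Redis()               # assumes a Redis server on localhost:6379
p = r.pubsub()
p.subscribe('grid-updates')     # hypothetical channel for cell-edit events

r.publish('grid-updates', 'cell A1 changed')
for message in p.listen():      # the first message is the subscribe confirmation
    print(message)
    if message['type'] == 'message':
        break
```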
The scheme Google implemented for the now abandoned Wave product's concurrent editing features is documented at <http://www.waveprotocol.org/whitepapers/operational-transform>. This aspect of Wave seemed like a success, even though Wave itself was quickly abandoned. As far as the questions you asked about implementing your proposed scheme: 1. An event driven system is perfectly capable of implementing this idea. Being event driven is a way to organize your code. It doesn't prevent you from implementing any particular functionality. 2. Threading doesn't work best for very much, particularly in Python. 1. It has significant disadvantages for CPU-bound work, since CPython only runs a single Python thread at a time (regardless of available hardware resources). This means a multi-threaded CPU-bound Python program is typically no faster, or even slower, than the single-threaded equivalent. 2. For IO, this shortcoming is less of a limitation, because IO does not involve running Python code on CPython (the IO APIs are all implemented in C). This means you can do IO in multiple threads concurrently, so threading is potentially a benefit. However, doing IO concurrently in a single thread is exactly what Twisted is for. Threading offers no benefits over doing the IO in a single thread, as long as you're doing the IO non-blockingly (or perhaps asynchronously).
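A minimal sketch of the CPU-bound point, assuming CPython (the absolute timings will vary; the point is that two threads don't finish in half the time):

```
import threading
import time

def count(n):
    # pure-Python busy loop: CPU-bound, holds the GIL while it runs
    while n > 0:
        n -= 1

N = 10000000

start = time.time()
count(N); count(N)
print('sequential: %.2fs' % (time.time() - start))

start = time.time()
threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print('two threads: %.2fs' % (time.time() - start))
```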
40,499,702
I'm studying TensorFlow and want to test the slim example. When I run ./scripts/train\_lenet\_on\_mnist.sh, the program gets to eval\_image\_classifier and raises a TypeError. The error information is as follows: ``` I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so.8.0 locally I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so.5 locally I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so.8.0 locally I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so.8.0 locally INFO:tensorflow:Scale of 0 disables regularizer. INFO:tensorflow:Evaluating /tmp/lenet-model/model.ckpt-20002 INFO:tensorflow:Starting evaluation at 2016-11-09-02:55:57 I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties: name: Quadro K5000 major: 3 minor: 0 memoryClockRate (GHz) 0.7055 pciBusID 0000:03:00.0 Total memory: 3.94GiB Free memory: 3.61GiB I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0 I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0: Y I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Quadro K5000, pci bus id: 0000:03:00.0) INFO:tensorflow:Executing eval ops INFO:tensorflow:Executing eval_op 1/100 INFO:tensorflow:Error reported to Coordinator: <class 'TypeError'>, Fetch argument dict_values([<tf.Tensor 'accuracy/update_op:0' shape=() dtype=float32>, <tf.Tensor 'recall_at_5/update_op:0' shape=() dtype=float32>]) has invalid type <class 'dict_values'>, must be a string or Tensor. (Can not convert a dict_values into a Tensor or Operation.) Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 218, in init fetch, allow_tensor=True, allow_operation=True)) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2455, in as_graph_element return self._as_graph_element_locked(obj, allow_tensor, allow_operation) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2547, in _as_graph_element_locked % (type(obj).name, types_str)) TypeError: Can not convert a dict_values into a Tensor or Operation. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "eval_image_classifier.py", line 191, in <module> tf.app.run() File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 30, in run sys.exit(main(sys.argv[:1] + flags_passthrough)) File "eval_image_classifier.py", line 187, in main variables_to_restore=variables_to_restore) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/slim/python/slim/evaluation.py", line 359, in evaluate_once global_step=global_step) File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/slim/python/slim/evaluation.py", line 260, in evaluation sess.run(eval_op, eval_op_feed_dict) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 717, in run run_metadata_ptr) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 902, in _run fetch_handler = _FetchHandler(self._graph, fetches, feed_dict_string) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 358, in init self._fetch_mapper = _FetchMapper.for_fetch(fetches) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 189, in for_fetch return _ElementFetchMapper(fetches, contraction_fn) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 222, in init % (fetch, type(fetch), str(e))) TypeError: Fetch argument dict_values([<tf.Tensor 'accuracy/update_op:0' shape=() dtype=float32>, <tf.Tensor 'recall_at_5/update_op:0' shape=() dtype=float32>]) has invalid type <class 'dict_values'>, must be a string or Tensor. (Can not convert a dict_values into a Tensor or Operation.) ``` I don't know what happened to the program. I did not revise any code; I just downloaded the code package from GitHub, and the data download and training steps gave correct results. Can anyone help me? I am waiting online. Thanks
2016/11/09
[ "https://Stackoverflow.com/questions/40499702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7134638/" ]
The problem is compatibility between `python2` and `python3`. I used `python3` as the interpreter, but retrieving the keys (or values) of a dictionary differs between `python2` and `python3`. In `Python2`, simply calling `keys()` on a dictionary object will return what you expect; however, in `Python3`, `keys()` no longer returns a `list` but a view object, so the TypeError can be avoided and compatibility maintained by simply converting the `dict_keys` object into a list, which can then be indexed as normal in both `Python2` and `Python3`. I edited `eval_image_classifier` to use `eval_op=list(names_to_updates.values())`, and then it works perfectly.
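A quick illustration of the difference (a minimal sketch, runnable on Python 3; the dictionary contents are just placeholders for the real update ops):

```
# Python 3: dict.values() returns a view object, not a list
names_to_updates = {'accuracy': 0.0, 'recall_at_5': 0.0}  # placeholder values
print(type(names_to_updates.values()))   # <class 'dict_values'>

# Wrapping it in list() restores the Python 2 behaviour that sess.run expects
print(list(names_to_updates.values()))   # [0.0, 0.0]
```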
The other python3 change for eval\_image\_classifier.py is changing

```
for name, value in names_to_values.iteritems():
```

to

```
for name, value in names_to_values.items():
```
56,689,803
I'm trying to remove some part of the text in a given string. The problem is as follows. I have a string, say HTML code like this: ``` <!DOCTYPE html> <html> <head> <style> body {background-color: powderblue;} h1 {color: blue;} p {color: red;} </style> </head> <body> <h1>This is a heading</h1> <p>This is a paragraph.</p> </body> </html> ``` I want the code to remove all the css-related code, i.e. the string should now look like: ``` <!DOCTYPE html> <html> <head> </head> <body> <h1>This is a heading</h1> <p>This is a paragraph.</p> </body> </html> ``` I have tried that with this function in python: ``` def css_remover(text): m = re.findall('<style>(.*)</style>$', text,re.DOTALL) if m: for eachText in text.split(" "): for eachM in m: if eachM in eachText: text=text.replace(eachText,"") print(text) ``` But this doesn't work. I want the function to handle spaces and newline characters so that it removes everything in between the `<style> </style>` tags. Also, if any word is attached to the tag, it should not be affected. For example, `hello<style> klasjdklasd </style>>` should yield `hello>`
2019/06/20
[ "https://Stackoverflow.com/questions/56689803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7928692/" ]
You put the `$`, which means end of string. Try this: ``` x = re.sub('<style>.*?</style>', '', text, flags=re.DOTALL) print(x) ``` You can check out [this website](https://regex101.com/r/Fti1aD/1), which has a nice regex demo. **A little note**: I am not extremely familiar with CSS, so if there are nested `<style>` tags it might be a problem.
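A quick check of the attached-word edge case from the question, as a minimal sketch:

```
import re

text = 'hello<style> klasjdklasd </style>>'
print(re.sub('<style>.*?</style>', '', text, flags=re.DOTALL))  # hello>
```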
Note particularly the `?` character in the `<style>(.*?)</style>` portion of the RegExp expression so as not to be "too greedy". Otherwise, in the example below, it would also remove the `<title>` HTML tag. ``` import re text = """ <!DOCTYPE html> <html> <head> <style> body {background-color: powderblue;} h1 {color: blue;} p {color: red;} </style> <title>Test</title> <style> body {background-color: powderblue;} h1 {color: blue;} p {color: red;} </style> </head> <body> <h1>This is a heading</h1> <p>This is a paragraph.</p> </body> </html> """ regex = re.compile(r' *<style>(.*?)</style> *\n?', re.DOTALL|re.MULTILINE) text = regex.sub('', text, 0) print (text == """ <!DOCTYPE html> <html> <head> <title>Test</title> </head> <body> <h1>This is a heading</h1> <p>This is a paragraph.</p> </body> </html> """) ```
33,984,889
I want to use an array and its first derivative (diff) as features for training. Since the diff array is of a smaller size, I would like to fill it up so that I don't have problems with sizes when I stack them and use both as features. If I fill the diff(array) with a 0, how should I align them? Do I put the 0 at the beginning of the resulting diff(array) or at the end? What is the correct way of aligning an array with its derivative? e.g. in python: ``` a = [1,32,43,54] b = np.diff(np.array(a)) np.insert(b, -1, 0) # at the end? np.insert(b, 0, 0) # or at the beginning? ```
2015/11/29
[ "https://Stackoverflow.com/questions/33984889", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1978146/" ]
Instead of left- or right-sided finite differences, you could use a centered finite difference (which is equivalent to taking the average of the left- and right-sided differences), and then pad both ends with appropriate approximations of the derivatives there. This will keep the estimation of the derivative aligned with its data value, and usually give a better estimate of the derivative. For example, ``` In [33]: y = np.array([1, 2, 3.5, 3.5, 4, 3, 2.5, 1.25]) In [34]: dy = np.empty(len(y)) In [35]: dy[1:-1] = 0.5*(y[2:] - y[:-2]) In [36]: dy[0] = y[1] - y[0] In [37]: dy[-1] = y[-1] - y[-2] In [38]: dy Out[38]: array([ 1. , 1.25 , 0.75 , 0.25 , -0.25 , -0.75 , -0.875, -1.25 ]) ``` The following script using matplotlib to create this visualization of the estimates of the derivatives: [![plot](https://i.stack.imgur.com/ugdVM.png)](https://i.stack.imgur.com/ugdVM.png) ``` import numpy as np import matplotlib.pyplot as plt y = np.array([1, 2, 3.5, 3.5, 4, 3, 2.5, 1.25]) dy = np.empty(len(y)) dy[1:-1] = 0.5*(y[2:] - y[:-2]) dy[0] = y[1] - y[0] dy[-1] = y[-1] - y[-2] plt.plot(y, 'b-o') for k, (y0, dy0) in enumerate(zip(y, dy)): t = 0.25 plt.plot([k-t, k+t], [y0 - t*dy0, y0 + t*dy0], 'c', alpha=0.4, linewidth=4) plt.grid() plt.show() ``` There are more sophisticated tools for estimating derivatives (e.g. [`scipy.signal.savgol_filter`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.savgol_filter.html) has an option for estimating the derivative, and if your data is periodic, you could use [`scipy.fftpack.diff`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.diff.html)), but a simple finite difference might work fine as your training input.
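For completeness, here is a sketch of the `savgol_filter` route mentioned above (it assumes evenly spaced samples; `window_length=5` and `polyorder=2` are illustrative choices, not tuned values):

```
import numpy as np
from scipy.signal import savgol_filter

y = np.array([1, 2, 3.5, 3.5, 4, 3, 2.5, 1.25])
# deriv=1 asks for the first derivative; delta is the sample spacing
dy = savgol_filter(y, window_length=5, polyorder=2, deriv=1, delta=1.0)
print(dy)   # same length as y, aligned with the sample points
```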
According to the [documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html), diff is simply doing `out[n] = a[n+1] - a[n]`. This means that it is not a derivative approximated by finite difference, but the discrete difference. To calculate the finite difference, you need to divide by the step size, except if your step size is 1, of course. Example: ``` import numpy as np x = np.linspace(0,2*np.pi,30) y = np.sin(x) dy = np.diff(y) / np.diff(x) ``` Here, `y` is a function of `x` at specific points, and `dy` is its derivative. The derivative by this formula is a central derivative, meaning that its location is between the points in `x`. If you need the derivatives at the same points, I would suggest you calculate the derivative using the two neighbouring points: ``` (y[:-2]-y[2:])/(x[:-2]-x[2:]) ``` This way, you could add a `0` to both sides of the derivative vector, or trim your input vector accordingly.
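Note that NumPy already packages exactly this alignment as `np.gradient`: central differences in the interior and one-sided differences at the ends, so the output has the same length as the input. A minimal sketch on the question's data:

```
import numpy as np

a = np.array([1, 32, 43, 54], dtype=float)
dy = np.gradient(a)   # one-sided at the edges, centered in the interior
print(dy)             # [31. 21. 11. 11.]
```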
38,596,674
I bet I am doing something very simple wrong. I want to start with an empty 2D numpy array and append arrays to it (with dimensions 1 row by 4 columns). ``` open_cost_mat_train = np.matrix([]) for i in xrange(10): open_cost_mat = np.array([i,0,0,0]) open_cost_mat_train = np.vstack([open_cost_mat_train,open_cost_mat]) ``` my error trace is: ``` File "/Users/me/anaconda/lib/python2.7/site-packages/numpy/core/shape_base.py", line 230, in vstack return _nx.concatenate([atleast_2d(_m) for _m in tup], 0) ValueError: all the input array dimensions except for the concatenation axis must match exactly ``` What am I doing wrong? I have tried append, concatenate, defining the empty 2D array as `[[]]`, as `[]`, `array([])` and many others.
2016/07/26
[ "https://Stackoverflow.com/questions/38596674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3948664/" ]
If `open_cost_mat_train` is large I would encourage you to replace the for loop by a **vectorized algorithm**. I will use the following functions to show how efficiency is improved by vectorizing loops: ``` def fvstack(): import numpy as np np.random.seed(100) ocmt = np.matrix([]).reshape((0, 4)) for i in xrange(10): x = np.random.random() ocm = np.array([x, x + 1, 10*x, x/10]) ocmt = np.vstack([ocmt, ocm]) return ocmt def fshape(): import numpy as np from numpy.matlib import empty np.random.seed(100) ocmt = empty((10, 4)) for i in xrange(ocmt.shape[0]): ocmt[i, 0] = np.random.random() ocmt[:, 1] = ocmt[:, 0] + 1 ocmt[:, 2] = 10*ocmt[:, 0] ocmt[:, 3] = ocmt[:, 0]/10 return ocmt ``` I've assumed that the values that populate the first column of `ocmt` (shorthand for `open_cost_mat_train`) are obtained from a for loop, and the remaining columns are a function of the first column, as stated in your comments to my original answer. As real cost data are not available, in the forthcoming example the values in the first column are random numbers, and the second, third and fourth columns are the functions `x + 1`, `10*x` and `x/10`, respectively, where `x` is the corresponding value in the first column. ``` In [594]: fvstack() Out[594]: matrix([[ 5.43404942e-01, 1.54340494e+00, 5.43404942e+00, 5.43404942e-02], [ 2.78369385e-01, 1.27836939e+00, 2.78369385e+00, 2.78369385e-02], [ 4.24517591e-01, 1.42451759e+00, 4.24517591e+00, 4.24517591e-02], [ 8.44776132e-01, 1.84477613e+00, 8.44776132e+00, 8.44776132e-02], [ 4.71885619e-03, 1.00471886e+00, 4.71885619e-02, 4.71885619e-04], [ 1.21569121e-01, 1.12156912e+00, 1.21569121e+00, 1.21569121e-02], [ 6.70749085e-01, 1.67074908e+00, 6.70749085e+00, 6.70749085e-02], [ 8.25852755e-01, 1.82585276e+00, 8.25852755e+00, 8.25852755e-02], [ 1.36706590e-01, 1.13670659e+00, 1.36706590e+00, 1.36706590e-02], [ 5.75093329e-01, 1.57509333e+00, 5.75093329e+00, 5.75093329e-02]]) In [595]: np.allclose(fvstack(), fshape()) Out[595]: True ``` In order for the calls to `fvstack()` and `fshape()` to produce the same results, the random number generator is initialized in both functions through `np.random.seed(100)`. Notice that the equality test has been performed using [`numpy.allclose`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html) instead of `fvstack() == fshape()` to avoid the round off errors associated with floating point arithmetic. As for efficiency, the following interactive session shows that initializing `ocmt` with its final shape is significantly faster than repeatedly stacking rows: ``` In [596]: import timeit In [597]: timeit.timeit('fvstack()', setup="from __main__ import fvstack", number=10000) Out[597]: 1.4884241055042366 In [598]: timeit.timeit('fshape()', setup="from __main__ import fshape", number=10000) Out[598]: 0.8819408006311278 ```
You need to reshape your original matrix so that the number of columns match the appended arrays: ``` open_cost_mat_train = np.matrix([]).reshape((0,4)) ``` After which, it gives: ``` open_cost_mat_train # matrix([[ 0., 0., 0., 0.], # [ 1., 0., 0., 0.], # [ 2., 0., 0., 0.], # [ 3., 0., 0., 0.], # [ 4., 0., 0., 0.], # [ 5., 0., 0., 0.], # [ 6., 0., 0., 0.], # [ 7., 0., 0., 0.], # [ 8., 0., 0., 0.], # [ 9., 0., 0., 0.]]) ```
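For what it's worth, the same fix works with a plain ndarray instead of `np.matrix` (a minimal sketch):

```
import numpy as np

open_cost_mat_train = np.empty((0, 4))   # zero rows, four columns
for i in range(10):
    open_cost_mat = np.array([i, 0, 0, 0])
    open_cost_mat_train = np.vstack([open_cost_mat_train, open_cost_mat])
print(open_cost_mat_train.shape)         # (10, 4)
```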
23,280,253
I have the following code - ``` from sys import version class ExampleClass(object): def get_sys_version(self): return version x = ExampleClass() print x.get_sys_version() ``` and it gets parsed by this code - ``` import ast source = open("input.py") code = source.read() node = ast.parse(code, mode='eval') ``` and results in this error - ``` Traceback (most recent call last): File "parse.py", line 5, in <module> node = ast.parse(code, mode='eval') File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ast.py", line 37, in parse return compile(source, filename, mode, PyCF_ONLY_AST) File "<unknown>", line 1 from sys import version ``` This appears to be a very simple file to parse - it certainly runs - why does the parser throw this error?
2014/04/24
[ "https://Stackoverflow.com/questions/23280253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/827401/" ]
This is because you're using `mode='eval'`, which only works for single expressions. Your code has multiple statements, so use `mode='exec'` instead (it's the default). See the [documentation for `compile()`](https://docs.python.org/2/library/functions.html#compile) for an explanation of the `mode` argument, since that's what `ast.parse()` uses.
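A minimal sketch of the fix applied to the snippet from the question:

```
import ast

code = 'from sys import version\nprint(version)'
node = ast.parse(code, mode='exec')   # 'exec' (the default) accepts whole modules
print(type(node).__name__)            # Module
```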
It's not related to `ast`. You get the same error when you try: ``` In [1]: eval('from sys import version') File "<string>", line 1 from sys import version ^ SyntaxError: invalid syntax ``` Try `exec` mode: ``` In [1]: exec('from sys import version') In [2]: ```
48,821,856
I would like to remove leading occurrences of `'a'` from a list. That is, `['a', 'a', 'b', 'b']` should become `['b', 'b']` and at the same time `['b', 'a', 'a', 'b']` should be kept unchanged. ``` def remove_leading_items(l): if len(l) == 1 or l[0] != 'a': return l else: return remove_leading_items(l[1:]) ``` Is there a more pythonic way to do it?
2018/02/16
[ "https://Stackoverflow.com/questions/48821856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/671013/" ]
Yes. Immediately, you should be using a for loop. Recursion is generally not Pythonic. Second, use built-in tools: ``` from itertools import dropwhile def remove_leading_items(l, item): return list(dropwhile (lambda x: x == item, l)) ``` Or ``` return list(dropwhile(item.__eq__, l)) ``` ### Edit Out of curiosity, I did some experiments with different approaches to this problem: ``` from itertools import dropwhile from functools import partial from operator import eq def _eq_drop(l, e): return dropwhile(e.__eq__, l) def lam_drop(l, e): return dropwhile(lambda x:x==e, l) def partial_drop(l, e): return dropwhile(partial(eq, e), l) ``` First, with a list that is entirely dropped: i.e. `(1, 1, 1, ...)` ``` In [64]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(_eq_drop(t0, 1)) ...: 1000 loops, best of 3: 389 µs per loop In [65]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(lam_drop(t0, 1)) ...: 1000 loops, best of 3: 1.19 ms per loop In [66]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(partial_drop(t0, 1)) ...: 1000 loops, best of 3: 893 µs per loop ``` So `__eq__` is clearly the fastest in this situation. I like it, but it makes use of a dunder-method directly, which is sometimes frowned upon. The `dropwhile(partial(eq...` approach (wordy, yet explicit) is somewhere in between that and the sluggish, clumsy `lambda` approach, which comes last. Not surprising. --- Now, when half is dropped, i.e. `(1, 1, 1, ..., 0, 0, 0)`: ``` In [52]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(_eq_drop(t2, 1)) ...: 1000 loops, best of 3: 245 µs per loop In [53]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(lam_drop(t2, 1)) ...: 1000 loops, best of 3: 652 µs per loop In [54]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(partial_drop(t2, 1)) ...: 1000 loops, best of 3: 487 µs per loop ``` The difference isn't as pronounced. --- As for why I say recursion isn't Pythonic, consider the following: ``` In [6]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t0, 1) ...: 1 loop, best of 3: 405 ms per loop In [7]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t1, 1) ...: 10000 loops, best of 3: 34.7 µs per loop In [8]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t2, 1) ...: 1 loop, best of 3: 280 ms per loop ``` It performs catastrophically worse on all but the degenerate case of dropping 0 (well, 1 item). A fast, less flexible approach ------------------------------- Now, if you know you always want a list, consider a highly iterative, very fast approach: ``` def for_loop(l, e): it = iter(l) for x in it: if x != e: break else: return [] return [x, *it] ``` It performs better than using built-ins! 
``` In [33]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t0, 1) ...: 1000 loops, best of 3: 270 µs per loop In [34]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t1, 1) ...: 10000 loops, best of 3: 50.7 µs per loop In [35]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t2, 1) ...: 10000 loops, best of 3: 160 µs per loop ``` Less fast, but more flexible! ------------------------------ Perhaps a good compromise that maintains flexibility is to use a generator-based approach: ``` In [5]: def gen_drop(l, e): ...: it = iter(l) ...: for x in it: ...: if x != e: ...: break ...: yield x ...: yield from it ...: In [6]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t0, 1)) ...: 1000 loops, best of 3: 287 µs per loop In [7]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t1, 1)) ...: 1000 loops, best of 3: 359 µs per loop In [8]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t2, 1)) ...: 1000 loops, best of 3: 324 µs per loop ``` ### Using a deque Finally, the `deque` approach: ``` In [1]: from collections import deque ...: ...: def noLeadingZero(l, e): ...: d = deque(l) ...: for x in l: ...: if e == x: ...: d.popleft() ...: else: ...: break ...: return list(d) ...: In [2]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t0, 1) ...: 1000 loops, best of 3: 873 µs per loop In [3]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t1, 1) ...: 10000 loops, best of 3: 121 µs per loop In [4]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t2, 1) ...: 1000 loops, best of 3: 502 µs per loop ```
### Code: ``` def remove_leading(a_list, to_remove): i = 0 while i < len(a_list) and a_list[i] == to_remove: i += 1 return a_list[i:] ``` ### Test Code: ``` print(remove_leading(list('aabb'), 'a')) print(remove_leading(list('baab'), 'a')) print(remove_leading([], 'a')) ``` ### Results: ``` ['b', 'b'] ['b', 'a', 'a', 'b'] [] ```
48,821,856
I would like to remove leading occurrences of `'a'` from a list. That is, `['a', 'a', 'b', 'b']` should become `['b', 'b']` and at the same time `['b', 'a', 'a', 'b']` should be kept unchanged. ``` def remove_leading_items(l): if len(l) == 1 or l[0] != 'a': return l else: return remove_leading_items(l[1:]) ``` Is there a more pythonic way to do it?
2018/02/16
[ "https://Stackoverflow.com/questions/48821856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/671013/" ]
Yes. Immediately, you should be using a for loop. Recursion is generally not Pythonic. Second, use built-in tools: ``` from itertools import dropwhile def remove_leading_items(l, item): return list(dropwhile (lambda x: x == item, l)) ``` Or ``` return list(dropwhile(item.__eq__, l)) ``` ### Edit Out of curiosity, I did some experiments with different approaches to this problem: ``` from itertools import dropwhile from functools import partial from operator import eq def _eq_drop(l, e): return dropwhile(e.__eq__, l) def lam_drop(l, e): return dropwhile(lambda x:x==e, l) def partial_drop(l, e): return dropwhile(partial(eq, e), l) ``` First, with a list that is entirely dropped: i.e. `(1, 1, 1, ...)` ``` In [64]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(_eq_drop(t0, 1)) ...: 1000 loops, best of 3: 389 µs per loop In [65]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(lam_drop(t0, 1)) ...: 1000 loops, best of 3: 1.19 ms per loop In [66]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(partial_drop(t0, 1)) ...: 1000 loops, best of 3: 893 µs per loop ``` So `__eq__` is clearly the fastest in this situation. I like it, but it makes use of a dunder-method directly, which is sometimes frowned upon. The `dropwhile(partial(eq...` approach (wordy, yet explicit) is somewhere in between that and the sluggish, clumsy `lambda` approach, which comes last. Not surprising. --- Now, when half is dropped, i.e. `(1, 1, 1, ..., 0, 0, 0)`: ``` In [52]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(_eq_drop(t2, 1)) ...: 1000 loops, best of 3: 245 µs per loop In [53]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(lam_drop(t2, 1)) ...: 1000 loops, best of 3: 652 µs per loop In [54]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(partial_drop(t2, 1)) ...: 1000 loops, best of 3: 487 µs per loop ``` The difference isn't as pronounced. --- As for why I say recursion isn't Pythonic, consider the following: ``` In [6]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t0, 1) ...: 1 loop, best of 3: 405 ms per loop In [7]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t1, 1) ...: 10000 loops, best of 3: 34.7 µs per loop In [8]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t2, 1) ...: 1 loop, best of 3: 280 ms per loop ``` It performs catastrophically worse on all but the degenerate case of dropping 0 (well, 1 item). A fast, less flexible approach ------------------------------- Now, if you know you always want a list, consider a highly iterative, very fast approach: ``` def for_loop(l, e): it = iter(l) for x in it: if x != e: break else: return [] return [x, *it] ``` It performs better than using built-ins! 
``` In [33]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t0, 1) ...: 1000 loops, best of 3: 270 µs per loop In [34]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t1, 1) ...: 10000 loops, best of 3: 50.7 µs per loop In [35]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t2, 1) ...: 10000 loops, best of 3: 160 µs per loop ``` Less fast, but more flexible! ------------------------------ Perhaps a good compromise that maintains flexibility is to use a generator-based approach: ``` In [5]: def gen_drop(l, e): ...: it = iter(l) ...: for x in it: ...: if x != e: ...: break ...: yield x ...: yield from it ...: In [6]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t0, 1)) ...: 1000 loops, best of 3: 287 µs per loop In [7]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t1, 1)) ...: 1000 loops, best of 3: 359 µs per loop In [8]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t2, 1)) ...: 1000 loops, best of 3: 324 µs per loop ``` ### Using a deque Finally, the `deque` approach: ``` In [1]: from collections import deque ...: ...: def noLeadingZero(l, e): ...: d = deque(l) ...: for x in l: ...: if e == x: ...: d.popleft() ...: else: ...: break ...: return list(d) ...: In [2]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t0, 1) ...: 1000 loops, best of 3: 873 µs per loop In [3]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t1, 1) ...: 10000 loops, best of 3: 121 µs per loop In [4]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t2, 1) ...: 1000 loops, best of 3: 502 µs per loop ```
You can try this also:

```
s = ['a', 'a', 'b', 'b', 'a', 'a', 'b']

def check(ls):
    # slice off leading 'a's; the `ls` check guards an all-'a' (or empty) list
    while ls and ls[0] == 'a':
        ls = ls[1:]
    return ls

print(check(s))
```
48,821,856
I would like to remove leading occurrences of `'a'` from a list. That is, `['a', 'a', 'b', 'b']` should become `['b', 'b']` and at the same time `['b', 'a', 'a', 'b']` should be kept unchanged. ``` def remove_leading_items(l): if len(l) == 1 or l[0] != 'a': return l else: return remove_leading_items(l[1:]) ``` Is there a more pythonic way to do it?
2018/02/16
[ "https://Stackoverflow.com/questions/48821856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/671013/" ]
Yes. Immediately, you should be using a for loop. Recursion is generally not Pythonic. Second, use built-in tools: ``` from itertools import dropwhile def remove_leading_items(l, item): return list(dropwhile (lambda x: x == item, l)) ``` Or ``` return list(dropwhile(item.__eq__, l)) ``` ### Edit Out of curiosity, I did some experiments with different approaches to this problem: ``` from itertools import dropwhile from functools import partial from operator import eq def _eq_drop(l, e): return dropwhile(e.__eq__, l) def lam_drop(l, e): return dropwhile(lambda x:x==e, l) def partial_drop(l, e): return dropwhile(partial(eq, e), l) ``` First, with a list that is entirely dropped: i.e. `(1, 1, 1, ...)` ``` In [64]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(_eq_drop(t0, 1)) ...: 1000 loops, best of 3: 389 µs per loop In [65]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(lam_drop(t0, 1)) ...: 1000 loops, best of 3: 1.19 ms per loop In [66]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(partial_drop(t0, 1)) ...: 1000 loops, best of 3: 893 µs per loop ``` So `__eq__` is clearly the fastest in this situation. I like it, but it makes use of a dunder-method directly, which is sometimes frowned upon. The `dropwhile(partial(eq...` approach (wordy, yet explicit) is somewhere in between that and the sluggish, clumsy `lambda` approach, which comes last. Not surprising. --- Now, when half is dropped, i.e. `(1, 1, 1, ..., 0, 0, 0)`: ``` In [52]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(_eq_drop(t2, 1)) ...: 1000 loops, best of 3: 245 µs per loop In [53]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(lam_drop(t2, 1)) ...: 1000 loops, best of 3: 652 µs per loop In [54]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(partial_drop(t2, 1)) ...: 1000 loops, best of 3: 487 µs per loop ``` The difference isn't as pronounced. --- As for why I say recursion isn't Pythonic, consider the following: ``` In [6]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t0, 1) ...: 1 loop, best of 3: 405 ms per loop In [7]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t1, 1) ...: 10000 loops, best of 3: 34.7 µs per loop In [8]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: remove_leading_items(t2, 1) ...: 1 loop, best of 3: 280 ms per loop ``` It performs catastrophically worse on all but the degenerate case of dropping 0 (well, 1 item). A fast, less flexible approach ------------------------------- Now, if you know you always want a list, consider a highly iterative, very fast approach: ``` def for_loop(l, e): it = iter(l) for x in it: if x != e: break else: return [] return [x, *it] ``` It performs better than using built-ins! 
``` In [33]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t0, 1) ...: 1000 loops, best of 3: 270 µs per loop In [34]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t1, 1) ...: 10000 loops, best of 3: 50.7 µs per loop In [35]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: for_loop(t2, 1) ...: 10000 loops, best of 3: 160 µs per loop ``` Less fast, but more flexible! ------------------------------ Perhaps a good compromise that maintains flexibility is to use a generator-based approach: ``` In [5]: def gen_drop(l, e): ...: it = iter(l) ...: for x in it: ...: if x != e: ...: break ...: yield x ...: yield from it ...: In [6]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t0, 1)) ...: 1000 loops, best of 3: 287 µs per loop In [7]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t1, 1)) ...: 1000 loops, best of 3: 359 µs per loop In [8]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: list(gen_drop(t2, 1)) ...: 1000 loops, best of 3: 324 µs per loop ``` ### Using a deque Finally, the `deque` approach: ``` In [1]: from collections import deque ...: ...: def noLeadingZero(l, e): ...: d = deque(l) ...: for x in l: ...: if e == x: ...: d.popleft() ...: else: ...: break ...: return list(d) ...: In [2]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t0, 1) ...: 1000 loops, best of 3: 873 µs per loop In [3]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t1, 1) ...: 10000 loops, best of 3: 121 µs per loop In [4]: %%timeit n = 10000; t0 = (1,)*n; t1 = (1,) + (0,)*(n-1); t2 = (1,)*(n//2) + (0,)*(n//2); ...: noLeadingZero(t2, 1) ...: 1000 loops, best of 3: 502 µs per loop ```
Code:

```
my_list = ['a', 'a', 'b', 'b', 'a', 'b']
item_i_hate = 'a'

for index in range(len(my_list)):
    if my_list[index] != item_i_hate:
        my_list = my_list[index:]
        break
else:
    # every element matched, so nothing is left
    my_list = []
```
50,534,429
I made a FC neural network with numpy based on the videos of Welch Labs, but when I try to train it I seem to have exploding gradients at launch, which is weird. I will put down the whole code, which is testable in Python 3+. Only costFunctionPrime seems to break the gradient descent, but I have no idea what is happening. Can someone smarter than me help? EDIT: the trng\_input and trng\_output are not the ones I use; I use a big dataset ``` import numpy as np import random trng_input = [[random.random() for _ in range(7)] for _ in range(100)] trng_output = [[random.random() for _ in range(2)] for _ in range(100)] def relu(x): return x * (x > 0) def reluprime(x): return (x>0).astype(x.dtype) class Neural_Net(): def __init__(self, data_input, data_output): self.data_input = data_input self.trng_output = trng_output self.bias = 0 self.nodes = np.array([7, 2]) self.LR = 0.01 self.weightinit() self.training(1000, self.LR) def randomweight(self, n): output = [] for i in range(n): output.append(random.uniform(-1,1)) return output def weightinit(self): self.weights = [] for n in range(len(self.nodes)-1): temp = [] for _ in range(self.nodes[n]+self.bias): temp.append(self.randomweight(self.nodes[n+1])) self.weights.append(temp) self.weights = [np.array(tuple(self.weights[i])) for i in range(len(self.weights))] def forward(self, data): self.Z = [] self.A = [np.array(data)] for layer in range(len(self.weights)): self.Z.append(np.dot(self.A[layer], self.weights[layer])) self.A.append(relu(self.Z[layer])) self.output = self.A[-1] return self.output def costFunction(self): self.totalcost = 0.5*sum((self.trng_output-self.output)**2) return self.totalcost def costFunctionPrime(self): self.forward(self.data_input) self.delta = [[] for x in range(len(self.weights))] self.DcostDw = [[] for x in range(len(self.weights))] for layer in reversed(range(len(self.weights))): Zprime = reluprime(self.Z[layer]) if layer == len(self.weights)-1: self.delta[layer] = np.multiply(-(self.trng_output-self.output), Zprime) else: self.delta[layer] = np.dot(self.delta[layer+1], self.weights[layer+1].T) * Zprime self.DcostDw[layer] = np.dot(self.A[layer].T, self.delta[layer]) return self.DcostDw def backprop(self, LR): self.DcostDw = (np.array(self.DcostDw)*LR).tolist() self.weights = (np.array(self.weights) - np.array(self.DcostDw)).tolist() def training(self, iteration, LR): for i in range(iteration): self.costFunctionPrime() self.backprop(LR) if (i/1000.0) == (i/1000): print(self.costFunction()) print(sum(self.costFunction())/len(self.costFunction())) NN = Neural_Net(trng_input, trng_output) ``` As asked, this is the expected result (the result I got using the sigmoid activation function): [![](https://i.stack.imgur.com/en6ty.jpg)](https://i.stack.imgur.com/en6ty.jpg) As you can see, the numbers are going down and thus the network is training. This is the result using the relu activation function: [![](https://i.stack.imgur.com/wQrQq.jpg)](https://i.stack.imgur.com/wQrQq.jpg) Here, the network is stuck and isn't getting trained; it never gets trained using the relu activation function, and I would like to understand why
2018/05/25
[ "https://Stackoverflow.com/questions/50534429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7974205/" ]
This conversion works. ``` // string[] path = File.ReadLines("C:\\Users\\M\\numb.txt").ToArray(); String[] path = {"1","2","3"}; int[] numb = Array.ConvertAll(path, int.Parse); for (int i = 0; i < path.Length; i++) { Console.WriteLine(path[i]); } for (int i = 0; i < numb.Length; i++) { Console.WriteLine(numb[i]); } ```
I can't imagine this wouldn't work: ``` string[] path = File.ReadAllLines("C:\\Users\\M\\numb.txt"); int[] numb = new int[path.Length]; for (int i = 0; i < path.Length; i++) { numb[i] = int.Parse(path[i]); } ``` I think your issue is that you are using `File.ReadLines`, which returns a lazy `IEnumerable<string>` rather than an array; its `ToArray` extension method is only available once `System.Linq` is imported.
50,534,429
I made an FC neural network with numpy, based on the Welch Labs videos, but when I try to train it I seem to get exploding gradients at launch, which is weird. I will put down the whole code, which is testable in Python 3+. Only costFunctionPrime seems to break the gradient descent, but I have no idea what is happening. Can someone smarter than me help? EDIT: the trng\_input and trng\_output are not the ones I use; I use a big dataset ``` import numpy as np import random trng_input = [[random.random() for _ in range(7)] for _ in range(100)] trng_output = [[random.random() for _ in range(2)] for _ in range(100)] def relu(x): return x * (x > 0) def reluprime(x): return (x>0).astype(x.dtype) class Neural_Net(): def __init__(self, data_input, data_output): self.data_input = data_input self.trng_output = trng_output self.bias = 0 self.nodes = np.array([7, 2]) self.LR = 0.01 self.weightinit() self.training(1000, self.LR) def randomweight(self, n): output = [] for i in range(n): output.append(random.uniform(-1,1)) return output def weightinit(self): self.weights = [] for n in range(len(self.nodes)-1): temp = [] for _ in range(self.nodes[n]+self.bias): temp.append(self.randomweight(self.nodes[n+1])) self.weights.append(temp) self.weights = [np.array(tuple(self.weights[i])) for i in range(len(self.weights))] def forward(self, data): self.Z = [] self.A = [np.array(data)] for layer in range(len(self.weights)): self.Z.append(np.dot(self.A[layer], self.weights[layer])) self.A.append(relu(self.Z[layer])) self.output = self.A[-1] return self.output def costFunction(self): self.totalcost = 0.5*sum((self.trng_output-self.output)**2) return self.totalcost def costFunctionPrime(self): self.forward(self.data_input) self.delta = [[] for x in range(len(self.weights))] self.DcostDw = [[] for x in range(len(self.weights))] for layer in reversed(range(len(self.weights))): Zprime = reluprime(self.Z[layer]) if layer == len(self.weights)-1: self.delta[layer] = np.multiply(-(self.trng_output-self.output), Zprime) else: self.delta[layer] = np.dot(self.delta[layer+1], self.weights[layer+1].T) * Zprime self.DcostDw[layer] = np.dot(self.A[layer].T, self.delta[layer]) return self.DcostDw def backprop(self, LR): self.DcostDw = (np.array(self.DcostDw)*LR).tolist() self.weights = (np.array(self.weights) - np.array(self.DcostDw)).tolist() def training(self, iteration, LR): for i in range(iteration): self.costFunctionPrime() self.backprop(LR) if (i/1000.0) == (i/1000): print(self.costFunction()) print(sum(self.costFunction())/len(self.costFunction())) NN = Neural_Net(trng_input, trng_output) ``` As asked, this is the expected result (the result I got using the sigmoid activation function): [![](https://i.stack.imgur.com/en6ty.jpg)](https://i.stack.imgur.com/en6ty.jpg) As you can see, the numbers are going down, and thus the network is training. This is the result using the relu activation function: [![](https://i.stack.imgur.com/wQrQq.jpg)](https://i.stack.imgur.com/wQrQq.jpg) Here, the network is stuck and isn't getting trained; it never gets trained using the relu activation function, and I would like to understand why.
2018/05/25
[ "https://Stackoverflow.com/questions/50534429", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7974205/" ]
It is better to use TryParse instead of Parse, since it will not throw on malformed input. Also, it is easier to work with a List than with an array. ``` using System; using System.Collections.Generic; namespace StringToInt { class Program { static void Main(string[] args) { String[] path = { "1", "2", "3", "a", "b7" }; List<int> numb = new List<int>(); foreach (string p in path) { if (int.TryParse(p, out int result)) { numb.Add(result); } } for (int i = 0; i < path.Length; i++) { Console.WriteLine(path[i]); } for (int i = 0; i < numb.Count; i++) { Console.WriteLine(numb[i]); } } } } ```
I can't imagine this wouldn't work: ``` string[] path = File.ReadAllLines("C:\\Users\\M\\numb.txt"); int[] numb = new int[path.Length]; for (int i = 0; i < path.Length; i++) { numb[i] = int.Parse(path[i]); } ``` I think your issue is that you are using `File.ReadLines`, which returns a lazy `IEnumerable<string>` rather than an array; its `ToArray` extension method is only available once `System.Linq` is imported.
45,916,726
Here is my output.txt file: ``` 4f337d5000000001 4f337d5000000001 0082004600010000 0082004600010000 334f464600010000 334f464600010000 [... many values omitted ...] 334f464600010000 334f464600010000 4f33464601000100 4f33464601000100 ``` How can I change these values into decimal with the help of Python and save them into a new .txt file?
2017/08/28
[ "https://Stackoverflow.com/questions/45916726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8523601/" ]
Since the values are 16 hex digits long, I assume these are 64-bit integers you want to play with. If the file is reasonably small then you can use `read` to bring in the whole string and `split` to break it into individual values: ``` with open("newfile.txt", 'w') as out_file, open("output.txt") as in_file: for value in in_file.read().split(): print(int(value, 16), file=out_file) ``` should do this for you.
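If some tokens might not be valid hex (a stray header line, say), wrapping the conversion in a try/except keeps one bad token from aborting the whole run. A sketch under the same file-name assumptions as above:

```
with open("newfile.txt", "w") as out_file, open("output.txt") as in_file:
    for token in in_file.read().split():
        try:
            print(int(token, 16), file=out_file)
        except ValueError:
            # not parseable as hex -- pass the token through unchanged
            print(token, file=out_file)
```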
You can do this: ``` with open('output.txt') as f, open('new_file.txt', 'w') as new_file: for item in f: new_file.write(str(int(item, 16)) + "\n") ```
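For small files, the same conversion can also be written with `pathlib` in a single pass; a sketch reusing the file names above:

```
from pathlib import Path

src = Path("output.txt")
dst = Path("new_file.txt")

# parse every whitespace-separated hex token, then write one decimal per line
decimals = [str(int(token, 16)) for token in src.read_text().split()]
dst.write_text("\n".join(decimals) + "\n")
```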
45,916,726
Here is my output.txt file: ``` 4f337d5000000001 4f337d5000000001 0082004600010000 0082004600010000 334f464600010000 334f464600010000 [... many values omitted ...] 334f464600010000 334f464600010000 4f33464601000100 4f33464601000100 ``` How can I change these values into decimal with the help of Python and save them into a new .txt file?
2017/08/28
[ "https://Stackoverflow.com/questions/45916726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8523601/" ]
Since the values are 16 hex digits long, I assume these are 64-bit integers you want to play with. If the file is reasonably small then you can use `read` to bring in the whole string and `split` to break it into individual values: ``` with open("newfile.txt", 'w') as out_file, open("output.txt") as in_file: for value in in_file.read().split(): print(int(value, 16), file=out_file) ``` should do this for you.
One of the comments mentioned a shell command. This is how a Python one-liner can be invoked from the shell command line (Linux bash in this example). I/O redirection is handled by the shell. ``` $ python -c $'import sys\nfor line in sys.stdin: print(int(line,16))' <hex.txt >dec.txt ```
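If the one-liner grows past comfort, the same filter works as a tiny standalone script; a sketch (it still reads stdin and writes stdout, so the shell redirections stay the same):

```
#!/usr/bin/env python
import sys

for line in sys.stdin:
    line = line.strip()
    if line:  # skip blank lines rather than crashing on int('')
        print(int(line, 16))
```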
45,916,726
Here is my output.txt file: ``` 4f337d5000000001 4f337d5000000001 0082004600010000 0082004600010000 334f464600010000 334f464600010000 [... many values omitted ...] 334f464600010000 334f464600010000 4f33464601000100 4f33464601000100 ``` How can I change these values into decimal with the help of Python and save them into a new .txt file?
2017/08/28
[ "https://Stackoverflow.com/questions/45916726", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8523601/" ]
You can do this: ``` with open('output.txt') as f, open('new_file.txt', 'w') as new_file: for item in f: new_file.write(str(int(item, 16)) + "\n") ```
One of the comments mentioned a shell command. This is how a Python one-liner can be invoked from the shell command line (Linux bash in this example). I/O redirection is handled by the shell. ``` $ python -c $'import sys\nfor line in sys.stdin: print(int(line,16))' <hex.txt >dec.txt ```