Q: Execute (part of) try block after except block

I know that this is a weird question, and probably there is no answer. I'm trying to execute the rest of the try block after an exception was caught and the except block was executed. Example:

    [...]
    try:
        do.this()
        do.that()
        [...]
    except:
        foo.bar()
    [...]

do.this() raises an exception managed by foo.bar(); then I would like to execute the code from do.that(). I know that there is no GOTO statement, but maybe some kind of hack or workaround! Thanks!

A: A try...except block catches one exception. That's what it's for. It executes the code inside the try, and if an exception is raised, handles it in the except. Only the first exception raised inside the try is handled; once it propagates, the rest of the try body is skipped for good. This is deliberate: the point of the construction is that you need to handle the exceptions that occur explicitly. Returning to the middle of the try violates this, because then the except clause handles more than one thing. You should do:

    try:
        do.this()
    except FailError:
        clean.up()

    try:
        do.that()
    except FailError:
        clean.up()

so that any exception you raise is handled explicitly.

A: Use a finally block? Am I missing something?

    [...]
    try:
        do.this()
    except:
        foo.bar()
        [...]
    finally:
        do.that()
        [...]

A: If you always need to execute foo.bar(), why not just move it after the try/except block? Or maybe even to a finally: block.

A: One possibility is to write the code in such a way that you can re-execute it all once the error condition has been solved, e.g.:

    while 1:
        try:
            complex_operation()
        except X:
            solve_problem()
            continue
        break

A:
    fcts = [do.this, do.that]
    for fct in fcts:
        try:
            fct()
        except:
            foo.bar()

A: You need two try blocks, one for each statement in your current try block.

A: This doesn't scale up well, but for smaller blocks of code you could use a classic finite-state machine:

    states = [do.this, do.that]
    state = 0
    while state < len(states):
        try:
            states[state]()
        except:
            foo.bar()
        state += 1

A: Here's another alternative. Handle the error condition with a callback, so that after fixing the problem you can continue. The callback would basically contain exactly the same code you would put in the except block. As a silly example, let's say that the exception you want to handle is a missing file, and that you have a way to deal with that problem (a default file or whatever). fileRetriever is the callback that knows how to deal with the problem. Then you would write:

    def myOp(fileRetriever):
        f = acquireFile()
        if not f:
            f = fileRetriever()
        # continue with your stuff...

        f2 = acquireAnotherFile()
        if not f2:
            f2 = fileRetriever()
        # more stuff...

    myOp(magicalCallback)

Note: I've never seen this design used in practice, but in specific situations I guess it might be usable.
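A runnable sketch of the "loop over callables" answers above. The names do.this, do.that and foo.bar from the question are replaced with hypothetical stand-in functions that record their effect in a list, so you can see that a failure in one step no longer skips the steps after it:

```python
# Each step gets its own try block, so one failure cannot skip the
# steps that follow it. All names here are made-up stand-ins.
log = []

def step_this():
    log.append("this")

def step_broken():
    raise ValueError("boom")  # plays the role of the failing do.this()

def step_that():
    log.append("that")

def run_steps(steps, handler):
    """Run every step; on failure, call the handler and continue."""
    for step in steps:
        try:
            step()
        except Exception:
            handler()

run_steps([step_this, step_broken, step_that],
          lambda: log.append("handled"))
print(log)  # ['this', 'handled', 'that']
```

This is essentially the finite-state-machine answer with the state counter folded into the for loop.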
[ "exception_handling", "python" ]
stackoverflow_0004225921_exception_handling_python.txt
Q: PDF programming in Python: set limits on the number of times a PDF can be printed

PDF allows you to set permissions, such as that a document can be printed just once, or just 10 times, etc. I believe Adobe Acrobat Professional allows you to set those. My question: is it possible to do so programmatically in Python? If so, how?

A: There doesn't seem to be a way to restrict the number of times a PDF file can be printed (outside Adobe LiveCycle or some other very controlled hosted solution), although there is lots of discussion about this, e.g. https://superuser.com/questions/37216/restrict-print-copies-on-a-pdf — if you have other info, please indicate a reference. There is an open-source pyPdf module, http://pybrary.net/pyPdf/, but no hints there about print restrictions.

A: It looks like ReportLab can let you set whether a PDF can be printed at all. See page 58 of the manual. I don't know if it can also let you control the number of prints.
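As a hedged sketch of the ReportLab answer: ReportLab (a third-party package) exposes the standard PDF permission bits through its StandardEncryption object, including a canPrint flag. Note this matches the answers above: printing can be disabled outright, but there is no permission bit for "print at most N times". The guard below makes the snippet safe to run when ReportLab is not installed:

```python
# Sketch only: disable the "can print" permission bit with ReportLab.
try:
    from reportlab.pdfgen import canvas
    from reportlab.lib import pdfencrypt
    HAVE_REPORTLAB = True
except ImportError:
    HAVE_REPORTLAB = False

if HAVE_REPORTLAB:
    # Empty user password, canPrint=0 clears the print permission.
    enc = pdfencrypt.StandardEncryption("", ownerPassword="owner-secret",
                                        canPrint=0, canCopy=0)
    c = canvas.Canvas("no_print.pdf", encrypt=enc)
    c.drawString(72, 720, "Printing is disabled for this document.")
    c.save()
```

Keep in mind that permission bits are advisory: a compliant viewer honors them, but they are not strong protection.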
[ "pdf", "pdf_generation", "python" ]
stackoverflow_0004226942_pdf_pdf_generation_python.txt
Q: Atomic state storage in Python?

I'm working on a project on an unreliable system which I'm assuming can fail at any point. What I want to guarantee is that if I write_state and the machine fails mid-operation, a read_state will either read a valid state or no state at all. I've implemented something which I think will work below -- I'm interested in criticism of that, or alternative solutions if anyone knows of one. My idea:

    import hashlib, cPickle, os

    def write_state(logname, state):
        state_string = cPickle.dumps(state, cPickle.HIGHEST_PROTOCOL)
        state_string += hashlib.sha224(state_string).hexdigest()
        handle = open('%s.1' % logname, 'wb')
        handle.write(state_string)
        handle.close()
        handle = open('%s.2' % logname, 'wb')
        handle.write(state_string)
        handle.close()

    def get_state(logname):
        def read_file(name):
            try:
                f = open(name, 'rb')
                data = f.read()
                f.close()
                return data
            except IOError:
                return ''

        def parse(data):
            if len(data) < 56:
                return (None, False)
            hash = data[-56:]
            data = data[:-56]
            valid = hashlib.sha224(data).hexdigest() == hash
            try:
                parsed = cPickle.loads(data)
            except cPickle.UnpicklingError:
                parsed = None
            return (parsed, valid)

        data1, valid1 = parse(read_file('%s.1' % logname))
        data2, valid2 = parse(read_file('%s.2' % logname))
        if valid1 and valid2:
            return data1
        elif valid1 and not valid2:
            return data1
        elif valid2 and not valid1:
            return data2
        elif not valid1 and not valid2:
            raise Exception('Theoretically, this never happens...')

e.g.:

    write_state('test_log', {'x': 5})
    print get_state('test_log')

A: Your two copies won't work. The filesystem can reorder things so that both files have been truncated before either has been written to disk. There are a few filesystem operations that are guaranteed to be atomic: renaming a file over another is one, insofar as the file will either be in one place or the other. However, as far as POSIX is concerned, it doesn't guarantee that the file contents have hit the disk before the move does, meaning it only gives you locking. Linux filesystems have enforced that file contents hit the disk before the atomic move does (but not synchronously), so this does what you want. ext4 broke that assumption for a short while, making those files actually more likely to end up empty. This was widely criticized, and it has since been remedied.

Anyway, the proper way to do this is: create a temporary file in the same directory (so it's on the same filesystem); write the new data; fsync the temporary file; rename it over the previous version. This is as atomic as the OS can guarantee. It also gives you durability at the cost of spinning up the disks, which is why app developers prefer not using fsync and instead blacklist the offending ext4 versions.

A: I will add a heretic response: what about using sqlite? Or possibly bsddb; however, that seems to be deprecated, and you would have to use a third-party module.

A: My vague recollection of the way databases work is this. It involves three files: a control file, the target database file, and a pending-transaction log. The control file has a global transaction counter and a hash or other checksum. This is a small file that's one physical block in size: one OS-level write. Have a global transaction counter in your target file with the real data, plus a hash or other checksum. Have a pending-transaction log that just grows, or is a circular queue of a finite size, or perhaps rolls over; it doesn't much matter.

Log all pending transactions to the simple log; there's a sequence number and the content of the change. Update the transaction counter and the hash in the control file: one write, flushed. If this fails, then nothing has changed. If this succeeds, the control file and target file don't match, indicating a transaction was started but not finished.

Do the expected update on the target file. Seek to the beginning and update the counter and the checksum. If this fails, the control file has a counter one greater than the target file: the target file is damaged. When this works, the last logged transaction, the control file, and the target file all agree on the sequence number. You can recover by replaying the log, since you know the last good sequence number.

A: Under UNIX-like systems the usual answer is to do the link dance. Create the file under a unique name (use the tempfile module), then use the os.link() function to create a hard link to the destination name after you have synchronized the contents into the desired (publication) state. Under this scheme your readers don't see the file until the state is sane. The link operation is atomic. You can unlink the temporary name after you've successfully linked to the "ready" name. There are some additional wrinkles to handle if you need to guarantee semantics over old versions of NFS without depending on the locking daemons.

A: I think you can simplify a few things:

    def read_file(name):
        try:
            with open(name, 'rb') as f:
                return f.read()
        except IOError:
            return ''

    if valid1:
        return data1
    elif valid2:
        return data2
    else:
        raise Exception('Theoretically, this never happens...')

You probably don't need to write both files all the time; just write file2 and rename it over file1. I think there is still a chance that a hard reset (e.g. a power cut) could cause both files not to be written to disk properly, due to delayed writing.
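The "temp file, fsync, rename" recipe from the first answer can be sketched in modern Python as follows. The function names mirror the question's API and the checksum layout (a sha224 hex digest appended to the pickled payload) is kept; os.replace performs the atomic rename:

```python
# Sketch of atomic state storage: write a temp file in the same
# directory, fsync it, then atomically rename it over the old version.
import hashlib
import os
import pickle
import tempfile

def write_state(path, state):
    payload = pickle.dumps(state, pickle.HIGHEST_PROTOCOL)
    payload += hashlib.sha224(payload).hexdigest().encode("ascii")
    # Same directory as the target, so the rename stays on one filesystem.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # data must hit disk before the rename
        os.replace(tmp, path)     # atomic on POSIX
    except BaseException:
        os.unlink(tmp)
        raise

def read_state(path):
    with open(path, "rb") as f:
        data = f.read()
    body, digest = data[:-56], data[-56:]
    if hashlib.sha224(body).hexdigest().encode("ascii") != digest:
        raise ValueError("corrupt or torn state file")
    return pickle.loads(body)

write_state("state.bin", {"x": 5})
print(read_state("state.bin"))  # {'x': 5}
```

The checksum is then only a safety net for media corruption; atomicity comes from the rename, not from keeping two copies.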
[ "atomic", "persistence", "python", "state" ]
stackoverflow_0004220803_atomic_persistence_python_state.txt
Q: mod-wsgi segmentation fault under debian/apache2

I'm trying to get mod-wsgi running under Apache2, with an eye towards using it with Django. Right now I'm just getting a basic app running, and am getting a segmentation fault. Any suggestions on how to track down the error would be appreciated; I'm stuck. This is under Debian/Lenny, with stock versions of Apache, mod-wsgi, and Python 2.5. I've checked, and mod-wsgi is linked against /usr/lib/libpython2.5.so.1.0. I originally had Python 2.4 installed, but removed it just in case it was picking up the wrong version. The script file is:

    def application(environ, start_response):
        status = '200 OK'
        output = 'Hello World!'
        response_headers = [('Content-type', 'text/plain'),
                            ('Content-Length', str(len(output)))]
        start_response(status, response_headers)
        return [output]

My config is:

    WSGIScriptAlias /myapp /var/www/test/myapp.wsgi
    <Directory /var/www/test/myapp.wsgi>
        Order allow,deny
        Allow from all
    </Directory>

And when I try to view the URL I see this in the Apache error log:

    [Fri Nov 19 09:29:58 2010] [info] mod_wsgi (pid=7190): Create interpreter 'morpheus.gateway.2wire.net|/myapp'.
    [Fri Nov 19 09:29:58 2010] [info] mod_wsgi (pid=7331): Attach interpreter ''.
    [Fri Nov 19 09:29:58 2010] [notice] child pid 7190 exit signal Segmentation fault (11)

Python seems to work fine otherwise; I've run my Django app under the built-in server with no problems. Just FYI, I started out trying to get my Django app running, but ran into this error:

    [Fri Nov 19 08:25:08 2010] [info] mod_wsgi (pid=6861): Create interpreter 'morpheus.gateway.2wire.net|/curtana'.
    [Fri Nov 19 08:25:08 2010] [error] [client 192.168.2.70] mod_wsgi (pid=6861): Exception occurred processing WSGI script '/var/data/curtana/curtana.wsgi'.
    [Fri Nov 19 08:25:08 2010] [error] [client 192.168.2.70]   File "/var/data/curtana/curtana.wsgi", line 1
    [Fri Nov 19 08:25:08 2010] [error] [client 192.168.2.70]     import sys
    [Fri Nov 19 08:25:08 2010] [error] [client 192.168.2.70]     ^
    [Fri Nov 19 08:25:08 2010] [error] [client 192.168.2.70] SyntaxError: invalid syntax

which is what led me to try the basic app. Thanks for any suggestions.

Edit to add: here's a backtrace:

    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0xb751a700 (LWP 8092)]
    0xb6b3a920 in PyParser_AddToken (ps=0x8543f90, type=8, str=0x845a480 ")", lineno=1, col_offset=39, expected_ret=0xbfffe378) at ../Parser/parser.c:274
    274     ../Parser/parser.c: No such file or directory.
            in ../Parser/parser.c
    (gdb) backtrace
    #0  0xb6b3a920 in PyParser_AddToken (ps=0x8543f90, type=8, str=0x845a480 ")", lineno=1, col_offset=39, expected_ret=0xbfffe378) at ../Parser/parser.c:274
    #1  0xb6b3ab86 in parsetok (tok=0x8535460, g=<value optimized out>, start=257, err_ret=0xbfffe360, flags=<value optimized out>) at ../Parser/parsetok.c:194
    #2  0xb6bec5eb in PyParser_SimpleParseFileFlags (fp=0x84f3288, filename=0x85301b0 "/var/www/test/myapp.wsgi", start=257, flags=0) at ../Python/pythonrun.c:1404
    #3  0xb6c76877 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so
    #4  0x084f3288 in ?? ()
    #5  0x085301b0 in ?? ()
    #6  0x00000101 in ?? ()
    #7  0x00000000 in ?? ()

A: Read through the mod_wsgi documentation looking for where it discusses crashes. Start with:

    http://code.google.com/p/modwsgi/wiki/FrequentlyAskedQuestions
    http://code.google.com/p/modwsgi/wiki/InstallationIssues

It's likely a clash with mod_python, or due to the wrong Python installation being found at run time. For the latter, do the checks of your installation as outlined in:

    http://code.google.com/p/modwsgi/wiki/CheckingYourInstallation

UPDATE 1: The generation of a stack trace, as mentioned in the comment, is documented in:

    http://code.google.com/p/modwsgi/wiki/DebuggingTechniques
[ "apache2", "mod_wsgi", "python" ]
stackoverflow_0004226521_apache2_mod_wsgi_python.txt
Q: How to programmatically edit Excel sheets?

I need to edit an Excel workbook using Python. Is there a way of doing this without reading in the workbook, editing what I want, and then writing it back? I.e., is there a way I can do this on the fly, as I only need to edit a couple of values per sheet? I have looked at pyexcelerator, xlrd, and xlwt, but they only seem to support (as far as I can work out) reading and writing, not editing. I cannot use pywin32 as I am using Linux. Any suggestions of libraries or particular ways of doing things?

A: First off, what version of Excel? Excel 2007+ uses an XML file format, while Excel 2003 and earlier used a proprietary binary format... so the tools to read and write these work in totally different ways. If you're after the more recent xlsx files, then take a look at Eric Gazoni's openpyxl project. The code can be found on Bitbucket. The driving force behind this is the ability to read and write xlsx files from Python within a single library. Even then, it reads the entire workbook, but allows you to modify cells before writing it back. Simply put: the structure of an Excel file doesn't lend itself to easy editing... it's not simply a case of changing a few characters.

A: xlutils has a copy module that may be interesting for you.

A: I have used pyexcelerator on Linux to edit and build xls files.
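The read-modify-write cycle that the openpyxl answer describes can be sketched as below. This assumes the third-party openpyxl package; the guard keeps the snippet harmless when it is not installed, and the filename is made up:

```python
# Sketch: even to "edit" a single cell, the whole workbook is loaded
# and then written back out. Requires the third-party openpyxl package.
try:
    from openpyxl import Workbook, load_workbook
    HAVE_OPENPYXL = True
except ImportError:
    HAVE_OPENPYXL = False

if HAVE_OPENPYXL:
    wb = Workbook()                  # build a small file to edit
    wb.active["A1"] = "old value"
    wb.save("book.xlsx")

    wb = load_workbook("book.xlsx")  # read everything in...
    wb.active["A1"] = "new value"    # ...change one cell...
    wb.save("book.xlsx")             # ...and write everything back
```

There is no true in-place editing here, which matches the answer's point about the structure of Excel files.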
[ "editing", "excel", "linux", "python" ]
stackoverflow_0004226754_editing_excel_linux_python.txt
Q: Fast way to read interleaved data?

I've got a file containing several channels of data. The file is sampled at a base rate, and each channel is sampled at that base rate divided by some number -- it seems to always be a power of 2, though I don't think that's important. So, if I have channels a, b, and c, sampled at dividers of 1, 2, and 4, my stream will look like:

    a0 b0 c0 a1 a2 b1 a3 a4 b2 c1 a5 ...

For added fun, the channels can independently be floats or ints (though I know which for each one), and the data stream does not necessarily end on a power of 2: the example stream would be valid without further extension. The values are sometimes big- and sometimes little-endian, though I know what I'm dealing with up front.

I've got code that properly unpacks these and fills numpy arrays with the correct values, but it's slow: it looks something like this (hope I'm not glossing over too much; just giving an idea of the algorithm):

    for sample_num in range(total_samples):
        channels_to_sample = [ch for ch in all_channels if ch.samples_for(sample_num)]
        format_str = ...  # build format string from channels_to_sample
        data = struct.unpack( my_file.read( ... ) )  # read and unpack the data
        # iterate over data tuple and put values in channels_to_sample
        for val, ch in zip(data, channels_to_sample):
            ch.data[sample_num / ch.divider] = val

And it's slow -- a few seconds to read a 20 MB file on my laptop. The profiler tells me I'm spending a bunch of time in Channel#samples_for() -- which makes sense; there's a bit of conditional logic there.

My brain feels like there's a way to do this in one fell swoop instead of nesting loops -- maybe using indexing tricks to read the bytes I want into each array? The idea of building one massive, insane format string also seems like a questionable road to go down.

Update: Thanks to those who responded. For what it's worth, the numpy indexing trick reduced the time required to read my test data from about 10 seconds to about 0.2 seconds, for a speedup of 50x.

A: The best way to really improve the performance is to get rid of the Python loop over all samples and let NumPy do this loop in compiled C code. This is a bit tricky to achieve, but it is possible.

First, you need a bit of preparation. As pointed out by Justin Peel, the pattern in which the samples are arranged repeats after some number of steps. If d_1, ..., d_k are the divisors for your k data streams, b_1, ..., b_k are the sample sizes of the streams in bytes, and lcm is the least common multiple of these divisors, then

    N = lcm*sum(b_1/d_1 + ... + b_k/d_k)

is the number of bytes after which the pattern of streams repeats. Once you have figured out which stream each of the first N bytes belongs to, you can simply repeat this pattern.

You can now build the array of stream indices for the first N bytes by something similar to

    stream_index = []
    for sample_num in range(lcm):
        stream_index += [i for i, ch in enumerate(all_channels)
                         if ch.samples_for(sample_num)]
    repeat_count = [b[i] for i in stream_index]
    stream_index = numpy.array(stream_index).repeat(repeat_count)

Here, d is the sequence d_1, ..., d_k and b is the sequence b_1, ..., b_k. Now you can do

    data = numpy.fromfile(my_file, dtype=numpy.uint8).reshape(-1, N)
    streams = [data[:, stream_index == i].ravel() for i in range(k)]

You possibly need to pad the data a bit at the end to make the reshape() work. Now you have all the bytes belonging to each stream in separate NumPy arrays. You can reinterpret the data by simply assigning to the dtype attribute of each stream. If you want the first stream to be interpreted as big-endian integers, simply write

    streams[0].dtype = ">i"

This won't change the data in the array in any way, just the way it is interpreted. This may look a bit cryptic, but should be much better performance-wise.

A: Replace channel.samples_for(sample_num) with an iter_channels(channels_config) iterator that keeps some internal state and lets you read the file in one pass. Use it like this:

    for (chan, sample_data) in izip(iter_channels(), data):
        decoded_data = chan.decode(sample_data)

To implement the iterator, think of a base clock with a period of one. The periods of the various channels are integers. Iterate the channels in order, and emit a channel if the clock modulo its period is zero.

    for i in itertools.count():
        for chan in channels:
            if i % chan.period == 0:
                yield chan

A: The grouper() recipe along with itertools.izip() should be of some help here.
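The accepted indexing trick can be demonstrated end to end on a toy version of the question's stream (channels a, b, c with dividers 1, 2, 4). Everything here is made up for illustration: all channels are int16 so each sample is 2 bytes, and the fake stream values are 10*channel + sample_number:

```python
import numpy as np

# Toy setup: three channels with dividers 1, 2, 4; all samples int16.
dividers = [1, 2, 4]
itemsize = 2
lcm = 4  # least common multiple of the dividers

# Channel label for every byte of one repeating pattern.
pattern = []
for tick in range(lcm):
    for ch, d in enumerate(dividers):
        if tick % d == 0:
            pattern.extend([ch] * itemsize)
pattern = np.array(pattern)
N = len(pattern)  # bytes per repeat (14 here)

# Fake interleaved stream covering two repeats:
# a0 b0 c0 a1 a2 b1 a3  a4 b2 c1 a5 a6 b3 a7, value = 10*channel + n.
order = [(0, 0), (1, 0), (2, 0), (0, 1), (0, 2), (1, 1), (0, 3),
         (0, 4), (1, 2), (2, 1), (0, 5), (0, 6), (1, 3), (0, 7)]
raw = b"".join(np.int16(10 * ch + n).tobytes() for ch, n in order)

# One reshape plus one boolean slice per channel replaces the
# per-sample Python loop entirely.
data = np.frombuffer(raw, dtype=np.uint8).reshape(-1, N)
streams = [data[:, pattern == ch].reshape(-1).view(np.int16)
           for ch in range(len(dividers))]
print([s.tolist() for s in streams])
# [[0, 1, 2, 3, 4, 5, 6, 7], [10, 11, 12, 13], [20, 21]]
```

Because the boolean slice is applied to every row of the reshaped array at once, the cost is a handful of NumPy operations regardless of the number of samples, which is where the 50x speedup mentioned in the update comes from.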
[ "binary_data", "numpy", "optimization", "python" ]
stackoverflow_0004227990_binary_data_numpy_optimization_python.txt
Q: Issue with reversing list using list.pop()

I was working on writing a small code snippet to reverse a string using list appends and pop. The script that I wrote is as follows:

    someStr = raw_input("Enter some string here:")
    strList = []
    for c in someStr:
        strList.append(c)
    print strList

    reverseCharList = []
    for someChar in strList:
        reverseCharList.append(strList.pop())
    print reverseCharList

When I enter the string abcd, the output that's returned is [d, c]. I know I am mutating the list I am iterating over, but can somebody explain why the chars 'a' and 'b' are not displayed here? Thanks

A: How about a simple reversal of the string?

    >>> x = 'abcd'
    >>> x[::-1]
    'dcba'

On your code: never mutate the list you are iterating over. It can cause subtle errors.

    >>> strList = [1, 2, 3, 4, 5]
    >>> reverseCharList = []
    >>> for someChar in strList:
    ...     print strList
    ...     reverseCharList.append(strList.pop())
    ...     print strList
    ...
    [1, 2, 3, 4, 5]   <-- Iteration 1
    [1, 2, 3, 4]
    [1, 2, 3, 4]      <-- Iteration 2
    [1, 2, 3]
    [1, 2, 3]         <-- Iteration 3
    [1, 2]

See the following. Since you are using an iterator (for .. in ..), you can see the iterator details directly, and how mutating the list messes up the iterator:

    >>> strList = [1, 2, 3, 4, 5]
    >>> k = strList.__iter__()
    >>> k.next()
    1
    >>> k.__length_hint__()   <--- Still 4 to go
    4
    >>> strList.pop()         <---- You pop an element
    5
    >>> k.__length_hint__()   <----- Now only 3 to go
    3
    >>> k.next()
    2
    >>> k.__length_hint__()
    2

A:
    for someChar in strList:
        reverseCharList.append(strList.pop())

is essentially the same as:

    i = 0
    while i < len(strList):
        reverseCharList.append(strList.pop())
        i += 1

First iteration: i is 0, len(strList) is 4, and you pop+append 'd'. Second iteration: i is 1, len(strList) is 3, and you pop+append 'c'. Third iteration: i is 2, len(strList) is 2, so the loop condition fails and you're done. (This is really done with an iterator on the list, not a local variable i; I've shown it this way for clarity.) If you want to manipulate the sequence you're iterating over, it's generally better to use a while loop, e.g.:

    while strList:
        reverseCharList.append(strList.pop())

A: You shorten the list when you pop.

    reverseCharList = []
    while strList:
        reverseCharList.append(strList.pop())

A: A simple recursive version:

    def reverse(the_list):
        if not the_list:
            return []
        return [the_list.pop()] + reverse(the_list)

Of course, [].reverse() is faster.
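The while-loop fix suggested above can be wrapped in a small runnable function, shown next to the built-in alternatives so the approaches can be compared directly:

```python
# Pop-based reversal that tests the list instead of iterating over it,
# so mutating it during the loop is safe.
def reverse_with_pop(items):
    out = []
    while items:  # loop condition re-checks the shrinking list
        out.append(items.pop())
    return out

print(reverse_with_pop(list("abcd")))  # ['d', 'c', 'b', 'a']
print("abcd"[::-1])                    # 'dcba'
print(list(reversed("abcd")))          # ['d', 'c', 'b', 'a']
```

For real code the slice or reversed() forms are preferable; the pop loop is mainly useful for understanding the original bug.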
[ "loops", "python", "reverse", "stack" ]
stackoverflow_0004228805_loops_python_reverse_stack.txt
Q: Child process detecting the parent process' death in Python Is there a way for a child process in Python to detect if the parent process has died? A: If your Python process is running under Linux, and the prctl() system call is exposed, you can use the answer here. This can cause a signal to be sent to the child when the parent process dies. A: Assuming the parent is alive when you start to do this, you can check whether it is still alive in a busy loop as such, by using psutil: import psutil, os, time me = psutil.Process(os.getpid()) while 1: if me.parent is not None: # still alive time.sleep(0.1) continue else: print "my parent is gone" Not very nice but... A: The only reliable way I know of is to create a pipe specifically for this purpose. The child will have to repeatedly attempt to read from the pipe, preferably in a non-blocking fashion, or using select. It will get an error when the pipe does not exist anymore (presumably because of the parent's death). A: You might get away with reading your parent process' ID very early in your process, and then checking, but of course that is prone to race conditions. The parent that did the spawn might have died immediately, and even before your process got to execute its first instruction. Unless you have a way of verifying if a given PID refers to the "expected" parent, I think it's hard to do reliably.
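On POSIX systems there is also a simpler check than polling with psutil: when the parent dies, the orphaned child is reparented (typically to init/PID 1), so os.getppid() stops matching the value seen at startup. A minimal sketch (the helper name is my own invention):

```python
import os

def parent_alive(original_ppid):
    # On POSIX, an orphaned child is reparented (usually to PID 1),
    # so getppid() no longer matches the PPID recorded at startup.
    return os.getppid() == original_ppid

# Record the parent's PID once, early, then poll in the child's loop:
ppid_at_start = os.getppid()
print(parent_alive(ppid_at_start))  # True while the parent lives
```

As with the PID-reading approach in the last answer, this is still subject to the race where the parent dies before the child records the PPID, so the pipe-based technique remains the most reliable option.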
Child process detecting the parent process' death in Python
Is there a way for a child process in Python to detect if the parent process has died?
[ "If your Python process is running under Linux, and the prctl() system call is exposed, you can use the answer here.\nThis can cause a signal to be sent to the child when the parent process dies.\n", "Assuming the parent is alive when you start to do this, you can check whether it is still alive in a busy loop as...
[ 5, 4, 2, 1 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0000759443_python_subprocess.txt
Q: What is the difference between using decorators and extending a sub class by inheritance? I was trying to wrap my brain around Decorators in python but can't understand why we cannot achieve the same thing by using sub classes? A: You can achieve the same thing using subclasses, and in fact you don't even need subclasses - you can also achieve the same thing simply by wrapping a method in another method and reassigning it. There was a lot of discussion about whether or not the decorator syntax should be added to the language as it doesn't allow you to do anything new and requires programmers to learn one more new thing. What the syntax does is formalize a pattern that many people were already using, and make it a standard syntax that has a name and guidelines for how to use it. It is not necessary for you to use decorators - you can achieve the same effect in other ways - but using the officially supported standard approach with a concise, easy-to-read syntax makes life a bit easier. A: You do know that prepending a definition with @spam class|def name just means "define name as written here, then bind name to spam(name)"? Decorators are very often applied to functions rather than classes. Sure, you could make a callable class and subclass that... you could also implement your own integer type. Neither is viable. In quite a few cases, you probably could do something very similar by subclassing... except that decorators are defined once and can be applied to several classes, as opposed to writing a new subclass yourself in every case. Every solution to this inevitably would end up being equivalent to or very similar to decorators. As robert points out in a comment, if you had an example, the answers could be more specific...
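To make the "bind name to spam(name)" point concrete, here is a small sketch (the function names are invented for illustration):

```python
def shout(func):
    # A decorator: returns a wrapper that upper-cases func's result.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello, %s" % name

# The @shout form above is just sugar for defining the function
# and then rebinding the name:
def greet_manual(name):
    return "hello, %s" % name
greet_manual = shout(greet_manual)

print(greet("world"))         # HELLO, WORLD
print(greet_manual("world"))  # HELLO, WORLD
```

Both spellings produce the same wrapped function; the decorator syntax simply names the pattern and keeps the wrapping next to the definition.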
What is the difference between using decorators and extending a sub class by inheritance?
I was trying to wrap my brain around Decorators in python but can't understand why we cannot achieve the same thing by using sub classes?
[ "You can achieve the same thing using subclasses, and in fact you don't even need subclasses - you can also achieve the same thing simply by wrapping a method in another method and reassigning it. There was a lot of discussion about whether or not the decorator syntax should be added to the language as it doesn't a...
[ 4, 1 ]
[]
[]
[ "decorator", "python" ]
stackoverflow_0004229287_decorator_python.txt
Q: Python (with Django) and PHP Just wondering, as I think about learning either PHP or Django (I have previous Python knowledge), what advantages do Python and Django have over PHP, what disadvantages etc. I don't want to know which one is better, surely neither is better, both have their good sides as well as bad sides and I will probably learn both at some point. I don't want to start a flame war or anything, but please tell me some advantages and disadvantages for both to help me choose which one to learn first. Thanks in advance! A: PHP is a popular language for web development with tons of libraries and examples online. Python is a modern, well-designed programming language where everything is an object. It works well in many environments, including web programming, although it wasn't originally designed for that environment. If you want a general-purpose scripting language that can also be used for web development then learning Python would be a good idea. If you only plan to do web development and your main concern is to get a job, experience in PHP will make you attractive to a large number of potential employers who are already using this technology.
Python (with Django) and PHP
Just wondering, as I think about learning either PHP or Django (I have previous Python knowledge), what advantages do Python and Django have over PHP, what disadvantages etc. I don't want to know which one is better, surely neither is better, both have their good sides as well as bad sides and I will probably learn both at some point. I don't want to start a flame war or anything, but please tell me some advantages and disadvantages for both to help me choose which one to learn first. Thanks in advance!
[ "\nPHP is a popular language for web development with tons of libraries and examples online.\nPython is a modern, well-design programming language where everything is an object. It works well in many environments, including web programming, although it wasn't originally designed for that environment.\n\nIf you want...
[ 4 ]
[]
[]
[ "django", "php", "python" ]
stackoverflow_0004229394_django_php_python.txt
Q: python and process What is the best way in Python to find the process name and owner? Now I use WMI, but this version is too slow. A: By using https://github.com/giampaolo/psutil: >>> import psutil, os >>> p = psutil.Process(os.getpid()) >>> p.name() 'python.exe' >>> p.username() 'giampaolo' >>> A: Process name: is sys.argv[0] not sufficient for your purposes?
python and process
What is the best way in Python to find the process name and owner? Now I use WMI, but this version is too slow.
[ "By using https://github.com/giampaolo/psutil:\n>>> import psutil, os\n>>> p = psutil.Process(os.getpid())\n>>> p.name()\n'python.exe'\n>>> p.username()\n'giampaolo'\n>>> \n\n", "Process name: is sys.argv[0] not sufficient for your purposes?\n" ]
[ 1, 0 ]
[]
[]
[ "python", "windows", "wmi" ]
stackoverflow_0002299627_python_windows_wmi.txt
Q: Pausing a process in Windows I'm making a nice little Python GUI frontend for ffmpeg on Windows (one that is specifically designed to convert videos to an iPhone-friendly format and automatically import it to iTunes and tag it), and I want it to work so that you can pause the process and resume it if you want. Since I start ffmpeg as a separate process, the obvious solution would be for the program to suspend the process (which I know is possible in Windows, Process Explorer can do it), but I can't figure out how to do it. Does anyone have any idea how to do this in Python? A: You can easily do this by using psutil ( https://github.com/giampaolo/psutil ): import psutil pid = 1034 # replace this with the pid of your process p = psutil.Process(pid) p.suspend() ...to resume it: p.resume() Internally this is implemented in C by using SuspendThread() and ResumeThread() Windows system calls.
Pausing a process in Windows
I'm making a nice little Python GUI frontend for ffmpeg on Windows (one that is specifically designed to convert videos to an iPhone-friendly format and automatically import it to iTunes and tag it), and I want it to work so that you can pause the process and resume it if you want. Since I start ffmpeg as a separate process, the obvious solution would be for the program to suspend the process (which I know is possible in Windows, Process Explorer can do it), but I can't figure out how to do it. Does anyone have any idea how to do this in Python?
[ "You can easily do this by using psutil ( https://github.com/giampaolo/psutil ):\nimport psutil\npid = 1034 # replace this with the pid of your process\np = psutil.Process(pid)\np.suspend()\n\n...to resume it:\np.resume()\n\nInternally this is implemented in C by using SuspendThread() and ResumeThread() Windows sy...
[ 6 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0001892356_python_windows.txt
Q: Monitor Process in Python? I think this is a pretty basic question, but here it is anyway. I need to write a python script that checks to make sure a process, say notepad.exe, is running. If the process is running, do nothing. If it is not, start it. How would this be done. I am using Python 2.6 on Windows XP A: The process creation functions of the os module are apparently deprecated in Python 2.6 and later, with the subprocess module being the module of choice now, so... if 'notepad.exe' not in subprocess.Popen('tasklist', stdout=subprocess.PIPE).communicate()[0]: subprocess.Popen('notepad.exe') Note that in Python 3, the string being checked will need to be a bytes object, so it'd be if b'notepad.exe' not in [blah]: subprocess.Popen('notepad.exe') (The name of the file/process to start does not need to be a bytes object.) A: There are a couple of options, 1: the more crude but obvious would be to do some text processing against: os.popen('tasklist').read() 2: A more involved option would be to use pywin32 and research the win32 APIs to figure out what processes are running. 3: WMI (I found this just now), and here is a vbscript example of how to query the machine for processes through WMI. A: Python library for Linux process management
Monitor Process in Python?
I think this is a pretty basic question, but here it is anyway. I need to write a python script that checks to make sure a process, say notepad.exe, is running. If the process is running, do nothing. If it is not, start it. How would this be done. I am using Python 2.6 on Windows XP
[ "The process creation functions of the os module are apparently deprecated in Python 2.6 and later, with the subprocess module being the module of choice now, so...\nif 'notepad.exe' not in subprocess.Popen('tasklist', stdout=subprocess.PIPE).communicate()[0]:\n subprocess.Popen('notepad.exe')\n\nNote that in Py...
[ 15, 4, 3 ]
[]
[]
[ "monitor", "process", "python", "restart" ]
stackoverflow_0003215262_monitor_process_python_restart.txt
Q: Stuck in SQLAlchemy and Object Relational I'm just working on a simple database of an address book for learning sqlalchemy. This is the code under the 'tables.py' file (that is pretty much like the one in the tutorial!): #!/usr/bin/env python # -*- coding: utf-8 -*- from sqlalchemy import Column, Integer, String, create_engine, ForeignKey from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship, backref engine = create_engine('sqlite:///phone.db', echo=True) Base = declarative_base() class namesT(Base): __tablename__ = 'Names' id = Column(Integer, primary_key=True) name = Column(String) sirname = Column(String) job = Column(String) work = Column(String) def __init__(self, namesTuple): self.name, self.sirname, self.job, self.work = namesTuple print self.name, self.sirname, self.job, self.work def __repr__(self): return '%s, %s, %s, %s' % (self.name, self.sirname, self.job, self.work) class detailT(Base): __tablename__ = "Details" id = Column(Integer, primary_key=True) names_id = Column(Integer, ForeignKey('Names.id')) type = Column(String) info = Column(String) detail = Column(String) names = relationship(namesT, backref='Details', order_by=id, cascade="all, delete, delete-orphan") def __init__(self, detailsTuple): self.type, self.info, self.detail = detailsTuple print self.type, self.info, self.detail def __repr__(self): return "%s, %s, %s" % (self.type, self.info, self.detail) Base.metadata.create_all(engine) And this is the code in 'dbtrans.py': #!/usr/bin/env python # -*- coding: utf-8 -*- from tables import engine as engine from tables import namesT as names from sqlalchemy.orm import sessionmaker class transaction: def __init__(self): self.Session = sessionmaker(bind=engine) self.session = self.Session() def insert_in_names(self, namesTuple): print namesTuple ed = names(namesTuple) self.session.add(ed) def find(self): self.session.query(names).filter(names.name=='ed').all() def commitAll(self): self.session.commit() if
__name__ == "__main__": tup = ('Blah', 'Blah', 'Blah', 'Blah') ins = transaction() ins.insert_in_names(tup) # print ins.sessQuery() ins.commitAll() I just get this error every time I run the dbtrans: sqlalchemy.orm.exc.FlushError: Instance <namesT at 0x14538d0> is an unsaved, pending instance and is an orphan (is not attached to any parent 'detailT' instance via that classes' 'names' attribute) Where is the problem? A: Please correct me if I'm wrong, but the error is saying there is no detailT instance, and looking at the code I can't see anywhere where the detailT class is instantiated. It looks to me that, since detailT is the parent of namesT via the relationship, you can't save namesT without its parent. I would start here. A: I just removed the relationship from the 'detailT' class and added it to 'namesT' like this: details = relationship("detailT", backref='namesT', order_by=id, cascade="all, delete, delete-orphan") Now it's working!
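A minimal, self-contained sketch of the fix from the second answer (my reconstruction, with the columns trimmed down and an in-memory SQLite database, not the asker's exact code): the "delete-orphan" cascade belongs on the parent side of the relationship, so a namesT row can be saved on its own.

```python
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class namesT(Base):
    __tablename__ = 'Names'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # The cascade now lives on the parent (namesT) side, so inserting a
    # namesT row alone no longer trips the orphan check at flush time.
    details = relationship('detailT', backref='names',
                           cascade='all, delete, delete-orphan')

class detailT(Base):
    __tablename__ = 'Details'
    id = Column(Integer, primary_key=True)
    names_id = Column(Integer, ForeignKey('Names.id'))
    info = Column(String)

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(namesT(name='ed'))
session.commit()  # no FlushError: namesT is no longer treated as an orphan
print(session.query(namesT).count())  # 1
```

With the original code, the cascade on detailT.names declared namesT a child that could not exist without a detailT parent, which is exactly what the FlushError complains about.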
Stuck in SQLAlchemy and Object Relational
I'm just working on a simple database of an address book for learning sqlalchemy. This is the code under the 'tables.py' file (that is pretty like one in the tutorial!): #!/usr/bin/env python # -*- coding: utf-8 -*- from sqlalchemy import Column, Integer, String, create_engine, ForeignKey from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship, backref engine = create_engine('sqlite:///phone.db', echo=True) Base = declarative_base() class namesT(Base): __tablename__ = 'Names' id = Column(Integer, primary_key=True) name = Column(String) sirname = Column(String) job = Column(String) work = Column(String) def __init__(self, namesTuple): self.name, self.sirname, self.job, self.work = namesTuple print self.name, self.sirname, self.job, self.work def __repr__(self): return '%s, %s, %s, %s' % (self.name, self.sirname, self.job, self.work) class detailT(Base): __tablename__ = "Details" id = Column(Integer, primary_key=True) names_id = Column(Integer, ForeignKey('Names.id')) type = Column(String) info = Column(String) detail = Column(String) names = relationship(namesT, backref='Details', order_by=id, cascade="all, delete, delete-orphan") def __init__(self, detailsTuple): self.type, self.info, self.detail = detailsTuple print self.type, self.info, self.detail def __repr__(self): return "%s, %s, %s" % (self.type, self.info, self.detail) Base.metadata.create_all(engine) And this is the codes in the 'dbtrans.py': #!/usr/bin/env python # -*- coding: utf-8 -*- from tables import engine as engine from tables import namesT as names from sqlalchemy.orm import sessionmaker class transaction: def __init__(self): self.Session = sessionmaker(bind=engine) self.session = self.Session() def insert_in_names(self, namesTuple): print namesTuple ed = names(namesTuple) self.session.add(ed) def find(self): self.session.query(names).filter(names.name=='ed').all() def commitAll(self): self.session.commit() if __name__ == "__main__": tup = ('Blah', 'Blah', 
'Blah', 'Blah') ins = transaction() ins.insert_in_names(tup) # print ins.sessQuery() ins.commitAll() I just get this error every time I run the dbtrans: sqlalchemy.orm.exc.FlushError: Instance <namesT at 0x14538d0> is an unsaved, pending instance and is an orphan (is not attached to any parent 'detailT' instance via that classes' 'names' attribute) where is the problem?
[ "Please correct me if I'm wrong but the error is saying there is no detailT instance, and looking at the code I can't see anywhere where the detailT class is instantiated. It looks to me that since detailT is the parent of of nameT via the relationship you can't save nameT without its parent. I would start here.\n"...
[ 1, 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0004225483_python_sqlalchemy.txt
Q: Is there a Python equivalent of Groovy/Grails for Java I'm thinking of something like Jython/Jango? Does this exist? Or does Jython allow you to do everything-Python in Java including Django (I'm not sure how Jython differs from Python)? A: http://wiki.python.org/jython/DjangoOnJython A: "I'm not sure how Jython differs from Python" http://www.jython.org/Project/ Jython is an implementation of the high-level, dynamic, object-oriented language Python seamlessly integrated with the Java platform. http://docs.python.org/reference/introduction.html#alternate-implementations Python implemented in Java. This implementation can be used as a scripting language for Java applications, or can be used to create applications using the Java class libraries. It is also often used to create tests for Java libraries. More information can be found at the Jython website. A: http://turbogears.org/ might be closer to what you are looking for
Is there a Python equivalent of Groovy/Grails for Java
I'm thinking of something like Jython/Jango? Does this exist? Or does Jython allow you to do everything-Python in Java including Django (I'm not sure how Jython differs from Python)?
[ "http://wiki.python.org/jython/DjangoOnJython\n", "\"I'm not sure how Jython differs from Python\"\nhttp://www.jython.org/Project/\n\nJython is an implementation of the\n high-level, dynamic, object-oriented\n language Python seamlessly integrated\n with the Java platform.\n\nhttp://docs.python.org/reference/i...
[ 8, 3, 2 ]
[]
[]
[ "django", "grails", "groovy", "jython", "python" ]
stackoverflow_0000524147_django_grails_groovy_jython_python.txt
Q: Reading Command Line Arguments of Another Process (Win32 C code) I need to be able to list the command line arguments (if any) passed to other running processes. I already have the PIDs of the running processes on the system, so basically I need to determine the arguments passed to the process with given PID XXX. I'm working on a core piece of a Python module for managing processes. The code is written as a Python extension in C and will be wrapped by a higher level Python library. The goal of this project is to avoid dependency on third party libs such as the pywin32 extensions, or on ugly hacks like calling 'ps' or taskkill on the command line, so I'm looking for a way to do this in C code. I've Googled this around and found some brief suggestions of using CreateRemoteThread() to inject myself into the other process, then run GetCommandLine() but I was hoping someone might have some working code samples and/or better suggestions. UPDATE: I've found full working demo code and a solution using NtQueryProcessInformation on CodeProject: http://www.codeproject.com/KB/threads/GetNtProcessInfo.aspx - It's not ideal since it's "unsupported" to cull the information directly from the NTDLL structures but I'll live with it. Thanks to all for the suggestions. UPDATE 2: I managed through more Googling to dig up a C version that does not use C++ code, and is a little more directly/concisely pointed toward this problem. See http://wj32.wordpress.com/2009/01/24/howto-get-the-command-line-of-processes/ for details. Thanks! A: To answer my own question, I finally found a CodeProject solution that does exactly what I'm looking for: http://www.codeproject.com/KB/threads/GetNtProcessInfo.aspx As @Reuben already pointed out, you can use NtQueryProcessInformation to retrieve this information. Unfortunately it's not a recommended approach, but given that the only other solution seems to be to incur the overhead of a WMI query, I think we'll take this approach for now.
Note that this seems not to work when using code compiled on 32-bit Windows on a 64-bit Windows OS, but since our modules are compiled from source on the target that should be OK for our purposes. I'd rather use this existing code and, should it break in Windows 7 or at a later date, we can look again at using WMI. Thanks for the responses! UPDATE: A more concise and C-only (as opposed to C++) version of the same technique is illustrated here: http://wj32.wordpress.com/2009/01/24/howto-get-the-command-line-of-processes/ A: The cached solution: http://74.125.45.132/search?q=cache:-wPkE2PbsGwJ:windowsxp.mvps.org/listproc.htm+running+process+command+line&hl=es&ct=clnk&cd=1&gl=ar&client=firefox-a in CMD WMIC /OUTPUT:C:\ProcessList.txt PROCESS get Caption,Commandline,Processid or WMIC /OUTPUT:C:\ProcessList.txt path win32_process get Caption,Processid,Commandline Also: http://mail.python.org/pipermail/python-win32/2007-December/006498.html http://tgolden.sc.sabren.com/python/wmi_cookbook.html#running_processes seems to do the trick: import wmi c = wmi.WMI () for process in c.Win32_Process (): print process.CommandLine A: By using psutil ( https://github.com/giampaolo/psutil ): >>> import psutil, os >>> psutil.Process(os.getpid()).cmdline() ['C:\\Python26\\python.exe', '-O'] >>> A: The WMI approach mentioned in another response is probably the most reliable way of doing this. Looking through MSDN, I spotted what looks like another possible approach; it's documented, but it's not clear whether it's fully supported. In MSDN's language, it "may be altered or unavailable in future versions of Windows..." In any case, provided that your process has the right permissions, you should be able to call NtQueryProcessInformation with a ProcessInformationClass of ProcessBasicInformation. In the returned PROCESS_BASIC_INFORMATION structure, you should get back a pointer to the target process's process execution block (as field PebBaseAddress).
The ProcessParameters field of the PEB will give you a pointer to an RTL_USER_PROCESS_PARAMETERS structure. The CommandLine field of that structure will be a UNICODE_STRING structure. (Be careful not to make too many assumptions about the string; there are no guarantees that it will be NULL-terminated, and it's not clear whether or not you'll need to strip off the name of the executed application from the beginning of the command line.) I haven't tried this approach--and as I mentioned above, it seems a bit... iffy (read: non-portable)--but it might be worth a try. Best of luck... A: If you aren't the parent of these processes, then this is not possible using documented functions :( Now, if you're the parent, you can do your CreateRemoteThread trick, but otherwise you will almost certainly get Access Denied unless your app has admin rights.
Reading Command Line Arguments of Another Process (Win32 C code)
I need to be able to list the command line arguments (if any) passed to other running processes. I have the PIDs already of the running processes on the system, so basically I need to determine the arguments passed to process with given PID XXX. I'm working on a core piece of a Python module for managing processes. The code is written as a Python extension in C and will be wrapped by a higher level Python library. The goal of this project is to avoid dependency on third party libs such as the pywin32 extensions, or on ugly hacks like calling 'ps' or taskkill on the command line, so I'm looking for a way to do this in C code. I've Googled this around and found some brief suggestions of using CreateRemoteThread() to inject myself into the other process, then run GetCommandLine() but I was hoping someone might have some working code samples and/or better suggestions. UPDATE: I've found full working demo code and a solution using NtQueryProcessInformation on CodeProject: http://www.codeproject.com/KB/threads/GetNtProcessInfo.aspx - It's not ideal since it's "unsupported" to cull the information directly from the NTDLL structures but I'll live with it. Thanks to all for the suggestions. UPDATE 2: I managed through more Googling to dig up a C version that does not use C++ code, and is a little more direct/concisely pointed toward this problem. See http://wj32.wordpress.com/2009/01/24/howto-get-the-command-line-of-processes/ for details. Thanks!
[ "To answer my own question, I finally found a CodeProject solution that does exactly what I'm looking for:\nhttp://www.codeproject.com/KB/threads/GetNtProcessInfo.aspx\nAs @Reuben already pointed out, you can use NtQueryProcessInformation to retrieve this information. Unfortuantely it's not a recommended approach, ...
[ 6, 5, 3, 2, 0 ]
[]
[]
[ "c", "python", "winapi" ]
stackoverflow_0000440932_c_python_winapi.txt
Q: Number formatting in python How can I make this Python code: # -*- coding: cp1252 -*- a=4 b=2 c=1.0 d=1.456 print '%fx³ + %fx² + %fx + %f = 0' %(a,b,c,d) print like this: 4x³ + 2x² + 1x + 1.456 = 0 instead of like this (how it prints currently): 4.000000x³ + 2.000000x² + 1.000000x + 1.456000 = 0 A: print '%gx³ + %gx² + %gx + %g = 0' %(a,b,c,d) A: Use this > Python Decimals format
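The same %g behaviour is also available through str.format (and f-strings in Python 3); a small sketch comparing the two spellings:

```python
a, b, c, d = 4, 2, 1.0, 1.456
# The 'g' presentation type drops insignificant trailing zeros,
# so 1.0 prints as '1' while 1.456 stays '1.456'.
old_style = '%gx³ + %gx² + %gx + %g = 0' % (a, b, c, d)
new_style = '{:g}x³ + {:g}x² + {:g}x + {:g} = 0'.format(a, b, c, d)
print(old_style)
print(new_style)  # identical output: 4x³ + 2x² + 1x + 1.456 = 0
```

Note that %g switches to exponential notation for very large or very small magnitudes, which may or may not be what you want for polynomial coefficients.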
Number formatting in python
How can I make this Python code: # -*- coding: cp1252 -*- a=4 b=2 c=1.0 d=1.456 print '%fx³ + %fx² + %fx + %f = 0' %(a,b,c,d) print like this: 4x³ + 2x² + 1x + 1.456 = 0 instead of like this (how it prints currently): 4.000000x³ + 2.000000x² + 1.000000x + 1.456000 = 0
[ "print '%gx³ + %gx² + %gx + %g = 0' %(a,b,c,d)\n\n", "Use this > Python Decimals format\n" ]
[ 4, 2 ]
[]
[]
[ "formatting", "number_formatting", "python" ]
stackoverflow_0004229767_formatting_number_formatting_python.txt
Q: What is the easiest way to see if a process with a given pid exists in Python? In a POSIX system, I want to see if a given process (PID 4356, for example) is running. It would be even better if I could get metadata about that process. A: Instead of os.waitpid, you can also use os.kill with signal 0: >>> os.kill(8861, 0) >>> os.kill(12765, 0) Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: [Errno 3] No such process >>> Edit: more expansively: import errno import os def pid_exists(pid): try: os.kill(pid, 0) except OSError, e: return e.errno == errno.EPERM else: return True This works fine on my Linux box. I haven't verified that "signal 0" is actually POSIX, but it's always worked on every Unix variant I've tried. A: On Linux, at least, the /proc directory has what you are looking for. It's basically system data from the kernel represented as directories and files. All the numeric directories are details of processes. Just use the basic Python os functions to get at this data: #ls /proc 1 17 18675 25346 26390 28071 28674 28848 28871 29347 590 851 874 906 9621 9655 devices iomem modules ... #ls /proc/1 auxv cmdline cwd environ exe fd maps mem mounts root stat statm status task wchan #cat /proc/1/cmdline init [3] A: In a portable way, by using psutil ( https://github.com/giampaolo/psutil ) >>> import psutil, os >>> psutil.pid_exists(342342) False >>> psutil.pid_exists(os.getpid()) True >>> A: Look at /proc/pid. This exists only if the process is running, and contains lots of information. A: os.waitpid() might be of help: try: os.waitpid(pid, 0) except OSError: running = False else: running = True A: One way to get this information would be: import commands output = commands.getstatusoutput("ps -ef | awk '{print $2}' | grep MYPID") See: http://docs.python.org/library/commands.html I think: commands.getoutput(...) could be used to get metadata available on the 'ps' line.
Since you're using a POSIX system, I imagine ps (or equivalent) would be available (e.g. prstat under Solaris).
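A Python 3 version of the signal-0 check from the first answer (a POSIX-only sketch; Python 3's OSError subclasses make the errno test cleaner than the Python 2 `except OSError, e` form):

```python
import os

def pid_exists(pid):
    # Signal 0 performs error checking only; nothing is delivered.
    if pid <= 0:
        return False
    try:
        os.kill(pid, 0)
    except ProcessLookupError:   # ESRCH: no process with that PID
        return False
    except PermissionError:      # EPERM: it exists, but belongs to another user
        return True
    return True

print(pid_exists(os.getpid()))  # True
print(pid_exists(-1))           # False
```

Do not use this on Windows: there, os.kill with an unrecognised signal number terminates the target process instead of probing it.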
What is the easiest way to see if a process with a given pid exists in Python?
In a POSIX system, I want to see if a given process (PID 4356, for example) is running. It would be even better if I could get metadata about that process.
[ "Instead of os.waitpid, you can also use os.kill with signal 0:\n>>> os.kill(8861, 0)\n>>> os.kill(12765, 0)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nOSError: [Errno 3] No such process\n>>> \n\nEdit: more expansively:\nimport errno\nimport os\n\ndef pid_exists(pid):\n try:\n ...
[ 11, 4, 3, 1, 0, 0 ]
[]
[]
[ "posix", "python" ]
stackoverflow_0001005972_posix_python.txt
Q: Clearing background in matplotlib using wxPython I want to create an animation with matplotlib to monitor the convergence of a clustering algorithm. It should draw a scatterplot of my data when called the first time and draw error ellipses each time the plot is updated. I am trying to use canvas_copy_from_bbox() and restore_region() to save the scatterplot and then draw a new set of ellipses whenever I'm updating the plot. However, the code just plots the new ellipses on top of the old ones, without clearing the previous plot first. I suspect, somehow this approach doesn't work well with the Ellipse() and add_path() commands, but I don't know how to fix this. Here is the code: import wx import math from math import pi from matplotlib.patches import Ellipse from matplotlib.figure import Figure from matplotlib.backends.backend_wxagg import \ FigureCanvasWxAgg as FigureCanvas TIMER_ID = wx.NewId() class _MonitorPlot(wx.Frame): def __init__(self, data, scale=1): self.scale = scale wx.Frame.__init__(self, None, wx.ID_ANY, title="FlowVB Progress Monitor", size=(800, 600)) self.fig = Figure((8, 6), 100) self.canvas = FigureCanvas(self, wx.ID_ANY, self.fig) self.ax = self.fig.add_subplot(111) x_lims = [data[:, 0].min(), data[:, 0].max()] y_lims = [data[:, 1].min(), data[:, 1].max()] self.ax.set_xlim(x_lims) self.ax.set_ylim(y_lims) self.ax.set_autoscale_on(False) self.l_data = self.ax.plot(data[:, 0], data[:, 1], color='blue', linestyle='', marker='o') self.canvas.draw() self.bg = self.canvas.copy_from_bbox(self.ax.bbox) self.Bind(wx.EVT_IDLE, self._onIdle) def update_plot(self, pos, cov): self.canvas.restore_region(self.bg) for k in range(pos.shape[0]): l_center, = self.ax.plot(pos[k, 0], pos[k, 1], color='red', marker='+') U, s, Vh = np.linalg.svd(cov[k, :, :]) orient = math.atan2(U[1, 0], U[0, 0]) * 180 / pi ellipsePlot = Ellipse(xy=pos[k, :], width=2.0 * math.sqrt(s[0]), height=2.0 * math.sqrt(s[1]), angle=orient, facecolor='none', edgecolor='red') 
self.ax.add_patch(ellipsePlot) self.canvas.draw() self.canvas.blit(self.ax.bbox) A: What's happening is that you're adding new patches to the plot each time, and then drawing all of them when you call self.canvas.draw(). The quickest fix is to just call self.canvas.draw_artist(ellipsePlot) after adding each patch and remove the call to self.canvas.draw() As a simple, stand-alone example: # Animates 3 ellipses overlain on a scatterplot import matplotlib.pyplot as plt from matplotlib.patches import Ellipse import numpy as np num = 10 x = np.random.random(num) y = np.random.random(num) plt.ion() fig = plt.figure() ax = fig.add_subplot(111) line = ax.plot(x, y, 'bo') fig.canvas.draw() bg = fig.canvas.copy_from_bbox(ax.bbox) # Pseudo-main loop for i in range(100): fig.canvas.restore_region(bg) # Make a new ellipse each time... (inefficient!) for i in range(3): width, height, angle = np.random.random(3) angle *= 180 ellip = Ellipse(xy=(0.5, 0.5), width=width, height=height, facecolor='red', angle=angle, alpha=0.5) ax.add_patch(ellip) ax.draw_artist(ellip) fig.canvas.blit(ax.bbox) However, this will probably cause memory consumption problems over time, as the axes object will keep track of all artists added to it. If your axes doesn't hang around for a long time, this may be negligible, but you should at least be aware that it will cause a memory leak. One way around this is to remove the ellipse artists from the axes by calling ellipsePlot.remove() for each ellipse after drawing it. However, this is still slightly inefficient, as you're constantly creating and destroying ellipse artists, when you could just update them. (Creating and destroying them doesn't have much overhead at all, though, it's mostly a stylistic issue...) If the number of ellipses stays the same over time, it's better and easier to just update the properties of each ellipse artist object instead of creating and adding new ones.
This avoids having to remove "old" ellipses from the axes, as only the number that you need will ever exist. As a simple, stand-alone example of this: # Animates 3 ellipses overlain on a scatterplot import matplotlib.pyplot as plt from matplotlib.patches import Ellipse import numpy as np num = 10 x = np.random.random(num) y = np.random.random(num) plt.ion() fig = plt.figure() ax = fig.add_subplot(111) line = ax.plot(x, y, 'bo') fig.canvas.draw() bg = fig.canvas.copy_from_bbox(ax.bbox) # Make and add the ellipses the first time (won't ever be drawn) ellipses = [] for i in range(3): ellip = Ellipse(xy=(0.5, 0.5), width=1, height=1, facecolor='red', alpha=0.5) ax.add_patch(ellip) ellipses.append(ellip) # Pseudo-main loop for i in range(100): fig.canvas.restore_region(bg) # Update the ellipse artists... for ellip in ellipses: ellip.width, ellip.height, ellip.angle = np.random.random(3) ellip.angle *= 180 ax.draw_artist(ellip) fig.canvas.blit(ax.bbox)
Q: Process size in XP from Python I have a python script that can approach the 2 GB process limit under Windows XP. On a machine with 2 GB physical memory, that can pretty much lock up the machine, even if the Python script is running at below normal priority. Is there a way in Python to find out my own process size? Thanks, Gerry A: try: import win32process print win32process.GetProcessMemoryInfo(win32process.GetCurrentProcess()) A: By using psutil https://github.com/giampaolo/psutil : >>> import psutil, os >>> p = psutil.Process(os.getpid()) >>> p.memory_info() meminfo(rss=6971392, vms=47755264) >>> p.memory_percent() 0.16821255914801228 >>> A: Rather than worrying about limiting your process size at runtime, it might be better to figure out if all the pieces of data that you're currently storing in memory truly need to be in memory at all times. You probably have plenty of disk space, and simply by creating some temporary files (see the tempfile module) there should be ample opportunity to write any data you are no longer using for the current calculation to disk. You can then read it back in later when/if you need it again. This is (simplistically) how many databases work. While disk is considered "slow" in a computational sense, it is still very fast and it is an extremely useful tool when working with large data sets. And since you are already setting the process priority to "Below Normal" it doesn't sound like speed should be a serious issue for you anyway, whereas the memory clearly is.
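The spill-to-disk pattern the last answer describes can be sketched in a few lines with the standard library; pickle is just one way to serialize, and the names here are illustrative only:

```python
import pickle
import tempfile

big_data = list(range(100000))  # stand-in for a large intermediate result

# Spill the result to a temporary file and drop the in-memory copy...
with tempfile.TemporaryFile() as spill:
    pickle.dump(big_data, spill)
    del big_data
    # ...later, when the data is needed again, read it back:
    spill.seek(0)
    big_data = pickle.load(spill)

print(len(big_data))  # the round-tripped data is intact
```

The temporary file is deleted automatically when the `with` block exits, so nothing is left on disk afterward.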
Q: python generators duplicates How do I either avoid adding duplicate entries into a generator or remove them once they are already there? If I should be using something else, please advise. A: If the values are hashable, the simplest, dumbest way to remove duplicates is to use a set: values = mygenerator() unique_values = set(values) But watch out: sets don't remember what order the values were originally in. So this scrambles the sequence. The function below might be better than set for your purpose. It filters out duplicates without getting any of the other values out of order: def nub(it): seen = set() for x in it: if x not in seen: yield x seen.add(x) Call nub with one argument, any iterable of hashable values. It returns an iterator that produces all the same items, but with the duplicates removed. A: itertools.groupby() can collapse adjacent duplicates if you're willing to do a bit of work. print [x[0] for x in itertools.groupby([1, 2, 2, 3])]
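To see the order-preserving behavior of the dedupe generator from the first answer in action (the name nub is kept from that answer):

```python
def nub(it):
    """Yield the items of `it` in first-seen order, skipping duplicates.

    Items must be hashable, since they are tracked in a set.
    """
    seen = set()
    for x in it:
        if x not in seen:
            yield x
            seen.add(x)

# Order of first appearance is preserved, unlike set():
print(list(nub([3, 1, 3, 2, 1, 2])))  # [3, 1, 2]
```

Because nub is itself a generator, it consumes its input lazily, so it also works on infinite or very large iterables as long as the number of *distinct* values stays bounded.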
Q: Python pattern matching I'm currently in the process of converting an old bash script of mine into a Python script with added functionality. I've been able to do most things, but I'm having a lot of trouble with Python pattern matching. In my previous script, I downloaded a web page and used sed to get the elemented I wanted. The matching was done like so (for one of the values I wanted): PM_NUMBER=`cat um.htm | LANG=sv_SE.iso88591 sed -n 's/.*ol.st.*pm.*count..\([0-9]*\).*/\1/p'` It would match the number wrapped in <span class="count"></span> after the phrase "olästa pm". The markup I'm running this against is: <td style="padding-left: 11px;"> <a href="/abuse_list.php"> <img src="/gfx/abuse_unread.png" width="15" height="12" alt="" title="9 anmälningar" /> </a> </td> <td align="center"> <a class="page_login_text" href="/pm.php" title="Du har 3 olästa pm."> <span class="count">3</span> </td> <td style="padding-left: 11px;" align="center"> <a class="page_login_text" href="/blogg_latest.php" title="Du har 1 ny bloggkommentar"> <span class="count">1</span> </td> <td style="padding-left: 11px;" align="center"> <a class="page_login_text" href="/user_guestbook.php" title="Min gästbok"> <span class="count">1</span> </td> <td style="padding-left: 11px;" align="center"> <a class="page_login_text" href="/forum.php?view=3" title="Du har 1 ny forumkommentar"> <span class="count">1</span> </td> <td style="padding-left: 11px;" align="center"> <a class="page_login_text" href="/user_images.php?user_id=162005&func=display_new_comments" title="Du har 1 ny albumkommentar"> <span class="count">1</span> </td> <td style="padding-left: 11px;" align="center"> <a class="page_login_text" href="/forum_favorites.php" title="Du har 2 uppdaterade trådar i &quot;bevakade trådar&quot;"> <span class="count">2</span> </td> I'm hesitant to post this, because it seems like I'm asking for a lot, but could someone please help me with a way to parse this in Python? 
I've been pulling my hair out trying to do this, but regular expressions and I just don't match (pardon the pun). I've spent the last couple of hours experimenting and reading the Python manual on regular expressions, but I can't seem to figure it out. Just to make it clear, what I need are 7 different expressions for matching the number within <span class="count"></span>. I need to, for example, be able to find the number of unread PMs ("olästa pm"). A: You will not parse html yourself. You will use an HTML parser built in Python to parse the html. Lightweight xml dom parser in python Beautiful Soup A: You can use lxml to pull out the values you are looking for pretty easily with xpaths lxml xpath Example from lxml import html page = html.fromstring(open("um.htm", "r").read()) matches = page.xpath("//a[contains(@title, 'pm.') or contains(@title, 'ol')]/span") print [elem.text for elem in matches] A: use either: BeautifulSoup lxml parsing HTML with regexes is a recipe for disaster. A: It is impossible to reliably match HTML using regular expressions. It is usually possible to cobble something together that works for a specific page, but it is not advisable as even a subtle tweak to the source HTML can render all your work useless. HTML simply has a more complex structure than Regex is capable of describing. The proper solution is to use a dedicated HTML parser. Note that even XML parsers won't do what you need, not reliably anyway. Valid XHTML is valid XML, but even valid HTML is not, even though it's quite similar. And valid HTML/XHTML is nearly impossible to find in the wild anyway. There are a few different HTML parsers available: BeautifulSoup is not in the standard library, but it is the most forgiving parser, it can handle almost all real-world HTML and it's designed to do exactly what you're trying to do. HTMLParser is included in the Python standard library, but it is fairly strict about accepting only valid HTML.
htmllib is also in the standard library, but is deprecated. As other people have suggested, BeautifulSoup is almost certainly your best choice.
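Since HTMLParser comes up in the last answer: a minimal stdlib-only sketch of pulling the <span class="count"> values out, keyed by the title of the enclosing link. The class name and the sample markup are simplified stand-ins for the question's page (the module is html.parser in Python 3, HTMLParser in Python 2):

```python
from html.parser import HTMLParser  # Python 2: from HTMLParser import HTMLParser


class CountExtractor(HTMLParser):
    """Collect the text of every <span class="count">, keyed by the
    title attribute of the most recently seen <a> tag."""

    def __init__(self):
        HTMLParser.__init__(self)
        self.last_title = None   # title of the last <a> we entered
        self.in_count = False    # currently inside a count span?
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.last_title = attrs.get("title", "")
        elif tag == "span" and attrs.get("class") == "count":
            self.in_count = True

    def handle_data(self, data):
        if self.in_count:
            self.counts[self.last_title] = int(data)
            self.in_count = False


p = CountExtractor()
p.feed('<a title="Du har 3 olasta pm."><span class="count">3</span></a>')
print(p.counts)
```

From the resulting dict you can look up each of the 7 values by searching the titles for the relevant keyword, instead of writing 7 regular expressions.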
Q: Python: Test if an argument is an integer I want to write a python script that takes 3 parameters. The first parameter is a string, the second is an integer, and the third is also an integer. I want to put conditional checks at the start to ensure that the proper number of arguments are provided, and they are the right type before proceeding. I know we can use sys.argv to get the argument list, but I don't know how to test that a parameter is an integer before assigning it to my local variable for use. Any help would be greatly appreciated. A: str.isdigit() can be used to test if a string is comprised solely of numbers. A: More generally, you can use isinstance to see if something is an instance of a class. Obviously, in the case of script arguments, everything is a string, but if you are receiving arguments to a function/method and want to check them, you can use: def foo(bar): if not isinstance(bar, int): bar = int(bar) # continue processing... You can also pass a tuple of classes to isinstance: isinstance(bar, (int, float, decimal.Decimal)) A: If you're running Python 2.7, try importing argparse. Python 3.2 will also use it, and it is the new preferred way to parse arguments. This sample code from the Python documentation page takes in a list of ints and finds either the max or the sum of the numbers passed. import argparse parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('integers', metavar='N', type=int, nargs='+', help='an integer for the accumulator') parser.add_argument('--sum', dest='accumulate', action='store_const', const=sum, default=max, help='sum the integers (default: find the max)') args = parser.parse_args() print(args.accumulate(args.integers)) A: Python way is to try and fail if the input does not support operation like try: sys.argv = sys.argv[:1]+map(int,sys.argv[1:]) except: print 'Incorrect integers', sys.argv[1:] A: You can use type to determine the type of any object in Python. 
This works in Python 2.6, I don't personally know if it's present in other versions. obvious_string = "This is a string." if type(obvious_string) != int: print "Bro, that is so _not_ an integer." else: print "Thanks for the integer, brotato chip." A: I am new to Python so I am posting this not only to help but also be helped: get comments on why my approach is/isn't the best one, that is. So, with the disclaimer that I am not an experienced python dev, here is what I would do: inp = sys.argv[x] try: input = int(inp) except ValueError: print("Input is not an integer") What the above does is that it assigns sys.argv[x] to inp and then tries to assign the integer form of inp to input. If there is not an integer form of inp then inp is not a number so a ValueError exception is raised. I take it that's your main problem and you know how to check if you have all three parameters in the correct form. If not, just let us know and I am sure you will get more answers. :) Just realized Tony Veijalainen posted a similar answer A: >>> int('foo') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: 'foo' Give it to int. If it doesn't raise a ValueError then the string was an integer. A: You can cast the argument and try... except the ValueError. If you are using sys.argv, also investigate argparse.
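One caveat with the str.isdigit() suggestion in the first answer: it rejects signed numbers such as "-3", while the try/int approach from the later answers handles them. A small helper wrapping that idea (the function name is my own):

```python
def is_int(text):
    """Return True if `text` parses as a (possibly signed) integer."""
    try:
        int(text)
        return True
    except ValueError:
        return False

# int() accepts signs and surrounding whitespace; isdigit() does not.
print(is_int("42"), is_int("-7"), is_int("3.5"), is_int("foo"))
```

With this in place, validating sys.argv becomes a matter of checking len(sys.argv) and then calling is_int on the second and third arguments.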
Q: Creating a 2d matrix in python I create a 6x5 2d array, initially with just None in each cell. I then read a file and replace the Nones with data as I read them. I create the empty array first because the data is in an undefined order in the file I'm reading. On my first attempt I did this: x = [[None]*5]*6 which resulted in some weird errors that I now understand are because the * operator on lists may create references instead of copies. Is there an easy one liner to create this empty array? I could just do some for loops and build it up, but that seems needlessly verbose for Python. A: Using nested list comprehensions: x = [[None for _ in range(5)] for _ in range(6)] A: What's going on here is that the line x = [[None]*5]*6 expands out to x = [[None, None, None, None, None]]*6 At this point you have a list with 5 different references to the singleton None. You also have a list with a reference to the inner list as its first and only entry. When you multiply it by 6, you are getting 5 more references to the inner list as you understand. But the point is that there's no problem with the inner list, just the outer one, so there's no need to expand the construction of the inner lists out into a comprehension. x = [[None]*5 for _ in range(6)] This avoids duplicating references to any lists and is about as concise as it can readably get, I believe. A: If you aren't going the numpy route, you can fake 2D arrays with dictionaries: >>> x = dict( ((i,j),None) for i in range(5) for j in range(6) ) >>> print x[3,4] None
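The reference-sharing problem and its fix can be seen directly by mutating one cell and inspecting another row:

```python
bad = [[None] * 5] * 6           # 6 references to the SAME inner list
bad[0][0] = 'x'
print(bad[3][0])                 # 'x' -- every "row" changed at once

good = [[None] * 5 for _ in range(6)]  # 6 independent inner lists
good[0][0] = 'x'
print(good[3][0])                # None -- only row 0 changed
```

The inner `[None] * 5` is safe in both cases because None is immutable; it is only the *outer* repetition of a mutable list that aliases.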
Q: Python Tkinter listener in text box I would like to know how (if possible) to listen for a certain phrase or word that is entered in a text box and then run a command. For instance, if I type the phrase "turn me red", I would like to know if it is possible to turn it red without pressing enter. I just started and here is what I have: from Tkinter import * class mywidgets: def __init__(self,root): frame=Frame(root) frame.pack() self.txtfr(frame) return def txtfr(self,frame): #define a new frame and put a text area in it textfr=Frame(frame) self.text=Text(textfr,height=10,width=50,background='white') # put a scroll bar in the frame scroll=Scrollbar(textfr) self.text.configure(yscrollcommand=scroll.set) #pack everything self.text.pack(side=LEFT) scroll.pack(side=RIGHT,fill=Y) textfr.pack(side=TOP) return def main(): root = Tk() s=mywidgets(root) root.title('textarea') root.mainloop() main() A: So I thought it would be a little cleaner if, rather than editing your code, I just provided a fresh example of working code that exhibits the behavior you are interested in. Here's what the code below does: when you run it, you get a little widget with an empty text box (technically, a Label in Tkinter) for the user to supply some value. When they enter a numeric value (integer or float) and then click the Calculate button, the equivalent value in meters appears just below. If, however, the user keys in 'red', then the word 'blue' appears as soon as it is entered--i.e., blue will appear even though neither the Calculate button nor anything else was clicked. As you can see in the penultimate line below, getting the behavior you want is just a matter of describing the behavior you want in the Tkinter event syntax. from Tkinter import * import ttk root = Tk() def calculate(*args) : value = float(feet.get()) meters.set((0.305 * value * 10000. + .5)/10000.)
def callback_function(*args) : meters.set('blue') mf = ttk.Frame(root, padding="3 3 12 12") mf.grid(column=0, row=0, sticky=(N, W, E, S)) mf.columnconfigure(0, weight=1) mf.rowconfigure(0, weight=1) feet = StringVar() meters = StringVar() feet_entry = ttk.Entry(mf, width=7, textvariable=feet) feet_entry.grid(column=2, row=1, sticky=(W, E)) ttk.Label(mf, textvariable=meters, background='#E9D66B').grid(column=2, row=2, sticky=(W, E)) ttk.Button(mf, text="Calculate", command=calculate).grid(column=2,row=3, sticky=W) ttk.Label(mf, text="feet").grid(column=3, row=1, sticky=W) ttk.Label(mf, text="is equivalent to").grid(column=1, row=2, sticky=E) ttk.Label(mf, text="meters").grid(column=3, row=2, sticky=W) for child in mf.winfo_children(): child.grid_configure(padx=5, pady=5) feet_entry.focus() root.bind('<Return>', calculate) # this is the key line root.bind('red', callback_function) root.mainloop() A: What you want is certainly possible. The solution depends on what you really want. Do you want to turn something red only if the user types "turn me red" precisely? Or, if the text is "turn me blue" and they change the word "blue" to "red", will that trigger the action? If the former (must type exactly "turn me red") you can just bind to that exact sequence (eg: widget.bind("<t><u><r><n><space><m><e>....", doSomething)). It becomes impossible to manage, however, if you also want "Turn ME Red" to do the very same thing. If the latter (whenever you type anything it looks to see if "turn it red" surrounds the insertion point), it's a tiny bit more work. You can bind on <KeyRelease> and then look at the characters prior to the insertion point for the magic phrase. Bottom line is, you set up a binding either on something generic like <KeyRelease> then make the decision in the callback, or set up a highly specific binding for an exact phrase.
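Either way, the decision itself is plain string logic that can live outside the widget and be tested on its own. A sketch (the function name is my own; in the real app you would call this from a <KeyRelease> binding, passing the Text widget's contents and the index of the insertion cursor):

```python
PHRASE = "turn me red"


def phrase_before_cursor(text, cursor_index, phrase=PHRASE):
    """True if `phrase` immediately precedes the insertion point,
    ignoring case."""
    return text[:cursor_index].lower().endswith(phrase)


print(phrase_before_cursor("please turn me RED", 18))  # True
print(phrase_before_cursor("turn me blue", 12))        # False
```

When the check fires, the <KeyRelease> handler would then apply a red tag to the matched range with the Text widget's tag mechanism.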
Q: numpy magic to clean up function I have the following function in which I wish to interpolate from a table at a specified value. The trick is that the table is defined in a log-log sense such that straight lines between points in log-log are really exponential. Thus I can't really use any of the typical scipy interpolate routines. So here's what I have: PSD = np.array([[5.0, 0.001], [25.0, 0.03], [30.0, 0.03], [89.0, 0.321], [90.0, 1.0], [260.0, 1.0], [261.0, 0.03], [359.0, 0.03], [360.0, 0.5], [520.0, 0.5], [540.0, 0.25], [780.0, 0.25], [781.0, 0.03], [2000.0, 0.03]]) def W_F(freq): ''' A line connecting two points in a log-log plot are exponential ''' w_f = [] for f in freq: index = np.searchsorted(PSD[:,0], f) if index <= 0: w_f.append(PSD[:,1][0]) elif index + 1>= PSD.shape[0]: w_f.append(PSD[:,1][-1]) x0 = PSD[:,0][index-1] F0 = PSD[:,1][index-1] x1 = PSD[:,0][index] F1 = PSD[:,1][index] w_f.append(F0*(f/x0)**(math.log(F1/F0)/math.log(x1/x0))) return np.array(w_f) I'm looking for a better, cleaner, "numpy-ish" way to implement this A: The easiest way to go is to just take the logarithm of PSD and then use SciPy interpolation functions: logPSD = numpy.log(PSD) logW_F = scipy.interpolate.interp1d(logPSD[:,0], logPSD[:,1]) W_F = numpy.exp(logW_F(numpy.log(f))) This will throw an error for out-of-bounds values. To avoid the error, you could Pass bounds_error=False to the interp1d() function, see the documentation. Add an entry at the beginning and the end of PSD with a very small and very large x-value to capture all possible values. As an alternative to using interp1d(), it is possible to vectorise your code, but I would only do this for a reason.
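The identity the answer relies on is that linear interpolation in log space reproduces power laws exactly. A stdlib-only sketch of one segment makes that concrete (no SciPy needed; the function name is my own):

```python
import math


def loglog_interp(x, x0, y0, x1, y1):
    """Interpolate y at x between (x0, y0) and (x1, y1), linearly in
    log-log space. Assumes the two points bracket x and all values
    are positive."""
    t = (math.log(x) - math.log(x0)) / (math.log(x1) - math.log(x0))
    return math.exp(math.log(y0) + t * (math.log(y1) - math.log(y0)))


# A power law y = x**2 is reproduced exactly: interpolating at x = 3
# between (1, 1) and (10, 100) gives 9.
print(loglog_interp(3.0, 1.0, 1.0, 10.0, 100.0))
```

This is exactly what the interp1d-on-logPSD approach does segment by segment, which is why exponentiating its output recovers the table's "straight lines in log-log" behavior.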
Q: batch script or python program to edit string in xml tags I am looking to write a program that searches for the tags in an xml document and changes the string between the tags from localhost to manager. The tag might appear in the xml document multiple times, and the document does have a definite path. Would python or vbscript make the most sense for this problem? And can anyone provide a template so I can get started? That would be great. Thanks. A: You can solve this problem in almost all languages including Python and Vbscript. However, it will be nicer to have the script in python or other languages that have quite a number of XML processing libraries. If you are just searching for tags, you can use beautifulsoup. http://www.crummy.com/software/BeautifulSoup/documentation.html A: I'd use XSLT for this. How you invoke the XSLT is up to you, but libxslt comes with xsltproc. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="//sometag"> <sometag>manager</sometag> </xsl:template> </xsl:stylesheet> A: Whether to use vbscript or python is really a question of what makes sense given the environment you're in, the systems you're working on, and the requirements of your company/client. An xml document template may help either way, but I'd lean toward Python for parsing the xml directly, as a personal preference. Some helpful examples I started with can be found here: http://www.xml.com/pub/a/2002/12/11/py-xml.html Though they don't address your specific problem, they might help get you started. A: Here's a specific VB example that does exactly what you are asking for. It can easily be converted into VBScript and uses the MSXML2.DOMDocument COM object.
Dim doc Dim nlist Dim node Set doc = CreateObject("MSXML2.DOMDocument") doc.setProperty "SelectionLanguage", "XPath" doc.Load "c:\books.xml" Set nlist = doc.selectNodes("//book/Title[contains(.,'localhost')]") MsgBox "Matching Nodes : " & nlist.length For Each node In nlist WScript.Echo node.nodeName & " : " & node.Text Next Another way to do it would be a rather dirty way, but it would work. You can perform a simple string replace, replacing ">localhost<" with ">manager<". By including the ">" and "<" characters, it would ensure the XML node value was exactly "localhost". strXML = "<foo><bar>localhost</bar><bar2>localhost</bar2></foo>" WScript.echo Replace(strXML, ">localhost<", ">manager<") A: I was able to get this to work by using the vbscript solutions provided. The reason I hadn't committed to a Visual Basic script before was that I didn't think it was possible to execute this script remotely with PsExec. It turns out I solved this problem as well with the help of Server Fault. In case you are interested in how that works, cscript.exe is the command parameter of PsExec and the vbscript file serves as the argument of cscript. Thanks for all the help, everyone!
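For completeness, if the Python route had been chosen, the standard library's xml.etree.ElementTree can do the localhost-to-manager replacement in a few lines. The element names in this sample document are made up for illustration:

```python
import xml.etree.ElementTree as ET

xml_src = "<config><host>localhost</host><name>db</name><host>localhost</host></config>"
root = ET.fromstring(xml_src)

# Replace the text of every element whose value is exactly "localhost",
# wherever it appears in the tree.
for elem in root.iter():
    if elem.text == "localhost":
        elem.text = "manager"

result = ET.tostring(root, encoding="unicode")
print(result)
```

For a real file you would use ET.parse(path) and tree.write(path) instead of the fromstring/tostring round trip shown here.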
batch script or python program to edit string in xml tags
I am looking to write a program that searches for the tags in an xml document and changes the string between the tags from localhost to manager. The tag might appear in the xml document multiple times, and the document does have a definite path. Would python or vbscript make the most sense for this problem? And can anyone provide a template so I can get started? That would be great. Thanks.
[ "You can solve this problem in almost all languages including Python and Vbscript. \nHow ever it will be nicer to have the script in python or other languages that have quite a number of XML processing libraries. \nIf you are just searching for tags, you can use beautifulsoup.\n\nhttp://www.crummy.com/software/Beau...
[ 2, 2, 0, 0, 0 ]
[]
[]
[ "batch_file", "python", "scripting", "vbscript", "xml" ]
stackoverflow_0004198416_batch_file_python_scripting_vbscript_xml.txt
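Alongside the BeautifulSoup, XSLT, and VBScript suggestions above, the requested template can be sketched with Python's standard-library ElementTree; the tag name server and the sample document here are hypothetical stand-ins for the real file:

```python
import xml.etree.ElementTree as ET

xml_doc = "<config><server>localhost</server><server>localhost</server><port>8080</port></config>"
root = ET.fromstring(xml_doc)

# Walk every <server> element and swap the exact string "localhost" for "manager"
for node in root.iter("server"):
    if node.text == "localhost":
        node.text = "manager"

result = ET.tostring(root, encoding="unicode")
print(result)
```

For a real file on disk you would use `ET.parse(path)` and `tree.write(path)` instead of the string round-trip.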
Q: Hidden Multithreading Bottlenecks in Jython? What are some common hidden things that can bottleneck multithreading/parallelism in Jython? I have some parallel code (using Python's threading library) that won't scale past 3-4 CPUs, and I'm sure it's not because of any of these obvious pitfalls: Explicit locks Calling library code that requires synchronization (The algorithm I'm trying to parallelize is basically written from scratch and doesn't use any libraries.) Basically all the algorithm does is a bunch of string processing, list and dictionary lookups and math. My understanding is that, unlike CPython, Jython does not have a GIL. A: Accessing variables is one of those "hidden" bottlenecks. If all threads access some shared datastructure(s) there will be synchronization between the threads. Jython tries hard to achieve language compatibility with CPython. One thing that the GIL ensures is that access to local/global variables, object members, dict elements (technically locals, globals and object members are also dict elements) or even list elements are atomic. To avoid surprises for users Jython uses a concurrent hash map to implement dicts. This means that there is some synchronization going on when accessing any kind of dict elements in Jython. This synchronization is striped to support access to the dict from multiple threads without blocking them, but if multiple threads access the same variable they are going to hit the same lock. The best way to achieve scalability in Jython, and any other language, is to make sure that the data you are accessing in each thread is not accessed from other threads as well. A: Jython doesn't have a GIL, but it's pretty tough to get a lot of parallelism. If you have any part that can't be done in parallel, you get bitten by Amdahl's Law: The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program.
Moreover, even if you do purely parallel computation, you get bitten by other things, like straining your cache. Also remember that your code is running on top of a virtual machine, so even if your code is purely parallel, the JVM might have some internal coordination that holds you back (garbage collection is an obvious candidate). A: Have you tried any performance analysis packages? Even if they're not explicitly for Jython I bet they would provide some help. I'd try YourKit first if you have access to a license.
Hidden Multithreading Bottlenecks in Jython?
What are some common hidden things that can bottleneck multithreading/parallelism in Jython? I have some parallel code (using Python's threading library) that won't scale past 3-4 CPUs, and I'm sure it's not because of any of these obvious pitfalls: Explicit locks Calling library code that requires synchronization (The algorithm I'm trying to parallelize is basically written from scratch and doesn't use any libraries.) Basically all the algorithm does is a bunch of string processing, list and dictionary lookups and math. My understanding is that, unlike CPython, Jython does not have a GIL.
[ "Accessing variables is one of those \"hidden\" bottlenecks. If all threads access some shared datastructure(s) there will be synchronization between the threads.\nJython tries hard to achieve language compatibility with CPython. One thing that the GIL ensures is that access to local/global variables, object member...
[ 4, 3, 1 ]
[]
[]
[ "java", "jvm", "jython", "multithreading", "python" ]
stackoverflow_0004227269_java_jvm_jython_multithreading_python.txt
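The first answer's advice — keep each thread's working data private and merge only at the end — can be sketched in plain Python. The word-counting workload here is just an illustrative stand-in (and note that CPython's GIL still serializes CPU-bound threads, so the payoff of this sharing pattern shows up under Jython or with GIL-releasing workloads):

```python
import threading

def count_words(chunk, results, idx):
    # Accumulate locally; write once into a slot only this thread owns,
    # so no shared dict/list entry is contended between threads.
    total = 0
    for line in chunk:
        total += len(line.split())
    results[idx] = total

lines = ["a b c", "d e", "f", "g h i j"] * 100   # 1000 words in total
n_threads = 4
chunks = [lines[i::n_threads] for i in range(n_threads)]
results = [0] * n_threads

threads = [threading.Thread(target=count_words, args=(chunks[i], results, i))
           for i in range(n_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))
```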
Q: Avoiding django QuerySet caching in a @staticmethod The following few lines of code illustrate a distributed worker model that I use to crunch data. Jobs are being created in a database, their data goes onto the big drives, and once all information is available, the job status is set to 'WAITING'. From here, multiple active workers come into play: from time to time each of them issues a query, in which it attempts to "claim" a job. In order to synchronize the claims, the queries are encapsulated into a transaction that immediately changes the job state if the query returns a candidate. So far so good. The problem is that the call to claim only works the first time. Reading up on QuerySets and their caching behavior, it seems to me that combining static methods and QuerySet caching always falls back on the cache... see for yourselves: I have a class derived from django.db.models.Model: class Job(models.Model): [...] in which I define the following static function. @staticmethod @transaction.commit_on_success def claim(): # select the oldest, top priority job and # update its record jobs = Job.objects.filter(state__exact = 'WAITING').order_by('-priority', 'create_timestamp') if jobs.count() > 0: j = jobs[0] j.state = 'CLAIMED' j.save() logger.info('Job::claim: claimed %s' % j.name) return j return None Is there any obvious thing that I am doing wrong? What would be a better way of dealing with this? How can I make sure that the QuerySet does not cache its results across different invocations of the static method? Or am I missing something and chasing a phantom? Any help would be greatly appreciated... Thanks! A: Why not just have a plain module-level function claim_jobs() that would run the query? def claim_jobs(): jobs = Job.objects.filter(...) ... etc.
Avoiding django QuerySet caching in a @staticmethod
The following few lines of code illustrate a distributed worker model that I use to crunch data. Jobs are being created in a database, their data goes onto the big drives, and once all information is available, the job status is set to 'WAITING'. From here, multiple active workers come into play: from time to time each of them issues a query, in which it attempts to "claim" a job. In order to synchronize the claims, the queries are encapsulated into a transaction that immediately changes the job state if the query returns a candidate. So far so good. The problem is that the call to claim only works the first time. Reading up on QuerySets and their caching behavior, it seems to me that combining static methods and QuerySet caching always falls back on the cache... see for yourselves: I have a class derived from django.db.models.Model: class Job(models.Model): [...] in which I define the following static function. @staticmethod @transaction.commit_on_success def claim(): # select the oldest, top priority job and # update its record jobs = Job.objects.filter(state__exact = 'WAITING').order_by('-priority', 'create_timestamp') if jobs.count() > 0: j = jobs[0] j.state = 'CLAIMED' j.save() logger.info('Job::claim: claimed %s' % j.name) return j return None Is there any obvious thing that I am doing wrong? What would be a better way of dealing with this? How can I make sure that the QuerySet does not cache its results across different invocations of the static method? Or am I missing something and chasing a phantom? Any help would be greatly appreciated... Thanks!
[ "Why not just have a plain module-level function claim_jobs() that would run the query?\ndef claim_jobs():\n\n jobs = Job.objects.filter(...)\n ... etc.\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0004230510_django_django_models_python.txt
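The deeper issue in the question — several workers racing to claim the same row — is usually solved with a conditional UPDATE rather than a SELECT-then-save, whatever the ORM. Here is a hedged sketch of that pattern using only the standard sqlite3 module (the table and column names are made up for illustration; in Django, a filtered `QuerySet.update()` would play the same role):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, state TEXT, priority INTEGER)")
conn.executemany("INSERT INTO job (state, priority) VALUES (?, ?)",
                 [("WAITING", 1), ("WAITING", 5), ("DONE", 9)])

def claim_job(conn):
    """Claim the highest-priority WAITING job, or return None."""
    while True:
        row = conn.execute(
            "SELECT id FROM job WHERE state = 'WAITING' "
            "ORDER BY priority DESC LIMIT 1").fetchone()
        if row is None:
            return None
        # Re-check the state inside the UPDATE itself: if another worker got
        # here first, rowcount is 0 and we simply try the next candidate.
        cur = conn.execute(
            "UPDATE job SET state = 'CLAIMED' WHERE id = ? AND state = 'WAITING'",
            (row[0],))
        conn.commit()
        if cur.rowcount == 1:
            return row[0]

first = claim_job(conn)
second = claim_job(conn)
third = claim_job(conn)
print(first, second, third)
```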
Q: Does this python code employs Depth First Search (DFS) for finding all paths? This code is given in the official Python essays on graph theory. Here's the code: def find_all_paths(graph, start, end, path=[]): path = path + [start] if start == end: return [path] if not graph.has_key(start): return [] paths = [] for node in graph[start]: if node not in path: newpaths = find_all_paths(graph, node, end, path) for newpath in newpaths: paths.append(newpath) return paths I am not adept at python as I haven't yet had enough practice and reading in it. Can you please explain the code by relating this to the child-sibling concept in a DFS diagram? Thanks.
A: Yes, this algorithm is indeed a DFS. Notice how it recurses right away (going into the child) when looping over the various nodes, as opposed to a Breadth First Search, which would basically make a list of viable nodes (e.g. everything on the same level of depth, a.k.a. siblings) and only recurse when those do not match your requirements.
Does this python code employs Depth First Search (DFS) for finding all paths?
This code is given in python official essays on graph theory. Here's the code: def find_all_paths(graph, start, end, path=[]): path = path + [start] if start == end: return [path] if not graph.has_key(start): return [] paths = [] for node in graph[start]: if node not in path: newpaths = find_all_paths(graph, node, end, path) for newpath in newpaths: paths.append(newpath) return paths I am not adept at python as I haven't yet had enough of practicing and reading in it. Can you please explain the code by relating this to the child-sibling concept in DFS diagram? Thanks.
[ "The key to seeing that it is a DFS is that the recursion happens before the accumulation of paths. In other words the recursion will go as deep as it needs to go before putting anything on the \"paths\" list. All the deepest siblings are accumulated on \"paths\" before returning the list.\nI believe the code is ...
[ 4, 4, 1 ]
[]
[]
[ "depth_first_search", "graph_theory", "python" ]
stackoverflow_0004230878_depth_first_search_graph_theory_python.txt
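The code under discussion is Python 2 (`dict.has_key` was removed in Python 3). A runnable Python 3 version, with a small sample graph made up for the demo:

```python
def find_all_paths(graph, start, end, path=None):
    # Depth-first: recurse into a child before considering its siblings,
    # accumulating complete paths only once the recursion bottoms out.
    path = (path or []) + [start]
    if start == end:
        return [path]
    if start not in graph:
        return []
    paths = []
    for node in graph[start]:
        if node not in path:          # avoid revisiting nodes (cycles)
            paths.extend(find_all_paths(graph, node, end, path))
    return paths

graph = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"], "D": []}
paths = find_all_paths(graph, "A", "D")
print(paths)
```

Using `path=None` also sidesteps the mutable default argument (`path=[]`) of the original.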
Q: How to pass UTF-8 string from wx.TextCtrl to wx.ListCtrl If I enter Baltic characters in textctrl and click button test1 I have an error "UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)" Button test2 works fine. #!/usr/bin/python # -*- coding: UTF-8 -*- import wx class MyFrame(wx.Frame): def __init__(self, parent, id, title): wx.Frame.__init__(self, parent, id, title, (-1, -1), wx.Size(450, 300)) self.panel = wx.Panel(self) self.input_area = wx.TextCtrl(self.panel, -1, '',(5,5),(200,200), style=wx.TE_MULTILINE) self.output_list = wx.ListCtrl(self.panel, -1, (210,5), (200,200), style=wx.LC_REPORT) self.output_list.InsertColumn(0, 'column') self.output_list.SetColumnWidth(0, 100) self.btn1 = wx.Button(self.panel, -1, 'test1', (5,220)) self.btn1.Bind(wx.EVT_BUTTON, self.OnTest1) self.btn2 = wx.Button(self.panel, -1, 'test2', (100,220)) self.btn2.Bind(wx.EVT_BUTTON, self.OnTest2) self.Centre() def OnTest1(self, event): self.output_list.InsertStringItem(0,str(self.input_area.GetValue()).decode('utf-8')) def OnTest2(self, event): self.output_list.InsertStringItem(0,"ąčęėįš".decode('utf-8')) class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, 'encoding') frame.Show(True) return True app = MyApp(0) app.MainLoop() Update 1 I have tried this code on two Windows 7 Ultimate x64 computers. Both have python 2.7 and wxPython2.8 win64 unicode for python 2.7 On both machines I have the same error. A: Can't reproduce... If I try with Swedish characters "åäö" it seems to work, also when using "ąčęėįš". A locale problem? A: Are you using the unicode build of wxPython? You didn't mention your platform and other system details. A: Replace def OnTest1(self, event): self.output_list.InsertStringItem(0,str(self.input_area.GetValue()).decode('utf-8')) with def OnTest1(self, event): self.output_list.InsertStringItem(0,self.input_area.GetValue())
How to pass UTF-8 string from wx.TextCtrl to wx.ListCtrl
If I enter Baltic characters in textctrl and click button test1 I have an error "InicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)" Button test2 works fine. #!/usr/bin/python # -*- coding: UTF-8 -*- import wx class MyFrame(wx.Frame): def __init__(self, parent, id, title): wx.Frame.__init__(self, parent, id, title, (-1, -1), wx.Size(450, 300)) self.panel = wx.Panel(self) self.input_area = wx.TextCtrl(self.panel, -1, '',(5,5),(200,200), style=wx.TE_MULTILINE) self.output_list = wx.ListCtrl(self.panel, -1, (210,5), (200,200), style=wx.LC_REPORT) self.output_list.InsertColumn(0, 'column') self.output_list.SetColumnWidth(0, 100) self.btn1 = wx.Button(self.panel, -1, 'test1', (5,220)) self.btn1.Bind(wx.EVT_BUTTON, self.OnTest1) self.btn2 = wx.Button(self.panel, -1, 'test2', (100,220)) self.btn2.Bind(wx.EVT_BUTTON, self.OnTest2) self.Centre() def OnTest1(self, event): self.output_list.InsertStringItem(0,str(self.input_area.GetValue()).decode('utf-8')) def OnTest2(self, event): self.output_list.InsertStringItem(0,"ąčęėįš".decode('utf-8')) class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, 'encoding') frame.Show(True) return True app = MyApp(0) app.MainLoop() Update 1 I have tried this code on two Windows 7 Ultimate x64 computers. Both have python 2.7 and wxPython2.8 win64 unicode for python 2.7 In both machines I have the same error.
[ "can't reproduce... If I try with swedish caracters \"åäö\" it seems to work, also when using \"ąčęėįš\" locale problem?\n", "Are you using the unicode build of wxPython? You didn't mention your platform and other system details.\n", "Replace\ndef OnTest1(self, event):\n self.output_list.InsertStringItem...
[ 0, 0, 0 ]
[]
[]
[ "listctrl", "python", "textctrl", "utf_8", "wxpython" ]
stackoverflow_0004192744_listctrl_python_textctrl_utf_8_wxpython.txt
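What the accepted fix exploits: in Python 2, wrapping the unicode value in `str()` forced an implicit ASCII encode, which is exactly the reported UnicodeEncodeError. The same codec failure can be reproduced explicitly (shown here in Python 3 terms for clarity):

```python
text = "ąčęėįš"

# Forcing the text through the ASCII codec reproduces the error class
# from the question: non-ASCII characters have no ASCII byte value.
try:
    text.encode("ascii")
    failed = False
except UnicodeEncodeError as exc:
    failed = True
    reason = exc.reason

print(failed, reason)

# The fix in the accepted answer: pass GetValue()'s unicode result
# straight through, with no str(...)/decode('utf-8') round-trip.
```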
Q: pdflatex in a python subprocess on mac I'm trying to run pdflatex on a .tex file from a Python 2.4.4 subprocess (on a mac): import subprocess subprocess.Popen(["pdflatex", "fullpathtotexfile"], shell=True) which effectively does nothing. However, I can run "pdflatex fullpathtotexfile" in the terminal without issues, generating a pdf. What am I missing? [EDIT] As suggested in one of the answers, I tried: return_value = subprocess.call(['pdflatex', '/Users/Benjamin/Desktop/directory/ON.tex'], shell =False) which fails with: Traceback (most recent call last): File "/Users/Benjamin/Desktop/directory/generate_tex_files_v3.py", line 285, in -toplevel- return_value = subprocess.call(['pdflatex', '/Users/Benjamin/Desktop/directory/ON.tex'], shell =False) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/subprocess.py", line 413, in call return Popen(*args, **kwargs).wait() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/subprocess.py", line 543, in __init__ errread, errwrite) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/subprocess.py", line 975, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory The file does exist and I am able to run pdflatex /Users/Benjamin/Desktop/directory/ON.tex in the Terminal. Note that pdflatex does throw a good number of warnings... but that shouldn't matter, and this also gives the same error: return_value = subprocess.call(['pdflatex', '-interaction=batchmode', '/Users/Benjamin/Desktop/directory/ON.tex'], shell =False) A: Use the convenience function, subprocess.call. You don't need to use Popen here; call should suffice. For example: >>> import subprocess >>> return_value = subprocess.call(['pdflatex', 'textfile'], shell=False) # shell should be set to False If the call was successful, return_value will be set to 0, or else the command's non-zero exit status. Usage of Popen is typically for cases when you want to store the output.
For example, you want to check for the kernel release using the command uname and store it in some variable: >>> process = subprocess.Popen(['uname', '-r'], shell=False, stdout=subprocess.PIPE) >>> output = process.communicate()[0] >>> output '2.6.35-22-generic\n' Again, never set shell=True. A: You might want either: output = Popen(["pdflatex", "fullpathtotexfile"], stdout=PIPE).communicate()[0] print output or p = subprocess.Popen(["pdflatex" + " fullpathtotexfile"], shell=True) sts = os.waitpid(p.pid, 0)[1] (Shamelessly ripped from this subprocess doc page section ).
pdflatex in a python subprocess on mac
I'm trying to run pdflatex on a .tex file from a Python 2.4.4. subprocess (on a mac): import subprocess subprocess.Popen(["pdflatex", "fullpathtotexfile"], shell=True) which effectively does nothing. However, I can run "pdflatex fullpathtotexfile" in the terminal without issues, generating a pdf. What am I missing? [EDIT] As suggested in one of the answers, I tried: return_value = subprocess.call(['pdflatex', '/Users/Benjamin/Desktop/directory/ON.tex'], shell =False) which fails with: Traceback (most recent call last): File "/Users/Benjamin/Desktop/directory/generate_tex_files_v3.py", line 285, in -toplevel- return_value = subprocess.call(['pdflatex', '/Users/Benjamin/Desktop/directory/ON.tex'], shell =False) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/subprocess.py", line 413, in call return Popen(*args, **kwargs).wait() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/subprocess.py", line 543, in __init__ errread, errwrite) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/python2.4/subprocess.py", line 975, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory The file does exist and I am able to run pdflatex /Users/Benjamin/Desktop/directory/ON.tex in the Terminal. Note that pdflatex does throw a good number of warnings... but that shouldn't matter, and this also gives the same error: return_value = subprocess.call(['pdflatex', '-interaction=batchmode', '/Users/Benjamin/Desktop/directory/ON.tex'], shell =False)
[ "Use the convenience function, subprocess.call\nYou don't need to use Popen here, call should suffice. \nFor example:\n>>> import subprocess\n>>> return_value = subprocess.call(['pdflatex', 'textfile'], shell=False) # shell should be set to False\n\nIf the call was successful, return_value will be set to 0, or else...
[ 5, 1 ]
[]
[]
[ "macos", "pdflatex", "python", "subprocess" ]
stackoverflow_0004230926_macos_pdflatex_python_subprocess.txt
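The OSError in the question is typical of the spawned process not finding the program (for instance, the PATH seen by an IDE differing from the Terminal's); passing the full path to pdflatex, with shell=False and list-style arguments, is the usual cure. A sketch of the pattern — using sys.executable as the child process so it runs anywhere, since pdflatex itself isn't assumed to be installed here:

```python
import subprocess
import sys

# With shell=False the command is a list: [program, arg1, ...]. Using an
# absolute program path avoids PATH differences between your shell and
# whatever environment launched Python.
cmd = [sys.executable, "-c", "print('pdf generated')"]
proc = subprocess.Popen(cmd, shell=False,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()

print(proc.returncode, out.decode().strip())
```

For the real case, something like `cmd = ["/usr/texbin/pdflatex", "-interaction=batchmode", "/path/to/file.tex"]` — the pdflatex location varies by TeX distribution, so check it with `which pdflatex` in the Terminal first.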
Q: 3 Questions involving python and sqlite Ok, I'm creating functions for use with a game server. This server uses plugins. I have these functions which use a sqlite database, along with apsw to retrieve items stored by another function. I have 3 questions on this. Question One: I keep getting the error "SQLError: near "?": syntax error" Since my statement features multiple ?, it's proving hard to track down what exactly is wrong. So what is wrong? Question Two: I know about SQL-Injection, but these functions only take input from the runner of the script, and the only stuff he would be damaging is his own. Even so, is there an easy way to make this sql-injection proof? Question Three: Is there any way to make this function more efficient? Here is the function: EDIT: Here's what it looks like now: def readdb(self,entry,column,returncolumn = "id,matbefore,matafter,name,date"): self.memwrite if isinstance(entry, int) or isinstance(entry, str): statement = 'SELECT {0} FROM main WHERE {1} IN {2}'.format(returncolumn,column,entry) self.memcursor.execute(statement) blockinfo = self.memcursor.fetchall() return(blockinfo) if isinstance(entry, tuple) or isinstance(entry, list): statement = '''SELECT {0} FROM main WHERE {1} IN (%s)'''.format(returncolumn,column) self.memcursor.execute(statement % ("?," * len(entry))[:-1], entry) blockinfo = self.memcursor.fetchall() return(blockinfo) A: This is funny (read on to learn why). The first statement you have actually uses the value binding mechanism of the sqlite3 module (I assume that is what you use). Hence, the * (which is the default column) gets escaped, making the statement invalid. This is SQL-injection proof, and your own code tries to inject SQL (see the funny now?). The second time you use Python's string replacement in order to build the query string, which is not SQL-injection proof. A: Use names to get more informative error messages. For example I deliberately left out a comma with this: cur.execute("select ?
?,?", (1,2,3)) SQLError: near "?": syntax error Now with names: cur.execute("select :1 :2,:3", (1,2,3)) SQLError: near ":2": syntax error If you have a lot of bindings I'd recommend you switch to the named bindings style and pass a dictionary for the bindings themselves. cur.execute("select :artist, :painting, :date", {"artist": "Monster", "painting": "The Duck", "date": "10/10/2010" }) You can only use bindings for values, not for column or table names. There are several possible approaches. Although SQLite supports arbitrary column/table names, you can require that they are only ASCII alphanumeric text. If you want to be less restrictive then you need to quote the names. Use square brackets around a name that has double quotes in it and double quotes around a name that has square brackets in it. A name can't have both. The alternative to all that is using the authorizer mechanism. See Connection.setauthorizer for the API and a pointer to an example. In brief, your callback is called with the actions that will be taken, so for example you can reject anything that would write to the database. In terms of efficiency, you can improve things depending on how the caller uses the results. Cursors are very cheap. There is no need to try to reuse the same one over and over again and doing so can lead to subtle errors. SQLite only gets the next row of a result when you ask for it. By using fetchall you insist on building a list of all the results. If the list could be large or if you may stop part way through then just return db.cursor().execute("... query ..."). The caller should then use your function to iterate: for id,matbefore,matafter,name,date in readdb(...): ... do something ... In your place I would just junk this readdb function completely as it doesn't add any value and write queries directly: for id,foo,bar in db.cursor().execute("select id,foo,bar from .... where ...."): ... do something ... Your coding style indicates that you are fairly new to Python.
I strongly recommend looking up iterators and generators. It is a far simpler coding style producing and consuming results as they are needed. BTW this SQL creates a table with a zero length name, and columns named double quote and semicolon. SQLite functions just fine, but don't do this :-) It is however useful for testing. create table "" (["], ";"); Disclosure: I am the APSW author
3 Questions involving python and sqlite
Ok, I'm creating functions for use with a game server. This server uses plugins. I have these functions wich use a sqlite database, along with apsw to retrieve items stored by another function. I have 3 questions on this. Question One: I keep getting the error "SQLError: near "?": syntax error" Since my statement features multiple ?, its proving hard tot rack down what is exactly wrong.So what is wrong? Question Two: I know about SQL-Injection, but these functions only take input from the runner of the script, and the only stuff he would be damaging is his own. Even so, is there an easy way to make this sql-injection proof? Question Three: Is there any way to make this function more efficient? Here is the function: EDIT:Heres what it looks like now: def readdb(self,entry,column,returncolumn = "id,matbefore,matafter,name,date"): self.memwrite if isinstance(entry, int) or isinstance(entry, str): statement = 'SELECT {0} FROM main WHERE {1} IN {2}'.format(returncolumn,column,entry) self.memcursor.execute(statement) blockinfo = self.memcursor.fetchall() return(blockinfo) if isinstance(entry, tuple) or isinstance(entry, list): statement = '''SELECT {0} FROM main WHERE {1} IN (%s)'''.format(returncolumn,column) self.memcursor.execute(statement % ("?," * len(entry))[:-1], entry) blockinfo = self.memcursor.fetchall() return(blockinfo
[ "This is funny (read on to learn why). \nThe first statement you have actually uses the value binding mechanism of the sqlite3-module (I assume that is what you use). Hence, the * (which is the default column) gets escaped, making the statement invalid. This is SQL-injection proof, and your own code tries to inject...
[ 3, 0 ]
[]
[]
[ "database", "python", "sqlite" ]
stackoverflow_0004114708_database_python_sqlite.txt
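The placeholder-building trick from the question's second branch, shown end-to-end against an in-memory database (the table here is a stand-in for the game server's schema); note the answers' point that `?` can bind values only, never column or table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO main (name) VALUES (?)",
                 [("alice",), ("bob",), ("carol",)])

wanted = ["alice", "carol"]
# One "?" per value; join avoids the trailing-comma slicing in the question.
placeholders = ",".join("?" for _ in wanted)
sql = "SELECT id, name FROM main WHERE name IN (%s) ORDER BY id" % placeholders

rows = conn.execute(sql, wanted).fetchall()
print(rows)
```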
Q: sqlite / python - named parameters without enclosing quotes? When using prepared statements with named parameters in SQLite (specifically with the python sqlite3 module http://docs.python.org/library/sqlite3.html ) is there any way to include string values without getting quotes put around them ? I've got this : columnName = '''C1''' cur = cur.execute('''SELECT DISTINCT(:colName) FROM T1''', {'colName': columnName}) And it seems the SQL I end up with is this : SELECT DISTINCT('C1') FROM T1 which isn't much use of course, what I really want is : SELECT DISTINCT(C1) FROM T1 . Is there any way to prompt the execute method to interpret the supplied arguments in such a way that it doesn't wrap quotes around them ? I've written a little test program to explore this fully so for what it's worth here it is : import sys import sqlite3 def getDatabaseConnection(): DEFAULTDBPATH = ':memory:' conn = sqlite3.connect(DEFAULTDBPATH, detect_types=sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES) conn.text_factory = str return conn def initializeDBTables(conn): conn.execute(''' CREATE TABLE T1( id INTEGER PRIMARY KEY AUTOINCREMENT, C1 STRING);''') cur = conn.cursor() cur.row_factory = sqlite3.Row # fields by name for v in ['A','A','A','B','B','C']: cur.execute('''INSERT INTO T1 values (NULL, ?)''', v) columnName = '''C1''' cur = cur.execute('''SELECT DISTINCT(:colName) FROM T1''', {'colName': columnName}) #Should end up with three output rows, in #fact we end up with one for row in cur: print row def main(): conn = getDatabaseConnection() initializeDBTables(conn) if __name__ == '__main__': main() Would be interested to hear of any way of manipulating the execute method to allow this to work. A: In SELECT DISTINCT(C1) FROM T1 the C1 is not a string value, it is a piece of SQL code. The parameters (escaped in execute) are used to insert values, not pieces of code. A: You are using bindings, and bindings can only be used for values, not for table or column names.
You will have to use string interpolation/formatting to get the effect you want, but it does leave you open to SQL injection attacks if the column name came from an untrusted source. In that case you can sanitize the string (e.g. only allow alphanumerics) and use the authorizer interface to check no unexpected activity will happen.
sqlite / python - named parameters without enclosing quotes?
When using prepared statements with named parameters in SQLite (specifically with the python sqlite3 module http://docs.python.org/library/sqlite3.html ) is there anyway to include string values without getting quotes put around them ? I've got this : columnName = '''C1''' cur = cur.execute('''SELECT DISTINCT(:colName) FROM T1''', {'colName': columnName}) And it seems the SQL I end up with is this : SELECT DISTINCT('C1') FROM T1 which isn't much use of course, what I really want is : SELECT DISTINCT(C1) FROM T1 . Is there any way to prompt the execute method to interpret the supplied arguments in such a way that it doesn't wrap quotes around them ? I've written a little test program to explore this fully so for what it's worth here it is : import sys import sqlite3 def getDatabaseConnection(): DEFAULTDBPATH = ':memory:' conn = sqlite3.connect(DEFAULTDBPATH, detect_types=sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES) conn.text_factory = str return conn def initializeDBTables(conn): conn.execute(''' CREATE TABLE T1( id INTEGER PRIMARY KEY AUTOINCREMENT, C1 STRING);''') cur = conn.cursor() cur.row_factory = sqlite3.Row # fields by name for v in ['A','A','A','B','B','C']: cur.execute('''INSERT INTO T1 values (NULL, ?)''', v) columnName = '''C1''' cur = cur.execute('''SELECT DISTINCT(:colName) FROM T1''', {'colName': columnName}) #Should end up with three output rows, in #fact we end up with one for row in cur: print row def main(): conn = getDatabaseConnection() initializeDBTables(conn) if __name__ == '__main__': main() Would be interested to hear of anyway of manipulating the execute method to allow this to work.
[ "In SELECT DISTINCT(C1) FROM T1 the C1 is not a string value, it is a piece of SQL code. The parameters (escaped in execute) are used to insert values, not pieces of code.\n", "You are using bindings and bindings can only be used for values, not for table or column names. You will have to use string interpolatio...
[ 4, 0 ]
[]
[]
[ "named_parameters", "prepared_statement", "python", "sqlite" ]
stackoverflow_0004191438_named_parameters_prepared_statement_python_sqlite.txt
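Both answers can be demonstrated in a few lines with the standard sqlite3 module (the table mirrors the question's schema, reduced to essentials): binding the column name turns it into a string literal, so the whitelist-then-interpolate approach is the usual workaround for identifiers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (id INTEGER PRIMARY KEY, C1 TEXT)")
conn.executemany("INSERT INTO T1 (C1) VALUES (?)",
                 [("A",), ("A",), ("A",), ("B",), ("B",), ("C",)])

# Bound parameter: the literal string 'C1' is selected for every row,
# so DISTINCT collapses it to a single constant row.
literal_rows = conn.execute("SELECT DISTINCT :colName FROM T1",
                            {"colName": "C1"}).fetchall()

# Identifier: validate against a whitelist, then interpolate the name.
ALLOWED_COLUMNS = {"C1"}
col = "C1"
if col not in ALLOWED_COLUMNS:
    raise ValueError("unexpected column name: %r" % col)
distinct_rows = conn.execute(
    "SELECT DISTINCT %s FROM T1 ORDER BY %s" % (col, col)).fetchall()

print(literal_rows, distinct_rows)
```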
Q: string operation I have a string in which a word looks like this: "#<i><b>when</b></i>". I want only the word, without any tags. When I stripped "#" the word became "<i><b>when</b></i>", but when I stripped "<i>" the word became something like "b>when"<b>when</b>"? A: Slice it. >>> '#<i><b>when</b></i>'[4:-4] '<b>when</b>' A: Use regular expressions. >>> import re >>> s='#<i><b>when</b></i>' >>> wordPattern = re.compile(r'\>(\w+)\<') >>> wordPattern.search(s).groups() ('when',)
string operation
i have a string in which a word is like this "#<i><b>when</b></i>". i want only word without any tag. when in striped "#" word became "<i><b>when</b></i>". but when i striped "<i>" word became like "b>when"<b>when</b>"?
[ "Slice it.\n>>> '#<i><b>when</b></i>'[4:-4]\n'<b>when</b>'\n\n", "Use regular expressions.\n>>> import re\n>>> s='#<i><b>when</b></i>'\n>>> wordPattern = re.compile(r'\\>(\\w+)\\<')\n>>> wordPattern.search(s).groups()\n('when',)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "regex", "string" ]
stackoverflow_0004231362_python_regex_string.txt
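Two things worth making explicit about the question: `str.strip` removes any of the given *characters* from both ends (not the substring), which is why stripping "<i>" mangled the text, and a small regex removes all the tags at once. A quick sketch:

```python
import re

s = "#<i><b>when</b></i>"

# strip("<i>") treats "<", "i", ">" as a *set* of characters to peel off
# both ends -- it does not remove the substring "<i>".
mangled = "<i><b>when</b></i>".strip("<i>")

# Removing every tag, then the leading "#", recovers just the word.
word = re.sub(r"<[^>]+>", "", s).lstrip("#")

print(mangled, word)
```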
Q: How to implement database-style table in Python I am implementing a class that resembles a typical database table: has named columns and unnamed rows has a primary key by which I can refer to the rows supports retrieval and assignment by primary key and column title can be asked to add unique or non-unique index for any of the columns, allowing fast retrieval of a row (or set of rows) which have a given value in that column removal of a row is fast and is implemented as "soft-delete": the row is kept physically, but is marked for deletion and won't show up in any subsequent retrieval operations addition of a column is fast rows are rarely added columns are rarely deleted I decided to implement the class directly rather than use a wrapper around sqlite. What would be a good data structure to use? Just as an example, one approach I was thinking about is a dictionary. Its keys are the values in the primary key column of the table; its values are the rows implemented in one of these ways: As lists. Column numbers are mapped into column titles (using a list for one direction and a map for the other). Here, a retrieval operation would first convert column title into column number, and then find the corresponding element in the list. As dictionaries. Column titles are the keys of this dictionary. Not sure about the pros/cons of the two. The reasons I want to write my own code are: I need to track row deletions. That is, at any time I want to be able to report which rows were deleted and for what "reason" (the "reason" is passed to my delete method). I need some reporting during indexing (e.g., while a non-unique index is being built, I want to check certain conditions and report if they are violated) A: I would consider building a dictionary with keys that are tuples or lists. Eg: my_dict[("col_2", "row_24")] would get you this element.
Starting from there, it would be pretty easy (if not extremely fast for very large databases) to write 'get_col' and 'get_row' methods, as well as 'get_row_slice' and 'get_col_slice' from the 2 preceding ones to gain access to your methods. Using a whole dictionary like that will have 2 advantages. 1) Getting a single element will be faster than your 2 proposed methods; 2) If you want to have different number of elements (or missing elements) in your columns, this will make it extremely easy and memory efficient. Just a thought :) I'll be curious to see what packages people will suggest! Cheers A: You might want to consider creating a class which uses an in-memory sqlite table under the hood: import sqlite3 class MyTable(object): def __init__(self): self.conn=sqlite3.connect(':memory:') self.cursor=self.conn.cursor() sql='''\ CREATE TABLE foo ... ''' self.execute(sql) def execute(self,sql,args): self.cursor.execute(sql,args) def delete(self,id,reason): sql='UPDATE table SET softdelete = 1, reason = %s where tableid = %s' self.cursor.execute(sql,(reason,id,)) def verify(self): # Check that certain conditions are true # Report (or raise exception?) if violated def build_index(self): self.verify() ... Soft-delete can be implemented by having a softdelete column (of bool type). Similarly, you can have a column to store reason for deletion. Undeleting would simply involve updating the row and changing the softdelete value. Selecting rows that have not been deleted could be achieved with the SQL condition WHERE softdelete != 1. You could write a verify method to verify conditions on your data are satisfied. And you could call that method from within your build_index method. Another alternative is to use a numpy structured masked array. It's hard to say what would be fastest. Perhaps the only sure way to tell would be to write code for each and benchmark on real-world data with timeit. A: You really should use SQLite. 
For your first reason (tracking deletion reasons) you can easily implement this by having a second table that you "move" rows to on deletion. The reason can be tracked in additional column in that table or another table you can join. If a deletion reason isn't always required then you can even use triggers on your source table to copy rows about to be deleted, and/or have a user defined function that can get the reason. The indexing reason is somewhat covered by constraints etc but I can't directly address it without more details.
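The dict-of-dicts design discussed above can be made concrete. The class, method, and field names below are illustrative only (none of them come from the question's actual code); the sketch shows rows keyed by primary key with column-title dictionaries as values, plus the soft-delete/reason tracking the questioner asked for:

```python
class SoftDeleteTable:
    """Dict-of-dicts table: rows keyed by primary key, columns by title."""

    def __init__(self, columns):
        self.columns = list(columns)
        self.rows = {}      # primary key -> {column title: value}
        self.deleted = {}   # primary key -> reason for deletion

    def insert(self, key, row):
        self.rows[key] = dict(row)

    def get(self, key, column):
        if key in self.deleted:
            raise KeyError("row %r deleted: %s" % (key, self.deleted[key]))
        return self.rows[key][column]

    def delete(self, key, reason):
        # Soft delete: the data stays in self.rows, only the marker is set.
        self.deleted[key] = reason

    def deletions(self):
        # Report which rows were deleted and for what reason.
        return dict(self.deleted)


t = SoftDeleteTable(["name", "age"])
t.insert(1, {"name": "ada", "age": 36})
print(t.get(1, "name"))   # ada
t.delete(1, "duplicate entry")
print(t.deletions())      # {1: 'duplicate entry'}
```

Retrieval by column title is a plain dict lookup here; per-column indexes (with the verification hooks mentioned in the question) could be added as extra dicts mapping column values to sets of primary keys.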
How to implement database-style table in Python
I am implementing a class that resembles a typical database table: has named columns and unnamed rows has a primary key by which I can refer to the rows supports retrieval and assignment by primary key and column title can be asked to add a unique or non-unique index for any of the columns, allowing fast retrieval of a row (or set of rows) which have a given value in that column removal of a row is fast and is implemented as "soft-delete": the row is kept physically, but is marked for deletion and won't show up in any subsequent retrieval operations addition of a column is fast rows are rarely added columns are rarely deleted I decided to implement the class directly rather than use a wrapper around sqlite. What would be a good data structure to use? Just as an example, one approach I was thinking about is a dictionary. Its keys are the values in the primary key column of the table; its values are the rows implemented in one of these ways: As lists. Column numbers are mapped into column titles (using a list for one direction and a map for the other). Here, a retrieval operation would first convert column title into column number, and then find the corresponding element in the list. As dictionaries. Column titles are the keys of this dictionary. Not sure about the pros/cons of the two. The reasons I want to write my own code are: I need to track row deletions. That is, at any time I want to be able to report which rows were deleted and for what "reason" (the "reason" is passed to my delete method). I need some reporting during indexing (e.g., while a non-unique index is being built, I want to check certain conditions and report if they are violated)
[ "I would consider building a dictionary with keys that are tuples or lists. Eg: my_dict((\"col_2\", \"row_24\")) would get you this element. Starting from there, it would be pretty easy (if not extremely fast for very large databases) to write 'get_col' and 'get_row' methods, as well as 'get_row_slice' and 'get_col...
[ 2, 2, 0 ]
[]
[]
[ "data_structures", "implementation", "performance", "python" ]
stackoverflow_0004188202_data_structures_implementation_performance_python.txt
Q: trouble getting pylint to find inherited methods in pylons/SA models I have a Pylons app that I'm using SqlAlchemy declarative models for. In order to make the code a bit cleaner I add a .query onto the SA Base and inherit all my models from that. So in my app.model.meta I have Base = declarative_base() metadata = Base.metadata Session = scoped_session(sessionmaker()) Base.query = Session.query_property(Query) I then inherit this into app.model.mymodel and declare it as a child of meta.Base. This lets me write my queries as mymodel.query.filter(mymodel.id == 3).all() The trouble is that pylint is not seeing .query as a valid attribute of my models. E:102:JobCounter.reset_count: Class 'JobCounter' has no 'query' member Obviously this error is all over the place since it occurs on any model doing any query. I don't want to just skip the error because it might point out something down the road on non-orm classes, but I must be missing something for pylint to accept this. Any hints? A: Best I could find for this is to pass pylint a list of classes to ignore this check on. It'll still do other checks for these classes, you'll just have to maintain a list of these somewhere: pylint --ignored-classes=MyModel1,MyModel2 myfile.py I know it's not ideal, but there's something about the way that sqlalchemy sets up the models that confuses pylint. At least with this you still get the check for non-orm classes.
trouble getting pylint to find inherited methods in pylons/SA models
I have a Pylons app that I'm using SqlAlchemy declarative models for. In order to make the code a bit cleaner I add a .query onto the SA Base and inherit all my models from that. So in my app.model.meta I have Base = declarative_base() metadata = Base.metadata Session = scoped_session(sessionmaker()) Base.query = Session.query_property(Query) I then inherit this into app.model.mymodel and declare it as a child of meta.Base. This lets me write my queries as mymodel.query.filter(mymodel.id == 3).all() The trouble is that pylint is not seeing .query as a valid attribute of my models. E:102:JobCounter.reset_count: Class 'JobCounter' has no 'query' member Obviously this error is all over the place since it occurs on any model doing any query. I don't want to just skip the error because it might point out something down the road on non-orm classes, but I must be missing something for pylint to accept this. Any hints?
[ "Best I could find for this is to pass pylint a list of classes to ignore this check on. It'll still do other checks for these classes, you'll just have to maintain a list of these somewhere:\npylint --ignored-classes=MyModel1,MyModel2 myfile.py\nI know it's not ideal, but there's something about the way that sqla...
[ 8 ]
[]
[]
[ "pylint", "pylons", "python", "sqlalchemy" ]
stackoverflow_0004061720_pylint_pylons_python_sqlalchemy.txt
Q: Retrieving only HTTP header without the content in Python Possible Duplicate: How do you send a HEAD HTTP request in Python? I am using Python's urllib and urllib2 to do an automated login. I am also using HTTPCookieProcessor to automate the handling of the cookies. The code is somewhat like this: o = urllib2.build_opener( urllib2.HTTPCookieProcessor() ) # assuming the site expects 'user' and 'pass' as query params p = urllib.urlencode( { 'username': 'me', 'password': 'mypass' } ) # perform login with params f = o.open( 'http://www.mysite.com/login/', p ) data = f.read() f.close() # second request t = o.open( 'http://www.mysite.com/protected/area/' ) data = t.read() t.close() Now, the point is that I don't want to waste bandwidth in downloading the contents of http://www.mysite.com/login/, since all I want to do is receive the cookies (which are there in the Headers). Also, the site redirects me to http://www.mysite.com/userprofile when I first login (that is, the f.geturl() = http://www.mysite.com/userprofile). So is there any way that I can avoid fetching the content in the first request? P.S. Please don't ask me why am I avoiding the small network usage of transferring the content. Although the content is small, I still don't want to download it. A: Just send a HEAD request instead of a GET request. You can use Python's httplib to do that. Something like this: import httplib, urllib creds = urllib.urlencode({ 'username': 'me', 'password': 'mypass' }); connection = httplib.HTTPConnection("www.mysite.com") connection.request("HEAD", "/login/", creds) response = connection.getresponse() print response.getheaders()
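For reference, in Python 3 the same idea no longer needs httplib (renamed http.client): urllib.request.Request accepts a method argument. The host below is the question's placeholder, and actually opening the request would need a live server, so this sketch only constructs it:

```python
from urllib.request import Request

# urllib2/httplib became urllib.request/http.client in Python 3.
# www.mysite.com is the placeholder host from the question.
req = Request("http://www.mysite.com/login/", method="HEAD")
print(req.get_method())  # HEAD

# Sending it (needs a reachable server):
#   from urllib.request import urlopen
#   resp = urlopen(req)
#   print(resp.headers)   # headers arrive; a HEAD response carries no body
```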
Retrieving only HTTP header without the content in Python
Possible Duplicate: How do you send a HEAD HTTP request in Python? I am using Python's urllib and urllib2 to do an automated login. I am also using HTTPCookieProcessor to automate the handling of the cookies. The code is somewhat like this: o = urllib2.build_opener( urllib2.HTTPCookieProcessor() ) # assuming the site expects 'user' and 'pass' as query params p = urllib.urlencode( { 'username': 'me', 'password': 'mypass' } ) # perform login with params f = o.open( 'http://www.mysite.com/login/', p ) data = f.read() f.close() # second request t = o.open( 'http://www.mysite.com/protected/area/' ) data = t.read() t.close() Now, the point is that I don't want to waste bandwidth in downloading the contents of http://www.mysite.com/login/, since all I want to do is receive the cookies (which are there in the Headers). Also, the site redirects me to http://www.mysite.com/userprofile when I first login (that is, the f.geturl() = http://www.mysite.com/userprofile). So is there any way that I can avoid fetching the content in the first request? P.S. Please don't ask me why am I avoiding the small network usage of transferring the content. Although the content is small, I still don't want to download it.
[ "Just send a HEAD request instead of a GET request. You can use Python's httplib to do that. \nSomething like this:\n import httplib, urllib\n creds = urllib.urlencode({ 'username': 'me', 'password': 'mypass' });\n connection = httplib.HTTPConnection(\"www.mysite.com\")\n connection.request(\"HEAD\", \"/logi...
[ 1 ]
[]
[]
[ "python", "urllib2" ]
stackoverflow_0004231469_python_urllib2.txt
Q: pyparsing isn't nesting list ... why? For some reason, pyparsing isn't nesting the list for my string: rank = oneOf("2 3 4 5 6 7 8 9 T J Q K A") suit = oneOf("h c d s") card = rank + Optional(suit) suit_filter = oneOf("z o") hand = card + card + Optional(suit_filter) greater = Literal("+") through = Literal("-") series = hand + Optional(greater | through + hand) series_split = Literal(",") hand_range = series + ZeroOrMore(series_split + series) hand_range.parseString('22+,AKo-ATo,KQz') >> ['2', '2', '+', ',', 'A', 'K', 'o', '-', 'A', 'T', 'o', ',', 'K', 'Q', 'z'] I'm not sure why the pyparsing isn't creating lists around 22+, AKo-ATo, and KQz (or any layers deeper than that). What am I missing? A: Pyparsing isn't grouping these tokens because you didn't tell it to. Pyparsing's default behavior is to simply string together all matched tokens into a single list. To get grouping of your tokens, wrap the expressions in your parser that are to be grouped in a pyparsing Group expression. In your case, change series from: series = hand + Optional(greater | through + hand) to series = Group(hand + Optional(greater | through + hand)) Also, I recommend that you not implement your own comma-delimited list as you have done in series, but instead use the pyparsing helper, delimitedList: hand_range = delimitedList(series) delimitedList assumes comma delimiters, but any character (or even complete pyparsing expression) can be given as the delim argument. The delimiters themselves are suppressed from the results, as delimitedList assumes that the delimiters are there simply as separators between the important bits, the list elements. After making these two changes, the parse results now start to look more like what you are asking for: [['2', '2', '+'], ['A', 'K', 'o', '-', 'A', 'T', 'o'], ['K', 'Q', 'z']] I'm guessing that you might also want to put Group around the hand definition, to structure those results as well. 
If this is an expression that will be evaluated in some way (like a poker hand), then please look at these examples on the pyparsing wiki, which use classes as parse actions to construct objects that can be evaluated for rank or boolean value or whatever. http://pyparsing.wikispaces.com/file/view/invRegex.py http://pyparsing.wikispaces.com/file/view/simpleBool.py http://pyparsing.wikispaces.com/file/view/eval_arith.py If you construct objects for these expressions, then you won't need to use Group.
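Putting both suggested changes together, the full parser reads as below (a sketch assuming the classic pyparsing API used in the question; newer pyparsing releases keep these names as compatibility aliases):

```python
from pyparsing import Group, Literal, Optional, delimitedList, oneOf

rank = oneOf("2 3 4 5 6 7 8 9 T J Q K A")
suit = oneOf("h c d s")
card = rank + Optional(suit)
suit_filter = oneOf("z o")
hand = card + card + Optional(suit_filter)
greater = Literal("+")
through = Literal("-")
# Group wraps each series in its own sublist; delimitedList handles the
# comma-separated repetition and suppresses the commas themselves.
series = Group(hand + Optional(greater | through + hand))
hand_range = delimitedList(series)

print(hand_range.parseString('22+,AKo-ATo,KQz').asList())
# [['2', '2', '+'], ['A', 'K', 'o', '-', 'A', 'T', 'o'], ['K', 'Q', 'z']]
```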
pyparsing isn't nesting list ... why?
For some reason, pyparsing isn't nesting the list for my string: rank = oneOf("2 3 4 5 6 7 8 9 T J Q K A") suit = oneOf("h c d s") card = rank + Optional(suit) suit_filter = oneOf("z o") hand = card + card + Optional(suit_filter) greater = Literal("+") through = Literal("-") series = hand + Optional(greater | through + hand) series_split = Literal(",") hand_range = series + ZeroOrMore(series_split + series) hand_range.parseString('22+,AKo-ATo,KQz') >> ['2', '2', '+', ',', 'A', 'K', 'o', '-', 'A', 'T', 'o', ',', 'K', 'Q', 'z'] I'm not sure why the pyparsing isn't creating lists around 22+, AKo-ATo, and KQz (or any layers deeper than that). What am I missing?
[ "Pyparsing isn't grouping these tokens because you didn't tell it to. Pyparsing's default behavior is to simply string together all matched tokens into a single list. To get grouping of your tokens, wrap the expressions in your parser that are to be grouped in a pyparsing Group expression. In your case, change se...
[ 8 ]
[]
[]
[ "pyparsing", "python" ]
stackoverflow_0004231349_pyparsing_python.txt
Q: Python - NumPy - tuples as elements of an array I'm a CS major in university working on a programming project for my Calc III course involving singular-value decomposition. The idea is basically to convert an image of m x n dimensions into an m x n matrix wherein each element is a tuple representing the color channels (r, g, b) of the pixel at point (m, n). I'm using Python because it's the only language I've really been (well-)taught so far. From what I can tell, Python generally doesn't like tuples as elements of an array. I did a little research of my own and found a workaround, namely, pre-allocating the array as follows: def image_to_array(): #converts an image to an array aPic = loadPicture("zorak_color.gif") ph = getHeight(aPic) pw = getWidth(aPic) anArray = zeros((ph,pw), dtype='O') for h in range(ph): for w in range(pw): p = getPixel(aPic, w, h) anArray[h][w] = (getRGB(p)) return anArray This worked correctly for the first part of the assignment, which was simply to convert an image to a matrix (no linear algebra involved). The part with SVD, though, is where it gets trickier. When I call the built-in numPy svd function, using the array I built from my image (where each element is a tuple), I get the following error: Traceback (most recent call last): File "<pyshell#5>", line 1, in -toplevel- svd(x) File "C:\Python24\Lib\site-packages\numpy\linalg\linalg.py", line 724, in svd a = _fastCopyAndTranspose(t, a) File "C:\Python24\Lib\site-packages\numpy\linalg\linalg.py", line 107, in _fastCopyAndTranspose cast_arrays = cast_arrays + (_fastCT(a.astype(type)),) ValueError: setting an array element with a sequence. This is the same error I was getting initially, before I did some research and found that I could pre-allocate my arrays to allow tuples as elements. 
The issue now is that I am only in my first semester of (college-level) programming, and these numPy functions written by and for professional programmers are a little too black-box for me (though I'm sure they're much clearer to those with experience). So editing these functions to allow for tuples is a bit more complicated than when I did it on my own function. Where do I need to go from here? I assume I should copy the relevant numPy functions into my own program, and modify them accordingly? Thanks in advance. A: Instead of setting the array element type to 'O' (object) you should set it to a tuple. See the SciPy manual for some examples. In your case, easiest is to use something like a = zeros((ph,pw), dtype=(float,3)) Assuming your RGB values are tuples of 3 floating point numbers. This is similar to creating a 3d array (as Steve suggested) and, in fact, the tuple elements are accessed as a[n,m][k] or z[n,m,k] where k is the element in the tuple. Of course, the SVD is defined for 2d matrices and not 3d arrays so you cannot use linalg.svd(a). You have to decide SVD of what matrix (of the three possible ones: R G and B) you need. If, for example, you want the SVD of the "R" matrix (assuming that is the first element of the tuple) use something like: linalg.svd(a[:,:,1]) A: I think you want a ph by pw by 3 numpy array. anArray = zeros((ph,pw,3)) for h in range(ph): for w in range(pw): p = getPixel(aPic, w, h) anArray[h][w] = getRGB(p) You just need to make sure getRGB returns a 3-element list instead of a tuple.
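A small sketch of the (ph, pw, 3) layout that both answers converge on, with the SVD taken per channel; the array here is filled with dummy values rather than real pixel data:

```python
import numpy as np

ph, pw = 4, 5
# ph-by-pw-by-3 float array standing in for the image (dummy data).
an_array = np.zeros((ph, pw, 3))
an_array[2, 3] = (0.1, 0.5, 0.9)   # assign a whole (r, g, b) "tuple" at once
print(an_array[2, 3, 1])           # 0.5 -- the green channel of that pixel

# SVD is defined for 2-D matrices, so decompose one channel at a time.
red = an_array[:, :, 0]
U, s, Vt = np.linalg.svd(red, full_matrices=False)
print(U.shape, s.shape, Vt.shape)      # (4, 4) (4,) (4, 5)
print(np.allclose((U * s) @ Vt, red))  # True: U * diag(s) * Vt rebuilds red
```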
Python - NumPy - tuples as elements of an array
I'm a CS major in university working on a programming project for my Calc III course involving singular-value decomposition. The idea is basically to convert an image of m x n dimensions into an m x n matrix wherein each element is a tuple representing the color channels (r, g, b) of the pixel at point (m, n). I'm using Python because it's the only language I've really been (well-)taught so far. From what I can tell, Python generally doesn't like tuples as elements of an array. I did a little research of my own and found a workaround, namely, pre-allocating the array as follows: def image_to_array(): #converts an image to an array aPic = loadPicture("zorak_color.gif") ph = getHeight(aPic) pw = getWidth(aPic) anArray = zeros((ph,pw), dtype='O') for h in range(ph): for w in range(pw): p = getPixel(aPic, w, h) anArray[h][w] = (getRGB(p)) return anArray This worked correctly for the first part of the assignment, which was simply to convert an image to a matrix (no linear algebra involved). The part with SVD, though, is where it gets trickier. When I call the built-in numPy svd function, using the array I built from my image (where each element is a tuple), I get the following error: Traceback (most recent call last): File "<pyshell#5>", line 1, in -toplevel- svd(x) File "C:\Python24\Lib\site-packages\numpy\linalg\linalg.py", line 724, in svd a = _fastCopyAndTranspose(t, a) File "C:\Python24\Lib\site-packages\numpy\linalg\linalg.py", line 107, in _fastCopyAndTranspose cast_arrays = cast_arrays + (_fastCT(a.astype(type)),) ValueError: setting an array element with a sequence. This is the same error I was getting initially, before I did some research and found that I could pre-allocate my arrays to allow tuples as elements. 
The issue now is that I am only in my first semester of (college-level) programming, and these numPy functions written by and for professional programmers are a little too black-box for me (though I'm sure they're much clearer to those with experience). So editing these functions to allow for tuples is a bit more complicated than when I did it on my own function. Where do I need to go from here? I assume I should copy the relevant numPy functions into my own program, and modify them accordingly? Thanks in advance.
[ "Instead of setting the array element type to 'O' (object) you should set it to a tuple. See the SciPy manual for some examples.\nIn your case, easiest is to use something like\na = zeros((ph,pw), dtype=(float,3))\n\nAssuming your RGB values are tuples of 3 floating point numbers.\nThis is similar to creating a 3d ...
[ 15, 3 ]
[]
[]
[ "arrays", "linear_algebra", "numpy", "python" ]
stackoverflow_0004231190_arrays_linear_algebra_numpy_python.txt
Q: How to find a relative URL and translate it to an absolute URL in Python I extract some code from a web page (http://www.opensolaris.org/os/community/on/flag-days/all/) like follows, <tr class="build"> <th colspan="0">Build 110</th> </tr> <tr class="arccase project flagday"> <td>Feb-25</td> <td></td> <td></td> <td></td> <td> <a href="../pages/2009022501/">Flag Day and Heads Up: Power Aware Dispatcher and Deep C-States</a><br /> cpupm keyword mode extensions - <a href="/os/community/arc/caselog/2008/777/">PSARC/2008/777</a><br /> CPU Deep Idle Keyword - <a href="/os/community/arc/caselog/2008/663/">PSARC/2008/663</a><br /> </td> </tr> and there are some relative url paths in it, now I want to search it with regular expressions and replace them with absolute url paths. Since I know urljoin can do the replace work like that, >>> urljoin("http://www.opensolaris.org/os/community/on/flag-days/all/", ... "/os/community/arc/caselog/2008/777/") 'http://www.opensolaris.org/os/community/arc/caselog/2008/777/' Now I want to know how to search them using regular expressions, and finally translate the code to, <tr class="build"> <th colspan="0">Build 110</th> </tr> <tr class="arccase project flagday"> <td>Feb-25</td> <td></td> <td></td> <td></td> <td> <a href="http://www.opensolaris.org/os/community/on/flag-days/all//pages/2009022501/">Flag Day and Heads Up: Power Aware Dispatcher and Deep C-States</a><br /> cpupm keyword mode extensions - <a href="http://www.opensolaris.org/os/community/arc/caselog/2008/777/">PSARC/2008/777</a><br /> CPU Deep Idle Keyword - <a href="http://www.opensolaris.org/os/community/arc/caselog/2008/663/">PSARC/2008/663</a><br /> </td> </tr> My knowledge of regular expressions is so poor that I want to know how to do that. Thanks I have finished the work using Beautiful Soup, haha~ Thx for everybody! 
A: I'm not sure about what you're trying to achieve but using the BASE tag in HTML may do this trick for you without having to resort to regular expressions when doing the processing. A: First, I'd recommend using a HTML parser, such as BeautifulSoup. HTML is not a regular language, and thus can't be parsed fully by regular expressions alone. Parts of HTML can be parsed though. If you don't want to use a full HTML parser, you could use something like this to approximate the work: import re, urlparse find_re = re.compile(r'\bhref\s*=\s*("[^"]*"|\'[^\']*\'|[^"\'<>=\s]+)') def fix_urls(document, base_url): ret = [] last_end = 0 for match in find_re.finditer(document): url = match.group(1) if url[0] in "\"'": url = url.strip(url[0]) parsed = urlparse.urlparse(url) if parsed.scheme == parsed.netloc == '': #relative to domain url = urlparse.urljoin(base_url, url) ret.append(document[last_end:match.start(1)]) ret.append('"%s"' % (url,)) last_end = match.end(1) ret.append(document[last_end:]) return ''.join(ret) Example: >>> document = '''<tr class="build"><th colspan="0">Build 110</th></tr> <tr class="arccase project flagday"><td>Feb-25</td><td></td><td></td><td></td><td><a href="../pages/2009022501/">Flag Day and Heads Up: Power Aware Dispatcher and Deep C-States</a><br />cpupm keyword mode extensions - <a href="/os/community/arc/caselog/2008/777/">PSARC/2008/777</a><br /> CPU Deep Idle Keyword - <a href="/os/community/arc/caselog/2008/663/">PSARC/2008/663</a><br /></td></tr>''' >>> fix_urls(document,"http://www.opensolaris.org/os/community/on/flag-days/all/") '<tr class="build"><th colspan="0">Build 110</th></tr> <tr class="arccase project flagday"><td>Feb-25</td><td></td><td></td><td></td><td><a href="http://www.opensolaris.org/os/community/on/flag-days/pages/2009022501/">Flag Day and Heads Up: Power Aware Dispatcher and Deep C-States</a><br />cpupm keyword mode extensions - <a href="http://www.opensolaris.org/os/community/arc/caselog/2008/777/">PSARC/2008/777</a><br 
/> CPU Deep Idle Keyword - <a href="http://www.opensolaris.org/os/community/arc/caselog/2008/663/">PSARC/2008/663</a><br /></td></tr>' >>> A: Don't use regular expressions to parse HTML. Use a real parser for that. For example BeautifulSoup. A: this isn't elegant, but does the job: import re from urlparse import urljoin relative_urls_re = re.compile('(<\s*a[^>]+href\s*=\s*["\']?)(?!http)([^"\'>]+)', re.IGNORECASE) relative_urls_re.sub(lambda m: m.group(1) + urljoin(base_url, m.group(2)), html) A: Something like this should do it: "(?:[^/:"]+|/(?!/))(?:/[^/"]+)*"
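Independent of how the hrefs are located, the joining step itself behaves as follows (shown with Python 3's urllib.parse.urljoin, which is urlparse.urljoin in the Python 2 code above). Note that urljoin resolves the ../ form cleanly, without the doubled all//pages path shown in the question's expected output:

```python
from urllib.parse import urljoin  # module named "urlparse" in Python 2

base = "http://www.opensolaris.org/os/community/on/flag-days/all/"

# Site-absolute path: replaces everything after the host.
print(urljoin(base, "/os/community/arc/caselog/2008/777/"))
# http://www.opensolaris.org/os/community/arc/caselog/2008/777/

# Directory-relative path: ".." climbs out of .../all/ first.
print(urljoin(base, "../pages/2009022501/"))
# http://www.opensolaris.org/os/community/on/flag-days/pages/2009022501/
```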
How to find a relative URL and translate it to an absolute URL in Python
I extract some code from a web page (http://www.opensolaris.org/os/community/on/flag-days/all/) like follows, <tr class="build"> <th colspan="0">Build 110</th> </tr> <tr class="arccase project flagday"> <td>Feb-25</td> <td></td> <td></td> <td></td> <td> <a href="../pages/2009022501/">Flag Day and Heads Up: Power Aware Dispatcher and Deep C-States</a><br /> cpupm keyword mode extensions - <a href="/os/community/arc/caselog/2008/777/">PSARC/2008/777</a><br /> CPU Deep Idle Keyword - <a href="/os/community/arc/caselog/2008/663/">PSARC/2008/663</a><br /> </td> </tr> and there are some relative url paths in it, now I want to search it with regular expressions and replace them with absolute url paths. Since I know urljoin can do the replace work like that, >>> urljoin("http://www.opensolaris.org/os/community/on/flag-days/all/", ... "/os/community/arc/caselog/2008/777/") 'http://www.opensolaris.org/os/community/arc/caselog/2008/777/' Now I want to know how to search them using regular expressions, and finally translate the code to, <tr class="build"> <th colspan="0">Build 110</th> </tr> <tr class="arccase project flagday"> <td>Feb-25</td> <td></td> <td></td> <td></td> <td> <a href="http://www.opensolaris.org/os/community/on/flag-days/all//pages/2009022501/">Flag Day and Heads Up: Power Aware Dispatcher and Deep C-States</a><br /> cpupm keyword mode extensions - <a href="http://www.opensolaris.org/os/community/arc/caselog/2008/777/">PSARC/2008/777</a><br /> CPU Deep Idle Keyword - <a href="http://www.opensolaris.org/os/community/arc/caselog/2008/663/">PSARC/2008/663</a><br /> </td> </tr> My knowledge of regular expressions is so poor that I want to know how to do that. Thanks I have finished the work using Beautiful Soup, haha~ Thx for everybody!
[ "I'm not sure about what you're trying to achieve but using the BASE tag in HTML may do this trick for you without having to resort to regular expressions when doing the processing.\n", "First, I'd recommend using a HTML parser, such as BeautifulSoup. HTML is not a regular language, and thus can't be parsed fully...
[ 5, 3, 2, 2, 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0000589833_python_regex.txt
Q: Python PIL - All areas of PNG with opacity > 0 have their opacity set to 1 Imagine a red circle with a black dropshadow that fades away on top of a fully transparent background. When I open and resave the image with PIL the background remains fully transparent but the dropshadow becomes full black. The problem appears without even altering the image: image = Image.open('input.png') image = image.convert('RGBA') image.save('output.png') I want to keep the image looking exactly as the original so that I can crop or resize it. EDIT: Here's a PNG that demonstrates the effect. It was converted to 8bit by using PNGNQ. When using the above Python code it comes out as the following: A: It looks like PIL currently doesn't support full alpha for PNG8. There is a patch here for read-only support: http://mail.python.org/pipermail/image-sig/2010-October/006533.html If you're feeling naughty, you could monkeypatch PIL: from PIL import Image, ImageFile, PngImagePlugin def patched_chunk_tRNS(self, pos, len): i16 = PngImagePlugin.i16 s = ImageFile._safe_read(self.fp, len) if self.im_mode == "P": self.im_info["transparency"] = map(ord, s) elif self.im_mode == "L": self.im_info["transparency"] = i16(s) elif self.im_mode == "RGB": self.im_info["transparency"] = i16(s), i16(s[2:]), i16(s[4:]) return s PngImagePlugin.PngStream.chunk_tRNS = patched_chunk_tRNS def patched_load(self): if self.im and self.palette and self.palette.dirty: apply(self.im.putpalette, self.palette.getdata()) self.palette.dirty = 0 self.palette.rawmode = None try: trans = self.info["transparency"] except KeyError: self.palette.mode = "RGB" else: try: for i, a in enumerate(trans): self.im.putpalettealpha(i, a) except TypeError: self.im.putpalettealpha(trans, 0) self.palette.mode = "RGBA" if self.im: return self.im.pixel_access(self.readonly) Image.Image.load = patched_load Image.open('kHrY6.png').convert('RGBA').save('kHrY6-out.png') A: I think that the problem has been somewhat resolved, but is it possible that 
you need to set the depth of the alpha channel?
Python PIL - All areas of PNG with opacity > 0 have their opacity set to 1
Imagine a red circle with a black dropshadow that fades away on top of a fully transparent background. When I open and resave the image with PIL the background remains fully transparent but the dropshadow becomes full black. The problem appears without even altering the image: image = Image.open('input.png') image = image.convert('RGBA') image.save('output.png') I want to keep the image looking exactly as the original so that I can crop or resize it. EDIT: Here's a PNG that demonstrates the effect. It was converted to 8bit by using PNGNQ. When using the above Python code it comes out as the following:
[ "It looks like PIL currently doesn't support full alpha for PNG8.\nThere is a patch here for read-only support: http://mail.python.org/pipermail/image-sig/2010-October/006533.html\nIf you're feeling naughty, you could monkeypatch PIL:\nfrom PIL import Image, ImageFile, PngImagePlugin\n\ndef patched_chunk_tRNS(self,...
[ 6, 0 ]
[]
[]
[ "png", "python", "python_imaging_library" ]
stackoverflow_0004217869_png_python_python_imaging_library.txt
Q: Zip and apply a list of functions over a list of values in Python Is there idiomatic and/or elegant Python for zipping and applying a list of functions over a list of values? For example, suppose you have a list of functions: functions = [int, unicode, float, lambda x: '~' + x + '~'] and a list of values: values = ['33', '\xc3\xa4', '3.14', 'flange'] Is there a way to apply the ith function to the ith value and return a list of the same length of the transformed values, while avoiding this ugly list comprehension? [functions[i](values[i]) for i in range(len(functions))] # <- ugly What I want is something like zip() + map() (zipmap()!) the functions list with the values list and have the functions be applied to their paired values. I thought itertools might offer something relevant, but functions like imap and starmap are for mapping a single function over an iterable, not an iterable of functions over another iterable. A: [x(y) for x, y in zip(functions, values)] A: These solutions seem overly complicated: map already zips its arguments: map(lambda x,y:x(y), functions, values) Or, if you prefer the iterator version: from itertools import imap imap(lambda x,y:x(y), functions, values) A: One of the beautiful features of functional encapsulation is they can hide ugliness. If you need zipmap, define it: def zipmap(values, functions): return [functions[i](values[i]) for i in range( len(functions))] A: Code that is meant to function robustly would do this in a for loop so that it could provide meaningful error reporting: for i in xrange(num_columns): try: outrow.append(functions[i](input_values[i]) except (ValueError, EtcetcError) as e: self.do_error_reporting(row_number, i, input_value[i], e) outrow.append(None)
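For completeness, here are the two accepted approaches in runnable Python 3 form (unicode and the '\xc3\xa4' byte string from the question are Python 2 specific, so this sketch substitutes str and a plain 'a'):

```python
functions = [int, str, float, lambda x: '~' + x + '~']
values = ['33', 'a', '3.14', 'flange']

# zip + list comprehension: apply the ith function to the ith value.
zipped = [f(v) for f, v in zip(functions, values)]
print(zipped)  # [33, 'a', 3.14, '~flange~']

# map() zips its arguments itself, so the lambda form gives the same result.
mapped = list(map(lambda f, v: f(v), functions, values))
print(mapped == zipped)  # True
```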
Zip and apply a list of functions over a list of values in Python
Is there idiomatic and/or elegant Python for zipping and applying a list of functions over a list of values? For example, suppose you have a list of functions: functions = [int, unicode, float, lambda x: '~' + x + '~'] and a list of values: values = ['33', '\xc3\xa4', '3.14', 'flange'] Is there a way to apply the ith function to the ith value and return a list of the same length of the transformed values, while avoiding this ugly list comprehension? [functions[i](values[i]) for i in range(len(functions))] # <- ugly What I want is something like zip() + map() (zipmap()!) the functions list with the values list and have the functions be applied to their paired values. I thought itertools might offer something relevant, but functions like imap and starmap are for mapping a single function over an iterable, not an iterable of functions over another iterable.
[ "[x(y) for x, y in zip(functions, values)]\n\n", "These solutions seem overly complicated: map already zips its arguments:\nmap(lambda x,y:x(y), functions, values)\n\nOr, if you prefer the iterator version:\nfrom itertools import imap\nimap(lambda x,y:x(y), functions, values)\n\n", "One of the beautiful featur...
[ 24, 24, 4, 1 ]
[]
[]
[ "python" ]
stackoverflow_0004231345_python.txt
Q: Matching a space at the beginning of a line using pyparsing I'm trying to parse a unified diff file using pyparsing as an exercise and I can't get something right. Here the part of my diff file that's causing me troubles : (... some stuff over...) banana +apple orange The first line starts with " " then "banana". I have the following expression for parsing a line : linestart = Literal(" ") | Literal("+") | Literal("-") line = linestart.leaveWhitespace() + restOfLine This works when parsing a single line, but when I try to parse the whole file, the "leaveWhitespace" instruction make the parser start at the end of the last line. In my example, after parsing " banana", the next char is "\n" (because of leaveWhitespace) and the parser tries to match " " or "+" or "-" and so throws an error. How can I handle this correctly? A: You can read and parse one line at a time. The following code works for me. from pyparsing import Literal, restOfLine linestart = Literal(" ") | Literal("+") | Literal("-") line = linestart.leaveWhitespace() + restOfLine f = open("/tmp/test.diff") for l in f.readlines(): fields = line.parseString(l) print fields And the output is [' ', 'banana'] ['+', 'apple'] [' ', 'orange'] Or if you have to parse several lines, you can explicitly specify the LineEnd linestart = Literal(" ") | Literal("+") | Literal("-") line = linestart.leaveWhitespace() + restOfLine + LineEnd() lines = ZeroOrMore(line) lines.parseString(f.read())
Matching a space at the beginning of a line using pyparsing
I'm trying to parse a unified diff file using pyparsing as an exercise and I can't get something right. Here the part of my diff file that's causing me troubles : (... some stuff over...) banana +apple orange The first line starts with " " then "banana". I have the following expression for parsing a line : linestart = Literal(" ") | Literal("+") | Literal("-") line = linestart.leaveWhitespace() + restOfLine This works when parsing a single line, but when I try to parse the whole file, the "leaveWhitespace" instruction make the parser start at the end of the last line. In my example, after parsing " banana", the next char is "\n" (because of leaveWhitespace) and the parser tries to match " " or "+" or "-" and so throws an error. How can I handle this correctly?
[ "You can read and parse one line at a time. The following code works for me.\nfrom pyparsing import Literal, restOfLine\n\nlinestart = Literal(\" \") | Literal(\"+\") | Literal(\"-\")\nline = linestart.leaveWhitespace() + restOfLine\n\nf = open(\"/tmp/test.diff\")\nfor l in f.readlines():\n fields = line.parseStri...
[ 1 ]
[]
[]
[ "pyparsing", "python" ]
stackoverflow_0004231835_pyparsing_python.txt
Q: Automating JPEG download I need to download jpeg images of size > MIN_SIZE from the pages 1 <= PAGE_NUMBER <= NUM_OF_PAGES http://somewebsite.com/showthread.php?t=12345&page=PAGE_NUMBER How can I do that in python? I am new to python. A: Here's how I would do it in Python: Fetch each page you need to grab image from (easy, just use mechanize or some other HTTP fetcher library) Parse each HTML file to grab the image URLs. This a bit more involved -- have a look at HTMLParser. From memory, you can subclass HTMLParser to only grab the text that you're interested in. In this case, this is the src attribute from the HTML img tag, e.g. something like <img src="this is what you want" width=640 height=480/> Fetch each image obtained above (easy) Personally, though, I wouldn't use Python for this. The first and last steps of the above approach are easily done with wget. The second can be performed with grep, with bash to tie everything together. In fact, this is pretty much exactly what I recommended here. That is, of course, if you're on Linux. If you don't have bash and get Python may be your next best option.
Automating JPEG download
I need to download jpeg images of size > MIN_SIZE from the pages 1 <= PAGE_NUMBER <= NUM_OF_PAGES http://somewebsite.com/showthread.php?t=12345&page=PAGE_NUMBER How can I do that in python? I am new to python.
[ "Here's how I would do it in Python:\n\nFetch each page you need to grab image from (easy, just use mechanize or some other HTTP fetcher library)\nParse each HTML file to grab the image URLs. This a bit more involved -- have a look at HTMLParser. From memory, you can subclass HTMLParser to only grab the text that...
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0004232128_python.txt
Q: StringType and NoneType in python3.x I have a codebase which uses StringType and NoneType(types module) in the python2.x codebase. On porting to Python3, tests failed as the types module in Python3.x does not have the above mentioned two types. I solved the problem by replacing them with "str" and "None" respectively. I was wondering if there is another (right)way of doing this. What I'm doing currently definitely seems to work, but I'm doubtful. Should I stick to the approach I have followed or is there something wrong in what I have done? If so, how do I correct it? A: Checking None is usually done by calling obj is None, while checking for string usually is isinstance(obj, str). In Python 2.x to detect both string and unicode, you could use isinstance(obj, basestring). If you use 2to3, it's enough, but if you need to have single piece of code working in both Py2 and Py3, you may end up with something like that: try: return isinstance(obj, basestring) except NameError: return isinstance(obj, str) A: Where possible I would recommend that you avoid the values in types if their values are obvious; so for something like binding a method to an object, use types.MethodType, but for types.StringTypes use (str, unicode) or rather basestring. For this situation, I would do this: Use obj is None or obj is not None rather than isinstance(obj, NoneType) or not isinstance(obj, NoneType). Use isinstance(obj, basestring) rather than isinstance(obj, StringTypes) Use isinstance(obj, str) rather than isinstance(obj, StringType) Then, when you're needing to distribute for Python 3, use 2to3. Then your basestring will become str and the rest will continue to work as it did before. (Also, bear in mind this, in particular the difference between StringType and StringTypes: types.UnicodeType == unicode types.StringType == str types.StringTypes == (str, unicode) )
StringType and NoneType in python3.x
I have a codebase which uses StringType and NoneType(types module) in the python2.x codebase. On porting to Python3, tests failed as the types module in Python3.x does not have the above mentioned two types. I solved the problem by replacing them with "str" and "None" respectively. I was wondering if there is another (right)way of doing this. What I'm doing currently definitely seems to work, but I'm doubtful. Should I stick to the approach I have followed or is there something wrong in what I have done? If so, how do I correct it?
[ "Checking None is usually done by calling obj is None, while checking for string usually is isinstance(obj, str). In Python 2.x to detect both string and unicode, you could use isinstance(obj, basestring).\nIf you use 2to3, it's enough, but if you need to have single piece of code working in both Py2 and Py3, you m...
[ 26, 7 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0004232111_python_python_3.x.txt
Q: python modify Tkinter.Listbox parameter I need help with Tkinter.Listbox class, I'm trying to change the color of selected line A: from Tkinter import * master = Tk() listbox = Listbox(master, selectbackground="red") If there are any other colours you want to change, then look them up at the documentation.
python modify Tkinter.Listbox parameter
I need help with Tkinter.Listbox class, I'm trying to change the color of selected line
[ "from Tkinter import *\nmaster = Tk()\nlistbox = Listbox(master, selectbackground=\"red\")\n\nIf there are any other colours you want to change, then look them up at the documentation. \n" ]
[ 2 ]
[]
[]
[ "listbox", "python", "selection", "tkinter" ]
stackoverflow_0004232823_listbox_python_selection_tkinter.txt
Q: Communication between threads in PySide I have a thread which produces some data (a python list) and which shall be available for a widget that will read and display the data in the main thread. Actually, I'm using QMutex to provide access to the data, in this way: class Thread(QThread): def get_data(self): QMutexLock(self.mutex) return deepcopy(self.data) def set_data(self, data): QMutexLock(self.mutex) self.data = deepcopy(data) def run(self): self.mutex = QMutex() while True: self.data = slowly_produce_data() self.emit(SIGNAL("dataReady()")) class Widget(QWidget): def __init__(self): self.thread = Thread() self.connect(self.thread, SIGNAL("dataReady()"), self.get_data) self.thread.start() def get_data(self): self.data = self.thread.get_data() def paintEvent(self, event): paint_somehow(self.data) Note that I'm not passing the data in the emit() as they are generic data (I tried to use PyObject as data type, but a double free() would crash the program), but I'm copying the data with a deepcopy() (assuming the data can be copied like this). I used a deepcopy() because I guess that a code like: def get_data(self): QMutexLock(self.mutex) return self.data would only copy a reference to the data (right?) and data would be shared AND unlocked after the return... Is this code correct? What can I do if data are really large (like a list of 1'000'000 items)? Thanks. P.S. I saw some examples, like the Qt Mandelbrot example, or the threading example with PyQt, but they use QImage as parameter in the slots. A: I think this should work with PySide. if not work please report a bug on PySide bugzilla(http://bugs.openbossa.org/) with a small test case: class Thread(QThread): dataReady = Signal(object) def run(self): while True: self.data = slowly_produce_data() # this will add a ref to self.data and avoid the destruction self.dataReady.emit(self.data) class Widget(QWidget): def __init__(self): self.thread = Thread() self.thread.dataReady.connect(self.get_data, Qt.QueuedConnection) self.thread.start() def get_data(self, data): self.data = data def paintEvent(self, event): paint_somehow(self.data)
Communication between threads in PySide
I have a thread which produces some data (a python list) and which shall be available for a widget that will read and display the data in the main thread. Actually, I'm using QMutex to provide access to the data, in this way: class Thread(QThread): def get_data(self): QMutexLock(self.mutex) return deepcopy(self.data) def set_data(self, data): QMutexLock(self.mutex) self.data = deepcopy(data) def run(self): self.mutex = QMutex() while True: self.data = slowly_produce_data() self.emit(SIGNAL("dataReady()")) class Widget(QWidget): def __init__(self): self.thread = Thread() self.connect(self.thread, SIGNAL("dataReady()"), self.get_data) self.thread.start() def get_data(self): self.data = self.thread.get_data() def paintEvent(self, event): paint_somehow(self.data) Note that I'm not passing the data in the emit() as they are generic data (I tried to use PyObject as data type, but a double free() would crash the program), but I'm copying the data with a deepcopy() (assuming the data can be copied like this). I used a deepcopy() because I guess that a code like: def get_data(self): QMutexLock(self.mutex) return self.data would only copy a reference to the data (right?) and data would be shared AND unlocked after the return... Is this code correct? What can I do if data are really large (like a list of 1'000'000 items)? Thanks. P.S. I saw some examples, like the Qt Mandelbrot example, or the threading example with PyQt, but they use QImage as parameter in the slots.
[ "I think this should work with PySide. if not work please report a bug on PySide bugzilla(http://bugs.openbossa.org/) with a small test case:\nclass Thread(QThread):\n dataReady = Signal(object)\n\n def run(self):\n while True:\n self.data = slowly_produce_data()\n # this will add a ref to self.data ...
[ 15 ]
[]
[]
[ "multithreading", "pyqt", "pyqt4", "pyside", "python" ]
stackoverflow_0002823112_multithreading_pyqt_pyqt4_pyside_python.txt
Q: Installing easy_install... to get to installing lxml I've come to grips with the fact that ElementTree isn't going to do what I want it to do. I've checked out the documentation for lxml, and it appears that it will serve my purposes. To get lxml, I need to get easy_install. So I downloaded it from here, and put it in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/. Then I went to that folder, and ran sh setuptools-0.6c11-py2.6.egg. That installed successfully. Then I got excited because I thought the whole point of easy_install was that I could then just install via easy_install lxml, and lxml would magically get downloaded, built, and installed properly, ready for my importing enjoyment. So I ran easy_install lxml. I pasted the results below. What should I do? easy_install lxml Searching for lxml Reading http://pypi.python.org/simple/lxml/ Reading http://codespeak.net/lxml Best match: lxml 2.2.6 Downloading http://codespeak.net/lxml/lxml-2.2.6.tgz Processing lxml-2.2.6.tgz Running lxml-2.2.6/setup.py -q bdist_egg --dist-dir /var/folders/49/49N0+g5QFKCm51AbzMtghE+++TI/-Tmp-/easy_install-rxbP6K/lxml-2.2.6/egg-dist-tmp-fjakR0 Building lxml version 2.2.6. NOTE: Trying to build without Cython, pre-generated 'src/lxml/lxml.etree.c' needs to be available. Using build configuration of libxslt 1.1.12 Building against libxml2/libxslt in the following directory: /usr/lib unable to execute gcc-4.0: No such file or directory error: Setup script exited with error: command 'gcc-4.0' failed with exit status 1 A: First off we don't use easy_install anymore. We use pip. Please use pip instead. To get to your particular troubles, as the comments point out, you're missing GCC. On OS X, Xcode Command Line Tools provides GCC, as well as many other programs necessary for building software on OS X. For OS X 10.9 (Mavericks) and newer, either install Xcode through the App Store, or alternatively, install only the Xcode Command Line Tools with xcode-select --install For more details, please see the Apple Developer FAQ or search the web for "install Xcode Command Line Tools". For older versions of OS X, you can get Xcode Command Line Tools from the downloads page of the Apple Developer website (free registration required). Once you have GCC installed, you may still encounter errors during compilation if the C/C++ library dependencies are not installed on your system. On OS X, the Homebrew project is the easiest way to install and manage such dependencies. Follow the instructions on the Homebrew website to install Homebrew on your system, then issue brew update brew install libxml2 libxslt Possibly causing further trouble in your case, you placed the downloaded setuptools in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/. Please do not download any files to this location. Instead, I suggest you download the file to your home directory, or your usual Downloads directory. After downloading it, you're supposed to run sh setuptools-X.Y.Z.egg, which will then install it properly into the appropriate site-packages and put the executable easy_install on your path. A: Ensure you have libxml2-dev and libxslt1-dev installed apt-get install libxml2-dev apt-get install libxslt1-dev Then your installation should build properly. A: try: sudo apt-get install python-lxml A: It looks like lxml wants to build an extension that requires access to a C compiler. You will need gcc for that. Try running sudo apt-get install build-essential and that should fix this particular issue. A: Make sure that all the following packages are installed on your system first: gcc gcc-c++ python-devel libxml2 libxml2-dev libxslt libxslt-dev You should be able to install them using some variant of: sudo apt-get install sudo yum install Only after all of the above have been successfully installed should you attempt to run: sudo pip install lxml
Installing easy_install... to get to installing lxml
I've come to grips with the fact that ElementTree isn't going to do what I want it to do. I've checked out the documentation for lxml, and it appears that it will serve my purposes. To get lxml, I need to get easy_install. So I downloaded it from here, and put it in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/. Then I went to that folder, and ran sh setuptools-0.6c11-py2.6.egg. That installed successfully. Then I got excited because I thought the whole point of easy_install was that I could then just install via easy_install lxml, and lxml would magically get downloaded, built, and installed properly, ready for my importing enjoyment. So I ran easy_install lxml. I pasted the results below. What should I do? easy_install lxml Searching for lxml Reading http://pypi.python.org/simple/lxml/ Reading http://codespeak.net/lxml Best match: lxml 2.2.6 Downloading http://codespeak.net/lxml/lxml-2.2.6.tgz Processing lxml-2.2.6.tgz Running lxml-2.2.6/setup.py -q bdist_egg --dist-dir /var/folders/49/49N0+g5QFKCm51AbzMtghE+++TI/-Tmp-/easy_install-rxbP6K/lxml-2.2.6/egg-dist-tmp-fjakR0 Building lxml version 2.2.6. NOTE: Trying to build without Cython, pre-generated 'src/lxml/lxml.etree.c' needs to be available. Using build configuration of libxslt 1.1.12 Building against libxml2/libxslt in the following directory: /usr/lib unable to execute gcc-4.0: No such file or directory error: Setup script exited with error: command 'gcc-4.0' failed with exit status 1
[ "First off we don't use easy_install anymore. We use pip. Please use pip instead.\nTo get to your particular troubles, as the comments point out, you're missing GCC. On OS X, Xcode Command Line Tools provides GCC, as well as many other programs necessary for building software on OS X. For OS X 10.9 (Mavericks) and ...
[ 22, 13, 10, 2, 2 ]
[]
[]
[ "easy_install", "lxml", "python" ]
stackoverflow_0002368008_easy_install_lxml_python.txt
Q: sqlite3 rowid alias not being properly created According to the sqlite3 documentation, creating a table where the primary key is an ascending integer causes the primary key to be an alias for the rowID. This isn't happening for me. Here is my creation code: import sqlite3 con = sqlite3.connect("/tmp/emaildb.sqlite3") c = con.cursor() try: c.execute("create table drives (driveid integer primary key asc, drivename text unique);") con.commit() except sqlite3.OperationalError: pass Here is my checking code: try: c.execute("insert into drives (drivename) values (?)",(drivename,)) print "new ID=",c.lastrowid except sqlite3.IntegrityError: c.execute("select rowid from drives where drivename=?",(drivename,)) driveid = c.fetchone()[0] print "old ID=",driveid This works if I select rowid but not if I select driveid. What's wrong? A: http://www.sqlite.org/autoinc.html I am not familiar with the "asc" directive in your create-table.
sqlite3 rowid alias not being properly created
According to the sqlite3 documentation, creating a table where the primary key is an ascending integer causes the primary key to be an alias for the rowID. This isn't happening for me. Here is my creation code: import sqlite3 con = sqlite3.connect("/tmp/emaildb.sqlite3") c = con.cursor() try: c.execute("create table drives (driveid integer primary key asc, drivename text unique);") con.commit() except sqlite3.OperationalError: pass Here is my checking code: try: c.execute("insert into drives (drivename) values (?)",(drivename,)) print "new ID=",c.lastrowid except sqlite3.IntegrityError: c.execute("select rowid from drives where drivename=?",(drivename,)) driveid = c.fetchone()[0] print "old ID=",driveid This works if I select rowid but not if I select driveid. What's wrong?
[ "http://www.sqlite.org/autoinc.html\nI am not familiar with the \"asc\" directive in your create-table.\n" ]
[ 1 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0004233098_python_sqlite.txt
Q: sha1WithRSAEncryption in Python Can someone recommend a library for calculating SHA1WithRSAEncryption in Python? Context: I'm trying to do some message authentication. I've looked at PyXMLDSig, but it seemed to expect the certificates as separate files. As a first step to better understanding the problem space, I wanted to calculate the digest values "by hand". I've looked around and seen Java implementations, but not Python ones. (Jython isn't really an option for my environment.) Thanks in advance. A: Take a look at M2Crypto, it's probably the best and most complete crypto library for Python.
sha1WithRSAEncryption in Python
Can someone recommend a library for calculating SHA1WithRSAEncryption in Python? Context: I'm trying to do some message authentication. I've looked at PyXMLDSig, but it seemed to expect the certificates as separate files. As a first step to better understanding the problem space, I wanted to calculate the digest values "by hand". I've looked around and seen Java implementations, but not Python ones. (Jython isn't really an option for my environment.) Thanks in advance.
[ "Take a look at M2Crypto, it's probably the best and most complete crypto library for Python.\n" ]
[ 0 ]
[]
[]
[ "python", "sha1", "xml" ]
stackoverflow_0004107888_python_sha1_xml.txt
Q: Receiving Mail in Google App Engine I am reading the tutorial about Receiving Mail. I updated the app.yaml file as instructed: application: hello-1-world version: 1 runtime: python api_version: 1 handlers: - url: /favicon.ico static_files: static/images/favicon.ico upload: static/images/favicon.ico - url: /_ah/mail/.+ script: handle_incoming_email.py login: admin - url: /.* script: hw.py inbound_services: - mail And created a handle_incoming_email.py import cgi import os import logging from google.appengine.api import users from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app from google.appengine.ext import db from google.appengine.api import mail from google.appengine.ext.webapp.mail_handlers import InboundMailHandler class ReceiveEmail(InboundMailHandler): def receive(self,message): logging.info("Received email from %s" % message.sender) plaintext = message.bodies(content_type='text/plain') for text in plaintext: txtmsg = "" txtmsg = text[1].decode() logging.info("Body is %s" % txtmsg) self.response.out.write(txtmsg) application = webapp.WSGIApplication([ ReceiveEmail.mapping() ], debug=True) def main(): run_wsgi_app(application) if __name__ == "__main__": main() I also have hw.py that I used to practice sending email. That one works. Now I go to http://localhost:8081/_ah/admin/inboundmail and send an email to help@hello-1-world.appspotmail.com Can anyone explain to me how I process this email? How do I access the content of the email? I have the code self.response.out.write(txtmsg) in handle_incoming_email.py but that does not print anything. I would greatly appreciate if someone clarify how receiving email works. For instance, in this question class MailHandler (InboundMailHandler): def receive(self, message): sender = message.sender user_account = db.GqlQuery("SELECT * FROM Task WHERE user = :1", sender).fetch(5) as far as I understand sender is the email of the sender. So, in my case, how do I access the sender email address. Also, why do I need to have a separate script to handle incoming mail? Why can't I put the ReceiveEmail handler in my hw.py script? If I do that, where do I put the line application = webapp.WSGIApplication([ ReceiveEmail.mapping() ], debug=True) I would be grateful if you can help me with these questions. (I asked the same question in GAE group but there were no answers.) A: Is help@hello-1-world.appspotmail.com a valid google user? GAE can receive/send mails only from the google user of your application. Your code seems correct. "Also, why do I need to have a separate script to handle incoming mail? Why can't I put the ReceiveEmail handler in my hw.py" -> the main script is to handle url request, I think is much clearer in this way.
Receiving Mail in Google App Engine
I am reading the tutorial about Receiving Mail. I updated the app.yaml file as instructed: application: hello-1-world version: 1 runtime: python api_version: 1 handlers: - url: /favicon.ico static_files: static/images/favicon.ico upload: static/images/favicon.ico - url: /_ah/mail/.+ script: handle_incoming_email.py login: admin - url: /.* script: hw.py inbound_services: - mail And created a handle_incoming_email.py import cgi import os import logging from google.appengine.api import users from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app from google.appengine.ext import db from google.appengine.api import mail from google.appengine.ext.webapp.mail_handlers import InboundMailHandler class ReceiveEmail(InboundMailHandler): def receive(self,message): logging.info("Received email from %s" % message.sender) plaintext = message.bodies(content_type='text/plain') for text in plaintext: txtmsg = "" txtmsg = text[1].decode() logging.info("Body is %s" % txtmsg) self.response.out.write(txtmsg) application = webapp.WSGIApplication([ ReceiveEmail.mapping() ], debug=True) def main(): run_wsgi_app(application) if __name__ == "__main__": main() I also have hw.py that I used to practice sending email. That one works. Now I go to http://localhost:8081/_ah/admin/inboundmail and send an email to help@hello-1-world.appspotmail.com Can anyone explain to me how I process this email? How do I access the content of the email? I have the code self.response.out.write(txtmsg) in handle_incoming_email.py but that does not print anything. I would greatly appreciate if someone clarify how receiving email works. For instance, in this question class MailHandler (InboundMailHandler): def receive(self, message): sender = message.sender user_account = db.GqlQuery("SELECT * FROM Task WHERE user = :1", sender).fetch(5) as far as I understand sender is the email of the sender. So, in my case, how do I access the sender email address. Also, why do I need to have a separate script to handle incoming mail? Why can't I put the ReceiveEmail handler in my hw.py script? If I do that, where do I put the line application = webapp.WSGIApplication([ ReceiveEmail.mapping() ], debug=True) I would be grateful if you can help me with these questions. (I asked the same question in GAE group but there were no answers.)
[ "Is help@hello-1-world.appspotmail.com a valid google user? GAE can receive/send mails only from the google user of your application.\nYour code seems correct.\n\"Also, why do I need to have a separate script to handle incoming mail? Why can't I put the ReceiveEmail handler in my hw.py\" -> the main script is to ha...
[ 1 ]
[]
[]
[ "email", "google_app_engine", "python" ]
stackoverflow_0004233201_email_google_app_engine_python.txt
Q: Python script import fails if script is moved to subdirectory This may be my own misunderstanding of how Python imports and search paths work, or it may be a problem in the packaging of the caldav package. I have set up a virtualenv environment named myproject In the top level of myproject, I have a script test.py which contains two imports: import lxml import caldav In this directory, I type: python test.py and it works fine without any problem Now I move the script to the subdirectory test and run the command: python test/test.py The import lxml seems to still work. The import caldav fails with the following exception: Traceback (most recent call last): File "test/test.py", line 34, in <module> main() File "test/test.py", line 29, in main exec ( "import " + modulename ) File "<string>", line 1, in <module> File "/home/ec2-user/caldav2sql/myproject/test/caldav/__init__.py", line 3, in <module> from davclient import DAVClient File "/home/ec2-user/caldav2sql/myproject/test/caldav/davclient.py", line 8, in <module> from caldav.lib import error ImportError: No module named lib Am I doing something wrong here? Should I be setting up some kind of path? A: Most likely, caldav was in the same directory as test.py, so when you import it it worked fine. Now that you moved test.py to a subdirectory, your imports can't find it. You can either move caldav or set your PYTHONPATH. You could also modify your sys.path Information from Python's module tutorial: http://docs.python.org/tutorial/modules.html The variable sys.path is a list of strings that determines the interpreter’s search path for modules. It is initialized to a default path taken from the environment variable PYTHONPATH, or from a built-in default if PYTHONPATH is not set. You can modify it using standard list operations: >>> import sys >>> sys.path.append('/ufs/guido/lib/python')
Python script import fails if script is moved to subdirectory
This may be my own misunderstanding of how Python imports and search paths work, or it may be a problem in the packaging of the caldav package. I have set up a virtualenv environment named myproject In the top level of myproject, I have a script test.py which contains two imports: import lxml import caldav In this directory, I type: python test.py and it works fine without any problem Now I move the script to the subdirectory test and run the command: python test/test.py The import lxml seems to still work. The import caldav fails with the following exception: Traceback (most recent call last): File "test/test.py", line 34, in <module> main() File "test/test.py", line 29, in main exec ( "import " + modulename ) File "<string>", line 1, in <module> File "/home/ec2-user/caldav2sql/myproject/test/caldav/__init__.py", line 3, in <module> from davclient import DAVClient File "/home/ec2-user/caldav2sql/myproject/test/caldav/davclient.py", line 8, in <module> from caldav.lib import error ImportError: No module named lib Am I doing something wrong here? Should I be setting up some kind of path?
[ "Most likely, caldav was in the same directory as test.py, so when you import it it worked fine. Now that you moved test.py to a subdirectory, your imports can't find it. You can either move caldav or set your PYTHONPATH.\nYou could also modify your sys.path\nInformation from Python's module tutorial: http://docs.p...
[ 3 ]
[]
[]
[ "python", "virtualenv" ]
stackoverflow_0004233311_python_virtualenv.txt
Q: refreshing QTreeView / QSortFilterProxyModel Good Day to All, Been writing code for years, but still a bit green when it comes to PyQt, so please forgive my syntactically lacking question(s) ;-) I'm hacking a derivative of the (famous?) packaged example that comes with PyQt4 (and Qt), namely "basicsortfiltermodel.pyw" from "../examples/itemviews" in PyQt4... I've added a little popup menu (let's call this B.py) that one can launch from the BasicSort-derivative (let's call this A.py). I believe I'm correcting adding new data (a new record) to the QSortFilterProxyModel(). (I think this because I'm not getting any errors now, after some effort) But I seem to be unable to get the QTreeView to refresh. I've scoured the Qt class docs and Google'd the heck out of it (seems like a common question from the looks of it, lol).. Now I know this is an ugly hack, but just to try to get it to work (elegance can come later is my theory)... At the bottom of A.py, I declared a global "wX", global wX; [...] window = Window() wX = Window() window.setSourceModel(createMailModel(window)) so that when I hit a button later, I could more easily get a hold of the "parent" value found in the runtime "createMailModel". From which I get the "model" handle.. model = QtGui.QStandardItemModel(0, 17, WinX) addMail(model, "image", "tabl00", etc etc) Anyways,..I think this is working.... But after adding a new record via addMail(), I can't seem to get self.proxyModel to refresh itself.. I'm pretty sure this a stupid newbie issue, lol....but could anyone help shed some light on how to make this work? Many Thanks, A: I believe you have to add new items into your original model not the proxy one. Once items is added proxy model and view will updates themselves accordingly. See if an example below would work for you: import sys from PyQt4 import QtGui class MainForm(QtGui.QMainWindow): def __init__(self, parent=None): super(MainForm, self).__init__(parent) self.setMinimumSize(300, 400) self.model = QtGui.QStandardItemModel() self.sortModel = QtGui.QSortFilterProxyModel() self.sortModel.setSourceModel(self.model) parentItem = self.model.invisibleRootItem() parentItem.appendRow(QtGui.QStandardItem("3")) parentItem.appendRow(QtGui.QStandardItem("1")) parentItem.appendRow(QtGui.QStandardItem("4")) parentItem.appendRow(QtGui.QStandardItem("2")) self.view = QtGui.QListView(self) self.view.setModel(self.sortModel) self.view.setGeometry(0, 0, 200, 400) self.button = QtGui.QPushButton("add items", self) self.button.move(200, 0) self.button.clicked.connect(self.on_button_clicked) self.layout = QtGui.QVBoxLayout(self.centralWidget()) self.layout.addWidget(self.view) self.layout.addWidget(self.button) self.sortModel.sort(0) def on_button_clicked(self): parentItem = self.model.invisibleRootItem() parentItem.appendRow(QtGui.QStandardItem("222")) parentItem.appendRow(QtGui.QStandardItem("333")) parentItem.appendRow(QtGui.QStandardItem("444")) def main(): app = QtGui.QApplication(sys.argv) form = MainForm() form.show() app.exec_() if __name__ == '__main__': main() hope this helps, regards
refreshing QTreeView / QSortFilterProxyModel
Good Day to All, Been writing code for years, but still a bit green when it comes to PyQt, so please forgive my syntactically lacking question(s) ;-) I'm hacking a derivative of the (famous?) packaged example that comes with PyQt4 (and Qt), namely "basicsortfiltermodel.pyw" from "../examples/itemviews" in PyQt4... I've added a little popup menu (let's call this B.py) that one can launch from the BasicSort-derivative (let's call this A.py). I believe I'm correctly adding new data (a new record) to the QSortFilterProxyModel(). (I think this because I'm not getting any errors now, after some effort) But I seem to be unable to get the QTreeView to refresh. I've scoured the Qt class docs and Google'd the heck out of it (seems like a common question from the looks of it, lol).. Now I know this is an ugly hack, but just to try to get it to work (elegance can come later is my theory)... At the bottom of A.py, I declared a global "wX", global wX; [...] window = Window() wX = Window() window.setSourceModel(createMailModel(window)) so that when I hit a button later, I could more easily get a hold of the "parent" value found in the runtime "createMailModel". From which I get the "model" handle.. model = QtGui.QStandardItemModel(0, 17, WinX) addMail(model, "image", "tabl00", etc etc) Anyways,..I think this is working.... But after adding a new record via addMail(), I can't seem to get self.proxyModel to refresh itself.. I'm pretty sure this is a stupid newbie issue, lol....but could anyone help shed some light on how to make this work? Many Thanks,
[ "I believe you have to add new items into your original model not the proxy one. Once items is added proxy model and view will updates themselves accordingly. See if an example below would work for you:\nimport sys\nfrom PyQt4 import QtGui\n\nclass MainForm(QtGui.QMainWindow):\n def __init__(self, parent=None):\...
[ 3 ]
[]
[]
[ "pyqt", "python", "qt" ]
stackoverflow_0004218778_pyqt_python_qt.txt
Q: Download prices with python I have tried this before. I'm completely at a loss for ideas. On this page there is a dialog box to get quotes. http://www.schwab.com/public/schwab/non_navigable/marketing/email/get_quote.html? I used SPY, XLV, IBM, MSFT. The output is the above with a table. If you have an account the quotes are real time --- via cookie. How do I get the table into Python 2.6, with the data as a list or dictionary? A: Use something like Beautiful Soup to parse the HTML response from the web site and load it into a dictionary. Use the symbol as the key and a tuple of whatever data you're interested in as the value. Iterate over all the symbols returned and add one entry per symbol. You can see examples of how to do this in Toby Segaran's "Programming Collective Intelligence". The samples are all in Python. A: First problem: the data is actually in an iframe in a frame; you need to be looking at https://www.schwab.wallst.com/public/research/stocks/summary.asp?user_id=schwabpublic&symbol=APC (where you substitute the appropriate symbol on the end of the URL). Second problem: extracting the data from the page. I personally like lxml and xpath, but there are many packages which will do the job.
I would probably expect some code like import urllib2 import lxml.html import re re_dollars = '\$?\s*(\d+\.\d{2})' def urlExtractData(url, defs): """ Get html from url, parse according to defs, return as dictionary defs is a list of tuples ("name", "xpath", "regex", fn ) name becomes the key in the returned dictionary xpath is used to extract a string from the page regex further processes the string (skipped if None) fn casts the string to the desired type (skipped if None) """ page = urllib2.urlopen(url) # can modify this to include your cookies tree = lxml.html.parse(page) res = {} for name,path,reg,fn in defs: txt = tree.xpath(path)[0] if reg != None: match = re.search(reg,txt) txt = match.group(1) if fn != None: txt = fn(txt) res[name] = txt return res def getStockData(code): url = 'https://www.schwab.wallst.com/public/research/stocks/summary.asp?user_id=schwabpublic&symbol=' + code defs = [ ("stock_name", '//span[@class="header1"]/text()', None, str), ("stock_symbol", '//span[@class="header2"]/text()', None, str), ("last_price", '//span[@class="neu"]/text()', re_dollars, float) # etc ] return urlExtractData(url, defs) When called as print repr(getStockData('MSFT')) it returns {'stock_name': 'Microsoft Corp', 'last_price': 25.690000000000001, 'stock_symbol': 'MSFT:NASDAQ'} Third problem: the markup on this page is presentational, not structural - which says to me that code based on it will likely be fragile, ie any change to the structure of the page (or variation between pages) will require reworking your xpaths. Hope that helps! A: Have you thought of using yahoo's quotes api? 
see: http://developer.yahoo.com/yql/console/?q=show%20tables&env=store://datatables.org/alltableswithkeys#h=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20%3D%20%22YHOO%22 You will be able to dynamically generate a request to the website such as: http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20%3D%20%22YHOO%22&diagnostics=true&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys And just poll it with a standard HTTP GET request. The response is in XML format. A: matplotlib has a module that gets historical quotes from Yahoo: >>> from matplotlib.finance import quotes_historical_yahoo >>> from datetime import date >>> from pprint import pprint >>> pprint(quotes_historical_yahoo('IBM', date(2010, 11, 12), date(2010, 11, 18))) [(734088.0, 144.59, 143.74000000000001, 145.77000000000001, 143.55000000000001, 4731500.0), (734091.0, 143.88999999999999, 143.63999999999999, 144.75, 143.27000000000001, 3827700.0), (734092.0, 142.93000000000001, 142.24000000000001, 143.38, 141.18000000000001, 6342100.0), (734093.0, 142.49000000000001, 141.94999999999999, 142.49000000000001, 141.38999999999999, 4785900.0)]
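The YQL request URL described in that answer can be assembled programmatically with urlencode; a minimal sketch (the endpoint and query string are taken from the answer above; yql_quote_url is an illustrative name, not an existing API, and no network call is made here):

```python
# Sketch: build the YQL quote URL from the answer above. The try/except
# keeps it working on both Python 2 (as in the question) and Python 3.
try:
    from urllib import urlencode          # Python 2
except ImportError:
    from urllib.parse import urlencode    # Python 3

def yql_quote_url(symbol):
    # Query text as given in the answer; urlencode handles the escaping.
    query = 'select * from yahoo.finance.quotes where symbol = "%s"' % symbol
    params = urlencode({
        "q": query,
        "env": "store://datatables.org/alltableswithkeys",
    })
    return "http://query.yahooapis.com/v1/public/yql?" + params

print(yql_quote_url("YHOO"))
```

The returned URL can then be fetched with any HTTP client (urllib2.urlopen on 2.6) and the XML response parsed as the answer suggests.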
Download prices with python
I have tried this before. I'm completely at a loss for ideas. On this page there is a dialog box to get quotes. http://www.schwab.com/public/schwab/non_navigable/marketing/email/get_quote.html? I used SPY, XLV, IBM, MSFT. The output is the above with a table. If you have an account the quotes are real time --- via cookie. How do I get the table into Python 2.6, with the data as a list or dictionary?
[ "Use something like Beautiful Soup to parse the HTML response from the web site and load it into a dictionary. use the symbol as the key and a tuple of whatever data you're interested in as the value. Iterate over all the symbols returned and add one entry per symbol.\nYou can see examples of how to do this in To...
[ 5, 4, 3, 0 ]
[]
[]
[ "python" ]
stackoverflow_0004219007_python.txt
Q: Python separating images and text from MS office files Is there a way to separate the text and images from MS Office files like Word, Excel, PPT etc. and get the position of the image in a document (where the image starts in the document, between the text)? The application needs to be developed for a Linux box. Please suggest. A: You may want to look at the Python UNO bindings for OpenOffice - wiki at http://wiki.services.openoffice.org/wiki/Python - this should let you open and work with MSOffice docs on linux. What exactly are you trying to accomplish - a new way to HTML-ize Office docs?
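The UNO route above targets the legacy binary formats; for the newer OOXML files (e.g. .docx), which are plain ZIP archives, a rough separation can be done with the standard library alone. A hedged sketch (split_docx is an illustrative name, not an existing API; it recovers only inline paragraph text and the files under word/media/, and says nothing about image position):

```python
# Sketch: separate paragraph text from embedded images in a .docx file
# using only the standard library. Ignores headers, positioning, and the
# legacy binary .doc format.
import zipfile
import xml.etree.ElementTree as ET

W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def split_docx(path):
    """Return (paragraph_texts, image_archive_names) from a .docx file."""
    with zipfile.ZipFile(path) as z:
        # Embedded pictures live under word/media/ inside the archive.
        images = [n for n in z.namelist() if n.startswith("word/media/")]
        root = ET.fromstring(z.read("word/document.xml"))
    paragraphs = []
    for p in root.iter(W_NS + "p"):        # each w:p element is a paragraph
        text = "".join(t.text or "" for t in p.iter(W_NS + "t"))
        if text:
            paragraphs.append(text)
    return paragraphs, images
```

Image position relative to the text would still have to be inferred from where the drawing elements appear inside word/document.xml, which is the harder part of the question.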
Python separating images and text from MS office files
Is there a way to separate the text and images from MS Office files like Word, Excel, PPT etc. and get the position of the image in a document (where the image starts in the document, between the text)? The application needs to be developed for a Linux box. Please suggest.
[ "You may want to look at the Python UNO bindings for OpenOffice - wiki at http://wiki.services.openoffice.org/wiki/Python - this should let you open and work with MSOffice docs on linux.\nWhat exactly are you trying to accomplish - a new way to HTML-ize Office docs?\n" ]
[ 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0004223958_file_python.txt
Q: Override __mul__ from a child class using parent implementation: leads to problems I'm trying to implement the __mul__ method of class C which extends class P. Class P has an implementation of __mul__ but that's only for elements of that type (P() * P()). So in C.__mul__ I want to implement simple multiplication for a float when the argument is float. When it's not I want to use P.__mul__ ... but that leads to problems as in P.__mul__ that's a return P(something)... So basically the fact that they were originally of type C is lost after some operations. The following code better explains the issue. Any idea how to fix this? class MyFloat(object): def __init__(self, a): self.a = a def __mul__(self, other): return MyFloat(self.a * other.a) def __repr__(self): return str(self.a) class MyFloatExt(MyFloat): def __init__(self, a): MyFloat.__init__(self, a) def __add__(self, other): return MyFloatExt(self.a + other.a) def __mul__(self, other): if type(other) == (int, long, float): return MyFloatExt(self.a * other) else: return MyFloat.__mul__(self, other) a = MyFloatExt(0.5) b = MyFloatExt(1.5) c = a + b print c d = a * b print d e = d * c print e print isinstance(e, MyFloat) f = e * 0.5 print f A: First, your typecheck in __mul__ for MyFloatExt should look like isinstance(other,(int,long,float)) or even better isinstance(other,Number) #from numbers import Number Also, you will want to change the definition of __mul__ in MyFloat to this: class MyFloat(object): #... def __mul__(self, other): return type(self)(self.a * other.a) #... so it can create instances of your actual type. And you should prefer calling super over calling MyFloat.__mul__ directly, which will keep working as your type hierarchy evolves.
full source: from numbers import Number class MyFloat(object): def __init__(self, a): self.a = a def __mul__(self, other): return type(self)(self.a * other.a) def __repr__(self): return str(self.a) class MyFloatExt(MyFloat): def __init__(self, a): super(MyFloatExt,self).__init__(a) def __add__(self, other): return type(self)(self.a + other.a) def __mul__(self, other): if isinstance(other,Number): return type(self)(self.a * other) else: return super(MyFloatExt,self).__mul__(other) a = MyFloatExt(0.5) b = MyFloatExt(1.5) c = a + b print c d = a * b print d e = d * c print e print isinstance(e, MyFloat) f = e * 0.5 print f print map(type,[a,b,c,d,e,f]) == [MyFloatExt]*6 A: Two problems here: In your __mul__ implementation of MyFloatExt you're never checking if other is an instance of MyFloatExt isinstance(e, MyFloat) will always be true, because MyFloatExt inherits from MyFloat To fix it: def __mul__(self, other): # check if we deal with a MyFloatExt instance if isinstance(other, MyFloatExt): return MyFloatExt(self.a * other.a) if type(other) in (int, long, float): return MyFloatExt(self.a * other) else: return MyFloat.__mul__(self, other) # do the correct check print isinstance(e, MyFloatExt)
Override __mul__ from a child class using parent implementation: leads to problems
I'm trying to implement the __mul__ method of class C which extends class P. Class P has an implementation of __mul__ but that's only for elements of that type (P() * P()). So in C.__mul__ I want to implement simple multiplication for a float when the argument is float. When it's not I want to use P.__mul__ ... but that leads to problems as in P.__mul__ that's a return P(something)... So basically the fact that they were originally of type C is lost after some operations. The following code better explains the issue. Any idea how to fix this? class MyFloat(object): def __init__(self, a): self.a = a def __mul__(self, other): return MyFloat(self.a * other.a) def __repr__(self): return str(self.a) class MyFloatExt(MyFloat): def __init__(self, a): MyFloat.__init__(self, a) def __add__(self, other): return MyFloatExt(self.a + other.a) def __mul__(self, other): if type(other) == (int, long, float): return MyFloatExt(self.a * other) else: return MyFloat.__mul__(self, other) a = MyFloatExt(0.5) b = MyFloatExt(1.5) c = a + b print c d = a * b print d e = d * c print e print isinstance(e, MyFloat) f = e * 0.5 print f
[ "First you your typecheck in __mul__ for MyFloatExt should look like\nisinstance(other,(int,long,float))\n\nor even better\nisinstance(other,Number) #from numbers import Number\n\nAlso you would like to change definition of __mul__ in MyFloat to this:\nclass MyFloat(object):\n#...\n def __mul__(self, other):\n ...
[ 6, 2 ]
[]
[]
[ "inheritance", "python" ]
stackoverflow_0004233628_inheritance_python.txt
Q: Loop will not break I'm creating a sort of database, using a list that is read in from the user. When the user enters finish I want the while loop to stop. However, for some reason I need to enter finish TWICE for it to break the loop. Also, the list is empty after being returned. def readNames(): nameList = [] count = 0 while count != -1: #infinite loop addList = raw_input("Please enter a name: ") if addList == 'finish': return nameList break nameList.append(addList) print nameList I'm invoking it and checking if it worked with readNames() print readNames() Also, here is the output Please enter a name: Dave ['Dave'] Please enter a name: Gavin ['Dave', 'Gavin'] Please enter a name: Paul ['Dave', 'Gavin', 'Paul'] Please enter a name: Test1 ['Dave', 'Gavin', 'Paul', 'Test1'] Please enter a name: finish Please enter a name: finish [] >>> A: When you do readNames() print readNames() you run the function twice. On the 2nd run you just enter "finish" and that's why your list remains empty. What you want to do is this: def readNames(): nameList = [] while True: #infinite loop addList = raw_input("Please enter a name: ") if addList == 'finish': return nameList nameList.append(addList) # store the result, then print it names = readNames() print names A: I think your calling code is accidentally invoking readNames() twice. A: Ah, after you posted your code, I can see the issue: readNames() print readNames() You call readNames, read those names from stdin as planned, properly return the read names and then throw the result away because you don't assign it to anything (names = readNames()). Then you call readNames again, and it appears to you as if it didn't exit the loop (it did, but you told it to loop again). You type finish again, and the second invocation of readNames ends without any names entered (nameList is a local variable, so it is lost after the function execution ends), so you get back [].
To fix this, (1) brush up your general programming knowledge ;) and (2) do something like names = readNames(); print names.
Loop will not break
I'm creating a sort of database, using a list that is read in from the user. When the user enters finish I want the while loop to stop. However, for some reason I need to enter finish TWICE for it to break the loop. Also, the list is empty after being returned. def readNames(): nameList = [] count = 0 while count != -1: #infinite loop addList = raw_input("Please enter a name: ") if addList == 'finish': return nameList break nameList.append(addList) print nameList I'm invoking it and checking if it worked with readNames() print readNames() Also, here is the output Please enter a name: Dave ['Dave'] Please enter a name: Gavin ['Dave', 'Gavin'] Please enter a name: Paul ['Dave', 'Gavin', 'Paul'] Please enter a name: Test1 ['Dave', 'Gavin', 'Paul', 'Test1'] Please enter a name: finish Please enter a name: finish [] >>>
[ "When you do\nreadNames()\nprint readNames()\n\nyou run the function twice. On the 2nd run you just enter \"finish\" and thats why your list remains empty.\nWhat you want to do is this:\ndef readNames():\n nameList = []\n while True: #infinite loop\n addList = raw_input(\"Please enter a name: \")\n ...
[ 4, 2, 1 ]
[ "Could you not replace\n\nif addList == 'finish':\n return nameList\n break\n\nWith\n\nif addList == 'finish':\n return nameList\n count = -1\n\n?\nJames\n" ]
[ -1 ]
[ "break", "list", "python", "while_loop" ]
stackoverflow_0004233785_break_list_python_while_loop.txt
Q: In python, how can you retrieve a key from a dictionary? I have a hashable identifier for putting things in a dictionary: class identifier(): def __init__(self, d): self.my_dict = d self.my_frozenset = frozenset(d.items()) def __getitem__(self, item): return self.my_dict[item] def __hash__(self): return hash(self.my_frozenset) def __eq__(self, rhs): return self.my_frozenset == rhs.my_frozenset def __ne__(self, rhs): return not self == rhs I have a node type that encapsulates identifier for purposes of hashing and equality: class node: def __init__(self, id, value): # id is of type identifier self.id = id self.value = value # define other data here... def __hash__(self): return hash(self.id) def __eq__(self, rhs): if isinstance(rhs, node): return self.id == rhs.id ### for the case when rhs is an identifier; this allows dictionary ### node lookup of a key without wrapping it in a node return self.id == rhs def __ne__(self, rhs): return not self == rhs I put some nodes into a dictionary: d = {} n1 = node(identifier({'name':'Bob'}), value=1) n2 = node(identifier({'name':'Alex'}), value=2) n3 = node(identifier({'name':'Alex', 'nationality':'Japanese'}), value=3) d[n1] = 'Node 1' d[n2] = 'Node 2' d[n3] = 'Node 3' Some time later, I have only an identifier: my_id = identifier({'name':'Alex'}) Is there any way to efficiently lookup the node that has been stored with this identifier in this dictionary? Please note that this is a little trickier than it sounds; I know that I can trivially use d[my_id] to retrieve the associated item 'Node 2', but I want to efficiently return a reference to n2. I know that I could do it by looking at every element in d, but I've tried that and it's much too slow (the dictionary has thousands of items in it and I do this a fair number of times). I know that internally dict is using the hash and eq operators for that identifier to store node n2 and its associated item, 'Node 2'. 
In fact, using my_id to lookup 'Node 2' actually needs to lookup n2 as an intermediate step, so this should definitely be possible. I am using this to store data in a graph. The nodes have a lot of additional data (where I put value) that is not used in the hash. I didn't create the graph package I'm using (networkX), but I can see the dictionary that stores my nodes. I could also keep an extra dictionary around of identifiers to nodes, but this would be a pain (I'd need to wrap the graph class and rewrite all add node, remove node, add nodes from list, remove nodes from list, add edge, etc. type functions to keep that dictionary up to date). This is quite the puzzle. Any help would be really appreciated! A: Instead of d[n1] = 'Node 1' use: d[n1] = ('Node 1', n1) Then you have access to n1 no matter how you found the value. I don't believe there is a way with dictionaries to retrieve the original key k1 if all you have is a k2 equal to k1. A: Have two dictionaries. - Whenever you add a key/value to the primary dictionary, also add them to the reverse dictionary, but with the key/value swapped. For example: # When adding a value: d[n2] = value; # Must also add to the reverse dictionary: rev[value] = n2 # This means that: value = d[n2] # Will be able to efficiently find out the key used with: key = rev[value] A: Here is a way to use a custom node object with NetworkX. If you store the object in the "node attribute" dictionary you can use it as a reverse dictionary to get the object back by referencing the id. It's a little awkward but it works. 
import networkx as nx class Node(object): def __init__(self,id,**attr): self.id=id self.properties={} self.properties.update(attr) def __hash__(self): return self.id def __eq__(self,other): return self.id==other.id def __repr__(self): return str(self.id) def __str__(self): return str(self.id) G=nx.Graph() # add two nodes n1=Node(1,color='red') # the node id must be hashable n2=Node(2,color='green') G.add_node(n1,obj=n1) G.add_node(n2,obj=n2) # check what we have print G.nodes() # 1,2 print n1,n1.properties['color'] # 1,red print n1==n2 # False for n in G: print n.properties['color'] print Node(1) in G # True # change color of node 1 n1.properties['color']='blue' for n in G: print n.properties # use "node attribute" data in NetworkX to retrieve object n=G.node[Node(1)]['obj'] print type(n) # <class '__main__.Node'> print n # 1 print n.id # 1 print n.properties # {'color': 'blue'} You can of course define a function that makes this simpler: def get_node(G,n): return G.node[Node(n)]['obj'] n=get_node(G,1) print n.properties A: The thing is, there is no guarantee that the key is effectively a Node. What if you do d[my_id]=d[my_id] Everything would still work perfectly except now, your key is an Identifier and not a Node. Allowing two classes to "equal" like this is really dangerous. If you really need to find a Node by its name, that should be done in the Node class or externally, but it shouldn't depend on whether or not the node is present in a hash. If you can't modify that (because you can't modify the code), then I guess you are stuck doing it the inefficient way A: using my_id to lookup 'Node 2' actually needs to lookup n2 as an intermediate step This is not true. A dictionary is a hashtable: it maps the hash of an item to (a bucket of) entries. When you ask for d[my_id], Python first gets hash(my_id) and then looks that up in d. You are getting confused because you have that hash(n1) == hash(id1), which is a Very Bad Thing. 
You are asking for a mapping between identifiers and nodes. If you want one of these, you will have to create one yourself. Are the identifiers all matched with nodes upon creation, or do you construct them later? That is, are you really asking to be able to find the node with identifier identifier({'name':'Alex'}), or has that identifier already been created and added to a node? If the latter, you could do the following: class Node: def __init__(self, id, value): id.parent = self ...
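The "extra dictionary of identifiers to nodes" the asker wants to avoid can in fact be a very small side table: map each key to itself, so an equal-but-distinct key can recover the originally stored object. A minimal sketch of that trick (Id and put are illustrative names, not part of the question's code):

```python
# Sketch: a "canonical key" side dict maps every key to itself, so a
# lookup with an equal-but-distinct key returns the original key object.
class Id(object):
    def __init__(self, name):
        self.name = name
    def __hash__(self):
        return hash(self.name)
    def __eq__(self, other):
        return isinstance(other, Id) and self.name == other.name

d = {}          # the real mapping: key -> value
canonical = {}  # side table: key -> the exact key object stored

def put(key, value):
    canonical[key] = key   # remember the object actually used as the key
    d[key] = value

n1 = Id("Alex")
put(n1, "Node 2")

probe = Id("Alex")             # equal to n1, but a different object
assert d[probe] == "Node 2"    # normal value lookup still works
assert canonical[probe] is n1  # ...and the original key is recoverable
```

This costs one extra dict lookup per retrieval and one extra store per insert, with no scan of the dictionary.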
In python, how can you retrieve a key from a dictionary?
I have a hashable identifier for putting things in a dictionary: class identifier(): def __init__(self, d): self.my_dict = d self.my_frozenset = frozenset(d.items()) def __getitem__(self, item): return self.my_dict[item] def __hash__(self): return hash(self.my_frozenset) def __eq__(self, rhs): return self.my_frozenset == rhs.my_frozenset def __ne__(self, rhs): return not self == rhs I have a node type that encapsulates identifier for purposes of hashing and equality: class node: def __init__(self, id, value): # id is of type identifier self.id = id self.value = value # define other data here... def __hash__(self): return hash(self.id) def __eq__(self, rhs): if isinstance(rhs, node): return self.id == rhs.id ### for the case when rhs is an identifier; this allows dictionary ### node lookup of a key without wrapping it in a node return self.id == rhs def __ne__(self, rhs): return not self == rhs I put some nodes into a dictionary: d = {} n1 = node(identifier({'name':'Bob'}), value=1) n2 = node(identifier({'name':'Alex'}), value=2) n3 = node(identifier({'name':'Alex', 'nationality':'Japanese'}), value=3) d[n1] = 'Node 1' d[n2] = 'Node 2' d[n3] = 'Node 3' Some time later, I have only an identifier: my_id = identifier({'name':'Alex'}) Is there any way to efficiently lookup the node that has been stored with this identifier in this dictionary? Please note that this is a little trickier than it sounds; I know that I can trivially use d[my_id] to retrieve the associated item 'Node 2', but I want to efficiently return a reference to n2. I know that I could do it by looking at every element in d, but I've tried that and it's much too slow (the dictionary has thousands of items in it and I do this a fair number of times). I know that internally dict is using the hash and eq operators for that identifier to store node n2 and its associated item, 'Node 2'. 
In fact, using my_id to lookup 'Node 2' actually needs to lookup n2 as an intermediate step, so this should definitely be possible. I am using this to store data in a graph. The nodes have a lot of additional data (where I put value) that is not used in the hash. I didn't create the graph package I'm using (networkX), but I can see the dictionary that stores my nodes. I could also keep an extra dictionary around of identifiers to nodes, but this would be a pain (I'd need to wrap the graph class and rewrite all add node, remove node, add nodes from list, remove nodes from list, add edge, etc. type functions to keep that dictionary up to date). This is quite the puzzle. Any help would be really appreciated!
[ "Instead of \nd[n1] = 'Node 1'\n\nuse:\nd[n1] = ('Node 1', n1)\n\nThen you have access to n1 no matter how you found the value.\nI don't believe there is a way with dictionaries to retrieve the original key k1 if all you have is a k2 equal to k1.\n", "Have two dictionaries.\n - Whenever you add a key/value to the...
[ 5, 3, 1, 0, 0 ]
[]
[]
[ "dictionary", "networkx", "performance", "python", "reverse_lookup" ]
stackoverflow_0004224895_dictionary_networkx_performance_python_reverse_lookup.txt
Q: I need a class that creates a dictionary file that lives on disk I want to create a very very large dictionary, and I'd like to store it on disk so as not to kill my memory. Basically, my needs are a cross between cPickle and the dict class, in that it's a class that Python treats like a dictionary, but happens to live on the disk. My first thought was to create some sort of wrapper around a simple MySQL table, but I have to store types in the entries of the structure that MySQL can't even hope to support out of the box. A: The simplest way is the shelve module, which works almost exactly like a dictionary: import shelve myshelf = shelve.open("filename") # Might turn into filename.db myshelf["A"] = "First letter of alphabet" print myshelf["A"] # ... myshelf.close() # You should do this explicitly when you're finished Note the caveats in the module documentation about changing mutable values (lists, dicts, etc.) stored on a shelf (you can, but it takes a bit more fiddling). It uses (c)pickle and dbm under the hood, so it will cheerfully store anything you can pickle. I don't know how well it performs relative to other solutions, but it doesn't require any custom code or third party libraries. A: Look at dbm in specific, and generally the entire Data Persistence chapter in the manual. Most key/value-store databases (gdbm, bdb, metakit, etc.) have a dict-like API which would probably serve your needs (and are fully embeddable so no need to manage an external database process). A: File IO is expensive in terms of CPU cycles. So my first thoughts would be in favor of a database. However, you could also split your "English dictionary" across multiple files so that (say) each file holds words that start with a specific letter of the alphabet (therefore, you'll have 26 files). 
Now, when you say I want to create a very very large dictionary, do you mean a python dict or an English dictionary with words and their definitions, stored in a dict (with words as keys and definitions as values)? The second can be easily implemented with cPickle, as you pointed out. Again, if memory is your main concern, then you'll need to recheck the number of files you want to use, because, if you're pickling dicts into each file, then you want the dicts to not get too big Perhaps a usable solution for you would be to do this (I am going to assume that all the English words are sorted): Get all the words in the English language into one file. Count how many such words there are and split them into as many files as you see fit, depending on how large the files get. Now, these smaller files contain the words and their meanings This is how this solution is useful: Say that your problem is to lookup the definition of a particular word. Now, at runtime, you can read the first word in each file, and determine if the word that you are looking for is in the previous file that you read (you will need a loop counter to check if you are at the last file). Once you have determined which file the word you are looking for is in, then you can open that file and load the contents of that file into the dict. It's a little difficult to offer a solution without knowing more details about the problem at hand.
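The mutable-value caveat mentioned in the shelve answer can be made concrete: with the default settings, an in-place mutation of a stored value is silently lost, while opening with writeback=True (or explicitly re-assigning the key) persists it. A small sketch:

```python
# Sketch of shelve's mutable-value caveat: appending to a stored list is
# lost unless the shelf is opened with writeback=True (or the value is
# explicitly re-assigned to its key).
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo_shelf")

s = shelve.open(path)
s["words"] = ["alpha"]
s["words"].append("beta")      # mutates a temporary unpickled copy only
s.close()

s = shelve.open(path)
lost = s["words"]              # still ['alpha'] -- the append was lost
s.close()

s = shelve.open(path, writeback=True)
s["words"].append("beta")      # cached entry, written back on close/sync
s.close()

s = shelve.open(path)
kept = s["words"]              # now ['alpha', 'beta']
s.close()
```

writeback=True trades memory (every accessed entry is cached) and a slower close() for this convenience, which is exactly the trade-off the module documentation warns about.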
I need a class that creates a dictionary file that lives on disk
I want to create a very very large dictionary, and I'd like to store it on disk so as not to kill my memory. Basically, my needs are a cross between cPickle and the dict class, in that it's a class that Python treats like a dictionary, but happens to live on the disk. My first thought was to create some sort of wrapper around a simple MySQL table, but I have to store types in the entries of the structure that MySQL can't even hope to support out of the box.
[ "The simplest way is the shelve module, which works almost exactly like a dictionary:\nimport shelve\nmyshelf = shelve.open(\"filename\") # Might turn into filename.db\nmyshelf[\"A\"] = \"First letter of alphabet\"\nprint myshelf[\"A\"]\n# ...\nmyshelf.close() # You should do this explicitly when you're finished\...
[ 2, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0004233923_python.txt
Q: Changing python immutable type while iterating through a mutable container such as list I am wondering what is the most pythonic way to do the following and have it work: strings = ['a','b'] for s in strings: s = s+'c' obviously this doesn't work in Python but the result that I want to achieve is strings = ['ac','bc'] What's the most pythonic way to achieve this kind of result? Thanks for the great answers! A: strings = ['a', 'b'] strings = [s + 'c' for s in strings] A: You can use list comprehension to create a list that has these values: [s + 'c' for s in strings]. You can modify the list in-place like this: for i, s in enumerate(strings): strings[i] = s + 'c' But I found that quite often, in-place modification is not needed. Look at your code to see if this applies. A: You can use the map function for that. strings = ['a', 'b'] strings = map(lambda s: s + 'c', strings)
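The core of why the question's loop does nothing can be shown in a few lines: the loop variable is just a name rebound on each assignment, so only index-based assignment (as in the enumerate answer) actually changes the list.

```python
# Demonstration: rebinding the loop variable never touches the list,
# while assigning back through the index does.
strings = ['a', 'b']
for s in strings:
    s = s + 'c'                    # rebinds the local name s only
assert strings == ['a', 'b']       # list unchanged

for i, s in enumerate(strings):
    strings[i] = s + 'c'           # assigns back into the list slot
assert strings == ['ac', 'bc']
```

The list-comprehension answer sidesteps the issue entirely by building a new list instead of mutating the old one.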
Changing python immutable type while iterating through a mutable container such as list
I am wondering what is the most pythonic way to do the following and have it work: strings = ['a','b'] for s in strings: s = s+'c' obviously this doesn't work in Python but the result that I want to achieve is strings = ['ac','bc'] What's the most pythonic way to achieve this kind of result? Thanks for the great answers!
[ "strings = ['a', 'b']\nstrings = [s + 'c' for s in strings]\n\n", "You can use list comprehension to create a list that has these values: [s + 'c' for s in strings]. You can modify the list in-place like this:\nfor i, s in enumerate(strings):\n strings[i] = s + 'c'\n\nBut I found that quite often, in-place mod...
[ 6, 3, 1 ]
[]
[]
[ "immutability", "iteration", "python", "types" ]
stackoverflow_0004234298_immutability_iteration_python_types.txt
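The two approaches from the answers above can be checked side by side (a quick sketch):

```python
strings = ['a', 'b']

# Rebinding the loop variable (s = s + 'c') never touches the list;
# build a new list instead:
new_strings = [s + 'c' for s in strings]

# Or mutate in place, when other references to the same list must see the change:
for i, s in enumerate(strings):
    strings[i] = s + 'c'
```

One caveat for the map-based answer: on Python 3, map returns a lazy iterator, so it needs a wrapping `list(...)` to reproduce the same result.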
Q: 64bit processor with 32bit operating system program compatibilities? on 64bit processors. with 32bit operating system all programs works fine ? like suppose all python 32bit installation programs works fine ? or there are any issues found? a program or software developed on 32bit operating system & 32bit processor installed on 32bit operating system and 64bit processor does it work well or any issues? why i am asking this, bec these days on market all 64bit processor os computers coming and mostly sold. there is no choice also for us..if i want to buy a i5processor computer.. A: on 64bit processors. with 32bit operating system all programs works fine ? Yes. like suppose all python 32bit installation programs works fine ? or there are any issues found? No issues. a program or software developed on 32bit operating system & 32bit processor installed on 32bit operating system and 64bit processor does it work well or any issues? Yes. why i am asking this, bec these days on market all 64bit processor os computers coming and mostly sold. there is no choice also for us..if i want to buy a i5processor computer.. If you have say a 64bit edition of Windows, there is the 64bit version of Python available. The advantage for you is that you can have more RAM (> 4GB) and thus access large amounts of data. The difference between 32bit and 64bit in case of Python should not worry you at all. Python is a high level language, the process architecture and the difference in size of data types is not your concern. A: Should be fine. X64 processors run x32 code natively. My dev PC is x64 but it has run a 32 bit OS and code with no problems whatsoever for 4 or so years. Similarly, it should run 16 bit (8086) code and operating systems just fine!
64bit processor with 32bit operating system program compatibilities?
on 64bit processors. with 32bit operating system all programs works fine ? like suppose all python 32bit installation programs works fine ? or there are any issues found? a program or software developed on 32bit operating system & 32bit processor installed on 32bit operating system and 64bit processor does it work well or any issues? why i am asking this, bec these days on market all 64bit processor os computers coming and mostly sold. there is no choice also for us..if i want to buy a i5processor computer..
[ "\non 64bit processors. with 32bit\n operating system all programs works\n fine ?\n\nYes. \n\nlike suppose all python 32bit\n installation programs works fine ? or\n there are any issues found?\n\nNo issues. \n\na program or software developed on\n 32bit operating system & 32bit\n processor installed on 32bit...
[ 3, 0 ]
[]
[]
[ "linux", "python" ]
stackoverflow_0004234275_linux_python.txt
Q: using stored variables as regex patterns is there a way for python to use values stored in variables as patterns in regex? supposing i have two variables: begin_tag = '<%marker>' end_tag = '<%marker/>' doc = '<html> something here <%marker> and here and here <%marker/> and more here <html>' how do you extract the text between begin_tag and end_tag? the tags are determined after parsing another file, so they're not fixed. A: Don't use a regex at all. Parse HTML intelligently! from BeautifulSoup import BeautifulSoup marker = 'mytag' doc = '<html>some stuff <mytag> different stuff </mytag> other things </html>' soup = BeautifulSoup(doc) print soup.find(marker).renderContents() A: Regular expressions are strings. So you can do anything you want to build them: concatenate them (using + operator), interpolation (using % operator), etc. Just concatenate the variables you want to match with the regex you want to use: begin_tag + ".*?" + end_tag The only pitfall is when your variables contain characters that might be taken by the regular expression engine to have special meaning. You need to make sure they are escaped properly in that case. You can do this with the re.escape() function. The usual caveat ("don't parse HTML with regular expressions") applies.
using stored variables as regex patterns
is there a way for python to use values stored in variables as patterns in regex? supposing i have two variables: begin_tag = '<%marker>' end_tag = '<%marker/>' doc = '<html> something here <%marker> and here and here <%marker/> and more here <html>' how do you extract the text between begin_tag and end_tag? the tags are determined after parsing another file, so they're not fixed.
[ "Don't use a regex at all. parse html inteligently!\nfrom BeautifulSoup import BeautifulSoup\nmarker = 'mytag'\ndoc = '<html>some stuff <mytag> different stuff </mytag> other things </html>'\nsoup = BeautifulSoup(doc)\nprint soup.find(marker).renderContents()\n\n", "Regular expressions are strings. So you can do...
[ 2, 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0004234240_python_regex.txt
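Putting the re.escape() advice together with the question's tags gives a sketch like this (the capture group is an assumption about what "the text between" should mean):

```python
import re

begin_tag = '<%marker>'
end_tag = '<%marker/>'
doc = '<html> something here <%marker> and here and here <%marker/> and more here <html>'

# Escape the tags so any regex metacharacters in them match literally,
# then capture everything between them (non-greedy, across newlines).
pattern = re.escape(begin_tag) + '(.*?)' + re.escape(end_tag)
match = re.search(pattern, doc, re.DOTALL)
between = match.group(1)
```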
Q: Web Proxy to Simulate Network Problems I need a way to simulate connectivity problems in an automated test suite, on Linux, and preferably from Python. Some sort of proxy that I can put in front of the web server that can hang or drop connections after one trigger or another (after X bytes transferred, etc) would be perfect. It doesn't seem too hard to build, but I'd rather grab something pre-existing, if anyone has any good recommendations. A: when i needed one, i found that building it yourself is the best thing.. start by raising a threaded server in python http://docs.python.org/dev/library/socketserver.html (you don't have to use the class itself). and it's very simple: in the new connection thread, you create a new socket and connects it to the real server. then, you put both of them in a list and sends it to select.select (import select). then, when socket x receive data - sends it to y. when socket y receives data sends it to x. (don't forget to close the socket when you receive empty string). now you can do whatever you want.. if you need anything, i'm here..
Web Proxy to Simulate Network Problems
I need a way to simulate connectivity problems in an automated test suite, on Linux, and preferably from Python. Some sort of proxy that I can put in front of the web server that can hang or drop connections after one trigger or another (after X bytes transferred, etc) would be perfect. It doesn't seem too hard to build, but I'd rather grab something pre-existing, if anyone has any good recommendations.
[ "when i needed one, i found that building it yourself is the best thing..\nstart by raising a threaded server in python http://docs.python.org/dev/library/socketserver.html (you don't have to use the class itself).\nand it's very simple:\nin the new connection thread, you create a new socket and connects it to the ...
[ 2 ]
[]
[]
[ "http", "proxy", "python" ]
stackoverflow_0004218900_http_proxy_python.txt
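The select-based shuttle described in the answer can be sketched roughly as follows; the function name relay, the max_bytes cutoff (the "drop after X bytes" trigger from the question), and the in-process socketpair demo are all illustrative, not from the answer:

```python
import select
import socket
import threading

def relay(client, server, max_bytes=None):
    """Shuttle bytes between two connected sockets; optionally drop both
    connections after max_bytes have been forwarded, simulating a failure."""
    forwarded = 0
    sockets = [client, server]
    peer = {client: server, server: client}
    while True:
        readable, _, _ = select.select(sockets, [], [])
        for sock in readable:
            data = sock.recv(4096)
            if not data:                    # one side hung up: tear down both
                for s in sockets:
                    s.close()
                return forwarded
            forwarded += len(data)
            if max_bytes is not None and forwarded > max_bytes:
                for s in sockets:           # simulated mid-transfer failure
                    s.close()
                return forwarded
            peer[sock].sendall(data)

# Demo: wire two in-process socket pairs through the relay.
a1, a2 = socket.socketpair()
b1, b2 = socket.socketpair()
t = threading.Thread(target=relay, args=(a2, b1))
t.start()
a1.sendall(b"hello")
echoed = b2.recv(4096)      # bytes that crossed the relay
b2.sendall(b"world")
replied = a1.recv(4096)     # and back the other way
a1.close()                  # EOF propagates; the relay shuts down
t.join()
b2.close()
```

In a real test suite, the relay would listen on a local port and forward to the web server under test, rather than use socketpairs.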
Q: Unicode not printing correctly to cp850 (cp437), play card suits To summarize: How do I print unicode system independently to produce play card symbols? What I do wrong, I consider myself quite fluent in Python, except I seem not able to print correctly! # coding: utf-8 from __future__ import print_function from __future__ import unicode_literals import sys symbols = ('♥','♦','♠','♣') # red suits to sdterr for IDLE print(' '.join(symbols[:2]), file=sys.stderr) print(' '.join(symbols[2:])) sys.stdout.write(symbols) # also correct in IDLE print(' '.join(symbols)) Printing to console, which is main consern for console application, is failing miserably though: J:\test>chcp Aktiivinen koodisivu: 850 J:\test>symbol2 Traceback (most recent call last): File "J:\test\symbol2.py", line 9, in <module> print(''.join(symbols)) File "J:\Python26\lib\encodings\cp850.py", line 12, in encode return codecs.charmap_encode(input,errors,encoding_map) UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-3: character maps to <unde fined> J:\test>chcp 437 Aktiivinen koodisivu: 437 J:\test>d:\Python27\python.exe symbol2.py Traceback (most recent call last): File "symbol2.py", line 6, in <module> print(' '.join(symbols)) File "d:\Python27\lib\encodings\cp437.py", line 12, in encode return codecs.charmap_encode(input,errors,encoding_map) UnicodeEncodeError: 'charmap' codec can't encode character u'\u2660' in position 0: character maps o <undefined> J:\test> So summa summarum I have console application which works as long as you are not using console, but IDLE. I can of course generate the symbols myself by producing them by chr: # correct symbols for cp850 print(''.join(chr(n) for n in range(3,3+4))) But this looks very stupid way to do it. And I do not make programs only run on Windows or have many special cases (like conditional compiling). I want readable code. 
I do not mind which letters it outputs, as long as it looks correct no matter if it is Nokia phone, Windows or Linux. Unicode should do it but it does not print correctly to Console A: Whenever I need to output utf-8 characters, I use the following approach: import codecs out = codecs.getwriter('utf-8')(sys.stdout) str = u'♠' out.write("%s\n" % str) This saves me an encode('utf-8') every time something needs to be sent to stdout/stderr. A: In response to the updated question Since all you want to do is to print out UTF-8 characters on the CMD, you're out of luck, CMD does not support UTF-8: Is there a Windows command shell that will display Unicode characters? Old Answer It's not totally clear what you're trying to do here, my best bet is that you want to write the encoded UTF-8 to a file. Your problems are: symbols = ('♠','♥', '♦','♣') while your file encoding may be UTF-8, unless you're using Python 3 your strings won't be UTF-8 by default, you need to prefix them with a small u: symbols = (u'♠', u'♥', u'♦', u'♣') Your str(arg) converts the unicode string back into a normal one, just leave it out or use unicode(arg) to convert to a unicode string The naming of .decode() may be confusing, this decodes bytes into UTF-8, but what you need to do is to encode UTF-8 into bytes so use .encode() You're not writing to the file in binary mode, instead of open('test.txt', 'w') you need to use open('test.txt', 'wb') (notice the wb) this will open the file in binary mode which is important on Windows If we put all of this together we get: # -*- coding: utf-8 -*- from __future__ import print_function import sys symbols = (u'♠',u'♥', u'♦',u'♣') print(' '.join(symbols)) print('Failure!') def print(*args,**kwargs): end = kwargs['end'] if 'end' in kwargs else '\n' sep = kwargs['sep'] if 'sep' in kwargs else ' ' stdout = sys.stdout if 'file' not in kwargs else kwargs['file'] stdout.write(sep.join(unicode(arg).encode('utf-8') for arg in args)) stdout.write(end) print(*symbols) 
print('Success!') with open('test.txt', 'wb') as testfile: print(*symbols, file=testfile) That happily writes the byte encoded UTF-8 to the file (at least on my Ubuntu box here). A: Use Unicode strings and the codecs module: Either: # coding: utf-8 from __future__ import print_function import sys import codecs symbols = (u'♠',u'♥',u'♦',u'♣') print(u' '.join(symbols)) print(*symbols) with codecs.open('test.txt','w','utf-8') as testfile: print(*symbols, file=testfile) or: # coding: utf-8 from __future__ import print_function from __future__ import unicode_literals import sys import codecs symbols = ('♠','♥','♦','♣') print(' '.join(symbols)) print(*symbols) with codecs.open('test.txt','w','utf-8') as testfile: print(*symbols, file=testfile) No need to re-implement print. A: UTF-8 in the Windows console is a long and painful story. You can read issue 1602 and issue 6058 and have something that works, more or less, but it's fragile. Let me summarise: add 'cp65001' as an alias for 'utf8' in Lib/encodings/aliases.py select Lucida Console or Consolas as your console font run chcp 65001 run python
Unicode not printing correctly to cp850 (cp437), play card suits
To summarize: How do I print unicode system independently to produce play card symbols? What I do wrong, I consider myself quite fluent in Python, except I seem not able to print correctly! # coding: utf-8 from __future__ import print_function from __future__ import unicode_literals import sys symbols = ('♥','♦','♠','♣') # red suits to sdterr for IDLE print(' '.join(symbols[:2]), file=sys.stderr) print(' '.join(symbols[2:])) sys.stdout.write(symbols) # also correct in IDLE print(' '.join(symbols)) Printing to console, which is main consern for console application, is failing miserably though: J:\test>chcp Aktiivinen koodisivu: 850 J:\test>symbol2 Traceback (most recent call last): File "J:\test\symbol2.py", line 9, in <module> print(''.join(symbols)) File "J:\Python26\lib\encodings\cp850.py", line 12, in encode return codecs.charmap_encode(input,errors,encoding_map) UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-3: character maps to <unde fined> J:\test>chcp 437 Aktiivinen koodisivu: 437 J:\test>d:\Python27\python.exe symbol2.py Traceback (most recent call last): File "symbol2.py", line 6, in <module> print(' '.join(symbols)) File "d:\Python27\lib\encodings\cp437.py", line 12, in encode return codecs.charmap_encode(input,errors,encoding_map) UnicodeEncodeError: 'charmap' codec can't encode character u'\u2660' in position 0: character maps o <undefined> J:\test> So summa summarum I have console application which works as long as you are not using console, but IDLE. I can of course generate the symbols myself by producing them by chr: # correct symbols for cp850 print(''.join(chr(n) for n in range(3,3+4))) But this looks very stupid way to do it. And I do not make programs only run on Windows or have many special cases (like conditional compiling). I want readable code. I do not mind which letters it outputs, as long as it looks correct no matter if it is Nokia phone, Windows or Linux. 
Unicode should do it but it does not print correctly to Console
[ "Whenever I need to output utf-8 characters, I use the following approach:\nimport codecs\n\nout = codecs.getwriter('utf-8')(sys.stdout)\n\nstr = u'♠'\n\nout.write(\"%s\\n\" % str)\n\nThis saves me an encode('utf-8') every time something needs to be sent to sdtout/stderr.\n", "In response to the updated question\...
[ 2, 1, 1, 0 ]
[]
[]
[ "cmd", "python", "windows", "windows_xp" ]
stackoverflow_0004233227_cmd_python_windows_windows_xp.txt
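The answers above are Python 2-era; on Python 3 all strings are Unicode, and the remaining console problem reduces to the target encoding. One defensive sketch (errors='replace' is an illustrative fallback, not taken from the answers) that never raises on a cp850 console:

```python
symbols = ('♥', '♦', '♠', '♣')
line = ' '.join(symbols)

# Encoding to a legacy codepage raises for characters the codepage lacks,
# unless a fallback policy is given. cp850 has no card suits, so each one
# is substituted with '?' instead of raising UnicodeEncodeError.
as_cp850 = line.encode('cp850', errors='replace')

# A UTF-8 capable terminal can receive the symbols losslessly.
as_utf8 = line.encode('utf-8')
```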
Q: Access XAMPP by default via MySQLdb Python lib on Ubuntu 10.10 I'm attempting to build a Virtual Machine to develop a Django application. The OS is Ubuntu 10.10. I have most everything installed. The last piece is getting MySQLdb to work with the MySQL instance that comes with XAMPP. How can I get MySQLdb to default to work with the XAMPP MySQL? I found this: Accessing a XAMPP mysql via Python I understand the problem there, but the solution doesn't work for me because Django handles creating connections behind the scenes. I also don't want to be manipulating Django for some an application. I've attempted to modify the My.cnf in two different ways but it doesn't work. I'm still getting the same error. That error is listed here: Traceback (most recent call last): File "test-mysqldb.py", line 4, in <module> db = MySQLdb.connect( user="root", passwd="", db="faceless001" ) File "/usr/lib/pymodules/python2.6/MySQLdb/__init__.py", line 81, in Connect return Connection(*args, **kwargs) File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 170, in __init__ super(Connection, self).__init__(*args, **kwargs2) _mysql_exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)") XAMPP installed itself to: /opt/lampp/ I'm looking for the simplest, correct solution for this. Any background info you can share would be helpful, also. A: I don't understand why you don't just install MySQL via sudo apt-get install mysql-server, but nevertheless the answer to your question would seem to be to configure Django to use the specified socket in place of the default. From the documentation to settings.DATABASES, under HOST: If this value starts with a forward slash ('/') and you're using MySQL, MySQL will connect via a Unix socket to the specified socket. So if you just set the HOST setting for your database to "/opt/lampp/var/mysql/mysql.sock" it should find the relevant MySQL socket.
Access XAMPP by default via MySQLdb Python lib on Ubuntu 10.10
I'm attempting to build a Virtual Machine to develop a Django application. The OS is Ubuntu 10.10. I have most everything installed. The last piece is getting MySQLdb to work with the MySQL instance that comes with XAMPP. How can I get MySQLdb to default to work with the XAMPP MySQL? I found this: Accessing a XAMPP mysql via Python I understand the problem there, but the solution doesn't work for me because Django handles creating connections behind the scenes. I also don't want to be manipulating Django for some an application. I've attempted to modify the My.cnf in two different ways but it doesn't work. I'm still getting the same error. That error is listed here: Traceback (most recent call last): File "test-mysqldb.py", line 4, in <module> db = MySQLdb.connect( user="root", passwd="", db="faceless001" ) File "/usr/lib/pymodules/python2.6/MySQLdb/__init__.py", line 81, in Connect return Connection(*args, **kwargs) File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 170, in __init__ super(Connection, self).__init__(*args, **kwargs2) _mysql_exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)") XAMPP installed itself to: /opt/lampp/ I'm looking for the simplest, correct solution for this. Any background info you can share would be helpful, also.
[ "I don't understand why you don't just install MySQL via sudo apt-get install mysql-server, but nevertheless the answer to your question would seem to be to configure Django to use the specified socket in place of the default.\nFrom the documentation to settings.DATABASES, under HOST:\n\nIf this value starts with a...
[ 3 ]
[]
[]
[ "django", "mysql", "python", "ubuntu", "xampp" ]
stackoverflow_0004234316_django_mysql_python_ubuntu_xampp.txt
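The accepted answer's socket-path advice translates to a settings.py fragment roughly like this (a sketch: the database name comes from the question's traceback, and the XAMPP socket path is the one the answer assumes; verify the actual location on your install):

```python
# Django settings.py fragment: a HOST that begins with '/' tells the
# MySQL backend to connect over that Unix socket instead of TCP.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'faceless001',
        'USER': 'root',
        'PASSWORD': '',
        'HOST': '/opt/lampp/var/mysql/mysql.sock',  # XAMPP's socket (assumed path)
    }
}
```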
Q: Google app engine TypeError issue I got a simple app on google app engine using django and I have two classes that look pretty much identical, but one of them crashes with TypeError. Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3211, in _HandleRequest self._Dispatch(dispatcher, self.rfile, outfile, env_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3154, in _Dispatch base_env_dict=env_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 527, in Dispatch base_env_dict=base_env_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2404, in Dispatch self._module_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2314, in ExecuteCGI reset_modules = exec_script(handler_path, cgi_path, hook) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2212, in ExecuteOrImportScript script_module.main() File "C:\Development\fuluus\momadthenomad\main.py", line 20, in main run_wsgi_app(application) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp\util.py", line 97, in run_wsgi_app run_bare_wsgi_app(add_wsgi_middleware(application)) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp\util.py", line 115, in run_bare_wsgi_app result = application(env, _start_response) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 500, in __call__ handler = handler_class() TypeError: NotFoundPage() takes exactly 1 argument (0 given) My class looks like this (main.py): import os import datetime from google.appengine.ext import webapp from google.appengine.ext.webapp import template from google.appengine.ext.webapp.util import run_wsgi_app 
class BasePage(webapp.RequestHandler): def initialize(self, request, response): webapp.RequestHandler.initialize(self, request, response) dir = os.path.join(os.path.dirname(__file__), "../templates") self.template_path = os.path.abspath(dir) def render_to_response(self, page, template_values): page_path = os.path.join(self.template_path, page) self.response.out.write(template.render(page_path, template_values)) class DefaultPage(BasePage): def get(self): visitor = Visitor() visitor.ip = self.request.remote_addr visitor.put() page = Page() page.title = "MY PORTAL" page.subtitle = "Home" page.name = self.request.path visitors_query = Visitor.all().order('-added_on') visitors = visitors_query.fetch(20) self.render_to_response("main.html", { "page": page, "visitors": visitors, }) def NotFoundPage(BasePage): def get(self): page = Page() page.title = "MY PORTAL" page.subtitle = "Not Found" page.name = self.request.path self.render_to_response("empty.html", { "page": page, }) application = webapp.WSGIApplication( [('/', DefaultPage), ('/index.html', DefaultPage), ('/.*', NotFoundPage), ], debug=True) def main(): run_wsgi_app(application) if __name__ == "__main__": main() When I go to /index.html, everythin works perfectly. But when I go to /not-found.html, it crashes with the error. I can't figure out what is wrong with this script. Maybe I am overlooking something. Please help. A: def NotFoundPage(BasePage): should be: class NotFoundPage(BasePage):
Google app engine TypeError issue
I got a simple app on google app engine using django and I have two classes that look pretty much identical, but one of them crashes with TypeError. Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3211, in _HandleRequest self._Dispatch(dispatcher, self.rfile, outfile, env_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3154, in _Dispatch base_env_dict=env_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 527, in Dispatch base_env_dict=base_env_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2404, in Dispatch self._module_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2314, in ExecuteCGI reset_modules = exec_script(handler_path, cgi_path, hook) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2212, in ExecuteOrImportScript script_module.main() File "C:\Development\fuluus\momadthenomad\main.py", line 20, in main run_wsgi_app(application) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp\util.py", line 97, in run_wsgi_app run_bare_wsgi_app(add_wsgi_middleware(application)) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp\util.py", line 115, in run_bare_wsgi_app result = application(env, _start_response) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 500, in __call__ handler = handler_class() TypeError: NotFoundPage() takes exactly 1 argument (0 given) My class looks like this (main.py): import os import datetime from google.appengine.ext import webapp from google.appengine.ext.webapp import template from google.appengine.ext.webapp.util import run_wsgi_app class 
BasePage(webapp.RequestHandler): def initialize(self, request, response): webapp.RequestHandler.initialize(self, request, response) dir = os.path.join(os.path.dirname(__file__), "../templates") self.template_path = os.path.abspath(dir) def render_to_response(self, page, template_values): page_path = os.path.join(self.template_path, page) self.response.out.write(template.render(page_path, template_values)) class DefaultPage(BasePage): def get(self): visitor = Visitor() visitor.ip = self.request.remote_addr visitor.put() page = Page() page.title = "MY PORTAL" page.subtitle = "Home" page.name = self.request.path visitors_query = Visitor.all().order('-added_on') visitors = visitors_query.fetch(20) self.render_to_response("main.html", { "page": page, "visitors": visitors, }) def NotFoundPage(BasePage): def get(self): page = Page() page.title = "MY PORTAL" page.subtitle = "Not Found" page.name = self.request.path self.render_to_response("empty.html", { "page": page, }) application = webapp.WSGIApplication( [('/', DefaultPage), ('/index.html', DefaultPage), ('/.*', NotFoundPage), ], debug=True) def main(): run_wsgi_app(application) if __name__ == "__main__": main() When I go to /index.html, everythin works perfectly. But when I go to /not-found.html, it crashes with the error. I can't figure out what is wrong with this script. Maybe I am overlooking something. Please help.
[ "def NotFoundPage(BasePage):\n\nshould be:\nclass NotFoundPage(BasePage):\n\n" ]
[ 6 ]
[]
[]
[ "django", "google_app_engine", "python", "typeerror" ]
stackoverflow_0004235031_django_google_app_engine_python_typeerror.txt
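The one-word bug in the accepted answer can be reproduced in isolation: def creates a function whose single positional parameter happens to be named BasePage, so calling it with no arguments fails exactly as in the traceback (the class names here are illustrative):

```python
class BasePage(object):
    pass

def NotFoundPage(BasePage):      # a function taking one argument, not a class!
    pass

try:
    handler = NotFoundPage()     # what the framework does: handler_class()
    failed = False
except TypeError:                # "takes exactly 1 argument (0 given)"
    failed = True

class FixedNotFoundPage(BasePage):   # 'class' makes it a real subclass
    pass

handler = FixedNotFoundPage()        # instantiates fine
```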
Q: How to avoid infinite recursion with super()? I have code like this: class A(object): def __init__(self): self.a = 1 class B(A): def __init__(self): self.b = 2 super(self.__class__, self).__init__() class C(B): def __init__(self): self.c = 3 super(self.__class__, self).__init__() Instantiating B works as expected but instantiating C recursed infinitely and causes a stack overflow. How can I solve this? A: When instantiating C calls B.__init__, self.__class__ will still be C, so the super() call brings it back to B. When calling super(), use the class names directly. So in B, call super(B, self), rather than super(self.__class__, self) (and for good measure, use super(C, self) in C). From Python 3, you can just use super() with no arguments to achieve the same thing
How to avoid infinite recursion with super()?
I have code like this: class A(object): def __init__(self): self.a = 1 class B(A): def __init__(self): self.b = 2 super(self.__class__, self).__init__() class C(B): def __init__(self): self.c = 3 super(self.__class__, self).__init__() Instantiating B works as expected but instantiating C recursed infinitely and causes a stack overflow. How can I solve this?
[ "When instantiating C calls B.__init__, self.__class__ will still be C, so the super() call brings it back to B.\nWhen calling super(), use the class names directly. So in B, call super(B, self), rather than super(self.__class__, self) (and for good measure, use super(C, self) in C). From Python 3, you can just use...
[ 51 ]
[]
[]
[ "multiple_inheritance", "oop", "python", "super" ]
stackoverflow_0004235078_multiple_inheritance_oop_python_super.txt
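The fix from the answer can be checked directly: with explicit class names in the super() calls (or the zero-argument super() on Python 3), the chain terminates instead of bouncing between B and C forever:

```python
class A(object):
    def __init__(self):
        self.a = 1

class B(A):
    def __init__(self):
        self.b = 2
        super(B, self).__init__()   # name the class explicitly, not self.__class__

class C(B):
    def __init__(self):
        self.c = 3
        super(C, self).__init__()   # on Python 3, a bare super() also works

obj = C()                           # no infinite recursion
```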
Q: Python: Store the data into two lists and then convert to a dictionary I am new to python, and have a question regarding store columns in lists and converting them to dictionary as follow: I have a data in two column shown below, with nodes(N) and edges(E), and I want to first make a list of these two columns and then make a dictionary of those two lists as {1:[9,2,10],2:[10,111,9],3:[166,175,7],4:[118,155,185]}. How can I do that? Thanks. N E 1 9 1 2 1 10 2 10 2 111 2 9 3 166 3 175 3 7 4 118 4 155 4 185 A: A defaultdict is a subclass of dict which would be useful here: import collections result=collections.defaultdict(list) for n,e in zip(N,E): result[n].append(e) A: yourDict={} for line in file('r.txt', 'r'): k , v = line.split() if k in yourDict.keys(): yourDict[k].append(v) else: yourDict[k] = [v] print yourDict Output: (You can always remove N:E in the last) {'1': ['9', '2', '10'], '3': ['166', '175', '7'], '2': ['10', '111', '9'], '4': ['118', '155', '185'], 'N': ['E']} A: The following does not have a for loop over the edges. That iteration is handled internally by Python using built-in methods, and it may be faster for large graphs: import itertools import operator N = [ 1, 1, 1, 2, 2] E = [ 2, 3, 5, 4, 5] iter_g = itertools.groupby(zip(N,E), operator.itemgetter(0)) dict_g = dict( (v, map(operator.itemgetter(1), n)) for v,n in iter_g ) Also, if you only need the data once, you could just use iter_g and not construct the dictionary. 
A: a bit slower than unutbu's version, but shorter :) result = { } for n, e in ( line.split( ) for line in open( 'r.txt' ) ): result[ n ] = result.setdefault( n, [ ] ) + [ e ] A: This does exactly what you wanted: import collections N = [] E = [] with open('edgelist.txt', 'r') as inputfile: inputfile.readline() # skip header line for line in inputfile: n,e = map(int,line.split()) N.append(n) E.append(e) dct = collections.defaultdict(list) for n,e in zip(N,E): dct[n].append(e) dct = dict(dct) print dct # {1: [9, 2, 10], 2: [10, 111, 9], 3: [166, 175, 7], 4: [118, 155, 185]} A: Here is the short answer: l1 = [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4] l2 = [9, 2, 10, 10, 111, 9, 166, 175, 7, 118, 155,185] d = dict((i,[j for j,k in zip(l2,l1) if k == i]) for i in frozenset(l1))
Python: Store the data into two lists and then convert to a dictionary
I am new to Python, and have a question regarding storing columns in lists and converting them to a dictionary, as follows: I have data in two columns shown below, with nodes (N) and edges (E), and I want to first make a list of these two columns and then make a dictionary of those two lists as {1:[9,2,10],2:[10,111,9],3:[166,175,7],4:[118,155,185]}. How can I do that? Thanks. N E 1 9 1 2 1 10 2 10 2 111 2 9 3 166 3 175 3 7 4 118 4 155 4 185
[ "A defaultdict is a subclass of dict which would be useful here:\nimport collections\nresult=collections.defaultdict(list)\nfor n,e in zip(N,E):\n result[n].append(e)\n\n", "yourDict={}\nfor line in file('r.txt', 'r'):\n k , v = line.split()\n if k in yourDict.keys():\n yourDict[k].append(v)\n ...
[ 6, 2, 2, 1, 1, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0004208204_dictionary_list_python.txt
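A Python 3 version of the defaultdict answer, including the parsing of the two columns (the column data is the one from the question; the answers' zip/izip usage is Python 2-flavored):

```python
from collections import defaultdict

rows = """\
1 9
1 2
1 10
2 10
2 111
2 9
3 166
3 175
3 7
4 118
4 155
4 185
"""

# Group edge values under their node key as the lines are read.
result = defaultdict(list)
for line in rows.splitlines():
    n, e = map(int, line.split())
    result[n].append(e)

result = dict(result)   # plain dict, matching the question's expected output
```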
Q: function that processes multiple lines and/or single line in a file if i have a file, how should i implement a function so that it can both read single and multiple lines. for example: TimC Tim Cxe USA http://www.TimTimTim.com TimTim facebook! ENDBIO Charles Dwight END Mcdon Mcdonald Africa # website in here is empty, but we still need to consider it # bio in here is empty, but we need to include this in the dict # bio can be multiple lines ENDBIO Moon King END etc I am just wondering if anyone could use some python beginner keywords (like don't use yield, break, continue). In my own version, I actually defined 4 functions. 3 of the 4 functions are helper functions. and i want a function to return: dict = {'TimC':{'name':Tim Cxd, 'location':'USA', 'Web':'http://www.TimTimTim.com', 'bio':'TimTim facebook!','follows': ['Charles','Dwight']}, 'Mcdon':{'name':Mcdonald , 'location':'Africa', 'Web':'', 'bio':'','follows': ['Moon','King']}}
A: import sys def bio_gen(it, sentinel="END"): def read_line(): return next(it).partition("#")[0].strip() while True: key = read_line() ret = { 'name': read_line(), 'location': read_line(), 'website': read_line(), 'bio': read_line(), 'follows': []} next(it) #skip the ENDBIO line while True: line = read_line() if line == sentinel: yield key, ret break ret['follows'].append(line) all_bios = dict(bio_gen(sys.stdin)) import pprint pprint.pprint(all_bios) {'Mcdon': {'bio': '', 'follows': ['Moon', 'King'], 'location': 'Africa', 'name': 'Mcdonald', 'website': ''}, 'TimC': {'bio': 'TimTim facebook!', 'follows': ['Charles', 'Dwight'], 'location': 'USA', 'name': 'Tim Cxe', 'website': 'http://www.TimTimTim.com'}}
function that process multilple lines and/or single line in a file
if i have a file, how should i implement a function so that it can both read single and multiple lines. for example: TimC Tim Cxe USA http://www.TimTimTim.com TimTim facebook! ENDBIO Charles Dwight END Mcdon Mcdonald Africa # website in here is empty, but we still need to consider it # bio in here is empty, but we need to include this in the dict # bio can be multiple lines ENDBIO Moon King END etc I am just wondering if anyone could use some python beginner keywords (like dont use yield,break, continue). In my own version, I actually defined 4 functions. 3 of the 4 functions are helper functions. and i want a function to return: dict = {'TimC':{'name':Tim Cxd, 'location':'USA', 'Web':'http://www.TimTimTim.com', 'bio':'TimTim facebook!','follows': ['Charles','Dwight']}, 'Mcdon':{'name':Mcdonald , 'location':'Africa', 'Web':'', 'bio':'','follows': ['Moon','King']}}
[ "from itertools import izip\n\nline_meanings = (\"name\", \"location\", \"web\")\nresult = {}\nuser = None\n\ndef readClean(iterable, sentinel=None):\n for line in iterable:\n line = line.strip()\n if line == sentinel:\n break\n yield line\n\nwhile True:\n line = yourfile.readl...
[ 1, 0, 0 ]
[]
[]
[ "file", "python", "twitter" ]
stackoverflow_0004235106_file_python_twitter.txt
Q: Pygame doesn't draw I need help with a program I'm making. It's a version of Conway's Game of Life. This game is right now made out of 3 files: main.py, cellBoard.py, cell.py main.py takes care of instantiating cellBoard, making it update its data, giving it mouse input, and telling it to draw itself (an instance of the pygame surface is given to it, which hands it to the cells, which are the actual ones that draw themselves) cellBoard.py creates a list of cells based off their size and the screen's size, to fill it properly. It's a 2D list. When it creates the cells it sets their state (alive currently) and hands them its instance of the original surface. cell.py contains all the things a cell can do: die, live, be toggled, be drawn. In fact, when I need to draw the whole board I just call cellBoard's own draw() and it should take care of calling each cell's draw. And it does. The execution gets to the point where the cell should be drawn (checked with prints) and the pixel filling function is executed (using a for loop to cover an area). But nothing is actually drawn to the screen, or at least nothing is visible. I have no idea what is causing this. I checked the code multiple times, and I've even rewritten the whole program from scratch to make it more tidy (and I had the same problem as now). What is causing this? My guess is that somehow the surface instance that Cell gets is no longer usable because something happened to it (it goes through cellBoard before getting to each cell, could that be the problem?) Here's the source code (all 3 files, they are very short and barebones so they should be easy to read): http://dl.dropbox.com/u/2951174/src.zip Thanks in advance to anyone who feels like helping. I need to complete this project very fast so your help would be greatly appreciated.
A: First off, a quick suggestion: people are much more likely to help you if they don't have to download a zip file; next time just post the code parts you suspect of not working. Anyway, the problem seems to be in your main loop: #Keyboard events events = pygame.event.get() for event in events: if event.type == pygame.QUIT: running = 0 #Mouse events #todo #Grid update <------- here you update the grid and the cells are being drawn cb.draw() #Graphical output <------------ here you're filling the WHOLE screen with white screen.fill(THECOLORS["white"]) pygame.display.flip() You need to move your screen.fill call above cb.draw so you don't paint over the cells. Also, in cell.py your drawing code is A) broken and B) bad. Instead of setting every pixel on its own, which is slow and in its current state doesn't draw the cells correctly, you can just as well draw a rectangle: pygame.draw.rect(self.surface, (100, 10, 10), (self.pos[0], self.pos[1], self.size, self.size))
Pygame doesn't draw
I need help with a program I'm making. It's a version of Conway's Game of Life. This game is right now made out of 3 files: main.py, cellBoard.py, cell.py main.py takes care to instance cellboard and make it update its data, give it mouse input, and tell it to draw itself (an instance of the pygame surface is given to it, which handles it to the cells which are the actual ones that draw themselves) cellboard.py creates a list of cells based off their size and the screen's size, to fill it properly. It's a 2D list. When it creates the cells it sets their state (alive currently) and handles them an instance of its instance of the original surface instance. cell.py contains all the things a cell can do: die, live, be toggled, be drawn. In fact, when I need to draw the whole board I just call cellBoard's own draw() and it should take care of calling each cell's draw. And it does. The execution gets to the point where the cell should be drawn (checked with prints) and the pixel filling function is executed (using a for loop to cover an area). But nothing is actually drawn to the screen, or at least nothing is visible. I have no idea what is causing this. I checked the code multiple times, I've even rewritten the whole program from scratch to make it more tidy (and I had the same problem as now) What is causing this? My idea would be that somehow the instance of surface Cell gets is not good anymore to work because something happened to it (it goes through cellboard before getting to each cell, could that be the problem?) Here's the source code (all 3 files, they are very short and barebones so they should be easy to read) http://dl.dropbox.com/u/2951174/src.zip Thanks in advance to anyone who feels like helping. I need to complete this project very fast so your help would be greatly appreciated.
[ "First of a quick suggestion:\nPeople are much more likely to help you if they don't have to download a zip file, next time just post the code parts you suspect not to work.\nAnyways, problem seems to be in your main loop: \n#Keyboard events\nevents = pygame.event.get()\nfor event in events:\n if event.type == ...
[ 6 ]
[]
[]
[ "draw", "pygame", "python" ]
stackoverflow_0004235381_draw_pygame_python.txt
Q: SQLAlchemy declarative one-to-many not defined error I'm trying to figure out how to define a one-to-many relationship using SQLAlchemy's declarative ORM, and trying to get the example to work, but I'm getting an error that my sub-class can't be found (naturally, because it's declared later...) InvalidRequestError: When initializing mapper Mapper|Parent|parent, expression 'Child' failed to locate a name ("name 'Child' is not defined"). If this is a class name, consider adding this relationship() to the class after both dependent classes have been defined. But how do I define this, without the error? The code: from sqlalchemy import create_engine from sqlalchemy import Column, Integer, ForeignKey from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker, relationship from dev.historyMeta import VersionedMeta, VersionedListener global engine, Base, Session engine = create_engine('mysql+mysqldb://user:pass@localhost:3306/testdb', pool_recycle=3600) Base = declarative_base(bind=engine, metaclass=VersionedMeta) Session = sessionmaker(extension=VersionedListener()) class Parent(Base): __tablename__ = 'parent' id = Column(Integer, primary_key=True) children = relationship("Child", backref="parent") class Child(Base): __tablename__ = 'child' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('parent.id')) Base.metadata.create_all() A: Here's how I do it: from sqlalchemy import create_engine from sqlalchemy import Column, Integer, ForeignKey from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker, relationship engine = create_engine('sqlite://', echo=True) Base = declarative_base(bind=engine) Session = sessionmaker(bind=engine) class Parent(Base): __tablename__ = 'parent' id = Column(Integer, primary_key=True) class Child(Base): __tablename__ = 'child' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('parent.id')) parent = relationship(Parent, backref='children') Base.metadata.create_all()
SQLAlchemy declarative one-to-many not defined error
I'm trying to figure how to define a one-to-many relationship using SQLAlchemy's declarative ORM, and trying to get the example to work, but I'm getting an error that my sub-class can't be found (naturally, because it's declared later...) InvalidRequestError: When initializing mapper Mapper|Parent|parent, expression 'Child' failed to locate a name ("name 'Child' is not defined"). If this is a class name, consider adding this relationship() to the class after both dependent classes have been defined. But how do I define this, without the error? The code: from sqlalchemy import create_engine from sqlalchemy import Column, Integer, ForeignKey from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker, relationship from dev.historyMeta import VersionedMeta, VersionedListener global engine, Base, Session engine = create_engine('mysql+mysqldb://user:pass@localhost:3306/testdb', pool_recycle=3600) Base = declarative_base(bind=engine, metaclass=VersionedMeta) Session = sessionmaker(extension=VersionedListener()) class Parent(Base): __tablename__ = 'parent' id = Column(Integer, primary_key=True) children = relationship("Child", backref="parent") class Child(Base): __tablename__ = 'child' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('parent.id')) Base.metadata.create_all()
[ "Here's how I do it:\nfrom sqlalchemy import create_engine\nfrom sqlalchemy import Column, Integer, ForeignKey\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy.orm import sessionmaker, relationship\n\nengine = create_engine('sqlite://', echo=True)\nBase = declarative_base(bind=engine)\nSess...
[ 16 ]
[]
[]
[ "database", "python", "sqlalchemy" ]
stackoverflow_0004234493_database_python_sqlalchemy.txt
Q: Google App Engine (Python) - Import Fails I have a file structure like so: app.yaml something/ __init__.py models.py test.py I have a URL set up to run test.py in app.yaml: ... - url: /test script: something/test.py test.py imports models.py When I try to navigate to http://myapp.appspot.com/test/ I get the following error: Error: Server Error The server encountered an error and could not complete your request. If the problem persists, please report your problem and mention this error message and the query that caused it And, when I check the logs on the dashboard I see the following error occurred: <type 'exceptions.ImportError'>: No module named models How do I import the file properly? Cheers, Pete A: Inside test.py you can write at the top something like: from something.models import * This will import your models. For correct code, though, the wildcard '*' is not great; you should explicitly import the models you're using: from something.models import ModelName, OtherModel and so on. A: test.py should have import models, not import models.py A: Try to import models like this: import something.models as models
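To see the package-qualified import working outside App Engine, here is a small self-contained sketch that recreates the question's layout on disk (the `GREETING` variable is made up for the demo):

```python
import os
import sys
import tempfile

# Recreate the question's layout:
#   something/__init__.py   (marks 'something' as a package)
#   something/models.py     (the module test.py needs)
root = tempfile.mkdtemp()
pkg = os.path.join(root, "something")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "models.py"), "w") as f:
    f.write("GREETING = 'hello from models'\n")

sys.path.insert(0, root)

# This is the form test.py should use: qualify the import with the package.
from something.models import GREETING
print(GREETING)   # hello from models
```

A bare `import models` only works if test.py's own directory happens to be on sys.path — and the ImportError in the logs suggests it isn't here — so the package-qualified form is the safer choice.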
Google App Engine (Python) - Import Fails
I have a file structure like so: app.yaml something/ __init__.py models.py test.py I have URL set up to run tests.py in app.yaml: ... - url: /test script: something/test.py test.py imports models.py When I try to navigate to http://myapp.appspot.com/test/ I get the following error: Error: Server Error The server encountered an error and could not complete your request. If the problem persists, please report your problem and mention this error message and the > query that caused it And, when I check the logs on the dashboard I see the following error occurred: <type 'exceptions.ImportError'>: No module named models How do I import the file properly? Cheers, Pete
[ "inside test.py you can write at the top something like:\nfrom something.models import *\n\nThis will import your models.\nFor corrective code though - the wildcard '*' is not great and you explicitly import the models your using:\nfrom something.models import ModelName, OtherModel\n\nand so on.\n", "test.py shou...
[ 1, 1, 0 ]
[]
[]
[ "google_app_engine", "import", "python" ]
stackoverflow_0004235591_google_app_engine_import_python.txt
Q: How to make python scripts executable on Windows? Possible Duplicate: Set up Python on Windows to not type python in cmd When I use python on Linux, or even Mac OS from command line, I take advantage of the shebang and run some of my scripts directly, like so: ./myScript.py. I do need to give this script executable permissions, but that is all. Now, I just installed Python 3.1.2 on Windows 7, and I want to be able to do the same from command line. What additional steps do I need to follow? A: This sums it up better than I can say it: http://docs.python.org/faq/windows.html More specifically, check out the 2nd section titled "How do I make Python scripts executable?" On Windows, the standard Python installer already associates the .py extension with a file type (Python.File) and gives that file type an open command that runs the interpreter (D:\Program Files\Python\python.exe "%1" %*). This is enough to make scripts executable from the command prompt as foo.py. If you’d rather be able to execute the script by simple typing foo with no extension you need to add .py to the PATHEXT environment variable.
How to make python scripts executable on Windows?
Possible Duplicate: Set up Python on Windows to not type python in cmd When I use python on Linux, or even Mac OS from command line, I take advantage of the shebang and run some of my scripts directly, like so: ./myScript.py. I do need to give this script executable permissions, but that is all. Now, I just installed Python 3.1.2 on Windows 7, and I want to be able to do the same from command line. What additional steps do I need to follow?
[ "This sums it up better than I can say it: \nhttp://docs.python.org/faq/windows.html\nMore specifically, check out the 2nd section titled \"How do I make Python scripts executable?\"\n\nOn Windows, the standard Python installer already associates the .py extension with a file type (Python.File) and gives that file ...
[ 39 ]
[]
[]
[ "python", "shebang", "windows" ]
stackoverflow_0004235834_python_shebang_windows.txt
Q: All files of a dir in tar as file-like obj Is there a simple way to get all files inside a specific dir (a dir that is in a tar file), one after another, each as a file-like object? TNX A: Use TarFile.getmembers() to get all the entries in a tarball, and iterate through them to filter the ones you want. Pass each in turn to TarFile.extractfile() to get a file-like object for the entry within the tarball.
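A sketch of that recipe, using an in-memory archive so it is self-contained (the member names are invented for the example):

```python
import io
import tarfile

# Build a small tar archive in memory with a 'docs/' directory in it.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, text in [("docs/a.txt", b"first"),
                       ("docs/b.txt", b"second"),
                       ("other/c.txt", b"elsewhere")]:
        info = tarfile.TarInfo(name)
        info.size = len(text)
        tar.addfile(info, io.BytesIO(text))
buf.seek(0)

def files_in_dir(tar, dirname):
    """Yield (name, file-like object) for each regular file under dirname."""
    for member in tar.getmembers():
        if member.isfile() and member.name.startswith(dirname + "/"):
            yield member.name, tar.extractfile(member)

with tarfile.open(fileobj=buf, mode="r") as tar:
    contents = {name: f.read() for name, f in files_in_dir(tar, "docs")}
print(contents)   # {'docs/a.txt': b'first', 'docs/b.txt': b'second'}
```

With a tar on disk the same loop applies; just open it with `tarfile.open("archive.tar")` instead of the in-memory buffer.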
All files of a dir in tar as file-like obj
there is a simple way to get all files inside a specifically dir (dir that is in a tar file) one after another as a same file-like object? TNX
[ "Use TarFile.getmembers() to get all the entries in a tarball, and iterate through them to filter the ones you want. Pass each in turn to TarFile.extractfile() to get a file-like to the entry within the tarball.\n" ]
[ 1 ]
[]
[]
[ "python", "tar" ]
stackoverflow_0004235974_python_tar.txt
Q: naming of physical quantities in python I would like to establish a good naming scheme for physical/mathematical quantities used in my simulation code. Consider the following example: from math import * class GaussianBeamIntensity(object): """ Optical intensity profile of a Gaussian laser beam. """ def __init__(self, intensity_at_waist_center, waist_radius, wavelength): """ Arguments: *intensity_at_waist_center*: The optical intensity of the beam at the center of its waist in W/m^2 units. *waist_radius*: The radius of the beam waist in meters. *wavelength*: The wavelength of the laser beam in meters. """ self.intensity_at_waist_center = intensity_at_waist_center self.waist_radius = waist_radius self.wavelength = wavelength self._calculate_auxiliary_quantities() def _calculate_auxiliary_quantities(self): # Shorthand notation w_0, lambda_ = self.waist_radius, self.wavelength self.rayleigh_range = pi * w_0**2 / lambda_ # Generally some more quantities could follow def __call__(self, rho, z): """ Arguments: *rho*, *z*: Cylindrical coordinates of a spatial point. """ # Shorthand notation I_0, w_0 = self.intensity_at_waist_center, self.waist_radius z_R = self.rayleigh_range w_z = w_0 * sqrt(1.0 + (z / z_R)**2) I = I_0 * (w_0 / w_z)**2 * exp(-2.0 * rho**2 / w_z**2) return I What consistent naming scheme would you propose for the physical properties (properties, function arguments etc.) in order to balance between readability and concise notation (that formulae remain relatively short)? Could you please refine the example above? Or perhaps propose a better scheme? It would be nice to follow the guidelines of PEP8, remembering that "A Foolish Consistency is the Hobgoblin of Little Minds". It seems difficult to stick to descriptive names while obeying the traditional 80-character limit for line lengths. Thank you in advance! A: I think you've already found the good balance. 
Expressive names are important, so I totally agree with the use of wavelength instead of lambda as a class attribute. This way the interface remains clear and expressive. In a long formula, though, lambda_ is a good choice as shorthand notation, because this is a commonly accepted and widely used notation for the wavelength in optics. I think when you implement a formula, what you want is to stay as close as possible to the form of the equations you'd write on a piece of paper (or as they appear in an article etc.). In short: keep the interfaces expressive, the formulae short. A: Use Python 3 and you can use the actual symbol λ for a variable name. I look forward to writing code like: from math import pi as π sphere_volume = lambda r : 4/3 * π * r**3
naming of physical quantities in python
I would like to establish a good naming scheme for physical/mathematical quantities used in my simulation code. Consider the following example: from math import * class GaussianBeamIntensity(object): """ Optical intensity profile of a Gaussian laser beam. """ def __init__(self, intensity_at_waist_center, waist_radius, wavelength): """ Arguments: *intensity_at_waist_center*: The optical intensity of the beam at the center of its waist in W/m^2 units. *waist_radius*: The radius of the beam waist in meters. *wavelength*: The wavelength of the laser beam in meters. """ self.intensity_at_waist_center = intensity_at_waist_center self.waist_radius = waist_radius self.wavelength = wavelength self._calculate_auxiliary_quantities() def _calculate_auxiliary_quantities(self): # Shorthand notation w_0, lambda_ = self.waist_radius, self.wavelength self.rayleigh_range = pi * w_0**2 / lambda_ # Generally some more quantities could follow def __call__(self, rho, z): """ Arguments: *rho*, *z*: Cylindrical coordinates of a spatial point. """ # Shorthand notation I_0, w_0 = self.intensity_at_waist_center, self.waist_radius z_R = self.rayleigh_range w_z = w_0 * sqrt(1.0 + (z / z_R)**2) I = I_0 * (w_0 / w_z)**2 * exp(-2.0 * rho**2 / w_z**2) return I What consistent naming scheme would you propose for the physical properties (properties, function arguments etc.) in order to balance between readability and concise notation (that formulae remain relatively short)? Could you please refine the example above? Or perhaps propose a better scheme? It would be nice to follow the guidelines of PEP8, remembering that "A Foolish Consistency is the Hobgoblin of Little Minds". It seems difficult to stick to descriptive names while obeying the traditional 80-character limit for line lengths. Thank you in advance!
[ "I think you've already found the good balance. Expressive names are important, so I totally agree with the use of wavelenght instead of lambda as a class attribute. This way the interface remains clear and expressive.\nIn a long formula, though, lambda_ is good choice as shorthand notation, because this is a commo...
[ 4, 0 ]
[]
[]
[ "coding_style", "naming", "python", "readability", "scientific_computing" ]
stackoverflow_0004227503_coding_style_naming_python_readability_scientific_computing.txt
Q: Generate Tkinter Buttons dynamically I want to generate n Tkinter Buttons which do different things. I have this code: import Tkinter as tk for i in range(boardWidth): newButton = tk.Button(root, text=str(i+1), command=lambda: Board.playColumn(i+1, Board.getCurrentPlayer())) Board.boardButtons.append(newButton) If boardWidth is 5, I get buttons labelled 1 to 5, but when clicked they all do Board.playColumn(5, Board.getCurrentPlayer()). I need the first button to do Board.playColumn(1, Board.getCurrentPlayer()), the second to do Board.playColumn(2, Board.getCurrentPlayer()) and so on. A: I think the problem is that the lambda is picking up the final value of i after the for loop ends. This should fix that (untested): import Tkinter as tk for i in range(boardWidth): newButton = tk.Button(root, text=str(i+1), command=lambda j=i+1: Board.playColumn(j, Board.getCurrentPlayer())) Board.boardButtons.append(newButton) Update BTW, this works by adding an argument to the lambda function with a default value calculated from the value of i at the time each one is created in the loop, rather than referring back to the final value of i through a closure when the expression within it executes later. A: Your problem is that you create lots of lambda objects in the same namespace, and those lambdas refer to names in the outer scope. They look up i when they are called, not when they are defined, so by the time any of them runs, all the lambdas see the last value of i.
Try using a callback factory to fix that: import Tkinter as tk def callbackFactory(b, n): def _callback(): return b.playColumn(n, b.getCurrentPlayer()) return _callback for i in range(boardWidth): newButton = tk.Button(root, text=str(i+1), command=callbackFactory(Board, i+1)) Board.boardButtons.append(newButton) Another idea is to store the current value of i as a default argument value in the lambda object, instead of relying on closure behavior to store the reference: for i in range(boardWidth): newButton = tk.Button(root, text=str(i+1), command=lambda x=i: Board.playColumn(x+1, Board.getCurrentPlayer())) Board.boardButtons.append(newButton)
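The late-binding behavior is independent of Tkinter, so it can be demonstrated with plain lambdas; a minimal sketch of the bug and both fixes:

```python
# Late binding: each lambda looks up i when called, so all see the last value.
late = [lambda: i for i in range(3)]
print([f() for f in late])        # [2, 2, 2]

# Fix 1: the default-argument trick captures the value at definition time.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])       # [0, 1, 2]

# Fix 2: a factory function gives each callback its own enclosing scope.
def make_callback(n):
    return lambda: n

made = [make_callback(i) for i in range(3)]
print([f() for f in made])        # [0, 1, 2]
```

Either fix carries over directly to the Button commands above: bind the column number at creation time instead of looking it up at click time.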
Generate Tkinter Buttons dynamically
I want to generate n amount of Tkinter Buttons which do different things. I have this code: import Tkinter as tk for i in range(boardWidth): newButton = tk.Button(root, text=str(i+1), command=lambda: Board.playColumn(i+1, Board.getCurrentPlayer())) Board.boardButtons.append(newButton) If boardWidth is 5, though I get buttons labelled 1 to 5, when clicked they all do Board.playColumn(5, Board.getCurrentPlayer()). I need the first button to do Board.playColumn(1, Board.getCurrentPlayer()), the second to do Board.playColumn(2, Board.getCurrentPlayer()) and so on.
[ "I think the problem is that the lambda is picking up the final value of i after the for loop ends. This should fix that (untested):\nimport Tkinter as tk\n\nfor i in range(boardWidth):\n newButton = tk.Button(root, text=str(i+1),\n command=lambda j=i+1: Board.playColumn(j, Board.getCurrentPla...
[ 18, 4 ]
[]
[]
[ "button", "python", "tkinter", "user_interface" ]
stackoverflow_0004236182_button_python_tkinter_user_interface.txt
Q: Keystroke detection using Python How is keystroke detection done using Python? A: This old discussion on StackOverflow might be helpful. A: It's not clear from your question what problem you are trying to solve. However, there is a Python FAQ (with a stock answer) that should have you on your way: How do I get a single keypress at a time?
Keystroke detection using Python
How is keystroke detection done using Python?
[ "This old discussion in StackOverflow might helpful.\n", "It's not clear from your question what problem you are trying to solve. However, there is a Python FAQ (with a stock answer) that should have you on your way: How do I get a single keypress at a time?.\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0003013057_python.txt
Q: Is the cc recipients in a received email a Python list? (Google App Engine) I am trying to pull the cc'ed email addresses from received email. I am working in the development server. The tutorial says that "cc contains a list of the cc recipients." But it seems that message.cc returns a string. I am just using the code I copied from the cookbook: class ReceiveEmail(InboundMailHandler): def receive(self, message): logging.info("Received email from %s" % message.sender) plaintext = message.bodies(content_type='text/plain') for text in plaintext: txtmsg = "" txtmsg = text[1].decode() logging.info("Body is %s" % txtmsg) logging.info("CC email is %s" % message.cc) So if I have 1 cc, the log shows: CC email is cc12@example.com If there are more than 1: CC email is cc12@example.com, cc13@example.com To get the first email "cc12@example.com", I tried: logging.info("CC email is %s" % message.cc[0]) but this gives: CC email is c so the result is treated as a string. When I try logging.info("CC email is %s" % list(message.cc) I get ['c', 'c', '1', '2', '@', 'e', 'x', 'a', 'm', 'p', 'l', 'e', '.', 'c', 'o', 'm', ',', ' ', 'c', 'c', '1', '3', '@', 'e', 'x', 'a', 'm', 'p', 'l', 'e', '.', 'c', 'o', 'm', ',', ' ', 'c', 'c', '1', '4', '@', 'e', 'x', 'a', 'm', 'p', 'l', 'e', '.', 'c', 'o', 'm' Again, it appears that message.cc returns string not list. Do I need to use regex to get the emails? Any suggestions about what I am doing wrong? Thanks! A: Try: cc_list = message.cc.split(',') A: cc A recipient's email address (a string) or a list of email addresses to appear on the Cc: line in the message header. Message Fields cc is a string message.cc.split(", ")[0] is "cc12@example.com" that you want.
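The behavior in the question comes down to plain string indexing versus splitting; a short sketch using the addresses from the logs:

```python
cc = "cc12@example.com, cc13@example.com, cc14@example.com"

# Indexing a string gives single characters -- hence "CC email is c".
print(cc[0])                 # c

# Split first, then index; strip() guards against the space after each comma.
addresses = [addr.strip() for addr in cc.split(",")]
print(addresses[0])          # cc12@example.com
print(addresses[1])          # cc13@example.com
```

Note that splitting on `","` alone leaves a leading space on every address after the first, which is why stripping (or splitting on `", "`) matters before comparing against stored emails.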
Is the cc recipients in a received email a Python list? (Google App Engine)
I am trying to pull the cc'ed email addresses from received email. I am working in the development server. The tutorial says that "cc contains a list of the cc recipients." But it seems that message.cc returns a string. I am just using the code I copied from the cookbook: class ReceiveEmail(InboundMailHandler): def receive(self, message): logging.info("Received email from %s" % message.sender) plaintext = message.bodies(content_type='text/plain') for text in plaintext: txtmsg = "" txtmsg = text[1].decode() logging.info("Body is %s" % txtmsg) logging.info("CC email is %s" % message.cc) So if I have 1 cc, the log shows: CC email is cc12@example.com If there are more than 1: CC email is cc12@example.com, cc13@example.com To get the first email "cc12@example.com", I tried: logging.info("CC email is %s" % message.cc[0]) but this gives: CC email is c so the result is treated as a string. When I try logging.info("CC email is %s" % list(message.cc) I get ['c', 'c', '1', '2', '@', 'e', 'x', 'a', 'm', 'p', 'l', 'e', '.', 'c', 'o', 'm', ',', ' ', 'c', 'c', '1', '3', '@', 'e', 'x', 'a', 'm', 'p', 'l', 'e', '.', 'c', 'o', 'm', ',', ' ', 'c', 'c', '1', '4', '@', 'e', 'x', 'a', 'm', 'p', 'l', 'e', '.', 'c', 'o', 'm' Again, it appears that message.cc returns string not list. Do I need to use regex to get the emails? Any suggestions about what I am doing wrong? Thanks!
[ "Try:\ncc_list = message.cc.split(',')\n\n", "cc\nA recipient's email address (a string) or a list of email addresses to appear on the Cc: line in the message header.\nMessage Fields\ncc is a string \nmessage.cc.split(\", \")[0] is \"cc12@example.com\" that you want.\n" ]
[ 1, 0 ]
[]
[]
[ "google_app_engine", "python", "regex" ]
stackoverflow_0004236087_google_app_engine_python_regex.txt
Q: python: module has no attribute mechanize #!/usr/bin/env python import mechanize mech = mechanize.Browser() page = br.open(SchoolRank('KY')) Gives: Traceback (most recent call last): File "mechanize.py", line 2, in <module> import mechanize File "/home/jcress/Documents/programming/schooldig/trunk/mechanize.py", line 12, in <module> mech = mechanize.Browser() AttributeError: 'module' object has no attribute 'Browser' And I'm confused. I have the module installed for 2.6 and 2.7, same result... A: Change your filename away from mechanize.py. Python is importing your file as the module.
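The shadowing is easy to reproduce with any module name; here is a sketch that shadows the stdlib json module the same way mechanize.py shadows mechanize:

```python
import os
import sys
import tempfile

# Create an empty file named json.py and put its directory first on sys.path.
d = tempfile.mkdtemp()
open(os.path.join(d, "json.py"), "w").close()
sys.path.insert(0, d)
sys.modules.pop("json", None)     # forget any previously imported json

import json                       # picks up our empty json.py, not the stdlib
shadowed_file = json.__file__
has_loads = hasattr(json, "loads")
print(shadowed_file)              # .../json.py  (the empty file we just made)
print(has_loads)                  # False

# Undo the damage so later imports see the real module again.
sys.path.remove(d)
sys.modules.pop("json", None)
```

Renaming the script (and deleting any stale mechanize.pyc left next to it) makes the real package importable again.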
python: module has no attribute mechanize
#!/usr/bin/env python import mechanize mech = mechanize.Browser() page = br.open(SchoolRank('KY')) Gives: Traceback (most recent call last): File "mechanize.py", line 2, in <module> import mechanize File "/home/jcress/Documents/programming/schooldig/trunk/mechanize.py", line 12, in <module> mech = mechanize.Browser() AttributeError: 'module' object has no attribute 'Browser' And I'm confused. I have the module installed for 2.6 and 2.7, same result...
[ "Change your filename away from mechanize.py. Python is importing your file as the module.\n" ]
[ 18 ]
[]
[]
[ "attributes", "mechanize_python", "module", "python" ]
stackoverflow_0004236365_attributes_mechanize_python_module_python.txt
Q: Which is more secure to build web apps with, PHP or Python? Which programming language is more secure to build web apps with, PHP or Python? A: This has little to do with the language, and much to do with the code.
which more secure to build web apps with, php or python?
which programming language is more secure to build web apps with, php or python?
[ "This has little to do with the language, and much to do with the code.\n" ]
[ 11 ]
[]
[]
[ "php", "programming_languages", "python", "web_applications" ]
stackoverflow_0004236416_php_programming_languages_python_web_applications.txt
Q: django apache mod-wsgi hangs on importing a python module from .so file I'm trying to deploy a django application for production on apache mod-wsgi. I have a third-party python application called freecad which packages a python module in a FreeCAD.so library file. Requests hang on 'import FreeCAD'. Some apache log errors tell me that it might be a problem with zlib compression when trying to import this module. Note that everything works just fine when using django's runserver. After looking more into this, it's not a compression issue, nor a permissions problem. I tried it as the www-data user: $ sudo -u www-data python Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path.append('/usr/lib/freecad/lib') >>> import FreeCAD FreeCAD 0.10, Libs: 0.10R3225 >>> but it still hangs on 'import FreeCAD' from a web page request A: Set: WSGIApplicationGroup %{GLOBAL} See the application issues document on the mod_wsgi wiki. Most likely you have an extension module not designed to work in a sub-interpreter. The above forces it to run in the main interpreter.
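For reference, the directive goes alongside the existing mod_wsgi setup in the virtual host; a minimal sketch (paths and process names here are illustrative, not from the question):

```apache
WSGIDaemonProcess myapp python-path=/path/to/project:/usr/lib/freecad/lib
WSGIScriptAlias / /path/to/project/django.wsgi

<Directory /path/to/project>
    WSGIProcessGroup myapp
    WSGIApplicationGroup %{GLOBAL}
    Order deny,allow
    Allow from all
</Directory>
```

WSGIApplicationGroup %{GLOBAL} forces the application into the process's first (main) interpreter, which is what extension modules that don't support Python sub-interpreters require.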
django apache mod-wsgi hangs on importing a python module from .so file
I'm trying to deploy a django application for production on apache mod-wsgi. I have a third party python application called freecad which packages python module in an FreeCAD.so library file. Requests hang on 'import FreeCAD'. Some apache log errors tell me that it might be problem with zlib?? compression when trying to import this module. Note that everything works just fine when using django's runserver. After looking more into this, it's not a compression issue,neither is a permission. I did as www-data user using $ sudo -u www-data python Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path.append('/usr/lib/freecad/lib') >>> import FreeCAD FreeCAD 0.10, Libs: 0.10R3225 >>> but it still hangs on 'import FreeCAD' from web page request
[ "Set:\nWSGIApplicationGroup %{GLOBAL}\n\nSee the application issues document on mod_wsgi wiki.\nMost likely you have an extension module not designed to work in sub interpreter. The above forces it to run in main interpreter.\n" ]
[ 26 ]
[]
[]
[ "django", "module", "python", "wsgi" ]
stackoverflow_0004236045_django_module_python_wsgi.txt
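The one-line fix in the answer above lives in the Apache site configuration. A minimal sketch of where the directive would sit in a virtual host follows; the server name and paths here are hypothetical, not taken from the question:

```apache
<VirtualHost *:80>
    ServerName example.com

    # Force the WSGI application to run in the main (first) Python
    # interpreter. C extension modules such as FreeCAD.so that are not
    # written to work in sub interpreters can otherwise deadlock on import.
    WSGIApplicationGroup %{GLOBAL}

    WSGIScriptAlias / /path/to/project/django.wsgi
</VirtualHost>
```

With WSGIApplicationGroup %{GLOBAL} set, mod_wsgi runs the application in the interpreter created when Python initializes, which is the only context many extension modules are tested in.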
Q: IndexError: list index out of range (in query results) I am having problems understanding how to work with query results. I asked about half a dozen questions about this but I still do not understand. I copy from previous code and make it work somehow, but since I don't understand the underlying concept the code breaks down if I make a minor change. I would really appreciate it if you could tell me how you visualize what is happening here and explain it to me. Thank you. class ReceiveEmail(InboundMailHandler): def receive(self, message): logging.info("Received email from %s" % message.sender) plaintext = message.bodies(content_type='text/plain') for text in plaintext: txtmsg = "" txtmsg = text[1].decode() logging.info("Body is %s" % txtmsg) logging.info("CC email is %s" % ((message.cc).split(",")[1])) query = User.all() query.filter("userEmail =", ((message.cc).split(",")[1])) results = query.fetch(1) for result in results: result.userScore += 1 um = results[0] um.userScore = result.userScore um.put() In this code, as I understand it, the query takes the second email address from the cc list and fetches the result. Then I increment the userScore by 1. Next, I want to update this item in Datastore, so I say um = results[0] um.userScore = result.userScore um.put() But this gives an index out of range error: um = results[0] IndexError: list index out of range Why? I am imagining that results[0] is the zeroth item of the results. Why is it out of range? The only thing I can think of is that the list may be None. But I don't understand why. It must have the one item that was fetched. Also, if I try to test for the first email address by changing the index from [1] to [0] query.filter("userEmail =", ((message.cc).split(",")[0])) then I don't get the IndexError. What am I doing wrong here? Thanks! 
EDIT See comments: (message.cc).split(",")[0]) left a space in front of the emails (starting with the second email), so the query was not matching them; >>> cc.split(",") ['cc12@example.com', ' cc13@example.com', ' cc13@example.com'] adding a space after comma fixed the problem: >>> listcc = cc.split(", ") >>> listcc ['cc12@example.com', 'cc13@example.com', 'cc13@example.com'] >>> A: To understand the code break it down and look at it piece by piece: class ReceiveEmail(InboundMailHandler): def receive(self, message): logging.info("Received email from %s" % message.sender) # Get a list of CC addresses. This is basically a for loop. cc_addresses = [address.strip() for address in message.cc.split(",")] # The CC list goes with the message, not the bodies. logging.info("CC email is %s" % (cc_addresses)) # Get and iterate over all of the *plain-text* bodies in the email. plaintext = message.bodies(content_type='text/plain') for text in plaintext: txtmsg = "" txtmsg = text[1].decode() logging.info("Body is %s" % txtmsg) # Setup a query object. query = User.all() # Filter the user objects to get only the emails in the CC list. query.filter("userEmail IN", cc_addresses) # But, only get at most 10 users. users = query.fetch(10) logging.info('Got %d user entities from the datastore.' % len(users)) # Iterate over each of the users increasing their score by one. for user in users: user.userScore += 1 # Now, write the users back to the datastore. db.put(users) logging.info('Wrote %d user entities.' % len(users)) I would make an adjustment to your model structure. When you create the User entity, I would set the key_name to the email address. You will be able to make your queries much more efficient. Some references: List Comprehension. Query Object. db.put().
IndexError: list index out of range (in query results)
I am having problems understanding how to work with query results. I asked about half a dozen questions about this but I still do not understand. I copy from previous code and make it work somehow, but since I don't understand the underlying concept the code breaks down if I make a minor change. I would really appreciate it if you could tell me how you visualize what is happening here and explain it to me. Thank you. class ReceiveEmail(InboundMailHandler): def receive(self, message): logging.info("Received email from %s" % message.sender) plaintext = message.bodies(content_type='text/plain') for text in plaintext: txtmsg = "" txtmsg = text[1].decode() logging.info("Body is %s" % txtmsg) logging.info("CC email is %s" % ((message.cc).split(",")[1])) query = User.all() query.filter("userEmail =", ((message.cc).split(",")[1])) results = query.fetch(1) for result in results: result.userScore += 1 um = results[0] um.userScore = result.userScore um.put() In this code, as I understand it, the query takes the second email address from the cc list and fetches the result. Then I increment the userScore by 1. Next, I want to update this item in Datastore, so I say um = results[0] um.userScore = result.userScore um.put() But this gives an index out of range error: um = results[0] IndexError: list index out of range Why? I am imagining that results[0] is the zeroth item of the results. Why is it out of range? The only thing I can think of is that the list may be None. But I don't understand why. It must have the one item that was fetched. Also, if I try to test for the first email address by changing the index from [1] to [0] query.filter("userEmail =", ((message.cc).split(",")[0])) then I don't get the IndexError. What am I doing wrong here? Thanks! 
EDIT See comments: (message.cc).split(",")[0]) left a space in front of the emails (starting with the second email), so the query was not matching them; >>> cc.split(",") ['cc12@example.com', ' cc13@example.com', ' cc13@example.com'] adding a space after comma fixed the problem: >>> listcc = cc.split(", ") >>> listcc ['cc12@example.com', 'cc13@example.com', 'cc13@example.com'] >>>
[ "To understand the code break it down and look at it piece by piece:\nclass ReceiveEmail(InboundMailHandler):\n def receive(self, message):\n logging.info(\"Received email from %s\" % message.sender)\n\n # Get a list of CC addresses. This is basically a for loop.\n cc_addresses = [address.s...
[ 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0004236427_google_app_engine_python.txt
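The whitespace pitfall described in the EDIT above can be reproduced in a few lines: splitting on a bare comma keeps a leading space on every address after the first, which is why the datastore query matched nothing for index [1].

```python
cc = "cc12@example.com, cc13@example.com, cc14@example.com"

# A bare comma split keeps the leading space on every address but the first.
naive = cc.split(",")

# Stripping each piece (as the accepted answer does) is robust even if the
# separator is sometimes ", " and sometimes just ",".
cleaned = [address.strip() for address in cc.split(",")]

print(naive[1])    # ' cc13@example.com' -- note the leading space
print(cleaned[1])  # 'cc13@example.com'
```

Stripping is safer than splitting on ", " because it also handles addresses with no space after the comma.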
Q: Best way to iterate through entries delimited by two keywords? Text File contents: &CRB A='test1' B=123,345, 678 &END Misc text potentially between entries ... &CRB A='test2' B=788, 345, 3424 &END &CRB A='test3' B=788, 345, 3424 &END &CRB A='test4' B=788, 345, 3424 &END What is the most efficient way to iterate through the entries between the keywords? Note that some entries span lines. Something like the following is desired - f = open(filename) for entry in f: - do something with entry Of course it is not that easy. But, are there suggestions on a straightforward way to iterate through the entries delimited by two keywords? A: Assuming that the entry is all of the text between &CRB and &END pairs, you can pull out the text between them with something like this: import re # the regular expression treats newlines as a regular character, so the # multiline entries are okay. It's non-greedy, so it gets individual entries. pat = re.compile(r'&CRB(.+?)&END', re.DOTALL) s = ''' &CRB A='test1' B=123,345, 678 &END Misc text potentially between entries ... &CRB A='test2' B=788, 345, 3424 &END &CRB A='test3' B=788, 345, 3424 &END &CRB A='test4' B=788, 345, 3424 &END''' for entry in pat.findall(s): # do something with each entry print entry prints this: A='test1' B=123,345, 678 A='test2' B=788, 345, 3424 A='test3' B=788, 345, 3424 A='test4' B=788, 345, 3424 ...it's your problem to clean up and interpret the contents of each of those records... A: I'd use re.finditer instead of re.findall: since we do not know the size of the file, parsing it all at once might consume too much RAM, while an iterator yielding the results keeps memory use low. So I think the best solution is the one posted by bgporter, using pat.finditer instead of pat.findall in the for loop.
finditer yields MatchObjects and not strings, to obtain the string matched just call .group(): for entry in pat.finditer(s): entry_text = entry.group() #do something with entry_text. A: I would read in the file, use filecontents.split('&CRB') and then parse each line with regular expressions (see re module).
Best way to iterate through entries delimited by two keywords?
Text File contents: &CRB A='test1' B=123,345, 678 &END Misc text potentially between entries ... &CRB A='test2' B=788, 345, 3424 &END &CRB A='test3' B=788, 345, 3424 &END &CRB A='test4' B=788, 345, 3424 &END What is the most efficient way to iterate through the entries between the keywords? Note that some entries span lines. Something like the following is desired - f = open(filename) for entry in f: - do something with entry Of course it is not that easy. But, are there suggestions on a straightforward way to iterate through the entries delimited by two keywords?
[ "Assuming that the entry is all of the text between &CRB and &END pairs, you can pull out the text between them with something like this:\nimport re\n\n# the regular expression treats newlines as a regular character, so the\n# multiline entries are okay. It's non-greedy, so it gets individual entries.\npat = re.com...
[ 4, 1, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0004233250_python_string.txt
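The finditer suggestion from the answers above can be sketched like this; the sample text is a shortened version of the file in the question, and `.group(1)` retrieves the captured text between the delimiters:

```python
import re

text = """&CRB A='test1' B=123,345, 678 &END
junk between entries
&CRB A='test2' B=788, 345, 3424 &END"""

# Non-greedy match between the two delimiter keywords; DOTALL lets '.'
# cross line breaks so multi-line entries are captured whole.
pat = re.compile(r"&CRB(.+?)&END", re.DOTALL)

# finditer yields MatchObject instances lazily, so a large file is never
# materialized as one big result list.
entries = [m.group(1).strip() for m in pat.finditer(text)]
print(entries)
```

Each iteration of the loop gets one entry's inner text, ready for further parsing of the A=/B= fields.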
Q: Is there a better way to do this python code? Looking at this snippet of python code I wrote: return map(lambda x: x[1], filter(lambda x: x[0] == 0b0000, my_func(i) ) ) (Hoping it's self-explanatory) I'm wondering if python has a better way to do it? I learned python several months ago, wrote a couple scripts, and haven't used it much since. It puts me in a weird spot for learning because I know enough to do what I want but don't have the newbie instinct to find the "proper" way. I'm hoping this question will put me back on course... A: I think you want a list comprehension: [x[1] for x in my_func(i) if x[0] == 0] List comprehensions are an extremely common Python idiom. A: You could use something like: return [x[1] for x in my_func(i) if x[0] == 0b0000] Many people would call that "better" as its a little shorter and more obvious. (I would be tempted to consider turning it into a simple loop and if statement. Functional programming is nice but simple loops are nice too.) A: If you are writing in Python 3.x, then you could write an efficient generator expression such as this: return (x[1] for x in my_func(i) if not x[0]) A: In python 3.x you can use unpacking to avoid using x[0] and x[1]. Also you may consider returning a generator expression instead of a list-comprehension if you only want to loop over the result once: return (y for x,y,*z in my_func(i) if x == 0b0000)
Is there a better way to do this python code?
Looking at this snippet of python code I wrote: return map(lambda x: x[1], filter(lambda x: x[0] == 0b0000, my_func(i) ) ) (Hoping it's self-explanatory) I'm wondering if python has a better way to do it? I learned python several months ago, wrote a couple scripts, and haven't used it much since. It puts me in a weird spot for learning because I know enough to do what I want but don't have the newbie instinct to find the "proper" way. I'm hoping this question will put me back on course...
[ "I think you want a list comprehension:\n[x[1] for x in my_func(i) if x[0] == 0]\n\nList comprehensions are an extremely common Python idiom.\n", "You could use something like:\nreturn [x[1] for x in my_func(i) if x[0] == 0b0000]\n\nMany people would call that \"better\" as its a little shorter and more obvious.\...
[ 9, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0004230689_python.txt
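Both spellings from the question and the answers can be run side by side to confirm they agree; the `my_func` below is a made-up stand-in, since the real one is not shown:

```python
def my_func(i):
    # Hypothetical stand-in for the original my_func: it just has to
    # return (key, value) pairs.
    return [(0, "a"), (1, "b"), (0, "c")]

# The original map/filter/lambda pipeline (wrapped in list() so the
# same code works on Python 3, where map/filter return iterators) ...
old_style = list(map(lambda x: x[1],
                     filter(lambda x: x[0] == 0, my_func(0))))

# ... and the list-comprehension rewrite from the answers.
new_style = [x[1] for x in my_func(0) if x[0] == 0]

print(old_style, new_style)  # ['a', 'c'] ['a', 'c']
```

The comprehension does the filter and the projection in one readable pass, which is why it is the idiomatic choice.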
Q: How can I express this query in sqlalchemy ORM? Here is my SQL query: select survey_spec.survey_spec_id, cr.login as created_by, installed_at, req.login as requested_by, role from survey_spec join (select survey_spec_id, role, max(installed_at) as installed_at from survey_installation_history group by 1, 2) latest using (survey_spec_id) left join survey_installation_history using (survey_spec_id, role, installed_at) left join users cr on created_by = cr.user_id left join users req on requested_by = req.user_id where survey_id = :survey_id order by created_at desc, installed_at desc I have ORM entities for survey_spec, survey_installation_history, and users, and survey_spec.installations is a relationship to survey_installation_history using survey_spec_id as the key. A: Do you have the example output of what you have got so far? i.e. the output of: print survey_spec.query.filter(survey_spec.survey_id==survey_id).options( eagerload(...)) If you just want to load up your entities, you could bypass the SQL generation and load from your given literal SQL, something along the lines of: session.query(survey_spec).from_statement("""select survey_spec.survey_spec_id, cr.login as created_by, installed_at, req.login as requested_by, role from survey_spec join (select survey_spec_id, role, max(installed_at) as installed_at from survey_installation_history group by 1, 2) latest using (survey_spec_id) left join survey_installation_history using (survey_spec_id, role, installed_at) left join users cr on created_by = cr.user_id left join users req on requested_by = req.user_id where survey_id = :survey_id order by created_at desc, installed_at desc""").params(survey_id=survey_id).all()
How can I express this query in sqlalchemy ORM?
Here is my SQL query: select survey_spec.survey_spec_id, cr.login as created_by, installed_at, req.login as requested_by, role from survey_spec join (select survey_spec_id, role, max(installed_at) as installed_at from survey_installation_history group by 1, 2) latest using (survey_spec_id) left join survey_installation_history using (survey_spec_id, role, installed_at) left join users cr on created_by = cr.user_id left join users req on requested_by = req.user_id where survey_id = :survey_id order by created_at desc, installed_at desc I have ORM entities for survey_spec, survey_installation_history, and users, and survey_spec.installations is a relationship to survey_installation_history using survey_spec_id as the key.
[ "Do you have the example output of what you have got so far? i.e. the output of:\nprint survey_spec.query.filter(survey_spec.survey_id==survey_id).options(\n eagerload(...))\n\nIf you just want to load up your entities, you could bypass the SQL generation and load from your given literal SQL, something alo...
[ 1 ]
[]
[]
[ "python", "sql", "sqlalchemy" ]
stackoverflow_0004228825_python_sql_sqlalchemy.txt
Q: Combine model data with list of objects I have a list of objects : film_hc = [{'count': 2, 'pk': '33'}, {'count': 1, 'pk': '37'}, {'count': 1, 'pk': '49'}] The 'pk' value is the primary key for a record in a model. I would like to add the name field of that record to this list of objects. To get one name, I can use: record = Film.objects.get(pk = film_hc[0]['pk']) record.name In the end, I would like to have something like this: film_hc = [{'count': 2, 'pk': '33', 'name': 'name1'}, {'count': 1, 'pk': '37', 'name': 'name2'}, {'count': 1, 'pk': '49', 'name': 'name3'}] Question: What is the most efficient way to attach the necessary data to this preexisting list? I am thinking I could use the zip function: film_hc_with_names = zip(????, film_hc) The problem is I'm not sure what I would substitute in place of those ???? to get the object then the name for each object in the list. Should I use a for loop instead? What is the most preferable option? A: To avoid hitting the database multiple times, I recommend you use the in_bulk queryset method. This takes a list of IDs, and returns a dictionary of ID mapped to model instance. So what you need to do is to run through your list of dictionaries first to extract the ID values, then do the query, then run through again to get the name for each instance. Even though this is doing two extra iterations, it should still be quicker than running multiple DB queries (although as always you should profile to make sure). id_list = [film['id'] for film in film_hc] objects = Film.objects.only('name').in_bulk(id_list) for film in film_hc: film['name'] = objects[film['id']].name
Combine model data with list of objects
I have a list of objects : film_hc = [{'count': 2, 'pk': '33'}, {'count': 1, 'pk': '37'}, {'count': 1, 'pk': '49'}] The 'pk' value is the primary key for a record in a model. I would like to add the name field of that record to this list of objects. To get one name, I can use: record = Film.objects.get(pk = film_hc[0]['pk']) record.name In the end, I would like to have something like this: film_hc = [{'count': 2, 'pk': '33', 'name': 'name1'}, {'count': 1, 'pk': '37', 'name': 'name2'}, {'count': 1, 'pk': '49', 'name': 'name3'}] Question: What is the most efficient way to attach the necessary data to this preexisting list? I am thinking I could use the zip function: film_hc_with_names = zip(????, film_hc) The problem is I'm not sure what I would substitute in place of those ???? to get the object then the name for each object in the list. Should I use a for loop instead? What is the most preferable option?
[ "To avoid hitting the database multiple times, I recommend you use the in_bulk queryset method. This takes a list of IDs, and returns a dictionary of ID mapped to model instance. So what you need to do is to run through your list of dictionaries first to extract the ID values, then do the query, then run through ag...
[ 1 ]
[]
[]
[ "django", "python", "zip" ]
stackoverflow_0004236779_django_python_zip.txt
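The in_bulk pattern from the answer above can be illustrated without a running Django project. The `fake_db` dict below is a hypothetical stand-in for what `Film.objects.in_bulk(id_list)` would return (a mapping of primary key to instance); only the two-pass shape of the fix is the point:

```python
film_hc = [{"count": 2, "pk": "33"},
           {"count": 1, "pk": "37"},
           {"count": 1, "pk": "49"}]

# Stand-in for the datastore: one query's worth of rows keyed by pk,
# which is the shape in_bulk gives you.
fake_db = {"33": {"name": "name1"},
           "37": {"name": "name2"},
           "49": {"name": "name3"}}

# Pass 1: collect the IDs. Pass 2: one lookup table, one annotation loop.
id_list = [film["pk"] for film in film_hc]
objects = {pk: fake_db[pk] for pk in id_list}

for film in film_hc:
    film["name"] = objects[film["pk"]]["name"]

print(film_hc[0])  # {'count': 2, 'pk': '33', 'name': 'name1'}
```

The two extra iterations over the list replace one database query per film with a single bulk query, which is where the efficiency comes from.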
Q: Plotting time against size of input for Longest Common Subsequence Problem I wish to plot the time against the size of input for the Longest common subsequence problem, in recursive as well as dynamic programming approaches. Until now I've developed programs for evaluating lcs functions in both ways, a simple random string generator (with help from here) and a program to plot the graph. Now I need to connect all these in the following way. That is, the two programs for calculating lcs should run about 10 times with output from the simple random string generator given as command line arguments to these programs. The time taken for execution of these programs is calculated and this, along with the length of strings used, is stored in a file like l=15, r=0.003, c=0.001 This is parsed by the python program to populate the following lists sequence_lengths = [] recursive_times = [] dynamic_times = [] and then the graph is plotted. I have the following questions regarding the above. 1) How do I pass the output of one C program to another C program as command line arguments? 2) Is there any function to evaluate the time taken to execute the function in microseconds? Presently the only option I have is the time function in unix. Being a command-line utility makes it tougher to handle. Any help would be much appreciated. A: If the data being passed from program to program is small and can be converted to character format, you can pass it as one or more command-line arguments. If not you can write it to a file and pass its name as an argument. For Python programs many people use the timeit module's Timer class to measure code execution speed. You can also roll your own using the clock() or time() functions in the time module. The resolution depends on what platform you're running on. 
A: 1) There are many ways, the simplest is to use system with a string constructed from the output (or popen to open it as a pipe if you need to read back its output), or if you wish to leave the current program then you can use the various exec (placing the output in the arguments). In an sh shell you can also do this with command2 $(command1 args_to_command_1) 2) For timing in C, see clock and getrusage.
Plotting time against size of input for Longest Common Subsequence Problem
I wish to plot the time against the size of input for the Longest common subsequence problem, in recursive as well as dynamic programming approaches. Until now I've developed programs for evaluating lcs functions in both ways, a simple random string generator (with help from here) and a program to plot the graph. Now I need to connect all these in the following way. That is, the two programs for calculating lcs should run about 10 times with output from the simple random string generator given as command line arguments to these programs. The time taken for execution of these programs is calculated and this, along with the length of strings used, is stored in a file like l=15, r=0.003, c=0.001 This is parsed by the python program to populate the following lists sequence_lengths = [] recursive_times = [] dynamic_times = [] and then the graph is plotted. I have the following questions regarding the above. 1) How do I pass the output of one C program to another C program as command line arguments? 2) Is there any function to evaluate the time taken to execute the function in microseconds? Presently the only option I have is the time function in unix. Being a command-line utility makes it tougher to handle. Any help would be much appreciated.
[ "If the data being passed from program to program is small and can be converted to character format, you can pass it as one or more command-line arguments. If not you can write it to a file and pass its name as a argument.\nFor Python programs many people use the timeit module's Timer class to measure code executio...
[ 0, 0 ]
[]
[]
[ "c", "lcs", "python" ]
stackoverflow_0004237384_c_lcs_python.txt
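On the Python side, the timing loop the answers gesture at can be sketched in a few lines. The `lcs_dynamic` function here is a hypothetical stand-in for the asker's program, not the real implementation; `time.time()` gives wall-clock seconds with platform-dependent resolution, and for microsecond timing in C the answers point at clock() and getrusage():

```python
import time

def lcs_dynamic(a, b):
    # Tiny stand-in dynamic-programming LCS: keeps only two rows of
    # the classic dp table, returns the LCS length.
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

start = time.time()
length = lcs_dynamic("AGGTAB", "GXTXAYB")
elapsed = time.time() - start  # wall-clock seconds for this one call

print(length, elapsed >= 0.0)
```

In the actual experiment the measured (length, elapsed) pairs would be appended to sequence_lengths and dynamic_times before plotting.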
Q: Regular expression further checking I am working with a regular expression that checks whether a string contains a function call or not. My regular expression for checking that is as follows: regex=r' \w+[\ ]*\(.*?\)*' It successfully checks whether the string contains a function. But it also grabs a normal string which contains a bracketed value, such as "test (meaning of test)". So I have to check further: if there is a space between the function name and the brackets, it should not be caught as a match. So I added another check as follows: regex2=r'\s' It works successfully and can differentiate between "test()" and "test ()". But now I have to maintain another condition: if there is no space after the brackets (eg. test()abcd), it should not be caught as a function. The regular expression should only treat it as a match when it is like "test() abcd". I tried different regular expressions, but unfortunately those are not working. Here one thing to mention: the matched portion of the string is inserted into a list when a match is found, and the second step only checks that portion of the string. Example: String : This is a python function test()abcd At first it will check the string for a function, and when it finds a match with the function test() it sends only "test()" to check whether there is a gap between "test" and "()". In this last step I have to find out whether there is any gap between "test()" and "abcd". If there is a gap it will not be treated as a function, otherwise it is a normal portion of the string. How should I write the regular expression for such a case? The regular expression will have to match in the following cases: 1.test() abc 2.test(as) abc 3.test() It will not be treated as a function if: 1.test (a)abc 2.test ()abc A: (\w+\([^)]*\))(\s+|$) Basically you make sure it ends with either spaces or end of line. 
BTW the kiki tool is very useful for debugging Python re: http://code.google.com/p/kiki-re/ A: regex=r'\w+\([\w,]+\)(?:\s+|$)' A: I have solved the problem. At first I just checked for a string that has "()" using the regular expression: regex = r' \w+[\ ]*\(.*?\)\w*' Then, for checking both the space between the function name and the brackets and the gap after the brackets, I used the following function with a regular expression: def testFunction(self, func): a=" " func=str(func).strip() if a in func: return False else: a = re.findall(r'\w+[\ ]*', func) j = len(a) if j<=1: return True else: return False So it can now differentiate between "test() abc" and "test()abc". Thanks
Regular expression further checking
I am working with a regular expression that checks whether a string contains a function call or not. My regular expression for checking that is as follows: regex=r' \w+[\ ]*\(.*?\)*' It successfully checks whether the string contains a function. But it also grabs a normal string which contains a bracketed value, such as "test (meaning of test)". So I have to check further: if there is a space between the function name and the brackets, it should not be caught as a match. So I added another check as follows: regex2=r'\s' It works successfully and can differentiate between "test()" and "test ()". But now I have to maintain another condition: if there is no space after the brackets (eg. test()abcd), it should not be caught as a function. The regular expression should only treat it as a match when it is like "test() abcd". I tried different regular expressions, but unfortunately those are not working. Here one thing to mention: the matched portion of the string is inserted into a list when a match is found, and the second step only checks that portion of the string. Example: String : This is a python function test()abcd At first it will check the string for a function, and when it finds a match with the function test() it sends only "test()" to check whether there is a gap between "test" and "()". In this last step I have to find out whether there is any gap between "test()" and "abcd". If there is a gap it will not be treated as a function, otherwise it is a normal portion of the string. How should I write the regular expression for such a case? The regular expression will have to match in the following cases: 1.test() abc 2.test(as) abc 3.test() It will not be treated as a function if: 1.test (a)abc 2.test ()abc
[ "(\\w+\\([^)]*\\))(\\s+|$)\n\nBascially you make sure it ends with either spaces or end of line.\nBTW the kiki tool is very useful for debugging Python re: http://code.google.com/p/kiki-re/\n", "regex=r'\\w+\\([\\w,]+\\)(?:\\s+|$)'\n\n", "\nI have solved the problem at first I just chexked for the string that h...
[ 1, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0004236513_python_regex.txt
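The pattern from the first answer can be exercised against the exact cases listed in the question. Note that `\w+\(` already rules out a space before the opening bracket (a word character must sit immediately before it), and the trailing `(\s+|$)` rules out text glued to the closing bracket:

```python
import re

# Pattern from the first answer: a call must be followed by
# whitespace or the end of the string to count as a function.
pat = re.compile(r"(\w+\([^)]*\))(\s+|$)")

def looks_like_function(text):
    return pat.search(text) is not None

print(looks_like_function("test() abc"))    # True
print(looks_like_function("test(as) abc"))  # True
print(looks_like_function("test()"))        # True
print(looks_like_function("test()abc"))     # False
print(looks_like_function("test (a)abc"))   # False
```

This makes the separate `regex2=r'\s'` second pass unnecessary: one expression enforces both spacing rules.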
Q: Is it possible to check that a list is not empty before grabbing the first element using as few lines as possible in Python? I keep having to do this operation all over the place in my python code. I'm willing to bet there is an easier (aka one-line) way to do this. results = getResults() if len(results) > 0: result = results[0] I actually don't need "results" anywhere else, and I should only run "getResults" once. Any ideas? A: You haven't specified what result should be if results is empty, but this is one option (assuming Python 2.6 or greater): result = results[0] if results else None A: result = len(results) and results[0] or None. In case if results[0] is not 0, or False, or any empty container [], (), '', {}, set(), .... A: next(iter(getResults()), None) for Python 2.6 A: If you don't actually need results anywhere, consider a function that returns just the first result, and use that instead. def getFirstResult(): results = getResults() if len(results) > 0: return results[0] else: return None A: If you can change getResults() slightly along these lines: def getResults(): # ... getResults.seq = ... # save results in a func attribute return getResults.seq It would allow you to write: results = getResults.seq[0] if getResults() else None
Is it possible to check that a list is not empty before grabbing the first element using as few lines as possible in Python?
I keep having to do this operation all over the place in my python code. I'm willing to bet there is an easier (aka one-line) way to do this. results = getResults() if len(results) > 0: result = results[0] I actually don't need "results" anywhere else, and I should only run "getResults" once. Any ideas?
[ "You haven't specified what result should be if results is empty, but this is one option (assuming Python 2.6 or greater):\nresult = results[0] if results else None\n\n", "result = len(results) and results[0] or None. In case if results[0] is not 0, or False, or any empty container [], (), '', {}, set(), ....\n",...
[ 8, 2, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0004237422_python.txt
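A small sketch combining the two idioms from the answers above, wrapped in a helper so the one-liner is reusable:

```python
def first_or_none(results):
    # Conditional-expression form from the accepted answer
    # (Python 2.6+): first element, or None if the list is empty.
    return results[0] if results else None

print(first_or_none([10, 20, 30]))  # 10
print(first_or_none([]))            # None

# The iterator-based spelling also works, needs no indexing, and
# accepts any iterable, not just lists:
print(next(iter([10, 20, 30]), None))  # 10
print(next(iter([]), None))            # None
```

Either way, getResults() is called exactly once and no explicit len() check is needed.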
Q: Python: How can you access an object or dictionary interchangeably? I'm writing a Django view that sometimes gets data from the database, and sometimes from an external API. When it comes from the database, it is a Django model instance. Attributes must be accessed with dot notation. Coming from the API, the data is a dictionary and is accessed through subscript notation. In either case, some processing is done on the data. I'd like to avoid if from_DB: item.image_url='http://example.com/{0}'.format(item.image_id) else: item['image_url']='http://example.com/{0}'.format(item['image_id']) I'm trying to find a more elegant, DRY way to do this. Is there a way to get/set by key that works on either dictionaries or objects? A: In JavaScript they're equivalent (often useful; I mention it in case you didn't know as you're doing web development), but in Python they're different - [items] versus .attributes. It's easy to write something which allows access through attributes, using __getattr__: class AttrDict(dict): def __getattr__(self, attr): return self[attr] def __setattr__(self, attr, value): self[attr] = value Then just use it as you'd use a dict (it'll accept a dict as a parameter, as it's extending dict), but you can do things like item.image_url and it'll map it to item['image_url'], getting or setting. A: You could use a Bunch class, which transforms the dictionary into something that accepts dot notation. A: I don't know what the implications will be, but I would add a method to the django model which reads the dictionary into itself, so you can access the data through the model.
Python: How can you access an object or dictionary interchangeably?
I'm writing a Django view that sometimes gets data from the database, and sometimes from an external API. When it comes from the database, it is a Django model instance. Attributes must be accessed with dot notation. Coming from the API, the data is a dictionary and is accessed through subscript notation. In either case, some processing is done on the data. I'd like to avoid if from_DB: item.image_url='http://example.com/{0}'.format(item.image_id) else: item['image_url']='http://example.com/{0}'.format(item['image_id']) I'm trying to find a more elegant, DRY way to do this. Is there a way to get/set by key that works on either dictionaries or objects?
[ "In JavaScript they're equivalent (often useful; I mention it in case you didn't know as you're doing web development), but in Python they're different - [items] versus .attributes.\nIt's easy to write something which allows access through attributes, using __getattr__:\nclass AttrDict(dict):\n def __getattr__(s...
[ 6, 6, 2 ]
[]
[]
[ "dictionary", "django", "object", "python" ]
stackoverflow_0004237541_dictionary_django_object_python.txt
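A runnable sketch of the AttrDict from the first answer, with one small refinement of my own: missing keys raise AttributeError, as Python attribute access is expected to, rather than leaking a KeyError:

```python
class AttrDict(dict):
    # Attribute reads and writes fall through to the dict itself, so
    # item.image_url and item['image_url'] see the same data.
    def __getattr__(self, attr):
        try:
            return self[attr]
        except KeyError:
            raise AttributeError(attr)

    def __setattr__(self, attr, value):
        self[attr] = value

item = AttrDict({"image_id": 42})
item.image_url = "http://example.com/{0}".format(item.image_id)

print(item["image_url"])  # http://example.com/42
```

Wrapping the API dictionary in AttrDict lets the view use dot notation unconditionally, so the if from_DB branch disappears.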
Q: run a function (from a .py file) from a linux console maybe the title is not very clear, let me elaborate. I have a python script that opens a ppm file, applies a chosen filter (rotations...) and creates a new picture. Up to here everything works fine, but I want to do the same thing through a linux console like: ppmfilter.py ROTD /path/imageIn.ppm /path/imageOut.ppm here ROTD is the name of the function that applies a rotation. I don't know how to do this; I'm looking for a library that'll allow me to do this. Looking forward to your help. P.S.: I'm using python 2.7 A: There is a relatively easy way: You can determine the global names (functions, variables, etc.) with the use of 'globals()'. This gives you a dictionary of all global symbols. You'll just need to check the type (with type() and the module types) and if it's a function, you can call it with sys.argv: import types import sys def ROTD(infile, outfile): # do something if __name__ == '__main__': symbol = globals().get(sys.argv[1]) if hasattr(symbol, '__call__'): symbol(*sys.argv[2:]) This will pass the program arguments (excluding the filename and the command name) to the function. EDIT: Please, don't forget the error handling. I omitted it for reasons of clarity. A: Use a main() function: def main(): # call your function here if __name__ == "__main__": main() A: A nice way to do it would be to define a big dictionary {alias: function} inside your module. For instance: actions = { 'ROTD': ROTD, 'REFL': reflect_image, 'INVT': invIm, } You get the idea. Then take the first command-line argument and interpret it as a key of this dictionary, applying actions[k] to the rest of the arguments. 
A: You can define in your ppmfilter.py main section doing this: if __name__ == "__main__": import sys ROTD(sys.argv[1], sys.argv[2]) # change according to the signature of the function and call it: python ppmfilter.py file1 file2 You can also run python -c in the directory that contains you *.py file: python -c "import ppmfilter; ppmfilter.ROTD('/path/to/file1', '/path/to/file2')"
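The dispatch-dictionary answer above can be sketched as a complete script. This is a minimal illustration, not the asker's actual filter code: the function bodies, the `ACTIONS` name, and the return strings are placeholders, written in Python 3 syntax.

```python
import sys

def rotd(infile, outfile):
    # Placeholder for the real rotation filter.
    return "ROTD %s -> %s" % (infile, outfile)

def refl(infile, outfile):
    # Placeholder for a hypothetical reflection filter.
    return "REFL %s -> %s" % (infile, outfile)

# Explicit dispatch table: unlike the globals() approach, only the
# names listed here can be invoked from the command line.
ACTIONS = {"ROTD": rotd, "REFL": refl}

def main(argv):
    if len(argv) < 2 or argv[1] not in ACTIONS:
        sys.stderr.write("usage: ppmfilter.py {ROTD|REFL} infile outfile\n")
        return 2
    print(ACTIONS[argv[1]](*argv[2:]))
    return 0
```

In a script you would finish with `sys.exit(main(sys.argv))` under the usual `if __name__ == "__main__":` guard; the explicit table trades a little typing for not exposing every global symbol to the shell.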
run a function (from a .py file) from a linux console
maybe the title is not very clear, let me elaborate. I have a python script that open a ppm file , apply a chosen filter(rotations...) and create a new picture. until here everything work fine. but I want to do the same thing through a linux console like: ppmfilter.py ROTD /path/imageIn.ppm /path/imageOut.ppm here ROTD is the name of the function that apply a rotation. I don't know how to do this, I'm looking for a library that'll allow me to do this. looking forward for your help. P.S.: I'm using python 2.7
[ "There is a relatively easy way:\nYou can determine the global names (functions, variables, etc.) with the use of 'globals()'. This gives you a dictionary of all global symbols. You'll just need to check the type (with type() and the module types) and if it's a function, you can call it with sys.argv:\nimport types...
[ 7, 1, 1, 0 ]
[]
[]
[ "console", "ppm", "python" ]
stackoverflow_0004237749_console_ppm_python.txt
Q: problem with a 2 dimensional array(list of list of dict) cell={'num':0,'state':1} cell_2d=[] cell_list=[] for i in range(2): for j in range(2): cell_list=cell_list+[cell] cell_2d=cell_2d+[cell_list] cell_list=[] print "initially:" print cell_2d cell_2d[0][0]['num']=-1 print "finally:" print cell_2d Output obtained is: initially: [[{'state': 1, 'num': 0}, {'state': 1, 'num': 0}], [{'state': 1, 'num': 0}, {'state': 1, 'num': 0}]] finally: [[{'state': 1, 'num': -1}, {'state': 1, 'num': -1}], [{'state': 1, 'num': -1}, {'state': 1, 'num': -1}]] when the line 11 is executed, I expect only the first element of the first list of cell_2d to be changed. But the output shows that all 'num' of all elements of cell_2d is changed to -1. Not able to get why this is happening. Can someone please tell me what is the mistake with the code? Thanx in advance. A: OK, I see it. You're reusing the cell object. Because Python uses references, you're just making four references to the same object, so when you change one, you change them all. Inside your inner loop, try: cell_list = cell_list + [{'num':1, 'state':0}] Which can be shortened to: cell_list.append({'num':1, 'state':0}) Or, in fact, you can replace the inner loop (with j) with: cell_list = [{'num':1, 'state':0} for j in range(2)] A: Simply replace this line cell_2d=cell_2d+[cell_list] With this cell_2d = cell_2d + [ cell_list.copy() ] This way python will make a copy from the dictionary 'cell_list' instead of storing a reference.
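The aliasing both answers describe can be demonstrated side by side. This sketch uses the list-comprehension form suggested in the accepted answer; the variable names are illustrative.

```python
# Reusing one dict object reproduces the bug: all four grid slots
# hold references to the same dictionary.
cell = {'num': 0, 'state': 1}
shared = [[cell for j in range(2)] for i in range(2)]
shared[0][0]['num'] = -1
assert shared[1][1]['num'] == -1  # every "cell" changed at once

# Creating a fresh dict per slot gives independent cells.
grid = [[{'num': 0, 'state': 1} for j in range(2)] for i in range(2)]
grid[0][0]['num'] = -1
assert grid[0][0]['num'] == -1
assert grid[1][1]['num'] == 0     # the other cells are untouched
```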
problem with a 2 dimensional array(list of list of dict)
cell={'num':0,'state':1} cell_2d=[] cell_list=[] for i in range(2): for j in range(2): cell_list=cell_list+[cell] cell_2d=cell_2d+[cell_list] cell_list=[] print "initially:" print cell_2d cell_2d[0][0]['num']=-1 print "finally:" print cell_2d Output obtained is: initially: [[{'state': 1, 'num': 0}, {'state': 1, 'num': 0}], [{'state': 1, 'num': 0}, {'state': 1, 'num': 0}]] finally: [[{'state': 1, 'num': -1}, {'state': 1, 'num': -1}], [{'state': 1, 'num': -1}, {'state': 1, 'num': -1}]] when the line 11 is executed, I expect only the first element of the first list of cell_2d to be changed. But the output shows that all 'num' of all elements of cell_2d is changed to -1. Not able to get why this is happening. Can someone please tell me what is the mistake with the code? Thanx in advance.
[ "OK, I see it. You're reusing the cell object. Because Python uses references, you're just making four references to the same object, so when you change one, you change them all.\nInside your inner loop, try:\ncell_list = cell_list + [{'num':1, 'state':0}]\n\nWhich can be shortened to:\ncell_list.append({'num':1, '...
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0004237824_python.txt
Q: Python Qt: embedded html bug? Here is a strange thing that is happening... I have used embedded html with Qt Python to display a form inside the GUI/Widget. The problem is, if the cell has more content, it shows a black shadow like a box on the right side of that cell/table. Here is an example of the code working fine: html += ("<BR><BR><table border='0' cellspacing='0' cellpadding='0'>" "<tr>" "<td bgcolor='#000000'>" "<table border='0' cellspacing='1' cellpadding='4'>" "<tr>" "<TD WIDTH=837 bgcolor='#FFFFFF'><font size='4' color='black'><DIV align=center><B> StackOverFlow Forum<BR>YOUR FAVORITE WEB SITE</DIV></font></td>" "</tr>" "</table>" "</td>" "</tr>" "</table>" ) As expected, the first example shows this: +----------------------------------------------------+ | STACKOVERFLOW FORUM | | YOUR FAVORITE WEB SITE | +----------------------------------------------------+ then in the sequence, the same code, but with a little more content to that cell: html += ("<table border='0' cellspacing='0' cellpadding='0'>" "<tr>" "<td bgcolor='#000000'>" "<table border='0' cellspacing='1' cellpadding='4'>" "<tr>" "<TD WIDTH=837 bgcolor='#FFFFFF'><font size='4' color='black'><DIV align=center><B>STACKOVERFLOW FORUM STACKOVERFLOW FORUM STACKOVERFLOW FORUM STACKOVERFLOW FORUM STACKOVERFLOW FORUM<BR>YOUR FAVORITE WEB SITE </DIV></font></td>" "</tr>" "</table>" "</td>" "</tr>" "</table>" ) For the second, it shows a black shadow/box on the right-side of the table, just like this. +----------------------------------------------------+|||| | STACKOVERFLOW FORUM STACKOVERFLOW FORUM ... ||||| | YOUR FAVORITE WEB SITE ||||| +----------------------------------------------------+|||| So, quite strange, considering that it is exactly the same code, only the second having more text inside it. Any suggestion? A: If this is an exact snippet from you code then problem may be connected with the fact that you don't have closing </b> tag. 
If that doesn't help try removing <div> and adding align='center' to <td>
Python Qt: embedded html bug?
Here is a strange thing that is happening... I have used embedded html with Qt Python to display a form inside the GUI/Widget. The problem is, if the cell has more content, it shows a black shadow like a box on the right side of that cell/table. Here is an example of the code working fine: html += ("<BR><BR><table border='0' cellspacing='0' cellpadding='0'>" "<tr>" "<td bgcolor='#000000'>" "<table border='0' cellspacing='1' cellpadding='4'>" "<tr>" "<TD WIDTH=837 bgcolor='#FFFFFF'><font size='4' color='black'><DIV align=center><B> StackOverFlow Forum<BR>YOUR FAVORITE WEB SITE</DIV></font></td>" "</tr>" "</table>" "</td>" "</tr>" "</table>" ) As expected, the first example shows this: +----------------------------------------------------+ | STACKOVERFLOW FORUM | | YOUR FAVORITE WEB SITE | +----------------------------------------------------+ then in the sequence, the same code, but with a little more content to that cell: html += ("<table border='0' cellspacing='0' cellpadding='0'>" "<tr>" "<td bgcolor='#000000'>" "<table border='0' cellspacing='1' cellpadding='4'>" "<tr>" "<TD WIDTH=837 bgcolor='#FFFFFF'><font size='4' color='black'><DIV align=center><B>STACKOVERFLOW FORUM STACKOVERFLOW FORUM STACKOVERFLOW FORUM STACKOVERFLOW FORUM STACKOVERFLOW FORUM<BR>YOUR FAVORITE WEB SITE </DIV></font></td>" "</tr>" "</table>" "</td>" "</tr>" "</table>" ) For the second, it shows a black shadow/box on the right-side of the table, just like this. +----------------------------------------------------+|||| | STACKOVERFLOW FORUM STACKOVERFLOW FORUM ... ||||| | YOUR FAVORITE WEB SITE ||||| +----------------------------------------------------+|||| So, quite strange, considering that it is exactly the same code, only the second having more text inside it. Any suggestion?
[ "If this is an exact snippet from you code then problem may be connected with the fact that you don't have closing </b> tag. If that doesn't help try removing <div> and adding align='center' to <td>\n" ]
[ 0 ]
[]
[]
[ "html", "html_table", "python", "qt" ]
stackoverflow_0002130705_html_html_table_python_qt.txt
Q: Django url problem I'm getting this error error at / unknown specifier: ?P& and I suppose it's coming from this line (r'^(?P&lt;template&gt;\w+)/$', static_page), I copied this from a tutorial, how do I fix this error ? A: You want to use r'^(?P<template>\w+)/$' for your regex. You seem to have copied the regex with HTML entities still encoded; the regex engine expects verbatim < and >.
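The answer's diagnosis can be checked with the `re` module directly, since Django URL patterns are ordinary regexes: the entity-encoded copy is not even a compilable pattern, while the decoded version matches as intended.

```python
import re

# The pattern as the tutorial intended, with literal angle brackets:
pattern = re.compile(r'^(?P<template>\w+)/$')
m = pattern.match('about/')
assert m is not None and m.group('template') == 'about'

# The copy-pasted version still contains HTML entities, so after
# '(?P' the regex engine sees '&' and rejects the pattern outright:
try:
    re.compile(r'^(?P&lt;template&gt;\w+)/$')
except re.error as exc:
    print('rejected:', exc)
```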
Django url problem
I'm getting this error error at / unknown specifier: ?P& and I suppose it's coming from this line (r'^(?P&lt;template&gt;\w+)/$', static_page), I copied this from a tutorial, how do I fix this error ?
[ "You want to use\nr'^(?P<template>\\w+)/$'\n\nfor your regex. You seem to have copied the regex with HTML entities still encoded; the regex engine expects verbatim < and >.\n" ]
[ 2 ]
[]
[]
[ "django", "python", "regex", "url" ]
stackoverflow_0004237805_django_python_regex_url.txt
Q: Python: max/min builtin functions depend on parameter order max(float('nan'), 1) evaluates to nan max(1, float('nan')) evaluates to 1 Is it the intended behavior? Thanks for the answers. max raises an exception when the iterable is empty. Why wouldn't Python's max raise an exception when nan is present? Or at least do something useful, like return nan or ignore nan. The current behavior is very unsafe and seems completely unreasonable. I found an even more surprising consequence of this behavior, so I just posted a related question. A: In [19]: 1>float('nan') Out[19]: False In [20]: float('nan')>1 Out[20]: False The float nan is neither bigger nor smaller than the integer 1. max starts by choosing the first element, and only replaces it when it finds an element which is strictly larger. In [31]: max(1,float('nan')) Out[31]: 1 Since nan is not larger than 1, 1 is returned. In [32]: max(float('nan'),1) Out[32]: nan Since 1 is not larger than nan, nan is returned. PS. Note that np.max treats float('nan') differently: In [36]: import numpy as np In [91]: np.max([1,float('nan')]) Out[91]: nan In [92]: np.max([float('nan'),1]) Out[92]: nan but if you wish to ignore np.nans, you can use np.nanmax: In [93]: np.nanmax([1,float('nan')]) Out[93]: 1.0 In [94]: np.nanmax([float('nan'),1]) Out[94]: 1.0 A: I haven't seen this before, but it makes sense. Notice that nan is a very weird object: >>> x = float('nan') >>> x == x False >>> x > 1 False >>> x < 1 False I would say that the behaviour of max is undefined in this case -- what answer would you expect? The only sensible behaviour is to assume that the operations are antisymmetric. Notice that you can reproduce this behaviour by making a broken class: >>> class Broken(object): ... __le__ = __ge__ = __eq__ = __lt__ = __gt__ = __ne__ = ... lambda self, other: False ... 
>>> x = Broken() >>> x == x False >>> x < 1 False >>> x > 1 False >>> max(x, 1) <__main__.Broken object at 0x024B5B50> >>> max(1, x) 1 A: Max works the following way: The first item is set as maxval and then the next is compared to this value. The comparation will always return False: >>> float('nan') < 1 False >>> float('nan') > 1 False So if the first value is nan, then (since the comparation returns false) it will not be replaced upon the next step. OTOH if 1 is the first, the same happens: but in this case, since 1 was set, it will be the maximum. You can verify this in the python code, just look up the function min_max in Python/bltinmodule.c
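The mechanics described in the answers can be verified directly. The last two lines show one common workaround, filtering nans out before comparing, which is an addition of mine rather than something taken from the answers above.

```python
import math

nan = float('nan')

# All ordering comparisons involving nan are False:
assert not (nan > 1) and not (nan < 1) and not (nan == nan)

# max() keeps the first item unless a later one compares strictly
# greater, so the result depends on argument order:
assert math.isnan(max(nan, 1))
assert max(1, nan) == 1

# One safe pattern: drop nans before taking the max.
values = [3, nan, 7, nan, 5]
assert max(v for v in values if not math.isnan(v)) == 7
```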
Python: max/min builtin functions depend on parameter order
max(float('nan'), 1) evaluates to nan max(1, float('nan')) evaluates to 1 Is it the intended behavior? Thanks for the answers. max raises an exception when the iterable is empty. Why wouldn't Python's max raise an exception when nan is present? Or at least do something useful, like return nan or ignore nan. The current behavior is very unsafe and seems completely unreasonable. I found an even more surprising consequence of this behavior, so I just posted a related question.
[ "In [19]: 1>float('nan')\nOut[19]: False\n\nIn [20]: float('nan')>1\nOut[20]: False\n\nThe float nan is neither bigger nor smaller than the integer 1.\nmax starts by choosing the first element, and only replaces it when it finds an element which is strictly larger.\nIn [31]: max(1,float('nan'))\nOut[31]: 1\n\nSince...
[ 49, 9, 1 ]
[]
[]
[ "comparison", "math", "python" ]
stackoverflow_0004237914_comparison_math_python.txt
Q: Security Web programming API in Python Is there a secure API available which will help protect against attacks like XSS, CSRF etc by providing encoders, token support etc.? for python which can be used with google app engine? I do not want to reinvent the wheel by coding it all again if its already out there . A: Python Security is a good resource: Access Control/Authorization Authentication Configuration Cross-Site Request Forgery (CSRF) Cross-Site Scripting (XSS) Cryptography Escaping Hashing Injection Object Reference Redirects Session Management Taint Mode Transport Layer Security Validation
Security Web programming API in Python
Is there a secure API available which will help protect against attacks like XSS, CSRF etc by providing encoders, token support etc.? for python which can be used with google app engine? I do not want to reinvent the wheel by coding it all again if its already out there .
[ "Python Security is a good resource:\n\nAccess Control/Authorization\nAuthentication\nConfiguration\nCross-Site Request Forgery (CSRF)\nCross-Site Scripting (XSS) \nCryptography\nEscaping\nHashing\nInjection\nObject Reference\nRedirects\nSession Management\nTaint Mode\nTransport Layer Security\nValidation\n\n" ]
[ 5 ]
[]
[]
[ "api", "google_app_engine", "python", "security" ]
stackoverflow_0004238112_api_google_app_engine_python_security.txt
Q: Encoding problem in Google AppEngine when BlobstoreUploadHandler I am seeing weird characters in the datastore when reading them in BlobstoreUploadHandler. The problem is only on Google servers, everything works great on the development server. This usually works: item = models.Item() item.description = self.request.get("description") item.put() However, if this is within a BlobstoreUploadHandler, the description text is all messed up. The corrupted characters or on the form '=XX', where X is a hex. Line breaks are also corrupted. Explanation on how it is best to deal with unicode in user submitted content would be appreciated. Update: It is a known bug. I still don't have a workaround yet. A: It's a known bug, check Blobstore handler breaking data encoding issue. Performing a POST to a Blobstore handler, test fields are getting converted to MIME quoted printable format. I think you could workaround this using quopri Python standard module.
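The '=XX' sequences the asker describes are quoted-printable escapes, and the `quopri` workaround mentioned in the answer looks roughly like this (the sample bytes are made up for illustration):

```python
import quopri

# '=C3=A9' is the quoted-printable form of the UTF-8 bytes for 'é',
# and '=\r\n' is a soft line break that disappears on decode.
mangled = b'caf=C3=A9 is =\r\nopen'
decoded = quopri.decodestring(mangled)
assert decoded.decode('utf-8') == u'caf\xe9 is open'
```

Decoding like this restores both the escaped characters and the corrupted line breaks the question mentions.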
Encoding problem in Google AppEngine when BlobstoreUploadHandler
I am seeing weird characters in the datastore when reading them in BlobstoreUploadHandler. The problem is only on Google servers, everything works great on the development server. This usually works: item = models.Item() item.description = self.request.get("description") item.put() However, if this is within a BlobstoreUploadHandler, the description text is all messed up. The corrupted characters or on the form '=XX', where X is a hex. Line breaks are also corrupted. Explanation on how it is best to deal with unicode in user submitted content would be appreciated. Update: It is a known bug. I still don't have a workaround yet.
[ "It's a known bug, check Blobstore handler breaking data encoding issue.\nPerforming a POST to a Blobstore handler, test fields are getting converted to MIME quoted printable format.\nI think you could workaround this using quopri Python standard module.\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "python", "unicode" ]
stackoverflow_0004235992_google_app_engine_python_unicode.txt
Q: Mechanize for Python 3.x is there any way how to use Mechanize with Python 3.x? Or is there any substitute which works in Python 3.x? I've been searching for hours, but I didn't find anything :( I'm looking for way how to login to the site with Python, but the site uses javascript. Thanks in advance, Adam. A: lxml.html provides form handling facilities and supports Python 3. A: I'm working on a similar project, but the faq for mechanize explicitly says they don't intend on supporting 3x any time soon. Is there a reason the code has to be written in 3? The way I'm trying to tackle the problem is by emulating the java script with form submits, it takes some reverse engineering. (which is, if the javascript ends by submitting a form, and you can find the arguments the script passes to the submit(), just follow the example from the mechanize doc http://wwwsearch.sourceforge.net/mechanize/ br.select_form(name="order") # Browser passes through unknown attributes (including methods) # to the selected HTMLForm. br["cheeses"] = ["mozzarella", "caerphilly"] # (the method here is __setitem__) # Submit current form. Browser calls .close() on the current response on # navigation, so this closes response1 response2 = br.submit()
Mechanize for Python 3.x
is there any way how to use Mechanize with Python 3.x? Or is there any substitute which works in Python 3.x? I've been searching for hours, but I didn't find anything :( I'm looking for way how to login to the site with Python, but the site uses javascript. Thanks in advance, Adam.
[ "lxml.html provides form handling facilities and supports Python 3.\n", "I'm working on a similar project, but the faq for mechanize explicitly says they don't intend on supporting 3x any time soon. Is there a reason the code has to be written in 3? \nThe way I'm trying to tackle the problem is by emulating the ...
[ 3, 0 ]
[]
[]
[ "authentication", "mechanize", "python", "screen", "screen_scraping" ]
stackoverflow_0004237164_authentication_mechanize_python_screen_screen_scraping.txt
Q: TypeError coercing to Unicode: need string or buffer In a Django project, I am trying to pass the url to a want instance. Comments are applied to a Want. I have been trying to figure out this error but am stumped. This function: def comment_email(request, comment, **kwargs): want = get_object_or_404(Want, id=comment.object_pk) url = want.get_absolute_url print url Is throwing this error Environment: Request Method: POST Request URL: http://localhost:8000/comments/post/ Django Version: 1.2.3 Python Version: 2.7.0 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.admin', 'django.contrib.comments', 'django.contrib.markup', 'src', 'lib.tagging', 'lib.markdown', 'lib.avatar', 'ajaxcomments', 'south'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware') Traceback: File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 100. response = callback(request, *callback_args, **callback_kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ajaxcomments/utils.py" in wrapped 57. return func(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view 76. response = view_func(request, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/views/decorators/http.py" in inner 37. return func(request, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/comments/views/comments.py" in post_comment 127. 
request = request File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/dispatch/dispatcher.py" in send 162. response = receiver(signal=self, sender=sender, **named) File "/Users/emilepetrone/code/apprentice2/src/utils.py" in comment_email 24. print url File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/base.py" in __repr__ 344. u = unicode(self) Exception Type: TypeError at /comments/post/ Exception Value: coercing to Unicode: need string or buffer, Want found Here is the Want model: class Want(models.Model): pub_date = models.DateTimeField(default=datetime.now,auto_now_add=True,db_index=True) body = models.TextField(default='',max_length=1000) body_html = models.TextField(editable=False, blank=True) #Metadata mentee = models.ForeignKey(User) points = models.IntegerField(default=3) enable_comments = models.BooleanField(default=True) featured = models.BooleanField(default=False) #Tags tags = TagField(help_text="Autocomplete") def get_tags(self): return Tag.objects.get_for_object(self) class Meta: ordering = ['-pub_date'] def __unicode__(self): return self def save(self): self.body_html = markdown(self.body) super(Want, self).save() def get_absolute_url(self): return ( { 'object_id': self.id }) get_absolute_url = models.permalink(get_absolute_url) Thank you for your help! A: I think the problem is def __unicode__(self): return self which should return a unicode string rather than a "Want" instance. I'm not sure what you want there instead -- maybe "self.id"
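The failure is easy to reproduce outside Django. In Python 3 the same bug shows up in `__str__` rather than `__unicode__`; the class names below are illustrative, not the asker's models.

```python
class BadWant(object):
    def __str__(self):
        return self            # bug: returns the instance, not text

class GoodWant(object):
    def __init__(self, pk):
        self.pk = pk
    def __str__(self):
        return 'Want %d' % self.pk

try:
    str(BadWant())             # raises TypeError: non-string returned
except TypeError as exc:
    print('TypeError:', exc)

assert str(GoodWant(7)) == 'Want 7'
```

Note also that the view assigns `url = want.get_absolute_url` without calling it, so `url` is a bound method; printing it triggers the model's `__repr__`, which is why the traceback ends in `u = unicode(self)`.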
TypeError coercing to Unicode: need string or buffer
In a Django project, I am trying to pass the url to a want instance. Comments are applied to a Want. I have been trying to figure out this error but am stumped. This function: def comment_email(request, comment, **kwargs): want = get_object_or_404(Want, id=comment.object_pk) url = want.get_absolute_url print url Is throwing this error Environment: Request Method: POST Request URL: http://localhost:8000/comments/post/ Django Version: 1.2.3 Python Version: 2.7.0 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.admin', 'django.contrib.comments', 'django.contrib.markup', 'src', 'lib.tagging', 'lib.markdown', 'lib.avatar', 'ajaxcomments', 'south'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware') Traceback: File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 100. response = callback(request, *callback_args, **callback_kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ajaxcomments/utils.py" in wrapped 57. return func(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view 76. response = view_func(request, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/views/decorators/http.py" in inner 37. return func(request, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/comments/views/comments.py" in post_comment 127. 
request = request File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/dispatch/dispatcher.py" in send 162. response = receiver(signal=self, sender=sender, **named) File "/Users/emilepetrone/code/apprentice2/src/utils.py" in comment_email 24. print url File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/base.py" in __repr__ 344. u = unicode(self) Exception Type: TypeError at /comments/post/ Exception Value: coercing to Unicode: need string or buffer, Want found Here is the Want model: class Want(models.Model): pub_date = models.DateTimeField(default=datetime.now,auto_now_add=True,db_index=True) body = models.TextField(default='',max_length=1000) body_html = models.TextField(editable=False, blank=True) #Metadata mentee = models.ForeignKey(User) points = models.IntegerField(default=3) enable_comments = models.BooleanField(default=True) featured = models.BooleanField(default=False) #Tags tags = TagField(help_text="Autocomplete") def get_tags(self): return Tag.objects.get_for_object(self) class Meta: ordering = ['-pub_date'] def __unicode__(self): return self def save(self): self.body_html = markdown(self.body) super(Want, self).save() def get_absolute_url(self): return ( { 'object_id': self.id }) get_absolute_url = models.permalink(get_absolute_url) Thank you for your help!
[ "I think the problem is\ndef __unicode__(self):\n return self\n\nwhich should return a unicode string rather than a \"Want\" instance. I'm not sure what you want there instead -- maybe \"self.id\"\n" ]
[ 7 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0004237324_django_django_models_python.txt