title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Downloading a Large Number of Files from S3 | 1,051,275 | <p>What's the fastest way to get a large number of files (relatively small, 10-50 kB) from Amazon S3 from Python? (On the order of 200,000 to a million files.)</p>
<p>At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. </p>
<p>Would some type of concurrency help? PyCurl.CurlMulti object? </p>
<p>I am open to all suggestions. Thanks!</p>
| 2 | 2009-06-26T21:02:04Z | 1,051,408 | <p>I've been using txaws with twisted for S3 work, though what you'd probably want is just to get the authenticated URL and use twisted.web.client.DownloadPage (which by default will happily stream straight to a file without much interaction).</p>
<p>Twisted makes it easy to run at whatever concurrency you want. For something on the order of 200,000, I'd probably make a generator and use a cooperator to set my concurrency and just let the generator generate every required download request.</p>
<p>If you're not familiar with twisted, you'll find the model takes a bit of time to get used to, but it's oh so worth it. In this case, I'd expect it to take minimal CPU and memory overhead, but you'd have to worry about file descriptors. It's quite easy to mix in Perspective Broker and farm the work out to multiple machines should you find yourself needing more file descriptors, or if you have multiple connections you'd like to pull the data over.</p>
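Twisted itself isn't shown here, but the pattern this answer describes (a generator of download jobs consumed at a fixed concurrency) can be sketched with the standard library's asyncio in modern Python. The URLs and the body of `fetch()` below are placeholders, not real S3 or txaws calls:

```python
import asyncio

async def fetch(url, sem):
    # Placeholder body: a real version would GET the signed S3 URL here.
    async with sem:
        await asyncio.sleep(0)  # stand-in for the network round trip
        return url, len(url)

async def download_all(urls, concurrency=50):
    # The semaphore plays the role of Twisted's cooperator: at most
    # `concurrency` downloads are in flight at any moment.
    sem = asyncio.Semaphore(concurrency)
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

urls = ["https://example.invalid/key-%d" % i for i in range(100)]
results = asyncio.run(download_all(urls))
```

For a million keys you would feed the semaphore-guarded coroutines from a generator rather than materializing all tasks at once, which is exactly the cooperator idea above.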
| 0 | 2009-06-26T21:47:56Z | [
"python",
"curl",
"amazon-s3",
"amazon-web-services",
"boto"
] |
Downloading a Large Number of Files from S3 | 1,051,275 | <p>What's the fastest way to get a large number of files (relatively small, 10-50 kB) from Amazon S3 from Python? (On the order of 200,000 to a million files.)</p>
<p>At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. </p>
<p>Would some type of concurrency help? PyCurl.CurlMulti object? </p>
<p>I am open to all suggestions. Thanks!</p>
| 2 | 2009-06-26T21:02:04Z | 1,051,414 | <p>In the case of Python, since this is I/O bound, multiple threads will make use of the CPU, but they will probably use only one core. If you have multiple cores, you might want to consider the new <a href="http://docs.python.org/dev/library/multiprocessing.html" rel="nofollow">multiprocessing</a> module. Even then you may want each process to use multiple threads. You would have to do some tweaking of the number of processes and threads.</p>
<p>If you do use multiple threads, this is a good candidate for the <a href="http://docs.python.org/dev/library/queue.html" rel="nofollow">Queue</a> class.</p>
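As a hedged sketch of that threads-plus-Queue pattern (the "download" here is a placeholder transformation, not a real S3 call, and the sentinel-based shutdown is one of several reasonable designs):

```python
import queue
import threading

def worker(q, results, lock):
    # Each worker pulls URLs until it sees the None sentinel.
    while True:
        url = q.get()
        if url is None:
            q.task_done()
            break
        data = url.upper()  # placeholder for the actual download
        with lock:
            results.append(data)
        q.task_done()

def run_pool(urls, num_threads=8):
    q = queue.Queue()
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(q, results, lock))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for u in urls:
        q.put(u)
    for _ in threads:
        q.put(None)  # one sentinel per worker
    q.join()
    for t in threads:
        t.join()
    return results

out = run_pool(["a", "b", "c"])
```

The Queue handles all the locking for the work items themselves; the explicit lock is only for the shared results list.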
| 1 | 2009-06-26T21:50:34Z | [
"python",
"curl",
"amazon-s3",
"amazon-web-services",
"boto"
] |
Downloading a Large Number of Files from S3 | 1,051,275 | <p>What's the fastest way to get a large number of files (relatively small, 10-50 kB) from Amazon S3 from Python? (On the order of 200,000 to a million files.)</p>
<p>At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. </p>
<p>Would some type of concurrency help? PyCurl.CurlMulti object? </p>
<p>I am open to all suggestions. Thanks!</p>
| 2 | 2009-06-26T21:02:04Z | 1,051,596 | <p>What about threads plus a queue? I recommend this article: <a href="http://www.ibm.com/developerworks/aix/library/au-threadingpython/" rel="nofollow">Practical threaded programming with Python</a>.</p>
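That article predates it, but the same thread-plus-queue pattern is now packaged in the standard library as concurrent.futures.ThreadPoolExecutor. A sketch with a placeholder download function (the URLs and the body of `download()` are illustrative, not real S3 calls):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def download(url):
    # Placeholder: a real version would fetch the signed URL here.
    return url[::-1]

urls = ["s3://bucket/a", "s3://bucket/b"]
with ThreadPoolExecutor(max_workers=20) as pool:
    # Submit everything, then collect results as each future finishes.
    futures = {pool.submit(download, u): u for u in urls}
    results = {futures[f]: f.result() for f in as_completed(futures)}
```

The executor owns the queue and the worker threads, so the only tuning knob left is `max_workers`.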
| 0 | 2009-06-26T22:42:24Z | [
"python",
"curl",
"amazon-s3",
"amazon-web-services",
"boto"
] |
Downloading a Large Number of Files from S3 | 1,051,275 | <p>What's the fastest way to get a large number of files (relatively small, 10-50 kB) from Amazon S3 from Python? (On the order of 200,000 to a million files.)</p>
<p>At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. </p>
<p>Would some type of concurrency help? PyCurl.CurlMulti object? </p>
<p>I am open to all suggestions. Thanks!</p>
| 2 | 2009-06-26T21:02:04Z | 1,052,748 | <p>You might consider using <a href="http://code.google.com/p/s3fs/wiki/FuseOverAmazon" rel="nofollow">s3fs</a>, and just running concurrent file system commands from Python.</p>
| 1 | 2009-06-27T12:18:30Z | [
"python",
"curl",
"amazon-s3",
"amazon-web-services",
"boto"
] |
Downloading a Large Number of Files from S3 | 1,051,275 | <p>What's the fastest way to get a large number of files (relatively small, 10-50 kB) from Amazon S3 from Python? (On the order of 200,000 to a million files.)</p>
<p>At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. </p>
<p>Would some type of concurrency help? PyCurl.CurlMulti object? </p>
<p>I am open to all suggestions. Thanks!</p>
| 2 | 2009-06-26T21:02:04Z | 1,085,055 | <p>Each job is best done with the appropriate tool :)</p>
<p>You want to use Python to stress-test S3 :), so I suggest finding a large-volume downloader program and passing links to it.</p>
<p>On Windows I have experience installing the ReGet program (shareware, from <a href="http://reget.com" rel="nofollow">http://reget.com</a>) and creating download tasks via its COM interface.</p>
<p>Of course, other programs with a usable interface may exist.</p>
<p>Regards!</p>
| 0 | 2009-07-05T23:25:12Z | [
"python",
"curl",
"amazon-s3",
"amazon-web-services",
"boto"
] |
Mutex in Python Twisted | 1,051,652 | <p>I'm using the Twisted framework, and am getting RPCs asynchronously. I have another function which does a task every 2 seconds, and sleeps in between. This is called through reactor.callInThread. These depend on a shared resources, so I need some thread-safe way of accessing them. How does one go about using critical sections / mutexes / locks in twisted?</p>
| 2 | 2009-06-26T23:02:43Z | 1,051,678 | <p>Though you can use threads in twisted, the usual idiom with twisted is to do RPC asynchronously using a single thread. That's one of its advantages. The twisted framework will run the reactor and call your handlers when RPC results are ready for you. Then your code runs, and when your handler exits, control goes back to the reactor, which will call the next handler that has code ready. So even though a lot of things are going on in parallel, twisted ensures that only one of your functions is running at a time, so you shouldn't need any mutexing; just maintaining state variables so your callbacks know what context they are operating in is enough.</p>
<p>If you are explicitly creating threads and using them with the twisted framework running, you'd probably need something like the <a href="http://docs.python.org/library/mutex.html" rel="nofollow">standard Python mutex</a>, though you'd need to be very careful never to have your main reactor callback thread waiting on a mutex for any length of time, as callbacks inside the reactor aren't supposed to block.</p>
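For reference, the `mutex` module linked above was later deprecated; in code that does spawn its own threads, the standard tool is `threading.Lock`. A minimal sketch of guarding a shared resource (the counter is just an illustration, not Twisted-specific):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:  # critical section guarding the shared counter
            counter += 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In Twisted code, the usual alternative to sharing a lock with the reactor thread is to hop back into the reactor with `reactor.callFromThread` so all mutation happens in the single event-loop thread.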
| 2 | 2009-06-26T23:18:48Z | [
"python",
"multithreading",
"locking",
"twisted",
"mutex"
] |
Mutex in Python Twisted | 1,051,652 | <p>I'm using the Twisted framework, and am getting RPCs asynchronously. I have another function which does a task every 2 seconds, and sleeps in between. This is called through reactor.callInThread. These depend on a shared resources, so I need some thread-safe way of accessing them. How does one go about using critical sections / mutexes / locks in twisted?</p>
| 2 | 2009-06-26T23:02:43Z | 1,302,967 | <p>Twisted lets you write event-driven code in a single thread. Multiple events can write to standard Python non-thread-safe data structures in a safe manner, and non-thread-safe data structures can even serve as mutexes. If you <strong>do</strong> start using threads, then you have to worry about these things. But you don't have to use them.</p>
<p>So, as commented: use task.LoopingCall or reactor.callLater for your task. Never call time.sleep(); let the reactor call your task at the right time (and do other work in between). Respond to your RPCs as they come.</p>
<p>There won't be two threads running your code at once. However, you don't know the order in which your callbacks will be called. Once you relinquish control to a Deferred, application state may have changed by the time you get it back.</p>
| 0 | 2009-08-19T22:29:13Z | [
"python",
"multithreading",
"locking",
"twisted",
"mutex"
] |
Reading a single character (getch style) in Python is not working in Unix | 1,052,107 | <p>Any time I use the recipe at <a href="http://code.activestate.com/recipes/134892/" rel="nofollow">http://code.activestate.com/recipes/134892/</a> I can't seem to get it working. It always throws the following error:</p>
<pre><code>Traceback (most recent call last):
...
old_settings = termios.tcgetattr(fd)
termios.error: (22, 'Invalid argument')
</code></pre>
<p>My best thought is that it is because I'm running it in Eclipse so <code>termios</code> is throwing a fit about the file descriptor.</p>
| 4 | 2009-06-27T04:19:23Z | 1,052,115 | <p>This works on Ubuntu 8.04.1 with Python 2.5.2; I get no such error. Maybe you should try it from the command line: Eclipse may be using its own stdin. I get the exact same error if I run it from the Wing IDE, but from the command line it works great.
The reason is that an IDE such as Wing uses its own class (netserver.CDbgInputStream) as sys.stdin,
so sys.stdin.fileno is zero; that's why the error.
Basically, the IDE's stdin is not a tty (print sys.stdin.isatty() shows False).</p>
<pre><code>class _GetchUnix:
def __init__(self):
import tty, sys
def __call__(self):
import sys, tty, termios
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
tty.setraw(sys.stdin.fileno())
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return ch
getch = _GetchUnix()
print getch()
</code></pre>
| 9 | 2009-06-27T04:34:18Z | [
"python",
"getch"
] |
Reading a single character (getch style) in Python is not working in Unix | 1,052,107 | <p>Any time I use the recipe at <a href="http://code.activestate.com/recipes/134892/" rel="nofollow">http://code.activestate.com/recipes/134892/</a> I can't seem to get it working. It always throws the following error:</p>
<pre><code>Traceback (most recent call last):
...
old_settings = termios.tcgetattr(fd)
termios.error: (22, 'Invalid argument')
</code></pre>
<p>My best thought is that it is because I'm running it in Eclipse so <code>termios</code> is throwing a fit about the file descriptor.</p>
| 4 | 2009-06-27T04:19:23Z | 17,939,632 | <p>Putting the terminal into raw mode isn't always a good idea. Actually, it's enough to clear the ICANON bit (the code below clears ECHO as well). Here is another version of getch() with timeout support:</p>
<pre><code>import tty, sys, termios
import select
def setup_term(fd, when=termios.TCSAFLUSH):
mode = termios.tcgetattr(fd)
mode[tty.LFLAG] = mode[tty.LFLAG] & ~(termios.ECHO | termios.ICANON)
termios.tcsetattr(fd, when, mode)
def getch(timeout=None):
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
setup_term(fd)
try:
rw, wl, xl = select.select([fd], [], [], timeout)
except select.error:
return
if rw:
return sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
if __name__ == "__main__":
print getch()
</code></pre>
| 2 | 2013-07-30T06:21:12Z | [
"python",
"getch"
] |
Some help understanding async USB operations with libusb-1.0 and ctypes | 1,052,135 | <p>Alright. I figured it out. transfer.flags needed to be a byte instead of an int. Silly me. Now I'm getting an error code from ioctl, errno 16, which I think means the device is busy. What a workaholic. I've asked on the libusb mailing list.</p>
<p>Below is what I have so far. This isn't really that much code. Most of it is ctypes structures for libusb. Scroll down to the bottom to see the actual code where the error occurs. </p>
<pre><code>from ctypes import *
VENDOR_ID = 0x04d8
PRODUCT_ID = 0xc002
_USBLCD_MAX_DATA_LEN = 24
LIBUSB_ENDPOINT_IN = 0x80
LIBUSB_ENDPOINT_OUT = 0x00
class EnumerationType(type(c_uint)):
def __new__(metacls, name, bases, dict):
if not "_members_" in dict:
_members_ = {}
for key,value in dict.items():
if not key.startswith("_"):
_members_[key] = value
dict["_members_"] = _members_
cls = type(c_uint).__new__(metacls, name, bases, dict)
for key,value in cls._members_.items():
globals()[key] = value
return cls
def __contains__(self, value):
return value in self._members_.values()
def __repr__(self):
return "<Enumeration %s>" % self.__name__
class Enumeration(c_uint):
__metaclass__ = EnumerationType
_members_ = {}
def __init__(self, value):
for k,v in self._members_.items():
if v == value:
self.name = k
break
else:
raise ValueError("No enumeration member with value %r" % value)
c_uint.__init__(self, value)
@classmethod
def from_param(cls, param):
if isinstance(param, Enumeration):
if param.__class__ != cls:
raise ValueError("Cannot mix enumeration members")
else:
return param
else:
return cls(param)
def __repr__(self):
return "<member %s=%d of %r>" % (self.name, self.value, self.__class__)
class LIBUSB_TRANSFER_STATUS(Enumeration):
_members_ = {'LIBUSB_TRANSFER_COMPLETED':0,
'LIBUSB_TRANSFER_ERROR':1,
'LIBUSB_TRANSFER_TIMED_OUT':2,
'LIBUSB_TRANSFER_CANCELLED':3,
'LIBUSB_TRANSFER_STALL':4,
'LIBUSB_TRANSFER_NO_DEVICE':5,
'LIBUSB_TRANSFER_OVERFLOW':6}
class LIBUSB_TRANSFER_FLAGS(Enumeration):
_members_ = {'LIBUSB_TRANSFER_SHORT_NOT_OK':1<<0,
'LIBUSB_TRANSFER_FREE_BUFFER':1<<1,
'LIBUSB_TRANSFER_FREE_TRANSFER':1<<2}
class LIBUSB_TRANSFER_TYPE(Enumeration):
_members_ = {'LIBUSB_TRANSFER_TYPE_CONTROL':0,
'LIBUSB_TRANSFER_TYPE_ISOCHRONOUS':1,
'LIBUSB_TRANSFER_TYPE_BULK':2,
'LIBUSB_TRANSFER_TYPE_INTERRUPT':3}
class LIBUSB_CONTEXT(Structure):
pass
class LIBUSB_DEVICE(Structure):
pass
class LIBUSB_DEVICE_HANDLE(Structure):
pass
class LIBUSB_CONTROL_SETUP(Structure):
_fields_ = [("bmRequestType", c_int),
("bRequest", c_int),
("wValue", c_int),
("wIndex", c_int),
("wLength", c_int)]
class LIBUSB_ISO_PACKET_DESCRIPTOR(Structure):
_fields_ = [("length", c_int),
("actual_length", c_int),
("status", LIBUSB_TRANSFER_STATUS)]
class LIBUSB_TRANSFER(Structure):
pass
LIBUSB_TRANSFER_CB_FN = CFUNCTYPE(c_void_p, POINTER(LIBUSB_TRANSFER))
LIBUSB_TRANSFER._fields_ = [("dev_handle", POINTER(LIBUSB_DEVICE_HANDLE)),
("flags", c_ubyte),
("endpoint", c_ubyte),
("type", c_ubyte),
("timeout", c_uint),
("status", LIBUSB_TRANSFER_STATUS),
("length", c_int),
("actual_length", c_int),
("callback", LIBUSB_TRANSFER_CB_FN),
("user_data", c_void_p),
("buffer", POINTER(c_ubyte)),
("num_iso_packets", c_int),
("iso_packet_desc", POINTER(LIBUSB_ISO_PACKET_DESCRIPTOR))]
class TIMEVAL(Structure):
_fields_ = [('tv_sec', c_long), ('tv_usec', c_long)]
lib = cdll.LoadLibrary("libusb-1.0.so")
lib.libusb_open_device_with_vid_pid.restype = POINTER(LIBUSB_DEVICE_HANDLE)
lib.libusb_alloc_transfer.restype = POINTER(LIBUSB_TRANSFER)
def libusb_fill_interrupt_transfer(transfer, dev_handle, endpoint, buffer, length, callback, user_data, timeout):
transfer[0].dev_handle = dev_handle
transfer[0].endpoint = chr(endpoint)
transfer[0].type = chr(LIBUSB_TRANSFER_TYPE_INTERRUPT)
transfer[0].timeout = timeout
transfer[0].buffer = buffer
transfer[0].length = length
transfer[0].user_data = user_data
transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback)
def cb_transfer(transfer):
print "Transfer status %d" % transfer.status
if __name__ == "__main__":
context = POINTER(LIBUSB_CONTEXT)()
lib.libusb_init(None)
transfer = lib.libusb_alloc_transfer(0)
handle = lib.libusb_open_device_with_vid_pid(None, VENDOR_ID, PRODUCT_ID)
size = _USBLCD_MAX_DATA_LEN
buffer = c_char_p(size)
libusb_fill_interrupt_transfer(transfer, handle, LIBUSB_ENDPOINT_IN + 1, buffer, size, cb_transfer, None, 0)
r = lib.libusb_submit_transfer(transfer) # This is returning -2, should be >= 0.
if r < 0:
print "libusb_submit_transfer failed", r
while r >= 0:
print "Poll before"
tv = TIMEVAL(1, 0)
r = lib.libusb_handle_events_timeout(None, byref(tv))
print "Poll after", r
</code></pre>
| 2 | 2009-06-27T04:51:18Z | 1,052,223 | <p>Where is the initial declaration of <code>transfer</code>? I'm not familiar with Python, but is it OK to assign values to fields in your struct without defining what data type each should be?</p>
| -1 | 2009-06-27T06:15:16Z | [
"python",
"usb",
"ctypes",
"libusb"
] |
Some help understanding async USB operations with libusb-1.0 and ctypes | 1,052,135 | <p>Alright. I figured it out. transfer.flags needed to be a byte instead of an int. Silly me. Now I'm getting an error code from ioctl, errno 16, which I think means the device is busy. What a workaholic. I've asked on the libusb mailing list.</p>
<p>Below is what I have so far. This isn't really that much code. Most of it is ctypes structures for libusb. Scroll down to the bottom to see the actual code where the error occurs. </p>
<pre><code>from ctypes import *
VENDOR_ID = 0x04d8
PRODUCT_ID = 0xc002
_USBLCD_MAX_DATA_LEN = 24
LIBUSB_ENDPOINT_IN = 0x80
LIBUSB_ENDPOINT_OUT = 0x00
class EnumerationType(type(c_uint)):
def __new__(metacls, name, bases, dict):
if not "_members_" in dict:
_members_ = {}
for key,value in dict.items():
if not key.startswith("_"):
_members_[key] = value
dict["_members_"] = _members_
cls = type(c_uint).__new__(metacls, name, bases, dict)
for key,value in cls._members_.items():
globals()[key] = value
return cls
def __contains__(self, value):
return value in self._members_.values()
def __repr__(self):
return "<Enumeration %s>" % self.__name__
class Enumeration(c_uint):
__metaclass__ = EnumerationType
_members_ = {}
def __init__(self, value):
for k,v in self._members_.items():
if v == value:
self.name = k
break
else:
raise ValueError("No enumeration member with value %r" % value)
c_uint.__init__(self, value)
@classmethod
def from_param(cls, param):
if isinstance(param, Enumeration):
if param.__class__ != cls:
raise ValueError("Cannot mix enumeration members")
else:
return param
else:
return cls(param)
def __repr__(self):
return "<member %s=%d of %r>" % (self.name, self.value, self.__class__)
class LIBUSB_TRANSFER_STATUS(Enumeration):
_members_ = {'LIBUSB_TRANSFER_COMPLETED':0,
'LIBUSB_TRANSFER_ERROR':1,
'LIBUSB_TRANSFER_TIMED_OUT':2,
'LIBUSB_TRANSFER_CANCELLED':3,
'LIBUSB_TRANSFER_STALL':4,
'LIBUSB_TRANSFER_NO_DEVICE':5,
'LIBUSB_TRANSFER_OVERFLOW':6}
class LIBUSB_TRANSFER_FLAGS(Enumeration):
_members_ = {'LIBUSB_TRANSFER_SHORT_NOT_OK':1<<0,
'LIBUSB_TRANSFER_FREE_BUFFER':1<<1,
'LIBUSB_TRANSFER_FREE_TRANSFER':1<<2}
class LIBUSB_TRANSFER_TYPE(Enumeration):
_members_ = {'LIBUSB_TRANSFER_TYPE_CONTROL':0,
'LIBUSB_TRANSFER_TYPE_ISOCHRONOUS':1,
'LIBUSB_TRANSFER_TYPE_BULK':2,
'LIBUSB_TRANSFER_TYPE_INTERRUPT':3}
class LIBUSB_CONTEXT(Structure):
pass
class LIBUSB_DEVICE(Structure):
pass
class LIBUSB_DEVICE_HANDLE(Structure):
pass
class LIBUSB_CONTROL_SETUP(Structure):
_fields_ = [("bmRequestType", c_int),
("bRequest", c_int),
("wValue", c_int),
("wIndex", c_int),
("wLength", c_int)]
class LIBUSB_ISO_PACKET_DESCRIPTOR(Structure):
_fields_ = [("length", c_int),
("actual_length", c_int),
("status", LIBUSB_TRANSFER_STATUS)]
class LIBUSB_TRANSFER(Structure):
pass
LIBUSB_TRANSFER_CB_FN = CFUNCTYPE(c_void_p, POINTER(LIBUSB_TRANSFER))
LIBUSB_TRANSFER._fields_ = [("dev_handle", POINTER(LIBUSB_DEVICE_HANDLE)),
("flags", c_ubyte),
("endpoint", c_ubyte),
("type", c_ubyte),
("timeout", c_uint),
("status", LIBUSB_TRANSFER_STATUS),
("length", c_int),
("actual_length", c_int),
("callback", LIBUSB_TRANSFER_CB_FN),
("user_data", c_void_p),
("buffer", POINTER(c_ubyte)),
("num_iso_packets", c_int),
("iso_packet_desc", POINTER(LIBUSB_ISO_PACKET_DESCRIPTOR))]
class TIMEVAL(Structure):
_fields_ = [('tv_sec', c_long), ('tv_usec', c_long)]
lib = cdll.LoadLibrary("libusb-1.0.so")
lib.libusb_open_device_with_vid_pid.restype = POINTER(LIBUSB_DEVICE_HANDLE)
lib.libusb_alloc_transfer.restype = POINTER(LIBUSB_TRANSFER)
def libusb_fill_interrupt_transfer(transfer, dev_handle, endpoint, buffer, length, callback, user_data, timeout):
transfer[0].dev_handle = dev_handle
transfer[0].endpoint = chr(endpoint)
transfer[0].type = chr(LIBUSB_TRANSFER_TYPE_INTERRUPT)
transfer[0].timeout = timeout
transfer[0].buffer = buffer
transfer[0].length = length
transfer[0].user_data = user_data
transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback)
def cb_transfer(transfer):
print "Transfer status %d" % transfer.status
if __name__ == "__main__":
context = POINTER(LIBUSB_CONTEXT)()
lib.libusb_init(None)
transfer = lib.libusb_alloc_transfer(0)
handle = lib.libusb_open_device_with_vid_pid(None, VENDOR_ID, PRODUCT_ID)
size = _USBLCD_MAX_DATA_LEN
buffer = c_char_p(size)
libusb_fill_interrupt_transfer(transfer, handle, LIBUSB_ENDPOINT_IN + 1, buffer, size, cb_transfer, None, 0)
r = lib.libusb_submit_transfer(transfer) # This is returning -2, should be >= 0.
if r < 0:
print "libusb_submit_transfer failed", r
while r >= 0:
print "Poll before"
tv = TIMEVAL(1, 0)
r = lib.libusb_handle_events_timeout(None, byref(tv))
print "Poll after", r
</code></pre>
| 2 | 2009-06-27T04:51:18Z | 1,052,319 | <ul>
<li>Have you checked to make sure the return values of <code>libusb_alloc_transfer</code> and <code>libusb_open_device_with_vid_pid</code> are valid?</li>
<li>Have you tried annotating the library functions with the appropriate <a href="http://python.net/crew/theller/ctypes/reference.html#foreign-functions" rel="nofollow">argtypes</a>?</li>
<li>You may run into trouble with <code>transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback)</code>: you're not keeping any references to the <code>CFunctionType</code> object returned from <code>LIBUSB_TRANSFER_CB_FN()</code>, and so that object might be getting released and overwritten.</li>
</ul>
<p>The next step, I suppose, would be to install a version of libusb with debugging symbols, boot up GDB, set a breakpoint at <code>libusb_submit_transfer()</code>, make sure the passed-in <code>libusb_transfer</code> is sane, and see what's triggering the error to be returned.</p>
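On the argtypes and callback-reference points, here is a hedged, generic ctypes sketch. It uses libc's `qsort` rather than libusb (so it runs without any USB hardware; `CDLL(None)` exposing libc symbols is a Linux/macOS assumption). Declaring `argtypes` lets ctypes validate calls early, and keeping a Python reference to the `CFUNCTYPE` object prevents the callback thunk from being garbage-collected while C still holds its pointer:

```python
import ctypes

libc = ctypes.CDLL(None)  # on Linux/macOS this handle exposes libc symbols

# Prototype: int (*)(const int *, const int *)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

# Annotating argtypes makes ctypes reject badly typed calls early.
libc.qsort.argtypes = [ctypes.POINTER(ctypes.c_int), ctypes.c_size_t,
                       ctypes.c_size_t, CMPFUNC]
libc.qsort.restype = None

def py_cmp(a, b):
    return a[0] - b[0]

cmp_ref = CMPFUNC(py_cmp)  # keep this reference alive, or the thunk may be freed

arr = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(arr, len(arr), ctypes.sizeof(ctypes.c_int), cmp_ref)
```

The same two habits (annotate argtypes, hold callback references) apply directly to the libusb bindings above.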
| 2 | 2009-06-27T07:39:17Z | [
"python",
"usb",
"ctypes",
"libusb"
] |
Some help understanding async USB operations with libusb-1.0 and ctypes | 1,052,135 | <p>Alright. I figured it out. transfer.flags needed to be a byte instead of an int. Silly me. Now I'm getting an error code from ioctl, errno 16, which I think means the device is busy. What a workaholic. I've asked on the libusb mailing list.</p>
<p>Below is what I have so far. This isn't really that much code. Most of it is ctypes structures for libusb. Scroll down to the bottom to see the actual code where the error occurs. </p>
<pre><code>from ctypes import *
VENDOR_ID = 0x04d8
PRODUCT_ID = 0xc002
_USBLCD_MAX_DATA_LEN = 24
LIBUSB_ENDPOINT_IN = 0x80
LIBUSB_ENDPOINT_OUT = 0x00
class EnumerationType(type(c_uint)):
def __new__(metacls, name, bases, dict):
if not "_members_" in dict:
_members_ = {}
for key,value in dict.items():
if not key.startswith("_"):
_members_[key] = value
dict["_members_"] = _members_
cls = type(c_uint).__new__(metacls, name, bases, dict)
for key,value in cls._members_.items():
globals()[key] = value
return cls
def __contains__(self, value):
return value in self._members_.values()
def __repr__(self):
return "<Enumeration %s>" % self.__name__
class Enumeration(c_uint):
__metaclass__ = EnumerationType
_members_ = {}
def __init__(self, value):
for k,v in self._members_.items():
if v == value:
self.name = k
break
else:
raise ValueError("No enumeration member with value %r" % value)
c_uint.__init__(self, value)
@classmethod
def from_param(cls, param):
if isinstance(param, Enumeration):
if param.__class__ != cls:
raise ValueError("Cannot mix enumeration members")
else:
return param
else:
return cls(param)
def __repr__(self):
return "<member %s=%d of %r>" % (self.name, self.value, self.__class__)
class LIBUSB_TRANSFER_STATUS(Enumeration):
_members_ = {'LIBUSB_TRANSFER_COMPLETED':0,
'LIBUSB_TRANSFER_ERROR':1,
'LIBUSB_TRANSFER_TIMED_OUT':2,
'LIBUSB_TRANSFER_CANCELLED':3,
'LIBUSB_TRANSFER_STALL':4,
'LIBUSB_TRANSFER_NO_DEVICE':5,
'LIBUSB_TRANSFER_OVERFLOW':6}
class LIBUSB_TRANSFER_FLAGS(Enumeration):
_members_ = {'LIBUSB_TRANSFER_SHORT_NOT_OK':1<<0,
'LIBUSB_TRANSFER_FREE_BUFFER':1<<1,
'LIBUSB_TRANSFER_FREE_TRANSFER':1<<2}
class LIBUSB_TRANSFER_TYPE(Enumeration):
_members_ = {'LIBUSB_TRANSFER_TYPE_CONTROL':0,
'LIBUSB_TRANSFER_TYPE_ISOCHRONOUS':1,
'LIBUSB_TRANSFER_TYPE_BULK':2,
'LIBUSB_TRANSFER_TYPE_INTERRUPT':3}
class LIBUSB_CONTEXT(Structure):
pass
class LIBUSB_DEVICE(Structure):
pass
class LIBUSB_DEVICE_HANDLE(Structure):
pass
class LIBUSB_CONTROL_SETUP(Structure):
_fields_ = [("bmRequestType", c_int),
("bRequest", c_int),
("wValue", c_int),
("wIndex", c_int),
("wLength", c_int)]
class LIBUSB_ISO_PACKET_DESCRIPTOR(Structure):
_fields_ = [("length", c_int),
("actual_length", c_int),
("status", LIBUSB_TRANSFER_STATUS)]
class LIBUSB_TRANSFER(Structure):
pass
LIBUSB_TRANSFER_CB_FN = CFUNCTYPE(c_void_p, POINTER(LIBUSB_TRANSFER))
LIBUSB_TRANSFER._fields_ = [("dev_handle", POINTER(LIBUSB_DEVICE_HANDLE)),
("flags", c_ubyte),
("endpoint", c_ubyte),
("type", c_ubyte),
("timeout", c_uint),
("status", LIBUSB_TRANSFER_STATUS),
("length", c_int),
("actual_length", c_int),
("callback", LIBUSB_TRANSFER_CB_FN),
("user_data", c_void_p),
("buffer", POINTER(c_ubyte)),
("num_iso_packets", c_int),
("iso_packet_desc", POINTER(LIBUSB_ISO_PACKET_DESCRIPTOR))]
class TIMEVAL(Structure):
_fields_ = [('tv_sec', c_long), ('tv_usec', c_long)]
lib = cdll.LoadLibrary("libusb-1.0.so")
lib.libusb_open_device_with_vid_pid.restype = POINTER(LIBUSB_DEVICE_HANDLE)
lib.libusb_alloc_transfer.restype = POINTER(LIBUSB_TRANSFER)
def libusb_fill_interrupt_transfer(transfer, dev_handle, endpoint, buffer, length, callback, user_data, timeout):
transfer[0].dev_handle = dev_handle
transfer[0].endpoint = chr(endpoint)
transfer[0].type = chr(LIBUSB_TRANSFER_TYPE_INTERRUPT)
transfer[0].timeout = timeout
transfer[0].buffer = buffer
transfer[0].length = length
transfer[0].user_data = user_data
transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback)
def cb_transfer(transfer):
print "Transfer status %d" % transfer.status
if __name__ == "__main__":
context = POINTER(LIBUSB_CONTEXT)()
lib.libusb_init(None)
transfer = lib.libusb_alloc_transfer(0)
handle = lib.libusb_open_device_with_vid_pid(None, VENDOR_ID, PRODUCT_ID)
size = _USBLCD_MAX_DATA_LEN
buffer = c_char_p(size)
libusb_fill_interrupt_transfer(transfer, handle, LIBUSB_ENDPOINT_IN + 1, buffer, size, cb_transfer, None, 0)
r = lib.libusb_submit_transfer(transfer) # This is returning -2, should be >= 0.
if r < 0:
print "libusb_submit_transfer failed", r
while r >= 0:
print "Poll before"
tv = TIMEVAL(1, 0)
r = lib.libusb_handle_events_timeout(None, byref(tv))
print "Poll after", r
</code></pre>
| 2 | 2009-06-27T04:51:18Z | 1,071,460 | <p>Running it as root once fixed the busy flag.</p>
| 0 | 2009-07-01T21:22:40Z | [
"python",
"usb",
"ctypes",
"libusb"
] |
Convert python filenames to unicode | 1,052,225 | <p>I am on python 2.6 for Windows.</p>
<p>I use os.walk to read a file tree. Files may have non-7-bit characters (German "ae" for example) in their filenames. These are encoded in Python's internal string representation.</p>
<p>I am processing these filenames with Python library functions and that fails due to wrong encoding.</p>
<p>How can I convert these filenames to proper (unicode?) python strings? </p>
<p>I have a file "d:\utest\ü.txt". Passing the path as unicode does not work:</p>
<pre><code>>>> list(os.walk('d:\\utest'))
[('d:\\utest', [], ['\xfc.txt'])]
>>> list(os.walk(u'd:\\utest'))
[(u'd:\\utest', [], [u'\xfc.txt'])]
</code></pre>
| 15 | 2009-06-27T06:17:20Z | 1,052,238 | <p>If you pass a Unicode string to <code>os.walk()</code>, you'll get Unicode results:</p>
<pre><code>>>> list(os.walk(r'C:\example')) # Passing an ASCII string
[('C:\\example', [], ['file.txt'])]
>>>
>>> list(os.walk(ur'C:\example')) # Passing a Unicode string
[(u'C:\\example', [], [u'file.txt'])]
</code></pre>
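The same rule carries over to Python 3, where the str/bytes split replaces unicode/str: walk a str path and you get str names back, walk a bytes path and you get bytes. A small self-contained check (using a temporary directory rather than a fixed `C:\example` path):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "file.txt"), "w").close()
    (_, _, names_str), = os.walk(d)                  # str in  -> str out
    (_, _, names_bytes), = os.walk(os.fsencode(d))   # bytes in -> bytes out
```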
| 42 | 2009-06-27T06:34:37Z | [
"python",
"unicode"
] |
Convert python filenames to unicode | 1,052,225 | <p>I am on python 2.6 for Windows.</p>
<p>I use os.walk to read a file tree. Files may have non-7-bit characters (German "ae" for example) in their filenames. These are encoded in Python's internal string representation.</p>
<p>I am processing these filenames with Python library functions and that fails due to wrong encoding.</p>
<p>How can I convert these filenames to proper (unicode?) python strings? </p>
<p>I have a file "d:\utest\ü.txt". Passing the path as unicode does not work:</p>
<pre><code>>>> list(os.walk('d:\\utest'))
[('d:\\utest', [], ['\xfc.txt'])]
>>> list(os.walk(u'd:\\utest'))
[(u'd:\\utest', [], [u'\xfc.txt'])]
</code></pre>
| 15 | 2009-06-27T06:17:20Z | 1,052,241 | <p><a href="http://docs.python.org/library/os.html#os.walk" rel="nofollow"><code>os.walk</code></a> isn't specified to always use <code>os.listdir</code>, but neither is it listed how Unicode is handled. However, <a href="http://docs.python.org/library/os.html#os.listdir" rel="nofollow"><code>os.listdir</code></a> does say:</p>
<blockquote>
<p>Changed in version 2.3: On Windows
NT/2k/XP and Unix, if path is a
Unicode object, the result will be a
list of Unicode objects. Undecodable
filenames will still be returned as
string objects.</p>
</blockquote>
<p>Does simply using a Unicode argument work for you?</p>
<pre><code>for dirpath, dirnames, filenames in os.walk(u"."):
print dirpath
for fn in filenames:
print " ", fn
</code></pre>
| 1 | 2009-06-27T06:36:27Z | [
"python",
"unicode"
] |
Convert python filenames to unicode | 1,052,225 | <p>I am on python 2.6 for Windows.</p>
<p>I use os.walk to read a file tree. Files may have non-7-bit characters (German "ae" for example) in their filenames. These are encoded in Python's internal string representation.</p>
<p>I am processing these filenames with Python library functions and that fails due to wrong encoding.</p>
<p>How can I convert these filenames to proper (unicode?) python strings? </p>
<p>I have a file "d:\utest\ü.txt". Passing the path as unicode does not work:</p>
<pre><code>>>> list(os.walk('d:\\utest'))
[('d:\\utest', [], ['\xfc.txt'])]
>>> list(os.walk(u'd:\\utest'))
[(u'd:\\utest', [], [u'\xfc.txt'])]
</code></pre>
| 15 | 2009-06-27T06:17:20Z | 1,052,293 | <p>No, they are not encoded in Python's internal string representation; there is no such thing. They are encoded in the encoding of the operating system/file system. Passing in unicode works for os.walk, though.</p>
<p>I don't know how os.walk behaves when filenames can't be decoded, but I assume that you'll get a byte string back, like with os.listdir(). In that case you'll again have problems later. Also, not all of the Python 2.x standard library will accept unicode parameters properly, so you may need to encode them as strings anyway. So the problem may in fact be somewhere else, but you'll notice if that is the case. ;-)</p>
<p>If you need more control over the decoding, you can instead pass in a byte string and then just decode the results yourself with <code>filename = filename.decode()</code> as usual.</p>
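In Python 3 this manual decode step is packaged as `os.fsdecode()`/`os.fsencode()`, which apply the filesystem encoding with the surrogateescape error handler so that even undecodable names round-trip losslessly. A small sketch:

```python
import os

raw = b"\xc3\xbc.txt"        # UTF-8 bytes for the filename "ü.txt"
name = raw.decode("utf-8")   # the explicit decode described above

# The filesystem-encoding helpers round-trip names, decodable or not.
roundtrip = os.fsencode(os.fsdecode(raw))
```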
| 1 | 2009-06-27T07:18:26Z | [
"python",
"unicode"
] |
Convert python filenames to unicode | 1,052,225 | <p>I am on python 2.6 for Windows.</p>
<p>I use os.walk to read a file tree. Files may have non-7-bit characters (German "ae" for example) in their filenames. These are encoded in Python's internal string representation.</p>
<p>I am processing these filenames with Python library functions and that fails due to wrong encoding.</p>
<p>How can I convert these filenames to proper (unicode?) python strings? </p>
<p>I have a file "d:\utest\ü.txt". Passing the path as unicode does not work:</p>
<pre><code>>>> list(os.walk('d:\\utest'))
[('d:\\utest', [], ['\xfc.txt'])]
>>> list(os.walk(u'd:\\utest'))
[(u'd:\\utest', [], [u'\xfc.txt'])]
</code></pre>
| 15 | 2009-06-27T06:17:20Z | 2,392,967 | <p>A more direct way might be to find your file system's encoding and then convert to Unicode. For example:</p>
<pre><code>unicode_name = unicode(filename, "utf-8", errors="ignore")
</code></pre>
<p>To go the other way:</p>
<pre><code>unicode_name.encode("utf-8")
</code></pre>
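<p>One caveat with <code>errors="ignore"</code>: it silently drops any bytes the codec cannot decode, which can mangle names. A small Python 3 demonstration (assuming a Latin-1 encoded filename):</p>

```python
raw = b"\xfc.txt"  # 'ü.txt' encoded in Latin-1
mangled = raw.decode("utf-8", errors="ignore")  # 0xfc is invalid UTF-8 and is dropped
correct = raw.decode("latin-1")                 # the right codec keeps the character
print(mangled)  # .txt
print(correct)  # ü.txt
```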
| 1 | 2010-03-06T15:27:15Z | [
"python",
"unicode"
] |
Convert python filenames to unicode | 1,052,225 | <p>I am on python 2.6 for Windows.</p>
<p>I use os.walk to read a file tree. Files may have non-7-bit characters (German "ae" for example) in their filenames. These are encoded in Python's internal string representation.</p>
<p>I am processing these filenames with Python library functions and that fails due to wrong encoding.</p>
<p>How can I convert these filenames to proper (unicode?) python strings? </p>
<p>I have a file "d:\utest\ü.txt". Passing the path as unicode does not work:</p>
<pre><code>>>> list(os.walk('d:\\utest'))
[('d:\\utest', [], ['\xfc.txt'])]
>>> list(os.walk(u'd:\\utest'))
[(u'd:\\utest', [], [u'\xfc.txt'])]
</code></pre>
| 15 | 2009-06-27T06:17:20Z | 6,007,172 | <p>I was looking for a solution for Python 3.0+. Will put it up here in case someone else needs it.</p>
<pre><code>import os
import sys

rootdir = 'D:\\COUNTRY\\ROADS\\'  # note: a raw string cannot end with a backslash
fs_enc = sys.getfilesystemencoding()
for (root, dirname, filename) in os.walk(rootdir.encode(fs_enc)):
    # do your stuff here, but remember that now
    # root, dirname, filename are represented as bytes
    pass
</code></pre>
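<p>The same idea can be sketched portably in Python 3: hand <code>os.walk</code> a bytes path and it yields bytes names, which you decode yourself. (A temporary directory stands in for the fixed Windows path, purely for illustration.)</p>

```python
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as rootdir:
    open(os.path.join(rootdir, "u.txt"), "w").close()  # create one file to find
    enc = sys.getfilesystemencoding()
    for root, dirnames, filenames in os.walk(rootdir.encode(enc)):
        # root, dirnames and filenames are all bytes now; decode explicitly
        names = [f.decode(enc) for f in filenames]

print(names)  # ['u.txt']
```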
| 6 | 2011-05-15T07:48:05Z | [
"python",
"unicode"
] |
Convert python filenames to unicode | 1,052,225 | <p>I am on python 2.6 for Windows.</p>
<p>I use os.walk to read a file tree. Files may have non-7-bit characters (German "ae" for example) in their filenames. These are encoded in Python's internal string representation.</p>
<p>I am processing these filenames with Python library functions and that fails due to wrong encoding.</p>
<p>How can I convert these filenames to proper (unicode?) python strings? </p>
<p>I have a file "d:\utest\ü.txt". Passing the path as unicode does not work:</p>
<pre><code>>>> list(os.walk('d:\\utest'))
[('d:\\utest', [], ['\xfc.txt'])]
>>> list(os.walk(u'd:\\utest'))
[(u'd:\\utest', [], [u'\xfc.txt'])]
</code></pre>
| 15 | 2009-06-27T06:17:20Z | 30,253,676 | <pre><code>os.walk(unicode(root_dir, 'utf-8'))
</code></pre>
| 1 | 2015-05-15T07:14:48Z | [
"python",
"unicode"
] |
How to create a message box with tkinter? | 1,052,420 | <p>I've been trying to build a fairly simple message box in tkinter that has "YES" and "NO" buttons. When I push the "YES" button internally it must go and write YES to a file. Similarly, when "NO" is pushed, NO must be written to a file. How can I do this?</p>
| 5 | 2009-06-27T08:48:36Z | 1,052,428 | <p>You can use the module <a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/dialogs.html#tkMessageBox" rel="nofollow">tkMessageBox</a> for Python 2.7 or the corresponding version for Python 3 called <code>tkinter.messagebox</code>.</p>
<p>It looks like <code>askquestion()</code> is exactly the function that you want. It will even return the string <code>"yes"</code> or <code>"no"</code> for you.</p>
| 14 | 2009-06-27T08:54:42Z | [
"python",
"tkinter",
"messagebox"
] |
How to create a message box with tkinter? | 1,052,420 | <p>I've been trying to build a fairly simple message box in tkinter that has "YES" and "NO" buttons. When I push the "YES" button internally it must go and write YES to a file. Similarly, when "NO" is pushed, NO must be written to a file. How can I do this?</p>
| 5 | 2009-06-27T08:48:36Z | 11,497,024 | <p>Here's how you can ask a question using a message box in Python 2.7. You need specifically the module <code>tkMessageBox</code>.</p>
<pre><code>from Tkinter import *
import tkMessageBox
root = Tk()
root.withdraw()  # hide the main window
var = tkMessageBox.askyesno("Title", "Your question goes here?")
filename = "log.txt"
f = open(filename, "w")
f.write(str(var))
print str(var) + " has been written to the file " + filename
f.close()
</code></pre>
| 5 | 2012-07-16T01:06:29Z | [
"python",
"tkinter",
"messagebox"
] |
How to create a message box with tkinter? | 1,052,420 | <p>I've been trying to build a fairly simple message box in tkinter that has "YES" and "NO" buttons. When I push the "YES" button internally it must go and write YES to a file. Similarly, when "NO" is pushed, NO must be written to a file. How can I do this?</p>
| 5 | 2009-06-27T08:48:36Z | 13,797,893 | <p>You can assign the return value of the <code>askquestion</code> function to a variable, and then you simply write the variable to a file:</p>
<pre><code>from tkinter import messagebox
variable = messagebox.askquestion('title','question')
with open('myfile.extension', 'w') as file: # option 'a' to append
    file.write(variable + '\n')
</code></pre>
| 6 | 2012-12-10T09:10:44Z | [
"python",
"tkinter",
"messagebox"
] |
Run a function every X minutes - Python | 1,052,574 | <p>I'm using Python and PyGTK. I'm interested in running a certain function, which gets data from a serial port and saves it, every several minutes.</p>
<p>Currently, I'm using the sleep() function in the time library. In order to be able to do processing, I have my system set up like this:</p>
<pre><code>import time
waittime = 300 # 5 minutes
while(1):
    time1 = time.time()
    readserial() # Read data from serial port
    processing() # Do stuff with serial data, including dumping it to a file
    time2 = time.time()
    processingtime = time2 - time1
    sleeptime = waittime - processingtime
    time.sleep(sleeptime)
</code></pre>
<p>This setup allows me to have 5 minute intervals between reading data from the serial port. My issue is that I'd like to be able to have my readserial() function pause whatever is going on every 5 minutes and be able to do things all the time instead of using the time.sleep() function.</p>
<p>Any suggestions on how to solve this problem? Multithreading? Interrupts? Please keep in mind that I'm using python.</p>
<p>Thanks.</p>
| 11 | 2009-06-27T10:28:42Z | 1,052,581 | <p>Do not use such a loop with sleep; it will block gtk from processing any UI events. Instead, use a gtk timer, e.g.</p>
<pre><code>def my_timer(*args):
    # do your work here, but not for long
    return True

gtk.timeout_add(60*1000, my_timer) # call every min
</code></pre>
| 22 | 2009-06-27T10:34:54Z | [
"python",
"gtk",
"timer",
"pygtk"
] |
Run a function every X minutes - Python | 1,052,574 | <p>I'm using Python and PyGTK. I'm interested in running a certain function, which gets data from a serial port and saves it, every several minutes.</p>
<p>Currently, I'm using the sleep() function in the time library. In order to be able to do processing, I have my system set up like this:</p>
<pre><code>import time
waittime = 300 # 5 minutes
while(1):
    time1 = time.time()
    readserial() # Read data from serial port
    processing() # Do stuff with serial data, including dumping it to a file
    time2 = time.time()
    processingtime = time2 - time1
    sleeptime = waittime - processingtime
    time.sleep(sleeptime)
</code></pre>
<p>This setup allows me to have 5 minute intervals between reading data from the serial port. My issue is that I'd like to be able to have my readserial() function pause whatever is going on every 5 minutes and be able to do things all the time instead of using the time.sleep() function.</p>
<p>Any suggestions on how to solve this problem? Multithreading? Interrupts? Please keep in mind that I'm using python.</p>
<p>Thanks.</p>
| 11 | 2009-06-27T10:28:42Z | 1,309,480 | <p>This is exactly like <a href="http://stackoverflow.com/questions/1309006/running-function-5-seconds-after-pygtk-widget-is-shown/1309404#1309404">my answer here</a></p>
<p>If the time is not critical to be exact to the tenth of a second, use</p>
<pre><code>glib.timeout_add_seconds(60, ..)
</code></pre>
<p>else as above.</p>
<p><em>timeout_add_seconds</em> allows the system to align timeouts to other events, in the long run reducing CPU wakeups (especially if the timeout is recurring) and <a href="http://gould.cx/ted/blog/Saving%5Fthe%5Fworld%5Fone%5F%5Fw%5Fat%5Fa%5Ftime">saving energy for the planet</a> (!)</p>
| 8 | 2009-08-21T00:01:57Z | [
"python",
"gtk",
"timer",
"pygtk"
] |
Run a function every X minutes - Python | 1,052,574 | <p>I'm using Python and PyGTK. I'm interested in running a certain function, which gets data from a serial port and saves it, every several minutes.</p>
<p>Currently, I'm using the sleep() function in the time library. In order to be able to do processing, I have my system set up like this:</p>
<pre><code>import time
waittime = 300 # 5 minutes
while(1):
    time1 = time.time()
    readserial() # Read data from serial port
    processing() # Do stuff with serial data, including dumping it to a file
    time2 = time.time()
    processingtime = time2 - time1
    sleeptime = waittime - processingtime
    time.sleep(sleeptime)
</code></pre>
<p>This setup allows me to have 5 minute intervals between reading data from the serial port. My issue is that I'd like to be able to have my readserial() function pause whatever is going on every 5 minutes and be able to do things all the time instead of using the time.sleep() function.</p>
<p>Any suggestions on how to solve this problem? Multithreading? Interrupts? Please keep in mind that I'm using python.</p>
<p>Thanks.</p>
| 11 | 2009-06-27T10:28:42Z | 4,651,015 | <p>gtk.timeout_add appears to be deprecated, so you should use </p>
<pre><code>def my_timer(*args):
    # Do your work here
    return True

gobject.timeout_add(60*1000, my_timer)
</code></pre>
| 4 | 2011-01-10T20:11:00Z | [
"python",
"gtk",
"timer",
"pygtk"
] |
Run a function every X minutes - Python | 1,052,574 | <p>I'm using Python and PyGTK. I'm interested in running a certain function, which gets data from a serial port and saves it, every several minutes.</p>
<p>Currently, I'm using the sleep() function in the time library. In order to be able to do processing, I have my system set up like this:</p>
<pre><code>import time
waittime = 300 # 5 minutes
while(1):
    time1 = time.time()
    readserial() # Read data from serial port
    processing() # Do stuff with serial data, including dumping it to a file
    time2 = time.time()
    processingtime = time2 - time1
    sleeptime = waittime - processingtime
    time.sleep(sleeptime)
</code></pre>
<p>This setup allows me to have 5 minute intervals between reading data from the serial port. My issue is that I'd like to be able to have my readserial() function pause whatever is going on every 5 minutes and be able to do things all the time instead of using the time.sleep() function.</p>
<p>Any suggestions on how to solve this problem? Multithreading? Interrupts? Please keep in mind that I'm using python.</p>
<p>Thanks.</p>
| 11 | 2009-06-27T10:28:42Z | 14,468,874 | <p>try:</p>
<pre><code>import wx
wx.CallLater(1000, my_timer)
</code></pre>
| 1 | 2013-01-22T22:02:39Z | [
"python",
"gtk",
"timer",
"pygtk"
] |
How can I parse the output of /proc/net/dev into key:value pairs per interface using Python? | 1,052,589 | <p>The output of <a href="http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/en-US/Reference%5FGuide/s2-proc-dir-net.html" rel="nofollow">/proc/net/dev</a> on Linux looks like this:</p>
<pre><code>Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
lo:18748525 129811 0 0 0 0 0 0 18748525 129811 0 0 0 0 0 0
eth0:1699369069 226296437 0 0 0 0 0 3555 4118745424 194001149 0 0 0 0 0 0
eth1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sit0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
</code></pre>
<p>How can I use Python to parse this output into key:value pairs for each interface? I have found <a href="http://www.unix.com/shell-programming-scripting/90035-parsing-proc-net-dev-into-key-value-pairs-self-answered.html" rel="nofollow">this forum topic</a> for achieving it using shell scripting and there is <a href="http://cpansearch.perl.org/src/VSEGO/Linux-net-dev-1.00/dev.html" rel="nofollow">a Perl extension</a> but I need to use Python.</p>
| 2 | 2009-06-27T10:40:34Z | 1,052,623 | <p>Does this help?</p>
<pre><code>dev = open("/proc/net/dev", "r").readlines()
header_line = dev[1]
header_names = header_line[header_line.index("|")+1:].replace("|", " ").split()
values = {}
for line in dev[2:]:
    intf = line[:line.index(":")].strip()
    values[intf] = [int(value) for value in line[line.index(":")+1:].split()]
    print intf, values[intf]
</code></pre>
<p>Output:</p>
<pre><code>lo [803814, 16319, 0, 0, 0, 0, 0, 0, 803814, 16319, 0, 0, 0, 0, 0, 0]
eth0 [123605646, 102196, 0, 0, 0, 0, 0, 0, 9029534, 91901, 0, 0, 0, 0, 0, 0]
wmaster0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
eth1 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
vboxnet0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
pan0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
</code></pre>
<p>You could, of course, use the header names in <code>header_names</code> to construct a dict of dicts.</p>
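<p>A sketch of that dict-of-dicts construction (Python 3 syntax, parsing an embedded sample instead of reading <code>/proc/net/dev</code>; the helper name <code>parse_net_dev</code> is made up for illustration):</p>

```python
SAMPLE = """\
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:18748525  129811    0    0    0    0    0    0 18748525  129811    0    0    0    0    0    0
"""

def parse_net_dev(text):
    lines = text.splitlines()
    # The column names appear twice (receive, then transmit); prefix them to keep both.
    _, recv, trans = lines[1].split("|")
    cols = ["recv_" + c for c in recv.split()] + ["trans_" + c for c in trans.split()]
    stats = {}
    for line in lines[2:]:
        iface, _, data = line.partition(":")
        stats[iface.strip()] = dict(zip(cols, (int(v) for v in data.split())))
    return stats

stats = parse_net_dev(SAMPLE)
print(stats["lo"]["recv_bytes"])  # 18748525
```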
| 1 | 2009-06-27T10:56:59Z | [
"python",
"linux",
"parsing"
] |
How can I parse the output of /proc/net/dev into key:value pairs per interface using Python? | 1,052,589 | <p>The output of <a href="http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/en-US/Reference%5FGuide/s2-proc-dir-net.html" rel="nofollow">/proc/net/dev</a> on Linux looks like this:</p>
<pre><code>Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
lo:18748525 129811 0 0 0 0 0 0 18748525 129811 0 0 0 0 0 0
eth0:1699369069 226296437 0 0 0 0 0 3555 4118745424 194001149 0 0 0 0 0 0
eth1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sit0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
</code></pre>
<p>How can I use Python to parse this output into key:value pairs for each interface? I have found <a href="http://www.unix.com/shell-programming-scripting/90035-parsing-proc-net-dev-into-key-value-pairs-self-answered.html" rel="nofollow">this forum topic</a> for achieving it using shell scripting and there is <a href="http://cpansearch.perl.org/src/VSEGO/Linux-net-dev-1.00/dev.html" rel="nofollow">a Perl extension</a> but I need to use Python.</p>
| 2 | 2009-06-27T10:40:34Z | 1,052,628 | <p>This is pretty well-formatted input: you can easily get the column names and data lists by splitting each line, and then create a dict from them.</p>
<p>Here is a simple script without regex:</p>
<pre><code>lines = open("/proc/net/dev", "r").readlines()
columnLine = lines[1]
_, receiveCols , transmitCols = columnLine.split("|")
receiveCols = map(lambda a:"recv_"+a, receiveCols.split())
transmitCols = map(lambda a:"trans_"+a, transmitCols.split())
cols = receiveCols+transmitCols
faces = {}
for line in lines[2:]:
    if line.find(":") < 0: continue
    face, data = line.split(":")
    faceData = dict(zip(cols, data.split()))
    faces[face] = faceData
import pprint
pprint.pprint(faces)
</code></pre>
<p>it outputs</p>
<pre><code>{' lo': {'recv_bytes': '7056295',
         'recv_compressed': '0',
         'recv_drop': '0',
         'recv_errs': '0',
         'recv_fifo': '0',
         'recv_frame': '0',
         'recv_multicast': '0',
         'recv_packets': '12148',
         'trans_bytes': '7056295',
         'trans_carrier': '0',
         'trans_colls': '0',
         'trans_compressed': '0',
         'trans_drop': '0',
         'trans_errs': '0',
         'trans_fifo': '0',
         'trans_packets': '12148'},
 ' eth0': {'recv_bytes': '34084530',
           'recv_compressed': '0',
           'recv_drop': '0',
           'recv_errs': '0',
           'recv_fifo': '0',
           'recv_frame': '0',
           'recv_multicast': '0',
           'recv_packets': '30599',
           'trans_bytes': '6170441',
           'trans_carrier': '0',
           'trans_colls': '0',
           'trans_compressed': '0',
           'trans_drop': '0',
           'trans_errs': '0',
           'trans_fifo': '0',
           'trans_packets': '32377'}}
</code></pre>
| 9 | 2009-06-27T11:00:13Z | [
"python",
"linux",
"parsing"
] |
How can I parse the output of /proc/net/dev into key:value pairs per interface using Python? | 1,052,589 | <p>The output of <a href="http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/en-US/Reference%5FGuide/s2-proc-dir-net.html" rel="nofollow">/proc/net/dev</a> on Linux looks like this:</p>
<pre><code>Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
lo:18748525 129811 0 0 0 0 0 0 18748525 129811 0 0 0 0 0 0
eth0:1699369069 226296437 0 0 0 0 0 3555 4118745424 194001149 0 0 0 0 0 0
eth1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sit0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
</code></pre>
<p>How can I use Python to parse this output into key:value pairs for each interface? I have found <a href="http://www.unix.com/shell-programming-scripting/90035-parsing-proc-net-dev-into-key-value-pairs-self-answered.html" rel="nofollow">this forum topic</a> for achieving it using shell scripting and there is <a href="http://cpansearch.perl.org/src/VSEGO/Linux-net-dev-1.00/dev.html" rel="nofollow">a Perl extension</a> but I need to use Python.</p>
| 2 | 2009-06-27T10:40:34Z | 1,052,652 | <pre><code>#!/usr/bin/env python
from __future__ import with_statement
import re
import pprint
ifaces = {}
with open('/proc/net/dev') as fd:
    lines = map(lambda x: x.strip(), fd.readlines())

lines = lines[1:]
lines[0] = lines[0].replace('|', ':', 1)
lines[0] = lines[0].replace('|', ' ', 1)
lines[0] = lines[0].split(':')[1]
keys = re.split('\s+', lines[0])
keys = map(lambda x: 'rx' + x[1] if x[0] < 8 else 'tx' + x[1], enumerate(keys))

for line in lines[1:]:
    interface, values = line.split(':')
    values = re.split('\s+', values)
    if values[0] == '':
        values = values[1:]
    values = map(int, values)
    ifaces[interface] = dict(zip(keys, values))
pprint.pprint(ifaces)
</code></pre>
| 1 | 2009-06-27T11:13:57Z | [
"python",
"linux",
"parsing"
] |
Issue with Facebook showAddSectionButton | 1,052,613 | <p>I am going absolutely batty trying to figure out how to get showAddSectionButton to work.</p>
<p>The problem:
I'm trying to get the 'add section button' to show up. There is nothing showing up right now.</p>
<p>My code:</p>
<pre><code><html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml" >
<body>
<div id="s1"></div>
<script type="text/javascript" src="{{ fb_js }}"></script>
<script type="text/javascript">
window.onload = function() {
FB_RequireFeatures(["XFBML"], function() {
FB.Facebook.init('{{ api_key }}','{{ receiver_path }}', null);
FB.Connect.showAddSectionButton("profile", document.getElementById("s1"));
});
};
</script>
<div id="s2"></div>
</body>
</html>
</code></pre>
<p>Stuff I've tried:</p>
<ol>
<li>Copy-pasted the code from the working Facebook example app Smiley and made only the minimal changes to customize it to my settings</li>
<li>Manually checked to make sure all of the links (js library, xd_receiver) work</li>
<li>receiver_path is a relative path</li>
<li>confirmed that the facebook js include is supposed to be in the body of page</li>
</ol>
<p>I'm pretty new at firebug, but I've taken a poke around, and it looks like the facebook js has re-written the HTML, specifically, there is an iframe inside of the [div id="s1"][/div] which looks like it should be a button.</p>
<p>Unfortunately, I don't see anything displayed. Any help would be greatly appreciated.</p>
| -1 | 2009-06-27T10:52:29Z | 1,097,219 | <p>I did finally figure out the problem.</p>
<p>Facebook doesn't allow anything but FBML on the profile page (for some reason I thought I could use an iframe), which means you have to call setFBML with the profile_main property filled first.</p>
<p>Once I did that, the button popped right up.</p>
| 0 | 2009-07-08T10:13:40Z | [
"python",
"facebook",
"google-app-engine"
] |
Is registered atexit handler inherited by spawned child processes? | 1,052,716 | <p>I am writing a daemon program using Python 2.5. In the main process an exit handler is registered with the <code>atexit</code> module; it <strong>seems</strong> that the handler gets called when each child process ends, which is not what I expected.</p>
<p>I noticed this behavior isn't mentioned in the Python <code>atexit</code> docs; does anybody know the issue? If this is how it should behave, how can I unregister the exit handler in child processes? There is an atexit.unregister in version 3.0, but I am using 2.5.</p>
| 2 | 2009-06-27T11:54:07Z | 1,052,742 | <p><code>atexit.register()</code> basically registers your function in <code>atexit._exithandlers</code>, which is a module private list of functions called by <code>sys.exitfunc()</code>. You can set <code>exitfunc()</code> to your custom made exit handler function, which then checks for child status or simply unregisters it. What about just copying the 3.0 <code>atexit.py</code> to your local source tree and using that instead?</p>
<p>EDIT: I copied the atexit.py from my 2.6 version and extended it by</p>
<pre><code>def unregister(func, *targs, **kargs):
    _exithandlers.remove((func, targs, kargs))
</code></pre>
<p>If you take that instead of your original version it should work. I have not tested it with subprocesses, though.</p>
| 1 | 2009-06-27T12:13:21Z | [
"python",
"multiprocessing",
"atexit"
] |
Is registered atexit handler inherited by spawned child processes? | 1,052,716 | <p>I am writing a daemon program using Python 2.5. In the main process an exit handler is registered with the <code>atexit</code> module; it <strong>seems</strong> that the handler gets called when each child process ends, which is not what I expected.</p>
<p>I noticed this behavior isn't mentioned in the Python <code>atexit</code> docs; does anybody know the issue? If this is how it should behave, how can I unregister the exit handler in child processes? There is an atexit.unregister in version 3.0, but I am using 2.5.</p>
| 2 | 2009-06-27T11:54:07Z | 1,053,223 | <p>When you <code>fork</code> to make a child process, that child is an exact copy of the parent -- including of course registered exit functions as well as all other code and data structures. I believe that's the issue you're observing -- of course it's not mentioned in each and every module, because it necessarily applies to every single one.</p>
| 2 | 2009-06-27T17:14:18Z | [
"python",
"multiprocessing",
"atexit"
] |
Is registered atexit handler inherited by spawned child processes? | 1,052,716 | <p>I am writing a daemon program using Python 2.5. In the main process an exit handler is registered with the <code>atexit</code> module; it <strong>seems</strong> that the handler gets called when each child process ends, which is not what I expected.</p>
<p>I noticed this behavior isn't mentioned in the Python <code>atexit</code> docs; does anybody know the issue? If this is how it should behave, how can I unregister the exit handler in child processes? There is an atexit.unregister in version 3.0, but I am using 2.5.</p>
| 2 | 2009-06-27T11:54:07Z | 1,062,691 | <p>There isn't an API to do it in Python 2.5, but you can just:</p>
<pre><code>import atexit
atexit._exithandlers = []
</code></pre>
<p>in your child processes - if you know you only have one exit handler installed, and that no other handlers are installed. However, be aware that some parts of the stdlib (e.g. <code>logging</code>) register <code>atexit</code> handlers. To avoid trampling on them, you could try:</p>
<pre><code>my_handler_entries = [e for e in atexit._exithandlers if e[0] == my_handler_func]
for e in my_handler_entries:
    atexit._exithandlers.remove(e)
</code></pre>
<p>where <code>my_handler_func</code> is the <code>atexit</code> handler you registered, and this should remove your entry without removing the others.</p>
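<p>For what it's worth, Python 3 later grew an official <code>atexit.unregister</code>. A self-contained sketch, run in a subprocess so that the surviving handler actually fires at interpreter exit:</p>

```python
import subprocess
import sys

child = """\
import atexit

def keep():
    print("keep ran")

def drop():
    print("drop ran")

atexit.register(keep)
atexit.register(drop)
atexit.unregister(drop)  # Python 3 API; on 2.5 you edit atexit._exithandlers instead
"""

out = subprocess.run([sys.executable, "-c", child],
                     capture_output=True, text=True).stdout
print(out.strip())  # keep ran
```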
| 3 | 2009-06-30T09:28:56Z | [
"python",
"multiprocessing",
"atexit"
] |
matplotlib "DLL load failed" when import pylab | 1,052,831 | <p>I'm new to matplotlib. My environment is WinXP, PythonWin 2.6.2, NumPy 1.3.0, matplotlib 0.98.5.3.</p>
<pre><code>>>> import matplotlib.pylab as pylab
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "D:\Python26\lib\site-packages\matplotlib\pylab.py", line 253, in <module>
from matplotlib.pyplot import *
File "D:\Python26\lib\site-packages\matplotlib\pyplot.py", line 75, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "D:\Python26\lib\site-packages\matplotlib\backends\__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "D:\Python26\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 8, in <module>
import tkagg # Paint image to Tk photo blitter extension
File "D:\Python26\lib\site-packages\matplotlib\backends\tkagg.py", line 1, in <module>
import _tkagg
ImportError: DLL load failed: cannot find the module
</code></pre>
<p>I searched the web, and it is said to be caused by a missing msvcp71.dll, but there is already one in my C:\windows\system32\</p>
<p>Can anyone help?
Thanks.</p>
| 1 | 2009-06-27T13:32:09Z | 1,052,910 | <p>try this, before using any other module</p>
<pre><code>import matplotlib
matplotlib.use('Agg')
import matplotlib.pylab as pylab
</code></pre>
<p>see <a href="http://www.mail-archive.com/matplotlib-users@lists.sourceforge.net/msg05372.html">http://www.mail-archive.com/matplotlib-users@lists.sourceforge.net/msg05372.html</a> for more details and other ways</p>
<p>matplotlib can use different backends for rendering; Agg is a pure drawing backend with no UI, so you can only save images, e.g.</p>
<pre><code>plt.savefig("plt.png")
</code></pre>
<p>read <a href="http://matplotlib.sourceforge.net/faq/installing_faq.html#what-is-a-backend">http://matplotlib.sourceforge.net/faq/installing_faq.html#what-is-a-backend</a> for more details, e.g. you can output to ps, pdf, wxagg, tkagg etc, so if you have wxpython installed use this</p>
<pre><code>matplotlib.use('wxagg')
</code></pre>
<p>Also, I think the tkagg backend should have worked, as Tkinter (<a href="http://wiki.python.org/moin/TkInter">http://wiki.python.org/moin/TkInter</a>) is the default GUI with Python. Did you install Python from python.org?</p>
| 8 | 2009-06-27T14:26:56Z | [
"python",
"matplotlib"
] |
matplotlib "DLL load failed" when import pylab | 1,052,831 | <p>I'm new to matplotlib. My environment is WinXP, PythonWin 2.6.2, NumPy 1.3.0, matplotlib 0.98.5.3.</p>
<pre><code>>>> import matplotlib.pylab as pylab
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "D:\Python26\lib\site-packages\matplotlib\pylab.py", line 253, in <module>
from matplotlib.pyplot import *
File "D:\Python26\lib\site-packages\matplotlib\pyplot.py", line 75, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "D:\Python26\lib\site-packages\matplotlib\backends\__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "D:\Python26\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 8, in <module>
import tkagg # Paint image to Tk photo blitter extension
File "D:\Python26\lib\site-packages\matplotlib\backends\tkagg.py", line 1, in <module>
import _tkagg
ImportError: DLL load failed: cannot find the module
</code></pre>
<p>I searched the web, and it is said to be caused by a missing msvcp71.dll, but there is already one in my C:\windows\system32\</p>
<p>Can anyone help?
Thanks.</p>
| 1 | 2009-06-27T13:32:09Z | 1,302,790 | <p>I had the same problem installing activestat python 2.6 and pylab. After installing the sourceforge python 2.6.2 however it worked fine</p>
| 1 | 2009-08-19T21:43:33Z | [
"python",
"matplotlib"
] |
matplotlib "DLL load failed" when import pylab | 1,052,831 | <p>I'm new to matplotlib. My environment is WinXP, PythonWin 2.6.2, NumPy 1.3.0, matplotlib 0.98.5.3.</p>
<pre><code>>>> import matplotlib.pylab as pylab
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "D:\Python26\lib\site-packages\matplotlib\pylab.py", line 253, in <module>
from matplotlib.pyplot import *
File "D:\Python26\lib\site-packages\matplotlib\pyplot.py", line 75, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "D:\Python26\lib\site-packages\matplotlib\backends\__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "D:\Python26\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 8, in <module>
import tkagg # Paint image to Tk photo blitter extension
File "D:\Python26\lib\site-packages\matplotlib\backends\tkagg.py", line 1, in <module>
import _tkagg
ImportError: DLL load failed: cannot find the module
</code></pre>
<p>I searched the web, and it is said to be caused by a missing msvcp71.dll, but there is already one in my C:\windows\system32\</p>
<p>Can anyone help?
Thanks.</p>
| 1 | 2009-06-27T13:32:09Z | 20,202,509 | <p>I actually found the answer, and the graphs are running great on my computer. If you are getting a DLL error like this, try downloading msvcp71.dll and msvcr71.dll and then copy-pasting the two into your System32 folder:</p>
<blockquote>
<p>C:\Windows\System32 </p>
</blockquote>
<p>Also copy-paste these two DLLs into the SysWOW64 folder if you are working on a 64-bit operating system:</p>
<blockquote>
<p>C:\Windows\SysWOW64</p>
</blockquote>
<p>Now try running your code file in Python and it will load the graph in a couple of seconds. Here is a link that shows how to copy-paste the DLLs into both folders, which might help:</p>
<blockquote>
<p><a href="http://www.youtube.com/watch?v=xmvRF7koJ5E" rel="nofollow">http://www.youtube.com/watch?v=xmvRF7koJ5E</a></p>
</blockquote>
<p>Cheers...</p>
| 0 | 2013-11-25T20:21:39Z | [
"python",
"matplotlib"
] |
matplotlib "DLL load failed" when import pylab | 1,052,831 | <p>I'm new to matplotlib. My environment is WinXP, PythonWin 2.6.2, NumPy 1.3.0, matplotlib 0.98.5.3.</p>
<pre><code>>>> import matplotlib.pylab as pylab
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
File "D:\Python26\lib\site-packages\matplotlib\pylab.py", line 253, in <module>
from matplotlib.pyplot import *
File "D:\Python26\lib\site-packages\matplotlib\pyplot.py", line 75, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "D:\Python26\lib\site-packages\matplotlib\backends\__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "D:\Python26\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 8, in <module>
import tkagg # Paint image to Tk photo blitter extension
File "D:\Python26\lib\site-packages\matplotlib\backends\tkagg.py", line 1, in <module>
import _tkagg
ImportError: DLL load failed: cannot find the module
</code></pre>
<p>I searched the web, and it is said to be caused by a missing msvcp71.dll, but there is already one in my C:\windows\system32\</p>
<p>Can anyone help?
Thanks.</p>
| 1 | 2009-06-27T13:32:09Z | 33,422,418 | <p>I had this issue using IPython notebooks with Python 2.7. Apparently the latest Seaborn update does not play well with my local configuration settings, so I did "conda/pip install seaborn=0.5.1". It downgraded the necessary packages automatically, and my problem went away.</p>
| 0 | 2015-10-29T18:52:53Z | [
"python",
"matplotlib"
] |
How can I convert a Perl regex with named groups to Python? | 1,052,930 | <p>I am trying to convert the following Perl regex I found in the <a href="http://search.cpan.org/dist/Video-Filename" rel="nofollow">Video::Filename</a> Perl module to a Python 2.5.4 regex to parse a filename</p>
<pre><code># Perl > v5.10
re => '^(?:(?<name>.*?)[\/\s._-]*)?(?<openb>\[)?(?<season>\d{1,2})[x\/](?<episode>\d{1,2})(?:-(?:\k<season>x)?(?<endep>\d{1,2}))?(?(<openb>)\])(?:[\s._-]*(?<epname>[^\/]+?))?$',
</code></pre>
<p>I would like to use named groups too, and I know in Python the regex extension for named groups is different, but I am not 100% sure on the syntax.</p>
<p>This is what I tried:</p>
<pre><code># Python (not working)
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?(?(P<openb>)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>The error I get:</p>
<pre><code> raise error, v # invalid expression
sre_constants.error: bad character in group name
</code></pre>
<p>For example, this one I managed to convert and it works. But the one above I can't seem to get right. I get a compilation error in Python.</p>
<pre><code># Perl:
re => '^(?:(?<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?<part>\d+))?(?<subep>[a-z])?(?:[\/\s._-]*(?<epname>[^\/]+?))?$',
# Python (working):
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?P<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?P<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?P<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?P<part>\d+))?(?P<subep>[a-z])?(?:[\/\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>I am not sure where to start looking.</p>
| 3 | 2009-06-27T14:40:28Z | 1,052,954 | <p>Those regexps are the product of a sick and twisted mind... :-)</p>
<p>Anyway, (?()) are conditions in both Python and Perl, and the Perl syntax above looks like it should be the same as the Python syntax, i.e., it evaluates as true if the named group exists.</p>
<p>Where to start looking? The documentation for the modules are here:</p>
<p><a href="http://docs.python.org/library/re.html" rel="nofollow">http://docs.python.org/library/re.html</a>
<a href="http://www.perl.com/doc/manual/html/pod/perlre.html" rel="nofollow">http://www.perl.com/doc/manual/html/pod/perlre.html</a></p>
| 0 | 2009-06-27T14:50:42Z | [
"python",
"regex",
"perl"
] |
How can I convert a Perl regex with named groups to Python? | 1,052,930 | <p>I am trying to convert the following Perl regex I found in the <a href="http://search.cpan.org/dist/Video-Filename" rel="nofollow">Video::Filename</a> Perl module to a Python 2.5.4 regex to parse a filename</p>
<pre><code># Perl > v5.10
re => '^(?:(?<name>.*?)[\/\s._-]*)?(?<openb>\[)?(?<season>\d{1,2})[x\/](?<episode>\d{1,2})(?:-(?:\k<season>x)?(?<endep>\d{1,2}))?(?(<openb>)\])(?:[\s._-]*(?<epname>[^\/]+?))?$',
</code></pre>
<p>I would like to use named groups too, and I know in Python the regex extension for named groups is different, but I am not 100% sure on the syntax.</p>
<p>This is what I tried:</p>
<pre><code># Python (not working)
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?(?(P<openb>)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>The error I get:</p>
<pre><code> raise error, v # invalid expression
sre_constants.error: bad character in group name
</code></pre>
<p>For example, this one I managed to convert and it works. But the one above I can't seem to get right. I get a compilation error in Python.</p>
<pre><code># Perl:
re => '^(?:(?<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?<part>\d+))?(?<subep>[a-z])?(?:[\/\s._-]*(?<epname>[^\/]+?))?$',
# Python (working):
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?P<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?P<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?P<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?P<part>\d+))?(?P<subep>[a-z])?(?:[\/\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>I am not sure where to start looking.</p>
| 3 | 2009-06-27T14:40:28Z | 1,052,985 | <p>I found the offending part but can't figure out what exactly is wrong without wrapping my mind around the whole thing.</p>
<pre><code>r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?
(?(P<openb>)\]) // this part here causes the error message
(?:[\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>The problem seems to be with the fact that group names in Python must be valid Python identifiers (check the <a href="http://docs.python.org/library/re.html" rel="nofollow">documentation</a>). The parentheses seem to be the problem. Removing them gives</p>
<pre><code>(?(P<openb>)\]) //with parentheses
(?P<openb>\]) //without parentheses
redefinition of group name 'openb' as group 6; was group 2
</code></pre>
| 2 | 2009-06-27T15:03:53Z | [
"python",
"regex",
"perl"
] |
How can I convert a Perl regex with named groups to Python? | 1,052,930 | <p>I am trying to convert the following Perl regex I found in the <a href="http://search.cpan.org/dist/Video-Filename" rel="nofollow">Video::Filename</a> Perl module to a Python 2.5.4 regex to parse a filename</p>
<pre><code># Perl > v5.10
re => '^(?:(?<name>.*?)[\/\s._-]*)?(?<openb>\[)?(?<season>\d{1,2})[x\/](?<episode>\d{1,2})(?:-(?:\k<season>x)?(?<endep>\d{1,2}))?(?(<openb>)\])(?:[\s._-]*(?<epname>[^\/]+?))?$',
</code></pre>
<p>I would like to use named groups too, and I know in Python the regex extension for named groups is different, but I am not 100% sure on the syntax.</p>
<p>This is what I tried:</p>
<pre><code># Python (not working)
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?(?(P<openb>)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>The error I get:</p>
<pre><code> raise error, v # invalid expression
sre_constants.error: bad character in group name
</code></pre>
<p>For example, this one I managed to convert and it works. But the one above I can't seem to get right. I get a compilation error in Python.</p>
<pre><code># Perl:
re => '^(?:(?<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?<part>\d+))?(?<subep>[a-z])?(?:[\/\s._-]*(?<epname>[^\/]+?))?$',
# Python (working):
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?P<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?P<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?P<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?P<part>\d+))?(?P<subep>[a-z])?(?:[\/\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>I am not sure where to start looking.</p>
| 3 | 2009-06-27T14:40:28Z | 1,052,993 | <p>There are 2 problems with your translation. First of all, the second mention of <code>openb</code> has extra parentheses around it, making it a <a href="http://perldoc.perl.org/perlre.html#%27%28?%28condition%29yes-pattern|no-pattern%29%27" rel="nofollow" title="Perl RE conditional expression documentation">conditional expression</a>, not a named expression.</p>
<p>Next, you didn't translate the <code>\k<season></code> <a href="http://perldoc.perl.org/perlre.html#%27%5Ck%3CNAME%3E%27" rel="nofollow" title="Perl RE backreference documentation">backreference</a>; Python uses <code>(?P=season)</code> to match the same. The following compiles for me:</p>
<pre><code>r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:(?P=season)x)?(?P<endep>\d{1,2}))?(?(openb)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>If I were you, I'd use re.VERBOSE to split this expression over multiple lines and add copious documentation so you can keep understanding the expression in the future if this is something that needs to remain maintainable though.</p>
<p><em>(edited after realising the second <code>openb</code> reference was a conditional expression, and to properly translate the backreference).</em></p>
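<p>A quick sanity check of the translated pattern against two invented filenames:</p>

```python
import re

# The translated pattern from above, exercised on two made-up names.
r = re.compile(
    r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})'
    r'[x\/](?P<episode>\d{1,2})(?:-(?:(?P=season)x)?(?P<endep>\d{1,2}))?'
    r'(?(openb)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')

m = r.match("Some.Show.1x02.Pilot")
# name="Some.Show", season="1", episode="02", epname="Pilot"

m2 = r.match("[3x07] Chapter Seven")
# openb="[" makes the conditional (?(openb)\]) require the closing "]"
```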
| 6 | 2009-06-27T15:07:59Z | [
"python",
"regex",
"perl"
] |
How can I convert a Perl regex with named groups to Python? | 1,052,930 | <p>I am trying to convert the following Perl regex I found in the <a href="http://search.cpan.org/dist/Video-Filename" rel="nofollow">Video::Filename</a> Perl module to a Python 2.5.4 regex to parse a filename</p>
<pre><code># Perl > v5.10
re => '^(?:(?<name>.*?)[\/\s._-]*)?(?<openb>\[)?(?<season>\d{1,2})[x\/](?<episode>\d{1,2})(?:-(?:\k<season>x)?(?<endep>\d{1,2}))?(?(<openb>)\])(?:[\s._-]*(?<epname>[^\/]+?))?$',
</code></pre>
<p>I would like to use named groups too, and I know in Python the regex extension for named groups is different, but I am not 100% sure on the syntax.</p>
<p>This is what I tried:</p>
<pre><code># Python (not working)
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?(?(P<openb>)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>The error I get:</p>
<pre><code> raise error, v # invalid expression
sre_constants.error: bad character in group name
</code></pre>
<p>For example, this one I managed to convert and it works. But the one above I can't seem to get right. I get a compilation error in Python.</p>
<pre><code># Perl:
re => '^(?:(?<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?<part>\d+))?(?<subep>[a-z])?(?:[\/\s._-]*(?<epname>[^\/]+?))?$',
# Python (working):
r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?P<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?P<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?P<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?P<part>\d+))?(?P<subep>[a-z])?(?:[\/\s._-]*(?P<epname>[^\/]+?))?$')
</code></pre>
<p>I am not sure where to start looking.</p>
| 3 | 2009-06-27T14:40:28Z | 1,053,031 | <p>I may be wrong, but you tried to get the backreference using:</p>
<pre><code>(?:\k<season>x)
</code></pre>
<p>Isn't the syntax <code>\g<name></code> in Python?</p>
| 0 | 2009-06-27T15:29:28Z | [
"python",
"regex",
"perl"
] |
Testing for ImportErrors in Python | 1,052,931 | <p>We're having a real problem with people checking in code that doesn't work because something's been refactored. Admittedly, this is partly because our developers don't really have any good tools for finding these kinds of mistakes easily.</p>
<p>Are there any tools to help find ImportErrors in Python? Of course, the correct answer here is "you should use your unit tests for that." But, I'm in legacy code land (at least by Michael Feathers's definition), so our unit tests are something we're working on heavily.</p>
<p>In the meantime, it would be nice to have some sort of tool that will walk through each directory and import each file within it just to find any scripts that have ImportErrors (like say if a file or class has been renamed recently). I suppose this wouldn't be <em>terribly</em> difficult to write myself, but are there any programs that are already written to do this? </p>
| 1 | 2009-06-27T14:41:37Z | 1,053,028 | <p>Something like this?</p>
<pre><code>import os

path = "some/path"
for x in os.listdir(path):
    if x.endswith(".py"):
        execfile(os.path.join(path, x))
</code></pre>
<p>Is that enough, or would you need more?</p>
<p>Note that any module that lacks <code>if __name__ == "__main__":</code> will -- obviously -- take off and start doing stuff.</p>
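<p>If importing (rather than executing) is closer to what you want, here is a rough sketch along the same lines — <code>scan_imports</code> is a made-up helper of mine, not an existing tool:</p>

```python
import os
import sys
import traceback

def scan_imports(path):
    """Try to import every top-level .py module found in `path`.

    Returns a list of (module_name, traceback_text) pairs for the
    modules that raised ImportError.  Importing still *runs* any
    module-level code, so the __main__ caveat above applies here too.
    """
    sys.path.insert(0, path)
    failures = []
    try:
        for name in sorted(os.listdir(path)):
            if not name.endswith(".py"):
                continue
            modname = name[:-3]
            try:
                __import__(modname)
            except ImportError:
                failures.append((modname, traceback.format_exc()))
    finally:
        sys.path.remove(path)
    return failures
```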
| 1 | 2009-06-27T15:26:22Z | [
"python",
"unit-testing",
"refactoring",
"continuous-integration",
"importerror"
] |
Testing for ImportErrors in Python | 1,052,931 | <p>We're having a real problem with people checking in code that doesn't work because something's been refactored. Admittedly, this is partly because our developers don't really have any good tools for finding these kinds of mistakes easily.</p>
<p>Are there any tools to help find ImportErrors in Python? Of course, the correct answer here is "you should use your unit tests for that." But, I'm in legacy code land (at least by Michael Feathers's definition), so our unit tests are something we're working on heavily.</p>
<p>In the meantime, it would be nice to have some sort of tool that will walk through each directory and import each file within it just to find any scripts that have ImportErrors (like say if a file or class has been renamed recently). I suppose this wouldn't be <em>terribly</em> difficult to write myself, but are there any programs that are already written to do this? </p>
| 1 | 2009-06-27T14:41:37Z | 1,053,195 | <p>Pychecker is for you. It imports the modules and will find these errors.</p>
<p><a href="http://pychecker.sourceforge.net/" rel="nofollow">http://pychecker.sourceforge.net/</a></p>
<p>Oh, and "pylint <modulename>" will import the module, but I guess you would have to call it once for every module you want, where pychecker at least supports *.py. (Pylint also support *.py but won't import the modules in that situation).</p>
| 3 | 2009-06-27T16:57:43Z | [
"python",
"unit-testing",
"refactoring",
"continuous-integration",
"importerror"
] |
psycopg2 on OSX: do I have to install PostgreSQL too? | 1,052,957 | <p>I want to access a postgreSQL database that's running on a remote machine, from Python in OS/X. Do I have to install postgres on the mac as well? Or will psycopg2 work on its own. </p>
<p>Any hints for a good installation guide for psycopg2 for os/x? </p>
| 1 | 2009-06-27T14:52:33Z | 1,052,990 | <p>macports tells me that the psycopg2 package has a dependency on the postgres client and libraries (but not the db server). If you successfully installed psycopg, then you should be good to go.</p>
<p>If you haven't installed yet, consider using macports or fink to deal with dependency resolution for you. In most cases, this will make things easier (occasionally build problems erupt).</p>
| 3 | 2009-06-27T15:07:33Z | [
"python",
"osx",
"postgresql"
] |
psycopg2 on OSX: do I have to install PostgreSQL too? | 1,052,957 | <p>I want to access a postgreSQL database that's running on a remote machine, from Python in OS/X. Do I have to install postgres on the mac as well? Or will psycopg2 work on its own. </p>
<p>Any hints for a good installation guide for psycopg2 for os/x? </p>
| 1 | 2009-06-27T14:52:33Z | 1,054,337 | <p>psycopg2 requires the PostgreSQL libpq libraries and the pg_config utility, which means you need a decent chunk of PostgreSQL to be installed. You could install Postgres and psycopg2 via MacPorts, but the version situation is somewhat messy--you may need to install a newer Python as well, particularly if you want to use a recent version of the PostgreSQL libraries. Depending on what you want to do, for example if you have some other Python you want to use, it may be easier to grab a more standard PostgreSQL install and just build psycopg2 yourself. That's pretty easy if you've already got gcc etc. installed, typically the only build issue is making sure it looks in the right place for the libpq include files. See <a href="http://jasonism.org/weblog/2008/nov/06/getting-psycopg2-work-mac-os-x-leopard/" rel="nofollow">Getting psycopg2 to work on Mac OS X Leopard</a> and <a href="http://blog.jonypawks.net/2008/06/20/installing-psycopg2-on-os-x/" rel="nofollow">Installing psycopg2 on OS X</a> for a few recipes covering the usual build issues you might run into.</p>
| 1 | 2009-06-28T05:41:29Z | [
"python",
"osx",
"postgresql"
] |
psycopg2 on OSX: do I have to install PostgreSQL too? | 1,052,957 | <p>I want to access a postgreSQL database that's running on a remote machine, from Python in OS/X. Do I have to install postgres on the mac as well? Or will psycopg2 work on its own. </p>
<p>Any hints for a good installation guide for psycopg2 for os/x? </p>
| 1 | 2009-06-27T14:52:33Z | 1,139,001 | <p>You can install from an <a href="http://www.postgresql.org/download/macosx" rel="nofollow">OS X PostgreSQL package</a>. Allow it to change your memory settings and reboot (it's reversible by removing '/etc/sysctl.conf') - the README file (which tells you to do this yourself) is out of date. Then use (or get, if you haven't already and) <a href="http://peak.telecommunity.com/DevCenter/EasyInstall" rel="nofollow">EasyInstall</a>.</p>
<p>Check where the PostgreSQL installer has put things - mine is here:</p>
<pre><code>/Library/PostgreSQL/8.4/
</code></pre>
<p>Add this path to your <code>.bash_login</code> or <code>.bash_profile</code> file in your home directory (make one if you don't have it already):</p>
<pre><code>export PATH="/Library/PostgreSQL/8.4/bin:$PATH"
</code></pre>
<p>Then (on an Intel iMac running OS 10.4.11 and Python 2.6) do:</p>
<pre><code>sudo easy_install psycopg2
</code></pre>
<p>This found psycopg2 2.0.11 and (on my setup) gave the following readout:</p>
<pre><code>warning: no files found matching '*.html' under directory 'doc'
warning: no files found matching 'MANIFEST'
zip_safe flag not set; analyzing archive contents...
Adding psycopg2 2.0.11 to easy-install.pth file
Installed /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/psycopg2-2.0.11-py2.6-macosx-10.3-i386.egg
Processing dependencies for psycopg2
Finished processing dependencies for psycopg2
</code></pre>
<p>So I guess I have no <code>psycopg2</code> documentation... however, despite the warnings, I could then do:</p>
<pre><code>python
>>>import psycopg2
>>>
</code></pre>
<p>Success? Perhaps. I haven't tried running anything yet, but getting a successful import was the first goal. BTW this was for Django.</p>
| 1 | 2009-07-16T17:16:08Z | [
"python",
"osx",
"postgresql"
] |
Is there any way to create a "project file" in Emacs? | 1,052,969 | <p>I say "project file" in the loosest sense. I have a few python projects that I work on with ropemacs using emacs W32 for Windows. What would be ideal is if I could have an icon I could click on on my desktop to open up emacs, open up the rope project, and set the speed bar in the top-level directory of that project. Then I could also maybe have a way to open up the next project in its own emacs set up the same way (but for that project). Of course, it's also acceptable if there were an emacs command or a shell command I could use to achieve the same effect instead of an icon on my desktop.</p>
<p>Is there any way to do this? I have absolutely no elisp-fu. :-(</p>
| 7 | 2009-06-27T14:56:04Z | 1,052,995 | <p>You can roll your own in bash/cmd.exe. The windows emacs distribution comes with a .bat file called runemacs.bat which accepts files to open as arguments. Write a small script and it should be able to open everything up from one icon.</p>
| 1 | 2009-06-27T15:08:10Z | [
"python",
"windows",
"emacs",
"elisp",
"ropemacs"
] |
Is there any way to create a "project file" in Emacs? | 1,052,969 | <p>I say "project file" in the loosest sense. I have a few python projects that I work on with ropemacs using emacs W32 for Windows. What would be ideal is if I could have an icon I could click on on my desktop to open up emacs, open up the rope project, and set the speed bar in the top-level directory of that project. Then I could also maybe have a way to open up the next project in its own emacs set up the same way (but for that project). Of course, it's also acceptable if there were an emacs command or a shell command I could use to achieve the same effect instead of an icon on my desktop.</p>
<p>Is there any way to do this? I have absolutely no elisp-fu. :-(</p>
| 7 | 2009-06-27T14:56:04Z | 1,053,004 | <p>I use the following "solution" for myself: A project root is defined by a directory that contains a <code>Makefile</code>. I've bound F12 to the mode specific elisp function that "compiles" the project by makeing the first target of the corresponding Makefile. This is found by recursively going upwards from the directory of the current file.</p>
<p>It is a little bit of setup, but you will never have to reconstruct your <code>.metadata</code> directory as was frequently the case with Eclipse before. And once set up, you just have to place the proper Makefiles around and you have your projects.</p>
<pre><code>(defun get-makefile-recursively-higher ()
(loop for i below 100 for cwd = (expand-file-name default-directory)
then next for next = (expand-file-name (concat cwd "/..")) for file =
(concat cwd "/" "Makefile") do (cond ((and (file-exists-p file))
(return file))) until (equal cwd next))
)
</code></pre>
<p>This is then used e.g. by the LaTeX and Python mode as follows:</p>
<pre><code>(defun py-execute-prog (&optional target)
"Invoke python on the file being edited in the current buffer using
arguments obtained from the minibuffer. It will save all of the modified
buffers before trying to execute the file."
(interactive)
(let* (makefile file cmd)
(setq makefile (get-makefile-recursively-higher))
(setq file (buffer-file-name (current-buffer)))
(setq cmd (concat "make -f " makefile))
(setq default-directory (file-name-directory makefile))
(save-some-buffers (not py-ask-about-save) nil)
(setq py-pdbtrack-do-tracking-p t)
(if (get-buffer py-output-buffer)
(kill-buffer py-output-buffer)) ; Get rid of buffer if it exists.
(global-unset-key "\C-x\C-l" )
(global-unset-key "\C-x\C-p" )
(global-unset-key "\C-x\C-e" )
(global-unset-key "\C-x\C-n" )
(global-set-key "\C-x\C-l" 'py-last-exception)
(global-set-key "\C-x\C-p" 'py-previous-exception)
(global-set-key "\C-x\C-e" 'py-current-line-exception)
(global-set-key "\C-x\C-n" 'py-next-exception)
(define-key comint-mode-map [mouse-3] 'py-current-line-exception)
(make-comint "Python Output" "make" nil "-f" makefile)
(if (not (get-buffer py-output-buffer))
(message "No output.")
(setq py-exception-buffer (current-buffer))
(pop-to-buffer py-output-buffer)
)))
(defun make-tex (&optional target)
(interactive)
(let (makefile cmd)
(setq makefile (get-makefile-recursively-higher))
(save-buffer)
(TeX-save-document (TeX-master-file))
(setq cmd (concat "make -j4 -f " makefile " LATEXPARAM=\"-halt-on-error -file-line-error\""
" TEXMASTER=" (expand-file-name (TeX-master-file)) ".tex"
" TEXMASTERDIR=" (file-name-directory makefile) "/"))
(when (stringp target)
(setq cmd (concat cmd " " target))
)
(ad-activate-regexp "auto-compile-yes-or-no-p-always-yes")
(compile cmd)
(ad-deactivate-regexp "auto-compile-yes-or-no-p-always-yes")
)
)
</code></pre>
| 2 | 2009-06-27T15:11:36Z | [
"python",
"windows",
"emacs",
"elisp",
"ropemacs"
] |
Is there any way to create a "project file" in Emacs? | 1,052,969 | <p>I say "project file" in the loosest sense. I have a few python projects that I work on with ropemacs using emacs W32 for Windows. What would be ideal is if I could have an icon I could click on on my desktop to open up emacs, open up the rope project, and set the speed bar in the top-level directory of that project. Then I could also maybe have a way to open up the next project in its own emacs set up the same way (but for that project). Of course, it's also acceptable if there were an emacs command or a shell command I could use to achieve the same effect instead of an icon on my desktop.</p>
<p>Is there any way to do this? I have absolutely no elisp-fu. :-(</p>
| 7 | 2009-06-27T14:56:04Z | 1,053,681 | <p>You could get everything set up the way you want for a project, then use the answer I posted about using desktop.el and named sessions:</p>
<p><a href="http://stackoverflow.com/questions/847962/what-alternate-session-managers-are-available-for-emacs/849180#849180">http://stackoverflow.com/questions/847962/what-alternate-session-managers-are-available-for-emacs/849180#849180</a></p>
| 6 | 2009-06-27T21:37:17Z | [
"python",
"windows",
"emacs",
"elisp",
"ropemacs"
] |
Is there any way to create a "project file" in Emacs? | 1,052,969 | <p>I say "project file" in the loosest sense. I have a few python projects that I work on with ropemacs using emacs W32 for Windows. What would be ideal is if I could have an icon I could click on on my desktop to open up emacs, open up the rope project, and set the speed bar in the top-level directory of that project. Then I could also maybe have a way to open up the next project in its own emacs set up the same way (but for that project). Of course, it's also acceptable if there were an emacs command or a shell command I could use to achieve the same effect instead of an icon on my desktop.</p>
<p>Is there any way to do this? I have absolutely no elisp-fu. :-(</p>
| 7 | 2009-06-27T14:56:04Z | 1,055,091 | <p>There's <a href="http://wiki.github.com/jrockway/eproject" rel="nofollow">eproject</a>, but it seems to be very sparsely documented.</p>
| 2 | 2009-06-28T15:15:02Z | [
"python",
"windows",
"emacs",
"elisp",
"ropemacs"
] |
Is there any way to create a "project file" in Emacs? | 1,052,969 | <p>I say "project file" in the loosest sense. I have a few python projects that I work on with ropemacs using emacs W32 for Windows. What would be ideal is if I could have an icon I could click on on my desktop to open up emacs, open up the rope project, and set the speed bar in the top-level directory of that project. Then I could also maybe have a way to open up the next project in its own emacs set up the same way (but for that project). Of course, it's also acceptable if there were an emacs command or a shell command I could use to achieve the same effect instead of an icon on my desktop.</p>
<p>Is there any way to do this? I have absolutely no elisp-fu. :-(</p>
| 7 | 2009-06-27T14:56:04Z | 1,055,878 | <p>I'm actually working on a solution to this very problem. I always had a group of files I wanted to open/close at the same time, plus do things like open a magit-status window, a dired buffer, etc. I've started on my own project mode called metaproject. It's in the very early stages, but is functional enough that I'm using it for project groups at work now.</p>
<p>Check it out here: <a href="http://nafai77.github.com/metaproject/" rel="nofollow">http://nafai77.github.com/metaproject/</a></p>
<p>What's in git is pretty stable, though sparsely documented. I'm going to start working on it again here soon. I currently have the basics of a small plug-in architecture, so you can register custom actions that can be done on project open.</p>
| 4 | 2009-06-28T21:58:15Z | [
"python",
"windows",
"emacs",
"elisp",
"ropemacs"
] |
Is there any way to create a "project file" in Emacs? | 1,052,969 | <p>I say "project file" in the loosest sense. I have a few python projects that I work on with ropemacs using emacs W32 for Windows. What would be ideal is if I could have an icon I could click on on my desktop to open up emacs, open up the rope project, and set the speed bar in the top-level directory of that project. Then I could also maybe have a way to open up the next project in its own emacs set up the same way (but for that project). Of course, it's also acceptable if there were an emacs command or a shell command I could use to achieve the same effect instead of an icon on my desktop.</p>
<p>Is there any way to do this? I have absolutely no elisp-fu. :-(</p>
| 7 | 2009-06-27T14:56:04Z | 2,105,780 | <p>See <a href="http://www.emacswiki.org/emacs/CategoryProgrammerUtils#ProjectSupport" rel="nofollow">http://www.emacswiki.org/emacs/CategoryProgrammerUtils#ProjectSupport</a>. There are several packages which manage "projects". I favor mk-project, but then again, I wrote it. ;-)</p>
| 1 | 2010-01-20T23:13:46Z | [
"python",
"windows",
"emacs",
"elisp",
"ropemacs"
] |
Is there any way to create a "project file" in Emacs? | 1,052,969 | <p>I say "project file" in the loosest sense. I have a few python projects that I work on with ropemacs using emacs W32 for Windows. What would be ideal is if I could have an icon I could click on on my desktop to open up emacs, open up the rope project, and set the speed bar in the top-level directory of that project. Then I could also maybe have a way to open up the next project in its own emacs set up the same way (but for that project). Of course, it's also acceptable if there were an emacs command or a shell command I could use to achieve the same effect instead of an icon on my desktop.</p>
<p>Is there any way to do this? I have absolutely no elisp-fu. :-(</p>
| 7 | 2009-06-27T14:56:04Z | 7,303,057 | <p>You can use Desktop bookmarks, Dired bookmarks, or Bookmark-List bookmarks to do what you want -- see <a href="http://www.emacswiki.org/emacs/BookmarkPlus" rel="nofollow"><strong><em>Bookmark+</em></strong></a>. You can even encapsulate code and multiple bookmarks in a single bookmark to get just the state and setup you want.</p>
<p>See also: <a href="http://www.emacswiki.org/emacs/Icicles" rel="nofollow"><strong><em>Icicles</em></strong></a> <a href="http://www.emacswiki.org/emacs/Icicles_-_Support_for_Projects" rel="nofollow">support for projects</a> for more options.</p>
| 0 | 2011-09-05T00:44:09Z | [
"python",
"windows",
"emacs",
"elisp",
"ropemacs"
] |
Can you access a subbed-classed model from within the super-class model in the Django ORM? | 1,053,053 | <p>Lets say I have model inheritance set up in the way defined below.</p>
<pre><code>class ArticleBase(models.Model):
title = models.CharField()
author = models.CharField()
class Review(ArticleBase):
rating = models.IntegerField()
class News(ArticleBase):
source = models.CharField()
</code></pre>
<p>If I need a list of all articles regardless of type (in this case both Reviews and News) ordered by title, I can run a query on ArticleBase. Is there an easy way once I have an ArticleBase record to determine if it relates to a Review or a News record without querying both models to see which has the foreign key of the record I am on?</p>
| 3 | 2009-06-27T15:40:40Z | 1,053,213 | <p>You don't ever need to know the subclass of an ArticleBase. The two subclasses -- if you design this properly -- have the same methods and can be used interchangeably.</p>
<p>Currently, your two subclasses aren't trivially polymorphic. They each have unique attributes. </p>
<p>For the 80% case, they have nothing to do with each other, and this works out fine.</p>
<p>The phrase "all articles regardless of type" is a misleading way to describe what you're doing. It makes it sound simpler than it really is.</p>
<p>What you're asking for a is a "union of News and Reviews". I'm not sure what you'd do with this union -- index them perhaps. For the index page to work simply, you have to delegate the formatting details to each subclass. They have to emit an HTML-friendly summary from a method that both subclasses implement. This summary is what your template would rely on.</p>
<pre><code>class Review(ArticleBase):
rating = models.IntegerField()
def summary( self ):
return '<span class="title">%s</span><span class="author">%s</span><span class="rating">%s</span>' % ( self.title, self.author, self.rating )
</code></pre>
<p>A similar <code>summary</code> method is required for News.</p>
<p>Then you can union these two disjoint types together to produce a single index.</p>
| 1 | 2009-06-27T17:09:32Z | [
"python",
"django",
"django-models"
] |
Can you access a subbed-classed model from within the super-class model in the Django ORM? | 1,053,053 | <p>Lets say I have model inheritance set up in the way defined below.</p>
<pre><code>class ArticleBase(models.Model):
title = models.CharField()
author = models.CharField()
class Review(ArticleBase):
rating = models.IntegerField()
class News(ArticleBase):
source = models.CharField()
</code></pre>
<p>If I need a list of all articles regardless of type (in this case both Reviews and News) ordered by title, I can run a query on ArticleBase. Is there an easy way once I have an ArticleBase record to determine if it relates to a Review or a News record without querying both models to see which has the foreign key of the record I am on?</p>
| 3 | 2009-06-27T15:40:40Z | 1,053,907 | <p>A less sophisticated yet practical approach might be to have a single model with an article_type field. It's a lot easier to work with all articles and you're just a simple filter clause away from your news and review query sets.</p>
<p>I'd be interested to hear the arguments against this or the scenarios in which it might prove to be problematic. I've used this and inheritance at various times and haven't reached a clear conclusion on the best technique.</p>
| 0 | 2009-06-28T00:13:27Z | [
"python",
"django",
"django-models"
] |
Can you access a subbed-classed model from within the super-class model in the Django ORM? | 1,053,053 | <p>Lets say I have model inheritance set up in the way defined below.</p>
<pre><code>class ArticleBase(models.Model):
title = models.CharField()
author = models.CharField()
class Review(ArticleBase):
rating = models.IntegerField()
class News(ArticleBase):
source = models.CharField()
</code></pre>
<p>If I need a list of all articles regardless of type (in this case both Reviews and News) ordered by title, I can run a query on ArticleBase. Is there an easy way once I have an ArticleBase record to determine if it relates to a Review or a News record without querying both models to see which has the foreign key of the record I am on?</p>
| 3 | 2009-06-27T15:40:40Z | 1,075,981 | <p>I am assuming that all ArticleBase instances are instances of ArticleBase subclasses.</p>
<p>One solution is to store the subclass name in ArticleBase, along with some methods that return the subclass or a subclass instance based on that information. As multi-table inheritance defines a property on the parent instance to access the child instance, this is all pretty straightforward.</p>
<pre><code>from django.db import models

class ArticleBase(models.Model):
    title = models.CharField(max_length=255)
    author = models.CharField(max_length=255)
    # Store the actual class name.
    class_name = models.CharField(max_length=100)

    # Define save to make sure class_name is set.
    def save(self, *args, **kwargs):
        self.class_name = self.__class__.__name__
        super(ArticleBase, self).save(*args, **kwargs)

    # Multi-table inheritance defines an attribute to fetch the child
    # from a parent instance given the lower case subclass name.
    def get_child(self):
        return getattr(self, self.class_name.lower())

    # If indeed you really need the class.
    def get_child_class(self):
        return self.get_child().__class__

    # Check the type against a subclass name or a subclass.
    # For instance, 'if article.child_is(News):'
    # or 'if article.child_is("News"):'.
    def child_is(self, cls):
        if isinstance(cls, basestring):
            return cls.lower() == self.class_name.lower()
        else:
            return self.get_child_class() == cls

class Review(ArticleBase):
    rating = models.IntegerField()

class News(ArticleBase):
    source = models.CharField(max_length=255)
</code></pre>
<p>This is by no means the only way to go about this. It is, however, a pretty simple and straight forward solution. The excellent contrib contenttypes app and the generic module which leverages it offer a wealth of ways to do this, well, generically.</p>
<p>It could be useful to have the following in ArticleBase:</p>
<pre><code>def __unicode__(self):
    return self.get_child().__unicode__()
</code></pre>
<p>In that case, be aware that failure to define <code>__unicode__</code> in the subclasses, or calling <code>__unicode__</code> on an instance of ArticleBase (one that has not been subclassed) would lead to an infinite recursion. Thus the admonition below re sanity checking (for instance, preventing just such an instantiation of ArticleBase directly).</p>
<p><strong>Disclaimer:</strong></p>
<p>This code is untested, I'm sure I've got a typo or two in there, but the basic concept should be sound. Production level code should probably have some sanity checking to intercept usage errors.</p>
| 1 | 2009-07-02T18:33:41Z | [
"python",
"django",
"django-models"
] |
Can you access a subbed-classed model from within the super-class model in the Django ORM? | 1,053,053 | <p>Let's say I have model inheritance set up in the way defined below.</p>
<pre><code>class ArticleBase(models.Model):
    title = models.CharField()
    author = models.CharField()

class Review(ArticleBase):
    rating = models.IntegerField()

class News(ArticleBase):
    source = models.CharField()
</code></pre>
<p>If I need a list of all articles regardless of type (in this case both Reviews and News) ordered by title, I can run a query on ArticleBase. Is there an easy way once I have an ArticleBase record to determine if it relates to a Review or a News record without querying both models to see which has the foreign key of the record I am on?</p>
| 3 | 2009-06-27T15:40:40Z | 9,519,172 | <p><a href="https://docs.djangoproject.com/en/dev/topics/db/models/#multi-table-inheritance" rel="nofollow">https://docs.djangoproject.com/en/dev/topics/db/models/#multi-table-inheritance</a></p>
<p>You don't have to <em>explicitly</em> query the database; you can do</p>
<pre><code>article_base_instance.review
</code></pre>
<p>this will either return the Review instance or raise an exception. Same for the rest of the subclasses.</p>
<p>If you are using MTI that's your only option. :)</p>
| 0 | 2012-03-01T15:42:50Z | [
"python",
"django",
"django-models"
] |
How do I convert a Python float to a hexadecimal string in python 2.5? Nonworking solution attached | 1,053,121 | <p>What I really need to do is to export a floating point number to C with no precision loss.</p>
<p>I did this in python:</p>
<pre><code>import math
import struct
x = math.sqrt(2)
print struct.unpack('ii', struct.pack('d', x))
# prints (1719614413, 1073127582)
</code></pre>
<p>And in C I try this:</p>
<pre><code>#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned long long x[2] = {1719614413, 1073127582};
    long long lx;
    double xf;

    lx = (x[0] << 32) | x[1];
    xf = (double)lx;
    printf("%lf\n", xf);
    return 0;
}
</code></pre>
<p>But in C I get:</p>
<p>7385687666638364672.000000 and not sqrt(2).</p>
<p>What am I missing?</p>
<p>Thanks.</p>
| 2 | 2009-06-27T16:21:37Z | 1,053,135 | <p>The Python code appears to work. The problem is in the C code: you have the <code>long long</code> filled out right, but then you convert the integer value directly into floating point, rather than <strong>reinterpreting the bytes</strong> as a <code>double</code>. If you throw some pointers/addressing at it it works:</p>
<pre><code>jkugelman$ cat float.c
#include <stdio.h>

int main(void)
{
    unsigned long x[2] = {1719614413, 1073127582};
    double d = *(double *) x;
    printf("%f\n", d);
    return 0;
}
jkugelman$ gcc -o float float.c
jkugelman$ ./float
1.414214
</code></pre>
<p>Notice also that the plain format specifier for <code>double</code> (and for <code>float</code>) is <code>%f</code>. In <code>printf</code>, <code>%lf</code> is accepted as a synonym for <code>%f</code> since C99; it is <code>%Lf</code> that means <code>long double</code>.</p>
| 6 | 2009-06-27T16:27:57Z | [
"python",
"double"
] |
How do I convert a Python float to a hexadecimal string in python 2.5? Nonworking solution attached | 1,053,121 | <p>What I really need to do is to export a floating point number to C with no precision loss.</p>
<p>I did this in python:</p>
<pre><code>import math
import struct
x = math.sqrt(2)
print struct.unpack('ii', struct.pack('d', x))
# prints (1719614413, 1073127582)
</code></pre>
<p>And in C I try this:</p>
<pre><code>#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned long long x[2] = {1719614413, 1073127582};
    long long lx;
    double xf;

    lx = (x[0] << 32) | x[1];
    xf = (double)lx;
    printf("%lf\n", xf);
    return 0;
}
</code></pre>
<p>But in C I get:</p>
<p>7385687666638364672.000000 and not sqrt(2).</p>
<p>What am I missing?</p>
<p>Thanks.</p>
| 2 | 2009-06-27T16:21:37Z | 1,053,141 | <p>If you're targeting a little-endian architecture,</p>
<pre><code>>>> s = struct.pack('<d', x)
>>> ''.join('%.2x' % ord(c) for c in s)
'cd3b7f669ea0f63f'
</code></pre>
<p>If big-endian, use <code>'>d'</code> instead of <code><d</code>. In either case, this gives you a hex string as you're asking for in the question's title, and of course C code can interpret it; I'm not sure what those two ints have to do with a "hex string".</p>
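<p>For completeness, a round-trip sketch (the helper names here are mine) using <code>binascii</code> for the hex step; the round trip is exact, so no precision is lost:</p>

```python
import binascii
import math
import struct

def float_to_hex(x):
    # Pack as a little-endian IEEE-754 double, then hex-encode the 8 bytes.
    return binascii.hexlify(struct.pack('<d', x))

def hex_to_float(s):
    # Inverse: decode the hex digits back into 8 bytes and unpack the double.
    return struct.unpack('<d', binascii.unhexlify(s))[0]

# Same string as shown above, and an exact round trip.
assert float_to_hex(math.sqrt(2)) == b'cd3b7f669ea0f63f'
assert hex_to_float(float_to_hex(math.sqrt(2))) == math.sqrt(2)
```

<p>Because <code>'<d'</code> pins the byte order, the resulting string is portable between machines.</p>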
| 3 | 2009-06-27T16:30:20Z | [
"python",
"double"
] |
How do I convert a Python float to a hexadecimal string in python 2.5? Nonworking solution attached | 1,053,121 | <p>What I really need to do is to export a floating point number to C with no precision loss.</p>
<p>I did this in python:</p>
<pre><code>import math
import struct
x = math.sqrt(2)
print struct.unpack('ii', struct.pack('d', x))
# prints (1719614413, 1073127582)
</code></pre>
<p>And in C I try this:</p>
<pre><code>#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned long long x[2] = {1719614413, 1073127582};
    long long lx;
    double xf;

    lx = (x[0] << 32) | x[1];
    xf = (double)lx;
    printf("%lf\n", xf);
    return 0;
}
</code></pre>
<p>But in C I get:</p>
<p>7385687666638364672.000000 and not sqrt(2).</p>
<p>What am I missing?</p>
<p>Thanks.</p>
| 2 | 2009-06-27T16:21:37Z | 1,053,839 | <p>repr() is your friend.</p>
<pre><code>C:\junk\es2>type es2.c
#include <stdio.h>
#include <math.h>
#include <assert.h>

int main(int argc, char** argv) {
    double expected, actual;
    int nconv;

    expected = sqrt(2.0);
    printf("expected: %20.17g\n", expected);
    actual = -666.666;
    nconv = scanf("%lf", &actual);
    assert(nconv == 1);
    printf("actual: %20.17g\n", actual);
    assert(actual == expected);
    return 0;
}
C:\junk\es2>gcc es2.c
C:\junk\es2>\python26\python -c "import math; print repr(math.sqrt(2.0))" | a
expected: 1.4142135623730951
actual: 1.4142135623730951
C:\junk\es2>
</code></pre>
| 1 | 2009-06-27T23:13:36Z | [
"python",
"double"
] |
Modeling a complex relationship in Django | 1,053,344 | <p>I'm working on a Web service in Django, and I need to model a very specific, complex relationship which I just can't seem to solve.</p>
<p>Imagine three general models, let's call them Site, Category and Item. Each Site contains one or several Categories, but it can relate to them in one of two possible ways. The first are "common" categories, which are in a many-to-many relationship: they are predefined, and each Site can relate to zero or more of the Categories, and vice versa. The other type of categories are individually defined for each site, and one such category "belongs" only to that site and no other; i.e. they are in a many-to-one relationship, as each Site may have a number of those Categories.</p>
<p>Internally, those two types of Categories are completely identical; they only differ in the way they are related to the Sites. I could, however, separate them into two different models (with a common parent model, probably), but that solves only half of my problem: the Item model is in a many-to-one relationship with the Categories, i.e. each Item belongs to only one Category, and ideally it shouldn't care how it is related to a Site.</p>
<p>Another solution would be to allow the two separate types of Site-Category relations to coexist (i.e. to have both a ForeignKey and a ManyToMany field on the same Category model), but this solution feels like opening a whole other can of worms.</p>
<p>Does anyone have an idea if there is a third, better solution to this dead end?</p>
| 2 | 2009-06-27T18:05:54Z | 1,053,361 | <p>Why not just have both types of category in one model, so you just have 3 models?</p>
<pre><code>class Site(models.Model):
    pass  # site fields go here

class Category(models.Model):
    sites = models.ManyToManyField(Site)
    is_common = models.BooleanField()

class Item(models.Model):
    category = models.ForeignKey(Category)
</code></pre>
<p>You say "Internally, those two type of Categories are completely identical", so it sounds like this is possible. Note it is perfectly valid for a ManyToManyField to have only one value, so you don't need "ForeignKey and a ManyToMany field on the same Category model", which just sounds like a hassle. Just put only one value in the ManyToMany field.</p>
| 4 | 2009-06-27T18:17:36Z | [
"python",
"django",
"django-models",
"entity-relationship"
] |
Modeling a complex relationship in Django | 1,053,344 | <p>I'm working on a Web service in Django, and I need to model a very specific, complex relationship which I just can't seem to solve.</p>
<p>Imagine three general models, let's call them Site, Category and Item. Each Site contains one or several Categories, but it can relate to them in one of two possible ways. The first are "common" categories, which are in a many-to-many relationship: they are predefined, and each Site can relate to zero or more of the Categories, and vice versa. The other type of categories are individually defined for each site, and one such category "belongs" only to that site and no other; i.e. they are in a many-to-one relationship, as each Site may have a number of those Categories.</p>
<p>Internally, those two types of Categories are completely identical; they only differ in the way they are related to the Sites. I could, however, separate them into two different models (with a common parent model, probably), but that solves only half of my problem: the Item model is in a many-to-one relationship with the Categories, i.e. each Item belongs to only one Category, and ideally it shouldn't care how it is related to a Site.</p>
<p>Another solution would be to allow the two separate types of Site-Category relations to coexist (i.e. to have both a ForeignKey and a ManyToMany field on the same Category model), but this solution feels like opening a whole other can of worms.</p>
<p>Does anyone have an idea if there is a third, better solution to this dead end?</p>
| 2 | 2009-06-27T18:05:54Z | 1,053,526 | <p>Caveat: I know Object-Relational mapping, Rails, and Python, but not Django specifically.</p>
<p>I see two additional options:</p>
<ol>
<li>Thinking from a database point of view, I could make the table needed for the many-many relation hold an additional field which indicates a "common" vs. "site" relationship and add constraints to limit the type of "site" relationships. This can be done in Django, I think, in the section <a href="http://docs.djangoproject.com/en/1.0/topics/db/models/#extra-fields-on-many-to-many-relationships" rel="nofollow">"Extra Fields on Many-To-Many Relationships."</a></li>
</ol>
<p>If you are at an earlier version of Django, you can still do this by making the many-many table an explicit model.</p>
<ol start="2">
<li><p>Thinking from an object point of view, I could see splitting the Categories into three classes:</p>
<p>BaseCategory</p>
<p>CommonCategory(BaseCategory)</p>
<p>SiteCategory(BaseCategory)</p></li>
</ol>
<p>and then use one of Django's inheritance models.</p>
| 0 | 2009-06-27T20:08:51Z | [
"python",
"django",
"django-models",
"entity-relationship"
] |
Modeling a complex relationship in Django | 1,053,344 | <p>I'm working on a Web service in Django, and I need to model a very specific, complex relationship which I just can't seem to solve.</p>
<p>Imagine three general models, let's call them Site, Category and Item. Each Site contains one or several Categories, but it can relate to them in one of two possible ways. The first are "common" categories, which are in a many-to-many relationship: they are predefined, and each Site can relate to zero or more of the Categories, and vice versa. The other type of categories are individually defined for each site, and one such category "belongs" only to that site and no other; i.e. they are in a many-to-one relationship, as each Site may have a number of those Categories.</p>
<p>Internally, those two types of Categories are completely identical; they only differ in the way they are related to the Sites. I could, however, separate them into two different models (with a common parent model, probably), but that solves only half of my problem: the Item model is in a many-to-one relationship with the Categories, i.e. each Item belongs to only one Category, and ideally it shouldn't care how it is related to a Site.</p>
<p>Another solution would be to allow the two separate types of Site-Category relations to coexist (i.e. to have both a ForeignKey and a ManyToMany field on the same Category model), but this solution feels like opening a whole other can of worms.</p>
<p>Does anyone have an idea if there is a third, better solution to this dead end?</p>
| 2 | 2009-06-27T18:05:54Z | 1,061,061 | <p>As an alternative implementation, you could use Django content types (generic relations) to accomplish the connection of the items. A bonus for using this implementation is that it allows you to utilize the category models in different ways depending on your data needs down the road.</p>
<p>You can make using the site categories easier by writing model methods for pulling and sorting categories. Django's contrib admin also supports the generic relation inlines.</p>
<p>Your models would be as follows:</p>
<pre><code>from django.db import models
from django.contrib.contenttypes import generic
from django.contrib.contenttypes.models import ContentType

class Site(models.Model):
    label = models.CharField(max_length=255)

class Category(models.Model):
    site = models.ManyToManyField(Site)
    label = models.CharField(max_length=255)

class SiteCategory(models.Model):
    site = models.ForeignKey(Site)
    label = models.CharField(max_length=255)

class Item(models.Model):
    label = models.CharField(max_length=255)
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    content_object = generic.GenericForeignKey('content_type', 'object_id')
<p>For a more in-depth review of content types and how to query the generic relations, you can read here: <a href="http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/" rel="nofollow">http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/</a></p>
| 1 | 2009-06-29T23:24:52Z | [
"python",
"django",
"django-models",
"entity-relationship"
] |
Regex and a sequence of patterns? | 1,053,481 | <p>Is there a way to match a pattern (<code>e\d\d</code>) several times, capturing each one into a group? For example, given the string..</p>
<pre><code>blah.s01e24e25
</code></pre>
<p>..I wish to get four groups:</p>
<pre><code>1 -> blah
2 -> 01
3 -> 24
4 -> 25
</code></pre>
<p>The obvious regex to use (in Python) is:</p>
<pre><code>import re
re.match("(\w+).s(\d+)e(\d+)e(\d+)", "blah.s01e24e25").groups()
</code></pre>
<p>..but I also want to match either of the following:</p>
<pre><code>blah.s01e24
blah.s01e24e25e26
</code></pre>
<p>You can't seem to do <code>(e\d\d)+</code>, or rather you can, but it only captures the last occurrence:</p>
<pre><code>>>> re.match("(\w+).s(\d+)(e\d\d){2}", "blah.s01e24e25e26").groups()
('blah', '01', 'e25')
>>> re.match("(\w+).s(\d+)(e\d\d){3}", "blah.s01e24e25e26").groups()
('blah', '01', 'e26')
</code></pre>
<p>I want to do this in a single regex because I have multiple patterns to match TV episode filenames, and do not want to duplicate each expression to handle multiple episodes:</p>
<pre><code>\w+\.s(\d+)e(\d+)             # matches blah.s01e01
\w+\.s(\d+)e(\d+)e(\d+)       # matches blah.s01e01e02
\w+\.s(\d+)e(\d+)e(\d+)e(\d+) # matches blah.s01e01e02e03
\w+ - \d+x\d+                 # matches blah - 01x01
\w+ - \d+x\d+x\d+             # matches blah - 01x01x02
\w+ - \d+x\d+x\d+x\d+         # matches blah - 01x01x02x03
</code></pre>
<p>..and so on for numerous other patterns.</p>
<p>Another thing to complicate matters: I wish to store these regexes in a config file, so a solution using multiple regexes and function calls is not desired - but if this proves impossible I'll just allow the user to add simple regexes.</p>
<p>Basically, is there a way to capture a repeating pattern using regex?</p>
| 4 | 2009-06-27T19:45:29Z | 1,053,501 | <p>Do it in two steps, one to find all the numbers, then one to split them:</p>
<pre><code>import re

def get_pieces(s):
    # Error checking omitted!
    whole_match = re.search(r'\w+\.(s\d+(?:e\d+)+)', s)
    return re.findall(r'\d+', whole_match.group(1))

print get_pieces(r"blah.s01e01")
print get_pieces(r"blah.s01e01e02")
print get_pieces(r"blah.s01e01e02e03")
# prints:
# ['01', '01']
# ['01', '01', '02']
# ['01', '01', '02', '03']
</code></pre>
| 5 | 2009-06-27T19:53:39Z | [
"python",
"regex",
"sequences"
] |
Regex and a sequence of patterns? | 1,053,481 | <p>Is there a way to match a pattern (<code>e\d\d</code>) several times, capturing each one into a group? For example, given the string..</p>
<pre><code>blah.s01e24e25
</code></pre>
<p>..I wish to get four groups:</p>
<pre><code>1 -> blah
2 -> 01
3 -> 24
4 -> 25
</code></pre>
<p>The obvious regex to use (in Python) is:</p>
<pre><code>import re
re.match("(\w+).s(\d+)e(\d+)e(\d+)", "blah.s01e24e25").groups()
</code></pre>
<p>..but I also want to match either of the following:</p>
<pre><code>blah.s01e24
blah.s01e24e25e26
</code></pre>
<p>You can't seem to do <code>(e\d\d)+</code>, or rather you can, but it only captures the last occurrence:</p>
<pre><code>>>> re.match("(\w+).s(\d+)(e\d\d){2}", "blah.s01e24e25e26").groups()
('blah', '01', 'e25')
>>> re.match("(\w+).s(\d+)(e\d\d){3}", "blah.s01e24e25e26").groups()
('blah', '01', 'e26')
</code></pre>
<p>I want to do this in a single regex because I have multiple patterns to match TV episode filenames, and do not want to duplicate each expression to handle multiple episodes:</p>
<pre><code>\w+\.s(\d+)e(\d+)             # matches blah.s01e01
\w+\.s(\d+)e(\d+)e(\d+)       # matches blah.s01e01e02
\w+\.s(\d+)e(\d+)e(\d+)e(\d+) # matches blah.s01e01e02e03
\w+ - \d+x\d+                 # matches blah - 01x01
\w+ - \d+x\d+x\d+             # matches blah - 01x01x02
\w+ - \d+x\d+x\d+x\d+         # matches blah - 01x01x02x03
</code></pre>
<p>..and so on for numerous other patterns.</p>
<p>Another thing to complicate matters: I wish to store these regexes in a config file, so a solution using multiple regexes and function calls is not desired - but if this proves impossible I'll just allow the user to add simple regexes.</p>
<p>Basically, is there a way to capture a repeating pattern using regex?</p>
| 4 | 2009-06-27T19:45:29Z | 1,053,505 | <p>The number of captured groups equals the number of parenthesized groups in the pattern, so a repeated group only keeps its last match. Look at <code>findall</code> or <code>finditer</code> to solve your problem.</p>
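<p>For example, a quick sketch of both (the sample pattern here is mine, pulling every <code>e\d\d</code> occurrence out of the question's string):</p>

```python
import re

s = "blah.s01e24e25e26"

# findall returns every non-overlapping match of the group,
# not just the value left in the group after the last repetition.
episodes = re.findall(r"e(\d\d)", s)
assert episodes == ['24', '25', '26']

# finditer yields match objects instead, useful if you also need positions.
positions = [m.start() for m in re.finditer(r"e(\d\d)", s)]
assert positions == [8, 11, 14]
```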
| 1 | 2009-06-27T19:56:18Z | [
"python",
"regex",
"sequences"
] |
Regex and a sequence of patterns? | 1,053,481 | <p>Is there a way to match a pattern (<code>e\d\d</code>) several times, capturing each one into a group? For example, given the string..</p>
<pre><code>blah.s01e24e25
</code></pre>
<p>..I wish to get four groups:</p>
<pre><code>1 -> blah
2 -> 01
3 -> 24
4 -> 25
</code></pre>
<p>The obvious regex to use (in Python) is:</p>
<pre><code>import re
re.match("(\w+).s(\d+)e(\d+)e(\d+)", "blah.s01e24e25").groups()
</code></pre>
<p>..but I also want to match either of the following:</p>
<pre><code>blah.s01e24
blah.s01e24e25e26
</code></pre>
<p>You can't seem to do <code>(e\d\d)+</code>, or rather you can, but it only captures the last occurrence:</p>
<pre><code>>>> re.match("(\w+).s(\d+)(e\d\d){2}", "blah.s01e24e25e26").groups()
('blah', '01', 'e25')
>>> re.match("(\w+).s(\d+)(e\d\d){3}", "blah.s01e24e25e26").groups()
('blah', '01', 'e26')
</code></pre>
<p>I want to do this in a single regex because I have multiple patterns to match TV episode filenames, and do not want to duplicate each expression to handle multiple episodes:</p>
<pre><code>\w+\.s(\d+)e(\d+)             # matches blah.s01e01
\w+\.s(\d+)e(\d+)e(\d+)       # matches blah.s01e01e02
\w+\.s(\d+)e(\d+)e(\d+)e(\d+) # matches blah.s01e01e02e03
\w+ - \d+x\d+                 # matches blah - 01x01
\w+ - \d+x\d+x\d+             # matches blah - 01x01x02
\w+ - \d+x\d+x\d+x\d+         # matches blah - 01x01x02x03
</code></pre>
<p>..and so on for numerous other patterns.</p>
<p>Another thing to complicate matters: I wish to store these regexes in a config file, so a solution using multiple regexes and function calls is not desired - but if this proves impossible I'll just allow the user to add simple regexes.</p>
<p>Basically, is there a way to capture a repeating pattern using regex?</p>
| 4 | 2009-06-27T19:45:29Z | 1,053,538 | <p>Use non-capturing parentheses, <code>(?:...)</code>, and make the group optional: <code>(?:...)?</code></p>
<pre><code>c = re.compile(r"""(\w+)\.s(\d+)
    (?:
        e(\d+)
        (?:
            e(\d+)
        )?
    )?
    """, re.X)
</code></pre>
<p>or</p>
<pre><code>c = re.compile(r"""(\w+)\.s(\d+)(?:e(\d+)(?:e(\d+))?)?""", re.X)
</code></pre>
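<p>A quick check of the one-line version against the three filename shapes (in my copy the dot is escaped, and groups that do not take part in the match come back as <code>None</code>):</p>

```python
import re

c = re.compile(r"(\w+)\.s(\d+)(?:e(\d+)(?:e(\d+))?)?")

assert c.match("blah.s01").groups() == ('blah', '01', None, None)
assert c.match("blah.s01e24").groups() == ('blah', '01', '24', None)
assert c.match("blah.s01e24e25").groups() == ('blah', '01', '24', '25')
```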
| 1 | 2009-06-27T20:18:27Z | [
"python",
"regex",
"sequences"
] |
Regex and a sequence of patterns? | 1,053,481 | <p>Is there a way to match a pattern (<code>e\d\d</code>) several times, capturing each one into a group? For example, given the string..</p>
<pre><code>blah.s01e24e25
</code></pre>
<p>..I wish to get four groups:</p>
<pre><code>1 -> blah
2 -> 01
3 -> 24
4 -> 25
</code></pre>
<p>The obvious regex to use (in Python) is:</p>
<pre><code>import re
re.match("(\w+).s(\d+)e(\d+)e(\d+)", "blah.s01e24e25").groups()
</code></pre>
<p>..but I also want to match either of the following:</p>
<pre><code>blah.s01e24
blah.s01e24e25e26
</code></pre>
<p>You can't seem to do <code>(e\d\d)+</code>, or rather you can, but it only captures the last occurrence:</p>
<pre><code>>>> re.match("(\w+).s(\d+)(e\d\d){2}", "blah.s01e24e25e26").groups()
('blah', '01', 'e25')
>>> re.match("(\w+).s(\d+)(e\d\d){3}", "blah.s01e24e25e26").groups()
('blah', '01', 'e26')
</code></pre>
<p>I want to do this in a single regex because I have multiple patterns to match TV episode filenames, and do not want to duplicate each expression to handle multiple episodes:</p>
<pre><code>\w+\.s(\d+)e(\d+)             # matches blah.s01e01
\w+\.s(\d+)e(\d+)e(\d+)       # matches blah.s01e01e02
\w+\.s(\d+)e(\d+)e(\d+)e(\d+) # matches blah.s01e01e02e03
\w+ - \d+x\d+                 # matches blah - 01x01
\w+ - \d+x\d+x\d+             # matches blah - 01x01x02
\w+ - \d+x\d+x\d+x\d+         # matches blah - 01x01x02x03
</code></pre>
<p>..and so on for numerous other patterns.</p>
<p>Another thing to complicate matters: I wish to store these regexes in a config file, so a solution using multiple regexes and function calls is not desired - but if this proves impossible I'll just allow the user to add simple regexes.</p>
<p>Basically, is there a way to capture a repeating pattern using regex?</p>
| 4 | 2009-06-27T19:45:29Z | 1,053,552 | <p>After thinking about the problem, I think I have a simpler solution, using named groups.</p>
<p>The simplest regex a user (or I) could use is:</p>
<pre><code>(\w+)\.s(\d+)\.e(\d+)
</code></pre>
<p>The filename parsing class will take the first group as the show name, second as season number, third as episode number. This covers a majority of files.</p>
<p>I'll allow a few different named groups for these:</p>
<pre><code>(?P<showname>\w+)\.s(?P<seasonnumber>\d+)\.e(?P<episodenumber>\d+)
</code></pre>
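<p>As a sanity check, a small sketch of how the parsing code might consume such a pattern via <code>groupdict()</code> (the pattern below is my own variant: the dot before the episode marker is optional so it also covers names like <code>blah.s01e24</code>):</p>

```python
import re

pattern = re.compile(
    r"(?P<showname>\w+)\.s(?P<seasonnumber>\d+)\.?e(?P<episodenumber>\d+)")

def parse(filename):
    # groupdict maps each named group to the text it matched.
    m = pattern.match(filename)
    return m.groupdict() if m else None

assert parse("blah.s01e24") == {
    'showname': 'blah', 'seasonnumber': '01', 'episodenumber': '24'}
assert parse("not a tv filename") is None
```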
<p>To support multiple episodes, I'll allow two named groups, something like <code>startingepisodenumber</code> and <code>endingepisodenumber</code>, to support things like <code>showname.s01e01-03</code>:</p>
<pre><code>(?P<showname>\w+)\.s(?P<seasonnumber>\d+)\.e(?P<startingepisodenumber>\d+)-(?P<endingepisodenumber>\d+)
</code></pre>
<p>And finally, allow named groups with names matching <code>episodenumber\d+</code> (<code>episodenumber1</code>, <code>episodenumber2</code> etc):</p>
<pre><code>(?P<showname>\w+)\.
s(?P<seasonnumber>\d+)\.
e(?P<episodenumber1>\d+)
e(?P<episodenumber2>\d+)
e(?P<episodenumber3>\d+)
</code></pre>
<p>It still requires possibly duplicating the patterns for different numbers of <code>e01</code>s, but there will never be a file with two non-consecutive episodes (like <code>show.s01e01e03e04</code>), so using the <code>starting/endingepisodenumber</code> groups should solve this; for weird cases users come across, they can use the <code>episodenumber\d+</code> group names.</p>
<p>This doesn't really answer the sequence-of-patterns question, but it solves the problem that led me to ask it! (I'll still accept another answer that shows how to match <code>s01e23e24...e27</code> in one regex - if someone works this out!)</p>
| 0 | 2009-06-27T20:32:22Z | [
"python",
"regex",
"sequences"
] |
Regex and a sequence of patterns? | 1,053,481 | <p>Is there a way to match a pattern (<code>e\d\d</code>) several times, capturing each one into a group? For example, given the string..</p>
<pre><code>blah.s01e24e25
</code></pre>
<p>..I wish to get four groups:</p>
<pre><code>1 -> blah
2 -> 01
3 -> 24
4 -> 25
</code></pre>
<p>The obvious regex to use (in Python) is:</p>
<pre><code>import re
re.match("(\w+).s(\d+)e(\d+)e(\d+)", "blah.s01e24e25").groups()
</code></pre>
<p>..but I also want to match either of the following:</p>
<pre><code>blah.s01e24
blah.s01e24e25e26
</code></pre>
<p>You can't seem to do <code>(e\d\d)+</code>, or rather you can, but it only captures the last occurrence:</p>
<pre><code>>>> re.match("(\w+).s(\d+)(e\d\d){2}", "blah.s01e24e25e26").groups()
('blah', '01', 'e25')
>>> re.match("(\w+).s(\d+)(e\d\d){3}", "blah.s01e24e25e26").groups()
('blah', '01', 'e26')
</code></pre>
<p>I want to do this in a single regex because I have multiple patterns to match TV episode filenames, and do not want to duplicate each expression to handle multiple episodes:</p>
<pre><code>\w+\.s(\d+)e(\d+)             # matches blah.s01e01
\w+\.s(\d+)e(\d+)e(\d+)       # matches blah.s01e01e02
\w+\.s(\d+)e(\d+)e(\d+)e(\d+) # matches blah.s01e01e02e03
\w+ - \d+x\d+                 # matches blah - 01x01
\w+ - \d+x\d+x\d+             # matches blah - 01x01x02
\w+ - \d+x\d+x\d+x\d+         # matches blah - 01x01x02x03
</code></pre>
<p>..and so on for numerous other patterns.</p>
<p>Another thing to complicate matters: I wish to store these regexes in a config file, so a solution using multiple regexes and function calls is not desired - but if this proves impossible I'll just allow the user to add simple regexes.</p>
<p>Basically, is there a way to capture a repeating pattern using regex?</p>
| 4 | 2009-06-27T19:45:29Z | 1,053,988 | <p>Perhaps something like that?</p>
<pre><code>import re

def episode_matcher(filename):
    m1 = re.match(r"(?i)(.*?)\.s(\d+)((?:e\d+)+)", filename)
    if m1:
        m2 = re.findall(r"\d+", m1.group(3))
        return m1.group(1), m1.group(2), m2
    # implicitly returns None here

>>> episode_matcher("blah.s01e02")
('blah', '01', ['02'])
>>> episode_matcher("blah.S01e02E03")
('blah', '01', ['02', '03'])
</code></pre>
| 0 | 2009-06-28T01:11:38Z | [
"python",
"regex",
"sequences"
] |
What does `@` mean in Python? | 1,053,732 | <p>like <code>@login_required</code>?</p>
| 1 | 2009-06-27T22:03:20Z | 1,053,735 | <p>It is decorator syntax.</p>
<blockquote>
<p>A function definition may be wrapped by one or more decorator expressions. Decorator expressions are evaluated when the function is defined, in the scope that contains the function definition. The result must be a callable, which is invoked with the function object as the only argument. The returned value is bound to the function name instead of the function object. Multiple decorators are applied in nested fashion.</p>
</blockquote>
<p>So doing something like this:</p>
<pre><code>@login_required
def my_function():
    pass
</code></pre>
<p>Is just a fancy way of doing this:</p>
<pre><code>def my_function():
    pass

my_function = login_required(my_function)
</code></pre>
<p>For more, check out <a href="http://docs.python.org/reference/compound%5Fstmts.html#function-definitions" rel="nofollow">the documentation</a>.</p>
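<p>As a toy illustration (the behaviour here is invented for the example, not Django's actual implementation), a <code>login_required</code>-style decorator might look like:</p>

```python
def login_required(func):
    # Wrap the view so it refuses to run without a logged-in user.
    def wrapper(user):
        if user is None:
            return "redirect to login"
        return func(user)
    return wrapper

@login_required
def my_view(user):
    return "hello %s" % user

assert my_view(None) == "redirect to login"
assert my_view("alice") == "hello alice"
```

<p>The <code>@login_required</code> line performs exactly the rebinding of <code>my_view</code> described earlier in this answer.</p>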
| 30 | 2009-06-27T22:04:27Z | [
"python",
"syntax"
] |
What does `@` mean in Python? | 1,053,732 | <p>like <code>@login_required</code>?</p>
| 1 | 2009-06-27T22:03:20Z | 1,053,739 | <p>It's a decorator.
More here: <a href="http://www.ibm.com/developerworks/linux/library/l-cpdecor.html" rel="nofollow">http://www.ibm.com/developerworks/linux/library/l-cpdecor.html</a></p>
| 1 | 2009-06-27T22:05:11Z | [
"python",
"syntax"
] |
What does `@` mean in Python? | 1,053,732 | <p>like <code>@login_required</code>?</p>
| 1 | 2009-06-27T22:03:20Z | 1,053,742 | <p>A decorator, also called "pie syntax". It allows you to "decorate" a function with another function. You already had decoration with staticmethod() and classmethod(); the pie syntax just makes it easier to apply and extend.</p>
| 1 | 2009-06-27T22:06:35Z | [
"python",
"syntax"
] |
What does `@` mean in Python? | 1,053,732 | <p>like <code>@login_required</code>?</p>
| 1 | 2009-06-27T22:03:20Z | 1,053,744 | <p>If you ask this type of question you will probably be interested in the other <a href="http://stackoverflow.com/questions/101268/hidden-features-of-python">hidden features of Python</a>.</p>
| 2 | 2009-06-27T22:07:30Z | [
"python",
"syntax"
] |
What does `@` mean in Python? | 1,053,732 | <p>like <code>@login_required</code>?</p>
| 1 | 2009-06-27T22:03:20Z | 1,053,892 | <p>Some resources on decorators:
<a href="http://docs.python.org/glossary.html#term-decorator" rel="nofollow">decorator</a>,
<a href="http://www.python.org/dev/peps/pep-0318/" rel="nofollow">PEP 318: Decorators for Functions and Methods</a>,
<a href="http://wiki.python.org/moin/PythonDecorators" rel="nofollow">PythonDecorators</a> and
<a href="http://wiki.python.org/moin/PythonDecoratorLibrary/" rel="nofollow">PythonDecoratorLibrary</a>.</p>
<p>A <a href="http://www.ddj.com/web-development/184406073" rel="nofollow"> decorator article on DDJ</a> and
<a href="http://muharem.wordpress.com/2006/10/18/3/" rel="nofollow">another article</a> (blog post).</p>
| 0 | 2009-06-28T00:04:39Z | [
"python",
"syntax"
] |
What does `@` mean in Python? | 1,053,732 | <p>like <code>@login_required</code>?</p>
| 1 | 2009-06-27T22:03:20Z | 1,055,389 | <p>That specific decorator looks like it comes from Django.</p>
<p>It might help you get a better understanding by reading the <a href="http://docs.djangoproject.com/en/dev/topics/auth/" rel="nofollow">Django documentation</a> about that decorator.</p>
| 1 | 2009-06-28T18:01:05Z | [
"python",
"syntax"
] |
Changing python interpreter windows | 1,053,794 | <p>I have two python installations, 2.5 and 2.6
I want to change the default python interpreter from 2.5 to 2.6.
Anyone know how?</p>
| 3 | 2009-06-27T22:41:01Z | 1,053,856 | <p>PYTHONPATH is <strong>NOT</strong> what you are looking for. That is for varying where Python's "import" looks for packages and modules.</p>
<p>You need to change the <strong>PATH</strong> variable in your environment so that it contains e.g. "....;c:\python26;...." instead of "....;c:\python25;....". Click on start > control panel > system > advanced > environment variables. Select "path". Edit it. Click on OK enough times to get out of there.</p>
| 6 | 2009-06-27T23:33:41Z | [
"python",
"windows"
] |
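As a quick sanity check after editing PATH (a small sketch, not part of the answer above), you can ask the interpreter itself which executable and version it is:

```python
# Print the path and version of whichever interpreter "python" launched.
import sys

print(sys.executable)        # e.g. C:\Python26\python.exe
print(sys.version_info[:2])  # e.g. (2, 6)
```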
Changing python interpreter windows | 1,053,794 | <p>I have two python installations, 2.5 and 2.6
I want to change the default python interpreter from 2.5 to 2.6.
Anyone know how?</p>
 | 3 | 2009-06-27T22:41:01Z | 1,053,888 | <p>Just FYI: since both c:\python25 and c:\python26 are on PATH, I copy C:\Python25\python.exe to C:\Python25\py25.exe and C:\Python26\python.exe to C:\Python26\py26.exe.
Then just type py25 (or py26) to get the specific version.</p>
| 1 | 2009-06-28T00:00:05Z | [
"python",
"windows"
] |
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 1,053,951 | <p>Usually when we deal with large matrices we implement them as <a href="http://en.wikipedia.org/wiki/Sparse%5Fmatrix" rel="nofollow">Sparse Matrices</a>.</p>
<p>I don't know if numpy supports sparse matrices but I found <a href="http://www.scipy.org/SciPy%5FTutorial" rel="nofollow">this</a> instead.</p>
| 1 | 2009-06-28T00:45:43Z | [
"python",
"matrix",
"numpy"
] |
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 1,053,953 | <p>As far as I know about numpy, no, but I could be wrong. </p>
<p>I can propose an alternative solution: write the matrix to disk and access it in chunks. I suggest the HDF5 file format. If you need it to be transparent, you can reimplement the ndarray interface to page your disk-stored matrix into memory. Be careful, if you modify the data, to sync it back to disk.</p>
| 1 | 2009-06-28T00:46:30Z | [
"python",
"matrix",
"numpy"
] |
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 1,054,017 | <p>You should be able to use numpy.memmap to memory map a file on disk. With newer python and 64-bit machine, you should have the necessary address space, without loading everything into memory. The OS should handle only keep part of the file in memory.</p>
| 28 | 2009-06-28T01:46:24Z | [
"python",
"matrix",
"numpy"
] |
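A hedged sketch of the numpy.memmap approach described above (file name and shape are arbitrary; a real use case would use a much larger shape): only the pages you touch are loaded into RAM.

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "big.dat")

# Create a writable disk-backed array, touch one cell, and flush to disk.
m = np.memmap(path, dtype=np.float32, mode="w+", shape=(1000, 1000))
m[123, 456] = 1.5
m.flush()
del m  # drop the map

# Reopen read-only; the value is read back from the file on demand.
m2 = np.memmap(path, dtype=np.float32, mode="r", shape=(1000, 1000))
print(m2[123, 456])  # 1.5
```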
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 1,054,070 | <p>To handle sparse matrices, you need the <code>scipy</code> package that sits on top of <code>numpy</code> -- see <a href="http://docs.scipy.org/doc/scipy/reference/sparse.html" rel="nofollow">here</a> for more details about the sparse-matrix options that <code>scipy</code> gives you.</p>
| 21 | 2009-06-28T02:23:40Z | [
"python",
"matrix",
"numpy"
] |
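A minimal sketch of what scipy gives you here (the indices and values are made up): a million-by-million matrix with only a few nonzeros is no problem for a sparse representation.

```python
import numpy as np
from scipy import sparse

n = 1000000
rows = np.array([0, 42, 999999])
cols = np.array([1, 42, 0])
vals = np.array([3.0, 7.0, -1.0])

# Compressed sparse row matrix: stores only the nonzero entries.
m = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))
print(m.nnz)      # 3
print(m[42, 42])  # 7.0
```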
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 1,054,079 | <p>Are you asking how to handle a 2,500,000,000 element matrix without terabytes of RAM? </p>
<p>The way to handle 2.5 billion items without the tens of gigabytes of RAM they would occupy is by not keeping the matrix in memory.</p>
<p>That means much more sophisticated algorithms to fetch it from the file system in pieces.</p>
| 3 | 2009-06-28T02:32:13Z | [
"python",
"matrix",
"numpy"
] |
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 1,054,113 | <p><code>numpy.array</code>s are meant to live in memory. If you want to work with matrices larger than your RAM, you have to work around that. There are at least two approaches you can follow:</p>
<ol>
<li><strong>Try a more efficient matrix representation</strong> that exploits any special structure that your matrices have. For example, as others have already pointed out, there are efficient data structures for sparse matrices (matrices with lots of zeros), like <a href="http://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.sparse.csc%5Fmatrix.html"><code>scipy.sparse.csc_matrix</code></a>.</li>
<li><strong>Modify your algorithm to work on submatrices</strong>. You can read from disk only the matrix blocks that are currently being used in computations. Algorithms designed to run on clusters usually work blockwise, since the data is scatted across different computers, and passed by only when needed. For example, <a href="http://facultyfp.salisbury.edu/taanastasio/COSC490/Fall03/Lectures/FoxMM/example.pdf">the Fox algorithm for matrix multiplication (PDF file)</a>.</li>
</ol>
| 46 | 2009-06-28T02:53:45Z | [
"python",
"matrix",
"numpy"
] |
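The second point above can be sketched with a toy blocked multiplication (this is not the Fox algorithm, just the blockwise idea; in a real out-of-core setting the slices would come from disk, e.g. via numpy.memmap, rather than from in-memory arrays):

```python
import numpy as np

def blocked_matmul(a, b, block=64):
    # Multiply a (n x k) by b (k x m) one small block at a time, so only
    # a few block-sized submatrices need to be resident at once.
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    c = np.zeros((n, m), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                c[i:i+block, j:j+block] += a[i:i+block, p:p+block].dot(
                    b[p:p+block, j:j+block])
    return c

a = np.random.rand(200, 150)
b = np.random.rand(150, 120)
print(np.allclose(blocked_matmul(a, b), a.dot(b)))  # True
```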
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 1,054,114 | <p>Stefano Borini's <a href="http://stackoverflow.com/questions/1053928/python-numpy-very-large-matrices/1053953#1053953">post</a> got me to look into how far along this sort of thing already is. </p>
<p><a href="http://h5py.alfven.org/">This is it.</a> It appears to do basically what you want. HDF5 will let you store very large datasets, and then access and use them in the same ways NumPy does. </p>
| 9 | 2009-06-28T02:54:20Z | [
"python",
"matrix",
"numpy"
] |
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 1,062,629 | <p>PyTables and NumPy are the way to go.</p>
<p>PyTables will store the data on disk in HDF format, with optional compression. My datasets often get 10x compression, which is handy when dealing with tens or hundreds of millions of rows. It's also very fast; my 5 year old laptop can crunch through data doing SQL-like GROUP BY aggregation at 1,000,000 rows/second. Not bad for a Python-based solution!</p>
<p>Accessing the data as a NumPy recarray again is as simple as:</p>
<pre><code>data = table[row_from:row_to]
</code></pre>
<p>The HDF library takes care of reading in the relevant chunks of data and converting to NumPy.</p>
| 74 | 2009-06-30T09:11:55Z | [
"python",
"matrix",
"numpy"
] |
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
 | 60 | 2009-06-28T00:32:21Z | 1,297,278 | <p>Make sure you're using a 64-bit operating system and a 64-bit version of Python/NumPy. Note that on 32-bit architectures you can typically address about 3GB of memory (with about 1GB lost to memory-mapped I/O and such). </p>
<p>With 64-bit, for arrays larger than the available RAM you can get away with virtual memory, though things will get slower if you have to swap. Also, memory maps (see numpy.memmap) are a way to work with huge files on disk without loading them into memory, but again, you need a 64-bit address space for this to be of much use. PyTables will do most of this for you as well.</p>
| 5 | 2009-08-19T00:27:02Z | [
"python",
"matrix",
"numpy"
] |
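A quick sketch to verify the point above — whether the running Python is actually 64-bit:

```python
import struct
import sys

# Pointer size in bits: 64 on a 64-bit build, 32 on a 32-bit build.
print(struct.calcsize("P") * 8)

# Equivalent check: sys.maxsize exceeds 2**32 only on 64-bit Pythons.
print(sys.maxsize > 2**32)
```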
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
| 60 | 2009-06-28T00:32:21Z | 14,698,464 | <p>It's a bit alpha, but <a href="http://blaze.pydata.org/">http://blaze.pydata.org/</a> seems to be working on solving this. </p>
| 5 | 2013-02-05T00:58:35Z | [
"python",
"matrix",
"numpy"
] |
Very large matrices using Python and NumPy | 1,053,928 | <p><a href="http://en.wikipedia.org/wiki/NumPy">NumPy</a> is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.</p>
<p>Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?</p>
 | 60 | 2009-06-28T00:32:21Z | 39,841,187 | <p>Sometimes one simple solution is using a custom dtype for your matrix items. Based on the range of numbers you need, you can choose a smaller dtype manually, because NumPy uses the largest type by default. Here is an example:</p>
<pre><code>In [70]: a = np.arange(5)
In [71]: a[0].dtype
Out[71]: dtype('int64')
In [72]: a.nbytes
Out[72]: 40
In [73]: a = np.arange(0, 2, 0.5)
In [74]: a[0].dtype
Out[74]: dtype('float64')
In [75]: a.nbytes
Out[75]: 32
</code></pre>
<p>And with custom type:</p>
<pre><code>In [80]: a = np.arange(5, dtype=np.int8)
In [81]: a.nbytes
Out[81]: 5
In [76]: a = np.arange(0, 2, 0.5, dtype=np.float16)
In [78]: a.nbytes
Out[78]: 8
</code></pre>
| 0 | 2016-10-03T22:09:11Z | [
"python",
"matrix",
"numpy"
] |
With Python unittest, how do I create and use a "callable object that returns a test suite"? | 1,053,983 | <p>I'm learning Python and have been trying to understand more about the details of Python's <code>unittest</code> module. The documentation includes the following:</p>
<blockquote>
<p>For the ease of running tests, as we
will see later, it is a good idea to
provide in each test module a callable
object that returns a pre-built test
suite:</p>
<pre><code>def suite():
suite = unittest.TestSuite()
suite.addTest(WidgetTestCase('testDefaultSize'))
suite.addTest(WidgetTestCase('testResize'))
return suite
</code></pre>
</blockquote>
<p>As far as I can tell, the purpose of doing this is not explained. In addition, I was unable to figure out how one would use such a method. I tried several things without success (aside from learning about the error messages I got):</p>
<pre><code>import unittest
def average(values):
return sum(values) / len(values)
class MyTestCase(unittest.TestCase):
def testFoo(self):
self.assertEqual(average([10,100]),55)
def testBar(self):
self.assertEqual(average([11]),11)
def testBaz(self):
self.assertEqual(average([20,20]),20)
def suite():
suite = unittest.TestSuite()
suite.addTest(MyTestCase('testFoo'))
suite.addTest(MyTestCase('testBar'))
suite.addTest(MyTestCase('testBaz'))
return suite
if __name__ == '__main__':
# s = MyTestCase.suite()
# TypeError: unbound method suite() must be called
# with MyTestCase instance as first argument
# s = MyTestCase.suite(MyTestCase())
# ValueError: no such test method in <class '__main__.MyTestCase'>: runTest
# s = MyTestCase.suite(MyTestCase('testFoo'))
# TypeError: suite() takes no arguments (1 given)
</code></pre>
<p>The following "worked" but seems awkward and it required that I change the method signature of <code>suite()</code> to '<code>def suite(self):</code>'.</p>
<pre><code>s = MyTestCase('testFoo').suite()
unittest.TextTestRunner().run(s)
</code></pre>
| 4 | 2009-06-28T01:08:36Z | 1,053,998 | <p>The <a href="http://docs.python.org/library/unittest.html" rel="nofollow">documentation</a> is saying <code>suite()</code> should be a function in the module, rather than a method in your class. It appears to be merely a convenience function so you can later aggregate test suites for many modules:</p>
<pre><code>alltests = unittest.TestSuite([
my_module_1.suite(),
my_module_2.suite(),
])
</code></pre>
<p>Your problems with calling your function are all related to it being a method in the class, yet not written as a method. The <code>self</code> parameter is the "current object", and is required for instance methods. (E.g. <code>a.b(1, 2)</code> is conceptually the same as <code>b(a, 1, 2)</code>.) If the method operates on the class instead of instances, read about <a href="http://docs.python.org/library/functions.html#classmethod" rel="nofollow"><code>classmethod</code></a>. If it is just grouped with the class for convenience but doesn't operate on the class nor instances, then read about <a href="http://docs.python.org/library/functions.html#staticmethod" rel="nofollow"><code>staticmethod</code></a>. Neither of these will particularly help you use <code>unittest</code>, but they might help explain why you saw what you did.</p>
| 1 | 2009-06-28T01:18:48Z | [
"python"
] |
With Python unittest, how do I create and use a "callable object that returns a test suite"? | 1,053,983 | <p>I'm learning Python and have been trying to understand more about the details of Python's <code>unittest</code> module. The documentation includes the following:</p>
<blockquote>
<p>For the ease of running tests, as we
will see later, it is a good idea to
provide in each test module a callable
object that returns a pre-built test
suite:</p>
<pre><code>def suite():
suite = unittest.TestSuite()
suite.addTest(WidgetTestCase('testDefaultSize'))
suite.addTest(WidgetTestCase('testResize'))
return suite
</code></pre>
</blockquote>
<p>As far as I can tell, the purpose of doing this is not explained. In addition, I was unable to figure out how one would use such a method. I tried several things without success (aside from learning about the error messages I got):</p>
<pre><code>import unittest
def average(values):
return sum(values) / len(values)
class MyTestCase(unittest.TestCase):
def testFoo(self):
self.assertEqual(average([10,100]),55)
def testBar(self):
self.assertEqual(average([11]),11)
def testBaz(self):
self.assertEqual(average([20,20]),20)
def suite():
suite = unittest.TestSuite()
suite.addTest(MyTestCase('testFoo'))
suite.addTest(MyTestCase('testBar'))
suite.addTest(MyTestCase('testBaz'))
return suite
if __name__ == '__main__':
# s = MyTestCase.suite()
# TypeError: unbound method suite() must be called
# with MyTestCase instance as first argument
# s = MyTestCase.suite(MyTestCase())
# ValueError: no such test method in <class '__main__.MyTestCase'>: runTest
# s = MyTestCase.suite(MyTestCase('testFoo'))
# TypeError: suite() takes no arguments (1 given)
</code></pre>
<p>The following "worked" but seems awkward and it required that I change the method signature of <code>suite()</code> to '<code>def suite(self):</code>'.</p>
<pre><code>s = MyTestCase('testFoo').suite()
unittest.TextTestRunner().run(s)
</code></pre>
| 4 | 2009-06-28T01:08:36Z | 1,054,011 | <p>The very first error message you got is meaningful, and explains a lot.</p>
<pre><code>print MyTestCase.suite # <unbound method MyTestCase.suite>
</code></pre>
<p><em>Unbound</em>. It means that you cannot call it unless you bind it to an instance. It's actually the same for <code>MyTestCase.run</code>:</p>
<pre><code>print MyTestCase.run # <unbound method MyTestCase.run>
</code></pre>
<p>Maybe for now you don't understand why you can't call <code>suite</code>, but please leave it aside for now. Would you try to call <code>run</code> on the class, like above? Something like:</p>
<pre><code>MyTestCase.run() # ?
</code></pre>
<p>Probably not, right? It does not make sense to write this, and it will not work, because <code>run</code> is an instance method, and needs a <code>self</code> instance to work on. Well it appears that Python "understands" <code>suite</code> in the same way it understands <code>run</code>, as an unbound method. </p>
<p>Let's see why:</p>
<p>If you try to put the <code>suite</code> method out of the class scope, and define it as a global function, it just works:</p>
<pre><code>import unittest
def average(values):
return sum(values) / len(values)
class MyTestCase(unittest.TestCase):
def testFoo(self):
self.assertEqual(average([10,100]),55)
def testBar(self):
self.assertEqual(average([11]),11)
def testBaz(self):
self.assertEqual(average([20,20]),20)
def suite():
suite = unittest.TestSuite()
suite.addTest(MyTestCase('testFoo'))
suite.addTest(MyTestCase('testBar'))
suite.addTest(MyTestCase('testBaz'))
return suite
print suite() # <unittest.TestSuite tests=[<__main__.MyTestCase testMethod=testFoo>, <__main__.MyTestCase testMethod=testBar>, <__main__.MyTestCase testMethod=testBaz>]>
</code></pre>
<p>But you don't want it out of the class scope, because you want to call <code>MyTestCase.suite()</code></p>
<p>You probably thought that since <code>suite</code> was sort of "static", or instance-independent, it did not make sense to add a <code>self</code> argument, did you?
And you're right.</p>
<p>But if you define a method inside a Python class, Python will expect that method to have a <code>self</code> argument as a first argument. Just omitting the <code>self</code> argument does not make your method <code>static</code> automatically. When you want to define a "static" method, you have to use the <a href="http://docs.python.org/library/functions.html#staticmethod" rel="nofollow">staticmethod</a> decorator:</p>
<pre><code>@staticmethod
def suite():
suite = unittest.TestSuite()
suite.addTest(MyTestCase('testFoo'))
suite.addTest(MyTestCase('testBar'))
suite.addTest(MyTestCase('testBaz'))
return suite
</code></pre>
<p>This way Python does not consider MyTestCase as an instance method but as a function:</p>
<pre><code>print MyTestCase.suite # <function suite at 0x...>
</code></pre>
<p>And of course now you can call <code>MyTestCase.suite()</code>, and that will work as expected.</p>
<pre><code>if __name__ == '__main__':
s = MyTestCase.suite()
unittest.TextTestRunner().run(s) # Ran 3 tests in 0.000s, OK
</code></pre>
| 5 | 2009-06-28T01:37:28Z | [
"python"
] |
With Python unittest, how do I create and use a "callable object that returns a test suite"? | 1,053,983 | <p>I'm learning Python and have been trying to understand more about the details of Python's <code>unittest</code> module. The documentation includes the following:</p>
<blockquote>
<p>For the ease of running tests, as we
will see later, it is a good idea to
provide in each test module a callable
object that returns a pre-built test
suite:</p>
<pre><code>def suite():
suite = unittest.TestSuite()
suite.addTest(WidgetTestCase('testDefaultSize'))
suite.addTest(WidgetTestCase('testResize'))
return suite
</code></pre>
</blockquote>
<p>As far as I can tell, the purpose of doing this is not explained. In addition, I was unable to figure out how one would use such a method. I tried several things without success (aside from learning about the error messages I got):</p>
<pre><code>import unittest
def average(values):
return sum(values) / len(values)
class MyTestCase(unittest.TestCase):
def testFoo(self):
self.assertEqual(average([10,100]),55)
def testBar(self):
self.assertEqual(average([11]),11)
def testBaz(self):
self.assertEqual(average([20,20]),20)
def suite():
suite = unittest.TestSuite()
suite.addTest(MyTestCase('testFoo'))
suite.addTest(MyTestCase('testBar'))
suite.addTest(MyTestCase('testBaz'))
return suite
if __name__ == '__main__':
# s = MyTestCase.suite()
# TypeError: unbound method suite() must be called
# with MyTestCase instance as first argument
# s = MyTestCase.suite(MyTestCase())
# ValueError: no such test method in <class '__main__.MyTestCase'>: runTest
# s = MyTestCase.suite(MyTestCase('testFoo'))
# TypeError: suite() takes no arguments (1 given)
</code></pre>
<p>The following "worked" but seems awkward and it required that I change the method signature of <code>suite()</code> to '<code>def suite(self):</code>'.</p>
<pre><code>s = MyTestCase('testFoo').suite()
unittest.TextTestRunner().run(s)
</code></pre>
 | 4 | 2009-06-28T01:08:36Z | 1,054,014 | <p>One thing that's important to note: <a href="http://somethingaboutorange.com/mrl/projects/nose/0.11.1/" rel="nofollow">nose</a> is a tool that really simplifies running tests. It allows you to specify exactly which tests to run from the command line, as well as walking through each directory and running every test.</p>
| 1 | 2009-06-28T01:41:39Z | [
"python"
] |
With Python unittest, how do I create and use a "callable object that returns a test suite"? | 1,053,983 | <p>I'm learning Python and have been trying to understand more about the details of Python's <code>unittest</code> module. The documentation includes the following:</p>
<blockquote>
<p>For the ease of running tests, as we
will see later, it is a good idea to
provide in each test module a callable
object that returns a pre-built test
suite:</p>
<pre><code>def suite():
suite = unittest.TestSuite()
suite.addTest(WidgetTestCase('testDefaultSize'))
suite.addTest(WidgetTestCase('testResize'))
return suite
</code></pre>
</blockquote>
<p>As far as I can tell, the purpose of doing this is not explained. In addition, I was unable to figure out how one would use such a method. I tried several things without success (aside from learning about the error messages I got):</p>
<pre><code>import unittest
def average(values):
return sum(values) / len(values)
class MyTestCase(unittest.TestCase):
def testFoo(self):
self.assertEqual(average([10,100]),55)
def testBar(self):
self.assertEqual(average([11]),11)
def testBaz(self):
self.assertEqual(average([20,20]),20)
def suite():
suite = unittest.TestSuite()
suite.addTest(MyTestCase('testFoo'))
suite.addTest(MyTestCase('testBar'))
suite.addTest(MyTestCase('testBaz'))
return suite
if __name__ == '__main__':
# s = MyTestCase.suite()
# TypeError: unbound method suite() must be called
# with MyTestCase instance as first argument
# s = MyTestCase.suite(MyTestCase())
# ValueError: no such test method in <class '__main__.MyTestCase'>: runTest
# s = MyTestCase.suite(MyTestCase('testFoo'))
# TypeError: suite() takes no arguments (1 given)
</code></pre>
<p>The following "worked" but seems awkward and it required that I change the method signature of <code>suite()</code> to '<code>def suite(self):</code>'.</p>
<pre><code>s = MyTestCase('testFoo').suite()
unittest.TextTestRunner().run(s)
</code></pre>
 | 4 | 2009-06-28T01:08:36Z | 2,114,395 | <p>As already mentioned, the documentation refers to suite() as a function in the module, and unittest (strangely) doesn't seem to have any special recognition for it, so you do have to call it explicitly. However, if you use a tool called "<a href="http://testoob.sourceforge.net/index.html" rel="nofollow">testoob</a>", it automatically calls the suite() method (if you specify the defaultTest="suite" argument to its main()) and adds several other features on top of the base unittest package. It also provides options for generating XML files that include collected stdout and stderr from these tests (which is a big plus for automated tests) and can also generate HTML reports (though you would need to install additional packages). I couldn't find a way to automatically discover all the tests, which nose claims to support, so nose is probably a better option.</p>
| 1 | 2010-01-22T01:40:09Z | [
"python"
] |
With Python unittest, how do I create and use a "callable object that returns a test suite"? | 1,053,983 | <p>I'm learning Python and have been trying to understand more about the details of Python's <code>unittest</code> module. The documentation includes the following:</p>
<blockquote>
<p>For the ease of running tests, as we
will see later, it is a good idea to
provide in each test module a callable
object that returns a pre-built test
suite:</p>
<pre><code>def suite():
suite = unittest.TestSuite()
suite.addTest(WidgetTestCase('testDefaultSize'))
suite.addTest(WidgetTestCase('testResize'))
return suite
</code></pre>
</blockquote>
<p>As far as I can tell, the purpose of doing this is not explained. In addition, I was unable to figure out how one would use such a method. I tried several things without success (aside from learning about the error messages I got):</p>
<pre><code>import unittest
def average(values):
return sum(values) / len(values)
class MyTestCase(unittest.TestCase):
def testFoo(self):
self.assertEqual(average([10,100]),55)
def testBar(self):
self.assertEqual(average([11]),11)
def testBaz(self):
self.assertEqual(average([20,20]),20)
def suite():
suite = unittest.TestSuite()
suite.addTest(MyTestCase('testFoo'))
suite.addTest(MyTestCase('testBar'))
suite.addTest(MyTestCase('testBaz'))
return suite
if __name__ == '__main__':
# s = MyTestCase.suite()
# TypeError: unbound method suite() must be called
# with MyTestCase instance as first argument
# s = MyTestCase.suite(MyTestCase())
# ValueError: no such test method in <class '__main__.MyTestCase'>: runTest
# s = MyTestCase.suite(MyTestCase('testFoo'))
# TypeError: suite() takes no arguments (1 given)
</code></pre>
<p>The following "worked" but seems awkward and it required that I change the method signature of <code>suite()</code> to '<code>def suite(self):</code>'.</p>
<pre><code>s = MyTestCase('testFoo').suite()
unittest.TextTestRunner().run(s)
</code></pre>
| 4 | 2009-06-28T01:08:36Z | 5,665,094 | <p>I would suggest using the following. This saves you from manually entering all the tests you add as you go along.</p>
<pre><code>def suite():
suite = unittest.TestLoader().loadTestsFromTestCase(Your_Test_Case_Class)
return suite
</code></pre>
<p>This gives you the flexibility to execute tests in the same module by defining:</p>
<pre><code>if __name__ == "__main__":
suite()
</code></pre>
<p>And if you want to bundle your suites in another module i.e. test_suite.py (for example), then it could be done by the following way:</p>
<pre><code>import test_module_name
import unittest

if __name__=="__main__":
suite1=test_module_name.suite()
...
...
alltests = unittest.TestSuite([suite1,suite2])
</code></pre>
<p>Now, how to run your tests: I generally use the simpler approach of executing this command at the package level, letting unittest automatically discover the tests:</p>
<pre><code>python -m unittest discover
</code></pre>
<p>or </p>
<pre><code>nosetests
</code></pre>
<p><strong>Caveat:</strong> unittest can be several times faster than nosetests, so it is up to the developer's preference, especially for those who use 3rd-party nosetests plugins and want to continue using them.</p>
| 0 | 2011-04-14T14:51:59Z | [
"python"
] |
printing bit representation of numbers in python | 1,054,116 | <p>I want to print the bit representation of numbers onto console, so that I can see all operations that are being done on bits itself.</p>
<p>How can I possibly do it in python?</p>
| 37 | 2009-06-28T02:55:44Z | 1,054,126 | <p><strong>In Python 2.6+</strong>:</p>
<pre><code>print bin(123)
</code></pre>
<p>Results in:</p>
<pre><code>0b1111011
</code></pre>
<p><strong>In python 2.x</strong></p>
<pre><code>>>> binary = lambda n: n>0 and [n&1]+binary(n>>1) or []
>>> binary(123)
[1, 1, 0, 1, 1, 1, 1]
</code></pre>
<p><sup>Note, example taken from: "Mark Dufour" at <a href="http://mail.python.org/pipermail/python-list/2003-December/240914.html">http://mail.python.org/pipermail/python-list/2003-December/240914.html</a></sup></p>
| 19 | 2009-06-28T03:00:48Z | [
"python"
] |
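One limitation of `bin()` worth noting (a small follow-up sketch, not from the answer above): it has no width control and shows negative numbers with a minus sign. For a fixed-width two's-complement view, mask to the desired number of bits first:

```python
def bits(n, width=8):
    # Mask to 'width' bits, then zero-pad; negatives come out in
    # two's-complement form.
    return format(n & ((1 << width) - 1), '0%db' % width)

print(bits(123))      # 01111011
print(bits(-1))       # 11111111
print(bits(123, 16))  # 0000000001111011
```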
printing bit representation of numbers in python | 1,054,116 | <p>I want to print the bit representation of numbers onto console, so that I can see all operations that are being done on bits itself.</p>
<p>How can I possibly do it in python?</p>
| 37 | 2009-06-28T02:55:44Z | 1,054,127 | <p><a href="http://docs.python.org/3.1/library/functions.html#bin" rel="nofollow">The <code>bin</code> function</a></p>
| 2 | 2009-06-28T03:02:03Z | [
"python"
] |
printing bit representation of numbers in python | 1,054,116 | <p>I want to print the bit representation of numbers onto the console, so that I can see all the operations that are being done on the bits themselves.</p>
<p>How can I do this in Python?</p>
| 37 | 2009-06-28T02:55:44Z | 1,054,130 | <p>This kind of thing?</p>
<pre><code>>>> ord('a')
97
>>> hex(ord('a'))
'0x61'
>>> bin(ord('a'))
'0b1100001'
</code></pre>
| 41 | 2009-06-28T03:02:54Z | [
"python"
] |
printing bit representation of numbers in python | 1,054,116 | <p>I want to print the bit representation of numbers onto the console, so that I can see all the operations that are being done on the bits themselves.</p>
<p>How can I do this in Python?</p>
| 37 | 2009-06-28T02:55:44Z | 18,946,037 | <p>From Python 2.6, with the <a href="http://docs.python.org/2/library/string.html#string-formatting">string.format method</a>:</p>
<pre><code>"{0:b}".format(0x1234)
</code></pre>
<p>in particular, you might like to use padding, so that multiple prints of different numbers still line up:</p>
<pre><code>"{0:16b}".format(0x1234)
</code></pre>
<p>and to have left padding with leading 0s rather than spaces:</p>
<pre><code>"{0:016b}".format(0x1234)
</code></pre>
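A runnable sketch (Python 3 print syntax) putting the three format specs above side by side, so the padding differences are visible:

```python
n = 0x1234  # 4660

print("{0:b}".format(n))     # minimal width
print("{0:16b}".format(n))   # space-padded on the left to width 16
print("{0:016b}".format(n))  # zero-padded on the left to width 16
```

Expected results: `1001000110100`, then the same digits preceded by three spaces, then `0001001000110100`.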
| 15 | 2013-09-22T16:26:14Z | [
"python"
] |
printing bit representation of numbers in python | 1,054,116 | <p>I want to print the bit representation of numbers onto the console, so that I can see all the operations that are being done on the bits themselves.</p>
<p>How can I do this in Python?</p>
| 37 | 2009-06-28T02:55:44Z | 30,196,781 | <p>Slightly off-topic, but it might be helpful. Some time ago I made a user-friendly print function for 1D numpy arrays in binary form. I can define the symbols, leading characters and grouping as I like. Here is the most readable form of binary representation I've found.</p>
<pre><code>def binprint(arr):  # print array in binary form
    bits = arr.itemsize * 8
    fs = "{0:0" + str(bits) + "b}"
    for i in range(0, arr.size):
        mystr = fs.format(arr[i])
        newstr = []
        on = False
        for j in range(0, len(mystr)):
            ch = mystr[j]
            #if j % 4 == 0: newstr.append(" ")
            if j % 8 == 0 and j > 0: newstr.append(" ")
            if ch == "0":
                if on: newstr.append("=")
                else: newstr.append(".")
            elif ch == "1":
                newstr.append("X")
                on = True
        print "".join(newstr)
</code></pre>
<p>Test:</p>
<pre><code>import numpy

a = numpy.array([13,22,2,0], dtype = "uint8")
binprint(a)
</code></pre>
<p>Output:</p>
<pre><code>....XX=X
...X=XX=
......X=
........
</code></pre>
| 0 | 2015-05-12T16:38:45Z | [
"python"
] |
formencode invalid return type | 1,054,210 | <p>If an exception occurs in FormEncode, what will be the return type?</p>
<p>suppose</p>
<pre><code>if(request.POST):
formvalidate = ValidationRule()
try:
new = formvalidate.to_python(request.POST)
data = Users1( n_date = new['n_date'], heading = new['heading'],
desc = new['desc'], link = new['link'],
module_name = new['module_name'] )
session.add(data)
session.commit()
except formencode.Invalid, e:
errors = e
</code></pre>
<p>How can we find the field-wise errors?</p>
| 0 | 2009-06-28T04:11:32Z | 1,054,278 | <p>I assume you are using formencode (<a href="http://formencode.org" rel="nofollow">http://formencode.org</a>).</p>
<p>You can use unpack_errors to get per-field errors, e.g.:</p>
<pre><code>import formencode
from formencode import validators
class UserForm(formencode.Schema):
first_name = validators.String(not_empty=True)
last_name = validators.String(not_empty=True)
form = UserForm()
try:
form.to_python({})
except formencode.Invalid,e:
print e.unpack_errors()
</code></pre>
<p>It will print a dict of errors per field.</p>
<p>You can use formencode.htmlfill.render to render all errors in different ways; read
<a href="http://formencode.org/htmlfill.html#errors" rel="nofollow">http://formencode.org/htmlfill.html#errors</a></p>
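Since unpack_errors() returns a plain dict mapping field names to messages, rendering per-field errors afterwards is ordinary dict handling. A sketch with a hand-built dict standing in for the real exception's output, so it runs without formencode installed:

```python
# Hypothetical dict shaped like e.unpack_errors() output for the schema above
errors = {
    "first_name": "Please enter a value",
    "last_name": "Please enter a value",
}

# One message per field, e.g. for display next to each form input
messages = ["%s: %s" % (field, msg) for field, msg in sorted(errors.items())]
for line in messages:
    print(line)
```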
| 3 | 2009-06-28T04:58:50Z | [
"python",
"error-handling",
"webforms",
"formencode"
] |
How frequently should Python decorators be used? | 1,054,249 | <p>I recently started experimenting with Python decorators (and higher-order functions) because it looked like they might make my Django unit tests more concise. e.g., instead of writing:</p>
<pre><code>def visit1():
login()
do_stuff()
logout()
</code></pre>
<p>I could instead do</p>
<pre><code>@handle_login
def visit1():
do_stuff()
</code></pre>
<p>However, after some experimenting, I have found that decorators are not as simple as I had hoped. First, I was confused by the different decorator syntax I found in different examples, until I learned that decorators behave very differently when they <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=240845" rel="nofollow">take arguments</a>. Then I tried decorating a method, and eventually learned that it wasn't working because I first have to <a href="http://www.nabble.com/Decorating-methods---where-do-my-arguments-go--td23448341.html" rel="nofollow">turn my decorator into a descriptor</a> by adding a <code>__get__</code> method. During this whole process I've ended up confused more than a few times and still find that debugging this "decorated" code is more complicated than it normally is for Python. I'm now re-evaluating whether I really need decorators in my code, since my initial motivation was to save a bit of typing, not because there was anything that really required higher-order functions.</p>
<p>So my question is: should decorators be used liberally or sparingly? Is it ever more Pythonic to avoid using them?</p>
| 3 | 2009-06-28T04:41:28Z | 1,054,277 | <p>If you have the same code at the beginning and end of many functions, I think that would justify the added complexity of using a decorator. </p>
<p>Rather like using a nice (but perhaps complex) template for a website with a lot of pages, it really saves time and adds clarity in the end.</p>
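A sketch of what such a decorator might look like; the handle_login/login/logout/do_stuff names come from the question itself, the calls list is added only to make the order of operations visible, and functools.wraps preserves the wrapped function's identity:

```python
import functools

calls = []  # records the order of operations, for demonstration

def login():
    calls.append("login")

def logout():
    calls.append("logout")

def handle_login(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        login()
        try:
            return func(*args, **kwargs)
        finally:
            logout()  # runs even if the wrapped body raises
    return wrapper

@handle_login
def visit1():
    calls.append("do_stuff")

visit1()
print(calls)  # ['login', 'do_stuff', 'logout']
```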
| 0 | 2009-06-28T04:58:22Z | [
"python",
"django",
"decorator"
] |
How frequently should Python decorators be used? | 1,054,249 | <p>I recently started experimenting with Python decorators (and higher-order functions) because it looked like they might make my Django unit tests more concise. e.g., instead of writing:</p>
<pre><code>def visit1():
login()
do_stuff()
logout()
</code></pre>
<p>I could instead do</p>
<pre><code>@handle_login
def visit1():
do_stuff()
</code></pre>
<p>However, after some experimenting, I have found that decorators are not as simple as I had hoped. First, I was confused by the different decorator syntax I found in different examples, until I learned that decorators behave very differently when they <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=240845" rel="nofollow">take arguments</a>. Then I tried decorating a method, and eventually learned that it wasn't working because I first have to <a href="http://www.nabble.com/Decorating-methods---where-do-my-arguments-go--td23448341.html" rel="nofollow">turn my decorator into a descriptor</a> by adding a <code>__get__</code> method. During this whole process I've ended up confused more than a few times and still find that debugging this "decorated" code is more complicated than it normally is for Python. I'm now re-evaluating whether I really need decorators in my code, since my initial motivation was to save a bit of typing, not because there was anything that really required higher-order functions.</p>
<p>So my question is: should decorators be used liberally or sparingly? Is it ever more Pythonic to avoid using them?</p>
| 3 | 2009-06-28T04:41:28Z | 1,054,283 | <p>Decorators are fine in their place and definitely not to be avoided -- when appropriate;-). I see your question as meaning essentially "OK so when <em>are</em> they appropriate"?</p>
<p>Adding some prefix and/or postfix code around some but not all methods of some classes is a good example. Were it all methods, a <em>class</em> decorator to wrap all methods would be better than repeating <code>@thisonetoo</code> endlessly;-). If it's once in a blue moon then it's not worth refactoring out to wrappers (decorators or otherwise). In the middle, there's a large ground where decorators are quite suitable indeed.</p>
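A sketch of that class-decorator idea (the count_calls name and the counting behaviour are invented for illustration): wrap every non-dunder method of the class once, instead of repeating a per-method decorator on each one:

```python
import functools

def count_calls(cls):
    """Class decorator: wrap each plain method to count its calls."""
    def make_wrapper(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            wrapper.calls += 1
            return method(self, *args, **kwargs)
        wrapper.calls = 0
        return wrapper
    # list() so we can setattr on cls while iterating its attributes
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("__"):
            setattr(cls, name, make_wrapper(attr))
    return cls

@count_calls
class Greeter:
    def hello(self):
        return "hello"
    def bye(self):
        return "bye"

g = Greeter()
g.hello(); g.hello(); g.bye()
print(Greeter.hello.calls, Greeter.bye.calls)  # 2 1
```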
<p>It boils down to one of the golden rules of programming -- DRY, for Don't Repeat Yourself. When you see your code becoming repetitious, you should refactor the repetition out -- and decorators are an excellent tool for that, although they're far from the only one (auxiliary methods and functions, custom metaclasses, generators and other iterators, context managers... <em>many</em> of the features we added to Python over the last few years can best be thought of as DRY-helpers, easier and smoother ways to factor out this or that frequent form of repetition!).</p>
<p>If there's no repetition, there's no real call for refactoring, hence (in particular) no real need for decorators -- in such cases, YAGNI (Y'Ain't Gonna Need It) can trump DRY;-).</p>
| 12 | 2009-06-28T05:02:09Z | [
"python",
"django",
"decorator"
] |
How frequently should Python decorators be used? | 1,054,249 | <p>I recently started experimenting with Python decorators (and higher-order functions) because it looked like they might make my Django unit tests more concise. e.g., instead of writing:</p>
<pre><code>def visit1():
login()
do_stuff()
logout()
</code></pre>
<p>I could instead do</p>
<pre><code>@handle_login
def visit1():
do_stuff()
</code></pre>
<p>However, after some experimenting, I have found that decorators are not as simple as I had hoped. First, I was confused by the different decorator syntax I found in different examples, until I learned that decorators behave very differently when they <a href="http://www.artima.com/weblogs/viewpost.jsp?thread=240845" rel="nofollow">take arguments</a>. Then I tried decorating a method, and eventually learned that it wasn't working because I first have to <a href="http://www.nabble.com/Decorating-methods---where-do-my-arguments-go--td23448341.html" rel="nofollow">turn my decorator into a descriptor</a> by adding a <code>__get__</code> method. During this whole process I've ended up confused more than a few times and still find that debugging this "decorated" code is more complicated than it normally is for Python. I'm now re-evaluating whether I really need decorators in my code, since my initial motivation was to save a bit of typing, not because there was anything that really required higher-order functions.</p>
<p>So my question is: should decorators be used liberally or sparingly? Is it ever more Pythonic to avoid using them?</p>
| 3 | 2009-06-28T04:41:28Z | 1,054,372 | <p>Alex already answered your question pretty well; what I would add is that decorators make your code MUCH easier to understand (sometimes, even if you use one only once).</p>
<p>For example, initially I write my Django views without thinking about authorisation at all. When I am done writing them, I can see which ones need authorised users and just put @login_required on them.</p>
<p>So anyone coming after me can see at one glance which views are auth-protected.</p>
<p>And of course, it is much more DRY than putting this everywhere:</p>
<pre><code>if not request.user.is_authenticated():
    return HttpResponseRedirect(..)
</code></pre>
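A self-contained sketch of how a login_required-style decorator factors that repeated check out of every view; the Request class and the redirect string are stand-ins invented here so the example runs without Django:

```python
import functools

class Request:
    """Minimal stand-in for Django's request object."""
    def __init__(self, authenticated):
        self.authenticated = authenticated

def login_required(view):
    @functools.wraps(view)
    def wrapper(request, *args, **kwargs):
        if not request.authenticated:
            return "redirect:/login"  # stand-in for HttpResponseRedirect
        return view(request, *args, **kwargs)
    return wrapper

@login_required
def dashboard(request):
    return "dashboard page"

print(dashboard(Request(authenticated=True)))   # dashboard page
print(dashboard(Request(authenticated=False)))  # redirect:/login
```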
| 3 | 2009-06-28T06:12:16Z | [
"python",
"django",
"decorator"
] |