App dock icon in wxPython
824,458
<p>In wxPython on Mac OS X, is it possible to display a custom icon when an application is launched? Right now when I launch the app I wrote, a blank white icon is displayed in the dock. How would I go about changing it?</p>
1
2009-05-05T11:25:19Z
824,475
<p>I have this in my setup file to add an icon:</p> <pre><code>from setuptools import setup

APP = ['MyApp.py']
DATA_FILES = []
OPTIONS = {'argv_emulation': True,
           'iconfile': 'MyAppIcon.icns'}

setup(
    app=APP,
    data_files=DATA_FILES,
    options={'py2app': OPTIONS},
    setup_requires=['py2app'],
)
</code></pre> <p>I think you have to use an icns file. I used <a href="http://www.img2icnsapp.com/" rel="nofollow">img2icns</a> to create one.</p>
1
2009-05-05T11:30:43Z
[ "python", "osx", "wxpython" ]
Initialisation of keyword args in Python
824,924
<p>Why does the following:</p> <pre><code>class A(object):
    def __init__(self, var=[]):
        self._var = var
        print 'var = %s %s' % (var, id(var))

a1 = A()
a1._var.append('one')
a2 = A()
</code></pre> <p>result in:</p> <pre><code>var = [] 182897439952
var = ['one'] 182897439952
</code></pre> <p>I don't understand why it is not using a new instance of a list when using optional keyword arguments. Can anyone explain this?</p>
0
2009-05-05T13:47:27Z
824,943
<p>The empty list in your function definition is created once, at the time the function itself is created. It isn't created every time the function is called.</p> <p>If you want a new one each time, do this:</p> <pre><code>class A(object):
    def __init__(self, var=None):
        if var is None:
            var = []
        self._var = var
</code></pre>
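The sharing described above is easy to demonstrate with a plain function (this sketch uses Python 3 print syntax, unlike the Python 2 code in the question):

```python
def append_item(item, bucket=[]):  # the default list is created once, at def time
    bucket.append(item)
    return bucket

first = append_item('a')
second = append_item('b')   # same list object as `first`
print(first is second)      # True: both calls share one list
print(second)               # ['a', 'b']

def append_item_safe(item, bucket=None):
    if bucket is None:      # a fresh list per call
        bucket = []
    bucket.append(item)
    return bucket

print(append_item_safe('a') is append_item_safe('a'))  # False
```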
6
2009-05-05T13:51:21Z
[ "python", "initialization", "arguments", "instantiation" ]
824,951
<p>This is simply wrong. You can't (meaningfully) provide a mutable object as a default value in a function declaration.</p> <pre><code>class A(object):
    def __init__(self, var=[]):
        self._var = var
        print 'var = %s %s' % (var, id(var))
</code></pre> <p>You must do something like this:</p> <pre><code>class A(object):
    def __init__(self, var=None):
        self._var = var if var is not None else []
        print 'var = %s %s' % (self._var, id(self._var))
</code></pre>
2
2009-05-05T13:52:56Z
824,964
<blockquote> <p>The empty list in your function definition is created once, at the time the function itself is created. It isn't created every time the function is called.</p> </blockquote> <p>Exactly. To work around this, assign None in the function definition and check for None in the function body:</p> <pre><code>class A(object):
    def __init__(self, var=None):
        if var is None:
            var = []
        self._var = var
        print 'var = %s %s' % (var, id(var))
</code></pre>
1
2009-05-05T13:55:20Z
Why in the world does Python's Tkinter break using canvas.create_image?
824,988
<p>I've got a python GUI app in the workings, which I intend to use on both Windows and Mac. The documentation on Tkinter isn't the greatest, and google-fu has failed me.</p> <p>In short, I'm doing:</p> <pre><code>c = Canvas(
    master=frame,
    width=settings.WINDOW_SIZE[0],
    height=settings.WINDOW_SIZE[1],
    background=settings.CANVAS_COLOUR
)
file = PhotoImage(file=os.path.join('path', 'to', 'gif'))
c.create_bitmap(position, image=file)
c.pack()
root.mainloop()
</code></pre> <p>If I comment out the create_bitmap line, the app draws fine. If I comment it back in, I get the following error:</p> <p><code>_tkinter.TclError: unknown option "-image"</code></p> <p>Which is odd. Tkinter is fine, according to the python tests (ie, importing _tkinter, Tkinter, and doing Tk()). I've since installed PIL against my windows setup (XP SP3, Python 2.6), imagining that it was doing some of the heavy lifting at a low level. It doesn't seem to be; I still get the aforementioned error.</p> <p>The full stacktrace, excluding the code I've already pasted, is:</p> <pre><code>File "C:\Python26\lib\lib-tk\Tkinter.py", line 2153, in create_bitmap
    return self._create('bitmap', args, kw)
File "C:\Python26\lib\lib-tk\Tkinter.py", line 2147, in _create
    *(args + self._options(cnf, kw))))
</code></pre> <p>Anyone able to shed any light?</p>
1
2009-05-05T13:59:47Z
825,493
<p>Short answer: Don't use create_bitmap when you mean to use create_image.</p>
2
2009-05-05T15:41:44Z
[ "python", "python-imaging-library", "tkinter", "tk" ]
825,632
<p>Tk has two types of graphics, bitmap and image. Images come in two flavours, bitmap and photo. Bitmaps and Images of type bitmap are not the same thing, which leads to confusion in docs. PhotoImage creates an image of type photo, and needs an image object in the canvas, so the solution is, as you already concluded, to use create_image.</p>
4
2009-05-05T16:05:38Z
828,469
<p>The <code>create_bitmap()</code> method does not have an <code>image</code> argument; it has a <a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/create%5Fbitmap.html" rel="nofollow"><code>bitmap</code></a> argument instead.</p> <p>The error you get comes from the fact that in Tkinter, a Tcl interpreter is running embedded in the Python process, and all GUI interaction goes back and forth between Python and Tcl; so, the error you get comes from the fact that Tcl replies "I don't know any -image option in the .create_bitmap call".</p> <p>In any case, like Jeff said, you probably want the <code>create_image</code> method.</p>
0
2009-05-06T07:44:52Z
How do you make the Python MySQLdb module use ? instead of %s for query parameters?
825,042
<p>MySqlDb is a fantastic Python module -- but one part is incredibly annoying. Query parameters look like this</p> <pre><code>cursor.execute("select * from Books where isbn=%s", (isbn,)) </code></pre> <p>whereas everywhere else in the known universe (oracle, sqlserver, access, sybase...) they look like this</p> <pre><code>cursor.execute("select * from Books where isbn=?", (isbn,)) </code></pre> <p>This means that if you want to be portable you have to somehow switch between the two notations ? and %s, which is really annoying. (Please don't tell me to use an ORM layer -- I will strangle you).</p> <p>Supposedly you can convince mysqldb to use the standard syntax, but I haven't yet made it work. Any suggestions?</p>
3
2009-05-05T14:10:36Z
825,064
<p>I found a lot of information out there about <code>paramstyle</code> that seemed to imply it might be what you wanted, but <a href="http://helpful.knobs-dials.com/index.php/Python%5Fnotes/Databases#parameters.2C%5Fparamstyle" rel="nofollow">according to this wiki</a> you have to use the <code>paramstyle</code> your library uses, and most of them do not allow you to change it:</p> <blockquote> <p><code>paramstyle</code> is specific to the library you use, and informational - you have to use the one it uses. This is probably the most annoying part of this standard. (a few allow you to set different <code>paramstyles</code>, but this isn't standard behavior) </p> </blockquote> <p>I found some posts that talked about MySQLdb allowing this, but apparently it doesn't as someone indicated it didn't work for them.</p>
2
2009-05-05T14:15:20Z
[ "python", "sql", "mysql", "database" ]
825,073
<p>I don't recommend doing this, but the simplest solution is to monkeypatch the <code>Cursor</code> class:</p> <pre><code>from MySQLdb.cursors import Cursor
old_execute = Cursor.execute
def new_execute(self, query, args):
    return old_execute(self, query.replace("?", "%s"), args)
Cursor.execute = new_execute
</code></pre>
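The substitution the patch performs can be tried in isolation; note that this naive replace would also rewrite a literal ? inside a quoted SQL string, which is part of why the approach is fragile:

```python
def qmark_to_format(query):
    """Naively translate qmark paramstyle to format paramstyle.

    Same substitution as the monkeypatch above; it does not protect
    '?' characters that appear inside string literals.
    """
    return query.replace("?", "%s")

print(qmark_to_format("select * from Books where isbn=?"))
# select * from Books where isbn=%s

# the failure mode: a literal '?' in the SQL text gets rewritten too
print(qmark_to_format("select 'any questions?' from dual"))
# select 'any questions%s' from dual
```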
1
2009-05-05T14:16:45Z
1,151,575
<p>From what I can see you cannot use '?' for a parameter marker with MySQLdb (out of the box).</p> <p>You <strong>can</strong> however use named parameters, i.e.</p> <pre><code>cursor.execute("%(param1)s = %(param1)s", {'param1': 1})
</code></pre> <p>would effectively execute <code>1=1</code> in mysql.</p> <p><strong>But</strong>, sort of like Eli answered (though not hackish), you could instead do:</p> <p><strong>MyNewCursorModule.py</strong></p> <pre><code>from MySQLdb.cursors import Cursor

class MyNewCursor(Cursor):
    def execute(self, query, args=None):
        """This cursor is able to use '?' as a parameter marker"""
        return Cursor.execute(self, query.replace('?', '%s'), args)

    def executemany(self, query, args):
        ...implement...
</code></pre> <p>In this case you would have a custom cursor which would do what you want it to do, <strong>and</strong> it's not a hack. It's just a subclass ;)</p> <p>Use it with:</p> <pre><code>from MyNewCursorModule import MyNewCursor

conn = MySQLdb.connect(...connection information...
                       cursorclass=MyNewCursor)
</code></pre> <p><em>(You can also give the class to the connection.cursor function to create it there if you want to use the normal execute most of the time (a temp override).)</em></p> <p>...you can also change the simple replacement to something a little more correct (assuming there is a way to escape the question mark), but that is something I'll leave up to you :)</p>
2
2009-07-20T03:10:13Z
Change/adapt date widget on admin interface
825,253
<p>I'm configuring the admin site for my new app, and I found a little problem with my setup.</p> <p>I have a 'birth date' field on my database editable via the admin site, but the date widget isn't very handy for it: if I have to enter e.g. 01-04-1956 in the widget, I would have to page through a lot of years. Also, I don't want people writing the full date manually in only one edit box, as there are always problems with using dashes or slashes as a separator, or with entering the date in European, American or Asian format...</p> <p>What would MacGyver (I mean you) do?</p>
2
2009-05-05T14:55:00Z
825,343
<p>Use the <a href="http://docs.djangoproject.com/en/dev/ref/contrib/admin/#django.contrib.admin.ModelAdmin.formfield%5Foverrides" rel="nofollow">formfield_overrides</a> option to use a different widget. For example (untested):</p> <pre><code>class MyModelAdmin(admin.ModelAdmin):
    formfield_overrides = {
        models.DateField: {'widget': forms.TextInput},
    }
</code></pre> <p>Then you'll need to do date conversion/validation yourself.</p>
4
2009-05-05T15:11:46Z
[ "python", "django", "date", "widget" ]
Extract domain name from a host name
825,694
<p>Is there a programmatic way to find the domain name from a given hostname?</p> <p>given -> www.yahoo.co.jp<br> return -> yahoo.co.jp</p> <p>The approach that works but is very slow is: split on ".", remove one group from the left, join, and query an SOA record using dnspython; when a valid SOA record is returned, consider that a domain.</p> <p>Is there a cleaner/faster way to do this without using regexps?</p>
15
2009-05-05T16:17:13Z
825,738
<p>You can use <a href="http://docs.python.org/library/stdtypes.html" rel="nofollow"><code>partition</code></a> instead of <code>split</code>:</p> <pre><code>&gt;&gt;&gt; 'www.yahoo.co.jp'.partition('.')[2]
'yahoo.co.jp'
</code></pre> <p>This will help with the parsing but obviously won't check if the returned string is a valid domain.</p>
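For reference, partition always returns a 3-tuple and splits only at the first separator, which is what makes the [2] index safe even when no dot is present:

```python
host = 'www.yahoo.co.jp'

# partition splits at the FIRST '.' and returns (head, sep, tail)
print(host.partition('.'))         # ('www', '.', 'yahoo.co.jp')
print(host.partition('.')[2])      # 'yahoo.co.jp'

# unlike split, it never raises and never returns fewer than 3 items
print('localhost'.partition('.'))  # ('localhost', '', '')

# rpartition splits at the LAST '.' instead
print(host.rpartition('.'))        # ('www.yahoo.co', '.', 'jp')
```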
3
2009-05-05T16:26:17Z
[ "python", "dns", "hostname" ]
828,380
<p>There's no trivial definition of which "domain name" is the parent of any particular "host name".</p> <p>Your current method of traversing up the tree until you see an <code>SOA</code> record is actually the most correct.</p> <p>Technically, what you're doing there is finding a "zone cut", and in the vast majority of cases that will correspond to the point at which the domain was delegated from its TLD.</p> <p>Any method that relies on mere text parsing of the host name without reference to the DNS is doomed to failure.</p> <p>Alternatively, make use of the centrally maintained lists of delegation-centric domains from <a href="http://publicsuffix.org/">http://publicsuffix.org/</a>, but beware that these lists can be incomplete and/or out of date.</p> <p>See also <a href="http://stackoverflow.com/questions/399250/going-where-php-parseurl-doesnt-parsing-only-the-domain/401456#401456">this question</a> where all of this has been gone over before...</p>
15
2009-05-06T07:09:22Z
828,397
<p>Your algorithm is the right one. Since zone cuts are <strong>not</strong> reflected in the domain name (you see domain cuts - the dots - but not zone cuts), it is the only correct one.</p> <p>An <strong>approximate</strong> algorithm is to use a list of zones, like the one mentioned by Alnitak. Remember that these static lists are not authoritative, they lack many registries, they are stale, etc.</p>
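A minimal sketch of the approximate, list-based approach (the tiny suffix set below is hypothetical; a real implementation would load the full list from publicsuffix.org):

```python
# A tiny, incomplete stand-in for the public-suffix list.
PUBLIC_SUFFIXES = {'com', 'org', 'jp', 'co.jp'}

def registered_domain(hostname):
    """Return the registrable domain: the longest known public suffix
    plus one extra label to its left, or None if nothing matches."""
    labels = hostname.lower().split('.')
    # try the longest candidate suffix first by trimming from the left
    for i in range(len(labels)):
        suffix = '.'.join(labels[i:])
        if suffix in PUBLIC_SUFFIXES:
            if i == 0:
                return None  # the hostname is itself a bare suffix
            return '.'.join(labels[i - 1:])
    return None

print(registered_domain('www.yahoo.co.jp'))  # yahoo.co.jp
print(registered_domain('www.example.com'))  # example.com
print(registered_domain('co.jp'))            # None
```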
1
2009-05-06T07:16:21Z
4,985,387
<p>Whilst not in Python, you could port this code: <a href="http://pastebin.com/raw.php?i=VY3DCNhp" rel="nofollow">http://pastebin.com/raw.php?i=VY3DCNhp</a></p>
-1
2011-02-13T16:37:58Z
Catch only some runtime errors in Python
825,909
<p>I'm importing a module which raises the following error in some conditions: <code>RuntimeError: pyparted requires root access</code></p> <p>I know that I can just check for root access before the import, but I'd like to know how to catch this specific kind of error via a try/except statement for future reference. Is there any way to differentiate between this RuntimeError and others that might be raised?</p>
5
2009-05-05T17:01:44Z
825,931
<pre><code>try:
    import pyparted
except RuntimeError:
    print('RuntimeError is raised')
    raise
</code></pre> <p>More on <a href="http://docs.python.org/tutorial/errors.html#handling-exceptions" rel="nofollow">exception handling in the tutorial</a>.</p> <p>This situation should produce <a href="http://docs.python.org/library/exceptions.html?highlight=exception#exceptions.ImportError" rel="nofollow"><code>ImportError</code></a> in my opinion. And you can do it yourself:</p> <pre><code>try:
    import pyparted
except RuntimeError as e:
    raise ImportError(e)
</code></pre>
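The re-raising pattern can be exercised without pyparted installed by substituting any callable that raises RuntimeError (the stub below is purely illustrative):

```python
def load_parted_stub():
    # stand-in for `import pyparted` failing without root
    raise RuntimeError('pyparted requires root access')

# wrap the RuntimeError into an ImportError, preserving the message
try:
    load_parted_stub()
except RuntimeError as e:
    wrapped = ImportError(e)
    print(type(wrapped).__name__, wrapped)  # ImportError pyparted requires root access

# callers can now handle the failure as an import problem
try:
    try:
        load_parted_stub()
    except RuntimeError as e:
        raise ImportError(e)
except ImportError as e:
    print('caught:', e)
```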
4
2009-05-05T17:08:01Z
[ "python", "exception-handling", "try-catch", "runtime-error" ]
825,932
<p>Yes.</p> <pre><code>try:
    import module
except RuntimeError:
    pass
</code></pre> <p>Imports are interpreted like any other statement; they are not special. You could do an</p> <pre><code>if condition:
    import module
</code></pre>
1
2009-05-05T17:08:31Z
825,934
<pre><code>try: import ... except RuntimeError: # Do something </code></pre>
1
2009-05-05T17:08:49Z
825,978
<p>You can check attributes of the exception to differentiate from other possible <code>RuntimeError</code> exceptions. For example, re-raise the error if it does not match a predefined message text.</p> <pre><code> try:
     import pyparted
 except RuntimeError, e:
     if e.message == 'pyparted requires root access':
         return 'pyparted - no root access'
     raise
</code></pre> <p>(Note that the message attribute contains only the text of the error, without the <code>RuntimeError:</code> prefix that the interpreter adds when printing a traceback.)</p> <p>Of course, direct text comparison is just an example; you could search for included substrings or regular expressions.</p> <p>It is worth noting that the <code>.message</code> attribute of exceptions is <a href="http://docs.python.org/dev/whatsnew/2.6.html#other-language-changes">deprecated starting with Python 2.6</a>. You can find the text in <code>.args</code>, usually <code>args[0]</code>.</p> <blockquote> <p>... For 2.6, the <code>message</code> attribute is being deprecated in favor of the <code>args</code> attribute.</p> </blockquote>
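A version of the same check that reads the message from args[0] instead of the deprecated .message, again with an illustrative stand-in for the failing import:

```python
def import_parted_stub():
    # stand-in for the failing `import pyparted`
    raise RuntimeError('pyparted requires root access')

def try_import():
    try:
        import_parted_stub()
    except RuntimeError as e:
        # args[0] carries the message portably across Python versions
        if e.args and e.args[0] == 'pyparted requires root access':
            return 'pyparted - no root access'
        raise
    return 'ok'

print(try_import())  # pyparted - no root access
```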
6
2009-05-05T17:17:04Z
826,144
<blockquote> <p>I know that I can just check for root access before the import, but I'd like to know how to catch this specific kind of error via a try/except statement for future reference. Is there any way to differentiate between this RuntimeError and others that might be raised?</p> </blockquote> <p>If the error is caused by a specific condition, then I think the easiest way to catch the error is to test for the condition, and you can raise a more specific error yourself. After all, the 'error' exists before the exception is thrown, since in this case it's a problem with the environment.</p> <p>I agree with those above - text matching on an error is kind of a terrifying prospect.</p>
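On a POSIX system, testing the condition up front might look like this sketch (the choice of PermissionError and the message wording are this example's, not pyparted's):

```python
import os

def require_root():
    """Raise a specific, recognisable error instead of a generic RuntimeError."""
    if os.geteuid() != 0:
        raise PermissionError('root access is required (euid=%d)' % os.geteuid())

def load_parted():
    require_root()   # fail early, with a precise exception type
    # the real `import pyparted` would go here

try:
    load_parted()
    status = 'loaded'
except PermissionError as e:
    status = 'no root: %s' % e

print(status)
```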
6
2009-05-05T17:56:44Z
16,558,265
<p><a href="http://docs.python.org/2/library/exceptions.html#exceptions.RuntimeError" rel="nofollow">RuntimeError</a> is raised when an error is detected that doesn't fall into any of the other categories.</p> <pre><code>def foo():
    try:
        foo()
    except RuntimeError, e:
        print e
        print "Runtime Error occurred due to exceeded maximum recursion depth"
</code></pre> <p>That is how you can catch the RuntimeError caused by exceeding the recursion limit in Python.</p> <p>If you want to call your function beyond the recursion limit you can do the following:</p> <pre><code>import sys

def foo():
    try:
        foo()
    except RuntimeError, e:
        sys.setrecursionlimit(1200)
        foo()
</code></pre> <p>Note that changing the recursion limit is generally not recommended, though very small increases are usually safe.</p>
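For what it's worth, in Python 3 the recursion case got its own subclass, RecursionError, which still derives from RuntimeError, so handlers written against RuntimeError keep working:

```python
import sys

def recurse():
    return recurse()

try:
    recurse()
except RecursionError as e:   # Python 3.5+; older handlers can catch RuntimeError
    kind = type(e).__name__

print(kind)                                      # RecursionError
print(issubclass(RecursionError, RuntimeError))  # True
print(sys.getrecursionlimit() > 0)               # the limit that was hit
```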
0
2013-05-15T06:29:27Z
Abstract class + mixin + multiple inheritance in python
825,945
<p>So, I think the code probably explains what I'm trying to do better than I can in words, so here goes:</p> <pre><code>import abc

class foo(object):
    __metaclass__ = abc.ABCMeta
    @abc.abstractmethod
    def bar(self):
        pass

class bar_for_foo_mixin(object):
    def bar(self):
        print "This should satisfy the abstract method requirement"

class myfoo(foo, bar_for_foo_mixin):
    def __init__(self):
        print "myfoo __init__ called"
        self.bar()

obj = myfoo()
</code></pre> <p>The result:</p> <pre><code>TypeError: Can't instantiate abstract class myfoo with abstract methods bar
</code></pre> <p>I'm trying to get the mixin class to satisfy the requirements of the abstract/interface class. What am I missing?</p>
11
2009-05-05T17:10:43Z
826,118
<p>Shouldn't the inheritance be the other way round? In the MRO <code>foo</code> currently comes before <code>bar_for_foo_mixin</code>, and then rightfully complains. With <code>class myfoo(bar_for_foo_mixin, foo)</code> it should work.</p> <p>And I am not sure if your class design is the right way to do it. Since you use a mixin for implementing <code>bar</code> it might be better not to derive from foo and just register it with the 'foo' class (i.e. <code>foo.register(myfoo)</code>). But this is just my gut feeling.</p> <p>For completeness, here is the <a href="http://docs.python.org/library/abc.html">documentation for ABCs</a>.</p>
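The fix can be verified in a few lines (shown with Python 3's abc.ABC spelling rather than the __metaclass__ form used in the question):

```python
import abc

class Foo(abc.ABC):
    @abc.abstractmethod
    def bar(self):
        ...

class BarMixin:
    def bar(self):
        return 'mixin bar'

# mixin first, so its concrete bar() precedes the abstract one in the MRO
class MyFoo(BarMixin, Foo):
    pass

print([c.__name__ for c in MyFoo.__mro__])  # ['MyFoo', 'BarMixin', 'Foo', 'ABC', 'object']
print(MyFoo().bar())                        # mixin bar
```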
12
2009-05-05T17:49:37Z
[ "python", "abstract-class", "multiple-inheritance", "mixins" ]
18,655,581
<p>I think (tested in a similar case) that reversing the base classes works:</p> <pre><code>class myfoo(bar_for_foo_mixin, foo):
    def __init__(self):
        print "myfoo __init__ called"
        self.bar()
</code></pre> <p>So in the mro() it would find a concrete version of bar() before it finds the abstract one. No idea if this is actually what happens in the background though.</p> <p>Cheers, Lars</p> <p>PS: the code that worked was:</p> <pre><code>class A(object):
    __metaclass__ = abc.ABCMeta
    @abc.abstractmethod
    def do(self):
        pass

class B(object):
    def do(self):
        print "do"

class C(B, A):
    pass

c = C()
</code></pre>
0
2013-09-06T10:25:40Z
Changing case (upper/lower) on adding data through Django admin site
825,955
<p>I'm configuring the admin site of my new project, and I'm not sure how to make it so that, on hitting 'Save' when adding data through the admin site, everything is converted to upper case...</p> <p>Edit: OK, I know the .upper method, and if I did a view I would know how to do it, but I'm wondering if there is any property available for the field configuration on the admin site :P</p>
4
2009-05-05T17:11:49Z
826,088
<p>You have to <a href="http://docs.djangoproject.com/en/dev/topics/db/models/#overriding-predefined-model-methods" rel="nofollow">override save()</a>. An example from the documentation:</p> <pre><code>class Blog(models.Model):
    name = models.CharField(max_length=100)
    tagline = models.TextField()

    def save(self, force_insert=False, force_update=False):
        do_something()
        super(Blog, self).save(force_insert, force_update)  # Call the "real" save() method.
        do_something_else()
</code></pre>
1
2009-05-05T17:40:47Z
[ "python", "django", "admin", "case" ]
826,126
<p>If your goal is to only have things converted to upper case when saving in the admin section, you'll want to <a href="http://docs.djangoproject.com/en/dev/ref/contrib/admin/#adding-custom-validation-to-the-admin">create a form with custom validation</a> to make the case change:</p> <pre><code>class MyArticleAdminForm(forms.ModelForm):
    class Meta:
        model = Article

    def clean_name(self):
        return self.cleaned_data["name"].upper()
</code></pre> <p>If your goal is to always have the value in uppercase, then you should <a href="http://docs.djangoproject.com/en/dev/topics/db/models/#overriding-predefined-model-methods">override save</a> on the model:</p> <pre><code>class Blog(models.Model):
    name = models.CharField(max_length=100)

    def save(self, force_insert=False, force_update=False):
        self.name = self.name.upper()
        super(Blog, self).save(force_insert, force_update)
</code></pre>
12
2009-05-05T17:52:51Z
[ "python", "django", "admin", "case" ]
Changing case (upper/lower) on adding data through Django admin site
825,955
<p>I'm configuring the admin site of my new project, and I have a little doubt about how to make everything get converted to upper case on hitting 'Save' when adding data through the admin site...</p> <p>Edit: Ok, I know the .upper() method, and if I did a view I would know how to do it, but I'm wondering if there is any property available for the field configuration on the admin site :P</p>
4
2009-05-05T17:11:49Z
10,072,776
<p>Updated example from documentation suggests using args, kwargs to pass through as:</p> <blockquote> <p>Django will, from time to time, extend the capabilities of built-in model methods, adding new arguments. If you use *args, **kwargs in your method definitions, you are guaranteed that your code will automatically support those arguments when they are added.</p> </blockquote> <pre><code>class Blog(models.Model): name = models.CharField(max_length=100) tagline = models.TextField() def save(self, *args, **kwargs): do_something() super(Blog, self).save( *args, **kwargs) # Call the "real" save() method. do_something_else() </code></pre>
2
2012-04-09T11:52:36Z
[ "python", "django", "admin", "case" ]
Python: finding uid/gid for a given username/groupname (for os.chown)
826,082
<p>What's a good way to find the uid/gid for a given username or groupname using Python? I need to set file ownership with os.chown and need the integer ids instead of the alphabetic.</p> <p>[Quick note]: getpwnam works great but is not available on windows, so here's some code that creates stubs to allow you to run the same code on windows and unix.</p> <pre><code>try: from pwd import getpwnam except: getpwnam = lambda x: (0,0,0) os.chown = lambda x, y, z: True os.chmod = lambda x, y: True os.fchown = os.chown os.fchmod = os.chmod </code></pre>
50
2009-05-05T17:39:54Z
826,099
<p>Use the <a href="http://docs.python.org/library/pwd.html"><code>pwd</code></a> and <a href="http://docs.python.org/library/grp.html"><code>grp</code></a> modules:</p> <pre><code>from pwd import getpwnam print getpwnam('someuser')[2] # or print getpwnam('someuser').pw_uid </code></pre>
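For the group half of chown, the grp module mirrors pwd. A small self-checking sketch (Python 3 syntax; it looks up the current process's own user and group, so it does not assume any particular account exists on the system; like getpwnam, this is Unix-only):

```python
import grp
import os
import pwd

def ids_for(username, groupname):
    """Resolve a user name and a group name to numeric (uid, gid)."""
    uid = pwd.getpwnam(username).pw_uid
    gid = grp.getgrnam(groupname).gr_gid
    return uid, gid

# Round-trip the current process's own user/group so the example
# works regardless of which accounts exist on this machine.
me = pwd.getpwuid(os.getuid()).pw_name
my_group = grp.getgrgid(os.getgid()).gr_name
print(ids_for(me, my_group))
```

The returned pair is exactly what os.chown expects.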
79
2009-05-05T17:43:19Z
[ "python" ]
How do I convert a string 2 bytes long to an integer in python
826,284
<p>I have a python program I've inherited and am trying to extend.</p> <p>I have extracted a two byte long string into a string called pS.</p> <p>pS 1st byte is 0x01, the second is 0x20, decimal value == 288</p> <p>I've been trying to get its value as an integer, I've used lines of the form</p> <pre><code>x = int(pS[0:2], 16) # this was fat fingered a while back and read [0:3] </code></pre> <p>and get the message</p> <pre><code>ValueError: invalid literal for int() with base 16: '\x01 ' </code></pre> <p>Another C programmer and I have been googling and trying to get this to work all day.</p> <p>Suggestions, please.</p>
4
2009-05-05T18:28:44Z
826,305
<p>Look at the <a href="http://www.python.org/doc/2.5.2/lib/module-struct.html">struct</a> module.</p> <pre><code>struct.unpack( "h", pS[0:2] ) </code></pre> <p>For a signed 2-byte value. Use "H" for unsigned.</p>
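One caveat worth adding: "h" unpacks in the machine's native byte order, so on a little-endian x86 box the bytes 0x01 0x20 come out as 8193 rather than the expected 288. Making the byte order explicit with ">" fixes that (Python 3 bytes literal used here for illustration):

```python
import struct

pS = b"\x01\x20"  # the two example bytes from the question

(unsigned_be,) = struct.unpack(">H", pS)  # big-endian unsigned short
(signed_be,) = struct.unpack(">h", pS)    # big-endian signed short
print(unsigned_be, signed_be)  # 0x0120 == 288 either way for these bytes
```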
18
2009-05-05T18:33:17Z
[ "python" ]
How do I convert a string 2 bytes long to an integer in python
826,284
<p>I have a python program I've inherited and am trying to extend.</p> <p>I have extracted a two byte long string into a string called pS.</p> <p>pS 1st byte is 0x01, the second is 0x20, decimal value == 288</p> <p>I've been trying to get its value as an integer, I've used lines of the form</p> <pre><code>x = int(pS[0:2], 16) # this was fat fingered a while back and read [0:3] </code></pre> <p>and get the message</p> <pre><code>ValueError: invalid literal for int() with base 16: '\x01 ' </code></pre> <p>Another C programmer and I have been googling and trying to get this to work all day.</p> <p>Suggestions, please.</p>
4
2009-05-05T18:28:44Z
826,308
<p>You can convert the characters to their character code with <code>ord</code> and then add them together in the appropriate way:</p> <pre><code>x = 256*ord(pS[0]) + ord(pS[1]) </code></pre>
4
2009-05-05T18:34:41Z
[ "python" ]
List Comprehensions and Conditions?
826,407
<p>I am trying to see if I can make this code better using list comprehensions.<br> Let's say that I have the following lists:</p> <pre><code>a_list = [ 'HELLO', 'FOO', 'FO1BAR', 'ROOBAR', 'SHOEBAR' ]

regex_list = [lambda x: re.search(r'FOO', x, re.IGNORECASE),
              lambda x: re.search(r'RO', x, re.IGNORECASE)]
</code></pre> <p>I basically want to add all the elements that do not have any matches in the <code>regex_list</code> into another list.</p> <p>E.g. ==&gt;</p> <pre><code>newlist = []
for each in a_list:
    for regex in regex_list:
        if(regex(each) == None):
            newlist.append(each)
</code></pre> <p>How can I do this using list comprehensions? Is it even possible?</p>
7
2009-05-05T18:57:53Z
826,430
<p>Sure, I think this should do it</p> <pre><code>newlist = [s for s in a_list if not any(r(s) for r in regex_list)] </code></pre> <p><em>EDIT</em>: on closer inspection, I notice that your example code actually adds to the new list each string in <code>a_list</code> that doesn't match <em>all</em> the regexes - and what's more, it adds each string once for each regex that it doesn't match. My list comprehension does what I think you meant, which is add only one copy of each string that doesn't match <em>any</em> of the regexes.</p>
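Run against the sample data from the question, the comprehension keeps exactly the strings that match none of the regexes:

```python
import re

a_list = ['HELLO', 'FOO', 'FO1BAR', 'ROOBAR', 'SHOEBAR']
regex_list = [lambda x: re.search(r'FOO', x, re.IGNORECASE),
              lambda x: re.search(r'RO', x, re.IGNORECASE)]

# Keep only the strings for which no regex in the list matches
newlist = [s for s in a_list if not any(r(s) for r in regex_list)]
print(newlist)  # ['HELLO', 'FO1BAR', 'SHOEBAR']
```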
17
2009-05-05T19:02:28Z
[ "python", "list", "list-comprehension" ]
List Comprehensions and Conditions?
826,407
<p>I am trying to see if I can make this code better using list comprehensions.<br> Let's say that I have the following lists:</p> <pre><code>a_list = [ 'HELLO', 'FOO', 'FO1BAR', 'ROOBAR', 'SHOEBAR' ]

regex_list = [lambda x: re.search(r'FOO', x, re.IGNORECASE),
              lambda x: re.search(r'RO', x, re.IGNORECASE)]
</code></pre> <p>I basically want to add all the elements that do not have any matches in the <code>regex_list</code> into another list.</p> <p>E.g. ==&gt;</p> <pre><code>newlist = []
for each in a_list:
    for regex in regex_list:
        if(regex(each) == None):
            newlist.append(each)
</code></pre> <p>How can I do this using list comprehensions? Is it even possible?</p>
7
2009-05-05T18:57:53Z
826,471
<p>I'd work your code down to this:</p> <pre><code>a_list = [ 'HELLO', 'FOO', 'FO1BAR', 'ROOBAR', 'SHOEBAR' ] regex_func = lambda x: not re.search(r'(FOO|RO)', x, re.IGNORECASE) </code></pre> <p>Then you have two options:</p> <ol> <li><p>Filter</p> <p><code>newlist = filter(regex_func, a_list)</code></p></li> <li><p>List comprehensions</p> <p><code>newlist = [x for x in a_list if regex_func(x)]</code></p></li> </ol>
0
2009-05-05T19:14:04Z
[ "python", "list", "list-comprehension" ]
Python: Mixing files and loops
826,493
<p>I'm writing a script that logs errors from another program and restarts the program where it left off when it encounters an error. For whatever reasons, the developers of this program didn't feel it necessary to put this functionality into their program by default.</p> <p>Anyways, the program takes an input file, parses it, and creates an output file. The input file is in a specific format:</p> <p>UI - 26474845<br> TI - the title (can be any number of lines)<br> AB - the abstract (can also be any number of lines)</p> <p>When the program throws an error, it gives you the reference information you need to track the error - namely, the UI, which section (title or abstract), and the line number relative to the beginning of the title or abstract. I want to log the offending sentences from the input file with a function that takes the reference number and the file, finds the sentence, and logs it. The best way I could think of doing it involves moving forward through the file a specific number of times (namely, n times, where n is the line number relative to the beginning of the section). The way that seemed to make sense to do this is:</p> <pre><code>i = 1
while i &lt;= lineNumber:
    print original.readline()
    i += 1
</code></pre> <p>I don't see how this would make me lose data, but Python thinks it would, and says ValueError: Mixing iteration and read methods would lose data. Does anyone know how to do this properly?</p>
17
2009-05-05T19:20:06Z
826,518
<p>Use <code>for</code> and <a href="http://docs.python.org/library/functions.html#enumerate">enumerate</a>.</p> <p>Example:</p> <pre><code>for line_num, line in enumerate(file): if line_num &lt; cut_off: print line </code></pre> <p><strong>NOTE</strong>: This assumes you are already cleaning up your file handles, etc.</p> <p>Also, the <a href="http://docs.python.org/library/itertools.html#itertools.takewhile">takewhile</a> function could prove useful if you prefer a more functional flavor.</p>
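For completeness, the takewhile variant mentioned above might look like this sketch (a StringIO stands in for the real file object; takewhile stops pulling lines as soon as the condition fails, so the file is only read as far as needed):

```python
from io import StringIO
from itertools import takewhile

# A StringIO stands in for the real input file
original = StringIO("line 1\nline 2\nline 3\nline 4\n")
lineNumber = 2

# takewhile stops consuming the iterator at the first failing predicate
wanted = [line for _, line in
          takewhile(lambda pair: pair[0] < lineNumber, enumerate(original))]
print(wanted)  # ['line 1\n', 'line 2\n']
```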
10
2009-05-05T19:23:58Z
[ "python", "file", "loops", "while-loop" ]
Python: Mixing files and loops
826,493
<p>I'm writing a script that logs errors from another program and restarts the program where it left off when it encounters an error. For whatever reasons, the developers of this program didn't feel it necessary to put this functionality into their program by default.</p> <p>Anyways, the program takes an input file, parses it, and creates an output file. The input file is in a specific format:</p> <p>UI - 26474845<br> TI - the title (can be any number of lines)<br> AB - the abstract (can also be any number of lines)</p> <p>When the program throws an error, it gives you the reference information you need to track the error - namely, the UI, which section (title or abstract), and the line number relative to the beginning of the title or abstract. I want to log the offending sentences from the input file with a function that takes the reference number and the file, finds the sentence, and logs it. The best way I could think of doing it involves moving forward through the file a specific number of times (namely, n times, where n is the line number relative to the beginning of the section). The way that seemed to make sense to do this is:</p> <pre><code>i = 1
while i &lt;= lineNumber:
    print original.readline()
    i += 1
</code></pre> <p>I don't see how this would make me lose data, but Python thinks it would, and says ValueError: Mixing iteration and read methods would lose data. Does anyone know how to do this properly?</p>
17
2009-05-05T19:20:06Z
826,615
<p>You get the ValueError because your code probably has <code>for line in original:</code> in addition to <code>original.readline()</code>. An easy solution which fixes the problem without making your program slower or consume more memory is changing</p> <pre><code>for line in original: ... </code></pre> <p>to</p> <pre><code>while True: line = original.readline() if not line: break ... </code></pre>
38
2009-05-05T19:47:25Z
[ "python", "file", "loops", "while-loop" ]
Python: Mixing files and loops
826,493
<p>I'm writing a script that logs errors from another program and restarts the program where it left off when it encounters an error. For whatever reasons, the developers of this program didn't feel it necessary to put this functionality into their program by default.</p> <p>Anyways, the program takes an input file, parses it, and creates an output file. The input file is in a specific format:</p> <p>UI - 26474845<br> TI - the title (can be any number of lines)<br> AB - the abstract (can also be any number of lines)</p> <p>When the program throws an error, it gives you the reference information you need to track the error - namely, the UI, which section (title or abstract), and the line number relative to the beginning of the title or abstract. I want to log the offending sentences from the input file with a function that takes the reference number and the file, finds the sentence, and logs it. The best way I could think of doing it involves moving forward through the file a specific number of times (namely, n times, where n is the line number relative to the beginning of the section). The way that seemed to make sense to do this is:</p> <pre><code>i = 1
while i &lt;= lineNumber:
    print original.readline()
    i += 1
</code></pre> <p>I don't see how this would make me lose data, but Python thinks it would, and says ValueError: Mixing iteration and read methods would lose data. Does anyone know how to do this properly?</p>
17
2009-05-05T19:20:06Z
828,525
<p>Assuming you need only one line, this could be of help:</p> <pre><code>import itertools

def getline(fobj, line_no):
    "Return a (1-based) line from a file object"
    return itertools.islice(fobj, line_no-1, line_no).next()  # 1-based!

&gt;&gt;&gt; print getline(open("/etc/passwd", "r"), 4)
'adm:x:3:4:adm:/var/adm:/bin/false\n'
</code></pre> <p>You might want to catch StopIteration errors (if the file has fewer lines).</p>
0
2009-05-06T08:02:56Z
[ "python", "file", "loops", "while-loop" ]
Controlling VirtualBox via COM from Python?
826,494
<p>I'm trying to control the latest Sun VirtualBox via its COM interface from Python. But, unfortunately, the following code doesn't work:</p> <pre><code>import win32com.client

VBOX_GUID = "{B1A7A4F2-47B9-4A1E-82B2-07CCD5323C3F}"

try :
    oVbox = win32com.client.Dispatch( VBOX_GUID )
    oVbox.FindMachine( "kubuntu" )
except Exception as oEx:
    print str( oEx )
</code></pre> <p>The error is the general "(-2147467262, 'No such interface supported', None, None)". It seems that the wrong part is my COM handling via Python. Can anyone take a look and suggest some obvious thing I'm doing wrong?</p>
0
2009-05-05T19:20:07Z
829,495
<p>The problem is that the object returned by <code>FindMachine("kubuntu")</code> does not support the <code>IDispatch interface</code>, and win32com does not support that.</p> <p>You could use my <code>comtypes</code> package <a href="http://starship.python.net/crew/theller/comtypes/" rel="nofollow">http://starship.python.net/crew/theller/comtypes/</a> for that, but you need to patch the version in the repository to make it work with the VirtualBox type libraries.</p> <p>Here's a demo session:</p> <pre><code>Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; from comtypes.client import CreateObject &gt;&gt;&gt; box = CreateObject("VirtualBox.VirtualBox") &gt;&gt;&gt; m = box.FindMachine("Fedora") &gt;&gt;&gt; print m.State 4 &gt;&gt;&gt; print m.CpuCount 1 &gt;&gt;&gt; print m.Name Fedora &gt;&gt;&gt; </code></pre> <p>And here is the patch that you need:</p> <pre><code>Index: automation.py =================================================================== --- automation.py (revision 507) +++ automation.py (working copy) @@ -753,6 +753,8 @@ c_float: VT_R4, c_double: VT_R8, + c_ulonglong: VT_I8, + VARIANT_BOOL: VT_BOOL, BSTR: VT_BSTR, </code></pre>
3
2009-05-06T13:15:24Z
[ "python", "com", "virtualbox" ]
how to get the n-th record of a datastore query
826,724
<p>Suppose that I have the model Foo in GAE and this query:</p> <p>query = Foo.all().order('-<strong>key</strong>')</p> <p>I want to get the n-th record. What is the most efficient way to achieve that? </p> <p>Will the solution break if the ordering property is not unique, such as the one below:</p> <p>query = Foo.all().order('-<strong>color</strong>')</p> <p>edit: n > 1000</p> <p>edit 2: I want to develop a friendly paging mechanism that shows pages available (such as Page 1, Page 2, ... Page 185) and requires a "?page=x" in the query string, instead of a "?bookmark=XXX". When page = x, the query is to fetch the records beginning from the first record of that page.</p>
1
2009-05-05T20:14:11Z
827,114
<p>Documentation for the Query class can be found at: <a href="http://code.google.com/appengine/docs/python/datastore/queryclass.html#Query" rel="nofollow">http://code.google.com/appengine/docs/python/datastore/queryclass.html#Query</a></p> <p>The query class provides fetch, which takes a limit and an offset (in your case 1 and n).</p> <p>The running time of the fetch grows linearly with the offset + the limit,</p> <p>so the only way to optimize in your case would be to make sure that the records you want to access most often are closer to the beginning of the array.</p> <p>You could use <code>query.filter('key =', n)</code> followed by <code>query.get()</code>,</p> <p>which would return the first match with a key of n.</p>
2
2009-05-05T22:01:27Z
[ "python", "google-app-engine", "gae-datastore", "custompaging" ]
how to get the n-th record of a datastore query
826,724
<p>Suppose that I have the model Foo in GAE and this query:</p> <p>query = Foo.all().order('-<strong>key</strong>')</p> <p>I want to get the n-th record. What is the most efficient way to achieve that? </p> <p>Will the solution break if the ordering property is not unique, such as the one below:</p> <p>query = Foo.all().order('-<strong>color</strong>')</p> <p>edit: n > 1000</p> <p>edit 2: I want to develop a friendly paging mechanism that shows pages available (such as Page 1, Page 2, ... Page 185) and requires a "?page=x" in the query string, instead of a "?bookmark=XXX". When page = x, the query is to fetch the records beginning from the first record of that page.</p>
1
2009-05-05T20:14:11Z
827,149
<p>There is no efficient way to do this - in any DBMS. In every case, you have to at least read sequentially through the index records until you find the nth one, then look up the corresponding data record. This is more or less what fetch(count, offset) does in GAE, with the additional limitation of 1000 records.</p> <p>A better approach to this is to keep a 'bookmark', consisting of the value of the field you're ordering on for the last entity you retrieved, and the entity's key. Then, when you want to continue from where you left off, you can add the field's value as the lower bound of an inequality query, and skip records until you match or exceed the last one you saw.</p> <p>If you want to provide 'friendly' page offsets to users, what you can do is to use memcache to store an association between a start offset and a bookmark (order_property, key) tuple. When you generate a page, insert or update the bookmark for the entity following the last one. When you fetch a page, use the bookmark if it exists, or generate it the hard way, by doing queries with offsets - potentially multiple queries if the offset is high enough.</p>
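The bookmark logic itself is independent of the datastore API; here is a plain-Python sketch of the idea (in real GAE code the tuple comparison would become inequality filters on the ordering property, with the entity key as tie-breaker):

```python
def fetch_page(records, page_size, bookmark=None):
    """Return (page, new_bookmark) from a sorted list of (order_value, key).

    bookmark is the (order_value, key) pair of the last entity on the
    previous page, or None for the first page.
    """
    if bookmark is not None:
        # Skip everything up to and including the bookmark; ties on the
        # ordering value are broken by the unique key.
        records = [r for r in records if r > bookmark]
    page = records[:page_size]
    new_bookmark = page[-1] if page else None
    return page, new_bookmark

data = sorted([("blue", 3), ("blue", 7), ("green", 1), ("red", 2)])
page1, bm = fetch_page(data, 2)
page2, _ = fetch_page(data, 2, bm)
print(page1, page2)
```

Because the bookmark includes the key, a non-unique ordering property (like color) does not break paging.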
3
2009-05-05T22:14:26Z
[ "python", "google-app-engine", "gae-datastore", "custompaging" ]
Combining 2 .csv files by common column
826,812
<p>So I have two .csv files where the first line in file 1 is:</p> <pre><code>MPID,Title,Description,Model,Category ID,Category Description,Subcategory ID,Subcategory Description,Manufacturer ID,Manufacturer Description,URL,Manufacturer (Brand) URL,Image URL,AR Price,Price,Ship Price,Stock,Condition
</code></pre> <p>The first line from file 2:</p> <pre><code>Regular Price,Sale Price,Manufacturer Name,Model Number,Retailer Category,Buy URL,Product Name,Availability,Shipping Cost,Condition,MPID,Image URL,UPC,Description
</code></pre> <p>and then the rest of every file is filled with info.</p> <p>As you can see, both files have a common field called MPID (file 1: col 1, file 2: col 9, where the first col is col 1).</p> <p>I would like to create a new file which will combine these two files by looking at this column (as in: if there is an MPID that is in both files, then in the new file this MPID will appear with both its row from file 1 and its row from file 2). If one MPID appears only in one file then it should also go into this combined file.</p> <p>The files are not sorted in any way.</p> <p>How do I do this on a Debian machine with either a shell script or python?</p> <p>Thanks.</p> <p>EDIT: Both files don't have commas other than the ones separating the fields.</p>
8
2009-05-05T20:36:44Z
826,829
<p>It seems that you're trying to do in a shell script something that is commonly done with an SQL server. Is it possible to use SQL for this task? For example, you could import both files into MySQL, then create a join, then export the result to CSV.</p>
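If a full database server feels heavy for this, the same import/join/export idea can be sketched with Python's built-in sqlite3 module. The table and column names below are invented stand-ins for the real CSV columns, and in real code the rows would come from csv.reader over each file:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE f1 (mpid TEXT, title TEXT)")
con.execute("CREATE TABLE f2 (mpid TEXT, price TEXT)")

# In real code these rows would come from csv.reader over the two files
con.executemany("INSERT INTO f1 VALUES (?, ?)",
                [("1", "Widget"), ("2", "Gadget")])
con.executemany("INSERT INTO f2 VALUES (?, ?)",
                [("1", "9.99"), ("3", "5.00")])

# LEFT JOIN keeps MPIDs that appear only in file 1; older sqlite has no
# FULL OUTER JOIN, so a UNION picks up the file-2-only rows as well.
rows = con.execute("""
    SELECT f1.mpid, f1.title, f2.price FROM f1
    LEFT JOIN f2 ON f1.mpid = f2.mpid
    UNION
    SELECT f2.mpid, NULL, f2.price FROM f2
    LEFT JOIN f1 ON f1.mpid = f2.mpid
    WHERE f1.mpid IS NULL
""").fetchall()
print(sorted(rows))
```

MPIDs present in only one file survive the join with NULLs in the missing columns, matching the question's requirement.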
0
2009-05-05T20:41:46Z
[ "python", "shell", "join", "csv", "debian" ]
Combining 2 .csv files by common column
826,812
<p>So I have two .csv files where the first line in file 1 is:</p> <pre><code>MPID,Title,Description,Model,Category ID,Category Description,Subcategory ID,Subcategory Description,Manufacturer ID,Manufacturer Description,URL,Manufacturer (Brand) URL,Image URL,AR Price,Price,Ship Price,Stock,Condition
</code></pre> <p>The first line from file 2:</p> <pre><code>Regular Price,Sale Price,Manufacturer Name,Model Number,Retailer Category,Buy URL,Product Name,Availability,Shipping Cost,Condition,MPID,Image URL,UPC,Description
</code></pre> <p>and then the rest of every file is filled with info.</p> <p>As you can see, both files have a common field called MPID (file 1: col 1, file 2: col 9, where the first col is col 1).</p> <p>I would like to create a new file which will combine these two files by looking at this column (as in: if there is an MPID that is in both files, then in the new file this MPID will appear with both its row from file 1 and its row from file 2). If one MPID appears only in one file then it should also go into this combined file.</p> <p>The files are not sorted in any way.</p> <p>How do I do this on a Debian machine with either a shell script or python?</p> <p>Thanks.</p> <p>EDIT: Both files don't have commas other than the ones separating the fields.</p>
8
2009-05-05T20:36:44Z
826,831
<p>You'll need to look at the <code>join</code> command in the shell. You will also need to sort the data, and probably lose the first lines. The whole process will fall flat if any of the data contains commas. Or you will need to process the data with a CSV-sensitive process that introduces a different field separator (perhaps control-A) that you can use to split fields unambiguously.</p> <p>The alternative, using Python, reads the two files into a pair of dictionaries (keyed on the common column(s)) and then use a loop to cover all the elements in the smaller of the two dictionaries, looking for matching values in the other. (This is basic nested loop query processing.)</p>
1
2009-05-05T20:42:15Z
[ "python", "shell", "join", "csv", "debian" ]
Combining 2 .csv files by common column
826,812
<p>So I have two .csv files where the first line in file 1 is:</p> <pre><code>MPID,Title,Description,Model,Category ID,Category Description,Subcategory ID,Subcategory Description,Manufacturer ID,Manufacturer Description,URL,Manufacturer (Brand) URL,Image URL,AR Price,Price,Ship Price,Stock,Condition
</code></pre> <p>The first line from file 2:</p> <pre><code>Regular Price,Sale Price,Manufacturer Name,Model Number,Retailer Category,Buy URL,Product Name,Availability,Shipping Cost,Condition,MPID,Image URL,UPC,Description
</code></pre> <p>and then the rest of every file is filled with info.</p> <p>As you can see, both files have a common field called MPID (file 1: col 1, file 2: col 9, where the first col is col 1).</p> <p>I would like to create a new file which will combine these two files by looking at this column (as in: if there is an MPID that is in both files, then in the new file this MPID will appear with both its row from file 1 and its row from file 2). If one MPID appears only in one file then it should also go into this combined file.</p> <p>The files are not sorted in any way.</p> <p>How do I do this on a Debian machine with either a shell script or python?</p> <p>Thanks.</p> <p>EDIT: Both files don't have commas other than the ones separating the fields.</p>
8
2009-05-05T20:36:44Z
826,836
<pre><code>sort -t , -k index1 file1 &gt; sorted1 sort -t , -k index2 file2 &gt; sorted2 join -t , -1 index1 -2 index2 -a 1 -a 2 sorted1 sorted2 </code></pre>
13
2009-05-05T20:43:30Z
[ "python", "shell", "join", "csv", "debian" ]
Combining 2 .csv files by common column
826,812
<p>So I have two .csv files where the first line in file 1 is:</p> <pre><code>MPID,Title,Description,Model,Category ID,Category Description,Subcategory ID,Subcategory Description,Manufacturer ID,Manufacturer Description,URL,Manufacturer (Brand) URL,Image URL,AR Price,Price,Ship Price,Stock,Condition
</code></pre> <p>The first line from file 2:</p> <pre><code>Regular Price,Sale Price,Manufacturer Name,Model Number,Retailer Category,Buy URL,Product Name,Availability,Shipping Cost,Condition,MPID,Image URL,UPC,Description
</code></pre> <p>and then the rest of every file is filled with info.</p> <p>As you can see, both files have a common field called MPID (file 1: col 1, file 2: col 9, where the first col is col 1).</p> <p>I would like to create a new file which will combine these two files by looking at this column (as in: if there is an MPID that is in both files, then in the new file this MPID will appear with both its row from file 1 and its row from file 2). If one MPID appears only in one file then it should also go into this combined file.</p> <p>The files are not sorted in any way.</p> <p>How do I do this on a Debian machine with either a shell script or python?</p> <p>Thanks.</p> <p>EDIT: Both files don't have commas other than the ones separating the fields.</p>
8
2009-05-05T20:36:44Z
826,848
<p>This is the classical "relational join" problem.</p> <p>You have several algorithms.</p> <ul> <li><p>Nested Loops. You read from one file to pick a "master" record. You read the entire other file locating all "detail" records that match the master. This is a bad idea.</p></li> <li><p>Sort-Merge. You sort each file into a temporary copy based on the common key. You then merge both files by reading from the master and then reading all matching rows from the detail and writing the merged records.</p></li> <li><p>Lookup. You read one of the files entirely into a dictionary in memory, indexed by the key field. This can be tricky for the detail file, where you'll have multiple children per key. Then you read the other file and lookup the matching records in the dictionary.</p></li> </ul> <p>Of these, sort-merge is often the fastest. This is done entirely using the unix <a href="http://www.manpagez.com/man/1/sort/">sort</a> command.</p> <p>Lookup Implementation</p> <pre><code>import csv import collections index = collections.defaultdict(list) file1= open( "someFile", "rb" ) rdr= csv.DictReader( file1 ) for row in rdr: index[row['MPID']].append( row ) file1.close() file2= open( "anotherFile", "rb" ) rdr= csv.DictReader( file2 ) for row in rdr: print row, index[row['MPID']] file2.close() </code></pre>
8
2009-05-05T20:46:06Z
[ "python", "shell", "join", "csv", "debian" ]
Combining 2 .csv files by common column
826,812
<p>So I have two .csv files where the first line in file 1 is:</p> <pre><code>MPID,Title,Description,Model,Category ID,Category Description,Subcategory ID,Subcategory Description,Manufacturer ID,Manufacturer Description,URL,Manufacturer (Brand) URL,Image URL,AR Price,Price,Ship Price,Stock,Condition
</code></pre> <p>The first line from file 2:</p> <pre><code>Regular Price,Sale Price,Manufacturer Name,Model Number,Retailer Category,Buy URL,Product Name,Availability,Shipping Cost,Condition,MPID,Image URL,UPC,Description
</code></pre> <p>and then the rest of every file is filled with info.</p> <p>As you can see, both files have a common field called MPID (file 1: col 1, file 2: col 9, where the first col is col 1).</p> <p>I would like to create a new file which will combine these two files by looking at this column (as in: if there is an MPID that is in both files, then in the new file this MPID will appear with both its row from file 1 and its row from file 2). If one MPID appears only in one file then it should also go into this combined file.</p> <p>The files are not sorted in any way.</p> <p>How do I do this on a Debian machine with either a shell script or python?</p> <p>Thanks.</p> <p>EDIT: Both files don't have commas other than the ones separating the fields.</p>
8
2009-05-05T20:36:44Z
826,891
<p>You could take a look at my FOSS project <a href="http://code.google.com/p/csvfix/" rel="nofollow">CSVfix</a>, which is a stream editor for manipulating CSV files. It supports joins, among its other features, and requires no scripting to use.</p>
0
2009-05-05T21:00:33Z
[ "python", "shell", "join", "csv", "debian" ]
Combining 2 .csv files by common column
826,812
<p>So I have two .csv files where the first line in file 1 is:</p> <pre><code>MPID,Title,Description,Model,Category ID,Category Description,Subcategory ID,Subcategory Description,Manufacturer ID,Manufacturer Description,URL,Manufacturer (Brand) URL,Image URL,AR Price,Price,Ship Price,Stock,Condition
</code></pre> <p>The first line from file 2:</p> <pre><code>Regular Price,Sale Price,Manufacturer Name,Model Number,Retailer Category,Buy URL,Product Name,Availability,Shipping Cost,Condition,MPID,Image URL,UPC,Description
</code></pre> <p>and then the rest of every file is filled with info.</p> <p>As you can see, both files have a common field called MPID (file 1: col 1, file 2: col 9, where the first col is col 1).</p> <p>I would like to create a new file which will combine these two files by looking at this column (as in: if there is an MPID that is in both files, then in the new file this MPID will appear with both its row from file 1 and its row from file 2). If one MPID appears only in one file then it should also go into this combined file.</p> <p>The files are not sorted in any way.</p> <p>How do I do this on a Debian machine with either a shell script or python?</p> <p>Thanks.</p> <p>EDIT: Both files don't have commas other than the ones separating the fields.</p>
8
2009-05-05T20:36:44Z
29,784,804
<p>For merging multiple files (even &gt; 2) based on one or more common columns, one of the most efficient approaches in python would be to use "brewery". You could even specify which fields need to be considered for merging and which fields need to be saved.</p> <pre><code>import brewery
from brewery import ds

sources = [
    {"file": "grants_2008.csv", "fields": ["receiver", "amount", "date"]},
    {"file": "grants_2009.csv", "fields": ["id", "receiver", "amount", "contract_number", "date"]},
    {"file": "grants_2010.csv", "fields": ["receiver", "subject", "requested_amount", "amount", "date"]}
]
</code></pre> <p>Create a list of all fields and add the filename to store information about the origin of the data records. Go through the source definitions and collect the fields:</p> <pre><code>all_fields = []
for source in sources:
    for field in source["fields"]:
        if field not in all_fields:
            all_fields.append(field)

# Add a field that records which file each row came from
all_fields.append("file")

out = ds.CSVDataTarget("merged.csv")
out.fields = brewery.FieldList(all_fields)
out.initialize()

for source in sources:
    path = source["file"]

    # Initialize data source: skip reading of headers
    # use XLSDataSource for XLS files
    # We ignore the fields in the header, because we have set-up fields
    # previously. We need to skip the header row.
    src = ds.CSVDataSource(path, read_header=False, skip_rows=1)
    src.fields = ds.FieldList(source["fields"])
    src.initialize()

    for record in src.records():
        # Add file reference into output - to know where the row comes from
        record["file"] = path
        out.append(record)

    # Close the source stream
    src.finalize()

out.finalize()
</code></pre> <p>To pretty-print the merged file:</p> <pre><code>cat merged.csv | brewery pipe pretty_printer
</code></pre>
0
2015-04-21T23:10:25Z
[ "python", "shell", "join", "csv", "debian" ]
crontab in python
826,882
<p>I'm writing code in python for some sort of daemon that has to execute a specific action at a certain instance in time defined by a crontab string. Is there a module I can use? If not, can someone paste/link an algorithm I can use to check whether the instance of time defined by the crontab has occurred in the time since the previous check was done? Thanks.</p>
2
2009-05-05T20:58:43Z
826,912
<p><a href="http://docs.python.org/library/sched.html" rel="nofollow">sched</a> ftw</p>
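A minimal sketch of sched in action (it schedules one-shot events; unlike cron, a recurring job would have to re-enter itself after each run):

```python
import sched
import time

s = sched.scheduler(time.time, time.sleep)
fired = []

def job(name):
    fired.append(name)

# Two jobs a fraction of a second apart; the priority argument breaks ties
s.enter(0.1, 1, job, ("first",))
s.enter(0.2, 1, job, ("second",))
s.run()  # blocks until the event queue is empty
print(fired)  # ['first', 'second']
```

Parsing the crontab string into the next delay value is still up to you; sched only handles the waiting and dispatching.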
3
2009-05-05T21:08:20Z
[ "python", "crontab" ]
crontab in python
826,882
<p>I'm writing code in python for some sort of daemon that has to execute a specific action at a certain instance in time defined by a crontab string. Is there a module I can use? If not, can someone paste/link an algorithm I can use to check whether the instance of time defined by the crontab has occurred in the time since the previous check was done? Thanks.</p>
2
2009-05-05T20:58:43Z
826,939
<p><a href="http://www.razorvine.net/download/kronos.py" rel="nofollow">Kronos</a> is another option.</p> <p>Here is a similar SO <a href="http://stackoverflow.com/questions/373335/suggestions-for-a-cron-like-scheduler-in-python">question</a>.</p>
1
2009-05-05T21:16:13Z
[ "python", "crontab" ]
crontab in python
826,882
<p>I'm writing code in Python for some sort of daemon that has to execute a specific action at a certain instant in time defined by a crontab string. Is there a module I can use? If not, can someone paste/link an algorithm I can use to check whether the instant in time defined by the crontab has occurred since the previous check was done? Thanks.</p>
2
2009-05-05T20:58:43Z
830,201
<p>You might want to take a look at <a href="http://www.kalab.com/freeware/pycron/pycron.htm" rel="nofollow">pycron</a>.</p>
0
2009-05-06T15:33:17Z
[ "python", "crontab" ]
Unit testing for D-Bus and HAL?
827,295
<p>How does one test a method that does some interactions with the local D-Bus (accessing a HAL object)? </p> <p>Results of tests will differ depending on the system that the test is run on, so I don't know how to provide the method reliable input. </p> <p>I'm working in Python, by the way.</p>
2
2009-05-05T22:56:01Z
827,391
<p>If you cannot mock the environment, then it's probably impossible for you to write the test. If your access to HAL/D-Bus is via an object, and you provide a mock instance to your test, then it should be possible to emulate the necessary inputs to your test from the mock implementation.</p>
2
2009-05-05T23:31:36Z
[ "python", "unit-testing", "dbus", "hal" ]
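A sketch of the dependency-injection approach described above. BatteryMonitor, FakeHal, and the get_property method are all invented for illustration; they stand in for whatever object your code uses to reach HAL, so the test never touches the real bus:

```python
class BatteryMonitor(object):
    """Code under test: talks to HAL only through the object it is handed."""

    def __init__(self, hal):
        self.hal = hal  # a real D-Bus proxy in production, a fake in tests

    def is_low(self):
        # Hypothetical HAL property name, for illustration only.
        return self.hal.get_property("battery.charge_level.percentage") < 10


class FakeHal(object):
    """Hand-rolled fake: answers property lookups from a plain dict."""

    def __init__(self, properties):
        self.properties = properties

    def get_property(self, key):
        return self.properties[key]


# The same monitor behaves deterministically under two fake environments:
low = BatteryMonitor(FakeHal({"battery.charge_level.percentage": 5}))
ok = BatteryMonitor(FakeHal({"battery.charge_level.percentage": 80}))
print(low.is_low(), ok.is_low())
```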
Unit testing for D-Bus and HAL?
827,295
<p>How does one test a method that does some interactions with the local D-Bus (accessing a HAL object)? </p> <p>Results of tests will differ depending on the system that the test is run on, so I don't know how to provide the method reliable input. </p> <p>I'm working in Python, by the way.</p>
2
2009-05-05T22:56:01Z
1,993,774
<p>It's also possible to create temporary D-Bus buses for tests, and emulate any services your program uses in your test code. You can use this approach for programs which are D-Bus services, D-Bus clients, or both. The downside is that the bus setup is a bit hairy. There's <a href="http://git.collabora.co.uk/?p=telepathy-gabble.git;a=tree;f=tests/twisted/tools" rel="nofollow">code for doing this</a> in e.g. Telepathy Gabble.</p>
0
2010-01-03T02:46:07Z
[ "python", "unit-testing", "dbus", "hal" ]
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
827,397
<pre><code>import win32api

drives = win32api.GetLogicalDriveStrings()
drives = drives.split('\000')[:-1]
print drives
</code></pre> <p>Adapted from: <a href="http://www.faqts.com/knowledge_base/view.phtml/aid/4670">http://www.faqts.com/knowledge_base/view.phtml/aid/4670</a></p>
40
2009-05-05T23:32:57Z
[ "python", "windows" ]
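For anyone without a Windows box handy, the parsing step can be illustrated on a canned buffer. The drive letters below are made up, but the layout (NUL-terminated entries with a trailing NUL) matches what GetLogicalDriveStrings returns:

```python
# A buffer shaped like win32api.GetLogicalDriveStrings() output:
# each entry ends in NUL, and the split leaves a trailing empty string
# which [:-1] discards.
buf = 'C:\\\x00D:\\\x00E:\\\x00'
drives = buf.split('\000')[:-1]   # '\000' is the octal escape for NUL
print(drives)
```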
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
827,398
<p>Without using any external libraries, if that matters to you:</p> <pre><code>import string
from ctypes import windll

def get_drives():
    drives = []
    bitmask = windll.kernel32.GetLogicalDrives()
    for letter in string.uppercase:
        if bitmask &amp; 1:
            drives.append(letter)
        bitmask &gt;&gt;= 1
    return drives

if __name__ == '__main__':
    print get_drives()     # On my PC, this prints ['A', 'C', 'D', 'F', 'H']
</code></pre>
43
2009-05-05T23:33:19Z
[ "python", "windows" ]
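The bit-twiddling above can be pulled out into a pure function and exercised without Windows at all; on a real system you would pass in the result of windll.kernel32.GetLogicalDrives() (Python 3 spelling of the loop above):

```python
import string

def drives_from_bitmask(bitmask):
    """Decode the GetLogicalDrives bitmask: bit 0 is A:, bit 1 is B:, ...

    Pure function, so it can be tested with a canned bitmask on any OS.
    """
    drives = []
    for letter in string.ascii_uppercase:
        if bitmask & 1:
            drives.append(letter)
        bitmask >>= 1
    return drives

# 0b10101 has bits 0, 2 and 4 set, i.e. drives A:, C: and E:
print(drives_from_bitmask(0b10101))
```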
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
827,484
<p>Those look like better answers. Here's my hackish cruft:</p> <pre><code>import os, re

re.findall(r"[A-Z]+:.*$", os.popen("mountvol /").read(), re.MULTILINE)
</code></pre>
6
2009-05-06T00:05:22Z
[ "python", "windows" ]
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
827,490
<p>The <a href="http://www.microsoft.com/technet/scriptcenter/scripts/python/default.mspx?mfr=true" rel="nofollow">Microsoft Script Repository</a> includes <a href="http://www.microsoft.com/technet/scriptcenter/scripts/python/storage/disks/drives/stdvpy05.mspx" rel="nofollow">this recipe</a> which might help. I don't have a windows machine to test it, though, so I'm not sure if you want "Name", "System Name", "Volume Name", or maybe something else.</p>
8
2009-05-06T00:10:02Z
[ "python", "windows" ]
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
23,431,616
<p>As I don't have win32api installed on my fleet of notebooks, I used this solution based on wmic:</p> <pre><code>import subprocess
import string

# define the alphabet
alphabet = []
for i in string.ascii_uppercase:
    alphabet.append(i + ':')

# get letters that are mounted somewhere
mounted_letters = subprocess.Popen("wmic logicaldisk get name", shell=True,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.STDOUT)

# erase mounted letters from the alphabet in a nested loop
for line in mounted_letters.stdout.readlines():
    if "Name" in line:
        continue
    for letter in alphabet:
        if letter in line:
            print 'Deleting letter %s from free alphabet' % letter
            alphabet.pop(alphabet.index(letter))

print alphabet
</code></pre> <p>Alternatively, you can get the difference between the two lists with this simpler solution (after launching the wmic subprocess as mounted_letters, as above):</p> <pre><code># get the output into a list
mounted_letters_list = []
for line in mounted_letters.stdout.readlines():
    if "Name" in line:
        continue
    mounted_letters_list.append(line.strip())

rest = list(set(alphabet) - set(mounted_letters_list))
rest.sort()
print rest
</code></pre> <p>Both solutions are similarly fast, yet I guess the set-based one is the cleaner of the two.</p>
0
2014-05-02T15:23:21Z
[ "python", "windows" ]
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
25,752,504
<p>As part of a similar task I also needed to grab a free drive letter. I decided I wanted the highest available letter. I first wrote it out more idiomatically, then crunched it down to a one-liner to see if it still made sense. As awesome as list comprehensions are, I love sets for this: <code>unused = set(alphabet) - set(used)</code> instead of having to do <code>unused = [a for a in alphabet if a not in used]</code>. Cool stuff!</p> <pre><code>def get_used_drive_letters():
    drives = win32api.GetLogicalDriveStrings()
    drives = drives.split('\000')[:-1]
    letters = [d[0] for d in drives]
    return letters

def get_unused_drive_letters():
    alphabet = map(chr, range(ord('A'), ord('Z') + 1))
    used = get_used_drive_letters()
    unused = list(set(alphabet) - set(used))
    return unused

def get_highest_unused_drive_letter():
    unused = get_unused_drive_letters()
    highest = list(reversed(sorted(unused)))[0]
    return highest
</code></pre> <p>The one-liner:</p> <pre><code>def get_drive():
    return sorted(set(map(chr, range(ord('A'), ord('Z') + 1))) -
                  set(win32api.GetLogicalDriveStrings().split(':\\\000')[:-1]))[-1]
</code></pre> <p>I also chose the alphabet using map/range/ord/chr over using string, since parts of string are deprecated.</p>
0
2014-09-09T19:49:42Z
[ "python", "windows" ]
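The heart of the approach above is an ordinary set difference; here it is on a made-up `used` list so it runs anywhere:

```python
# Build the candidate letters, subtract the ones in use, take the highest.
# The `used` list is canned here; on Windows it would come from
# win32api.GetLogicalDriveStrings() as shown above.
alphabet = [chr(c) for c in range(ord('A'), ord('Z') + 1)]
used = ['A', 'C', 'D']
unused = sorted(set(alphabet) - set(used))
highest = unused[-1]
print(highest, unused[:3])
```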
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
34,187,346
<p>Found this solution on Google, slightly modified from the original. Seems pretty Pythonic and does not need any "exotic" imports:</p> <pre><code>import os, string

available_drives = ['%s:' % d for d in string.ascii_uppercase
                    if os.path.exists('%s:' % d)]
</code></pre>
2
2015-12-09T19:26:32Z
[ "python", "windows" ]
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
37,761,506
<p>I wrote this piece of code:</p> <pre><code>import os

drives = [chr(x) + ":" for x in range(65, 91) if os.path.exists(chr(x) + ":")]
</code></pre> <p>It's based on @Barmaley's answer, but has the advantage of not using the <code>string</code> module, in case you don't want to use it. Note that the range runs up to 91 so that Z (chr(90)) is included. It also works on my system, unlike @SingleNegationElimination's answer.</p>
0
2016-06-11T08:14:47Z
[ "python", "windows" ]
Is there a way to list all the available drive letters in python?
827,371
<p>More or less what it says on the tin: is there an (easy) way in Python to list all the currently in-use drive letters in a windows system?</p> <p>(My google-fu seems to have let me down on this one.)</p> <p>Related:</p> <ul> <li><a href="http://stackoverflow.com/questions/286534/enumerating-all-available-drive-letters-in-windows">Enumerating all available drive letters in Windows</a> (C++ / Win32)</li> </ul>
25
2009-05-05T23:23:47Z
38,584,073
<p>A more optimal solution based on @RichieHindle's answer:</p> <pre><code>def get_drives():
    drives = []
    bitmask = windll.kernel32.GetLogicalDrives()
    letter = ord('A')
    while bitmask &gt; 0:
        if bitmask &amp; 1:
            drives.append(chr(letter) + ':\\')
        bitmask &gt;&gt;= 1
        letter += 1
    return drives
</code></pre>
0
2016-07-26T07:54:16Z
[ "python", "windows" ]
Should my python web app use unicode for all strings?
827,415
<p>I see some frameworks like Django using unicode all over the place so it seems like it might be a good idea.</p> <p>On the other hand, it seems like a big pain to have all these extra 'u's floating around everywhere. </p> <p>What will be a problem if I don't do this?</p> <p>Are there any issues that will come up if I do do this?</p> <p>I'm using Pylons right now as my framework.</p>
6
2009-05-05T23:39:38Z
827,429
<p>In Python 3, all strings are Unicode. So, you can prepare for this by using <code>u''</code> strings everywhere you need to, and then when you eventually upgrade to Python 3 using the <code>2to3</code> tool all the <code>u</code>s will disappear. And you'll be in a better position because you will have already tested your code with Unicode strings.</p> <p>See <a href="http://docs.python.org/3.0/whatsnew/3.0.html#text-vs-data-instead-of-unicode-vs-8-bit" rel="nofollow">Text Vs. Data Instead Of Unicode Vs. 8-bit</a> for more information.</p>
10
2009-05-05T23:43:40Z
[ "python", "django", "web-applications", "unicode", "pylons" ]
Should my python web app use unicode for all strings?
827,415
<p>I see some frameworks like Django using unicode all over the place so it seems like it might be a good idea.</p> <p>On the other hand, it seems like a big pain to have all these extra 'u's floating around everywhere. </p> <p>What will be a problem if I don't do this?</p> <p>Are there any issues that will come up if I do do this?</p> <p>I'm using Pylons right now as my framework.</p>
6
2009-05-05T23:39:38Z
827,461
<p>You can avoid the <code>u''</code> prefixes in Python 2.6 by doing:</p> <pre><code>from __future__ import unicode_literals
</code></pre> <p>That will make <code>'string literals'</code> unicode objects, just as they are in Python 3.</p>
19
2009-05-05T23:55:49Z
[ "python", "django", "web-applications", "unicode", "pylons" ]
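A small sketch of the encode-at-the-boundary idea this enables. It runs unchanged on Python 3, where the __future__ import is a no-op:

```python
from __future__ import unicode_literals  # no-op on Python 3

# With the import in effect, a bare literal is a text (unicode) object.
# Keep text internal; encode to bytes only at the output boundary and
# decode back to text at the input boundary.
s = 'caf\u00e9'               # 4 characters of text
data = s.encode('utf-8')      # 5 bytes for the wire / the browser
back = data.decode('utf-8')   # text again on the way in
print(len(s), len(data))
```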
Should my python web app use unicode for all strings?
827,415
<p>I see some frameworks like Django using unicode all over the place so it seems like it might be a good idea.</p> <p>On the other hand, it seems like a big pain to have all these extra 'u's floating around everywhere. </p> <p>What will be a problem if I don't do this?</p> <p>Are there any issues that will come up if I do do this?</p> <p>I'm using Pylons right now as my framework.</p>
6
2009-05-05T23:39:38Z
829,155
<blockquote> <p>What will be a problem if I don't do this?</p> </blockquote> <p>I'm a westerner living in Japan, so I've seen first-hand what is needed to work with non-ASCII characters. The problem if you don't use Unicode strings is that your code will be a frustration to the parts of the world that use anything other than A-Z. Our company has had a great deal of frustration getting certain web software to handle Japanese characters without making a total mess of it.</p> <p>It takes a little effort for English speakers to appreciate how great Unicode is, but it really is a terrific bit of work to make computers accessible to all cultures and languages.</p> <p>"Gotchas":</p> <ol> <li><p>Make sure your output web pages state the encoding in use properly (e.g. via the charset parameter of the Content-Type header), and then encode all Unicode strings properly at output. Python 3's Unicode strings are a great improvement for doing this right.</p></li> <li><p>Do everything with Unicode strings, and only convert to a specific encoding at the last moment, when doing output. Other languages, such as PHP, are prone to bugs when manipulating Unicode in e.g. UTF-8 form. Say you have to truncate a Unicode string. If it's in UTF-8 form internally, there's a risk you could chop a multi-byte character in half, resulting in rubbish output. Python's use of Unicode strings internally makes it harder to make these mistakes.</p></li> </ol>
3
2009-05-06T11:29:45Z
[ "python", "django", "web-applications", "unicode", "pylons" ]
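The truncation pitfall from gotcha 2 can be demonstrated in a few lines (Python 3; the sample text is arbitrary):

```python
# Slicing decoded text can never split a character; slicing the UTF-8
# bytes can cut a multi-byte character in half.
text = '\u65e5\u672c\u8a9e'    # "日本語": three characters, nine UTF-8 bytes
utf8 = text.encode('utf-8')

safe = text[:2]                # truncate as text: first two characters
unsafe = utf8[:4]              # truncate as bytes: splits the 2nd character

try:
    unsafe.decode('utf-8')
    broken = False
except UnicodeDecodeError:
    broken = True

print(safe, broken)
```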
Should my python web app use unicode for all strings?
827,415
<p>I see some frameworks like Django using unicode all over the place so it seems like it might be a good idea.</p> <p>On the other hand, it seems like a big pain to have all these extra 'u's floating around everywhere. </p> <p>What will be a problem if I don't do this?</p> <p>Are there any issues that will come up if I do do this?</p> <p>I'm using Pylons right now as my framework.</p>
6
2009-05-05T23:39:38Z
1,440,981
<p>Using Unicode internally is a good way to avoid problems with non-ASCII characters. Convert at the boundaries of your application (incoming data to unicode, outgoing data to UTF-8 or whatever). Pylons can do the conversion for you in many cases: e.g. controllers can safely return unicode strings; SQLAlchemy models may declare Unicode columns.</p> <p>Regarding string literals in your source code: the u prefix is usually not necessary. You can safely mix str objects containing ASCII with unicode objects. Just make sure all your string literals are either pure ASCII or are u"unicode".</p>
1
2009-09-17T19:52:32Z
[ "python", "django", "web-applications", "unicode", "pylons" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed, and in this class I have a method called setUrl. Input to this method is a URL.</p> <p>I'm trying to use the re python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work:</p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
    self.url = url
    return url
</code></pre>
73
2009-05-06T00:40:30Z
827,585
<p>The regex provided should match any url of the form <a href="http://www.ietf.org/rfc/rfc3986.txt" rel="nofollow">http://www.ietf.org/rfc/rfc3986.txt</a>; and does when tested in the python interpreter.</p> <p>What format have the URLs you've been having trouble parsing had?</p>
3
2009-05-06T00:55:26Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed, and in this class I have a method called setUrl. Input to this method is a URL.</p> <p>I'm trying to use the re python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work:</p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
    self.url = url
    return url
</code></pre>
73
2009-05-06T00:40:30Z
827,592
<pre><code>import re

urlfinders = [
    re.compile("([0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}|(((news|telnet|nttp|file|http|ftp|https)://)|(www|ftp)[-A-Za-z0-9]*\\.)[-A-Za-z0-9\\.]+)(:[0-9]*)?/[-A-Za-z0-9_\\$\\.\\+\\!\\*\\(\\),;:@&amp;=\\?/~\\#\\%]*[^]'\\.}&gt;\\),\\\"]"),
    re.compile("([0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}|(((news|telnet|nttp|file|http|ftp|https)://)|(www|ftp)[-A-Za-z0-9]*\\.)[-A-Za-z0-9\\.]+)(:[0-9]*)?"),
    re.compile("(~/|/|\\./)([-A-Za-z0-9_\\$\\.\\+\\!\\*\\(\\),;:@&amp;=\\?/~\\#\\%]|\\\\ )+"),
    re.compile("'\\&lt;((mailto:)|)[-A-Za-z0-9\\.]+@[-A-Za-z0-9\\.]+"),
]
</code></pre> <p>NOTE: as ugly as it looks in your browser, just copy-paste it and the formatting should be fine.</p> <p>Found on the Python mailing lists and used for gnome-terminal.</p> <p>Source: <a href="http://mail.python.org/pipermail/python-list/2007-January/595436.html" rel="nofollow">http://mail.python.org/pipermail/python-list/2007-January/595436.html</a></p>
0
2009-05-06T00:58:19Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed, and in this class I have a method called setUrl. Input to this method is a URL.</p> <p>I'm trying to use the re python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work:</p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
    self.url = url
    return url
</code></pre>
73
2009-05-06T00:40:30Z
827,621
<p>An easy way to parse (and validate) URLs is the <code>urlparse</code> (<a href="https://docs.python.org/2/library/urlparse.html" rel="nofollow">py2</a>, <a href="https://docs.python.org/3.0/library/urllib.parse.html" rel="nofollow">py3</a>) module.</p> <p>A regex is too much work.</p> <hr> <p>There's no "validate" method because almost anything is a valid URL. There are some punctuation rules for splitting it up. Absent any punctuation, you still have a valid URL.</p> <p>Check the RFC carefully and see if you can construct an "invalid" URL. The rules are very flexible.</p> <p>For example <code>:::::</code> is a valid URL. The path is <code>":::::"</code>. A pretty stupid filename, but a valid filename.</p> <p>Also, <code>/////</code> is a valid URL. The netloc ("hostname") is <code>""</code>. The path is <code>"///"</code>. Again, stupid. Also valid. This URL normalizes to <code>"///"</code>, which is equivalent.</p> <p>Something like <code>"bad://///worse/////"</code> is perfectly valid. Dumb but valid.</p> <p><strong>Bottom line</strong>: parse it, and look at the pieces to see if they're displeasing in some way.</p> <p>Do you want the scheme to always be "http"? Do you want the netloc to always be "www.somename.somedomain"? Do you want the path to look unix-like? Or windows-like? Do you want to remove the query string? Or preserve it?</p> <p>These are not RFC-specified validations. These are validations unique to your application.</p>
125
2009-05-06T01:09:19Z
[ "python", "regex", "google-app-engine" ]
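A sketch of the parse-then-inspect approach the answer recommends. The module is urlparse on Python 2 and urllib.parse on Python 3; looks_like_feed_url and its particular checks are invented for illustration, since which checks you apply is application-specific:

```python
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

def looks_like_feed_url(url):
    """Application-specific check: http(s) scheme plus a non-empty host."""
    pieces = urlparse(url)
    return pieces.scheme in ('http', 'https') and bool(pieces.netloc)

print(looks_like_feed_url('http://www.ietf.org/rfc/rfc3986.txt'))
print(looks_like_feed_url(':::::'))   # a "valid" URL, but not what we want
```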
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed, and in this class I have a method called setUrl. Input to this method is a URL.</p> <p>I'm trying to use the re python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work:</p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
    self.url = url
    return url
</code></pre>
73
2009-05-06T00:40:30Z
827,639
<p>I admit, I find your regular expression totally incomprehensible. I wonder if you could use urlparse instead? Something like:</p> <pre><code>pieces = urlparse.urlparse(url)

assert all([pieces.scheme, pieces.netloc])
assert set(pieces.netloc) &lt;= set(string.letters + string.digits + '-.')  # and others?
assert pieces.scheme in ['http', 'https', 'ftp']  # etc.
</code></pre> <p>It might be slower, and maybe you'll miss conditions, but it seems (to me) a lot easier to read and debug than <a href="http://www.codinghorror.com/blog/archives/001181.html">a regular expression for URLs</a>.</p>
17
2009-05-06T01:18:04Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed, and in this class I have a method called setUrl. Input to this method is a URL.</p> <p>I'm trying to use the re python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work:</p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
    self.url = url
    return url
</code></pre>
73
2009-05-06T00:40:30Z
827,645
<p>I've needed to do this many times over the years, and I always end up copying a regular expression from someone else who has thought about it way more than I <em>want</em> to think about it.</p> <p>Having said that, there is a regex in the Django forms code which should do the trick:</p> <p><a href="http://code.djangoproject.com/browser/django/trunk/django/forms/fields.py#L534" rel="nofollow">http://code.djangoproject.com/browser/django/trunk/django/forms/fields.py#L534</a></p>
1
2009-05-06T01:23:09Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed, and in this class I have a method called setUrl. Input to this method is a URL.</p> <p>I'm trying to use the re python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work:</p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
    self.url = url
    return url
</code></pre>
73
2009-05-06T00:40:30Z
835,527
<p>Here's the complete regexp to parse a URL.</p> <pre class="lang-none prettyprint-override"><code>(?:http://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\. )*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+) ){3}))(?::(?:\d+))?)(?:/(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F \d]{2}))|[;:@&amp;=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{ 2}))|[;:@&amp;=])*))*)(?:\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{ 2}))|[;:@&amp;=])*))?)?)|(?:ftp://(?:(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(? :%[a-fA-F\d]{2}))|[;?&amp;=])*)(?::(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a- fA-F\d]{2}))|[;?&amp;=])*))?@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|- )*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(? :\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?))(?:/(?:(?:(?:(?:[a-zA-Z\d$\-_.+! *'(),]|(?:%[a-fA-F\d]{2}))|[?:@&amp;=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'() ,]|(?:%[a-fA-F\d]{2}))|[?:@&amp;=])*))*)(?:;type=[AIDaid])?)?)|(?:news:(?: (?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;/?:&amp;=])+@(?:(?:( ?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[ a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3})))|(?:[a-zA-Z]( ?:[a-zA-Z\d]|[_.+-])*)|\*))|(?:nntp://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[ a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d ])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?)/(?:[a-zA-Z](?:[a-zA-Z \d]|[_.+-])*)(?:/(?:\d+))?)|(?:telnet://(?:(?:(?:(?:(?:[a-zA-Z\d$\-_.+ !*'(),]|(?:%[a-fA-F\d]{2}))|[;?&amp;=])*)(?::(?:(?:(?:[a-zA-Z\d$\-_.+!*'() ,]|(?:%[a-fA-F\d]{2}))|[;?&amp;=])*))?@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a -zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d] )?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?))/?)|(?:gopher://(?:(?: (?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?: (?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+ ))?)(?:/(?:[a-zA-Z\d$\-_.+!*'(),;/?:@&amp;=]|(?:%[a-fA-F\d]{2}))(?:(?:(?:[ 
a-zA-Z\d$\-_.+!*'(),;/?:@&amp;=]|(?:%[a-fA-F\d]{2}))*)(?:%09(?:(?:(?:[a-zA -Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;:@&amp;=])*)(?:%09(?:(?:[a-zA-Z\d$ \-_.+!*'(),;/?:@&amp;=]|(?:%[a-fA-F\d]{2}))*))?)?)?)?)|(?:wais://(?:(?:(?: (?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?: [a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))? )/(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)(?:(?:/(?:(?:[a-zA -Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)/(?:(?:[a-zA-Z\d$\-_.+!*'(),]|( ?:%[a-fA-F\d]{2}))*))|\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d] {2}))|[;:@&amp;=])*))?)|(?:mailto:(?:(?:[a-zA-Z\d$\-_.+!*'(),;/?:@&amp;=]|(?:% [a-fA-F\d]{2}))+))|(?:file://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d] |-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?: (?:\d+)(?:\.(?:\d+)){3}))|localhost)?/(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'() ,]|(?:%[a-fA-F\d]{2}))|[?:@&amp;=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|( ?:%[a-fA-F\d]{2}))|[?:@&amp;=])*))*))|(?:prospero://(?:(?:(?:(?:(?:[a-zA-Z \d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-) *[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?)/(?:(?:(?:(? :[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&amp;=])*)(?:/(?:(?:(?:[a- zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&amp;=])*))*)(?:(?:;(?:(?:(?:[ a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&amp;])*)=(?:(?:(?:[a-zA-Z\d $\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&amp;])*)))*)|(?:ldap://(?:(?:(?:(?: (?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?: [a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))? ))?/(?:(?:(?:(?:(?:(?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]) )|(?:%20))+|(?:OID|oid)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%2 0)*)=(?:(?:%0[Aa])?(?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F \d]{2}))*))(?:(?:(?:%0[Aa])?(?:%20)*)\+(?:(?:%0[Aa])?(?:%20)*)(?:(?:(? 
:(?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?:OID |oid)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[Aa]) ?(?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)))*)(?:( ?:(?:(?:%0[Aa])?(?:%20)*)(?:[;,])(?:(?:%0[Aa])?(?:%20)*))(?:(?:(?:(?:( ?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?:OID|o id)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[Aa])?( ?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*))(?:(?:(?: %0[Aa])?(?:%20)*)\+(?:(?:%0[Aa])?(?:%20)*)(?:(?:(?:(?:(?:[a-zA-Z\d]|%( ?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?:OID|oid)\.(?:(?:\d+)(?: \.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[Aa])?(?:%20)*))?(?:(?:[a -zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)))*))*(?:(?:(?:%0[Aa])?(?:%2 0)*)(?:[;,])(?:(?:%0[Aa])?(?:%20)*))?)(?:\?(?:(?:(?:(?:[a-zA-Z\d$\-_.+ !*'(),]|(?:%[a-fA-F\d]{2}))+)(?:,(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-f A-F\d]{2}))+))*)?)(?:\?(?:base|one|sub)(?:\?(?:((?:[a-zA-Z\d$\-_.+!*'( ),;/?:@&amp;=]|(?:%[a-fA-F\d]{2}))+)))?)?)?)|(?:(?:z39\.50[rs])://(?:(?:(? :(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(? 
:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+)) ?)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+)(?:\+(?:(?: [a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+))*(?:\?(?:(?:[a-zA-Z\d$\-_ .+!*'(),]|(?:%[a-fA-F\d]{2}))+))?)?(?:;esn=(?:(?:[a-zA-Z\d$\-_.+!*'(), ]|(?:%[a-fA-F\d]{2}))+))?(?:;rs=(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA -F\d]{2}))+)(?:\+(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+))*) ?))|(?:cid:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?:@&amp;= ])*))|(?:mid:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?:@ &amp;=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?:@&amp;=] )*))?)|(?:vemmi://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z \d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\ .(?:\d+)){3}))(?::(?:\d+))?)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a -fA-F\d]{2}))|[/?:@&amp;=])*)(?:(?:;(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a -fA-F\d]{2}))|[/?:@&amp;])*)=(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d ]{2}))|[/?:@&amp;])*))*))?)|(?:imap://(?:(?:(?:(?:(?:(?:(?:[a-zA-Z\d$\-_.+ !*'(),]|(?:%[a-fA-F\d]{2}))|[&amp;=~])+)(?:(?:;[Aa][Uu][Tt][Hh]=(?:\*|(?:( ?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&amp;=~])+))))?)|(?:(?:;[ Aa][Uu][Tt][Hh]=(?:\*|(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2 }))|[&amp;=~])+)))(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[ &amp;=~])+))?))@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d]) ?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?: \d+)){3}))(?::(?:\d+))?))/(?:(?:(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?: %[a-fA-F\d]{2}))|[&amp;=~:@/])+)?;[Tt][Yy][Pp][Ee]=(?:[Ll](?:[Ii][Ss][Tt]| [Ss][Uu][Bb])))|(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2})) |[&amp;=~:@/])+)(?:\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[ &amp;=~:@/])+))?(?:(?:;[Uu][Ii][Dd][Vv][Aa][Ll][Ii][Dd][Ii][Tt][Yy]=(?:[1- 9]\d*)))?)|(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&amp;=~ 
:@/])+)(?:(?:;[Uu][Ii][Dd][Vv][Aa][Ll][Ii][Dd][Ii][Tt][Yy]=(?:[1-9]\d* )))?(?:/;[Uu][Ii][Dd]=(?:[1-9]\d*))(?:(?:/;[Ss][Ee][Cc][Tt][Ii][Oo][Nn ]=(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&amp;=~:@/])+)))?)) )?)|(?:nfs:(?:(?://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA- Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?: \.(?:\d+)){3}))(?::(?:\d+))?)(?:(?:/(?:(?:(?:(?:(?:[a-zA-Z\d\$\-_.!~*' (),])|(?:%[a-fA-F\d]{2})|[:@&amp;=+])*)(?:/(?:(?:(?:[a-zA-Z\d\$\-_.!~*'(), ])|(?:%[a-fA-F\d]{2})|[:@&amp;=+])*))*)?)))?)|(?:/(?:(?:(?:(?:(?:[a-zA-Z\d \$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&amp;=+])*)(?:/(?:(?:(?:[a-zA-Z\d\$\ -_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&amp;=+])*))*)?))|(?:(?:(?:(?:(?:[a-zA- Z\d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&amp;=+])*)(?:/(?:(?:(?:[a-zA-Z\d \$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&amp;=+])*))*)?))) </code></pre> <p>Given its complexibility, I think you should go the urlparse way.</p> <p>For completeness, here's the pseudo-BNF of the above regex (as a documentation):</p> <pre>; The generic form of a URL is: genericurl = scheme ":" schemepart ; Specific predefined schemes are defined here; new schemes ; may be registered with IANA url = httpurl | ftpurl | newsurl | nntpurl | telneturl | gopherurl | waisurl | mailtourl | fileurl | prosperourl | otherurl ; new schemes follow the general syntax otherurl = genericurl ; the scheme is in lower case; interpreters should use case-ignore scheme = 1*[ lowalpha | digit | "+" | "-" | "." ] schemepart = *xchar | ip-schemepart ; URL schemeparts for ip based protocols: ip-schemepart = "//" login [ "/" urlpath ] login = [ user [ ":" password ] "@" ] hostport hostport = host [ ":" port ] host = hostname | hostnumber hostname = *[ domainlabel "." ] toplabel domainlabel = alphadigit | alphadigit *[ alphadigit | "-" ] alphadigit toplabel = alpha | alpha *[ alphadigit | "-" ] alphadigit alphadigit = alpha | digit hostnumber = digits "." digits "." digits "." 
digits port = digits user = *[ uchar | ";" | "?" | "&" | "=" ] password = *[ uchar | ";" | "?" | "&" | "=" ] urlpath = *xchar ; depends on protocol see section 3.1 ; The predefined schemes: ; FTP (see also RFC959) ftpurl = "ftp://" login [ "/" fpath [ ";type=" ftptype ]] fpath = fsegment *[ "/" fsegment ] fsegment = *[ uchar | "?" | ":" | "@" | "&" | "=" ] ftptype = "A" | "I" | "D" | "a" | "i" | "d" ; FILE fileurl = "file://" [ host | "localhost" ] "/" fpath ; HTTP httpurl = "http://" hostport [ "/" hpath [ "?" search ]] hpath = hsegment *[ "/" hsegment ] hsegment = *[ uchar | ";" | ":" | "@" | "&" | "=" ] search = *[ uchar | ";" | ":" | "@" | "&" | "=" ] ; GOPHER (see also RFC1436) gopherurl = "gopher://" hostport [ / [ gtype [ selector [ "%09" search [ "%09" gopher+_string ] ] ] ] ] gtype = xchar selector = *xchar gopher+_string = *xchar ; MAILTO (see also RFC822) mailtourl = "mailto:" encoded822addr encoded822addr = 1*xchar ; further defined in RFC822 ; NEWS (see also RFC1036) newsurl = "news:" grouppart grouppart = "*" | group | article group = alpha *[ alpha | digit | "-" | "." | "+" | "_" ] article = 1*[ uchar | ";" | "/" | "?" | ":" | "&" | "=" ] "@" host ; NNTP (see also RFC977) nntpurl = "nntp://" hostport "/" group [ "/" digits ] ; TELNET telneturl = "telnet://" login [ "/" ] ; WAIS (see also RFC1625) waisurl = waisdatabase | waisindex | waisdoc waisdatabase = "wais://" hostport "/" database waisindex = "wais://" hostport "/" database "?" search waisdoc = "wais://" hostport "/" database "/" wtype "/" wpath database = *uchar wtype = *uchar wpath = *uchar ; PROSPERO prosperourl = "prospero://" hostport "/" ppath *[ fieldspec ] ppath = psegment *[ "/" psegment ] psegment = *[ uchar | "?" | ":" | "@" | "&" | "=" ] fieldspec = ";" fieldname "=" fieldvalue fieldname = *[ uchar | "?" | ":" | "@" | "&" ] fieldvalue = *[ uchar | "?" 
| ":" | "@" | "&" ] ; Miscellaneous definitions lowalpha = "a" | "b" | "c" | "d" | "e" | "f" | "g" | "h" | "i" | "j" | "k" | "l" | "m" | "n" | "o" | "p" | "q" | "r" | "s" | "t" | "u" | "v" | "w" | "x" | "y" | "z" hialpha = "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" | "J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" | "S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z" alpha = lowalpha | hialpha digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" safe = "$" | "-" | "_" | "." | "+" extra = "!" | "*" | "'" | "(" | ")" | "," national = "{" | "}" | "|" | "\" | "^" | "~" | "[" | "]" | "`" punctuation = "&lt;" | "&gt;" | "#" | "%" | &lt;"&gt; reserved = ";" | "/" | "?" | ":" | "@" | "&" | "=" hex = digit | "A" | "B" | "C" | "D" | "E" | "F" | "a" | "b" | "c" | "d" | "e" | "f" escape = "%" hex hex unreserved = alpha | digit | safe | extra uchar = unreserved | escape xchar = unreserved | reserved | escape digits = 1*digit </pre>
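To make the "urlparse way" suggested above concrete, here is a minimal sketch (using Python 3's `urllib.parse`; on Python 2 the module is named `urlparse`). The function name `looks_like_url` is just for this sketch — it is a rough sanity check, not an implementation of the full grammar above:

```python
from urllib.parse import urlparse  # named `urlparse` on Python 2

def looks_like_url(url):
    """Rough sanity check: require a scheme and a network location.

    Much weaker than the RFC grammar above -- urlparse only splits
    the string into components, it does not validate each one.
    """
    parts = urlparse(url)
    return bool(parts.scheme) and bool(parts.netloc)

print(looks_like_url("http://example.com/feed.rss"))  # True
print(looks_like_url("not a url"))                    # False
```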
199
2009-05-07T15:59:04Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL. </p> <p>I'm trying to use the re Python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work? </p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?') m = p.match(url) if m: self.url = url return url </code></pre>
73
2009-05-06T00:40:30Z
2,431,517
<p><code>urlparse</code> quite happily takes invalid URLs; it is more of a string-splitting library than any kind of validator. For example:</p> <pre><code>from urlparse import urlparse urlparse('http://----') # returns: ParseResult(scheme='http', netloc='----', path='', params='', query='', fragment='') </code></pre> <p>Depending on the situation, this might be fine.</p> <p>If you mostly trust the data, and just want to verify that the protocol is HTTP, then <code>urlparse</code> is perfect.</p> <p>If you want to make sure the URL is actually a legal URL, use <a href="http://stackoverflow.com/questions/827557/how-do-you-validate-a-url-with-a-regular-expression-in-python/835527#835527">the ridiculous regex</a>.</p> <p>If you want to make sure it's a real web address:</p> <pre><code>import urllib try: urllib.urlopen(url) except IOError: print "Not a real URL" </code></pre>
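A runnable illustration of that permissiveness (Python 3 spelling; the behaviour is the same as the Python 2 `urlparse` shown above):

```python
from urllib.parse import urlparse  # `from urlparse import urlparse` on Python 2

# urlparse never rejects its input -- it only splits on delimiters,
# so even "http://----" comes back with a scheme and a netloc.
parts = urlparse("http://----")
print(parts.scheme, parts.netloc)  # http ----
```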
5
2010-03-12T09:11:04Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL. </p> <p>I'm trying to use the re Python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work? </p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?') m = p.match(url) if m: self.url = url return url </code></pre>
73
2009-05-06T00:40:30Z
2,758,281
<p><strong>note</strong> - Lepl is no longer maintained or supported.</p> <p>RFC 3696 defines "best practices" for URL validation - <a href="http://www.faqs.org/rfcs/rfc3696.html" rel="nofollow">http://www.faqs.org/rfcs/rfc3696.html</a></p> <p>The latest release of Lepl (a Python parser library) includes an implementation of RFC 3696. You would use it something like:</p> <pre><code>from lepl.apps.rfc3696 import Email, HttpUrl # compile the validators (do once at start of program) valid_email = Email() valid_http_url = HttpUrl() # use the validators (as often as you like) if valid_email(some_email): # email is ok else: # email is bad if valid_http_url(some_url): # url is ok else: # url is bad </code></pre> <p>Although the validators are defined in Lepl, which is a recursive descent parser, they are largely compiled internally to regular expressions. That combines the best of both worlds - a (relatively) easy to read definition that can be checked against RFC 3696 <em>and</em> an efficient implementation. There's a post on my blog showing how this simplifies the parser - <a href="http://www.acooke.org/cute/LEPLOptimi0.html" rel="nofollow">http://www.acooke.org/cute/LEPLOptimi0.html</a></p> <p>Lepl is available at <a href="http://www.acooke.org/lepl" rel="nofollow">http://www.acooke.org/lepl</a> and the RFC 3696 module is documented at <a href="http://www.acooke.org/lepl/rfc3696.html" rel="nofollow">http://www.acooke.org/lepl/rfc3696.html</a></p> <p>This is completely new in this release, so may contain bugs. Please contact me if you have any problems and I will fix them ASAP. Thanks.</p>
6
2010-05-03T13:18:59Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL. </p> <p>I'm trying to use the re Python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work? </p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?') m = p.match(url) if m: self.url = url return url </code></pre>
73
2009-05-06T00:40:30Z
7,995,979
<p>I'm using the one used by Django and it seems to work pretty well:</p> <pre><code>def is_valid_url(url): import re regex = re.compile( r'^https?://' # http:// or https:// r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?|' # domain... r'localhost|' # localhost... r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip r'(?::\d+)?' # optional port r'(?:/?|[/?]\S+)$', re.IGNORECASE) return url is not None and regex.search(url) </code></pre> <p>You can always check the latest version here: <a href="https://code.djangoproject.com/browser/django/trunk/django/core/validators.py#L47">https://code.djangoproject.com/browser/django/trunk/django/core/validators.py#L47</a></p>
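For a self-contained demonstration, the Django-style pattern from the answer can be compiled once at module level (restated here so the snippet runs on its own; returning a bool instead of the match object is a small change for readability):

```python
import re

# Same pattern as in the answer above, hoisted out of the function
# so it is compiled only once.
URL_RE = re.compile(
    r'^https?://'                                    # http:// or https://
    r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?|'  # domain...
    r'localhost|'                                    # localhost...
    r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'           # ...or ip
    r'(?::\d+)?'                                     # optional port
    r'(?:/?|[/?]\S+)$', re.IGNORECASE)

def is_valid_url(url):
    return url is not None and URL_RE.search(url) is not None

print(is_valid_url("http://stackoverflow.com/questions"))  # True
print(is_valid_url("http://localhost:8000/"))              # True
print(is_valid_url("not-a-url"))                           # False
```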
13
2011-11-03T13:49:07Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p> <p>I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL. </p> <p>I'm trying to use the re Python module to validate against the RFC 3986 regex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p> <p>Below is a snippet which <em>should</em> work? </p> <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?') m = p.match(url) if m: self.url = url return url </code></pre>
73
2009-05-06T00:40:30Z
9,837,658
<p><a href="http://pypi.python.org/pypi/rfc3987">http://pypi.python.org/pypi/rfc3987</a> gives regular expressions for consistency with the rules in RFC 3986 and RFC 3987 (that is, not with scheme-specific rules).</p> <p>A regexp for IRI_reference is:</p> <pre><code>(?P&lt;scheme&gt;[a-zA-Z][a-zA-Z0-9+.-]*):(?://(?P&lt;iauthority&gt;(?:(?P&lt;iuserinfo&gt;(?:(?:[ a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U0002 0000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U 00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009ff fd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U00 0dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:)*)@)?(?P&lt;ihost&gt;\ \[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4] [0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0 -9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01] ?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::( ?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]| [01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F ]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(? 
:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[ 0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?: [0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3 }(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1, 4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0 -9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0- 9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5] |2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))| (?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6 }[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&amp;'()*+,;=]|:)+)\\]|(?:(?:( ?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][ 0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\ U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U000500 00-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00 090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd \U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=])*)( ?::(?P&lt;port&gt;[0-9]*))?)(?P&lt;ipath&gt;(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\uf dcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\ U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007f ffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U0 00bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A- F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)|(?P&lt;ipath&gt;/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7 ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000 -\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U0007 
0000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U 000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000eff fd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff \uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\ U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U000700 00-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U00 0b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd ])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)?)|(?P&lt;ipath&gt;(?:(?:[a-zA-Z0-9._~-]|[\ xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U 00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006ff fd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U00 0afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000- \U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa 0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00 030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd \U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000a fffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U 000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)|(?P&lt;ipath&gt;))(?:\\?(?P&lt;iquery &gt;(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U000 1fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\ U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U000900 00-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U00 0d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)|[\ ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P&lt;ifra 
gment&gt;(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000- \U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050 000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U0 0090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfff d\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:| @)|/|\\?)*))?|(?:(?://(?P&lt;iauthority&gt;(?:(?P&lt;iuserinfo&gt;(?:(?:[a-zA-Z0-9._~-]|[\xa 0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00 030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd \U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000a fffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U 000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:)*)@)?(?P&lt;ihost&gt;\\[(?:(?:[0-9A-F]{1, 4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0- 9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?: [0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3 }(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4 }(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\ .){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1 ,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0- 4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(? 
:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A -F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][ 0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1 ,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9] ?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0- 9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[ 0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4} :){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::| v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&amp;'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0- 9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA -Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000 -\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U0006 0000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U 000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dff fd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=])*)(?::(?P&lt;port&gt;[0-9]*) )?)(?P&lt;ipath&gt;(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U0 0010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fff d\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U000 8fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\ U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()* +,;=]|:|@)*)*)|(?P&lt;ipath&gt;/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufd f0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U000400 00-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00 080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd 
\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A -F]|[!$&amp;'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0 -\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000 -\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U0008 0000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U 000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F ]|[!$&amp;'()*+,;=]|:|@)*)*)?)|(?P&lt;ipath&gt;(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\u fdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd \U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007 fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U 000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A -F][0-9A-F]|[!$&amp;'()*+,;=]|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf \ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00 040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd \U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000b fffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][ 0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)|(?P&lt;ipath&gt;))(?:\\?(?P&lt;iquery&gt;(?:(?:(?:[a-zA-Z0-9. 
_~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U000 2fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\ U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a00 00-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U00 0e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f000 0-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P&lt;ifragment&gt;(?:(?:(?:[a-zA- Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000- \U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060 000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U0 00a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfff d\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)|/|\\?)*))?) </code></pre> <p>In one line:</p> <pre><code>(?P&lt;scheme&gt;[a-zA-Z][a-zA-Z0-9+.-]*):(?://(?P&lt;iauthority&gt;(?:(?P&lt;iuserinfo&gt;(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:)*)@)?(?P&lt;ihost&gt;\\[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A
-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&amp;'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=])*)(?::(?P&lt;port&gt;[0-9]*))?)(?P&lt;ipath&gt;(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)|(?P&lt;ipath&gt;/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U0
0090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)?)|(?P&lt;ipath&gt;(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)|(?P&lt;ipath&gt;))(?:\\?(?P&lt;iquery&gt;(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P&lt;ifragment&gt;(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U0001
0000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)|/|\\?)*))?|(?:(?://(?P&lt;iauthority&gt;(?:(?P&lt;iuserinfo&gt;(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:)*)@)?(?P&lt;ihost&gt;\\[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-
9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&amp;'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=])*)(?::(?P&lt;port&gt;[0-9]*))?)(?P&lt;ipath&gt;(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)|(?P&lt;ipath&gt;/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)?)|(?P&lt;ipath&gt;(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\
ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)*)*)|(?P&lt;ipath&gt;))(?:\\?(?P&lt;iquery&gt;(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P&lt;ifragment&gt;(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&amp;'()*+,;=]|:|@)|/|\\?)*))?) </code></pre>
6
2012-03-23T10:31:27Z
[ "python", "regex", "google-app-engine" ]
How do you validate a URL with a regular expression in Python?
827,557
<p>I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.</p>  <p>I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL. </p>  <p>I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<a href="http://www.ietf.org/rfc/rfc3986.txt">http://www.ietf.org/rfc/rfc3986.txt</a>)</p>  <p>Below is a snippet which <em>should</em> work? </p>  <pre><code>p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
    self.url = url
    return url
</code></pre>
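For reference, the Appendix B expression from RFC 3986 only splits a URI into components; nearly every component is optional, so it matches almost any string, and a successful match alone is weak validation. A sketch of how it behaves (group names are added here for readability; the grouping otherwise follows the question's pattern):

```python
import re

# RFC 3986 Appendix B, with named groups added for readability
rfc3986 = re.compile(
    r'^(?:(?P<scheme>[^:/?#]+):)?(?://(?P<authority>[^/?#]*))?'
    r'(?P<path>[^?#]*)(?:\?(?P<query>[^#]*))?(?:#(?P<fragment>.*))?'
)

m = rfc3986.match('http://www.ietf.org/rfc/rfc3986.txt')
assert m.group('scheme') == 'http'
assert m.group('authority') == 'www.ietf.org'
assert m.group('path') == '/rfc/rfc3986.txt'

# Every component is optional, which is why plain text also "matches":
assert rfc3986.match('not a url').group('path') == 'not a url'
```

So if you need real validation rather than parsing, you have to check the captured components afterwards (scheme present, host non-empty, and so on).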
73
2009-05-06T00:40:30Z
34,266,413
<p>Nowadays, in 90% of cases, if you are working with URLs in Python you are probably already using python-requests. So the question becomes: why not reuse the URL validation from requests?</p>  <pre><code>from requests.models import PreparedRequest
import requests.exceptions

def check_url(url):
    prepared_request = PreparedRequest()
    try:
        prepared_request.prepare_url(url, None)
        return prepared_request.url
    except requests.exceptions.MissingSchema:
        raise SomeException
</code></pre>  <p>Features:</p>  <ul> <li>don't reinvent the wheel</li> <li>DRY</li> <li>works offline</li> <li>minimal resources</li> </ul>
0
2015-12-14T11:54:33Z
[ "python", "regex", "google-app-engine" ]
Python interpreter with Linux Screen
827,879
<p>I was working with Python with a <a href="http://www.rackaid.com/resources/linux-tutorials/general-tutorials/using-screen/" rel="nofollow">Linux terminal screen</a>. When I typed:</p> <pre><code>help(somefunction) </code></pre> <p>It printed the appropriate output, but then my screen was stuck, and at the bottom of the terminal was "(end)". </p> <p>How do I get unstuck? Thanks in advance.</p>
1
2009-05-06T03:12:46Z
827,881
<p>That program uses your pager, which is by default more. You can exit just by pressing q.</p>
5
2009-05-06T03:14:25Z
[ "python", "linux", "terminal" ]
Python interpreter with Linux Screen
827,879
<p>I was working with Python with a <a href="http://www.rackaid.com/resources/linux-tutorials/general-tutorials/using-screen/" rel="nofollow">Linux terminal screen</a>. When I typed:</p> <pre><code>help(somefunction) </code></pre> <p>It printed the appropriate output, but then my screen was stuck, and at the bottom of the terminal was "(end)". </p> <p>How do I get unstuck? Thanks in advance.</p>
1
2009-05-06T03:12:46Z
827,936
<p>The standard on GNU (or other Unix-like) systems is to use the environment variable <code>PAGER</code> for the command that should receive output for <strong>viewing one screenful ("page") at a time</strong>.</p> <p>Mine is set to:</p> <pre><code>$ echo $PAGER less </code></pre> <p>Yours might be set to <code>more</code>, or a different command, or not set at all in which case a system-wide default command will be used.</p> <p>It sounds like yours is modelled after the <code>more</code> program. The program is showing you page-by-page output, and in this case telling you you're at the end.</p> <p>Most of them (basically, any pager more modern than <code>more</code>) allow you to go forward and backward in the output by using the cursor control keys (arrows and <code>PgUp</code>/<code>PgDown</code>), and many other operations besides.</p> <p>Since you can do all these things wherever you are in the output, the program <strong>needs an explicit command from you to know that you're done</strong> navigating the output. In all likelihood that command is the keypress <code>q</code>.</p> <p>For more information on how to drive your pager, e.g. <code>less</code>, read its manpage with the command <code>man less</code> (which, of course, will show pages of output using the pager program :-)</p>
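If you'd rather avoid the pager entirely (handy inside a <code>screen</code> session), you can fetch the same documentation as a plain string via the stdlib <code>pydoc</code> module, which is what <code>help()</code> is built on (a sketch):

```python
import pydoc

# The same text help(len) would send to the pager, as an ordinary string
text = pydoc.render_doc(len)

# Print it (or slice it) yourself; no pager, nothing to press 'q' for
print(text.splitlines()[0])
```

Since it is just a string, you can also grep it, log it, or page it with whatever tool you like.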
10
2009-05-06T03:41:29Z
[ "python", "linux", "terminal" ]
How do I constrain the SCons Command builder to run only if its dependencies have changed?
828,075
<p>I am using the Command builder in scons to specify that a particular script needs to be invoked to produce a particular file. </p> <p>I would like to only run the script if it has been modified since the file was previously generated. The default behaviour of the Command builder seems to be to always run the script. How can I change this? </p> <p>This is my current SConstruct:</p> <pre><code>speed = Command('speed_analysis.tex','','python code/speed.py') report = PDF(target = 'report.pdf', source = 'report.tex') Depends(report, speed) </code></pre>
4
2009-05-06T04:53:18Z
830,071
<p>Maybe your example is incomplete, but aren't you supposed to do:</p> <pre><code>env = Environment() env.Command(.... </code></pre> <p>I think you need to specify your dependencies as the second argument to Command:</p> <pre><code>Command('speed_analysis.tex','code/speed.py','python code/speed.py') </code></pre>
1
2009-05-06T15:07:13Z
[ "python", "scons" ]
How do I constrain the SCons Command builder to run only if its dependencies have changed?
828,075
<p>I am using the Command builder in scons to specify that a particular script needs to be invoked to produce a particular file. </p> <p>I would like to only run the script if it has been modified since the file was previously generated. The default behaviour of the Command builder seems to be to always run the script. How can I change this? </p> <p>This is my current SConstruct:</p> <pre><code>speed = Command('speed_analysis.tex','','python code/speed.py') report = PDF(target = 'report.pdf', source = 'report.tex') Depends(report, speed) </code></pre>
4
2009-05-06T04:53:18Z
2,512,948
<p>First, it looks like <code>code/speed.py</code> has no control on the output filename... Hardcoded output filenames are usually considered bad practice in scons (see yacc tool). It would read better like this:</p> <pre><code>speed = Command('speed_analysis.tex', [], 'python code/speed.py -o $TARGET') </code></pre> <p>Now, the PDF target produces a <code>report.pdf</code> from <code>report.tex</code>. I'm guessing there's an implicit dependency from <code>report.tex</code> to <code>speed_analysis.tex</code> (through Tex include or something like that).</p> <p>This:</p> <pre><code>Depends(report, speed) </code></pre> <p>Is correct to express that dependency if it's missing. Though I'm surprised the scanner for the PDF builder did not see that implicit dependency...</p> <p>You should verify the dep tree using:</p> <pre><code>scons --tree=all </code></pre> <p>It should look something like this:</p> <pre><code>+ report.pdf + report.tex + speed_analysis.tex + code/speed.py + /usr/bin/python + /usr/bin/pdflatex </code></pre> <p>Now, to answer your question about the script (<code>speed.py</code>) always running, that's because it has no input. There's nothing for scons to check against. That script file must be reading <em>something</em> as an input, if only the py file itself. You need to tell scons about all direct and implicit dependencies for it to short-circuit subsequent runs:</p> <pre><code>Command('speed_analysis.tex', 'code/speed.py', 'python $SOURCE -o $TARGET') </code></pre>
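Putting those two fixes together, the whole SConstruct from the question might look like this (a sketch; the <code>-o $TARGET</code> flag assumes <code>code/speed.py</code> can be told where to write its output, so adjust it to whatever the script actually supports):

```python
# SConstruct (sketch) -- assumes speed.py accepts "-o <output file>"
speed = Command('speed_analysis.tex', 'code/speed.py',
                'python $SOURCE -o $TARGET')
report = PDF(target='report.pdf', source='report.tex')
Depends(report, speed)
```

With the script listed as a source, SCons will re-run the command only when <code>code/speed.py</code> changes.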
11
2010-03-25T03:43:09Z
[ "python", "scons" ]
Python double pointer
828,139
<p>I'm trying to get the values from a pointer to a float array, but it returns as c_void_p in python</p> <p>The C code</p> <pre><code>double v; const void *data; pa_stream_peek(s, &amp;data, &amp;length); v = ((const float*) data)[length / sizeof(float) -1]; </code></pre> <p>Python so far</p> <pre><code>import ctypes null_ptr = ctypes.c_void_p() pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length)) </code></pre> <p>The issue being the null_ptr has an int value (memory address?) but there is no way to read the array?!</p>
4
2009-05-06T05:23:05Z
828,154
<p>My ctypes is rusty, but I believe you want POINTER(c_float) instead of c_void_p. </p> <p>So try this:</p> <pre><code>null_ptr = POINTER(c_float)() pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length)) null_ptr[0] null_ptr[5] # etc </code></pre>
2
2009-05-06T05:31:40Z
[ "python", "ctypes" ]
Python double pointer
828,139
<p>I'm trying to get the values from a pointer to a float array, but it returns as c_void_p in python</p> <p>The C code</p> <pre><code>double v; const void *data; pa_stream_peek(s, &amp;data, &amp;length); v = ((const float*) data)[length / sizeof(float) -1]; </code></pre> <p>Python so far</p> <pre><code>import ctypes null_ptr = ctypes.c_void_p() pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length)) </code></pre> <p>The issue being the null_ptr has an int value (memory address?) but there is no way to read the array?!</p>
4
2009-05-06T05:23:05Z
828,503
<p>You'll also probably want to be passing the null_ptr using byref, e.g.</p> <pre><code>pa_stream_peek(stream, ctypes.byref(null_ptr), ctypes.c_ulong(length)) </code></pre>
0
2009-05-06T07:55:41Z
[ "python", "ctypes" ]
Python double pointer
828,139
<p>I'm trying to get the values from a pointer to a float array, but it returns as c_void_p in python</p> <p>The C code</p> <pre><code>double v; const void *data; pa_stream_peek(s, &amp;data, &amp;length); v = ((const float*) data)[length / sizeof(float) -1]; </code></pre> <p>Python so far</p> <pre><code>import ctypes null_ptr = ctypes.c_void_p() pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length)) </code></pre> <p>The issue being the null_ptr has an int value (memory address?) but there is no way to read the array?!</p>
4
2009-05-06T05:23:05Z
830,493
<p>To use ctypes in a way that mimics your C code, I would suggest (and I'm out-of-practice and this is untested):</p>  <pre><code>vdata = ctypes.c_void_p()
length = ctypes.c_ulong(0)
pa_stream_peek(stream, ctypes.byref(vdata), ctypes.byref(length))
fdata = ctypes.cast(vdata, ctypes.POINTER(ctypes.c_float))
</code></pre>
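The void*-to-float* cast can be exercised without PulseAudio at all by faking the peeked buffer with a ctypes array (a sketch; the names mirror the snippet above):

```python
import ctypes

# Stand-in for the buffer pa_stream_peek() would hand back
buf = (ctypes.c_float * 4)(1.0, 2.0, 3.0, 4.0)
nbytes = ctypes.sizeof(buf)

# What the callback sees: an untyped pointer plus a byte count
vdata = ctypes.cast(buf, ctypes.c_void_p)

# Reinterpret as float*, exactly like ((const float*) data) in C
fdata = ctypes.cast(vdata, ctypes.POINTER(ctypes.c_float))
nfloats = nbytes // ctypes.sizeof(ctypes.c_float)
v = fdata[nfloats - 1]   # last sample in the buffer
```

Indexing the typed pointer (`fdata[i]`) reads the floats straight out of the underlying memory, which is the part the original code was missing with a bare <code>c_void_p</code>.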
1
2009-05-06T16:29:23Z
[ "python", "ctypes" ]
Python double pointer
828,139
<p>I'm trying to get the values from a pointer to a float array, but it returns as c_void_p in python</p> <p>The C code</p> <pre><code>double v; const void *data; pa_stream_peek(s, &amp;data, &amp;length); v = ((const float*) data)[length / sizeof(float) -1]; </code></pre> <p>Python so far</p> <pre><code>import ctypes null_ptr = ctypes.c_void_p() pa_stream_peek(stream, null_ptr, ctypes.c_ulong(length)) </code></pre> <p>The issue being the null_ptr has an int value (memory address?) but there is no way to read the array?!</p>
4
2009-05-06T05:23:05Z
830,560
<p>When you pass pointer arguments without using ctypes.pointer or ctypes.byref, their contents simply get set to the integer value of the memory address (i.e., the pointer bits). These arguments should be passed with <code>byref</code> (or <code>pointer</code>, but <code>byref</code> has less overhead):</p>  <pre><code>data = ctypes.pointer(ctypes.c_float())
nbytes = ctypes.c_size_t(0)
pa_stream_peek(s, ctypes.byref(data), ctypes.byref(nbytes))
nfloats = nbytes.value // ctypes.sizeof(ctypes.c_float)
v = data[nfloats - 1]
</code></pre>
0
2009-05-06T16:43:52Z
[ "python", "ctypes" ]
file handling in python
828,214
<p><br /> Thanks in advance. I have written a program which works for small files. But that doesn't work for files of 1 GB. Please tell me is there is any way to handle big file. Here is the code.</p> <pre><code>fh=open('reg.fa','r') c=fh.readlines() fh.close() s='' for i in range(0,(len(c))): s=s+c[i] lines=s.split('\n') for line in s: s=s.replace('\n','') s=s.replace('\n','') print s </code></pre>
2
2009-05-06T05:55:17Z
828,224
<p>With readlines() you read the whole file at once, so you use 1 GB of memory. Instead of this, try:</p>  <pre><code>f = open(...)
while 1:
    line = f.readline()
    if not line:
        break
    line = line.rstrip()
    ... do something with line ...
f.close()
</code></pre>  <p>If all you need is to remove \n then do not do it line by line, but do it with chunks of text:</p>  <pre><code>import sys
f = open('query.txt','r')
while 1:
    part = f.read(1024)
    if not part:
        break
    part = part.replace('\n', '')
    sys.stdout.write(part)
</code></pre>
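The chunked variant is easy to check against an in-memory file by writing to a buffer instead of stdout (a sketch):

```python
import io

def strip_newlines(src, dst, chunk_size=1024):
    """Copy src to dst dropping newlines, without ever holding the whole file."""
    while True:
        part = src.read(chunk_size)
        if not part:
            break
        dst.write(part.replace('\n', ''))

src = io.StringIO('line one\nline two\nline three\n')
dst = io.StringIO()
strip_newlines(src, dst)
```

Memory use is bounded by `chunk_size`, so the same function handles a 1 GB file as comfortably as this three-line example.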
5
2009-05-06T05:59:43Z
[ "python" ]
file handling in python
828,214
<p><br /> Thanks in advance. I have written a program which works for small files. But that doesn't work for files of 1 GB. Please tell me is there is any way to handle big file. Here is the code.</p> <pre><code>fh=open('reg.fa','r') c=fh.readlines() fh.close() s='' for i in range(0,(len(c))): s=s+c[i] lines=s.split('\n') for line in s: s=s.replace('\n','') s=s.replace('\n','') print s </code></pre>
2
2009-05-06T05:55:17Z
828,234
<p>The <code>readlines</code> method reads in the <strong>entire</strong> file. You don't want to do that for a file that is large in relation to your physical memory size.</p> <p>The fix is to read the file in small chunks, and process those individually. You can, for example, do something like this:</p> <pre><code>for line in f.xreadlines(): ... do something with the line </code></pre> <p>The <code>xreadlines</code> does not return a list of lines, but an iterator, which returns one line at a time, when the <code>for</code> loop calls it. An even simpler way of doing that is:</p> <pre><code>for line in f: ... do something with the line </code></pre> <p>Depending on what you do, processing the file line-by-line may be easy or hard. I didn't really get what your sample code is trying to do, but it looks like it should be doable to do it by line.</p>
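The same pattern against a real file on disk, so the memory behaviour matches the 1 GB case (only one line is ever resident at a time):

```python
import os
import tempfile

# Build a tiny stand-in for the large input file
fd, path = tempfile.mkstemp(suffix='.fa')
with os.fdopen(fd, 'w') as f:
    f.write('alpha\nbeta\ngamma\n')

lines = []
with open(path) as f:
    for line in f:                      # the file object yields one line at a time
        lines.append(line.rstrip('\n'))

os.remove(path)
```

The `for line in f` form is the modern spelling; `xreadlines` was later deprecated in favour of iterating over the file object directly.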
17
2009-05-06T06:05:40Z
[ "python" ]
file handling in python
828,214
<p><br /> Thanks in advance. I have written a program which works for small files. But that doesn't work for files of 1 GB. Please tell me is there is any way to handle big file. Here is the code.</p> <pre><code>fh=open('reg.fa','r') c=fh.readlines() fh.close() s='' for i in range(0,(len(c))): s=s+c[i] lines=s.split('\n') for line in s: s=s.replace('\n','') s=s.replace('\n','') print s </code></pre>
2
2009-05-06T05:55:17Z
828,652
<p>The script is not working because it reads all lines of the file in advance, making it necessary to keep the whole file in memory. The easiest way to iterate over all lines in a file is: </p>  <pre><code>for line in open("test.txt", "r"):
    # do something with the "line"
</code></pre>
7
2009-05-06T08:41:32Z
[ "python" ]
file handling in python
828,214
<p><br /> Thanks in advance. I have written a program which works for small files. But that doesn't work for files of 1 GB. Please tell me is there is any way to handle big file. Here is the code.</p> <pre><code>fh=open('reg.fa','r') c=fh.readlines() fh.close() s='' for i in range(0,(len(c))): s=s+c[i] lines=s.split('\n') for line in s: s=s.replace('\n','') s=s.replace('\n','') print s </code></pre>
2
2009-05-06T05:55:17Z
829,133
<p>Your program is very redundant. It looks like everything you do can be done using these lines:</p>  <pre><code>import sys
for line in open('reg.fa'):
    sys.stdout.write(line.rstrip())
</code></pre>  <p>That is enough. This program <strong>gives the same result as your original code in the question</strong> but is much simpler and clearer. And it can also handle files of any size.</p>
2
2009-05-06T11:23:06Z
[ "python" ]
file handling in python
828,214
<p><br /> Thanks in advance. I have written a program which works for small files. But that doesn't work for files of 1 GB. Please tell me is there is any way to handle big file. Here is the code.</p> <pre><code>fh=open('reg.fa','r') c=fh.readlines() fh.close() s='' for i in range(0,(len(c))): s=s+c[i] lines=s.split('\n') for line in s: s=s.replace('\n','') s=s.replace('\n','') print s </code></pre>
2
2009-05-06T05:55:17Z
963,807
<p>From your code it is clear that you want a string buffer holding a single line. From a coding point of view it is bad to store the whole file content in one string buffer and only then process it; the code also contains too many local variables.</p>  <p>You could have used the following chunk of code instead:</p>  <pre><code>f = open(file_name, mode)
for line in f:
    """ Do the processing """
</code></pre>
0
2009-06-08T07:41:09Z
[ "python" ]
Python - Save the context
828,494
<p>I need to save the context of the program before exiting ... I've put all the needed stuff to an object that I've previously created a I tried many times to picke it, but no way !! I continuously have errors like :</p> <ul> <li><p>PicklingError: Can't pickle 'SRE_Match' object: &lt;_sre.SRE_Match object at 0x2a969cd9c0></p></li> <li><p>OSError: [Errno 1] Operation not permitted: [the file that I am referencing is rwx for all]</p></li> </ul> <p>the code that I use is :</p> <pre><code>f = open ("/xxxx/yyyy/toto/titi/important_stuff//impo/tmp/contest.obj", 'w').close() os.chmod("/xxxx/yyyy/toto/titi/important_stuff//impo/tmp/contest.obj", 0777) f = open ("/xxxx/yyyy/toto/titi/important_stuff//impo/tmp/contest.obj", 'w') pickle.dump(useCaseList, f) </code></pre> <p>Any knows how can I solve these problems or has another solution ?</p> <p>Thanks in advance</p>
0
2009-05-06T07:52:51Z
828,587
<p>See the python doc <a href="http://docs.python.org/library/pickle.html#what-can-be-pickled-and-unpickled" rel="nofollow">What can be pickled and unpickled</a>. You have objects that can not be pickled.</p>
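Concretely, the <code>SRE_Match</code> in the traceback is one of the unpicklable types; the usual fix is to store the extracted strings (<code>m.group(...)</code>) rather than the match object itself (a sketch):

```python
import pickle
import re

# Plain containers of builtins round-trip through pickle just fine
state = {'url': 'http://example.com/feed', 'count': 3}
assert pickle.loads(pickle.dumps(state)) == state

# A match object, as in the original traceback, does not
m = re.match(r'\w+', 'hello')
try:
    pickle.dumps(m)
    pickled = True
except (TypeError, pickle.PicklingError):
    pickled = False

# Keep the string the match produced, not the match object
safe = m.group(0)
```

The same idea applies to open files, sockets, and other live resources: persist the data they yielded, not the objects themselves.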
3
2009-05-06T08:27:36Z
[ "python", "serialization", "pickle" ]
List a dictionary
828,578
<p>In a list appending is possible. But how I achieve appending in dictionary?</p> <pre><code> Symbols from __ctype_tab.o: Name Value Class Type Size Line Section __ctype |00000000| D | OBJECT|00000004| |.data __ctype_tab |00000000| r | OBJECT|00000101| |.rodata Symbols from _ashldi3.o: Name Value Class Type Size Line Section __ashldi3 |00000000| T | FUNC|00000050| |.text Symbols from _ashrdi3.o: Name Value Class Type Size Line Section __ashrdi3 |00000000| T | FUNC|00000058| |.text Symbols from _fixdfdi.o: Name Value Class Type Size Line Section __fixdfdi |00000000| T | FUNC|0000004c| |.text __fixunsdfdi | | U | NOTYPE| | |*UND* </code></pre> <p>How can I create a dictionary like:</p> <pre><code>dictOfTables {'__ctype_tab.o':{'__ctype': Name:...,Value:...,Class:...,Type:...,Size:...,Line:...,Section:...}} etc. </code></pre> <p>for the above text?</p>
1
2009-05-06T08:25:47Z
828,595
<p>Look into using an <a href="http://www.python.org/dev/peps/pep-0372/" rel="nofollow">ordered dictionary</a>. I don't think this is in official Python yet, but there's a reference implementation available in the PEP.</p>
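That reference implementation later landed in the standard library as <code>collections.OrderedDict</code> (Python 2.7/3.1), so on a current interpreter the idea looks like this:

```python
from collections import OrderedDict

d = OrderedDict()
d['__ctype'] = {'Value': '00000000', 'Class': 'D', 'Type': 'OBJECT'}
d['__ctype_tab'] = {'Value': '00000000', 'Class': 'r', 'Type': 'OBJECT'}

# Iteration follows insertion order, unlike a plain dict before Python 3.7
names = list(d)
```

This preserves the order symbols were read from the file, which a plain pre-3.7 dict would not.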
1
2009-05-06T08:29:33Z
[ "python", "list", "dictionary" ]
List a dictionary
828,578
<p>In a list appending is possible. But how I achieve appending in dictionary?</p> <pre><code> Symbols from __ctype_tab.o: Name Value Class Type Size Line Section __ctype |00000000| D | OBJECT|00000004| |.data __ctype_tab |00000000| r | OBJECT|00000101| |.rodata Symbols from _ashldi3.o: Name Value Class Type Size Line Section __ashldi3 |00000000| T | FUNC|00000050| |.text Symbols from _ashrdi3.o: Name Value Class Type Size Line Section __ashrdi3 |00000000| T | FUNC|00000058| |.text Symbols from _fixdfdi.o: Name Value Class Type Size Line Section __fixdfdi |00000000| T | FUNC|0000004c| |.text __fixunsdfdi | | U | NOTYPE| | |*UND* </code></pre> <p>How can I create a dictionary like:</p> <pre><code>dictOfTables {'__ctype_tab.o':{'__ctype': Name:...,Value:...,Class:...,Type:...,Size:...,Line:...,Section:...}} etc. </code></pre> <p>for the above text?</p>
1
2009-05-06T08:25:47Z
828,688
<p>Appending doesn't make sense to the concept of dictionary in the same way as for list. Instead, it's more sensible to speak in terms of inserting and removing key/values, as there's no "<em>end</em>" to append to - the dict is unordered.</p> <p>From your desired output, it looks like you want to have a dict of dicts of dicts, (ie <code>{filename : { symbol : { key:value }}</code>. I think you can get this from your input with something like this:</p> <pre><code>import re header_re = re.compile('Symbols from (.*):') def read_syms(f): """Read list of symbols from provided iterator and return dict of values""" d = {} headings=None for line in f: line = line.strip() if not line: return d # Finished. if headings is None: headings = [x.strip() for x in line.split()] continue # First line is headings items = [x.strip() for x in line.split("|")] d[items[0]] = dict(zip(headings[1:], items[1:])) return d f=open('input.txt') d={} for line in f: m=header_re.match(line) if m: d[m.group(1)] = read_syms(f) </code></pre>
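The "append" the question asks about amounts to inserting keys into the nested dicts; <code>setdefault</code> keeps that to one line per symbol (a sketch with the desired shape, using field values from the sample input):

```python
d = {}
for filename, symbol, fields in [
    ('__ctype_tab.o', '__ctype',     {'Value': '00000000', 'Class': 'D'}),
    ('__ctype_tab.o', '__ctype_tab', {'Value': '00000000', 'Class': 'r'}),
    ('_ashldi3.o',    '__ashldi3',   {'Value': '00000000', 'Class': 'T'}),
]:
    # setdefault creates the inner dict the first time a file is seen,
    # then inserts the symbol's row into it
    d.setdefault(filename, {})[symbol] = fields
```

Lookup then follows the nesting: `d['__ctype_tab.o']['__ctype']['Class']`.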
7
2009-05-06T08:51:50Z
[ "python", "list", "dictionary" ]
How to measure Django cache performance?
828,702
<p>I have a rather small (ca. 4.5k pageviews a day) website running on Django, with PostgreSQL 8.3 as the db.</p> <p>I am using the database as both the cache and the sesssion backend. I've heard a lot of good things about using Memcached for this purpose, and I would definitely like to give it a try. However, I would like to know exactly what would be the benefits of such a change: I imagine that my site may be just not big enough for the better cache backend to make a difference. The point is: it wouldn't be me who would be installing and configuring memcached, and I don't want to waste somebody's time for nothing or very little.</p> <p>How can I measure the overhead introduced by using the db as the cache backend? I've looked at django-debug-toolbar, but if I understand correctly it isn't something you'd like to put on a production site (you have to set <code>DEBUG=True</code> for it to work). Unfortunately, I cannot quite reproduce the production setting on my laptop (I have a different OS, CPU and a lot more RAM).</p> <p>Has anyone benchmarked different Django cache/session backends? Does anybody know what would be the performance difference if I was doing, for example, one session-write on every request?</p>
4
2009-05-06T08:56:44Z
828,826
<p>Short answer: if you have enough RAM, memcached will always be faster. You can't really benchmark memcached vs. a database cache; just keep in mind that the big bottleneck with servers is disk access, especially write access.</p>  <p>That said, a disk cache is better if you have many objects to cache with long expiration times. But in that situation, if you want big performance, it is better to generate your pages statically with a Python script and deliver them with lighttpd or nginx.</p>  <p>For memcached, you can adjust the amount of RAM dedicated to the server.</p>
2
2009-05-06T09:32:27Z
[ "python", "django", "postgresql", "caching", "memcached" ]
How to measure Django cache performance?
828,702
<p>I have a rather small (ca. 4.5k pageviews a day) website running on Django, with PostgreSQL 8.3 as the db.</p> <p>I am using the database as both the cache and the sesssion backend. I've heard a lot of good things about using Memcached for this purpose, and I would definitely like to give it a try. However, I would like to know exactly what would be the benefits of such a change: I imagine that my site may be just not big enough for the better cache backend to make a difference. The point is: it wouldn't be me who would be installing and configuring memcached, and I don't want to waste somebody's time for nothing or very little.</p> <p>How can I measure the overhead introduced by using the db as the cache backend? I've looked at django-debug-toolbar, but if I understand correctly it isn't something you'd like to put on a production site (you have to set <code>DEBUG=True</code> for it to work). Unfortunately, I cannot quite reproduce the production setting on my laptop (I have a different OS, CPU and a lot more RAM).</p> <p>Has anyone benchmarked different Django cache/session backends? Does anybody know what would be the performance difference if I was doing, for example, one session-write on every request?</p>
4
2009-05-06T08:56:44Z
829,260
<p>At my previous job we tried to measure the impact of caching on a site we were developing. On the same machine we load-tested the set of 10 pages that are most commonly used as start pages (object listings), plus some object detail pages taken randomly from a pool of ~200,000. The difference was like 150 requests/second to 30,000 requests/second, and the database queries dropped to 1-2 per page.</p>  <p>What was cached:</p>  <ul> <li>sessions</li> <li>lists of objects retrieved for each individual page in an object listing</li> <li>secondary objects and common content (found on each page)</li> <li>lists of object categories and other <em>categorising properties</em></li> <li>object counters (calculated offline by a cron job)</li> <li>individual objects</li> </ul>  <p>In general, we used only low-level granular caching, not the high-level cache framework. It required very careful design (the cache had to be properly invalidated upon each database state change, like adding or modifying any object).</p>
5
2009-05-06T12:08:36Z
[ "python", "django", "postgresql", "caching", "memcached" ]
How to measure Django cache performance?
828,702
<p>I have a rather small (ca. 4.5k pageviews a day) website running on Django, with PostgreSQL 8.3 as the db.</p> <p>I am using the database as both the cache and the sesssion backend. I've heard a lot of good things about using Memcached for this purpose, and I would definitely like to give it a try. However, I would like to know exactly what would be the benefits of such a change: I imagine that my site may be just not big enough for the better cache backend to make a difference. The point is: it wouldn't be me who would be installing and configuring memcached, and I don't want to waste somebody's time for nothing or very little.</p> <p>How can I measure the overhead introduced by using the db as the cache backend? I've looked at django-debug-toolbar, but if I understand correctly it isn't something you'd like to put on a production site (you have to set <code>DEBUG=True</code> for it to work). Unfortunately, I cannot quite reproduce the production setting on my laptop (I have a different OS, CPU and a lot more RAM).</p> <p>Has anyone benchmarked different Django cache/session backends? Does anybody know what would be the performance difference if I was doing, for example, one session-write on every request?</p>
4
2009-05-06T08:56:44Z
2,105,437
<p>Just try it out. Use Firebug or a similar tool and run memcached with a modest RAM allocation (e.g. 64 MB) on the test server.</p>  <p>Note your average load times as seen in Firebug without memcached, then turn caching on and note the new results. It really is that simple.</p>  <p>The results usually shock people, because the performance improves very nicely.</p>
0
2010-01-20T22:19:04Z
[ "python", "django", "postgresql", "caching", "memcached" ]
How to measure Django cache performance?
828,702
<p>I have a rather small (ca. 4.5k pageviews a day) website running on Django, with PostgreSQL 8.3 as the db.</p> <p>I am using the database as both the cache and the sesssion backend. I've heard a lot of good things about using Memcached for this purpose, and I would definitely like to give it a try. However, I would like to know exactly what would be the benefits of such a change: I imagine that my site may be just not big enough for the better cache backend to make a difference. The point is: it wouldn't be me who would be installing and configuring memcached, and I don't want to waste somebody's time for nothing or very little.</p> <p>How can I measure the overhead introduced by using the db as the cache backend? I've looked at django-debug-toolbar, but if I understand correctly it isn't something you'd like to put on a production site (you have to set <code>DEBUG=True</code> for it to work). Unfortunately, I cannot quite reproduce the production setting on my laptop (I have a different OS, CPU and a lot more RAM).</p> <p>Has anyone benchmarked different Django cache/session backends? Does anybody know what would be the performance difference if I was doing, for example, one session-write on every request?</p>
4
2009-05-06T08:56:44Z
28,123,744
<p>Use <a href="http://django-debug-toolbar.readthedocs.org/en/" rel="nofollow">django-debug-toolbar</a> to see how much time has been saved on <code>SQL</code> query</p>
0
2015-01-24T08:37:35Z
[ "python", "django", "postgresql", "caching", "memcached" ]
How to measure Django cache performance?
828,702
<p>I have a rather small (ca. 4.5k pageviews a day) website running on Django, with PostgreSQL 8.3 as the db.</p> <p>I am using the database as both the cache and the sesssion backend. I've heard a lot of good things about using Memcached for this purpose, and I would definitely like to give it a try. However, I would like to know exactly what would be the benefits of such a change: I imagine that my site may be just not big enough for the better cache backend to make a difference. The point is: it wouldn't be me who would be installing and configuring memcached, and I don't want to waste somebody's time for nothing or very little.</p> <p>How can I measure the overhead introduced by using the db as the cache backend? I've looked at django-debug-toolbar, but if I understand correctly it isn't something you'd like to put on a production site (you have to set <code>DEBUG=True</code> for it to work). Unfortunately, I cannot quite reproduce the production setting on my laptop (I have a different OS, CPU and a lot more RAM).</p> <p>Has anyone benchmarked different Django cache/session backends? Does anybody know what would be the performance difference if I was doing, for example, one session-write on every request?</p>
4
2009-05-06T08:56:44Z
36,097,439
<p>The <a href="http://www.grantjenks.com/docs/diskcache/" rel="nofollow" title="Python DiskCache">DiskCache</a> project publishes <a href="http://www.grantjenks.com/docs/diskcache/djangocache-benchmarks.html" rel="nofollow" title="DiskCache Django cache benchmarks">Django cache benchmarks</a> comparing local memory, Memcached, Redis, file based, and <a href="http://www.grantjenks.com/docs/diskcache/api.html#djangocache" rel="nofollow" title="DiskCache DjangoCache">diskcache.DjangoCache</a>. An added benefit of DiskCache is that no separate process is necessary (unlike Memcached and Redis). Instead cache keys and small values are memory-mapped into the Django process memory. Retrieving values from the cache is generally faster than Memcached on localhost. A number of <a href="http://www.grantjenks.com/docs/diskcache/api.html#constants" rel="nofollow" title="DiskCache settings">settings</a> control how much data is kept in memory; the rest being paged out to disk.</p>
2
2016-03-19T03:22:06Z
[ "python", "django", "postgresql", "caching", "memcached" ]
Debugging swig extensions for Python
828,843
<p>Is there any other way to debug swig extensions except for doing </p> <blockquote> <p>gdb python stuff.py</p> </blockquote> <p>?</p> <p>I have wrapped the legacy library <a href="http://libkdtree.alioth.debian.org/" rel="nofollow">libkdtree++</a> and followed all the swig related memory managemant points (borrowed ref vs. own ref, etc.). But still, I am not sure whether my binding is not eating up memory. It would be helpful to be able to just debug step by step each publicized function: starting from Python then going to via the C glue binding into C space, and returning back.</p> <p>Is there already such a possibility?</p> <p>Thanks,</p> <p>wr</p>
4
2009-05-06T09:38:33Z
829,391
<p>Well, for debugging, you use a debugger ;-).</p> <p>When debugging, it may be a good idea to configure Python with '--with-pydebug' and recompile. It does additional checks then.</p> <p>If you are looking for memory leaks, there is a simple way:</p> <p>Run your code over and over in a loop, and look for Python's memory consumption.</p>
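The run-it-in-a-loop check can be made slightly less manual with a crude harness (a sketch; <code>resource</code> is Unix-only and peak RSS is a coarse signal, so treat large sustained growth as a hint, not proof of a leak):

```python
import gc
import resource

def rss_growth(fn, iterations=1000):
    """Call fn repeatedly and return the growth in peak RSS (platform units)."""
    gc.collect()
    before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    for _ in range(iterations):
        fn()
    gc.collect()
    after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return after - before

# Replace the lambda with a call into the wrapped extension under test
growth = rss_growth(lambda: [i * i for i in range(100)])
```

For a leaking binding, `growth` keeps climbing as you raise `iterations`; for a clean one it plateaus after warm-up.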
1
2009-05-06T12:45:30Z
[ "python", "debugging", "swig" ]
Debugging swig extensions for Python
828,843
<p>Is there any other way to debug swig extensions except for doing </p> <blockquote> <p>gdb python stuff.py</p> </blockquote> <p>?</p> <p>I have wrapped the legacy library <a href="http://libkdtree.alioth.debian.org/" rel="nofollow">libkdtree++</a> and followed all the swig related memory managemant points (borrowed ref vs. own ref, etc.). But still, I am not sure whether my binding is not eating up memory. It would be helpful to be able to just debug step by step each publicized function: starting from Python then going to via the C glue binding into C space, and returning back.</p> <p>Is there already such a possibility?</p> <p>Thanks,</p> <p>wr</p>
4
2009-05-06T09:38:33Z
1,541,358
<p>gdb 7.0 supports python scripting. It might help you in this particular case.</p>
2
2009-10-09T01:06:30Z
[ "python", "debugging", "swig" ]
Is Python2.6 stable enough for production use?
828,862
<p>Or should I just stick with Python2.5 for a bit longer?</p>
7
2009-05-06T09:44:28Z
828,881
<p>From <a href="http://python.org/download/" rel="nofollow">python.org</a>:</p> <blockquote> <p>The current production versions are Python 2.6.2 and Python 3.0.1.</p> </blockquote> <p>So, yes.</p> <p>Python 3.x contains some backwards incompatible changes, so <a href="http://python.org/download/" rel="nofollow">python.org</a> also says:</p> <blockquote> <p>start with Python 2.6 since more existing third party software is compatible with Python 2 than Python 3 right now</p> </blockquote>
18
2009-05-06T09:51:02Z
[ "python" ]
Is Python2.6 stable enough for production use?
828,862
<p>Or should I just stick with Python2.5 for a bit longer?</p>
7
2009-05-06T09:44:28Z
828,894
<p>Ubuntu has switched to 2.6 in its latest release, and has not had any significant problems. So I would say "yes, it's stable".</p>
10
2009-05-06T09:55:39Z
[ "python" ]
Is Python2.6 stable enough for production use?
828,862
<p>Or should I just stick with Python2.5 for a bit longer?</p>
7
2009-05-06T09:44:28Z
828,951
<p>It depends on the libraries you use. For example, there is no precompiled InformixDB package for 2.6 if you have to use Python on Windows.</p> <p>Also, the web2py framework sticks with 2.5 because of a bug in 2.6.</p> <p>Personally I use CPython 2.6 (workhorse) and 3.0 (experimental), and Jython 2.5 beta (for my tests with JDBC and ODBC).</p>
6
2009-05-06T10:14:31Z
[ "python" ]
Is Python2.6 stable enough for production use?
828,862
<p>Or should I just stick with Python2.5 for a bit longer?</p>
7
2009-05-06T09:44:28Z
828,994
<p>Yes it is, but this is not the right question. The right question is "can I use Python 2.6, taking into consideration the incompatibilities it introduces?". And the short answer is "most probably yes, unless you use a specific lib that wouldn't work with 2.6, which is pretty rare".</p>
4
2009-05-06T10:30:24Z
[ "python" ]
Is Python2.6 stable enough for production use?
828,862
<p>Or should I just stick with Python2.5 for a bit longer?</p>
7
2009-05-06T09:44:28Z
829,464
<p>I've found 2.6 to be fairly good, with two exceptions:</p> <ol> <li>If you're using it on a server, I've had trouble in the past with some libraries which are used by elements of the server (Debian Etch IIRC). It's possible with a bit of jiggery-pokery to maintain several versions of Python in unison, though, if you're careful :-)</li> <li>This is no longer true, but the last time I tried 2.6, wxPython had not been updated, which meant all the GUI tools I've written broke. There's now a version available that's built against 2.6.</li> </ol> <p>So I'd suggest you check all the modules you use and verify their compatibility with 2.6...</p>
1
2009-05-06T13:06:47Z
[ "python" ]