content stringlengths 85 101k | title stringlengths 0 150 | question stringlengths 15 48k | answers list | answers_scores list | non_answers list | non_answers_scores list | tags list | name stringlengths 35 137 |
|---|---|---|---|---|---|---|---|---|
Q:
Converting integer hex format into strings
I am programming an application to send data using UDP sockets with Python 3.1.
The command socket.send requires data in bytes format.
The problem I am having is that the packet I have to send has three different fields: the first one contains a 16-bit integer variable (c_ushort), as does the second field, whereas the third one is a string whose length can go up to 900 characters.
I decided then to create a struct that contains these three fields:
class PHAL_msg(Structure):
    _fields_ = [("Port", c_ushort),
                ("Size", c_ushort),
                ("Text", c_wchar_p)]
I would expect I could send this object by just converting it to a bytes object:
Msg_TX = PHAL_msg(Port=PHAL_ADDRESS, Size=PAYLOAD_SIZE, Text='HELLO woRLD!')
socket.send(bytes(Msg_TX))
, but it does not work.
Any idea how this could be done?
Regards
A:
You need to serialize your class, use pickle.
class Blah:
    def __init__(self, mynum, mystr):
        self.mynum = mynum
        self.mystr = mystr
a = Blah(3,"blahblah")
#bytes(a) # this will fail with "TypeError: 'Blah' object is not iterable"
import pickle
b = pickle.dumps(a) # turn it into a bytestring
c = pickle.loads(b) # and back to the class
print("a: ", a.__repr__(), a.mynum, a.mystr)
print("pickled....")
print("b: type is:",type(b)) # note it's already in bytes.
print(b.__repr__())
print("unpickled....")
print("c: ", c.__repr__(), c.mynum, c.mystr)
Output:
a: <__main__.Blah object at 0x00BCB470> 3 blahblah
pickled....
b: type is: <class 'bytes'>
b'\x80\x03c__main__\nBlah\nq\x00)\x81q\x01}q\x02(X\x
05\x00\x00\x00mystrq\x03X\x08\x00\x00\x00blahblahq\x
04X\x05\x00\x00\x00mynumq\x05K\x03ub.'
unpickled....
c: <__main__.Blah object at 0x00BCB950> 3 blahblah
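A side note beyond pickle: pickled data can only be read back by Python, so if the receiver might be written in another language, the fixed layout from the question (two unsigned shorts plus the text) can be packed directly with the standard struct module. A rough sketch; the port value here is made up:

```python
import struct

port = 9750                    # hypothetical port field value
text = b"HELLO woRLD!"         # call .encode() first if you start from str
# !HH = network byte order, two unsigned 16-bit ints (Port, Size)
packet = struct.pack("!HH", port, len(text)) + text

# receiving side: unpack the fixed 4-byte header, then slice out the text
rport, size = struct.unpack("!HH", packet[:4])
rtext = packet[4:4 + size]
```

The receiver then only needs to agree on the header layout, not on Python's pickle protocol.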
| Converting integer hex format into strings | I am programming an application to send data using UDP sockets with Python 3.1.
The command socket.send requires data in bytes format.
The problem I am having is that the packet I have to send has three different fields: the first one contains a 16-bit integer variable (c_ushort), as does the second field, whereas the third one is a string whose length can go up to 900 characters.
I decided then to create a struct that contains these three fields:
class PHAL_msg(Structure):
    _fields_ = [("Port", c_ushort),
                ("Size", c_ushort),
                ("Text", c_wchar_p)]
I would expect I could send this object by just converting it to a bytes object:
Msg_TX = PHAL_msg(Port=PHAL_ADDRESS, Size=PAYLOAD_SIZE, Text='HELLO woRLD!')
socket.send(bytes(Msg_TX))
, but it does not work.
Any idea how this could be done?
Regards
| [
"You need to serialize your class, use pickle.\nclass Blah:\n def __init__(self,mynum, mystr):\n self.mynum = mynum\n self.mystr = mystr\n\na = Blah(3,\"blahblah\")\n#bytes(a) # this will fail with \"TypeError: 'Blah' object is not iterable\"\n\nimport pickle\nb = pickle.dumps(a) # turn it into a b... | [
1
] | [] | [] | [
"python",
"sockets"
] | stackoverflow_0003609156_python_sockets.txt |
Q:
Trac Using Database Authentication
Is it possible to use a database for authentication with Trac?
.htpasswd auth is not desired in this install.
Using Trac .11 and MySQL as the database. Trac is currently using the database, but provides no authentication.
A:
Out of the box, Trac doesn't actually do its own authentication, it leaves it up to the web server. So, you've got a wealth of Apache-related options available to you. You could maybe look at something like auth_mysql to let you keep user credentials in a database.
Alternatively, take a look at the AccountManagerPlugin on trac-hacks.org
A:
You can use Account Manager Plugin with SessionStore
The AccountManagerPlugin offers several features for managing user accounts:
allow users to register new accounts
login via an HTML form instead of using HTTP authentication
allow existing users to change their passwords or delete their accounts
send a new password to users who’ve forgotten their password
administration of user accounts
A:
Please refer to http://trac-hacks.org/wiki/AccountManagerPlugin
Do the following in your trac.ini:
[components]
; be sure to enable the component
acct_mgr.svnserve.* = enabled
acct_mgr.svnserve.svnservepasswordstore = enabled
; choose one of the hash methods
acct_mgr.pwhash.htdigesthashmethod = enabled
acct_mgr.pwhash.htpasswdhashmethod = enabled
[account-manager]
password_store = SvnServePasswordStore
password_file = /path/to/svn/repos/conf/passwd
; choose one of the hash methods
hash_method = HtDigestHashMethod
hash_method = HtPasswdHashMethod
Now Trac will use the users in the database
| Trac Using Database Authentication | Is it possible to use a database for authentication with Trac?
.htpasswd auth is not desired in this install.
Using Trac .11 and MySQL as the database. Trac is currently using the database, but provides no authentication.
| [
"Out of the box, Trac doesn't actually do its own authentication, it leaves it up to the web server. So, you've got a wealth of Apache-related options available to you. You could maybe look at something like auth_mysql to let you keep user credentials in a database.\nAlternatively, take a look at the AccountManager... | [
5,
1,
0
] | [] | [] | [
"apache",
"mysql",
"python",
"trac"
] | stackoverflow_0000982226_apache_mysql_python_trac.txt |
Q:
Ways for combining Adobe AIR and Python - Python
I need to use Python because: I have implemented many scripts and libraries using Python in order to solve a certain problem.
I would like to use AIR because: I really love the flexibility of building UIs using HTML and Javascript, also implementing beautiful UI designs is actually very easy.
Any ideas if I can integrate these 2 technologies or if not, any ideas how I could substitute AIR?
Help would be awesome! :)
A:
I've been thinking about the same combo as well. PyAMF might be worth a look - I've been thinking of PyAMF + web2py + AIR myself, possibly with py2exe thrown in for good measure.
| Ways for combining Adobe AIR and Python - Python | I need to use Python because: I have implemented many scripts and libraries using Python in order to solve a certain problem.
I would like to use AIR because: I really love the flexibility of building UIs using HTML and Javascript, also implementing beautiful UI designs is actually very easy.
Any ideas if I can integrate these 2 technologies or if not, any ideas how I could substitute AIR?
Help would be awesome! :)
| [
"I've been thinking about the same combo as well. PyAMF might be worth a look - I've been thinking of PyAMF + web2py + AIR myself, possibly with py2exe thrown in for good measure.\n"
] | [
2
] | [] | [] | [
"air",
"desktop_application",
"python"
] | stackoverflow_0003610084_air_desktop_application_python.txt |
Q:
Create a new array from numpy array based on the conditions from a list
Suppose that I have an array defined by:
data = np.array([('a1v1', 'a2v1', 'a3v1', 'a4v1', 'a5v1'),
('a1v1', 'a2v1', 'a3v1', 'a4v2', 'a5v1'),
('a1v3', 'a2v1', 'a3v1', 'a4v1', 'a5v2'),
('a1v2', 'a2v2', 'a3v1', 'a4v1', 'a5v2'),
('a1v2', 'a2v3', 'a3v2', 'a4v1', 'a5v2'),
('a1v2', 'a2v3', 'a3v2', 'a4v2', 'a5v1'),
('a1v3', 'a2v3', 'a3v2', 'a4v2', 'a5v2'),
('a1v1', 'a2v2', 'a3v1', 'a4v1', 'a5v1'),
('a1v1', 'a2v3', 'a3v2', 'a4v1', 'a5v2'),
('a1v2', 'a2v2', 'a3v2', 'a4v1', 'a5v2'),
('a1v1', 'a2v2', 'a3v2', 'a4v2', 'a5v2'),
('a1v3', 'a2v2', 'a3v1', 'a4v2', 'a5v2'),
('a1v3', 'a2v1', 'a3v2', 'a4v1', 'a5v2'),
('a1v2', 'a2v2', 'a3v1', 'a4v2', 'a5v1')],
dtype=[('a1', '|S4'), ('a2', '|S4'), ('a3', '|S4'),
('a4', '|S4'), ('a5', '|S4')])
How to create a function to list out data elements by row with conditions given in a list of tuples, r.
r = [('a1', 'a1v1'), ('a4', 'a4v1')]
I know that it can be done manually like this:
data[(data['a1']=='a1v1') & (data['a4']=='a4v1')]
What about removing rows from data that comply with the r.
data[(data['a1']!='a1v1') | (data['a4']!='a4v1')]
Thanks.
A:
If I'm understanding you correctly, you want to list the entire row, where a given tuple of columns is equal to some value. In that case, this should be what you want, though it's a bit verbose and obscure:
test_cols = data[['a1', 'a4']]
test_vals = np.array(('a1v1', 'a4v1'), test_cols.dtype)
data[test_cols == test_vals]
Note the "nested list" style indexing... That's the easiest way to select multiple columns of a structured array. E.g.
data[['a1', 'a4']]
will yield
array([('a1v1', 'a4v1'), ('a1v1', 'a4v2'), ('a1v3', 'a4v1'),
('a1v2', 'a4v1'), ('a1v2', 'a4v1'), ('a1v2', 'a4v2'),
('a1v3', 'a4v2'), ('a1v1', 'a4v1'), ('a1v1', 'a4v1'),
('a1v2', 'a4v1'), ('a1v1', 'a4v2'), ('a1v3', 'a4v2'),
('a1v3', 'a4v1'), ('a1v2', 'a4v2')],
dtype=[('a1', '|S4'), ('a4', '|S4')])
You can then test this against a tuple of the values that you're checking for and get a one-dimensional boolean array where those columns are equal to those values.
However, with structured arrays, the dtype has to be an exact match. E.g. data[['a1', 'a4']] == ('a1v1', 'a4v1') just yields False, so we have to make an array of the values we want to test using the same dtype as the columns we're testing against. Thus, we have to do something like:
test_cols = data[['a1', 'a4']]
test_vals = np.array(('a1v1', 'a4v1'), test_cols.dtype)
before we can do this:
data[test_cols == test_vals]
Which yields what we were originally after:
array([('a1v1', 'a2v1', 'a3v1', 'a4v1', 'a5v1'),
('a1v1', 'a2v2', 'a3v1', 'a4v1', 'a5v1'),
('a1v1', 'a2v3', 'a3v2', 'a4v1', 'a5v2')],
dtype=[('a1', '|S4'), ('a2', '|S4'), ('a3', '|S4'), ('a4', '|S4'), ('a5', '|S4')])
Hope that makes some sense, anyway...
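To turn this into the function the question asks for, you can also AND together one boolean mask per (field, value) pair from r; a sketch (select and its invert flag are names I made up):

```python
import numpy as np

def select(data, r, invert=False):
    # start all-True, then AND in each (field, value) condition from r
    mask = np.ones(len(data), dtype=bool)
    for field, value in r:
        mask &= (data[field] == value)
    # invert=True drops the matching rows instead of keeping them
    return data[~mask] if invert else data[mask]
```

This covers both halves of the question: select(data, r) keeps the matching rows, select(data, r, invert=True) removes them.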
| Create a new array from numpy array based on the conditions from a list | Suppose that I have an array defined by:
data = np.array([('a1v1', 'a2v1', 'a3v1', 'a4v1', 'a5v1'),
('a1v1', 'a2v1', 'a3v1', 'a4v2', 'a5v1'),
('a1v3', 'a2v1', 'a3v1', 'a4v1', 'a5v2'),
('a1v2', 'a2v2', 'a3v1', 'a4v1', 'a5v2'),
('a1v2', 'a2v3', 'a3v2', 'a4v1', 'a5v2'),
('a1v2', 'a2v3', 'a3v2', 'a4v2', 'a5v1'),
('a1v3', 'a2v3', 'a3v2', 'a4v2', 'a5v2'),
('a1v1', 'a2v2', 'a3v1', 'a4v1', 'a5v1'),
('a1v1', 'a2v3', 'a3v2', 'a4v1', 'a5v2'),
('a1v2', 'a2v2', 'a3v2', 'a4v1', 'a5v2'),
('a1v1', 'a2v2', 'a3v2', 'a4v2', 'a5v2'),
('a1v3', 'a2v2', 'a3v1', 'a4v2', 'a5v2'),
('a1v3', 'a2v1', 'a3v2', 'a4v1', 'a5v2'),
('a1v2', 'a2v2', 'a3v1', 'a4v2', 'a5v1')],
dtype=[('a1', '|S4'), ('a2', '|S4'), ('a3', '|S4'),
('a4', '|S4'), ('a5', '|S4')])
How to create a function to list out data elements by row with conditions given in a list of tuples, r.
r = [('a1', 'a1v1'), ('a4', 'a4v1')]
I know that it can be done manually like this:
data[(data['a1']=='a1v1') & (data['a4']=='a4v1')]
What about removing rows from data that comply with the r.
data[(data['a1']!='a1v1') | (data['a4']!='a4v1')]
Thanks.
| [
"If I'm understanding you correctly, you want to list the entire row, where a given tuple of columns is equal to some value. In that case, this should be what you want, though it's a bit verbose and obscure:\ntest_cols = data[['a1', 'a4']]\ntest_vals = np.array(('a1v1', 'a4v1'), test_cols.dtype)\ndata[test_cols ==... | [
1
] | [] | [] | [
"arrays",
"numpy",
"python",
"recarray"
] | stackoverflow_0003607001_arrays_numpy_python_recarray.txt |
Q:
Django: Advice on designing a model with varying fields
I'm looking for some advice/opinions on the best way to approach creating a sort-of-dynamic model in django.
The structure needs to describe data for Products. There are about 60 different possible data points that could be relevant, with each Product choosing about 20 of those points (with much overlapping) depending on its ProductType. There are about 2 dozen different ProductTypes, with each ProductType falling into one of 6 categories.
The obvious, but ungraceful method, would be to just put all 100 data points into the Product model, and just pick and choose, and ignore any fields left empty. Advantage: it's obvious. Disadvantage: it's ugly, and if I need to add more data points, it requires updating the database.
My thought is to make each product completely abstract, and only define its product-type, and a few other identifying traits.
class Product(models.Model):
    productType = models.ForeignKey(ProductType)
I would then have a separate model for each kind of data point (number, text, date, etc.), which would look like this:
class TextData(models.Model):
    fieldName = models.CharField(etc)
    verboseName = models.CharField(etc)
    data = models.CharField(etc)
    product = models.ForeignKey(Product)
Finally, I would make a model that defines what kinds of data are relevant for each product-type, and what each data point should be called.
class ProductType(models.Model):
    name = models.CharField(etc)
    ....
So the main question is: what would be the best way to fill out the relevant fields for a product type, when they can be varying? Perhaps another model that holds the name/type of a data point, and a ManyToManyField between ProductType and that model? A text field with XML? An outside XML field defining the structure? Abandon this entire pursuit and go with something better?
Thanks for any help!
A:
If you are looking for implementations of dynamic attributes for models in django in some kind of eav style, have a look at eav-django, or at django-expando.
A:
This is ordinary relational database design. Don't over-optimize it with OO and inheritance techniques.
You have a Product Category table with (probably) just names.
You have a Product Type table with not much information with FK's to category.
You have a Product table with an FK relationship to Product Type.
You have a Product Feature table with your data points and an FK to product.
To work with a Product, you query the product. If you need the features, you're just working with the "feature_set". Django fetches them for you.
This works remarkably well and is a well-established feature of a relational database and an ORM layer. The DB caching and the ORM caching make these queries go very quickly.
| Django: Advice on designing a model with varying fields | I'm looking for some advice/opinions on the best way to approach creating a sort-of-dynamic model in django.
The structure needs to describe data for Products. There are about 60 different possible data points that could be relevant, with each Product choosing about 20 of those points (with much overlapping) depending on its ProductType. There are about 2 dozen different ProductTypes, with each ProductType falling into one of 6 categories.
The obvious, but ungraceful method, would be to just put all 100 data points into the Product model, and just pick and choose, and ignore any fields left empty. Advantage: it's obvious. Disadvantage: it's ugly, and if I need to add more data points, it requires updating the database.
My thought is to make each product completely abstract, and only define its product-type, and a few other identifying traits.
class Product(models.Model):
    productType = models.ForeignKey(ProductType)
I would then have a separate model for each kind of data point (number, text, date, etc.), which would look like this:
class TextData(models.Model):
    fieldName = models.CharField(etc)
    verboseName = models.CharField(etc)
    data = models.CharField(etc)
    product = models.ForeignKey(Product)
Finally, I would make a model that defines what kinds of data are relevant for each product-type, and what each data point should be called.
class ProductType(models.Model):
    name = models.CharField(etc)
    ....
So the main question is: what would be the best way to fill out the relevant fields for a product type, when they can be varying? Perhaps another model that holds the name/type of a data point, and a ManyToManyField between ProductType and that model? A text field with XML? An outside XML field defining the structure? Abandon this entire pursuit and go with something better?
Thanks for any help!
| [
"If you are looking for implementations of dynamic attributes for models in django in some kind of eav style, have a look at eav-django, or at django-expando.\n",
"This is ordinary relational database design. Don't over-optimize it with OO and inheritance techniques. \nYou have a Product Category table with (pr... | [
2,
1
] | [] | [] | [
"django",
"models",
"orm",
"python"
] | stackoverflow_0003610327_django_models_orm_python.txt |
Q:
How come this way of ending a thread is not working?
I just came up with my noob way of ending a thread, but I don't know why it's not working. Would somebody please help me out?
Here's my sample code:
import wx
import thread
import time
import threading
class TestFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, parent = None, id = -1, title = "Testing", pos=(350, 110), size=(490, 200), style=wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX | wx.MINIMIZE_BOX)
        self.panel = wx.Panel(self)
        self.stop = False
        self.StartButton = wx.Button(parent = self.panel, id = -1, label = "Start", pos = (110, 17), size = (50, 20))
        self.MultiLine = wx.TextCtrl(parent = self.panel, id = -1, pos = (38, 70), size = (410, 90), style = wx.TE_MULTILINE|wx.TE_READONLY|wx.TE_AUTO_URL)
        self.Bind(wx.EVT_BUTTON, self.OnStart, self.StartButton)
        self.Bind(wx.EVT_CLOSE, self.OnClose)

    def OnStart(self, event):
        self.StartButton.Disable()
        self.NewThread = threading.Thread(target = self.LongRunning)
        self.NewThread.start()

    def OnClose(self, event):
        self.stop = True
        BusyBox = wx.BusyInfo("Just a moment please!", self)
        wx.Yield()
        while True:
            try:
                if not self.NewThread.isAlive():
                    self.Destroy()
                    break
                time.sleep(0.5)
            except:
                self.Destroy()
                break

    def LongRunning(self):
        Counter = 1
        while True:
            time.sleep(2)
            print "Hello, ", Counter
            self.MultiLine.AppendText("hello, " + str(Counter) + "\n") # If you comment out this line, everything works fine. Why can't I update the frame after I hit the close button?
            Counter = Counter + 1
            if self.stop:
                break

class TestApp(wx.App):
    def OnInit(self):
        self.TestFrame = TestFrame()
        self.TestFrame.Show()
        self.SetTopWindow(self.TestFrame)
        return True

def main():
    App = TestApp(redirect = False)
    App.MainLoop()

if __name__ == "__main__":
    main()
As you can see in my code, there's an infinite loop in the thread; what I tell the thread to do is break out of the loop once I click the close button. But the problem is, every time I hit the close button, the code seems to get stuck at the self.MultiLine.AppendText("hello, " + str(Counter) + "\n") line, and I don't know why. Can anybody help?
A:
Try using a thread-safe method such as wx.CallAfter when updating your MultiLine.
def LongRunning(self):
    Counter = 1
    while True:
        time.sleep(2)
        print "Hello, ", Counter
        wx.CallAfter(self.updateMultiLine, "hello, " + str(Counter) + "\n")
        Counter = Counter + 1
        if self.stop:
            break

def updateMultiLine(self, data):
    self.MultiLine.AppendText(data)
A:
In general with GUI toolkits, only one thread should access GUI functions. An exception is wx.CallAfter
As you (should) know, software defects can be classified into three groups:
Your bugs.
Their bugs.
Threads.
;)
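Apart from the wx.CallAfter point, the boolean stop flag can be replaced with a threading.Event, which also gives the worker an interruptible sleep; a minimal sketch with no GUI involved:

```python
import threading

stop_event = threading.Event()

def long_running():
    counter = 1
    while not stop_event.is_set():
        counter += 1
        # behaves like time.sleep(0.05), but returns early once set() is called
        stop_event.wait(0.05)

worker = threading.Thread(target=long_running)
worker.start()
stop_event.set()          # ask the worker to finish
worker.join(timeout=1.0)  # returns quickly instead of waiting out a full sleep
```

This avoids the close handler's busy-wait loop: the event wakes the worker immediately, so join returns almost at once.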
| How come this way of ending a thread is not working? | I just came up with my noob way of ending a thread, but I don't know why it's not working. Would somebody please help me out?
Here's my sample code:
import wx
import thread
import time
import threading
class TestFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, parent = None, id = -1, title = "Testing", pos=(350, 110), size=(490, 200), style=wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX | wx.MINIMIZE_BOX)
        self.panel = wx.Panel(self)
        self.stop = False
        self.StartButton = wx.Button(parent = self.panel, id = -1, label = "Start", pos = (110, 17), size = (50, 20))
        self.MultiLine = wx.TextCtrl(parent = self.panel, id = -1, pos = (38, 70), size = (410, 90), style = wx.TE_MULTILINE|wx.TE_READONLY|wx.TE_AUTO_URL)
        self.Bind(wx.EVT_BUTTON, self.OnStart, self.StartButton)
        self.Bind(wx.EVT_CLOSE, self.OnClose)

    def OnStart(self, event):
        self.StartButton.Disable()
        self.NewThread = threading.Thread(target = self.LongRunning)
        self.NewThread.start()

    def OnClose(self, event):
        self.stop = True
        BusyBox = wx.BusyInfo("Just a moment please!", self)
        wx.Yield()
        while True:
            try:
                if not self.NewThread.isAlive():
                    self.Destroy()
                    break
                time.sleep(0.5)
            except:
                self.Destroy()
                break

    def LongRunning(self):
        Counter = 1
        while True:
            time.sleep(2)
            print "Hello, ", Counter
            self.MultiLine.AppendText("hello, " + str(Counter) + "\n") # If you comment out this line, everything works fine. Why can't I update the frame after I hit the close button?
            Counter = Counter + 1
            if self.stop:
                break

class TestApp(wx.App):
    def OnInit(self):
        self.TestFrame = TestFrame()
        self.TestFrame.Show()
        self.SetTopWindow(self.TestFrame)
        return True

def main():
    App = TestApp(redirect = False)
    App.MainLoop()

if __name__ == "__main__":
    main()
As you can see in my code, there's an infinite loop in the thread; what I tell the thread to do is break out of the loop once I click the close button. But the problem is, every time I hit the close button, the code seems to get stuck at the self.MultiLine.AppendText("hello, " + str(Counter) + "\n") line, and I don't know why. Can anybody help?
| [
"Try using a thread safe method such as wx.CallAfter when updating your multiline.\n def LongRunning(self):\n Counter = 1\n\n while True:\n time.sleep(2)\n print \"Hello, \", Counter\n\n wx.CallAfter(self.updateMultiLine, \"hello, \" + str(Counter) + \"\\n\")\n Counter = Coun... | [
2,
1
] | [] | [] | [
"python",
"wx.textctrl",
"wxpython"
] | stackoverflow_0003609627_python_wx.textctrl_wxpython.txt |
Q:
doesPythonLikeCamels
Are Java-style camelCase names good practice in Python? I know capitalized names should be reserved by convention for class names. Methods should be lowercase according to good style, or actually I am not so sure. Is there a PEP about naming?
COMMENTS:
Sorry for the camels :) I learned from the PEP 8 answer that my title's style is actually properly called mixedCase (the capitalized version is CamelCase). After reading the PEP, I know that the usual lowercase names with underscores should be used for methods as well.
A:
PEP 8 contains all the answers.
A:
It's best to match whatever your organization uses or is comfortable with. Preaching "the One True Python style" doesn't exactly build harmony if everyone else already uses some other uniform manner. If it's some random hodgepodge of styles, then go ahead and advocate for some unification.
A:
Yes there's a PEP on code style, PEP 8.
Check "Naming Conventions"
| doesPythonLikeCamels | Are Java-style camelCase names good practice in Python? I know capitalized names should be reserved by convention for class names. Methods should be lowercase according to good style, or actually I am not so sure. Is there a PEP about naming?
COMMENTS:
Sorry for the camels :) I learned from the PEP 8 answer that my title's style is actually properly called mixedCase (the capitalized version is CamelCase). After reading the PEP, I know that the usual lowercase names with underscores should be used for methods as well.
| [
"PEP 8 contains all the answers.\n",
"It's best to match whatever your organization uses or is comfortable with. Preaching \"the One True Python style\" doesn't exactly build harmony if everyone else already uses some other uniform manner. If it's some random hodgepodge of styles, then go ahead and advocate for... | [
10,
2,
1
] | [] | [] | [
"case",
"convention",
"naming",
"pep",
"python"
] | stackoverflow_0003610071_case_convention_naming_pep_python.txt |
Q:
How to read a musical file using python and identify the various frequency levels of the notes?
Please help me with the Python... this is my project topic...
A:
Fourier transforms. Learn some basics about music and signals before even considering code.
Basic Outline:
Audio Import
See http://wiki.python.org/moin/Audio/ and find one that will import your (unspecified) file.
Analysis
Get numpy.
>>> from numpy.fft import fft
>>> a = abs(fft([1,2,3,2]*4))
>>> a
array([ 32.,   0.,   0.,   0.,   8.,   0.,   0.,   0.,
         0.,   0.,   0.,   0.,   8.,   0.,   0.,   0.])
We can clearly see the DC component at index 0, then the major AC component at fs/4 and again at 3*fs/4, because for a real signal the spectrum is mirrored about the Nyquist frequency (fs/2).
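To get from the FFT to an actual pitch, the usual recipe is: take the magnitude spectrum of a chunk of samples, find the peak bin, and convert it back to Hertz with bin * fs / N. A rough sketch on a synthetic 440 Hz tone (the sample rate and chunk length are arbitrary choices here, and real recordings need windowing plus more robust peak picking):

```python
import numpy as np

fs = 8000.0                     # hypothetical sample rate, in Hz
n = 1024                        # chunk length
t = np.arange(n) / fs
samples = np.sin(2 * np.pi * 440.0 * t)      # a pure A4 tone

spectrum = np.abs(np.fft.rfft(samples))      # rfft keeps only the non-mirrored half
peak_bin = int(np.argmax(spectrum[1:])) + 1  # skip the DC bin at index 0
freq = peak_bin * fs / n                     # bin index -> frequency in Hz
```

The frequency resolution is fs/n (about 7.8 Hz here), so a longer chunk gives a more precise pitch estimate.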
| How to read a musical file using python and identify the various frequency levels of the notes? | Please help me with the Python... this is my project topic...
| [
"Fourier transforms. Learn some basics about music and signals before even considering code.\nBasic Outline:\nAudio Import\nSee http://wiki.python.org/moin/Audio/ and find one that will import your (unspecified) file.\nAnalysis\nGet numpy.\n>>> from numpy.fft import fft\n>>> a = abs(fft([1,2,3,2]*4))\n>>> a\narray... | [
2
] | [] | [] | [
"python"
] | stackoverflow_0003610847_python.txt |
Q:
Is it possible to memcache a json result in App Engine?
I think my question is already clear enough, but to make it even clearer I will illustrate it with my example.
I'm currently returning a lot of JSON on every request, which I would like to cache in some way. I thought memcache would be great, but I have only seen memcache used for caching queries.
A:
JSON is just text, so yes, you can store it in memcache.
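The pattern is plain get-or-compute, with the serialized string as the cached value. A sketch using a dict as a stand-in for memcache (cached_json is my own name; on App Engine the two marked lines would be memcache.get(key) and memcache.set(key, payload, time=...)):

```python
import json

cache = {}  # stand-in; on App Engine use google.appengine.api.memcache

def cached_json(key, compute):
    payload = cache.get(key)             # memcache.get(key)
    if payload is None:
        payload = json.dumps(compute())  # build the JSON once...
        cache[key] = payload             # ...memcache.set(key, payload, time=60)
    return payload                       # serve this string directly
```

Caching the already-serialized string also saves the json.dumps cost on every hit, not just the query cost.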
| Is it possible to memcache a json result in App Engine? | I think my question is already clear enough, but to make it even clearer I will illustrate it with my example.
I'm currently returning a lot of JSON on every request, which I would like to cache in some way. I thought memcache would be great, but I have only seen memcache used for caching queries.
| [
"JSON is just text, so yes, you can store it in memcache.\n"
] | [
7
] | [] | [] | [
"google_app_engine",
"json",
"python"
] | stackoverflow_0003610854_google_app_engine_json_python.txt |
Q:
python and ruby equivalent of perls Template::Declare?
CPAN has the Template::Declare package. A declarative way to create HTML templates in Perl without any HTML directly written.
I would love to use similar packages in python and ruby. Are there equivalent packages for those languages?
A:
In Ruby there is Markaby. The closest I know of in Python is Brevé.
Also there are a few more in Perl and other languages as well.
/I3az/
A:
If you like the look of Markaby, also see Erector which is inspired by it but said to be somewhat cleaner
A:
Are you looking for something like Haml in Ruby?
Example for Ruby :
$ sudo gem install haml
$ irb
> require 'haml'
> Haml::Engine.new('%p Hello, World').render
=> "<p>Hello, World</p>\n"
| python and ruby equivalent of perls Template::Declare? | CPAN has the Template::Declare package. A declarative way to create HTML templates in Perl without any HTML directly written.
I would love to use similar packages in python and ruby. Are there equivalent packages for those languages?
| [
"In Ruby there is Markaby. The closest I know if in Python is Brevé.\nAlso there are a few more in Perl and other languages as well.\n/I3az/\n",
"If you like the look of Markaby, also see Erector which is inspired by it but said to be somewhat cleaner\n",
"Are you looking for something like Haml in Ruby ?\nExa... | [
6,
1,
0
] | [] | [] | [
"perl",
"python",
"ruby"
] | stackoverflow_0003607491_perl_python_ruby.txt |
Q:
How to empty a Python list without doing list = []?
If the my_list variable is global, you can't do:
my_list = []
that just creates a new reference in the local scope.
Also, I find using the global keyword disgusting, so how can I empty a list using its methods?
A:
del a[:]
or
a[:] = []
A:
How about the following to delete all list items:
def emptyit():
    del l[:]
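The difference between rebinding and emptying in place is easiest to see with a second name bound to the same list (Python 3.3+ also offers list.clear(), which is equivalent to del a[:]):

```python
a = [1, 2, 3]
b = a        # b names the same list object as a
a = []       # rebinds the name a only; b still sees the old contents

c = [1, 2, 3]
d = c
del c[:]     # empties the object itself, so d is empty too
```

That is why del my_list[:] works on a global without the global keyword: it mutates the object instead of rebinding the name.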
| How to empty a Python list without doing list = []? | If the my_list variable is global, you can't do:
my_list = []
that just creates a new reference in the local scope.
Also, I find using the global keyword disgusting, so how can I empty a list using its methods?
| [
"del a[:]\n\nor\na[:] = []\n\n",
"How about the following to delete all list items:\ndef emptyit():\n del l[:]\n\n"
] | [
11,
1
] | [] | [] | [
"python",
"types"
] | stackoverflow_0003611203_python_types.txt |
Q:
special character at the begin who match with the end from every word [only regex]
What's the best solution, using a regex, to remove special characters from the beginning and the end of every word?
"as-df-- as-df- as-df (as-df) 'as-df' asdf-asdf) (asd-f asdf' asd-f' -asdf- %asdf%s asdf& $asdf$ +asdf+ asdf++ asdf''"
the output should be:
"as-df-- as-df- as-df (as-df) as-df asdf-asdf) (asd-f asdf' asd-f' asdf %asdf%s asdf& asdf asdf asdf++ asdf''"
If the special character at the beginning matches the one at the end, remove it.
I am learning about regex.
[only regex]
A:
For Perl, how about /\b([^\s\w])\w+\1\b/g? Note things like \b don't work in all regex languages.
Oops, as @Nick pointed out, this doesn't work for non-identical pairs, like () [] etc.
Instead you could do:
s/\b([^\s\w([\]){}])(\w+)\1\b/\2/g
s/\b\((\w+)\)\b/\1/g
s/\b\[(\w+)\]\b/\1/g
s/\b\{(\w+)\}\b/\1/g
(untested)
A:
import re
a = ("as-df-- as-df- as-df (as-df) 'as-df' asdf-asdf) (asd-f "
     "asdf' asd-f' -asdf- %asdf%s asdf& $asdf$ +asdf+ asdf''")
b = re.sub(r"((?<=\s)|\A)(?P<chr>[-()+%&'$])([^\s]*)(?P=chr)((?=\s)|\Z)",r"\3",a)
print b
Gives:
as-df-- as-df- as-df (as-df) as-df asdf-asdf) (asd-f
asdf' asd-f' asdf %asdf%s asdf& asdf asdf asdf++ asdf''
Getting non-identical characters to work is trickier: (), [], {}
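Since the expected output only strips a character when the leading and trailing characters are identical (note that (as-df) is kept), a single backreference-based substitution covers every case in the sample:

```python
import re

text = ("as-df-- as-df- as-df (as-df) 'as-df' asdf-asdf) (asd-f "
        "asdf' asd-f' -asdf- %asdf%s asdf& $asdf$ +asdf+ asdf++ asdf''")

# (?<!\S) and (?!\S) anchor the match to whole whitespace-separated tokens;
# \1 requires the trailing character to equal the one captured at the front
cleaned = re.sub(r"(?<!\S)([^\w\s])(\S*)\1(?!\S)", r"\2", text)
```

Tokens like %asdf%s or asdf++ survive because the special character is not at both the first and last position of the token.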
| special character at the begin who match with the end from every word [only regex] | What's the best solution, using a regex, to remove special characters from the beginning and the end of every word?
"as-df-- as-df- as-df (as-df) 'as-df' asdf-asdf) (asd-f asdf' asd-f' -asdf- %asdf%s asdf& $asdf$ +asdf+ asdf++ asdf''"
the output should be:
"as-df-- as-df- as-df (as-df) as-df asdf-asdf) (asd-f asdf' asd-f' asdf %asdf%s asdf& asdf asdf asdf++ asdf''"
If the special character at the beginning matches the one at the end, remove it.
I am learning about regex.
[only regex]
| [
"For Perl, how about /\\b([^\\s\\w])\\w+\\1\\b/g? Note things like \\b don't work in all regex languages.\nOops, as @Nick pointed out, this doesn't work for non-identical pairs, like () [] etc.\nInstead you could do:\n s/\\b([^\\s\\w([\\]){}])\\w+\\1\\b/\\2/g\n s/\\b\\((\\w+)\\)\\b/\\1/g\n s/\\b\\[(\\w+)\\]\\b/\\1/... | [
1,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003611139_python_regex.txt |
Q:
python: confusion with local class name
I have the following code:
def f():
    class XYZ:
        # ...
    cls = type('XXX', (XYZ, ), {})
    # ...
    return cls
I am now using it as follows:
C1 = f()
C2 = f()
and it seems to work fine: C1 is C2 returns False, there's no conflict between the class attributes of the two classes, etc.
Question 1
Why is that? How is it possible that C1 and C2 are both shown as class
<'__main__.XXX'>
and yet not the same class?
Question 2
Is there some problem with the fact that I have two identical names for two different classes?
Question 3
I would like to be able to write instead:
f('C1')
f('C2')
with the same effect. Is it possible?
Question 4
If I want C1 to look like a regular class, not main.XXX, is it ok to say:
C1.__name__ = '__main__.C1'
A:
Question 3
To have cls.__name__ be anything you want (with a nod to delnan's suggestion):
def f(clsname):
    class XYZ:
        # ...
    XYZ.__name__ = clsname
    # ...
    return XYZ
Question 1
The reason that C1 is not C2 is that they are two different objects stored at two different locations in memory.
Question 4
Try an answer to question 1 and see how it works out for you
Question 2
It can complicate debugging that their class attributes __name__ share a common value and this is bad enough to take pains to avoid. (see question 3). I would maintain though that they don't have the same name. One is named C1 and the other is named C2 (at least in the scope you are showing. If you were to pass them to a function, then there name in that scope would be the same as the name of parameter that they were passed through)
In fact, I'm so sure that they don't have the same name that trying to tell me otherwise is likely to cause me to turn the music up louder and pretend I can't hear you.
In response to comment
It can be done but it's just wrong. I'll illustrate anyway because it's illuminating:
def f(clsname):
class XYZ(object):
pass
XYZ.__name__ = clsname
globals()[clsname] = XYZ
f('C1')
f('C2')
print C1
print C2
This just works by sticking the class in the globals dict keyed by clsname. But what's the point? You can stick it in the globals dict under any name in fact because this is just another assignment. You are best off just returning the class from the function and letting the caller decide what name to give the class in it's own scope. You still have the __name__ attribute of the class set to the string you pass to the function for debugging purposes.
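The cleaner pattern described here — return the class and only set `__name__` for debugging — can be sketched like this (a minimal illustration; `make_class` is a hypothetical name):

```python
def make_class(clsname):
    class XYZ(object):
        pass
    XYZ.__name__ = clsname  # only affects repr/debugging output, not where the class lives
    return XYZ

# The caller decides what name to bind the class to in its own scope.
C1 = make_class('C1')
C2 = make_class('C2')
print(C1.__name__, C2.__name__)  # C1 C2
print(C1 is C2)                  # False
```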
A:
Actually, you don't need the cls = ... line at all.
>>> def f():
... class C:
... pass
... return C
...
>>> f() is f()
False
Reason: class (as well as e.g. def) defines a new class each time it is encountered = each time the function is called.
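To see concretely that the question's observation holds — each call builds a brand-new class object, so class attributes set on one never appear on the other — here is a small self-contained demonstration:

```python
def f():
    class C(object):
        pass
    return C

C1, C2 = f(), f()
C1.x = 1                  # class attribute on the first class only
print(C1 is C2)           # False -- two distinct class objects
print(hasattr(C2, 'x'))   # False -- no attribute leakage between them
```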
As for cls.__name__, there's really no semantic difference. The name is useful for debugging (you don't expose it directly to the user, do you?) and introspection, but it shouldn't be an issue. But if you absolutely want to have different names, you can change cls.__name__ before returning (also note that after C.__name__ = 'foo', the class still displays as <class '__main__.foo'> — the module prefix comes from repr, it is not stored in __name__).
At question 3: It would be possible to inject it directly into the global namespace... don't do this. It has no advantages, only disadvantages: nonobvious side effects, bad style, the fact it's a hack at all, etc!
| python: confusion with local class name | I have the following code:
def f():
class XYZ:
# ...
cls = type('XXX', (XYZ, ), {})
# ...
return cls
I am now using it as follows:
C1 = f()
C2 = f()
and it seems to work fine: C1 is C2 returns False, there's no conflict between the class attributes of the two classes, etc.
Question 1
Why is that? How is it possible that C1 and C2 are both shown as class
<'__main__.XXX'>
and yet not the same class?
Question 2
Is there some problem with the fact that I have two identical names for two different classes?
Question 3
I would like to be able to write instead:
f('C1')
f('C2')
with the same effect. Is it possible?
Question 4
If I want C1 to look like a regular class, not main.XXX, is it ok to say:
C1.__name__ = '__main__.C1'
| [
"Question 3\nTo have cls.__name__ be anything you want, (with a nod to delnan's suggestion)\ndef f(clsname):\n class XYZ:\n # ...\n XYZ.__name__ = clsname\n # ...\n return XYZ\n\nQuestion 1\nThe reason that c1 is not c2 is that they are two different objects stored at two different locations in memor... | [
2,
2
] | [] | [] | [
"class",
"namespaces",
"python"
] | stackoverflow_0003611432_class_namespaces_python.txt |
Q:
Storing and escaping Django tags and filters in Django models
I am outputting content from my models to my templates, however some model fields call data stored in other models. This happens only in a few fields. I am wondering whether using an if tag to evaluate this would be more efficient compared to storing the django tags inside the models.
Answers to this question say that storing django tags in models is a bad idea without giving reasons (although I think one of the reasons may be someone else may inject some tags in the Database). Assuming that database injection is a rarity, is there a way of escaping Django tags and filters stored in a model.
Or better yet, what would be the most efficient method to handle the above situation where one model field in several fields calls fields stored in another model.
Example:
This should be stored in my models
<p>We focus on:</p>
{% for item in services %}
{% url service_view item.id as service_url %}
<ul>
<li><a href="service_url">{{item.title}}</a></li>
</ul>
{% endfor %}
Outputting it should result in django parsing the relevant django tags as if part of the template
A:
Thanks Ned, I tried implementing that but I found it to be quite complex, and it's also disadvantageous in terms of portability.
However, I found exactly what I needed at Django Snippets (don't know why I didn't look there first). It's a quite useful utility known as render_as_template.
After setting it up as a custom tag, all that I needed was to use it in the form {% render_as_template about_view.content %}, and the tags stored in the models were rendered as if they were part of the template.
Instructions on creating your own custom templates and tags available here
A:
django-dbtemplates probably comes close to doing what you want.
A:
You should be using inclusion tags and then include that tag wherever you'd like the html to be rendered. The model should just be generating values for the variables, not formatting.
| Storing and escaping Django tags and filters in Django models | I am outputting content from my models to my templates, however some model fields call data stored in other models. This happens only in a few fields. I am wondering whether using an if tag to evaluate this would be more efficient compared to storing the django tags inside the models.
Answers to this question say that storing django tags in models is a bad idea without giving reasons (although I think one of the reasons may be someone else may inject some tags in the Database). Assuming that database injection is a rarity, is there a way of escaping Django tags and filters stored in a model.
Or better yet, what would be the most efficient method to handle the above situation where one model field in several fields calls fields stored in another model.
Example:
This should be stored in my models
<p>We focus on:</p>
{% for item in services %}
{% url service_view item.id as service_url %}
<ul>
<li><a href="service_url">{{item.title}}</a></li>
</ul>
{% endfor %}
Outputting it should result in django parsing the relevant django tags as if part of the template
| [
"Thanks Ned, I tried implementing that but I found it to be quite complex and its also disadvantageous in terms of portability. \nHowever, I found exactly what I needed at Django Snippets (dont know why I didn't look there first). Its a quite useful utility known as render_as_template. \nAfter setting it up as a cu... | [
1,
0,
0
] | [] | [] | [
"django",
"django_templates",
"python"
] | stackoverflow_0003594909_django_django_templates_python.txt |
Q:
List query with, facebook friends in list?
In a python based facebook application on GAE, i want to check which friends of current user have "marked" a web page or not.
For this i have to run as many DB queries as the number of friends (say 100)
I fear this may run into "timeout" because of large no of queries.
Google DOCs suggest that "list" queries run in parallel, will this save time ??
Also list has a limit of 30, so i have to make 2 or 3 queries of list type.
Please suggest a better way if possible, using task ques or something....
A:
I would suggest the following:
Make 'marked' entities child entities of the users who have marked them.
Use a key name for the 'marked' entity that is based on the URL of the page marked
To find friends who have marked a page, retrieve a list of friends, then generate the list of entity keys from the list of friends (easy, since you know the friend key and the URL), and do a single batch get to retrieve a list of 'mark' entities indicating which friends have marked that page.
A:
You can fetch up to 1000 entities in parallel if you already know their keys or their key names.
There are a few ways to solve your specific problem. Here are is one.
Let's assume that when a user "marks" a web page, you create an entity with a key_name that derives from a user's facebook id and the page key.
class PageMarker(db.Model):
user = db.ReferenceProperty(AppUser)
....
@classmethod
def mark_page(cls, user, page_key):
        marker = cls.get_or_insert("%s_%s" % (user.facebook_id, page_key),
                                   user=user)
This allows you to fetch all the users who marked a page in parallel:
key_names = ["%s_%s" % (friend.facebook_id, page_key) for friend in friends]
markers = db.get(key_names)
# Use get_value_for_datastore to get the entity key without making a trip to the
# datastore
friends_who_bookmarked_keys = [marker.__class__.user.get_value_for_datastore(marker)\
for marker in markers]
friends = db.get(friends_who_bookmarked_keys)
| List query with, facebook friends in list? | In a python based facebook application on GAE, i want to check which friends of current user have "marked" a web page or not.
For this i have to run as many DB queries as the number of friends (say 100)
I fear this may run into "timeout" because of large no of queries.
Google DOCs suggest that "list" queries run in parallel, will this save time ??
Also list has a limit of 30, so i have to make 2 or 3 queries of list type.
Please suggest a better way if possible, using task ques or something....
| [
"I would suggest the following:\n\nMake 'marked' entities child entities of the users who have marked them.\nUse a key name for the 'marked' entity that is based on the URL of the page marked\nTo find friends who have marked a page, retrieve a list of friends, then generate the list of entity keys from the list of ... | [
1,
1
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003606669_google_app_engine_python.txt |
Q:
in python; convert list of files to file like object
Er, so im juggling parsers and such, and I'm going from one thing which processes files to another.
The output from the first part of my code is a list of strings; I'm thinking of each string as a line from a text file.
The second part of the code needs a file type as an input.
So my question is, is there a proper, pythonic, way to convert a list of strings into a file like object?
I could write my list of strings to a file, and then reopen that file and it would work fine, but it seems a little silly to have to write to disk if not necessary.
I believe all the second part needs is to call 'read()' on the file like object, so I could also define a new class, with read as a method, which returns one long string, which is the concatenation of all of the line strings.
thanks,
-nick
A:
StringIO implements (nearly) all stdio methods. Example:
>>> import StringIO
>>> StringIO.StringIO("hello").read()
'hello'
cStringIO is a faster counterpart.
To convert your list of string, just join them:
>>> list_of_strings = ["hello", "line two"]
>>> handle = StringIO.StringIO('\n'.join(list_of_strings))
>>> handle.read()
'hello\nline two'
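Note for newer Pythons: in Python 3 the StringIO and cStringIO modules are gone, and the equivalent lives in the io module. The same idea, sketched with io.StringIO:

```python
import io

list_of_strings = ["hello", "line two"]
handle = io.StringIO("\n".join(list_of_strings))  # a file-like object backed by the string
print(repr(handle.read()))  # 'hello\nline two'
```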
| in python; convert list of files to file like object | Er, so im juggling parsers and such, and I'm going from one thing which processes files to another.
The output from the first part of my code is a list of strings; I'm thinking of each string as a line from a text file.
The second part of the code needs a file type as an input.
So my question is, is there a proper, pythonic, way to convert a list of strings into a file like object?
I could write my list of strings to a file, and then reopen that file and it would work fine, but it seems a little silly to have to write to disk if not necessary.
I believe all the second part needs is to call 'read()' on the file like object, so I could also define a new class, with read as a method, which returns one long string, which is the concatenation of all of the line strings.
thanks,
-nick
| [
"StringIO implements (nearly) all stdio methods. Example:\n>>> import StringIO\n>>> StringIO.StringIO(\"hello\").read()\n'hello'\n\ncStringIO is a faster counterpart.\nTo convert your list of string, just join them:\n>>> list_of_strings = [\"hello\", \"line two\"]\n>>> handle = StringIO.StringIO('\\n'.join(list_of_... | [
12
] | [] | [] | [
"python",
"text"
] | stackoverflow_0003611972_python_text.txt |
Q:
How to fix broken relative links in offline webpages?
I wrote a simple Python script to download a web page for offline viewing. The problem is that the relative links are broken. So the offline file "c:\temp\webpage.html" has a href="index.aspx" but when opened in a browser it resolves to "file:///C:/temp/index.aspx" instead of "http://myorginalwebsite.com/index.aspx".
So I imagine that I would have to modify my script to fix each of the relative links so that it points to the original website. Is there an easier way? If not, anyone have some sample Python code that can do this? I'm a Python newbie so any pointers will be appreciated.
Thanks.
A:
If you just want your relative links to refer to the website, just add a base tag in the head:
<base href="http://myoriginalwebsite.com/" />
A:
lxml makes this braindead simple!
>>> import lxml.html, urllib
>>> url = 'http://www.google.com/'
>>> e = lxml.html.parse(urllib.urlopen(url))
>>> e.xpath('//a/@href')[-4:]
['/intl/en/ads/', '/services/', '/intl/en/about.html', '/intl/en/privacy.html']
>>> e.getroot().make_links_absolute()
>>> e.xpath('//a/@href')[-4:]
['http://www.google.com/intl/en/ads/', 'http://www.google.com/services/', 'http://www.google.com/intl/en/about.html', 'http://www.google.com/intl/en/privacy.html']
From there you can write the DOM out to disk as a file.
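If pulling in lxml is not an option, the standard library can resolve individual relative links against a base URL the same way a browser would — a sketch using urljoin (urllib.parse in Python 3, urlparse in Python 2):

```python
from urllib.parse import urljoin  # Python 2: from urlparse import urljoin

base = "http://myoriginalwebsite.com/"
print(urljoin(base, "index.aspx"))          # http://myoriginalwebsite.com/index.aspx
print(urljoin(base, "http://other.com/x"))  # already-absolute links pass through unchanged
```

You would still need to parse the HTML and rewrite each href/src attribute yourself; urljoin only handles the URL arithmetic.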
A:
So you want to check all links that start with http://, and for any that don't, append http://myoriginalwebsite.com to the front of the string, then test for a connection?
Sounds easy enough. Or is it the python code proper you're having issues with?
| How to fix broken relative links in offline webpages? | I wrote a simple Python script to download a web page for offline viewing. The problem is that the relative links are broken. So the offline file "c:\temp\webpage.html" has a href="index.aspx" but when opened in a browser it resolves to "file:///C:/temp/index.aspx" instead of "http://myorginalwebsite.com/index.aspx".
So I imagine that I would have to modify my script to fix each of the relative links so that it points to the original website. Is there an easier way? If not, anyone have some sample Python code that can do this? I'm a Python newbie so any pointers will be appreciated.
Thanks.
| [
"If you just want your relative links to refer to the website, just add a base tag in the head:\n<base href=\"http://myoriginalwebsite.com/\" />\n\n",
"lxml makes this braindead simple!\n>>> import lxml.html, urllib\n>>> url = 'http://www.google.com/'\n>>> e = lxml.html.parse(urllib.urlopen(url))\n>>> e.xpath('//... | [
5,
1,
0
] | [] | [] | [
"html",
"hyperlink",
"offline_browsing",
"python"
] | stackoverflow_0003611961_html_hyperlink_offline_browsing_python.txt |
Q:
Python ElementTree Check the node / element type
I am using ElementTree and cannot figure out if the childnode is text or not. childelement.text does not seem to work as it gives false positive even on nodes which are not text nodes.
Any suggestions?
Example
<tr>
<td><a href="sdas3">something for link</a></td>
<td>tttttk</td>
<td><a href="tyty">tyt for link</a></td>
</tr>
After parsing this xml file, I do this in Python:
for elem_main in container_trs: #elem_main is each tr
elem0 = elem_main.getchildren()[0] #td[0]
elem1 = elem_main.getchildren()[1] #td[1]
elem0 = elem_main.getchildren()[0]
print elem0.text
elem1 = elem_main.getchildren()[1]
print elem1.text
The above code does not output elem0.text; it is blank. I do see the elem1.text (that is, tttttk) in the output.
Update 2
I am actually building a dictionary of the text from the <td> element with each <a>, so that I can sort the HTML table. How would I get the <a>s in this code?
A:
How about using the getiterator method to iterate through the all the descendant nodes:
import xml.etree.ElementTree as xee
content='''
<tr>
<td><a href="sdas3">something for link</a></td>
<td>tttttk</td>
<td><a href="tyty">tyt for link</a></td>
</tr>
'''
def text_content(node):
result=[]
for elem in node.getiterator():
text=elem.text
if text and text.strip():
result.append(text)
return result
container_trs=xee.fromstring(content)
adict={}
for elem in container_trs:
adict[elem]=text_content(elem)
print(adict)
# {<Element td at b767e52c>: ['tttttk'], <Element td at b767e58c>: ['tyt for link'], <Element td at b767e36c>: ['something for link']}
The loop for elem_main in container_trs: iterates through the children of container_trs.
In contrast, the loop for elem_main in container_trs.getiterator(): iterates through container_trs itself, and its children, and grand-children, etc.
A:
elem0.text is None because the text is actually part of the <a> subelement. Just go one level deeper:
print elem0.getchildren()[0].text
By the way, elem0[0].text is a shortcut for that same construct -- no need for getchildren().
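Putting the answers together, a self-contained sketch of why elem0.text is blank (Python 3 syntax; there getchildren() is deprecated in favor of plain indexing):

```python
import xml.etree.ElementTree as ET

content = '''<tr>
<td><a href="sdas3">something for link</a></td>
<td>tttttk</td>
<td><a href="tyty">tyt for link</a></td>
</tr>'''

tr = ET.fromstring(content)
print(tr[0].text)     # None -- the text belongs to the <a> child, not the <td>
print(tr[0][0].text)  # something for link
print(tr[1].text)     # tttttk
```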
| Python ElementTree Check the node / element type | I am using ElementTree and cannot figure out if the childnode is text or not. childelement.text does not seem to work as it gives false positive even on nodes which are not text nodes.
Any suggestions?
Example
<tr>
<td><a href="sdas3">something for link</a></td>
<td>tttttk</td>
<td><a href="tyty">tyt for link</a></td>
</tr>
After parsing this xml file, I do this in Python:
for elem_main in container_trs: #elem_main is each tr
elem0 = elem_main.getchildren()[0] #td[0]
elem1 = elem_main.getchildren()[1] #td[1]
elem0 = elem_main.getchildren()[0]
print elem0.text
elem1 = elem_main.getchildren()[1]
print elem1.text
The above code does not output elem0.text; it is blank. I do see the elem1.text (that is, tttttk) in the output.
Update 2
I am actually building a dictionary of the text from the <td> element with each <a>, so that I can sort the HTML table. How would I get the <a>s in this code?
| [
"How about using the getiterator method to iterate through the all the descendant nodes:\nimport xml.etree.ElementTree as xee\n\ncontent='''\n<tr>\n <td><a href=\"sdas3\">something for link</a></td>\n <td>tttttk</td>\n <td><a href=\"tyty\">tyt for link</a></td>\n</tr>\n'''\n\ndef text_content(node):\n result=... | [
1,
1
] | [] | [] | [
"elementtree",
"python"
] | stackoverflow_0003611513_elementtree_python.txt |
Q:
Does django with mongodb make migrations a thing of the past?
Since mongo doesn't have a schema, does that mean that we won't have to do migrations when we change the models?
What does the migration process look like with a non-relational db?
A:
I think this is a really good question, but the answers are going to be a little scattered based on the libs you're using and your expectations for a "migration".
Let's take a look at some common migration actions:
Add a field: Mongo makes this very easy. Just add a field and you're done.
Delete a field: In theory, you're not actually tied to your schema, so "deletion" here is relative. If you remove the "property" and no longer load the field, then it doesn't really matter if that field is in the data. So if you don't care about "cleaning up" the database, then removing a field doesn't affect the database. If you do care about cleaning the DB, you'll basically need to run a giant for loop against the DB.
Modify a field name: This is also a difficult problem. When you rename a field "where" are you renaming it? If you want the DB to reflect the new field name, then you basically have to execute a giant for loop on the DB. To be safe you probably have to "add" data, then push code, then "unset" the old field.
Some Wrinkles
However, the concept of a field name in tandem with an ActiveRecord object is just a little skewed. An ActiveRecord object is effectively providing mappings of object properties to actual database fields.
In a typical RDBMS the "size" of a field name is not really relevant. However, in Mongo, the field name actually occupies data space and this makes a big difference in terms of performance.
Now, if you're using some form of "data object" like ActiveRecord, why would you attempt to store full field names in the data? The DB should probably be storing all fields in alphabetical order with a map on the Object side. So a Document could have 8 fields/properties and the DB names would be "a", "b"..."j", but the Object names would be readable stuff like "Name", "Price", "Quantity".
The reason I bring this up is that it adds yet another wrinkle to Modify a field name. If you're implementing a mapping then modifying a field name doesn't really cause a migration at all.
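The mapping idea can be sketched with plain dicts (the names and map here are hypothetical, just to show the shape — short field names on "disk", readable property names on the object side):

```python
# Hypothetical field-name map: terse stored names -> readable object names.
FIELD_MAP = {"a": "name", "b": "price", "c": "quantity"}

raw_doc = {"a": "widget", "b": 9.99, "c": 3}  # what would live in the DB
obj = {FIELD_MAP[k]: v for k, v in raw_doc.items()}
print(obj)  # {'name': 'widget', 'price': 9.99, 'quantity': 3}
```

Renaming a "property" then only means editing FIELD_MAP; the stored data never changes.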
Some more Wrinkles
If you do want to implement a migration on a deletion, then you'll have to do so after a deploy. You'll also have to recognize that you won't save any current disk space when you do so.
Mongo pre-allocates space and it doesn't really "give it back" unless you do a DB repair. So if you delete a bunch of fields on documents, those documents still occupy the same space on disk. If the documents are later moved, then you may reclaim space, however documents only move when they grow.
If you remove a large field from lots of documents you'll want to do a repair, or check out the new in-place compact command.
A:
There is no silver bullet. Adding or removing fields is easier with non-relational db (just don't use unneeded fields or use new fields), renaming a field is easier with traditional db (you'll usually have to change a lot of data in case of field rename in schemaless db), data migration is on par - depending on task.
A:
What does the migration process look like with a non-relational db?
Depends on if you need to update all the existing data or not.
In many cases, you may not need to touch the old data, such as when adding a new optional field. If that field also has a default value, you may also not need to update the old documents, if your application can handle a missing field correctly. However, if you want to build an index on the new field to be able to search/filter/sort, you need to add the default value back into the old documents.
Something like field renaming (trivial in a relational db, because you only need to update the catalog and not touch any data) is a major undertaking in MongoDB (you need to rewrite all documents).
If you need to update the existing data, you usually have to write a migration function that iterates over all the documents and updates them one by one (although this process can be shared and run in parallel). For large data sets, this can take a lot of time (and space), and you may miss transactions (if you end up with a crashed migration that went half-way through).
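The shape of such a migration function, sketched over plain dicts standing in for documents (with MongoDB you would iterate a collection cursor and save each document back, ideally in batches):

```python
docs = [{"old_name": 1}, {"old_name": 2}, {"other": 3}]

def migrate(doc):
    # Rename old_name -> new_name; documents without the field pass through untouched.
    if "old_name" in doc:
        doc["new_name"] = doc.pop("old_name")
    return doc

docs = [migrate(d) for d in docs]
print(docs)  # [{'new_name': 1}, {'new_name': 2}, {'other': 3}]
```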
| Does django with mongodb make migrations a thing of the past? | Since mongo doesn't have a schema, does that mean that we won't have to do migrations when we change the models?
What does the migration process look like with a non-relational db?
| [
"I think this is a really good question, but the answers are going to be a little scattered based on the libs you're using and your expectations for a \"migration\".\nLet's take a look at some common migration actions:\n\nAdd a field: Mongo makes this very easy. Just add a field and you're done.\nDelete a field: In... | [
16,
2,
1
] | [] | [] | [
"django",
"mongodb",
"python"
] | stackoverflow_0003604565_django_mongodb_python.txt |
Q:
Output Django Object into XML-RPC response
I'm trying to return a django object in a XML-RPC response. Is it possible to serialize a model as XML-RPC methodResponse?
A:
I did figure out how serialize with xmlrpclib.dumps
import xmlrpclib

def get_model(uuid):
    o = MyModel.objects.get(uuid=uuid)
    return xmlrpclib.dumps((o, ), allow_none=True, methodresponse=1)
This will result in a XML-RPC methodResponse.
Then on the client end I just need to use xmlrpclib.loads to convert to a python native object.
got_model = rpc_srv.getmodel('f21e4e0b-493a-460b-982a-d2bb31c45864')
m, method = xmlrpclib.loads(got_model)
for item in m:
print item
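One caveat worth hedging: the XML-RPC marshaller only understands basic types (numbers, strings, lists, dicts of those), so in practice you usually have to flatten the model instance to a dict of its field values before dumping. A Python 3 round-trip sketch (the module is renamed there to xmlrpc.client; the payload dict below is a stand-in for real model fields):

```python
import xmlrpc.client as xmlrpclib  # Python 2: import xmlrpclib

# A plain dict of field values stands in for the Django model instance.
payload = {"uuid": "f21e4e0b-493a-460b-982a-d2bb31c45864", "title": "example"}

wire = xmlrpclib.dumps((payload,), methodresponse=True, allow_none=True)
params, methodname = xmlrpclib.loads(wire)  # methodname is None for a response
print(params[0]["uuid"])  # f21e4e0b-493a-460b-982a-d2bb31c45864
```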
| Output Django Object into XML-RPC response | I'm trying to return a django object in a XML-RPC response. Is it possible to serialize a model as XML-RPC methodResponse?
| [
"I did figure out how serialize with xmlrpclib.dumps\ndef get_model(uuid):\n o = MyModel.objects.get(uuid=uuid)\n return xmlrpclib.dumps((o, ), allow_none=True, methodresponse=1)\n\nThis will result in a XML-RPC methodResponse.\nThen on the client end I just need to use xmlrpclib.loads to convert to a python ... | [
1
] | [] | [] | [
"django",
"python",
"serialization",
"xml_rpc"
] | stackoverflow_0003611827_django_python_serialization_xml_rpc.txt |
Q:
can't multiply sequence by non-int of type 'float'
Why do I get an error of "can't multiply sequence by non-int of type 'float'"? from the following code:
def nestEgVariable(salary, save, growthRates):
SavingsRecord = []
fund = 0
depositPerYear = salary * save * 0.01
for i in growthRates:
fund = fund * (1 + 0.01 * growthRates) + depositPerYear
SavingsRecord += [fund,]
return SavingsRecord
print nestEgVariable(10000,10,[3,4,5,0,3])
A:
for i in growthRates:
fund = fund * (1 + 0.01 * growthRates) + depositPerYear
should be:
for i in growthRates:
fund = fund * (1 + 0.01 * i) + depositPerYear
You are multiplying 0.01 with the growthRates list object. Multiplying a list by an integer is valid (it's overloaded syntactic sugar that allows you to create an extended list with copies of its element references).
Example:
>>> 2 * [1,2]
[1, 2, 1, 2]
A:
Python allows for you to multiply sequences to repeat their values. Here is a visual example:
>>> [1] * 5
[1, 1, 1, 1, 1]
But it does not allow you to do it with floating point numbers:
>>> [1] * 5.1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't multiply sequence by non-int of type 'float'
A:
You're multiplying your "1 + 0.01" times the growthRates list, not the item in the list you're iterating through. I've renamed i to rate and used that instead. See the updated code below:
def nestEgVariable(salary, save, growthRates):
SavingsRecord = []
fund = 0
depositPerYear = salary * save * 0.01
# V-- rate is a clearer name than i here, since you're iterating through the rates contained in the growthRates list
for rate in growthRates:
# V-- Use the `rate` item in the growthRate list you're iterating through rather than multiplying by the `growthRate` list itself.
fund = fund * (1 + 0.01 * rate) + depositPerYear
SavingsRecord += [fund,]
return SavingsRecord
print nestEgVariable(10000,10,[3,4,5,0,3])
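For reference, a runnable Python 3 version of the corrected function (same behavior, names just made a bit more descriptive):

```python
def nest_egg_variable(salary, save, growth_rates):
    savings_record = []
    fund = 0
    deposit_per_year = salary * save * 0.01
    for rate in growth_rates:  # iterate the individual rates, not the list itself
        fund = fund * (1 + 0.01 * rate) + deposit_per_year
        savings_record.append(fund)
    return savings_record

result = nest_egg_variable(10000, 10, [3, 4, 5, 0, 3])
print(result)  # five yearly balances, the first one ~1000
```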
A:
In this line:
fund = fund * (1 + 0.01 * growthRates) + depositPerYear
growthRates is a sequence ([3,4,5,0,3]). You can't multiply that sequence by a float (0.1). It looks like what you wanted to put there was i.
Incidentally, i is not a great name for that variable. Consider something more descriptive, like growthRate or rate.
A:
In this line:
fund = fund * (1 + 0.01 * growthRates) + depositPerYear
I think you mean this:
fund = fund * (1 + 0.01 * i) + depositPerYear
When you try to multiply a float by growthRates (which is a list), you get that error.
A:
Because growthRates is a sequence (you're even iterating it!) and you multiply it by (1 + 0.01), which is obviously a float (1.01). I guess you mean for growthRate in growthRates: ... * growthrate?
| can't multiply sequence by non-int of type 'float' | Why do I get an error of "can't multiply sequence by non-int of type 'float'"? from the following code:
def nestEgVariable(salary, save, growthRates):
SavingsRecord = []
fund = 0
depositPerYear = salary * save * 0.01
for i in growthRates:
fund = fund * (1 + 0.01 * growthRates) + depositPerYear
SavingsRecord += [fund,]
return SavingsRecord
print nestEgVariable(10000,10,[3,4,5,0,3])
| [
"for i in growthRates: \n fund = fund * (1 + 0.01 * growthRates) + depositPerYear\n\nshould be:\nfor i in growthRates: \n fund = fund * (1 + 0.01 * i) + depositPerYear\n\nYou are multiplying 0.01 with the growthRates list object. Multiplying a list by an integer is valid (it's overloaded syntactic sugar th... | [
26,
20,
3,
2,
1,
0
] | [] | [] | [
"floating_point",
"python",
"sequence"
] | stackoverflow_0003612378_floating_point_python_sequence.txt |
Q:
Better way to zip files in Python (zip a whole directory with a single command)?
Possible Duplicate:
How do I zip the contents of a folder using python (version 2.5)?
Suppose I have a directory: /home/user/files/. This dir has a bunch of files:
/home/user/files/
-- test.py
-- config.py
I want to zip this directory using ZipFile in python. Do I need to loop through the directory and add these files recursively, or is it possible do pass the directory name and the ZipFile class automatically adds everything beneath it?
In the end, I would like to have:
/home/user/files.zip (and inside my zip, I dont need to have a /files folder inside the zip:)
-- test.py
-- config.py
A:
Note that this doesn't include empty directories. If those are required there are workarounds available on the web; probably best to get the ZipInfo record for empty directories in our favorite archiving programs to see what's in them.
Hardcoding file/path to get rid of specifics of my code...
import os
import zipfile

target_dir = '/tmp/zip_me_up'
zipf = zipfile.ZipFile('/tmp/example.zip', 'w', zipfile.ZIP_DEFLATED)  # avoid shadowing the builtin zip
rootlen = len(target_dir) + 1
for base, dirs, files in os.walk(target_dir):
    for file in files:
        fn = os.path.join(base, file)
        zipf.write(fn, fn[rootlen:])
zipf.close()
A:
You could try using the distutils package:
distutils.archive_util.make_zipfile(base_name, base_dir[, verbose=0, dry_run=0])
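In newer Pythons distutils is deprecated, and the same one-liner lives in the standard library as shutil.make_archive. A self-contained sketch (throwaway temp directories so it runs anywhere) that matches the layout asked for — the files land at the root of the zip, with no containing files/ folder:

```python
import os
import shutil
import tempfile
import zipfile

# Build a throwaway source directory with the two example files.
src = tempfile.mkdtemp()
for name in ("test.py", "config.py"):
    open(os.path.join(src, name), "w").close()

# root_dir=src zips the *contents* of src; the result is <dest>.zip.
dest = os.path.join(tempfile.mkdtemp(), "files")
archive = shutil.make_archive(dest, "zip", root_dir=src)
names = zipfile.ZipFile(archive).namelist()
print(sorted(names))  # e.g. ['config.py', 'test.py']
```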
A:
You might also be able to get away with using the zip command available in the Unix shell with a call to os.system
A:
You could use subprocess module:
import subprocess
PIPE = subprocess.PIPE
pd = subprocess.Popen(['/usr/bin/zip', '-r', 'files', 'files'],
stdout=PIPE, stderr=PIPE)
stdout, stderr = pd.communicate()
The code is not tested and is intended to work only on Unix machines; I don't know if Windows has similar command-line utilities.
| Better way to zip files in Python (zip a whole directory with a single command)? |
Possible Duplicate:
How do I zip the contents of a folder using python (version 2.5)?
Suppose I have a directory: /home/user/files/. This dir has a bunch of files:
/home/user/files/
-- test.py
-- config.py
I want to zip this directory using ZipFile in python. Do I need to loop through the directory and add these files recursively, or is it possible do pass the directory name and the ZipFile class automatically adds everything beneath it?
In the end, I would like to have:
/home/user/files.zip (and inside my zip, I dont need to have a /files folder inside the zip:)
-- test.py
-- config.py
| [
"Note that this doesn't include empty directories. If those are required there are workarounds available on the web; probably best to get the ZipInfo record for empty directories in our favorite archiving programs to see what's in them.\nHardcoding file/path to get rid of specifics of my code...\ntarget_dir = '/tm... | [
24,
6,
2,
1
] | [] | [] | [
"python",
"zip"
] | stackoverflow_0003612094_python_zip.txt |
Q:
Why Python gets installed in Frameworks directory?
I've been wondering why python gets installed in directory named Frameworks? (though it's not Framework)
$ which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
Somebody please explain! Thanks!
A:
That's the way it is in OS X.
The Mac/README file in the Python source tree goes into some more details of the advantages of a framework build versus a traditional UNIX shared-library build, which will also work on OS X. The main points:
"The main reason is because you want
to create GUI programs in Python.
With the exception of
X11/XDarwin-based GUI toolkits all
GUI programs need to be run from a
fullblown MacOSX application (a
".app" bundle).
While it is technically possible to
create a .app without using
frameworks you will have to do the
work yourself if you really want
this.
A second reason for using frameworks
is that they put Python-related items
in only two places:
"/Library/Framework/Python.framework"
and "/Applications/MacPython 2.6".
This simplifies matters for users
installing Python from a binary
distribution if they want to get rid
of it again. Moreover, due to the way
frameworks work a user without admin
privileges can install a binary
distribution in his or her home
directory without recompilation."
| Why Python gets installed in Frameworks directory? | I've been wondering why python gets installed in directory named Frameworks? (though it's not Framework)
$ which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
Somebody please explain! Thanks!
| [
"That's the way it is in OS X.\nThe Mac/README file in the Python source tree goes into some more details of the advantages of a framework build versus a traditional UNIX shared-library build, which will also work on OS X. The main points:\n\n\"The main reason is because you want\nto create GUI programs in Python.... | [
5
] | [] | [] | [
"macos",
"python"
] | stackoverflow_0003612273_macos_python.txt |
Q:
Python 3 string slice behavior inconsistent
Wanted an easy way to extract year month and day from a string.
Using Python 3.1.2
Tried this:
processdate = "20100818"
print(processdate[0:4])
print(processdate[4:2])
print(processdate[6:2])
Results in:
...2010
...
...
Reread all the string docs, did some searching, can't figure out why it'd be doing this.
I'm sure this is a no brainer that I'm missing somehow, I've just banged my head on this enough today.
A:
With a slice of [4:2], you're telling Python to start at character index 4 and stop at character index 2. Since 4 > 2, you are already past where you should stop when you start, so the slice is empty.
Did you want the fourth and fifth characters? Then you want [4:6] instead.
A:
The best way to do this is with strptime!
print( strptime( ..., "%Y%m%d" ) )
A:
processdate = "20100818"
print(processdate[0:4]) # year
print(processdate[4:6]) # month
print(processdate[6:8]) # date
| Python 3 string slice behavior inconsistent | Wanted an easy way to extract year month and day from a string.
Using Python 3.1.2
Tried this:
processdate = "20100818"
print(processdate[0:4])
print(processdate[4:2])
print(processdate[6:2])
Results in:
...2010
...
...
Reread all the string docs, did some searching, can't figure out why it'd be doing this.
I'm sure this is a no brainer that I'm missing somehow, I've just banged my head on this enough today.
| [
"With a slice of [4:2], you're telling Python to start at character index 4 and stop at character index 2. Since 4 > 2, you are already past where you should stop when you start, so the slice is empty.\nDid you want the fourth and fifth characters? Then you want [4:6] instead.\n",
"The best way to do this is with... | [
6,
6,
2
] | [] | [] | [
"python",
"slice"
] | stackoverflow_0003612217_python_slice.txt |
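The `strptime` answer above omits where the function comes from; with the stdlib `datetime` module the same extraction reads:

```python
from datetime import datetime

processdate = "20100818"
d = datetime.strptime(processdate, "%Y%m%d")
print(d.year, d.month, d.day)  # 2010 8 18
```

Unlike manual slicing, `strptime` also rejects malformed input by raising `ValueError`.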
Q:
Adjust Python Regex to not include a single digit in the findall results
I am trying to capture / extract numeric values from some strings.
Here is a sample string:
s='The shipping company had 93,999,888.5685 gallons of fuel on hand'
I want to pull the 93,999,888.5685 value
I have gotten my regex to this
> mine=re.compile("(\d{1,3}([,\d{3}])*[.\d+]*)")
However, when I do a findall I get the following:
mine.findall(s)
[('93,999,888.5685', '8')]
I have tried a number of different strategies to keep it from matching on the 8
But I am now realizing that I am not sure I know why it matched on the 8
Any illumination would be appreciated.
A:
The reason the 8 is being captured is because you have 2 capturing groups. Mark the 2nd group as a non-capturing group using ?: with this pattern: (\d{1,3}(?:[,\d{3}])*[.\d+]*)
Your second group, ([,\d{3}]) is responsible for the additional match.
A:
Your string broken up:
(
\d{1,3} This will match any group of 1-3 digits (`8`, `12`, `000`, etc)
(
[,\d{3}] This will match groups of a "," and 3 digits (`,123`, `,000`, etc)
)* **from zero to infinity times**
[.\d+]* This matches any number of periods "." and digits from 0 to infinity
)
A:
findall returns a tuple for each match. The tuple contains each group (delineated by parenthesis in the regex) of the match. You want the first group only. Below I've used a list comprehension to pull out the first group.
>>> mine=re.compile("(\d{1,3}(,\d{3})*(\.?\d+)*)")
>>> s='blah 93,999,888.5685 blah blah blah 988,122.3.'
>>> [m[0] for m in mine.findall(s)]
['93,999,888.5685', '988,122.3']
| Adjust Python Regex to not include a single digit in the findall results | I am trying to capture / extract numeric values from some strings.
Here is a sample string:
s='The shipping company had 93,999,888.5685 gallons of fuel on hand'
I want to pull the 93,999,888.5685 value
I have gotten my regex to this
> mine=re.compile("(\d{1,3}([,\d{3}])*[.\d+]*)")
However, when I do a findall I get the following:
mine.findall(s)
[('93,999,888.5685', '8')]
I have tried a number of different strategies to keep it from matching on the 8
But I am now realizing that I am not sure I know why it matched on the 8
Any illumination would be appreciated.
| [
"The reason the 8 is being captured is because you have 2 capturing groups. Mark the 2nd group as a non-capturing group using ?: with this pattern: (\\d{1,3}(?:[,\\d{3}])*[.\\d+]*)\nYour second group, ([,\\d{3}]) is responsible for the additional match.\n",
"Your string broken up:\n(\n\\d{1,3} This will mat... | [
4,
1,
0
] | [
"Why not wrap it in \\D ? mine=re.compile(\"\\D(\\d{1,3}([,\\d{3}])[.\\d+])\\D\").\n"
] | [
-1
] | [
"numbers",
"python",
"regex"
] | stackoverflow_0003612693_numbers_python_regex.txt |
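The accepted fix can be checked directly. Below is a cleaned-up version of the pattern — the original's `[,\d{3}]` and `[.\d+]*` are character classes (each matching a single character), so they are rewritten here as real groups, made non-capturing with `?:` so that `findall` returns one string per match:

```python
import re

s = "The shipping company had 93,999,888.5685 gallons of fuel on hand"
# non-capturing groups: findall returns whole matches, not inner groups
pattern = re.compile(r"\d{1,3}(?:,\d{3})*(?:\.\d+)?")
print(pattern.findall(s))  # ['93,999,888.5685']
```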
Q:
Best practices to make an Installer - Can I use Yum
I am new to installers and up until now have just been manually executing a line by line list of items to install. Clearly this is not a scalable approach, especially when new servers need to be installed regularly, and not by the same person.
Currently I need to install about 30 packages via Yum (from large ones like mySQL to smaller 70KB random ones)
Manually install a bunch of others (Python packages that are basically just "python setup.py install" commands)
Create some directories, change some permissions etc.
What is the best way to create something that automatically does this. I can't always assume the client server has Yum, so would I need to download all the binaries and dependencies and have the script install them?
I know this is a loaded question. Does anyone know any good tutorials for this?
A:
You're asking a few questions at once, so I'm just going to touch on packaging and installing Python libraries...
Using setup.py you can turn Python packages into RPMs for installation on any Red Hat/CentOS
box using yum. This is how I install all my packages internally at my job. Assuming the foundation rpmbuild utilities are installed, it's as simple as this:
python setup.py bdist_rpm
This will build an RPM of your package in the dist folder (e.g. dist/mypackage-1.01-1-noarch.rpm). Then you can just add this RPM to your internal yum mirror (if you have an internal mirror) and from there you can easily distribute packages to your internal hosts.
A:
You can create an RPM package that depends on whichever packages you need.
http://fedoraproject.org/wiki/PackageMaintainers/CreatingPackageHowTo (For Fedora, but would be the same for RHEL/CentOS)
You would basically have a line in the .spec file like this:
Requires: mysql-server, httpd, php
So you can add this to your yum mirror (assuming you have one), then whoever is doing the installing can just do yum install server-setup and it'll automatically pull in all the required packages. As jathanism said, you can create RPMs from the setup.py scripts and put those on your mirror, and then just have your meta package depend on those RPMs.
And you could also do a Debian package if there is a possibility of a Debian system being used.
Best practices to make an Installer - Can I use Yum | I am new to installers and up until now have just been manually executing a line by line list of items to install. Clearly this is not a scalable approach, especially when new servers need to be installed regularly, and not by the same person.
Currently I need to install about 30 packages via Yum (from large ones like mySQL to smaller 70KB random ones)
Manually install a bunch of others (Python packages that are basically just "python setup.py install" commands)
Create some directories, change some permissions etc.
What is the best way to create something that automatically does this. I can't always assume the client server has Yum, so would I need to download all the binaries and dependencies and have the script install them?
I know this is a loaded question. Does anyone know any good tutorials for this?
| [
"You're asking a few questions at once, so I'm just going to touch on packaging and installing Python libraries...\nUsing setup.py you can turn Python packages into RPMs for installation on any Red Hat/CentOS \nbox using yum. This is how I install all my packages internally at my job. Assuming the foundation rpmbui... | [
4,
2
] | [] | [] | [
"installation",
"python",
"yum"
] | stackoverflow_0003612785_installation_python_yum.txt |
Q:
Passing class instantiations (layering)
Program design:
Class A, which implements lower level data handling
Classes B-E, which provide a higher level interface to A to perform various functions
Class F, which is a UI object that interacts with B-E according to user input
There can only be one instantiation of A at any given time, to avoid race conditions, data corruption, etc.
What is the best way to provide a copy of A to B-E? Currently F instantiates A and holds onto it for the life of the program, passing it to B-E when creating them. Alternately I could create a globally available module with a shared copy of A that everything uses. Another alternative is to make B-E subclasses of A, but that violates the constraint of only one A (since each subclass would be their own data handler, so to speak).
Language is Python 3, FWIW.
A:
Use a Borg instead of a Singleton.
>>> class Borg( object ):
... __ss = {}
... def __init__( self ):
... self.__dict__ = self.__ss
...
>>> foo = Borg()
>>> foo.x = 1
>>> bar = Borg()
>>> bar.x
1
A:
How about using the module technique? This is much simpler.
in module "A.py"
class A(object):
def __init__(self, ..)
...
a = A()
in module "B.py"
from A import a
class B(object):
    def __init__(self):
global a
self.a = a
They both have the same single instance a.
The same could be done for other classes C, D, F etc
| Passing class instantiations (layering) | Program design:
Class A, which implements lower level data handling
Classes B-E, which provide a higher level interface to A to perform various functions
Class F, which is a UI object that interacts with B-E according to user input
There can only be one instantiation of A at any given time, to avoid race conditions, data corruption, etc.
What is the best way to provide a copy of A to B-E? Currently F instantiates A and holds onto it for the life of the program, passing it to B-E when creating them. Alternately I could create a globally available module with a shared copy of A that everything uses. Another alternative is to make B-E subclasses of A, but that violates the constraint of only one A (since each subclass would be their own data handler, so to speak).
Language is Python 3, FWIW.
| [
"Use a Borg instead of a Singleton.\n>>> class Borg( object ):\n... __ss = {}\n... def __init__( self ):\n... self.__dict__ = self.__ss\n...\n>>> foo = Borg()\n>>> foo.x = 1\n>>> bar = Borg()\n>>> bar.x\n1\n\n",
"How about using the module technique, this is much simpler.\nin module \"A.py\"\n... | [
4,
0
] | [] | [] | [
"database",
"oop",
"python",
"python_3.x"
] | stackoverflow_0003612535_database_oop_python_python_3.x.txt |
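The Borg snippet above can be exercised a little further; the point of the pattern is that instances stay distinct objects while sharing one attribute dictionary, so state written through any instance is visible through all of them:

```python
class Borg(object):
    _shared_state = {}

    def __init__(self):
        # every instance aliases the same dict as its __dict__
        self.__dict__ = self._shared_state


foo = Borg()
foo.x = 1
bar = Borg()
print(bar.x)       # 1 -- state is shared
print(foo is bar)  # False -- identities are not
```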
Q:
Python can't import module named wsgi_soap from the soaplib
This code works on Debian under Python 2.5 but doesn't on Ubuntu under Python 2.6:
from soaplib.wsgi_soap import SimpleWSGISoapApp
On ubuntu under python 2.6 I get the error:
from soaplib.wsgi_soap import SimpleWSGISoapApp
ImportError: No module named wsgi_soap
A:
Don't call your own file soaplib.py. Rename it to something else. Also, remove the soaplib.pyc file that was generated.
A:
I have used the latest version of the soaplib which is incompatible with the version 0.8.
| Python can't import module named wsgi_soap from the soaplib | This code works on Debian under Python 2.5 but doesn't on Ubuntu under Python 2.6:
from soaplib.wsgi_soap import SimpleWSGISoapApp
On ubuntu under python 2.6 I get the error:
from soaplib.wsgi_soap import SimpleWSGISoapApp
ImportError: No module named wsgi_soap
| [
"Don't call your own file soaplib.py. Rename it to something else. Also, remove the soaplib.pyc file that was generated.\n",
"I have used the latest version of the soaplib which is incompatible with the version 0.8.\n"
] | [
2,
0
] | [] | [] | [
"python",
"soap"
] | stackoverflow_0003422354_python_soap.txt |
Q:
Placing a Button in UltimateListCtrl using wxPython
I'm new to Python and I have been trying to get a button within UltimateListCtrl. I still can't figure out what I'm doing wrong. Here is my code:
try:
from agw import ultimatelistctrl as ULC
except ImportError: # if it's not there locally, try the wxPython lib.
from wx.lib.agw import ultimatelistctrl as ULC
self.table = ULC.UltimateListCtrl(self, -1, agwStyle=ULC.ULC_REPORT|
ULC.ULC_HAS_VARIABLE_ROW_HEIGHT)
self.table.InsertColumn(0, "Name")
self.table.InsertColumn(1, "Size")
self.table.InsertColumn(2, "Download")
for i in range(0, len(masterlist)):
pos = self.table.InsertStringItem(i,str(masterlist[i]['name']))
self.table.SetStringItem(pos, 1,str(masterlist[i]['size']))
button = wx.Button(self, id=i, label="Download")
self.table.SetItemWindow(pos, col=2, wnd=button, expand=True)
masterlist is a list of download items.
I get this traceback:
Traceback (most recent call last):
File "E:\TestApp.py", line 67, in Display
self.table.SetItemWindow(pos, col=5, wnd=button, expand=True)
File "C:\Python27\lib\site-packages\wx-2.8-msw-unicode\wx\lib\agw\ultimatelistctrl.py", line 12961, in SetItemWindow
return self._mainWin.SetItemWindow(item, wnd, expand)
File "C:\Python27\lib\site-packages\wx-2.8-msw-unicode\wx\lib\agw\ultimatelistctrl.py", line 9021, in SetItemWindow
item.SetWindow(wnd, expand)
File "C:\Python27\lib\site-packages\wx-2.8-msw-unicode\wx\lib\agw\ultimatelistctrl.py", line 1863, in SetWindow
mainWin = listCtrl._mainWin
AttributeError: 'MainWindow' object has no attribute '_mainWin'
A:
button's parent should be your ULC i.e self.table
So change this line:
button = wx.Button(self, id=wx.ID_ANY, label="Download")
to this:
button = wx.Button(self.table, id=wx.ID_ANY, label="Download")
Update in response to comment:
For some reason it doesn't seem to be possible to delete all items in a ULC with the
DeleteAllItems() method if any of the items contain widgets so instead use DeleteItem().
def emptyList(self):
itemCount = self.list.GetItemCount()
for item in xrange(itemCount):
self.list.DeleteItem(0)
Placing a Button in UltimateListCtrl using wxPython | I'm new to Python and I have been trying to get a button within UltimateListCtrl. I still can't figure out what I'm doing wrong. Here is my code:
try:
from agw import ultimatelistctrl as ULC
except ImportError: # if it's not there locally, try the wxPython lib.
from wx.lib.agw import ultimatelistctrl as ULC
self.table = ULC.UltimateListCtrl(self, -1, agwStyle=ULC.ULC_REPORT|
ULC.ULC_HAS_VARIABLE_ROW_HEIGHT)
self.table.InsertColumn(0, "Name")
self.table.InsertColumn(1, "Size")
self.table.InsertColumn(2, "Download")
for i in range(0, len(masterlist)):
pos = self.table.InsertStringItem(i,str(masterlist[i]['name']))
self.table.SetStringItem(pos, 1,str(masterlist[i]['size']))
button = wx.Button(self, id=i, label="Download")
self.table.SetItemWindow(pos, col=2, wnd=button, expand=True)
masterlist is a list of download items.
I get this traceback:
Traceback (most recent call last):
File "E:\TestApp.py", line 67, in Display
self.table.SetItemWindow(pos, col=5, wnd=button, expand=True)
File "C:\Python27\lib\site-packages\wx-2.8-msw-unicode\wx\lib\agw\ultimatelistctrl.py", line 12961, in SetItemWindow
return self._mainWin.SetItemWindow(item, wnd, expand)
File "C:\Python27\lib\site-packages\wx-2.8-msw-unicode\wx\lib\agw\ultimatelistctrl.py", line 9021, in SetItemWindow
item.SetWindow(wnd, expand)
File "C:\Python27\lib\site-packages\wx-2.8-msw-unicode\wx\lib\agw\ultimatelistctrl.py", line 1863, in SetWindow
mainWin = listCtrl._mainWin
AttributeError: 'MainWindow' object has no attribute '_mainWin'
| [
"button's parent should be your ULC i.e self.table\nSo change this line:\nbutton = wx.Button(self, id=wx.ID_ANY, label=\"Download\")\n\nto this:\nbutton = wx.Button(self.table, id=wx.ID_ANY, label=\"Download\")\n\nUpdate in response to comment:\nFor some reason it doesn't seem to be possible to delete all items in ... | [
3
] | [] | [] | [
"python",
"user_interface",
"wxpython"
] | stackoverflow_0003612934_python_user_interface_wxpython.txt |
Q:
writing large CSV files - dictionary based CSV writer seems to be the problem
I have a big bag of words array (words, and their counts) that I need to write to large flat csv file.
In testing with around 1000 or so words, this works just fine - I use the dictwriter as follows:
self.csv_out = csv.DictWriter(open(self.loc+'.csv','w'), quoting=csv.QUOTE_ALL, fieldnames=fields)
where fields is list of words (i.e. the keys, in the dictionary that I pass to csv_out.writerow).
However, it seems that this is scaling horribly, and as the number of words increase - the time required to write a row increases exponentially. The dict_to_list method in csv seems to be the instigator of my troubles.
I'm not entirely sure how to even begin to optimize here. Any faster CSV routines I could use?
A:
OK, this is by no means the answer, but I looked up the source code for the csv module and noticed that there is a very expensive check in the module (§ 136-141 in Python 2.6).
if self.extrasaction == "raise":
wrong_fields = [k for k in rowdict if k not in self.fieldnames]
if wrong_fields:
raise ValueError("dict contains fields not in fieldnames: " +
", ".join(wrong_fields))
return [rowdict.get(key, self.restval) for key in self.fieldnames]
so a quick workaround seems to be to pass extrasaction="ignore" when creating the writer. This seems to speed up things very substantially.
Not a perfect solution, and perhaps somewhat obvious, but I'm posting it in case it's helpful to somebody else.
A:
The obvious optimisation is to use a csv.writer instead of a DictWriter, passing in iterables for each row instead of dictionaries. Does that not help?
When you say "the number of words", do you mean the number of columns in the CSV? Because I've never seen a CSV that needs thousands of columns! Maybe you have transposed your data and are writing columns instead of rows? Each row should represent one datum, with sections as defined by the columns. If you really do need that sort of size, maybe a database is a better choice?
| writing large CSV files - dictionary based CSV writer seems to be the problem | I have a big bag of words array (words, and their counts) that I need to write to large flat csv file.
In testing with around 1000 or so words, this works just fine - I use the dictwriter as follows:
self.csv_out = csv.DictWriter(open(self.loc+'.csv','w'), quoting=csv.QUOTE_ALL, fieldnames=fields)
where fields is list of words (i.e. the keys, in the dictionary that I pass to csv_out.writerow).
However, it seems that this is scaling horribly, and as the number of words increase - the time required to write a row increases exponentially. The dict_to_list method in csv seems to be the instigator of my troubles.
I'm not entirely sure how to even begin to optimize here. Any faster CSV routines I could use?
| [
"Ok, this is by no means the answer but i looked up the source-code for the csv module and noticed that there is a very expensive if not check in the module (§ 136-141 in python 2.6). \nif self.extrasaction == \"raise\":\n wrong_fields = [k for k in rowdict if k not in self.fieldnames]\n if wrong_fields:\n ... | [
2,
1
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0003613457_csv_python.txt |
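Both suggestions from the answers can be sketched on Python 3's csv module: `extrasaction="ignore"` makes `DictWriter` skip the per-row unknown-key validation quoted above, and missing keys fall back to the writer's `restval` (default `''`):

```python
import csv
import io

fields = ["alpha", "beta", "gamma"]
buf = io.StringIO()

out = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore",
                     quoting=csv.QUOTE_ALL)
out.writeheader()
out.writerow({"alpha": 1, "beta": 2})              # gamma -> ""
out.writerow({"alpha": 3, "gamma": 4, "junk": 9})  # junk silently dropped
print(buf.getvalue())
```

If the dictionaries can be turned into plain sequences up front, `csv.writer` avoids the per-row dict-to-list step entirely.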
Q:
django middleware redirect infinite loop
I have a middleware that checks a session value and redirects depending on that value. My problem is, it is creating an infinite redirect loop and I'm not sure why.
So, what I want to do is check to see if the value of the session visible is yes and if not redirect the user to my test view.
Here is my middleware:
class CheckStatus(object):
def process_request(self, request):
if request.user.is_authenticated():
s = request.session.get('visible')
if str(s) is not 'yes':
return HttpResponseRedirect(reverse("myapp.myview.views.test"))
A:
You should at least avoid having it run when serving some media files:
from django.conf import settings
class CheckStatus(object):
def process_request(self, request):
if request.user.is_authenticated():
if not request.path.startswith(settings.MEDIA_URL):
s = request.session.get('visible')
if str(s) is not 'yes':
return HttpResponseRedirect(reverse("myapp.myview.views.test"))
But the cleaner way seems to be to use the process_view method!
| django middleware redirect infinite loop | I have a middleware that checks a session value and redirects depending on that value. My problem is, it is creating an infinite redirect loop and I'm not sure why.
So, what I want to do is check to see if the value of the session visible is yes and if not redirect the user to my test view.
Here is my middleware:
class CheckStatus(object):
def process_request(self, request):
if request.user.is_authenticated():
s = request.session.get('visible')
if str(s) is not 'yes':
return HttpResponseRedirect(reverse("myapp.myview.views.test"))
| [
"You should at least avoid having it run when serving some media files:\nfrom django.conf import settings\n\nclass CheckStatus(object): \n\n def process_request(self, request): \n if request.user.is_authenticated(): \n if not request.path.startswith(settings.MEDIA_URL):\n ... | [
3
] | [] | [] | [
"django",
"django_middleware",
"python"
] | stackoverflow_0003613385_django_django_middleware_python.txt |
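What the answer doesn't spell out is why the loop happens in the first place: `process_request` also runs for the redirect's target, so a request for the test view is itself redirected to the test view, forever. A framework-free sketch of the guard logic (path and key names hypothetical); note it compares with `!=`, since the question's `str(s) is not 'yes'` tests object identity rather than string equality:

```python
def should_redirect(path, session, target="/test/"):
    """True when the request should be bounced to the test view."""
    # never redirect a request already at the target, or each
    # redirect response triggers yet another redirect
    if path.startswith(target):
        return False
    # equality, not identity: "is not" on strings is unreliable
    return session.get("visible") != "yes"


print(should_redirect("/home/", {}))                  # True
print(should_redirect("/test/", {}))                  # False
print(should_redirect("/home/", {"visible": "yes"}))  # False
```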
Q:
Wrappers around lambda expressions
I have functions in python that take two inputs, do some manipulations, and return two outputs. I would like to rearrange the output arguments, so I wrote a wrapper function around the original function that creates a new function with the new output order
def rotate(f):
h = lambda x,y: -f(x,y)[1], f(x,y)[0]
return h
f = lambda x, y: (-y, x)
h = rotate(f)
However, this is giving an error message:
NameError: global name 'x' is not defined
x is an argument to a lambda expression, so why does it have to be defined?
The expected behavior is that h should be a new function that is identical to lambda x,y: (-x,-y)
A:
You need to add parentheses around the lambda expression:
h = lambda x,y: (-f(x,y)[1], f(x,y)[0])
Otherwise, Python interprets the code as:
h = (lambda x,y: -f(x,y)[1]), f(x,y)[0]
and h is a 2-tuple.
A:
There is problem with precedence. Just use additional parentheses:
def rotate(f):
h = lambda x,y: (-f(x,y)[1], f(x,y)[0])
return h
| Wrappers around lambda expressions | I have functions in python that take two inputs, do some manipulations, and return two outputs. I would like to rearrange the output arguments, so I wrote a wrapper function around the original function that creates a new function with the new output order
def rotate(f):
h = lambda x,y: -f(x,y)[1], f(x,y)[0]
return h
f = lambda x, y: (-y, x)
h = rotate(f)
However, this is giving an error message:
NameError: global name 'x' is not defined
x is an argument to a lambda expression, so why does it have to be defined?
The expected behavior is that h should be a new function that is identical to lambda x,y: (-x,-y)
| [
"You need to add parentheses around the lambda expression:\nh = lambda x,y: (-f(x,y)[1], f(x,y)[0])\n\nOtherwise, Python interprets the code as:\nh = (lambda x,y: -f(x,y)[1]), f(x,y)[0]\n\nand h is a 2-tuple.\n",
"There is problem with precedence. Just use additional parentheses:\ndef rotate(f):\n h = lambda x... | [
7,
5
] | [] | [] | [
"lambda",
"python"
] | stackoverflow_0003613981_lambda_python.txt |
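The precedence problem is easy to demonstrate: without parentheses Python parses the line as a tuple whose first element is a lambda, and whose second element is evaluated immediately (which is where the `NameError` about `x` came from):

```python
f = lambda x, y: (-y, x)

# the tuple's second element f(x, y)[0] is evaluated right now,
# so module-level x and y must exist -- hence the NameError
x, y = 0, 0
broken = lambda x, y: -f(x, y)[1], f(x, y)[0]
print(type(broken))  # <class 'tuple'>

# parentheses keep both elements inside the lambda body
fixed = lambda x, y: (-f(x, y)[1], f(x, y)[0])
print(fixed(3, 4))   # (-3, -4)
```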
Q:
Google Appengine: objects passed to a template changes their addresses in memory
I query an array of objects from the DB, then compare the addresses of the objects in the Model and in the View. They differ! Why? I want to access the same objects both from the template and from the business logic code.
I wouldn't ask, but it really bothers me because function calls are disallowed in Django-styled templates and I can't even assign custom properties to DB objects within the business logic code.
In request handler:
from google.appengine.ext.webapp import template
cats = db.GqlQuery("SELECT * FROM Cats")
for cat in cats:
self.response.out.write("<li>%s</li>" % (a))
In template:
{% for a in articles %}
{{a}},
{% endfor %}
Addresses (hash codes) differs in such code.
A:
When you use the query iterator, you in fact do several fetches in sequence, each one will result in a new model instance.
Instead of doing:
cats = db.GqlQuery("SELECT * FROM Cats")
for cat in cats:
...
...do this instead:
cats = db.GqlQuery("SELECT * FROM Cats").fetch(50)
for cat in cats:
...
And pass the list of cats to the template. You will have same lists in handler and template, as each entity is loaded into a model instance only once.
| Google Appengine: objects passed to a template changes their addresses in memory | I query an array of objects from the DB, then compare the addresses of the objects in the Model and in the View. They differ! Why? I want to access the same objects both from the template and from the business logic code.
I wouldn't ask, but it really bothers me because function calls are disallowed in Django-styled templates and I can't even assign custom properties to DB objects within the business logic code.
In request handler:
from google.appengine.ext.webapp import template
cats = db.GqlQuery("SELECT * FROM Cats")
for cat in cats:
self.response.out.write("<li>%s</li>" % (a))
In template:
{% for a in articles %}
{{a}},
{% endfor %}
Addresses (hash codes) differs in such code.
| [
"When you use the query iterator, you in fact do several fetches in sequence, each one will result in a new model instance.\nInstead of doing:\ncats = db.GqlQuery(\"SELECT * FROM Cats\")\nfor cat in cats:\n ...\n\n...do this instead:\ncats = db.GqlQuery(\"SELECT * FROM Cats\").fetch(50)\nfor cat in cats:\n ..... | [
4
] | [] | [] | [
"google_app_engine",
"python",
"templates"
] | stackoverflow_0003613573_google_app_engine_python_templates.txt |
Q:
AttributeError when unpickling an object
I'm trying to pickle an instance of a class in one module, and unpickle it in another.
Here's where I pickle:
import cPickle
def pickleObject():
object = Foo()
savefile = open('path/to/file', 'w')
cPickle.dump(object, savefile, cPickle.HIGHEST_PROTOCOL)
class Foo(object):
(...)
and here's where I try to unpickle:
savefile = open('path/to/file', 'r')
object = cPickle.load(savefile)
On that second line, I get AttributeError: 'module' object has no attribute 'Foo'
Anyone see what I'm doing wrong?
A:
class Foo must be importable via the same path in the unpickling environment so that the pickled object can be reinstantiated.
I think your issue is that you define Foo in the module that you are executing as main (__name__ == "__main__"). Pickle will serialize the path (not the class object/definition!!!) to Foo as being in the main module. Foo is not an attribute of the main unpickle script.
In this example, you could redefine class Foo in the unpickling script and it should unpickle just fine. But the intention is really to have a common library that is shared between the two scripts that will be available by the same path. Example: define Foo in foo.py
Simple Example:
$PROJECT_DIR/foo.py
class Foo(object):
pass
$PROJECT_DIR/picklefoo.py
import cPickle
from foo import Foo
def pickleObject():
obj = Foo()
savefile = open('pickle.txt', 'w')
cPickle.dump(obj, savefile, cPickle.HIGHEST_PROTOCOL)
pickleObject()
$PROJECT_DIR/unpicklefoo.py
import cPickle
savefile = open('pickle.txt', 'r')
obj = cPickle.load(savefile)
...
A:
Jeremy Brown had the right answer, here is a more concrete version of the same point:
import cPickle
import myFooDefiningModule
def pickleObject():
object = myFooDefiningModule.Foo()
savefile = open('path/to/file', 'w')
cPickle.dump(object, savefile)
and:
import cPickle
import myFooDefiningModule
savefile = open('path/to/file', 'r')
object = cPickle.load(savefile)
such that Foo lives in the same namespace in each piece of code.
| AttributeError when unpickling an object | I'm trying to pickle an instance of a class in one module, and unpickle it in another.
Here's where I pickle:
import cPickle
def pickleObject():
object = Foo()
savefile = open('path/to/file', 'w')
cPickle.dump(object, savefile, cPickle.HIGHEST_PROTOCOL)
class Foo(object):
(...)
and here's where I try to unpickle:
savefile = open('path/to/file', 'r')
object = cPickle.load(savefile)
On that second line, I get AttributeError: 'module' object has no attribute 'Foo'
Anyone see what I'm doing wrong?
| [
"class Foo must be importable via the same path in the unpickling environment so that the pickled object can be reinstantiated. \nI think your issue is that you define Foo in the module that you are executing as main (__name__ == \"__main__\"). Pickle will serialize the path (not the class object/definition!!!) t... | [
25,
3
] | [] | [] | [
"pickle",
"python"
] | stackoverflow_0003614379_pickle_python.txt |
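The module-path point can be demonstrated directly: a pickle stores the class's module and qualified name, not its code, which is why `Foo` must be importable at that same path when unpickling:

```python
import pickle

class Foo(object):
    pass

data = pickle.dumps(Foo())

# the byte stream records where to re-import the class from --
# unpickling does the equivalent of "from <module> import Foo"
assert b"Foo" in data
assert Foo.__module__.encode() in data
print("pickle references %s.Foo" % Foo.__module__)
```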
Q:
import django settings in app-engine-patch
I have a problem with Django settings.
My app runs with app-engine-patch.
I added a script that runs without django, and is reached directly via the app.yaml handlers.
I then get this error:
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/conf/__init__.py", line 53, in _import_settings
raise EnvironmentError, "Environment variable %s is undefined." % ENVIRONMENT_VARIABLE
<type 'exceptions.EnvironmentError'>: Environment variable DJANGO_SETTINGS_MODULE is undefined.
I found this tip in google:
# Force Django to reload its settings.
from django.conf import settings
settings._target = None
# Must set this env var before importing any part of Django
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
But then I got this error:
raise EnvironmentError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
<type 'exceptions.EnvironmentError'>: Could not import settings 'settings.py' (Is it on sys.path? Does it have syntax errors?): No module named ragendja.settings_pre
I think it is a problem with app-engine-patch paths modification, how can I import settings_pre correctly?
Thanks!
A:
Change
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings.py'
to
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
The value of the DJANGO_SETTINGS_MODULE is the name of the module (ie, as you would write it in an import statement in a Python script), not the path to the module.
A:
Thanks to another question, I replaced the beginning with:
from common.appenginepatch.aecmd import setup_env
setup_env(manage_py_env=True)
This imports all the settings and my task can run without reference to Django
| import django settings in app-engine-patch | I have a problem with Django settings.
My app runs with app-engine-patch.
I added a script that runs without django, and is reached directly via the app.yaml handlers.
I then get this error:
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/conf/__init__.py", line 53, in _import_settings
raise EnvironmentError, "Environment variable %s is undefined." % ENVIRONMENT_VARIABLE
<type 'exceptions.EnvironmentError'>: Environment variable DJANGO_SETTINGS_MODULE is undefined.
I found this tip in google:
# Force Django to reload its settings.
from django.conf import settings
settings._target = None
# Must set this env var before importing any part of Django
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
But then I got this error:
raise EnvironmentError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
<type 'exceptions.EnvironmentError'>: Could not import settings 'settings.py' (Is it on sys.path? Does it have syntax errors?): No module named ragendja.settings_pre
I think it is a problem with app-engine-patch paths modification, how can I import settings_pre correctly?
Thanks!
| [
"Change\nos.environ['DJANGO_SETTINGS_MODULE'] = 'settings.py' \n\nto\nos.environ['DJANGO_SETTINGS_MODULE'] = 'settings'\n\nThe value of the DJANGO_SETTINGS_MODULE is the name of the module (ie, as you would write it in an import statement in a Python script), not the path to the module.\n",
"Thanks to another que... | [
3,
0
] | [] | [] | [
"app_engine_patch",
"django",
"google_app_engine",
"python"
] | stackoverflow_0003579544_app_engine_patch_django_google_app_engine_python.txt |
Q:
How to check if key exists in datastore without returning the object
I want to be able to check if a key_name for my model exists in the datastore.
My code goes:
t=MyModel.get_by_key_name(c)
if t==None:
#key_name does not exist
I don't need the object, so is there a way (which would be faster and use fewer resources) to check if the object exists without returning it? I only know the key name, not the key.
A:
You can't avoid get_by_key_name() or key-related equivalents to check if a key exists. Your code is fine.
A:
The API documents Model.all(keys_only=False); with keys_only set to True it returns just the keys rather than full entities.
Look at the query that is fired for this, and then you can write a query similar to this but just for your object and see if any row is fetched or not.
| How to check if key exists in datastore without returning the object | I want to be able to check if a key_name for my model exists in the datastore.
My code goes:
t=MyModel.get_by_key_name(c)
if t==None:
#key_name does not exist
I don't need the object, so is there a way (which would be faster and use fewer resources) to check if the object exists without returning it? I only know the key name, not the key.
| [
"You can't avoid get_by_key_name() or key-related equivalents to check if a key exists. Your code is fine.\n",
"The API talks about Model.all(keys_only=False) returning all the key names when keys_only is set to True\nLook at the query that is fired for this, and then you can write a query similar to this but j... | [
4,
1
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003614521_google_app_engine_python.txt |
Q:
Django failing to route url (simple question)
I'm doing something stupid, and I'm not sure what it is. I have the following urls.py in the root of my django project:
from django.conf.urls.defaults import *
from django.conf import settings
urlpatterns = patterns('',
(r'^$', include('preview_signup.urls')),
)
In my preview_signup module (django app) I have the following urls.py file:
from django.conf.urls.defaults import *
urlpatterns = patterns('django.views.generic.simple',
(r'^thanks/$', 'direct_to_template', {'template': 'thankyou.html'})
)
The urls.py above doesn't work when I go to http://localhost:8000/thanks/. But if it's changed to this:
from django.conf.urls.defaults import *
urlpatterns = patterns('django.views.generic.simple',
(r'^$', 'direct_to_template', {'template': 'thankyou.html'})
)
And I go to http://localhost:8000/ it works fine.
What am I doing wrong?
A:
This code should work:
urlpatterns = patterns('',
(r'^', include('preview_signup.urls')),
)
$ (end of line) just removed.
A:
When something goes wrong (or even if it doesn't), thoroughly read the django docs. Here's an excerpt from the aforementioned link:
from django.conf.urls.defaults import *
urlpatterns = patterns('',
(r'^weblog/', include('django_website.apps.blog.urls.blog')),
(r'^documentation/', include('django_website.apps.docs.urls.docs')),
(r'^comments/', include('django.contrib.comments.urls')),
)
Note that the regular expressions in
this example don't have a $
(end-of-string match character) but do
include a trailing slash. Whenever
Django encounters include(), it chops
off whatever part of the URL matched
up to that point and sends the
remaining string to the included
URLconf for further processing.
| Django failing to route url (simple question) | I'm doing something stupid, and I'm not sure what it is. I have the following urls.py in the root of my django project:
from django.conf.urls.defaults import *
from django.conf import settings
urlpatterns = patterns('',
(r'^$', include('preview_signup.urls')),
)
In my preview_signup module (django app) I have the following urls.py file:
from django.conf.urls.defaults import *
urlpatterns = patterns('django.views.generic.simple',
(r'^thanks/$', 'direct_to_template', {'template': 'thankyou.html'})
)
The urls.py above doesn't work when I go to http://localhost:8000/thanks/. But if it's changed to this:
from django.conf.urls.defaults import *
urlpatterns = patterns('django.views.generic.simple',
(r'^$', 'direct_to_template', {'template': 'thankyou.html'})
)
And I go to http://localhost:8000/ it works fine.
What am I doing wrong?
| [
"This code should work:\nurlpatterns = patterns('',\n (r'^', include('preview_signup.urls')),\n)\n\n$ (end of line) just removed.\n",
"When something goes wrong (or even if it doesn't), thoroughly read the django docs. Here's an excerpt from the aforementioned link:\nfrom django.conf.urls.defaults import *\n\... | [
4,
1
] | [] | [] | [
"django",
"django_urls",
"python"
] | stackoverflow_0003614594_django_django_urls_python.txt |
Q:
Python: open a file *with* script?
I have a python script bundled into an application (I'm on a mac) and have the application set to be able to open .zip files. But when I say "open foo.zip with bar.py" how do I access the file that I have passed to it?
Additional info:
Using tkinter.
What's a good way to debug this, as there is no terminal to pass info to?
A:
You should be using sys.argv[1]
task = sys.argv[1].decode('utf-8')
if task == u'uppercase':
pass
elif task == u'openitems':
item_paths = sys.argv[2:]
for itempath in item_paths:
itempath = itempath.decode('utf-8')
A:
If I'm not greatly mistaken, it should pass the name of the file as the first argument to the script - sys.argv[1].
| Python: open a file *with* script? | I have a python script bundled into an application (I'm on a mac) and have the application set to be able to open .zip files. But when I say "open foo.zip with bar.py" how do I access the file that I have passed to it?
Additional info:
Using tkinter.
What's a good way to debug this, as there is no terminal to pass info to?
| [
"You should be using sys.argv[1]\ntask = sys.argv[1].decode('utf-8')\nif task == u'uppercase':\n pass\nelif task == u'openitems':\n item_paths = sys.argv[2:]\n for itempath in item_paths:\n itempath = itempath.decode('utf-8')\n\n",
"If I'm not greatly mistaken, it should pass the name of the file ... | [
1,
0
] | [] | [] | [
"macos",
"python",
"tkinter"
] | stackoverflow_0003614609_macos_python_tkinter.txt |
Q:
How do I serve and log my current directory with a python web server?
I need to create a webserver that will respond to GET requests by serving pages from a specified folder, as well as log the pages the user is GETting, and the IP of the user.
The main trouble comes from me not knowing how to serve the directory listing to the user when overriding the do_GET method. Here is my code so far:
#!/usr/bin/env python
import logging
import SimpleHTTPServer
import SocketServer
import SimpleHTTPServer
import BaseHTTPServer
import os
PORT = 8001
LOG_FILENAME = 'log.txt'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
class MyHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
def do_GET(self):
try:
#self.send_response(200)
#self.send_header('Content-type', 'text/html')
#self.end_headers();
#self.list_directory(self.path)
#os.listdir()
logging.debug('Test text')
except IOError:
print "nothing"
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", PORT), MyHandler)
print "serving at port", PORT
httpd.serve_forever()
A:
You need to use dir_listing() to list directories.
Rather than writing it here, I would suggest you look at the Python cookbook recipes for detailed directions and understanding.
http://code.activestate.com/recipes/392879-my-first-application-server/
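To serve the directory listing, you don't have to reimplement it: override do_GET to do the logging, then delegate to SimpleHTTPRequestHandler.do_GET, which already serves files and directory indexes from the current directory. A sketch (the compatibility import is just so it runs on Python 2 or 3; the handler name is illustrative):

```python
import logging

try:                                      # Python 2
    from SimpleHTTPServer import SimpleHTTPRequestHandler
except ImportError:                       # Python 3
    from http.server import SimpleHTTPRequestHandler

logging.basicConfig(filename="log.txt", level=logging.DEBUG)

class LoggingHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        # Record the requested page and the client's IP ...
        logging.debug("GET %s from %s", self.path, self.client_address[0])
        # ... then let the stock handler serve the file / directory listing.
        SimpleHTTPRequestHandler.do_GET(self)
```

Note that the original snippet passes MyHandler to TCPServer, so the separate Handler = SimpleHTTPServer.SimpleHTTPRequestHandler line is unused; pass your subclass directly instead.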
| How do I serve and log my current directory with a python web server? | I need to create a webserver that will respond to GET requests by serving pages from a specified folder, as well as log the pages the user is GETting, and the IP of the user.
The main trouble comes from me not knowing how to serve the directory listing to the user when overriding the do_GET method. Here is my code so far:
#!/usr/bin/env python
import logging
import SimpleHTTPServer
import SocketServer
import SimpleHTTPServer
import BaseHTTPServer
import os
PORT = 8001
LOG_FILENAME = 'log.txt'
logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG)
class MyHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
def do_GET(self):
try:
#self.send_response(200)
#self.send_header('Content-type', 'text/html')
#self.end_headers();
#self.list_directory(self.path)
#os.listdir()
logging.debug('Test text')
except IOError:
print "nothing"
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", PORT), MyHandler)
print "serving at port", PORT
httpd.serve_forever()
| [
"You need to use dir_listing() to list directories.\nRather than writing it here, I would suggest you look at the python cookbook/ recipes for detailed directions and understanding.\n\nhttp://code.activestate.com/recipes/392879-my-first-application-server/\n\n"
] | [
1
] | [] | [] | [
"get",
"logging",
"python",
"webserver"
] | stackoverflow_0003614729_get_logging_python_webserver.txt |
Q:
How to apply a "mixin" class to an old-style base class
I've written a mixin class that's designed to be layered on top of a new-style class, for example via
class MixedClass(MixinClass, BaseClass):
pass
What's the smoothest way to apply this mixin to an old-style class? It is using a call to super in its __init__ method, so this will presumably (?) have to change, but otherwise I'd like to make as few changes as possible to MixinClass. I should be able to derive a subclass that makes the necessary changes.
I'm considering using a class decorator on top of a class derived from BaseClass, e.g.
@old_style_mix(MixinOldSchoolRemix)
class MixedWithOldStyleClass(OldStyleClass)
where MixinOldSchoolRemix is derived from MixinClass and just re-implements methods that use super to instead use a class variable that contains the class it is mixed with, in this case OldStyleClass. This class variable would be set by old_style_mix as part of the mixing process.
old_style_mix would just update the class dictionary of e.g. MixedWithOldStyleClass with the contents of the mixin class (e.g. MixinOldSchoolRemix) dictionary.
Is this a reasonable strategy? Is there a better way? It seems like this would be a common problem, given that there are numerous available modules still using old-style classes.
A:
This class variable would be set by
old_style_mix as part of the mixing
process.
...I assume you mean: "...on the class it's decorating..." as opposed to "on the class that is its argument" (the latter would be a disaster).
old_style_mix would just update the
class dictionary of e.g.
MixedWithOldStyleClass with the
contents of the mixin class (e.g.
MixinOldSchoolRemix) dictionary.
No good -- the information that MixinOldSchoolRemix derives from MixinClass, for example, is not in the former's dictionary. So, old_style_mix must take a different strategy: for example, build a new class (which I believe has to be a new-style one, because old-style ones do not accept new-style ones as __bases__) with the appropriate sequence of bases, as well as a suitably tweaked dictionary.
Is this a reasonable strategy?
With the above provisos.
It seems like this would be a common
problem, given that there are numerous
available modules still using
old-style classes.
...but mixins with classes that were never designed to take mixins are definitely not a common design pattern, so the problem isn't common at all (I don't remember seeing it even once in the many years since new-style classes were born, and I was actively consulting, teaching advanced classes, and helping people with Python problems for many of those years, as well as doing a lot of software development myself -- I do tend to have encountered any "reasonably common" problem that people may have with features which have been around long enough!-).
Here's example code for what your class decorator could do (if you prefer to have it in a class decorator rather than directly inline...):
>>> class Mixo(object):
... def foo(self):
... print 'Mixo.foo'
... self.thesuper.foo(self)
...
>>> class Old:
... def foo(self):
... print 'Old.foo'
...
>>> class Mixed(Mixo, Old):
... thesuper = Old
...
>>> m = Mixed()
>>> m.foo()
Mixo.foo
Old.foo
If you want to build Mixed under the assumed name/binding of Mixo in your decorator, you could do it with a call to type, or by setting Mixed.__name__ = cls.__name__ (where cls is the class you're decorating). I think the latter approach is simpler (warning, untested code -- the above interactive shell session is a real one, but I have not tested the following code):
def oldstylemix(mixin):
def makemix(cls):
class Mixed(mixin, cls):
thesuper = cls
Mixed.__name__ = cls.__name__
return Mixed
return makemix
| How to apply a "mixin" class to an old-style base class | I've written a mixin class that's designed to be layered on top of a new-style class, for example via
class MixedClass(MixinClass, BaseClass):
pass
What's the smoothest way to apply this mixin to an old-style class? It is using a call to super in its __init__ method, so this will presumably (?) have to change, but otherwise I'd like to make as few changes as possible to MixinClass. I should be able to derive a subclass that makes the necessary changes.
I'm considering using a class decorator on top of a class derived from BaseClass, e.g.
@old_style_mix(MixinOldSchoolRemix)
class MixedWithOldStyleClass(OldStyleClass)
where MixinOldSchoolRemix is derived from MixinClass and just re-implements methods that use super to instead use a class variable that contains the class it is mixed with, in this case OldStyleClass. This class variable would be set by old_style_mix as part of the mixing process.
old_style_mix would just update the class dictionary of e.g. MixedWithOldStyleClass with the contents of the mixin class (e.g. MixinOldSchoolRemix) dictionary.
Is this a reasonable strategy? Is there a better way? It seems like this would be a common problem, given that there are numerous available modules still using old-style classes.
| [
"\nThis class variable would be set by\n old_style_mix as part of the mixing\n process.\n\n...I assume you mean: \"...on the class it's decorating...\" as opposed to \"on the class that is its argument\" (the latter would be a disaster).\n\nold_style_mix would just update the\n class dictionary of e.g.\n MixedW... | [
1
] | [] | [] | [
"inheritance",
"mixins",
"python"
] | stackoverflow_0003614792_inheritance_mixins_python.txt |
Q:
Switch between versions of Python?
I just installed Python 2.7, but IDLE is currently broken on OS X 10.6.4. Is there any way I can revert to the earlier, Apple installed, version? A simple PATH adjustment, perhaps?
Right now $PATH looks like this for me:
/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin:
A:
/usr/bin/python is where Apple puts (the symlink to) the system version of Python -- so, just remove that first item from the PATH, and you should be fine.
A:
The default version is in /usr/bin, so just do a
export PATH=/usr/bin:$PATH
(Adjust the command according to your choice of shell)
It is simply a matter of setting the path. Look in /Library/Frameworks/Python.framework/Versions/ for the different versions
I have the following aliases in my .profile
alias python25="export PATH=/usr/bin:${PATH}"
alias python26="export PATH=/Library/Frameworks/Python.framework/Versions/2.6/bin:${PATH}"
alias pythonepd="export PATH=/Library/Frameworks/Python.framework/Versions/6.2/bin:${PATH}"
Switching between versions is then just a matter of a simple command.
A:
If you want to continue to use Python 2.7, just replace it using the other, 32-bit only (10.3 and above) OS X installer available at the python.org download link. IDLE for 2.7 is only broken when using the 10.5 and above 64-bit installer; see Issue 9227.
If you really do want to remove Python 2.7 as your default Python, you'll need to undo the PATH change that the Python Installer makes by default to various shell login scripts, ~/.bash_profile or ~/.profile. It leaves the original files as ~/.bash_profile.pysave and ~/.profile.pysave. So you can compare them and just move the original back. For example, if your login shell is bash:
$ diff .bash_profile{,.pysave} # does it look ok?
$ mv .bash_profile.pysave .bash_profile
| Switch between versions of Python? | I just installed Python 2.7, but IDLE is currently broken on OS X 10.6.4. Is there any way I can revert to the earlier, Apple installed, version? A simple PATH adjustment, perhaps?
Right now $PATH looks like this for me:
/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/X11/bin:
| [
"/usr/bin/python is where Apple puts (the symlink to) the system version of Python -- so, just remove that first item from the PATH, and you should be fine.\n",
"The default version is in /usr/bin, so just do a\nexport PATH=/usr/bin:$PATH\n\n(Adjust the command according to your choice of shell)\nIt is simply a m... | [
2,
2,
0
] | [] | [] | [
"macos",
"path",
"python"
] | stackoverflow_0003614898_macos_path_python.txt |
Q:
Python raw_input("") error
I am writing a simple commandline script that uses raw_input, but it doesn't seem to work.
This code:
print "Hello!"
raw_input("")
Produces this error:
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
raw_input("")
TypeError: 'str' object is not callable
I have never encountered this error before, and couldn't find anything on Google. I am using Python 2.6 on Windows 7.
A:
Works fine as presented, e.g. in an interpreter prompt in any Python 2 version:
>>> print "Hello!"
Hello!
>>> raw_input("")
bah
'bah'
>>>
where bah is what I typed after the code you gave in response to the empty-prompt;-).
The only explanation for the error you mention is that you've run other code before this which bound the identifier raw_input to a string.
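A minimal reproduction of that failure mode (the assignment below stands in for whatever earlier statement did the shadowing):

```python
# An earlier statement like this rebinds the name to a string ...
raw_input = "oops, no longer the builtin"

# ... so the later "call" is really calling a str object, which fails.
try:
    raw_input("")
except TypeError as e:
    print(e)  # 'str' object is not callable
```

In a fresh interpreter (or, in Python 2, after del raw_input at module level), the builtin works again.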
A:
It appears you are using something called pyshell. There is likely a bug there in that shell itself. Try just using vanilla bash.
| Python raw_input("") error | I am writing a simple commandline script that uses raw_input, but it doesn't seem to work.
This code:
print "Hello!"
raw_input("")
Produces this error:
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
raw_input("")
TypeError: 'str' object is not callable
I have never encountered this error before, and couldn't find anything on Google. I am using Python 2.6 on Windows 7.
| [
"Works fine as presented, e.g. in an interpreter prompt in any Python 2 version:\n>>> print \"Hello!\"\nHello!\n>>> raw_input(\"\")\nbah\n'bah'\n>>> \n\nwhere bah is what I typed after the code you gave in response to the empty-prompt;-).\nThe only explanation for the error you mention is that you've performed othe... | [
2,
2
] | [] | [] | [
"python",
"python_2.6",
"raw_input",
"windows"
] | stackoverflow_0003615228_python_python_2.6_raw_input_windows.txt |
Q:
python: return value from __new__
EDIT
I actually called object.__new__(cls), and I didn't realize that by this I built an object of class cls! Thanks for pointing this out to me.
ORIGINAL QUESTION
The documentation says
If __new__() does not return an instance of cls, then the new instance's __init__() method will not be invoked.
However, when I return object.__new__() from cls.__new__(), the __init__() is still invoked. I wouldn't consider an instance of object to qualify as an instance of cls. What am I missing?
A:
Cannot reproduce your observation:
>>> class cls(object):
... def __new__(cls):
... return object.__new__(object)
... def __init__(self):
... print 'in __init__'
...
>>> x = cls()
>>>
As you see, cls.__init__ isn't executing.
How are you calling object.__new__ (and, btw, why are you?-).
| python: return value from __new__ | EDIT
I actually called object.__new__(cls), and I didn't realize that by this I built an object of class cls! Thanks for pointing this out to me.
ORIGINAL QUESTION
The documentation says
If __new__() does not return an instance of cls, then the new instance's __init__() method will not be invoked.
However, when I return object.__new__() from cls.__new__(), the __init__() is still invoked. I wouldn't consider an instance of object to qualify as an instance of cls. What am I missing?
| [
"Cannot reproduce your observation:\n>>> class cls(object):\n... def __new__(cls):\n... return object.__new__(object)\n... def __init__(self):\n... print 'in __init__'\n... \n>>> x = cls()\n>>> \n\nAs you see, cls.__init__ isn't executing.\nHow are you calling object.__new__ (and, btw, why are you?-).\n... | [
4
] | [] | [] | [
"constructor",
"python"
] | stackoverflow_0003615299_constructor_python.txt |
Q:
Python 2.7, ValueError when dealing with HTMLParser
First time working with the HTMLParser module. Trying to use standard string formatting on the output, but it's giving me an error. The following code:
import urllib2
from HTMLParser import HTMLParser
class LinksParser(HTMLParser):
def __init__(self, url):
HTMLParser.__init__(self)
req = urllib2.urlopen(url)
self.feed(req.read())
def handle_starttag(self, tag, attrs):
if tag != 'a': return
for name, value in attrs:
print("Found Link --> {]".format(value))
if __name__ == "__main__":
LinksParser("http://www.facebook.com"
Produces the following error:
File "C:\Users\workspace\test\src\test.py", line 15, in handle_starttag
print("Found Link --> {]".format(value))
ValueError: unmatched '{' in format
A:
print("Found Link --> {]".format(value))
Should instead be:
print("Found Link --> {}".format(value))
You used a square bracket instead of a brace.
A:
This format string looks broken: print("Found Link --> {]".format(value)). You need to change this to print("Found Link --> {key}".format(key = value)).
A:
There are several problems
the print statement in handle_starttag should be indented
in the last line you're missing the closing parenthesis
in the print statement in handle_starttag you should use {0} instead of {]
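Putting all three fixes together, here is a corrected sketch of the parser; the cross-version import and the literal HTML snippet (instead of the urllib2 fetch) are just so it runs standalone:

```python
try:                                   # Python 2
    from HTMLParser import HTMLParser
except ImportError:                    # Python 3
    from html.parser import HTMLParser

class LinksParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != 'a':
            return
        for name, value in attrs:
            if name == 'href':
                # properly indented, and {0} (or just {}) instead of "{]"
                print("Found Link --> {0}".format(value))
                self.links.append(value)

parser = LinksParser()
parser.feed('<a href="http://example.com/">example</a>')  # closing paren present
```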
| Python 2.7, ValueError when dealing with HTMLParser | First time working with the HTMLParser module. Trying to use standard string formatting on the output, but it's giving me an error. The following code:
import urllib2
from HTMLParser import HTMLParser
class LinksParser(HTMLParser):
def __init__(self, url):
HTMLParser.__init__(self)
req = urllib2.urlopen(url)
self.feed(req.read())
def handle_starttag(self, tag, attrs):
if tag != 'a': return
for name, value in attrs:
print("Found Link --> {]".format(value))
if __name__ == "__main__":
LinksParser("http://www.facebook.com"
Produces the following error:
File "C:\Users\workspace\test\src\test.py", line 15, in handle_starttag
print("Found Link --> {]".format(value))
ValueError: unmatched '{' in format
| [
"print(\"Found Link --> {]\".format(value)) \n\nShould instead be:\nprint(\"Found Link --> {}\".format(value))\n\nYou used a square bracket instead of a brace.\n",
"This format string looks broken: print(\"Found Link --> {]\".format(value)). You need to change this to print(\"Found Link --> {key}\".format(key = v... | [
2,
0,
0
] | [] | [] | [
"html_parsing",
"python"
] | stackoverflow_0003615447_html_parsing_python.txt |
Q:
Is there a way to serve up a Python dictionary to a compatible type in Visual Basic 6 using win32com?
Is there a way to serve up a Python dictionary to a compatible type in Visual Basic 6 using win32com?
A:
I shudder to think of the requirements for this project. I feel sorry for you already.
Since there is no dictionary type in COM, my guess is that you'll have to pass it out as two SAFEARRAYS and join it back together inside VB. That's the approach I would take.
I found this helpful, especially the second half: http://oreilly.com/catalog/pythonwin32/chapter/ch12.html
And then this article on working with COM datatypes in VB: http://theunknownuser.com/code/COMObjectsC.html
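The two-SAFEARRAY idea in plain Python terms: pass the keys and the values out as two parallel lists (which marshal as SAFEARRAYs over COM) and let the VB 6 side walk them in pairs. A sketch of just the Python half, with a round-trip check standing in for the VB code; the function name is illustrative:

```python
def dict_to_parallel_lists(d):
    # Two same-length lists; each marshals cleanly as a SAFEARRAY over COM.
    keys = list(d.keys())
    values = [d[k] for k in keys]
    return keys, values

config = {"host": "localhost", "port": 8080}
keys, values = dict_to_parallel_lists(config)

# What the VB 6 caller would conceptually reassemble (e.g. into a
# Scripting.Dictionary), shown here as a Python round-trip:
rebuilt = dict(zip(keys, values))
```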
| Is there a way to serve up a Python dictionary to a compatible type in Visual Basic 6 using win32com? | Is there a way to serve up a Python dictionary to a compatible type in Visual Basic 6 using win32com?
| [
"I shudder to think of the requirements for this project. I feel sorry for you already.\nSince there is no dictionary type in COM, my guess is that you'll have to pass it out as two SAFEARRAYS and join it back together inside VB. That's the approach I would take.\nI found this helpful, especially the second half:... | [
0
] | [] | [] | [
"python",
"vb6",
"win32com"
] | stackoverflow_0003613403_python_vb6_win32com.txt |
Q:
Why are there so many Pythons installed in /usr/bin for my Snow Leopard? What decides which one is the System Python?
Why are there so many Pythons installed in /usr/bin for my Snow Leopard? What decides which one is the System Python?
When I simply type "python" it is 2.6.1, but this doesn't seem to be the "System Python"; why not? How does one change the system Python, and what are the drawbacks?
A:
My snow leopard only has python 2.5 and 2.6 installed, so it's not that many. You may have additional pythons installed (i.e. python3.0), either system wide (in /usr/bin/) or through macports (/opt/local).
The default system python is defined through a setting,
defaults write com.apple.versioner.python Version 2.5
will change the default to 2.5. You can also use an environment variable, i.e. for bash:
export VERSIONER_PYTHON_VERSION=2.5
All of this is documented in the python manpage,
man python
Overall, it's better not to change the system default. It's what OSX may depend on for certain scripts, and you never know if these scripts work as expected on different versions. Especially Python 3 is different, and may really break your whole system.
If you want a different python to be used for your own scripts, either
Use virtualenv (always good)
Change your PATH and make sure your preferred python is included before the /usr/sbin one
Be explicit, invoke the script using /my/preferred/python
| Why are there so many Pythons installed in /usr/bin for my Snow Leopard? What decides which one is the System Python? | Why are there so many Pythons installed in /usr/bin for my Snow Leopard? What decides which one is the System Python?
When I simply type "python" it is 2.6.1, but this doesn't seem to be the "System Python"; why not? How does one change the system Python, and what are the drawbacks?
| [
"My snow leopard only has python 2.5 and 2.6 installed, so it's not that many. You may have additional pythons installed (i.e. python3.0), either system wide (in /usr/bin/) or through macports (/opt/local). \nThe default system python is defined through a setting,\ndefaults write com.apple.versioner.python Version ... | [
3
] | [] | [] | [
"macos",
"osx_snow_leopard",
"python",
"system"
] | stackoverflow_0003615630_macos_osx_snow_leopard_python_system.txt |
Q:
Would someone explain to me why type(foo)(bar) is so heavily discouraged?
I have a dict of configuration variables that looks something like this:
self.config = {
"foo": "abcdef",
"bar": 42,
"xyz": True
}
I want to be able to update these variables from user input (which, in this case, will always be in the form of a string). The problem I'm facing is obvious, and my first solution seemed good enough to me:
def updateconfig(self, key, value):
if key in self.config:
self.config[key] = type(self.config[key])(value)
However, #python in Freenode almost seemed offended that I would suggest such a solution. Could someone tell me why this is bad practice?
A:
Not all types support the idiom "call the type with a string to make a new instance of that type". However, if you ensure you only have such types in your config dict (with a sanity check at init time maybe), and put suitable try/except protection around your conversion attempt (to deal with user errors such as typos in a far better way than dying with a stack trace would be;-), there's nothing "inherently wrong" in using that functionality for the types that do support it.
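With those provisos applied, here is a sketch of the guarded version; note one gotcha with the idiom: bool does not round-trip from user strings, since bool("False") is True (only the empty string is falsy):

```python
class Settings(object):
    def __init__(self, defaults):
        self.config = dict(defaults)

    def updateconfig(self, key, value):
        if key not in self.config:
            return
        target_type = type(self.config[key])
        try:
            self.config[key] = target_type(value)
        except (TypeError, ValueError):
            # e.g. int("oops") -- keep the old value instead of dying
            print("could not convert %r to %s" % (value, target_type.__name__))

s = Settings({"foo": "abcdef", "bar": 42, "xyz": True})
s.updateconfig("bar", "17")    # converts to int 17
s.updateconfig("bar", "oops")  # rejected; bar stays 17
s.updateconfig("xyz", "False") # gotcha: bool("False") is True
```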
A:
Not to mention that there is a config module in Python. Here is how I would deal with config, with the integer value "bar" as an example. +1 for Alex; note that '' is the way to say False (see value "xyz")!
config = {
"foo": "abcdef",
"bar": "42",
"xyz": "True" ## or 'Yes' or anything not False, "" for False
}
bar = ''
while not bar:
barinput = raw_input('Enter property bar, integer (1..99): ')
try:
if 0 < int(barinput) < 100:
pass
else:
raise ValueError("%s is not integer in range 1..99" % barinput)
except ValueError as e:
print(str(e)+"\nWrong input, try again")
else:
print("Saving correct value")
bar = config["bar"] = barinput
print('New value of "bar" in config: %i' % int(config["bar"]))
The value could also be saved as an int in config, but we have no need for type since we know we are inputting an integer.
| Would someone explain to me why type(foo)(bar) is so heavily discouraged? | I have a dict of configuration variables that looks something like this:
self.config = {
"foo": "abcdef",
"bar": 42,
"xyz": True
}
I want to be able to update these variables from user input (which, in this case, will always be in the form of a string). The problem I'm facing is obvious, and my first solution seemed good enough to me:
def updateconfig(self, key, value):
if key in self.config:
self.config[key] = type(self.config[key])(value)
However, #python in Freenode almost seemed offended that I would suggest such a solution. Could someone tell me why this is bad practice?
| [
"Not all types support the idiom \"call the type with a string to make a new instance of that type\". However, if you ensure you only have such types in your config dict (with a sanity check at init time maybe), and put suitable try/except protection around your conversion attempt (to deal with user errors such as... | [
5,
0
] | [] | [] | [
"python",
"types",
"user_input"
] | stackoverflow_0003614246_python_types_user_input.txt |
Q:
How do you make a choices field in django with an editable "other" option?
(
('one', 'One'),
('two', 'Two'),
('other', EDITABLE_HUMAN_READABLE_CHOICE),
)
So what I would like is a choices field with some common choices that are used frequently, but still be able to have the option of filling in a custom human readable value.
Is this possible or is there some better way of doing this that I am completely missing?
A:
One way to do this would be to use a custom ModelForm for admin. This form can have two fields - one that accepts a set of predefined choices and another one that accepts arbitrary values. In the clean() method you can ensure that only one of these has been selected.
If you are particular about how the UI should look - say, radio buttons that allow you to choose either a pre-defined value xor enter a custom value, then you may have to come up with your own custom field.
A:
A quick solution I used:
used standard ModelChoice Fields
custom Form, which added a regular Input Field for every ModelChoice Field
custom JQuery which showed the regular Input Field only, when the last choice is selected
in the View, when I got POST data I parsed the POST array and created the Models
| How do you make a choices field in django with an editable "other" option? | (
('one', 'One'),
('two', 'Two'),
('other', EDITABLE_HUMAN_READABLE_CHOICE),
)
So what I would like is a choices field with some common choices that are used frequently, but still be able to have the option of filling in a custom human readable value.
Is this possible or is there some better way of doing this that I am completely missing?
| [
"One way to do this would be to use a custom ModelForm for admin. This form can have two fields - one that accepts a set of predefined choices and another one that accepts arbitrary values. In the clean() method you can ensure that only one of these has been selected. \nIf you are particular about how the UI should... | [
7,
3
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003614237_django_python.txt |
Q:
How to assure that filehandle.write() does not fail due to str/bytes conversions issues?
I need to detect if a filehandle is using binary mode or text mode - this is required in order to be able to encode/decode str/bytes. How can I do that?
When using binary mode myfile.write(bytes) works, and when in text mode myfile.write(str) works.
The idea is that I need to know this in order to be able to encode/decode the argument before calling myfile.write(), otherwise it may fail with an exception.
A:
http://docs.python.org/library/stdtypes.html#file.mode
>>> f = open("blah.txt", "wb")
>>> f
<open file 'blah.txt', mode 'wb' at 0x0000000001E44E00>
>>> f.mode
'wb'
>>> "b" in f.mode
True
With this caveat:
file.mode
The I/O mode for the file. If the file was created using the open() built-in function, this will be the value of the mode parameter. This is a read-only attribute and may not be present on all file-like objects.
A:
How about solving your problem this way:
try:
f.write(msg)
except TypeError:
f.write(msg.encode("utf-8"))
This will work even if your handle does not provide a .mode.
| How to assure that filehandle.write() does not fail due to str/bytes conversions issues? | I need to detect if a filehandle is using binary mode or text mode - this is required in order to be able to encode/decode str/bytes. How can I do that?
When using binary mode myfile.write(bytes) works, and when in text mode myfile.write(str) works.
The idea is that I need to know this in order to be able to encode/decode the argument before calling myfile.write(), otherwise it may fail with an exception.
| [
"http://docs.python.org/library/stdtypes.html#file.mode\n>>> f = open(\"blah.txt\", \"wb\")\n>>> f\n<open file 'blah.txt', mode 'wb' at 0x0000000001E44E00>\n>>> f.mode\n'wb'\n>>> \"b\" in f.mode\nTrue\n\nWith this caveat:\n\nfile.mode\nThe I/O mode for the file. If the file was created using the open()\nbuilt-in fu... | [
4,
1
] | [] | [] | [
"file_io",
"filehandle",
"python",
"python_3.x"
] | stackoverflow_0003611102_file_io_filehandle_python_python_3.x.txt |
Q:
How to select questions which have no answers, in sqlalchemy
I've two classes: Question and Answer. A question may have 0 or many answers.
class Question(Base):
__tablename__ = "questions"
answers = relationship('Answer', backref='question',
primaryjoin="Question.id==Answer.question_id")
class Answer(Base):
__tablename__ = "answers"
Now I want to find all the questions have no answers, how to do it?
I tried:
Session.query(Question).filter('count(Question.answers)==0').all()
It is incorrect. What is the right one?
A:
Just use
session.query(Question).filter(Question.answers == None).all()
which basically is a NULL check (common filter operators).
Here's a gist example: http://gist.github.com/560473
The query generates the following SQL:
SELECT questions.id AS questions_id
FROM questions
WHERE NOT (EXISTS (SELECT 1
FROM answers
WHERE questions.id = answers.question_id))
A:
I worked it out, use not exist:
Session.query(Question).filter(not_(exists().where(Answer.question_id==Question.id))).all()
| How to select questions which have no answers, in sqlalchemy | I've two classes: Question and Answer. A question may have 0 or many answers.
class Question(Base):
__tablename__ = "questions"
answers = relationship('Answer', backref='question',
primaryjoin="Question.id==Answer.question_id")
class Answer(Base):
__tablename__ = "answers"
Now I want to find all the questions have no answers, how to do it?
I tried:
Session.query(Question).filter('count(Question.answers)==0').all()
It is incorrect. What is the right one?
| [
"Just use\nsession.query(Question).filter(Question.answers == None).all()\n\nwhich basically is a NULL check (common filter operators).\nHere's a gist example: http://gist.github.com/560473\nThe query generates the following SQL:\nSELECT questions.id AS questions_id \nFROM questions \nWHERE NOT (EXISTS (SELECT 1 \n... | [
3,
0
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0003616530_python_sqlalchemy.txt |
Q:
Have I organised my django app correctly?
I'm in a situation where I need to merge two Django apps into a single, re-usable app. Neither are particularly large, but they are certainly not trivial apps and to preserve readability / sanity I'm trying to keep the two apps separated to some extent.
I could set up each app as a sub-package (which would be a pythonic way to achieve this end), or I can stick to Django's conventions and split the functionality separately in each case.
A Pythonic 'sub-package' approach:
package
|-- __init__.py
|-- views.py
|-- models.py # imports models from both sub-packages
|-- tests.py # imports TestCase instances from both sub-packages
|-- etc. # shared stuff
|-- a
| |-- __init__.py
| |-- views.py
| |-- models.py
| |-- tests.py
| `-- etc. # a-specific code
`-- b
|-- __init__.py
|-- views.py
|-- models.py
|-- tests.py
`-- etc. # b-specific code
Or to appease the Django gods more directly:
package
|-- __init__.py
|-- views
| |-- __init__.py
| |-- a.py
| `-- b.py
|-- models
| |-- __init__.py # imports models from a and b
| |-- a.py
| `-- b.py
|-- tests
| |-- __init__.py # imports TestCase instances from a and b
| |-- a.py
| `-- b.py
`-- etc. # shared/additional files
While I'd lean towards the former at the moment (which feels a little lighter), my gut tells me that although both work (and both involve importing 'hacks' to conform to Django's structure) the best choice depends on the contents of a and b - specifically how much of the code is shared or app-specific. It doesn't feel right to be constantly repeating the __init__.py, a.py, b.py pattern in every subdirectory!
I'm interested in knowing which would is more appropriate from people with more experience dealing with Python!
ps.
I'm aware they could live as two distinct apps, but they are so inter-dependant now that I do feel like they should be merged! (even aside from the improved portability of a single Django app)
A:
Flat is better than nested.
A "composite" application, built from two peer applications is fine. It works well.
And it promotes reuse by allowing the two components to be "plug-and-play" options in the larger application.
Don't nest things unless you're forced to. The number one reason forcing you to nest is name collisions. You don't have that so you don't really need any nesting.
A:
I'm not an expert in python but I always like to separate applications and other artifacts out as much as I can.
I am following the approach that is described in this blog for my own django project and it requires a little bit of tweaking on django. It has served me well so far.
The direct link to the github project
A:
In my projects I often want to organize views and tests somehow, so I use structure like this:
package
|-- __init__.py
|-- models.py # models are in one file
|-- etc. # shared stuff
|-- tests
| |-- __init__.py
| |-- tests_orders.py
| |-- tests_clients.py
| |--
| `-- etc.
|-- views
| |-- __init__.py
| |-- orders.py
| |-- clients.py
| |--
| `-- etc.
For bigger projects having view functions in one file is a pain (for me).
This is what works for some projects I am working on - hopefully someone finds this useful also.
| Have I organised my django app correctly? | I'm in a situation where I need to merge two Django apps into a single, re-usable app. Neither are particularly large, but they are certainly not trivial apps and to preserve readability / sanity I'm trying to keep the two apps separated to some extent.
I could set up each app as a sub-package (which would be a pythonic way to achieve this end), or I can stick to Django's conventions and split the functionality separately in each case.
A Pythonic 'sub-package' approach:
package
|-- __init__.py
|-- views.py
|-- models.py # imports models from both sub-packages
|-- tests.py # imports TestCase instances from both sub-packages
|-- etc. # shared stuff
|-- a
| |-- __init__.py
| |-- views.py
| |-- models.py
| |-- tests.py
| `-- etc. # a-specific code
`-- b
|-- __init__.py
|-- views.py
|-- models.py
|-- tests.py
`-- etc. # b-specific code
Or to appease the Django gods more directly:
package
|-- __init__.py
|-- views
| |-- __init__.py
| |-- a.py
| `-- b.py
|-- models
| |-- __init__.py # imports models from a and b
| |-- a.py
| `-- b.py
|-- tests
| |-- __init__.py # imports TestCase instances from a and b
| |-- a.py
| `-- b.py
`-- etc. # shared/additional files
While I'd lean towards the former at the moment (which feels a little lighter), my gut tells me that although both work (and both involve importing 'hacks' to conform to Django's structure) the best choice depends on the contents of a and b - specifically how much of the code is shared or app-specific. It doesn't feel right to be constantly repeating the __init__.py, a.py, b.py pattern in every subdirectory!
I'm interested in knowing which would is more appropriate from people with more experience dealing with Python!
ps.
I'm aware they could live as two distinct apps, but they are so inter-dependant now that I do feel like they should be merged! (even aside from the improved portability of a single Django app)
| [
"Flat is better than nested.\nA \"composite\" application, built from two peer applications is fine. It works well.\nAnd it promotes reuse by allowing the two components to be \"plug-and-play\" options in the larger application. \nDon't nest things unless you're forced to. The number one reason forcing you to ne... | [
2,
1,
1
] | [] | [] | [
"django",
"package",
"python",
"structure"
] | stackoverflow_0003611631_django_package_python_structure.txt |
Q:
How to create an in-memory zip file with directories without touching the disk?
In a python web application, I'm packaging up some stuff in a zip-file. I want to do this completely on the fly, in memory, without touching the disk. This goes fine using ZipFile.writestr as long as I'm creating a flat directory structure, but how do I create directories inside the zip?
I'm using python2.4.
http://docs.python.org/library/zipfile.html
A:
What 'theomega' said in the comment to my original post, adding a '/' in the filename does the trick. Thanks!
from zipfile import ZipFile
from StringIO import StringIO
inMemoryOutputFile = StringIO()
zipFile = ZipFile(inMemoryOutputFile, 'w')
zipFile.writestr('OEBPS/content.xhtml', 'hello world')
zipFile.close()
inMemoryOutputFile.seek(0)
A:
Use a StringIO. It is apparently OK to use them for zipfiles.
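In Python 3 the equivalent in-memory buffer for binary data is io.BytesIO (StringIO only holds text there); the same in-memory zip, reusing the file name from the accepted answer, looks like:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    # A '/' in the archive name creates the directory inside the zip.
    zf.writestr('OEBPS/content.xhtml', 'hello world')

buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())                   # ['OEBPS/content.xhtml']
    print(zf.read('OEBPS/content.xhtml'))  # b'hello world'
```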
| How to create an in-memory zip file with directories without touching the disk? | In a python web application, I'm packaging up some stuff in a zip-file. I want to do this completely on the fly, in memory, without touching the disk. This goes fine using ZipFile.writestr as long as I'm creating a flat directory structure, but how do I create directories inside the zip?
I'm using python2.4.
http://docs.python.org/library/zipfile.html
| [
"What 'theomega' said in the comment to my original post, adding a '/' in the filename does the trick. Thanks!\nfrom zipfile import ZipFile\nfrom StringIO import StringIO\n\ninMemoryOutputFile = StringIO()\n\nzipFile = ZipFile(inMemoryOutputFile, 'w') \nzipFile.writestr('OEBPS/content.xhtml', 'hello world')\nzipFil... | [
33,
1
] | [] | [] | [
"python"
] | stackoverflow_0003610221_python.txt |
Q:
A better way to assign list into a var
Was coding something in Python. Have a piece of code, wanted to know if it can be done more elegantly...
# Statistics format is - done|remaining|200's|404's|size
statf = open(STATS_FILE, 'r').read()
starf = statf.strip().split('|')
done = int(starf[0])
rema = int(starf[1])
succ = int(starf[2])
fails = int(starf[3])
size = int(starf[4])
...
This goes on. I wanted to know if, after splitting the line into a list, there is any better way to assign each list item to a var. I have close to 30 lines assigning index values to vars. Just trying to learn more about Python, that's it...
A:
done, rema, succ, fails, size, ... = [int(x) for x in starf]
Better:
labels = ("done", "rema", "succ", "fails", "size")
data = dict(zip(labels, [int(x) for x in starf]))
print data['done']
A:
What I don't like about the answers so far is that they stick everything in one expression. You want to reduce the redundancy in your code, without doing too much at once.
If all of the items on the line are ints, then convert them all together, so you don't have to write int(...) each time:
starf = [int(i) for i in starf]
If only certain items are ints--maybe some are strings or floats--then you can convert just those:
for i in 0,1,2,3,4:
    starf[i] = int(starf[i])
Assigning in blocks is useful; if you have many items--you said you had 30--you can split it up:
done, rema, succ = starf[0:3]
fails, size = starf[3:5]
A:
I might use the csv module with a separator of | (though that might be overkill if you're "sure" the format will always be super-simple, single-line, no-strings, etc, etc). Like your low-level string processing, the csv reader will give you strings, and you'll need to call int on each (with a list comprehension or a map call) to get integers. Other tips include using the with statement to open your file, to ensure it won't cause a "file descriptor leak" (not indispensable in current CPython version, but an excellent idea for portability and future-proofing).
But I question the need for 30 separate barenames to represent 30 related values. Why not, for example, make a collections.namedtuple type with appropriately-named fields, and initialize an instance thereof, then use qualified names for the fields, i.e., a nice namespace? Remember the last koan in the Zen of Python (import this at the interpreter prompt): "Namespaces are one honking great idea -- let's do more of those!"... barenames have their (limited;-) place, but representing dozens of related values is not one -- rather, this situation "cries out" for the "let's do more of those" approach (i.e., add one appropriate namespace grouping the related fields -- a much better way to organize your data).
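That namespace suggestion can be sketched with collections.namedtuple from the standard library (the Stats name and the sample line below are made up; the field names are the ones from the question):

```python
import collections

Stats = collections.namedtuple('Stats', 'done rema succ fails size')

line = '10|90|8|2|1024'  # same done|remaining|200's|404's|size format
stats = Stats(*(int(field) for field in line.strip().split('|')))

# One qualified name per value instead of 30 barenames:
print(stats.done)   # 10
print(stats.fails)  # 2
```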
A:
Using a Python dict is probably the most elegant choice.
If you put your keys in a list as such:
keys = ("done", "rema", "succ" ... )
somedict = dict(zip(keys, [int(v) for v in values]))
That would work. :-) Looks better than 30 lines too :-)
EDIT: I think there are dict comphrensions now, so that may look even better too! :-)
EDIT Part 2: Also, for the keys collection, you'd want to break that into multiple lines.
EDIT Again: fixed buggy part :)
A:
Thanks for all the answers. So here's the summary -
Glenn's answer was to handle this issue in blocks. i.e. done, rema, succ = starf[0:2] etc.
Leoluk's approach was more short & sweet taking advantage of python's immensely powerful dict comprehensions.
Alex's answer was more design oriented. Loved this approach. I know it should be done the way Alex suggested but a lot of code refactoring needs to take place for that. Not a good time to do it now.
townsean - same as 2
I have taken up Leoluk's approach. I am not sure what the speed implications of this are. I have no idea if list/dict comprehensions take a hit on speed of execution. But it reduces the size of my code considerably for now. I'll optimize when the need comes :) Going by - "Premature optimization is the root of all evil"...
| A better way to assign list into a var | Was coding something in Python. Have a piece of code, wanted to know if it can be done more elegantly...
# Statistics format is - done|remaining|200's|404's|size
statf = open(STATS_FILE, 'r').read()
starf = statf.strip().split('|')
done = int(starf[0])
rema = int(starf[1])
succ = int(starf[2])
fails = int(starf[3])
size = int(starf[4])
...
This goes on. I wanted to know if, after splitting the line into a list, there is any better way to assign each list item to a var. I have close to 30 lines assigning index values to vars. Just trying to learn more about Python, that's it...
| [
"done, rema, succ, fails, size, ... = [int(x) for x in starf]\n\nBetter:\nlabels = (\"done\", \"rema\", \"succ\", \"fails\", \"size\")\n\ndata = dict(zip(labels, [int(x) for x in starf]))\n\nprint data['done']\n\n",
"What I don't like about the answers so far is that they stick everything in one expression. You ... | [
6,
5,
4,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003613073_python.txt |
Q:
opening file: Writing is invalid mode
When executing:
path=os.path.dirname(__file__)+'/log.txt'
log=open(path,"w",encoding='utf-8')
I get:
log=open(path,'w',encoding='utf-8')
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1203, in __init__
raise IOError('invalid mode: %s' % mode)
IOError: invalid mode: w
I'm not sure why I can't write to the file?
A:
App Engine's Python runtime supports Python 2.5 – newer versions of Python, including Python 2.6, are not currently supported. For security reasons, some Python modules written in C won't run in App Engine's sandbox. Because App Engine doesn't support writing to disk or opening direct network connections, other libraries that rely on this may not be fully usable.
http://code.google.com/appengine/kb/general.html#language
A:
You can't write to disk in App Engine. At all. You must use datastore.
| opening file: Writing is invalid mode | When executing:
path=os.path.dirname(__file__)+'/log.txt'
log=open(path,"w",encoding='utf-8')
I get:
log=open(path,'w',encoding='utf-8')
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1203, in __init__
raise IOError('invalid mode: %s' % mode)
IOError: invalid mode: w
I'm not sure why I can't write to the file?
| [
"\nApp Engine's Python runtime supports Python 2.5 – newer versions of Python, including Python 2.6, are not currently supported. For security reasons, some Python modules written in C won't run in App Engine's sandbox. Because App Engine doesn't support writing to disk or opening direct network connections, other ... | [
3,
3
] | [] | [] | [
"app_engine_patch",
"google_app_engine",
"python"
] | stackoverflow_0003616964_app_engine_patch_google_app_engine_python.txt |
Q:
Iterating over dictionary items(), values(), keys() in Python 3
If I understand correctly, in Python 2, iter(d.keys()) was the same as d.iterkeys(). But now, d.keys() is a view, which is in between the list and the iterator. What's the difference between a view and an iterator?
In other words, in Python 3, what's the difference between
for k in d.keys()
f(k)
and
for k in iter(d.keys())
f(k)
Also, how do these differences show up in a simple for loop (if at all)?
A:
I'm not sure if this is quite an answer to your questions but hopefully it explains a bit about the difference between Python 2 and 3 in this regard.
In Python 2, iter(d.keys()) and d.iterkeys() are not quite equivalent, although they will behave the same. In the first, keys() will return a copy of the dictionary's list of keys and iter will then return an iterator object over this list, with the second a copy of the full list of keys is never built.
The view objects returned by d.keys() in Python 3 are iterable (i.e. an iterator can be made from them) so when you say for k in d.keys() Python will create the iterator for you. Therefore your two examples will behave the same.
The significance in the change of the return type for keys() is that the Python 3 view object is dynamic. i.e. if we say ks = d.keys() and later add to d then ks will reflect this. In Python 2, keys() returns a list of all the keys currently in the dict. Compare:
Python 3
>>> d = { "first" : 1, "second" : 2 }
>>> ks = d.keys()
>>> ks
dict_keys(['second', 'first'])
>>> d["third"] = 3
>>> ks
dict_keys(['second', 'third', 'first'])
Python 2.x
>>> d = { "first" : 1, "second" : 2 }
>>> ks = d.keys()
>>> ks
['second', 'first']
>>> d["third"] = 3
>>> ks
['second', 'first']
As Python 3's keys() returns the dynamic object Python 3 doesn't have (and has no need for) a separate iterkeys method.
Further clarification
In Python 3, keys() returns a dict_keys object but if we use it in a for loop context for k in d.keys() then an iterator is implicitly created. So the difference between for k in d.keys() and for k in iter(d.keys()) is one of implicit vs. explicit creation of the iterator.
In terms of another difference, whilst they are both dynamic, remember if we create an explicit iterator then it can only be used once whereas the view can be reused as required. e.g.
>>> ks = d.keys()
>>> 'first' in ks
True
>>> 'second' in ks
True
>>> i = iter(d.keys())
>>> 'first' in i
True
>>> 'second' in i
False # because we've already reached the end of the iterator
Also, notice that if we create an explicit iterator and then modify the dict then the iterator is invalidated:
>>> i2 = iter(d.keys())
>>> d['fourth'] = 4
>>> for k in i2: print(k)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration
In Python 2, given the existing behaviour of keys a separate method was needed to provide a way to iterate without copying the list of keys whilst still maintaining backwards compatibility. Hence iterkeys()
| Iterating over dictionary items(), values(), keys() in Python 3 | If I understand correctly, in Python 2, iter(d.keys()) was the same as d.iterkeys(). But now, d.keys() is a view, which is in between the list and the iterator. What's the difference between a view and an iterator?
In other words, in Python 3, what's the difference between
for k in d.keys()
f(k)
and
for k in iter(d.keys())
f(k)
Also, how do these differences show up in a simple for loop (if at all)?
| [
"I'm not sure if this is quite an answer to your questions but hopefully it explains a bit about the difference between Python 2 and 3 in this regard.\nIn Python 2, iter(d.keys()) and d.iterkeys() are not quite equivalent, although they will behave the same. In the first, keys() will return a copy of the dictionary... | [
71
] | [] | [] | [
"dictionary",
"iterator",
"python",
"python_3.x"
] | stackoverflow_0003616721_dictionary_iterator_python_python_3.x.txt |
Q:
part of GtkLabel clickable
How can I make just a part of a GtkLabel have a clicked event and call a function?
I am making a Twitter client, which shows tweets, and I would like that when a tweet contains a # hashtag and I click it, the application shows a new window with a search for this #hashtag. I don't know how to make just the #hashtag invoke this event.
A:
You can surround the clickable part in <a> tags and connect to the activate-link signal.
Here is an example:
import gtk
def hashtag_handler(label, uri):
print('You clicked on the tag #%s' % uri)
return True # to indicate that we handled the link request
window = gtk.Window()
label = gtk.Label()
label.set_markup('Unclickable line\nLine with a <a href="hashtag">#hashtag</a>\nLine with a <a href="different">#different</a> hashtag')
label.connect('activate-link', hashtag_handler)
window.add(label)
window.connect('destroy', gtk.main_quit)
window.show_all()
gtk.main()
| part of GtkLabel clickable | How can I make just a part of a GtkLabel have a clicked event and call a function?
I am making a Twitter client, which shows tweets, and I would like that when a tweet contains a # hashtag and I click it, the application shows a new window with a search for this #hashtag. I don't know how to make just the #hashtag invoke this event.
| [
"You can surround the clickable part in <a> tags and connect to the activate-link signal. \nHere is an example:\nimport gtk\n\ndef hashtag_handler(label, uri):\n print('You clicked on the tag #%s' % uri)\n return True # to indicate that we handled the link request\n\nwindow = gtk.Window()\nlabel = gtk.Label()... | [
3
] | [] | [] | [
"gtk",
"pygtk",
"python"
] | stackoverflow_0003608207_gtk_pygtk_python.txt |
Q:
returning matches from an unknown number of python lists
I have a list which contains 1-5 lists within it. I want to return only the values which appear in all the lists. I can easily create an exception for when there is only one list, but I can't think of a way round it when there are multiple (an unknown number of) lists. For example:
[[1,2,3,4],[2,3,7,8],[2,3,6,9],[1,2,5,7]]
would only return 2, because it's the only list item to appear in all the sub-lists
How can I achieve this?
A:
reduce(set.intersection, (set(x) for x in [[1,2,3,4],[2,3,7,8],[2,3,6,9],[1,2,5,7]]))
A:
You can use frozenset.intersection (or set.intersection if you prefer):
>>> l = [[1,2,3,4],[2,3,7,8],[2,3,6,9],[1,2,5,7]]
>>> frozenset.intersection(*(frozenset(x) for x in l))
frozenset({2})
Add a call to list if you want the result as a list instead of a set.
A:
I suggest this solution:
s = [[1,2,3,4],[2,3,7,8],[2,3,6,9],[1,2,5,7]]
#flatten the list
flat = sum(s,[])
#only keep digits that appear a certain number of times (size of s)
filtered = filter(lambda x: flat.count(x) == len(s),flat)
# clear repeated values
list(set(filtered))
There are better ways to do this but this one is more explicit.
| returning matches from an unknown number of python lists | I have a list which contains 1-5 lists within it. I want to return only the values which appear in all the lists. I can easily create an exception for it there is only one list, but I can't think of a way round it when there are an multiple (unknown number of) lists. For example:
[[1,2,3,4],[2,3,7,8],[2,3,6,9],[1,2,5,7]]
would only return 2, because it's the only list item to appear in all the sub-lists
How can I achieve this?
| [
"reduce(set.intersection, (set(x) for x in [[1,2,3,4],[2,3,7,8],[2,3,6,9],[1,2,5,7]]))\n\n",
"You can use frozenset.intersection (or set.intersection if you prefer):\n>>> l = [[1,2,3,4],[2,3,7,8],[2,3,6,9],[1,2,5,7]]\n>>> frozenset.intersection(*(frozenset(x) for x in l))\nfrozenset({2})\n\nAdd a call to list if ... | [
4,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0003617179_python.txt |
Q:
Python socket module: http proxy
Hello, I'm trying to use a protected HTTP proxy server with the socket module, as in the code shown below
>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> host = 'http://user:pass@server.com'
>>> port = 8888
>>> s.bind((host, port))
It gives me error:
socket.gaierror: [Errno -2] Name or service not known
Though if I set up the proxy in Firefox it works fine. What is wrong in the code?
Sultan
A:
I believe your problem is because your host is malformed. The Socket host is just a name not a protocol. Your host should be something like:
host = 'server.com'
The authentication should be done after you connect, i.e., the first message you send is the authentication.
I can't give you the specifics of how to authenticate because that greatly depends on the server you are connecting to. Check this question
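For the common case of an HTTP proxy with basic authentication, the standard library can handle the handshake for you instead of a raw socket. Here is a sketch (Python 3 spelling shown — in Python 2 the same classes live in urllib2; server.com:8888, user and pass are placeholders):

```python
import urllib.request

# Credentials embedded in the proxy URL; ProxyHandler turns them into a
# Proxy-Authorization header when a request is actually made.
proxy = urllib.request.ProxyHandler({
    'http': 'http://user:pass@server.com:8888',
})
opener = urllib.request.build_opener(proxy)

# opener.open('http://www.example.com/') would now route through the proxy.
print(proxy.proxies['http'])  # http://user:pass@server.com:8888
```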
| Python socket module: http proxy | Hello I'm trying to use protected http socks server with socket module as in the code shown below
>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> host = 'http://user:pass@server.com'
>>> port = 8888
>>> s.bind((host, port))
It gives me error:
socket.gaierror: [Errno -2] Name or service not known
Though if I set up the proxy in Firefox it works fine. What is wrong in the code?
Sultan
| [
"I believe your problem is because your host is malformed. The Socket host is just a name not a protocol. Your host should be something like:\nhost = 'server.com'\n\nThe authentication should be done after you connect, i.e., the first message you send is the authentication.\nI can't give you the specifics of how to... | [
1
] | [] | [] | [
"proxy",
"python",
"sockets"
] | stackoverflow_0003617376_proxy_python_sockets.txt |
Q:
Representing an immutable hierarchy using tuples
I am trying to represent a hierarchy using namedtuple. Essentially, every node has three attributes relevant to the hierarchy: parent, leftChild and rightChild (they also have some attributes that carry the actual information, but that is not important for the question). The problem is the circular reference between parents and children. Since I need to specify all values at construction time, I run into problems because parents need the children to be constructed first, and children need the parents to be constructed first. Is there any way around this (other than using a custom class instead of tuples)?
A:
No, there is not.
A:
Remove the parent field. You can still implement any tree-manipulation operations efficiently without keeping a reference to the parent node.
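To illustrate: with the parent field dropped, the children-only namedtuple can be built bottom-up with no circular reference, and operations such as traversal recurse downward (Node and its fields are illustrative names, not from the question):

```python
import collections

Node = collections.namedtuple('Node', 'value left right')

# Build bottom-up: leaves first, then their parents.
leaf_a = Node(1, None, None)
leaf_b = Node(3, None, None)
root = Node(2, leaf_a, leaf_b)

def inorder(node):
    """In-order traversal; no parent pointers needed."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

print(inorder(root))  # [1, 2, 3]
```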
A:
One trick is not to use an object reference, but instead, a symbolic ID that you maintain in a hash table.
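That trick might look like this: each node stores integer IDs instead of object references, and a side table maps IDs to nodes, so the "parent" link no longer requires the parent object to exist at construction time (the ID values below are an arbitrary choice for the sketch):

```python
import collections

Node = collections.namedtuple('Node', 'value parent_id left_id right_id')

# The side table: symbolic ID -> node.  IDs break the construction-order cycle.
nodes = {}
nodes[1] = Node('left child', parent_id=0, left_id=None, right_id=None)
nodes[2] = Node('right child', parent_id=0, left_id=None, right_id=None)
nodes[0] = Node('root', parent_id=None, left_id=1, right_id=2)

# Following a "reference" is one hash-table lookup:
left = nodes[nodes[0].left_id]
print(left.value)                   # left child
print(nodes[left.parent_id].value)  # root
```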
| Representing an immutable hierarchy using tuples | I am trying to represent a hierarchy using namedtuple. Essentially, every node has three attributes relevant to the hierarchy: parent, leftChild and rightChild (they also have some attributes that carry the actual information, but that is not important for the question). The problem is the circular reference between parents and children. Since I need to specify all values at construction time, I run into problems because parents need the children to be constructed first, and children need the parents to be constructed first. Is there any way around this (other than using a custom class instead of tuples)?
| [
"No, there is not.\n",
"Remove parent field. You can still implement any tree-manipulation operations efficient without keeping reference to parent node.\n",
"One trick is not to use an object reference, but instead, a symbolic ID that you maintain in a hash table.\n"
] | [
2,
0,
0
] | [] | [] | [
"hierarchy",
"python",
"tuples"
] | stackoverflow_0003616822_hierarchy_python_tuples.txt |
Q:
Does Python's urllib2 have a gethostbyname function?
I need to get a requested host's ip address using urllib2 like:
import urllib2
req = urllib2.Request('http://www.example.com/')
r = urllib2.urlopen(req)
Are there any functions like ip = urllib2.gethostbyname(req)?
A:
You can use:
import socket
socket.gethostbyname('www.google.com')
this will return the IP address for the host. Don't pass 'http://www.google.com'. That will not work.
A:
There's a socket.gethostbyname function which will resolve the host names if that's what you mean.
Although if you already have a connection made by urllib2, then get the destination host via your_request.get_host().
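Combining the two answers: you can pull the host out of an existing Request object and then resolve it (Python 3 spelling shown — Python 2's Request.get_host() became the Request.host attribute; no network access is needed to build the request):

```python
import urllib.request

req = urllib.request.Request('http://www.example.com/')
host = req.host
print(host)  # www.example.com
# socket.gethostbyname(host) would then resolve it to an IP address.
```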
| Does Python's urllib2 have a gethostbyname function? | I need to get a requested host's ip address using urllib2 like:
import urllib2
req = urllib2.Request('http://www.example.com/')
r = urllib2.urlopen(req)
Are there any functions like ip = urllib2.gethostbyname(req)?
| [
"You can use:\nimport socket\nsocket.gethostbyname('www.google.com')\n\nthis will return the IP address for the host. Don't pass 'http://www.google.com'. That will not work.\n",
"There's a socket.gethostbyname function which will resolve the host names if that's what you mean.\nAlthough if you already have a conn... | [
2,
1
] | [] | [] | [
"gethostbyname",
"python",
"urllib2"
] | stackoverflow_0003617616_gethostbyname_python_urllib2.txt |
Q:
Execute a python command within vim and getting the output
When Vim is compiled with Python support, you can script Vim with Python using the :python command. How would I go about using this to execute the command and insert the result under the cursor? For example, if I were to execute :python import os; os.listdir('aDirectory')[0], I would want the first filename returned to be inserted under the cursor.
EDIT: To clarify, I want the same effect as going to the terminal, executing the command, copying the result and executing "+p.
A:
:,!python -c "import os; print os.listdir('aDirectory')[0]"
A:
The following works fine for me:
write the python code you want to execute in the line you want.
import os
print(os.listdir('.'))
after that visually select the lines you want to execute in python
:'<,'>!python
and after that the python code will replaced by the python output.
A:
You need to assign it to the current line, you can use the vim module:
:python import os; import vim; vim.current.line=os.listdir('.')[0]
A:
In the end, I solved it by writing a script called pyexec.vim and put it in my plugin directory. The script is reproduced below:
python << endpython
import vim
def pycurpos(pythonstatement):
#split the python statement at ;
pythonstatement = pythonstatement.split(';')
stringToInsert = ''
for aStatement in pythonstatement:
#try to eval() the statement. This will work if the statement is a valid expression
try:
s = str(eval(aStatement))
except SyntaxError:
#statement is not a valid expression, so try exec. This will work if the statement is a valid python statement (such as if a==b: or print 'a')
#if this doesn't work either, fail
s = None
exec aStatement
stringToInsert += s if s is not None else ''
currentPos = vim.current.window.cursor[1]
currentLine = vim.current.line
vim.current.line = currentLine[:currentPos]+stringToInsert+currentLine[currentPos:]
endpython
This works as expected for oneliners, but doesn't quite work for multiple statements following a block. So python pycurpos('a=2;if a==3:b=4;c=6') will result in c always being 6, since the if block ends with the first line following it.
But for quick and dirty python execution, which is what I wanted, the script is adequate.
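The eval-then-exec fallback at the heart of pycurpos can be sketched on its own, outside Vim (Python 3 syntax below, whereas the plugin above uses Python 2's exec statement):

```python
# Sketch of the eval-then-exec fallback used by pycurpos, outside Vim.
# eval() handles expressions and yields a value; exec() handles statements.
def run_fragment(code, env):
    try:
        return str(eval(code, env))   # works for expressions like 'a * 3'
    except SyntaxError:
        exec(code, env)               # works for statements like 'a = 2'
        return ''

env = {}
print(run_fragment('a = 2', env))   # statement: contributes no text
print(run_fragment('a * 3', env))   # expression: prints 6
```

As in the plugin, only expressions contribute text to the inserted string; statements run purely for their side effects.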
| Execute a python command within vim and getting the output | When Vim is compiled with Python support, you can script Vim with Python using the :python command. How would I go about using this to execute the command and insert the result under the cursor? For example, if I were to execute :python import os; os.listdir('aDirectory')[0], I would want the first filename returned to be inserted under the cursor.
EDIT: To clarify, I want the same effect as going to the terminal, executing the command, copying the result and executing "+p.
| [
":,!python -c \"import os; print os.listdir('aDirectory')[0]\"\n\n",
"The following works fine for me:\nwrite the python code you want to execute in the line you want.\nimport os\nprint(os.listdir('.'))\n\nafter that visually select the lines you want to execute in python \n:'<,'>!python\n\nand after that the pyt... | [
5,
3,
2,
0
] | [] | [] | [
"python",
"vim"
] | stackoverflow_0003608742_python_vim.txt |
Q:
Getting python exceptions printed the normal way with PyObjC
I'm getting errors like this:
2010-07-13 20:43:15.131
Python[1527:60f] main: Caught
OC_PythonException: :
LoginMenuSet instance has no attribute
'play_sound'
That's with this code:
@try {
[section loop]; //Loop through section
} @catch (NSException *exception) {
NSLog(@"Caught %@: %@", [exception name], [exception reason]);
}
I want the python exception to be printed normally with the traceback and everything else.
Thank you.
A:
One trick to see Python exceptions is to call objc.setVerbose(1). This makes PyObjC slightly more verbose and causes it to print Python stack traces when converting exceptions from Python to Objective-C.
A:
Here's my own solution:
In Objective-C class:
@try {
[section loop]; //Loop through section
} @catch (NSException *exception) {
NSLog(@"main: Caught %@: %@", [exception name], [exception reason]);
[self exception: [[exception userInfo] valueForKey: @"__pyobjc_exc_traceback__"]];
}
In python pyobjc subclass:
def exception_(self,trace):
traceback.print_tb(trace)
NSApplication.sharedApplication().terminate_(None) #Accept no errors
I, of course, imported the traceback module.
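Outside PyObjC, the same traceback-object handling can be exercised in plain Python (Python 3 shown here); a raw traceback object, which is what the __pyobjc_exc_traceback__ key above carries, is accepted directly by the traceback module:

```python
# Sketch: formatting a raw traceback object, as the exception_ method does.
import traceback

def describe(tb):
    """Return the formatted frames of a traceback object as one string."""
    return ''.join(traceback.format_tb(tb))

def boom():
    raise ValueError('demo')

try:
    boom()
except ValueError as exc:
    text = describe(exc.__traceback__)
    print(text)  # shows the 'boom' frame with its file and line number
```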
| Getting python exceptions printed the normal way with PyObjC | I'm getting errors like this:
2010-07-13 20:43:15.131
Python[1527:60f] main: Caught
OC_PythonException: :
LoginMenuSet instance has no attribute
'play_sound'
That's with this code:
@try {
[section loop]; //Loop through section
} @catch (NSException *exception) {
NSLog(@"Caught %@: %@", [exception name], [exception reason]);
}
I want the python exception to be printed normally with the traceback and everything else.
Thank you.
| [
"One trick to see Python exceptions is to call objc.setVerbose(1). This makes PyObjC slightly more verbose and causes it to print Python stack traces when converting exceptions from Python to Objective-C.\n",
"Here's my own solution:\nIn Objective-C class:\n@try {\n [section loop]; //Loop through section\n... | [
9,
0
] | [] | [] | [
"exception",
"exception_handling",
"objective_c",
"pyobjc",
"python"
] | stackoverflow_0003240867_exception_exception_handling_objective_c_pyobjc_python.txt |
Q:
How similar are Python, jQuery, C syntax wise?
I'm trying to get a sense of the similarities between languages in syntax. How similar are Python, jQuery and C? I started programming in Actionscript 3 and then moved on to Javascript, then went on and learned Prototype, and then I started using jQuery and found that the syntax is very different. So is jQuery more like C and Python?
A:
C is much different from the languages you've asked about. Remember that C isn't an interpreted language and will not be treated as such in your code. In short, you're up for a lot more material to learn --while dealing with C-- in terms of things like memory management and semantics than the other languages.
In regards to syntax: You'll find that if you're writing code in any language other than Lisp, brainfuck, or some other non-intuitive language (not claiming that C is, but in comparison, certainly), the syntax isn't too much of a variable. There are some differences, but nothing that should be considered too abstract. In C, you have to worry about things like pointers and whatnot which is a pain, but I think the difference is more-so about memory management than syntax. You mostly have to worry about the differences in usages of semicolons and whatnot.
You'll find that Python is like writing English sentences, or at least writing pseudocode with constraints, which makes it significantly easier than C. Additionally, I wouldn't consider jQuery a language on its own. It's an extension of a language though, just as STL might be considered a particular type of extension to C++.
Good luck.
A:
Syntax wise, JavaScript (the language jQuery is implemented in) is similar to C. Python uses a different syntax which does not rely on semicolons and braces, but instead on indentation.
Semantically, JavaScript is closer to Python, so this would be easier to learn. I don't understand how you "moved" from ActionScript 3 to JavaScript to Prototype; ActionScript has the same syntax and is also otherwise very similar to JavaScript, and Protoype/jQuery are just applications written in JavaScript (so it's the same language, but different frameworks!)
A:
For jQuery, the answer is pretty simple: jQuery isn't a language, therefore it doesn't have syntax.
For Python and C, the answer from a high-level point of view is also very simple: Python's syntax is directly inspired by C's syntax. (Or more precisely, both Python's and C's syntax are inspired by ALGOL's syntax.) There is really only one significant difference from a high-level point of view: C uses opening and closing curly braces to delimit blocks, Python uses indentation.
Otherwise, the two high-level syntaxes are almost the same: both have unary and binary operators, even with similar precedence (unlike Smalltalk, for example, which doesn't have operators), both distinguish between statements and expressions (unlike Ruby, for example, which doesn't have statements), both use semicolons between statements (although technically, the semicolon is a statement terminator in C and a statement separator in Python), both use similar syntax for numeric literals and string literals as well as array/list indexing.
There are a couple of syntactic differences related to the different semantics: in Python, variables are untyped (only objects are typed), so there is no type annotation syntax for variable declarations (in fact, there is no syntax for variable declarations at all). There is syntax for type annotations of function parameters and function return values, but in Python the types come after the parameter name, and the type annotations are optional. With variables being untyped, the concept of type casting doesn't make sense, so there is no syntax for that. Neither is there any pointer-related syntax, since Python doesn't have those.
Python has a couple more literals than C: lists, sets, dictionaries, in particular. However, they follow in the C tradition: in C, an array is declared and indexed using square brackets, so Python uses square brackets for array literals.
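To make the comparison concrete, here is a small sketch of the Python literal and annotation syntax described above (parameter annotations as shown are a Python 3 feature):

```python
# The Python literals and indexing syntax discussed above, side by side.
nums = [10, 20, 30]            # list literal: square brackets, as in C arrays
uniq = {1, 2, 3}               # set literal
ages = {'ada': 36, 'bob': 4}   # dict literal

def scale(x: float, k: float = 2.0) -> float:
    """Optional annotations come after the name, unlike C's leading types."""
    return x * k

print(nums[0], ages['ada'], scale(3.0))  # indexing uses [] in both languages
```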
| How similar are Python, jQuery, C syntax wise? | I'm trying to get a sense of the similarities between languages in syntax. How similar are Python, jQuery and C? I started programming in Actionscript 3 and then moved on to Javascript , then went on and learned Prototype, and then I started using jQuery and found that the syntax is very different. So is jQuery more like C and Python?
| [
"C is much different from the languages you've asked about. Remember that C isn't an interpreted language and will not be treated as such in your code. In short, you're up for a lot more material to learn --while dealing with C-- in terms of things like memory management and semantics than the other languages.\nI... | [
8,
3,
3
] | [] | [] | [
"c",
"javascript",
"jquery",
"python",
"syntax"
] | stackoverflow_0003615122_c_javascript_jquery_python_syntax.txt |
Q:
Django QuerySet .defer() problem - bug or feature?
An example is better than a thousand words:
In [3]: User.objects.filter(id=19)[0] == User.objects.filter(id=19)[0]
Out[3]: True
In [4]: User.objects.filter(id=19)[0] == User.objects.filter(id=19).defer('email')[0]
Out[4]: False
Does it work like this on purpose ?
Subquestion: is there any simple way to get a regular model instance from the deferred one ?
EDIT:
It looks like contenttypes framework is patched appropriately:
http://code.djangoproject.com/changeset/10523
so I would say that the Model.__eq__() operator shouldn't look like this:
def __eq__(self, other):
return isinstance(other, self.__class__) and self._get_pk_val() == other._get_pk_val()
but more like this:
def __eq__(self, other):
return ContentType.objects.get_for_model(self) is ContentType.objects.get_for_model(other) and self._get_pk_val() == other._get_pk_val()
This of course causes two DB hits for the first time, but fortunately get_for_model seems to implement cache.
A:
Deferred queries return a different class, provided by the deferred_class_factory:
# in db/models/query_utils.py
def deferred_class_factory(model, attrs):
"""
Returns a class object that is a copy of "model" with the specified "attrs"
being replaced with DeferredAttribute objects. The "pk_value" ties the
deferred attributes to a particular instance of the model.
"""
It is basically a proxy, as you can see from the method resolution order:
>>> x = User.objects.filter(id=1).defer("email")[0]
>>> x.__class__.__mro__
(<class 'django.contrib.auth.models.User_Deferred_email'>, \
<class 'django.contrib.auth.models.User'>, \
<class 'django.db.models.base.Model'>, <type 'object'>)
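The same proxy-building mechanism can be imitated in plain Python with type(); the names below are illustrative, not Django's actual code:

```python
# Plain-Python sketch of what deferred_class_factory does: build a dynamic
# subclass whose instances stand in for the original model. Names here are
# illustrative, not Django's real implementation.
class User:
    def __init__(self, pk):
        self.pk = pk

def deferred_factory(model, deferred_fields):
    name = '%s_Deferred_%s' % (model.__name__, '_'.join(deferred_fields))
    return type(name, (model,), {'_deferred_fields': deferred_fields})

DeferredUser = deferred_factory(User, ['email'])
u = DeferredUser(pk=1)
print(type(u).__mro__)       # dynamic subclass first, then User, then object
print(isinstance(u, User))   # True, even though type(u) is not User
```

This mirrors the MRO shown above: the instance passes isinstance checks against the base model while having a distinct concrete class.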
A:
It's the normal behaviour, because User.objects.filter(id=19)[0] will return a queryset with all of the related fields of the model, but User.objects.filter(id=19).defer('email')[0] will bring a queryset without email... So you have two querysets, one with fewer fields.
Update:
Test...
In [30]: a = User.objects.filter(id=1)[0]
In [31]: a
Out[31]: <User: mustafa>
In [27]: b = User.objects.filter(id=1).defer('username')[0]
In [28]: b
Out[28]: <User_Deferred_username: mustafa>
In [32]: a == b
Out[32]: False
In [33]: type(a)
Out[33]: <class 'django.contrib.auth.models.User'>
In [34]: type(b)
Out[34]: <class 'django.contrib.auth.models.User_Deferred_username'>
In [35]: a.username
Out[35]: u'mustafa'
In [36]: b.username
Out[36]: u'mustafa'
Defer Documentation explains this as:
A queryset that has deferred fields will still return model instances. Each deferred field will be retrieved from the database if you access that field (one at a time, not all the deferred fields at once).
EDIT 2:
In [43]: isinstance(b, a.__class__)
Out[43]: True
In [40]: User.__eq__??
Type: instancemethod
Base Class: <type 'instancemethod'>
String Form: <unbound method User.__eq__>
Namespace: Interactive
File: /home/mustafa/python/lib/django/db/models/base.py
Definition: User.__eq__(self, other)
Source:
def __eq__(self, other):
return isinstance(other, self.__class__) and self._get_pk_val() == other._get_pk_val()
== is a simple comparison and it compares two objects; it is not using the related class's __eq__ method.
| Django QuerySet .defer() problem - bug or feature? | An example is better than a thousand words:
In [3]: User.objects.filter(id=19)[0] == User.objects.filter(id=19)[0]
Out[3]: True
In [4]: User.objects.filter(id=19)[0] == User.objects.filter(id=19).defer('email')[0]
Out[4]: False
Does it work like this on purpose ?
Subquestion: is there any simple way to get a regular model instance from the deferred one ?
EDIT:
It looks like contenttypes framework is patched appropriately:
http://code.djangoproject.com/changeset/10523
so I would say that the Model.__eq__() operator shouldn't look like this:
def __eq__(self, other):
return isinstance(other, self.__class__) and self._get_pk_val() == other._get_pk_val()
but more like this:
def __eq__(self, other):
return ContentType.objects.get_for_model(self) is ContentType.objects.get_for_model(other) and self._get_pk_val() == other._get_pk_val()
This of course causes two DB hits for the first time, but fortunately get_for_model seems to implement cache.
| [
"Deferred queries return a different class, provided by the deferred_class_factory:\n# in db/models/query_utils.py\n\ndef deferred_class_factory(model, attrs):\n \"\"\"\n Returns a class object that is a copy of \"model\" with the specified \"attrs\"\n being replaced with DeferredAttribute objects. The \"p... | [
4,
0
] | [] | [] | [
"django",
"django_models",
"orm",
"python"
] | stackoverflow_0003617886_django_django_models_orm_python.txt |
Q:
Python - Regular Expression Wildcards from Socket data?
I have a question regarding regular expressions in Python. The expressions are composed of data that would be read from a server, connected via socket. I'm trying to use and read wildcards in these expressions. Example: Let's say I run a chat server. When a message is received, the server sends to all clients (JSmith sends "Hello everyone!").
My question is, if there are multiple usernames(not just JSmith), how can I have the client programs read the data sent by the server, and instead of writing "[username] sends "Hello everyone!", have it write "[usernamehere]: Hello everyone!"?
is there a way to store data from Regular expression wildcards into variables?
A:
If the data is always that simple, you do not need to use regular expresssions at all:
line = 'JSmith sends "Hello everyone!"'
user, data = line.split(' sends ', 1)
# remove the quotes
data = data[1:-1]
print "%s: %s" % (user, data)
With regular expressions (using named expressions):
import re
line = 'JSmith sends "Hello everyone!"'
chatre = re.compile('^(?P<user>\S+) sends "(?P<data>.*)"$')
m = chatre.match(line)
if m:
print "%s: %s" % (m.group('user'), m.group('data'))
| Python - Regular Expression Wildcards from Socket data? | I have a question regarding regular expressions in Python. The expressions are composed of data that would be read from a server, connected via socket. I'm trying to use and read wildcards in these expressions. Example: Let's say I run a chat server. When a message is received, the server sends to all clients (JSmith sends "Hello everyone!").
My question is, if there are multiple usernames(not just JSmith), how can I have the client programs read the data sent by the server, and instead of writing "[username] sends "Hello everyone!", have it write "[usernamehere]: Hello everyone!"?
is there a way to store data from Regular expression wildcards into variables?
| [
"If the data is always that simple, you do not need to use regular expresssions at all:\nline = 'JSmith sends \"Hello everyone!\"'\nuser, data = line.split(' sends ', 1)\n# remove the quotes\ndata = data[1:-1]\nprint \"%s: %s\" % (user, data)\n\nWith regular expressions (using named expressions):\nimport re\nline =... | [
1
] | [] | [] | [
"chat",
"python",
"regex",
"wildcard"
] | stackoverflow_0003611573_chat_python_regex_wildcard.txt |
Q:
How to insert bulk data in Google App Engine Datastore?
I have some CSV files for cities, states and countries with their ids, names etc. I want to put all this data into Google app engine datastore.
Can someone please suggest an efficient way of doing this on development server as well as on the production server?
Thanks in advance.
A:
You're in luck. The functionality you described is baked into appcfg.py:
http://code.google.com/appengine/docs/python/tools/uploadingdata.html
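Independently of the upload tool, the CSV files themselves can be parsed with the stdlib csv module; the column names below are assumptions for illustration, not a fixed schema:

```python
# Sketch: parsing a cities CSV into dicts ready for a bulk upload step.
# The column names used here are assumptions about the files, not a schema.
import csv
import io

def read_rows(fileobj, fieldnames):
    """Yield one dict per CSV line, keyed by the given column names."""
    for row in csv.DictReader(fileobj, fieldnames=fieldnames):
        yield row

sample = io.StringIO('1,Paris,FR\n2,Lyon,FR\n')
rows = list(read_rows(sample, ['id', 'name', 'country']))
print(rows[0]['name'])  # Paris
```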
| How to insert bulk data in Google App Engine Datastore? | I have some CSV files for cities, states and countries with their ids, names etc. I want to put all this data into Google app engine datastore.
Can someone please suggest an efficient way of doing this on development server as well as on the production server?
Thanks in advance.
| [
"You're in luck. The functionality you described is baked into appcfg.py:\nhttp://code.google.com/appengine/docs/python/tools/uploadingdata.html\n"
] | [
3
] | [] | [] | [
"google_app_engine",
"google_cloud_datastore",
"python"
] | stackoverflow_0003618147_google_app_engine_google_cloud_datastore_python.txt |
Q:
How to empty a Python dict without doing my_dict = {}?
If the my_dict variable is global, you can't do:
my_dict = {}
that just creates a new reference in the local scope.
Also, I find it disgusting to use the global keyword, so how can I empty a dict using its methods?
A:
Use the clear() method?
Documentation - (docs.python.org)
A:
you mean like .clear() ?
A:
my_dict.clear()
A:
my_dict.clear()
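The reason clear() answers the question while assignment does not: clear() empties the one shared dict in place, whereas rebinding only changes a local name. A small sketch:

```python
# clear() mutates the existing dict; assignment only rebinds a local name.
shared = {'a': 1}
alias = shared          # a second reference to the same dict

def rebind(d):
    d = {}              # only this function's local name changes

def empty(d):
    d.clear()           # the one shared dict is emptied in place

rebind(shared)
print(shared)           # still {'a': 1}
empty(shared)
print(shared, alias)    # both references now see an empty dict
```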
| How to empty a Python dict without doing my_dict = {}? | If the my_dict variable is global, you can't do:
my_dict = {}
that just creates a new reference in the local scope.
Also, I find it disgusting to use the global keyword, so how can I empty a dict using its methods?
| [
"Use the clear() method?\nDocumentation - (docs.python.org)\n",
"you mean like .clear() ?\n",
"my_dict.clear()\n\n",
"my_dict.clear()\n"
] | [
8,
3,
1,
1
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0003618612_dictionary_python.txt |
Q:
python: defining registry in base class
I'm implementing enumeration using a base class that defines a variety of methods. The actual enumerations are subclasses of that, with no additional methods or attributes. (Each subclass is populated with its own values using the constructor defined in the base class).
I use a registry (a class attribute that stores all the instances of that class). Ideally, I'd like to avoid defining it in each subclass. Unfortunately, if I define it in the base class, all the subclasses will end up sharing the same registry.
What's a good approach here?
Below is the implementation in case it helps (it's based on @jchl comment in python enumeration class for ORM purposes).
class IterRegistry(type):
def __iter__(cls):
return iter(cls._registry.values())
class EnumType(metaclass = IterRegistry):
_registry = {}
_frozen = False
def __init__(self, token):
if hasattr(self, 'token'):
return
self.token = token
self.id = len(type(self)._registry)
type(self)._registry[token] = self
def __new__(cls, token):
if token in cls._registry:
return cls._registry[token]
else:
if cls._frozen:
raise TypeError('No more instances allowed')
else:
return object.__new__(cls)
@classmethod
def freeze(cls):
cls._frozen = True
def __repr__(self):
return self.token
@classmethod
def instance(cls, token):
return cls._registry[token]
class Enum1(EnumType): pass
Enum1('a')
Enum1('b')
for i in Enum1:
print(i)
# not going to work properly because _registry is shared
class Enum2(EnumType): pass
A:
As you already have a metaclass you might as well use it to add a separate _registry attribute to each subclass automatically.
class IterRegistry(type):
def __new__(cls, name, bases, attr):
attr['_registry'] = {} # now every class has it's own _registry
return type.__new__(cls, name, bases, attr)
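A self-contained sketch of this answer in use (the metaclass repeated so the example stands alone), showing that each subclass now owns its own registry:

```python
# Sketch: with the metaclass above, every class body gets a fresh _registry.
class IterRegistry(type):
    def __new__(cls, name, bases, attr):
        attr['_registry'] = {}          # fresh dict per created class
        return type.__new__(cls, name, bases, attr)

class EnumType(metaclass=IterRegistry):
    pass

class Enum1(EnumType):
    pass

class Enum2(EnumType):
    pass

Enum1._registry['a'] = 1
print(Enum2._registry)                     # empty: not shared with Enum1
print(Enum1._registry is Enum2._registry)  # False
```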
A:
Marty Alchin has a very nice pattern for this: see his blog entry.
A:
What if you share the same registry, but with sub-registries per class, i.e.
if cls.__name__ not in self._registry:
self._registry[cls.__name__] = {}
self._registry[cls.__name__][token] = cls
You actually don't even need cls.__name__, you should be able to use cls itself as key.
| python: defining registry in base class | I'm implementing enumeration using a base class that defines a variety of methods. The actual enumerations are subclasses of that, with no additional methods or attributes. (Each subclass is populated with its own values using the constructor defined in the base class).
I use a registry (a class attribute that stores all the instances of that class). Ideally, I'd like to avoid defining it in each subclass. Unfortunately, if I define it in the base class, all the subclasses will end up sharing the same registry.
What's a good approach here?
Below is the implementation in case it helps (it's based on @jchl comment in python enumeration class for ORM purposes).
class IterRegistry(type):
def __iter__(cls):
return iter(cls._registry.values())
class EnumType(metaclass = IterRegistry):
_registry = {}
_frozen = False
def __init__(self, token):
if hasattr(self, 'token'):
return
self.token = token
self.id = len(type(self)._registry)
type(self)._registry[token] = self
def __new__(cls, token):
if token in cls._registry:
return cls._registry[token]
else:
if cls._frozen:
raise TypeError('No more instances allowed')
else:
return object.__new__(cls)
@classmethod
def freeze(cls):
cls._frozen = True
def __repr__(self):
return self.token
@classmethod
def instance(cls, token):
return cls._registry[token]
class Enum1(EnumType): pass
Enum1('a')
Enum1('b')
for i in Enum1:
print(i)
# not going to work properly because _registry is shared
class Enum2(EnumType): pass
| [
"As you already have a metaclass you might as well use it to put a add a separate _registry attribute to each subclass automatically.\nclass IterRegistry(type):\n def __new__(cls, name, bases, attr):\n attr['_registry'] = {} # now every class has it's own _registry\n return type.__new__(cls, name, ... | [
3,
1,
0
] | [] | [] | [
"attributes",
"class",
"python"
] | stackoverflow_0003617996_attributes_class_python.txt |
Q:
Python & MySQL: Matching element from list with a record from database
I have a list of objects which I built with a class, and one of the properties of this class is the variable "tag". (below called tagList)
I am trying to match this variable from a record that is bought in using MySQLdb. (below called record)
I can output both to the screen, and see them identically by eye, although cannot get any if statement to match them.
I have tried several approaches, such as:
if str(tagList[i].tag)[2:6] is record[2]:
if str(tagList[i].tag)[2:6] is str(record[2]):
and other similar things. ([2:6] just to remove the [' '] from the list element.) Printing these variables to the screen does show they are in the correct format, and I must be doing something stupid!
Still new to both Python and MySQL so would appreciate any advice!
Thanks
A:
You should use == instead of is.
>>> 'abcdefgh'[2:6] is 'cdef'
False
>>> 'abcdefgh'[2:6] == 'cdef'
True
Related Question
Python '==' vs 'is' comparing strings, 'is' fails sometimes, why?
| Python & MySQL: Matching element from list with a record from database | I have a list of objects which I built with a class, and one of the properties of this class is the variable "tag". (below called tagList)
I am trying to match this variable from a record that is bought in using MySQLdb. (below called record)
I can output both to the screen, and see them identically by eye, although cannot get any if statement to match them.
I have tried several approaches, such as:
if str(tagList[i].tag)[2:6] is record[2]:
if str(tagList[i].tag)[2:6] is str(record[2]):
and other similar things. ([2:6] just to remove the [' '] from the list element.) Printing these variables to the screen does show they are in the correct format, and I must be doing something stupid!
Still new to both Python and MySQL so would appreciate any advice!
Thanks
| [
"You should use == instead of is.\n>>> 'abcdefgh'[2:6] is 'cdef'\nFalse\n>>> 'abcdefgh'[2:6] == 'cdef'\nTrue\n\nRelated Question\n\nPython '==' vs 'is' comparing strings, 'is' fails sometimes, why?\n\n"
] | [
1
] | [] | [] | [
"mysql",
"python"
] | stackoverflow_0003619048_mysql_python.txt |
Q:
oauth on appengine: access issue
I am having some difficulty accessing resources through OAuth on AppEngine.
My client application (on Linux using python-oauth) is able to retrieve a valid "access token" but when I try to access a protected resource (e.g. user = oauth.get_current_user()) , I get a oauth.OAuthRequestError exception thrown.
headers: {'Content-Length': '21', 'User-Agent': 'musync,gzip(gfe),gzip(gfe)', 'Host': 'services.systemical.com', 'X-Google-Apps-Metadata': 'domain=systemical.com', 'X-Zoo': 'app-id=services-systemical,domain=systemical.com', 'Content-Type': 'application/json', 'Authorization': 'OAuth oauth_nonce="03259912", oauth_timestamp="1282928181", oauth_consumer_key="services.systemical.com", oauth_signature_method="HMAC-SHA1", oauth_version="1.0", oauth_token="1%2Fo0-tcGTfRzkkm449qVxd_8CfCvMcW_0xwL024nO3HgI", oauth_signature="2ojSK6Ws%2BvDxx3Rdlltf53hlI2w%3D"'}
I suspect that the issue might be domain related i.e. I am using an end-point on services.systemical.com whilst I see that AppEngine reports domain=systemical.com for the X-Google-Apps-Metadata.
How do I fix this? Is it a problem with how I am using sub-domains with Apps/AppEngine??
The domain "services.systemical.com" points to (DNS CNAME) my appengine application services-systemical.appspot.com. The domain systemical.com is associated with a Google Apps domain.
Update: Here is the client code I am using:
class OauthClient(object):
gREQUEST_TOKEN_URL = 'OAuthGetRequestToken'
gACCESS_TOKEN_URL = 'OAuthGetAccessToken'
gAUTHORIZATION_URL = 'OAuthAuthorizeToken'
def __init__(self, server, port, base):
self.server=server
self.port=port
self.base=base
self.request_token_url=self.base+self.gREQUEST_TOKEN_URL
self.access_token_url=self.base+self.gACCESS_TOKEN_URL
self.authorize_token_url=self.base+self.gAUTHORIZATION_URL
self.connection = httplib.HTTPConnection("%s:%d" % (self.server, self.port))
def fetch_request_token(self, oauth_request):
self.connection.request(oauth_request.http_method, self.request_token_url, headers=oauth_request.to_header())
response = self.connection.getresponse()
print response.status, response.reason
print response.msg
return oauth.OAuthToken.from_string(response.read())
def fetch_access_token(self, oauth_request):
self.connection.request(oauth_request.http_method, self.access_token_url, headers=oauth_request.to_header())
response = self.connection.getresponse()
return oauth.OAuthToken.from_string(response.read())
def authorize_token(self, oauth_request):
self.connection.request(oauth_request.http_method, oauth_request.to_url())
response = self.connection.getresponse()
return response.read()
A:
I had a similar problem. Although I can't be more specific (I don't have the code with me) I can tell you that the problem is probably related to the request. Google App Engine is very picky with the request you make.
Try sending the request body empty. For example, this is the usual call:
import httplib
connection = httplib.HTTPSConnection('server.com')
connection.request('POST', REQUEST_TOKEN_URL,body = urlencode(body), headers = {'Authorization': header})
but you should send with an empty body:
connection.request('POST', REQUEST_TOKEN_URL, headers = {'Authorization': header})
The headers have all the information that is needed for OAuth.
| oauth on appengine: access issue | I am having some difficulty accessing resources through OAuth on AppEngine.
My client application (on Linux using python-oauth) is able to retrieve a valid "access token" but when I try to access a protected resource (e.g. user = oauth.get_current_user()) , I get a oauth.OAuthRequestError exception thrown.
headers: {'Content-Length': '21', 'User-Agent': 'musync,gzip(gfe),gzip(gfe)', 'Host': 'services.systemical.com', 'X-Google-Apps-Metadata': 'domain=systemical.com', 'X-Zoo': 'app-id=services-systemical,domain=systemical.com', 'Content-Type': 'application/json', 'Authorization': 'OAuth oauth_nonce="03259912", oauth_timestamp="1282928181", oauth_consumer_key="services.systemical.com", oauth_signature_method="HMAC-SHA1", oauth_version="1.0", oauth_token="1%2Fo0-tcGTfRzkkm449qVxd_8CfCvMcW_0xwL024nO3HgI", oauth_signature="2ojSK6Ws%2BvDxx3Rdlltf53hlI2w%3D"'}
I suspect that the issue might be domain related i.e. I am using an end-point on services.systemical.com whilst I see that AppEngine reports domain=systemical.com for the X-Google-Apps-Metadata.
How do I fix this? Is it a problem with how I am using sub-domains with Apps/AppEngine??
The domain "services.systemical.com" points to (DNS CNAME) my appengine application services-systemical.appspot.com. The domain systemical.com is associated with a Google Apps domain.
Update: Here is the client code I am using:
class OauthClient(object):
gREQUEST_TOKEN_URL = 'OAuthGetRequestToken'
gACCESS_TOKEN_URL = 'OAuthGetAccessToken'
gAUTHORIZATION_URL = 'OAuthAuthorizeToken'
def __init__(self, server, port, base):
self.server=server
self.port=port
self.base=base
self.request_token_url=self.base+self.gREQUEST_TOKEN_URL
self.access_token_url=self.base+self.gACCESS_TOKEN_URL
self.authorize_token_url=self.base+self.gAUTHORIZATION_URL
self.connection = httplib.HTTPConnection("%s:%d" % (self.server, self.port))
def fetch_request_token(self, oauth_request):
self.connection.request(oauth_request.http_method, self.request_token_url, headers=oauth_request.to_header())
response = self.connection.getresponse()
print response.status, response.reason
print response.msg
return oauth.OAuthToken.from_string(response.read())
def fetch_access_token(self, oauth_request):
self.connection.request(oauth_request.http_method, self.access_token_url, headers=oauth_request.to_header())
response = self.connection.getresponse()
return oauth.OAuthToken.from_string(response.read())
def authorize_token(self, oauth_request):
self.connection.request(oauth_request.http_method, oauth_request.to_url())
response = self.connection.getresponse()
return response.read()
| [
"I had a similar problem. Although i can't be more specific (I don't have the code with me) I can tell you that the problem is probably related to the request. Google App Engine is very picky with the request you make.\nTry sending the request body empty. For example, this is the usual call:\nimport httplib\nconnec... | [
0
] | [] | [] | [
"google_app_engine",
"oauth",
"python"
] | stackoverflow_0003586555_google_app_engine_oauth_python.txt |
Q:
Pygame - calling surface.convert() on animated sprite causes transparent background to become white
Everyone says to use .convert() on surfaces to speed up animations (which will be an issue with my game because it will be an MMO to some extent, so it might have a dozen or a couple dozen characters moving at the same time). The problem is that my transparent PNG images work great without convert, but as soon as I use .convert() all of the transparent backgrounds suddenly become white.
Do I need to sample the color and make it transparent using color_key?
A:
convert_alpha should do the trick
http://www.pygame.org/docs/ref/surface.html#Surface.convert_alpha
| Pygame - calling surface.convert() on animated sprite causes transparent background to become white | Everyone says to use .convert() on surfaces to speed up animations (which will be an issue with my game because it will be an MMO to some extent, so it might have a dozen or a couple dozen characters moving at the same time), the problem is that my transparent PNG images work great without convert but as soon as I use .convert() all of the transparent backgrounds suddenly become white
Do I need to sample the color and make it transparent using color_key?
| [
"convert_alpha should do the trick \nhttp://www.pygame.org/docs/ref/surface.html#Surface.convert_alpha\n"
] | [
3
] | [] | [] | [
"geometry_surface",
"pygame",
"python",
"sprite",
"transparent"
] | stackoverflow_0003616903_geometry_surface_pygame_python_sprite_transparent.txt |
Q:
How do I embed a python library in a C++ app?
I've embedded python on a mobile device successfully, but now how do I include a python library such as urllib?
Additionally, how can I include my own python scripts without a PYTHONPATH?
(please note: python is not installed on this system)
A:
The easiest way is to create a .zip file containing all the python code you need and add this to your process's PYTHONPATH environment variable (via setenv()) prior to initializing the embedded Python interpreter. Usage of .pyd libraries can be done similarly by adding them to the same directory as the .zip and including the directory in the PYTHONPATH as well.
Usage of the setenv() call can cause trouble on Windows if you're mixing c-runtime versions. I spent many aggravating hours learning that setenv() only sets the environment variables for the version of the c-runtime your compiler ships with. So if, for example, Python was built with VC++ 2005 and your compiler is VC++ 2008, you'll need to use an alternative mechanism. Browsing the sources for py2exe and/or PyInstaller may provide you with a better solution (since you're doing essentially the same thing as these tools) but a simple alternative is to "cheat" by using PyRun_SimpleString() to set the module search path from within Python itself.
snprintf(buff, sizeof buff, "import sys\nsys.path.append(\"%s\")\n", py_zip_filename);
PyRun_SimpleString(buff);
| How do I embed a python library in a C++ app? | I've embedded python on a mobile device successfully, but now how do I include a python library such as urllib?
Additionally, how can I include my own python scripts without a PYTHONPATH?
(please note: python is not installed on this system)
| [
"The easiest way is to create a .zip file containing all the python code you need and add this to your process's PYTHONPATH environment variable (via setenv()) prior to initializing the embedded Python interpreter. Usage of .pyd libraries can be done similarly by adding them to the same directory as the .zip and in... | [
1
] | [] | [] | [
"embed",
"python"
] | stackoverflow_0003618281_embed_python.txt |
Q:
How to calculate timedelta until next execution for scheduled events
I have three lists that define when a task should be executed:
minute: A list of integers from 0-59 that represent the minutes of an hour of when execution should occur;
hour: A list of integers from 0-23 that represent the hours of a day of when execution should occur
day_of_week: A list of integers from 0-6, where Sunday = 0 and Saturday = 6, that represent the days of a week that execution should occur.
Is there a easy way to calculate what's the timedelta until the next execution in Python?
Thanks!
EDIT:
For example, if we have the following lists:
day_of_week = [0]
hour = [1]
minute = [0, 30]
The task should run twice a week at 1:00 and 1:30 every Sunday.
I'd like to calculate the timedelta until the next occurance based on current time.
A:
Using dateutil (edited to address the OP's updated question):
import datetime
import dateutil.relativedelta as dr
import itertools
day_of_week = [1,3,5,6]
hour = [1,10,15,17,20]
minute = [4,34,51,58]
now=datetime.datetime.now()
deltas=[]
for min,hr,dow in itertools.product(minute,hour,day_of_week):
# dateutil convention: Monday = 0, Sunday = 6.
next_dt=now+dr.relativedelta(minute=min,hour=hr,weekday=dow)
delta=next_dt-now
deltas.append(delta)
deltas.sort()
This is the next timedelta:
print(deltas[0])
# 4 days, 14:22:00
And here is the corresponding datetime:
print(now+deltas[0])
# 2010-09-02 01:04:23.258204
Note that dateutil uses the convention Monday = 0, Sunday = 6.
A:
Just in case anybody is interested, this is the code that I developed using ~unutbu suggestions. The main advantage is that it scales fine.
import datetime
import dateutil.relativedelta as dr
def next_ocurrance(minutes, hours, days_of_week):
# days_of_week convention: Sunday = 0, Saturday = 6
# dateutil convention: Monday = 0, Sunday = 6
now = datetime.datetime.now()
weekday = now.isoweekday()
execute_this_hour = weekday in days_of_week \
and now.hour in hours \
and now.minute < max(minutes)
if execute_this_hour:
next_minute = min([minute for minute in minutes
if minute > now.minute])
return now + dr.relativedelta(minute=next_minute,
second=0,
microsecond=0)
else:
next_minute = min(minutes)
        execute_today = weekday in days_of_week \
and (now.hour < max(hours) or execute_this_hour)
if execute_today:
next_hour = min([hour for hour in hours if hour > now.hour])
return now + dr.relativedelta(hour=next_hour,
minute=next_minute,
second=0,
microsecond=0)
else:
next_hour = min(hours)
next_day = min([day for day in days_of_week if day > weekday] \
or days_of_week)
return now + dr.relativedelta(weekday=(next_day - 1) % 7,
hour=next_hour,
minute=next_minute,
second=0,
microsecond=0)
if __name__=='__main__':
day_of_week = [4]
hour = [1, 10, 12, 13]
minute = [4, 14, 34, 51, 58]
print next_ocurrance(minute, hour, day_of_week)
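If pulling in dateutil is undesirable, a brute-force approach using only the standard library is also practical for this problem size (at most about 10,000 candidate minutes in a week). This is my own sketch, not code from either answer above; it keeps the question's Sunday = 0 convention and returns the timedelta to the next strictly future slot:

```python
import datetime

def next_run_delta(minutes, hours, days_of_week, now=None):
    """Scan forward minute by minute until a scheduled slot is hit.
    days_of_week uses the question's convention: Sunday = 0, Saturday = 6.
    Worst case is 7 days * 24 h * 60 min ~= 10k iterations."""
    if now is None:
        now = datetime.datetime.now()
    candidate = now.replace(second=0, microsecond=0) + datetime.timedelta(minutes=1)
    for _ in range(7 * 24 * 60 + 1):
        # datetime.weekday() has Monday = 0; shift so Sunday = 0.
        dow = (candidate.weekday() + 1) % 7
        if dow in days_of_week and candidate.hour in hours and candidate.minute in minutes:
            return candidate - now
        candidate += datetime.timedelta(minutes=1)
    raise ValueError('empty schedule')
```

With the question's example (day_of_week = [0], hour = [1], minute = [0, 30]) and a Saturday-midnight "now", this returns one day and one hour until Sunday 01:00.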
| How to calculate timedelta until next execution for scheduled events | I have three lists that define when a task should be executed:
minute: A list of integers from 0-59 that represent the minutes of an hour of when execution should occur;
hour: A list of integers from 0-23 that represent the hours of a day of when execution should occur
day_of_week: A list of integers from 0-6, where Sunday = 0 and Saturday = 6, that represent the days of a week that execution should occur.
Is there a easy way to calculate what's the timedelta until the next execution in Python?
Thanks!
EDIT:
For example, if we have the following lists:
day_of_week = [0]
hour = [1]
minute = [0, 30]
The task should run twice a week at 1:00 and 1:30 every Sunday.
I'd like to calculate the timedelta until the next occurance based on current time.
| [
"Using dateutil (edited to address the OP's updated question):\nimport datetime\nimport random\nimport dateutil.relativedelta as dr\nimport itertools\n\nday_of_week = [1,3,5,6]\nhour = [1,10,15,17,20]\nminute = [4,34,51,58]\n\nnow=datetime.datetime.now()\ndeltas=[]\n\nfor min,hr,dow in itertools.product(minute,hour... | [
1,
0
] | [] | [] | [
"datetime",
"python"
] | stackoverflow_0003618538_datetime_python.txt |
Q:
PyQt application crashes after closing QMessagebox window
Here is the code of my simple tray application. It crashes with a segfault when I call the information window from the application's context menu and then close it.
I've tried different variants to find the reason for the segfault; this is my last try.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
from PyQt4 import QtCore
from PyQt4 import QtGui
class SystemTrayIcon(QtGui.QSystemTrayIcon):
def __init__(self, parent=None):
QtGui.QSystemTrayIcon.__init__(self, parent)
self.setIcon(QtGui.QIcon("icon.png"))
self.iconMenu = QtGui.QMenu(parent)
appabout = self.iconMenu.addAction("About")
appexit = self.iconMenu.addAction("Exit")
self.setContextMenu(self.iconMenu)
self.aboutdialog = QtGui.QWidget(parent)
self.connect(appabout,QtCore.SIGNAL('triggered()'),self.showAbout)
self.connect(appexit,QtCore.SIGNAL('triggered()'),self.appExit)
self.show()
def showAbout(self):
QtGui.QMessageBox.information(self.aboutdialog, self.tr("About Tunarium"), self.tr("Your text here."))
def appExit(self):
sys.exit()
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
trayIcon = SystemTrayIcon()
trayIcon.show()
sys.exit(app.exec_())
A:
I don't know Python but in your appExit(), you should be calling quit() or exit() on the application object which will cause your call to sys.exit(app.exec_()) in main to return. Again, not knowing the Python specifics, you can do this by using the Qt macro qApp and call qApp->quit() or QCoreApplication::instance()->quit().
Calling quit() is the same as calling exit(0). You can use exit() directly to return any exit code of your choice.
Update:
I've tried your code in C++ with a few tweaks and it does work. I've commented where you should try making changes to your code. Hope it works for you.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
from PyQt4 import QtCore
from PyQt4 import QtGui
class SystemTrayIcon(QtGui.QSystemTrayIcon):
def __init__(self, parent=None):
QtGui.QSystemTrayIcon.__init__(self, parent)
self.setIcon(QtGui.QIcon("icon.png"))
self.iconMenu = QtGui.QMenu(parent)
appabout = self.iconMenu.addAction("About")
appexit = self.iconMenu.addAction("Exit")
self.setContextMenu(self.iconMenu)
# Remove this next line, it isn't needed
#self.aboutdialog = QtGui.QWidget(parent)
self.connect(appabout,QtCore.SIGNAL('triggered()'),self.showAbout)
self.connect(appexit,QtCore.SIGNAL('triggered()'),self.appExit)
# Remove this next line, it isn't needed
#self.show()
def showAbout(self):
# Before showing the message box, disable the tray icon menu
        self.iconMenu.setEnabled(False)
        # Pass None as the parent (the Python equivalent of a null pointer)
        QtGui.QMessageBox.information(None, self.tr("About Tunarium"), self.tr("Your text here."))
# Re-enable the tray icon menu
        self.iconMenu.setEnabled(True)
def appExit(self):
# Replace the next line with something that calls the QApplication's
# exit() or quit() function.
#sys.exit()
app.quit()
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
# Tell the application not to exit when the last window is closed. This should
# prevent the application from exiting when the message box is closed.
    app.setQuitOnLastWindowClosed(False)
trayIcon = SystemTrayIcon()
trayIcon.show()
sys.exit(app.exec_())
Update 2:
As requested, here is the equivalent C++ code:
main.cpp
#include <QtGui/QApplication>
#include "SystemTrayIcon.h"
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
app.setQuitOnLastWindowClosed(false);
SystemTrayIcon trayIcon(&app);
trayIcon.show();
return app.exec();
}
SystemTrayIcon.h
#ifndef SYSTEMTRAYICON_H
#define SYSTEMTRAYICON_H
#include <QtGui/QSystemTrayIcon>
#include <QtGui/QAction>
#include <QtGui/QMenu>
#include <QtGui/QWidget>
class SystemTrayIcon : public QSystemTrayIcon
{
Q_OBJECT
public:
SystemTrayIcon(QObject * parent = 0);
virtual ~SystemTrayIcon();
private:
QAction * m_appabout;
QAction * m_appexit;
QMenu * m_iconMenu;
QWidget * m_aboutdialog;
private slots:
void slot_showAbout();
void slot_exit();
};
#endif /* SYSTEMTRAYICON_H */
SystemTrayIcon.cpp
#include <iostream>
#include <QtCore/QCoreApplication>
#include <QtGui/QIcon>
#include <QtGui/QAction>
#include <QtGui/QMessageBox>
#include "SystemTrayIcon.h"
SystemTrayIcon::SystemTrayIcon(QObject * parent) :
QSystemTrayIcon(parent),
m_appabout(0),
m_appexit(0),
m_iconMenu(0),
m_aboutdialog(0)
{
setIcon(QIcon("icon.png"));
m_iconMenu = new QMenu();
m_appabout = m_iconMenu->addAction("About");
m_appexit = m_iconMenu->addAction("Exit");
setContextMenu(m_iconMenu);
connect(m_appabout, SIGNAL(triggered()), this, SLOT(slot_showAbout()));
connect(m_appexit, SIGNAL(triggered()), this, SLOT(slot_exit()));
}
SystemTrayIcon::~SystemTrayIcon()
{
}
void SystemTrayIcon::slot_showAbout()
{
std::cout << "slot show about." << std::endl;
m_iconMenu->setEnabled(false);
QMessageBox::information(0, "About Tunarium", "Your text here.");
m_iconMenu->setEnabled(true);
}
void SystemTrayIcon::slot_exit()
{
std::cout << "slot exit." << std::endl;
qApp->quit();
}
| PyQt application crashes after closing QMessagebox window | Here is the code of my simple tray application. It crashes with a segfault when I call the information window from the application's context menu and then close it.
I've tried different variants to find the reason for the segfault; this is my last try.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
from PyQt4 import QtCore
from PyQt4 import QtGui
class SystemTrayIcon(QtGui.QSystemTrayIcon):
def __init__(self, parent=None):
QtGui.QSystemTrayIcon.__init__(self, parent)
self.setIcon(QtGui.QIcon("icon.png"))
self.iconMenu = QtGui.QMenu(parent)
appabout = self.iconMenu.addAction("About")
appexit = self.iconMenu.addAction("Exit")
self.setContextMenu(self.iconMenu)
self.aboutdialog = QtGui.QWidget(parent)
self.connect(appabout,QtCore.SIGNAL('triggered()'),self.showAbout)
self.connect(appexit,QtCore.SIGNAL('triggered()'),self.appExit)
self.show()
def showAbout(self):
QtGui.QMessageBox.information(self.aboutdialog, self.tr("About Tunarium"), self.tr("Your text here."))
def appExit(self):
sys.exit()
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
trayIcon = SystemTrayIcon()
trayIcon.show()
sys.exit(app.exec_())
| [
"I don't know Python but in your appExit(), you should be calling quit() or exit() on the application object which will cause your call to sys.exit(app.exec_()) in main to return. Again, not knowing the Python specifics, you can do this by using the Qt macro qApp and call qApp->quit() or QCoreApplication::instance(... | [
3
] | [] | [] | [
"contextmenu",
"pyqt",
"python",
"tray",
"trayicon"
] | stackoverflow_0003619514_contextmenu_pyqt_python_tray_trayicon.txt |
Q:
ctypes.windll.user32.GetCursorInfo() - how can I manage this to work? [Python]
I have to get the information about the current mouse cursor from windows but I'm not managing to work this command...
what should I do?
Can someone post one example?
A:
What information are you trying to get out of the GetCursorInfo() call? It would be easier to use the win32 extensions (especially if you just want cursor position).
>>> import win32gui
>>> win32gui.GetCursorInfo()
(1, 65555, (717, 412))
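For completeness, if you do want the ctypes route the OP asked about: GetCursorInfo() needs a CURSORINFO structure with its cbSize field filled in before the call, and that missing setup is the usual reason the bare ctypes call fails. A sketch (the structure layout follows the Win32 documentation; the actual call only runs on Windows):

```python
import ctypes
import sys
from ctypes import wintypes

class CURSORINFO(ctypes.Structure):
    _fields_ = [('cbSize', wintypes.DWORD),
                ('flags', wintypes.DWORD),      # CURSOR_SHOWING == 1, hidden == 0
                ('hCursor', ctypes.c_void_p),
                ('ptScreenPos', wintypes.POINT)]

def get_cursor_info():
    info = CURSORINFO()
    info.cbSize = ctypes.sizeof(CURSORINFO)     # required, or the call fails
    if not ctypes.windll.user32.GetCursorInfo(ctypes.byref(info)):
        raise ctypes.WinError()
    return info.flags, info.hCursor, (info.ptScreenPos.x, info.ptScreenPos.y)

if sys.platform == 'win32':
    print(get_cursor_info())
```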
| ctypes.windll.user32.GetCursorInfo() - how can I manage this to work? [Python] | I have to get the information about the current mouse cursor from windows but I'm not managing to work this command...
what should I do?
Can someone post one example?
| [
"What information are you trying to get out of the GetCursorInfo() call? It would be easier to use the win32 extensions (especially if you just want cursor position).\n>>> import win32gui\n>>> win32gui.GetCursorInfo()\n(1, 65555, (717, 412))\n\n"
] | [
1
] | [] | [] | [
"ctypes",
"python",
"windows"
] | stackoverflow_0003619690_ctypes_python_windows.txt |
Q:
Python Directory Display in Finder, Explorer, Dolphin, etc... (Cross-Platform)
I would like to find some way of Viewing a Directory in the default file system viewer (Windows Explorer, Finder, Dolphin, etc...) that will work on all major platforms.
I do not have the detailed knowledge of Linux, nor of OSX in order to write this. Is there some script out there that will do what I want?
A:
OSX:
os.system('open "%s"' % foldername)
Windows:
os.startfile(foldername)
Unix:
os.system('xdg-open "%s"' % foldername)
Combined:
import os
systems = {
'nt': os.startfile,
    'posix': lambda foldername: os.system('xdg-open "%s"' % foldername),
'os2': lambda foldername: os.system('open "%s"' % foldername)
}
systems.get(os.name, os.startfile)(foldername)
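One caveat with dispatching on os.name: macOS also reports 'posix', so the 'open' branch above would never be selected ('os2' is IBM's OS/2). A sketch that distinguishes the platforms with sys.platform instead (the function names here are my own, not a standard API):

```python
import os
import subprocess
import sys

def reveal_command(folder):
    """Return the command that shows folder in the default file manager."""
    if sys.platform.startswith('win'):
        return ['explorer', folder]
    if sys.platform == 'darwin':        # macOS reports os.name == 'posix' too
        return ['open', folder]
    return ['xdg-open', folder]         # freedesktop.org (most Linux desktops)

def reveal(folder):
    if sys.platform.startswith('win'):
        os.startfile(folder)            # uses the shell file association directly
    else:
        subprocess.call(reveal_command(folder))
```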
| Python Directory Display in Finder, Explorer, Dolphin, etc... (Cross-Platform) | I would like to find some way of Viewing a Directory in the default file system viewer (Windows Explorer, Finder, Dolphin, etc...) that will work on all major platforms.
I do not have the detailed knowledge of Linux, nor of OSX in order to write this. Is there some script out there that will do what I want?
| [
"OSX:\nos.system('open \"%s\"' % foldername)\n\nWindows:\nos.startfile(foldername)\n\nUnix:\nos.system('xdg-open \"%s\"' % foldername)\n\nCombined:\nimport os\n\nsystems = {\n 'nt': os.startfile,\n 'posix': lambda foldername: os.system('xdg-open \"%s\"' % foldername)\n 'os2': lambda foldername: os.system('... | [
4
] | [] | [] | [
"cross_platform",
"directory",
"python"
] | stackoverflow_0003619908_cross_platform_directory_python.txt |
Q:
simple twisted receiver for orbited comet server
I have an unusual request.
I've just moved to a new apartment and I won't have my internet hooked up for over a week. I'm trying to develop my application using my phone for online documentation. Before I moved I found this video (vodpod.com/watch/4071950-building-real-time-network-applications-for-the-web-with-twisted-and-orbited-part-001?u=snaky&c=snaky) from pycon about orbited/twisted basics, unfortunately I forget how it was done and my phone won't play the video.
Could someone watch the first bit of the video for me and post one of the first python examples? The presenter has a simple reactor (I think) which can relay chat messages from telnet and web clients. I just need the basic example, even the one where each event method just has a "pass" in it, then I can continue from there.
A:
You can watch the video on Android/Symbian/WinMobile using SkyFire.
A:
The URL to the code examples used in that video:
http://orbited.org/blog/files/tutorial/examples.tgz
| simple twisted receiver for orbited comet server | I have an unusual request.
I've just moved to a new apartment and I won't have my internet hooked up for over a week. I'm trying to develop my application using my phone for online documentation. Before I moved I found this video (vodpod.com/watch/4071950-building-real-time-network-applications-for-the-web-with-twisted-and-orbited-part-001?u=snaky&c=snaky) from pycon about orbited/twisted basics, unfortunately I forget how it was done and my phone won't play the video.
Could someone watch the first bit of the video for me and post one of the first python examples? The presenter has a simple reactor (I think) which can relay chat messages from telnet and web clients. I just need the basic example, even the one where each event method just has a "pass" in it, then I can continue from there.
| [
"You can watch the video on Android/Symbian/WinMobile using SkyFire.\n",
"The URL to the code examples used in that video:\nhttp://orbited.org/blog/files/tutorial/examples.tgz\n"
] | [
1,
1
] | [] | [] | [
"orbited",
"python",
"twisted"
] | stackoverflow_0003619836_orbited_python_twisted.txt |
Q:
python: regular expressions, how to match a string of undefined length which has a structure and finishes with a specific group
I need to create a regexp to match strings like this 999-123-222-...-22
The string can be finished by &Ns=(any number) or without this... So valid strings for me are
999-123-222-...-22
999-123-222-...-22&Ns=12
999-123-222-...-22&Ns=12
And following are not valid:
999-123-222-...-22&N=1
I have been testing it for several hours already, but did not manage to solve it; I really need some help
A:
Not sure if you want to literally match 999-123-22-...-22 or if that can be any sequence of numbers/dashes. Here are two different regexes:
/^[\d-]+(&Ns=\d+)?$/
/^999-123-222-\.\.\.-22(&Ns=\d+)?$/
The key idea is the (&Ns=\d+)?$ part, which matches an optional &Ns=<digits>, and is anchored to the end of the string with $.
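A quick sanity check of the first pattern in Python (using plain digit groups in place of the question's elided "..."):

```python
import re

pattern = re.compile(r'^[\d-]+(&Ns=\d+)?$')

assert pattern.match('999-123-222-22')           # bare string: valid
assert pattern.match('999-123-222-22&Ns=12')     # with &Ns=<digits>: valid
assert not pattern.match('999-123-222-22&N=1')   # wrong parameter: rejected
```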
A:
If you just want to allow strings 999-123-222-...-22 and 999-123-222-...-22&Ns=12 you better use a string function.
If you want to allow any numbers between - you can use the regex:
^(\d+-){3}[.]{3}-\d+(&Ns=\d+)?$
If the numbers must be of only 3 digits and the last number of only 2 digits you can use:
^(\d{3}-){3}[.]{3}-\d{2}(&Ns=\d{2})?$
A:
This looks like a phone number and extension information..
Why not make things simpler for yourself (and anyone who has to read this later) and split the input rather than use a complicated regex?
s = '999-123-222-...-22&Ns=12'
parts = s.split('&Ns=') # splits on Ns and removes it
If the piece before the "&" is a phone number, you could do another split and get the area code etc into separate fields, like so:
phone_parts = parts[0].split('-') # breaks up the digit string and removes the '-'
area_code = phone_parts[0]
The portion found after the optional '&Ns=' can be checked to see whether it is numeric with the string method isdigit, which will return true if all characters in the string are digits and there is at least one character, false otherwise.
if len(parts) > 1:
extra_digits_ok = parts[1].isdigit()
| python: regular expressions, how to match a string of undefined length which has a structure and finishes with a specific group | I need to create a regexp to match strings like this 999-123-222-...-22
The string can be finished by &Ns=(any number) or without this... So valid strings for me are
999-123-222-...-22
999-123-222-...-22&Ns=12
999-123-222-...-22&Ns=12
And following are not valid:
999-123-222-...-22&N=1
I have been testing it for several hours already, but did not manage to solve it; I really need some help
| [
"Not sure if you want to literally match 999-123-22-...-22 or if that can be any sequence of numbers/dashes. Here are two different regexes:\n/^[\\d-]+(&Ns=\\d+)?$/\n\n/^999-123-222-\\.\\.\\.-22(&Ns=\\d+)?$/\n\nThe key idea is the (&Ns=\\d+)?$ part, which matches an optional &Ns=<digits>, and is anchored to the end... | [
1,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003618193_python_regex.txt |
Q:
Database Wrapper Class for Python
I am a PHP developer who recently migrated to Python. In PHP there are many classes available at, for example, phpclasses.org, which save a lot of developer time. I am looking for a similar kind of repository for Python. I need a database wrapper class for accessing the database with Python. One of the classes I found was at http://www.devx.com/dbzone/Article/22093 . But I didn't find any other similar class, so I'm wondering whether no one in this world has created such classes. Is there a better approach to accessing a database in Python? Am I on the wrong track? I'm aware of ORMs and their advantages, but for small projects it doesn't make sense to me to use an ORM. Can anyone suggest a suitable Python database wrapper class?
Thanks in advance.
A:
Following Skurmedel's advice, I remembered reading about SQLAlchemy before. It's pretty well documented and maintained. Here's a good start point: SQLAlchemy 1.4 / 2.0 Tutorial
A:
Most databases you choose to use will have a module available that conforms to the DBAPI. That gives you access that is quite easy to use and reasonably portable across different types of database (though not entirely). You can see an example of the API in action with the built-in SQLite support.
If you'll reconsider your opposition to ORMs, however: besides SQLAlchemy, mentioned in another answer, SQLObject is another ORM and is very simple to use. There's also Storm.
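As a concrete illustration of the DBAPI pattern mentioned above (here with the built-in sqlite3 module; other drivers such as MySQLdb or psycopg2 follow essentially the same connect/cursor/execute shape, apart from the parameter placeholder style):

```python
import sqlite3

conn = sqlite3.connect(':memory:')      # any DBAPI driver: connect()...
cur = conn.cursor()                     # ...then cursor()
cur.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')
cur.execute('INSERT INTO users (name) VALUES (?)', ('alice',))  # parameterized
conn.commit()
cur.execute('SELECT name FROM users WHERE id = ?', (1,))
print(cur.fetchone()[0])                # the row we just inserted: alice
conn.close()
```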
| Database Wrapper Class for Python | I am a PHP developer who recently migrated to Python. In PHP there are many classes available at, for example, phpclasses.org, which save a lot of developer time. I am looking for a similar kind of repository for Python. I need a database wrapper class for accessing the database with Python. One of the classes I found was at http://www.devx.com/dbzone/Article/22093 . But I didn't find any other similar class, so I'm wondering whether no one in this world has created such classes. Is there a better approach to accessing a database in Python? Am I on the wrong track? I'm aware of ORMs and their advantages, but for small projects it doesn't make sense to me to use an ORM. Can anyone suggest a suitable Python database wrapper class?
Thanks in advance.
| [
"Following Skurmedel's advice, I remembered reading about SQLAlchemy before. It's pretty well documented and maintained. Here's a good start point: SQLAlchemy 1.4 / 2.0 Tutorial\n",
"Most databases you choose to use will have a module available that conforms to the DBAPI. That gives you access that is quite easy ... | [
3,
1
] | [] | [] | [
"database",
"python"
] | stackoverflow_0003618995_database_python.txt |
Q:
Release a lock temporarily if it is held, in python
I have a bunch of different methods that are not supposed to run concurrently, so I use a single lock to synchronize them. Looks something like this:
selected_method = choose_method()
with lock:
selected_method()
In some of these methods, I sometimes call a helper function that does some slow network IO. (Let's call that one network_method()). I would like to release the lock while this function is running, to allow other threads to continue their processing.
One way to achieve this would be by calling lock.release() and lock.acquire() before and after calling the network method. However, I would prefer to keep the methods oblivious to the lock, since there are many of them and they change all the time.
I would much prefer to rewrite network_method() so that it checks to see whether the lock is held, and if so release it before starting and acquire it again at the end.
Note that network_method() sometimes gets called from other places, so it shouldn't release the lock if it's not on the thread that holds it.
I tried using the locked() method on the Lock object, but that method only tells me whether the lock is held, not if it is held by the current thread.
By the way, lock is a global object and I'm fine with that.
A:
I would much prefer to rewrite network_method() so that it checks to see whether the lock is held, and if so release it before starting and acquire it again at the end.
Note that network_method() sometimes gets called from other places, so it shouldn't release the lock if it's not on the thread that holds it.
This just sounds like entirely the wrong thing to do :(
For a start, it's bad to have a function that sometimes has some other magical side-effect depending on where you call it from. That's the sort of thing that is a nightmare to debug.
Secondly, a lock should have clear acquire and release semantics. If I look at code that says "lock(); do_something(); unlock();" then I expect it to be locked for the duration of do_something(). In fact, it is also telling me that do_something() requires a lock. If I find out that someone has written a particular do_something() which actually unlocks the lock that I just saw to be locked, I will either (a) fire them or (b) hunt them down with weapons, depending on whether I am in a position of seniority relative to them or not.
By the way, lock is a global object and I'm fine with that.
Incidentally, this is also why globals are bad. If I modify a value, call a function, and then modify a value again, I don't want that function in the middle being able to reach back out and modify this value in an unpredictable way.
My suggestion to you is this: your lock is in the wrong place, or doing the wrong thing, or both. You say these methods aren't supposed to run concurrently, but you actually want some of them to run concurrently. The fact that one of them is "slow" can't possibly make it acceptable to remove the lock - either you need the mutual exclusion during this type of operation for it to be correct, or you do not. If the slower operation is indeed inherently safe when the others are not, then maybe it doesn't need the lock - but that implies the lock should go inside each of the faster operations, not outside them. But all of this is dependent on what exactly the lock is for.
A:
Why not just do this?
with lock:
before_network()
do_network_stuff()
with lock:
after_network()
| Release a lock temporarily if it is held, in python | I have a bunch of different methods that are not supposed to run concurrently, so I use a single lock to synchronize them. Looks something like this:
selected_method = choose_method()
with lock:
selected_method()
In some of these methods, I sometimes call a helper function that does some slow network IO. (Let's call that one network_method()). I would like to release the lock while this function is running, to allow other threads to continue their processing.
One way to achieve this would be by calling lock.release() and lock.acquire() before and after calling the network method. However, I would prefer to keep the methods oblivious to the lock, since there are many of them and they change all the time.
I would much prefer to rewrite network_method() so that it checks to see whether the lock is held, and if so release it before starting and acquire it again at the end.
Note that network_method() sometimes gets called from other places, so it shouldn't release the lock if it's not on the thread that holds it.
I tried using the locked() method on the Lock object, but that method only tells me whether the lock is held, not if it is held by the current thread.
By the way, lock is a global object and I'm fine with that.
| [
"\nI would much prefer to rewrite network_method() so that it checks to see whether the lock is held, and if so release it before starting and acquire it again at the end.\nNote that network_method() sometimes gets called from other places, so it shouldn't release the lock if it's not on the thread that holds it.\n... | [
3,
0
] | [] | [] | [
"locking",
"multithreading",
"python"
] | stackoverflow_0003618515_locking_multithreading_python.txt |
Q:
tkinter in python. .pack works, but .grid produces nothing
This code works fine and produces checkbuttons in a long long list.
def createbutton(self,name):
var = IntVar()
account = name[0]
chk = Checkbutton(self.root, text=account, variable=var)
chk.pack(side = BOTTOM)
self.states.append((name,var))
The problem is that the list of buttons is so long that it stretches farther than the height of my screen, so I want to put them into a grid so that I can have maybe 10 checkbuttons in a column. Just to test the capability, I did this:
def createbutton(self,name):
var = IntVar()
account = name[0]
chk = Checkbutton(self.root, text=account, variable=var)
chk.grid(column=0)
self.states.append((name,var))
And nothing happens, no tk interface opens and the program just waits. Please help!
A:
Is it possible that you have other widgets that are in the root window, and they are put there using pack? If you try to use pack and grid in the same container your app can go into an infinite loop as each manager struggles for control of the container.
| tkinter in python. .pack works, but .grid produces nothing | This code works fine and produces checkbuttons in a long long list.
def createbutton(self,name):
var = IntVar()
account = name[0]
chk = Checkbutton(self.root, text=account, variable=var)
chk.pack(side = BOTTOM)
self.states.append((name,var))
The problem is that the list of buttons is so long that it stretches farther than the height of my screen, so I want to put them into a grid so that I can have maybe 10 checkbuttons in a column. Just to test the capability, I did this:
def createbutton(self,name):
var = IntVar()
account = name[0]
chk = Checkbutton(self.root, text=account, variable=var)
chk.grid(column=0)
self.states.append((name,var))
And nothing happens, no tk interface opens and the program just waits. Please help!
| [
"Is it possible that you have other widgets that are in the root window, and they are put there using pack? If you try to use pack and grid in the same container your app can go into an infinite loop as each manager struggles for control of the container.\n"
] | [
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0003619671_python_tkinter.txt |
Q:
Who stops my threads?
I have some threads fishing into a queue for jobs, something like this:
class Worker(Thread):
[...]
def run(self):
while not self.terminated:
job = myQueue.get_nowait()
job.dosomething()
sleep(0.5)
Now, self.terminated is just a bool value I use to exit the loop. The problem is that several times a day the threads stop working without my intervention. All of them but one: the application starts with, let's say, 5 working threads, and at a random time I check them and only one is still working. All the others have both _Thread__initialized and _Thread__stopped fields set to true. Threads and jobs do not interact with each other. What should I look for?
UPDATE: actually Queue.Empty is the only exception trapped - I guess I meant to let all the jobs' internal errors propagate without killing the threads, eheh - so I'm going to trap all the exceptions and see...
A:
If that is the actual code it's pretty obvious: myQueue.get_nowait() raises an Exception (Empty) when the queue is empty!
A:
stackoverflow? :)
A:
As example, an exception inside the loop will stop the thread.
Why do you use get_nowait() and not get()? What if the Queue is empty?
A:
I have two suggestions.
1) get_nowait() will raise a Queue.Empty exception if no items are available. Make sure exceptions aren't killing your threads.
2) Use get() instead. Put a None in your queue to signal the thread to exit instead of the boolean flag. Then you don't need a half second sleep and you'll process items faster.
def run(self):
while True:
job = queue.get()
if job:
try:
job.do_something()
except Exception as e:
print e
else: # exit thread when job is None
break
A:
GIL? Only one thread executes Python bytecode at a time; if you want true parallelization you must use multiprocessing, see http://docs.python.org/library/multiprocessing.html
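To make the two failure points from the answers concrete, here is a minimal sketch (the queue module is imported under both its Python 2 and Python 3 names): get_nowait() raises Empty on an empty queue, which, if untrapped, silently ends the thread; and the None sentinel pattern removes the need for polling and sleeping:

```python
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

q = queue.Queue()

# 1) What kills the workers: get_nowait() on an empty queue raises Empty.
try:
    q.get_nowait()
    raised = False
except queue.Empty:
    raised = True           # unhandled, this would silently end run()

# 2) The sentinel pattern: block in get(), exit when None arrives.
q.put('job-1')
q.put(None)
processed = []
while True:
    item = q.get()
    if item is None:        # sentinel: time to exit the worker loop
        break
    processed.append(item)
```

With the sentinel approach each worker processes items as fast as they arrive, and shutdown is just a matter of putting one None per worker on the queue.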
| Who stops my threads? | I have some threads fishing into a queue for jobs, something like this:
class Worker(Thread):
[...]
def run(self):
while not self.terminated:
job = myQueue.get_nowait()
job.dosomething()
sleep(0.5)
Now, self.terminated is just a bool value I use to exit the loop, but - this is the problem - several times a day they stop working without my intervention. All of them but one: the application starts with, let's say, 5 working threads, and when I check them at a random time only one is working. All the others have both _Thread__initialized and _Thread__stopped fields true. Threads and jobs do not interact with each other. What should I look for?
PS: I understand it's really hard to try to figure out the issue without the actual code, but it's huge.
UPDATE: actually Queue.Empty is the only exception trapped - I guess I meant to let all the jobs' internal errors propagate without killing the threads, eheh - so I'm going to trap all the exceptions and see...
| [
"If that is the actual code it's pretty obvious: myQueue.get_nowait() raises an Exception (Empty) when the queue is empty!\n",
"stackoverflow? :)\n",
"As example, an exception inside the loop will stop the thread.\nWhy do you use get_nowait() and not get()? What if the Queue is empty?\n",
"I have two suggesti... | [
3,
2,
2,
1,
0
] | [] | [] | [
"multithreading",
"python"
] | stackoverflow_0003620185_multithreading_python.txt |
Q:
Modeling Hierarchical Data - GAE
I'm new to Google App Engine and the Google datastore (Bigtable) and I have some doubts about which would be the best approach to design the required data model.
I need to create a hierarchical model, something like a product catalog, where each domain has several levels of subdomains. For the moment the structure of the products changes less often than the read requirements. Wine example:
Origin (Toscana, Priorat, Alsacian)
Winery (Belongs only to one Origin)
Wine (Belongs only to one Winery)
All the relations are disjoint and incomplete. Additionally, given the requirements, we will probably need to store usage counters for every wine (which could require transactions).
From the documentation it seems there are different potential solutions:
Ancestors management. Using parent relations and transactions
Pseudo-ancestor management. Simulating ancestors with a db.ListProperty(db.Key)
ReferenceProperty. Specifying explicitly the relation between the classes
But given the expected requests to get wines... sometimes by variety, sometimes by origin, sometimes by winery... I'm worried about the behaviour of the queries using these structures (like the multiple joins in a relational model: if you ask for the products of a family, you need to join from the deepest qualifier in the product tree all the way up to the family).
Maybe it is better to create some duplicated information (per the Google team's recommendations: operations are expensive, but storage is not, so duplicated content should not be seen as the main problem).
Some responses of other similar questions suggest:
Store all the parent ids as a hierarchy in a string... like a path property
Duplicate the relations between the Drink entity and all the parents in the tree ...
Any suggestions?
Hi Will,
Our case is more a strict hierarchical approach, as you represent in the second example. And the queries are for retrieving lists of products; retrieving only one is not usual.
We need to retrieve all the wines from an Origin, from a Winery or from a Variety (if we suppose that the variety is another node of the strict hierarchical tree; it's only an example).
One way could be to include a path property, as you mentioned:
/origin/{id}/winery/{id}/variety/{id}
To allow me to retrieve a list of wines from a variety applying a query like this:
wines_query = Wine.all()
wines_query.filter('key_name >','/origin/toscana/winery/latoscana/variety/merlot/')
wines_query.filter('key_name <','/origin/toscana/winery/latoscana/variety/merlot/zzzzzzzz')
Or like this from an Origin:
wines_query = Wine.all()
wines_query.filter('key_name >','/origin/toscana/')
wines_query.filter('key_name <','/origin/toscana/zzzzzz')
Thank you!
A:
I'm not sure what kinds of queries you'll need to do in addition to those mentioned in the question, but storing the data in an explicit ancestor hierarchy would make the ones you asked about fall out pretty easily.
For example, to get all wines from a particular origin:
origin_key = db.Key.from_path('Origin', 123)
wines_query = db.Query(Wine).ancestor(origin_key)
or to get all wines from a particular winery:
origin_key = db.Key.from_path('Origin', 123)
winery_key = db.Key.from_path('Winery', 456, parent=origin_key)
wines_query = db.Query(Wine).ancestor(winery_key)
and, assuming you're storing the variety as a property on the Wine model, all wines of a particular variety is as simple as
wines_query = Wine.all().filter('variety =', 'merlot')
One possible downside of this strict hierarchical approach is the kind of URL scheme it can impose on you. With a hierarchy that looks like
Origin -> Winery -> Wine
you must know the key name or ID of a wine's origin and winery in order to build a key to retrieve that wine, unless you've already got the string representation of the wine's key. This basically forces you to have URLs for wines in one of the following forms:
/origin/{id}/winery/{id}/wine/{id}
/wine/{opaque and unfriendly datastore key as a string}
(The first URL could of course be replaced with querystring parameters; the important part is that you need three different pieces of information to identify a given wine.)
Maybe there are other alternatives to these URL schemes that have not occurred to me, though.
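One note on the key_name range trick from the question: the 'zzzzzz' upper bound misses key names that sort above 'z', so a conventional upper bound is the prefix plus a high sentinel character such as u'\ufffd'. The datastore's index scan over sorted key names behaves like a binary search, which can be sketched with bisect (the key names below are made up for illustration):

```python
import bisect

# Hypothetical sorted index of Wine key names.
keys = sorted([
    '/origin/priorat/winery/scala/variety/garnacha/w1',
    '/origin/toscana/winery/latoscana/variety/merlot/w2',
    '/origin/toscana/winery/latoscana/variety/merlot/w3',
    '/origin/toscana/winery/ornellaia/variety/merlot/w4',
])

def prefix_range(sorted_keys, prefix):
    """Everything whose key name starts with prefix, as one range scan."""
    lo = bisect.bisect_left(sorted_keys, prefix)
    hi = bisect.bisect_left(sorted_keys, prefix + u'\ufffd')
    return sorted_keys[lo:hi]

toscana = prefix_range(keys, '/origin/toscana/')
merlot_at_latoscana = prefix_range(
    keys, '/origin/toscana/winery/latoscana/variety/merlot/')
```

The same two-inequality filter in the datastore ('key_name >= prefix' and 'key_name < prefix + sentinel') retrieves a whole subtree with a single index scan.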
| Modeling Hierarchical Data - GAE | I'm new to Google App Engine and the Google datastore (Bigtable) and I have some doubts about which would be the best approach to design the required data model.
I need to create a hierarchical model, something like a product catalog, where each domain has several levels of subdomains. For the moment the structure of the products changes less often than the read requirements. Wine example:
Origin (Toscana, Priorat, Alsacian)
Winery (Belongs only to one Origin)
Wine (Belongs only to one Winery)
All the relations are disjoint and incomplete. Additionally, given the requirements, we will probably need to store usage counters for every wine (which could require transactions).
From the documentation it seems there are different potential solutions:
Ancestors management. Using parent relations and transactions
Pseudo-ancestor management. Simulating ancestors with a db.ListProperty(db.Key)
ReferenceProperty. Specifying explicitly the relation between the classes
But given the expected requests to get wines... sometimes by variety, sometimes by origin, sometimes by winery... I'm worried about the behaviour of the queries using these structures (like the multiple joins in a relational model: if you ask for the products of a family, you need to join from the deepest qualifier in the product tree all the way up to the family).
Maybe it is better to create some duplicated information (per the Google team's recommendations: operations are expensive, but storage is not, so duplicated content should not be seen as the main problem).
Some responses of other similar questions suggest:
Store all the parent ids as a hierarchy in a string... like a path property
Duplicate the relations between the Drink entity and all the parents in the tree ...
Any suggestions?
Hi Will,
Our case is more a strict hierarchical approach, as you represent in the second example. And the queries are for retrieving lists of products; retrieving only one is not usual.
We need to retrieve all the wines from an Origin, from a Winery or from a Variety (if we suppose that the variety is another node of the strict hierarchical tree; it's only an example).
One way could be to include a path property, as you mentioned:
/origin/{id}/winery/{id}/variety/{id}
To allow me to retrieve a list of wines from a variety applying a query like this:
wines_query = Wine.all()
wines_query.filter('key_name >','/origin/toscana/winery/latoscana/variety/merlot/')
wines_query.filter('key_name <','/origin/toscana/winery/latoscana/variety/merlot/zzzzzzzz')
Or like this from an Origin:
wines_query = Wine.all()
wines_query.filter('key_name >','/origin/toscana/')
wines_query.filter('key_name <','/origin/toscana/zzzzzz')
Thank you!
| [
"I'm not sure what kinds of queries you'll need to do in addition to those mentioned in the question, but storing the data in an explicit ancestor hierarchy would make the ones you asked about fall out pretty easily.\nFor example, to get all wines from a particular origin:\norigin_key = db.Key.from_path('Origin', 1... | [
1
] | [] | [] | [
"bigtable",
"google_app_engine",
"python"
] | stackoverflow_0003620147_bigtable_google_app_engine_python.txt |
Q:
Tagging similar sentences with lower time complexity than n^2
This is my first post, have been a lurker for a long time, so will try my best to explain myself here.
I have been using the longest common substring method along with basic word match and substring match (regexp) for clustering similar stories on the net.
But the problem is its time complexity is n^2 (I compare each title to all the others).
I've done very basic optimizations like storing and skipping all the matched titles.
What I want is some kind of preprocessing of the chunk of text so that for each iteration I reduce the number of posts to match against. Any further optimizations are also welcome.
Here are the functions I use for this. The main function that calls them first calls word_match; if more than 70% of the words match, I go further down and call 'substring_match' and LCSubstr_len. The code is in Python; I can use C as well.
import re
def substring_match(a,b):
try:
c = re.match(a,b)
return c if c else True if re.match(b,a) else False
except:
return False
def LCSubstr_len(S, T):
m = len(S); n = len(T)
L = [[0] * (n+1) for i in xrange(m+1)]
lcs = 0
for i in xrange(m):
for j in xrange(n):
if S[i] == T[j]:
L[i+1][j+1] = L[i][j] + 1
lcs = max(lcs, L[i+1][j+1])
else:
L[i+1][j+1] = max(L[i+1][j], L[i][j+1])
return lcs/((float(m+n)/2))
def word_match(str1,str2):
matched = 0
try:
str1,str2 = str(str1),str(str2)
assert isinstance(str1,str)
except:
return 0.0
words1 = str1.split(None)
words2 = str2.split(None)
for i in words1:
for j in words2:
if i.strip() ==j.strip():
matched +=1
len1 = len(words1)
len2 = len(words2)
perc_match = float(matched)/float((len1+len2)/2)
return perc_match
A:
Use an inverted index: for each word, store a list of pairs (docId, numOccurences).
Then, to find all strings which might be similar to a given string, go through its words and look up strings containing that word in the inverted index. This way you'll get a table "(docId, wordMatchScore)" that automatically contains only entries where wordMatchScore is non-zero.
There are a huge number of possible optimizations; also, your code is extremely non-optimal, but if we're talking about decreasing the number of string pairs for comparison, then that's it.
A:
Speeding up word_match is easy with sets:
def word_match(str1,str2):
# .split() splits on all whitespace, you dont needs .strip() after
words1 = set(str1.split())
words2 = set(str2.split())
common_words = words1 & words2
return 2.0*len(common_words)/(len(words1)+len(words2))
It also shows that 'A A A' and 'A' have 100% in common by this measure ...
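A minimal sketch of the inverted index from the first answer (pure Python; the function names are my own): build a word-to-docIds map once, then score candidates by shared-word count, so each title is only compared against titles it shares at least one word with:

```python
from collections import defaultdict, Counter

def build_index(titles):
    index = defaultdict(set)              # word -> set of doc ids
    for doc_id, title in enumerate(titles):
        for word in set(title.split()):
            index[word].add(doc_id)
    return index

def candidates(index, title, self_id=None):
    """Return (docId, wordMatchScore) pairs for docs sharing a word."""
    scores = Counter()
    for word in set(title.split()):
        for doc_id in index.get(word, ()):
            scores[doc_id] += 1
    if self_id is not None:
        scores.pop(self_id, None)         # don't compare a doc to itself
    return scores

titles = ['storm hits coast', 'coast storm damage', 'election results in']
index = build_index(titles)
```

Only the candidates returned here would then be passed to the expensive LCSubstr_len comparison, which is what cuts the all-pairs n^2 work down in practice.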
| Tagging similar sentences with lower time complexity than n^2 | This is my first post, have been a lurker for a long time, so will try my best to explain myself here.
I have been using the longest common substring method along with basic word match and substring match (regexp) for clustering similar stories on the net.
But the problem is its time complexity is n^2 (I compare each title to all the others).
I've done very basic optimizations like storing and skipping all the matched titles.
What I want is some kind of preprocessing of the chunk of text so that for each iteration I reduce the number of posts to match against. Any further optimizations are also welcome.
Here are the functions I use for this. The main function that calls them first calls word_match; if more than 70% of the words match, I go further down and call 'substring_match' and LCSubstr_len. The code is in Python; I can use C as well.
import re
def substring_match(a,b):
try:
c = re.match(a,b)
return c if c else True if re.match(b,a) else False
except:
return False
def LCSubstr_len(S, T):
m = len(S); n = len(T)
L = [[0] * (n+1) for i in xrange(m+1)]
lcs = 0
for i in xrange(m):
for j in xrange(n):
if S[i] == T[j]:
L[i+1][j+1] = L[i][j] + 1
lcs = max(lcs, L[i+1][j+1])
else:
L[i+1][j+1] = max(L[i+1][j], L[i][j+1])
return lcs/((float(m+n)/2))
def word_match(str1,str2):
matched = 0
try:
str1,str2 = str(str1),str(str2)
assert isinstance(str1,str)
except:
return 0.0
words1 = str1.split(None)
words2 = str2.split(None)
for i in words1:
for j in words2:
if i.strip() ==j.strip():
matched +=1
len1 = len(words1)
len2 = len(words2)
perc_match = float(matched)/float((len1+len2)/2)
return perc_match
| [
"Use an inverted index: for each word, store a list of pairs (docId, numOccurences).\nThen, to find all strings which might be similar to a given string, go through its words and look up strings containing that word in the inverted index. This way you'll get a table \"(docId, wordMatchScore)\" that automatically co... | [
4,
3
] | [] | [] | [
"algorithm",
"python",
"string"
] | stackoverflow_0003620318_algorithm_python_string.txt |
Q:
How do you generate the non-convex hull from a series of points?
I am currently trying to construct the area covered by a device over an operating period.
The first step in this process appears to be constructing a polygon of the covered area.
Since the pattern is not a standard shape, convex hulls overstate the covered area by jumping to the largest coverage area possible.
I have found a paper that appears to cover the concept of non-convex hull generation, but no discussions on how to implement this within a high level language.
http://www.geosensor.net/papers/duckham08.PR.pdf
Has anyone seen a straightforward algorithm for constructing a non-convex hull or concave hull, or perhaps any Python code to achieve the same result?
I have tried convex hulls, mainly qhull, with a limited edge size, with limited success.
Also I have noticed some licensed libraries that cannot be distributed, so unfortunately that's off the table.
Any better ideas or cookbooks?
A:
You might try looking into Alpha Shapes. The CGAL library can compute them.
Edit: I see that the paper you linked references alpha shapes, and also has an algorithm listing. Is that not high level enough for you? Since you listed python as a tag, I'm sure there are Delaunay triangulation libraries in Python, which I think is the hardest part of implementing the algorithm; you just need to make sure you can modify the resulting triangulation output. The boundary query functions can probably be implemented with associative arrays.
| How do you generate the non-convex hull from a series of points? | I am currently trying to construct the area covered by a device over an operating period.
The first step in this process appears to be constructing a polygon of the covered area.
Since the pattern is not a standard shape, convex hulls overstate the covered area by jumping to the largest coverage area possible.
I have found a paper that appears to cover the concept of non-convex hull generation, but no discussions on how to implement this within a high level language.
http://www.geosensor.net/papers/duckham08.PR.pdf
Has anyone seen a straightforward algorithm for constructing a non-convex hull or concave hull, or perhaps any Python code to achieve the same result?
I have tried convex hulls, mainly qhull, with a limited edge size, with limited success.
Also I have noticed some licensed libraries that cannot be distributed, so unfortunately that's off the table.
Any better ideas or cookbooks?
| [
"You might try looking into Alpha Shapes. The CGAL library can compute them.\nEdit: I see that the paper you linked references alpha shapes, and also has an algorithm listing. Is that not high level enough for you? Since you listed python as a tag, I'm sure there are Delaunay triangulation libraries in Python, whic... | [
4
] | [] | [] | [
"computational_geometry",
"geometry",
"gis",
"math",
"python"
] | stackoverflow_0003620446_computational_geometry_geometry_gis_math_python.txt |
Q:
Why is my code stopping?
Hey, I've encountered an issue where my program stops iterating through the file at record 57802 for some reason I cannot figure out. I put in a heartbeat section so I could see which line it is on, and that helped, but now I am stuck as to why it stops there. I thought it was a memory issue, but I just ran it on my computer with 6GB of memory and it still stopped.
Is there a better way to do anything I am doing below?
My goal is to read the file (if you need me to send it to you I can; it's a 15MB text log), find a match based on the regex expression, and print the matching line. More to come, but that's as far as I have gotten. I am using Python 2.6.
Any ideas would help and code comments also! I am a python noob and am still learning.
import sys, os, os.path, operator
import re, time, fileinput
infile = os.path.join("C:\\","Python26","Scripts","stdout.log")
start = time.clock()
filename = open(infile,"r")
match = re.compile(r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d{3} +\w+ +\[([\w.]+)\] ((\w+).?)+:\d+ - (\w+)_SEARCH:(.+)')
count = 0
heartbeat = 0
for line in filename:
heartbeat = heartbeat + 1
print heartbeat
lookup = match.search(line)
if lookup:
count = count + 1
print line
end = time.clock()
elapsed = end-start
print "Finished processing at:",elapsed,"secs. Count of records =",count,"."
filename.close()
This is line 57802 where it fails:
2010-08-06 08:15:15,390 DEBUG [ah_admin] com.thg.struts2.SecurityInterceptor.intercept:46 - Action not SecurityAware; skipping privilege check.
This is a matching line:
2010-08-06 09:27:29,545 INFO [patrick.phelan] com.thg.sam.actions.marketmaterial.MarketMaterialAction.result:223 - MARKET_MATERIAL_SEARCH:{"_appInfo":{"_appId":21,"_companyDivisionId":42,"_environment":"PRODUCTION"},"_description":"symlin","_createdBy":"","_fieldType":"GEO","_geoIds":["Illinois"],"_brandIds":[2883],"_archived":"ACTIVE","_expired":"UNEXPIRED","_customized":"CUSTOMIZED","_webVisible":"VISIBLE_ONLY"}
Sample data just the first 5 lines:
2010-08-06 00:00:00,035 DEBUG [] com.thg.sam.jobs.PlanFormularyLoadJob.executeInternal:67 - Entered into PlanFormularyLoadJob: executeInternal
2010-08-06 00:00:00,039 DEBUG [] com.thg.ftpComponent.service.JScapeFtpService.open:153 - Opening FTP connection to sdrive/hibbert@tccfp01.hibbertnet.com:21
2010-08-06 00:00:00,040 DEBUG [] com.thg.sam.email.EmailUtils.sendEmail:206 - org.apache.commons.mail.MultiPartEmail@446e79
2010-08-06 00:00:00,045 DEBUG [] com.thg.sam.services.OrderService.getOrdersWithStatus:121 - Orders list size=13
2010-08-06 00:00:00,045 DEBUG [] com.thg.ftpComponent.service.JScapeFtpService.open:153 - Opening FTP connection to sdrive/hibbert@tccfp01.hibbertnet.com:21
A:
What does the input line that gives you trouble look like? I'd try printing that out. I suspect your CPU is pegged while this is running.
Nested regexps, like you have can have VERY bad performance when they don't match quickly.
((\w+).?)+:
Imagine a string that doesn't have the : in it but is fairly long. You'll end up in a world of backtracking as the regexp tries EVERY combination of ways to separate word characters between \w and . and THEN tries to group them in every way possible. If you can be more specific in your pattern it'll pay off big time.
A:
Your problem is definitely the part @paulrubel pointed out:
((\w+).?)+:\d+
Now that you've added sample data, it's obvious that the . is supposed to match a literal dot, which means you should have escaped it (\.). Also, you don't need the inner set of parentheses, and the outer set should be non-capturing, but it's the basic structure that's killing you; there are too many arrangements of word characters and dots it has to try before giving up. The other lines all fail before that part of the regex is attempted, which is why you don't have any problem with them.
When I try it in RegexBuddy, your regex matches the good line in 186 steps, and gives up trying on line 57802 after 1,000,000 steps. When I escape the dot, the good line only takes 90 steps to match, but it still times out on line 57802. But now I know that part of the regex can only match word characters and dots. Once it has consumed all of those it can, the next bit has to match :\d+; if it doesn't, I know there's no point trying other arrangements. I can use an atomic group to tell it not to bother:
(?>(?:\w+\.?)+):\d+
With that change, the good line matches in 83 steps, and line 57802 only takes 66 steps to report failure. But it's not always feasible to use atomic groups, so you should try to make your regex conform to the actual structure of the text it's matching. In this case you're matching what looks like a Java class name (some word characters, followed by zero or more instances of (a dot and some more word characters)) followed by a colon and and a line number:
\w+(?:\.\w+)*:\d+
When I plug that into the regex, it matches the good line in 80 steps, and rejects line 57802 in 67 steps--the atomic group isn't even needed.
A:
You compiled your regex but never used it?
lookup = re.search(match,line)
should be
lookup = match.search(line)
and you should use os.path.join()
infile = os.path.join("C:\\","Python26","Scripts","stdout.log")
Update:
Your regular expression can be simpler.Just check for the date time stamp. Or else, don't use regular expression at all. Say your date and time starts at beginning of line
for line in open("stdout.log"):
s = line.split()
D,T=s[0],s[1]
# use the time module and strptime to check valid date/time
# or you can split "-" on D and T and do manual check using > or < and math
A:
Your pattern contains the fixed string _SEARCH and a bunch of complicated expressions (including captures) that are really going to hammer the regex engine... but you don't do anything with the captured text, so all you want to know is 'does it match?'
It may be simpler and quicker to just search for the fixed pattern on each line.
if '_SEARCH:' in line:
print line
count += 1
A:
Try using pdb. If you put pdb.set_trace() in your heartbeat shortly before it stops, you can look at the specific line it's stopping on and see what each of your lines of code does with that line.
Edit: An example of pdb use:
import pdb
for i in range(50):
print i
if i == 12:
pdb.set_trace()
Run that script, and you'll get something like the following:
0
1
2
3
4
5
6
7
8
9
10
11
12
> <stdin>(1)<module>()
(Pdb)
Now you can evaluate Python expressions from the context of i=12.
(Pdb) print i
12
Use that, but put the pdb.set_trace() in your loop after you increment heartbeat, if heartbeat == 57802. Then you can print out line with p line, the result of your regex search with p match.search(line), etc.
A:
It might be a memory issue anyway. With huge files it's probably better to use the fileinput module instead like this:
import fileinput
for line in fileinput.input([infile]):
lookup = re.search(match, line)
# etc.
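Putting the accepted fix together: replacing ((\w+).?)+ with \w+(?:\.\w+)*:\d+ keeps the match on the good line and makes the failing line fail fast instead of backtracking. A quick check (log lines abbreviated from the question):

```python
import re

# Same pattern as the question, with the class-name part rewritten
# per the accepted answer so it cannot backtrack pathologically.
fixed = re.compile(
    r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d{3} +\w+ +\[([\w.]+)\] '
    r'\w+(?:\.\w+)*:\d+ - (\w+)_SEARCH:(.+)')

good = ('2010-08-06 09:27:29,545 INFO [patrick.phelan] '
        'com.thg.sam.actions.marketmaterial.MarketMaterialAction.result:223 '
        '- MARKET_MATERIAL_SEARCH:{"_description":"symlin"}')
bad = ('2010-08-06 08:15:15,390 DEBUG [ah_admin] '
       'com.thg.struts2.SecurityInterceptor.intercept:46 '
       '- Action not SecurityAware; skipping privilege check.')
```

With the rewritten class-name part, a non-matching line like the one at record 57802 is rejected in a handful of steps, so the loop no longer appears to hang.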
| Why is my code stopping? | Hey, I've encountered an issue where my program stops iterating through the file at record 57802 for some reason I cannot figure out. I put in a heartbeat section so I could see which line it is on, and that helped, but now I am stuck as to why it stops there. I thought it was a memory issue, but I just ran it on my computer with 6GB of memory and it still stopped.
Is there a better way to do anything I am doing below?
My goal is to read the file (if you need me to send it to you I can; it's a 15MB text log), find a match based on the regex expression, and print the matching line. More to come, but that's as far as I have gotten. I am using Python 2.6.
Any ideas would help and code comments also! I am a python noob and am still learning.
import sys, os, os.path, operator
import re, time, fileinput
infile = os.path.join("C:\\","Python26","Scripts","stdout.log")
start = time.clock()
filename = open(infile,"r")
match = re.compile(r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d{3} +\w+ +\[([\w.]+)\] ((\w+).?)+:\d+ - (\w+)_SEARCH:(.+)')
count = 0
heartbeat = 0
for line in filename:
heartbeat = heartbeat + 1
print heartbeat
lookup = match.search(line)
if lookup:
count = count + 1
print line
end = time.clock()
elapsed = end-start
print "Finished processing at:",elapsed,"secs. Count of records =",count,"."
filename.close()
This is line 57802 where it fails:
2010-08-06 08:15:15,390 DEBUG [ah_admin] com.thg.struts2.SecurityInterceptor.intercept:46 - Action not SecurityAware; skipping privilege check.
This is a matching line:
2010-08-06 09:27:29,545 INFO [patrick.phelan] com.thg.sam.actions.marketmaterial.MarketMaterialAction.result:223 - MARKET_MATERIAL_SEARCH:{"_appInfo":{"_appId":21,"_companyDivisionId":42,"_environment":"PRODUCTION"},"_description":"symlin","_createdBy":"","_fieldType":"GEO","_geoIds":["Illinois"],"_brandIds":[2883],"_archived":"ACTIVE","_expired":"UNEXPIRED","_customized":"CUSTOMIZED","_webVisible":"VISIBLE_ONLY"}
Sample data just the first 5 lines:
2010-08-06 00:00:00,035 DEBUG [] com.thg.sam.jobs.PlanFormularyLoadJob.executeInternal:67 - Entered into PlanFormularyLoadJob: executeInternal
2010-08-06 00:00:00,039 DEBUG [] com.thg.ftpComponent.service.JScapeFtpService.open:153 - Opening FTP connection to sdrive/hibbert@tccfp01.hibbertnet.com:21
2010-08-06 00:00:00,040 DEBUG [] com.thg.sam.email.EmailUtils.sendEmail:206 - org.apache.commons.mail.MultiPartEmail@446e79
2010-08-06 00:00:00,045 DEBUG [] com.thg.sam.services.OrderService.getOrdersWithStatus:121 - Orders list size=13
2010-08-06 00:00:00,045 DEBUG [] com.thg.ftpComponent.service.JScapeFtpService.open:153 - Opening FTP connection to sdrive/hibbert@tccfp01.hibbertnet.com:21
| [
"What does the input line that gives you trouble look like? I'd try printing that out. I suspect your CPU is pegged while this is running. \nNested regexps, like you have can have VERY bad performance when they don't match quickly.\n((\\w+).?)+:\n\nImagine a string that doesn't have the : in it but is fairly long. ... | [
7,
2,
1,
1,
0,
0
] | [] | [] | [
"python",
"regex",
"string_matching"
] | stackoverflow_0003614075_python_regex_string_matching.txt |
Q:
How to remove leading and trailing spaces from strings in a Python list
i have a list:
row=['hi', 'there', 'how', ...........'some stuff is here are ','you']
as you can see row[8]='some stuff is here are '
if the last character is a space i would like to get everything except for the last character like this:
if row[8][len(row[8])-1]==' ':
row[8]=row[8][0:len(row[8])-2]
this method is not working. can someone suggest a better syntax please?
A:
row = [x.strip() for x in row]
(if you just want to get spaces at the end, use rstrip)
A:
Negative indexes count from the end. And slices are anchored before the index given.
if row[8][-1]==' ':
row[8]=row[8][:-1]
A:
So you want it without trailing spaces? Can you just use row[8].rstrip?
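For comparison, here is why the slice in the question dropped a character too many: [0:len(s)-2] excludes the last two characters, while [:-1] excludes only the last one, and rstrip() handles any number of trailing spaces:

```python
s = 'abc '                         # last character is a space

chopped = s[0:len(s) - 2]          # original attempt: drops TWO characters
sliced = s[:-1]                    # slice end is exclusive, so -1 is enough
stripped = s.rstrip()              # removes all trailing whitespace

cleaned = [x.rstrip() for x in ['hi ', 'there']]   # whole-list version
```

The list comprehension at the end is the same idea as the accepted answer, using rstrip to touch only trailing whitespace.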
| How to remove leading and trailing spaces from strings in a Python list | i have a list:
row=['hi', 'there', 'how', ...........'some stuff is here are ','you']
as you can see row[8]='some stuff is here are '
if the last character is a space i would like to get everything except for the last character like this:
if row[8][len(row[8])-1]==' ':
row[8]=row[8][0:len(row[8])-2]
this method is not working. can someone suggest a better syntax please?
| [
"row = [x.strip() for x in row]\n\n(if you just want to get spaces at the end, use rstrip)\n",
"Negative indexes count from the end. And slices are anchored before the index given.\nif row[8][-1]==' ':\n row[8]=row[8][:-1]\n\n",
"So you want it without trailing spaces? Can you just use row[8].rstrip?\n"
] | [
10,
4,
4
] | [] | [] | [
"python"
] | stackoverflow_0003621008_python.txt |
Q:
Appengine - Reportlab PDF
I'm using Google appengine and want to generate a PDF with reportlab.
The application works well and can generate PDF's like 'Hello World' and little else.
But what I want is to fetch the data that the user entered in a form and generate the PDF dynamically.
Anyone can share a piece of code? I would be grateful.
A:
I assume you use the webapp framework.
import cgi
from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
class MainPage(webapp.RequestHandler):
def get(self):
self.response.out.write("""
<html>
<body>
<form action="/makepdf" method="post">
<div><textarea name="content" rows="3" cols="60"></textarea></div>
<div><input type="submit" value="Make a PDF for me"></div>
</form>
</body>
</html>""")
class MakePDF(webapp.RequestHandler):
def post(self):
# now here you can make your PDF like you did for the "Hello world" one
# and you can access the entered data like this: self.request.get('content')
application = webapp.WSGIApplication(
[('/', MainPage),
('/makepdf', MakePDF)],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
| Appengine - Reportlab PDF | I'm using Google appengine and want to generate a PDF with reportlab.
The application works well and can generate PDF's like 'Hello World' and little else.
But what I want is to fetch the data that the user entered in a form and generate the PDF dynamically.
Anyone can share a piece of code? I would be grateful.
| [
"I assume you use the webapp framework.\nimport cgi\n\nfrom google.appengine.api import users\nfrom google.appengine.ext import webapp\nfrom google.appengine.ext.webapp.util import run_wsgi_app\n\nclass MainPage(webapp.RequestHandler):\n def get(self):\n self.response.out.write(\"\"\"\n <html>\n ... | [
2
] | [] | [] | [
"google_app_engine",
"python",
"reportlab"
] | stackoverflow_0003621010_google_app_engine_python_reportlab.txt |
Q:
Accessing the content of a variable array with ctypes
I use ctypes to access a file-reading C function from Python. As the read data is huge and unknown in size, I use float** in C.
int read_file(const char *file,int *n_,int *m_,float **data_) {...}
The function mallocs a 2D array, called data, of the appropriate size (here n by m), and copies the values to the referenced ones. See the following snippet:
*data_ = data;
*n_ = n;
*m_ = m;
I access this function with the following python code:
p_data=POINTER(c_float)
n=c_int(0)
m=c_int(0)
filename='datasets/usps'
read_file(filename,byref(n),byref(m),byref(p_data))
Afterwards I try to access p_data using contents, but I get only a single float value.
p_data.contents
c_float(-1.0)
My question is: How can I access data in python?
What do you recommend? Please don't hesitate to point out anything I left unclear!
A:
It might be simpler to do the whole thing in Python with the struct library. But if you're sold on ctypes (and I don't blame you, it's pretty cool):
#include <malloc.h>
void floatarr(int* n, float** f)
{
int i;
float* f2 = malloc(sizeof(float)*10);
n[0] = 10;
for (i=0;i<10;i++)
{ f2[i] = (i+1)/2.0; }
f[0] = f2;
}
and then in python:
from ctypes import *
fd = cdll.LoadLibrary('float.dll')
fd.floatarr.argtypes = [POINTER(c_int),POINTER(POINTER(c_float))]
fpp = POINTER(c_float)()
ip = c_int(0)
fd.floatarr(pointer(ip),pointer(fpp))
print ip
print fpp[0]
print fpp[1]
The trick is that the capitalized POINTER makes a type, while lowercase pointer makes a pointer to existing storage. You can use byref instead of pointer (it's claimed to be faster), but I like pointer better because it's clearer what's happening.
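To get at the rest of the data, note that a POINTER(c_float) supports indexing and slicing: fpp[:ip.value] copies that many elements into a plain Python list. A minimal, self-contained sketch, using a ctypes array (with made-up values) to stand in for the C-side malloc'd buffer:

```python
from ctypes import POINTER, c_float, cast

# Stand-in for memory malloc'd on the C side (values are made up):
buf = (c_float * 4)(0.5, 1.0, 1.5, 2.0)
fp = cast(buf, POINTER(c_float))   # a float* view, like fpp above

# Slicing a POINTER(c_float) copies n elements into a Python list:
values = fp[:4]
print(values)   # [0.5, 1.0, 1.5, 2.0]
```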
| Accessing the content of a variable array with ctypes | I use ctypes to access a file-reading C function in Python. As the read data is huge and of unknown size, I use float** in C.
int read_file(const char *file,int *n_,int *m_,float **data_) {...}
The function mallocs a 2D array, called data, of the appropriate size (here n by m), and copies the values to the referenced arguments. See the following snippet:
*data_ = data;
*n_ = n;
*m_ = m;
I access this function with the following python code:
p_data=POINTER(c_float)
n=c_int(0)
m=c_int(0)
filename='datasets/usps'
read_file(filename,byref(n),byref(m),byref(p_data))
Afterwards I try to access p_data using contents, but I get only a single float value.
p_data.contents
c_float(-1.0)
My question is: How can I access data in python?
What do you recommend? Please don't hesitate to point out anything I left unclear!
| [
"might be simpler to do the whole thing in python with the struct library. but if you're sold on ctypes (and I don't blame you, it's pretty cool):\n#include <malloc.h>\nvoid floatarr(int* n, float** f)\n{\n int i;\n float* f2 = malloc(sizeof(float)*10);\n n[0] = 10;\n for (i=0;i<10;i++)\n { f2[i] = (... | [
3
] | [] | [] | [
"ctypes",
"multidimensional_array",
"pointers",
"python"
] | stackoverflow_0003620348_ctypes_multidimensional_array_pointers_python.txt |
Q:
Is there a way to write a command so that it aborts a running function call?
I have a widget that measures elapsed time, then after a certain duration it does a command. However, if the widget is left, I want it to abort this function call and not do the command.
How do I go about this?
A:
Use the threading module and start a new thread that will run the function.
Just aborting the function is a bad idea, as you don't know whether you'd interrupt the thread in a critical situation. You should extend your function like this:
import threading
class WidgetThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
self._stop = False
def run(self):
# ... do some time-intensive stuff that stops if self._stop ...
#
# Example:
# while not self._stop:
#     do_something()
def stop(self):
self._stop = True
# Start the thread and make it run the function:
thread = WidgetThread()
thread.start()
# If you want to abort it:
thread.stop()
A:
Why not use threads and stop that? I don't think it's possible to intercept a function call in a single threaded program (if not with some kind of signal or interrupt).
Also, with your specific issue, you might want to introduce a flag and check that in the command.
A:
No idea about python threads, but in general the way you interrupt a thread is by having some sort of a threadsafe state object that you can set from the widget, and the logic in thread code to check for the change in state object value and break out of the thread loop.
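The standard library already provides such a thread-safe state object: threading.Event. A minimal sketch of the same stop-flag pattern (the worker body here is a made-up placeholder for the real time-intensive work):

```python
import threading
import time

stop = threading.Event()              # thread-safe stop flag

def worker():
    while not stop.is_set():          # check the flag on each iteration
        time.sleep(0.01)              # stands in for one unit of real work

t = threading.Thread(target=worker)
t.start()
stop.set()                            # ask the worker to finish
t.join(timeout=2)
print(t.is_alive())                   # False once the worker has exited
```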
| Is there a way to write a command so that it aborts a running function call? | I have a widget that measures elapsed time, then after a certain duration it does a command. However, if the widget is left, I want it to abort this function call and not do the command.
How do I go about this?
| [
"Use the threading module and start a new thread that will run the function.\nJust aborting the function is a bad idea, as you don't know whether you'd interrupt the thread in a critical situation. You should extend your function like this:\nimport threading\n\nclass WidgetThread(threading.Thread):\n def __init__(self):\n... | [
3,
0,
0
] | [] | [] | [
"abort",
"python",
"tkinter"
] | stackoverflow_0003621111_abort_python_tkinter.txt |
Q:
Is there a way to package a python extension written in C into a binary so I don't have to python-install it?
I wrote a Python extension in C, and my python program uses that extension. In order for it to work, I would have to install the extension on the user's system before my program can run. Is there a way to bypass that installation step and somehow just have the extension in my python package? The only compiled part obviously is the extension (since it's in C).
A:
You can avoid having some one to install it independently but you can not avoid installation completely. If his computing platform differs from yours, he will have to build the extension.
What can be done is that you setup a package distribution using distutils. This way the package could be installed or built. You can include the "C" extension in your package.
For some standard platform, you can then provide binary package distribution. The user should have the ability to rebuild, if the binary package is not working out for him.
A:
Just put the compiled .pyd Python DLL in the same directory as your Python script. Then you'll be able to import it.
| Is there a way to package a python extension written in C into a binary so I don't have to python-install it? | I wrote a Python extension in C, and my python program uses that extension. In order for it to work, I would have to install the extension on the user's system before my program can run. Is there a way to bypass that installation step and somehow just have the extension in my python package? The only compiled part obviously is the extension (since it's in C).
| [
"You can avoid having some one to install it independently but you can not avoid installation completely. If his computing platform differs from yours, he will have to build the extension.\nWhat can be done is that you setup a package distribution using distutils. This way the package could be installed or built. Y... | [
2,
1
] | [] | [] | [
"c",
"installation",
"python"
] | stackoverflow_0003619990_c_installation_python.txt |
Q:
python: cleaning up a string
I have a string like this:
somestring='in this/ string / i have many. interesting.occurrences of {different chars} that need to .be removed '
here is the result i want:
somestring='in this string i have many interesting occurrences of different chars that need to be removed'
I started manually doing all kinds of .replace calls, but there are so many different combinations that I think there must be a simpler way. Perhaps there's a library that already does this?
Does anyone know how I can clean up this string?
A:
I would use regular expression to replace all non-alphanumerics to spaces:
>>> import re
>>> somestring='in this/ string / i have many. interesting.occurrences of {different chars} that need to .be removed '
>>> rx = re.compile('\W+')
>>> res = rx.sub(' ', somestring).strip()
>>> res
'in this string i have many interesting occurrences of different chars that need to be removed'
A:
You have two steps: remove the punctuation then remove the extra whitespace.
1) Use string.translate
import string
trans_table = string.maketrans(string.punctuation, " " * len(string.punctuation))
new_string = some_string.translate(trans_table)
This builds, then applies, a translation table that maps punctuation characters to whitespace.
2) Remove excess whitespace
new_string = " ".join(new_string.split())
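As a side note for anyone on Python 3 (where string.maketrans no longer exists and the equivalent lives on str), the same two steps look like this. This is a sketch of the same pipeline, not part of the original answer:

```python
import string

somestring = 'in this/ string / i have many. interesting.occurrences of {different chars}'

# Step 1: map every punctuation character to a space (str.maketrans on Python 3)
table = str.maketrans(string.punctuation, ' ' * len(string.punctuation))
# Step 2: collapse the resulting runs of whitespace
cleaned = ' '.join(somestring.translate(table).split())
print(cleaned)
# in this string i have many interesting occurrences of different chars
```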
A:
' '.join(re.sub(r'[\[\]/{}.,]+', ' ', somestring).split())
| python: cleaning up a string | I have a string like this:
somestring='in this/ string / i have many. interesting.occurrences of {different chars} that need to .be removed '
here is the result i want:
somestring='in this string i have many interesting occurrences of different chars that need to be removed'
I started manually doing all kinds of .replace calls, but there are so many different combinations that I think there must be a simpler way. Perhaps there's a library that already does this?
Does anyone know how I can clean up this string?
| [
"I would use regular expression to replace all non-alphanumerics to spaces:\n>>> import re\n>>> somestring='in this/ string / i have many. interesting.occurrences of {different chars} that need to .be removed '\n>>> rx = re.compile('\\W+')\n>>> res = rx.sub(' ', somestring).strip()\n>>> res\n'in this string i ... | [
17,
2,
1
] | [] | [] | [
"python",
"string"
] | stackoverflow_0003621296_python_string.txt |
Q:
python round problem
I am facing a problem while dividing:
my max_sum = 14
total_no=4
so when i do
print "x :", (max_sum/total_no)
, I get 3 and not 3.5
I tried many ways of printing but failed; can somebody let me know how I can get 3.5?
Thank you
A:
In Python 2.x, dividing two integers by default gives you another integer. This is often confusing, and has been fixed in Python 3.x. You can bypass it by casting one of the numbers to a float, which will automatically cast the other:
float( 14 ) / 4 == 3.5
The relevant PEP is number 238:
The current division (/) operator has
an ambiguous meaning for
numerical arguments: it returns the floor of the mathematical
result of division if the arguments are ints or longs, but it
returns a reasonable approximation of the division result if the
arguments are floats or complex. This makes expressions expecting
float or complex results error-prone when integers are not
expected but possible as inputs.
It was not changed in Python 2.x because of severe backwards-compatibility issues, but was one of the major changes in Python 3.x. You can force the new division with the line
from __future__ import division
at the top of your Python script. This is a __future__-import -- it is used to force syntax changes that otherwise might break your script. There are many other __future__ imports; they are often a good idea to use in preparation for a move to Python 3.x.
Note that the // operator always means integer division; if you really want this behaviour, you should use it in preference to /. Remember, "explicit is better than implicit"!
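A quick demonstration of the two operators under Python 3 semantics (which the __future__ import gives you on 2.x):

```python
print(14 / 4)     # 3.5  -- true division always yields a float
print(14 // 4)    # 3    -- floor (integer) division
print(-14 // 4)   # -4   -- note: // floors, it does not truncate toward zero
```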
A:
You are dividing two integers, so in Python 2 the result will be an integer too. Try making one of your operands a float, like max_sum = 14.0 or total_no = 4.0, to get a float result.
If you want Python 2.x to behave more intuitively in this matter, you can add
from __future__ import division
at the top of your script. In Python 3, division works as you expected in your case.
A:
make max_sum=14.0 or total_no=4.0
A:
You're doing an integer operation, so you get the floored integer result: 3.
You should explicitly or implicitly cast max_sum or total_no to float.
A:
Another way of doing this would be to import division from Python 3:
>>> from __future__ import division
>>> max_sum = 14
>>> total_no = 4
>>> print max_sum / total_no # the way you want it
3.5
>>> print max_sum // total_no
3
| python round problem | I am facing a problem while dividing:
my max_sum = 14
total_no=4
so when i do
print "x :", (max_sum/total_no)
, I get 3 and not 3.5
I tried many ways of printing but failed; can somebody let me know how I can get 3.5?
Thank you
| [
"In Python 2.x, dividing two integers by default gives you another integer. This is often confusing, and has been fixed in Python 3.x. You can bypass it by casting one of the numbers to a float, which will automatically cast the other:\nfloat( 14 ) / 4 == 3.5\nThe relevant PEP is number 238:\n\nThe current division... | [
9,
2,
1,
0,
0
] | [] | [] | [
"division",
"integer_division",
"python",
"python_2.x",
"rounding"
] | stackoverflow_0003621717_division_integer_division_python_python_2.x_rounding.txt |
Q:
Python - Minimum of a List of Instance Variables
I'm new to Python and I really love the min function.
>>>min([1,3,15])
1
But what if I have a list of instances, and they all have a variable named number?
class Instance():
def __init__(self, number):
self.number = number
i1 = Instance(1)
i2 = Instance(3)
i3 = Instance(15)
iList = [i1,i2,i3]
Do I really have to do something like
lowestI = iList[0].number
for i in iList:
    if lowestI > i.number: lowestI = i.number
print lowestI
Can't I use min in a nice pythonic way?
A:
The OOP way would be to implement __lt__:
class Instance():
def __init__(self, number):
self.number = number
def __lt__(self, other):
return self.number < other.number
# now min(iList) just works
Another way is
imin = min(iList, key=lambda x:x.number)
Functions like sort, min, max all take a key argument. You give a function that takes an item and returns whatever should stand for this item when comparing it.
A:
from operator import attrgetter
min( iList, key = attrgetter( "number" ) )
The same key argument also works with sort, for implementing the decorate-sort-undecorate idiom Pythonically.
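For completeness, a short sketch showing the same attrgetter key driving both min and sorted (the Instance class here mirrors the one from the question):

```python
from operator import attrgetter

class Instance:
    def __init__(self, number):
        self.number = number

iList = [Instance(3), Instance(1), Instance(15)]

lowest = min(iList, key=attrgetter('number'))
print(lowest.number)                                                 # 1

# The same key orders the whole list:
print([i.number for i in sorted(iList, key=attrgetter('number'))])   # [1, 3, 15]
```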
A:
Generator syntax:
min(i.number for i in iList)
key function:
min(iList, key=lambda i: i.number)
A:
min(iList, key=lambda inst: inst.number)
| Python - Minimum of a List of Instance Variables | I'm new to Python and I really love the min function.
>>>min([1,3,15])
1
But what if I have a list of instances, and they all have a variable named number?
class Instance():
def __init__(self, number):
self.number = number
i1 = Instance(1)
i2 = Instance(3)
i3 = Instance(15)
iList = [i1,i2,i3]
Do I really have to do something like
lowestI = iList[0].number
for i in iList:
    if lowestI > i.number: lowestI = i.number
print lowestI
Can't I use min in a nice pythonic way?
| [
"The OOP way would be to implement __lt__:\nclass Instance():\n def __init__(self, number):\n self.number = number\n\n def __lt__(self, other):\n return self.number < other.number\n # now min(iList) just works\n\nAnother way is\nimin = min(iList, key=lambda x:x.number)\nFunctions like sor... | [
14,
10,
5,
1
] | [] | [] | [
"list",
"min",
"python"
] | stackoverflow_0003621826_list_min_python.txt |
Q:
How to deal with Python ~ static typing?
I am from the Java world and I wonder what is so great about dynamic typing in Python besides missing errors while compiling the code?
Do you like Python's typing? Do you have an example where it helped in a big project? Isn't it a bit error prone?
A:
Static type checking is undecidable in the general case. This means that there are programs which are statically type-safe but for which the type-checker cannot prove that they are statically type-safe, and thus the type-checker must reject those programs.
In other words: there are type-safe programs that the type-checker will not allow you to write. Or, even more succinctly: static typing prevents you from writing certain programs.
This applies to all static typing in general, not just to Java.
As to Java specifically: it has a rather crappy type system. Its type system is not expressive enough to express even very simple properties. For example: where in the type of static void java.util.Arrays.sort(Object[] a) does it actually say that the result has to be, you know, sorted? Or that the array elements have to be partially ordered?
Another problem with Java is that its type system has holes so big that you can drive a truck through:
String[] a = new String[1];
Object[] b = a;
b[0] = 1; // ArrayStoreException
The problem in this particular case are covariant arrays. It's simply not possible for arrays to be both covariant and type-safe.
Java combines all the hassle of static typing with none of the advantages. So, you might just as well get rid of the hassle.
However, note that this is not universal. There are other languages which have much better type systems for which the trade-offs are much less clear.
For example, here is the most stupid language benchmark of all time (Fibonacci) in Python:
def fib(n):
if n < 2: return n
return fib(n-2) + fib(n-1)
and Java:
int fib(int n) {
if (n < 2) return n;
return fib(n-2) + fib(n-1);
}
Note that there is quite a bit more clutter there, which is solely related to static typing. To make the comparison more fair, let's imagine a language with Python's syntax and Java's semantics:
def fib(n: int) -> int:
if n < 2: return n
return fib(n-2) + fib(n-1)
[Interesting side note: with the addition of optional static type annotations in Python 3.x, that is actually also valid Python code, although it is obviously still not statically type-safe, since the annotations are just that: annotations. They are never actually checked anywhere.]
There is some definite clutter there. However, in Haskell it looks like this:
fib n
| n < 2 = n
| otherwise = fib (n-2) + fib (n-1)
Unlike the Python version, this is perfectly statically type-safe, but there is zero type-related clutter.
In this particular case, the question between the benefits of static and dynamic typing are much less clear.
By the way, a more idiomatic Haskell version would probably look like this:
fib 0 = 0
fib 1 = 1
fib n = fib (n-2) + fib (n-1)
or this:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Really, the much more important difference between Java and Python is not so much that Java is statically typed and Python is dynamically typed, but rather that Java is just not a good programming language, while Python is. So, Java is just always going to lose, not because it is statically typed, but because it is crap. Comparing BASIC with Haskell, Haskell clearly wins, but again, not because it is statically typed but because BASIC is crap.
A much more interesting comparison would be Java vs. BASIC or Python vs. Haskell.
A:
I suspect that the vast majority of non-trivial Java programs have dynamic typing in them.
Every time in Java that you do a cast from Object to an explicit type you are doing dynamic type checking - this includes every use of a collection class before generics were introduced in 1.5. Actually Java generics can still defer some type checking until runtime.
Every time you use Java reflection you are doing dynamic type checking. This includes mapping from a class or method name in an text file to a real class or method - e.g. every time you use a Spring XML configuration file.
Does this make Java programs fragile and error prone? Do Java programmers spend a significant part of their time having to track down and fix problems with incorrect dynamic typing? Probably not - and neither do Python programmers.
Some of the advantages of dynamic typing:
Greatly reduced reliance on inheritance. I have seen Java programs with massive inheritance trees. Python programs often use little or no inheritance, preferring to use duck typing.
It is easy to write truly generic code. For example the min() and max() functions can take a sequence of any comparable type - integers, strings, floats, classes that have the appropriate comparison methods, lists, tuples etc.
Less code. A huge proportion of Java code contributes nothing to solving the problem at hand - it is purely there to keep the type system happy. If a Python program is a fifth the size of the equivalent Java program then there is one fifth the code to write, maintain, read and understand. To put it another way, Python has a much higher signal to noise ratio.
Faster development cycle. This goes hand in hand with less code - you spend less time thinking about types and classes and more time thinking about solving the problem you are working on.
Little need for AOP. I think there are aspect oriented libraries for Python but I don't know of anyone that uses them, since for 99% of what you need AOP for you can do with decorators and dynamic object modification. The wide use of AspectJ in the Java world suggests to me that there are deficiencies in the core Java language that have to be compensated for with an external tool.
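The generic-code point above is easy to demonstrate: the same min() works unchanged on numbers, strings, and tuples, since all it asks of the elements is that they be comparable:

```python
print(min([3, 1, 15]))           # 1        -- integers
print(min('banana'))             # a        -- characters compare lexicographically
print(min((2, 'b'), (2, 'a')))   # (2, 'a') -- tuples compare element-wise
```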
A:
Do you like it in Python?
It's part of Python. Liking it in Python is silly.
Do you have an example where it helped in a big project?
Yes. Every single day I rejoice that I can make changes and -- because of Duck typing -- they are reasonably localized, pass all the unit tests, pass all the integration tests, and nothing is disrupted elsewhere.
If this was Java, the changes would require endless refactoring to pull interfaces out of classes so that I could introduce variations that were still permitted under Java's static type checking.
Isn't it a bit error prone?
Not any more than static typing is. A simple unit test confirms that the objects conform to the expected features.
It's easy to write a class in Java that (a) passes compile-time checks and (b) crashes horribly at run time. Casts are a good way to do this. Failing to meet the class's intent is a common thing -- a class may compile but still not work.
A:
A lot of patterns (e.g. from GoF) are unnecessary or can be implemented with less efforts in dynamic-typed languages with functional flavor. In fact, a lot of patterns are "built-in" into python so if you write short and 'pythonic' code you will get all the benefits for free. You don't need Iterator, Observer, Strategy, Factory Method, Abstract Factory, and a bunch of other patterns that are common in Java or C++.
This means less code to write and (much more important) less code to read, understand and support. I think this is the main benefit of languages like python. And in my opinion this greatly outweighs the absence of static typing. Type-related errors are not often in python code and they are easy to catch with simple functional tests (and such tests are easier to write in python than in java for sure).
A:
It's a load off your mind. You can think of the color red as "Red" (a constant), as "255, 0, 0" (a tuple), or as "#FF0000" (a string): three different formats that would require three different types, or complex lookup and conversion methods, in Java.
It makes code simpler.
A:
For example, you can write functions
to which you can pass an integer as
well as a string or a list or a
dictionary or whatever else, and it
will be able to transparently handle
all of them in appropriate ways (or
throw an exception if it cannot handle
the type). You can do things like that
in other languages, too, but usually
you have to resort to (ab)use things
like pointers, references or
typecasts, which opens holes for
programming errors, and it's just
plain ugly.
A:
As you're from the Java world, the obvious answer would be that it's great not to be forced to write all that stuff you are forced to write, just to keep Java's type system happy.
Of course, there are other statically type checked languages that don't force you to write all that stuff that Java forces you to write.
Even C# does type inference for local method variables!
And there are other statically type checked languages that provide more compile time error checking than Java provides.
(The less obvious answers for - what is so great about dynamic typing in Python? - probably require more understanding of Python to understand.)
| How to deal with Python ~ static typing? | I am from the Java world and I wonder what is so great about dynamic typing in Python besides missing errors while compiling the code?
Do you like Python's typing? Do you have an example where it helped in a big project? Isn't it a bit error prone?
| [
"Static type checking is undecidable in the general case. This means that there are programs which are statically type-safe but for which the type-checker cannot prove that they are statically type-safe, and thus the type-checker must reject those programs.\nIn other words: there are type-safe programs that the typ... | [
17,
13,
5,
2,
0,
0,
0
] | [] | [] | [
"dynamic_typing",
"java",
"python",
"static_typing"
] | stackoverflow_0003621297_dynamic_typing_java_python_static_typing.txt |
Q:
I have to make mouse move until cursor change, but, how?
I want to make a little script that make the mouse moves until the icon changes, but I'm not having success with it...
Here it's what I'm trying
def enterLink():
mouseMove(*position[4])
for win32gui.GetCursorInfo()[1] == 65567:
mouseMove(*position[5])
mouseMove(*position[4])
How I have to do this?
The commands are correct =/
thank you
edit:
I want the mouse cursor to move from one location to another until the area becomes a link...
For example, the page could take 5 minutes to load, so, the mouse cursor will be moving around until the page loads completely and the area become a link.
A:
The commands are correct =/
If they were correct, it would work...
for win32gui.GetCursorInfo()[1] == 65567:
I suggest if.
| I have to make mouse move until cursor change, but, how? | I want to make a little script that makes the mouse move until the cursor icon changes, but I'm not having success with it...
Here it's what I'm trying
def enterLink():
mouseMove(*position[4])
for win32gui.GetCursorInfo()[1] == 65567:
mouseMove(*position[5])
mouseMove(*position[4])
How I have to do this?
The commands are correct =/
thank you
edit:
I want the mouse cursor to move from one location to another until the area becomes a link...
For example, the page could take 5 minutes to load, so, the mouse cursor will be moving around until the page loads completely and the area become a link.
| [
"\nThe commands are correct =/\n\nIf they were correct, it would work...\nfor win32gui.GetCursorInfo()[1] == 65567:\n\nI suggest if.\n"
] | [
1
] | [] | [] | [
"python",
"windows"
] | stackoverflow_0003622019_python_windows.txt |
Q:
Using Twill from Python to open a link: " 'module' object has no attribute 'Popen' " What is it?
I have downloaded and installed Python 2.5.4 on my computer (my OS is Windows XP), downloaded the “Google App Engine Software Development Kit” and created my first application in Python, which was a directory named helloworld that contained a small python file with the same name (helloworld.py). Here are the contents of that small file:
print 'Content-Type: text/plain'
print ''
print 'Hello, world!'
Whenever I ran this application locally on my computer with the “Google App Engine Software Development Kit”, my browser (Firefox) always showed me a white window with Hello, world! written in it.
Then I downloaded Twill and unpacked it into the helloworld directory. Having installed Twill properly, I was able to execute some small commands from the Twill shell. For example, I could navigate to a web page via a link:
Then I wanted to perform the same operation directly from Python (i.e. by means of using Twill from Python.) Here is what the Twill documentation page says about it:
twill's Python API
Using TwillBrowser Making extensions
twill is essentially a thin shell around the mechanize package. All twill commands are implemented in the commands.py file, and pyparsing does the work of parsing the input and converting it into Python commands (see parse.py). Interactive shell work and readline support is implemented via the cmd module (from the standard Python library).
Using twill from Python
There are two fairly simple ways to use twill from Python. (They are compatible with each other, so you don't need to choose between them; just use whichever is appropriate.)
The first is to simply import all of the commands in commands.py and use them directly from Python. For example,
from twill.commands import *
go("http://www.python.org/")
showforms()
This has the advantage of being very simple, as well as being tied directly to the documented set of commands in the commands reference.
So I decided to use this way. I deleted the previous contents of helloworld.py and gave it the new contents:
from twill.commands import *
go("http://www.python.org/")
showforms()
But when I tried to run that file on my computer with the “Google App Engine Software Development Kit”, my browser, instead of depicting the contents of the www.python.org web site, gives me an error message: 'module' object has no attribute 'Popen' :
Please, take a look at the whole page here.
Here are the last three lines of that page:
: 'module' object has no attribute 'Popen'
args = ("'module' object has no attribute 'Popen'",)
message = "'module' object has no attribute 'Popen'"
Can anybody, please, explain to me what this Popen attribute is all about and what I am doing wrong here?
Thank you all in advance.
Update 1
(this update is my response to the second answer provided below by leoluk)
Hello, leoluk!!!
I tried doing it this way:
config use_tidy 0
from twill.commands import *
go("http://www.python.org/")
but it didn't work. I received this error message:
<type 'exceptions.SyntaxError'>: invalid syntax (helloworld.py, line 1)
args = ('invalid syntax', (r'E:\helloworld\helloworld.py', 1, 15, 'config use_tidy 0\n'))
filename = r'E:\helloworld\helloworld.py'
lineno = 1
message = ''
msg = 'invalid syntax'
offset = 15
print_file_and_line = None
text = 'config use_tidy 0\n'
(You can see the whole page HERE)
Do you have any idea what it means and what went wrong?
A:
I think you should use mechanize directly. Twill communicates with the system in a way that's not supported by Google App Engine.
import mechanize
browser = mechanize.Browser()
browser.open('http://www.python.org')
for f in browser.forms():
print f # you'll have to extend it
A:
You can't use arbitrary libraries on Google App Engine. Twill relies on functionality that isn't available on App Engine, so Twill is not fully supported there.
Notably, the code is trying to call an external command, tidy, and calling external commands doesn't work on App Engine.
A:
The tidy program does a nice job of
producing correct HTML from mangled,
broken, eeevil Web pages. By default,
twill will run pages through tidy
before processing them. This is on by
default because the Python libraries
that parse HTML are very bad at
dealing with incorrect HTML, and will
often return incorrect results on
"real world" Web pages.
To disable this feature, set config do_run_tidy 0.
| Using Twill from Python to open a link: " 'module' object has no attribute 'Popen' " What is it? | I have downloaded and installed Python 2.5.4 on my computer (my OS is Windows XP), downloaded the “Google App Engine Software Development Kit” and created my first application in Python, which was a directory named helloworld that contained a small python file with the same name (helloworld.py). Here are the contents of that small file:
print 'Content-Type: text/plain'
print ''
print 'Hello, world!'
Whenever I ran this application locally on my computer with the “Google App Engine Software Development Kit”, my browser (Firefox) always showed me a white window with Hello, world! written in it.
Then I downloaded Twill and unpacked it into the helloworld directory. Having installed Twill properly, I was able to execute some small commands from the Twill shell. For example, I could navigate to a web page via a link:
Then I wanted to perform the same operation directly from Python (i.e. by means of using Twill from Python.) Here is what the Twill documentation page says about it:
twill's Python API
Using TwillBrowser Making extensions
twill is essentially a thin shell around the mechanize package. All twill commands are implemented in the commands.py file, and pyparsing does the work of parsing the input and converting it into Python commands (see parse.py). Interactive shell work and readline support is implemented via the cmd module (from the standard Python library).
Using twill from Python
There are two fairly simple ways to use twill from Python. (They are compatible with each other, so you don't need to choose between them; just use whichever is appropriate.)
The first is to simply import all of the commands in commands.py and use them directly from Python. For example,
from twill.commands import *
go("http://www.python.org/")
showforms()
This has the advantage of being very simple, as well as being tied directly to the documented set of commands in the commands reference.
So I decided to use this way. I deleted the previous contents of helloworld.py and gave it the new contents:
from twill.commands import *
go("http://www.python.org/")
showforms()
But when I tried to run that file on my computer with the “Google App Engine Software Development Kit”, my browser, instead of depicting the contents of the www.python.org web site, gave me an error message: 'module' object has no attribute 'Popen' :
Please, take a look at the whole page here.
Here are the last three lines of that page:
: 'module' object has no attribute 'Popen'
args = ("'module' object has no attribute 'Popen'",)
message = "'module' object has no attribute 'Popen'"
Can anybody, please, explain to me what this Popen attribute is all about and what I am doing wrong here?
Thank you all in advance.
Update 1
(this update is my response to the second answer provided below by leoluk)
Hello, leoluk!!!
I tried doing it this way:
config use_tidy 0
from twill.commands import *
go("http://www.python.org/")
but it didn't work. I received this error message:
<type 'exceptions.SyntaxError'>: invalid syntax (helloworld.py, line 1)
args = ('invalid syntax', (r'E:\helloworld\helloworld.py', 1, 15, 'config use_tidy 0\n'))
filename = r'E:\helloworld\helloworld.py'
lineno = 1
message = ''
msg = 'invalid syntax'
offset = 15
print_file_and_line = None
text = 'config use_tidy 0\n'
(You can see the whole page HERE)
Do You have any idea what it means and what went wrong?
| [
"I think you should use mechanize directly. Twill communicates with the system in a way that's not supported by Google App Engine.\nimport mechanize\n\nbrowser = mechanize.Browser()\n\nbrowser.open('http://www.python.org')\n\nfor f in browser.forms():\n print f # you'll have to extend it\n\n",
"you can't use a... | [
2,
2,
2
] | [] | [] | [
"google_app_engine",
"popen",
"python",
"twill"
] | stackoverflow_0003621432_google_app_engine_popen_python_twill.txt |
Q:
How to include PDF in Sphinx documentation?
I have a PDF that has some in depth explanation for an example in the Sphinx documentation for a package I have. Is there a way to easily include the PDF in my project (and have it copy over when I build the docs)? I tried linking to it with :doc: but this did not copy it over.
A:
Use the :download: text role to bring in an arbitrary additional file. So in your case you might do something like this:
For an in-depth explanation, please see :download:`A Detailed Example <some_extra_file.pdf>`.
| How to include PDF in Sphinx documentation? | I have a PDF that has some in depth explanation for an example in the Sphinx documentation for a package I have. Is there a way to easily include the PDF in my project (and have it copy over when I build the docs)? I tried linking to it with :doc: but this did not copy it over.
| [
"Use the :download: text role to bring in an arbitrary additional file. So in your case you might do something like this:\nFor an in-depth explanation, please see :download:`A Detailed Example <some_extra_file.pdf>`.\n\n"
] | [
24
] | [] | [] | [
"pdf",
"python",
"python_sphinx"
] | stackoverflow_0003615142_pdf_python_python_sphinx.txt |
Q:
Deploying cx_Oracle onto various versions of Oracle Client
I have some small python apps that use cx_Oracle to connect to an Oracle database. I deploy these apps by compiling them with py2exe, which works fine in many cases.
The problem is, there is no standard Oracle Client version (9i and 10g, for example) across the many people who need to install this, and it would be very frustrating to try to get everyone to standardize on a single Oracle Client version. I'm using the 9.2 client with cx_Oracle 4.4.1 for 9i at the moment, so when I run py2exe the resulting exe includes the cx_Oracle 4.4.1 library and will not work with 10g clients.
I don't use any specific features of any of the Oracle versions so there's really no reason for me to care what client version is being used, except for the cx_Oracle compatibility issues.
The ideal solution would be to somehow compile a version that is completely independent of the Oracle Client installed on the machine.
If that's not possible, I would be willing to compile separate exes for each major Oracle version (my_app_9i.exe, my_app_10g.exe, etc) but I can't figure out an easy way to even do this since installing a new cx_Oracle overwrites my old version, I would have to keep swapping the library back and forth to compile the other versions whenever I make a change.
Any advice or other options are welcome.
A:
If you want to build multiple cx_Oracle versions (eg: cx_Oracle10g, cx_Oracle11g, etc.) then you'll need to modify the cx_Oracle setup.py script. The last step in the script is a call to setup(); the first parameter is the name of the module to build. All you need to do is to change "cx_Oracle" to "cx_Oracle" + ver, where ver is 10g, 11g, etc. Either create several scripts and hard-code it, or add another parameter to setup.py to select it dynamically.
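That renaming step could be scripted rather than hard-coded into several copies of the script. A hypothetical sketch of the "add another parameter to setup.py" variant (the `--oracle-ver` flag and the `versioned_name` helper are invented here for illustration; they are not part of cx_Oracle's real setup.py):

```python
# Hypothetical sketch: pull an invented --oracle-ver flag out of the
# argument list and append its value to the module name that would be
# passed to setup(), so "setup.py build --oracle-ver=10g" builds a
# module called cx_Oracle10g.
def versioned_name(argv, base="cx_Oracle"):
    for arg in list(argv):
        if arg.startswith("--oracle-ver="):
            argv.remove(arg)          # hide the flag from distutils
            return base + arg.split("=", 1)[1]
    return base                       # no flag: plain cx_Oracle build

args = ["setup.py", "build", "--oracle-ver=10g"]
print(versioned_name(args))  # cx_Oracle10g
print(args)                  # flag stripped: ['setup.py', 'build']
```

The flag has to be removed before distutils parses the arguments, since it isn't an option distutils knows about.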
Of course, once you've got that, you need a mechanism to load the correct module at runtime. To do that you'll want to create your own cx_Oracle module that has a __init__.py file that looks something like this:
try:
    from cx_Oracle9g import *
except ImportError:
    try:
        from cx_Oracle10g import *
    except ImportError:
        from cx_Oracle11g import *
Alternately, you could have your custom cx_Oracle module dynamically check for each available Oracle client library (9g, 10g, 11g, etc) and then only import the correct matching cx_OracleXg module. In this case, you only have to ship a single binary, containing your custom cx_Oracle module plus all of the cx_OracleXg modules.
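That try/except cascade can also be written as a loop over the candidate module names. A sketch of the idea, assuming the version-specific builds are named cx_OracleXg as above (importlib is used for brevity; on the Python 2.5 of the era you would use __import__ instead):

```python
# Sketch of the runtime-selection idea: try each version-specific build
# in turn and return the first one that imports cleanly (i.e. whose
# matching Oracle client library is installed). Raises ImportError if
# none of them match.
import importlib

def load_cx_oracle(versions=("9g", "10g", "11g")):
    for ver in versions:
        try:
            return importlib.import_module("cx_Oracle" + ver)
        except ImportError:
            continue
    raise ImportError("no cx_OracleXg build matched an installed client")
```

On a machine with none of these builds installed, the call falls through every candidate and raises ImportError.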
| Deploying cx_Oracle onto various versions of Oracle Client | I have some small python apps that use cx_Oracle to connect to an Oracle database. I deploy these apps by compiling them with py2exe, which works fine in many cases.
The problem is, there is no standard Oracle Client version (9i and 10g, for example) across the many people who need to install this, and it would be very frustrating to try to get everyone to standardize on a single Oracle Client version. I'm using the 9.2 client with cx_Oracle 4.4.1 for 9i at the moment, so when I run py2exe the resulting exe includes the cx_Oracle 4.4.1 library and will not work with 10g clients.
I don't use any specific features of any of the Oracle versions so there's really no reason for me to care what client version is being used, except for the cx_Oracle compatibility issues.
The ideal solution would be to somehow compile a version that is completely independent of the Oracle Client installed on the machine.
If that's not possible, I would be willing to compile separate exes for each major Oracle version (my_app_9i.exe, my_app_10g.exe, etc) but I can't figure out an easy way to even do this since installing a new cx_Oracle overwrites my old version, I would have to keep swapping the library back and forth to compile the other versions whenever I make a change.
Any advice or other options are welcome.
| [
"If you want to build multiple cx_Oracle versions (eg: cx_Oracle10g, cx_Oracle11g, etc.) then you'll need to modify the cx_Oracle setup.py script. The last step in the script is a call to setup(); the first parameter is the name of the module to build. All you need to do is to change \"cx_Oracle\" to \"cx_Oracle\... | [
3
] | [] | [] | [
"cx_oracle",
"instantclient",
"oracle",
"py2exe",
"python"
] | stackoverflow_0003348894_cx_oracle_instantclient_oracle_py2exe_python.txt |
Q:
Break from for loop
Here is my code:
def detLoser(frag, a):
word = frag + a
if word in wordlist:
lost = True
else:
for words in wordlist:
if words[:len(word) == word:
return #I want this to break out.
else:
lost = True
Where I have a return, I've tried both return and break, and each gives me the following error: SyntaxError: invalid syntax. Any ideas? What is the best way to handle this?
A:
You've omitted the ] from the list slice. But what is the code trying to achieve, anyway?
foo[ : len( foo ) ] == foo
always!
I assume this isn't the complete code -- if so, where is wordlist defined? (is it a list? -- it's much faster to test containment for a set.)
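Both points are easy to verify with a quick self-contained check (the sample words here are made up):

```python
# foo[:len(foo)] == foo holds for any sequence, so slicing a value to
# its own length and comparing can never tell you anything.
foo = "apple"
print(foo[:len(foo)] == foo)  # True

# Containment gives the same answer on a list and a set, but the set
# lookup is O(1) on average versus O(n) for the list.
words_list = ["alpha", "beta", "gamma"]
words_set = set(words_list)
print("beta" in words_list, "beta" in words_set)    # True True
print("delta" in words_list, "delta" in words_set)  # False False
```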
A:
def detLoser(frag, a):
word = frag + a
if word in wordlist:
lost = True
else:
for words in wordlist:
if word.startswith(words):
return #I want this to break out.
else:
lost = True
You can probably rewrite the for loop using any or all (you should use a set instead of a list for wordlist, though):
def detLoser(frag, a):
word = frag + a
return word in wordlist or any(w.startswith(word) for w in wordlist)
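A self-contained run of that one-line rewrite, with a made-up wordlist (a set, as suggested) standing in for the real one:

```python
# "app" is a prefix of "apple"/"apply", and "ape" is itself in the
# wordlist, so both count as losses; "apx" matches nothing.
wordlist = {"apple", "apply", "ape"}

def det_loser(frag, a):
    word = frag + a
    return word in wordlist or any(w.startswith(word) for w in wordlist)

print(det_loser("ap", "p"))  # True:  "app" prefixes "apple"/"apply"
print(det_loser("ap", "x"))  # False: nothing starts with "apx"
print(det_loser("ap", "e"))  # True:  "ape" is in the wordlist
```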
| Break from for loop | Here is my code:
def detLoser(frag, a):
word = frag + a
if word in wordlist:
lost = True
else:
for words in wordlist:
if words[:len(word) == word:
return #I want this to break out.
else:
lost = True
Where I have a return, I've tried both return and break, and each gives me the following error: SyntaxError: invalid syntax. Any ideas? What is the best way to handle this?
| [
"You've omitted the ] from the list slice. But what is the code trying to achieve, anyway? \nfoo[ : len( foo ) ] == foo\n\nalways! \nI assume this isn't the complete code -- if so, where is wordlist defined? (is it a list? -- it's much faster to test containment for a set.)\n",
"def detLoser(frag, a):\n\n word... | [
6,
2
] | [] | [] | [
"breakpoints",
"python"
] | stackoverflow_0003622135_breakpoints_python.txt |