| Question | Q_Score | Users Score | Score | Data Science and Machine Learning | is_accepted | A_Id | Web Development | ViewCount | Available Count | System Administration and DevOps | Networking and APIs | Q_Id | Answer | Database and SQL | GUI and Desktop Applications | Python Basics and Environment | Title | AnswerCount | Tags | Other | CreationDate |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Sorry for my English in advance.
I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database running locally on one node. Each row has 10 columns, and I insert them into only one column family.
With one thread, that operation took around 3 min. But I would like to do the same ... | 0 | 0 | 0 | 1 | false | 6,078,703 | 0 | 1,686 | 4 | 0 | 0 | 5,950,427 | You might consider Redis. Its single-node throughput is supposed to be faster. It's different from Cassandra though, so whether or not it's an appropriate option would depend on your use case. | 1 | 0 | 0 | Insert performance with Cassandra | 4 | python,multithreading,insert,cassandra | 0 | 2011-05-10T13:02:00.000 |
Sorry for my English in advance.
I am a beginner with Cassandra and its data model. I am trying to insert one million rows into a Cassandra database running locally on one node. Each row has 10 columns, and I insert them into only one column family.
With one thread, that operation took around 3 min. But I would like to do the same ... | 0 | 0 | 0 | 1 | false | 8,491,215 | 0 | 1,686 | 4 | 0 | 0 | 5,950,427 | The time taken doubled because you inserted twice as much data. Is it possible that you are I/O bound? | 1 | 0 | 0 | Insert performance with Cassandra | 4 | python,multithreading,insert,cassandra | 0 | 2011-05-10T13:02:00.000 |
I'm using this JavaScript library (http://valums.com/ajax-upload/) to upload files to a Tornado web server, but I don't know how to get the file content. The JavaScript library uploads using XHR, so I assume I have to read the raw POST data to get the file content. But I don't know how to do it with Tornado. Their ... | 3 | 2 | 1.2 | 0 | true | 5,989,216 | 1 | 1,836 | 1 | 1 | 0 | 5,983,032 | I got the answer.
I need to use self.request.body to get the raw post data.
I also need to pass in the correct _xsrf token, otherwise tornado will fire a 403 exception.
So that's about it. | 1 | 0 | 0 | asynchronous file upload with ajaxupload to a tornado web server | 1 | python,file-upload,tornado,ajax-upload | 0 | 2011-05-12T19:00:00.000 |
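For readers wanting the concrete shape of that answer, here is a minimal, hypothetical sketch; the handler name, route, and the qqfile argument are illustrative assumptions, not taken from the answer:

```python
import tornado.web

class UploadHandler(tornado.web.RequestHandler):
    def post(self):
        raw = self.request.body                               # raw POST payload sent by the XHR upload
        filename = self.get_argument("qqfile", "upload.bin")  # assumed query argument carrying the name
        with open(filename, "wb") as f:
            f.write(raw)
        self.write('{"success": true}')

application = tornado.web.Application([(r"/upload", UploadHandler)])
```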
Is there a function in Python that checks if the returned value is None and if it is, allows you to set it to another value like the IFNULL function in MySQL? | 15 | -2 | -0.049958 | 0 | false | 16,633,853 | 0 | 35,088 | 2 | 0 | 0 | 5,987,371 | Since this question is now over 2 years old I guess this is more for future references :)
What I like to do is max('', mightBeNoneVar) or max(0, mightBeNoneVar) (depending on the context).
More elaborate example:
print max('', col1).ljust(width1) + ' ==> '+ max('', col2).ljust(width2) | 1 | 0 | 1 | Python equivalent for MySQL's IFNULL | 8 | python | 0 | 2011-05-13T04:54:00.000 |
Is there a function in Python that checks if the returned value is None and if it is, allows you to set it to another value like the IFNULL function in MySQL? | 15 | 1 | 0.024995 | 0 | false | 50,119,942 | 0 | 35,088 | 2 | 0 | 0 | 5,987,371 | nvl(v1,v2) will return v1 if not null otherwise it returns v2.
nvl = lambda a,b: a or b | 1 | 0 | 1 | Python equivalent for MySQL's IFNULL | 8 | python | 0 | 2011-05-13T04:54:00.000 |
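A small, hedged illustration of both idioms (names are made up); note that the `or` trick also replaces falsy values such as 0 or '', not only None:

```python
def ifnull(value, fallback):
    # closest match to MySQL's IFNULL: only None is replaced
    return fallback if value is None else value

print(ifnull(None, 0))   # 0
print(ifnull(42, 0))     # 42
print(0 or 99)           # 99 -- the `a or b` shortcut also swallows 0 and ''
```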
I have been running Davical on a CentOS 5 box for a while now with no problems.
Yesterday however, I installed Trac bug-tracker which eventually forced me to run a full update via Yum which updated a whole heap of packages.
I can't seem to work out exactly what the issue is, and time spent googling didn't seem to bring a... | 6 | 8 | 1.2 | 0 | true | 5,998,204 | 0 | 5,259 | 1 | 0 | 0 | 5,994,297 | I have had this problem before, although with 8.4 instead of 8.1, but the issue is the same, I believe.
A recent minor upgrade of all supported maintenance branches of PostgreSQL introduced the function PinPortal in the server, and made PL/pgSQL use it. So if you use a plpgsql.so from the newer version with a server f... | 1 | 0 | 0 | Could not load library "/usr/lib/pgsql/plpgsql.so" & undefined symbol: PinPortal | 1 | python,postgresql,centos,trac | 0 | 2011-05-13T15:38:00.000 |
What is the recommended way to interact between Python and MySQL? Currently I am using MySQLdb, and I have heard of OurSQL. But I ask myself whether there is a more appropriate way to manage this. | 3 | 0 | 0 | 0 | false | 40,519,201 | 0 | 234 | 1 | 0 | 0 | 6,002,147 | I personally use pymysql, but have heard a lot of people use MySQLdb. Both are very similar in the way they behave and are easily interchangeable. Personally (working as a Python/MySQL QA), I've yet to hear of, let alone work with, OurSQL.
With that said, it honestly depends what you want to accomplish. Python h... | 1 | 0 | 0 | Is there a recommended way for interaction between python and MySQL? | 2 | python,mysql,interaction | 0 | 2011-05-14T13:32:00.000 |
I have setup an Apache server with mod_wsgi, python_sql, mysql and django.
Everything works fine, except the fact that if I make some code changes, they do not take effect immediately, though I think that everything is compiled on the fly when it comes to Python/mod_wsgi.
I have to shut down the server and come back again ... | 4 | 3 | 0.291313 | 0 | false | 6,007,285 | 1 | 768 | 1 | 0 | 0 | 6,006,666 | Just touching the wsgi file always worked for me. | 1 | 0 | 0 | Hot deployment using mod_wsgi,python and django on Apache | 2 | python,django,apache2,mod-wsgi,hotdeploy | 0 | 2011-05-15T05:26:00.000 |
I need to insert rows into PG. One of the fields is a date and time with timestamp; this is the time of the incident, so I cannot use the current_timestamp function of Postgres at the time of insertion. So how can I then insert the time and date which I collected before into a pg row, in the same format as it would have been c... | 64 | -4 | -1 | 0 | false | 18,624,640 | 0 | 190,399 | 1 | 0 | 0 | 6,018,214 | Just use
now()
or
CURRENT_TIMESTAMP
I prefer the latter as I like not having additional parentheses, but that's just personal preference. | 1 | 0 | 1 | How to insert current_timestamp into Postgres via python | 7 | python,postgresql,datetime | 0 | 2011-05-16T13:34:00.000 |
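Since the question is really about inserting a previously collected time rather than the server clock, here is a hedged psycopg2 sketch; the table and column names are invented:

```python
import datetime
import psycopg2

incident_time = datetime.datetime.now()        # captured earlier, when the incident happened
conn = psycopg2.connect("dbname=test user=postgres")
with conn, conn.cursor() as cur:               # the connection context manager commits on success
    cur.execute(
        "INSERT INTO incidents (occurred_at) VALUES (%s)",
        (incident_time,),                      # psycopg2 adapts datetime objects to timestamp
    )
```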
I'm trying to run a python script using Python 2.6.4. The hosting company has 2.4 installed, so I compiled my own 2.6.4 on a similar server and then moved the files over into ~/opt/python. That part seems to be working fine.
Anyhow, when I run the script below, I am getting ImportError: No module named _sqlite3 and I'm ... | 2 | 0 | 0 | 0 | false | 6,026,507 | 0 | 1,962 | 1 | 0 | 0 | 6,026,485 | In general, the first thing to do is to ask your host. It seems a bit odd that SQLite is not installed (or installed properly). So they'll likely fix it quite fast if you ask them. | 1 | 0 | 0 | How can I get sqlite working on a shared hosting server? | 4 | python,linux,unix,sqlite | 0 | 2011-05-17T05:10:00.000 |
I need to have some references in my table and a bunch of "deferrable initially deferred" modifiers, but I can't find a way to make this work in the default generated Django code.
Is it safe to create the table manually and still use Django models? | 2 | 2 | 0.132549 | 0 | false | 6,053,509 | 1 | 112 | 1 | 0 | 0 | 6,053,426 | Yes.
I don't see why not, but that would be most unconventional and breaking convention usually leads to complications down the track.
Describe the problem you think it will solve and perhaps someone can offer a more conventional solution. | 1 | 0 | 0 | Is it safe to write your own table creation SQL for use with Django, when the generated tables are not enough? | 3 | python,sql,django,postgresql | 0 | 2011-05-19T03:30:00.000 |
I am creating software with a user + password. After authentication, the user can access some semi-public services, but also encrypt some files that only the user can access.
The user must be stored as is, without modification, if possible. After auth, the user and the password are kept in memory as long as the softwa... | 10 | 2 | 0.197375 | 0 | false | 6,058,858 | 0 | 3,307 | 1 | 0 | 0 | 6,058,019 | If you use a different salt for each user, you must store it somewhere (ideally in a different place). If you use the same salt for every user, you can hardcode it in your app, but it can be considered less secure.
If you don't keep the salt, you will not be able to match a given password against the one in your datab... | 1 | 0 | 0 | Storing user and password in a database | 2 | python,security,passwords | 0 | 2011-05-19T11:37:00.000 |
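A rough sketch of per-user salting with only the standard library; the iteration count and salt size are arbitrary choices here, and pbkdf2_hmac needs a reasonably recent Python:

```python
import hashlib
import os

def hash_password(password, salt=None):
    if salt is None:
        salt = os.urandom(16)                  # fresh random salt for each user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest                        # store both next to the user record

def verify_password(password, salt, stored_digest):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000) == stored_digest
```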
The sqlite docs says that using the pragma default_cache_size is deprecated. I looked, but I couldn't see any explanation for why. Is there a reason for this? I'm working on an embedded python program, and we open and close connections a lot. Is the only alternative to use the pragma cache_size on every database connec... | 5 | 2 | 1.2 | 0 | true | 6,175,144 | 0 | 890 | 1 | 0 | 0 | 6,062,999 | As Firefox is massively using SQLite I wouldn't be surprised if this request came from their camp to prevent any kind of 3rd party interference (e.g. "trashing" with large/small/invalid/obscure values) by this kind of pragma propagating through all database connections
Hence, my strong belief is that there is no altern... | 1 | 0 | 0 | Alternative to deprecated sqlite pragma "default_cache_size" | 1 | python,sqlite | 0 | 2011-05-19T18:10:00.000 |
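A short sketch of the per-connection alternative the question mentions (path and page count are placeholders):

```python
import sqlite3

def open_db(path, cache_pages=2000):
    conn = sqlite3.connect(path)
    # cache_size only affects this connection, which is why it has to be reissued each time
    conn.execute("PRAGMA cache_size = %d" % cache_pages)
    return conn

conn = open_db("app.db")
```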
I have a website where people post comments, pictures, and other content. I want to add a feature that users can like/unlike these items.
I use a database to store all the content.
There are a few approaches I am looking at:
Method 1:
Add a 'like_count' column to the table, and increment it whenever someone likes an i... | 0 | 1 | 0.066568 | 0 | false | 6,067,968 | 1 | 104 | 2 | 0 | 0 | 6,067,919 | You will actually only need the user_likes table. The like_count is calculated from that table. You will only need to store that if you need to gain performance, but since you're using memcached, It may be a good idea to not store the aggregated value in the database, but store it only in memcached. | 1 | 0 | 0 | What would be a good strategy to implement functionality similar to facebook 'likes'? | 3 | python,architecture | 1 | 2011-05-20T05:46:00.000 |
I have a website where people post comments, pictures, and other content. I want to add a feature that users can like/unlike these items.
I use a database to store all the content.
There are a few approaches I am looking at:
Method 1:
Add a 'like_count' column to the table, and increment it whenever someone likes an i... | 0 | 1 | 0.066568 | 0 | false | 6,067,953 | 1 | 104 | 2 | 0 | 0 | 6,067,919 | One relation table that does a many-to-many mapping between user and item should do the trick. | 1 | 0 | 0 | What would be a good strategy to implement functionality similar to facebook 'likes'? | 3 | python,architecture | 1 | 2011-05-20T05:46:00.000 |
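A neutral sketch of the relation table both answers describe; sqlite3 is used purely for illustration, and the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE item (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE user_like (
        user_id INTEGER NOT NULL,
        item_id INTEGER NOT NULL REFERENCES item(id),
        PRIMARY KEY (user_id, item_id)   -- one like per user per item
    );
""")
# like_count is then derived: SELECT COUNT(*) FROM user_like WHERE item_id = ?
```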
I'm very new to Python and I have Python 3.2 installed on a Win 7-32 workstation. Trying to connect to MSSQLServer 2005 Server using adodbapi-2.4.2.2, the latest update to that package.
The code/connection string looks like this:
conn = adodbapi.connect('Provider=SQLNCLI.1;Integrated Security=SSPI;Persist Security Info... | 6 | 2 | 0.132549 | 0 | false | 21,480,454 | 0 | 8,199 | 1 | 0 | 0 | 6,086,341 | I had the same problem, and I tracked it down to a failure to load win32com.pyd, because of some system DLLs that was not in the "dll load path", such as msvcp100.dll
I solved the problem by copying a lot of these DLLs (probably too many) into C:\WinPython-64bit-3.3.3.2\python-3.3.3.amd64\Lib\site-packages\win32 | 1 | 0 | 0 | Connecting to SQLServer 2005 with adodbapi | 3 | python,database,sql-server-2005,adodbapi | 0 | 2011-05-22T06:03:00.000 |
My company has decided to implement a datamart using [Greenplum] and I have the task of figuring out how to go on about it. A ballpark figure of the amount of data to be transferred from the existing [DB2] DB to the Greenplum DB is about 2 TB.
I would like to know :
1) Is the Greenplum DB the same as vanilla [PostgresS... | 0 | 0 | 0 | 0 | false | 7,550,497 | 0 | 2,294 | 2 | 0 | 0 | 6,110,384 | Many of Greenplum's utilities are written in python and the current DBMS distribution comes with python 2.6.2 installed, including the pygresql module which you can use to work inside the GPDB.
For data transfer into greenplum, I've written python scripts that connect to the source (Oracle) DB using cx_Oracle and then ... | 1 | 0 | 0 | Transferring data from a DB2 DB to a greenplum DB | 4 | python,postgresql,db2,datamart,greenplum | 0 | 2011-05-24T12:28:00.000 |
My company has decided to implement a datamart using [Greenplum] and I have the task of figuring out how to go on about it. A ballpark figure of the amount of data to be transferred from the existing [DB2] DB to the Greenplum DB is about 2 TB.
I would like to know :
1) Is the Greenplum DB the same as vanilla [PostgresS... | 0 | 0 | 0 | 0 | false | 23,668,974 | 0 | 2,294 | 2 | 0 | 0 | 6,110,384 | Generally, it is really slow if you use SQL insert or merge to import big bulk data.
The recommended way is to use the external tables you define to use file-based, web-based or gpfdist protocol hosted files.
And also greenplum has a utility named gpload, which can be used to define your transferring jobs, like source,... | 1 | 0 | 0 | Transferring data from a DB2 DB to a greenplum DB | 4 | python,postgresql,db2,datamart,greenplum | 0 | 2011-05-24T12:28:00.000 |
I have a lot of objects which form a network by keeping references to other objects. All objects (nodes) have a dict which is their properties.
Now I'm looking for a fast way to store these objects (in a file?) and reload all of them into memory later (I don't need random access). The data is about 300MB in memory whic... | 2 | 0 | 0 | 0 | false | 6,130,718 | 0 | 252 | 1 | 0 | 0 | 6,128,458 | Perhaps you could set up some layer of indirection where the objects are actually held within, say, another dictionary, and an object referencing another object will store the key of the object being referenced and then access the object through the dictionary. If the object for the stored key is not in the dictionary,... | 1 | 0 | 1 | Store and load a large number linked objects in Python | 3 | python,persistent-storage | 0 | 2011-05-25T17:31:00.000 |
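A rough sketch of that indirection idea (class and attribute names are invented): nodes hold keys rather than direct references, and a registry dict resolves them, so the registry can be pickled or loaded as a whole or in pieces.

```python
import pickle

registry = {}

class Node:
    def __init__(self, key, properties=None):
        self.key = key
        self.properties = properties or {}
        self.neighbour_keys = []                 # keys, not direct object references

    def neighbours(self):
        return [registry[k] for k in self.neighbour_keys]

def save(path):
    with open(path, "wb") as f:
        pickle.dump(registry, f, protocol=pickle.HIGHEST_PROTOCOL)
```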
I have written a very small web-based survey using CGI with Python (this is my first web app). The questions are extracted from a MySQL database table and the results are supposed to be saved in the same database. I have created the database along with its table locally. My app works fine on my local computer (localho... | 0 | 0 | 0 | 0 | false | 6,139,936 | 0 | 513 | 1 | 0 | 0 | 6,139,777 | You can write a simple script that does import MySQLdb and catches any errors, to see if the required package is installed. If this fails, you can ask the hosting provider to install the package for you, typically via a ticket.
The hosting providers typically also provide URLs to connect to the MySQL tables they provision for you, and... | 1 | 0 | 0 | Uploading a mysql database to a webserver supporting python | 3 | python,mysql,database-connection,cpanel | 0 | 2011-05-26T13:59:00.000 |
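The check the answer suggests is just a guarded import, roughly:

```python
try:
    import MySQLdb
except ImportError as exc:
    print("MySQLdb is not available on this host:", exc)
else:
    print("MySQLdb %s is installed" % MySQLdb.__version__)
```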
I'm building a real-time service, so my real-time data needs to be stored in memcached before being moved to the DB (to avoid reading/writing the MySQL DB too much). I want to move data to the MySQL DB when certain events occur, e.g. before data expires or becomes least-recently-used (LRU). What is a solution for my problem? My system used m... | 1 | 1 | 0.197375 | 0 | false | 6,146,042 | 1 | 329 | 1 | 0 | 0 | 6,143,748 | Memcached is not a persistent store, so if you need your data to be durable at all then you will need to store them in a persistent store immediately.
So you need to put them somewhere - possibly a MySQL table - as soon as the data arrive, and make sure they are fsync'd to disc. Storing them in memcached as well only s... | 1 | 0 | 0 | Can auto transfer data from memcached to mysql DB? | 1 | python,mysql,django,memcached | 0 | 2011-05-26T19:06:00.000 |
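A hedged sketch of the write-through pattern the answer implies: persist to MySQL first, then mirror into memcached purely as a read accelerator. The client libraries, credentials and table names are assumptions:

```python
import MySQLdb
import memcache

db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="events")
mc = memcache.Client(["127.0.0.1:11211"])

def record_event(event_id, payload):
    cur = db.cursor()
    cur.execute("INSERT INTO events (id, payload) VALUES (%s, %s)", (event_id, payload))
    db.commit()                                # durable first
    mc.set("event:%s" % event_id, payload)     # cache second
```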
Can someone please point me in the right direction of how I can connect to MS SQL Server with Python? What I want to do is read a text file, extract some values and then insert the values from the text file into a table in my Sql Server database. I am using Python 3.1.3, and it seems some of the modules I have come acr... | 3 | 1 | 0.066568 | 0 | false | 6,193,973 | 0 | 8,530 | 1 | 0 | 0 | 6,154,069 | I found a module called CEODBC that I was able to use with Python 3 after doing some research. It looks like they will also be releasing a Python3 compatible version of PYODBC soon. Thanks for all your help. | 1 | 0 | 0 | Connecting to Sql Server with Python 3 in Windows | 3 | python,sql,python-3.x,database-connection,python-module | 0 | 2011-05-27T14:59:00.000 |
I have to write up a python program that communicates with a MySQL database to write in data... I have done the code, however it does not enter all the data as it says there are duplicates... is there a way to just include them? | 0 | 0 | 0 | 0 | false | 6,162,872 | 0 | 38 | 1 | 0 | 0 | 6,162,827 | You should provide more information like your SQL and database schema. It sounds like you are trying to insert items with the same primary key. If you remove the primary key you should be able to insert the data, or change the insert statement to not insert the field which is the primary key. | 1 | 0 | 0 | Is there a way to write into python code a command to include duplicate entries into a My SQL database | 1 | python,sql | 0 | 2011-05-28T16:14:00.000 |
I am planning to develop a web-based application which could crawl wikipedia for finding relations and store it in a database. By relations, I mean searching for a name say,'Bill Gates' and find his page, download it and pull out the various information from the page and store it in a database. Information may include ... | 2 | 2 | 0.132549 | 0 | false | 6,171,789 | 0 | 3,092 | 1 | 0 | 0 | 6,171,764 | You mention Python and Open Source, so I would investigate the NLTK (Natural Language Toolkit). Text mining and natural language processing is one of those things that you can do a lot with a dumb algorithm (eg. Pattern matching), but if you want to go a step further and do something more sophisticated - ie. Trying to ... | 1 | 0 | 0 | Mining Wikipedia for mapping relations for text mining | 3 | python,pattern-matching,data-mining,wikipedia,text-mining | 0 | 2011-05-30T02:24:00.000 |
The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, "john ""," doe". I have no control over the data I receive so I'm unable to stop people ... | 2 | 7 | 1 | 0 | false | 6,172,230 | 0 | 3,332 | 5 | 0 | 0 | 6,172,123 | This may not be a usable answer but someone needs to say it. You shouldn't have to do this. CSV is a file format with an expected data encoding. If someone is supplying you a CSV file then it should be delimited and escaped properly, otherwise its a corrupted file and you should reject it. Make the supplier re-export t... | 1 | 0 | 0 | What is an easy way to clean an unparsable csv file | 6 | php,python,mysql,csv | 0 | 2011-05-30T03:54:00.000 |
The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, "john ""," doe". I have no control over the data I receive so I'm unable to stop people ... | 2 | 0 | 0 | 0 | false | 6,172,324 | 0 | 3,332 | 5 | 0 | 0 | 6,172,123 | First of all - find all kinds of mistake. And then just replace them with empty strings. Just do it! If you need this corrupted data - only you can recover it. | 1 | 0 | 0 | What is an easy way to clean an unparsable csv file | 6 | php,python,mysql,csv | 0 | 2011-05-30T03:54:00.000 |
The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, "john ""," doe". I have no control over the data I receive so I'm unable to stop people ... | 2 | 0 | 0 | 0 | false | 6,172,154 | 0 | 3,332 | 5 | 0 | 0 | 6,172,123 | MySQL import has many parameters including escape characters. Given the example, I think the quotes are escaped by putting a quote in the front. So an import with the escape character set to '"' would work. | 1 | 0 | 0 | What is an easy way to clean an unparsable csv file | 6 | php,python,mysql,csv | 0 | 2011-05-30T03:54:00.000 |
The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, "john ""," doe". I have no control over the data I receive so I'm unable to stop people ... | 2 | 0 | 0 | 0 | false | 6,172,145 | 0 | 3,332 | 5 | 0 | 0 | 6,172,123 | That's a really tough issue. I don't know of any real way to solve it, but maybe you could try splitting on ",", cleaning up the items in the resulting array (unicorns :) ) and then re-joining the row? | 1 | 0 | 0 | What is an easy way to clean an unparsable csv file | 6 | php,python,mysql,csv | 0 | 2011-05-30T03:54:00.000 |
The csv file was created correctly but the name and address fields contain every piece of punctuation there is available. So when you try to import into mysql you get parsing errors. For example the name field could look like this, "john ""," doe". I have no control over the data I receive so I'm unable to stop people ... | 2 | 1 | 0.033321 | 0 | false | 6,172,224 | 0 | 3,332 | 5 | 0 | 0 | 6,172,123 | You don't say if you have control over the creation of the CSV file. I am assuming you do, as if not, the CSV file is corrupt and cannot be recovered without human intervention, or some very clever algorithms to "guess" the correct delimiters vs. the user-entered ones.
Convert user entered tabs (assuming there are some)... | 1 | 0 | 0 | What is an easy way to clean an unparsable csv file | 6 | php,python,mysql,csv | 0 | 2011-05-30T03:54:00.000 |
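For the "fix it at the source" advice, a tiny sketch of re-exporting cleanly with the csv module (field values are invented):

```python
import csv

rows = [['john ","', 'doe'], ['plain', 'row']]
with open("clean.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)  # quotes every field and doubles embedded quotes
    writer.writerows(rows)
```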
I'm writing the server for a Javascript app that has a syncing feature. Files and directories being created and modified by the client need to be synced to the server (the same changes made on the client need to be made on the server, including deletes).
Since every file is on the server, I'm debating the need for a My... | 1 | 0 | 0 | 0 | false | 6,181,167 | 0 | 216 | 1 | 0 | 0 | 6,180,732 | In my opinion, the only real way to be sure is to build a test system and compare the space requirements. It shouldn't take that long to generate some random data programatically. One might think the file system would be more efficient, but databases can and might compress the data or deduplicate it, or whatever. Don't... | 1 | 0 | 0 | Memory usage of file versus database for simple data storage | 3 | python,django,memory | 0 | 2011-05-30T21:04:00.000 |
I need a job scheduler (a library) that queries a db every 5 minutes and, based on time, triggers events which have expired and rerun on failure.
It should be in Python or PHP.
I researched and came up with Advanced Python Scheduler but it is not appropriate because it only schedules the jobs in its job store. Instead,... | 0 | 1 | 1.2 | 0 | true | 6,184,556 | 1 | 710 | 1 | 0 | 0 | 6,184,491 | Here's a possible solution
- a script, either in php or python performing your database tasks
- a scheduler : Cron for linux, or the windows task scheduler ; where you set the frequency of your jobs.
I'm using this solution for multiple projects.
Very easy to set up. | 1 | 0 | 0 | Database Based Job scheduler | 2 | php,python,database | 0 | 2011-05-31T07:50:00.000 |
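A hypothetical skeleton of the cron-driven script described above, run for example every 5 minutes from a crontab entry such as */5 * * * * python /path/to/check_jobs.py; the table and column names are placeholders:

```python
import MySQLdb

def trigger(job_id):
    print("running job", job_id)               # application-specific work goes here

def run_expired_jobs():
    db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="jobs")
    cur = db.cursor()
    cur.execute("SELECT id FROM jobs WHERE run_at <= NOW() AND state = 'pending'")
    for (job_id,) in cur.fetchall():
        trigger(job_id)
    db.close()

if __name__ == "__main__":
    run_expired_jobs()
```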
When accessing a MySQL database on low level using python, I use the MySQLdb module.
I create a connection instance, then a cursor instance then I pass it to every function, that needs the cursor.
Sometimes I have many nested function calls, all desiring the mysql_cursor. Would it hurt to initialise the connection as g... | 2 | 1 | 1.2 | 0 | true | 6,191,102 | 0 | 301 | 1 | 0 | 0 | 6,190,982 | I think that database cursors are scarce resources, so passing them around can limit your scalability and cause management issues (e.g. which method is responsible for closing the connection)?
I'd recommend pooling connections and keeping them open for the shortest time possible. Check out the connection, perform the ... | 1 | 0 | 0 | What is the best way to handle connections (e.g. to mysql server using MySQLdb) in python, needed by multiple nested functions? | 1 | python,connection,global-variables | 0 | 2011-05-31T17:03:00.000 |
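One loose way to express "check out, use briefly, return" without a module-level global is a context manager, sketched here with placeholder connection parameters:

```python
import contextlib
import MySQLdb

@contextlib.contextmanager
def get_cursor():
    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
    try:
        yield conn.cursor()
        conn.commit()
    finally:
        conn.close()

def count_users():
    with get_cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM users")
        return cur.fetchone()[0]
```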
I am trying to push user account data from an Active Directory to our MySQL-Server. This works flawlessly but somehow the strings end up showing an encoded version of umlauts and other special characters.
The Active Directory returns a string using this sample format: M\xc3\xbcller
This actually is the UTF-8 encoding f... | 37 | 0 | 0 | 0 | false | 7,720,395 | 0 | 77,413 | 1 | 0 | 0 | 6,202,726 | and db.set_character_set('utf8'), imply that
use_unicode=True ? | 1 | 0 | 0 | Writing UTF-8 String to MySQL with Python | 8 | python,unicode,utf-8 | 0 | 2011-06-01T14:23:00.000 |
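A sketch of a fully UTF-8 MySQLdb connection combining both settings; the credentials and table are placeholders:

```python
import MySQLdb

db = MySQLdb.connect(
    host="localhost", user="app", passwd="secret", db="directory",
    charset="utf8",       # connection character set
    use_unicode=True,     # return unicode objects instead of raw byte strings
)
cur = db.cursor()
cur.execute("INSERT INTO users (surname) VALUES (%s)", (u"M\xfcller",))
db.commit()
```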
I'd like to be able to include python code snippets in Excel (ideally, in a nice format -- all colors/formats should be kept the same).
What would be the best way to go about it?
EDIT: I just want to store python code in an Excel spreadsheet for an easy overview -- I am not going to run it -- just want it to be nicely ... | 2 | 1 | 0.099668 | 0 | false | 6,220,700 | 0 | 486 | 1 | 0 | 0 | 6,216,278 | While Excel itself doesnot support other scripting Langauges than VBA, the open source OpenOffice and LibreOffice packages - which include a spreadsheet - can be scriptable with Python. Still, they won't allow Python code to be pasted on teh cells out of the box - but it is possible to write Python code which can act o... | 1 | 0 | 1 | Include Python Code In Excel? | 2 | python,excel | 0 | 2011-06-02T14:58:00.000 |
What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? I am using the Pyramid framework with SQLAlchemy to communicate to a database.
Thanks! | 6 | 8 | 1.2 | 0 | true | 6,224,703 | 0 | 15,473 | 3 | 0 | 0 | 6,222,381 | Well conceptually you can store a list as a bunch of rows in a table using a one-to-many relation, or you can focus on how to store a list in a particular database backend. For example postgres can store an array in a particular cell using the sqlalchemy.dialects.postgres.ARRAY data type which can serialize a python ar... | 1 | 0 | 1 | The best way to store a python list to a database? | 4 | python,database,sqlalchemy,pyramid | 0 | 2011-06-03T02:22:00.000 |
What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? I am using the Pyramid framework with SQLAlchemy to communicate to a database.
Thanks! | 6 | 0 | 0 | 0 | false | 40,277,177 | 0 | 15,473 | 3 | 0 | 0 | 6,222,381 | sqlalchemy.types.PickleType can store list | 1 | 0 | 1 | The best way to store a python list to a database? | 4 | python,database,sqlalchemy,pyramid | 0 | 2011-06-03T02:22:00.000 |
What would be the best way of storing a python list of numbers (such as [4, 7, 10, 39, 91]) to a database? I am using the Pyramid framework with SQLAlchemy to communicate to a database.
Thanks! | 6 | 0 | 0 | 0 | false | 6,224,600 | 0 | 15,473 | 3 | 0 | 0 | 6,222,381 | Use string(Varchar).
From Zen of Python: "Simple is better than complex." | 1 | 0 | 1 | The best way to store a python list to a database? | 4 | python,database,sqlalchemy,pyramid | 0 | 2011-06-03T02:22:00.000 |
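The two column types mentioned in the answers look roughly like this in a declarative model; the model and field names are invented:

```python
from sqlalchemy import Column, Integer, PickleType
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Measurement(Base):
    __tablename__ = "measurements"
    id = Column(Integer, primary_key=True)
    values_array = Column(ARRAY(Integer))   # PostgreSQL-only: stored as a native array
    values_blob = Column(PickleType)        # portable: the list is pickled into a binary column
```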
I'm writing a small Python CGI script that captures the User-Agent, parses the OS, browser name and version, maps it to a database, and returns a device grade (integer). Since this is only one table, it's a pretty simple operation, but I will likely have substantial traffic (10,000+ hits a day, potentially scaling muc... | 2 | 2 | 0.132549 | 0 | false | 6,231,573 | 0 | 776 | 1 | 0 | 0 | 6,230,793 | It depends on your use-case. Are you planning on caching the records temporarily or do you want the records to persist? If the former, Redis would be the best choice because of its speed. If the latter, it would be better to choose either CouchDB or MongoDB because they can handle large datasets. | 1 | 0 | 0 | Recommendations for a noSQL database for use with Python | 3 | python,nosql | 0 | 2011-06-03T17:53:00.000 |
MongoDB performs really well compared to our hacking of MySQL in de-normalized way. After database migration, I realized that we might need some server-side procedures to invoke after/before database manipulation. Some sorta 3-tier architecture. I am just wondering the possible and easy way to prototype it. Are there a... | 2 | 0 | 0 | 0 | false | 19,877,756 | 0 | 1,855 | 2 | 0 | 0 | 6,273,573 | FWIW, one of the messages in the web UI seems to imply that some hooks do exist ("adding sharding hook to enable versioning and authentication to remote servers"), but they might be only avilable within the compiled binaries, not to clients. | 1 | 0 | 0 | What is suggested way to have server-side hooks over mongodb? | 2 | python,mongodb,hook,server-side,3-tier | 0 | 2011-06-08T02:26:00.000 |
MongoDB performs really well compared to our hacking of MySQL in de-normalized way. After database migration, I realized that we might need some server-side procedures to invoke after/before database manipulation. Some sorta 3-tier architecture. I am just wondering the possible and easy way to prototype it. Are there a... | 2 | 2 | 0.197375 | 0 | false | 6,277,024 | 0 | 1,855 | 2 | 0 | 0 | 6,273,573 | No, there are no features currently available in MongoDB equivalent to hooks or triggers. It'd be best to handle this sort of thing from within your application logic. | 1 | 0 | 0 | What is suggested way to have server-side hooks over mongodb? | 2 | python,mongodb,hook,server-side,3-tier | 0 | 2011-06-08T02:26:00.000 |
I have a script with several functions that all need to make database calls. I'm trying to get better at writing clean code rather than just throwing together scripts with horrible style. What is generally considered the best way to establish a global database connection that can be accessed anywhere in the script but ... | 3 | 0 | 0 | 0 | false | 6,282,794 | 0 | 317 | 1 | 0 | 0 | 6,281,732 | Use a model system/ORM system. | 1 | 0 | 0 | Proper way to establish database connection in python | 2 | python,database,coding-style,mysql-python | 0 | 2011-06-08T15:58:00.000 |
I'm developing Python code that uses SQLite in a multi-threaded program. A remote host calls some XML-RPC functions and new threads are created. Each function, running in a new thread, uses SQLite for either inserting data into or reading data from the database.
My problem is that when I call the server more th... | 0 | 2 | 0.197375 | 0 | false | 6,289,986 | 0 | 1,212 | 2 | 0 | 0 | 6,289,821 | If you read the SQLite documentation (http://www.sqlite.org/threadsafe.html), you'll see that it says:
SQLite supports three different threading modes:
Single-thread. In this mode, all mutexes are disabled and SQLite is unsafe to use in more than a single thread at once.
Multi-thread. In this mode, SQLite can ... | 1 | 0 | 1 | Segmentation Fault in Python multi-threaded Sqlite use! | 2 | python,multithreading,sqlite | 0 | 2011-06-09T08:04:00.000 |
I'm developing Python code that uses SQLite in a multi-threaded program. A remote host calls some XML-RPC functions and new threads are created. Each function, running in a new thread, uses SQLite for either inserting data into or reading data from the database.
My problem is that when I call the server more th... | 0 | 1 | 0.099668 | 0 | false | 6,313,973 | 0 | 1,212 | 2 | 0 | 0 | 6,289,821 | My APSW module is threadsafe and you can use that. The standard Python SQLite cannot be safely used concurrently across multiple threads. | 1 | 0 | 1 | Segmentation Fault in Python multi-threaded Sqlite use! | 2 | python,multithreading,sqlite | 0 | 2011-06-09T08:04:00.000 |
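A common workaround, sketched here, is to give each thread its own sqlite3 connection instead of sharing one across threads:

```python
import sqlite3
import threading

_local = threading.local()

def get_conn(path="app.db"):
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect(path)   # one connection per thread
    return _local.conn
```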
Suppose that I have a huge SQLite file (say, 500[MB]) stored in Amazon S3.
Can a python script that is run on a small EC2 instance directly access and modify that SQLite file? or must I first copy the file to the EC2 instance, change it there and then copy over to S3?
Will the I/O be efficient?
Here's what I am tryin... | 4 | 0 | 0 | 0 | false | 38,705,012 | 0 | 7,758 | 2 | 0 | 0 | 6,301,795 | Amazon EFS can be shared among ec2 instances. It's a managed NFS share. SQLITE will still lock the whole DB on write.
The SQLITE Website does not recommend NFS shares, though. But depending on the application you can share the DB read-only among several ec2 instances and store the results of your processing somewhere ... | 1 | 0 | 0 | Amazon EC2 & S3 When using Python / SQLite? | 5 | python,sqlite,amazon-s3,amazon-ec2 | 0 | 2011-06-10T03:54:00.000 |
Suppose that I have a huge SQLite file (say, 500[MB]) stored in Amazon S3.
Can a python script that is run on a small EC2 instance directly access and modify that SQLite file? or must I first copy the file to the EC2 instance, change it there and then copy over to S3?
Will the I/O be efficient?
Here's what I am tryin... | 4 | 2 | 0.07983 | 0 | false | 6,301,870 | 0 | 7,758 | 2 | 0 | 0 | 6,301,795 | Since S3 cannot be directly mounted, your best bet is to create an EBS volume containing the SQLite file and work directly with the EBS volume from another (controller) instance. You can then create snapshots of the volume, and archive it into S3. Using a tool like boto (Python API), you can automate the creation of ... | 1 | 0 | 0 | Amazon EC2 & S3 When using Python / SQLite? | 5 | python,sqlite,amazon-s3,amazon-ec2 | 0 | 2011-06-10T03:54:00.000 |
Suppose that I have a huge SQLite file (say, 500[MB]). Can 10 different python instances access this file at the same time and update different records of it?. Note, the emphasis here is on different records.
For example, suppose that the SQLite file has say 1M rows:
instance 1 will deal with (and update) rows 0 - 1000... | 2 | 4 | 1.2 | 0 | true | 6,301,903 | 0 | 1,234 | 1 | 0 | 0 | 6,301,816 | Updated, thanks to André Caron.
You can do that, but only read operations support concurrency in SQLite, since the entire database is locked on any write operation. The SQLite engine will return SQLITE_BUSY status in this situation (if it exceeds the default timeout for access). Also consider that this heavily depends on how goo... | 1 | 0 | 1 | SQLite Concurrency with Python? | 2 | python,sqlite,concurrency | 0 | 2011-06-10T03:58:00.000 |
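A small sketch of raising the busy timeout so writers wait for the lock instead of failing immediately with SQLITE_BUSY:

```python
import sqlite3

# wait up to 30 seconds for a competing writer to release the database lock
conn = sqlite3.connect("shared.db", timeout=30)
```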
I have the Collective Intelligence book, but I'm not sure how it can be applied in practice.
Let's say I have a PHP website with a MySQL database. Users can insert articles with a title and content into the database. For the sake of simplicity, we just compare the title.
How to Make Coffee?
15 Things About Coffee.
The Big Question.
H... | 7 | 0 | 0 | 0 | false | 47,667,603 | 0 | 21,630 | 1 | 0 | 0 | 6,302,184 | This can be simply achieved by using wildcards in SQL queries. If you have larger texts and the wildcard seems to be unable to capture the middle part of text then check if the substring of one matches the other. I hope this helps.
BTW, your question title asks about implementing recommendation system and the question ... | 1 | 0 | 0 | How to Implement A Recommendation System? | 3 | php,python,mysql,recommendation-engine | 0 | 2011-06-10T05:05:00.000 |
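A toy illustration of the wildcard matching the answer refers to (MySQLdb, placeholder schema):

```python
import MySQLdb

db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="articles")
cur = db.cursor()
keyword = "Coffee"
cur.execute("SELECT title FROM articles WHERE title LIKE %s", ("%" + keyword + "%",))
related = [row[0] for row in cur.fetchall()]
```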
I'm using MySQLdb in Python.
I have an update that may succeed or fail:
UPDATE table
SET reserved_by = PID,
state = "unavailable"
WHERE state = "available"
AND id = REQUESTED_ROW_ID
LIMIT 1;
As you may be able to infer, multiple processes are using the database, and I need processes to be ab... | 0 | 0 | 0 | 0 | false | 6,339,210 | 0 | 576 | 1 | 0 | 0 | 6,337,798 | Turn autocommit on.
The commit operation just "confirms" updates already done. The alternative is rollback, which "undoes" any updates already made. | 1 | 0 | 0 | How do I get the actual cursor.rowcount upon .commit? | 1 | python,mysql,connect,mysql-python,rowcount | 0 | 2011-06-14T00:11:00.000 |
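A hedged sketch of what that looks like with MySQLdb, using cursor.rowcount to see whether this process actually claimed the row; identifiers and credentials are placeholders:

```python
import MySQLdb

pid, row_id = 1234, 42                        # placeholders
db = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="queue")
db.autocommit(True)                           # each statement is committed as it runs
cur = db.cursor()
cur.execute(
    "UPDATE work SET reserved_by=%s, state='unavailable' "
    "WHERE state='available' AND id=%s LIMIT 1",
    (pid, row_id),
)
claimed = (cur.rowcount == 1)                 # 1 means this process won the row
```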
I want to do the following:
Have software written in Python 2.7 running
This software connects to a database (currently a MySQL database)
This software listens for connections on a port X over TCP
When a connection is established, a client x requests or commands something, then the software uses the database to store, rem... | 0 | 1 | 1.2 | 0 | true | 6,338,431 | 0 | 183 | 1 | 0 | 0 | 6,337,812 | What is the problem here? SQLAlchemy maintains a thread-local connection pool... what else do you need? | 1 | 0 | 1 | How to use SQLAlchemy in this context | 1 | python,sqlalchemy | 0 | 2011-06-14T00:14:00.000 |
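A minimal sketch of the pooled engine plus thread-local sessions the answer alludes to; the connection URL and pool numbers are placeholders:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("mysql://app:secret@localhost/appdb", pool_size=10, pool_recycle=3600)
Session = scoped_session(sessionmaker(bind=engine))

def handle_request(data):
    session = Session()       # the same thread always gets the same session
    try:
        # ... query / persist using `session` ...
        session.commit()
    finally:
        Session.remove()      # hand the connection back to the pool
```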
I've been trying to make large changes to a number of Excel workbooks (over 20). Each workbook contains about 16 separate sheets, and I want to write a script that will loop through each workbook and the sheets contained inside and write/modify the cells that I need. I need to keep all string validation, macros, and formattin... | 4 | 0 | 0 | 0 | false | 6,361,909 | 0 | 3,284 | 1 | 0 | 0 | 6,348,011 | You can also use the PyWin32 libraries to script this with Python using typical COM techniques. This lets you use Python to do your processing, and still save all of the extra parts of each workbook that other Python Excel libraries may not handle. | 1 | 0 | 0 | Scripting changes to multiple excel workbooks | 3 | python,vba,scripting,excel | 0 | 2011-06-14T18:09:00.000 |
I'm looking to implement an audit trail for a reasonably complicated relational database, whose schema is prone to change. One avenue I'm thinking of is using a DVCS to track changes.
(The benefits I can imagine are: schemaless history, snapshots of entire system's state, standard tools for analysis, playback and migra... | 2 | 0 | 0 | 0 | false | 6,380,661 | 0 | 386 | 2 | 0 | 0 | 6,380,623 | If the database is not write-heavy (as you say), why not just implement the actual database tables in a way that achieves your goal? For example, add a "version" column. Then never update or delete rows, except for this special column, which you can set to NULL to mean "current," 1 to mean "the oldest known", and go ... | 1 | 0 | 0 | Using DVCS for an RDBMS audit trail | 3 | python,git,mercurial,rdbms,audit-trail | 0 | 2011-06-17T01:55:00.000 |
I'm looking to implement an audit trail for a reasonably complicated relational database, whose schema is prone to change. One avenue I'm thinking of is using a DVCS to track changes.
(The benefits I can imagine are: schemaless history, snapshots of entire system's state, standard tools for analysis, playback and migra... | 2 | 2 | 0.132549 | 0 | false | 6,396,514 | 0 | 386 | 2 | 0 | 0 | 6,380,623 | I've looked into this a little on my own, and here are some comments to share.
Although I had thought using mercurial from python would make things easier, there's a lot of functionality that the DVCS's have that aren't necessary (esp branching, merging). I think it would be easier to simply steal some design decisions... | 1 | 0 | 0 | Using DVCS for an RDBMS audit trail | 3 | python,git,mercurial,rdbms,audit-trail | 0 | 2011-06-17T01:55:00.000 |
I'm trying to figure out how to use Python's MySQLdb. I can do my job with my current knowledge, but I want to use best practices.
Should I close my cursor properly? Doesn't exiting the program close it automatically? (Shouldn't I expect the object destructor to do it anyway?)
Should I create new cursors for every qu... | 2 | 2 | 1.2 | 0 | true | 6,453,159 | 0 | 742 | 1 | 0 | 0 | 6,453,067 | Should I close my cursor properly?
Yes, you should. Explicit is better than implicit.
Should I create new cursors for every query, or is one cursor enough for multiple different queries in the same DB?
This depends on how you use this cursor. For simple tasks it is enough to use one cursor. For some complex... | 1 | 0 | 0 | How to properly use mysqldb in python | 1 | python,cursor,mysql-python | 0 | 2011-06-23T11:08:00.000 |
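A small sketch of the explicit-close habit the answer leans towards, with one short-lived cursor per unit of work (connection details are placeholders):

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
try:
    cur = conn.cursor()
    try:
        cur.execute("SELECT COUNT(*) FROM items")
        print(cur.fetchone()[0])
    finally:
        cur.close()           # close cursors explicitly rather than relying on the destructor
    conn.commit()
finally:
    conn.close()
```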
How can Flask / SQLAlchemy be configured to create a new database connection if one is not present?
I have an infrequently visited Python / Flask server which uses SQLAlchemy. It gets visited every couple of days, and on the first visit it often throws a "MySQL server has gone away" error. Subsequent page views are fin... | 64 | 6 | 1 | 0 | false | 58,821,330 | 1 | 33,654 | 2 | 0 | 0 | 6,471,549 | The pessimistic approach as described by @wim, pool_pre_ping=True, can now be done for Flask-SQLAlchemy using a config var: SQLALCHEMY_POOL_PRE_PING = True | 1 | 0 | 0 | Avoiding "MySQL server has gone away" on infrequently used Python / Flask server with SQLAlchemy | 7 | python,mysql,sqlalchemy,flask,database-connection | 0 | 2011-06-24T17:34:00.000 |
How can Flask / SQLAlchemy be configured to create a new database connection if one is not present?
I have an infrequently visited Python / Flask server which uses SQLAlchemy. It gets visited every couple of days, and on the first visit it often throws a "MySQL server has gone away" error. Subsequent page views are fin... | 64 | 2 | 0.057081 | 0 | false | 51,015,137 | 1 | 33,654 | 2 | 0 | 0 | 6,471,549 | When I encountered this error I was storing a LONGBLOB / LargeBinary image ~1MB in size. I had to adjust the max_allowed_packet config setting in MySQL.
I used mysqld --max-allowed-packet=16M | 1 | 0 | 0 | Avoiding "MySQL server has gone away" on infrequently used Python / Flask server with SQLAlchemy | 7 | python,mysql,sqlalchemy,flask,database-connection | 0 | 2011-06-24T17:34:00.000 |
I have to read incoming data from a barcode scanner using pyserial. Then I have to store the contents into a MySQL database. I have the database part but not the serial part. can someone show me examples of how to do this. I'm using a windows machine. | 1 | 1 | 1.2 | 0 | true | 6,474,062 | 0 | 916 | 1 | 0 | 0 | 6,471,569 | You will find it easier to use a USB scanner. These will decode the scan, and send it as if it were typed on the keyboard, and entered with a trailing return.
The barcode is typically written with leading and trailing * characters, but these are not sent with the scan.
Thus you print "*AB123*" using a 3 of 9 font, a... | 1 | 0 | 0 | Reading incoming data from barcode | 1 | python,pyserial | 0 | 2011-06-24T17:35:00.000 |
I couldn't find any information about this in the documentation, but how can I get a list of tables created in SQLAlchemy?
I used the class method to create the tables. | 133 | 99 | 1 | 0 | false | 30,554,677 | 0 | 133,023 | 1 | 0 | 0 | 6,473,925 | There is a method on the engine object that fetches the list of table names: engine.table_names() | 1 | 0 | 0 | SQLAlchemy - Getting a list of tables | 14 | python,mysql,sqlalchemy,pyramid | 0 | 2011-06-24T21:25:00.000 |
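A quick illustration, assuming a SQLAlchemy version contemporary with that answer where engine.table_names() is still available (the URL is a placeholder):

```python
from sqlalchemy import create_engine

engine = create_engine("mysql://app:secret@localhost/appdb")
print(engine.table_names())   # e.g. ['users', 'orders']
```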
I'm dealing with some big (tens of millions of records, around 10 GB) database files using SQLite. I'm doing this through Python's standard interface.
When I try to insert millions of records into the database, or create indices on some of the columns, my computer slowly runs out of memory. If I look at the normal system moni... | 2 | 0 | 1.2 | 0 | true | 6,491,966 | 0 | 1,060 | 1 | 0 | 0 | 6,491,856 | The memory may be not assigned to a process, but it can be e.g. a file on tmpfs filesystem (/dev/shm, /tmp sometimes). You should show us the output of top or free (please note those tools do not show a single 'memory usage' value) to let us tell something more about the memory usage.
In case of inserting records to a ... | 1 | 0 | 0 | Why does running SQLite (through python) cause memory to "unofficially" fill up? | 2 | python,sqlite,memory,ubuntu,memory-leaks | 0 | 2011-06-27T11:03:00.000 |
I'm doing a project that is serial based and has to update a database when a barcode is being read. Which programming language has better tools for working with a MySQl database and Serial communication. I debating right now between python and realbasic. | 0 | 3 | 0.291313 | 0 | false | 6,498,450 | 0 | 1,691 | 2 | 0 | 0 | 6,498,272 | It's hard to imagine that Realbasic is a better choice than Python for any project. | 1 | 0 | 0 | What language is better for serial programming and working with MySQL database? Python? Realbasic? | 2 | python,mysql,serial-port,realbasic | 0 | 2011-06-27T20:02:00.000 |
I'm doing a project that is serial based and has to update a database when a barcode is being read. Which programming language has better tools for working with a MySQl database and Serial communication. I debating right now between python and realbasic. | 0 | 3 | 1.2 | 0 | true | 6,498,607 | 0 | 1,691 | 2 | 0 | 0 | 6,498,272 | Python is a general purpose language with tremendous community support and a "batteries-included" philosophy that leads to simple-designs that focus on the business problem at hand. It is a good choice for a wide variety of projects.
The only reasons not to choose Python would be:
You (or your team) have greater exper... | 1 | 0 | 0 | What language is better for serial programming and working with MySQL database? Python? Realbasic? | 2 | python,mysql,serial-port,realbasic | 0 | 2011-06-27T20:02:00.000 |
When building a website, one has to decide how to store the session info when a user is logged in.
What are the pros and cons of storing each session in its own file versus storing it in a database? | 7 | 3 | 1.2 | 0 | true | 6,510,307 | 0 | 2,180 | 2 | 0 | 0 | 6,510,075 | I generally wouldn't ever store this information in a file - you run the risk of potentially swapping this file in and out of memory (yes, it could be cached at times), but I would rather use an in-memory mechanism designed as such; with files you are then using something that is fairly nonstandard.
In ASP.Net
you can use in in... | 1 | 0 | 0 | What is the pros/cons of storing session data in file vs database? | 3 | php,asp.net,python,ruby-on-rails,ruby | 0 | 2011-06-28T16:45:00.000 |
When building a website, one has to decide how to store the session info when a user is logged in.
What are the pros and cons of storing each session in its own file versus storing it in a database? | 7 | 1 | 0.066568 | 0 | false | 6,527,272 | 0 | 2,180 | 2 | 0 | 0 | 6,510,075 | I'm guessing, based on your previous questions, that this is being asked in the context of using perl's CGI::Application module, with CGI::Application::Plugin::Session. If you use that module with the default settings, it will write the session data into files stored in the /tmp directory - which is very similar to wha... | 1 | 0 | 0 | What is the pros/cons of storing session data in file vs database? | 3 | php,asp.net,python,ruby-on-rails,ruby | 0 | 2011-06-28T16:45:00.000 |
I have a Pyramid project that uses MongoDB for storage. Now I'm trying to write a test, but how do I specify the connection to MongoDB?
More specifically, which database should I connect to (test?) and how do I use fixtures? In Django it creates a temporary database, but how does it work in Pyramid? | 2 | 2 | 0.379949 | 0 | false | 6,934,811 | 0 | 594 | 1 | 0 | 0 | 6,515,160 | Just create a database in your TestCase.setUp and delete it in TestCase.tearDown.
You need MongoDB running, because there is no "mongolite3" the way there is sqlite3 for SQL.
I doubt that Django is able to create a temporary file to store a MongoDB database. It probably just uses sqlite:///, which creates a database with in-memory storage. | 1 | 0 | 0 | How do i create unittest in pyramid with mongodb? | 1 | python,mongodb,pyramid | 1 | 2011-06-29T02:30:00.000 |
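A hedged sketch of that setUp/tearDown approach with pymongo (the database name is arbitrary and a local mongod must be running):

```python
import unittest
import pymongo

class MongoTestCase(unittest.TestCase):
    def setUp(self):
        self.client = pymongo.MongoClient()
        self.db = self.client["pyramid_tests"]

    def tearDown(self):
        self.client.drop_database("pyramid_tests")   # throw the fixture data away
        self.client.close()
```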
I've used a raw SQL Query to access them, and it seems to have worked. However, I can't figure out a way to actually print the results to an array. The only thing that I can find is the cursor.fetchone() command, which gives me a single row.
Is there any way that I can return an entire column in a django query set? | 0 | 0 | 0 | 0 | false | 6,539,716 | 1 | 223 | 2 | 0 | 0 | 6,539,687 | You can use cursor.fetchall() instead of cursor.fetchone() to retrieve all rows.
And then extract the necessary field (each row comes back as a tuple):
raw_items = cursor.fetchall()
items = [item[0] for item in raw_items] | 1 | 0 | 0 | How do I use django db API to save all the elements of a given column in a dictionary? | 2 | python,django | 0 | 2011-06-30T18:54:00.000 |
I've used a raw SQL Query to access them, and it seems to have worked. However, I can't figure out a way to actually print the results to an array. The only thing that I can find is the cursor.fetchone() command, which gives me a single row.
Is there any way that I can return an entire column in a django query set? | 0 | 1 | 1.2 | 0 | true | 6,539,798 | 1 | 223 | 2 | 0 | 0 | 6,539,687 | dict(MyModel.objects.values_list('id', 'my_column')) will return a dictionary with all elements of my_column with the row's id as the key. But probably you're just looking for a list of all the values, which you should receive via MyModel.objects.values_list('my_column', flat=True)! | 1 | 0 | 0 | How do I use django db API to save all the elements of a given column in a dictionary? | 2 | python,django | 0 | 2011-06-30T18:54:00.000 |
I have a photo gallery with an album model (just title and date and stuff) and a photo model with a foreign key to the album and three ImageFields in it (regular, mid and thumb).
When a user deletes an album I need to delete all the photos related to the album (from the server), then all the DB records that point to the alb... | 1 | 0 | 1.2 | 0 | true | 6,553,381 | 1 | 80 | 1 | 0 | 0 | 6,550,003 | Here is a possible answer to the question that I figured out:
Getting the list of albums in a string, in my case separated by commas
You need to import shutil, then:
@login_required
def remove_albums(request):
if request.is_ajax():
if request.method == 'POST':
#if the ajax call for delete what ok ... | 1 | 0 | 0 | How to delete an object and all related objects with all imageFields insite them (photo gallery) | 1 | python,django,django-models,django-views | 0 | 2011-07-01T15:30:00.000 |
(1) What's the fastest way to check if an item I'm about to "insert" into a MongoDB collection is unique (and if so not insert it)
(2) For an existing database, what's the fastest way to look at all the entries and remove duplicates but keep one copy i.e. like a "set" function: {a,b,c,a,a,b} -> {a,b,c}
I am aware that... | 0 | 2 | 0.379949 | 0 | false | 6,567,552 | 0 | 824 | 1 | 0 | 0 | 6,567,511 | (1) Create a unique index on the related columns and catch the error upon insertion time | 1 | 0 | 1 | Fastest Way to (1) not insert duplicate entry (2) consolidate duplicates in Mongo DB? | 1 | python,mongodb | 0 | 2011-07-04T05:11:00.000 |
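With pymongo that first answer looks roughly like this; the collection and field names are invented:

```python
import pymongo
from pymongo.errors import DuplicateKeyError

client = pymongo.MongoClient()
coll = client.mydb.items
coll.create_index([("fingerprint", pymongo.ASCENDING)], unique=True)

def insert_if_new(doc):
    try:
        coll.insert_one(doc)
        return True
    except DuplicateKeyError:      # the unique index rejected a duplicate
        return False
```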
Our site has two separate projects connected to the same database. This is implemented by importing the models from project1 into project2 and using it to access and manipulate data.
This works fine on our test server, but we are planning deployment and we decided we would rather have the projects on two separate machi... | 2 | 1 | 0.197375 | 0 | false | 6,574,660 | 1 | 253 | 1 | 0 | 0 | 6,572,203 | This isn't really a Django question. It is more a Python Question.
However, to answer your question: Django is going to have to be able to import these files one way or another. If they are on separate machines you really should refactor the code out into its own app and then install this app on each of the machines.
Th... | 1 | 0 | 0 | separate django projects on different machines using a common database | 1 | python,django,deployment,architecture,amazon-web-services | 0 | 2011-07-04T13:31:00.000 |
I'm writing my first web site, and am dealing with user registration.
One common problem for me, as for everyone else, is detecting whether a user already exists.
I am writing the app with python, and postgres as database.
I have currently come up with 2 ideas:
1)
lock(mutex)
u = select from db where name = input_name
if u == null... | 0 | 2 | 0.197375 | 0 | false | 6,580,794 | 0 | 235 | 2 | 0 | 0 | 6,580,723 | I think both will work, and both are equally bad ideas. :) My point is that implementing user authentication in python/pg has been done so many times in the past that there's hardly justification for writing it yourself. Have you had a look at Django, for example? It will take care of this for you, and much more, and l... | 1 | 0 | 0 | Detect users already exist in database on user registration | 2 | python,sql,database | 0 | 2011-07-05T09:47:00.000 |
I'm writing my first web site, and am dealing with user registration.
One common problem for me, as for everyone else, is detecting whether a user already exists.
I am writing the app with python, and postgres as database.
I have currently come up with 2 ideas:
1)
lock(mutex)
u = select from db where name = input_name
if u == null... | 0 | 0 | 0 | 0 | false | 6,580,784 | 0 | 235 | 2 | 0 | 0 | 6,580,723 | Slightly different, I usually do a select query via AJAX to determine if a username already exists, that way I can display a message on the UI explaining that the name is already taken and suggest another before the submit the registration form. | 1 | 0 | 0 | Detect users already exist in database on user registration | 2 | python,sql,database | 0 | 2011-07-05T09:47:00.000 |
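A rough sketch of the SELECT behind the AJAX check the second answer describes (psycopg2; table and column names are invented). A UNIQUE constraint on the name column is still what makes the final INSERT race-free:

```python
import psycopg2

def name_taken(conn, name):
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM users WHERE name = %s", (name,))
        return cur.fetchone() is not None
```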
I'm trying to use Spyder with pyodbc to connect to MySQL using the PyQt4 GUI framework.
I have pyodbc in Spyder figured out.
How do I use PyQt4 to get info into GUIs? I'm looking to use the GUI on Fedora and Win x64.
Edit: I figured out the fedora driver. Can anyone help me with QMYSQL driver. | 0 | 0 | 0 | 0 | false | 6,644,662 | 0 | 842 | 1 | 0 | 0 | 6,582,404 | Have you considered using PyQt's built-in MySQL support? This could make it a bit easier to display DB info, depending on what you want the interface to look like. | 1 | 0 | 0 | How connect Spyder to mysql on Winx64 and Fedora? | 2 | python,pyqt4,spyder | 0 | 2011-07-05T12:07:00.000 |
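A hedged sketch of that built-in support via the QMYSQL driver (the driver plugin must be installed; credentials and table names are placeholders):

```python
from PyQt4.QtSql import QSqlDatabase, QSqlTableModel

db = QSqlDatabase.addDatabase("QMYSQL")
db.setHostName("localhost")
db.setDatabaseName("inventory")
db.setUserName("app")
db.setPassword("secret")
if db.open():
    model = QSqlTableModel()
    model.setTable("parts")
    model.select()            # a QTableView can display this model directly
```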
I'm trying to have a purely in-memory SQLite database in Django, and I think I have it working, except for an annoying problem:
I need to run syncdb before using the database, which isn't too much of a problem. The problem is that it needs to create a superuser (in the auth_user table, I think) which requires interacti... | 1 | 3 | 1.2 | 0 | true | 6,600,219 | 1 | 1,200 | 1 | 0 | 0 | 6,599,716 | Disconnect django.contrib.auth.management.create_superuser from the post_syncdb signal, and instead connect your own function that creates and saves a new superuser User with the desired password. | 1 | 0 | 0 | Django In-Memory SQLite3 Database | 3 | python,database,django,sqlite,in-memory-database | 0 | 2011-07-06T16:20:00.000 |
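A sketch of the signal swap the answer describes, as it was commonly written for syncdb-era Django (the username and password are placeholders):

```python
from django.db.models import signals
from django.contrib.auth import models as auth_models
from django.contrib.auth.management import create_superuser

# stop the interactive "create a superuser?" prompt during syncdb
signals.post_syncdb.disconnect(
    create_superuser, sender=auth_models,
    dispatch_uid="django.contrib.auth.management.create_superuser")

def create_test_superuser(app, created_models, verbosity, **kwargs):
    auth_models.User.objects.create_superuser("admin", "admin@example.com", "admin")

signals.post_syncdb.connect(create_test_superuser, sender=auth_models)
```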
Is there an elegant way to do an INSERT ... ON DUPLICATE KEY UPDATE in SQLAlchemy? I mean something with a syntax similar to inserter.insert().execute(list_of_dictionaries) ? | 44 | -1 | -0.022219 | 0 | false | 17,374,720 | 0 | 60,718 | 1 | 0 | 0 | 6,611,563 | As none of these solutions seem all the elegant. A brute force way is to query to see if the row exists. If it does delete the row and then insert otherwise just insert. Obviously some overhead involved but it does not rely on modifying the raw sql and it works on non orm stuff. | 1 | 0 | 1 | SQLAlchemy ON DUPLICATE KEY UPDATE | 9 | python,mysql,sqlalchemy | 0 | 2011-07-07T13:43:00.000 |
I have a python loader using Andy McCurdy's python library that opens multiple Redis DB connections and sets millions of keys looping through files of lines each containing an integer that is the redis-db number for that record. Alltogether, only 20 databases are open at the present time, but eventually there may be as... | 1 | 1 | 0.197375 | 0 | false | 6,703,919 | 0 | 1,371 | 1 | 0 | 0 | 6,628,953 | So I'm guessing this is about the connection pooling support built into the python library. Am I correct in that guess?
Yes.
If so the real question is is there a way to increase the pool size
Not needed, it will increase connections up to 2**31 per default (andys lib). So your connections are idle anyways.
If you w... | 1 | 0 | 0 | configuring connection-pool size with Andy McCurdy's python-for-redis library | 1 | python,configuration,redis,connection-pooling | 0 | 2011-07-08T18:43:00.000 |
If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient.
Is this even a plausible strategy?
Or is this the way they do it:
When a page loads, it takes the end datet... | 0 | 2 | 0.197375 | 0 | false | 6,639,561 | 1 | 2,127 | 2 | 0 | 0 | 6,639,247 | I don't think this question has anything to do with SQL, really--except that you might retrieve an expiration time from SQL. What you really care about is just how to display the timeout real-time in the browser, right?
Obviously the easiest way is just to send a "seconds remaining" counter to the page, either on the ... | 1 | 0 | 0 | Live countdown clock with django and sql? | 2 | javascript,python,django,time,countdown | 0 | 2011-07-10T04:52:00.000 |
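A sketch of the server side of that idea: compute the seconds remaining once per page load (the end time would come from the auction row; the field is hypothetical) and let client-side JavaScript count it down without hitting the database again:

    from datetime import datetime

    def seconds_remaining(end_time, now=None):
        # Clamp at zero once the end time has passed.
        now = now or datetime.utcnow()
        delta = end_time - now
        return max(0, int(delta.total_seconds()))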
If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient.
Is this even a plausible strategy?
Or is this the way they do it:
When a page loads, it takes the end datet... | 0 | 0 | 0 | 0 | false | 6,639,878 | 1 | 2,127 | 2 | 0 | 0 | 6,639,247 | I have also encountered the same problem a while ago.
First of all, your problem is related neither to Django nor to SQL. It is a general concept, and it is not very easy to implement because of the overhead on the server.
One solution that comes to my mind is keeping the start time of the process in the database.
When someone request you... | 1 | 0 | 0 | Live countdown clock with django and sql? | 2 | javascript,python,django,time,countdown | 0 | 2011-07-10T04:52:00.000 |
I have a Django project which has a mysql database backend. How can I export contents from my db to an Excel (xls, xlsx) format? | 0 | 0 | 0 | 0 | false | 6,650,011 | 1 | 1,028 | 1 | 0 | 0 | 6,649,990 | phpMyAdmin has an Export tab, and you can export in CSV. This can be imported into Excel. | 1 | 0 | 0 | MySQLdb to Excel | 4 | python,mysql,django,excel | 0 | 2011-07-11T12:20:00.000 |
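The answer points at phpMyAdmin's CSV export; if you would rather stay in Python, a sketch with MySQLdb and the csv module does the same thing (credentials, table name and output path are placeholders):

    import csv
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
    cur = conn.cursor()
    cur.execute("SELECT * FROM my_table")

    with open("export.csv", "wb") as f:  # "wb" for the Python 2 csv module; use "w", newline="" on Python 3
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])  # header row from cursor metadata
        writer.writerows(cur.fetchall())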
I am trying to connect to a database in a domain from my virtual machine.
It works on XP, but somehow does not work on Win7, quitting with:
"OperationalError: (1042, "Can't get hostname for your address")"
Now I tried disabling the Firewall and such, but that doesn't matter anyway.
I don't need the DNS resolving, which will onl... | 12 | 1 | 0.099668 | 0 | false | 6,668,116 | 0 | 60,490 | 1 | 0 | 0 | 6,668,073 | This is an option which needs to be set in the MySQL configuration file on the server. It can't be set by client APIs such as MySQLdb. This is because of the potential security implications.
That is, I may want to deny access from a particular hostname. With skip-name-resolve enabled, this won't work. (Admittedly, acce... | 1 | 0 | 0 | How to use the option skip-name-resolve when using MySQLdb for Python? | 2 | python,mysql,mysql-python,resolve | 0 | 2011-07-12T17:00:00.000 |
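For reference, the server-side change the answer describes amounts to adding the option under the [mysqld] section of my.cnf (my.ini on Windows) and restarting MySQL:

    [mysqld]
    skip-name-resolve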
I'm using an object database (ZODB) in order to store complex relationships between many objects but am running into performance issues. As a result I started to construct indexes in order to speed up object retrieval and insertion. Here is my story and I hope that you can help.
Initially when I would add an object to... | 5 | 8 | 1.2 | 0 | true | 6,674,416 | 0 | 601 | 2 | 0 | 0 | 6,668,234 | Yes, repoze.catalog is nice, and well documented.
In short: don't make indexing part of your site structure!
Look at using a container/item hierarchy to store and traverse content item objects; plan to be able to traverse content by either (a) path (graph edges look like a filesystem) or (b) by identifying singleton ... | 1 | 0 | 1 | Method for indexing an object database | 2 | python,indexing,zodb,object-oriented-database | 0 | 2011-07-12T17:14:00.000 |
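A rough sketch of the external-catalog idea using repoze.catalog's field index, as the answer suggests; the indexed attribute and the docid scheme are illustrative, and real code would map docids to objects via an intid utility:

    from repoze.catalog.catalog import Catalog
    from repoze.catalog.indexes.field import CatalogFieldIndex

    catalog = Catalog()
    catalog['title'] = CatalogFieldIndex('title')  # index the "title" attribute

    class Doc(object):
        def __init__(self, title):
            self.title = title

    catalog.index_doc(1, Doc('alpha'))
    catalog.index_doc(2, Doc('beta'))

    numdocs, docids = catalog.search(title=('alpha', 'alpha'))  # range query: exactly 'alpha'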
I'm using an object database (ZODB) in order to store complex relationships between many objects but am running into performance issues. As a result I started to construct indexes in order to speed up object retrieval and insertion. Here is my story and I hope that you can help.
Initially when I would add an object to... | 5 | 0 | 0 | 0 | false | 6,668,904 | 0 | 601 | 2 | 0 | 0 | 6,668,234 | Think about using an attribute hash (something like Java's hashCode()), then use the 32-bit hash value as the key. Python has a hash function, but I am not real familiar with it. | 1 | 0 | 1 | Method for indexing an object database | 2 | python,indexing,zodb,object-oriented-database | 0 | 2011-07-12T17:14:00.000 |
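A tiny sketch of that idea; it swaps the built-in hash() for hashlib so the key stays stable between interpreter runs (Python 3 randomises str hashes unless PYTHONHASHSEED is fixed), and the attribute names are hypothetical:

    import hashlib

    def key_for(obj):
        # Stable 32-bit integer key derived from the identifying attributes.
        raw = (u"%s|%s" % (obj.title, obj.author)).encode("utf-8")
        return int(hashlib.md5(raw).hexdigest()[:8], 16)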
I am looking at the Flask tutorial, and it suggests to create a new database connection for each web request. Is it the right way to do things? I always thought that the database connection should only be created once for each thread. Can that be done, while maintaining the application as thread-safe, with flask,... | 21 | 0 | 0 | 0 | false | 6,698,054 | 1 | 11,438 | 3 | 0 | 0 | 6,688,413 | In my experience, it's often a good idea to close connections frequently. In particular, MySQL likes to close connections that have been idle for a while, and sometimes that can leave the persistent connection in a stale state that can make the application unresponsive.
What you really want to do is optimize the "dead... | 1 | 0 | 0 | How to preserve database connection in a python web server | 3 | python,mysql,flask | 0 | 2011-07-14T04:13:00.000 |
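For context, the per-request pattern the Flask tutorial uses looks roughly like this (sqlite3 stands in for whichever backend you use); each request gets a fresh, short-lived connection, which sidesteps the stale keep-alive problem:

    import sqlite3
    from flask import Flask, g

    app = Flask(__name__)

    @app.before_request
    def open_db():
        g.db = sqlite3.connect("app.db")   # fresh connection per request

    @app.teardown_request
    def close_db(exception):
        db = getattr(g, "db", None)
        if db is not None:
            db.close()                     # always closed, even when the view raised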
The datetime module does date validation and math which is fine when you care about reality.
I need an object that holds dates generated even if they were invalid. datetime is way too strict, as sometimes I know the year only, or the year and month only, and sometimes I have a date like 2011-02-30.
Is there a module out there th... | 6 | 0 | 0 | 0 | false | 12,212,950 | 0 | 2,334 | 1 | 0 | 0 | 6,697,770 | I haven't heard of such module out there and don't think there is one.
I would probably end up storing two dates for every instance: 1. the original input as a string, which could contain anything, even "N/A", just for showing back the original value, and 2. parsed and "normalized" datetime object which is the closest ... | 1 | 0 | 1 | allowing invalid dates in python datetime | 2 | python,datetime | 0 | 2011-07-14T17:56:00.000 |
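A small sketch of that two-field idea: keep the raw string untouched and a best-effort parsed value beside it (the parsing rules here are illustrative):

    from datetime import datetime

    class FuzzyDate(object):
        """Holds the original text plus a parsed datetime when one exists."""

        FORMATS = ("%Y-%m-%d", "%Y-%m", "%Y")

        def __init__(self, raw):
            self.raw = raw        # always preserved, even for 2011-02-30
            self.parsed = None    # closest valid datetime, or None
            for fmt in self.FORMATS:
                try:
                    self.parsed = datetime.strptime(raw, fmt)
                    break
                except ValueError:
                    continue

    print(FuzzyDate("2011-02-30").parsed)  # None: kept only as raw text
    print(FuzzyDate("2011-02").parsed)     # datetime(2011, 2, 1, 0, 0)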
Does anyone know of a way of accessing MS Excel from Python? Specifically I am looking to create new sheets and fill them with data, including formulae.
Preferably I would like to do this on Linux if possible, but can do it from in a VM if there is no other way. | 22 | 5 | 0.16514 | 0 | false | 21,573,501 | 0 | 37,522 | 2 | 0 | 0 | 6,698,229 | Long time after the original question, but last answer pushed it top of feed again. Others might benefit from my experience using python and excel.
I am using excel and python quite bit. Instead of using the xlrd, xlwt modules directly, I normally use pandas. I think pandas uses these modules as imports, but i fi... | 1 | 0 | 0 | Excel Python API | 6 | python,excel | 0 | 2011-07-14T18:34:00.000 |
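A minimal pandas sketch along those lines, assuming an Excel writer engine such as openpyxl is installed; note that plain DataFrame.to_excel covers data but not formulae, which would still need xlwt or openpyxl directly:

    import pandas as pd

    df = pd.DataFrame({"item": ["a", "b"], "qty": [3, 5]})
    df.to_excel("report.xlsx", sheet_name="Data", index=False)

    read_back = pd.read_excel("report.xlsx", sheet_name="Data")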
Does anyone know of a way of accessing MS Excel from Python? Specifically I am looking to create new sheets and fill them with data, including formulae.
Preferably I would like to do this on Linux if possible, but can do it from in a VM if there is no other way. | 22 | 3 | 0.099668 | 0 | false | 6,698,343 | 0 | 37,522 | 2 | 0 | 0 | 6,698,229 | It's surely possible through the Excel object model via COM: just use win32com modules for Python. Can't remember more but I once controlled the Media Player through COM from Python. It was piece of cake. | 1 | 0 | 0 | Excel Python API | 6 | python,excel | 0 | 2011-07-14T18:34:00.000 |
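A hedged sketch of driving Excel over COM with pywin32 (Windows-only, hence the VM mentioned in the question); the output path is a placeholder:

    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")
    excel.Visible = False

    wb = excel.Workbooks.Add()
    ws = wb.Worksheets(1)
    ws.Name = "Results"
    ws.Cells(1, 1).Value = 21
    ws.Cells(1, 2).Formula = "=A1*2"   # formulae work because Excel itself evaluates them

    wb.SaveAs(r"C:\temp\report.xlsx")
    excel.Quit()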
Recently I've begun working on exploring ways to convert about 16k Corel Paradox 4.0 database tables (my client has been using a legacy platform over 20 years mainly due to massive logistical matters) to more modern formats (i.e.CSV, SQL, etc.) en mass and so far I've been looking at PHP since it has a library devoted ... | 1 | 0 | 1.2 | 0 | true | 6,711,276 | 0 | 734 | 2 | 0 | 0 | 6,709,833 | If you intend to just convert the data which I guess is a process you do only once you will run the script locally as a command script. For that you don't need a web site and thus XAMPP. What language you take is secondary except you say that PHP has a library. Does python or others have one?
About your concern of erro... | 1 | 0 | 0 | Batch converting Corel Paradox 4.0 Tables to CSV/SQL -- via PHP or other scripts | 2 | php,python,mysql,xampp,php-gtk | 0 | 2011-07-15T15:56:00.000 |
Recently I've begun working on exploring ways to convert about 16k Corel Paradox 4.0 database tables (my client has been using a legacy platform over 20 years mainly due to massive logistical matters) to more modern formats (i.e.CSV, SQL, etc.) en mass and so far I've been looking at PHP since it has a library devoted ... | 1 | 1 | 0.099668 | 0 | false | 39,728,385 | 0 | 734 | 2 | 0 | 0 | 6,709,833 | This is doubtless far too late to help you, but for posterity...
If one has a Corel Paradox working environment, one can just use it to ease the transition.
We moved the Corel Paradox 9 tables we had into an Oracle schema we built by connecting to the schema (using an alias such as SCHEMA001) then writing this Procedur... | 1 | 0 | 0 | Batch converting Corel Paradox 4.0 Tables to CSV/SQL -- via PHP or other scripts | 2 | php,python,mysql,xampp,php-gtk | 0 | 2011-07-15T15:56:00.000 |
I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of... | 9 | 0 | 0 | 0 | false | 7,404,552 | 0 | 14,347 | 4 | 0 | 0 | 6,713,715 | SQLAlchemy does sanitize values for you if you use regular queries. The problem is probably that you are using a LIKE clause: LIKE requires additional escaping for the wildcard symbols _ and %. So you will need a replace step if you want to quote a LIKE expression. | 1 | 0 | 0 | Escaping special characters in filepaths using SQLAlchemy | 4 | python,mysql,sqlalchemy,escaping,filepath | 0 | 2011-07-15T22:12:00.000
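A sketch of that replace step combined with SQLAlchemy's like() escape argument; the mapped File.path column in the commented usage is hypothetical:

    def escape_like(value, escape_char="\\"):
        # Neutralise LIKE wildcards so they match literally.
        return (value.replace(escape_char, escape_char * 2)
                     .replace("%", escape_char + "%")
                     .replace("_", escape_char + "_"))

    # Usage against a hypothetical mapped File.path column:
    #   pattern = "%" + escape_like(term) + "%"
    #   session.query(File).filter(File.path.like(pattern, escape="\\"))
    print(escape_like("100%_done.txt"))  # 100\%\_done.txt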
I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of... | 9 | -3 | -0.148885 | 0 | false | 6,720,094 | 0 | 14,347 | 4 | 0 | 0 | 6,713,715 | Why do you need to escape the file paths? As far as you are not manually writing select / insert queries, SQLAlchemy will take care of the escaping when it generates the query internally.
The file paths can be inserted as they are into the database. | 1 | 0 | 0 | Escaping special characters in filepaths using SQLAlchemy | 4 | python,mysql,sqlalchemy,escaping,filepath | 0 | 2011-07-15T22:12:00.000 |
I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of... | 9 | -4 | -1 | 0 | false | 7,435,697 | 0 | 14,347 | 4 | 0 | 0 | 6,713,715 | You don't need to do anything; SQLAlchemy will do it for you. | 1 | 0 | 0 | Escaping special characters in filepaths using SQLAlchemy | 4 | python,mysql,sqlalchemy,escaping,filepath | 0 | 2011-07-15T22:12:00.000
I'm storing filenames and filepaths in MySQL. Retrieving them from the database using LIKE expressions requires that I escape all allowed filename chars that collide with MySQL special chars. I'm happy to simply use Python's string.replace() method, but was wondering if there was a more standard or built-in method of... | 9 | -3 | -0.148885 | 0 | false | 7,479,678 | 0 | 14,347 | 4 | 0 | 0 | 6,713,715 | As far as I know, there isn't what you are looking for in SQLAlchemy. Just use the basestring.replace() method yourself. | 1 | 0 | 0 | Escaping special characters in filepaths using SQLAlchemy | 4 | python,mysql,sqlalchemy,escaping,filepath | 0 | 2011-07-15T22:12:00.000
I am working in python and using xlwt.
I have got a sample Excel sheet and have to generate the same Excel sheet from Python. The problem is that the heading columns are highlighted using some color from the Excel color palette, and I am not able to find the name of the color. I need to generate an exact copy of the sample given to me.
Is the... | 3 | 0 | 0 | 0 | false | 15,435,059 | 0 | 1,582 | 1 | 0 | 0 | 6,723,242 | Best you read the colours from the sample given to you with xlrd.
If there are only a few different colours and they stay the same over time, you can also open the file in Excel and use a colour picker tool to get the RGB values of the relevant cells. | 1 | 0 | 0 | Reading background color of a cell of an excel sheet from python? | 1 | python,excel,colors,cell,xlwt | 0 | 2011-07-17T10:29:00.000 |
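A rough sketch of reading the fill colour back out with xlrd's formatting API; it only works for .xls files and needs formatting_info=True, and the cell position here is illustrative:

    import xlrd

    book = xlrd.open_workbook("sample.xls", formatting_info=True)
    sheet = book.sheet_by_index(0)

    xf = book.xf_list[sheet.cell_xf_index(0, 0)]        # formatting record for A1
    colour_index = xf.background.pattern_colour_index   # palette index of the fill
    print(book.colour_map.get(colour_index))            # (R, G, B) tuple or None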
So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: "Error loading MySQLdb module: No module named MySQLdb"
I thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together... | 0 | 2 | 1.2 | 0 | true | 6,738,365 | 1 | 1,094 | 3 | 0 | 0 | 6,738,310 | You'll need to install MySQL for python as Django needs this to do the connecting, once you have the package installed you shouldn't need to configure it though as Django just needs to import from it.
Edit: from your comments there is a setuptools bundled but it has been replaced by the package distribute, install thi... | 1 | 0 | 0 | Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb" | 3 | python,mysql,django,mysql-python,bitnami | 0 | 2011-07-18T19:29:00.000 |
So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: "Error loading MySQLdb module: No module named MySQLdb"
I thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together... | 0 | 0 | 0 | 0 | false | 6,981,742 | 1 | 1,094 | 3 | 0 | 0 | 6,738,310 | BitNami DjangoStack already includes the mysql-python components. I guess you selected MySQL as the database when installing the BitNami Stack, right? (it also includes PostgreSQL and SQLite). Do you receive the error at installation time? Or later working with your Django project?
In which platform are yo... | 1 | 0 | 0 | Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb" | 3 | python,mysql,django,mysql-python,bitnami | 0 | 2011-07-18T19:29:00.000 |
So I installed Bitnami Django stack, hoping as proclaimed 'ready-to-run' versions of python and mysql. However, I can't get python to syncdb: "Error loading MySQLdb module: No module named MySQLdb"
I thought the Bitnami package would already install everything necessary in Windows to make mysql and Python work together... | 0 | 0 | 0 | 0 | false | 12,083,825 | 1 | 1,094 | 3 | 0 | 0 | 6,738,310 | So I got this error after installing Bitnami Django stack on Windows Vista. Turns out that I had all components installed, but easy_install mysql_python didn't unwrap the entire package... ?
I inst... uninst... inst... uninst multiple times, but no combination (using mysql for the startup Project) made any difference... | 1 | 0 | 0 | Mysql-python not installed with bitnami django stack? "Error loading MySQLdb module: No module named MySQLdb" | 3 | python,mysql,django,mysql-python,bitnami | 0 | 2011-07-18T19:29:00.000 |
I am developing a Django app being a Web frontend to some Oracle database with another local DB keeping app's data such as Guardian permissions. The problem is that it can be modified from different places that I don't have control of.
Let's say we have 3 models: User, Thesis and UserThesis.
UserThesis - a table specif... | 1 | 0 | 1.2 | 0 | true | 7,011,483 | 1 | 159 | 1 | 0 | 0 | 6,775,359 | I decided to go with manually checking the permissions, caching it whenever I can. I ended up with get_perms_from_cache(self, user) model method which helps me a lot. | 1 | 0 | 0 | Django-guardian on DB with shared (non-exclusive) access | 1 | python,django,database-permissions,django-permissions | 0 | 2011-07-21T11:34:00.000 |
I'm building a centralised django application that will be interacting with a dynamic number of databases with basically identical schema. These dbs are also used by a couple legacy applications, some of which are in PHP. Our solution to avoid multiple silos of db credentials is to store this info in generic setting ... | 1 | 2 | 1.2 | 0 | true | 6,782,234 | 1 | 1,027 | 2 | 0 | 0 | 6,780,827 | rereading the file is a heavy penalty to pay when it's unlikely that the file has changed.
My usual approach is to use INotify to watch for configuration file changes, rather than trying to read a file on every request. Additionally, I tend to keep a "current" configuration, parsed from the file, and only replace it w... | 1 | 0 | 0 | Dynamic per-request database connections in Django | 2 | python,django | 0 | 2011-07-21T18:26:00.000 |
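The answer names INotify; a simpler stat-based variant of the same keep-a-current-copy idea (re-parse only when the file's mtime changes; the settings path and JSON format are placeholders) looks like this:

    import json
    import os

    _cache = {"mtime": None, "config": None}

    def get_config(path="/etc/myapp/databases.json"):
        # Re-read only when the file has actually changed on disk.
        mtime = os.path.getmtime(path)
        if _cache["mtime"] != mtime:
            with open(path) as f:
                _cache["config"] = json.load(f)
            _cache["mtime"] = mtime
        return _cache["config"]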
I'm building a centralised django application that will be interacting with a dynamic number of databases with basically identical schema. These dbs are also used by a couple legacy applications, some of which are in PHP. Our solution to avoid multiple silos of db credentials is to store this info in generic setting ... | 1 | 0 | 0 | 0 | false | 6,780,942 | 1 | 1,027 | 2 | 0 | 0 | 6,780,827 | You could start different instances with different settings.py files (by setting different DJANGO_SETTINGS_MODULE) on different ports, and redirect the requests to the specific apps. Just my 2 cents. | 1 | 0 | 0 | Dynamic per-request database connections in Django | 2 | python,django | 0 | 2011-07-21T18:26:00.000 |
I use Windows 7 64 bit and Oracle 10g. I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7.
When I import the cx_Oracle module I get this error:
Traceback (most recent call last):
File "C:\Osebno\test.py", line 1, in
import cx_oracle
ImportError: No module named cx_oracle
Can someone please ... | 3 | 0 | 0 | 0 | false | 6,788,993 | 0 | 17,773 | 3 | 0 | 0 | 6,788,937 | It's not finding the module.
Things to investigate: Do you have several python installations? Did it go to the right one? Do a global search for cx_oracle and see if it's in the correct place. Check your PYTHONPATH variable. Check Python's registry values HKLM\Software\Python\PythonCore. Are they correct? | 1 | 0 | 0 | Error when importing cx_Oracle module [Python] | 5 | python,windows-7,oracle10g | 0 | 2011-07-22T10:51:00.000
I use Windows 7 64 bit and Oracle 10g. I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7.
When I import the cx_Oracle module I get this error:
Traceback (most recent call last):
File "C:\Osebno\test.py", line 1, in
import cx_oracle
ImportError: No module named cx_oracle
Can someone please ... | 3 | 4 | 0.158649 | 0 | false | 6,789,312 | 0 | 17,773 | 3 | 0 | 0 | 6,788,937 | Have you tried import cx_Oracle (upper-case O) instead of import cx_oracle? | 1 | 0 | 0 | Error when importing cx_Oracle module [Python] | 5 | python,windows-7,oracle10g | 0 | 2011-07-22T10:51:00.000 |
I use Windows 7 64 bit and Oracle 10g. I have installed python-2.7.2.amd64 and cx_Oracle-5.1-10g.win-amd64-py2.7.
When I import the cx_Oracle module I get this error:
Traceback (most recent call last):
File "C:\Osebno\test.py", line 1, in
import cx_oracle
ImportError: No module named cx_oracle
Can someone please ... | 3 | 1 | 0.039979 | 0 | false | 16,885,226 | 0 | 17,773 | 3 | 0 | 0 | 6,788,937 | After installing cx_Oracle, download the Instant Client from Oracle with all the DLLs, then copy them into the same directory as cx_Oracle.pyd; it will work directly.
tried and worked for me. | 1 | 0 | 0 | Error when importing cx_Oracle module [Python] | 5 | python,windows-7,oracle10g | 0 | 2011-07-22T10:51:00.000 |