| Column | Type | Observed range |
|---|---|---|
| Question | string | lengths 25 to 7.47k |
| Q_Score | int64 | 0 to 1.24k |
| Users Score | int64 | -10 to 494 |
| Score | float64 | -1 to 1.2 |
| Data Science and Machine Learning | int64 | 0 to 1 |
| is_accepted | bool | 2 classes |
| A_Id | int64 | 39.3k to 72.5M |
| Web Development | int64 | 0 to 1 |
| ViewCount | int64 | 15 to 1.37M |
| Available Count | int64 | 1 to 9 |
| System Administration and DevOps | int64 | 0 to 1 |
| Networking and APIs | int64 | 0 to 1 |
| Q_Id | int64 | 39.1k to 48M |
| Answer | string | lengths 16 to 5.07k |
| Database and SQL | int64 | 1 to 1 |
| GUI and Desktop Applications | int64 | 0 to 1 |
| Python Basics and Environment | int64 | 0 to 1 |
| Title | string | lengths 15 to 148 |
| AnswerCount | int64 | 1 to 32 |
| Tags | string | lengths 6 to 90 |
| Other | int64 | 0 to 1 |
| CreationDate | string | lengths 23 to 23 |
I've produced a few Django sites but up until now I have been mapping individual views and URLs in urls.py.
Now I've tried to create a small custom CMS but I'm having trouble with the URLs. I have a database table (SQLite3) which contains code for the pages like a column for header, one for right menu, one for content.... | 1 | 1 | 0.099668 | 0 | false | 1,563,359 | 1 | 3,215 | 1 | 0 | 0 | 1,563,088 | Your question is a little bit twisted, but I think what you're asking for is something similar to how django.contrib.flatpages handles this. Basically it uses middleware to catch the 404 error and then looks to see if any of the flatpages have a URL field that matches.
We did this on one site where all of the URLs were... | 1 | 0 | 0 | URLs stored in database for Django site | 2 | python,database,django,url,content-management-system | 0 | 2009-10-13T21:43:00.000 |
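The flatpages mechanism described in this answer can be sketched without Django: store page URLs in a table and fall back to a database lookup when normal routing finds nothing. A minimal stand-in using sqlite3 (the table layout and the resolve() helper are illustrative, not Django's actual API):

```python
import sqlite3

# Illustrative schema: one row per CMS page, keyed by URL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (url TEXT PRIMARY KEY, title TEXT, content TEXT)")
conn.execute("INSERT INTO pages VALUES (?, ?, ?)",
             ("/about/", "About", "About page body"))

def resolve(url):
    """Mimic flatpages: called when no explicit URL pattern matched."""
    row = conn.execute(
        "SELECT title, content FROM pages WHERE url = ?", (url,)
    ).fetchone()
    if row is None:
        return 404, None          # let the real 404 propagate
    title, content = row
    return 200, {"title": title, "content": content}
```

In Django itself this lookup lives in middleware that inspects the 404 response, but the core logic is the same.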
I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names/values? in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?
Edit: I need to generate a file with sql statements... | 17 | 1 | 0.039979 | 0 | false | 1,564,226 | 0 | 50,856 | 3 | 0 | 0 | 1,563,967 | Quoting parameters manually in general is a bad idea. What if there is a mistake in escaping rules? What if escape doesn't match used version of DB? What if you just forget to escape some parameter or erroneously assumed it can't contain data requiring escaping? That all may cause SQL injection vulnerability. Also, DB ... | 1 | 0 | 0 | Generate SQL statements with python | 5 | python,sql,postgresql,psycopg2 | 0 | 2009-10-14T02:31:00.000 |
I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names/values? in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?
Edit: I need to generate a file with sql statements... | 17 | 2 | 0.07983 | 0 | false | 1,563,981 | 0 | 50,856 | 3 | 0 | 0 | 1,563,967 | For robustness, I recommend using prepared statements to send user-entered values, no matter what language you use. :-) | 1 | 0 | 0 | Generate SQL statements with python | 5 | python,sql,postgresql,psycopg2 | 0 | 2009-10-14T02:31:00.000 |
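The prepared-statement style these answers recommend looks the same across DB-API drivers: pass the values separately and let the driver do the quoting. psycopg2 uses %s placeholders; sqlite3 (used here only so the sketch is self-contained) uses ?:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (title TEXT, source TEXT)")

# The driver quotes/escapes the values itself -- note the embedded quote.
rows = [("O'Reilly article", "scraped.html"), ("Plain title", "index.html")]
conn.executemany("INSERT INTO pages VALUES (?, ?)", rows)

titles = [r[0] for r in conn.execute("SELECT title FROM pages ORDER BY title")]
```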
I need to generate a list of insert statements (for postgresql) from html files, is there a library available for python to help me properly escape and quote the names/values? in PHP i use PDO to do the escaping and quoting, is there any equivalent library for python?
Edit: I need to generate a file with sql statements... | 17 | 13 | 1 | 0 | false | 1,564,224 | 0 | 50,856 | 3 | 0 | 0 | 1,563,967 | SQLAlchemy provides a robust expression language for generating SQL from Python.
Like every other well-designed abstraction layer, however, the queries it generates insert data through bind variables rather than through attempting to mix the query language and the data being inserted into a single string. This approach... | 1 | 0 | 0 | Generate SQL statements with python | 5 | python,sql,postgresql,psycopg2 | 0 | 2009-10-14T02:31:00.000 |
When using the sqlite3 module in python, all elements of cursor.description except the column names are set to None, so this tuple cannot be used to find the column types for a query result (unlike other DB-API compliant modules). Is the only way to get the types of the columns to use pragma table_info(table_name).fetc... | 5 | 5 | 0.462117 | 0 | false | 1,583,379 | 0 | 3,955 | 1 | 0 | 0 | 1,583,350 | No, it's not the only way. Alternatively, you can also fetch one row, iterate over it, and inspect the individual column Python objects and types. Unless the value is None (in which case the SQL field is NULL), this should give you a fairly precise indication what the database column type was.
sqlite3 only uses sqlite3... | 1 | 0 | 0 | sqlite3 and cursor.description | 2 | python,sqlite,python-db-api | 0 | 2009-10-17T22:11:00.000 |
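The value-inspection alternative in a self-contained form; note that sqlite3 populates only the name (d[0]) in each cursor.description tuple, and that a NULL comes back as None and tells you nothing about the column type:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, score REAL)")
conn.execute("INSERT INTO t VALUES (1, 'alpha', 2.5)")

cur = conn.execute("SELECT * FROM t")
names = [d[0] for d in cur.description]   # only d[0] is populated in sqlite3
row = cur.fetchone()
# Infer column types from the Python objects in a fetched row.
types = {name: type(value).__name__ for name, value in zip(names, row)}
```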
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL gen... | 2 | 0 | 0 | 0 | false | 1,586,035 | 1 | 369 | 3 | 0 | 0 | 1,586,008 | It would be great if code written for one platform would work on every other without any modification whatsoever, but this is usually not the case and probably never will be. What the current frameworks do is about all anyone can do. | 1 | 0 | 0 | PHP, Python, Ruby application with multiple RDBMS | 4 | php,python,ruby-on-rails,database | 0 | 2009-10-18T20:56:00.000 |
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL gen... | 2 | 2 | 0.099668 | 0 | false | 1,586,105 | 1 | 369 | 3 | 0 | 0 | 1,586,008 | If you want to leverage the bells and whistles of various RDBMSes, you can certainly do it. Just apply standard OO Principles. Figure out what kind of API your persistence layer will need to provide.
You'll end up writing a set of isomorphic persistence adapter classes. From the perspective of your model code (whi... | 1 | 0 | 0 | PHP, Python, Ruby application with multiple RDBMS | 4 | php,python,ruby-on-rails,database | 0 | 2009-10-18T20:56:00.000 |
I start feeling old fashioned when I see all these SQL generating database abstraction layers and all those ORMs out there, although I am far from being old. I understand the need for them, but their use spreads to places they normally don't belong to.
I firmly believe that using database abstraction layers for SQL gen... | 2 | 2 | 1.2 | 0 | true | 1,587,887 | 1 | 369 | 3 | 0 | 0 | 1,586,008 | You cannot have your cake and eat it too; choose one of the following options.
Use your database abstraction layer whenever you can and in the rare cases when you have a need for a hand-made query (eg. performance reasons) stick to the lowest common denominator and don't use stored procedures or any proprietary extensions tha... | 1 | 0 | 0 | PHP, Python, Ruby application with multiple RDBMS | 4 | php,python,ruby-on-rails,database | 0 | 2009-10-18T20:56:00.000 |
I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM.
Anyway, what use cases suit non relational datastores best? | 3 | 2 | 0.132549 | 0 | false | 1,588,748 | 1 | 587 | 2 | 1 | 0 | 1,588,708 | Consider the situation where you have many entity types but few instances of each entity. In this case you will have many tables each with a few records so a relational approach is not suitable. | 1 | 0 | 0 | What are the use cases for non relational datastores? | 3 | python,google-app-engine,couchdb | 0 | 2009-10-19T13:36:00.000 |
I'm looking at using CouchDB for one project and the GAE app engine datastore in the other. For relational stuff I tend to use postgres, although I much prefer an ORM.
Anyway, what use cases suit non relational datastores best? | 3 | 0 | 0 | 0 | false | 1,589,186 | 1 | 587 | 2 | 1 | 0 | 1,588,708 | In some cases they are simply nice. ZODB is a Python-only object database that is so well-integrated with Python that you can simply forget that it's there. You don't have to bother about it, most of the time. | 1 | 0 | 0 | What are the use cases for non relational datastores? | 3 | python,google-app-engine,couchdb | 0 | 2009-10-19T13:36:00.000 |
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those f... | 3 | 3 | 0.085505 | 1 | false | 1,594,704 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | If you are I/O bound, the best way I have found to optimize is to read or write the entire file into/out of memory at once, then operate out of RAM from there on.
With extensive testing I found that my runtime ended up bound not by the amount of data I read from/wrote to disk, but by the number of I/O operations I used ... | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
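A toy version of that advice: read the input with one read() call and write the result with one write() call, instead of issuing an I/O operation per row (the upper-casing transformation below is just a stand-in for the real per-row work):

```python
import os
import tempfile

# Assumed toy input: a tiny CSV file.
src = tempfile.NamedTemporaryFile("w", delete=False, suffix=".csv")
src.write("a,1\nb,2\nc,3\n")
src.close()

# One read() and one write() instead of per-row I/O calls.
with open(src.name) as f:
    data = f.read()                       # whole file into RAM
out_lines = [line.upper() for line in data.splitlines()]
with open(src.name + ".out", "w") as f:
    f.write("\n".join(out_lines) + "\n")  # whole result in one call

with open(src.name + ".out") as f:
    result = f.read()
os.unlink(src.name)
os.unlink(src.name + ".out")
```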
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those f... | 3 | 1 | 0.028564 | 1 | false | 1,595,358 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | Use buffered writes for step 4.
Write a simple function that appends the output onto a string, checks the string length, and only writes when you have accumulated enough, which should be some multiple of 4 KB. I would say start with 32 KB buffers and time it.
You would have one buffer per file, so that most "writes" won't ... | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
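A sketch of that buffering scheme (the class name, the flush counter, and the default limit are illustrative; a tiny limit is used below only so the behaviour is visible in a short run):

```python
import io

class BufferedAppender:
    """Accumulate rows in memory; hit the file only every `limit` bytes."""
    def __init__(self, fileobj, limit=32 * 1024):
        self.fileobj, self.limit = fileobj, limit
        self.parts, self.size = [], 0
        self.flushes = 0                    # instrumentation for the sketch

    def write(self, text):
        self.parts.append(text)
        self.size += len(text)
        if self.size >= self.limit:
            self.flush()

    def flush(self):
        if self.parts:
            self.fileobj.write("".join(self.parts))
            self.parts, self.size = [], 0
            self.flushes += 1

sink = io.StringIO()
w = BufferedAppender(sink, limit=100)
for i in range(50):
    w.write(f"row-{i:04d}\n")               # 9 bytes per row
w.flush()                                   # drain the trailing partial buffer
```

You would keep one such buffer per output file, so most write() calls never touch the disk.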
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those f... | 3 | 3 | 1.2 | 1 | true | 1,595,626 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | Python already does IO buffering and the OS should handle both prefetching the input file and delaying writes until it needs the RAM for something else or just gets uneasy about having dirty data in RAM for too long. Unless you force the OS to write them immediately, like closing the file after each write or opening t... | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those f... | 3 | 2 | 0.057081 | 1 | false | 1,597,062 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | Can you use a ramdisk for step 4? Low millions sounds doable if the rows are less than a couple of kB or so. | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
I have a python program that does something like this:
Read a row from a csv file.
Do some transformations on it.
Break it up into the actual rows as they would be written to the database.
Write those rows to individual csv files.
Go back to step 1 unless the file has been totally read.
Run SQL*Loader and load those f... | 3 | 1 | 0.028564 | 1 | false | 1,597,281 | 0 | 2,504 | 5 | 0 | 0 | 1,594,604 | Isn't it possible to collect a few thousand rows in ram, then go directly to the database server and execute them?
This would remove the save to and load from the disk that step 4 entails.
If the database server is transactional, this is also a safe way to do it - just have the database begin before your first row and... | 1 | 0 | 0 | How should I optimize this filesystem I/O bound program? | 7 | python,performance,optimization,file-io | 0 | 2009-10-20T13:27:00.000 |
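That batching idea, sketched with the DB-API: collect a few thousand rows in RAM and hand each batch to the server inside one transaction. sqlite3 is used so the example is self-contained; the same executemany() pattern exists in psycopg2 and MySQLdb (the batch size is arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (k INTEGER, v TEXT)")

BATCH = 1000                      # illustrative batch size
rows = ((i, f"value-{i}") for i in range(2500))

batch, total = [], 0
with conn:                        # one transaction; rolls back on error
    for row in rows:
        batch.append(row)
        if len(batch) == BATCH:
            conn.executemany("INSERT INTO target VALUES (?, ?)", batch)
            total += len(batch)
            batch.clear()
    if batch:                     # trailing partial batch
        conn.executemany("INSERT INTO target VALUES (?, ?)", batch)
        total += len(batch)
```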
Two libraries for MySQL.
I've always used _mysql because it's simpler.
Can anyone tell me the difference, and why I should use which one in certain occasions? | 11 | 5 | 0.321513 | 0 | false | 1,620,642 | 0 | 4,941 | 1 | 0 | 0 | 1,620,575 | _mysql is the one-to-one mapping of the rough mysql API. On top of it, the DB-API is built, handling things using cursors and so on.
If you are used to the low-level mysql API provided by libmysqlclient, then the _mysql module is what you need, but as another answer says, there's no real need to go so low-level. You c... | 1 | 0 | 0 | Python: advantages and disadvantages of _mysql vs MySQLdb? | 3 | python,mysql | 0 | 2009-10-25T10:37:00.000 |
On my website I store user pictures in a simple manner such as:
"image/user_1.jpg".
I don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com/images/user_2.jpg, www.mydomain.com/images/user_3.jpg, so on...)
So far I have three solutions in mind:
I tried using .htaccess... | 2 | 6 | 1 | 0 | false | 1,623,338 | 1 | 4,621 | 2 | 0 | 0 | 1,623,311 | Any method you choose to determine the source of a request is only as reliable as the HTTP_REFERER information that is sent by the user's browser, which is not very. Requiring authentication is the only good way to protect content. | 1 | 0 | 0 | Restrict access to images on my website except through my own htmls | 5 | php,python,linux,perl | 0 | 2009-10-26T06:06:00.000 |
On my website I store user pictures in a simple manner such as:
"image/user_1.jpg".
I don't want visitors to be able to view images on my server just by trying user_ids. (Ex: www.mydomain.com/images/user_2.jpg, www.mydomain.com/images/user_3.jpg, so on...)
So far I have three solutions in mind:
I tried using .htaccess... | 2 | 2 | 0.07983 | 0 | false | 1,623,325 | 1 | 4,621 | 2 | 0 | 0 | 1,623,311 | You are right considering option #3. Use service script that would validate user and readfile() an image. Be sure to set correct Content-Type HTTP header via header() function prior to serving an image. For better isolation images should be put above web root directory, or protected by well written .htaccess rules - th... | 1 | 0 | 0 | Restrict access to images on my website except through my own htmls | 5 | php,python,linux,perl | 0 | 2009-10-26T06:06:00.000 |
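Option 3 (a gatekeeper script) boils down to: authenticate, then stream the file with an explicit Content-Type. A framework-agnostic Python sketch, where the IMAGES dict and the session argument stand in for a real image directory above the web root and a real auth layer:

```python
# Stand-ins for a real image directory (kept above the web root)
# and a real session/auth layer.
IMAGES = {1: b"\xff\xd8fake-jpeg-bytes-user-1"}

def serve_user_image(requested_id, session_user_id):
    """Return (status, headers, body) like a tiny WSGI handler would."""
    if session_user_id != requested_id:
        return 403, {}, b""                      # not your picture
    body = IMAGES.get(requested_id)
    if body is None:
        return 404, {}, b""
    # Correct Content-Type, as the answer stresses (header() in PHP).
    return 200, {"Content-Type": "image/jpeg"}, body
```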
Is there an easy way to reset a django database (i.e. drop all data/tables, create new tables and create indexes) without loading fixture data afterwords? What I want to have is just an empty database because all data is loaded from another source (a kind of a post-processed backup).
I know that this could be achieved... | 1 | 2 | 1.2 | 0 | true | 1,645,519 | 1 | 1,667 | 1 | 0 | 0 | 1,645,310 | As far as I know, the fixtures (in initial_data file) are automatically loaded after manage.py syndcb and not after reset. So, if you do a manage.py reset yourapp it should not load the fixtures. Hmm? | 1 | 0 | 0 | Django db reset without loading fixtures | 2 | python,database,django,fixtures | 0 | 2009-10-29T17:26:00.000 |
I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single months imports takes about 280mb, but during the import file size swells to over a gb.
Given the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the insert... | 1 | 1 | 0.033321 | 0 | false | 1,652,783 | 0 | 3,989 | 4 | 0 | 0 | 1,650,856 | Is your script executing a single INSERT statement per row of data? If so, pre-processing the data into a text file of many rows that could then be inserted with a single INSERT statement might improve the efficiency and cut down on the accumulating temporary crud that's causing it to bloat.
You might also make sure t... | 1 | 0 | 0 | MS-Access Database getting very large during inserts | 6 | python,ms-access | 0 | 2009-10-30T16:21:00.000 |
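A sketch of that pre-processing step: collapse many rows into a single multi-row INSERT string. The quoting here (doubling single quotes) is only defensible for a trusted local import file, and whether the target engine accepts the multi-row VALUES form is an assumption to check (classic Access/Jet SQL may not, in which case the same builder can emit one statement per row into the batch file):

```python
def multirow_insert(table, columns, rows):
    """Build one INSERT statement covering all rows (generic SQL sketch)."""
    def quote(v):
        if v is None:
            return "NULL"
        if isinstance(v, (int, float)):
            return str(v)
        return "'" + str(v).replace("'", "''") + "'"   # double embedded quotes
    values = ", ".join(
        "(" + ", ".join(quote(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values};"

sql = multirow_insert("imports", ["id", "name"], [(1, "O'Brien"), (2, None)])
```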
I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single months imports takes about 280mb, but during the import file size swells to over a gb.
Given the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the insert... | 1 | -1 | -0.033321 | 0 | false | 31,059,064 | 0 | 3,989 | 4 | 0 | 0 | 1,650,856 | File --> Options --> Current Database -> Check below options
* Use the Cache format that is compatible with Microsoft Access 2010 and later
* Clear Cache on Close
Then, your file will be saved compacted, back to its original size. | 1 | 0 | 0 | MS-Access Database getting very large during inserts | 6 | python,ms-access | 0 | 2009-10-30T16:21:00.000 |
I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single months imports takes about 280mb, but during the import file size swells to over a gb.
Given the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the insert... | 1 | 3 | 0.099668 | 0 | false | 1,650,897 | 0 | 3,989 | 4 | 0 | 0 | 1,650,856 | A common trick, if feasible with regard to the schema and semantics of the application, is to have several MDB files with Linked tables.
Also, the way the insertions take place matters with regards to the way the file size balloons... For example: batched, vs. one/few records at a time, sorted (relative to particular ... | 1 | 0 | 0 | MS-Access Database getting very large during inserts | 6 | python,ms-access | 0 | 2009-10-30T16:21:00.000 |
I have a database which I regularly need to import large amounts of data into via some python scripts. Compacted, the data for a single months imports takes about 280mb, but during the import file size swells to over a gb.
Given the 2gb size limit on mdb files, this is a bit of a concern. Apart from breaking the insert... | 1 | 3 | 0.099668 | 0 | false | 1,651,412 | 0 | 3,989 | 4 | 0 | 0 | 1,650,856 | One thing to watch out for is records which are present in the append queries but aren't inserted into the data due to duplicate key values, null required fields, etc. Access will allocate the space taken by the records which aren't inserted.
About the only significant thing I'm aware of is to ensure you have exclusi... | 1 | 0 | 0 | MS-Access Database getting very large during inserts | 6 | python,ms-access | 0 | 2009-10-30T16:21:00.000 |
I am after a Python module for Google App Engine that abstracts away limitations of the GQL.
Specifically I want to store big files (> 1MB) and retrieve all records for a model (> 1000). I have my own code that handles this at present but would prefer to build on existing work, if available.
Thanks | 0 | 1 | 1.2 | 0 | true | 1,660,404 | 0 | 78 | 1 | 1 | 0 | 1,658,829 | I'm not aware of any libraries that do that. You may want to reconsider what you're doing, at least in terms of retrieving more than 1000 results - those operations are not available because they're expensive, and needing to evade them is usually (though not always) a sign that you need to rearchitect your app to do le... | 1 | 0 | 0 | module to abstract limitations of GQL | 1 | python,google-app-engine,gql | 0 | 2009-11-01T23:54:00.000 |
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.
The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photo... | 3 | 1 | 0.049958 | 0 | false | 1,689,143 | 1 | 1,620 | 4 | 0 | 0 | 1,689,031 | There is always overhead in database calls, in your case the overhead is not that bad because the application and database are on the same machine so there is no network latency but there is still a significant cost.
When you make a request to the database it has to prepare to service that request by doing a number of ... | 1 | 0 | 0 | Overhead of a Round-trip to MySql? | 4 | python,mysql,django,overhead | 0 | 2009-11-06T17:18:00.000 |
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.
The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photo... | 3 | 3 | 0.148885 | 0 | false | 1,689,146 | 1 | 1,620 | 4 | 0 | 0 | 1,689,031 | The overhead of each queries is only part of the picture. The actual round trip time between your Django and Mysql servers is probably very small since most of your queries are coming back in less than a one millisecond. The bigger problem is that the number of queries issued to your database can quickly overwhelm it.... | 1 | 0 | 0 | Overhead of a Round-trip to MySql? | 4 | python,mysql,django,overhead | 0 | 2009-11-06T17:18:00.000 |
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.
The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photo... | 3 | 4 | 0.197375 | 0 | false | 1,689,452 | 1 | 1,620 | 4 | 0 | 0 | 1,689,031 | Just because you are using an ORM doesn't mean that you shouldn't do performance tuning.
I had - like you - a home page of one of my applications that had low performance. I saw that I was doing hundreds of queries to display that page. I went looking at my code and realized that with some careful use of select_relate... | 1 | 0 | 0 | Overhead of a Round-trip to MySql? | 4 | python,mysql,django,overhead | 0 | 2009-11-06T17:18:00.000 |
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL.
The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photo... | 3 | 2 | 1.2 | 0 | true | 1,689,330 | 1 | 1,620 | 4 | 0 | 0 | 1,689,031 | There are some ways to reduce the query volume.
Use .filter() and .all() to get a bunch of things; pick and choose in the view function (or template via {%if%}). Python can process a batch of rows faster than MySQL.
"But I could send too much to the template". True, but you'll execute fewer SQL requests. Measure... | 1 | 0 | 0 | Overhead of a Round-trip to MySql? | 4 | python,mysql,django,overhead | 0 | 2009-11-06T17:18:00.000 |
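The "fewer, fatter queries" point is easy to measure outside Django. Here a small counting wrapper over sqlite3 (purely illustrative, not a Django API) shows the same data costing 100 round trips one way and a single round trip the other:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (user_id INTEGER, title TEXT)")
conn.executemany("INSERT INTO photos VALUES (?, ?)",
                 [(u, f"photo-{u}") for u in range(100)])

query_count = 0
def run(sql, params=()):
    """Execute a query and count it as one round trip."""
    global query_count
    query_count += 1
    return conn.execute(sql, params).fetchall()

# Anti-pattern: one query per user (N round trips).
per_user = [run("SELECT title FROM photos WHERE user_id = ?", (u,))[0][0]
            for u in range(100)]
n_plus_one = query_count

# Better: one query, then pick and choose in Python.
query_count = 0
batch = {uid: title for uid, title in
         run("SELECT user_id, title FROM photos")}
one_shot = query_count
```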
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})
As this is a client-side app, I don't want to use a database server, I just want the info stored into files.
I want the files to be readable from various langua... | 5 | 2 | 0.066568 | 0 | false | 1,697,185 | 0 | 666 | 3 | 0 | 0 | 1,697,153 | BerkeleyDB is good, also look at the *DBM incarnations (e.g. GDBM). The big question though is: for what do you need to search? Do you need to search by that URL, by a range of URLs or the dates you list?
It is also quite possible to keep groups of records as simple files in the local filesystem, grouped by dates ... | 1 | 0 | 0 | Which database should I use to store records, and how should I use it? | 6 | c++,python,database,persistence | 0 | 2009-11-08T17:01:00.000 |
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})
As this is a client-side app, I don't want to use a database server, I just want the info stored into files.
I want the files to be readable from various langua... | 5 | 0 | 0 | 0 | false | 1,698,109 | 0 | 666 | 3 | 0 | 0 | 1,697,153 | Ok, so you say just storing the data..? You really only need a DB for retrieval, lookup, summarising, etc. So, for storing, just use simple text files and append lines. Compress the data if you need to, use delims between fields - just about any language will be able to read such files. If you do want to retrieve, then... | 1 | 0 | 0 | Which database should I use to store records, and how should I use it? | 6 | c++,python,database,persistence | 0 | 2009-11-08T17:01:00.000 |
I'm developing an application that will store a sizeable number of records. These records will be something like (URL, date, title, source, {optional data...})
As this is a client-side app, I don't want to use a database server, I just want the info stored into files.
I want the files to be readable from various langua... | 5 | 2 | 0.066568 | 0 | false | 1,697,239 | 0 | 666 | 3 | 0 | 0 | 1,697,153 | Personally I would use sqlite anyway. It has always just worked for me (and for others I work with). When your app grows and you suddenly do want to do something a little more sophisticated, you won't have to rewrite.
On the other hand, I've seen various comments on the Python dev list about Berkeley DB that suggest i... | 1 | 0 | 0 | Which database should I use to store records, and how should I use it? | 6 | c++,python,database,persistence | 0 | 2009-11-08T17:01:00.000 |
Are there database testing tools for python (like sqlunit)? I want to test the DAL that is built using sqlalchemy | 4 | 4 | 1.2 | 0 | true | 1,719,347 | 0 | 601 | 1 | 0 | 0 | 1,719,279 | Follow the design pattern that Django uses.
Create a disposable copy of the database. Use SQLite3 in-memory, for example.
Create the database using the SQLAlchemy table and index definitions. This should be a fairly trivial exercise.
Load the test data fixture into the database.
Run your unit test case in a databa... | 1 | 0 | 0 | Are there database testing tools for python (like sqlunit)? | 1 | python,database,testing,sqlalchemy | 1 | 2009-11-12T01:27:00.000 |
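The recipe above with the stdlib alone; with SQLAlchemy you would instead point your metadata's create_all() at an sqlite:///:memory: engine, but the shape is the same (the schema, fixture, and DAL function under test are all illustrative):

```python
import sqlite3

SCHEMA = "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)"
FIXTURE = [(1, 100), (2, 250)]

def make_test_db():
    """Fresh, disposable in-memory database per test case."""
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)                                   # create tables
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", FIXTURE)  # load fixture
    return conn

# The DAL function under test (illustrative).
def total_balance(conn):
    return conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]

conn = make_test_db()
```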
But they could not be found!?
How do I install both of them? | 1 | 2 | 1.2 | 0 | true | 1,720,904 | 0 | 730 | 1 | 0 | 0 | 1,720,867 | Have you installed python-mysqldb? If not, install it using apt-get install python-mysqldb. And how are you importing mysql? Is it import MySQLdb? Python is case-sensitive. | 1 | 0 | 0 | I just installed an Ubuntu Hardy server. In Python, I tried to import _mysql and MySQLdb | 3 | python,linux,unix,installation | 0 | 2009-11-12T09:01:00.000 |
I'm writing an application in Python with Postgresql 8.3 which runs on several machines on a local network.
All machines
1) fetch a huge amount of data from the database server (let's say the database gets 100 different queries from one machine within 2 seconds) and there are about 10 or 11 machines doing that.
2) After ... | 2 | 1 | 0.099668 | 0 | false | 1,729,623 | 0 | 1,607 | 1 | 0 | 0 | 1,728,350 | This sounds a bit like your DB server might have some problems, especially if your database server literally crashes. I'd start by trying to figure out from logs what is the root cause of the problems. It could be something like running out of memory, but it could also happen because of faulty hardware.
If you're openi... | 1 | 0 | 0 | Optimal / best practice to maintain continuous connection between Python and Postgresql using Psycopg2 | 2 | python,linux,performance,postgresql,out-of-memory | 0 | 2009-11-13T10:18:00.000 |
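One way to stop reconnecting per query is a small pool of long-lived connections per process. A minimal sketch, with sqlite3 standing in for psycopg2 so it is self-contained (psycopg2 also ships a psycopg2.pool module built on the same idea):

```python
import queue
import sqlite3

class TinyPool:
    """Hand out long-lived connections instead of reconnecting per query."""
    def __init__(self, make_conn, size=4):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(make_conn())
        self.created = size            # connections opened exactly once

    def acquire(self):
        return self._q.get()           # blocks if all connections are in use

    def release(self, conn):
        self._q.put(conn)

pool = TinyPool(lambda: sqlite3.connect(":memory:"), size=2)
c = pool.acquire()
one = c.execute("SELECT 1").fetchone()[0]
pool.release(c)
```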
I recently switched to Mac. First and foremost I installed XAMPP.
Then, for Django-Python-MySQL connectivity, I "somehow" ended up installing a separate MySQL.
Now the separate MySQL installation is active all the time and the XAMPP one doesn't switch on unless I kill the other one.
What I wanted to know: is it possible t... | 0 | 1 | 1.2 | 0 | true | 1,734,939 | 1 | 150 | 1 | 0 | 0 | 1,734,918 | You could change the listening port of one of the installations and they shouldn't conflict anymore with each other.
Update: You need to find the mysql configuration file my.cnf of the server which should get a new port (the one from xampp should be somewhere in the xampp folder). Find the line port=3306 in the [mysqld... | 1 | 0 | 0 | 2 mysql instances in MAC | 1 | python,mysql,django,macos,xampp | 0 | 2009-11-14T17:22:00.000 |
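Concretely, that means giving one of the two MySQL servers its own port (and ideally its own socket file) in its my.cnf. A fragment along these lines, where 3307 and the socket path are arbitrary choices:

```ini
# my.cnf for the second MySQL instance -- any free port works
[mysqld]
port   = 3307
socket = /tmp/mysql_second.sock

[client]
port   = 3307
socket = /tmp/mysql_second.sock
```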
How do I load data from an Excel sheet into my Django application? I'm using database PosgreSQL as the database.
I want to do this programmatically. A client wants to load two different lists onto the website weekly and they don't want to do it in the admin section, they just want the lists loaded from an Excel sheet. ... | 3 | -1 | -0.022219 | 0 | false | 11,293,612 | 1 | 7,027 | 1 | 0 | 0 | 1,747,501 | Just started using XLRD and it looks very easy and simple to use.
Beware that it does not support Excel 2007 yet, so keep in mind to save your Excel file in the 2003 (.xls) format. | 1 | 0 | 0 | Getting data from an Excel sheet | 9 | python,django,excel,postgresql | 0 | 2009-11-17T09:07:00.000 |
Greetings, everybody.
I'm trying to import the following libraries in python: cx_Oracle and kinterbasdb.
But, when I try, I get a very similar message error.
*for cx_Oracle:
Traceback (most recent call last):
File "", line 1, in
ImportError: DLL load failed: Não foi possível encontrar o procedimento especificado.
(t... | 2 | -1 | -0.197375 | 0 | false | 1,803,407 | 0 | 767 | 1 | 0 | 0 | 1,799,475 | Oracle is a complete pain. I don't know the details for Windows, but for Unix you need ORACLE_HOME and LD_LIBRARY_PATH to both be defined before cx_Oracle will work. In Windows these would be your environment variables, I guess. So check those.
Also, check that they are defined in the environment in which the program... | 1 | 1 | 0 | importing cx_Oracle and kinterbasdb returns error | 1 | python,cx-oracle,kinterbasdb | 0 | 2009-11-25T19:43:00.000 |
I'm creating a financial app and it seems my floats in sqlite are floating around. Sometimes a 4.0 will be a 4.000009, and a 6.0 will be a 6.00006, things like that. How can I make these more exact and not affect my financial calculations?
Values are coming from Python if that matters. Not sure which area the messed ... | 4 | 1 | 0.033321 | 0 | false | 1,801,521 | 0 | 3,672 | 1 | 0 | 0 | 1,801,307 | Most people would probably use Decimal for this, however if this doesn't map onto a database type you may take a performance hit.
If performance is important you might want to consider using Integers to represent an appropriate currency unit - often cents or tenths of cents is ok.
There should be business rules about h... | 1 | 0 | 0 | How to deal with rounding errors of floating types for financial calculations in Python SQLite? | 6 | python,sqlite,floating-point | 0 | 2009-11-26T02:55:00.000 |
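Both suggestions are visible in a few lines: binary floats cannot represent most decimal fractions, while decimal.Decimal, or plain integer cents stored in an SQLite INTEGER column, stay exact:

```python
from decimal import Decimal

# Floats drift: the classic example.
float_sum = 0.1 + 0.2                          # not exactly 0.3

# Decimal arithmetic on decimal literals is exact.
dec_sum = Decimal("0.1") + Decimal("0.2")

# Integer cents: store 4.00 as 400, format only for display.
cents = 400 + 600                              # 4.00 + 6.00
display = f"{cents // 100}.{cents % 100:02d}"
```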
I have downloaded mysqlDb, and while installing it I am getting errors like:
C:\Documents and Settings\naresh\Desktop\MySQL-python-1.2.3c1>setup.py build
Traceback (most recent call last):
File "C:\Documents and Settings\naresh\Desktop\MySQL-python-1.2.3c1
\setup.py",line15, in
metadata, options = get_config()... | 7 | 0 | 0 | 0 | false | 6,616,901 | 0 | 6,706 | 1 | 0 | 0 | 1,803,233 | You need to fire up regedit and make
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\InstallPath
and HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\InstallPath\InstallGroup
look like HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.7\InstallPath\InstallGroup. | 1 | 0 | 0 | How to install mysql connector | 2 | python,mysql | 0 | 2009-11-26T11:46:00.000 |
We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.
Do you know if it's possible to keep the NHibernate stuff and make it work with Python? I am under the im... | 0 | 0 | 0 | 0 | false | 1,809,219 | 0 | 1,382 | 3 | 0 | 0 | 1,809,201 | Check out Django. They have a nice ORM and I believe it has tools to attempt to reverse-engineer models from the DB schema. | 1 | 0 | 0 | NHibernate and python | 4 | python,nhibernate,orm | 0 | 2009-11-27T14:50:00.000
We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.
Do you know if it's possible to keep the NHibernate stuff and make it work with Python? I am under the im... | 0 | 2 | 0.099668 | 0 | false | 1,809,238 | 0 | 1,382 | 3 | 0 | 0 | 1,809,201 | What about running your project under Mono on Linux? Mono seems to support NHibernate, which means you may be able to get away without rewriting large chunks of your application.
Also, if you really wanted to get Python in on the action, you could use IronPython along with Mono. | 1 | 0 | 0 | NHibernate and python | 4 | python,nhibernate,orm | 0 | 2009-11-27T14:50:00.000 |
We have an existing C# project based on NHibernate and WPF. I am asked to convert it to Linux and to consider other implementation like Python. But for some reason, they like NHibernate a lot and want to keep it.
Do you know if it's possible to keep the NHibernate stuff and make it work with Python? I am under the im... | 0 | 5 | 0.244919 | 0 | false | 1,809,266 | 0 | 1,382 | 3 | 0 | 0 | 1,809,201 | NHibernate is not specific to C#, but it is specific to .NET.
IronPython is a .NET language from which you could use NHibernate.
.NET and NHibernate can run on Linux through Mono. I'm not sure how good Mono's support is for WPF.
I'm not sure if IronPython runs on Linux, but that would seem to be the closest thing to w... | 1 | 0 | 0 | NHibernate and python | 4 | python,nhibernate,orm | 0 | 2009-11-27T14:50:00.000 |
I recently created a script that parses several web proxy logs into a tidy sqlite3 db file that is working great for me... with one snag: the file size. I have been pressed to use this format (a sqlite3 db) and Python handles it natively like a champ, so my question is this... what is the best form of string compres... | 0 | 0 | 0 | 0 | false | 1,829,601 | 0 | 2,957 | 2 | 0 | 0 | 1,829,256 | What sort of parsing do you do before you put it in the database? I get the impression that it is fairly simple, with a single table holding each entry - if not, then my apologies.
Compression is all about removing duplication, and in a log file most of the duplication is between entries rather than within each entry s... | 1 | 0 | 0 | Python 3: Best string compression method to minimize the size of a sqlite3 db | 3 | python,sqlite,compression | 0 | 2009-12-01T22:02:00.000 |
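To illustrate the point about duplication living between entries rather than inside them, here is a small sketch (the sample log line is made up) comparing zlib on one entry versus a batch of similar entries:

```python
import zlib

# Typical proxy-log lines: individually short, but highly repetitive as a group.
entry = b"2009-12-01 10:00:01 GET http://example.com/index.html 200\n"
batch = entry * 200

one = len(zlib.compress(entry))    # compressing a single entry buys little
many = len(zlib.compress(batch))   # compressing the batch exploits repetition

print("one entry: %d -> %d bytes" % (len(entry), one))
print("200 entries: %d -> %d bytes" % (len(batch), many))
# Compressing entries one at a time would cost roughly 200 * one bytes,
# far more than compressing the batch as a whole.
```

This is why per-row compression inside a database rarely helps much, while compressing larger chunks (or the whole file/filesystem) does.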
I recently created a script that parses several web proxy logs into a tidy sqlite3 db file that is working great for me... with one snag: the file size. I have been pressed to use this format (a sqlite3 db) and Python handles it natively like a champ, so my question is this... what is the best form of string compres... | 0 | 0 | 0 | 0 | false | 1,832,688 | 0 | 2,957 | 2 | 0 | 0 | 1,829,256 | Instead of inserting compression/decompression code into your program, you could store the table itself on a compressed drive. | 1 | 0 | 0 | Python 3: Best string compression method to minimize the size of a sqlite3 db | 3 | python,sqlite,compression | 0 | 2009-12-01T22:02:00.000
I'm using the sqlite3 module in Python 2.6.4 to store a datetime in a SQLite database. Inserting it is very easy, because sqlite automatically converts the date to a string. The problem is, when reading it it comes back as a string, but I need to reconstruct the original datetime object. How do I do this? | 84 | 1 | 0.066568 | 0 | false | 48,429,766 | 0 | 58,748 | 1 | 0 | 0 | 1,829,872 | Note: In Python3, I had to change the SQL to something like:
SELECT jobid, startedTime as "st [timestamp]" FROM job
(I had to explicitly name the column.) | 1 | 0 | 1 | How to read datetime back from sqlite as a datetime instead of string in Python? | 3 | python,datetime,sqlite | 0 | 2009-12-02T00:15:00.000 |
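For completeness, here is how that column alias works together with the sqlite3 module's type detection (a minimal sketch; the table layout follows the SQL in the answer above):

```python
import sqlite3
import datetime

# PARSE_DECLTYPES converts columns declared with type "timestamp";
# PARSE_COLNAMES converts columns aliased as 'name [timestamp]' in the query.
conn = sqlite3.connect(
    ":memory:",
    detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES,
)
cur = conn.cursor()
cur.execute("CREATE TABLE job (jobid INTEGER, startedTime timestamp)")
cur.execute("INSERT INTO job VALUES (?, ?)",
            (1, datetime.datetime(2009, 12, 2, 0, 15)))

cur.execute('SELECT jobid, startedTime AS "st [timestamp]" FROM job')
jobid, started = cur.fetchone()
print(type(started))  # a datetime.datetime, not a str
```

Without detect_types, the same SELECT returns the stored ISO-format string and you would have to parse it yourself.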
I want to use Python-MySQLDB library on Mac so I have compiled the source code to get the _mysql.so under Mac10.5 with my Intel iMac (i386)
This _mysql.so works in 2 of my iMacs and another MacBook. But that's it, it doesn't work in any other Macs.
Does this mean some machine specific info got compiled into the file? | 0 | 2 | 0.197375 | 0 | false | 1,832,065 | 0 | 106 | 1 | 0 | 0 | 1,831,979 | If you've only built one architecture (i386 / PPC) then it won't work on Macs with the opposite architecture. Are the machines that don't work PPC machines, by any chance?
Sometimes build configurations are set up to build only the current architecture by default - I haven't built Python-MySQLDB so I'm not sure if this... | 1 | 0 | 0 | Why _mysql.co that compiled on one Mac doesn't work on another? | 2 | python,compilation,mysql | 0 | 2009-12-02T10:21:00.000
Does there exist, or is there an intention to create, a universal database frontend for Python like Perl's DBI? I am aware of Python's DB-API, but all the separate packages are leaving me somewhat aggravated. | 1 | 2 | 0.197375 | 0 | false | 1,836,125 | 0 | 1,158 | 1 | 0 | 0 | 1,836,061 | Well...DBAPI is that frontend:
This API has been defined to encourage similarity between the
Python modules that are used to access databases. By doing this,
we hope to achieve a consistency leading to more easily understood
modules, code that is generally more portable across databases,
and a broader reach o... | 1 | 0 | 0 | Python universal database interface? | 2 | python,database | 0 | 2009-12-02T21:49:00.000 |
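In practice that quoted goal means most drivers share the same connect/cursor/execute shape. A sketch using the bundled sqlite3 driver (swapping the connect() call is often the main change when moving to MySQLdb or psycopg2, apart from the paramstyle):

```python
import sqlite3

# The same four steps apply to any DB-API 2.0 driver:
conn = sqlite3.connect(":memory:")   # 1. connect
cur = conn.cursor()                  # 2. get a cursor
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (?)", (42,))  # 3. execute (placeholder style varies)
conn.commit()
print(cur.execute("SELECT x FROM t").fetchall())  # 4. fetch results
```

Libraries like SQLAlchemy build on exactly this uniformity to paper over the remaining driver differences.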
I have an Excel spreadsheet with calculations I would like to use in a Django web application. I do not need to present the spreadsheet as it appears in Excel. I only want to use the formulae embedded in it. What is the best way to do this? | 2 | 0 | 0 | 0 | false | 1,937,261 | 1 | 1,592 | 1 | 0 | 0 | 1,883,098 | You need to use Excel to calculate the results? I mean, maybe you could run the Excel sheet from OpenOffice and use a pyUNO macro, which is somehow "native" python.
A different approach would be to create a macro that generates more Python-friendly code; if you want Excel to perform the calculation, that part is easy - you end... | 1 | 0 | 0 | Importing Excel sheets, including formulae, into Django | 4 | python,django,excel | 0 | 2009-12-10T18:39:00.000
PHP provides mysql_connect() and mysql_pconnect() which allow creating both temporary and persistent database connections.
Is there a similar functionality in Python? The environment on which this will be used is lighttpd server with FastCGI.
Thank you! | 2 | 0 | 0 | 0 | false | 1,895,731 | 0 | 3,649 | 1 | 0 | 0 | 1,895,089 | Note: Persistent connections can have a very negative effect on your system performance. If you have a large number of web server processes all holding persistent connections to your DB server you may exhaust the DB server's limit on connections. This is one of those areas where you need to test it under heavy simulate... | 1 | 0 | 0 | Persistent MySQL connections in Python | 2 | python,mysql,web-services | 0 | 2009-12-12T23:49:00.000 |
I am using the MySQLdb module of Python on an FC11 machine. Here, I have an issue. I have the following implementation for one of our requirements:
connect to MySQL and get a DB handle, open a cursor, execute a delete statement, commit and then close the cursor.
Again using the DB handle above, I am performing a "select" stateme... | 0 | 0 | 0 | 0 | false | 1,922,710 | 0 | 626 | 3 | 0 | 0 | 1,922,623 | With no code, I can only make a guess: try not closing the cursor until you are done with that connection. I think that calling cursor() again after calling cursor.close() will just give you a reference to the same cursor, which can no longer be used for queries.
I am not 100% sure if that is the intended behavior, b... | 1 | 0 | 0 | MYSQLDB python module | 3 | python,mysql | 0 | 2009-12-17T15:42:00.000 |
I am using the MySQLdb module of Python on an FC11 machine. Here, I have an issue. I have the following implementation for one of our requirements:
connect to MySQL and get a DB handle, open a cursor, execute a delete statement, commit and then close the cursor.
Again using the DB handle above, I am performing a "select" stateme... | 0 | 0 | 0 | 0 | false | 1,924,766 | 0 | 626 | 3 | 0 | 0 | 1,922,623 | It sounds as though the first cursor is being returned back to the second step. | 1 | 0 | 0 | MYSQLDB python module | 3 | python,mysql | 0 | 2009-12-17T15:42:00.000
I have both Django and MySQL set to work with UTF-8.
My base.html sets UTF-8 in the head.
Row in my db:
+----+--------+------------------------------------------------------------------+-----------------------------+-----------------------------+---------------------+
| id | psn_id | name ... | 0 | 0 | 0 | 0 | false | 1,931,067 | 1 | 312 | 1 | 0 | 0 | 1,928,087 | As Dominic has said, the generated HTML source code is correct (these are your Japanese characters translated into HTML entities), but we're not sure if you see the same code rendered in the page (in this case, you have probably set content-type to "text/plain" instead of "text/html" - do you use render_to_response() ...
I'd like to get busy with a winter programming project and am contemplating writing an online word game (with a server load of up to, say, 500 users simultaneously). I would prefer it to be platform independent. I intend to use Python, which I have some experience with. For user data storage, after previous experience ... | 2 | 2 | 0.07983 | 0 | false | 1,937,342 | 0 | 1,430 | 2 | 0 | 0 | 1,937,286 | Is it worth starting with Python 3, or is it still too poorly supported with ports of modules from previous versions?
It depends on which modules you want to use. Twisted is a "Swiss Army knife" for network programming and could be a choice for your project, but unfortunately it does not support Python 3 yet.
Are there ... | 1 | 0 | 0 | Word game server in Python, design pros and cons? | 5 | python | 0 | 2009-12-20T22:18:00.000 |
I'd like to get busy with a winter programming project and am contemplating writing an online word game (with a server load of up to, say, 500 users simultaneously). I would prefer it to be platform independent. I intend to use Python, which I have some experience with. For user data storage, after previous experience ... | 2 | 1 | 0.039979 | 0 | false | 1,937,370 | 0 | 1,430 | 2 | 0 | 0 | 1,937,286 | Related to your database choice, I'd seriously look at using Postgres instead of MySQL. In my experience with the two, Postgres has shown itself to be faster on most write operations while MySQL is slightly faster on reads.
However, MySQL also has many issues, some of which are:
Live backups are difficult at best, and i... | 1 | 0 | 0 | Word game server in Python, design pros and cons? | 5 | python | 0 | 2009-12-20T22:18:00.000 |
I worked on a PHP project earlier where prepared statements made the SELECT queries 20% faster.
I'm wondering if it works on Python? I can't seem to find anything that specifically says it does or does NOT. | 51 | 5 | 0.141893 | 0 | false | 2,539,467 | 0 | 52,874 | 1 | 0 | 0 | 1,947,750 | Using the SQL Interface as suggested by Amit can work if you're only concerned about performance. However, you then lose the protection against SQL injection that a native Python support for prepared statements could bring. Python 3 has modules that provide prepared statement support for PostgreSQL. For MySQL, "ours... | 1 | 0 | 0 | Does Python support MySQL prepared statements? | 7 | python,mysql,prepared-statement | 0 | 2009-12-22T17:06:00.000 |
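Whether or not a driver prepares statements server-side, DB-API parameter binding gives you the injection protection the answer mentions. A sketch with the stdlib sqlite3 driver (MySQLdb works the same way but uses %s placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Never build SQL by string interpolation; bind values instead.
hostile = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (hostile,)
).fetchall()
print(rows)  # [] -- the hostile string is treated as data, not SQL
```

Had the hostile string been pasted into the SQL text directly, the OR clause would have matched every row.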
I am using Python 2.6 + xlwt module to generate excel files.
Is it possible to include an autofilter in the first row with xlwt or pyExcelerator or anything else besides COM?
Thanks | 6 | 2 | 0.132549 | 0 | false | 20,838,509 | 0 | 6,820 | 1 | 0 | 0 | 1,948,224 | I have the same issue, running a Linux server.
I'm going to check creating an ODS or XLSX file with auto-filter by other means, and then convert it with a LibreOffice command line to "xls". | 1 | 0 | 0 | How to create an excel file with an autofilter in the first row with xlwt? | 3 | python,excel,xlwt,pyexcelerator | 0 | 2009-12-22T18:21:00.000
I need a documentation system for a PHP project and I wanted it to be able to integrate external documentation (use cases, project scope etc.) with the documentation generated from code comments. It seems that phpDocumentor has exactly the right feature set, but external documentation must be written in DocBook which i... | 0 | 2 | 1.2 | 0 | true | 2,035,342 | 0 | 853 | 1 | 0 | 0 | 1,957,787 | You can convert ReST to DocBook using pandoc. | 1 | 0 | 1 | External documentation for PHP, no DocBook | 2 | php,phpdoc,docbook,restructuredtext,python-sphinx | 0 | 2009-12-24T10:37:00.000 |
If I want to be able to test my application against an empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user which refers to an empty (not saved anywhere, or saved to /tmp) MySQL database?
My application is in Python, and I'm using unittest on Ubuntu 9.10. | 2 | 0 | 0 | 0 | false | 1,960,164 | 0 | 287 | 2 | 0 | 0 | 1,960,155 | You can try the Blackhole and Memory table types in MySQL. | 1 | 0 | 0 | Start a "throwaway" MySQL session for testing code? | 2 | python,mysql,unit-testing,ubuntu | 1 | 2009-12-25T00:25:00.000 |
If I want to be able to test my application against an empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user which refers to an empty (not saved anywhere, or saved to /tmp) MySQL database?
My application is in Python, and I'm using unittest on Ubuntu 9.10. | 2 | 1 | 1.2 | 0 | true | 1,960,160 | 0 | 287 | 2 | 0 | 0 | 1,960,155 | Use --datadir for just the data, or --basedir. | 1 | 0 | 0 | Start a "throwaway" MySQL session for testing code? | 2 | python,mysql,unit-testing,ubuntu | 1 | 2009-12-25T00:25:00.000
I'm looking for a way to automate schema migration for databases such as MongoDB or CouchDB.
Preferably, this instrument should be written in Python, but any other language is ok. | 27 | 19 | 1.2 | 0 | true | 3,007,620 | 0 | 5,990 | 4 | 0 | 0 | 1,961,013 | Since a NoSQL database can contain huge amounts of data you cannot migrate it in the regular RDBMS sense. Actually you can't do it for an RDBMS either, as soon as your data passes some size threshold. It is impractical to bring your site down for a day to add a field to an existing table, and so with an RDBMS you end up doi...
I'm looking for a way to automate schema migration for databases such as MongoDB or CouchDB.
Preferably, this instrument should be written in Python, but any other language is ok. | 27 | 2 | 0.099668 | 0 | false | 1,961,090 | 0 | 5,990 | 4 | 0 | 0 | 1,961,013 | One of the supposed benefits of these databases is that they are schemaless, and therefore don't need schema migration tools. Instead, you write your data handling code to deal with the variety of data stored in the db.
I'm looking for a way to automate schema migration for databases such as MongoDB or CouchDB.
Preferably, this instrument should be written in Python, but any other language is ok. | 27 | 2 | 0.099668 | 0 | false | 1,966,375 | 0 | 5,990 | 4 | 0 | 0 | 1,961,013 | If your data are sufficiently big, you will probably find that you cannot EVER migrate the data, or that it is not beneficial to do so. This means that when you do a schema change, the code needs to continue to be backwards compatible with the old formats forever.
Of course if your data "age" and eventually expire anyw... | 1 | 0 | 0 | Are there any tools for schema migration for NoSQL databases? | 4 | python,mongodb,couchdb,database,nosql | 0 | 2009-12-25T11:23:00.000 |
I'm looking for a way to automate schema migration for databases such as MongoDB or CouchDB.
Preferably, this instrument should be written in Python, but any other language is ok. | 27 | 1 | 0.049958 | 0 | false | 3,007,685 | 0 | 5,990 | 4 | 0 | 0 | 1,961,013 | A project needing a schema migration for a NoSQL database makes me think that you are still thinking in a relational-database manner, but using a NoSQL database.
If anybody is going to start working with NoSQL databases, you need to realize that most of the 'rules' for an RDBMS (e.g. MySQL) need to... | 1 | 0 | 0 | Are there any tools for schema migration for NoSQL databases? | 4 | python,mongodb,couchdb,database,nosql | 0 | 2009-12-25T11:23:00.000
Python --> SQLite --> ASP.NET C#
I am looking for an in memory database application that does not have to write the data it receives to disc. Basically, I'll be having a Python server which receives gaming UDP data and translates the data and stores it in the memory database engine.
I want to stay away from writing to ... | 0 | 1 | 1.2 | 0 | true | 1,977,499 | 0 | 309 | 2 | 0 | 0 | 1,962,130 | This sounds like a premature optimization (apologies if you've already done the profiling). What I would suggest is go ahead and write the system in the simplest, cleanest way, but put a bit of abstraction around the database bits so they can easily be swapped out. Then profile it and find your bottleneck.
If it tur... | 1 | 0 | 0 | In memory database with socket capability | 5 | asp.net,python,sqlite,networking,udp | 0 | 2009-12-25T22:47:00.000 |
Python --> SQLite --> ASP.NET C#
I am looking for an in memory database application that does not have to write the data it receives to disc. Basically, I'll be having a Python server which receives gaming UDP data and translates the data and stores it in the memory database engine.
I want to stay away from writing to ... | 0 | 0 | 0 | 0 | false | 1,962,162 | 0 | 309 | 2 | 0 | 0 | 1,962,130 | The application of SQLite depends on your data complexity.
If you need to perform complex queries on relational data, then it might be a viable option. If your data is flat (i.e. not relational) and processed as a whole, then some python-internal data structures might be applicable. | 1 | 0 | 0 | In memory database with socket capability | 5 | asp.net,python,sqlite,networking,udp | 0 | 2009-12-25T22:47:00.000 |
I'm trying to install the module mySQLdb on a Windows Vista 64 (AMD) machine.
I've installed Python in a folder other than the one suggested by the Python installer.
When I try to install the .exe mySQLdb installer, it can't find Python 2.5 and it halts the installation.
Is there any way to supply the installer with the c... | 0 | 0 | 0 | 0 | false | 2,179,175 | 0 | 431 | 1 | 1 | 0 | 1,980,454 | Did you use an egg?
If so, Python might not be able to find it.
import os,sys
os.environ['PYTHON_EGG_CACHE'] = 'C:/temp'
sys.path.append('C:/path/to/MySQLdb.egg') | 1 | 0 | 1 | Problem installing MySQLdb on windows - Can't find python | 1 | python,windows-installer,mysql | 0 | 2009-12-30T14:24:00.000 |
I'm using cherrypy's standalone server (cherrypy.quickstart()) and sqlite3 for a database.
I was wondering how one would do ajax/jquery asynchronous calls to the database while using cherrypy? | 1 | 2 | 1.2 | 0 | true | 2,015,344 | 1 | 3,741 | 1 | 0 | 0 | 2,015,065 | The same way you would do them using any other webserver - by getting your javascript to call a URL which is handled by the server-side application. | 1 | 0 | 0 | How does one do async ajax calls using cherrypy? | 2 | jquery,python,ajax,asynchronous,cherrypy | 0 | 2010-01-06T17:57:00.000 |
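A minimal sketch of that pattern (function and URL names are made up): the server-side piece is an ordinary function that queries sqlite3 and returns JSON. Under CherryPy you would expose it with @cherrypy.expose (or cherrypy.tools.json_out), and the page would hit it with something like jQuery's $.getJSON('/items', callback).

```python
import json
import os
import sqlite3
import tempfile

def items_json(db_path):
    """Body of the AJAX endpoint: query sqlite3 and return a JSON string.
    Under CherryPy this would live inside an exposed handler method."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT id, name FROM items ORDER BY id").fetchall()
    conn.close()
    return json.dumps([{"id": r[0], "name": r[1]} for r in rows])

# Set up a throwaway database to exercise the handler body.
fd, db_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.execute("INSERT INTO items VALUES (1, 'apple')")
conn.commit()
conn.close()

result = items_json(db_path)
print(result)  # the string a jQuery $.getJSON callback would receive
os.remove(db_path)
```

The call is "asynchronous" purely on the browser side; to CherryPy it is just another HTTP request.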
I'm making a trivia webapp that will feature both standalone questions, and 5+ question quizzes. I'm looking for suggestions for designing this model.
Should a quiz and its questions be stored in separate tables/objects, with a key to tie them together, or am I better off creating the quiz as a standalone entity, with... | 1 | 1 | 0.039979 | 0 | false | 2,017,958 | 1 | 642 | 2 | 0 | 0 | 2,017,930 | My first cut (I assumed the questions were multiple choice):
I'd have a table of Questions, with ID_Question as the PK, the question text, and a category (if you want).
I'd have a table of Answers, with ID_Answer as the PK, QuestionID as a FK back to the Questions table, the answer text, and a flag as to whether it's ... | 1 | 0 | 0 | Database Design Inquiry | 5 | python,database-design,google-app-engine,schema | 0 | 2010-01-07T02:56:00.000 |
I'm making a trivia webapp that will feature both standalone questions, and 5+ question quizzes. I'm looking for suggestions for designing this model.
Should a quiz and its questions be stored in separate tables/objects, with a key to tie them together, or am I better off creating the quiz as a standalone entity, with... | 1 | 0 | 0 | 0 | false | 2,017,943 | 1 | 642 | 2 | 0 | 0 | 2,017,930 | Have a table of questions, a table of quizzes and a mapping table between them. That will give you the most flexibility. This is simple enough that you wouldn't even necessarily need a whole relational database management system. I think people tend to forget that relations are pretty simple mathematical/logical concep... | 1 | 0 | 0 | Database Design Inquiry | 5 | python,database-design,google-app-engine,schema | 0 | 2010-01-07T02:56:00.000 |
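A sketch of that mapping-table layout in SQLite (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE questions (id INTEGER PRIMARY KEY, text TEXT);
    CREATE TABLE quizzes   (id INTEGER PRIMARY KEY, title TEXT);
    -- Mapping table: a question can appear in many quizzes and vice versa.
    CREATE TABLE quiz_questions (
        quiz_id     INTEGER REFERENCES quizzes(id),
        question_id INTEGER REFERENCES questions(id),
        position    INTEGER,
        PRIMARY KEY (quiz_id, question_id)
    );
""")
conn.execute("INSERT INTO questions VALUES (1, 'Capital of France?')")
conn.execute("INSERT INTO quizzes VALUES (1, 'Geography')")
conn.execute("INSERT INTO quiz_questions VALUES (1, 1, 1)")

rows = conn.execute("""
    SELECT q.text FROM questions q
    JOIN quiz_questions m ON m.question_id = q.id
    WHERE m.quiz_id = 1 ORDER BY m.position
""").fetchall()
print(rows)
```

Standalone questions are simply rows in `questions` with no `quiz_questions` entry, so no schema change is needed to support both.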
Two questions:
I want to generate a view in my PostGIS DB. How do I add this view to my geometry_columns table?
What do I have to do to use a view with SQLAlchemy? Is there a difference between a table and a view to SQLAlchemy, or could I use a view the same way I use a table?
Sorry for my poor English.
If... | 2 | 4 | 1.2 | 0 | true | 2,027,143 | 0 | 1,758 | 1 | 0 | 0 | 2,026,475 | Table objects in SQLAlchemy have two roles. They can be used to issue DDL commands to create the table in the database. But their main purpose is to describe the columns and types of tabular data that can be selected from and inserted to.
If you only want to select, then a view looks to SQLAlchemy exactly like a regula... | 1 | 0 | 0 | Work with Postgres/PostGIS View in SQLAlchemy | 1 | python,postgresql,sqlalchemy,postgis | 0 | 2010-01-08T09:05:00.000 |
Hmm, is there any reason why SQLAlchemy tries to add Nones for varchar columns that have defaults set in the database schema? It doesn't do that for floats or ints (I'm using reflection).
So when I try to add a new row:
like
u = User()
u.foo = 'a'
u.bar = 'b'
SQLAlchemy issues a query that has a lot more cols with None values assigne... | 0 | 0 | 1.2 | 0 | true | 2,037,291 | 0 | 900 | 1 | 0 | 0 | 2,036,996 | I've found it's a bug in SQLAlchemy: this happens only for string fields; they don't get the server_default property for some unknown reason. I've filed a ticket for this already | 1 | 0 | 0 | Problem with sqlalchemy, reflected table and defaults for string fields | 2 | python,sqlalchemy | 0 | 2010-01-10T12:28:00.000
I have been in the RDBMS world for many years now but wish to explore the whole NoSQL movement. So here's my first question:
Is it bad practice to have the possibility of duplicate keys? For example, an address book keyed off of last name (most probably the search item?) could have multiple entities. Is it bad practice ... | 1 | 1 | 1.2 | 0 | true | 2,384,015 | 0 | 597 | 1 | 0 | 0 | 2,068,473 | This depends on the NoSQL implementation. Cassandra, for example, allows range queries, so you could model data to do queries on last name, or with full name (starting with last name, then first name).
Beyond this, many simpler key-value stores would indeed require you to store a list structure (or such) for multi-valued e... | 1 | 0 | 0 | key/value (general) and tokyo cabinet (python tc-specific) question | 2 | python,tokyo-cabinet | 0 | 2010-01-15T00:04:00.000 |
I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading/writing to disk.
Now I have a script that runs at basically the sa... | 2 | 1 | 0.066568 | 0 | false | 2,077,129 | 0 | 917 | 3 | 0 | 0 | 2,076,582 | It is "well known", so to speak, that svn update waits up to a whole second after it has finished running, so that file modification timestamps get "in the past" (since many filesystems don't have a timestamp granularity finer than one second). You can find more information about it by Googling for "svn sleep_for_times... | 1 | 0 | 0 | Finding the performance bottleneck in a Python and MySQL script | 3 | python,mysql,performance,svn | 0 | 2010-01-16T07:40:00.000 |
I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading/writing to disk.
Now I have a script that runs at basically the sa... | 2 | 4 | 1.2 | 0 | true | 2,076,639 | 0 | 917 | 3 | 0 | 0 | 2,076,582 | Doing SQL queries in a for loop 15k times is a bottleneck in every language.
Is there any reason you query every time again? If you do a single query before the for loop and then loop over the resultset and the SVN part, you will see a dramatic increase in speed.
But I doubt that you will get a higher CPU usage. The... | 1 | 0 | 0 | Finding the performance bottleneck in a Python and MySQL script | 3 | python,mysql,performance,svn | 0 | 2010-01-16T07:40:00.000 |
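A sketch of the difference in shape, using sqlite3 as a stand-in for the MySQL calls (the hoisting is what matters, not the driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revisions (rev INTEGER)")
conn.executemany("INSERT INTO revisions VALUES (?)",
                 [(i,) for i in range(15000)])

# Slow shape: one round trip per iteration.
# for i in range(15000):
#     row = conn.execute("SELECT rev FROM revisions WHERE rev = ?", (i,)).fetchone()
#     ...do the SVN update for row...

# Fast shape: one query, then iterate over the result set.
count = 0
for (rev,) in conn.execute("SELECT rev FROM revisions"):
    count += 1  # the SVN update for this row would happen here
print(count)
```

With a real MySQL server the per-query network round trip makes the difference far larger than it is in-process.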
I have a script with a main for loop that repeats about 15k times. In this loop it queries a local MySQL database and does a SVN update on a local repository. I placed the SVN repository in a RAMdisk as before most of the time seemed to be spent reading/writing to disk.
Now I have a script that runs at basically the sa... | 2 | 1 | 0.066568 | 0 | false | 2,076,590 | 0 | 917 | 3 | 0 | 0 | 2,076,582 | Profile your Python code. That will show you how long each function/method call takes. If that's the method call querying the MySQL database, you'll have a clue where to look. But it also may be something else. In any case, profiling is the usual approach to solve such problems. | 1 | 0 | 0 | Finding the performance bottleneck in a Python and MySQL script | 3 | python,mysql,performance,svn | 0 | 2010-01-16T07:40:00.000 |
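The stdlib profiler makes this a few lines; a minimal sketch (the profiled function is a stand-in for the real loop body):

```python
import cProfile
import io
import pstats

def loop_body():
    # Stand-in for one iteration's work (DB query + SVN update).
    return sum(i * i for i in range(1000))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    loop_body()
profiler.disable()

# Report the top entries by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

`python -m cProfile yourscript.py` gives the same report without touching the code at all.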
I recently joined a new company and the development team was in the middle of a project to rebuild the database category structure as follows:
if we have categories and subcategories for items, like a food category and an Italian food subcategory within it.
They were building a table for each category, instead of having... | 2 | 2 | 0.197375 | 0 | false | 2,077,536 | 0 | 155 | 1 | 0 | 0 | 2,077,522 | First, the most obvious answer is that you should ask them, not us; that said, I can tell you that design seems bogus deluxe.
The only reason I can come up with is that you have inexperienced DBAs who do not know how to performance-tune a database, and seem to think that a table with fewer rows will always vastly ... | 1 | 0 | 0 | DB a table for the category and another table for the subcategory with similar fields, why? | 2 | python,mysql,database,django,performance | 0 | 2010-01-16T13:58:00.000
I have a question related to some guidance on solving a problem. I have an XML file which I have to populate into a database system (whatever it might be: SQLite, MySQL) using a scripting language: Python.
Does anyone have any idea on how to proceed?
Which technologies do I need to read up on?
Which environments I h... | 7 | 1 | 0.049958 | 0 | false | 2,085,657 | 1 | 15,042 | 1 | 0 | 0 | 2,085,430 | If you are accustomed to DOM (tree) access to XML from another language, you may find these standard library modules (and their respective docs) useful:
xml.dom
xml.dom.minidom
To save the data to the DB, you can use the standard module sqlite3 or look for a binding to MySQL. Or you may wish to use something more abstract, like... | 1 | 0 | 0 | populating data from xml file to a sqlite database using python | 4 | python,xml,database,sqlite,parsing | 0 | 2010-01-18T10:55:00.000
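A minimal end-to-end sketch with the standard library (using xml.etree.ElementTree rather than minidom, and a made-up document shape; adapt the element and column names to your file):

```python
import sqlite3
import xml.etree.ElementTree as ET

# Stand-in for your XML file; for a real file use ET.parse(path).getroot().
xml_data = """
<people>
    <person name="Ann" age="30"/>
    <person name="Bob" age="25"/>
</people>
"""
root = ET.fromstring(xml_data)

conn = sqlite3.connect(":memory:")  # pass a file path for a persistent db
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?)",
    [(p.get("name"), int(p.get("age"))) for p in root.iter("person")],
)
conn.commit()
print(conn.execute("SELECT * FROM people").fetchall())
```

For very large XML files, ET.iterparse lets you stream elements and insert them without holding the whole tree in memory.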
I have written a Python module which due to its specifics needs to have a MySQL database connection. Right now, details of this connection (host, database, username and password to connect with) are stored in /etc/mymodule.conf in plaintext, which is obviously not a good idea.
Supposedly, the /etc/mymodule.conf file ... | 0 | 4 | 1.2 | 0 | true | 2,088,188 | 0 | 1,241 | 1 | 0 | 0 | 2,087,920 | Your constraints set a very difficult problem: every user on the system must be able to access that password (since that's the only way for users to access that database)... yet they must not (except when running that script, and presumably only when running it without e.g. a python -i session that would let them set a... | 1 | 0 | 0 | Storing system-wide DB connection password for a Python module | 2 | python,security | 0 | 2010-01-18T17:30:00.000 |
I want to add a field to an existing mapped class; how would I update the SQL table automatically? Does SQLAlchemy provide a method to update the database with a new column if a field is added to the class? | 15 | 0 | 0 | 0 | false | 65,265,231 | 0 | 11,257 | 1 | 0 | 0 | 2,103,274 | You can install 'DB Browser (SQLite)', open your current database file, simply add/edit the table in your database and save it, and then run your app
(add the field to your model after the save process above) | 1 | 0 | 0 | SqlAlchemy add new Field to class and create corresponding column in table | 6 | python,sqlalchemy | 0 | 2010-01-20T17:01:00.000
Background
I have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include RDBMS, NoSQL stuff, using the grep/awk and friends, etc.).
Proposal
In par... | 4 | 1 | 0.039979 | 1 | false | 2,111,067 | 0 | 2,801 | 3 | 0 | 0 | 2,110,843 | If the data is already organized in fields, it doesn't sound like a text searching/indexing problem. It sounds like tabular data that would be well-served by a database.
Script the file data into a database, index as you see fit, and query the data in any complex way the database supports.
That is unless you're looking... | 1 | 0 | 0 | File indexing (using Binary trees?) in Python | 5 | python,algorithm,indexing,binary-tree | 0 | 2010-01-21T16:22:00.000 |
Background
I have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include RDBMS, NoSQL stuff, using the grep/awk and friends, etc.).
Proposal
In par... | 4 | 1 | 0.039979 | 1 | false | 2,110,912 | 0 | 2,801 | 3 | 0 | 0 | 2,110,843 | The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time.
To reduce the time spent waiting for I/O, your best bet is compression.
Create a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more ... | 1 | 0 | 0 | File indexing (using Binary trees?) in Python | 5 | python,algorithm,indexing,binary-tree | 0 | 2010-01-21T16:22:00.000 |
Background
I have many (thousands!) of data files with a standard field based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include RDBMS, NoSQL stuff, using the grep/awk and friends, etc.).
Proposal
In par... | 4 | 1 | 0.039979 | 1 | false | 12,805,622 | 0 | 2,801 | 3 | 0 | 0 | 2,110,843 | sqlite3 is fast, small, part of python (so nothing to install) and provides indexing of columns. It writes to files, so you wouldn't need to install a database system. | 1 | 0 | 0 | File indexing (using Binary trees?) in Python | 5 | python,algorithm,indexing,binary-tree | 0 | 2010-01-21T16:22:00.000 |
Table structure - Data present for 5 min. slots -
data_point | point_date
12 | 00:00
14 | 00:05
23 | 00:10
10 | 00:15
43 | 00:25
10 | 00:40
When I run the query for say 30 mins. and if data is present I'll get 6 rows (one row for each 5 min. stamp... | 1 | 0 | 0 | 0 | false | 2,119,402 | 0 | 1,466 | 2 | 0 | 0 | 2,119,153 | You cannot query data you do not have.
You (as a thinking person) can claim that the 00:20 data is missing; but there's no easy way to define "missing" in some more formal SQL sense.
The best you can do is create a table with all of the expected times.
Then you can do an outer join between expected times (including a 0... | 1 | 0 | 0 | python : mysql : Return 0 when no rows found | 3 | python,mysql,null | 0 | 2010-01-22T17:28:00.000 |
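A runnable sketch of that expected-times approach using `sqlite3` (the `expected` table and its `slot` column are assumptions for illustration; the data matches the table in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (data_point INTEGER, point_date TEXT)")
conn.executemany(
    "INSERT INTO data VALUES (?, ?)",
    [(12, "00:00"), (14, "00:05"), (23, "00:10"), (10, "00:15"), (43, "00:25")],
)

# The table of expected times: one row per 5-minute slot.
conn.execute("CREATE TABLE expected (slot TEXT)")
conn.executemany(
    "INSERT INTO expected VALUES (?)",
    [("00:%02d" % m,) for m in range(0, 30, 5)],
)

# Outer join: slots with no data survive as NULL, coalesced to 0.
rows = conn.execute(
    "SELECT e.slot, IFNULL(d.data_point, 0) "
    "FROM expected e LEFT JOIN data d ON d.point_date = e.slot "
    "ORDER BY e.slot"
).fetchall()
# the missing 00:20 slot now comes back as ('00:20', 0)
```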
Table structure - Data present for 5 min. slots -
data_point | point_date
12 | 00:00
14 | 00:05
23 | 00:10
10 | 00:15
43 | 00:25
10 | 00:40
When I run the query for say 30 mins. and if data is present I'll get 6 rows (one row for each 5 min. stamp... | 1 | 0 | 0 | 0 | false | 2,119,384 | 0 | 1,466 | 2 | 0 | 0 | 2,119,153 | I see no easy way to create non-existing records out of thin air, but you could create yourself a point_dates table containing all the timestamps you're interested in, and left join it on your data:
select pd.slot, IFNULL(data_point, 0)
from point_dates pd
left join some_table st on st.point_date=pd.slot
where point_da... | 1 | 0 | 0 | python : mysql : Return 0 when no rows found | 3 | python,mysql,null | 0 | 2010-01-22T17:28:00.000 |
I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, but know very little about it. I think it'd be nice to use the Python in... | 2 | 2 | 1.2 | 0 | true | 2,124,718 | 1 | 1,021 | 3 | 1 | 0 | 2,124,688 | True, Google App Engine is a very cool product, but the datastore is a different beast than a regular mySQL database. That's not to say that what you need can't be done with the GAE datastore; however it may take some reworking on your end.
The most prominent difference that you notice right from the start is that GAE ... | 1 | 0 | 0 | iPhone app with Google App Engine | 4 | iphone,python,google-app-engine,gql | 0 | 2010-01-23T20:55:00.000
I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, but know very little about it. I think it'd be nice to use the Python in... | 2 | 1 | 0.049958 | 0 | false | 2,124,705 | 1 | 1,021 | 3 | 1 | 0 | 2,124,688 | That's a pretty generic question :)
Short answer: yes. It's going to involve some rethinking of your data model, but yes, chances are you can support it with the GAE Datastore API.
When you create your Python models (think of these as tables), you can certainly define references to other models (so now we have a forei... | 1 | 0 | 0 | iPhone app with Google App Engine | 4 | iphone,python,google-app-engine,gql | 0 | 2010-01-23T20:55:00.000 |
I've prototyped an iPhone app that uses (internally) SQLite as its data base. The intent was to ultimately have it communicate with a server via PHP, which would use MySQL as the back-end database.
I just discovered Google App Engine, however, but know very little about it. I think it'd be nice to use the Python in... | 2 | 2 | 0.099668 | 0 | false | 2,125,297 | 1 | 1,021 | 3 | 1 | 0 | 2,124,688 | GQL offers almost no functionality at all; it's only used for SELECT queries, and it only exists to make writing SELECT queries easier for SQL programmers. Behind the scenes, it converts your queries to db.Query objects.
The App Engine datastore isn't a relational database at all. You can do some stuff that looks rel... | 1 | 0 | 0 | iPhone app with Google App Engine | 4 | iphone,python,google-app-engine,gql | 0 | 2010-01-23T20:55:00.000 |
Currently an application of mine is using SQLAlchemy, but I have been considering the possibility of using Django model API.
Django 1.1.1 is about 3.6 megabytes in size, whereas SQLAlchemy is about 400 kilobytes (as reported by PyPM - which is essentially the size of the files installed by python setup.py install).
I ... | 0 | 1 | 0.099668 | 0 | false | 2,127,512 | 1 | 191 | 2 | 0 | 0 | 2,126,433 | The Django ORM is usable on its own - you can use "settings.configure()" to set up the database settings. That said, you'll have to do the stripping down and repackaging yourself, and you'll have to experiment with how much you can actually strip away. I'm sure you can ditch contrib/, forms/, template/, and probably se... | 1 | 0 | 0 | Using Django's Model API without having to *include* the full Django stack | 2 | python,django,deployment,size,sqlalchemy | 0 | 2010-01-24T08:27:00.000 |
Currently an application of mine is using SQLAlchemy, but I have been considering the possibility of using Django model API.
Django 1.1.1 is about 3.6 megabytes in size, whereas SQLAlchemy is about 400 kilobytes (as reported by PyPM - which is essentially the size of the files installed by python setup.py install).
I ... | 0 | 1 | 0.099668 | 0 | false | 2,130,014 | 1 | 191 | 2 | 0 | 0 | 2,126,433 | You may be able to get a good idea of what is safe to strip out by checking which files don't have their access time updated when you run your application. | 1 | 0 | 0 | Using Django's Model API without having to *include* the full Django stack | 2 | python,django,deployment,size,sqlalchemy | 0 | 2010-01-24T08:27:00.000 |
Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | 380 | 133 | 1 | 0 | false | 2,157,930 | 0 | 221,892 | 4 | 0 | 0 | 2,128,505 | We actually had these merged together originally, i.e. there was a "filter"-like method that accepted *args and **kwargs, where you could pass a SQL expression or keyword arguments (or both). I actually find that a lot more convenient, but people were always confused by it, since they're usually still getting over the... | 1 | 0 | 0 | Difference between filter and filter_by in SQLAlchemy | 5 | python,sqlalchemy | 0 | 2010-01-24T19:49:00.000 |
Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | 380 | 40 | 1 | 0 | false | 2,128,567 | 0 | 221,892 | 4 | 0 | 0 | 2,128,505 | filter_by uses keyword arguments, whereas filter allows pythonic filtering arguments like filter(User.name=="john") | 1 | 0 | 0 | Difference between filter and filter_by in SQLAlchemy | 5 | python,sqlalchemy | 0 | 2010-01-24T19:49:00.000 |
Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | 380 | 494 | 1.2 | 0 | true | 2,128,558 | 0 | 221,892 | 4 | 0 | 0 | 2,128,505 | filter_by is used for simple queries on the column names using regular kwargs, like
db.users.filter_by(name='Joe')
The same can be accomplished with filter, not using kwargs, but instead using the '==' equality operator, which has been overloaded on the db.users.name object:
db.users.filter(db.users.name=='Joe')
You ca... | 1 | 0 | 0 | Difference between filter and filter_by in SQLAlchemy | 5 | python,sqlalchemy | 0 | 2010-01-24T19:49:00.000 |
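The distinction can be seen end-to-end in a small sketch, assuming SQLAlchemy 1.4+ and an in-memory SQLite database; the model and rows are invented for illustration:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="Joe"), User(name="Ann")])
    session.commit()

    # filter_by: plain keyword-argument equality tests
    by_kwargs = session.query(User).filter_by(name="Joe").one()

    # filter: full expression language (==, !=, >, and_(), like(), ...)
    by_expr = session.query(User).filter(User.name == "Joe").one()

    same_row = by_kwargs.id == by_expr.id  # both resolve to the same row
```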
Could anyone explain the difference between filter and filter_by functions in SQLAlchemy?
Which one should I be using? | 380 | 4 | 0.158649 | 0 | false | 68,331,326 | 0 | 221,892 | 4 | 0 | 0 | 2,128,505 | Apart from all the technical information posted before, there is a significant difference between filter() and filter_by() in its usability.
The second one, filter_by(), may be used only for filtering by something specifically stated - a string or some number value. So it's usable only for category filtering, not for e... | 1 | 0 | 0 | Difference between filter and filter_by in SQLAlchemy | 5 | python,sqlalchemy | 0 | 2010-01-24T19:49:00.000 |
We are still pretty new to Postgres, having come from Microsoft SQL Server.
We want to write some stored procedures now. After struggling to get anything more complicated than a hello world to work in pl/pgsql, we decided that if we are going to learn a new language we might as well learn Python becaus... | 11 | 9 | 1.2 | 0 | true | 2,142,128 | 0 | 5,869 | 1 | 0 | 0 | 2,141,589 | Depends on what operations you're doing.
Well, combine that with a general Python documentation, and that's about what you have.
No. Again, depends on what you're doing. If you're only going to run a query once, no point in preparing it separately.
If you are using persistent connections, it might. But they get cleared... | 1 | 0 | 0 | Stored Procedures in Python for PostgreSQL | 1 | python,postgresql,stored-procedures,plpgsql | 0 | 2010-01-26T18:19:00.000 |
I have a queryset with a few million records. I need to update a Boolean Value, fundamentally toggle it, so that in the database table the values are reset. What's the fastest way to do that?
I tried traversing the queryset and updating and saving each record, but that obviously takes ages. We need to do this very fast, an... | 7 | 0 | 0 | 0 | false | 4,230,081 | 1 | 1,711 | 1 | 0 | 0 | 2,141,769 | Actually, that didn't work out for me.
The following did:
Entry.objects.all().update(value=(F('value')==False)) | 1 | 0 | 0 | Fastest Way to Update a bunch of records in queryset in Django | 3 | python,database,django,django-queryset | 0 | 2010-01-26T18:50:00.000 |
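The fast path is letting the database flip every row in one UPDATE rather than fetching and saving records one by one; Django's `queryset.update()` compiles to that kind of single statement. A raw-SQL sketch of the idea with `sqlite3` (table and column names hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (flag BOOLEAN)")  # hypothetical table
conn.executemany("INSERT INTO entries VALUES (?)", [(1,), (0,), (1,)])

# One statement toggles every row; no per-record fetch/save round trips.
conn.execute("UPDATE entries SET flag = NOT flag")

flags = [f for (f,) in conn.execute("SELECT flag FROM entries ORDER BY rowid")]
```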
Why is _mysql in the MySQLdb module a C file? When the module tries to import it, I get an import error. What should I do? | 1 | 0 | 0 | 0 | false | 2,169,464 | 0 | 1,242 | 1 | 0 | 0 | 2,169,449 | It's the adaptor that sits between the Python MySQLdb module and the C libmysqlclient library. One of the most common reasons for it not loading is that the appropriate libmysqlclient library is not in place. | 1 | 0 | 0 | Importing _mysql in MySQLdb | 2 | python,mysql,c | 0 | 2010-01-30T21:12:00.000 |
I am attempting to execute the following query via the mysqldb module in python:
for i in self.p.parameter_type:
cursor.execute("""UPDATE parameters SET %s = %s WHERE parameter_set_name = %s""" % (i,
float(getattr(self.p, i)), self.list_box_parameter.GetStringSelection()))
I keep getting the error: "... | 0 | 0 | 0 | 0 | false | 2,171,104 | 0 | 375 | 1 | 0 | 0 | 2,171,072 | It looks like the query is formed with the wrong syntax.
Could you display the string parameter passed to cursor.execute? | 1 | 0 | 0 | Trouble with MySQL UPDATE syntax with the module mysqldb in Python | 2 | python,mysql | 0 | 2010-01-31T08:45:00.000
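The usual fix here is to let the driver bind the values instead of interpolating them into the SQL string; only the column name, which cannot be a bound parameter, is formatted in, ideally from a trusted whitelist. A sketch with `sqlite3`, whose placeholder is `?` where MySQLdb uses `%s` (table and column names hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parameters (alpha REAL, parameter_set_name TEXT)")
conn.execute("INSERT INTO parameters VALUES (0.0, 'default')")

# Only the column name is interpolated, from a known-safe list;
# the driver quotes and escapes the *values*.
column = "alpha"  # hypothetical column name
conn.execute(
    "UPDATE parameters SET %s = ? WHERE parameter_set_name = ?" % column,
    (1.5, "default"),
)

new_value = conn.execute("SELECT alpha FROM parameters").fetchone()[0]
```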
I'm trying to implement the proper architecture for multiple databases under Python + Pylons. I can't put everything in the config files since one of the database connections requires the connection info from a previous database connection (sharding).
What's the best way to implement such an infrastructure? | 2 | 1 | 1.2 | 0 | true | 2,224,250 | 0 | 988 | 1 | 0 | 0 | 2,205,047 | Pylons's template configures the database in config/environment.py, probably with the engine_from_config method. It finds all the config settings with a particular prefix and passes them as keyword arguments to create_engine.
You can just replace that with a few calls to sqlalchemy.create_engine() with the per-engine u... | 1 | 0 | 0 | Multiple database connections with Python + Pylons + SQLAlchemy | 1 | python,pylons | 0 | 2010-02-05T04:29:00.000 |
TypeError: unsupported operand type(s) for /: 'tuple' and 'tuple'
I'm getting the above error when I fetch a record using the query "select max(rowid) from table",
assign it to a variable, and then perform the / operation, which throws the above message.
How to resolve this. | 1 | 4 | 1.2 | 0 | true | 2,220,107 | 0 | 1,106 | 1 | 0 | 0 | 2,220,099 | Sql query select max(rowid) would return Tuple data like records=(1000,)
You may need to do something like numerator / records[0] | 1 | 0 | 1 | python tuple division | 1 | python,tuples | 0 | 2010-02-08T07:18:00.000
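A minimal reproduction and fix (hypothetical table and values) showing why the division fails and where the unpacking goes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")  # hypothetical table
conn.executemany("INSERT INTO t VALUES (?)", [(10,), (1000,)])

# fetchone() returns a one-element tuple such as (1000,), not a bare number.
record = conn.execute("SELECT max(n) FROM t").fetchone()

# record / 2 would raise the TypeError above; unpack the column first.
result = record[0] / 2
```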
Suppose that I have a table Articles, which has fields article_id, content and it contains one article with id 1.
I also have a table Categories, which has fields category_id (primary key), category_name, and it contains one category with id 10.
Now suppose that I have a table ArticleProperties, that adds properties to... | 0 | 1 | 0.197375 | 0 | false | 2,248,806 | 1 | 928 | 1 | 0 | 0 | 2,234,030 | Assuming I understand your question correctly, then no, you can't model that relationship as you have suggested. (It would help if you described your desired result, rather than your perceived solution.)
What I think you may want is a many-to-many mapping table called ArticleCategories, consisting of 2 int columns, Arti... | 1 | 0 | 0 | SQLAlchemy ForeignKey relation via an intermediate table | 1 | python,sqlalchemy | 0 | 2010-02-10T02:30:00.000 |
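A sketch of that mapping table in SQLAlchemy (assuming 1.4+). The `article_categories` table name stands in for the suggested ArticleCategories; the column and model details follow the question:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# The two-int-column mapping table suggested above.
article_categories = Table(
    "article_categories",
    Base.metadata,
    Column("article_id", Integer, ForeignKey("articles.article_id"), primary_key=True),
    Column("category_id", Integer, ForeignKey("categories.category_id"), primary_key=True),
)

class Article(Base):
    __tablename__ = "articles"
    article_id = Column(Integer, primary_key=True)
    content = Column(String)
    # secondary= routes the relationship through the mapping table
    categories = relationship("Category", secondary=article_categories)

class Category(Base):
    __tablename__ = "categories"
    category_id = Column(Integer, primary_key=True)
    category_name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    news = Category(category_id=10, category_name="news")
    session.add(Article(article_id=1, content="hello", categories=[news]))
    session.commit()
    names = [c.category_name for c in session.get(Article, 1).categories]
```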