| Question | Q_Score | Users Score | Score | Data Science and Machine Learning | is_accepted | A_Id | Web Development | ViewCount | Available Count | System Administration and DevOps | Networking and APIs | Q_Id | Answer | Database and SQL | GUI and Desktop Applications | Python Basics and Environment | Title | AnswerCount | Tags | Other | CreationDate |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I'm using SqlAlchemy in my Pylons application to access data and SqlAlchemy-migrate to maintain the database schema.
It works fine for managing the schema itself. However, I also want to manage seed data in a migrate-like way. E.g. when the ProductCategory table is created it would make sense to seed it with categories dat... | 2 | 2 | 1.2 | 0 | true | 4,300,116 | 0 | 2,454 | 1 | 0 | 0 | 4,298,886 | Well, what format is your seed data starting out in? The migrate calls are just Python methods, so you're free to open some CSV, create SA object instances, loop, etc. I usually have my seed data as a series of SQL insert statements and just loop over them, executing a migrate.execute(query) for each one.
So I'll first cr... | 1 | 0 | 0 | Managing seed data with SqlAlchemy and SqlAlchemy-migrate | 1 | python,sqlalchemy,pylons,sqlalchemy-migrate | 0 | 2010-11-28T20:24:00.000 |
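The loop-over-seed-statements approach in the accepted answer can be sketched like this; plain sqlite3 stands in for migrate's execute() so the example is self-contained, and the table and category names are made up:

```python
import sqlite3

# Hypothetical seed data kept as a list of SQL INSERT statements,
# as the answer describes; table and category names are made up.
SEED_QUERIES = [
    "INSERT INTO product_category (name) VALUES ('Books')",
    "INSERT INTO product_category (name) VALUES ('Music')",
]

def upgrade(conn):
    # In a sqlalchemy-migrate script this loop would call
    # migrate_engine.execute(query); sqlite3 stands in here.
    conn.execute(
        "CREATE TABLE product_category (id INTEGER PRIMARY KEY, name TEXT)"
    )
    for query in SEED_QUERIES:
        conn.execute(query)

conn = sqlite3.connect(":memory:")
upgrade(conn)
names = [row[0] for row in
         conn.execute("SELECT name FROM product_category ORDER BY name")]
print(names)  # ['Books', 'Music']
```

In a real sqlalchemy-migrate script the same loop would live in upgrade(migrate_engine), with a matching set of DELETEs in downgrade().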
I'm learning to use SQLAlchemy connected to a SQL database for 12 standard relational tables (e.g. SQLite or PostgreSQL). But then I'd like to use Redis with Python for a couple of tables, particularly for Redis's fast set manipulation. I realise that Redis is NoSQL, but can I integrate this with SQLAlchemy for the ben... | 15 | 17 | 1 | 0 | false | 4,331,070 | 0 | 13,868 | 2 | 0 | 0 | 4,324,407 | While it is possible to set up an ORM that puts data in redis, it isn't a particularly good idea. ORMs are designed to expose standard SQL features. Many things that are standard in SQL such as querying on arbitrary columns are not available in redis unless you do a lot of extra work. At the same time redis has feature... | 1 | 0 | 0 | How to integrate Redis with SQLAlchemy | 2 | python,sqlalchemy,nosql,redis | 0 | 2010-12-01T12:36:00.000 |
I'm learning to use SQLAlchemy connected to a SQL database for 12 standard relational tables (e.g. SQLite or PostgreSQL). But then I'd like to use Redis with Python for a couple of tables, particularly for Redis's fast set manipulation. I realise that Redis is NoSQL, but can I integrate this with SQLAlchemy for the ben... | 15 | 14 | 1 | 0 | false | 4,332,791 | 0 | 13,868 | 2 | 0 | 0 | 4,324,407 | Redis is very good at what it does, storing key-value pairs and performing simple atomic operations, but if you want to use it as a relational database you're really going to suffer, as I did... and here is my story...
I've done something like that, making several objects to abstract all the Redis internals, exposing primitives... | 1 | 0 | 0 | How to integrate Redis with SQLAlchemy | 2 | python,sqlalchemy,nosql,redis | 0 | 2010-12-01T12:36:00.000 |
I have a database full of data, including a date and time string, e.g. Tue, 21 Sep 2010 14:16:17 +0000
What I would like to be able to do is extract various documents (records) from the database based on the time contained within the date string, Tue, 21 Sep 2010 14:16:17 +0000.
From the above date string, how would I ... | 3 | 1 | 0.049958 | 0 | false | 4,325,260 | 0 | 563 | 1 | 0 | 0 | 4,325,194 | I agree with the other poster. Though this doesn't solve your immediate problem, if you have any control over the database, you should seriously consider creating a date/time column, with either a DATE or TIMESTAMP datatype. That would make your system much more robust, and completely avoid the problem of trying to parse date... | 1 | 0 | 1 | Extracting Date and Time info from a string. | 4 | python,regex,mongodb,datetime,database | 0 | 2010-12-01T14:10:00.000 |
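Until such a column exists, the date strings above are RFC 2822 dates, which the standard library parses directly; a minimal sketch on modern Python 3:

```python
from email.utils import parsedate_to_datetime

# The question's date strings are in RFC 2822 format, which
# email.utils handles, including the +0000 timezone offset.
raw = "Tue, 21 Sep 2010 14:16:17 +0000"
dt = parsedate_to_datetime(raw)  # timezone-aware datetime

# Once it's a datetime, selecting "by the time contained within the
# string" becomes ordinary attribute access.
print(dt.hour, dt.minute, dt.second)  # 14 16 17
```

(parsedate_to_datetime exists from Python 3.3 on; on the Python 2 of the question's era, email.utils.parsedate was the equivalent.)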
I am implementing a database model to store the 20+ fields of the iCal calendar format and am faced with tediously typing in all these into an SQLAlchemy model.py file. Is there a smarter approach? I am looking for a GUI or model designer that can create the model.py file for me. I would specify the column names and... | 5 | 0 | 0 | 0 | false | 4,330,995 | 0 | 2,698 | 1 | 0 | 0 | 4,330,339 | "I would specify the column names and some attributes, e.g, type, length, etc."
Isn't that the exact same thing as
"tediously typing in all these into an SQLAlchemy model.py file"?
If those two things aren't identical, please explain how they're different. | 1 | 0 | 0 | Is there any database model designer that can output SQLAlchemy models? | 3 | python,model,sqlalchemy,data-modeling | 0 | 2010-12-01T23:52:00.000 |
I just downloaded sqlite3.exe. It opens up as a command prompt. I created a table test & inserted a few entries in it. I used .backup test just in case. After I exit the program using .exit and reopened it I don't find the table listed under .tables nor can I run any query on it.
I need to quickly run an open source py... | 0 | 0 | 0 | 0 | false | 4,348,768 | 0 | 2,598 | 1 | 0 | 0 | 4,348,658 | Just execute sqlite3 foo.db? This will permanently store everything you do afterwards in this file. (No need for .backup.) | 1 | 0 | 0 | How to create tables in sqlite 3? | 3 | python,sqlite | 0 | 2010-12-03T18:30:00.000 |
I have some (Excel 2000) workbooks. I want to extract the data in each worksheet to a separate file.
I am running on Linux.
Is there a library I can use to access (read) XLS files on Linux from Python? | 3 | 0 | 0 | 0 | false | 4,355,455 | 0 | 2,104 | 1 | 0 | 0 | 4,355,435 | The easiest way would be to run excel up under Wine or as a VM and do it from Windows. You can use Mark Hammond's COM bindings, which come bundled with ActiveState Python. Alternatively, you could export the data in CSV format and read it from that. | 1 | 0 | 0 | Cross platform way to read Excel files in Python? | 4 | python,excel | 0 | 2010-12-04T19:41:00.000 |
With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?
Note
I am not asking whether RDBM... | 0 | 0 | 0 | 0 | false | 4,357,332 | 1 | 210 | 3 | 0 | 0 | 4,355,909 | The NoSQL effort has to do with creating a persistence layer that scales with modern applications using non-normalized data structures for fast reads & writes and data formats like JSON, the standard format used by ajax based systems. It is sometimes the case that transaction based relational databases do not scale wel... | 1 | 0 | 0 | With the rise of NoSQL, Is it more common these days to have a webapp without any model? | 3 | python,mysql,ruby-on-rails,nosql | 0 | 2010-12-04T21:24:00.000 |
With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?
Note
I am not asking whether RDBM... | 0 | 0 | 0 | 0 | false | 4,355,924 | 1 | 210 | 3 | 0 | 0 | 4,355,909 | SQL databases are still the order of the day. But it's becoming more common to use unstructured stores. NoSQL databases are well suited for some web apps, but not necessarily all of them. | 1 | 0 | 0 | With the rise of NoSQL, Is it more common these days to have a webapp without any model? | 3 | python,mysql,ruby-on-rails,nosql | 0 | 2010-12-04T21:24:00.000 |
With the rise of NoSQL, is it more common these days to have a webapp without any model and process everything in the controller? Is this a bad pattern in web development? Why should we abstract our database related function in a model when it is easy enough to fetch the data in nosql?
Note
I am not asking whether RDBM... | 0 | 4 | 1.2 | 0 | true | 4,355,976 | 1 | 210 | 3 | 0 | 0 | 4,355,909 | I don't think "NoSQL" has anything to do with "no model".
For one, MVC originated in the Smalltalk world for desktop applications, long before the current web server architecture (or even the web itself) existed. Most apps I've written have used MVC (including the M), even those that didn't use a DBMS (R or otherwise)... | 1 | 0 | 0 | With the rise of NoSQL, Is it more common these days to have a webapp without any model? | 3 | python,mysql,ruby-on-rails,nosql | 0 | 2010-12-04T21:24:00.000 |
When committing data that has originally come from a webpage, sometimes data has to be converted to a data type or format which is suitable for the back-end database. For instance, a date in 'dd/mm/yyyy' format needs to be converted to a Python date-object or 'yyyy-mm-dd' in order to be stored in a SQLite date column ... | 0 | 1 | 1.2 | 0 | true | 4,360,475 | 1 | 75 | 2 | 0 | 0 | 4,360,407 | I could be wrong, but I think there is no definite answer to this question. It depends on the "language" level your framework provides. For example, if other parts of the framework accept data in a non-canonical form and then convert it to an internal canonical form, in that case it would be worth supporting some input date f... | 1 | 0 | 0 | Framework design question | 2 | python,sqlite | 0 | 2010-12-05T18:24:00.000 |
When committing data that has originally come from a webpage, sometimes data has to be converted to a data type or format which is suitable for the back-end database. For instance, a date in 'dd/mm/yyyy' format needs to be converted to a Python date-object or 'yyyy-mm-dd' in order to be stored in a SQLite date column ... | 0 | 2 | 0.197375 | 0 | false | 4,360,452 | 1 | 75 | 2 | 0 | 0 | 4,360,407 | I think it belongs in the validation. You want a date, but the web page inputs strings only, so the validator needs to check if the value can be converted to a date, and from that point on your application should process it as a date. | 1 | 0 | 0 | Framework design question | 2 | python,sqlite | 0 | 2010-12-05T18:24:00.000 |
Which one of Ruby-PHP-Python is best suited for Cassandra/Hadoop on 500M+ users? I know the language itself is not a big concern, but I'd like to know based on proven success, infrastructure and available utilities around those frameworks! Thanks so much. | 0 | 0 | 0 | 0 | false | 9,921,879 | 1 | 339 | 1 | 0 | 0 | 4,398,341 | Because Cassandra is written in Java, a client also in Java would likely have the best stability and maturity for your application.
As far as choosing between those 3 dynamic languages, I'd say whatever you're most comfortable with is best. I don't know of any significant differences between client libraries in those ... | 1 | 0 | 0 | Scability of Ruby-PHP-Python on Cassandra/Hadoop on 500M+ users | 1 | php,python,ruby-on-rails,scalability,cassandra | 1 | 2010-12-09T12:46:00.000 |
The question pretty much says it all. The database is in MySQL using phpMyAdmin.
A little background: I'm writing the interface for a small non-profit organization. They need to be able to see which customers to ship to this month, which customers have recurring orders, etc. The current system is ancient, written ... | 0 | 0 | 0 | 0 | false | 4,413,898 | 0 | 186 | 1 | 0 | 0 | 4,413,840 | What can I say? Just download the various software, dig in and ask questions here when you run into specific problems. | 1 | 0 | 0 | I have a MySQL database, I want to write an interface for it using Python. Help me get started, please! | 3 | php,python,mysql,phpmyadmin | 1 | 2010-12-10T22:23:00.000 |
I am writing a Python logger script which writes to a CSV file in the following manner:
Open the file
Append data
Close the file (I think this is necessary to save the changes, to be safe after every logging routine.)
PROBLEM:
The file is very much accessible through Windows Explorer (I'm using XP). If the file is op... | 1 | 0 | 0 | 0 | false | 4,427,958 | 0 | 2,562 | 1 | 0 | 0 | 4,427,936 | As far as I know, Windows does not support file locking. In other words, applications that don't know about your file being locked can't be prevented from reading a file.
But the remaining question is: how can Excel accomplish this?
You might want to try to write to a temporary file first (one that Excel does not know ... | 1 | 0 | 0 | Prevent a file from being opened | 2 | python,logging,file-locking | 0 | 2010-12-13T10:36:00.000 |
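The write-to-a-temporary-file idea in the last answer can be sketched with a hypothetical helper; note this does not lock the file, it only guarantees a reader such as Excel never sees a half-written CSV:

```python
import csv
import os
import tempfile

def append_log_row(path, row):
    # Hypothetical helper: rewrite the log via a temporary file in the
    # same directory, then swap it into place with os.replace(), so a
    # concurrent reader never sees a partially written file.  This is
    # NOT a lock; it only avoids partial reads.
    existing = []
    if os.path.exists(path):
        with open(path, newline="") as f:
            existing = list(csv.reader(f))
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w", newline="") as f:
        csv.writer(f).writerows(existing + [row])
    os.replace(tmp, path)  # atomic on POSIX and modern Windows

log = os.path.join(tempfile.mkdtemp(), "log.csv")
append_log_row(log, ["2010-12-13", "10:36", "event"])
append_log_row(log, ["2010-12-13", "10:37", "event2"])
with open(log, newline="") as f:
    rows = list(csv.reader(f))
print(len(rows))  # 2
```

Rewriting the whole file per append is fine for a small log; for a big one, append directly and only use the temp-file swap for the copy readers are allowed to open.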
Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.
Which path should be followed, any advice ?
For example:
There is a table of Customers, containing fields li... | 2 | 1 | 0.039979 | 0 | false | 4,428,933 | 0 | 159 | 4 | 0 | 0 | 4,428,613 | I agree with Mchi, there is no problem storing "pickled" data if you don't need to search or do relational type operations.
Denormalisation is also an important tool that can scale up database performance when applied correctly.
It's probably a better idea to use JSON instead of pickles. It only uses a little more spac... | 1 | 0 | 0 | Is it a good practice to use pickled data instead of additional tables? | 5 | python,mysql | 0 | 2010-12-13T12:04:00.000 |
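The JSON-instead-of-pickle suggestion can be sketched like this; the customer schema and the fields stored in the JSON column are made up:

```python
import json
import sqlite3

# Hypothetical customer table: frequently-queried fields get real
# columns, rarely-queried extras go into a single JSON text column.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, extra TEXT)"
)
extra = {"addresses": ["1 Main St", "2 Side St"], "newsletter": True}
conn.execute(
    "INSERT INTO customer (name, extra) VALUES (?, ?)",
    ("Alice", json.dumps(extra)),  # JSON, unlike pickle, is readable anywhere
)

row = conn.execute("SELECT extra FROM customer WHERE name = 'Alice'").fetchone()
restored = json.loads(row[0])
print(restored["addresses"][0])  # 1 Main St
```

The trade-off the answers describe is visible here: you cannot efficiently query *inside* the extra column, so anything you need to filter or join on belongs in its own column or table.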
Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.
Which path should be followed, any advice ?
For example:
There is a table of Customers, containing fields li... | 2 | 2 | 1.2 | 0 | true | 4,429,509 | 0 | 159 | 4 | 0 | 0 | 4,428,613 | Mixing SQL databases and pickling seems to ask for trouble. I'd go with either sticking all data in the SQL databases or using only pickling, in the form of the ZODB, which is a Python only OO database that is pretty damn awesome.
Mixing makes sense sometimes, but it is usually just more trouble than it's worth. | 1 | 0 | 0 | Is it a good practice to use pickled data instead of additional tables? | 5 | python,mysql | 0 | 2010-12-13T12:04:00.000 |
Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.
Which path should be followed, any advice ?
For example:
There is a table of Customers, containing fields li... | 2 | 0 | 0 | 0 | false | 4,432,349 | 0 | 159 | 4 | 0 | 0 | 4,428,613 | I agree with @Lennart Regebro. You should probably see whether you need a Relational DB or an OODB. If RDBMS is your choice, I would suggest you stick with more tables. IMHO, pickling may have issues with scalability. If that's what you want, you should look at ZODB. It is pretty good and supports caching etc for better...
Many times while creating database structure, I get stuck at the question, what would be more effective, storing data in pickled format in a column in the same table or create additional table and then use JOIN.
Which path should be followed, any advice ?
For example:
There is a table of Customers, containing fields li... | 2 | 3 | 0.119427 | 0 | false | 4,428,635 | 0 | 159 | 4 | 0 | 0 | 4,428,613 | Usually it's best to keep your data normalized (i.e. create more tables). Storing data 'pickled' as you say, is acceptable, when you don't need to perform relational operations on them. | 1 | 0 | 0 | Is it a good practice to use pickled data instead of additional tables? | 5 | python,mysql | 0 | 2010-12-13T12:04:00.000 |
I would like to be able to plot a call graph of a stored procedure. I am not interested in every detail, and I am not concerned with dynamic SQL (although it would be cool to detect it and skip it maybe or mark it as such.)
I would like the tool to generate a tree for me, given the server name, db name, stored proc nam... | 8 | 0 | 0 | 0 | false | 18,523,367 | 0 | 6,375 | 1 | 0 | 0 | 4,445,117 | SQL Negotiator Pro has a free lite version at www.aphilen.com
The full version is the only product out there that will find all dependencies and not stop after finding the first 10 child dependencies. Other products fail when there is a circular reference and just hang; these guys have covered this off. Also a neat fea... | 1 | 0 | 0 | Is there a free tool which can help visualize the logic of a stored procedure in SQL Server 2008 R2? | 3 | sql-server-2008,stored-procedures,python-2.6,call-graph | 0 | 2010-12-14T22:54:00.000 |
I'm creating a hangman game with Silverlight and IronPython, and I use data in PostgreSQL for random words, but I don't know how to access the PostgreSQL data from Silverlight.
How can or should it be done?
Thanks!! | 0 | 3 | 0.53705 | 0 | false | 4,470,466 | 0 | 950 | 1 | 0 | 0 | 4,470,073 | From Silverlight you cannot access a database directly (remember it's a web technology that actually runs locally on the client and the client cannot access your database directly over the internet).
To communicate with the server from Silverlight, you must create a separate WebService, either with SOAP, WCF or RIA Se... | 1 | 0 | 0 | How to access PostgreSQL with Silverlight | 1 | silverlight,postgresql,silverlight-4.0,silverlight-3.0,ironpython | 0 | 2010-12-17T11:37:00.000 |
I need to implement a function that takes a lambda as the argument and queries the database. I use SQLAlchemy for ORM. Is there a way to pass the lambda, that my function receives, to SQLAlchemy to create a query?
Sincerely,
Roman Prykhodchenko | 2 | 2 | 1.2 | 0 | true | 4,470,921 | 0 | 1,681 | 1 | 0 | 0 | 4,470,481 | I guess you want to filter the data with the lambda, like a WHERE clause? Well, no, neither functions nor lambdas can be turned into a SQL query. Sure, you could just fetch all the data and filter it in Python, but that completely defeats the purpose of the database.
You'll need to recreate the logic you put into the lambda... | 1 | 0 | 0 | Can I use lambda to create a query in SQLAlchemy? | 1 | python,sqlalchemy | 0 | 2010-12-17T12:33:00.000 |
The desire is to have the user provide information in an OpenOffice Writer or MS Word file that is inserted into part of a ReportLab generated PDF. I am comfortable with ReportLab; but, I don't have any experience with using Writer or Word data in this way. How would you automate the process of pulling in the Writer/Wo... | 1 | 0 | 0 | 0 | false | 4,691,989 | 0 | 147 | 1 | 0 | 0 | 4,478,478 | You can not embed such objects as is within a PDF, adobe specification does not support that. However you could always parse the data from the Office document and reproduce it as a table/graph/etc using reportlab in the output PDF. If you don't care about the data being an actual text you could always save it in the PD... | 1 | 0 | 0 | Is it possible to include OpenOffice Writer or MS Word data in a ReportLab generated PDF? | 1 | python,ms-word,reportlab,openoffice-writer | 0 | 2010-12-18T14:36:00.000 |
I am developing an application for managers that might be used in a large organisation. The app is improved and extended step by step on a frequent (irregular) basis. The app will have SQL connections to several databases and has a complex GUI.
What would you advise to deploy the app ?
Based on my current (limited) kno... | 1 | 1 | 0.197375 | 0 | false | 4,485,440 | 1 | 318 | 1 | 0 | 0 | 4,485,404 | If possible, make the application run without any installation procedure, and provide it on a network share (e.g. with a fixed UNC path). You didn't specify the client operating system: if it's Windows, create an MSI that sets up something in the start menu that will still make the application launch from the network s... | 1 | 0 | 0 | Deploy python application | 1 | python,client-server,rich-internet-application | 0 | 2010-12-19T22:13:00.000 |
Okay, so what I want to do is upload an excel sheet and display it on my website, in html. What are my options here ? I've found this xlrd module that allows you to read the data from spreadsheets, but I don't really need that right now. | 1 | 4 | 1.2 | 0 | true | 4,499,265 | 1 | 1,961 | 1 | 0 | 0 | 4,498,678 | Why don't you need xlrd? It sounds like exactly what you need.
Create a Django model with a FileField that holds the spreadsheet. Then your view uses xlrd to loop over the rows and columns and put them into an HTML table. Job done.
Possible complications: multiple sheets in one Excel file; formulas; styles. | 1 | 0 | 0 | Python/Django excel to html | 2 | python,html,django,excel | 0 | 2010-12-21T11:20:00.000 |
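The loop-over-rows-and-columns step can be sketched independently of xlrd; assume the sheet has already been read into a list of row tuples (e.g. via sheet.row_values(i)), and note the helper name is hypothetical:

```python
from html import escape

def rows_to_html_table(rows):
    # Hypothetical helper: turn row tuples (what xlrd's
    # sheet.row_values(i) would give you) into an HTML table,
    # escaping every cell so spreadsheet content can't inject markup.
    body = "".join(
        "<tr>" + "".join("<td>%s</td>" % escape(str(c)) for c in row) + "</tr>"
        for row in rows
    )
    return "<table>%s</table>" % body

html_out = rows_to_html_table([("Name", "Qty"), ("Widget <A>", 3)])
print(html_out)
```

In the Django view you would return this string (or, better, pass the rows to a template and build the table there).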
I have a CSV file which is about 1GB big and contains about 50million rows of data, I am wondering is it better to keep it as a CSV file or store it as some form of a database. I don't know a great deal about MySQL to argue for why I should use it or another database framework over just keeping it as a CSV file. I am b... | 2 | 0 | 0 | 0 | false | 4,505,300 | 0 | 1,161 | 3 | 0 | 0 | 4,505,170 | How about some key-value storages like MongoDB | 1 | 0 | 0 | 50 million+ Rows of Data - CSV or MySQL | 5 | python,mysql,database,optimization,csv | 0 | 2010-12-22T00:20:00.000 |
I have a CSV file which is about 1GB big and contains about 50million rows of data, I am wondering is it better to keep it as a CSV file or store it as some form of a database. I don't know a great deal about MySQL to argue for why I should use it or another database framework over just keeping it as a CSV file. I am b... | 2 | 3 | 1.2 | 0 | true | 4,505,218 | 0 | 1,161 | 3 | 0 | 0 | 4,505,170 | I would say that there are a wide variety of benefits to using a database over a CSV for such large structured data so I would suggest that you learn enough to do so. However, based on your description you might want to check out non-server/lighter weight databases. Such as SQLite, or something similar to JavaDB/Derby.... | 1 | 0 | 0 | 50 million+ Rows of Data - CSV or MySQL | 5 | python,mysql,database,optimization,csv | 0 | 2010-12-22T00:20:00.000 |
I have a CSV file which is about 1GB big and contains about 50million rows of data, I am wondering is it better to keep it as a CSV file or store it as some form of a database. I don't know a great deal about MySQL to argue for why I should use it or another database framework over just keeping it as a CSV file. I am b... | 2 | 1 | 0.039979 | 0 | false | 4,505,180 | 0 | 1,161 | 3 | 0 | 0 | 4,505,170 | Are you just going to slurp in everything all at once? If so, then CSV is probably the way to go. It's simple and works.
If you need to do lookups, then something that lets you index the data, like MySQL, would be better. | 1 | 0 | 0 | 50 million+ Rows of Data - CSV or MySQL | 5 | python,mysql,database,optimization,csv | 0 | 2010-12-22T00:20:00.000 |
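A middle ground between the answers is loading the CSV into SQLite (file-based, no server to administer) so lookups can use an index; a tiny stand-in for the 1 GB file:

```python
import csv
import io
import sqlite3

# A tiny stand-in for the 1 GB CSV file.
csv_text = "id,price\n1,9.99\n2,19.99\n3,4.99\n"
rows = list(csv.reader(io.StringIO(csv_text)))[1:]  # skip the header

conn = sqlite3.connect(":memory:")  # use a filename for a real dataset
conn.execute("CREATE TABLE data (id INTEGER, price REAL)")
conn.executemany("INSERT INTO data VALUES (?, ?)", rows)
conn.execute("CREATE INDEX idx_id ON data (id)")  # what a flat CSV can't give you

price = conn.execute("SELECT price FROM data WHERE id = 2").fetchone()[0]
print(price)  # 19.99
```

With the index, a lookup touches a handful of pages instead of scanning 50 million rows; the one-time import cost is what buys you that.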
I'm looking to write a small web app to utilise a dataset I already have stored in a MongoDB collection. I've been writing more Python than other languages lately and would like to broaden my repertoire and write a Python web app.
It seems however that most if not all of the current popular Python web development fram... | 17 | 0 | 0 | 0 | false | 50,201,839 | 1 | 9,196 | 1 | 0 | 0 | 4,534,684 | There is no stable support for MongoDB in the Django framework. I tried using MongoEngine, but unlike Django models, which get admin support in the framework, there is no such support for MongoEngine.
Correct me if I am wrong. | 1 | 0 | 0 | Python Web Framework with best Mongo support | 4 | python,mongodb | 0 | 2010-12-26T17:14:00.000 |
I'm new to MySQL, and I have a question about the memory.
I have a 200 MB table (MyISAM, 2,000,000 rows), and I try to load all of it into memory.
I use python(actually MySQLdb in python) with sql: SELECT * FROM table.
However, from my linux "top" I saw this python process uses 50% of my memory(which is total 6GB) ... | 1 | 0 | 0 | 0 | false | 4,559,691 | 0 | 4,116 | 2 | 0 | 0 | 4,559,402 | In pretty much any scripting language, a variable will always take up more memory than its actual contents would suggest. An INT might be 32 or 64bits, suggesting it would require 4 or 8 bytes of memory, but it will take up 16 or 32bytes (pulling numbers out of my hat), because the language interpreter has to attach va... | 1 | 0 | 0 | the Memory problem about MySQL "SELECT *" | 4 | python,mysql | 0 | 2010-12-30T01:47:00.000 |
I'm new to MySQL, and I have a question about the memory.
I have a 200 MB table (MyISAM, 2,000,000 rows), and I try to load all of it into memory.
I use python(actually MySQLdb in python) with sql: SELECT * FROM table.
However, from my linux "top" I saw this python process uses 50% of my memory(which is total 6GB) ... | 1 | -1 | -0.049958 | 0 | false | 4,559,443 | 0 | 4,116 | 2 | 0 | 0 | 4,559,402 | This is almost certainly a bad design.
What are you doing with all that data in memory at once?
If it's for one user, why not pare the size down so you can support multiple users?
If you're doing a calculation on the middle tier, is it possible to shift the work to the database server so you don't have to bring all t... | 1 | 0 | 0 | the Memory problem about MySQL "SELECT *" | 4 | python,mysql | 0 | 2010-12-30T01:47:00.000 |
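The general cure for SELECT * blowing up client memory is to stream rows instead of calling fetchall(); with MySQLdb that means a server-side cursor (MySQLdb.cursors.SSCursor), and the DB-API pattern itself can be sketched with sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])

# Instead of fetchall() (the whole result set materialized as Python
# objects at once), pull rows in fixed-size chunks; only one chunk is
# alive at a time.
cur = conn.execute("SELECT n FROM t")
total = 0
while True:
    chunk = cur.fetchmany(1000)
    if not chunk:
        break
    total += sum(n for (n,) in chunk)
print(total)  # 49995000
```

This also explains the memory numbers in the question: every fetched value becomes a full Python object with interpreter overhead, so a 200 MB table can easily cost several times that once fully materialized.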
How can I update multiple records in a queryset efficiently?
Do I just loop over the queryset, edit , and call save() for each one of them? Is it equivalent to psycopg2's executemany? | 1 | 6 | 1.2 | 0 | true | 4,601,203 | 1 | 6,395 | 1 | 0 | 0 | 4,600,938 | If you have to update each record with a different value, then of couse you have to iterate over each record. If you wish to do update them all with the same value, then just use the update method of the queryset. | 1 | 0 | 0 | Django: how can I update more than one record at once? | 3 | python,django | 0 | 2011-01-05T04:53:00.000 |
I have a project coming up that involves a desktop application (tournament scoring for an amateur competition) that probably 99+% of the time will be a single-user on a single machine, no network connectivity, etc. For that, sqlite will likely work beautifully. For those other few times when there are more than one p... | 2 | 1 | 1.2 | 0 | true | 4,612,684 | 0 | 245 | 2 | 0 | 0 | 4,610,698 | Yes, SQLAlchemy will help you to be independent on what SQL database you use, and you get a nice ORM as well. Highly recommended. | 1 | 0 | 0 | creating a database-neutral app in python | 2 | python,database,sqlite,orm,sqlalchemy | 0 | 2011-01-06T00:31:00.000 |
I have a project coming up that involves a desktop application (tournament scoring for an amateur competition) that probably 99+% of the time will be a single-user on a single machine, no network connectivity, etc. For that, sqlite will likely work beautifully. For those other few times when there are more than one p... | 2 | -1 | -0.099668 | 0 | false | 4,610,735 | 0 | 245 | 2 | 0 | 0 | 4,610,698 | I don't see how those 2 use cases would use the same methods. Just create a wrapper module that conditionally imports either the sqlite or sqlalchemy modules or whatever else you need. | 1 | 0 | 0 | creating a database-neutral app in python | 2 | python,database,sqlite,orm,sqlalchemy | 0 | 2011-01-06T00:31:00.000 |
I'm developing a web app that uses stock data. The stock data can be stored in:
Files
DB
The structure of the data is simple: there's a daily set and a weekly set. If files are used, then I can store a file per symbol/set, such as GOOGLE_DAILY and GOOGLE_WEEKLY. Each set includes a simple list of (Date, open/hight/l... | 0 | 3 | 1.2 | 0 | true | 4,613,300 | 1 | 81 | 1 | 0 | 0 | 4,613,251 | You don't need a table per stock symbol, you just need one of the fields in the table to be the stock symbol. The table might be called StockPrices and its fields might be
ticker_symbol - the stock ticker symbol
time - the time of the stock quote
price - the price of the stock at that time
As long as ticker_symbol is... | 1 | 0 | 0 | Help needed with db structure | 2 | python,django,data-structures | 0 | 2011-01-06T09:02:00.000 |
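The single-table design from the answer can be sketched in SQL; the prices below are made-up sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One table for every symbol, keyed by (ticker_symbol, time) --
# no per-symbol tables such as GOOGLE_DAILY are needed.
conn.execute("""CREATE TABLE stock_prices (
    ticker_symbol TEXT,
    time          TEXT,
    price         REAL,
    PRIMARY KEY (ticker_symbol, time)
)""")
conn.executemany(
    "INSERT INTO stock_prices VALUES (?, ?, ?)",
    [("GOOG", "2010-12-01", 564.35),   # made-up sample prices
     ("GOOG", "2010-12-02", 571.82),
     ("AAPL", "2010-12-01", 316.87)],
)
goog = conn.execute(
    "SELECT time, price FROM stock_prices "
    "WHERE ticker_symbol = ? ORDER BY time",
    ("GOOG",),
).fetchall()
print(goog)
```

The daily/weekly distinction can be a second key column (or a separate table with the same shape) rather than separate per-symbol files.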
I'm setting up a VM.
Both the host and the VM have MySQL.
How do I keep the VM MySQL synced to the host MySQL?
The host is using MySQL 5.5 on XP.
The VM is MySQL 5.1 on Fedora 14.
1) I could dump to "shared", then restore. Not sure if this will work.
2) I could network the host MySQL to the VM MySQL. Not sure how to do this.
How would I do thi... | 1 | 0 | 0 | 0 | false | 4,621,472 | 0 | 2,319 | 2 | 0 | 0 | 4,619,392 | Do you want it synced in real time?
Why not just connect the guest's mysql process to the host? | 1 | 0 | 0 | How to Sync MySQL with python? | 3 | python,mysql,virtual-machine | 0 | 2011-01-06T20:15:00.000 |
I'm setting up a VM.
Both the host and the VM have MySQL.
How do I keep the VM MySQL synced to the host MySQL?
The host is using MySQL 5.5 on XP.
The VM is MySQL 5.1 on Fedora 14.
1) I could dump to "shared", then restore. Not sure if this will work.
2) I could network the host MySQL to the VM MySQL. Not sure how to do this.
How would I do thi... | 1 | 1 | 1.2 | 0 | true | 4,619,503 | 0 | 2,319 | 2 | 0 | 0 | 4,619,392 | You can use mysqldump to make snapshots of the database, and to restore it to known states after tests.
But instead of going into the complication of synchronizing different database instances, it would be best to open the host machine's instance to local network access, and have the applications in the virtual machin... | 1 | 0 | 0 | How to Sync MySQL with python? | 3 | python,mysql,virtual-machine | 0 | 2011-01-06T20:15:00.000 |
I know of PyMySQLDb, is that pretty much the thinnest/lightest way of accessing MySql? | 2 | 0 | 0 | 0 | false | 9,090,731 | 0 | 2,598 | 3 | 0 | 0 | 4,620,340 | MySQLDb is faster while SQLAlchemy makes code more user friendly -:) | 1 | 0 | 0 | What is the fastest/most performant SQL driver for Python? | 3 | python,mysql | 0 | 2011-01-06T21:56:00.000 |
I know of PyMySQLDb, is that pretty much the thinnest/lightest way of accessing MySql? | 2 | 5 | 0.321513 | 0 | false | 4,620,669 | 0 | 2,598 | 3 | 0 | 0 | 4,620,340 | The fastest is SQLAlchemy.
"Say what!?"
Well, it's a nice ORM, and I like SQLAlchemy; you will get your code finished much faster. If your code then runs 0.2 seconds slower, that isn't really going to make any noticeable difference. :)
Now if you get performance problems, then you can look into improving the code. But choosing the ... | 1 | 0 | 0 | What is the fastest/most performant SQL driver for Python? | 3 | python,mysql | 0 | 2011-01-06T21:56:00.000 |
I know of PyMySQLDb, is that pretty much the thinnest/lightest way of accessing MySql? | 2 | 3 | 0.197375 | 0 | false | 4,620,433 | 0 | 2,598 | 3 | 0 | 0 | 4,620,340 | The lightest possible way is to use ctypes and directly call into the MySQL API, of course, without using any translation layers. Now, that's ugly and will make your life miserable unless you also write C, so yes, the MySQLDb extension is the standard and most performant way to use MySQL while still using the Python Da... | 1 | 0 | 0 | What is the fastest/most performant SQL driver for Python? | 3 | python,mysql | 0 | 2011-01-06T21:56:00.000 |
I work with Oracle Database and the latest Django, but when I use the default user model the query is very slow.
What can I do? | 2 | 2 | 1.2 | 0 | true | 6,583,775 | 1 | 305 | 1 | 0 | 0 | 4,625,835 | The solution was to add an index. | 1 | 0 | 0 | how can i optimize a django oracle connection? | 1 | python,django,oracle,django-models,model | 0 | 2011-01-07T13:18:00.000 |
I know that with an InnoDB table, transactions are autocommit, however I understand that to mean for a single statement? For example, I want to check if a user exists in a table, and then if it doesn't, create it. However there lies a race condition. I believe using a transaction prior to doing the select, will ensure ... | 2 | 4 | 0.379949 | 0 | false | 4,656,098 | 0 | 363 | 1 | 0 | 0 | 4,637,886 | There exists a SELECT ... FOR UPDATE that allows you to lock the rows from being read by another transaction but I believe the records have to exist in the first place. Then you can do as you say, and unlock it once you commit.
In your case I think the best approach is to simply set a unique constraint on the username... | 1 | 0 | 0 | How do you create a transaction that spans multiple statements in Python with MySQLdb? | 2 | python,mysql | 0 | 2011-01-09T05:47:00.000 |
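The unique-constraint approach suggested above can be sketched like this. It is a hedged illustration using the stdlib sqlite3 module as a stand-in (so it runs anywhere); with MySQLdb the pattern is identical but the duplicate insert raises MySQLdb.IntegrityError instead, and the table/column names here are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (username TEXT UNIQUE)")

def get_or_create(con, username):
    """Try the INSERT first; if the user already exists (or another
    writer won the race), the UNIQUE constraint rejects the duplicate
    and we fall through to a plain SELECT."""
    try:
        with con:  # opens a transaction, commits on success
            con.execute("INSERT INTO users (username) VALUES (?)", (username,))
        created = True
    except sqlite3.IntegrityError:
        created = False
    row = con.execute(
        "SELECT rowid FROM users WHERE username = ?", (username,)
    ).fetchone()
    return row[0], created

uid1, created1 = get_or_create(con, "alice")
uid2, created2 = get_or_create(con, "alice")  # second call finds the same row
```

The race is resolved by the database itself rather than by a SELECT-then-INSERT check, which is exactly why the answer recommends the constraint over manual locking.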
I have a Django project with a long-running (~3 hour) management command.
In my production environment (Apache mod_wsgi) this process fails with a broken pipe (errno 32) at the end, when trying to update the database.
Thank you. | 1 | 1 | 1.2 | 0 | true | 4,644,443 | 1 | 571 | 1 | 0 | 0 | 4,644,317 | A broken pipe usually means that one socket in the communication channel has been closed without notifying the other one. In your case I think it means that the database connection you had established was closed from the database side, so when your code tries to use it, it raises the exception.
Usually the database con... | 1 | 0 | 0 | django long running process database connection | 1 | python,django,apache,mod-wsgi | 0 | 2011-01-10T06:42:00.000 |
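The reconnect-before-the-final-write pattern described above can be sketched as follows. This is a hedged illustration only: it uses the stdlib sqlite3 module as a stand-in so it is self-contained, and the Job class and table name are made up. In a real Django management command you would instead close django.db.connection before the final update and catch django.db.OperationalError.

```python
import sqlite3

class Job:
    """Sketch: retry the final database write after re-establishing a
    connection that died while the long computation was running."""

    def __init__(self):
        self.con = self._connect()

    def _connect(self):
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE IF NOT EXISTS results (value TEXT)")
        return con

    def save(self, value):
        try:
            self.con.execute("INSERT INTO results (value) VALUES (?)", (value,))
        except sqlite3.ProgrammingError:
            # Connection is no longer usable: reconnect and retry once.
            self.con = self._connect()
            self.con.execute("INSERT INTO results (value) VALUES (?)", (value,))
        self.con.commit()

job = Job()
job.con.close()           # simulate the server dropping the idle connection
job.save("final result")  # reconnects transparently and completes the write
count = job.con.execute("SELECT COUNT(*) FROM results").fetchone()[0]
```

With sqlite3 a closed connection raises ProgrammingError; each driver has its own "connection gone" exception, so adjust the except clause to your backend.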
I have a medium size (~100mb) read-only database that I want to put on google app engine. I could put it into the datastore, but the datastore is kind of slow, has no relational features, and has many other frustrating limitations (not going into them here). Another option is loading all the data into memory, but I q... | 2 | 2 | 1.2 | 0 | true | 4,663,353 | 1 | 803 | 2 | 1 | 0 | 4,663,071 | I don't think you're likely to find anything like that...surely not over blobstore. Because if all your data is stored in a single blob, you'd have to read the entire database into memory for any operation, and you said you can't do that.
Using the datastore as your backend is more plausible, but not much. The big issu... | 1 | 0 | 0 | A Read-Only Relational Database on Google App Engine? | 3 | python,google-app-engine,sqlite,relational-database,non-relational-database | 0 | 2011-01-11T21:55:00.000 |
I have a medium size (~100mb) read-only database that I want to put on google app engine. I could put it into the datastore, but the datastore is kind of slow, has no relational features, and has many other frustrating limitations (not going into them here). Another option is loading all the data into memory, but I q... | 2 | 2 | 0.132549 | 0 | false | 4,663,631 | 1 | 803 | 2 | 1 | 0 | 4,663,071 | django-nonrel does not magically provide an SQL database - so it's not really a solution to your problem.
Accessing a blobstore blob like a file is possible, but the SQLite module requires a native C extension, which is not enabled on App Engine. | 1 | 0 | 0 | A Read-Only Relational Database on Google App Engine? | 3 | python,google-app-engine,sqlite,relational-database,non-relational-database | 0 | 2011-01-11T21:55:00.000 |
I'm trying to use clr.AddReference to add sqlite3 functionality to a simple IronPython program I'm writing, but every time I try to reference System.Data.SQLite I get this error:
Traceback (most recent call last):
File "", line 1, in
IOError: System.IO.IOException: Could not add reference to assembly System.Data... | 1 | 1 | 0.197375 | 0 | false | 4,696,478 | 0 | 1,695 | 1 | 0 | 0 | 4,682,960 | My first guess is that you're trying to load the x86 (32-bit) System.Data.SQLite.dll in a x64 (64-bit) process, or vice versa. System.Data.SQLite.dll contains the native sqlite3 library, which must be compiled for x86 or x64, so there is a version of System.Data.SQLite.dll for each CPU.
If you're using the console, ipy... | 1 | 1 | 0 | Adding System.Data.SQLite reference in IronPython | 1 | ado.net,ironpython,system.data.sqlite | 0 | 2011-01-13T17:11:00.000 |
I read somewhere that to save data to a SQLite3 database in Python, the method commit of the connection object should be called. Yet I have never needed to do this. Why? | 18 | 3 | 0.119427 | 0 | false | 15,967,816 | 0 | 20,808 | 1 | 0 | 0 | 4,699,605 | Python's sqlite3 module issues a BEGIN statement automatically before data-modifying statements such as "INSERT" or "UPDATE", and (in the versions of that era) it implicitly committed the open transaction before executing a non-DML statement such as CREATE TABLE. Note that db.close() does not commit for you: if you close the connection with a transaction still open, the pending changes are lost. | 1 | 0 | 0 | Why doesn’t SQLite3 require a commit() call to save data? | 5 | python,transactions,sqlite,autocommit | 0 | 2011-01-15T12:36:00.000
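A small self-contained demonstration of why commit() matters: until the writing connection commits, a second connection to the same database file cannot see the new row. (The temp-file path here is just scaffolding for the demo.)

```python
import os
import sqlite3
import tempfile

# A file-backed database, so two independent connections can open it.
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()                               # make the table visible to others
writer.execute("INSERT INTO t VALUES (1)")    # opens an implicit transaction

reader = sqlite3.connect(path)
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # insert not visible yet

writer.commit()                               # now the row is durable and shared
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```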
I've got a situation where I'm contemplating using subversion/svn as the repository/version control system for a project. I'm trying to figure out if it's possible (and if so, how) to have the subversion system, in a post-commit hook/process, write the user/file/time (and maybe msg) to either an extern... | 1 | 0 | 1.2 | 0 | true | 4,701,984 | 0 | 1,005 | 2 | 0 | 0 | 4,701,902 | I would say that's possible, but you are going to need a bit of work to retrieve the username, date and commit message.
Subversion invokes the post-commit hook with the repo path and the number of revision which was just committed as arguments.
In order to retrieve the information you're looking for, you will need to u... | 1 | 0 | 0 | subversion post commit hooks | 2 | php,python,svn,hook,svn-hooks | 1 | 2011-01-15T20:14:00.000 |
I've got a situation where I'm contemplating using subversion/svn as the repository/version control system for a project. I'm trying to figure out if it's possible (and if so, how) to have the subversion system, in a post-commit hook/process, write the user/file/time (and maybe msg) to either an extern... | 1 | 0 | 0 | 0 | false | 4,701,973 | 0 | 1,005 | 2 | 0 | 0 | 4,701,902 | Indeed it is very possible: in your repository root there should be a folder named hooks, inside which should be a file named post-commit (if not, create one); any bash code you put there will execute after every commit.
Note: two variables are passed into the script; $1 is the repository, a... | 1 | 0 | 0 | subversion post commit hooks | 2 | php,python,svn,hook,svn-hooks | 1 | 2011-01-15T20:14:00.000
According to the Bigtable original article, a column key of a Bigtable is named using "family:qualifier" syntax where column family names must be printable but qualifiers may be arbitrary strings. In the application I am working on, I would like to specify the qualifiers using Chinese words (or phrase). Is it possible ... | 1 | 2 | 0.379949 | 0 | false | 4,718,951 | 1 | 134 | 1 | 1 | 0 | 4,712,143 | The Datastore is the only interface to the underlying storage on App Engine. You should be able to use any valid UTF-8 string as a kind name, key name, or property name, however. | 1 | 0 | 0 | Is there an API of Google App Engine provided to better configure the Bigtable besides Datastore? | 1 | python,google-app-engine | 0 | 2011-01-17T10:32:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (... | 2 | 1 | 0.033321 | 0 | false | 10,204,815 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | I've used mongoengine with django but you need to create a file like mongo_models.py for example. In that file you define your Mongo documents. You then create forms to match each Mongo document. Each form has a save method which inserts or updates whats stored in Mongo. Django forms are designed to plug into any data ... | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (... | 2 | 9 | 1.2 | 0 | true | 4,718,924 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | There's no reason why you can't use one of the standard RDBMSs for all the standard Django apps, and then Mongo for your app. You'll just have to replace all the standard ways of processing things from the Django ORM with doing it the Mongo way.
So you can keep urls.py and its neat pattern matching, views will still ge... | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (... | 2 | -1 | -0.033321 | 0 | false | 4,719,398 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | Primary pitfall (for me): no JOINs! | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (... | 2 | 0 | 0 | 0 | false | 4,719,167 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | Upfront, it won't work for any existing Django app that ships it's models. There's no backend for storing Django's Model data in mongodb or other NoSQL storages at the moment and, database backends aside, models themselves are somewhat of a moot point, because once you get in to using someones app (django.contrib apps ... | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I want to try Mongodb w/ mongoengine. I'm new to Django and databases and I'm having a fit with Foreign Keys, Joins, Circular Imports (you name it). I know I could eventually work through these issues but Mongo just seems like a simpler solution for what I am doing. My question is I'm using a lot of pluggable apps (... | 2 | 0 | 0 | 0 | false | 4,728,500 | 1 | 2,780 | 5 | 0 | 0 | 4,718,580 | I have recently tried this (although without Mongoengine). There are a huge number of pitfalls, IMHO:
No admin interface.
No Auth: django.contrib.auth relies on the DB interface.
Many things rely on django.contrib.auth.User. For example, the RequestContext class. This is a huge hindrance.
No Registration (Relies ... | 1 | 0 | 0 | Converting Django project from MySQL to Mongo, any major pitfalls? | 6 | python,django,mongodb,mongoengine | 0 | 2011-01-17T22:22:00.000 |
I have a .sql file containing thousands of individual insert statements. It takes forever to do them all. I am trying to figure out a way to do this more efficiently. In python the sqlite3 library can't do things like ".read" or ".import" but executescript is too slow for that many inserts.
I installed the sqlite3... | 6 | 1 | 0.049958 | 0 | false | 4,724,461 | 0 | 1,859 | 2 | 0 | 0 | 4,719,836 | Use a parameterized query
and
Use a transaction. | 1 | 0 | 0 | Python and sqlite3 - adding thousands of rows | 4 | python,sql,django,sqlite | 0 | 2011-01-18T02:01:00.000 |
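The two suggestions above (a parameterized query plus a single transaction) combine naturally with sqlite3's executemany, which compiles the statement once and pays for only one commit instead of one per row. A minimal sketch with an invented table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER, name TEXT)")
rows = [(i, "name-%d" % i) for i in range(10000)]

# One parameterized statement for all rows, inside a single transaction:
# the "with con" block commits once at the end instead of after every INSERT.
with con:
    con.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)

count = con.execute("SELECT COUNT(*) FROM items").fetchone()[0]
```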
I have a .sql file containing thousands of individual insert statements. It takes forever to do them all. I am trying to figure out a way to do this more efficiently. In python the sqlite3 library can't do things like ".read" or ".import" but executescript is too slow for that many inserts.
I installed the sqlite3... | 6 | 2 | 0.099668 | 0 | false | 13,787,939 | 0 | 1,859 | 2 | 0 | 0 | 4,719,836 | In addition to running the queries in bulk inside a single transaction, also try VACUUM and ANALYZEing the database file. It helped a similar problem of mine. | 1 | 0 | 0 | Python and sqlite3 - adding thousands of rows | 4 | python,sql,django,sqlite | 0 | 2011-01-18T02:01:00.000 |
At my organization, PostgreSQL databases are created with a 20-connection limit as a matter of policy. This tends to interact poorly when multiple applications are in play that use connection pools, since many of those open up their full suite of connections and hold them idle.
As soon as there are more than a couple o... | 3 | 2 | 0.379949 | 0 | false | 4,729,629 | 0 | 303 | 1 | 0 | 0 | 4,729,361 | I think it's reasonable to require one connection per concurrent activity, and it's reasonable to assume that concurrent HTTP requests are concurrently executed.
Now, the number of concurrent HTTP requests you want to process should scale with a) the load on your server, and b) the number of CPUs you have available. If... | 1 | 0 | 0 | How can I determine what my database's connection limits should be? | 1 | python,database,sqlalchemy,pylons,connection-pooling | 0 | 2011-01-18T21:40:00.000 |
The question is conceptual rather than direct.
What's the best solution to keep two different calendars synchronised? I can run a cron job for example every minute, I can keep additional information in database. How to avoid events conflicts?
So far I was thinking about these two solutions. The first one is keeping a database... | 0 | 0 | 1.2 | 0 | true | 4,738,228 | 0 | 516 | 1 | 0 | 0 | 4,737,852 | Most calendars, including the Google calendar, have ways to import and synchronize data. Just import the gdata information (perhaps you need to make it into ics first, I don't know) into the Google calendar. | 1 | 0 | 1 | Two calendars synchronization | 1 | python,synchronization,calendar | 0 | 2011-01-19T16:27:00.000
I can connect to a Oracle 10g release 2 server using instant client. Using pyodbc and cx_Oracle.
Using either module, I can execute a select query without any problems, but when I try to update a table, my program crashes.
For example,
SELECT * FROM table WHERE col1 = 'value'; works fine.
UPDATE table SET col2 = 'value... | 2 | 1 | 0.099668 | 0 | false | 4,753,975 | 0 | 587 | 2 | 0 | 0 | 4,748,962 | Use the instant client with SQL*Plus and see if you can run the update. If there's a problem, SQL*Plus is production quality, so won't crash and it should give you a reasonable error message. | 1 | 0 | 0 | Oracle instant client can't execute sql update | 2 | python,oracle,pyodbc,cx-oracle,instantclient | 0 | 2011-01-20T15:26:00.000 |
I can connect to a Oracle 10g release 2 server using instant client. Using pyodbc and cx_Oracle.
Using either module, I can execute a select query without any problems, but when I try to update a table, my program crashes.
For example,
SELECT * FROM table WHERE col1 = 'value'; works fine.
UPDATE table SET col2 = 'value... | 2 | 0 | 0 | 0 | false | 4,749,022 | 0 | 587 | 2 | 0 | 0 | 4,748,962 | Sounds more like your user you are connecting with doesn't have those privileges on that table. Do you get an ORA error indicating insufficient permissions when performing the update? | 1 | 0 | 0 | Oracle instant client can't execute sql update | 2 | python,oracle,pyodbc,cx-oracle,instantclient | 0 | 2011-01-20T15:26:00.000 |
I have a massive data set of customer information (100s of millions of records, 50+ tables).
I am writing a python (twisted) app that I would like to interact with the dataset, performing table manipulation. What I really need is an abstraction of 'table', so I can add/remove/alter columns/tables without having to res... | 1 | 0 | 1.2 | 0 | true | 4,764,551 | 0 | 858 | 1 | 0 | 0 | 4,764,476 | I thought that ORM solutions had to do with DQL (Data Query Language), not DDL (Data Definition Language). You don't use ORM to add, alter, or remove columns at runtime. You'd have to be able to add, alter, or remove object attributes and their types at the same time.
ORM is about dynamically generating SQL and devel... | 1 | 0 | 0 | Python ORM for massive data set | 4 | python,orm | 0 | 2011-01-21T22:26:00.000 |
I'm trying to use sqlalchemy on Cygwin with a MSSQL backend but I cannot seem to get any of the MSSQL Python DB APIs installed on Cygwin. Is there one that is known to work? | 2 | 0 | 0 | 0 | false | 5,013,126 | 0 | 682 | 1 | 0 | 0 | 4,770,083 | FreeTDS + unixodbc + pyodbc stack will work on Unix-like systems and should therefore work just as well in Cygwin. You should use version 8.0 of TDS protocol. This can be configured in connection string. | 1 | 0 | 0 | Which Python (sqlalchemy) mssql DB API works in Cygwin? | 1 | python,sql-server,cygwin,sqlalchemy | 0 | 2011-01-22T19:34:00.000 |
A website I am making revolves around a search utility, and a want to have something on the homepage that lists the top 10 (or something) most searched queries of the day.
What would be the easiest / most efficient way of doing this?
Should I use a sql database, or just a text file containing the top 10 queries and a c... | 0 | 2 | 1.2 | 0 | true | 4,778,081 | 0 | 98 | 1 | 0 | 0 | 4,778,058 | Put the queries in a table, with one row per distinct query, and a column to count. Insert if the query doesn't exist already, or otherwise increment the query row counter.
Put a cron job together that empties the table at 12 midnight. Use transactions to prevent two different requests from colliding. | 1 | 0 | 0 | How to make a "top queries" page | 2 | python,sql,multithreading | 0 | 2011-01-24T02:23:00.000
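The insert-or-increment scheme described above can be sketched with sqlite3 (table and function names are invented for the example; the same SQL works on most backends):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE query_counts (query TEXT PRIMARY KEY, hits INTEGER NOT NULL)")

def record_search(con, query):
    # One row per distinct query: create it if missing, then bump the counter.
    with con:
        con.execute("INSERT OR IGNORE INTO query_counts VALUES (?, 0)", (query,))
        con.execute("UPDATE query_counts SET hits = hits + 1 WHERE query = ?", (query,))

def top_queries(con, n=10):
    return con.execute(
        "SELECT query FROM query_counts ORDER BY hits DESC LIMIT ?", (n,)
    ).fetchall()

def nightly_reset(con):
    # What the midnight cron job would run.
    with con:
        con.execute("DELETE FROM query_counts")

for q in ["cats", "dogs", "cats", "cats", "dogs", "fish"]:
    record_search(con, q)
top = [row[0] for row in top_queries(con, 2)]
nightly_reset(con)
remaining = con.execute("SELECT COUNT(*) FROM query_counts").fetchone()[0]
```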
In some project I implement user-requested mapping (at runtime) of two tables which are connected by a 1-to-n relation (one table has a ForeignKey field).
From what I get from the documentation, the usual way is to add a orm.relation to the mapped properties with a mapped_collection as collection_class on the non-forei... | 0 | 1 | 0.197375 | 0 | false | 5,594,860 | 1 | 664 | 1 | 0 | 0 | 4,782,344 | I found the answer myself by now:
If you use an orm.relation from each side and no backrefs, you have to use back_populates or if you mess around at one side, it won't be properly updated in the mapping on the other side.
Therefore, an orm.relation from each side instead of an automated backref IS possible but you have... | 1 | 0 | 0 | SQLAlchemy - difference between mapped orm.relation with backref or two orm.relation from both sides | 1 | python,database,sqlalchemy,relation | 0 | 2011-01-24T13:10:00.000 |
I'm writing a network-scheduling-like program in Python 2.6+ in which I have a complex queue requirement: the queue should store packets, should retrieve by timestamp or by packet ID in O(1), should be able to retrieve all the packets below a certain threshold, sort packets by priorities, etc. It should insert and delete wit... | 0 | 0 | 0 | 0 | false | 4,803,269 | 0 | 94 | 1 | 0 | 0 | 4,802,900 | A database is just some indexes and fancy algorithms wrapped around a single data structure -- a table. You don't have a lot of control over what happens under the hood.
I'd try using the built-in Python datastructures. | 1 | 0 | 0 | Need advice on customized datastructure vs using in-memory DB? | 2 | python,data-structures | 0 | 2011-01-26T09:20:00.000 |
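The built-in-structures suggestion above can be made concrete by combining a dict (O(1) lookup by packet ID) with a heapq keyed on timestamp (cheap retrieval of everything below a threshold). This is a sketch only; the class and field names are invented, and deletions are handled lazily when the heap is drained:

```python
import heapq

class PacketQueue:
    def __init__(self):
        self.by_id = {}   # packet_id -> (timestamp, payload): O(1) lookup
        self.heap = []    # (timestamp, packet_id): ordered by time

    def insert(self, packet_id, timestamp, payload):
        self.by_id[packet_id] = (timestamp, payload)
        heapq.heappush(self.heap, (timestamp, packet_id))

    def get(self, packet_id):
        return self.by_id.get(packet_id)          # O(1)

    def delete(self, packet_id):
        self.by_id.pop(packet_id, None)           # heap entry filtered later

    def pop_older_than(self, threshold):
        """Drain all packets with timestamp below the threshold,
        silently skipping entries that were lazily deleted."""
        out = []
        while self.heap and self.heap[0][0] < threshold:
            ts, pid = heapq.heappop(self.heap)
            if pid in self.by_id:
                out.append(pid)
                del self.by_id[pid]
        return out

q = PacketQueue()
q.insert("a", 1, b"x")
q.insert("b", 5, b"y")
q.insert("c", 3, b"z")
q.delete("c")
expired = q.pop_older_than(4)   # only "a" survives: "c" was deleted, "b" is too new
```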
I am new to python and its workings.
I have an Excel spreadsheet which was built using some VBA macros.
Now I want to invoke Python to do some of the jobs...
My question then is: How can I use python script instead of VBA in an excel spreadsheet?
An example of such will be appreciated. | 2 | 0 | 0 | 0 | false | 4,872,985 | 0 | 2,789 | 1 | 0 | 0 | 4,829,509 | I've always done the manipulation of Excel spreadsheets and Word documents with standalone scripts which use COM objects to manipulate the documents. I've never come across a good use case for putting Python into a spreadsheet in place of VBA. | 1 | 0 | 1 | Use of python script instead of VBA in Excel | 2 | python,excel | 0 | 2011-01-28T14:49:00.000 |
How do I use the Werkzeug framework without any ORM like SQLAlchemy? In my case, it's a lot of effort to rewrite all the tables and columns in SQLAlchemy from existing tables & data.
How do I query the database and make an object from the database output?
In my case now, I use Oracle with cx_Oracle. If you have a solut... | 1 | 0 | 0 | 0 | false | 4,838,669 | 1 | 445 | 1 | 0 | 0 | 4,838,528 | Is it a problem to use normal DB API, issue regular SQL queries, etc? cx_Oracle even has connection pooling built in to help you manage connections. | 1 | 0 | 0 | Werkzeug without ORM | 3 | python,orm,werkzeug | 0 | 2011-01-29T18:09:00.000
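Going ORM-less with the plain DB-API is straightforward because cursor.description names the selected columns, so rows can be turned into dicts (or objects) directly. The sketch below uses sqlite3 only so it runs anywhere; the same helper works with cx_Oracle, though parameter placeholders differ per driver (? for sqlite3, :name for cx_Oracle).

```python
import sqlite3

def query_as_dicts(con, sql, params=()):
    """Run a query through any DB-API connection and return each row
    as a dict keyed by column name, using cursor.description."""
    cur = con.cursor()
    cur.execute(sql, params)
    columns = [d[0] for d in cur.description]
    return [dict(zip(columns, row)) for row in cur.fetchall()]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, age INTEGER)")
con.execute("INSERT INTO people VALUES ('ann', 30)")
rows = query_as_dicts(con, "SELECT name, age FROM people")
```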
I have several occasions where I want to collect data when in the field. This is in situations where I do not always have access to my postgres database.
To keep things in sync, it would be excellent if I could use psycopg2 functions offline to generate queries that can be held back and once I am able to connect to the... | 12 | 0 | 0 | 0 | false | 4,880,978 | 0 | 3,099 | 1 | 0 | 0 | 4,879,804 | It seems like it would be easier and more versatile to store the data to be inserted later in another structure. Perhaps a csv file. Then when you connect you can run through that table, but you can also easily do other things with that CSV if necessary. | 1 | 0 | 0 | Use psycopg2 to construct queries without connection | 2 | python,psycopg2,offline-mode | 0 | 2011-02-02T21:00:00.000 |
After setting up a django site and running on the dev server, I have finally gotten around to figuring out deploying it in a production environment using the recommended mod_wsgi/apache22. I am currently limited to deploying this on a Windows XP machine.
My problem is that several django views I have written use the py... | 6 | 1 | 0.099668 | 0 | false | 8,750,220 | 1 | 2,617 | 1 | 0 | 0 | 4,882,605 | I ran into a couple of issues trying to use subprocess under this configuration. Since I am not sure what specifically you had trouble with I can share a couple of things that were not easy for me to solve but in hindsight seem pretty trivial.
I was receiving permissions related errors when trying to execute an applic... | 1 | 0 | 0 | Django + Apache + Windows WSGIDaemonProcess Alternative | 2 | python,django,apache,subprocess,mod-wsgi | 0 | 2011-02-03T04:07:00.000 |
I'm storing MySQL DateTimes in UTC, and let the user select their time zone, storing that information.
However, I want to to some queries that uses group by a date. Is it better to store that datetime information in UTC (and do the calculation every time) or is it better to save it in the timezone given? Since time z... | 0 | 1 | 0.099668 | 0 | false | 4,928,246 | 0 | 123 | 2 | 0 | 0 | 4,928,220 | It's almost always better to save the time information in UTC, and convert it to local time when needed for presentation and display.
Otherwise, you will go stark raving mad trying to manipulate and compare dates and times in your system because you will have to convert each time to UTC time for comparison and manipula... | 1 | 0 | 0 | How to handle time zones in a CMS? | 2 | python,mysql,timezone | 0 | 2011-02-08T00:10:00.000 |
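The store-UTC-convert-for-display rule, and the reason a GROUP BY date depends on the zone, can be shown in a few lines. A fixed offset is used here so the sketch needs no tz database; in practice you would build the user's zone from their stored preference.

```python
from datetime import datetime, timedelta, timezone

# Stored value: always UTC.
stored_utc = datetime(2011, 2, 8, 0, 10, tzinfo=timezone.utc)

# Converted only at display time, per user.
user_tz = timezone(timedelta(hours=-5))   # e.g. a user at UTC-5
local = stored_utc.astimezone(user_tz)

# The same instant falls on different calendar days in the two zones,
# which is why grouping "by date" must pick a zone deliberately.
utc_day = stored_utc.date().isoformat()
local_day = local.date().isoformat()
```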
I'm storing MySQL DateTimes in UTC, and let the user select their time zone, storing that information.
However, I want to to some queries that uses group by a date. Is it better to store that datetime information in UTC (and do the calculation every time) or is it better to save it in the timezone given? Since time z... | 0 | 3 | 1.2 | 0 | true | 4,928,244 | 0 | 123 | 2 | 0 | 0 | 4,928,220 | Generally always store in UTC and convert for display, it's the only sane way to do time differences etc. Or when somebody next year decides to change the summer time dates. | 1 | 0 | 0 | How to handle time zones in a CMS? | 2 | python,mysql,timezone | 0 | 2011-02-08T00:10:00.000 |
I am having a problem when I do a query to MongoDB using pymongo.
I do not know how to avoid getting the _id for each record.
I am doing something like this,
result = db.meta.find(filters, [
'model',
'fields.parent',
'field... | 1 | 0 | 0 | 0 | false | 4,941,686 | 0 | 803 | 1 | 0 | 0 | 4,937,817 | The object id is a core part of each document. One option is to convert the BSON/JSON document into a native data structure (depending on your implementation language) and remove _id at that level. Alternatively, pymongo lets you exclude it in the query itself by passing a dict-style projection such as {'_id': 0} together with the fields you want; with the list-style fields argument shown above, _id is always included by default. | 1 | 0 | 0 | PYMongo: Keep returning _id in every record after quering, How can I exclude this record? | 2 | python,mongodb,pymongo | 0 | 2011-02-08T20:08:00.000
I am in the middle of a project involving trying to grab numerous pieces of information out of 70GB worth of xml documents and loading it into a relational database (in this case postgres) I am currently using python scripts and psycopg2 to do this inserts and whatnot. I have found that as the number of rows in the som... | 4 | 2 | 0.057081 | 0 | false | 4,969,077 | 0 | 4,138 | 2 | 0 | 0 | 4,968,837 | Considering the process was fairly efficient before and only now when the dataset grew up it slowed down my guess is it's the indexes. You may try dropping indexes on the table before the import and recreating them after it's done. That should speed things up. | 1 | 0 | 0 | Postgres Performance Tips Loading in billions of rows | 7 | python,database-design,postgresql,psycopg2 | 0 | 2011-02-11T12:11:00.000 |
I am in the middle of a project involving trying to grab numerous pieces of information out of 70GB worth of xml documents and loading it into a relational database (in this case postgres) I am currently using python scripts and psycopg2 to do this inserts and whatnot. I have found that as the number of rows in the som... | 4 | 0 | 0 | 0 | false | 4,968,869 | 0 | 4,138 | 2 | 0 | 0 | 4,968,837 | I'd look at the rollback logs. They've got to be getting pretty big if you're doing this in one transaction.
If that's the case, perhaps you can try committing a smaller transaction batch size. Chunk it into smaller blocks of records (1K, 10K, 100K, etc.) and see if that helps. | 1 | 0 | 0 | Postgres Performance Tips Loading in billions of rows | 7 | python,database-design,postgresql,psycopg2 | 0 | 2011-02-11T12:11:00.000 |
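The chunked-commit suggestion above looks like this in code. It is a hedged sketch using sqlite3 as a stand-in for Postgres (so it runs standalone); the table name and batch sizes are invented, and with psycopg2 the structure is identical since commit() lives on the connection in any DB-API driver.

```python
import sqlite3

def load_in_batches(con, rows, batch_size=10000):
    """Insert rows, committing every batch_size records so no single
    transaction (and its rollback bookkeeping) grows unbounded."""
    cur = con.cursor()
    pending = 0
    for row in rows:
        cur.execute("INSERT INTO data VALUES (?)", row)
        pending += 1
        if pending >= batch_size:
            con.commit()
            pending = 0
    con.commit()  # flush the final partial batch

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (x INTEGER)")
load_in_batches(con, ((i,) for i in range(25000)), batch_size=10000)
total = con.execute("SELECT COUNT(*) FROM data").fetchone()[0]
```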
I am trying to find the best solution (performance/easy code) for the following situation:
Considering a database system with two tables, A (production table) and A'(cache table):
Future rows are added first into A' table in order to not disturb the production one.
When a timer says go (at midnight, for example) rows f... | 0 | 1 | 0.197375 | 0 | false | 4,973,738 | 1 | 611 | 1 | 0 | 0 | 4,973,316 | The "best" solution according to the criteria you've laid out so far would just be to insert into the production table.
...unless there's actually something extremely relevant you're not telling us | 1 | 0 | 0 | Materialize data from cache table to production table [PostgreSQL] | 1 | python,postgresql,triggers,materialized-views | 0 | 2011-02-11T19:46:00.000 |
When exactly the database transaction is being commited? Is it for example at the end of every response generation?
To explain the question: I need to develop a somewhat more sophisticated application where I have to control DB transactions more or less manually. Especially I have to be able to design a set of forms with som... | 2 | 1 | 0.099668 | 0 | false | 5,443,158 | 1 | 1,224 | 1 | 0 | 0 | 4,979,392 | You can call db.commit() and db.rollback() pretty much everywhere. If you do not and the action does not raise an exception, it commits before returning a response to the client. If it raises an exception and it is not explicitly caught, it rolls back. | 1 | 0 | 0 | web2py and DB transactions | 2 | python,web2py | 0 | 2011-02-12T17:14:00.000
I am using TG2.1 on WinXP.
Python ver is 2.6.
Trying to use sqlautocode (0.5.2) for working with my existing MySQL schema.
SQLAlchemy ver is 0.6.6
import sqlautocode # works OK
While trying to reflect the schema ----
sqlautocode mysql:\\username:pswd@hostname:3306\schema_name -o tables.py
SyntaxError: invalid ... | 2 | 1 | 0.099668 | 0 | false | 5,003,413 | 0 | 659 | 1 | 0 | 0 | 4,994,838 | Hey, I got it right somehow.
The problem seems to be version mismatch between SA 0.6 & sqlautocode 0.6
Seems that they don't work in tandom.
So I removed those & installed SA 0.5
Now it's working.
Thanks,
Vineet Deodhar. | 1 | 0 | 0 | sqlautocode for mysql giving syntax error | 2 | python,web-applications,turbogears2 | 0 | 2011-02-14T16:55:00.000 |
This relates to primary key constraint in SQLAlchemy & sqlautocode.
I have SA 0.5.1 & sqlautocode 0.6b1
I have a MySQL table without primary key.
sqlautocode spits traceback that "could not assemble any primary key columns".
Can I rectify this with a patch so that it will reflect tables w/o primary key?
Thanks,
Vineet D... | 0 | 0 | 0 | 0 | false | 5,292,555 | 1 | 321 | 3 | 0 | 0 | 5,003,475 | We've succeeded in faking sqa if the there's combination of columns on the underlying table that uniquely identify it.
If this is your own table and you're not live, add a primary key integer column or something.
We've even been able to map an existing legacy table in a database with a) no pk and b) no proxy for a prim... | 1 | 0 | 0 | sqlautocode : primary key required in tables? | 3 | python,web-applications,turbogears,turbogears2 | 0 | 2011-02-15T12:11:00.000 |
This relates to primary key constraint in SQLAlchemy & sqlautocode.
I have SA 0.5.1 & sqlautocode 0.6b1
I have a MySQL table without primary key.
sqlautocode spits traceback that "could not assemble any primary key columns".
Can I rectify this with a patch so that it will reflect tables w/o primary key?
Thanks,
Vineet D... | 0 | 0 | 0 | 0 | false | 5,292,729 | 1 | 321 | 3 | 0 | 0 | 5,003,475 | If the problem is that sqlautocode will not generate your class code because it cannot determine the PKs of the table, then you would probably be able to change that code to fit your needs (even if it means generating SQLA code that doesn't have PKs). Eventually, if you're using the ORM side of SQLA, you're going to ne... | 1 | 0 | 0 | sqlautocode : primary key required in tables? | 3 | python,web-applications,turbogears,turbogears2 | 0 | 2011-02-15T12:11:00.000 |
This relates to primary key constraint in SQLAlchemy & sqlautocode.
I have SA 0.5.1 & sqlautocode 0.6b1
I have a MySQL table without primary key.
sqlautocode spits traceback that "could not assemble any primary key columns".
Can I rectify this with a patch so that it will reflect tables w/o primary key?
Thanks,
Vineet D... | 0 | 0 | 0 | 0 | false | 5,003,573 | 1 | 321 | 3 | 0 | 0 | 5,003,475 | I don't think so. How an ORM is suposed to persist an object to the database without any way to uniquely identify records?
However, most ORMs accept a primary_key argument so you can indicate the key if it is not explicitly defined in the database. | 1 | 0 | 0 | sqlautocode : primary key required in tables? | 3 | python,web-applications,turbogears,turbogears2 | 0 | 2011-02-15T12:11:00.000 |
The python unit testing framework called nosetest has a plugin for sqlalchemy, however there is no documentation for it that I can find. I'd like to know how it works, and if possible, see a code example. | 3 | 0 | 0 | 0 | false | 10,268,378 | 0 | 340 | 1 | 0 | 0 | 5,009,112 | It is my understanding that this plugin is only meant for unit testing SQLAlchemy itself and not as a general tool. Perhaps that is why there are no examples or documentation? Posting to the SQLAlchemy mailing list is likely to give you a better answer "straight from the horse's mouth". | 1 | 0 | 0 | How does the nosetests sqlalchemy plugin work? | 1 | python,sqlalchemy,nosetests | 0 | 2011-02-15T20:26:00.000 |
We have a system which generates reports in XLS using Spreadsheet_Excel_Writer for smaller files and in case of huge files we just export them as CSVs.
We now want to export excel sheets which are multicolor etc. as a part of report generation, which in excel could be done through a few macros.
Is there any good e... | 0 | 0 | 1.2 | 0 | true | 5,028,703 | 0 | 515 | 1 | 0 | 0 | 5,028,536 | It's your "excel sheets with macros" that is going to cause you no end of problems. If you're on a Windows platform, with Excel installed, then PHP's COM extension should allow you to do this. Otherwise, I'm not aware of any PHP library which allows you to create macros... not even PHPExcel. I suspect the same will ap... | 1 | 0 | 0 | Good xls exporter to generate excel sheets automatically with a few macros from any programming language? | 1 | java,php,python,macros,xls | 0 | 2011-02-17T11:49:00.000
Basically I'm looking for an equivalent of DataMapper.auto_upgrade! from the Ruby world.
In other words:
change the model
run some magic -> current db schema is investigated and changed to reflect the model
profit
Of course, there are cases when it's impossible for such an alteration to be non-destructive, e.g. when you... | 0 | 0 | 0 | 0 | false | 5,037,471 | 1 | 220 | 1 | 0 | 0 | 5,036,118 | Sqlalchemy-migrate (http://packages.python.org/sqlalchemy-migrate/) is intended to help do these types of operations. | 1 | 0 | 0 | Can SQLAlchemy do a non-destructive alter of the db comparing the current model with db schema? | 1 | python,orm,sqlalchemy | 0 | 2011-02-17T23:44:00.000
I have recently converted my workspace file format for my application to SQLite. In order to ensure robust operation on NFS I've used a common update policy: I do all modifications to a copy stored in a temp location on the local hard disk. Only when saving do I modify the original file (potentially on NFS) by copying...
The SQLite backup API was designed to solve exactly your problem. You can either backup directly to the NFS database or to another local temp file and then copy that. The backup API d... | 1 | 0 | 0 | How to ensure a safe file sync with sqlite and NFS | 1 | python,sqlite,sqlalchemy,nfs | 0 | 2011-02-18T15:44:00.000 |
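For reference, a minimal sketch of the backup approach using Python's standard sqlite3 module; the backup API is exposed as sqlite3.Connection.backup from Python 3.7, and the paths and function name here are placeholders. Backing up to a local temp file first and then copying it, as the answer suggests, avoids relying on NFS locking for the destination.

```python
import sqlite3

def save_workspace(local_path, target_path):
    """Copy the working database over the save target using SQLite's
    online backup API.  The copy is taken page by page and is
    transactionally consistent even if the source is in use."""
    src = sqlite3.connect(local_path)
    dst = sqlite3.connect(target_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```

The same call also works the other way around (NFS source, local destination) when loading the workspace.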
Consider this test case:
import sqlite3
con1 = sqlite3.connect('test.sqlite')
con1.isolation_level = None
con2 = sqlite3.connect('test.sqlite')
con2.isolation_level = None
cur1 = con1.cursor()
cur2 = con2.cursor()
cur1.execute('CREATE TABLE foo (bar INTEGER, baz STRING)')
con1.isolation_level = 'IMMEDIATE'
cur1.execut... | 0 | 3 | 1.2 | 0 | true | 5,051,345 | 0 | 488 | 1 | 0 | 0 | 5,051,151 | You can't both commit and rollback the same transaction. con1.commit() ends your transaction on that cursor. The next con1.rollback() is either being silently ignored or is rolling back an empty transaction. | 1 | 0 | 0 | Python sqlite3 module not rolling back transactions | 1 | python,sqlite,rollback | 0 | 2011-02-19T13:59:00.000 |
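A minimal stdlib demonstration of that point - after commit() the transaction is over, so a following rollback() has nothing left to undo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE foo (bar INTEGER)")
cur.execute("INSERT INTO foo VALUES (1)")
con.commit()    # the transaction ends here; the insert is now durable
con.rollback()  # acts on a new, empty transaction - cannot undo the commit
print(cur.execute("SELECT bar FROM foo").fetchall())  # -> [(1,)]
```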
I am developing a multiplayer gaming server that uses Django for the webserver (HTML frontend, user authentication, games available, leaderboard, etc.) and Twisted to handle connections between the players and the games and to interface with the games themselves. The gameserver, the webserver, and the database may run... | 9 | 2 | 0.197375 | 0 | false | 5,051,832 | 1 | 2,454 | 2 | 0 | 0 | 5,051,408 | I would just avoid the Django ORM, it's not all that and it would be a pain to access outside of a Django context (witness the work that was required to make Django support multiple databases). Twisted database access always requires threads (even with twisted.adbapi), and threads give you access to any ORM you choose.... | 1 | 0 | 0 | Sharing a database between Twisted and Django | 2 | python,database,django,twisted | 0 | 2011-02-19T14:42:00.000 |
I am developing a multiplayer gaming server that uses Django for the webserver (HTML frontend, user authentication, games available, leaderboard, etc.) and Twisted to handle connections between the players and the games and to interface with the games themselves. The gameserver, the webserver, and the database may run... | 9 | 10 | 1.2 | 0 | true | 5,051,760 | 1 | 2,454 | 2 | 0 | 0 | 5,051,408 | First of all I'd identify why you need both Django and Twisted. Assuming you are comfortable with Twisted using twisted.web and auth will easily be sufficient and you'll be able to reuse your database layer for both the frontend and backend apps.
Alternatively you could look at it the other way, what is Twisted doing ... | 1 | 0 | 0 | Sharing a database between Twisted and Django | 2 | python,database,django,twisted | 0 | 2011-02-19T14:42:00.000 |
I am trying to set up MySQL for Python (the MySQLdb package) in Windows so that I can use it in the Django web framework.
I have just installed MySQL Community Server 5.5.9 and I have managed to run it and test it using the testing procedures suggested in the MySQL 5.5 Reference Manual. However, I discovered that I still don't have t... | 5 | 2 | 1.2 | 0 | true | 5,412,380 | 1 | 854 | 1 | 0 | 0 | 5,059,883 | I found that the key was actually generated under HKEY_CURRENT_USER instead of HKEY_LOCAL_MACHINE. Thanks. | 1 | 0 | 0 | MySQL AB, MySQL Server 5.5 Folder in HKEY_LOCAL_MACHINE not present | 1 | python,mysql,django | 0 | 2011-02-20T20:43:00.000 |
I'm creating a small website with Django, and I need to calculate statistics with data taken from several tables in the database.
For example (nothing to do with my actual models), for a given user, let's say I want all birthday parties he has attended, and people he spoke with in said parties. For this, I would need a... | 3 | 1 | 1.2 | 0 | true | 5,064,564 | 1 | 110 | 2 | 0 | 0 | 5,063,658 | I recommend extending Django's Model-Template-View approach with a controller. I usually have a controller.py within my apps which is the only interface to the data sources. So in your above case I'd have something like get_all_parties_and_people_for_user(user).
This is especially useful when your "data taken from seve... | 1 | 0 | 0 | Correct way of implementing database-wide functionality | 2 | python,database,django,django-models,coding-style | 0 | 2011-02-21T08:17:00.000 |
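A framework-free sketch of that controller idea, using stdlib sqlite3 in place of the Django ORM; the table layout and the function name are made up for illustration. The point is that views call this module rather than querying the data sources directly:

```python
# controller.py - the only module that touches the data sources.
import sqlite3

def get_attended_parties(con, user_id):
    """All parties a given user attended, combined from two tables.
    With Django you would issue the equivalent ORM queries here."""
    rows = con.execute(
        "SELECT p.name FROM parties p "
        "JOIN attendance a ON a.party_id = p.id "
        "WHERE a.user_id = ? ORDER BY p.id", (user_id,))
    return [name for (name,) in rows]
```

Swapping the storage backend later then only means rewriting controller.py, not every view.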
I'm creating a small website with Django, and I need to calculate statistics with data taken from several tables in the database.
For example (nothing to do with my actual models), for a given user, let's say I want all birthday parties he has attended, and people he spoke with in said parties. For this, I would need a... | 3 | 0 | 0 | 0 | false | 5,065,280 | 1 | 110 | 2 | 0 | 0 | 5,063,658 | User.get_attended_birthday_parties() or Event.get_attended_parties(user) work fine: it's an interface that makes sense when you use it. Creating an additional "all-purpose" object will not make your code cleaner or easier to maintain. | 1 | 0 | 0 | Correct way of implementing database-wide functionality | 2 | python,database,django,django-models,coding-style | 0 | 2011-02-21T08:17:00.000 |
So I am trying to take a large number of xml files (None are that big in particular and I can split them up as I see fit.) In all there is about 70GB worth of data. For the sake of reference the loading script is written in python and uses psycopg2 to interface with a postgres table.
Anyway, what I am trying to do is ... | 0 | 3 | 0.53705 | 0 | false | 5,066,699 | 0 | 126 | 1 | 0 | 0 | 5,066,569 | If I understand this "...I have been iterating over the update methods" it sounds like you are updating the database rows as you go? If this is so, consider writing some code that passes the XML, accumulates the totals you are tracking, outputs them to a file, and then loads that file with COPY.
If you are updating ex... | 1 | 0 | 0 | Efficiently creating a database to analyze relationships between information | 1 | python,database-design,postgresql,psycopg2 | 0 | 2011-02-21T13:30:00.000 |
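A sketch of that accumulate-then-bulk-load pattern with stdlib tools; the XML element and attribute names are hypothetical, and the final COPY step is shown as a comment because it needs a live psycopg2 connection:

```python
import csv
import io
import xml.etree.ElementTree as ET
from collections import Counter

def aggregate_totals(xml_text):
    """Accumulate per-key totals in memory instead of issuing one
    UPDATE per XML element."""
    totals = Counter()
    for elem in ET.fromstring(xml_text).iter("item"):
        totals[elem.get("key")] += int(elem.get("value"))
    return totals

def totals_to_tsv(totals):
    """Render the totals as tab-separated text, ready for COPY."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t")
    for key, total in sorted(totals.items()):
        writer.writerow([key, total])
    buf.seek(0)
    return buf

# With a real connection, the whole result then loads in one shot:
#   cur.copy_from(totals_to_tsv(totals), "totals", columns=("key", "total"))
```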
I am using Python 2.7 and trying to get a Django project running on a MySQL backend.
I have downloaded mysqldb and followed the guide here:http://cd34.com/blog/programming/python/mysql-python-and-snow-leopard/
Yet when I go to run the django project the following traceback occurs:
Traceback (most recent call last):
... | 2 | 2 | 1.2 | 0 | true | 5,072,940 | 1 | 1,535 | 2 | 1 | 0 | 5,072,066 | I eventually managed to solve the problem by Installing python 2.7 with Mac Ports and installing mysqldb using Mac Ports - was pretty simple after that. | 1 | 0 | 0 | Python mysqldb on Mac OSX 10.6 not working | 2 | python,mysql,django,macos | 0 | 2011-02-21T22:35:00.000 |
I am using Python 2.7 and trying to get a Django project running on a MySQL backend.
I have downloaded mysqldb and followed the guide here:http://cd34.com/blog/programming/python/mysql-python-and-snow-leopard/
Yet when I go to run the django project the following traceback occurs:
Traceback (most recent call last):
... | 2 | 0 | 0 | 0 | false | 5,305,496 | 1 | 1,535 | 2 | 1 | 0 | 5,072,066 | you needed to add the MySQL client libraries to the LD_LIBRARY_PATH. | 1 | 0 | 0 | Python mysqldb on Mac OSX 10.6 not working | 2 | python,mysql,django,macos | 0 | 2011-02-21T22:35:00.000 |
I'm building my first app with GAE to allow users to run elections, and I create an Election entity for each election.
To avoid storing too much data, I'd like to automatically delete an Election entity after a certain period of time -- say three months after the end of the election. Is it possible to do this automa... | 3 | 5 | 1.2 | 0 | true | 5,079,939 | 1 | 2,483 | 1 | 1 | 0 | 5,079,885 | Assuming you have a DateProperty on the entities indicating when the election ended, you can have a cron job search for any older than 3 months every night and delete them. | 1 | 0 | 0 | Automatic deletion or expiration of GAE datastore entities | 3 | python,google-app-engine,google-cloud-datastore | 0 | 2011-02-22T15:09:00.000 |
I know that if I figure this one out or if somebody shows me, it'll be a forehead slapper. Before posting any questions, I try for at least three hours and quite a bit of searching. There are several hints that are close, but nothing I have adopted/tried seems to work.
I am taking a byte[] from Java and passing that vi... | 1 | 0 | 0 | 0 | false | 31,187,500 | 0 | 7,408 | 1 | 0 | 0 | 5,088,671 | I found ''.join(map(lambda x: chr(x % 256), data)) to be painfully slow (~4 minutes) for my data on python 2.7.9, where a small change to str(bytearray(map(lambda x: chr(x % 256), data))) only took about 10 seconds. | 1 | 0 | 1 | Convert Java byte array to Python byte array | 3 | java,python,mysql,binary,byte | 0 | 2011-02-23T08:47:00.000 |
I have installed MySQLdb through a precompiled .exe. It is stored in site-packages. But now I don't know how to test whether it is accessible or not, and the major problem is how to import it in my application, like import MySQLdb. Help me, I am a very new techie in Python and I just want to work with my existing MySQL. Thanks in advance.... | 0 | 3 | 1.2 | 0 | true | 5,090,944 | 0 | 165 | 1 | 0 | 0 | 5,090,870 | Just open your CMD/Console, type python, press Enter, type import MySQLdb and then press Enter again.
If no error is shown, you're ok! | 1 | 0 | 0 | how to import mysqldb | 1 | python,mysql | 0 | 2011-02-23T12:23:00.000 |
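Beyond the interactive check, a small stdlib helper can test importability programmatically; a generic sketch (shown with importlib rather than actually importing MySQLdb, since that package may not be installed everywhere):

```python
import importlib.util

def module_available(name):
    """True if `import name` would succeed for a top-level module,
    without actually importing it."""
    return importlib.util.find_spec(name) is not None

# e.g. module_available("MySQLdb") on the machine in question
```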
I want to retain the flexibility of switching between MySQL and PostgreSQL without the awkwardness of using an ORM - SQL is a fantastic language and I would like to retain its power without the additional overhead of an ORM.
So...is there a best practice for abstracting the database layer of a Python application to pr... | 1 | 1 | 0.099668 | 0 | false | 5,090,938 | 0 | 965 | 1 | 0 | 0 | 5,090,901 | Have a look at SQLAlchemy. You can use it to execute literal SQL on several RDBMS, including MySQL and PostgreSQL. It wraps the DB-API adapters with a common interface, so they will behave as similarly as possible.
SQLAlchemy also offers programmatic generation of SQL, with or without the included ORM, which you may fi... | 1 | 0 | 0 | Starting new project: database abstraction in Python, best practice for retaining option of MySQL or PostgreSQL without ORM | 2 | python,mysql,database,postgresql,abstraction | 0 | 2011-02-23T12:25:00.000 |
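One concrete portability wrinkle with raw SQL: DB-API adapters disagree on placeholder style (MySQLdb and psycopg2 use format-style %s, sqlite3 uses qmark ?). A thin, admittedly naive translation layer can paper over it; the sketch below uses sqlite3 as the stand-in backend:

```python
import sqlite3

def to_paramstyle(sql, paramstyle):
    """Translate '%s' (format-style) placeholders into the target
    DB-API adapter's style."""
    if paramstyle == "qmark":      # sqlite3
        return sql.replace("%s", "?")
    return sql                     # MySQLdb / psycopg2 use 'format'

# Write the query once in one style, translate per adapter:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
query = "INSERT INTO users VALUES (%s)"   # MySQL-style placeholder
con.execute(to_paramstyle(query, sqlite3.paramstyle), ("alice",))
```

Note the blunt string replace would corrupt a literal %s inside a SQL string constant; SQLAlchemy's text() construct handles this translation properly, which is one reason to use it even without the ORM.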
I'm looking for a library that lets me run SQL-like queries on Python "object databases". By object database I mean a fairly complex structure of Python objects and lists in memory. Basically this would be a "reverse ORM" - instead of providing an object oriented interface to a relational database, it would provide ... | 17 | 2 | 0.057081 | 0 | false | 5,127,794 | 0 | 9,831 | 1 | 0 | 0 | 5,126,776 | One major difference between what SQL does and what you can do in idiomatic Python: in SQL, you tell the evaluator what information you are looking for, and it works out the most efficient way of retrieving that based on the structure of the data it holds. In Python, you can only tell the interpreter how you want the ...
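Idiomatic Python expresses that "how" directly, usually as a comprehension over the object graph; a SQL-style filter on in-memory objects (illustrative class and data) looks like:

```python
class Person:
    def __init__(self, name, age, city):
        self.name, self.age, self.city = name, age, city

people = [Person("Ann", 34, "Oslo"), Person("Bob", 19, "Oslo"),
          Person("Eve", 41, "Bergen")]

# Roughly: SELECT name FROM people WHERE city = 'Oslo' AND age > 20
names = [p.name for p in people if p.city == "Oslo" and p.age > 20]
```

Unlike SQL, there is no planner here: the comprehension always scans the whole list, so any indexing for efficiency is up to you.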
I have three tables, 1-Users, 2-Softwares, 3-UserSoftwares.
Suppose the Users table has 6 user records (say U1, U2, ..., U6) and the Softwares table has 4 different softwares (say S1, S2, S3, S4), and UserSoftwares stores a reference only if a user has requested the given software.
For example: UserSoftwares(5 records) have on... | 1 | 0 | 0 | 0 | false | 5,143,851 | 1 | 873 | 1 | 1 | 0 | 5,142,192 | If you are looking for a join - there are no joins in GAE. BTW, it is pretty easy to make 2 simple queries (Softwares and UserSoftwares) and calculate all additional data manually | 1 | 0 | 0 | Querying on multiple tables using google apps engine (Python) | 3 | python,google-app-engine,model | 0 | 2011-02-28T12:48:00.000
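The two-queries-plus-manual-merge approach can be sketched datastore-free, with plain dicts standing in for the fetched entities (field names are illustrative):

```python
def software_status_for_user(softwares, user_softwares, user):
    """Merge two independent query results client-side: every software,
    flagged with whether `user` has requested it."""
    requested = {us["software"] for us in user_softwares
                 if us["user"] == user}
    return [(s, s in requested) for s in softwares]
```

Building the set once keeps the merge O(len(softwares) + len(user_softwares)) rather than a nested scan.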