Dataset schema (column | dtype | range):

Question                          | string  | lengths 25–7.47k
Q_Score                           | int64   | 0–1.24k
Users Score                       | int64   | -10 to 494
Score                             | float64 | -1 to 1.2
Data Science and Machine Learning | int64   | 0–1
is_accepted                       | bool    | 2 classes
A_Id                              | int64   | 39.3k–72.5M
Web Development                   | int64   | 0–1
ViewCount                         | int64   | 15–1.37M
Available Count                   | int64   | 1–9
System Administration and DevOps  | int64   | 0–1
Networking and APIs               | int64   | 0–1
Q_Id                              | int64   | 39.1k–48M
Answer                            | string  | lengths 16–5.07k
Database and SQL                  | int64   | 1–1
GUI and Desktop Applications      | int64   | 0–1
Python Basics and Environment     | int64   | 0–1
Title                             | string  | lengths 15–148
AnswerCount                       | int64   | 1–32
Tags                              | string  | lengths 6–90
Other                             | int64   | 0–1
CreationDate                      | string  | lengths 23–23
Q_Id 13,000,007 | A_Id 13,003,890 | CreationDate 2012-10-21T16:56:00.000
Title: What is a good cms that is postgres compatible, open source and either php or python based?
Tags: postgresql,content-management-system,python-2.7
Topics: Web Development, Database and SQL
Question: Php or python Use and connect to our existing postgres databases open source / or very low license fees Common features of cms, with admin tools to help manage / moderate community have a large member base on very basic site where members provide us contact info and info about their professional characteristics. About...
Answer: Have you tried Drupal. Drupal supports PostgreSQL and is written in PHP and is open source.
Q_Score 5 | Users Score 1 | Score 0.099668 | is_accepted false | ViewCount 5,703 | AnswerCount 2 | Available Count 1
Q_Id 13,004,789 | A_Id 16,186,975 | CreationDate 2012-10-22T03:56:00.000
Title: How to connect with passwords that contain characters like "$" or "@"?
Tags: python,mysql,passwords
Topics: Database and SQL
Question: I cannot get a connection to a MySQL database if my password contains punctuation characters, in particular $ or @. I have tried to escape the characters, by doubling the $$ etc., but no joy. I have tried the pymysql library and the _mssql library. the code... self.dbConn = _mysql.connect(host=self.dbDetails['site'], p...
Answer: Try the MySQLdb package; with it you can use punctuation in the password when connecting to the database.
Q_Score 2 | Users Score 0 | Score 0 | is_accepted false | ViewCount 432 | AnswerCount 1 | Available Count 1
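A driver-agnostic way to sidestep this class of problem is to pass the password as a discrete connection parameter (no escaping needed), and to percent-encode it only when it must be embedded in a connection URL. A minimal sketch; the SQLAlchemy-style URL below is purely illustrative:

```python
from urllib.parse import quote_plus

# Passed as a keyword argument, e.g. pymysql.connect(password=raw_pw),
# special characters need no escaping at all. Embedded in a URL, they
# must be percent-encoded or the parser misreads "@" as the host marker.
raw_pw = "p@$$w0rd"
encoded = quote_plus(raw_pw)                  # "@" -> %40, "$" -> %24
url = f"mysql://user:{encoded}@localhost/mydb"
```
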
Q_Id 13,024,361 | A_Id 13,031,452 | CreationDate 2012-10-23T06:01:00.000
Title: Django-Nonrel (mongo backend): Model instance modification tracking
Tags: python,django,mongodb,django-models,django-nonrel
Topics: Web Development, Database and SQL
Question: I am using Django non-rel version with mongodb backends. I am interested in tracking the changes that occur on model instances, e.g. if someone creates/edits or deletes a model instance. Backend db is mongo hence models have an associated "_id" field with them in the respective collections/dbs. Now i want to extract thi...
Answer: After some deep digging into the Django Models I was able to solve the problem. The save() method in turn calls the save_base() method. This method saves the returned results, ids in case of mongo, into self.id. This _id field can then be picked up by overriding the save() method for the model.
Q_Score 0 | Users Score 0 | Score 1.2 | is_accepted true | ViewCount 132 | AnswerCount 1 | Available Count 1
Q_Id 13,059,142 | A_Id 13,059,204 | CreationDate 2012-10-24T23:00:00.000
Title: ruby or python for use with sqlite database?
Tags: python,ruby,sqlite
Topics: Database and SQL, Other
Question: Which of these two languages interfaces better and delivers a better performance/toolset for working with sqlite database? I am familiar with both languages but need to choose one for a project I'm developing and so I thought I would ask here. I don't believe this to be opinionated as performance of a language is pret...
Answer: There is no good reason to choose one over the other as far as sqlite performance or usability. Both languages have perfectly usable (and pythonic/rubyriffic) sqlite3 bindings. In both languages, unless you do something stupid, the performance is bounded by the sqlite3 performance, not by the bindings. Neither language...
Q_Score 0 | Users Score 5 | Score 1.2 | is_accepted true | ViewCount 150 | AnswerCount 1 | Available Count 1
Q_Id 13,060,427 | A_Id 13,060,535 | CreationDate 2012-10-25T01:40:00.000
Title: sorting and selecting data
Tags: python,sql,sorting,select
Topics: Database and SQL
Question: I have huge tables of data that I need to manipulate (sort, calculate new quantities, select specific rows according to some conditions and so on...). So far I have been using a spreadsheet software to do the job but this is really time consuming and I am trying to find a more efficient way to do the job. I use pytho...
Answer: This is a very general question, but there are multiple things that you can do to possibly make your life easier. 1. CSV These are very useful if you are storing data that is ordered in columns, and if you are looking for easy to read text files. 2. Sqlite3 Sqlite3 is a database system that does not require a server ...
Q_Score 0 | Users Score 1 | Score 0.099668 | is_accepted false | ViewCount 97 | AnswerCount 2 | Available Count 1
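The sqlite3 suggestion in that answer can be sketched in a few lines; the table and sample rows here are made up for illustration:

```python
import sqlite3

# Hypothetical data standing in for a spreadsheet export.
rows = [("widget", 3, 9.5), ("gadget", 7, 2.5), ("gizmo", 5, 4.0)]

conn = sqlite3.connect(":memory:")   # serverless; can also be a file on disk
conn.execute("CREATE TABLE items (name TEXT, qty INTEGER, price REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?, ?)", rows)

# Sorting plus conditional selection that is tedious in a spreadsheet:
cheap = conn.execute(
    "SELECT name FROM items WHERE price < 5 ORDER BY qty DESC"
).fetchall()
```
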
Q_Id 13,061,800 | A_Id 14,449,025 | CreationDate 2012-10-25T04:54:00.000
Title: Choice of technology for loading large CSV files to Oracle tables
Tags: python,csv,etl,sql-loader,smooks
Topics: Data Science and Machine Learning, Database and SQL
Question: I have come across a problem and am not sure which would be the best suitable technology to implement it. Would be obliged if you guys can suggest me some based on your experience. I want to load data from 10-15 CSV files each of them being fairly large 5-10 GBs. By load data I mean convert the CSV file to XML and then...
Answer: Create a process / script that will call a procedure to load csv files to external Oracle table and another script to load it to the destination table. You can also add cron jobs to call these scripts that will keep track of incoming csv files into the directory, process it and move the csv file to an output/processed ...
Q_Score 3 | Users Score 1 | Score 0.066568 | is_accepted false | ViewCount 2,011 | AnswerCount 3 | Available Count 2
Q_Id 13,061,800 | A_Id 13,062,737 | CreationDate 2012-10-25T04:54:00.000
Title: Choice of technology for loading large CSV files to Oracle tables
Tags: python,csv,etl,sql-loader,smooks
Topics: Data Science and Machine Learning, Database and SQL
Question: I have come across a problem and am not sure which would be the best suitable technology to implement it. Would be obliged if you guys can suggest me some based on your experience. I want to load data from 10-15 CSV files each of them being fairly large 5-10 GBs. By load data I mean convert the CSV file to XML and then...
Answer: Unless you can use some full-blown ETL tool (e.g. Informatica PowerCenter, Pentaho Data Integration), I suggest the 4th solution - it is straightforward and the performance should be good, since Oracle will handle the most complicated part of the task.
Q_Score 3 | Users Score 2 | Score 1.2 | is_accepted true | ViewCount 2,011 | AnswerCount 3 | Available Count 2
Q_Id 13,068,227 | A_Id 13,299,592 | CreationDate 2012-10-25T12:08:00.000
Title: How to read microsecond-precision mysql datetime fields with python
Tags: python,mysql,mysql-python
Topics: Database and SQL
Question: I'm using python's MySQLdb to fetch rows from a MySQL 5.6.7 db, that supports microsecond precision datetime columns. When I read a row with MySQLdb I get "None" for the time field. Is there are way to read such time fields with python?
Answer: MySQLdb-1.2.4 (to be released within the next week) and the current release candidate has support for MySQL-5.5 and newer and should solve your problem. Please try 1.2.4c1 from PyPi (pip install MySQL-python)
Q_Score 2 | Users Score 1 | Score 1.2 | is_accepted true | ViewCount 310 | AnswerCount 2 | Available Count 1
Q_Id 13,085,658 | A_Id 13,085,822 | CreationDate 2012-10-26T11:00:00.000
Title: South initial migrations are not forced to have a default value?
Tags: python,django,postgresql,django-south
Topics: Web Development, Database and SQL
Question: I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something. What I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run ...
Answer: If you add a column to a table, which already has some rows populated, then either: the column is nullable, and the existing rows simply get a null value for the column the column is not nullable but has a default value, and the existing rows are updated to have that default value for the column To produce a non-null...
Q_Score 0 | Users Score 1 | Score 1.2 | is_accepted true | ViewCount 107 | AnswerCount 2 | Available Count 2
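The rule that answer describes (a new column on a populated table must be nullable or carry a default) can be demonstrated directly; SQLite is used below only as a convenient stand-in for PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO person DEFAULT VALUES")     # table now has a row

# NOT NULL with no default: existing rows would have no value to take.
try:
    conn.execute("ALTER TABLE person ADD COLUMN name TEXT NOT NULL")
    rejected = False
except sqlite3.OperationalError:
    rejected = True

# NOT NULL plus a default is fine: existing rows receive the default.
conn.execute("ALTER TABLE person ADD COLUMN nick TEXT NOT NULL DEFAULT ''")
```
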
Q_Id 13,085,658 | A_Id 13,085,826 | CreationDate 2012-10-26T11:00:00.000
Title: South initial migrations are not forced to have a default value?
Tags: python,django,postgresql,django-south
Topics: Web Development, Database and SQL
Question: I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something. What I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run ...
Answer: When you have existing records in your database and you add a column to one of your tables, you will have to tell the database what to put in there, south can't read your mind :-) So unless you mark the new field null=True or opt in a default value it will raise an error. If you had an empty database, there are no valu...
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | ViewCount 107 | AnswerCount 2 | Available Count 2
Q_Id 13,121,529 | A_Id 42,290,194 | CreationDate 2012-10-29T12:20:00.000
Title: Python Makepy with Office 2013 (office 15)
Tags: python,excel,win32com,office-2013
Topics: Database and SQL
Question: I'm using python and excel with office 2010 and have no problems there. I used python's makepy module in order to bind to the excel com objects. However, on a different computer I've installed office 2013 and when I launched makepy no excel option was listed (as opposed to office 2010 where 'Microsoft Excel 14.0 Object...
Answer: wilywampa's answer corrects the problem. However, the combrowse.py at win32com\client\combrowse.py can also be used to get the IID (Interface Identifier) from the registered type libraries folder and subsequently integrate it with code as suggested by @cool_n_curious. But as stated before, wilywampa's answer does corre...
Q_Score 4 | Users Score 0 | Score 0 | is_accepted false | ViewCount 2,318 | AnswerCount 1 | Available Count 1
Q_Id 13,125,271 | A_Id 13,125,435 | CreationDate 2012-10-29T16:03:00.000
Title: Most Pythonic way to handle splitting a class into multiple classes
Tags: python
Topics: Database and SQL, Python Basics and Environment
Question: I have a class that can interface with either Oracle or MySQL. The class is initialized with a keyword of either "Oracle" or "MySQL" and a few other parameters that are standard for both database types (what to print, whether or not to stop on an exception, etc.). It was easy enough to add if Oracle do A, elif MySQL d...
Answer: Create a factory class which returns an implementation based on the parameter. You can then have a common base class for both DB types, one implementation for each and let the factory create, configure and return the correct implementation to the user based on a parameter. This works well when the two classes behave ve...
Q_Score 5 | Users Score 3 | Score 0.291313 | is_accepted false | ViewCount 126 | AnswerCount 2 | Available Count 1
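The factory suggestion can be sketched as below; all class and method names are invented for illustration:

```python
class BaseDB:
    """Behaviour shared by both backends (logging, error policy, etc.)."""
    def __init__(self, stop_on_error=True):
        self.stop_on_error = stop_on_error

    def placeholder(self):
        # Backend-specific details live in the subclasses.
        raise NotImplementedError

class OracleDB(BaseDB):
    def placeholder(self):
        return ":1"        # Oracle-style bind parameter

class MySQLDB(BaseDB):
    def placeholder(self):
        return "%s"        # MySQL-style bind parameter

def make_db(kind, **kwargs):
    """Factory: map the keyword the caller already passes to a class."""
    backends = {"Oracle": OracleDB, "MySQL": MySQLDB}
    return backends[kind](**kwargs)

db = make_db("Oracle", stop_on_error=False)
```

Callers keep the exact interface they had ("Oracle" or "MySQL" plus shared parameters), while the if/elif branches collapse into one dictionary lookup.
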
Q_Id 13,156,730 | A_Id 13,998,563 | CreationDate 2012-10-31T11:21:00.000
Title: Copying value of cell (X,Y) to cell (A,B) in same sheet of an Excel file using Python
Tags: python,excel,xlrd,xlwt
Topics: Database and SQL
Question: I am using the modules xlrd, xlwt and xlutils to do some Excel manipulations in Python. I am not able to figure out how to copy the value of cell (X,Y) to cell (A,B) in the same sheet of an Excel file in Python. Could someone let me know how to do that?
Answer: Working on 2 cells among tens of thousands is quite meager. Normally, one should present an iteration over rows x columns.
Q_Score 1 | Users Score 0 | Score 0 | is_accepted false | ViewCount 472 | AnswerCount 2 | Available Count 1
Q_Id 13,172,331 | A_Id 13,172,382 | CreationDate 2012-11-01T06:54:00.000
Title: Using raw sql in django python
Tags: python,django
Topics: Web Development, Database and SQL
Question: I have few things to ask for custom queries in Django Do i need to use the DB table name in the query or just the Model name if i need to join the various tables in raw sql. do i need to use db field name or model field name like Person.objects.raw('SELECT id, first_name, last_name, birth_date FROM Person A inner joi...
Answer: You need to use the database's table and field names in the raw query--the string you provide will be passed to the database, not interpreted by the Django ORM.
Q_Score 0 | Users Score 3 | Score 0.53705 | is_accepted false | ViewCount 408 | AnswerCount 1 | Available Count 1
Q_Id 13,186,494 | A_Id 13,187,373 | CreationDate 2012-11-01T22:29:00.000
Title: Deleting Blobstore orphans
Tags: google-app-engine,python-2.7,google-cloud-datastore,blobstore
Topics: Web Development, Database and SQL
Question: What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: A (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the rest of the data is sent...
Answer: You can create an entity that links blobs to users. When a user uploads a blob, you immediately create a new record with the blob id, user id (or post id), and time created. When a user submits a post, you add a flag to this entity, indicating that a blob is used. Now your cron job needs to fetch all entities of this ...
Q_Score 2 | Users Score 1 | Score 0.049958 | is_accepted false | ViewCount 1,014 | AnswerCount 4 | Available Count 3
Q_Id 13,186,494 | A_Id 13,247,039 | CreationDate 2012-11-01T22:29:00.000
Title: Deleting Blobstore orphans
Tags: google-app-engine,python-2.7,google-cloud-datastore,blobstore
Topics: Web Development, Database and SQL
Question: What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: A (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the rest of the data is sent...
Answer: Thanks for the comments. However, although I understood those solutions well, I find them too inefficient. Querying thousands of entries for those that are flagged as "unused" is not ideal. I believe I have come up with a better way and would like to hear your thoughts on it: When a blob is saved, immediately a deferred task is ...
Q_Score 2 | Users Score 3 | Score 1.2 | is_accepted true | ViewCount 1,014 | AnswerCount 4 | Available Count 3
Q_Id 13,186,494 | A_Id 16,378,785 | CreationDate 2012-11-01T22:29:00.000
Title: Deleting Blobstore orphans
Tags: google-app-engine,python-2.7,google-cloud-datastore,blobstore
Topics: Web Development, Database and SQL
Question: What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: A (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the rest of the data is sent...
Answer: Use Drafts! Save as draft after each upload. Then don't do the cleaning! Let the user choose for himself what to wipe out. If you're planning on posts in a Facebook style, use drafts or make it private. Why bother deleting users' data?
Q_Score 2 | Users Score 0 | Score 0 | is_accepted false | ViewCount 1,014 | AnswerCount 4 | Available Count 3
Q_Id 13,225,890 | A_Id 13,226,368 | CreationDate 2012-11-05T04:23:00.000
Title: Django default=timezone.now() saves records using "old" time
Tags: python,django,django-timezone
Topics: Web Development, Database and SQL
Question: This issue has been occurring on and off for a few weeks now, and it's unlike any that has come up with my project. Two of the models that are used have a timestamp field, which is by default set to timezone.now(). This is the sequence that raises error flags: Model one is created at time 7:30 PM Model two is creat...
Answer: Just ran into this last week for a field that had default=date.today(). If you remove the parentheses (in this case, try default=timezone.now) then you're passing a callable to the model and it will be called each time a new instance is saved. With the parentheses, it's only being called once when models.py loads.
Q_Score 28 | Users Score 66 | Score 1.2 | is_accepted true | ViewCount 23,029 | AnswerCount 2 | Available Count 1
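The callable-vs-call distinction in that answer is plain Python, illustrated here with datetime.now standing in for Django's timezone.now:

```python
import time
from datetime import datetime

# Like default=timezone.now(): evaluated ONCE, at definition time,
# and the same stale value is reused forever after.
frozen = datetime.now()

# Like default=timezone.now: a callable, evaluated at each save.
fresh = datetime.now

time.sleep(0.01)
later = fresh()   # called again "at save time", so it moves forward
```
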
Q_Id 13,233,107 | A_Id 13,234,558 | CreationDate 2012-11-05T13:27:00.000
Title: using neo4J (server) from python with transaction
Tags: python,flask,neo4j,py2neo
Topics: Web Development, Database and SQL
Question: I'm currently building a web service using python / flask and would like to build my data layer on top of neo4j, since my core data structure is inherently a graph. I'm a bit confused by the different technologies offered by neo4j for that case. Especially : i originally planned on using the REST Api through py2neo ,...
Answer: None of the REST API clients will be able to explicitly support (proper) transactions since that functionality is not available through the Neo4j REST API interface. There are a few alternatives such as Cypher queries and batched execution which all operate within a single atomic transaction on the server side; however...
Q_Score 2 | Users Score 5 | Score 1.2 | is_accepted true | ViewCount 1,075 | AnswerCount 1 | Available Count 1
Q_Id 13,234,196 | A_Id 58,120,873 | CreationDate 2012-11-05T14:32:00.000
Title: "error: cannot locate an Oracle software installation" When trying to install cx_Oracle
Tags: python,oracle,cx-oracle
Topics: Database and SQL
Question: Newbie here trying to use python to do some database analysis. I keep getting the error: "error: cannot locate an Oracle software installation" When installing CX_oracle (via easy_install). The problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. I have h...
Answer: Tip for Ubuntu users After configuring .bashrc environment variables, like it was explained in other answers, don't forget to reload your terminal window, typing $SHELL.
Q_Score 12 | Users Score 1 | Score 0.033321 | is_accepted false | ViewCount 25,818 | AnswerCount 6 | Available Count 3
Q_Id 13,234,196 | A_Id 28,741,244 | CreationDate 2012-11-05T14:32:00.000
Title: "error: cannot locate an Oracle software installation" When trying to install cx_Oracle
Tags: python,oracle,cx-oracle
Topics: Database and SQL
Question: Newbie here trying to use python to do some database analysis. I keep getting the error: "error: cannot locate an Oracle software installation" When installing CX_oracle (via easy_install). The problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. I have h...
Answer: I got this message when I was trying to install the 32 bit version while having the 64bit Oracle client installed. What worked for me: reinstalled python with 64 bit (had 32 for some reason), installed cx_Oracle (64bit version) with the Windows installer and it worked perfectly.
Q_Score 12 | Users Score 2 | Score 0.066568 | is_accepted false | ViewCount 25,818 | AnswerCount 6 | Available Count 3
Q_Id 13,234,196 | A_Id 13,234,377 | CreationDate 2012-11-05T14:32:00.000
Title: "error: cannot locate an Oracle software installation" When trying to install cx_Oracle
Tags: python,oracle,cx-oracle
Topics: Database and SQL
Question: Newbie here trying to use python to do some database analysis. I keep getting the error: "error: cannot locate an Oracle software installation" When installing CX_oracle (via easy_install). The problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. I have h...
Answer: I installed cx_Oracle, but I also had to install an Oracle client to use it (the cx_Oracle module is just a common and pythonic way to interface with the Oracle client in Python). So you have to set the variable ORACLE_HOME to your Oracle client folder (on Unix: via a shell, for instance; on Windows: create a new varia...
Q_Score 12 | Users Score 2 | Score 0.066568 | is_accepted false | ViewCount 25,818 | AnswerCount 6 | Available Count 3
Q_Id 13,239,004 | A_Id 52,581,750 | CreationDate 2012-11-05T19:30:00.000
Title: Create SQL table with correct column types from CSV
Tags: python,sql,postgresql,pgadmin
Topics: Database and SQL
Question: I've looked at a number of questions on this site and cannot find an answer to the question: How to create multiple NEW tables in a database (in my case I am using PostgreSQL) from multiple CSV source files, where the new database table columns accurately reflect the data within the CSV columns? I can write the CREATE T...
Answer: Although this is quite an old question, it doesn't seem to have a satisfying answer and I was struggling with the exact same issue. With the arrival of SQL Server Management Studio 2018 edition - and probably somewhat before that - a pretty good solution was offered by Microsoft. In SSMS on a database node in the obj...
Q_Score 11 | Users Score 0 | Score 0 | is_accepted false | ViewCount 7,124 | AnswerCount 3 | Available Count 2
Q_Id 13,239,004 | A_Id 21,917,162 | CreationDate 2012-11-05T19:30:00.000
Title: Create SQL table with correct column types from CSV
Tags: python,sql,postgresql,pgadmin
Topics: Database and SQL
Question: I've looked at a number of questions on this site and cannot find an answer to the question: How to create multiple NEW tables in a database (in my case I am using PostgreSQL) from multiple CSV source files, where the new database table columns accurately reflect the data within the CSV columns? I can write the CREATE T...
Answer: I have been dealing with something similar, and ended up writing my own module to sniff datatypes by inspecting the source file. There is some wisdom among all the naysayers, but there can also be reasons this is worth doing, particularly when we don't have any control of the input data format (e.g. working with gover...
Q_Score 11 | Users Score 7 | Score 1 | is_accepted false | ViewCount 7,124 | AnswerCount 3 | Available Count 2
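A minimal version of the type-sniffing idea in that last answer; the promotion order (INTEGER, then REAL, then TEXT) is chosen for illustration:

```python
def sniff_type(values):
    """Return the narrowest SQL type that fits every sample string."""
    def fits(cast):
        for v in values:
            try:
                cast(v)
            except ValueError:
                return False
        return True

    if fits(int):
        return "INTEGER"
    if fits(float):          # int failed somewhere; try the wider type
        return "REAL"
    return "TEXT"            # fall back to text for anything non-numeric

# Sniff one hypothetical CSV column per call:
col_type = sniff_type(["3", "17", "42"])
```

A real module would also sample only the first N rows, handle empty cells, and recognize dates, but the promotion ladder above is the core of the approach.
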
Q_Id 13,254,044 | A_Id 13,254,202 | CreationDate 2012-11-06T15:19:00.000
Title: How to store the current state of InteractiveInterpreter Object in a database?
Tags: python,interactive-shell,python-interactive
Topics: Database and SQL, Python Basics and Environment
Question: I am trying to build an online Python Shell. I execute commands by creating an instance of InteractiveInterpreter and use the command runcode. For that I need to store the interpreter state in the database so that variables, functions, definitions and other values in the global and local namespaces can be used across c...
Answer: I believe the pickle package should work for you. You can use pickle.dump or pickle.dumps to save the state of most objects. (then pickle.load or pickle.loads to get it back)
Q_Score 0 | Users Score 0 | Score 0 | is_accepted false | ViewCount 136 | AnswerCount 2 | Available Count 1
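The pickle round-trip that answer suggests looks like this. Note the caveat: an interpreter's full namespace can contain unpicklable objects (open files, modules, sockets), so this sketches the mechanism rather than guaranteeing it works for every namespace:

```python
import pickle

# Stand-in for interpreter state; real namespaces may hold unpicklable objects.
namespace = {"x": 42, "data": [1, 2, 3]}

blob = pickle.dumps(namespace)     # bytes, suitable for a BLOB column
restored = pickle.loads(blob)      # later, after reading it back out
```
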
Q_Id 13,274,197 | A_Id 13,275,014 | CreationDate 2012-11-07T16:42:00.000
Title: AWS glacier delete job
Tags: python,amazon-web-services,boto,amazon-glacier
Topics: Web Development, Database and SQL
Question: I have started a retrieval job for an archive stored in one of my vaults on Glacier AWS. It turns out that I do not need to resurrect and download that archive any more. Is there a way to stop and/or delete my Glacier job? I am using boto and I cannot seem to find a suitable function. Thanks
Answer: The AWS Glacier service does not provide a way to delete a job. You can: Initiate a job Describe a job Get the output of a job List all of your jobs The Glacier service manages the jobs associated with a vault.
Q_Score 7 | Users Score 9 | Score 1.2 | is_accepted true | ViewCount 1,164 | AnswerCount 1 | Available Count 1
Q_Id 13,288,013 | A_Id 43,866,023 | CreationDate 2012-11-08T11:17:00.000
Title: Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?
Tags: python,virtualenv,mysql-python
Topics: Database and SQL, Python Basics and Environment
Question: I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run "import MySQLdb" in the python prompt from outside of the virtualenv, it works, inside it says "ImportError: No module named MySQLdb". I'm trying to learn Python and Linux web development. ...
Answer: source $ENV_PATH/bin/activate pip uninstall MySQL-python pip install MySQL-python this worked for me.
Q_Score 9 | Users Score 1 | Score 0.066568 | is_accepted false | ViewCount 6,817 | AnswerCount 3 | Available Count 2
Q_Id 13,288,013 | A_Id 13,288,095 | CreationDate 2012-11-08T11:17:00.000
Title: Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?
Tags: python,virtualenv,mysql-python
Topics: Database and SQL, Python Basics and Environment
Question: I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run "import MySQLdb" in the python prompt from outside of the virtualenv, it works, inside it says "ImportError: No module named MySQLdb". I'm trying to learn Python and Linux web development. ...
Answer: If you have created the virtualenv with the --no-site-packages switch (the default), then system-wide installed additions such as MySQLdb are not included in the virtual environment packages. You need to install MySQLdb with the pip command installed with the virtualenv. Either activate the virtualenv with the bin/acti...
Q_Score 9 | Users Score 14 | Score 1.2 | is_accepted true | ViewCount 6,817 | AnswerCount 3 | Available Count 2
Q_Id 13,298,480 | A_Id 13,641,512 | CreationDate 2012-11-08T21:54:00.000
Title: CouchDB vs mongodb
Tags: python-2.7,pymongo,couchdbkit
Topics: Database and SQL, Python Basics and Environment
Question: We are developing application for which we going to use a NoSql database. We have evaluated couchdb and mongodb. Our application is in python and read-speed is most critical for our application. And application is reading a large number of documents. I want ask: Is reading large number of documents is faster in bson...
Answer: bson Try LogoDb from 1985 logo programming language for trs-80
Q_Score 0 | Users Score -1 | Score -0.197375 | is_accepted false | ViewCount 468 | AnswerCount 1 | Available Count 1
Q_Id 13,352,796 | A_Id 22,388,827 | CreationDate 2012-11-12T22:42:00.000
Title: Querying twitter streaming api keywords from a database
Tags: python,mongodb,twitter,tweepy
Topics: Database and SQL
Question: I'm filtering the twitter streaming API by tracking for several keywords. If for example I only want to query and return from my database tweet information that was filtered by tracking for the keyword = 'BBC' how could this be done? Do the tweet information collected have a key:value relating to that keyword by which...
Answer: Unfortunately, the Twitter API doesn't provide a way to do this. You can try searching through received tweets for the keywords you specified, but it might not match exactly.
Q_Score 1 | Users Score 0 | Score 0 | is_accepted false | ViewCount 374 | AnswerCount 1 | Available Count 1
Q_Id 13,358,955 | A_Id 13,358,975 | CreationDate 2012-11-13T10:20:00.000
Title: Python: Combining itertools and sets to save memory
Tags: python,memory-management,set,itertools
Topics: Database and SQL, Python Basics and Environment
Question: so I discovered Sets in Python a few days ago and am surprised that they never crossed my mind before even though they make a lot of things really simple. I give an example later. Some things are still unclear to me. The docs say that Sets can be created from iterables and that the operators always return new Sets but ...
Answer: Sets are just like dict and list; on creation they copy the references from the seeding iterable. Iterators cannot be sets, because you cannot enforce the uniqueness requirement of a set. You cannot know if a future value yielded by an iterator has already been seen before. Moreover, in order for you to determine what ...
Q_Score 2 | Users Score 4 | Score 1.2 | is_accepted true | ViewCount 707 | AnswerCount 1 | Available Count 1
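The point about set construction copying references out of the iterable, and the iterable being consumed in the process, is easy to demonstrate:

```python
gen = (n % 3 for n in range(10))   # a lazy iterator, not a container

s = set(gen)                       # building the set drains the iterator,
                                   # keeping only the unique values seen
leftover = list(gen)               # nothing remains to iterate
```

This is also why an iterator cannot *be* a set: uniqueness can only be enforced once every value has actually been produced and compared.
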
Q_Id 13,369,795 | A_Id 13,369,827 | CreationDate 2012-11-13T22:13:00.000
Title: where is mongo db database stored on local hard drive?
Tags: python,macos,mongodb,amazon-ec2
Topics: Database and SQL
Question: I'm scraping tweets and inserting them into a mongo database for analysis work in python. I want to check the size of my database so that I won't incur additional charges if I run this on amazon. How can I tell how big my current mongo database is on osx? And will a free tier cover me?
Answer: Databases are, by default, stored in /data/db (some environments override this and use /var/lib/mongodb, however). You can see the total db size by looking at db.stats() (specifically fileSize) in the MongoDB shell.
Q_Score 7 | Users Score 1 | Score 0.039979 | is_accepted false | ViewCount 17,118 | AnswerCount 5 | Available Count 2
Q_Id 13,369,795 | A_Id 13,369,857 | CreationDate 2012-11-13T22:13:00.000
Title: where is mongo db database stored on local hard drive?
Tags: python,macos,mongodb,amazon-ec2
Topics: Database and SQL
Question: I'm scraping tweets and inserting them into a mongo database for analysis work in python. I want to check the size of my database so that I won't incur additional charges if I run this on amazon. How can I tell how big my current mongo database is on osx? And will a free tier cover me?
Answer: I believe on OSX the default location would be /data/db. But you can check your config file for the dbpath value to verify.
Q_Score 7 | Users Score 4 | Score 1.2 | is_accepted true | ViewCount 17,118 | AnswerCount 5 | Available Count 2
Q_Id 13,382,262 | A_Id 13,384,050 | CreationDate 2012-11-14T15:51:00.000
Title: Where does node.js fit in a stack or enhance it
Tags: python,node.js,web-applications
Topics: Web Development, Database and SQL
Question: I am interested in learning more about node.js and utilizing it in a new project. The problem I am having is envisioning where I could enhance my web stack with it and what role it would play. All I have really done with it is followed a tutorial or two where you make something like a todo app in all JS. That is all fi...
Answer: It would replace Python (flask/werkzeug) in both your view server and your API server.
Q_Score 0 | Users Score 0 | Score 1.2 | is_accepted true | ViewCount 228 | AnswerCount 1 | Available Count 1
Q_Id 13,495,135 | A_Id 13,495,557 | CreationDate 2012-11-21T14:12:00.000
Title: postgres installation error on Mac 10.6.8
Tags: python,django,postgresql
Topics: Web Development, Database and SQL
Question: I'm new to web development and I'm trying to get my mac set up for doing Django tutorials and helping some developers with a project that uses postgres. I will try to specify my questions as much as possible. However, it seems that there are lots of floating parts to this question and I'm not quite understanding some p...
Answer: Er, not sure how we can help you with that. One is for bash, one is for SQL. No, that's for running the development webserver, as the tutorial explains. There's no need to do that, that's what the virtualenv is for. This has nothing to do with Python versions, you simply don't seem to be in the right directory. Note t...
Q_Score 1 | Users Score 1 | Score 1.2 | is_accepted true | ViewCount 291 | AnswerCount 1 | Available Count 1
Q_Id 13,514,509 | A_Id 65,373,519 | CreationDate 2012-11-22T14:11:00.000
Title: Search Sqlite Database - All Tables and Columns
Tags: python,sqlite,search
Topics: Database and SQL
Question: Is there a library or open source utility available to search all the tables and columns of an Sqlite database? The only input would be the name of the sqlite DB file. I am trying to write a forensics tool and want to search sqlite files for a specific string.
Answer: Just dump the db and search it. % sqlite3 file_name .dump | grep 'my_search_string' You could instead pipe through less, and then use / to search: % sqlite3 file_name .dump | less
Q_Score 13 | Users Score 5 | Score 0.244919 | is_accepted false | ViewCount 15,981 | AnswerCount 4 | Available Count 2
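For a forensics tool, the same search can be done from Python without shelling out, by reading table names from sqlite_master and scanning each one; a minimal sketch (names invented for the demo):

```python
import sqlite3

def search_all_tables(conn, needle):
    """Yield (table, row) pairs whose textual form contains `needle`."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # Table names can't be bound as parameters, hence the quoting.
        for row in conn.execute(f'SELECT * FROM "{table}"'):
            if any(needle in str(cell) for cell in row):
                yield table, row

# Demo on a throwaway in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")
conn.execute("INSERT INTO notes VALUES ('the secret string')")
conn.execute("INSERT INTO notes VALUES ('nothing here')")
hits = list(search_all_tables(conn, "secret"))
```
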
Q_Id 13,514,509 | A_Id 59,407,127 | CreationDate 2012-11-22T14:11:00.000
Title: Search Sqlite Database - All Tables and Columns
Tags: python,sqlite,search
Topics: Database and SQL
Question: Is there a library or open source utility available to search all the tables and columns of an Sqlite database? The only input would be the name of the sqlite DB file. I am trying to write a forensics tool and want to search sqlite files for a specific string.
Answer: @MrWorf's answer didn't work for my sqlite file (an .exb file from Evernote) but this similar method worked: Open the file with DB Browser for SQLite sqlitebrowser mynotes.exb File / Export to SQL file (will create mynotes.exb.sql) grep 'STRING I WANT" mynotes.exb.sql
Q_Score 13 | Users Score 4 | Score 0.197375 | is_accepted false | ViewCount 15,981 | AnswerCount 4 | Available Count 2
Q_Id 13,548,590 | A_Id 13,551,914 | CreationDate 2012-11-25T05:29:00.000
Title: New tables created in web2py not seen when running in Google app Engine
Tags: python,google-app-engine,web2py
Topics: Web Development, System Administration and DevOps, Database and SQL
Question: I have created an app using web2py and have declared certain new table in it using the syntax db.define_table() but the tables created are not visible when I run the app in Google App Engine even on my local server. The tables that web2py creates by itself like auth_user and others in auth are available. What am I miss...
Answer: App Engine datastore doesn't really have tables. That said, if web2py is able to make use of the datastore (I'm not familiar with it), then Kinds (a bit like tables) will only show up in the admin-console (/_ah/admin locally) once an entity has been created (i.e. tables only show up once one row has been inserted, you'...
Q_Score 1 | Users Score 0 | Score 0 | is_accepted false | ViewCount 100 | AnswerCount 1 | Available Count 1
Q_Id 13,573,359 | A_Id 13,573,647 | CreationDate 2012-11-26T21:20:00.000
Title: Python module issue
Tags: python,linux,mysql-python,bluehost
Topics: Web Development, Database and SQL
Question: I have a shared hosting environment on Bluehost. I am running a custom installation of python(+ django) with a few installed modules. All has been working, until yesterday a change was made on the server(I assume) which gave me this django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line ...
Answer: I think you upgraded your OS installation which in turn upgraded libmysqlclient and broke native extension. What you can do is reinstall libmysqlclient16 again (how to do it depends your particular OS) and that should fix your issue. Other approach would be to uninstall MySQLdb module and reinstall it again, forcing py...
Q_Score 2 | Users Score 2 | Score 1.2 | is_accepted true | ViewCount 2,446 | AnswerCount 2 | Available Count 2
Q_Id 13,573,359 | A_Id 13,591,200 | CreationDate 2012-11-26T21:20:00.000
Title: Python module issue
Tags: python,linux,mysql-python,bluehost
Topics: Web Development, Database and SQL
Question: I have a shared hosting environment on Bluehost. I am running a custom installation of python(+ django) with a few installed modules. All has been working, until yesterday a change was made on the server(I assume) which gave me this django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line ...
Answer: You were right. Bluehost upgraded MySQL. Here is what I did: 1) remove the "build" directory in the "MySQL-python-1.2.3" directory 2) remove the egg 3) build the module again "python setup.py build" 4) install the module again "python setup.py install --prefix=$HOME/.local" Morale of the story for me is to remove the o...
Q_Score 2 | Users Score 0 | Score 0 | is_accepted false | ViewCount 2,446 | AnswerCount 2 | Available Count 2
Q_Id 13,620,867 | A_Id 13,639,532 | CreationDate 2012-11-29T07:32:00.000
Title: Sharding a Django Project
Tags: python,django,postgresql,sharding
Topics: Web Development, Database and SQL
Question: I'm starting a Django project and need to shard multiple tables that are likely to all be of too many rows. I've looked through threads here and elsewhere, and followed the Django multi-db documentation, but am still not sure how that all stitches together. My models have relationships that would be broken by sharding,...
Answer: I agree with @DanielRoseman. Also, how many is too many rows. If you are careful with indexing, you can handle a lot of rows with no performance problems. Keep your indexed values small (ints). I've got tables in excess of 400 million rows that produce sub-second responses even when joining with other many million ...
Q_Score 3 | Users Score 1 | Score 0.099668 | is_accepted false | ViewCount 1,300 | AnswerCount 1 | Available Count 1
Q_Id 13,657,404 | A_Id 13,657,435 | CreationDate 2012-12-01T07:31:00.000
Title: Configuring Remote MYSQL with a Dynamic IP
Tags: python,networking,cpanel
Topics: Database and SQL
Question: I am connecting my python software to a remote mysql server. I have had to add an access host on cPanel just for my computer, but the problem is the access host, which is my IP, is dynamic. How can I connect to the remote server without having to change the access host every time? Thanks guys, networking is my weakness.
Answer: Your best option is probably to find a dynamic DNS provider. The idea is to have a client running on your machine which updates a DNS entry on a remote server. Then you can use the hostname provided instead of your IP address in cPanel.
Q_Score 0 | Users Score 0 | Score 1.2 | is_accepted true | ViewCount 1,475 | AnswerCount 2 | Available Count 1
Q_Id 13,675,440 | A_Id 13,675,611 | CreationDate 2012-12-03T00:05:00.000
Title: How to utilize OpenBSD, Nginx, Python and NoSQL
Tags: python,nginx,nosql,openbsd
Topics: Database and SQL, Other
Question: I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit. We're working on a project website which will handle a lot of http handle requests, stream videos(mostly from a provider like youtube or vimeo). My colleague has e...
Answer: My advice - if you don't know how to use these technologies - don't do it. Few servers will cost you less than the time spent mastering technologies you don't know. If you want to try them out - do it. One by one, not everything at once. There is no magic solution on how to use them.
Q_Score 1 | Users Score 4 | Score 0.379949 | is_accepted false | ViewCount 1,447 | AnswerCount 2 | Available Count 2
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit. We're working on a project website which will handle a lot of HTTP requests and stream videos (mostly from a provider like YouTube or Vimeo). My colleague has e...
1
1
0.099668
0
false
13,676,002
0
1,447
2
0
0
13,675,440
I agree with wdev, the time it takes to learn this is not worth the money you will save. First of all, MySQL databases are not hard to scale. WordPress utilizes MySQL databases, and some of the world's largest websites use MySQL (google for a list). I can also say the same of linux and PHP. If you design your site usi...
1
0
0
How to utilize OpenBSD, Nginx, Python and NoSQL
2
python,nginx,nosql,openbsd
1
2012-12-03T00:05:00.000
I have a list of times in h:m format in an Excel spreadsheet, and I'm trying to do some manipulation with DataNitro but it doesn't seem to like the way Excel formats times. For example, in Excel the time 8:32 is actually just the decimal number .355556 formatted to appear as 8:32. When I access that time with DataNitro...
1
3
1.2
0
true
13,725,706
0
488
1
0
0
13,725,567
If .355556 (represented as 8:32) is in A1 then =HOUR(A1)&":"&MINUTE(A1) and Copy/Paste Special Values should get you to a string.
1
0
1
Converting time with Python and DataNitro in Excel
2
python,excel,time,number-formatting,datanitro
0
2012-12-05T14:35:00.000
I've implemented a breadth first search with a PyMongo social network. It's breadth first to reduce the number of connections. Now I get queries like coll.find({"_id": {"$in": ["id1", "id2", ...]}}) with a huge number of ids. PyMongo does not process some of these big queries due to their size. Is there a technical soluti...
0
0
0
0
false
13,729,295
0
116
1
0
0
13,728,955
If this is an inescapable problem, you could split the array of ids across multiple queries and then merge the results client-side.
1
0
0
Large size query with PyMongo?
2
python,mongodb
0
2012-12-05T17:27:00.000
So I am trying to create a realtime plot of data that is being recorded to a SQL server. The format is as follows: Database: testDB Table: sensors The table contains 3 columns. The first column is an auto-incremented ID starting at 1. The second column is the time in epoch format. The third column is my sensor d...
0
0
0
1
false
13,774,224
0
428
1
0
0
13,772,857
Install an httpd server, install PHP, and write a PHP script to fetch the data from the database and render it as a webpage. This is a fairly elaborate request with relatively little detail given. More information will allow us to give better answers.
1
0
0
Plotting data using Flot and MySQL
1
python,mysql,flot
0
2012-12-07T23:52:00.000
I need to obtain the path from a FileField, in order to check it against a given file system path, to know if the file I am inserting into the mongo database is already present. Is it possible? All I get is a GridFSProxy, and I am unable to understand how to handle it.
0
1
1.2
0
true
13,962,502
0
430
1
0
0
13,791,542
You can't, since it stores the data in the database. If you need to store the original path, you can create an EmbeddedDocument which contains a FileField and a StringField with the path string. But remember that the stored file and the file you might find at that path are not the same.
1
0
0
How to get filesystem path from mongoengine FileField
1
python,mongodb,path,mongoengine,filefield
0
2012-12-09T20:34:00.000
I am using psycopg2 in python, but my question is DBMS agnostic (as long as the DBMS supports transactions): I am writing a python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my pr...
1
2
0.197375
0
false
13,849,917
0
1,119
2
0
0
13,838,231
If you are committing your transactions after every 5000 record interval, it seems like you could do a little bit of preprocessing of your input data and actually break it out into a list of 5000 record chunks, i.e. [[[row1_data],[row2_data]...[row4999_data]],[[row5000_data],[row5001_data],...],[[....[row1000000_data]]...
1
0
0
How can I commit all pending queries until an exception occurs in a python connection object
2
python,postgresql,transactions,commit,psycopg2
0
2012-12-12T10:58:00.000
I am using psycopg2 in python, but my question is DBMS agnostic (as long as the DBMS supports transactions): I am writing a python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my pr...
1
3
1.2
0
true
13,838,751
0
1,119
2
0
0
13,838,231
I doubt you'll find a fast cross-database way to do this. You just have to optimize the balance between the speed gains from batch size and the speed costs of repeating work when an entry causes a batch to fail. Some DBs can continue with a transaction after an error, but PostgreSQL can't. However, it does allow you to...
1
0
0
How can I commit all pending queries until an exception occurs in a python connection object
2
python,postgresql,transactions,commit,psycopg2
0
2012-12-12T10:58:00.000
I'm building a file hosting app that will store all client files within a folder on an S3 bucket. I then want to track the amount of usage on S3 recursively per top folder to charge back the cost of storage and bandwidth to each corresponding client. Front-end is django but the solution can be python for obvious reaso...
0
0
0
0
false
13,892,252
1
818
1
0
1
13,873,119
No, it's not possible to create a bucket for each user, as Amazon allows only 100 buckets per account. So unless you are sure you won't have more than 100 users, it would be a very bad idea. The ideal solution is to track each user's storage in your Django app's own database. I guess you would be using the S3 boto libr...
1
0
0
How can I track s3 bucket folder usage with python?
2
python,django,amazon-s3
0
2012-12-14T05:20:00.000
How would I extend the sqlite3 module so if I import Database I can do Database.connect() as an alias to sqlite3.connect(), but define extra non standard methods?
1
4
0.379949
0
false
13,881,814
0
206
1
0
0
13,881,533
You can create a class which wraps sqlite3. It takes its .connect() method and maybe others and exposes them to the outside, and then you add your own stuff. Another option would be subclassing - if that works.
1
0
0
How do I extend a python module to include extra functionality? (sqlite3)
2
python,sqlite
0
2012-12-14T15:24:00.000
I'm trying to write a loader for SQLite that will load simple rows into the DB as fast as possible. The input data looks like rows retrieved from a Postgres DB. The approximate number of rows that will go into SQLite: from 20mil to 100mil. I cannot use any DB other than SQLite due to project restrictions. My question is: what is a proper l...
0
1
0.066568
0
false
13,919,496
0
293
2
0
0
13,919,448
SQLite can handle huge transactions with ease, so why not commit at the end? Have you tried this at all? If you do feel one transaction is a problem, why not commit every n insertions? Process rows one by one, insert as needed, but every n executed insertions add a connection.commit() to spread the load.
1
0
0
How to write proper big data loader to sqlite
3
python,sqlite,python-2.7
0
2012-12-17T17:56:00.000
I'm trying to write a loader for SQLite that will load simple rows into the DB as fast as possible. The input data looks like rows retrieved from a Postgres DB. The approximate number of rows that will go into SQLite: from 20mil to 100mil. I cannot use any DB other than SQLite due to project restrictions. My question is: what is a proper l...
0
0
1.2
0
true
13,976,529
0
293
2
0
0
13,919,448
Finally I managed to resolve my problem. The main issue was the excessive number of insertions into SQLite. After I started loading all the data from Postgres into memory and aggregating it properly to reduce the number of rows, I was able to decrease processing time from 60 hrs to 16 hrs.
1
0
0
How to write proper big data loader to sqlite
3
python,sqlite,python-2.7
0
2012-12-17T17:56:00.000
I'm attempting to install MySQL-python on a machine running CentOS 5.5 and python 2.7. This machine isn't running a mysql server, the mysql instance this box will be using is hosted on a separate server. I do have a working mysql client. On attempting sudo pip install MySQL-python, I get an error of EnvironmentError...
13
21
1
0
false
13,932,070
0
27,898
1
1
0
13,922,955
So it transpires that mysql_config is part of mysql-devel. mysql-devel is for compiling the mysql client, not the server. Installing mysql-devel allows the installation of MySQL-python.
1
0
0
Installing MySQL-python without mysql-server on CentOS
3
centos,mysql-python
0
2012-12-17T22:01:00.000
Trying to figure out whether this is a bug or by design: when no query_string is specified for a query, the SearchResults object is NOT sorted by the requested column. For example, here is some logging to show the problem: Results are returned unsorted on return index.search(query): query_string = '' sort_options strin...
8
-2
-0.197375
0
false
13,954,922
1
103
1
0
0
13,953,039
Could be a bug in the way you build your query, since it's not shown. Could be that you don't have an index for the case that isn't working.
1
0
0
sort_options only applied when query_string is not empty?
2
python,google-app-engine,gae-search
0
2012-12-19T13:02:00.000
I'm getting this error when trying to run Python / Django after installing psycopg2: Error: dlopen(/Users/macbook/Envs/medint/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _PQbackendPID Referenced from: /Users/macbook/Envs/medint/lib/python2.7/site-packages/psycopg2/_psycopg.so Expected in:...
1
6
1
0
false
59,063,813
0
3,182
1
0
0
14,001,116
On macOS Mojave, I solved it by running the steps below: pip uninstall psycopg2, then pip install psycopg2-binary
1
0
0
Psycopg2 Symbol not found: _PQbackendPID Expected in: dynamic lookup
2
python,django,postgresql,heroku,psycopg2
0
2012-12-22T08:02:00.000
I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the ...
5
2
1.2
0
true
14,012,685
0
1,819
4
0
0
14,006,363
I often use a combination of SQS/S3/EC2 for this type of batch work. Queue up messages in SQS for all of the work that needs to be performed (chunked into some reasonably small chunks). Spin up N EC2 instances that are configured to start reading messages from SQS, performing the work and putting results into S3, and...
1
0
0
Processing a large amount of data in parallel
5
python,fabric,boto,data-processing
0
2012-12-22T20:30:00.000
I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the ...
5
1
0.039979
0
false
14,009,860
0
1,819
4
0
0
14,006,363
You might benefit from Hadoop in the form of Amazon Elastic MapReduce. Without getting too deep, it can be seen as a way to apply some logic to massive data volumes in parallel (the Map stage). There is also a Hadoop technology called Hadoop Streaming, which enables you to use scripts / executables in any language (like Python). ...
1
0
0
Processing a large amount of data in parallel
5
python,fabric,boto,data-processing
0
2012-12-22T20:30:00.000
I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the ...
5
3
0.119427
0
false
14,006,535
0
1,819
4
0
0
14,006,363
Did you do some performance measurements: where are the bottlenecks? Is it CPU bound, IO bound, DB bound? When it is CPU bound, you can try a Python JIT like PyPy. When it is IO bound, you need more HDs (and put some md striping on them). When it is DB bound, you can try to drop all the indexes and keys first. Last wee...
1
0
0
Processing a large amount of data in parallel
5
python,fabric,boto,data-processing
0
2012-12-22T20:30:00.000
I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the ...
5
2
0.07983
0
false
14,006,466
0
1,819
4
0
0
14,006,363
I did something like this some time ago, and my setup was one multicore instance (x-large or more) that converts raw source files (xml/csv) into an intermediate format. You can run (num-of-cores) copies of the converter script on it in parallel. Since my target was mongo, I used json as an intermediate format, ...
1
0
0
Processing a large amount of data in parallel
5
python,fabric,boto,data-processing
0
2012-12-22T20:30:00.000
So, a friend and I are currently writing a panel (in python/django) for managing gameservers. Each client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'. The passwords would be generated randomly and prese...
2
1
0.099668
0
false
14,008,320
1
408
2
0
0
14,008,232
Your question embodies a contradiction in terms. Either you don't want reversibility or you do. You will have to choose. The usual technique is to hash the passwords and to provide a way for the user to reset his own password on sufficient alternative proof of identity. You should never display a password to anybody, f...
1
0
0
Storing MySQL Passwords
2
python,mysql,django,security,encryption
0
2012-12-23T02:46:00.000
So, a friend and I are currently writing a panel (in python/django) for managing gameservers. Each client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'. The passwords would be generated randomly and prese...
2
4
1.2
0
true
14,008,264
1
408
2
0
0
14,008,232
Though this is not the answer you were looking for, you only have three possibilities: store the passwords in plaintext (ugh!); store them with reversible encryption, e.g. RSA (http://stackoverflow.com/questions/4484246/encrypt-and-decrypt-text-with-rsa-in-php); or do not store them; clients can only reset the password, not view it. Th...
1
0
0
Storing MySQL Passwords
2
python,mysql,django,security,encryption
0
2012-12-23T02:46:00.000
I am writing myself a blog in Python, and am about to put it up on GitHub. One of the files in this project will be a script that creates the required tables in the DB at the very beginning. Since I'm going to put this file in a public repository, I expose the entire DB structure. Is it dangerous if I do so? If yes, I am thinking of an al...
2
3
0.197375
0
false
14,039,904
0
599
3
0
0
14,039,877
It's not dangerous if you secure access to the database. You are only exposing your know-how. Once somebody gains access to the database, it's easy to list the database structure anyway.
1
0
0
Is it dangerous if I expose my database schema in an open source project?
3
python,database,open-source,schema,database-schema
0
2012-12-26T11:20:00.000
I am writing myself a blog in Python, and am about to put it up on GitHub. One of the files in this project will be a script that creates the required tables in the DB at the very beginning. Since I'm going to put this file in a public repository, I expose the entire DB structure. Is it dangerous if I do so? If yes, I am thinking of an al...
2
0
0
0
false
14,039,945
0
599
3
0
0
14,039,877
There is a difference between sharing a database and sharing a database schema. You can comment out the values of the database machine/username/password in your code and publish the code on GitHub. As a proof of concept, you can host your application in the cloud (without disclosing its database credentials) and add its link to your GitHub rea...
1
0
0
Is it dangerous if I expose my database schema in an open source project?
3
python,database,open-source,schema,database-schema
0
2012-12-26T11:20:00.000
I am writing myself a blog in Python, and am about to put it up on GitHub. One of the files in this project will be a script that creates the required tables in the DB at the very beginning. Since I'm going to put this file in a public repository, I expose the entire DB structure. Is it dangerous if I do so? If yes, I am thinking of an al...
2
0
0
0
false
21,087,156
0
599
3
0
0
14,039,877
I think it is dangerous: if a SQL injection vulnerability exists in your website, the schema will help the attacker retrieve all the important data more easily.
1
0
0
Is it dangerous if I expose my database schema in an open source project?
3
python,database,open-source,schema,database-schema
0
2012-12-26T11:20:00.000
Need a way to improve performance on my website's SQL based Activity Feed. We are using Django on Heroku. Right now we are using actstream, which is a Django App that implements an activity feed using Generic Foreign Keys in the Django ORM. Basically, every action has generic foreign keys to its actor and to any obje...
4
1
0.066568
0
false
14,074,169
1
499
2
0
0
14,073,030
You said Redis? Everything is better with Redis. Caching is one of the best ideas in software development. No matter whether you use materialized views, you should also consider trying to cache those; believe me, your users will notice the difference.
1
0
0
Good way to make a SQL based activity feed faster
3
python,sql,django,redis,feed
0
2012-12-28T17:04:00.000
Need a way to improve performance on my website's SQL based Activity Feed. We are using Django on Heroku. Right now we are using actstream, which is a Django App that implements an activity feed using Generic Foreign Keys in the Django ORM. Basically, every action has generic foreign keys to its actor and to any obje...
4
1
1.2
0
true
14,201,647
1
499
2
0
0
14,073,030
Went with an approach that sort of combined the two suggestions. We created a master list of every action in the database, which included all the information we needed about the actions, and stuck it in Redis. Given an action ID, we can now do a Redis look up on it and get a dictionary object that is ready to be retur...
1
0
0
Good way to make a SQL based activity feed faster
3
python,sql,django,redis,feed
0
2012-12-28T17:04:00.000
I would like to know where the value for a one2many table is initially stored in OpenERP 6.1. That is, if we create a record for a one2many table, this record will actually be saved to the database table only after saving the record of the main table associated with it, even though we can create many records (rows) for...
2
2
0.132549
0
false
14,119,351
1
1,493
2
0
0
14,119,208
When saving a new record in OpenERP, a dictionary will be generated with all the fields having data as keys and their data as values. If the field is a one2many and has many lines, then a list of dictionaries will be the value for the one2many field. You can modify it by overriding the create and write functions in open...
1
0
0
Where is the value stored for a one2many table initially in OpenERP6.1
3
python,openerp
0
2013-01-02T08:57:00.000
I would like to know where the value for a one2many table is initially stored in OpenERP 6.1. That is, if we create a record for a one2many table, this record will actually be saved to the database table only after saving the record of the main table associated with it, even though we can create many records (rows) for...
2
0
0
0
false
14,120,545
1
1,493
2
0
0
14,119,208
A One2Many field is a child-parent relation in OpenERP. One2Many is just a logical field; it has no effect in the database. If you are creating a Sale Order, then Sale Order Line is a One2Many in the Sale Order model. But if you do not put a Many2One in Sale Order Line, then the One2Many in Sale Order will not work. The Many2One field p...
1
0
0
Where is the value stored for a one2many table initially in OpenERP6.1
3
python,openerp
0
2013-01-02T08:57:00.000
I want to compare the value of a given column at each row against another value, and if the values are equal, I want to copy the whole row to another spreadsheet. How can I do this using Python? THANKS!
3
0
0
0
false
30,048,138
0
14,135
1
0
0
14,188,923
For "xls" files it's possible to use the xlutils package. It's currently not possible to copy objects between workbooks in openpyxl due to the structure of the Excel format: there are lots of dependencies all over the place that need to be managed. It is, therefore, the responsibility of client code to copy everything ...
1
0
0
How to copy a row of Excel sheet to another sheet using Python
2
python,excel,xlrd,xlwt,openpyxl
0
2013-01-07T02:04:00.000
I had GAE 1.4 installed in my local UBUNTU system and everything was working fine. Only warning I was getting at that time was something like "You are using old GAE SDK 1.4." So, to get rid of that I have done following things: I removed old version of GAE and installed GAE 1.7. Along with that I have also changed my ...
0
1
0.099668
0
false
14,368,275
1
191
2
1
0
14,307,581
Did you update djangoappengine without updating django-nonrel and djangotoolbox? While I haven't upgraded to GAE 1.7.4 yet, I'm running 1.7.2 with no problems. I suspect your problem is not related to the GAE SDK but rather your django-nonrel installation has mismatching pieces.
1
0
0
Django-nonrel broke after installing new version of Google App Engine SDK
2
python,google-app-engine,django-nonrel
0
2013-01-13T20:03:00.000
I had GAE 1.4 installed in my local UBUNTU system and everything was working fine. Only warning I was getting at that time was something like "You are using old GAE SDK 1.4." So, to get rid of that I have done following things: I removed old version of GAE and installed GAE 1.7. Along with that I have also changed my ...
0
0
1.2
0
true
14,382,654
1
191
2
1
0
14,307,581
Actually I changed the Google App Engine path in the .bashrc file and restarted the system. That solved the issue. I think that since I was not restarting the system after the .bashrc changes, it was creating the problem.
1
0
0
Django-nonrel broke after installing new version of Google App Engine SDK
2
python,google-app-engine,django-nonrel
0
2013-01-13T20:03:00.000
I'm working on an NDB based Google App Engine application that needs to keep track of the day/night cycle of a large number (~2000) fixed locations. Because the latitude and longitude don't ever change, I can precompute them ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strateg...
1
0
0
0
false
14,365,980
1
537
2
1
0
14,343,871
I would say precompute those structures and output them into hardcoded python structures that you save in a generated python file. Just read those structures into memory as part of your instance startup. From your description, there's no reason to compute these values at runtime, and there's no reason to store it in th...
1
0
0
Best strategy for storing precomputed sunrise/sunset data?
3
python,google-app-engine,python-2.7
0
2013-01-15T17:59:00.000
I'm working on an NDB based Google App Engine application that needs to keep track of the day/night cycle of a large number (~2000) fixed locations. Because the latitude and longitude don't ever change, I can precompute them ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strateg...
1
1
0.066568
0
false
14,345,283
1
537
2
1
0
14,343,871
For 2000 immutable data points - just calculate them when instance starts or on first use, then keep it in memory. This will be the cheapest and fastest.
1
0
0
Best strategy for storing precomputed sunrise/sunset data?
3
python,google-app-engine,python-2.7
0
2013-01-15T17:59:00.000
I have an income table which contains a recurrence field. Now if the user selects recurrence_type as "Monthly" or "Daily", then I have to add a row into the income table "daily" or "monthly". Is there any way in MySQL which will add data periodically into the table? I am using the Django Framework for developing the web application.
1
0
0
0
false
27,122,957
1
214
2
0
0
14,344,473
I used the django-celery package and created a job in it to update the data periodically.
1
0
0
add data to table periodically in mysql
2
python,mysql,django
0
2013-01-15T18:33:00.000
I have an income table which contains a recurrence field. Now if the user selects recurrence_type as "Monthly" or "Daily", then I have to add a row into the income table "daily" or "monthly". Is there any way in MySQL which will add data periodically into the table? I am using the Django Framework for developing the web application.
1
1
1.2
0
true
14,344,610
1
214
2
0
0
14,344,473
As far as I know, there is no such function in MySQL. Even if MySQL could do it, this should not be its job. Such functions should be part of the business logic in your application. The normal way is to set up a cron job on the server. The cron job will wake up at the time you set, and then call your Python script or SQL to fulfi...
1
0
0
add data to table periodically in mysql
2
python,mysql,django
0
2013-01-15T18:33:00.000
I have written a simple blog using Python in google app engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for no of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine. How ...
4
1
0.099668
0
false
14,347,324
1
1,105
2
0
0
14,347,244
If voting is only for subscribed users, then enable voting after members log in to your site. If not, then you can track users' IP addresses so one IP address can vote once for a single article in a day. By the way, what kind of security do you need?
1
0
0
How to implement a 'Vote up' System for posts in my blog?
2
python,mysql,google-app-engine,jinja2
0
2013-01-15T21:27:00.000
I have written a simple blog using Python in google app engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for no of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine. How ...
4
4
1.2
0
true
14,349,144
1
1,105
2
0
0
14,347,244
First, keep in mind that there is no such thing as "secure", just "secure enough for X". There's always a tradeoff—more secure means more annoying for your legitimate users and more expensive for you. Getting past these generalities, think about your specific case. There is nothing that has a 1-to-1 relationship with u...
1
0
0
How to implement a 'Vote up' System for posts in my blog?
2
python,mysql,google-app-engine,jinja2
0
2013-01-15T21:27:00.000
My question is a bit complex and I am new to OpenERP. I have an external database and an OpenERP one; the external one isn't PostgreSQL. My job is to synchronize the partners in the two databases, the external one being the more important. This means that if the external one's data change, so do OpenERP's, bu...
6
0
0
0
false
14,356,856
1
4,853
1
0
0
14,356,218
Add an integer field to the res.partner table for storing the external id in both databases. When data is retrieved from the external server and added to your OpenERP database, store the external id in the res.partner record on the local server, and also save the id of the newly created partner record in the external server's par...
1
0
0
Adding external Ids to Partners in OpenERP withouth a new module
3
python,xml-rpc,openerp
0
2013-01-16T10:27:00.000
I'm working on a web-app that's very heavily database driven. I'm nearing the initial release and so I've locked down the features for this version, but there are going to be lots of other features implemented after release. These features will inevitably require some modification to the database models, so I'm concern...
1
2
1.2
0
true
14,364,804
1
120
1
0
0
14,364,214
Some thoughts for managing databases for a production application: Make backups nightly. This is crucial because if you try to do an update (to the data or the schema), and you mess up, you'll need to be able to revert to something more stable. Create environments. You should have something like a local copy of the da...
1
0
0
How to approach updating an database-driven application after release?
2
python,database,migration,sqlalchemy
0
2013-01-16T17:29:00.000
I have been trying to get my head around Django over the last week or two. It's slowly starting to make some sense and I am really liking it. My goal is to replace a fairly messy Excel spreadsheet with a database and frontend for my users. This would involve pulling the data out of a table, presenting it in a web tabul...
3
1
0.099668
0
false
14,371,043
1
2,045
1
0
0
14,370,576
Exporting the Excel sheet in Django and having it rendered as text fields is not a simple two-step process; you need to know how Django works. First you need to export the data into a MySQL database using either some language or some ready-made tools. Then you need to make a model for that table, and then you can u...
1
0
0
Custom Django Database Frontend
2
python,database,django,frontend
0
2013-01-17T01:00:00.000
I'm writing a webapp in Bottle. I have a small interface that lets the user run SQL statements. Sometimes it takes about 5 seconds until the user gets a result because the DB is quite big and old. What I want to do is the following: 1. Start the query in a thread. 2. Give the user a response right away and have Ajax poll...
1
0
1.2
0
true
14,377,893
1
105
1
0
0
14,377,250
This would be a good use for something like memcached.
1
0
0
Python 3 - SQL Result - where to store it
1
python,database,multithreading
0
2013-01-17T10:43:00.000
When I try installing mysql-python using below command, macbook-user$ sudo pip install MYSQL-python I get these messages: /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h:891:1: warning: this is the location of the previous definition /usr/bin/lipo: /tmp/_mysql-LtlmLe.o and /tmp/_m...
1
0
0
0
false
14,399,388
0
506
1
1
0
14,399,223
At first glance it looks like a damaged pip package. Have you tried easy_install with the same package instead?
1
0
0
clang error when installing MYSQL-python on Lion-mountain (Mac OS X 10.8)
1
python,mysql,django,pip,mysql-python
0
2013-01-18T12:41:00.000
When I fired redis-py's bgsave() command, the return value was False, but I'm pretty sure the execution was successful because I've checked with lastsave(). However, if I use save() the return value would be True after successful execution. Could anyone please explain what False indicates for bgsave()? Not sure if it h...
1
2
0.379949
0
false
14,418,853
0
778
1
0
0
14,417,846
Thanks to Pavel Anossov, after reading the code of client.py, I found out that responses from 2 commands (BGSAVE and BGREWRITEAOF) were not converted from bytes to str, and this caused the problem in Python 3. To fix this issue, just change lambda r: r == to lambda r: nativestr(r) == for these two commands in RESPONSE_...
1
0
0
Why does redis-py's bgsave() command return False after successful execution?
1
python,redis
0
2013-01-19T19:10:00.000
I am writing a chat bot that uses past conversations to generate its responses. Currently I use text files to store all the data but I want to use a database instead so that multiple instances of the bot can use it at the same time. How should I structure this database? My first idea was to keep a main table like creat...
1
1
0.197375
0
false
14,430,911
0
1,198
1
0
0
14,430,856
Don't use a different table for each conversation. Instead add a "conversation" column to your single table.
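A minimal sqlite3 sketch of the single-table layout; the column names (conversation_id, speaker, text) are illustrative, not prescribed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustrative; use a file path in practice
conn.execute("""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        conversation_id INTEGER NOT NULL,
        speaker TEXT,
        text TEXT,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute("INSERT INTO messages (conversation_id, speaker, text) VALUES (1, 'user', 'hi')")
conn.execute("INSERT INTO messages (conversation_id, speaker, text) VALUES (1, 'bot', 'hello')")
conn.execute("INSERT INTO messages (conversation_id, speaker, text) VALUES (2, 'user', 'bye')")

# One conversation is just a filtered query, not a separate table:
rows = conn.execute(
    "SELECT speaker, text FROM messages WHERE conversation_id = ? ORDER BY id", (1,)
).fetchall()
print(rows)  # [('user', 'hi'), ('bot', 'hello')]
```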
1
0
0
Storing chat logs in relational database
1
python,sql,database,sqlite,database-design
0
2013-01-21T00:13:00.000
This is my program:
import MySQLdb as mdb
from MySQLdb import IntegrityError
conn = mdb.connect("localhost", "asdf", "asdf", "asdf")
When the connect function is called, Python prints some text ("h" in the shell). This happens only if I execute the script from a particular folder. If I copy the same script file to...
0
1
1.2
0
true
14,434,772
0
41
1
0
0
14,434,712
Try deleting the *.pyc files. Second, run the script with python -v so you can see where each module is being imported from.
1
0
0
python mysqldb printing text even if no print statement in the code
1
python,mysql
0
2013-01-21T08:18:00.000
I have populated a combobox with a QSqlQueryModel. It's all working fine as it is, but I would like to add an extra item to the combobox that could say "ALL_RECORDS". This way I could use the combobox as a filtering device. I obviously don't want to add this extra item to the database; how can I add it to the combobo...
1
1
0.099668
0
false
14,540,595
0
243
1
0
0
14,455,871
You could use a proxy model that gets its data from two models, one for your default values and the other for your database, and use it to populate your QComboBox.
1
1
0
Adding an item to an already populated combobox
2
python,qt,pyqt,pyqt4,pyside
0
2013-01-22T10:02:00.000
I'm building a finance application in Python to do time series analysis on security prices (among other things). The heavy lifting will be done in Python mainly using Numpy, SciPy, and pandas (pandas has an interface for SQLite and MySQL). With a web interface to present results. There will be a few hundred GB of data....
0
0
0
0
false
14,509,945
0
1,382
2
0
0
14,509,517
SQLite is great for embedded databases, but it's not really great for anything that requires access by more than one process at a time. For this reason it cannot be taken seriously for your application. MySQL is a much better alternative. I'm also in agreement that Postgres would be an even better option.
1
0
0
MySQL v. SQLite for Python based financial web app
3
python,mysql,sqlite,pandas
0
2013-01-24T19:49:00.000
I'm building a finance application in Python to do time series analysis on security prices (among other things). The heavy lifting will be done in Python mainly using Numpy, SciPy, and pandas (pandas has an interface for SQLite and MySQL). With a web interface to present results. There will be a few hundred GB of data....
0
0
0
0
false
14,514,661
0
1,382
2
0
0
14,509,517
For many 'research' oriented time series database loads, it is far faster to do as much analysis in the database than to copy the data to a client and analyze it using a regular programming language. Copying 10G across the network is far slower than reading it from disk. Relational databases do not natively support ti...
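To make the "analyze in the database" point concrete, here is a small sketch using the stdlib sqlite3 module as a stand-in for the server database; the table and column names are invented for the example:

```python
import sqlite3

# sqlite3 stands in here for MySQL/Postgres; the point is the same:
# let the database aggregate instead of shipping every row to Python.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (symbol TEXT, px REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?)",
                 [("AAPL", 10.0), ("AAPL", 12.0), ("MSFT", 20.0)])

# Server-side aggregation: only one small result set crosses the wire.
avg = conn.execute(
    "SELECT symbol, AVG(px) FROM prices GROUP BY symbol ORDER BY symbol"
).fetchall()
print(avg)  # [('AAPL', 11.0), ('MSFT', 20.0)]
```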
1
0
0
MySQL v. SQLite for Python based financial web app
3
python,mysql,sqlite,pandas
0
2013-01-24T19:49:00.000
I am using Celery standalone (not within Django). I am planning to have one worker task type running on multiple physical machines. The task does the following Accept an XML document. Transform it. Make multiple database reads and writes. I'm using PostgreSQL, but this would apply equally to other store types that ...
47
2
0.066568
0
false
14,526,700
0
24,474
2
1
0
14,526,249
You can override the default behavior to have threaded workers instead of a worker per process in your celery config: CELERYD_POOL = "celery.concurrency.threads.TaskPool" Then you can store the shared pool instance on your task instance and reference it from each threaded task invocation.
1
0
0
Celery Worker Database Connection Pooling
6
python,postgresql,connection-pooling,celery
0
2013-01-25T16:38:00.000
I am using Celery standalone (not within Django). I am planning to have one worker task type running on multiple physical machines. The task does the following Accept an XML document. Transform it. Make multiple database reads and writes. I'm using PostgreSQL, but this would apply equally to other store types that ...
47
3
0.099668
0
false
14,549,811
0
24,474
2
1
0
14,526,249
Have one DB connection per worker process. Since celery itself maintains a pool of worker processes, your db connections will always be equal to the number of celery workers. The flip side, sort of, is that it ties db connection pooling to celery worker process management. But that should be fine given that GIL allows only...
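A minimal sketch of the one-connection-per-worker-process idea, using sqlite3 as a stand-in for psycopg2. The lazily created module-level singleton below is what you would set up from each worker process (for example via Celery's worker_process_init signal; that hook-up is omitted here):

```python
import sqlite3

_conn = None  # one connection per OS process; each forked worker gets its own

def get_connection():
    """Lazily create (and then reuse) this process's single DB connection."""
    global _conn
    if _conn is None:
        # stand-in for psycopg2.connect(dsn) in the real task
        _conn = sqlite3.connect(":memory:")
    return _conn

# Every task invocation inside the same worker reuses the same connection:
print(get_connection() is get_connection())  # True
```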
1
0
0
Celery Worker Database Connection Pooling
6
python,postgresql,connection-pooling,celery
0
2013-01-25T16:38:00.000
There is a sqlite3 library that comes with python 2.7.3, but it is hardly the latest version. I would like to upgrade it within a virtualenv environment. In other words, the upgrade only applies to the version of python installed within this virtualenv. What is the correct way to do so?
4
1
0.066568
0
false
17,417,792
0
9,594
2
0
0
14,541,869
I was stuck on the same problem once. This solved it for me:
1. Download and untar the required Python version
2. mkdir local
3. Download and untar the sqlite package
4. ./configure --prefix=/home/aanuj/local
5. make
6. make install
7. ./configure --prefix=/home/aanuj/local LDFLAGS='-L/home/aanuj/local/lib' CPPFLAGS='-I/home/aanuj/l...
1
0
1
How to upgrade sqlite3 in python 2.7.3 inside a virtualenv?
3
python,sqlite,virtualenv
0
2013-01-26T21:42:00.000
There is a sqlite3 library that comes with python 2.7.3, but it is hardly the latest version. I would like to upgrade it within a virtualenv environment. In other words, the upgrade only applies to the version of python installed within this virtualenv. What is the correct way to do so?
4
4
1.2
0
true
14,550,136
0
9,594
2
0
0
14,541,869
The below works for me, but please comment if there is any room for improvement:
1. Activate the virtualenv to which you are going to install the latest sqlite3
2. Get the latest source of the pysqlite package from Google Code: wget http://pysqlite.googlecode.com/files/pysqlite-2.6.3.tar.gz
3. Compile pysqlite from source and toge...
1
0
1
How to upgrade sqlite3 in python 2.7.3 inside a virtualenv?
3
python,sqlite,virtualenv
0
2013-01-26T21:42:00.000
I have a couple of OpenERP modules implemented for OpenERP 6.1. When I installed OpenERP 7.0, I copied these modules into the addons folder for OpenERP 7. After that, I tried to update the modules list through the web interface, but nothing changed. Also, I started the server again with the options --database=mydb --update=all, but mod...
2
6
1.2
0
true
14,564,692
1
3,217
1
0
0
14,563,801
OpenERP 6.1 modules cannot be used directly in OpenERP 7. You have to make some basic changes to the 6.1 modules. For example, the tree and form tags require a string attribute, and version="7" must be included in the form tag. If you have inherited basic modules like sale or purchase, then you also have to update the inherit xpath expressions, etc. Some objects res....
1
0
0
OpenERP 7 with modules from OpenERP 6.1
1
python,openerp,erp
0
2013-01-28T14:06:00.000
I am trying to analyse the SQL performance of our Django (1.3) web application. I have added a custom log handler which attaches to django.db.backends and set DEBUG = True, this allows me to see all the database queries that are being executed. However the SQL is not valid SQL! The actual query is select * from app_mod...
0
0
0
0
false
14,567,526
1
189
1
0
0
14,567,172
select * from app_model where name = %s is a prepared statement. I would recommend logging the statement and the parameters separately. In order to get a well-formed query you need to do something like "select * from app_model where name = %s" % quote_string("user") or, more generally, query % map(quote_string, params). ...
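A hedged sketch of reconstructing the full query from the logged template and parameters; quote_string here is the illustrative helper named above, not a Django or MySQLdb API (real code should rely on the driver's own quoting, e.g. MySQLdb's connection.literal):

```python
# `quote_string` is an illustrative helper for the sketch, not a real
# library function; it only handles simple string values.
def quote_string(value):
    return "'" + str(value).replace("'", "''") + "'"

sql = "select * from app_model where name = %s"   # the logged template
params = ("user",)                                 # the logged parameters

full_query = sql % tuple(quote_string(p) for p in params)
print(full_query)  # select * from app_model where name = 'user'
```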
1
0
0
How to retrieve the real SQL from the Django logger?
4
python,sql,django,django-database
0
2013-01-28T17:00:00.000
I am trying to use a python set as a filter for ids from a mysql table. The python set stores all the ids to filter (about 30,000 right now). This number will grow slowly over time, and I am concerned about the maximum capacity of a python set. Is there a limit to the number of elements it can contain?
2
0
0
0
false
14,577,827
0
2,460
1
0
0
14,577,790
I don't know if there is an arbitrary limit for the number of items in a set. More than likely the limit is tied to the available memory.
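For scale, a quick check that a set of the size in question is trivial for CPython; the practical ceiling is available memory, not any hard element count:

```python
import sys

# A set of 30,000 integers is tiny by modern standards; its hash table
# is roughly on the order of a megabyte on a 64-bit build.
ids = set(range(30000))
print(len(ids))                 # 30000
print(sys.getsizeof(ids) > 0)   # True; actual size depends on the build
```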
1
0
1
Is there a limit to the number of values that a python set can contain?
2
python,set
0
2013-01-29T07:31:00.000
No code examples here. Just running into an issue with Microsoft Excel 2010 where I have a python script on linux that pulls data from csv files, pushes it into excel, and emails that file to a certain email address as an attachment. My problem is that I'm using formulas in my excel file, and when it first opens up it ...
0
0
0
0
false
14,592,481
0
853
1
0
0
14,592,328
Figured this out. Just used the for loop to keep a running total. Sorry for the wasted question.
1
0
0
Protected View in Microsoft Excel 2010 and Python
1
python,linux,excel,view,protected
0
2013-01-29T21:08:00.000
For 100k+ entities in the google datastore, ndb.query().count() is going to be cancelled by the deadline, even with an index. I've tried the produce_cursors option, but only iter() and fetch_page() return a cursor; count() doesn't. How can I count large numbers of entities?
4
2
0.132549
0
false
14,713,169
1
2,669
1
1
0
14,673,642
This is indeed a frustrating issue. I've been doing some work in this area lately to get some general count stats - basically, the number of entities that satisfy some query. count() is a great idea, but it is hobbled by the datastore RPC timeout. It would be nice if count() supported cursors somehow so that you could ...
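One workaround, since fetch_page() accepts a cursor while count() does not, is to page through keys_only results and sum the page sizes. The sketch below mimics ndb's fetch_page(page_size, start_cursor=..., keys_only=True) -> (results, cursor, more) contract with a stand-in query class so it can run outside App Engine; FakeQuery and count_all are illustrative names:

```python
def count_all(query, page_size=1000):
    """Count entities by paging keys_only results through cursors."""
    total, cursor, more = 0, None, True
    while more:
        keys, cursor, more = query.fetch_page(
            page_size, start_cursor=cursor, keys_only=True)
        total += len(keys)
    return total

class FakeQuery:
    """Stand-in mimicking ndb.Query.fetch_page over a list of keys;
    an integer offset plays the role of the opaque datastore cursor."""
    def __init__(self, keys):
        self.keys = keys
    def fetch_page(self, page_size, start_cursor=None, keys_only=True):
        start = start_cursor or 0
        page = self.keys[start:start + page_size]
        cursor = start + len(page)
        return page, cursor, cursor < len(self.keys)

print(count_all(FakeQuery(list(range(2500))), page_size=1000))  # 2500
```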
1
0
0
ndb.query.count() failed with 60s query deadline on large entities
3
python,google-app-engine,app-engine-ndb,bigtable
0
2013-02-03T14:41:00.000
I need some help with d3 and MySQL. Below is my question: I have data stored in MySQL (e.g. keywords with their frequencies). I now want to visualize it using d3. As far as my knowledge of d3 goes, it requires a json file as input. My question is: How do I access this MySQL database from a d3 script? One way which I could...
4
1
0.066568
0
false
14,679,748
0
8,185
1
0
0
14,679,610
d3 is a javascript library that runs client-side, while the MySQL database runs server-side. d3 can't connect to a MySQL database, let alone convert the results to JSON. The way you thought it was possible (steps 1 and 2) is what you should do.
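A minimal sketch of the server-side half (steps 1 and 2 in the question): query the database and emit JSON for d3.json() to fetch. sqlite3 stands in for MySQL here, and the table/column names are invented for the example:

```python
import json
import sqlite3

# sqlite3 stands in for MySQL; the server-side endpoint (PHP, a Python
# handler, etc.) would run this query and return the JSON string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keywords (word TEXT, freq INTEGER)")
conn.executemany("INSERT INTO keywords VALUES (?, ?)",
                 [("data", 42), ("chart", 17)])

rows = conn.execute("SELECT word, freq FROM keywords ORDER BY freq DESC").fetchall()
payload = json.dumps([{"word": w, "freq": f} for w, f in rows])
print(payload)  # this string is what d3.json() would receive client-side
```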
1
0
0
Accessing MySQL database in d3 visualization
3
javascript,python,mysql,d3.js,data-visualization
0
2013-02-04T02:22:00.000