Dataset schema (column, dtype, observed range; string columns show min–max lengths):

Question                             stringlengths   25 – 7.47k
Q_Score                              int64           0 – 1.24k
Users Score                          int64           -10 – 494
Score                                float64         -1 – 1.2
Data Science and Machine Learning    int64           0 – 1
is_accepted                          bool            2 classes
A_Id                                 int64           39.3k – 72.5M
Web Development                      int64           0 – 1
ViewCount                            int64           15 – 1.37M
Available Count                      int64           1 – 9
System Administration and DevOps     int64           0 – 1
Networking and APIs                  int64           0 – 1
Q_Id                                 int64           39.1k – 48M
Answer                               stringlengths   16 – 5.07k
Database and SQL                     int64           1 – 1
GUI and Desktop Applications         int64           0 – 1
Python Basics and Environment        int64           0 – 1
Title                                stringlengths   15 – 148
AnswerCount                          int64           1 – 32
Tags                                 stringlengths   6 – 90
Other                                int64           0 – 1
CreationDate                         stringlengths   23 – 23
----
Title: PyODBC "Image not found (0) (SQLDriverConnect)"
Tags: python,ms-access,pyodbc
Q_Id: 11,154,965 | A_Id: 11,155,551 | CreationDate: 2012-06-22T11:03:00.000
Q_Score: 1 | Users Score: 3 | Score: 0.53705 | is_accepted: false
ViewCount: 1,896 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I'm trying to use PyODBC to connect to an Access database. It works fine on Windows, but running it under OS X I get— Traceback (most recent call last): File "", line 1, in File "access.py", line 10, in init self.connection = connect(driver='{Microsoft Access Driver (.mdb)}', dbq=path, pwd=password) p...
Answer: pyodbc allows connecting to ODBC data sources, but it does not actually implement drivers. I'm not familiar with OS X, but on Linux ODBC sources are typically described in the odbcinst.ini file (its location is determined by the ODBCSYSINI variable). You will need to install a Microsoft Access ODBC driver for OS X.
----
Title: Setting up Flask-SQLAlchemy
Tags: python,sqlalchemy,flask,flask-sqlalchemy
Q_Id: 11,167,518 | A_Id: 11,210,290 | CreationDate: 2012-06-23T06:47:00.000
Q_Score: 1 | Users Score: 4 | Score: 0.26052 | is_accepted: false
ViewCount: 7,080 | AnswerCount: 3 | Available Count: 1
Topics: Web Development, Database and SQL
Question: Trying to set up Flask and SQLAlchemy on Windows but I've been running into issues. I've been using Flask-SQLAlchemy along with PostgreSQL 9.1.4 (32 bit) and the Psycopg2 package. Here are the relevant bits of code, I created a basic User model just to test that my DB is connecting, and committing. The three bits of co...
Answer: At the time you execute create_all, models.py has never been imported, so no class is declared. Thus, create_all does not create any table. To solve this problem, import models before running create_all or, even better, don't separate the db object from the model declaration.
----
Title: Security concerns while loading fixtures
Tags: python,deployment,fixtures
Q_Id: 11,174,324 | A_Id: 11,174,357 | CreationDate: 2012-06-24T01:20:00.000
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 70 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I need to load fixtures into the system when a new VM is up. I have dumped MongoDB and Postgres. But I can't just sit in front of the PC whenever a new machine is up. I want to be able to just "issue" a command or the script automatically does it. But a command like pg_dump to dump PostgreSQL will require a password. T...
Answer: You have to give the user that loads the fixture the privileges to write to the database, regardless of which way you are going to load the data. With Postgres you can give login permission without a password to specific users and eliminate the problem of a shared password, or you can store the password in the pgpass file wit...
----
Title: Efficient approach to catching database errors
Tags: python,database,sqlite,error-handling
Q_Id: 11,215,535 | A_Id: 11,215,911 | CreationDate: 2012-06-26T20:38:00.000
Q_Score: 2 | Users Score: 1 | Score: 1.2 | is_accepted: true
ViewCount: 237 | AnswerCount: 1 | Available Count: 1
Topics: Web Development, Database and SQL
Question: I have a desktop app that has 65 modules, about half of which read from or write to an SQLite database. I've found that there are 3 ways that the database can throw an SQliteDatabaseError: SQL logic error or missing database (happens unpredictably every now and then) Database is locked (if it's being edited by anoth...
Answer: Your gut feeling is right. There is no way to add robustness to the application without reviewing each database access point separately. You still have a lot of important choices about how the application should react to errors, depending on factors like: Is it attended, or sometimes completely unattended? Is delay OK,...
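One reaction policy the answer alludes to can be sketched with the stdlib sqlite3 module: retry briefly when the database is locked, and re-raise everything else. This is an illustrative sketch, not the answerer's code; the table name and retry parameters are made up, and the right policy is application-specific.

```python
import sqlite3
import time

def execute_with_retry(conn, sql, params=(), attempts=5, delay=0.1):
    """Retry briefly on 'database is locked'; re-raise any other error."""
    for attempt in range(attempts):
        try:
            return conn.execute(sql, params)
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) or attempt == attempts - 1:
                raise
            time.sleep(delay)  # back off, then try again

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
execute_with_retry(conn, "INSERT INTO t VALUES (?)", (1,))
print(conn.execute("SELECT x FROM t").fetchall())  # [(1,)]
```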
----
Title: why doesn't EVERYTHING default to UTF-8?
Tags: python,mysql,ruby,utf-8
Q_Id: 11,219,060 | A_Id: 11,219,610 | CreationDate: 2012-06-27T03:37:00.000
Q_Score: 8 | Users Score: 6 | Score: 1 | is_accepted: false
ViewCount: 519 | AnswerCount: 2 | Available Count: 2
Topics: Database and SQL
Question: I'm just curious that there are modern systems out there that default to something other than UTF-8. I've had a person block for an entire day on the multiple locations that a mysql system can have different encoding. Very frustrating. Is there any good reason not to use utf-8 as a default (and storage space seems lik...
Answer: Once upon a time there was no Unicode or UTF-8, and disparate encoding schemes were in use throughout the world. It wasn't until 1988 that the initial Unicode proposal was issued, with the goal of encoding all the world's characters in a common encoding. The first release in 1991 covered many character repre...
----
Title: why doesn't EVERYTHING default to UTF-8?
Tags: python,mysql,ruby,utf-8
Q_Id: 11,219,060 | A_Id: 11,219,088 | CreationDate: 2012-06-27T03:37:00.000
Q_Score: 8 | Users Score: -1 | Score: -0.099668 | is_accepted: false
ViewCount: 519 | AnswerCount: 2 | Available Count: 2
Topics: Database and SQL
Question: I'm just curious that there are modern systems out there that default to something other than UTF-8. I've had a person block for an entire day on the multiple locations that a mysql system can have different encoding. Very frustrating. Is there any good reason not to use utf-8 as a default (and storage space seems lik...
Answer: Some encodings have different byte orders (little and big endian)
----
Title: Python + Sqlite 3. How to construct queries?
Tags: python,sqlite
Q_Id: 11,223,147 | A_Id: 11,224,222 | CreationDate: 2012-06-27T09:27:00.000
Q_Score: 0 | Users Score: 1 | Score: 0.049958 | is_accepted: false
ViewCount: 1,125 | AnswerCount: 4 | Available Count: 3
Topics: Database and SQL
Question: I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the ...
Answer: If you're not after just parameter substitution, but full construction of the SQL, you have to do that using string operations on your end. The ? replacement always just stands for a value. Internally, the SQL string is compiled to SQLite's own bytecode (you can find out what it generates with EXPLAIN thesql) and ? rep...
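The point in the answer above, that ? always stands for a bound value rather than spliced-in SQL text, can be demonstrated with the stdlib sqlite3 module. A minimal sketch (table name and payload are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# The ? placeholder is bound at execution time; the value is never
# interpolated into the SQL text, so injection-style input stays inert.
payload = "Robert'); DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (payload,))

# The malicious-looking string was stored as plain data; the table survives.
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [("Robert'); DROP TABLE users;--",)]
```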
----
Title: Python + Sqlite 3. How to construct queries?
Tags: python,sqlite
Q_Id: 11,223,147 | A_Id: 11,224,475 | CreationDate: 2012-06-27T09:27:00.000
Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true
ViewCount: 1,125 | AnswerCount: 4 | Available Count: 3
Topics: Database and SQL
Question: I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the ...
Answer: If you're trying to transmit changes to the database to another computer, why do they have to be expressed as SQL strings? Why not pickle the query string and the parameters as a tuple, and have the other machine also use SQLite parameterization to query its database?
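The accepted answer's suggestion can be sketched with the stdlib pickle and sqlite3 modules: ship the (sql, params) tuple and let the receiving side do the binding. The table and values here are invented for illustration.

```python
import pickle
import sqlite3

# Sender: keep the SQL and its parameters separate instead of building
# a final SQL string, then serialize the pair for transmission.
query = ("INSERT INTO logs (level, message) VALUES (?, ?)", ("INFO", "started"))
payload = pickle.dumps(query)  # bytes that could go over a socket

# Receiver: unpickle and let SQLite perform the parameter binding.
sql, params = pickle.loads(payload)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (level TEXT, message TEXT)")
conn.execute(sql, params)
print(conn.execute("SELECT * FROM logs").fetchall())  # [('INFO', 'started')]
```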
----
Title: Python + Sqlite 3. How to construct queries?
Tags: python,sqlite
Q_Id: 11,223,147 | A_Id: 11,224,003 | CreationDate: 2012-06-27T09:27:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 1,125 | AnswerCount: 4 | Available Count: 3
Topics: Database and SQL
Question: I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the ...
Answer: I want how to get the parsed 'sql param'. It's all open source so you have full access to the code doing the parsing / sanitization. Why not just read this code and find out how it works and whether there's some (possibly undocumented) implementation that you can reuse?
----
Title: Is there a way to get the name of a workbook in openpyxl
Tags: python,excel,openpyxl
Q_Id: 11,233,140 | A_Id: 11,233,362 | CreationDate: 2012-06-27T18:54:00.000
Q_Score: 3 | Users Score: 2 | Score: 1.2 | is_accepted: true
ViewCount: 10,098 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: There is a worksheet.title method but not workbook.title method. Looking in the documentation there is no explicit way to find it, I wasn't sure if anyone knew a workaround or trick to get it.
Answer: A workbook doesn't really have a name - normally you'd just consider it to be the basename of the file it's saved as... slight update - yep, even in VB WorkBook.Name just returns "file on disk.xls"
----
Title: use standard datastore index or build my own
Tags: python,google-app-engine,indexing,google-cloud-datastore
Q_Id: 11,270,434 | A_Id: 11,270,908 | CreationDate: 2012-06-30T00:18:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 82 | AnswerCount: 2 | Available Count: 1
Topics: Web Development, System Administration and DevOps, Database and SQL
Question: I am running a webapp on google appengine with python and my app lets users post topics and respond to them and the website is basically a collection of these posts categorized onto different pages. Now I only have around 200 posts and 30 visitors a day right now but that is already taking up nearly 20% of my reads and...
Answer: I'd suggest using pre-existing code and building around that instead of re-inventing the wheel.
----
Title: How to manage user-specific database connections in a Pyramid Web Application?
Tags: python,sqlalchemy,pyramid
Q_Id: 11,299,182 | A_Id: 11,300,227 | CreationDate: 2012-07-02T18:29:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 343 | AnswerCount: 1 | Available Count: 1
Topics: Web Development, Database and SQL
Question: We are using Python Pyramid with SQLAlchemy and MySQL to build a web application. We would like to have user-specific database connections, so every web application user has their own database credentials. This is primarily for security reasons, so each user only has privileges for their own database content. We would ...
Answer: The best way to do this that I know is to use the same database with multiple schemas. Unfortunately I don't think this works with MySQL. The idea is that you pool engine connections to the same database, and then when you know which user is associated with the request you can switch schemas for that connection.
----
Title: setting up mysql-python on Snow Leopard
Tags: mysql-python
Q_Id: 11,304,019 | A_Id: 12,535,972 | CreationDate: 2012-07-03T03:19:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 67 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I have successfully installed py27-mysql from MacPorts and MySQL-python-1.2.3c1 on a machine running Snow Leopard. Because I have MySQL 5.1.48 in an odd location (/usr/local/mysql/bin/mysql/), I had to edit the setup.cfg file when I installed mysql-python. However, now that it's installed, I'm still getting the error "...
Answer: MacPorts' py27-mysql, MySQL-python, and MySQLdb are all synonyms for the same thing. If you successfully installed py27-mysql, you should not need anything else, and it's possible you've messed up your python site-packages. Also, make sure you are invoking the right python binary, i.e. MacPorts' python27 and not the on...
----
Title: error in accessing table created in django in the python code
Tags: python,django,linux,sqlite,ubuntu-10.04
Q_Id: 11,307,928 | A_Id: 11,308,029 | CreationDate: 2012-07-03T09:18:00.000
Q_Score: 1 | Users Score: 1 | Score: 1.2 | is_accepted: true
ViewCount: 768 | AnswerCount: 1 | Available Count: 1
Topics: Web Development, Database and SQL
Question: Now on writing path as sys.path.insert(0,'/home/pooja/Desktop/mysite'), it ran fine asked me for the word tobe searched and gave this error: Traceback (most recent call last): File "call.py", line 32, in s.save() File "/usr/local/lib/python2.6/dist-packages/django/db/models/base.py", line 463, in save self.save_...
Answer: The exception says: no such table: search_keywords, which is quite self-explanatory and means that there is no database table with such name. So: You may be using relative path to db file in settings.py, which resolves to a different db depending on place where you execute the script. Try to use absolute path and see ...
----
Title: key/value store with good performance for multiple tenants
Tags: javascript,python,google-app-engine,nosql,multi-tenant
Q_Id: 11,319,890 | A_Id: 11,319,983 | CreationDate: 2012-07-03T22:08:00.000
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 239 | AnswerCount: 2 | Available Count: 2
Topics: Web Development, System Administration and DevOps, Database and SQL
Question: im running a multi tenant GAE app where each tenant could have from a few 1000 to 100k documents. at this moment im trying to make a MVC javascript client app (the admin part of my app with spine.js) and i need CRUD endpoints and the ability to get a big amount of serialized objects at once. for this specific job appen...
Answer: The overhead of making calls from appengine to these external machines is going to be worse than the performance you're seeing now (I would expect). why not just move everything to a non-appengine machine? I can't speak for couch, but mongo or redis are definitely capable of handling serious load as long as they are se...
----
Title: key/value store with good performance for multiple tenants
Tags: javascript,python,google-app-engine,nosql,multi-tenant
Q_Id: 11,319,890 | A_Id: 11,323,377 | CreationDate: 2012-07-03T22:08:00.000
Q_Score: 1 | Users Score: 2 | Score: 1.2 | is_accepted: true
ViewCount: 239 | AnswerCount: 2 | Available Count: 2
Topics: Web Development, System Administration and DevOps, Database and SQL
Question: im running a multi tenant GAE app where each tenant could have from a few 1000 to 100k documents. at this moment im trying to make a MVC javascript client app (the admin part of my app with spine.js) and i need CRUD endpoints and the ability to get a big amount of serialized objects at once. for this specific job appen...
Answer: Why not use the much faster regular appengine datastore instead of blobstore? Simply store your documents in regular entities as Blob property. Just make sure the entity size doesn't exceed 1 MB, in which case you have to split up your data into more than one entity. I run an application with millions of large Blobs th...
----
Title: How can I combine rows of data into a new table based on similar timestamps? (python/MySQL/PHP)
Tags: php,python,mysql,json,pingdom
Q_Id: 11,329,588 | A_Id: 11,329,769 | CreationDate: 2012-07-04T12:56:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 131 | AnswerCount: 2 | Available Count: 1
Topics: Database and SQL, Other
Question: Not sure if the title is a great way to word my actual problem and I apologize if this is too general of a question but I'm having some trouble wrapping my head around how to do something. What I'm trying to do: The idea is to create a MySQL database of 'outages' for the thousands of servers I'm responsible for monito...
Answer: The most basic solution with the setup you have now would be to: Get a list of all events, ordered by server ID and then by time of the event Loop through that list and record the start of a new event / end of an old event for your new database when: the server ID changes the time between the current event and the pr...
----
Title: Keeping database connection open - good practice?
Tags: python,oracle
Q_Id: 11,346,224 | A_Id: 11,347,776 | CreationDate: 2012-07-05T14:17:00.000
Q_Score: 3 | Users Score: 2 | Score: 1.2 | is_accepted: true
ViewCount: 1,153 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I'm writing a bit of Python code that watches a certain directory for new files, and inserts new files into a database using the cx_Oracle module. This program will be running as a service. At a given time there could be many files arriving at once, but there may also be periods of up to an hour where no files are rece...
Answer: If you only need one or two connections, I see no harm in keeping them open indefinitely. With Oracle, creating a new connection is an expensive operation, unlike in some other databases, such as MySQL where it is very cheap to create a new connection. Sometimes it can even take a few seconds to connect which can beco...
----
Title: setting the location where pyodbc searches for odbcinst.ini file
Tags: python,odbc,pyodbc
Q_Id: 11,393,269 | A_Id: 11,393,468 | CreationDate: 2012-07-09T10:30:00.000
Q_Score: 5 | Users Score: 6 | Score: 1.2 | is_accepted: true
ViewCount: 7,504 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I am trying to query ODBC compliant databases using pyodbc in ubuntu. For that, i have installed the driver (say mysql-odbc-driver). After installation the odbcinst.ini file with the configurations gets created in the location /usr/share/libmyodbc/odbcinst.ini When i try to connect to the database using my pyodbc conne...
Answer: Assuming you are using unixODBC, here are some possibilities: rebuild unixODBC from scratch and set --sysconfdir export ODBCSYSINI env var pointing to a directory and unixODBC will look here for odbcinst.ini and odbc.ini system dsns export ODBCINSTINI and point it at your odbcinst.ini file BTW, I doubt pyodbc looks a...
----
Title: Concurrency on sqlite database using python
Tags: python,database,sqlite,concurrency,locking
Q_Id: 11,430,276 | A_Id: 20,908,479 | CreationDate: 2012-07-11T10:07:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 484 | AnswerCount: 4 | Available Count: 2
Topics: Database and SQL, Python Basics and Environment
Question: There is a list of data that I want to deal with. However I need to process the data with multiple instances to increase efficiency. Each time each instance shall take out one item, delete it from the list and process it with some procedures. First I tried to store the list in a sqlite database, but sqlite allows mult...
Answer: Why not read in all the items from the database and put them in a queue? You can have a worker thread get an item, process it and move on to the next one.
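The queue-plus-workers idea in the answer above can be sketched with the stdlib queue, threading, and sqlite3 modules: the database is read once up front, and the workers contend only over an in-memory queue, never over SQLite's write lock. The table, data, and "processing" step are invented for illustration.

```python
import queue
import sqlite3
import threading

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO items (data) VALUES (?)", [("a",), ("b",), ("c",)])

# One reader fills the queue; workers then process without touching the DB.
work = queue.Queue()
for row in conn.execute("SELECT id, data FROM items"):
    work.put(row)

results = []
lock = threading.Lock()

def worker():
    while True:
        try:
            item_id, data = work.get_nowait()
        except queue.Empty:
            return  # nothing left to do
        with lock:
            results.append(data.upper())  # stand-in for real processing

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['A', 'B', 'C']
```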
----
Title: Concurrency on sqlite database using python
Tags: python,database,sqlite,concurrency,locking
Q_Id: 11,430,276 | A_Id: 11,430,479 | CreationDate: 2012-07-11T10:07:00.000
Q_Score: 0 | Users Score: 0 | Score: 1.2 | is_accepted: true
ViewCount: 484 | AnswerCount: 4 | Available Count: 2
Topics: Database and SQL, Python Basics and Environment
Question: There is a list of data that I want to deal with. However I need to process the data with multiple instances to increase efficiency. Each time each instance shall take out one item, delete it from the list and process it with some procedures. First I tried to store the list in a sqlite database, but sqlite allows mult...
Answer: How about another field in db as a flag (e.g. PROCESSING, UNPROCESSED, PROCESSED)?
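The accepted flag-column idea can be sketched with the stdlib sqlite3 module. A worker claims a row by flipping UNPROCESSED to PROCESSING and checking rowcount, so two workers cannot claim the same row; the table name and claiming helper are made up for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO items (status) VALUES (?)", [("UNPROCESSED",)] * 3)

def claim_one(conn):
    """Claim one UNPROCESSED row; return its id, or None if nothing is left."""
    row = conn.execute(
        "SELECT id FROM items WHERE status = 'UNPROCESSED' "
        "ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    # The status check in the WHERE clause makes the claim safe: if another
    # worker got there first, rowcount is 0 and we claimed nothing.
    cur = conn.execute(
        "UPDATE items SET status = 'PROCESSING' "
        "WHERE id = ? AND status = 'UNPROCESSED'", (row[0],))
    return row[0] if cur.rowcount == 1 else None

item = claim_one(conn)
# ... process the item ...
conn.execute("UPDATE items SET status = 'PROCESSED' WHERE id = ?", (item,))
print(item)  # 1
```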
----
Title: Python MySQL vs Ruby MySQL
Tags: python,mysql,ruby
Q_Id: 11,431,679 | A_Id: 11,431,795 | CreationDate: 2012-07-11T11:28:00.000
Q_Score: 0 | Users Score: 7 | Score: 1.2 | is_accepted: true
ViewCount: 254 | AnswerCount: 2 | Available Count: 1
Topics: Database and SQL, Other
Question: I'm writing a script to be run as a cron and I was wondering, is there any difference in speed between the Ruby MySQL or Python MySQL in terms of speed/efficiency? Would I be better of just using PHP for this task? The script will get data from a mysql database with 20+ fields and store them in another table every X a...
Answer: Just pick the language you feel most comfortable with. It shouldn't make a noticeable difference. After writing the application, you can search for bottlenecks and optimize them.
----
Title: Google App Engine + Google Cloud Storage + Sqlite3 + Django/Python
Tags: python,django,sqlite,google-app-engine,google-cloud-storage
Q_Id: 11,462,291 | A_Id: 11,498,320 | CreationDate: 2012-07-12T23:44:00.000
Q_Score: 1 | Users Score: 0 | Score: 1.2 | is_accepted: true
ViewCount: 2,078 | AnswerCount: 3 | Available Count: 2
Topics: Web Development, System Administration and DevOps, Database and SQL
Question: since it is not possible to access mysql remotely on GAE, without the google cloud sql, could I put a sqlite3 file on google cloud storage and access it through the GAE with django.db.backends.sqlite3? Thanks.
Answer: No. SQLite requires native code libraries that aren't available on App Engine.
----
Title: Google App Engine + Google Cloud Storage + Sqlite3 + Django/Python
Tags: python,django,sqlite,google-app-engine,google-cloud-storage
Q_Id: 11,462,291 | A_Id: 11,463,047 | CreationDate: 2012-07-12T23:44:00.000
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 2,078 | AnswerCount: 3 | Available Count: 2
Topics: Web Development, System Administration and DevOps, Database and SQL
Question: since it is not possible to access mysql remotely on GAE, without the google cloud sql, could I put a sqlite3 file on google cloud storage and access it through the GAE with django.db.backends.sqlite3? Thanks.
Answer: Google Cloud SQL is meant for this, why don't you want to use it? If you have every frontend instance load the DB file, you'll have a really hard time synchronizing them. It just doesn't make sense. Why would you want to do this?
----
Title: How to determine fields in a table using SQLAlchemy?
Tags: python,sqlalchemy
Q_Id: 11,500,239 | A_Id: 11,500,397 | CreationDate: 2012-07-16T08:00:00.000
Q_Score: 4 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 103 | AnswerCount: 3 | Available Count: 1
Topics: Database and SQL
Question: Is it possible to determine fields available in a table (MySQL DB) pragmatically at runtime using SQLAlchemy or any other python library ? Any help on this would be great. Thanks.
Answer: You can run the SHOW TABLE TABLENAME and get the columns of the tables.
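The SHOW-style query in the answer above is MySQL-specific; a more portable route, since the question allows "any other python library", is that every DB-API cursor exposes column names via cursor.description after a query. A sketch using the stdlib sqlite3 driver as a stand-in for a MySQL driver (table name made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT, email TEXT)")

# Selecting zero rows is enough to populate cursor.description, whose
# first element per entry is the column name (DB-API 2.0 behaviour).
cur = conn.execute("SELECT * FROM person LIMIT 0")
columns = [d[0] for d in cur.description]
print(columns)  # ['id', 'name', 'email']
```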
----
Title: browser query vs python MySQLdb query
Tags: php,python,mysql,pyqt4,mysql-python
Q_Id: 11,508,670 | A_Id: 11,510,083 | CreationDate: 2012-07-16T16:38:00.000
Q_Score: 0 | Users Score: 1 | Score: 0.099668 | is_accepted: false
ViewCount: 161 | AnswerCount: 2 | Available Count: 2
Topics: Database and SQL
Question: I wanted to know whether mysql query with browser is faster or python's MySQLdb is faster. I am using MysqlDb with PyQt4 for desktop ui and PHP for web ui.
Answer: I believe you're asking about whether Python or PHP (what I think you mean by browser?) is more efficient at making a database call. The answer? It depends on the specific code and calls, but it's going to be largely the same. Both Python and PHP are interpreted languages and interpret the code at run time. If either o...
----
Title: browser query vs python MySQLdb query
Tags: php,python,mysql,pyqt4,mysql-python
Q_Id: 11,508,670 | A_Id: 11,509,874 | CreationDate: 2012-07-16T16:38:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 161 | AnswerCount: 2 | Available Count: 2
Topics: Database and SQL
Question: I wanted to know whether mysql query with browser is faster or python's MySQLdb is faster. I am using MysqlDb with PyQt4 for desktop ui and PHP for web ui.
Answer: Browsers don't perform database queries (unless you consider the embedded SQLite database), so not only is your question nonsensical, it is in fact completely irrelevant.
----
Title: GoogleAppEngine error: rdbms_mysqldb.py:74
Tags: google-app-engine,python-2.7
Q_Id: 11,520,573 | A_Id: 11,533,684 | CreationDate: 2012-07-17T10:25:00.000
Q_Score: 3 | Users Score: 1 | Score: 0.066568 | is_accepted: false
ViewCount: 1,146 | AnswerCount: 3 | Available Count: 2
Topics: Web Development, System Administration and DevOps, Database and SQL
Question: Trying to do HelloWorld on GoogleAppEngine, but getting the following error. C:\LearningGoogleAppEngine\HelloWorld>dev_appserver.py helloworld WARNING 2012-07-17 10:21:37,250 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded. Traceback (most recent call last): File "C:...
Answer: it's been a while, but I believe I've previously fixed this by adding import rdbms to dev_appserver.py hmm.. or was that import MySQLdb? (more likely)
----
Title: GoogleAppEngine error: rdbms_mysqldb.py:74
Tags: google-app-engine,python-2.7
Q_Id: 11,520,573 | A_Id: 12,513,978 | CreationDate: 2012-07-17T10:25:00.000
Q_Score: 3 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 1,146 | AnswerCount: 3 | Available Count: 2
Topics: Web Development, System Administration and DevOps, Database and SQL
Question: Trying to do HelloWorld on GoogleAppEngine, but getting the following error. C:\LearningGoogleAppEngine\HelloWorld>dev_appserver.py helloworld WARNING 2012-07-17 10:21:37,250 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded. Traceback (most recent call last): File "C:...
Answer: just had the exact same error messages: I found that restarting Windows fixed everything and I did not have to deviate from the YAML or py file given on the google helloworld python tutorial.
----
Title: Process 5 million key-value data in python.Will NoSql solve?
Tags: python,nosql
Q_Id: 11,522,232 | A_Id: 11,522,576 | CreationDate: 2012-07-17T12:15:00.000
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 347 | AnswerCount: 3 | Available Count: 1
Topics: Data Science and Machine Learning, Database and SQL
Question: I would like to get the suggestion on using No-SQL datastore for my particular requirements. Let me explain: I have to process the five csv files. Each csv contains 5 million rows and also The common id field is presented in each csv.So, I need to merge all csv by iterating 5 million rows.So, I go with python ...
Answer: If this is just a one-time process, you might want to just setup an EC2 node with more than 1G of memory and run the python scripts there. 5 million items isn't that much, and a Python dictionary should be fairly capable of handling it. I don't think you need Hadoop in this case. You could also try to optimize your scr...
----
Title: Preserving Formula in Excel Python XLWT
Tags: python,excel,formula,xlrd,xlwt
Q_Id: 11,527,100 | A_Id: 11,596,820 | CreationDate: 2012-07-17T16:44:00.000
Q_Score: 1 | Users Score: 1 | Score: 0.197375 | is_accepted: false
ViewCount: 1,819 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I am working on the XLWT XLRD XLUTIL packages. Whenever I write to a new sheet, all the formulas have been obliterated. I tried the following fixes, but they all failed: Re-write all the formulas in with a loop: Failure: XLWT Formula does not support advanced i.e. VLOOKUP Formulas Doing the calculations all in Python...
Answer: (a) xlrd does not currently support extracting formulas. (b) You say "XLWT Formula does not support advanced i.e. VLOOKUP Formulas". This is incorrect. If you are the same person that I seem to have convinced that xlwt supports VLOOKUP etc after a lengthy exchange of private emails over the last few days, please say s...
----
Title: Flask SQLAlchemy query, specify column names
Tags: python,sqlalchemy,flask-sqlalchemy
Q_Id: 11,530,196 | A_Id: 68,064,416 | CreationDate: 2012-07-17T20:16:00.000
Q_Score: 183 | Users Score: 2 | Score: 0.039979 | is_accepted: false
ViewCount: 191,024 | AnswerCount: 10 | Available Count: 1
Topics: Web Development, Database and SQL
Question: How do I specify the column that I want in my query using a model (it selects all columns by default)? I know how to do this with the sqlalchmey session: session.query(self.col1), but how do I do it with with models? I can't do SomeModel.query(). Is there a way?
Answer: result = ModelName.query.add_columns(ModelName.colname, ModelName.colname)
----
Title: Data structures in python: maintaining filesystem structure within a database
Tags: python,database,data-structures,filesystems
Q_Id: 11,554,676 | A_Id: 11,554,828 | CreationDate: 2012-07-19T05:50:00.000
Q_Score: 4 | Users Score: 2 | Score: 0.197375 | is_accepted: false
ViewCount: 1,195 | AnswerCount: 2 | Available Count: 1
Topics: Web Development, Database and SQL
Question: I have a data organization issue. I'm working on a client/server project where the server must maintain a copy of the client's filesystem structure inside of a database that resides on the server. The idea is to display the filesystem contents on the server side in an AJAX-ified web interface. Right now I'm simply u...
Answer: I suggest to follow the Unix way. Each file is considered a stream of bytes, nothing more, nothing less. Each file is technically represented by a single structure called i-node (index node) that keeps all information related to the physical stream of the data (including attributes, ownership,...). The i-node does not...
----
Title: reducing SQLITE3 access time in Python (Parallelization?)
Tags: python,sqlite,parallel-processing
Q_Id: 11,556,783 | A_Id: 11,557,147 | CreationDate: 2012-07-19T08:25:00.000
Q_Score: 0 | Users Score: 1 | Score: 1.2 | is_accepted: true
ViewCount: 204 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I have a huge database in sqlite3 of 41 million rows in a table. However, it takes around 14 seconds to execute a single query. I need to significantly improve the access time! Is this a hard disk hardware limit or a processor limit? If it is a processor limit then I think I can use the 8 processors I have to paralleli...
Answer: Firstly, make sure any relevant indexes are in place to assist in efficient queries -- which may or may not help... Other than that, SQLite is meant to be a (strangely) lite embedded SQL DB engine - 41 million rows is probably pushing it depending on number and size of columns etc... You could take your DB and import i...
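The index advice in the answer above is the usual first fix: without an index the query scans all 41 million rows, with one it becomes a B-tree lookup. A small sketch with the stdlib sqlite3 module (table and data invented), using EXPLAIN QUERY PLAN to confirm the index is actually used:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("s%d" % (i % 100), float(i)) for i in range(1000)])

# Without an index this WHERE clause forces a full table scan; with one,
# SQLite can seek directly to the matching rows.
conn.execute("CREATE INDEX idx_sensor ON readings (sensor)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM readings WHERE sensor = 's7'").fetchall()
print(plan)  # the plan mentions idx_sensor rather than a SCAN
```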
----
Title: How to detect colours and then apply colours while working with .xlsx(excel-2007) files on python 3.2(windows 7)
Tags: python,python-3.x,excel-2007,openpyxl
Q_Id: 11,562,279 | A_Id: 11,783,592 | CreationDate: 2012-07-19T13:47:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 116 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I am parsing .xlsx files using openpyxl.While writing into the xlsx files i need to maintain the same font colour as well as cell colour as was present in the cells of my input .xlsx files.Any idea how to extract the colour coding from the cell and then implement the same in another excel file.Thanks in advance
Answer: I believe you can access the font colour by: colour = ws.cell(row=id,column=id).style.font.color I am not sure how to access the cell colour though.
----
Title: Retrieving only the most recent row in MySQL
Tags: python,mysql
Q_Id: 11,566,537 | A_Id: 11,566,922 | CreationDate: 2012-07-19T17:50:00.000
Q_Score: 3 | Users Score: -1 | Score: -0.033321 | is_accepted: false
ViewCount: 1,336 | AnswerCount: 6 | Available Count: 2
Topics: Database and SQL
Question: I have a mysql table with coloumns of name, perf, date_time . How can i retrieve only the most recent MySQL row?
Answer: SELECT TOP 1 * FROM tablename ORDER BY date_and_time DESC (for SQL Server); SELECT * FROM tablename ORDER BY date_and_time DESC LIMIT 1 (for MySQL)
----
Title: Retrieving only the most recent row in MySQL
Tags: python,mysql
Q_Id: 11,566,537 | A_Id: 11,566,549 | CreationDate: 2012-07-19T17:50:00.000
Q_Score: 3 | Users Score: 3 | Score: 0.099668 | is_accepted: false
ViewCount: 1,336 | AnswerCount: 6 | Available Count: 2
Topics: Database and SQL
Question: I have a mysql table with coloumns of name, perf, date_time . How can i retrieve only the most recent MySQL row?
Answer: SELECT * FROM table ORDER BY date, time LIMIT 1
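Both answers boil down to ordering by the timestamp column (which the question names date_time), newest first, and taking a single row. A sketch of that query run from Python; SQLite is used here so the example is self-contained, but the same ORDER BY ... DESC LIMIT 1 works in MySQL, and the table name and data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE perf_log (name TEXT, perf REAL, date_time TEXT)")
conn.executemany("INSERT INTO perf_log VALUES (?, ?, ?)", [
    ("a", 1.0, "2012-07-19 17:00:00"),
    ("b", 2.0, "2012-07-19 18:30:00"),
    ("c", 3.0, "2012-07-19 18:00:00"),
])

# Newest row first, then take just one.
row = conn.execute(
    "SELECT name, perf, date_time FROM perf_log "
    "ORDER BY date_time DESC LIMIT 1").fetchone()
print(row)  # ('b', 2.0, '2012-07-19 18:30:00')
```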
----
Title: Scanning MySQL table for updates Python
Tags: python,mysql
Q_Id: 11,567,357 | A_Id: 11,567,806 | CreationDate: 2012-07-19T18:48:00.000
Q_Score: 1 | Users Score: 3 | Score: 1.2 | is_accepted: true
ViewCount: 1,004 | AnswerCount: 2 | Available Count: 1
Topics: Database and SQL
Question: I am creating a GUI that is dependent on information from MySQL table, what i want to be able to do is to display a message every time the table is updated with new data. I am not sure how to do this or even if it is possible. I have codes that retrieve the newest MySQL update but I don't know how to have a message ev...
Answer: Quite simple and straightforward solution will be just to poll the latest autoincrement id from your table, and compare it with what you've seen at the previous poll. If it is greater -- you have new data. This is called 'active polling', it's simple to implement and will suffice if you do this not too often. So you ha...
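The active-polling approach in the accepted answer can be sketched with the stdlib sqlite3 module standing in for MySQL (the table and helper names are made up): remember the highest autoincrement id seen so far and compare it against MAX(id) on each poll.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

def poll_for_new(conn, last_seen):
    """Return (has_new_rows, newest_id) by comparing against last_seen."""
    (newest,) = conn.execute(
        "SELECT COALESCE(MAX(id), 0) FROM events").fetchone()
    return newest > last_seen, newest

last_seen = 0
conn.execute("INSERT INTO events (payload) VALUES ('hello')")
has_new, last_seen = poll_for_new(conn, last_seen)
print(has_new, last_seen)  # True 1
# In a GUI you would call poll_for_new from a timer, e.g. every few seconds,
# and show a message whenever has_new comes back True.
```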
----
Title: MySQL Joins Between Databases On Different Servers Using Python?
Tags: mysql,python-2.7
Q_Id: 11,585,494 | A_Id: 11,585,571 | CreationDate: 2012-07-20T19:04:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 214 | AnswerCount: 2 | Available Count: 2
Topics: Database and SQL
Question: Database A resides on server server1, while database B resides on server server2. Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc). In such a case, is it possible to perform a join between a table ...
Answer: Without doing something like replicating database A onto the same server as database B and then doing the JOIN, this would not be possible.
----
Title: MySQL Joins Between Databases On Different Servers Using Python?
Tags: mysql,python-2.7
Q_Id: 11,585,494 | A_Id: 11,585,697 | CreationDate: 2012-07-20T19:04:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 214 | AnswerCount: 2 | Available Count: 2
Topics: Database and SQL
Question: Database A resides on server server1, while database B resides on server server2. Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc). In such a case, is it possible to perform a join between a table ...
Answer: I don't know python, so I'm going to assume that when you do a query it comes back to python as an array of rows. You could query table A and after applying whatever filters you can, return that result to the application. Same to table B. Create a 3rd Array, loop through A, and if there is a joining row in B, add that ...
----
Title: Storing encrypted passwords storage for remote linux servers
Tags: python,mysql,ssh,password-protection
Q_Id: 11,587,845 | A_Id: 11,588,240 | CreationDate: 2012-07-20T22:40:00.000
Q_Score: 1 | Users Score: 3 | Score: 1.2 | is_accepted: true
ViewCount: 454 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I'm working on my Thesis where the Python application that connects to other linux servers over SSH is implemented. The question is about storing the passwords in the database (whatever kind, let's say MySQL for now). For sure keeping them not encrypted is a bad idea. But what can I do to feel comfortable with storing ...
Answer: In my opinion using key authentication is the best and safest for the SSH part, and it is easy to implement. Now to the meat of your question. You want to store these keys, or passwords, into a database and still be able to use them. This requires you to have a master password that can decrypt them from said...
----
Title: sqlite timezone now
Tags: python,sqlite,timezone,dst
Q_Id: 11,590,082 | A_Id: 21,014,456 | CreationDate: 2012-07-21T06:48:00.000
Q_Score: 1 | Users Score: 2 | Score: 0.197375 | is_accepted: false
ViewCount: 3,357 | AnswerCount: 2 | Available Count: 1
Topics: Database and SQL
Question: I am using python and sqlite3 to handle a website. I need all timezones to be in localtime, and I need daylight savings to be accounted for. The ideal method to do this would be to use sqlite to set a global datetime('now') to be +10 hours. If I can work out how to change sqlite's 'now' with a command, then I was going...
Answer: you can try this code, I am in Taiwan , so I add 8 hours: DateTime('now','+8 hours')
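Two variants of the modifier approach, run from Python's stdlib sqlite3 module: the answer's fixed offset, and the 'localtime' modifier, which the asker may actually want since a fixed "+10 hours" never adjusts for daylight saving while 'localtime' uses the host's timezone rules. This is a sketch of SQLite's built-in date functions, not the answerer's full setup.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Fixed offset, as in the answer (no DST handling):
(taipei,) = conn.execute("SELECT datetime('now', '+8 hours')").fetchone()

# The 'localtime' modifier converts using the host's timezone database,
# so daylight saving is accounted for automatically:
(local,) = conn.execute("SELECT datetime('now', 'localtime')").fetchone()

print(taipei, local)  # both formatted as YYYY-MM-DD HH:MM:SS
```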
----
Title: Python MySQL Number Matching
Tags: python,mysql
Q_Id: 11,597,835 | A_Id: 11,919,262 | CreationDate: 2012-07-22T04:49:00.000
Q_Score: 1 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 82 | AnswerCount: 1 | Available Count: 1
Topics: Database and SQL
Question: I am looking for a method of checking all the fields in a MySQL table. Let's say I have a MySQL table with the fields One Two Three Four Five and Big One. These are fields that contains numbers that people enter in, sort of like the Mega Millions. Users enter numbers and it inserts the numbers they picked from least to...
Answer: I would imagine MySQL has some sort of 'set' logic in it, but if it's lacking, I know Python has sets, so I'll use an example of those in my solution: Create a set with the numbers of the winning ticket: winners = set({11, 22, 33, 44, 55}) For each query, jam all it's numbers into a set too: current_user = set({$query...
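The set-based comparison the answer sketches is a one-liner in Python: intersect the winning numbers with each user's picks. The numbers below are made up for illustration.

```python
# Winning numbers and one user's picks, compared with set intersection.
winners = {11, 22, 33, 44, 55}
current_user = {4, 8, 15, 22, 33}

matched = winners & current_user      # numbers the user got right
print(sorted(matched), len(matched))  # [22, 33] 2
```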
----
Title: What security measures can I take to secure passwords that can't be hashed in a database?
Tags: python,mysql,hash,passwords
Q_Id: 11,603,136 | A_Id: 11,603,255 | CreationDate: 2012-07-22T18:59:00.000
Q_Score: 0 | Users Score: 2 | Score: 0.197375 | is_accepted: false
ViewCount: 105 | AnswerCount: 2 | Available Count: 1
Topics: Database and SQL
Question: I'm making a company back-end that should include a password-safe type feature. Obviously the passwords needs to be plain text so the users can read them, or at least "reversible" to plain text somehow, so I can't use hashes. Is there anything more secure I can do than just placing the passwords in plain-text into the ...
Answer: You can use MySQL's ENCODE(), DES_ENCRYPT() or AES_ENCRYPT() functions, and store the keys used to encrypt in a secure location.
----
Title: Most reliable way to generate a timestamp with Python
Tags: python,time
Q_Id: 11,642,105 | A_Id: 11,642,138 | CreationDate: 2012-07-25T03:04:00.000
Q_Score: 0 | Users Score: 0 | Score: 0 | is_accepted: false
ViewCount: 390 | AnswerCount: 3 | Available Count: 2
Topics: Database and SQL, Python Basics and Environment
Question: I am wondering what the most reliable way to generate a timestamp is using Python. I want this value to be put into a MySQL database, and for other programming languages and programs to be able to parse this information and use it. I imagine it is either datetime, or the time module, but I can't figure out which I'd us...
Answer: For a database, your best bet is to store it in the database-native format, assuming its precision matches your needs. For a SQL database, the DATETIME type is appropriate. EDIT: Or TIMESTAMP.
I am wondering what the most reliable way to generate a timestamp is using Python. I want this value to be put into a MySQL database, and for other programming languages and programs to be able to parse this information and use it. I imagine it is either datetime, or the time module, but I can't figure out which I'd us...
0
0
0
0
false
11,642,253
0
390
2
0
0
11,642,105
If it's just a simple timestamp that needs to be read by multiple programs, but which doesn't need to "mean" anything in SQL, and you don't care about different timezones for different users or anything like that, then seconds from the Unix epoch (start of 1970) is a simple, common standard, and is returned by time.tim...
1
0
1
Most reliable way to generate a timestamp with Python
3
python,time
0
2012-07-25T03:04:00.000
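The two options discussed in the answers above, epoch seconds and a DATETIME-friendly string, can be sketched with just the standard library:

```python
import time
from datetime import datetime, timezone

# Option 1: seconds since the Unix epoch -- language-neutral and easy to sort.
epoch_seconds = int(time.time())

# Option 2: a string MySQL's DATETIME type (and most languages) can parse.
utc_string = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(epoch_seconds, utc_string)
```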
I am building a system where entries are added to a SQL database sporadically throughout the day. I am trying to create a system which imports these entries to SOLR each time. I cant seem to find any infomation about adding individual records to SOLR from SQL. Can anyone point me in the right direction or give me a bi...
1
0
0
0
false
11,679,439
0
606
1
0
0
11,647,112
Besides DIH, you could setup a trigger in your db to fire Solr's REST service that would update changed docs for all inserted/updated/deleted documents. Also, you could setup a Filter (javax.servlet spec) in your application to intercept server requests and push them to Solr before they even reach database (it can even...
1
0
0
SOLR - Adding a single entry at a time
4
python,search,solr
0
2012-07-25T09:52:00.000
I am trying to connect to MySQL in Django. It asked me to install the module. The module prerequisites are "MySQL 3.23.32 or higher" etc. Do I really need to install MySQL, or can I just connect to a remote one?
0
4
0.664037
0
false
11,653,215
1
62
1
0
0
11,653,040
You need to install the client libraries. The Python module is a wrapper around the client libraries. You don't need to install the server.
1
0
0
Not able to install python mysql module
1
python,mysql,django
0
2012-07-25T15:19:00.000
when I try to install the pyodbc by using "python setup.py build install", it shows up with some errors like the following: gcc -pthread -fno-strict-aliasing -DNDEBUG -march=i586 -mtune=i686 -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -fwrapv -fPIC ...
1
2
1.2
0
true
11,691,895
0
4,644
1
1
0
11,691,039
I don't see a way around having the Python header files (which are part of python-devel package). They are required to compile the package. Maybe there was a pre-compiled egg for the 64bit version somewhere, and this is how it got installed. Why are you reluctant to install python-devel?
1
0
0
Error when installing pyodbc on opensuse
2
python,pyodbc,opensuse
0
2012-07-27T15:32:00.000
I'm writing an application that makes heavy use of geodjango (on PostGis) and spatial lookups. Distance queries on database side work great, but now I have to calculate distance between two points on python side of application (these points come from models obtained using separate queries). I can think of many ways th...
2
0
0
0
false
11,703,980
0
1,898
1
0
0
11,703,407
Use the appropriate data connection to execute the SQL function that you're already using, then retrieve that... Keeps everything consistent.
1
0
0
How to calculate distance between points on python side of my application in way that is consistent in what database does
3
python,django,gis,postgis,geodjango
0
2012-07-28T17:57:00.000
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like <92>,<89>, <94> etc. Any thoughts? I tried doing string.encode...
1
0
0
1
false
18,619,898
0
1,903
1
0
0
11,705,114
Are all these "junk" characters in the range <80> to <9F>? If so, it's highly likely that they're Microsoft "Smart Quotes" (Windows-125x encodings). Someone wrote up the text in Word or Outlook, and copy/pasted it into a Web application. Both Latin-1 and UTF-8 regard these characters as control characters, and the usua...
1
0
0
Junk characters (smart quotes, etc.) in output file
4
python,mysql,vim,encoding,smart-quotes
0
2012-07-28T22:20:00.000
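The fix suggested in the answer above -- treating those bytes as Windows-1252 rather than Latin-1 -- can be sketched like this (the sample bytes are fabricated):

```python
# Bytes such as 0x92/0x93/0x94 are Windows-1252 smart punctuation,
# not Latin-1 control characters.
raw = b"It\x92s a \x93smart\x94 quote"      # typical Word copy/paste residue

text = raw.decode("cp1252")                 # decode with the Windows codepage...
utf8_bytes = text.encode("utf-8")           # ...then re-encode as UTF-8 for the CSV

print(text)
```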
I have an items table that is related to an item_tiers table. The second table consists of inventory receipts for an item in the items table. There can be 0 or more records in the item_tiers table related to a single record in the items table. How can I, using query, get only records that have 1 or more records in item...
1
1
1.2
0
true
11,747,157
1
144
1
0
0
11,746,610
If there is a foreign key defined between tables, SA will figure out the join condition for you; no need for additional filters. There is, and I was really overthinking this. Thanks for the fast response. – Ominus
1
0
0
SQLAlchemy - Query show results where records exist in both table
2
python,sqlalchemy
0
2012-07-31T18:24:00.000
I currently run my own server "in the cloud" with PHP using mod_fastcgi and mod_vhost_alias. My mod_vhost_alias config uses a VirtualDocumentRoot of /var/www/%0/htdocs so that I can serve any domain that routes to my server's IP address out of a directory with that name. I'd like to begin writing and serving some Pyth...
3
0
0
0
false
36,646,397
1
6,266
1
0
0
11,796,126
I faced the same situation. Initially I searched Google, but later worked it out and fixed it: I'm using the EC2 service in AWS with Ubuntu, and I created aliases for PHP and Python individually, so now I can access both.
1
0
0
Can I run PHP and Python on the same Apache server using mod_vhost_alias and mod_wsgi?
2
php,python,apache,mod-vhost-alias
1
2012-08-03T12:53:00.000
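One way to serve a single Python site alongside the existing PHP VirtualDocumentRoot setup is to give that site its own explicit vhost handled by mod_wsgi; mod_vhost_alias's %0 expansion does not apply to WSGIScriptAlias, so the path must be spelled out per site. This is only a sketch, and the domain and paths are placeholders:

```apache
# PHP sites keep being served by mod_vhost_alias / mod_fastcgi as before.
VirtualDocumentRoot /var/www/%0/htdocs

# One Python site routed explicitly through mod_wsgi (hypothetical paths).
<VirtualHost *:80>
    ServerName pythonsite.example.com
    WSGIScriptAlias / /var/www/pythonsite.example.com/app.wsgi
    <Directory /var/www/pythonsite.example.com>
        Require all granted
    </Directory>
</VirtualHost>
```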
I have really big collection of files, and my task is to open a couple of random files from this collection treat their content as a sets of integers and make an intersection of it. This process is quite slow due to long times of reading files from disk into memory so I'm wondering whether this process of reading from...
4
3
1.2
0
true
11,805,422
0
213
1
0
0
11,805,309
One thing you could try is calculating intersections of the files on a chunk-by-chunk basis (i.e., read x-bytes into memory from each, calculate their intersections, and continue, finally calculating the intersection of all intersections). Or, you might consider using some "heavy-duty" libraries to help you. Consider l...
1
0
1
Is speed of file opening/reading language dependent?
2
python,file,file-io,io,filesystems
0
2012-08-04T01:52:00.000
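The chunk-by-chunk intersection idea from the accepted answer can be sketched like this; in-memory StringIO objects stand in for the on-disk files so the example is runnable:

```python
# Intersect large sets of integers without holding whole files in memory:
# read fixed-size chunks, accumulate each file's set, then intersect.
import io

def read_ints_chunked(fileobj, chunk_size=1 << 16):
    """Yield integers from a whitespace-separated stream, chunk by chunk."""
    leftover = ""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        chunk = leftover + chunk
        parts = chunk.split()
        # Keep a possibly cut-off trailing token for the next chunk.
        leftover = parts.pop() if chunk[-1].isdigit() else ""
        for p in parts:
            yield int(p)
    if leftover:
        yield int(leftover)

# Two in-memory "files" standing in for the large on-disk collection.
f1 = io.StringIO("1 2 3 4 5 6")
f2 = io.StringIO("4 5 6 7 8")

common = set(read_ints_chunked(f1)) & set(read_ints_chunked(f2))
print(sorted(common))   # [4, 5, 6]
```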
I have some SQL Server tables that contain Image data types. I want to make it somehow usable in PostgreSQL. I'm a python programmer, so I have a lot of learn about this topic. Help?
0
0
0
0
false
15,846,639
0
297
1
0
0
11,805,709
What you need to understand first is that the interfaces at the db level are likely to be different. Your best option is to write an abstraction layer for the blobs (and maybe publish it open source for the dbs you want to support). On the PostgreSQL side you need to figure out whether you want to go with bytea or lob...
1
0
0
How can I select and insert BLOB between different databases using python?
1
python,sql-server,postgresql,blob
0
2012-08-04T03:36:00.000
In the High-Replication Datastore (I'm using NDB), the consistency is eventual. In order to get a guaranteed complete set, ancestor queries can be used. Ancestor queries also provide a great way to get all the "children" of a particular ancestor with kindless queries. In short, being able to leverage the ancestor model...
5
9
1.2
0
true
11,855,209
1
1,422
1
1
0
11,854,137
The only way to change the ancestor of an entity is to delete the old one and create a new one with a new key. This must be done for all child (and grand child, etc) entities in the ancestor path. If this isn't possible, then your listed solution works. This is required because the ancestor path of an entity is part of...
1
0
0
How to change ancestor of an NDB record?
1
python,google-app-engine,google-cloud-datastore
0
2012-08-07T21:04:00.000
I am looking for a pure-python SQL library that would give access to both MySQL and PostgreSQL. The only requirement is to run on Python 2.5+ and be pure-python, so it can be included with the script and still run on most platforms (no-install). In fact I am looking for a simple solution that would allow me to write SQ...
3
1
0.066568
0
false
11,870,176
0
2,182
1
0
0
11,868,582
Use SQL-Alchemy. It will work with most database types, and certainly does work with postgres and MySQL.
1
0
0
Pure python SQL solution that works with PostgreSQL and MySQL?
3
python,mysql,postgresql
0
2012-08-08T16:03:00.000
I am relatively new to Django and one thing that has been on my mind is changing the database that will be used when running the project. By default, the DATABASES 'default' is used to run my test project. But in the future, I want to be able to define a 'production' DATABASES configuration and have it use that instead...
0
1
0.066568
0
false
11,878,547
1
304
1
0
0
11,878,454
You can just use a different settings.py in your production environment. Or - which is a bit cleaner - you might want to create a file settings_local.py next to settings.py where you define a couple of settings that are specific for the current machine (like DEBUG, DATABASES, MEDIA_ROOT etc.) and do a from settings_loc...
1
0
0
How do I make Django use a different database besides the 'default'?
3
python,database,django,configuration
0
2012-08-09T07:14:00.000
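The settings_local pattern mentioned in the answer above can be sketched like this (the module and setting names are illustrative; settings_local.py is kept out of version control):

```python
# Production defaults live in settings.py; a per-machine settings_local.py,
# if present, overrides them (e.g. DEBUG = True and a local sqlite DATABASES).
DEBUG = False
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",  # hypothetical prod DB
        "NAME": "production_db",
    }
}

try:
    from settings_local import *  # noqa: F401,F403 -- local overrides, if any
except ImportError:
    pass  # no local file: the production defaults above stay in effect
```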
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly. Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections?
8
1
0.039979
0
false
11,889,137
1
9,082
3
0
0
11,889,104
I think connection pooling is the best thing to do if this application is to serve multiple clients concurrently.
1
0
0
Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
5
python,postgresql,web-applications,flask,psycopg2
0
2012-08-09T17:48:00.000
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly. Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections?
8
3
0.119427
0
false
11,889,659
1
9,082
3
0
0
11,889,104
The answer depends on how many such requests will happen, and how many concurrently, in your web app. Connection pooling is usually a better idea if you expect your web app to be busy with 100s or even 1000s of users concurrently logged in. If you are only doing this as a side project and expect less than few hundred use...
1
0
0
Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
5
python,postgresql,web-applications,flask,psycopg2
0
2012-08-09T17:48:00.000
I'm building a web app in Python (using Flask). I do not intend to use SQLAlchemy or similar ORM system, rather I'm going to use Psycopg2 directly. Should I open a new database connection (and subsequently close it) for each new request? Or should I use something to pool these connections?
8
0
0
0
false
61,078,209
1
9,082
3
0
0
11,889,104
Pooling seems to be totally impossible in the context of Flask, FastAPI and everything relying on wsgi/asgi dedicated servers with multiple workers. The reason for this behaviour is simple: you have no control over the pooling and the master thread/process. A pooling instance is only usable for a single thread serving a set of cl...
1
0
0
Should PostgreSQL connections be pooled in a Python web app, or create a new connection per request?
5
python,postgresql,web-applications,flask,psycopg2
0
2012-08-09T17:48:00.000
I've searched and I can't seem to find anything. Here is the situation: t1 = table 1 t2 = table 2 v = view of table 1 and table 2 joined 1.) User 1 is logged into database. Does SELECT * FROM v; 2.) User 2 is logged into same database and does INSERT INTO t1 VALUES(1, 2, 3); 3.) User 1 does another SELECT * FROM v; U...
2
1
1.2
0
true
11,979,334
0
542
1
0
0
11,979,276
Instead of logging out and logging back in, user 2 could simply commit their transaction. MySQL InnoDB tables use transactions, requiring a BEGIN before one or more SQL statements, and either COMMIT or ROLLBACK afterwards, resulting in all your updates/inserts/deletes either happening or not. But there's a "feature" t...
1
0
0
MySQL view doesn't update when underlaying table changes across different users
2
python,mysql,mysql-python
0
2012-08-16T00:51:00.000
I am fairly new to databases and have just figured out how to use MongoDB in python2.7 on Ubuntu 12.04. An application I'm writing uses multiple python modules (imported into a main module) that connect to the database. Basically, each module starts by opening a connection to the DB, a connection which is then used for...
3
3
1.2
0
true
11,989,459
0
1,207
1
0
0
11,989,408
You can use one pymongo connection across different modules. You can open it in a separate module and import it into other modules on demand. After the program has finished working, you are able to close it. This will be the best option. About your other questions: you can leave it like this (all connections will be closed when script ...
1
0
0
When to disconnect from mongodb
1
python,mongodb,pymongo
0
2012-08-16T14:29:00.000
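The "open it in a separate module and import it on demand" idea looks roughly like this; sqlite3 stands in for pymongo here so the sketch is runnable without a MongoDB server:

```python
# One shared connection, created lazily in a single module and imported
# everywhere else; close it once at program shutdown.
import sqlite3

_conn = None

def get_connection():
    """Return the process-wide connection, opening it on first use."""
    global _conn
    if _conn is None:
        _conn = sqlite3.connect(":memory:")
    return _conn

def close_connection():
    """Close the shared connection at program shutdown."""
    global _conn
    if _conn is not None:
        _conn.close()
        _conn = None

# Every module doing `from dbmodule import get_connection` sees the same object.
assert get_connection() is get_connection()
```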
Could any one shed some light on how to migrate my MongoDB to PostgreSQL? What tools do I need, what about handling primary keys and foreign key relationships, etc? I had MongoDB set up with Django, but would like to convert it back to PostgreSQL.
2
1
0.099668
0
false
15,858,338
1
1,475
1
0
0
12,034,390
Whether the migration is easy or hard depends on a very large number of things including how many different versions of data structures you have to accommodate. In general you will find it a lot easier if you approach this in stages: Ensure that all the Mongo data is consistent in structure with your RDBMS model and...
1
0
0
From MongoDB to PostgreSQL - Django
2
python,django,mongodb,database-migration,django-postgresql
0
2012-08-20T08:25:00.000
I have two programs: the first only write to sqlite db, and the second only read. May I be sure that there are never be some errors? Or how to avoid from it (in python)?
3
1
0.099668
0
false
12,047,988
0
383
1
0
0
12,046,760
Generally, it is safe if there is only one program writing the sqlite db at one time. (If not, it will raise an exception like "database is locked" when two write operations want to write at the same time.) By the way, there is no way to guarantee the program will never have errors; use try ... except to handle exception ...
1
0
0
sqlite3: safe multitask read & write - how to?
2
python,concurrency,sqlite
0
2012-08-20T23:51:00.000
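A minimal sketch of defensive reading and writing against the same SQLite file, with a busy timeout and the exception handling the answer suggests:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.db")

# Writer program: wait up to 5 seconds if the database is locked.
writer = sqlite3.connect(path, timeout=5.0)
writer.execute("CREATE TABLE IF NOT EXISTS events (msg TEXT)")
writer.execute("INSERT INTO events VALUES ('hello')")
writer.commit()

# Reader program: same timeout, and catch "database is locked" just in case.
reader = sqlite3.connect(path, timeout=5.0)
try:
    rows = reader.execute("SELECT msg FROM events").fetchall()
except sqlite3.OperationalError:
    rows = []           # e.g. "database is locked" -- retry or report

print(rows)             # [('hello',)]
```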
So I'm using xlrd to pull data from an Excel sheet. I get it open and it pulls the data perfectly fine. My problem is the sheet updates automatically with data from another program. It is updating stock information using an rtd pull. Has anyone ever figured out any way to pull data from a sheet like this that is up-to-...
0
1
0.197375
0
false
12,049,844
0
364
1
0
0
12,049,067
Since all that xlrd can do is read a file, I'm assuming that the excel file is saved after each update. If so, use os.stat() on the file before reading it with xlrd and save the results (or at least those of os.stat().st_mtime). Then periodically use os.stat() again, and check if the file modification time (os.stat().s...
1
0
0
Pulling from an auto-updating Excel sheet
1
python,excel
0
2012-08-21T05:48:00.000
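The os.stat() polling approach from the answer, sketched with a throwaway file standing in for the workbook (size is compared alongside mtime as a cheap backstop on coarse-grained filesystems):

```python
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "prices.xlsx")  # stand-in for the sheet
with open(path, "wb") as f:
    f.write(b"version 1")

before = os.stat(path)                 # remember mtime/size after first read

time.sleep(0.05)                       # time passes...
with open(path, "wb") as f:            # ...and the RTD program re-saves the file
    f.write(b"version 2, longer")

after = os.stat(path)
changed = (after.st_mtime, after.st_size) != (before.st_mtime, before.st_size)
print(changed)                         # True -> time to re-read with xlrd
```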
I am currently sitting in front of a more specific problem which has to do with fail-over support / redundancy for a specific web site which will be hosted over @ WebFaction. Unfortunately replication at the DB level is not an option as I would have to install my own local PostgreSQL instances for every account and I a...
1
1
0.197375
0
false
12,934,130
1
345
1
0
0
12,070,031
I was looking for something similar. What I found is: 1) Try something like Xeround cloud DB - it's built on MySQL and is compatible but doesn't support savepoints. You have to disable this in (a custom) DB engine. The good thing is that they replicate at the DB level and provide automatic scalability and failover. Yo...
1
0
0
Django multi-db: Route all writes to multiple databases
1
python,django,redundancy,webfaction,django-orm
0
2012-08-22T09:24:00.000
I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is should I learn Django on an SQL database - either SQLite or MySQL; or should I learn Django on a NoSQL database such as Mongo? I've read all about both but there's a l...
3
1
0.049958
0
false
12,078,992
1
5,188
2
0
0
12,078,928
Postgres is a great database for Django in production. sqlite is amazing to develop with. You will be doing a lot of work to try to not use a RDBMS on your first Django site. One of the greatest strengths of Django is the smooth full-stack integration, great docs, contrib apps, app ecosystem. Choosing Mongo, you lose a...
1
0
0
First time Django database SQL or NoSQL?
4
python,sql,django,nosql
0
2012-08-22T18:07:00.000
I've been learning Python through Udacity, Code Academy and Google University. I'm now feeling confident enough to start learning Django. My question is should I learn Django on an SQL database - either SQLite or MySQL; or should I learn Django on a NoSQL database such as Mongo? I've read all about both but there's a l...
3
0
0
0
false
12,079,233
1
5,188
2
0
0
12,078,928
sqlite is the simplest to start with. If you already know SQL toss a coin to choose between MySQL and Postgres for your first project!
1
0
0
First time Django database SQL or NoSQL?
4
python,sql,django,nosql
0
2012-08-22T18:07:00.000
I need python and php support. I am currently using mongodb and it is great for my data (test results), but I need to store results of a different type of test which are over 32 MB and exceed mongo limit of 16 MB. Currently each test is a big python dictionary and I retrieve and represent them with php.
2
0
1.2
0
true
12,090,898
0
128
1
0
0
12,090,204
You can store up to 16MB of data per MongoDB BSON document (e.g. using the pymongo Binary datatype). For arbitrarily large data you want to use GridFS, which basically stores your data as chunks + extra metadata. When you use MongoDB with its replication features (replica sets) you will have kind of a distributed binar...
1
0
0
no-sql database for document sizes over 32 MB?
1
php,python,mongodb,size,limit
0
2012-08-23T11:07:00.000
I am using MySQLdb. I am developing a simple GUI application using Rpy2. What my program does? - User can input the static data and mathematical operations will be computed using those data. - Another thing where I am lost is, user will give the location of their database and the program will computer maths using the d...
0
0
0
0
false
12,091,455
0
187
1
0
0
12,091,413
When you establish the MySQL connection, use the remote machines IP address / hostname and corresponding credentials (username, password).
1
0
0
How to take extract data from the remote database in Python?
1
python,database
0
2012-08-23T12:17:00.000
I use the Python 2.7 pyodbc module with Google App Engine 1.7.1. I can use pyodbc with plain Python, but Google App Engine can't load the module; I get a "no module named pyodbc" error. How can I fix this error, or how can I use an MS-SQL database with my local Google App Engine?
3
0
0
0
false
12,116,542
1
2,793
1
1
0
12,108,816
You could, at least in theory, replicate your data from the MS-SQL to the Google Cloud SQL database. It is possible create triggers in the MS-SQL database so that every transaction is reflected on your App Engine application via a REST API you will have to build.
1
0
0
How can use Google App Engine with MS-SQL
2
python,sql-server,google-app-engine
0
2012-08-24T11:45:00.000
I am trying to copy and use the example 'User Authentication with PostgreSQL database' from the web.py cookbook. I can not figure out why I am getting the following errors. at /login 'ThreadedDict' object has no attribute 'login' at /login 'ThreadedDict' object has no attribute 'privilege' Here is the error output ...
0
0
0
0
false
12,137,859
1
2,157
1
0
0
12,120,539
Okay, I was able to figure out what I did wrong. Total newbie stuff and all part of the learning process. This code now works, well mostly. The part that I was stuck on is now working. See my comments in the code Thanks import web web.config.debug = False render = web.template.render('templates/', base='layout') url...
1
0
0
web.py User Authentication with PostgreSQL database example
1
python,session,login,web.py
0
2012-08-25T08:47:00.000
I am using PyMongo and gevent together, from a Django application. In production, it is hosted on Gunicorn. I am creating a single Connection object at startup of my application. I have some background task running continuously and performing a database operation every few seconds. The application also serves HTTP requ...
3
4
0.664037
0
false
12,163,744
1
862
1
0
0
12,157,350
I found what the problem is. By default PyMongo has no network timeout defined on the connections, so what was happening is that the connections in the pool got disconnected (because they aren't used for a while). Then when I try to reuse a connection and perform a "find", it takes a very long time for the connection b...
1
0
0
Deadlock with PyMongo and gevent
1
python,mongodb,pymongo,gevent,greenlets
0
2012-08-28T10:26:00.000
I'm trying to install mysql-python package on a machine with Centos 6.2 with Percona Server. However I'm running into EnvironmentError: mysql_config not found error. I've carefully searched information regarding this error but all I found is that one needs to add path to mysql_config binary to the PATH system variable...
1
0
1.2
0
true
12,202,936
0
1,025
1
0
0
12,202,303
mysql_config is a part of mysql-devel package.
1
0
0
mysql-python with Percona Server installation
1
python,percona
0
2012-08-30T17:27:00.000
I have a collection that is potentially going to be very large. Now I know MongoDB doesn't really have a problem with this, but I don't really know how to go about designing a schema that can handle a very large dataset comfortably. So I'm going to give an outline of the problem. We are collecting large amounts of dat...
2
1
1.2
0
true
12,216,914
0
239
1
0
0
12,210,307
It sounds like the larger set (A if I followed along correctly), could reasonably be put into its own database. I say database rather than collection, because now that 2.2 is released you would want to minimize lock contention between the busier database and the others, and to do that a separate database would be best...
1
0
0
Split large collection into smaller ones?
1
python,mongodb
0
2012-08-31T06:56:00.000
Are there any generally accepted practices to get around this? Specifically, for user-submitted images uploaded to a web service. My application is running in Python. Some hacked solutions that came to mind: Display the uploaded image from a local directory until the S3 image is ready, then "hand it off" and update th...
0
1
0.197375
0
false
12,242,133
1
323
1
0
1
12,241,945
I'd save time and not do anything. The wait times are pretty fast. If you wanted to stall the end-user, you could just show a 'success' page without the image. If the image isn't available, most regular users will just hit reload. If you really felt like you had to... I'd probably go with a javascript solution like...
1
0
0
What are some ways to work with Amazon S3 not offering read-after-write consistency in US Standard?
1
python,amazon-s3,amazon-web-services
0
2012-09-03T04:22:00.000
I'm creating a game mod for Counter-Strike in python, and it's basically all done. The only thing left is to code a REAL database, and I don't have any experience on sqlite, so I need quite a lot of help. I have a Player class with attribute self.steamid, which is unique for every Counter-Strike player (received from t...
1
0
1.2
0
true
12,268,131
0
356
1
0
0
12,266,016
You need an ORM. Either you roll your own (which I never suggest), or you use one that exists already. Probably the two most popular in Python are sqlalchemy, and the ORM bundled with Django.
1
0
0
Python sqlite3, saving instance of a class with an other instance as it's attribute?
2
python,database,sqlite,instance
0
2012-09-04T14:47:00.000
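For a sense of scale, here is roughly the bookkeeping an ORM automates for you, hand-rolled with the stdlib sqlite3 module (the Player fields besides steamid are hypothetical):

```python
import sqlite3

class Player:
    def __init__(self, steamid, name, kills=0):
        self.steamid, self.name, self.kills = steamid, name, kills

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (steamid TEXT PRIMARY KEY, name TEXT, kills INTEGER)")

def save(p):
    # Upsert the player row, keyed by the unique steamid.
    conn.execute(
        "INSERT OR REPLACE INTO players VALUES (?, ?, ?)",
        (p.steamid, p.name, p.kills),
    )
    conn.commit()

def load(steamid):
    row = conn.execute(
        "SELECT steamid, name, kills FROM players WHERE steamid = ?", (steamid,)
    ).fetchone()
    return Player(*row) if row else None

save(Player("STEAM_0:1:12345", "alice", kills=7))
p = load("STEAM_0:1:12345")
print(p.name, p.kills)   # alice 7
```

An ORM such as SQLAlchemy does the same mapping declaratively, and also handles nested instances (a class attribute that is itself a mapped object) via relationships.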
I have a few a few model classes such as a user class which is passed a dictionary, and wraps it providing various methods, some of which communicate with the database when a value needs to be changed. The dictionary itself is made from an sqlalchemy RowProxy, so all its keys are actually attribute names taken directly...
1
4
1.2
0
true
12,320,928
1
241
1
0
0
12,292,277
Unless you're being really careful, serializing the entire object into redis is going to cause problems. You're effectively treating it like a cache, so you have to be careful that those values are expired if the user changes something about themselves. You also have to make sure that all of the values are serializable...
1
0
0
How do I go about storing session objects?
2
python,session,sqlalchemy,session-state,pyramid
0
2012-09-06T02:42:00.000
Hi I intend to draw a chart with data in an xlsx file. In order to keep the style, I HAVE TO draw it within excel. I found a package named win32com, which can give a support to manipulate excel file with python on win32 platform, but I don't know where is the doc..... Another similar question is how to change the style...
0
1
0.049958
0
false
13,086,152
0
2,868
1
0
0
12,296,563
Documentation for win32com is next to non-existent as far I know. However, I use the following method to understand the commands. MS-Excel In Excel, record a macro of whatever action you intend to, say plotting a chart. Then go to the Macro menu and use View Macro to get the underlying commands. More often than not, t...
1
0
0
How to draw a chart with excel using python?
4
python,excel,win32com
0
2012-09-06T09:03:00.000
I use Python with SQLAlchemy for some relational tables. For the storage of some larger data-structures I use Cassandra. I'd prefer to use just one technology (cassandra) instead of two (cassandra and PostgreSQL). Is it possible to store the relational data in cassandra as well?
6
3
0.197375
0
false
12,302,894
0
8,349
1
0
0
12,297,847
playOrm supports JOIN on noSQL so that you CAN put relational data into noSQL, but it is currently in Java. We have been thinking of exposing an S-SQL language from a server for programs like yours. Would that be of interest to you? The S-SQL would look like this (if you don't use partitions, you don't even need anythi...
1
0
0
Can I use SQLAlchemy with Cassandra CQL?
3
python,sqlalchemy,cassandra
0
2012-09-06T10:17:00.000
I'm looking to expand my recommender system to include other features (dimensions). So far, I'm tracking how a user rates some document, and using that to do the recommendations. I'm interested in adding more features, such as user location, age, gender, and so on. So far, a few mysql tables have been enough to handle ...
1
0
0
0
false
12,369,285
0
289
2
0
0
12,355,416
An SQL database should work fine in your case. In fact, you can store all the training examples in just one database, each row representing a particular training example and each column representing a feature. You can add features by adding columns as and when required. In a relational database, you might come across acce...
1
0
0
Multi feature recommender system representation
2
python,numpy,scipy,data-mining
0
2012-09-10T16:05:00.000
I'm looking to expand my recommender system to include other features (dimensions). So far, I'm tracking how a user rates some document, and using that to do the recommendations. I'm interested in adding more features, such as user location, age, gender, and so on. So far, a few mysql tables have been enough to handle ...
1
0
0
0
false
24,491,488
0
289
2
0
0
12,355,416
I recommend using tensors, which are multidimensional arrays. You can use any data table or simply text files to store a tensor. Each line or row is a record / transaction with all its different features listed.
1
0
0
Multi feature recommender system representation
2
python,numpy,scipy,data-mining
0
2012-09-10T16:05:00.000
Trying to set up some basic data I/O scripts in python that read and write from a local sqlite db. I'd like to use the command line to verify that my scripts work as expected, but they don't pick up on any of the databases or tables I'm creating. My first script writes some data from a dict into the table, and the seco...
1
2
1.2
0
true
12,360,397
0
1,252
1
1
0
12,360,279
Are you putting the DB file name in the command? $ sqlite3 test.db
1
0
0
sqlite3 command line tools don't work in Ubuntu
1
python,linux,sqlite,ubuntu
0
2012-09-10T22:23:00.000
Okay, I kinda asked this question already, but noticed that I might not have been as clear as I could have been, and might have made some errors myself. I have also noticed many people having the same or similar problems with sqlite3 in Python. So I thought I would ask this as clearly as I could, so it could possibly...
0
0
0
0
false
12,420,541
0
3,032
1
0
0
12,420,338
As I understand it, you would like to install Python from sources. To make the sqlite module available you have to install the sqlite package and its dev files (for example sqlite-devel for CentOS). That's it. You have to re-configure your sources after installing the required packages. Btw you will face the same problem with s...
1
0
1
What does Python need to install sqlite3 module?
2
python,sqlite
0
2012-09-14T07:59:00.000
For my database project, I am using SQL Alchemy. I have a unit test that adds the object to the table, finds it, updates it, and deletes it. After it goes through that, I assumed I would call the session.rollback method in order to revert the database changes. It does not work because my sequences are not reverted. My ...
4
-3
1.2
0
true
12,443,800
0
3,017
1
0
0
12,440,044
Postgres does not rollback advances in a sequence even if the sequence is used in a transaction which is rolled back. (To see why, consider what should happen if, before one transaction is rolled back, another using the same sequence is committed.) But in any case, an in-memory database (SQLite makes this easy) is the...
1
0
0
How to rollback the database in SQL Alchemy?
1
python,unit-testing,sqlalchemy,rollback
0
2012-09-15T18:39:00.000
I am provided with text files containing data that I need to load into a postgres database. The files are structured in records (one per line) with fields separated by a tilde (~). Unfortunately it happens that every now and then a field content will include a tilde. As the files are not tidy CSV, and the tilde's not e...
1
0
0
0
false
12,553,211
1
111
1
0
0
12,553,197
If you know what each field is supposed to be, perhaps you could write a regular expression which would match that field type only (ignoring tildes) and capture the match, then replace the original string in the file?
1
0
0
Messed up records - separator inside field content
2
python,perl,language-agnostic
0
2012-09-23T14:36:00.000
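When the field count per record is fixed and only the last field is free text, limiting the number of splits preserves embedded tildes without any regex (the sample record is made up):

```python
# 4 fields per record, free text only in the last one: at most 3 splits.
line = "1042~2012-09-23~J. Smith~note: approx ~5kg over limit"

record = line.split("~", 3)
print(record)
# ['1042', '2012-09-23', 'J. Smith', 'note: approx ~5kg over limit']
```

The regex approach from the answer generalises this when the free-text field is not last: anchor on the fields whose shape you know (ids, dates) and capture the rest.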
Is there any feasible way to upload a file which is generated dynamically to amazon s3 directly without first create a local file and then upload to the s3 server? I use python. Thanks
38
0
0
0
false
56,126,467
1
52,339
2
0
1
12,570,465
Given that encryption at rest is a much-desired data standard now, smart_open does not support this, as far as I know.
1
0
0
How to upload a file to S3 without creating a temporary local file
12
python,amazon-s3,amazon
0
2012-09-24T18:09:00.000
Is there any feasible way to upload a file which is generated dynamically to amazon s3 directly without first create a local file and then upload to the s3 server? I use python. Thanks
38
2
0.033321
0
false
12,570,568
1
52,339
2
0
1
12,570,465
I assume you're using boto. boto's Bucket.set_contents_from_file() will accept a StringIO object, and any code you have written to write data to a file should be easily adaptable to write to a StringIO object. Or if you generate a string, you can use set_contents_from_string().
1
0
0
How to upload a file to S3 without creating a temporary local file
12
python,amazon-s3,amazon
0
2012-09-24T18:09:00.000
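A minimal sketch of the StringIO/BytesIO approach, assuming boto 2. The bucket and key names are illustrative, and the boto import is kept inside the upload helper so the in-memory part runs without credentials:

```python
import io

def build_report():
    # Generate the payload entirely in memory -- no temporary file.
    buf = io.BytesIO()
    for i in range(3):
        buf.write(("line %d\n" % i).encode("utf-8"))
    buf.seek(0)
    return buf

def upload(bucket_name, key_name, fileobj):
    # Requires boto 2 and AWS credentials; names are illustrative.
    import boto
    from boto.s3.key import Key
    bucket = boto.connect_s3().get_bucket(bucket_name)
    key = Key(bucket, key_name)
    key.set_contents_from_file(fileobj)

payload = build_report()
print(payload.getvalue().decode("utf-8"), end="")
# upload("my-bucket", "reports/latest.txt", payload)  # needs credentials
```

For a generated string rather than a file-like object, `Key.set_contents_from_string()` skips the buffer entirely.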
I have the below setup: a 2-node Hadoop/HBase cluster with a Thrift server running on HBase. HBase has a table with 10 million rows. I need to run aggregate queries like sum() on the HBase table to show on the web (for charting purposes). For now I am using Python (Thrift client) to get the dataset and display it. I am looking f...
0
0
0
0
false
21,502,085
1
1,019
1
0
0
12,585,286
Phoenix is a good solution when you need low-latency results from HBase tables, better than Hive. It handles range scans better than raw HBase scanners because it uses secondary indexes and skip scans. In your case, though, you use Python, and the Phoenix API only offers JDBC connectors. Otherwise, try HBase coprocessors, which implement the SUM, MAX, COUNT, and AVG functions. ...
1
0
0
Hadoop Hbase query
3
java,python,hadoop,hbase,thrift
0
2012-09-25T14:35:00.000
I am finding Neo4j slow to add nodes and relationships/arcs/edges when using the REST API via py2neo for Python. I understand that this is due to each REST API call executing as a single self-contained transaction. Specifically, adding a few hundred pairs of nodes with relationships between them takes a number of secon...
18
2
0.07983
0
false
31,026,259
0
12,651
1
0
1
12,643,662
Well, I myself needed massive performance from Neo4j. I ended up doing the following things to improve graph performance. Ditched py2neo, since there were a lot of issues with it. Besides, it is very convenient to use the REST endpoint provided by Neo4j; just make sure to use request sessions. Use raw Cypher queries for bulk...
1
0
0
Fastest way to perform bulk add/insert in Neo4j with Python?
5
python,neo4j,py2neo
0
2012-09-28T16:15:00.000
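The "raw Cypher in batches" idea can be sketched by building one payload for Neo4j's transactional HTTP endpoint (Neo4j 2.x era, matching the question's timeframe; the `{param}` syntax and endpoint path are assumptions based on that API). The payload construction below is plain stdlib and runs as-is; the POST itself is shown commented:

```python
import json

def batch_payload(pairs):
    # One parameterised Cypher statement per (a, b) pair; sending them in
    # a single request amortises the per-call transaction overhead that
    # makes node-at-a-time REST inserts slow.
    stmt = ("MERGE (a:Node {name: {a}}) "
            "MERGE (b:Node {name: {b}}) "
            "MERGE (a)-[:LINKED]->(b)")
    return {"statements": [{"statement": stmt,
                            "parameters": {"a": a, "b": b}}
                           for a, b in pairs]}

payload = batch_payload([("x", "y"), ("y", "z")])
print(json.dumps(payload)[:40])

# With the `requests` library and a local Neo4j 2.x server:
# session = requests.Session()   # reuse the TCP connection across batches
# session.post("http://localhost:7474/db/data/transaction/commit",
#              json=payload)
```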
I see plenty of examples of importing a CSV into a PostgreSQL db, but what I need is an efficient way to import 500,000 CSV's into a single PostgreSQL db. Each CSV is a bit over 500KB (so grand total of approx 272GB of data). The CSV's are identically formatted and there are no duplicate records (the data was generated...
9
0
0
0
false
12,646,923
0
10,104
1
0
0
12,646,305
Nice chunk of data you have there. I'm not 100% sure about Postgres, but at least MySQL provides SQL commands to feed a CSV directly into a table. This bypasses any insert checks and so on, and is therefore more than an order of magnitude faster than ordinary insert operations. So probably the fastest way to go i...
1
0
0
Efficient way to import a lot of csv files into PostgreSQL db
3
python,csv,import,postgresql-9.1
0
2012-09-28T19:38:00.000
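PostgreSQL's equivalent of the bulk-load command mentioned above is `COPY`. A sketch of looping the 500,000 files through `COPY ... FROM STDIN` via psycopg2 (table name, connection string, and glob pattern are illustrative; the psycopg2 import is kept inside the loader so the SQL-building part runs standalone):

```python
import glob

def copy_sql(table):
    # COPY ... FROM STDIN streams the file through the client connection,
    # bypassing per-row INSERT overhead.
    return "COPY %s FROM STDIN WITH (FORMAT csv)" % table

def load_all(conninfo, table, pattern):
    # Requires psycopg2; connection details are illustrative.
    import psycopg2
    conn = psycopg2.connect(conninfo)
    cur = conn.cursor()
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            cur.copy_expert(copy_sql(table), f)
    conn.commit()
    conn.close()

print(copy_sql("measurements"))
```

Dropping indexes before the load and recreating them afterwards usually speeds this up further.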
I have downloaded mysql-connector-python-1.0.7-py2.7.msi from the MySQL site and tried to install it, but it gives the error "Python v2.7 not found. We only support Microsoft Windows Installer (MSI) from python.org." I am using official Python v2.7.3 on Windows XP SP3 with MySQL Essential 5.1.66. Need help ???
12
10
1
0
false
13,899,478
0
19,218
2
0
0
12,702,146
I met a similar problem under Windows 7 when installing mysql-connector-python-1.0.7-py2.7.msi and mysql-connector-python-1.0.7-py3.2.msi. After changing from "Install only for yourself" to "Install for all users" when installing Python for Windows, the "python 3.2 not found" problem disappeared and mysql-connector-pyt...
1
0
0
mysql for python 2. 7 says Python v2.7 not found
8
python,mysql,python-2.7,mysql-connector-python
0
2012-10-03T04:57:00.000
I have downloaded mysql-connector-python-1.0.7-py2.7.msi from the MySQL site and tried to install it, but it gives the error "Python v2.7 not found. We only support Microsoft Windows Installer (MSI) from python.org." I am using official Python v2.7.3 on Windows XP SP3 with MySQL Essential 5.1.66. Need help ???
12
0
0
0
false
19,051,115
0
19,218
2
0
0
12,702,146
I solved this problem by using 32-bit Python.
1
0
0
mysql for python 2. 7 says Python v2.7 not found
8
python,mysql,python-2.7,mysql-connector-python
0
2012-10-03T04:57:00.000
I have a server which files get uploaded to, and I want to be able to forward these on to S3 using boto; I have to do some processing on the data, basically as it gets uploaded to S3. The problem I have is that, the way they get uploaded, I need to provide a writable stream that incoming data gets written to, and to upload to boto ...
4
3
1.2
0
true
12,716,129
1
636
1
1
1
12,714,965
boto is a Python library with a blocking API. This means you'll have to use threads to use it while maintaining the concurrent operation that Twisted provides you with (just as you would have to use threads to have any concurrency when using boto ''without'' Twisted; i.e., Twisted does not help make boto non-blocking o...
1
0
0
Boto reverse the stream
2
python,stream,twisted,boto
0
2012-10-03T18:54:00.000
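The wrap-the-blocking-call-in-a-thread idea can be sketched with the standard library; Twisted's `deferToThread` follows the same shape, as noted in the comment at the end (the `blocking_upload` function is a stand-in, not a real boto call):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def blocking_upload(chunk):
    # Stand-in for a blocking boto call such as set_contents_from_string.
    time.sleep(0.01)
    return len(chunk)

# Run the blocking calls on worker threads so the caller stays responsive.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(blocking_upload, c) for c in (b"ab", b"cdef")]
    results = [f.result() for f in futures]

print(results)  # -> [2, 4]

# Under Twisted the equivalent per-call shape is:
#   from twisted.internet.threads import deferToThread
#   d = deferToThread(blocking_upload, chunk)  # fires a Deferred on completion
```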
I want to select data from multiple tables, so I just want to know whether I can use simple SQL queries for that. If yes, then please give me an example (meaning where to use these queries and how). Thanks.
0
1
0.099668
0
false
12,740,533
1
77
1
0
0
12,740,424
Try this. https://docs.djangoproject.com/en/dev/topics/db/sql/
1
0
0
Can I used simple sql commands in django
2
python,sql,django,django-queryset
0
2012-10-05T06:05:00.000
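The kind of multi-table SQL you would hand to Django's `Model.objects.raw()` or `connection.cursor()` can be demonstrated with a throwaway sqlite3 database (table and column names here are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT,
                       author_id INTEGER REFERENCES author(id));
    INSERT INTO author VALUES (1, 'Woolf');
    INSERT INTO book VALUES (1, 'Orlando', 1);
""")

# A plain JOIN pulls columns from both tables in one query -- the same
# statement works inside Django's raw() / cursor() APIs.
rows = conn.execute("""
    SELECT author.name, book.title
    FROM book JOIN author ON book.author_id = author.id
""").fetchall()
print(rows)
conn.close()
```

In Django itself, prefer the ORM's `select_related()`/`filter()` across foreign keys where possible and drop to raw SQL only when the query cannot be expressed that way.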
Background: I'm working on dataview, and many of the reports are generated by very long running queries. I've written a small query caching daemon in python that accepts a query, spawns a thread to run it, and stores the result when done as a pickled string. The results are generally various aggregations broken down by...
0
2
1.2
0
true
12,743,439
0
1,016
1
0
0
12,743,436
There is a property of the connection object called thread_id, which returns an id to be passed to KILL. MySQL has a thread for each connection, not for each cursor, so you are not killing queries but killing connections. To kill an individual query you must run each query in its own connection, and then k...
1
0
0
Get process id (of query/thread) of most recently run query in mysql using python mysqldb
1
python,mysql
0
2012-10-05T09:32:00.000
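A sketch of the two-connection pattern the answer describes: the victim query runs on its own connection, and a separate control connection issues `KILL` against that connection's `thread_id()`. Helper names are illustrative; only the SQL-building part runs without a MySQL server:

```python
def kill_sql(thread_id):
    # KILL terminates the *connection* that owns the thread id, which is
    # why each cancellable query needs its own connection.
    return "KILL %d" % thread_id

def cancel(control_conn, victim_conn):
    # Requires MySQLdb; thread_id() identifies the victim's connection.
    control_conn.cursor().execute(kill_sql(victim_conn.thread_id()))

print(kill_sql(1234))  # -> KILL 1234
```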
I'm reading conflicting reports about using PostgreSQL on Amazon's Elastic Beanstalk for python (Django). Some sources say it isn't possible: (http://www.forbes.com/sites/netapp/2012/08/20/amazon-cloud-elastic-beanstalk-paas-python/). I've been through a dummy app setup, and it does seem that MySQL is the only optio...
5
5
0.462117
0
false
21,391,684
1
2,422
1
0
0
12,850,550
PostgreSQL is now selectable from the AWS RDS configurations. Validated through Elastic Beanstalk application setup, 2014-01-27.
1
0
0
PostgreSQL for Django on Elastic Beanstalk
2
python,django,postgresql,amazon-elastic-beanstalk
0
2012-10-12T00:21:00.000
I have 10000 files in an S3 bucket. When I list all the files it takes 10 minutes. I want to implement a search module using boto (the Python interface to AWS) which searches files based on user input. Is there a way I can search for specific files in less time?
2
3
0.291313
0
false
12,907,767
1
5,534
1
0
1
12,904,326
There are two ways to implement the search... Case 1. As suggested by John, you can specify a prefix for the S3 key in your list method. That will return the S3 key files which start with the given prefix. Case 2. If you want to search for S3 keys which end with a specific suffix, or we can say extens...
1
0
0
Search files(key) in s3 bucket takes longer time
2
python,amazon-s3,boto
0
2012-10-15T21:29:00.000
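The two cases can be sketched as pure-Python filtering over key names. With boto the prefix case is pushed server-side via `bucket.list(prefix=...)`; the suffix case has no server-side equivalent, so the client must filter the listing itself:

```python
def matching_keys(key_names, prefix="", suffix=""):
    # Prefix matching maps to bucket.list(prefix=prefix) in boto, which
    # S3 evaluates server-side. Suffix matching must be done client-side.
    return [k for k in key_names
            if k.startswith(prefix) and k.endswith(suffix)]

keys = ["logs/2012/a.csv", "logs/2012/b.txt", "img/c.csv"]
print(matching_keys(keys, prefix="logs/", suffix=".csv"))
# -> ['logs/2012/a.csv']
```

For 10000 keys, a server-side prefix narrows the listing dramatically before any client-side suffix filter is applied.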
A user accesses his contacts on his mobile device. I want to send back to the server all the phone numbers (say 250), and then query for any User entities that have matching phone numbers. A user has a phone field which is indexed. So I do User.query(User.phone.IN(phone_list)), but I just looked at AppStats, and is th...
3
0
0
0
false
12,980,347
1
176
1
1
0
12,976,652
I misunderstood part of your problem: I thought you were issuing a query that was giving you 250 entities. I see what the problem is now: you're issuing an IN query with a list of 250 phone numbers. Behind the scenes, the datastore is actually doing 250 individual queries, which is why you're getting 250 read ops. I ca...
1
0
0
Efficient way to do large IN query in Google App Engine?
3
python,google-app-engine
0
2012-10-19T14:43:00.000