Dataset column summary (name: dtype, observed min to max):

Question: string, length 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: string, length 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: string, length 15 to 148
AnswerCount: int64, 1 to 32
Tags: string, length 6 to 90
Other: int64, 0 to 1
CreationDate: string, length 23 to 23
Suppose there was a database table with one column, and it's a PK. To make things more specific, this is a Django project and the database is MySQL. If I needed an additional column with all unique values, should I create a new UniqueField with unique integers, or just write a hash-like function to convert the exis...
0
1
1.2
0
true
17,393,525
1
50
1
0
0
17,393,291
Having a string-valued PK should not be a problem in any modern database system. A PK is automatically indexed, so when you perform a look-up with a condition like table1.pk = 'long-string-key', it won't be a string comparison but an index look-up. So it's ok to have string-valued PK, regardless of the length of the ke...
1
0
0
Database design, adding an extra column versus converting existing column with a function
1
python,mysql,django
0
2013-06-30T18:03:00.000
I am working on developing a Django application with Cassandra as the back-end database. While Django supports ORM features for SQL, I wonder if there is anything similar for Cassandra. What would be the best approach to load the schema into the Cassandra server and perform CRUD operations? P.S. I am a complete beginner ...
1
3
1.2
0
true
17,403,637
1
410
1
0
0
17,403,346
There's an external backend for Cassandra, but it has some issues with the authentication middleware, which doesn't handle users correctly in the admin. If you use a non-relational database, you lose a lot of goodies that django has. You could try using Postgres' nosql extension for the parts of your data that you want...
1
0
0
Cassandra-Django python application approach
2
python,django,orm,cassandra
0
2013-07-01T11:25:00.000
I have a state column in my table which has the following possible values: discharged, in process and None. Can I fetch all the records in the following order: in process, discharged followed by None?
1
2
0.379949
0
false
17,408,674
0
1,384
1
0
0
17,408,276
If you've declared that column as an enum type (as you should for cases such as these where the values are drawn from a small, fixed set of strings), then using ORDER BY on that column will order results according to the order in which the values of the enum were declared. So the datatype for that column should be ENU...
1
0
0
Sqlalchemy order_by custom ordering?
1
python,sql,sqlalchemy
0
2013-07-01T15:33:00.000
I'm trying to work with oursql in python 3.2, and it's really not going so well. Facts: I downloaded oursql binary and ran the installer. I have MySQL 5.1 installed. I separately downloaded the libmysql dll and placed it in the System32 directory. I downloaded cython for version 3.1 because there wasn't one for 2.7 o...
0
0
0
0
false
17,420,506
0
196
1
0
0
17,420,396
OK, I moved libmysql.dll to the same directory as python.exe, instead of in the DLL's folder, and it seems like it works.
1
0
0
Error on installing oursql for Python 3.1
1
python,mysql,python-3.x,oursql
0
2013-07-02T08:00:00.000
I am using Python 2.7 and MySQL. I am using multi-threading and giving connections to different threads by using PooledDB. I give db connections to different threads via pool.dedicated_connection(). Now if a thread takes a connection from the pool and dies for some reason without closing it (i.e. without returning it to the p...
0
2
1.2
0
true
17,423,440
0
178
1
0
0
17,423,384
No, it does not. You have to tell the server on the other side that the connection is closed, because it can't tell the difference between "going away" and "I haven't sent my next query yet" without an explicit signal from you. The connection can time out, of course, but it won't be closed or cleaned up without instru...
1
0
0
Does database connection return to pool if a thread holding it dies?
1
python,mysql,multithreading,python-2.7
0
2013-07-02T10:37:00.000
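The answer above says a pooled connection is not returned automatically when a thread dies; the caller must signal it explicitly. One common way to guarantee that signal is a try/finally (or context manager) around the work. The sketch below uses a stand-in pool class, not the real PooledDB API, purely to illustrate the pattern:

```python
from contextlib import contextmanager

class DummyPool:
    """Stand-in for a connection pool (not the real PooledDB API)."""
    def __init__(self):
        self.available = ["conn-a", "conn-b"]  # placeholder "connections"
    def connection(self):
        return self.available.pop()
    def give_back(self, conn):
        self.available.append(conn)

@contextmanager
def pooled(pool):
    conn = pool.connection()
    try:
        yield conn
    finally:
        pool.give_back(conn)  # runs even if the thread's work raises

pool = DummyPool()
with pooled(pool) as conn:
    pass  # do work; the connection is returned afterwards, success or not
```

With this shape, even a worker that raises mid-task hands its connection back before the thread unwinds.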
I have an existing MySQL database that I set up on PMA; it has FKs that reference columns that are not primary keys. Now I am trying to move the database to Django and am having trouble because when I try to set up the Foreign Keys in Django it automatically references the Primary Key of the table that I am attempting t...
0
0
0
0
false
17,491,830
1
62
1
0
0
17,491,720
You can use the to_field attribute of a ForeignKey. Django should detect this automatically if you use ./manage.py inspectdb, though.
1
0
0
Moving database from PMA to Django
1
python,mysql,django,phpmyadmin
0
2013-07-05T14:54:00.000
What is the easiest way to export the results of a SQL Server query to a CSV file? I have read that the pymssql module is the preferred way, and I'm guessing I'll need csv as well.
0
0
0
0
false
17,495,797
0
1,521
1
0
0
17,495,581
Do you need to do this programmatically or is this a one-off export? If the latter, the easiest way by far is to use the SSMS export wizard. In SSMS, select the database, right-click and select Tasks->Export Data.
1
0
0
Export SQL Server Query Results to CSV using pymssql
1
python,sql-server,csv,pymssql
0
2013-07-05T19:20:00.000
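The answer above recommends the SSMS export wizard for a one-off; for the programmatic route the asker mentions (pymssql plus csv), a minimal sketch might look like this. The connection parameters, table, and query below are placeholders; only the csv-writing helper is self-contained:

```python
import csv

def write_rows_to_csv(path, headers, rows):
    """Write query results (an iterable of row tuples) to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headers)
        writer.writerows(rows)

# Hypothetical pymssql usage (server, database, and query are placeholders):
# import pymssql
# conn = pymssql.connect(server="myserver", database="mydb")
# cur = conn.cursor()
# cur.execute("SELECT id, name FROM mytable")
# write_rows_to_csv("out.csv", [d[0] for d in cur.description], cur)
```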
I'm using openpyxl to edit an Excel file that contains formulas in certain cells. When I populate the cells from a text file, I expect the formulas to work and give me my desired output. But what I observe is that the formulas get removed and the cells are left blank.
0
1
0.099668
0
false
24,183,661
0
1,400
1
0
0
17,522,521
I had the same problem when saving the file with openpyxl: formulas removed. But I noticed that some intermediate formulas were still there. After some tests, it appears that, in my case, all formulas which display a blank result (nothing) are cleaned when the save occurs, unlike the formulas with an output i...
1
0
0
Openpyxl: Formulas getting removed when saving file
2
python-2.7,openpyxl
0
2013-07-08T08:55:00.000
I have two tables with a common field. I want to find all the items (user_ids) which are present in the first table but not in the second. Table1(user_id,...) Table2(userid,...). user_id and userid in the first and second tables are the same.
1
1
1.2
0
true
17,542,024
0
225
1
0
0
17,541,225
session.query(Table1.user_id).outerjoin(Table2).filter(Table2.user_id == None)
1
0
0
find missing values between two tables in sqlalchemy
2
python,sqlalchemy
0
2013-07-09T06:15:00.000
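The one-line answer above is SQLAlchemy; the SQL it corresponds to is a LEFT OUTER JOIN with an IS NULL filter (an anti-join). A dependency-free sketch of the same idea using the stdlib sqlite3 module, with illustrative table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (user_id INTEGER);
    CREATE TABLE table2 (userid INTEGER);
    INSERT INTO table1 VALUES (1), (2), (3);
    INSERT INTO table2 VALUES (2);
""")

# user_ids present in table1 but missing from table2
missing = sorted(row[0] for row in conn.execute("""
    SELECT t1.user_id
    FROM table1 t1
    LEFT OUTER JOIN table2 t2 ON t2.userid = t1.user_id
    WHERE t2.userid IS NULL
"""))
print(missing)  # [1, 3]
```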
I want to build Python 3.3.2 from scratch on my SLE 11 (OpenSUSE). During the compilation of Python I got the message that the modules _bz2, _sqlite and _ssl have not been compiled. I looked for solutions with various search engines. It is often said that you have to install the -dev packages with your package manageme...
1
0
0
0
false
17,979,292
0
444
1
0
0
17,546,628
I don't use that distro, but Linux Mint (it's based on Ubuntu). In my case before the compilation of Python 3.3.2 I've installed the necessary -dev libraries: $ sudo apt-get install libssl-dev $ sudo apt-get install libbz2-dev ... Then I've compiled and installed Python and those imports work fine. Hope you find it ...
1
0
0
How to build python 3.3.2 with _bz2, _sqlite and _ssl from source
2
python-3.x,sqlite,ssl,compilation,non-admin
0
2013-07-09T11:05:00.000
I've got a fairly simple Python program as outlined below: It has 2 threads plus the main thread. One of the threads collects some data and puts it on a Queue. The second thread takes stuff off the queue and logs it. Right now it's just printing out the stuff from the queue, but I'm working on adding it to a local MyS...
1
0
0
0
false
17,578,684
0
84
1
0
0
17,578,630
How should I deal with the database connection? Create it in main, then pass it to the logging thread, or create it directly in the logging thread? I would perhaps configure your logging component with the class that creates the connection and let your logging component request it. This is called dependency inject...
1
0
1
Architechture of multi-threaded program using database
1
python,mysql,database,multithreading
0
2013-07-10T18:49:00.000
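The answer above suggests dependency injection: rather than creating the connection in main and passing it around, hand the logging component a factory and let it create its own connection on its own thread. A minimal sketch of that shape, with a stand-in factory instead of a real MySQL connection:

```python
class QueueLogger:
    """Logging component that asks an injected factory for its own connection."""
    def __init__(self, connection_factory):
        self._factory = connection_factory
        self._conn = None

    def start(self):
        # The thread that runs the logger creates its own connection lazily,
        # so no connection object crosses thread boundaries.
        self._conn = self._factory()

    def log(self, item):
        return f"{self._conn}: {item}"

# Hypothetical factory; in the real program this might wrap MySQLdb.connect(...).
make_conn = lambda: "db-connection"
logger = QueueLogger(make_conn)
logger.start()
```

The benefit is that the logging thread owns its connection's lifecycle, while main only decides how connections get made.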
How do you install the pyodbc package on Linux (RedHat RHEL server) onto a Zope/Plone bundled Python path instead of the global Python path? yum install pyodbc and python setup.py install both put pyodbc in the sys python path. I read articles about putting pyodbc in python2.4/site-packages/. I tried that, but it didn...
1
1
0.197375
0
false
17,794,367
0
265
1
0
0
17,662,330
Add the package to the eggs section in buildout and then re-run buildout. There might be additional server requirements to install pyodbc.
1
0
1
pyodbc Installation Issue on Plone Python Path
1
python,plone,zope,pyodbc
0
2013-07-15T19:32:00.000
I have a client-server interface realized using the requests module as client and Tornado as server. I use this to query a database, where some data items may not be available. For example, the author in a query might not be there, or the book title. Is there a recommended way to let my client know what was missing? Like...
0
1
1.2
0
true
17,681,053
0
237
1
0
0
17,678,927
Since HTTP 404 responses can have a response body, I would put the detailed error message in the body itself. You can, for example, send the string Author Not Found in the response body. You could also send the response string in the format that your API already uses, e.g. XML, JSON, etc., so that every response from t...
1
0
0
Can I have more semantic meaning in an http 404 error?
2
python,http,tornado,http-status-codes
0
2013-07-16T14:13:00.000
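The answer above suggests putting the detail in the 404 response body, ideally in the format the API already uses. A tiny sketch of building such a machine-readable body with stdlib json; the field names here are illustrative, and the Tornado side (set_status/write) is omitted:

```python
import json

def not_found_body(missing_fields):
    """Build a machine-readable 404 body listing what was missing."""
    return json.dumps({"error": "not_found", "missing": sorted(missing_fields)})

body = not_found_body({"author", "title"})
print(body)
```

The client can then branch on the "missing" list instead of parsing a human-readable message.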
My teammate and I wrote a Python script running on the same server where the database is. Now we want to know if the performance changes when we write the same code as a stored procedure in our Postgres database. What is the difference, or is it the same? Thanks.
2
2
1.2
0
true
17,686,435
0
507
1
0
0
17,682,444
There can be differences: PostgreSQL stored procedures (functions) use in-process execution, so there is no interprocess communication. If you process more data, a stored procedure (in the same language) can be faster than a server-side application. But the speedup depends on the size of the processed data.
1
0
0
What is the difference between using a python script running on server and a stored procedure?
1
python,database,performance,postgresql,plpgsql
0
2013-07-16T16:48:00.000
I have been using the datastore with ndb for a multiplayer app. This appears to be using a lot of reads/writes and will undoubtedly go over quota and cost a substantial amount. I was thinking of changing all the game data to be stored only in memcache. I understand that data stored here can be lost at any time, but as ...
3
1
0.099668
0
false
17,816,617
1
341
1
1
0
17,702,165
As a commenter on another answer noted, there are now two memcache offerings: shared and dedicated. Shared is the original service, and is still free. Dedicated is in preview, and presently costs $.12/GB hour. Dedicated memcache allows you to have a certain amount of space set aside. However, it's important to under...
1
0
0
Datastore vs Memcache for high request rate game
2
python,google-app-engine,memcached,google-cloud-datastore,app-engine-ndb
0
2013-07-17T14:13:00.000
I have some very complex XSD schemas to work with. By complex I mean that each of these XSDs would correspond to about 20 classes / tables in a database, with each table having approximately 40 fields. And I have 18 different XSDs like that to program. What I'm trying to achieve is: get a XML file defined by the XSD and ...
2
1
0.099668
0
false
34,734,878
0
2,179
1
0
0
17,750,340
Not sure if there is a way directly, but you could indirectly go from the XSD to a SQL Server DB, and then import the DB with SQLAlchemy.
1
0
0
Generate Python Class and SQLAlchemy code from XSD to store XML on Postgres
2
python,xml,postgresql,xsd,sqlalchemy
0
2013-07-19T15:50:00.000
I am working with Python, fetching huge amounts of data from MS SQL Server Database and processing those for making graphs. The real issue is that I wanted to know whether it would be a good idea to repeatedly perform queries to filter the data (using pyodbc for SQL queries) using attributes like WHERE and SELECT DISTI...
1
0
0
0
false
17,757,423
0
470
1
0
0
17,757,031
If you have bandwidth to burn, and prefer Python to SQL, go ahead and do one big query and filter in Python. Otherwise, you're probably better off with multiple queries. Sorry, no references here. ^_^
1
0
0
SQL query or Programmatic Filter for Big Data?
1
python,sql-server-2008,map,bigdata
0
2013-07-19T23:33:00.000
I have a load of data in CSV format. I need to be able to index this data based on a single text field (the primary key), so I'm thinking of entering it into a database. I'm familiar with sqlite from previous projects, so I've decided to use that engine. After some experimentation, I realized that storing a hund...
1
5
0.462117
0
false
17,826,461
0
1,881
1
0
0
17,826,391
Wrap all insert commands into a single transaction. Use prepared statements. Create the index only after inserting all the data (i.e., don't declare a primary key).
1
0
0
What's the best way to insert over a hundred million rows into a SQLite database?
2
python,database,sqlite
0
2013-07-24T06:07:00.000
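The answer above gives three tips: wrap the inserts in a single transaction, use prepared statements, and build the index only after loading. A minimal sqlite3 sketch of all three (executemany reuses one prepared statement under the hood; the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (key TEXT, value TEXT)")  # no PK/index yet

rows = [(f"k{i}", f"v{i}") for i in range(1000)]

# One transaction around all inserts; executemany reuses a single prepared statement.
with conn:
    conn.executemany("INSERT INTO items VALUES (?, ?)", rows)

# Build the index only after the bulk load is done.
conn.execute("CREATE UNIQUE INDEX idx_items_key ON items (key)")

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 1000
```

At a hundred million rows the same structure applies; the win comes from avoiding one fsync-backed transaction per row and one index update per insert.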
Psycopg is the most popular PostgreSQL adapter for the Python programming language. The name Psycopg does not make sense to me. I understand the last pg means Postgres, but what about Psyco?
21
11
1
0
false
17,869,993
0
2,412
1
0
0
17,869,761
I've always thought of it as psycho-Postgres.
1
0
0
Where does the name `Psycopg` come from?
1
python,postgresql
0
2013-07-25T22:19:00.000
I am trying to copy the entire /contentstore/ folder on a bucket to a timestamped version. Basically /contenstore/ would be copied to /contentstore/20130729/. My entire script uses s3s3mirror first to clone my production S3 bucket to a backup. I then want to rename the backup to a timestamped copy so that I can keep mu...
1
0
0
0
false
20,389,005
1
995
1
0
0
17,931,579
Since your source path contains your destination path, you may actually be copying things more than once -- first into the destination path, and then again when that destination path matches your source prefix. This would also explain why copying to a different bucket is faster than within the same bucket. If you're us...
1
0
0
Copying files in the same Amazon S3 bucket
1
python,amazon-web-services,amazon-s3,boto,s3cmd
0
2013-07-29T18:32:00.000
I have an html file on network which updates almost every minute with new rows in a table. At any point, the file contains close to 15000 rows I want to create a MySQL table with all data in the table, and then some more that I compute from the available data. The said HTML table contains, say rows from the last 3 days...
1
0
0
0
false
17,940,205
1
509
1
0
0
17,939,824
My suggestion: instead of updating values row by row, try using a bulk insert into a temporary table and then move the data into the actual table based on some timing key. If you have a key column, that will be good for reading the recent rows as you add them.
1
0
0
Update a MySQL table from an HTML table with thousands of rows
2
python,mysql,beautifulsoup,mysql-python
0
2013-07-30T06:33:00.000
Title question says it all. I was trying to figure out how I could go about integrating the database created by sqlite3 and communicate with it through Python from my website. If any further information is required about the development environment, please let me know.
5
1
0.066568
0
false
18,099,967
1
1,713
1
0
0
17,953,552
It looks like your needs have changed and you are going in a direction where a static web site is not sufficient any more. Firstly, I would pick an appropriate Python framework for your needs; if a static website was sufficient until recently, Django can be perfect for you. Next I would suggest describing your DB schema for ORM...
1
0
0
I have a static website built using HTML, CSS and Javascript. How do I integrate this with a SQLite3 database accessed with the Python API?
3
python,sqlite,static-site
0
2013-07-30T17:29:00.000
I'm currently running into an issue in integrating ElasticSearch and MongoDB. Essentially I need to convert a number of Mongo Documents into searchable documents matching my ElasticSearch query. That part is luckily trivial and taken care of. My problem though is that I need this to be fast. Faster than network time, I...
1
0
0
0
false
24,357,799
0
195
1
0
0
17,955,275
pymongo is thread safe, so you can run multiple queries in parallel. (I assume that you can somehow partition your document space.) Feed the results to a local Queue if processing the result needs to happen in a single thread.
1
0
1
Bundling reads or caching collections with Pymongo
1
python,performance,mongodb,pymongo
0
2013-07-30T19:04:00.000
I'm working on an app that employs the python sqlite3 module. My database makes use of the implicit ROWID column provided by sqlite3. I expected that the ROWIDs would be reordered after I delete some rows and vacuum the database, because the official sqlite3 documentation says: The VACUUM command may change the ROWIDs of entries i...
1
0
1.2
0
true
17,988,741
0
134
1
0
0
17,987,732
This behaviour is version dependent. If you want a guaranteed reordering, you have to copy all records into a new table yourself. (This works with both implicit and explicit ROWIDs.)
1
0
0
Why are ROWIDs not updated after VACUUM when using the python sqlite3 module?
1
python,sqlite,pysqlite
0
2013-08-01T07:30:00.000
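The answer above says the only guaranteed way to renumber is to copy the records into a new table yourself. A small sqlite3 sketch of that approach (table names are illustrative): after deleting row 1, the surviving rowids stay 2 and 3 in the original table, but the copy gets fresh contiguous rowids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)", [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM t WHERE name = 'a'")  # leaves implicit rowids 2 and 3

# Copying into a fresh table assigns new, contiguous rowids starting at 1.
conn.execute("CREATE TABLE t2 (name TEXT)")
conn.execute("INSERT INTO t2 (name) SELECT name FROM t ORDER BY rowid")

rowids = [r[0] for r in conn.execute("SELECT rowid FROM t2 ORDER BY rowid")]
print(rowids)  # [1, 2]
```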
I am stuck with this issue: I had some migration problems, I tried many times, and along the way I deleted migrations and tried again and even deleted one table in the db. There is no data in the db, so I have nothing to fear. But now if I try syncdb it is not creating the table I deleted manually. Honestly, I am really stuck ...
0
0
0
0
false
17,996,086
0
824
2
0
0
17,995,963
Are you using South? If you are, there is a migration history table that exists. Make sure to delete the row mentioning the migration you want to run again.
1
0
0
syncdb is not creating tables again?
4
python,django,django-south
0
2013-08-01T13:51:00.000
I am stuck with this issue: I had some migration problems, I tried many times, and along the way I deleted migrations and tried again and even deleted one table in the db. There is no data in the db, so I have nothing to fear. But now if I try syncdb it is not creating the table I deleted manually. Honestly, I am really stuck ...
0
0
0
0
false
29,407,625
0
824
2
0
0
17,995,963
Try renaming the migration file and running python manage.py syncdb.
1
0
0
syncdb is not creating tables again?
4
python,django,django-south
0
2013-08-01T13:51:00.000
I'm designing a g+ application for a big international brand. The entities I need to create are pretty much in the form of a graph, hence a lot of many-to-many relations (arcs) connecting nodes that can be traversed in both directions. I'm reading all the readable docs online, but I haven't found anything so far specific t...
1
1
0.099668
0
false
18,035,092
1
478
1
1
0
18,017,150
There are two ways to implement one-to-many relationships in App Engine. 1. Inside entity A, store a list of keys to entities B1, B2, B3. In the old DB, you'd use a ListProperty of db.Key. In ndb you'd use a KeyProperty with repeated=True. 2. Inside entities B1, B2, B3, store a KeyProperty to entity A. If you use 1: When y...
1
0
0
best practice for graph-like entities on appengine ndb
2
python,google-app-engine,app-engine-ndb,graph-databases
0
2013-08-02T12:42:00.000
I have read somewhere that you can store python objects (more specifically dictionaries) as binaries in MongoDB by using BSON. However right now I cannot find any any documentation related to this. Would anyone know how exactly this can be done?
18
5
0.321513
0
false
18,089,722
0
23,835
1
0
0
18,089,598
Assuming you are not specifically interested in mongoDB, you are probably not looking for BSON. BSON is just a different serialization format compared to JSON, designed for more speed and space efficiency. On the other hand, pickle does more of a direct encoding of python objects. However, do your speed tests before yo...
1
0
1
Is there a way to store python objects directly in mongoDB without serializing them
3
python,mongodb,pymongo,bson
0
2013-08-06T20:14:00.000
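The answer above points at pickle as the more direct way to serialize Python objects (while still advising speed tests). A minimal round-trip sketch with stdlib pickle; storing the resulting bytes in MongoDB (e.g. via a binary field) is left out, and the sample object is just illustrative:

```python
import pickle

obj = {"user": "ada", "scores": [1, 2, 3], "meta": {"active": True}}

blob = pickle.dumps(obj)        # opaque bytes; could be stored as a binary field
restored = pickle.loads(blob)
print(restored == obj)  # True
```

The trade-off versus BSON/JSON: pickle handles arbitrary Python types, but the stored blob is not queryable by the database and is Python-specific.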
I have several S3 buckets containing a total of 40 TB of data across 761 million objects. I undertook a project to copy these objects to EBS storage. To my knowledge, all buckets were created in us-east-1. I know for certain that all of the EC2 instances used for the export to EBS were within us-east-1. The problem ...
0
0
0
0
false
18,366,790
1
878
1
0
1
18,113,426
The problem ended up being an internal billing error at AWS and was not related to either S3 or Boto.
1
0
0
Boto randomly connecting to different regions for S3 transfers
2
python,amazon-web-services,amazon-s3,boto
0
2013-08-07T20:42:00.000
I am trying to run the following DB2 command through the python pyodbc module. IBM DB2 command: "DB2 export to C:\file.ixf of ixf select * from emp_hc". I am successfully connected to the DSN using the pyodbc module in python and it works fine for select statements, but when I try to execute the following command from the...
0
1
0.099668
0
false
18,135,069
0
1,372
1
0
0
18,134,390
db2 export is a command run in the shell, not through SQL via odbc. It's possible to write database query results to a file with python and pyodbc, but db2 export will almost certainly be faster and effortlessly handle file formatting if you need it for import.
1
0
0
sql import export command error using pyodbc module python
2
python,sql,db2,pyodbc
0
2013-08-08T19:24:00.000
When connecting to a mysql database in Django, I get the error. I'm sure the mysql server is running. /var/run/mysqld/mysqld.sock doesn't exist. When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output. I added socket = /tmp/mysql.sock to /etc/my.cnf and then restarted mysql, exi...
26
0
0
0
false
66,405,102
1
101,165
3
0
0
18,150,858
I faced this problem when connecting MySQL with Django when using Docker. Try 'PORT':'0.0.0.0'. Do not use 'PORT': 'db'. This will not work if you tried to run your app outside Docker.
1
0
0
OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
5
python,mysql,django,mysql.sock
0
2013-08-09T15:55:00.000
When connecting to a mysql database in Django, I get the error. I'm sure the mysql server is running. /var/run/mysqld/mysqld.sock doesn't exist. When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output. I added socket = /tmp/mysql.sock to /etc/my.cnf and then restarted mysql, exi...
26
0
0
0
false
56,762,083
1
101,165
3
0
0
18,150,858
In Flask, you may use: app = Flask(__name__); app.config["MYSQL_HOST"] = "127.0.0.1"; app.config["MYSQL_USER"] = "root"...
1
0
0
OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
5
python,mysql,django,mysql.sock
0
2013-08-09T15:55:00.000
When connecting to a mysql database in Django, I get the error. I'm sure the mysql server is running. /var/run/mysqld/mysqld.sock doesn't exist. When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output. I added socket = /tmp/mysql.sock to /etc/my.cnf and then restarted mysql, exi...
26
0
0
0
false
72,389,079
1
101,165
3
0
0
18,150,858
You need to change your HOST from 'localhost' to '127.0.0.1' and check your django app :)
1
0
0
OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
5
python,mysql,django,mysql.sock
0
2013-08-09T15:55:00.000
I used mysqldb to connect to a database on my localhost. It works, but if I add data to a table in the database while the program is running, it shows that it has been added; yet when I check the table from localhost, it hasn't been updated.
0
0
1.2
0
true
18,245,522
0
30
1
0
0
18,245,510
If your table uses the InnoDB engine, you should call connection.commit() after every cursor.execute().
1
0
0
mysqldb-python doesn't really update the original database
1
python,mysql,database-connection,mysql-python
0
2013-08-15T02:41:00.000
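The answer above comes down to transaction visibility: until you commit, a second connection (like the one your localhost client uses) does not see the insert. MySQLdb/InnoDB cannot be run here, so this sketch uses a file-backed sqlite3 database as a stand-in to show the same commit semantics:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

writer.execute("INSERT INTO t VALUES (1)")  # opens a transaction, not yet committed

reader = sqlite3.connect(path)  # a second, independent connection
before = list(reader.execute("SELECT COUNT(*) FROM t"))[0][0]  # insert not visible yet

writer.commit()
after = list(reader.execute("SELECT COUNT(*) FROM t"))[0][0]   # now visible

print(before, after)
```

The same reasoning explains the question: the writing program sees its own uncommitted rows, while every other connection sees nothing until commit.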
I'm trying to understand which of the following is a better option: Data calculation using Python from the output of a MySQL query. Perform the calculations in the query itself. For example, the query returns 20 rows with 10 columns. In Python, I compute the difference or division of some of the columns. Is it a bett...
2
1
0.099668
0
false
18,270,751
0
2,805
2
0
0
18,270,585
It is probably a matter of taste but... to give you the exact opposite answer to the one by Alma Do Mundo: for (not so) simple calculations made in the SELECT ... clause, I generally lean toward using the DB "as a calculator". Calculations (in the SELECT ... clause) are performed as the last step while executing the ...
1
0
0
Data Calculations MySQL vs Python
2
python,mysql,query-performance,sql-tuning,query-tuning
0
2013-08-16T09:53:00.000
I'm trying to understand which of the following is a better option: Data calculation using Python from the output of a MySQL query. Perform the calculations in the query itself. For example, the query returns 20 rows with 10 columns. In Python, I compute the difference or division of some of the columns. Is it a bett...
2
1
0.099668
0
false
18,271,329
0
2,805
2
0
0
18,270,585
If you are doing basic arithmetic operation on calculations in a row, then do it in SQL. This gives you the option of encapsulating the results in a view or stored procedure. In many databases, it also gives the possibility of parallel execution of the statements (although performance is not an issue with so few rows...
1
0
0
Data Calculations MySQL vs Python
2
python,mysql,query-performance,sql-tuning,query-tuning
0
2013-08-16T09:53:00.000
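Both answers above agree that simple per-row arithmetic can live in the SELECT clause. A small sketch showing the two options side by side; sqlite3 stands in for MySQL here, and the table is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO m VALUES (?, ?)", [(10, 4), (7, 2)])

# Option 1: let the database do the arithmetic in the SELECT clause.
in_sql = [r[0] for r in conn.execute("SELECT a - b FROM m")]

# Option 2: fetch the raw columns and compute in Python.
in_py = [a - b for a, b in conn.execute("SELECT a, b FROM m")]

print(in_sql, in_py)  # [6, 5] [6, 5]
```

For 20 rows the results and cost are effectively identical; the choice matters more for encapsulation (views, stored procedures) than for speed.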
I'm trying to use Python for manipulating some data in a MySQL DB. The DB is on a remote PC, and I will use another PC with Python to connect to the DB. When I searched how to install the MySQLdb module for Python, they all said MySQL needs to be installed on the local PC. Is that right? Or do I not need to install MySQL on the local...
0
1
1.2
0
true
18,288,628
0
323
1
0
0
18,288,616
You just need it if you want to compile the Python MySQL bindings from source. If you already have the binary version of the python library then the answer is no, you don't need it.
1
0
1
Do I need MySQL installed on my local PC to use MySQLdb for Python to connect MySQL server remotely?
1
python,mysql
0
2013-08-17T12:07:00.000
I'm using the python packages xlrd and xlwt to read and write from excel spreadsheets using python. I can't figure out how to write the code to solve my problem though. So my data consists of a column of state abbreviations and a column of numbers, 1 through 7. There are about 200-300 entries per state, and i want to f...
0
0
0
0
false
18,413,675
0
247
1
0
0
18,413,606
Prepare a dictionary to store the results. Get the number of lines with data you have using xlrd, then iterate over each of them. For each state code, if it's not in the dict, create it also as a dict. Then check if the entry you read in the second column exists within the state key in your results dict. 4.1 I...
1
0
0
Python Programming approach - data manipulation in excel
2
python,excel,xlrd,xlwt
1
2013-08-24T00:09:00.000
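The answer above describes a nested-dictionary tally: one dict per state, counting occurrences of each number. The stdlib collections module makes that pattern compact; the sample rows below stand in for what xlrd would read from the two spreadsheet columns:

```python
from collections import defaultdict, Counter

# Sample (state, number) pairs standing in for rows read via xlrd.
rows = [("NY", 1), ("NY", 1), ("NY", 3), ("CA", 2), ("CA", 2), ("CA", 7)]

tally = defaultdict(Counter)   # state -> Counter of numbers seen
for state, number in rows:
    tally[state][number] += 1

print(dict(tally["NY"]))  # {1: 2, 3: 1}
```

Writing the tallies back out with xlwt would then just iterate over tally.items().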
I've been working with Python MySQLdb. With InnoDB tables, autocommit is turned off by default, and that was what I needed. But since I'm now working with MyISAM tables, the docs for MySQL say MyISAM tables effectively always operate in autocommit = 1 mode. Since I'm running up to a few hundred queries a second, do...
1
0
0
0
false
18,463,239
0
440
1
0
0
18,462,528
MyISAM has no transactions, so you cannot avoid "autocommit" with MyISAM. Your runtime change may also be caused by the fact that you moved from InnoDB to MyISAM. The best approach for DB runtime issues in general is benchmarking, benchmarking and benchmarking.
1
0
0
does autocommit slow down performance in python?
1
python,mysql,commit
0
2013-08-27T10:06:00.000
I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database. My issue is that I need to use...
2
0
0
0
false
18,496,589
1
941
2
1
0
18,492,467
The libpq driver, which is what the psycopg2 driver usually used by django is built on, does not support forking an active connection. I'm not sure if there might be another driver that does, but I would assume not: the protocol does not support multiplexing multiple sessions on the same connection. The proper solution...
1
0
0
Forking Django DB connections
2
python,django,postgresql
0
2013-08-28T15:42:00.000
I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database. My issue is that I need to use...
2
1
0.099668
0
false
18,531,322
1
941
2
1
0
18,492,467
So one solution I found is to create a new thread to spawn from. Django opens a new connection per thread so spawning from a new thread ensures you pass a new connection to the new process. In retrospect I wish I'd used psycopg2 directly from the beginning rather than Django. Django is great for the web front end but ...
1
0
0
Forking Django DB connections
2
python,django,postgresql
0
2013-08-28T15:42:00.000
The company I work for is starting development of a Django business application that will use MySQL as the database engine. I'm looking for a way to keep from having database credentials stored in a plain-text config file. I'm coming from a Windows/IIS background where a vhost can impersonate an existing Windows/AD use...
3
1
0.197375
0
false
18,496,083
1
691
1
0
0
18,495,773
MySQL controls access to tables from its own list of users, so it's better to create MySQL users with permissions. You might want to create roles instead of users so you don't have as many to manage: an Admin, a read/write role, a read-only role, etc. A Django application always runs as the web server user. You could...
1
0
0
Can a Django application authenticate with MySQL using its linux user?
1
python,mysql,django
0
2013-08-28T18:39:00.000
I have an Excel file whose extension is .xls but whose type is tab-separated text. When I try to open the file in MS Excel, it tells me that the extension is fake, and so I have to confirm that I trust the file before I can read it. But my real problem is that when I try to read the file with the xlrd library it g...
0
1
0.099668
0
false
18,574,653
0
399
1
0
0
18,570,143
mv file.{xls,csv} It's a csv file, stop treating it as an excel file and things will work a lot better. :) There are nice csv manipulation tools available in most languages. Do you really need the excel library?
1
0
0
How to change automatically the type of the excel file from Tab space separated Text to xls file?
2
python,linux,excel,shell,xlrd
0
2013-09-02T09:46:00.000
I need to do the following Delete many entities from a database, also those entities have a file associated with them saved into the file system, which are accessed also by the web server (images!). The problem: File deletion might fail, I have all the files in a folder for the main entity (its actually a 1-N relatio...
0
2
1.2
0
true
18,581,616
0
708
1
0
0
18,581,117
There is no way to transactionally delete multiple files on normal filesystems (you might be able to find esoteric filesystems where it is, but even if so I doubt that helps you. Apparently your current filesystem doesn't even let you delete a file that's being read, so presumably you're stuck with what you have!). Per...
1
0
0
Delete files atomically/transactionally in python
1
python,django,transactions
0
2013-09-02T21:41:00.000
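The answer's best-effort approach (delete what you can, report what failed for a later pass) can be sketched like this; the retry policy is left out and the paths are illustrative:

```python
import os
import tempfile

def delete_all(paths):
    """Try to delete every path; return the ones that failed.

    Deletion is not transactional, so the best we can do is attempt
    each one and collect failures for a later cleanup pass.
    """
    failed = []
    for path in paths:
        try:
            os.remove(path)
        except OSError:
            failed.append(path)
    return failed

# Demo: one real temp file plus one path that cannot be deleted.
fd, real = tempfile.mkstemp()
os.close(fd)
leftovers = delete_all([real, "/no/such/file"])
print(leftovers)  # → ['/no/such/file']
```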
I have a Mac running OS X 10.6.8, which comes pre-installed with SQLite3 v3.6. I installed v3.8 using homebrew. But when I type "sqlite3" in my terminal it continues to run the old pre-installed version. Any help? Trying to learn SQL as I'm building my first web app. Not sure if PATH variable has anything to do with it...
1
0
0
0
false
18,629,528
0
1,449
1
1
0
18,626,114
To figure out exactly which sqlite3 binaries your system can find, type which -a sqlite3. This will list the apps in the order that they are found according to your PATH variable; this also shows what order the system would use when figuring out which to run if you have multiple versions. Homebrew should normally li...
1
0
0
Running upgraded version of SQLite (3.8) on Mac when Terminal still defaults to old version 3.6
2
python,linux,macos,sqlite
0
2013-09-05T00:54:00.000
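The which -a lookup described above can be reproduced from Python with the stdlib; this sketch walks PATH the way the shell does, using ls as a stand-in for sqlite3:

```python
import os
import shutil

# Walk PATH the way the shell does: the first directory containing an
# executable with the right name wins ("ls" stands in for sqlite3).
first = None
for directory in os.environ.get("PATH", "").split(os.pathsep):
    candidate = os.path.join(directory, "ls")
    if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
        first = candidate
        break

print("first match:", first)
print("shutil.which:", shutil.which("ls"))  # same resolution idea
```

This makes it visible why a stock /usr/bin/sqlite3 can shadow a Homebrew install in /usr/local/bin when the latter comes later in PATH.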
I want to use BDB as a time-series data store, and planning to use the microseconds since epoch as the key values. I am using BTREE as the data store type. However, when I try to store integer keys, bsddb3 gives an error saying TypeError: Integer keys only allowed for Recno and Queue DB's. What is the best workaround?...
0
-1
1.2
0
true
18,793,657
0
689
1
0
0
18,664,940
Well, there's no workaround. But you can use two approaches: Store the integers as strings using str or repr. If the ints are big, you can even use string formatting. Use the cPickle/pickle module to store and retrieve data. This is a good way if you have data types other than basic types. For basic ints and floats this act...
1
0
0
Use integer keys in Berkeley DB with python (using bsddb3)
2
python,berkeley-db,bsddb
0
2013-09-06T19:11:00.000
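One refinement of the "store ints as strings" approach worth noting for a BTREE time-series store: fixed-width big-endian packing keeps the byte order of the keys identical to the numeric order, which plain str() does not. A small sketch:

```python
import struct

# Fixed-width big-endian packing: the byte order of the keys matches
# the numeric order of the integers, unlike plain str() ("10" < "9").
def int_key(ts_us):
    return struct.pack(">Q", ts_us)

keys = [int_key(t) for t in (9, 10, 1000000)]
print(sorted(keys) == keys)  # → True
```

Correct byte ordering matters for a BTREE because range scans over microsecond timestamps rely on the keys sorting chronologically.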
I have some code that I am working on that scrapes some data from a website, and then extracts certain key information from that website and stores it in an object. I create a couple hundred of these objects each day, each from unique url's. This is working quite well, however, I'm inexperienced in what options are a...
0
0
0
0
false
18,674,706
1
95
1
0
0
18,674,630
Martijn's suggestion could be one of the alternatives. You may consider storing the pickled objects directly in a SQLite database, which you can still manage from the Python standard library. Use a StringIO object to convert between the database column and the Python object. You didn't mention the size of each object you are pi...
1
0
0
Persistence of a large number of objects
1
python,persistence
0
2013-09-07T15:02:00.000
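The pickle-in-SQLite suggestion above can be sketched as follows; the table name, schema, and sample record are invented for illustration:

```python
import pickle
import sqlite3

# Keep each pickled object as a BLOB in a SQLite table, keyed by the
# source URL. Schema and sample data are made up for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (url TEXT PRIMARY KEY, blob BLOB)")

record = {"title": "example", "price": 9.99}
conn.execute(
    "INSERT INTO objects VALUES (?, ?)",
    ("http://example.com/item/1", pickle.dumps(record)),
)

row = conn.execute(
    "SELECT blob FROM objects WHERE url = ?", ("http://example.com/item/1",)
).fetchone()
print(pickle.loads(row[0]))  # round-trips the original dict
```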
For a music project I want to find what which groups of artists users listens to. I have extracted three columns from the database: the ID of the artist, the ID of the user, and the percentage of all the users stream that is connected to that artist. E.g. Half of the plays from user 15, is of the artist 12. 12 | 15 |...
0
0
1.2
0
true
18,712,558
0
261
1
0
0
18,705,223
Sounds like a classic matrix factorization task to me, with a weighted matrix instead of a binary one. So some fast algorithms may not be applicable, because they support binary matrices only. Don't ask for source on Stack Overflow: asking for off-site resources (tools, libraries, ...) is off-topic.
1
0
0
Data Mining: grouping based on two text values (IDs) and one numeric (ratio)
2
python,ruby,data-mining,data-analysis
0
2013-09-09T19:09:00.000
I'm a beginner of openerp 7. i just want to know the details regarding how to generate report in openerp 7 in xls format. The formats supported in OpenERP report types are : pdf, odt, raw, sxw, etc.. Is there any direct feature that is available in OpenERP 7 regarding printing the report in EXCEL format(XLS)
1
0
0
0
false
18,716,823
1
2,902
1
0
0
18,716,623
Python libraries are available to export data to PDF and Excel. For Excel you can use: 1) xlwt 2) ElementTree. For PDF generation: 1) PyPDF 2) ReportLab.
1
0
0
How to print report in EXCEL format (XLS)
3
python,openerp
0
2013-09-10T10:34:00.000
The context for this question is: A Google App Engine backend for a two-person multiplayer turn-based card game The game revolves around different combinations of cards giving rise to different scores in the game Obviously, one would store the state of a game in the GAE datastore, but I'm not sure on the approach for...
0
1
0.099668
0
false
18,807,184
1
209
1
0
0
18,807,022
If the logic is fixed, keep it in your code. Maybe you can procedurally generate the dicts on startup. If there is a dynamic component to the logic (something you want to update frequently), a data store might be a better bet, but it sounds like that's not applicable here. Unless the number of combinations runs over th...
1
0
1
Where to hold static information for game logic?
2
python,google-app-engine,google-cloud-datastore
0
2013-09-14T22:29:00.000
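The "procedurally generate the dicts on startup" idea might look like this; the cards and the scoring rule are invented stand-ins for the real game logic:

```python
from itertools import combinations

# Enumerate every two-card combination once at import time and keep the
# scores in a module-level dict. The scoring rule is an invented stand-in.
CARDS = range(1, 11)

SCORES = {pair: pair[0] + pair[1] for pair in combinations(CARDS, 2)}

print(SCORES[(3, 7)])  # → 10
```

Because the dict is built when the module loads, every request handler shares one in-memory copy and no datastore round-trip is needed for fixed logic.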
I have a postgres DB in which most of the tables have a column 'valid_time' indicating when the data in that row is intended to represent and an 'analysis_time' column, indicating when the estimate was made (this might be the same or a later time than the valid time in the case of a measurement or an earlier time in th...
0
1
0.099668
0
false
18,818,835
0
144
1
0
0
18,818,634
I'm not sure about the SQLAlchemy part, but as far as the SQL queries go I would do it in two steps: Get the times. For example, something like SELECT DISTINCT valid_time FROM MyTable ORDER BY valid_time DESC LIMIT 3; Get the rows with those times, using the previous step as a subquery: SELECT * FROM MyTable WHERE vali...
1
0
0
Selecting the rows with the N most recent unique values of a datetime
2
python,sql,postgresql,sqlalchemy
0
2013-09-15T23:45:00.000
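The two-step query above can be demonstrated end to end with stdlib sqlite3 (the question used PostgreSQL, but the SQL pattern is the same; the table and data are made up):

```python
import sqlite3

# Rows for the 3 most recent distinct valid_time values, via a subquery.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obs (valid_time TEXT, value INTEGER)")
conn.executemany(
    "INSERT INTO obs VALUES (?, ?)",
    [("2013-09-13", 1), ("2013-09-14", 2), ("2013-09-14", 3),
     ("2013-09-15", 4), ("2013-09-12", 5)],
)

rows = conn.execute(
    """SELECT * FROM obs
       WHERE valid_time IN (SELECT DISTINCT valid_time FROM obs
                            ORDER BY valid_time DESC LIMIT 3)
       ORDER BY valid_time DESC"""
).fetchall()
print(rows)
```

The oldest time ("2013-09-12") drops out, while both rows sharing "2013-09-14" survive, which is the behavior the question asks for.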
I'm looking into the software architecture for using a NoSQL database (MongoDB). I would ideally want to use a database independent ORM/ODM for this, but I can't find any similar library to SQLAlchemy for NoSQL. Do you know any? I do find a lot of wrappers, but nothing that seems to be database independent. If there's ...
3
0
0
0
false
18,980,345
0
943
1
0
0
18,827,379
Not sure about Python, but in Java you can use frameworks like PlayORM for this purpose, which supports Cassandra, HBase and MongoDB.
1
0
0
NoSQL database independent ORM/ODM for Python
1
python,mongodb,nosql
0
2013-09-16T11:54:00.000
I have multiple xlsx File which contain two worksheet(data,graph). I have created graph using xlsxwriter in graph worksheet and write data in data worksheet. So I need to combine all graph worksheet into single xlsx File. So My question is: openpyxl : In openpyxl module, we can load another workbook and modify the valu...
2
0
0
0
false
18,917,174
0
2,875
1
0
0
18,913,370
In answer to the last part of the question: xlsxwriter : As of my understanding, we can not modify existing xlsx File. Do we any update into this module. That is correct. XlsxWriter only writes new files. It cannot be used to modify existing files. Rewriting files is not a planned feature.
1
0
0
Combine multiple xlsx File in single Xlsx File
2
python,openpyxl,xlsxwriter
0
2013-09-20T09:33:00.000
I have a 17gb xml file. I want to store it in MySQL. I tried it using xmlparser in php but it says maximum execution time of 30 seconds exceeded and inserts only a few rows. I even tried in python using element tree but it is taking lot of memory gives memory error in a laptop of 2 GB ram. Please suggest some efficient...
4
0
0
0
false
18,945,969
0
215
1
0
0
18,945,802
I'd say, turn off the execution time limit in PHP (e.g. use a CLI script) and be patient. If you say it starts to insert something into the database from a 17 GB file, it's actually doing a good job already. No reason to hasten it for such a one-time job. (Increase the memory limit too, just in case. The default 128 MB is not that much....
1
0
0
extremely large xml file to mysql
2
php,mysql,python-2.7,xml-parsing
0
2013-09-22T16:01:00.000
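On the Python side, the memory error with ElementTree can usually be avoided by streaming with iterparse and clearing elements as they are processed; a minimal sketch on a tiny in-memory document:

```python
import xml.etree.ElementTree as ET
from io import BytesIO

# Streaming alternative to loading the whole tree: iterparse visits
# elements as they close, and clear() frees them, keeping memory flat
# even for a multi-gigabyte file.
xml = b"<rows>" + b"".join(
    b'<row id="%d"/>' % i for i in range(5)
) + b"</rows>"

count = 0
for event, elem in ET.iterparse(BytesIO(xml)):
    if elem.tag == "row":
        count += 1            # insert the row into MySQL here instead
        elem.clear()          # drop the element to bound memory use
print(count)  # → 5
```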
In my program, ten processes write to MongoDB via update(key, doc, upsert=True). The "key" is a MongoDB index, but it is not unique. query = {'hotelid':hotelid,"arrivedate":arrivedate,"leavedate":leavedate} where = "data.%s" % sourceid data_value_where = {where:value} self.collection.update(query,{'$set':data_value_where},Tru...
0
0
0
0
false
18,998,582
0
820
2
0
0
18,995,966
You would not end up with duplicate documents due to the operator you are using. You are actually using an atomic operator to update. Atomic (not to be confused with SQL atomic operations of all or nothing here) operations are done in sequence so each process will never pick up a stale document or be allowed to write ...
1
0
0
mongodb update(use upsert=true) not update exists data, insert a new data?
2
python,mongodb,pymongo
0
2013-09-25T04:00:00.000
In my program, ten processes write to MongoDB via update(key, doc, upsert=True). The "key" is a MongoDB index, but it is not unique. query = {'hotelid':hotelid,"arrivedate":arrivedate,"leavedate":leavedate} where = "data.%s" % sourceid data_value_where = {where:value} self.collection.update(query,{'$set':data_value_where},Tru...
0
0
0
0
false
18,996,136
0
820
2
0
0
18,995,966
You can call it "threadsafe", as the update itself is not done in Python; it's in MongoDB, which is built to cater to many requests at once. So in summary: you can safely do that.
1
0
0
mongodb update(use upsert=true) not update exists data, insert a new data?
2
python,mongodb,pymongo
0
2013-09-25T04:00:00.000
I am considering to serialize a big set of database records for cache in Redis, using python and Cassandra. I have either to serialize each record and persist a string in redis or to create a dictionary for each record and persist in redis as a list of dictionaries. Which way is faster? pickle each record? or create a ...
4
3
1.2
0
true
19,033,019
0
2,722
1
0
0
19,025,952
Instead of serializing your dictionaries into strings and storing them in a Redis LIST (which is what it sounds like you are proposing), you can store each dict as a Redis HASH. This should work well if your dicts are relatively simple key/value pairs. After creating each HASH you could add the key for the HASH to a LI...
1
0
0
Python - Redis : Best practice serializing objects for storage in Redis
1
python,redis,cassandra,cql,cqlengine
0
2013-09-26T10:39:00.000
datetime is stored in postgres DB with UTC. I could see that the date is 2013-09-28 00:15:52.62504+05:30 in postgres table. But when I fetch the value via django model, I get the same datetime field as datetime.datetime(2013, 9, 27, 18, 45, 52, 625040, tzinfo=). USE_TZ is True and TIME_ZONE is 'Asia/Kolkata' in setti...
2
3
1.2
0
true
19,076,075
1
1,585
1
0
0
19,058,491
The issue has been solved. The problem was that I was using another naive datetime field for the calculation of the difference in time, whereas the DB field was an aware field. I then converted the naive datetime to a timezone-aware one, which solved the issue. Just in case someone needs to know.
1
0
0
Postgres datetime field fetched without timezone in django
1
python,django,postgresql,timezone
0
2013-09-27T19:19:00.000
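The naive-to-aware conversion described in the answer can be sketched with the stdlib datetime module (a Django project would normally use django.utils.timezone instead; the timestamps mirror the ones in the question):

```python
from datetime import datetime, timezone, timedelta

# Attach a zone to the naive value so it can be compared with the
# aware value coming back from the database.
IST = timezone(timedelta(hours=5, minutes=30))  # Asia/Kolkata offset

naive = datetime(2013, 9, 28, 0, 15, 52)
aware = naive.replace(tzinfo=IST)       # attach the zone, keep the wall time

db_value = datetime(2013, 9, 27, 18, 45, 52, tzinfo=timezone.utc)
print(aware == db_value)  # → True: same instant once both are aware
```

Mixing a naive and an aware datetime in arithmetic raises TypeError, which is exactly the trap the answer describes.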
I'm a complete beginner to Flask and I'm starting to play around with making web apps. I have a hard figuring out how to enforce unique user names. I'm thinking about how to do this in SQL, maybe with something like user_name text unique on conflict fail, but then how to I catch the error back in Python? Alternatively...
1
0
0
0
false
19,087,185
1
1,118
1
0
0
19,086,885
You can use SQLAlchemy. It's available as a Flask plug-in (the Flask-SQLAlchemy extension).
1
0
0
How do I enforce unique user names in Flask?
2
python,sql,web-applications,flask
0
2013-09-30T05:17:00.000
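The question's second half (catching the conflict back in Python) works exactly as hoped: declare the column UNIQUE and catch the IntegrityError. A sketch with stdlib sqlite3:

```python
import sqlite3

# Declare the column UNIQUE and catch the IntegrityError in Python.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_name TEXT UNIQUE)")

def add_user(name):
    try:
        conn.execute("INSERT INTO users VALUES (?)", (name,))
        return True
    except sqlite3.IntegrityError:
        return False          # name already taken

print(add_user("alice"))  # → True
print(add_user("alice"))  # → False
```

In a Flask view, the False branch would render the signup form again with a "name taken" message.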
I have a program which calculates a set of plain interlinked objects (the objects consist of properties which basically are either String, int or link to another object). I would like to have the objects stored in a relational database for easy SQL querying (from another program). Moreover, the objects (classes) tend t...
1
1
0.049958
0
false
19,142,716
0
60
1
0
0
19,142,497
What about storing the objects in JSON? You could write a function that serializes your object before storing it into the database. If you have a specific identifier for your objects, I would suggest using it as the index so that you can easily retrieve them.
1
0
1
Store Python objects in a database for easy quering
4
python,database,orm
0
2013-10-02T16:56:00.000
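The JSON suggestion above can be sketched with stdlib tools; the table layout and the sample object are invented:

```python
import json
import sqlite3

# Serialize each object to JSON and index it by its identifier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objs (obj_id TEXT PRIMARY KEY, body TEXT)")

obj = {"name": "widget", "count": 3, "linked_to": "obj-7"}
conn.execute("INSERT INTO objs VALUES (?, ?)", ("obj-1", json.dumps(obj)))

body = conn.execute(
    "SELECT body FROM objs WHERE obj_id = ?", ("obj-1",)
).fetchone()[0]
print(json.loads(body)["name"])  # → widget
```

Links between objects become stored identifiers ("obj-7" here), which a loader can resolve back into object references; the trade-off is that SQL can no longer query inside the serialized body.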
I am dealing with a doubt about sqlalchemy and objects refreshing! I am in the situation in what I have 2 sessions, and the same object has been queried in both sessions! For some particular thing I cannot to close one of the sessions. I have modified the object and commited the changes in session A, but in session B, ...
37
9
1
0
false
54,821,257
0
54,352
1
0
0
19,143,345
I just had this issue and the existing solutions didn't work for me for some reason. What did work was to call session.commit(). After calling that, the object had the updated values from the database.
1
0
0
About refreshing objects in sqlalchemy session
6
python,mysql,session,notifications,sqlalchemy
0
2013-10-02T17:43:00.000
I'm currently using SQLAlchemy with two distinct session objects. In one object, I am inserting rows into a mysql database. In the other session I am querying that database for the max row id. However, the second session is not querying the latest from the database. If I query the database manually, I see the correct, ...
2
0
0
0
false
49,755,122
0
1,599
1
0
0
19,159,142
Had a similar problem; for some reason I had to commit both sessions, even the one that is only reading. This might be a problem with my code though; I cannot use the same session as the code will run on different machines. Also, the SQLAlchemy documentation says that each session should be used by one thread only, although...
1
0
0
How to force SQLAlchemy to update rows
2
python,mysql,database,session,sqlalchemy
0
2013-10-03T12:22:00.000
I'm working with a somewhat large set (~30000 records) of data that my Django app needs to retrieve on a regular basis. This data doesn't really change often (maybe once a month or so), and the changes that are made are done in a batch, so the DB solution I'm trying to arrive at is pretty much read-only. The total s...
2
0
0
0
false
19,311,615
1
1,409
1
0
0
19,310,083
Does the disk IO really become the bottleneck of your application's performance and affect your user experience? If not, I don't think this kind of optimization is necessary. Operating systems and RDBMSs (e.g. MySQL, PostgreSQL) are really smart nowadays. The data on disk will be cached in memory by the RDBMS and OS autom...
1
0
0
Load static Django database into memory
2
python,django,sqlite,orm,memcached
0
2013-10-11T04:06:00.000
I have a workbook that has some sheets in it. One of the sheets has charts in it. I need to use xlrd or openpyxl to edit another sheet, but, whenever I save the workbook, the charts are gone. Any workaround to this? Is there another python package that preserves charts and formatting?
4
2
0.379949
0
false
20,910,668
0
477
1
0
0
19,323,049
This is currently not possible with either but I hope to have it in openpyxl 2.x. Patches / pull requests always welcome! ;-)
1
0
0
How can I edit Excel Workbooks using XLRD or openpyxl while preserving charts?
1
python,xlrd,xlwt,openpyxl,xlutils
0
2013-10-11T16:33:00.000
I have a simple python/Django Application in which I am inserting records in database through some scanning event. And I am able to show the data on a simple page. I keep reloading the page every second to show the latest inserted database records.But I want it to improve so that page should update the records when eve...
1
2
0.132549
0
false
19,333,028
1
824
1
0
0
19,332,760
You need to implement polling, long polling, or server push.
1
0
0
Updating client page only when new entry comes in database in Django
3
python,mysql,django
0
2013-10-12T09:38:00.000
I understand that ForeignKey constrains a column to be an id value contained in another table so that entries in two different tables can be easily linked, but I do not understand the behavior of relationships(). As far as I can tell, the primary effect of declaring a relationship between Parent and Child classes is th...
1
5
1.2
0
true
19,369,883
0
251
1
0
0
19,366,605
It doesn't do anything at the database level, it's purely for convenience. Defining a relationship lets SQLAlchemy know how to automatically query for the related object, rather than you having to manually use the foreign key. SQLAlchemy will also do other high level management such as allowing assignment of objects ...
1
0
0
SQLAlchemy Relationships
1
python,sql,sqlalchemy,relationship
0
2013-10-14T18:21:00.000
I'm just curious if there's a way to make the no default value warning I get from Storm to go away. I have an insert trigger in MySQL that handles these fields and everything is functioning as expected so I just want to remove this unnecessary information. I tried setting the default value to None but that causes an er...
1
0
0
0
false
20,010,872
1
770
1
0
0
19,373,289
Is it not possible for you to remove the NOT NULL constraint from your MySQL database? I'm not aware of any setup where it is not possible to do this. Otherwise you could set a default string which represents a null value.
1
0
0
How can I avoid "Warning: Field 'xxx' doesn't have a default value" in Storm?
1
python,mysql,apache-storm
0
2013-10-15T04:26:00.000
I have a few large hourly upload tables with RECORD fieldtypes. I want to pull select records out of those tables and put them in daily per-customer tables. The trouble I'm running into is that using QUERY to do this seems to flatten the data out. Is there some way to preserve the nested RECORDs, or do I need to rethin...
1
0
1.2
0
true
19,459,294
0
234
1
0
0
19,458,338
Unfortunately, there isn't a way to do this right now, since, as you realized, all results are flattened.
1
0
0
Bigquery: how to preserve nested data in derived tables?
2
python,google-bigquery
0
2013-10-18T20:17:00.000
I'm looking for a simple way to extract text from excel/word/ppt files. The objective is to index contents in whoosh for search with haystack. There are some packages like xlrd and pandas that work for excel, but they go way beyond what I need, and I'm not really sure that they will actually just print the cell's unfor...
1
2
1.2
0
true
19,500,864
0
631
1
0
0
19,500,625
I've done this "by hand" before--as it turns out, .(doc|ppt|xls)x files are just zip files which contain .xml files with all of your content. So you can use zipfile and your favorite xml parser to read the contents if you can find no better tool to do it.
1
0
1
Extract text from ms office files with python
1
python,django-haystack,whoosh
1
2013-10-21T17:07:00.000
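The zip-of-XML observation in the answer can be verified with stdlib tools only; this sketch builds a minimal word/document.xml in memory and pulls the text back out (a real .docx contains more parts, omitted here):

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# .docx files are zip archives; text lives in w:t elements inside
# word/document.xml. A minimal document is built here for the demo.
W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
doc_xml = (
    '<w:document xmlns:w="%s"><w:body><w:p><w:r>'
    "<w:t>hello world</w:t>"
    "</w:r></w:p></w:body></w:document>" % W
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", doc_xml)

with zipfile.ZipFile(buf) as z:
    root = ET.fromstring(z.read("word/document.xml"))
text = " ".join(t.text for t in root.iter("{%s}t" % W))
print(text)  # → hello world
```

The same pattern applies to .pptx (ppt/slides/slide*.xml) and .xlsx (xl/sharedStrings.xml), just with different member paths and namespaces.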
Rackspace has added the feature to select certain cloud servers (as hosts) while creating a user in a cloud database instance. This allows the specified user to be accessed, only from those cloud servers. So I would like to know whether there is an API available in pyrax(python SDK for Rackspace APIs) to accomplish thi...
1
0
1.2
0
true
19,761,340
1
70
1
0
0
19,585,830
I released version 1.6.1 of pyrax a few days ago that adds support for the 'host' parameter for users, as well as for Cloud Database backups.
1
0
0
Host Parameter While Creating a User in Rackspace Cloud Database Instance
2
python,mysql,database,cloud,rackspace-cloud
0
2013-10-25T09:17:00.000
We are building a datawarehouse in PostgreSQL. We want to connect to different data sources. Most data will come from ms access. We not not python experts (yet :-)). We found several database connectors. We want to use (as much as possible) standard SQL for our queries. We looked at pyodbc pscopg2. Given that we use MS...
0
1
0.197375
0
false
20,310,119
0
126
1
0
0
19,605,580
Your query syntax differences will depend on PostgreSQL extensions vs MS Access-specific quirks. psycopg2 and pyodbc will both provide a query interface using whatever SQL dialect (with quirks) the underlying db connections provide.
1
0
0
python postgresql ms access driver advice
1
python,postgresql,ms-access,psycopg2,pyodbc
0
2013-10-26T10:25:00.000
The goal is to find values in an Excel spreadsheet which match values in a separate list, then highlight the row with a fill color (red) where matches are found. In other words: Excel file A: source list (approximately 200 items) Excel file B: has one column containing the list we are checking; must apply fill color (...
2
0
0
0
false
20,011,728
0
961
1
0
0
19,612,872
I don't know what format your original list is in, but this sounds like a job for conditional formatting, if you can get the list into Excel. You can do conditional formatting based on a formula, and you can use a VLOOKUP() formula to do it.
1
0
0
Find text in Excel file matching text in separate file, then apply fill color to row
1
python,regex,excel,macos,applescript
0
2013-10-26T23:07:00.000
I'd like my Python script to read some data out of a postgresql dump file. The Python will be running on a system without postgresql, and needs to process the data in a dump file. It looks fairly straightforward to parse the CREATE TABLE calls to find the column names, then the INSERT INTO rows to build the contents. ...
3
1
1.2
0
true
19,703,149
0
4,660
1
0
0
19,638,019
Thanks for all the comments, even if they are mostly "don't do this!" ;) Given: The dump is always produced in the same format from a 3rd-party system I need to be able to automate reading it on another 3rd-party system without postgres I've gone for writing my own basic parser, which is doing a good enough job for w...
1
0
0
How to read postgresql dump file in Python
2
python,postgresql
0
2013-10-28T14:53:00.000
I'm using Django + Postgres. When I do a SQL query using psql, e.g. \d+ myapp_stories correctly shows the columns in the table But when I do SELECT * FROM myapp_stories, it returns nothing. But querying the same database & table from my python code returns data just fine. So there is data in the table. Any thoughts? I...
1
1
0.099668
0
false
19,665,116
1
82
2
0
0
19,664,732
I guess you forgot to enter a semicolon: SELECT * FROM myapp_stories;
1
0
0
SELECT using psql returns no rows even though data is there
2
python,django,postgresql
0
2013-10-29T17:05:00.000
I'm using Django + Postgres. When I do a SQL query using psql, e.g. \d+ myapp_stories correctly shows the columns in the table But when I do SELECT * FROM myapp_stories, it returns nothing. But querying the same database & table from my python code returns data just fine. So there is data in the table. Any thoughts? I...
1
1
0.099668
0
false
19,666,882
1
82
2
0
0
19,664,732
Prefix the table in your query with the schema, as the search_path might be causing your query (or psql) to look in a schema other than what you are expecting.
1
0
0
SELECT using psql returns no rows even though data is there
2
python,django,postgresql
0
2013-10-29T17:05:00.000
I have PyQt application which uses SQLite files to store data and would like to allows multiple users to read and write to the same database. It uses QSqlDatabase and QSqlTableModels with item views for reading and editing. As is multiple users can launch the application and read/write to different tables. The issu...
0
0
0
0
false
19,764,106
0
74
1
0
0
19,759,594
SQLite has no mechanism by which another user can be notified. You have to implement some communication mechanism outside of SQLite.
1
1
0
Signaling Cell Changes across multiple QSqlDatabase to the same SQliteFile
1
python,sql,qt,sqlite
0
2013-11-03T23:40:00.000
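One stdlib-only way to build the "outside mechanism" the answer calls for is cheap polling of PRAGMA data_version, whose value changes when another connection modifies the file; a sketch (assumes SQLite 3.7.17+):

```python
import os
import sqlite3
import tempfile

# A reader can poll PRAGMA data_version: its value changes whenever
# *another* connection commits a modification to the database file.
path = os.path.join(tempfile.mkdtemp(), "shared.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (x)")
writer.commit()

before = reader.execute("PRAGMA data_version").fetchone()[0]
writer.execute("INSERT INTO t VALUES (1)")
writer.commit()
after = reader.execute("PRAGMA data_version").fetchone()[0]
print(after != before)  # → True: the reader detects the change
```

In the PyQt application, a QTimer could run this check periodically and trigger a model refresh when the value moves.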
I'm trying to get Django running on OS X Mavericks and I've encountered a bunch of errors along the way, the latest way being that when runpython manage.py runserver to see if everything works, I get this error, which I believe means that it misses libssl: ImportError: dlopen(/Library/Frameworks/Python.framework/Versi...
2
2
0.07983
0
false
19,772,866
1
9,380
1
1
0
19,767,569
It seems that it's libssl.1.0.0.dylib that is missing. Mavericks comes with libssl 0.9.8. You need to install libssl via Homebrew. If the loader path points to /usr/lib/, you also need to symlink libssl from /usr/local/Cellar/openssl/lib/ into /usr/lib.
1
0
0
Django can't find libssl on OS X Mavericks
5
python,django,macos,postgresql
0
2013-11-04T12:15:00.000
I have two different python programs. One of the program uses the python BeautifulSoup module, the other uses the MySQLdb module. When I run the python files individually, I have no problem and the program run fine and give me the desired output. However I need to combine the two programs so to achieve my ultimate goa...
0
0
0
0
false
19,801,757
1
50
1
0
0
19,799,605
If you have 2 versions of Python installed on your system, then you've somehow installed one library in each of them. You either need to install both libraries in both versions of Python (which 2 separate versions of pip can do), or need to set up your PYTHONPATH environment variable to allow loading of modules from add...
1
0
0
Modules not Working across different python versions
1
python,python-2.7,beautifulsoup,mysql-python
0
2013-11-05T21:42:00.000
Situation: I have a requirement to use connection pooling while connecting to Oracle database in python. Multiple python applications would use the helper connection libraries I develop. My Thought Process: Here I can think of two ways of connection pooling: 1) Let connection pool be maintained and managed by database...
4
0
0
0
false
19,848,278
0
622
1
0
0
19,848,191
Let the database handle the pool. . . it's smarter than you'll be, and you'll leverage every bug fix/performance improvement Oracle's installed base comes up with.
1
0
0
Application vs Database Resident Connection Pool
1
python,oracle,connection-pooling
0
2013-11-07T22:40:00.000
I got one table in which modifications are made :-account_bank_statement, what other tables are needed for the point of sale and if i make a sale in which tables modifications are made.I want to make a sale but not through the pos provided.
0
0
0
0
false
19,897,059
1
49
1
0
0
19,892,934
All the sales done through the POS are registered in pos.order. If you are creating orders from an external source other than the POS, you can create the order in this table and call the confirm button action. The changes in all other tables will be done automatically.
1
0
0
In which tables changes are made in openERP when an items is sold at Point of sale
1
python,openerp
0
2013-11-10T17:46:00.000
I know there exists a plugin for nginx to load the config through perl. I was wondering, does anyone have any experience doing this without using a plugin? Possibly a fuse-backed Python script that queries a DB? I would really like to not use the perl plugin, as it doesn't seem that stable.
0
1
1.2
0
true
20,018,813
0
722
1
0
0
19,957,613
I haven't seen any working solution to solve your task, and a quick Google search doesn't give any useful information either (it doesn't look like HttpPerlModule could help with DB-stored configuration). It sounds like a good task to develop and contribute to the Nginx project!
1
0
0
Running Nginx with a database-backed config file
1
python,sql,configuration,nginx,fuse
1
2013-11-13T15:23:00.000
I'm trying to share an in-memory database between processes. I'm using Python's sqlite3. The idea is to create a file in /run/shm and use it as a database. Questions are: Is that safe? In particular: do read/write locks (fcntl) work the same in shm? Is that a good idea in the first place? I'd like to keep things simpl...
1
0
1.2
0
true
20,004,051
0
230
1
0
0
19,976,664
I've tested fcntl (in Python) with shm files and it seems that locking works correctly. Indeed, from process point of view it is a file and OS handles everything correctly. I'm going to keep this architecture since it is simple enough and I don't see any (major) drawbacks.
1
0
0
sqlite3 database in shared memory
1
python,sqlite,shared-memory
0
2013-11-14T11:38:00.000
I'm working on a web app in Python (Flask) that, essentially, shows the user information from a PostgreSQL database (via Flask-SQLAlchemy) in a random order, with each set of information being shown on one page. Hitting a Next button will direct the user to the next set of data by replacing all data on the page with ne...
1
1
0.099668
0
false
20,081,554
1
512
1
0
0
20,072,309
The easiest way is to do the random number generation in javascript at the client end... Tell the client what the highest number row is, then the client page keeps track of which ids it has requested (just a simple js array). Then when the "request next random page" button is clicked, it generates a new random number ...
1
0
0
Best way to show a user random data from an SQL database?
2
python,sql,flask,flask-sqlalchemy
0
2013-11-19T13:03:00.000
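A server-side variant of the answer's idea, sketched in plain Python: shuffle the ids once per session and walk through them, so no row repeats until every one has been shown (the ids are invented; in practice they come from the DB):

```python
import random

# Shuffle the ids once per user session and consume them one per
# "Next" click; nothing repeats until the pool is exhausted.
row_ids = list(range(1, 11))          # ids fetched from the DB in practice

session_order = row_ids[:]
random.shuffle(session_order)

seen = [session_order.pop() for _ in range(len(row_ids))]
print(sorted(seen) == row_ids)  # → True: every row shown exactly once
```

The shuffled list (or just a cursor into it) would live in the Flask session, mirroring what the answer's JavaScript array does on the client.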
I am wondering if anyone knows a way to generate a connection to a SQLite database in python from a StringIO object. I have a compressed SQLite3 database file and I would like to decompress it using the gzip library and then connect to it without first making a temp file. I've looked into the slqite3 library source, bu...
10
4
1.2
0
true
20,084,315
0
1,371
1
0
0
20,084,135
The Python sqlite3 module cannot open a database from a file number, and even so, using StringIO will not give you a file number (since it does not open a file, it just emulates the Python file object). You can use the :memory: special file name to avoid writing a file to disk, then later write it to disk once you are ...
1
0
0
SQLite3 connection from StringIO (Python)
1
python,sqlite,stringio
0
2013-11-19T23:10:00.000
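The ":memory:" route from the answer can be sketched with iterdump(), which replays the in-memory database into another connection when it is time to persist (a second :memory: connection stands in for a real file here):

```python
import sqlite3

# Build the database in RAM, then persist it later by replaying
# iterdump() into a file-backed connection.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x)")
mem.execute("INSERT INTO t VALUES (42)")

disk = sqlite3.connect(":memory:")   # stands in for a real file path
disk.executescript("\n".join(mem.iterdump()))

print(disk.execute("SELECT x FROM t").fetchone())  # → (42,)
```

For the gzip case, the dump SQL could be decompressed and fed to executescript on a :memory: connection the same way, avoiding any temp file.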
I have a MYSQL database with users table, and I want to make a python application which allows me to login to that database with the IP, pass, username and everything hidden. The thing is, the only IP which is allowed to connect to that mysql database, is the server itself (localhost). How do I make a connection to tha...
0
0
0
0
false
20,193,357
0
143
2
0
0
20,193,144
As I understood, you are able to connect only from the server itself (localhost), so to allow connections from any host do this: mysql> CREATE USER 'myname'@'%' IDENTIFIED BY 'mypass'; (a pattern like '%.mydomain.com' restricts access to hosts in your domain instead). I agree with @Daniel, no PHP script needed...
1
0
0
Secure MySQL Connection in Python
3
php,python,mysql,python-2.7
0
2013-11-25T12:29:00.000
I have a MYSQL database with users table, and I want to make a python application which allows me to login to that database with the IP, pass, username and everything hidden. The thing is, the only IP which is allowed to connect to that mysql database, is the server itself (localhost). How do I make a connection to tha...
0
1
0.066568
0
false
20,193,562
0
143
2
0
0
20,193,144
You should not make a connection from the user's computer. By default, most database configurations are done to allow only requests from the same server (localhost) to access the database. What you will need is this: A server side script such as Python, PHP, Perl, Ruby, etc to access the database. The script will be...
1
0
0
Secure MySQL Connection in Python
3
php,python,mysql,python-2.7
0
2013-11-25T12:29:00.000
win32com is a general library to access COM objects from Python. One of the major hallmarks of this library is ability to manipulate excel documents. However, there is lots of customized modules, whose only purpose it to manipulate excel documents, like openpyxl, xlrd, xlwt, python-tablefu. Are these libraries any bett...
3
9
1.2
0
true
20,263,978
0
3,031
1
0
0
20,263,021
They open and write Excel files directly and efficiently, for instance. win32com uses COM communication, which, while being very useful for certain purposes, needs to perform complicated API calls that can be very slow (so to speak, you are running code that controls Windows, which in turn controls Excel). openpyxl or others just ope...
1
0
0
What do third party libraries like openpyxl or xlrd/xlwt have, what win32com doesn't have?
1
python,excel,win32com,xlrd,openpyxl
1
2013-11-28T10:04:00.000
trying to import python-mysql.connector on Python 3.2.3 and receiving an odd stack. I suspect bad configuration on my ubuntu 12.04 install. vfi@ubuntu:/usr/share/pyshared$ python3 Python 3.2.3 (default, Sep 25 2013, 18:22:43) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for m...
4
0
0
0
false
65,242,155
0
20,264
2
0
0
20,275,176
pip3 install mysql-connector-python worked for me
1
0
0
ImportError: No module named mysql.connector using Python3?
3
mysql,python-3.x,python-module
0
2013-11-28T21:46:00.000
trying to import python-mysql.connector on Python 3.2.3 and receiving an odd stack. I suspect bad configuration on my ubuntu 12.04 install. vfi@ubuntu:/usr/share/pyshared$ python3 Python 3.2.3 (default, Sep 25 2013, 18:22:43) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for m...
4
5
1.2
0
true
20,275,797
0
20,264
2
0
0
20,275,176
Finally figured out what my problem was. python-mysql.connector was not a py3 package, and neither apt-get nor aptitude offered such a version. I managed to install it with pip3, which was not so simple on Ubuntu 12.04 because pip3 is only bundled with Ubuntu starting at 12.10 and the package does not have the same name under ...
1
0
0
ImportError: No module named mysql.connector using Python3?
3
mysql,python-3.x,python-module
0
2013-11-28T21:46:00.000
I'm getting different information for a particular thing and i'm storing those information in a dictionary e.g. {property1:val , property2:val, property3:val} now I have several dictionary of this type (as I get many things ..each dictionary is for a thing) now I want to save information in DB so there would be as ma...
0
0
0
0
false
20,305,193
1
861
2
0
0
20,304,863
You're doing it wrong! Make an object that represents a row in the database, and use __getitem__ so it can pretend to be a dictionary. Put your database logic in that. Don't go all NoSQL unless your tables are unrelated. Just by being tables they are ideal for SQL!
1
0
1
How to save Information in Database using BeautifulSoup
4
python,database,python-2.7,beautifulsoup,mysql-python
0
2013-11-30T19:46:00.000
I'm getting different information for a particular thing and i'm storing those information in a dictionary e.g. {property1:val , property2:val, property3:val} now I have several dictionary of this type (as I get many things ..each dictionary is for a thing) now I want to save information in DB so there would be as ma...
0
0
0
0
false
20,305,076
1
861
2
0
0
20,304,863
If your dictionaries all have the same keys, and each key always has the same value-type, it would be pretty straight-forward to map this to a relational database like MySQL. Alternatively, you could convert your dictionaries to objects and use an ORM like SQLAlchemy to do the back-end work.
1
0
1
How to save Information in Database using BeautifulSoup
4
python,database,python-2.7,beautifulsoup,mysql-python
0
2013-11-30T19:46:00.000
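The answer's suggestion of mapping same-keyed dictionaries to relational rows can be sketched as follows. This is a minimal illustration using the stdlib sqlite3 module standing in for MySQL; the table and column names are hypothetical, and named placeholders map each dict key to its column directly.

```python
import sqlite3

# Each scraped dictionary becomes one row, one column per key.
rows = [
    {"property1": "a", "property2": "b", "property3": "c"},
    {"property1": "x", "property2": "y", "property3": "z"},
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE things (property1 TEXT, property2 TEXT, property3 TEXT)"
)
# executemany with named placeholders pulls values out of each dict by key
conn.executemany(
    "INSERT INTO things VALUES (:property1, :property2, :property3)", rows
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM things").fetchone()[0]
print(count)  # 2
```

With MySQL the only changes would be the driver (`MySQLdb`/`mysql.connector`) and its `%(property1)s`-style placeholders.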
I am hosting a web app at pythonanywhere.com and experiencing a strange problem. Every half-hour or so I am getting the OperationalError: (2006, 'MySQL server has gone away'). However, if I resave my wsgi.py file, the error disappears. And then appears again some half-an-hour later... During the loading of the main pag...
2
4
1.2
0
true
20,309,286
1
2,432
1
0
0
20,308,097
It is normally because your MySQL network connection has been dropped, possibly by your network gateway/router, so you have two options. One is to always build a new MySQL connection before every query (not using a connection pool etc.). The second is to try and catch this error, then reconnect and query the db again.
1
0
0
Periodic OperationalError: (2006, 'MySQL server has gone away')
1
python,mysql,mysql-python,pythonanywhere
0
2013-12-01T02:33:00.000
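The answer's second option (catch the error, reconnect, retry) can be sketched generically. `OperationalError`, `FlakyConnection` and `reconnect` below are stand-ins simulating a timed-out MySQL connection, not real MySQLdb API; in real code you would catch the driver's own `OperationalError` and re-open the connection.

```python
# Simulated stand-ins for the MySQL driver's error and connection objects.
class OperationalError(Exception):
    pass

class FlakyConnection:
    """Simulates a connection that the server has silently dropped."""
    def __init__(self):
        self.alive = False  # first query fails, as after an idle timeout

    def query(self, sql):
        if not self.alive:
            raise OperationalError(2006, "MySQL server has gone away")
        return "result of " + sql

def reconnect(conn):
    conn.alive = True  # stands in for re-opening the real connection
    return conn

def query_with_retry(conn, sql, retries=1):
    """Run a query, reconnecting and retrying on OperationalError."""
    for attempt in range(retries + 1):
        try:
            return conn.query(sql)
        except OperationalError:
            if attempt == retries:
                raise  # give up after the last retry
            conn = reconnect(conn)

result = query_with_retry(FlakyConnection(), "SELECT 1")
print(result)  # result of SELECT 1
```

Keeping `retries` small matters: if the error persists after one reconnect, something other than an idle timeout is wrong.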
Have some programming background, but in the process of both learning Python and making a web app, and I'm a long-time lurker but first-time poster on Stack Overflow, so please bear with me. I know that SQLite (or another database, seems like PostgreSQL is popular) is the way to store data between sessions. But what's...
2
1
1.2
0
true
20,320,905
0
2,649
1
0
0
20,320,642
Aren't you over-optimizing? You don't need the best solution, you need a solution which is good enough. Implement the simplest one, using dicts; it has a fair chance to be adequate. If you test it and then find it inadequate, try SQLite or Mongo (both have downsides) and see if it suits you better. But I suspect that ...
1
0
0
What's faster: temporary SQL tables or Python dicts for session data?
1
python,python-2.7,sqlite,sqlalchemy
0
2013-12-02T04:05:00.000
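The "simplest solution using dicts" the answer recommends might look like the sketch below: an in-memory session store backed by a plain dict with per-entry expiry. The names (`SessionStore`, `ttl`) are illustrative, not from any particular library.

```python
import time

class SessionStore:
    """Dict-backed session store with a time-to-live per entry."""
    def __init__(self, ttl=1800):
        self.ttl = ttl          # seconds before a session expires
        self._data = {}

    def set(self, session_id, value):
        self._data[session_id] = (value, time.monotonic())

    def get(self, session_id):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        value, created = entry
        if time.monotonic() - created > self.ttl:
            del self._data[session_id]  # lazily evict expired entries
            return None
        return value

store = SessionStore(ttl=1800)
store.set("abc123", {"user": "alice"})
print(store.get("abc123"))  # {'user': 'alice'}
```

If this ever proves inadequate (e.g. multiple processes need the same sessions), the same interface can be re-backed by SQLite or an external store without touching callers.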
I have two databases (infact two database dump ... db1.sql and db2.sql) both database have only 1 table in each. in each table there are few columns (not equal number nor type) but 1 or 2 columns have same type and same value i just want to go through both databases and find a row from each table so that they both ha...
0
0
0
0
false
20,348,851
0
1,032
2
0
0
20,348,584
Not sure if I understand what it is you want to do. You want to match a value from a column in one table against a value from a column in another table? If you had the data in two tables in a database, you could do an inner join. Depending on how big the file is, you could use a manual comparison tool like WinMerge...
1
0
0
Compare two databases and find common value in a row
2
python,mysql,sql,database,mysql-python
0
2013-12-03T10:28:00.000
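The inner join the answer mentions can be sketched once both dumps are loaded into one database. This uses stdlib sqlite3 with hypothetical table/column names (`t1`, `t2`, `shared`); with the real MySQL dumps the JOIN clause is identical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (shared TEXT, extra1 TEXT);
    CREATE TABLE t2 (shared TEXT, extra2 TEXT);
    INSERT INTO t1 VALUES ('common', 'a'), ('only-in-t1', 'b');
    INSERT INTO t2 VALUES ('common', 'x'), ('only-in-t2', 'y');
""")

# INNER JOIN keeps only the rows whose shared column matches in both tables.
matches = conn.execute(
    "SELECT t1.shared, t1.extra1, t2.extra2 "
    "FROM t1 INNER JOIN t2 ON t1.shared = t2.shared"
).fetchall()
print(matches)  # [('common', 'a', 'x')]
```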
I have two databases (infact two database dump ... db1.sql and db2.sql) both database have only 1 table in each. in each table there are few columns (not equal number nor type) but 1 or 2 columns have same type and same value i just want to go through both databases and find a row from each table so that they both ha...
0
0
0
0
false
20,348,719
0
1,032
2
0
0
20,348,584
You can use a JOIN with alias names for the tables.
1
0
0
Compare two databases and find common value in a row
2
python,mysql,sql,database,mysql-python
0
2013-12-03T10:28:00.000
So I was making a simple chat app with python. I want to store user specific data in a database, but I'm unfamiliar with efficiency. I want to store usernames, public rsa keys, missed messages, missed group messages, urls to profile pics etc. There's a couple of things in there that would have to be grabbed pretty ofte...
0
0
1.2
0
true
20,382,525
0
45
1
0
0
20,380,661
The only answer possible at this point is 'try it and see'. I would start with MySQL (mostly because it's the 'lowest common denominator', freely available everywhere); it should do everything you need up to several thousand users, and if you get that far you should have a far better idea of what you need and where the...
1
0
0
efficient database file trees
1
python,database,performance,chat
1
2013-12-04T16:27:00.000
Is there a way to know how many rows were commited on the last commit on a SQLAlchemy Session? For instance, if I had just inserted 2 rows, I wish to know that there were 2 rows inserted, etc.
0
1
1.2
0
true
20,389,560
0
193
1
0
0
20,389,368
You can look at session.new, .dirty, and .deleted to see what objects will be committed, but that doesn't necessarily represent the number of rows, since those objects may set extra rows in a many-to-many association, polymorphic table, etc.
1
0
0
SQLAlchemy, how many rows were commited on last commit
1
python,sqlalchemy
0
2013-12-05T00:59:00.000
I have a giant (100Gb) csv file with several columns and a smaller (4Gb) csv also with several columns. The first column in both datasets have the same category. I want to create a third csv with the records of the big file which happen to have a matching first column in the small csv. In database terms it would be a ...
0
0
0
1
false
20,390,085
0
154
1
0
0
20,389,982
If you are only doing this once, your approach should be sufficient. The only improvement I would make is to read the big file in chunks instead of line by line. That way you don't have to hit the file system as much. You'd want to make the chunks as big as possible while still fitting in memory. If you will need to do...
1
0
0
Intersecting 2 big datasets
2
c#,python,database,bigdata
0
2013-12-05T02:00:00.000
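The streamed one-pass join the answer describes can be sketched like this: load the small file's key column into a set (4 GB of keys fits in memory on a reasonable machine), then stream the big file and keep only matching rows. `io.StringIO` stands in for the real files so the sketch is self-contained.

```python
import csv
import io

small_csv = io.StringIO("cat1,foo\ncat2,bar\n")          # the 4 GB file
big_csv = io.StringIO("cat1,1,x\ncat3,2,y\ncat2,3,z\n")  # the 100 GB file
out_csv = io.StringIO()                                   # the result

# First column of the small file, held in memory as a set for O(1) lookup.
keys = {row[0] for row in csv.reader(small_csv)}

writer = csv.writer(out_csv)
for row in csv.reader(big_csv):  # streamed row by row, never fully in memory
    if row[0] in keys:
        writer.writerow(row)

print(out_csv.getvalue())
```

With real files, replacing the `StringIO` objects with `open(..., newline="")` handles (and a generous `buffering=` value to reduce filesystem hits, as the answer suggests) is the only change needed.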
Just wondering how to store files in the google app engine datastore. There are lots of examples on the internet, but they are using blobstore I have tried importing db.BlobProperty, but when i put() the data it shows up as a <Blob> i think. It appears like there is no data Similar to None for a string Are there any ex...
0
0
0
0
false
20,424,484
1
71
1
1
0
20,421,965
Datastore has a limit on the size of objects stored there; that's why all examples and documentation say to use the blobstore or cloud storage. Do that.
1
0
0
How do I store files in googleappengine datastore
1
python,google-app-engine,blob,google-cloud-datastore
0
2013-12-06T10:47:00.000
I'm using a python script to run hourly scrapes of a website that publishes the most popular hashtags for a social media platform. They're to be stored in a database (MYSQL), with each row being a hashtag and then a column for each hour that it appears in the top 20, where the number of uses within that past hour is li...
0
2
1.2
0
true
20,452,854
0
48
1
0
0
20,452,796
Your design is poorly suited for a relational database such as MySQL. The best way to go about it is to either redesign your storage layout to a form that a relational database works well with (eg. make each row a (hashtag, hour) pair), or use something other than a relational database to store it.
1
0
0
Best way to handle a database with lots of dynamically added columns?
1
python,mysql
0
2013-12-08T11:17:00.000
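The redesign the answer proposes — one row per (hashtag, hour) pair instead of one column per hour — can be sketched as below, using stdlib sqlite3 in place of MySQL; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE hashtag_counts (
        hashtag TEXT NOT NULL,
        hour    TEXT NOT NULL,   -- e.g. '2013-12-08 11:00'
        uses    INTEGER NOT NULL,
        PRIMARY KEY (hashtag, hour)
    )
""")
conn.executemany(
    "INSERT INTO hashtag_counts VALUES (?, ?, ?)",
    [("#python", "2013-12-08 10:00", 120),
     ("#python", "2013-12-08 11:00", 95),
     ("#mysql",  "2013-12-08 11:00", 40)],
)

# No ALTER TABLE is ever needed when a new hour arrives; the history of
# one tag is a plain query instead of a scan across dynamic columns.
history = conn.execute(
    "SELECT hour, uses FROM hashtag_counts WHERE hashtag = ? ORDER BY hour",
    ("#python",),
).fetchall()
print(history)  # [('2013-12-08 10:00', 120), ('2013-12-08 11:00', 95)]
```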