Dataset schema (column: dtype, observed range):

Question: string, lengths 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: string, lengths 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: string, lengths 15 to 148
AnswerCount: int64, 1 to 32
Tags: string, lengths 6 to 90
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
Which is more efficient? Is there a downside to using open() -> write() -> close() compared to using logger.info()? PS. We are accumulating query logs for a university, so there's a chance that it becomes big data soon (considering that the min-max cap of query logs per day is 3GB-9GB and it will run 24/7 constantly...
3
4
1.2
0
true
36,819,569
0
1,554
2
0
0
36,819,540
Use the method that more closely describes what you're trying to do. Are you making log entries? Use logger.*. If (and only if!) that becomes a performance issue, then change it. Until then it's an optimization that you don't know if you'll ever need. Pros for logging: It's semantic. When you see logging.info(...), yo...
1
0
0
Python logging vs. write to file
2
python,logging,file-writing,bigdata
0
2016-04-24T05:15:00.000
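The logging-based approach recommended in the accepted answer can be sketched as follows; the logger name, file path, and format string are illustrative assumptions, not from the original question:

```python
import logging

# Module-level logger for the query-log use case (name and path are assumed).
logger = logging.getLogger("query_log")
logger.setLevel(logging.INFO)

# A FileHandler keeps the file open, so there is no per-call open()/close()
# overhead, unlike a manual open() -> write() -> close() on every entry.
handler = logging.FileHandler("queries.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("student queried: SELECT * FROM courses")  # one log entry per query
```

If volume becomes an issue, logging.handlers.RotatingFileHandler can cap file size without changing any call sites.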
My database on Amazon currently has only a little data in it (I am making a web app but it is still in development) and I am looking to delete it, make changes to the schema, and put it back up again. The past few times I have done this, I have completely recreated my elasticbeanstalk app, but there seems like there i...
1
4
1.2
0
true
36,820,728
1
5,860
1
0
0
36,820,171
The easiest way to accomplish this is to SSH to one of your EC2 instances, that has access to the RDS DB, and then connect to the DB from there. Make sure that your python scripts can read your app configuration to access the configured DB, or add arguments for DB hostname. To drop and create your DB, you must just ad...
1
0
0
How to drop table and recreate in amazon RDS with Elasticbeanstalk?
1
python,django,amazon-web-services,amazon-elastic-beanstalk,amazon-rds
0
2016-04-24T06:51:00.000
There is a way to define MongoDB collection schema using mongoose in NodeJS. Mongoose verifies the schema at the time of running the queries. I have been unable to find a similar thing for Motor in Python/Tornado. Is there a way to achieve a similar effect in Motor, or is there a package which can do that for me?
2
2
1.2
0
true
36,842,258
0
1,449
1
1
0
36,841,121
No there isn't. Motor is a MongoDB driver, it does basic operations but doesn't provide many conveniences. An Object Document Mapper (ODM) library like MongoTor, built on Motor, provides higher-level features like schema validation. I don't vouch for MongoTor. Proceed with caution. Consider whether you really need an O...
1
0
0
Is there a way to define a MongoDB schema using Motor?
2
python,mongodb,tornado-motor,motordriver
0
2016-04-25T12:50:00.000
When inserting rows via INSERT INTO tbl VALUES (...), (...), ...;, what is the maximum number of values I can use? To clarify, PostgreSQL supports using VALUES to insert multiple rows at once. My question isn't how many columns I can insert, but rather how many rows of columns I can insert into a single VALUES clause....
7
5
0.761594
0
false
36,879,218
0
5,268
1
0
0
36,879,127
As pointed out by Gordon, there doesn't appear to be a predefined limit on the number of value sets you can have in your statement. But you would want to keep this to a reasonable limit to avoid consuming too much memory at both the client and the server. The client only needs to build the string and the server needs ...
1
0
0
What is the maximum number of VALUES that can be put in a PostgreSQL INSERT statement?
1
python,postgresql,sqlalchemy,psycopg2
0
2016-04-27T02:17:00.000
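To keep the number of value sets per INSERT reasonable, as the answer above suggests, one common approach is to batch rows client-side; this is a sketch (the batch size is illustrative, and the actual psycopg2 execute call per batch is assumed):

```python
def chunked(rows, size):
    """Yield successive batches of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

rows = [(n, n * n) for n in range(10)]  # toy rows standing in for real data

# Each batch would become one INSERT ... VALUES (...), (...), ...; statement,
# bounding the memory used on both client and server.
batches = list(chunked(rows, 4))
```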
So basically I have this collection where objects are stored with a string parameter. example: {"string_": "MSWCHI20160501"} The last part of that string is a date, so my question is this: Is there a way of writing a mongo query which will take that string, convert part of it into an IsoDate object and then filter ob...
2
1
1.2
0
true
37,026,108
0
61
1
0
0
36,888,098
Depending on the schema of your objects, you could hypothetically write an aggregation pipeline that would first transform the objects, then filter based on the transformed values, and then return those filtered results. The main reason I would not recommend this, though, is that given a fairly large size for your d...
1
0
1
Write a query in MongoDB's client pymongo that converts a part of the string to a date on the fly
1
python,mongodb,pymongo
0
2016-04-27T11:11:00.000
Using openpyxl, the charts inserted into my worksheet have a border on them. Is there any way to set the style of the chart (pie/bar) to either via the styles.Style/styles.borders module to have no border, or at least a thin white border so that they would print borderless? The only options I see on the object is .sty...
1
0
0
0
false
36,988,605
0
1,846
1
0
0
36,988,384
This isn't easy but should be possible. You will need to work through the XML source of a suitably formatted sample chart and see which particular variables need setting or changing. openpyxl implements the complete chart API but this is unfortunately very complicated.
1
0
0
openpyxl - Ability to remove border from charts?
2
python,python-2.7,charts,openpyxl
0
2016-05-02T17:42:00.000
When I am trying to create a database on the server it shows an error: Error Code: 1006. Can't create database 'mmmm' (errno: 2). How can I solve this error? The server is MySQL.
0
0
0
0
false
37,046,039
0
1,074
1
0
0
36,995,801
Likely you do not have permission to create a database.
1
0
0
Error Code: 1006. Can't create database 'mmmm' (errno: 2)
1
mysql,mysql-workbench,mysql-python
0
2016-05-03T04:48:00.000
I'm using the Google Analytics API to make a Python program. For now it's capable of making specific queries, but... Is it possible to obtain a large JSON with all the data in a Google Analytics account? I've been searching and I haven't found any answer. Does someone know if it's possible and how?
1
0
0
0
false
37,029,398
0
488
1
0
1
36,998,698
Google Analytics stores a ton (technical term) of data; there are a lot of metrics and dimensions, and some of them (such as the users metric) have to be calculated specifically for every query. It's easy to underestimate the flexibility of Google Analytics, but the fact that it's easy to apply a carefully defined segm...
1
0
0
Query all data of a Google Analytics account
2
python,google-analytics,google-api,google-analytics-api
0
2016-05-03T07:59:00.000
I want to install MySQL-python-1.2.5 on an old Linux (Debian) embedded system. Python is already installed together with a working MariaDB/MySQL database. Problems: 1) the system is managed remotely and has no direct internet access; 2) bandwidth is minimal for installing further apps/libraries, so I would avoid doing th...
0
3
1.2
0
true
37,024,496
0
570
1
0
0
37,022,937
Have you tried the .deb files from here: packages.debian.org/wheezy/python-mysqldb?
1
0
0
How to install MySQL for python WITHOUT internet access
1
python,mysql
0
2016-05-04T08:56:00.000
I am working on a Django project with another developer. I had initially created a model which I had migrated and was synced correctly with a MySQL database. The other developer had later pulled the code I had written so far from the repository and added some additional fields to my model. When I pulled through his cha...
0
0
0
0
false
56,626,356
1
881
1
0
0
37,044,634
After you pull, do not delete the migrations file or folder. Simply do python manage.py migrate. If even after this there is no change in the database schema, then open the migrations file which came through the git pull and remove the migration code of the model whose table is not being created in the database. Then do ...
1
0
0
Django model changes cannot be migrated after a Git pull
2
python,django,git
0
2016-05-05T07:12:00.000
As the title suggests, I'm trying to use an environment variable in a config file for a Flask project (in Windows 10). I'm using a virtualenv, and so far I have tried to add set "DATABASE_URL=sqlite:///models.db" to /Scripts/activate.bat in the virtualenv folder. But it does not seem to work. Any suggestions?
4
0
0
0
false
37,236,689
1
2,106
1
0
0
37,046,677
The problem was that PyCharm does not activate the virtual environment when pressing the run button. It only uses the virtualenv python.exe.
1
0
0
Setting environment variables in virtualenv (Python, Windows)
2
python,windows,flask,pycharm,virtualenv
0
2016-05-05T09:10:00.000
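Independent of how the variable gets set (activate.bat, or the environment-variables field of a PyCharm run configuration), the Flask config can read it with a fallback; the default value here is illustrative:

```python
import os

# Falls back to a local SQLite URL when DATABASE_URL is not set in the
# environment (e.g. when the virtualenv activation script was bypassed).
database_url = os.environ.get("DATABASE_URL", "sqlite:///models.db")
```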
Afternoon, I have a really simple Python script in which the user is asked to input a share purchase price; the script looks up the price and returns whether the user is up or down. Currently the input and text output are done in the CMD prompt, which is not ideal. I would love to have in Excel a box for inputting the purchase price, a but...
0
0
0
0
false
37,052,827
0
890
1
0
0
37,052,571
For windows, the win32com package will allow you to control excel from a python script. It's not quite the same as embedding the code, but it will allow you to read and write from the spreadsheet.
1
0
1
Python script input and output in excel
3
python,excel,vba
0
2016-05-05T13:58:00.000
I have downloaded a PG database backup from my Heroku App, it's in my repository folder as latest.dump I have installed postgres locally, but I can't use pg_restore on the windows command line, I need to run this command: pg_restore --verbose --clean --no-acl --no-owner -j 2 -h localhost -d DBNAME latest.dump But the ...
4
5
0.761594
0
false
37,104,332
1
10,221
1
1
0
37,104,193
Since you're on Windows, you probably just don't have pg_restore on your path. You can find pg_restore in the bin of your PostgreSQL installation, e.g. c:\program files\PostgreSQL\9.5\bin. You can navigate to the correct location or simply add the location to your path so you won't always need to navigate there.
1
0
0
How to use pg_restore on Windows Command Line?
1
python,django,database,postgresql,heroku
0
2016-05-08T19:55:00.000
I have to compare data from two MySQL databases; I want to compare the two MySQL schemas and find out the difference between them. I have created two variables, Old_Release_DB and New_Release_DB. In Old_Release_DB I have stored the old release schema; then after some modifications, like I deleted some columns, added some columns, Re...
1
0
0
0
false
39,398,778
0
3,461
2
0
0
37,109,762
I use MySQL Workbench, which has a schema synchronization utility. Very handy when trying to apply changes from a development server to a production server.
1
0
0
How to compare two MySql database
3
javascript,python,mysql,linux,shell
0
2016-05-09T07:15:00.000
I have to compare data from two MySQL databases; I want to compare the two MySQL schemas and find out the difference between them. I have created two variables, Old_Release_DB and New_Release_DB. In Old_Release_DB I have stored the old release schema; then after some modifications, like I deleted some columns, added some columns, Re...
1
0
0
0
false
37,110,095
0
3,461
2
0
0
37,109,762
You can compare two databases by creating database dumps: mysqldump -u your-database-user your-database-name > database-dump-file.sql - if you're using a password to connect to a database, also add -p option to a mysqldump command. And then compare them with diff: diff new-database-dump-file.sql old-database-dump-file....
1
0
0
How to compare two MySql database
3
javascript,python,mysql,linux,shell
0
2016-05-09T07:15:00.000
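The dump-and-diff idea above can also be done inside Python with the stdlib difflib; the two dump strings here are toy stand-ins for real mysqldump output:

```python
import difflib

old_dump = """CREATE TABLE t (
  id INT,
  name VARCHAR(50)
);"""
new_dump = """CREATE TABLE t (
  id INT,
  name VARCHAR(100),
  email VARCHAR(100)
);"""

# unified_diff marks removed lines with '-' and added lines with '+'.
diff = list(difflib.unified_diff(old_dump.splitlines(), new_dump.splitlines(),
                                 fromfile="old_release", tofile="new_release",
                                 lineterm=""))
```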
I'm working on a project which involves a huge external dataset (~490Gb) loaded in an external database (MS SQL through django-pyodbc-azure). I've generated the Django models marked managed=False in their meta. In my application this works fine, but I can't seem to figure out how to run my unit tests. I can think of tw...
1
2
0.379949
0
false
37,130,919
1
299
1
0
0
37,115,070
After a day of staring at my screen, I found a solution: I removed managed=False from the models and generated migrations. To prevent actual migrations against the production database, I used my database router to prevent the migrations (return False in allow_migrate for the appropriate app and database). In ...
1
0
0
Unit tests with an unmanaged external read-only database
1
python,django,unit-testing
0
2016-05-09T11:50:00.000
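The router trick described above might look like the sketch below; the database alias and app label are hypothetical, not from the original post:

```python
class ExternalDatabaseRouter:
    """Keep Django from ever migrating the unmanaged external database."""

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if db == "external" and app_label == "warehouse":  # assumed names
            return False  # block migrations for this app on this database
        return None  # defer to other routers / default behaviour

router = ExternalDatabaseRouter()
```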
Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql. I get: ImportError: No module named sql I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook. Thanks in advance!
12
1
0.049958
0
false
70,772,212
0
19,909
3
0
0
37,149,748
I know this answer will be (very) late to contribute to the discussion but maybe it will help someone. I found out what worked for me by following Thomas, who commented above. However, with a bit of a caveat: I was using pyenv to set up and manage Python on my local machine. So when running sys.executable in a jupy...
1
0
1
IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql'
4
python,pip,ipython,ipython-sql
0
2016-05-10T22:04:00.000
Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql. I get: ImportError: No module named sql I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook. Thanks in advance!
12
0
0
0
false
54,436,119
0
19,909
3
0
0
37,149,748
I suspect you're using a different IPython Notebook kernel than the one you installed ipython-sql in. IPython Notebook can have more than one kernel. If that is the case, make sure you're in the right one first.
1
0
1
IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql'
4
python,pip,ipython,ipython-sql
0
2016-05-10T22:04:00.000
Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql. I get: ImportError: No module named sql I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook. Thanks in advance!
12
5
0.244919
0
false
43,972,590
0
19,909
3
0
0
37,149,748
I know it's been a long time, but I faced the same issue, and Thomas' advice solved my problem. Just outlining what I did here. When I ran sys.executable in the notebook I saw /usr/bin/python2, while the pip I used to install the package was /usr/local/bin/pip (to find out what pip you are using, just do which pip or s...
1
0
1
IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql'
4
python,pip,ipython,ipython-sql
0
2016-05-10T22:04:00.000
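The interpreter mismatch the answers above describe is easy to check from inside the notebook; compare this value with the interpreter your pip reports (pip -V):

```python
import sys

# The interpreter this kernel is actually running. If pip installed
# ipython-sql into a different interpreter, %load_ext sql cannot see it.
print(sys.executable)
```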
I'm on Odoo 9, I have an issue when launching the odoo server $odoo.py -r odoo -w password, the localhost:8069 doesn't load and I get an error on terminal "Peer authentication failed for user "odoo"". I already created a user "odoo" on postgres. When launching $odoo.py I can load the odoo page on browser but I can't create ...
4
5
0.321513
0
false
37,199,710
1
23,159
1
0
0
37,193,143
This helped me: sudo nano /etc/postgresql/9.3/main/pg_hba.conf, then add the line local all odoo trust, then restart Postgres: sudo service postgresql restart
1
0
0
Peer authentication failed for user "odoo"
3
python,postgresql,openerp,odoo-9
0
2016-05-12T16:57:00.000
I'm trying to use a PHP site button to kick off a python script on my server. When I run it, everything seems fine, on the server I can "ps ax" and see that the script is running. The Python script attempts to process some files and write the results to a MySQL database. When I ultimately check to see that the change...
0
0
0
0
false
37,213,822
0
40
1
0
0
37,213,568
You can check your web-server logs (/var/log/apache2/error.log if you have Apache as your web server).
1
0
0
Running a python (with MySQL) script from PHP
1
php,python,mysql
1
2016-05-13T15:10:00.000
I just created a script with openpyxl to update an xlsx file that we usually update manually every month. It works fine, but the new file loses all the graphs and images that were in the workbook. Is there a way to keep them?
5
5
1.2
0
true
37,215,619
0
3,060
1
0
0
37,214,983
openpyxl version 2.5 will preserve charts in existing files.
1
0
0
Openpyxl updating sheet and maintaining graphs and images
2
python,openpyxl
0
2016-05-13T16:25:00.000
I am trying to get a project from one machine to another. This project contains a massive log db table. It is too massive. So I exported and imported all db tables except this one via phpMyAdmin. Now if I run the migrate command, I expect Django to create everything missing. But it does not. How to make Django check for ...
1
1
0.197375
0
false
37,232,610
1
2,300
1
0
0
37,231,032
When you exported the data and re-imported it on the other database part of that package would have included the django_migrations table. This is basically a log of all the migrations successfully executed by django. Since you have left out only the log table according to you, that should really be the only table that'...
1
0
0
How to create missing DB tables in django?
1
python,django,django-migrations,django-database,django-1.9
0
2016-05-14T19:35:00.000
I am retrieving structured numerical data (float 2-3 decimal spaces) via http requests from a server. The data comes in as sets of numbers which are then converted into an array/list. I want to then store each set of data locally on my computer so that I can further operate on it. Since there are very many of these da...
1
1
0.066568
1
false
37,246,905
0
989
1
0
0
37,246,342
Have you considered HDF5? It's very efficient for numerical data, and is supported by both Python and Matlab.
1
0
1
Python, computationally efficient data storage methods
3
python,sql,arrays,mongodb,database
0
2016-05-16T03:38:00.000
I'm creating a little game of my own, and I need to solve a problem: where should I store the player's inventory: in the database as JSON (text field) or directly in database tables? Which method consumes less RAM, and which is faster? The game server will be written in Python.
1
1
1.2
0
true
37,282,081
0
285
2
0
0
37,281,218
For your requirements I'd prefer a NoSQL database, e.g. Mongo, with Redis for in-memory data storage; that gives you more flexibility and performance. It is object-based, which helps with faster fetching.
1
0
1
What is better - store player's inventory in JSON(text field in database) or database directly?
2
python,mysql,json,database
0
2016-05-17T16:01:00.000
I'm creating a little game of my own, and I need to solve a problem: where should I store the player's inventory: in the database as JSON (text field) or directly in database tables? Which method consumes less RAM, and which is faster? The game server will be written in Python.
1
1
0.099668
0
false
37,284,304
0
285
2
0
0
37,281,218
It's probably better to use database tables directly. That way you can take advantage of other database features such as foreign keys, unique constraints, triggers, and so on.
1
0
1
What is better - store player's inventory in JSON(text field in database) or database directly?
2
python,mysql,json,database
0
2016-05-17T16:01:00.000
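A minimal sketch of the tables-over-JSON argument, using stdlib sqlite3 (the table and column names are invented for illustration): constraints like UNIQUE are enforced by the database itself, which a JSON blob in a text field cannot do, and the database can aggregate without parsing a blob first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE player (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE inventory (
        player_id INTEGER REFERENCES player(id),
        item      TEXT NOT NULL,
        qty       INTEGER NOT NULL,
        UNIQUE (player_id, item)  -- one row per item per player
    )
""")
conn.execute("INSERT INTO player VALUES (1, 'alice')")
conn.execute("INSERT INTO inventory VALUES (1, 'sword', 2)")

# Aggregation happens in the database, not in application code.
total = conn.execute(
    "SELECT SUM(qty) FROM inventory WHERE player_id = 1").fetchone()[0]
```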
I want to use pymssql on a 24/7 Linux production app and am worried about stability. As soon as I hear ODBC I start to have reservations, especially on Linux. Does pymssql use ODBC or is it straight to freeTDS?
0
0
0
0
false
37,366,489
0
184
1
1
0
37,357,318
No, pymssql does not use ODBC.
1
0
0
Does Linux pymssql use ODBC?
1
python,pymssql
0
2016-05-20T23:34:00.000
I was running Mongo 2.4 with a replicaset which contains 1 primary, 1 secondary and an arbiter. We were using pymongo 2.6 and mongoengine 0.8.2. Recently, we performed an upgrade to Mongo 2.6, and also upgraded pymongo to 2.7.2 and mongoengine to 0.8.7. This setup worked fine for almost 12 hours, after which we started...
0
1
0.197375
0
false
37,915,282
0
586
1
0
0
37,383,545
So after a lot of struggle and a lot of load testing, we solved the problem by upgrading PyMongo to 2.8.1. PyMongo 2.7.2 is the first version to support MongoDB 2.6 and it sure does have some problems handling connections. Upgrading to PyMongo 2.8.1 helped us resolve the issue. With the same load, the connections do no...
1
0
0
Too many open connections with mongo 2.6 + pymongo 2.7.2
1
python,mongodb,database-connection,pymongo,mongoengine
0
2016-05-23T06:01:00.000
I have three programs running, one of which iterates over a table in my database non-stop (over and over again in a loop), just reading from it, using a SELECT statement. The other programs have a line where they insert a row into the table and a line where they delete it. The problem is, that I often get an error sqli...
0
0
0
0
false
37,414,407
0
369
1
0
0
37,413,919
You need to switch databases. I would use the following: PostgreSQL as the database and psycopg2 as the driver. The syntax is fairly similar to SQLite and the migration shouldn't be too hard for you.
1
0
0
Lock and unlock database access - database is locked
1
python,sqlite
0
2016-05-24T12:41:00.000
I am getting a UnicodeDecodeError in my Python script and I know that the unique character is not in Latin (or English), and I know what row it is in (there are thousands of columns). How do I go through my SQL code to find this unique character/these unique characters?
0
1
1.2
0
true
37,892,058
0
30
1
0
0
37,425,230
Do a binary search. Break the files (or the scripts, or whatever) in half, and process both files. One will (should) fail, and the other shouldn't. If they both have errors, doesn't matter, just pick one. Continue splitting the broken files until you've narrowed it down to the something more manageable that you can pro...
1
0
0
How do I find unique, non-English characters in a SQL script that has a lot of tables, scripts, etc. related to it?
1
python,sql
0
2016-05-24T22:57:00.000
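When the text fits in memory, a direct scan complements the binary-search approach above; this helper simply reports characters outside the ASCII range (the sample string is a toy stand-in for a real script line):

```python
def find_non_ascii(text):
    """Return (index, character) pairs for anything outside ASCII."""
    return [(i, ch) for i, ch in enumerate(text) if ord(ch) > 127]

sample = "SELECT 'café' AS name"  # toy line with one non-ASCII character
hits = find_non_ascii(sample)
```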
Is it possible [in any way, even a poorly hacked solution] to share an in-memory database between many processes? My application has one process that opens the in-memory database and the others are running only SELECT queries on the database. NOTE: I need a solution only for Python 2.7, and btw if it matters the module I use f...
3
3
0.53705
0
false
39,020,351
0
850
1
0
0
37,434,949
On Linux you can just use /dev/shm as the file location of your sqlite. This is a memory mounted drive suitable exactly for that.
1
0
0
Share in-memory database between processes sqlite
1
python,database,sqlite,multiprocessing
0
2016-05-25T10:52:00.000
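Within a single process, sqlite3's shared-cache URI lets several connections see one in-memory database; across OS processes this does not work, which is why the /dev/shm file trick above is the usual workaround. A sketch (the database name in the URI is arbitrary):

```python
import sqlite3

uri = "file:shared_db?mode=memory&cache=shared"

# The writer connection must stay open, or the in-memory DB is discarded.
writer = sqlite3.connect(uri, uri=True)
writer.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
writer.execute("INSERT INTO kv VALUES ('answer', '42')")
writer.commit()

# A second connection in the same process sees the same data.
reader = sqlite3.connect(uri, uri=True)
row = reader.execute("SELECT v FROM kv WHERE k = 'answer'").fetchone()
```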
I have a Python script which user's drag and drop KML files into for easy use. It takes the dropped file as sys.arg[1]. When entered into the command line as myScript.py Location.kml everything works fine. But when I drag and drop the file in an error is thrown saying no module named xlsxwriter. xlsxwriter is in the sa...
0
0
0
0
false
37,449,798
0
290
1
0
0
37,449,333
Thanks to eryksun this issue was solved! eryksun's solution worked perfectly and I found another reason it wasn't working. This was because when I dragged and dropped the file into the python script then ran os.getcwd(), no matter where the file was it returned C:\WINDOWS\system32. To counteract this wherever I had os.g...
1
0
0
Module not found when drag and drop Python file
1
python,windows,drag-and-drop,xlsxwriter
0
2016-05-25T23:41:00.000
I have a results analysing spreadsheet where i need to enter my raw data into a 8x6 cell 'plate' which has formatted cells to produce the output based on a graph created. This .xlsx file is heavily formatted with formulas for the analysis, and it is a commercial spreadsheet so I cannot replicate these formulas. I am u...
2
0
0
0
false
50,905,858
0
2,214
1
0
0
37,455,466
While Destrif is correct, xlutils uses xlwt, which doesn't support the .xlsx file format. However, you will also find that xlsxwriter is unable to write xlrd-formatted objects. Similarly, the python-excel-cookbook he recommends only works if you are running Windows and have Excel installed. A better alternative for thi...
1
0
0
How to enter values to an .xlsx file and keep formatting of cells
2
python,excel,formatting,xlwt,xlutils
0
2016-05-26T08:28:00.000
About MySQLdb: I know many processes should not use the same connection, because that will be a problem. BUT, running the code below, MySQL requests block; when many processes start to query SQL at the same time (the SQL is "select sleep(10)"), they run one by one. I did not find code about a lock/mutex in MySQLdb/mysql.c. Why is there ...
0
0
0
0
false
37,517,765
0
571
1
0
0
37,475,338
It works by coincidence. MySQL is a request-response protocol. When two processes send queries, they aren't mixed unless the query is large. The MySQL server will (1) receive one query, (2) send the response to (1), (3) receive the next query, (4) send the response to (3). When the first response was sent from the MySQL server, one of the two processes receives...
1
0
0
use multiprocessing to query in same mysqldb connect , block?
1
mysql,linux,mysql-python,pymysql
0
2016-05-27T05:15:00.000
I was writing a PL/Python function for PostgreSQl, with Python 2.7 and Python 3.5 already installed on Linux. When I was trying to create extension plpythonu, I got an error, then I fixed executing in terminal the command $ sudo apt-get install postgresql-contrib-9.3 postgresql-plpython-9.3. I understand that this is s...
0
0
1.2
0
true
42,190,637
0
278
1
1
0
37,487,072
Yes, it will, the package is independent from standard python installation.
1
0
0
PL/Python in PostgreSQL
1
python,postgresql,plpython
0
2016-05-27T15:19:00.000
I tried to use pandas to read an Excel sheet into a dataframe, but for floating point columns the data is read incorrectly. I use the function read_excel() to do the task. In Excel, the value is 225789.479905466 while in the dataframe the value is 225789.47990546614, which creates a discrepancy for me to import data from ...
4
0
0
1
false
37,596,921
0
4,533
1
0
0
37,492,173
Excel might be truncating your values, not pandas. If you export to .csv from Excel and are careful about how you do it, you should then be able to read with pandas.read_csv and maintain all of your data. pandas.read_csv also has an undocumented float_precision kwarg, that might be useful, or not useful.
1
0
0
loss of precision when using pandas to read excel
3
python,excel,pandas,dataframe,precision
0
2016-05-27T20:59:00.000
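The extra digits come from binary floating point, not from pandas; the classic demonstration (unrelated to the questioner's specific file) is:

```python
# A double cannot store most decimal fractions exactly; repr() shows the
# shortest string that round-trips the stored value.
x = 0.1 + 0.2
shortest = repr(x)

# Excel displays at most 15 significant digits of the very same double,
# which is why the two tools appear to disagree.
excel_style = format(x, ".15g")
```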
py.test sets up a test database. I'd like to use the real database set in the settings.py file (since I'm on a test machine with test data already). Would that be possible?
1
0
0
0
false
37,513,426
1
1,804
1
0
0
37,507,458
Yeah, you can override the settings in setUp, set the real database for the tests, and load your database fixtures. But I think it's not good practice, since you want to run your tests without modifying your "real" app env. You should try pytest-django; with this lib you can reuse, create, and drop your databases t...
1
0
0
Django py.test run on real database?
3
python,django,pytest
0
2016-05-29T07:46:00.000
So I have a database that stores a lot of information about many different objects; to simplify it, just imagine a database that stores information about the weights of 100 dogs and 100 cats over a period of a few years. I made a GUI, and I want one of the tabs to allow the user to enter a newly taken weight or change ...
0
0
1.2
0
true
37,579,094
0
2,414
1
0
0
37,578,763
Qt Sql is the SQL framework that comes with the Qt library. It provides basic (and classic) classes to access a database, execute queries and fetch the results. Qt can be recompiled to support various DBMSs such as MySQL, Postgres, etc. Sql Connector: I assume you refer to MySQL Connector? If so, it's a set of C++ classes to...
1
0
0
Which ORM to use for Python and MySql?
1
python,mysql,orm,sqlalchemy,pyqt
0
2016-06-01T21:05:00.000
Use case: I'm writing a backend using MongoDB (and Flask). At the moment this is not using any ORM like Mongoose/Mongothon. I'd like to store the _id of the user which created each document in the document. I'd like it to be impossible to modify that field after creation. The backend currently allows arbitrary upda...
2
3
0.291313
0
false
38,043,469
1
3,632
1
0
0
37,580,165
IMHO there is no known method to prevent updates inside Mongo. Even though you can control app behavior, someone will still be able to make the update outside the app. Mongo doesn't have triggers, which in the SQL world can act as data guards and prevent field changes. As you're not using an ODM, then all you ...
1
0
0
How to make a field immutable after creation in MongoDB?
2
python,json,node.js,mongodb,immutability
0
2016-06-01T23:03:00.000
From my understanding BigTable is a Column Oriented NoSQL database. Although Google Cloud Datastore is built on top of Google’s BigTable infrastructure I have yet to see documentation that expressively says that Datastore itself is a Column Oriented database. The fact that names reserved by the Python API are enforced ...
1
3
0.53705
0
false
37,609,672
1
429
1
1
0
37,602,604
Strictly speaking, Google Cloud Datastore is a distributed multi-dimensional sorted map. As you mentioned, it is based on Google BigTable; however, that is only a foundation. From a high-level point of view, Datastore actually consists of three layers. BigTable This is a necessary base for Datastore. Maps row key, column key a...
1
0
0
Is Google Cloud Datastore a Column Oriented NoSQL database?
1
python,google-app-engine,google-cloud-datastore
0
2016-06-02T21:41:00.000
Is it possible to access the same MySQL database using Python and PHP? Because I am developing a video searching website which is based on semantics. Purposely I have to use Python and JavaEE. So I have to make a database to store video data. But it should be accessed through both Python and JavaEE; I can use PHP for interfacin...
0
1
0.099668
0
false
37,759,626
0
96
2
0
0
37,757,927
It's a database. It doesn't care what language or application you're using too access it. That's one of the benefits of having standards like the MySQL protocol, SQL in general, or even things like TCP/IP: They allow different systems to seamlessly inter-operate.
1
0
0
can python and php access same mysqldb?
2
php,python,mysql
0
2016-06-10T22:22:00.000
Is it possible to access the same MySQL database using Python and PHP? Because I am developing a video searching website which is based on semantics. Purposely I have to use Python and JavaEE. So I have to make a database to store video data. But it should be accessed through both Python and JavaEE; I can use PHP for interfacin...
0
0
0
0
false
37,760,285
0
96
2
0
0
37,757,927
Like @tadman said, yes. All you care about is making a new connection and obtaining a cursor in each of your program (no matter what language). The cursor is what does what you want (analogous to executing an actual query in whatever program you're using).
1
0
0
can python and php access same mysqldb?
2
php,python,mysql
0
2016-06-10T22:22:00.000
As the title says, simple question... When to use pyodbc and when to use jaydebeapi in Python 2/3? Let me elaborate with a couple of example scenarios... If I were a solution architect and am looking at a Pyramid Web Server looking to access multiple RDBMS types (HSQLDB, Maria, Oracle, etc) with the expectation of hea...
1
2
1.2
0
true
37,793,124
0
1,110
1
0
0
37,792,956
Simple answer, until more details are given in the question: if you want to speak ODBC with the database, go with pyodbc, or for a pure Python solution, pypyodbc. Else, if you want to talk JDBC with the database, try jaydebeapi. This should depend more on the channel you want to use between Python and the database and les...
1
0
1
When to use pyodbc and when to use jaydebeapi in Python 2/3?
1
python,python-3.x,pyodbc,jaydebeapi
0
2016-06-13T14:54:00.000
I'm using a python driver (mysql.connector) and do the following: _db_config = { 'user': 'root', 'password': '1111111', 'host': '10.20.30.40', 'database': 'ddb' } _connection = mysql.connector.connect(**_db_config) # connect to a remote server _cursor = _connection.cursor(buffered=True) _cursor.execute("""SELE...
1
-1
-0.099668
0
false
37,839,311
0
1,546
1
0
0
37,809,163
Moving to MySQLdb (instead of mysql.connector) solved all the issues :-)
1
0
0
Call to MySQL cursor.execute() (Python driver) hangs
2
python,mysql
0
2016-06-14T10:15:00.000
I'm running a python 3.5 worker on heroku. self.engine = create_engine(os.environ.get("DATABASE_URL")) My code works on local, passes Travis CI, but gets an error on heroku - OperationalError: (psycopg2.OperationalError) FATAL: database "easnjeezqhcycd" does not exist. easnjeezqhcycd is my user, not database name. As...
1
1
1.2
0
true
46,558,758
1
2,039
2
0
0
37,910,066
Old question, but the answer seems to be that database_exists and create_database have special case code for when the engine URL starts with postgresql, but if the URL starts with just postgres, these functions will fail. However, SQLAlchemy in general works fine with both variants. So the solution is to make sure the...
1
0
0
Heroku SQLAlchemy database does not exist
3
python,postgresql,heroku,sqlalchemy,heroku-postgres
0
2016-06-19T17:44:00.000
I'm running a python 3.5 worker on heroku. self.engine = create_engine(os.environ.get("DATABASE_URL")) My code works on local, passes Travis CI, but gets an error on heroku - OperationalError: (psycopg2.OperationalError) FATAL: database "easnjeezqhcycd" does not exist. easnjeezqhcycd is my user, not database name. As...
1
0
0
0
false
62,351,512
1
2,039
2
0
0
37,910,066
I was getting the same error, and after checking several times I found that I had a trailing space in my DATABASE_URL, i.e. DATABASE_URL="url<space>". After removing the space my code ran perfectly fine.
1
0
0
Heroku SQLAlchemy database does not exist
3
python,postgresql,heroku,sqlalchemy,heroku-postgres
0
2016-06-19T17:44:00.000
I'm quite new to Python and trying to fetch data in HTML and saved to excels using xlwt. So far the program seems work well (all the output are correctly printed on the python console when running the program) except that when I open the excel file, an error message saying 'We found a problem with some content in FILEN...
0
-2
-0.379949
0
false
37,931,428
0
219
1
0
0
37,925,969
Seems like a caching issue. Try sheet.flush_row_data() every 100 rows or so?
1
0
0
Python XLWT: Excel generated by Python xlwt contains missing value
1
python,xlwt
0
2016-06-20T15:11:00.000
I have stored a pyspark sql dataframe in parquet format. Now I want to save it as xml format also. How can I do this? Solution for directly saving the pyspark sql dataframe in xml or converting the parquet to xml anything will work for me. Thanks in advance.
0
-1
-0.099668
0
false
37,989,050
0
992
1
0
1
37,945,725
You can map each row to a string with xml separators, then save as text file
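A sketch of that row-to-string idea as a plain function (element names here are illustrative); with Spark the call would be something like df.rdd.map(lambda r: row_to_xml(r, df.columns)).saveAsTextFile("out_xml"):

```python
from xml.sax.saxutils import escape

def row_to_xml(row, columns):
    """Render one row as a single-line XML element, escaping values
    so the output stays well-formed."""
    fields = "".join(
        "<{0}>{1}</{0}>".format(col, escape(str(val)))
        for col, val in zip(columns, row)
    )
    return "<row>{}</row>".format(fields)

xml_line = row_to_xml(("Alice", 30), ["name", "age"])
```

You would still need to add a root element and XML declaration around the saved lines to get a single valid XML document.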
1
0
0
How to save a pyspark sql DataFrame in xml format
2
xml,python-2.7,pyspark,spark-dataframe,parquet
0
2016-06-21T13:24:00.000
As the data retrieving is too slow when I am querying for the whole data at once in MongoDB using the query db.find({}, {'_id':0}). I am using PyMongo How can I retrieve all the documents faster using Python driver. I think indexing can make data retrieve faster but how to apply Indexing on whole collection to make db....
1
0
0
0
false
37,991,450
0
117
1
0
0
37,991,245
Indexing will drastically speed up finding subsets of documents within a collection, but will not (to my knowledge) speed up pulling the entire collection. The reason indexing speeds up finding subsets is that mongo does not have to iterate through each document to see whether it matches the query; instead mongo can just go...
1
0
1
How to apply indexing in mongodb to read the whole data at once faster
1
python,mongodb,pymongo
0
2016-06-23T12:10:00.000
I have an exelfile that I want to convert but the default type for numbers is float. How can I change it so xlwings explicitly uses strings and not numbers? This is how I read the value of a field: xw.Range(sheet, fieldname ).value The problem is that numbers like 40 get converted to 40.0 if I create a string from that...
1
0
0
0
false
70,226,530
0
3,124
1
0
0
37,996,435
In my case the workaround was to add one extra row at the end of the raw data: write any text in the column you want treated as str, save, load, and then delete that last row.
1
0
1
How can I read every field as string in xlwings?
2
python,converter,type-conversion,xlwings
0
2016-06-23T15:53:00.000
I am new at using raspberry pi. I have a python 3.4 program that connects to a database on hostinger server. I want to install mysql connector in raspberry pi.I searched a lot but I was not able to find answers . any help would be appreciated
4
1
0.066568
0
false
48,172,697
0
31,314
1
0
0
38,007,240
Just use sudo apt-get install python3-mysqldb and it works on the Pi 3.
1
0
0
installing mysql connector for python 3 in raspberry pi
3
mysql,python-3.x,raspberry-pi2
0
2016-06-24T06:50:00.000
I have a python program that accesses SQL databases with the database login currently encoded in base64 in a text file. I'd like to encode the login instead using MD5 and store it in a config file, but after some research, I couldn't find much on the topic. Could someone point me in the right direction on where to star...
0
1
1.2
0
true
38,016,382
0
397
1
0
0
38,016,242
MD5, unfortunately, is a hash signature protocol, not an encryption protocol. It is used to generate strings that detect even the slightest change to the value from which the MD5 hash was produced. But, by design, you cannot recover the value that was originally used to produce the signat...
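A quick illustration of the distinction, using only the standard library: MD5 produces a fixed one-way digest you cannot reverse, while base64 (what the question's text file currently uses) is a reversible encoding and therefore offers no protection at all.

```python
import base64
import hashlib

password = "s3cret"

# Hashing: one-way. The same input always yields the same 32-hex-char
# digest, but the digest cannot be turned back into the password.
digest = hashlib.md5(password.encode("utf-8")).hexdigest()

# Base64: a reversible encoding, not encryption. Anyone holding the
# encoded value can decode it and read the password.
encoded = base64.b64encode(password.encode("utf-8"))
decoded = base64.b64decode(encoded).decode("utf-8")
```

That is why a hashed login cannot be handed to the SQL server: the server needs the original credential, which a hash by definition cannot yield back.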
1
0
0
Encrypting a SQL Login in a Python program using MD5
2
python,sql,python-3.x,config,md5
0
2016-06-24T14:49:00.000
simple question - if I run apache 32bit version, on 64bit OS, with a lot of memory (32GB RAM). Does this mean all the memory will go to waste since 32bit apache can't use more then 3GB ram?
0
0
0
0
false
38,040,277
1
203
1
0
0
38,040,240
I would assume so. You should definitely go for a 64-bit version of Apache to make use of all the memory available.
1
0
0
Apache web server 32bit on 64bit computer
1
python,django,apache,32bit-64bit,32-bit
0
2016-06-26T15:45:00.000
I'm new to pythonanywhere. I wonder how to load data from local csv files (there are many of them, over 1,000) into a mysql table. Let's say the path for the folder of the csv files is d:/data. How can I write let pythonanywhere visit the local files? Thank you very much!
2
2
1.2
0
true
38,056,627
0
1,111
1
0
0
38,045,616
You cannot get PythonAnywhere to read the files directly off your machine. At the very least, you need to upload the file to PythonAnywhere first. You can do that from the Files tab. Then the link that Rptk99 provided will show you how to import the file into MySQL.
1
0
0
Pythonanywhere Loading data from local files
1
database,python-2.7,pythonanywhere
0
2016-06-27T03:46:00.000
In Python, what is a database "cursor" most like? A method within a class A Python dictionary A function A file handle I have searched on internet but I am not getting proper justification of this question.
1
2
0.197375
0
false
38,067,434
0
2,948
1
0
0
38,067,324
Probably it is most like a file handle. That does not mean that it is a file handle, and a cursor is actually an object - an instance of a Cursor class (depending on the actual db driver in use). The reason that it's similar to a file handle is that you can consume data from it, but (in general) you can't go back to pr...
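The resemblance is easy to see in code: like a file handle, a DB-API cursor is a forward-only iterator over rows, and once consumed it is exhausted (sqlite3 shown here, but other DB-API drivers behave the same way).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

cur.execute("SELECT n FROM t ORDER BY n")
seen = [row[0] for row in cur]  # consume rows like lines from a file handle

# Like a file handle at EOF, the cursor is now exhausted:
leftover = cur.fetchall()
conn.close()
```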
1
0
1
Database Cursor
2
python,database,cursor
0
2016-06-28T04:55:00.000
I am trying to migrate database from SQL Server is in 172.16.12.116 to MariaDB (Windows) is in 172.16.12.107 through MySQL Workbench 6.1.4. Source selection got succeeded. But when I am trying to connect to target I am getting this error: Error during Check target DBMS connection: MySQLError("Host '172.16.12.116' is n...
0
0
0
0
false
38,116,416
0
806
1
0
0
38,100,722
MySQL Workbench only works with MySQL servers, with the exception of migration sources (which can be Postgres, Sybase and others). What you can do however is first to migrate to a MySQL server and then dump the imported data and import that in MariaDB. Might require a few adjustments then.
1
0
0
Error during Check target DBMS connection
1
mysql,python-2.7,mysql-workbench,mariadb
0
2016-06-29T13:15:00.000
I have a list of variables with unicode characters, some of them for chemicals like Ozone gas: like 'O\u2083'. All of them are stored in a sqlite database which is read in a Python code to produce O3. However, when I read I get 'O\\u2083'. The sqlite database is created using an csv file that contains the string 'O\u20...
0
2
0.132549
0
false
38,146,103
0
1,263
1
0
0
38,106,808
SQLite allows you to read/write Unicode text directly. u'O\u2083' is two characters u'O' and u'\u2083' (your question has a typo: 'u\2083' != '\u2083'). I understand that u\2083 is not being stored in sqlite database as unicode character but as 6 unicode characters (which would be u,\,2,0,8,3) Don't confuse u'u\2083'...
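A small demonstration of the difference between the two strings: the literal backslash-escape sequence read from the CSV can be converted back into the single subscript character with the unicode_escape codec.

```python
# What you *want*: the 2-character string 'O' + SUBSCRIPT THREE.
wanted = "O\u2083"           # displays as O followed by subscript 3

# What the CSV actually contains: the 7 literal characters O \ u 2 0 8 3
# (the escape sequence itself is 6 characters).
raw = "O\\u2083"

# Decode the literal backslash-escape into the real character:
fixed = raw.encode("ascii").decode("unicode_escape")
```

The cleaner fix is to store the real character in the CSV in the first place, so no post-processing is needed when reading from SQLite.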
1
0
0
Reading unicode characters from file/sqlite database and using it in Python
3
python,sqlite,unicode
0
2016-06-29T17:51:00.000
i have been trying to connect to SQL Server (I have SQL Server 2014 installed on my machine and SQL Native Client 11.0 32bit as driver) using Python and specifically pyodbc but i did not manage to establish any connection. This is the connection string i am using: conn = pyodbc.connect('''DRIVER={SQL Server Native Cli...
0
0
0
0
false
38,146,099
0
228
1
0
0
38,145,048
I may be missing something here. Why don't you connect to your Oracle database as a SQL Server linked server (or the other way around) ?
1
0
0
Database Connection SQL Server / Oracle
1
python,sql-server,database,oracle,python-3.x
0
2016-07-01T12:08:00.000
I just started using Ipython in Pycharm. What's the shortcut for insert a cell for Ipython in Pycharm? To insert a cell between the 2nd and 3rd cell. To insert a cell at the end of code According to Pycharm documentation, way to add cell as follows. But it doesn't work for me. Anyone find the same issue? Since the ne...
2
1
0.099668
0
false
70,563,736
0
2,299
1
0
0
38,151,292
I had the same concern; you can just write #%% (for a code cell) or #%% md (for a markdown cell) anywhere you want and it will create a new cell there.
1
0
1
Shortcut for insert a cell below for Ipython in Pycharm?
2
ipython,pycharm
0
2016-07-01T17:49:00.000
I want to do a bulk insertion in SQL alchemy and would prefer to remove an index prior to making the insertion, reading it when the insertion is complete. I see adding and removing indexes is supported by Alembic for migrations, but is this possible with SQLAlchemy? If so, how?
1
1
1.2
0
true
38,949,945
0
1,562
1
0
0
38,238,858
The best method is to just execute SQL. In this case: session.execute("DROP INDEX ...")
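A sketch of the full drop/insert/recreate cycle using raw SQL, with stdlib sqlite3 standing in so it is runnable here; in SQLAlchemy the same statements would go through session.execute(...), and the Index object also exposes drop()/create() methods bound to an engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
conn.execute("CREATE INDEX ix_events_id ON events (id)")

# Drop the index so the bulk insert doesn't pay for index maintenance
# on every row...
conn.execute("DROP INDEX ix_events_id")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i, "row-%d" % i) for i in range(1000)],
)
# ...then rebuild it once, after the load.
conn.execute("CREATE INDEX ix_events_id ON events (id)")

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
conn.close()
```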
1
0
0
Drop and read index using SQLAlchemy
1
python,sqlalchemy,migration
0
2016-07-07T06:34:00.000
I'm currently using a mix of smart view and power query(sql) to load data into Excel models however my excel always crashes when smart view is used. I'm required to work in Excel but I'm know looking at finding a way to periodically load data from Essbase into my SQL server database and only use power query(sql) for al...
0
0
1.2
0
true
38,299,572
0
2,203
1
0
0
38,270,552
There are a couple of ways to go. The most straightforward is to export all of the data from your Essbase database using column export, then designing a process to load the data into SQL Server (such as using the import functionality or BULK IMPORT, or SSIS...). Another approach is to use the DataExport calc script co...
1
0
0
Load data from Essbase into SQL database
1
python,sql-server,excel,hyperion,essbase
0
2016-07-08T15:37:00.000
I work on a raspberry pi project and use Python + Kivy for such reasons: I read some string values comming from a device installed in a field every 300ms. As soon as I see certain value I trigger a python thread to run another function which takes the string and stores it in a list and timestamp it. My kivy app displa...
0
0
0
0
false
38,304,824
0
56
1
0
0
38,295,148
Both approaches have pros and cons. A database is designed to store and query data. You can query data easily (SQL) from multiple processes. If you don't have multiple processes and no complicated querys a database doesn't really offers that much. Maybe persistence if that is a concern for you. If you don't need the fe...
1
0
1
is it better to read from LIST or from Database?
1
python,database,kivy
0
2016-07-10T18:24:00.000
In the Python 3 docs, it states that the dbm module will use gdbm if it's installed. In my script I use from dbm.gnu import open as dbm_open to try and import the module. It always returns with the exception ImportError: No module named '_gdbm'. I've gone to the gnu website and have downloaded the latest version. I ins...
5
0
0
0
false
70,973,357
0
2,327
1
1
0
38,385,630
I got a similar issue, though I am not sure which platform you are using. Steps are: look for the file _gdbm.cpython-<python version>.so, for example _gdbm.cpython-39-darwin.so. Once you find the path, check which python version appears in the directory path. Try creating a venv with that same python version, then execute your code. Before this make sure you h...
1
0
1
Python: How to Install gdbm for dbm.gnu
1
python-3.x,gdbm
0
2016-07-14T23:13:00.000
Overview: I have data something like this (each row is a string): 81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M 3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:...
5
1
0.066568
1
false
38,389,853
0
358
1
0
0
38,388,799
You can sort with a key based on str.split(), e.g. key=lambda line: line.split(',')[1].
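Spelled out on rows shaped like the question's data, the split-based key sorts by the first timestamp field (index 1 after splitting on commas):

```python
rows = [
    "81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69",
    "3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36",
    "B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65",
]

# split(',')[1] extracts the first timestamp field; strip() removes the
# leading space. ISO-style timestamps sort correctly as plain strings.
rows_sorted = sorted(rows, key=lambda line: line.split(",")[1].strip())
first_mac = rows_sorted[0].split(",")[0]
```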
1
0
0
Sort A list of Strings Based on certain field
3
python,list,python-2.7,sorting
0
2016-07-15T05:57:00.000
I have a dataframe in Python. Can I write this data to Redshift as a new table? I have successfully created a db connection to Redshift and am able to execute simple sql queries. Now I need to write a dataframe to it.
32
5
0.141893
1
false
42,047,026
0
57,271
1
0
0
38,402,995
Assuming you have access to S3, this approach should work: Step 1: Write the DataFrame as a csv to S3 (I use AWS SDK boto3 for this) Step 2: You know the columns, datatypes, and key/index for your Redshift table from your DataFrame, so you should be able to generate a create table script and push it to Redshift to crea...
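The Redshift side of step 3 is a COPY from S3. A hedged sketch of assembling that statement (table, bucket, key, and IAM role below are all placeholders; in practice the DataFrame is first written with df.to_csv(...) and uploaded via boto3 before this statement is executed):

```python
def build_copy_statement(table, bucket, key, iam_role):
    """Build a Redshift COPY statement for a CSV previously written to S3.

    All identifiers here are illustrative placeholders.
    """
    return (
        "COPY {table} FROM 's3://{bucket}/{key}' "
        "IAM_ROLE '{role}' CSV IGNOREHEADER 1;"
    ).format(table=table, bucket=bucket, key=key, role=iam_role)

stmt = build_copy_statement(
    "analytics.events", "my-bucket", "exports/events.csv",
    "arn:aws:iam::123456789012:role/RedshiftCopy",
)
```

COPY is preferred over row-by-row INSERTs because Redshift loads the file in parallel across slices.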
1
0
0
How to write data to Redshift that is a result of a dataframe created in Python?
7
python,pandas,dataframe,amazon-redshift,psycopg2
0
2016-07-15T18:33:00.000
I have an Excel file(xlsx) that already has lots of data in it. Now I am trying to use Python to write new data into this Excel file. I looked at xlwt, xldd, xlutils, and openpyxl, all of these modules requires you to load the data of my excel sheet, then apply changes and save to a new Excel file. Is there any way to ...
1
7
1.2
0
true
38,463,454
0
5,172
1
0
0
38,463,258
This is not possible because XLSX files are zip archives and cannot be modified in place. In theory it might be possible to edit only a part of the archive that makes up an OOXML package but, in practice this is almost impossible because relevant data may be spread across different files.
1
0
0
Modifying and writing data in an existing excel file using Python
2
python,openpyxl,xlrd,xlwt,xlutils
0
2016-07-19T15:52:00.000
I already have an owl ontology which contains classes, instances and object properties. How can I map them to a relational data base such as MYSQL using a Python as a programming language(I prefer Python) ? For example, an ontology can contains the classes: "Country and city" and instances like: "United states and NYC...
1
0
0
0
false
38,534,519
0
1,036
1
0
0
38,498,290
Use the right tool for the job. You're using RDF, that it's OWL axioms is immaterial, and you want to store and query it. Use an RDF database. They're optimized for storing and querying RDF. It's a waste of your time to homegrow storage & query in MySQL when other folks have already figured out how best to do this. As ...
1
0
0
How can I map an ontology components to a relational database?
2
python,mysql,relational-database,semantic-web,ontology
0
2016-07-21T07:56:00.000
I need to create a dashboard based upon an excel table and I know excel has a feature for creating dashboards. I have seen tutorials on how to do it and have done my research, but in my case, the excel table on which the dashboard would be based is updated every 2 minutes by a python script. My question is, does the da...
0
1
0.197375
0
false
38,520,493
0
1,488
1
0
0
38,518,227
If the "dashboard" is in Excel and if it contains charts that refer to data in the current workbook's worksheets, then the charts will update automatically when the data is refreshed, unless the workbook calculation mode is set to "manual". By default calculation mode is set to "automatic", so changes in data will imme...
1
0
0
Can Excel Dashboards update automatically?
1
python,excel,dashboard
0
2016-07-22T04:33:00.000
Would that be possible to create programatically a new OGC WMS (1.1/1/3) service using: Python MapProxy Mapnik PostGIS/Postgres any script/gist or sample would be more then appreciated. Cheers, M
0
0
0
0
false
39,417,434
0
662
1
0
0
38,524,212
If you are looking to publish data in postgres as WMS, enable a tile cache, and use a more advanced rendering engine like mapnik, then I would say the one component missing is a GIS server. So if I am guessing your requirement correctly, as mentioned earlier, here is what the system design could be: Use p...
1
0
0
python script for creating maproxy OGC WMS service using Mapnik and PostGIS
2
python,postgis,mapnik
0
2016-07-22T10:32:00.000
I am writing a Django application that will have entries entered by users of the site. Now suppose that everything goes well, and I get the expected number of visitors (unlikely, but I'm planning for the future). This would result in hundreds of millions of entries in a single PostgreSQL database. As iterating through ...
1
1
1.2
0
true
38,587,539
1
66
1
0
0
38,585,719
Store one at a time until you absolutely cannot anymore, then design something else around your specific problem. SQL is a declarative language, meaning "give me all records matching X" doesn't tell the db server how to do this. Consequently, you have a lot of ways to help the db server do this quickly even when you h...
1
0
0
Storing entries in a very large database
1
python,django,database,postgresql,saas
0
2016-07-26T09:13:00.000
I have python 2.7.12 installed on my server. I'm using PuTTY to connect to my server. When running my python script I get the following. File "home/myuser/python/lib/python2.7/site-packages/peewee.py", line 3657, in _connect raise ImproperlyConfigured('pysqlite or sqlite3 must be installed.') peewee.Improperl...
1
0
0
0
false
38,715,072
0
955
1
0
0
38,589,963
Peewee will use either the standard library sqlite3 module or, if you did not compile Python with SQLite, Peewee will look for pysqlite2. The problem is most definitely not with Peewee on this one, as Peewee requires a SQLite driver to use the SqliteDatabase class... If that driver does not exist, then you need to inst...
1
0
0
Python - pysqlite or sqlite3 must be installed
1
python,sqlite
0
2016-07-26T12:30:00.000
I am attempting to run a python 2.7 program on HTCondor, however after submitting the job and using 'condor_q' to assess the job status, I see that the job is put in 'held'. After querying using 'condor_q -analyse jobNo.' the error message is "Hold reason: Error from Ubuntu: Failed to execute '/var/lib/condor/execute/...
0
0
0
0
false
38,618,691
0
197
1
1
0
38,593,488
Update: I managed to solve my problem. I needed to make sure that all directory paths were correct, as I found that HTCondor was looking within its own files for the resources my submission program used. I therefore needed to define a variable in the .py file that contains the directory to the resource.
1
0
0
Unable to submit python files to HTCondor- placed in 'held'
1
python,ubuntu
0
2016-07-26T15:03:00.000
Is there any way in SQLAlchemy by reflection or any other means to get the name that a column has in the corresponding model? For example i have the person table with a column group_id. In my Person class this attribute is refered to as 'group' is there a way to dynamically and generically getting this without importin...
0
0
0
0
false
38,652,751
0
937
1
0
0
38,639,948
Unfortunately it is most likely not possible...
1
0
0
SQLAlchemy get attribute name from table and column name
2
python,sqlalchemy
0
2016-07-28T14:55:00.000
I have a Flask webapp running on Pythonanywhere. I've recently been having a look at using Google Cloud's MYSQL service. It requires a list of IP addresses to be whitelisted for access. How can I find this? I've tried 50.19.109.98 which is the IP address for Python Anywhere, but unless there is a secondary issue thats ...
2
2
0.379949
0
false
38,704,698
0
1,260
1
0
0
38,686,528
Your code running on PythonAnywhere could be on a whole bunch of IPs that could change at any time. You could try to add all the IPs, but that might not be the best/most sustainable.
1
0
0
Pythonanywhere: getting the IP address for database access whitelist
1
pythonanywhere
0
2016-07-31T17:17:00.000
I am creating a web project where I take in Form data and write to a SQL database. The forms will be a questionnaire with logic branching. Due to the nature of the form, and the fact that this is an MVP project, I've opted to use an existing form service (e.g Google Forms/Typeform). I was wondering if it's feasible to...
0
0
0
0
false
39,074,108
1
638
1
0
0
38,703,892
You can add a script in the Google spreadsheet with an onsubmit trigger. Then you can do whatever you want with the submitted data.
1
0
0
Using Google Forms to write to multiple tables?
1
python,sql,google-forms
0
2016-08-01T16:37:00.000
I'm trying to serialize results from a SQLAlchemy query. I'm new to the ORM so I'm not sure how to filter a result set after I've retrieved it. The result set looks like this, if I were to flatten the objects: A1 B1 V1 A1 B1 V2 A2 B2 V3 I need to serialize these into a list of objects, 1 per unique value for A,...
0
0
0
0
false
38,882,242
0
591
1
0
0
38,708,645
Turns out I needed to use association tables and the joinedload() function. The documentation is a bit wonky but I got there after playing with it for a while.
1
0
0
How do I filter SQLAlchemy results based on a columns' value?
2
python,orm,sqlalchemy
0
2016-08-01T21:46:00.000
I wana draw some simple shapes in excel file like as "arrow, line, rectangle, oval" using XLSXWriter, but i can find any example to do it. Is it possible ? If not, what library of python can do that ? Thanks!
2
0
0
0
false
38,740,317
0
1,339
1
0
0
38,738,706
is it possible? Unfortunately not. Shapes aren't supported in XlsxWriter, apart from Textbox.
1
0
0
How to drawing shapes using XLSXWriter
1
python,xlsxwriter
0
2016-08-03T08:46:00.000
I have two separate programs; one counts the daily view stats and another calculates earning based on the stats. Counter runs first and followed by Earning Calculator a few seconds later. Earning Calculator works by getting stats from counter table using date(created_at) > date(now()). The problem I'm facing is that le...
0
0
0
0
false
38,770,424
1
53
1
0
0
38,766,962
You have to store a date column with your data and query against that stored date instead of using now().
1
0
0
How to solve mysql daily analytics that happens when date changes
1
python,mysql,analytics
0
2016-08-04T12:10:00.000
I want to develop a project that need a noSQL database. After searching a lot, I chose OrientDB. I want to make an API Rest that can connect to OrientDB. Firstly, I wanted to use Flask to develop but I don't know if it's better to use Java native driver between Python binary driver to connect with database. Anyone hav...
0
0
0
0
false
38,802,276
1
127
1
0
0
38,795,545
AFAIK on remote connection (with a standalone OrientDB server) performance would be the same. The great advantage of using the Java native driver is the option to go embedded. If your deployment scenario allows it, you can avoid the standalone server and use OrientDB embedded into your Java application, avoiding netwo...
1
0
0
Performance between Python and Java drivers with OrientDB
1
java,python,performance,orientdb
0
2016-08-05T18:15:00.000
I recently updated an entity model to include some extra properties, and noticed something odd. For properties that have never been written, the Datastore query page shows a "—", but for ones that I've explicitly set to None in Python, it shows "null". In SQL, both of those cases would be null. When I query an entity t...
1
4
1.2
0
true
38,815,611
1
184
1
1
0
38,814,666
You have to specifically set the value to NULL, otherwise it will not be stored in the Datastore and you see it as missing in the Datastore viewer. This is an important distinction. NULL values can be indexed, so you can retrieve a list of entities where date of birth, for example, is null. On the other hand, if you do...
1
0
0
Why does the Google App Engine NDB datastore have both "—" and "null" for unkown data?
1
python,google-app-engine,null,google-cloud-datastore,app-engine-ndb
0
2016-08-07T13:35:00.000
So I have this Python pyramid-based application, and my development workflow has basically just been to upload changed files directly to the production area. Coming close to launch, and obviously that's not going to work anymore. I managed to edit the connection strings and development.ini and point the development ins...
2
1
0.099668
0
false
38,886,655
1
166
1
0
0
38,843,404
Here's how I managed my last Pyramid app: I had both a development.ini and a production.ini. I actually had a development.local.ini in addition to the other two - one for local development, one for our "test" system, and one for production. I used git for version control, and had a main branch for production deploymen...
1
0
0
Trying to make a development instance for a Python pyramid project
2
python,pyramid,pylons
1
2016-08-09T06:17:00.000
I know it's possible to import Google BigQuery tables to R through bigrquery library. But is it possible to export tables/data frames created in R to Google BigQuery as new tables? Basically, is there an R equivalent of Python's temptable.insert_data(df) or df.to_sql() ? thanks for your help, Kasia
0
1
1.2
1
true
39,119,230
0
635
1
0
0
38,847,743
It looks like the bigrquery package does the job with insert_upload_job(). The package documentation says this function "is only suitable for relatively small datasets", but it doesn't specify any size limits. For me, it's been working for tens of thousands of rows.
1
0
0
Exporting R data.frame/tbl to Google BigQuery table
1
python,r,dataframe,google-bigquery
0
2016-08-09T10:03:00.000
I am running a Flask app on an Apache 2.4 server. The app sends requests to an API built by a colleague using the Requests library. The requests are in a specific format and constructed by data stored in a MySQL database. The site is designed to show the feedback from the API on the index, and the user can edit the dat...
0
0
0
0
false
38,869,412
1
83
1
0
0
38,854,382
Dirn was completely right, it turned out not to be an Apache issue at all. It was SQL Alchemy all along. I imagine that SQL Alchemy knows not to do any 'caching' when it requests data on the development server but decides that it's a good idea in production, which makes perfect sense really. It was not using the commit...
1
0
0
Apache server seems to be caching requests
1
apache,flask,python-requests
0
2016-08-09T15:06:00.000
Right now in Django, I have two databases: A default MySQL database for my app and an external Oracle database that, for my purposes, is read-only There are far more tables in the external database than I need data from, and also I would like to modify the db layout slightly. Is there a way I can selectively choose ...
0
-1
-0.197375
0
false
38,858,633
1
218
1
0
0
38,858,553
Basically write models that match what you want your destination tables to be and then write something to migrate data between the two. I'd make this a comment if I could but not enough rep.
1
0
0
How do you selectively sync a database in Django?
1
mysql,django,oracle,python-2.7
0
2016-08-09T19:04:00.000
I am writing an application that uses historical time series data to perform simulations. Is it better for application to load the data from the database into local data wrapper classes before executing the main loop (up to 30 years day by day) or connect to the database each day to pull the required data? Which is mor...
0
0
1.2
0
true
38,870,995
0
15
1
0
0
38,870,823
For current computers, 30 years of day-by-day data amounts to almost nothing if your daily data remains below, say, 10kB. Since your simulation may need efficient retrieval, especially if it combines data from different dates, I'd read all the data into memory in one query and then start processing. What is considered elega...
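The load-once pattern is simple: one query up front, rows indexed by date in memory, and the simulation loop never touches the database again (sqlite3 is used as a stand-in data source here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily (day TEXT PRIMARY KEY, value REAL)")
conn.executemany(
    "INSERT INTO daily VALUES (?, ?)",
    [("2016-01-01", 1.0), ("2016-01-02", 2.0), ("2016-01-03", 4.0)],
)

# One query, then everything lives in a dict keyed by date.
series = dict(conn.execute("SELECT day, value FROM daily"))
conn.close()

# The 30-year day-by-day loop now reads from memory only:
total = sum(series[day] for day in sorted(series))
```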
1
0
0
Database to wrapper-classes or direct connectivity to database for time-series simulation application?
1
python,database,matlab,oop,time-series
0
2016-08-10T10:28:00.000
I am using sqlite (development stage) database for my django project. I would like to store a dictionary field in a model. In this respect, i would like to use django-hstore field in my model. My question is, can i use django-hstore dictionary field in my model even though i am using sqlite as my database? As per my ...
2
3
1.2
0
true
38,875,962
1
912
1
0
0
38,875,927
hstore is specific to Postgres. It won't work on sqlite. If you just want to store JSON, and don't need to search within it, then you can use one of the many third-party JSONField implementations.
1
0
0
Django hstore field in sqlite
1
python,django,postgresql,sqlite,django-models
0
2016-08-10T14:15:00.000
What are some basic steps for troubleshooting and narrowing down the cause for the "django.db.utils.ProgrammingError: permission denied for relation django_migrations" error from Django? I'm getting this message after what was initially a stable production server but has since had some changes to several aspects of Dja...
53
4
0.197375
0
false
62,814,973
1
32,047
1
0
0
38,944,551
If you receive this error and are using the Heroku hosting platform its quite possible that you are trying to write to a Hobby level database which has a limited number of rows. Heroku will allow you to pg:push the database even if you exceed the limits, but it will be read-only so any modifications to content won't be...
1
0
0
Steps to Troubleshoot "django.db.utils.ProgrammingError: permission denied for relation django_migrations"
4
python,django,apache,postgresql,github
0
2016-08-14T17:06:00.000
I am working on a project that requires me to read a spreadsheet provided by the user and I need to build a system to check that the contents of the spreadsheet are valid. Specifically I want to validate that each column contains a specific datatype. I know that this could be done by iterating over every cell in the sp...
1
2
0.197375
0
false
39,077,066
0
109
2
0
0
38,961,360
In openpyxl you'll have to go cell by cell. You could use Excel's builtin Data Validation or Conditional Formatting, which openpyxl supports, for this. Let Excel do the work and talk to it using xlwings.
1
0
0
Use openpyxl to verify the structure of a spreadsheet
2
python,excel,openpyxl
0
2016-08-15T19:05:00.000
I am working on a project that requires me to read a spreadsheet provided by the user and I need to build a system to check that the contents of the spreadsheet are valid. Specifically I want to validate that each column contains a specific datatype. I know that this could be done by iterating over every cell in the sp...
1
1
1.2
0
true
39,088,757
0
109
2
0
0
38,961,360
I ended up just manually looking at each cell. I have to read them all into my data structures before I can process anything anyways so it actually made sense to check then.
1
0
0
Use openpyxl to verify the structure of a spreadsheet
2
python,excel,openpyxl
0
2016-08-15T19:05:00.000
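The accepted approach above (checking while reading) can be sketched as follows. This is a minimal, hypothetical example: the rows are plain tuples, such as those yielded by openpyxl's `ws.iter_rows(values_only=True)`, and the expected-type mapping is an invented schema, not something from the question.

```python
# Sketch: validate column datatypes while reading rows into your own
# data structures. Rows are plain tuples, e.g. as yielded by openpyxl's
# ws.iter_rows(values_only=True); the expected-type tuple below is a
# hypothetical example schema.

def validate_rows(rows, expected_types):
    """Return (row_index, col_index, value) for every cell whose value
    is not an instance of the expected type (empty cells are allowed)."""
    errors = []
    for r, row in enumerate(rows, start=1):
        for c, (value, expected) in enumerate(zip(row, expected_types), start=1):
            if value is not None and not isinstance(value, expected):
                errors.append((r, c, value))
    return errors

# Example: first column must be str, second int, third float
rows = [("alice", 3, 1.5), ("bob", "oops", 2.0)]
print(validate_rows(rows, (str, int, float)))  # -> [(2, 2, 'oops')]
```

Since the rows have to be read into memory for processing anyway, the validation pass adds essentially no extra I/O.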
I am facing a strange problem right now. I am using pypyodbc to insert data into a test database hosted by AWS. This database that I created was by hand and did not imitate all relations and whatnot between tables. All I did was create a table with the same columns and the same datatypes as the original (let's call it ...
0
3
1.2
0
true
39,027,828
0
54
1
0
0
39,027,681
It sounds like it's not pointing to the correct database. Have you made sure the connection information was changed to point to the correct DB? Is the server name correct, are the login credentials good, etc.?
1
0
0
Same code inserts data into one database but not into another
1
python,sql-server,pypyodbc
0
2016-08-18T21:18:00.000
I have a Django application where I use django-storages and amazon s3 to store images. I need to move those images to a different account: different user different bucket. I wanted to know how do I migrate those pictures? my main concern is the links in my database to all those images, how do I update it?
0
0
0
0
false
39,062,626
1
82
1
0
0
39,062,605
The URL is relative to the Amazon storage address you provide in your settings, so you only need to move the images to a new bucket and update your settings.
1
0
0
changing s3 storages with django-storages
1
django,amazon-s3,python-django-storages
0
2016-08-21T09:08:00.000
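If absolute S3 URLs were stored in the database (rather than keys relative to the bucket, as the answer assumes), a one-off rewrite helper can update them. This is a sketch under that assumption; the bucket names are hypothetical placeholders.

```python
# Sketch: rewrite absolute S3 URLs from the old bucket to the new one,
# leaving the object key untouched. Bucket URLs here are hypothetical.

OLD = "https://old-bucket.s3.amazonaws.com/"
NEW = "https://new-bucket.s3.amazonaws.com/"

def rewrite_url(url):
    """Point an image URL at the new bucket; leave other URLs alone."""
    if url.startswith(OLD):
        return NEW + url[len(OLD):]
    return url

print(rewrite_url("https://old-bucket.s3.amazonaws.com/media/cat.jpg"))
# -> https://new-bucket.s3.amazonaws.com/media/cat.jpg
```

Copying the objects themselves between accounts would be done separately (for example with the AWS CLI or boto3); this only fixes the stored links.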
I need to store some daily information in DynamoDB. Basically, I need to store user actions: UserID, StoreID, ActionID and Timestamp. Each night I would like to process the information generated that day, do some aggregations, some reports, and then I can safely delete those records. How should I model this? I mean th...
2
2
1.2
0
true
39,114,304
1
29
1
0
0
39,111,598
You can use RabbitMQ to schedule jobs asynchronously. This would be faster than multiple DB queries. Basically, this tool allows you to create a job queue (containing UserID, StoreID & Timestamp) from which workers can remove entries (at midnight if you want) and create your reports (or whatever your heart desires). This also allow...
1
0
0
How to build model in DynamoDB if each night I need to process the daily records and then delete them?
1
python,database-design,amazon-dynamodb
0
2016-08-23T22:22:00.000
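The queue-and-nightly-worker pattern from the answer can be sketched with the stdlib `queue.Queue` standing in for RabbitMQ. The action fields and the per-store aggregation below are illustrative stand-ins, not the asker's actual reports.

```python
# Sketch of the answer's pattern: actions are enqueued as they happen;
# a nightly worker drains the queue, aggregates, and discards them.
# queue.Queue stands in for RabbitMQ here so the sketch is runnable.

import queue
from collections import Counter

actions = queue.Queue()

def record_action(user_id, store_id, action_id, timestamp):
    actions.put((user_id, store_id, action_id, timestamp))

def nightly_report():
    """Drain the queue and count actions per store."""
    per_store = Counter()
    while not actions.empty():
        _user, store, _action, _ts = actions.get()
        per_store[store] += 1
    return per_store

record_action("u1", "s1", "view", "2016-08-23T10:00")
record_action("u2", "s1", "buy", "2016-08-23T11:00")
record_action("u1", "s2", "view", "2016-08-23T12:00")
print(nightly_report())  # -> Counter({'s1': 2, 's2': 1})
```

Because the worker consumes the queue as it aggregates, there is nothing left to delete afterwards, which is the point of the design: no nightly scan-and-delete pass over DynamoDB.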
For my app, I am using Flask, however the question I am asking is more general and can be applied to any Python web framework. I am building a comparison website where I can update details about products in the database. I want to structure my app so that 99% of users who visit my website will never need to query the d...
2
0
0
0
false
39,128,415
1
488
1
0
0
39,128,100
I had this exact question myself, with a PHP project, though. My solution was to use ElasticSearch as an intermediate cache between the application and database. The trick to this is the ORM. I designed it so that when Entity.save() is called it is first stored in the database, then the complete object (with all refere...
1
0
0
How do I structure a database cache (memcached/Redis) for a Python web app with many different variables for querying?
2
python,database,caching,redis,memcached
0
2016-08-24T16:03:00.000
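The write-through pattern the answer describes can be sketched with plain dicts standing in for the database and the cache (Redis, memcached, or ElasticSearch in the answer's setup). The entity shape is an invented example.

```python
# Minimal sketch of the write-through cache described above: save()
# persists to the database and then stores the complete object in the
# cache, so almost all reads never touch the database.

database = {}  # stands in for the relational DB
cache = {}     # stands in for Redis / memcached / ElasticSearch

def save(entity_id, entity):
    database[entity_id] = entity       # 1. persist to the database
    cache[entity_id] = dict(entity)    # 2. store the complete object in the cache

def load(entity_id):
    if entity_id in cache:             # the common case: served from cache
        return cache[entity_id]
    entity = database[entity_id]       # cache miss: fall back to the DB
    cache[entity_id] = dict(entity)    # repopulate for the next reader
    return entity

save("p1", {"name": "widget", "price": 9.99})
print(load("p1"))  # -> {'name': 'widget', 'price': 9.99}
```

The design choice here is that the cache holds fully resolved objects keyed by ID, so the many different query variables from the question become lookups against the cache rather than fresh database queries.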
Background I studied and found that BigQuery doesn't accept schemas defined by online tools (which have different formats, even though the meaning is the same). So, I found that if I want to load data (where the no. of columns keeps varying and increasing dynamically) into a table which has a fixed schema. Thoughts What I could do...
0
1
1.2
0
true
39,172,452
0
353
1
0
0
39,141,642
We are in the process of releasing a new feature that can update the schema of the destination table within a load/query job. With autodetect and the new feature you can directly load the new data to the existing table, and the schema will be updated as part of the load job. Please stay tuned. The current ETA is 2 week...
1
0
0
schema free solution to BigQuery Load job
1
python,google-analytics,google-bigquery,google-cloud-platform
0
2016-08-25T09:34:00.000
I am using Python with SQLite currently and wondering if it is safe to have multiple threads reading and writing to the database simultaneously. Does SQLite handle data coming in as a queue or have some sort of mechanism that will stop the data from getting corrupted?
2
3
0.291313
0
false
39,163,285
0
1,006
2
0
0
39,158,621
This is my issue too. SQLite uses a locking mechanism which prevents you from doing concurrent operations on a DB. But here is a trick I use when my DBs are small: you can select all your tables' data into memory, operate on it, and then update the original tables. As I said, this is just a trick and it does ...
1
0
1
Can you have multiple read/writes to SQLite database simultaneously?
2
python,multithreading,sqlite
0
2016-08-26T05:05:00.000
I am using Python with SQLite currently and wondering if it is safe to have multiple threads reading and writing to the database simultaneously. Does SQLite handle data coming in as a queue or have some sort of mechanism that will stop the data from getting corrupted?
2
2
0.197375
0
false
39,158,655
0
1,006
2
0
0
39,158,621
SQLite has a number of robust locking mechanisms to ensure the data doesn't get corrupted, but the problem is that if you have a number of threads reading and writing to it simultaneously, you'll suffer pretty badly in terms of performance as they all trip over each other. It's not intended to be used this way, eve...
1
0
1
Can you have multiple read/writes to SQLite database simultaneously?
2
python,multithreading,sqlite
0
2016-08-26T05:05:00.000
I would like to create a script in Python for logging into MKS Integrity and calling an already defined MKS query. Since I am a newbie in programming, I was wondering if there is any script example for the task. That would be a great help for getting me started. Thank you!
0
0
0
0
false
41,023,731
0
1,411
1
0
0
39,165,180
I can't help you with Python, but for MKS: to connect to a host: im connect --hostname=%host% --port=%port% ; to run a query: im runquery --hostname=%host% --port=%port% %query_name% . You can see the help for each command if you just run im command -?
1
0
0
Python script for MKS integrity query
1
python,mks-integrity
0
2016-08-26T11:23:00.000
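Bridging the answer's CLI commands back to the asker's Python requirement, one option is to drive the `im` client with `subprocess`. This is a sketch: the host, port, and query name are placeholders, and it assumes the MKS client is on PATH when actually run.

```python
# Sketch: build and (optionally) run the MKS "im runquery" command from
# the answer via subprocess. Host, port, and query name are placeholders.

import subprocess

def build_runquery_cmd(host, port, query_name):
    return ["im", "runquery",
            "--hostname=%s" % host,
            "--port=%s" % port,
            query_name]

cmd = build_runquery_cmd("mks.example.com", 7001, "My Open Items")
print(cmd)

# To actually execute it (requires the MKS client installed and on PATH):
# result = subprocess.run(cmd, capture_output=True, text=True, check=True)
# print(result.stdout)
```

An `im connect` call with the same `--hostname`/`--port` flags would be issued first, exactly as in the answer.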
I have encountered a problem that I can not figure out. I'm working on an application written in Python and a Sybase ASE database using sybpydb to communicate with the datbase. Now I need to update a post where one of the columns in the where clause is of numeric(10) data type. When selecting the post Python treats the...
0
0
0
0
false
39,202,310
0
152
1
0
0
39,168,251
You need to capture the actual SQL query text which is sent to the ASE server before any conclusions can be drawn.
1
0
0
Sybase numeric datatype and Python
1
python,sap-ase
0
2016-08-26T14:06:00.000
I have this huge Excel (xls) file that I have to read data from. I tried using the xlrd library, but it is pretty slow. I then found out that by converting the Excel file to a CSV file manually and reading the CSV file, it is orders of magnitude faster. But I cannot ask my client to save the xls as csv manually every time befor...
1
1
0.197375
0
false
39,360,812
0
582
1
0
0
39,179,880
If you need to read the file frequently, I think it is better to save it as CSV. Otherwise, just read it on the fly. On performance, I think win32com outperforms xlrd. However, considering cross-platform compatibility, xlrd is better. win32com is more powerful; with it, one can handle Excel in all ways (e.g....
1
0
0
XLRD vs Win32 COM performance comparison
1
python,excel,csv,win32com
1
2016-08-27T10:07:00.000
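The "save it as CSV" suggestion can be automated so the client never converts by hand. This sketch assumes the rows have already been read (for example with xlrd's `sheet.row_values`); the conversion itself is just the stdlib `csv` module.

```python
# Sketch: once the rows are read from the xls file (e.g. with xlrd's
# sheet.row_values for each row), writing them out as CSV takes a few
# lines with the stdlib csv module.

import csv
import io

def rows_to_csv(rows):
    """Serialize an iterable of row sequences to CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue()

rows = [["name", "qty"], ["widget", 3]]
print(rows_to_csv(rows))
```

Writing to a real file instead of `io.StringIO` is the same call against an open file object; the one-time conversion cost is then paid in code rather than by the client.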
I want to execute a script (probably written in Python) when an update query is executed on a MySQL database. The query is going to be executed from an external system written in PHP to which I don't have access, so I can't edit the source code. The MySQL server is installed on our machine. Any ideas how I can accomplish this, o...
1
0
1.2
0
true
39,244,221
0
54
1
0
0
39,243,626
No, it is not possible to call external scripts from MySQL. The only thing you can do is to add an ON UPDATE trigger that will write into some queue table. Then you will have the Python script POLLING the queue and doing whatever it's supposed to do with the rows it finds.
1
0
0
Executing script when SQL query is executed
1
php,python,mysql,database
0
2016-08-31T07:45:00.000
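The polling side of the trigger-plus-queue design can be sketched as below. sqlite3 stands in for MySQL so the sketch is self-contained; the queue table name and columns are hypothetical, and in the real setup the ON UPDATE trigger (not this script) inserts the rows.

```python
# Sketch of the polling script from the answer's design: the ON UPDATE
# trigger fills a queue table, and this loop drains it. sqlite3 stands
# in for MySQL; table and column names are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE update_queue (id INTEGER PRIMARY KEY, payload TEXT)")
# In production the trigger inserts this; here we seed it by hand.
conn.execute("INSERT INTO update_queue (payload) VALUES ('row 42 updated')")
conn.commit()

def poll_once(conn):
    """Fetch queued rows, process each, then delete it from the queue."""
    rows = conn.execute("SELECT id, payload FROM update_queue ORDER BY id").fetchall()
    for row_id, payload in rows:
        print("processing:", payload)  # run the external script/action here
        conn.execute("DELETE FROM update_queue WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)

print(poll_once(conn))  # -> prints the payload, then 1
# In production this runs in a loop with a sleep between iterations,
# e.g.: while True: poll_once(conn); time.sleep(5)
```

Deleting each row only after it is processed means a crash mid-batch leaves the unprocessed rows in the queue for the next poll.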