Dataset schema (column: type, observed range):
- Question: string, length 25 to 7.47k
- Q_Score: int64, 0 to 1.24k
- Users Score: int64, -10 to 494
- Score: float64, -1 to 1.2
- Data Science and Machine Learning: int64, 0 to 1
- is_accepted: bool, 2 classes
- A_Id: int64, 39.3k to 72.5M
- Web Development: int64, 0 to 1
- ViewCount: int64, 15 to 1.37M
- Available Count: int64, 1 to 9
- System Administration and DevOps: int64, 0 to 1
- Networking and APIs: int64, 0 to 1
- Q_Id: int64, 39.1k to 48M
- Answer: string, length 16 to 5.07k
- Database and SQL: int64, 1 to 1
- GUI and Desktop Applications: int64, 0 to 1
- Python Basics and Environment: int64, 0 to 1
- Title: string, length 15 to 148
- AnswerCount: int64, 1 to 32
- Tags: string, length 6 to 90
- Other: int64, 0 to 1
- CreationDate: string, length 23 to 23
---
Title: Update field with no-value
Tags: python,sql,postgresql,psycopg2 | Category: Database and SQL | Created: 2017-06-23T09:50:00.000
Q_Id: 44718379 | Q_Score: 0 | ViewCount: 878 | AnswerCount: 1 | Available Count: 1
A_Id: 44718475 | Users Score: 1 | Score: 0.197375 | is_accepted: false
Q: I have a table in a PostgreSQL database. I'm writing data to this table (using some computation with Python and psycopg2 to write results down in a specific column in that table). I need to update some existing cells of that column. Till now, I was able either to delete the complete row before writing this single ce...
A: You simply update the cell with the value NULL in SQL; psycopg2 will insert NULL into the database when you update the column with Python's None.
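The None-to-NULL mapping the answer describes holds for any DB-API driver. A minimal sketch using stdlib sqlite3, chosen only so the snippet runs without a server (with psycopg2 the placeholder is %s instead of ?; the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, value REAL)")
conn.execute("INSERT INTO results (id, value) VALUES (1, 3.14)")

# "Clearing" a single cell = updating it to NULL by passing Python's None.
conn.execute("UPDATE results SET value = ? WHERE id = ?", (None, 1))
row = conn.execute("SELECT value FROM results WHERE id = 1").fetchone()
print(row[0])  # None
```

The same statement with psycopg2 would read `cur.execute("UPDATE results SET value = %s WHERE id = %s", (None, 1))`.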
---
Title: PyQt QSqlDatabase: QMYSQL driver not loaded
Tags: python,mysql,qt,pyqt | Category: Database and SQL | Created: 2017-06-26T05:49:00.000
Q_Id: 44753724 | Q_Score: 0 | ViewCount: 1820 | AnswerCount: 2 | Available Count: 1
A_Id: 44992670 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: I am trying to connect to a MySQL database using PyQt5 on Python 3.6 for 64-bit Windows. When I call QSqlDatabase.addDatabase('MYSQL') and run my utility, it shows up with this error message: QSqlDatabase: QMYSQL driver not loaded QSqlDatabase: available drivers: QSQLITE QMYSQL QMYSQL3 QODBC QODBC3 QPSQL QPSQL7 This c...
A: The driver is listed as available, but you need to rebuild the MySQL driver from the Qt source code against the MySQL client library.
---
Title: Unable to INSERT with Pymysql (incremental id changes though)
Tags: python,mysql,pymysql | Category: Database and SQL | Created: 2017-06-26T08:56:00.000
Q_Id: 44756118 | Q_Score: 0 | ViewCount: 377 | AnswerCount: 1 | Available Count: 1
A_Id: 44758048 | Users Score: 1 | Score: 0.197375 | is_accepted: false
Q: When I'm using pymysql to perform operations on a MySQL database, it seems that all the operations are temporary and only visible to the pymysql connection, which means I can only see the changes through cur.execute('select * from qiushi'), and once I cur.close() and conn.close() and log back in using pymysql, everything ...
A: I solved the problem by myself. Autocommit is not enabled by default, so the changes must be committed after each SQL statement. Approach 1: call conn.commit() after cur.execute(). Approach 2: edit the connection config and pass autocommit=True.
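The visibility behavior in this record is common to DB-API drivers, not specific to pymysql. A sketch with stdlib sqlite3 and a temporary file (file path and table contents are invented) shows uncommitted rows being invisible to a second connection until commit():

```python
import os
import sqlite3
import tempfile

# A file-backed database so a second, independent connection can be opened.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE qiushi (joke TEXT)")
writer.commit()
writer.execute("INSERT INTO qiushi VALUES ('ha')")  # not committed yet

reader = sqlite3.connect(path)
before = reader.execute("SELECT COUNT(*) FROM qiushi").fetchone()[0]

writer.commit()  # now the row becomes visible to other connections
after = reader.execute("SELECT COUNT(*) FROM qiushi").fetchone()[0]
print(before, after)  # 0 1
```

With pymysql the equivalent is `conn.commit()` after each statement, or `pymysql.connect(..., autocommit=True)`.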
---
Title: I don't want psycopg2 to escape new line character (\n) in query result
Tags: python,python-3.x,psycopg2,psycopg | Category: Database and SQL, Python Basics and Environment | Created: 2017-06-26T15:56:00.000
Q_Id: 44763758 | Q_Score: 2 | ViewCount: 972 | AnswerCount: 2 | Available Count: 2
A_Id: 44766725 | Users Score: 0 | Score: 1.2 | is_accepted: true
Q: I have some records data with \n. When I do a SELECT query using psycopg2, the result comes with \n escaped like this: \\n. I want the result to have a literal \n in order to use splitlines().
A: The point is that the values were edited with pgAdmin III (incorrectly; the correct way is Shift+Enter to add a new line). I asked the user to use phpPgAdmin (easier for him, since multiline fields are edited with a textarea control) and now everything is working properly. So psycopg2 works fine; I'm sorry I thought it was the culpr...
---
Title: I don't want psycopg2 to escape new line character (\n) in query result
Tags: python,python-3.x,psycopg2,psycopg | Category: Database and SQL, Python Basics and Environment | Created: 2017-06-26T15:56:00.000
Q_Id: 44763758 | Q_Score: 2 | ViewCount: 972 | AnswerCount: 2 | Available Count: 2
A_Id: 44763994 | Users Score: -1 | Score: -0.099668 | is_accepted: false
Q: I have some records data with \n. When I do a SELECT query using psycopg2, the result comes with \n escaped like this: \\n. I want the result to have a literal \n in order to use splitlines().
A: Try this: text.replace("\\n", "\n"). Note that the replacement must be a real newline ("\n"), not the raw string r"\n", which is the same two characters as "\\n" and would replace the sequence with itself. Hope this helped :)
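A minimal sketch of the decode step (the sample string is invented): the stored value holds a literal backslash followed by 'n', so splitlines() finds nothing until that two-character sequence is turned into a real newline.

```python
raw = "first line\\nsecond line"    # backslash + 'n': two characters, not a newline
text = raw.replace("\\n", "\n")     # convert the escape sequence to a real newline
print(text.splitlines())            # ['first line', 'second line']
```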
---
Title: MySql: will it deadlock on to many insert - Django?
Tags: python,mysql,django | Category: Web Development, Database and SQL | Created: 2017-06-27T20:10:00.000
Q_Id: 44789046 | Q_Score: 1 | ViewCount: 397 | AnswerCount: 1 | Available Count: 1
A_Id: 44789450 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: I want to migrate from sqlite3 to MySQL in Django. I have worked in Oracle and MS SQL Server, and I know that I can catch an exception and retry over and over until it is done... However, this is an insert into the same table, where the data must be INSERTED right away, because users will not be happy waiting their turn on IN...
A: I don't think you can get a deadlock just from rapid insertions. Deadlock occurs when you have two processes that are each waiting for the other one to do something before they can make the change that the other one is waiting for. If two processes are just inserting, the database will simply process them in the order th...
---
Title: Python/Pandas/BigQuery: How to efficiently update existing tables with a lot of new time series data?
Tags: python,pandas,google-bigquery,google-cloud-platform,gsutil | Category: Data Science and Machine Learning, Database and SQL | Created: 2017-06-28T13:34:00.000
Q_Id: 44804051 | Q_Score: 0 | ViewCount: 642 | AnswerCount: 1 | Available Count: 1
A_Id: 44814853 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: I have one program that downloads time series (ts) data from a remote database and saves the data as csv files. New ts data is appended to old ts data. My local folder continues to grow and grow and grow as more data is downloaded. After downloading new ts data and saving it, I want to upload it to a Google BigQuery ta...
A: Consider breaking up your data into daily tables (or partitions). Then you only need to upload the CSVs from the current day. The script you have currently defined otherwise seems reasonable. Extract your new day of CSVs from your source of timeline data. Gzip them for fast transfer. Copy them to GCS. Load the new CSVs...
---
Title: is there a way to have mysqldump progress bar which shows the users the status of their backups?
Tags: mysql,python-2.7,mariadb | Category: Database and SQL | Created: 2017-06-29T11:57:00.000
Q_Id: 44824517 | Q_Score: 2 | ViewCount: 6336 | AnswerCount: 2 | Available Count: 1
A_Id: 54635754 | Users Score: -1 | Score: -0.099668 | is_accepted: false
Q: I'm developing an app for my company, using Python 2.7 and MariaDB. I have created a function which backs up our main database server to another database server. I use this command to do it: mysqldump -h localhost -P 3306 -u root -p mydb | mysql -h bckpIPsrv -P 3306 -u root -p mydb2. I want to know if it's possible to ...
A: Pipe the dump through pv to get a progress indicator: dumpcmd = "mysqldump -h " + DB_HOST + " -u " + DB_USER + " -p" + DB_USER_PASSWORD + " " + DB_NAME + " | pv | gzip > " + pipes.quote(BACKUP_PATH) + "/" + FILE_NAME + ".sql"
---
Title: How to retrieve the pg_config file from Azure postgresql Database
Tags: python,postgresql,azure,psycopg2 | Category: Web Development, Database and SQL | Created: 2017-07-04T16:59:00.000
Q_Id: 44911066 | Q_Score: 0 | ViewCount: 164 | AnswerCount: 1 | Available Count: 1
A_Id: 44915875 | Users Score: 2 | Score: 1.2 | is_accepted: true
Q: Trying to install a postgresql database which resides on Azure for my python flask application; but the installation of the psycopg2 package requires the pg_config file which comes when postgresql is installed. So how do I export the pg_config file from the postgresql database which also resides on Azure? Is pg_config all ...
A: You don't need the specific pg_config from the target database. It's only being used to compile against libpq, the client library for PostgreSQL, so you only need the matching PostgreSQL client installed on your local machine. If you're on Windows I strongly advise you to install a pre-compiled PostgreSQL. You can just...
---
Title: Could not format sql correctly in pymysql
Tags: python,mysql,pymysql | Category: Database and SQL | Created: 2017-07-05T22:30:00.000
Q_Id: 44937003 | Q_Score: 1 | ViewCount: 221 | AnswerCount: 2 | Available Count: 1
A_Id: 44937097 | Users Score: 1 | Score: 0.099668 | is_accepted: false
Q: What I want is to execute the sql select * from articles where author like "%steven%". For the sake of safety, I used it this way: cursor.execute('select * from articles where %s like %s', ('author', '%steven%')). Then the result is just empty; I don't get a syntax error, just an empty set. But I am pretty sure there is ...
A: The problem here is in fact a minor mistake. Thanks to @Asad Saeeduddin: when I used print cursor._last_executed to check what had happened, I found that what was in fact executed was SELECT * FROM articles WHERE 'title' LIKE '%steven%'. Look at the quotation marks around the column name; that's the reason why I got an empty set....
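The pitfall in this record applies to every DB-API driver: placeholders bind values, never identifiers. A runnable sketch with stdlib sqlite3 (table and data invented; with pymysql the placeholder is %s instead of ?):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (author TEXT, title TEXT)")
conn.execute("INSERT INTO articles VALUES ('steven king', 'it')")

# Wrong: the column name is bound as a string literal, so the query compares
# the constant 'author' to the pattern -- silently empty, not an error.
wrong = conn.execute("SELECT * FROM articles WHERE ? LIKE ?",
                     ("author", "%steven%")).fetchall()

# Right: whitelist the identifier, interpolate it, and bind only the value.
column = "author"
assert column in {"author", "title"}
right = conn.execute(f"SELECT * FROM articles WHERE {column} LIKE ?",
                     ("%steven%",)).fetchall()
print(len(wrong), len(right))  # 0 1
```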
---
Title: Django models update when backend database updated
Tags: python,django,django-models | Category: Web Development, Database and SQL | Created: 2017-07-06T16:33:00.000
Q_Id: 44954521 | Q_Score: 1 | ViewCount: 514 | AnswerCount: 1 | Available Count: 1
A_Id: 44954903 | Users Score: 4 | Score: 1.2 | is_accepted: true
Q: I am relatively new to Django. I have managed to create a basic app and all that without problems and it works fine. The question probably has been asked before. Is there a way to update existing Django models already mapped to existing databases when the underlying database is modified? To be specific, I have mysql da...
A: You don't need to update models if you just added new data. Models are related to a database structure only.
---
Title: Install pymssql 2.1.3 in Pycharm
Tags: sql,python-3.x,pycharm | Category: Database and SQL, Python Basics and Environment | Created: 2017-07-06T17:57:00.000
Q_Id: 44955927 | Q_Score: 4 | ViewCount: 3976 | AnswerCount: 5 | Available Count: 2
A_Id: 50968882 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I'm working with PyCharm in a project to read SQL DBs. I'm working on a Windows 10 64-bit workstation and I'm trying to install the module pymssql. I have already installed VS2015 to get all requirements, but now each time that I try to install I get the message: error: command 'C:\Program Files (x86)\Microsoft Visual S...
A: I had the same problem, but it was fixed this way: copy "rc.exe" and "rcdll.dll" from "C:\Program Files (x86)\Windows Kits\8.1\bin\x86" and paste them into "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin".
---
Title: Install pymssql 2.1.3 in Pycharm
Tags: sql,python-3.x,pycharm | Category: Database and SQL, Python Basics and Environment | Created: 2017-07-06T17:57:00.000
Q_Id: 44955927 | Q_Score: 4 | ViewCount: 3976 | AnswerCount: 5 | Available Count: 2
A_Id: 69630619 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I'm working with PyCharm in a project to read SQL DBs. I'm working on a Windows 10 64-bit workstation and I'm trying to install the module pymssql. I have already installed VS2015 to get all requirements, but now each time that I try to install I get the message: error: command 'C:\Program Files (x86)\Microsoft Visual S...
A: In my case, rolling back to Python 3.8 helped; I had the same problem on 3.10 x64.
---
Title: How to use matplotlib to plot pyspark sql results
Tags: python,pandas,matplotlib,pyspark-sql | Category: Data Science and Machine Learning, Database and SQL | Created: 2017-07-10T03:15:00.000
Q_Id: 45003301 | Q_Score: 15 | ViewCount: 30940 | AnswerCount: 2 | Available Count: 1
A_Id: 66233233 | Users Score: 1 | Score: 0.099668 | is_accepted: false
Q: I am new to pyspark. I want to plot the result using matplotlib, but am not sure which function to use. I searched for a way to convert the sql result to pandas and then use plot.
A: For small data, you can use .select() and .collect() on the pyspark DataFrame. collect will give a Python list of pyspark.sql.types.Row, which can be indexed. From there you can plot using matplotlib without Pandas; however, using Pandas dataframes via df.toPandas() is probably easier.
---
Title: JDBC limitation on lists
Tags: python,jdbc,teradata,snowflake-cloud-data-platform | Category: Database and SQL | Created: 2017-07-10T12:25:00.000
Q_Id: 45012005 | Q_Score: 1 | ViewCount: 215 | AnswerCount: 1 | Available Count: 1
A_Id: 46125739 | Users Score: 1 | Score: 0.197375 | is_accepted: false
Q: I am trying to write a data migration script moving data from one database to another (Teradata to Snowflake) using JDBC cursors. The table I am working on has about 170 million records and I am running into the issue where when I execute the batch insert a maximum number of expressions in a list exceeded, expected at ...
A: If your table has 170M records, then using JDBC INSERT to Snowflake is not feasible. It would perform millions of separate insert commands to the database, each requiring a round-trip to the cloud service, which would require hundreds of hours. Your most efficient strategy would be to export from Teradata into multip...
---
Title: How To Store Query Results (Using Python)
Tags: python,json,database,oracle | Category: Database and SQL | Created: 2017-07-11T14:00:00.000
Q_Id: 45036714 | Q_Score: 1 | ViewCount: 465 | AnswerCount: 1 | Available Count: 1
A_Id: 45044543 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: Background: I have an application written in Python to monitor the status of tools. The tools send their data from specific runs and it all gets stored in an Oracle database as JSON files. My Problem/Solution: Instead of connecting to the DB and then querying it repeatedly when I want to compare the current run data to...
A: As you described, querying the db too many times is not an option. OK, in that case I would do it the following way: when your program starts, you get the data for all tools as a set of JSON files per tool, right? OK. I am not sure how you get the data, whether by querying the tools directly or by querying the db... does not mat...
---
Title: Is Google Cloud Datastore or Google BigQuery better suited for analytical queries?
Tags: python,pandas,google-cloud-datastore,google-bigquery,google-cloud-platform | Category: Data Science and Machine Learning, Database and SQL | Created: 2017-07-12T15:00:00.000
Q_Id: 45061306 | Q_Score: 0 | ViewCount: 577 | AnswerCount: 3 | Available Count: 1
A_Id: 45395282 | Users Score: 0 | Score: 0 | is_accepted: false
Q: Currently we are uploading the data retrieved from vendor APIs into Google Datastore. Wanted to know what is the best approach with data storage and querying the data. I will need to query millions of rows of data and will be extracting custom engineered features from the data. So wondering whether I should load th...
A: As far as I can tell there is no support for Datastore in Pandas. This might affect your decision.
---
Title: Does anyone know if we can start a storedprocedure in Aurora based on SQS
Tags: python,amazon-web-services,amazon-sqs,amazon-aurora | Category: Web Development, Database and SQL | Created: 2017-07-14T08:14:00.000
Q_Id: 45098004 | Q_Score: 0 | ViewCount: 679 | AnswerCount: 3 | Available Count: 2
A_Id: 45124304 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I am trying to export data from Aurora into S3. I have created a stored procedure to perform this action. I can schedule this on the Aurora Scheduler to run at a particular point in time. However, I have multiple tables - could go up to 100; so I want my process controller, which is a python script sitting in Lambda, to ...
A: There isn't any built-in integration that allows SQS to interact with Aurora. Obviously you can do this externally, with a queue consumer that reads from the queue and invokes the procedures, but that doesn't appear to be relevant, here.
---
Title: Does anyone know if we can start a storedprocedure in Aurora based on SQS
Tags: python,amazon-web-services,amazon-sqs,amazon-aurora | Category: Web Development, Database and SQL | Created: 2017-07-14T08:14:00.000
Q_Id: 45098004 | Q_Score: 0 | ViewCount: 679 | AnswerCount: 3 | Available Count: 2
A_Id: 55167030 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I am trying to export data from Aurora into S3. I have created a stored procedure to perform this action. I can schedule this on the Aurora Scheduler to run at a particular point in time. However, I have multiple tables - could go up to 100; so I want my process controller, which is a python script sitting in Lambda, to ...
A: I have used Lambda with the alembic package to create schemas and structures. I know we could create users and execute other database commands; in the same way you could execute a stored procedure. Lambda could prove to be expensive, though; we could probably use a container to do it instead.
---
Title: Security implications of a pyramid/wsgi os.environ backdoor?
Tags: python,security,pyramid,environment,dev-to-production | Category: Web Development, Database and SQL, Other | Created: 2017-07-14T23:42:00.000
Q_Id: 45112983 | Q_Score: 2 | ViewCount: 64 | AnswerCount: 1 | Available Count: 1
A_Id: 45113051 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: In my pyramid app it's useful to be able to log in as any user (for test/debug, not in production). My normal login process is just a simple bcrypt check against the hashed password. When replicating user-submitted bug reports I found it useful to just clone the sqlite database and run a simple script which would chang...
A: In order to instantiate the server application with that debug feature in the environment, the attacker would have to have control over your webserver, most probably with administrative privileges. From an outside process, an attacker cannot modify the environment of the running server, which is loaded into memory, withou...
---
Title: Python Cassandra floating precision loss
Tags: python,cassandra,floating-point,precision,cassandra-python-driver | Category: Database and SQL | Created: 2017-07-17T08:23:00.000
Q_Id: 45139240 | Q_Score: 4 | ViewCount: 639 | AnswerCount: 2 | Available Count: 1
A_Id: 50065729 | Users Score: 1 | Score: 0.099668 | is_accepted: false
Q: I'm sending data back and forth between Python and Cassandra. I'm using the builtin float types both in my python program and as the data type for my Cassandra table. If I send the number 955.99 from python to Cassandra, in the database it shows 955.989999. When I send a query in python to return the value I just sent, it is now 955.9...
A: Also, if you cannot change your column definition for some reason, converting your float value to a string and passing the str to the cassandra-driver will also solve your problem. It will be able to generate the precise decimal values from str.
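The underlying float behavior in this record can be seen in pure Python with the stdlib decimal module; Cassandra is simply displaying more digits of the inexact binary value than Python's repr does:

```python
from decimal import Decimal

# Python's repr shows the shortest string that round-trips, hiding the error.
print(repr(955.99))        # '955.99'

# Constructing a Decimal from the float exposes the value actually stored.
print(Decimal(955.99))     # 955.9899999... (inexact binary representation)

# Constructing it from the *string* is exact -- the idea behind the answer.
print(Decimal("955.99"))   # 955.99
```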
---
Title: How to import CSV to an existing table on BigQuery using columns names from first row?
Tags: python,google-bigquery,import-from-csv | Category: Data Science and Machine Learning, Database and SQL | Created: 2017-07-17T23:23:00.000
Q_Id: 45155117 | Q_Score: 0 | ViewCount: 3297 | AnswerCount: 1 | Available Count: 1
A_Id: 45156763 | Users Score: 2 | Score: 0.379949 | is_accepted: false
Q: I have a python script that executes a gbq job to import a csv file from Google cloud storage to an existing table on BigQuery. How can I set the job properties to import to the right columns provided in the first row of the csv file? I set the parameter 'allowJaggedRows' to TRUE, but it imports columns in order regardless ...
A: When you import a CSV into BigQuery the columns will be mapped in the order the CSV presents them - the first row (titles) won't have any effect on the order in which the subsequent rows are read. To be noted: if you were importing JSON files, then BigQuery would use the name of each column, ignoring the order.
---
Title: Storing json objects in google datastore
Tags: json,python-2.7,google-app-engine,google-cloud-datastore | Category: Web Development, System Administration and DevOps, Database and SQL, Python Basics and Environment | Created: 2017-07-19T08:05:00.000
Q_Id: 45184482 | Q_Score: 1 | ViewCount: 1421 | AnswerCount: 1 | Available Count: 1
A_Id: 45204510 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I need to store json objects on the google cloud platform. I have considered a number of options: Store them in a bucket as a text (.json) file. Store them as text in datastore using json.dumps(obj). Unpack it into a hierarchy of objects in datastore. Option 1: Rejected because it has no organising principles other ...
A: I don't know what your exact searching needs are, but the datastore API allows for querying that is decently good, provided you give the datastore the correct indexes. Plus it's very easy to go take the entities in the datastore and pull them back out as .json files.
---
Title: Pandas read_sql
Tags: python,sql-server,pandas,sqlalchemy | Category: Data Science and Machine Learning, Database and SQL | Created: 2017-07-19T17:59:00.000
Q_Id: 45197851 | Q_Score: 0 | ViewCount: 752 | AnswerCount: 1 | Available Count: 1
A_Id: 45197852 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I encountered the following irregularities and wanted to share my solution. I'm reading a sql table from Microsoft SQL Server in Python using Pandas and SQLAlchemy. There is a column called "occurtime" with the following format: "2017-01-01 01:01:11.000". Using SQLAlchemy to read the "occurtime" column, everything was ...
A: I had to work around the datetime column in my SQL query itself just so SQLAlchemy/Pandas would stop reading it as a NaN value. In my SQL query, I used CONVERT() to convert the datetime column to a string. This was read with no issue, and then I used pandas.to_datetime() to convert it back into datetime. Anyone else...
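The round-trip this answer describes (CONVERT to text in SQL, parse back client-side) can be sketched with the stdlib; pandas.to_datetime would accept the same string, and the sample value is taken from the question:

```python
from datetime import datetime

# What CONVERT() on the SQL Server side hands back as plain text.
raw = "2017-01-01 01:01:11.000"

# Parse it back into a datetime client-side; pandas.to_datetime(raw)
# would produce the equivalent Timestamp.
occurtime = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f")
print(occurtime.isoformat())  # 2017-01-01T01:01:11
```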
---
Title: Any quick way to export data from Filemaker?
Tags: python,database,excel,filemaker,data-extraction | Category: Database and SQL | Created: 2017-07-20T04:25:00.000
Q_Id: 45205162 | Q_Score: 1 | ViewCount: 1900 | AnswerCount: 2 | Available Count: 1
A_Id: 45215938 | Users Score: 0 | Score: 0 | is_accepted: false
Q: As a user of the database, are there any quicker ways of exporting data from Filemaker using languages like python or java? Perhaps to an Excel. My job involves exporting selected data constantly from our company's Filemaker database. However, the software is super slow, and the design of our app is bad which makes sel...
A: You can also save records as a spreadsheet for use in Microsoft Excel. For more information, see Saving and sending records as an Excel file in the FileMaker Help file. Use export when you want to export records in the current found set or export in a format other than an Excel spreadsheet. Use Save as Excel when you w...
---
Title: Using python to send keys to active Excel Window
Tags: python,excel,keyboard | Category: Database and SQL | Created: 2017-07-20T20:56:00.000
Q_Id: 45225010 | Q_Score: 1 | ViewCount: 1564 | AnswerCount: 1 | Available Count: 1
A_Id: 55856869 | Users Score: 1 | Score: 0.197375 | is_accepted: false
Q: I need to send keys to excel to refresh formulas. What are my best options? I am already using Openpyxl, but it does not satisfy all my needs.
A: If this still helps: you can use win32com.client from pywin32. Sample code: import win32com.client; xl = win32com.client.Dispatch("Excel.Application"); xl.sendkeys("^+s") # saves the file. Use "%" to access Alt so you can reach hotkeys.
---
Title: How to create multiple DynamoDB entries under the same primary key?
Tags: python,amazon-web-services,amazon-dynamodb,alexa-skills-kit | Category: Web Development, Database and SQL | Created: 2017-07-21T01:26:00.000
Q_Id: 45227546 | Q_Score: 9 | ViewCount: 10290 | AnswerCount: 5 | Available Count: 2
A_Id: 45266031 | Users Score: 1 | Score: 0.039979 | is_accepted: false
Q: I am developing a skill for Amazon Alexa and I'm using DynamoDB for storing information about the users' favorite objects. I would like 3 columns in the database: Alexa userId, Object, Color. I currently have the Alexa userId as the primary key. The problem that I am running into is that if I try to add an entry into the...
A: DynamoDB is a NoSQL-like, document database, or key-value store; that means you may need to think about your tables differently from RDBMS. From what I understand from your question, for each user you want to store information about their preferences on a list of objects; therefore, keep your primary key simple, that...
---
Title: How to create multiple DynamoDB entries under the same primary key?
Tags: python,amazon-web-services,amazon-dynamodb,alexa-skills-kit | Category: Web Development, Database and SQL | Created: 2017-07-21T01:26:00.000
Q_Id: 45227546 | Q_Score: 9 | ViewCount: 10290 | AnswerCount: 5 | Available Count: 2
A_Id: 45693902 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I am developing a skill for Amazon Alexa and I'm using DynamoDB for storing information about the users' favorite objects. I would like 3 columns in the database: Alexa userId, Object, Color. I currently have the Alexa userId as the primary key. The problem that I am running into is that if I try to add an entry into the...
A: You cannot create multiple entries with the same primary key. Create a composite key (multiple attributes together as the primary key). Note that you cannot have multiple records with the same combination either.
---
Title: Django: How to disable Database status check at startup?
Tags: python,mysql,django,django-models,django-south | Category: Web Development, Database and SQL | Created: 2017-07-21T14:32:00.000
Q_Id: 45240311 | Q_Score: 6 | ViewCount: 2860 | AnswerCount: 2 | Available Count: 1
A_Id: 45638008 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: As far as I know, Django apps can't start if any of the databases set in settings.py are down at the start of the application. Is there any way to make Django "lazyload" the initial database connection? I have two databases configured, and one of them is a little unstable and sometimes it can be down for some seconds...
A: I did some more tests and the problem only happens when you are using the development server, python manage.py runserver. In that case, it forces a connection with the database. Using an actual WSGI server it doesn't happen, as @Alasdair informed. @JohnMoutafis, in the end I didn't test your solution, but that could work.
---
Title: Django Migration Database Column Order
Tags: python,django,postgresql,django-models,migration | Category: Web Development, Database and SQL | Created: 2017-07-23T03:45:00.000
Q_Id: 45261303 | Q_Score: 5 | ViewCount: 5299 | AnswerCount: 2 | Available Count: 2
A_Id: 45261424 | Users Score: 11 | Score: 1.2 | is_accepted: true
Q: I use Django 1.11, PostgreSQL 9.6 and the Django migration tool. I couldn't find a way to specify the column orders. In the initial migration, changing the ordering of the fields is fine, but what about migrations.AddField() calls? AddField calls can also happen for the foreign key additions for the initial migration....
A: AFAIK, there's no officially supported way to do this, because fields are supposed to be atomic and it shouldn't be relevant. However, it messes with my obsessive-compulsive side as well, and I like my columns to be ordered for when I need to debug things in dbshell, for example. Here's what I've found you can do: Mak...
---
Title: Django Migration Database Column Order
Tags: python,django,postgresql,django-models,migration | Category: Web Development, Database and SQL | Created: 2017-07-23T03:45:00.000
Q_Id: 45261303 | Q_Score: 5 | ViewCount: 5299 | AnswerCount: 2 | Available Count: 2
A_Id: 59406349 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I use Django 1.11, PostgreSQL 9.6 and the Django migration tool. I couldn't find a way to specify the column orders. In the initial migration, changing the ordering of the fields is fine, but what about migrations.AddField() calls? AddField calls can also happen for the foreign key additions for the initial migration....
A: I am not 100% sure about the PostgreSQL syntax, but this is what it looks like in SQL after you have created the database; I'm sure PostgreSQL would have an equivalent: ALTER TABLE yourtable.yourmodel CHANGE COLUMN columntochange columntochange INT(11) NOT NULL AFTER columntoplaceunder; Or if you have a GUI (mysql work...
---
Title: django sql explorer - user based query access
Tags: mysql,django,python-2.7 | Category: Web Development, Database and SQL | Created: 2017-07-26T07:50:00.000
Q_Id: 45320643 | Q_Score: 1 | ViewCount: 233 | AnswerCount: 1 | Available Count: 1
A_Id: 47134025 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I have a django sql explorer which is running with 5 queries and 3 users: Query1 Query2 Query3 Query4 Query5. I want to give access of Query1 and Query5 to user1, and Query4 and Query2 to user2, and likewise. My default url after somebody logs in is url/explorer; based on the user's permission he should see only those quer...
A: It's not possible to do with the default implementation. You need to download the source code and customize it as per your needs.
---
Title: Hide row in excel not working - pywin32
Tags: python,excel,vba,winapi,pywin32 | Category: Database and SQL | Created: 2017-07-26T18:33:00.000
Q_Id: 45334926 | Q_Score: 3 | ViewCount: 1117 | AnswerCount: 2 | Available Count: 2
A_Id: 45335421 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I am writing a small program in python with pywin32 that manipulates some data in excel, and I want to hide a row in order to obscure a label on one of my pivot tables. According to MSDN the proper syntax is Worksheet.Rows('Row#').EntireRow.Hidden = True. When I try this in my code nothing happens - no error, nor hidd...
A: I'm not familiar with python syntax, but in VBA you don't put quotes around the row number... Ex: myWorksheet.Rows(10).EntireRow.Hidden = True
---
Title: Hide row in excel not working - pywin32
Tags: python,excel,vba,winapi,pywin32 | Category: Database and SQL | Created: 2017-07-26T18:33:00.000
Q_Id: 45334926 | Q_Score: 3 | ViewCount: 1117 | AnswerCount: 2 | Available Count: 2
A_Id: 45335753 | Users Score: 1 | Score: 0.099668 | is_accepted: false
Q: I am writing a small program in python with pywin32 that manipulates some data in excel, and I want to hide a row in order to obscure a label on one of my pivot tables. According to MSDN the proper syntax is Worksheet.Rows('Row#').EntireRow.Hidden = True. When I try this in my code nothing happens - no error, nor hidd...
A: Turns out that a cell merge later in my program was undoing the hidden row - despite the fact that the merged cells were not in the hidden row.
---
Title: 32bit pyodbc for 32bit python (3.6) works with microsoft's 64 bit odbc driver. Why?
Tags: python-3.x,odbc,driver,32bit-64bit,pyodbc | Category: Database and SQL | Created: 2017-07-27T23:12:00.000
Q_Id: 45362440 | Q_Score: 0 | ViewCount: 1620 | AnswerCount: 1 | Available Count: 1
A_Id: 45365583 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: What I can observe: I am using Windows 7 64-bit. My code (which establishes an odbc connection with a SQL server on the network, simple reading operations only) is written in Python 3.6.2 32-bit. I pip installed pyodbc, so I assume that was 32-bit as well. I downloaded and installed the 64-bit "Microsoft® ODBC Driver 13.1 for SQL...
A: A 32-bit application can NOT invoke a 64-bit DLL, so 32-bit Python cannot talk to a 64-bit driver, for sure. The msodbc driver for SQL Server is in essence a DLL file: msodbcsql13.dll. I just found out (which is not even mentioned by Microsoft) that "odbc for sql server 13.1 x64" will install a 64-bit msodbcsql13.dll in System32...
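A quick stdlib check for which bitness an interpreter actually runs with, useful before matching it to a 32- or 64-bit driver as discussed in this record:

```python
import platform
import struct

# A 32-bit process has 4-byte pointers (32 bits) and cannot load a
# 64-bit DLL in-process; a 64-bit process has 8-byte pointers.
bits = struct.calcsize("P") * 8
print(bits, platform.architecture()[0])
```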
---
Title: How to access doctrine database made by php from python
Tags: php,python,symfony,frameworks | Category: Database and SQL, Other | Created: 2017-07-28T10:31:00.000
Q_Id: 45371167 | Q_Score: 0 | ViewCount: 153 | AnswerCount: 1 | Available Count: 1
A_Id: 45375722 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I have a web application made with Symfony2 (a PHP framework), so there is a MySQL database handled by Doctrine2 in PHP source code. Now I want to control this DB from a python script. Of course I can access the DB directly from python; however, it is complex and might break the Doctrine2 rules. Is there a good way to access the data...
A: You can try the Django ORM or SQLAlchemy, but the configuration of the models has to be done very carefully. Maybe you can write a parser from Doctrine2 config files to Django models. If you do, open source it please.
---
Title: psycopg2: DictCursor vs RealDictCursor
Tags: python,python-3.x,postgresql,psycopg2 | Category: Database and SQL, Python Basics and Environment | Created: 2017-07-30T11:32:00.000
Q_Id: 45399347 | Q_Score: 15 | ViewCount: 14075 | AnswerCount: 2 | Available Count: 1
A_Id: 54212351 | Users Score: -1 | Score: -0.099668 | is_accepted: false
Q: AFAIU and from docs, RealDictCursor is a specialized DictCursor that enables access to columns only from keys (aka column names), whereas DictCursor enables access to data both from keys or index number. I was wondering why RealDictCursor has been implemented if DictCursor offers more flexibility? Is it performance-wis...
A: class psycopg2.extras.RealDictCursor(*args, **kwargs): A cursor that uses a real dict as the base type for rows. Note that this cursor is extremely specialized and does not allow the normal access (using integer indices) to fetched data. If you need to access database rows both as a dictionary and a list, then use the g...
---
Title: Collecting Relational Data and Adding to a Database Periodically with Python
Tags: django,python-2.7,postgresql,orm | Category: Web Development, Database and SQL | Created: 2017-07-30T20:12:00.000
Q_Id: 45404241 | Q_Score: 0 | ViewCount: 81 | AnswerCount: 3 | Available Count: 1
A_Id: 45404605 | Users Score: 0 | Score: 1.2 | is_accepted: true
Q: I have a project that: fetches data from active directory; fetches data from different services based on active directory data; aggregates data; about 50000 rows have to be added to the database every 15 min. I'm using PostgreSQL as the database and Django as the ORM tool. But I'm not sure that Django is the right tool for suc...
A: For sure there are other ways, if that's what you're asking. But the Django ORM is quite flexible overall, and if you write your queries carefully there will be no significant overhead. 50000 rows in 15 minutes is not really that big. I am using the Django ORM with PostgreSQL to process millions of records a day.
---
Title: Formatting does not automatically update when using excel with win32com
Tags: python,excel,number-formatting,win32com | Category: Database and SQL | Created: 2017-08-01T16:43:00.000
Q_Id: 45443395 | Q_Score: 0 | ViewCount: 328 | AnswerCount: 1 | Available Count: 1
A_Id: 45443851 | Users Score: 2 | Score: 1.2 | is_accepted: true
Q: I am trying to generate a report in excel using win32com. I can get the information into the correct cells. However, one of my columns contains an ID number, and excel is formatting it as a number (displaying it in scientific notation). I have tried formatting the cell as text using sheet.Range(cell).NumberFormat = '@'...
A: Pass a single leading quote to Excel ahead of the number, for example "'5307245040001" instead of "5307245040001".
---
Title: xlwings VBA function settings edit
Tags: python,windows,xlwings | Category: Database and SQL | Created: 2017-08-02T08:47:00.000
Q_Id: 45455892 | Q_Score: 0 | ViewCount: 702 | AnswerCount: 1 | Available Count: 1
A_Id: 45456886 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I would like to use xlwings with OPTIMIZED_CONNECTION set to TRUE. I would like to modify the setting but somehow cannot find where to do it. I changed the _xlwings.conf sheet name in my workbook but this seems to have no effect. Also, I cannot find these settings in VBA as I think I am supposed to, under what is calle...
A: The add-in replaces the need for the settings in VBA in newer versions. One can debug the xlam module using "xlwings" as the password. This enabled me to realize that the OPTIMIZED_CONNECTION parameter is now set through the "USE UDF SERVER" keyword in the xlwings.conf sheet (which does work).
---
Title: Android-How can i attach SQLite database in python app
Tags: android,python-3.x,sqlite,kivy | Category: Database and SQL, GUI and Desktop Applications | Created: 2017-08-03T11:39:00.000
Q_Id: 45483128 | Q_Score: 1 | ViewCount: 310 | AnswerCount: 1 | Available Count: 1
A_Id: 45489681 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: So I am writing a python3 app with kivy and I want to have some data stored in a database using sqlite. The user needs to have access to that data from the first time he opens the app. Is there a way to make it so that when I launch the app, the user that downloads it will already have the data I stored, like ...
A: Just include the database file in the apk, as you would any other file.
---
Title: Weird behavior by db.cursor.execute()
Tags: python,sqlite,ipython | Category: Database and SQL | Created: 2017-08-04T04:16:00.000
Q_Id: 45498188 | Q_Score: 0 | ViewCount: 189 | AnswerCount: 2 | Available Count: 1
A_Id: 45498306 | Users Score: 1 | Score: 1.2 | is_accepted: true
Q: So I was trying to learn sqlite and how to use it from an IPython notebook, and I have a sqlite object named db. I am executing this command: sel = """SELECT * FROM candidates;""" and c = db.cursor().execute(sel). When I do c.fetchall() in the next cell it does print out all the rows, but when I run this same comman...
A: That is because .fetchall() leaves your cursor (c) pointing at the last row. If you want to select from your DB again, you should .execute again. Or, if you just want to use your fetched data again, you can store c.fetchall() in a variable.
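The cursor-exhaustion behavior in this record is easy to reproduce with stdlib sqlite3 (table name from the question, data invented): a cursor is a one-shot iterator over the result set, so a second fetchall() returns an empty list.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE candidates (name TEXT)")
conn.executemany("INSERT INTO candidates VALUES (?)", [("a",), ("b",)])

c = conn.execute("SELECT * FROM candidates")
first = c.fetchall()    # consumes every remaining row
second = c.fetchall()   # cursor exhausted -> empty list
print(len(first), len(second))  # 2 0
```

To iterate again, either re-execute the query or keep the first fetchall() result in a variable.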
---
Title: Django SQL get query returns a hashmap, how to access the value?
Tags: python,mysql,sql,django | Category: Web Development, Database and SQL | Created: 2017-08-04T07:43:00.000
Q_Id: 45500972 | Q_Score: 0 | ViewCount: 414 | AnswerCount: 3 | Available Count: 1
A_Id: 45501557 | Users Score: 0 | Score: 0 | is_accepted: false
Q: I am using django 1.10 and python 3.6.1. When executing get_or_none(models.Character, pk=0) with SQL's get method, the query returns a hashmap, i.e.: <Character: example>. How can I extract the value example? I tried .values(), I tried iterating, I tried .Character; nothing seems to work, and I can't find a solution in t...
A: @Daniel Roseman helped me understand the answer. SOLVED: What I was getting from the query was the Character model instance, so I couldn't access it through result.Character but through result.<field inside of Character>.
I have to SSH into 120 machines and make a dump of a table in databases and export this back on to my local machine every day, (same database structure for all 120 databases). There isn't a field in the database that I can extract the name from to be able to identify which one it comes from, it's vital that it can be i...
0
0
0
0
false
45,508,534
0
62
1
1
0
45,508,137
You can try to modify your command as follows: mysql -uroot -p{your_password} -e 'SELECT * FROM dfs_va2.artikel_trigger;' > /Users/admin/Documents/dbdump/$(hostname)_dump.csv" download:"/Users/johnc/Documents/Imports/$(hostname)_dump.csv" $(hostname) returns the current machine's name, so all your files should be unique (of cou...
1
0
0
Best way to automate file names of multiple databases
1
python,automation,fabric,devops
0
2017-08-04T13:28:00.000
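Since no field in the database identifies the source machine, the hostname-based naming from the answer above can also be done on the Python side; a small sketch (the base directory and the date suffix are illustrative assumptions):

```python
import socket
from datetime import date

def dump_filename(base_dir="/tmp/dbdump"):
    # Embed the machine's hostname (and the day, since the dump runs daily)
    # in the name so files from all 120 machines stay distinguishable.
    host = socket.gethostname()
    return f"{base_dir}/{host}_{date.today().isoformat()}_dump.csv"

name = dump_filename()
```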
I am trying to include sqlite3 in an electron project I am getting my hands dirty with. I have never used electron, nor Node before, excuse my ignorance. I understand that to do this on Windows, I need Python installed, I need to download sqlite3, and I need to install it. As per the NPM sqlite3 page, I am trying to i...
0
0
0
0
false
45,533,423
0
259
1
0
0
45,527,497
This has been resolved.... Uninstalled Python 2.7.13. Reinstalled, added path to PATH variable again, now command 'python' works just fine...
1
0
0
Failing to install sqlite3 plugin for electron project on windows
1
python-2.7,sqlite,electron,node-gyp
0
2017-08-06T00:18:00.000
I have two database in mysql that have tables built from another program I have wrote to get data, etc. However I would like to use django and having trouble understanding the model/view after going through the tutorial and countless hours of googling. My problem is I just want to access the data and displaying the dat...
0
0
0
0
false
45,529,591
1
104
1
0
0
45,529,142
inspectdb is far from perfect. If you have an existing db with a bit of complexity, you will probably end up changing a lot of the code generated by this command. Once you're done, though, it should work fine. What's your exact issue? If you run inspectdb and it creates a model of your table you should be able to import it ...
1
0
0
Django and Premade mysql databases
1
python,mysql,django
0
2017-08-06T06:14:00.000
I have a task which would really benefit from implementing partitioned tables, but I am torn because Postgres 10 will be coming out relatively soon. If I just build normal tables and handle the logic with Python format strings to ensure that my data is loaded to the correct tables, can I turn this into a partition ea...
1
1
1.2
0
true
45,537,959
0
130
1
0
0
45,535,616
Pg 10 partitioning right now is functionally the same as 9.6, just with prettier notation. Pretty much anything you can do in Pg 10, you can also do in 9.6 with table-inheritance based partitioning, it's just not as convenient. It looks like you may not have understood that table inheritance is used for partitioning in...
1
0
0
Postgres partitioning options right now?
1
python,postgresql
0
2017-08-06T19:09:00.000
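The "handle the logic with Python format strings" idea from the question above can be sketched as a simple router that picks a per-month child table, with DDL following the 9.6 inheritance-based pattern the answer mentions (the table and column names are assumptions):

```python
from datetime import date

def partition_name(parent: str, day: date) -> str:
    # One child table per month, e.g. events_2017_08
    return f"{parent}_{day.year}_{day.month:02d}"

def child_table_ddl(parent: str, day: date) -> str:
    # 9.6-style inheritance partitioning: the CHECK constraint on the child
    # lets the planner skip irrelevant partitions (constraint exclusion).
    child = partition_name(parent, day)
    first = day.replace(day=1)
    return (
        f"CREATE TABLE {child} "
        f"(CHECK (created >= DATE '{first}' "
        f"AND created < DATE '{first}' + INTERVAL '1 month')) "
        f"INHERITS ({parent});"
    )

ddl = child_table_ddl("events", date(2017, 8, 6))
```

Because the data lands in correctly named child tables either way, moving to Pg 10 declarative partitioning later mostly means swapping the DDL this function emits.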
I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script. When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission den...
0
0
0
0
false
45,539,442
0
2,830
3
0
0
45,539,241
Make sure you have write permission, in order to create an Excel temporary lock file in said directory...
1
0
0
openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed
4
python,excel,openpyxl
0
2017-08-07T04:01:00.000
I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script. When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission den...
0
6
1
0
false
50,027,342
0
2,830
3
0
0
45,539,241
Windows does not let you modify open Excel files in another program -- only Excel may modify open Excel files. You must close the file before modifying it with the script. (This is one nice thing about *nix systems.)
1
0
0
openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed
4
python,excel,openpyxl
0
2017-08-07T04:01:00.000
I've been making this python script with openpyxl on a MAC. I was able to have an open excel workbook, modify something on it, save it, keep it open and run the script. When I switched to windows 10, it seems that I can't modify it, save it, keep it open, and run the script. I keep getting an [ERRNO 13] Permission den...
0
0
0
0
false
50,026,988
0
2,830
3
0
0
45,539,241
I've had this issue with Excel files that are located in synced OneDrive folders. If I copy the file to a unsynced directory, openpyxl no longer has problems reading the .xlsx file while it is open in Excel.
1
0
0
openpyxl - Unable to access excel file with openpyxl when it is open but works fine when it is closed
4
python,excel,openpyxl
0
2017-08-07T04:01:00.000
I would like to host a database on my raspberry pi to which I can access from any device. I would like to access the contents of the database using python. What I've done so far: I installed the necessary mysql packages, including apache 2. I created my first database which I named test. I wrote a simple php s...
0
1
0.066568
0
false
45,593,914
0
104
1
0
0
45,593,608
On the terminal of your raspi, use the following command: mysql -u <user> -p -h <host> --port <port>, where you switch out the hostname with your IP address, since currently you can only connect via localhost
1
0
0
Raspberry Pi Database Server
3
php,python,mysql
1
2017-08-09T14:31:00.000
I want to export data from Cassandra to Json file, because Pentaho didn't support my version of Cassandra 3.10
1
0
0
0
false
60,649,389
0
3,302
1
0
0
45,607,301
You can use bash redirection to get the json file. cqlsh -e "select JSON * from ${keyspace}.${table}" | awk 'NR>3 {print $0}' | head -n -2 > table.json
1
0
0
How to export data from cassandra to Json file using Python or other language?
4
python,json,cassandra,cqlsh
0
2017-08-10T07:35:00.000
I have a task to import multiple Excel files in their respective sql server tables. The Excel files are of different schema and I need a mechanism to create a table dynamically; so that I don't have to write a Create Table query. I use SSIS, and I have seen some SSIS articles on the same. However, it looks I have to de...
0
0
0
0
false
45,614,658
0
59
1
0
0
45,610,737
You can try using BiML, which dynamically creates packages based on meta data. The only other possible solution is to write a script task.
1
0
0
Multiple Excel with different schema Upload in SQL
1
python,sql,sql-server,excel,ssis
0
2017-08-10T10:09:00.000
I have python programs that use python's xlwings module to communicate with excel. They work great, but I would like to run them using a button from excel. I imported xlwings to VBA and use the RunPython command to do so. That also works great, however the code I use with RunPython is something like: "from filename imp...
0
2
1.2
0
true
45,622,510
0
873
1
0
0
45,621,637
RunPython basically just does what it says: run python code. So to run a module rather than a single function, you could do: RunPython("import filename").
1
0
1
Running Python from VBA using xlwings without defining function
1
python,vba,excel,xlwings
0
2017-08-10T19:01:00.000
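The point of the answer above — that importing a module executes its top-level code, which is all RunPython("import filename") relies on — can be checked without Excel at all; this sketch writes a throwaway stand-in for filename.py and imports it:

```python
import importlib
import os
import sys
import tempfile

# Create a stand-in for the user's filename.py whose top-level code has a side effect.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "filename.py"), "w") as f:
    f.write("ran = True\nresult = 2 + 2\n")  # top-level statements run on import

sys.path.insert(0, tmpdir)
mod = importlib.import_module("filename")  # same effect RunPython("import filename") triggers
```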
I have a Python Scraper that I run periodically in my free tier AWS EC2 instance using Cron that outputs a csv file every day containing around 4-5000 rows with 8 columns. I have been ssh-ing into it from my home Ubuntu OS and adding the new data to a SQLite database which I can then use to extract the data I want. Now...
0
1
0.099668
0
false
45,643,778
1
158
1
0
0
45,630,562
The problem is you don't have access to the RDS filesystem, therefore you cannot upload the csv there (or import it). Modify your Python scraper to connect to the DB directly and insert the data there.
1
0
0
Exported scraped .csv file from AWS EC2 to AWS MYSQL database
2
python,mysql,database,database-design,amazon-ec2
0
2017-08-11T08:36:00.000
I am looking for a solution to build an application with the following features: A database compound of -potentially- millions of rows in a table, that might be related with a few small ones. Fast single queries, such as "SELECT * FROM table WHERE field LIKE %value" It will run on a Linux Server: Single node, but mayb...
1
1
0.197375
0
false
45,631,639
0
43
1
0
0
45,631,450
Not sure whether these questions are on topic here, but fortunately the answer is simple enough: these days a million rows is simply not that large anymore; even Excel can hold more than a million. If you have a few million rows in a large table, and want to run quick small select statements, the answer is that you ...
1
0
0
is the choice of Python and Hadoop a good one for this scenario?
1
python,hadoop,hadoop-streaming
0
2017-08-11T09:20:00.000
I'm working on a little python3 server and I want to download a sqlite database from this server. But when I tried that, I discovered that the downloaded file is larger than the original : the original file size is 108K, the downloaded file size is 247K. I've tried this many times, and each time I had the same result. ...
0
0
0
0
false
45,646,512
0
445
1
0
0
45,646,249
Well, I've finally found the solution. The problem (which I didn't see first) was that the server sent plain text to client. Here is one way to send binary data : import cgi import os import shutil import sys print('Content-Type: application/octet-stream; file="Library.db"') print('Content-Disposition: attachment; fil...
1
0
0
File downloaded larger than original
1
python-3.x,cgi
0
2017-08-12T03:33:00.000
I'm working a lot with Excel xlsx files which I convert using Python 3 into Pandas dataframes, wrangle the data using Pandas and finally write the modified data into xlsx files again. The files contain also text data which may be formatted. While most modifications (which I have done) have been pretty straight forward...
0
0
0
1
false
45,689,273
0
190
1
0
0
45,688,168
I have recently been working with openpyxl. Generally, if one cell has the same style (font/color) throughout, you can get the style from cell.font: cell.font.b means bold and cell.font.i means italic, while cell.font.color contains the color object. But if the style differs within one cell, this cannot help; only some minor indication...
1
0
0
Modifying and creating xlsx files with Python, specifically formatting single words of a e.g. sentence in a cell
1
python,excel,pandas,openpyxl,xlsxwriter
0
2017-08-15T07:22:00.000
I've been pouring over everywhere I can to find an answer to this, but can't seem to find anything: I've got a batch update to a MySQL database that happens every few minutes, with Python handling the ETL work (I'm pulling data from web API's into the MySQL system). I'm trying to get a sense of what kinds of potential ...
0
0
0
0
false
45,702,416
0
325
1
0
0
45,702,192
For one I wrote in C#, I decided the best work partitioning was each "source" having a thread for extraction, one for each transform "type", and one to load the transformed data to each target. In my case, I found multiple threads per source just ended up saturating the source server too much; it became less responsiv...
1
0
1
Python Multithreading/processing gains for inserts to different tables in MySQL?
2
python,mysql,python-multiprocessing,python-multithreading
0
2017-08-15T21:59:00.000
I'm trying to create app using the command python3 manage.py startapp webapp but i'm getting an error that says: django.core.exceptions.ImproperlyConfigured: Error loading either pysqlite2 or sqlite3 modules (tried in that order): No module named '_sqlite3' So I tried installing sqlite3 using pip install sqlite3 ...
3
2
0.379949
0
false
45,706,624
1
5,897
1
0
0
45,704,177
sqlite3 is part of the standard library. You don't have to install it. If it's giving you an error, you probably need to install your distribution's python-dev packages, eg with sudo apt-get install python-dev.
1
0
0
Downloading sqlite3 in virtualenv
1
python,django,sqlite
0
2017-08-16T02:25:00.000
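As the answer above notes, sqlite3 ships with the standard library, so inside any virtualenv (with a properly built interpreter) this should work with no pip install at all:

```python
import sqlite3

# No third-party package needed: sqlite3 is part of CPython's standard library.
conn = sqlite3.connect(":memory:")
version = sqlite3.sqlite_version  # version string of the bundled SQLite engine
conn.close()
```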
Working with Python 2.7 and I'd like to add new sheets to a current Excel workbook indexed to a specific position. I know Openpyxl's create_sheet command will allow me to specify an index for a new sheet within an existing workbook, but there's a catch: Openpyxl will delete charts from an existing Excel workbook if ope...
0
0
1.2
0
true
45,729,433
0
70
1
0
0
45,724,575
openpyxl 2.5 includes read support for charts
1
0
0
Add indexed sheets to Excel workbook w/out Openpyxl?
1
excel,python-2.7,openpyxl
0
2017-08-16T23:45:00.000
I have configured the server to use MySQL Cluster. The Cluster architecture is as follows: One Cluster Manager(ip1) Two Data Nodes (ip2,ip3) Two SQL Nodes(ip4,ip5) My Question: Which node should I use to connect from Python application?
0
3
1.2
0
true
45,729,005
0
915
1
0
0
45,728,111
You have to call the SQL nodes from your application. Use comma-separated IP addresses for this. In your code use DB_HOST = "ip4, ip5"
1
0
0
Connecting to mysql cluster from python application
1
python,mysql,mysql-cluster
0
2017-08-17T06:35:00.000
I would like to know how to insert into same MongoDb collection from different python scripts running at the same time using pymongo any help redirecting guidance would be very appreciated because I couldn't find any clear documentation in pymongo or mongdb about it yet thank in advance
1
1
1.2
0
true
45,737,589
0
754
1
0
0
45,737,486
You should be able to just insert into the collection in parallel without needing to do anything special. If you are updating documents then you might find there are issues with locking, and depending on the storage engine which your MongoDB is using there may be collection locking, but this should not affect how you ...
1
0
1
Writing in parallel to MongoDb collection from python
1
python,mongodb,python-3.x,pymongo,pymongo-3.x
0
2017-08-17T14:11:00.000
Let me explain the problem We get real time data which is as big as 0.2Million per day. Some of these records are of special significance. The attributes that shall mark them as significant are pushed in a reference collection. Let us say each row in Master Database has the following attributes a. ID b. Type c. ...
0
0
0
0
false
45,797,856
0
143
1
0
0
45,769,111
Yes, I also faced this problem, but then I tried moving small chunks of the data. In my experience, sharding is not the better way for this kind of problem. The same goes for a replica set.
1
0
0
How to segregate large real time data in MongoDB
1
mongodb,python-3.x,cron
0
2017-08-19T07:51:00.000
I just want to set a password on my file "file.db" (a SQLite3 database), so that if someone tries to open this DB it has to ask for a password for authentication. Is there any way to do this using Python? Thanks in Advance.
0
0
0
0
false
45,811,504
0
52
1
0
0
45,811,440
Asking for a password when opening a file doesn't make much sense; it would take another program to do that, watching the file and intercepting the request at the OS level. What you need to do is protect the file using ACLs, setting the proper access rights for only the desired users & groups.
1
0
0
protecting DB using Python
1
python,sqlite
0
2017-08-22T07:28:00.000
When selecting a data source for a graph in Excel, you can specify how the graph should treat empty cells in your data set (treat as zero, connect with next data point, leave gap). The option to set this behavior is available in xlsxwriter with chart.show_blanks_as(), but I can't find it in openpyxl. If anyone knows w...
0
0
0
1
false
45,868,428
0
101
1
0
0
45,825,401
Asked the dev about it - There is a dispBlanksAs property of the ChartContainer but this currently isn't accessible to client code. I looked through the source some more using that answer to guide me. The option is definitely in there, but you'd have to modify source and build locally to get at it. So no, it's not a...
1
0
0
How to replicate the "Show empty cells as" functionality of Excel graphs
1
python,excel,openpyxl
0
2017-08-22T19:20:00.000
I would like to store a "set" in a database (specifically PostgreSQL) efficiently, but I'm not sure how to do that efficiently. There are a few options that pop to mind: store as a list ({'first item', 2, 3.14}) in a text or binary column. This has the downside of requiring parsing when inserting into the database an...
0
2
0.197375
0
false
45,850,429
0
76
1
0
0
45,848,956
What you want to do is store a one-to-many relationship between a row in your table and the members of the set. None of your solutions allow the members of the set to be queried by SQL. You can't do something like select * from mytable where 'first item' in myset. Instead you have to retrieve the text/blob and use anot...
1
0
1
How to store a "set" (the python type) in a database efficiently?
2
python,sql,json,postgresql,pickle
0
2017-08-23T20:42:00.000
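A sketch of the one-to-many layout the answer above recommends, using the stdlib sqlite3 module for illustration (Postgres DDL would be analogous; the table and column names are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT);
    -- one row per set member, instead of one serialized blob per set
    CREATE TABLE myset (mytable_id INTEGER REFERENCES mytable(id), member TEXT);
""")
conn.execute("INSERT INTO mytable VALUES (1, 'row with a set')")
conn.executemany(
    "INSERT INTO myset VALUES (1, ?)",
    [("first item",), ("2",), ("3.14",)],
)

# Now the set members are queryable in SQL, unlike a pickled/text column:
hits = conn.execute(
    "SELECT t.name FROM mytable t JOIN myset s ON s.mytable_id = t.id "
    "WHERE s.member = 'first item'"
).fetchall()
```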
I have an excel xlsx file that I want to edit using python script. I know that openpyxl is not able to treat data-validation but I want just to edit the value of some cells containing data-validation and then save the workbook without editing those data-validation. For now, when I try to do that, I get an error : Use...
2
1
0.197375
0
false
45,863,816
0
2,941
1
0
0
45,862,917
To be clear: openpyxl does support data validation as covered by the original OOXML specification. However, since then Microsoft has extended the options for data validation, and it is these that are not supported. You might be able to adjust the data validation so that it is supported.
1
0
0
openpyxl : data-validation read/write without treatment
1
python,excel,openpyxl
0
2017-08-24T13:26:00.000
I just switched from django 1.3.7 to 1.4.22 (on my way to updating to a higher version of django). I am using USE_TZ=True and TIME_ZONE = 'Europe/Bucharest'. The problem that I am encountering is a DateTimeField from DB (postgres) that holds the value 2015-01-08 10:02:03.076+02 (with timezone) is read by my django as 2...
1
0
0
0
false
45,879,886
1
32
1
0
0
45,878,039
Seems I needed to logout once and log in again in the app for it to work. Thanks.
1
0
0
Django offset-naive date from DB
1
python,django
0
2017-08-25T09:09:00.000
I am using 64-bit python anaconda v4.4 which runs python v3. I have MS Access 2016 32-bit version. I would like to use pyodbc to get python to talk to Access. Is it possible to use 64-bit pyodbc to talk to a MS Access 2016 32-bit database? I already have a number of python applications running with the 64-bit python a...
10
4
0.26052
0
false
45,929,130
0
15,479
1
0
0
45,928,987
Unfortunately, you need 32-bit Python to talk to 32-bit MS Access. However, you should be able to install a 32-bit version of Python alongside 64-bit Python. Assuming you are using Windows, during a custom install you can pick the destination path. Then use a virtualenv. For example, if you install to C:\Python36-32: v...
1
0
1
Is it possible for 64-bit pyodbc to talk to 32-bit MS access database?
3
python,ms-access,odbc,32bit-64bit,pyodbc
0
2017-08-29T00:20:00.000
Hope you have a great day. I have a table with 470 columns to be exact. I am working on Django unit testing and the tests won't execute giving the error when I run command python manage.py test: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may...
0
0
0
0
false
46,018,456
1
978
1
0
0
45,964,972
Since I have never seen anyone use the feature of having a bigger block size, I have no experience with making it work. And I recommend you not be the first to try. Instead I offer several likely workarounds. Don't use VARCHAR(255) blindly; make the lengths realistic for the data involved. Don't use utf8 (or utf8mb4) for...
1
0
0
Changing innodb_page_size in my.cnf file does not restart mysql database
1
python,mysql,django,unit-testing,innodb
0
2017-08-30T15:58:00.000
when I try to connect to my application deploy at Pythonanywhere database does not working, its seems that he can't reach to him. when I am using my computer and run the app all seems to be perfect. any one any ideas? Thanks very much.
0
1
0.197375
0
false
46,028,070
0
197
1
0
0
46,013,567
Hey, after checking it out I found that PythonAnywhere requires a paid plan in order to use mlab services, or other external services.
1
0
0
pythonanywhere with mlab(mongoDB)
1
mongodb,pythonanywhere,mlab
0
2017-09-02T12:03:00.000
I'm trying to connect to a PostgreSQL database on Google Cloud using SQLAlchemy. Making a connection to the database requires specifying a database URL of the form: dialect+driver://username:password@host:port/database I know what the dialect + driver is (postgresql), I know my username and password, and I know the da...
3
3
0.197375
0
false
64,040,093
1
6,507
1
1
0
46,178,062
Hostname is the Public IP address.
1
0
0
What is the hostname for a Google Cloud PostgreSQL instance?
3
python,postgresql,google-cloud-platform,google-cloud-storage,google-cloud-sql
0
2017-09-12T13:42:00.000
I have two data files which is some weird format. Need to parse it to some descent format to use that for future purposes. after parsing i end up having two formats on which one has an id and respective information pertaining to that id will be from another file. Ex : From file 1 i get Name, Position, PropertyID from ...
0
0
0
0
false
46,181,307
0
1,067
1
0
0
46,180,651
Ensure that you normalize your data with an ID to avoid touching so many different data columns with even a single change. Like the file2 you mentioned above, you can reduce the columns to two by having just the propertyId and the property columns. Rather than having 1 propertyId associated with 2 property in a single ...
1
0
1
Storing data in flat files
2
java,python,flat-file
0
2017-09-12T15:44:00.000
I am running my Python script in which I write excel files to put them into my EC2 instance. However, I have noticed that these excel files, although they are created, are only put into the server once the code stops. I guess they are kept in cache but I would like them to be added to the server straight away. Is there...
1
1
0.197375
0
false
46,301,367
0
116
1
0
0
46,300,696
I guess they are kept in cache but I would like them to be added to the server straight away. Is there a "commit()" to add to the code? No. It isn't possible to stream or write a partial xlsx file like a CSV or Html file since the file format is a collection of XML files in a Zip container and it can't be generated un...
1
0
0
xlswriter on a EC2 instance
1
python,excel,amazon-web-services,amazon-ec2,xlsxwriter
0
2017-09-19T12:40:00.000
Django-Storages provides an S3 file storage backend for Django. It lists AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as required settings. If I am using an AWS Instance Profile to provide S3 access instead of a key pair, how do I configure Django-Storages?
3
1
0.099668
0
false
61,942,402
1
1,069
1
0
0
46,307,447
The docs now explain this: If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not set, boto3 internally looks up IAM credentials.
1
0
0
Use Django-Storages with IAM Instance Profiles
2
django,amazon-s3,boto3,python-django-storages
0
2017-09-19T18:28:00.000
I am getting the below error while running the cqlsh in cassandra 2.2.10 ?? Can somebody help me to pass this hurdle: [root@rac1 site-packages]# $CASSANDRA_PATH/bin/cqlsh Python Cassandra driver not installed, or not on PYTHONPATH. You might try “pip install cassandra-driver”. Python: /usr/local/bin/python Module lo...
1
0
0
0
false
47,167,910
0
2,223
1
1
0
46,314,983
Cassandra uses the python driver bundled in-tree in a zip file. If your Python runtime was not built with zlib support, it cannot use the zip archive in the PYTHONPATH. Either install the driver directly (pip install) as suggested, or put a correctly configured Python runtime in your path.
1
0
0
Python Cassandra driver not installed, or not on PYTHONPATH
1
python,linux,cassandra
0
2017-09-20T06:42:00.000
My team uses .rst/sphinx for tech doc. We've decided to do tables in csv files, using the .. csv-table:: directive. We are beginning to using sphinx-intl module for translation. Everything seems to work fine, except that I don't see any our tables int he extracted .po files. Has anyone had this experience? What are bes...
0
1
0.197375
0
false
46,535,864
0
89
1
0
0
46,351,068
We tested and verified that the csv content is automatically extracted into PO files, and building a localized version places the translated strings in MO files back into the table.
1
0
0
How do I use sphinx-intl if I am using the .. csv-table:: directives for my tables?
1
internationalization,python-sphinx,restructuredtext
1
2017-09-21T18:40:00.000
Can I get some advice on how to make a mechanism for inserts that will check if the value of the PK is used? If it is not used in the table, it will insert a row with that number. If it is used, it will increment the value and check whether the next value is used. And so on...
0
0
0
0
false
46,380,171
0
26
1
0
0
46,380,101
This is too long for a comment. You would need a trigger in the database to correctly implement this functionality. If you try to do it in the application layer, then you will be subject to race conditions in a multi-client environment. Within Oracle, I would recommend just using an auto-generated column for the prima...
1
0
0
cx_oracle PK autoincrementarion
1
python,oracle,cx-oracle
0
2017-09-23T13:31:00.000
We've had a Flask application using pymssql running for 1.5 years under Python 2.7 and SQL Server 2012. We moved the application to a new set of servers and upgraded the Flask app to Python 3.6 and a new database server to SQL Server 2016. They're both Windows servers. Since then, we've been getting intermittent 20017 ...
2
2
0.379949
0
false
46,436,613
0
933
1
0
0
46,410,009
Well, our answer was to switch to pyodbc. A few utility functions made it more or less a cut-and-paste with a few gotchas here and there, but pymssql has been increasingly difficult to build, upgrade, and use for the last few years.
1
0
0
Pymssql Error 20017 after upgrading to Python 3.6 and SQL Server 2016
1
sql-server,python-3.x,flask,pymssql
0
2017-09-25T16:32:00.000
I'm using openpyxl for Python 2.7 to open and then modify a existing .xlsx file. This excel file has about 2500 columns and just 10 rows. The problem is openpyxl took to long to load the file (almost 1 Minute). Is there anyway to speed up the loading process of openpyxl. From other Threads I found some tips with read_o...
0
0
0
0
false
55,336,278
0
1,358
1
0
0
46,428,168
I had the same issue and found that while I was getting reasonable times initially (opening and closing was taking maybe 2-3 seconds), this suddenly increased to over a minute. I had introduced logging, so thought that may have been the cause, but after commenting this out there was still a long delay. I copied the dat...
1
0
0
Openpyxl loading existing excel takes too long
2
python,excel,openpyxl
0
2017-09-26T13:42:00.000
I've noticed that many SQLAlchemy tutorials would use relationship() in "connecting" multiple tables together, may their relationship be one-to-one, one-to-many, or many-to-many. However, when using raw SQL, you are not able to define the relationships between tables explicitly, as far as I know. In what cases is relat...
9
10
1.2
0
true
46,462,502
0
1,079
1
0
0
46,462,152
In SQL, tables are related to each other via foreign keys. In an ORM, models are related to each other via relationships. You're not required to use relationships, just as you are not required to use models (i.e. the ORM). Mapped classes give you the ability to work with tables as if they are objects in memory; along t...
1
0
0
Is it necessary to use `relationship()` in SQLAlchemy?
1
python,sqlalchemy
0
2017-09-28T06:14:00.000
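A minimal sketch of what relationship() buys you on top of a plain ForeignKey, using an in-memory SQLite engine (the model names are made up; assumes SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    # The FK below works in SQL without this line; relationship() just adds
    # the convenient object-level accessor parent.children.
    children = relationship("Child", back_populates="parent")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))
    parent = relationship("Parent", back_populates="children")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p = Parent(children=[Child(), Child()])
session.add(p)
session.commit()

loaded = session.query(Parent).first()
n_children = len(loaded.children)  # traversal without writing the JOIN yourself
```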
I have a question about sqlite3. If I were to host a database online, how would I access it through python's sqlite3 module? E.g. Assume I had a database hosted at "www.example.com/database.db". Would it be as simple as just forming a connection with sqlite3.connect ("www.example.com/database.db") or is there more I n...
4
3
0.291313
0
false
46,492,537
0
1,622
1
0
0
46,492,388
SQLite3 is an embedded-only database, so it does not have network connection capabilities. You would need to somehow mount the remote filesystem. With that being said, SQLite3 is not meant for this. Use PostgreSQL or MySQL (or anything else) for such purposes.
1
0
0
Connecting to an online database through python sqlite3
2
python,database,sqlite
0
2017-09-29T15:39:00.000
I have two databases in odoo DB1 and DB2. I made some changes to existing modules(say module1 and module2) in DB1 through GUI(web client). All those changes were stored to DB1 and were working correctly when I am logged in through DB1. Now, I made some changes in few files(in same two modules module1 and module2). Thes...
4
3
0.291313
0
false
46,501,313
1
1,295
2
0
0
46,500,405
You can restart the server and then start it with python odoo-bin -d database_name -u module_name, or -u all to update all modules
1
0
0
How upgrading of a Odoo module works?
2
python,openerp,odoo-9,odoo-10
0
2017-09-30T07:04:00.000
I have two databases in odoo DB1 and DB2. I made some changes to existing modules(say module1 and module2) in DB1 through GUI(web client). All those changes were stored to DB1 and were working correctly when I am logged in through DB1. Now, I made some changes in few files(in same two modules module1 and module2). Thes...
4
4
0.379949
0
false
46,513,745
1
1,295
2
0
0
46,500,405
There are 2 steps to upgrading an addon in Odoo. First, restart the service; it will upgrade your .py files. Second, click the upgrade button in Apps > your addon's name; it will upgrade your .xml files. I created a script for upgrading the XML files; its name is upgrade.sh #!/bin/sh for db in $(cat /opt/odoo/scripts/yourlist...
1
0
0
How upgrading of a Odoo module works?
2
python,openerp,odoo-9,odoo-10
0
2017-09-30T07:04:00.000
I'm updating from an ancient language to Django. I want to keep the data from the old project into the new. But old project is mySQL. And I'm currently using SQLite3 in dev mode. But read that postgreSQL is most capable. So first question is: Is it better to set up postgreSQL while in development. Or is it an easy tran...
0
0
0
0
false
46,544,581
1
31
1
0
0
46,544,518
Better to create the Postgres database, then write a Python script which takes the data from the MySQL database and imports it into the Postgres database.
1
0
0
Importing data from multiple related tables in mySQL to SQLite3 or postgreSQL
1
python,django,database,postgresql,sqlite
0
2017-10-03T12:22:00.000
I am trying to upload data from certain fields in a CSV file to an already existing table. From my understanding, the way to do this is to create a new table and then append the relevant columns of the newly created table to the corresponding columns of the main table. How exactly do I append certain columns of data fr...
1
0
1.2
1
true
46,546,554
0
4,039
1
0
0
46,546,388
You can use pandas library for that. import pandas as pd data = pd.read_csv('input_data.csv') useful_columns = [col1, col2, ... ] # List the columns you need data[useful_columns].to_csv('result_data.csv', index=False) # index=False is to prevent creating extra column
1
0
0
How to Skip Columns of CSV file
1
python,csv,google-api,google-bigquery,google-python-api
0
2017-10-03T13:57:00.000
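If pandas is not available, the column-skipping shown in the answer above can also be sketched with the stdlib csv module (the column names and sample data here are hypothetical):

```python
import csv
import io

# In-memory stand-in for input_data.csv
raw = "col1,col2,col3\n1,a,x\n2,b,y\n"
useful_columns = ["col1", "col3"]  # list only the columns you need

out = io.StringIO()
reader = csv.DictReader(io.StringIO(raw))
writer = csv.DictWriter(out, fieldnames=useful_columns, extrasaction="ignore")
writer.writeheader()
for row in reader:
    writer.writerow(row)  # extrasaction="ignore" silently drops unlisted columns

result = out.getvalue()
```

With real files, replace the StringIO objects with open() calls on the input and output paths.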
I’m building a web app (python/Django) where customers create an account, each customer creates/adds as many locations as they want and a separate server generates large amounts of data for each location several times a day. For example: User A -> [locationA, locationB] User B -> [locationC, locationD, locationE] Wher...
1
1
0.197375
0
false
46,553,762
1
81
1
0
0
46,553,070
[GENERAL ADVICE]: I always use Postgres or MySQL as the django ORM connection and then Mongo or DynamoDB for analytics. You can say that it creates unnecessary complexity because that is true, but for us that abstraction makes it easier to separate out teams too. You have your front end devs, backend/ full stacks, an...
1
0
0
Right strategy for segmenting Mongo/Postgres database by customer?
1
python,sql,django,mongodb,postgresql
0
2017-10-03T20:41:00.000
I have a large amount of data, around 50GB worth in a CSV, which I want to analyse for ML purposes. It is however way too large to fit in Python. I ideally want to use MySQL because querying is easier. Can anyone offer a host of tips for me to look into? This can be anything from: How to store it in the first place, I re...
0
0
0
0
false
46,607,645
0
589
1
0
0
46,574,694
That depends on what you have. You can use Apache Spark and then use its SQL feature; Spark SQL gives you the ability to write SQL queries against your dataset, but for the best performance you need a distributed mode (you can use it on a local machine, but the result is limited) and high machine performance. You can use p...
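If Spark is overkill, the core idea — never holding the whole file in memory — can be sketched with the standard library alone, reading the CSV in chunks and bulk-inserting each chunk. SQLite stands in for MySQL here, and the data, table name, and chunk size are illustrative:

```python
import csv
import io
import sqlite3
from itertools import islice

# Small in-memory CSV standing in for the 50 GB file on disk.
big_csv = io.StringIO("id,value\n" + "\n".join(f"{i},{i * 2}" for i in range(10)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER, value INTEGER)")

reader = csv.reader(big_csv)
next(reader)  # skip the header row
while True:
    chunk = list(islice(reader, 4))  # real chunk sizes would be thousands of rows
    if not chunk:
        break
    conn.executemany("INSERT INTO measurements VALUES (?, ?)", chunk)
    conn.commit()  # commit per chunk so memory use stays bounded

print(conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0])  # 10
```

Once the data is in the database, the querying the question asks about happens in SQL rather than in Python's memory.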
1
0
0
Storing and querying a large amount of data
2
python,mysql,bigdata,mysql-python
0
2017-10-04T21:45:00.000
I want to create a program, which automates excel reporting including various graphs in colours. The program needs to be able to read an excel dataset. Based on this dataset, the program then has to create report pages and graphs and then export to an excel file as well as pdf file. I have done some research and it see...
0
0
0
1
false
46,669,389
0
388
1
0
0
46,575,847
If you already have VBA that works for your project, then translating it to Ruby + WIN32OLE is probably your quickest path to working code. Anything you can do in VBA is doable in Ruby (if you find something you can't do, post here to ask for help). I prefer working with Excel via OLE since I know the file produced by ...
1
0
0
Automating excel reporting and graphs - Python xlsxWriter/xlswings or Ruby axlsx/win32ole
1
python,ruby,excel,xlsxwriter,axlsx
0
2017-10-04T23:52:00.000
So I have two table in a one-to-many relationship. When I make a new row of Table1, I want to populate Table2 with the related rows. However, this population actually involves computing the Table2 rows, using data in other related tables. What's a good way to do that using the ORM layer? That is, assuming that that th...
1
1
1.2
0
true
46,777,010
1
601
1
0
0
46,594,866
After asking around in #sqlalchemy IRC, it was pointed out that this could be done using ORM-level relationships in a before_flush event listener. It was explained that when you add a mapping through a relationship, the foreign key is automatically filled on flush, and the appropriate insert statement generated by the...
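A minimal sketch of that pattern, assuming hypothetical Parent/Child models: the before_flush listener attaches computed Child rows to each new Parent through the relationship, and SQLAlchemy fills in the foreign keys when the flush runs:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, event
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    children = relationship("Child")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

@event.listens_for(Session, "before_flush")
def populate_children(session, flush_context, instances):
    # Compute the related Child rows for every new Parent; their
    # parent_id foreign keys are filled in automatically at flush time.
    for obj in session.new:
        if isinstance(obj, Parent) and not obj.children:
            obj.children = [Child(), Child()]

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Parent())
    session.commit()
    print(session.query(Child).count())  # 2
```

before_flush is documented as the safe place to modify session state, which is what makes this work; here the "computation" is just creating two empty Child rows, where the real code would read the other related tables.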
1
0
0
Populating related table in SqlAlchemy ORM
2
python,sql,database,orm,sqlalchemy
0
2017-10-05T21:11:00.000
I've built some tools that create front-end list boxes for users that reference dynamic Redshift tables. When new items appear in the table, they show up automatically in the list. I want to put the list in alphabetical order in the database so the dynamic list boxes will show the data in that order. After downloading the list fro...
0
0
1.2
1
true
46,610,485
0
725
1
0
0
46,608,223
While ingesting data into Redshift, data gets distributed between slices on each node in your Redshift cluster. My suggestion would be to create a sort key on the column you need sorted. Once you have a sort key on that column, you can run the VACUUM command to get your data sorted. Sorry! I cannot be of much help...
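As an illustration, the DDL and maintenance statements involved might look like the following; the table and column names are hypothetical, and the Python here only builds the SQL strings you would run against the cluster:

```python
# Hypothetical table/column names; the SQL itself is Redshift DDL.
table = "listbox_items"
sort_column = "label"

# Declare the sort key when creating the table...
create_stmt = (
    f"CREATE TABLE {table} (id INTEGER, {sort_column} VARCHAR(256)) "
    f"SORTKEY ({sort_column});"
)
# ...and re-sort already-loaded rows with a sort-only vacuum.
vacuum_stmt = f"VACUUM SORT ONLY {table};"

print(create_stmt)
print(vacuum_stmt)
```

With the table defined this way, the list boxes read rows back in sort-key order without the client having to re-sort them.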
1
0
0
Sorting and loading data from Pandas to Redshift using to_sql
1
python,sorting,amazon-redshift,pandas-to-sql
0
2017-10-06T14:36:00.000
I'm working with a small company currently that stores all of their app data in an AWS Redshift cluster. I have been tasked with doing some data processing and machine learning on the data in that Redshift cluster. The first task I need to do requires some basic transforming of existing data in that cluster into some n...
1
1
0.066568
0
false
46,640,656
1
2,154
1
1
0
46,618,762
The 2 options for running ETL on Redshift:
1. Create some "create table as" type SQL, which will take your source tables as input and generate your target (transformed) table.
2. Do the transformation outside of the database using an ETL tool, for example EMR or Glue.
Generally, in an MPP environment such as Redshift, the be...
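A sketch of option 1, the "create table as" approach, using SQLite as a stand-in for Redshift (the CTAS syntax works in both); the table names and aggregation are illustrative:

```python
import sqlite3

# SQLite stands in for Redshift here; CREATE TABLE ... AS SELECT works in both.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# The transformation runs inside the database, close to the data,
# producing the new target table in one statement.
conn.execute("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total
    FROM events
    GROUP BY user_id
""")

for row in conn.execute("SELECT user_id, total FROM user_totals ORDER BY user_id"):
    print(row)  # (1, 15.0) then (2, 7.5)
```

Keeping the transformation in SQL like this is what lets an MPP engine parallelize it across nodes, rather than funneling all the rows through a single client process.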
1
0
0
AWS Redshift Data Processing
3
python,database,amazon-web-services,amazon-redshift
0
2017-10-07T09:41:00.000
I successfully installed mod_wsgi via pip install mod_wsgi on Windows. However, when I copy the output of mod_wsgi-express module-config into my httpd.conf and try to start the httpd, I get the following error: httpd.exe: Syntax error on line 185 of C:/path/to/httpd.conf: Cannot load c:/path/to/venv/Lib/site-packages/m...
0
0
1.2
0
true
46,645,404
1
1,202
1
0
0
46,622,112
The issue was that the Apache was built with VC14, but Python 2.7 naturally with VC9. Installing an Apache built with VC9 solved my issue.
1
0
0
Getting mod_wsgi to work with Python 2.7/Apache on Windows Server 2012; cannot load module
1
python,apache,mod-wsgi,windows-server-2008-r2
0
2017-10-07T15:46:00.000
I'm going to run a query that returns a huge table (about 700MB) from Redshift and save it to CSV using SQLAlchemy and Python 2.7 on my local machine (Mac Pro). I've never done this with such huge queries before, and obviously there could be memory and other issues. My question is what I shall take into account and...
0
0
0
0
false
46,715,732
0
1,752
1
0
0
46,714,971
If you don't run much else on that machine, then memory should not be an issue. Give it a try. Monitor memory use during the execution. Also check the load average to see how much pressure the system is under.
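If memory does become an issue, one bounded pattern is to fetch in batches rather than calling fetchall(); with SQLAlchemy you would reach for yield_per() or a server-side cursor, but the DB-API shape is the same. A sketch with SQLite standing in for Redshift and an in-memory buffer standing in for the output file:

```python
import csv
import io
import sqlite3

# SQLite stands in for the Redshift connection; the streaming pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO big_table VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(100)])

out = io.StringIO()  # would be open("out.csv", "w", newline="") for a real file
writer = csv.writer(out)
writer.writerow(["id", "payload"])

cur = conn.execute("SELECT id, payload FROM big_table")
while True:
    rows = cur.fetchmany(25)  # fetch in batches instead of fetchall()
    if not rows:
        break
    writer.writerows(rows)

print(len(out.getvalue().splitlines()))  # 101 lines: header + 100 rows
```

Only one batch of rows is ever resident in Python at a time, so the 700MB result never has to fit in memory at once.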
1
0
0
Python/SQLAlchemy: How to save huge redshift table to CSV?
2
python,sql,sqlalchemy,amazon-redshift
0
2017-10-12T16:48:00.000
On the face of it, it seems that bindparam should generally be used to eliminate SQL injection. However, in what situations would it necessitate using literal_column instead of bindparam - and what measures should be taken to prevent SQL injection?
0
2
0.379949
0
false
46,736,027
0
918
1
0
0
46,719,568
literal_column is intended to be used as, well, a literal name for a column, not as a parameter (which is a value), because column names cannot be parameterized (it's part of the query itself). You should generally not be using literal_column to put a value in a query, only column names. If you are accepting user input...
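The same principle can be sketched with plain DB-API placeholders rather than SQLAlchemy's bindparam/literal_column (the table, columns, and helper function here are hypothetical): values travel as bound parameters, while column names, which cannot be bound, are checked against a whitelist:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('ada', 'ada@example.com')")

# Identifiers are part of the query text, so restrict them to known names.
ALLOWED_COLUMNS = {"name", "email"}

def fetch_column(column, value):
    if column not in ALLOWED_COLUMNS:
        raise ValueError(f"bad column: {column}")
    # The column name is validated text; the value is a bound parameter.
    return conn.execute(
        f"SELECT {column} FROM users WHERE name = ?", (value,)
    ).fetchall()

print(fetch_column("email", "ada"))  # [('ada@example.com',)]
```

An attacker-supplied column like "email; DROP TABLE users" never reaches the SQL string because the whitelist check rejects it first, which is exactly the validation the answer says literal_column callers must do themselves.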
1
0
0
SQLAlchemy: When should literal_column be used instead of bindparam?
1
python,python-2.7,sqlalchemy
0
2017-10-12T22:02:00.000
Is it possible to query data from InfoPlus 21 (IP21) AspenTech using php? I am willing to create a php application that can access tags and historical data from AspenTech Historian. Is ODBC my answer? Even thinking that is, I am not quite sure how to proceed. UPDATE: I ended up using python and pyODBC. This worked like...
2
3
0.197375
0
false
46,762,657
0
7,310
2
0
0
46,730,944
I am unaware of a method to access IP21 data directly via PHP, however, if you're happy to access data via a web service, there are both REST and a SOAP options. Both methods are extremely fast and responsive. AFW Security still applies to clients accessing the Web Services. Clients will require SQL Plus read (at lesas...
1
0
0
How to query data from an AspenTech IP21 Historian using PHP?
3
php,python,odbc,aspen
0
2017-10-13T13:18:00.000
Is it possible to query data from InfoPlus 21 (IP21) AspenTech using php? I am willing to create a php application that can access tags and historical data from AspenTech Historian. Is ODBC my answer? Even thinking that is, I am not quite sure how to proceed. UPDATE: I ended up using python and pyODBC. This worked like...
2
2
0.132549
0
false
50,016,010
0
7,310
2
0
0
46,730,944
Yes, the ODBC driver should be applicable to meet your requirement. We have already developed an application to insert data into the IP21 historian which uses the same protocol. Similarly, some analytical tools (e.g. Seeq Cooperation) also use ODBC to fetch data from the IP21 historian. Therefore it should be possible in your c...
1
0
0
How to query data from an AspenTech IP21 Historian using PHP?
3
php,python,odbc,aspen
0
2017-10-13T13:18:00.000
I'm running into a performance issue with Google Cloud Bigtable Python Client. I'm working on a flask API that writes to and reads from a GCP Bigtable instance. The API uses the python client to communicate with Bigtable, and was deployed to GCP App Engine flexible environment. Under low traffic, the API works fine. Ho...
1
3
0.53705
0
false
47,776,406
1
503
1
1
0
46,740,127
The Bigtable client takes somewhere between 3 ms and 20 ms to complete each request, and because Python is single-threaded, during that period it will just wait until the response comes back. The best solution we found was, for any writes, to publish the request to Pub/Sub and then use Dataflow to write to Bigtable. It is si...
1
0
0
Google Cloud Bigtable Python Client Performance Issue
1
google-app-engine,google-cloud-platform,bigtable,google-cloud-bigtable,google-cloud-python
0
2017-10-14T02:03:00.000
I have a MySQL database where I'm loading big files which insert more than 190,000 rows. I'm using a Python script which does some stuff and then loads data from a CSV file into MySQL, executes the query and commits. My question is: if I'm sending such a big file, is the database ready after the COMMIT command, or how to trigger when all ...
0
1
1.2
0
true
46,745,333
0
54
1
0
0
46,742,682
The COMMIT does not actually return until the data has been... committed... so, yes, once you have committed any transaction, the work from that transaction is entirely done, as far as your application is concerned.
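A small sketch of that guarantee, with a SQLite file standing in for MySQL: a second, independent connection sees the rows as soon as the first connection's commit() returns:

```python
import os
import sqlite3
import tempfile

# Two separate connections to the same database file, standing in for MySQL.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE loads (n INTEGER)")
writer.executemany("INSERT INTO loads VALUES (?)", [(i,) for i in range(5)])
writer.commit()  # once this returns, the data is durable and visible

print(reader.execute("SELECT COUNT(*) FROM loads").fetchone()[0])  # 5
```

So in the loading script, the moment the commit call returns is the moment downstream work can safely start; no extra trigger is needed.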
1
0
0
MySQL commit trigger done
1
python,mysql
0
2017-10-14T08:53:00.000