| Question (string, length 25–7.47k) | Q_Score (int64, 0–1.24k) | Users Score (int64, -10–494) | Score (float64, -1–1.2) | Data Science and Machine Learning (int64, 0–1) | is_accepted (bool, 2 classes) | A_Id (int64, 39.3k–72.5M) | Web Development (int64, 0–1) | ViewCount (int64, 15–1.37M) | Available Count (int64, 1–9) | System Administration and DevOps (int64, 0–1) | Networking and APIs (int64, 0–1) | Q_Id (int64, 39.1k–48M) | Answer (string, length 16–5.07k) | Database and SQL (int64, 1–1) | GUI and Desktop Applications (int64, 0–1) | Python Basics and Environment (int64, 0–1) | Title (string, length 15–148) | AnswerCount (int64, 1–32) | Tags (string, length 6–90) | Other (int64, 0–1) | CreationDate (string, length 23) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I am trying to reinstall one of my apps on my project site. These are the steps that I have followed to do so:
Removing the name of the installed app from settings.py
Manually deleting the app folder from the project folder
Manually removing the data tables from PostgreSQL
Copying the app folder back into the project ... | 1 | 0 | 0 | 0 | false | 33,704,110 | 1 | 205 | 1 | 0 | 0 | 33,703,866 | I think I might have managed to solve the problem. The command, python manage.py sqlmigrate app_name 0001, produces the SQL statements required for the table creation. Thus, I copied and pasted the output into the PostgreSQL console and got the tables created. It seems to work for now, but I am not sure if there will be... | 1 | 0 | 0 | Reinstalling Django App - Data tables not re-created | 1 | python,django | 0 | 2015-11-14T00:37:00.000 |
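For reference, the same SQL can also be generated from a script through Django's management API. A minimal sketch, assuming DJANGO_SETTINGS_MODULE points at your settings and that app_name/0001 are placeholders:

```python
import os
import django
from django.core.management import call_command

# Placeholder settings module and app/migration names.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()
call_command("sqlmigrate", "app_name", "0001")  # prints the CREATE TABLE SQL
```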
I would like to push sensor data from the raspberry pi to localhost phpmyadmin. I understand that I can install the mysql and phpmyadmin on the raspberry pi itself. But what I want is to access my local machine's database in phpmyadmin from the raspberry pi. Would it be possible? | 0 | 1 | 0.099668 | 0 | false | 33,705,227 | 0 | 1,407 | 2 | 0 | 0 | 33,704,183 | Well, from what I understand, you'd like to save the sensor data arriving in your Raspberry Pi to a database and access it from another machine. What I suggest is, install a mysql db instance and phpmyadmin in your Raspberry Pi and you can access phpmyadmin from another machine in the network by using the RPi's ip addr... | 1 | 0 | 0 | Push sensor data from raspberry pi to local host phpmyadmin database | 2 | mysql,python-2.7,phpmyadmin,raspberry-pi2 | 1 | 2015-11-14T01:29:00.000 |
I would like to push sensor data from the raspberry pi to localhost phpmyadmin. I understand that I can install the mysql and phpmyadmin on the raspberry pi itself. But what I want is to access my local machine's database in phpmyadmin from the raspberry pi. Would it be possible? | 0 | 0 | 0 | 0 | false | 33,716,584 | 0 | 1,407 | 2 | 0 | 0 | 33,704,183 | Sure, as long as they're on the same network and you have granted proper permission, all you have to do is use the proper hostname or IP address of the MySQL server (what you call the local machine). In whatever utility or custom script you have that writes data, use the networked IP address instead of 127.0.0.1 or loc... | 1 | 0 | 0 | Push sensor data from raspberry pi to local host phpmyadmin database | 2 | mysql,python-2.7,phpmyadmin,raspberry-pi2 | 1 | 2015-11-14T01:29:00.000 |
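A minimal sketch of such a remote insert with PyMySQL, assuming the MySQL server on the local machine has a user granted access from the Pi's address; host, credentials, and table are placeholders:

```python
import pymysql

# LAN IP of the machine running MySQL; the user must be granted access
# from the Pi's address (e.g. GRANT ALL ON sensors.* TO 'pi'@'%').
conn = pymysql.connect(host="192.168.1.50", user="pi",
                       password="secret", database="sensors")
with conn.cursor() as cur:
    cur.execute("INSERT INTO readings (sensor, value) VALUES (%s, %s)",
                ("temperature", 21.4))
conn.commit()
conn.close()
```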
Can I store PDF files in the database, as an object or blob, with Flask-Admin?
I can't find any reference in the documentation.
Thanks.
Cheers | 2 | -2 | -0.197375 | 0 | false | 33,724,438 | 1 | 3,809 | 1 | 0 | 0 | 33,722,132 | Flask-Admin doesn't store anything. It's just a window into the underlying storage.
So yes, you can have blob fields in a Flask-Admin app -- as long as the engine of your database supports blob types.
In case further explanation is needed, Flask-Admin is not a database. It is an interface to a database. In a flask-admi... | 1 | 0 | 0 | Storing a PDF file in DB with Flask-admin | 2 | python,mongodb,object,flask-sqlalchemy,flask-admin | 0 | 2015-11-15T16:40:00.000 |
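A sketch of what such a blob field could look like with Flask-SQLAlchemy, which Flask-Admin's default ModelView can then expose; the model and field names are made up:

```python
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class Document(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120))
    pdf_data = db.Column(db.LargeBinary)  # stored as BLOB/bytea by the engine
```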
Is it possible to use the same formatting variable for formatting multiple Excel workbooks using XlsxWriter? If yes, how? Currently I am able to use a formatting variable for a single Excel workbook, as I initialize it using the workbook.add_format method, but this variable is bound to that workbook only. | 1 | 1 | 0.197375 | 0 | false | 33,751,625 | 0 | 56 | 1 | 0 | 0 | 33,749,918 | Is it possible to use the same formatting variable for formatting multiple Excel workbooks using XlsxWriter?
No.
A formatting object is created by, and thus tied to, a workbook object.
However, there are other ways of doing what you need to do such as storing the properties for the format in a dict and using that to init... | 1 | 0 | 1 | Use same formatting variable for multiple Excel workbooks | 1 | python,format,xlsxwriter | 0 | 2015-11-17T05:39:00.000 |
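A sketch of that dict approach with XlsxWriter: the property dict is shared, while each workbook creates its own Format object from it.

```python
import xlsxwriter

# Shared property dict; each workbook builds its own Format from it.
bold_red = {"bold": True, "font_color": "red"}

for filename in ("one.xlsx", "two.xlsx"):
    workbook = xlsxwriter.Workbook(filename)
    fmt = workbook.add_format(bold_red)      # tied to *this* workbook only
    workbook.add_worksheet().write("A1", "hello", fmt)
    workbook.close()
```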
I have a database in MS Access. I am trying to query one table to Python using pypyodbc. I get the following error message:
ValueError: could not convert string to float: E+6
The numbers in the table are fairly big, with up to ten significant figures. The error message tells me that MSAccess is formatting them in sci... | 3 | 0 | 0 | 0 | false | 33,894,451 | 0 | 1,522 | 1 | 0 | 0 | 33,769,143 | As I was putting together test files for you to try to reproduce, I noticed that two of the fields in the table were set to Single type rather than Double. Changed them to Double and that solved the problem.
Sorry for the bother and thanks for the help. | 1 | 0 | 0 | Issue querying from Access database: "could not convert string to float: E+6" | 3 | python,ms-access,pypyodbc | 0 | 2015-11-17T23:35:00.000 |
I'm using SQLAlchemy with SQL Server as the database engine. I have queries that take a long time (approximately 10 seconds). When I send concurrent requests to the database, the response time grows (exactly: time = execution time * request count). I increased the connection pool but nothing changed. | 2 | 1 | 1.2 | 0 | true | 33,991,887 | 0 | 403 | 1 | 0 | 0 | 33,775,269 | If the issue is only with threads and not concurrent processes, then the
DBAPI in use would be suspect. I don't see which driver you are using
but perhaps it is not releasing the GIL while it waits for a server
response. Produce a test case that isolates it to just that driver running in two threads, and then report
it... | 1 | 0 | 0 | SQL alchemy is slow in concurrent connection | 1 | python,sql-server,sqlalchemy | 0 | 2015-11-18T08:41:00.000 |
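A hedged sketch of that isolation test, assuming pymssql as the DBAPI driver (swap in whichever driver you actually use); if two threads take roughly twice as long as one run, the driver is likely holding the GIL during network waits:

```python
import threading
import time
import pymssql  # assumption: substitute your actual DBAPI driver

SLOW_QUERY = "SELECT 1"  # placeholder: use your ~10 second query here

def run_query():
    conn = pymssql.connect(server="host", user="u", password="p", database="db")
    conn.cursor().execute(SLOW_QUERY)
    conn.close()

start = time.time()
threads = [threading.Thread(target=run_query) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("two threads took", time.time() - start, "seconds")
```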
I have a Python program that connects to an MSSQL database using an ODBC connection. The Python library I'm using is pypyodbc.
Here is my setup:
Windows 8.1 x64
SQL Server 2014 x64
Python 2.7.9150
PyPyODBC 1.3.3
ODBC Driver: SQL Server Native Client 11.0
The problem I'm having is that when I query a table with a varc... | 8 | 8 | 1 | 0 | false | 53,201,310 | 0 | 5,189 | 1 | 0 | 0 | 33,878,291 | The "{SQL Server}" driver doesn't work in my case. "{SQL Server}" works perfectly well if the database is on my local machine. However, when I tried to connect to the remote server, the error message below was always returned:
pypyodbc.DatabaseError: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver][DBNETLIB]SSL Security e... | 1 | 0 | 0 | How to get entire VARCHAR(MAX) column with Python pypyodbc | 3 | python,sql-server,pyodbc,pypyodbc | 0 | 2015-11-23T18:45:00.000 |
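For illustration, a connection-string sketch naming the newer driver listed in the question's setup; server name and credentials are placeholders:

```python
import pypyodbc

conn = pypyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword;"
)
cur = conn.cursor()
cur.execute("SELECT long_text_column FROM my_table")
print(len(cur.fetchone()[0]))  # length of the returned VARCHAR(MAX) value
```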
I have 2 different python processes (running from 2 separate terminals) running separately at the same time accessing and updating mysql. It crashes when they are using the same table at the same time. Any suggestions on how to fix it? | 0 | 0 | 0 | 0 | false | 34,680,890 | 0 | 41 | 1 | 0 | 0 | 33,909,039 | Are you using MyISAM or InnoDB? I suggest using InnoDB, since it has more flexible table/row locking for multiple simultaneous updates. | 1 | python,mysql,python-2.7,mysql-python | 0 | 2015-11-25T05:26:00.000 |
I want to connect my App Engine project to Google Cloud SQL, but I get an error that I exceeded the maximum of 12 connections in Python. I have a Cloud SQL D8 with 1000 simultaneous connections.
How can I change this connection limit? I'm using Django and Python.
thanks | 0 | 2 | 0.379949 | 0 | false | 33,978,178 | 1 | 223 | 1 | 1 | 0 | 33,977,130 | Each single app engine instance can have no more than 12 concurrent connections to Cloud SQL -- but then, by default, an instance cannot service more than 8 concurrent requests, unless you have deliberately pushed that up by setting the max_concurrent_requests in the automatic_scaling stanza to a higher value.
If you'v... | 1 | 0 | 0 | As codified the limit of 12 connections appengine to cloudsql | 1 | python,google-app-engine,google-cloud-sql | 0 | 2015-11-28T22:26:00.000 |
I'm self-teaching programming through the plethora of online resources to build a startup idea I've had for a while now. Currently, I'm using the SaaS platform at sharetribe.com for my business, but I'm trying to build my own platform as Sharetribe does not cater to the many options I'd like to have available to my user... | 0 | 0 | 0 | 0 | false | 34,133,045 | 1 | 657 | 1 | 0 | 0 | 34,129,887 | Read a good book on software development methodologies before you get into this. Then work through a simple online MySQL tutorial. After that, this will be a lot easier to do. | 1 | 0 | 0 | How do I build the database for my P2P rental marketplace? | 2 | python,mysql,ruby-on-rails,ruby,database | 0 | 2015-12-07T09:07:00.000 |
We are developing a b2b application with django. For each client, we launch a new virtual server machine and a database. So each client has a separate installation of our application. (We do so because by the nature of our application, one client may require high use of resources at certain times, and we do not want on... | 1 | 1 | 0.099668 | 0 | false | 34,878,492 | 1 | 510 | 2 | 0 | 0 | 34,197,011 | First, I'd really look (very hard) for a way to launch a script that does as masnun suggests on the client side, really hard.
Second, if that does not work, then I'd try the following:
Configure on your local machine all client databases in the settings variable DATABASES
Make sure you can connect to all the client da... | 1 | 0 | 0 | Running django migrations on multiple databases simultaneously | 2 | python,django,django-migrations | 0 | 2015-12-10T08:32:00.000 |
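A sketch of that loop, assuming every client database is registered under its own alias in settings.DATABASES:

```python
from django.conf import settings
from django.core.management import call_command

for alias in settings.DATABASES:
    call_command("migrate", database=alias)  # apply migrations per client DB
```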
We are developing a b2b application with django. For each client, we launch a new virtual server machine and a database. So each client has a separate installation of our application. (We do so because by the nature of our application, one client may require high use of resources at certain times, and we do not want on... | 1 | 3 | 0.291313 | 0 | false | 34,197,250 | 1 | 510 | 2 | 0 | 0 | 34,197,011 | If we update the application code, when we push to the master branch,
all installations detect this, pull the latest version of the code and
restart the application.
I assume that you have some sort of automation to pull the codes and restart the web server. You can just add the migration to this automation proces... | 1 | 0 | 0 | Running django migrations on multiple databases simultaneously | 2 | python,django,django-migrations | 0 | 2015-12-10T08:32:00.000 |
Python, Twistd and SO newbie.
I am writing a program that organises seating across multiple rooms. I have only included related columns from the tables below.
Basic MySQL tables:
Table: id
Seat: id, table_id, name
Card: seat_id
The Seat and Table tables are pre-populated with the 'name' columns initially NULL.
Stage ... | 3 | 0 | 0 | 0 | false | 35,131,551 | 0 | 311 | 1 | 1 | 0 | 34,213,706 | I think the best way to accomplish this is to first make a select for the id (or ids) of the row/rows you want to update, then update the row with a WHERE condition matching the id of the item to update. That way you are certain that you only updated the specific item.
An UPDATE statement can update multiple rows that ... | 1 | 0 | 0 | Python Twistd MySQL - Get Updated Row id (not inserting) | 1 | python,mysql,twisted | 0 | 2015-12-10T23:32:00.000 |
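A minimal sketch of that select-then-update pattern with a plain DB-API driver (MySQLdb here); table and values are illustrative:

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="u", passwd="p", db="seating")
cur = conn.cursor()
# 1) find the row to claim
cur.execute("SELECT id FROM Seat WHERE name IS NULL AND table_id = %s LIMIT 1", (1,))
row = cur.fetchone()
if row:
    seat_id = row[0]
    # 2) update exactly that row; the id is known without lastrowid,
    #    which only applies to INSERTs
    cur.execute("UPDATE Seat SET name = %s WHERE id = %s", ("Alice", seat_id))
    conn.commit()
```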
I have a python code which needs to retrieve and store data to/from a database on a LAMP server. The LAMP server and the device running the python code are never on the same internet network. The devices running the python code can be either a Linux, Windows or a MAC system. Any idea how could I implement this? | 0 | -1 | -0.197375 | 0 | false | 34,242,082 | 0 | 155 | 1 | 0 | 0 | 34,242,017 | are never on the same internet network.
Let me clear the question, the problem is are never on the same internet network. firstly you need to fix the network issue, add router between the two sides which you want to communicate with. No relations with Python or LAMP.
let me assume your DB is mysql, if you can make yo... | 1 | 0 | 0 | How to fetch or store data into a database on a LAMP server from devices over the internet? | 1 | python,mysql,database,lamp,mysql-python | 0 | 2015-12-12T16:11:00.000 |
As the title says, I’ am trying to run Flask alongside a PHP app.
Both of them are running under Apache 2.4 on Windows platform. For Flask I’m using wsgi_module.
The Flask app is actually an API. The PHP app controls users login therefore users access to API. Keep in mind that I cannot drop the use of the PHP app becau... | 2 | 1 | 1.2 | 0 | true | 34,272,457 | 1 | 1,770 | 1 | 0 | 0 | 34,266,083 | I'm not sure this is the answer you are looking for, but I would not try to have the Flask API access session data from PHP. Sessions and API do not go well together, a well designed API does not need sessions, it is instead 100% stateless.
What I'm going to propose assumes both PHP and Flask have access to the user da... | 1 | 0 | 0 | Run Flask alongside PHP [sharing session] | 1 | php,python,session,flask | 1 | 2015-12-14T11:37:00.000 |
I need to create random entries matching a given SQL schema with the help of the Python programming language.
Is there a simple way to do that, or do I have to write my own generators? | 0 | 1 | 0.066568 | 0 | false | 63,923,673 | 0 | 1,307 | 1 | 0 | 0 | 34,301,518 | You can also use Faker.
Just pip install faker.
Then go through the documentation and check it out. | 1 | 0 | 0 | How to create random entries in database with python | 3 | python,sql,generator | 0 | 2015-12-15T23:38:00.000 |
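A minimal sketch combining Faker with sqlite3; the table and fields are made up:

```python
import sqlite3
from faker import Faker

fake = Faker()
conn = sqlite3.connect("test.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, email TEXT, city TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(fake.name(), fake.email(), fake.city()) for _ in range(100)])
conn.commit()
```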
I have created groups to give access rights everything seems fine but I want to custom access - rights for module issue. When user of particular group logins, I want that user only able to create/edit their own issue and can't see other users issue.Please help me out!!
Thanks | 0 | 2 | 0.197375 | 0 | false | 34,328,053 | 1 | 6,109 | 1 | 0 | 0 | 34,327,655 | Providing access rule is one part of the solution. If you look at "Access Control List" in "Settings > Technical > Security > Access Controls Lists", you can see that the group Hr Employee has only read access to the model hr.employee. So first you have to provide write access also to model hr.employee for group Employ... | 1 | 0 | 0 | How to make user can only access their own records in odoo? | 2 | python,xml,openerp | 0 | 2015-12-17T06:03:00.000 |
MySQLdb as I understand doesn't support Python 3. I've heard about PyMySQL as a replacement for this module. But how does it work in production environment?
Is there a big difference in speed between these two? I'm asking because I will be managing a very active webapp that needs to create entries in the database very of... | 0 | 3 | 1.2 | 0 | true | 34,341,868 | 0 | 79 | 1 | 0 | 0 | 34,341,489 | PyMySQL is a pure-python database connector for MySQL, and can be used as a drop-in replacement using the install_as_MySQLdb() function. As a pure-python implementation, it will have some more overhead than a connector that uses C code, but it is compatible with other versions of Python, such as Jython and PyPy.
At th... | 1 | 0 | 0 | MySQL module for Python 3 | 1 | python,django,python-3.x | 0 | 2015-12-17T18:13:00.000 |
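For reference, the usual drop-in shim, placed e.g. at the top of manage.py so Django's MySQL backend finds PyMySQL on Python 3:

```python
import pymysql
pymysql.install_as_MySQLdb()  # "django.db.backends.mysql" now uses PyMySQL
```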
I can't connect to a DB2 remote server using Python. Here is what I've done:
Created a virtualenv with Python 2.7.10 (On Mac OS X 10.11.1)
installed ibm-db using sudo pip install ibm_db
Ran the following code:
import ibm_db
ibm_db.connect("my_connection_string", "", "")
I then get the following error:
Exception: [... | 1 | 1 | 0.066568 | 0 | false | 34,651,608 | 0 | 2,550 | 1 | 1 | 0 | 34,436,084 | We are able to install the driver successfully and connection to db is established without any problem.
The steps are:
1) Upgraded to OS X El Capitan
2) Install pip - sudo pip install
3) Install ibm_db - sudo pip install ibm_db
4) During installation, below error was hit
Referenced from: /Users/roramana/Library/Pytho... | 1 | 0 | 0 | Can't connect to DB2 Driver through Python: SQL1042C | 3 | python,db2,dashdb | 0 | 2015-12-23T12:48:00.000 |
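For reference, a keyword-style connection string sketch for ibm_db; all values are placeholders:

```python
import ibm_db

conn_str = (
    "DATABASE=mydb;"
    "HOSTNAME=myhost.example.com;"
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=myuser;"
    "PWD=mypassword;"
)
conn = ibm_db.connect(conn_str, "", "")
```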
I am working on scaling out a webapp and providing some database redundancy for protection against failures and to keep the servers up when updates are needed. The app is still in development, so I have chosen a simple multi-master redundancy with two separate database servers to try and achieve this. Each server will ... | 0 | 0 | 0 | 0 | false | 34,841,926 | 1 | 1,628 | 1 | 0 | 0 | 34,468,030 | Your idea of the router is great! I would add that you need to automatically detect whether a database is slow or down. You can detect that by the response time and by connection/read/write errors. If this happens, you exclude this database from your round-robin list for a while, trying to connect back to it every... | 1 | 0 | 0 | Multi-master database replication with Django webapp and MySQL | 1 | python,mysql,django,multi-master-replication | 0 | 2015-12-26T02:34:00.000 |
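A hypothetical sketch of such a router for Django; the aliases are assumed to exist in settings.DATABASES, and mark_down() would be called by your error-handling or health-check code:

```python
import itertools
import time

ROTATION = ["master_a", "master_b"]   # aliases defined in settings.DATABASES
_down_until = {}                      # alias -> time until which it is excluded
_cycle = itertools.cycle(ROTATION)

def mark_down(alias, seconds=30):
    """Exclude a database after a connection/read/write error."""
    _down_until[alias] = time.time() + seconds

class RoundRobinRouter:
    def db_for_read(self, model, **hints):
        for _ in ROTATION:
            alias = next(_cycle)
            if _down_until.get(alias, 0) < time.time():
                return alias
        return "default"

    db_for_write = db_for_read
```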
I have a Django app with a postgres backend hosted on Heroku. I'm now migrating it to Azure. On Azure, the Django application code and postgres backend have been divided over two separate VMs.
Everything's set up, I'm now at the stage where I'm transferring data from my live Heroku website to Azure. I downloaded a pg_... | 0 | 1 | 1.2 | 0 | true | 34,480,125 | 1 | 119 | 1 | 0 | 0 | 34,472,609 | Try those same steps WITHOUT running syncdb and migrate at all. So overall, your steps will be:
heroku pg:backups capture
curl -o latest.dump $(heroku pg:backups public-url)
scp -P <port> latest.dump myuser@example.cloudapp.net:/home/myuser
drop database mydb;
create database mydb;
pg_restore --verbose --clean --no-acl --no-ow... | 1 | 0 | 0 | Unable to correctly restore postgres data: I get the same error I usually get if I haven't run syncdb and migrate | 1 | python,django,database,postgresql,database-migration | 0 | 2015-12-26T15:18:00.000 |
Edited to clarify my meaning:
I am trying to find a method using a Django action to take data from one database table and then process it into a different form before inserting it into a second table. I am writing a kind of vocabulary dictionary which extracts data about students' vocabulary from their classroom texts.... | 0 | 1 | 0.099668 | 0 | false | 34,477,438 | 1 | 884 | 1 | 0 | 0 | 34,477,062 | I am pretty sure there is no built-in way for something this specific. Finding single words in a text alone is a quite complex task if you take into consideration misspelled words, hyphen-connected words, quotes, all sorts of punctuation and unicode letters.
Your best bet would be using a regex for each text and save t... | 1 | 0 | 0 | Django way to modify a database table using the contents of another table | 2 | python,mysql,django | 0 | 2015-12-27T02:18:00.000 |
I am using MongoVue and Python library Pymongo to insert some documents. I used MongoVue to see the db created. It was not listed. However, I made a find() request in shell. I got all the inserted documents.
Once I manually create the DB, all the inserted documents appear. Every other db inside localhost is not ... | 0 | 0 | 1.2 | 0 | true | 37,524,187 | 0 | 117 | 1 | 0 | 0 | 34,488,751 | So, I found out the fix for this behavior. Refreshing in MongoVue didn't work, so I had to close and reopen MongoVue to see the newly created collections. | 1 | 0 | 0 | Database is not appearing in MongoVue | 1 | python-2.7,mongovue,pymongo-2.x | 0 | 2015-12-28T06:35:00.000 |
I am using pymongo driver to work with Mongodb using Python. Every time when I run a query in python shell, it returns me some output which is very difficult to understand. I have used the .pretty() option with mongo shell, which gives the output in a structured way.
I want to know whether there is any method like pre... | 6 | -1 | -0.039979 | 0 | false | 34,493,742 | 0 | 4,368 | 1 | 0 | 0 | 34,493,535 | It probably depends on your IDE, not the pymongo itself. the pymongo is responsible for manipulating data and communicating with the mongodb. I am using Visual Studio with PTVS and I have such options provided from the Visual Studio. The PyCharm is also a good option for IDE that will allow you to watch your code varia... | 1 | 0 | 1 | Pretty printing of output in pymongo | 5 | mongodb,python-3.x,pymongo | 0 | 2015-12-28T12:26:00.000 |
I set a key that I now realize is wrong. It is set at migration 0005. The last migration I did was 0004. I'm now up to 0008. I want to rebuild the migrations with the current models.py against the current database schema. Migration 0005 is no longer relevant and has been deleted from models.py. Migration 0005 is ... | 0 | 0 | 0 | 0 | false | 61,643,148 | 1 | 512 | 1 | 0 | 0 | 34,502,379 | Simply delete the 0005-0008 migration files from the migrations/ folder.
Regarding database tables, you won't need to delete anything from there if the migrations weren't applied. You can check the django_migrations table entries yourself to be sure. | 1 | 0 | 0 | Delete migrations that haven't been migrated yet | 1 | python,django,django-migrations,django-1.9 | 0 | 2015-12-28T23:37:00.000 |
I have data in an excel spreadsheet (*.xlsx) that consists of 1,213 rows of sensitive information (so, I'm sorry I can't share the data) and 35 columns. Every entry is a string (I don't know if that is screwing it up or not). The first row is the column names and I've never had a problem importing it with the column na... | 0 | 0 | 0 | 0 | false | 34,534,891 | 0 | 626 | 1 | 0 | 0 | 34,532,708 | I have had QlikView crash when importing an Excel spreadsheet that was exported with the SQuirreL SQL client (from a Firebird database). Opening the spreadsheet in Excel, and saving it again solved the problem.
I know that this is no longer relevant to your problem, but hopefully it can help someone with a similarly ap... | 1 | 1 | 0 | Why does QlikView keep crashing when I try to load my data? | 3 | python,excel,qlikview | 0 | 2015-12-30T15:52:00.000 |
My application is very database intensive so I'm trying to reduce the load on the database. I am using PostgreSQL as rdbms and python is the programming language.
To reduce the load I am already using a caching mechanism in the application. The caching types I use are a server cache and a browser cache.
Currently I'm tuning ... | 11 | 2 | 0.197375 | 0 | false | 58,349,778 | 0 | 3,440 | 1 | 0 | 0 | 34,553,778 | Tuning PostgreSQL is far more than just tuning caches. In fact, the primary high level things are "shared buffers" (think of this as the main data and index cache), and the work_mem.
The shared buffers help with reading and writing. You want to give it a decent size, but it's for the entire cluster.. and you can't real... | 1 | 0 | 0 | Enable the query cache in postgreSQL to improve performance | 2 | python,sql,database,postgresql,caching | 0 | 2016-01-01T05:26:00.000 |
I am trying to use OpenPyXL to create invoices. I have a worksheet with an area to be printed and some notes outside of that range. I have most everything working but I am unable to find anything in the API for one function. Is there a way to set the print area on a worksheet?
I am able to find lots of print settings, ... | 2 | 0 | 0 | 0 | false | 34,579,357 | 0 | 3,471 | 1 | 0 | 0 | 34,578,910 | This isn't currently directly possible. You could do it manually by creating a definedNamed using the reserved xlnm prefix (see Worksheet.add_print_title for an example. | 1 | 0 | 0 | OpenPyXL - How to set print area for a worksheet | 3 | python,openpyxl | 0 | 2016-01-03T16:37:00.000 |
I am writing a python code for beam sizing. I have an Excel workbook from AISC that has all the data of the shapes and other various information on the cross-sections. I would like to be able to reference data in particular cells in this Excel workbook in my python code.
For example if the width of rectangle is 2in a... | 1 | 0 | 0 | 0 | false | 34,603,357 | 0 | 2,722 | 1 | 0 | 0 | 34,603,090 | Thank you for you inputs. I have found the solution I was looking for by using Numpy.
data = np.loadtxt('C:\Users[User_Name]\Desktop[fname].csv', delimiter=',')
using that it took the data and created an array with the data that I needed. Now I am able to use the data like any other matrix or array. | 1 | 0 | 0 | Reference Excel in Python | 4 | python,excel | 0 | 2016-01-05T02:03:00.000 |
I've had a fairly good look on the web for an answer to this question, but I've tended to find that people assume more knowledge of databases than I currently have. I'm sorry if this is a rookie question - I've always been aware of databases and their advantages, but never actually had to work with them.
I have a requ... | 0 | 1 | 1.2 | 0 | true | 34,609,510 | 0 | 93 | 1 | 0 | 0 | 34,609,259 | Although I would consider creating and maintaining databases in Python bad practice ( at least for MySQL and SQL Server), these databases will be fully compatible with non-Python tools and processes as they are created with the same SQL code. Regarding SQLAlchemy, this is used by several major companies and I have neve... | 1 | 0 | 0 | Python linkage to SQL databases | 1 | python,mysql,sql-server,database | 0 | 2016-01-05T10:20:00.000 |
In pyodbc, cursor.rowcount works perfectly when using cursor.execute(). However, it always returns -1 when using cursor.executemany().
How does one get the correct row count for cursor.executemany()?
This applies to multiple inserts, updates, and deletes. | 5 | 1 | 0.099668 | 0 | false | 47,834,345 | 0 | 7,653 | 1 | 0 | 0 | 34,613,875 | You can't; only the last query's row count is returned from executemany, at least according to the pyodbc code docs. A value of -1 usually indicates a problem with the query, though.
If you absolutely need the rowcount, you need to either call cursor.execute() in a loop or write a patch for the pyodbc library. | 1 | 0 | 0 | How to get correct row count when using pyodbc cursor.executemany() | 2 | python,sql,pyodbc | 0 | 2016-01-05T14:20:00.000 |
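The loop workaround in sketch form; the DSN, table, and data are placeholders:

```python
import pyodbc

conn = pyodbc.connect("DSN=mydsn")          # placeholder connection
cursor = conn.cursor()
rows_to_update = [("a", 1), ("b", 2)]       # placeholder data

total = 0
for params in rows_to_update:
    cursor.execute("UPDATE t SET col = ? WHERE id = ?", params)
    total += cursor.rowcount                # reliable after each execute()
conn.commit()
print(total, "rows affected")
```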
I am writing a form submit in my application written in Python/Django. The form has an attachment (up to 3 MB) uploaded. On submit it has to save the attachment in AWS S3, save the other data in the database, and also send emails.
This form submit is taking too much time and the UI is hanging.
Is there any other way to do this in p... | 1 | 0 | 0 | 0 | false | 34,673,727 | 1 | 427 | 1 | 0 | 0 | 34,673,515 | The usual solution to tasks that are too long to be handled synchronously and can be handled asynchronously is to delegate them to some async queue like celery.
In your case, saving the form's data to db should be quite fast so I would not bother with this part, but moving the uploaded file to s3 and sending mails are... | 1 | 0 | 0 | Python - On form submit send email and save record in database taking huge time | 2 | python,django,forms,performance,amazon-s3 | 0 | 2016-01-08T09:28:00.000 |
Let's say that I need to maintain an index on a table where multiple documents can relate to the same item_id (not the primary key, of course).
Can a secondary compound index, based on the result of a function which for any item_id returns the most recent document matching a condition, update itself whenever a newer documen... | 0 | 0 | 0 | 0 | false | 34,750,764 | 0 | 65 | 1 | 0 | 0 | 34,750,575 | I'm not 100% sure I understand the question, but if you have a secondary index and insert a new document or change an old document, the document will be in the correct place in the index once the write completes. So if you had a secondary index on a timestamp, you could write r.table('items').orderBy(index: r.desc('ti... | 1 | 0 | 1 | RethinkDb do function based secondary indexes update themselves dynamically? | 1 | indexing,rethinkdb,rethinkdb-python | 0 | 2016-01-12T17:53:00.000 |
I'm trying to connect to a SQL Server named instance from python 3.4 on a remote server, and get an error.
File "C:\Scripts\Backups Integrity Report\Backup Integrity Reports.py", line 269, in
conn = pymssql.connect(host=r'hwcvcs01\HDPS', user='My-office\romano', password='PASS', database='CommServ')
File "pymssql... | 3 | 1 | 0.099668 | 0 | false | 66,684,103 | 0 | 4,865 | 1 | 0 | 0 | 34,774,326 | According to the pymssql documentation on the pymssql Connection class, for a named instance containing database theDatabase, looking like this:
myhost\myinstance
You could connect as follows:
pymssql.connect(host=r'myhost\myinstance', database='theDatabase', user='user', password='pw')
The r-string is a so-called raw ... | 1 | 0 | 0 | Python pymssql - Connecting to Named Instance | 2 | python,instance,pymssql,named | 0 | 2016-01-13T18:28:00.000 |
I'm new to Django. It took me a whole afternoon to configure the MySQL engine. I am very confused about the database engine and the database driver. Is the engine also the driver? All the tutorials say that the ENGINE should be 'django.db.backends.mysql', but how does the ENGINE decide which driver is used to connect to MySQL?
... | 20 | 1 | 0.099668 | 0 | false | 53,195,032 | 1 | 18,120 | 1 | 0 | 0 | 34,777,755 | The short answer is no, they are not the same.
The engine, in a Django context, is in reference to RDBMS technology. The driver is the library developed to facilitate communication to that actual technology when up and running. Letting Django know what engine to use tells it how to translate the ORM functions from a bac... | 1 | 0 | 0 | How to config Django using pymysql as driver? | 2 | python,mysql,django,pymysql | 0 | 2016-01-13T21:50:00.000 |
I have a Flask-admin application and I have a class with "Department" and "Subdepartment" fields.
In the create form, I want that when a Department is selected, the Subdepartment select automatically loads all the corresponding subdepartments.
In the database, I have a "department" table and a "sub_department" tabl... | 0 | 0 | 0 | 0 | false | 34,786,896 | 1 | 44 | 1 | 0 | 0 | 34,786,665 | You have to follow this steps
Javascript
Bind a on change event to your Department select .
If the select changes you get the value selected.
When you get the value, you have to send it to the server through an AJAX request.
Flask
Implement a method that reads the value and loads the associated Subdepartments.
Send... | 1 | 0 | 0 | Load a select list when selecting another select | 1 | python,flask,flask-admin | 0 | 2016-01-14T10:05:00.000 |
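A sketch of the Flask side of those steps; SubDepartment is assumed to be a SQLAlchemy model with department_id, id, and name attributes:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/subdepartments/<int:dept_id>")
def subdepartments(dept_id):
    # SubDepartment is assumed to be a SQLAlchemy model
    rows = SubDepartment.query.filter_by(department_id=dept_id).all()
    return jsonify(items=[{"id": r.id, "name": r.name} for r in rows])
```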
Coming off an NLTK NER problem, I have PERSONS and ORGANIZATIONS, which I need to store in a sqlite3 db. The received wisdom is that I need to create separate TABLEs to hold these sets. How can I create a TABLE when len(PERSONS) could vary for each id? It can even be zero. The normal use of:
insert into table_name ... | 0 | 0 | 0 | 0 | false | 34,827,171 | 0 | 129 | 1 | 0 | 0 | 34,824,495 | Thanks to CL.'s comment, I figured out the best way is to think of rows in a two-column table, where the first column is id INT and the second column contains person_names. This way, there will be no issue with varying lengths of the PERSONS list. Of course, to link the main table with the persons table, the id field has to ... | 1 | 0 | 0 | insert data in sqlite3 when array could be of different lengths | 1 | python-2.7,database-design,sqlite | 0 | 2016-01-16T07:09:00.000 |
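The two-column design in sketch form; executemany() handles any list length, including an empty one:

```python
import sqlite3

conn = sqlite3.connect("ner.db")
conn.execute("CREATE TABLE IF NOT EXISTS persons (doc_id INTEGER, person_name TEXT)")
doc_id, persons = 7, ["Alice Smith", "Bob Jones"]  # list length may vary, even 0
conn.executemany("INSERT INTO persons VALUES (?, ?)",
                 [(doc_id, p) for p in persons])
conn.commit()
```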
I have been working on a localhost copy of my Django website for a little while now, but finally decided it was time to upload it to PythonAnywhere. The site works perfectly on my localhost, but I am getting strange errors when I do the initial migrations for the new site. For example, I get this:
mysql.connector.erro... | 1 | 1 | 1.2 | 0 | true | 34,837,989 | 1 | 463 | 1 | 0 | 0 | 34,836,049 | As it says in my comment above, it turns out that the problem with the database resulted from running an upgrade of Django from 1.8 to 1.9. I had forgotten about this. After rolling my website back to Django 1.8, the database migrations ran correctly.
The reason why I could not access the website turned out to be becau... | 1 | 0 | 0 | Strange error during initial database migration of a Django site | 2 | python,django,python-3.x,django-forms,pythonanywhere | 0 | 2016-01-17T07:12:00.000 |
As part of a big system, I'm trying to implement a service that (among other tasks) will serve large files (up to 300MB) to other servers (running in Amazon).
This files service needs to have more than one machine up and running at each time, and there are also multiple clients.
Service is written in Python, using Torn... | 0 | 0 | 0 | 0 | false | 34,899,601 | 1 | 191 | 1 | 1 | 0 | 34,895,738 | You also can use MongoDb , it provides several API, and also you can store file in S3 bucket with the use of Multi-Part Upload | 1 | 0 | 0 | Serving large files in AWS | 1 | python,mysql,amazon-web-services,nas | 0 | 2016-01-20T09:09:00.000 |
I'm using wget to download an Excel file with the xlsx extension. The thing is that when I want to deal with the file using openpyxl, I get the above-mentioned error. But when I download the file manually using Firefox, I don't have any problems.
So I checked the difference between the two downloaded files. I found that the... | 0 | 0 | 0 | 0 | false | 34,896,780 | 0 | 1,390 | 1 | 0 | 0 | 34,896,043 | If the files are different in size then wget isn't getting the right file. Many websites now rely on javascript to handle links which wget can't emulate. I suspect that if you look at the file with less you'll see some HTML source as opposed to the start of a zipfile. | 1 | 0 | 0 | wget causes "BadZipfile: File is not a zip file" for openpyxl | 1 | python-2.7,debian,wget | 0 | 2016-01-20T09:23:00.000 |
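A quick way to confirm that diagnosis from Python: an .xlsx is just a zip archive, so a bad download shows up immediately:

```python
import zipfile

path = "downloaded.xlsx"
print(zipfile.is_zipfile(path))   # False -> not a real xlsx
with open(path, "rb") as f:
    print(f.read(64))             # a real xlsx starts with b'PK'
```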
I have a Django website running on an Amazon EC2 instance. I want to add an EBS. In order to do that, I need to change the location of my PGDATA directory if I understand well. The new PGDATA path should be something like /vol/mydir/blabla.
I absolutely need to keep the data safe (some kind of dump could be useful).
D... | 1 | 0 | 0 | 0 | false | 34,922,249 | 1 | 62 | 1 | 0 | 0 | 34,905,744 | Ok, thanks for your answers, I used :
find . -name "postgresql.conf" to find the configuration find, which was located into the "/etc/postgresql/9.3/main" folder. There is also pg_lsclusters if you want to show the directory data.
Then I edited that file putting the new path, restarted postgres and imported my old DB. | 1 | 0 | 0 | Django PostgreSQL : migrating database to a different directory | 1 | python,django,postgresql,amazon-ec2 | 0 | 2016-01-20T16:46:00.000 |
I have a series of python objects, each associated with a different user, e.g., obj1.userID = 1, obj2.userID = 2, etc. Each object also has a transaction history expressed as a python dict, i.e., obj2.transaction_record = {"itemID": 1, "amount": 1, "date": "2011-01-04"} etc.
I need these objects to persist, and transa... | 0 | -1 | -0.066568 | 0 | false | 34,979,208 | 0 | 358 | 1 | 0 | 0 | 34,978,896 | If you didn't make a decision until now for what kind of database you'll use, I advise you to pick mongodb as database server and mongoengine module for persist data, it's what you need, mongoengine has a DictField you can store in it a python dict directly and it's very easy to learn. | 1 | 0 | 0 | What kind of database schema would I use to store users' transaction histories? | 3 | python,sqlite,sqlalchemy | 0 | 2016-01-24T17:17:00.000 |
I am trying to call a postgres database procedure using psycopg2 in my python class.
lCursor.callproc('dbpackage.proc',[In_parameter1,In_parameter2,out_parameter]).
In_parameter values is 5008001#60°V4#FR.tif
But I am getting the below error.
DataError: invalid byte sequence for encoding "UTF8": 0xb0
I have tried mostl... | 0 | 0 | 1.2 | 0 | true | 34,993,660 | 0 | 2,227 | 1 | 0 | 0 | 34,993,615 | Your encoding and the database connection encoding don't match. The database connection is in UTF8 and you're probably trying to send with Latin1 encoding.
When opening the connection send SET client_encoding TO 'Latin1', after that PostgreSQL will assume all strings to be in Latin1 encoding regardless of the database ... | 1 | 0 | 0 | DataError: invalid byte sequence for encoding "UTF8": 0xb0 while calling the database procedure | 1 | python,postgresql,utf-8 | 0 | 2016-01-25T13:17:00.000 |
When I am defining a model and using unique_together in the Meta, I can define more than one tuple. Are these going to be ORed or ANDed? That is lets say I have a model where
class MyModel(models.Model):
druggie = ForeignKey('druggie', null=True)
drunk = ForeignKey('drunk', null=True)
quarts = IntegerField... | 23 | 24 | 1.2 | 0 | true | 35,024,190 | 1 | 5,860 | 1 | 0 | 0 | 35,024,007 | Each tuple results in a discrete UNIQUE clause being added to the CREATE TABLE query. As such, each tuple is independent and an insert will fail if any data integrity constraint is violated. | 1 | 0 | 0 | Multiple tuples in unique_together | 1 | python,django,django-models | 0 | 2016-01-26T21:10:00.000 |
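For illustration, a sketch of a model with two tuples, each becoming its own UNIQUE constraint that every insert must satisfy independently (field names are assumptions based on the truncated question):

```python
from django.db import models

class MyModel(models.Model):
    druggie = models.ForeignKey('Druggie', null=True, on_delete=models.CASCADE)
    drunk = models.ForeignKey('Drunk', null=True, on_delete=models.CASCADE)
    quarts = models.IntegerField(null=True)

    class Meta:
        # two discrete UNIQUE constraints, both enforced on every insert
        unique_together = (('druggie', 'quarts'), ('drunk', 'quarts'))
```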
I have a python code which queries psql and returns a batch of results using cursor.fetchall().
It throws an exception and fails the process if a casting fails, due to bad data in the DB.
I get this exception:
File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 377, in fetchall
return [... | 4 | 0 | 0 | 0 | false | 35,034,708 | 0 | 280 | 1 | 0 | 0 | 35,033,997 | change your psql query to cast and get the date column as string
e.g. select date_column_name:: to_char from table_name. | 1 | 0 | 0 | psql cast parse error during cursor.fetchall() | 2 | python,python-2.7,psycopg2,psql | 0 | 2016-01-27T09:55:00.000 |
I was asked to port an Access database to MySQL and
provide a simple web frontend for the users.
The DB consists of 8-10 tables and stores data about
clients consulting (client, consultant,topic, hours, ...).
I need to provide a webinterface for our consultants to use,
where they insert all this information during a se... | 0 | 0 | 0 | 0 | false | 35,039,883 | 1 | 670 | 1 | 0 | 0 | 35,038,543 | This double/redundant way of talking to my DB strikes me as odd and web2py does not support python3.
Any abstraction you want to use to communicate with your database (whether it be the web2py DAL, the Django ORM, SQLAlchemy, etc.) will have to have some knowledge of the database schema in order to construct queries.
... | 1 | 0 | 0 | Using web2py for a user frontend crud | 1 | python,mysql,frontend,crud,web2py | 0 | 2016-01-27T13:21:00.000 |
Our current Python pipeline scrapes from the web and stores that data in MongoDB. After that we load the data into an analysis algorithm. This works well on a local computer since mongod locates the database, but I want to upload the database to a sharing platform like Google Drive so that other users can use the ... | 1 | 0 | 0 | 7,187 | 1 | 0 | 0 | 35,119,959 | You can create a little REST API for your database with unique keys, and everyone on your team will be able to use it.
If you only need a one-time export, just export it to JSON and there's no problem. | 1 | 0 | 0 | How to share database created by MongoDB? | 3 | python,mongodb,pymongo,database | 0 | 2016-01-31T21:52:00.000 |
My Python script uses an ADODB.Recordset object. I use an ADODB.Command object with a collection of ADODB.Parameter objects to update a record in the set. After that, I check the state of the recordset, and it was 1, which is adStateOpen. But when I call MyRecordset.Close(), I get an exception complaining that the o... | 0 | 0 | 1.2 | 0 | true | 35,134,057 | 0 | 305 | 1 | 0 | 0 | 35,133,678 | Yes, that was the problem. Once I change the value of one of a recordset's ADODB.Field objects, I have to either update the recordset using ADODB.Recordset.Update() or call CancelUpdate().
The reason I'm going through all this rigarmarole of the ADODB.Command object is that ADODB.Recordset.Update() fails at random (... | 1 | 0 | 0 | Why can't a close an open ADODB.Recordset? | 1 | python,adodb | 0 | 2016-02-01T14:59:00.000 |
I created my own python module and packaged it with distutils. Now I installed it on a new system (python setup.py install) and I'm trying to call it from a plpython3u function, but I get an error saying the module does not exist.
It was working on a previous Ubuntu instalation, and I'm not sure what I did wrong when s... | 2 | 1 | 1.2 | 0 | true | 35,205,633 | 0 | 884 | 1 | 0 | 0 | 35,204,352 | Sorry guys I think I found the problem. I'm using plpython3 in my stored procedure, but intalled my custom module using python 2. I just did sudo python3 setup.py install and now it's working on the native Ubuntu. I'll now try modifying my docker image and see if it works there too.
Thanks | 1 | 0 | 0 | Can't import own python module in Postgresql plpython function | 1 | python,postgresql,ubuntu,docker | 0 | 2016-02-04T14:58:00.000 |
In a platform using Flask, SQLAlchemy, and Alembic, we constantly need to create new separate instances with their own set of resources, including a database.
When creating a new instance, SQLAlchemy's create_all gives us a database with all the updates up to the point when the instance is created, but this means that ... | 4 | 1 | 1.2 | 0 | true | 35,275,008 | 1 | 591 | 1 | 0 | 0 | 35,260,536 | If you know the state of the database you can just stamp the revision you were at when you created in the instance.
setup instance
run create_all
alembic heads (to determine latest version available in scripts dir)
alembic stamp <revision>
Here is the doc from the commandline:
stamp 'stamp' the revision table wi... | 1 | 0 | 0 | SQLAlchemy, Alembic and new instances | 1 | python,sqlalchemy,flask-sqlalchemy,alembic | 0 | 2016-02-07T23:32:00.000 |
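The same stamp step is also available through Alembic's Python API; the ini path is a placeholder:

```python
from alembic import command
from alembic.config import Config

cfg = Config("alembic.ini")
command.stamp(cfg, "head")   # mark the freshly created schema as current
```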
If one is using Django, what happens with changes made directly to the database (in my case postgres) through either pgadmin or psql?
How are such changes handled by migrations? Do they take precedence over what the ORM thinks the state of affairs is, or does Django override them and impose its own sense of change hi... | 7 | 3 | 0.291313 | 0 | false | 35,273,897 | 1 | 670 | 1 | 0 | 0 | 35,273,294 | The migrations system does not look at your current schema at all. It builds up its picture from the graph of previous migrations and the current state of models.py. That means that if you make changes to the schema from outside this system, it will be out of sync; if you then make the equivalent change in models.py an... | 1 | 0 | 0 | Edit database outside Django ORM | 2 | python,django,git,postgresql | 0 | 2016-02-08T15:31:00.000 |
I am using Apache with mod_wsgi on the Windows platform to deploy my Flask application. I am using SQLAlchemy to connect to a Redshift database with a connection pool (size 10).
After a few days I suddenly started getting the following error.
(psycopg2.OperationalError) SSL SYSCALL error: Software caused connection abort
Can anybody sugges... | 4 | 1 | 0.197375 | 0 | false | 44,923,869 | 1 | 3,963 | 1 | 0 | 0 | 35,322,629 | I solved this error by turning DEBUG=False in my config file [and/or in the run.py]. Hope it helps someone. | 1 | 0 | 0 | (psycopg2.OperationalError) SSL SYSCALL error: Software caused connection abort | 1 | python,apache,flask,amazon-redshift | 0 | 2016-02-10T18:00:00.000 |
I'm developing a Django 1.8 application locally and having reached a certain point a few days ago, I uploaded the app to a staging server, ran migrations, imported the sql dump, etc. and all was fine.
I've since resumed local development which included the creation of a new model, and changing some columns on an existi... | 0 | 0 | 1.2 | 0 | true | 35,343,687 | 1 | 772 | 1 | 0 | 0 | 35,336,992 | Try running migrate --fake-initial since you're getting the "relation already exists" error. Failing that, I would manually back up each one of my migration folders, remove them from the server, then re-generate migration files for each app and run them all again from scratch (i.e., the initial makemigrations). | 1 | 0 | 0 | Migrations error in Django after moving to new server | 1 | python,django,postgresql,django-migrations | 0 | 2016-02-11T10:39:00.000 |
I have created a model using QSqlTableModel, then created a table view using QTableView and set the model on it.
I want to update the model and view automatically whenever the database is updated by another program. How can I do that? | 0 | 1 | 0.197375 | 0 | false | 35,392,361 | 0 | 255 | 1 | 0 | 0 | 35,383,018 | There's no signal emitted for that currently. You could use a timer to query the last update timestamp and refresh the model data at designated intervals. | 1 | 1 | 0 | Automatically updating QSqlTableModel and QTableView | 1 | python,pyqt,auto-update,qtableview,qsqltablemodel | 0 | 2016-02-13T17:31:00.000 |
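A sketch of the timer approach with PyQt5 (assumed): QSqlTableModel.select() re-runs the query and the attached QTableView repaints automatically. Note that select() also resets the current selection.

```python
from PyQt5.QtCore import QTimer

timer = QTimer()
timer.timeout.connect(model.select)   # `model` is your QSqlTableModel
timer.start(5000)                     # refresh every 5 seconds
```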
I have a spreadsheet which references/caches values from an external spreadsheet. When viewing the cell in Excel that I want to read using OpenPyxl, I see the contents as a string: Users .
When I select the cell in Excel, I see the actual content in the Formula Bar is ='C:\spreadsheets\[_comments.xlsm]Rules-Source... | 1 | 0 | 1.2 | 0 | true | 35,392,192 | 0 | 310 | 1 | 0 | 0 | 35,385,486 | Yes, Excel does cache the values from the other sheet but openpyxl does not preserve this because there is no way of checking it. | 1 | 0 | 0 | OpenPyxl - difficulty getting cell value when cell is referencing other source | 1 | python,excel,python-3.x,openpyxl | 0 | 2016-02-13T21:22:00.000 |
I'm using openpyxl to read an Excel spreadsheet with a lot of formulas. For some cells, if I access the cell's value as e.g. sheet['M30'].value I get the formula as intended, like '=IFERROR(VLOOKUP(A29, other_wksheet, 9, FALSE)*E29, "")'. But strangely, if I try to access another cell's value, e.g. sheet['M31'].value a... | 1 | 0 | 1.2 | 0 | true | 35,392,251 | 0 | 637 | 1 | 0 | 0 | 35,385,519 | This sounds very much like you are looking at cells using "shared formulae". When this is the case the same formula is used by several cells. The formula itself is only stored with one of those cells and all others are marked as formulae but just contain a reference. Until version 2.3 of openpyxl all such cells would r... | 1 | 0 | 0 | openpyxl showing '=' instead of formula | 1 | python,excel,xlrd,openpyxl | 0 | 2016-02-13T21:26:00.000 |
I have a Flask app that uses SQLAlchemy (Flask-SQLAlchemy) and Alembic (Flask-Migrate). The app runs on Google App Engine. I want to use Google Cloud SQL.
On my machine, I run python manage.py db upgrade to run my migrations against my local database. Since GAE does not allow arbitrary shell commands to be run, how d... | 8 | 1 | 0.066568 | 0 | false | 35,395,267 | 1 | 1,816 | 1 | 1 | 0 | 35,391,120 | You can whitelist the ip of your local machine for the Google Cloud SQL instance, then you run the script on your local machine. | 1 | 0 | 0 | Run Alembic migrations on Google App Engine | 3 | python,google-app-engine,flask,google-cloud-sql,alembic | 0 | 2016-02-14T11:17:00.000 |
I'm working in a Python program which has to access data that is currently stored in plain text files. Each file represents a cluster of data points that will be accessed together. I don't need to support different queries, the only thing I need is to retrieve and copy to memory cluster of data as fast as possible.
I'm... | 2 | 0 | 0 | 0 | false | 35,421,941 | 0 | 367 | 1 | 0 | 0 | 35,421,803 | A DODB sounds like a much more reliable and professional solution. Besides you can add stored procedures thinking in the future and besides most databases offer text search capabilities. Backups are also easier, instead of using an incremental tar command, you can use the native DB backup tools.
I'm fan of CouchDB a... | 1 | 0 | 0 | Document-oriented databases vs plain text files | 2 | python,database,filesystems,document-oriented-db | 0 | 2016-02-16T00:52:00.000 |
When does Mongoengine rebuild (update) information about indexes? I mean, if I added or changed some field (added unique or sparse options to a field) or added some meta info in the model declaration.
So the question is:
When does mongoengine update it?
How does it track changes? | 3 | 1 | 1.2 | 0 | true | 35,648,604 | 1 | 349 | 1 | 0 | 0 | 35,437,458 | Mongoengine does not rebuild indexes automatically. Mongoengine tracks changes in models (by the way, this doesn't work if you add sparse to a field that doesn't have the unique option) and then fires ensureIndex in MongoDB. But when it fires, make sure you manually delete the oldest index version in MongoDB (Mongoengine doesn't).
The probl... | 1 | 0 | 0 | When Mongoengine rebuild indexes? | 1 | python,mongodb,mongoengine | 0 | 2016-02-16T16:11:00.000 |
I have about a million records in a list that I would like to write to a Netezza table. I have been using the executemany() command with pyodbc, which seems to be very slow (I can load much faster if I save the records to Excel and load into Netezza from the Excel file). Are there any faster alternatives to loading a list with... | 1 | 0 | 0 | 0 | false | 35,599,759 | 0 | 806 | 1 | 0 | 0 | 35,466,165 | Netezza is good for bulk loads, whereas executemany() inserts a number of rows in one go. The best way to load millions of rows is the "nzload" utility, which can be scheduled via VBScript or an Excel macro on Windows, or a shell script on Linux. | 1 | 0 | 0 | Loading data to Netezza as a list is very slow | 2 | python,list,pyodbc,netezza,executemany | 0 | 2016-02-17T19:40:00.000 |
I have about a million rows of data with lat and lon attached, and more to come. Even now reading the data from SQLite file (I read it with pandas, then create a point for each row) takes a lot of time.
Now, I need to make a spatial join over those points to attach a zip code to each one, and I really want to optimise t... | 2 | 1 | 0.066568 | 1 | false | 35,583,196 | 0 | 939 | 2 | 0 | 0 | 35,581,528 | I am assuming you have already implemented GeoPandas and are still finding difficulties?
You can improve this by further hashing your coordinate data, similar to how Google hashes its search data. Some databases already provide support for these types of operations (e.g. MongoDB). Imagine if you took the first (left) digit... | 1 | 0 | 0 | Fastest approach for geopandas (reading and spatialJoin) | 3 | python,multithreading,pandas,geopandas | 0 | 2016-02-23T15:28:00.000 |
I have about a million rows of data with lat and lon attached, and more to come. Even now reading the data from SQLite file (I read it with pandas, then create a point for each row) takes a lot of time.
Now, I need to make a spatial join over those points to attach a zip code to each one, and I really want to optimise t... | 2 | 1 | 1.2 | 1 | true | 35,786,998 | 0 | 939 | 2 | 0 | 0 | 35,581,528 | As it turned out, the most convenient solution in my case is to use the pandas.read_sql function with a specific chunksize parameter. In this case, it returns a generator of data chunks, which can be effectively fed to mp.Pool().map() along with the job;
In this (my) case job consists of 1) reading geoboundaries, 2) s... | 1 | 0 | 0 | Fastest approach for geopandas (reading and spatialJoin) | 3 | python,multithreading,pandas,geopandas | 0 | 2016-02-23T15:28:00.000 |
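A sketch of that pattern; the query, connection, and job body are placeholders:

```python
import multiprocessing as mp
import sqlite3
import pandas as pd

def job(chunk):
    return len(chunk)          # stand-in for the real spatial-join work

if __name__ == "__main__":
    conn = sqlite3.connect("points.db")
    chunks = pd.read_sql("SELECT * FROM points", conn, chunksize=50000)
    with mp.Pool() as pool:
        print(sum(pool.map(job, chunks)))
```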
I need to migrate data from MySQL to Postgres. It's easy to write a script that connects to MySQL and to Postgres, runs a select on the MySQL side and inserts on the Postgres side, but it is veeeeery slow (I have 1M+ rows). It's much faster to write the data to a flat file and then import it.
The MySQL command line ca... | 0 | 0 | 0 | 0 | false | 35,598,628 | 0 | 181 | 1 | 0 | 0 | 35,592,092 | I believe that the problem is that you are inserting each row in a separate transaction (which is the default behavior when you run SQL queries without explicitly starting a transaction). In that case, the database must write (flush) changes to disk on every INSERT. It can be 100x slower than inserting data in a ... | 1 | 0 | 0 | Why is MySQL command line so fast vs. Python? | 1 | python,mysql,postgresql | 0 | 2016-02-24T02:22:00.000 |
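A sketch of batching the inserts into a single transaction with psycopg2, so the database flushes once per batch instead of once per row; connection details and data are placeholders:

```python
import psycopg2

rows = [(1, "x"), (2, "y")]                  # data read from MySQL
conn = psycopg2.connect(dbname="target", user="me")
cur = conn.cursor()
cur.executemany("INSERT INTO t (a, b) VALUES (%s, %s)", rows)
conn.commit()                                # one flush for the whole batch
```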
I first had an updating problem using the Google Drive API. I followed the Quickstart example, and after making some changes to it, the file on Google Drive updated successfully. But now a new problem comes after updating; I am not sure whether it is because my change to the Quickstart is not proper, or some... | 1 | 1 | 0.066568 | 0 | false | 35,604,757 | 0 | 597 | 1 | 0 | 0 | 35,604,605 | You can install Google Drive on your local machine and copy the file into the Google Drive directory at the correct position; then Google Drive (the client software) will update the file. | 1 | 0 | 0 | After updating file on google drive through google api, the file on local machine is not editable without closing IDLE window | 3 | python,google-api-python-client | 0 | 2016-02-24T14:17:00.000 |
I have a Flask app that recently had to start using MSSQL-generated GUIDs as primary keys (previously it was just integers). The GUIDs are Latin-1 encoded. Also, I am not using SQLAlchemy. Now, when I'm trying to display the queried MSSQL GUIDs in a Flask Jinja2 template, I get the following error:
UnicodeDecodeEr... | 0 | 0 | 1.2 | 0 | true | 35,608,084 | 1 | 167 | 1 | 0 | 0 | 35,604,937 | Well, this feels like a hack, but since the only time I'm ever using these guid's is when i'm reading them from the database, I just did:
CAST(REC_GUID_ID as VARCHAR(36)) as REC_GUID_ID
And now they are in a format that everything seems to read just fine. | 1 | 0 | 0 | Unicode issue using flask and mssql guids with FreeTDS | 1 | python,sql-server,flask,jinja2 | 0 | 2016-02-24T14:31:00.000 |
I've built a Django app that uses sqlite (the default database), but I can't find anywhere that allows deployment with sqlite. Heroku only works with postgresql, and I've spent two days trying to switch databases and can't figure it out, so I want to just deploy with sqlite. (This is just a small application.)
A few q... | 0 | -2 | -0.132549 | 0 | false | 35,615,302 | 1 | 3,302 | 1 | 0 | 0 | 35,615,273 | sure you can deploy with sqlite ... its not really recommended but should work ok if you have low network traffic
you set your database engine to sqlite in settings.py ... just make sure you have write access to the path that you specify for your database | 1 | 0 | 0 | Is it possible to deploy Django with Sqlite? | 3 | python,django,postgresql,sqlite,heroku | 0 | 2016-02-24T23:23:00.000 |
I have a Raspberry Pi collecting data from sensors attached to it. I would like to have this data - collected every minute - accessible from an online DB (Amazon RDS | MySQL).
Currently, a python script running on the Pi pushes this data to an Amazon RDS instance every 50 seconds (~per minute). However, I have no recor... | 1 | 1 | 0.197375 | 0 | false | 38,479,349 | 0 | 476 | 1 | 0 | 0 | 35,617,670 | I went with my first thought:
store the sensor data on a local DB (SQLite3 for its small footprint). Records are created every half minute.
a separate script - run regularly via cron - compares the last timestamp entry in the cloud DB with the local one and updates the cloud DB.
Even though the comparison would ideal... | 1 | 0 | 0 | Syncing locally collected regular data to online DB over unreliable internet connection | 1 | python,mysql,database,synchronization,raspberry-pi | 1 | 2016-02-25T03:28:00.000 |
I am currently developing an export plugin for MySQL Workbench 6.3. It is my first one.
Is there any developer tool that I can use to help me (debug console, watches, variables state, etc.) | 1 | 2 | 1.2 | 0 | true | 35,668,094 | 0 | 156 | 1 | 0 | 0 | 35,649,215 | There is the GRT scripting shell, which you can reach via menu -> Scripting -> Scripting Shell. This shell is mostly useful for python plugins, but also shows some useful informations from the GRT (classes, the current tree with all settings, open editors, models etc.) | 1 | 0 | 0 | MySQL Workbench developer tools | 1 | python,debugging,plugins,mysql-workbench | 0 | 2016-02-26T10:26:00.000 |
I am using a postgres database with sql-alchemy and flask. I have a couple of jobs which I have to run through the entire database to updates entries. When I do this on my local machine I get a very different behavior compared to the server.
E.g. there seems to be an upper limit on how many entries I can get from the ... | 1 | 3 | 0.291313 | 0 | false | 35,707,179 | 1 | 2,227 | 1 | 0 | 0 | 35,705,211 | Just the message "killed" appearing in the terminal window usually means the kernel was running out of memory and killed the process as an emergency measure.
Most libraries which connect to PostgreSQL will read the entire result set into memory, by default. But some libraries have a way to tell it to process the resul... | 1 | 0 | 0 | postgres database: When does a job get killed | 2 | python,database,postgresql,sqlalchemy,flask-sqlalchemy | 0 | 2016-02-29T17:01:00.000 |
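The truncated part of the answer refers to processing rows without loading them all at once; one way is a psycopg2 server-side (named) cursor. The DSN, query, and process() function are assumptions:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=me")
    # A named cursor is a server-side cursor: rows are fetched from the
    # server in batches instead of being read into memory all at once.
    cur = conn.cursor(name='job_cursor')
    cur.itersize = 2000
    cur.execute("SELECT id, payload FROM entries")
    for row in cur:
        process(row)  # hypothetical per-row update function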
I am using mongodb in python. The problem I'm facing is during the generation of a key. The code through which I'm generating a key is:
post_id = posts.insert_one({msg["To"]: a})
Now here, the "To" consist of an email address (which consists of a symbol dot(.)). I researched few documents online and I got to knew that “T... | 0 | 1 | 1.2 | 0 | true | 35,765,568 | 0 | 77 | 1 | 0 | 0 | 35,716,642 | i have done something like this..
'To':'test@gmail(dot)com' | 1 | 0 | 1 | How to set a generated key in mongodb using python? | 2 | python,mongodb | 0 | 2016-03-01T07:04:00.000 |
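A small sketch of that substitution, since MongoDB rejects dots in key names; it reuses msg and a from the question:

    def safe_key(key):
        # Replace the character MongoDB forbids in field names.
        return key.replace('.', '(dot)')

    post_id = posts.insert_one({safe_key(msg["To"]): a}).inserted_id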
I'm trying to write some documentation on how to restore a CKAN instance in my organization.
I have successfully backed up and restored the CKAN database and resources folder, but I don't know what I have to do with the datastore DB.
Which is the best practice?
Use pg_dump to dump the database or initialize it from the resources... | 5 | 4 | 1.2 | 0 | true | 35,729,219 | 1 | 1,289 | 1 | 0 | 0 | 35,726,924 | Back up CKAN's databases (the main one and the Datastore one if you use it) with pg_dump. If you use Filestore then you need to take a backup copy of the files in the directory specified by ckan.storage_path (default is /var/lib/ckan/default).
Restore the database backups (after doing createdb) using psql -f. Then run paster... | 1 | 0 | 0 | Ckan backup and restore | 1 | python,ckan | 0 | 2016-03-01T15:31:00.000 |
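A sketch of scripting the pg_dump step from Python; the database names and output directory are assumptions:

    import subprocess

    # Dump both the main CKAN database and the datastore database.
    for db in ('ckan_default', 'datastore_default'):
        subprocess.check_call([
            'pg_dump', '-U', 'ckan_default', '-f', '/backups/%s.sql' % db, db,
        ])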
Hello everybody, this is my first post.
I made a website with Django 1.8.9 and Python 3.4.4 on Windows 7. As I was using SQLite3 everything was fine.
I needed to change the database to MySQL. I installed MySQL 5.6 and mysqlclient. I changed the database settings and made the migration ->worked.
But when I try to registe... | 0 | 1 | 0.099668 | 0 | false | 35,777,867 | 1 | 2,782 | 2 | 0 | 0 | 35,732,758 | So here is the answer for all the django (or coding in general) noobs like me.
python manage.py createcachetable
I totally forgot about that and this caused all the trouble with "app_cache doesn't exist". At least in this case...
I changed my database to PostgreSQL, but I am sure it also helps with MySQL... | 1 | 0 | 0 | Django - MySQL : 1146 Table doesn't exist | 2 | mysql,django,python-3.x,django-database | 0 | 2016-03-01T20:25:00.000 |
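For context: the createcachetable step only matters if the site uses Django's database cache backend; a minimal sketch of that configuration (the table name is an assumption):

    # settings.py
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
            'LOCATION': 'my_cache_table',  # created by `manage.py createcachetable`
        }
    }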
Hello everybody, this is my first post.
I made a website with Django 1.8.9 and Python 3.4.4 on Windows 7. As I was using SQLite3 everything was fine.
I needed to change the database to MySQL. I installed MySQL 5.6 and mysqlclient. I changed the database settings and made the migration ->worked.
But when I try to registe... | 0 | 0 | 0 | 0 | false | 35,733,218 | 1 | 2,782 | 2 | 0 | 0 | 35,732,758 | I would assume this was an issue with permissions, i.e. the web page connects with a user that doesn't have the proper permissions to create content.
If your tables are InnoDB, you'll get the "table doesn't exist" message. You need the ib* files in the root of the MySQL datadir (e.g. ibdata1, ib_logfile0, ib_logfile1).
If... | 1 | 0 | 0 | Django - MySQL : 1146 Table doesn't exist | 2 | mysql,django,python-3.x,django-database | 0 | 2016-03-01T20:25:00.000 |
What's the best way to switch to a database management software from LibreOffice Calc?
I would like to move everything from a master spreadsheet to a database with certain conditions. Is it possible to write a script in Python that would do all of this for me?
The data I have is well structured; I have about 300 columns... | 0 | 1 | 0.099668 | 0 | false | 66,788,273 | 0 | 1,516 | 2 | 0 | 0 | 35,784,155 | You can of course use Python for this task, but it might be overkill.
The CSV export / import sequence is likely much faster, less error-prone and needs less ongoing maintenance (e.g. if you change the spreadsheet columns). The sequence is roughly as follows:
select the sheet that you want to import into a DB
select ... | 1 | 0 | 0 | How to import data from LibreOffice Calc to a SQL database? | 2 | python,sql,database,libreoffice,libreoffice-calc | 0 | 2016-03-03T22:15:00.000 |
What's the best way to switch to a database management software from LibreOffice Calc?
I would like to move everything from a master spreadsheet to a database with certain conditions. Is it possible to write a script in Python that would do all of this for me?
The data I have is well structured; I have about 300 columns... | 0 | 0 | 1.2 | 0 | true | 35,784,265 | 0 | 1,516 | 2 | 0 | 0 | 35,784,155 | You can create a Python script that will read this spreadsheet row by row and then run insert statements in a database. In fact, it would be even better to save the spreadsheet as CSV first, if you only need the data. | 1 | 0 | 0 | How to import data from LibreOffice Calc to a SQL database? | 2 | python,sql,database,libreoffice,libreoffice-calc | 0 | 2016-03-03T22:15:00.000
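A sketch of the script both answers point toward: export the sheet as CSV, then insert row by row. File and table names are assumptions, and the target table is assumed to exist:

    import csv
    import sqlite3

    conn = sqlite3.connect('results.db')
    with open('sheet.csv', newline='') as f:
        reader = csv.reader(f)
        header = next(reader)  # skip the header row
        placeholders = ','.join('?' * len(header))
        # csv.reader yields one list per row, which executemany consumes.
        conn.executemany(
            'INSERT INTO results VALUES (%s)' % placeholders,
            reader,
        )
    conn.commit()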
I am trying to create my personal web page. In it, I need to put a recommendations panel, which contains recommendations by ex-employees, friends, etc.
So I was planning to create a model in Django with the following attributes:
author_name
author_designation
author_image
author_comments
I have following quest... | 3 | 1 | 0.066568 | 0 | false | 35,844,490 | 1 | 1,276 | 1 | 0 | 0 | 35,844,303 | The best way to do this is to store the images on your server in some specific, general folder for these images. After that you store a string in your DB with the path to the image that you want to load. This will be a more efficient way to do it. | 1 | 0 | 0 | Is it a good practice to save images in the backend database in mysql/django? | 3 | python,mysql,django | 0 | 2016-03-07T12:54:00.000
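A minimal sketch of such a model, keeping only a path in the DB while the file lives under MEDIA_ROOT (ImageField additionally requires Pillow); field names follow the question:

    from django.db import models

    class Recommendation(models.Model):
        author_name = models.CharField(max_length=100)
        author_designation = models.CharField(max_length=100)
        # Stores only the file path in the DB; the image itself is
        # written under MEDIA_ROOT/recommendations/.
        author_image = models.ImageField(upload_to='recommendations/')
        author_comments = models.TextField()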
I have installed Open edX (Dogwood) on an EC2 ubuntu 12.04 AMI and, honestly, nothing works.
I can sign up in studio, and create a course, but the process does not complete. I get a nice page telling me that the server has an error. However, the course will show up on the LMS page. But, I cannot edit the course in Stu... | 1 | 1 | 0.197375 | 0 | false | 36,759,310 | 1 | 226 | 1 | 0 | 0 | 35,948,834 | This one works: ami-7de8981d (us-east). Log in with SSH as the 'ubuntu' user. Studio is on port 18010 and the LMS is on port 80. | 1 | 0 | 0 | Open edX Dogwood problems | 1 | python,django,amazon-web-services,edx,openedx | 0 | 2016-03-11T19:57:00.000
I am using the python cassandra-driver to execute queries on a cassandra database and I am wondering how to re-insert a ResultSet returned from a SELECT query on table A to a table B knowing that A and B have the same columns but a different primary keys.
Thanks in advance | 0 | 0 | 0 | 0 | false | 35,969,262 | 0 | 294 | 1 | 0 | 0 | 35,964,324 | There is no magic, you'll need to:
create a prepared statement for INSERT ... INTO tableB ...
for each row of the ResultSet from table A, extract the values and create a bound statement for table B
execute each bound statement to insert the row into B
You can use asynchronous queries to accelerate the migration a little bit but ... | 1 | 0 | 0 | Cassandra python driver - how to re-insert a ResultSet | 1 | python,cassandra,resultset | 0 | 2016-03-12T22:52:00.000 |
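A sketch of that prepare/bind/execute loop with the cassandra-driver; keyspace, table, and column names are assumptions:

    from cassandra.cluster import Cluster

    session = Cluster(['127.0.0.1']).connect('mykeyspace')

    insert_b = session.prepare(
        "INSERT INTO tableB (id, col1, col2) VALUES (?, ?, ?)")

    # Each row of the SELECT ResultSet binds directly to the prepared statement.
    for row in session.execute("SELECT id, col1, col2 FROM tableA"):
        session.execute(insert_b, (row.id, row.col1, row.col2))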
I am thinking of not using an auto-increment id as the primary key in MySQL but using another method instead: may I replace the auto id with a bson.objectid.ObjectId in MySQL?
According to ObjectId description, it's composed of:
a 4-byte value representing the seconds since the Unix epoch
a 3-byte machine identifier
a 2-byte process id
a ... | 0 | 3 | 1.2 | 0 | true | 35,983,791 | 0 | 373 | 1 | 0 | 0 | 35,983,632 | You certainly could do this. One issue though is that since this can't be set by the database itself, you'll need to write some Python code to ensure it is set on save.
Since you're not using MongoDB, though, I wonder why you want to use a BSON id. Instead you might want to consider using UUID, which can indeed be set ... | 1 | 0 | 0 | May I use bson.objectid.ObjectId as (primary key) id in sql? | 1 | python,mysql,django,flask,primary-key | 0 | 2016-03-14T09:23:00.000 |
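Since the question is tagged django, a minimal sketch of the UUID alternative the answer suggests (the model itself is hypothetical):

    import uuid
    from django.db import models

    class Record(models.Model):
        # Generated in Python on save, like an ObjectId would be,
        # but standard and supported by Django out of the box.
        id = models.UUIDField(primary_key=True, default=uuid.uuid4,
                              editable=False)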
I made a program using the sqlite3 and PyQt modules. The program can be used by different people simultaneously. I searched, but I do not really understand the concept of a server. How can I connect this program to a server? Or are just the computers that have connections with the server enough to run the p... | 0 | 2 | 1.2 | 0 | true | 35,986,703 | 0 | 59 | 1 | 0 | 0 | 35,986,526 | Do you want to connect to an SQLite database server? SQLite is serverless. It stores your data in a file.
You should use MariaDB for a DB server. Or you can store your SQLite database file on a network shared drive or in the cloud or... | 1 | 1 | 0 | How to connect my app with the database server | 1 | python,sqlite,server | 0 | 2016-03-14T11:38:00.000
I am not getting the Database tool window under View -> Tool Windows in the PyCharm community version, so that I can connect to a MySQL server database. Also, please suggest whether there are other ways by which I can connect to a MySQL server database using the PyCharm community version. | 0 | 1 | 0.197375 | 0 | false | 38,564,886 | 0 | 493 | 1 | 0 | 0 | 35,991,312 | The Database tool window is available only in the paid JetBrains IDEs. | 1 | 0 | 0 | Unable to connect to MYSQL server database using pycharm community (2.7.11) | 1 | python,pycharm | 0 | 2016-03-14T15:15:00.000
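As one of the "other ways" the question asks about: you can always connect from code rather than from the IDE, e.g. with the PyMySQL driver (host and credentials below are placeholders):

    import pymysql

    conn = pymysql.connect(host='localhost', user='me',
                           password='secret', db='mydb')
    with conn.cursor() as cur:
        cur.execute('SELECT VERSION()')  # quick connectivity check
        print(cur.fetchone())
    conn.close()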
If cursor.execute('select * from users') returns a 4 row set, and then cursor.fetchone(), is there a way to re-position the cursor to the beginning of the returned results so that a subsequent cursor.fetchall() gives me all 4 rows?
Or do I need to run the cursor.execute again, and then cursor.fetchall()? This seems awkwar... | 2 | 2 | 1.2 | 0 | true | 36,030,685 | 0 | 1,900 | 1 | 0 | 0 | 36,022,384 | SQLite computes each result row on demand, so it is neither possible to go back to an earlier row, nor to determine how many following rows there will be.
The only way to go back is to re-execute the query. Alternatively, call fetchall() first, and then use the returned list instead of the cursor. | 1 | 0 | 0 | Python & SQLite: fetchone() and fetchall() and cursor control | 1 | python,sqlite | 0 | 2016-03-15T21:19:00.000 |
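A minimal sketch of the second option: materialize the rows once with fetchall(), then re-iterate the list freely (the DB file and table are assumptions):

    import sqlite3

    conn = sqlite3.connect('app.db')
    cur = conn.cursor()
    cur.execute('SELECT * FROM users')

    rows = cur.fetchall()   # pull everything once
    first = rows[0]         # what fetchone() would have returned
    for row in rows:        # "rewinding" is just re-iterating the list
        print(row)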
I am creating an appEngine application in python that will need to perform efficient geospatial queries on datastore data. An example use case would be, I need to find the first 20 posts within a 10 mile radius of the current user. Having done some research into my options, I have found that currently what seems like... | 1 | 1 | 1.2 | 0 | true | 36,110,881 | 1 | 326 | 1 | 0 | 0 | 36,092,591 | Geohashing does not have to be inaccurate at all. It's all in the implementation details. What I mean is you can check the neighbouring geocells as well to handle border-cases, and make sure that includes neighbours on the other side of the equator.
If your use case is finding other entities within a radius as you sugg... | 1 | 0 | 0 | Geohashing vs SearchAPI for geospatial querying using datastore | 2 | python,google-app-engine,google-cloud-datastore,google-search-api,geohashing | 0 | 2016-03-18T19:15:00.000 |
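A sketch of the neighbour-cell technique the answer describes. It assumes the third-party python-geohash package and its encode/neighbors helpers; user_lat and user_lng are placeholders:

    import geohash  # pip install python-geohash (assumed)

    cell = geohash.encode(user_lat, user_lng, precision=5)
    # Query candidates in the user's cell plus the 8 surrounding cells,
    # so matches that fall just across a cell border are not missed.
    cells_to_query = [cell] + geohash.neighbors(cell)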
I'm completely new to managing data using databases so I hope my question is not too stupid but I did not find anything related using the title keywords...
I want to set up a SQL database to store computation results; these are performed using a python library. My idea was to use a python ORM like SQLAlchemy or peewee t... | 2 | 0 | 0 | 0 | false | 36,104,630 | 0 | 963 | 1 | 0 | 0 | 36,104,521 | Is there a reason why some of the machines cannot be connected to the internet?
If you really can't, what I would do is set up a database and the Python app on each machine where data is collected/generated. Have each machine use the app to store into its own local database, and then later you can create a dump of each da... | 1 | 0 | 0 | Python ORM - save or read sql data from/to files | 2 | python,mysql,database,orm | 0 | 2016-03-19T16:59:00.000
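One way to produce the per-machine dump the answer mentions is sqlite3's iterdump(), which serializes the whole local database as SQL text; file names are assumptions:

    import sqlite3

    conn = sqlite3.connect('local_results.db')
    with open('machine_a_dump.sql', 'w') as f:
        for line in conn.iterdump():   # yields the DB as SQL statements
            f.write('%s\n' % line)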
I try to use pip install psycopg2 on Windows 10 and Python 3.5, but it shows me the error message below. How can I fix it?
Command
"d:\desktop\learn\python\webcatch\appserver\webcatch\scripts\python.exe
-u -c "import setuptools, tokenize;file='C:\Users\16022001\AppData\Local\Temp\pip-build-rsorislh\psycopg2\setup.py';ex... | 0 | 0 | 0 | 0 | false | 69,657,488 | 0 | 329 | 1 | 0 | 0 | 36,115,491 | pip install psycopg this worked for me, don't mention version i.e (pip install psycopg2) | 1 | 0 | 1 | pip install psycopg2 error | 1 | python,windows,psycopg2 | 0 | 2016-03-20T15:11:00.000 |
In Django, the database username is used as the schema name.
In DB2 there are no database-level users; OS users are used to log in to the database.
In my database I have two different names for the database user and the database schema.
So, in Django with DB2 as the backend, how can I use a different schema name to access the tables?
EDIT... | 2 | 0 | 0 | 0 | false | 36,159,818 | 1 | 688 | 1 | 0 | 0 | 36,159,706 | DB2 uses so-called two-part names, schemaname.objectname. Each object, including tables, can be referenced by the full name. Within a session there is a current schema, which by default is set to the username. It can be changed with the SET SCHEMA myschema statement.
For your question there are two options:
1) Refere... | 1 | 0 | 0 | django-db2 use different schema name than database username | 2 | django,python-2.7,django-models,db2 | 0 | 2016-03-22T16:15:00.000 |
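A sketch of issuing that SET SCHEMA from Python with the ibm_db driver (connection string details are assumptions):

    import ibm_db

    conn = ibm_db.connect('DATABASE=mydb;HOSTNAME=host;PORT=50000;'
                          'PROTOCOL=TCPIP;UID=osuser;PWD=secret;', '', '')
    # Point unqualified table names at the real schema, not the username.
    ibm_db.exec_immediate(conn, 'SET SCHEMA myschema')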
I've tried to deploy (including migrations) to a production environment, but my Django migrations (like adding columns) very often stop and don't progress anymore.
I'm working with PostgreSQL 9.3, and I found some reasons for this problem. If PostgreSQL has an active transaction, an ALTER TABLE query does not proceed. So until now, re... | 1 | 1 | 0.197375 | 0 | false | 36,213,045 | 1 | 1,701 | 1 | 0 | 0 | 36,212,891 | Open connections will likely stop schema updates. If you can't wait for existing connections to finish, or if your environment is such that long-running connections are used, you may need to halt all connections while you run the update(s).
The downtime, if it's likely to be significant to you, could be mitigated if yo... | 1 | 0 | 0 | django migration doesn't progress and makes database lock | 1 | python,django,postgresql | 0 | 2016-03-25T01:53:00.000 |
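To find the open connections that are blocking the ALTER TABLE, one can inspect pg_stat_activity; a sketch for the pre-9.6 column set (the DSN is an assumption):

    import psycopg2

    conn = psycopg2.connect('dbname=mydb user=me')
    cur = conn.cursor()
    cur.execute("""
        SELECT pid, state, waiting, query
        FROM pg_stat_activity
        WHERE state <> 'idle'
    """)
    for pid, state, waiting, query in cur.fetchall():
        print(pid, state, waiting, query[:60])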
I'm currently using Google Cloud SQL 2nd generation instances to host my database. I need to make a schema change to a table, but I'm not sure of the best way to do this.
Ideally, before I deploy using gcloud preview app deploy my migrations will run so the new version of the code is using the latest schema. Also, if ... | 1 | -2 | 1.2 | 0 | true | 36,407,336 | 1 | 180 | 1 | 1 | 0 | 36,231,114 | SQL schema migration is a well-known branch of SQL DB administration and is not specific to Cloud SQL, which mainly differs from other SQL systems in how it is deployed and networked. Beyond this, you should look up schema migration documentation and articles online to learn how to approach your specific situa... | 1 | 0 | 0 | How to perform sql schema migrations in app engine managed vm? | 1 | python,google-app-engine,google-cloud-sql,gcloud | 0 | 2016-03-26T02:54:00.000
I want to use psycopg2 (PostgreSQL) with virtualenv.
I am using Ubuntu, and root already has psycopg2 working fine, but if I try to use it after activating a virtualenv it shows
ImportError: No module named psycopg2
Do I need to put a symbolic link to dist-packages manually? | 0 | 2 | 1.2 | 0 | true | 36,247,000 | 0 | 20 | 1 | 0 | 0 | 36,246,954 | virtualenvs are by default isolated from the system packages, so you need to install all packages into each virtualenv (or you can pass --system-site-packages when creating it). | 1 | 0 | 1 | PostgreSQL not working with virtual envirement | 1 | python,postgresql,python-2.7,virtualenv,psycopg2 | 0 | 2016-03-27T11:49:00.000
I am using a simple sqlalchemy paginate statement like this
items = models.Table.query.paginate(page, 100, False)
with page = 1. When running this command twice I get different outputs. If I run it with fewer elements (e.g. 10) it gives me the same outputs when run multiple times. I thought for a paginate command to work... | 0 | 0 | 0 | 0 | false | 36,334,148 | 1 | 193 | 1 | 0 | 0 | 36,319,702 | OK, I don't know the answer to this question, but ordering the query (order_by) solved my problem... I am still interested to know why paginate does not have an order by itself, because it basically means that without the order statement, paginate cannot be used to iterate through all elements.
cheers
carl | 1 | 0 | 0 | flask sqlalchemy paginate() function does not get the same elements when run twice | 1 | python,pagination,flask-sqlalchemy | 0 | 2016-03-30T21:04:00.000 |
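A sketch of the fix the answer describes: give the query a deterministic ORDER BY before paginating (the ordering column is an assumption):

    # Without an explicit ORDER BY, the database may return rows in any
    # order, so consecutive paginate() calls can see different pages.
    items = (models.Table.query
             .order_by(models.Table.id)
             .paginate(page, 100, False))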
I need to develop a natural language querying tool for a structured database. I tried two approaches:
using Python NLTK (Natural Language Toolkit for Python)
using JavaScript and JSON (for the data source)
In the first case I did some NLP steps to normalize the natural query: removing stop words, stemming, and finally map... | 1 | 0 | 0 | 0 | false | 36,531,992 | 0 | 1,077 | 1 | 0 | 0 | 36,330,033 | As I commented, I think you should add some code, since not everyone has read the book.
Anyway my conclusion is that yes, as you said it has a lot of limitations, and the only way to achieve more complex queries is to write very extensive and complete grammar productions, which is pretty hard work. | 1 | 0 | 0 | Natural Language Processing Database Querying | 2 | javascript,python,json,nlp,nltk | 0 | 2016-03-31T09:56:00.000
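For reference, a sketch of the NLTK preprocessing the question describes (stop-word removal plus stemming; the punkt and stopwords corpora must be downloaded first):

    from nltk.corpus import stopwords
    from nltk.stem import PorterStemmer
    from nltk.tokenize import word_tokenize

    stemmer = PorterStemmer()
    stop = set(stopwords.words('english'))

    def normalize(query):
        # e.g. "Which customers ordered last week?"
        #   -> roughly ['custom', 'order', 'last', 'week']
        tokens = word_tokenize(query.lower())
        return [stemmer.stem(t) for t in tokens
                if t.isalnum() and t not in stop]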
I have been trying to find examples in ZODB documentation about doing a join of 2 or more tables. I know that this is an Object Database, but I am trying to create objects that represent tables.
And I see that ZODB makes use of SQLAlchemy.
So I was wondering if I can treat things in ZODB in a relational-like sense.
I hope... | 0 | 2 | 1.2 | 0 | true | 36,479,782 | 0 | 156 | 1 | 0 | 0 | 36,457,076 | ZODB does not use SQLAlchemy, and there is no relational model. There are no tables to join, period. The ZODB stores an object tree, there is no schema. It's just Python objects in more Python objects.
Any references to ZODB and SQLAlchemy are all for applications built on top of the ZODB, where transactions for extern... | 1 | 0 | 0 | ZODB database: table joins | 1 | python,sqlalchemy,zodb | 0 | 2016-04-06T16:32:00.000 |
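A minimal sketch of what "Python objects in more Python objects" looks like in practice (the file name and data structure are assumptions):

    import transaction
    from ZODB import DB
    from ZODB.FileStorage import FileStorage

    db = DB(FileStorage('data.fs'))
    root = db.open().root()

    # No schema: nested Python objects take the place of tables and joins.
    root['users'] = {'alice': {'posts': ['first post']}}
    transaction.commit()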
There is a use case in which we would like to add columns from the data of a webservice to our original sql data table.
If anybody has done that then please do comment. | 0 | 2 | 0.197375 | 0 | false | 36,533,707 | 0 | 959 | 1 | 0 | 0 | 36,526,219 | Shadowfax is correct that you should review the How to Ask guide.
That said, Spotfire offers this feature in two ways:
use IronPython scripting attached to an action control to retrieve the data. This is a very rigid solution that offers no caching, and the data must be retrieved and placed in memory each time the doc... | 1 | 0 | 0 | How to use a web service as a datasource in Spotfire | 2 | sql-server,web-services,ironpython,spotfire | 0 | 2016-04-10T05:33:00.000
I have a dynamo table called 'Table'. There are a few columns in the table, including one called 'updated'. I want to set the 'updated' field to '0' on all items, without having to provide a key, to avoid a fetch-and-search over the table.
I tried batch write, but it seems like update_item requires Key inputs. How could I update the e... | 2 | 1 | 1.2 | 0 | true | 36,564,082 | 1 | 643 | 2 | 0 | 0 | 36,562,764 | At this point you cannot do this; you have to pass a key (partition key, or partition key and sort key) to update an item.
Currently, the only way to do this is to scan the table (with filters, if needed) to find all the items whose "updated" attribute you want to change, and collect their respective keys.
Pass those keys and update the value.
Hopefully,... | 1 | 0 | 0 | DynamoDB update entire column efficiently | 2 | database,python-2.7,amazon-dynamodb,insert-update | 0 | 2016-04-12T02:49:00.000 |
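A sketch of that scan-then-update loop with boto3. The key name 'id' is an assumption, and a full version would also follow LastEvaluatedKey to page through large scans:

    import boto3

    table = boto3.resource('dynamodb').Table('Table')

    # Scan for the keys, then update each item individually.
    for item in table.scan(ProjectionExpression='id')['Items']:
        table.update_item(
            Key={'id': item['id']},
            # '#u' aliases the attribute name in case it is reserved.
            UpdateExpression='SET #u = :v',
            ExpressionAttributeNames={'#u': 'updated'},
            ExpressionAttributeValues={':v': 0},
        )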
I have a dynamo table called 'Table'. There are a few columns in the table, including one called 'updated'. I want to set the 'updated' field to '0' on all items, without having to provide a key, to avoid a fetch-and-search over the table.
I tried batch write, but it seems like update_item requires Key inputs. How could I update the e... | 2 | 0 | 0 | 0 | false | 36,564,129 | 1 | 643 | 2 | 0 | 0 | 36,562,764 | If you can enumerate the partition keys, then for each partition key you can update the item. | 1 | 0 | 0 | DynamoDB update entire column efficiently | 2 | database,python-2.7,amazon-dynamodb,insert-update | 0 | 2016-04-12T02:49:00.000
I am using Python with boto3 to upload files into an S3 bucket. Boto3 supports upload_file() to create an S3 object. But this API takes a file name as its input parameter.
Can we give an actual data buffer as a parameter to the upload_file() function instead of a file name?
I know that we can use the put_object() function if we want to give ... | 1 | 1 | 1.2 | 0 | true | 36,582,398 | 0 | 638 | 1 | 0 | 1 | 36,568,713 | There is currently no way to use a file-like object with upload_file. put_object and upload_part do support these, though you don't get the advantage of automatic multipart uploads. | 1 | 0 | 0 | Boto3 : Can we use actual data buffer as parameter instaed of file name to upload file in s3? | 1 | python-2.7,boto,boto3 | 0 | 2016-04-12T09:13:00.000
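A sketch of the put_object route for an in-memory buffer (the bucket and key are placeholders):

    import io
    import boto3

    buf = io.BytesIO(b'sensor data, never written to disk')
    s3 = boto3.client('s3')
    # put_object accepts bytes or a file-like object as Body,
    # but performs a single upload (no automatic multipart).
    s3.put_object(Bucket='my-bucket', Key='data/readings.bin', Body=buf)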
I have just cloned a Django app from Github to a local directory. I know for a fact that the app works because I've run it on others' computers.
When I run the server, I can see the site and register for an account. This works fine (I get a confirmation email). But then my login information causes an error because the... | 1 | 0 | 0 | 0 | false | 36,613,541 | 1 | 315 | 1 | 0 | 0 | 36,609,201 | The django_session table should get initialized when you run your first migrations. You said that you made your migrations, but did you run them (with python manage.py migrate)? Also, do you have django.contrib.sessions in the INSTALLED_APPS in your settings file? This is the app that owns that session table. | 1 | 0 | 0 | Problems with database after cloning Django app from Github | 1 | python,django,git,github | 0 | 2016-04-13T20:42:00.000
I need help from someone who has Apache, Python, and cx_Oracle (the library for accessing an Oracle database from Python).
Even after setting all the required variables, I am still getting the error ": libclntsh.so.11.1: cannot open shared object file: No such file or directory" when running the Python script.
The same script works perfectly... | 0 | 0 | 1.2 | 0 | true | 36,711,130 | 0 | 2,034 | 1 | 0 | 0 | 36,655,812 | I was able to solve this with the help of Apache's mod_env module, by passing the environment variables to Apache natively. What I did to achieve this was:
--> define my required env variables in the file /etc/sysconfig/httpd, like
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/folder_with_library/
export LD_LIBRARY_P... | 1 | 0 | 0 | libclntsh.so.11.1: cannot open shared object file python error while running CGIusing cx_Oracle | 2 | python,apache,cx-oracle | 0 | 2016-04-15T19:56:00.000 |
Which is more efficient? Is there a downside to using open() -> write() -> close() compared to using logger.info()?
PS. We are accumulating query logs for a university, so there's a chance that it becomes big data soon (considering that the min-max cap of query logs per day is 3GB-9GB and it will run 24/7 constantly... | 3 | 0 | 0 | 0 | false | 36,819,582 | 0 | 1,554 | 2 | 0 | 0 | 36,819,540 | It is always better to use a built-in facility unless you are facing issues with the built-in functionality.
So, use the built-in logging function. It is proven, tested and very flexible - something you cannot achieve with open() -> f.write() -> close(). | 1 | 0 | 0 | Python logging vs. write to file | 2 | python,logging,file-writing,bigdata | 0 | 2016-04-24T05:15:00.000 |
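At the volumes the question mentions, logging also gives you rotation almost for free; a minimal sketch (file name, sizes, and format are assumptions):

    import logging
    from logging.handlers import RotatingFileHandler

    logger = logging.getLogger('queries')
    handler = RotatingFileHandler('queries.log',
                                  maxBytes=500 * 1024 * 1024,  # 500 MB per file
                                  backupCount=10)
    handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info('SELECT * FROM courses')  # buffered, thread-safe, rotated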