Question stringlengths 25 7.47k | Q_Score int64 0 1.24k | Users Score int64 -10 494 | Score float64 -1 1.2 | Data Science and Machine Learning int64 0 1 | is_accepted bool 2
classes | A_Id int64 39.3k 72.5M | Web Development int64 0 1 | ViewCount int64 15 1.37M | Available Count int64 1 9 | System Administration and DevOps int64 0 1 | Networking and APIs int64 0 1 | Q_Id int64 39.1k 48M | Answer stringlengths 16 5.07k | Database and SQL int64 1 1 | GUI and Desktop Applications int64 0 1 | Python Basics and Environment int64 0 1 | Title stringlengths 15 148 | AnswerCount int64 1 32 | Tags stringlengths 6 90 | Other int64 0 1 | CreationDate stringlengths 23 23 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I'm using fixtures with SQLAlchemy to create some integration tests.
I'd like to put SQLAlchemy into a "never commit" mode to prevent changes ever being written to the database, so that my tests are completely isolated from each other. Is there a way to do this?
My initial thoughts are that perhaps I could replace Sess... | 3 | 3 | 1.2 | 0 | true | 23,702,417 | 0 | 838 | 1 | 0 | 0 | 23,394,785 | The scoped session manager will by default return the same session object for each connection. Accordingly, one can replace .commit with .flush, and have that change persist across invocations to the session manager.
That will prevent commits.
To then rollback all changes, one should use session.transaction.rollback(). | 1 | 0 | 0 | sqlalchemy - force it to NEVER commit? | 1 | python,testing,sqlalchemy | 0 | 2014-04-30T17:48:00.000 |
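A rollback-based "never commit" pattern can be sketched with the standard-library sqlite3 module (used here only as a stand-in for SQLAlchemy; the `users` table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.commit()  # the schema is committed once, before the tests run

# Each "test" writes inside an open transaction and rolls back at the end,
# so no test ever sees another test's data.
conn.execute("INSERT INTO users VALUES ('alice')")
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1
conn.rollback()  # undo everything the test did
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
```

With SQLAlchemy the analogous trick is binding the session to an outer transaction and rolling it back in the test teardown, as the answer describes.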
I am writing a small web application using Flask and I have to use DynamoDB as backend for some hard requirements.
I went through the tutorial on Flask website without establishing sqlite connection. All data were pulled directly from DynamoDB and it seemed to work.
Since I am new to web development in general and Flas... | 0 | 2 | 0.379949 | 0 | false | 23,512,512 | 1 | 637 | 1 | 0 | 0 | 23,510,212 | No. SQLite is just one option for backend storage. SQLite is mentioned in the tutorial only for its simplicity in getting something working fast and simply on a typical local developers environment. (No db to or service to install/configure etc.) | 1 | 0 | 0 | Use Flask with Amazon DynamoDB without SQLite | 1 | python,flask,amazon-dynamodb | 0 | 2014-05-07T06:21:00.000 |
I have a MySQL database with some huge tables. I have a task where I must run three queries one after another, and the last one exports to outfile.csv.
i.e.
Query 1. Select values from some tables with a certain parameter, then write into a new table. Approx 4.5 hours
Query 2. After the first one is done, then use the ... | 0 | 2 | 1.2 | 0 | true | 23,529,243 | 0 | 121 | 1 | 0 | 0 | 23,529,212 | you can just separate the queries with a semicolon and run them as a batch. | 1 | 0 | 0 | How to automatically run chain multiple mysql queries | 1 | scripting,mysql-python | 0 | 2014-05-07T21:56:00.000 |
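As a sketch, sqlite3's executescript runs such a semicolon-separated batch in order (sqlite3 standing in for the MySQL client here; the table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# executescript runs the statements one after another, in order
conn.executescript("""
    CREATE TABLE raw (v INTEGER);
    INSERT INTO raw VALUES (1), (2), (3);
    CREATE TABLE doubled AS SELECT v * 2 AS v FROM raw;
""")

print(conn.execute("SELECT v FROM doubled ORDER BY v").fetchall())  # [(2,), (4,), (6,)]
```

With MySQL the same idea is a .sql file piped into the command-line client.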
I have a server with a database. The server will listen for HTTP requests, using JSON for data transfer.
Currently, what my server code (Python) mainly does is read the JSON file, convert it to SQL and make some modifications to the database. And the function of the server, as I see it, is only like a converter bet... | 0 | 0 | 0 | 0 | false | 23,593,850 | 0 | 46 | 2 | 0 | 0 | 23,593,618 | People usually make use of a web framework instead of implementing the basic machinery themselves as you are doing.
That is: Python is a great language that easily allows one to translate "json into sql" with a small amount of code - and it is great for learning. If you are doing this for educational purposes, it is... | 1 | 0 | 1 | How do people usually operate information on server database? | 2 | python,sql,rdbms | 0 | 2014-05-11T14:11:00.000 |
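For instance, a minimal json-to-SQL converter needs only the standard library (the table layout and payload are invented):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")

payload = '{"name": "alice", "age": 31}'     # what a client might POST

record = json.loads(payload)
# Parameterized query: never build the SQL by string concatenation
conn.execute("INSERT INTO people (name, age) VALUES (?, ?)",
             (record["name"], record["age"]))
conn.commit()

print(conn.execute("SELECT name, age FROM people").fetchall())  # [('alice', 31)]
```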
I have a server with a database. The server will listen for HTTP requests, using JSON for data transfer.
Currently, what my server code (Python) mainly does is read the JSON file, convert it to SQL and make some modifications to the database. And the function of the server, as I see it, is only like a converter bet... | 0 | 0 | 0 | 0 | false | 23,593,774 | 0 | 46 | 2 | 0 | 0 | 23,593,618 | The answer is: people usually do what they need to do. The layer between database and client normally provides a higher-level API, to make the request independent of the actual database. But how this higher level looks depends on the application you have. | 1 | 0 | 1 | How do people usually operate information on server database? | 2 | python,sql,rdbms | 0 | 2014-05-11T14:11:00.000 |
Is there an option in psycopg2 (in the connect() method) similar to psql -w (never issue a password prompt) and -W (force psql to prompt for a password before connecting to a database)? | 2 | 5 | 1.2 | 0 | true | 23,606,436 | 0 | 1,041 | 1 | 0 | 0 | 23,606,102 | psycopg2 will never prompt for a password - that's a feature of psql, not of the underlying libpq that both psql and psycopg2 use. There's no equivalent of -w / -W because there's no password prompt feature to turn on/off.
If you want to prompt for a password you must do it yourself in your code: trap the exception thr... | 1 | 0 | 0 | Is there a way to ask the user for a password or not with psycopg2? | 1 | python,postgresql,psycopg2 | 0 | 2014-05-12T10:02:00.000 |
I have a question in MySQL and Python MySQLdb library:
Suppose I'd like to insert records in bulk into the DB. When I'm inserting, there may be duplicates among the records in the bulk, or a record in the bulk may be a duplicate of a record in the table. In case of duplicates, I'd like to ignore the duplication and jus... | 1 | 0 | 0 | 0 | false | 23,636,177 | 0 | 289 | 1 | 0 | 0 | 23,636,068 | Step 1 - Make your initial insert into a staging table.
Step 2 - deal with duplicate records and all other issues.
Step 3 - write to your real tables from the staging table. | 1 | 0 | 0 | Bulk insert into MySQL on duplicate | 1 | python,mysql,sql,mysql-python | 0 | 2014-05-13T15:56:00.000 |
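The three steps can be sketched with sqlite3 (standing in for MySQL, where INSERT IGNORE or ON DUPLICATE KEY UPDATE would play the same role; table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, name TEXT)")

# Step 1: bulk-load everything, duplicates and all, into the staging table
rows = [(1, "a"), (2, "b"), (2, "b"), (3, "c")]
conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)

# Steps 2-3: move only distinct rows into the real table, ignoring duplicates
conn.execute("INSERT OR IGNORE INTO target SELECT DISTINCT id, name FROM staging")
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 3
```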
I have been going through the Google App Engine documentation (Python) now and found two different types of storage.
NDB Datastore
DB Datastore
Both quota limits (free) seem to be the same, and their database design too. However, NDB automatically caches data in Memcache!
I am actually wondering when to use which storage?... | 2 | 5 | 1.2 | 0 | true | 23,646,875 | 1 | 568 | 1 | 1 | 0 | 23,645,572 | In simple words, these are two versions of the datastore API: db is the older version and ndb the newer one. The difference is in the models; in the datastore these are the same thing. NDB provides advantages like handling caching (memcache) itself, and ndb is faster than db, so you should definitely go with ndb. To use ndb... | 1 | 0 | 0 | App Engine: Difference between NDB and Datastore | 1 | python,django,google-app-engine | 0 | 2014-05-14T04:19:00.000 |
I just wanted to know if it is possible to insert the contents of a local tabulated html file into an Excel worksheet using xlsxwriter? Manually it works fine by just dragging and dropping the file into Excel and the formatting is clear, but I can't find any information on inserting file contents into Excel using xlsx... | 0 | 1 | 1.2 | 0 | true | 23,674,407 | 0 | 448 | 1 | 0 | 0 | 23,673,433 | No, such functionality is not what xlsxwriter offers.
This package is able to write Excel files, but the HTML import you describe is MS Excel GUI functionality, and as MS Excel is not a requirement of xlsxwriter, do not expect it to be present.
On the other hand, you could play with Python to do the conversion... | 1 | 0 | 0 | Using Python's xlsxwriter to drop a tabulated html file into worksheet - is this possible? | 1 | python,xlsxwriter | 0 | 2014-05-15T08:50:00.000 |
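One way to "play with Python" here is to extract the table cells with the stdlib html.parser and then write each row out with xlsxwriter's worksheet.write_row. A sketch of the extraction half (the sample table is invented):

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collects the text of each <td>/<th> cell, grouped by <tr>."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = None
        self._cell = None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)

html = "<table><tr><th>name</th><th>qty</th></tr><tr><td>apples</td><td>3</td></tr></table>"
parser = TableExtractor()
parser.feed(html)
print(parser.rows)  # [['name', 'qty'], ['apples', '3']]
```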
I have an app which will work in multiple timezones. I need the app to be able to say "Do this at 10 PM, 12 April, 2015, in the America/New York timezone and repeat every 30 days." (And similar).
If I get a datetime from the user in PST, should I be storing the datetime in DB after converting to UTC?
Pros: Easier to ma... | 0 | 1 | 0.099668 | 0 | false | 23,676,984 | 0 | 158 | 1 | 0 | 0 | 23,676,718 | Yes, you should store everything in your db in UTC.
I don't know why you say you won't be able to cope with DST. On the contrary, any good timezone library - such as pytz - is quite capable of translating UTC to the correct time in any timezone, taking DST into account. | 1 | 0 | 0 | Convert datetime to UTC before storing in DB? | 2 | python,django,datetime,timezone | 0 | 2014-05-15T11:19:00.000 |
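A minimal sketch of the convert-to-UTC-before-storing step, using a fixed-offset zone as a stand-in for PST (in practice use pytz or zoneinfo so names like "America/New_York" and DST are handled):

```python
from datetime import datetime, timedelta, timezone

PST = timezone(timedelta(hours=-8), "PST")  # fixed-offset stand-in, ignores DST

local = datetime(2015, 4, 12, 22, 0, tzinfo=PST)   # what the user entered
utc = local.astimezone(timezone.utc)               # what goes into the DB

print(utc.isoformat())  # 2015-04-13T06:00:00+00:00
```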
My web app asks users 3 questions and simply writes that to a file, a1,a2,a3. I also have real-time visualization of the average of the data (it reads from the file in real time).
Must I use a database to ensure that no/minimal information is lost? Is it possible to produce a queue of reads/writes? (Since files are small I am not ... | 0 | 0 | 0 | 0 | false | 23,703,319 | 1 | 31 | 1 | 0 | 0 | 23,703,135 | I see a few solutions:
read /dev/urandom a few times, calculate sha-256 of the number and use it as a file name; collision is extremely improbable
use Redis and command like LPUSH, using it from Python is very easy; then RPOP from right end of the linked list, there's your queue | 1 | 0 | 0 | Is it possible to make writing to files/reading from files safe for a questionnaire type website? | 1 | python,flask | 0 | 2014-05-16T19:24:00.000 |
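The first suggestion can be sketched as follows (32 bytes of urandom per name is an assumption):

```python
import hashlib
import os

def random_filename() -> str:
    """Name derived from OS-level randomness; a collision is astronomically unlikely."""
    return hashlib.sha256(os.urandom(32)).hexdigest()

names = {random_filename() for _ in range(1000)}
print(len(names))  # 1000 distinct 64-character hex names
```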
Hi I am trying to write python functional tests for our application. It involves several external components and we are mocking them all out.. We have got a better framework for mocking a service, but not for mocking a database yet.
SQLite is very light and I thought of using it, but it's serverless; is there a way I c... | 0 | -1 | -0.197375 | 0 | false | 23,744,831 | 0 | 535 | 1 | 0 | 0 | 23,744,128 | I don't understand your problem. Why do you care that it's serverless?
My standard technique for this is:
use SQLAlchemy
in tests, configure it with sqlite:/// or sqlite:///:memory: | 1 | 0 | 0 | how to do database mocking or make sqlite run on localhost? | 1 | python,sqlite,unit-testing | 0 | 2014-05-19T17:52:00.000 |
I am working on my Python project using PySide as my UI toolkit. My project is a game which requires an internet connection to update the users' scores and store them in the database.
My problem is how I can store my database on the internet. I mean that all users can access this information when they are connected to an inter... | 0 | 0 | 0 | 0 | false | 23,754,331 | 0 | 193 | 1 | 0 | 0 | 23,754,108 | I'm not familiar with PySide, but the idea is
you need to build a function that, when an internet connection is available, synchronizes your local database with the online database; on the server side you need to build a script that can handle requests (POST/GET) to receive the scores and send them to the database ... | 1 | 1 | 0 | Using Database with Pyside and Socket | 1 | python,database,sockets,pyside,qtsql | 0 | 2014-05-20T07:59:00.000 |
I need to develop an A/B testing method for my users. Basically I need to split my users into a number of groups - for example 40% and 60%.
I have around 1,000,00 users and I need to know what would be my best approach. Random numbers are not an option because the users will get different results each time. My second o... | 2 | 3 | 0.148885 | 0 | false | 23,846,676 | 0 | 1,690 | 2 | 0 | 0 | 23,846,617 | Run a simple algorithm against the primary key. For instance, if you have an integer for user id, separate by even and odd numbers.
Use a mod function if you need more than 2 groups. | 1 | 0 | 0 | Algorithm for A/B testing | 4 | python,mysql,python-2.7 | 0 | 2014-05-24T15:16:00.000 |
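A mod-based 40/60 split might look like this (the "A"/"B" labels and the modulus of 10 are illustrative):

```python
def ab_group(user_id: int) -> str:
    """Deterministic 40/60 split: the same user id always maps to the same group."""
    return "A" if user_id % 10 < 4 else "B"

groups = [ab_group(uid) for uid in range(1000)]
print(groups.count("A"), groups.count("B"))  # 400 600
```

Because the assignment depends only on the id, users keep getting the same variant on every visit, which is exactly what rules out random numbers.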
I need to develop an A/B testing method for my users. Basically I need to split my users into a number of groups - for example 40% and 60%.
I have around 1,000,00 users and I need to know what would be my best approach. Random numbers are not an option because the users will get different results each time. My second o... | 2 | 0 | 0 | 0 | false | 23,846,772 | 0 | 1,690 | 2 | 0 | 0 | 23,846,617 | I would add an auxiliary table with just userId and A/B. You do not change existent table and it is easy to change the percentage per class if you ever need to. It is very little invasive. | 1 | 0 | 0 | Algorithm for A/B testing | 4 | python,mysql,python-2.7 | 0 | 2014-05-24T15:16:00.000 |
I'm installing MySQL 5.6 Community Edition using MySQL installer and everything was installed properly except for "Connector/Python 2.7 1.1.6".
Upon mousing over, I get the error message "The product requires Python 2.7 but it was not detected on this machine. Python 2.7 requires manual installation and must be install... | 0 | 0 | 0 | 0 | false | 23,866,951 | 0 | 50 | 1 | 0 | 0 | 23,866,874 | Check if there is any install path dependencies on your installer to figure out if your python is in the right place.
But I recommend that you install the connector you want manually.
P.S. Are you sure you need the connector? Or you just saw the error and assumed you need it? | 1 | 0 | 0 | Connector installation error (MySQL installer) | 1 | mysql,django,python-2.7 | 0 | 2014-05-26T09:24:00.000 |
I was wondering in which cases do I need to restart the database server in Django on production. Whether it is Postgres/MySQL, I was just thinking do we need to restart the database server at all. If we need to restart it, when and why do we need to restart the server?
Any explanation would be really helpful, Cheers! | 0 | -1 | -0.066568 | 0 | false | 23,920,627 | 1 | 237 | 3 | 0 | 0 | 23,920,481 | Usually when the settings that are controlling the application are changed then the server has to be restarted. | 1 | 0 | 0 | When do I need to restart database server in Django? | 3 | python,mysql,django,postgresql | 0 | 2014-05-28T19:41:00.000 |
I was wondering in which cases do I need to restart the database server in Django on production. Whether it is Postgres/MySQL, I was just thinking do we need to restart the database server at all. If we need to restart it, when and why do we need to restart the server?
Any explanation would be really helpful, Cheers! | 0 | 2 | 0.132549 | 0 | false | 23,920,963 | 1 | 237 | 3 | 0 | 0 | 23,920,481 | You will not NEED to restart your database in production due to anything you've done in Django. You may need to restart it to change your database security or configuration settings, but that has nothing to do with Django and in a lot of cases doesn't even need a restart. | 1 | 0 | 0 | When do I need to restart database server in Django? | 3 | python,mysql,django,postgresql | 0 | 2014-05-28T19:41:00.000 |
I was wondering in which cases do I need to restart the database server in Django on production. Whether it is Postgres/MySQL, I was just thinking do we need to restart the database server at all. If we need to restart it, when and why do we need to restart the server?
Any explanation would be really helpful, Cheers! | 0 | 1 | 1.2 | 0 | true | 23,920,777 | 1 | 237 | 3 | 0 | 0 | 23,920,481 | You shouldn't really ever need to restart the database server.
You probably do need to restart - or at least reload - the web server whenever any of the code changes. But the db is a separate process, and shouldn't need to be restarted. | 1 | 0 | 0 | When do I need to restart database server in Django? | 3 | python,mysql,django,postgresql | 0 | 2014-05-28T19:41:00.000 |
I am using pyhs2 as a Hive client. The SQL statement with a ‘where’ clause was not recognized. Got
'pyhs2.error.Pyhs2Exception: 'Error while processing statement:
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask'
But it runs ok in hive shell. | 1 | 3 | 0.53705 | 0 | false | 23,984,349 | 0 | 945 | 1 | 0 | 0 | 23,971,667 | Fixed! It was due to permission on remote server. Changed user in connect statement from 'root' to 'hdfs' solved the problem. | 1 | 0 | 0 | python hive client pyhs2 does not recognize 'where' clause in sql statement | 1 | python,sql,client,hive | 0 | 2014-05-31T15:24:00.000 |
So in my spare time, I've been developing a piece of network monitoring software that essentially can be installed on a bunch of clients, and the clients report data back to the server(RAM/CPU/Storage/Network usage, and the like). For the administrative console as well as reporting, I've decided to use Django, which ha... | 2 | 1 | 0.099668 | 0 | false | 23,987,194 | 1 | 849 | 1 | 0 | 0 | 23,987,050 | I chose option 1 when I set up my environment, which does much of the same stuff.
I have a JSON interface that's used to pass data back to the server. Since I'm on a well-protected VLAN, this works great. The biggest benefit, like you say, is the Django ORM. A simple address call with proper data is all that's neede... | 1 | 0 | 0 | How to access Django DB and ORM outside of Django | 2 | python,django,orm | 0 | 2014-06-02T03:41:00.000 |
I'm curious if there's a way to insert a new row (that would push all the existing rows down) in an existing openpyxl worksheet? (I'm looking to insert at the first row if it helps)
I looked through all the docs and didn't see anything mentioned. | 1 | 0 | 0 | 0 | false | 24,012,565 | 0 | 5,098 | 1 | 0 | 0 | 24,006,376 | This is currently not directly possible in openpyxl because it would require the reassigning of all cells below the new row.
You can do it yourself by iterating through the relevant rows (starting at the end) and writing a new row with the values of the previous row. Then you create a row of cells where you want them. | 1 | 0 | 0 | How to insert a row in openpyxl | 1 | python,openpyxl | 0 | 2014-06-03T02:53:00.000 |
I have a "local" Oracle database in my work network. I also have a website at a hosting service.
Can I connect to the 'local' Oracle database from the hosting service? Or does the Oracle database need to be at the same server as my website?
At my work computer I can connect to the Oracle database with a host name, user... | 0 | 1 | 0.197375 | 0 | false | 24,016,200 | 0 | 490 | 1 | 0 | 0 | 24,015,758 | It can depend on how your hosting is setup, but if it is allowed you will need the following.
Static IP, or Dynamic DNS setup so your home server can be found regularly.
Port forwarding on your router to allow traffic to reach the server.
The willingness to expose your home systems to the dangers of the internet
S... | 1 | 0 | 0 | Connect to local Oracle database from online website | 1 | python,database,django,oracle,database-connection | 0 | 2014-06-03T12:54:00.000 |
I am using Flask to make a small webapp to manage a group project; in this website I need to manage attendances and also meeting reports. I don't have the time to get into SQLAlchemy, so I need to know what might be the bad things about using CSV as a database. | 3 | -1 | -0.049958 | 0 | false | 64,239,216 | 1 | 3,024 | 1 | 0 | 0 | 24,072,231 | I am absolutely baffled by how many people discourage using CSV as a database storage back-end format.
Concurrency: There is NO reason why CSV can not be used with concurrency. Just like how a database thread can write to one area of a binary file at the same time that another thread writes to another area of the sa... | 1 | 0 | 0 | Somthing wrong with using CSV as database for a webapp? | 4 | python,csv,web-applications,flask | 0 | 2014-06-06T00:11:00.000 |
I am using Flask to make a small webapp to manage a group project; in this website I need to manage attendances and also meeting reports. I don't have the time to get into SQLAlchemy, so I need to know what might be the bad things about using CSV as a database. | 3 | 1 | 0.049958 | 0 | false | 47,320,760 | 1 | 3,024 | 1 | 0 | 0 | 24,072,231 | I think there's nothing wrong with that as long as you abstract away from it. I.e. make sure you have a clean separation between what you write and how you implement it. That will bloat your code a bit, but it will make sure you can swap your CSV storage in a matter of days.
I.e. pretend that you can persist your data ... | 1 | 0 | 0 | Somthing wrong with using CSV as database for a webapp? | 4 | python,csv,web-applications,flask | 0 | 2014-06-06T00:11:00.000 |
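A sketch of such a separation: callers only see the store object, so the CSV backend could later be replaced by a SQL-backed class with the same methods (the class name and column layout are invented):

```python
import csv
import io

class CsvStore:
    """Minimal storage abstraction: callers never touch the CSV directly."""
    def __init__(self, stream):
        self._stream = stream

    def add(self, row):
        csv.writer(self._stream).writerow(row)

    def all(self):
        self._stream.seek(0)
        return [row for row in csv.reader(self._stream) if row]

store = CsvStore(io.StringIO())      # a real app would pass an open file
store.add(["2014-06-06", "alice", "present"])
store.add(["2014-06-06", "bob", "absent"])
print(store.all())  # [['2014-06-06', 'alice', 'present'], ['2014-06-06', 'bob', 'absent']]
```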
I have read a few posts on how to enable remote login to mysql. My question is: is this a safe way to access data remotely?
I have a my sql db located at home (on Ubuntu 14.04) that I use for research purposes. I would like to run python scripts from my Macbook at work. I was able to remote login from my old windows O... | 0 | 0 | 1.2 | 0 | true | 24,112,483 | 0 | 49 | 1 | 0 | 0 | 24,112,422 | Use ssh to login to your home computer, setup authorized keys for it and disable password login. setup iptables on your linux machine if you don't have a firewall on your router, and disable traffic on all ports except 80 and 22 (ssh and internet). That should get you started. | 1 | 0 | 0 | Best way to access data from mysql db on other non-local machines | 1 | python,mysql,ruby-on-rails | 0 | 2014-06-09T01:02:00.000 |
I'm trying to use the new excel integration module xlwings
It works like a charm under Anaconda 2.0 for python 2.7
but I'm getting this error under Anaconda 2.0 for python 3.4
the xlwings file does contain class Workbook so I don't understand why it can't import it
when I simply use the xlwings file in my project for ... | 4 | 3 | 1.2 | 0 | true | 24,122,113 | 0 | 2,145 | 1 | 0 | 0 | 24,121,692 | In "C:\Users\xxxxx\AppData\Local\Continuum\Anaconda3\lib\site-packages\xlwings__init__.py"
Try changing from xlwings import Workbook, Range, Chart, __version__
to from xlwings.xlwings import Workbook, Range, Chart, __version__ | 1 | 0 | 1 | importing xlwings module into python 3.4 | 1 | python,excel,xlwings | 0 | 2014-06-09T13:49:00.000 |
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front. To accomplish this, I'm planning to use a combination of Python and nodejs. Theoretically, a python service script... | 1 | 0 | 0 | 0 | false | 24,157,502 | 1 | 1,076 | 2 | 0 | 0 | 24,156,992 | Yes its possible.
Two applications written in different languages using one database is almost exactly the same situation as one application using several connections to it, so you are probably already doing it. All the possible problems are exactly the same. The database won't even know whether the connections are made from one appli...
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front. To accomplish this, I'm planning to use a combination of Python and nodejs. Theoretically, a python service script... | 1 | 1 | 1.2 | 0 | true | 24,158,115 | 1 | 1,076 | 2 | 0 | 0 | 24,156,992 | tl;dr
You can use any programming language that provides a client for the database server of your choice.
To the database server, as long as the client communicates as per the server's requirements (that is, it uses the server's library, protocol, etc.), it makes no difference what programming language ... | 1 | 0 | 0 | Can two programs, written in different languages, connect to the same SQL database? | 3 | sql,node.js,python-2.7 | 0 | 2014-06-11T07:25:00.000 |
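To illustrate, two independent connections to one database behave the same regardless of who opened them. sqlite3 stands in here for a client/server database, where the second connection could just as well belong to a Node.js process:

```python
import os
import sqlite3
import tempfile

# Two independent connections to the same database file; in the real
# project one would belong to the Python service and one to the web front.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "shared.db")

writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE diagnostics (metric TEXT, value REAL)")
writer.execute("INSERT INTO diagnostics VALUES ('cpu_temp', 41.5)")
writer.commit()                      # committed data is visible to every connection

rows = reader.execute("SELECT metric, value FROM diagnostics").fetchall()
print(rows)  # [('cpu_temp', 41.5)]
```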
I have a Python script running on my server which accesses a database, executes a fetch query and runs a learning algorithm to classify and update certain values and means depending on the query.
I want to know: if for some reason my server shuts down in between, my Python script would shut down and my query would be lost.
... | 0 | 2 | 0.379949 | 0 | false | 24,202,505 | 0 | 22 | 1 | 0 | 0 | 24,202,386 | First of all: the question is not really related to Python at all. It's a general problem.
And the answer is simple: keep track of what your script does (in a file or directly in db). If it crashes continue from the last step. | 1 | 0 | 0 | Python re establishment after Server shuts down | 1 | python | 0 | 2014-06-13T09:45:00.000 |
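A sketch of that bookkeeping with a checkpoint file (the file name and step functions are invented):

```python
import json
import os

CHECKPOINT = "progress.json"   # hypothetical progress file next to the script

def load_checkpoint():
    """Return the index of the first step that still needs to run."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["last_done"]
    return 0

def run(steps):
    for i in range(load_checkpoint(), len(steps)):
        steps[i]()                                   # the real work: fetch, classify, update
        with open(CHECKPOINT, "w") as f:
            json.dump({"last_done": i + 1}, f)       # record progress after each step

log = []
run([lambda: log.append("fetch"),
     lambda: log.append("classify"),
     lambda: log.append("update")])
print(log)  # ['fetch', 'classify', 'update']
os.remove(CHECKPOINT)   # clean up for this demo
```

After a crash, rerunning the script skips every step already marked done.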
How can I use Google Cloud Datastore stats object (in Python ) to get the number of entities of one kind (i.e. Person) in my database satisfying a given constraint (i.e. age>20)? | 0 | 1 | 0.099668 | 0 | false | 24,227,785 | 1 | 68 | 1 | 1 | 0 | 24,227,510 | You can't, that's not what it's for at all. It's only for very broad-grained statistics about the number of each types in the datastore. It'll give you a rough estimate of how many Person objects there are in total, that's all. | 1 | 0 | 0 | How to use Google Cloud Datastore Statistics | 2 | python,google-app-engine,google-cloud-datastore | 0 | 2014-06-15T07:34:00.000 |
I have a script in Python that retrieves data from a remote server into a MySQL database located on my computer (the one that runs the Python script).
This script is executed daily to retrieve fresh data into the MySQL database. I am using Workbench 6.0 for Windows 64.
I want to add a web GUI to the system that will pr... | 1 | 0 | 0 | 0 | false | 24,228,304 | 0 | 1,052 | 1 | 0 | 0 | 24,227,779 | I'd comment, but I don't have enough rep. yet. You should be able to start the Apache and PHP services separately from the WAMP tray icon. If you can't, try this:
You should be able to use the WAMP menu on the tray icon to open WAMP's my.ini file. In there, just change the port number from 3306 to something else (mi... | 1 | 0 | 0 | Python and PHP use the same mySQL database | 1 | php,python,mysql,wamp | 0 | 2014-06-15T08:17:00.000 |
I'm storing strings on the order of 150M. It's well-below the maximum size of strings in Redis, but I'm seeing a lot of different, conflicted opinions on the approach I should take, and no clear path.
On the one hand, I've seen that I should use a hash with small data chunks, and on the other hand, I've been told that ... | 4 | 2 | 0.379949 | 0 | false | 24,321,923 | 0 | 2,349 | 1 | 0 | 0 | 24,320,040 | One option would be:
Storing data as a long list of chunks
store data in a List - this allows storing the content as a sequence of chunks as well as destroying the whole list in one step
store the data using a pipeline context manager to ensure you are the only one who writes at that moment.
be aware, that Redis is always pro... | 1 | 0 | 1 | Best way to store large string in Redis... Getting mixed signals | 1 | python,redis | 0 | 2014-06-20T04:36:00.000 |
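The chunk-splitting half of that option needs no Redis at all; each piece would then be RPUSHed onto a list and LRANGE 0 -1 would rebuild the original (the chunk size is an arbitrary choice):

```python
CHUNK = 512 * 1024   # 512 KB pieces keep any single Redis value small

def chunks(data: str, size: int = CHUNK):
    """Split a large payload into fixed-size pieces, in order."""
    return [data[i:i + size] for i in range(0, len(data), size)]

blob = "x" * (3 * CHUNK + 100)   # stand-in for the ~150 MB string
pieces = chunks(blob)
print(len(pieces))               # 4 pieces; joining them rebuilds the blob
```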
I'm looking to insert the current system timestamp into a field on a database. I don't want to use the server side now() function and need to use the python client's system timestamp. What MySQL datatype can store this value, and how should I insert it? Is time.time() sufficient? | 0 | 1 | 0.099668 | 0 | false | 24,368,972 | 0 | 3,202 | 1 | 0 | 0 | 24,367,155 | time.time() is a float, if a resolution of one second is enough you can just truncate it and store it as an INTEGER. | 1 | 0 | 0 | Inserting a unix timestamp into MySQL from Python | 2 | python,mysql,python-3.x,timestamp | 0 | 2014-06-23T13:26:00.000 |
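A sketch of the truncate-and-store step, with sqlite3 standing in for MySQL (in MySQL the column type would be INT or BIGINT):

```python
import sqlite3
import time

ts = int(time.time())            # truncate the float to whole seconds

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (created INTEGER)")
conn.execute("INSERT INTO events VALUES (?)", (ts,))

stored = conn.execute("SELECT created FROM events").fetchone()[0]
assert stored == ts              # the integer round-trips unchanged
```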
I'm used to having a remote server I can use via ssh but I am looking at using Amazon Web Services for a new project to give me better performance and resilience at reduced costs but I'm struggling to understand how to use it.
This is what I want to do:
First-time:
Create a Postgres db
Connect to Amazon Server
Downlo... | 0 | 0 | 1.2 | 0 | true | 24,373,021 | 1 | 186 | 1 | 0 | 0 | 24,367,485 | First-time:
Create a Postgres db - Depending on size(small or large), might want RDS or Redshift
Connect to Amazon Server - EC2
Download code to server - upload your programs to an S3 bucket
Once a month:
Download large data file to server - Move data to S3, if using redshift data can be loaded directly from s3 to reds... | 1 | 0 | 0 | How do i get started with Amazon Web Services for this scenario? | 1 | java,python,amazon-web-services,amazon-ec2 | 0 | 2014-06-23T13:40:00.000 |
I can't connect with mysql and I can't do "python manage.py syncdb" on it
how to connect with mysql in django and django-cms without any error? | 1 | 3 | 0.291313 | 0 | false | 24,380,525 | 1 | 9,227 | 1 | 0 | 0 | 24,380,269 | This is an error message you get if MySQLdb isn't installed on your computer.
The easiest way to install it would be by entering pip install MySQL-python into your command line. | 1 | 0 | 0 | Getting “Error loading MySQLdb module: No module named MySQLdb” in django-cms | 2 | python,mysql,django,django-cms | 0 | 2014-06-24T06:59:00.000 |
There are Python libraries that allow to communicate with a database. Of course, to use these libraries there should be an installed and running database server on the computer (python cannot communicate with something that does not exist).
My question is whether the above written is applicable to the sqlite3 library. ... | 2 | 0 | 0 | 0 | false | 24,410,155 | 0 | 878 | 1 | 0 | 0 | 24,410,124 | No, sqlite package is part of Python standard library and as soon as you have Python installed, you may use sqlite functionality.
MartijnPieters noted, the actual shared library is not technically part of Python (this was my a bit oversimplified answer) but comes as shared library, which has to be installed too.
Practi... | 1 | 0 | 0 | Does python sqlite3 library need sqlite to be installed? | 2 | python,database,sqlite | 0 | 2014-06-25T13:30:00.000 |
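For example, this runs with nothing installed beyond Python itself - no database server process is involved:

```python
import sqlite3

# The database is a file (or, here, a block of memory) managed
# entirely by the bundled SQLite library.
conn = sqlite3.connect(":memory:")
print(sqlite3.sqlite_version)    # version of the bundled SQLite library
print(conn.execute("SELECT 1 + 1").fetchone()[0])  # 2
```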
I am using pandas to organize and manipulate data I am getting from the twitter API. The 'id' key returns a very long integer (int64) that pandas has no problem handling (i.e. 481496718320496643).
However, when I send to SQL:
df.to_sql('Tweets', conn, flavor='sqlite', if_exists='append', index=False)
I now have tweet ... | 0 | 0 | 0 | 0 | false | 24,419,432 | 0 | 160 | 1 | 0 | 0 | 24,416,140 | I have found the issue -- I am using SQLite Manager (Firefox Plugin) as a SQLite client. For whatever reason, SQLite Manager displays the tweet IDs incorrectly even though they are properly stored (i.e. when I query, I get the desired values). Very strange I must say. I downloaded a different SQLite client to view the ... | 1 | 0 | 0 | Long integer values in pandas dataframe change when sent to SQLite database using to_sql | 1 | python,sqlite,pandas | 0 | 2014-06-25T18:39:00.000 |
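For reference, SQLite's INTEGER type is 64-bit, so ids of this size round-trip unchanged when bound as parameters; a sketch using the id from the question:

```python
import sqlite3

tweet_id = 481496718320496643    # int64-sized tweet id from the question

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tweets (id INTEGER)")
conn.execute("INSERT INTO tweets VALUES (?)", (tweet_id,))

assert conn.execute("SELECT id FROM tweets").fetchone()[0] == tweet_id
```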
This might sound like a bit of an odd question - but is it possible to load data from a (in this case MySQL) table to be used in Django without the need for a model to be present?
I realise this isn't really the Django way, but given my current scenario, I don't really know how better to solve the problem.
I'm working ... | 2 | 1 | 0.066568 | 0 | false | 44,363,554 | 1 | 1,496 | 1 | 0 | 0 | 24,423,645 | There is one feature called inspectdb in Django. for legacy databases like MySQL , it creates models automatically by inspecting your db tables. it stored in our app files as models.py. so we don't need to type all column manually.But read the documentation carefully before creating the models because it may affect th... | 1 | 0 | 0 | Loading data from a (MySQL) database into Django without models | 3 | python,mysql,django,webproject | 0 | 2014-06-26T06:19:00.000 |
I am developing a web app based on the Google App Engine.
It has some hundreds of places (name, latitude, longitude) stored in the Data Store.
My aim is to show them on google map.
Since they are many I have registered a javascript function to the idle event of the map and, when executed, it posts the map boundaries... | 0 | 0 | 0 | 0 | false | 24,501,164 | 1 | 144 | 1 | 1 | 0 | 24,497,219 | You didn't say how frequently the data points are updated, but assuming 1) they're updated infrequently and 2) there are only hundreds of points, then consider just querying them all once, and storing them sorted in memcache. Then your handler function would just fetch from memcache and filter in memory.
This wouldn't... | 1 | 0 | 0 | Google App Engine NDB Query on Many Locations | 2 | javascript,python,google-maps,google-app-engine | 0 | 2014-06-30T19:09:00.000 |
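The filter-in-memory step might look like this (the place names and bounds are invented):

```python
# All points are assumed to be cached in memory (e.g. fetched once from
# memcache); the idle handler filters them against the visible map bounds.
places = [
    ("Rome",   41.9, 12.5),
    ("Milan",  45.5,  9.2),
    ("Naples", 40.8, 14.3),
]

def in_bounds(lat, lng, south, west, north, east):
    return south <= lat <= north and west <= lng <= east

visible = [name for name, lat, lng in places
           if in_bounds(lat, lng, 41.0, 9.0, 46.0, 13.0)]
print(visible)  # ['Rome', 'Milan']
```

For a few hundred points this linear scan is far cheaper than a datastore query on every map-idle event.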
I'm used to creating connections using MySQLdb directly so I'm not sure this is at all possible using sqlalchemy, but is there a way to get the mysql connection thread id from a mysql Session a la MySQLdb.connection.thread_id()? I've been digging through and can't seem to find a way to access it. I'm not creating a con... | 3 | 3 | 1.2 | 0 | true | 24,614,258 | 0 | 707 | 1 | 0 | 0 | 24,500,665 | session.connection().connection.thread_id() | 1 | 0 | 0 | Getting a mysql thread id from sqlalchemy Session | 1 | python,mysql,sqlalchemy,mysql-python | 0 | 2014-07-01T00:02:00.000 |
I made a program that receives user input and stores it in a MySQL database. I want to implement this program on several computers so users can upload information to the same database simultaneously. The database is very simple, it has just seven columns and the user will only enter four of them.
There would be around... | 0 | 0 | 0 | 0 | false | 24,502,420 | 0 | 1,497 | 2 | 0 | 0 | 24,502,362 | Yes, it is possible to have that many MySQL connections. It depends on a few variables. The maximum number of connections MySQL can support depends on the quality of the thread library on a given platform, the amount of RAM available, how much RAM is used for each connection, the workload from each conne... | 1 | 0 | 0 | simultaneous connections to a mysql database | 2 | mysql,python-2.7,mysql-python | 0 | 2014-07-01T04:16:00.000
I made a program that receives user input and stores it in a MySQL database. I want to implement this program on several computers so users can upload information to the same database simultaneously. The database is very simple, it has just seven columns and the user will only enter four of them.
There would be around... | 0 | 0 | 1.2 | 0 | true | 24,502,484 | 0 | 1,497 | 2 | 0 | 0 | 24,502,362 | Having simultaneous connections from the same script depends on how you're processing the requests. The typical choices are by forking a new Python process (usually handled by a webserver), or by handling all the requests with a single process.
If you're forking processes (new process each request):
A single MySQL conn... | 1 | 0 | 0 | simultaneous connections to a mysql database | 2 | mysql,python-2.7,mysql-python | 0 | 2014-07-01T04:16:00.000 |
I'm attempting to store information from a decompiled file in Dynamo.
I have all of the files stored in s3 however I would like to change some of that.
I have an object id with properties such as a date, etc which I know how to create a table of in dynamo. My issue is that each object also contains images, text files, ... | 0 | 0 | 0 | 0 | false | 24,642,053 | 0 | 477 | 2 | 0 | 0 | 24,520,176 | From what you described, I think you just need to create one table with a hash key. The hash key should be the object id. And you will have columns such as "date", "image pointer", "text pointer", etc.
DynamoDB is schema-less so you don't need to create the columns explicitly. When you call getItem the server will return you a ... | 1 | 0 | 0 | How to correctly nest tables in DynamoDb | 2 | python,database,amazon-s3,amazon-dynamodb,boto | 0 | 2014-07-01T22:25:00.000 |
I'm attempting to store information from a decompiled file in Dynamo.
I have all of the files stored in s3 however I would like to change some of that.
I have an object id with properties such as a date, etc which I know how to create a table of in dynamo. My issue is that each object also contains images, text files, ... | 0 | 0 | 1.2 | 0 | true | 24,536,208 | 0 | 477 | 2 | 0 | 0 | 24,520,176 | If you stay within DynamoDB's limit of 64KB per item,
you can have one item (row) per file.
DynamoDB has String type (for file name, date, etc) and also a StringSet (SS) for list of attributes (for text files, images).
From what you write I assume you will only save pointers (keys) to binary data in S3. ... | 1 | 0 | 0 | How to correctly nest tables in DynamoDb | 2 | python,database,amazon-s3,amazon-dynamodb,boto | 0 | 2014-07-01T22:25:00.000
I have made an online website which acts as the fronted for a database into which my customers can save sales information. So each customer logs onto the online website with their own credentials and only see their own sales records. The database comes in the form of SQL Server 2008.
Some of these customers have a thir... | 3 | 0 | 0 | 0 | false | 24,659,493 | 0 | 995 | 1 | 0 | 0 | 24,611,812 | So in summary you have a website/sql server application. Then some of your users have a separate local database with a python front end. And you need to bridge the two applications.
You can expose your SQL Server database with a REST API (using whatever tech you choose). Then create a python app that calls that a... | 1 | 0 | 0 | Syncing PC data with online data | 2 | python,sql-server,browser,sync | 0 | 2014-07-07T13:32:00.000
Goal: Take/attach pictures in a PhoneGap application and send a public URL for each picture to a Google Cloud SQL database.
Question 1: Is there a way to create a Google Cloud Storage object from a base64 encoded image (in Python), then upload that object to a bucket and return a public link?
I'm looking to use PhoneGa... | 1 | 0 | 0 | 0 | false | 24,657,475 | 1 | 620 | 1 | 1 | 0 | 24,655,877 | Yes, that is a fine use for GAE and GCS. You do not need an <input type=file>, per se. You can just set up POST parameters in your call to your GAE url. Make sure you send a hidden key as well, and work from SSL-secured urls, to prevent spammers from posting to your app. | 1 | 0 | 0 | Using PhoneGap + Google App Engine to Upload and Save Images | 2 | python,google-app-engine,cordova,google-cloud-storage | 0 | 2014-07-09T14:05:00.000 |
So I have a string in Python that contains like 500 SQL INSERT queries, separated by ;. This is purely for performance reasons, otherwise I would execute individual queries and I wouldn't have this problem.
When I run my SQL query, Python throws: IntegrityError: (1062, "Duplicate entry 'http://domain.com' for key 'PRIM... | 0 | 0 | 1.2 | 0 | true | 24,666,569 | 0 | 828 | 1 | 0 | 0 | 24,664,413 | For anyone that cares, the ON DUPLICATE KEY UPDATE SQL command was what I ended up using. | 1 | 0 | 0 | Python Ignore MySQL IntegrityError when trying to add duplicate entry with a Primary key | 1 | python,mysql,sql | 0 | 2014-07-09T21:55:00.000 |
I apologize if this has been asked already, or if this is answered somewhere else.
Anyways, I'm working on a project that, in short, stores image metadata and then allows the user to search said metadata (which resembles a long list of key-value pairs). This wouldn't be too big of an issue if the metadata was standard... | 0 | 0 | 0 | 0 | false | 24,690,665 | 1 | 992 | 1 | 0 | 0 | 24,688,388 | In a Django project you've got 4 alternatives for this kind of problem, in no particular order:
using PostgreSQL, you can use the hstore field type, that's basically a pickled python dictionary. It's not very helpful in terms of querying it, but does its job saving your data.
using Django-NoRel with mongodb you get th... | 1 | 0 | 0 | Django: storing/querying a dictionary-like data set? | 2 | python,mysql,django,mongodb,database | 0 | 2014-07-11T00:29:00.000 |
I faced with problem:
There is a big old database on microsoft sql server (with triggers, functions etc.). I am writing C# app on top of this db. Most of work is a "experiments" like this:
Write a part of functionality and see if it works in old Delphi app (i.e. inserted data in C# loaded correctly in Delphi).
So I nee... | 0 | 0 | 1.2 | 0 | true | 24,690,183 | 0 | 70 | 1 | 0 | 0 | 24,690,101 | You can run a trace in SQL Profiler to see the queries being executed on the server. | 1 | 0 | 0 | Analyse sql queries text | 1 | c#,python,sql,tsql | 0 | 2014-07-11T04:26:00.000 |
So, locally I've changed my models a few times and used South to get everything working. I have a postgres database to power my live site, and one model keeps triggering a column mainsite_message.spam does not exist error. But when I run heroku run python manage.py migrate mainsite from the terminal, I get Nothing to... | 1 | 0 | 0 | 0 | false | 24,698,874 | 1 | 1,178 | 2 | 0 | 0 | 24,697,420 | I presume that you have created a migration to add mainsite_message.spam to the schema. Have you made sure that this migration is in your git repository?
If you type git status you should see untracked files. If the migration is untracked you need to git add path_to_migration and then push it to Heroku before you can r... | 1 | 0 | 0 | Add a column to heroku postgres database | 3 | python,django,postgresql,heroku | 0 | 2014-07-11T12:11:00.000 |
So, locally I've changed my models a few times and used South to get everything working. I have a postgres database to power my live site, and one model keeps triggering a column mainsite_message.spam does not exist error. But when I run heroku run python manage.py migrate mainsite from the terminal, I get Nothing to... | 1 | 0 | 0 | 0 | false | 24,697,852 | 1 | 1,178 | 2 | 0 | 0 | 24,697,420 | Did you run schemamigration before? If yes, go to your database and take a look at your "south_migrationhistory" table; there you can see what happened.
If you already did the steps above, you should open your migration file and take a look as well; there you can see whether the column creation is specified or not! | 1 | 0 | 0 | Add a column to heroku postgres database | 3 | python,django,postgresql,heroku | 0 | 2014-07-11T12:11:00.000
I'm not sure how to best phrase this question:
I would like to UPDATE, ADD, or DELETE information in an SQLite3 Table, but I don't want this data to be written to disk yet. I would still like to be able to
SELECT the data, and get the updated information, but then I want to choose to either rollback or commit.
Is th... | 1 | 0 | 0 | 0 | false | 24,707,793 | 0 | 91 | 1 | 0 | 0 | 24,707,471 | If you explicitly need to commit multiple times throughout the code, and you are worried about the performance times of transactions, you could always build the database in memory db=sqlite3.connect(':memory:') and then dump it's contents to disk when all the time-critical aspects of the program have been completed. I.... | 1 | 0 | 0 | Can I Stage data to memory SELECT, then choose to rollback, or commit in sqlite3? python 2.7 | 3 | python,sqlite | 0 | 2014-07-11T22:20:00.000 |
I don't have access to a PHP server nor a database like MySQL on the machine I'll be working on. Would it be feasible to use Python instead of PHP and a flat file database instead of MySQL? I'm not too concerned about performance or scalability. It's not like I'm going to create the next facebook. I just want to load data from server ... | 0 | 1 | 0.049958 | 0 | false | 24,727,209 | 0 | 1,786 | 2 | 0 | 0 | 24,727,096 | Python comes bundled with the sqlite3 module, which gives access to SQLite databases. The only downside is that pretty much only one thread can hold a write lock on it at any given moment.
I don't have access PHP server nor database like Mysql on machine I'll be working on. Would it be feasible to use Python instead of PHP and flat file database instead of Mysql? I'm not too concerned about performance or scalability. It's not like I'm going to create next facebook. I just want to load data from server ... | 0 | 1 | 0.049958 | 0 | false | 24,727,365 | 0 | 1,786 | 2 | 0 | 0 | 24,727,096 | There are many ways to serve Python applications, but you should probably look at something that does this using the WSGI standard. Many frameworks will let you do this e.g: Pyramid, Pylons, Django, .....
If you haven't picked one then it would be worth looking at your long term requirements and also what you already k... | 1 | 0 | 0 | Using Python and flat file database for server-side | 4 | python,web | 0 | 2014-07-13T21:21:00.000 |
I need to be able to query documents that have a date field between some range, but sometimes in my dataset the year doesn't matter (this is represented with a boolean flag in the mongo document).
So, for example, I might have a document for Christmas (12/25-- year doesn't matter) and another document for 2014 World Cu... | 0 | 0 | 0 | 0 | false | 24,729,803 | 0 | 61 | 1 | 0 | 0 | 24,728,191 | In my opinion, you have to store the specific values you'll search on, and index them.
For example, alongside with the date, you may store "year", "month", and "day", index on "month" and "day", and do your queries on it.
You may want to store them as "y", "m", and "d" to gain some bytes (That's sad, I know). | 1 | 0 | 1 | Mongo query on custom date system | 1 | python,mongodb | 0 | 2014-07-14T00:41:00.000 |
I'm using matplotlib and MySQLdb to create some graphs from a MySQL database. For example, the number of unique visitors in a given time period, grouped by periods of say, 1 hours. So, there'll be a bunch of (Time, visits in 1-hour period near that time) points.
I have a table as (ip, visit_time) where each ip can occu... | 1 | 2 | 1.2 | 0 | true | 24,776,157 | 0 | 86 | 1 | 0 | 0 | 24,776,000 | Generally Database queries should be faster than python for two reasons:
Databases are optimised to work with data, and they will optimise a high-level abstraction language like SQL to get the best performance, while Python might be fast but isn't guaranteed to be
Running SQL analyses the data at the source and y... | 1 | 0 | 0 | Python and MySQL - which to use more? | 1 | python,mysql,sql | 0 | 2014-07-16T08:36:00.000 |
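The hourly-unique-visitors aggregation the question describes is exactly the kind of work worth pushing into the database; a sketch with sqlite3 standing in for MySQL (MySQL would use DATE_FORMAT instead of strftime; the table and sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (ip TEXT, visit_time TEXT)")
conn.executemany("INSERT INTO visits VALUES (?, ?)", [
    ("1.1.1.1", "2014-07-16 10:05:00"),
    ("1.1.1.1", "2014-07-16 10:45:00"),  # same ip, same hour -> counted once
    ("2.2.2.2", "2014-07-16 10:50:00"),
    ("1.1.1.1", "2014-07-16 11:10:00"),
])

# Group visits into 1-hour buckets and count unique IPs per bucket,
# letting the database do the aggregation instead of Python.
rows = conn.execute("""
    SELECT strftime('%Y-%m-%d %H:00', visit_time) AS hour,
           COUNT(DISTINCT ip) AS unique_visitors
    FROM visits
    GROUP BY hour
    ORDER BY hour
""").fetchall()
print(rows)  # [('2014-07-16 10:00', 2), ('2014-07-16 11:00', 1)]
```

The result is already in (time bucket, count) form, ready to hand straight to matplotlib for plotting.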
I have a folder with a large number of Excel workbooks. Is there a way to convert every file in this folder into a CSV file using Python's xlrd, xlutiles, and xlsxWriter?
I would like the newly converted CSV files to have the extension '_convert.csv'.
OTHERWISE...
Is there a way to merge all the Excel workbooks in the... | 1 | 0 | 0 | 1 | false | 24,785,891 | 0 | 2,934 | 1 | 0 | 0 | 24,785,824 | Look at OpenOffice's Python library. I suspect OpenOffice supports MS document files, though.
Python has no native support for Excel files. | 1 | 0 | 0 | Converting a folder of Excel files into CSV files/Merge Excel Workbooks | 5 | python,csv,xlrd,xlsxwriter | 0 | 2014-07-16T16:20:00.000
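The '_convert.csv' naming the question asks for is a small path transformation; a runnable sketch of that part (the actual cell copy, sketched in the comments, would need xlrd and real workbook files, so it is omitted here):

```python
import os

def convert_name(workbook_path):
    """Map an Excel workbook path to the requested *_convert.csv name."""
    root, _ext = os.path.splitext(workbook_path)
    return root + "_convert.csv"

# The cell-by-cell copy would then use xlrd plus the csv module, roughly:
#   book = xlrd.open_workbook(workbook_path)
#   sheet = book.sheet_by_index(0)
#   write sheet.row_values(i) for each row i with csv.writer
# looping os.listdir() over every workbook in the folder.
print(convert_name("reports/book1.xlsx"))  # reports/book1_convert.csv
```

Using `os.path.splitext` keeps the folder part of the path intact, so each CSV lands next to its source workbook.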
When compiling documentation using Sphinx, I got the error AttributeError: 'str' object has no attribute 'worksheets'. How do I fix this? | 0 | 0 | 1.2 | 0 | true | 24,790,240 | 0 | 466 | 1 | 0 | 0 | 24,790,239 | You're getting the error because you don't have the most recent iPython installed. You probably installed it with sudo apt-get install ipython, but you should upgrade using sudo pip install ipython --upgrade and then making sure that the previous installation was removed by running sudo apt-get remove ipython. | 1 | 0 | 1 | Compiling Sphinx with iPython doc error "AttributeError: 'str' object has no attribute 'worksheets'" | 1 | ipython,python-sphinx | 0 | 2014-07-16T20:38:00.000 |
I currently have a Raspberry Pi running Iperf non stop and collecting results.
After collecting results it uploads the bandwidth tests to MySQL.
Is there a way to automatically refresh the table to which the data is added? | 0 | 0 | 0 | 0 | false | 24,795,785 | 0 | 726 | 1 | 0 | 0 | 24,791,510 | Is your goal to use MySQL Workbench to build a live view of your data? If so, I don't think you're using the right tools.
You may just use ElasticSearch to store your data and Kibana to display it, this way you'll have free graphs and charts of your stored data, and auto-refresh (based on an interval, not on events)... | 1 | 0 | 0 | MySQL WorkBench How to automatically re run query? | 1 | python,mysql | 1 | 2014-07-16T21:57:00.000 |
I'm doing data analytics on medium sized data (2GB, 20Mio records) and on the current machine it hardly fits into memory. Windows 7 slows down considerably when reaching 3GB occupation on this 4 GB machine. Most of my current analyses need to iterate over all records and consider properties of groups of records determi... | 2 | 1 | 0.197375 | 0 | false | 24,866,662 | 0 | 246 | 1 | 0 | 0 | 24,866,113 | It's hard to say anything without knowing more about the data & aggregation you are trying to do, but definitely don't "serialize data to parse it faster with Python" -- most probably that's not where the problem is. And probably don't "store data somehow column-wise so that I don't have to read all columns" either.
sort SQLit... | 1 | 0 | 1 | Iterate over large data fast with Python? | 1 | python,database | 0 | 2014-07-21T13:18:00.000 |
I am working on a python app that uses python 2.4, postgres 8.2 and old versions of pygresql, xlrd, etc. Because of this it is quite a pain to use, and has to be used in a windows xp VM. There are other problems such as the version of xlrd doesn't support .xlsx files, but the new version of xlrd doesn't work with pytho... | 0 | 0 | 1.2 | 0 | true | 24,913,304 | 0 | 77 | 1 | 0 | 0 | 24,908,188 | IMHO you should probably commit in your master branch, then rebase your upgrade branch, it will make more sense in your repository history.
If those commits work in both environments, you should use a different branch based on the master one, so you can work on the newer version of python, then merge it in t... | 1 | 0 | 0 | Managing a different python version as a branch in git | 1 | python,git,version-control,branching-and-merging | 0 | 2014-07-23T10:36:00.000
I want to order a large SQLite table and write the result into another table. This is because I need to iterate in some order and ordering takes a very long time on that big table.
Can I rely in a (Python) iterator giving my the rows in the same order as I INSERTed them? Is there a way to guarantee that? (I heard comme... | 0 | 2 | 1.2 | 0 | true | 24,909,921 | 0 | 46 | 1 | 0 | 0 | 24,909,851 | I think you are approaching this wrong. If it is taking too long to extract data in a certain order from a table in any SQL database, that is a sign that you need to add an index. | 1 | 0 | 0 | Are SQLite rows ordered persistently? | 1 | python,sql,sqlite | 0 | 2014-07-23T11:55:00.000 |
So I'm fairly new to Django development and I started using the cx_Oracle and MySQLdb libraries to connect to Oracle and MySQL databases. The idea is to build an interface that will connect to multiple databases and support CRUD ops. The user logs in with the db credentials for the respective databases. I tried not usi... | 0 | 0 | 1.2 | 0 | true | 24,917,828 | 1 | 110 | 1 | 0 | 0 | 24,912,020 | Django uses connection pooling (i.e. a few requests share the same DB connection). Of course, you can write a middleware to close and reinitialize the connection on every request, but I can't guarantee you will not create race conditions, and, as you said, there is no point in doing so.
If you want to make automatic multi-datab... | 1 | 0 | 0 | How to use the Django ORM for creating an interface like MySQL admin that connects to multiple databases | 1 | django,mysql-python,django-orm,cx-oracle | 0 | 2014-07-23T13:38:00.000 |
I want to store an HTML string in a SQL Server database using the pyodbc driver. I have used nvarchar(max) as the data type for storing it in the database but it is throwing the following error
Error:
('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Warning: Partial insert/update. The insert/update of a text or image column(s... | 2 | 2 | 0.379949 | 0 | false | 31,753,938 | 0 | 1,047 | 1 | 0 | 0 | 24,930,835 | The link that Anthony Kong supplied includes something that may resolve the issue; it did for me in a very similar situation.
switch to DRIVER={SQL Server Native Client 10.0} instead of DRIVER={SQL Server} in the connection string
This would be for Sql Server 2008 (you didn't specify the Edition); for Sql Server 2012... | 1 | 0 | 0 | Pyodbc Store html unicode string in Sql Server | 1 | python | 0 | 2014-07-24T10:06:00.000 |
I have a python webpage which pulls information from a MSSQL database with pyodbc.
This works; however, since some of the queries that get run are quite heavy, the webpage can take 20-30 seconds to load.
I want to fix this. What would be the best way to run all queries once every 15-30 minutes and store that data locally on th... | 0 | 0 | 1.2 | 0 | true | 24,934,239 | 0 | 764 | 1 | 0 | 0 | 24,933,185 | I have run into this when creating large reports. Nobody will wait for a 30 second query, even if it's going back over 15 years of sales data.
You have a few options:
Create a SQL Job in the SQL Server Agent to run a stored procedure that runs the query and saves to a table. (This is what I do)
Use a scheduled task ... | 1 | 0 | 0 | async information from MSSQL database | 1 | python,sql,sql-server,pyodbc | 0 | 2014-07-24T12:03:00.000 |
I am developing a system that will need to connect from a remote mysql database on the fly to do a specific task. To accomplish this, I am thinking to use Mysql-db module in python. Since the remote database is not part of the system itself I do not prefer to add it on the system core database settings (DATABASES setti... | 0 | 1 | 0.099668 | 0 | false | 24,997,975 | 1 | 56 | 2 | 0 | 0 | 24,944,869 | For working inside virtualenv you need to install
pip install MySQL-python==1.2.5 | 1 | 0 | 0 | Django Database Module | 2 | database,django,mysql-python,django-database | 0 | 2014-07-24T22:10:00.000 |
I am developing a system that will need to connect from a remote mysql database on the fly to do a specific task. To accomplish this, I am thinking to use Mysql-db module in python. Since the remote database is not part of the system itself I do not prefer to add it on the system core database settings (DATABASES setti... | 0 | 0 | 1.2 | 0 | true | 24,997,774 | 1 | 56 | 2 | 0 | 0 | 24,944,869 | MySQLdb is the best way to do this. | 1 | 0 | 0 | Django Database Module | 2 | database,django,mysql-python,django-database | 0 | 2014-07-24T22:10:00.000 |
I made a little PostgreSQL trigger with Plpython. This triggers plays a bit with the file system, creates and delete some files of mine. Created files are owned by the "postgres" unix user, but I would like them to be owned by another user, let's say foobar. Triggers are installed with user "foobar" and executed with u... | 0 | 3 | 1.2 | 0 | true | 24,958,698 | 0 | 1,158 | 1 | 0 | 0 | 24,951,431 | You're confusing operating system users and PostgreSQL users.
SECURITY DEFINER lets you run a function as the defining postgresql user. But no matter what PostgreSQL user is running the operating system user the back-end server runs as is always the same - usually the operating system user postgres.
By design, the Post... | 1 | 0 | 0 | PostgreSQL trigger with a given role | 2 | postgresql,roles,plpython | 0 | 2014-07-25T08:35:00.000 |
I am using cherrypy along with sqlalchemy-mysql as a backend. I would like to know the ways of dealing with UNICODE strings in a cherrypy web application. One brute-force way would be to convert all strings coming in as parameters to UNICODE (and then decode them to UTF-8) before storing them in the database. But I was wond... | 0 | 0 | 0 | 0 | false | 25,016,312 | 0 | 153 | 1 | 0 | 0 | 24,997,946 | SQLAlchemy provides Unicode or UnicodeText for your purposes.
Also don't forget about u'text' | 1 | 0 | 0 | how to handle UNICODE characters in cherrypy-sqlalchemy-mysql application? | 1 | mysql,python-2.7,unicode,sqlalchemy,cherrypy | 0 | 2014-07-28T14:52:00.000 |
I'm using python to write a report which is put into an excel spreadshet.
There are four columns, namely:
Product Name | Previous Value | Current Value | Difference
When I am done putting in all the values I then want to sort them based on Current Value. Is there a way I can do this in xlwt? I've only seen examples of... | 1 | -1 | -0.197375 | 1 | false | 25,032,965 | 0 | 1,410 | 1 | 0 | 0 | 25,024,437 | You will get data from queries, right? Then you will write it to an Excel file with xlwt. Just before writing, you can sort the rows. If you can show us your code, then maybe I can optimize it. Otherwise, you have to follow wnnmaw's advice and do it in a more complicated way. | 1 | 0 | 0 | Sorting multiple columns in excel via xlwt for python | 1 | python,excel,xlwt | 0 | 2014-07-29T20:32:00.000
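Since xlwt only writes cells and has no sort feature, the sort has to happen on the Python side before any `sheet.write` calls; a sketch with made-up report rows:

```python
# Report rows: (Product Name, Previous Value, Current Value, Difference).
rows = [
    ("widget", 10, 30, 20),
    ("gadget", 50, 15, -35),
    ("gizmo",  20, 45, 25),
]

# Sort on Current Value (index 2), descending, before writing any cells;
# with xlwt you would then loop `sheet.write(r, c, value)` over these rows.
rows.sort(key=lambda row: row[2], reverse=True)

print([r[0] for r in rows])  # ['gizmo', 'widget', 'gadget']
```

Sorting the in-memory list first means the spreadsheet comes out ordered without ever touching written cells, which xlwt cannot rearrange after the fact.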
I am using Robot Framework with Database Library to test database queries on localhost. I am running it by XAMPP.
This is my test case:
*** Settings ***
Library DatabaseLibrary
*** Variables ***
@{DB} robotframework root \ localhost 3306
*** Test Cases ***
Select from database
... | 1 | 2 | 0.379949 | 0 | false | 32,266,513 | 1 | 4,534 | 1 | 0 | 0 | 25,072,996 | You should check the content of dbConfigFile. You don't specify one so the default one is ./resources/db.cfg.
The error says that when Python tries to parse that file it cannot find a section named default. The documentation says:
note: specifying dbapiModuleName, dbName dbUsername or dbPassword directly will override the... | 1 | 0 | 0 | Error: No section: 'default' in Robot Framework using DatabaseLibrary | 1 | python-2.7,robotframework | 0 | 2014-08-01T04:44:00.000 |
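The "No section" error comes from Python's standard config parser, which DatabaseLibrary uses to read db.cfg; a runnable illustration (Python 3 configparser here, and the key names shown are assumptions about what the library expects — check its docs for the real ones):

```python
import configparser
import io

# A config file shaped like DatabaseLibrary's db.cfg, including the
# [default] section header whose absence triggers the error.
cfg_text = """\
[default]
dbapiModuleName=pymysql
dbName=robotframework
dbUsername=root
dbPassword=
dbHost=localhost
dbPort=3306
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(cfg_text))

# This lookup is what raises "No section: 'default'" when the section
# header is missing from ./resources/db.cfg.
db_name = parser.get("default", "dbName")
print(db_name)  # robotframework
```

If the `[default]` line is deleted from the file above, the `parser.get` call raises `configparser.NoSectionError`, matching the Robot Framework error message.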
I'm initiating celery tasks via after_insert events.
Some of the celery tasks end up updating the db and therefore need the id of the newly inserted row. This is quite error-prone because it appears that if the celery task starts running immediately sometimes sqlalchemy will not have finished committing to the db and ... | 0 | 0 | 1.2 | 0 | true | 25,086,833 | 0 | 209 | 1 | 1 | 0 | 25,078,815 | It wasn't so complicated: subclass Session, providing a list for appending tasks via after_insert, then run through the list in after_commit. | 1 | 0 | 0 | SQLAlchemy after_insert triggering celery tasks | 1 | python,sqlalchemy,celery | 0 | 2014-08-01T11:01:00.000
I'm currently in the process of trying to redesign the general workflow of my lab, and am coming up against a conceptual roadblock that is largely due to my general lack of knowledge in this subject.
Our data currently is organized in a typical file system structure along the lines of:
Date\Cell #\Sweep #
where for a... | 0 | 0 | 0 | 1 | false | 25,122,607 | 0 | 332 | 1 | 0 | 0 | 25,110,089 | First of all, I am a big fan of Pytables, because it helped me manage huge data files (20GB or more per file), which I think is where Pytables plays out its strong points (fast access, built-in querying etc.). If the system is also used for archiving, the compression capabilities of HDF5 will reduce space requirements ... | 1 | 0 | 0 | Benefits of Pytables / databases over file system for data organization? | 1 | python,csv,organization,pytables | 0 | 2014-08-03T23:40:00.000 |
My RDS is in a VPC, so it has a private IP address. I can connect my RDS database instance from my local computer with pgAdmin using SSH tunneling via EC2 Elastic IP.
Now I want to connect to the database instance in my code in python. How can I do that? | 2 | 0 | 0 | 0 | false | 25,115,502 | 0 | 667 | 1 | 0 | 1 | 25,112,648 | Point your python code to the same address and port you're using for the tunnelling.
If you're not sure check the pgAdmin destination in the configuration and just copy it. | 1 | 0 | 0 | AWS - Connect to RDS via EC2 tunnel | 1 | postgresql,python-2.7,amazon-web-services,psycopg2,amazon-vpc | 0 | 2014-08-04T06:08:00.000 |
I'm using wsgi apache and flask to run my application. I'm use from yourapplication import app as application to start my application. That works so far fine. The problem is, with every request a new instance of my application is created. That leads to the unfortunate situation that my flask application creates a new d... | 1 | 1 | 1.2 | 0 | true | 25,143,417 | 1 | 1,457 | 1 | 0 | 0 | 25,143,105 | The WSGIApplicationGroup directive may be what you're looking for as long as you have the wsgi app running in daemon mode (otherwise I believe apache's default behavior is to use prefork which spins up a process to handle each individual request):
The WSGIApplicationGroup directive can be used to specify which applica... | 1 | 0 | 0 | start only one flask instance using apache + wsgi | 1 | python,apache,flask,wsgi | 0 | 2014-08-05T15:49:00.000 |
My installed version of the python(2.7) module pandas (0.14.0) will not import. The message I receive is this:
UserWarning: Installed openpyxl is not supported at this time. Use >=1.6.1 and <2.0.0.
Here's the problem - I already have openpyxl version 1.8.6 installed so I can't figure out what the problem might be! Does... | 0 | 0 | 0 | 1 | false | 25,178,533 | 0 | 114 | 1 | 0 | 0 | 25,168,058 | The best thing would be to remove the version of openpyxl you installed and let Pandas take care of it. | 1 | 0 | 0 | Python pandas module openpxyl version issue | 1 | python,pandas,openpyxl,versions | 0 | 2014-08-06T18:55:00.000
I've looked through all the docs I could find, and read the source code...and it doesn't seem you can actually create a MySQL database (or any other kind, that I could find) using peewee. If so, that means for any the database I may need to connect to, I would need to create it manually using mysql or some other tool.... | 11 | 3 | 0.197375 | 0 | false | 25,365,070 | 0 | 4,151 | 2 | 0 | 0 | 25,194,297 | Peewee cannot create databases with MySql or with other systems that require database and user setup, but will create the database with sqlite when the first table is created. | 1 | 0 | 0 | Can peewee create a new MySQL database | 3 | python-3.x,peewee | 0 | 2014-08-08T00:23:00.000 |
I've looked through all the docs I could find, and read the source code...and it doesn't seem you can actually create a MySQL database (or any other kind, that I could find) using peewee. If so, that means for any the database I may need to connect to, I would need to create it manually using mysql or some other tool.... | 11 | 14 | 1.2 | 0 | true | 25,195,428 | 0 | 4,151 | 2 | 0 | 0 | 25,194,297 | Peewee can create tables but not databases. That's standard for ORMs, as creating databases is very vendor-specific and generally considered a very administrative task. PostgreSQL requires you to connect to a specific database, Oracle muddles the distinction between users and databases, SQLite considers each file to be... | 1 | 0 | 0 | Can peewee create a new MySQL database | 3 | python-3.x,peewee | 0 | 2014-08-08T00:23:00.000 |
I have developed a website where the pages are simply html tables. I have also developed a server by expanding on python's SimpleHTTPServer. Now I am developing my database.
Most of the table contents on each page are static and don't need to be touched. However, there is one column per table (i.e. page) that need... | 0 | 1 | 1.2 | 0 | true | 25,203,796 | 0 | 273 | 1 | 0 | 0 | 25,195,723 | Probably not the answer you were looking for, but your post is very broad, and I've used win32com and Excel a fair bit and don't see those as good tools towards your goal. An easier strategy is this:
for the server, use Flask: it is a Python HTTP server that makes it crazy easy to respond to HTTP requests via Python... | 1 | 0 | 0 | Database in Excel using win32com or xlrd Or Database in mysql | 1 | python,mysql,excel,win32com,xlrd | 0 | 2014-08-08T03:38:00.000 |
This is not a question of a code, I need to extract some BLOB data from an Oracle database using python script. My question is what are the steps in dealing with BLOB data and how to read as images, videos and text? Since I have no access to the database itself, is it possible to know the type of BLOBs stored if it is ... | 0 | 0 | 0 | 0 | false | 25,205,260 | 0 | 1,342 | 1 | 0 | 0 | 25,205,157 | If you have a pure BLOB in the database, as opposed to, say, an ORDImage that happens to be stored in a BLOB under the covers, the BLOB itself has no idea what sort of binary data it contains. Normally, when the table was designed, a column would be added that would store the data type and/or the file name. | 1 | 0 | 0 | Reading BLOB data from Oracle database using python | 1 | oracle,python-2.7,blob | 0 | 2014-08-08T13:54:00.000 |
I am still a noob in web app development and sorry if this question might seem obvious for you guys.
Currently I am developing a web application for my University using Python and Django. And one feature of my web app is to retrieve a large set of data in a table in the database (PostgreSQL), and displaying these data i... | 1 | 3 | 1.2 | 0 | true | 25,208,098 | 1 | 1,235 | 1 | 0 | 0 | 25,207,697 | Are you allowed to use paging in your output? If so, then I'd start by setting a page size of 100 (for example) and then use LIMIT 100 in my various SQL queries. Essentially, each time the user clicks next or prev on the web page a new query would be executed based on the current filtering or sorting options with the L...
I am scoping out a project with large, mostly-uncompressible time series data, and wondering if Django + Postgres with raw SQL is the right call.
I have time series data that is ~2K objects/hour, every hour. This is about 2 million rows per year I store, and I would like to 1) be able to slice off data for analysis th... | 21 | 0 | 0 | 0 | false | 25,887,408 | 1 | 10,343 | 1 | 0 | 0 | 25,212,009 | You might also consider using the PostGIS postgres extension which includes support for raster data types (basically large grids of numbers) and has many features to make use of them.
However, do not use the ORM in this case; you will want to do SQL directly on the server. The ORM will add a huge amount of overhead for... | 1 | 0 | 0 | Django + Postgres + Large Time Series | 4 | python,django,postgresql,heroku,bigdata | 0 | 2014-08-08T20:48:00.000
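The "do the aggregation in raw SQL on the server" advice above can be sketched like this. This is a minimal illustration only, with sqlite3 standing in for Postgres; the table and column names are made up, not from the original question.

```python
import sqlite3

# sqlite3 stands in for Postgres here; `readings` is a hypothetical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts INTEGER, value REAL)")

# A few fake readings: three per hour for three hours
rows = [(3600 * h + m, float(h)) for h in range(3) for m in (0, 600, 1200)]
conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)

# Average per hourly bucket, computed by the database instead of the ORM
cur = conn.execute(
    "SELECT ts / 3600 AS bucket, AVG(value) AS avg_value "
    "FROM readings GROUP BY bucket ORDER BY bucket"
)
print(cur.fetchall())  # [(0, 0.0), (1, 1.0), (2, 2.0)]
```

The database returns one pre-aggregated row per bucket, so only a handful of rows cross the wire instead of millions of raw readings.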
I'm working on a project that allows users to enter SQL queries with parameters; that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months), and they then get the results back at their email address.
They'll get it in the form of an HTML-email message, so what the system basically does... | 0 | 1 | 0.066568 | 0 | false | 25,222,611 | 1 | 35 | 3 | 0 | 0 | 25,222,515 | If I were to create such an application, then:
I would have some common queries, like get by current date, current time, date ranges, time ranges, and others based on my application, for the user to select easily.
Some autocompletions for common keywords.
If the data changes frequently, there is no use saving html, ge... | 1 | 0 | 0 | Creating an archive - Save results or request them every time? | 3 | python,html,database | 0 | 2014-08-09T20:01:00.000
I'm working on a project that allows users to enter SQL queries with parameters; that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months), and they then get the results back at their email address.
They'll get it in the form of an HTML-email message, so what the system basically does... | 0 | 1 | 1.2 | 0 | true | 25,222,656 | 1 | 35 | 3 | 0 | 0 | 25,222,515 | Specifically regarding retrieving the results from queries that have been run previously, I would suggest saving the results so they can be viewed later, rather than running the queries again and again. The main benefits of this approach are:
You save unnecessary computational work re-running the same queries;
You guarante... | 1 | 0 | 0 | Creating an archive - Save results or request them every time? | 3 | python,html,database | 0 | 2014-08-09T20:01:00.000 |
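The "save the results" approach recommended above can be sketched in a few lines. This is a hypothetical illustration: sqlite3 stands in for whatever database the project uses, and `run_and_archive` is an invented helper name, not from the original question.

```python
import json
import sqlite3
import time

# Each scheduled run stores a snapshot of the query output, so the archive
# shows the data as it was at run time, even if the tables change later.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snapshots (query_id TEXT, run_at REAL, result TEXT)")

def run_and_archive(query_id, rows):
    # `rows` would come from executing the user's query; serialize as JSON
    conn.execute(
        "INSERT INTO snapshots VALUES (?, ?, ?)",
        (query_id, time.time(), json.dumps(rows)),
    )

run_and_archive("daily-report", [["alice", 3], ["bob", 5]])

# Later, the archive page re-reads the stored snapshot instead of re-running
stored = conn.execute(
    "SELECT result FROM snapshots WHERE query_id = ?", ("daily-report",)
).fetchone()[0]
print(json.loads(stored))  # [['alice', 3], ['bob', 5]]
```

Rendering the HTML email from the stored snapshot, rather than from a fresh query, is what guarantees the archive matches what the user was actually sent.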
I'm working on a project that allows users to enter SQL queries with parameters; that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months), and they then get the results back at their email address.
They'll get it in the form of an HTML-email message, so what the system basically does... | 0 | 1 | 0.066568 | 0 | false | 25,222,678 | 1 | 35 | 3 | 0 | 0 | 25,222,515 | The crucial difference is that if the data changes, a new query will return a different result from what was saved some time ago, so you have to decide if the user should get the up-to-date data or a snapshot of what the data used to be.
If relevant data does not change, it's a matter of whether the queries will be expensive, ... | 1 | 0 | 0 | Creating an archive - Save results or request them every time? | 3 | python,html,database | 0 | 2014-08-09T20:01:00.000 |
I've done some research, and I don't fully understand what I found.
My aim is to, using a UDP listener I wrote in Python, store data that it receives from an MT4000 telemetry device. This data is received and read in hex, and I want that data to be put into a table, and store it as a string in base64. In terms of stor... | 2 | 2 | 1.2 | 0 | true | 25,239,591 | 0 | 5,730 | 2 | 0 | 0 | 25,239,361 | You can just save the base64 string in a TEXT column type. After retrieval, just decode this string with base64.decodestring(data)! | 1 | 0 | 0 | How to store base64 information in a MySQL table? | 2 | python,mysql | 0 | 2014-08-11T08:57:00.000
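A round-trip sketch of the advice above. sqlite3 stands in for MySQL here, and `base64.b64decode` is used in place of the legacy `decodestring` alias the answer mentions (which was removed in later Python versions); the table name is made up.

```python
import base64
import sqlite3

raw = bytes.fromhex("deadbeef")          # e.g. a hex payload from the device
encoded = base64.b64encode(raw).decode("ascii")

# sqlite3 stands in for MySQL; a TEXT column holds the base64 string
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (payload TEXT)")
conn.execute("INSERT INTO telemetry VALUES (?)", (encoded,))

stored = conn.execute("SELECT payload FROM telemetry").fetchone()[0]
decoded = base64.b64decode(stored)
print(decoded.hex())  # deadbeef -- the original bytes are recovered
```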
I've done some research, and I don't fully understand what I found.
My aim is to, using a UDP listener I wrote in Python, store data that it receives from an MT4000 telemetry device. This data is received and read in hex, and I want that data to be put into a table, and store it as a string in base64. In terms of stor... | 2 | 0 | 0 | 0 | false | 62,777,767 | 0 | 5,730 | 2 | 0 | 0 | 25,239,361 | You can store a base64 string in a TEXT column type, but in my experience I recommend using the LONGTEXT type to avoid truncation errors with large base64 strings. | 1 | 0 | 0 | How to store base64 information in a MySQL table? | 2 | python,mysql | 0 | 2014-08-11T08:57:00.000
Using Boto, you can create an S3 bucket and configure a lifecycle for it; say expire keys after 5 days. I would like to not have a default lifecycle for my bucket, but instead set a lifecycle depending on the path within the bucket. For instance, having path /a/ keys expire in 5 days, and path /b/ keys to never expire.... | 0 | 0 | 1.2 | 0 | true | 25,245,827 | 1 | 41 | 1 | 0 | 1 | 25,245,710 | After some research in the boto docs, it looks like using the prefix parameter in the lifecycle add_rule method allows you to do this. | 1 | 0 | 0 | Setting a lifecycle for a path within a bucket | 1 | python,amazon-web-services,amazon-s3,boto | 0 | 2014-08-11T14:28:00.000 |
When mounting and writing files in Google Cloud Storage using gcsfs, gcsfs creates folders and files but does not write the files. Most of the time it shows an input/output error. It occurs even when we copy files from a local directory to the mounted gcsfs directory.
gcsfs version 0.15 | 1 | 0 | 0 | false | 57,971,532 | 0 | 554 | 1 | 1 | 0 | 25,265,110 | Although this is quite an old topic, I will try to provide an answer, especially for people who might stumble on this in the course of their own work. I have experience using more recent versions of gcsfs and it works quite well. You can find the latest documentation at https://gcsfs.readthedocs.io/en/latest. To ...
I am using Python to parse an Excel file and am accessing the application COM using excel = Dispatch('Excel.Application'). At the beginning of a restart, the code will find the application object just fine and I will be able to access the active workbook.
The problem comes when I have had two instances of Excel open an... | 3 | 2 | 1.2 | 0 | true | 25,308,893 | 0 | 1,075 | 1 | 0 | 0 | 25,298,281 | When an application registers itself, only the first instance gets registered, until it dies and then the very next instance to register gets registered.
There's no registration queue, so when your first instance dies, the second remains unregistered, so any call to Excel.Application will launch a third instance and they... | 1 | 1 | 0 | win32com dispatch Won't Find Already Open Application Instance | 1 | python,excel,com,win32com | 0 | 2014-08-14T00:26:00.000
From a pyramid middleware application I'm calling a stored procedure with pymssql. The procedure responds nicely upon the first request I pass through the middleware from the frontend (angularJS). Upon subsequent requests however, I do not get any response at all, not even a timeout.
If I then restart the pyramid appl... | 0 | 1 | 1.2 | 0 | true | 25,646,833 | 1 | 122 | 1 | 0 | 0 | 25,367,508 | The solution was rather trivial. Within one object instance, I was calling two different stored procedures without closing the connection after the first call. That caused a pending request or so in the MSSQL-DB, locking it for further requests. | 1 | 0 | 0 | pyramid middleware call to mssql stored procedure - no response | 1 | python-2.7,stored-procedures,pyramid,pymssql | 0 | 2014-08-18T16:05:00.000 |
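The fix described above, not leaving a connection open between two stored-procedure calls, can be sketched with generic DB-API code. This is an illustrative pattern only: sqlite3 stands in for pymssql, and `call_proc` is a hypothetical helper name, not from the original answer.

```python
import sqlite3

def call_proc(sql):
    # Open a fresh connection per call and always release it afterwards,
    # instead of reusing one open connection across calls.
    conn = sqlite3.connect(":memory:")
    try:
        cur = conn.execute(sql)
        return cur.fetchall()
    finally:
        conn.close()   # nothing left pending to block the next call

first = call_proc("SELECT 1")
second = call_proc("SELECT 2")   # works: the first call left no connection open
print(first, second)  # [(1,)] [(2,)]
```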
In general I want to know the possible benefits of Graphite. For now I have a web app that receives data directly from a JavaScript Ajax call and plots the data using Highcharts.
It first runs 20 different queries for each graph using Python from my SQL database.
And sends each result's data to the Highcharts library using G... | 0 | 0 | 1.2 | 0 | true | 25,381,895 | 0 | 199 | 1 | 0 | 0 | 25,374,338 | Maybe it is better to call one Ajax request which gets all the data, and then prepare a parser which will return the data for each chart. | 1 | 0 | 0 | Graphite or multiple query with AJAX call? | 1 | javascript,python,ajax,highcharts,graphite | 0 | 2014-08-19T01:18:00.000
I am using Python's csv module to output a series of parsed text documents with metadata. I am using the csv.writer module without specifying a special delimiter, so I am assuming it is delimited using commas. There are many commas in the text as well as in the metadata, so I was expecting there to be way more column... | 1 | 1 | 0.099668 | 0 | false | 25,380,579 | 0 | 518 | 1 | 0 | 0 | 25,380,448 | Inspect the real content of the CSV file you have created and you will see that there are ways to enclose text in quotes. This allows distinction between the delimiter and the same character inside a text value.
Check the csv module documentation; it explains these details too. | 1 | 0 | 1 | Python CSV module - how does it avoid delimiter issues? | 2 | python,excel,csv,export-to-csv | 0 | 2014-08-19T09:52:00.000
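The quoting behaviour the answer describes can be seen directly with the standard csv module: fields containing the delimiter are wrapped in quotes on write, and the reader unwraps them back into the same fields.

```python
import csv
import io

# Write a row whose fields themselves contain commas
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["doc1", "Hello, world", "meta, with, commas"])

text = buf.getvalue()
print(text.strip())  # doc1,"Hello, world","meta, with, commas"

# Reading it back restores exactly three columns, commas intact
row = next(csv.reader(io.StringIO(text)))
print(len(row))  # 3
```

With the default QUOTE_MINIMAL dialect, only fields that actually contain the delimiter (or quotes/newlines) get quoted, which is why the file looks mostly unquoted at first glance.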
We have a query which sometimes returns 0 records when called. When you call getQueryResults on the jobId, it returns a valid pageToken with 0 rows. This is a bit unexpected since technically there is no data. What's worse is that if you keep supplying the pageToken for subsequent data pulls, it keeps giving zero rows... | 1 | 0 | 1.2 | 0 | true | 25,393,093 | 1 | 471 | 1 | 1 | 0 | 25,388,124 | This is a known issue that has lingered for far, far too long. It is fixed in this week's release, which should go live this afternoon or tomorrow.
Let's assume I am developing a service that provides a user with articles. Users can favourite articles and I am using Solr to store these articles for search purposes.
However, when the user adds an article to their favourites list, I would like to be able to figure out which articles the user has added to favouri... | 0 | 0 | 1.2 | 0 | true | 25,414,143 | 1 | 102 | 1 | 0 | 0 | 25,413,343 | I'd go with a modified version of the first one - it'll keep user-specific data that's not going to be used for search out of the index for now (although if you foresee a case where you want to search for favourited articles, it would probably be an interesting field to have in the index). For just display purposes li...
Logging SQL queries is useful for debugging, but in some cases it's useless to log the whole query, especially for big inserts. In this case, displaying only the first N characters would be enough.
Is there a simple way to truncate SQL queries when they are logged? | 2 | 2 | 1.2 | 0 | true | 25,447,136 | 1 | 142 | 1 | 0 | 0 | 25,446,832 | It's quite simple actually:
in settings.py, let's say your logger is based on a handler whose formatter is named 'simple'.
'formatters': {
    ...
    'simple': {
        'format': '%(asctime)s %(message).150s'
    },
    ...
},
The message will now be truncated to the first 150 characters. Pl... | 1 | 0 | 0 | Truncate logging of sql queries in Django | 1 | python,django | 0 | 2014-08-22T12:15:00.000
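The `.150` precision trick from the answer can be demonstrated outside Django with the standard logging module. A shorter `.20` precision is used here so the truncation is visible; the logger name is made up.

```python
import io
import logging

# %-style precision in the format string truncates the rendered message
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message).20s"))

logger = logging.getLogger("sql-demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("SELECT * FROM a_very_long_table_name WHERE id = 42")
print(stream.getvalue().strip())  # SELECT * FROM a_very  (first 20 characters)
```

The truncation happens at format time, so the full query string is still available to any other handler attached to the same logger.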
I ran a simple select query (with no LIMIT applied) using the BigQuery Python API. I also supplied a destination table as the result was too large. When run, the job returned an "unexpected LIMIT clause" error. I used ignore case at the end of the query. It is possible that this is causing the proble... | 1 | 1 | 1.2 | 0 | true | 25,489,066 | 0 | 144 | 1 | 0 | 0 | 25,483,349 | This issue is an artifact of how bigquery does "allow large results" queries interacting poorly with the "ignore case" clause. We're tracking an internal bug on the issue, and hopefully will have a fix soon. The workaround is either to remove the "allow large results" flag or the "ignore case" clause.
Background.
My OS is Win7 64bit.
My Python is 2.7 64bit from python-2.7.8.amd64.msi
My cx_Oracle is 5.0 64bit from cx_Oracle-5.0.4-10g-unicode.win-amd64-py2.7.msi
My Oracle client is 10.1 (I don't know whether it's 32- or 64-bit, but SQL*Plus is 10.1.0.2.0)
Database is
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64b... | 4 | 2 | 0.379949 | 0 | false | 27,795,948 | 0 | 6,803 | 1 | 1 | 0 | 25,542,787 | If Python finds more than one OCI.DLL file in the path (even if they are identical), it will throw this error. (Your path statement looks like it may contain more than one.) You can manipulate the path inside your script to constrain where Python will look for the supporting Oracle files, which may be your only option i... | 1 | 0 | 0 | Python + cx_Oracle : Unable to acquire Oracle environment handle | 1 | python,oracle | 0 | 2014-08-28T07:10:00.000
I need to know the steps to generate an Excel sheet in OpenERP.
Or, to put it this way: I want to generate an Excel sheet for data that I have retrieved from different tables through queries, with a function that I call from a button on a wizard. Now I want that when I click on the button, an Excel sheet should be generat... | 0 | 1 | 1.2 | 0 | true | 25,993,349 | 1 | 596 | 1 | 0 | 0 | 25,552,075 | You can do it easily with the Python library XlsxWriter. Just download it and add it to the OpenERP server, then look at the XlsxWriter documentation; there are also other Python libraries for generating XLSX reports. | 1 | 0 | 0 | What are the steps to create or generate an Excel sheet in OpenERP? | 1 | python,openerp | 0 | 2014-08-28T15:00:00.000