Question | Q_Score | Users Score | Score | Data Science and Machine Learning | is_accepted | A_Id | Web Development | ViewCount | Available Count | System Administration and DevOps | Networking and APIs | Q_Id | Answer | Database and SQL | GUI and Desktop Applications | Python Basics and Environment | Title | AnswerCount | Tags | Other | CreationDate |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Is it possible to write nice-formatted excel files with dataframe.to_excel-xlsxwriter combo?
I am aware that it is possible to format cells when writing with pure xlsxwriter. But dataframe.to_excel takes so much less space.
I would like to adjust cell width and add some colors to column names.
What other alternatives w... | 1 | 2 | 0.379949 | 1 | false | 28,862,593 | 0 | 100 | 1 | 0 | 0 | 28,839,976 | I found xlwings. It's intuitive and does all the things I want to do. Also, it does well with all pandas data types. | 1 | 0 | 0 | How to do formatting with a combination of pandas dataframe.to_excel and xlsxwriter? | 1 | python,pandas,xlsxwriter | 0 | 2015-03-03T19:10:00.000 |
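For reference, the to_excel/xlsxwriter combination the question asks about does support formatting; a minimal sketch (file and sheet names are hypothetical) that sets column widths and colors the header row after pandas writes the data:

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})

# ExcelWriter with the xlsxwriter engine exposes the underlying workbook and
# worksheet objects, so xlsxwriter formatting can be applied after to_excel().
with pd.ExcelWriter("report.xlsx", engine="xlsxwriter") as writer:
    df.to_excel(writer, sheet_name="Sheet1", index=False)
    workbook = writer.book
    worksheet = writer.sheets["Sheet1"]
    header_fmt = workbook.add_format({"bold": True, "bg_color": "#DDEBF7"})
    worksheet.set_column("A:B", 20)  # adjust cell/column width
    for col, name in enumerate(df.columns):  # re-write headers with the colored format
        worksheet.write(0, col, name, header_fmt)
```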
Is it possible to add or remove entries in an Excel file or a text file while it is still open (viewing a live update of values from Python output) instead of seeing the output in the terminal? | 2 | 3 | 0.53705 | 0 | false | 28,879,986 | 0 | 2,289 | 1 | 0 | 0 | 28,879,391 | It depends on the application you are using to view the file. You will have to check the features available in the tools you are using.
For instance, in Excel, this is impossible. When you open an Excel document, it actually creates an invisible copy. You are not editing the original. It is only when the file is saved ... | 1 | 0 | 1 | Editing an open document with python | 1 | python,excel,xlsx | 0 | 2015-03-05T13:50:00.000 |
I've written some code that iterates through a flat file. After a certain section is completed reading, I take the data and put it into a spreadsheet. Then, I go back and continue reading the flat file for the next section and write to a new worksheet...and so on and so forth.
When looping through the python code, I cr... | 1 | 0 | 0 | 0 | false | 28,909,616 | 0 | 1,050 | 1 | 0 | 0 | 28,909,360 | Instead of assigning the variable worksheet to workbook.add_worksheet(thename), have a list called worksheets. When you normally do worksheet = workbook.add_worksheet(thename), do worksheets.append(workbook.add_worksheet(thename)). Then access your latest worksheet with worksheets[-1]. | 1 | 0 | 1 | python xlsxwriter worksheet object reuse | 2 | python,xlsxwriter | 0 | 2015-03-06T23:26:00.000 |
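A minimal sketch of the list-based pattern this answer suggests, with hypothetical sheet names; the most recently created worksheet is always `worksheets[-1]`:

```python
import xlsxwriter

workbook = xlsxwriter.Workbook("sections.xlsx")
worksheets = []  # keep every worksheet object instead of rebinding one variable

for name in ("section1", "section2", "section3"):
    worksheets.append(workbook.add_worksheet(name))
    worksheets[-1].write(0, 0, "data for " + name)  # write to the latest sheet

workbook.close()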
after I put the photologue on the server, I have no issue with uploading photos.
the issue is when I am creating a Gallery from the admin site, I can choose only one photo to be attached to the Gallery. even if I selected many photos, one of them will be linked to the Gallery only.
The only way to add photos to a galle... | 0 | 0 | 0 | 0 | false | 31,394,483 | 1 | 184 | 2 | 0 | 0 | 28,927,247 | I had exactly the same problem. I suspected a problem with the django-sortedm2m package. To associate a photo with a gallery, it was using SortedManyToMany() from the sortedm2m package. For some reason, the admin widget associated with this package did not function well (I tried the Firefox, Chrome and Safari browsers).
I actually ... | 1 | 0 | 0 | Gallery in Photologue can have only one Photo | 3 | python,django,python-3.4,django-1.7,photologue | 0 | 2015-03-08T13:57:00.000 |
after I put the photologue on the server, I have no issue with uploading photos.
the issue is when I am creating a Gallery from the admin site, I can choose only one photo to be attached to the Gallery. even if I selected many photos, one of them will be linked to the Gallery only.
The only way to add photos to a galle... | 0 | 0 | 0 | 0 | false | 32,932,624 | 1 | 184 | 2 | 0 | 0 | 28,927,247 | I guess your problem is solved by now, but just in case.. I had the same problem. Looking around in the logs, I found it was caused by me not having consolidated the static files from sortedm2m with the rest of my static files (hence the widget was not working properly). | 1 | 0 | 0 | Gallery in Photologue can have only one Photo | 3 | python,django,python-3.4,django-1.7,photologue | 0 | 2015-03-08T13:57:00.000 |
I have the following TimeStamp value: Wed Jun 25 09:18:15 +0000 2014.
I am writing a MapReduce program in Python that reads JSON objects from an Amazon S3 location and export it to a local CSV file. The CSV file will then export data to a MySQL and HBase database. I have about 200 million records (1 TB), so I need to o... | 1 | 2 | 0.379949 | 0 | false | 28,963,527 | 0 | 444 | 1 | 0 | 0 | 28,961,577 | Use long to represent time (milli seconds), so you don't bother about the date formatting/string encoding. It's space efficient and much easier to perform range queries. | 1 | 0 | 0 | Best way to store TimeStamp | 1 | python,mysql,csv,hbase | 0 | 2015-03-10T10:41:00.000 |
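A minimal sketch of the conversion this answer recommends, parsing the Twitter-style timestamp into epoch milliseconds (the `%z` directive requires Python 3):

```python
from datetime import datetime

raw = "Wed Jun 25 09:18:15 +0000 2014"  # Twitter-style created_at value
dt = datetime.strptime(raw, "%a %b %d %H:%M:%S %z %Y")  # %z needs Python 3
millis = int(dt.timestamp() * 1000)  # store this long in MySQL/HBase
print(millis)

# range queries then become simple integer comparisons, e.g.:
# SELECT ... WHERE ts BETWEEN 1403000000000 AND 1404000000000
```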
I have a python script that queries some data from several web APIs and after some processing writes it to MySQL. This process must be repeated every 10 seconds. The data needs to be available to Google Compute instances that read MySQL and perform CPU-intensive work.
For this workflow I thought about using GCloud SQL ... | 0 | 1 | 1.2 | 0 | true | 29,050,842 | 1 | 1,243 | 1 | 1 | 0 | 29,044,322 | The finest resolution of a cron job is 1 minute, so you cannot run a cron job once every 10 seconds.
In your place, I'd run a Python script that starts a new thread every 10 seconds to do your MySQL work, accompanied by a cronjob that runs every minute. If the cronjob finds that the Python script is not running, it wou... | 1 | 0 | 0 | Cron job on google cloud managed virtual machine | 2 | python,google-app-engine,cron,virtual-machine,google-compute-engine | 0 | 2015-03-14T01:06:00.000 |
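A minimal sketch of the long-running script this answer describes, with a 10-second cadence that accounts for how long the work itself takes (the MySQL work is left as a stub):

```python
import time

def do_mysql_work():
    """Query the web APIs and write the results to MySQL."""
    pass  # ... actual work goes here ...

while True:
    started = time.time()
    do_mysql_work()
    # sleep out whatever remains of the 10-second slot
    time.sleep(max(0, 10 - (time.time() - started)))
```

A separate once-a-minute cron entry can then act as the watchdog that restarts this script if it has died.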
I'm new to Python and the mysql-python module. Is there any way to reuse the db connection so that we don't have to connect() and close() every time a request comes?
More generally, how can I keep 'status' on server-side? Can somebody give me a tutorial to follow or guide me somehow, lots of thanks! | 0 | 0 | 1.2 | 0 | true | 29,064,930 | 0 | 101 | 1 | 0 | 0 | 29,064,875 | Really not possible with CGI, the original Common Gateway Interface dictates that the program be run from scratch for each request.
You'd want to use WSGI instead (a Python standard), which allows your application be long-lived. WSGI in turn is easiest if you use a Web Framework such as Pyramid, Flask or Django; their... | 1 | 0 | 0 | How to Reuse Database Connection under Python CGI? | 1 | python,cgi | 0 | 2015-03-15T19:02:00.000 |
I am new to this so a silly question
I am trying to make a demo website using Django for that I need a database.. Have downloaded and installed MySQL Workbench for the same. But I don't know how to setup this.
Thank you in advance :)
I tried googling stuff but didn't find any exact solution for the same.
Please help | 0 | 1 | 1.2 | 0 | true | 29,493,720 | 1 | 1,235 | 1 | 0 | 0 | 29,102,422 | I am a Mac user. I have luckily overcome the issue of connecting Django to MySQL Workbench. I assume that you have already installed the Django package and created your project directory, e.g. mysite.
Initially, after installing MySQL Workbench, I created a database: create database djo;
Go to mysite/settings.py and... | 1 | 0 | 0 | Connect MySQL Workbench with Django in Eclipse in a mac | 1 | python,mysql,django,eclipse,pydev | 0 | 2015-03-17T14:56:00.000 |
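A minimal sketch of the settings.py change the answer is heading toward, pointing Django at the database created above (user/password/host values are assumptions to adjust):

```python
# mysite/settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'djo',        # the database created in MySQL Workbench
        'USER': 'root',       # assumption: replace with your MySQL account
        'PASSWORD': 'secret',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}
```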
I have some very peculiar behavior happening when running a data importer using multiprocessor in python. I believe that this is a database issue, but I am not sure how to track it down. Below is a description of the process I am doing:
1) Multiprocessor file that runs XX number of processors doing parts two and thr... | 0 | 0 | 0 | 0 | false | 29,169,114 | 0 | 260 | 1 | 0 | 0 | 29,129,589 | I solved this by upping my instance type to a m3.Large instance without limited CPU credits. Everything works well now. | 1 | 0 | 1 | Importing data to mysql RDS with python multiprocessor - RDS | 1 | python,mysql,linux,amazon-ec2,rds | 0 | 2015-03-18T18:10:00.000 |
I'm not sure what exactly the wording for the problem is so if I haven't been able to find any resource telling me how to do this, that's most likely why.
The basic problem is that I have a webcrawler, coded in Python, that has a 'Recipe' object that stores certain data about a specific recipe such as 'Name', 'Instruct... | 0 | 0 | 0 | 0 | false | 29,170,744 | 0 | 45 | 1 | 0 | 0 | 29,170,268 | For the first question (how do I make sure I'm not duplicating ingredients?), if I understand well, is basically put your primary key as (i_id, name) in the table ingredients. This way you guarantee that is impossible insert an ingredient with the same key (i_id, name).
Now for the second question (how do I insert the ... | 1 | 0 | 0 | Inserting data into SQL database that needs to be linked | 2 | python,sql | 0 | 2015-03-20T15:33:00.000 |
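A minimal sketch of the insert-and-link flow under an assumed schema (tables `recipes`, `ingredients` with UNIQUE(name), and a `recipe_ingredients` link table), using sqlite3 for illustration:

```python
import sqlite3

conn = sqlite3.connect("recipes.db")  # hypothetical database/schema
cur = conn.cursor()

def ingredient_id(name):
    # INSERT OR IGNORE leaves existing rows alone, so the UNIQUE constraint
    # on ingredients.name prevents duplicates across crawls
    cur.execute("INSERT OR IGNORE INTO ingredients (name) VALUES (?)", (name,))
    cur.execute("SELECT i_id FROM ingredients WHERE name = ?", (name,))
    return cur.fetchone()[0]

cur.execute("INSERT INTO recipes (name) VALUES (?)", ("Pancakes",))
recipe_id = cur.lastrowid  # primary key of the row just inserted
for name in ("flour", "milk"):
    cur.execute("INSERT INTO recipe_ingredients (recipe_id, i_id) VALUES (?, ?)",
                (recipe_id, ingredient_id(name)))
conn.commit()
```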
I have a MySQLdb installation for Python 2.7.6. I have created a MySQLdb cursor once and would like to reuse the cursor for every incoming request. If 100 users are simultaneously active and doing a db query, does the cursor serve each request one by one and block others?
If that is the case, is there way to avoid that... | 0 | 1 | 0.099668 | 0 | false | 29,199,028 | 0 | 633 | 1 | 0 | 0 | 29,196,096 | For this purpose you can use Persistence Connection or Connection Pool.
Persistent Connection - very, very, very bad idea. Don't use it! Just don't! Especially when you are talking about web programming.
Connection Pool - better than a persistent connection, but with no deep understanding of how it works, you will en... | 1 | 0 | 0 | Is a MySQLdb cursor for Python blocking in nature by default? | 2 | python,mysql,mysql-python | 0 | 2015-03-22T15:21:00.000 |
MS SQL Server supports passing a table as a stored-procedure parameter. Is there any way to utilize this from Python, using PyODBC or pymssql? | 1 | 0 | 1.2 | 0 | true | 29,679,132 | 0 | 830 | 1 | 0 | 0 | 29,371,570 | Use IronPython. It allows direct access to the .net framework, and therefore you can build a DataTable object and pass it over. | 1 | 0 | 0 | Can you pass table input parameter to SQL Server from Python | 2 | python,sql-server,pyodbc | 0 | 2015-03-31T14:49:00.000 |
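A minimal IronPython sketch of the DataTable approach this answer describes, passing a table-valued parameter to a stored procedure; the type `dbo.IdList`, procedure `dbo.MyProc` and connection string are assumptions:

```python
import clr
clr.AddReference("System.Data")
from System import Int32
from System.Data import CommandType, DataTable, SqlDbType
from System.Data.SqlClient import SqlConnection, SqlCommand

# in-memory table whose shape matches a user-defined table type (hypothetical dbo.IdList)
table = DataTable()
table.Columns.Add("Id", clr.GetClrType(Int32))
for i in (1, 2, 3):
    table.Rows.Add(i)

conn = SqlConnection("Server=.;Database=mydb;Integrated Security=true")
conn.Open()
cmd = SqlCommand("dbo.MyProc", conn)           # proc taking @ids dbo.IdList READONLY
cmd.CommandType = CommandType.StoredProcedure
param = cmd.Parameters.AddWithValue("@ids", table)
param.SqlDbType = SqlDbType.Structured          # marks it as a table-valued parameter
param.TypeName = "dbo.IdList"
cmd.ExecuteNonQuery()
conn.Close()
```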
I am using
db = MySQLdb.connect(host="machine01", user=local_settings.DB_USERNAME, passwd=local_settings.DB_PASSWORD, db=local_settings.DB_NAME)
to connect to a DB, but I am doing this from machine02 and I thought this would still work, but it does not. I get
_mysql_exceptions.OperationalError: (1045, "Access denied f... | 0 | 0 | 0 | 0 | false | 29,372,847 | 0 | 41 | 2 | 0 | 0 | 29,372,365 | The error tells you that 'test_user' at machine 'machine02' is not allowed. Probably user 'test_user' is registered in the 'mysql.user' table with 'localhost' as the connection host. Check it using a query like this: select host, user from mysql.user;
Best regards,
Oscar. | 1 | 0 | 0 | Query MySQL db from Python returns "Access Denied" | 2 | python,mysql | 0 | 2015-03-31T15:25:00.000 |
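If the host check above confirms the problem, a sketch of the fix (database name and password are assumptions; MySQL 5.x GRANT syntax — MySQL 8 needs a separate CREATE USER first):

```python
import MySQLdb

# connect as an account with GRANT privileges on machine01
admin = MySQLdb.connect(host="machine01", user="root", passwd="...")
cur = admin.cursor()
# authorize test_user to connect from machine02 (or use '%' for any host)
cur.execute("GRANT ALL PRIVILEGES ON mydb.* TO 'test_user'@'machine02' "
            "IDENTIFIED BY 'test_password'")
cur.execute("FLUSH PRIVILEGES")
```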
I am using
db = MySQLdb.connect(host="machine01", user=local_settings.DB_USERNAME, passwd=local_settings.DB_PASSWORD, db=local_settings.DB_NAME)
to connect to a DB, but I am doing this from machine02 and I thought this would still work, but it does not. I get
_mysql_exceptions.OperationalError: (1045, "Access denied f... | 0 | 0 | 1.2 | 0 | true | 29,372,390 | 0 | 41 | 2 | 0 | 0 | 29,372,365 | Make sure your firewall isn't blocking port 3306. | 1 | 0 | 0 | Query MySQL db from Python returns "Access Denied" | 2 | python,mysql | 0 | 2015-03-31T15:25:00.000 |
I am passing the output from a sql query to again insert the data to ms sql db. If my data is null python / pyodbc is returning None instead of NULL. What is the best way to convert None to NULL when I am calling another query using the same data.
Or a basic string transformation is the only way out ?
Thanks
Shakti | 5 | -1 | -0.099668 | 0 | false | 29,431,913 | 0 | 16,822 | 1 | 0 | 0 | 29,431,557 | You could overwrite query function in way that None will be replace with "NULL" | 1 | 0 | 1 | How convert None to NULL with Python 2.7 and pyodbc | 2 | python,sql-server,pyodbc | 0 | 2015-04-03T11:43:00.000 |
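Worth noting: with parameterized queries, pyodbc converts Python None to SQL NULL automatically, so no string transformation is needed. A minimal sketch (DSN and table names are hypothetical):

```python
import pyodbc

conn = pyodbc.connect("DSN=mydsn")  # hypothetical DSN
cur = conn.cursor()

row = cur.execute("SELECT id, note FROM src WHERE id = ?", 1).fetchone()
# pyodbc maps Python None to SQL NULL when passed as a parameter,
# so no substitution of the word NULL is needed
cur.execute("INSERT INTO dest (id, note) VALUES (?, ?)", row.id, row.note)
conn.commit()
```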
I have a script to format a bunch of data and then push it into excel, where I can easily scrub the broken data, and do a bit more analysis.
As part of this I'm pushing quite a lot of data to excel, and want excel to do some of the legwork, so I'm putting a certain number of formulae into the sheet.
Most of these ("=AV... | 3 | 0 | 0 | 0 | false | 29,487,114 | 0 | 813 | 1 | 0 | 0 | 29,486,671 | I suspect that there might be a subtle difference in what you think you need to write as the formula and what is actually required. openpyxl itself does nothing with the formula, not even check it. You can investigate this by comparing two files (one from openpyxl, one from Excel) with ostensibly the same formula. The ... | 1 | 0 | 0 | openpyxl and stdev.p name error | 2 | python,openpyxl | 0 | 2015-04-07T08:01:00.000 |
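On the specific name error: openpyxl writes formulas verbatim, and Excel stores functions introduced after Excel 2007 (such as STDEV.P) with an internal `_xlfn.` prefix, so writing the prefixed name usually avoids the NAME? error. A sketch of that assumption:

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
# post-2007 Excel functions are stored with an internal _xlfn. prefix
ws["A1"] = "=_xlfn.STDEV.P(B1:B10)"
ws["A2"] = "=AVERAGE(B1:B10)"  # older functions need no prefix
wb.save("stats.xlsx")
```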
I think InfluxDB is a really cool time series DB.
I am planning to use it as an intermediate data aggregator (collecting time based metrics from many sensors).
The data needs to be processed in "moving window" manner - when X samples received, Python based processing algorithm should be triggered.
What is the best wait... | 0 | 1 | 0.197375 | 0 | false | 29,533,201 | 0 | 528 | 1 | 0 | 0 | 29,528,394 | Not using Python, but in my case I use continuous queries in InfluxDB to automatically consolidate data in one place/series. Then I request every X seconds on the newly created series using a time window to select my data. They are then drawn using a standard framework (highcharts.js).
Maybe in your case you could wait for... | 1 | 0 | 0 | How to use InfluxDB as an intermediate data storage | 1 | python,time-series,influxdb | 0 | 2015-04-09T01:51:00.000 |
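On the Python side, a minimal sketch of the "moving window" trigger the question describes, buffering samples until X have arrived (the sample shape is an assumption):

```python
from collections import deque

WINDOW = 100                    # trigger after X samples
window = deque(maxlen=WINDOW)   # old samples fall off automatically

def on_new_sample(sample):
    window.append(sample)
    if len(window) == WINDOW:
        process(list(window))   # hand a snapshot to the processing algorithm

def process(samples):
    # placeholder processing: average of the window
    print(sum(samples) / float(len(samples)))
```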
I'm writing a web application in python and postgreSQL. Users are to access a lot of information during a session. All such information (almost) are indexed in the database. My question is, should I litter the code with specific queries, or is it better practice to query larger chunks of information, cashing it, and le... | 0 | 1 | 0.197375 | 0 | false | 29,538,970 | 0 | 30 | 1 | 0 | 0 | 29,538,870 | Database designers spend a lot of time on caching and optimization. Unless you hit a specific problem, it's probably better to let the database do the database stuff, and your code do the rest instead of having your code try to take over some of the database functionality. | 1 | 0 | 0 | General queries vs detailed queries to database | 1 | python,postgresql | 0 | 2015-04-09T12:44:00.000 |
I am using Python to stream large amounts of Twitter data into a MySQL database. I anticipate my job running over a period of several weeks. I have code that interacts with the twitter API and gives me an iterator that yields lists, each list corresponding to a database row. What I need is a means of maintaining a p... | 0 | 0 | 1.2 | 0 | true | 29,552,956 | 0 | 48 | 1 | 0 | 0 | 29,552,868 | I think the right answer is to try and handle the connection errors; it sounds like you'd only be pulling in a much a larger library just for this feature, while trying and catching is probably how it's done, whatever level of the stack it's at. If necessary, you could multithread these things since they're probably IO... | 1 | 0 | 0 | Persistant MySQL connection in Python for social media harvesting | 1 | python,mysql | 0 | 2015-04-10T03:32:00.000 |
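A minimal sketch of a reconnect-and-retry wrapper for a weeks-long job like this (connection parameters are assumptions); it rebuilds the MySQLdb connection with exponential backoff when the server drops it:

```python
import time
import MySQLdb

def connect():
    return MySQLdb.connect(host="localhost", user="u", passwd="p", db="tweets")

conn = connect()

def execute_with_retry(sql, args, retries=5):
    global conn
    for attempt in range(retries):
        try:
            cur = conn.cursor()
            cur.execute(sql, args)
            conn.commit()
            return
        except MySQLdb.OperationalError:
            time.sleep(2 ** attempt)   # back off, then rebuild the connection
            conn = connect()
    raise RuntimeError("gave up after %d retries" % retries)
```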
I'm trying to create unit tests for a function that uses database queries in its implementation. My understanding of unit testing is that you shouldn't be using outside resources such as databases for unit testing, and you should just create mock objects essentially hard coding the results of the queries.
However, in t... | 1 | 1 | 0.099668 | 0 | false | 29,576,807 | 0 | 105 | 2 | 0 | 0 | 29,565,712 | As it seems I got the wrong end of the stick, I had a similarish problem and like you an ORM was not an option.
The way I addressed it was with simple collections of Data Transfer objects.
So the new code I wrote, had no direct access to the db. It did everything with simple lists of objects. All the business logic an... | 1 | 0 | 0 | Unit testing on implementation-specific database usage | 2 | python,database,unit-testing | 1 | 2015-04-10T15:51:00.000 |
I'm trying to create unit tests for a function that uses database queries in its implementation. My understanding of unit testing is that you shouldn't be using outside resources such as databases for unit testing, and you should just create mock objects essentially hard coding the results of the queries.
However, in t... | 1 | 2 | 1.2 | 0 | true | 29,566,319 | 0 | 105 | 2 | 0 | 0 | 29,565,712 | Well, to start with, I think this is very much something that depends on the application context, the QA/dev's skill set & preferences. So, what I think is right may not be right for others.
Having said that...
In my case, I have a system where an extremely complex ERP database, which I dont control, is very much in t... | 1 | 0 | 0 | Unit testing on implementation-specific database usage | 2 | python,database,unit-testing | 1 | 2015-04-10T15:51:00.000 |
I have a command-line tool that I'm creating and I'm looking for a safe place to put my sqlite database so it doesn't get overwritten or deleted by the user by accident in mac,windows,or linux and be accessible by my application. | 0 | 1 | 0.197375 | 0 | false | 29,589,262 | 0 | 64 | 1 | 0 | 0 | 29,587,822 | Your tool runs with the permissions of the user.
Any file created by it can also be delete by the same user.
You can ask the administrator to protect your files, but on most Mac/Windows/Linux PCs, the user is the administrator.
There is no place that is safe from the user that controls your tool's execution environment... | 1 | 0 | 0 | Where to put sqlite database in python command-line project | 1 | python,linux,windows,macos,sqlite | 0 | 2015-04-12T09:15:00.000 |
I have a PostgreSQL db. Pandas has a 'to_sql' function to write the records of a dataframe into a database. But I haven't found any documentation on how to update an existing database row using pandas when im finished with the dataframe.
Currently I am able to read a database table into a dataframe using pandas read_sq... | 25 | 0 | 0 | 1 | false | 68,004,057 | 0 | 11,070 | 1 | 0 | 0 | 29,607,222 | For sql alchemy case of read table as df, change df, then update table values based on df, I found the df.to_sql to work with name=<table_name> index=False if_exists='replace'
This should replace the old values in the table with the ones you changed in the df | 1 | 0 | 0 | Update existing row in database from pandas df | 2 | python,postgresql,pandas | 0 | 2015-04-13T14:01:00.000 |
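A minimal sketch of the round trip this answer describes (connection string, table and column names are hypothetical):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@localhost/mydb")  # hypothetical DSN
df = pd.read_sql_table("mytable", engine)        # read the table into a frame
df.loc[df["status"] == "old", "status"] = "new"  # edit rows in pandas
# rewrite the whole table with the modified frame
df.to_sql("mytable", engine, if_exists="replace", index=False)
```

Note that `if_exists="replace"` drops and recreates the table, so it replaces everything, not just the changed rows.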
In SQLAlchemy, is there a way to store arbitrary metadata in an column object? For example, I want to store a flag on each column that says whether or not that column should be serialized, and then access this information via inspect( Table ).attrs. | 0 | 2 | 1.2 | 0 | true | 29,611,824 | 0 | 95 | 1 | 0 | 0 | 29,611,273 | You can pass extra data at the info param in Column initializer Column(...., info={'data': 'data'}) | 1 | 0 | 0 | Storing arbitrary metadata in SQLAlchemy column | 1 | python,sqlalchemy | 0 | 2015-04-13T17:21:00.000 |
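A minimal sketch of the `info` parameter in use, storing a per-column serialization flag and reading it back (the flag name is an assumption):

```python
from sqlalchemy import Column, Integer, String, Table, MetaData

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50), info={"serialize": True}),    # arbitrary metadata
    Column("secret", String(50), info={"serialize": False}),
)

# read the metadata back when deciding what to serialize
serializable = [c.name for c in users.columns if c.info.get("serialize")]
print(serializable)  # ['name']
```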
I apologize in advance for my lack of knowledge concerning character encoding.
My question is: are there any inherent advantages/disadvantages to using the 'Unicode' type, rather than the 'String' type, when storing data in PostgreSQL using SQLAlchemy (or vice-versa)? If so, would you mind elaborating? | 14 | 5 | 0.761594 | 0 | false | 35,273,011 | 0 | 3,572 | 1 | 0 | 0 | 29,617,210 | In 99.99% of the cases go for Unicode and if possible use Python 3 as it would make your life easier. | 1 | 0 | 0 | 'Unicode' vs. 'String' with SQLAlchemy and PostgreSQL | 1 | python,postgresql,unicode,sqlalchemy,python-2.x | 0 | 2015-04-14T00:27:00.000 |
I am using the openpyxl module for my Python scripts to create and edit .xlsx files directly from the script.
Now I want to save a not known amount of number on after the other. How can I increase the cell number? So if the last input was made in A4, how can I say that the next should be in A5? | 0 | 0 | 0 | 0 | false | 29,658,155 | 0 | 90 | 1 | 0 | 0 | 29,642,697 | You can use the .offset() method of a cell to get a cell a particular number of rows or cells away. | 1 | 0 | 0 | openpyxl for Python dynamic lines | 1 | python,export-to-excel | 0 | 2015-04-15T06:08:00.000 |
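A minimal sketch of the offset() pattern this answer suggests, walking down column A one cell at a time for an unknown number of values:

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active

cell = ws["A1"]
for value in [10, 20, 30]:       # unknown number of inputs
    cell.value = value
    cell = cell.offset(row=1)    # A1 -> A2 -> A3, one row down each time
wb.save("out.xlsx")
```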
I am working on a Python/MySQL cloud app with a fairly complex architecture. Operating this system (currently) generates temporary files (plain text, YAML) and log files and I had intended to store them on the filesystem.
However, our prospective cloud operator only provides a temporary, non-persistent filesystem to ap... | 11 | -1 | -0.066568 | 0 | false | 29,656,524 | 0 | 976 | 1 | 1 | 0 | 29,656,422 | Store your logs in MySQL. Just make a table like this:
time | source | action
-----|--------|-------
unixtime | somemodule | error/event
Your temporary storage should be enough for temporary files :) | 1 | 0 | 0 | How/where to store temp files and logs for a cloud app? | 3 | python,mysql,redis,cloud,storage | 0 | 2015-04-15T17:04:00.000 |
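A minimal sketch of that log table and a helper to write to it (table and column names are assumptions matching the layout above):

```python
import time
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="...", db="appdb")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS app_log (
                 time   BIGINT      NOT NULL,   -- unix timestamp
                 source VARCHAR(64) NOT NULL,   -- module name
                 action TEXT        NOT NULL    -- error/event text
               )""")

def log(source, action):
    cur.execute("INSERT INTO app_log (time, source, action) VALUES (%s, %s, %s)",
                (int(time.time()), source, action))
    conn.commit()
```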
I am using py2neo and I would like to extract the information from query returns so that I can do stuff with it in python. For example, I have a DB containing three "Person" nodes:
for num in graph.cypher.execute("MATCH (p:Person) RETURN count(*)"):
print num
outputs:
>> count(*)
3
Sorry for shitty formatting, it ... | 2 | 0 | 0 | 0 | false | 29,683,003 | 0 | 2,448 | 1 | 0 | 0 | 29,682,897 | can you int(), float() str() on the __str__() method that looks to be outputting the value you want in your example? | 1 | 0 | 1 | How to convert neo4j return types to python types | 2 | python,neo4j,type-conversion,py2neo | 0 | 2015-04-16T18:16:00.000 |
I need to use some aggregate data in my django application that changes frequently and if I do the calculations on the fly some performance issues may happen. Because of that I need to save the aggregate results in a table and, when data changes, update them. Because I use django some options may be exist and some mayb... | 13 | 14 | 1 | 0 | false | 55,397,481 | 1 | 6,549 | 1 | 0 | 0 | 29,716,972 | You can use Materialized view with postgres. It's very simple.
You have to create a view with a query like: CREATE MATERIALIZED VIEW my_view AS SELECT * FROM my_table;
Then create a model with two options in the model Meta, managed = False and db_table = 'my_view' (the Django Meta option is db_table, not db_name), like this:
class MyModel(models.Model):
    class Meta:
        managed ... | 1 | 0 | 0 | using materialized views or alternatives in django | 2 | python,sql-server,django,database,postgresql | 0 | 2015-04-18T11:54:00.000 |
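A fuller sketch of that unmanaged model (field names are hypothetical, mirroring whatever the view selects):

```python
from django.db import models

class MyModel(models.Model):
    # columns mirroring the materialized view's SELECT list (hypothetical)
    name = models.CharField(max_length=100)
    total = models.IntegerField()

    class Meta:
        managed = False        # Django never creates/migrates this "table"
        db_table = "my_view"   # point the ORM at the materialized view
```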
The same attributes stored in __dict__ are needed to restore the object, right? | 0 | 1 | 0.197375 | 0 | false | 29,786,610 | 0 | 100 | 1 | 0 | 0 | 29,786,322 | I think a SQLAlchemy RowProxy uses _row, a tuple, to store the value. It doesn't have a __dict__, so no storage overhead of a _dict__ per row. Its _parent object has fields which store the column names to index pos in tuple lookup. Pretty common thing to do if you are trying to cut on down sql fetching result sizes ... | 1 | 0 | 0 | Why is a pickled SQLAlchemy model object smaller than its pickled `__dict__`? | 1 | python,sqlalchemy | 0 | 2015-04-22T01:50:00.000 |
I am using simple_salesforce package in python to extract data from SalesForce.
I have a table that has around 2.8 million records and I am using query_more to extract all data.
SOQL extracts 1000 rows at a time. How can I increase the batchsize in python to extract maximum number of rows at a time. [I hope maximum num... | 0 | 0 | 0 | 0 | false | 34,097,734 | 0 | 122 | 1 | 0 | 0 | 29,799,993 | If you truly wish to extract everything, you can use the query_all function.
query_all calls the helper function get_all_results which recursively calls query_more until query_more returns "done". The returned result is the full dictionary of all your results.
The plus, you get all of your data in a single dictionary.... | 1 | 0 | 0 | Increasing Batch size in SOQL | 1 | python,salesforce,soql | 0 | 2015-04-22T14:01:00.000 |
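A minimal sketch of query_all in use (credentials and the SOQL query are placeholders); it pages through query_more internally and returns one dictionary:

```python
from simple_salesforce import Salesforce

sf = Salesforce(username="u@example.com", password="...", security_token="...")
result = sf.query_all("SELECT Id, Name FROM Account")  # pages for you
print(result["totalSize"])
for record in result["records"]:
    print(record["Id"], record["Name"])
```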
I'm currently running into soft memory errors on my Google App Engine app because of high memory usage. A number of large objects are driving memory usage sky high.
I thought perhaps if I set and recalled them from memcache maybe that might reduce overall memory usage. Reading through the docs this doesn't seem to ... | 0 | 3 | 1.2 | 0 | true | 29,806,800 | 1 | 130 | 1 | 1 | 0 | 29,806,384 | Moving objects to and from memcache will have no impact on your memory unless you destroy these objects in your code or empty collections.
A bigger problem is that memcache entities are limited to 1MB, and memcache is not guaranteed. The first of these limitations means that you cannot push very large objects into... | 1 | 0 | 0 | Will using memcache reduce my instance memory? | 1 | python-2.7,google-app-engine,memcached | 0 | 2015-04-22T18:48:00.000 |
I'm using cx_Oracle module in python. Do we need to close opened cursors explicitly? What will happen when we miss to close the cursor after fetching data and closing only the connection object (con.close()) without issuing cursor.close()?
Will there be any chance of memory leak in this situation? | 4 | 1 | 0.099668 | 0 | false | 30,171,565 | 0 | 4,239 | 1 | 0 | 0 | 29,843,170 | If you use multiple cursors, cursor.close() will help you release the resources you don't need anymore.
If you just use one cursor with one connection, I think connection.close() is fine. | 1 | 0 | 0 | cx_Oracle module cursor close in python | 2 | python,cx-oracle | 0 | 2015-04-24T09:05:00.000 |
I'm having this weird problem when using
Model.objects.get(op1=1,op2=2)
it raises the does not exist error although it exists. Did that ever happen with anyone?
I even checked in my logs to make sure that the log happened when the id already existed in the database.
[2015-04-24 20:18:21,106] ERROR: Couldn't find the... | 0 | 0 | 1.2 | 0 | true | 29,882,375 | 1 | 57 | 1 | 0 | 0 | 29,854,433 | I solved it by using transaction.commit() before my second query. | 1 | 0 | 0 | Model.objects.get returns nothing | 1 | python,django,object,get,models | 0 | 2015-04-24T18:00:00.000 |
I have UNIQUE constraint on two columns of a table in SQLite.
If I insert a record with a duplicate on these two columns into the table, I will get an exception (sqlite3.IntegrityError).
Is it possible to retrieve the primary key ID of this record upon such a violation, without doing an additional SELECT? | 1 | 1 | 1.2 | 0 | true | 29,874,681 | 0 | 1,878 | 1 | 0 | 0 | 29,871,461 | If the primary key is part of the UNIQUE constraint that led to the violation, you already have its value.
Otherwise, the two columns in the UNIQUE constraint are an alternate key for the table, i.e., they can uniquely identify the conflicting row.
If you need the actual primary key, you need to do an additional SELECT... | 1 | 0 | 0 | Return existing primary key ID upon constraint failure in sqlite3 | 2 | python,python-3.x,sqlite,unique-constraint | 0 | 2015-04-25T22:31:00.000 |
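A minimal sketch of the pattern this answer implies, catching the violation and recovering the existing primary key with one follow-up SELECT (table and column names are assumptions):

```python
import sqlite3

conn = sqlite3.connect("things.db")
cur = conn.cursor()

def insert_or_get_id(a, b):
    try:
        cur.execute("INSERT INTO things (col_a, col_b) VALUES (?, ?)", (a, b))
        return cur.lastrowid
    except sqlite3.IntegrityError:
        # UNIQUE(col_a, col_b) violated: those two columns identify the row,
        # so one extra SELECT recovers the existing primary key
        cur.execute("SELECT id FROM things WHERE col_a = ? AND col_b = ?", (a, b))
        return cur.fetchone()[0]
```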
I have two repositories, written in Flask and Django.
These projects share the database model, which is written in SQLAlchemy in Flask and in the Django ORM.
When I write a migration script in Flask with Alembic, how can the Django project migrate with that script?
I have also thought about Django with SQLAlchemy, but I can't fin... | 6 | 2 | 0.197375 | 0 | false | 29,890,773 | 1 | 3,757 | 1 | 0 | 0 | 29,890,684 | Firstly, don't do this; you're in for a world of pain. Use an API to pass data between apps.
But if you are resigned to doing it, there isn't actually any problem with migrations. Write all of them in one app only, either Django or Alembic and run them there. Since they're sharing a database table, that's all there is ... | 1 | 0 | 0 | How to manage django and flask application sharing one database model? | 2 | python,django,flask,sqlalchemy | 0 | 2015-04-27T08:25:00.000 |
I am using the psycopg2 library with Python3 on a linux server to create some temporary tables on Redshift and querying these tables to get results and write to files on the server.
Since my queries are long and takes about 15 minutes to create all these temp tables that I ultimate pull data from, how do I ensure that... | 0 | 1 | 1.2 | 0 | true | 29,915,754 | 0 | 82 | 1 | 0 | 0 | 29,893,476 | Re-declaring a cursor doesn't create new connection while using psycopg2. | 1 | 0 | 0 | Does redeclaring a cursor create new connection while using psycopg2? | 1 | linux,postgresql,python-3.x,psycopg2,amazon-redshift | 0 | 2015-04-27T10:42:00.000 |
I am using Python's peewee ORM with MYSQL. I want to list the active connections for the PooledDatabase. Is there any way to list..? | 1 | 2 | 1.2 | 0 | true | 29,968,980 | 0 | 283 | 1 | 0 | 0 | 29,962,386 | What do you mean "active"? Active as in being "checked out" by a thread, or active as in "has a connection to the database"?
For the first, you would just do pooled_db._in_use.
For the second, it's a little trickier -- basically it will be the combination of pooled_db._in_use (a dict) and pooled_db._connections (a heap... | 1 | 0 | 0 | Counting Active connections in peewee ORM | 1 | python-2.7,peewee | 0 | 2015-04-30T08:10:00.000 |
I'm stumped on this one, please help me oh wise stack exchangers...
I have a function that uses xlrd to read in an .xls file which is a file that my company puts out every few months. The file is always in the same format, just with updated data. I haven't had issues reading in the .xls files in the past but the newe... | 4 | 0 | 0 | 0 | false | 30,945,220 | 0 | 1,917 | 1 | 0 | 0 | 29,971,186 | I had the same problem and I think we have to look at the cells excel that these are not picking up empty, that's how I solved it. | 1 | 0 | 0 | Python XLRD Error : formula/tFunc unknown FuncID:186 | 3 | python,windows,excel,python-2.7,xlrd | 1 | 2015-04-30T15:01:00.000 |
I had a duplicate sqlite database. I tried deleting the duplicate but instead deleted both. Is there a way I can generate a new database? The data was not especially important. | 7 | 3 | 0.291313 | 0 | false | 66,293,699 | 1 | 5,560 | 1 | 0 | 0 | 29,991,871 | When you have no database in your project, a simple python manage.py migrate will create a new db.sqlite3 file. | 1 | 0 | 0 | Generating new SQLite database django | 2 | python,django,web | 0 | 2015-05-01T17:30:00.000 |
I have a table that stores tasks submitted by users, with timestamps. I would like to write a query that returns certain rows based on when they were submitted (was it this day/week/month..).
To check if it was submitted on this week, I wanted to use date.isocalendar()[1] function. The problem is, that my timestamps a... | 0 | 0 | 0 | 0 | false | 30,030,232 | 0 | 1,710 | 1 | 0 | 0 | 30,029,827 | Can you try sqlalchemy.extract(func.date('year', Task.timestamp)) == ... ? | 1 | 0 | 0 | SQLAlchemy func issue with date and .isocalendar() | 3 | python,sqlite,date,datetime,sqlalchemy | 0 | 2015-05-04T12:10:00.000 |
Os: Mac 10.9
Python ver: 2.7.9
database: postgresql 9.3
I am putting the following command to install psycopg2 in my virtualenv:
ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install psycopg2
I am getting the following error:
Traceback (most recent call last):
File "/Users/dialynsoto/py... | 0 | 0 | 0 | 0 | false | 30,148,423 | 0 | 213 | 1 | 0 | 0 | 30,148,133 | Try to find hashlib module within your system. It is likely that you have two modules and the one that is being imported is the wrong one (remove the wrong one if it is the case) or you should simply upgrade your python version. | 1 | 0 | 0 | psycopg2 error installation in virtualenv | 1 | python | 0 | 2015-05-10T05:41:00.000 |
cursor.execute(sql_statement)
conn.close()
return cursor
The above are the closing lines of my program. I have 3 HTML pages (users, workflows, home); returning the cursor is returning data for the workflows and home pages, but not for the users page.
Whereas, if I do return cursor.fetchall(), then it's working for all 3 pages.
Th... | 0 | 0 | 0 | 0 | false | 30,181,819 | 1 | 24 | 1 | 0 | 0 | 30,181,471 | If you close the connection, you cannot iterate a cursor anymore. There is no connection to the database. | 1 | 0 | 0 | Returning cursor isn't retrieving data from DB | 1 | python-2.7,postgresql-9.3 | 0 | 2015-05-12T03:52:00.000 |
I have a scenario in which I am writing a formula for calculating the sum of the values of different cells in an xlsx file. After calculating the sum I write it into a different cell. I am doing this in Python, and for xlsx writing I am using XlsxWriter.
For writing values I am using the in-memory option for XlsxWriter and... | 1 | 0 | 0 | 0 | false | 30,227,506 | 0 | 505 | 1 | 0 | 0 | 30,222,389 | But is there any way of calculating the sum value in memory?
Not with XlsxWriter since it doesn't have a calculation engine like Excel.
However, if you only need to do a sum then you could do that in Python. | 1 | 0 | 0 | How to trigger calculation of formula in xlsx while writing value cell | 1 | python,xlsx,xlsxwriter | 0 | 2015-05-13T18:15:00.000 |
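A minimal sketch of doing the sum in Python while still writing the formula; write_formula's optional last argument seeds the cached result, so readers that don't recalculate (i.e., anything other than Excel) still see a value:

```python
import xlsxwriter

values = [1.5, 2.5, 3.0]
workbook = xlsxwriter.Workbook("sums.xlsx", {"in_memory": True})
worksheet = workbook.add_worksheet()

worksheet.write_column(0, 0, values)
total = sum(values)                      # computed in Python, not by Excel
# formula for Excel, plus the precomputed result as the cached value
worksheet.write_formula(len(values), 0, "=SUM(A1:A3)", None, total)
workbook.close()
```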
I am using sqlalchemy to query memory logs off a MySql database. I am using:
session.query(Memory).filter(Memory.timestamp.between(from_date, to_date))
but the results after using the time window are still too many.
Now I want to query for results withing the time window, but filtered down by asking for entries logge... | 0 | 0 | 0 | 0 | false | 30,238,066 | 0 | 587 | 1 | 0 | 0 | 30,234,706 | If I understand correctly your from_date and to_date are just dates. If you set them to python datetime objects with the date/times you want your results between, it should work. | 1 | 0 | 0 | How to query rows with a minute/hour step interval in SqlAlchemy? | 1 | python,mysql,flask,sqlalchemy | 0 | 2015-05-14T10:14:00.000 |
I want to create a table(postgres) that stores data about what items were viewed by what user. authenticated users are no problem but how can I tell one anonymous user from another anonymous user? This is needed for analysis purposes.
maybe store their IP address as unique ID? How can I do this? | 8 | 7 | 1.2 | 0 | true | 30,298,038 | 1 | 4,523 | 1 | 0 | 0 | 30,297,785 | I think you should use cookies.
When a user that is not authenticated makes a request, look for a cookie named whatever ("nonuserid" in this case). If the cookie is not present, it means it's a new user, so you should set the cookie with a random id. If it's present, you can use the id in it to identify the anonymous ... | 2 | 0 | 0 | how to give some unique id to each anonymous user in django | 2 | python,django,authentication | 0 | 2015-05-18T07:52:00.000 |
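A minimal sketch of that cookie flow as a hypothetical Django helper:

```python
import uuid

def track_anonymous(request, response):
    # hypothetical helper called from a view or middleware
    anon_id = request.COOKIES.get("nonuserid")
    if anon_id is None:
        anon_id = uuid.uuid4().hex
        response.set_cookie("nonuserid", anon_id, max_age=60 * 60 * 24 * 365)
    return anon_id  # use as the key in your page-view table
```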
I have a 3+ million record XLS file which i need to dump in Oracle 12C DB (direct dump) using a python 2.7.
I am using Cx_Oracle python package to establish connectivity to Oracle , but reading and dumping the XLS (using openpyxl pckg) is extremely slow and performance degrades for thousands/million records.
From a scr... | 4 | 2 | 1.2 | 0 | true | 30,324,469 | 0 | 4,256 | 3 | 0 | 0 | 30,324,370 | If is possible to export your excel fila as a CSV, then all you need is to use sqlldr to load the file in db | 1 | 0 | 0 | loading huge XLS data into Oracle using python | 5 | python,oracle,cx-oracle | 0 | 2015-05-19T11:29:00.000 |
I have a 3+ million record XLS file which i need to dump in Oracle 12C DB (direct dump) using a python 2.7.
I am using Cx_Oracle python package to establish connectivity to Oracle , but reading and dumping the XLS (using openpyxl pckg) is extremely slow and performance degrades for thousands/million records.
From a scr... | 4 | 0 | 0 | 0 | false | 30,328,198 | 0 | 4,256 | 3 | 0 | 0 | 30,324,370 | Excel also comes with ODBC support so you could pump straight from Excel to Oracle assuming you have the drivers. That said, anything that involves transforming a large amount of data in memory (from whatever Excel is using internally) and then passing it to the DB is likely to be less performant than a specialised bul... | 1 | 0 | 0 | loading huge XLS data into Oracle using python | 5 | python,oracle,cx-oracle | 0 | 2015-05-19T11:29:00.000 |
I have a 3+ million record XLS file which i need to dump in Oracle 12C DB (direct dump) using a python 2.7.
I am using Cx_Oracle python package to establish connectivity to Oracle , but reading and dumping the XLS (using openpyxl pckg) is extremely slow and performance degrades for thousands/million records.
From a scr... | 4 | 0 | 0 | 0 | false | 63,836,641 | 0 | 4,256 | 3 | 0 | 0 | 30,324,370 | Automate the export of XLSX to CSV as mentioned in a previous answer. But, instead of then calling a sqlldr script, create an external table that uses your sqlldr code. It will load your table from the CSV each time the table is selected from. | 1 | 0 | 0 | loading huge XLS data into Oracle using python | 5 | python,oracle,cx-oracle | 0 | 2015-05-19T11:29:00.000 |
Sorry for the rookie question. I have a sqlite file and I need to get table column names. How can I get them? | 1 | 2 | 1.2 | 0 | true | 30,329,701 | 0 | 696 | 1 | 0 | 0 | 30,329,528 | use the pragma table_info(spamtable) command. The table names will be index 1 of the returned tuples. | 1 | 0 | 0 | How to get list of column names of a sqlite db file | 2 | python,database,sqlite | 0 | 2015-05-19T15:12:00.000 |
I am using GoogleScraper for some automated searches in python.
GoogleScraper keeps search results for search queries in its database, named google_scraper.db. E.g., if I have searched site:*.us engineering books and, due to an internet issue while GoogleScraper was building the JSON file, the result is missed and the JSON file is not... | 0 | 0 | 1.2 | 0 | true | 30,433,834 | 1 | 192 | 1 | 0 | 1 | 30,347,571 | I solved the issue where GoogleScraper keeps searches in its database: we first have to run the following command
GoogleScraper --clean
This command cleans all cache and we can search again with new results.
Regards! | 1 | 0 | 0 | GoogleScraper keeps searches in database | 1 | python,bash,web-scraping | 0 | 2015-05-20T10:54:00.000 |
How to do syncdb in django 1.4.2?
i.e. having data in database, how to load the models again when the data schema is updated?
Thanks in advance | 1 | 3 | 1.2 | 0 | true | 30,392,918 | 1 | 2,775 | 1 | 0 | 0 | 30,387,974 | Thanks Amyth for the hints.
btw the commands is a bit different, i will post a 10x tested result here.
Using south
1. setup the model
python manage.py schemamigration models --initial
dump data if you have to
python manage.py dumpdata -e contenttypes -e auth.Permission --natural > data.json
syncdb
python manage.... | 1 | 0 | 0 | How to do django syncdb in version 1.4.2? | 2 | python,django,django-models,django-syncdb | 0 | 2015-05-22T03:38:00.000 |
I could create tables using the command alembic revision -m 'table_name' and then defining the versions and migrate using alembic upgrade head.
Also, I could create tables in a database by defining a class in models.py (SQLAlchemy).
What is the difference between the two? I'm very confused. Have I messed up the concep... | 19 | 52 | 1.2 | 0 | true | 30,425,438 | 1 | 10,088 | 1 | 0 | 0 | 30,425,214 | Yes, you are thinking about it in the wrong way.
Let's say you don't use Alembic or any other migration framework. In that case you create a new database for your application with the following steps:
Write your model classes
Create and configure a brand new database
Run db.create_all(), which looks at your models and... | 1 | 0 | 0 | What is the difference between creating db tables using alembic and defining models in SQLAlchemy? | 2 | python,flask,sqlalchemy,alembic | 0 | 2015-05-24T15:30:00.000 |
My problem is rather simple : I have an Excel Sheet that does calculations and creates a graph based on the values of two cells in the sheet. I also have two lists of inputs in text files. I would like to loop through those text files, add the values to the excel sheet, refresh the sheet, and print the resulting graph ... | 1 | 1 | 0.099668 | 0 | false | 30,436,742 | 0 | 226 | 1 | 0 | 0 | 30,436,329 | I think you should consider win32com for Excel operations in Python instead of openpyxl/XlsxWriter.
You can read/write Excel, create charts and format Excel files using win32com without any limitations.
And for creating charts you could consider matplotlib; after creating a chart you can also save it to a PDF file. | 1 | 0 | 0 | Automatic input from text file in excel | 2 | python,excel | 0 | 2015-05-25T10:34:00.000 |
I have a large A.csv file (~5 Gb) with several columns. One of the columns is Model.
There is another large B.csv file (~15 Gb) with Vendor, Name and Model columns.
Two questions:
1) How can I create result file that combines all columns from A.csv and corresponding Vendor and Name from B.csv (join on Model). The trick... | 0 | 0 | 0 | 1 | false | 30,441,330 | 0 | 862 | 1 | 0 | 0 | 30,441,107 | As @Marc B said, reading one row at a time is the solution.
About the join I would do the following (pseudocode: I don't know python).
"Select distinct Model from A" on first file A.csv
Read all rows, search for Model field and collect distinct values in a list/array/map
"Select distinct Model from B" on second file... | 1 | 0 | 0 | Concatenate large files in sql-like way with limited RAM | 3 | python,file,memory,merge | 0 | 2015-05-25T14:57:00.000 |
I'm trying to istall mysql server on a windows 7 machine - that has python 3.4.3 installed. However, when trying to install the python connectors for 3.4, the installer fails to recognize the python installation, saying python 3.4 is not installed.
Has anyone solved this issue before? I'ts driving me nuts... | 26 | 13 | 1 | 0 | false | 39,704,698 | 0 | 35,704 | 7 | 0 | 0 | 30,467,495 | Just to add to the murkiness, I had the same error with current version of MySql install when attempting with python 3.5 installed (which is the latest python download). Long story short, I uninstalled python 3.5, installed python 3.4.4 (which interestingly didn't update PATH so I updated it manually) and reran instal... | 1 | 0 | 0 | mysql installer fails to recognize python 3.4 | 12 | mysql,python-3.x,installation | 0 | 2015-05-26T19:42:00.000 |
I'm trying to istall mysql server on a windows 7 machine - that has python 3.4.3 installed. However, when trying to install the python connectors for 3.4, the installer fails to recognize the python installation, saying python 3.4 is not installed.
Has anyone solved this issue before? I'ts driving me nuts... | 26 | 10 | 1 | 0 | false | 35,611,377 | 0 | 35,704 | 7 | 0 | 0 | 30,467,495 | just in case anyone else has this issue in future. Look at what bit version you have for Python 3.4. When I installed 64 bit version of Python 3.4, this issue went away. | 1 | 0 | 0 | mysql installer fails to recognize python 3.4 | 12 | mysql,python-3.x,installation | 0 | 2015-05-26T19:42:00.000 |
I'm trying to istall mysql server on a windows 7 machine - that has python 3.4.3 installed. However, when trying to install the python connectors for 3.4, the installer fails to recognize the python installation, saying python 3.4 is not installed.
Has anyone solved this issue before? I'ts driving me nuts... | 26 | 8 | 1 | 0 | false | 54,292,906 | 0 | 35,704 | 7 | 0 | 0 | 30,467,495 | I ran into a similar issue with Python 3.7.2.
In my case, the problem was that I tried to install the 64 bit MySQL connector, but had the 32 bit version of Python installed on my machine.
I got a similar error message:
Python v3.7 not found. We only support Python installed using the Microsoft Windows Installer (MSI) ... | 1 | 0 | 0 | mysql installer fails to recognize python 3.4 | 12 | mysql,python-3.x,installation | 0 | 2015-05-26T19:42:00.000 |
I'm trying to istall mysql server on a windows 7 machine - that has python 3.4.3 installed. However, when trying to install the python connectors for 3.4, the installer fails to recognize the python installation, saying python 3.4 is not installed.
Has anyone solved this issue before? I'ts driving me nuts... | 26 | 2 | 0.033321 | 0 | false | 30,468,759 | 0 | 35,704 | 7 | 0 | 0 | 30,467,495 | From my experience if you have both Py2.7 and Py3.4 installed when installing the mysql connector for py3.4 you will run into this issue. Not sure of the WHY but for some reason if you have py2.7 installed, the py3.4 mysql connector recognizes that version first and just assumes that you have py2.7 installed and does n... | 1 | 0 | 0 | mysql installer fails to recognize python 3.4 | 12 | mysql,python-3.x,installation | 0 | 2015-05-26T19:42:00.000 |
I'm trying to istall mysql server on a windows 7 machine - that has python 3.4.3 installed. However, when trying to install the python connectors for 3.4, the installer fails to recognize the python installation, saying python 3.4 is not installed.
Has anyone solved this issue before? I'ts driving me nuts... | 26 | 0 | 0 | 0 | false | 53,574,430 | 0 | 35,704 | 7 | 0 | 0 | 30,467,495 | I had this problem until I discovered I had installed python based in another architecture (32b). MySQL required 64 bit. | 1 | 0 | 0 | mysql installer fails to recognize python 3.4 | 12 | mysql,python-3.x,installation | 0 | 2015-05-26T19:42:00.000 |
I'm trying to istall mysql server on a windows 7 machine - that has python 3.4.3 installed. However, when trying to install the python connectors for 3.4, the installer fails to recognize the python installation, saying python 3.4 is not installed.
Has anyone solved this issue before? I'ts driving me nuts... | 26 | 1 | 0.016665 | 0 | false | 50,449,146 | 0 | 35,704 | 7 | 0 | 0 | 30,467,495 | I was looking for an similar answer. The correct answer is that there is a bug in the mysqlconnector MSI. When python installs, it creates a registry entry under HKLM Software\Python\PythonCore\3.6-32\InstallPath however, the MSI for mysqlconnector is looking for installation path in the registry Software\Python\Pyt... | 1 | 0 | 0 | mysql installer fails to recognize python 3.4 | 12 | mysql,python-3.x,installation | 0 | 2015-05-26T19:42:00.000 |
I'm trying to istall mysql server on a windows 7 machine - that has python 3.4.3 installed. However, when trying to install the python connectors for 3.4, the installer fails to recognize the python installation, saying python 3.4 is not installed.
Has anyone solved this issue before? I'ts driving me nuts... | 26 | 0 | 0 | 0 | false | 61,068,804 | 0 | 35,704 | 7 | 0 | 0 | 30,467,495 | Here is a much simpler work around:
pip install mysql-connector-python
Is the same package that MySQL is having trouble installing. Just use pip to install it.
Next, go back to the installation style and select "Manual" instead of "Developer". They are identical, but "Manual" allows you to remove packages. Just remove ... | 1 | 0 | 0 | mysql installer fails to recognize python 3.4 | 12 | mysql,python-3.x,installation | 0 | 2015-05-26T19:42:00.000 |
I'm not sure if this has been answered before, I didn't get anything on a quick search.
My table is built in a random order, but thereafter it is modified very rarely. I do frequent selects from the table and in each select I need to order the query by the same column. Now is there a way to sort a table permanently by ... | 2 | 3 | 1.2 | 0 | true | 30,504,339 | 0 | 344 | 1 | 0 | 0 | 30,503,358 | You can add an index sorted by the column you want. The data will be presorted according to that index. | 1 | 0 | 0 | SQLAlchemy: how can I order a table by a column permanently? | 2 | python,sqlalchemy | 0 | 2015-05-28T10:04:00.000 |
I have two models; when I do request.POST.get('room_id') or ('id') I'm getting the error Room matching query does not exist.
How can I solve this problem? Please help.
class Room(models.Model):
status = models.BooleanField('Status',default=True)
name = models.CharField('Name', max_length=100, unique=True... | 1 | 3 | 0.197375 | 0 | false | 30,517,111 | 1 | 1,484 | 1 | 0 | 0 | 30,517,002 | You are looking into request.POST, even if the request.method is not equal to 'POST'. This will not work, because when the request is not an HTTP-post, the POST-member of your request is empty. | 1 | 0 | 0 | django models request get id error Room matching query does not exist | 3 | django,python-3.x,django-queryset,models | 0 | 2015-05-28T20:59:00.000 |
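A minimal sketch of the fix the answer points at — only read request.POST when the request actually is a POST (view and template names are hypothetical):

```python
from django.shortcuts import get_object_or_404, render

def book_room(request):
    if request.method == "POST":   # request.POST is empty on GET requests
        room = get_object_or_404(Room, pk=request.POST.get("room_id"))
        # ... work with the room here ...
    return render(request, "rooms.html")
```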
First, the server setup:
nginx frontend to the world
gunicorn running a Flask app with gevent workers
Postgres database, connection pooled in the app, running from Amazon RDS, connected with psycopg2 patched to work with gevent
The problem I'm encountering is inexplicably slow queries that are sometimes running on th... | 0 | 0 | 0 | 0 | false | 30,519,353 | 1 | 1,181 | 1 | 0 | 0 | 30,519,299 | You could try this from within psql to get more details on query timing
EXPLAIN sql_statement
Also turn on more database logging. mysql has slow query analysis, maybe PostgreSQL has an equivalent. | 1 | 0 | 0 | Inconsistently slow queries in production (RDS) | 2 | python,postgresql,amazon-rds,gevent | 0 | 2015-05-29T00:34:00.000 |
I can do http://127.0.0.1:5000/people?where={"lastname":"like(\"Smi%\")"} to get people.lastname LIKE "Smi%"
How do I concat two conditions, like where city=XX and pop<1000 ? | 1 | 2 | 1.2 | 0 | true | 30,680,531 | 0 | 113 | 1 | 0 | 0 | 30,672,259 | It's quite simple you just do:
http://127.0.0.1:5000/people?where={"city":"XX", "pop":"<1000"} | 1 | 0 | 0 | Eve SQLAlchemy query catenation | 1 | python,sqlalchemy,eve | 0 | 2015-06-05T17:15:00.000 |
I'm working on a project where I have to store about 17 million 128-dimensional integer arrays e.g [1, 2, 1, 0, ..., 2, 6, 4] and I'm trying to figure out what's the best way to do it.
The perfect solution would be one that makes it fast to both store and retrieve the arrays, since I need to access ALL of them to make... | 2 | 0 | 0 | 1 | false | 30,684,782 | 0 | 1,207 | 1 | 0 | 0 | 30,682,311 | It seems not so big with numpy arrays, if your integers are 8 bits. a=numpy.ones((17e6,128),uint8) is created in less than a second on my computer. but ones((17e6,128),uint16) is difficult, and ones((17e6,128),uint64) crashed. | 1 | 0 | 1 | Fastest way to store and retrieve arrays | 2 | python,sql,arrays,database,nosql | 0 | 2015-06-06T11:32:00.000 |
My use case is simple, i have performed some kind of operation on image and the resulting feature vector is a numpy object of shape rowX1000(what i mean to say is that the row number can be variable but column number is always 1000)
I want to store this numpy array in mysql. No kind of operation is to be performed on t... | 9 | 8 | 1.2 | 1 | true | 30,713,767 | 0 | 7,366 | 1 | 0 | 0 | 30,713,062 | You could use ndarray.dumps() to pickle it to a string then write it to a BLOB field? Recover it using numpy.loads() | 1 | 0 | 0 | store numpy array in mysql | 2 | python,mysql,arrays,numpy | 0 | 2015-06-08T15:20:00.000 |
I have tried importing a CSV file using bulk insert but it failed. Is there another way, in a query, to import a CSV file without using bulk insert?
so far this is my query but it use bulk insert :
bulk insert [dbo].[TEMP] from
'C:\Inetpub\vhosts\topimerah.org\httpdocs\SNASPV0374280960.txt' with
(firstrow=2,fieldterminato... | 0 | 0 | 1.2 | 1 | true | 30,724,975 | 0 | 777 | 1 | 0 | 0 | 30,724,143 | My answer is to work with bulk-insert.
1. Make sure you have bulk-admin permission on the server.
2. Use a SQL authentication login for the bulk-insert operation (for me, Windows authentication logins haven't worked most of the time). | 1 | 0 | 0 | how to import file csv without using bulk insert query? | 1 | python,mysql,sql,sql-server,csv | 0 | 2015-06-09T06:05:00.000 |
I have a couple thousand lines of data in excel. In one column, however, only every fifth line is filled. What I'm trying to do is fill in the four empty lines below each filled line with the data from the line above. I have a beginner's grasp of python, so if someone could steer me in the right direction, it would be ... | 0 | 2 | 0.132549 | 0 | false | 30,811,295 | 0 | 1,002 | 1 | 0 | 0 | 30,810,963 | Based on your description, this seems easy enough to do in Excel:
Assume row 1 contains column headers, and data begin in row 2. If column A contains your values (starting in A2), in cell B2 use the formula =IF(ISBLANK(A2), B1, A2) and fill down. This formula will return the value of A2 if it is not blank, and will re... | 1 | 0 | 0 | Filling in missing data in excel | 3 | python,excel | 0 | 2015-06-12T19:39:00.000 |
What is the convention/best practices for naming database tables in Django... using the default database naming scheme (appname_classname) or creating your own table name (using your own naming conventions) with the meta class? | 0 | 3 | 0.53705 | 0 | false | 30,872,816 | 1 | 1,656 | 1 | 0 | 0 | 30,872,599 | The default convention is better and cleaner to use :
It avoids any table naming conflict ( As It's a combination of App name and Model name)
It creates well organized database (Tables are ordered by App names)
So until you have any special case that needs special naming convention , use the default. | 1 | 0 | 0 | Django database table naming convention | 1 | python,django,database,naming-conventions | 0 | 2015-06-16T15:57:00.000 |
If I ftp into a database and use pandas.read_sql to read in a huge file, what data type would the variable set equal to this be? And, if applicable, what kind of format would it be in? What object type is a pandas data frame? | 0 | 1 | 1.2 | 0 | true | 30,881,760 | 0 | 160 | 2 | 0 | 0 | 30,881,489 | Variable = ?
The variable set would be equal to a pandas.core.frame.DataFrame object.
Format?
The pandas.core.frame.DataFrame format is a collection of numpy ndarrays, dicts, series, arrays or list-like structures that make up a 2 dimensional (typically) tabular data structure.
Pandas Object Type?
A pandas.core.frame.... | 1 | 0 | 0 | Data type using Pandas | 2 | python,pandas | 0 | 2015-06-17T02:46:00.000 |
If I ftp into a database and use pandas.read_sql to read in a huge file, what data type would the variable set equal to this be? And, if applicable, what kind of format would it be in? What object type is a pandas data frame? | 0 | 0 | 0 | 0 | false | 30,881,700 | 0 | 160 | 2 | 0 | 0 | 30,881,489 | The function pandas.read_sql returns a DataFrame.
The type of a DataFrame in pandas is pandas.core.frame.DataFrame. | 1 | 0 | 0 | Data type using Pandas | 2 | python,pandas | 0 | 2015-06-17T02:46:00.000 |
I'm using Python 3.4. I have a binary column in a my postgresql database with some files and I need to retrieve it from the database and read it... the problem is that for this to work, I first have to (1) open a new file in the filesystem with 'wb', (2) write the contents of the binary column and then (3) read() the f... | 0 | 0 | 0 | 0 | false | 30,922,498 | 0 | 176 | 1 | 0 | 0 | 30,920,656 | Answering my own question: bytes(file) | 1 | 0 | 0 | Reading a file from database binary column (postgresql) in memory without having to save and open the file in the filesystem | 1 | python,database,python-3.x,io | 0 | 2015-06-18T16:16:00.000 |
I have a txt file with about 100 million records (numbers). I am reading this file in Python and inserting it to MySQL database using simple insert statement from python. But its taking very long and looks like the script wouldn't ever finish. What would be the optimal way to carry out this process ? The script is usin... | 4 | 1 | 0.066568 | 0 | false | 53,005,996 | 0 | 13,099 | 1 | 0 | 0 | 30,928,713 | Having tried to do this recently, I found a fast method, but this may be because I'm using an AWS Windows server to run python from that has a fast connection to the database. However, instead of 1 million rows in one file, it was multiple files that added up to 1 million rows. It's faster than other direct DB method... | 1 | 0 | 0 | Inserting millions of records into MySQL database using Python | 3 | python,mysql,insert,sql-insert,large-data | 0 | 2015-06-19T01:55:00.000 |
I am writing a web tool using Python and Pyramid. It access a MySQL database using MySQLdb and does queries based on user input. I created a user account for the tool and granted it read access on the tables it uses.
It works fine when I open the page in a single tab, but if I try loading it in second tab the page won... | 0 | 1 | 1.2 | 0 | true | 30,969,950 | 0 | 69 | 1 | 0 | 0 | 30,948,885 | What @AlexIvanov is trying to say is that when you're starting your Pyramid app in console it is served using Pyramid's built-in development server. This server is single-threaded and serves requests one after another, so if you have a long request which takes, say, 15 seconds - you won't be able to use your app in ano... | 1 | 0 | 0 | Accessing MySQL from multiple views of a web site | 1 | python,mysql,pyramid,mysql-python | 0 | 2015-06-19T23:56:00.000 |
I have a web application (based on Django 1.5) wherein a user uploads a spreadsheet file.
I've been using xlrd for manipulating xls files and looked into openpyxl which claims to support xlsx/xlsm files.
So is there a common way to read/write both xls and xlsx files?
Another option could be to convert the uploaded f... | 0 | 1 | 0.197375 | 0 | false | 30,974,768 | 0 | 1,651 | 1 | 0 | 0 | 30,974,575 | xlrd can read both xlsx and xls files, so it's probably simplest to use that. Support for xlsx isn't as extensive as openpyxl's, but should be sufficient.
There's a risk of losing information in converting xlsx to xls because xlsx files can be much larger. | 1 | 0 | 0 | How do I read/write both xlsx and xls files in Python? | 1 | python,xlrd,openpyxl | 0 | 2015-06-22T07:48:00.000 |
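A minimal sketch of the single-code-path approach with xlrd (versions before 2.0 read both formats); the filename is a placeholder:

    import xlrd

    book = xlrd.open_workbook("upload.xlsx")  # works for .xls too (xlrd < 2.0)
    sheet = book.sheet_by_index(0)
    for row_idx in range(sheet.nrows):
        print(sheet.row_values(row_idx))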
I'm using openpyxl to write to an existing file and everything works fine. However, after the data is saved to the file, the graphs disappear.
I understand openpyxl currently supports chart creation within a worksheet only; charts in existing workbooks will be lost.
Are there any alternate libraries in Python to achie... | 1 | 0 | 1.2 | 0 | true | 31,022,634 | 0 | 2,336 | 1 | 0 | 0 | 31,020,766 | This is currently (version 2.2) not possible. | 1 | 0 | 0 | Graphs lost while overwriting to existing excel file in Python | 2 | python,excel,openpyxl | 0 | 2015-06-24T07:49:00.000 |
Is this a known limitation that will be addressed at some point, or is this just something that I need to accept?
If this is not possible with xlwings, I wonder if any of the other alternatives out there supports connecting to other instances.
I'm specifically talking about the scenario where you are calling python fro... | 1 | 1 | 0.197375 | 0 | false | 31,110,844 | 0 | 1,096 | 1 | 0 | 0 | 31,106,542 | Ok, based on your comments I think I can answer your question: Actually, yes, xlwings can handle various instances. But workbooks from untrusted locations (like downloaded from the internet or sometimes on shared network drives) don't play nicely.
So in your case you could try to add the network location to File > Opti... | 1 | 0 | 0 | Does xlwings only work with the first instance of Excel? | 1 | python,xlwings | 0 | 2015-06-29T01:26:00.000 |
I have limited experience with Jira and Jira query language.
This is regarding JIRA Query language. I have a set of 124 rows (issues) in Jira that are under a certain 'Label' say 'myLabel'.
I need to extract columns col1, col2 and col5 for all of the above 124 rows where the Label field is 'myLabel'.
Once I have the a... | 1 | 1 | 0.099668 | 0 | false | 31,676,903 | 0 | 1,804 | 1 | 0 | 0 | 31,123,357 | One way of doing this stuff what you want to do is with Excel directly.
0. create the filter in JIRA
1. create an Excel VBA script that opens the filter exported from JIRA. In order to do that, copy the link from JIRA -> Export -> Excel (current fields).
2. Most probably you will have to log... | 1 | 0 | 0 | JQL to retrieve specific columns for a certain label | 2 | jira,jira-plugin,jql,python-jira | 0 | 2015-06-29T18:54:00.000
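Since the question is tagged python-jira, here is a hedged sketch of the same extraction done in Python with the jira package instead of Excel/VBA; the server URL, credentials and field names are assumptions:

    from jira import JIRA

    jira = JIRA(server="https://jira.example.com", basic_auth=("user", "password"))
    issues = jira.search_issues("labels = myLabel", maxResults=200,
                                fields="summary,status,assignee")
    for issue in issues:
        print(issue.key, issue.fields.summary, issue.fields.status)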
I am interested in using a cursor to duplicate a database from one mongod to another. I want to limit the amount of insert requests sent so instead of inserting each document in the cursor individually I want to do an insert_many of each cursor batch.
Is there a way to do this in pymongo/python?
I have tried converti... | 1 | 0 | 1.2 | 0 | true | 31,413,796 | 0 | 1,052 | 1 | 0 | 0 | 31,123,896 | So far this has been my "slice/batch" solution and it has been much more effective than individually iterating each document from the cursor:
Keep note of the _id field of the last document you have grabbed
Open a cursor with the query "greater than the _id of the last doc" and with a limit of whatever your batch_size is
Now you sho... | 1 | 0 | 0 | How to insertmany from a cursor using Pymongo? | 2 | python,mongodb,cursor,pymongo | 0 | 2015-06-29T19:24:00.000 |
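A hedged sketch of that slice/batch loop with pymongo; the hosts and collection names are made up:

    from pymongo import MongoClient, ASCENDING

    src = MongoClient("mongodb://source-host")["mydb"]["mycoll"]  # hypothetical
    dst = MongoClient("mongodb://dest-host")["mydb"]["mycoll"]    # hypothetical

    batch_size = 1000
    last_id = None
    while True:
        query = {"_id": {"$gt": last_id}} if last_id is not None else {}
        batch = list(src.find(query).sort("_id", ASCENDING).limit(batch_size))
        if not batch:
            break
        dst.insert_many(batch)        # one insert request per batch
        last_id = batch[-1]["_id"]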
I'm hoping to be pointed in the right direction as far as what tools to use while in the process of developing an application that runs on two servers per client.
[Main Server][Client db Server]
Each client has their own server which has a django application managing their respective data, in addition to serving as a ... | 0 | 1 | 1.2 | 0 | true | 31,227,735 | 1 | 843 | 1 | 0 | 0 | 31,226,223 | You could use different settings files, let's say settings_client_1.py and settings_client_2.py, import common settings from a common settings.py file to keep it DRY. Then add respective database settings.
Do the same with wsgi files: create one for each settings file, say wsgi_c1.py and wsgi_c2.py
Then, in your web serve... | 1 | 0 | 0 | Effectively communicating between two Django applications on two servers (Multitenancy) | 1 | python,django,web-deployment,multi-tenant,saas | 0 | 2015-07-05T00:30:00.000 |
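A minimal sketch of one per-client settings file, assuming a shared settings.py and hypothetical database credentials; each wsgi_cN.py then just points DJANGO_SETTINGS_MODULE at the matching module:

    # settings_client_1.py
    from settings import *  # pull in the common settings

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": "client1_db",
            "USER": "client1",
            "PASSWORD": "secret",
            "HOST": "localhost",
        }
    }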
I've been trying to write a script to copy formatting from one workbook to another and, as anyone dealing with openpyxl knows, it's a big script. I've gotten it to work pretty well, but one thing I can't seem to figure out is how to read from the original if columns are hidden.
Can anyone tell me where to look in a wo... | 5 | 3 | 1.2 | 0 | true | 31,262,488 | 0 | 5,323 | 1 | 0 | 0 | 31,257,353 | Worksheets have row_dimensions and column_dimensions objects which contain information about particular rows or columns, such as whether they are hidden or not. Column dimensions can also be grouped so you'll need to take that into consideration when looking. | 1 | 0 | 0 | Finding hidden cells using openpyxl | 2 | python,excel,openpyxl | 0 | 2015-07-06T23:26:00.000 |
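A minimal sketch of checking those dimension objects; the filename is a placeholder:

    from openpyxl import load_workbook

    wb = load_workbook("source.xlsx")
    ws = wb.active
    for letter, dim in ws.column_dimensions.items():
        if dim.hidden:
            print("column", letter, "is hidden")
    for idx, dim in ws.row_dimensions.items():
        if dim.hidden:
            print("row", idx, "is hidden")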
Let's suppose we have a single host where there is a Web Server and a Database Server.
An external application sends an http request to the web server to access the database.
The data access logic is implemented, for example, by a Python API.
The web server takes the request and the Python application calls the method to con... | 0 | 1 | 1.2 | 0 | true | 31,274,653 | 0 | 26 | 1 | 0 | 0 | 31,274,509 | Yes: since the Python application lives inside the web server process, that process establishes the connection with the database server. | 1 | 0 | 0 | Which process establishes the connection with the database server? | 1 | python,webserver,database-connection,data-access-layer,database-server | 0 | 2015-07-07T16:37:00.000
I'm making an application that will fetch data from an (external) PostgreSQL database with multiple tables.
Any idea how I can use inspectdb only on a SINGLE table? (I only need that table)
Also, the data in the database would be changing continuously. How do I manage that? Do I have to continuously run inspectdb? But... | 0 | 0 | 1.2 | 0 | true | 31,309,910 | 1 | 78 | 1 | 0 | 0 | 31,295,352 | I think you have misunderstood what inspectdb does. It creates a model for an existing database table. It doesn't copy or replicate that table; it simply allows Django to talk to that table, exactly as it talks to any other table. There's no copying or auto-fetching of data; the data stays where it is, and Django reads...
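For the single-table part of the question: newer Django versions let you pass table names to inspectdb so that only that table is introspected (the table and app names below are placeholders):

    python manage.py inspectdb my_table > myapp/models.py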
I'd like people's views on the current design I'm considering for a tornado app. Although I'm using MongoDB to store permanent information, I currently have the session information as a python data structure that I've simply added within the Application object at initialisation.
I will need to perform some iteration and ma... | 0 | 2 | 1.2 | 0 | true | 31,311,950 | 0 | 120 | 1 | 0 | 0 | 31,311,620 | If you store session data in Python your apllication will:
lose it if you stop the Python process;
likely consume more memory, as Python isn't very efficient in memory management (and you will have to store all the sessions in memory, not just the ones you need right now).
If these are not problems for you, you can go with... | 1 | 0 | 0 | Tornado Application design | 1 | python,mongodb,tornado,tornado-motor | 0 | 2015-07-09T08:07:00.000
I want to enter data into a Microsoft Excel Spreadsheet, and for that data to interact and write itself to other documents and webforms.
With success, I am pulling data from an Excel spreadsheet using xlwings. Right now, I’m stuck working with .docx files. The goal here is to write the Excel data into specific part... | 2 | 1 | 0.197375 | 0 | false | 31,349,163 | 0 | 581 | 1 | 0 | 0 | 31,346,625 | You probably need to be more specific, but the short answer is, in principle, yes.
At a certain level, all python-docx does is modify strings in the XML. A couple things though:
The XML you create needs to remain well-formed and valid according to the schema. So if you change the text enclosed in a <w:t> element, for ... | 1 | 0 | 0 | Can you modify only a text string in an XML file and still maintain integrity and functionality of .docx encasement? | 1 | python,xml,lxml,docx,python-docx | 0 | 2015-07-10T17:13:00.000 |
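A hedged sketch of the run-level text replacement python-docx supports, which keeps the XML well-formed because only the string inside each <w:t> element changes; the filenames and placeholder token are assumptions:

    from docx import Document

    doc = Document("template.docx")
    for paragraph in doc.paragraphs:
        for run in paragraph.runs:
            if "PLACEHOLDER" in run.text:
                run.text = run.text.replace("PLACEHOLDER", "value from Excel")
    doc.save("filled.docx")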
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error? | 83 | 0 | 0 | 0 | false | 61,108,863 | 1 | 62,898 | 4 | 0 | 0 | 31,353,137 | RedHat/CentOS:
dnf install -y unixODBC-devel
along with the unixODBC installation itself
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error? | 83 | 1 | 0.028564 | 0 | false | 59,790,771 | 1 | 62,898 | 4 | 0 | 0 | 31,353,137 | I recently saw this error in Heroku. To fix this problem I took the following steps:
Add an Aptfile to the root folder, with the following:
unixodbc
unixodbc-dev
python-pyodbc
libsqliteodbc
Commit that
Run heroku buildpacks:clear
Run heroku buildpacks:add --index 1 heroku-community/apt
Push to Heroku
For me the problem... | 1 | 0 | 0 | sql.h not found when installing PyODBC on Heroku | 7 | python,heroku,pyodbc | 0 | 2015-07-11T03:31:00.000 |
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error? | 83 | 1 | 0.028564 | 0 | false | 47,557,567 | 1 | 62,898 | 4 | 0 | 0 | 31,353,137 | The other answers are more or less correct; you're missing the unixodbc-dev[el] package for your operating system; that's what pip needs in order to build pyodbc from source.
However, a much easier option is to install pyodbc via the system package manager. On Debian/Ubuntu, for example, that would be apt-get install p... | 1 | 0 | 0 | sql.h not found when installing PyODBC on Heroku | 7 | python,heroku,pyodbc | 0 | 2015-07-11T03:31:00.000 |
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error? | 83 | 8 | 1 | 0 | false | 31,358,757 | 1 | 62,898 | 4 | 0 | 0 | 31,353,137 | You need the unixODBC devel package. I don't know what distro you are using but you can google it and build from source. | 1 | 0 | 0 | sql.h not found when installing PyODBC on Heroku | 7 | python,heroku,pyodbc | 0 | 2015-07-11T03:31:00.000 |
We have ticket software to manage our work; every ticket is assigned to a tech in one field (the normal stuff), but now we want to assign the same ticket to several technicians, e.g. ticket 5432: tech_id(2,4,7) where 2, 4, 7 are tech IDs.
Of course we can do that using a separate table with the IDs of the tech and the tick... | 0 | 0 | 0 | 0 | false | 31,371,403 | 0 | 136 | 1 | 0 | 0 | 31,369,558 | The "right" way to do this is to have a separate table of ticket assignments.
Converting the data for something like this is fairly simple on the database end: create table assign as select tech_id from ... followed by creating any necessary foreign key constraints.
Rewriting your interface code can be trickier, but y... | 1 | 0 | 0 | Is there any variable in PostgreSQL to store a list | 1 | python,postgresql | 0 | 2015-07-12T15:40:00.000
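A hedged sketch of that junction-table design with psycopg2, using the ticket 5432 / techs 2, 4, 7 example from the question; the schema and names are assumptions:

    import psycopg2

    conn = psycopg2.connect("dbname=tickets")  # hypothetical DSN
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE ticket_tech (
            ticket_id integer NOT NULL,
            tech_id   integer NOT NULL,
            PRIMARY KEY (ticket_id, tech_id)
        )
    """)
    cur.executemany("INSERT INTO ticket_tech (ticket_id, tech_id) VALUES (%s, %s)",
                    [(5432, 2), (5432, 4), (5432, 7)])
    conn.commit()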
I am using Spark 1.3.1 (PySpark) and I have generated a table using a SQL query. I now have an object that is a DataFrame. I want to export this DataFrame object (I have called it "table") to a csv file so I can manipulate it and plot the columns. How do I export the DataFrame "table" to a csv file?
Thanks! | 106 | 0 | 0 | 1 | false | 69,462,087 | 0 | 340,481 | 1 | 0 | 0 | 31,385,363 | try display(df) and use the download option in the results (this works in Databricks notebooks). Please note: only 1 million rows can be downloaded with this option, but it's really quick. | 1 | 0 | 0 | How to export a table dataframe in PySpark to csv? | 9 | python,apache-spark,dataframe,apache-spark-sql,export-to-csv | 0 | 2015-07-13T13:56:00.000
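Outside of a Databricks notebook, a common alternative on Spark 1.3 is to collect to pandas first; this sketch assumes the DataFrame fits in driver memory:

    pdf = table.toPandas()                 # "table" is the DataFrame from the question
    pdf.to_csv("table.csv", index=False)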
I have a fairly large redshift table with around 200 million records. I would like to update the values in one of the columns using a user-defined python function. If I run the function in an EC2 instance, it results in millions of updates to the table, and it is very slow. Is there a better process for me to speed ... | 0 | 0 | 0 | 0 | false | 31,411,856 | 0 | 56 | 1 | 0 | 0 | 31,388,220 | Unlike row-based systems, which are ideal for transaction processing, column-based systems (Redshift) are ideal for data warehousing and analytics, where queries often involve aggregates performed over large data sets. Since only the columns involved in the queries are processed and columnar data is stored sequentially... | 1 | 0 | 0 | How to increase performance of large number of updates to a redshift table with python functions | 1 | python,amazon-web-services | 0 | 2015-07-13T16:05:00.000 |
Everything I found about this via searching was either wrong or incomplete in some way. So, how do I:
delete everything in my postgresql database
delete all my alembic revisions
make it so that my database is 100% like new | 7 | 3 | 0.197375 | 0 | false | 31,392,595 | 0 | 7,991 | 1 | 0 | 0 | 31,392,285 | This works for me:
1) Access your metadata: in the same way you called Base.metadata.create_all(), call Base.metadata.drop_all().
2) Delete the migration files generated by alembic.
3) Run Base.metadata.create_all() and the initial migration generation again.
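A hedged sketch of steps 1 and 3; the module holding Base and the database URL are assumptions:

    from sqlalchemy import create_engine
    from myapp.models import Base  # hypothetical module with the declarative Base

    engine = create_engine("postgresql://user:pass@localhost/mydb")  # hypothetical URL
    Base.metadata.drop_all(engine)                           # step 1: drop all model tables
    engine.execute("DROP TABLE IF EXISTS alembic_version")   # clear alembic's bookkeeping too
    # step 2: delete the files under alembic/versions/ by hand
    Base.metadata.create_all(engine)                         # step 3: recreate from scratch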
I have a 2D array, M (390x420), with float values in it that I would like to save as a table in a sqlite db with python. The row number of the table should be 390, the column number 420.
executemany from sqlite is not optimal because then I would have to write ~420 "?" placeholders, as far as I've understood.
Thank you! | 0 | 0 | 0 | 0 | false | 48,332,928 | 0 | 1,249 | 1 | 0 | 0 | 31,426,367 | As CL recommended, don't use 420 columns. I would recommend an algorithmic approach to save much processing power. Here is an example: since the size is always 390x420, have a table with 10 columns and 16380 rows. Referencing any point on this matrix can be done with a simple algorithm, and would be much more efficie... | 1 | 0 | 0 | save a matrix (or a 2 dimensional array) in a sqlite db with Python | 1 | python,sqlite,matrix | 0 | 2015-07-15T09:20:00.000
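As an alternative to the 10-column layout, here is a hedged sketch that stores one (row, col, value) triple per cell and feeds executemany a generator, so only three "?" placeholders are needed:

    import sqlite3

    M = [[0.0] * 420 for _ in range(390)]  # stand-in for the real 390x420 data
    conn = sqlite3.connect("matrix.db")    # hypothetical filename
    conn.execute("CREATE TABLE matrix (row_num INTEGER, col_num INTEGER, value REAL, "
                 "PRIMARY KEY (row_num, col_num))")
    conn.executemany(
        "INSERT INTO matrix VALUES (?, ?, ?)",
        ((i, j, float(M[i][j])) for i in range(390) for j in range(420)),
    )
    conn.commit()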
Each module in Odoo has a table in the database.
I'd like to know if I can create two tables in the odoo database for one module. | 0 | 0 | 1.2 | 0 | true | 31,456,810 | 1 | 592 | 1 | 0 | 0 | 31,456,406 | Yes you can: for every class new_class(... with a unique _name = "new.class", a table is created in the database. If you want more than one table, you need to create more than one class in your .py file
For more reference, look at the account module: in account_invoice.py you have class account_invoice(models.Model): _name =... | 1 | 0 | 0 | Is there any way to create two tables in the database for one odoo module? | 1 | python-2.7,openerp,odoo | 0 | 2015-07-16T14:00:00.000
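A minimal sketch of two models in one module, Odoo 8 style; all names are made up:

    from openerp import models, fields

    class FirstThing(models.Model):
        _name = "my.module.first"    # backed by table my_module_first
        name = fields.Char()

    class SecondThing(models.Model):
        _name = "my.module.second"   # backed by table my_module_second
        value = fields.Float()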
I am running a migrate script in postgres, and at the top of one of the files I have from sqlalchemy import *
In the file I create tables with entries such as
Column('tmp1', DOUBLE_PRECISION(precision=53))
However, when I run the script I get the error:
name 'DOUBLE_PRECISION' is not defined
Why is this? | 0 | 0 | 0 | 0 | false | 34,866,070 | 0 | 622 | 1 | 0 | 0 | 31,459,477 | First off, I'd advise against doing 'from sqlalchemy import *': it can bring unknown things into your namespace that are quite difficult to debug.
Second, the top-level sqlalchemy namespace simply doesn't export a 'DOUBLE_PRECISION' column type. So the reason it says it's not defined is that 'from sqlalchemy import *' does not bring any such name in. Perha... | 1 | 0 | 0 | name 'DOUBLE_PRECISION' is not defined - PostgreSQL - SQLAlchemy | 1 | python,postgresql,sqlalchemy | 0 | 2015-07-16T16:15:00.000
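For completeness, the type does exist, but in the PostgreSQL dialect module, which a top-level star import never pulls in:

    from sqlalchemy import Column
    from sqlalchemy.dialects.postgresql import DOUBLE_PRECISION

    Column('tmp1', DOUBLE_PRECISION(precision=53))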
I am using SQLAlchemy and am trying to update a boolean column value. I have the following command:
sess.query(Testing).filter(Testing.id == id).update({Testing.state: True})
I do not seem to get any errors, however, when I go to the database, nothing changes. Have I implemented something incorrectly with the command? | 0 | 0 | 0 | 0 | false | 31,517,891 | 0 | 277 | 1 | 0 | 0 | 31,517,753 | I simply left out sess.commit() as the next line of code. | 1 | 0 | 0 | SQLAlchemy Update Command | 1 | python,sql | 0 | 2015-07-20T13:25:00.000 |
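In full, the fix is one extra line:

    sess.query(Testing).filter(Testing.id == id).update({Testing.state: True})
    sess.commit()  # without this the UPDATE is never persisted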