Question stringlengths 25 7.47k | Q_Score int64 0 1.24k | Users Score int64 -10 494 | Score float64 -1 1.2 | Data Science and Machine Learning int64 0 1 | is_accepted bool 2
classes | A_Id int64 39.3k 72.5M | Web Development int64 0 1 | ViewCount int64 15 1.37M | Available Count int64 1 9 | System Administration and DevOps int64 0 1 | Networking and APIs int64 0 1 | Q_Id int64 39.1k 48M | Answer stringlengths 16 5.07k | Database and SQL int64 1 1 | GUI and Desktop Applications int64 0 1 | Python Basics and Environment int64 0 1 | Title stringlengths 15 148 | AnswerCount int64 1 32 | Tags stringlengths 6 90 | Other int64 0 1 | CreationDate stringlengths 23 23 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I have a Python script (on my local machine) that queries a Postgres database and updates a Google sheet via the Sheets API. I want the Python script to run on opening the sheet. I am aware of Google Apps Script, but not quite sure how I can use it to achieve what I want.
Thanks | 0 | 2 | 0.197375 | 0 | false | 42,327,384 | 0 | 6,619 | 1 | 0 | 0 | 42,218,932 | You will need several changes. First, you need to move the script to the cloud (see Google Compute Engine) and be able to access your databases from there.
Then, from Apps Script, look at the onOpen trigger. From there you can use UrlFetchApp to call your Python server and start the work.
you could also add a custom "refresh" menu... | 1 | 0 | 0 | Running python script from Google Apps script | 2 | python,google-apps-script,google-sheets,google-spreadsheet-api | 0 | 2017-02-14T06:00:00.000 |
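A minimal sketch of the Python side that such an onOpen + UrlFetchApp combination could call, using only the standard library. The run_sync body is a placeholder for the real Postgres-to-Sheets work, and all names here are invented for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_sync():
    # Placeholder: query Postgres and push the results to the sheet here.
    return {"status": "sync started"}

class SyncHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(run_sync()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SyncHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
print("listening on port", server.server_address[1])
```

Apps Script's onOpen handler would then issue a UrlFetchApp.fetch(...) against this endpoint; in practice the server would run somewhere reachable, such as a Compute Engine instance, as the answer suggests.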
Now the question is a little tricky... I have 2 tables that I want to compare for their content. The tables have the same number of columns, the same column names, and the same ordering of columns (if there is such a thing).
Now I want to compare their contents, but the trick is that the ordering of their rows can be different, i.e.,... | 0 | 0 | 0 | 0 | false | 42,227,865 | 0 | 1,752 | 2 | 0 | 0 | 42,227,567 | You'd need to be more precise on how you intend to compare the tables' content and what the expected outcome is. Sqlite3 itself is a good tool for comparison and you can easily query the comparison results you wish to get.
If these tables however are located in different databases, you can dump them into temporary db u... | 1 | 0 | 0 | Comparing two sqlite3 tables using python | 3 | python,database,sqlite | 0 | 2017-02-14T13:34:00.000 |
Now the question is a little tricky... I have 2 tables that I want to compare for their content. The tables have the same number of columns, the same column names, and the same ordering of columns (if there is such a thing).
Now I want to compare their contents, but the trick is that the ordering of their rows can be different, i.e.,... | 0 | 0 | 0 | 0 | false | 42,228,061 | 0 | 1,752 | 2 | 0 | 0 | 42,227,567 | You say "there is no PRIMARY KEY". Does this mean there is truly no way to establish the identity of the item represented by each row? If that is true, your problem is insoluble since you can never determine which row in one table to compare with each row in the other table.
If there is a set of columns that establis... | 1 | 0 | 0 | Comparing two sqlite3 tables using python | 3 | python,database,sqlite | 0 | 2017-02-14T13:34:00.000 |
I am trying to move an Excel sheet say of index 5 to the position of index 0. Right now I have a working solution that copies the entire sheet and writes it into a new sheet created at the index 0, and then deletes the original sheet.
I was wondering if there is another method that could push a sheet of any index to t... | 0 | 0 | 0 | 0 | false | 42,244,164 | 0 | 2,015 | 1 | 0 | 0 | 42,243,861 | Maybe the function from XLRD module can help you
where you can get the sheet contents by index like this:
worksheet = workbook.sheet_by_index(5)
and then you can copy that into some other sheet of a different index, like this:
workbook.sheet_by_index(0) = worksheet | 1 | 0 | 0 | Python - Change the sheet index in excel workbook | 1 | python,excel | 0 | 2017-02-15T08:12:00.000 |
I want to import and use dataset package of python at AWS Lambda. The dataset package is about MySQL connection and executing queries. But, when I try to import it, there is an error.
"libmysqlclient.so.18: cannot open shared object file: No such file or directory"
I think that the problem is because MySQL client packa... | 1 | 0 | 0 | 0 | false | 42,268,813 | 0 | 131 | 1 | 0 | 0 | 42,267,553 | You should install your packages in your lambda folder :
$ pip install YOUR_MODULE -t YOUR_LAMBDA_FOLDER
And then, compress your whole directory in a zip to upload in you lambda. | 1 | 0 | 0 | How to use the package written by another language in AWS Lambda? | 3 | mysql,python-2.7,amazon-web-services,aws-lambda | 1 | 2017-02-16T07:30:00.000 |
I noticed that most examples for accessing mysql from flask suggest using a plugin that calls init_app(app).
I was just wondering why that is as opposed to just using a mysql connector somewhere in your code as you need it?
Is it that Flask does better resource management with request life cycles? | 1 | 3 | 0.53705 | 0 | false | 42,281,576 | 1 | 100 | 1 | 0 | 0 | 42,281,212 | Packages like flask-mysql or Flask-SQLAlchemy provide useful defaults and extra helpers that make it easier to accomplish common CRUD tasks.
All such packages are good at handling relationships between objects. You only need to create the objects, and then the objects contain all the functions and helpers... | 1 | 0 | 0 | accessing mysql from within flask | 1 | python,mysql,flask | 0 | 2017-02-16T17:48:00.000 |
I'm having trouble updating a model in Odoo.
The tables of my module won't change when I make changes to the model, even when I restart the server, upgrade the module, or delete the module and reinstall it.
Is there a way to make the database synchronized with my model? | 0 | 0 | 0 | 0 | false | 42,359,847 | 1 | 1,475 | 2 | 0 | 0 | 42,354,852 | Please check that there is no duplicate folder with the same name in the addons path. Sometimes, if a zip file with the same name exists in the addons path, updates don't take effect. | 1 | 0 | 0 | Updating a module's model in Odoo 10 | 2 | python,ubuntu,module,openerp,odoo-10 | 0 | 2017-02-20T21:50:00.000 |
I'm having trouble updating a model in Odoo.
The tables of my module won't change when I make changes to the model, even when I restart the server, upgrade the module, or delete the module and reinstall it.
Is there a way to make the database synchronized with my model? | 0 | 0 | 0 | 0 | false | 42,359,793 | 1 | 1,475 | 2 | 0 | 0 | 42,354,852 | If you save changes to the module, restart the server, and upgrade the module, all changes should be applied.
Changes to tables (e.g. fields) should only require the module to be upgraded, not a server reboot.
Python changes (e.g. contents of a method) require a server restart, not a module upgrade.
If the changes are... | 1 | 0 | 0 | Updating a module's model in Odoo 10 | 2 | python,ubuntu,module,openerp,odoo-10 | 0 | 2017-02-20T21:50:00.000 |
I want to isolate my LAMP installation in a virtual environment. I tried using VirtualBox, but my 4GB of RAM is not helping. My question is: if I run sudo apt-get install lamp-server^ while in a "venv"... would it install mysql-server, apache2 and PHP into the virtualenv only, or is the installation scope system-wide?
... | 1 | 1 | 0.099668 | 0 | false | 42,356,311 | 1 | 708 | 1 | 0 | 0 | 42,356,276 | Read about Docker if you want to make separate environments without a virtual machine. | 1 | 0 | 0 | Install LAMP Stack into Virtual Environment | 2 | php,python,mysql,virtualenv,lamp | 0 | 2017-02-20T23:53:00.000 |
I'm working on a Project with python and openpyxl.
In an Excel file there are some cells with conditional formatting. These change the fill color when the value changes. I need to extract the color from the cell.
The "normal" method
worksheet["F11"].fill.start_color.index
doesn't work. Excel doesn't interpret the infillco... | 2 | 1 | 1.2 | 0 | true | 42,372,863 | 0 | 496 | 1 | 0 | 0 | 42,372,121 | This isn't possible without you writing some of your own code.
To do this you will have to write code that can evaluate conditional formatting because openpyxl is a library for the file format and not a replacement for an application like Excel. | 1 | 0 | 0 | Python/openpyxl get conditional format | 1 | python-3.x,spyder,openpyxl | 0 | 2017-02-21T15:58:00.000 |
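As a rough illustration of what evaluating conditional formatting yourself could look like, the sketch below applies a simplified "greater than / less than" rule to plain cell values. The rule representation and colors here are invented stand-ins, not openpyxl's actual rule objects:

```python
# Hedged sketch: apply a conditional-format rule ourselves, since openpyxl
# only stores the rule definition; it does not evaluate it like Excel does.
def effective_fill(value, rules, default_fill="FFFFFFFF"):
    """Return the fill color a conditional rule would produce for `value`.

    `rules` is a list of (operator, threshold, color) tuples -- a simplified,
    hypothetical stand-in for the rules a library would parse from the file.
    """
    for operator, threshold, color in rules:
        if operator == "greaterThan" and value > threshold:
            return color
        if operator == "lessThan" and value < threshold:
            return color
    return default_fill

rules = [("greaterThan", 100, "FFFF0000"),  # red above 100
         ("lessThan", 0, "FF0000FF")]       # blue below 0

print(effective_fill(150, rules))  # matching rule wins
print(effective_fill(50, rules))   # no rule matches -> default fill
```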
I have a google drive folder with hundreds of workbooks. I want to cycle through the list and update data. For some reason, gspread can only open certain workbooks but not others. I only recently had this problem.
It's not an access issue because everything is in the same folder.
I get raise SpreadsheetNotFound when ... | 12 | 4 | 0.26052 | 0 | false | 42,867,483 | 0 | 3,398 | 1 | 0 | 0 | 42,382,847 | I've run into this issue repeatedly. The only consistent fix I've found is to "re-share" the file with the api user. It already lists the api user as shared (since it's in the same shared folder as everything else), but after "re-sharing" I can connect with gspread no problem.
Based on this I believe it may actually b... | 1 | 0 | 0 | gspread "SpreadsheetNotFound" on certain workbooks | 3 | python,google-sheets,gspread | 0 | 2017-02-22T04:40:00.000 |
I'm embarking on a software project, and I have a bit of an idea on how to attack it, but would really appreciate some general tips, advice or guidance on getting the task done. Project is as follows:
My company has an ERP (Enterprise Resource Planning) system that we use to record all our business activity (i.e. creat... | 1 | 2 | 0.197375 | 0 | false | 42,415,248 | 0 | 192 | 1 | 0 | 0 | 42,394,615 | Epicor ERP has a powerful extension system built in.
I would create a Business Process Method (BPM) for ReceiptEntry.Update. This wouldn't check for added rows but more specifically where the Recieved flag has been changed to set. This will prevent you getting multiple notifications every time a user saves an incomple... | 1 | 0 | 0 | Project Advice: push ERP/SQL transaction data to Slack | 2 | python,sql,architecture,slack-api,epicorerp | 0 | 2017-02-22T14:43:00.000 |
I generally use Pandas to extract data from MySQL into a dataframe. This works well and allows me to manipulate the data before analysis. This workflow works well for me.
I'm in a situation where I have a large MySQL database (multiple tables that will yield several million rows). I want to extract the data where one o... | 0 | 0 | 0 | 1 | false | 42,406,043 | 0 | 352 | 1 | 0 | 0 | 42,405,493 | I am not familiar with pandas but strictly speaking from a database point of view you could just have your panda values inserted in a PANDA_VALUES table and then join that PANDA_VALUES table with the table(s) you want to grab your data from.
Assuming you will have some indexes in place on both PANDA_VALUES table and th... | 1 | 0 | 0 | Selecting data from large MySQL database where value of one column is found in a large list of values | 1 | python,mysql,sql,python-3.x,pandas | 0 | 2017-02-23T01:36:00.000 |
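The same idea can be sketched with Python's built-in sqlite3 module (the question is about MySQL, but the pattern is identical): load the Python-side values into a temporary table, give it a primary key, and join against it instead of building a huge IN (...) clause. All table and column names below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO big_table VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1000)])

wanted_ids = [3, 42, 999]  # the large Python-side list of values

# Stage the values in an indexed temp table, then JOIN instead of IN (...).
conn.execute("CREATE TEMP TABLE wanted (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted VALUES (?)", [(i,) for i in wanted_ids])

rows = conn.execute("""
    SELECT b.id, b.payload
    FROM big_table b
    JOIN wanted w ON w.id = b.id
    ORDER BY b.id
""").fetchall()
print(rows)  # [(3, 'row3'), (42, 'row42'), (999, 'row999')]
```

With MySQL you would do the same staging with a temporary table and let the optimizer use the index on it.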
I have 4-5 tables of about 3GB in Google BigQuery and I want to export these tables to Postgres. Reading the docs, I found I have to do the following steps.
create a job that will extract data to CSV in the google bucket.
From google storage to local storage.
Parse all CSV to database
So in the above step is there any effi... | 1 | 0 | 0 | 0 | false | 42,454,600 | 0 | 691 | 1 | 0 | 0 | 42,443,016 | Make an example project and see what times you get, if you can accept those times it's too early to optimize. I see all this is possible in about 3-5 minutes if you have 1Gbit internet access and server running on SSD. | 1 | 0 | 0 | Dump Data from bigquery to postgresql | 1 | python,postgresql,google-bigquery | 0 | 2017-02-24T15:58:00.000 |
I have an instance of an object (with many attributes) which I want to duplicate.
I copy it using deepcopy() and then modify a couple of attributes.
Then I save my new object to the database using Python / peewee save(), but the save() actually updates the original object (I assume it is because the id was copied from th... | 2 | 4 | 0.379949 | 0 | false | 42,451,623 | 0 | 1,374 | 1 | 0 | 0 | 42,449,783 | It turns out that I can set the id to None (obj.id = None), which will create a new record when performing save().
I'm starting my Python journey with a particular project in mind;
The title explains what I'm trying to do (make json api calls with python3.6 and sqlite3). I'm working on a mac.
My question is whether or not this setup is possible? Or if I should use MySQL, PostgreSQL or MongoDB?
If it is possible, am I going to have ... | 1 | 1 | 1.2 | 0 | true | 42,489,154 | 0 | 451 | 1 | 0 | 0 | 42,489,060 | Python 3.6 and sqlite both work on a Mac; whether your json api calls will depends on what service you are trying to make calls to (unless you are writing a server that services such calls, in which case you are fine).
Any further recommendations are either a) off topic for SO or b) dependent on what you want to do wit... | 1 | 0 | 1 | Python3 & SQLite3 JSON api calls | 1 | python-3.x,sqlite,json-api | 0 | 2017-02-27T15:06:00.000 |
Could someone give me an example of using whoosh for a sqlite3 database, I want to index my database. Just a simple connect and searching through the database would be great. I searched online and was not able to find an examples for sqlite3. | 3 | 0 | 0 | 0 | false | 51,001,220 | 0 | 466 | 1 | 0 | 0 | 42,493,984 | You need to add a post-save function index_data to your database writers. This post-save should get the data to be written in database, normalize it and index it.
The searcher could be an independent script given an index and queries to be searched for. | 1 | 0 | 0 | Using Whoosh with a SQLITE3.db (Python) | 1 | python,python-2.7,indexing,sqlite,whoosh | 0 | 2017-02-27T19:16:00.000 |
How do I remove and add a completely new db.sqlite3 database to a Django project written in PyCharm?
I did something wrong and I need a completely new database. The 'flush' command just removes data from the database, but it doesn't remove the table schema. So the question is how to get my database back to its beginning point (no data, no s... | 1 | 4 | 1.2 | 0 | true | 42,515,036 | 1 | 1,356 | 1 | 0 | 0 | 42,514,902 | A SQLite database is just a file. To drop the database, simply remove the file.
When using SQLite, python manage.py migrate will automatically create the database if it doesn't exist. | 1 | 0 | 0 | How to remove and add completly new db.sqlite3 to django project written in pycharm? | 1 | python,sql,django | 0 | 2017-02-28T17:14:00.000 |
The ping service that I have in mind allows users to keep easily track of their cloud application (AWS, GCP, Digital Ocean, etc.) up-time.
The part of the application's design that I am having trouble with is how to effectively read a growing/shrinking list of hostnames from a database and ping them every "x" interval... | 1 | 0 | 0 | 0 | false | 42,525,826 | 0 | 45 | 1 | 1 | 0 | 42,524,336 | Let me put it like this. You will have the following 4 statements. In the simplest case, you could keep a table of users and a table of hostnames with the following columns -> FK to users, hostname, last update, and a boolean is_running.
You will need the following actions.
UPDATE:
You will run this... | 1 | 0 | 0 | Designing a pinging service | 1 | python,architecture | 0 | 2017-03-01T06:01:00.000 |
I'm working with sqlalchemy and oracle, but I don't want to store the database password directly in the connection string; how do I store an encrypted password instead? | 2 | 0 | 0 | 0 | false | 70,789,442 | 0 | 3,865 | 1 | 0 | 0 | 42,776,941 | Encrypting the password isn't necessarily very useful, since your code will have to contain the means to decrypt it. Usually what you want to do is to store the credentials separately from the codebase, and have the application read them at runtime. For example*:
read them from a file
read them from command line argum... | 1 | 0 | 0 | How to use encrypted password in connection string of sqlalchemy? | 3 | python,oracle,sqlalchemy | 0 | 2017-03-14T02:53:00.000 |
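A sketch of the "read them from the environment" option for an SQLAlchemy-style URL. The variable names and host are arbitrary; quote_plus handles special characters in the password so the URL stays valid:

```python
import os
from urllib.parse import quote_plus

# In production these would come from the real environment, a secrets
# manager, or a file outside the codebase -- set here only for the demo.
os.environ["DB_USER"] = "scott"
os.environ["DB_PASS"] = "tiger&co"  # note the special character

user = os.environ["DB_USER"]
password = quote_plus(os.environ["DB_PASS"])  # '&' becomes '%26'

url = f"oracle://{user}:{password}@dbhost:1521/mydb"
print(url)  # oracle://scott:tiger%26co@dbhost:1521/mydb
```

The codebase then contains no secret at all; rotating the password means changing the environment, not the code.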
In the AWS CLI we can set the output format as json or table. Now I can get JSON output via json.dumps; is there any way I could achieve output in table format?
I tried prettytable but with no success | 0 | 1 | 0.197375 | 0 | false | 42,952,740 | 0 | 980 | 1 | 0 | 1 | 42,787,327 | Python Boto3 does not return the data in tabular format. You will need to parse the data and use another Python lib to output it in tabular format. prettytable works well for me; read the prettytable docs and debug your code.
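If prettytable keeps misbehaving, a small pure-Python renderer can turn boto3-style output (a list of dicts) into a fixed-width text table. The sample records below are made up; in practice they would be parsed out of a boto3 response:

```python
def to_table(rows):
    """Render a list of dicts as a fixed-width text table."""
    headers = list(rows[0])
    widths = [max(len(h), *(len(str(r[h])) for r in rows)) for h in headers]
    head = " | ".join(h.ljust(w) for h, w in zip(headers, widths))
    sep = "-+-".join("-" * w for w in widths)
    body = [" | ".join(str(r[h]).ljust(w) for h, w in zip(headers, widths))
            for r in rows]
    return "\n".join([head, sep, *body])

instances = [  # e.g. summarized from an ec2.describe_instances() response
    {"InstanceId": "i-0abc", "State": "running"},
    {"InstanceId": "i-0def", "State": "stopped"},
]
print(to_table(instances))
```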
I have an .ods file that contains many links that must be updated automatically. As I understand there is no easy way to do this with macros or libreoffice command arguments, so I am trying to make all links update upon opening the file and then will save the file and exit.
All links are DDE links which should be able... | 0 | 0 | 0 | 0 | false | 42,806,325 | 0 | 986 | 1 | 0 | 0 | 42,788,839 | The API does not provide a method to suppress the prompt upon opening the file!
I've tried running StarBasic code to update DDE links on "document open" event, but the question keeps popping up.
So, I guess you're out of luck: you have to answer "Yes" if you want the actual values.
[posted the comment to OP's question ... | 1 | 0 | 0 | Libreoffice - update links automatically upon opening? | 2 | python,libreoffice,dde | 0 | 2017-03-14T14:34:00.000 |
I'm attempting to access a Google Cloud SQL instance stored on one Cloud Platform project from an App Engine application on another project, and it's not working.
Connections to the SQL instance fail with this error:
OperationalError: (2013, "Lost connection to MySQL server at 'reading initial communication packet', sy... | 3 | 6 | 1 | 0 | false | 42,827,972 | 1 | 3,103 | 1 | 1 | 0 | 42,826,560 | Figured it out eventually - perhaps this will be useful to someone else encountering the same problem.
Problem:
The problem was that the "Cloud SQL Editor" role is not a superset
of the "Cloud SQL Client", as I had imagined; "Cloud SQL Editor"
allows administration of the Cloud SQL instance, but doesn't allow
b... | 1 | 0 | 0 | Can't access Google Cloud SQL instance from different GCP project, despite setting IAM permissions | 1 | python,mysql,google-app-engine,google-cloud-sql | 0 | 2017-03-16T06:14:00.000 |
How to achieve a read-only connection to the secondary nodes of the MongoDB.
I have a primary node and two secondary nodes. I want a read-only connection to secondary nodes.
I tried MongoReplicaSetClient but did not get what I wanted.
Is it possible to have a read-only connection to primary node? | 1 | 1 | 0.099668 | 0 | false | 42,850,060 | 0 | 1,793 | 1 | 0 | 0 | 42,849,056 | Secondaries are read-only by default. However, you can specify the read preference to read from secondaries. By default, it reads from the primary.
This can be achieved using readPreference=secondary in connection string | 1 | 0 | 0 | How to achieve a read only connection using pymongo | 2 | python,mongodb,pymongo | 0 | 2017-03-17T04:06:00.000 |
I understand you can do DELETE FROM table WHERE condition, but I was wondering if there was a more elegant way? Since I'm iterating over every row with c.execute('SELECT * FROM {tn}'.format(tn=table_name1)), the cursor is already on the row I want to delete. | 0 | 2 | 0.379949 | 0 | false | 42,889,578 | 0 | 346 | 1 | 0 | 0 | 42,888,269 | A cursor is a read-only object, and cursor rows are not necessarily related to table rows. So this is not possible.
And you must not change the table while iterating over it.
SQLite computes result rows on demand, so deleting the current row could break the computation of the next row. | 1 | 0 | 0 | While iterating over the rows in an SQLite table, is it possible to delete the cursor's row? | 1 | python,python-3.x,sqlite,sql-delete,delete-row | 0 | 2017-03-19T15:16:00.000 |
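A common workaround, sketched with the built-in sqlite3 module: collect the rowids to delete during a read-only first pass, then delete them after iteration has finished. The table name and condition are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, qty INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [("a", 0), ("b", 5), ("c", 0)])

# Pass 1: iterate read-only and remember which rows to drop.
to_delete = [rowid for rowid, qty in
             conn.execute("SELECT rowid, qty FROM items")
             if qty == 0]

# Pass 2: delete only after iteration is complete.
conn.executemany("DELETE FROM items WHERE rowid = ?",
                 [(r,) for r in to_delete])

print(conn.execute("SELECT name FROM items").fetchall())  # [('b',)]
```

This keeps the table stable while the SELECT cursor is live, which sidesteps the problem the answer describes.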
I need help with the pyodbc Python module. I installed it via Canopy package management, but when I try to import it, I get an error (no module named pyodbc). Why?
Here's the output from my Python interpreter:
import pyodbc
Traceback (most recent call last):
File "", line 1, in
import pyodbc... | 0 | 0 | 0 | 0 | false | 42,963,908 | 0 | 62 | 1 | 0 | 0 | 42,935,980 | For the record: the attempted import was in a different Python installation. It is never good, and usually impossible, to use a package which was installed into one Python installation, in another Python installation. | 1 | 0 | 1 | No Module After Install Package via Canopy Package Management | 1 | python,pyodbc,canopy | 0 | 2017-03-21T19:00:00.000 |
I downloaded the pyodbc module as a zip and installed it manually using the command python setup.py install. Although I can find the folder inside the Python directory which I pasted, while importing I am getting the error:
ImportError: No module named pyodbc
I am trying to use this to connect with MS SQL Server. He... | 1 | 0 | 1.2 | 0 | true | 42,947,078 | 0 | 8,261 | 1 | 0 | 0 | 42,944,116 | As the installation error showed, installing Visual C++ 9.0 solves the problem because setup.py tries to compile some C++ libraries while installing the plugin. I think Cygwin C++ will also work, judging by the contents of setup.py.
I am trying to generate a flask-sqlalchemy for an existing mysql db.
I used the following command
flask-sqlacodegen --outfile rcdb.py mysql://username:password@hostname/tablename
The project uses python 3.4. Any clues?
```Traceback (most recent call last):
File "/var/www/devaccess/py_api/ds/venv/bin/flask-sqlacodegen",... | 1 | 1 | 0.197375 | 0 | false | 49,161,491 | 1 | 362 | 1 | 0 | 0 | 43,008,166 | Try specifying your database schema with the --schema option | 1 | 0 | 0 | flask-sqlacodegen supports python 3.4? | 1 | python-3.x,sqlalchemy,flask-sqlalchemy,sqlacodegen | 0 | 2017-03-24T20:00:00.000 |
What I want is that, when I have looked up a user in a table, I can list all the file URLs that the user has access to. My first thought was to have a field in the table with a list of file URLs. However, I have now understood that there is no such field type.
I was then thinking that maybe ForeignKeys might work... | 0 | 0 | 0 | 0 | false | 43,179,890 | 1 | 25 | 1 | 0 | 0 | 43,066,877 | 2 tables: user and user_uri_permission? 2 columns in the second: userID and URI. When the User-URI pair is in the table the use has access. | 1 | 0 | 0 | Mapping users to all of their files(URLs) in a mysql database. | 2 | mysql,mysql-python | 0 | 2017-03-28T10:18:00.000 |
I have a big-ol' dbm file, that's being created and used by my python program. It saves a good amount of RAM, but it's getting big, and I suspect I'll have to gzip it soon to lower the footprint.
I guess usage will involve un-gzipping it to the disk, using it, and erasing the extracted dbm when I'm done.
I was wonder... | 1 | 0 | 0 | 0 | false | 43,080,032 | 0 | 109 | 1 | 0 | 0 | 43,069,291 | You can gzip the value or use a key/value store that support compression like wiredtiger. | 1 | 0 | 0 | recipe for working with compressed (any)dbm files in python | 1 | python,compression,gzip,key-value-store,dbm | 0 | 2017-03-28T12:14:00.000 |
After updating PyCharm (version 2017.1), PyCharm does not display sqlite3 database tables anymore.
I've tested the connection and it's working.
In sqlite client I can list all tables and make queries.
Someone else has get this problem? And in this case could solve anyway? | 2 | 1 | 0.099668 | 0 | false | 43,075,527 | 0 | 3,401 | 1 | 0 | 0 | 43,075,420 | After clicking on the View => Tools => Window => Database click on the green plus icon and then on Data Source => Sqlite (Xerial). Then, on the window that opens install the driver (it's underneath the Test Connection button) that is proposing (Sqlite (Xerial)).
That should do it both for db.sqlite3 and identifier.sqli... | 1 | 0 | 0 | Pycharm does not display database tables | 2 | python,django,sqlite,pycharm | 0 | 2017-03-28T16:50:00.000 |
So is it possible to mix 2 ORMs in the same web app, and if so, how optimal would it be? Why so?
- I'm working on a web app in flask using flask-mysqldb and I came to a point where I need to implement an auth system, and on flask-mysqldb there's no secure way to do it.
- With that said now I'm trying to implement flask-sec... | 0 | 1 | 1.2 | 0 | true | 43,098,951 | 1 | 145 | 2 | 0 | 0 | 43,098,668 | It's possible, but not recommended. Consider this:
Half of your app will not benefit from anything a proper ORM offers
Adding a field to the table means editing raw SQL in many places, and then changing the model.
Don't forget to keep them in sync.
Alternatively, you can port everything that uses raw mysqldb to use S... | 1 | 0 | 0 | Is it possible to mix 2 ORMS in same web app? | 2 | python,flask,flask-sqlalchemy,flask-mysql | 0 | 2017-03-29T16:03:00.000 |
So is it possible to mix 2 ORMs in the same web app, and if so, how optimal would it be? Why so?
- I'm working on a web app in flask using flask-mysqldb and I came to a point where I need to implement an auth system, and on flask-mysqldb there's no secure way to do it.
- With that said now I'm trying to implement flask-sec... | 0 | 3 | 0.291313 | 0 | false | 43,098,934 | 1 | 145 | 2 | 0 | 0 | 43,098,668 | You can have a module for each orm. One module can be called auth_db and the other can be called data_db. In your main app file just import both modules and initialize the database connections. That being said, this approach will be harder to maintain in the future, and harder for other developers to understand what's ... | 1 | 0 | 0 | Is it possible to mix 2 ORMS in same web app? | 2 | python,flask,flask-sqlalchemy,flask-mysql | 0 | 2017-03-29T16:03:00.000 |
I am thinking to use AWS API Gateway and AWS Lambda(Python) to create a serverless API's , but while designing this i was thinking of some aspects like pagination,security,caching,versioning ..etc
so my question is:
What is the best approach performance & cost wise to implement API pagination with very big data (1 mi... | 1 | 3 | 0.53705 | 0 | false | 43,126,859 | 1 | 1,679 | 1 | 0 | 1 | 43,113,198 | If your data is going to live in a postgresql data base anyway I would start with your requests hitting the database and profile the performance. You've made assumptions about it being slow but you haven't stated what your requirements for latency are or what your schema is, so any assertions people would make about wh... | 1 | 0 | 0 | AWS API Gateway & Lambda - API Pagination | 1 | python,postgresql,amazon-web-services,aws-lambda,aws-api-gateway | 0 | 2017-03-30T09:04:00.000 |
I am working on a python/tornado web application.
I have several options to save in my app.
Those options can be changed by the user, and they will be accessed very often.
I have created an SQLite database, but there is some disk operation involved, and I am asking what the best location for those options would be.
Does tor... | 0 | 0 | 0 | 0 | false | 43,171,572 | 1 | 28 | 1 | 1 | 0 | 43,145,705 | Yes, there is the tornado.options package, which does pretty much what you need. Keep in mind, however, that the values saved here are not persisted between requests; if you need that kind of functionality, you will have to implement an external persistence solution, which you already have done with SQLite. | 1 | 0 | 0 | Where should i save my tornado custom options | 1 | python,tornado | 0 | 2017-03-31T16:39:00.000 |
I have phpMyAdmin to view and edit a database and a Flask + SQLAlchemy app that uses a table from this database. Everything is working fine and I can read/write to the database from the Flask app. However, if I make a change through phpMyAdmin, this change is not detected by SQLAlchemy. The only way to get those changes is... | 0 | 0 | 0 | 0 | false | 43,703,427 | 1 | 665 | 1 | 0 | 0 | 43,149,092 | I suggest you look at Server-Sent Events (SSE). I am looking for SSE code for Postgres, MySQL, etc. It is available for Redis. | 1 | 0 | 0 | Flask App using SQLAlchemy: How to detect external changes committed to the database? | 2 | python,flask,sqlalchemy,flask-sqlalchemy | 0 | 2017-03-31T20:22:00.000 |
Are there any solutions (preferably in Python) that can repair pdfs with damaged xref tables?
I have a pdf that I tried to convert to a png in Ghostscript and received the following error:
**** Error: An error occurred while reading an XREF table.
**** The file has been damaged. This may have been caused
**** b... | 6 | 1 | 1.2 | 0 | true | 43,154,410 | 0 | 8,136 | 1 | 0 | 0 | 43,149,372 | If the file renders as expected in Ghostscript then you can run it through GS to the pdfwrite device and create a new PDF file which won't be damaged.
Preview is (like Acrobat) almost certainly silently repairing the problem in the background. Ghostscript will be doing the same, but unlike other applications we feel yo... | 1 | 0 | 0 | Repairing pdfs with damaged xref table | 1 | python,pdf,ghostscript | 0 | 2017-03-31T20:42:00.000 |
I am directing this question to experienced, Django developers, so as in subject, I have been learning Django since September 2016, but I've started to learn it without any knowledge about databases syntax. I know basic concepts and definitions, so I can easily implement in Django models. Summarizing, have I to know SQ... | 1 | 1 | 0.099668 | 0 | false | 43,164,854 | 1 | 1,361 | 1 | 0 | 0 | 43,161,718 | You do not have to be a wizard at it but understanding relations between data sets can be extremely helpful especially if you have a complicated data hierarchy.
Just learn as you go. If you want you can look at the SQL code Django executes for you in the migrations.py file of each app. | 1 | 0 | 0 | Do I need to know SQL when I work with Django | 2 | python,sql,django | 0 | 2017-04-01T20:33:00.000 |
I'm creating and writing into an excel file using xlsxwriter module. But when I open the excel file, I get this popup:
We found a problem with some content in 'excel_sheet.xlsx'. Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes. If I click Yes, it says Repaired Rec... | 1 | 2 | 1.2 | 0 | true | 43,248,203 | 0 | 1,256 | 1 | 0 | 0 | 43,199,359 | I was trying to recreate(Thanks to @jmcnamara) the problem and I could figure out where it went wrong.
In my command to write_rich_string, sometimes it was trying to format the empty string.
my_work_sheet.write_rich_string(row_no, col_no,format_1, string_1, format_2, string_2, format_1, string_3)
I came to know that a... | 1 | 0 | 0 | Python xlsxwriter Repaired Records: String properties from /xl/sharedStrings.xml part (Strings) | 2 | python-3.x,xlsxwriter | 0 | 2017-04-04T05:59:00.000 |
So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels ... | 0 | 1 | 0.099668 | 1 | false | 52,570,244 | 0 | 569 | 2 | 0 | 0 | 43,215,443 | I believe this is currently available via kapacitor, but assume a more elegant solution will be readily accomplished using FluxQL.
Consuming the influxdb measurements into kapacitor will allow you to force equivalent time buckets and present the data once normalized. | 1 | 0 | 0 | InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel | 2 | python,influxdb,grafana | 0 | 2017-04-04T18:55:00.000 |
So basically I would like to be able to view two different databases within the same Grafana graph panel. The issue is that InfluxDB is a time series database, so it is not possible to see the trend between two databases in the same graph panel unless they have similar timestamps. The workaround is creating two panels ... | 0 | 0 | 0 | 1 | false | 43,306,424 | 0 | 569 | 2 | 0 | 0 | 43,215,443 | I can confirm from my grafana instance that it's not possible to add a shift to one timeseries and not the other in one panel.
To change the timestamps, I'd just do it the obvious way. Load a few thousand entries at a time into Python, change the timestamps and write them to a new measure (and indicate the sh... | 1 | 0 | 0 | InfluxDB and Grafana: comparing two databases with different timestamps on same graph panel | 2 | python,influxdb,grafana | 0 | 2017-04-04T18:55:00.000 |
Currently I am using celery to build a scheduled database synchronization feature, which periodically fetches data from multiple databases. If I want to store the task results, would the performance be better if I store them in Redis instead of an RDB like MySQL? | 2 | 2 | 0.379949 | 0 | false | 43,264,780 | 0 | 931 | 1 | 1 | 0 | 43,264,701 | Performance-wise it's probably going to be Redis, but performance questions are almost always nuance based.
Redis stores lists of data with no requirement for them to relate to one another, so it is extremely fast when you don't need to use SQL-type queries against the data it contains. | 1 | 0 | 0 | Celery: Is it better to store task results in MySQL or Redis? | 1 | python,mysql,django,redis,celery | 0 | 2017-04-06T19:57:00.000 |
I have a Django project with 5 different PostgreSQL databases. The project was preemptively separated in terms of model routing, but has proven quite problematic so now I'm trying to reverse it. Unfortunately, there's some overlap of empty, migrated tables so pg_dump's out of the question. It looks like django-dumpd... | 2 | 1 | 0.099668 | 0 | false | 43,267,208 | 1 | 650 | 1 | 0 | 0 | 43,266,059 | there's always dumpdata from Django, which is pretty easy to use.
or you could do this manually:
if the 2 databases share the same data (they mirror one another) and the same table structure, you could just run a syncdb from Django to create the new table structure and then dump and import (I'm assuming yo... | 1 | 0 | 0 | How to intelligently merge Django databases? | 2 | python,django,postgresql,django-models,django-database | 0 | 2017-04-06T21:27:00.000 |
I need some help.
I am new to Postgres and Django. I am creating a project in Django where there will be n number of clients and their data is saved into the database on a monthly basis.
So my doubt is: should I go with only a single table and save all the data inside it, or do I have an option to create individual tables... | 0 | 1 | 0.197375 | 0 | false | 43,369,451 | 1 | 324 | 1 | 0 | 0 | 43,367,732 | In fact you do not need to create a special table for each customer. SQL databases are designed in a manner to keep all similar data in one table. It is much easier to work with them in such a way.
For the moment I'd recommend reading about relational databases to better understand ways to store data in them. The... | 1 | 0 | 0 | Creating dynamic tables in postgres using django | 1 | python,django,postgresql | 0 | 2017-04-12T11:01:00.000 |
I'm using Python 2.7 and flask framework with flask-sqlalchemy module.
I always get the following exception when trying to insert : Exception Type: OperationalError. Exception Value: (1366, "Incorrect string value: \xF09...
I already set the MySQL database, table and corresponding column to utf8mb4_general_ci and I can ins... | 4 | 0 | 0 | 0 | false | 43,557,984 | 1 | 1,485 | 1 | 0 | 0 | 43,557,926 | In your main config file, set 'charset' => 'utf8mb4'.
You also have to edit the field in which you want to store emoji and set its collation to utf8mb4_unicode_ci | 1 | 0 | 0 | Flask SQLAlchemy can't insert emoji to MySQL | 4 | python,mysql,flask | 0 | 2017-04-22T10:08:00.000 |
Can I perform a PATCH request to collection?
Like UPDATE table SET foo=bar WHERE some>10 in SQL. | 0 | 0 | 0 | 0 | false | 43,589,357 | 0 | 229 | 1 | 0 | 0 | 43,581,457 | No that is not supported, and probably should not (see Andrey Shipilov comment). | 1 | 0 | 0 | Bulk PATCH in python eve | 1 | python,mongodb,eve | 0 | 2017-04-24T06:45:00.000 |
I use cqlengine with django. On some occasions Cassandra throws an error indicating that the user has no permission to do something. Sometimes this is a select, sometimes an update, and sometimes something else. I have no code to share, because there is no specific line that does this. I am very sure that user has ... | 0 | 0 | 0 | 0 | false | 43,630,653 | 1 | 292 | 2 | 0 | 0 | 43,622,277 | Is the system_auth keyspace RF the same as the number of nodes? Did you try to run a repair on the system_auth keyspace already? If not, do so.
To me it sounds like a consistency issue. | 1 | 0 | 0 | Cassandra: occasional permission errors | 2 | python,django,cassandra,permissions,cqlengine | 0 | 2017-04-25T22:54:00.000 |
I use cqlengine with django. On some occasions Cassandra throws an error indicating that the user has no permission to do something. Sometimes this is a select, sometimes an update, and sometimes something else. I have no code to share, because there is no specific line that does this. I am very sure that user has ... | 0 | 0 | 0 | 0 | false | 43,645,204 | 1 | 292 | 2 | 0 | 0 | 43,622,277 | If you have authentication enabled, make sure you set an appropriate RF for the keyspace system_auth (it should be equal to the number of nodes).
Secondly, make sure the user you have created has the following permissions on all keyspaces: {'ALTER', 'CREATE', 'DROP', 'MODIFY', 'SELECT'}. If the user is a superuser, make sure you... | 1 | 0 | 0 | Cassandra: occasional permission errors | 2 | python,django,cassandra,permissions,cqlengine | 0 | 2017-04-25T22:54:00.000 |
I use SQLite3 in Python because my school computers don't allow us to install anything for Python, so I used the pre-installed sqlite3 module.
I'm working on a program whose back end relies on an SQLite3 database; however, the databases are created and stored on their computer.
Is it possible for me to "Host" an SQLite3 ... | 1 | 1 | 1.2 | 0 | true | 43,647,246 | 0 | 1,615 | 1 | 0 | 0 | 43,647,227 | Write an API on the remote server, yes. This could be hosted by a web framework of your choice.
You won't get a direct network connection to a file | 1 | 0 | 0 | How to connect to SQLite3 database in python remotely | 1 | python,python-3.x,sqlite | 0 | 2017-04-27T01:37:00.000 |
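The "write an API on the remote server" route the answer suggests can be sketched with nothing but the standard library. Everything here is illustrative (the database path, the port, the single endpoint); a real deployment would add authentication and never expose raw SQL to clients:

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "school.db"  # hypothetical database file on the server

class TableListHandler(BaseHTTPRequestHandler):
    """Serve one fixed, read-only query as JSON."""

    def do_GET(self):
        conn = sqlite3.connect(DB_PATH)
        try:
            rows = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table'"
            ).fetchall()
        finally:
            conn.close()
        body = json.dumps([r[0] for r in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def run(port=8000):
    # blocks forever; client-side JavaScript would then fetch() this URL
    HTTPServer(("", port), TableListHandler).serve_forever()
```

Calling `run()` starts the server; the browser never touches the .sqlite file directly.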
I am deploying a Django App using Elastic Beanstalk on AWS. The app has a function whereby user can register their details.
The problem is when I make small changes to my app and deploy this new version I lose the registered users, since their information isn't in my local database (only the database on AWS).
Is there ... | 0 | 0 | 0 | 0 | false | 43,666,611 | 1 | 137 | 1 | 0 | 0 | 43,650,204 | Don't bundle the development .sqlite file with the production stuff. It needs to have its own .sqlite file and you just need to run migrations on the production one. | 1 | 0 | 0 | Update sqlite database based on changes in production | 1 | python,django,sqlite,amazon-web-services,amazon-elastic-beanstalk | 0 | 2017-04-27T06:31:00.000 |
I have written a piece of python code that scrapes the odds of horse races from a bookmaker's site. I wish to now:
Run the code at prescribed increasingly frequent times as the race draws closer.
Store the scraped data in a database fit for extraction and statistical analysis in R.
Apologies if the question is poorly... | 0 | 0 | 0 | 1 | false | 43,670,482 | 0 | 73 | 1 | 0 | 0 | 43,670,334 | In windows, you can use Task Scheduler or in Linux crontab. You can configure these to run python with your script at set intervals of time. This way you don't have a python script continuously running preventing some hangup in a single call from impacting all subsequent attempts to scrape or store in database.
To stor... | 1 | 0 | 0 | How to run python code at prescribed time and store output in database | 1 | python,database-design,web-scraping | 0 | 2017-04-28T01:07:00.000 |
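For the crontab route on Linux, entries along these lines would run the scraper more often as race time nears. The script path and the hour ranges are purely illustrative:

```
# m  h     dom mon dow  command
0    *     *   *   *    /usr/bin/python3 /home/me/scrape_odds.py   # hourly during the day
*/5  14-16 *   *   *    /usr/bin/python3 /home/me/scrape_odds.py   # every 5 min near race time
```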
I'm using boto3 and trying to upload files. It would be helpful if anyone could explain the exact difference between the file_upload() and put_object() S3 bucket methods in boto3.
Is there any performance difference?
Does anyone among these handles multipart upload feature in behind the scenes?
What are the best use cases ... | 49 | 51 | 1.2 | 0 | true | 43,744,495 | 0 | 16,616 | 1 | 0 | 1 | 43,739,415 | The upload_file method is handled by the S3 Transfer Manager, this means that it will automatically handle multipart uploads behind the scenes for you, if necessary.
The put_object method maps directly to the low-level S3 API request. It does not handle multipart uploads for you. It will attempt to send the entire body... | 1 | 0 | 0 | What is the Difference between file_upload() and put_object() when uploading files to S3 using boto3 | 3 | python,amazon-web-services,amazon-s3,boto3 | 0 | 2017-05-02T13:40:00.000 |
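The contrast the answer draws can be sketched as two small helpers. Bucket, key and path values would be supplied by the caller, and `boto3` is imported lazily so the sketch can be defined without network access or boto3 installed:

```python
def upload_large(path, bucket, key):
    """upload_file: handled by the S3 Transfer Manager, so multipart
    uploads, threading and retries are managed automatically."""
    import boto3  # deferred import, see lead-in
    boto3.client("s3").upload_file(path, bucket, key)

def upload_small(data: bytes, bucket, key):
    """put_object: a single low-level PutObject call; the whole body is
    sent in one request, with no automatic multipart handling."""
    import boto3
    boto3.client("s3").put_object(Body=data, Bucket=bucket, Key=key)
```

Note that `put_object` takes an in-memory body, which is one reason it is bounded by available memory for large payloads.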
I process a report that consists of date fields. There are some instances wherein the date seen in the cell is not a number (how do I know? I use the isnumber() function from excel to check if a date value is really a number).
Using a recorded macro, for all the date columns, I do the text to columns function in excel ... | 0 | 0 | 0 | 0 | false | 43,783,883 | 0 | 940 | 1 | 0 | 0 | 43,779,887 | Sounds like you might want to take advantage of the type guessing in openpyxl. If so, open the workbook with guess_types=True and see if that helps. NB. this feature is more suited to working with text sources like CSV and is likely to be removed in future releases. | 1 | 0 | 1 | How to convert date formatted as string to a number in excel using openpyxl | 1 | python,excel,openpyxl | 0 | 2017-05-04T10:07:00.000 |
When attempting to connect to a PostgreSQL database with ODBC I get the following error:
('08P01', '[08P01] [unixODBC]ERROR: Unsupported startup parameter: geqo (210) (SQLDriverConnect)')
I get this with two different ODBC front-ends (pyodbc for Python and ODBC.jl for Julia), so it's clearly coming from the ODBC libr... | 1 | -1 | -0.099668 | 0 | false | 43,872,906 | 0 | 913 | 1 | 1 | 0 | 43,789,951 | Configure SSL Mode: allow in the Postgres ODBC driver (driver version 9.3.400). | 1 | 0 | 0 | unsupported startup parameter geqo when connecting to PostgreSQL with ODBC | 2 | python,postgresql,odbc,julia | 0 | 2017-05-04T18:08:00.000 |
I have some pretty big, multi-level documents with LOTS of fields (over 1500 fields). While I want to save the whole document in mongo, I do not want to define the whole schema. Only a handful of fields are important. I also need to index those "important" fields. Is this something that can be done?
Thank you | 0 | 1 | 0.197375 | 0 | false | 43,792,616 | 0 | 96 | 1 | 0 | 0 | 43,792,282 | Nevermind... found it... (ALLOW_UNKNOWN) | 1 | 0 | 0 | Is it possible to define a partial schema Python-eve? | 1 | python,eve | 0 | 2017-05-04T20:31:00.000 |
After doing a bit of research I am finding it difficult to find out how to use mysql timestamps in matplotlib.
Mysql fields to plot
X-axis:
Field: entered
Type: timestamp
Null: NO
Default: CURRENT TIMESTAMP
Sample: 2017-05-08 18:25:10
Y-axis:
Field: value
Type: float(12,6)
Null: NO
Sample: 123.332
What date format is m... | 0 | 0 | 1.2 | 1 | true | 43,860,724 | 0 | 202 | 1 | 0 | 0 | 43,859,988 | You can use the datetime module. Although I use the now() function to extract datetimes from MySQL, I consider the format to be the same.
For instance:
python> import datetime as dt
I put the datetime data into a list named datelist, and now you can use the datetime.strptime function to convert the date format to what you want:
python>... | 1 | 0 | 0 | Python - convert mysql timestamps type to matplotlib and graph | 1 | python,mysql,matplotlib | 0 | 2017-05-09T02:17:00.000 |
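If your driver hands the timestamps back as strings (the sample value from the question is reused here), the strptime step the answer describes looks like this:

```python
from datetime import datetime

# a MySQL TIMESTAMP as it appears when fetched as a string
raw = "2017-05-08 18:25:10"
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
print(parsed.year, parsed.hour)  # 2017 18
# matplotlib can plot datetime objects directly on the x-axis,
# e.g. plt.plot(datelist, values) after parsing every row
```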
I am trying to access a Spreadsheet on a Team Drive using gspread. It is not working. It works if the spreadsheet is on my Google Drive. I was wondering if gspread has the new Google Drive API v3 capability available to open spreadsheets on Team Drives. If so, how do I specify the fact I want to open a spreadsheet on ... | 1 | 0 | 0 | 0 | false | 65,546,184 | 0 | 426 | 1 | 0 | 0 | 43,897,009 | Make sure you're using the latest version of gspread. The one that is e.g. bundled with Google Colab is outdated:
!pip install --upgrade gspread
This fixed the error in gs.csv_import for me on a team drive. | 1 | 0 | 0 | Does gspread Support Accessing Spreadsheets on Team Drives? | 2 | python,google-apps-script,gspread | 0 | 2017-05-10T15:34:00.000 |
I have an AWS lambda implemented using python/pymysql with an AWS RDS MySQL instance as backend. It connects and works well, and I can also access the lambda from my Android app.
The problem I have is after I insert a value into rds mysql tables successfully using local machine mysql workbench and run the lambda function f... | 3 | 0 | 0 | 0 | false | 56,725,577 | 0 | 878 | 1 | 0 | 0 | 43,944,404 | AWS recommends making a global connection (before your handler function definition) in order to increase performance. Idea is that a new connection does not have to be established and the previous connection to the DB is reused, even when multiple instances of Lambda are run in close connection. But if your use case in... | 1 | 0 | 0 | Python Aws lambda function not fetching the rds Mysql table value in realtime | 2 | python,amazon-web-services,aws-lambda,amazon-rds | 0 | 2017-05-12T18:31:00.000 |
Ubuntu 14.04.3, PostgreSQL 9.6
Maybe I can get the plpythonu source code from the PostgreSQL 9.6 source code or somewhere else, put it into the /contrib directory, make it and CREATE EXTENSION after that!? Or something like that.
Don't want to think that PostgreSQL reinstall is my only way. | 2 | 2 | 0.379949 | 0 | false | 44,186,978 | 0 | 2,374 | 1 | 0 | 0 | 43,984,705 | you can simply run
python 2
sudo apt-get install postgresql-contrib postgresql-plpython-9.6
python 3
sudo apt-get install postgresql-contrib postgresql-plpython3-9.6
Then check the extension is installed
SELECT * FROM pg_available_extensions WHERE name like '%plpython%';
To apply the extension to the database, use
for... | 1 | 0 | 1 | Is there a way to install PL/Python after the database has been compiled without "--with-python" parameter? | 1 | postgresql,plpython | 0 | 2017-05-15T16:36:00.000 |
I'm having issues connecting to a working SQL\Express database instance using Robot Framework's DatabaseLibrary.
If I use either Connect To Database with previously defined variables or Connect To Database Using Custom Params with a connection string, I get the following results:
pyodbc: ('08001', '[08001] [Microsoft]... | 2 | 2 | 1.2 | 0 | true | 44,121,400 | 0 | 1,854 | 1 | 0 | 0 | 43,988,892 | I was able to connect using @Goralight approach: Connect To Database Using Custom Params pymssql ${DBConnect} where ${DBConnect} contained database, user, Password, host and port | 1 | 0 | 0 | Cannot connect to SQL\Express with pyodbc/pymssql and Robot Framework | 1 | python,robotframework,pyodbc,pymssql | 0 | 2017-05-15T21:11:00.000 |
I have sqlite db runing on my server. I want to access it using client side javascript in browser. Is this possible?
As of now, I am using python to access the db and calling python scripts for db operations. | 0 | 1 | 0.197375 | 0 | false | 44,001,568 | 1 | 158 | 1 | 0 | 0 | 44,000,687 | It's not a good idea to allow clients to access directly to the db. If you have to do it be carefull to not give to the account you use full write/read access to the db or any malicius client can erase modify or steal information from the db.
An implementation with client identification server-side and rest API to retu... | 1 | 0 | 0 | Access sqlite in server through client side javascript | 1 | javascript,python,sql,sqlite | 0 | 2017-05-16T11:51:00.000 |
I am trying to install mysqlclient on mac to use mysql in a django project. I have made sure that setup tools is installed and that mysql connector c is installed as well. I keep getting the error Command "python setup.py egg_info" failed with error code 1 in. This is my first django project since switching from rails.... | 0 | 0 | 1.2 | 0 | true | 44,036,652 | 1 | 599 | 1 | 0 | 0 | 44,008,037 | I was able to fix this by running pip install mysql. I do not understand why this worked because I already had MySQL installed on my system and had been using it.
I am going to assume it is because Python uses environments and MySQL wasn't installed in the environment but I would like to know for sure. | 1 | 0 | 0 | Mysqlclient fails to install | 1 | python,mysql,django,pip | 0 | 2017-05-16T17:32:00.000 |
I am looking for a method for hiding all rows in an excel sheet using python's openpyxl module. I would like, for example, to hide all rows from the 10th one to the end. Is it possible in openpyxl? For instance in xlsxwriter there is a way to hide all unused rows. So I am looking for a similar functionality in openpyxl... | 1 | 1 | 1.2 | 0 | true | 45,254,364 | 0 | 413 | 1 | 0 | 0 | 44,028,186 | As far as I know, there is no such feature at the moment in openpyxl. However, this can be easily done in an optimized way in the xlsxwriter module. | 2 | python,excel,openpyxl,xlsxwriter | 0 | 2017-05-17T14:48:00.000 |
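The xlsxwriter route the answer points to is `set_default_row(hide_unused_rows=True)`, which hides every row that has no data written to it. A minimal sketch, with filename and data as placeholders and the import deferred so the helper can be defined without xlsxwriter installed:

```python
def write_and_hide_unused_rows(path, rows):
    """Write `rows` (a list of lists) to a sheet, then hide all unused rows."""
    import xlsxwriter  # deferred import, see lead-in
    workbook = xlsxwriter.Workbook(path)
    worksheet = workbook.add_worksheet()
    for r, row in enumerate(rows):
        for c, value in enumerate(row):
            worksheet.write(r, c, value)
    # hide every row below the written data
    worksheet.set_default_row(hide_unused_rows=True)
    workbook.close()
```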
Getting neo4j.v1.api.CypherError: Internal error - should have used fall back to execute query, but something went horribly wrong when using python neomodel client with neo4j community edition 3.2.0 server.
And the neo4j server logs have the below errors:
2017-05-16 12:54:24.187+0000 ERROR [o.n.b.v.r.ErrorReporter] Cli... | 0 | 2 | 1.2 | 0 | true | 44,043,172 | 0 | 213 | 1 | 0 | 0 | 44,043,171 | This seems to be an issue with neo4j version 3.2.0. Setting cypher.default_language_version to 3.1 in neo4j.conf and restarting the server should fix this. | 1 | 0 | 0 | neo4j.v1.api.CypherError: Internal error - should have used fall back to execute query, but something went horribly wrong | 1 | python,python-3.x,neo4j | 0 | 2017-05-18T09:00:00.000 |
I am building a python 3.6 AWS Lambda deploy package and was facing an issue with SQLite.
In my code I am using nltk, which has an import sqlite3 in one of the files.
Steps taken till now:
Deployment package has only python modules that I am using in the root. I get the error:
Unable to import module 'my_program': No m... | 23 | 6 | 1 | 0 | false | 44,076,628 | 0 | 7,936 | 2 | 0 | 0 | 44,058,239 | This isn't a solution, but I have an explanation why.
Python 3 has support for sqlite in the standard library (stable to the point of pip knowing and not allowing installation of pysqlite). However, this library requires the sqlite developer tools (C libs) to be on the machine at runtime. Amazon's linux AMI does not ha... | 1 | 0 | 0 | sqlite3 error on AWS lambda with Python 3 | 8 | python-3.x,amazon-web-services,sqlite,aws-lambda | 1 | 2017-05-18T21:43:00.000 |
I am building a python 3.6 AWS Lambda deploy package and was facing an issue with SQLite.
In my code I am using nltk, which has an import sqlite3 in one of the files.
Steps taken till now:
Deployment package has only python modules that I am using in the root. I get the error:
Unable to import module 'my_program': No m... | 23 | 1 | 0.024995 | 0 | false | 49,342,276 | 0 | 7,936 | 2 | 0 | 0 | 44,058,239 | My solution may or may not apply to you (as it depends on Python 3.5), but hopefully it may shed some light for similar issue.
sqlite3 comes with standard library, but is not built with the python3.6 that AWS use, with the reason explained by apathyman and other answers.
The quick hack is to include the share object .s... | 1 | 0 | 0 | sqlite3 error on AWS lambda with Python 3 | 8 | python-3.x,amazon-web-services,sqlite,aws-lambda | 1 | 2017-05-18T21:43:00.000 |
I haven't been able to find any direct answers, so I thought I'd ask here.
Can ETL, say for example AWS Glue, be used to perform aggregations to lower the resolution of data to AVG, MIN, MAX, etc over arbitrary time ranges?
e.g. - Given 2000+ data points of outside temperature in the past month, use an ETL job to lower... | 0 | 0 | 1.2 | 0 | true | 44,107,589 | 0 | 588 | 1 | 0 | 0 | 44,074,550 | The 'T' in ETL stands for 'Transform', and aggregation is one of most common ones performed. Briefly speaking: yes, ETL can do this for you. The rest depends on specific needs. Do you need any drill-down? Increasing resolution on zoom perhaps? This would affect the whole design, but in general preparing your data for p... | 1 | 0 | 0 | Using ETL for Aggregations | 1 | python,amazon-web-services,etl,aws-glue | 0 | 2017-05-19T16:09:00.000 |
I am building a web application that allows users to login and upload data files that would eventually be used to perform data visualisation and data mining features - Imagine a SAS EG/Orange equivalent on the web.
What are the best practices to store these files (in a database or on file) to facilitate efficient retri... | 0 | 0 | 0 | 0 | false | 44,206,133 | 1 | 299 | 1 | 0 | 0 | 44,094,933 | This depends on what functionality you can offer.
Many very interesting data mining tools will read raw data files only, so storing the data in a database does not help you at all.
But then you won't want to run them "on the web" anyway, as they easily eat all your resources.
Either way, first get your requirements s... | 1 | 0 | 0 | Storing data files on Django application | 1 | python,django,database,data-visualization,data-mining | 0 | 2017-05-21T08:50:00.000 |
I am using Python 2.7 and SQLite3.
When I start working with the DB I want to check whether my database is empty or not. I mean, does it already have any tables or not.
My idea is to use a simple SELECT from any table and wrap this select in a try/except block, so if an exception is raised then my DB is empty.
Maybe som... | 2 | 4 | 1.2 | 0 | true | 44,098,371 | 0 | 1,822 | 1 | 0 | 0 | 44,098,235 | SELECT name FROM sqlite_master
while connected to your database will give you all the table names. You can then do a fetchall and check the size, or even the contents, of the list. No try/except necessary (the list will be empty if the database doesn't contain any tables). | 1 | 0 | 0 | How could I check - does my SQLite3 database is empty? | 1 | python,sqlite | 0 | 2017-05-21T14:50:00.000 |
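The check from the answer, as a self-contained sketch against an in-memory database:

```python
import sqlite3

def is_empty(conn):
    """True if the database contains no tables."""
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'"
    ).fetchall()
    return len(tables) == 0

conn = sqlite3.connect(":memory:")
print(is_empty(conn))            # True: fresh database, no tables yet
conn.execute("CREATE TABLE t (id INTEGER)")
print(is_empty(conn))            # False: one table now exists
```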
I was running Django 1.11 with Python 3.5 and I decided to upgrade to Python 3.6.
Most things worked well, but I am having issues connection to AWS S3. I know that they have a new boto version boto3 and that django-storages is a little outdated, so now there is django-storages-redux.
I've been trying multiple combinati... | 0 | -1 | 1.2 | 0 | true | 44,122,528 | 1 | 149 | 1 | 0 | 0 | 44,121,989 | Found the issue.
Django-storages-redux was temporarily replacing django-storages since its development had been interrupted.
Now the django-storages team has resumed supporting it.
That means that the correct configuration to use is: django-storages + boto3 | 1 | 0 | 0 | Django: Upgrading to python3.6 with Amazon S3 | 1 | django,python-3.x,amazon-s3,boto | 0 | 2017-05-22T20:53:00.000 |
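With the django-storages + boto3 combination the answer lands on, the settings look along these lines (bucket name, region and credentials are placeholders; credentials can also come from the environment or an IAM role):

```python
# settings.py fragment; all values are placeholders
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-bucket"
AWS_S3_REGION_NAME = "eu-west-1"
AWS_ACCESS_KEY_ID = "..."
AWS_SECRET_ACCESS_KEY = "..."
```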
I'm making a web crawler in Python that collects redirects/links, adds them to a database, and enters them as a new row if the link doesn't already exist. I want like to use multi-threading but having trouble because I have to check in real time if there is an entry with a given URL.
I was initially using sqlite3 but ... | 3 | 0 | 0 | 0 | false | 44,123,745 | 0 | 7,557 | 1 | 0 | 0 | 44,123,678 | One solution could be to acquire a lock to access the database directly from your program. In this way the multiple threads or processes will wait for the other processes to insert the link before performing a request. | 1 | 0 | 0 | Getting SQLite3 to work with multiple threads | 3 | python,multithreading,sqlite,multiprocessing | 0 | 2017-05-22T23:47:00.000 |
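The lock-based approach from the answer can be sketched like this (table layout is illustrative): one shared connection guarded by a `threading.Lock`, opened with `check_same_thread=False` so threads may share it.

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE links (url TEXT PRIMARY KEY)")
db_lock = threading.Lock()

def add_link(url):
    """Insert url if unseen; the lock serializes all database access."""
    with db_lock:
        seen = conn.execute("SELECT 1 FROM links WHERE url = ?", (url,)).fetchone()
        if seen is None:
            conn.execute("INSERT INTO links (url) VALUES (?)", (url,))
            conn.commit()

threads = [threading.Thread(target=add_link, args=("http://example.com",))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(conn.execute("SELECT COUNT(*) FROM links").fetchone()[0])  # 1
```

The check-then-insert pair stays atomic because both happen under the same lock.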
I have pretty simple model. User defines url and database name for his own Postgres server. My django backend fetches some info from client DB to make some calculations, analytics and draw some graphs.
How to handle connections? Create new one when client opens a page, or keep connections alive all the time?(about 250-... | 0 | 0 | 0 | 0 | false | 44,139,838 | 1 | 77 | 1 | 0 | 0 | 44,139,772 | In your case, I would rather go with Django internal implementation and follow Django ORM as you will not need to worry about handling connection and different exceptions that may arise during your own implementation of DAO model in your code.
As per your requirement, you need to access user database, there still exist... | 1 | 0 | 0 | Handle connections to user defined DB in Django | 1 | python,django,database,postgresql,orm | 0 | 2017-05-23T16:00:00.000 |
Trying to install cx_Oracle on Solaris11U3 but getting ld: fatal: file /oracle/database/lib/libclntsh.so: wrong ELF class: ELFCLASS64 error
python setup.py build
running build
running build_ext
building 'cx_Oracle' extension
cc -DNDEBUG -KPIC -DPIC -I/oracle/database/rdbms/demo -I/oracle/database/rdbms/public -I/usr/in... | 0 | 0 | 0 | 0 | false | 44,171,743 | 0 | 268 | 1 | 1 | 0 | 44,155,943 | You cannot mix 32-bit and 64-bit together. Everything (Oracle client, Python, cx_Oracle) must be 32-bit or everything must be 64-bit. The error above looks like you are trying to mix a 64-bit Oracle client with a 32-bit Python. | 1 | 0 | 0 | Python module cx_Oracle ld installation issue on Solaris11U3 SPARC: fatal: file /oracle/database/lib/libclntsh.so: wrong ELF class: ELFCLASS64 error | 1 | python,cx-oracle | 0 | 2017-05-24T10:37:00.000 |
I recently did an inspectdb of our old database, which is on MySQL. We want to move to Postgres. Now we want the inspected models to migrate to a different schema. Is there a migrate command to achieve this, so different apps use different schemas in the same database? | 0 | 0 | 1.2 | 0 | true | 47,838,958 | 1 | 204 | 1 | 0 | 0 | 44,161,614 | so simple!
python manage.py migrate "app_name" | 1 | 0 | 0 | how to specify an app to migrate to a schema django | 1 | postgresql,python-3.5,django-migrations,django-1.8,django-postgresql | 0 | 2017-05-24T14:39:00.000 |
I have a table which I am working on and it contains 11 million rows or thereabouts... I need to run a migration on this table, but since Django tries to store it all in cache I run out of RAM or disk space, whichever comes first, and it comes to an abrupt halt.
I'm curious to know if anyone has faced this issue and has come u... | 5 | 5 | 1.2 | 0 | true | 44,167,568 | 1 | 995 | 1 | 0 | 0 | 44,167,386 | The issue comes from a Postgresql which rewrites each row on adding a new column (field).
What you would need to do is to write your own data migration in the following way:
Add a new column with null=True. In this case data will not be rewritten and the migration will finish pretty fast.
Migrate it.
Add a default value ... | 1 | 0 | 0 | Django migration 11 million rows, need to break it down | 1 | python,django,postgresql,django-migrations | 0 | 2017-05-24T19:55:00.000 |
in my app I have a mixin that defines 2 fields like start_date and end_date. I've added this mixin to all table declarations which require these fields.
I've also defined a function that returns filters (conditions) to test a timestamp (e.g. now) to be >= start_date and < end_date. Currently I'm manually adding these ... | 1 | 0 | 0 | 0 | false | 44,395,701 | 1 | 182 | 1 | 0 | 0 | 44,207,726 | I tried extending Query but had a hard time. Eventually (and unfortunately) I moved back to my previous approach of little helper functions returning filters and applying them to queries.
I still wish I could find an approach that automatically adds certain filters if a table (Base) has certain columns.
Juergen | 1 | 0 | 0 | sqlalchemy automatically extend query or update or insert upon table definition | 2 | python,sqlalchemy | 0 | 2017-05-26T18:06:00.000 |
Yesterday, I installed an Apache web server and phpMyAdmin on my Raspberry Pi. How can I connect my Raspberry Pi to the databases in phpMyAdmin with Python? Can I use MySQL? Thanks, I hope you understand my question; sorry for my bad English. | 0 | 0 | 0 | 0 | false | 44,215,522 | 0 | 110 | 1 | 0 | 0 | 44,215,404 | Your question is quite unclear. But from my understanding, here is what you should try doing: (Note: I am assuming you want to connect your Pi to a database to collect data and store in an IoT based application)
Get a server. Any Basic server would do. I recommend DigitalOcean or AWS LightSail. They have usable server... | 1 | 0 | 0 | connect my raspbery-pi to MySQL | 1 | python,mysql,apache,raspberry-pi | 1 | 2017-05-27T09:52:00.000 |
I'm currently programming in python :
- a graphical interface
- my own simple client-server based on sockets (also in python).
The main purpose here is to receive a message (composed of several fields), apply any changes to it, and then send the message back to the client.
What I want to achieve is to link the mod... | 0 | 0 | 0 | 0 | false | 44,585,660 | 0 | 156 | 1 | 0 | 0 | 44,262,826 | Update :
I finally managed to do what I had in mind.
In SQLmap, the "dictionary" can be found in the "/xml" directory.
By using several "grep"s I have been able to follow the flow of creation: first create the global payload, and then split it into real SQL payloads. | 1 | 0 | 0 | How to link sqlmap with my own script | 1 | python,python-3.x,sockets,unix,sqlmap | 0 | 2017-05-30T12:51:00.000 |
I'm calling a database, and returning a datetime.datetime object as the 3rd item in a tuple.
The SQL statement is: SELECT name,name_text,time_received FROM {}.{} WHERE DATE(time_received)=CURRENT_DATE LIMIT 1
If I do print(mytuple[2]) it returns something like: 2017-05-31 17:21:19+00:00 but always with that "+00:00" at... | 2 | 0 | 0 | 0 | false | 44,292,411 | 0 | 4,484 | 1 | 0 | 0 | 44,292,280 | I imagine in order to use string stripping methods, you must first convert it to a string, then strip it, then convert back to whatever format you want to use. Cheers | 1 | 0 | 1 | How to remove "+00:00" from end of datetime.datetime object. (not remove time) | 3 | sql,database,python-3.x,datetime,psycopg2 | 0 | 2017-05-31T18:31:00.000 |
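Beyond the string-stripping route in the answer, a tz-aware datetime (which is what drivers like psycopg2 return for timestamptz columns) can simply drop its offset; a stdlib sketch using the sample value from the question:

```python
from datetime import datetime, timezone

aware = datetime(2017, 5, 31, 17, 21, 19, tzinfo=timezone.utc)
print(aware)                      # 2017-05-31 17:21:19+00:00
naive = aware.replace(tzinfo=None)
print(naive)                      # 2017-05-31 17:21:19
```

No round-trip through a string is needed; `replace(tzinfo=None)` keeps the wall-clock value and only discards the offset.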
There is a data model(sql) with a scenario where it uses one input table and it is fed into the prediction model(python) and some output variables are generated onto another table and final join is done between input table and output table to get the complete table(sql). Note: there could be chances that output table d... | 0 | 1 | 1.2 | 0 | true | 44,295,921 | 0 | 123 | 1 | 0 | 0 | 44,295,661 | It's difficult to say without knowing what the database load and latency requirements are. I would typically avoid using the same table for source and output: I would worry about the cost and contention of simultaneously reading from the table and then writing back to the table but this isn't going to be a problem for ... | 1 | 0 | 0 | Best way to use input and output tables | 1 | python,sql,postgresql | 0 | 2017-05-31T22:30:00.000 |
I have set up a database using MySQL Community Edition to log serial numbers of HDD's and file names. I am instructed to find a way to integrate Python scripting into the database so that the logs can be entered through python programming instead of manually (as manually would take a ridiculous amount of time.) Pycharm... | 0 | 0 | 1.2 | 0 | true | 44,332,632 | 0 | 173 | 1 | 0 | 0 | 44,331,043 | Reading between the lines here. I believe what you are being asked to do is called ETL. If somebody were to ask me to do the above my approach would be
Force an agreed-upon format for the incoming data (probably a .csv)
Write a Python application to: a. read the data from the csv, b. condition the data if necessary, c... | 1 | 0 | 0 | What is the best way to integrate Pycharm Python into a MySQL CE Database? | 1 | python,mysql,pycharm,mysql-python | 0 | 2017-06-02T14:12:00.000 |
I wonder if a good habit is to have some logic in MySQL database (triggers etc.) instead of logic in Django backend. I'm aware of fact that some functionalities may be done both in backend and in database but I would like to do it in accordance with good practices. I'm not sure I should do some things manually or maybe... | 0 | 1 | 0.197375 | 0 | false | 44,354,531 | 1 | 133 | 1 | 0 | 0 | 44,353,532 | It is true that if you used a database for your business logic you could get maximum possible performance and security optimizations. However, you would also risk many things such as
No separation of concerns
Being bound to the database vendor
etc.
Also, whatever logic you write in your database won't be version con... | 1 | 0 | 0 | Django - backend logic vs database logic | 1 | python,mysql,django,django-models,django-rest-framework | 0 | 2017-06-04T11:24:00.000 |
I am writing software that manipulates Excel sheets. So far, I've been using xlrd and xlwt to do so, and everything works pretty well.
It opens a sheet (xlrd) and copies select columns to a new workbook (xlwt)
It then opens the newly created workbook to read data (xlrd) and does some math and formatting with the data ... | 4 | 4 | 1.2 | 0 | true | 44,417,110 | 0 | 1,717 | 1 | 0 | 0 | 44,387,732 | Fundamentally, there is no reason you need to read twice and save twice. For your current (no charts) process, you can just read the data you need using xlrd; then do all your processing; and write once with xlwt.
Following this workflow, it is a relatively simple matter to replace xlwt with XlsxWriter. | 1 | 0 | 0 | Saving XlsxWriter workbook more than once | 1 | python,excel,xlrd,xlsxwriter | 0 | 2017-06-06T10:37:00.000 |
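A minimal sketch of the single-pass workflow: read once, process everything in memory, write once. The xlrd and XlsxWriter calls are only indicated in comments (those libraries may not be installed); the in-memory processing step is shown as a pure function, and the column selection is an assumed example.

```python
# Single-pass workflow: read once (xlrd), process in memory, write once (XlsxWriter).
# With xlrd the read would be roughly:
#   book = xlrd.open_workbook("in.xls"); sheet = book.sheet_by_index(0)
#   rows = [sheet.row_values(r) for r in range(sheet.nrows)]

def process(rows, keep_cols):
    """Keep selected columns and append a computed total per row."""
    out = []
    for row in rows:
        selected = [row[c] for c in keep_cols]
        out.append(selected + [sum(v for v in selected if isinstance(v, (int, float)))])
    return out

rows = [["a", 1, 2, "x"], ["b", 3, 4, "y"]]
result = process(rows, keep_cols=[1, 2])
print(result)  # [[1, 2, 3], [3, 4, 7]]

# Writing with XlsxWriter, one save at the end (charts can be added here too):
#   wb = xlsxwriter.Workbook("out.xlsx")
#   ws = wb.add_worksheet()
#   for r, row in enumerate(result):
#       ws.write_row(r, 0, row)
#   wb.close()
```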
I'm running pandas read_sql_query and cx_Oracle 6.0b2 to retrieve data from an Oracle database I've inherited to a DataFrame.
A field in many Oracle tables has data type NUMBER(15, 0) with unsigned values. When I retrieve data from this field the DataFrame reports the data as int64 but the DataFrame values have 9 or fe... | 0 | 1 | 1.2 | 1 | true | 44,519,380 | 0 | 577 | 1 | 0 | 0 | 44,392,676 | Removing pandas and just using cx_Oracle still resulted in an integer overflow so in the SQL query I'm using:
CAST(field AS NUMBER(19))
At this moment I can only guess that any field between NUMBER(11) and NUMBER(18) will require an explicit CAST to NUMBER(19) to avoid the overflow. | 1 | 0 | 0 | pandas read_sql_query returns negative and incorrect values for Oracle Database number field containing positive values | 1 | python,sql,oracle,pandas,dataframe | 0 | 2017-06-06T14:24:00.000 |
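The explicit CAST can be generated rather than hand-written when many fields are affected. A small sketch (the table and column names are hypothetical; the resulting SQL would be passed to pandas.read_sql_query or cx_Oracle as usual):

```python
def cast_number_cols(table, cols, cast_cols, precision=19):
    """Build a SELECT that CASTs wide NUMBER columns to avoid the overflow."""
    parts = []
    for col in cols:
        if col in cast_cols:
            parts.append("CAST(%s AS NUMBER(%d)) AS %s" % (col, precision, col))
        else:
            parts.append(col)
    return "SELECT %s FROM %s" % (", ".join(parts), table)

sql = cast_number_cols("accounts", ["id", "serial_no"], cast_cols={"serial_no"})
print(sql)  # SELECT id, CAST(serial_no AS NUMBER(19)) AS serial_no FROM accounts
```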
I have a PostgreSQL database that is being used by a front-end application built with Django, but being populated by a scraping tool in Node.js. I have made a sequence that I want to use across two different tables/entities, which can be accessed by a function (nextval('serial')) and is called on every insert. This is not...
Override the save method of the model, using a raw query to SELECT nextval('serial') inside the override, setting that as the value of the necessary field, then call save on the parent (super(PARENT, self).save()). | 1 | 0 | 0 | Postgres Sequences as Default Value for Django Model Field | 2 | python,django,postgresql,orm | 0 | 2017-06-08T19:47:00.000 |
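The override pattern can be sketched like this. The Django portion is shown only in comments (the `serial` field name and `PARENT` placeholder follow the answer); the stub cursor below exists purely to demonstrate the call shape without a live PostgreSQL connection.

```python
def next_sequence_value(cursor, seq_name="serial"):
    """Fetch the next value of a Postgres sequence via a raw query."""
    cursor.execute("SELECT nextval(%s)", (seq_name,))
    return cursor.fetchone()[0]

# Inside the Django model, the overridden save would look roughly like:
#   def save(self, *args, **kwargs):
#       if self.serial is None:
#           with connection.cursor() as cur:
#               self.serial = next_sequence_value(cur)
#       super(PARENT, self).save(*args, **kwargs)

# Stub cursor for illustration only -- records the query instead of running it
class StubCursor:
    def execute(self, sql, params=None):
        self.sql, self.params = sql, params
    def fetchone(self):
        return (42,)

cur = StubCursor()
value = next_sequence_value(cur)
print(value, cur.sql)  # 42 SELECT nextval(%s)
```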
In my file there are MAX and MIN formulas in a row.
Sample
CELLS - | A | B | C | D | E | F | G | H |
ROW: | MAX | MIN | MIN | MAX | MIN | MIN | MAX | MIN | MIN
When the Excel sheet is opened, a green triangle is displayed with the warning message "Inconsistent Formula". | 0 | 1 | 0.197375 | 0 | false | 44,523,601 | 0 | 523 | 1 | 0 | 0 | 44,521,638 | This is a standard Excel warning to alert users to the fact that repeated and adjacent formulas are different, since that may be an error.
It isn't possible to turn off this warning in XlsxWriter. | 1 | 0 | 0 | How to Ignore "Inconsistent Formula" warning showing in generated .xlsx file using the python xlsxwriter? | 1 | python,excel,python-2.7,xlsxwriter | 0 | 2017-06-13T12:31:00.000 |
I'm making an AJAX request that calls a PHP file, which in turn calls a Python file. My main problem is with the imports in the Python scripts. I'm working locally.
I'm on Linux. When I run "$ php myScript.php" (which calls the Python script inside) it works, but when it comes from the AJAX call, the import of the pyth... | 0 | 0 | 0 | 0 | false | 44,577,194 | 1 | 104 | 1 | 0 | 0 | 44,558,269 | I finally succeeded. In fact tweepy uses the library called "six", which was not in my current folder. So I imported all the Python libraries into my folder and got no more errors.
But I still don't understand why Python does not search for the library in its normal folder instead of in the current folder. | 1 | 0 | 0 | executing python through ajax, import does not work | 1 | php,python,ajax,import,directory | 1 | 2017-06-15T03:56:00.000 |
I have data in CSV, in which one column is for fiscal year.
e.g. 2017 - 2019.
Please specify how to form the CREATE TABLE query and INSERT query with the fiscal year as a field. | 0 | 0 | 1.2 | 0 | true | 44,560,766 | 0 | 130 | 1 | 0 | 0 | 44,560,315 | Since it seems like a range of years for a fiscal position, I would suggest using two integer fields to store the data,
and since years are 4-digit numbers, use type SMALLINT; this way you use half of the storage space of an INT field. | 1 | 0 | 0 | How to store fiscal year (eg. 2017-2020) in mysql? | 1 | python,mysql,csv | 0 | 2017-06-15T06:37:00.000 |
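Putting the two suggestions together, a sketch of the CREATE TABLE and INSERT with the fiscal year split into two SMALLINT columns. sqlite3 stands in for MySQL here (the same DDL shape works in MySQL); the table and column names are assumptions.

```python
import sqlite3

def parse_fiscal(value):
    """Split a CSV value like '2017 - 2019' into start and end years."""
    start, end = value.split("-")
    return int(start), int(end)

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Two small integer columns instead of one text field
cur.execute("CREATE TABLE projects (name TEXT, fy_start SMALLINT, fy_end SMALLINT)")

start, end = parse_fiscal("2017 - 2019")
cur.execute("INSERT INTO projects VALUES (?, ?, ?)", ("alpha", start, end))

row = cur.execute("SELECT * FROM projects").fetchone()
print(row)  # ('alpha', 2017, 2019)
```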
I'm trying to load an XML file into Google BigQuery; can anyone please help me solve this? I know we can load JSON, CSV and AVRO files into BigQuery. I need a suggestion: is there any way I can load an XML file into BigQuery? | 0 | 1 | 0.197375 | 0 | false | 44,590,333 | 0 | 834 | 1 | 0 | 1 | 44,588,770 | The easiest option is probably to convert your XML file either to CSV or to JSON and then load it. Without knowing the size and shape of your data it's hard to make a recommendation, but you can find a variety of converters if you search online for them. | 1 | 0 | 0 | how to load xml file into big query | 1 | xml,python-2.7,google-bigquery | 0 | 2017-06-16T12:00:00.000 |
Firstly, this question isn't a request for code suggestions- it's more of a question about a general approach others would take for a given problem.
I've been given the task of writing a web application in python to allow users to check the content of media files held on a shared server. There will also likely be a po... | 0 | 1 | 1.2 | 0 | true | 44,608,243 | 1 | 49 | 1 | 0 | 0 | 44,606,342 | These requirements are more or less straightforward to follow. Given that you will have a persistent database that can share the state of each file with multiple sessions - and even multiple deploys - of your system - and that is more or less a given with Python + PostgreSQL.
I'd suggest you to create a Python class w... | 1 | 0 | 0 | Python web app ideas- incremental/unique file suggestions for multiple users | 1 | python,postgresql,python-3.x,flask | 0 | 2017-06-17T15:37:00.000 |
I have a table of size 15 GB in DynamoDB. Now I need to transfer some data based on timestamps ( which is in db) to another DynamoDB. What would be the most efficient option here?
a) Transfer to S3,process with pandas or someway and put in the other table (data is huge. I feel this might take a huge time)
b) Through Da... | 1 | 1 | 0.197375 | 0 | false | 44,612,782 | 0 | 831 | 1 | 0 | 0 | 44,608,785 | I would suggest going with the data pipeline into S3 approach. And then have a script to read from S3 and process your records. You can schedule this to run on regular intervals to backup all your data. I don't think that any solution that does a full scan will offer you a faster way, because it is always limited by re... | 1 | 0 | 0 | Data transfer from DynamoDB table to another DynamoDB table | 1 | python,hive,amazon-emr,amazon-data-pipeline | 0 | 2017-06-17T19:48:00.000 |
I connect to a postgres database hosted on AWS. Is there a way to find out the number of open connections to that database using python API? | 0 | 1 | 1.2 | 0 | true | 44,621,822 | 0 | 60 | 1 | 0 | 0 | 44,621,606 | I assume this is for RDS. There is no direct way via the AWS API. You could potentially get it from CloudWatch but you'd be better off connecting to the database and getting the count that way by querying pg_stat_activity. | 1 | 0 | 0 | Finding number of open connections to postgres database | 1 | python,postgresql,amazon-web-services | 0 | 2017-06-19T02:52:00.000 |
I want to find the same words in two different Excel workbooks. I have two workbooks (data.xls and data1.xls). If a word in data.xls also appears in data1.xls, I want to print the row of data1.xls that contains that word. I hope you can help me. Thank you. | 0 | 0 | 0 | 0 | false | 44,626,892 | 0 | 67 | 1 | 0 | 0 | 44,626,578 | I am assuming that both excel sheets have a list of words, with one word in each cell.
The best way to write this program would be something like this:
Open the first excel file, you might find it easier to open if you export it as a CSV first.
Create a Dictionary to store word and Cell Index Pairs
Iterate over each... | 1 | 0 | 1 | python- how to find same words in two different excel workbooks | 1 | excel,windows,python-2.7 | 0 | 2017-06-19T09:17:00.000 |
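Following the answer's suggestion to export each sheet as CSV first, the dictionary approach can be sketched with the stdlib csv module; in-memory strings stand in for the exported data.csv and data1.csv, and the word lists are made up for illustration.

```python
import csv
import io

# The two workbooks exported as CSV (in-memory stand-ins for data.csv / data1.csv)
data_csv = "apple\nbanana\ncherry\n"
data1_csv = "banana\ndate\napple\n"

# Build a dict of word -> row number from the first file
words = {}
for i, row in enumerate(csv.reader(io.StringIO(data_csv)), start=1):
    if row:
        words[row[0]] = i

# Report rows of the second file whose word also appears in the first
matches = []
for i, row in enumerate(csv.reader(io.StringIO(data1_csv)), start=1):
    if row and row[0] in words:
        matches.append((i, row[0]))
print(matches)  # [(1, 'banana'), (3, 'apple')]
```

The dictionary lookup keeps the comparison linear instead of scanning the first list for every word in the second.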
I'm no longer able to change my database on ArangoDB.
If I try to create a collection I get the error:
Collection error: cannot create collection: invalid database directory
If I try to delete a collection I get the error:
Couldn't delete collection.
Besides that some of the collections are now corrupted.
I've ... | 1 | 1 | 0.197375 | 0 | false | 44,649,139 | 1 | 111 | 1 | 0 | 0 | 44,632,365 | If anyone gets the same error anytime in life, it was just a temporary error due to server overload. | 1 | 0 | 0 | Python ArangoDB | 1 | python,arangodb | 0 | 2017-06-19T13:47:00.000 |
I'm using google cloudSQL for applying advance search on people data to fetch the list of users. In datastore, there are data already stored there with 2 model. First is used to track current data of users and other model is used to track historical timeline. The current data is stored on google cloudSQL are more than ... | 1 | 0 | 0 | 0 | false | 44,715,500 | 1 | 81 | 2 | 1 | 0 | 44,654,127 | @Kevin Malachowski : Thanks for guiding me with your info and questions as It gave me new way of thinking.
Historical data records will be more than 0.3-0.5 million(maximum). Now I'll use BigQuery for historical advance search.
For live data, cloudSQL will be used as we must focus on performance for fetched data.
Some o... | 1 | 0 | 0 | Google CloudSQL : structuring history data on cloudSQL | 2 | python,google-app-engine,google-cloud-sql | 0 | 2017-06-20T13:14:00.000 |
I'm using google cloudSQL for applying advance search on people data to fetch the list of users. In datastore, there are data already stored there with 2 model. First is used to track current data of users and other model is used to track historical timeline. The current data is stored on google cloudSQL are more than ... | 1 | 0 | 0 | 0 | false | 44,662,852 | 1 | 81 | 2 | 1 | 0 | 44,654,127 | Depending on how often you want to do live queries vs historical queries and the size of your data set, you might want to consider placing the historical data elsewhere.
For example, if you need quick queries for live data and do many of them, but can handle higher-latency queries and only execute them sometimes, you m... | 1 | 0 | 0 | Google CloudSQL : structuring history data on cloudSQL | 2 | python,google-app-engine,google-cloud-sql | 0 | 2017-06-20T13:14:00.000 |
Me and my group are currently working on a school project where we need to use an online python compiler, since we are not allowed to install or download any software on their computers. The project requires me to read data from a .xlsx file.
Is there any online IDE with xlrd that can read the file that is on the schoo... | 0 | -1 | -0.066568 | 0 | false | 72,494,283 | 0 | 2,482 | 2 | 0 | 0 | 44,686,664 | import tabula
Read a PDF File
df = tabula.read_pdf("file:///C:/Users/tanej/Desktop/salary.pdf", pages='all')[0]
convert PDF into CSV
tabula.convert_into("file:///C:/Users/tanej/Desktop/salary.pdf", "file:///C:/Users/tanej/Desktop/salary.csv", output_format="csv", pages='all')
print(df) | 1 | 0 | 1 | Read excel file with an online Python compiler with xlrd | 3 | python,xlsx,xlrd,online-compilation | 0 | 2017-06-21T21:36:00.000 |
Me and my group are currently working on a school project where we need to use an online python compiler, since we are not allowed to install or download any software on their computers. The project requires me to read data from a .xlsx file.
Is there any online IDE with xlrd that can read the file that is on the schoo... | 0 | 0 | 0 | 0 | false | 44,687,122 | 0 | 2,482 | 2 | 0 | 0 | 44,686,664 | Could the pandas package and its pandas.read_clipboard function help? You'd need to copy the content of the file manually to the clipboard before starting your script.
Alternatively - is it considered cheating to just rent a server? Pretty cheap these days.
Finally: you don't usually require admin rights to install Pyt... | 1 | 0 | 1 | Read excel file with an online Python compiler with xlrd | 3 | python,xlsx,xlrd,online-compilation | 0 | 2017-06-21T21:36:00.000 |
I am trying to retrieve a large amount of data (more than 7 million rows) from a database and trying to save it as a flat file. The data is being retrieved using Python code (Python calls a stored procedure). But I am having a problem here. The process is eating up so much memory, hence killing the process automatically by unix mac... | 3 | 0 | 0 | 0 | false | 44,707,209 | 0 | 6,717 | 1 | 0 | 0 | 44,706,706 | Rather than using the pandas library, make a database connection directly (using psycopg2, pymysql, pyodbc, or other connector library as appropriate) and use Python's db-api to read and write rows concurrently, either one-by-one or in whatever size chunks you can handle. | 1 | 0 | 0 | Reading and writing large volume of data in Python | 3 | python,sql,pandas | 0 | 2017-06-22T18:15:00.000 |
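The chunked db-api pattern the answer describes can be sketched with stdlib sqlite3 standing in for the real connector (psycopg2/pymysql/pyodbc); the table and chunk size are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE big (id INTEGER)")
cur.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(10)])

# Stream rows in fixed-size chunks instead of loading everything at once,
# writing each chunk to the flat file before fetching the next.
out_lines = []
read = conn.cursor()
read.execute("SELECT id FROM big")
while True:
    chunk = read.fetchmany(4)  # chunk size tuned to available memory
    if not chunk:
        break
    out_lines.extend("%d\n" % row[0] for row in chunk)

print(len(out_lines))  # 10
```

Only one chunk is in memory at a time, so peak usage stays bounded regardless of the 7 million rows.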
I have a basic personal project website that I am looking to learn some web dev fundamentals with and database (SQL) fundamentals as well (If SQL is even the right technology to use??).
I have the basic skeleton up and running but as I am new to this, I want to make sure I am doing it in the most efficient and "correct... | 0 | 2 | 1.2 | 0 | true | 44,715,054 | 1 | 181 | 1 | 0 | 0 | 44,714,345 | This kind of data is called time series. There are specialized database engines for time series, but with a not-extreme volume of observations - (timestamp, wave heigh, wind, tide, which break it is) tuples - a SQL database will be perfectly fine.
Try to model your data as a table in Postgres or MySQL. Start by making ... | 1 | 0 | 0 | Flask website backend structure guidance assistance? | 1 | python,sql,web,flask | 0 | 2017-06-23T06:20:00.000 |
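A sketch of the time-series table the answer describes, with sqlite3 standing in for Postgres/MySQL; the column names are assumptions drawn from the question's examples (wave height, wind, tide, break).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# One row per observation: a timestamp plus the measured conditions
cur.execute("""
    CREATE TABLE observations (
        observed_at TEXT NOT NULL,   -- TIMESTAMP in Postgres/MySQL
        break_name  TEXT NOT NULL,
        wave_height REAL,
        wind_speed  REAL,
        tide        REAL
    )
""")
cur.execute("INSERT INTO observations VALUES (?, ?, ?, ?, ?)",
            ("2017-06-23T06:00:00", "pipeline", 2.5, 12.0, 0.8))

# "Most recent reading for a break" style query
latest = cur.execute(
    "SELECT wave_height FROM observations "
    "WHERE break_name = ? ORDER BY observed_at DESC LIMIT 1",
    ("pipeline",)
).fetchone()
print(latest)  # (2.5,)
```

An index on (break_name, observed_at) would keep this query fast as the history grows.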