Question (string, 25 to 7.47k chars) | Q_Score (int64, 0 to 1.24k) | Users Score (int64, -10 to 494) | Score (float64, -1 to 1.2) | Data Science and Machine Learning (int64, 0 to 1) | is_accepted (bool, 2 classes) | A_Id (int64, 39.3k to 72.5M) | Web Development (int64, 0 to 1) | ViewCount (int64, 15 to 1.37M) | Available Count (int64, 1 to 9) | System Administration and DevOps (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | Q_Id (int64, 39.1k to 48M) | Answer (string, 16 to 5.07k chars) | Database and SQL (int64, 1 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Title (string, 15 to 148 chars) | AnswerCount (int64, 1 to 32) | Tags (string, 6 to 90 chars) | Other (int64, 0 to 1) | CreationDate (string, 23 chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I'm creating an Excel file from pandas and I'm using worksheet.hide_gridlines(2).
The problem is that all gridlines are hidden in my current worksheet. I need to hide only a range of cells, for example A1:I80. How can I do that? | 3 | 1 | 0.099668 | 1 | false | 46,747,332 | 0 | 2,139 | 1 | 0 | 0 | 46,745,120 | As far as I know, it isn't possible in Excel to hide gridlines for a range. Gridlines are either on or off for the entire worksheet.
As a workaround you could turn the gridlines off and then add a border to each cell where you want them displayed.
As a first step you should figure out how you would do what you want to... | 1 | 0 | 0 | Set worksheet.hide_gridlines(2) to certain range of cells | 2 | excel,python-2.7,xlsxwriter | 0 | 2017-10-14T13:29:00.000 |
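A minimal sketch of the border workaround described above, using XlsxWriter directly; the filename and the range that keeps its "gridlines" are illustrative, not from the question:

```python
import xlsxwriter

# Workaround sketch: gridlines can only be toggled per worksheet, so hide
# them everywhere and re-draw a light border wherever they should remain.
workbook = xlsxwriter.Workbook("no_gridlines_range.xlsx")
worksheet = workbook.add_worksheet()
worksheet.hide_gridlines(2)  # hide on screen and when printed

gridline = workbook.add_format({"border": 1, "border_color": "#D4D4D4"})

# Keep "gridlines" on J1:T80 while A1:I80 stays blank.
for row in range(80):
    for col in range(9, 20):  # zero-indexed columns J..T
        worksheet.write_blank(row, col, None, gridline)

workbook.close()
```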
I have a website which has been built using HTML and PHP.
I have a Microsoft SQL Server database. I have connected to this database and created several charts using Python.
I want to be able to publish these graphs on my website and make the graphs live (so that they are refreshed every 5 minutes or so with latest data... | 0 | 0 | 0 | 0 | false | 46,752,813 | 1 | 161 | 1 | 0 | 0 | 46,752,760 | You could add a process to crontab to run the Python program every 5 minutes (assuming Linux). You could, alternatively, have the PHP call Python and await the refreshed file before responding with the page. | 1 | 0 | 0 | Live graphs using python on website | 1 | python,html,graph | 0 | 2017-10-15T07:33:00.000 |
I'm trying to find a way to log all queries done on a Cassandra from a python code. Specifically logging as they're done executing using a BatchStatement
Are there any hooks or callbacks I can use to log this? | 10 | 1 | 0.066568 | 0 | false | 46,839,220 | 0 | 2,113 | 1 | 1 | 0 | 46,773,522 | Have you considered creating a decorator for your execute or equivalent (e.g. execute_concurrent) that logs the CQL query used for your statement or prepared statement?
You can write this in a manner that the CQL query is only logged if the query was executed successfully. | 1 | 0 | 0 | Logging all queries with cassandra-python-driver | 3 | python,cassandra,cassandra-python-driver | 0 | 2017-10-16T15:12:00.000 |
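One way to sketch the suggested decorator in plain Python. The query_string attribute is an assumption based on the DataStax driver's statement objects; the wrapper itself works with any execute-style callable:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("cql")

def log_cql(execute):
    """Wrap an execute-style callable so every successfully executed
    statement is logged afterwards (failures propagate unlogged)."""
    @functools.wraps(execute)
    def wrapper(statement, *args, **kwargs):
        result = execute(statement, *args, **kwargs)
        # Simple statements expose .query_string in the DataStax driver;
        # fall back to str() for BatchStatement and plain strings.
        log.info("Executed CQL: %s", getattr(statement, "query_string", statement))
        return result
    return wrapper
```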
The program would follow the below steps:
Click on executable program made through python
File explorer pops up for user to choose excel file to alter
Choose excel file for executable program to alter
Spits out txt file OR excel spreadsheet with newly altered data to same folder location as the original spreadsheet | 0 | 0 | 1.2 | 0 | true | 46,803,941 | 0 | 123 | 1 | 0 | 0 | 46,803,803 | Yes this is perfectly doable. I suggest you look at PyQT5 or TkInter for the user interface, pyexcel for the excel interface and pyinstaller for packaging up an executable as you asked. There are many great tutorials on all of these modules. | 1 | 0 | 0 | Python - how to get executable program to get the windows file browser to pop up for user to choose an excel file or any other document? | 1 | python,excel,file,exe,explorer | 0 | 2017-10-18T06:09:00.000 |
After running some tests (casting a PyMongo set to a list vs iterating over the cursor and saving to a list) I've noticed that the step from cursor to data in memory is negligible. For a db cursor of about 160k records, it averages about 2.3s.
Is there anyway to make this conversion from document to object faster? Or w... | 3 | 0 | 0 | 0 | false | 53,745,939 | 0 | 871 | 1 | 0 | 0 | 46,817,939 | After some A/B testing, it seems like there isn't really a way to speed this up, unless you change your Python interpreter. Alternatively, bulk pulling from the DB could speed this up. | 1 | 0 | 1 | PyMongo Cursor to List Fastest Way Possible | 1 | python,database,python-3.x,mongodb,pymongo | 0 | 2017-10-18T19:37:00.000 |
I have different threads running which all write to the same database (though not the same table).
Currently I have it set up so that I create a connection and pass it to each thread, which then creates its own cursor for writing.
I haven't implemented the writing-to-db part yet, but am wondering if not every thread nee... | 3 | 0 | 0 | 0 | false | 46,941,465 | 0 | 2,665 | 1 | 0 | 0 | 46,869,761 | Each thread should use a distinct connection to avoid problems with inconsistent states and to make debugging easier. On web servers, this is typically achieved by using a pooled connection. Each thread (HTTP request processor) picks up a connection from the pool when it needs it and then returns it back to the pool wh...
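The one-connection-per-thread pattern can be sketched with sqlite3 from the standard library, used here only so the example runs anywhere; with psycopg2 you would call psycopg2.connect(...) inside each worker instead:

```python
import sqlite3
import threading

DB = "per_thread_demo.db"

def worker(table):
    # Each thread opens its own connection (and cursor), commits, closes.
    conn = sqlite3.connect(DB)
    try:
        conn.execute(f"CREATE TABLE IF NOT EXISTS {table} (v INTEGER)")
        conn.execute(f"INSERT INTO {table} (v) VALUES (1)")
        conn.commit()
    finally:
        conn.close()

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```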
I have a PostgreSQL database in which I am collecting reports from 4 different producers. Back when I wrote this I defined 4 different schemas (one per producer) and since the reports are similar in structure each schema has exactly the same tables inside. I'd like to combine the schemas into one and add an extra colum... | 2 | 0 | 0 | 0 | false | 46,879,648 | 0 | 80 | 2 | 0 | 0 | 46,879,611 | For simple INSERTs, yes, you can safely have four producers adding rows. I'm assuming you don't have long running queries, as consistent reads can require allocating an interesting amount of log space if inserts keep happening during an hour-long JOIN.
if I am inserting large amounts of data and one insert causes anot... | 1 | 0 | 0 | Can I safely combine my schemas | 2 | python,postgresql,sqlalchemy,flask-sqlalchemy | 0 | 2017-10-22T22:06:00.000 |
I have a PostgreSQL database in which I am collecting reports from 4 different producers. Back when I wrote this I defined 4 different schemas (one per producer) and since the reports are similar in structure each schema has exactly the same tables inside. I'd like to combine the schemas into one and add an extra colum... | 2 | 1 | 1.2 | 0 | true | 46,879,647 | 0 | 80 | 2 | 0 | 0 | 46,879,611 | This won't be a problem if you are using a proper db such as Postgres or MySQL. They are designed to handle this.
If you are using sqlite then it could break. | 1 | 0 | 0 | Can I safely combine my schemas | 2 | python,postgresql,sqlalchemy,flask-sqlalchemy | 0 | 2017-10-22T22:06:00.000 |
I want to add docx.table.Table and docx.text.paragraph.Paragraph objects to documents.
Currently
table = document.add_table(rows=2, cols=2)
Would create a new table inside the document, and table would hold the docx.table.Table object with all its properties.
What I want to do instead is add a table OBJECT to the do... | 0 | 2 | 0.379949 | 0 | false | 46,897,992 | 0 | 1,713 | 1 | 0 | 0 | 46,897,003 | There are a few different possibilities your description would admit, but none of them have direct API support in python-docx.
The simplest case is copying a table from one part of a python-docx Document object to another location in the same document. This can probably be accomplished by doing a deep copy of the XML f... | 1 | 0 | 0 | python-docx add table object to document | 1 | python,python-docx | 0 | 2017-10-23T19:22:00.000 |
I am building a warehouse consisting of data that's found from a public facing API. In order to store & analyze the data, I'd like to save the JSON files I'm receiving into a structured SQL database. Meaning, all the JSON contents shouldn't be contained in 1 column. The contents should be parsed out and stored in va... | 1 | 0 | 0 | 0 | false | 46,899,529 | 0 | 1,594 | 1 | 0 | 0 | 46,898,834 | You should be able to use json.dumps(json_value) to convert your JSON object into a JSON string that can be put into an sql database. | 1 | 0 | 1 | Save JSON file into structured database with Python | 1 | python,sql,json,database,data-warehouse | 0 | 2017-10-23T21:28:00.000 |
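A small sketch of the parsing step with the standard library; the payload and column names are hypothetical, and sqlite3 stands in for the real warehouse:

```python
import json
import sqlite3

# Hypothetical API payload; field names are made up for illustration.
payload = '{"id": 7, "name": "widget", "price": 9.99}'
record = json.loads(payload)  # JSON text -> Python dict

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT, price REAL)")
conn.execute(
    "INSERT INTO items (id, name, price) VALUES (:id, :name, :price)",
    record,  # each JSON key feeds its own column
)
row = conn.execute("SELECT id, name, price FROM items").fetchone()
print(row)  # -> (7, 'widget', 9.99)
```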
I am running a uWSGI application on my Linux Mint. It does work with a database and shows it on my localhost. I run it on IP 127.0.0.1, port 8080. After that I want to test its performance with ab (Apache Benchmark).
When I run the app with the command uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi and get test of i... | 0 | 0 | 0 | 0 | false | 47,568,008 | 1 | 238 | 1 | 1 | 0 | 46,927,517 | It has been solved.
The point is that you should create a separate connection for each completely separate query to avoid missing data during query execution | 1 | 0 | 0 | uwsgi application stops with error when running it with multi thread | 1 | python,multithreading,server,uwsgi | 1 | 2017-10-25T08:26:00.000 |
I am trying to get the current user of the db I have. But I couldn't find a way to do that and there are no questions on Stack Overflow similar to this. In PostgreSQL there is a method current_user. For example, I could just say SELECT current_user and I would get a table with the current user's name. Is there something ... | 1 | 1 | 0.066568 | 0 | false | 65,727,720 | 0 | 2,399 | 1 | 0 | 0 | 47,038,961 | If you use the flask-login module of Flask, you could just import the current_user function with from flask_login import current_user.
Then you could just get it from the database and db model (for instance Sqlite/SqlAlchemy) if you save it in a database:
u_id = current_user.id
u_email = current_user.email
u_name = current_us... | 1 | 0 | 0 | SqlAlchemy current db user | 3 | python,python-2.7,sqlalchemy | 0 | 2017-10-31T15:24:00.000 |
I use Pandas with Jupyter notebook a lot. After I ingest a table in from using pandas.read_sql, I would preview it by doing the following:
data = pandas.read_sql("""blah""")
data
One problem that I have been running into is that all my preview tables will disappear if I reopen my .ipynb
Is there a way to prevent that ... | 0 | 0 | 0 | 1 | false | 47,042,891 | 0 | 71 | 1 | 0 | 0 | 47,042,689 | Are you explicitly saving your notebook before you re-open it? A Jupyter notebook is really just a large json object, eventually rendered as a fancy html object. If you save the notebook, illustrations and diagrams should be saved as well. If that doesn't do the trick, try putting the one-liner "data" in a different ce... | 1 | 0 | 0 | How to prevent charts or tables to disappear when I re-open Jupyter Notebook? | 1 | python,ipython,jupyter-notebook,ipython-notebook | 0 | 2017-10-31T18:53:00.000 |
I need to read the whole geoip2 database and insert that data into SQL lite database. I tried to read the .mmdb file in the normal way but it prints random characters. | 0 | 1 | 0.197375 | 0 | false | 47,048,122 | 0 | 845 | 1 | 0 | 0 | 47,047,727 | You should be able to download CSV file and import it into SQL lite. | 1 | 0 | 1 | Can we read the geoip2 database file with .mmdb format like normal file in Python? | 1 | python,maxmind,geoip2 | 0 | 2017-11-01T03:21:00.000 |
When I run from flask.ext.mysql import MySQL I get the warning Importing flask.ext.mysql is deprecated, use flask_mysql instead.
So I installed flask_mysql using pip install flask_mysql, and it installed successfully, but then when I run from flask_mysql import MySQL I get the error No module named flask_mysql. In the first... | 4 | 3 | 1.2 | 0 | true | 47,117,043 | 0 | 3,318 | 1 | 0 | 0 | 47,116,912 | flask.ext. is a deprecated pattern which was used prevalently in older extensions and tutorials. The warning is telling you to replace it with the direct import, which it guesses to be flask_mysql. However, Flask-MySQL is using an even more outdated pattern, flaskext.. There is nothing you can do about that besides con...
I'm trying to use wb = load_workbook(filename), but whether I work in the Python console or call it from a script, it hangs for a while, then my laptop completely freezes. I can't switch to a console to reboot, can't restart X, etc. (UPD: CPU consumption is at 100% at that moment; memory consumption is only 5%.) Has anybody met such ... | 1 | 0 | 0 | 0 | false | 47,125,299 | 0 | 1,046 | 2 | 0 | 0 | 47,123,188 | The warning is exactly that: a warning about some aspect of the file being removed. But it has nothing to do with the rest of the question. I suspect you are running out of memory. How much memory is openpyxl using when the laptop freezes? | 1 | 0 | 0 | openpyxl load_workbook() freezes | 2 | python,openpyxl | 0 | 2017-11-05T15:19:00.000 |
I'm trying to use wb = load_workbook(filename), but whether I work in the Python console or call it from a script, it hangs for a while, then my laptop completely freezes. I can't switch to a console to reboot, can't restart X, etc. (UPD: CPU consumption is at 100% at that moment; memory consumption is only 5%.) Has anybody met such ... | 1 | 0 | 0 | 0 | false | 64,410,841 | 0 | 1,046 | 2 | 0 | 0 | 47,123,188 | I had this issue, kind of... I had been editing my Excel workbook. I ended up accidentally pasting a space into an almost infinite number of rows; ya know... like a lot. I selected all empty cells and hit delete, saved the workbook, problem gone. | 1 | 0 | 0 | openpyxl load_workbook() freezes | 2 | python,openpyxl | 0 | 2017-11-05T15:19:00.000 |
I want to insert a date and time into Mongo, using pymongo.
However, I can insert a datetime but not just a date or time.
Here is the example code:
now = datetime.datetime.now()
log_date = now.date()
log_time = now.time()
self.logs['test'].insert({'log_date_time': now, 'log_date':log_date, 'log_time':log_time})
it show err... | 0 | 0 | 0 | 0 | false | 47,148,634 | 0 | 515 | 1 | 0 | 0 | 47,148,516 | You are experiencing the defined behavior. MongoDB has a single datetime type (datetime). There are no separate, discrete types of just date or just time.
Workarounds: Plenty, but food for thought:
Storing just date is straightforward: assume Z time, use a time component of 00:00:00, and ignore the time offset upo... | 1 | 0 | 1 | questions about using pymongo to insert date and time into mongo | 1 | python,mongodb,datetime,pymongo | 0 | 2017-11-07T01:23:00.000 |
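The midnight workaround from the answer can be done with datetime.combine before inserting; only the conversion is shown here, with no MongoDB call:

```python
import datetime

now = datetime.datetime.now()
log_date = now.date()

# BSON has a single datetime type, so promote the bare date to a
# datetime at midnight before inserting it with pymongo.
storable = datetime.datetime.combine(log_date, datetime.time.min)
```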
I wonder how the Postgres server determines when to close a DB connection if I forgot to close it on the Python source-code side.
Does the Postgres server send a ping to the source code? From my understanding, this is not possible. | 0 | 2 | 0.197375 | 0 | false | 47,166,411 | 0 | 41 | 1 | 0 | 0 | 47,166,301 | When your script quits your connection will close and the server will clean it up accordingly. Likewise, it's often the case in garbage collected languages like Python that when you stop using the connection and it falls out of scope it will be closed and cleaned up.
It is possible to write code that never releases the... | 1 | 0 | 0 | How does Postges Server know to keep a database connection open | 2 | python,database,postgresql | 0 | 2017-11-07T19:49:00.000 |
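A sketch of tying the connection's lifetime to a block so it cannot be forgotten; sqlite3 stands in for psycopg2 here, since both expose the same close() call:

```python
import sqlite3
from contextlib import closing

# Pattern sketch: the with-block closes the connection even if the code
# inside raises, so nothing is left dangling for the server to reap.
with closing(sqlite3.connect(":memory:")) as conn:
    value = conn.execute("SELECT 1").fetchone()[0]
```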
I am deploying a Jupyter notebook(using python 2.7 kernel) on client side which accesses data on a remote and does processing in a remote Spark standalone cluster (using pyspark library). I am deploying spark cluster in Client mode. The client machine does not have any Spark worker nodes.
The client does not have enoug... | 0 | 0 | 0 | 1 | false | 47,173,911 | 0 | 187 | 1 | 0 | 0 | 47,173,286 | If i understand correctly, then what you will get on the client side is an int. At least should be, if setup correctly. So the answer is no, the DF is not going to hit your local RAM.
You are interacting with the cluster via SparkSession (SparkContext for earlier versions). Even though you are developing -i.e. writing ... | 1 | 0 | 0 | Where is RDD or Spark SQL dataframe stored or persisted in client deploy mode on a Spark 2.1 Standalone cluster? | 1 | python,pyspark,apache-spark-sql,spark-dataframe | 0 | 2017-11-08T06:50:00.000 |
I have made several tables in a Postgres database in order to acquire data with time values and do automatic calculation in order to have directly compiled values. Everything is done using triggers that will update the right table in case of modification of values.
For example, if I update or insert a value measured @ ... | 0 | 0 | 0 | 0 | false | 47,226,539 | 1 | 1,413 | 1 | 0 | 0 | 47,209,114 | The problem resulted from an error in time zone management.
I am looking for another method to convert .accdb to .db without using CSV export and a separator method to create the new database. Does Access have a built-in option to export files into .db? | 0 | 2 | 1.2 | 0 | true | 47,248,148 | 0 | 1,624 | 1 | 0 | 0 | 47,247,790 | Access has built-in ODBC support.
You can use ODBC to export tables to SQLite. You do need to create the database first. | 1 | 0 | 0 | How to convert .accdb to db | 1 | python,sqlite,ms-access | 0 | 2017-11-12T10:35:00.000 |
I've got a google app engine application that loads time series data real-time into a google datastore nosql style table. I was hoping to get some feedback around the right type of architecture to pull this data into a web application style chart (and ideally something I could also plug into a content management system... | 0 | 0 | 0 | 0 | false | 47,257,054 | 1 | 70 | 1 | 1 | 0 | 47,254,930 | Welcome to "it depends".
You have some choices. Imagine the classic four-quadrant chart. Along one axis is data size, along the other is staleness/freshness.
If your time-series data changes rapidly but is small enough to safely be retrieved within a request, you can query for it on demand, convert it to JSON, and squi... | 1 | 0 | 0 | Load chart data into a webapp from google datastore | 1 | python,wordpress,google-app-engine,charts,google-cloud-datastore | 0 | 2017-11-12T22:55:00.000 |
I've used Excel in the past to fetch daily price data on more than 1000 equity securities over a period of a month and it was a really slow experience (1 hour wait in some circumstances) since I was making a large amount of calls using the Bloomberg Excel Plugin.
I've always wondered if there was a substantial perform... | 0 | 1 | 1.2 | 0 | true | 47,331,288 | 0 | 418 | 1 | 0 | 0 | 47,319,322 | I have only used the Python API, and via wrappers. As such I imagine there are ways to get data faster than what I currently do.
But for what I do, I'd say I can get a few years of daily data for roughly 50 securities in a matter of seconds.
So I imagine it could improve your workflow to move to a more robust API.
Rega... | 1 | 0 | 0 | Accessing Bloomberg's API through Excel vs. Python / Java / other programming languages | 1 | java,python,excel,api,bloomberg | 0 | 2017-11-15T23:54:00.000 |
I am using Python 3 and Django, with PostgreSQL as the database, and I want to use the ThingsBoard dashboard in my web application. Can anyone please guide me on how I can use it? | 0 | 0 | 1.2 | 0 | true | 47,798,648 | 1 | 304 | 1 | 0 | 0 | 47,454,686 | ThingsBoard has APIs which you can use. You may also customise it based on your requirements.
OS: Ubuntu 17.10
Python: 2.7
SUBLIME TEXT 3:
I am trying to import mysql.connector,
ImportError: No module named connector
Although, when i try import mysql.connector in python shell, it works.
Earlier it was working fine, I just upgraded Ubuntu and somehow mysql connector is not working.
I have tried reinstalling mysq... | 2 | 0 | 0 | 0 | false | 52,473,839 | 0 | 722 | 1 | 0 | 0 | 47,588,910 | I am now using Python 3.6; mysql.connector is working best for me.
OS: Ubuntu 18.04 | 1 | 0 | 0 | MySQL Connector not Working: NO module named Connector | 2 | python,mysql,ubuntu,sublimetext3,mysql-connector | 0 | 2017-12-01T07:55:00.000 |
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-00000000004... | 14 | 0 | 0 | 0 | false | 61,532,508 | 0 | 17,855 | 3 | 0 | 0 | 47,608,506 | Deletion of the folder as mentioned previously did not work for me.
I solved this problem by installing a new version of pywin32 using conda.
conda install -c anaconda pywin32 | 1 | 0 | 0 | Issue in using win32com to access Excel file | 6 | python,excel,win32com | 0 | 2017-12-02T13:41:00.000 |
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-00000000004... | 14 | 5 | 0.16514 | 0 | false | 61,842,925 | 0 | 17,855 | 3 | 0 | 0 | 47,608,506 | A solution is to locate the gen_py folder (C:\Users\\AppData\Local\Temp\gen_py) and delete its content. It works for me when using the COM with another program. | 1 | 0 | 0 | Issue in using win32com to access Excel file | 6 | python,excel,win32com | 0 | 2017-12-02T13:41:00.000 |
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-00000000004... | 14 | 6 | 1 | 0 | false | 55,256,887 | 0 | 17,855 | 3 | 0 | 0 | 47,608,506 | Renaming the GenPy folder should work.
It's present at: C:\Users\ _insert_username_ \AppData\Local\Temp\gen_py
Renaming it will create a new Gen_py folder and will let you dispatch Excel properly. | 1 | 0 | 0 | Issue in using win32com to access Excel file | 6 | python,excel,win32com | 0 | 2017-12-02T13:41:00.000 |
I have an existing sqlite db file, on which I need to make some extensive calculations. Doing the calculations from the file is painfully slow, and as the file is not large (~10 MB), there should be no problem loading it into memory.
Is there a way in Python to load the existing file into memory in order to speed up... | 0 | -2 | 1.2 | 0 | true | 47,702,482 | 0 | 231 | 1 | 0 | 0 | 47,702,450 | You could read all the tables into DataFrames with Pandas, though I'm surprised it's slow. sqlite has always been really fast for me. | 1 | 0 | 0 | Load existing db file to memory Python sqlite? | 1 | python,sqlite | 0 | 2017-12-07T19:31:00.000 |
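One standard-library route (Python 3.7+) is Connection.backup, which copies the on-disk file into a :memory: database in one shot; the filename and table below are illustrative:

```python
import sqlite3

# Sketch: copy the on-disk database into :memory: once, then run the
# heavy queries against RAM.
disk = sqlite3.connect("memory_demo.db")
disk.execute("CREATE TABLE IF NOT EXISTS nums (n INTEGER)")
disk.execute("DELETE FROM nums")
disk.executemany("INSERT INTO nums (n) VALUES (?)", [(i,) for i in range(1000)])
disk.commit()

mem = sqlite3.connect(":memory:")
disk.backup(mem)  # one-shot copy of every table into memory
disk.close()

total = mem.execute("SELECT SUM(n) FROM nums").fetchone()[0]
print(total)  # -> 499500
```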
If I have two backends, one NodeJS and one Python both of them are accessing the same database. Is it possible to use an ORM for both or is that really bad practice? It seems like that would lead to a maintenance nightmare. | 0 | 0 | 0 | 0 | false | 47,708,755 | 1 | 184 | 2 | 0 | 0 | 47,707,608 | It is possible, but it may cause conflicts with table names, constraint names, sequence names and other names which are depend on ORM naming strategy. | 1 | 0 | 0 | Node ORM and Python ORM for same DB? | 2 | python,node.js,postgresql,orm | 0 | 2017-12-08T04:13:00.000 |
If I have two backends, one NodeJS and one Python both of them are accessing the same database. Is it possible to use an ORM for both or is that really bad practice? It seems like that would lead to a maintenance nightmare. | 0 | 0 | 0 | 0 | false | 47,708,025 | 1 | 184 | 2 | 0 | 0 | 47,707,608 | so long as both ORMs put few constraints on the database structure it should be fine. | 1 | 0 | 0 | Node ORM and Python ORM for same DB? | 2 | python,node.js,postgresql,orm | 0 | 2017-12-08T04:13:00.000 |
I have a MySQL DB/table with a column "name" containing one value. Multiple Python scripts are accessing the same DB/table and the same column. There are also two more columns called "locked" and "locked_by"; each script reads the table, selects 10 entries from "name" where "locked" is false, and updates the locked v... | 1 | 0 | 0 | 0 | false | 47,729,223 | 0 | 79 | 1 | 0 | 0 | 47,728,923 | It looks like SQL's SELECT ... FOR UPDATE would lock the selected rows so other processes can't read/update them until I commit the changes... if I understand correctly | 1 | 0 | 1 | How to prevent conflicts while multiple python scripts accessing MySQL DB at once ? | 1 | python,mysql | 0 | 2017-12-09T13:11:00.000 |
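A hedged sketch of what the claim step could look like; the table name jobs is hypothetical (the question only names the columns), and the code is not run against a live MySQL server here:

```python
# SELECT ... FOR UPDATE holds row locks until COMMIT, so two workers
# cannot claim the same ten rows.
CLAIM_SQL = (
    "SELECT name FROM jobs "
    "WHERE locked = FALSE "
    "LIMIT 10 FOR UPDATE"
)
MARK_SQL = "UPDATE jobs SET locked = TRUE, locked_by = %s WHERE name = %s"

def claim(cursor, worker_id):
    """Claim up to ten unlocked rows inside one transaction."""
    cursor.execute("BEGIN")
    cursor.execute(CLAIM_SQL)
    names = [row[0] for row in cursor.fetchall()]
    for name in names:
        cursor.execute(MARK_SQL, (worker_id, name))
    cursor.execute("COMMIT")
    return names
```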
I want to store a list within another list in a database (SQL) without previous data being lost. This is one example of the values I have in my database: (1, 'Haned', 15, 11, 'Han15', 'password', "['easymaths', 6]"). What I want to do is store another piece of information/data within the list [] without it getting rid of "[... | 0 | 0 | 0 | 0 | false | 47,731,116 | 0 | 20 | 1 | 0 | 0 | 47,730,994 | Though not intended, you could join the list by a specific separator. In turn, when you query the selected field you have to convert it into a list again. | 1 | 0 | 0 | Storing a list within another list in a database without previous information in that list getting lost | 1 | python,sql,database,list,append | 0 | 2017-12-09T17:06:00.000 |
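Instead of string surgery on "['easymaths', 6]", serialising with json gives a clean append path; a sketch with sqlite3 standing in for the database:

```python
import json
import sqlite3

# Serialise the list with json.dumps on write and json.loads on read,
# so items can be appended without losing what is already stored.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, extras TEXT)")
conn.execute(
    "INSERT INTO users VALUES (?, ?)",
    ("Haned", json.dumps(["easymaths", 6])),
)

# Append a new item: load, modify, dump, write back.
(raw,) = conn.execute("SELECT extras FROM users WHERE name = 'Haned'").fetchone()
extras = json.loads(raw)
extras.append("hardmaths")  # hypothetical new entry
conn.execute("UPDATE users SET extras = ? WHERE name = 'Haned'", (json.dumps(extras),))

(raw,) = conn.execute("SELECT extras FROM users WHERE name = 'Haned'").fetchone()
print(json.loads(raw))  # -> ['easymaths', 6, 'hardmaths']
```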
In the book grokking algorithms, the author said that
In the worst case, a hash table takes O(n)—linear time—for everything, which is really slow.
In the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So for sear... | 3 | 0 | 0 | 0 | false | 47,738,572 | 0 | 2,136 | 3 | 0 | 0 | 47,738,554 | Because in order to insert and delete, you first need to search, and search takes O(n) in the worst case. Therefore, insert and delete should also take at least O(n) in the worst case. | 1 | 0 | 0 | Why in worst case insert and delete take linear time for hash table? | 3 | python,algorithm,data-structures,hash | 0 | 2017-12-10T11:52:00.000 |
In the book grokking algorithms, the author said that
In the worst case, a hash table takes O(n)—linear time—for everything, which is really slow.
In the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So for sear... | 3 | 0 | 1.2 | 0 | true | 47,738,575 | 0 | 2,136 | 3 | 0 | 0 | 47,738,554 | And for a linked list, delete and insert take constant time.
They don't. They take linear time, because you have to find the item to delete (or the place to insert) first. | 1 | 0 | 0 | Why in worst case insert and delete take linear time for hash table? | 3 | python,algorithm,data-structures,hash | 0 | 2017-12-10T11:52:00.000 |
In the book grokking algorithms, the author said that
In the worst case, a hash table takes O(n)—linear time—for everything, which is really slow.
In the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So for sear... | 3 | 3 | 0.197375 | 0 | false | 47,738,580 | 0 | 2,136 | 3 | 0 | 0 | 47,738,554 | Delete will not be constant: you will have to traverse the whole worst-case linked list to find the object you want to delete, so this would also be O(n) complexity.
You will have the same problem with insert: you don't want any duplicates, so to be sure not to create any, you will have to check the whole... | 1 | 0 | 0 | Why in worst case insert and delete take linear time for hash table? | 3 | python,algorithm,data-structures,hash | 0 | 2017-12-10T11:52:00.000 |
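The degradation is easy to demonstrate: a key type whose __hash__ is constant forces every dict entry to collide, so each lookup has to fall back on many equality comparisons (CPython uses open addressing rather than chaining, but the O(n) effect is the same):

```python
# Every key collides, so lookups stay correct but degrade toward O(n).
class Clashy:
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 0  # constant hash -> all keys land in one bucket
    def __eq__(self, other):
        return self.n == other.n

table = {Clashy(i): i for i in range(1000)}
print(table[Clashy(999)])  # -> 999, still correct, just slow to find
```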
I want to have a checkboxcolumn in my returned table via Django-filter, then select certain rows via checkbox, and then do something with these rows.
This is Django-filter: django-filter.readthedocs.io/en/1.1.0 This is an example of checkboxcolumn being used in Django-tables2: stackoverflow.com/questions/10850316/…
My ... | 1 | 0 | 0 | 0 | false | 47,835,665 | 1 | 1,965 | 1 | 0 | 0 | 47,783,328 | What django-filter does from the perspective of django-tables2 is supply a different (filtered) queryset. django-tables2 does not care about who composed the queryset; it will just iterate over it and render rows using the models from the queryset.
So whether you add a checkbox column to the table or not, or use django-filte... | 1 | 0 | 0 | Django-filter AND Django-tables2 CheckBoxColumn compatibility | 2 | python,mysql,django,django-filter,django-tables2 | 0 | 2017-12-12T23:40:00.000 |
I am looking for a way to require users of a SQL query system to include certain columns in the SELECT query; for example, require the select to have a transaction_id column, else return an error. This is to ensure compatibility with other functions.
I'm using EXPLAIN (FORMAT JSON) to parse the query plan as a dictionary, but it doesn'... | 0 | 0 | 1.2 | 0 | true | 47,789,777 | 0 | 32 | 1 | 0 | 0 | 47,789,320 | Have you tried EXPLAIN (VERBOSE)? That shows the column names.
But I think it will be complicated – you'd have to track table aliases to figure out which column belongs to which table. | 1 | 0 | 0 | Requiring certain columns in SELECT SQL query for it to go through? | 1 | python,postgresql,sqlalchemy | 0 | 2017-12-13T09:13:00.000 |
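If actually running the query is acceptable, an alternative to parsing EXPLAIN output is checking cursor.description, which carries the output column names in psycopg2 and sqlite3 alike; sqlite3 is used here so the sketch is self-contained:

```python
import sqlite3

REQUIRED = {"transaction_id"}

def run_checked(cursor, sql):
    """Execute the query, then refuse to return rows unless every
    required column was selected."""
    cursor.execute(sql)
    selected = {d[0] for d in cursor.description}
    missing = REQUIRED - selected
    if missing:
        raise ValueError("query must select: %s" % sorted(missing))
    return cursor.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (transaction_id INTEGER, amount REAL)")
conn.execute("INSERT INTO tx VALUES (1, 9.5)")
rows = run_checked(conn.cursor(), "SELECT transaction_id, amount FROM tx")
print(rows)  # -> [(1, 9.5)]
```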
So I'm trying to store a LOT of numbers, and I want to optimize storage space.
A lot of the numbers generated have pretty high precision floating points, so:
0.000000213213 or 323224.23125523 - long, high memory floats.
I want to figure out the best way, either in Python with MySQL(MariaDB) - to store the number with s... | 2 | 1 | 1.2 | 0 | true | 47,843,118 | 0 | 317 | 1 | 0 | 0 | 47,842,966 | All python floats have the same precision and take the same amount of storage. If you want to reduce overall storage numpy arrays should do the trick. | 1 | 0 | 0 | Most efficient way to store scientific notation in Python and MySQL | 2 | python,mysql,types,double,storage | 0 | 2017-12-16T05:53:00.000 |
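For scale: every Python float is a 64-bit double, so space is only saved at storage time, e.g. MySQL FLOAT (4 bytes) instead of DOUBLE (8 bytes). The struct module shows the size/precision trade-off:

```python
import struct

x = 323224.23125523  # value from the question

double_bytes = struct.pack("d", x)  # 64-bit IEEE double: what Python floats are
single_bytes = struct.pack("f", x)  # 32-bit float: half the space, ~7 digits

print(len(double_bytes), len(single_bytes))  # -> 8 4
print(struct.unpack("f", single_bytes)[0])   # precision loss becomes visible
```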
Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case?
I am not getting any error messages. The column names appear fine, but the table is entirely empty.
When I try to send over a single column (i.e. data.ix[2]), it actually works... | 1 | -1 | -0.099668 | 1 | false | 55,399,149 | 0 | 2,183 | 2 | 0 | 0 | 47,878,076 | I was also facing same issue because dot was added in header. remove dot then it will work. | 1 | 0 | 0 | Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case? | 2 | python,sql,python-3.x,postgresql,pandas | 0 | 2017-12-18T23:46:00.000 |
Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case?
I am not getting any error messages. The column names appear fine, but the table is entirely empty.
When I try to send over a single column (i.e. data.ix[2]), it actually works... | 1 | 0 | 0 | 1 | false | 47,896,038 | 0 | 2,183 | 2 | 0 | 0 | 47,878,076 | I fixed this problem - it was becomes some of the column headers had '%' in it.
I accidentally discovered this reason for the empty tables when I tried to use io and copy_from a temporary csv, instead of to_sql. I got a transaction error based on a % placeholder error.
Again, this is specific to passing to PSQL; it w... | 1 | 0 | 0 | Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case? | 2 | python,sql,python-3.x,postgresql,pandas | 0 | 2017-12-18T23:46:00.000 |
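A tiny helper along the lines of the fix: sanitise header characters that clash with DB-API placeholder syntax before calling to_sql. The exact replacement rule here is an assumption; adjust to taste:

```python
import re

def sanitize(columns):
    # Replace characters such as '%' (and '.') that collide with DB-API
    # placeholders; everything else passes through unchanged.
    return [re.sub(r"[%.]", "_", c) for c in columns]

print(sanitize(["growth%", "price.usd", "volume"]))
# -> ['growth_', 'price_usd', 'volume']
```

With pandas this would be applied as df.columns = sanitize(df.columns) before df.to_sql(...).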
Title: Import data from BigQuery to Cloud Storage in different project
Tags: google-bigquery, google-cloud-platform, google-cloud-storage, google-python-api
Asked: 2017-12-19

Question: I have two projects under the same account: projectA with BigQuery and projectB with Cloud Storage. projectA has a BigQuery dataset and table, testDataset.testTable; projectB has a Cloud Storage bucket, testBucket. I use Python and the Google Cloud REST API, with account key credentials for each project carrying different permissions: proje...

Answer (score 0): You have set this up wrong. You need to grant one account access on both projects so it can work across them: there needs to be an account authorized to run the BigQuery job and also authorized on Cloud Storage in the other project. Also, bucket names must be globally unique, which means I can't create the same name either; it's glo...
Title: psycopg2 create table if not exists and return exists result
Tags: python, postgresql, psycopg2
Asked: 2017-12-20

Question: I have a problem figuring out how I can create a table using psycopg2 with an IF NOT EXISTS statement and get the NOT EXISTS result. The issue is that I'm creating a table and running some CREATE INDEX / UNIQUE CONSTRAINT statements after it is created. If the table already exists, there is no need to create the indexes or c...

Answer (score 0): Eventually I ended up adding AUTOCOMMIT = true. This is the only way I can make sure all workers see when a table is created.
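One way to get the "did it already exist" answer is to query the catalog before issuing CREATE TABLE IF NOT EXISTS, and only build indexes on first creation. The sketch below uses SQLite so it runs without a server; with psycopg2 you would query the Postgres catalog (e.g. to_regclass) instead and, as the answer notes, set conn.autocommit = True so other workers see the new table immediately. Table and index names are placeholders.

```python
import sqlite3

def ensure_table(conn):
    """Create the table if missing; return True if it already existed."""
    existed = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name='items'"
    ).fetchone() is not None

    conn.execute(
        "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, sku TEXT)"
    )
    if not existed:
        # Only build indexes the first time the table is created.
        conn.execute("CREATE UNIQUE INDEX idx_items_sku ON items (sku)")
    return existed

conn = sqlite3.connect(":memory:")
print(ensure_table(conn))  # False: table was just created
print(ensure_table(conn))  # True: second call finds it
```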
Title: How to check the status of a mysql connection in python?
Tags: python, pymysql
Asked: 2017-12-26

Question: I am using pymysql to connect to a database. I am new to database operations. Is there a status code that I can use to check if the database connection is open/alive, something like db.status?

Answer (score -4): It looks like you can create the database object and check whether it has been created; if it hasn't, you can raise an exception. Try connecting to an obviously wrong db and see what error it throws, then use that in a try/except block. I'm new to this as well, so anyone with a better answer please feel free t...
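A more direct route than the answer above: pymysql's Connection has a ping(reconnect=...) method that raises when the link is dead, so a small try/except helper gives you the boolean. The sketch is demonstrated with a stand-in object so it runs without a MySQL server; FakeConnection is purely illustrative.

```python
class FakeConnection:
    """Stand-in for pymysql.connections.Connection (no real server needed)."""
    def __init__(self, alive):
        self._alive = alive

    def ping(self, reconnect=False):
        # pymysql's ping raises when the server connection is gone.
        if not self._alive:
            raise OSError("MySQL server has gone away")

def is_alive(conn):
    """Return True if conn.ping() succeeds, False if it raises."""
    try:
        conn.ping(reconnect=False)
        return True
    except Exception:
        return False

print(is_alive(FakeConnection(alive=True)))   # True
print(is_alive(FakeConnection(alive=False)))  # False
```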
Title: Alembic, how do you change the name of the revision database?
Tags: python, sqlalchemy, alembic
Asked: 2017-12-26

Question: I am working with Alembic and it automatically creates a table called alembic_version in your database. How do I specify the name of this table instead of using the default name?

Answer (accepted, score 8): After you run your init, open the env.py file and update context.configure, adding version_table='alembic_version_your_name' as a kwarg.
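Concretely, the change lands in the context.configure(...) call inside the env.py that `alembic init` generates; version_table is a documented keyword of Alembic's EnvironmentContext.configure, and the table name below is just a placeholder:

```python
# env.py fragment (assumed shape of the generated file)
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    # store migration state in a custom table instead of 'alembic_version'
    version_table="alembic_version_myapp",
)
```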
Title: Unable to use pyodbc with aws lambda and API Gateway
Tags: python, amazon-web-services, aws-lambda, chalice
Asked: 2017-12-29

Question: I am trying to build an AWS Lambda function using API Gateway which utilizes the pyodbc Python package. I have followed the steps as mentioned in the documentation. I keep getting the error "Unable to import module 'app': libodbc.so.2: cannot open shared object file: No such file or directory" when I test-run the Lam...

Answer (score 1): First, install the unixODBC and unixODBC-devel packages using yum install unixODBC unixODBC-devel. This step installs everything required for the pyodbc module. The library you're missing is located in the /usr/lib64 folder on your Amazon Linux instance. Copy the library to your Python project's root folder (libodbc.so.2 is ju...
Title: Using openpyxl to refresh pivot tables in Excel
Tags: python, excel, openpyxl
Asked: 2017-12-29

Question: I have a file that has several tabs with pivot tables that are based on one data tab. I am able to write the data to the data tab without issue, but I can't figure out how to get all of the tabs with pivot tables to refresh. If this can be accomplished with openpyxl, that would be ideal.

Answer (score 0): Currently what I do is: in my template I create a dynamic data range that gets the data from the raw data sheet, then set that named range as the pivot table's data source. Then in the pivot table options there is a "refresh on open" parameter, which I enable. When the Excel file opens, it refreshes, and you can see it ...
Second answer to the same question (score 1): If the data source range is always the same, you can set each pivot table to "refresh when open". To do that, go to the pivot table tab, click on the pivot table, and under Analyze -> Options -> Options -> Data, select "Refresh data when opening the file". If the data source range is dynamic, you can set a named...
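If you want to flip the "refresh when open" flag from Python rather than through Excel's UI, openpyxl exposes it as refreshOnLoad on each pivot cache. A sketch; note that _pivots is a private openpyxl attribute that may change between versions, and "report.xlsx" is a placeholder path:

```python
import openpyxl

def enable_pivot_refresh(path):
    """Mark every pivot cache in the workbook so Excel refreshes it on open."""
    wb = openpyxl.load_workbook(path)
    for ws in wb.worksheets:
        for pivot in ws._pivots:  # private API: pivot tables on this sheet
            pivot.cache.refreshOnLoad = True
    wb.save(path)
```

Call it after writing your data tab, e.g. enable_pivot_refresh("report.xlsx"); the pivots then rebuild the next time the file is opened in Excel.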