Dataset schema (one row per Stack Overflow answer; ranges are min to max over the dataset):

- Question: string, length 25 to 7.47k
- Q_Score: int64, 0 to 1.24k
- Users Score: int64, -10 to 494
- Score: float64, -1 to 1.2
- Data Science and Machine Learning: int64, 0 to 1
- is_accepted: bool, 2 classes
- A_Id: int64, 39.3k to 72.5M
- Web Development: int64, 0 to 1
- ViewCount: int64, 15 to 1.37M
- Available Count: int64, 1 to 9
- System Administration and DevOps: int64, 0 to 1
- Networking and APIs: int64, 0 to 1
- Q_Id: int64, 39.1k to 48M
- Answer: string, length 16 to 5.07k
- Database and SQL: int64, always 1 in this slice
- GUI and Desktop Applications: int64, 0 to 1
- Python Basics and Environment: int64, 0 to 1
- Title: string, length 15 to 148
- AnswerCount: int64, 1 to 32
- Tags: string, length 6 to 90
- Other: int64, 0 to 1
- CreationDate: string, length 23 (ISO timestamp)

Title: Closing python MySQL script
Tags: python,mysql,while-loop | Created: 2014-08-29T05:02:00.000
Categories: Database and SQL
Q_Id 25561971 (Q_Score 0, ViewCount 184, AnswerCount 3) | A_Id 25562066 (Users Score 2, Score 0.132549, is_accepted false, Available Count 1)
Question: I was wondering if one of you could advise me on how to tackle a problem I am having. I developed a Python script that updates data to a database (MySQL) every iteration (endless while loop). What I want to prevent is that, if the script is accidentally closed or stopped halfway through, it waits till all the data is lo...
Answer: There are some things you can do to prevent a program from being closed unexpectedly (signal handlers, etc.), but they only work in some cases and not others. There is always the chance of a system shutdown, power failure or SIGKILL that will terminate your program whether you like it or not. The canonical solution to...

Title: error "No module named MySQLdb"
Tags: python-2.7,mysql-python | Created: 2014-09-02T12:05:00.000
Categories: Database and SQL; Python Basics and Environment
Q_Id 25623002 (Q_Score 0, ViewCount 691, AnswerCount 1) | A_Id 25797155 (Users Score 1, Score 1.2, is_accepted true, Available Count 1)
Question: After importing the module, on script run I get the error, and I have the module installed already. I am new to Python, so I suspect I might have forgotten to install something else. Python is version 2.7.
Answer: Maybe you have more versions than 2.7 and you installed the module on another version.

Title: How can I switch my Django project's database engine from Sqlite to MySQL?
Tags: python,mysql,django,sqlite,mysql-python | Created: 2014-09-02T17:31:00.000
Categories: Web Development; Database and SQL
Q_Id 25629092 (Q_Score 0, ViewCount 4742, AnswerCount 2) | A_Id 25630191 (Users Score -1, Score -0.099668, is_accepted false, Available Count 1)
Question: I need help switching my database engine from sqlite to mysql. manage.py datadump is returning the same error that pops up when I try to do anything else with manage.py: ImproperlyConfigured: Error loading MySQL module, No module named MySQLdb. This django project is a team project. I pulled new changes from bitbucket...
Answer: Try the following steps: 1. Change DATABASES in settings.py to the MySQL engine. 2. Run $ ./manage.py syncdb

Title: Porting Python on Windows using pywin32/excel to Linux on Vagrant Machine
Tags: python,linux,excel,vagrant,pywin32 | Created: 2014-09-02T17:56:00.000
Categories: System Administration and DevOps; Database and SQL
Q_Id 25629462 (Q_Score 0, ViewCount 850, AnswerCount 1) | A_Id 25629595 (Users Score 2, Score 1.2, is_accepted true, Available Count 1)
Question: I have written an extensive python package that utilizes excel and pywin32. I am now in the progress of moving this package to a linux environment on a Vagrant machine. I know there are "emulator-esque" software packages (e.g. WINE) that can run Windows applications and look-a-likes for some Windows applications (e.g. ...
Answer: The short answer is, you can't. WINE does not expose a bottled Windows environment's COM registry out to Linux, and even if it did, pywin32 doesn't build on anything but Windows. So, here are some options, roughly ordered from the least amount of change to your code and setup to the most: Run both your Python script a...

Title: Python SQLite executemany() always in order?
Tags: python,sql,sqlite | Created: 2014-09-07T16:52:00.000
Categories: Database and SQL
Q_Id 25712611 (Q_Score 3, ViewCount 444, AnswerCount 1) | A_Id 25712762 (Users Score 3, Score 0.53705, is_accepted false, Available Count 1)
Question: In Python, I'm using SQLite's executemany() function with INSERT INTO to insert stuff into a table. If I pass executemany() a list of things to add, can I rely on SQLite inserting those things from the list in order? The reason is because I'm using INTEGER PRIMARY KEY to autoincrement primary keys. For various reasons,...
Answer: Python's sqlite3 module executes the statement with the list values in the correct order. Note: if the code already knows the to-be-generated ID value, then you should insert this value explicitly so that you get an error if this expectation turns out to be wrong.

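The ordering behaviour this answer describes can be checked with an in-memory database; the table and values below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# executemany() runs one INSERT per tuple, in list order, so the
# autoincremented primary keys follow the order of the input list.
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("alpha",), ("beta",), ("gamma",)])

rows = conn.execute("SELECT id, name FROM items ORDER BY id").fetchall()
# rows == [(1, "alpha"), (2, "beta"), (3, "gamma")]
```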
Title: Loading Large data into SQL Server [BCP] using Python
Tags: python,sql-server | Created: 2014-09-09T08:52:00.000
Categories: Database and SQL
Q_Id 25740355 (Q_Score 1, ViewCount 1625, AnswerCount 1) | A_Id 25743680 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: After scanning the very large daily event logs using regular expression, I have to load them into a SQL Server database. I am not allowed to create a temporary CSV file and then use the command line BCP to load them into the SQL Server database. Using Python, is it possible to use BCP streaming to load data into SQL S...
Answer: The BCP API is only available using the ODBC call-level interface and the managed SqlClient .NET API using the SqlBulkCopy class. I'm not aware of a Python extension that provides BCP API access. You can insert many rows in a single transaction to improve performance. This can be accomplished by batching individual i...

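The batching idea in this answer can be sketched with the DB-API; sqlite3 stands in here for the real ODBC connection, and the 1000-row batch size is an arbitrary choice, not a recommendation from the answer:

```python
import sqlite3

def batched(rows, size):
    # Yield the row list in fixed-size chunks.
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

conn = sqlite3.connect(":memory:")  # stand-in for the SQL Server connection
conn.execute("CREATE TABLE events (line TEXT)")

rows = [("event %d" % i,) for i in range(2500)]
for chunk in batched(rows, 1000):
    # One transaction per batch instead of one per row.
    conn.executemany("INSERT INTO events (line) VALUES (?)", chunk)
    conn.commit()

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```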
Title: Writing in SQLite multiple Threads in Python
Tags: python,multithreading,postgresql,sqlite | Created: 2014-09-09T14:28:00.000
Categories: Database and SQL
Q_Id 25747192 (Q_Score 4, ViewCount 1466, AnswerCount 2) | A_Id 25748935 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: I've got a sqlite3 database and I want to write in it from multiple threads. I've got multiple ideas but I'm not sure which I should implement: create multiple connections, detect and wait if the DB is locked; use one connection and try to make use of serialized connections (which don't seem to be implemented in Python)...
Answer: I used method 1 before. It is the easiest in coding. Since that project has a small website, each query takes only several milliseconds. All the users' requests can be processed promptly. I also used method 3 before, because when queries take longer, it is better to queue them, since frequent "detect and wa...

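The queueing approach (method 3 in the answer) is often implemented as a single writer thread that owns the connection while other threads only enqueue work. A minimal sketch, with the sentinel convention and table made up for illustration:

```python
import queue
import sqlite3
import threading

writes = queue.Queue()
result = {}

def writer():
    # The writer thread is the only one that touches the connection,
    # so sqlite never sees concurrent writers.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE log (msg TEXT)")
    while True:
        item = writes.get()
        if item is None:          # sentinel: shut down cleanly
            break
        conn.execute("INSERT INTO log (msg) VALUES (?)", (item,))
        conn.commit()
    result["count"] = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
    conn.close()

t = threading.Thread(target=writer)
t.start()
for i in range(10):               # any thread may enqueue writes
    writes.put("message %d" % i)
writes.put(None)
t.join()
```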
Title: add if no duplicate in collection mongodb python
Tags: python,mongodb,pymongo,upsert | Created: 2014-09-12T11:28:00.000
Categories: Networking and APIs; Database and SQL; Python Basics and Environment
Q_Id 25807271 (Q_Score 0, ViewCount 491, AnswerCount 2) | A_Id 25807361 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: I'm writing a script that puts a large number of XML files into MongoDB; thus, when I execute the script multiple times, the same object is added many times to the same collection. I looked for a way to stop this behavior by checking the existence of the object before adding it, but can't find a way. Help!
Answer: You can index on one or more fields (not _id) of the document/XML structure. Then make use of the find operator to check if a document containing that indexed_field:value is present in the collection. If it returns nothing, then you can insert new documents into your collection. This will ensure only new docs are inserted wh...

Title: APScheduler store job in custom database of mongodb
Tags: mongodb,python-2.7,apscheduler | Created: 2014-09-17T07:01:00.000
Categories: System Administration and DevOps; Database and SQL
Q_Id 25884242 (Q_Score 0, ViewCount 708, AnswerCount 1) | A_Id 25898609 (Users Score 1, Score 0.197375, is_accepted false, Available Count 1)
Question: I want to store the job in MongoDB using Python, and it should be scheduled at a specific time. I did some googling and found that APScheduler will do it. I downloaded the code and tried to run it. It schedules the job correctly and runs it, but it stores the job in the apscheduler database of MongoDB; I want to store the job in my own ...
Answer: Simply give the mongodb jobstore a different "database" argument. It seems like the API documentation for this job store was not included in what is available on ReadTheDocs, but you can inspect the source and see how it works.

Title: XLSX to XML with schema map
Tags: python,xml,excel,xlsx,openpyxl | Created: 2014-09-17T14:25:00.000
Categories: Database and SQL
Q_Id 25893266 (Q_Score 0, ViewCount 2150, AnswerCount 1) | A_Id 25908953 (Users Score 0, Score 1.2, is_accepted true, Available Count 1)
Question: I have built a couple of basic workflows using XML tools on top of XLSX workbooks that are mapped to an XML schema. You would enter data into the spreadsheet, export the XML and I had some scripts that would then work with the data. Now I'm trying to eliminate that step and build a more integrated and portable tool that o...
Answer: The Excel format is pretty complicated with dependencies between components – you can't, for example, be sure that the order of the worksheets in the worksheets folder has any bearing on what the file looks like in Excel. I don't really understand exactly what you're trying to do, but the existing libraries present an ...

Title: Testing an ORM for RethinkDB
Tags: python,unit-testing,testing,orm,rethinkdb | Created: 2014-09-18T15:32:00.000
Categories: Web Development; Database and SQL; Other
Q_Id 25916839 (Q_Score 3, ViewCount 484, AnswerCount 1) | A_Id 25922468 (Users Score 3, Score 1.2, is_accepted true, Available Count 1)
Question: I am close to finishing an ORM for RethinkDB in Python and I got stuck at writing tests. Particularly at those involving save(), get() and delete() operations. What's the recommended way to test whether my ORM does what it is supposed to do when saving or deleting or getting a document? Right now, for each test in my s...
Answer: You can create all your databases/tables just once for all your tests. You can also use the raw data directory: start RethinkDB, create all your databases/tables, commit it. Before each test, copy the data directory, start RethinkDB on the copy, then when your test is done, delete the copied data directory.

Title: Mongoimport: Escaping commas in CSV
Tags: python,mongoimport | Created: 2014-09-19T14:34:00.000
Categories: Database and SQL; Python Basics and Environment
Q_Id 25936385 (Q_Score 0, ViewCount 313, AnswerCount 1) | A_Id 25936756 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: I am using mongoimport in a python script to import multiple CSV files into my Mongo DB. Some values contain backslash escaped commas. How can I use this to correctly import these files to Mongo? I can't find any specific solutions to this.
Answer: I'm not familiar with mongoimport, but I do know that if you use csv.reader, the backslashes are taken care of during reading. Maybe you could consider using a package specifically designed to read the csv, and then pass that along to mongoimport.

Title: Python: read an Excel file using Pandas when the file has special characters in column headers
Tags: python,excel,pandas,xls,xlsx | Created: 2014-09-23T04:56:00.000
Categories: Database and SQL
Q_Id 25987179 (Q_Score 1, ViewCount 1140, AnswerCount 1) | A_Id 49387955 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: I know you can read in Excel files with pandas, but I have had trouble reading in files where the column headings in the worksheets are not in a format easily readable like plain text. In other words, if the column headings had special characters then the file would fail to import. Whereas if you import data like that...
Answer: Add a "u" before your string. For example, if you're looking for a column named 'lissé' in a dataframe "df" then you should put df[u'lissé']

Title: Why is Django creating my TextField as a varchar in the PostgreSQL database?
Tags: python,sql,django,postgresql,psycopg2 | Created: 2014-09-24T23:35:00.000
Categories: Web Development; Database and SQL
Q_Id 26028200 (Q_Score 2, ViewCount 1186, AnswerCount 1) | A_Id 26030265 (Users Score 1, Score 1.2, is_accepted true, Available Count 1)
Question: Django 1.7, Python 3.4. In my models I have several TextFields defined. When I go to load a JSON fixture (which was generated from an SQLite3 dump), it fails on the second object, which has 515 characters for one of its fields. The error printed is psycopg2.DataError: value too long for type character varying(500) I c...
Answer: I'm not sure what the exact cause was, but it seems to be related to Django's migration tool storing migrations, even on a new database. What I did to get this behavior: create the Django project, then the apps, using CharField; run syncdb; run the project's dev server; kill the dev server; modify the fields to be TextField; create a new...

Title: Share connection to postgres db across processes in Python
Tags: python,postgresql,psycopg2,python-multiprocessing | Created: 2014-09-27T00:16:00.000
Categories: Database and SQL
Q_Id 26070040 (Q_Score 4, ViewCount 5128, AnswerCount 1) | A_Id 26072257 (Users Score 12, Score 1.2, is_accepted true, Available Count 1)
Question: I have a Python script running as a daemon. At startup, it spawns 5 processes, each of which connects to a Postgres database. Now, in order to reduce the number of DB connections (which will eventually become really large), I am trying to find a way of sharing a single connection across multiple processes. And for this...
Answer: You can't sanely share a DB connection across processes like that. You can sort-of share a connection between threads, but only if you make sure the connection is only used by one thread at a time. That won't work between processes because there's client-side state for the connection stored in the client's address spac...

Title: create database by load a csv files using the header as columnnames (and add a column that has the filename as a name)
Tags: python,mysql,sql,csv | Created: 2014-09-29T20:19:00.000
Categories: Database and SQL
Q_Id 26108160 (Q_Score 0, ViewCount 3067, AnswerCount 2) | A_Id 26108522 (Users Score 0, Score 1.2, is_accepted true, Available Count 1)
Question: I have CSV files that I want to make database tables from in mysql. I've searched all over and can't find anything on how to use the header as the column names for the table. I suppose this must be possible. In other words, when creating a new table in MySQL do you really have to define all the columns, their names, th...
Answer: The csv module can easily give you the column names from the first line, and then the values from the other ones. The hard part will be to guess the correct column types. When you load a csv file into an Excel worksheet, you only have a few types: numeric, string, date. In a database like MySQL, you can define the size...

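The first step the answer describes (taking column names from the CSV header) can be sketched as below. Everything is typed TEXT, since type guessing is the hard part the answer warns about, and the extra filename column matches the question's title; the table and data are made up:

```python
import csv
import io

def create_table_sql(table, csv_text):
    # Read only the header row and turn each heading into a TEXT column.
    header = next(csv.reader(io.StringIO(csv_text)))
    cols = ", ".join("`%s` TEXT" % name.strip() for name in header)
    # Extra column to record which file each row came from.
    return "CREATE TABLE `%s` (%s, `filename` TEXT)" % (table, cols)

sql = create_table_sql("orders", "id,name,price\n1,widget,9.99\n")
```

In practice you would then execute `sql` against MySQL and fill the table with executemany(), passing the source filename alongside each row.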
Title: django multiple database for multiple organisation in a single project
Tags: mysql,django,python-2.7 | Created: 2014-10-02T09:07:00.000
Categories: Web Development; Database and SQL
Q_Id 26157625 (Q_Score 0, ViewCount 181, AnswerCount 1) | A_Id 26158170 (Users Score 1, Score 0.197375, is_accepted false, Available Count 1)
Question: In my project there are two models, ORGANISATION and CUSTOMER. What I am doing is: while adding a new customer to the organisation, I save the organisation_id to the CUSTOMER table. But now I am worried about the performance of my project when the database becomes huge. So now I am planning to create a new database...
Answer: It doesn't make sense to create a new database for each organization. Even if the number of customers or organizations grows to the hundreds or thousands, keeping data in a single database is your best option. Edit: Your original concern was that an increase in the number of organizations would impact performance. Well...

Title: Django sqlite3 database dump
Tags: python,django,database,sqlite | Created: 2014-10-03T12:07:00.000
Categories: Web Development; Database and SQL
Q_Id 26178633 (Q_Score 0, ViewCount 1114, AnswerCount 2) | A_Id 26179396 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: How can I create a Django sqlite3 dump file (*.sql) using the terminal? There is a fabric fabfile.py with certain dump scripts, but when I try to use the fab command the following message shows up: The program 'fab' is currently not installed. To run fab please ask your administrator to install the package 'fabric'. But there are fabri...
Answer: You could also use fixtures, and generate fixtures for your app. Depends on what you're planning to do with them. You'll just run loaddata after that.

Title: does django-nonrel is compatible with django 1.7
Tags: django,python-2.7,django-nonrel | Created: 2014-10-10T06:49:00.000
Categories: Web Development; Database and SQL; Python Basics and Environment
Q_Id 26293481 (Q_Score 1, ViewCount 250, AnswerCount 1) | A_Id 26293980 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: I am using Django 1.7 and I want to use MongoDB, so I tried to install django-nonrel. Please let me know whether django-nonrel is compatible with Django 1.7.
Answer: Django-nonrel isn't "compatible" with anything. It is actually a fork of Django, currently based on the 1.5 release.

Title: Database-like operations without any database use
Tags: python,mysql,sql,database,nosql | Created: 2014-10-12T20:25:00.000
Categories: Database and SQL; Python Basics and Environment
Q_Id 26329613 (Q_Score 1, ViewCount 101, AnswerCount 1) | A_Id 26329692 (Users Score 1, Score 1.2, is_accepted true, Available Count 1)
Question: I have been given a few TSV files containing data, around 800MB total in a couple of files. Each of them has columns that link up with columns in another file. I have so far imported all of my data into Python and stored it in an array. I now need to find a way to build a database out of this data without using any SQL...
Answer: Yes, you could fake a lot of DB operations with a nested dict structure. Top level is your "tables", each table has entries (use a "primary key" on these) and each entry is a dict of key:value pairs where keys are "column names" and values are, well, values. You could even write a little sql-like query language on thi...

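The nested-dict layout this answer describes can be sketched in a few lines; the table names, helper functions, and sample rows are made up for illustration:

```python
# Top level maps "table name" -> {primary key -> row};
# each row is a dict of column name -> value, as the answer suggests.
db = {"people": {}, "jobs": {}}

def insert(table, pk, **cols):
    db[table][pk] = dict(cols)

def select(table, where):
    # `where` plays the role of a WHERE clause: a predicate over a row dict.
    return [row for row in db[table].values() if where(row)]

insert("people", 1, name="Ada", job_id=10)
insert("people", 2, name="Grace", job_id=11)
hits = select("people", lambda r: r["job_id"] == 10)
```

Joins across "tables" then become lookups of one row's foreign-key value in another table's dict.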
Title: How to tranfer data from Teradata to Greenplum using Python?
Tags: python,teradata,greenplum | Created: 2014-10-17T05:31:00.000
Categories: Database and SQL
Q_Id 26418454 (Q_Score 0, ViewCount 694, AnswerCount 1) | A_Id 26425426 (Users Score 1, Score 0.197375, is_accepted false, Available Count 1)
Question: I am using Python to establish a connection to Greenplum and run code automatically. For that I am using the drivers psycopg2, psycopg2.extensions and psycopg2.extras. I also have to establish a connection to Teradata, run some code, and transfer tables from Teradata to Greenplum. Can someone please suggest so...
Answer: I'm guessing the data volumes are at least moderate in size - tens of millions of rows or greater. Use FastExport or Teradata Parallel Transport to export the Teradata data to a flat file or named pipe, then ingest it using Greenplum's preferred method for bulk loading data from a flat file or named pipe. Other options may inclu...

Title: Bulk insert to GridFS in MongoDB
Tags: python,mongodb,pymongo,bulkinsert,gridfs | Created: 2014-10-17T16:03:00.000
Categories: Database and SQL; Python Basics and Environment
Q_Id 26429023 (Q_Score 1, ViewCount 1705, AnswerCount 3) | A_Id 26662382 (Users Score 1, Score 1.2, is_accepted true, Available Count 1)
Question: Is it possible? If so, then how? Currently I'm inserting strings >16MB into GridFS one by one, but it's very slow when dealing not with one string but with thousands. I checked the documentation, but didn't find a single line about bulk insert to GridFS storage, as opposed to a simple collection. I'm using PyMongo for co...
Answer: I read and researched all the answers, but unfortunately they didn't fulfill my requirements. The data that I needed to use for specifying the _id of the JSONs in GridFS was actually stored inside the JSON itself. It sounds like the worst idea ever, including redundancy, etc., but unfortunately it's a requirement. What I did is I ...

Title: User Registration and Authentication to a Database using Javascript
Tags: javascript,python,ruby-on-rails,angularjs,node.js | Created: 2014-10-22T22:41:00.000
Categories: Web Development; Database and SQL
Q_Id 26518355 (Q_Score 0, ViewCount 959, AnswerCount 3) | A_Id 26518519 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: I'm new to Web Dev and I came across a problem. I was wondering if there's a Javascript framework that will allow me to register and authenticate users against a database, like when using PHP and MySql. Also, when the user is granted access to the site, such user will be required to upload files that will be written to the l...
Answer: Welcome to the world of development. In general, javascript is only used to give more resources to the user's navigation on the site (e.g. visual effects). As you're starting out, I advise you to start studying the server-side login part. For security purposes, the one who confirms whether the user is logged in or not is ...

Title: Automatic documentation generation for Dymola code
Tags: doxygen,python-sphinx,documentation-generation,dymola | Created: 2014-10-23T13:56:00.000
Categories: Database and SQL
Q_Id 26529779 (Q_Score 1, ViewCount 274, AnswerCount 1) | A_Id 26543595 (Users Score 3, Score 1.2, is_accepted true, Available Count 1)
Question: Since I could not find an answer to my question here or in other forums, I decided to ask the community: does anybody know if, and how, it is possible to realize automatic documentation generation for code generated with Dymola? The background for this, e.g., is that I want/need to store additional informat...
Answer: If you mean the Modelica model code, how does the HTML export in Dymola work for you? What's missing? If you mean the C code generated by Dymola, the source code generation option enables more comments in the code.

Title: Common celery workers for different clients having different DBs
Tags: python,django,celery,django-celery | Created: 2014-10-29T18:06:00.000
Categories: Web Development; System Administration and DevOps; Database and SQL
Q_Id 26637631 (Q_Score 0, ViewCount 412, AnswerCount 1) | A_Id 26644301 (Users Score 1, Score 1.2, is_accepted true, Available Count 1)
Question: I'm using celery with django and am storing the task results in the DB. I'm considering having a single set of workers reading messages from a single message broker. Now I can have multiple clients submitting celery tasks and each client will have tasks and their results created/stored in a different DB. Even though th...
Answer: Eventually you will have duplicates. Many people ignore this issue because it is a "low probability", and then are surprised when it hits them. And then a story leaks about how someone was logged into another user's Facebook account. If you require them to always be unique, then you will have to prefix each ID with something th...

Title: Import Data Efficiently from Datastore to BigQuery every Hour - Python
Tags: python,google-app-engine,google-bigquery,google-cloud-datastore | Created: 2014-11-03T20:00:00.000
Categories: Web Development; System Administration and DevOps; Database and SQL
Q_Id 26722127 (Q_Score 2, ViewCount 541, AnswerCount 2) | A_Id 26722516 (Users Score 2, Score 0.197375, is_accepted false, Available Count 1)
Question: Currently, I'm using Google's 2-step method to back up the datastore and then import it to BigQuery. I also reviewed the code using pipeline. Both methods are not efficient and have high cost since all data is imported every time. I need to add only the records added since the last import. What is the right way of doing it? ...
Answer: There is no full working example (as far as I know), but I believe that the following process could help you: 1. You'd need to add a "last time changed" field to your entities, and update it. 2. Every hour you can run a MapReduce job, where your mapper can have a filter to check for last time updated and only pick up those...

Title: Whoosh: matching terms exactly
Tags: python,whoosh | Created: 2014-11-03T21:55:00.000
Categories: Database and SQL
Q_Id 26723964 (Q_Score 2, ViewCount 257, AnswerCount 1) | A_Id 26740422 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: Is there a way using Whoosh to return the documents that have a field matching exactly the terms in a query? For example, say I have a schema that has an autograph field with three possible values: Autograph, Partial autograph, and No Autograph. If I do a standard query autograph:autograph, I get all the records. Be...
Answer: I have come up with a solution, and it works. First off, I redefined my schema so that autograph was an ID field in Whoosh. Then I added a filter to the search call using a regex query. This works, but I am not going to accept it as the answer in the hope that there is a more elegant solution for filtering results.

Title: openshift python mongodb local
Tags: python,mongodb,openshift,bottle | Created: 2014-11-14T01:43:00.000
Categories: System Administration and DevOps; Database and SQL
Q_Id 26921629 (Q_Score 0, ViewCount 47, AnswerCount 1) | A_Id 26922793 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: I have a bottle+mongo application running on OpenShift. When I git-clone the application to my local computer, neither the database nor the env variables get downloaded to my computer, just the Python files. Should I mimic the mongo part on my local computer to develop locally? Or am I missing something here?
Answer: Yes. You have to run your own MongoDB server locally, or port forward and use the OpenShift MongoDB.

Title: SQLAlchemy, how to transition from single-user to multi-user
Tags: python-2.7,sqlalchemy,multi-user | Created: 2014-11-17T03:55:00.000
Categories: Database and SQL
Q_Id 26965270 (Q_Score 0, ViewCount 321, AnswerCount 1) | A_Id 27054069 (Users Score 0, Score 1.2, is_accepted true, Available Count 1)
Question: I wrote a database program using SQLAlchemy. So far, I've been using FreeFileSync to sync the database file over the network for two computers when necessary. I want to learn how to set things up so that the file stays in one place and allows multiple user access but I don't know where to begin. Is it possible to open...
Answer: I can't delete this question outright, so I will answer it with what I did. Part of the problem was that I was trying to find a solution for moving a sqlite3 database to a server, but it turns out that sqlite3 is only intended for use in simpler local situations. So I decided to migrate to MySQL. The following are the ...

Title: ProgrammingError: close cannot be used while an asynchronous query is underway
Tags: python-2.7,sqlalchemy,tornado,psycopg2,gevent | Created: 2014-11-19T19:45:00.000
Categories: Database and SQL
Q_Id 27025622 (Q_Score 2, ViewCount 1803, AnswerCount 1) | A_Id 27316073 (Users Score 2, Score 0.379949, is_accepted false, Available Count 1)
Question: I am getting this error when I query my REST app built with tornado, gevent, postgres and patched using psycogreen. I am constantly getting this error even when I am making requests at a concurrency of 10. If anyone has a solution or info about what I might be doing wrong, please share. Error messages: Programmin...
Answer: You are probably using the same connection with two different cursors concurrently.

Title: How do you get a column name and row from table?
Tags: python | Created: 2014-11-23T19:35:00.000
Categories: Database and SQL
Q_Id 27093359 (Q_Score 0, ViewCount 76, AnswerCount 1) | A_Id 27093407 (Users Score 1, Score 0.197375, is_accepted false, Available Count 1)
Question: Sorry, I deleted my code because I realized I wasn't supposed to put it up.
Answer: You are getting this error because the __init__() function in your class requires 3 arguments - new_dict, coloumn_name, and coloumn_value - and you did not supply them.

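Since the question's code was deleted, the class below is a reconstruction from the argument names in the answer (including the original "coloumn" spelling); it shows the TypeError you get when the required constructor arguments are omitted:

```python
class Row:
    # __init__ takes three required arguments besides self, so calling
    # Row() with no arguments raises TypeError, as the answer explains.
    def __init__(self, new_dict, coloumn_name, coloumn_value):
        self.new_dict = new_dict
        self.coloumn_name = coloumn_name
        self.coloumn_value = coloumn_value

try:
    Row()
except TypeError as exc:
    error_message = str(exc)

row = Row({}, "name", "Ada")  # supplying all three arguments works
```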
Title: Need advice on writing a document control software with Python and MySQL
Tags: python,mysql,file | Created: 2014-11-24T01:24:00.000
Categories: Database and SQL
Q_Id 27096588 (Q_Score 0, ViewCount 92, AnswerCount 2) | A_Id 30106383 (Users Score 1, Score 0.099668, is_accepted false, Available Count 1)
Question: I'm looking for open-ended advice on the best approach to re-write a simple document control app I developed, which is really just a custom file log generator that looks for and logs files that have a certain naming format and file location. E.g., we name all our Change Orders with the format "CO#3 brief description.do...
Answer: Firstly, if it works well as you suggest, then why fix it? Secondly, before doing any changes to your code I would ask myself the following questions: What are the improvements/new requirements I want to implement that I can't easily do with the current structure? Do I have a test suite of the current solution, so th...

Title: Should I use sqlite3 for storing username and password with python?
Tags: python,database,python-2.7,sqlite | Created: 2014-11-25T18:57:00.000
Categories: Database and SQL
Q_Id 27134539 (Q_Score 1, ViewCount 1093, AnswerCount 1) | A_Id 27134845 (Users Score 2, Score 1.2, is_accepted true, Available Count 1)
Question: I am trying to build a simple login / register system with python sockets and Tkinter. It might sound like a stupid question, but I really couldn't find it by searching Google. I am wondering if using sqlite3 for storing username and password (with a server) is a good idea. If not, please explain why I shouldn't use sq...
Answer: You'll need to store the names and (secured) passwords on the server. SQLite is a perfectly good solution for this but there are many, many other ways to do it. If your application does not otherwise use a database for storage there's no need to add database support just for this simple task. Assuming that you don't...

Title: How can I remove version 0.9.7 of sqlalchemy and install 0.7.8 instead?
Tags: python,linux,python-2.7,ubuntu,sqlalchemy | Created: 2014-11-27T01:37:00.000
Categories: Web Development; Database and SQL
Q_Id 27161760 (Q_Score 1, ViewCount 1813, AnswerCount 2) | A_Id 27371294 (Users Score 2, Score 1.2, is_accepted true, Available Count 1)
Question: I am using Ubuntu 14.04 and trying to run snoopy_auth, which is a part of the snoopy-ng application I downloaded and installed from their GitHub. When running, I get an error that is documented on snoopy-ng's GitHub page, which says that it works using version 0.7.8. How can I downgrade sqlalchemy to 0.7.8? The error lo...
Answer: To get past this error I simply ran the command: sudo easy_install "SQLAlchemy==0.7.8". Virtual environments do seem like the preferred method, though, so hopefully I don't run into any additional problems from downgrading system-wide.

Title: How to synchronise local Django sqlite database with the server one?
Tags: python,django,database,git | Created: 2014-11-27T04:17:00.000
Categories: Web Development; Database and SQL
Q_Id 27162982 (Q_Score 0, ViewCount 860, AnswerCount 1) | A_Id 27163084 (Users Score 2, Score 1.2, is_accepted true, Available Count 1)
Question: I have a working Django 1.6 project using sqlite deployed on Digital Ocean (Ubuntu). I use Git to update my project on the server side (git clone, and git pull thereafter). My question is: every time I update my database locally (e.g. added some new tables), how can I synchronise it with the server one? Using git pull res...
Answer: Follow these steps to push from local and pull to the server: 1. Make changes to models.py. 2. Add the change to git: git add models.py. 3. Commit: git commit -m "your message". 4. git push: this will push your local changes to the repo. 5. Go to the server now. 6. Run git status and see if there are any loc...

Title: Elastic Search query filtering
Tags: python,search,curl,elasticsearch | Created: 2014-11-27T08:43:00.000
Categories: Data Science and Machine Learning; Database and SQL
Q_Id 27166357 (Q_Score 0, ViewCount 5426, AnswerCount 2) | A_Id 27177167 (Users Score 1, Score 0.099668, is_accepted false, Available Count 1)
Question: I have uploaded some data into an Elastic server as "job id, job place, job req, job desc". My index is my_index and doctype = job_list. I need to write a query to find a particular term, say "Data Analyst", and it should give me back matching results with a specified field like "job place". I.e., Data Analyst term ...
Answer: The above search example looks correct. Try lowercasing "Data Analyst" as "data analyst". If that doesn't help, post your mappings, the query you are firing, and the response you are getting.

Title: SQLAlchemy ORM: safely passing objects between threads without manually reattaching?
Tags: python,multithreading,sqlalchemy | Created: 2014-11-28T17:57:00.000
Categories: Web Development; Database and SQL
Q_Id 27193849 (Q_Score 2, ViewCount 1254, AnswerCount 1) | A_Id 27194059 (Users Score 5, Score 1.2, is_accepted true, Available Count 1)
Question: I'm working on a multithreaded application that uses the SQLAlchemy ORM. It already uses scoped_session with the thread as its scope, but we are having some issues when we pass an ORM object from a worker thread back to the main thread. Since the objects are attached to the worker thread's session, when the worker thre...
Answer: Session.merge() is enough and should do what you're after, but even then it gets fiddly with threads. You might want to rethink this. Pass the primary key(s) to the worker instead of the objects, and then handle object loading and the actual work in the worker itself. No messing around with threading and open/closed se...

Title: Format strings to make 'table' in Python 3
Tags: python,formatting,tabular | Created: 2014-11-28T22:07:00.000
Categories: Database and SQL; Python Basics and Environment
Q_Id 27196501 (Q_Score 1, ViewCount 5713, AnswerCount 2) | A_Id 27209158 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: Right now I'm using print(), calling the variables I want that are stored in a tuple and then formatting them using: print(format(x,"<10s")+ format(y,"<40s")...) but this gives me output that isn't aligned in a column form. How do I make it so that each row's element is aligned? So, my code is for storing student detai...
Answer: My shell has the font settings changed so the alignment was off. Back to font: "Courier" and everything is working fine. Sorry.

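The format specs in the question do align columns as long as the widths are consistent; the accepted answer's real culprit was a non-fixed-width font. A minimal sketch (column widths and the student data are made up):

```python
# "<" pads on the right, ">" pads on the left; every row lines up
# as long as each column uses the same width as the header.
header_fmt = "{:<10}{:<20}{:>6}"
row_fmt = "{:<10}{:<20}{:>6.2f}"

lines = [header_fmt.format("Name", "Course", "GPA")]
for name, course, gpa in [("Alice", "CS", 3.9), ("Bob", "Mathematics", 3.55)]:
    lines.append(row_fmt.format(name, course, gpa))

print("\n".join(lines))  # every line is exactly 10 + 20 + 6 = 36 chars wide
```

In a proportional font the columns will still look ragged even though the character counts match, which is exactly what the answerer hit.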
Title: Delete cells in Excel using Python 2.7 and openpyxl
Tags: python,excel,openpyxl | Created: 2014-12-02T21:38:00.000
Categories: Database and SQL
Q_Id 27259478 (Q_Score 1, ViewCount 2658, AnswerCount 1) | A_Id 27280801 (Users Score 4, Score 1.2, is_accepted true, Available Count 1)
Question: I'm trying to delete cells from an Excel spreadsheet using openpyxl. It seems like a pretty basic command, but I've looked around and can't find out how to do it. I can set their values to None, but they still exist as empty cells. worksheet.garbage_collect() throws an error saying that it's deprecated. I'm using t...
Answer: In openpyxl cells are stored individually in a dictionary. This makes aggregate actions like deleting or adding columns or rows difficult as code has to process lots of individual cells. However, even moving to a tabular or matrix implementation is tricky as the coordinates of each cell are stored on each cell meaning ...

Title: can I combine NDB and mysqldb in one app on google cloud platform
Tags: google-app-engine,google-cloud-storage,google-cloud-datastore,mysql-python,app-engine-ndb | Created: 2014-12-03T17:47:00.000
Categories: Web Development; System Administration and DevOps; Database and SQL
Q_Id 27278297 (Q_Score 1, ViewCount 62, AnswerCount 1) | A_Id 28197823 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: Is it just about creating models that use the best fitting data store API? For part of the data I need relations, joins and sum(). For other this is not necessary but nosql way is more appropriate.
Answer: MySQL commands cannot be run on NoSQL. You will need to do some conversions during manipulation of the data from both DBs.

Title: database solution for very large table
Tags: python,mysql,sql,sqlite | Created: 2014-12-05T18:04:00.000
Categories: Database and SQL
Q_Id 27322027 (Q_Score 2, ViewCount 314, AnswerCount 2) | A_Id 27323942 (Users Score 0, Score 0, is_accepted false, Available Count 1)
Question: I am dealing with some performance issues whilst working with a very large dataset. The data is a pairwise distance matrix of ~60k entries. The resulting vectors have been generated in the following format: mol_a,mol_b,score,year_a,year_b 1,1,1,year,year 1,2,x,year,year 1,3,y,year,year ... 1,60000,z,year,year 2...
Answer: You have several options. The simplest is to simply save the output in chunks instead (say save one file for all the 1st molecule 'distance' scores, a second file for the second molecule distances, etc., with 60,000 files in all). That would allow you to also process your work in batches, and then aggregate to get the ...

Title: ImportError: No module named xlsxwriter
Tags: python-2.7,xlsxwriter | Created: 2014-12-09T17:29:00.000
Categories: Database and SQL
Q_Id 27385097 (Q_Score 38, ViewCount 200367, AnswerCount 9) | A_Id 42188912 (Users Score 1, Score 0.022219, is_accepted false, Available Count 4)
Question: I recently downloaded the xlsxwriter version 0.6.4 and installed it on my computer. It correctly added it to my C:\Python27\Lib\site-packages\xlsxwriter folder, however when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_G...
Answer: I am not sure what caused this, but it all went well once I changed the path name from Lib to lib, and I was finally able to make it work.

Title: ImportError: No module named xlsxwriter
Tags: python-2.7,xlsxwriter | Created: 2014-12-09T17:29:00.000
Categories: Database and SQL
Q_Id 27385097 (Q_Score 38, ViewCount 200367, AnswerCount 9) | A_Id 67318348 (Users Score 0, Score 0, is_accepted false, Available Count 4)
Question: I recently downloaded the xlsxwriter version 0.6.4 and installed it on my computer. It correctly added it to my C:\Python27\Lib\site-packages\xlsxwriter folder, however when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_G...
Answer: I found the same error when using xlsxwriter in my test.py application. First, check whether you have the xlsxwriter module installed: sudo pip install xlsxwriter. Then check the Python version you are using; the following worked for me: python2 test.py

I recently downloaded the xlsxwriter version 0.6.4 and installed it on my computer. It correctly added it to my C:\Python27\Lib\site-packages\xlsxwriter folder, however when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_G...
38
1
0.022219
0
false
72,355,605
0
200,367
4
0
0
27,385,097
In VSCode: instead of activating your environment with a script, use "Python: Select Interpreter" (press Ctrl+Shift+P) and then select your environment from the list (marked with "recommended").
1
0
0
ImportError: No module named xlsxwriter
9
python-2.7,xlsxwriter
0
2014-12-09T17:29:00.000
I recently downloaded the xlsxwriter version 0.6.4 and installed it on my computer. It correctly added it to my C:\Python27\Lib\site-packages\xlsxwriter folder, however when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_G...
38
5
0.110656
0
false
50,458,074
0
200,367
4
0
0
27,385,097
I managed to resolve this issue as follows... Be careful, make sure you understand the IDE you're using! - Because I didn't. I was trying to import xlsxwriter using PyCharm and was returning this error. Assuming you have already attempted the pip installation (sudo pip install xlsxwriter) via your cmd prompt, try using...
1
0
0
ImportError: No module named xlsxwriter
9
python-2.7,xlsxwriter
0
2014-12-09T17:29:00.000
My job id is job_7mb6iw3BHoMRC09US9Vqq-Qd06s. While uploading data with this job, the data was not getting uploaded to BigQuery, and I am not getting any error for this.
0
1
0.197375
0
false
27,449,004
0
95
1
0
0
27,416,642
That job failed with reason "invalid" and message starting with "Too many errors encountered." In order to detect job failure, when you get a successful response from jobs.get, first ensure that the job is in a DONE state, then look for the presence of errors in the status.errorResult.reason and status.errorResult.mess...
1
0
0
Bigquery data not getting uploaded
1
google-app-engine,python-2.7,google-bigquery
0
2014-12-11T06:26:00.000
My goal was to duplicate my Google App Engine application. I created a new application and uploaded all needed code from the source application (Python). Then I uploaded previously created backup files from the Cloud Storage of the source application (first I downloaded those files to a PC and then uploaded the files to a GCS bucket ...
8
4
0.379949
0
false
34,706,288
1
961
2
1
0
27,514,985
Yes!! What you are trying to do is not possible. The reason is that there are absolute references in the backup files to the original backup location (bucket). So moving the files to another GCS location will not work. Instead you have to leave the backup files in the original GCS bucket and give your new project read ...
1
0
0
Backup in one and restore in another Google App Engine application by using Cloud Storage?
2
python,google-app-engine
0
2014-12-16T22:17:00.000
My goal was to duplicate my Google App Engine application. I created a new application and uploaded all needed code from the source application (Python). Then I uploaded previously created backup files from the Cloud Storage of the source application (first I downloaded those files to a PC and then uploaded the files to a GCS bucket ...
8
1
0.099668
0
false
29,852,870
1
961
2
1
0
27,514,985
Given the message, my guess is that the target application has no read access to the bucket where the backup is stored. Add the application to the permitted users of that bucket before creating the backup, so that the backup objects will inherit the permission.
1
0
0
Backup in one and restore in another Google App Engine application by using Cloud Storage?
2
python,google-app-engine
0
2014-12-16T22:17:00.000
I get an error when trying to run ogr2ogr through subprocess, but I am able to run it using just the Windows command prompt. The script will be part of a series of processes that start with batch-importing gpx files into a postgres db. Can somebody please tell me what's wrong? Thanks! :::::::::::::::::::::::::::: Running T...
0
0
0
0
false
27,570,551
0
1,962
1
1
0
27,567,450
REINSTALLING the python bindings resolved my issue. I don't see GDAL on the paths below, but it's working now. Is it supposed to be there? Since it's not, will I probably have another round of GDAL head-scratching in the future? ::::::::::::::::::::::::::::::::::::::: THIS is what I currently have when I type in sys.pa...
1
0
0
ERROR: 'ogr2ogr' is not recognized as an internal or external command, operable program or batch file when running ogr2ogr in python script
1
python-2.7,subprocess
0
2014-12-19T13:48:00.000
I'm using the mySQLdb module within my django application which is linked to Apache via WSGI. However I'm getting permission issues (shown below). This is down to SElinux and if I set it to passive everything is ok. ImproperlyConfigured: Error loading MySQLdb module: /opt/django/virtenv/django15/lib/python2.7/site-p...
8
0
0
0
false
27,734,160
1
1,090
1
0
0
27,584,508
A couple of permission issues that I notice: Make sure your credentials for mySQLdb have access to the database. If you are using an IP and port to connect to the database, try using localhost. Make sure the user (chmod permissions) has access to the folder where MySQL stores stuff. Sometimes when storing media and things...
1
0
0
Python MySQLdb with SELinux
4
python,django,mysql-python,selinux
0
2014-12-20T21:29:00.000
I am trying to get writing privileges to my sqlite3.db file in my django project hosted on bluehost, but I cannot get any other chmod command to work besides the dangerous/risky chmod 777. When I chmod 777 the db file and the directory, everything works perfectly. However, in order to be more prudent, I’ve tried chmod...
2
1
1.2
0
true
27,594,818
1
310
1
0
0
27,594,703
The user accessing the database (www-data?) needs to have write privileges to the folder the data resides in as well as the file itself. I would probably change the group ownership (chgrp) of the folder to www-data and add the setgid bit to the folder as well (chmod g+s dbfolder). The last one makes sure that any...
1
0
0
Django on CentOS/Bluehost: Attempt to Write a Readonly Database, which Chmod besides 777 to use?
1
python,django,sqlite
0
2014-12-21T23:04:00.000
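The folder-permission part of the answer above can also be applied from Python with the stdlib `os` and `stat` modules. This is a hedged sketch on a throwaway temp directory: the real `chgrp` to `www-data` needs that group's gid (via the `grp` module) and usually root, so it is omitted here.

```python
import os
import stat
import tempfile

db_dir = tempfile.mkdtemp()  # stands in for the folder holding sqlite3.db

# Add group read/write/execute plus the setgid bit that "chmod g+s dbfolder"
# sets, so files created inside inherit the folder's group.
mode = os.stat(db_dir).st_mode
os.chmod(db_dir, mode | stat.S_IRWXG | stat.S_ISGID)

new_mode = os.stat(db_dir).st_mode
```

This keeps the database writable for the group without resorting to a world-writable 777 mode.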
I use python 2.7.3 and Windows7. I want to decorate the Excel chart by using Python. It's not necessary to make charts from start to end. First step(EXCEL STEP), I store data in the Excel sheet and make line chart roughly. (by selecting data range and using hot-key 'ALT+N+N+enter') Next step(PYTHON STEP), I want to m...
0
0
1.2
0
true
27,599,618
0
191
1
0
0
27,596,890
It seems that all the Python modules can only create Excel files, not activate existing charts. Try xlrd and xlwt. Good luck.
1
0
0
Selecting or activating existing Excel chart
1
python,excel,charts
0
2014-12-22T05:11:00.000
I'm developing intranet web app that is based on Pyramid with SQLAlchemy. It eventually may (will) happen that 2 users will edit the same record. How can I handle the requirement to notify the user who started editing later that particular record is being edited by the first user?
1
0
0
0
false
27,616,278
1
55
1
0
0
27,616,098
You need a table with current editor, record_id and timeout. The first editor asks per POST-request to edit a record and you put a new line in this table, with a reasonable timeout, say 5 min. The first editor gets an "ok" in return. For the second editor you find a match for the record_id in the table, look at the tim...
1
0
0
Notifying user's browser of change without websockets
2
python,web,pyramid
0
2014-12-23T07:44:00.000
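The lock-table scheme described in the answer above can be sketched with stdlib `sqlite3`. This is a minimal illustration only: the original app uses SQLAlchemy and Pyramid views, and the table/column names here are made up.

```python
import sqlite3
import time

LOCK_TIMEOUT = 300  # seconds, the "reasonable timeout, say 5 min"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edit_locks (record_id INTEGER PRIMARY KEY,"
             " editor TEXT, expires_at REAL)")

def try_lock(conn, record_id, editor, now=None):
    """Return True if `editor` now holds the edit lock on record_id."""
    now = time.time() if now is None else now
    row = conn.execute("SELECT editor, expires_at FROM edit_locks"
                       " WHERE record_id = ?", (record_id,)).fetchone()
    if row is not None and row[1] > now and row[0] != editor:
        return False  # another editor holds a live lock -> notify the user
    conn.execute("INSERT OR REPLACE INTO edit_locks VALUES (?, ?, ?)",
                 (record_id, editor, now + LOCK_TIMEOUT))
    return True

first = try_lock(conn, 42, "alice", now=1000.0)        # granted
second = try_lock(conn, 42, "bob", now=1010.0)         # within timeout -> denied
third = try_lock(conn, 42, "bob", now=1000.0 + 600)    # lock expired -> granted
```

The POST handler would call `try_lock` and return the denial to the second editor's browser.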
I am generating load test data in a Python script for Cassandra. Is it better to insert directly into Cassandra from the script, or to write a CSV file and then load that via Cassandra? This is for a couple million rows.
0
0
0
1
false
27,688,141
0
364
1
0
0
27,678,990
For a few million, I'd say just use CSV (assuming rows aren't huge); and see if it works. If not, inserts it is :) For more heavy duty stuff, you might want to create sstables and use sstable loader.
1
0
0
Python/Cassandra: insert vs. CSV import
1
python,cassandra,load-testing
0
2014-12-28T17:47:00.000
I have a typical Django project with one primary database where I keep all the data I need. Suppose there is another DB somewhere with some additional information. That DB isn't directly related to my Django project, so let's assume I do not even have control over it. The problem is that I do not know if I need to cr...
1
0
0
0
false
27,744,297
1
437
1
0
0
27,742,457
I would create the minimal django models on the external databases => those that interact with your code: Several outcomes to this If parts of the database you're not interested in change, it won't have an impact on your app. If the external models you're using change, you probably want to be aware of that as quickly as...
1
0
0
Django models with external DBs
3
python,django
0
2015-01-02T12:45:00.000
I have a Python client program (which will be available to a limited number of users) that fetches data from a remote MySQL-DB using the pymysql-Module. The problem is that the login data for the DB is visible for everyone who takes a look at the code, so everyone could manipulate or delete data in the DB. Even if I wo...
1
0
0
0
false
27,743,210
0
934
1
0
0
27,743,031
This happens to be one of the reasons desktop client-server architecture gave way to web architecture. Once a desktop user has access to a dbms, they don't have to use just the SQL in your application. They can do whatever their privileges allow. In those bad old days, client-server apps only could change rows in the D...
1
0
0
Secure MySQL login data in a Python client program
2
python,mysql,pymysql
0
2015-01-02T13:31:00.000
I am connecting to MySQL database using torndb in Python. Is there a way to switch between databases after connection is established? Like the select_db method?
0
0
0
0
false
27,760,934
0
106
2
0
0
27,760,817
This ultimately depends on whether the database is running on the same host and in the same instance of MySQL. If it is running in the same instance, you should be able to prefix your table names with the database name. For example; "select splat from foo.bar where splat is not null" where foo is the database name and bar...
1
0
0
Torndb - Switch from one database to another
2
python,tornado
0
2015-01-03T23:45:00.000
I am connecting to MySQL database using torndb in Python. Is there a way to switch between databases after connection is established? Like the select_db method?
0
2
0.197375
0
false
27,765,801
0
106
2
0
0
27,760,817
Switch db: conn.execute('use anotherDBName');
1
0
0
Torndb - Switch from one database to another
2
python,tornado
0
2015-01-03T23:45:00.000
I'm running Django with Postgres database. On top of application-level security checks, I'm considering adding database-level restrictions. E.g. the application code should only be able to INSERT into log tables, and not UPDATE or DELETE from them. I would manually create database user with appropriate grants for this...
2
0
0
0
false
27,964,212
1
1,088
2
0
0
27,819,930
Yes, this is practiced sometimes, but not commonly. The best way to do it is to grant specific privileges to the database user, not in Django. Making such restrictions means that we should not trust the application, because it might change files / data in the DB in ways that we do not expect. So, to sum up: create anoth...
1
0
0
Restricted database user for Django
3
python,django,database,postgresql,security
0
2015-01-07T12:50:00.000
I'm running Django with Postgres database. On top of application-level security checks, I'm considering adding database-level restrictions. E.g. the application code should only be able to INSERT into log tables, and not UPDATE or DELETE from them. I would manually create database user with appropriate grants for this...
2
1
0.066568
0
false
27,972,123
1
1,088
2
0
0
27,819,930
For storing the credentials to the privileged user for management commands, when running manage.py you can use the --settings flag, which you would point to another settings file that has the other database credentials. Example migrate command using the new settings file: python manage.py migrate --settings=myapp.privi...
1
0
0
Restricted database user for Django
3
python,django,database,postgresql,security
0
2015-01-07T12:50:00.000
I've gone through many threads related to installing mysql-python in a virtualenv, including those specific to users of Percona. None have solved my problem thus far. With Percona, it is normal to get a long error on pip install MySQL-python in the virtualenv that ultimately says EnvironmentError: mysql_config not foun...
1
1
1.2
0
true
27,829,817
0
110
1
0
0
27,828,737
Found the solution! I think it was improper of me to install mysql-devel in the first place, so I went ahead and uninstalled it. Instead, I used a package supplied by Percona - Percona-Server-devel-55 yum install Percona-Server-devel-55 and the problem is solved!
1
0
0
Unable to get these to cooperate: mysql-python + virtualenv + percona + centos6
1
python-2.7,virtualenv,mysql-python,centos6,percona
0
2015-01-07T21:11:00.000
I have a python script set up that captures game data from users while the game is being played. The end goal of this is to get all that data, from every user, into a postgresql database on my web server where it can all be collated and displayed via django The way I see it, I have 2 options to accomplish this: While ...
0
1
0.099668
0
false
27,885,848
1
70
1
0
0
27,885,733
The script makes a POST request to your Django web server either with login/pwd or unique string. The web server validates credentials and inserts data into DB.
1
0
0
Clients uploading to database
2
python,postgresql,security,csv
0
2015-01-11T09:46:00.000
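The "POST to your Django server" option from the answer above can be sketched with stdlib `urllib`. Everything specific here is hypothetical: the endpoint URL, the token header, and the payload fields are made up for illustration; the Django view would validate the credentials before inserting into Postgres.

```python
import json
import urllib.request

ENDPOINT = "https://example.com/api/game-data/"  # hypothetical upload URL

def build_upload_request(payload, token):
    """Build (but do not send) an authenticated JSON POST request."""
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(ENDPOINT, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", "Token %s" % token)
    return req  # the real client would pass this to urllib.request.urlopen()

req = build_upload_request({"player": "p1", "score": 10}, "secret-token")
```

Keeping the database behind an HTTP endpoint means the game clients never hold raw Postgres credentials.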
I need to backup the current db while logged into odoo. I should be able to do it using a button, so that suppose I click on the button, it works the same way as odoo default backup in manage databases, but I should be able to do it from within while logged in. Is there any way to achieve this? I do know that this is p...
13
1
0.022219
0
false
28,070,202
1
4,840
1
0
0
27,935,745
You can use a private browser session to access the Database menu, from the login screen, and perform the backup from there (you need to know the master password to access that, defined in the server configuration file).
1
0
0
Backup Odoo db from within odoo
9
python,openerp,odoo
0
2015-01-14T04:11:00.000
Python 2.7 and 3.4 co-exist in my mac-os. After installing the official mysql connector (downloaded from dev.mysql.com), import mysql.connector can only pass in python 2.7. Is there any way for the connector to work for both python versions?
0
0
0
0
false
47,486,286
0
569
1
0
0
28,039,131
Relative to Python 3.6: The official mysql connector only worked in python 2.7 after I installed it on OSX. As an alternative I used the easy_install-3.6 python module integrated in python 3.6. Go to directory: /Library/Frameworks/Python.framework/Versions/3.6/bin command: easy_install-3.6 mysql-connector-python
1
0
0
mysql-python connector work with python2.7 and 3.4 at the same time
2
python,mysql,mysql-python,mysql-connector,python-3.4
0
2015-01-20T06:36:00.000
I want to build a universal database in which I will keep data from multiple countries, so I will need to work with the UNICODE charset. I need a little help in order to figure out which is the best way to work with stuff like that and how my queries will be affected (some sql example queries from php/python for basic...
0
1
0.099668
0
false
28,096,917
0
1,733
2
0
0
28,096,856
Just put an N in front of the string, something like INSERT INTO MYTABLE VALUES(N'xxx'), and make sure your column type is nvarchar.
1
0
0
How to insert UNICODE characters to SQL db?
2
php,python,sql,unicode
1
2015-01-22T19:13:00.000
I want to build a universal database in which I will keep data from multiple countries, so I will need to work with the UNICODE charset. I need a little help in order to figure out which is the best way to work with stuff like that and how my queries will be affected (some sql example queries from php/python for basic...
0
0
0
0
false
28,096,868
0
1,733
2
0
0
28,096,856
There is nothing special you need to do. With PHP you can do... query("SET NAMES utf8");
1
0
0
How to insert UNICODE characters to SQL db?
2
php,python,sql,unicode
1
2015-01-22T19:13:00.000
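The two answers above can be demonstrated in spirit with a parameterized insert. This sketch uses stdlib `sqlite3` for illustration only (the N'...' prefix is a SQL Server detail and "SET NAMES utf8" a MySQL connection setting; the table and sample names are made up): with parameterized queries and a Unicode-capable column type, the queries themselves need nothing special.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (country TEXT, name TEXT)")

# Parameterized queries pass Unicode strings through untouched; never
# build the SQL by string concatenation.
rows = [("JP", "日本語"), ("RU", "Пример"), ("DE", "Müller")]
conn.executemany("INSERT INTO names VALUES (?, ?)", rows)
fetched = conn.execute("SELECT name FROM names ORDER BY country").fetchall()
```

For MySQL, the equivalent prerequisites are a utf8mb4 column charset and a utf8-configured connection.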
I installed PyCharm 4 on my Mac (Yosemite), then installed SQLAlchemy through easy_install in the console; I also already have the official python 2.7.9 IDLE. I tried to import the SQLAlchemy module in the official IDLE and it works, but in the PyCharm 4 IDE it doesn't. How can I fix this error? Traceback (most recent call last): File "/U...
0
0
1.2
0
true
28,123,162
0
664
1
0
0
28,121,229
Go into Settings -> Project Settings -> Project Interpreter. Then press configure interpreter, and navigate to the "Paths" tab. Press the + button in the Paths area. You can put the path to the module you'd like it to recognize.
1
0
0
SQLAlchemy with Pycharm 4
1
macos,sqlalchemy,pycharm,osx-yosemite,python-2.x
0
2015-01-24T01:10:00.000
Many spreadsheets have formulas and formatting that Python tools for reading and writing Excel files cannot faithfully reproduce. That means that any file I want to create programmatically must be something I basically create from scratch, and then other Excel files (with the aforementioned sophistication) have to refe...
26
6
1
0
false
28,254,411
0
32,281
1
0
0
28,142,420
I'm 90% confident the answer to "can pandas do this" is no. Posting a negative is tough, because there always might be something clever that I've missed, but here's a case: Possible interface engines are xlrd/xlwt/xlutils, openpyxl, and xlsxwriter. None will work for your purposes, as xlrd/wt don't support all formu...
1
0
0
Can Pandas read and modify a single Excel file worksheet (tab) without modifying the rest of the file?
6
python,excel,pandas
0
2015-01-25T22:38:00.000
Recently I've been receiving this error regarding what appears to be an insufficiency in connection slots along with many of these Heroku errors: H18 - Request Interrupted H19 - Backend connection timeout H13 - Connection closed without response H12 - Request timeout Error django.db.utils.OperationalError in / FATAL:...
2
4
1.2
0
true
28,395,905
1
3,434
1
0
0
28,238,144
I realized that I was using the Django dev server in my Procfile: I had accidentally commented out gunicorn and committed that to Heroku. Once I switched back to gunicorn on the same Heroku plan, the issue was resolved. Using a production-level application server really makes a big difference. Also don't code at crazy ...
1
0
0
Django/Postgres: FATAL: remaining connection slots are reserved for non-replication superuser connections
1
python,django,postgresql,heroku,django-queryset
0
2015-01-30T14:35:00.000
I've written a python/webdriver script that scrapes a table online, dumps it into a list and then exports it to a CSV. It does this daily. When I open the CSV in Excel, it is unformatted, and there are fifteen (comma-delimited) columns of data in each row of column A. Of course, I then run 'Text to Columns' and get ev...
1
0
0
1
false
28,238,935
0
24
1
0
0
28,238,830
Try importing it as a csv file, instead of opening it directly on excel.
1
0
0
Retain Excel Settings When Adding New CSV
1
python,excel,csv
0
2015-01-30T15:11:00.000
Most of the Flask tutorials and examples I see use an ORM such as SQLAlchemy to handle interfacing with the user database. If you have a general working knowledge of SQL, is this extra level of abstraction, heavy with features, necessary? I am tempted to write a lightweight interface/ORM of my own so I better understan...
2
0
0
0
false
28,280,443
1
970
1
0
0
28,271,711
No, an ORM is not required, just incredibly convenient. SQLAlchemy will manage connections, pooling, sessions/transactions, and a wide variety of other things for you. It abstracts away the differences between database engines. It tracks relationships between tables in convenient collections. It generally makes wor...
1
0
0
Handling user database in flask web app without ORM like SQLAlchemy
1
python,sql,orm,flask,sqlalchemy
0
2015-02-02T05:39:00.000
I save my data on RethinkDB Database. As long as I dont restart the server, all is well. But when I restart, it gives me an error saying database doesnt exist, although the folder and data does exist in folder rethinkdb_data. What is the problem ?
7
10
1.2
0
true
28,330,153
0
897
1
0
0
28,329,352
You're almost certainly not losing data, you're just starting RethinkDB without pointing it to the data. Try the following: Start RethinkDB from the directory that contains the rethinkdb_data directory. Alternatively, pass the -d flag to RethinkDB to point it to the directory that contains rethinkdb_data. For example,...
1
0
0
RethinkDB losing data after restarting server
1
python,ubuntu-14.04,rethinkdb,rethinkdb-python
0
2015-02-04T19:04:00.000
I'm trying to pip install pymssql in my Centos 6.6, but kept on experiencing this error: _mssql.c:314:22: error: sqlfront.h: No such file or directory cpp_helpers.h:34:19: error: sybdb.h: No such file or directory I already installed freetds, freetds-devel, and cython. Any ideas? Thanks in advance!
1
2
1.2
0
true
28,349,658
0
2,708
1
0
0
28,343,666
Looking at the full traceback we see that include_dirs includes /usr/local/include but the header files are in /usr/include which I imagine has to do with the fact python 2.7 is not the system python. You can change the setup.py script to include /usr/include or copy the files into /usr/local/include
1
0
0
Installing pymssql in Centos 6.6 64-bit
1
python,python-2.7,pip,pymssql
0
2015-02-05T12:12:00.000
I'm building a web crawler and I wanted to save links in a database with information like type, size, etc., and I actually don't know when (how often) I should commit the database. In other terms: is it a problem if I commit the database every 0.1 second?
0
0
1.2
0
true
28,400,155
0
67
1
0
0
28,400,064
In terms of logical correctness, you should commit every time a set of one or more queries that are supposed to execute atomically (i.e, all of them, or else none of them, execute) is finished. There is no connection between this logical correctness and any given amount of time between commits. In your vaguely-sketche...
1
0
0
Python sqlite3 correct use of commit
1
python,sqlite
0
2015-02-08T22:23:00.000
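The "commit per atomic unit of work" rule from the answer above can be sketched with stdlib `sqlite3`. A hedged illustration: the table and a crawler's "one page, all its links" unit are assumptions, not the asker's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE links (url TEXT, type TEXT, size INTEGER)")

def save_page_links(conn, links):
    """Insert all links found on one crawled page as one atomic unit."""
    with conn:  # commits on success, rolls back if any insert fails
        conn.executemany("INSERT INTO links VALUES (?, ?, ?)", links)

save_page_links(conn, [("http://a", "text/html", 120),
                       ("http://b", "image/png", 2048)])
count = conn.execute("SELECT COUNT(*) FROM links").fetchone()[0]
```

Committing once per logical unit, rather than on a 0.1-second timer, keeps the database consistent and avoids paying a fsync per row.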
I launched a Spark/Cassandra cluster with DataStax DSE in the AWS cloud, so my dataset is stored in S3. But I don't know how to transfer data from S3 to my Cassandra cluster. Please help me.
0
1
1.2
1
true
28,419,293
0
1,657
1
0
0
28,417,806
The details depend on your file format and C* data model but it might look something like this: Read the file from s3 into an RDD val rdd = sc.textFile("s3n://mybucket/path/filename.txt.gz") Manipulate the rdd Write the rdd to a cassandra table: rdd.saveToCassandra("test", "kv", SomeColumns("key", "value"))
1
0
0
How import dataset from S3 to cassandra?
2
python,cassandra,datastax-enterprise
0
2015-02-09T19:34:00.000
I've been looking for ways to do this and haven't found a good solution to this. I'm trying to copy a sheet in an .xlsx file that has macros to another workbook. I know I could do this if the sheet contained data in each cell but that's not the case. The sheet contains checkboxes and SOME text. Is there a way to do thi...
0
0
0
0
false
28,946,313
0
194
1
0
0
28,423,604
Try the win32com package. This offers a VBA-like interface for Python. You can find it on SourceForge. I've done some projects with this package; we can discuss your problem further if this package helps.
1
0
0
Copy sheet with Macros to workbook in Python
1
python,xlsx,xlsxwriter
0
2015-02-10T03:32:00.000
Is it possible to connect a Flask app to a database using MySQLdb-python and vertica_python? It seems that the recommended Flask library for accessing databases is Flask-SQLAlchemy. I have an app that connects to MySQL and Vertica databases, and have written a GUI wrapper for it in Flask using flask-wtforms, but am ju...
0
0
0
0
false
28,502,445
1
224
1
0
0
28,489,779
Yes, it is possible. I was having difficulties debugging because of the opacity of the error, but ran it with app.run(debug=True), and managed to troubleshoot my problem.
1
0
0
Can I connect a flask app to a database using MySQLdb-python vertica_python?
1
python,flask,flask-sqlalchemy
0
2015-02-12T23:21:00.000
Currently I am installing psycopg2 to work within Eclipse with Python. I am finding a lot of problems: The first problem sudo pip3.4 install psycopg2 is not working and it is showing the following message Error: pg_config executable not found. FIXED WITH: export PATH=/Library/PostgreSQL/9.4/bin/:"$PATH" When I i...
48
13
1
0
false
60,101,069
0
17,998
2
1
0
28,515,972
I was able to fix this on my Mac (running Catalina, 10.15.3) by using psycopg2-binary rather than psycopg2. pip3 uninstall psycopg2 pip3 install psycopg2-binary
1
0
0
Problems using psycopg2 on Mac OS (Yosemite)
8
python,eclipse,macos,postgresql,psycopg2
0
2015-02-14T13:12:00.000
Currently I am installing psycopg2 to work within Eclipse with Python. I am finding a lot of problems: The first problem sudo pip3.4 install psycopg2 is not working and it is showing the following message Error: pg_config executable not found. FIXED WITH: export PATH=/Library/PostgreSQL/9.4/bin/:"$PATH" When I i...
48
4
0.099668
0
false
28,949,608
0
17,998
2
1
0
28,515,972
I am using yosemite, postgres.app & django. this got psycopg2 to load properly for me but the one difference was that my libpq.5.dylib file is in /Applications/Postgres.app/Contents/Versions/9.4/lib. thus my second line was sudo ln -s /Applications/Postgres.app/Contents/Versions/9.4/lib/libpq.5.dylib /usr/lib
1
0
0
Problems using psycopg2 on Mac OS (Yosemite)
8
python,eclipse,macos,postgresql,psycopg2
0
2015-02-14T13:12:00.000
I have a python process serving as a WSGI-apache server. I have many copies of this process running on each of several machines. About 200 megabytes of my process is read-only python data. I would like to place these data in a memory-mapped segment so that the processes could share a single copy of those data. Best wou...
11
1
0.049958
0
false
30,273,392
0
1,046
1
0
0
28,570,438
One possibility is to create a C- or C++-extension that provides a Pythonic interface to your shared data. You could memory map 200MB of raw data, and then have the C- or C++-extension provide it to the WSGI-service. That is, you could have regular (unshared) python objects implemented in C, which fetch data from some ...
1
0
1
How to store easily python usable read-only data structures in shared memory
4
python,shared-memory,wsgi,uwsgi
0
2015-02-17T20:23:00.000
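The memory-mapped-segment idea from the question above can be sketched with the stdlib `mmap` module. A minimal, hedged illustration: one process serializes the read-only data once, and each worker maps the same file so the OS shares the pages between processes. Note that deserializing (`mm[:]`) still copies into the process; avoiding that copy is exactly what the answer's C-extension approach would add.

```python
import json
import mmap
import os
import tempfile

# Writer process: serialize the read-only data to a file once.
path = os.path.join(tempfile.mkdtemp(), "shared.bin")
data = {"answer": 42, "pi": 3.14159}
with open(path, "wb") as f:
    f.write(json.dumps(data).encode("utf-8"))

# Each WSGI worker: map the file read-only; pages are shared by the OS.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    loaded = json.loads(mm[:].decode("utf-8"))
    mm.close()
```

For 200 MB of data, a flat binary layout readable in place (rather than JSON) would let workers query the mapping without materializing Python objects.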
I have a file that is several G in size and contains a JSON hash on each line. The document itself is not a valid JSON document, however I have no control over the generation of this data so I cannot change it. The JSON needs to be read, lookups need to be performed on certain "fields" in the JSON and then the result ...
1
1
1.2
0
true
34,722,784
0
76
1
0
0
28,579,257
1) filter out the lines you can ignore. 2) work out your table dependency graph and partition rows into multiple files by table. 3) insert all rows for tables without dependencies; optionally, cache these so you don't have to ask the DB what you just told it for lookups. N) use that cache + do any DB lookups required t...
1
0
1
Mass insert of data with intermediate lookups using Python and MySQL
1
python,mysql,json,database
0
2015-02-18T08:42:00.000
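The filter-then-cache pipeline from the answer above can be sketched like this. A minimal illustration under assumptions: the file is JSON-per-line (not one valid document, as the question says), and the `table`/`name` field names plus the lookup stand-in are made up.

```python
import io
import json

# Stands in for the multi-GB file: one JSON hash per line.
raw = io.StringIO(
    '{"table": "users", "name": "ann"}\n'
    '{"table": "skip_me"}\n'
    '{"table": "users", "name": "ann"}\n'
)

cache = {}
stats = {"db_lookups": 0}

def lookup_id(name):
    """Cached lookup: only the first miss 'hits the database'."""
    if name not in cache:
        stats["db_lookups"] += 1  # stands in for a real DB round trip
        cache[name] = len(cache) + 1
    return cache[name]

rows = []
for line in raw:                     # stream: never load the whole file
    rec = json.loads(line)
    if rec["table"] != "users":      # step 1: filter ignorable lines
        continue
    rows.append((lookup_id(rec["name"]), rec["name"]))
```

The collected `rows` per table can then be bulk-inserted in dependency order, with the cache sparing repeated DB lookups.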
Query results from some Postgres data types are converted to native types by psycopg2. Neither pgdb (PostgreSQL) and cx_Oracle seem to do this. …so my attempt to switch pgdb out for psycopg2cffi is proving difficult, as there is a fair bit of code expecting strings, and I need to continue to support cx_Oracle. The ps...
2
3
1.2
0
true
28,602,221
0
274
1
0
0
28,597,575
You can re-register a plain string type caster for every single PostgreSQL type (or at least for every type you expect a string for in your code): when you register a type caster for an already registered OID the new definition takes precedence. Just have a look at the source code of psycopg (both C and Python) to find...
1
0
0
Can one disable conversion to native types when using psycopg2?
1
postgresql,psycopg2,python-db-api
0
2015-02-19T02:11:00.000
I need to process a lot of .xls files which come out of this Microscopy image analysis software called Aperio (after analysis with Aperio, it allows you to export the data as "read-only" xls format. The save-as only works in Excel on a Mac, on windows machine, the save and save as buttons are greyed out since the files...
0
1
0.066568
0
false
28,634,898
0
415
1
0
0
28,632,987
Use JODConverter. You have an Excel 4.0 file; too old for Apache POI.
1
0
0
How to program to save a bunch of ".xls" files in Excel
3
java,python,excel,apache-poi,poi-hssf
0
2015-02-20T15:51:00.000
I could use some help. My python 3.4 Django 1.7.4 site worked fine using sqlite. Now I've moved it to Heroku which uses Postgres. And when I try to create a user / password i get this error: column "is_superuser" is of type integer but expression is of type boolean LINE 1: ...15-02-08 19:23:26.965870+00:00', "is_su...
0
0
0
0
false
28,636,553
1
2,600
2
0
0
28,636,141
It seems to me that you are using raw SQL queries instead of Django ORM calls and this causes portability issues when you switch database engines. I'd strongly suggest to use ORM if it's possible in your case. If not, then I'd say that you need to detect database engine on your own and construct queries depending on cu...
1
0
0
column "is_superuser" is of type integer but expression is of type boolean DJANGO Error
3
django,python-3.x,heroku,heroku-postgres
0
2015-02-20T18:50:00.000
I could use some help. My python 3.4 Django 1.7.4 site worked fine using sqlite. Now I've moved it to Heroku which uses Postgres. And when I try to create a user / password i get this error: column "is_superuser" is of type integer but expression is of type boolean LINE 1: ...15-02-08 19:23:26.965870+00:00', "is_su...
0
-1
-0.066568
0
false
28,638,965
1
2,600
2
0
0
28,636,141
The problem is caused by a variable trying to change data types (i.e. from a char field to date-time) in the migration files. A database like PostgreSQL might not know how to change the variable type. So, make sure the variable has the same type in all migrations.
1
0
0
column "is_superuser" is of type integer but expression is of type boolean DJANGO Error
3
django,python-3.x,heroku,heroku-postgres
0
2015-02-20T18:50:00.000
I have a large dataset that I do not have direct access to and am trying to convert the data headers into column headings using Python and then returning it back to Excel. I have created the function to do this and it works but I have hit a snag. What I want the Excel VBA to do is loop down the range and if the cell's...
0
0
0
0
false
28,684,225
0
5,808
1
0
0
28,663,658
Thanks for your help. I've got this to work now and I'm super excited about the future possibilities for Python, xlwings and Excel. My problem was simple once I got the looping through the range sorted (which incidentally was handily imported as each row per element rather than each cell). I had declared my list outsid...
1
0
0
xlwings output to iterative cell range
2
python,excel,python-3.x,xlwings,vba
0
2015-02-22T21:44:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don't see any command suspicious to sub...
32
8
1
0
false
35,020,640
1
73,141
6
0
0
28,685,931
The new Django 1.9 has removed "syncdb"; run "python manage.py migrate". If you are trying to create a superuser, run "python manage.py createsuperuser".
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don't see any command suspicious to sub...
32
0
0
0
false
34,814,438
1
73,141
6
0
0
28,685,931
You can run the command from the project folder as "python.exe manage.py migrate", from a command line or in a batch file. You could also downgrade Django to an older version (before 1.9) if you really need syncdb. For people trying to run syncdb from Visual Studio 2015: the option syncdb was removed from Django 1.9 (d...
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don't see any command suspicious to sub...
32
0
0
0
false
36,004,441
1
73,141
6
0
0
28,685,931
Run the command python manage.py makemigrations, and then python manage.py migrate to sync.
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don't see any command suspicious to sub...
32
1
0.019997
0
false
42,688,208
1
73,141
6
0
0
28,685,931
Django has removed the python manage.py syncdb command; now you can simply use python manage.py makemigrations followed by python manage.py migrate. The database will sync automatically.
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don't see any command suspicious to sub...
32
2
0.039979
0
false
42,795,652
1
73,141
6
0
0
28,685,931
From Django 1.9 onwards the syncdb command is removed. Instead, you can use the migrate command, e.g. python manage.py migrate. Then you can run your server with the python manage.py runserver command.
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don't see any command suspicious to sub...
32
0
0
0
false
43,525,717
1
73,141
6
0
0
28,685,931
Alternate way: uninstall the Django module from the environment, edit requirements.txt to pin Django<1.9, run Install from the Requirements option in the environment, and try syncdb again. This worked for me.
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I have a CherryPy Webapp that I originally wrote using file based sessions. From time to time I store potentially large objects in the session, such as the results of running a report - I offer the option to download report results in a variety of formats, and I don't want to re-run the query when the user selects a do...
2
1
0.066568
0
false
28,705,996
0
1,009
2
0
0
28,705,661
Sounds like you want to store the object itself in Memcache, keep only a reference to it in the session, and pull it back when you need it, rather than relying on the session state to handle the loading/saving.
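A minimal sketch of that pattern: the session keeps only a small key, while the large object lives in the cache. A plain dict stands in for a real memcached client here, and the function and key names are illustrative, not part of any library API:

```python
import uuid

cache = {}  # stand-in for a memcached/Redis client

def store_report(session, report_data):
    """Put the large object in the cache; keep only its key in the session."""
    key = "report:" + uuid.uuid4().hex
    cache[key] = report_data      # real code: memcache_client.set(key, data)
    session["report_key"] = key   # the session stays small
    return key

def load_report(session):
    """Pull the object back by the reference stored in the session."""
    return cache.get(session.get("report_key"))

session = {}
store_report(session, {"rows": list(range(5))})
print(load_report(session))  # {'rows': [0, 1, 2, 3, 4]}
```

This way the user's download requests re-read the cached result by key instead of re-running the query, and the session store never has to serialize the large object.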
1
0
0
CherryPy Sessions and large objects?
3
python,cherrypy
0
2015-02-24T20:30:00.000
I have a CherryPy Webapp that I originally wrote using file based sessions. From time to time I store potentially large objects in the session, such as the results of running a report - I offer the option to download report results in a variety of formats, and I don't want to re-run the query when the user selects a do...
2
1
0.066568
0
false
28,717,896
0
1,009
2
0
0
28,705,661
From what you have explained I can conclude that conceptually it isn't a good idea to mix user sessions and a cache. Sessions are mostly designed for holding user-identity state, so they have security measures, locking to avoid concurrent changes, and other aspects. Also, session storage is usually volatile...
1
0
0
CherryPy Sessions and large objects?
3
python,cherrypy
0
2015-02-24T20:30:00.000
I'm making queries against an MS SQL Server using Python code (the pymssql library); however, I was wondering if there was any way to make the connection secure and encrypt the data being sent from the server to Python? Thanks
4
0
0
0
false
38,181,077
0
6,365
1
0
0
28,724,427
If you want to connect to SQL Server over a secured connection using pymssql, you need to use the "secure" form of the host name, e.g.: unsecured connection host: xxx.database.windows.net:1433; secured connection host: xxx.database.secure.windows.net:1443
1
0
0
Can Pymssql have a secure connection (SSL) to MS SQL Server?
4
python,sql-server,pymssql
0
2015-02-25T16:32:00.000
Python application, standard web app. If a particular request gets executed twice by error, the second request will try to insert a row with an already existing primary key. What is the most sensible way to deal with it? a) Execute a query to check if the primary key already exists and do the checking and error handli...
1
1
0.099668
0
false
28,787,981
1
61
2
0
0
28,787,814
You need to do the latter and handle the error in any case, so I do not see much value in querying for duplicates, except to show the user information beforehand - e.g. reporting "This username has already been taken, please choose another" while the user is still filling in the form.
1
0
0
Let the SQL engine do the constraint check or execute a query to check the constraint beforehand
2
python,mysql,sql
0
2015-02-28T22:40:00.000
Python application, standard web app. If a particular request gets executed twice by error, the second request will try to insert a row with an already existing primary key. What is the most sensible way to deal with it? a) Execute a query to check if the primary key already exists and do the checking and error handli...
1
2
1.2
0
true
28,788,000
1
61
2
0
0
28,787,814
The best option is (b), from almost any perspective. As mentioned in a comment, there is a multi-threading issue. That means that option (a) doesn't even protect data integrity. And that is a primary reason why you want data integrity checks inside the database, not outside it. There are other reasons. Consider per...
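Option (b) can be sketched with Python's stdlib sqlite3 standing in for MySQL (the table and function names here are illustrative): just attempt the insert and let the database's primary-key constraint reject the duplicate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def insert_user(conn, user_id, name):
    """Try the insert and let the database enforce the constraint."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO users (id, name) VALUES (?, ?)", (user_id, name)
            )
        return True
    except sqlite3.IntegrityError:
        # A duplicate primary key (the second, accidental request) lands here.
        return False

print(insert_user(conn, 1, "alice"))  # True
print(insert_user(conn, 1, "alice"))  # False: constraint violated
```

Because the check and the insert are one atomic statement, two concurrent requests cannot both succeed, which the check-then-insert approach of option (a) cannot guarantee.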
1
0
0
Let the SQL engine do the constraint check or execute a query to check the constraint beforehand
2
python,mysql,sql
0
2015-02-28T22:40:00.000
Are null bytes allowed in unicode strings? I'm not asking about utf8; I mean the high-level object representation of a unicode string. Background We store unicode strings containing null bytes via Python in PostgreSQL. The strings get cut at the null byte if we read them again.
5
-2
-0.132549
0
false
28,813,836
0
8,634
2
0
0
28,813,409
Since a string is basically just data and a pointer, you can save a null in it. However, since null represents the end of a C string (the "null terminator"), there is no way to read beyond the null without knowing the size ahead of reading. Therefore, it seems that you ought to store your data in binary and read it as a buffe...
1
0
0
Are null bytes allowed in unicode strings in PostgreSQL via Python?
3
python,postgresql,unicode
0
2015-03-02T15:27:00.000
Are null bytes allowed in unicode strings? I'm not asking about utf8; I mean the high-level object representation of a unicode string. Background We store unicode strings containing null bytes via Python in PostgreSQL. The strings get cut at the null byte if we read them again.
5
1
0.066568
0
false
28,814,135
0
8,634
2
0
0
28,813,409
Python itself is perfectly capable of having both byte strings and Unicode strings with null characters having a value of zero. However if you call out to a library implemented in C, that library may use the C convention of stopping at the first null character.
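A quick stdlib check of that distinction: a Python str carries U+0000 like any other character; it is only the encoded zero byte that C-style string handling chokes on.

```python
# A Python unicode string may contain U+0000; nothing is truncated in Python itself.
s = "before\x00after"

print(len(s))           # 12: the NUL counts as an ordinary character
print(s.split("\x00"))  # ['before', 'after']

# The UTF-8 encoding of U+0000 is a single zero byte, which a C library
# using null-terminated strings would treat as end-of-string.
print(b"\x00" in s.encode("utf-8"))  # True
```

So the truncation the question describes happens in the driver or database layer, not in Python's string object.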
1
0
0
Are null bytes allowed in unicode strings in PostgreSQL via Python?
3
python,postgresql,unicode
0
2015-03-02T15:27:00.000
For a project I have to use DynamoDB(aws) and python(with boto). I have items with a date and I need to display the count grouped by date or by month. Something like by date of the month [1/2: 5, 2/2: 10, 3/2: 7, 4/2: 30, 5/2: 25, ...] or by month of the year [January: 5, February: 10, March: 7, ...]
0
2
1.2
0
true
28,890,074
1
3,106
1
0
0
28,818,394
You can create 2 GSIs: one with date as the hash key, one with month as the hash key. Those GSIs will point you to the rows of that day or of that month. Then you can just query the GSI, get all the rows of that day/month, and do the aggregation on your own. Does that work for you? Thanks! Erben
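Once the GSI query has returned the rows, the client-side aggregation step can be a simple Counter over the item dates. A sketch, with hypothetical item dicts (attribute names assumed) standing in for the boto query result:

```python
from collections import Counter

# Hypothetical items, as a DynamoDB query might return them.
items = [
    {"id": "a", "date": "2015-02-01"},
    {"id": "b", "date": "2015-02-01"},
    {"id": "c", "date": "2015-02-02"},
]

# Count per exact date, and per month (the "YYYY-MM" prefix of the date).
counts_by_date = Counter(item["date"] for item in items)
counts_by_month = Counter(item["date"][:7] for item in items)

print(counts_by_date)   # Counter({'2015-02-01': 2, '2015-02-02': 1})
print(counts_by_month)  # Counter({'2015-02': 3})
```

DynamoDB has no GROUP BY, so this fetch-then-aggregate split (GSI narrows the rows, the client counts them) is the usual pattern.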
1
0
0
Using group by in DynamoDB
1
python,amazon-web-services,amazon-dynamodb,boto
0
2015-03-02T19:56:00.000