| column | dtype | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | listlengths | 3 | 3 |
| response_j | stringlengths | 17 | 26k |
| response_k | stringlengths | 26 | 26k |
36,911,421
If a class contains two constructors that take in different types of arguments, as shown here: ``` public class Planet { public double xxPos; //its current x position public double yyPos; //its current y position public double xxVel; //its current velocity in the x direction public double yyVel; //its current velocity in the y direction public double mass; //its mass public String imgFileName; //The name of an image in the images directory that depicts the planet // constructor is like __init__ from python, this sets up the object when called like: Planet(arguments) public Planet(double xP, double yP, double xV, double yV, double m, String img) { xxPos = xP; yyPos = yP; xxVel = xV; yyVel = yV; mass = m; imgFileName = img; } // second constructor // how come testplanetconstructor knows to use this second one? // does it know based on the argument type it's being passed? public Planet(Planet p) { xxPos = p.xxPos; yyPos = p.yyPos; xxVel = p.xxVel; yyVel = p.yyVel; mass = p.mass; imgFileName = p.imgFileName; } } ``` My primary question is: 1) How does another class with a main that calls this class determine which constructor to use? If that is the case, what would happen if you have two constructors with: 2) the same type and number of arguments? 3) the same type but different number of arguments? I realize that follow-up questions are something that you should probably never do (aka something messy). I am just curious.
2016/04/28
[ "https://Stackoverflow.com/questions/36911421", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5249753/" ]
> > 1) How does another class with a main that calls this class determine > which constructor to use? > > > The compiler follows the same process as for overloaded methods during static binding, checking for a unique method signature. To learn about method signatures, [see this](https://docs.oracle.com/javase/tutorial/java/javaOO/methods.html) ``` public static void main(String args[]) { double d1 = 0; double d2 = 0; double d3 = 0; double d4 = 0; double d5 = 0; String img = ""; Planet p = new Planet(d1, d2, d3, d4, d5, img);// constructor with valid argument set } ``` > > 2) the same type and number of arguments? > > > It is actually not possible to write two methods/constructors with the same signature in a single class (parameter names are not part of the signature). E.g., the following code never compiles ``` Planet(int i) {// compilation error: duplicate constructor } Planet(int j) {// compilation error: duplicate constructor } ``` > > 3) the same type but different number of arguments? > > > This is possible, just like declaring/calling methods with different signatures. E.g. ``` Planet p1 = new Planet(d1, d2, d3, d4, d5, img); Planet p2 = new Planet(p1); ```
The constructor to use is determined by the number and types of the arguments you pass to it. In Java, constructor and method signatures (identities) consist of the method name (not applicable to constructors, obviously) and the number and types of the parameters. Two methods (or constructors) cannot have the same signature. So you cannot provide two methods or constructors with the same number and types of parameters, which makes it easy for the compiler to know which one to call, just by matching the number and types of the actual arguments passed.
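For illustration, a minimal sketch (using a hypothetical `Box` class, not the `Planet` class from the question) of the argument type selecting the constructor:

```java
public class Box {
    public Box(int size)  { System.out.println("Box(int)"); }
    public Box(Box other) { System.out.println("Box(Box)"); }

    public static void main(String[] args) {
        Box a = new Box(5);  // argument is an int -> Box(int) chosen at compile time
        Box b = new Box(a);  // argument is a Box  -> Box(Box) chosen at compile time
    }
}
```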
27,849,023
**HTML** -------- This is a form that accepts a user's input (url): ``` <form method="post" action="/" accept-charset="utf-8"> <input type="search" name="url" placeholder="Enter a url" /> <button>Go</button> </form> ``` **PHP (Laravel)** ----------------- This controller stores the value of the user's input (url) in a variable for use in the Python script. ``` $url = Input::get('url'); $name = shell_exec('path/to/python ' . base_path() . '/test.py ' . $url); return $name; ``` **Python** ---------- The script takes what the user input and displays the text after the last forward slash. ``` import sys url = sys.argv[1] name = url.split('/')[-1] print(name) ``` Going back to the PHP, it returns the value of what was executed in the Python script. If a user inputs the url `http://example.com/file.png`, it will successfully return the string `file.png`. Everything about this works, but I'm realizing that I'm limiting myself, since the `shell_exec` command is only going to give back the single string that is printed by the Python script. What do I need to do if I want to have multiple variables returned? **Python in question** ---------------------- ``` url = sys.argv[1] name = url.split('/')[-1] test = "pass this text too!" # ? what goes here ? ``` So the HTML page should now return `file.png` and `pass this text too!` **Am I going to need to return an array or json in the python and then extract the array/json in PHP?** *NOTE: I am aware of security flaws/injections in these examples, these are stripped down versions of my actual code.*
2015/01/08
[ "https://Stackoverflow.com/questions/27849023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2097870/" ]
You can return an array, but be sure the elements don't contain the delimiter, like `print(','.join([name, test]))` (note that `join` takes a single list argument). Or you can encode to JSON, like `json.dumps([name, test])` after `import json`, then parse the JSON in PHP. The second one is better, of course.
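A minimal sketch of the JSON approach on the Python side (assuming the same `sys.argv` input as in the question); on the PHP side, `json_decode($name, true)` turns the captured output back into an associative array:

```python
import json
import sys

url = sys.argv[1]
name = url.split('/')[-1]
test = "pass this text too!"

# Print one JSON document as the script's only output,
# so that everything shell_exec() captures is parseable.
print(json.dumps({"name": name, "test": test}))
```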
In this case, your Python script has to return a formatted response that the PHP script can parse properly to determine the variables and their values. For example, your Python code could return something like this: ``` file_name=file.png;file_extension=png;creation_date=1/1/2015; ``` After that, in your PHP code you do the necessary parsing ``` $varValues = explode(";", $PythonResult); foreach($varValues as $vv) { $temp = explode("=", $vv); print 'Parameter : ' . $temp[0] . ' value : ' . $temp[1] . '<br />'; } ```
62,015,339
``` x = Flatten()(vgg.output) variable = function (variable) ``` I can't find this type of expression in Python. Can anyone help me understand the above expression? Thanks in advance
2020/05/26
[ "https://Stackoverflow.com/questions/62015339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5952135/" ]
The `Flatten()` call here returns another callable, which takes `vgg.output` as its argument. This works because everything in Python is a first-class object, so you can return a function as the return value of another function. This will be clear with an example: Let's say we have a function `square` which returns the square of a number: ``` def square(number): return number**2 ``` So calling square on a number would give: ``` >>> square(3) 9 ``` Now, let us define another function that returns the `square` function, and only the function: ``` def return_func(): return square ``` Calling `return_func` we will get the `square` function back: ``` >>> some_func = return_func() >>> some_func <function __main__.square(number)> >>> some_func == square == return_func() True ``` So calling `square(number)` should be equivalent to `return_func()(number)`, i.e. ``` >>> return_func()(3) 9 ``` So in your example, `Flatten()` is equivalent to `return_func()` and `number` is equivalent to `vgg.output`.
`x = Flatten()(vgg.output)` first calls `Flatten()`, then calls the returned object with `vgg.output`, and assigns the result to `x`. `v = function` makes `v` an alias for `function`, and you can then do `v()` to call it.
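A minimal sketch of both ideas (a hypothetical `make_adder`, nothing Keras-specific):

```python
def make_adder(n):
    def add(x):
        return x + n
    return add

v = make_adder       # v is now an alias for make_adder
print(v(10)(5))      # v(10) returns add, which is then called with 5 -> prints 15
```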
62,015,339
``` x = Flatten()(vgg.output) variable = function (variable) ``` I can't find this type of expression in Python. Can anyone help me understand the above expression? Thanks in advance
2020/05/26
[ "https://Stackoverflow.com/questions/62015339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5952135/" ]
The `Flatten()` call here returns another callable, which takes `vgg.output` as its argument. This works because everything in Python is a first-class object, so you can return a function as the return value of another function. This will be clear with an example: Let's say we have a function `square` which returns the square of a number: ``` def square(number): return number**2 ``` So calling square on a number would give: ``` >>> square(3) 9 ``` Now, let us define another function that returns the `square` function, and only the function: ``` def return_func(): return square ``` Calling `return_func` we will get the `square` function back: ``` >>> some_func = return_func() >>> some_func <function __main__.square(number)> >>> some_func == square == return_func() True ``` So calling `square(number)` should be equivalent to `return_func()(number)`, i.e. ``` >>> return_func()(3) 9 ``` So in your example, `Flatten()` is equivalent to `return_func()` and `number` is equivalent to `vgg.output`.
There is no special syntax here; `Flatten()` simply returns a callable, so ``` x = Flatten()(vgg.output) ``` is basically the same as: ``` def Flatten(): def inner(output): ... # produce the result return inner x = Flatten()(vgg.output) # i.e. inner(vgg.output) ```
72,531,611
I have Miniconda3 on a Linux system (Ubuntu 22.04). The environment has Python 3.10 as well as a functioning (in Python) installation of PyTorch (installed following official instructions). I would like to set up a CMake project that uses the PyTorch C++ API. The reason is not important, and I am aware that it's beta (the official documentation states that), so instability and major changes are not excluded. Currently I have this very minimal `CMakeLists.txt`: ``` cmake_minimum_required(VERSION 3.19) # or whatever version you use project(PyTorch_Cpp_HelloWorld CXX) set(PYTORCH_ROOT "/home/$ENV{USER}/miniconda/envs/ML/lib/python3.10/site-packages/torch") list(APPEND CMAKE_PREFIX_PATH "${PYTORCH_ROOT}/share/cmake/Torch/") find_package(Torch REQUIRED CONFIG) ... # Add executable # Link against PyTorch library ``` When I try to configure the project I'm getting this error: > > CMake Error at CMakeLists.txt:21 (message): message called with > incorrect number of arguments > > > -- Could NOT find Protobuf (missing: Protobuf\_LIBRARIES Protobuf\_INCLUDE\_DIR) > -- Found Threads: TRUE CMake Warning at /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/public/protobuf.cmake:88 > (message): Protobuf cannot be found. Depending on whether you are > building Caffe2 or a Caffe2 dependent library, the next warning / > error will give you more info. Call Stack (most recent call first): > /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:56 > (include) > > /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 > (find\_package) CMakeLists.txt:23 (find\_package) > > > CMake Error at > /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:58 > (message): Your installed Caffe2 version uses protobuf but the > protobuf library cannot be found. Did you accidentally remove it, > or have you set the right CMAKE\_PREFIX\_PATH? If you do not have > protobuf, you will need to install protobuf and set the library path > accordingly. Call Stack (most recent call first): > > /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 > (find\_package) CMakeLists.txt:23 (find\_package) > > > I installed `libprotobuf` (again via `conda`) but, while I can find the library files, I can't find any `*ProtobufConfig.cmake` or anything remotely related to protobuf and its CMake setup. Before I go fight against windmills I would like to ask here what the proper setup would be. I am guessing building from source is always an option, however, this will pose a huge overhead on the people I collaborate with.
2022/06/07
[ "https://Stackoverflow.com/questions/72531611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1559401/" ]
``` import os import pandas as pd base_dir = '/path/to/dir' #Get all files in the directory data_list = [] for file in os.listdir(base_dir): #If file is a json, construct its full path and open it, append all json data to list if file.endswith('json'): json_path = os.path.join(base_dir, file) json_data = pd.read_json(json_path, lines=True) data_list.append(json_data) print(data_list) ```
You probably need to build a list of DataFrames. You may not be able to process every file in the given directory so try this: ``` import pandas as pd from glob import glob from os.path import join BASEDIR = 'Datasets' dataframes = [] for file in glob(join(BASEDIR, '*.json')): try: dataframes.append(pd.read_json(file)) except ValueError: print(f'Unable to process {file}') print(f'Successfully constructed {len(dataframes)} dataframes') ```
72,531,611
I have Miniconda3 on a Linux system (Ubuntu 22.04). The environment has Python 3.10 as well as a functioning (in Python) installation of PyTorch (installed following official instructions). I would like to set up a CMake project that uses the PyTorch C++ API. The reason is not important, and I am aware that it's beta (the official documentation states that), so instability and major changes are not excluded. Currently I have this very minimal `CMakeLists.txt`: ``` cmake_minimum_required(VERSION 3.19) # or whatever version you use project(PyTorch_Cpp_HelloWorld CXX) set(PYTORCH_ROOT "/home/$ENV{USER}/miniconda/envs/ML/lib/python3.10/site-packages/torch") list(APPEND CMAKE_PREFIX_PATH "${PYTORCH_ROOT}/share/cmake/Torch/") find_package(Torch REQUIRED CONFIG) ... # Add executable # Link against PyTorch library ``` When I try to configure the project I'm getting this error: > > CMake Error at CMakeLists.txt:21 (message): message called with > incorrect number of arguments > > > -- Could NOT find Protobuf (missing: Protobuf\_LIBRARIES Protobuf\_INCLUDE\_DIR) > -- Found Threads: TRUE CMake Warning at /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/public/protobuf.cmake:88 > (message): Protobuf cannot be found. Depending on whether you are > building Caffe2 or a Caffe2 dependent library, the next warning / > error will give you more info. Call Stack (most recent call first): > /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:56 > (include) > > /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 > (find\_package) CMakeLists.txt:23 (find\_package) > > > CMake Error at > /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:58 > (message): Your installed Caffe2 version uses protobuf but the > protobuf library cannot be found. Did you accidentally remove it, > or have you set the right CMAKE\_PREFIX\_PATH? If you do not have > protobuf, you will need to install protobuf and set the library path > accordingly. Call Stack (most recent call first): > > /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 > (find\_package) CMakeLists.txt:23 (find\_package) > > > I installed `libprotobuf` (again via `conda`) but, while I can find the library files, I can't find any `*ProtobufConfig.cmake` or anything remotely related to protobuf and its CMake setup. Before I go fight against windmills I would like to ask here what the proper setup would be. I am guessing building from source is always an option, however, this will pose a huge overhead on the people I collaborate with.
2022/06/07
[ "https://Stackoverflow.com/questions/72531611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1559401/" ]
``` import os import pandas as pd base_dir = '/path/to/dir' #Get all files in the directory data_list = [] for file in os.listdir(base_dir): #If file is a json, construct its full path and open it, append all json data to list if file.endswith('json'): json_path = os.path.join(base_dir, file) json_data = pd.read_json(json_path, lines=True) data_list.append(json_data) print(data_list) ```
``` import os import json # here's some information about getting the list of files in a folder: <https://www.geeksforgeeks.org/python-list-files-in-a-directory/> # here's some information about how to open a json file: <https://www.geeksforgeeks.org/read-json-file-using-python/> path = "./data" file_list = os.listdir(path) # lists the names of the files in the directory at path for i in range(len(file_list)): # loop over the list by index number current = open(path+"/"+file_list[i]) # opens the file at the current index data = json.load(current) # loads the data from the file for k in data['01']: print(k) ``` output ``` main.json : {'name': 'Nava', 'Surname': 'No need for to that'} data.json : {'name': 'Nava', 'watchs': 'Anime'} ``` Here's a link to run the code online: <https://replit.com/@Nava10Y/opening-open-multiple-json-files-in-a-directory>
38,690,035
I have a Python script that creates a Lambda script in AWS along with all the policies and triggers. I use the Python boto3 library for that. I create the zip file for the lambda on the fly rather than uploading a static zip file from the hard drive. I use this simple code from below to create my zip file. It creates the zip file without any problems and my Python code uploads this zip file as a lambda script and I can view my lambda script in AWS without any problems. But when I run my lambda script it gives me the module not found error even though I can clearly see that both the module name and the file name do exist and are view-able. **Unable to import module 'xxxx': No module named xxxx** In the file system I double click that zip file that was created by this code and see that the content is created and everything looks normal. If I bypass zipping on the fly and create the zip statically using WinZip and let the rest of the Python & boto3 script upload this file then it works just fine. ``` def CreateLambdaZip(self, fileName, fileContent): with zipfile.ZipFile('Lambda/' + fileName + '.zip', 'w') as myzipc: myzipc.writestr( fileName + '.py', fileContent) myzipc.close() ``` It kinda looks like for the zip file I'm skipping some special headers that are needed by AWS Lambda. Is there such a thing? Because in the file system the zip file that is created by Python code and the other one that is created by WinZip are exactly the same. So I know there's nothing wrong with the lambda script. **Update**: I'm uploading the zip file using the below code that reads the zip file which was created using the above snippet. ``` with open('Lambda/'+ fileName +'.zip', 'rb') as zipFile: func = boto3.client("Lambda").create_function( FunctionName=lambdaFunction, Runtime='python2.7', Role=role['Role']['Arn'], Handler= fileName + "." + functionName, Description=description, Timeout=10, MemorySize=256, Publish=True, Code={'ZipFile': zipFile.read()}, ) ``` When I use zipFile.read() I get 2 different headers for the same content when I zip it using WinZip and when I zip it using Python's module. Zip file that's created programmatically using Python ``` b'PK\x03\x04\x14\x00\x00\x00\x00\x00\xe4~\x01IO\x96J=Z\x07\x00\x00Z\x07\x00\x00\x19\x00\x00\x00schedule-ec2-snapshots.pyimport json\nimport boto3\nimport time\nfrom datetime import date, timedelta\n\nprint(\'Loading scheduled EC2 backup actions\')\n\ndef create_snapshots(event, context):\n """\n Lambda function that executes daily snapshots for the instances that ``` and zipfile created by WinZip ``` b'PK\x03\x04\x14\x00\x02\x00\x08\x004X\xfcH\x88\x1f\xce\xb5&\x03\x00\x00b\x07\x00\x00\x19\x00\x00\x00schedule-ec2-snapshots.py\x8dU]k\xdb@\x10|7\xf4?,\nA\x12qL\xda\x06B\r~I\x93Bh\x9b\x87&\xf4E\x15\xe1\xac[\xdb\xd7HwBw2\t\xc2\xff\xbd{+\xeb\xcb.\xb4\n\xc4\xba\xdb\xd1\xec\xce\xae\xa4\x8a\xd2T\x0e~[\xa3\'\xaa\xb9_\x1ag>\xb6\x0b\xa7\n\x9c\xac*S\x80\x14\x0e\xfd\n\xf6\x11\xbf\x9er\\b\xee\xc4dRVJ\xbb(\xfcf\x84Tz\r6\xdb\xa0\xacs\x94p\xfb\xf9\x03,E\xf6\\\x97 ```
2016/08/01
[ "https://Stackoverflow.com/questions/38690035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4163842/" ]
With the info above I was able to start the in-memory solution. The deployment of that zip file worked but I could not use the resulting function. Got error: ``` Unable to import module '<function-name>': No module named <function-name> ``` I got it to work by specifying the file permissions. I then use the in-mem-zip to create an AWS lambda function. Setup: **file\_map** is a dictionary of full\_path->file\_bytes. **files** is a list of full\_paths ``` def create_lambda_function(function_name, desc, role, handler, file_map, files): zip_contents = create_in_mem_zip_archive(file_map, files) result = lambda_code.create_function( FunctionName=function_name, Runtime="python2.7", Description=desc, Role=role, Handler=handler, Code={'ZipFile': zip_contents}, ) return result def create_in_mem_zip_archive(file_map, files): buf = io.BytesIO() logger.info("Building zip file: " + str(files)) with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zfh: for file_name in files: file_blob = file_map.get(file_name) if file_blob is None: logger.error("Missing file {} from files".format(file_name)) continue try: info = zipfile.ZipInfo(file_name) info.date_time = time.localtime() info.compress_type = zipfile.ZIP_DEFLATED info.external_attr = 0777 << 16L # give full access # info.external_attr = 0644 << 16L # -r-wr--r-- # info.external_attr = 0755 << 16L # -rwxr-xr-x zfh.writestr(info, file_blob) except Exception as ex: logger.info("Error reading file: " + file_name + ", error: " + ex.message) buf.seek(0) return buf.read() ```
I have experienced exactly the same problem you have. My solution is: do NOT use an on-the-fly zip file. Create a real zip file and add real files into it, and it just works. You can do that even in the Lambda environment: by creating a file path like "/tmp/yourfile.txt" you can create a real temporary file while the Lambda executes.
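A minimal sketch of that approach (assuming the same `fileName`/`fileContent` inputs as the question's helper; `create_lambda_zip_on_disk` is a hypothetical name):

```python
import os
import tempfile
import zipfile

def create_lambda_zip_on_disk(file_name, file_content):
    # Write a real zip file under /tmp, which is also writable inside Lambda.
    zip_path = os.path.join(tempfile.gettempdir(), file_name + '.zip')
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(file_name + '.py', file_content)
    return zip_path  # then pass open(zip_path, 'rb').read() to create_function
```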
67,542,736
I'm trying to make lists of companies from long strings. The company names tend to be randomly dispersed through the strings, but they always have a comma and a space before the names ', ', and they always end in Inc, LLC, Corporation, or Corp. In addition, there is always a company listed at the very beginning of the string. It goes something like: ``` Companies = 'Apples Inc, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, Bananas LLC, Carrots Corp, xxxx.' ``` I've been trying to use regex to crack this nut, but I am too inexperienced with Python. My closest attempt went like this: ``` r = re.compile(r' .*? Inc | .*? LLC | .*? Corporation | .*? Corp', flags = re.I | re.X) r.findall(Companies) ``` But my output is always some variation of ``` ['Apples Inc', ', xxxxxxxxxxxxxxxxxxx, Bananas LLC', ', Carrots Corp'] ``` When I need it to be like ``` ['Apples Inc', 'Bananas LLC', 'Carrots Corp'] ``` I am vexed and I humbly ask for assistance. **EDIT:** I have figured out a method to find the company name if it includes a comma, like Apples, Inc. Before I run any analysis on the long string, I will have the program check whether any commas exist two spaces before the 'Inc', and then delete them. Then I will run the program to list out the company names.
2021/05/15
[ "https://Stackoverflow.com/questions/67542736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15930763/" ]
In C, the `for` loop is a "check before body" operation; you want the "check after body" variant, a `do while` loop, something like: ```c int xs[] = {1,2,3,4,5}; { int i = 0; do { foo(xs[i]); } while (xs[i++] != 4); } ``` You'll notice I've enclosed the entire chunk in its own scope (the outermost `{}` braces). This is just to limit the existence of `i` to make it conform more with the `for` loop behaviour. In terms of a complete program showing this, the following code: ```c #include <stdio.h> void foo(int x) { printf("%d\n", x); } int main(void) { int xs[] = {1,2,3,4,5}; { int i = 0; do { foo(xs[i]); } while (xs[i++] != 4); } return 0; } ``` outputs: ```none 1 2 3 4 ``` --- As an aside, like you, I'm *also* not that keen on the two other solutions you've seen. For the first solution, that won't actually work in this case since the lifetime of `i` is limited to the `for` loop itself (the `int` in the `for` statement initialisation section makes this so). That means `i` will not have the value you expect after the loop. Either there will *be* no `i` (a compile-time error) or there *will* be an `i` which was hidden within the `for` loop and therefore unlikely to have the value you expect, leading to insidious bugs. For the second, I will sometimes break loops within the body but generally only at the start of the body so that the control logic is still visible in a single area. I tend to do that if the `for` condition would be otherwise very lengthy but there are other ways to do this.
Try processing the loop as long as *the previous* element (if available) is not `4`: ``` int xs[] = {1,2,3,4,5}; for (int i = 0; i == 0 || xs[i - 1] != 4; i++) { foo(xs[i]); } ```
67,542,736
I'm trying to make lists of companies from long strings. The company names tend to be randomly dispersed through the strings, but they always have a comma and a space before the names ', ', and they always end in Inc, LLC, Corporation, or Corp. In addition, there is always a company listed at the very beginning of the string. It goes something like: ``` Companies = 'Apples Inc, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, Bananas LLC, Carrots Corp, xxxx.' ``` I've been trying to use regex to crack this nut, but I am too inexperienced with Python. My closest attempt went like this: ``` r = re.compile(r' .*? Inc | .*? LLC | .*? Corporation | .*? Corp', flags = re.I | re.X) r.findall(Companies) ``` But my output is always some variation of ``` ['Apples Inc', ', xxxxxxxxxxxxxxxxxxx, Bananas LLC', ', Carrots Corp'] ``` When I need it to be like ``` ['Apples Inc', 'Bananas LLC', 'Carrots Corp'] ``` I am vexed and I humbly ask for assistance. **EDIT:** I have figured out a method to find the company name if it includes a comma, like Apples, Inc. Before I run any analysis on the long string, I will have the program check whether any commas exist two spaces before the 'Inc', and then delete them. Then I will run the program to list out the company names.
2021/05/15
[ "https://Stackoverflow.com/questions/67542736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15930763/" ]
In C, the `for` loop is a "check before body" operation; you want the "check after body" variant, a `do while` loop, something like: ```c int xs[] = {1,2,3,4,5}; { int i = 0; do { foo(xs[i]); } while (xs[i++] != 4); } ``` You'll notice I've enclosed the entire chunk in its own scope (the outermost `{}` braces). This is just to limit the existence of `i` to make it conform more with the `for` loop behaviour. In terms of a complete program showing this, the following code: ```c #include <stdio.h> void foo(int x) { printf("%d\n", x); } int main(void) { int xs[] = {1,2,3,4,5}; { int i = 0; do { foo(xs[i]); } while (xs[i++] != 4); } return 0; } ``` outputs: ```none 1 2 3 4 ``` --- As an aside, like you, I'm *also* not that keen on the two other solutions you've seen. For the first solution, that won't actually work in this case since the lifetime of `i` is limited to the `for` loop itself (the `int` in the `for` statement initialisation section makes this so). That means `i` will not have the value you expect after the loop. Either there will *be* no `i` (a compile-time error) or there *will* be an `i` which was hidden within the `for` loop and therefore unlikely to have the value you expect, leading to insidious bugs. For the second, I will sometimes break loops within the body but generally only at the start of the body so that the control logic is still visible in a single area. I tend to do that if the `for` condition would be otherwise very lengthy but there are other ways to do this.
This may not be a direct answer to the original question, but I would strongly suggest against making a habit of parsing arrays like that (it's like a ticking bomb waiting to explode at a random point in time). I know you said you already know x is a member of xs, but when it is not (and this can accidentally happen for a variety of reasons) then your loop will crash your program if you are lucky, or it will corrupt irrelevant data if you are not lucky. In my opinion, **it is neither ugly nor "hacky" to be defensive with an extra check.** If the hurdle is the seemingly unknown length of xs, it is not. Static arrays have a known length, either by declaration or by initialization (like your example). In the latter case, the length can be calc'ed on demand **within the scope of the declared array**, by `sizeof(arr) / sizeof(*arr)` - you can even make it a reusable macro. ``` #define ARRLEN(a) (sizeof(a)/sizeof(*(a))) ... int xs[] = {1,2,3,4,5}; /* way later... */ size_t xslen = ARRLEN(xs); for (size_t i=0; i < xslen; i++) { if (xs[i] == 4) { foo( xs[i] ); break; }; } ``` This will not overrun xs, even when 4 is not present in the array. --- **EDIT:** "**within the scope of the declared array**" means that the macro (or its direct code) will not work on an array passed as a function parameter. ``` // This does NOT work, because sizeof(arr) returns the size of an int-pointer size_t foo( int arr[] ) { return( sizeof(arr)/sizeof(*arr) ); } ``` If you need the length of an array inside a function, you can pass it too as a parameter along with the array (which actually is just a pointer to the 1st element). Or if performance is not an issue, you may use the `sentinel` approach, explained below. [end of EDIT] --- An alternative could be to manually mark the end of your array with a `sentinel` value (a value you intentionally consider invalid). For example, for integers it could be INT\_MAX: ``` #include <limits.h> ... int xs[] = {1,2,3,4,5, INT_MAX}; for (size_t i=0; xs[i] != INT_MAX; i++) { if (xs[i] == 4) { foo( xs[i] ); break; }; } ``` Sentinels are quite common for parsing unknown-length dynamically-allocated arrays of pointers, with the sentinel being NULL. Anyway, my main point is that preventing accidental buffer overruns probably has a higher priority compared to code prettiness :)
29,205,052
I am creating a project in django-oscar with the help of the <http://django-oscar.readthedocs.org/en/latest/internals/getting_started.html> tutorial, and I installed every package they mention in the doc. After I run my project I get "A server error occurred. Please contact the administrator." in the UI, and the error thrown in the terminal (the same traceback is printed for each request) is: ``` Traceback (most recent call last): File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run self.result = application(self.environ, self.start_response) File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py", line 64, in __call__ return self.application(environ, start_response) File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 168, in __call__ self.load_middleware() File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 44, in load_middleware mw_class = import_string(middleware_path) File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/utils/module_loading.py", line 26, in import_string module = import_module(module_path) File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/apps/basket/middleware.py", line 8, in <module> Applicator = get_class('offer.utils', 'Applicator') File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/core/loading.py", line 67, in get_class return get_classes(module_label, [classname])[0] File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/core/loading.py", line 124, in get_classes oscar_module = _import_module(oscar_module_label, classnames) File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/core/loading.py", line 159, in _import_module return __import__(module_label, fromlist=classnames) File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/apps/offer/utils.py", line 10, in <module> ConditionalOffer = get_model('offer', 'ConditionalOffer') File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/oscar/core/loading.py", line 321, in get_model return apps.get_model(app_label, model_name) File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/apps/registry.py", line 202, in get_model return self.get_app_config(app_label).get_model(model_name.lower()) File "/home/spericorn/global/oscar/local/lib/python2.7/site-packages/django/apps/registry.py", line 150, in get_app_config raise LookupError("No installed app with label '%s'." % app_label) LookupError: No installed app with label 'offer'. [23/Mar/2015 07:22:20] "GET / HTTP/1.1" 500 59 ```
My settings.py is: ``` """ Django settings for frobshop project. """ from oscar.defaults import * from oscar import OSCAR_MAIN_TEMPLATE_DIR import os def root(x): return os.path.join(os.path.abspath(os.path.dirname(__file__)), '..',x) SECRET_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' DEBUG = True TEMPLATE_DEBUG = True ALLOWED_HOSTS = ['*'] # Application definition INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.flatpages', 'compressor', ) MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'oscar.apps.basket.middleware.BasketMiddleware', 'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware' ) AUTHENTICATION_BACKENDS = ( 'oscar.apps.customer.auth_backends.EmailBackend', 'django.contrib.auth.backends.ModelBackend', ) ROOT_URLCONF = 'frobshop.urls' WSGI_APPLICATION = 'frobshop.wsgi.application' DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': root( 'db.sqlite3'), 'ATOMIC_REQUESTS': True, } } LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True SITE_ID = 1 TEMPLATE_CONTEXT_PROCESSORS = ( "django.contrib.auth.context_processors.auth", "django.core.context_processors.request", "django.core.context_processors.debug", "django.core.context_processors.i18n", "django.core.context_processors.media", "django.core.context_processors.static", "django.core.context_processors.tz", "django.contrib.messages.context_processors.messages", 'oscar.apps.search.context_processors.search_form', 'oscar.apps.promotions.context_processors.promotions', 'oscar.apps.checkout.context_processors.checkout', 'oscar.apps.customer.notifications.context_processors.notifications', 'oscar.core.context_processors.metadata', ) MEDIA_ROOT = root('media') MEDIA_URL = '/media/' STATIC_ROOT = root('staticstorage') STATIC_URL = '/static/' STATICFILES_DIRS = ( root('static'), ) TEMPLATE_DIRS = [ root('templates'), OSCAR_MAIN_TEMPLATE_DIR ] # # HAYSTACK_CONNECTIONS = { # 'default': { # 'ENGINE': 'haystack.backends.solr_backend.SolrEngine', # 'URL': 'http://127.0.0.1:8983/solr', # 'INCLUDE_SPELLING': True, # }, # } HAYSTACK_CONNECTIONS = { 'default': { 'ENGINE': 'haystack.backends.simple_backend.SimpleEngine', }, } ``` Thanks in advance, please help me.
2015/03/23
[ "https://Stackoverflow.com/questions/29205052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3632705/" ]
Try this ``` from oscar import get_core_apps INSTALLED_APPS = [ 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.flatpages', 'compressor', ] + get_core_apps() ``` `get_core_apps` will include ``` OSCAR_CORE_APPS = [ 'oscar', 'oscar.apps.analytics', 'oscar.apps.checkout', 'oscar.apps.address', 'oscar.apps.shipping', 'oscar.apps.catalogue', 'oscar.apps.catalogue.reviews', 'oscar.apps.partner', 'oscar.apps.basket', 'oscar.apps.payment', 'oscar.apps.offer', 'oscar.apps.order', 'oscar.apps.customer', 'oscar.apps.promotions', 'oscar.apps.search', 'oscar.apps.voucher', 'oscar.apps.wishlists', 'oscar.apps.dashboard', 'oscar.apps.dashboard.reports', 'oscar.apps.dashboard.users', 'oscar.apps.dashboard.orders', 'oscar.apps.dashboard.promotions', 'oscar.apps.dashboard.catalogue', 'oscar.apps.dashboard.offers', 'oscar.apps.dashboard.partners', 'oscar.apps.dashboard.pages', 'oscar.apps.dashboard.ranges', 'oscar.apps.dashboard.reviews', 'oscar.apps.dashboard.vouchers', 'oscar.apps.dashboard.communications', # 3rd-party apps that oscar depends on 'haystack', 'treebeard', 'sorl.thumbnail', 'django_tables2', ] ```
You don't import the Oscar core apps. You need to import them and add them to INSTALLED\_APPS. You can do this by importing `from oscar import get_core_apps`. `get_core_apps` is a function that returns a list of the default, required Oscar core apps, so you need to concatenate it: ``` INSTALLED_APPS = [ 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.flatpages', 'compressor', ] + get_core_apps() ``` **Note:** If your INSTALLED\_APPS is a tuple (not a list) you'll get an error, because `get_core_apps()` returns a list. So you need to change your INSTALLED\_APPS to a list type.
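If you'd rather convert an existing tuple-style setting in place, a minimal sketch:

```python
from oscar import get_core_apps

# get_core_apps() returns a list, so convert a tuple-style
# INSTALLED_APPS (defined earlier in settings.py) before concatenating.
INSTALLED_APPS = list(INSTALLED_APPS) + get_core_apps()
```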
24,331,551
I was wondering how I would be able to sort a whole array by the values in one of its columns. I have: ``` array([5,2,8,2,4]) ``` and: ``` array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) ``` I want to append the first array to the second one like this: ``` array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [5, 2, 8, 2, 4]]) ``` And then sort the array by the appended row to get either this: ``` array([[1, 3, 4, 0, 2], [6, 8, 9, 5, 7], [11, 13, 14, 10, 12], [16, 18, 19, 15, 17], [21, 23, 24, 20, 22], [2, 2, 4, 5, 8]]) ``` or this: ``` array([[ 2, 1, 3, 4, 0], [ 7, 6, 8, 9, 5], [12, 11, 13, 14, 10], [17, 16, 18, 19, 15], [22, 21, 23, 24, 20], [ 8, 5, 4, 2, 2]]) ``` And then remove the appended column to get: ``` array([[1, 3, 4, 0, 2], [6, 8, 9, 5, 7], [11, 13, 14, 10, 12], [16, 18, 19, 15, 17], [21, 23, 24, 20, 22]]) ``` or: ``` array([[ 2, 1, 3, 4, 0], [ 7, 6, 8, 9, 5], [12, 11, 13, 14, 10], [17, 16, 18, 19, 15], [22, 21, 23, 24, 20]]) ``` Is there code to carry out this procedure? I am very new to Python. Thanks a lot!
2014/06/20
[ "https://Stackoverflow.com/questions/24331551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3712008/" ]
You can use [numpy.argsort](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html) to get a list with the sorted indices of your array. Using that you can then rearrange the columns of the matrix. ``` import numpy as np c = np.array([5,2,8,2,4]) a = np.array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) i = np.argsort(c) a = a[:,i] ```
You don't need numpy to do this (although if you are using numpy, you can just use the `.transpose()` method of the array class). What this essentially does is transpose your array so that it's `array[column][row]`, then take each column and pair it with the sort keys you provided, in a list of tuples (the `zip(sortKeys, a)` bit). Then it sorts this list of tuples. By default, sort will order tuples based on their first value, then their second value, then third, etc. Now you have the columns in order. Then `aNew = [...]` just extracts the columns and creates your new array, still in the `array[column][row]` format, and then transposes it again. ``` a = [[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]] #transpose a a = list(zip(*a)) sortKeys = [5,2,8,2,4] b = zip(sortKeys, a) aNew = [row[1] for row in sorted(b)] #transpose a back aNew = list(zip(*aNew)) print(aNew) ```
54,648,040
I would like to get the ELK version through the REST API or by parsing HTML. I searched the API documentation without finding anything. Re-edit: In Python... I haven't found anything better than ``` re.findall(r"version&quot;:&quot;(\d\.\d\.\d)&quot", requests.get(my_elk).content.decode())[0] ```
2019/02/12
[ "https://Stackoverflow.com/questions/54648040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3671796/" ]
Elasticsearch gives JSON, not HTML. So, you could use `jq` ``` $ curl -s localhost:9200 | jq -r '.version.number' 6.6.0 ``` In Python, please don't use the `re` module for this... use the `json` module and actually parse that content.
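A minimal sketch of the `json`-parsing route (assuming, as in the question, a reachable cluster root endpoint):

```python
import requests

# GET on the cluster root returns a JSON document with a "version" object.
resp = requests.get("http://localhost:9200")
print(resp.json()["version"]["number"])  # e.g. "6.6.0"
```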
There's no HTML, but if you call `GET /` in Kibana's Console or `curl -XGET http://localhost:9200/`, the return will be: ``` { "name" : "instance-0000000039", "cluster_name" : "c2edd39f6fa24b0d8e5c34e8d1d19849", "cluster_uuid" : "VBkvp8OmTCaVuVvMioS3SA", "version" : { "number" : "6.6.0", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "a9861f4", "build_date" : "2019-01-24T11:27:09.439740Z", "build_snapshot" : false, "lucene_version" : "7.6.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "tagline" : "You Know, for Search" } ``` So all you need to do is get `version.number` from the JSON response.
54,648,040
I would like to get the ELK version through the REST API or by parsing HTML. I searched the API documentation without finding anything. Re-edit: In Python... I haven't found anything better than ``` re.findall(r"version&quot;:&quot;(\d\.\d\.\d)&quot", requests.get(my_elk).content.decode())[0] ```
2019/02/12
[ "https://Stackoverflow.com/questions/54648040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3671796/" ]
Elasticsearch gives JSON, not HTML. So, you could use `jq` ``` $ curl -s localhost:9200 | jq -r '.version.number' 6.6.0 ``` In Python, please don't use the `re` module for this... use the `json` module and actually parse that content.
If you're already using the `requests` library, why not use the `json` method to parse the result? ``` requests.get(my_elk).json()["version"]["number"] ```
62,388,691
I am new to the Python language. I want to read the data from a text file (multiple lines in the text file), then use the data read from the text file with the dictionary function. ``` def readCmd(): f = open('cmd.txt', "r") line = f.readline() for line in f: print (line) time.sleep(1) return str(line) f.close() def zero(): print( "Hi 0") def one(): print( "Test 1") def two(): print( "end 2") def num_to_func_to_str(argument): switcher = { "Hi": zero, "test": one, "end": two, } print(switcher.get(argument,"Please enter only 'Hi', 'test' and 'end'")) def main(): #readCmd() while 1 : time.sleep(0.5) num_to_func_to_str(readCmd()) if __name__ == "__main__": main() ``` The above code is what I tried. It shows just the second line and does not go into the dictionary (switcher) condition; it skips to `print(switcher.get(argument,"Please enter only 'Hi', 'test' and 'end'"))` The data in the text file is as below. ``` start Hi test Hi test Hi test Hi test Hi test Hi test end ``` Can anyone suggest to me how to solve this? Thanks
2020/06/15
[ "https://Stackoverflow.com/questions/62388691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11240783/" ]
You need to parse the input field value to an integer: ```js parseInt($('#corporateA').val(), 10) // 89 (integer) ``` Also, as a tip, avoid prefixing the dollar-sign to non-jQuery objects. ### Proper usage ```js let $corporateA = $('#corporateA'); // Storing a jQuery DOM object let valA = parseInt($corporateA.val(), 10); ``` --- As a jQuery plugin ------------------ You could also write your own jQuery plugin, to make the call convenient and succinct. ```js (function($) { $.fn.intVal = function(radix) { return parseInt(this.val(), radix || 10); }; })(jQuery); let corporateA = $('#corporate-a').intVal(); let corporateB = $('#corporate-b').intVal(); let corporateE = $('#corporate-e').intVal(); ``` --- Working demo ------------ HTML conventions dictate that element IDs and classes should be kebab-case or snake-case, rather than lower camel-case. ```js $('.my-cor-opt').change(function(e) { let corporateA = parseInt($('#corporate-a').val(), 10); let corporateB = parseInt($('#corporate-b').val(), 10); let corporateE = parseInt($('#corporate-e').val(), 10); let corElevators = Math.ceil(corporateE * (corporateA + corporateB) / 1000); let corColumns = Math.ceil((corporateA + corporateB) / 20); let elevatorsPerColumnsCor = Math.ceil(corElevators / corColumns); let corTotalElevators = elevatorsPerColumnsCor * corColumns; alert(corTotalElevators); // 25 }); $('.my-cor-opt').trigger('change'); ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <input id="corporate-a" value="89" /> <input id="corporate-b" value="6" /> <input id="corporate-e" value="240" /> <select class="my-cor-opt"></select> ```
You need to parse the values of the input fields, which are of type string, into numbers. There are two main strategies to achieve that. 1. Prefix every input field value with a unary plus operator 2. Parse with the `parseInt` function If you fail to parse the strings into numbers, the binary `plus` operator will do string concatenation instead of addition ``` '1' + '2' + '3' // '123' 1 + 2 + 3 // 6 +'1' + +'2' + +'3' // 6 parseInt('1', 10) + parseInt('2', 10) + parseInt('3', 10) // 6 ``` See here for more details: [parseInt vs unary plus, when to use which?](https://stackoverflow.com/questions/17106681/parseint-vs-unary-plus-when-to-use-which) Unary plus operator =================== ```js "use strict"; console.clear(); $(".myCorOpt").change(function () { var $corporateA = +$("#corporateA").val(); var $corporateB = +$("#corporateB").val(); var $corporateE = +$("#corporateE").val(); var $corElevators = Math.ceil( ($corporateE * ($corporateA + $corporateB)) / 1000 ); var $corColumns = Math.ceil(($corporateA + $corporateB) / 20); var $elevatorsPerColumnsCor = Math.ceil($corElevators / $corColumns); var $corTotalElevators = $elevatorsPerColumnsCor * $corColumns; $('#corElevators').val($corElevators); $('#corColumns').val($corColumns); $('#elevatorsPerColumnsCor').val($elevatorsPerColumnsCor); $('#corTotalElevators').val($corTotalElevators); }); ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <form class="myCorOpt"> <table> <tr><td>Corporate A</td><td><input id="corporateA" placeholder="corporateA" value="89"></td></tr> <tr><td>Corporate B</td><td><input id="corporateB" placeholder="corporateB" value="6"></td></tr> <tr><td>Corporate E</td><td><input id="corporateE" placeholder="corporateE"></td></tr> <tr><td colspan="2"><hr></td></tr> <tr><td>Elevators</td><td><input id="corElevators"></td></tr> <tr><td>Columns</td><td><input id="corColumns"></td></tr> <tr><td>Elevators per Columns</td><td><input id="elevatorsPerColumnsCor"></td></tr> <tr><td>Total Elevators</td><td><input id="corTotalElevators"></td></tr> </table> </form> ``` `parseInt` ========== ```js "use strict"; console.clear(); $(".myCorOpt").change(function () { var $corporateA = parseInt($("#corporateA").val(), 10); var $corporateB = parseInt($("#corporateB").val(), 10); var $corporateE = parseInt($("#corporateE").val(), 10); var $corElevators = Math.ceil( ($corporateE * ($corporateA + $corporateB)) / 1000 ); var $corColumns = Math.ceil(($corporateA + $corporateB) / 20); var $elevatorsPerColumnsCor = Math.ceil($corElevators / $corColumns); var $corTotalElevators = $elevatorsPerColumnsCor * $corColumns; $('#corElevators').val($corElevators); $('#corColumns').val($corColumns); $('#elevatorsPerColumnsCor').val($elevatorsPerColumnsCor); $('#corTotalElevators').val($corTotalElevators); }); ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <form class="myCorOpt"> <table> <tr><td>Corporate A</td><td><input id="corporateA" placeholder="corporateA" value="89"></td></tr> <tr><td>Corporate B</td><td><input id="corporateB" placeholder="corporateB" value="6"></td></tr> <tr><td>Corporate E</td><td><input id="corporateE" placeholder="corporateE"></td></tr> <tr><td colspan="2"><hr></td></tr> <tr><td>Elevators</td><td><input id="corElevators"></td></tr> <tr><td>Columns</td><td><input id="corColumns"></td></tr> <tr><td>Elevators per Columns</td><td><input id="elevatorsPerColumnsCor"></td></tr> <tr><td>Total Elevators</td><td><input id="corTotalElevators"></td></tr> </table> </form> ```
63,881,599
I am trying to use the [kubernetes-client](https://github.com/kubernetes-client) in Python in order to create a horizontal pod autoscaler in Kubernetes. To do so I make use of the [create\_namespaced\_horizontal\_pod\_autoscaler](https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/AutoscalingV2beta2Api.md#create_namespaced_horizontal_pod_autoscaler)() function that belongs to the [AutoscalingV2beta2Api](https://github.com/kubernetes-client/python/tree/master/kubernetes). My code is the following: ``` my_metrics = [] my_metrics.append(client.V2beta2MetricSpec(type='Pods', pods= client.V2beta2PodsMetricSource(metric=client.V2beta2MetricIdentifier(name='cpu'), target=client.V2beta2MetricTarget(average_utilization='50',type='Utilization')))) body = client.V2beta2HorizontalPodAutoscaler( api_version='autoscaling/v2beta2', kind='HorizontalPodAutoscaler', metadata=client.V1ObjectMeta(name='php-apache'), spec= client.V2beta2HorizontalPodAutoscalerSpec( max_replicas=10, min_replicas=1, metrics = my_metrics, scale_target_ref = client.V2beta2CrossVersionObjectReference(kind='Object',name='php-apache') )) v2 = client.AutoscalingV2beta2Api() ret = v2.create_namespaced_horizontal_pod_autoscaler(namespace='default', body=body, pretty=True) ``` And the response I get: ``` HTTP response body: { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "HorizontalPodAutoscaler in version \"v2beta2\" cannot be handled as a HorizontalPodAutoscaler: v2beta2.HorizontalPodAutoscaler.Spec: v2beta2.HorizontalPodAutoscalerSpec.Metrics: []v2beta2.MetricSpec: v2beta2.MetricSpec.Pods: v2beta2.PodsMetricSource.Target: v2beta2.MetricTarget.AverageUtilization: readUint32: unexpected character: \ufffd, error found in #10 byte of ...|zation\": \"50\", \"type|..., bigger context ...|\"name\": \"cpu\"}, \"target\": {\"averageUtilization\": \"50\", \"type\": \"Utilization\"}}, \"type\": \"Pods\"}], \"m|...", "reason": "BadRequest", "code": 400 } ``` I just want to use the kubernetes-client to call the equivalent of: ``` kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 ``` Any working example would be more than welcome.
2020/09/14
[ "https://Stackoverflow.com/questions/63881599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1488184/" ]
The example below creates a memory-based HPA via the latest version of the [k8s python client](https://github.com/kubernetes-client/python) (v12.0.1). ``` my_metrics = [] my_metrics.append(client.V2beta2MetricSpec(type='Resource', resource= client.V2beta2ResourceMetricSource(name='memory',target=client.V2beta2MetricTarget(average_utilization= 30,type='Utilization')))) my_conditions = [] my_conditions.append(client.V2beta2HorizontalPodAutoscalerCondition(status = "True", type = 'AbleToScale')) status = client.V2beta2HorizontalPodAutoscalerStatus(conditions = my_conditions, current_replicas = 1, desired_replicas = 1) body = client.V2beta2HorizontalPodAutoscaler( api_version='autoscaling/v2beta2', kind='HorizontalPodAutoscaler', metadata=client.V1ObjectMeta(name=self.app_name), spec= client.V2beta2HorizontalPodAutoscalerSpec( max_replicas=self.MAX_PODS, min_replicas=self.MIN_PODS, metrics = my_metrics, scale_target_ref = client.V2beta2CrossVersionObjectReference(kind = 'Deployment', name = self.app_name, api_version = 'apps/v1'), ), status = status) v2 = client.AutoscalingV2beta2Api() ret = v2.create_namespaced_horizontal_pod_autoscaler(namespace='default', body=body, pretty=True) ```
The following error: > > > ``` > v2beta2.MetricTarget.AverageUtilization: readUint32: unexpected character: \ufffd, error found in #10 byte of .. > > ``` > > means that it is expecting a `Uint32` and you are passing a `string`: ``` target=client.V2beta2MetricTarget(average_utilization='50',type='Utilization') HERE ------^^^^ ``` Change it to an integer value and it should work. --- The same information can be found in the [python client documentation](https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V2beta2MetricTarget.md) (notice the type column).
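i.e., a minimal sketch of the corrected line (the rest of the spec from the question unchanged):

```python
# average_utilization must be an int, not a str
target = client.V2beta2MetricTarget(average_utilization=50, type='Utilization')
```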
12,439,762
Given this (among more...): ``` compile_coffee() { echo "Compile COFFEESCRIPT files..." i=0 for folder in ${COFFEE_FOLDER[*]} do for file in $folder/*.coffee do file_name=$(echo "$file" | awk -F "/" '{print $NF}' | awk -F "." '{print $1}') file_destination_path=${COFFEE_DESTINATION_FOLDER[${i}]} file_destination="$file_destination_path/$file_name.js" if [ -f $file_path ]; then echo "+ $file -> $file_destination" $COFFEE_CMD $COFFEE_PARAMS $file > $file_destination #FAIL #$COFFEE_CMD $COFFEE_PARAMS $file > testfile fi done i=$i+1 done echo "done!" compress_javascript } ``` And just to clarify, everything except the #FAIL line works flawlessly; if I'm doing something wrong just tell me. The problem I have is: * the line executes and does what it has to do, but doesn't write the file that I put in "file_destination". * if I delete a folder in that route (it's relative to this script, see below), bash throws an error saying that the folder does not exist. * If I make the folder again, no errors, but no file either. * If I change the $file_destination to "testfile", it creates the file with the correct contents. * The $file_destination path is ok (as you can see, my script echoes it). * if I echo the entire line, copy the exact command with params and execute it in a shell in the same directory the script is in, it works. I don't know what is wrong with this; I've been wondering for two hours... Script output (real paths): ``` (alpha)[pyron@vps herobrine]$ ./deploy.sh compile && ls -l database/static/js/ =============================== === Compile === Compile COFFEESCRIPT files... + ./database/static/coffee/test.coffee -> ./database/static/js/test.js done! Linking static files to django staticfiles folder... done! total 0 ``` Complete command: ``` coffee --compile --print ./database/static/coffee/test.coffee > ./database/static/js/test.js ``` What am I missing? **EDIT** I've made some progress on this. In the shell, if I deactivate the python virtualenv the script works, but if I call deactivate from the script it says command not found.
2012/09/15
[ "https://Stackoverflow.com/questions/12439762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/697150/" ]
This will give you 0.85 and 1.00 from your specified string, stored in `$values[1]` and `$values[2]` respectively. ``` $values = array(); preg_match('/Chop Suey<\/a><\/td><td align="right">([\d]+\.[\d]+)<\/td><td align="right">([\d]+\.[\d]+)<\/td>/', 'Chop Suey</a></td><td align="right">0.85</td><td align="right">1.00</td>', $values); ```
You could also be more dynamic about it. Instead of statically looking for "Chop Suey", why not match the numbers themselves? Here is a very basic sample:

```
preg_match_all("/\d+\.\d+/", $content, $output);
```

(The above match gives you all the decimals you need, in the correct order. Note the escaped dot, so it only matches a literal decimal point, and `preg_match_all`, so that all matches are captured.)

`$output[0]` is the array you can loop over; for the exact numbers above, you'd use `$output[0][0]` and `$output[0][1]`,

as seen in the regex example [here](http://lumadis.be/regex/test_regex.php?id=1301)
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This comes straight from Python Docs (<https://docs.python.org/3.3/using/windows.html>): **3.3.5. Executing scripts without the Python launcher** Without the Python launcher installed, Python scripts (files with the extension .py) will be executed by python.exe by default. This executable opens a terminal, which stays open even if the program uses a GUI. If you do not want this to happen, use the extension .pyw which will cause the script to be executed by pythonw.exe by default (both executables are located in the top-level of your Python installation directory). This suppresses the terminal window on startup. You can also make all .py scripts execute with pythonw.exe, setting this through the usual facilities, for example (might require administrative rights): Launch a command prompt. Associate the correct file group with .py scripts: ``` assoc .py=Python.File ``` Redirect all Python files to the new executable: ``` ftype Python.File=C:\Path\to\pythonw.exe "%1" %* ```
The solution that worked like a charm for me:

> From <https://www.tutorialexample.com/convert-python-script-to-exe-using-auto-py-to-exe-library-python-tutorial/>

`pip install auto-py-to-exe` The GUI is available just by typing: `auto-py-to-exe` Then, I used this command to generate the desired output: `pyinstaller --noconfirm --onedir --windowed --icon "path/favicon.ico" "path/your_python_script"` Now I have my script as an executable on the taskbar
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As an alternative to [py2exe](http://www.py2exe.org/) you could use the pyinstaller package with the `onefile` flag. This is a solution which works for python 3.x. 1. install pyinstaller via pip 2. Package your file to a single exe with the `onefile` flag ``` pyinstaller --onefile your_file.py ```
A better way (in my opinion): Create a shortcut: Set the target to `%systemroot%\System32\cmd.exe /c "python C:\Users\MyUsername\Documents\MyScript.py"` Start In: `C:\Users\MyUsername\Documents\` Obviously change the path to the location of your script. May need to add escaped quotes if there is a space in it.
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This comes straight from Python Docs (<https://docs.python.org/3.3/using/windows.html>): **3.3.5. Executing scripts without the Python launcher** Without the Python launcher installed, Python scripts (files with the extension .py) will be executed by python.exe by default. This executable opens a terminal, which stays open even if the program uses a GUI. If you do not want this to happen, use the extension .pyw which will cause the script to be executed by pythonw.exe by default (both executables are located in the top-level of your Python installation directory). This suppresses the terminal window on startup. You can also make all .py scripts execute with pythonw.exe, setting this through the usual facilities, for example (might require administrative rights): Launch a command prompt. Associate the correct file group with .py scripts: ``` assoc .py=Python.File ``` Redirect all Python files to the new executable: ``` ftype Python.File=C:\Path\to\pythonw.exe "%1" %* ```
[This tutorial](https://datatofish.com/batch-python-script/) shows how to create a batch file that runs a python script. Note that many of the other answers are out of date - py2exe is out of support past python 3.4. More info [here](https://stackoverflow.com/a/42310168/3206926).
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
[This tutorial](https://datatofish.com/batch-python-script/) shows how to create a batch file that runs a python script. Note that many of the other answers are out of date - py2exe is out of support past python 3.4. More info [here](https://stackoverflow.com/a/42310168/3206926).
A better way (in my opinion): Create a shortcut: Set the target to `%systemroot%\System32\cmd.exe /c "python C:\Users\MyUsername\Documents\MyScript.py"` Start In: `C:\Users\MyUsername\Documents\` Obviously change the path to the location of your script. May need to add escaped quotes if there is a space in it.
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
[This tutorial](https://datatofish.com/batch-python-script/) shows how to create a batch file that runs a python script. Note that many of the other answers are out of date - py2exe is out of support past python 3.4. More info [here](https://stackoverflow.com/a/42310168/3206926).
The solution that worked like a charm for me:

> From <https://www.tutorialexample.com/convert-python-script-to-exe-using-auto-py-to-exe-library-python-tutorial/>

`pip install auto-py-to-exe` The GUI is available just by typing: `auto-py-to-exe` Then, I used this command to generate the desired output: `pyinstaller --noconfirm --onedir --windowed --icon "path/favicon.ico" "path/your_python_script"` Now I have my script as an executable on the taskbar
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The simplest solution will be creating a [batch file](https://en.wikipedia.org/wiki/Batch_file) containing a command like: ```
c:\python27\python.exe c:\somescript.py
``` With this solution you will need to have a Python interpreter installed. If you need a more portable solution, you can try e.g. [py2exe](http://www.py2exe.org/) and bundle the Python scripts into an executable file that is able to run without requiring a Python installation.
A better way (in my opinion): Create a shortcut: Set the target to `%systemroot%\System32\cmd.exe /c "python C:\Users\MyUsername\Documents\MyScript.py"` Start In: `C:\Users\MyUsername\Documents\` Obviously change the path to the location of your script. May need to add escaped quotes if there is a space in it.
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The simplest solution will be creating a [batch file](https://en.wikipedia.org/wiki/Batch_file) containing a command like: ```
c:\python27\python.exe c:\somescript.py
``` With this solution you will need to have a Python interpreter installed. If you need a more portable solution, you can try e.g. [py2exe](http://www.py2exe.org/) and bundle the Python scripts into an executable file that is able to run without requiring a Python installation.
[This tutorial](https://datatofish.com/batch-python-script/) shows how to create a batch file that runs a python script. Note that many of the other answers are out of date - py2exe is out of support past python 3.4. More info [here](https://stackoverflow.com/a/42310168/3206926).
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
A better way (in my opinion): Create a shortcut: Set the target to `%systemroot%\System32\cmd.exe /c "python C:\Users\MyUsername\Documents\MyScript.py"` Start In: `C:\Users\MyUsername\Documents\` Obviously change the path to the location of your script. May need to add escaped quotes if there is a space in it.
The solution that worked like a charm for me:

> From <https://www.tutorialexample.com/convert-python-script-to-exe-using-auto-py-to-exe-library-python-tutorial/>

`pip install auto-py-to-exe` The GUI is available just by typing: `auto-py-to-exe` Then, I used this command to generate the desired output: `pyinstaller --noconfirm --onedir --windowed --icon "path/favicon.ico" "path/your_python_script"` Now I have my script as an executable on the taskbar
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This comes straight from Python Docs (<https://docs.python.org/3.3/using/windows.html>): **3.3.5. Executing scripts without the Python launcher** Without the Python launcher installed, Python scripts (files with the extension .py) will be executed by python.exe by default. This executable opens a terminal, which stays open even if the program uses a GUI. If you do not want this to happen, use the extension .pyw which will cause the script to be executed by pythonw.exe by default (both executables are located in the top-level of your Python installation directory). This suppresses the terminal window on startup. You can also make all .py scripts execute with pythonw.exe, setting this through the usual facilities, for example (might require administrative rights): Launch a command prompt. Associate the correct file group with .py scripts: ``` assoc .py=Python.File ``` Redirect all Python files to the new executable: ``` ftype Python.File=C:\Path\to\pythonw.exe "%1" %* ```
A better way (in my opinion): Create a shortcut: Set the target to `%systemroot%\System32\cmd.exe /c "python C:\Users\MyUsername\Documents\MyScript.py"` Start In: `C:\Users\MyUsername\Documents\` Obviously change the path to the location of your script. May need to add escaped quotes if there is a space in it.
37,219,045
I have a Python script I run using Cygwin and I'd like to create a clickable icon on the Windows desktop that could run this script without opening Cygwin and entering the commands by hand. How can I do this?
2016/05/13
[ "https://Stackoverflow.com/questions/37219045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This comes straight from Python Docs (<https://docs.python.org/3.3/using/windows.html>): **3.3.5. Executing scripts without the Python launcher** Without the Python launcher installed, Python scripts (files with the extension .py) will be executed by python.exe by default. This executable opens a terminal, which stays open even if the program uses a GUI. If you do not want this to happen, use the extension .pyw which will cause the script to be executed by pythonw.exe by default (both executables are located in the top-level of your Python installation directory). This suppresses the terminal window on startup. You can also make all .py scripts execute with pythonw.exe, setting this through the usual facilities, for example (might require administrative rights): Launch a command prompt. Associate the correct file group with .py scripts: ``` assoc .py=Python.File ``` Redirect all Python files to the new executable: ``` ftype Python.File=C:\Path\to\pythonw.exe "%1" %* ```
As an alternative to [py2exe](http://www.py2exe.org/) you could use the pyinstaller package with the `onefile` flag. This is a solution which works for python 3.x. 1. install pyinstaller via pip 2. Package your file to a single exe with the `onefile` flag ``` pyinstaller --onefile your_file.py ```
53,780,882
I have multiple times as a string: ```
"2018-12-14 11:20:16","2018-12-14 11:14:01","2018-12-14 11:01:58","2018-12-14 10:54:21"
``` I want to calculate the average time difference between all these times. The above example would be: ```
2018-12-14 11:20:16 - 2018-12-14 11:14:01 = 6 minutes 15 seconds
2018-12-14 11:14:01 - 2018-12-14 11:01:58 = 12 minutes 3 seconds
2018-12-14 11:01:58 - 2018-12-14 10:54:21 = 7 minutes 37 seconds
``` (375s+723s+457s)/3 = 518.3 seconds = **8 minutes 38.33 seconds** (If my hand math is correct.) I have all the date strings in an array. I assume it's easy to convert them to timestamps, loop over them to calculate the differences and then the average? I just began to learn Python and this is the first program that I want to write for myself.
2018/12/14
[ "https://Stackoverflow.com/questions/53780882", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323427/" ]
You have a bunch of problems. I'm not going to give you solutions to all of these. Just hints to get you started. I assume you know... * ... how to convert a time string to a `datetime.datetime` object, * ... how to calculate the difference between two, i.e., a `datetime.timedelta`, * ... how to compute a mean value (`datetime.timedelta` supports both addition and division by `int` or `float`). So the only complicated step in this task will be, how to calculate the differences between adjacent items in a list. I'm going to show you this. When you have a list of items, you can *slice* and *zip* to produce an iterator to iterate the pairs of adjacent elements in the list. For example, try this (untested code): ``` l = range(10) for i,j in zip(l, l[1:]): print(i, j) ``` Then build a list to contain the differences, e.g., using a *list comprehension*: ``` deltas = [i-j for i,j in zip(l, l[1:])] ``` Now [calculate your mean value](https://stackoverflow.com/q/3617170/1025391) and you're done. For reference: * <https://docs.python.org/3/library/datetime.html#datetime-objects> * <https://docs.python.org/3/library/datetime.html#timedelta-objects> * [Understanding Python's slice notation](https://stackoverflow.com/questions/509211/understanding-pythons-slice-notation) * [Iterate a list as pair (current, next) in Python](https://stackoverflow.com/q/5434891/1025391) * <https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions> * [Average timedelta in list](https://stackoverflow.com/q/3617170/1025391)
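Putting those hints together, a minimal end-to-end sketch (using the timestamps from the question; plain Python 3, nothing beyond the standard library):

```
from datetime import datetime, timedelta

stamps = ["2018-12-14 11:20:16", "2018-12-14 11:14:01",
          "2018-12-14 11:01:58", "2018-12-14 10:54:21"]

# parse the strings into datetime objects
times = [datetime.strptime(s, "%Y-%m-%d %H:%M:%S") for s in stamps]

# differences between adjacent items (the input is sorted newest-first)
deltas = [a - b for a, b in zip(times, times[1:])]

# timedelta supports addition and division by int, so the mean is straightforward
average = sum(deltas, timedelta(0)) / len(deltas)
print(average)  # 0:08:38.333333
```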
```
from datetime import datetime
import numpy as np

ts_list = ["2018-12-14 11:20:16","2018-12-14 11:14:01","2018-12-14 11:01:58","2018-12-14 10:54:21"]
dif_list = []
for i in range(len(ts_list)-1):
    dif_list.append((datetime.strptime(ts_list[i], '%Y-%m-%d %H:%M:%S')
                     - datetime.strptime(ts_list[i+1], '%Y-%m-%d %H:%M:%S')).total_seconds())

avg = np.mean(dif_list)
print(avg)
```

The result is 518.3333333333334 seconds.
44,367,508
I have a for loop which runs a Python script ~100 times on 100 different input folders. The python script is most efficient on 2 cores, and I have 50 cores available. So I'd like to use GNU parallel to run the script on 25 folders at a time. Here's my for loop (works fine, but is sequential of course), the python script takes a bunch of input variables including the `-p 2` which runs it on two cores: ``` for folder in $(find /home/rob/PartitionFinder/ -maxdepth 2 -type d); do python script.py --raxml --quick --no-ml-tree $folder --force -p 2 done ``` and here's my attempt to parallelise it, which does not work: ``` folders=$(find /home/rob/PartitionFinder/ -maxdepth 2 -type d) echo $folders | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2 ``` The issue I'm hitting (perhaps it's just the first of many though) is that my `folders` variable is not a list, so it's really just passing a long string of 100 folders as the `{}` to the script. All hints gratefully received.
2017/06/05
[ "https://Stackoverflow.com/questions/44367508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1084470/" ]
Replace `echo $folders | parallel ...` with `echo "$folders" | parallel ...`. Without the double quotes, the shell parses spaces in `$folders` and passes them as separate arguments to `echo`, which causes them to be printed on one line. `parallel` provides each line as argument to the job. To avoid such quoting issues altogether, it is always a good idea to pipe `find` to `parallel` directly, and use the null character as the delimiter: ``` find ... -print0 | parallel -0 ... ``` This will work even when encountering file names that contain multiple spaces or a newline character.
You can pipe find directly to parallel:

```
find /home/rob/PartitionFinder/ -maxdepth 2 -type d | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2
```

If you want to keep the string in `$folders`, you can pipe the echo to xargs.

```
echo $folders | xargs -n 1 | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2
```
44,367,508
I have a for loop which runs a Python script ~100 times on 100 different input folders. The python script is most efficient on 2 cores, and I have 50 cores available. So I'd like to use GNU parallel to run the script on 25 folders at a time. Here's my for loop (works fine, but is sequential of course), the python script takes a bunch of input variables including the `-p 2` which runs it on two cores: ``` for folder in $(find /home/rob/PartitionFinder/ -maxdepth 2 -type d); do python script.py --raxml --quick --no-ml-tree $folder --force -p 2 done ``` and here's my attempt to parallelise it, which does not work: ``` folders=$(find /home/rob/PartitionFinder/ -maxdepth 2 -type d) echo $folders | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2 ``` The issue I'm hitting (perhaps it's just the first of many though) is that my `folders` variable is not a list, so it's really just passing a long string of 100 folders as the `{}` to the script. All hints gratefully received.
2017/06/05
[ "https://Stackoverflow.com/questions/44367508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1084470/" ]
You can pipe find directly to parallel:

```
find /home/rob/PartitionFinder/ -maxdepth 2 -type d | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2
```

If you want to keep the string in `$folders`, you can pipe the echo to xargs.

```
echo $folders | xargs -n 1 | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2
```
You can create a `Makefile` like this:

```
#!/usr/bin/make -f

FOLDERS=$(shell find /home/rob/PartitionFinder/ -maxdepth 2 -type d)

all: ${FOLDERS}

# To execute the find before the all
find_folders:
    @ echo $(FOLDERS) > /dev/null

${FOLDERS}: find_folders
    @ python script.py --raxml --quick --no-ml-tree $@ --force -p 2
```

and then run `make -j 25`.

Be careful: use tabs to indent in your file. Also, files with spaces in the name won't work.
44,367,508
I have a for loop which runs a Python script ~100 times on 100 different input folders. The python script is most efficient on 2 cores, and I have 50 cores available. So I'd like to use GNU parallel to run the script on 25 folders at a time. Here's my for loop (works fine, but is sequential of course), the python script takes a bunch of input variables including the `-p 2` which runs it on two cores: ``` for folder in $(find /home/rob/PartitionFinder/ -maxdepth 2 -type d); do python script.py --raxml --quick --no-ml-tree $folder --force -p 2 done ``` and here's my attempt to parallelise it, which does not work: ``` folders=$(find /home/rob/PartitionFinder/ -maxdepth 2 -type d) echo $folders | parallel -P 25 python script.py --raxml --quick --no-ml-tree {} --force -p 2 ``` The issue I'm hitting (perhaps it's just the first of many though) is that my `folders` variable is not a list, so it's really just passing a long string of 100 folders as the `{}` to the script. All hints gratefully received.
2017/06/05
[ "https://Stackoverflow.com/questions/44367508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1084470/" ]
Replace `echo $folders | parallel ...` with `echo "$folders" | parallel ...`. Without the double quotes, the shell parses spaces in `$folders` and passes them as separate arguments to `echo`, which causes them to be printed on one line. `parallel` provides each line as argument to the job. To avoid such quoting issues altogether, it is always a good idea to pipe `find` to `parallel` directly, and use the null character as the delimiter: ``` find ... -print0 | parallel -0 ... ``` This will work even when encountering file names that contain multiple spaces or a newline character.
You can create a `Makefile` like this:

```
#!/usr/bin/make -f

FOLDERS=$(shell find /home/rob/PartitionFinder/ -maxdepth 2 -type d)

all: ${FOLDERS}

# To execute the find before the all
find_folders:
    @ echo $(FOLDERS) > /dev/null

${FOLDERS}: find_folders
    @ python script.py --raxml --quick --no-ml-tree $@ --force -p 2
```

and then run `make -j 25`.

Be careful: use tabs to indent in your file. Also, files with spaces in the name won't work.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
If you are using Python for IOS, the following should work, although I haven't yet tried it myself. Email the program to your own e-mail account as text. Then read the e-mail message on your iPad in any one of several e-mail applications. Cut and paste the text from the e-mail message into the python editor. Don't cut and paste the code into the interpreter. Then you can't save it, at least not in the current version of Python for IOS. Instead, click on the second icon on the bottom (I think that's the icon, my iPad is at home and I'm not home now), to open the editor. You can save files from the editor using the menu button on the upper right; there's a "save" menu item that allows saving the code to a file on the iPad. I'll be trying this tonight. Sorry for posting this before trying it, but I'm not sure I'll return to this question later. It 'should' work. (Famous last words!)
I use Python 2.7 for IOS and download source python files through iFunBox into /var/mobile/Applications/Python 2.7 for IOS/Documents/User Scripts. Nevertheless I can't recommend this application as it's quite buggy and very slow when editing code.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
If you're just running command-line Python, then you can edit it and/or upload it to a server and then run it from an iPad ssh terminal client. I know that's not the same as pushing it onto the iPad, and it requires a server and an internet connection, but it's the easiest way. Otherwise, include a Python interpreter in your app and a UITextView that prints its outputs. Edit the code on your PC (or Mac, I guess), as a file in your Xcode project, and have the interpreter run that code and show the output in the UITextView.
Pythonista for iPad: save a script from the web onto the iPad
-----------------------------------------------------------

Pythonista is a great app for Python on the iPad. If you use a Bluetooth keyboard it can also be easier to type (I use the Logitech keyboard and it's great). Save your script to your GitHub repository (or any other site). Go into Pythonista and run this script (Python 3):

```
from urllib.request import urlopen

file = "test_es1.py"
url = urlopen("https://formazione.github.io/" + file)
content = url.read()

with open(file, "w") as new_file:
    content = content.decode("utf-8")
    print(content)
    new_file.write(content)

# Note: os.startfile() is Windows-only and is not available in Pythonista;
# open the saved file from the app instead.
```

Now that you have saved the file from your site into Pythonista, you can open it from the app and run it.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
You may be interested in the <https://www.pythonanywhere.com> project. I am using it on the iPad too.
I use Python 2.7 for IOS and download source python files through iFunBox into /var/mobile/Applications/Python 2.7 for IOS/Documents/User Scripts. Nevertheless I can't recommend this application as it's quite buggy and very slow when editing code.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
Pythonista for iPad: save a script from the web onto the iPad
-----------------------------------------------------------

Pythonista is a great app for Python on the iPad. If you use a Bluetooth keyboard it can also be easier to type (I use the Logitech keyboard and it's great). Save your script to your GitHub repository (or any other site). Go into Pythonista and run this script (Python 3):

```
from urllib.request import urlopen

file = "test_es1.py"
url = urlopen("https://formazione.github.io/" + file)
content = url.read()

with open(file, "w") as new_file:
    content = content.decode("utf-8")
    print(content)
    new_file.write(content)

# Note: os.startfile() is Windows-only and is not available in Pythonista;
# open the saved file from the app instead.
```

Now that you have saved the file from your site into Pythonista, you can open it from the app and run it.
Use the Eclipse Che editor on Codenvy.com: [Codenvy Website](https://codenvy.com/)

It's a browser-based IDE and can be used from any device that has a web browser.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
If you are using Python for IOS, the following should work, although I haven't yet tried it myself. Email the program to your own e-mail account as text. Then read the e-mail message on your iPad in any one of several e-mail applications. Cut and paste the text from the e-mail message into the python editor. Don't cut and paste the code into the interpreter. Then you can't save it, at least not in the current version of Python for IOS. Instead, click on the second icon on the bottom (I think that's the icon, my iPad is at home and I'm not home now), to open the editor. You can save files from the editor using the menu button on the upper right; there's a "save" menu item that allows saving the code to a file on the iPad. I'll be trying this tonight. Sorry for posting this before trying it, but I'm not sure I'll return to this question later. It 'should' work. (Famous last words!)
If you're just running command-line Python, then you can edit it and/or upload it to a server and then run it from an iPad ssh terminal client. I know that's not the same as pushing it onto the iPad, and it requires a server and an internet connection, but it's the easiest way. Otherwise, include a Python interpreter in your app and a UITextView that prints its outputs. Edit the code on your PC (or Mac, I guess), as a file in your Xcode project, and have the interpreter run that code and show the output in the UITextView.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
You may be interested in the <https://www.pythonanywhere.com> project. I am using it on the iPad too.
If you're just running command-line Python, then you can edit it and/or upload it to a server and then run it from an iPad ssh terminal client. I know that's not the same as pushing it onto the iPad, and it requires a server and an internet connection, but it's the easiest way. Otherwise, include a Python interpreter in your app and a UITextView that prints its outputs. Edit the code on your PC (or Mac, I guess), as a file in your Xcode project, and have the interpreter run that code and show the output in the UITextView.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
I use Python 2.7 for IOS and download source python files through iFunBox into /var/mobile/Applications/Python 2.7 for IOS/Documents/User Scripts. Nevertheless I can't recommend this application as it's quite buggy and very slow when editing code.
Use the Eclipse Che editor on Codenvy.com: [Codenvy Website](https://codenvy.com/)

It's a browser-based IDE and can be used from any device that has a web browser.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
If you're just running command-line Python, then you can edit it and/or upload it to a server and then run it from an iPad ssh terminal client. I know that's not the same as pushing it onto the iPad, and it requires a server and an internet connection, but it's the easiest way. Otherwise, include a Python interpreter in your app and a UITextView that prints its outputs. Edit the code on your PC (or Mac, I guess), as a file in your Xcode project, and have the interpreter run that code and show the output in the UITextView.
Use the Eclipse Che editor on Codenvy.com: [Codenvy Website](https://codenvy.com/)

It's a browser-based IDE and can be used from any device that has a web browser.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
You may be interested in the <https://www.pythonanywhere.com> project. I am using it on the iPad too.
Use the Eclipse Che editor on Codenvy.com: [Codenvy Website](https://codenvy.com/)

It's a browser-based IDE and can be used from any device that has a web browser.
12,002,051
Recently, I've found that the iPad can run Python with a special Python interpreter. But editing code on the iPad is a terrible nightmare. So how can I push Python code that has been edited entirely on a PC onto the iPad and run it?
2012/08/17
[ "https://Stackoverflow.com/questions/12002051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508702/" ]
If you're just running command-line Python, then you can edit it and/or upload it to a server and then run it from an iPad ssh terminal client. I know that's not the same as pushing it onto the iPad, and it requires a server and an internet connection, but it's the easiest way. Otherwise, include a Python interpreter in your app and a UITextView that prints its outputs. Edit the code on your PC (or Mac, I guess), as a file in your Xcode project, and have the interpreter run that code and show the output in the UITextView.
I use Python 2.7 for IOS and download source python files through iFunBox into /var/mobile/Applications/Python 2.7 for IOS/Documents/User Scripts. Nevertheless I can't recommend this application as it's quite buggy and very slow when editing code.
44,952,354
I would like to use segment of the same string in a formatted string. ```python input_string = 'abcdefhijk' result_string = "A's name is abcd-defh; he does hijk" ``` The intuitive solution is ```python "A's name is {0[0:4]}-{0{[3:7]}; he does {0[6:10]}".format(input_string) ``` And this obviously doesn't work. What works is the round-about way ```python "A's name is {0}-{1}; he does {2}".format(input_string[0:4], input_string[3:7], input_string[6:10]) ``` Is there any easy way to specify the index range in the formatting string?
2017/07/06
[ "https://Stackoverflow.com/questions/44952354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5112950/" ]
In Python 3.6, you can make a similar construct to your "intuitive solution" with [f-strings](https://www.python.org/dev/peps/pep-0498/): ``` >>> input_string = 'abcdefhijk' >>> f"A's name is {input_string[0:4]}-{input_string[3:7]}; he does {input_string[6:10]}" "A's name is abcd-defh; he does hijk" ```
Have you tried the below?

```
result_string = "A's name is %s-%s; he does %s" % (input_string[:4], input_string[3:7], input_string[6:10])
```
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
This one worked for me after wasting hours: ``` pip install --pre -i https://pypi.anaconda.org/scipy-wheels-nightly/simple scipy ```
The following worked for me. I'm currently using `Python 3.10.8`, installed using `brew`. And currently, when installing `numpy==1.23.4`, `setuptools < 60.0.0` is required. I'm using `(brew --prefix)/bin/python3 -m pip` for explicitly calling the `pip` from `python 3.10` installed by `brew`. Here are the versions I've just installed. ``` # python 3.10.8 # pip 22.3 # setuptools 59.8.0 # wheel 0.37.1 # numpy 1.23.4 # scipy 1.9.3 # pandas 1.5.1 # scikit-learn 1.1.3 # seaborn 0.12.1 # statsmodels 0.13.2 # gcc 12.2.0 # openblas 0.3.21 # gfortran 12 # pybind11 2.10.0 # Cython 0.29.32 # pythran 0.12.0 ``` Here are the steps I followed: ``` # setuptools < 60.0.0 is required for numpy==1.23.4 in Python 3.10.8 $(brew --prefix)/bin/python3 -m pip install --upgrade pip==22.3 setuptools==59.8.0 wheel==0.37.1 # uninstall numpy and pythran first $(brew --prefix)/bin/python3 -m pip uninstall -y numpy pythran # uninstall scipy $(brew --prefix)/bin/python3 -m pip uninstall -y scipy # install prerequisites (with brew) brew install gcc brew install openblas brew install gfortran # set environment variables for compilers to find openblas export LDFLAGS="-L/opt/homebrew/opt/openblas/lib" export CPPFLAGS="-I/opt/homebrew/opt/openblas/include" # install the prerequisites (with pip) $(brew --prefix)/bin/python3 -m pip install pybind11 $(brew --prefix)/bin/python3 -m pip install Cython # install numpy $(brew --prefix)/bin/python3 -m pip install --no-binary :all: numpy # install pythran after installing numpy, before installing scipy $(brew --prefix)/bin/python3 -m pip install pythran # install scipy export OPENBLAS="$(brew --prefix)/opt/openblas/lib/" $(brew --prefix)/bin/python3 -m pip install scipy # install pandas $(brew --prefix)/bin/python3 -m pip install pandas # install scikit-learn $(brew --prefix)/bin/python3 -m pip install scikit-learn # install seaborn $(brew --prefix)/bin/python3 -m pip install seaborn # install statsmodels $(brew --prefix)/bin/python3 -m pip install statsmodels ```
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
This one worked for me after wasting hours: ``` pip install --pre -i https://pypi.anaconda.org/scipy-wheels-nightly/simple scipy ```
According to [this Github issue](https://github.com/scipy/scipy/issues/16974), Scipy doesn't work on MacOS 11 (Big Sur). If none of these solutions are working for you I'd suggest updating your OS.
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
[This solution](https://github.com/numpy/numpy/issues/17784#issuecomment-729950525) worked on my M1 machine with `pyenv`: ``` brew install openblas OPENBLAS="$(brew --prefix openblas)" pip install numpy scipy ```
For those who need it for short-term purposes and don't want too much hassle - it seems to work with python 3.6.4 and scipy 1.5.4 out of the box (Big Sur 11.5.2, M1 chip).
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
It's possible to install on regular arm64 brew Python, but you need to compile it yourself. If `numpy` is already installed (from wheels) you'll need to uninstall it: ```
pip3 uninstall -y numpy pythran
``` I had to compile `numpy`, which requires `cython` and `pybind11`: ```
pip3 install cython pybind11
``` Then `numpy` can be compiled: ```
pip3 install --no-binary :all: --no-use-pep517 numpy
``` Scipy needs `pythran` (this should happen after installing numpy): ```
pip3 install pythran
``` Then we need to compile scipy itself; it depends on Fortran and BLAS/LAPACK: ```
brew install openblas gfortran
``` Tell `scipy` where it can find this library: ```
export OPENBLAS=/opt/homebrew/opt/openblas/lib/
``` Then finally compile `scipy`: ```
pip3 install --no-binary :all: --no-use-pep517 scipy
```
In addition, if someone has this error message:

```
########### CLIB COMPILER OPTIMIZATION ###########
Platform      :
  Architecture: aarch64
  Compiler    : clang

CPU baseline  :
  Requested   : 'min'
  Enabled     : none
  Flags       : none
  Extra checks: none

CPU dispatch  :
  Requested   : 'max -xop -fma4'
  Enabled     : none
  Generated   : none
CCompilerOpt.cache_flush[809] : write cache to path
```

I found this solution before compiling numpy and scipy.

**Analysis of reasons:**

The last error in the message above comes from clang, so the failure is most likely caused by the compiler: the new version of the Xcode command-line tools compiles for the arm architecture by default. If we want to build for the x86 architecture instead, we need to set it manually through an environment variable:

```
export ARCHFLAGS="-arch x86_64"
```

Example:

```
3c790c45799ec8c598753ebb22/build/temp.macosx-10.14.6-arm64-3.8/ccompiler_opt_cache_clib.py
----------------------------------------
ERROR: Command errored out with exit status 1: /Users/daniel_edu/Projects/PERSONAL/great_expectation_demo/.env/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/zb/c_b9kh2x1px7vl5683rwz8fr0000gn/T/pip-install-y8alaej_/numpy_3d813a3c790c45799ec8c598753ebb22/setup.py'"'"'; __file__='"'"'/private/var/folders/zb/c_b9kh2x1px7vl5683rwz8fr0000gn/T/pip-install-y8alaej_/numpy_3d813a3c790c45799ec8c598753ebb22/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/zb/c_b9kh2x1px7vl5683rwz8fr0000gn/T/pip-record-q9vraevr/install-record.txt --single-version-externally-managed --compile --install-headers /Users/daniel_edu/Projects/PERSONAL/great_expectation_demo/.env/include/site/python3.8/numpy Check the logs for full command output.
(.env) ➜  great_expectation_demo git:(master) ✗ export ARCHFLAGS="-arch x86_64"
(.env) ➜  great_expectation_demo git:(master) ✗ pip install --no-binary :all: --no-use-pep517 numpy
Collecting numpy
  Using cached numpy-1.21.5.zip (10.7 MB)
  Preparing metadata (setup.py) ... done
Skipping wheel build for numpy, due to binaries being disabled for it.
Installing collected packages: numpy
    Running setup.py install for numpy ... done
Successfully installed numpy-1.21.5
```
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
[This solution](https://github.com/numpy/numpy/issues/17784#issuecomment-729950525) worked on my M1 machine with `pyenv`: ``` brew install openblas OPENBLAS="$(brew --prefix openblas)" pip install numpy scipy ```
According to [this Github issue](https://github.com/scipy/scipy/issues/16974), Scipy doesn't work on MacOS 11 (Big Sur). If none of these solutions are working for you I'd suggest updating your OS.
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
For those who need it for short-term purposes and don't want too much hassle - it seems to work with python 3.6.4 and scipy 1.5.4 out of the box (Big Sur 11.5.2, M1 chip).
I use `conda install scipy` to resolve this problem. Conda has a custom version of scipy for Apple M1. Update macOS to 12 if you don't want to use Conda.
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
For those who need it for short-term purposes and don't want too much hassle - it seems to work with python 3.6.4 and scipy 1.5.4 out of the box (Big Sur 11.5.2, M1 chip).
The following worked for me. I'm currently using `Python 3.10.8`, installed using `brew`. And currently, when installing `numpy==1.23.4`, `setuptools < 60.0.0` is required. I'm using `(brew --prefix)/bin/python3 -m pip` for explicitly calling the `pip` from `python 3.10` installed by `brew`. Here are the versions I've just installed. ``` # python 3.10.8 # pip 22.3 # setuptools 59.8.0 # wheel 0.37.1 # numpy 1.23.4 # scipy 1.9.3 # pandas 1.5.1 # scikit-learn 1.1.3 # seaborn 0.12.1 # statsmodels 0.13.2 # gcc 12.2.0 # openblas 0.3.21 # gfortran 12 # pybind11 2.10.0 # Cython 0.29.32 # pythran 0.12.0 ``` Here are the steps I followed: ``` # setuptools < 60.0.0 is required for numpy==1.23.4 in Python 3.10.8 $(brew --prefix)/bin/python3 -m pip install --upgrade pip==22.3 setuptools==59.8.0 wheel==0.37.1 # uninstall numpy and pythran first $(brew --prefix)/bin/python3 -m pip uninstall -y numpy pythran # uninstall scipy $(brew --prefix)/bin/python3 -m pip uninstall -y scipy # install prerequisites (with brew) brew install gcc brew install openblas brew install gfortran # set environment variables for compilers to find openblas export LDFLAGS="-L/opt/homebrew/opt/openblas/lib" export CPPFLAGS="-I/opt/homebrew/opt/openblas/include" # install the prerequisites (with pip) $(brew --prefix)/bin/python3 -m pip install pybind11 $(brew --prefix)/bin/python3 -m pip install Cython # install numpy $(brew --prefix)/bin/python3 -m pip install --no-binary :all: numpy # install pythran after installing numpy, before installing scipy $(brew --prefix)/bin/python3 -m pip install pythran # install scipy export OPENBLAS="$(brew --prefix)/opt/openblas/lib/" $(brew --prefix)/bin/python3 -m pip install scipy # install pandas $(brew --prefix)/bin/python3 -m pip install pandas # install scikit-learn $(brew --prefix)/bin/python3 -m pip install scikit-learn # install seaborn $(brew --prefix)/bin/python3 -m pip install seaborn # install statsmodels $(brew --prefix)/bin/python3 -m pip install statsmodels ```
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
For me, the easiest solution:

```
brew install scipy
```

It's probably a good idea to edit the PATH so that the Homebrew version becomes the default.
I use `conda install scipy` to resolve this problem. Conda has a custom version of scipy for Apple M1. Update macOS to 12 if you don't want to use Conda.
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
It's possible to install on regular arm64 brew Python, but you need to compile it yourself. If `numpy` is already installed (from wheels) you'll need to uninstall it: ```
pip3 uninstall -y numpy pythran
``` I had to compile `numpy`, which requires `cython` and `pybind11`: ```
pip3 install cython pybind11
``` Then `numpy` can be compiled: ```
pip3 install --no-binary :all: --no-use-pep517 numpy
``` Scipy needs `pythran` (this should happen after installing numpy): ```
pip3 install pythran
``` Then we need to compile scipy itself; it depends on Fortran and BLAS/LAPACK: ```
brew install openblas gfortran
``` Tell `scipy` where it can find this library: ```
export OPENBLAS=/opt/homebrew/opt/openblas/lib/
``` Then finally compile `scipy`: ```
pip3 install --no-binary :all: --no-use-pep517 scipy
```
For those who need it for short-term purposes and don't want too much hassle - it seems to work with python 3.6.4 and scipy 1.5.4 out of the box (Big Sur 11.5.2, M1 chip).
65,745,683
I have successfully installed python 3.9.1 with Numpy and Matplotlib on a new Mac mini with Apple Silicon. However, I cannot install SciPy : I get compilation errors when using ``` python3 -m pip install scipy ``` I also tried installing everything from brew, and `import scipy` works, but using it gives a seg fault. I have installed ARM versions of lapack and openblas, but this does not fix the problem. Has anyone succeeded? (I am interested in running it natively, not through Rosetta).
2021/01/16
[ "https://Stackoverflow.com/questions/65745683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206216/" ]
This one worked for me after wasting hours: ``` pip install --pre -i https://pypi.anaconda.org/scipy-wheels-nightly/simple scipy ```
I managed to get scipy installed on Apple Silicon. I mostly followed the instructions by lutzroeder here: <https://github.com/scipy/scipy/issues/13409> Those instructions weren't successful for me, but running 'pip3 install scipy' worked afterwards. I think this fixed the problem for me: ``` /opt/homebrew/bin/brew install openblas export OPENBLAS=$(/opt/homebrew/bin/brew --prefix openblas) export CFLAGS="-falign-functions=8 ${CFLAGS}" ```
65,560,546
I have a multi-index dataframe like this: ```
PID  Fid  x  y
A    1    2  3
     2    6  1
     3    4  6
B    1    3  5
     2    2  4
     3    5  7
``` I would like to delete the rows with the highest x-value per patient (PID). I need to get a new dataframe with the remaining rows and all columns to continue my analysis on these data, for example the mean of the remaining y-values. The dataframe should look like this: ```
PID  Fid  x  y
A    1    2  3
     3    4  6
B    1    3  5
     2    2  4
``` I used the code from [Python Multiindex Dataframe remove maximum](https://stackoverflow.com/questions/49669129/python-multiindex-dataframe-remove-maximum): ```
idx = (df.reset_index('Fid')
         .groupby('PID')['x']
         .max()
         .reset_index()
         .values.tolist())
df_s = df.loc[df.index.difference(idx)]
``` I can get idx, but I cannot remove those rows from the dataframe. It says `TypeError: unhashable type: 'list'`. What am I doing wrong?
2021/01/04
[ "https://Stackoverflow.com/questions/65560546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14564287/" ]
You can try this: ```
idx = df.groupby(level=0)['x'].idxmax()
df[~df.index.isin(idx)]

         x  y
PID Fid
A   1    2  3
    3    4  6
B   1    3  5
    2    2  4
``` Or you can use `pd.Index.difference` here. ```
df.loc[df.index.difference(df['x'].groupby(level=0).idxmax())]
# use level=0 if the index level is unnamed; otherwise groupby('PID').idxmax() works too

         x  y
PID Fid
A   1    2  3
    3    4  6
B   1    3  5
    2    2  4
```
Use [`GroupBy.transform`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html) for repeat max values per groups, compare by [`Series.ne`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ne.html) for not equal and filter in [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing): ``` df_s = df[df.groupby('PID')['x'].transform('max').ne(df['x'])] print (df_s) x y PID Fid A 1 2 3 3 4 6 B 1 3 5 2 2 4 ```
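To try either answer end to end, here is a small sketch that rebuilds the frame from the question (values copied from its table):

```
import pandas as pd

# rebuild the sample MultiIndex frame from the question
df = pd.DataFrame(
    {"x": [2, 6, 4, 3, 2, 5], "y": [3, 1, 6, 5, 4, 7]},
    index=pd.MultiIndex.from_tuples(
        [("A", 1), ("A", 2), ("A", 3), ("B", 1), ("B", 2), ("B", 3)],
        names=["PID", "Fid"],
    ),
)

# drop the row with the highest x per PID, as in the answers above
df_s = df[df.groupby("PID")["x"].transform("max").ne(df["x"])]
print(df_s)
print(df_s["y"].mean())  # mean of the remaining y-values: 4.5
```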
45,619,217
I created my own user model by sub-classing AbstractBaseUser as is recommended by the docs. The goal here was to use a new field called mob_phone as the identifying field for registration and log-in. It works a charm - for the first user. It sets the username field as nothing - blank. But when I register a second user I get a "UNIQUE constraint failed: user_account_customuser.username". I basically want to do away with the username field altogether. How can I achieve this? I basically need to either find a way of making the username field not unique or remove it altogether.

models.py

```
from django.db import models
from django.contrib.auth.models import AbstractUser, BaseUserManager


class MyUserManager(BaseUserManager):
    def create_user(self, mob_phone, email, password=None):
        """
        Creates and saves a User with the given mobile number and password.
        """
        if not mob_phone:
            raise ValueError('Users must have a mobile phone number')

        user = self.model(
            mob_phone=mob_phone,
            email=email
        )
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, mob_phone, email, password):
        """
        Creates and saves a superuser with the given email, date of
        birth and password.
        """
        user = self.create_user(
            mob_phone=mob_phone,
            email=email,
            password=password
        )
        user.is_admin = True
        user.save(using=self._db)
        return user


class CustomUser(AbstractUser):
    mob_phone = models.CharField(blank=False, max_length=10, unique=True)
    is_admin = models.BooleanField(default=False)

    objects = MyUserManager()

    # override username field as identifier field
    USERNAME_FIELD = 'mob_phone'
    EMAIL_FIELD = 'email'

    def get_full_name(self):
        return self.mob_phone

    def get_short_name(self):
        return self.mob_phone

    def __str__(self):  # __unicode__ on Python 2
        return self.mob_phone

    def has_perm(self, perm, obj=None):
        "Does the user have a specific permission?"
        # Simplest possible answer: Yes, always
        return True

    def has_module_perms(self, app_label):
        "Does the user have permissions to view the app `app_label`?"
        # Simplest possible answer: Yes, always
        return True

    @property
    def is_staff(self):
        "Is the user a member of staff?"
        # Simplest possible answer: All admins are staff
        return self.is_admin
```

Stacktrace:

> Traceback (most recent call last):
> File "manage.py", line 22, in <module>
> execute_from_command_line(sys.argv)
> File "/home/dean/.local/lib/python3.5/site-packages/django/core/management/__init__.py", line 363, in execute_from_command_line
> utility.execute()
> File "/home/dean/.local/lib/python3.5/site-packages/django/core/management/__init__.py", line 355, in execute
> self.fetch_command(subcommand).run_from_argv(self.argv)
> File "/home/dean/.local/lib/python3.5/site-packages/django/core/management/base.py", line 283, in run_from_argv
> self.execute(\*args, \*\*cmd_options)
> File "/home/dean/.local/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 63, in execute
> return super(Command, self).execute(\*args, \*\*options)
> File "/home/dean/.local/lib/python3.5/site-packages/django/core/management/base.py", line 330, in execute
> output = self.handle(\*args, \*\*options)
> File "/home/dean/.local/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 183, in handle
> self.UserModel._default_manager.db_manager(database).create_superuser(\*\*user_data)
> File "/home/dean/Development/UrbanFox/UrbanFox/user_account/models.py", line 43, in create_superuser
> password=password
> File "/home/dean/Development/UrbanFox/UrbanFox/user_account/models.py", line 32, in create_user
> user.save(using=self._db)
> File "/home/dean/.local/lib/python3.5/site-packages/django/contrib/auth/base_user.py", line 80, in save
> super(AbstractBaseUser, self).save(\*args, \*\*kwargs)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/base.py", line 807, in save
> force_update=force_update, update_fields=update_fields)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/base.py", line 837, in save_base
> updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/base.py", line 923, in _save_table
> result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/base.py", line 962, in _do_insert
> using=using, raw=raw)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/manager.py", line 85, in manager_method
> return getattr(self.get_queryset(), name)(\*args, \*\*kwargs)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/query.py", line 1076, in _insert
> return query.get_compiler(using=using).execute_sql(return_id)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 1107, in execute_sql
> cursor.execute(sql, params)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/backends/utils.py", line 80, in execute
> return super(CursorDebugWrapper, self).execute(sql, params)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/backends/utils.py", line 65, in execute
> return self.cursor.execute(sql, params)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/utils.py", line 94, in __exit__
> six.reraise(dj_exc_type, dj_exc_value, traceback)
> File "/home/dean/.local/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
> raise value.with_traceback(tb)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/backends/utils.py", line 65, in execute
> return self.cursor.execute(sql, params)
> File "/home/dean/.local/lib/python3.5/site-packages/django/db/backends/sqlite3/base.py", line 328, in execute
> return Database.Cursor.execute(self, query, params)
> django.db.utils.IntegrityError: UNIQUE constraint failed: user_account_customuser.username
2017/08/10
[ "https://Stackoverflow.com/questions/45619217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5644300/" ]
Okay I'm an idiot. Literally seconds after posting this the obvious solution occurred to me: ``` username = models.CharField(max_length=40, unique=False, default='') ``` Just override the username field and make it non unique. Rubber duck theory in action....
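In context, a minimal sketch of where that override sits, assuming the model from the question; `blank=True` and the `REQUIRED_FIELDS` line are extra assumptions added here so forms and `createsuperuser` behave sensibly, not something from the original post:

```python
from django.db import models
from django.contrib.auth.models import AbstractUser

class CustomUser(AbstractUser):
    # Shadow the inherited field so its UNIQUE constraint goes away.
    username = models.CharField(max_length=40, unique=False, blank=True, default='')
    mob_phone = models.CharField(blank=False, max_length=10, unique=True)

    USERNAME_FIELD = 'mob_phone'
    REQUIRED_FIELDS = ['email']  # must not include USERNAME_FIELD
```

Remember to regenerate and apply migrations after changing the field.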
It might be because you have already entered some data in the database that contradicts the constraint. Try deleting that data (or the whole database) and then run the command again.
607,931
I've deployed trac using apache/mod\_wsgi (no SSL) (preferable, since the problem I'm facing with CGI is performance), and it works fine WITHOUT SVN integration. But I actually need SVN, so when I configure the repository path (i.e: repository\_dir = c:/projects/svn/my\_project) I can't even get my project TRAC to even open any of its pages. On Mozilla Firefox shows a white page and on MS-IE shows a 'The page cannot be displayed' error as if the server has 'timed out'. I've tried with mod\_python (3.3.1) and the exact same problem happens. It works fine with CGI though. I've also tried disabeling SVN authentication, thinking it might be a authentication conflict (I'm using Apache Basic Auth). **Environment:** * Win 2000 Server SP 4; * Apache 2.2.10; * Python 2.5.2; * mod\_wsgi revision 1018 2.3, py25\_apache22; * Trac 0.12dev; * Subversion 1.5.3. **Configuration files:** * Apache httpd.conf excerpt: > > > ``` > WSGIScriptAlias /trac "c:/projects/apache/trac.wsgi" > > <Directory c:/projects/apache> > WSGIApplicationGroup %{GLOBAL} > Order deny,allow > Allow from all > </Directory> > > ``` > > * trac.wsgi: > > > ``` > import sys > sys.stdout = sys.stderr > > import os > os.environ['TRAC_ENV_PARENT_DIR'] = 'c:/projects/trac' > os.environ['PYTHON_EGG_CACHE'] = 'c:/projects/eggs' > > import trac.web.main > > application = trac.web.main.dispatch_request > > ``` > > * trac.ini excerpt: > > > ``` > repository_type = svn > repository_dir = c:/projects/svn/my_project > > ``` > > Any ideas???
2009/03/03
[ "https://Stackoverflow.com/questions/607931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/73377/" ]
Set a breakpoint on the line that reads NewFaqDropDownCategory.DataBind() and another in your event handler (NewFaqDropDownCategory\_SelectedIndexChanged). I suspect the databind is being called right before your NewFaqDropDownCategory\_SelectedIndexChanged event fires, causing your selected value to change. If so, you need to either make sure you only databind when you aren't in the middle of the autopostback, or, instead of using NewFaqDropDownCategory.SelectedIndex on the first line of your event handler, cast the sender parameter to a DropDownList and use its selected value.
I think there is a bug in your LINQ query for the second drop down box ``` Dim faqs = (From f In db.faqs Where f.category = NewFaqDropDownCategory.SelectedValue) ``` Here you are comparing SelectedValue to category. Yet in the first combobox you said that the DataValueField should be category\_id. Try changing f.category to f.category\_id
607,931
I've deployed trac using apache/mod\_wsgi (no SSL) (preferable, since the problem I'm facing with CGI is performance), and it works fine WITHOUT SVN integration. But I actually need SVN, so when I configure the repository path (i.e: repository\_dir = c:/projects/svn/my\_project) I can't even get my project TRAC to even open any of its pages. On Mozilla Firefox shows a white page and on MS-IE shows a 'The page cannot be displayed' error as if the server has 'timed out'. I've tried with mod\_python (3.3.1) and the exact same problem happens. It works fine with CGI though. I've also tried disabeling SVN authentication, thinking it might be a authentication conflict (I'm using Apache Basic Auth). **Environment:** * Win 2000 Server SP 4; * Apache 2.2.10; * Python 2.5.2; * mod\_wsgi revision 1018 2.3, py25\_apache22; * Trac 0.12dev; * Subversion 1.5.3. **Configuration files:** * Apache httpd.conf excerpt: > > > ``` > WSGIScriptAlias /trac "c:/projects/apache/trac.wsgi" > > <Directory c:/projects/apache> > WSGIApplicationGroup %{GLOBAL} > Order deny,allow > Allow from all > </Directory> > > ``` > > * trac.wsgi: > > > ``` > import sys > sys.stdout = sys.stderr > > import os > os.environ['TRAC_ENV_PARENT_DIR'] = 'c:/projects/trac' > os.environ['PYTHON_EGG_CACHE'] = 'c:/projects/eggs' > > import trac.web.main > > application = trac.web.main.dispatch_request > > ``` > > * trac.ini excerpt: > > > ``` > repository_type = svn > repository_dir = c:/projects/svn/my_project > > ``` > > Any ideas???
2009/03/03
[ "https://Stackoverflow.com/questions/607931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/73377/" ]
I had the same problem. I found I had forgotten to check whether I was posting back to the page, and I was binding my DropDownList control in the `Page_Load` event of the page. I had forgotten to use: ``` if (!IsPostBack) { .... do databind .... } ```
I think there is a bug in your LINQ query for the second drop down box ``` Dim faqs = (From f In db.faqs Where f.category = NewFaqDropDownCategory.SelectedValue) ``` Here you are comparing SelectedValue to category. Yet in the first combobox you said that the DataValueField should be category\_id. Try changing f.category to f.category\_id
607,931
I've deployed trac using apache/mod\_wsgi (no SSL) (preferable, since the problem I'm facing with CGI is performance), and it works fine WITHOUT SVN integration. But I actually need SVN, so when I configure the repository path (i.e: repository\_dir = c:/projects/svn/my\_project) I can't even get my project TRAC to even open any of its pages. On Mozilla Firefox shows a white page and on MS-IE shows a 'The page cannot be displayed' error as if the server has 'timed out'. I've tried with mod\_python (3.3.1) and the exact same problem happens. It works fine with CGI though. I've also tried disabeling SVN authentication, thinking it might be a authentication conflict (I'm using Apache Basic Auth). **Environment:** * Win 2000 Server SP 4; * Apache 2.2.10; * Python 2.5.2; * mod\_wsgi revision 1018 2.3, py25\_apache22; * Trac 0.12dev; * Subversion 1.5.3. **Configuration files:** * Apache httpd.conf excerpt: > > > ``` > WSGIScriptAlias /trac "c:/projects/apache/trac.wsgi" > > <Directory c:/projects/apache> > WSGIApplicationGroup %{GLOBAL} > Order deny,allow > Allow from all > </Directory> > > ``` > > * trac.wsgi: > > > ``` > import sys > sys.stdout = sys.stderr > > import os > os.environ['TRAC_ENV_PARENT_DIR'] = 'c:/projects/trac' > os.environ['PYTHON_EGG_CACHE'] = 'c:/projects/eggs' > > import trac.web.main > > application = trac.web.main.dispatch_request > > ``` > > * trac.ini excerpt: > > > ``` > repository_type = svn > repository_dir = c:/projects/svn/my_project > > ``` > > Any ideas???
2009/03/03
[ "https://Stackoverflow.com/questions/607931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/73377/" ]
Set a breakpoint on the line that reads NewFaqDropDownCategory.DataBind() and another in your event handler (NewFaqDropDownCategory\_SelectedIndexChanged). I suspect the databind is being called right before your NewFaqDropDownCategory\_SelectedIndexChanged event fires, causing your selected value to change. If so, you need to either make sure you only databind when you aren't in the middle of the autopostback, or, instead of using NewFaqDropDownCategory.SelectedIndex on the first line of your event handler, cast the sender parameter to a DropDownList and use its selected value.
I had the same problem. I found I had forgotten to check whether I was posting back to the page, and I was binding my DropDownList control in the `Page_Load` event of the page. I had forgotten to use: ``` if (!IsPostBack) { .... do databind .... } ```
67,366,722
I am having issue parsing an xml result using python. I tried using etree.Element(text), but the error says Invalid tag name. Does anyone know if this is actually an xml and any way of parsing the result using a standard package? Thank you! ``` import requests, sys, json from lxml import etree response = requests.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=snp&id=1593319917&report=XML") text=response.text print(text) <?xml version="1.0" ?> <ExchangeSet xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xmlns="https://www.ncbi.nlm.nih.gov/SNP/docsum" xsi:schemaLocation="https://www.ncbi.nlm.nih.gov/SNP/docsum ftp://ftp.ncbi.nlm.nih.gov/snp/specs/docsum_eutils.xsd" ><DocumentSummary uid="1593319917"><SNP_ID>1593319917</SNP_ID><ALLELE_ORIGIN/><GLOBAL_MAFS><MAF><STUDY>SGDP_PRJ</STUDY><FREQ>G=0.5/1</FREQ></MAF></GLOBAL_MAFS><GLOBAL_POPULATION/><GLOBAL_SAMPLESIZE>0</GLOBAL_SAMPLESIZE><SUSPECTED/><CLINICAL_SIGNIFICANCE/><GENES><GENE_E><NAME>FLT3</NAME><GENE_ID>2322</GENE_ID></GENE_E></GENES><ACC>NC_000013.11</ACC><CHR>13</CHR><HANDLE>SGDP_PRJ</HANDLE><SPDI>NC_000013.11:28102567:G:A</SPDI><FXN_CLASS>upstream_transcript_variant</FXN_CLASS><VALIDATED>by-frequency</VALIDATED><DOCSUM>HGVS=NC_000013.11:g.28102568G&gt;A,NC_000013.10:g.28676705G&gt;A,NG_007066.1:g.3001C&gt;T|SEQ=[G/A]|LEN=1|GENE=FLT3:2322</DOCSUM><TAX_ID>9606</TAX_ID><ORIG_BUILD>154</ORIG_BUILD><UPD_BUILD>154</UPD_BUILD><CREATEDATE>2020/04/27 06:19</CREATEDATE><UPDATEDATE>2020/04/27 06:19</UPDATEDATE><SS>3879653181</SS><ALLELE>R</ALLELE><SNP_CLASS>snv</SNP_CLASS><CHRPOS>13:28102568</CHRPOS><CHRPOS_PREV_ASSM>13:28676705</CHRPOS_PREV_ASSM><TEXT/><SNP_ID_SORT>1593319917</SNP_ID_SORT><CLINICAL_SORT>0</CLINICAL_SORT><CITED_SORT/><CHRPOS_SORT>0028102568</CHRPOS_SORT><MERGED_SORT>0</MERGED_SORT></DocumentSummary> </ExchangeSet> ```
2021/05/03
[ "https://Stackoverflow.com/questions/67366722", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14304103/" ]
You're using the wrong method to parse your XML. The `etree.Element` class is for *creating* a single XML element. For example: ``` >>> a = etree.Element('a') >>> a <Element a at 0x7f8c9040e180> >>> etree.tostring(a) b'<a/>' ``` As Jayvee has pointed out, to parse XML contained in a string you use the `etree.fromstring` method (to parse XML content in a file you would use the `etree.parse` method): ``` >>> response = requests.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=snp&id=1593319917&report=XML") >>> doc = etree.fromstring(response.text) >>> doc <Element {https://www.ncbi.nlm.nih.gov/SNP/docsum}ExchangeSet at 0x7f8c9040e180> >>> ``` Note that because this XML document sets a default namespace, you'll need to properly set namespaces when looking for elements. E.g., this will fail: ``` >>> doc.find('DocumentSummary') >>> ``` But this works: ``` >>> doc.find('docsum:DocumentSummary', {'docsum': 'https://www.ncbi.nlm.nih.gov/SNP/docsum'}) <Element {https://www.ncbi.nlm.nih.gov/SNP/docsum}DocumentSummary at 0x7f8c8e987200> ```
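Building on the namespace point above, a short sketch (same response document as in the question; the `d` prefix is an arbitrary choice) that pulls a few fields out of the `DocumentSummary`:

```python
import requests
from lxml import etree

NS = {"d": "https://www.ncbi.nlm.nih.gov/SNP/docsum"}
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
       "?db=snp&id=1593319917&report=XML")

# Parse the raw bytes; this also sidesteps encoding-declaration issues.
doc = etree.fromstring(requests.get(url).content)

summary = doc.find("d:DocumentSummary", NS)
print(summary.get("uid"))                           # 1593319917
print(summary.findtext("d:SPDI", namespaces=NS))    # NC_000013.11:28102567:G:A
print(summary.findtext("d:CHRPOS", namespaces=NS))  # 13:28102568
```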
You can check whether the XML is well formed by trying to parse it: ``` import requests from lxml import etree response = requests.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=snp&id=1593319917&report=XML") text = response.text try: doc = etree.fromstring(text) print("valid") except etree.XMLSyntaxError: print("not a valid xml") ```
73,142,014
I am using the following ETL pipeline to get data into BigQuery. The data sources are .csv & .xls files from a URL, posted daily at 3 pm. Cloud Scheduler publishes a message to a Cloud Pub/Sub topic at 3:05 pm. Pub/Sub pushes/triggers the subscribers (cloud functions). When triggered, these cloud functions (a Python script) download the file from the URL, perform transformations (cleaning, formatting, aggregations & filtering) and upload it to BigQuery. Is there a cleaner way in GCP to download files from a URL on a schedule, transform them & upload them to BigQuery, instead of using Cloud Scheduler + Pub/Sub + Cloud Functions? I researched Dataflow but can't figure out if it can do all three jobs (downloading from a URL on a schedule, transforming, and uploading to BQ).
2022/07/27
[ "https://Stackoverflow.com/questions/73142014", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12322651/" ]
In your architecture, Dataflow can only replace the Pub/Sub + Cloud Functions part. You still need a scheduler to run a Dataflow job (based on a template, maybe your own custom template). But before using Dataflow, ask why you need it. I'm in charge of a data lake that ingests data from different sources, and because each element to ingest is small enough to be kept in memory (of Cloud Run in my case, but it's very similar to Cloud Functions), there is no problem keeping that pattern if it works!!
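For reference, a minimal sketch of the pattern being defended here: one function that downloads, transforms in memory, and loads into BigQuery. The URL, dataset, and table names are placeholders, not anything from the question:

```python
import io

import pandas as pd
import requests
from google.cloud import bigquery

def ingest(event, context):
    """Pub/Sub-triggered Cloud Function (illustrative names throughout)."""
    raw = requests.get("https://example.com/daily.csv", timeout=60).content
    df = pd.read_csv(io.BytesIO(raw))
    # ... cleaning / formatting / aggregation / filtering ...
    client = bigquery.Client()
    client.load_table_from_dataframe(df, "my_dataset.my_table").result()
```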
I do this kind of thing all the time and I can see why you'd wonder if there's a cleaner way. We use Composer (Airflow) in GCP. In your scenario we would create one DAG with four sequential tasks (a hypothetical skeleton follows below): 1. Copy file from URL to local bucket 2. Load file from local bucket to stage table 3. Merge stage table into final destination table using a BigQuery SQL script 4. Clean up the staging table The Composer job would look something like this: [![enter image description here](https://i.stack.imgur.com/sHJQb.png)](https://i.stack.imgur.com/sHJQb.png) All the code required to load a table end to end is located in one DAG/folder. *You do need to pay for, and maintain, a Composer instance on GCP. Would be interesting to see how other companies do this kind of thing?*
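A hypothetical skeleton of that four-task DAG. Operator choices, bucket, and table names are all illustrative (Airflow 2.x with the Google provider package assumed), not the poster's actual code:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

def copy_url_to_bucket():
    ...  # download the daily file and upload it to gs://my-bucket/daily.csv

with DAG(
    dag_id="daily_file_to_bq",
    schedule_interval="5 15 * * *",  # 3:05 pm, matching the question
    start_date=datetime(2022, 7, 1),
    catchup=False,
) as dag:
    copy_file = PythonOperator(task_id="copy_file", python_callable=copy_url_to_bucket)
    load_stage = GCSToBigQueryOperator(
        task_id="load_stage",
        bucket="my-bucket",
        source_objects=["daily.csv"],
        destination_project_dataset_table="my_dataset.stage_table",
        write_disposition="WRITE_TRUNCATE",
    )
    merge = BigQueryInsertJobOperator(
        task_id="merge",
        configuration={"query": {"query": "CALL my_dataset.merge_stage()",
                                 "useLegacySql": False}},
    )
    cleanup = BigQueryInsertJobOperator(
        task_id="cleanup",
        configuration={"query": {"query": "TRUNCATE TABLE my_dataset.stage_table",
                                 "useLegacySql": False}},
    )
    copy_file >> load_stage >> merge >> cleanup
```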
34,405,936
When I try to do `python manage.py syncdb` in my Django app, I get the error **ImportError: No module named azure.storage.blob**. But the thing is, the following packages are installed if one does `pip freeze`: `azure-common==1.0.0 azure-mgmt==0.20.1 azure-mgmt-common==0.20.0 azure-mgmt-compute==0.20.0 azure-mgmt-network==0.20.1 azure-mgmt-nspkg==1.0.0 azure-mgmt-resource==0.20.1 azure-mgmt-storage==0.20.0 azure-nspkg==1.0.0 azure-servicebus==0.20.1 azure-servicemanagement-legacy==0.20.1 azure-storage==0.20.3` Clearly **azure-storage** is installed. Why is **azure.storage.blob** not available for import? I even went into my `.virtualenvs` directory, and got in all the way to `azure.storage.blob` (i.e. `~/.virtualenvs/myvirtualenv/local/lib/python2.7/site-packages/azure/storage/blob$`). It exists! What do I do? This answer here has not helped: [Install Azure Python api on linux: importError: No module named storage.blob](https://stackoverflow.com/questions/32787450/install-azure-python-api-on-linux-importerror-no-module-named-storage-blob) *Note: please ask for more information in case you need it*
2015/12/21
[ "https://Stackoverflow.com/questions/34405936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4936905/" ]
I had a similar issue. To alleviate that, I followed this discussion here: <https://github.com/Azure/azure-storage-python/issues/51#issuecomment-148151993> Basically, try `pip install azure==0.11.1` before trying `syncdb`, and I'm confident it will work for you!
There is a thread similar to yours; please check my answer on [Unable to use azure SDK in Python](https://stackoverflow.com/questions/34213764/unable-to-use-azure-sdk-in-python). Based on my experience, Python imports third-party packages from a set of library paths that you can inspect with `import sys` & `sys.path` in the Python interpreter. So you can try to dynamically add the path that contains the installed `azure` packages to `sys.path` at runtime to solve the issue. To add the new library path, just call `sys.path.append('<the new path you want to add>')` before the line `import azure`. If that has not helped, I suggest reinstalling the Python environment. On Ubuntu, you can use `sudo apt-get remove python python-pip` & `sudo apt-get install python python-pip` to reinstall `Python 2.7` & `pip 2.7`. (Note: the current major Linux distributions use Python 2.7 as the system default version.) If Python 3.4 is your runtime for Django, the apt package names for Ubuntu are `python3` and `python3-pip`, and you can use `sudo pip3 install azure` for `Python 3.4` on Ubuntu. Any concerns, please feel free to let me know.
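A quick diagnostic sketch of the `sys.path` idea above; the virtualenv path mirrors the one quoted in the question (with an assumed home directory), everything else is illustrative:

```python
import sys

# See where this interpreter actually looks for packages.
for p in sys.path:
    print(p)

# If the virtualenv's site-packages is missing, append it and retry the import.
sys.path.append('/home/me/.virtualenvs/myvirtualenv/local/lib/python2.7/site-packages')
import azure.storage.blob  # noqa: E402 -- should resolve now if the package lives there
```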
7,548,562
I've been getting weird results and I finally noticed that my habit of putting spaces in a tuple is causing the problem. If you can reproduce this problem and tell me why it works this way, you would be saving what's left of my hair. Thanks! ``` jcomeau@intrepid:/tmp$ cat haversine.py #!/usr/bin/python def dms_to_float(degrees): d, m, s, compass = degrees d, m, s = int(d), float(m), float(s) float_degrees = d + (m / 60) + (s / 3600) float_degrees *= [1, -1][compass in ['S', 'W', 'Sw']] return float_degrees jcomeau@intrepid:/tmp$ python Python 2.6.7 (r267:88850, Jun 13 2011, 22:03:32) [GCC 4.6.1 20110608 (prerelease)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from haversine import * >>> dms_to_float((111, 41, 0, 'SW')) 111.68333333333334 >>> dms_to_float((111,41,0,'Sw')) -111.68333333333334 ``` With spaces in the tuple, the answer is wrong. Without, the answer is correct.
2011/09/25
[ "https://Stackoverflow.com/questions/7548562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/493161/" ]
The spaces should make no difference. The difference is due to the case: `SW` vs `Sw`. You don't check for `SW` here: ``` compass in ['S', 'W', 'Sw']] ``` Perhaps change it to this: ``` compass.upper() in ['S', 'W', 'SW']] ```
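Applying that to the full function from the question, a sketch of the corrected version (same logic as the original, only the comparison changed):

```python
def dms_to_float(degrees):
    d, m, s, compass = degrees
    float_degrees = int(d) + float(m) / 60 + float(s) / 3600
    # Case-insensitive check, so 'SW', 'Sw' and 'sw' all behave the same.
    if compass.upper() in ('S', 'W', 'SW'):
        float_degrees = -float_degrees
    return float_degrees

print(dms_to_float((111, 41, 0, 'SW')))  # -111.68333333333334
print(dms_to_float((111, 41, 0, 'Sw')))  # -111.68333333333334
```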
Presuming that the "degrees" relate to degrees of latitude or longitude, I can't imagine why "SW" is treated as a viable option. Latitude is either N or S. Longitude is either E or W. Please explain. Based on your sample of size 1, user input is not to be trusted. Consider checking the input, or at least ensuring that bogus input will cause an exception to be raised. You appear to like one-liners; try this: ``` float_degrees *= {'n': 1, 's': -1, 'e': 1, 'w': -1}[compass.strip().lower()] ```
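A sketch of how the dict version rejects bogus input, per the point above; note it deliberately has no 'SW' entry, so compound directions blow up instead of passing silently:

```python
SIGN = {'n': 1, 's': -1, 'e': 1, 'w': -1}

def dms_to_float(degrees):
    d, m, s, compass = degrees
    value = int(d) + float(m) / 60 + float(s) / 3600
    # Raises KeyError for anything that is not exactly N, S, E or W.
    return value * SIGN[compass.strip().lower()]

print(dms_to_float((111, 41, 0, 'S')))   # -111.68333333333334
print(dms_to_float((111, 41, 0, 'Sw')))  # KeyError: 'sw'
```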
64,120,358
I am trying to make a simple 3D plot with plot\_surface of matplotlib; below is a minimal example: ``` import numpy as np import matplotlib.pyplot as plt from matplotlib import cm x_test = np.arange(0.001, 0.01, 0.0005) y_test = np.arange(0.1, 100, 0.05) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') Xtest, Ytest = np.meshgrid(x_test, y_test) Ztest = Xtest**-1 + Ytest surf = ax.plot_surface(Xtest, Ytest, Ztest, cmap=cm.plasma, alpha=1, antialiased=False) fig.colorbar(surf, shrink=0.5, aspect=5) ax.set_ylabel(r'$ Y $', fontsize=16) ax.set_xlabel(r'$ X $', fontsize=16) ax.set_zlabel(r'$ Z $', fontsize=16) ``` The result gives a strange colormap that does not represent the magnitude of the z scale, as you can see here: [3D plot result](https://i.stack.imgur.com/kjxl6.png). I mean, if you follow a straight line of constant Z, you don't see the same color. I've tried changing the `ccount` and `rcount` inside the `plot_surface` function and changing the interval of the Xtest or Ytest data; nothing helps. I've tried some suggestions from [here](https://stackoverflow.com/questions/46260682/how-to-set-the-scale-of-z-axis-equal-to-x-and-y-axises-in-python-plot-surface) and [here](https://stackoverflow.com/questions/46260682/how-to-set-the-scale-of-z-axis-equal-to-x-and-y-axises-in-python-plot-surface), but they seem unrelated. Could you help me solve this? It seems like a simple problem, but I couldn't solve it. Thanks! **Edit** I've added an example using the original equation, which I don't write here (it's complicated); please take a look: [Comparison](https://i.stack.imgur.com/7iVB0.png). While the left figure was done in Matlab (by my advisor), the right figure was made with matplotlib. You can clearly see that the plot on the left really makes sense: the brightest color is always at the maximum of the z-axis. Unfortunately, I don't know Matlab. I hope I can do it using Python. Hopefully this edit makes the problem clearer. **Edit 2** I'm sure this is not the best solution. That's why I put it here, not as an answer. As suggested by @swatchai, one can use `contour3D`. Since the surface is drawn as a set of lines, I can generate the correct result by plotting a lot of contour lines, using: ``` surf = ax.contour3D(Xtest, Ytest, Ztest, 500, cmap=cm.plasma, alpha=0.5, antialiased=False) ``` The colormap is correct, as you can see here: [alternative1](https://i.stack.imgur.com/x3Nt3.png) But the plot is very heavy. When you zoom in, it doesn't look good unless you increase the number of contours again. Any suggestions are welcome :).
2020/09/29
[ "https://Stackoverflow.com/questions/64120358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361102/" ]
It's possible to have 2 nested scroll viewers in Xamarin.Forms. Notice how it says scroll viewers ***should not*** be nested: this means that it is certainly possible, but it is not recommended. I think nested scroll viewers create a bad user experience and make for a clunky app, especially in Xamarin.Forms; but again, here is a demonstration of a nested scroll viewer: ``` <ScrollView Orientation="Horizontal"> <StackLayout Orientation="Vertical"> <Grid> </Grid> <ScrollView Orientation="Vertical"> <Grid> </Grid> </ScrollView> </StackLayout> </ScrollView> ``` I think once you've played around with nested scroll viewers it should be easy for you to implement what you want. I would recommend using a grid with column and row definitions with your desired measurements. If I've answered your question, please mark this as the correct answer. Thanks,
According to your screenshot, I made a code sample for your reference. **Xaml:** ``` <ScrollView Orientation="Vertical"> <StackLayout Orientation="Horizontal"> <BoxView BackgroundColor="Blue" WidthRequest="150" /> <StackLayout Orientation="Vertical"> <ScrollView Orientation="Horizontal" BackgroundColor="Accent"> <StackLayout Orientation="Horizontal"> <Label HeightRequest="50" Text="A1" WidthRequest="50" /> <Label HeightRequest="50" Text="A2" WidthRequest="50" /> <Label HeightRequest="50" Text="A3" WidthRequest="50" /> <Label HeightRequest="50" Text="A4" WidthRequest="50" /> <Label HeightRequest="50" Text="A5" WidthRequest="50" /> <Label HeightRequest="50" Text="A6" WidthRequest="50" /> <Label HeightRequest="50" Text="A7" WidthRequest="50" /> <Label HeightRequest="50" Text="A8" WidthRequest="50" /> <Label HeightRequest="50" Text="A9" WidthRequest="50" /> <Label HeightRequest="50" Text="A10" WidthRequest="50" /> </StackLayout> </ScrollView> <Label HeightRequest="150" Text="Elem1" BackgroundColor="Green"/> <Label HeightRequest="150" Text="Elem2" BackgroundColor="Green"/> <Label HeightRequest="150" Text="Elem3" BackgroundColor="Green"/> <Label HeightRequest="150" Text="Elem4" BackgroundColor="Green"/> <Label HeightRequest="150" Text="Elem5" BackgroundColor="Green"/> <ScrollView Orientation="Horizontal"> <StackLayout Orientation="Horizontal"> <Label HeightRequest="50" Text="B1" WidthRequest="50" /> <Label HeightRequest="50" Text="B2" WidthRequest="50" /> <Label HeightRequest="50" Text="B3" WidthRequest="50" /> <Label HeightRequest="50" Text="B4" WidthRequest="50" /> <Label HeightRequest="50" Text="B5" WidthRequest="50" /> <Label HeightRequest="50" Text="B6" WidthRequest="50" /> <Label HeightRequest="50" Text="B7" WidthRequest="50" /> <Label HeightRequest="50" Text="B8" WidthRequest="50" /> <Label HeightRequest="50" Text="B9" WidthRequest="50" /> <Label HeightRequest="50" Text="B10" WidthRequest="50" /> </StackLayout> </ScrollView> </StackLayout> </StackLayout> </ScrollView> ``` **Screenshot:** [![enter image description here](https://i.stack.imgur.com/XyRJC.gif)](https://i.stack.imgur.com/XyRJC.gif)
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
**Shortcuts**: * `Shift` + `Command` + `L`: Show Library. * `Shift` + `Command` + `M`: Show Media Library. --- Xcode 10 has added a toolbar button to access the Object Library. [![enter image description here](https://i.stack.imgur.com/3J26u.png)](https://i.stack.imgur.com/3J26u.png) From a [thread](https://forums.developer.apple.com/thread/103777) on Apple Developer Forum: > > Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the Option key before dragging will keep the library open for an additional drag. > > > The library can be opened via a new toolbar button, the `View > Libraries` menu, or the ⇧⌘L keyboard shortcut. Content dynamically matches the active editor, so the same UI provides access to code snippets, Interface Builder, SpriteKit, or SceneKit items. The media library is available via a long press on the toolbar button, the `View > Libraries` menu, or the ⇧⌘M keyboard shortcut. (37318979, 39885726) > > >
The library can be opened via a new toolbar button, the View → Libraries menu, or the `Shift` + `Command` + `L` keyboard shortcut. The media library is available via a long press on the toolbar button, the View → Libraries menu, or the `Shift` + `Command` + `M` keyboard shortcut. Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the `Option` key before dragging will keep the library open for an additional drag.
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
**Xcode 11 - Object library location** Click on the plus icon on the top right corner of the Xcode top bar. [![enter image description here](https://i.stack.imgur.com/UdxJz.jpg)](https://i.stack.imgur.com/UdxJz.jpg)
In Xcode 11 use Shift + Command + L to show the Object Library.
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
**Shortcuts**: * `Shift` + `Command` + `L`: Show Library. * `Shift` + `Command` + `M`: Show Media Library. --- Xcode 10 has added a toolbar button to access the Object Library. [![enter image description here](https://i.stack.imgur.com/3J26u.png)](https://i.stack.imgur.com/3J26u.png) From a [thread](https://forums.developer.apple.com/thread/103777) on Apple Developer Forum: > > Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the Option key before dragging will keep the library open for an additional drag. > > > The library can be opened via a new toolbar button, the `View > Libraries` menu, or the ⇧⌘L keyboard shortcut. Content dynamically matches the active editor, so the same UI provides access to code snippets, Interface Builder, SpriteKit, or SceneKit items. The media library is available via a long press on the toolbar button, the `View > Libraries` menu, or the ⇧⌘M keyboard shortcut. (37318979, 39885726) > > >
**Xcode 11 - Object library location** Click on the plus icon on the top right corner of the Xcode top bar. [![enter image description here](https://i.stack.imgur.com/UdxJz.jpg)](https://i.stack.imgur.com/UdxJz.jpg)
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
**Xcode 11 - Object library location** Click on the plus icon on the top right corner of the Xcode top bar. [![enter image description here](https://i.stack.imgur.com/UdxJz.jpg)](https://i.stack.imgur.com/UdxJz.jpg)
The library can be opened via a new toolbar button, the View → Libraries menu, or the `Shift` + `Command` + `L` keyboard shortcut. The media library is available via a long press on the toolbar button, the View → Libraries menu, or the `Shift` + `Command` + `M` keyboard shortcut. Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the `Option` key before dragging will keep the library open for an additional drag.
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
What the existing answers (so far) neglect to mention is that if you hold `Option` as you summon the Library window — i.e., press `Shift` + `Option` + `Command` + `L`, or hold `Option` while clicking the Library button in the toolbar — the window stays open, *permanently*, until you explicitly close it with its Close button. [![enter image description here](https://i.stack.imgur.com/7A6vu.png)](https://i.stack.imgur.com/7A6vu.png) It is not incorporated (docked) into the current project window, but it can be used in any project. The point is that it becomes *almost* a normal window (to be precise, it becomes a normal *floating* window).
In Xcode 11 use Shift + Command + L to show the Object Library.
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
What the existing answers (so far) neglect to mention is that if you hold `Option` as you summon the Library window — i.e., press `Shift` + `Option` + `Command` + `L`, or hold `Option` while clicking the Library button in the toolbar — the window stays open, *permanently*, until you explicitly close it with its Close button. [![enter image description here](https://i.stack.imgur.com/7A6vu.png)](https://i.stack.imgur.com/7A6vu.png) It is not incorporated (docked) into the current project window, but it can be used in any project. The point is that it becomes *almost* a normal window (to be precise, it becomes a normal *floating* window).
Xcode 12 users can find the same option as in Xcode 11, as written above: `Shift` `Command` `L` brings up the Objects/Image/Color and other context-sensitive libraries. There is also the `+` sign at the top right of the window titled Library when you mouse over. This drove me crazy trying to follow a tutorial that was likely written for Xcode 10. Thanks to the rest of those that answered!
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
**Shortcuts**: * `Shift` + `Command` + `L`: Show Library. * `Shift` + `Command` + `M`: Show Media Library. --- Xcode 10 has added a toolbar button to access the Object Library. [![enter image description here](https://i.stack.imgur.com/3J26u.png)](https://i.stack.imgur.com/3J26u.png) From a [thread](https://forums.developer.apple.com/thread/103777) on Apple Developer Forum: > > Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the Option key before dragging will keep the library open for an additional drag. > > > The library can be opened via a new toolbar button, the `View > Libraries` menu, or the ⇧⌘L keyboard shortcut. Content dynamically matches the active editor, so the same UI provides access to code snippets, Interface Builder, SpriteKit, or SceneKit items. The media library is available via a long press on the toolbar button, the `View > Libraries` menu, or the ⇧⌘M keyboard shortcut. (37318979, 39885726) > > >
Xcode 12 users can find the same option as in Xcode 11, as written above: `Shift` `Command` `L` brings up the Objects/Image/Color and other context-sensitive libraries. There is also the `+` sign at the top right of the window titled Library when you mouse over. This drove me crazy trying to follow a tutorial that was likely written for Xcode 10. Thanks to the rest of those that answered!
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
**Xcode 11 - Object library location** Click on the plus icon on the top right corner of the Xcode top bar. [![enter image description here](https://i.stack.imgur.com/UdxJz.jpg)](https://i.stack.imgur.com/UdxJz.jpg)
Xcode 12 users can find the same option as in Xcode 11, as written above: `Shift` `Command` `L` brings up the Objects/Image/Color and other context-sensitive libraries. There is also the `+` sign at the top right of the window titled Library when you mouse over. This drove me crazy trying to follow a tutorial that was likely written for Xcode 10. Thanks to the rest of those that answered!
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
**Shortcuts**: * `Shift` + `Command` + `L`: Show Library. * `Shift` + `Command` + `M`: Show Media Library. --- Xcode 10 has added a toolbar button to access the Object Library. [![enter image description here](https://i.stack.imgur.com/3J26u.png)](https://i.stack.imgur.com/3J26u.png) From a [thread](https://forums.developer.apple.com/thread/103777) on Apple Developer Forum: > > Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the Option key before dragging will keep the library open for an additional drag. > > > The library can be opened via a new toolbar button, the `View > Libraries` menu, or the ⇧⌘L keyboard shortcut. Content dynamically matches the active editor, so the same UI provides access to code snippets, Interface Builder, SpriteKit, or SceneKit items. The media library is available via a long press on the toolbar button, the `View > Libraries` menu, or the ⇧⌘M keyboard shortcut. (37318979, 39885726) > > >
In Xcode 11 use Shift + Command + L to show the Object Library.
50,962,836
I am new to python-pptx, but I am familiar with its basic workings. I have searched a lot, but I could not find a way to replace a particular text with another text across all slides. That text may be in any text\_frame of a slide. For example, if all slides in a ppt contain the keyword 'java', I want to change it to 'python' using python-pptx: ``` for slide in ppt.slides: for shape in slide.shapes: if shape.has_text_frame: # do something with shape.text_frame ```
2018/06/21
[ "https://Stackoverflow.com/questions/50962836", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8847530/" ]
The library can be opened via a new toolbar button, the View → Libraries menu, or the `Shift` + `Command` + `L` keyboard shortcut. The media library is available via a long press on the toolbar button, the View → Libraries menu, or the `Shift` + `Command` + `M` keyboard shortcut. Library content has moved from the bottom of the Inspector area to an overlay window, which can be moved and resized like Spotlight search. It dismisses once items are dragged, but holding the `Option` key before dragging will keep the library open for an additional drag.
Xcode 12 users can find the same option as in Xcode 11, as written above: `Shift` `Command` `L` brings up the Objects/Image/Color and other context-sensitive libraries. There is also the `+` sign at the top right of the window titled Library when you mouse over. This drove me crazy trying to follow a tutorial that was likely written for Xcode 10. Thanks to the rest of those that answered!
35,678,885
I am following an online course for making web apps with python/mongo/bootstrap. I install mongodb using default settings I run mongod in powershell from install directory ``` C:\Program Files\MongoDB\Server\3.2\bin>mongod 2016-02-27T23:31:14.684-0500 I CONTROL [initandlisten] MongoDB starting : pid=2456 port=27017 dbpath=C:\data\db\ 64-bit host=DESKTOP-8LMCN7R 2016-02-27T23:31:14.686-0500 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2 2016-02-27T23:31:14.687-0500 I CONTROL [initandlisten] db version v3.2.3 2016-02-27T23:31:14.689-0500 I CONTROL [initandlisten] git version: b326ba837cf6f49d65c2f85e1b70f6f31ece7937 2016-02-27T23:31:14.690-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015 2016-02-27T23:31:14.692-0500 I CONTROL [initandlisten] allocator: tcmalloc 2016-02-27T23:31:14.693-0500 I CONTROL [initandlisten] modules: none 2016-02-27T23:31:14.704-0500 I CONTROL [initandlisten] build environment: 2016-02-27T23:31:14.706-0500 I CONTROL [initandlisten] distmod: 2008plus-ssl 2016-02-27T23:31:14.707-0500 I CONTROL [initandlisten] distarch: x86_64 2016-02-27T23:31:14.708-0500 I CONTROL [initandlisten] target_arch: x86_64 2016-02-27T23:31:14.709-0500 I CONTROL [initandlisten] options: {} 2016-02-27T23:31:14.712-0500 I - [initandlisten] Detected data files in C:\data\db\ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'. 2016-02-27T23:31:14.715-0500 W - [initandlisten] Detected unclean shutdown - C:\data\db\mongod.lock is not empty. 2016-02-27T23:31:14.718-0500 W STORAGE [initandlisten] Recovering data from the last clean checkpoint. 2016-02-27T23:31:14.720-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), 2016-02-27T23:31:14.954-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker 2016-02-27T23:31:14.954-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/data/db/diagnostic.data' 2016-02-27T23:31:14.962-0500 I NETWORK [initandlisten] waiting for connections on port 27017 2016-02-27T23:31:15.004-0500 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. 
OK 2016-02-27T23:32:10.255-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50834 #1 (1 connection now open) ``` I run mongo in a different powershell from install directory ``` C:\Program Files\MongoDB\Server\3.2\bin>mongo MongoDB shell version: 3.2.3 connecting to: test ``` I confirm that there are some values in my db ``` > show dbs fullstack 0.000GB local 0.000GB > use fullstack switched to db fullstack > show collections students > db.students.find({}) { "_id" : ObjectId("56d1b951d14d4af940b77a14"), "name" : "Jose", "mark" : "99" } { "_id" : ObjectId("56d211e8d14d4af940b77a16"), "name" : "Jose", "Mark" : 99 } { "_id" : ObjectId("56d26bedc83f1574076c732b"), "name" : "Jose", "mark" : 99 } { "_id" : ObjectId("56d26c0fc83f1574076c732c"), "name" : "Kris", "mark" : 69 } { "_id" : ObjectId("56d271b1c83f1574076c732d"), "name" : "Kelly", "Grade" : 99 } ``` I run this code in pyCharm: ``` import pymongo uri="mongodb://127.0.0.1:27017" client = pymongo.MongoClient(uri) database = client['fullstack'] collection = database['students'] students = collection.find({}) for student in students: print(students) ``` but the interpreter returns nothing. No errors. It just doesn't return anything. The console does not even return a cursor dump. What am I doing wrong?
2016/02/28
[ "https://Stackoverflow.com/questions/35678885", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5885288/" ]
The easiest way to get a variable by name is to search for it in the [`tf.global_variables()`](https://www.tensorflow.org/api_docs/python/tf/global_variables) collection: ``` var_23 = [v for v in tf.global_variables() if v.name == "Variable_23:0"][0] ``` This works well for ad hoc reuse of existing variables. A more structured approach—for when you want to share variables between multiple parts of a model—is covered in the [Sharing Variables tutorial](https://www.tensorflow.org/versions/r0.7/how_tos/variable_scope/index.html#sharing-variables).
If you want to get any stored variable from a model, use `tf.train.load_variable("model_folder_name", "variable_name")`.
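A short sketch of that checkpoint-reading route; the folder and variable names below are illustrative, not anything from the question:

```python
import tensorflow as tf

# Reads the saved value straight from a checkpoint directory (or file),
# returning it as a NumPy array -- no session or graph needed.
value = tf.train.load_variable("./my_model_dir", "Variable_23")
print(value.shape, value.dtype)
```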
35,678,885
I am following an online course for making web apps with python/mongo/bootstrap. I install mongodb using default settings I run mongod in powershell from install directory ``` C:\Program Files\MongoDB\Server\3.2\bin>mongod 2016-02-27T23:31:14.684-0500 I CONTROL [initandlisten] MongoDB starting : pid=2456 port=27017 dbpath=C:\data\db\ 64-bit host=DESKTOP-8LMCN7R 2016-02-27T23:31:14.686-0500 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2 2016-02-27T23:31:14.687-0500 I CONTROL [initandlisten] db version v3.2.3 2016-02-27T23:31:14.689-0500 I CONTROL [initandlisten] git version: b326ba837cf6f49d65c2f85e1b70f6f31ece7937 2016-02-27T23:31:14.690-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015 2016-02-27T23:31:14.692-0500 I CONTROL [initandlisten] allocator: tcmalloc 2016-02-27T23:31:14.693-0500 I CONTROL [initandlisten] modules: none 2016-02-27T23:31:14.704-0500 I CONTROL [initandlisten] build environment: 2016-02-27T23:31:14.706-0500 I CONTROL [initandlisten] distmod: 2008plus-ssl 2016-02-27T23:31:14.707-0500 I CONTROL [initandlisten] distarch: x86_64 2016-02-27T23:31:14.708-0500 I CONTROL [initandlisten] target_arch: x86_64 2016-02-27T23:31:14.709-0500 I CONTROL [initandlisten] options: {} 2016-02-27T23:31:14.712-0500 I - [initandlisten] Detected data files in C:\data\db\ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'. 2016-02-27T23:31:14.715-0500 W - [initandlisten] Detected unclean shutdown - C:\data\db\mongod.lock is not empty. 2016-02-27T23:31:14.718-0500 W STORAGE [initandlisten] Recovering data from the last clean checkpoint. 2016-02-27T23:31:14.720-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), 2016-02-27T23:31:14.954-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker 2016-02-27T23:31:14.954-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/data/db/diagnostic.data' 2016-02-27T23:31:14.962-0500 I NETWORK [initandlisten] waiting for connections on port 27017 2016-02-27T23:31:15.004-0500 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. 
OK 2016-02-27T23:32:10.255-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50834 #1 (1 connection now open) ``` I run mongo in a different powershell from install directory ``` C:\Program Files\MongoDB\Server\3.2\bin>mongo MongoDB shell version: 3.2.3 connecting to: test ``` I confirm that there are some values in my db ``` > show dbs fullstack 0.000GB local 0.000GB > use fullstack switched to db fullstack > show collections students > db.students.find({}) { "_id" : ObjectId("56d1b951d14d4af940b77a14"), "name" : "Jose", "mark" : "99" } { "_id" : ObjectId("56d211e8d14d4af940b77a16"), "name" : "Jose", "Mark" : 99 } { "_id" : ObjectId("56d26bedc83f1574076c732b"), "name" : "Jose", "mark" : 99 } { "_id" : ObjectId("56d26c0fc83f1574076c732c"), "name" : "Kris", "mark" : 69 } { "_id" : ObjectId("56d271b1c83f1574076c732d"), "name" : "Kelly", "Grade" : 99 } ``` I run this code in pyCharm: ``` import pymongo uri="mongodb://127.0.0.1:27017" client = pymongo.MongoClient(uri) database = client['fullstack'] collection = database['students'] students = collection.find({}) for student in students: print(students) ``` but the interpreter returns nothing. No errors. It just doesn't return anything. The console does not even return a cursor dump. What am I doing wrong?
2016/02/28
[ "https://Stackoverflow.com/questions/35678885", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5885288/" ]
The easiest way to get a variable by name is to search for it in the [`tf.global_variables()`](https://www.tensorflow.org/api_docs/python/tf/global_variables) collection: ``` var_23 = [v for v in tf.global_variables() if v.name == "Variable_23:0"][0] ``` This works well for ad hoc reuse of existing variables. A more structured approach—for when you want to share variables between multiple parts of a model—is covered in the [Sharing Variables tutorial](https://www.tensorflow.org/versions/r0.7/how_tos/variable_scope/index.html#sharing-variables).
Based on @mrry's answer, I think it would be better to create and use the following function, since there are also local variables, and other variables that are not among the global variables (they live in different collections): ``` def get_var_by_name(query_name, var_list): """ Get a Variable by name, e.g. local_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES) the_var = get_var_by_name('accuracy/total:0', local_vars) """ target_var = None for var in var_list: if var.name == query_name: target_var = var break return target_var ```
35,678,885
I am following an online course for making web apps with python/mongo/bootstrap. I install mongodb using default settings I run mongod in powershell from install directory ``` C:\Program Files\MongoDB\Server\3.2\bin>mongod 2016-02-27T23:31:14.684-0500 I CONTROL [initandlisten] MongoDB starting : pid=2456 port=27017 dbpath=C:\data\db\ 64-bit host=DESKTOP-8LMCN7R 2016-02-27T23:31:14.686-0500 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2 2016-02-27T23:31:14.687-0500 I CONTROL [initandlisten] db version v3.2.3 2016-02-27T23:31:14.689-0500 I CONTROL [initandlisten] git version: b326ba837cf6f49d65c2f85e1b70f6f31ece7937 2016-02-27T23:31:14.690-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015 2016-02-27T23:31:14.692-0500 I CONTROL [initandlisten] allocator: tcmalloc 2016-02-27T23:31:14.693-0500 I CONTROL [initandlisten] modules: none 2016-02-27T23:31:14.704-0500 I CONTROL [initandlisten] build environment: 2016-02-27T23:31:14.706-0500 I CONTROL [initandlisten] distmod: 2008plus-ssl 2016-02-27T23:31:14.707-0500 I CONTROL [initandlisten] distarch: x86_64 2016-02-27T23:31:14.708-0500 I CONTROL [initandlisten] target_arch: x86_64 2016-02-27T23:31:14.709-0500 I CONTROL [initandlisten] options: {} 2016-02-27T23:31:14.712-0500 I - [initandlisten] Detected data files in C:\data\db\ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'. 2016-02-27T23:31:14.715-0500 W - [initandlisten] Detected unclean shutdown - C:\data\db\mongod.lock is not empty. 2016-02-27T23:31:14.718-0500 W STORAGE [initandlisten] Recovering data from the last clean checkpoint. 2016-02-27T23:31:14.720-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), 2016-02-27T23:31:14.954-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker 2016-02-27T23:31:14.954-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/data/db/diagnostic.data' 2016-02-27T23:31:14.962-0500 I NETWORK [initandlisten] waiting for connections on port 27017 2016-02-27T23:31:15.004-0500 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. 
OK 2016-02-27T23:32:10.255-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50834 #1 (1 connection now open) ``` I run mongo in a different powershell from install directory ``` C:\Program Files\MongoDB\Server\3.2\bin>mongo MongoDB shell version: 3.2.3 connecting to: test ``` I confirm that there are some values in my db ``` > show dbs fullstack 0.000GB local 0.000GB > use fullstack switched to db fullstack > show collections students > db.students.find({}) { "_id" : ObjectId("56d1b951d14d4af940b77a14"), "name" : "Jose", "mark" : "99" } { "_id" : ObjectId("56d211e8d14d4af940b77a16"), "name" : "Jose", "Mark" : 99 } { "_id" : ObjectId("56d26bedc83f1574076c732b"), "name" : "Jose", "mark" : 99 } { "_id" : ObjectId("56d26c0fc83f1574076c732c"), "name" : "Kris", "mark" : 69 } { "_id" : ObjectId("56d271b1c83f1574076c732d"), "name" : "Kelly", "Grade" : 99 } ``` I run this code in pyCharm: ``` import pymongo uri="mongodb://127.0.0.1:27017" client = pymongo.MongoClient(uri) database = client['fullstack'] collection = database['students'] students = collection.find({}) for student in students: print(students) ``` but the interpreter returns nothing. No errors. It just doesn't return anything. The console does not even return a cursor dump. What am I doing wrong?
2016/02/28
[ "https://Stackoverflow.com/questions/35678885", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5885288/" ]
The `get_variable()` function creates a new variable or returns one created earlier by `get_variable()`. It won't return a variable created using `tf.Variable()`. Here's a quick example: ``` >>> with tf.variable_scope("foo"): ... bar1 = tf.get_variable("bar", (2,3)) # create ... >>> with tf.variable_scope("foo", reuse=True): ... bar2 = tf.get_variable("bar") # reuse ... >>> with tf.variable_scope("", reuse=True): # root variable scope ... bar3 = tf.get_variable("foo/bar") # reuse (equivalent to the above) ... >>> (bar1 is bar2) and (bar2 is bar3) True ``` If you did not create the variable using `tf.get_variable()`, you have a couple options. First, you can use `tf.global_variables()` (as @mrry suggests): ``` >>> bar1 = tf.Variable(0.0, name="bar") >>> bar2 = [var for var in tf.global_variables() if var.op.name=="bar"][0] >>> bar1 is bar2 True ``` Or you can use `tf.get_collection()` like so: ``` >>> bar1 = tf.Variable(0.0, name="bar") >>> bar2 = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="bar")[0] >>> bar1 is bar2 True ``` **Edit** You can also use `get_tensor_by_name()`: ``` >>> bar1 = tf.Variable(0.0, name="bar") >>> graph = tf.get_default_graph() >>> bar2 = graph.get_tensor_by_name("bar:0") >>> bar1 is bar2 False # bar2 is a Tensor obtained through convert_to_tensor on bar1; it equals bar1 in value but is not the same object. ``` Recall that a tensor is the output of an operation. It has the same name as the operation, plus `:0`. If the operation has multiple outputs, they have the same name as the operation plus `:0`, `:1`, `:2`, and so on.
If you want to get any stored variables from a model, use `tf.train.load_variable("model_folder_name", "Variable name")`.
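For example, a minimal sketch (the checkpoint directory and variable name here are placeholders, and note that `tf.train.load_variable()` returns the stored value as a NumPy array, not a `tf.Variable` object):

```
>>> import tensorflow as tf
>>> # "model_folder_name" and "bar" are hypothetical; substitute your checkpoint dir and variable name
>>> value = tf.train.load_variable("model_folder_name", "bar")
>>> value
0.0
```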
35,678,885
I am following an online course for making web apps with python/mongo/bootstrap. I installed MongoDB using the default settings, and I run mongod in PowerShell from the install directory: ```
C:\Program Files\MongoDB\Server\3.2\bin>mongod
2016-02-27T23:31:14.684-0500 I CONTROL [initandlisten] MongoDB starting : pid=2456 port=27017 dbpath=C:\data\db\ 64-bit host=DESKTOP-8LMCN7R
2016-02-27T23:31:14.686-0500 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2016-02-27T23:31:14.687-0500 I CONTROL [initandlisten] db version v3.2.3
2016-02-27T23:31:14.689-0500 I CONTROL [initandlisten] git version: b326ba837cf6f49d65c2f85e1b70f6f31ece7937
2016-02-27T23:31:14.690-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1p-fips 9 Jul 2015
2016-02-27T23:31:14.692-0500 I CONTROL [initandlisten] allocator: tcmalloc
2016-02-27T23:31:14.693-0500 I CONTROL [initandlisten] modules: none
2016-02-27T23:31:14.704-0500 I CONTROL [initandlisten] build environment:
2016-02-27T23:31:14.706-0500 I CONTROL [initandlisten] distmod: 2008plus-ssl
2016-02-27T23:31:14.707-0500 I CONTROL [initandlisten] distarch: x86_64
2016-02-27T23:31:14.708-0500 I CONTROL [initandlisten] target_arch: x86_64
2016-02-27T23:31:14.709-0500 I CONTROL [initandlisten] options: {}
2016-02-27T23:31:14.712-0500 I - [initandlisten] Detected data files in C:\data\db\ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2016-02-27T23:31:14.715-0500 W - [initandlisten] Detected unclean shutdown - C:\data\db\mongod.lock is not empty.
2016-02-27T23:31:14.718-0500 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2016-02-27T23:31:14.720-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-02-27T23:31:14.954-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-02-27T23:31:14.954-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/data/db/diagnostic.data'
2016-02-27T23:31:14.962-0500 I NETWORK [initandlisten] waiting for connections on port 27017
2016-02-27T23:31:15.004-0500 I FTDC [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost.
OK
2016-02-27T23:32:10.255-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50834 #1 (1 connection now open)
```

I run mongo in a different PowerShell from the install directory:

```
C:\Program Files\MongoDB\Server\3.2\bin>mongo
MongoDB shell version: 3.2.3
connecting to: test
```

I confirm that there are some values in my db:

```
> show dbs
fullstack 0.000GB
local     0.000GB
> use fullstack
switched to db fullstack
> show collections
students
> db.students.find({})
{ "_id" : ObjectId("56d1b951d14d4af940b77a14"), "name" : "Jose", "mark" : "99" }
{ "_id" : ObjectId("56d211e8d14d4af940b77a16"), "name" : "Jose", "Mark" : 99 }
{ "_id" : ObjectId("56d26bedc83f1574076c732b"), "name" : "Jose", "mark" : 99 }
{ "_id" : ObjectId("56d26c0fc83f1574076c732c"), "name" : "Kris", "mark" : 69 }
{ "_id" : ObjectId("56d271b1c83f1574076c732d"), "name" : "Kelly", "Grade" : 99 }
```

I run this code in PyCharm:

```
import pymongo

uri = "mongodb://127.0.0.1:27017"
client = pymongo.MongoClient(uri)
database = client['fullstack']
collection = database['students']
students = collection.find({})
for student in students:
    print(students)
```

But the interpreter returns nothing. No errors. It just doesn't return anything. The console does not even return a cursor dump. What am I doing wrong?
2016/02/28
[ "https://Stackoverflow.com/questions/35678885", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5885288/" ]
The `get_variable()` function creates a new variable or returns one created earlier by `get_variable()`. It won't return a variable created using `tf.Variable()`. Here's a quick example:

```
>>> with tf.variable_scope("foo"):
...     bar1 = tf.get_variable("bar", (2,3)) # create
...
>>> with tf.variable_scope("foo", reuse=True):
...     bar2 = tf.get_variable("bar") # reuse
...
>>> with tf.variable_scope("", reuse=True): # root variable scope
...     bar3 = tf.get_variable("foo/bar") # reuse (equivalent to the above)
...
>>> (bar1 is bar2) and (bar2 is bar3)
True
```

If you did not create the variable using `tf.get_variable()`, you have a couple of options. First, you can use `tf.global_variables()` (as @mrry suggests):

```
>>> bar1 = tf.Variable(0.0, name="bar")
>>> bar2 = [var for var in tf.global_variables() if var.op.name=="bar"][0]
>>> bar1 is bar2
True
```

Or you can use `tf.get_collection()` like so:

```
>>> bar1 = tf.Variable(0.0, name="bar")
>>> bar2 = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="bar")[0]
>>> bar1 is bar2
True
```

**Edit**

You can also use `get_tensor_by_name()`:

```
>>> bar1 = tf.Variable(0.0, name="bar")
>>> graph = tf.get_default_graph()
>>> bar2 = graph.get_tensor_by_name("bar:0")
>>> bar1 is bar2
False  # bar2 is a Tensor obtained through convert_to_tensor on bar1, but it equals bar1 in value
```

Recall that a tensor is the output of an operation. It has the same name as the operation, plus `:0`. If the operation has multiple outputs, they have the same name as the operation plus `:0`, `:1`, `:2`, and so on.
Based on @mrry's answer, I think it would be better to create and use the following function, since there are also local variables, and other variables that are not among the global variables (they live in different collections):

```
def get_var_by_name(query_name, var_list):
    """
    Get a Variable by name, e.g.
    local_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES)
    the_var = get_var_by_name('accuracy/total:0', local_vars)
    """
    target_var = None
    for var in var_list:
        if var.name == query_name:
            target_var = var
            break
    return target_var
```
54,404,263
I have a python list A and a python list B with words as list elements. I need to check how often the list elements from list B are contained in list A. Is there a Python method for this, or how can I implement it efficiently? The python intersection method only tells me that a list element from list B occurs in list A, but not how often.
2019/01/28
[ "https://Stackoverflow.com/questions/54404263", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10478079/" ]
```
import collections

# Count each word in A once; a Counter returns 0 for words that never occur
counter_A = collections.Counter(A)

for word in B:
    print(word, '->', counter_A[word])
```
You could convert list B to a set, so that checking whether an element is in B is faster. Then create a dictionary to count the number of times each element appears in A, provided the element is also in the set of B (a sketch of this approach is shown below). As mentioned in the comments, `collections.Counter` does the "heavy lifting" for you.
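A minimal sketch of that approach (the list contents here are made up purely for illustration):

```
A = ['apple', 'banana', 'apple', 'cherry', 'banana', 'apple']
B = ['apple', 'banana', 'mango']

B_set = set(B)  # set membership checks are O(1) on average
counts = {}
for word in A:
    if word in B_set:
        counts[word] = counts.get(word, 0) + 1

print(counts)  # {'apple': 3, 'banana': 2}
```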
54,404,263
I have a python list A and a python list B with words as list elements. I need to check how often the list elements from list B are contained in list A. Is there a Python method for this, or how can I implement it efficiently? The python intersection method only tells me that a list element from list B occurs in list A, but not how often.
2019/01/28
[ "https://Stackoverflow.com/questions/54404263", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10478079/" ]
```
import collections

# Count each word in A once; a Counter returns 0 for words that never occur
counter_A = collections.Counter(A)

for word in B:
    print(word, '->', counter_A[word])
```
> > The python intersection method only tells me that a list element from > list B occurs in list A, but not how often. > > > Correct, but you can build on this principle. Iterate over the intersection using a dictionary comprehension: ``` from collections import Counter A = [1, 2, 2, 2, 3, 3, 4, 4, 4, 5] B = [2, 3, 5, 6] c = Counter(A) res = {k: c[k] for k in c.keys() & B} # {2: 3, 3: 2, 5: 1} ``` A view of dictionary keys, i.e. `dict.keys` isn't strictly a `set`, but it has [`set`-like properties](https://docs.python.org/3.7/library/stdtypes.html#typesmapping) such as `&` (intersection): > > Keys views are set-like since their entries are unique and hashable. > > >
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
I'm having a similar need and I read that the current Pandas version supports a directory path as the argument for the [read\_parquet](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.html) function. So you can read multiple parquet files like this:

```
import pandas as pd
df = pd.read_parquet('path/to/the/parquet/files/directory')
```

It concatenates everything into a single dataframe, so you can convert it to a csv right after:

```
df.to_csv('csv_file.csv')
```

Make sure you have the following dependencies according to the doc:

* pyarrow
* fastparquet
If you are going to copy the files over to your local machine and run your code, you could do something like this. The code below assumes that you are running your code in the same directory as the parquet files. It also assumes the naming of files is as you provided above: "par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder." If you need to search for your files then you will need to get the file names using `glob` and explicitly provide the path where you want to save the csv: `open(r'this\is\your\path\to\csv_file.csv', 'a')` Hope this helps.

```
import pandas as pd

# Create an empty csv file and write the first parquet file with headers
# (the with statement closes the file automatically)
with open('csv_file.csv','w') as csv_file:
    print('Reading par_file1.parquet')
    df = pd.read_parquet('par_file1.parquet')
    df.to_csv(csv_file, index=False)
    print('par_file1.parquet appended to csv_file.csv\n')

# create your file names and append to an empty list to look for in the current directory
files = []
for i in range(2,101):
    files.append(f'par_file{i}.parquet')

# open files and append to csv_file.csv
for f in files:
    print(f'Reading {f}')
    df = pd.read_parquet(f)
    with open('csv_file.csv','a') as file:
        df.to_csv(file, header=False, index=False)
        print(f'{f} appended to csv_file.csv\n')
```

You can remove the print statements if you want. Tested in `python 3.6` using `pandas 0.23.3`
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
I ran into this question looking to see if pandas can natively read partitioned parquet datasets. I have to say that the current answer is unnecessarily verbose (making it difficult to parse). I also imagine that it's not particularly efficient to be constantly opening/closing file handles then scanning to the end of them depending on the size. A better alternative would be to read all the parquet files into a single DataFrame, and write it once: ``` from pathlib import Path import pandas as pd data_dir = Path('dir/to/parquet/files') full_df = pd.concat( pd.read_parquet(parquet_file) for parquet_file in data_dir.glob('*.parquet') ) full_df.to_csv('csv_file.csv') ``` Alternatively, if you *really* want to just append to the file: ``` data_dir = Path('dir/to/parquet/files') for i, parquet_path in enumerate(data_dir.glob('*.parquet')): df = pd.read_parquet(parquet_path) write_header = i == 0 # write header only on the 0th file write_mode = 'w' if i == 0 else 'a' # 'write' mode for 0th file, 'append' otherwise df.to_csv('csv_file.csv', mode=write_mode, header=write_header) ``` A final alternative for appending each file that opens the target CSV file in `"a+"` mode at the onset, keeping the file handle scanned to the end of the file for each write/append (I believe this works, but haven't *actually* tested it): ``` data_dir = Path('dir/to/parquet/files') with open('csv_file.csv', "a+") as csv_handle: for i, parquet_path in enumerate(data_dir.glob('*.parquet')): df = pd.read_parquet(parquet_path) write_header = i == 0 # write header only on the 0th file df.to_csv(csv_handle, header=write_header) ```
a small change for those trying to read remote files, which helps to read them faster (direct read\_parquet for remote files was doing this much slower for me):

```
import io
import pandas as pd

merged = []
# remote_reader = ...  <- init some remote reader, for example AzureDLFileSystem()
for f in files:
    with remote_reader.open(f, 'rb') as f_reader:
        merged.append(f_reader.read())  # read the bytes from the opened file handle
merged = pd.concat((pd.read_parquet(io.BytesIO(file_bytes)) for file_bytes in merged))
```

Adds a little temporary memory overhead though.
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
I'm having a similar need and I read that the current Pandas version supports a directory path as the argument for the [read\_parquet](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.html) function. So you can read multiple parquet files like this:

```
import pandas as pd
df = pd.read_parquet('path/to/the/parquet/files/directory')
```

It concatenates everything into a single dataframe, so you can convert it to a csv right after:

```
df.to_csv('csv_file.csv')
```

Make sure you have the following dependencies according to the doc:

* pyarrow
* fastparquet
a small change for those trying to read remote files, which helps to read them faster (direct read\_parquet for remote files was doing this much slower for me):

```
import io
import pandas as pd

merged = []
# remote_reader = ...  <- init some remote reader, for example AzureDLFileSystem()
for f in files:
    with remote_reader.open(f, 'rb') as f_reader:
        merged.append(f_reader.read())  # read the bytes from the opened file handle
merged = pd.concat((pd.read_parquet(io.BytesIO(file_bytes)) for file_bytes in merged))
```

Adds a little temporary memory overhead though.
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
This helped me to load all parquet files into one data frame

```
import glob
import pandas as pd

files = glob.glob("*.snappy.parquet")
data = [pd.read_parquet(f, engine='fastparquet') for f in files]
merged_data = pd.concat(data, ignore_index=True)
```
You can use Dask to read in the multiple Parquet files and write them to a single CSV. Dask accepts an asterisk (\*) as wildcard / glob character to match related filenames. Make sure to set `single_file` to `True` and `index` to `False` when writing the CSV file. ```py import pandas as pd import numpy as np # create some dummy dataframes using np.random and write to separate parquet files rng = np.random.default_rng() for i in range(3): df = pd.DataFrame(rng.integers(0, 100, size=(10, 4)), columns=list('ABCD')) df.to_parquet(f"dummy_df_{i}.parquet") # load multiple parquet files with Dask import dask.dataframe as dd ddf = dd.read_parquet('dummy_df_*.parquet', index=False) # write to single csv ddf.to_csv("dummy_df_all.csv", single_file=True, index=False ) # test to verify df_test = pd.read_csv("dummy_df_all.csv") ``` Using Dask for this means you won't have to worry about the resulting file size (Dask is a distributed computing framework that can handle anything you throw at it, while pandas might throw a MemoryError if the resulting DataFrame is too large) and you can easily read and write from cloud data storage like Amazon S3.
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
I'm having a similar need and I read that the current Pandas version supports a directory path as the argument for the [read\_parquet](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.html) function. So you can read multiple parquet files like this:

```
import pandas as pd
df = pd.read_parquet('path/to/the/parquet/files/directory')
```

It concatenates everything into a single dataframe, so you can convert it to a csv right after:

```
df.to_csv('csv_file.csv')
```

Make sure you have the following dependencies according to the doc:

* pyarrow
* fastparquet
This helped me to load all parquet files into one data frame

```
import glob
import pandas as pd

files = glob.glob("*.snappy.parquet")
data = [pd.read_parquet(f, engine='fastparquet') for f in files]
merged_data = pd.concat(data, ignore_index=True)
```
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
I ran into this question looking to see if pandas can natively read partitioned parquet datasets. I have to say that the current answer is unnecessarily verbose (making it difficult to parse). I also imagine that it's not particularly efficient to be constantly opening/closing file handles then scanning to the end of them depending on the size. A better alternative would be to read all the parquet files into a single DataFrame, and write it once: ``` from pathlib import Path import pandas as pd data_dir = Path('dir/to/parquet/files') full_df = pd.concat( pd.read_parquet(parquet_file) for parquet_file in data_dir.glob('*.parquet') ) full_df.to_csv('csv_file.csv') ``` Alternatively, if you *really* want to just append to the file: ``` data_dir = Path('dir/to/parquet/files') for i, parquet_path in enumerate(data_dir.glob('*.parquet')): df = pd.read_parquet(parquet_path) write_header = i == 0 # write header only on the 0th file write_mode = 'w' if i == 0 else 'a' # 'write' mode for 0th file, 'append' otherwise df.to_csv('csv_file.csv', mode=write_mode, header=write_header) ``` A final alternative for appending each file that opens the target CSV file in `"a+"` mode at the onset, keeping the file handle scanned to the end of the file for each write/append (I believe this works, but haven't *actually* tested it): ``` data_dir = Path('dir/to/parquet/files') with open('csv_file.csv', "a+") as csv_handle: for i, parquet_path in enumerate(data_dir.glob('*.parquet')): df = pd.read_parquet(parquet_path) write_header = i == 0 # write header only on the 0th file df.to_csv(csv_handle, header=write_header) ```
This helped me to load all parquet files into one data frame

```
import glob
import pandas as pd

files = glob.glob("*.snappy.parquet")
data = [pd.read_parquet(f, engine='fastparquet') for f in files]
merged_data = pd.concat(data, ignore_index=True)
```
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
This helped me to load all parquet files into one data frame

```
import glob
import pandas as pd

files = glob.glob("*.snappy.parquet")
data = [pd.read_parquet(f, engine='fastparquet') for f in files]
merged_data = pd.concat(data, ignore_index=True)
```
a small change for those trying to read remote files, which helps to read them faster (direct read\_parquet for remote files was doing this much slower for me):

```
import io
import pandas as pd

merged = []
# remote_reader = ...  <- init some remote reader, for example AzureDLFileSystem()
for f in files:
    with remote_reader.open(f, 'rb') as f_reader:
        merged.append(f_reader.read())  # read the bytes from the opened file handle
merged = pd.concat((pd.read_parquet(io.BytesIO(file_bytes)) for file_bytes in merged))
```

Adds a little temporary memory overhead though.
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
If you are going to copy the files over to your local machine and run your code, you could do something like this. The code below assumes that you are running your code in the same directory as the parquet files. It also assumes the naming of files is as you provided above: "par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder." If you need to search for your files then you will need to get the file names using `glob` and explicitly provide the path where you want to save the csv: `open(r'this\is\your\path\to\csv_file.csv', 'a')` Hope this helps.

```
import pandas as pd

# Create an empty csv file and write the first parquet file with headers
# (the with statement closes the file automatically)
with open('csv_file.csv','w') as csv_file:
    print('Reading par_file1.parquet')
    df = pd.read_parquet('par_file1.parquet')
    df.to_csv(csv_file, index=False)
    print('par_file1.parquet appended to csv_file.csv\n')

# create your file names and append to an empty list to look for in the current directory
files = []
for i in range(2,101):
    files.append(f'par_file{i}.parquet')

# open files and append to csv_file.csv
for f in files:
    print(f'Reading {f}')
    df = pd.read_parquet(f)
    with open('csv_file.csv','a') as file:
        df.to_csv(file, header=False, index=False)
        print(f'{f} appended to csv_file.csv\n')
```

You can remove the print statements if you want. Tested in `python 3.6` using `pandas 0.23.3`
You can use Dask to read in the multiple Parquet files and write them to a single CSV. Dask accepts an asterisk (\*) as wildcard / glob character to match related filenames. Make sure to set `single_file` to `True` and `index` to `False` when writing the CSV file. ```py import pandas as pd import numpy as np # create some dummy dataframes using np.random and write to separate parquet files rng = np.random.default_rng() for i in range(3): df = pd.DataFrame(rng.integers(0, 100, size=(10, 4)), columns=list('ABCD')) df.to_parquet(f"dummy_df_{i}.parquet") # load multiple parquet files with Dask import dask.dataframe as dd ddf = dd.read_parquet('dummy_df_*.parquet', index=False) # write to single csv ddf.to_csv("dummy_df_all.csv", single_file=True, index=False ) # test to verify df_test = pd.read_csv("dummy_df_all.csv") ``` Using Dask for this means you won't have to worry about the resulting file size (Dask is a distributed computing framework that can handle anything you throw at it, while pandas might throw a MemoryError if the resulting DataFrame is too large) and you can easily read and write from cloud data storage like Amazon S3.
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
If you are going to copy the files over to your local machine and run your code, you could do something like this. The code below assumes that you are running your code in the same directory as the parquet files. It also assumes the naming of files is as you provided above: "par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder." If you need to search for your files then you will need to get the file names using `glob` and explicitly provide the path where you want to save the csv: `open(r'this\is\your\path\to\csv_file.csv', 'a')` Hope this helps.

```
import pandas as pd

# Create an empty csv file and write the first parquet file with headers
# (the with statement closes the file automatically)
with open('csv_file.csv','w') as csv_file:
    print('Reading par_file1.parquet')
    df = pd.read_parquet('par_file1.parquet')
    df.to_csv(csv_file, index=False)
    print('par_file1.parquet appended to csv_file.csv\n')

# create your file names and append to an empty list to look for in the current directory
files = []
for i in range(2,101):
    files.append(f'par_file{i}.parquet')

# open files and append to csv_file.csv
for f in files:
    print(f'Reading {f}')
    df = pd.read_parquet(f)
    with open('csv_file.csv','a') as file:
        df.to_csv(file, header=False, index=False)
        print(f'{f} appended to csv_file.csv\n')
```

You can remove the print statements if you want. Tested in `python 3.6` using `pandas 0.23.3`
a small change for those trying to read remote files, which helps to read them faster (direct read\_parquet for remote files was doing this much slower for me):

```
import io
import pandas as pd

merged = []
# remote_reader = ...  <- init some remote reader, for example AzureDLFileSystem()
for f in files:
    with remote_reader.open(f, 'rb') as f_reader:
        merged.append(f_reader.read())  # read the bytes from the opened file handle
merged = pd.concat((pd.read_parquet(io.BytesIO(file_bytes)) for file_bytes in merged))
```

Adds a little temporary memory overhead though.
51,696,655
I am new to python and I have a scenario where there are multiple parquet files with file names in order, e.g. par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder. I need to read these parquet files starting from file1 in order and write them to a single csv file. After writing the contents of file1, the contents of file2 should be appended to the same csv without a header. Note that all files have the same column names and only the data is split into multiple files. I learnt to convert a single parquet file to a csv file using pyarrow with the following code: ``` import pandas as pd df = pd.read_parquet('par_file.parquet') df.to_csv('csv_file.csv') ``` But I couldn't extend this to loop over multiple parquet files and append them to a single csv. Is there a method in pandas to do this? Any other way to do this would also be of great help. Thank you.
2018/08/05
[ "https://Stackoverflow.com/questions/51696655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183474/" ]
I ran into this question looking to see if pandas can natively read partitioned parquet datasets. I have to say that the current answer is unnecessarily verbose (making it difficult to parse). I also imagine that it's not particularly efficient to be constantly opening/closing file handles then scanning to the end of them depending on the size. A better alternative would be to read all the parquet files into a single DataFrame, and write it once: ``` from pathlib import Path import pandas as pd data_dir = Path('dir/to/parquet/files') full_df = pd.concat( pd.read_parquet(parquet_file) for parquet_file in data_dir.glob('*.parquet') ) full_df.to_csv('csv_file.csv') ``` Alternatively, if you *really* want to just append to the file: ``` data_dir = Path('dir/to/parquet/files') for i, parquet_path in enumerate(data_dir.glob('*.parquet')): df = pd.read_parquet(parquet_path) write_header = i == 0 # write header only on the 0th file write_mode = 'w' if i == 0 else 'a' # 'write' mode for 0th file, 'append' otherwise df.to_csv('csv_file.csv', mode=write_mode, header=write_header) ``` A final alternative for appending each file that opens the target CSV file in `"a+"` mode at the onset, keeping the file handle scanned to the end of the file for each write/append (I believe this works, but haven't *actually* tested it): ``` data_dir = Path('dir/to/parquet/files') with open('csv_file.csv', "a+") as csv_handle: for i, parquet_path in enumerate(data_dir.glob('*.parquet')): df = pd.read_parquet(parquet_path) write_header = i == 0 # write header only on the 0th file df.to_csv(csv_handle, header=write_header) ```
If you are going to copy the files over to your local machine and run your code, you could do something like this. The code below assumes that you are running your code in the same directory as the parquet files. It also assumes the naming of files is as you provided above: "par\_file1, par\_file2, par\_file3 and so on, up to 100 files in a folder." If you need to search for your files then you will need to get the file names using `glob` and explicitly provide the path where you want to save the csv: `open(r'this\is\your\path\to\csv_file.csv', 'a')` Hope this helps.

```
import pandas as pd

# Create an empty csv file and write the first parquet file with headers
# (the with statement closes the file automatically)
with open('csv_file.csv','w') as csv_file:
    print('Reading par_file1.parquet')
    df = pd.read_parquet('par_file1.parquet')
    df.to_csv(csv_file, index=False)
    print('par_file1.parquet appended to csv_file.csv\n')

# create your file names and append to an empty list to look for in the current directory
files = []
for i in range(2,101):
    files.append(f'par_file{i}.parquet')

# open files and append to csv_file.csv
for f in files:
    print(f'Reading {f}')
    df = pd.read_parquet(f)
    with open('csv_file.csv','a') as file:
        df.to_csv(file, header=False, index=False)
        print(f'{f} appended to csv_file.csv\n')
```

You can remove the print statements if you want. Tested in `python 3.6` using `pandas 0.23.3`
40,413,712
I downloaded the most recent version of wheel, but I'm not entirely sure what to make of this semi-cryptic error message:

```
Failed building wheel for mysql-python
Command "/Users/username/Desktop/Project/venv/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/bg/_nsyc_vxasdfx___h11f3jw00000gn/T/pip-build-rBf9R1/mysql-python/setup.py'; f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n'); f.close();exec(compile(code, __file__, 'exec'))" install --record /var/folders/bg/_nsyc_vx4g___xbsh11f3jw00000gn/T/pip-Tjwbij-record/install-record.txt --single-version-externally-managed --compile --install-headers /Users/username/Desktop/project/venv/include/site/python2.7/mysql-python" failed with error code 1 in /private/var/folders/bg/_nsyc_vxasdf__xbsh11f3jw00000gn/T/pip-build-rBf9R1/mysql-python/
```

I tried

```
pip install --upgrade wheel
```

and I get

```
Requirement already up-to-date: wheel
```

MySQL version

```
mysql Ver 14.14 Distrib 5.7.10, for osx10.11 (x86_64) using EditLine wrapper
```
2016/11/04
[ "https://Stackoverflow.com/questions/40413712", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4407093/" ]
In my case, it was because my system lacked the Python 3 development libraries; the build warns that there is no "Python.h" while installing. The following commands fixed it for me.

`yum install python34-devel -y`

`pip3 install mysqlclient`
The topic is quite old, but for people who might still be suffering from this problem, this can be your solution: First of all, open the folder where your Python installation lives (e.g. 3.5, 3.6, Anaconda, etc.). Then open cmd in that folder and run the command below:

```
$ pip install mysqlclient==1.3.12
```
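Assuming the install succeeds, a quick sanity check (not part of the original answer) is to import the module from the same environment; `mysqlclient` is imported under the name `MySQLdb`:

```
$ python -c "import MySQLdb; print(MySQLdb.__version__)"
1.3.12
```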
56,588,265
I've tried replacing each string, but I can't get it to work. I can get all the data between `<span>...</span>`, but I can't get only the text that comes after the nested span is closed. How could I do that? I've tried replacing the text afterwards, but I am not able to do it. I am quite new to python. I have also tried using `for x in soup.find_all('/span', class_ = "textLarge textWhite")` but that won't display anything.

**Relevant html:**

```
<div style="width:100%; display:inline-block; position:relative; text-align:center; border-top:thin solid #fff; background-image:linear-gradient(#333,#000);">
<div style="width:100%; max-width:1400px; display:inline-block; position:relative; text-align:left; padding:20px 15px 20px 15px;">
<a href="/manpower-fit-for-military-service.asp" title="Manpower Fit for Military Service ranked by country">
<div class="smGraphContainer"><img class="noBorder" src="/imgs/graph.gif" alt="Small graph icon"></div>
</a>
<span class="textLarge textWhite"><span class="textBold">FIT-FOR-SERVICE:</span> 18,740,382</span>
</div>
<div class="blockSheen"></div>
</div>
```

**Relevant python code:**

```
for y in soup.find_all('span', class_ = "textBold"):
    print(y.text) # this gets FIT-FOR-SERVICE:

for x in soup.find_all('span', class_ = "textLarge textWhite"):
    print(x.text) # this gets FIT-FOR-SERVICE: 18,740,382 but I only want the number
```

**Expected result**: `"18,740,382"`
2019/06/13
[ "https://Stackoverflow.com/questions/56588265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11644852/" ]
This problem is basically because the library you are using is in version 1.0.1 [mxgraph-js](https://www.npmjs.com/package/mxgraph-js), this library is very basic with the other features that the latest versions 4.0.6 [mxgraph](https://unpkg.com/mxgraph@4.0.6/javascript/mxClient.js) brings. In order for your prototype to work, you must import a more current version of mxgraph, and you must do so using a tag. ``` import React, { Component } from "react"; import PropTypes from "prop-types"; import ReactDOM from "react-dom"; class Graph extends React.Component{ constructor(props) { super(props); this.state = {}; this.containerId = "divGraph"; this.LoadGraph = this.LoadGraph.bind(this); } componentDidMount() { var that= this; const script = document.createElement("script"); // script.src = "https://unpkg.com/mxgraph@3.9.8/javascript/mxClient.js"; script.src = "https://unpkg.com/mxgraph@4.0.6/javascript/mxClient.js"; script.async = true; document.body.appendChild(script); setTimeout(() => { if(that.refs && this.refs.divGraph){ console.log(20,mxClient); that.LoadGraph(); }; },1000) } LoadGraph() { var container = ReactDOM.findDOMNode(this.refs.divGraph); var zoomPanel = ReactDOM.findDOMNode(this.refs.divZoom); var xmlString='<mxGraphModel dx="492" dy="658" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="827" pageHeight="1169"> <root> <mxCell id="0" /> <mxCell id="1" parent="0" /> <mxCell id="2" value="" style="rounded=0;whiteSpace=wrap;html=1;" parent="1" vertex="1"> <mxGeometry x="20" y="20" width="120" height="60" as="geometry" /> </mxCell> <mxCell id="3" value="" style="whiteSpace=wrap;html=1;aspect=fixed;"parent="1" vertex="1"> <mxGeometry x="20" y="100" width="80" height="80" as="geometry" /> </mxCell> <mxCell id="4" value="" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;" parent="1" vertex="1"> <mxGeometry x="20" y="200" width="80" height="80" as="geometry" /> </mxCell> <mxCell id="5" value="" style="triangle;whiteSpace=wrap;html=1;" parent="1" vertex="1"> <mxGeometry x="20" y="300" width="60" height="80"as="geometry" /> </mxCell> <mxCell id="6" value="" style="shape=hexagon;perimeter=hexagonPerimeter2;whiteSpace=wrap;html=1;" parent="1" vertex="1"> <mxGeometry x="20" y="400" width="120" height="80" as="geometry" /> </mxCell> </root></mxGraphModel>'; if (!mxClient.isBrowserSupported()) { mxUtils.error("Browser is not supported!", 200, false); } else { this.container = document.getElementById(this.containerId); this.model = new mxGraphModel(); var graph = new mxGraph(this.container, this.model); const parent = graph.getDefaultParent(); this.model.beginUpdate(); try { let xmlString = '<mxGraphModel dx="116" dy="608" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100"><root><mxCell id="0"/><mxCell id="1" parent="0"/><mxCell id="2" value="Template" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#00ACC1;strokeWidth=5;glass=0;comic=0;gradientColor=#b2ebf2;fillColor=#e0f7fa;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="210" y="120" width="200" height="120" as="geometry"/></mxCell><mxCell id="3" value="Sub-Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ffb74d;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#fff3e0;fillColor=#FFFFFF;" vertex="1" parent="1"><mxGeometry x="210" y="260" width="200" height="120" as="geometry"/></mxCell><mxCell id="4" value="End" 
style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#ffebee;strokeColor=#f44336;strokeWidth=4;gradientColor=#ffcdd2;" vertex="1" parent="1"><mxGeometry x="240" y="400" width="80" height="80" as="geometry"/></mxCell><mxCell id="5" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="240" y="500" width="200" height="120" as="geometry"/></mxCell><mxCell id="6" value="Start" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#e8f5e9;strokeColor=#4caf50;strokeWidth=4;gradientColor=#c8e6c9;" vertex="1" parent="1"><mxGeometry x="270" y="640" width="80" height="80" as="geometry"/></mxCell><mxCell id="7" value="Start" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#e8f5e9;strokeColor=#4caf50;strokeWidth=4;gradientColor=#c8e6c9;" vertex="1" parent="1"><mxGeometry x="270" y="740" width="80" height="80" as="geometry"/></mxCell><mxCell id="8" value="Documentation" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;strokeWidth=4;strokeColor=#9C27B0;gradientColor=#E1BEE7;fillColor=#F3E5F5;fontSize=16;" vertex="1" parent="1"><mxGeometry x="270" y="840" width="200" height="120" as="geometry"/></mxCell><mxCell id="9" value="Sub-Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ffb74d;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#fff3e0;fillColor=#FFFFFF;" vertex="1" parent="1"><mxGeometry x="300" y="980" width="200" height="120" as="geometry"/></mxCell><mxCell id="10" value="Template" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#00ACC1;strokeWidth=5;glass=0;comic=0;gradientColor=#b2ebf2;fillColor=#e0f7fa;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="330" y="1120" width="200" height="120" as="geometry"/></mxCell><mxCell id="11" value="Decision" style="rhombus;whiteSpace=wrap;html=1;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#333333;strokeWidth=4;fillColor=#eeeeee;gradientColor=#e5e5e5;fontFamily=Helvetica;fontSize=16;fontColor=#000000;align=center;" vertex="1" parent="1"><mxGeometry x="360" y="1260" width="200" height="120" as="geometry"/></mxCell><mxCell id="12" value="End" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#ffebee;strokeColor=#f44336;strokeWidth=4;gradientColor=#ffcdd2;" vertex="1" parent="1"><mxGeometry x="390" y="1400" width="80" height="80" as="geometry"/></mxCell><mxCell id="13" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="390" y="1500" width="200" height="120" as="geometry"/></mxCell><mxCell id="14" value="Documentation" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;strokeWidth=4;strokeColor=#9C27B0;gradientColor=#E1BEE7;fillColor=#F3E5F5;fontSize=16;" vertex="1" parent="1"><mxGeometry x="420" y="1640" width="200" height="120" as="geometry"/></mxCell><mxCell id="15" value="Phase" 
style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="450" y="1780" width="1020" height="200" as="geometry"/></mxCell><mxCell id="16" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="480" y="2000" width="1020" height="200" as="geometry"/></mxCell><mxCell id="17" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="510" y="2220" width="1020" height="200" as="geometry"/></mxCell><mxCell id="18" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="540" y="2440" width="1020" height="200" as="geometry"/></mxCell><mxCell id="19" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="570" y="2660" width="200" height="120" as="geometry"/></mxCell><mxCell id="20" value="Start" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#e8f5e9;strokeColor=#4caf50;strokeWidth=4;gradientColor=#c8e6c9;" vertex="1" parent="1"><mxGeometry x="600" y="2800" width="80" height="80" as="geometry"/></mxCell><mxCell id="21" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="600" y="2900" width="200" height="120" as="geometry"/></mxCell><mxCell id="22" value="Action" style="rounded=1;whiteSpace=wrap;html=1;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#e91e63;strokeWidth=4;fillColor=#fce4ec;gradientColor=#f8bbd0;fontFamily=Helvetica;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="630" y="3040" width="280" height="40" as="geometry"/></mxCell><mxCell id="23" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="660" y="3100" width="200" height="120" as="geometry"/></mxCell><mxCell id="24" value="Sub-Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ffb74d;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#fff3e0;fillColor=#FFFFFF;" vertex="1" parent="1"><mxGeometry x="690" y="3240" 
width="200" height="120" as="geometry"/></mxCell><mxCell id="25" value="End" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#ffebee;strokeColor=#f44336;strokeWidth=4;gradientColor=#ffcdd2;" vertex="1" parent="1"><mxGeometry x="720" y="3380" width="80" height="80" as="geometry"/></mxCell><mxCell id="26" value="Decision" style="rhombus;whiteSpace=wrap;html=1;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#333333;strokeWidth=4;fillColor=#eeeeee;gradientColor=#e5e5e5;fontFamily=Helvetica;fontSize=16;fontColor=#000000;align=center;" vertex="1" parent="1"><mxGeometry x="720" y="3480" width="200" height="120" as="geometry"/></mxCell><mxCell id="27" value="Template" style="rounded=1;whiteSpace=wrap;html=1;strokeColor=#00ACC1;strokeWidth=5;glass=0;comic=0;gradientColor=#b2ebf2;fillColor=#e0f7fa;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="750" y="3620" width="200" height="120" as="geometry"/></mxCell><mxCell id="28" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="780" y="3760" width="200" height="120" as="geometry"/></mxCell><mxCell id="29" value="Start" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fontFamily=Helvetica;fontSize=16;fillColor=#e8f5e9;strokeColor=#4caf50;strokeWidth=4;gradientColor=#c8e6c9;" vertex="1" parent="1"><mxGeometry x="810" y="3900" width="80" height="80" as="geometry"/></mxCell><mxCell id="30" value="Documentation" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;strokeWidth=4;strokeColor=#9C27B0;gradientColor=#E1BEE7;fillColor=#F3E5F5;fontSize=16;" vertex="1" parent="1"><mxGeometry x="810" y="4000" width="200" height="120" as="geometry"/></mxCell><mxCell id="31" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="840" y="4140" width="1020" height="200" as="geometry"/></mxCell><mxCell id="32" value="Action" style="rounded=1;whiteSpace=wrap;html=1;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#e91e63;strokeWidth=4;fillColor=#fce4ec;gradientColor=#f8bbd0;fontFamily=Helvetica;fontSize=16;fontColor=#000000;" vertex="1" parent="1"><mxGeometry x="870" y="4360" width="280" height="40" as="geometry"/></mxCell><mxCell id="33" value="Documentation" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;strokeWidth=4;strokeColor=#9C27B0;gradientColor=#E1BEE7;fillColor=#F3E5F5;fontSize=16;" vertex="1" parent="1"><mxGeometry x="900" y="4420" width="200" height="120" as="geometry"/></mxCell><mxCell id="34" value="Phase" style="swimlane;html=1;horizontal=0;swimlaneLine=0;rounded=0;glass=0;comic=0;labelBackgroundColor=none;strokeColor=#000000;strokeWidth=4;fillColor=#26C6DA;gradientColor=#00ACC1;fontFamily=Helvetica;fontSize=16;fontColor=#FFFFFF;gradientDirection=east;spacing=0;spacingTop=-4;swimlaneFillColor=none;collapsible=0;" vertex="1" parent="1"><mxGeometry x="930" y="4560" width="1020" height="200" as="geometry"/></mxCell><mxCell id="35" value="Process" 
style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="960" y="4780" width="200" height="120" as="geometry"/></mxCell><mxCell id="36" value="Sub-Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ffb74d;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#fff3e0;fillColor=#FFFFFF;" vertex="1" parent="1"><mxGeometry x="990" y="4920" width="200" height="120" as="geometry"/></mxCell><mxCell id="37" value="Process" style="shape=process;whiteSpace=wrap;html=1;backgroundOutline=1;strokeColor=#ff9800;strokeWidth=4;perimeterSpacing=0;fontSize=16;gradientColor=#ffe0b2;fillColor=#fff3e0;" vertex="1" parent="1"><mxGeometry x="1020" y="5060" width="200" height="120" as="geometry"/></mxCell></root></mxGraphModel>'; var doc = mxUtils.parseXml(xmlString); var codec = new mxCodec(doc); codec.decode(doc.documentElement, graph.getModel()); } finally { graph.model.endUpdate(); } } } render() { return <div className="graph-container" ref="divGraph" id="divGraph" />; } } export default Graph; ```
I found a way to resolve this in my webpack project (vuejs); maybe it's the same problem, because all the mx objects must be in the window dict, e.g. `window['mxCell'] = mxCell`, so I wrote this:

```
const {mxCell .......} = mxgraph({
  mxImageBasePath: .....
})

// register all objects on window
window['mxClient'] = mxClient
window['mxGraph'] = mxGraph
window['mxUtils'] = mxUtils
window['mxEvent'] = mxEvent
window['mxVertexHandler'] = mxVertexHandler
window['mxConstraintHandler'] = mxConstraintHandler
window['mxConnectionHandler'] = mxConnectionHandler
window['mxEdgeHandler'] = mxEdgeHandler
window['mxCellHighlight'] = mxCellHighlight
window['mxEditor'] = mxEditor
window['mxGraphModel'] = mxGraphModel
window['mxKeyHandler'] = mxKeyHandler
window['mxStencilRegistry'] = mxStencilRegistry
window['mxConstants'] = mxConstants
window['mxGraphView'] = mxGraphView
window['mxCodec'] = mxCodec
window['mxToolbar'] = mxToolbar
window['mxRubberband'] = mxRubberband
window['mxShape'] = mxShape
window['mxImage'] = mxImage
window['mxConnectionConstraint'] = mxConnectionConstraint
window['mxPoint'] = mxPoint
window['mxDivResizer'] = mxDivResizer
window['mxGraphHandler'] = mxGraphHandler
window['mxPerimeter'] = mxPerimeter
window['mxPolyline'] = mxPolyline
window['mxCellState'] = mxCellState
window['mxOutline'] = mxOutline
window['mxCell'] = mxCell
window['mxGeometry'] = mxGeometry
window['mxGuide'] = mxGuide
window['mxDefaultKeyHandler'] = mxDefaultKeyHandler
window['mxDefaultToolbar'] = mxDefaultToolbar
window['mxXmlCanvas2D'] = mxXmlCanvas2D
window['mxImageExport'] = mxImageExport

// your code here
```
40,764,894
I am trying to run Pylint and I am getting the below error:

> pkg\_resources.DistributionNotFound: The 'backports.functools-lru-cache' distribution was not found and is required by pylint

I found the link below, but I'm not sure what to do with these files or where to place them.

<https://pypi.python.org/simple/backports.functools-lru-cache/>

How can I fix this?
2016/11/23
[ "https://Stackoverflow.com/questions/40764894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4663869/" ]
I had the same problem and I installed two missing dependencies (a misconfiguration in pylint, or an outdated pip?). Just do:

```
pip install backports.functools_lru_cache
```

Then, if you get an error like:

```
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: configparser
```

just do:

```
pip install configparser
```
I had this problem when running within a [virtual environment](https://virtualenv.pypa.io/en/stable/) on CentOS 7. On CentOS, the backports module is packaged as a yum package (`python-backports.x86_64`). The solution was to create the virtualenv using the [`--system-site-packages`](https://virtualenv.pypa.io/en/stable/userguide/#the-system-site-packages-option) option.

First verify that the `python-backports` package is installed:

`yum list installed | grep python-backports`

Then create/recreate your virtual environment:

`virtualenv env --system-site-packages`

This allows the virtualenv's pylint to see the backports module when installing. Then install pylint within the virtual environment:

`env/bin/pip install pylint`
40,764,894
I am trying to run Pylint and I am getting the below error:

> pkg\_resources.DistributionNotFound: The 'backports.functools-lru-cache' distribution was not found and is required by pylint

I found the link below, but I'm not sure what to do with these files or where to place them.

<https://pypi.python.org/simple/backports.functools-lru-cache/>

How can I fix this?
2016/11/23
[ "https://Stackoverflow.com/questions/40764894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4663869/" ]
From what I can tell certain versions of RHEL/CentOS have some sort of problem with the backports.ssl-match-hostname package in their yum repositories which can lead to problems when other backports packages get updated from PyPI. Specifically, in a RHEL7.2 environment I've reproduced the problem as follows: ``` > yum install python-pip # indirectly installs backports.ssl-match-hostname > pip2 install pylint # indirectly installs backports.functools_lru_cache > pip2 install --upgrade backports.ssl-match-hostname # install latest package from pypi, which effectively corrupts backports.functools_lru_cache > python2 -m pylint --version # fails with missing import backports.functools_lru_cache ``` The only way to avoid this that I've found is to replace the yum-installed package with the equivalent one from PyPI. This can be done as follows: ``` > yum install python-pip # installs backports.ssl-match-hostname as a transitive dependency > pip2 freeze > temp_reqs.txt # take a snapshot of the installed packages and versions > pip2 uninstall backports.ssl-match-hostname # remove the yum installed package > pip2 install -r temp_reqs.txt # reinstall the same version of the backports package, but install from PyPI ``` Now the installed packages should work as expected. Performing the following test case confirms this: ``` > pip2 install pylint > pip2 install --upgrade backports.ssl-match-hostname # previously caused corruption of backports.functools_lru_cache used by pylint > python2 -m pylint --version # now works correctly ``` Hopefully this helps others with this problem.
I had this problem when running within a [virtual environment](https://virtualenv.pypa.io/en/stable/) on CentOS 7. On CentOS, the backports module is packaged as a yum package (`python-backports.x86_64`). The solution was to create the virtualenv using the [`--system-site-packages`](https://virtualenv.pypa.io/en/stable/userguide/#the-system-site-packages-option) option. First verify that the `python-backports` package is installed: `yum list installed | grep python-backports` Then create/recreate your virtual environment: `virtualenv env --system-site-packages` This allows the virtualenv's pylint to see the backports module when installing. Then install pylint within the virtual environment: `env/bin/pip install pylint`
13,358,283
I tried to start an SMTP server with Python with the following code:

```
import smtplib
smtp_server = smtplib.SMTP('localhost')
```

And I get the following error:

```
File "test.py", line 2, in <module>
  smtp_server = smtplib.SMTP('localhost')
File "/usr/local/lib/python2.7/smtplib.py", line 242, in __init__
  (code, msg) = self.connect(host, port)
File "/usr/local/lib/python2.7/smtplib.py", line 302, in connect
  self.sock = self._get_socket(host, port, self.timeout)
File "/usr/local/lib/python2.7/smtplib.py", line 277, in _get_socket
  return socket.create_connection((port, host), timeout)
File "/usr/local/lib/python2.7/socket.py", line 571, in create_connection
  raise err
```

I tried `telnet localhost 25` and `telnet localhost`, but they both give the following error:

```
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host: Connection refused
```

The same Python code runs perfectly well on one of my other machines. Any advice? Thanks.

UPDATE: The machine I'm having the problem with is running Python 2.7.2, while the machine on which the code works well is running Python 2.6.2. I don't know if this is one of the reasons.
2012/11/13
[ "https://Stackoverflow.com/questions/13358283", "https://Stackoverflow.com", "https://Stackoverflow.com/users/639165/" ]
`smtplib` is an SMTP protocol client. What you are looking for is [`smtpd`](http://docs.python.org/2/library/smtpd.html).
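For illustration, here is a minimal sketch of a local debugging SMTP server built on `smtpd` (Python 2 standard library; port 1025 is an arbitrary choice for this example):

```
import smtpd
import asyncore

# A debugging SMTP server on localhost:1025 that prints each
# received message to stdout instead of relaying it anywhere.
server = smtpd.DebuggingServer(('localhost', 1025), None)
asyncore.loop()
```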
The problem is that `smtplib` does not provide an SMTP server implementation; it implements an SMTP client. Most likely, the code tries to connect to a local SMTP server and fails because nothing is listening there. See <http://docs.python.org/2/library/smtplib.html>

Is the other machine you write about already running an SMTP server? And what are you trying to do?
42,762,924
To view the installed libraries in an environment I run this code within a Jupyter Python notebook cell:

```
%%bash
pip freeze
```

This works, but how do I conditionally execute this code? This is my attempt:

```
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets

def f(x1):
    if(x1 == True):
        f2()
    return x1

interact(f , x1 = False)

def f2():
    %%bash
    pip freeze
```

But evaluating the cell throws an error:

```
  File "<ipython-input-186-e8a8ec97ab2d>", line 15
    pip freeze
      ^
SyntaxError: invalid syntax
```

To generate the checkbox I'm using ipywidgets: <https://github.com/ipython/ipywidgets>

Update: Running `pip freeze` within `check_call` returns 0 results:

[![enter image description here](https://i.stack.imgur.com/Cf9TP.png)](https://i.stack.imgur.com/Cf9TP.png)

Running

```
%%bash
pip freeze
```

returns the installed libraries, so 0 is not correct. Is `subprocess.check_call("pip freeze", shell=True)` correct?

Update 2: This works:

```
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import subprocess

def f(View):
    if(View == True):
        f2()

interact(f , View = False)

def f2():
    print(subprocess.check_output(['pip', 'freeze']))
```
2017/03/13
[ "https://Stackoverflow.com/questions/42762924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/470184/" ]
You can just use the standard Python way: ``` import subprocess print(subprocess.check_output(['pip', 'freeze'])) ``` Then your function will work in any Python environment.
The short explanation is that the notebook has interactive commands which are handled by the notebook itself, before the Python interpreter even sees them. `%%bash` is an example of such a command; you cannot put this in Python code, because it's not Python. Using `bash` doesn't actually add anything here *per se;* using the shell offers many interactive benefits, and of course, in an interactive notebook, offering the user access to the shell is a powerful mechanism for allowing users to execute external processes; but in this particular case, with noninteractive execution, there are no actual benefits from putting the shell between yourself and `pip`, so you might simply want ``` import subprocess if some_condition: p = subprocess.run(['pip', 'freeze'], stdout=subprocess.PIPE, universal_newlines=True) ``` (Notice the absence of `shell=True` because we don't need or want the shell here.) If you want the captured exit code or the output from `pip freeze` they are available as attributes of the returned object `p`. See the [`subprocess.run` documentation](https://docs.python.org/3/library/subprocess.html#subprocess.run) for details. Briefly, `p.returncode` will be 0 if the command succeeded, and the output will be in `p.stdout`. Older versions of Python had a diverse collection of special-purpose wrappers around `subprocess.Popen` like `check_call`, `check_output`, etc. but these have all been subsumed by `subprocess.run` in recent versions. If you need to support Python versions before 3.5, the legacy functions are still available, but they should arguably be avoided in new code.
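As a small illustrative sketch building on the above (assuming Python 3.5+), the attributes of the returned object can be inspected like this:

```
import subprocess

p = subprocess.run(['pip', 'freeze'],
                   stdout=subprocess.PIPE, universal_newlines=True)
if p.returncode == 0:   # 0 means pip exited successfully
    print(p.stdout)     # the captured output of `pip freeze`
```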
61,343,399
I want to ask about the h3-py installation on Windows. I tried to install h3 on Windows with Python: I ran the command `pip install h3` and it installed. After the install, when I try to import it, I get this error:

```
from h3 import h3
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python36\lib\site-packages\h3\h3.py", line 39, in <module>
  libh3 = cdll.LoadLibrary('{}/{}'.format(_dirname, libh3_path))
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python36\lib\ctypes\__init__.py", line 426, in LoadLibrary
  return self._dlltype(name)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python36\lib\ctypes\__init__.py", line 348, in __init__
  self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
```
2020/04/21
[ "https://Stackoverflow.com/questions/61343399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7765614/" ]
We're in the process of migrating the wrapper to Cython at <https://github.com/uber/h3-py/tree/cython> Would you mind trying to install this cython version with these (temporary) install steps: ``` pip install scikit-build pip install git+https://github.com/uber/h3-py.git@cython ``` Edit: We've released `3.6.1` along with pre-built wheels on PyPI and conda. Install should now be as simple as ``` pip install h3 ```
Apparently there was a problem with the package, and after it was fixed in release v3.6.1 it works; they fixed the install issues, as mentioned in this issue on GitHub: <https://github.com/uber/h3-py/issues/32>
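As a quick sanity check after upgrading, something like this should run without the `OSError` (a sketch assuming the v3 API, e.g. `geo_to_h3`; the coordinates are arbitrary example values):

```
import h3

# Convert a lat/lng pair to an H3 cell index at resolution 9.
print(h3.geo_to_h3(37.7749, -122.4194, 9))
```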
2,152,463
I get the following errors; I've put [myname] in place of my username for anonymity:

```
>>> python /Users/[myname]/Desktop/setuptools-0.6c11/ez_setup.py
  File "<stdin>", line 1
    python /Users/[myname]/Desktop/setuptools-0.6c11/ez_setup.py
                                                  ^
SyntaxError: invalid syntax
```

In case you can't see it, the ^ is under the 11. Or I get this error:

```
>>> python /Users/[myname]/Desktop/EZ_tutorial/ez_setup.py
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'python' is not defined
```
2010/01/28
[ "https://Stackoverflow.com/questions/2152463", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260666/" ]
The ez\_setup.py script may or may not work depending on your environment. If not, follow the instructions [here](http://pypi.python.org/pypi/setuptools). In particular, from the shell, make sure that the Python 2.6 you installed is now invoked by the command `python`:

```
$ python
Python 2.6.4 (r264:75821M, Oct 27 2009, 19:48:32)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> ^D
```

If not, modify your shell's PATH environment variable. Then download the setuptools 2.6 Python egg from [here](http://pypi.python.org/pypi/setuptools), change to your browser's download directory, and run the downloaded script:

```
$ cd ~/Downloads # substitute the appropriate directory name
$ sh setuptools-0.6c11-py2.6.egg
```
Try running that command from a shell (i.e. straight from Terminal.app), not from inside the python interpreter.
69,460,583
I am deploying my project with Heroku. My Django version is 3.2.8 and my Python version is 3.9.7. It is written on Heroku that Python 3.9.7 deployment is supported, but there is an error in my push process saying the version above is not supported. What should I do? Thank you for your reply.

This is my cmd

![enter image description here](https://i.stack.imgur.com/mYGMQ.png)

my requirements.txt

![enter image description here](https://i.stack.imgur.com/yUy6w.png)
2021/10/06
[ "https://Stackoverflow.com/questions/69460583", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17086297/" ]
I parsed the file you provided as an example [here](https://www.sec.gov/Archives/edgar/data/1087699/000108769999000001/0001087699-99-000001.txt). I first copied the data from the file to a txt file. The file `copied.txt` needs to be in the current working directory. This could give you an idea how to proceed. ```r library(tidyverse) df <- read_file("copied.txt") %>% # trying to extract data only from the table (function(x){ tbl_beg <- str_locate(x, "Managers Sole")[2] + 1 tbl_end <- str_locate(x, "\r\n</TABLE>")[1] str_sub(x, tbl_beg, tbl_end) }) %>% # removing some unwanted characters from the beginning and the end of the extracted string str_sub(start = 4, end = -3) %>% # splitting for individual lines str_split('\"\r\n\"') %>% unlist() %>% # removing broken line break str_remove("\r\n") %>% # replacing the original text where there are spaces with one, where there is underscore # the reason for that is that I need to split the rows into columns using space str_replace_all("Sole Managers Sole", " Managers_Sole") %>% # removing extra spaces str_squish() %>% # reversing the order of the line (I need to split from the right because the company name contains additional spaces) # if the company name is the last one, it is okey that there are additional spaces stringi::stri_reverse() %>% str_split(pattern = " ", n = 6, simplify = T) %>% # making the order to the original one apply(MARGIN = 2, FUN = stringi::stri_reverse) %>% as_tibble() %>% select(c(6:1)) %>% set_names(nm = c("name_of_issuer", "title_of_cl", "cusip_number", "fair_market_value", "shares", "shares_of_princip_mngrs")) # A tibble: 47 x 6 name_of_issuer title_of_cl cusip_number fair_market_value shares shares_of_princip_mngrs <chr> <chr> <chr> <chr> <chr> <chr> 1 America Online COM 02364J104 2,940,000 20,000 Managers_Sole 2 Anheuser Busch COM 35229103 3,045,000 40,000 Managers_Sole 3 At Home COM 45919107 787,500 5,000 Managers_Sole 4 AT&T COM 1957109 5,985,937 75,000 Managers_Sole 5 Bank Toyko COM 65379109 700,000 50,000 Managers_Sole 6 Bay View Capital COM 07262L101 14,958,437 792,500 Managers_Sole 7 Broadcast.com COM 111310108 2,954,687 25,000 Managers_Sole 8 Chase Manhattan COM 16161A108 10,578,750 130,000 Managers_Sole 9 Chase Manhattan 4/85C 16161A9DQ 59,375 500 Managers_Sole 10 Cisco Systems COM 17275R102 4,930,312 45,000 Managers_Sole ```
You can use the `httr` package to request the page:

```
> install.packages("httr") # follow instructions etc
```

Then in the `R` shell (you might need a restart):

```r
> httr::GET("https://www.sec.gov/Archives/edgar/data/1087699/000108769999000001/0001087699-99-000001.txt")
```

This will download the file successfully; however, I'm not fluent enough in R to parse this text, but it looks pretty easy: split the text by `<TABLE>`, split on newlines for rows, and split every row by spaces for columns.
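As a minimal sketch of that splitting approach in Python (assuming the raw filing text has already been downloaded into a string `raw`; the tag name is taken from the description above):

```
# Keep only the first <TABLE>...</TABLE> section, then split it
# into rows on newlines and into columns on whitespace.
table = raw.split("<TABLE>")[1].split("</TABLE>")[0]
rows = [line.split() for line in table.splitlines() if line.strip()]
```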
1,233,284
I want to hit a URL from Python in Google App Engine. Can anyone please tell me how I can hit the URL using Python in Google App Engine?
2009/08/05
[ "https://Stackoverflow.com/questions/1233284", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can use the [URLFetch API](http://code.google.com/appengine/docs/python/urlfetch/overview.html) ``` from google.appengine.api import urlfetch url = "http://www.google.com/" result = urlfetch.fetch(url) if result.status_code == 200: doSomethingWithResult(result.content) ```
It always depends on whether you POST or GET. `urllib` can encode a payload to post to a form somewhere else, for example if we want the rather tricky thing of validating a login against different hashes (SHA and MD5):

```
import urllib
from google.appengine.api import urlfetch

data = urllib.urlencode({"id" : str(d),
                         "password" : self.request.POST['passwd'],
                         "edit" : "edit"})
result = urlfetch.fetch(url="http://www..../Login.do",
                        payload=data,
                        method=urlfetch.POST,
                        headers={'Content-Type': 'application/x-www-form-urlencoded'})
if result.content.find('loggedInOrYourInfo') > 0:
    self.response.out.write("clear")
else:
    self.response.out.write("wrong password")
```
46,309,485
I have data that looks like this:

```
['6005600401']
['000000: PUSH1 0x05']
['000002: PUSH1 0x04']
['000004: ADD']
```

The initial representation it's derived from is here:

```
6005600401
000000: PUSH1 0x05
000002: PUSH1 0x04
000004: ADD
```

The output I'd like to create is like so:

```
PUSH1 0x60, PUSH1 0x40, MSTORE, CALLDATASIZE, ISZERO, PUSH2 0x006c, JUMPI, PUSH1 0xe0, PUSH1 0x02, EXP, PUSH1 0x00, CALLDATALOAD, DIV
```

I've been experimenting with different techniques of isolating that data in the Python shell, like so:

```
>>> date_div = "Blah blah blah, Updated: Aug. 23, 2012"
>>> date_div.split('Updated: ')
['Blah blah blah, ', 'Aug. 23, 2012']
>>> date_div.split('Updated: ')[-1]
'Aug. 23, 2012'
>>> line = ['000000: PUSH1 0x05']
>>> date_div.split(':')
['Blah blah blah, Updated', ' Aug. 23, 2012']
>>> line.split(':')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute 'split'
>>> line = ["000000: PUSH1 0x05"]
>>> line.split(':')
```

But as of yet I've not been able to eliminate all the superfluous characters. How can I employ regex such that my data goes from a representation like this:

```
['6005600401']
['000000: PUSH1 0x05']
['000002: PUSH1 0x04']
['000004: ADD']
```

To one like this:

```
PUSH1 0x60, PUSH1 0x40, MSTORE, CALLDATASIZE, ISZERO, PUSH2 0x006c, JUMPI, PUSH1 0xe0, PUSH1 0x02, EXP, PUSH1 0x00, CALLDATALOAD, DIV
```

---

Here is the script used to create it:

```
import csv
import sys
import subprocess

def my_test_func(filename, data):
    with open(filename, 'w') as fd:
        fd.write(data)
        fd.write('\n')
    return subprocess.check_output(['evm', 'disasm', filename])

if '__main__' == __name__:
    file_name = sys.argv[1]
    byte_code = sys.argv[2]
    status = my_test_func(file_name, byte_code)
    edits = csv.reader(status.splitlines(), delimiter=",")
    for row in edits:
        print(row)
```
2017/09/19
[ "https://Stackoverflow.com/questions/46309485", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3787253/" ]
Why not just split on `':'`?

```
# status is the full disassembly output, so iterate over its lines
for line in status.splitlines():
    data = line.split(':')
    if len(data) < 2:
        continue
    print(data[1].strip())
```
This is one good way:

```
import re

opcodes_list = list()
for element in status.split('\n'):
    # keep only the opcode part of each line (e.g. "PUSH1 0x05")
    result = re.search(r"\b[A-Z].+", element)
    if result:
        opcodes_list.append(result.group(0))
```
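To get the comma-separated string shown in the desired output, the collected opcodes can then be joined:

```
# Join the collected opcodes into the comma-separated form asked for.
print(', '.join(opcodes_list))
```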
37,467,680
In the following code, there is an indentation error at line 5. What I want is: when the if condition at line 4 is true, break should execute; otherwise div=div+2 should execute.

```
max_num=input()
for num in range(2,max_num+1):
 for div in range(3,max_num/2):
   if(num%div==0):break
   div=div+2
 else:
   print num
 if(num==2):
   num=num+1
 else :
   num=num+2
```

I am new to Python. Please help me out. :)
2016/05/26
[ "https://Stackoverflow.com/questions/37467680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5774499/" ]
Please use 2 or 4 spaces for indentation. I see your indentation is inconsistent: in one place you use 1 space and in others you use 3. What you did **should** work as far as I can see, but it's inconsistent. Here is a slightly more organized version:

```
max_num=input()
for num in range(2,max_num+1):
    for div in range(3,max_num/2):
        if(num%div==0):
            break
        div=div+2
    else:
        print num
    if(num==2):
        num=num+1
    else:
        num=num+2
```

You can also optimize it to work more efficiently like so:

```
# Makes sure only ints are allowed. Else throws an exception
max_num = int(raw_input())

for num in range(2, max_num + 1):
    try:
        (div for div in range(3, max_num / 2, 2) if num % div == 0).next()
    except StopIteration:
        print num

# This section does not appear to be used
# if(num==2):
#     num=num+1
# else:
#     num=num+2
```
I suggest adding this code, but I would want to know what the intended result of your code is.

```
max_num=input()
for num in range(2,max_num+1):
    for div in range(3,max_num/2):
        if(num%div==0):break
        else:div=div+2
    print num
    if(num==2):
        num=num+1
    else :
        num=num+2
```
48,254,890
I'm learning about how to import libraries from directories and I've stumbled upon an error I can't seem to figure out. I'm using the IMDBpy python library in a folder called lib. Below, I am importing the module which no longer returns any errors, but when I move to line number 6, I get the following error: ``` Traceback (most recent call last): File "testlib.py", line 4, in <module> from lib.imdb import IMDb File "/home/user/Scripts/test/lib/imdb/__init__.py", line 49, in <module> import imdb._logging ImportError: No module named 'imdb' ``` Python is throwing an error because the python test file I'm writing is 2 directories up. Not sure how to get it working without putting everything in the same directory. ``` #!/usr/bin/env python3 from lib.imdb import IMDb # Create the object that will be used to access the IMDb's database. ia = imdb.IMDb() # by default access the web. ```
2018/01/14
[ "https://Stackoverflow.com/questions/48254890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3808507/" ]
You can either write:

```
from lib.imdb import IMDb
ia = IMDb()
```

or:

```
from lib import imdb
ia = imdb.IMDb()
```

But importing `IMDb` and then calling `imdb.IMDb()` does not work. One other point to mention: If your *testlib.py* resides above the `lib` folder, then you need to add a `__init__.py` file within `lib` for your import to work. The file can be completely empty.

---

**Update**: This package uses import statements that are not relative to the package folder, but to an arbitrary folder in `sys.path`. An illustration of this is the exact line that causes the error:

```
# imdb.__init__.py
imdb._logging
```

This works only if the package resides in a folder that is part of `sys.path`, so either in the same path as the script that does the `from imdb import IMDb` statement or in the python dist-package path (simplified explanation). This and all equivalent imports could be replaced by

```
# imdb.__init__.py
from . import _logging
```

in which case the import is relative to the module, so there is no need for the imdbpy package to reside in a directory in `sys.path`.

The consequence for us now is that we cannot simply put imdb in a sub-folder; we need to either put it in a folder from `sys.path` or add the folder it resides in to `sys.path`. So, the options are:

* install the package
* move the imdb folder up into the current folder
* write:

```
import sys
sys.path.insert(0, 'absolute/path/to/lib')
```

before the import statement. In any of the above cases you will be able to do:

```
from imdb import IMDb
ia = IMDb()
```
As mentioned above, importing a module is different from importing a single function or class from it. When you import a module such as `imdb`, you may use its functions. However, when you import just a function, you can only use that function, not the object/module where it came from. Check this for better examples: [Python 3 modules](https://docs.python.org/3/tutorial/modules.html#modules)
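A minimal illustration of the difference, using the names from the question:

```
from lib import imdb         # imports the module; call imdb.IMDb()
from lib.imdb import IMDb    # imports the class itself; call IMDb()
```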