29,648,412
I have a Python 3 installation of Anaconda and want to be able to switch quickly between python2 and 3 kernels. This is on OSX. My steps so far involved: ``` conda create -p ~/anaconda/envs/python2 python=2.7 source activate python2 conda install ipython ipython kernelspec install-self source deactivate ``` After this I have a python2 Kernel to choose from in the python3 IPython notebook, which however can't start. So I went ahead and modified /usr/local/share/jupyter/kernels/python2/kernel.json ``` { "display_name": "Python 2", "language": "python", "argv": [ "/Users/sonium/anaconda/envs/python2/bin/python", "-m", "IPython.kernel", "-f", "{connection_file}" ], "env":{"PYTHONHOME":"~/anaconda/envs/python2/:~/anaconda/envs/python2/lib/"} } ``` Now when I start the python2 kernel it fails with: ``` ImportError: No module named site ```
2015/04/15
[ "https://Stackoverflow.com/questions/29648412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1217949/" ]
Apparently IPython expects explicit path names, so `~` cannot be used as a shorthand for the home directory. It worked after changing the kernel.json to:

```
{
 "display_name": "Python 2",
 "language": "python",
 "argv": [
  "/Users/sonium/anaconda/envs/python2/bin/python2.7",
  "-m",
  "IPython.kernel",
  "-f",
  "{connection_file}"
 ],
 "env":{"PYTHONHOME":"/Users/sonium/anaconda/envs/python2"}
}
```
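The same point can be checked from Python itself: `~` is only meaningful to a shell, so a path containing it has to be expanded explicitly before it goes into `kernel.json` (the path below is the one from the question):

```python
import os.path

# Python (and the kernel spec) will not expand "~" on its own;
# os.path.expanduser produces the absolute form that kernel.json needs.
home_relative = "~/anaconda/envs/python2"
absolute = os.path.expanduser(home_relative)

print(absolute)  # e.g. /Users/sonium/anaconda/envs/python2
```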
I installed Anaconda 3 on Win10. I now focus on Python 3, but I have a lot of projects written in Python 2. If I open them in Jupyter in a Python 3 environment they fail and show a "kernel error". The solution is almost like the one above, but slightly different. The path to those two kernel json files is: `C:\ProgramData\jupyter\kernels` Sometimes this folder is hidden. Another path you need to check, after you create a python2 environment in Anaconda, is: `C:\Users\username\Anaconda3\envs\python2\python.exe` Copy it into your python2 kernel json file and it should work.
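For reference, a Windows `kernel.json` along the lines described above might look like the sketch below. The paths and the kernel module are illustrative assumptions, not from the original answer (older IPython installs use `IPython.kernel` instead of `ipykernel_launcher`); note that JSON requires the backslashes in Windows paths to be doubled:

```json
{
 "display_name": "Python 2",
 "language": "python",
 "argv": [
  "C:\\Users\\username\\Anaconda3\\envs\\python2\\python.exe",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ]
}
```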
13,545
16,033,348
I have a list of items that relate to each other; I know they end up building a graph, but the data is not sorted in any way.

```
PMO-100 -> SA-300
SA-100 -> SA-300
SA-100 -> SA-200
PMO-100 -> SA-100
```

In Python examples for the graphViz API I see that you can pretty much generate a graph if you know its top node and can run down the levels and build all the relations. Since I'm getting an unsorted list (I only know that it is a graph since all relations are reachable from a single point) - is there a way to build a graph on top of this data? **Update:** the first comment to this question points out that I haven't correctly explained what I'm after. Indeed the following code gets you a proper graph:

```
gr = pydot.Dot(graph_type='digraph')
for link in graph.links:
    edge = pydot.Edge(link.outwardIssue.key, link.inwardIssue.key)
    gr.add_edge(edge)
gr.write('graph.png',format='png')
```

My question really is - how can I color-code individual nodes or change the type of arrow for a specific edge?
2013/04/16
[ "https://Stackoverflow.com/questions/16033348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/220255/" ]
There is no need to identify a top node for graphviz. Just add all Nodes and edges and let it do the rest. For example: ``` import pydot graph = pydot.Dot('graphname', graph_type='digraph') pmo100 = pydot.Node("PMO-100") sa300 = pydot.Node("SA-300") sa100 = pydot.Node("SA-100") sa200 = pydot.Node("SA-200") graph.add_edge(pydot.Edge(pmo100, sa300)) graph.add_edge(pydot.Edge(sa100, sa300)) graph.add_edge(pydot.Edge(sa100, sa200)) graph.add_edge(pydot.Edge(pmo100, sa100)) graph.write_png('example1_graph.png') ``` This will result in the following image: ![result](https://i.stack.imgur.com/IJxg0.png) You can find more info at <http://pythonhaven.wordpress.com/2009/12/09/generating_graphs_with_pydot/>
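On the "unsorted data" part of the question: the order of the edge list never matters, and if you do want a root node you can recover it yourself. A small stdlib-only sketch (independent of pydot, node names taken from the question):

```python
# The edge list from the question, in no particular order.
edges = [
    ("PMO-100", "SA-300"),
    ("SA-100", "SA-300"),
    ("SA-100", "SA-200"),
    ("PMO-100", "SA-100"),
]

# Even though graphviz does not need a root, you can recover one:
# the "top" nodes are exactly those with no incoming edges.
nodes = {n for edge in edges for n in edge}
targets = {dst for _, dst in edges}
roots = nodes - targets

print(sorted(roots))  # ['PMO-100']
```

For the color/arrow part of the question, pydot accepts Graphviz attributes as keyword arguments, e.g. `pydot.Node("SA-100", style="filled", fillcolor="lightblue")` or `pydot.Edge(a, b, arrowhead="vee")`.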
Try out the [pygraph](http://github.com/iamaziz/pygraph) package; it produces a directed graph from relation statements (data), whether or not the data are sorted. So (as in your case) if we have the following data (as triple relations), it is easy to produce the corresponding graph: ### First way ``` s1 = "PMO-100 -> SA-300" s2 = "SA-100 -> SA-300" s3 = "SA-100 -> SA-200" s4 = "PMO-100 -> SA-100" from pygraph.dgraph import PyGraph g = PyGraph() g.add_relation(s1) g.add_relation(s2) g.add_relation(s3) g.add_relation(s4) g.draw_graph(label=False) ``` It will produce: ![enter image description here](https://i.stack.imgur.com/qs0Dx.png) Or you can even read the relations from a file. ### Second way Say you have `data.txt`: ``` PMO-100 -> SA-300 SA-100 -> SA-300 SA-100 -> SA-200 PMO-100 -> SA-100 ``` It is even easier, in 4 LOC: ``` from pygraph.dgraph import PyGraph g = PyGraph() g.file_relations("data.txt") g.draw_graph(label=False) ``` Either way, the result is the same: ![enter image description here](https://i.stack.imgur.com/qs0Dx.png)
13,546
50,860,578
I have the following situation: ``` for x1 in range(x1, x2): for x2 in range(x3, x4): for x3 ... ... f(x1, x2, x3, ...) ``` How to convert this to a mechanism in which I only tell python to make *n* nested loops where the variable name is x1, x2, x3, x4, ...? I don't want to write every possibility manually of course, since there might be very many dimensions.
2018/06/14
[ "https://Stackoverflow.com/questions/50860578", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8472333/" ]
What you want to do is iterate over a product. Use [`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product). ``` import itertools ranges = [range(x1, x2), range(x3, x4), ...] for xs in itertools.product(*ranges): f(*xs) ``` Example ------- ``` import itertools ranges = [range(0, 2), range(1, 3), range(2, 4)] for xs in itertools.product(*ranges): print(*xs) ``` Output ------ ``` 0 1 2 0 1 3 0 2 2 0 2 3 1 1 2 1 1 3 1 2 2 1 2 3 ```
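Since the bounds usually arrive as data rather than as named variables, the list of ranges can itself be built programmatically, so the number of dimensions *n* never appears in the code (the bounds below are made-up values):

```python
import itertools

# Hypothetical bounds, one (start, stop) pair per dimension.
bounds = [(0, 2), (1, 3), (2, 4)]
ranges = [range(start, stop) for start, stop in bounds]

# One tuple per combination, regardless of how many dimensions there are.
combos = list(itertools.product(*ranges))
print(len(combos))  # 2 * 2 * 2 = 8
print(combos[0])    # (0, 1, 2)
```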
Recommended: itertools ---------------------- [`itertools`](https://docs.python.org/3/library/itertools.html) is an awesome package for everything related to iteration: ``` from itertools import product x1 = 3; x2 = 4 x3 = 0; x4 = 2 x5 = 42; x6 = 42 for x, y, z in product(range(x1, x2), range(x3, x4), range(x4, x5)): print(x, y, z) ``` gives ``` 3 0 2 3 0 3 3 0 4 3 0 5 3 0 6 3 0 7 3 0 8 3 0 9 3 0 10 3 0 11 3 0 12 3 0 13 3 0 14 3 0 15 3 0 16 3 0 17 3 0 18 3 0 19 3 0 20 3 0 21 3 0 22 3 0 23 3 0 24 3 0 25 3 0 26 3 0 27 3 0 28 3 0 29 3 0 30 3 0 31 3 0 32 3 0 33 3 0 34 3 0 35 3 0 36 3 0 37 3 0 38 3 0 39 3 0 40 3 0 41 3 1 2 3 1 3 3 1 4 3 1 5 3 1 6 3 1 7 3 1 8 3 1 9 3 1 10 3 1 11 3 1 12 3 1 13 3 1 14 3 1 15 3 1 16 3 1 17 3 1 18 3 1 19 3 1 20 3 1 21 3 1 22 3 1 23 3 1 24 3 1 25 3 1 26 3 1 27 3 1 28 3 1 29 3 1 30 3 1 31 3 1 32 3 1 33 3 1 34 3 1 35 3 1 36 3 1 37 3 1 38 3 1 39 3 1 40 3 1 41 ``` Create a cross-product function yourself ---------------------------------------- ``` def product(*args): pools = map(tuple, args) result = [[]] for pool in pools: result = [x + [y] for x in result for y in pool] for prod in result: yield tuple(prod) ```
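A quick sanity check (not part of the original answer) that the hand-rolled generator yields the same tuples in the same order as `itertools.product`:

```python
import itertools

def product(*args):
    # Same hand-rolled cross product as in the answer above.
    pools = map(tuple, args)
    result = [[]]
    for pool in pools:
        result = [x + [y] for x in result for y in pool]
    for prod in result:
        yield tuple(prod)

# Both produce the full cross product in the same (row-major) order.
assert list(product(range(2), "ab")) == list(itertools.product(range(2), "ab"))
print(list(product(range(2), "ab")))
# [(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')]
```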
13,548
55,138,466
I'm scraping a [website](https://www.cofidis.es/es/creditos-prestamos/financiacion-coche.html). I'm trying to click on a link under `<li>` but it throws a `NoSuchElementException`. The links I want to click: [![enter image description here](https://i.stack.imgur.com/2BVsw.png)](https://i.stack.imgur.com/2BVsw.png) I'm using the code below:

```
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--start-maximized')
chrome_options.add_argument('window-size=5000x2500')
webdriver = webdriver.Chrome('chromedriver',chrome_options=chrome_options)
url = "https://www.cofidis.es/es/creditos-prestamos/financiacion-coche.html"
webdriver.get(url)
webdriver.find_element_by_xpath('//*[@id="btncerrar"]').click()
time.sleep(5)
webdriver.find_element_by_link_text('Préstamo Coche Nuevo').click()
webdriver.save_screenshot('test1.png')
```

The error I got:

```
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:10: DeprecationWarning: use options instead of chrome_options
  # Remove the CWD from sys.path while we load stuff.
---------------------------------------------------------------------------
NoSuchElementException                    Traceback (most recent call last)
<ipython-input-44-f6608be53ab3> in <module>()
     13 webdriver.find_element_by_xpath('//*[@id="btncerrar"]').click()
     14 time.sleep(5)
---> 15 webdriver.find_element_by_link_text('Préstamo Coche Nuevo').click()
     16 webdriver.save_screenshot('test1.png')

/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py in find_element_by_link_text(self, link_text)
    426             element = driver.find_element_by_link_text('Sign In')
    427         """
--> 428         return self.find_element(by=By.LINK_TEXT, value=link_text)
    429
    430     def find_elements_by_link_text(self, text):

/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py in find_element(self, by, value)
    976         return self.execute(Command.FIND_ELEMENT, {
    977             'using': by,
--> 978             'value': value})['value']
    979
    980     def find_elements(self, by=By.ID, value=None):

/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
    240             alert_text = value['alert'].get('text')
    241             raise exception_class(message, screen, stacktrace, alert_text)
--> 242         raise exception_class(message, screen, stacktrace)
    243
    244     def _value_or_default(self, obj, key, default):

NoSuchElementException: Message: no such element: Unable to locate element: {"method":"link text","selector":"Préstamo Coche Nuevo"}
  (Session info: headless chrome=72.0.3626.121)
  (Driver info: chromedriver=72.0.3626.121,platform=Linux 4.14.79+ x86_64)
```
2019/03/13
[ "https://Stackoverflow.com/questions/55138466", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11155464/" ]
On the radio button's `onchange`, select the input elements using `document.querySelector` and add the `required` attribute to them with `setAttribute`: ```js function a() { document.querySelector('.one').setAttribute('required','required'); document.querySelector('.five').setAttribute('required','required'); } ``` ```html <input type="radio" name="customer" id="customer" onchange='a()' value="A customer">Customer<br> <input type="radio" name="customer" id="client" value="A client">Customer<br> <input type="radio" name="customer" id="other" value="Other">Customer<br> <input type="text" placeholder="bla bla" name="referenceno" class='one'> <input type="text" placeholder="bla bla" name="referenceno" class='2'> <input type="text" placeholder="bla bla" name="referenceno" class='3'> <input type="text" placeholder="bla bla" name="referenceno" class='4'> <input type="text" placeholder="bla bla" name="referenceno" class='five'> <input type="text" placeholder="bla bla" name="referenceno" class='5'> <input type="text" placeholder="bla bla" name="referenceno" class='7'> ```
You can add class to the input elements by matching the id of the radio button. Then on clicking on the button add the *required* attribute with that class name: ```js var radio = [].slice.call(document.querySelectorAll('[name=customer]')); radio.forEach(function(r){ r.addEventListener('click', function(){ var allInput = document.querySelectorAll('[type="text"]'); [].slice.call(allInput).forEach(function(el){ el.required = false; }); document.querySelector('.'+this.id).required = true; }); }); ``` ```html <form> <input type="radio" name="customer" id="customer" value="A customer">Customer<br> <input type="radio" name="customer" id="client" value="A client">Customer<br> <input type="radio" name="customer" id="other" value="Other">Customer<br> <input type="text" placeholder="bla bla" name="referenceno" class="customer"> <input type="text" placeholder="bla bla" name="referenceno" class="client"> <input type="text" placeholder="bla bla" name="referenceno" class="other"> <button type="submit">Login</button> </form> ```
13,554
69,086,563
I am trying to read a CT scan DICOM file using the pydicom Python library, but I can't get rid of the error below, even after installing gdcm and pylibjpeg:

```
RuntimeError: The following handlers are available to decode the pixel data however they are missing required dependencies: GDCM (req. ), pylibjpeg (req. )
```

Here is my code:

```
!pip install pylibjpeg pylibjpeg-libjpeg pylibjpeg-openjpeg
!pip install python-gdcm

import gdcm
import pylibjpeg
import numpy as np
import pydicom
from pydicom.pixel_data_handlers.util import apply_voi_lut
import matplotlib.pyplot as plt
%matplotlib inline

def read_xray(path, voi_lut = True, fix_monochrome = True):
    dicom = pydicom.read_file(path)
    # VOI LUT (if available by DICOM device) is used to transform raw DICOM data to "human-friendly" view
    if voi_lut:
        data = apply_voi_lut(dicom.pixel_array, dicom)
    else:
        data = dicom.pixel_array
    # depending on this value, X-ray may look inverted - fix that:
    if fix_monochrome and dicom.PhotometricInterpretation == "MONOCHROME1":
        data = np.amax(data) - data
    data = data - np.min(data)
    data = data / np.max(data)
    data = (data * 255).astype(np.uint8)
    return data

img = read_xray('/content/ActiveTB/2018/09/17/1.2.392.200036.9116.2.6.1.44063.1797735841.1537157438.869027/1.2.392.200036.9116.2.6.1.44063.1797735841.1537157440.863887/1.2.392.200036.9116.2.6.1.44063.1797735841.1537154539.142332.dcm')
plt.figure(figsize = (12,12))
plt.imshow(img)
```

Here is the link to the image on which I am trying to run this code:

```
https://drive.google.com/file/d/1-xuryA5VlglaumU2HHV7-p6Yhgd6AaCC/view?usp=sharing
```
2021/09/07
[ "https://Stackoverflow.com/questions/69086563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15054164/" ]
Try running the following: ``` !pip install pylibjpeg !pip install gdcm ```
In a [similar problem](https://stackoverflow.com/questions/62159709/pydicom-read-file-is-only-working-with-some-dicom-images) as yours, we could see that the problem persists at the *Pixel Data* level. You need to [install one or more optional libraries](https://pydicom.github.io/pydicom/stable/tutorials/installation.html#install-the-optional-libraries), so that you can handle the various compressions. First, you should do: ``` $ pip uninstall pycocotools $ pip install pycocotools --no-binary :all: --no-build-isolation ``` From here, you should do as follows: ``` $ pip install pylibjpeg pylibjpeg-libjpeg pydicom ``` Then, your code should look like this: ``` from pydicom import dcmread import pylibjpeg import gdcm import numpy as np import pydicom from pydicom.pixel_data_handlers.util import apply_voi_lut import matplotlib.pyplot as plt %matplotlib inline def read_xray(path, voi_lut = True, fix_monochrome = True): dicom = dcmread(path) # VOI LUT (if available by DICOM device) is used to transform raw DICOM data to "human-friendly" view if voi_lut: data = apply_voi_lut(dicom.pixel_array, dicom) else: data = dicom.pixel_array # depending on this value, X-ray may look inverted - fix that: if fix_monochrome and dicom.PhotometricInterpretation == "MONOCHROME1": data = np.amax(data) - data data = data - np.min(data) data = data / np.max(data) data = (data * 255).astype(np.uint8) return data img = read_xray('/content/ActiveTB/2018/09/17/1.2.392.200036.9116.2.6.1.44063.1797735841.1537157438.869027/1.2.392.200036.9116.2.6.1.44063.1797735841.1537157440.863887/1.2.392.200036.9116.2.6.1.44063.1797735841.1537154539.142332.dcm') plt.figure(figsize = (12,12)) plt.imshow(img) ```
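To see what the normalisation steps inside `read_xray` actually do, here is the same arithmetic on a tiny made-up list standing in for `dicom.pixel_array` (pure Python, mirroring the NumPy operations step by step):

```python
# Tiny stand-in for dicom.pixel_array, just to trace the normalisation.
data = [0, 500, 1000, 2000]

# MONOCHROME1 means smaller stored values are brighter, so invert:
inverted = [max(data) - v for v in data]          # [2000, 1500, 1000, 0]

# Shift to zero and rescale to the full 8-bit range (0..255):
lo, hi = min(inverted), max(inverted)
pixels = [int((v - lo) / (hi - lo) * 255) for v in inverted]

print(pixels)  # [255, 191, 127, 0]
```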
13,559
52,478,863
I have an input file *div.txt* that looks like this: ``` <div>a</div>b<div>c</div> <div>d</div> ``` Now I want to pick all the *div* tags and the text between them using *sed*: ``` sed -n 's:.*\(<div>.*</div>\).*:\1:p' < div.txt ``` The result I get: ``` <div>c</div> <div>d</div> ``` What I really want: ``` <div>a</div> <div>c</div> <div>d</div> ``` So the question is, how to match the same pattern n times on the same line? (do not suggest me to use perl or python, please)
2018/09/24
[ "https://Stackoverflow.com/questions/52478863", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5459536/" ]
This might work for you (GNU sed): ``` sed 's/\(<\/div>\)[^<]*/\1\n/;/^</P;D' file ``` Replace a `</div>` followed by zero or more characters that are not a `<` by itself and a newline. Print only lines that begin with a `<`.
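The command can be checked quickly against the input from the question (assumes GNU sed, since `\n` in the replacement and the `P`/`D` two-address form are GNU extensions):

```shell
# Recreate div.txt from the question on stdin and run the answer's command.
printf '<div>a</div>b<div>c</div>\n<div>d</div>\n' |
sed 's/\(<\/div>\)[^<]*/\1\n/;/^</P;D'
# <div>a</div>
# <div>c</div>
# <div>d</div>
```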
Sed is not the right tool to handle HTML. But if you really insist, and you know your input will always have properly closed pairs of div tags, you can just replace everything that's not inside a div by a newline: ``` sed 's=</div>.*<div>=</div>\n<div>=' ```
13,560
62,026,087
I have a simple Glue pythonshell job and for testing purpose I just have print("Hello World") in it. I have given it the required AWSGlueServiceRole. When I am trying to run the job it throws the following error: ``` Traceback (most recent call last): File "/tmp/runscript.py", line 114, in <module> temp_file_path = download_user_script(args.scriptLocation) File "/tmp/runscript.py", line 91, in download_user_script download_from_s3(args.scriptLocation, temp_file_path) File "/tmp/runscript.py", line 81, in download_from_s3 s3.download_file(bucket_name, s3_key, new_file_path) File "/usr/local/lib/python3.6/site-packages/boto3/s3/inject.py", line 172, in download_file extra_args=ExtraArgs, callback=Callback) File "/usr/local/lib/python3.6/site-packages/boto3/s3/transfer.py", line 307, in download_file future.result() File "/usr/local/lib/python3.6/site-packages/s3transfer/futures.py", line 106, in result return self._coordinator.result() File "/usr/local/lib/python3.6/site-packages/s3transfer/futures.py", line 265, in result raise self._exception File "/usr/local/lib/python3.6/site-packages/s3transfer/tasks.py", line 255, in _main self._submit(transfer_future=transfer_future, **kwargs) File "/usr/local/lib/python3.6/site-packages/s3transfer/download.py", line 345, in _submit **transfer_future.meta.call_args.extra_args File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 661, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden ``` When I add S3 full access policy to the role, then the job runs successfully. I am not able to debug what is wrong
2020/05/26
[ "https://Stackoverflow.com/questions/62026087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133469/" ]
In Glue you need to attach S3 policies to the Amazon Glue Role that you are using to run the job. When you define the job you select the role. In this example it is AWSGlueServiceRole-S3IAMRole. That does not have S3 access until you assign it. [![enter image description here](https://i.stack.imgur.com/AzqTa.png)](https://i.stack.imgur.com/AzqTa.png) ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "s3:*", "Resource": "*" } ] } ```
You need to have the permissions to the bucket **and** the script resource. Try adding the following inline policy: ``` { "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:*" ], "Resource": "arn:aws:s3:::myBucket/*", "Effect": "Allow" }, { "Action": [ "s3:*" ], "Resource": "arn:aws:s3:::myBucket", "Effect": "Allow" } ] } ```
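If `s3:*` is broader than you want, a narrower variant granting only read access works for downloading the script; this is an illustrative sketch (the bucket name is a placeholder, and your job may need extra actions such as `s3:PutObject` if it writes output):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::myBucket/*"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::myBucket"
        }
    ]
}
```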
13,561
69,455,665
In python, if class C inherits from two other classes C(A,B), and A and B have methods with identical names but different return values, which value will that method return on C?
2021/10/05
[ "https://Stackoverflow.com/questions/69455665", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
"Inherits two methods" isn't quite accurate. What happens is that `C` has a *method resolution order* (MRO), which is the list `[C, A, B, object]`. If you attempt to access a method that `C` does not define or override, the MRO determines which class will be checked next. If the desired method is defined in `A`, it shadows a method with the same name in `B`. ``` >>> class A: ... def foo(self): ... print("In A.foo") ... >>> class B: ... def foo(self): ... print("In B.foo") ... >>> class C(A, B): ... pass ... >>> C.mro() [<class '__main__.C'>, <class '__main__.A'>, <class '__main__.B'>, <class 'object'>] >>> C().foo() In A.foo ```
The MRO (method resolution order) will be followed: if you inherit from A and B in C, preference goes from left to right, so A is preferred and A's method will be called instead of B's.
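A minimal sketch showing the left-to-right preference, and that flipping the base-class order flips the result:

```python
class A:
    def foo(self):
        return "A"

class B:
    def foo(self):
        return "B"

class C(A, B):   # left-to-right: A is checked first
    pass

class D(B, A):   # flipped order: B is checked first
    pass

print(C().foo())  # A
print(D().foo())  # B
```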
13,562
28,747,456
I am trying to install a python package using pip in Linux Mint but keep getting this error message. Anyone know how to fix this?

```
alex@alex-Satellite-C660D ~ $ pip install django-registration-redux
Downloading/unpacking django-registration-redux
  Downloading django-registration-redux-1.1.tar.gz (63kB): 63kB downloaded
  Running setup.py (path:/tmp/pip_build_alex/django-registration-redux/setup.py) egg_info for package django-registration-redux
    no previously-included directories found matching 'docs/_build'
Installing collected packages: django-registration-redux
  Running setup.py install for django-registration-redux
    no previously-included directories found matching 'docs/_build'
    error: could not create '/usr/local/lib/python3.4/dist-packages/test_app': Permission denied
    Complete output from command /usr/bin/python3.4 -c "import setuptools, tokenize;__file__='/tmp/pip_build_alex/django-registration-redux/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-qi55qyaf-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib
    [... build_py copies the test_app, registration and registration/locale files ...]
    running install_lib
    creating /usr/local/lib/python3.4/dist-packages/test_app
    error: could not create '/usr/local/lib/python3.4/dist-packages/test_app': Permission denied
    ----------------------------------------
Cleaning up...
Command /usr/bin/python3.4 -c "import setuptools, tokenize;__file__='/tmp/pip_build_alex/django-registration-redux/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-qi55qyaf-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_alex/django-registration-redux
Storing debug log for failure in /home/alex/.pip/pip.log
```
2015/02/26
[ "https://Stackoverflow.com/questions/28747456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3541209/" ]
Autoprefixer doesn't run on its own. It needs to be run as part of postcss-cli like so: `postcss --use autoprefixer *.css -d build/` (from <https://github.com/postcss/autoprefixer#cli>) Install postcss-cli with `--save-dev` and then reformat your build:css to match `postcss --use autoprefixer -b 'last 2 versions' <assets/styles/main.css | cssmin > dist/main.css` Then let us know if you're still having problems.
Installation: `npm install -g postcss-cli autoprefixer` Usage (order matters!): `postcss main.css -u autoprefixer -d dist/`
13,563
11,729,662
``` print'Personal information, journal and more to come' x = raw_input() if x ==("Personal Information"): # wont print print' Edward , Height: 5,10 , EYES: brown , STATE: IL TOWN: , SS:' elif x ==("Journal"): # wont print read = open('C:\\python\\foo.txt' , 'r') name = read.readline() print (name) ``` I start the program and `"Personal information, journal and more to come"` shows, but when I type either `Personal information` or `journal`, neither of them prints the result and I'm not getting any errors.
2012/07/30
[ "https://Stackoverflow.com/questions/11729662", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1564130/" ]
> > when i type either Personal information or journal > > > Well, yeah. It isn't expecting either of those; your case is wrong. To perform a case-insensitive comparison, convert both to the same case first. ``` if foo.lower() == bar.lower(): ```
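A minimal runnable sketch of the case-insensitive comparison described above (the menu options are just placeholders):

```python
def matches(user_text, option):
    # normalize both sides with .lower() so capitalization no longer matters
    return user_text.lower() == option.lower()

print(matches("personal information", "Personal Information"))  # True
print(matches("JOURNAL", "Journal"))                            # True
print(matches("journal", "Personal Information"))               # False
```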
Works for me. Are you writing "Personal Information" with a capital I? ``` print'Personal information, journal and more to come' x = raw_input() if x == ("Personal Information"): # wont print print' Edward , Height: 5,10 , EYES: brown , STATE: IL TOWN: , SS:' elif x ==("Journal"): # wont print read = open('C:\\python\\foo.txt' , 'r') name = read.readline() print (name) ``` output: ``` [00:20: ~$] python py Personal information, journal and more to come Journal Traceback (most recent call last): File "py", line 8, in <module> read = open('C:\\python\\foo.txt' , 'r') IOError: [Errno 2] No such file or directory: 'C:\\python\\foo.txt' [00:20: ~$] python py Personal information, journal and more to come Personal Information Edward , Height: 5,10 , EYES: brown , STATE: IL TOWN: , SS: [00:20: ~$] ``` Perhaps it's the formatting? I'm using 4 white spaces.
13,566
47,705,274
I am learning bare metal programming in c++ and it often involves setting a portion of a 32 bit hardware register address to some combination. For example for an IO pin, I can set the 15th to 17th bit in a 32 bit address to `001` to mark the pin as an output pin. I have seen code that does this and I half understand it based on an explanation of another [SO question.](https://stackoverflow.com/a/6917002/1158977) ``` # here ra is a physical address # the 15th to 17th bits are being # cleared by AND-ing it with a value that is one everywhere # except in the 15th to 17th bits ra&=~(7<<12); ``` Another example is: ``` # this clears the 21st to 23rd bits of another address ra&=~(7<<21); ``` How do I choose the **7** and how do I choose the number of bits to shift left? I tried this out in **python** to see if I can figure it out ``` bin((7<<21)).lstrip('-0b').zfill(32) '00000000111000000000000000000000' # this has 8, 9 and 10 as the bits which is wrong ```
2017/12/07
[ "https://Stackoverflow.com/questions/47705274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1158977/" ]
The 7 (base 10) is chosen because its binary representation is 111 (7 in base 2). As for why it looks like bits 8, 9 and 10 are set: you're reading from the wrong direction. Binary, just as normal base 10, counts right to left. (I'd have left this as a comment but my reputation isn't high enough.)
> > How do I choose the 7 > > > You want to clear three adjacent bits. Three adjacent bits at the bottom of a word is 1+2+4=7. > > and how do I choose the number of bits to shift left > > > You want to clear bits 21-23, not bits 1-3, so you shift left another 20. Both your examples are wrong. To clear 15-17 you need to shift left 14, and to clear 21-23 you need to shift left 20. > > this has 8, 9,and 10 ... > > > No it doesn't. You're counting from the wrong end.
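The mask arithmetic from both answers can be checked directly in Python (bit positions counted from 1 here, matching the answers; the sample register value is just for illustration):

```python
# 7 in base 10 is 0b111: three adjacent set bits at the bottom of a word
mask = 7 << 14                 # lands on the 15th-17th bits (counting from 1)
print(bin(mask))               # 0b11100000000000000

ra = 0xFFFFFFFF                # sample register value with every bit set
ra &= ~mask                    # AND with the inverted mask clears those bits
print(hex(ra & 0xFFFFFFFF))    # 0xfffe3fff
```

Reading `bin(7 << 21)` right to left shows the set bits are positions 21-23 (zero-indexed), not 8-10.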
13,568
32,625,597
I am having trouble getting this python code to work right. it is a code to display pascal's triangle using binomials. I do not know what is wrong. The code looks like this ``` from math import factorial def binomial (n,k): if k==0: return 1 else: return int((factorial(n)//factorial(k))*factorial(n-k)) def pascals_triangle(rows): rows=20 for n in range (0,rows): for k in range (0,n+1): print(binomial(n,k)) print '\n' ``` This is what it keeps printing ``` 1 1 1 1 2 1 1 12 3 1 1 144 24 4 1 1 2880 360 40 5 1 1 86400 8640 720 60 6 1 1 3628800 302400 20160 1260 ``` and on and on. any help would be welcomed.!!
2015/09/17
[ "https://Stackoverflow.com/questions/32625597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5345562/" ]
You can use ``` SELECT id, date, product_id, sales FROM sales LIMIT X OFFSET Y; ``` where X is the size of the batch you need and Y is the current offset (X times the current iteration number, for example)
To expand on akalikin's answer, you can use a stepped iteration to split the query into chunks, and then use LIMIT and OFFSET to execute the query. ``` cur = con.cursor(mdb.cursors.DictCursor) cur.execute("SELECT COUNT(*) FROM sales") total = cur.fetchone()['COUNT(*)'] # fetchall() returns rows, not the count for i in range(0, total, 5): cur2 = con.cursor(mdb.cursors.DictCursor) cur2.execute("SELECT id, date, product_id, sales FROM sales LIMIT %s OFFSET %s" % (5, i)) rows = cur2.fetchall() print rows ```
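The batching idea translates to any SQL backend; here is a self-contained sketch with the standard-library sqlite3 module (the table and values are invented for the demonstration):

```python
import sqlite3

# In-memory stand-in for the sales table, to show LIMIT/OFFSET batching
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(i, i * 10) for i in range(12)])

total = con.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
batch = 5
for offset in range(0, total, batch):
    rows = con.execute("SELECT id, amount FROM sales LIMIT ? OFFSET ?",
                       (batch, offset)).fetchall()
    print(offset, len(rows))   # 0 5, then 5 5, then 10 2
```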
13,570
35,894,511
When i run ``` pip install --upgrade pip ``` I get this error message: ``` Collecting pip Downloading pip-8.1.0-py2.py3-none-any.whl (1.2MB) 100% |████████████████████████████████| 1.2MB 371kB/s Installing collected packages: pip Found existing installation: pip 8.0.2 Uninstalling pip-8.0.2: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 209, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 317, in run prefix=options.prefix_path, File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 725, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 752, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py",line 115, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 266, in renames shutil.move(old, new) File"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 303, in move os.unlink(src) OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site- packages/pip-8.0.2.dist-info/DESCRIPTION.rst' ``` Previously I had been struggling to install and run a couple of python modules so I remember moving files around a bit. Is that what has caused this error? How can I fix this? I am on Mac. I was trying to install bs4 prior to this and I got similar error messages. (But i suspect the bs4 install has more issues so that's another question for later). Also sorry for any format issues with the code. Have tried my best to make it look like it is on the terminal. Thanks.
2016/03/09
[ "https://Stackoverflow.com/questions/35894511", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5827690/" ]
A permission issue means your user privileges don't allow you to write to the desired folder (`/Library/Python/2.7/site-packages/pip/`). There are basically two things you can do: 1. run pip as sudo: ``` sudo pip install --upgrade pip ``` 2. Configure pip to install only for the current user, as covered [here](https://stackoverflow.com/questions/7143077/how-can-i-install-packages-in-my-home-folder-with-pip) and [here](https://scicomp.stackexchange.com/questions/2987/what-is-the-simplest-way-to-do-a-user-local-install-of-a-python-package).
From an administrator-mode command prompt, you can run the following command, which should fix the issue: ``` python -m ensurepip --user ``` Replace `python` with `python3` if your version supports that.
13,576
10,838,596
The below piece of code is giving me a error for some reason, Can someone tell me what would be the problem.. Basically, I create 2 classes Point & Circle..THe circle is trying to inherit the Point class. ``` Code: class Point(): x = 0.0 y = 0.0 def __init__(self, x, y): self.x = x self.y = y print("Point constructor") def ToString(self): return "{X:" + str(self.x) + ",Y:" + str(self.y) + "}" class Circle(Point): radius = 0.0 def __init__(self, x, y, radius): super(Point,self).__init__(x,y) self.radius = radius print("Circle constructor") def ToString(self): return super().ToString() + \ ",{RADIUS=" + str(self.radius) + "}" if __name__=='__main__': newpoint = Point(10,20) newcircle = Circle(10,20,0) ``` Error: ``` C:\Python27>python Point.py Point constructor Traceback (most recent call last): File "Point.py", line 29, in <module> newcircle = Circle(10,20,0) File "Point.py", line 18, in __init__ super().__init__(x,y) TypeError: super() takes at least 1 argument (0 given) ```
2012/05/31
[ "https://Stackoverflow.com/questions/10838596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1050619/" ]
It looks like you already may have fixed the original error, which was caused by `super().__init__(x,y)` as the error message indicates, although your fix was slightly incorrect, instead of `super(Point, self)` from the `Circle` class you should use `super(Circle, self)`. Note that there is another place that calls `super()` incorrectly, inside of `Circle`'s `ToString()` method: ``` return super().ToString() + \ ",{RADIUS=" + str(self.radius) + "}" ``` This is valid code on Python 3, but on Python 2 `super()` requires arguments, rewrite this as the following: ``` return super(Circle, self).ToString() + \ ",{RADIUS=" + str(self.radius) + "}" ``` I would also recommend getting rid of the line continuation, see the [Maximum Line Length section of PEP 8](http://www.python.org/dev/peps/pep-0008/#maximum-line-length) for the recommended way of fixing this.
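A minimal sketch of the two-argument super() calls the answer describes (this form works under both Python 2 and 3, provided the base class derives from object):

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def ToString(self):
        return "{X:%s,Y:%s}" % (self.x, self.y)

class Circle(Point):
    def __init__(self, x, y, radius):
        # name the *current* class in super(), not the parent
        super(Circle, self).__init__(x, y)
        self.radius = radius

    def ToString(self):
        return super(Circle, self).ToString() + ",{RADIUS=%s}" % self.radius

print(Circle(10, 20, 5).ToString())  # {X:10,Y:20},{RADIUS=5}
```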
`super(..)` works only with new-style classes. To fix it, make Point extend `object`. Like this: ``` class Point(object): ``` Also, the correct way of using super(..) is: ``` super(Circle,self).__init__(x,y) ```
13,582
71,391,688
`pip install pygame==2.0.0.dev10` — this package does not install under Python 3.8.2, so when I run the gaming project this error shows. Instead of this I installed `pip install pygame==2.0.0.dev12`; this package installs, but the same error is showing. Can anyone give me a solution?
2022/03/08
[ "https://Stackoverflow.com/questions/71391688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18405878/" ]
A redirect in htaccess works the same whether the target is a PDF file or anything else, as long as it is a **valid link**. This is an example of a 301 redirect to the link you want: `RedirectMatch 301 /subpage/ https://www.mywebsite.com/upload/files/file.pdf`
To target an HTML link to a specific page in a PDF file, add #page=[page number] to the end of the link's URL. For example, this HTML tag opens page 4 of a PDF file named myfile.pdf: ``` <A HREF="http://www.example.com/myfile.pdf#page=4"> ``` --- And If you want to redirect a single page to another you just need to insert this line in the .htaccess file: Redirect 301 /old-post <https://www.yourwebsite.com/myfiles/myfile.pdf> Going to replace the old and new addresses respectively. The code must be inserted after and before .
13,585
33,813,848
I only started using Python, so I don't know it very well. Can you help me express that a number should not be divisible by another integer given by me? For example, if a is not divisible by 2.
2015/11/19
[ "https://Stackoverflow.com/questions/33813848", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5583129/" ]
Check out the % operator. ``` if x%2 == 0: # x is divisible by 2 print x ```
"a is not divisible by 2": `(a % 2) != 0`
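Both answers boil down to the % (modulo) operator; a tiny runnable check:

```python
def not_divisible(a, n):
    # a is NOT divisible by n exactly when the remainder is non-zero
    return a % n != 0

print(not_divisible(7, 2))   # True: 7 is odd
print(not_divisible(10, 2))  # False: 10 divides evenly by 2
```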
13,588
49,626,707
``` from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.chrome.options import Options import time chrome_options = webdriver.ChromeOptions() prefs = {"profile.default_content_setting_values.notifications" : 2} chrome_options.add_experimental_option("prefs",prefs) driver = webdriver.Chrome("C:\\chromedriver\\chromedriver.exe") driver.maximize_window() driver.get("https://www.arttoframe.com/") time.sleep(6) driver.close() ``` Console Logs : ``` C:\Users\Dell\PycharmProjects\untitled\venv\Scripts\python.exe C:/Users/Dell/PycharmProjects/untitled/newaaa.py Process finished with exit code 0 ```
2018/04/03
[ "https://Stackoverflow.com/questions/49626707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9575127/" ]
As you have created an instance of `ChromeOptions()` as **chrome\_options** you need to pass it as an argument to put the configurations in effect while invoking `webdriver.Chrome()` as follows : ``` from selenium import webdriver from selenium.webdriver.chrome.options import Options import time chrome_options = webdriver.ChromeOptions() prefs = {"profile.default_content_setting_values.notifications" : 2} chrome_options.add_experimental_option("prefs", prefs) chrome_options.add_argument("start-maximized") driver = webdriver.Chrome(chrome_options=chrome_options, executable_path=r'C:\path\to\chromedriver.exe') driver.get("https://www.arttoframe.com/") time.sleep(6) driver.quit() ``` **Note** : * To maximize the Chrome browser instead of `maximize_window()` use the `ChromeOptions()` class argument **start-maximized** * At the end of your program instead of `driver.close()` use **driver.quit()**.
``` driver.switch_to_alert().dismiss() driver.switch_to_alert().accept() ``` These can be used: `dismiss()` for the right-hand button and `accept()` for the left
13,590
22,418,816
I'm trying to use multiprocessing to get a handle on my memory issues, however I can't get a function to pickle, and I have no idea why. My main code starts with ``` def main(): print "starting main" q = Queue() p = Process(target=file_unpacking,args=("hellow world",q)) p.start() p.join() if p.is_alive(): p.terminate() print "The results are in" Chan1 = q.get() Chan2 = q.get() Start_Header = q.get() Date = q.get() Time = q.get() return Chan1, Chan2, Start_Header, Date, Time def file_unpacking(args, q): print "starting unpacking" fileName1 = "050913-00012" unpacker = UnpackingClass() for fileNumber in range(0,44): fileName = fileName1 + str(fileNumber) + fileName3 header, data1, data2 = UnpackingClass.unpackFile(path,fileName) if header == None: logging.warning("curropted file found at " + fileName) Data_Sets1.append(temp_1) Data_Sets2.append(temp_2) Headers.append(temp_3) temp_1 = [] temp_2 = [] temp_3 = [] #for i in range(0,10000): # Chan1.append(0) # Chan2.append(0) else: logging.info(fileName + " is good!") temp_3.append(header) for i in range(0,10000): temp_1.append(data1[i]) temp_2.append(data2[i]) Data_Sets1.append(temp_1) Data_Sets2.append(temp_2) Headers.append(temp_3) temp_1 = [] temp_2 = [] temp_3 = [] lengths = [] for i in range(len(Data_Sets1)): lengths.append(len(Data_Sets1[i])) index = lengths.index(max(lengths)) Chan1 = Data_Sets1[index] Chan2 = Data_Sets2[index] Start_Header = Headers[index] Date = Start_Header[index][0] Time = Start_Header[index][1] print "done unpacking" q.put(Chan1) q.put(Chan2) q.put(Start_Header) q.put(Date) q.put(Time) ``` and currently I have the unpacking method in a separate python file that imports struct and os. This reads a part text part binary file, structures it, and then closes it. This is mostly leg work, so I won't post it yet, however if it helps I will. I will give the start ``` class UnpackingClass: def __init__(self): print "Unpacking Class" def unpackFile(path,fileName): import struct import os ....... 
``` Then I simply call main() to get the party started, and I get nothing but a infinite loop of pickle errors. Long story short I don't have any clue how to pickle a function. Everything is defined at the top of files, so I'm at a loss. Here is the error message ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\multiprocessing\forking.py", line 373, in main prepare(preparation_data) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\multiprocessing\forking.py", line 488, in prepare '__parents_main__', file, path_name, etc File "A:\598\TestCode\test1.py", line 142, in <module> Chan1, Chan2, Start_Header, Date, Time = main() File "A:\598\TestCode\test1.py", line 43, in main p.start() File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\multiprocessing\process.py", line 130, in start self._popen = Popen(self) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\multiprocessing\forking.py", line 271, in __init__ dump(process_obj, to_child, HIGHEST_PROTOCOL) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\multiprocessing\forking.py", line 193, in dump ForkingPickler(file, protocol).dump(obj) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\pickle.py", line 224, in dump self.save(obj) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\pickle.py", line 331, in save self.save_reduce(obj=obj, *rv) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\pickle.py", line 419, in save_reduce save(state) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\pickle.py", line 286, in save f(self, obj) # 
Call unbound method with explicit self File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\pickle.py", line 649, in save_dict self._batch_setitems(obj.iteritems()) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\pickle.py", line 681, in _batch_setitems save(v) File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\pickle.py", line 286, in save f(self, obj) # Call unbound method with explicit self File "C:\Users\Casey\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\pickle.py", line 748, in save_global (obj, module, name)) pickle.PicklingError: Can't pickle <function file_unpacking at 0x0000000007E1F048>: it's not found as __main__.file_unpacking ```
2014/03/15
[ "https://Stackoverflow.com/questions/22418816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2352742/" ]
Pickling a function is a very very relevant thing to do if you want to do any parallel computing. Python's `pickle` and `multiprocessing` are pretty broken for doing parallel computing, so if you aren't adverse to going outside of the standard library, I'd suggest `dill` for serialization, and `pathos.multiprocessing` as a `multiprocessing` replacement. `dill` can serialize almost anything in python, and `pathos.multiprocessing` uses `dill` to provide more robust parallel CPU use. For more information, see: [What can multiprocessing and dill do together?](https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together?rq=1) or this simple example: ``` Python 2.7.6 (default, Nov 12 2013, 13:26:39) [GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import dill >>> from pathos.multiprocessing import ProcessingPool >>> >>> def squared(x): ... return x**2 ... >>> pool = ProcessingPool(4) >>> pool.map(squared, range(10)) [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] >>> res = pool.amap(squared, range(10)) >>> res.get() [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] >>> res = pool.imap(squared, range(10)) >>> list(res) [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] >>> >>> def add(x,y): ... return x+y ... >>> pool.map(add, range(10), range(10)) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18] >>> res = pool.amap(add, range(10), range(10)) >>> res.get() [0, 2, 4, 6, 8, 10, 12, 14, 16, 18] >>> res = pool.imap(add, range(10), range(10)) >>> list(res) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18] ``` Both `dill` and `pathos` are available here: <https://github.com/uqfoundation>
You can technically pickle a function. But it's only a name reference that's being saved. When you unpickle, you must set up the environment so that the name reference makes sense to Python. Make sure to read [What can be pickled and unpickled](http://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled) carefully. If this doesn't answer your question, you'll need to provide us with the exact error messages. Also, please explain the purpose of pickling a function. Since you can only pickle a name reference and not the function itself, why can't you simply import and call the corresponding code?
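The "name reference" point can be seen with the standard pickle module: the pickle of a function holds only its module and name, so unpickling just re-imports it (a stdlib function is used here so the example is self-contained):

```python
import pickle
from math import sqrt

payload = pickle.dumps(sqrt)       # saves only "math" + "sqrt", not the code
restored = pickle.loads(payload)   # re-imports math and looks the name up

print(restored is sqrt)            # True: the very same function object
print(b"sqrt" in payload and b"math" in payload)  # True: just name references
```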
13,591
38,947,967
I'm trying to mungle my data from the following data frame to the one following it where the values in column B and C are combined to column names for the values in D grouped by the values in A. Below is a reproducible example. ``` set.seed(10) fooDF <- data.frame(A = sample(1:4, 10, replace=TRUE), B = sample(letters[1:4], 10, replace=TRUE), C= sample(letters[1:4], 10, replace=TRUE), D = sample(1:4, 10, replace=TRUE)) fooDF[!duplicated(fooDF),] A B C D 1 4 c b 2 2 4 d a 2 3 2 a b 4 4 3 c a 1 5 4 a b 3 6 4 b a 2 7 1 b d 2 8 1 a d 4 9 2 b a 3 10 2 d c 2 newdata <- data.frame(A = 1:4) for(i in 1:nrow(fooDF)){ col_name <- paste(fooDF$B[i], fooDF$C[i], sep="") newdata[newdata$A == fooDF$A[i], col_name ] <- fooDF$D[i] } ``` The format I am trying to get it in. ``` > newdata A cb da ab ca ba bd ad dc 1 1 NA NA NA NA NA 2 4 NA 2 2 NA NA 4 NA 3 NA NA 2 3 3 NA NA NA 1 NA NA NA NA 4 4 2 2 3 NA 2 NA NA NA ``` Right now I am doing it line by line but that is unfeasible for a large csv containing 5 million + lines. Is there a way to do it faster in R or python?
2016/08/15
[ "https://Stackoverflow.com/questions/38947967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3329254/" ]
`my Tuple $t` creates a `$t` variable such that any (re)assignment or (re)binding to it must (re)pass the `Tuple` type check. `= [1, 2]` assigns a reference to an `Array` object. The `Tuple` type check is applied (and passes). `$t.append(3)` modifies the contents of the `Array` object held in `$t` but does not reassign or rebind `$t` so there's no type check. Mutating-method-call syntax -- `$t.=append(3)` instead of `$t.append(3)` -- will trigger the type check on `$t`. There is specific syntax for a bounds-checked array (`my @array[2]` etc.) but I'm guessing that's not the point of your question.
The type constraint is bound to the scalar container, but the object it contains is just a plain old array - and an array's `append` method is unaware you want it to trigger a type check. If you want to explicitly trigger the check again, you could do a reassignment like `$t = $t`.
13,592
64,318,676
I am trying to scrape all email addresses from this index page - <http://www.uschess.org/assets/msa_joomla/AffiliateSearch/clubresultsnew.php?st=AL> I modified a python script to define the string, parse content with BS4 and save each unique address to an xls file: ``` import requests from bs4 import BeautifulSoup import xlwt wb = xlwt.Workbook() ws = wb.add_sheet('Emails') ws.write(0,0,'Emails') emailList= [] r=0 #add url of the page you want to scrape to urlString urlString='http://www.uschess.org/assets/msa_joomla/AffiliateSearch/clubresultsnew.php?st=AL' #function that extracts all emails from a page you provided and stores them in a list def emailExtractor(urlString): getH=requests.get(urlString) h=getH.content soup=BeautifulSoup(h,'html.parser') mailtos = soup.select('a[href^=mailto]') for i in mailtos: href=i['href'] try: str1, str2 = href.split(':') except ValueError: break emailList.append(str2) emailExtractor(urlString) #adding scraped emails to an excel sheet for email in emailList: r=r+1 ws.write(r,0,email) wb.save('emails.xls') ``` The xls file exports as expected, but with no email values. If anyone can explain why or how to simplify this solution it would be greatly appreciated!
2020/10/12
[ "https://Stackoverflow.com/questions/64318676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5705012/" ]
Your insert has 2 issues. Assuming the string '1,2,3,4' is the result of your decode (not at all clear on that) there is no new line ( chr(10)||chr(13) ) data in it. As a result there is only 1 value to be extracted. Perhaps the decoded result is actually a CSV. I proceed with that assumption. Your second issue is that the regexp parse routine produces 4 rows, not 4 columns. You will have to somehow convert those rows to columns before inserting. The most "natural" way to do that is the PIVOT operator. Note for demonstration I changed your data values from '1,2,3,4' to 'A,B,C,D'. See [fiddle](https://dbfiddle.uk/?rdbms=oracle_18&fiddle=2197bc4ed7ce4738dd2e6e6fb313b830). ``` insert into demo(col1,col2,col3,col4) with decoded(res) as ( select 'A,B,C,D' from dual) select * from (select level lev , regexp_substr(res,'[^,]+', 1, level) itm from decoded connect by regexp_substr(res, '[^,]+', 1, level) is not null) pivot ( max(itm) for lev in (1,2,3,4) ); ```
``` PROCEDURE LOAD_SPM_ITEM_SYNC(P_ENCODED_STRING IN CLOB,truncateflag in varchar2) IS r_records varchar2(3000); BEGIN declare cursor c_records IS select regexp_substr(utl_raw.cast_to_varchar2(utl_encode.base64_decode(utl_raw.cast_to_raw(P_ENCODED_STRING))), '[^'||CHR(10)||CHR(13)||']+', 1, level) from dual connect by regexp_substr(utl_raw.cast_to_varchar2(utl_encode.base64_decode(utl_raw.cast_to_raw(P_ENCODED_STRING))), '[^'||CHR(10)||CHR(13)||']+', 1, level) is not null ; Begin OPEN c_records; loop FETCH c_records INTO r_records; exit when c_records%NOTFOUND; insert into demo(column_1,column_2,column_3,column_4) select regexp_substr(r_records, '\d+', 1, 1) , regexp_substr(r_records, '\d+', 1, 2) , regexp_substr(r_records, '\d+', 1, 3) , regexp_substr(r_records, '\d+', 1, 4) from dual; end loop; End; --loop end LOAD_SPM_ITEM_SYNC; ``` This code will convert encoded string to values and they will be inserted into 4 columns and we can add a new regex\_substsr if we want a new column.
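Outside the database, the same decode-then-split pipeline can be sketched with Python's standard library (the sample payload below is invented for illustration):

```python
import base64

# hypothetical payload: Base64 text wrapping newline-separated CSV rows
encoded = base64.b64encode(b"1,2,3,4\n5,6,7,8")

decoded = base64.b64decode(encoded).decode("ascii")
rows = [line.split(",") for line in decoded.splitlines() if line]
for row in rows:
    print(row)   # each row splits into the four column values
```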
13,595
50,330,893
I use python-telegram-bot and I don't understand how to forward a message from the user to a telegram group. I have something like this: ``` def feed(bot, update): bot.send_message(chat_id=update.message.chat_id, text="reply this message") bot.forward_message(chat_id="telegram group", from_chat_id="username bot", message_id=?) ``` I need to forward the message the user shares, and the reply must be sent to a group. How is it possible?
2018/05/14
[ "https://Stackoverflow.com/questions/50330893", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9788313/" ]
From [documentation](https://python-telegram-bot.readthedocs.io/en/stable/telegram.bot.html#telegram.Bot.forward_message): **chat\_id** - Unique identifier for the target chat ... **from\_chat\_id** - Unique identifier for the ***chat where the original message was sent*** ... **message\_id** - Message identifier in the chat specified in from\_chat\_id. Solution: ``` def feed(bot, update): # send reply to the user bot.send_message(chat_id=update.message.chat_id, text='reply this message') # forward user message to group # note: group ID with the negative sign bot.forward_message(chat_id='-1010101001010', from_chat_id=update.message.chat_id, message_id=update.message.message_id) ```
It's easy with `Telethon`. You need the `chat_id` and `from_chat_id`. You can add a bot, and you will need the token.
13,596
25,044,403
I've had success with `mvn deploy` for about a week now, and suddenly it's not working. It used to prompt me for my passphrase (in a dialog window--I'm using Kleopatra on Windows 7, 32bit), but it's not any more. The only thing that's changed in the POM is the project's version number. There are two random outcomes, both bad: First, this output, which is printed *without me pressing any keys*: ``` R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava>mvn deploy [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building XBN-Java 0.1.4.1 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- build-helper-maven-plugin:1.8:attach-artifact (attach-artifacts) @ xbnjava --- [INFO] [INFO] --- maven-gpg-plugin:1.5:sign (sign-artifacts) @ xbnjava --- You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 OK Your orders please OK OK OK OK OK D 4204 OK OK OK ``` After pressing enter, this is the response: ``` gpg: Invalid passphrase; please try again ... gpg: Invalid passphrase; please try again ... You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 OK Your orders please OK OK OK OK OK D 5220 OK gpg: AllowSetForegroundWindow(5220) failed: Access is denied. OK OK OK gpg: AllowSetForegroundWindow(5220) failed: Access is denied. ``` It freezes again here, at which point I cancel with Ctrl+C ``` Terminate batch job (Y/N)? y R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava> ``` I see the `AllowSetForegroundWindow(5220)`, and found [this about it](http://lists.wald.intevation.org/pipermail/gpg4win-users-en/2009-August/000354.html), but it doesn't say anything specific to do. 
I tried it again and got this: ``` R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava>mvn deploy [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building XBN-Java 0.1.4.1 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- build-helper-maven-plugin:1.8:attach-artifact (attach-artifacts) @ xbnjava --- [INFO] [INFO] --- maven-gpg-plugin:1.5:sign (sign-artifacts) @ xbnjava --- You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 ``` I enter my passphrase here and press enter, but nothing happens. It's stuck. I cancel the process and it prints my passphrase in plain-text: ``` Terminate batch job (Y/N)? MY_PASSPHRASE_PRINTED_HERE Terminate batch job (Y/N)? y R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava> ``` I try it *again* and the original stuff happens again ("...OK Your orders please OK OK OK OK OK..."). I enter my passphrase, and it starts working (I'm not sure when it actually took, it seems now), meaning the jars are uploaded successfully, but at the end of the log, after the "BUILD SUCCESSFUL" message... ``` R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava>mvn deploy [INFO] Scanning for projects... 
[INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building XBN-Java 0.1.4.1 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- build-helper-maven-plugin:1.8:attach-artifact (attach-artifacts) @ xbnjava --- [INFO] [INFO] --- maven-gpg-plugin:1.5:sign (sign-artifacts) @ xbnjava --- You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ xbnjava --- [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\pom.xml to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1.pom [INFO] Installing R:\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1.jar to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1.jar [INFO] Installing R:\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1-javadoc.jar to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1-javadoc.jar [INFO] Installing R:\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1-sources.jar to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1-sources.jar [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\target\xbnjava-0.1.4.1.pom.asc to 
C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1.pom.asc [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\target\gpg\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1.jar.asc to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1.jar.asc [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\target\gpg\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1-javadoc.jar.asc to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1-javadoc.jar.asc [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\target\gpg\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1-sources.jar.asc to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1-sources.jar.asc [INFO] [INFO] --- maven-deploy-plugin:2.7:deploy (default-deploy) @ xbnjava --- Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.pom Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.pom (6 KB at 4.2 KB/sec) Downloading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/maven-metadata.xml Downloaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/maven-metadata.xml (314 B at 0.9 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/maven-metadata.xml Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/maven-metadata.xml (314 B at 1.4 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.jar Uploaded: 
https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.jar (630 KB at 439.6 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-javadoc.jar Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-javadoc.jar (5093 KB at 572.4 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-sources.jar Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-sources.jar (10728 KB at 609.9 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.pom.asc Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.pom.asc (499 B at 2.0 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.jar.asc Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.jar.asc (499 B at 1.9 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-javadoc.jar.asc Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-javadoc.jar.asc (499 B at 2.0 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-sources.jar.asc Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-sources.jar.asc (499 B at 2.2 KB/sec) [INFO] 
------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 37.724 s [INFO] Finished at: 2014-07-30T13:58:42-04:00 [INFO] Final Memory: 6M/17M [INFO] ------------------------------------------------------------------------ ``` It ends with this: ``` 'import site' failed; use -v for traceback Traceback (most recent call last): File "C:\applications\programming\python_341\Lib\cmd.py", line 45, in ? import string, sys File "C:\applications\programming\python_341\Lib\string.py", line 73 class Template(metaclass=_TemplateMetaclass): ^ SyntaxError: invalid syntax ``` Note that, with all those ``` You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 ``` I never touch the keyboard. --- Full settings.xml: ``` <?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <servers> <server> <id>ossrh</id> <username>MY_SONATYPE_DOT_COM_USERNAME</username> <password>MY_SONATYPE_DOT_COM_PASSWORD</password> </server> </servers> <pluginGroups></pluginGroups> <proxies></proxies> <mirrors></mirrors> <profiles></profiles> </settings> ``` Full pom.xml: ``` <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.github.aliteralmind</groupId> <artifactId>xbnjava</artifactId> <packaging>pom</packaging> <version>0.1.4.1</version> <name>XBN-Java</name> <url>https://github.com/aliteralmind/xbnjava</url> <inceptionYear>2014</inceptionYear> <organization> <name>Jeff Epstein</name> 
</organization> <description>XBN-Java is a collection of generically-useful backend (server side, non-GUI) programming utilities, featuring RegexReplacer and FilteredLineIterator. XBN-Java is the foundation of Codelet (http://codelet.aliteralmind.com).</description> <licenses> <license> <name>Lesser General Public License (LGPL) version 3.0</name> <url>https://www.gnu.org/licenses/lgpl-3.0.txt</url> </license> <license> <name>Apache Software License (ASL) version 2.0</name> <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url> </license> </licenses> <developers> <developer> <name>Jeff Epstein</name> <email>aliteralmind-github@yahoo.com</email> <roles> <role>Lead Developer</role> </roles> </developer> </developers> <issueManagement> <system>GitHub Issue Tracker</system> <url>https://github.com/aliteralmind/xbnjava/issues</url> </issueManagement> <distributionManagement> <snapshotRepository> <id>ossrh</id> <url>https://oss.sonatype.org/content/repositories/snapshots</url> </snapshotRepository> <repository> <id>ossrh</id> <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url> </repository> </distributionManagement> <scm> <connection>scm:git:git@github.com:aliteralmind/xbnjava.git</connection> <url>scm:git:git@github.com:aliteralmind/xbnjava.git</url> <developerConnection>scm:git:git@github.com:aliteralmind/xbnjava.git</developerConnection> </scm> <properties> <java.version>1.7</java.version> <jarprefix>R:\jeffy\programming\build\/${project.artifactId}-${project.version}/download/${project.artifactId}-${project.version}</jarprefix> </properties> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>build-helper-maven-plugin</artifactId> <version>1.8</version> <executions> <execution> <id>attach-artifacts</id> <phase>package</phase> <goals> <goal>attach-artifact</goal> </goals> <configuration> <artifacts> <artifact> <file>${jarprefix}.jar</file> <type>jar</type> </artifact> <artifact> <file>${jarprefix}-javadoc.jar</file> 
<type>jar</type> <classifier>javadoc</classifier> </artifact> <artifact> <file>${jarprefix}-sources.jar</file> <type>jar</type> <classifier>sources</classifier> </artifact> </artifacts> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-gpg-plugin</artifactId> <version>1.5</version> <executions> <execution> <id>sign-artifacts</id> <phase>verify</phase> <goals> <goal>sign</goal> </goals> </execution> </executions> </plugin> </plugins> </build> <!-- <profiles> This profile will sign the JAR file, sources file, and javadocs file using the GPG key on the local machine. See: https://docs.sonatype.org/display/Repository/How+To+Generate+PGP+Signatures+With+Maven <profile> <id>release-sign-artifacts</id> <activation> <property> <name>release</name> <value>true</value> </property> </activation> </profile> </profiles> --> <dependencies> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-collections4</artifactId> <version>4.0</version> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.4</version> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-lang3</artifactId> <version>3.3.2</version> </dependency> <dependency> <groupId>com.google.guava</groupId> <artifactId>guava</artifactId> <version>16.0</version> </dependency> </dependencies> </project> ```
2014/07/30
[ "https://Stackoverflow.com/questions/25044403", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2736496/" ]
Set your gpg.passphrase in your ~/.m2/settings.xml like so: ``` <server> <id>gpg.passphrase</id> <passphrase>clear or encrypted text</passphrase> </server> ``` Or pass it as a parameter when calling maven: ``` mvn -Dgpg.passphrase=yourpassphrase deploy ```
I uninstalled gpg4win (of which Kleopatra is part), restarted my computer, and re-installed gpg4win, and the passphrase issue went away. The dialog popped up and prompted me for my password. ``` R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava>mvn deploy [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building XBN-Java 0.1.4.1 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- build-helper-maven-plugin:1.8:attach-artifact (attach-artifacts) @ xbnjava --- [INFO] [INFO] --- maven-gpg-plugin:1.5:sign (sign-artifacts) @ xbnjava --- You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 You need a passphrase to unlock the secret key for user: "MY NAME HERE <MY_EMAIL_HERE@yahoo.com>" 2048-bit RSA key, ID 4AB64866, created 2014-07-15 [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ xbnjava --- [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\pom.xml to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1.pom [INFO] Installing R:\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1.jar to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1.jar [INFO] Installing R:\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1-javadoc.jar to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1-javadoc.jar [INFO] Installing 
R:\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1-sources.jar to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1-sources.jar [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\target\xbnjava-0.1.4.1.pom.asc to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1.pom.asc [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\target\gpg\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1.jar.asc to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1.jar.asc [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\target\gpg\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1-javadoc.jar.asc to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1-javadoc.jar.asc [INFO] Installing R:\jeffy\programming\sandbox\z__for_git_commit_only\xbnjava\target\gpg\jeffy\programming\build\xbnjava-0.1.4.1\download\xbnjava-0.1.4.1-sources.jar.asc to C:\Users\jeffy\.m2\repository\com\github\aliteralmind\xbnjava\0.1.4.1\xbnjava-0.1.4.1-sources.jar.asc [INFO] [INFO] --- maven-deploy-plugin:2.7:deploy (default-deploy) @ xbnjava --- Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.pom Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.pom (6 KB at 0.2 KB/sec) Downloading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/maven-metadata.xml Downloaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/maven-metadata.xml (314 B at 1.2 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/maven-metadata.xml Uploaded: 
https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/maven-metadata.xml (314 B at 1.2 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.jar Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.jar (630 KB at 453.2 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-javadoc.jar Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-javadoc.jar (5093 KB at 605.6 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-sources.jar Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-sources.jar (10728 KB at 609.6 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.pom.asc Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.pom.asc (499 B at 1.7 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.jar.asc Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1.jar.asc (499 B at 1.6 KB/sec) Uploading: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-javadoc.jar.asc Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-javadoc.jar.asc (499 B at 1.7 KB/sec) Uploading: 
https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-sources.jar.asc Uploaded: https://oss.sonatype.org/service/local/staging/deploy/maven2/com/github/aliteralmind/xbnjava/0.1.4.1/xbnjava-0.1.4.1-sources.jar.asc (499 B at 2.0 KB/sec) [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 01:18 min [INFO] Finished at: 2014-07-30T14:30:23-04:00 [INFO] Final Memory: 5M/15M [INFO] ------------------------------------------------------------------------ 'import site' failed; use -v for traceback Traceback (most recent call last): File "C:\applications\programming\python_341\Lib\cmd.py", line 45, in ? import string, sys File "C:\applications\programming\python_341\Lib\string.py", line 73 class Template(metaclass=_TemplateMetaclass): ^ SyntaxError: invalid syntax ``` There's still the invalid syntax issue at the end of the log, which I'm about to post in a followup question. Sorry. I hope this helps someone in the future. Or in the past. Whatever.
13,597
62,327,106
I am trying to make a class consisting of several methods and I want to use return values from methods as parameters for other methods within the same class. Is it possible to do so? ``` class Result_analysis(): def __init__(self, confidence_interval): self.confidence_interval = confidence_interval def read_file(self, file_number): dict_ = {1: 'Ten_Runs_avg-throughput_scalar.csv', 2: 'Thirty_Runs_avg-throughput_scalar.csv', 3: 'Hundred_Runs_avg-throughput_scalar.csv', 4: 'Thousand_Runs_avg-throughput_scalar.csv'} cols = ['run', 'ber', 'timelimit', 'repetition', 'Module', 'Avg_Throughput'] data = pd.read_csv(dict_[file_number], delimiter=',', skiprows=[0], names=cols) df = pd.DataFrame(data) return df def extract_arrays(self,df): df = Result_analysis().read_file(file_number) avgTP_10s_arr = [] avgTP_100s_arr = [] avgTP_1000s_arr = [] for i in range(len(data)): if (df['timelimit'][i] == 10): avgTP_10s_arr.append(df['Avg_Throughput'][i]) elif (df['timelimit'][i] == 100): avgTP_100s_arr.append(df['Avg_Throughput'][i]) elif (df['timelimit'][i] == 1000): avgTP_1000s_arr.append(df['Avg_Throughput'][i]) return avgTP_10s_arr, avgTP_100s_arr, avgTP_1000s_arr d = Result_analysis(0.95) d.read_file(1) d.extract_arrays(d.read_file(1)) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-92-485309654e5c> in <module> 1 d = Result_analysis(0.95) 2 d.read_file(1) ----> 3 d.extract_arrays(d.read_file(1)) <ipython-input-91-06bc29de002c> in extract_arrays(self, file_number) 15 16 def extract_arrays(self,file_number): ---> 17 df = Result_analysis().read_file(file_number) 18 avgTP_10s_arr = [] 19 avgTP_100s_arr = [] TypeError: __init__() missing 1 required positional argument: 'confidence_interval' ``` I get the error shown above.
2020/06/11
[ "https://Stackoverflow.com/questions/62327106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13628325/" ]
You didn't include all your code, and you should have updated the question with the traceback, but did you mean this: ``` n = ... # I don't know what n is. d = Result_analysis(0.95) print(d.extract_arrays(d.read_file(n))) ```
If you don't want to call the function read\_file explicitly from outside the class, then you can convert the program as follows: ``` class Result_analysis(): def __init__(self, confidence_interval): self.confidence_interval = confidence_interval def read_file(self, file_number): dict_ = {1: 'Ten_Runs_avg-throughput_scalar.csv', 2: 'Thirty_Runs_avg-throughput_scalar.csv', 3: 'Hundred_Runs_avg-throughput_scalar.csv', 4: 'Thousand_Runs_avg-throughput_scalar.csv'} cols = ['run', 'ber', 'timelimit', 'repetition', 'Module', 'Avg_Throughput'] data = pd.read_csv(dict_[file_number], delimiter=',', skiprows=[0], names=cols) df = pd.DataFrame(data) return df def extract_arrays(self, file_number): df = self.read_file(file_number) avgTP_10s_arr = [] avgTP_100s_arr = [] avgTP_1000s_arr = [] for i in range(len(df)): if (df['timelimit'][i] == 10): avgTP_10s_arr.append(df['Avg_Throughput'][i]) elif (df['timelimit'][i] == 100): avgTP_100s_arr.append(df['Avg_Throughput'][i]) elif (df['timelimit'][i] == 1000): avgTP_1000s_arr.append(df['Avg_Throughput'][i]) return avgTP_10s_arr, avgTP_100s_arr, avgTP_1000s_arr ``` Call the function `extract_arrays` and pass `file_number` as a parameter. Note that `self.read_file(file_number)` reuses the current instance; calling `Result_analysis()` with no arguments would raise the same `TypeError` as in the question, and the loop must iterate over `len(df)`, not the undefined `data`.
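The core fix, reduced to a minimal standalone sketch (the class and data here are made up, so it runs without pandas or the CSV files): inside a method, call a sibling method through `self` instead of constructing a fresh instance.

```python
class Analysis:
    def __init__(self, confidence_interval):
        self.confidence_interval = confidence_interval

    def read_file(self, file_number):
        # Stand-in for the pandas-reading method in the question.
        return [file_number, file_number * 2]

    def extract_arrays(self, file_number):
        # self.read_file(...) reuses *this* instance; writing
        # Analysis().read_file(...) would fail because __init__
        # requires confidence_interval.
        df = self.read_file(file_number)
        return [x for x in df if x > 1]

d = Analysis(0.95)
print(d.extract_arrays(2))  # [2, 4]
```

The same pattern applies unchanged to the pandas version: `extract_arrays` takes `file_number` and starts with `df = self.read_file(file_number)`.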
13,599
620,954
Are there any easy-to-use Python syntax-highlighting components that could be used in a GUI? It would be great to have something like JSyntaxPane for Python. I would like to know of Python-only versions (I am not interested in Jython).
2009/03/07
[ "https://Stackoverflow.com/questions/620954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31610/" ]
Other than pygments? <http://pygments.org/>
If you're using gtk+, there's a binding of gtksourceview for Python in [gnome-python-extras](http://ftp.gnome.org/pub/GNOME/sources/gnome-python-extras/2.25/). It seems to work well in my experience. The downside: the documentation is less than perfect. There's also a binding of [QScintilla](http://www.riverbankcomputing.co.uk/software/qscintilla/intro) for Python if PyQt is your thing.
13,600
60,396,222
I know there are similar questions, but I still wanted to ask because I couldn't solve it. Here is my code: ``` #! python3 import os my_path = 'E:\\Movies' for folder in os.listdir(my_path): size = os.path.getsize(folder) print(f'size of the {folder} is: {size} ') ``` And I got this error: ``` Traceback (most recent call last): File "c:/Users/ataba/OneDrive/Masaüstü/Programming/python/findingfiles.py", line 7, in <module> size = os.path.getsize(folder) File "C:\Program Files\Python37\lib\genericpath.py", line 50, in getsize return os.stat(filename).st_size FileNotFoundError: [WinError 2] The system cannot find the file specified: 'FordvFerrari1080p' ``` When I write `print(folder)` instead of getting their size, it prints the folder names, so the program does see them.
2020/02/25
[ "https://Stackoverflow.com/questions/60396222", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10577122/" ]
Temporary tables are created using the server's collation by default. It looks like your server's collation is `SQL_Latin1_General_CP1_CI_AS` and the database's (actually, the column's) is `Hebrew_CI_AS`, or vice versa. You can overcome this by using `collate database_default` in the temporary table's column definitions, e.g.: ``` create table #x ( ID int PRIMARY KEY, Company_Code nvarchar(20) COLLATE database_default, Cust nvarchar(20) COLLATE database_default, ... ) ``` This will create the columns using the current database's collation, not the server's.
In your temp table definition `#x`, add `COLLATE DATABASE_DEFAULT` to the string columns, like: ``` custName nvarchar(xx) COLLATE DATABASE_DEFAULT NOT NULL ```
13,606
29,723,590
I'm trying to set up Python to run on my local apache server on a mac. On `httpd.conf`, I've ``` <VirtualHost *:80> DocumentRoot "/Users/ptamzz/Sites/python/sandbox.ptamzz.com" <Directory "/Users/pritams/Sites/python/sandbox.ptamzz.com"> Options Indexes FollowSymLinks MultiViews ExecCGI AddHandler cgi-script .py Require all granted </Directory> DirectoryIndex index.py ServerName sandbox.ptamzz.com ErrorLog "/var/log/apache2/error-log" CustomLog "/var/log/apache2/access-log" common </VirtualHost> ``` On my document root, I've my `index.py` file as ``` #!/usr/bin/python print "Content-type: text/html" print "<html><body>Pritam's Page</body></html>" ``` But when I load the page on my browser, the python codes are returned as it is. ![Screenshot](https://i.stack.imgur.com/QxQ6s.png) What all am I missing?
2015/04/18
[ "https://Stackoverflow.com/questions/29723590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/383393/" ]
**Executing Python in Apache (CGI)** System: OS X Yosemite 10.10.3, default Apache **uncommented in the httpd config** ``` LoadModule cgi_module libexec/apache2/mod_cgi.so ``` **virtual host entry** ``` <VirtualHost *:80> # Hook virtual host domain name into physical location. ServerName python.localhost DocumentRoot "/Users/sj/Sites/python" # Log errors to a custom log file. ErrorLog "/private/var/log/apache2/pythonlocal.log" # Add python file extensions as CGI script handlers. AddHandler cgi-script .py # Be sure to add ** ExecCGI ** as an option in the document # directory so Apache has permissions to run CGI scripts. <Directory "/Users/sj/Sites/python"> Options Indexes MultiViews FollowSymLinks ExecCGI AddHandler cgi-script .cgi AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> ``` **index.py file** ``` #!/usr/bin/env python print "Content-type: text/html" print print "<html><body>Test Page</body></html>" ``` The file was made executable using ``` chmod a+x index.py ``` **restarted Apache, output:** ![python in apache](https://i.stack.imgur.com/JUXFt.png)
For later versions of Apache (2.4), the answer from Sojan V Jose still mostly applies, except that I got an authorization failure, which can be resolved by replacing: ``` Order allow,deny Allow from all ``` with: ``` Require all granted ```
13,607
70,125,767
I am a student, new to Python. I am trying to code a program that will tell if a user input number is a Fibonacci number or not. ``` num=int(input("Enter the number you want to check\n")) temp=1 k=0 a=0 summ=0 while summ<=num: summ=temp+k temp=summ k=temp if summ==num: a=a+1 print("Yes. {} is a fibonacci number".format(num)) break if a==0: print("No. {} is NOT a fibonacci number".format(num)) # The program only works for numbers up to 2. ```
2021/11/26
[ "https://Stackoverflow.com/questions/70125767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17517066/" ]
You don't need quite so many variables. A Fibonacci integer sequence can be generated with the single assignment ``` a, b = b, a + b # Short for # temp = a # a = b # b = temp + b ``` Repeated application sets `a` to the numbers in the pattern in sequence. The choice of initial values for `a` and `b` determines which exact sequence you get. The Fibonacci numbers are generated when `a = 0` and `b = 1` are used. ``` a = 0 b = 1 a, b = 1, 0 + 1 # 1, 1 a, b = 1, 1 + 1 # 1, 2 a, b = 2, 1 + 2 # 2, 3 a, b = 3, 2 + 3 # 3, 5 # etc ``` (As another example, the Lucas numbers are generated if you start with `a = 2` and `b = 1`.) All you need to do is return `True` if `n == a` at each step, iterating until `a > n`. If `n` wasn't one of the `a` values generated by the loop, you'll return `False`. Since `return` is only legal inside a function, wrap the check: ``` def is_fibonacci(n): a, b = 0, 1 while a <= n: if n == a: return True a, b = b, a + b return False n = int(input(...)) print(is_fibonacci(n)) ```
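For completeness, a different well-known check (not from this thread): a non-negative integer n is a Fibonacci number exactly when 5n² + 4 or 5n² − 4 is a perfect square. A sketch using only the standard library (the function name is mine):

```python
import math

def is_fib(n):
    """True iff n is a Fibonacci number (0, 1, 1, 2, 3, 5, ...)."""
    def is_square(x):
        if x < 0:
            return False
        r = math.isqrt(x)
        return r * r == x
    # Classic identity: n is Fibonacci iff 5*n^2 + 4 or 5*n^2 - 4
    # is a perfect square.
    return n >= 0 and (is_square(5 * n * n + 4) or is_square(5 * n * n - 4))

print([k for k in range(30) if is_fib(k)])  # [0, 1, 2, 3, 5, 8, 13, 21]
```

This avoids any loop over the sequence, at the cost of relying on the identity; `math.isqrt` requires Python 3.8+.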
I'd suggest something like this: ``` fib_terms = [0, 1] # first two Fibonacci terms user_input = int(input('Enter the number you want to check\n')) # Add new Fibonacci terms until the last one exceeds user_input while fib_terms[-1] <= user_input: fib_terms.append(fib_terms[-1] + fib_terms[-2]) if user_input in fib_terms: print(f'Yes. {user_input} is a fibonacci number.') else: print(f'No. {user_input} is NOT a fibonacci number.') ```
13,608
46,035,788
I'm trying to use Python to search through a directory of files, opening every single file in search of a string. We're talking about fewer than 1000 files with 1 to 3 lines each, so opening them all only takes a few seconds. Well, I think that I've made it, but there's a problem: the string I'm searching for is "**exit 0**". ``` #!/usr/bin/env python import glob scriptsdir = "/dummydir/*.ksh" string = "exit 0" files = glob.glob(scriptsdir) teste = 0 for script in files: f=open(script,"r") for line in f: if string in line: teste = teste + 1 f.close() print teste ``` This is frustrating because the code seems to work: as written, the final "teste" value equals the number of .ksh files in the directory, but if I change the string to **exit 1**, "teste" ends up 0. I'm literally searching for an "**exit 1**" string that exists in some of those files. Is it possible to do this? I've tested it, and if I change the string value to something else that exists, the count comes out right. Any help? Thank you,
2017/09/04
[ "https://Stackoverflow.com/questions/46035788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6834568/" ]
Because the start of your pattern matches the empty string: ``` /|google... ``` That means: nothing OR google ...
This pattern is invalid: ``` |google|robot|bot|spider|crawler|curl|Facebot|facebook|archiver|^$ ^ | +--- this character (an alternator) effectively renders the regex useless ``` This means that the regex could in theory match anything because the left part is empty. Moreover, `^$` on the right of the last alternator also means an empty string match. [Debuggex Demo](https://www.debuggex.com/r/4q2m9j1y71bNaA8x)
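The empty-alternative effect is easy to reproduce (a Python sketch; the original pattern may live in a web-server config, but the semantics are the same):

```python
import re

broken = r"|google|robot|bot"   # leading | adds an empty alternative
fixed = r"google|robot|bot"

# The empty alternative matches at position 0 of *any* string,
# so the broken pattern flags every user agent as a bot.
print(bool(re.search(broken, "Mozilla/5.0")))   # True

# Without it, only strings containing a listed token match.
print(bool(re.search(fixed, "Mozilla/5.0")))    # False
print(bool(re.search(fixed, "googlebot/2.1")))  # True
```

Dropping the leading `|` (and the trailing `|^$`, unless an empty user agent really should match) fixes the pattern.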
13,612
40,981,908
I'm new to AWS Lambda and pretty new to Python. I wanted to write a python lambda that uses the AWS API. boto is the most popular python module to do this so I wanted to include it. Looking at examples online I put `import boto3` at the top of my Lambda and it just worked- I was able to use boto in my Lambda. How does AWS know about boto? It's a community module. Are there a list of supported modules for Lambdas? Does AWS cache its own copy of community modules?
2016/12/05
[ "https://Stackoverflow.com/questions/40981908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1028270/" ]
[The documentation](http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html) seems to suggest `boto3` is provided by default on AWS Lambda: > > AWS Lambda includes the AWS SDK for Python (Boto 3), so you don't need to include it in your deployment package. However, if you want to use a version of Boto3 other than the one included by default, you can include it in your deployment package. > > > As far as I know, you will need to manually install any other dependencies in your deployment package, as shown in the linked documentation, using: ``` pip install foobar -t <project path> ```
AWS Lambda includes the AWS SDK for Python (Boto 3), so you don't need to include it in your deployment package. This link will give you a little more in-depth info on Lambda environment <https://aws.amazon.com/blogs/compute/container-reuse-in-lambda/> And this too <https://alestic.com/2014/12/aws-lambda-persistence/>
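The point made in both answers — boto3 ships with the Lambda runtime, everything else must be bundled in the deployment package — can be checked programmatically before deploying. A small sketch (the module names are examples; actual runtime contents vary by runtime version):

```python
import importlib.util

def bundled(name):
    """Return True if the module can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

# The standard library is always present; arbitrary third-party
# packages usually are not and need `pip install <pkg> -t <project path>`.
print(bundled("json"))                     # True
print(bundled("surely_not_preinstalled"))  # False
```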
13,613
26,278,386
I have a list or array (what is the correct term in python?) of objects. What is the most efficient way to get all objects matching a condition? I could iterate over the list and check each element, but that doesn't seem very efficient.

```
objects = []
for object in list:
    if object.value == "123":
        objects.add(object)
```
2014/10/09
[ "https://Stackoverflow.com/questions/26278386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1474934/" ]
This is the simplest way: ``` objects = [x for x in someList if x.value == 123] ``` If you're looking for a *faster* solution, you have to tell us more about your objects and how the source list is built. For example, if the property in question is unique among the objects, you can have better performance using `dict` instead of `list`. Another option is to keep the list sorted and use `bisect` instead of the linear search. Do note however, that these optimization efforts start making sense when the list grows really big, if you have less than ~500 elements, just use a comprehension and stop worrying.
You can use `filter` ``` objects = filter(lambda i: i.value == '123', l) ``` *Note to self* Apparently the ["filter vs list comp"](https://stackoverflow.com/questions/3013449/list-filtering-list-comprehension-vs-lambda-filter) debate starts flame wars.
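A runnable side-by-side of the approaches discussed — comprehension, `filter`, and a dict index for the case where the same attribute is queried repeatedly — using a stand-in `Obj` class, since the original objects aren't shown:

```python
from collections import defaultdict

class Obj:
    def __init__(self, value):
        self.value = value

items = [Obj("123"), Obj("456"), Obj("123")]

# List comprehension — the usual idiom.
matches = [o for o in items if o.value == "123"]

# filter — returns an iterator in Python 3, so wrap in list().
matches_f = list(filter(lambda o: o.value == "123", items))

# Dict index — an O(n) build that makes each later lookup O(1),
# which is the kind of optimization the first answer alludes to.
index = defaultdict(list)
for o in items:
    index[o.value].append(o)

assert matches == matches_f == index["123"]
print(len(matches))  # 2
```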
13,615
41,079,379
I have to program a task in python 3.4 that requires the following: the program asks the user to enter a data model that is made of several submodels, in the form of strings like the following:

```
# the program asks: please input the submodel1
# user will input:
A = a*x + b*y*exp(3*c/d*x)
# then the program asks: please input the submodel2
# user will input:
B = e*x + f*y*exp(3*h/d*x)
# then the program asks: please input the main model:
# user will input:
Y = A*x - exp(B**2/y)
```

The program then takes these models (strings) and performs some operations on them, like curve fitting the main model to existing data, and plots the results showing the parameter values. The idea here is to give the user the freedom of choosing the model at runtime without needing to program it as a function inside the app. My problem here resides in converting or interpreting the string as a python function that returns a value. I explored solutions like the eval() function, but I am looking for a solution that is like the following in MatLab:

```
func = sym('string')
func = matlabfunction(func)
```

which creates an anonymous function that you can deal with easily. Hope I made myself clear; if any further clarification is needed please let me know. The solution should be compatible with python 3.4. Thanks in advance
2016/12/10
[ "https://Stackoverflow.com/questions/41079379", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4434914/" ]
Invisible reCAPTCHA =================== Implementing Google's new Invisible reCAPTCHA is very similar to how we add v2 to our site. You may add it as its own container like normal, or the new method of adding it to the form submit button. I hope this guide will help you along the correct path. Standalone CAPTCHA Container ---------------------------- Implementing recaptcha requires a few things: ``` - Sitekey - Class - Callback - Bind ``` This will be your final goal. ``` <div class="g-recaptcha" data-sitekey="<sitekey>" data-bind="recaptcha-submit" data-callback="submitForm"> </div> ``` When using the standalone method, you must have data-bind set to the ID of your submit button. If you do not have this set, your captcha will not be invisible. A callback must also be used to submit the form. An invisible captcha will cancel all events from the submit button, so you need the callback to actually pass the submission on. ``` <script> function submitForm() { var form = document.getElementById("ContactForm"); if (validate_form(form)) { form.submit(); } else { grecaptcha.reset(); } } </script> ``` Notice in the example callback that there is also custom form validation. It is very important that you reset the reCAPTCHA when the validation fails, otherwise you will not be able to re-submit the form until the CAPTCHA expires. Invisible CAPTCHA Button ------------------------ A lot of this is the same as with the standalone CAPTCHA above, but instead of having a container, everything is placed on the submit button. This will be your goal. ``` <button class="g-recaptcha" data-sitekey="<sitekey>" data-callback="submitForm" data-badge="inline" type="submit"> Submit</button> ``` There's something new here, data-badge. This is a div that gets inserted into the DOM that contains the inputs required for reCAPTCHA to function. It has three possible values: bottomleft, bottomright, inline. 
Inline will make it display directly above the submit button, and allow you to control how you would like it to be styled. On to your question ------------------- ``` <form action="test.php" method="POST" id="theForm"> <script> function submitForm() { document.getElementById("theForm").submit(); } </script> <input type="text" name="email" placeholder="Email"> <input type="password" name="password" placeholder="Password"> <div class="g-recaptcha" data-sitekey="<sitekey>" data-bind="recaptcha-submit" data-callback="submitForm"></div> <input type="submit" name="login" class="loginmodal-submit" id="recaptcha-submit" value="Login"> </form> ``` Or ``` <form action="test.php" method="POST" id="theForm"> <script> function submitForm() { document.getElementById("theForm").submit(); } </script> <input type="text" name="email" placeholder="Email"> <input type="password" name="password" placeholder="Password"> <button class="loginmodal-submit g-recaptcha" data-sitekey="<sitekey>" data-callback="submitForm" data-badge="inline" type="submit" value="Login">Submit</button> </form> ``` I hope this helps you and future coders. I'll keep this up-to-date as the technology evolves.
If you're looking for a fully customizable general solution which will even work with multiple forms on the same page, I'll explicitly render the reCaptcha widget by using the **render=explicit** and **onload=aFunctionCallback** parameters. Here is a simple example: ```html <!DOCTYPE html> <html> <body> <form action="" method="post"> <input type="text" name="first-name-1"> <br /> <input type="text" name="last-name-1"> <br /> <div class="recaptcha-holder"></div> <input type="submit" value="Submit"> </form> <br /><br /> <form action="" method="post"> <input type="text" name="first-name-2"> <br /> <input type="text" name="last-name-2"> <br /> <div class="recaptcha-holder"></div> <input type="submit" value="Submit"> </form> <script type="text/javascript"> var renderGoogleInvisibleRecaptcha = function() { for (var i = 0; i < document.forms.length; ++i) { var form = document.forms[i]; var holder = form.querySelector('.recaptcha-holder'); if (null === holder){ continue; } (function(frm){ var holderId = grecaptcha.render(holder,{ 'sitekey': 'CHANGE_ME_WITH_YOUR_SITE_KEY', 'size': 'invisible', 'badge' : 'bottomright', // possible values: bottomright, bottomleft, inline 'callback' : function (recaptchaToken) { HTMLFormElement.prototype.submit.call(frm); } }); frm.onsubmit = function (evt){ evt.preventDefault(); grecaptcha.execute(holderId); }; })(form); } }; </script> <script src="https://www.google.com/recaptcha/api.js?onload=renderGoogleInvisibleRecaptcha&render=explicit" async defer></script> </body> </html> ``` As you can see, I am adding an empty div element into a form. In order to identify which forms should be protected using reCaptcha, I'll add a class name to this element. In our example I am using 'recaptcha-holder' class name. The callback function iterates through all the existing forms and if it finds our injected element with the 'recaptcha-holder' class name, it will render the reCaptcha widget. 
I've been using this solution on my Invisible reCaptcha for WordPress plugin. If somebody wants to see how this works, the plugin is available for download on WordPress directory: <https://wordpress.org/plugins/invisible-recaptcha/> Hope this helps!
13,616
7,378,431
How to add data to a relationship with multiple foreign keys in Django? I'm building a simulation written in python and would like to use django's orm to store (intermediate and final) results in a database. Therefore I am not concerned with urls and views. The problem I am having is the instantiation of an object with multiple ForeignKeys. This object is supposed to represent the relationship between an agent and a time step. Here is the code. I'm using the following models:

```
from django.db import models
import random
from django.contrib.admin.util import related_name

# Create your models here.
class Agent(models.Model):
    agent = int
    def __init__(self, agent, *args, **kwargs):
        models.Model.__init__(self, *args, **kwargs)
        self.agent = agent
    def __str__(self):
        return str(self.agent)

class Step(models.Model):
    step = int
    def __init__(self, step, *args, **kwargs):
        models.Model.__init__(self, *args, **kwargs)
        self.step = step
    def __str__(self):
        return str(self.step)

class StepAgentData(models.Model):
    def __init__(self, step, agent, *args, **kwargs):
        models.Model.__init__(self, *args, **kwargs)
        self.step = step  # This does not work
        self.agent = agent
    step = models.ForeignKey(Step, related_name='time_step')
    agent = models.ForeignKey(Agent, related_name='associated_agent')
    data = float
    def __unicode__(self):
        return str("Step %s \t Agent %s ", (self.step, self.agent))
```

Running the following script

```
from DBStructureTest.App.models import Step, Agent, StepAgentData

if __name__ == '__main__':
    s = Step(1)
    s.save()
    a = Agent(2)
    a.save()
    sad = StepAgentData(s, a)
    sad.save()
    print "finished"
```

results in the following error message when `self.step = step` is executed in the constructor of StepAgentData (see comment in the code of the StepAgentData model):

```
File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/related.py", line 367, in __set__
  val = getattr(value, self.field.rel.get_related_field().attname)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/related.py", line 773, in get_related_field
  data = self.to._meta.get_field_by_name(self.field_name)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 307, in get_field_by_name
  cache = self.init_name_map()
File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 337, in init_name_map
  for f, model in self.get_all_related_m2m_objects_with_model():
File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 414, in get_all_related_m2m_objects_with_model
  cache = self._fill_related_many_to_many_cache()
File "/usr/local/lib/python2.7/dist-packages/django/db/models/options.py", line 428, in _fill_related_many_to_many_cache
  for klass in get_models():
File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 167, in get_models
  self._populate()
File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 61, in _populate
  self.load_app(app_name, True)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 76, in load_app
  app_module = import_module(app_name)
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
  __import__(name)
ImportError: No module named App
```

The folder structure is the following:

```
|-DBStructureTest
  |-App
    |-__init__.py
    |-Main.py
    |-models.py
    |-test.py
    |-views.py
  |-__init__.py
  |-manage.py
  |-settings.py
  |-urls.py
```

What am I doing wrong? Help is very much appreciated.

EDIT: In the /App directory I get the following after doing `import models`:

```
>>> import models
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "models.py", line 1, in <module>
    from django.db import models
  File "/usr/local/lib/python2.7/dist-packages/django/db/__init__.py", line 14, in <module>
    if not settings.DATABASES:
  File "/usr/local/lib/python2.7/dist-packages/django/utils/functional.py", line 276, in __getattr__
    self._setup()
  File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 40, in _setup
    raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE)
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
```
2011/09/11
[ "https://Stackoverflow.com/questions/7378431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/939186/" ]
Try using [PAM](http://en.wikipedia.org/wiki/Pluggable_Authentication_Modules) via [Python PAM](http://atlee.ca/software/pam/) or similar
That should be possible by having your script read the `/etc/passwd` and `/etc/shadow` files, which contain details about usernames and passwords on a Linux system. Do note that the script will have to have read access to the files, which depending on the situation may or may not be possible. Here are two good articles explaining the format of those files, which should tell you everything you need to know in order to have your script read and understand them: * [Understanding /etc/passwd File Format](http://www.cyberciti.biz/faq/understanding-etcpasswd-file-format/) * [Understanding /etc/shadow File Format](http://www.cyberciti.biz/faq/understanding-etcshadow-file/) By the way, when it talks about *encrypted password*, it means that it has been encrypted using the DES algorithm. You'll probably need to use [pyDes](http://twhiteman.netfirms.com/des.html) or another python implementation of the DES algorithm in order for your script to create an encrypted password that it can compare to the one in `/etc/shadow`.
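To make the file format concrete, here is a small parser for one `/etc/passwd`-style line. The sample record is made up; on real systems the password hash lives in `/etc/shadow` (root-readable only), and the `passwd` field is just `x`:

```python
def parse_passwd_line(line):
    """Split a passwd(5) record into its seven colon-separated fields."""
    fields = line.rstrip("\n").split(":")
    keys = ["name", "passwd", "uid", "gid", "gecos", "home", "shell"]
    rec = dict(zip(keys, fields))
    rec["uid"] = int(rec["uid"])
    rec["gid"] = int(rec["gid"])
    return rec

sample = "alice:x:1000:1000:Alice Example:/home/alice:/bin/bash"
rec = parse_passwd_line(sample)
print(rec["name"], rec["uid"], rec["shell"])  # alice 1000 /bin/bash
```

Iterating `for line in open("/etc/passwd")` and calling this per line gives the full account list; the same split-on-colon idea applies to `/etc/shadow`, with different field names.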
13,625
3,495,290
relatively long-time PHP user here. I could install XAMPP in my sleep at this point to the point where I can get a PHP script running in the browser at "localhost", but in my searches to find a similar path using Python, I've run out of Googling ideas. I've discovered the mod\_python Apache mod, but then I also discovered that it's been discontinued. I'd greatly prefer to do my Python learning in a browser as opposed to the command prompt, so if anybody could point me along the proper path, I'd be very grateful. Thanks!
2010/08/16
[ "https://Stackoverflow.com/questions/3495290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/385950/" ]
well mod\_python has been retired. So in order to host python apps you want `mod_wsgi` <http://code.google.com/p/modwsgi/> But python isn't really like php(as in you mix it with html to get output, if I understand your question correctly). In learning python, using the command line/repl will probably be much more useful/straight forward I would think. If you are looking for python as primarily web development you should look into django (<http://www.djangoproject.com/>) as that might be closer to what you are looking for..
Most Python webframeworks have a built-in minimal development server: * [Flask](http://flask.pocoo.org/docs/quickstart/) * [Turbogears](http://turbogears.org/2.0/docs/main/QuickStart.html#run-the-server) * [Django](http://docs.djangoproject.com/en/dev/ref/django-admin/#runserver-port-or-ipaddr-port) * [Pylons](http://pylonshq.com/docs/en/1.0/gettingstarted/#running-the-application) * [web.py](http://webpy.org/tutorial3.en#start) * [web2py](http://web2py.com/book/default/chapter/03) But don't be afraid of the command line. Python has a great interactive console (just run `python`) and there's an even better one: [IPython](http://ipython.scipy.org/moin/).
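If the goal is simply seeing Python output in a browser the way XAMPP shows PHP pages, the standard library alone is enough — no framework install. A minimal WSGI app follows; calling `make_server(...).serve_forever()` would expose it at localhost:8000, but here it is exercised in-process so the sketch stays self-contained:

```python
from wsgiref.simple_server import make_server
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"Hello from Python on " + environ["PATH_INFO"].encode()]

# To serve for real: make_server("", 8000, app).serve_forever()
# In-process check without opening a socket:
environ = {}
setup_testing_defaults(environ)  # fills in PATH_INFO="/", method, etc.
status_seen = []
body = b"".join(app(environ, lambda status, headers: status_seen.append(status)))
print(status_seen[0], body)
```

Every framework in the list above ultimately speaks this same WSGI interface to its development server.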
13,626
40,879,188
np.where lets you pick values to assign for a boolean type query, e.g.

```
test = [0,1,2]
np.where(test==0,'True','False')
print test
['True','False','False']
```

Which is basically an 'if' statement. Is there a pythonic way of having an 'if, else if, else' kind of statement (with different cases) for a numpy array? This is my workaround:

```
color = [0,1,2]
color = np.where(color==0,'red',color)
color = np.where(color==1,'blue',color)
color = np.where(color==2,'green',color)
print color
['red','blue','green']
```

But I wonder if there's a better way of doing this.
2016/11/30
[ "https://Stackoverflow.com/questions/40879188", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5531108/" ]
`np.choose` is something of a multielement `where`: ``` In [97]: np.choose([0,1,1,2,0,1],['red','green','blue']) Out[97]: array(['red', 'green', 'green', 'blue', 'red', 'green'], dtype='<U5') In [113]: np.choose([0,1,2],[0,np.array([1,2,3])[:,None], np.arange(10,13)]) Out[113]: array([[ 0, 1, 12], [ 0, 2, 12], [ 0, 3, 12]]) ``` In the more complex cases it helps to have a good handle on broadcasting. There are limits, for example no more than 32 choices. It's not used nearly as much as `np.where`. And sometimes you just want to apply `where` or boolean masking multiple times: ``` In [115]: x Out[115]: array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) In [116]: x[x<4] += 10 In [117]: x Out[117]: array([[10, 11, 12, 13], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) In [118]: x[x>8] -=3 In [119]: x Out[119]: array([[ 7, 8, 9, 10], [ 4, 5, 6, 7], [ 8, 6, 7, 8]]) In [120]: x[(4<x)&(x<8)] *=2 In [121]: x Out[121]: array([[14, 8, 9, 10], [ 4, 10, 12, 14], [ 8, 12, 14, 8]]) ```
One of the more Pythonic ways to do this would be to use a list comprehension, like this: ``` >>> color = [0,1,2] >>> ['red' if c == 0 else 'blue' if c == 1 else 'green' for c in color] ['red', 'blue', 'green'] ``` It's fairly intuitive if you read it. For a given item in the list `color`, the value in the new list will be `'red'` if the color is `0`, `'blue'` if it's `1`, and `'green'` otherwise. I don't know if I would take the `if` `else`s in a list comprehension further than three, though. A `for` loop would be appropriate there. Or you could use a dictionary, which might be more "Pythonic," and would be much more scalable: ``` >>> color_dict = {0: 'red', 1: 'blue', 2: 'green'} >>> [color_dict[number] for number in color] ['red', 'blue', 'green'] ```
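Alongside `np.choose` and the dict/comprehension approaches above, numpy has a purpose-built vectorized multi-branch: `np.select`, which takes parallel lists of conditions and choices plus a default — the closest array analogue of an if/elif/else chain. A sketch:

```python
import numpy as np

color = np.array([0, 1, 2, 1, 0])
conditions = [color == 0, color == 1, color == 2]
choices = ["red", "blue", "green"]

# The first matching condition wins; "default" covers everything else.
result = np.select(conditions, choices, default="unknown")
print(result)  # ['red' 'blue' 'green' 'blue' 'red']
```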
13,628
24,606,931
I have tried for hours to install `FiPy` I've installed Pip and many other things to get it to work. Pip successfull installed many of the things I needed but I cannot get it to work for PySparse or FiPy. Why I try, to install PySparse I get an error: ``` $ pip install pysparse Downloading/unpacking pysparse Could not find a version that satisfies the requirement pysparse (from versions: 1.1.1-dev, 1.2-dev, 1.2-dev202, 1.2-dev203, 1.2-dev213, 1.3-dev) Cleaning up... No distributions matching the version for pysparse Storing debug log for failure in /Users/Emily/.pip/pip.log ``` and when I try to install `FiPy` I get this error: ``` $ pip install fipy Downloading/unpacking fipy Downloading FiPy-3.1.tar.gz (5.7MB): 5.7MB downloaded Running setup.py (path:/private/var/folders/jt/gzhjdv8s1xb_v2b52lmr8bx00000gn/T/pip_build_Emily/fipy/setup.py) egg_info for package fipy Traceback (most recent call last): File "<string>", line 17, in <module> File "/private/var/folders/jt/gzhjdv8s1xb_v2b52lmr8bx00000gn/T/pip_build_Emily/fipy/setup.py", line 44, in <module> import ez_setup ImportError: No module named ez_setup Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 17, in <module> File "/private/var/folders/jt/gzhjdv8s1xb_v2b52lmr8bx00000gn/T/pip_build_Emily/fipy/setup.py", line 44, in <module> import ez_setup ImportError: No module named ez_setup ---------------------------------------- Cleaning up... Command python setup.py egg_info failed with error code 1 in /private/var/folders/jt/gzhjdv8s1xb_v2b52lmr8bx00000gn/T/pip_build_Emily/fipy Storing debug log for failure in /Users/Emily/.pip/pip.log ``` Can you please help? Please be specific because I am having a lot of trouble.
2014/07/07
[ "https://Stackoverflow.com/questions/24606931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3811717/" ]
I've just encountered the same problems you did, and here are my work-arounds: Pysparse: pip doesn't find the pysparse distribution in PyPI because it appears to not be a stable version (see e.g. [Could not find a version that satisfies the requirement pytz](https://stackoverflow.com/questions/18230956/could-not-find-a-version-that-satisfies-the-requirement-pytz) for reference). To get around this you could use, as the link suggests, ``` pip install --pre pysparse ``` However, that didn't work for me--the installation failed. So I downloaded and installed pysparse manually from <http://pysparse.sourceforge.net/index.html> FiPY: The error says "no module named ez\_setup". If you install the python ez\_setup module using pip the installation of FiPY should now work. FiPY seems to be working for me now. Good luck!
PySparse is not a strict requirement for FiPy. FiPy needs one of PySparse, Scipy or Trilinos as a linear solver and Scipy is probably the easiest to install. To check that FiPy has access to a linear solver, you can run the tests, ``` $ python -c "import fipy; fipy.test()" ``` It should be evident from the test output if FiPy does not have a solver library available.
13,631
55,016,775
I need help making a mirrored right triangle like below

```
     1
    21
   321
  4321
 54321
654321
```

I can print a regular right triangle with the code below

```
print("Pattern A")
for i in range(8):
    for j in range(1,i):
        print(j, end="")
    print("")
```

Which prints

```
1
12
123
1234
12345
123456
```

But I can't seem to find a way to mirror it. I tried to look online on how to do it but I can't seem to find any results for python, only examples for Java.
2019/03/06
[ "https://Stackoverflow.com/questions/55016775", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11157933/" ]
Here is one using the new f-string formatting system:

```
def test(x):
    s = ""
    for i in range(1, x+1):
        s = str(i) + s
        print(f'{s:>{x}}')

test(6)
```
Something like this works. I loop over the amount of lines, add the whitespace needed for that line and print the numbers.

```
def test(x):
    for i in range(1, x+1):
        print((x-i)*(" ") + "".join(str(j+1) for j in range(i)))

test(6)
```
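The same mirrored triangle can also be built with `str.rjust`, which handles the left padding for you — a compact variant of the padding logic in both answers:

```python
def mirrored_triangle(n):
    rows = []
    for i in range(1, n + 1):
        row = "".join(str(j) for j in range(i, 0, -1))  # digits i..1 descending
        rows.append(row.rjust(n))                        # right-align to width n
    return "\n".join(rows)

print(mirrored_triangle(6))
```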
13,632
44,690,174
How to convert a column that has been read as a string into a column of arrays? i.e. convert from below schema

```
scala> test.printSchema
root
 |-- a: long (nullable = true)
 |-- b: string (nullable = true)

+---+---+
|  a|  b|
+---+---+
|  1|2,3|
+---+---+
|  2|4,5|
+---+---+
```

To:

```
scala> test1.printSchema
root
 |-- a: long (nullable = true)
 |-- b: array (nullable = true)
 |    |-- element: long (containsNull = true)

+---+-----+
|  a|  b  |
+---+-----+
|  1|[2,3]|
+---+-----+
|  2|[4,5]|
+---+-----+
```

Please share both scala and python implementation if possible. On a related note, how do I take care of it while reading from the file itself? I have data with ~450 columns and few of them I want to specify in this format. Currently I am reading in pyspark as below:

```
df = spark.read.format('com.databricks.spark.csv').options(
    header='true',
    inferschema='true',
    delimiter='|').load(input_file)
```

Thanks.
2017/06/22
[ "https://Stackoverflow.com/questions/44690174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4531806/" ]
There are various methods; the best way is to use the `split` function and cast to `array<long>`:

```
data.withColumn("b", split(col("b"), ",").cast("array<long>"))
```

You can also create a simple udf to convert the values:

```
val tolong = udf((value : String) => value.split(",").map(_.toLong))
data.withColumn("newB", tolong(data("b"))).show
```

Hope this helps!
Using a [UDF](https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-sql-udfs.html) would give you the exact required schema. Like this:

```
val toArray = udf((b: String) => b.split(",").map(_.toLong))
val test1 = test.withColumn("b", toArray(col("b")))
```

It would give you schema as follows:

```
scala> test1.printSchema
root
 |-- a: long (nullable = true)
 |-- b: array (nullable = true)
 |    |-- element: long (containsNull = true)

+---+-----+
|  a|  b  |
+---+-----+
|  1|[2,3]|
+---+-----+
|  2|[4,5]|
+---+-----+
```

As far as applying schema on file read itself is concerned, I think that is a tough task. So, for now you can apply the transformation after creating `DataFrameReader` of `test`. I hope this helps!
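For the Python side the question asked about, the per-row logic of the Scala UDF is just a split-and-cast function. In PySpark it would be registered with `udf(to_long_array, ArrayType(LongType()))` from `pyspark.sql.types` and applied with `withColumn`; shown here without a Spark session so the sketch stays runnable:

```python
def to_long_array(cell):
    """'2,3' -> [2, 3]; mirrors split(col("b"), ",").cast("array<long>")."""
    if cell is None:
        return None  # preserve nulls, as Spark's cast does
    return [int(part) for part in cell.split(",")]

rows = [(1, "2,3"), (2, "4,5")]
converted = [(a, to_long_array(b)) for a, b in rows]
print(converted)  # [(1, [2, 3]), (2, [4, 5])]
```

Note that the built-in `split` + `cast` route from the other answer avoids UDF serialization overhead entirely and is usually preferable in PySpark as well.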
13,633
29,989,024
I'm trying to use a [specific gamma corrected grayscale implementation](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0029740) - Gleam - to convert an image's pixels to grayscale. How can I do this manually with PIL in Python?

```
def tau_gamma_correct(pixel_channel):
    pixel_channel = pixel_channel**(1/2.2)
    return pixel_channel

#@param: rgb
#@result: returns grayscale value
def gleam(rgb):
    #convert rgb tuple to list
    rgblist = list(rgb)
    #gamma correct each rgb channel
    rgblist[0] = tau_gamma_correct(rgblist[0])
    rgblist[1] = tau_gamma_correct(rgblist[1])
    rgblist[2] = tau_gamma_correct(rgblist[2])
    grayscale = 1/3*(rgblist[0] + rgblist[1] + rgblist[2])
    return grayscale

# get a glob list of jpg filenames
files = glob.glob('*.jpg')

for file in files:
    file = open(file)
    filename = file.name
    image = Image.open(file)
    pix = image.load()
    width, height = image.size
    #print(width,height)
    for x in range(0, width):
        for y in range(0, height):
            rgb = pix[x,y]
            #print(rgb)
            # calc new pixel value and set to pixel
            image.mode = 'L'
            pix[x,y] = gleam(rgb)
    image.save(filename + 'gray.gleam'+'.jpg')
    file.close()
```

`SystemError: new style getargs format but argument is not a tuple`

It is still expecting the rgb tuple, I think.
2015/05/01
[ "https://Stackoverflow.com/questions/29989024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/688830/" ]
I found that I could just build another image:

```
import sys
import os
import glob
import numpy
from PIL import Image

def tau_gamma_correct(pixel_channel):
    pixel_channel = pixel_channel**(1/2.2)
    return pixel_channel

#@param: rgb
#@result: returns grayscale value
def gleam(rgb):
    #convert rgb tuple to list
    rgblist = list(rgb)
    #gamma correct each rgb channel
    rgblist[0] = tau_gamma_correct(rgblist[0])
    print('gleamed red ' + str(rgblist[0]))
    rgblist[1] = tau_gamma_correct(rgblist[1])
    print('gleamed green ' + str(rgblist[1]))
    rgblist[2] = tau_gamma_correct(rgblist[2])
    print('gleamed blue ' + str(rgblist[0]))
    grayscale = (rgblist[0] + rgblist[1] + rgblist[2])/3
    print('grayscale '+ str(grayscale))
    return grayscale

# get a glob list of jpg filenames
files = glob.glob('*.jpg')

for file in files:
    file = open(file)
    filename = file.name
    image = Image.open(file)
    pix = image.load()
    width, height = image.size
    new_image = Image.new('L', image.size)
    #pixelmatrix = [width][height]
    pixelmatrix = numpy.zeros((width, height))
    #print(width,height)
    for x in range(0, width):
        for y in range(0, height):
            rgb = pix[x,y]
            print('current pixel value: '+str(rgb))
            # calc new pixel value and set to pixel
            #print(gleam(rgb))
            gray = gleam(rgb)
            print('changing to pixel value: '+str(gray))
            pixelmatrix[x,y] = gray
    new_image.save(filename + 'gray.gleam'+'.jpg')
    new_image.putdata(pixelmatrix)
    file.close()
```
Seeing the `SystemError: new style getargs format but argument is not a tuple` error, it seems that you need to return a tuple, which is represented as:

```
sample_tuple = (1, 2, 3, 4)
```

So we edit the `gleam()` function as:

```
def gleam(rgb):
    #convert rgb tuple to list
    rgblist = list(rgb)
    #gamma correct each rgb channel
    rgblist[0] = tau_gamma_correct(rgblist[0])
    rgblist[1] = tau_gamma_correct(rgblist[1])
    rgblist[2] = tau_gamma_correct(rgblist[2])
    grayscale = 1/3*(rgblist[0] + rgblist[1] + rgblist[2])
    return (grayscale, )
```

Keep in mind that while returning a single element tuple you need to represent it as:

```
sample_tuple = (1, )
```

This is due to the fact that `(4) == 4` but `(4, ) != 4`
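Stripped of the file handling in both answers, the Gleam transform itself is a couple of lines and easy to sanity-check on bare tuples. This sketch assumes 0–255 integer channels as PIL's `load()` yields (the paper also normalizes channels to [0,1] before correction, so treat this as the un-normalized reading used in the question's code):

```python
def gleam(rgb, gamma=2.2):
    """Mean of gamma-corrected (c ** (1/gamma)) R, G, B channels."""
    return sum(c ** (1.0 / gamma) for c in rgb) / 3.0

black, white = (0, 0, 0), (255, 255, 255)
print(gleam(black))   # 0.0
print(gleam(white))   # 255 ** (1/2.2), roughly 12.4
```

Note the small output range: to write these values into an `'L'` image you would still need to rescale them back to 0–255.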
13,635
66,312,791
I have a python script to download and save an MP3, and I would like to add code to cut out 5 seconds from the beginning of the MP3.

```
def download():
    ydl_opts = {
        'format': 'bestaudio/best',
        'outtmpl': 'c:/MP3/%(title)s.%(ext)s',
        'cookiefile': 'cookies.txt',
        'postprocessors': [{
            'key': 'FFmpegExtractAudio',
            'preferredcodec': 'mp3',
            'preferredquality': '192',
        }],
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download([inpYTLinkSong.get()])
```

I found some code for the command line to cut x seconds: `ffmpeg -ss 00:00:15.00 -i "OUTPUT-OF-FIRST URL" -t 00:00:10.00 -c copy out.mp4`. So I think I have to get the -ss part into the postprocessor part in my script, something like:

```
'postprocessors': [{
    'key': 'FFmpegExtractAudio',
    'preferredcodec': 'mp3',
    'preferredquality': '192',
    'ss': '00:00:05.00'
}],
```

But of course it's not working with 'ss' or 'duration' (found in the ffmpeg docs). So any ideas what I have to put there instead of 'ss'?
2021/02/22
[ "https://Stackoverflow.com/questions/66312791", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11675699/" ]
I don't think you can directly limit the width of the `Textfield`, it will take the max width of its parent element. But you can put it in a parent element with fixable width: ```py v.Row( children = [ v.Html( tag = 'div', style_ = 'width: 500px', children = [ v.TextField(type = "number") ] ), v.Html( tag = 'H1', children = ['Some text here'] ) ] ) ```
Here is the best I have so far, based on Christoph's answer: I added v.Spacer, and converted the second part to a v.Col (might be worth also including the first part to a v.Col ?) ``` import ipyvuetify as v v.Row( children = [ v.Html( tag = 'div', style_ = 'width: 50px', children = [ v.TextField(type = "number") ] ), v.Spacer(), v.Col( sm=3, tag = 'H1', children = ['Some text here'] ) ] ) ``` [![enter image description here](https://i.stack.imgur.com/J2Kx3.png)](https://i.stack.imgur.com/J2Kx3.png)
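Returning to the trimming question this record opened with: youtube-dl has a top-level `postprocessor_args` option that forwards extra flags to ffmpeg, which is the usual place for `-ss`. This is a hedged sketch — whether the seek applies cleanly depends on the youtube-dl version and the postprocessor chain, and only the options dict is built here, since actually downloading needs youtube_dl, ffmpeg, and a network connection:

```python
ydl_opts = {
    "format": "bestaudio/best",
    "outtmpl": "c:/MP3/%(title)s.%(ext)s",
    "postprocessors": [{
        "key": "FFmpegExtractAudio",
        "preferredcodec": "mp3",
        "preferredquality": "192",
    }],
    # Extra arguments handed to ffmpeg by the postprocessors:
    # skip the first 5 seconds of the input.
    "postprocessor_args": ["-ss", "00:00:05.00"],
}

# with youtube_dl.YoutubeDL(ydl_opts) as ydl:
#     ydl.download([url])
print(ydl_opts["postprocessor_args"])
```

If the option doesn't behave as hoped, the fallback is a second pass: download normally, then run ffmpeg's `-ss` trim on the saved file, as in the command-line example from the question.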
13,638
44,141,117
I am trying to print fibonacci series using lists in python. This is my code

```
f=[1,1]
for i in range(8):
    f.append(f[i-1]+f[i-2])
print(f)
```

output is

```
[1, 1, 2, 3, 2, 3, 5, 5, 5, 8]
```

I am not getting the bug here!
2017/05/23
[ "https://Stackoverflow.com/questions/44141117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8054946/" ]
```
f=[1,1]
for i in range(8):
    s=len(f)
    f.append(f[s-1]+f[s-2])  #sum of last two elements
print(f)
```

or use -1 and -2 as indexes for the last two elements.
Script

```
f=[1,1]
for i in range(2,8):
    print(i)
    f.append(f[i-1]+f[i-2])
    print(f)
```

Output

```
2
[1, 1, 2]
3
[1, 1, 2, 3]
4
[1, 1, 2, 3, 5]
5
[1, 1, 2, 3, 5, 8]
6
[1, 1, 2, 3, 5, 8, 13]
7
[1, 1, 2, 3, 5, 8, 13, 21]
```

The issue is that `i` starts from 0, so `f[i-1]` and `f[i-2]` use negative indices and pick up the wrong elements. Start the range from 2 (the 8 here just gives the first 8 Fibonacci numbers). Removing the debug print:

Script

```
f=[1,1]
for i in range(2,8):
    f.append(f[i-1]+f[i-2])
print(f)
```
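For completeness, the negative-index variant hinted at in the first answer: appending `f[-1] + f[-2]` never involves the loop variable at all, so the off-by-one the question ran into can't happen:

```python
def fib_list(n):
    """First n Fibonacci numbers, starting 1, 1."""
    f = [1, 1]
    while len(f) < n:
        f.append(f[-1] + f[-2])  # always the last two, regardless of index
    return f[:n]

print(fib_list(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```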
13,641
21,395,069
I am running ubuntu 13.10 with the latest pip. I have a whole set of SSL certs for my corporate proxy installed as per: <https://askubuntu.com/questions/73287/how-do-i-install-a-root-certificate>. Firefox no longer complains about unrecognised certs, but with pip I still get:

```
Could not fetch URL http://pypi.python.org/simple/: There was a problem confirming the ssl certificate: [Errno 1] _ssl.c:509: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
```

I have tried adding settings to $HOME/.pip/pip.conf as well:

```
[global]
cert = /etc/ssl/certs/mycorporatecert.pem
```

Thanks
2014/01/28
[ "https://Stackoverflow.com/questions/21395069", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1687633/" ]
I guess you would have to use `pip`'s `--cert` option. ``` --cert <path> Path to alternate CA bundle. ``` There's no indication in the documentation that you can use the `cert=` option in the `pip.conf` configuration file. See: <https://pip.pypa.io/en/stable/reference/pip/?highlight=proxy#cmdoption-cert>
Try updating your proxy variables as shown here for `http_proxy` and `https_proxy`: <https://askubuntu.com/questions/228530/updating-http-proxy-environment-variable> You will still need the cert (or the declared global cert, as you have it above) as well as the proxy. The alternative to setting the variables is to pass the proxy on the command line, in the form `[user:passwd@]proxy.server:port` ``` pip install --proxy http://proxy.company.com:80 <package> ```
13,651
70,622,248
My question is an extension to this: [Python Accessing Values in A List of Dictionaries](https://stackoverflow.com/questions/17117912/python-accessing-values-in-a-list-of-dictionaries) I want to only return values from dictionaries if another given value exists in that dictionary. In the case of the example given in the linked question, say I only want to return 'Name' values if the 'Age' value in the dictionary is '17'. This should output ```py 'Suzy' ``` only.
2022/01/07
[ "https://Stackoverflow.com/questions/70622248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3465940/" ]
The reason for your error is that the conditional operator (`c ? t : f`) takes precedence over the lambda declaration (`=>`). [source](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/#operator-precedence) Wrapping the conditional operator in a statement body (`{}` brackets with an explicit `return`) should solve your problem: ```cs CreateMap<CompanyClient, MyDto>() .ForMember( dto => dto.PaymentTerms, opt => opt.MapFrom(companyClient => { return companyClient.Company.PaymentTerms != null ? companyClient.Company.PaymentTerms.Value : null; })) ```
If someone has this error and the answers above do not help, try the following. <https://stackoverflow.com/a/60379426/13434418> Preview (Conditional mapping with different source types) ``` //Cannot convert lambda expression to type x.MapFrom(src => src.CommunityId.HasValue ? src.Community : src.Account) //OK x.MapFrom(src => src.CommunityId.HasValue ? (object)src.Community : src.Account) ```
13,655
63,583,084
According to this [FAQ page](https://faq.whatsapp.com/iphone/how-to-link-to-whatsapp-from-a-different-app) of WhatsApp on **How to link to WhatsApp from a different app**, the URL `whatsapp://send?phone=XXXXXXXXXXXXX&text=Hello` can be used to open the WhatsApp app on a Windows PC and perform a custom action. It does work when it is opened in a browser: the URL opens the installed WhatsApp and composes the message (*parameter:* `text`) for the given contact (*parameter:* `phone`). I want to open the WhatsApp application directly using a Python script, without the intervention of a browser in between. I have tried using `requests` and `urllib`, but they don't consider it a valid schema for a URL, as it does not have `http://` or `https://` but `whatsapp://` instead. ``` requests.exceptions.InvalidSchema: No connection adapters were found for 'whatsapp://send' ``` Is there a library in Python that would open the associated application directly based on the URL? **P.S.** I know the page is for iPhone, but the link works perfectly in a Windows browser. Just need to know if that can be used. Working with `Python 3.7.9` on **Windows 10**
2020/08/25
[ "https://Stackoverflow.com/questions/63583084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/603633/" ]
You can use cmd.exe to do such a job. Just try ``` import subprocess subprocess.Popen(["cmd", "/C", "start whatsapp://send?phone=XXXXXXXXXXXXX&text=Hello"], shell=True) ``` Edit: if you want to pass an ampersand ('&') through the cmd shell, you need to escape it with '^'. Please try this one: ``` subprocess.Popen(["cmd", "/C", "start whatsapp://send?phone=XXXXXXXXXXXXX^&text=Hello"], shell=True) ```
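As a related sketch (untested against WhatsApp itself; the phone number and message are placeholders): percent-encoding the message with the standard library guarantees the *text* contains no raw '&', leaving only the parameter separator to escape with '^' as shown above:

```python
from urllib.parse import quote

phone = "1234567890"          # placeholder destination number
text = "Hello & welcome"      # message containing a literal ampersand
# quote() turns the space into %20 and the ampersand into %26
url = f"whatsapp://send?phone={phone}&text={quote(text)}"
print(url)  # → whatsapp://send?phone=1234567890&text=Hello%20%26%20welcome
```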
Using the command `start whatsapp://send?phone=XXXXXXXXXXXXX^&text=Hello` no longer works after updating to the new WhatsApp Desktop app (UWP) for Windows. Do you have another idea for sending a text message to a destination user?
13,657
50,546,892
i've seen some people write their shebang line with a space after env. Eg. ``` #!/usr/bin/env python ``` Is this a typo? I don't ever use a space. I use ``` #!/usr/bin/env/python ``` Can someone please clarify this?
2018/05/26
[ "https://Stackoverflow.com/questions/50546892", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9536653/" ]
No, it isn't a typo! The form you are using will not work on all OSes. E.g., on my Ubuntu Linux installation, 'env' is a program in /usr/bin, not a directory, so #!/usr/bin/env/python won't work.
Using a space isn't a typo; hardcoding a direct interpreter path instead leads to portability issues when executing the same file on different machines. The purpose of the shebang line (`#!/usr....`) is to indicate what interpreter is to be used when executing the code in the file. According to [Wikipedia](https://en.wikipedia.org/wiki/Shebang_%28Unix%29#Syntax): > > The form of a shebang interpreter directive is as follows: > > > `#!interpreter [optional-arg]` > > > in which interpreter is an absolute path to an executable program. The optional argument is a string representing a single argument. > > > The example you provided, `#!/usr/bin/env python`, actually indicates to the shell that the interpreter to be used when executing the file is the first python interpreter that exists in the user's path. The shebang line is written this way to [ensure portability](https://en.wikipedia.org/wiki/Shebang_(Unix)#Portability): > > Shebangs must specify absolute paths (or paths relative to current working directory) to system executables; this can cause problems on systems that have a non-standard file system layout. Even when systems have fairly standard paths, it is quite possible for variants of the same operating system to have different locations for the desired interpreter. Python, for example, might be in /usr/bin/python, /usr/local/bin/python, or even something like /home/username/bin/python if installed by an ordinary user. > > > Because of this it is sometimes required to edit the shebang line after copying a script from one computer to another because the path that was coded into the script may not apply on a new machine, depending on the consistency in past convention of placement of the interpreter. For this reason and because POSIX does not standardize path names, POSIX does not standardize the feature. > > > Often, the program /usr/bin/env can be used to circumvent this limitation by introducing a level of indirection. `#!` is followed by /usr/bin/env, followed by the desired command without full path, as in this example: > > > `#!/usr/bin/env sh` > > >
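The PATH lookup that `env` performs can be reproduced from Python itself with `shutil.which`, which is handy for checking what a given `#!/usr/bin/env python` shebang will actually resolve to on a machine (output paths are examples and will differ per system):

```python
import shutil

# shutil.which does the same PATH search that env performs on its argument
print(shutil.which("env"))      # e.g. /usr/bin/env
print(shutil.which("python3"))  # whichever python3 the shebang would pick up
```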
13,658
57,864,579
I have a dataset of 8500 rows of text. I want to apply a function `pre_process` on each of these rows. When I do it serially, it takes about 42 mins on my computer: ``` import pandas as pd import time import re ### constructing a sample dataframe of 10 rows to demonstrate df = pd.DataFrame(columns=['text']) df.text = ["The Rock is destined to be the 21st Century 's new `` Conan '' and that he 's going to make a splash even greater than Arnold Schwarzenegger , Jean-Claud Van Damme or Steven Segal .", "The gorgeously elaborate continuation of `` The Lord of the Rings '' trilogy is so huge that a column of words can not adequately describe co-writer/director Peter Jackson 's expanded vision of J.R.R. Tolkien 's Middle-earth .", 'Singer/composer Bryan Adams contributes a slew of songs -- a few potential hits , a few more simply intrusive to the story -- but the whole package certainly captures the intended , er , spirit of the piece .', "You 'd think by now America would have had enough of plucky British eccentrics with hearts of gold .", 'Yet the act is still charming here .', "Whether or not you 're enlightened by any of Derrida 's lectures on `` the other '' and `` the self , '' Derrida is an undeniably fascinating and playful fellow .", 'Just the labour involved in creating the layered richness of the imagery in this chiaroscuro of madness and light is astonishing .', 'Part of the charm of Satin Rouge is that it avoids the obvious with humour and lightness .', "a screenplay more ingeniously constructed than `` Memento ''", "`` Extreme Ops '' exceeds expectations ."] def pre_process(text): ''' function to pre-process and clean text ''' stop_words = ['in', 'of', 'at', 'a', 'the'] # lowercase text=str(text).lower() # remove special characters except spaces, apostrophes and dots text=re.sub(r"[^a-zA-Z0-9.']+", ' ', text) # remove stopwords text=[word for word in text.split(' ') if word not in stop_words] return ' '.join(text) t = time.time() for i in range(len(df)): 
df.text[i] = pre_process(df.text[i]) print('Time taken for pre-processing the data = {} mins'.format((time.time()-t)/60)) >>> Time taken for pre-processing the data = 41.95724259614944 mins ``` So, I want to make use of multiprocessing for this task. I took help from [here](https://stackoverflow.com/questions/48934807/how-to-convert-a-for-loop-into-parallel-processing-in-python) and wrote the following code: ``` import pandas as pd import multiprocessing as mp pool = mp.Pool(mp.cpu_count()) def func(text): return pre_process(text) t = time.time() results = pool.map(func, [df.text[i] for i in range(len(df))]) print('Time taken for pre-processing the data = {} mins'.format((time.time()-t)/60)) pool.close() ``` But the code just keeps on running, and doesn't stop. How can I get it to work?
2019/09/10
[ "https://Stackoverflow.com/questions/57864579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5305512/" ]
Try following : ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml; using System.Xml.Serialization; namespace ConsoleApplication131 { class Program { const string FILENAME = @"c:\temp\test.xml"; static void Main(string[] args) { XmlReader reader = XmlReader.Create(FILENAME); XmlSerializer serializer = new XmlSerializer(typeof(Envelope)); Envelope envelope = (Envelope)serializer.Deserialize(reader); } } [XmlRoot(ElementName = "Envelope", Namespace = "http://www.w3.org/2003/05/soap-envelope")] public class Envelope { [XmlElement(ElementName = "Header", Namespace = "http://www.w3.org/2003/05/soap-envelope")] public Header header { get; set; } [XmlElement(ElementName = "Body", Namespace = "http://www.w3.org/2003/05/soap-envelope")] public Body body { get; set; } } [XmlRoot(ElementName = "Header", Namespace = "http://www.w3.org/2003/05/soap-envelope")] public class Header { [XmlElement(ElementName = "Action", Namespace = "http://www.w3.org/2005/08/addressing")] public Action action { get; set; } } [XmlRoot(ElementName = "Action", Namespace = "http://www.w3.org/2005/08/addressing")] public class Action { [XmlAttribute(AttributeName = "mustUnderstand", Namespace = "http://www.w3.org/2003/05/soap-envelope")] public int mustUnderstand { get; set;} [XmlText] public string value { get;set;} } [XmlRoot(ElementName = "Body", Namespace = "")] public class Body { [XmlElement(ElementName = "BatchResponse", Namespace = "https://example.com/operations")] public BatchResponse batchResponse { get; set; } } [XmlRoot(ElementName = "BatchResponse", Namespace = "https://example.com/operations")] public class BatchResponse { [XmlElement(ElementName = "BatchResult", Namespace = "https://example.com/operations")] public BatchResult batchResult { get; set; } } [XmlRoot(ElementName = "BatchResult", Namespace = "https://example.com/operations")] public class BatchResult { [XmlElement(ElementName = "FailureMessages", Namespace = 
"https://example.com/responses")] public string failureMessages { get; set; } [XmlElement(ElementName = "FailureType", Namespace = "https://example.com/responses")] public string failureType { get; set; } [XmlElement(ElementName = "ProcessedSuccessfully", Namespace = "https://example.com/responses")] public Boolean processedSuccessfully { get; set; } [XmlElement(ElementName = "SessionId", Namespace = "https://example.com/responses")] public string sessionId { get; set; } [XmlElement(ElementName = "TotalPages", Namespace = "https://example.com/responses")] public int totalPages { get; set; } } } ```
you can use `local-name()` to ignore the namespace in xpath ``` //*[local-name()='Envelope']/*[local-name()='Body']/*[local-name()='BatchResponse'] ```
13,659
51,311,741
how can i create a spoofed UDP packet using python sockets,without using scapy library. i have created the socket like this ``` sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # UDP sock.sendto(bytes('', "utf-8"), ('192.168.1.9', 7043))# 192.168.1.9dest 7043 dest port ```
2018/07/12
[ "https://Stackoverflow.com/questions/51311741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6584777/" ]
This is one of the first results for google searches like "spoofing udp packet python", so I am going to expand @Cukic0d's answer using scapy. Using the scapy CLI tool (some Linux distributions package it separately from the scapy Python library): ```py pkt = IP(dst="1.1.1.1")/UDP(sport=13338, dport=13337)/"fm12abcd" send(pkt) ``` This sends a UDP packet to the IP `1.1.1.1` with the source port `13338`, destination port `13337` and the content `fm12abcd`. If you need to use a certain interface for some reason (like sending over a VPN that isn't your default route) you can use `send(pkt, iface='tun0')` to specify it. One difference to @Cukic0d's answer is that this solution is more flexible by sending a layer 3 packet with `send` instead of a layer 2 packet with `sendp`. So it isn't necessary to prepend the correct Ethernet header with `Ether()`, which can cause issues in some scenarios, e.g.: ``` WARNING: Could not get the source MAC: Unsupported address family (-2) for interface [tun0] WARNING: Mac address to reach destination not found. Using broadcast. ```
I think you mean changing the source and destination addresses at the IP layer (on which the UDP layer is based). To do so, you will need to use raw sockets (SOCK\_RAW), meaning that you have to build everything starting from the Ethernet layer up to the UDP layer. Honestly, without scapy, that's a lot of hard work. If you wanted to use scapy, it would take 2 lines: ``` pkt = Ether()/IP(src="...", dst="...")/UDP()/... sendp(pkt) ``` I really advise you to use scapy. The code itself is quite small, so I don't see a reason not to use it. It's definitely the easiest in Python.
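For a sense of what "building everything yourself" means without scapy, here is a rough sketch of hand-packing the IPv4 and UDP headers with `struct` (addresses and ports are placeholders; actually sending this requires a `SOCK_RAW` socket with the `IP_HDRINCL` option and root privileges, which is not shown):

```python
import struct
import socket

def ip_header(src, dst, payload_len):
    # Minimal IPv4 header; checksum left as 0 (on Linux the kernel fills it
    # in when the packet goes out via SOCK_RAW with IP_HDRINCL).
    return struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0,                      # version 4 + IHL 5; DSCP/ECN
        20 + 8 + payload_len,         # total length: IP + UDP + payload
        0, 0,                         # identification; flags/fragment offset
        64, socket.IPPROTO_UDP, 0,    # TTL; protocol; checksum placeholder
        socket.inet_aton(src), socket.inet_aton(dst))

def udp_header(sport, dport, payload_len):
    # UDP checksum 0 means "no checksum", which IPv4 permits.
    return struct.pack("!HHHH", sport, dport, 8 + payload_len, 0)

payload = b"hello"
packet = (ip_header("10.0.0.1", "192.168.1.9", len(payload))
          + udp_header(5000, 7043, len(payload))
          + payload)
print(len(packet))  # → 33 (20 IP + 8 UDP + 5 payload)
```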
13,660
20,078,739
I have a list of objects that I need to sort according to a [key function](https://wiki.python.org/moin/HowTo/Sorting/#Key_Functions "Key functions"). The problem is that some of the elements in my list can go "out-of-date" while the list is being sorted. When the key function is called on such an expired item, it fails with an exception. Ideally, what I would like is a way of sorting my list with a key function such that when an error occurs upon calling the key function on an element, this element is excluded from the sort result. My problem can be reconstructed using the following example: Suppose I have two classes, `Good` and `Bad`: ``` class Good(object): def __init__(self, x): self.x = x def __repr__(self): return 'Good(%r)' % self.x class Bad(object): @property def x(self): raise RuntimeError() def __repr__(self): return 'Bad' ``` I want to sort instances of these classes according to their `x` property. Eg.: ``` >>> sorted([Good(5), Good(3), Good(7)], key=lambda obj: obj.x) [Good(3), Good(5), Good(7)] ``` Now, when there is a `Bad` in my list, the sorting fails: ``` >>> sorted([Good(5), Good(3), Bad()], key=lambda obj: obj.x) ... RuntimeError ``` I am looking for a magical function `func` that sorts a list according to a key function, but simply ignores elements for which the key function raised an error: ``` >>> func([Good(5), Good(3), Bad()], key=lambda obj: obj.x) [Good(3), Good(5)] ``` What is the most Pythonic way of achieving this?
2013/11/19
[ "https://Stackoverflow.com/questions/20078739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1839209/" ]
Every sorting algorithm I know doesn't throw out some values because they're outdated or something. The task of sorting algorithm is to **sort** the list, and sort it fast, everything else is extraneous, specific task. So, I would write this magical function myself. It would do the sorting in two steps: first it would filter the list, leaving only `Good` values, and then sort the resulting list.
Since the result of the key function can change over time, and most sorting implementations probably assume a deterministic key function, it's probably best to only execute the key function once per object, to ensure a well-ordered and crash-free final list. ``` def func(seq, **kargs): key = kargs["key"] stored_values = {} for item in seq: try: value = key(item) stored_values[item] = value except RuntimeError: pass return sorted(stored_values.iterkeys(), key=lambda item: stored_values[item]) print func([Good(5), Good(3), Bad()], key=lambda obj: obj.x) ``` Result: ``` [Good(3), Good(5)] ```
13,661
14,104,128
I'm trying to define "variables" in an openoffice document, and I must be doing something wrong because when I try to display the value of the variable using a field, I only get an empty string. Here's the code I'm using (using the Python UNO bridge). The interesting bit is the second function. ```python import time import subprocess import logging import os import sys import uno from com.sun.star.text.SetVariableType import STRING def get_uno_model(): # helper function to connect to OOo. Only interesting # if you want to reproduce the issue locally, # don't spend time on this one try: model = XSCRIPTCONTEXT.getDocument() except NameError: pass # we are not running in a macro # get the uno component context from the PyUNO runtime localContext = uno.getComponentContext() # create the UnoUrlResolver resolver = localContext.ServiceManager.createInstanceWithContext( "com.sun.star.bridge.UnoUrlResolver", localContext ) # connect to the running office try: ctx = resolver.resolve("uno:socket,host=localhost,port=2002;" "urp;StarOffice.ComponentContext") except: cmd = ['soffice', '--writer', '-accept=socket,host=localhost,port=2002;urp;'] popen = subprocess.Popen(cmd) time.sleep(1) ctx = resolver.resolve("uno:socket,host=localhost,port=2002;" "urp;StarOffice.ComponentContext") smgr = ctx.ServiceManager # get the central desktop object desktop = smgr.createInstanceWithContext("com.sun.star.frame.Desktop", ctx) # access the current writer document model = desktop.getCurrentComponent() return model def build_variable(model, name, value): # find or create a TextFieldMaster with the correct name master_prefix = u"com.sun.star.text.fieldmaster.SetExpression" variable_names = set([_name.split('.')[-1] for _name in model.TextFieldMasters.ElementNames if _name.startswith(master_prefix)]) master_name = u'%s.%s' % (master_prefix, name) if name not in variable_names: master = model.createInstance(master_prefix) master.Name = name else: master = 
model.TextFieldMasters.getByName(master_name) # create the SetExpression field field = model.createInstance(u'com.sun.star.text.textfield.SetExpression') field.attachTextFieldMaster(master) field.IsVisible = True field.IsInput = False field.SubType = STRING field.Content = value return field model = get_uno_model() # local function to connect to OpenOffice text = model.Text field = build_variable(model, u'Toto', 'nice variable') text.insertTextContent(text.getEnd(), field, False) ``` This code works somehow (unless I removed too much), but if I manually insert a field to display the value of Toto I don't get the 'nice variable' string that I expect, and the field which is inserted has no value
2012/12/31
[ "https://Stackoverflow.com/questions/14104128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/281368/" ]
Use the updater property: <http://jsfiddle.net/ZUhFy/10/> ``` .typeahead({ source: namelist, updater: function(item) { $('#log').append(item + '<br>'); return item; } }); ``` > > updater returns selected item The method used to return selected item. > Accepts a single argument, the item and has the scope of the typeahead > instance. <http://twitter.github.com/bootstrap/javascript.html#typeahead> > > >
try ``` $('#findname').on('keyup', function() {...}); ```
13,666
69,503,139
My path is all messed up, javac and python commands not found, I even uninstalled and reinstalled java. I would throw in random stuff from the internet into my terminal whenever my code wouldn't work and have subsequently ruined my path, I've even added stuff to my bash\_profile and I don't know how to reset it or fix it. very frustrated. sorry I'm a noob echo $PATH gives this: ``` Khalids-MacBook-Air:~ khalidhamid$ echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin ``` ~/.bash\_profile gives this: ``` Khalids-MacBook-Air:~ khalidhamid$ ~/.bash_profile -bash: /Users/khalidhamid/.bash_profile: Permission denied ``` javac Student.java gives this: ``` Khalids-MacBook-Air:java khalidhamid$ javac Student.java javac: file not found: Student.java Usage: javac <options> <source files> use -help for a list of possible options ```
2021/10/09
[ "https://Stackoverflow.com/questions/69503139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17110677/" ]
In SQL Server Management Studio you can remove the shortcut CTRL+E to execute a query. Go to Tools > Options in the menu, then go to Environment > Keyboard > Keyboard in the tree on the left. Then select `Query.Execute` in the right table. Select the shortcut `CTRL+E (SQL Query Editor)` then click Remove which removes the shortcut. You will then no longer be able to execute the query when clicking CTRL+E. [![enter image description here](https://i.stack.imgur.com/oia8j.png)](https://i.stack.imgur.com/oia8j.png)
To prevent executing an entire script file from start to end, and only allow running selected snippets, this would work: ``` WHILE 1=1 PRINT 'NOT ALLOWED, press the STOP button' PRINT 'After WHILE, will not show unless executed manually' GO PRINT 'After GO, will not show unless executed manually' ``` Because the server goes into a *very busy loop*, it may not stop immediately when you click the STOP button, but it will stop in the end. For the sake of completeness, here is the **STOP** button: > > [![enter image description here](https://i.stack.imgur.com/w6VPv.png)](https://i.stack.imgur.com/w6VPv.png) > > >
13,668
66,345,760
I'm trying to play a `.mp3` file. My code : ``` import os os.system("start C:\Users\User\Desktop\Wakeup.mp3") ``` But, it gives an error like this: ``` File "C:/Users/User/PycharmProjects/pythonProject/Test.py", line 2 os.system("start C:\Users\User\Desktop\Wakeup.mp3") ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 8-9: truncated \UXXXXXXXX escape ``` with other variants for playing a sound, I'm getting the same error Thanks for attention
2021/02/24
[ "https://Stackoverflow.com/questions/66345760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15272617/" ]
You did not escape the backslash character. try this: ``` os.system("start C:\\Users\\User\\Desktop\\Wakeup.mp3") ``` **Option 2** Use forward slash instead: ``` os.system("start C:/Users/User/Desktop/Wakeup.mp3") ``` **Option 3** Use "raw" string: ``` os.system(r"start C:\Users\User\Desktop\Wakeup.mp3") ```
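The three options produce equivalent paths, which can be checked without touching the filesystem — the escaped and raw spellings are the *identical* string, and the forward-slash spelling is the same length and names the same file on Windows:

```python
escaped = "C:\\Users\\User\\Desktop\\Wakeup.mp3"
raw     = r"C:\Users\User\Desktop\Wakeup.mp3"
forward = "C:/Users/User/Desktop/Wakeup.mp3"

print(escaped == raw)              # → True
print(len(escaped), len(forward))  # → 32 32
```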
If you want to play sound. You can do something like this. ``` from playsound import playsound playsound("path/name.mp3") ``` Hope this helps ... :D
13,671
5,639,049
I found in the [documentation of pygraph](http://dl.dropbox.com/u/1823095/python-graph/docs/pygraph.classes.graph.graph-class.html) how to change the [attributes of the nodes and the edges](http://www.graphviz.org/doc/info/attrs.html), but I found no help on how to change the attributes of the graphs. I tried without luck: ``` gr = graph() gr.__setattr__('aspect',2) ``` What do you recommend? Thanks! [update] I also tried: ``` gr = graph() gr.__setattr__('rotate',90) gr.rotate = 90 gr.color = 'red' setattr(gr,'bgcolor','red') ``` [update 2] Example code from the website with the different ideas to change the attributes: ``` #!/usr/bin/env python # Copyright (c) 2007-2008 Pedro Matiello <pmatiello@gmail.com> # License: MIT (see COPYING file) # Import graphviz import sys sys.path.append('..') sys.path.append('/usr/lib/graphviz/python/') sys.path.append('/usr/lib64/graphviz/python/') import gv # Import pygraph from pygraph.classes.graph import graph from pygraph.classes.digraph import digraph from pygraph.algorithms.searching import breadth_first_search from pygraph.readwrite.dot import write # Graph creation gr = graph() gr.__setattr__('rotate',90) gr.rotate = 90 gr.color = 'red' setattr(gr,'bgcolor','red') # Add nodes and edges gr.add_nodes(["Portugal","Spain","France","Germany","Belgium","Netherlands","Italy"]) gr.add_nodes(["Switzerland","Austria","Denmark","Poland","Czech Republic","Slovakia","Hungary"]) gr.add_nodes(["England","Ireland","Scotland","Wales"]) gr.add_edge(("Portugal", "Spain")) gr.add_edge(("Spain","France")) gr.add_edge(("France","Belgium")) gr.add_edge(("France","Germany")) gr.add_edge(("France","Italy")) gr.add_edge(("Belgium","Netherlands")) gr.add_edge(("Germany","Belgium")) gr.add_edge(("Germany","Netherlands")) gr.add_edge(("England","Wales")) gr.add_edge(("England","Scotland")) gr.add_edge(("Scotland","Wales")) gr.add_edge(("Switzerland","Austria")) gr.add_edge(("Switzerland","Germany")) gr.add_edge(("Switzerland","France")) 
gr.add_edge(("Switzerland","Italy")) gr.add_edge(("Austria","Germany")) gr.add_edge(("Austria","Italy")) gr.add_edge(("Austria","Czech Republic")) gr.add_edge(("Austria","Slovakia")) gr.add_edge(("Austria","Hungary")) gr.add_edge(("Denmark","Germany")) gr.add_edge(("Poland","Czech Republic")) gr.add_edge(("Poland","Slovakia")) gr.add_edge(("Poland","Germany")) gr.add_edge(("Czech Republic","Slovakia")) gr.add_edge(("Czech Republic","Germany")) gr.add_edge(("Slovakia","Hungary")) # Draw as PNG dot = write(gr) gvv = gv.readstring(dot) gv.layout(gvv,'dot') gv.render(gvv,'png','europe.png') ```
2011/04/12
[ "https://Stackoverflow.com/questions/5639049", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380038/" ]
So, doing more research on my own, I found that [pygraphviz](http://networkx.lanl.gov/pygraphviz/index.html) is offering what I need. Here is the `pygraph` example using `pygraphviz` and accepting attributes. It is also shorter, as you don't have to specify the nodes if all of them are connected via edges. ``` #!/usr/bin/env python # Copyright (c) 2007-2008 Pedro Matiello <pmatiello@gmail.com> # License: MIT (see COPYING file) # Import pygraphviz import pygraphviz as pgv # Graph creation and setting of attributes gr = pgv.AGraph(rotate='90',bgcolor='lightgray') # Add nodes and edges gr.add_edge(("Portugal", "Spain")) gr.add_edge(("Spain","France")) gr.add_edge(("France","Belgium")) gr.add_edge(("France","Germany")) gr.add_edge(("France","Italy")) gr.add_edge(("Belgium","Netherlands")) gr.add_edge(("Germany","Belgium")) gr.add_edge(("Germany","Netherlands")) gr.add_edge(("England","Wales")) gr.add_edge(("England","Scotland")) gr.add_edge(("Scotland","Wales")) gr.add_edge(("Switzerland","Austria")) gr.add_edge(("Switzerland","Germany")) gr.add_edge(("Switzerland","France")) gr.add_edge(("Switzerland","Italy")) gr.add_edge(("Austria","Germany")) gr.add_edge(("Austria","Italy")) gr.add_edge(("Austria","Czech Republic")) gr.add_edge(("Austria","Slovakia")) gr.add_edge(("Austria","Hungary")) gr.add_edge(("Denmark","Germany")) gr.add_edge(("Poland","Czech Republic")) gr.add_edge(("Poland","Slovakia")) gr.add_edge(("Poland","Germany")) gr.add_edge(("Czech Republic","Slovakia")) gr.add_edge(("Czech Republic","Germany")) gr.add_edge(("Slovakia","Hungary")) # Draw as PNG gr.layout(prog='dot') gr.draw('europe.png') ```
I don't know about pygraph, but the general way you set attributes in Python is: ``` setattr(object, attr, value) setattr(gr, 'aspect', 2) ``` I mean, you have tried `gr.aspect = 2`, right?
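A minimal illustration of that equivalence, on a plain object rather than a pygraph graph (whether pygraph's `graph` class actually stores arbitrary attributes is a separate question):

```python
class Graph:
    pass

gr = Graph()
setattr(gr, "aspect", 2)   # equivalent to: gr.aspect = 2
print(gr.aspect)           # → 2
gr.aspect = 3
print(getattr(gr, "aspect"))  # → 3
```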
13,672
64,866,304
I'm trying to set up a new cygwin installation and install python through cygwin. I've done so, and the setup completed, but when I try to run `python3.8` I get a fatal python error: ```sh $ python3.8 Python path configuration: PYTHONHOME = 'C:\Users\cmhac\AppData\Local\Programs\Python\Python38' PYTHONPATH = 'C:\Users\cmhac\AppData\Local\Programs\Python\Python38' program name = 'python3.8' isolated = 0 environment = 1 user site = 1 import site = 1 sys._base_executable = '/usr/bin/python3.8.exe' sys.base_prefix = 'C' sys.base_exec_prefix = '\\Users\\cmhac\\AppData\\Local\\Programs\\Python\\Python38' sys.executable = '/usr/bin/python3.8.exe' sys.prefix = 'C' sys.exec_prefix = '\\Users\\cmhac\\AppData\\Local\\Programs\\Python\\Python38' sys.path = [ 'C', '\\Users\\cmhac\\AppData\\Local\\Programs\\Python\\Python38', 'C/lib/python38.zip', 'C/lib/python3.8', '\\Users\\cmhac\\AppData\\Local\\Programs\\Python\\Python38/lib/python3.8/lib-dynload', ] Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding Python runtime state: core initialized ModuleNotFoundError: No module named 'encodings' Current thread 0x0000000800000010 (most recent call first): <no Python frame> ``` How do I even begin fixing this? The python38.exe file is there, and everything seems fine. Never had this error with any other python installation.
2020/11/16
[ "https://Stackoverflow.com/questions/64866304", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11831775/" ]
I saw this exact error: ``` Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding Python runtime state: core initialized ModuleNotFoundError: No module named 'encodings' ``` If you set these two environment variables to empty, the problem should disappear: ``` set PYTHONHOME= set PYTHONPATH= ```
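If you would rather clear them from a script than from the shell, the same effect can be had through `os.environ` (a sketch; child processes launched afterwards inherit the scrubbed environment):

```python
import os

# Remove the variables from this process's environment.
for var in ("PYTHONHOME", "PYTHONPATH"):
    os.environ.pop(var, None)   # no error if the variable is absent

print("PYTHONHOME" in os.environ)  # → False
```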
Go to your wsgi.py file under your project and do something similar to: ``` import os import sys from django.core.wsgi import get_wsgi_application sys.path.append('path/to/yourprojectenv/lib/python3.8/site-packages') ``` Restart your server and try again.
13,676
44,434,185
I am trying to create a python program that will take data from multiple excel sheets and I am kinda stuck at the moment: ``` import xlrd import xlwt workbook1 = xlrd.open_workbook('Z:\Public\Safety\SafetyDataPullProject\TestFile.xlsx', on_demand = True) worksheet = workbook1.sheet_by_index('Sheet1') sheet.cell(5, 5).value ``` This is my code currently this produces a error ``` Traceback (most recent call last): File "H:\python things\Datapull2.py", line 4, in <module> worksheet = workbook1.sheet_by_index('Sheet1') File "C:\Users\gomcrai\AppData\Local\Programs\Python\Python36-32\lib\site-packages\xlrd\book.py", line 432, in sheet_by_index return self._sheet_list[sheetx] or self.get_sheet(sheetx) TypeError: list indices must be integers or slices, not str ``` I just wanna take the information from the cell(s) I've marked and drop them into a newly created excel sheet. If anyone could help me please let me know. Side note: I know these things would probably be easier in VB but I am practicing python.
2017/06/08
[ "https://Stackoverflow.com/questions/44434185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8131036/" ]
The answer is well documented in Marshmallow's [api reference](http://marshmallow.readthedocs.io/en/latest/api_reference.html#module-marshmallow.fields). I need to use `dump_to`: ``` class ApiSchema(Schema): class Meta: strict = True time = fields.Number(dump_to='_time') id = fields.String(dump_to='_id') ```
You can override the [`dump`](https://marshmallow.readthedocs.io/en/latest/api_reference.html#marshmallow.Schema.dump) method to *prepend* underscores to selected fields before returning the serialised object: ``` class ApiSchema(Schema): class Meta: strict = True time = fields.Number() id = fields.String() def dump(self, *args, **kwargs): special = kwargs.pop('special', None) _partial = super(ApiSchema, self).dump(*args, **kwargs) if special is not None and all(f in _partial for f in special): for field in special: _partial['_{}'.format(field)] = _partial.pop(field) return _partial ``` --- ``` api_schema = ApiSchema(...) result = api_schema.dump(obj, special=('id', 'time')) ``` You can also use the [`post_dump`](http://marshmallow.readthedocs.io/en/latest/api_reference.html#marshmallow.decorators.post_dump) decorator on a separate custom method without having to override `dump`, but then, you may have to hardcode the fields to-be-modified as part of the class, which may be inflexible depending on your use case.
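The renaming itself can also be expressed as a plain post-processing step on the dumped dict, independent of the marshmallow version (a sketch with made-up field values):

```python
def prepend_underscore(data, fields):
    # Rename the selected keys after serialisation; everything else passes through.
    return {("_" + k if k in fields else k): v for k, v in data.items()}

dumped = {"time": 1.5, "id": "abc", "other": 0}
print(prepend_underscore(dumped, {"time", "id"}))
# → {'_time': 1.5, '_id': 'abc', 'other': 0}
```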
13,677
15,627,307
I am trying to figure out the best way to convert from epoch seconds (since NTP epoch 1900-01-01 00:00) to a datetime string (MM/DD/YY,hh:mm:ss) without any libraries/modules/external functions, as they are not available on an embedded device. My first thought was to look at the [Python datetime module source code](http://svn.python.org/projects/python/trunk/Modules/datetimemodule.c), however that was not very useful to me. My initial attempt in Python uses a conversion of days since 0001-01-01 to date using `getDateFromJulianDay` adapted to Python from [C++ source](http://qt.gitorious.org/qt/qt/blobs/4.7/src/corelib/tools/qdatetime.cpp#line132), combined with modulo operations to obtain time. It works, but is there a better way? ``` def getDateFromJulianDay(julianDay): # Gregorian calendar starting from October 15, 1582 # This algorithm is from: # Henry F. Fliegel and Thomas C. van Flandern. 1968. # Letters to the editor: # a machine algorithm for processing calendar dates. # Commun. ACM 11, 10 (October 1968), 657-. 
DOI=10.1145/364096.364097 # http://doi.acm.org/10.1145/364096.364097 ell = julianDay + 68569; n = (4 * ell) / 146097; ell = ell - (146097 * n + 3) / 4; i = (4000 * (ell + 1)) / 1461001; ell = ell - (1461 * i) / 4 + 31; j = (80 * ell) / 2447; d = ell - (2447 * j) / 80; ell = j / 11; m = j + 2 - (12 * ell); y = 100 * (n - 49) + i + ell; return y,m,d # NTP response (integer portion) for Monday, March 25, 2013 at 6:40:43 PM sec_since_1900 = 3573225643 # 2415021 is the number of days between 0001-01-01 and 1900-01-01, # the start of the NTP epoch (year,month,day) = getDateFromJulianDay(2415021 + sec_since_1900/60/60/24) seconds_into_day = sec_since_1900 % 86400 (hour, sec_past_hour) = divmod(seconds_into_day,3600) (min, sec) = divmod(sec_past_hour,60) print 'year:',year,'month:',month,'day:',day print 'hour:',hour,'min:',min,'sec:',sec ``` Why I'm doing this: I am getting the current time from an NTP server, and taking this time at face value for updating a hardware real time clock (RTC) that only accepts the date, time and time zone: MM/DD/YY,hh:mm:ss,±zz. I plan to implement true NTP capabilities at a later date. A discussion of time synchronization methods is best left elsewhere, such as [this question](https://stackoverflow.com/questions/5642946/small-footprint-clock-synchronization-without-ntp). Notes: * My embedded device is a Telit GC-864 cellular modem that runs Python 1.5.2+ and only has limited operators (mostly just C operators), no modules, and some of the expected built-in Python types. The exact capabilities are [here](http://www.telit.com/module/infopool/download.php?id=617), if you're interested. I write Python for this device as if I'm writing C code - not very Pythonic, I know. * I realize NTP is best used only for a time offset, however with limited options, I'm using NTP as an absolute time source (I could add the check for the NTP rollover in 2036 to enable another 136 years of operation). 
* The GC-864-V2 device with up-to-date firmware does have NTP capability, but the GC-864 I need to use is stuck on a previous release of firmware.
2013/03/26
[ "https://Stackoverflow.com/questions/15627307", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2209295/" ]
The `getDateFromJulianDay` function originally proposed is too computationally intensive for effective use on an embedded device, containing many multiplication and division operations on large `long` variables or, as originally written in C++, [`longlong` variables](http://qt.gitorious.org/qt/qt/blobs/4.7/src/corelib/tools/qdatetime.cpp#line140). I think I hunted down an efficient epoch to date algorithm for an embedded device. After fruitless Googling, I found myself back on Stack Overflow, and found the question [Converting epoch time to “real” date/time](https://stackoverflow.com/questions/1692184/converting-epoch-time-to-real-date-time), asking about self-written epoch time to date implementation and provides a suitable algorithm. This [answer](https://stackoverflow.com/a/1692210/2209295) to the question references the [gmtime.c source code](http://www.raspberryginger.com/jbailey/minix/html/gmtime_8c-source.html), and provided the source in C I needed to write a Python conversion algorithm: ``` /* * gmtime - convert the calendar time into broken down time */ /* $Header: /opt/proj/minix/cvsroot/src/lib/ansi/gmtime.c,v 1.1.1.1 2005/04/21 14:56:05 beng Exp $ */ #include <time.h> #include <limits.h> #include "loc_time.h" struct tm * gmtime(register const time_t *timer) { static struct tm br_time; register struct tm *timep = &br_time; time_t time = *timer; register unsigned long dayclock, dayno; int year = EPOCH_YR; dayclock = (unsigned long)time % SECS_DAY; dayno = (unsigned long)time / SECS_DAY; timep->tm_sec = dayclock % 60; timep->tm_min = (dayclock % 3600) / 60; timep->tm_hour = dayclock / 3600; timep->tm_wday = (dayno + 4) % 7; /* day 0 was a thursday */ while (dayno >= YEARSIZE(year)) { dayno -= YEARSIZE(year); year++; } timep->tm_year = year - YEAR0; timep->tm_yday = dayno; timep->tm_mon = 0; while (dayno >= _ytab[LEAPYEAR(year)][timep->tm_mon]) { dayno -= _ytab[LEAPYEAR(year)][timep->tm_mon]; timep->tm_mon++; } timep->tm_mday = dayno + 1; timep->tm_isdst = 
0; return timep; } ``` Additionally, the [analysis](https://stackoverflow.com/a/3136466/2209295) of the question [Why is gmtime implemented this way?](https://stackoverflow.com/questions/3136341/why-is-gmtime-implemented-this-way) helped affirm that the `gmtime` function is fairly efficient. Using the [raspberryginger.com minix Doxygen documentation site](http://www.raspberryginger.com/jbailey/minix/html/), I was able to find the C macros and constants that were included in [gmtime.c](http://www.raspberryginger.com/jbailey/minix/html/gmtime_8c-source.html) from [loc\_time.h](http://www.raspberryginger.com/jbailey/minix/html/loc__time_8h-source.html). The relevant code snippet: ``` #define YEAR0 1900 /* the first year */ #define EPOCH_YR 1970 /* EPOCH = Jan 1 1970 00:00:00 */ #define SECS_DAY (24L * 60L * 60L) #define LEAPYEAR(year) (!((year) % 4) && (((year) % 100) || !((year) % 400))) #define YEARSIZE(year) (LEAPYEAR(year) ? 366 : 365) #define FIRSTSUNDAY(timp) (((timp)->tm_yday - (timp)->tm_wday + 420) % 7) #define FIRSTDAYOF(timp) (((timp)->tm_wday - (timp)->tm_yday + 420) % 7) #define TIME_MAX ULONG_MAX #define ABB_LEN 3 extern const int _ytab[2][10]; ``` And the `extern const int _ytab` was defined in [misc.c](http://www.raspberryginger.com/jbailey/minix/html/lib_2ansi_2misc_8c-source.html#l00082): ``` const int _ytab[2][12] = { { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 }, { 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 } }; ``` Some other things I found: * The [gmtime.c File Reference](http://www.raspberryginger.com/jbailey/minix/html/gmtime_8c.html) was very helpful for finding dependencies. * The `gmtime` function starts indexing the Month, Day of Week, and Day of Year at the number zero, (maximal ranges of 0-11, 0-6, 0-365, respectively), whereas the Day of Month starts at the number 1, (1-31), see the [IBM `gmtime()` reference](http://publib.boulder.ibm.com/infocenter/zvm/v6r1/index.jsp?topic=/com.ibm.zvm.v610.edclv/gmtime.htm). 
--- I re-wrote the `gmtime` function for Python 1.5.2+: ```py def is_leap_year(year): return ( not ((year) % 4) and ( ((year) % 100) != 0 or (not((year) % 400)) ) ) def year_size(year): if is_leap_year(year): return 366 else: return 365 def ntp_time_to_date(ntp_time): year = 1900 # EPOCH_YR for NTP ytab = [ [ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], [ 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] ] (dayno,dayclock) = divmod(ntp_time, 86400L) dayno = int(dayno) # Calculate time of day from seconds on the day's clock. (hour, sec_past_hour) = divmod(dayclock,3600) hour = int(hour) (min, sec) = divmod(int(sec_past_hour),60) while (dayno >= year_size(year)): dayno = dayno - year_size(year) year = year + 1 month = 1 # NOTE: month range is (1-12), so ytab is indexed with month - 1 while (dayno >= ytab[is_leap_year(year)][month - 1]): dayno = dayno - ytab[is_leap_year(year)][month - 1] month = month + 1 day = dayno + 1 return (year, month, day, hour, min, sec) ``` Modifications I made re-factoring the C++ `gmtime` function to my Python function `ntp_time_to_date(ntp_time)`: * Changed epoch from UNIX epoch of 1970 to NTP epoch of 1900 ([the prime epoch for NTP](http://www.eecis.udel.edu/~mills/y2k.html#era)). * Slightly streamlined time of day calculation. + Comparing time of day calculation of `gmtime` to `ntp_time_to_date`: - Both `(dayclock % 3600) / 60` and `dayclock / 3600` occur behind the scenes in `divmod(dayclock,3600)` and `divmod(sec_past_hour,60)`. - Only real difference is that `divmod(sec_past_hour,60)` avoids modulo of `dayclock` (0-86399) by 60 via `dayclock % 60`, and instead does modulo of `sec_past_hour` (0-3599) by 60 within `divmod(sec_past_hour,60)`. * Removed variables and code I did not need, for example, day of week. * Changed indexing of Month to start at 1, so Month range is (1-12) instead of (0-11); the `ytab` lookup therefore uses `month - 1` (and `is_leap_year` is written to return 0/1 so it is safe as a list index). * Type cast variables away from `long` as soon as values were less than 65535 to greatly decrease code execution time.
+ The required `long` variables are: - `ntp_time`, seconds since 1900 (0-4294967295) - `dayclock`, seconds into day (0-86399) + The largest of the rest of the variables is the calculated year within the date. --- The Python `ntp_time_to_date` function (with its dependencies) runs successfully on the Telit GC-864 on an embedded version of Python 1.5.2+, as well as on Python 2.7.3, but of course use the datetime library if you can.
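For anyone on a current interpreter, the algorithm can be cross-checked against the standard library — a Python 3 re-sketch with the 1.5.2 workarounds (`long` casts, parenthesized tuples) dropped, and the 1-based month reading its table entry via `month - 1`:

```python
import datetime

YTAB = [
    [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],  # common years
    [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],  # leap years
]

def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def ntp_time_to_date(ntp_time):
    year = 1900                                  # NTP prime epoch
    dayno, dayclock = divmod(ntp_time, 86400)
    hour, sec_past_hour = divmod(dayclock, 3600)
    minute, sec = divmod(sec_past_hour, 60)
    while dayno >= 365 + is_leap_year(year):     # walk forward year by year
        dayno -= 365 + is_leap_year(year)
        year += 1
    month = 1                                    # month range is 1-12
    while dayno >= YTAB[is_leap_year(year)][month - 1]:
        dayno -= YTAB[is_leap_year(year)][month - 1]
        month += 1
    return (year, month, dayno + 1, hour, minute, sec)

# NTP value from the question: Monday, March 25, 2013, 18:40:43 UTC
print(ntp_time_to_date(3573225643))
# agrees with datetime arithmetic from the same epoch
print(datetime.datetime(1900, 1, 1) + datetime.timedelta(seconds=3573225643))
```

Looping over every year since 1900 is exactly the cost the timing section below measures, so on the GC-864 the arithmetic Julian-day approach can still win despite this version's simplicity.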
TL;DR ===== If you are using a Telit GC-864, the Python interpreter seemingly inserts some sort of delay in-between each line of code execution. For a Telit GC-864, the function in my question `getDateFromJulianDay(julianDay)` is faster than the function in my answer, `ntp_time_to_date(ntp_time)`. More Detail =========== The number of code *lines* dominates execution time on a GC-864 more than the *complexity* of the code - weird, I know. The function `getDateFromJulianDay(julianDay)` in my question has a handful of complex operations, maybe 15 lines of code. The function in my answer `ntp_time_to_date(ntp_time)` has simpler computation complexity, but the `while` loops cause 100+ lines of code execution: * One loop counts from 1900 to the current year * Another loop counts from month 1 to the current month Test Results ============ Timing test results running on an actual GC-864 (note: **not** a GC-864-V2) using the same NTP time input for each trial (each function outputs "3/25/2013 18:40"). Timing was performed using printf statement debugging, and a serial terminal on a computer would time stamp each line sent by the GC-864. `getDateFromJulianDay(julianDay)` Trials: * 0.3802 seconds * 0.3370 seconds * 0.3370 seconds * Average: 0.3514 seconds `ntp_time_to_date(ntp_time)` Trials: * 0.8899 seconds * 0.9072 seconds * 0.8986 seconds * Average: 0.8986 seconds The variability partially stems from the GC-864 cell modem servicing cellular network tasks periodically. For completeness, the optimization of typecasting the `long` variables to `int` as quickly as possible in `ntp_time_to_date(ntp_time)` has a fairly significant effect. Without this optimization: * 2.3155 seconds * 1.5034 seconds * 1.5293 seconds * 2.0995 seconds * 2.0909 seconds * Average: 1.9255 seconds Doing anything computationally involved on a Telit GC-864 running .pyo files in Python 1.5.2+ is not a great idea. 
Using the GC-864-V2, which has a built-in NTP capability is a possible solution for someone who comes across this issue. Moreover, newer machine-to-machine (M2M) aka internet of things (IoT) cell phone modems are much more capable. If you have a similar issue with the GC-864, **consider using a newer and more modern cell phone modem**.
13,683
46,367,625
I am trying to detect the language of a string with the langdetect package, but it does not work. ``` from langdetect import detect word_string = "Books are for reading" print(detect(word_string)) ``` If I use the code above I get the error `ImportError: cannot import name detect` When I replace detect with \* ``` from langdetect import * word_string = "Books are for reading" print(detect(word_string)) ``` I get the error: `NameError: name 'detect' is not defined` So my question is: how can I solve these problems? **So the problem was that my langdetect package and my python file had the same name.... Thank you for your answers.**
2017/09/22
[ "https://Stackoverflow.com/questions/46367625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6152972/" ]
Try installing it first by running `pip install langdetect` in a terminal (or `!pip install langdetect` from a notebook), and then `import langdetect`
You can not import it because it is probably not there. See if the name is correct. Seeing other questions here on SO I assume you mean detect\_langs.
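Since the root cause here turned out to be a local file named `langdetect.py` shadowing the installed package, a quick diagnostic is to ask Python *where* it would import a module from — a small standard-library sketch (shown on `json` so it runs anywhere; substitute `langdetect`):

```python
import importlib.util

def module_origin(name):
    """Return the file path Python would import `name` from, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# If this prints a path inside your own project instead of site-packages,
# one of your files is shadowing the installed package.
print(module_origin('json'))
print(module_origin('definitely_not_installed_xyz'))  # None
```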
13,684
39,370,879
I have a dataframe df\_energy2 ``` df_energy2.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 29974 entries, 0 to 29973 Data columns (total 4 columns): TIMESTAMP 29974 non-null datetime64[ns] P_ACT_KW 29974 non-null int64 PERIODE_TARIF 29974 non-null object P_SOUSCR 29974 non-null int64 dtypes: datetime64[ns](1), int64(2), object(1) memory usage: 936.8+ KB ``` with this structure : ``` df_energy2.head() TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR 2016-01-01 00:00:00 116 HC 250 2016-01-01 00:10:00 121 HC 250 ``` Is there any python function which can extract hour from `TIMESTAMP`? Kind regards
2016/09/07
[ "https://Stackoverflow.com/questions/39370879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2278337/" ]
I think you need [`dt.hour`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.hour.html): ``` print (df.TIMESTAMP.dt.hour) 0 0 1 0 Name: TIMESTAMP, dtype: int64 df['hours'] = df.TIMESTAMP.dt.hour print (df) TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR hours 0 2016-01-01 00:00:00 116 HC 250 0 1 2016-01-01 00:10:00 121 HC 250 0 ```
Given your data: > > > ``` > df_energy2.head() > > TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR > 2016-01-01 00:00:00 116 HC 250 > 2016-01-01 00:10:00 121 HC 250 > > ``` > > You have timestamp as the index. For extracting hours from timestamp where you have it as the index of the dataframe: ``` hours = df_energy2.index.hour ``` --- **Edit**: Yes, jezrael you're right. Putting what he has stated: pandas dataframe has a property for this i.e. `dt` : ``` <dataframe>.<ts_column>.dt.hour ``` Example in your context - the column with date is `TIMESTAMP` ``` df.TIMESTAMP.dt.hour ``` --- A similar question - [Pandas, dataframe with a datetime64 column, querying by hour](https://stackoverflow.com/questions/21624217/pandas-dataframe-with-a-datetime64-column-querying-by-hour)
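A self-contained version of the `dt.hour` approach, on a frame built inline (the `TIMESTAMP` column name follows the question; requires pandas):

```python
import pandas as pd

df = pd.DataFrame({
    'TIMESTAMP': pd.to_datetime(['2016-01-01 00:00:00',
                                 '2016-01-01 13:10:00']),
    'P_ACT_KW': [116, 121],
})
# the .dt accessor exposes datetime parts (hour, minute, dayofweek, ...)
df['hours'] = df['TIMESTAMP'].dt.hour
print(df['hours'].tolist())   # [0, 13]
```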
13,686
16,027,942
I've written a high level motor controller in Python, and have got to a point where I want to go a little lower level to get some speed, so I'm interested in coding those bits in C. I don't have much experience with C, but the math I'm working on is pretty straightforward, so I'm sure I can implement with a minimal amount of banging my head against the wall. What I'm not sure about is how best to invoke this compiled C program in order to pipe it's outputs back into my high-level python controller. I've used a little bit of ctypes, but only to pull some functions from a manufacfturer-supplied DLL...not sure if that is an appropriate path to go down in this case. Any thoughts?
2013/04/16
[ "https://Stackoverflow.com/questions/16027942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1481457/" ]
You can take a look at this tutorial [here](http://csl.name/C-functions-from-Python/). Also, a more reliable example on the official python website, [here](http://docs.python.org/2/extending/extending.html#a-simple-example). For example, `sum.h` function ``` int sum(int a, int b) ``` A file named, `module.c`, ``` #include <Python.h> #include "sum.h" static PyObject* mod_sum(PyObject *self, PyObject *args) { int a; int b; int s; if (!PyArg_ParseTuple(args,"ii",&a,&b)) return NULL; s = sum(a,b); return Py_BuildValue("i",s); } static PyMethodDef ModMethods[] = { {"sum", mod_sum, METH_VARARGS, "Description.."}, {NULL,NULL,0,NULL} }; /* Python 2 requires the init function to be named init<modulename> */ PyMODINIT_FUNC initmodule(void) { PyObject *m; m = Py_InitModule("module",ModMethods); if (m == NULL) return; } ``` Python ``` import module s = module.sum(3,5) ```
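The extension route above needs a C compiler and a build step; since the question already mentions `ctypes`, it is worth noting that for plain math you can often call an already-compiled shared library directly. A sketch using the C math library's `sqrt` (library lookup is platform-dependent; the fallback name assumes Linux):

```python
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library('m') or 'libm.so.6'  # Linux fallback
libm = ctypes.CDLL(libm_path)

# Declare the C signature so ctypes converts arguments/results correctly;
# without this, ctypes assumes int and the result would be garbage.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

The same `restype`/`argtypes` declarations apply to your own compiled `.so`/`.dll`, which avoids writing any `Python.h` glue at all.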
Another option: try [numba](http://numba.pydata.org/). It gives you C-like speed for free: just import numba and @autojit your functions, for a wonderful speed increase. Won't work if you have complicated data types, but if you're looping and jumping around array indices, it might be just what you're looking for.
13,687
21,119,958
I am trying to negate the value returned by one function. Consider the code to be ``` def greater(a,b): return a>b check = negate(greater) assert check(8,9) ``` OR ``` def equal(a,b): return a==b check = negate(equal) assert check("python","java") ``` How should I define the ***NEGATE*** function???
2014/01/14
[ "https://Stackoverflow.com/questions/21119958", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3181773/" ]
Like this: ``` def negate(func): def result(*args): return not func(*args) return result ``` This is a function that creates and returns a new function; the new function invokes the original function and applies `not` to its return value.
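Exercising the wrapper above with both examples from the question (a quick sketch):

```python
def negate(func):
    def result(*args):
        return not func(*args)
    return result

def greater(a, b):
    return a > b

def equal(a, b):
    return a == b

check_not_greater = negate(greater)
check_not_equal = negate(equal)

assert check_not_greater(8, 9)            # 8 > 9 is False, so negated is True
assert check_not_equal("python", "java")  # different strings -> True
print("both assertions passed")
```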
Use the `not` operator in a decorator: ``` from functools import wraps def negate(func): @wraps(func) def wrapper(*args, **kw): return not func(*args, **kw) return wrapper ``` The above decorator returns a wrapper function that applies `not` to the return value of the wrapped function. The [`@functools.wraps()` decorator](http://docs.python.org/2/library/functools.html#functools.wraps) in the above example is optional but makes sure the wrapper gains the docstring and other introspectable attributes from the original function, making it behave more closely like the wrapped function. Demo: ``` >>> from functools import wraps >>> def negate(func): ... @wraps(func) ... def wrapper(*args, **kw): ... return not func(*args, **kw) ... return wrapper ... >>> def greater(a,b): ... return a>b ... >>> check = negate(greater) >>> check(8, 9) True ```
13,690
14,749,379
Because *programming* is one of my favorite hobbies I started a small project in python. I'm trying to make a nutritional calculator for daily routine, see the code below: ``` # Name: nutri.py # Author: pyn my_dict = {'chicken':(40, 50, 10), 'pork':(50, 30, 20) } foods = raw_input("Enter your food: ") #explode / split the user input foods_list = foods.split(',') #returns a list separated by comma print foods_list ``` What I want to do: 1. Get user input and store it in a variable 2. Search the dictionary based on user input and return the asociated values if the keys / foods exists 3. Sum these values in different nutritional chunks and return them, something like: You ate x protein, y carbs and z fat etc. Any ideas are welcome.
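A minimal sketch of steps 2–3 as listed — look up each entered food and sum the nutritional chunks. One assumption: the question never says what the tuples mean, so `(protein, carbs, fat)` is guessed here:

```python
my_dict = {'chicken': (40, 50, 10), 'pork': (50, 30, 20)}

def totals(food_names, table):
    protein = carbs = fat = 0
    for name in food_names:
        name = name.strip()        # tolerate input like "chicken, pork"
        if name in table:          # silently skip unknown foods
            p, c, f = table[name]
            protein += p
            carbs += c
            fat += f
    return protein, carbs, fat

foods_list = 'chicken, pork'.split(',')   # stand-in for the raw_input() value
p, c, f = totals(foods_list, my_dict)
print("You ate %d protein, %d carbs and %d fat" % (p, c, f))
```

Instead of skipping unknown foods you could collect them and report them back to the user, which makes typos visible.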
2013/02/07
[ "https://Stackoverflow.com/questions/14749379", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2050432/" ]
You could use the sibling selector. As long as div's share the same parent, you can still affect them with hover [DEMO](http://jsfiddle.net/8LFWR/) Vital Code: ``` #gestaltung_cd:hover ~ #mainhexa1, #gestaltung_illu:hover ~ #mainhexa2, #gestaltung_klassisch:hover ~ #mainhexa3 { display: block; } ```
Here's an example of how to do the first one and you'd just do the same for the other two with the relevant IDs. ``` $("#gestaltung_cd").hover( function () { $("#mainhexa1").show(); }, function () { $("#mainhexa1").hide(); } ); ```
13,693
7,556,499
Despite using the search function I've been unable to find an answer. I have two assumptions, but don't know to what extent they apply. Now the problem: I'd like to plot a contour. For this I have the following python code: ``` import numpy as np import matplotlib.pyplot as plt xi=list_of_distance yi=list_of_angle x = np.arange(0,54,0.2) y = np.arange(0,180,2.0) Z = np.histogram2d(xi,yi,bins=(274,90)) X, Y = np.meshgrid(x, y) plt.contour(X,Y,Z) plt.ylabel('angles') plt.xlabel('distance') plt.colorbar() plt.show() ``` xi and yi are lists containing float values. x and y define the 'intervals' ...for example: x generates a list with values from 0 to 54 in 0.2 steps y generates a list with values from 0 to 180 in 2.0 steps with Z I make use of the numpy function to create 2D histograms. Actually this seems to be the spot that causes trouble. When the function plt.contour(X,Y,Z) is called, the following error message emerges: > > ... File "/usr/lib/pymodules/python2.7/numpy/ma/core.py", line 2641, in **new** > \_data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) > ValueError: setting an array element with a sequence. > > > Now to the assumptions about what may cause this problem: 1. It seems like it expects an array, but instead of a numpy array it receives a list, or 2. We have a row that is shorter than the others (I came to that thought after a colleague ran into such an issue a year ago - there it was fixed by figuring out that the last row was 2 elements shorter than all the others...)
2011/09/26
[ "https://Stackoverflow.com/questions/7556499", "https://Stackoverflow.com", "https://Stackoverflow.com/users/329586/" ]
As @rocksportrocker implies, you need to take into account that `histogram2d` returns the edges in addition to the histogram. Another detail is that you probably want to explicitly pass in a range, otherwise one will be chosen for you based on the actual min and max values in your data. You then want to convert the edges to cell centers for the plot. Something like this: ``` import numpy as np import matplotlib.pyplot as plt n = 1000000 # how many data points xmin, xmax = 0.0, 54.0 # distances ymin, ymax = 0.0, 180.0 # angles # make up some random data xi=np.random.normal(xmax/2.0, xmax/4.0, n) yi=np.random.normal(ymax/3.0, ymax/3.0, n) Z, xedges, yedges = np.histogram2d(xi,yi, bins=(270,90), range=[[xmin, xmax], [ymin, ymax]]) # find the cell centers from the cell edges x = 0.5*(xedges[:-1] + xedges[1:]) y = 0.5*(yedges[:-1] + yedges[1:]) # promote to 2D arrays Y, X = np.meshgrid(y, x) plt.contour(X,Y,Z) plt.ylabel('angles') plt.xlabel('distance') plt.colorbar() plt.savefig("hist2d.png") ``` yields a contour plot like this: ![hist2d.png](https://i.stack.imgur.com/AaL8V.png) but personally I wouldn't use contours in this case, since the histogram is likely to be noisy.
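The edges-to-centers step is where the original code went wrong — it passed the whole `histogram2d` return tuple straight to `contour`. The fix can be checked without any plotting at all; a small sketch on toy data (requires numpy):

```python
import numpy as np

xi = [1.0, 2.0, 2.5, 3.0]      # toy distances
yi = [10.0, 20.0, 20.0, 30.0]  # toy angles

# histogram2d returns THREE things: the counts and both sets of bin edges
Z, xedges, yedges = np.histogram2d(xi, yi, bins=(4, 4),
                                   range=[[0.0, 4.0], [0.0, 40.0]])
x = 0.5 * (xedges[:-1] + xedges[1:])   # bin centers: one fewer than edges
y = 0.5 * (yedges[:-1] + yedges[1:])

print(Z.shape)   # (4, 4): now shaped to match a meshgrid of x and y
print(x)         # centers 0.5, 1.5, 2.5, 3.5
print(y)         # centers 5, 15, 25, 35
```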
Your traceback indicates that the error does not raise from the call to matplotlib, it is numpy which raises the ValueError.
13,703
46,437,882
So this is driving me crazy. I have python3, mod\_wsgi, apache, and a virtual host that work great, as I have several other wsgi scripts that work fine on the server. I also have a django app that works great when I run the dev server. I have checked that "ldd mod\_wsgi.so" is linked correctly against python3.5. Whenever I try to access my site, I get an error and the apache log states: ImportError: No module named 'protectionprofiles' ('protectionprofiles' is my site name). The following is my virtual host config ``` <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/html ServerName <my ip> WSGIScriptAlias /certs /var/www/scripts/CavsCertSearch/CavsCertSearch/certstrip.wsgi WSGIScriptAlias /testcerts /var/www/scripts/CavsCertSearchTest/CavsCertSearch/certstriptest.wsgi WSGIScriptAlias /protectionprofiles /var/www/protectionprofiles/protectionprofiles/wsgi.py <Directory /var/www/protectionprofiles/protectionprofiles> <Files wsgi.py> Require all granted </Files> </Directory> </VirtualHost> ``` my site app is the protectionprofiles alias. I have no idea what the issue is; I have tried following dozens of different apache tutorials and none of them seem to work. Any help is greatly appreciated. ------ To add something else I tried -------- so I added the following 2 commands inside the virtual host ``` WSGIDaemonProcess protectionprofiles python-path=/var/www/protectionprofiles/ WSGIProcessGroup protectionprofiles ``` and now I get a different error ``` Error was: No module named 'django.db.backends.postgresql' ``` which is my backend, but the dev server works fine?
Below is my database configuration ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'USER': 'myuser', 'PASSWORD': 'abcd1234', 'HOST': '127.0.0.1', 'PORT': '5432', 'NAME': 'protectionprofile', } } ``` ----Another error ---- Occasionally I get ``` RuntimeError: populate() isn't reentrant ``` ---- Another error!!--- Now when I update ``` WSGIDaemonProcess protectionprofiles python-path=/var/www/protectionprofiles/:/usr/lib/python3/dist-packages/django ``` I get ``` ImportError: cannot import name 'SimpleCookie' ``` ---Another weird thing is that when I have ``` WSGIProcessGroup protectionprofiles ``` enabled I don't get the error that it can't find the module "protectionprofiles", but then none of my other wsgi scripts work! They only work when that's not in there. Any explanation of that would be very helpful.
2017/09/27
[ "https://Stackoverflow.com/questions/46437882", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4216567/" ]
You probably need to tell mod\_wsgi where your project code is. For embedded mode this is done with `WSGIPythonPath` directive. You should preferably though use daemon mode, in which case you would use `python-path` option to `WSGIDaemonProcess` directive.
ok I finally figured out how to fix the issue, but I'm not sure exactly why it works. First off, when I was mixing my own wsgi scripts and django, I had to specify the daemon process only for the script alias that was django ``` WSGIScriptAlias /certs /var/www/scripts/CavsCertSearch/CavsCertSearch/certstrip.wsgi WSGIScriptAlias /testcerts /var/www/scripts/CavsCertSearchTest/CavsCertSearch/certstriptest.wsgi WSGIScriptAlias /debug /var/www/scripts/debug/debug.wsgi WSGIScriptAlias / /var/www/protectionprofiles/protectionprofiles/wsgi.py process-group=protectionprofiles WSGIDaemonProcess protectionprofiles python-path=/var/www/protectionprofiles/ WSGIProcessGroup protectionprofiles ``` Then in the app/settings.py, in dev mode one backend worked, but when run in mod\_wsgi I needed a different one: settings.py: ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'databse', 'USER': 'user', 'PASSWORD': '', 'HOST': 'localhost', 'PORT': '5432', } } ``` In the development server `'django.db.backends.postgresql'` worked fine, but in release I had to specify the exact module type. I am not sure why, but it works now!!
13,704
12,957,921
I am attempting to run a python program that can build a dictionary from a file with a list of words, with each word given a score and standard deviation. My program looks like this: ``` theFile = open('word-happiness.csv' , 'r') theFile.close() def make_happiness_table(filename): '''make_happiness_table: string -> dict creates a dictionary of happiness scores from the given file''' with open(filename) as f: d = dict( line.split(' ') for line in f) return d make_happiness_table("word-happiness.csv") table = make_happiness_table("word-happiness.csv") (score, stddev) = table['hunger'] print("the score for 'hunger' is %f" % score) ``` My .csv file is in the form ``` word{TAB}score{TAB}standard_deviation ``` and I am trying to create the dictionary in that way. How can I create such a dictionary so that I can print a word such as 'hunger' from the function and get its score and std deviation?
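A sketch of the lookup the question describes — split on tabs (not spaces, since the file is tab-separated) and keep a `(score, stddev)` tuple per word. The file is simulated in memory here, and the sample scores are made up:

```python
import io

def make_happiness_table(lines):
    """Build word -> (score, stddev) from word<TAB>score<TAB>stddev lines."""
    table = {}
    for line in lines:
        word, score, stddev = line.rstrip('\n').split('\t')
        table[word] = (float(score), float(stddev))
    return table

sample = io.StringIO("hunger\t2.38\t1.04\nlaughter\t8.50\t0.93\n")
table = make_happiness_table(sample)
score, stddev = table['hunger']
print("the score for 'hunger' is %f" % score)
```

With a real file, pass `open(filename)` inside a `with` block instead of the `StringIO` object.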
2012/10/18
[ "https://Stackoverflow.com/questions/12957921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1712920/" ]
The template parameter to [`std::normal_distribution`](http://en.cppreference.com/w/cpp/numeric/random/normal_distribution) must be a floating-point type (`float`, `double`, or `long double`). Using anything else will result in undefined behavior. Since normal distribution is a continuous distribution, it isn't really clear what you expect it to do when using an `int` type. Maybe you can get the result you want by using `normal_distribution<double>` and rounding the result to `int`.
You can use `binomial_distribution` with default probability value of 0.5. [link](http://www.cplusplus.com/reference/random/binomial_distribution/binomial_distribution/) It will return integer values in range [0,t], with mean at t/2 ( in case t is even else (t+1)/2, (t-1)/2 have equal prob.). You can set the value of t accordingly and shift by adding a constant to the result if needed. Binomial distribution is discrete approximation of normal distribution ([link](http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation)). Normal distributions theoretically have no lower/upper limits. I prefer to use following thin wrapper over the original: ``` template <typename T> class NormalDistribution { private: std::mt19937 mt; std::normal_distribution<T> dis; public: NormalDistribution(T mu, T sigma):dis(mu, sigma) { std::random_device rd; mt.seed(rd()); } NormalDistribution(T mu, T sigma, unsigned seed ):dis(mu, sigma) { mt.seed(seed); } T random() { return dis(mt); } }; template <> class NormalDistribution<int> { private: std::mt19937 mt; std::binomial_distribution<int> dis; int _min; public: NormalDistribution(int min, int max):dis(max-min) { std::random_device rd; mt.seed(rd()); } NormalDistribution(int min, int max, unsigned seed):dis(max-min), _min(min) { mt.seed(seed); } int random() { return dis(mt)+_min; } }; ```
13,705
51,572,168
when i want to see my django website in their server....i open cmd and go to manage.py directory: ``` C:\Users\computer house\Desktop\ahmed> ``` and then i type : ``` python manage.py runserver ``` but i see this error : ``` Traceback (most recent call last): File "manage.py", line 15, in <module> execute_from_command_line(sys.argv) File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\core\management\__init__.py", line 371, in execute_from_command_line utility.execute() File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\core\management\__init__.py", line 317, in execute settings.INSTALLED_APPS File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\conf\__init__.py", line 56, in __getattr__ self._setup(name) File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\conf\__init__.py", line 43, in _setup self._wrapped = Settings(settings_module) File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\conf\__init__.py", line 106, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "C:\Users\computer house\AppData\Local\Programs\Python\Python36-32\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked ModuleNotFoundError: No module named 'ahmed' ``` what can i do to solve this problem and run my website in the 
server correctly?
2018/07/28
[ "https://Stackoverflow.com/questions/51572168", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10102365/" ]
As far as I understood, you have started your project with the following command: ``` C:\Users\computer house\Desktop> django-admin startproject ahmed ``` After that I assume you have a similar file structure: ``` ahmed/ ├── ahmed/ # make sure this directory exists │ ├── __init__.py # make sure this file exists. │ ├── settings.py │ ├── urls.py │ └── wsgi.py └── manage.py ``` Either the inner ahmed directory or `ahmed/__init__.py` does not exist. If `__init__.py` does not exist, please create an empty file with that name. A Python module is a file or directory. Every file with a `.py` extension is treated as a Python module by the interpreter. If you want a directory to be a Python module you need to create `__init__.py` inside that directory. The file is run at import time of the module; if you do not know what you are doing, leave `__init__.py` empty.
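The role of `__init__.py` is easy to demonstrate in isolation — a sketch that builds a throwaway package on disk and imports it the way Django's settings lookup does (`ahmed_demo` is a made-up name; mod_wsgi's `python-path` option plays the same role as the `sys.path` insert here):

```python
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, 'ahmed_demo')                 # hypothetical project name
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()   # marks a regular package
open(os.path.join(pkg, 'settings.py'), 'w').close()

sys.path.insert(0, tmp)   # the parent directory must be on the import path
settings = importlib.import_module('ahmed_demo.settings')
print(settings.__name__)  # ahmed_demo.settings
```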
Check that your current directory contains a file named `__init__.py`, and also that the directory named `ahmed` contains `__init__.py`.
13,706
10,363,853
I'm trying to talk to a child process using the python subprocess.Popen() call. In my real code, I'm implementing a type of IPC, so I want to write some data, read the response, write some more data, read the response, and so on. Because of this, I cannot use Popen.communicate(), which otherwise works well for the simple case. This code shows my problem. It never even gets the first response, hangs at the first "Reading result". Why? How can I make this work as I expect? ``` import subprocess p = subprocess.Popen(["sed", 's/a/x/g'], stdout = subprocess.PIPE, stdin = subprocess.PIPE) p.stdin.write("abc\n") print "Reading result:" print p.stdout.readline() p.stdin.write("cat\n") print "Reading result:" print p.stdout.readline() ```
2012/04/28
[ "https://Stackoverflow.com/questions/10363853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/526797/" ]
I would try to use `Popen().communicate()` if you can as it does a lot of nice things for you, but if you need to use `Popen()` exactly as you described, you'll need to set sed to flush its buffer after newlines with the `-l` option: ``` p = subprocess.Popen(['sed', '-l', 's/a/x/g'], stdout=subprocess.PIPE, stdin=subprocess.PIPE) ``` and your code should work fine.
`sed`'s output is buffered and it only outputs its data once enough has accumulated or the input stream is exhausted and closed. Try this: ``` import subprocess p = subprocess.Popen(["sed", 's/a/x/g'], stdout = subprocess.PIPE, stdin = subprocess.PIPE) p.stdin.write("abc\n") p.stdin.write("cat\n") p.stdin.close() print "Reading result 1:" print p.stdout.readline() print "Reading result 2:" print p.stdout.readline() ``` Be aware that this cannot be done reliably with huge data, as writing to `stdin` blocks once the buffer is full. The best way to do it is by using [`communicate()`](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate).
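A portable way to see the interactive write/read pattern actually work is to use a child that flushes after every line. This sketch substitutes a tiny Python child for `sed` (purely illustrative), so each `readline` returns immediately:

```python
import subprocess
import sys

# A child process that echoes stdin with 'a' -> 'x', flushing after each line.
child_code = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write(line.replace('a', 'x'))\n"
    "    sys.stdout.flush()\n"
)

p = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    universal_newlines=True,
)

p.stdin.write("abc\n")
p.stdin.flush()            # push the parent's buffered bytes out too
first = p.stdout.readline()

p.stdin.write("cat\n")
p.stdin.flush()
second = p.stdout.readline()

p.stdin.close()
p.wait()
```

The same deadlock caveat applies: for large volumes of data, `communicate()` or non-blocking I/O is still the safer route.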
13,707
18,389,273
I'm using python and having some trouble reading the properties of a file, when the filename includes non-ASCII characters. One of the files for example is named: `0-Channel-https∺∯∯services.apps.microsoft.com∯browse∯6.2.9200-1∯615∯Channel.dat` When I run this: ```python list2 = os.listdir('C:\\Users\\James\\AppData\\Local\\Microsoft\\Windows Store\\Cache Medium IL\\0\\') for data in list2: print os.path.getmtime(data) + '\n' ``` I get the error: ```none WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: '0-Channel-https???services.apps.microsoft.com?browse?6.2.9200-1?615?Channel.dat' ``` I assume its caused by the special chars because the code works fine with other file names with only ASCII chars. Does anyone know of a way to query the filesystem properties of a file named like this?
2013/08/22
[ "https://Stackoverflow.com/questions/18389273", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1084310/" ]
If this is python 2.x, its an encoding issue. If you pass a unicode string to os.listdir such as `u'C:\\my\\pathname'`, it will return unicode strings and they should have the non-ascii chars encoded correctly. See [Unicode Filenames](http://docs.python.org/2/howto/unicode.html#unicode-filenames) in the docs. Quoting the doc: > > os.listdir(), which returns filenames, raises an issue: should it return the Unicode version of filenames, or should it return 8-bit strings containing the encoded versions? os.listdir() will do both, depending on whether you provided the directory path as an 8-bit string or a Unicode string. If you pass a Unicode string as the path, filenames will be decoded using the filesystem’s encoding and a list of Unicode strings will be returned, while passing an 8-bit path will return the 8-bit versions of the filenames. For example, assuming the default filesystem encoding is UTF-8, running the following program: > > > this code should work... ``` directory_name = u'C:\\Users\\James\\AppData\\Local\\Microsoft\\Windows Store\\Cache Medium IL\\0\\' list2 = os.listdir(directory_name) for data in list2: print data, os.path.getmtime(os.path.join(directory_name, data)) ```
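The other pitfall in the question's snippet is that `os.listdir` returns bare names, so `getmtime` must be given the joined path, as the answer's code does. A small self-contained sketch (the directory is temporary and the file name is made up to include non-ASCII characters):

```python
import os
import tempfile

# A directory holding a file whose name contains non-ASCII characters.
directory_name = tempfile.mkdtemp()
fancy_name = u'0-Channel-https\u223a\u222fservices.example.dat'
open(os.path.join(directory_name, fancy_name), 'w').close()

mtimes = {}
for data in os.listdir(directory_name):
    # Join with the directory first; a bare name would be resolved against
    # the current working directory and typically fail.
    mtimes[data] = os.path.getmtime(os.path.join(directory_name, data))
```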
As you are in windows you should try with ntpath module instead of os.path ``` from ntpath import getmtime ``` As I don't have windows I can't test it. Every os has a different path convention, so, Python provides a specific module for the most common operative systems.
13,708
7,281,348
I'm building a website which allows you to write python code online just like <http://shell.appspot.com/> or <http://ideone.com>. Can someone please provide me some help how I can achieve this?
2011/09/02
[ "https://Stackoverflow.com/questions/7281348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/924945/" ]
First of all you can browse their [source code](http://code.google.com/p/google-app-engine-samples/downloads/detail?name=shell_20091112.tar.gz&can=2&q=). The main file is only 321 lines ! Basically it keeps a separate global dict for each user session, then uses `compile` and `exec` to run the code and return the result. ``` # log and compile the statement up front try: logging.info('Compiling and evaluating:\n%s' % statement) compiled = compile(statement, '<string>', 'single') except: self.response.out.write(traceback.format_exc()) return ``` and ``` # run! old_globals = dict(statement_module.__dict__) try: old_stdout = sys.stdout old_stderr = sys.stderr try: sys.stdout = self.response.out sys.stderr = self.response.out exec compiled in statement_module.__dict__ finally: sys.stdout = old_stdout sys.stderr = old_stderr except: self.response.out.write(traceback.format_exc()) return ``` **Edit:** Not using Google App Engine would make things much more complicated.But you can take a look at [pysandbox](https://github.com/haypo/pysandbox)
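The core of that loop — compile in `'single'` mode, swap `sys.stdout`, `exec` against a per-session namespace — can be condensed into a sketch like this (written in Python 3 syntax, whereas the App Engine shell itself is Python 2; function and variable names here are my own):

```python
import io
import sys
import traceback

def run_statement(statement, session_globals):
    """Compile and exec one statement, capturing whatever it prints."""
    buf = io.StringIO()
    old_stdout = sys.stdout
    try:
        sys.stdout = buf
        try:
            compiled = compile(statement, '<string>', 'single')
            exec(compiled, session_globals)
        except Exception:
            buf.write(traceback.format_exc())
    finally:
        sys.stdout = old_stdout
    return buf.getvalue()

session = {}                      # one dict per user session
run_statement("x = 6 * 7", session)
result = run_statement("print(x)", session)
```

Because the same `session` dict is reused, state persists between statements, which is exactly what makes the online shell feel interactive.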
I am new to Python, but hopefully the following links could help you achieve your goal: the BaseHTTPServer library (http://docs.python.org/library/basehttpserver.html) can be used for handling web requests, and IPython (http://ipython.org) for executing Python commands at the backend.
13,709
33,301,838
I am trying to access a JSON object/dictionary in Python; however, I get the error: > > TypeError: string indices must be integers if script['title'] == > "IT": > > > and this is my code to try and access that particular key within the dictionary: ``` def CreateScript(scriptsToGenerate): start = time.clock() apiLocation = "" saveFile = "" for script in scriptsToGenerate: print script if script['title'] == "IT": time = script['timeInterval'] baseItem = "" ``` and scriptsToGenerate is passed in using this, which makes an HTTP request to my API ``` def GetScripts(urlParameters): result = requests.get(constURLString + "/" + str(urlParameters)).json() return [x for x in result if x != []] ``` and here is where I call CreateScript ``` def RunInThread(ID): startedProcesses = list() Scripts = [] Scripts = GetScripts(ID) scriptList = ThreadChunk(Scripts, 2) for item in scriptList: proc = Process(target=CreateScript, args=(item)) startedProcesses.append(proc) proc.start() #we must wait for the processes to finish before continuing for process in startedProcesses: process.join() print "finished" ``` which is what I pass into CreateScript. Here is the output of my script object ``` {u'timeInterval': u'1800', u'title': u'IT', u'attribute' : 'hello'} ```
2015/10/23
[ "https://Stackoverflow.com/questions/33301838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4867148/" ]
Facts: * `scriptList` is a list of dictionaries (according to you) * `for item in scriptList:` gets one dictionary at a time * `proc = Process(target=CreateScript, args=(item))` passes a dictionary to the `CreateScript` function * `def CreateScript(scriptsToGenerate):` receives a dictionary * `for script in scriptsToGenerate:` iterates over the *keys* of the dictionary. * `if script['title'] == "IT":` tries to access the `title` index of a dictionary key (a string). So no, this won’t work. At some point, you iterate over a list. You probably want to pass a list of scripts to the `CreateScript` function, so you shouldn’t iterate over `scriptList`.
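The difference the answer pinpoints is easy to demonstrate: iterating a list of dicts yields dicts, while iterating a single dict yields its keys (strings). A minimal sketch using the question's sample data:

```python
scripts = [{'timeInterval': '1800', 'title': 'IT', 'attribute': 'hello'}]

# Iterating the list: each element is a dict, so indexing by 'title' works.
titles = [script['title'] for script in scripts]

# Iterating one dict: each element is a key string, so script['title']
# on it would raise "string indices must be integers".
keys = sorted(k for k in scripts[0])
```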
John, what I meant was that you have a JSON object there; it will need to be treated as such. I think you mean: can you do the following? ``` def CreateScript(scriptsToGenerate): #start = time.clock() apiLocation = "" saveFile = "" i = json.loads(scriptsToGenerate) if i['title'] == "IT": time = i['timeInterval'] baseItem = "" print i['title'], i['timeInterval'] p = json.dumps({u'timeInterval': u'1800', u'title': u'IT', u'attribute' : 'hello'}) CreateScript(p) ```
13,710
56,328,818
I am using fuzzywuzzy in python for fuzzy string matching. I have a set of names in a list named HKCP_list which I am matching against a pandas column iteratively to get the best possible match. Given below is the code for it ``` import fuzzywuzzy from fuzzywuzzy import fuzz,process def search_func(row): chk = process.extract(row,HKCP_list,scorer=fuzz.token_sort_ratio)[0] return chk wc_df['match']=wc_df['concat_name'].map(search_func) ``` The wc_df dataframe contains the column 'concat_name' which needs to be matched with every name in the list HKCP_list. The above code took around 2 hours to run with 6K names in the list and 11K names in the column 'concat_name'. I have to rerun this on another data set where there are 89K names in the list and 120K names in the column. In order to speed up the process, I got an idea from the following question on Stackoverflow [Vectorizing or Speeding up Fuzzywuzzy String Matching on PANDAS Column](https://stackoverflow.com/questions/52631291/vectorizing-or-speeding-up-fuzzywuzzy-string-matching-on-pandas-column) In one of the comments in the answer in the above, it has been advised to compare names that have the same 1st letter. The 'concat_name' column that I am comparing with is a derived column obtained by concatenating 'first_name' and 'last_name' columns in the dataframe. Hence I am using the following function to match the 1st letter (since this is a token sort score that I am considering, I am comparing the 1st letter of both the first_name and last_name with the elements in the list). 
Given below is the code: ``` wc_df['first_name_1stletter'] = wc_df['first_name'].str[0] wc_df['last_name_1stletter'] = wc_df['last_name'].str[0] import time start_time=time.time() def match_func(row): CP_subset=[x for x in HKCP_list if x[0]==row['first_name_1stletter'] or x[0]==row['last_name_1stletter']] return CP_subset wc_df['list_to_match']=wc_df.apply(match_func,axis=1) end_time=time.time() print(end_time-start_time) ``` The above step took 1600 seconds with 6K X 11K data. The 'list_to_match' column contains the list of names to be compared for each concat_name. Now here I have to again take the list_to_match element, pass its individual elements in a list, and do the fuzzy string matching using the process.extract method. Is there a more elegant and faster way of doing this in the same step as above? PS: Editing this to add an example of how the list and the dataframe column look. ``` HKCp_list=['jeff bezs','michael blomberg','bill gtes','tim coook','elon musk'] concat_name=['jeff bezos','michael bloomberg','bill gates','tim cook','elon musk','donald trump','kim jong un', 'narendra modi','michael phelps'] first_name=['jeff','michael','bill','tim','elon','donald','kim','narendra','michael'] last_name=['bezos','bloomberg','gates','cook','musk','trump','jong un', 'modi','phelps'] import pandas as pd df=pd.DataFrame({'first_name':first_name,'last_name':last_name,'concat_name':concat_name}) ``` Each row of the 'concat_name' in df has to be compared against the elements of HKcp_list. PS: editing today to reflect the ":" and the row in the 2nd snippet of code I missed yesterday Regards, Nirvik
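One way to avoid re-filtering the candidate list for every row is to bucket the list by first letter once, up front, and then score only within the matching bucket. This sketch uses the question's sample names, with stdlib `difflib.SequenceMatcher` standing in for fuzzywuzzy's scorer so it runs without extra dependencies (the scoring itself is only illustrative):

```python
from collections import defaultdict
from difflib import SequenceMatcher

HKCp_list = ['jeff bezs', 'michael blomberg', 'bill gtes', 'tim coook', 'elon musk']

# Build the first-letter buckets once instead of per row.
buckets = defaultdict(list)
for name in HKCp_list:
    buckets[name[0]].append(name)

def best_match(concat_name):
    # Restrict the comparison pool to same-first-letter candidates;
    # fall back to the full list when the bucket is empty.
    pool = buckets.get(concat_name[0]) or HKCp_list
    return max(pool, key=lambda cand: SequenceMatcher(None, concat_name, cand).ratio())
```

With a real dataframe this would then be applied as something like `df['match'] = df['concat_name'].map(best_match)` (column names assumed from the question).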
2019/05/27
[ "https://Stackoverflow.com/questions/56328818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4762935/" ]
Use this function: ``` typedef Future<T> FutureGenerator<T>(); Future<T> retry<T>(int retries, FutureGenerator aFuture) async { try { return await aFuture(); } catch (e) { if (retries > 1) { return retry(retries - 1, aFuture); } rethrow; } } ``` And to use it: ``` main(List<String> arguments) { retry(2, doSometing); } Future doSometing() async { print("Doing something..."); await Future.delayed(Duration(milliseconds: 500)); return "Something"; } ```
This is how I implemented it: ``` Future retry<T>( {Future<T> Function() function, int numberOfRetries = 3, Duration delayToRetry = const Duration(milliseconds: 500), String message = ''}) async { int retry = numberOfRetries; List<Exception> exceptions = []; while (retry-- > 0) { try { return await function(); } catch (e) { exceptions.add(e); } if (message != null) print('$message: retry - ${numberOfRetries - retry}'); await Future.delayed(delayToRetry); } AggregatedException exception = AggregatedException(message, exceptions); throw exception; } class AggregatedException implements Exception { final String message; AggregatedException(this.message, this.exceptions) : lastException = exceptions.last, numberOfExceptions = exceptions.length; final List<Exception> exceptions; final Exception lastException; final int numberOfExceptions; String toString() { String result = ''; exceptions.forEach((e) => result += e.toString() + '\\'); return result; } } ``` This is how I use it: ``` try { await retry( function: () async { _connection = await BluetoothConnection.toAddress(device.address); }, message: 'Bluetooth Connect'); } catch (e) { _log.finest('Bluetooth init failed ${e.toString()}'); } ```
13,711
6,424,676
I have a list of lists, say: ``` arr = [[1, 2], [1, 3], [1, 4]] ``` I would like to append 100 to each of the inner lists. Output for the above example would be: ``` arr = [[1, 2, 100], [1, 3, 100], [1, 4, 100]] ``` I can of course do: ``` for elem in arr: elem.append(100) ``` But is there something more pythonic that can be done? Why does the following not work: ``` arr = [elem.append(100) for elem in arr] ```
2011/06/21
[ "https://Stackoverflow.com/questions/6424676", "https://Stackoverflow.com", "https://Stackoverflow.com/users/373948/" ]
The second version should be written like `arr = [elem + [100] for elem in arr]`. But the most pythonic way, if you ask me, is the first one. The `for` construct has its own use, and it suits this case very well.
You can do ``` [a + [100] for a in arr] ``` The reason why your `append` doesn't work is that `append` doesn't return the list, but rather `None`. Of course, this is more resource intensive than just doing `append` - you end up making copies of everything.
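The `None` behaviour is worth seeing side by side. A quick sketch (the inner `list(e)` copies are only there so the demonstration leaves `arr` untouched):

```python
arr = [[1, 2], [1, 3], [1, 4]]

# list.append mutates in place and returns None, so collecting its return
# values in a comprehension gives a list of Nones.
broken = [elem.append(100) for elem in [list(e) for e in arr]]

# Building new lists with + returns the extended copies instead.
extended = [a + [100] for a in arr]
```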
13,717
54,024,663
I have a python application that is running in Kubernetes. The app has a ping health-check which is called frequently via a REST call and checks that the call returns an HTTP 200. This clutters the Kubernetes logs when I view it through the logs console. The function definition looks like this: ``` def ping(): return jsonify({'status': 'pong'}) ``` How can I silence a specific call from showing up in the log? Is there a way I can put this in code such as a python decorator on top of the health check function? Or is there an option in the Kubernetes console where I can configure to ignore this call?
2019/01/03
[ "https://Stackoverflow.com/questions/54024663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5240483/" ]
In Kubernetes, everything a container writes to `stdout` or `stderr` ends up in the Kubernetes logs. The only way to exclude the `health-check` ping calls from the Kubernetes logs is to redirect the output of those ping calls, in your application, to some file, say under `/var/log/`. This effectively removes the output of the `health-check` ping from `stdout`. Once the output is not on the pod's `stdout` or `stderr`, the pod logs will not contain entries from that special `health-check`. You can also use sidecar containers to streamline your application logs, for example if you don't want all of the application's logs in the `kubectl logs` output; you can write those to a file instead. As stated in the official Kubernetes docs: > > By having your sidecar containers stream to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or the journald. Each individual sidecar container prints log to its own stdout or stderr stream. > > > This approach allows you to separate several log streams from different parts of your application, some of which can lack support for writing to stdout or stderr. The logic behind redirecting logs is minimal, so it’s hardly a significant overhead. For more information on Kubernetes logging, please refer to the official docs: <https://kubernetes.io/docs/concepts/cluster-administration/logging/>
Just invert the logic: log only on failure in the app. Modify the code, or wrap the handler with a custom decorator.
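The in-app filtering idea from the answers can be sketched with a stdlib `logging` filter that drops access-log records mentioning the health-check path. The logger name and the `/ping` path below are assumptions for illustration, not taken from the original application:

```python
import logging

class HealthCheckFilter(logging.Filter):
    """Drop log records that mention the health-check endpoint."""
    def filter(self, record):
        return '/ping' not in record.getMessage()

captured = []

class ListHandler(logging.Handler):
    """Collect emitted messages so the filtering effect is visible."""
    def emit(self, record):
        captured.append(record.getMessage())

access_log = logging.getLogger('demo.access')
access_log.setLevel(logging.INFO)
access_log.propagate = False
access_log.addHandler(ListHandler())
access_log.addFilter(HealthCheckFilter())

access_log.info('GET /ping 200')   # filtered out, never reaches the handler
access_log.info('GET /data 200')   # kept
```

In a Flask app the same filter could be attached to the `werkzeug` access logger so the ping lines never reach stdout, and therefore never reach the pod logs.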
13,721
48,186,159
My app is installed correctly and its models.py reads: ``` from django.db import models class Album(models.Model): artist = models.CharField(max_lenght=250) album_title = models.CharField(max_lenght=500) genre = models.CharField(max_lenght=100) album_logo = models.CharField(max_lenght=1000) class Song(models.Model): album = models.ForeignKey(Album, on_delete=models.CASCADE) file_type = models.CharField(max_lenght=10) song_title = models.CharField(max_lenght=250) ``` But whenever I run python manage.py migrate, I get the following error: ``` Traceback (most recent call last): File "manage.py", line 15, in <module> execute_from_command_line(sys.argv) File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\core\management\__init__.py", line 371, in execute_from_command_line utility.execute() File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\core\management\__init__.py", line 347, in execute django.setup() File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\apps\registry.py", line 112, in populate app_config.import_models() File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\apps\config.py", line 198, in import_models self.models_module = import_module(models_module_name) File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen 
importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "C:\Users\dougl\desktop\django websites\twenty_one_pilots\downloads\models.py", line 3, in <module> class Album(models.Model): File "C:\Users\dougl\desktop\django websites\twenty_one_pilots\downloads\models.py", line 4, in Album artist = models.CharField(max_lenght=250) File "C:\Users\dougl\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django-2.0-py3.6.egg\django\db\models\fields\__init__.py", line 1042, in __init__ super().__init__(*args, **kwargs) TypeError: __init__() got an unexpected keyword argument 'max_lenght' ``` What could be wrong? I'm still a beginner and I'm still learning about databases.
2018/01/10
[ "https://Stackoverflow.com/questions/48186159", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8978159/" ]
Use this: As per my comment, the spelling of length (lenght) is wrong. ``` from django.db import models class Album(models.Model): artist = models.CharField(max_length=250) album_title = models.CharField(max_length=500) genre = models.CharField(max_length=100) album_logo = models.TextField() class Song(models.Model): album = models.ForeignKey(Album, on_delete=models.CASCADE) file_type = models.CharField(max_length=10) song_title = models.CharField(max_length=250) ```
You have spelled length wrong here ``` album_title = models.CharField(max_lenght=500) ``` happens to the best of us.
13,724
63,709,035
I am trying to have a button in HTML that removes a row in the database when I click it. This is for a flask app. here is the HTML: ``` <div class="container-fluid text-center" id="products"> {% for product in productList %} <div class='userProduct'> <a href="{{ product.productURL }}" target="_blank">{{ product.title|truncate(30) }}</a> <h4 class="currentPrice">${{ product.currentPrice }}</h4> <h5 class="budget">Budget: {{ product.userBudget }}</h5> <form action="{{ url_for('delete', id=product.id) }}"> <button class="btn btn-sm btn-primary" type="submit">Remove from Wishlist</button> </form> </div> {% endfor %} </div> ``` And here is the route in the python file ``` @app.route('/delete/<id>') @login_required def delete(id): remove_product = Product.query.filter_by(id=int(id)).first() db.session.delete(remove_product) db.session.commit() return redirect(url_for('dashboard')) ``` Is there anything wrong I am doing with url\_for? I want to pass the id from the button to the python file so I can determine which entry to delete from the database.
2020/09/02
[ "https://Stackoverflow.com/questions/63709035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11522265/" ]
You're probably passing ID as an integer and your route is expecting a string. Use this instead: ``` @app.route('/delete/<int:id>') ```
I just encountered the same error. In my case it was caused by some values being None in the database, so maybe check whether product.id has missing values. To handle this, I needed an extra line, which would look something like this for you: ``` @app.route('/delete/<id>') # current line @app.route('/delete', defaults={'id': None}) # added line which handles None ``` In delete(id) you then have to handle the case where id is None: ``` if id is None: ... # what should happen if id is None ```
13,725
31,291,204
I'm new to Apache Spark and have a simple question about DataFrame caching. When I cached a DataFrame in memory using `df.cache()` in python, I found that the data is removed after the program terminates. Can I keep the cached data in memory so that I can access the data for the next run without doing `df.cache()` again?
2015/07/08
[ "https://Stackoverflow.com/questions/31291204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2368670/" ]
The cache used with `cache()` is tied to the current spark context; its purpose is to prevent having to recalculate some intermediate results in the current application multiple times. If the context gets closed, the cache is gone. Nor can you share the cache between different running Spark contexts. To be able to reuse the data in a different context, you will have to save it to a file system. If you prefer the results to be in memory (or have a good chance of being in memory when you try to reload them) you can look at using [Tachyon](http://www.tachyon-project.org).
If you are talking about saving the RDD to disk, use any of the following: ![enter image description here](https://i.stack.imgur.com/6ohTW.png) The link is for PySpark; the same is available for Java/Scala as well: <https://spark.apache.org/docs/latest/api/python/pyspark.sql.html> If you were not talking about disk, then cache() is already doing the job of keeping it in memory.
13,726
38,989,150
I am trying to import a module and use a function from that module in my current python file. I run the nosetests on the parser_tests.py file but it fails with "name 'parse_subject' not defined", i.e. it's not finding the parse_subject function which is clearly defined in the parsrer.py file. This is the parsrer file:

```
def peek(word_list):
    if word_list:
        word = word_list[0]
        return word[0]
    else:
        return None

# Confirms that the expected word is the right type
def match(word_list, expecting):
    if word_list:
        word = word_list.pop(0)
        if word[0] == expecting:
            return word
        else:
            return None
    else:
        return None

def skip(word_list, word_type):
    while peek(word_list) == word_type:
        match(word_list, word_type)

def parse_verb(word_list):
    skip(word_list, 'stop')
    if peek(word_list) == 'verb':
        return match(word_list, 'verb')
    else:
        raise ParserError("Expected a verb next.")

def parse_object(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)
    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'direction':
        return match(word_list, 'direction')
    else:
        raise ParserError("Expected a noun or direction next.")

def parse_subject(word_list):
    skip(word_list, 'stop')
    next_word = peek(word_list)
    if next_word == 'noun':
        return match(word_list, 'noun')
    elif next_word == 'verb':
        return ('noun', 'player')
    else:
        raise ParserError("Expected a verb next.")

def parse_sentence(word_list):
    subj = parse_subject(word_list)
    verb = parse_verb(word_list)
    obj = parse_object(word_list)
    return Sentence(subj, verb, obj)
```

This is my tests file:

```
from nose.tools import *
from nose.tools import assert_equals
import sys
sys.path.append("h:/projects/projectx48/ex48")
import parsrer

def test_subject():
    word_list = [('noun', 'bear'), ('verb', 'eat'), ('stop', 'the'), ('noun', 'honey')]
    assert_equals(parse_subject(word_list), ('noun','bear'))
```
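The failure the question describes comes from module namespacing: `import parsrer` binds only the module name, so `parse_subject` must either be qualified or imported explicitly. A minimal sketch of both forms (the module is written to a temporary directory here; file contents are a stand-in, not the real parsrer.py):

```python
import os
import sys
import tempfile

# Write a tiny stand-in for parsrer.py to a temporary directory.
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, 'parsrer.py'), 'w') as f:
    f.write("def parse_subject(word_list):\n    return word_list[0]\n")

sys.path.append(moddir)
import parsrer

# After "import parsrer", the function lives in the module's namespace;
# a bare parse_subject(...) would raise NameError.
result = parsrer.parse_subject([('noun', 'bear')])

# Alternatively, bind the bare name directly:
from parsrer import parse_subject
result2 = parse_subject([('noun', 'bear')])
```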
2016/08/17
[ "https://Stackoverflow.com/questions/38989150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6684586/" ]
``` SELECT STUFF(( SELECT ','+ Name FROM MyTable WHERE ID in (1, 2, 3) FOR XML PATH('') ), 1, 1, '') AS Names ``` Result: > > Apple,Microsoft,Samsung > > >
``` USE [Database Name] SELECT COLUMN_NAME,* FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'YourTableName' ```
13,727
41,595,532
I have a Tkinter button, and for some reason, it accepts width=xx, but not height=xx I'm using Python 3.5, with Tkinter support by default, on ubuntu 16.04 Here's the code sample: ``` # works: button_enter = ttk.Button(self.frm, text='ok', width = 100) # works: button_enter.config(width=25) # fails: button_enter = ttk.Button(self.frm, text='ok', height=15, width = 25) # fails: button_enter.config(width=25, height=15) button_enter = ttk.Button(self.frm, text='ok') button_enter.config(width=25) button_enter['command'] = self.some_method button_enter.grid(column=2, row = 0, sticky=W) ``` Here is the error I'm getting: ``` Traceback (most recent call last): File "pygo.py", line 44, in <module> app = App() File "pygo.py", line 34, in __init__ button_enter.config(height=15, width=25) File "/usr/lib/python3.5/tkinter/__init__.py", line 1333, in configure return self._configure('configure', cnf, kw) File "/usr/lib/python3.5/tkinter/__init__.py", line 1324, in _configure self.tk.call(_flatten((self._w, cmd)) + self._options(cnf)) _tkinter.TclError: unknown option "-height" ``` Can it be made to work? Or if it's a bug, where should I report it?
2017/01/11
[ "https://Stackoverflow.com/questions/41595532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/377783/" ]
It's not a bug, that's just how ttk buttons work. If you need a highly configurable button, use a tkinter button. ttk buttons are less configurable on purpose. The goal of ttk widgets is to give you a set of buttons consistent with a particular theme. Since you're on a linux system you can affect the height with `pack`, `place` or `grid` with appropriate options, though it's less convenient than the `height` option.
Use the **Place()** method instead of **Grid()**, **Pack()**, or **Config()**. It will work fine. I never used the **Config()** method.
13,732
631,884
How do you encode a png image into base64 using python on Windows? ``` iconfile = open("icon.png") icondata = iconfile.read() icondata = base64.b64encode(icondata) ``` The above works fine in Linux and OSX, but on Windows it will encode the first few characters then cut short. Why is this?
2009/03/10
[ "https://Stackoverflow.com/questions/631884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/76322/" ]
[Open the file in binary mode](http://docs.python.org/tutorial/inputoutput.html#reading-and-writing-files): ``` open("icon.png", "rb") ``` I'm not very familiar with Windows, but I'd imagine what's happening is that the file contains a character (0x1A) that [Windows is interpreting as the end of the file](http://www.google.com/search?q=windows+end+of+file+character) (for legacy reasons) when it is opened in text mode. The other issue is that opening a file in text mode (without the 'b') on Windows will cause line endings to be rewritten, which will generally break binary files where those characters don't actually indicate the end of a line.
To augment the answer from Miles, the [first eight bytes in a PNG file](http://www.libpng.org/pub/png/spec/1.2/PNG-Rationale.html#R.PNG-file-signature) are specially designed: * 89 - the first byte is a check that bit 8 hasn't been stripped * "PNG" - let someone read that it's a PNG format * 0d 0a - the DOS end-of-line indicator, to check if there was DOS->unix conversion * 1a - the DOS end-of-file character, to check that the file was opened in binary mode * 0a - unix end-of-line character, to check if there was a unix->DOS conversion Your code stops at the 1a, as designed.
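The interplay between the signature bytes and binary mode can be checked directly. A small sketch (the file payload after the 8-byte signature is fabricated; only the signature is real PNG data):

```python
import base64
import os
import tempfile

# The 8 signature bytes described above; note 0x1a, the DOS end-of-file marker.
png_signature = b'\x89PNG\r\n\x1a\n'

path = os.path.join(tempfile.mkdtemp(), 'icon.png')
with open(path, 'wb') as f:
    f.write(png_signature + b'fake-image-payload')

# "rb" is the crucial part: text mode on Windows would stop at 0x1a and
# rewrite the \r\n pair.
with open(path, 'rb') as iconfile:
    icondata = iconfile.read()

encoded = base64.b64encode(icondata)
roundtrip = base64.b64decode(encoded)
```

The familiar `iVBORw0K...` prefix of PNG data URIs is simply the base64 encoding of this signature.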
13,734
8,839,846
In vim, you can check if a file is open in the current buffer with `bufexists`. For a short filename (not full path), you can check if it's open using `bufexists(bufname('filename'))`. Is there any way to check if a file is open in a *tab*? My closest workaround is to do something like: ``` :tabdo if bufnr(bufname('filename')) in tabpagebuflist(): echo "Yes" ``` However, that's sort of pythonic pseudocode... I'm not sure how to get that to work in vim. My goal is for an external applescript to check if a file is already open and if so go to a line in that file. Ideally, I'd like to be able to search through different GUI windows too, but I've gathered (e.g. [Open vim tab in new (GUI) window?](https://stackoverflow.com/questions/6123424/open-vim-tab-in-new-gui-window)) that working with different GUI windows is very challenging / impossible in VIM.
2012/01/12
[ "https://Stackoverflow.com/questions/8839846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/814354/" ]
My impatience and good documentation got the better of me... here's the solution (greatly aided by [Check if current tab is empty in vim](https://stackoverflow.com/questions/5025558/check-if-current-tab-is-empty-in-vim) and [Open vim tab in new (GUI) window?](https://stackoverflow.com/questions/6123424/open-vim-tab-in-new-gui-window)). The source is at <https://github.com/keflavich/macvim-skim> ``` function! WhichTab(filename) " Try to determine whether file is open in any tab. " Return number of tab it's open in let buffername = bufname(a:filename) if buffername == "" return 0 endif let buffernumber = bufnr(buffername) " tabdo will loop through pages and leave you on the last one; " this is to make sure we don't leave the current page let currenttab = tabpagenr() let tab_arr = [] tabdo let tab_arr += tabpagebuflist() " return to current page exec "tabnext ".currenttab " Start checking tab numbers for matches let i = 0 for tnum in tab_arr let i += 1 echo "tnum: ".tnum." buff: ".buffernumber." i: ".i if tnum == buffernumber return i endif endfor endfunction function! WhichWindow(filename) " Try to determine whether the file is open in any GVIM *window* let serverlist = split(serverlist(),"\n") "let currentserver = ???? for server in serverlist let remotetabnum = remote_expr(server, "WhichTab('".a:filename."')") if remotetabnum != 0 return server endif endfor endfunction ``` then use like so: ``` exec "tabnext ".WhichTab('my_filename') echo remote_foreground( WhichWindow('my_filename') ) ``` or, from the command line, here's a script to go to a particular line of a file using `WhichTab`: ``` #!/bin/bash file="$1" line="$2" for server in `mvim --serverlist` do foundfile=`mvim --servername $server --remote-expr "WhichTab('$file')"` if [[ $foundfile > 0 ]] then mvim --servername $server --remote-expr "foreground()" mvim --servername $server --remote-send ":exec \"tabnext $foundfile\" <CR>" mvim --servername $server --remote-send ":$line <CR>" fi done ```
I'd reply to keflavich, but I can't yet... I was working on a similar problem where I wanted to mimic the behavior of gvim --remote-tab-silent when opening files inside of gvim. I found this WhichTab script of yours, but ran into problems when there is more than one window open in any given tab. If you split windows inside of tabs, then you will have more than one buffer returned by tabpagebuflist(), so your method of using the buffer number's position in the List doesn't work. Here's my solution that accounts for that possibility. ``` " Note: returns a list of tabnos where the buf is found or 0 for none. " tabnos start at 1, so 0 is always invalid function! WhichTabNo(bufNo) let tabNos = [] for tabNo in range(1, tabpagenr("$")) for bufInTab in tabpagebuflist(tabNo) if (bufInTab == a:bufNo) call add(tabNos, tabNo) endif endfor endfor let numBufsFound = len(tabNos) return (numBufsFound == 0) ? 0 : tabNos endfunction ``` I think I can just return tabNos which will be an empty list that gets evaluated as a scalar 0, but I just learned vimscript and am not that comfortable with the particulars of its dynamic typing behavior yet, so I'm leaving it like that for now.
13,735
44,894,992
**On Windows 10, Python 3.6** Let's say I have a command prompt session open **(not Python command prompt or Python interactive session)** and I've been setting up an environment with a lot of configurations or something of that nature. Is there any way for me to access the history of commands I used in that session with a python module for example? Ideally, I would like to be able to export this history to a file so I can reuse it in the future. **Example:** Type in command prompt: `python savecmd.py` and it saves the history from that session.
2017/07/03
[ "https://Stackoverflow.com/questions/44894992", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6805800/" ]
You don't need Python at all, use `doskey` facilities for that, i.e.: ``` doskey /history ``` will print out the current session's command history, you can then redirect that to a file if you want to save it: ``` doskey /history > saved_commands.txt ``` If you really want to do it from within Python, you can use `subprocess.check_output()` to capture the command history and then save it to a file: ``` import subprocess cmd_history = subprocess.check_output(["doskey", "/history"]) with open("saved_commands.txt", "wb") as f: f.write(cmd_history) ```
You're actually asking for 3 different things there.

1. Getting Python REPL command history
2. Getting info from the "prompt", which I assume is PowerShell or CMD.exe.
3. Saving the history list to a file.

To get the history from inside your REPL, use:

```py
import readline

# get_history_item() is one-based, so start at 1 and include the last item
for i in range(1, readline.get_current_history_length() + 1):
    print(readline.get_history_item(i))
```

Unfortunately, the REPL history is not available from outside the REPL in this way, so you need to find the history file and print it.

To see the history file from PowerShell:

```
function see_my_py_history{
    cat C:\<path-to>\saved_commands.txt
}
Set-Alias -Name seehist -Value see_my_py_history
```
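As a quick self-contained illustration of the `readline` history API (the `add_history` calls below just simulate a couple of REPL entries; note that history item indices are one-based):

```py
import readline

# Simulate two entries as if they had been typed into the REPL.
readline.add_history("print('hi')")
readline.add_history("x = 1")

# get_history_item() is one-based, so iterate from 1 through the length.
for i in range(1, readline.get_current_history_length() + 1):
    print(readline.get_history_item(i))
```

On Windows the `readline` module is provided by third-party packages such as `pyreadline`, so availability there is an assumption.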
13,736
73,631,420
I am scraping web series data from `JSON` and `lists` using Python. The problem is with date and time. I got a function to convert the `duration` format:

```
def get_duration(secs):
    hour = int(secs/3600)
    secs = secs - hour*3600
    minute = int(secs/60)
    if hour == 0:
        return "00:" + str(minute)
    elif len(str(hour)) == 1:
        return "0" + str(hour) + ":" + str(minute)
    return str(hour) + ":" + str(minute)
```

The output is `00:29`, which is correct. But how can I convert `broadCastDate` to DD-MM-YYYY format?

Here is the data:

```
"items": [
    {
        "duration": 1778,
        "startDate": 1555893000,
        "endDate": 4102424940,
        "broadCastDate": 1555939800
    }
]
```

The Broadcast Date on the website is **22 Apr 2019**, so I need a function to get this type of output.
2022/09/07
[ "https://Stackoverflow.com/questions/73631420", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19861079/" ]
That value looks like a timestamp so you could use [`datetime.fromtimestamp`](https://docs.python.org/3/library/datetime.html#datetime.date.fromtimestamp) to convert to a `datetime` object and then [`strftime`](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) to output in the format you want: ``` from datetime import datetime d = datetime.fromtimestamp(1555939800) print(d.strftime('%d-%m-%Y')) ``` Output: ``` 22-04-2019 ```
The original question is slightly unclear about the final desired format. If you would like "22 Apr 2019" then the required format string is "%d %b %Y" ``` from datetime import datetime d = datetime.fromtimestamp(1555939800) print(d.strftime('%d %b %Y')) ``` Output: ``` 22 Apr 2019 ```
13,737
57,159,408
I am new to Python and trying to understand logging (I am not new to programming at all, though). I am trying to replicate the log4j logger that we have in Java. I have a simple test project of the following files:

- User.py
- Test.py
- conf/
  - logging.conf
- log

All the Python files have logger objects to log to the specific file mentioned in logging.conf. However, only the Test.py file dumps logs, not the User.py file.

User.py is as follows:

```py
import logging
import logging.config

logging.config.fileConfig('conf/logging.conf')
logger = logging.getLogger(__name__)

class User:
    username = ""
    password = ""

    def __init__(self, username = "username", password = "password"):
        global logger
        logger.debug( "In User constructor" );
        self.username = username
        self.password = password
        logger.debug( "Out User constructor" );

    def getSomething(self) :
        return "Something"

    def login_user(self) :
        logger.debug( "In login_user" );
        logger.debug( "Out login_user" );
        return "Logged In"
```

Test.py is as follows:

```py
from User import User
import logging
import logging.config
from threading import Thread

logging.config.fileConfig('conf/logging.conf')
logger = logging.getLogger(__name__)

objectsCount = 1

logger.debug("In VolumeTest")

def thr_func(username, password):
    logger.info("In thr_func")
    user1 = User(username, password)
    logger.info("Created user obj")
    logger.info("Before login_user call")
    token = user1.login_user()
    logger.debug ( "Out thr_func")

def main() :
    thr_func( "abc", "def" )
    logger.info("In Main")
    thr_objects = []
    for i in range(objectsCount):
        thread = Thread(target=thr_func, args=("subhayan", "MER2018"))
        thr_objects.append(thread)
    logger.info("Main : before running thread")
    for i in range(objectsCount):
        logger.info("Main : starting thread " + str(i))
        thr_objects[i].start()
        logger.info("Main : started thread " + str(i))
    logger.info("Main : wait for the thread to finish")
    for i in range(objectsCount):
        thr_objects[i].join()
    logger.info("Main : all done")

if __name__ == "__main__":
    main()
```
logging.conf file is as follows:

```ini
[loggers]
keys=root

[handlers]
keys=logfile

[formatters]
keys=logfileformatter

[logger_root]
level=DEBUG
handlers=logfile

[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(threadName)s : %(levelname)s : %(message)s

[handler_logfile]
#class=handlers.RotatingFileHandler
class=handlers.TimedRotatingFileHandler
#class=TimedCompressedRotatingFileHandler
level=DEBUG
#args=('testing.log','a',10,100)
args=('log/testing.log','d', 1, 9, None, False, False)
formatter=logfileformatter
```

I tried running it with "python Test.py"; however, there are no logs from the User file.

So the question is: can't we have logging for the second file in this way? What am I doing wrong? If nothing is wrong, what is the best way to do this in Python?
2019/07/23
[ "https://Stackoverflow.com/questions/57159408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1300830/" ]
I see a couple of things that may be the issue:

1) You're running 2 separate loggers: each file is instantiating its own logger pointed to the same file.

2) If you want everything in the same file, then create one logger and pass the reference to other modules via a global var. I find it easier to create a log.py file that creates and writes the log file, and then import that module into the other modules.

3) You have a file handler and a formatter in your config file, but don't seem to set them in the code. Something like

```
handler = logging.handlers.RotatingFileHandler(LOG_FILENAME, ...
f = logging.Formatter('%(process)d %(asctime)s %(levelname)s: %(message)s')
handler.setFormatter(f)
logger.addHandler(handler)
```

Hope this helps!
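For the handler-and-formatter point, here is a minimal self-contained sketch of wiring one logger to one handler: the logger name is illustrative, and a `StringIO` stream stands in for the rotating file target so the effect is easy to see.

```py
import io
import logging

# One shared logger, one handler, one formatter.
stream = io.StringIO()                      # stand-in for a log file
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))

logger = logging.getLogger("shared")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("hello from module A")
logger.info("hello from module B")

print(stream.getvalue())
```

In the real setup the handler would be the `RotatingFileHandler` from the answer, and the module holding this code would simply expose `logger` for the other files to import.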
Thanks tbitson for the pointers. Here is the changed code. The shared logger now lives in its own module, logger.py:

```
import logging
import logging.config

logging.config.fileConfig('conf/logging.conf')
logger = logging.getLogger(__name__)
```

The modified User.py is:

```
from logger import logger

class User:
    username = ""
    password = ""

    def __init__(self, username = "username", password = "password"):
        global logger
        logger.debug( "In User constructor" );
        self.username = username
        self.password = password
        logger.debug( "Out User constructor" );

    def getSomething(self) :
        return "Something"

    def login_user(self) :
        logger.debug( "In login_user" );
        logger.debug( "Out login_user" );
        return "Logged In"
```

The modified Test.py is:

```
from User import User
from logger import logger
from threading import Thread

objectsCount = 1

logger.debug("In VolumeTest")

def thr_func(username, password):
    logger.info("In thr_func")
    user1 = User(username, password)
    logger.info("Created user obj")
    logger.info("Before login_user call")
    token = user1.login_user()
    logger.debug ( "Out thr_func")

def main() :
    thr_func( "abc", "def" )
    logger.info("In Main")
    thr_objects = []
    for i in range(objectsCount):
        thread = Thread(target=thr_func, args=("subhayan", "MER2018"))
        thr_objects.append(thread)
    logger.info("Main : before running thread")
    for i in range(objectsCount):
        logger.info("Main : starting thread " + str(i))
        thr_objects[i].start()
        logger.info("Main : started thread " + str(i))
    logger.info("Main : wait for the thread to finish")
    for i in range(objectsCount):
        thr_objects[i].join()
    logger.info("Main : all done")

if __name__ == "__main__":
    main()
```
13,738
47,111,787
I'm trying to set and get keys from ElastiCache (memcached) from a python lambda function using Boto3. I can figure out how to get the endpoints but that's pretty much it. Is there some documentation out there that shows the entire process?
2017/11/04
[ "https://Stackoverflow.com/questions/47111787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4891674/" ]
It sounds like you are trying to interact with Memcached via Boto3. This is not possible. Boto3 is for interacting with the AWS API. You can manage your ElastiCache servers via the AWS API, but you can't interact with the Memcached software running on those servers. You need to use a Memcached client library like [python-memcached](https://pypi.python.org/pypi/python-memcached) in your Python code to actually get and set keys in your Memcached cluster. Also, your Lambda function will need to reside in the same VPC as the ElastiCache node(s).
I had the exact timeout problem listed in the comment of the older post. My bug was in the security group for memcached. Here is the working version in Terraform:

```
resource "aws_security_group" "memcached" {
  vpc_id = "${aws_vpc.dev.id}"
  name   = "memcached SG"

  ingress {
    from_port   = "${var.memcached_port}"
    to_port     = "${var.memcached_port}"
    protocol    = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  egress {
    from_port   = "${var.memcached_port}"
    to_port     = "${var.memcached_port}"
    protocol    = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  tags = {
    Name = "memcached SG"
  }
}
```

I tested the connection by creating an EC2 instance in the public subnet and running "telnet (input your cache node URL) 11211".
13,739
39,899,312
I am using the Tracer software package (<https://github.com/Teichlab/tracer>). The program is invoked as followed: `tracer assemble [options] <file_1> [<file_2>] <cell_name> <output_directory>` The program runs on a single dataset and the output goes to `/<output_directory>/<cell_name>` What I want to do now is run this program on multiple files. To do so this is what I do: ```bash for filename in /home/tobias/tracer/datasets/test/*.fastq do echo "Processing $filename file..." python tracer assemble --single_end --fragment_length 62 --fragment_sd 1 $filename Tcell_test output; done ``` This works in priciple, but as `cell_name` is static, every iteration overwrites the output from the previous iteration. How do I need to change my script in order to give the output folder the name of the input file? For example: Input filename is `tcell1.fastq`. For this cell\_name should be `tcell1`. Next file is `tcell2.fastq` and cell\_name should be `tcell2`, and so on...
2016/10/06
[ "https://Stackoverflow.com/questions/39899312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6932623/" ]
I think this will do it, in bash, if I understand correctly - ```bash for filename in /home/tobias/tracer/datasets/test/*.fastq do echo "Processing $filename file..." basefilename="${filename##*/}" #<--- python tracer assemble --single_end --fragment_length 62 --fragment_sd 1 "$filename" "${basefilename%.fastq}" output; # ^^^^^^^^^^^^^^^^^^^^^^^^ done ``` `${filename##*/}` removes the part up to the last `/`, and `${basefilename%.fastq}` removes the `.fastq` at the end.
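As a quick demonstration of the two parameter expansions used here (the path is illustrative):

```bash
filename=/home/tobias/tracer/datasets/test/tcell1.fastq

basefilename="${filename##*/}"      # strip everything up to the last '/'
cellname="${basefilename%.fastq}"   # strip the trailing '.fastq'

echo "$basefilename"   # tcell1.fastq
echo "$cellname"       # tcell1
```

`##*/` removes the longest match of `*/` from the front (the directory part), and `%.fastq` removes the shortest match of `.fastq` from the end.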
From what I understood, the library you use writes its output to a predefined (non-configurable) directory. Let's call it `output_dir`. At each iteration you should rename the output directory, so your code should be something like this (with `output_dir` standing for that predefined directory):

```
for filename in /home/tobias/tracer/datasets/test/*.fastq
do
  echo "Processing $filename file..."
  python tracer assemble --single_end --fragment_length 62 --fragment_sd 1 $filename Tcell_test output;
  mv output_dir "$(basename "$filename" .fastq)_output_dir"
done
```
13,740
69,760,313
Is there more pythonic way to remove all duplicate elements from a list, while keeping the first and last element? ``` lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"] occurence = [i for i, e in enumerate(lst) if e == "foo"] to_remove = occurence[1:-1] for i in to_remove: del lst[i] print(lst) # ['foo', 'bar', 'foobar', 'barfoo', 'foo'] ``` Another approach which I like more. ``` for i, e in enumerate(lst[1:-1]): if e == "foo": del lst[i+1] ```
2021/10/28
[ "https://Stackoverflow.com/questions/69760313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7661466/" ]
Use a slice-assignment and unpack/consume a filter? ``` lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"] lst[1:-1] = filter(lambda x: x != "foo", lst[1:-1]) print(lst) ``` Output: ``` ['foo', 'bar', 'foobar', 'barfoo', 'foo'] ```
You can construct a new list with your requirements using Python's set data structure and slicing like this: ``` lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"] new_lst = [lst[0]] + list(set(lst[1:-1])) + [lst[-1]] ```
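One property of this approach worth noting, shown with a quick self-contained check: `set()` makes the middle elements unique but does not guarantee their original order, so only the endpoints keep their positions:

```py
lst = ["foo", "bar", "foobar", "foo", "barfoo", "foo"]
new_lst = [lst[0]] + list(set(lst[1:-1])) + [lst[-1]]

# The endpoints stay in place...
print(new_lst[0], new_lst[-1])   # foo foo

# ...while the middle section comes back in whatever order the set yields;
# sorting it makes the contents easy to compare:
print(sorted(new_lst[1:-1]))     # ['bar', 'barfoo', 'foo', 'foobar']
```

If the original ordering of the middle elements matters, `dict.fromkeys(lst[1:-1])` deduplicates while preserving insertion order on Python 3.7+.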
13,741
66,819,423
I have homework where I'm asked to build Newton and Lagrange interpolation polynomials. I had no trouble with the Lagrange polynomial, but one problem arises with the Newton polynomial: while the Lagrange interpolation polynomial and the original function match completely with each other, the Newton interpolation doesn't do this. [Here is the plot.](https://i.stack.imgur.com/2couA.png)

If I remember correctly, Newton and Lagrange polynomial interpolation are different ways to represent the same polynomial, and they should match the original function completely on the interval of interpolation. I thought that the Newton coefficients were calculated wrongly, so I found another divided difference function. I tried both functions and [they gave me the same results.](https://i.stack.imgur.com/N3YJ7.png)

I'm stuck at this moment. I still think that there is something wrong with calculating the divided differences, but I can't see the mistake. Any suggestions, please?

Here is the code:

```
import numpy as np
from scipy.interpolate import lagrange

# func from https://pythonnumericalmethods.berkeley.edu/notebooks/chapter17.05-Newtons-Polynomial-Interpolation.html
def divided_diff(x, y):
    n = len(y)
    coef = np.zeros([n, n])
    coef[:,0] = y

    for j in range(1,n):
        for i in range(n-j):
            coef[i][j] = \
                (coef[i+1][j-1] - coef[i][j-1]) / (x[i+j]-x[i])

    return coef

def coeff(x, y):
    n = len(x)
    arr = y.copy()
    for i in range(1, n):
        arr[i:n] = (arr[i:n] - arr[i - 1]) / (x[i:n] - x[i - 1])
    return arr

# 1st range
x_values = np.array([0, np.pi/6, np.pi/3, np.pi/2], float)
y_values = np.array([1, np.sqrt(3)/2, 1/2, 0], float)

print(coeff(x_values, y_values))
print(divided_diff(x_values, y_values)[0,:])

# Show polynomials
newton_coefficients = divided_diff(x_values, y_values)[0,:]
newton = np.poly1d(newton_coefficients[::-1])
lagrange_poly = lagrange(x_values, y_values)

print(f"Here is our Lagrange interpolating polynomial:\n{lagrange_poly}\n")
print(f"Here is our Newton interpolating polynomial:\n{newton}\n")
```
2021/03/26
[ "https://Stackoverflow.com/questions/66819423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11749578/" ]
One object is created, of type `C`, which contains all of the members of `A`, `B`, and `C`.
Here is some additional information about the relation between a base-class object and a derived-class object:

Every derived object contains a base-class part to which a pointer or reference of the base-class type can be bound. That's the reason why a conversion from derived to base exists, and no implicit conversion from base to derived.

A base-class object can exist either as an independent object or as part of a derived object. When we initialize or assign an object of a base type from an object of a derived type, only the base-class part of the derived object is copied, moved, or assigned. The derived part of the object is ignored.

*see "C++ Primer Fifth Edition" §15.2.3*
13,745
56,712,534
How to append alpha values inside nested list in python? ``` nested_list = [['72010', 'PHARMACY', '-IV', 'FLUIDS', '7.95'], ['TOTAL', 'HOSPITAL', 'CHARGES', '6,720.92'],['PJ72010', 'WORD', 'FLUIDS', '7.95']] Expected_output: [['72010', 'PHARMACY -IV FLUIDS', '7.95'], ['TOTAL HOSPITAL CHARGES', '6,720.92'],['PJ72010', 'WORD FLUIDS', '7.95']] ```
2019/06/22
[ "https://Stackoverflow.com/questions/56712534", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11425561/" ]
If you create a function that defines what you mean by "word", you can use [`itertools.groupby()`](https://docs.python.org/3/library/itertools.html#itertools.groupby) to group by this function. Then you can either `append()` the `join()`ed results or `extend()`, depending on whether it's a group of words or of numbers. I'm inferring from your example that you are defining a word as anything with no numerics, but you can adjust the function as you see fit:

```
from itertools import groupby

# it's a word if it has no numerics
def word(s):
    return not any(c.isnumeric() for c in s)

def groupwords(s):
    sub = []
    for isword, v in groupby(s, key = word):
        if isword:
            sub.append(" ".join(v))
        else:
            sub.extend(v)
    return sub

res = [groupwords(l) for l in nested_list]
res
```

**Results:**

```
[['72010', 'PHARMACY -IV FLUIDS', '7.95'],
 ['TOTAL HOSPITAL CHARGES', '6,720.92'],
 ['PJ72010', 'WORD FLUIDS', '7.95']]
```
Go through each nested list and check each element of that list. If an element is fully alphabetic, then append it to a temporary variable. Once you find an element containing a numeric, append both the accumulated temporary string and that element.

Code:

```
nested_list = [['72010', 'PHARMACY', '-IV', 'FLUIDS', '7.95'], ['TOTAL', 'HOSPITAL', 'CHARGES', '6,720.92'],['PJ72010', 'WORD', 'FLUIDS', '7.95']]

def isAlpha(temp):
    # an element counts as "alpha" only if it contains no digit
    for i in temp:
        if i>='0' and i<='9':
            return 0
    return 1

answer_list=[]
for i in nested_list:
    row=[]
    temp=""
    for j in i:
        if isAlpha(j)==1:
            if len(temp)>0:
                temp+=" "
            temp+=j
        else:
            if len(temp)>0:
                row.append(temp)
                temp=""
            row.append(j)
    if len(temp)>0:
        row.append(temp)
    answer_list.append(row)

for i in answer_list:
    print(i)
```

Output will be:

```
['72010', 'PHARMACY -IV FLUIDS', '7.95']
['TOTAL HOSPITAL CHARGES', '6,720.92']
['PJ72010', 'WORD FLUIDS', '7.95']
```
13,747
34,902,307
I have a python script that processes an XML file each day (it is transferred via SFTP to a remote directory, then temporarily copied to a local directory) and stores its information in a MySQL database. One of my parameters for the file is set to "date=today" so that the correct file is processed each day. This works fine, and each day I successfully store new file information into the database.

What I need help on is passing a Linux command line argument to run a file for a specific day (in case a previous day's file needs to be rerun). I can manually edit my code to make this work, but this will not be an option once the project is in production. In addition, I need to be able to pass in a command line argument for "date=*" and have the script run every file in my remote directory. Currently, this parameter will successfully process only a single file based on alphabetic priority.

If my two questions should be asked separately, my mistake, and I'll edit this question to just cover one of them.

Example of my code below:

```
today = datetime.datetime.now().strftime('%Y%m%d')

file_var = local_file_path + connect_to_sftp.sftp_get_file(
    local_file_path=local_file_path,
    sftp_host=sftp_host,
    sftp_username=sftp_username,
    sftp_directory=sftp_directory,
    date=today)

ET = xml.etree.ElementTree.parse(file_var).getroot()

def parse_file():
    for node in ET.findall(.......)
``` In another module: ``` def sftp_get_file(local_file_path, sftp_host, sftp_username, sftp_directory, date): pysftp.Connection(sftp_host, sftp_username) # find file in remote directory with given suffix remote_file = glob.glob(sftp_directory + '/' + date + '_file_suffix.xml') # strip directory name from full file name file_name_only = remote_file[0][len(sftp_directory):] # set local path to hold new file local_path = local_file_path # combine local path with filename that was loaded local_file = local_path + file_name_only # pull file from remote directory and send to local directory shutil.copyfile(remote_file[0], local_file) return file_name_only ``` So the SFTP module reads the file, transfers it to the local directory, and returns the file name to be used in the parsing module. The parsing module passes in the parameters and does the rest of the work. What I need to be able to do, on certain occasions, is override the parameter that says "date=today" and instead say "date=20151225", for example, but I must do this through a **Linux** command line argument. In addition, if I currently enter the parameter of "date=\*" it only runs the script for the first file that matches that parameter. I need the script to run for ALL files that match that parameter. Any help is much appreciated. Happy to answer any questions to improve clarity.
2016/01/20
[ "https://Stackoverflow.com/questions/34902307", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5451573/" ]
You can use the `sys` module and pass the filename as a command line argument. That would be:

```
import sys
today = str(sys.argv[1]) if len(sys.argv) > 1 else datetime.datetime.now().strftime('%Y%m%d')
```

If a date is given as the first argument, then `today` will be the value given on the command line; otherwise, if no argument is given, it will be today's date from `datetime`.

For the second question,

```
file_name_only = remote_file[0][len(sftp_directory):]
```

You are only accessing the first element, but glob might return several files when you use the `*` wildcard. You must iterate over the `remote_file` variable and copy all of them.
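To sketch that second point, iterating over every glob match instead of only `remote_file[0]`, here is a self-contained example (the directory and file names are made up just for the demo):

```py
import glob
import os
import tempfile

# Set up a throwaway directory with two matching files.
sftp_directory = tempfile.mkdtemp()
for day in ("20151224", "20151225"):
    open(os.path.join(sftp_directory, day + "_file_suffix.xml"), "w").close()

# With date = "*", glob can return several files; process each of them.
matches = sorted(glob.glob(os.path.join(sftp_directory, "*_file_suffix.xml")))
for remote_path in matches:
    print(os.path.basename(remote_path))
```

In the real script, the body of the loop would be the existing copy-and-parse logic, applied once per matched file.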
You can use [argparse](https://docs.python.org/3/library/argparse.html) to consume command line arguments. You will have to check if a specific date is passed and use it instead of the current date:

```
if args.date_to_run:
    today = args.date_to_run
else:
    today = datetime.datetime.now().strftime('%Y%m%d')
```

For the second part of your question you can use something like <https://docs.python.org/2/library/fnmatch.html> to match multiple files based on a pattern.
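A sketch of the parser setup this implies; the `--date` option name and its default are assumptions for illustration, not the asker's exact interface:

```py
import argparse
import datetime

parser = argparse.ArgumentParser()
parser.add_argument("--date", dest="date_to_run", default=None,
                    help="run for a specific YYYYMMDD instead of today")

# Simulate `python script.py --date 20151225` on the command line:
args = parser.parse_args(["--date", "20151225"])

if args.date_to_run:
    today = args.date_to_run
else:
    today = datetime.datetime.now().strftime('%Y%m%d')

print(today)
```

The list passed to `parse_args` stands in for the real command line; calling `parse_args()` with no arguments reads `sys.argv` instead.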
13,748
73,061,728
I'm trying to get a better understanding of python imports and namespaces. Take the example of a module testImport.py: ```py # testImport.py var1 = 1 def fn1(): return var1 ``` if I import this like this: ```py from testImport import * ``` I can inspect `var1` ```py >>> var1 1 ``` and `fn1()` returns ```py >>> fn1() 1 ``` Now, I can change the value of var1: ```py >>> var1 = 2 >>> var1 2 ``` but `fn1()` still returns 1: ```py >>> fn1() 1 ``` This is different to if I import properly: ```py >>> import testImport as t >>> t.var1 = 2 >>> t.fn1() 2 ``` What's going on here? How does python 'remember' the original version of `var1` in `fn1` when using `import *`?
2022/07/21
[ "https://Stackoverflow.com/questions/73061728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13258525/" ]
I'm not sure that you would be able to modify the root node as you are trying to do here. The namespace URI should be `http://www.w3.org/2000/xmlns/p` if it is `p` that you are trying to use as the prefix, and you would use `p:root` as the qualifiedName, but that would result in the root node being modified like this: `<root xmlns:p="http://www.w3.org/2000/xmlns/p" p:root="p"/>`

A possible alternative approach to get the output you seek might be to add a new node, append all the child nodes from the original root to it, and delete the original. Admittedly it is `hacky` and a better solution might well exist...

Given the source XML as:

```
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<root>
    <domain>cdiinvest.com.br</domain>
    <domain>savba.sk</domain>
    <domain>laboratoriodiseno.cl</domain>
    <domain>upc.ro</domain>
    <domain>nickel.twosuperior.co.uk</domain>
    <domain>hemenhosting.org</domain>
    <domain>hostslim.eu</domain>
</root>
```

Then, to modify the source XML:

```
# local XML source file
$file='xml/netblock.xml';

# new node namespace properties
$nsuri='http://www.w3.org/2000/xmlns/p';
$nsname='p:root';
$nsvalue=null;

# create the DOMDocument and load the XML
libxml_use_internal_errors( true );
$dom=new DOMDocument;
$dom->formatOutput=true;
$dom->validateOnParse=true;
$dom->preserveWhiteSpace=false;
$dom->strictErrorChecking=false;
$dom->recover=true;
$dom->load( $file );
libxml_clear_errors();

# create the new root node
$new=$dom->createElementNS( $nsuri, $nsname, $nsvalue );
$dom->appendChild( $new );

# find the original Root
$root=$dom->documentElement;

# iterate backwards through the childNodes - inserting into the new root container in the original order
for( $i=$root->childNodes->length-1; $i >= 0; $i-- ){
    $new->insertBefore( $root->childNodes[ $i ],$new->firstChild );
}

# remove original root
$root->parentNode->removeChild( $root );

# save the XML ( or, as here, print out modified XML )
printf('<textarea cols=80 rows=10>%s</textarea>', $dom->saveXML() );
``` Which results in the following output: ``` <?xml version="1.0" encoding="utf-8" standalone="yes"?> <p:root xmlns:p="http://www.w3.org/2000/xmlns/p"> <domain>cdiinvest.com.br</domain> <domain>savba.sk</domain> <domain>laboratoriodiseno.cl</domain> <domain>upc.ro</domain> <domain>nickel.twosuperior.co.uk</domain> <domain>hemenhosting.org</domain> <domain>hostslim.eu</domain> </p:root> ```
The root element is the root element. You change it by creating a new document with the root element of your choice. If the the job is left to import all root elements children from another document, then be it (SimpleXML is not that fitting for that, DOMDocument is, see [`DOMDocument::importNode`](https://www.php.net/manual/en/domdocument.importnode.php) and mind the `$deep` flag). If you don't actually want to change the root element but rewrite the XML then [`XMLReader`](https://www.php.net/XMLReader) / [`XMLWriter`](https://www.php.net/XMLWriter) is likely what you're looking for. (This is not entirely clear with your question as it misses the input document of the example) Compare as well with the reference Q&A [How do you parse and process HTML/XML in PHP?](https://stackoverflow.com/q/3577641/367456).
13,749
51,943,359
I have this part in the docker-compose file that will copy the `angular-cli` frontend and a `scripts` directory.

```
www:
  build: ./www
  volumes:
    - ./www/frontend:/app
    - ./www/scripts:/scripts
  command: /scripts/init-dev.sh
  ports:
    - "4200:4200"
    - "49153:49153"
```

The `Dockerfile` inside `www` will install npm.

```
FROM node:10-alpine
RUN apk add --no-cache git make curl gcc g++ python2
RUN npm config set user 0
RUN npm config set unsafe-perm true
RUN npm install
```

And the script:

```
#!/bin/bash
echo "Serving.."
cd /app && npm run-script start
```

I'm running the `docker-compose down && docker-compose build && docker-compose up` command:

```
www_1 | standard_init_linux.go:190: exec user process caused "no such file or directory"
```

I assume the error comes from the `command` line.
2018/08/21
[ "https://Stackoverflow.com/questions/51943359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2111400/" ]
Two options: 1. also start your services with the InProcessServerBuilder, and use the InProcessChannelBuilder to communicate with it, or 2. just contact the server over "localhost"
If you want a CLI type of call to your local gRPC server, you could check out <https://github.com/fullstorydev/grpcurl>.
13,750
42,990,994
I have this code ``` python_data_struct = { 'amazon_app_id': amazon_app_id, 'librivox_rest_url': librivox_rest_url, 'librivox_id': librivox_id, 'top': top, 'pacakge': 'junk', 'version': 'junk', 'password': password, 'description': description, 'long_description': long_description, 'make_audiobook::sftp_script::appname': '\"%{::appname}\"' } try: myyaml = yaml.dump(python_data_struct) except: e = sys.exc_info()[0] print "Error on %s Error [%s]" % ( librivox_rest_url, e ) sys.exit(5) write_file( myyaml, hiera_dir + '/' + appname + '.yaml' ); ``` It outputs yaml that looks like this: ``` {amazon_app_id: junk, description: !!python/unicode ' Riley was an American writer known as the "Hoosier poet", and made a start writing newspaper verse in Hoosier dialect for the Indianapolis Journal in 1875. His favorite authors were Burns and Dickens. This collection of poems is a romanticized and mostly boy-centered paean to a 19th century rural American working-class childhood. (Summary by Val Grimm)', librivox_id: '1000', librivox_rest_url: 'https://librivox.org/api/feed/audiobooks/id/1000/extended/1/format/json', long_description: !!python/unicode "\n Riley was an American writer known as the\ \ \"Hoosier poet\", and made a start writing newspaper verse in Hoosier dialect\ \ for the Indianapolis Journal in 1875. His favorite authors were Burns and Dickens.\ \ This collection of poems is a romanticized and mostly boy-centered paean to\ \ a 19th century rural American working-class childhood. (Summary by Val Grimm)\n\ \nThe \"Selected Riley Child-Rhymes\" App will not use up your data plan's minutes\ \ by downloading the audio files (mp3) over and over. You will download load all\ \ the data when you install the App. The \"Selected Riley Child-Rhymes\" App works\ \ even in airplane mode and in places without Wi-Fi or phone network access! So\ \ you can listen to your audio book anywhere anytime! 
The \"Selected Riley Child-Rhymes\"\ \ App automatically pauses when you make or receive a phone call and automatically\ \ resumes after the call ends. The \"Selected Riley Child-Rhymes\" App will continue\ \ to play until you exit the App or pause the reading so it is perfect for listening\ \ in bed, at the gym, or while driving into work.\" \n", 'make_audiobook::sftp_script::appname': '"%{::appname}"', pacakge: junk, password: junk, top: junk, version: junk} ``` It is hard to see but the particular key/value pair that is a problem is this one: ``` 'make_audiobook::sftp_script::appname': '"%{::appname}"', ``` I need to it to be this: ``` 'make_audiobook::sftp_script::appname': "%{::appname}", ``` Just double quotes around `%{::appname}` I cannot figure out what to do in my python code. Please help :)
2017/03/24
[ "https://Stackoverflow.com/questions/42990994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/759991/" ]
You can make function calls on ngSubmit *form class="well" (ngSubmit)="addUserModal.hide(); addUser(model); userForm.reset()" #userForm="ngForm"*
I haven't used AngularJS, but the following works for me in Angular 2, if you're able to use jQuery: `$(".modal").modal("hide")`
13,751
59,641,747
I cannot tell whether it's bash-related or a python subprocess issue, but the results are different: ``` >>> subprocess.Popen("echo $HOME", shell=True, stdout=subprocess.PIPE).communicate() (b'/Users/mac\n', None) >>> subprocess.Popen(["echo", "$HOME"], shell=True, stdout=subprocess.PIPE).communicate() (b'\n', None) ``` Why is the second result just a newline? Where are the arguments falling off?
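For reference, this behavior comes from how `shell=True` handles a list: Python runs `/bin/sh -c` with only the *first* list element as the command string, and the remaining elements become the shell's positional parameters (`$0`, `$1`, ...) rather than arguments to `echo`. A small demonstration (assumes a POSIX shell):

```python
import subprocess

# Equivalent to: /bin/sh -c 'echo hello $0' world
# so the shell expands $0 to "world" inside the command string.
out = subprocess.check_output(["echo hello $0", "world"], shell=True)
print(out)  # b'hello world\n'
```

So with `["echo", "$HOME"]`, the command run is just `echo` (which prints a newline), and `"$HOME"` is silently absorbed as `$0`.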
2020/01/08
[ "https://Stackoverflow.com/questions/59641747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6519078/" ]
Use fifo format, which can reconnect. My working example: ``` -f fifo -fifo_format flv \ -drop_pkts_on_overflow 1 -attempt_recovery 1 -recover_any_error 1 \ rtmp://bla.bla/bla ```
+1 for the accepted answer, but: In FFmpeg, an encoder auto-detects its parameters based on the selected output format. Here the output format is unknown (that's correct for formats like "fifo" and "tee"), so the encoder won't have all parameters set up the same as when using the "flv" output format. For example: Wowza Streaming Engine will report an error while trying to publish that RTMP stream: ``` H264Utils.decodeAVCC : java.lang.ArrayIndexOutOfBoundsException: 9 ``` To overcome that you should add the `-flags +global_header` option. This should help with @Matt's problem.
13,754
58,469,032
I am starting to learn python and I don't know much about programming right now. In the future, I want to build android applications. Python looks interesting to me. Can I use python for building Android applications?
2019/10/20
[ "https://Stackoverflow.com/questions/58469032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11109516/" ]
You can build Android apps in Python, but it's not as powerful as Android Studio, and the apps made with Python take more space and are less memory-efficient. But the apps work well. There are several ways to use Python on Android:

* **BeeWare**: a collection of tools for building native user interfaces.
* **Chaquopy**: a plugin for Android Studio's Gradle-based build system.
* **Kivy**: a cross-platform OpenGL-based user interface toolkit.
* **Pyqtdeploy**
* **QPython**
* **SL4A**
* **PySide**
As of now the official Android development languages are **Kotlin** and **Java** in the `Android Studio` IDE, with the addition of **Dart** for cross-platform development with the `Flutter` SDK. I would advise you to stick with whichever you find easiest for your needs. Python is more widely used in the domains of data science/machine learning, though, so your knowledge of it is still a big plus.
13,755
74,054,668
How to convert a `.csv` file to `.npy` efficiently? -------------------------------------------------- I've tried: ``` import numpy as np filename = "myfile.csv" vec = np.loadtxt(filename, delimiter=",") np.save(f"(unknown).npy", vec) ``` While the above works for smallish files, the actual `.csv` file I'm working on has ~12 million lines with 1024 columns, and it takes quite a long time to load everything into RAM before converting it into the `.npy` format. ### Q (Part 1): Is there some way to load/convert a `.csv` to `.npy` efficiently for a large CSV file? The above code snippet is similar to the answer from [Convert CSV to numpy](https://stackoverflow.com/questions/34932210/convert-csv-to-numpy) but that won't work for a ~12M x 1024 matrix. ### Q (Part 2): If there isn't any way to load/convert a `.csv` to `.npy` efficiently, is there some way to iteratively read the `.csv` file into `.npy` efficiently? Also, there's an answer here <https://stackoverflow.com/a/53558856/610569> to save the csv file as a numpy array iteratively. But it seems like `np.vstack` isn't the best solution when reading the file. The accepted answer there suggests hdf5, but that format is not the main objective of this question and the hdf5 format isn't desired in my use-case since I have to read it back into a numpy array afterwards. ### Q (Part 3): If part 1 and part 2 are not possible, are there other efficient storage options (e.g. tensorstore) that can store data and efficiently convert it to a numpy array when loading the saved storage format? There is another library, `tensorstore`, that seems to handle arrays efficiently and supports conversion to a numpy array when read, <https://google.github.io/tensorstore/python/tutorial.html>. But somehow there isn't any information on how to save the `tensor`/array without the exact dimensions; all of the examples seem to include configurations like `'dimensions': [1000, 20000],`. 
Unlike the HDF5, the tensorstore doesn't seem to have reading overhead issues when converting to numpy, from docs: > > Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved) > > >
2022/10/13
[ "https://Stackoverflow.com/questions/74054668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/610569/" ]
Nice question; Informative in itself. I understand you want to have the whole data set/array in memory, eventually, as a NumPy array. I assume, then, you have enough (RAM) memory to host such array -- 12M x 1K. I don't specifically know about how `np.loadtxt` (`genfromtxt`) is operating behind the scenes, so I will tell you how I *would* do (after trying like you did). ### Reasoning about memory... Notice that a simple boolean array will cost ~12 GBytes of memory: ```py >>> print("{:.1E} bytes".format( np.array([True]).itemsize * 12E6 * 1024 )) 1.2E+10 bytes ``` And this is for a *Boolean* data type. Most likely, you have -- what -- a dataset of Integer, Float? The size may increase quite significantly: ```py >>> np.array([1], dtype=bool).itemsize 1 >>> np.array([1], dtype=int).itemsize 8 >>> np.array([1], dtype=float).itemsize 8 ``` > > *It's a lot of memory* (which you know, just want to emphasize). > > > At this point, I would like to point out a possible *swapping* of the working memory. You may have enough physical (RAM) memory in your machine, but if not enough of *free* memory, your system will use the *swap* memory (i.e, *disk*) to keep your system stable & have the work done. The cost you pay is clear: read/writing from/to the disk is very slow. **My point so far is**: check the data type of your dataset, estimate the size of your future array, and guarantee you have that minimum amount of RAM memory available. ### I/O text Considering you do have all the (RAM) memory necessary to host the whole numpy array: I would then loop over the whole (~12M lines) text file, filling the pre-existing array row-by-row. More precisely, I would have the (big) array already instantiated before start reading the file. Only then, I would read each line, split the columns, and give it to [`np.asarray`](https://numpy.org/devdocs/reference/generated/numpy.asarray.html) and assign those (1024) values to each respective row of the *output* array. 
> > The looping over the file is slow, yes. The thing here is that you limit (and control) the amount of memory being used. Roughly speaking, the big objects consuming your memory are the "output" (big) array, and the "line" (1024) array. Sure, there is a considerable amount of memory being consumed in each loop by the temporary objects created while reading (text!) values, splitting into list elements and casting to an array. Still, it's something that will remain largely constant during the whole ~12M lines. > > So, **the steps I would go through are**: ``` 0) estimate and guarantee enough RAM memory available 1) instantiate (np.empty or np.zeros) the "output" array 2) loop over "input.txt" file, create a 1D array from each line "i" 3) assign the line values/array to row "i" of "output" array ``` Sure enough, you can even make it parallel: if on one hand text files cannot be randomly (r/w) accessed, on the other hand you can easily split them (see [How can I split one text file into multiple \*.txt files?](https://stackoverflow.com/q/19031144/687896)) and have them read in parallel -- if *fun* is at the table -- should that time be critical. Hope that helps.
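A minimal sketch of steps 1-3 above (plain numpy; the function and file names are illustrative, and it assumes the row and column counts are known up front):

```python
import numpy as np

def csv_to_array(path, n_rows, n_cols, dtype=np.float64):
    # 1) instantiate the "output" array once, up front
    out = np.empty((n_rows, n_cols), dtype=dtype)
    with open(path) as fh:
        # 2) loop over the file, one line at a time
        for i, line in enumerate(fh):
            # 3) parse the line and assign its values to row i
            out[i] = np.array(line.split(","), dtype=dtype)
    return out
```

Only the pre-allocated output array plus one parsed line live in memory at a time; `np.save(out_path, csv_to_array(...))` then writes the result. Note the output array itself still has to fit in RAM (roughly 98 GB for 12M x 1024 float64), which is step 0.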
TL;DR ===== Export to a different function other than `.npy` seems inevitable unless your machine is able to handle the size of the data in-memory as per described in [@Brandt answer](https://stackoverflow.com/a/74055562/610569). --- Reading the data, then processing it (Kinda answering Q part 2) =============================================================== To handle data size larger than what the RAM can handle, one would often resort to libraries that performs "**out-of-core**" computation, e.g. [`turicreate.SFrame`](https://apple.github.io/turicreate/docs/api/generated/turicreate.SFrame.html), [`vaex`](https://vaex.readthedocs.io/en/docs/example_io.html#In-memory-data-representations) or [`dask`](https://dask.pydata.org/en/latest/dataframe.html) . These libraries would be able to lazily load the `.csv` files into dataframes and process them by chunks when evaluated. ``` from turicreate import SFrame filename = "myfile.csv" sf = SFrame.read_csv(filename) sf.apply(...) # Trying to process the data ``` or ``` import vaex filename = "myfile.csv" df = vaex.from_csv(filename, convert=True, chunk_size=50_000_000) df.apply(...) ``` Converting the read data into numpy array (kinda answering Q part 1) ==================================================================== While out-of-core libraries can read and process the data efficiently, converting into numpy is an "**in-memory**" operation, the machine needs to have enough RAM to fit all data. The `turicreate.SFrame.to_numpy` documentation writes: > > Converts this SFrame to a numpy array > > > This operation will construct a numpy array in memory. Care must be taken when size of the returned object is big. > > > And the `vaex` documentation writes: > > In-memory data representations > > > One can construct a Vaex DataFrame from a variety of in-memory data representations. 
> > > And `dask` best practices actually reimplemented their own array objects that are simpler than numpy array, see <https://docs.dask.org/en/stable/array-best-practices.html>. But when going through the docs, it seems like the format they have saved the dask array in are not `.npy` but various other formats. Writing the file into non-`.npy` versions (answering Q Part 3) ============================================================== Given the numpy arrays are inevitably in-memory, trying to save the data into one single `.npy` isn't the most viable option. Different libraries seems to have different solutions for storage. E.g. * `vaex` saves the data into `hdf5` by default if the `convert=True` argument is set when data is read through `vaex.from_csv()` * `sframe` saves the data into their [own binary format](https://apple.github.io/turicreate/docs/api/generated/turicreate.SFrame.save.html#turicreate.SFrame.save) * `dask` [export functions](https://docs.dask.org/en/stable/dataframe-create.html) save `to_hdf()` and `to_parquet()` format
13,757
8,575,062
I am sure the configuration of `matplotlib` for python is correct since I have used it to plot some figures. But today it just stopped working for some reason. I tested it with really simple code like: ``` import matplotlib.pyplot as plt import numpy as np x = np.arange(0, 5, 0.1) y = np.sin(x) plt.plot(x, y) ``` There's no error, but no figure shows up. I am using python 2.6 and Eclipse on Ubuntu.
2011/12/20
[ "https://Stackoverflow.com/questions/8575062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/504283/" ]
You must use `plt.show()` at the end in order to see the plot
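Also worth noting: if the script runs somewhere no window can appear (over SSH, or some IDE run configurations), `plt.savefig` writes the figure to a file instead of opening a window. A sketch using the non-interactive `Agg` backend (the filename is illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: render without any display
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0, 5, 0.1)
plt.plot(x, np.sin(x))
plt.savefig("sine.png")  # the figure lands in a file instead of a window
```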
`plt.plot(X,y)` function just draws the plot on the canvas. In order to view the plot, you have to specify `plt.show()` after `plt.plot(X,y)`. So, ``` import matplotlib.pyplot as plt X = //your x y = //your y plt.plot(X,y) plt.show() ```
13,764
15,128,225
I have developed a python script where I have a settings window which has options to select the paths for the installation of software. When the OK button of the settings window is clicked, I want to write all the selected paths to the registry and read them back when the settings window is opened again. My code looks as below. ``` def OnOk(self, event): data1=self.field1.GetValue() #path selected in setting window aReg = ConnectRegistry(None,HKEY_LOCAL_MACHINE) keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE) try: SetValueEx(keyVal,"Log file",0,REG_SZ,data1) except EnvironmentError: pass CloseKey(keyVal) CloseKey(aReg) ``` I get an error like below: ``` Traceback (most recent call last): File "D:\PROJECT\project.py", line 305, in OnOk keyVal=OpenKey(aReg,r"SOFTWARE\my path to\Registry", 0,KEY_WRITE) WindowsError: [Error 5] Access is denied ``` And when reading from the registry, the saved values have to show up in the settings window. I tried the code below. Though it works, I'm not satisfied with the way I programmed it. Help me find a better solution. ``` key = OpenKey(HKEY_CURRENT_USER, r'Software\my path to\Registry', 0, KEY_READ) for i in range(4): try: n,v,t = EnumValue(key,i) if i==0: self.field2.SetValue(v) elif i==1: self.field3.SetValue(v) elif i==2: self.field4.SetValue(v) elif i==3: self.field1.SetValue(v) except EnvironmentError: pass CloseKey(key) ```
2013/02/28
[ "https://Stackoverflow.com/questions/15128225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2016950/" ]
Reading registry keys: ``` import os from contextlib import suppress from winreg import (HKEY_CURRENT_USER, OpenKey, QueryValueEx, SetValueEx, KEY_READ, KEY_WRITE, REG_SZ, REG_DWORD) def read(path, root=HKEY_CURRENT_USER): path, name = os.path.split(path) with suppress(FileNotFoundError), OpenKey(root, path) as key: return QueryValueEx(key, name)[0] ``` And writing: ``` def write(path, value, root=HKEY_CURRENT_USER): path, name = os.path.split(path) with OpenKey(root, path, 0, KEY_WRITE) as key: SetValueEx(key, name, 0, REG_SZ, value) ``` Extended for type handling. Provide the type as an argument, or match the current type in the registry or the python value type. ``` def write(path, value, root=HKEY_CURRENT_USER, regtype=None): path, name = os.path.split(path) with OpenKey(root, path, 0, KEY_WRITE|KEY_READ) as key: with suppress(FileNotFoundError): regtype = regtype or QueryValueEx(key, name)[1] SetValueEx(key, name, 0, regtype or REG_DWORD if isinstance(value, int) else REG_SZ, str(value) if regtype==REG_SZ else value) ``` > > **NOTE:** Use of [contextlib.suppress()](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) (available since `python 3.4`) can be replaced by try..except..pass for older versions. The [context manager](https://docs.python.org/2/library/_winreg.html#_winreg.PyHKEY.__enter__) interface for winreg was introduced in `python 2.6`. > > >
For creating / writing values in a registry key: ``` from winreg import * keyVal = r'SOFTWARE\\python' try: key = OpenKey(HKEY_LOCAL_MACHINE, keyVal, 0, KEY_ALL_ACCESS) except FileNotFoundError: key = CreateKey(HKEY_LOCAL_MACHINE, keyVal) SetValueEx(key, "Start Page", 0, REG_SZ, "snakes") CloseKey(key) ``` If access is denied, try running the command (cmd or your IDE) in administrator mode. For reading a value from a registry key: ``` from winreg import * Registry = ConnectRegistry(None, HKEY_LOCAL_MACHINE) RawKey = OpenKey(Registry, "SOFTWARE\\python") try: i = 0 while 1: name, value, type = EnumValue(RawKey, i) print("name:",name,"value:", value,"i:", i) i += 1 except WindowsError: print("") ```
13,774
15,576,693
I am currently working on a project for one of my classes where I have to implement an AI opponent to play tic-tac-toe using minmax and Alpha-Beta minmax algorithms to determine the moves. The problem I am having, however, is attempting to generate a list of possible moves for a board. My problem code is as follows ``` def genMoves(genBoard, turnNumber): moveList = [] print "inMovesList" #Figure out if X or O go now if turnNumber % 2 == 0: moveChar = "O" else: moveChar = "X" i = 0; while i < 9: tempBoard = genBoard if tempBoard[i] == "*": #set tempBoard[i] to X or O tempBoard[i] = moveChar #append move, new board moveList.append((i, tempBoard)) i+=1 print "MovesList: " print moveList return moveList ``` My board is represented as a list of 9 strings initialized to `["*", "*", "*", "*", "*", "*", "*", "*", "*"]`. My goal is to have the move list return a list of tuples, with the first element of the tuple being i (where the X or O was inserted) and the second element being the resulting board. The problem I have is that I will receive a list with the correct number of possible moves (e.g. if I manually play the first 4 moves for both sides it will only give me 5 possible moves); however, it will place the same move in each location that contains a \*. (So it ends up generating something like X,O,O,O,O,O,O,O,O for possible second moves.) This is not the first time I have had to use minmax but it is the first time I've had to do it in python. Any suggestions for how to get around this issue would be helpful! Thanks!
2013/03/22
[ "https://Stackoverflow.com/questions/15576693", "https://Stackoverflow.com", "https://Stackoverflow.com/users/967929/" ]
This line is a problem: ``` tempBoard = genBoard ``` After this line, you seem to think that you have *two* lists -- the original, still referenced by `genBoard`, and a new one, now referenced by `tempBoard`. This is not the case. This line does **not** create a copy of the list. Instead, it binds the name `tempBoard` to refer to *the same object* that `genBoard` is bound to. Consequently, subsequent references to `tempBoard[i]` also affect `genBoard[i]`. Try one of these instead: ``` tempBoard = list(genBoard) tempBoard = genBoard[:] tempBoard = copy.copy(genBoard) ``` Each of these lines creates a new list, the initial contents of which is the same as `genBoard`. `tempboard` is bound to this new list, while `genboard` remains bound to the old list. If the object in question were more complicated than a list of strings, you might need to do this: ``` tempBoard = copy.deepcopy(genBoard) ```
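A quick demonstration of the aliasing described above (plain Python, nothing else needed):

```python
a = ["*", "*", "*"]
b = a            # b is just another name for the SAME list object
b[0] = "X"
print(a)         # ['X', '*', '*']  -- a changed too

c = list(a)      # c is an independent (shallow) copy
c[1] = "O"
print(a)         # ['X', '*', '*']  -- a is untouched
print(c)         # ['X', 'O', '*']
```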
I belive Python does not create a copy of your Board but just points to the original version. therefore your printout is: *"MovesList: " [[0,(X,O,O,O,O,O,O,O,O)],[1,(X,O,O,O,O,O,O,O,O)],[2,(X,O,O,O,O,O,O,O,O)],* etc. and your genBoard, variable is changed. to test that add *print genBoard* directly before the end of your method if that was indeed the problem try to google how to create a copy of your Board instead of referencing to it.
13,784
47,801
I am trying to create a web application using Pylons and the resources on the web point to the [PylonsBook](http://pylonsbook.com/alpha1/authentication_and_authorization) page which isn't of much help. I want authentication and authorisation and is there anyway to setup Authkit to work easily with Pylons? I tried downloading the [SimpleSiteTemplate](http://pypi.python.org/pypi/SimpleSiteTemplate/) from the cheeseshop but wasn't able to run the setup-app command. It throws up an error: ``` File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__ table = metadata.tables[key] AttributeError: 'module' object has no attribute 'tables' ``` I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, Authkit 0.4.
2008/09/06
[ "https://Stackoverflow.com/questions/47801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1448/" ]
I don't think AuthKit is actively maintained anymore. It does use the Paste (<http://pythonpaste.org>) libs though for things like HTTP Basic/Digest authentication. I would probably go ahead and take a look at the source for some inspiration and then use the Paste tools if you want to use HTTP authentication. There is also OpenID which is very easy to setup. The python-openid libs have an excellent example that is easy to translate to WSGI for wrapping a Pylons app. You can look at an example: <http://ionrock.org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth.py>
This actually got me interested:[Check out this mailing on the pylons list](http://groups.google.com/group/pylons-discuss/browse_thread/thread/644deb53612af362?hl=en). So AuthKit is being developed, and I will follow the book and get back on the results.
13,785
44,532,041
I want to strip all python docstrings out of a file using simple search and replace, and the following (extremely) simplistic regex does the job for one line doc strings: [Regex101.com](https://regex101.com/r/PiExji/1) ``` """.*""" ``` How can I extend that to work with multi-liners? Tried to include `\s` in a number of places to no avail.
2017/06/13
[ "https://Stackoverflow.com/questions/44532041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2171758/" ]
As you cannot use an inline `s` (DOTALL) modifier, the usual workaround to match any char is using a character class with opposite shorthand character classes: ``` """[\s\S]*?""" ``` or ``` """[\d\D]*?""" ``` or ``` """[\w\W]*?""" ``` will match `"""`, then any 0+ chars, as few as possible since `*?` is a lazy quantifier, and then the trailing `"""`.
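Applying the pattern with `re.sub` (note the caveat: this strips *every* triple-double-quoted string, docstring or not):

```python
import re

src = '''def add(a, b):
    """Return the sum.

    Works on anything supporting +.
    """
    return a + b
'''

# Remove each """..."""" block, shortest match first thanks to *?
stripped = re.sub(r'"""[\s\S]*?"""', '', src)
print(stripped)
```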
Sometimes there are multiline strings that are not docstrings. For example, you may have a complicated SQL query that extends across multiple lines. The following attempts to look for multiline strings that appear before class definitions and after function definitions. ``` import re input_str = """''' This is a class level docstring ''' class Article: def print_it(self): ''' method level docstring ''' print('Article') sql = ''' SELECT * FROM mytable WHERE DATE(purchased) >= '2020-01-01' ''' """ doc_reg_1 = r'("""|\'\'\')([\s\S]*?)(\1\s*)(?=class)' doc_reg_2 = r'(\s+def\s+.*:\s*)\n(\s*"""|\s*\'\'\')([\s\S]*?)(\2[^\n\S]*)' input_str = re.sub(doc_reg_1, '', input_str) input_str = re.sub(doc_reg_2, r'\1', input_str) print(input_str) ``` Prints: ``` class Article: def print_it(self): print('Article') sql = ''' SELECT * FROM mytable WHERE DATE(purchased) >= '2020-01-01' ''' ```
13,790
43,970,466
I tried using the below python code to find the websites of the companies. But after trying few times, I face a **Service Unavailable** error. I have finished the first level of finding the possible domains of the companies. For example : CompanyExample [u'<http://www.examples.com/>', u'<https://www.example.com/quote/CGL:SP>', u'<http://example2.sgx.com/FileOpen/China%20Great%20Land.ashx?App=Prospectus&FileID=3813>', u'<https://www.example3.com/php/company-profile/SG/en_2036109.html>'] ``` from google import search for link in links: parsed_uri = urlparse(link) domain = '{uri.scheme}://{uri.netloc}/'.format(uri=parsed_uri) for url in search(domain,stop = 4): print url ``` Kindly help me on: 1. Why do I find **urllib2.HTTPError: HTTP Error 503: Service Unavailable** error suddenly. 2. Is there any other method(Python requests) to find websites of the list of companies ?
2017/05/15
[ "https://Stackoverflow.com/questions/43970466", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7664428/" ]
If you need to check a specific permission try ``` {% if user.hasPermission('administer nodes') %} <div class="foo"><a href="/">Shorthand link for admins</a></div> {% endif %} ```
If you want to check whether the current user is an administrator in a Twig file, you can use ``` {% if is_admin %} <div class="foo"><a href="/">Shorthand link for admins</a></div> {% endif %} ```
13,791
2,731,871
I have a web application in python wherein the user submits their email and password. These values are compared to values stored in a mysql database. If successful, the script generates a session id, stores it next to the email in the database, and sets a cookie with the session id, which allows the user to interact with other parts of the site. When the user clicks logout, the script erases the session id from the database and deletes the cookie. The cookie expires after 5 hours. My concern is that if the user doesn't log out and the cookie expires, the script will force him to log in, but if he has copied the session id from before, it can still be validated. How do I automatically delete the session id from the mysql database after 5 hours?
2010/04/28
[ "https://Stackoverflow.com/questions/2731871", "https://Stackoverflow.com", "https://Stackoverflow.com/users/301860/" ]
You can encode the expiration time as part of your session id. Then when you validate the session id, you can also check if it has expired, and if so force the user to log-in again. You can also clean your database periodically, removing expired sessions.
You'd have to add a timestamp to the session ID in the database to know when it ran out. Better, make the timestamp part of the session ID itself, eg.: ``` Set-Cookie: session=1272672000-(random_number);expires=Sat, 01-May-2010 00:00:00 GMT ``` so that your script can see just by looking at the number at the front whether it's still a valid session ID. Better still, make both the expiry timestamp *and* the user ID part of the session ID, and make the bit at the end a cryptographic hash of the user ID, expiry time and a secret key. Then you can validate the ID *and* know which user it belongs to without having to store anything in the database. ([Related PHP example](https://stackoverflow.com/questions/2695153/php-csrf-how-to-make-it-works-in-all-tabs/2695291#2695291).)
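A sketch of the self-validating session id described above, using an HMAC rather than a plain hash so the signature cannot be forged without the server-side secret (the names and format here are illustrative, not from the question):

```python
import hashlib
import hmac
import time

SECRET = b"keep-this-on-the-server-only"  # assumption: never sent to clients

def make_session_id(user_id, ttl_seconds=5 * 3600):
    # Embed the user id and expiry time, then sign both with the secret.
    expires = int(time.time()) + ttl_seconds
    payload = "%s:%d" % (user_id, expires)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (payload, sig)

def validate_session_id(session_id):
    # Returns the user id for a valid, unexpired id; None otherwise.
    try:
        user_id, expires, sig = session_id.rsplit(":", 2)
    except ValueError:
        return None
    payload = "%s:%s" % (user_id, expires)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected) and int(expires) > time.time():
        return user_id
    return None
```

With this scheme no database lookup is needed to reject an expired or tampered id; the database row is only required if you also want server-side revocation, i.e. the logout case in the question.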
13,792
47,412,723
I'm new to python and I am trying to build a script that gets a URL as an argument, sends a GET request to the website, and then prints the details from the response and the HTML body as well. The code I wrote: ``` import sys import urllib.request url = 'https://%s' % (sys.argv[1]) res = urllib.request.urlopen(url) print('Response from the server:'+'\n', res.info()) print('HTML web page:'+'\n', res.read()) ``` Now, I want to check whether an argument was passed on the command line or not, and if not, to alert the user about it. I also want to print the Status Code and the Reason. What exactly does Reason mean? This is what I have so far, and I do not understand why I don't get any result. My current code is: ``` import sys import urllib.request, urllib.error try: url = 'https://%s' % (sys.argv[1]) res = urllib.request.urlopen(url) except IndexError: if len(sys.argv) < 1: print(res.getcode()) print(res.getreason()) elif len(sys.argv) > 1: print('Response from the server:'+'\n', res.info()) print('HTML web page:'+'\n', res.read()) print('Status Code:', res.getcode()) ``` Thank you.
2017/11/21
[ "https://Stackoverflow.com/questions/47412723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You have to handle arrays of elements separately: ``` var formToJSON = elements => [].reduce.call(elements, (data, element) => { var isArray = element.name.endsWith('[]'); var name = element.name.replace('[]', ''); data[name] = isArray ? (data[name] || []).concat(element.value) : element.value; return data; }, {}); ```
If you want to convert form data into json object, try the following ``` var formData = JSON.parse(JSON.stringify($('#frm').serializeArray())); ```
13,793