| qid | question | date | metadata | response_j | response_k | __index_level_0__ |
|---|---|---|---|---|---|---|
58,026,436
|
I am working on a bank statement, corresponding to the `output` dataframe, with an ending balance corresponding to `output['balance'][0]`. I would like to calculate the balance values for all the individual transactions as described below. It's a very straightforward calculation and yet it doesn't seem to be working - is there something quite obvious I am missing? Thanks in advance!
```
output['balance'] = ''
output['balance'][0] = 21.15
if len(output[amount]) > 0:
return output[balance][i+1].append((output[balance][i]-output[amount][i+1]))
else:
output[balance].append((output[balance][0]))
output[['balance']] = output['Amount'].apply(lambda amount: bal_calc(output, amount))```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2896 try:
-> 2897 return self._engine.get_loc(key)
2898 except KeyError:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 4.95
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-271-b85947935fca> in <module>
----> 1 output[['balance']] = output['Amount'].apply(lambda amount: bal_calc(output, amount))
~\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
4040 else:
4041 values = self.astype(object).values
-> 4042 mapped = lib.map_infer(values, f, convert=convert_dtype)
4043
4044 if len(mapped) and isinstance(mapped[0], Series):
pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()
<ipython-input-271-b85947935fca> in <lambda>(amount)
----> 1 output[['balance']] = output['Amount'].apply(lambda amount: bal_calc(output, amount))
<ipython-input-270-cbf5ac20716d> in bal_calc(output, amount)
2 output['balance'] = ''
3 output['balance'][0] = 21.15
----> 4 if len(output[amount]) > 0:
5 return output[balance][i+1].append((output[balance][i]-output[amount][i+1]))
6 else:
~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2978 if self.columns.nlevels > 1:
2979 return self._getitem_multilevel(key)
-> 2980 indexer = self.columns.get_loc(key)
2981 if is_integer(indexer):
2982 indexer = [indexer]
~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2897 return self._engine.get_loc(key)
2898 except KeyError:
-> 2899 return self._engine.get_loc(self._maybe_cast_indexer(key))
2900 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2901 if indexer.ndim > 1 or indexer.size > 1:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 4.95
```
|
2019/09/20
|
[
"https://Stackoverflow.com/questions/58026436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7698202/"
] |
I would need to have all of your code and be able to run it locally in order to diagnose the problem, because your posting is devoid of details (I would need to see inside your `ManipulatePixel` function, as well as the code that calls `ProcessFrame`). But here are some general tips that apply in your case.
* 2D arrays in .NET are significantly slower than 1D arrays and jagged arrays, even in .NET Core today - this is a longstanding bug.
+ See here:
- <https://github.com/dotnet/coreclr/issues/4059>
- [Why are multi-dimensional arrays in .NET slower than normal arrays?](https://stackoverflow.com/questions/468832/why-are-multi-dimensional-arrays-in-net-slower-than-normal-arrays)
- [Multi-dimensional array vs. One-dimensional](https://stackoverflow.com/questions/5476000/multi-dimensional-array-vs-one-dimensional)
+ So consider changing your code to use either a jagged array ([which also helps with memory locality/proximity caching, as each thread would have its own private buffer](https://stackoverflow.com/questions/12065774/why-does-cache-locality-matter-for-array-performance)) or a 1D array with your own code being responsible for bounds-checking.
+ Or better-yet: use `stackalloc` to manage the buffer's lifetime and pass that by-pointer (`unsafe` ahoy!) to your thread delegate.
* Sharing memory buffers between threads makes it harder for the system to optimize safe memory accesses.
* Avoid allocating a new buffer for each frame encountered - if a frame has a limited lifespan then consider using reusable buffers from a buffer-pool.
* Consider using the SIMD and AVX features in .NET. While modern C/C++ compilers are smart enough to compile code to use those instructions, the .NET JIT isn't so hot - but you can make explicit calls into [SIMD/AVX instructions using the SIMD-enabled types](https://learn.microsoft.com/en-us/dotnet/standard/numerics#simd-enabled-types) (you'll need to use .NET Core 2.0 or later for the best accelerated functionality)
* Also, avoid copying individual bytes or scalar values inside a `for` loop in C#, instead consider using `Buffer.BlockCopy` for bulk copy operations (as these can use hardware memory copy features).
* Regarding your observation of "80% CPU usage" - if you have a loop in a program then that *will* cause 100% CPU usage within the time-slices provided by the operating-system - if you don't see 100% usage then one of the following applies:
+ Your code is actually running *faster* than real-time (this is a good thing!) - (unless you're certain your program can't keep-up with the input?)
+ Your code's thread (or threads) is blocked by something, such as a blocking IO call or a misplaced `Thread.Sleep`. Use tools like ETW to see what your process is doing when you think it should be CPU-bound.
+ Ensure you aren't using any `lock` (`Monitor`) calls or using other thread or memory synchronization primitives.
|
Efficiency matters ( it is **not** true-**`[PARALLEL]`**, but may, yet need not, benefit from a *"just"*-`[CONCURRENT]` organisation of work )
The BEST, yet a rather hard way, if ultimate performance is a MUST :
--------------------------------------------------------------------
In-line an assembly, optimised as per cache-line sizes in the CPU hierarchy, and keep indexing that follows the actual memory-layout of the 2D data `{ column-wise | row-wise }`. Given there is no 2D-kernel-transformation mentioned, your process does not need to "touch" any topological-neighbours, so the indexing can step in whatever order "across" both of the ranges of the 2D-domain, and the `ManipulatePixel()` may get more efficient transforming blocks-of-pixels, instead of bearing all the overheads of calling a process just for each isolated atomicised-1px ( ILP + cache-efficiency are on your side ).
Given your target production-platform CPU-family, best use (block-SIMD)-vectorised instructions available from AVX2, at best AVX512 code. As you most probably know, you may use C/C++ with AVX-intrinsics for performance optimisations, inspect the resulting assembly, and finally "copy" the best resulting assembly into your C# assembly-inlining. Nothing will run faster. Tricks with CPU-core affinity mapping and eviction/reservation are indeed a last resort, yet may help for an *almost* hard-real-time production setting ( though, hard R/T systems are seldom developed in an ecosystem with non-deterministic behaviour )
A CHEAP, few-seconds step :
---------------------------
Test and benchmark the run-time per batch of frames for a reversed composition: move the more-"expensive" part, the `Parallel.For(...{...})`, inside the `for(var col = 0; col < width; ++col){...}`, to see the change in the costs of instantiating the `Parallel.For()` instrumentation.
Next, if going this cheap way, think about re-factoring the `ManipulatePixel()` to at least use a block of data, aligned with the data-storage layout and being a multiple of the cache-line length ( for cache-hits **`~ 0.5 ~ 5 [ns]`** improved costs-of-memory accesses, being **`~ 100 ~ 380 [ns]`** otherwise ). Here, a will to distribute the work (the worse per 1px) across all NUMA-CPU-cores will result in paying way more time, due to extended access-latencies for cross-NUMA-(non-local) memory addresses, and besides never re-using an expensively cached block-of-fetched-data, you knowingly pay excessive costs from cross-NUMA-(non-local) memory fetches ( from which you "use" just 1px and "throw" away all the rest of the cached-block, as those pixels will get re-fetched and manipulated in some other CPU-core at some other time ~ a triple-waste of time ~ sorry to have mentioned that explicitly, but **when shaving each possible `[ns]` this cannot happen in a production pipeline** )
---
Anyway, let me wish you perseverance and good luck on your steps forwards to gain the needed efficiency back onto your side.
| 8,669
|
29,360,607
|
I was looking up how to create a function that removes duplicate characters from a string in python and found this on stack overflow:
```
from collections import OrderedDict
def remove_duplicates (foo) :
print " ".join(OrderedDict.fromkeys(foo))
```
It works, but how? I've searched what OrderedDict and fromkeys mean but I can't find anything that explains how it works in this context.
|
2015/03/31
|
[
"https://Stackoverflow.com/questions/29360607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4728174/"
] |
I will give it a shot:
[OrderedDict](https://docs.python.org/3/library/collections.html#collections.OrderedDict) is a dictionary that stores keys in the order they are added. Normal dictionaries don't. If you look at the **docstring** of `fromkeys`, you find:
>
> OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.
>
>
>
So the `fromkeys` class method creates an `OrderedDict` using the items in the input iterable `S` (in my example, characters from a string) as keys. In a dictionary, keys are unique, so duplicate items in `S` are ignored.
For example:
```
s = "abbcdece" # example string with duplicate characters
print(OrderedDict.fromkeys(s))
```
This results in an OrderedDict:
```
OrderedDict([('a', None), ('b', None), ('c', None), ('d', None), ('e', None)])
```
Then `" ".join(some_iterable)` takes an iterable and joins its elements, using a space in this case. It uses only the keys, as iterating through a dictionary is done by its keys. For example:
```
for k in OrderedDict.fromkeys(s): # k is a key of the OrderedDict
print(k)
```
Results in:
```
a
b
c
d
e
```
Subsequently, the call to `join`:
```
print(" ".join(OrderedDict.fromkeys(s)))
```
will print out:
```
a b c d e
```
**Using set**
Sometimes, people use a set for this:
```
print( " ".join(set(s)))
# c a b d e
```
But unlike sets in C++, sets in Python do not guarantee order. So using a set will give you unique values easily, but they might be in a different order than they are in the original list or string (as in the above example).
Hope this helps a bit.
|
Using a list comprehension:
```
print ' '.join([character for index, character in enumerate(foo) if character not in foo[:index]])
```
| 8,672
|
22,036,124
|
I declared a few global variables in a python file and would like to reset their values to None in a function. Is there a better/hack/pythonic way to declare all variables as global and assign them a value in one line?
```
doctype, content_type, framework, cms, server = (None,)*5
def reset():
doctype, content_type, framework, cms, server = (None,)*5
```
|
2014/02/26
|
[
"https://Stackoverflow.com/questions/22036124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2113279/"
] |
Chain `=`, since you're assigning the immutable `None` to them:
```
doctype = content_type = framework = cms = server = None
```
If you want to use the `reset` function, you have to declare them as `global` inside it:
```
def reset():
global doctype, content_type, framework, cms, server
doctype = content_type = framework = cms = server = None
```
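A quick usage sketch of the effect, at module level (the value `'html5'` is just a hypothetical placeholder):
```
doctype = 'html5'   # give one of the globals some non-None value
reset()             # rebinds all five module-level names
print(doctype)      # -> None
```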
|
I would use **one** global mutable object in this case, `dict` for example:
```
conf = dict.fromkeys(['doctype', 'content_type', 'framework', 'cms', 'server'])
def reset():
for k in conf:
conf[k] = None
```
or a class. This way you can encapsulate `reset` in the class itself:
```
class Config():
doctype = None
content_type = None
framework = None
cms = None
server = None
@classmethod
def reset(cls):
for attr in [i for i in cls.__dict__.keys()
if i[:1] != '_' and not hasattr(getattr(cls, i), '__call__')]:
setattr(cls, attr, None)
```
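A quick usage sketch of the class variant (again with a hypothetical placeholder value):
```
Config.doctype = 'html5'
Config.reset()
print(Config.doctype)  # -> None
```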
| 8,673
|
50,878,885
|
I am new to the machine learning area. I am trying to run a Python program in the browser by converting a trained model to TensorFlow.js.
[this attention\_ocr](https://github.com/tensorflow/models/tree/master/research/attention_ocr/python) is an OCR model written in Python. I have generated an HDF5/H5 file and converted it to the web-specific format with `tensorflowjs_converter`[[ref]](https://js.tensorflow.org/tutorials/import-keras.html).
I followed all the instructions given in [this](https://js.tensorflow.org/tutorials/import-keras.html) document, but at the time of running it in the browser it throws an error *(refer to screenshot)*
[](https://i.stack.imgur.com/N4pzc.png)
>
> I am looking for a solution to remove this error...!
>
>
>
**Reference:**
[tensorflow.org](https://js.tensorflow.org/tutorials/)
[How to import a TensorFlow SavedModel into TensorFlow.js](https://github.com/tensorflow/tfjs-converter)
[Importing a Keras model into TensorFlow.js](https://js.tensorflow.org/tutorials/import-keras.html)
|
2018/06/15
|
[
"https://Stackoverflow.com/questions/50878885",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8367883/"
] |
Lambda (native code) layers are not supported in TensorFlow.js. You will need to replace it with a custom layer. This is tricky. Here is an example custom layer: <https://github.com/tensorflow/tfjs-examples/tree/master/custom-layer>
|
You have to create a custom layer class as @BlessedKey stated above (<https://github.com/tensorflow/tfjs-examples/tree/master/custom-layer>).
You also have to edit the model.json file. The model definition must be updated to point to the new class you created for your custom layer instead of the Lambda class. Find the lambda layer inside your model.json and change the "class\_name": "Lambda" attribute to "class\_name": "YourCustomLayer". You can find an example here: <https://github.com/tensorflow/tfjs/issues/283>.
Also make sure to return the correct tensor shape from `computeOutputShape(inputShape)` or you'll get bothered by this pesky guy:
```
TypeError: Cannot read property 'dtype' of undefined
at executor.js:29
at t.e.add (executor.js:96)
at SA (executor.js:341)
at training.js:1063
at engine.js:425
at t.e.scopedRun (engine.js:436)
at t.e.tidy (engine.js:423)
at Nb (globals.js:182)
at s (training.js:1046)
at training.js:1045
```
| 8,675
|
66,988,750
|
**Summary**:
I'm trying to use multiprocess and multiprocessing to parallelise work with the following attributes:
* Shared datastructure
* Multiple arguments passed to a function
* Setting number of processes based on current system
**Errors**:
My approach works for a small amount of work but fails with the following on larger tasks:
`OSError: [Errno 24] Too many open files`
**Solutions tried**
Running on a macOS Catalina system, `ulimit -n` gives `1024` within Pycharm.
Is there a way to avoid having to change `ulimit`? I want to avoid this as the code will ideally work out of the box on various systems.
I've seen related questions like [this thread](https://stackoverflow.com/questions/65298058/python-multiprocessing-pool-oserror-too-many-files-open) that recommend using `.join` and `gc.collect()` in the comments; other threads recommend closing any opened files, but I do not access files in my code.
```
import gc
import time
import numpy as np
from math import pi
from multiprocess import Process, Manager
from multiprocessing import Semaphore, cpu_count
def do_work(element, shared_array, sema):
shared_array.append(pi*element)
gc.collect()
sema.release()
# example_ar = np.arange(1, 1000) # works
example_ar = np.arange(1, 10000) # fails
# Parallel code
start = time.time()
# Instantiate a manager object and a share a datastructure
manager = Manager()
shared_ar = manager.list()
# Create semaphores linked to physical cores on a system (1/2 of reported cpu_count)
sema = Semaphore(cpu_count()//2)
job = []
# Loop over every element and start a job
for e in example_ar:
sema.acquire()
p = Process(target=do_work, args=(e, shared_ar, sema))
job.append(p)
p.start()
_ = [p.join() for p in job]
end_par = time.time()
# Serial code equivalent
single_ar = []
for e in example_ar:
single_ar.append(pi*e)
end_single = time.time()
print(f'Parallel work took {end_par-start} seconds, result={sum(list(shared_ar))}')
print(f'Serial work took {end_single-end_par} seconds, result={sum(single_ar)}')
```
|
2021/04/07
|
[
"https://Stackoverflow.com/questions/66988750",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7482692/"
] |
The way to avoid changing the ulimit is to make sure that the number of processes you keep open does not increase beyond 1024. That's why 1000 works and 10000 fails.
Here's an example of managing the processes with a Pool which will ensure you don't go above the ceiling of your ulimit value:
```
from multiprocessing import Pool
def f(x):
return x*x
if __name__ == '__main__':
with Pool(5) as p:
print(p.map(f, [1, 2, 3]))
```
<https://docs.python.org/3/library/multiprocessing.html#introduction>
>
> other threads recommend closing any opened files but I do not access
> files in my code
>
>
>
You don't have files open, but your processes are opening file descriptors which is what the OS is seeing here.
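Applied to the workload in the question, a minimal sketch might look like this; returning results from the workers via `Pool.map` (instead of appending to a `Manager` list) is my assumption about what the code needs:
```
from math import pi
from multiprocessing import Pool, cpu_count

def do_work(element):
    return pi * element

if __name__ == '__main__':
    # the pool size caps how many worker processes (and their file
    # descriptors) are alive at once, so no ulimit change is needed
    with Pool(max(1, cpu_count() // 2)) as p:
        results = p.map(do_work, range(1, 10000))
    print(f'result={sum(results)}')
```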
|
Check the limit on the number of file descriptors:
```
ulimit -n
```
For me it was `1024`; I updated it to `4096` and it worked:
```
ulimit -n 4096
```
| 8,676
|
5,014,261
|
I'm using win32com.client to write data to an excel file.
This takes too much time (the code below simulates the amount of data I want to update Excel with, and it takes ~2 seconds).
Is there a way to update multiple cells (with different values) in one call rather than filling them one by one? or maybe using a different method which is more efficient?
I'm using python 2.7 and office 2010.
Here is the code:
```
from win32com.client import Dispatch
xlsApp = Dispatch('Excel.Application')
xlsApp.Workbooks.Add()
xlsApp.Visible = True
workSheet = xlsApp.Worksheets(1)
for i in range(300):
for j in range(20):
workSheet.Cells(i+1,j+1).Value = (i+10000)*j
```
|
2011/02/16
|
[
"https://Stackoverflow.com/questions/5014261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/619303/"
] |
A few suggestions:
**ScreenUpdating off, manual calculation**
Try the following:
```
xlsApp.ScreenUpdating = False
xlsApp.Calculation = -4135 # manual
try:
#
worksheet = ...
for i in range(...):
#
finally:
xlsApp.ScreenUpdating = True
xlsApp.Calculation = -4105 # automatic
```
**Assign several cells at once**
Using VBA, you can set a range's value to an array. Setting several values at once might be faster:
```
' VBA code
ActiveSheet.Range("A1:D1").Value = Array(1, 2, 3, 4)
```
I have never tried this using Python, I suggest you try something like:
```
worksheet.Range("A1:D1").Value = [1, 2, 3, 4]
```
**A different approach**
Consider using [openpyxl](https://bitbucket.org/ericgazoni/openpyxl/wiki/Home) or [xlwt](http://pypi.python.org/pypi/xlwt). Openpyxl lets you create `.xlsx` files without having Excel installed. Xlwt does the same thing for `.xls` files.
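For completeness, a minimal openpyxl sketch of the same 300x20 write (no running Excel instance is required; the output filename is arbitrary):
```
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for i in range(300):
    for j in range(20):
        # cell() is 1-based, mirroring the Cells(i+1, j+1) calls above
        ws.cell(row=i + 1, column=j + 1, value=(i + 10000) * j)
wb.save('output.xlsx')
```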
|
Using the range suggestion from the other answer, I wrote this:
```
def writeLineToExcel(wsh,line):
wsh.Range( "A1:"+chr(len(line)+96).upper()+"1").Value=line
xlApp = Dispatch("Excel.Application")
xlApp.Visible = 1
xlDoc = xlApp.Workbooks.Open("test.xlsx")
wsh = xlDoc.Sheets("Sheet1")
writeLineToExcel(wsh,[1, 2, 3, 4])
```
you may also write multiple lines at once:
```
def writeLinesToExcel(wsh,lines): # assume that all lines have the same length
    # the column letter comes from the line length, the row number from the line count
    wsh.Range( "A1:"+chr(len(lines[0])+96).upper()+str(len(lines))).Value=lines
writeLinesToExcel(wsh,[ [1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10,11,12],
[13,14,15,16],
])
```
| 8,677
|
67,887,059
|
I am trying to create Python code that will read the subject of the most recent email in every folder of my Outlook account.
I am able to open my email account and loop through every folder in it; however, I cannot open my most recent email.
I looked at similar questions and tried using the `GetLast()` method. To use this method I understand that I have to sort my emails, but I don't think that I am using the `Sort` method correctly, because it is causing new errors.
```
Traceback (most recent call last):
File "c:/Users/myname/OneDrive/Documents/Computer Science/Email reader/project.py", line 24, in <module>
checksubj = messages.Subject
AttributeError: 'NoneType' object has no attribute 'Subject'
```
Here is my code:
```
import win32com.client
import win32com
import os
import sys
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders.Item(3)
print(folder.Name)
for subfolder in folder.Folders:
print(subfolder.Name)
messages = subfolder.Items
messages.sort
messages = messages.GetLast()
checksubj = messages.Subject
checksend = messages.Sender
if "Success" in checksubj:
print(str(checksend)+" Success")
elif "Failure" in checksubj:
print(str(checksend)+" Failure")
```
|
2021/06/08
|
[
"https://Stackoverflow.com/questions/67887059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16163886/"
] |
`Workbooks.Open` does have a `Notify` parameter to indicate that it should wait until the file can be opened for read/write. It presents an interface to the user, so if your application is unattended, that probably isn't a good solution. In that case, I would set up a timer to retry after a short time period. The `Thread.Sleep` solution you mentioned is probably implementing something like that.
---
The documentation I could find about `Workbooks.Open` doesn't indicate what happens on error, e.g. if exceptions are generated or `null` is returned, so to be most compatible with your existing code, I went with generating exceptions and chose `FileLoadException` as a likely one to be thrown if the file can't be opened for write operations. You'll need to test and see what happens.
It might also be that `Workbooks.Open` returns `null` on failure, in which case you might need to check an error code to see if it's because the file is in read-only mode vs. file doesn't exist. You don't want to retry if the file doesn't exist.
```
using System;
using System.Threading;
private void writeExcel()
{
Excel.Application excelApp = null;
Excel.Workbook excelBook = null;
bool retry = false;
try
{
excelApp = new Excel.Application();
excelBook = excelApp.Workbooks.Open(@"C:\Users\ed0510\Desktop\SomeExcel.xlsx", ReadOnly: false, Password: "123");
Excel.Worksheet workSheet = excelBook.Worksheets["SomeWorkSheet"];
workSheet.Cells[1, 1].Value = "Something";
}
catch (FileLoadException ex)
{
// TODO: output exception details to error log
Log(ex);
// Indicate to retry
retry = true;
}
finally
{
if (excelApp != null)
{
if (excelBook != null)
{
excelBook.Close(true);
}
excelApp.Quit();
}
}
// Failed and need to retry?
if (retry)
{
// Wait a minute
Thread.Sleep(60 * 1000);
// Retry
writeExcel();
}
}
```
|
For anyone struggling with this problem in the future: I later came across scenarios where the accepted answer didn't quite work how I wanted it to. Here is what I came up with:
```
using System;
using System.Threading;
private void writeExcel()
{
string path = @"C:\Users\ed0510\Desktop\SomeExcel.xlsx";
if (IsExcelOpen(path))
{
throw new ExcelFileIsOpen();
}
Excel.Application excelApp = null;
Excel.Workbook excelBook = null;
bool retry = false;
try
{
excelApp = new Excel.Application();
excelBook = excelApp.Workbooks.Open(@"C:\Users\ed0510\Desktop\SomeExcel.xlsx", ReadOnly: false, Password: "123");
Excel.Worksheet workSheet = excelBook.Worksheets["SomeWorkSheet"];
workSheet.Cells[1, 1].Value = "Something";
}
catch
{
++numOfRetries;
if (numOfRetries <= maxNumOfRetries)
{
retry = true;
}
else
{
numOfRetries = 0;
saveFailed = true;
}
}
finally
{
if (excelApp != null)
{
if (excelBook != null)
{
excelBook.Close(true);
}
excelApp.Quit();
}
if (retry)
{
Thread.Sleep(5 * 1000);
retry = false;
SavingIntoExcelHDT();
}
    }
}
```
As you can see, first I check if the Excel file is open, as the `FileLoadException` didn't always work how I wanted it to. To check whether the Excel file is open, I use this method:
```
private bool IsExcelOpen(string path)
{
FileStream fs = null;
try
{
fs = new FileStream(path, FileMode.Open, FileAccess.Write, FileShare.None);
return false;
}
catch (IOException)
{
return true;
}
finally
{
if (fs != null)
{
fs.Close();
}
}
}
```
I'm not sure this is the best way to do this, but it works perfectly for me.
| 8,679
|
196,930
|
How, in the simplest possible way, distinguish between Windows XP and Windows Vista, using Python and [pywin32](http://python.net/crew/mhammond/win32/Downloads.html) or [wxPython](http://www.wxpython.org/)?
Essentially, I need a function that, when called, will return True iff the current OS is Vista:
```
>>> isWindowsVista()
True
```
|
2008/10/13
|
[
"https://Stackoverflow.com/questions/196930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18745/"
] |
Python has the lovely 'platform' module to help you out.
```
>>> import platform
>>> platform.win32_ver()
('XP', '5.1.2600', 'SP2', 'Multiprocessor Free')
>>> platform.system()
'Windows'
>>> platform.version()
'5.1.2600'
>>> platform.release()
'XP'
```
NOTE: As mentioned in the comments, proper values may not be returned when using older versions of Python.
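Built on that, a minimal sketch of the requested function (assuming a Python version recent enough to report `'Vista'` as the release, per the note above):
```
import platform

def isWindowsVista():
    '''Return True iff the current OS is Windows Vista.'''
    return platform.system() == 'Windows' and platform.release() == 'Vista'
```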
|
The simplest solution I found is this one:
```
import sys
def isWindowsVista():
'''Return True iff current OS is Windows Vista.'''
if sys.platform != "win32":
return False
import win32api
VER_NT_WORKSTATION = 1
version = win32api.GetVersionEx(1)
if not version or len(version) < 9:
return False
return ((version[0] == 6) and
(version[1] == 0) and
(version[8] == VER_NT_WORKSTATION))
```
| 8,680
|
63,621,597
|
I have a pandas dataframe where I need to conditionally update the value based on the first two letters. The pattern is simple and the code below works, but it doesn't feel pythonic. I need to extend this to other letters (at least 11-19/A-J) and, while I could just add additional rows, I'd really like to do this the right way. Existing code below
```
df['REFERENCE_ID'] = df['PRECERT_ID'].astype(str)
df.loc[df['REFERENCE_ID'].str.startswith('11'), 'REFERENCE_ID'] = 'A' + df['PRECERT_ID'].str[-7:]
df.loc[df['REFERENCE_ID'].str.startswith('12'), 'REFERENCE_ID'] = 'B' + df['PRECERT_ID'].str[-7:]
df.loc[df['REFERENCE_ID'].str.startswith('13'), 'REFERENCE_ID'] = 'C' + df['PRECERT_ID'].str[-7:]
df.loc[df['REFERENCE_ID'].str.startswith('14'), 'REFERENCE_ID'] = 'D' + df['PRECERT_ID'].str[-7:]
df.loc[df['REFERENCE_ID'].str.startswith('15'), 'REFERENCE_ID'] = 'E' + df['PRECERT_ID'].str[-7:]
```
I thought I might be able to use a list of letters, like
```
letters = list(string.ascii_uppercase)
```
but I'm new to dataframes (and python in general) and can't figure out the syntax to get the dataframe equivalent of
```
letters = list(string.ascii_uppercase)
text = '1523456789'
first = int(text[:2])
text = letters[first-11] + text[-7:]
```
I wasn't able to find something addressing this, but would be grateful for any help or a link to a similar question if it exists. Thank you.
|
2020/08/27
|
[
"https://Stackoverflow.com/questions/63621597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12140123/"
] |
As already noticed, the sponsored links are simply not at their position until some mouse event occurs. Once the mouse event occurs, the elements are added to the DOM; supposedly this is how Facebook avoids people crawling it too easily.
So, if you have a quest to find the sponsored links, then you will need to do the following (see the sketch after the note below):
* find out what is the exact event which results in the links being added
* conduct experiments until you find out how you can programmatically generate that event
* implement a crawling algorithm that does some scrolling on the wall for a long while and then induces the given event. At that point you might get many sponsored links
Note: sponsored links are paid by companies and they would not be very happy if their ad slots are being used up by uninterested bots.
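If you do pursue that quest, here is a minimal sketch of the scroll-and-hover loop using Selenium. The selectors and timings are hypothetical, and it assumes hovering over a post is the event that triggers the insertion, which is exactly what the experiments above would need to confirm:
```
import time
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Chrome()
driver.get("https://www.facebook.com")  # log in first as needed

# scroll the wall for a while so plenty of posts are loaded
for _ in range(20):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)

# induce the mouse event over each post; hovering is assumed to be
# what makes the sponsored-link nodes get added to the DOM
for post in driver.find_elements_by_css_selector("[role='article']"):
    try:
        ActionChains(driver).move_to_element(post).perform()
    except Exception:
        pass  # posts scrolled out of view can't be hovered; skip them

links = driver.find_elements_by_css_selector("a[href*='/ads/about']")
print([link.get_attribute("href") for link in links])
```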
|
The approach I took to solve this issue is as follows:
```
// using an IIFE ("Immediately-Invoked Function Expression"):
(function() {
'use strict';
// using Arrow function syntax to define the callback function
// supplied to the (later-created) mutation observer, with
// two arguments (supplied automatically by that mutation
// observer), the first 'mutationList' is an Array of
// MutationRecord Objects that list the changes that were
// observed, and the second is the observer that observed
// the change:
const nodeRemoval = (mutationList, observer) => {
// here we use Array.prototype.forEach() to iterate over the
// Array of MutationRecord Objects, using an Arrow function
// in which we refer to the current MutationRecord of the
// Array over which we're iterating as 'mutation':
mutationList.forEach( (mutation) => {
// if the mutation.addedNodes property exists and
// also has a non-falsy length (zero is falsey, numbers
// above zero are truthy and negative numbers - while truthy -
// seem invalid in the length property):
if (mutation.addedNodes && mutation.addedNodes.length) {
// here we retrieve a list of nodes that have the
// "aria-label" attribute-value equal to 'Advertiser link':
mutation.target.querySelectorAll('[aria-label="Advertiser link"]')
// we use NodeList.prototype.forEach() to iterate over
// the returned list of nodes (if any) and use (another)
// Arrow function:
.forEach(
// here we pass a reference to the current Node of the
// NodeList we're iterating over, and use
// ChildNode.remove() to remove each of the nodes:
(adLink) => adLink.remove() );
}
});
},
// here we retrieve the <body> element (since I can't find
// any element with a predictable class or ID that will
// consistently exist as an ancestor of the ad links):
targetNode = document.querySelector('body'),
// we define the types of changes we're looking for:
options = {
// we're looking for changes amongst the
// element's descendants:
childList: true,
// we're not looking for attribute-changes:
attributes: false,
// we're observing the element's entire subtree (if this is
// false, or absent, we look only at changes/mutations on the
// target element itself):
subtree: true
},
// here we create a new MutationObserver, and supply
// the name of the callback function:
observer = new MutationObserver(nodeRemoval);
// here we specify what the created MutationObserver
// should observe, supplying the targetNode (<body>)
// and the defined options:
observer.observe(targetNode, options);
})();
```
I realise that in your question you're looking for elements that match a different attribute and attribute-value (`document.querySelector('a[href*="/ads/about"]')`) but as that attribute-value wouldn't match my own situation I couldn't use it in my code, but it should be as simple as replacing:
```
mutation.target.querySelectorAll('[aria-label="Advertiser link"]')
```
With:
```
mutation.target.querySelector('a[href*="/ads/about"]')
```
Though it's worth noting that `querySelector()` will return only the first node that matches the selector, or `null`; so you may need to incorporate some checks into your code.
While there may look to be quite a bit of code, above, uncommented this becomes merely:
```
(function() {
'use strict';
const nodeRemoval = (mutationList, observer) => {
mutationList.forEach( (mutation) => {
if (mutation.addedNodes && mutation.addedNodes.length) {
mutation.target.querySelectorAll('[aria-label="Advertiser link"]').forEach( (adLink) => adLink.remove() );
}
});
},
targetNode = document.querySelector('body'),
options = {
childList: true,
attributes: false,
subtree: true
},
observer = new MutationObserver(nodeRemoval);
observer.observe(targetNode, options);
})();
```
References:
* [`Array.prototype.forEach()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach).
* [Arrow Functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions).
* [`childNode.remove()`](https://developer.mozilla.org/en-US/docs/Web/API/ChildNode/remove).
* [`MutationObserver()` Interface](https://developer.mozilla.org/en-US/docs/Web/API/MutationObserver).
* [`NodeList.prototype.forEach()`](https://developer.mozilla.org/en-US/docs/Web/API/NodeList/forEach).
| 8,688
|
4,936,594
|
I'm writing a chat program in Python that needs to connect to a server before user input from sys.stdin is accepted. If a connection cannot be made then the program exits.
Running this from the shell, if a connection fails and input was sent while attempting to connect, the input is echoed to the shell after the program exits:
```
jtink@gab-dec:~$ python chat.py
attempting to connect...
Hey there! # This is user input
How are you? # This is more user input
connection failed
jtink@gab-dec:~$ Hey there!
Hey there!: command not found
jtink@gab-dev:~$ How are you?
How are you?: command not found
jtink@gab-dev:~$
```
Is there a way to tell if there is input left on sys.stdin, so that I can read it before the chat program exits?
I have read [this similar question](https://stackoverflow.com/questions/699390/whats-the-best-way-to-tell-if-a-python-program-has-anything-to-read-from-stdin), but the answers only describe how to tell if input is being piped to a python program, not how to tell if there is input.
|
2011/02/08
|
[
"https://Stackoverflow.com/questions/4936594",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/396241/"
] |
Well,
You should do something like this (pseudo-code; a concrete sketch follows the steps):
1 - start a transaction
2 - post master record
3 - get the id inserted on master
4 - pass the master id to detail dataset
5 - post detail record
6 - If it worked, commit transaction. Otherwise, rollback transaction.
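A concrete illustration of those steps, sketched with Python's sqlite3 (the original context is presumably a different stack, and the table and column names here are hypothetical):
```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE detail (id INTEGER PRIMARY KEY,"
            " master_id INTEGER, qty INTEGER)")

with con:  # step 1: the block commits on success, rolls back on error (step 6)
    cur = con.execute("INSERT INTO master (name) VALUES (?)",
                      ("order-1",))         # step 2: post master record
    master_id = cur.lastrowid               # step 3: id inserted on master
    con.execute("INSERT INTO detail (master_id, qty) VALUES (?, ?)",
                (master_id, 5))             # steps 4-5: post detail record
```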
|
Just a side note: the CTP of the new SQL Server codename 'Denali' will bring the feature of SEQUENCES, working much like Firebird generators work. So this task will become MUCH easier:
When you get the command from gui to start an insert, get an ID from sequence
Use it to fill the PK field of master record
Post master record
While you have detail records to insert
Fill detail(s) record
Post detail record
Commit transaction
Very niiiice...
| 8,690
|
62,673,374
|
I'm pretty new to Python but I know how to use most of the things in it, including random.choice. I want to choose a random file name from a 2-file [list](https://i.stack.imgur.com/zeMzN.jpg).
To do so, I'm using this code:
```
minio = Minio('myip',
access_key='mykey',
secret_key='mykey',
)
images = minio.list_objects('mybucket', recursive=True)
for img2 in images:
names = img2.object_name
print(random.choice([names]))
```
Every time I try to run it, it always prints the same file's name *(c81d9307-7666-447d-bcfb-2c13a40de5ca.png)*.
I tried to put the "print" call inside the "for" block, but then it prints out both of the files' names.
|
2020/07/01
|
[
"https://Stackoverflow.com/questions/62673374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13834756/"
] |
You are setting the variable `names` to one specific instance of images right now. That means it is only a single value. Try adding them to a list instead.
For example:
```
names = [img2.object_name for img2 in images]
print(random.choice(names))
```
|
```
minio = Minio('myip',
access_key='mykey',
secret_key='mykey',
)
images = minio.list_objects('mybucket', recursive=True)
names = []
for img2 in images:
names.append(img2.object_name)
print(random.choice(names))
```
Try this; the problem may be that your `names` was not a list variable.
| 8,691
|
15,643,094
|
I'm a programming novice and only rarely use python so please bear with me as I try to explain what I am trying to do :)
I have the following XML:
```
<?xml version = "1.0" encoding = "utf-8"?>
<Patients>
<Patient>
<PatientCharacteristics>
<patientCode>3</patientCode>
</PatientCharacteristics>
<Visits>
<Visit>
<DAS>
<CRP>14</CRP>
<ESR/>
<Joints>
<DAS_PROFILE>28/28</DAS_PROFILE>
<SWOL28>20</SWOL28>
<TEN28>20</TEN28>
</Joints>
</DAS>
<VisitDate>2010-02-17</VisitDate>
</Visit>
<Visit>
<DAS>
<CRP>10</CRP>
<ESR/>
<Joints>
<DAS_PROFILE>28/28</DAS_PROFILE>
<SWOL28>15</SWOL28>
<TEN28>20</TEN28>
</Joints>
</DAS>
<VisitDate>2010-02-10</VisitDate>
</Visit>
</Visits>
</Patient>
<Patient>
<PatientCharacteristics>
<patientCode>3</patientCode>
</PatientCharacteristics>
<Visits>
<Visit>
<DAS>
<CRP>14</CRP>
<ESR/>
<Joints>
<DAS_PROFILE>28/28</DAS_PROFILE>
<SWOL28>34</SWOL28>
<TEN28>0</TEN28>
</Joints>
</DAS>
<VisitDate>2010-08-17</VisitDate>
</Visit>
<Visit>
<DAS>
<CRP>10</CRP>
<ESR/>
<Joints>
<DAS_PROFILE>28/28</DAS_PROFILE>
<SWOL28></SWOL28>
<TEN28>2</TEN28>
</Joints>
</DAS>
<VisitDate>2010-07-10</VisitDate>
</Visit>
<Visit>
<DAS>
<CRP>9</CRP>
<ESR/>
<Joints>
<DAS_PROFILE>28/28</DAS_PROFILE>
<SWOL28>56</SWOL28>
<TEN28>6</TEN28>
</Joints>
</DAS>
<VisitDate>2009-07-10</VisitDate>
</Visit>
</Visits>
</Patient>
</Patients>
```
All I want to do here is update certain 'SWOL28' values if they match the patientCode and VisitDate that I have stored in a text file. As I understand it, ElementTree does not include a parent reference; if it did, I could just use findall() from the root and work backwards from there. As it stands, here is my pseudocode:
1. For each line in the text file:
2. Put Visit\_Date Patient\_Code New\_SWOL28 into variables
3. For each patient element:
4. If patientCode = Patient\_Code
5. For each Visit element:
6. If VisitDate = Visit\_Date
7. If SWOL28 element exists for this visit
8. Update SWOL28 to New\_SWOL28
But I am stuck at step number 5. How do I get a list of visits to iterate through? Apologies if this is a very dumb question but I have searched high and low for an answer, I assure you! I have stripped my code down to a bare example of the part I need to fix below:
```
import xml.etree.ElementTree as ET
tree = ET.parse('DB3.xml')
root = tree.getroot()
for child in root: # THIS GETS ME ALL THE PATIENT ATTRIBUTES
print child.tag
for x in child/Visit: # THIS IS WHAT I CANNOT FIND THE CORRECT SYNTAX FOR
# I WOULD THEN PERFORM STEPS 6, 7 AND 8 HERE
```
I would be deeply appreciative of any ideas any of you may have on this. I am not a programming natural that's for sure!
Thanks in advance,
Sarah
**Edit 1:**
On the advice of SVK below I tried the following:
```
import xml.etree.ElementTree as ET
tree = ET.parse('Untitled.xml')
root = tree.getroot()
for child in root:
print child.tag
child.find( "visits" )
for x in child.iter("visit"):
print x.tag, x.text
```
But the only output I get is:
Patient
Patient
and none of the lower tags. Any ideas?
|
2013/03/26
|
[
"https://Stackoverflow.com/questions/15643094",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704003/"
] |
You can iterate over all the "Visit" tags (note that XML tag names are case-sensitive, so match the capitalisation in your document) directly under an element "element" like this:
```
for x in element.iter("Visit"):
```
You can find the first direct child of element matching a certain tag with:
```
element.find( "Visits" )
```
It looks like you will first have to locate the "Visits" element, which is the parent of "Visit", and then iterate through its "Visit" children. Putting those together you'd have something like this:
```
for patient_element in root:
    print patient_element.tag
    visits_element = patient_element.find( "Visits" )
    for visit_element in visits_element.iter("Visit"):
        print visit_element.tag, visit_element.text
        # ... further processing of each visit element here
```
In general look at the section "Finding interesting elements" in the documentation for xml.etree.ElementTree: <http://docs.python.org/2/library/xml.etree.elementtree.html#finding-interesting-elements>
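Putting the whole pseudocode together, a sketch of the full update; the variables `patient_code`, `visit_date` and `new_swol28` stand for the values read from the text file in each iteration of step 1:
```
for patient_element in root:
    code = patient_element.findtext("PatientCharacteristics/patientCode")
    if code != patient_code:                                    # step 4
        continue
    visits_element = patient_element.find("Visits")
    for visit_element in visits_element.iter("Visit"):          # step 5
        if visit_element.findtext("VisitDate") == visit_date:   # step 6
            swol = visit_element.find("DAS/Joints/SWOL28")
            if swol is not None:                                # step 7
                swol.text = new_swol28                          # step 8
tree.write('DB3.xml')
```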
|
You could use a CSS selector to get the nodes you want from the Patient element:
```
from lxml.cssselect import CSSSelector
visitSelector = CSSSelector('Visit')
visits = visitSelector(child)
```
You can do the same to get the patientCode tag and the SWOL28 tag.
Then you can access and modify the text of the elements using `element.text`.
| 8,694
|
36,669,131
|
I matched 2 columns in a DataFrame and put the resulting Boolean values in a new 'bool' column.
First I wrote:
```
df_new = df[[7 for 32 in df if df 39 == 'False']]
```
But it didn't work.
Then I wrote the following just to match the columns.
My code is:
```
df['bool'] = (df.iloc[:, 7] == df.iloc[:, 32])
```
The above code matched the columns positioned at 7 and 32 and placed the `bool` values in column 39. From this I want to slice the data and choose the rows with a False value.
[Data](http://i.stack.imgur.com/hB6gX.png)
I wrote:
```
df_filtered = df[df['bool'] == 'False']
```
But I got this FutureWarning:
>
> [c:\python34\lib\site-packages\pandas\core\ops.py:714: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
> result = getattr(x, name)(y)](http://i.stack.imgur.com/3YnOh.png)
>
>
>
Not sure what wrong I am doing.
I've also tried
```
df[df[39] == 'False']
```
|
2016/04/16
|
[
"https://Stackoverflow.com/questions/36669131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4673147/"
] |
You can compare the columns directly:
```
df = pd.DataFrame({'a': [1, 2, 3], 'b': [1, 3, 2]})
df['Bool'] = df.a == df.b
>>> df
a b Bool
0 1 1 True
1 2 3 False
2 3 2 False
```
To filter for False values, use the negation flag, i.e. `~`:
```
>>> df[~df.Bool]
a b Bool
1 2 3 False
2 3 2 False
```
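Note that the question's attempt compared the column against the *string* `'False'`; the column holds actual booleans, which is what triggers the FutureWarning. A one-line sketch of the fix using the question's own column name:
```
# negate the booleans directly instead of comparing to the string 'False'
df_filtered = df[~df['bool']]
```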
|
I don't know your exact situation, but I get the "FutureWarning: in the future, boolean array-likes will be handled as a boolean array index" when I use code like this:
```
>>> import numpy as np
>>> test=np.array([1,2,3])
>>> test3=test[[False,True,False]]
__main__:1: FutureWarning: in the future, boolean array-likes will be handled as a boolean array index
>>> test3=test[np.array([False,True,False])]
>>> test3
array([2])
>>>
```
I think there is some relation between the two FutureWarnings; I hope my situation can help you.
| 8,699
|
64,637,084
|
Could anyone please help me with why I am getting the below error? Everything worked before when I used the same logic, until I converted the data type of my date columns to the appropriate format.
Below is the line of code I am trying to run
```
data['OPEN_DT'] = data['OPEN_DT'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d') if len(x[:x.find ('-')]) == 4 else datetime.strptime(x, '%d-%m-%Y'))
```
Error being received :
```
AttributeError Traceback (most recent call last)
<ipython-input-93-f0a22bfffeee> in <module>
----> 1 data['OPEN_DT'] = data['OPEN_DT'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d') if len(x[:x.find ('-')]) == 4 else datetime.strptime(x, '%d-%m-%Y'))
~\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
3846 else:
3847 values = self.astype(object).values
-> 3848 mapped = lib.map_infer(values, f, convert=convert_dtype)
3849
3850 if len(mapped) and isinstance(mapped[0], Series):
pandas\_libs\lib.pyx in pandas._libs.lib.map_infer()
<ipython-input-93-f0a22bfffeee> in <lambda>(x)
----> 1 data['OPEN_DT'] = data['OPEN_DT'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d') if len(x[:x.find ('-')]) == 4 else datetime.strptime(x, '%d-%m-%Y'))
AttributeError: 'Timestamp' object has no attribute 'find'
```
ValueError: time data '30/09/2020' does not match format '%d-%m-%Y'
Many thanks.
|
2020/11/01
|
[
"https://Stackoverflow.com/questions/64637084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14559396/"
] |
I assumed that you don't want to repeat the add button, so I removed the button from the template; that way you only add the input fields.
Here you can find a working example: <https://stackblitz.com/edit/js-gp6xjx?file=index.html>
Here you can find a working example: <https://stackblitz.com/edit/js-gp6xjx?file=index.html>
```js
const data = [];
const appendContent = () => {
let form_content = document.querySelector(".form-content");
const template = document.createElement('template');
const html = `<div class="row">
<div class="form-group">
<input type="text" placeholder="Enter Name" class="form-control" name="name">
</div>
<div class="form-group">
<input type="text" placeholder="Enter Description" class="form-control" name="description">
</div>
</div>
`
template.innerHTML = html.trim();
form_content.appendChild(template.content.firstChild);
}
const fake_button = document.querySelector('.btn');
fake_button.addEventListener('click', e => {
record(e);
appendContent();
});
const record = (e) => {
const form = e.target.parentElement.parentElement;
const inputs = form.querySelectorAll('input');
const obj = {};
Array.from(inputs).forEach(input => {
obj[input.name]= input.value;
})
data.push(obj);
console.log(data);
}
appendContent();
```
```css
.btn {
display: block;
width: 70px;
cursor: pointer;
border: 1px solid #333;
padding: 5px 10px;
text-align: center;
}
```
```html
<form action="" method="post" id="create-db-form">
<div class="form-content"></div>
<div class="form-group btn-with-margin">
<span class="btn btn-primary">Add</span>
</div>
<button type="submit">Save</button>
</form>
```
|
Alright, so I was fiddling with it a little bit; I'm not very experienced with pure JavaScript. I came up with a few ideas:
1 - Separate submit and add field buttons.
When you press add field, it just adds new fields inside your form which will later be submitted as part of a complete form.
2 - Indexed forms
The idea here is to create a form of forms, with each pair of inputs consisting of a form itself. Later, when you want to submit, you just query all forms and each result will be under a 'name = index' prop.
I found this snippet to retrieve form values:
```
function getFormValues() {
var params = [];
for( var i=0; i<document.myform.elements.length; i++ )
{
var fieldName = document.myform.elements[i].name;
var fieldValue = document.myform.elements[i].value;
params.push({[fieldName]:fieldValue});
}
console.log(params);
}
```
I was testing it in this fiddle: <https://jsfiddle.net/968cL1uz/28/>
I only posted an answer to be able to write more about your problem, I'm sorry if this still doesn't solve it. Let's keep working on it :)
| 8,700
|
44,872,673
|
Let's say I have this code in `test.py`:
```
import sys
a = 'alfa'
b = 'beta'
c = 'gamma'
d = 'delta'
print(sys.argv[1])
```
Running `python test.py a` would then return `a`. How can I make it return `alfa` instead?
|
2017/07/02
|
[
"https://Stackoverflow.com/questions/44872673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4931616/"
] |
Using a dictionary that maps to those strings:
```
mapping = {'a': 'alfa', 'd': 'delta', 'b': 'beta', 'c': 'gamma'}
```
Then when you get your `sys.argv[1]` just access the value from your dictionary as:
```
print(mapping.get(sys.argv[1]))
```
Demo:
File: `so_question.py`
```
import sys
mapping = {'a': 'alfa', 'd': 'delta', 'b': 'beta', 'c': 'gamma'}
user_var = sys.argv[1]
user_var_value = mapping.get(user_var)
print("user_var_value is: {}".format(user_var_value))
```
In a shell:
```
▶ python so_question.py a
user_var_value is: alfa
```
|
You can also use `globals()` or `locals()`:
```
import sys
a = 'alfa'
b = 'beta'
c = 'gamma'
d = 'delta'
print(globals().get(sys.argv[1]))
# or
print(locals().get(sys.argv[1]))
```
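One caveat with the `globals()`/`locals()` approach: `.get` quietly returns `None` for names that don't exist. A small guard, as a sketch:
```
import sys

a, b, c, d = 'alfa', 'beta', 'gamma', 'delta'

value = globals().get(sys.argv[1])
if value is None:
    sys.exit("unknown variable: " + sys.argv[1])
print(value)
```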
| 8,701
|
50,809,096
|
A few days ago I started getting the following error when using pip (1, 2 or 3) to install.
```
Traceback (most recent call last): File "/home/c4pta1n/.local/bin/pip", line 7, in <module>
from pip._internal import main File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_internal/__init__.py", line 42, in <module>
from pip._internal import cmdoptions File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_internal/cmdoptions.py", line 16, in <module>
from pip._internal.index import ( File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_internal/index.py", line 15, in <module>
from pip._vendor import html5lib, requests, six File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_vendor/requests/__init__.py", line 86, in <module>
from pip._vendor.urllib3.contrib import pyopenssl File "/home/c4pta1n/.local/lib/python2.7/site-packages/pip/_vendor/urllib3/contrib/pyopenssl.py", line 46, in <module>
import OpenSSL.SSL File "/usr/local/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL File "/usr/local/lib/python2.7/dist-packages/OpenSSL/crypto.py", line 13, in <module>
from cryptography.hazmat.primitives.asymmetric import dsa, rsa File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py", line 12, in <module>
@six.add_metaclass(abc.ABCMeta) AttributeError: 'module' object has no attribute 'add_metaclass'
```
```
pip3 install pip --ignore-installed six
Traceback (most recent call last):
File "/usr/local/bin/pip3", line 11, in <module>
load_entry_point('pip==10.0.1', 'console_scripts', 'pip3')()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 476, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2700, in load_entry_point
return ep.load()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2318, in load
return self.resolve()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2324, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/local/lib/python2.7/dist-packages/pip/_internal/__init__.py", line 42, in <module>
from pip._internal import cmdoptions
File "/usr/local/lib/python2.7/dist-packages/pip/_internal/cmdoptions.py", line 16, in <module>
from pip._internal.index import (
File "/usr/local/lib/python2.7/dist-packages/pip/_internal/index.py", line 15, in <module>
from pip._vendor import html5lib, requests, six
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/__init__.py", line 86, in <module>
from pip._vendor.urllib3.contrib import pyopenssl
File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/urllib3/contrib/pyopenssl.py", line 46, in <module>
import OpenSSL.SSL
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import rand, crypto, SSL
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/crypto.py", line 13, in <module>
from cryptography.hazmat.primitives.asymmetric import dsa, rsa
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py", line 12, in <module>
@six.add_metaclass(abc.ABCMeta)
AttributeError: 'module' object has no attribute 'add_metaclass'
```
I have been researching and trying to troubleshoot this issue and I have not been able to narrow down the issue.
Just prior to noticing this issue I had updated my Debian system using the standard repository with no issues of note. I had also updated a few pip modules using `pip3 install --upgrade`; I believe the modules I updated were scapy and requests.
I am unable to use pip for any command that I have tried, even "pip list", with any version of pip through 3.6.
I have uninstalled and reinstalled pip, virtualenv, and tried to manually remove the six.add\_metaclass-1.0\* folder from my distutils folder.
Nothing I have tried has created any change that I can see and I am not able to narrow down that any issue that I see written about is indeed similar or related to this specific issue.
I am hoping to find help to narrow this problem down further, correct it or be pointed in the direction of any information that could help me.
|
2018/06/12
|
[
"https://Stackoverflow.com/questions/50809096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8213561/"
] |
[six 1.3.0](https://github.com/benjaminp/six/blob/1.3.0/six.py) doesn't have `add_metaclass`. It was released in 2013. It's really time to upgrade it.
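A quick sanity check of which `six` the failing interpreter actually picks up, as a minimal sketch:
```
import six
print(six.__version__)                # a 2013-era version predates the API
print(hasattr(six, 'add_metaclass'))  # False explains the AttributeError
```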
|
I found the answer to my issue. Apparently some Linux distributions have specific versions of pip and six that have to be installed through the distro package manager directly in order to work. There are some nuanced changes in how Debian makes use of pip, especially regarding updates, and they have coded these changes into their package manager, not into pip. When I recompiled Python I had uninstalled the entire Python framework, and I went to the source URLs to recompile Python and to download pip and any other dependencies. I figured that since I was installing directly from the source it would be fine... If you are using CentOS, Debian, Red Hat and maybe others, you must install pip from the package manager that is managed by your distro in order to avoid running into this error somewhere down the line.
| 8,702
|
28,849,386
|
How do I remove the T from the time format `%Y-%m-%dT%H:%M:%S` in Python?
I am using it in my HTML as:
```
<b>Start:{{ start.date_start }}<br/>
```
|
2015/03/04
|
[
"https://Stackoverflow.com/questions/28849386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4631100/"
] |
```
from django import template

register = template.Library()

@register.filter
def isotime(datestring):
    datestring = str(datestring)
    return datestring.replace("T", " ")
```
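Assuming the filter is registered in a template-tag library that the template loads, usage would then be `{{ start.date_start|isotime }}`.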
|
Manually format the datetime, don't rely on the default `str()` formatting. You can use [`datetime.datetime.isoformat()`](https://docs.python.org/2/library/datetime.html#datetime.datetime.isoformat) for example, passing in a space as the separator:
```
<b>Start:{{ start.date_start.isoformat(' ') }}<br/>
```
or you can use [`datetime.datetime.strftime()`](https://docs.python.org/2/library/datetime.html#datetime.datetime.strftime) to control formatting more finely:
```
<b>Start:{{ start.date_start.strftime('%Y-%m-%d %H:%M:%S') }}<br/>
```
| 8,703
|
22,882,427
|
I want to take string input with raw\_input and use this value in another line for taking input in Python. My code is below:
```
p1 = raw_input('Enter the name of Player 1 :')
p2 = raw_input('Enter the name of Player 2 :')
p1 = input('Welcome %s > Enter your no:') % p1
```
Here in place of `%s` I want to put the value of `p1`.
Thanks in advance.
|
2014/04/05
|
[
"https://Stackoverflow.com/questions/22882427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3494397/"
] |
You can do (the vast majority will agree that this is the best way):
```
p1 = input('Welcome {0} > Enter your no:'.format(p1))
```
|
Try
```
input("Welcome " + p1 + "> Enter your no:")
```
It concatenates the value of `p1` to the input string
Also see [here](https://docs.python.org/2/library/string.html)
```
input("Welcome {0}, {1} > Enter your no".format(p1, p2)) #you can have multiple values
```
**EDIT**
Note that using `+` is [discouraged](http://legacy.python.org/dev/peps/pep-0008/#programming-recommendations).
| 8,704
|
26,664,102
|
Here are the commands I am running:
```
$ python setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
$ pip --version
pip 1.5.6 from /usr/local/lib/python3.4/site-packages (python 3.4)
$ python -c "import setuptools; print(setuptools.__version__)"
2.1
$ python --version
Python 3.4.1
$ which python
/usr/local/bin/python
```
Also, I am running a mac with homebrewed python
Here is my setup.py script:
<https://gist.github.com/cloudformdesign/4791c46fe7cd52eb61cd>
I'm going absolutely crazy -- I can't figure out why this wouldn't be working.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26664102",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036670/"
] |
Update your `pip` first:
```
pip install --upgrade pip
```
for Python 3:
```
pip3 install --upgrade pip
```
|
I tried everything said here without any luck, but found a workaround.
After running this command (and failing) : `bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg`
Go to the temporary directory the tool made (given in the output of the last command), then execute `python setup.py bdist_wheel`. The `.whl` file is in the `dist` folder.
| 8,707
|
39,137,179
|
I am working on a rails application now that needs to run a single python script whenever a button is clicked on our apps home page. I am trying to figure out a way to have rails run this script, and both of my attempts so far have failed.
My first try was to use the exec(..) command to just run `python script.py`, but when I do this it runs the file but terminates the Rails server, so I would need to manually reboot it each time.
My second try was to install the gem "RubyPython" and attempt it from there, but I am at a loss as to what to do once I have it running. I cannot find any examples of people using it to run or load a Python script.
Any help for this would be appreciated.
|
2016/08/25
|
[
"https://Stackoverflow.com/questions/39137179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5805587/"
] |
Here are ways to execute a shell command from Ruby:
```
`python pythonscript.py`
```
or
```
system( "python pythonscript.py" )
```
or
```
exec(" python pythonscript.py")
```
`exec` replaces the current process by running the given external command.
It never returns: the current process is replaced and never continues.
|
`exec` replaces the current process with the new one. You want to run it as a subprocess. See [When to use each method of launching a subprocess in Ruby](https://stackoverflow.com/questions/7212573/when-to-use-each-method-of-launching-a-subprocess-in-ruby) for an overview; I suggest using either backticks for a simple process, or `popen`/`popen3` when more control is required.
Alternately, you can use [`rubypython`](https://github.com/halostatue/rubypython), the Ruby-Python bridge gem, to execute Python from Ruby itself (especially if you'll be executing the same Python function repeatedly). To do so, you would need to make your script into a proper Python module, start the bridge when the Rails app starts, then use `RubyPython.import` to get a reference to your module into your Ruby code. The examples on the gem's GitHub page are quite simple, and should be sufficient for your purpose.
| 8,717
|
65,148,247
|
For an unknown reason, I ran into a docker error when I tried to run a `docker-compose up` on my project this morning.
My web container isn't able to connect to the db host, and `nc` keeps returning
>
> web\_1 | nc: bad address 'db'
>
>
>
There is the relevant part of my docker-compose definition :
```yaml
version: '3.2'
services:
web:
build: ./app
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./app/:/usr/src/app/
ports:
- 8000:8000
env_file:
- ./.env.dev
depends_on:
- db
db:
image: postgres:12.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=
- POSTGRES_PASSWORD=
- POSTGRES_DB=
mailhog:
# mailhog declaration
volumes:
postgres_data:
```
I suspected the network was broken, and it actually is. This is what I get when I inspect the Docker network for this project:
(`docker network inspect my_docker_network`)
```json
[
{
"Name": "my_docker_network",
"Id": "f09c148d9f3253d999e276c8b1061314e5d3e1f305f6124666e2e32a8e0d9efd",
"Created": "2020-11-18T13:30:29.710456682-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {}, // <=== This is empty !
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "my-project"
}
}
]
```
### Versions :
Docker : 18.09.1, build 4c52b90
Docker-compose : 1.21.0, build unknown
|
2020/12/04
|
[
"https://Stackoverflow.com/questions/65148247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3950328/"
] |
I was able to fix that by running `docker-compose down && docker-compose up`, but this can be risky if your `down` removes your volumes and, with them, your data...
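As a side note, `docker-compose down` only removes named volumes when given the `-v`/`--volumes` flag, so the plain form should leave your data intact:
```
docker-compose down       # removes containers and networks, keeps named volumes
docker-compose down -v    # additionally removes named volumes (destroys data)
```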
The network inspection now looks right:
```json
[
{
"Name": "my_docker_network",
"Id": "236c45042b03c3a2922d9a9fabf644048901c66b3c1fd15507aca2c464c1d7ef",
"Created": "2020-12-04T12:04:40.765889533-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0939787203f2e222f2db380e8d5b36928e95bc7242c58df56b3e6e419efdd280": {
"Name": "my_docker_db_1",
"EndpointID": "af206a7e957682d3d9aee2ec0ffae2c51638cbe8821d3b20eb786165a0159c9d",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"ae90bd27539e89d0b26e0768aec765431ee623f45856e13797f3ba0262cca3f2": {
"Name": "my_docker_web_1",
"EndpointID": "09b5cefed6c5b49d31497419fd5784dcd887a23875e6c998209615c7ec8863f4",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},
"f2d3e46ab544b146bdc0aafba9fddb4e6c9d9ffd02c2015627516c7d6ff17567": {
"Name": "my_docker_mailhog_1",
"EndpointID": "242a693e6752f05985c377cd7c30f6781f0576bcd5ffede98f77f82efff8c78f",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "my_docker_project"
}
}
]
```
But: **does someone have any idea what happened and how to prevent this problem from reappearing?**
|
I had the same problem, but with the rabbitmq service in my compose file. At first I solved it by deleting all existing containers and volumes on my machine (but it happened again every now and then); later I updated the rabbitmq image version to latest in `docker-compose.yml`:
```
image: rabbitmq:latest
```
and the problem did not reappear afterwards...
| 8,718
|
66,386,685
|
I'm working on a secure system where internet access is restricted. My company will let me install Python and libraries, but they only allow the unblocking of specific URLs temporarily. So I need to know which URLs I need to unblock to install Python and which URLs I need to unblock to execute
**pip install pandas**
**pip install requests**
**pip install xlrd**
among others.
Alternatively I would also be happy if I could just find a url to manually install each library.
|
2021/02/26
|
[
"https://Stackoverflow.com/questions/66386685",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12986251/"
] |
You can do all that in one loop, which would be much faster. To know the correct position to put the number in, keep an extra counter for each array.
### Your kind of approach
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
int[] odd = new int[10];
int[] even = new int[10];
int oddPos = 0;
int evenPos = 0;
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0) {
even[evenPos] = num[i];
evenPos++;
} else {
odd[oddPos] = num[i];
oddPos++;
}
}
```
However, this would not be the best solution, as in most cases you cannot determine the length of the `odd` and `even` arrays beforehand. You should instead use ArrayLists, or first count the values of each type, or something similar.
### More dynamic approach
As stated before - you need to determine the size of the arrays at first
```java
int[] num = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
int oddCount = 0, evenCount = 0;
int oddPos = 0, evenPos = 0;
//get the count of each type
for (int i = 0; i < num.length; i++) {
    if (num[i] % 2 == 0)
        evenCount++;
    else
        oddCount++;
}
//define arrays in correct sizes
int[] odd = new int[oddCount];
int[] even = new int[evenCount];
//put values in arrays
for (int i = 0; i < num.length; i++) {
if (num[i] % 2 == 0) {
even[evenPos] = num[i];
evenPos++;
} else {
odd[oddPos] = num[i];
oddPos++;
}
}
```
|
The approach for detecting `odd` and `even` numbers is correct, but I think the problem with the code you wrote is that the length of the `odd` and `even` arrays isn't known in advance. For this, I suggest using `ArrayList<Integer>`. Let's say you get the array as a function input and want arrays in the output (I'll return both arrays from one function for better performance, but separating the extraction into one function per list is also OK, depending on what you're going to do with them).
### Solution
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class Test {
public static Integer[][] separateOddnEven(int[] input) {
Integer[][] output = new Integer[2][];
List<Integer> odds = new ArrayList<>();
List<Integer> evens = new ArrayList<>();
for (int i = 0; i < input.length; ++i) {
int temp = input[i];
if (temp % 2 == 0)
evens.add(temp);
else
odds.add(temp);
}
// alternative is to use these Arraylists directly
output[0] = new Integer[odds.size()];
output[1] = new Integer[evens.size()];
output[0] = odds.toArray(output[0]);
output[1] = evens.toArray(output[1]);
return output; // index 0 has odd numbers and index 1 has even numbers.
}
public static void main(String[] args) {
int[] input = {0, 21, 24, 22, 14, 15, 16, 18};
Integer[][] output = separateOddnEven(input);
System.out.println("odd numbers :");
System.out.println(Arrays.toString(output[0]));
System.out.println("even numbers :");
System.out.println(Arrays.toString(output[1]));
}
}
```
### output :
```
odd numbers :
[21, 15]
even numbers :
[0, 24, 22, 14, 16, 18]
```
| 8,719
|
70,187,603
|
I am able to create a VM from a gallery image via az cli commands with:
```
az vm create --resource-group $RG2 \
--name $VM_NAME --image $(az sig image-version show \
--resource-group $RG \
--gallery-name $SIG \
--gallery-image-definition $SIG_IMAGE_DEFINITION \
--gallery-image-version $VERSION \
--query id -o tsv) \
--size $SIZE \
--public-ip-address "" \
--assign-identity $(az identity show --resource-group $RG2 --name $IDENTITY --query id -o tsv) \
--ssh-key-values $SSH_KEY_PATH \
--authentication-type ssh \
--admin-username admin
```
This works great. I am trying to do the same with python.
I see examples where they create everything: resource groups, NICs, subnets, VNets, etc., but that is not what I need. I am literally trying to do what this az cli command is doing. Is there a way to do this with Python?
How do we set the public IP to nothing so that one is not created? I want it to use the VNet, subnet, etc. that the resource group already has defined, just like the az cli does.
|
2021/12/01
|
[
"https://Stackoverflow.com/questions/70187603",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/372429/"
] |
You can modify the [example script](https://learn.microsoft.com/en-us/azure/developer/python/azure-sdk-example-virtual-machines?tabs=cmd) in our doc to do this. Essentially, you need to get rid of step 4 and modify step 5 so it does not send a public IP when creating the NIC. This has been validated in my own subscription.
```
# Import the needed credential and management objects from the libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute import ComputeManagementClient
import os
print(f"Provisioning a virtual machine...some operations might take a minute or two.")
# Acquire a credential object using CLI-based authentication.
credential = AzureCliCredential()
# Retrieve subscription ID from environment variable.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
# Step 1: Provision a resource group
# Obtain the management object for resources, using the credentials from the CLI login.
resource_client = ResourceManagementClient(credential, subscription_id)
# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = "PythonAzureExample-VM-rg"
LOCATION = "westus2"
# Provision the resource group.
rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
{
"location": LOCATION
}
)
print(f"Provisioned resource group {rg_result.name} in the {rg_result.location} region")
# For details on the previous code, see Example: Provision a resource group
# at https://learn.microsoft.com/azure/developer/python/azure-sdk-example-resource-group
# Step 2: provision a virtual network
# A virtual machine requires a network interface client (NIC). A NIC requires
# a virtual network and subnet along with an IP address. Therefore we must provision
# these downstream components first, then provision the NIC, after which we
# can provision the VM.
# Network and IP address names
VNET_NAME = "python-example-vnet"
SUBNET_NAME = "python-example-subnet"
IP_NAME = "python-example-ip"
IP_CONFIG_NAME = "python-example-ip-config"
NIC_NAME = "python-example-nic"
# Obtain the management object for networks
network_client = NetworkManagementClient(credential, subscription_id)
# Provision the virtual network and wait for completion
poller = network_client.virtual_networks.begin_create_or_update(RESOURCE_GROUP_NAME,
VNET_NAME,
{
"location": LOCATION,
"address_space": {
"address_prefixes": ["10.0.0.0/16"]
}
}
)
vnet_result = poller.result()
print(f"Provisioned virtual network {vnet_result.name} with address prefixes {vnet_result.address_space.address_prefixes}")
# Step 3: Provision the subnet and wait for completion
poller = network_client.subnets.begin_create_or_update(RESOURCE_GROUP_NAME,
VNET_NAME, SUBNET_NAME,
{ "address_prefix": "10.0.0.0/24" }
)
subnet_result = poller.result()
print(f"Provisioned virtual subnet {subnet_result.name} with address prefix {subnet_result.address_prefix}")
# Step 4: Provision an IP address and wait for completion
# Removed as not needed
# Step 5: Provision the network interface client
poller = network_client.network_interfaces.begin_create_or_update(RESOURCE_GROUP_NAME,
NIC_NAME,
{
"location": LOCATION,
"ip_configurations": [ {
"name": IP_CONFIG_NAME,
"subnet": { "id": subnet_result.id }
}]
}
)
nic_result = poller.result()
print(f"Provisioned network interface client {nic_result.name}")
# Step 6: Provision the virtual machine
# Obtain the management object for virtual machines
compute_client = ComputeManagementClient(credential, subscription_id)
VM_NAME = "ExampleVM"
USERNAME = "azureuser"
PASSWORD = "ChangePa$$w0rd24"
print(f"Provisioning virtual machine {VM_NAME}; this operation might take a few minutes.")
# Provision the VM specifying only minimal arguments, which defaults to an Ubuntu 18.04 VM
# on a Standard DS1 v2 plan with a public IP address and a default virtual network/subnet.
poller = compute_client.virtual_machines.begin_create_or_update(RESOURCE_GROUP_NAME, VM_NAME,
{
"location": LOCATION,
"storage_profile": {
"image_reference": {
"publisher": 'Canonical',
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
}
},
"hardware_profile": {
"vm_size": "Standard_DS1_v2"
},
"os_profile": {
"computer_name": VM_NAME,
"admin_username": USERNAME,
"admin_password": PASSWORD
},
"network_profile": {
"network_interfaces": [{
"id": nic_result.id,
}]
}
}
)
vm_result = poller.result()
print(f"Provisioned virtual machine {vm_result.name}")
```
|
```
resource_name = f"myserver{random.randint(1000, 9999)}"
VNET_NAME = "myteam-vpn-vnet"
SUBNET_NAME = "myteam-subnet"
IP_NAME = resource_name + "-ip"
IP_CONFIG_NAME = resource_name + "-ip-config"
NIC_NAME = resource_name + "-nic"
Subnet=network_client.subnets.get(resource_group_name, VNET_NAME, SUBNET_NAME)
# Step 5: Provision the network interface client
poller = network_client.network_interfaces.begin_create_or_update(resource_group_name,
NIC_NAME,
{
"location": location,
"ip_configurations": [{
"name": IP_CONFIG_NAME,
"subnet": { "id": Subnet.id },
}]
}
)
nic_result = poller.result()
```
Yes, we removed step 4 and modified step 5 above as suggested. The NIC was then applied in the VM creation like so:
```
"network_profile": {
"network_interfaces": [
{
"id": nic_result.id
}
]
},
```
| 8,724
|
61,037,527
|
I want to run my code on the GPU provided by Kaggle. I am able to run my code on the CPU, but I am unable to migrate it properly to run on the Kaggle GPU.
On running this
```
with tf.device("/device:GPU:0"):
hist = model.fit(x=X_train, y=Y_train, validation_data=(X_test, Y_test), batch_size=25, epochs=20, callbacks=callbacks_list)
```
and getting this error
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-28-cdb8b009cd85> in <module>
1 with tf.device("/device:GPU:0"):
----> 2 hist = model.fit(x=X_train, y=Y_train, validation_data=(X_test, Y_test), batch_size=25, epochs=20, callbacks=callbacks_list)
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
817 self._assert_compile_was_called()
818 self._check_call_args('evaluate')
--> 819
820 func = self._select_training_loop(x)
821 return func.evaluate(
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
233
234 recreate_training_iterator = (
--> 235 training_data_adapter.should_recreate_iterator(steps_per_epoch))
236 if not steps_per_epoch:
237 # TODO(b/139762795): Add step inference for when steps is None to
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
591 class_weights=None,
592 shuffle=False,
--> 593 steps=None,
594 distribution_strategy=None,
595 max_queue_size=10,
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
704 """Provide a scope for running one batch."""
705 batch_logs = {'batch': step, 'size': size}
--> 706 self.callbacks._call_batch_hook(
707 mode, 'begin', step, batch_logs)
708 self.progbar.on_batch_begin(step, batch_logs)
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, sample_weight_modes, batch_size, epochs, steps, shuffle, **kwargs)
355 sample_weights = _process_numpy_inputs(sample_weights)
356
--> 357 # If sample_weights are not specified for an output use 1.0 as weights.
358 if (sample_weights is not None and
359 any([sw is None for sw in sample_weights])):
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in slice_inputs(self, indices_dataset, inputs)
381 if steps and not batch_size:
382 batch_size = int(math.ceil(num_samples/steps))
--> 383
384 if not batch_size:
385 raise ValueError(
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in from_tensors(tensors)
564 existing iterators.
565
--> 566 Args:
567 unused_dummy: Ignored value.
568
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in __init__(self, element)
2763 init_args: A nested structure representing the arguments to `init_func`.
2764 init_func: A TensorFlow function that will be called on `init_args` each
-> 2765 time a C++ iterator over this dataset is constructed. Returns a nested
2766 structure representing the "state" of the dataset.
2767 next_func: A TensorFlow function that will be called on the result of
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/data/util/structure.py in normalize_element(element)
111 ops.convert_to_tensor(t, name="component_%d" % i))
112 return nest.pack_sequence_as(element, normalized_components)
--> 113
114
115 def convert_legacy_structure(output_types, output_shapes, output_classes):
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1312 return ret
1313 raise TypeError("%sCannot convert %r with type %s to Tensor: "
-> 1314 "no conversion function registered." %
1315 (_error_prefix(name), value, type(value)))
1316
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_conversion_registry.py in _default_conversion_function(***failed resolving arguments***)
50 def _default_conversion_function(value, dtype, name, as_ref):
51 del as_ref # Unused.
---> 52 return constant_op.constant(value, dtype, name=name)
53
54
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in constant(value, dtype, shape, name)
256 return _eager_fill(shape.as_list(), t, ctx)
257 raise TypeError("Eager execution of tf.constant with unsupported shape "
--> 258 "(value has %d elements, shape is %s with %d elements)." %
259 (num_t, shape, shape.num_elements()))
260 g = ops.get_default_graph()
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
264 value, dtype=dtype, shape=shape, verify_shape=verify_shape,
265 allow_broadcast=allow_broadcast))
--> 266 dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
267 const_tensor = g.create_op(
268 "Const", [], [dtype_value.type],
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
94 dtype = dtypes.as_dtype(dtype).as_datatype_enum
95 ctx.ensure_initialized()
---> 96 return ops.EagerTensor(value, ctx.device_name, dtype)
97
98
RuntimeError: Can't copy Tensor with type string to device /job:localhost/replica:0/task:0/device:GPU:0.
```
I have also tried installing different tensorflow versions, like the latest tensorflow, tensorflow-gpu, and tensorflow-gpu=1.12, but with no success.
Though I am able to list the CPUs and GPUs by using:
```
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
Please help!
|
2020/04/05
|
[
"https://Stackoverflow.com/questions/61037527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9872938/"
] |
Can you try to update `@material-ui/core` by running
```
npm update
```
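or, to update just that one package (a narrower variant of the same idea):
```
npm update @material-ui/core
```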
|
As described in the Material-UI project [CHANGELOG](https://github.com/mui-org/material-ui/releases/tag/v4.9.9) of the latest version (which is **v4.9.9** at the time I'm writing this answer), there is a change related to `createSvgIcon`:
[](https://i.stack.imgur.com/n8NYJ.png)
The complete conversation of team can be found [here](https://github.com/mui-org/material-ui/pull/20308).
**When did I encounter the problem?**
When running a React project in which I wanted to use the [Autocomplete](https://material-ui.com/components/autocomplete/) component from `@material-ui/lab`.
**How did I solve it?**
I upgraded `@material-ui/core` package to v4.9.9 using this command:
`yarn upgrade @material-ui/core --latest`
| 8,725
|
67,327,106
|
I am trying to load a serialized xgboost model from a pickle file.
```
import pickle
def load_pkl(fname):
with open(fname, 'rb') as f:
obj = pickle.load(f)
return obj
model = load_pkl('model_0_unrestricted.pkl')
```
While printing the model object, I am getting the following error on Linux (an AWS SageMaker notebook):
```
~/anaconda3/envs/python3/lib/python3.6/site-packages/xgboost/sklearn.py in get_params(self, deep)
436 if k == 'type' and type(self).__name__ != v:
437 msg = 'Current model type: {}, '.format(type(self).__name__) + \
--> 438 'type of model in file: {}'.format(v)
439 raise TypeError(msg)
440 if k == 'type':
~/anaconda3/envs/python3/lib/python3.6/site-packages/sklearn/base.py in get_params(self, deep)
193 out = dict()
194 for key in self._get_param_names():
--> 195 value = getattr(self, key)
196 if deep and hasattr(value, 'get_params'):
197 deep_items = value.get_params().items()
AttributeError: 'XGBClassifier' object has no attribute 'use_label_encoder'
```
Can you please help to fix the issue?
It is working fine on my local Mac.
Ref: xgboost:1.4.1 installation log (Mac)
```
Collecting xgboost
Downloading xgboost-1.4.1-py3-none-macosx_10_14_x86_64.macosx_10_15_x86_64.macosx_11_0_x86_64.whl (1.2 MB)
```
But not working on AWS
Ref: xgboost:1.4.1 installation log (SM Notebook, linux machine)
```
Collecting xgboost
Using cached xgboost-1.4.1-py3-none-manylinux2010_x86_64.whl (166.7 MB)
```
Thanks
|
2021/04/30
|
[
"https://Stackoverflow.com/questions/67327106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2140489/"
] |
Looks like you upgraded xgboost.
You may consider downgrading to 1.2.0 by:
```
pip install xgboost==1.2.0
```
|
I tried testing in a notebook running on Ubuntu and it seems to work fine; however, can you check how you are initializing your classifier? This is what I tried:
```
import numpy as np
import pickle
from scipy.stats import uniform, randint
from sklearn.datasets import load_breast_cancer, load_diabetes, load_wine
from sklearn.metrics import auc, accuracy_score, confusion_matrix, mean_squared_error
from sklearn.model_selection import cross_val_score, GridSearchCV, KFold,RandomizedSearchCV, train_test_split
import xgboost as xgb
cancer = load_breast_cancer()
X = cancer.data
y = cancer.target
xgb_model = xgb.XGBClassifier(objective="binary:logistic", random_state=45)
xgb_model.fit(X, y)
pickle.dump(xgb_model, open("xgb_model.pkl", "wb"))
```
Load the model back using your function and output it:
```
def load_pkl(fname):
with open(fname, 'rb') as f:
obj = pickle.load(f)
return obj
model = load_pkl('xgb_model.pkl')
model
```
Below is the output:
```
XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=1, gamma=0, gpu_id=-1,
importance_type='gain', interaction_constraints='',
learning_rate=0.300000012, max_delta_step=0, max_depth=6,
min_child_weight=1, missing=nan, monotone_constraints='()',
n_estimators=100, n_jobs=8, num_parallel_tree=1, random_state=45,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
```
| 8,726
|
52,104,644
|
I have the following function which basically asks the user to choose "X" or "O". I used the while loop to keep asking the user until I get an answer that's either "X" or "O".
```
def player_input():
choice = ''
while choice != "X" and choice != "O":
choice = input("Player 1, choose X or O: ")
pl1 = choice
if pl1 == "X":
pl2 = "O"
else:
pl2 = "X"
return (pl1, pl2)
```
The above code works fine, but I don't quite understand how that 'and' works in this particular scenario. If I understand it right, 'and' means both conditions have to be true. However, choice can only be either "X" or "O" at any given time.
Please help me understand this. Apologies in advance if you think this is a dumb question. I am new to python and programming in general.
Thank you!
|
2018/08/30
|
[
"https://Stackoverflow.com/questions/52104644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9114293/"
] |
Loop indexing is well known to be an incredibly slow operation in Python. By replacing the inner loop with array slicing, and the list with a NumPy array, we see a roughly 3x speed-up:
```
import numpy as np
import timeit
def generate_primes_original(limit):
boolean_list = [False] * 2 + [True] * (limit - 1)
for n in range(2, int(limit ** 0.5 + 1)):
if boolean_list[n] == True:
for i in range(n ** 2, limit + 1, n):
boolean_list[i] = False
return np.array(boolean_list,dtype=np.bool)
def generate_primes_fast(limit):
boolean_list = np.array([False] * 2 + [True] * (limit - 1),dtype=bool)
for n in range(2, int(limit ** 0.5 + 1)):
if boolean_list[n]:
boolean_list[n*n:limit+1:n] = False
return boolean_list
limit = 1000
print(timeit.timeit("generate_primes_fast(%d)"%limit, setup="from __main__ import generate_primes_fast"))
# 30.90620080102235 seconds
print(timeit.timeit("generate_primes_original(%d)"%limit, setup="from __main__ import generate_primes_original"))
# 91.12803511600941 seconds
assert np.array_equal(generate_primes_fast(limit),generate_primes_original(limit))
# [nothing to stdout - they are equal]
```
To gain even more speed, one option is to use [numpy vectorization](https://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html). Looking at the outer loop, it's not immediately obvious how one could vectorize that.
Second, you will see dramatic speed-ups if you port to [Cython](http://docs.cython.org/en/latest/src/userguide/numpy_tutorial.html#numpy-tutorial), which should be a fairly seamless process.
Edit: you may also see improvements by changing things like `n**2 => math.pow(n,2)`, but minor improvements like that are inconsequential compared to the bigger problem, which is the iterator.
|
If you are still using Python 2, use `xrange` instead of `range` for greater speed.
| 8,728
|
32,478,825
|
I am using Python and scikit-learn to find the cosine similarity between two strings (specifically, names). The program is able to find the similarity score between two strings but, when strings are abbreviated, it shows some undesirable output.
e.g. String1 = "K KAPOOR", String2 = "L KAPOOR"
The cosine similarity score of these strings is 1 (the maximum) while the two strings are entirely different names. Is there a way to modify it in order to get the desired results?
My code is:
```
# -*- coding: utf-8 -*-
"""
Created on Wed Sep 9 14:40:21 2015
@author: gauge
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
documents=("K KAPOOR","L KAPOOR")
tfidf_vectorizer=TfidfVectorizer()
tfidf_matrix=tfidf_vectorizer.fit_transform(documents)
#print tfidf_matrix.shape
cs=cosine_similarity(tfidf_matrix[0:1],tfidf_matrix)
print cs
```
|
2015/09/09
|
[
"https://Stackoverflow.com/questions/32478825",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4994653/"
] |
As mentioned in the other answer, the cosine similarity is one because the two strings have **the exact same representation**.
That means that this code:
```
tfidf_vectorizer=TfidfVectorizer()
tfidf_matrix=tfidf_vectorizer.fit_transform(documents)
```
produces, well:
```
print(tfidf_matrix.toarray())
[[ 1.]
[ 1.]]
```
This means that the two strings/documents (here the rows in the array) have the same representation.
That is because the `TfidfVectorizer` tokenizes your document using **word tokens**, and keeps only words with **at least 2 characters**.
So you could do one of the following:
1. Use:
```
tfidf_vectorizer=TfidfVectorizer(analyzer="char")
```
to get character n-grams instead of word n-grams.
2. Change the token pattern so that it keeps one-letter tokens:
```
tfidf_vectorizer=TfidfVectorizer(token_pattern=u'(?u)\\b\w+\\b')
```
This is just a simple modification from the default pattern you can see in the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html). Note that I had to escape the `\b` occurrences in the regular expression as I was getting an 'empty vocabulary' error.
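As a quick check of option 2 (a sketch built on the question's own strings; the exact score depends on the TF-IDF defaults), keeping one-letter tokens makes the two names differ in the "K"/"L" dimensions, so the similarity drops below 1:
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = ("K KAPOOR", "L KAPOOR")
# keep one-letter tokens so "K" and "L" survive tokenization
tfidf_vectorizer = TfidfVectorizer(token_pattern=u'(?u)\\b\\w+\\b')
tfidf_matrix = tfidf_vectorizer.fit_transform(documents)
print(cosine_similarity(tfidf_matrix[0:1], tfidf_matrix))
# the second value is now well below 1
```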
Hope this helps.
|
>
> String1 ="K KAPOOR", String2="L KAPOOR" The cosine similarity score of these strings is 1 (maximum) while the two strings are entirely different names. Is there a way to modify it, in order to get some desired results.
>
>
>
**It depends.** You are facing this issue because the vector representations of these two strings are exactly the same.
The cosine similarity between the two strings is **1** because they are the **same**: not the same strings, but represented by the **same vector**.
If you want them to be different, you need to represent them differently. To do that you need to train your algorithm with enough words that occur multiple times in a corpus.
Also, it is highly likely that these two strings are both converted to something like 'KAPOOR' during preprocessing.
| 8,729
|
2,545,655
|
Using Python 2.6.4, windows
With the following script I want to test a certain xmlrpc server. I call a non-existent function and hope for a traceback with an error. Instead, the function does not return. What could be the cause?
```
import xmlrpclib
s = xmlrpclib.Server("http://127.0.0.1:80", verbose=True)
s.functioncall()
```
The output is:
```
send: 'POST /RPC2 HTTP/1.0\r\nHost: 127.0.0.1:80\r\nUser-Agent: xmlrpclib.py/1.0
.1 (by www.pythonware.com)\r\nContent-Type: text/xml\r\nContent-Length: 106\r\n\
r\n'
send: "<?xml version='1.0'?>\n<methodCall>\n<methodName>functioncall</methodName
>\n<params>\n</params>\n</methodCall>\n"
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: text/xml
header: Cache-Control: no-cache
header: Content-Length: 376
header: Date: Tue, 30 Mar 2010 13:27:21 GMT
body: '<?xml version="1.0"?>\r\n<methodResponse>\r\n<fault>\r\n<value>\r\n<struc
t>\r\n<member>\r\n<name>faultCode</name>\r\n<value><i4>1</i4></value>\r\n</membe
r>\r\n<member>\r\n<name>faultString</name>\r\n<value><string>PVSS00ctrl (2), 2
010.03.30 15:27:21.395, CTRL, SEVERE, 72, Function not defined, functioncall
, , \n</string></value>\r\n</member>\r\n</struct>\r\n</value>\r\n</fault>\r\n</m
ethodResponse>\r\n'
```
(here the program hangs and does not return until I kill the server)
edit: the server is written in c++, using its own xmlrpc library
edit: found an issue that looks like the same problem <http://bugs.python.org/issue1727418>
|
2010/03/30
|
[
"https://Stackoverflow.com/questions/2545655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/80500/"
] |
As you noticed, this is a bug in the server (the client claims to understand 1.0 and the server ignores that and responds in 1.1 anyway, so doesn't close the socket). Python has a workaround for such buggy servers in 2.7 and 3.2, see [this issue](http://bugs.python.org/issue6267), but that workaround wasn't in 2.6.4. Unfortunately, from 2.6.5's [NEWS.txt](http://www.python.org/download/releases/2.6.5/NEWS.txt) it looks like we haven't backported it to 2.6.5 either. The patch for the workaround in 2.7 is [here](http://svn.python.org/view?view=rev&revision=73638), perhaps you can try applying it to 2.6.5 yourself if it's just impossible to fix the buggy server...?
|
Most likely, the server you're testing does not close the TCP connection once it has sent the response back to your client. Thus the client hangs, waiting for the server to close the connection before it can return from the function.
| 8,730
|
59,524,498
|
I am trying to create a seaborn FacetGrid to plot the distribution (to check normality) of every column in my DataFrame `decathlon`. The data looks like this:
```
P100m Plj Psp Phj P400m P110h Ppv Pdt Pjt P1500
0 938 1061 773 859 896 911 880 732 757 752
1 839 975 870 749 887 878 880 823 863 741
2 814 866 841 887 921 939 819 778 884 691
3 872 898 789 878 848 879 790 790 861 804
4 892 913 742 803 816 869 1004 789 854 699
... ... ... ... ... ... ... ... ... ...
7963 755 760 604 714 812 794 482 571 539 780
7964 830 845 524 767 786 783 601 573 562 535
7965 819 804 653 840 791 699 659 461 448 632
7966 804 720 539 758 830 782 731 487 425 729
7967 687 809 692 714 565 741 804 527 738 523
```
I am relatively new to Python and I can't understand my error. My attempt to format the data and create the grid is as follows:
```
import seaborn as sns
df_stacked = decathlon.stack().reset_index(1).rename({'level_1': 'column', 0: 'values'}, axis=1)
g = sns.FacetGrid(df_stacked, row = 'column')
g = g.map(plt.hist, "values")
```
However, I receive the following error:
```
ValueError: Axes instance argument was not found in a figure
```
Can anyone explain what exactly this error means and how I would go about fixing it?
**EDIT**
`df_stacked` looks as such:
```
column values
0 P100m 938
0 Plj 1061
0 Psp 773
0 Phj 859
0 P400m 896
... ...
7967 P110h 741
7967 Ppv 804
7967 Pdt 527
7967 Pjt 738
7967 P1500 523
```
|
2019/12/30
|
[
"https://Stackoverflow.com/questions/59524498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10574250/"
] |
I encountered this similar issue when running a Jupyter Notebook.
My solution involved:
1. Restart the notebook
2. Re-run the imports `%matplotlib inline; import matplotlib.pyplot as plt`
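That is, after the restart, run in a fresh cell:
```
%matplotlib inline
import matplotlib.pyplot as plt
```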
|
As you did not post a full working example, this involves a bit of guessing.
What might be going wrong is the line `g = g.map(plt.hist, "values")`, because the error comes from deep within matplotlib. You can see something similar [here](https://stackoverflow.com/questions/40399631/valueerror-axes-instance-argument-was-not-found-in-a-figure) in this SO question, where another function outside matplotlib, `pylab.sca(axes[i])`, triggers the same error inside matplotlib.
Likely you installed/updated something in your (conda?) environment (changes in environment paths?) and after the next reboot it was picked up.
I also wonder how you came up with `plt.hist`... fully typed it should resemble `matplotlib.pyplot.hist`... but this is guessing (waiting for your updated example code).
| 8,731
|
60,182,791
|
I have tried uploading a file to Google Drive from my local system using a Python script, but I keep getting HttpError 403. The script is as follows:
```python
from googleapiclient.http import MediaFileUpload
from googleapiclient import discovery
import httplib2
import auth
SCOPES = "https://www.googleapis.com/auth/drive"
CLIENT_SECRET_FILE = "client_secret.json"
APPLICATION_NAME = "test"
authInst = auth.auth(SCOPES, CLIENT_SECRET_FILE, APPLICATION_NAME)
credentials = authInst.getCredentials()
http = credentials.authorize(httplib2.Http())
drive_serivce = discovery.build('drive', 'v3', credentials=credentials)
file_metadata = {'name': 'gb1.png'}
media = MediaFileUpload('./gb.png',
mimetype='image/png')
file = drive_serivce.files().create(body=file_metadata,
media_body=media,
fields='id').execute()
print('File ID: %s' % file.get('id'))
```
The error is :
```python
googleapiclient.errors.HttpError: <HttpError 403 when requesting
https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart&alt=json&fields=id
returned "Insufficient Permission: Request had insufficient authentication scopes.">
```
Am I using the right scope in the code, or am I missing anything?
I also tried a script I found online and it is working fine but the issue is that it takes a static token, which expires after some time. So how can I refresh the token dynamically?
Here is my code:
```python
import json
import requests
headers = {
"Authorization": "Bearer TOKEN"}
para = {
"name": "account.csv",
"parents": ["FOLDER_ID"]
}
files = {
'data': ('metadata', json.dumps(para), 'application/json; charset=UTF-8'),
'file': ('mimeType', open("./test.csv", "rb"))
}
r = requests.post(
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart",
headers=headers,
files=files
)
print(r.text)
```
|
2020/02/12
|
[
"https://Stackoverflow.com/questions/60182791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7640700/"
] |
Try with this:
```
var str = @"email from Ram at 10:10 am"" ""email from Ramesh at 10:15 am"" ""email from Rajan at 10:20 am"" ""email from Rakesh at 10:25 am";
string[] sl=str.Trim().Split(new string[] { "\" \"" }, StringSplitOptions.None);
foreach(string st in sl) {
Console.WriteLine(st);
}
```
**Output:**
>
> email from Ram at 10:10 am
>
>
> email from Ramesh at 10:15 am
>
>
> email from Rajan at 10:20 am
>
>
> email from Rakesh at 10:25 am
>
>
>
Check results here: <https://dotnetfiddle.net/5zlfJf>
|
It is possible to use doubled `"` characters, as they are part of the verbatim string literal, and the compiler will interpret each doubled pair as a single `"`:
```
var str = @"email from Ram at 10:10 am"" ""email from Ramesh at 10:15 am"" ""email from Rajan at 10:20 am"" ""email from Rakesh at 10:25 am";
var splitted = str.Split(new string[] { @""" """ }, StringSplitOptions.None);
```
or another way:
Try to use `Split`:
```
var str = @"email from Ram at 10:10 am"" ""email from Ramesh at 10:15 am"" ""email from Rajan at 10:20 am"" ""email from Rakesh at 10:25 am";
var splitted = str.Split(new []{ '"'}, StringSplitOptions.RemoveEmptyEntries)
.Where(s=> !string.IsNullOrWhiteSpace(s)).ToList();
```
| 8,732
|
9,833,152
|
>
> **Possible Duplicate:**
>
> [RegEx match open tags except XHTML self-contained tags](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags)
>
>
>
If I have a string that looks something like...
```
"<tr><td>123</td><td>234</td>...<td>697</td></tr>"
```
Basically a table row with n cells.
What's the easiest way in Python to get the value of each cell? That is, I just want the values "123", "234", "697" stored in a list or array, or whatever is easiest.
I've tried to use regular expressions; when I use
```
re.match
```
I am not able to get it to find anything. If I try with
```
re.search
```
I can only get the first cell. But I want to get all the cells. If I can't do this with n cells, how would you do it with a fixed number of cells?
|
2012/03/23
|
[
"https://Stackoverflow.com/questions/9833152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/399523/"
] |
If that markup is part of a larger set of markup, you should prefer a tool with a HTML parser.
One such tool is [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/).
Here's one way to find what you need using that tool:
```
>>> markup = '''"<tr><td>123</td><td>234</td>...<td>697</td></tr>"'''
>>> from bs4 import BeautifulSoup as bs
>>> soup = bs(markup)
>>> for i in soup.find_all('td'):
... print(i.text)
```
Result:
```
123
234
697
```
|
Don't do this. Just use a proper HTML parser, and use something like xpath to get the elements you want.
A lot of people like lxml. For this task, you will probably want to use the BeautifulSoup backend, or use BeautifulSoup directly, because this is presumably not markup from a source known to generate well-formed, valid documents.
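For example, a minimal sketch of the xpath route (using `lxml.etree` here, since this particular fragment happens to be well-formed XML; for messier real-world markup, `lxml.html` or BeautifulSoup is the safer parser):
```
from lxml import etree

markup = "<tr><td>123</td><td>234</td><td>697</td></tr>"
tree = etree.fromstring(markup)      # parse the fragment
print(tree.xpath('//td/text()'))     # ['123', '234', '697']
```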
| 8,733
|
58,350,100
|
I am trying to solve this [Dynamic Array problem](https://www.hackerrank.com/challenges/dynamic-array/problem?isFullScreen=true) on HackerRank. This is my code:
```py
#!/bin/python3
import math
import os
import random
import re
import sys
#
# Complete the 'dynamicArray' function below.
#
# The function is expected to return an INTEGER_ARRAY.
# The function accepts following parameters:
# 1. INTEGER n
# 2. 2D_INTEGER_ARRAY queries
#
def dynamicArray(n, queries):
lastAnswer = 0
a = []
array_result = []
for k in range(n):
a.append([])
for i in queries:
x = i[1]
y = i[2]
if i[0] == 1:
seq = ((x ^ lastAnswer) % n)
a[seq].append(y)
elif i[0] == 2:
seq = ((x ^ lastAnswer) % n)
lastAnswer = a[seq][y]
array_result.append(lastAnswer)
return array_result
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
first_multiple_input = input().rstrip().split()
n = int(first_multiple_input[0])
q = int(first_multiple_input[1])
queries = [] # 1 0 5, 1 1 7, 1 0 3, ...
for _ in range(q):
queries.append(list(map(int, input().rstrip().split())))
result = dynamicArray(n, queries)
fptr.write('\n'.join(map(str, result)))
fptr.write('\n')
fptr.close()
```
I am getting a runtime error:
>
> Traceback (most recent call last):
>
>
> File "Solution.py", line 50, in
>
>
> fptr.write('\n'.join(map(str, result)))
>
>
> TypeError: 'NoneType' object is not iterable
>
>
>
Can anyone help me with this? I can't seem to find a solution.
This is the input:
>
> 2 5
>
>
> 1 0 5
>
>
> 1 1 7
>
>
> 1 0 3
>
>
> 2 1 0
>
>
> 2 1 1
>
>
>
Thanks.
>
> Update: It seems like this input is working now, thanks to @cireo, but the code is not working for other test cases. What's the problem with this code?
>
>
>
|
2019/10/12
|
[
"https://Stackoverflow.com/questions/58350100",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8598839/"
] |
You can try this; it works fine (no runtime error).
Replace your `dynamicArray` function with this code. Hopefully this will be helpful for you (^\_^).
```
def dynamicArray(n, queries):
    col = [[] for i in range(n)]
    res = []
    lastanswer = 0
    for q in queries:
        data = (q[1]^lastanswer)%n
        if q[0] == 1:
            col[data].append(q[2])
        elif q[0] == 2:
            ind_x = q[2]%len(col[data])
            lastanswer = col[data][ind_x]
            res.append(lastanswer)
    return res
```
|
The answer to your question lies in the boilerplate provided by hackerrank.
`# The function is expected to return an INTEGER_ARRAY.`
You can also see that `result = dynamicArray(n, queries)` is expected to return a list of integers from `map(str, result)`, which throws the exception.
In your code you do `print(lastAnswer)`, but you probably want
```
+ ret = []
...
- print(lastAnswer)
+ ret.append(lastAnswer)
+ return ret
```
instead.
Since you do not return anything, the function returns `None` by default, which cannot be iterated over by `map`.
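A minimal illustration of the failure mode (hypothetical function, just to show the mechanics):
```
def no_return(xs):
    xs.append(1)  # no return statement, so the function implicitly returns None

result = no_return([])
'\n'.join(map(str, result))  # TypeError: 'NoneType' object is not iterable
```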
| 8,735
|
23,211,546
|
I asked a similar question [here](https://stackoverflow.com/questions/23159053/re-read-a-file-from-start-after-the-program-finishes-reading-it-python/23159107) and the answer I got was to use the `seek()` method. Now I am doing the following:
```
with open("total.csv", 'rb') as input1:
time.sleep(3)
input1.seek(0)
reader = csv.reader(input1, delimiter="\t")
for row in reader:
#Read the CSV row by row.
```
However, I want to navigate to the first record of the CSV **within the same for loop**. I know that my loop won't terminate that way, but that's precisely what I want. I don't want the `for` loop to end: when it reaches the last record, I want to navigate back to the first record and read the whole file all over again (and keep reading it). How do I do that?
Thanks!
|
2014/04/22
|
[
"https://Stackoverflow.com/questions/23211546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3534055/"
] |
For simplicity, create a generator:
```
def repeated_reader(input, reader):
while True:
input.seek(0)
for row in reader:
yield row
with open("total.csv", 'rb') as input1:
reader = csv.reader(input1, delimiter="\t")
for row in repeated_reader(input1, reader):
#Read the CSV row by row.
```
|
Does it have to be in the `for`-loop? You could achieve this behaviour like this (untested):
```
with open("total.csv", 'rb') as input1:
time.sleep(3)
reader = csv.reader(input1, delimiter="\t")
while True:
input1.seek(0)
for row in reader:
#Read the CSV row by row.
```
| 8,738
|
49,783,902
|
In python, if I use a ternary operator:
```
x = a if <condition> else b
```
Is `a` executed even if `condition` is false? Or does `condition` evaluate first and then goes to either `a` or `b` depending on the result?
|
2018/04/11
|
[
"https://Stackoverflow.com/questions/49783902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3754760/"
] |
The condition is evaluated first; if it is False, `a` is not evaluated: [documentation](https://docs.python.org/3/reference/expressions.html#conditional-expressions).
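A one-line demonstration (the division would raise if it were ever evaluated):
```
x = 42 if True else 1/0  # no ZeroDivisionError: 1/0 is never evaluated
```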
|
Each branch is evaluated only if the condition selects it. For example:
```
condition = True
print(2 if condition else 1/0)
#Output is 2
print((1/0, 2)[condition])
#ZeroDivisionError is raised
```
In the conditional expression, `1/0` is never evaluated because the condition was True; in the tuple-indexing version, both elements are built before indexing, so the error is raised.
The same happens the other way around:
```
condition = False
print(1/0 if condition else 2)
#Output is 2
```
| 8,741
|
58,512,790
|
I want to wrap some C++ code in Python using SWIG, and I need to be able to use numpy.i to convert numpy arrays to vectors.
This has been quite a frustrating process, as I haven't been able to find any useful info online as to where I actually get numpy.i from.
This is what I currently have running:
numpy 1.17.3
swig 2.0.12
python 3.7.3
Debian 4.9.2
From reading <https://docs.scipy.org/doc/numpy/reference/swig.interface-file.html> I'm told that numpy.i should be located in tools/swig/numpy.i, though the only place on my machine that I can find numpy.i is in a python 2.7 folder which I've upgraded from. My working version of python (3.7.3) holds no such file.
```
$ locate numpy.i
/usr/lib/python2.7/dist-packages/instant/swig/numpy.i
```
**What I've tried:**
* copying the numpy.i (as described above) into my working folder. This is at least recognized by my test.i file when I call %include "numpy.i", but it doesn't seem to allow usage of numpy.i calls.
* Copying this code <https://github.com/numpy/numpy/blob/master/tools/swig/numpy.i> into a new file called numpy.i and putting that in my folder, but I get lots of errors when I try to run it.
**Is there a standard way to get the proper numpy.i version? Where would I download it from, and where should I put it?**
I've included some code below as reference:
test.i:
```
%module test
%{
#define SWIG_FILE_WITH_INIT
#include "test.h"
%}
%include "numpy.i" //this doesn't seem to do anything
%init %{
import_array();
%}
%apply (int DIM1) {(char x)}; //this doesn't seem to do anything
%include "test.h"
```
test.h:
```
#include <iostream>
void char_print(char x);
```
test.cpp:
```
#include "test.h"
void char_print(char x) {
std::cout << x << std::endl;
return;
}
```
tester.py:
```
import test
test.char_print(5) #nothing is printed, since this isn't converted properly to a char.
```
This is just a simple example, but I've tried using numpy.i in many different ways (including copying and pasting other people's code that works for them) but it consistently doesn't change anything whether I have it in my test.i file or not.
**Where/how do I get numpy.i?**
|
2019/10/22
|
[
"https://Stackoverflow.com/questions/58512790",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12114274/"
] |
**Problem:** The numpy.i file I copied over from the python2.7 package isn't compatible, and the compatible version isn't included in the installation package when you go through anaconda (still not sure why they'd do that).
**Answer:** Find which version of numpy you're running, then go here (<https://github.com/numpy/numpy/releases>) and download the numpy-[your\_version].zip file, then specifically copy the numpy.i file, found in numpy-[your\_version]/tools/swig/. Now paste that numpy.i into your project working directory.
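To find out which version to download, you can check the installed numpy first:
```
python -c "import numpy; print(numpy.__version__)"
```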
|
You should download the new numpy.i file from <https://github.com/numpy/numpy/blob/master/tools/swig/numpy.i>. This numpy.i file contains no `PyFile_Check` call, which Python 3 does not support. If you still use
`/usr/lib/python2.7/dist-packages/instant/swig/numpy.i`, your code may produce the error `undefined symbol: PyFile_Check`, because only Python 2 supports the `PyFile_Check` function.
By the way, when the error `undefined symbol: PyFile_Check` occurs, it is not necessarily a problem with SWIG.
| 8,742
|
45,966,355
|
I would like to write a function which efficiently performs this "strange" sort (sorry for the pseudocode; it seems to me to be the clearest way to introduce the problem):
```
l=[[A,B,C,...]]
while some list in l is not sorted (increasingly) do
find a non-sorted list (say A) in l
find the first two non-sorted elements of A (i.e. A=[...,b,a,...] with b>a)
l=[[...,a,b,...],[...,b+a,...],B,C,...]
```
Two important things should be mentioned:
1. The sorting depends on the choice of the first two
non-sorted elements: if `A=[...,b,a,r,...]` with `r<a<b` and we choose to
sort with respect to `(a,r)`, then the final result won't be the same. This is
why we fix the first two non-sorted elements of `A`.
2. Sorting this way always comes to an end.
An example:
```
In: Sort([[4,5,3,10]])
Out: [[3,4,5,10],[5,7,10],[10,12],[22],[4,8,10]]
```
since
```
(a,b)=(5,3): [4,5,3,10]->[[4,3,5,10],[4,8,10]]
(a,b)=(4,3): [[4,3,5,10],[4,8,10]]->[[3,4,5,10],[7,5,10],[4,8,10]]
(a,b)=(7,5): [[3,4,5,10],[7,5,10],[4,8,10]]->[[3,4,5,10],[5,7,10],[12,10],[4,8,10]]
(a,b)=(12,10): [[3,4,5,10],[5,7,10],[12,10],[4,8,10]]->[[3,4,5,10],[5,7,10],[10,12],[22],[4,8,10]]
```
Thank you for your help!
**EDIT**
Why am I considering this problem:
I am trying to do some computations with the Universal Enveloping Algebra of a Lie algebra. This is a mathematical object generated by products of some generators x\_1,...x\_n. We have a nice description of a generating set (it amounts to the ordered lists in the question), but when exchanging two generators, we need to take into account the commutator of these two elements (this is the sum of the elements in the question). I haven't given a solution to this question because it would be close to the worst one you can think of. I would like to know how you would implement this in a good way, so that it is Pythonic and fast. I am not asking for a complete solution, only some clues. I am willing to solve it by myself.
|
2017/08/30
|
[
"https://Stackoverflow.com/questions/45966355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7871040/"
] |
Here's a simple implementation that could use some improvement:
```
def strange_sort(lists_to_sort):
# reverse so pop and append can be used
lists_to_sort = lists_to_sort[::-1]
sorted_list_of_lists = []
while lists_to_sort:
l = lists_to_sort.pop()
i = 0
# l[:i] is sorted
while i < len(l) - 1:
if l[i] > l[i + 1]:
# add list with element sum to stack
lists_to_sort.append(l[:i] + [l[i] + l[i + 1]] + l[i + 2:])
# reverse elements
l[i], l[i + 1] = l[i + 1], l[i]
# go back if necessary
if i > 0 and l[i - 1] > l [i]:
i -= 1
continue
# move on to next index
i += 1
# done sorting list
sorted_list_of_lists.append(l)
return sorted_list_of_lists
print(strange_sort([[4,5,3,10]]))
```
This keeps track of which lists are left to sort by using a stack. The time complexity is pretty good, but I don't think it's ideal.
|
Firstly, you would have to implement a `while` loop which checks whether all of the numbers inside the lists are sorted. I will be using `all`, which checks whether all the objects inside a sequence are truthy.
```
def a_sorting_function_of_some_sort(list_to_sort):
while not all([all([number <= numbers_list[numbers_list.index(number) + 1] for number in numbers_list
if not number == numbers_list[-1]])
for numbers_list in list_to_sort]):
for numbers_list in list_to_sort:
# There's nothing to do if the list contains just one number
if len(numbers_list) > 1:
for number in numbers_list:
number_index = numbers_list.index(number)
try:
next_number_index = number_index + 1
next_number = numbers_list[next_number_index]
# If IndexError is raised here, it means we don't have any other numbers to check against,
# so we break this numbers iteration to go to the next list iteration
except IndexError:
break
if not number < next_number:
numbers_list_index = list_to_sort.index(numbers_list)
list_to_sort.insert(numbers_list_index + 1, [*numbers_list[:number_index], number + next_number,
*numbers_list[next_number_index + 1:]])
numbers_list[number_index] = next_number
numbers_list[next_number_index] = number
# We also need to break after parsing unsorted numbers
break
return list_to_sort
```
| 8,743
|
55,218,096
|
Right now I am trying to write a Python script which gives a binary result: connected to Corporate\_VPN (Connection\_Name) or not connected to Corporate\_VPN.
I have tried the few articles and posts I could find, but with no success.
Here are some:
I have tried this post: [Getting Connected VPN Name in Python](https://stackoverflow.com/questions/36816282/getting-connected-vpn-name-in-python)
And tried:
```
import NetworkManager
for conn in NetworkManager.NetworkManager.ActiveConnections:
print('Name: %s; vpn?: %s' % (conn.Id, conn.Vpn))
```
I am getting this error:
```
ImportError
Traceback (most recent call last)
<ipython-input-6-52b1e422fff2> in <module>()
----> 1 import NetworkManager
2
3 for conn in NetworkManager.NetworkManager.ActiveConnections:
4 print('Name: %s; vpn?: %s' % (conn.Id, conn.Vpn))
ImportError: No module named 'NetworkManager'
```
When tried to "pip install python-NetworManager" I got this error:
```
Failed building wheel for dbus-python
Running setup.py clean for dbus-python
Successfully built python-networkmanager
Failed to build dbus-python
Installing collected packages: dbus-python, python-networkmanager
Running setup.py install for dbus-python ... error
Complete output from command C:\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\samola\\AppData\\Local\\Temp\\1\\pip-install-p1feeotm\\dbus-python\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\samola\AppData\Local\Temp\1\pip-record-91dmsyv1\install-record.txt --single-version-externally-managed --compile:
running install
running build
creating C:\Users\samola\AppData\Local\Temp\1\pip-install-p1feeotm\dbus-python\build
creating C:\Users\samola\AppData\Local\Temp\1\pip-install-p1feeotm\dbus-python\build\temp.win-amd64-3.6
error: [WinError 193] %1 is not a valid Win32 application
----------------------------------------
Command "C:\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\samola\\AppData\\Local\\Temp\\1\\pip-install-p1feeotm\\dbus-python\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\samola\AppData\Local\Temp\1\pip-record-91dmsyv1\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\samola\AppData\Local\Temp\1\pip-install-p1feeotm\dbus-python\
```
Later when I tried to "pip install dbus-python" i got this error:
```
Failed building wheel for dbus-python
Running setup.py clean for dbus-python
Failed to build dbus-python
Installing collected packages: dbus-python
Running setup.py install for dbus-python ... error
Complete output from command C:\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\samola\\AppData\\Local\\Temp\\1\\pip-install-lp5w3k60\\dbus-python\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\samola\AppData\Local\Temp\1\pip-record-7mvtqy_d\install-record.txt --single-version-externally-managed --compile:
running install
running build
creating C:\Users\samola\AppData\Local\Temp\1\pip-install-lp5w3k60\dbus-python\build
creating C:\Users\samola\AppData\Local\Temp\1\pip-install-lp5w3k60\dbus-python\build\temp.win-amd64-3.6
error: [WinError 193] %1 is not a valid Win32 application
----------------------------------------
Command "C:\Anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\samola\\AppData\\Local\\Temp\\1\\pip-install-lp5w3k60\\dbus-python\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\samola\AppData\Local\Temp\1\pip-record-7mvtqy_d\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\samola\AppData\Local\Temp\1\pip-install-lp5w3k60\dbus-python\
```
I have also tried following POST as well with no help:<https://www.reddit.com/r/learnpython/comments/5qkpu1/python_script_to_check_if_connected_to_vpn_or_not/>
```
host = *******

ping = subprocess.Popen(["ping.exe", "-n", "1", "-w", "1", host], stdout=subprocess.PIPE).communicate()[0]

if ('unreachable' in str(ping)) or ('timed' in str(ping)) or ('failure' in str(ping)):
    ping_chk = 0
else:
    ping_chk = 1

if ping_chk == 1:
    print("VPN Connected")
else:
    print("VPN Not Connected")
```
Throwing me error:
```
File "<ipython-input-5-6f992511172f>", line 1
host = 192.168.*.*
^
SyntaxError: invalid syntax
```
I am not sure what I am doing wrong right now.
Note: I am doing all this in Corporate VPN connection.
|
2019/03/18
|
[
"https://Stackoverflow.com/questions/55218096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8042963/"
] |
For three numbers specifically, there are two basic approaches:
* you can sort the three numbers and return the middle number from the sorted array. For this, a three-stage sorting network is generally useful. To build this, use this primitive which swaps `r0` and `r1` if `r0` is larger than `r1`, using `r3` as a temporary register:
```
cmp r0, r1 # if (r0 > r1)
movgt r3, r1 # r3 = r1
movgt r1, r0 # r1 = r0
movgt r0, r3 # r0 = r3 (the old r1)
```
* alternatively, you can compute the maximum and minimum of the three numbers and subtract them from their sum, yielding the number in the middle. For example, if the three numbers are in `r0`, `r1`, and `r2`, this can be done by:
```
cmp r0, r1 # if (r0 > r1)
movgt r3, r0 # then r3 = r0 (r3 is max)
movgt r4, r1 # then r4 = r1 (r4 is min)
movle r3, r1 # else r3 = r1
movle r4, r0 # else r4 = r0
cmp r2, r3 # if (r2 > r3)
movgt r3, r2 # then r3 = r2
cmp r4, r2 # if (r4 > r2)
movgt r4, r2 # then r4 = r2
add r5, r0, r1 # r5 = r0 + r1 (r5 is middle)
add r5, r5, r2 # r5 = r5 + r2
sub r5, r5, r3 # r5 = r5 - r3
sub r5, r5, r4 # r5 = r5 - r4
```
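For a quick cross-check of the arithmetic, here is the same min/max-and-subtract idea in Python (purely illustrative, not part of the assembly; the function name is made up):
```
def middle_of_three(a, b, c):
    # The middle value equals the sum minus the maximum and the minimum,
    # mirroring the register arithmetic in r3 (max), r4 (min) and r5 above.
    return a + b + c - max(a, b, c) - min(a, b, c)

assert middle_of_three(3, 1, 2) == 2
```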
|
What was the exact problem you encountered? Your `CMP` instruction is fine, and will set the status flags depending on the relative values of `R0` and `R1`, so you can then use a conditional branch (e.g. `BHI` or `BGT`) or one of the `IT` family of instructions that will allow you to execute other instructions conditionally using the same condition codes.
There's a quick reference for the Thumb-2 instruction set (as used by most of the Cortex-M devices, which I'll assume you're using in the absence of any other information) [here](http://infocenter.arm.com/help/topic/com.arm.doc.qrc0001m/QRC0001_UAL.pdf), and there's plenty more documentation on conditional branching and conditional execution on the ARM Infocenter site, for example [here](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0552a/CIHDFHCC.html) for the Cortex-M3.
| 8,744
|
56,744,322
|
I am creating an e-commerce site, and when I try adding an item to the cart it returns the error below.
It is complaining about this line of code in the view:
```
        else:
            order.items.add(order_item)
```
View
```
def add_to_cart(request, slug):
    item = get_object_or_404(Item, slug=slug)
    order_item = OrderItem.objects.get_or_create(
        item=item,
        user=request.user,
        ordered=False
    )
    order_qs = Order.objects.filter(user=request.user, ordered=False)
    if order_qs.exists():
        order = order_qs[0]
        # check if the order item is in the order
        if order.items.filter(item__slug=item.slug).exists():
            order_item.quantity += 1
            order_item.save()
        else:
            order.items.add(order_item)
    else:
        ordered_date = timezone.now()
        order = Order.objects.create(user=request.user, ordered_date=ordered_date)
        order.items.add(order_item)
    return redirect("core:product", slug=slug)
```
Model
```
class OrderItem(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    ordered = models.BooleanField(default=False)
    item = models.ForeignKey(Item, on_delete=models.CASCADE)
    quantity = models.IntegerField(default=1)

    def __str__(self):
        return f"{self.quantity} of {self.item.title}"


class Order(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    items = models.ManyToManyField(OrderItem)
    start_date = models.DateTimeField(auto_now_add=True)
    ordered_date = models.DateTimeField()
    ordered = models.BooleanField(default=False)

    def __str__(self):
        return self.user.username
```
Traceback
```
Internal Server Error: /add-to-cart/pants-2/
Traceback (most recent call last):
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/core/views.py", line 38, in add_to_cart
order.items.add(order_item)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 965, in add
through_defaults=through_defaults,
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 1092, in _add_items
'%s__in' % target_field_name: new_ids,
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/query.py", line 892, in filter
return self._filter_or_exclude(False, *args, **kwargs)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/query.py", line 910, in _filter_or_exclude
clone.query.add_q(Q(*args, **kwargs))
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1290, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1318, in _add_q
split_subq=split_subq, simple_col=simple_col,
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1251, in build_filter
condition = self.build_lookup(lookups, col, value)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/sql/query.py", line 1116, in build_lookup
lookup = lookup_class(lhs, rhs)
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/lookups.py", line 20, in __init__
self.rhs = self.get_prep_lookup()
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/related_lookups.py", line 59, in get_prep_lookup
self.rhs = [target_field.get_prep_value(v) for v in self.rhs]
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/related_lookups.py", line 59, in <listcomp>
self.rhs = [target_field.get_prep_value(v) for v in self.rhs]
File "/home/vince/Desktop/Dev-Projects/djangoEcommerce/django_project_boilerplate/env/lib/python3.6/site-packages/django/db/models/fields/__init__.py", line 966, in get_prep_value
return int(value)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'OrderItem'
[24/Jun/2019 20:49:06] "GET /add-to-cart/pants-2/ HTTP/1.1" 500 150701
```
I need it to create a new item into the cart if it doesn't exist and if it exists inside the cart it increments the similar item by 1, instead of creating the same thing all over again.
|
2019/06/24
|
[
"https://Stackoverflow.com/questions/56744322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10374065/"
] |
The Django [**`get_or_create(..)`** [Django-doc]](https://docs.djangoproject.com/en/dev/ref/models/querysets/#get-or-create) does *not* return a model instance; it returns a 2-tuple with the object and a boolean (whether it created a record or not). Or, as written in the documentation:
>
> (..)
>
>
> Returns a tuple of `(object, created)`, where `object` is the
> retrieved or created object and `created` is a boolean specifying
> whether a new object was created.
>
>
> (..)
>
>
>
You can easily fix this by using iterable unpacking however:
```
def add_to_cart(request, slug):
    item = get_object_or_404(Item, slug=slug)
    order_item, __ = OrderItem.objects.get_or_create(
        item=item,
        user=request.user,
        ordered=False
    )
    order_qs = Order.objects.filter(user=request.user, ordered=False)
    if order_qs.exists():
        order = order_qs[0]
        # check if the order item is in the order
        if order.items.filter(item__slug=item.slug).exists():
            order_item.quantity += 1
            order_item.save()
        else:
            order.items.add(order_item)
    else:
        ordered_date = timezone.now()
        order = Order.objects.create(user=request.user, ordered_date=ordered_date)
        order.items.add(order_item)
    return redirect("core:product", slug=slug)
```
Here we thus assign the result of `OrderItem.objects.get_or_create(..)` to `order_item, __`, with `__` a "throw away variable".
|
Change the assignment as below; just add `created`:
```
order_item, created = OrderItem.objects.get_or_create(
    item=item,
    user=request.user,
    ordered=False
)
```
| 8,745
|
45,582,838
|
Is there a way to convert the string **"12345678aaaa12345678bbbbbbbb"** to **"12345678-aaaa-1234-5678-bbbbbbbb"** in Python?
I am not sure how to do it, since I need to insert "-" after elements of variable lengths, say after the 8th element, then the 4th element, and so on.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45582838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8123705/"
] |
This function inserts a char at a position in a string:
```
def insert(char, position, string):
    return string[:position] + char + string[position:]
```
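Applied repeatedly to the string from the question, remembering that every insertion shifts the later offsets by one (a quick sketch; the positions were worked out by hand):
```
s = "12345678aaaa12345678bbbbbbbb"
for position in (8, 13, 18, 23):  # each '-' shifts the following offsets by one
    s = insert('-', position, s)
print(s)  # 12345678-aaaa-1234-5678-bbbbbbbb
```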
|
Python strings cannot be mutated. What we can do is create another string with the hyphen inserted where you want it.
Consider the string `s = "12345678aaaa12345678bbbbbbbb"`.
`s[:8] + '-' + s[8:]` will give you `12345678-aaaa12345678bbbbbbbb`.
You can place the hyphen wherever you wish by adjusting the `:` values.
For more ways to add the hyphen, refer to this question thread on how to insert a string at a certain position:
[Add string in a certain position in Python](https://stackoverflow.com/questions/5254445/add-string-in-a-certain-position-in-python)
| 8,746
|
14,412,907
|
I'm trying to scrape the [NDTV](http://en.wikipedia.org/wiki/NDTV) website for news titles. [This](http://archives.ndtv.com/articles/2012-01.html) is the page I'm using as a HTML source. I'm using BeautifulSoup (bs4) to handle the HTML code, and I've got everything working, except my code breaks when I encounter the hindi titles in the page I linked to.
My code so far is :
```
import urllib2
from bs4 import BeautifulSoup

htmlUrl = "http://archives.ndtv.com/articles/2012-01.html"
FileName = "NDTV_2012_01.txt"

fptr = open(FileName, "w")
fptr.seek(0)

page = urllib2.urlopen(htmlUrl)
soup = BeautifulSoup(page, from_encoding="UTF-8")

li = soup.findAll('li')
for link_tag in li:
    hypref = link_tag.find('a').contents[0]
    strhyp = str(hypref)
    fptr.write(strhyp)
    fptr.write("\n")
```
The error I get is :
```
Traceback (most recent call last):
File "./ScrapeTemplate.py", line 30, in <module>
strhyp = str(hypref)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-5: ordinal not in range(128)
```
I got the same error even when I didn't include the `from_encoding` parameter. I initially used it as `fromEncoding`, but python warned me that it was deprecated usage.
How do I fix this? From what I've read I need to either avoid the hindi titles or explicitly encode it into non-ascii text, but I don't know how to do that. Any help would be greatly appreciated!
|
2013/01/19
|
[
"https://Stackoverflow.com/questions/14412907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1765768/"
] |
What you see is a NavigableString instance (which is derived from the Python unicode type):
```
(Pdb) hypref.encode('utf-8')
'NDTV'
(Pdb) hypref.__class__
<class 'bs4.element.NavigableString'>
(Pdb) hypref.__class__.__bases__
(<type 'unicode'>, <class 'bs4.element.PageElement'>)
```
You need to convert to utf-8 using
```
hypref.encode('utf-8')
```
|
```
strhyp = hypref.encode('utf-8')
```
<http://joelonsoftware.com/articles/Unicode.html>
| 8,755
|
29,318,565
|
I am writing a raingauge precipitation calculator based on the radius of the raingauge. When I run my script, I get this error message:
```
Type de raingauge radius [cm]: 5.0
Traceback (most recent call last):
File "pluviometro.py", line 27, in <module>
area_bocal = (pi * (raio_bocal * raio_bocal)) # cm.cm
TypeError: can't multiply sequence by non-int of type 'str'
```
I am using
`raio_bocal = input("Type de raingauge radius [cm]:")`
for data input. When using Python 2 the conversion was automatic.
How can I get a `float` value from the input using Python 3?
|
2015/03/28
|
[
"https://Stackoverflow.com/questions/29318565",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/824522/"
] |
As mentioned in the [docs](https://docs.python.org/3/library/functions.html#input)
>
> The function then reads a line from input, **converts it to a string** (stripping a trailing newline), and returns that
>
>
>
So you need to type cast it to `float` explicitly
```
raio_bocal = float(input("Type de raingauge radius [cm]:"))
```
|
You need to cast to float; `input` returns a string in Python 3:
```
float(input("Type de raingauge radius [cm]:"))
```
It's probably safer to use a while loop with a try/except when casting the input.
```
while True:
    inp = input("Type de raingauge radius [cm]:")
    try:
        raio_bocal = float(inp)
        break
    except ValueError:
        print("Invalid input")
```
| 8,756
|
35,258,492
|
I have a directory containing a certificate bundle, a Python script and a Node script. Both scripts make a GET request to the same URL and are provided with the same certificate bundle. The Python script makes the request as expected however the node script throws this error:
>
> { [Error: unable to verify the first certificate] code: 'UNABLE\_TO\_VERIFY\_LEAF\_SIGNATURE' }
>
>
>
The Python script *(Python 3.4.3 and the [requests](https://github.com/kennethreitz/requests) library)*:
```
import requests
print(requests.get(url, verify='/tmp/cert/cacert.pem'))
```
The node script *(Node 4.2.6 and the [request](https://github.com/request/request) library)*:
```
var fs = require('fs');
var request = require('request');

request.get({
    url: url,
    agentOptions: {
        ca: fs.readFileSync('/tmp/cert/cacert.pem')
    }
}, function (error, response, body) {
    if (error) {
        console.log(error);
    } else {
        console.log(body);
    }
});
```
Both are using the same OpenSSL version:
```
$ python -c 'import ssl; print(ssl.OPENSSL_VERSION)'
OpenSSL 1.0.2e-fips 3 Dec 2015
$ node -pe process.versions.openssl
1.0.2e
```
I don't believe the problem to be with the certificate bundle and I don't want to turn off host verification in Node.
Does anybody know why Node is throwing this error?
|
2016/02/07
|
[
"https://Stackoverflow.com/questions/35258492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1066031/"
] |
The [documentation](https://nodejs.org/api/https.html#https_https_request_options_callback) describes the `ca` option as follows:
>
> **ca: A string, Buffer or array of strings or Buffers of trusted certificates in PEM format. If this is omitted several well known "root" CAs will be used, like VeriSign. These are used to authorize connections.**
>
>
>
So it doesn't expect a CA bundle. The fix is simple, however: just split the bundle like so:
```
var fs = require('fs');
var request = require('request');

var certs = fs.readFileSync('/tmp/cert/cacert.pem').toString().split("\n\n");

request.get({
    url: url,
    agentOptions: {
        ca: certs
    }
}, function (error, response, body) {
    if (error) {
        console.log(error);
    } else {
        console.log(body);
    }
});
```
|
Maybe you can use this module, which fixes the problem by downloading the certificates usually used by browsers.
<https://www.npmjs.com/package/ssl-root-cas>
| 8,757
|
43,935,569
|
my device will sent json data like this:
```
[{"channel":924125000, "sf":10, "time":"2017-05-11T16:56:15", "gwip":"192.168.1.125", "gwid":"00004c4978dbf5b4", "repeater":"00000000ffffffff", "systype":5, "rssi":-108.0, "snr":17.0, "snr_max":23.3, "snr_min":10.8, "macAddr":"00000000000000c3", "data":"4702483016331210179183", "frameCnt":1, "fport":2}]
```
but sometimes I receive multiple JSON payloads (two or more):
```
[{"channel":924125000, "sf":10, "time":"2017-05-11T16:56:15", "gwip":"192.168.1.125", "gwid":"00001c497b48dbf5", "repeater":"00000000ffffffff", "systype":5, "rssi":-108.0, "snr":17.0, "snr_max":23.3, "snr_min":10.8, "macAddr":"00000000050100e8", "data":"4702483016331210179183", "frameCnt":1, "fport":2}],[{"channel":924125000, "sf":10, "time":"2017-05-11T16:56:15", "gwip":"192.168.1.125", "gwid":"00001c497b48dbf5", "repeater":"00000000ffffffff", "systype":5, "rssi":-108.0, "snr":17.0, "snr_max":23.3, "snr_min":10.8, "macAddr":"00000000050100e8", "data":"4702483016331210179183", "frameCnt":1, "fport":2}]
```
When I parse the combined JSON data with
```
json_Dict = json.loads(jsonData)
```
I then get:
`File "/usr/lib/python2.7/json/decoder.py", line 369, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 303 - line 1 column 1818 (char 302 - 1817)`
How can I parse each JSON payload?
thanks for your help
|
2017/05/12
|
[
"https://Stackoverflow.com/questions/43935569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8002033/"
] |
Because you have multiple JSON arrays in your data, you can wrap them in an enclosing list before parsing:
```
json_List = json.loads('[' + jsonData + ']')
```
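For example, assuming `jsonData` holds the raw payload shown in the question, the wrapped result is a list of record lists that can be iterated normally (a sketch):
```
import json

json_list = json.loads('[' + jsonData + ']')
for batch in json_list:           # each batch is one of the original [...] arrays
    for record in batch:
        print(record["macAddr"], record["data"])
```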
|
Paste it into a tool like [JSONLINT](https://jsonlint.com/)
and you get:
>
> Error: Parse error on line 17:
> ...": 1, "fport": 2}], [{ "channel": 924
> ---------------------^
> Expecting 'EOF', got ','
>
>
>
which is the cause of your error. This is not *valid* JSON.
The correct structure would be something like `[[...],[...]]`. You have `[...],[...]`, which is not correct.
| 8,758
|
5,524,241
|
I have two custom Django fields, a `JSONField` and a `CompressedField`, both of which work well. I would like to also have a `CompressedJSONField`, and I was rather hoping I could do this:
```
class CompressedJSONField(JSONField, CompressedField):
    pass
```
but on import I get:
```
RuntimeError: maximum recursion depth exceeded while calling a Python object
```
I can find information about using models with multiple inheritance in Django, but nothing about doing the same with fields. Should this be possible? Or should I just give up at this stage?
**edit:**
Just to be clear, I don't *think* this has anything to do with the specifics of my code, as the following code has exactly the same problem:
```
class CustomField(models.TextField, models.CharField):
    pass
```
**edit 2:**
I'm using Python 2.6.6 and Django 1.3 at present. Here is the full code of my stripped-right-down test example:
`customfields.py`
-----------------
```
from django.db import models


class CompressedField(models.TextField):
    """ Standard TextField with automatic compression/decompression. """
    __metaclass__ = models.SubfieldBase
    description = 'Field which compresses stored data.'

    def to_python(self, value):
        return value

    def get_db_prep_value(self, value, **kwargs):
        return super(CompressedField, self)\
            .get_db_prep_value(value, prepared=True)


class JSONField(models.TextField):
    """ JSONField with automatic serialization/deserialization. """
    __metaclass__ = models.SubfieldBase
    description = 'Field which stores a JSON object'

    def to_python(self, value):
        return value

    def get_db_prep_save(self, value, **kwargs):
        return super(JSONField, self).get_db_prep_save(value, **kwargs)


class CompressedJSONField(JSONField, CompressedField):
    pass
```
`models.py`
-----------
```
from django.db import models
from customfields import CompressedField, JSONField, CompressedJSONField


class TestModel(models.Model):
    name = models.CharField(max_length=150)
    compressed_field = CompressedField()
    json_field = JSONField()
    compressed_json_field = CompressedJSONField()

    def __unicode__(self):
        return self.name
```
as soon as I add the `compressed_json_field = CompressedJSONField()` line I get errors when initializing Django.
|
2011/04/02
|
[
"https://Stackoverflow.com/questions/5524241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/88411/"
] |
After doing a few quick tests I found that if you remove the `__metaclass__` from the JSON and Compressed fields and put it in the CompressedJSON field, it compiles. If you then need the JSON or Compressed fields, subclass them and just add `__metaclass__ = models.SubfieldBase`.
I have to admit that I didn't do any heavy testing with this:
```
from django.db import models


class CompressedField(models.TextField):
    """ Standard TextField with automatic compression/decompression. """
    description = 'Field which compresses stored data.'

    def to_python(self, value):
        return value

    def get_db_prep_value(self, value, **kwargs):
        return super(CompressedField, self).get_db_prep_value(value, prepared=True)


class JSONField(models.TextField):
    """ JSONField with automatic serialization/deserialization. """
    description = 'Field which stores a JSON object'

    def to_python(self, value):
        return value

    def get_db_prep_save(self, value, **kwargs):
        return super(JSONField, self).get_db_prep_save(value, **kwargs)


class CompressedJSONField(JSONField, CompressedField):
    __metaclass__ = models.SubfieldBase


class TestModel(models.Model):
    name = models.CharField(max_length=150)
    #compressed_field = CompressedField()
    #json_field = JSONField()
    compressed_json_field = CompressedJSONField()

    def __unicode__(self):
        return self.name
```
If you then want to use the JSON and Compressed fields separately, I assume this idea will work:
```
class JSONFieldSubClass(JSONField):
    __metaclass__ = models.SubfieldBase
```
Honestly ... I don't really understand any of this.
**EDIT base method hack**
```
class CompressedJSONField(JSONField, CompressedField):
    __metaclass__ = models.SubfieldBase

    def to_python(self, value):
        value = JSONField.to_python(self, value)
        value = CompressedField.to_python(self, value)
        return value
```
The other way is to give the `to_python()` methods on the base classes unique names and call them from your inherited class's `to_python()` method,
or maybe check out this [answer](https://stackoverflow.com/questions/2611892/get-python-class-parents/2611897#2611897)
**EDIT**
After some reading: if you implement a call to `super(class, self).method(args)` in the first base's `to_python()`, then it will call the second base. If you use `super` consistently, you shouldn't have any problems. <http://docs.python.org/library/functions.html#super> is worth checking out, and so is <http://www.artima.com/weblogs/viewpost.jsp?thread=237121>
```
class base1(object):
    def name(self, value):
        print "base1", value
        super(base1, self).name(value)

    def to_python(self, value):
        value = value + " base 1 "
        if hasattr(super(base1, self), "to_python"):
            value = super(base1, self).to_python(value)
        return value


class base2(object):
    def name(self, value):
        print "base2", value

    def to_python(self, value):
        value = value + " base 2 "
        if hasattr(super(base2, self), "to_python"):
            value = super(base2, self).to_python(value)
        return value


class superClass(base1, base2):
    def name(self, value):
        super(superClass, self).name(value)
        print "super Class", value
```
|
It is hard to understand when exactly you are getting that error. But looking at the Django code, there is a similar implementation (multiple inheritance);
refer to: **class ImageFieldFile(ImageFile, FieldFile)**
in django/db/models/fields
| 8,759
|
430,226
|
I need to poll a web service, in this case twitter's API, and I'm wondering what the conventional wisdom is on this topic. I'm not sure whether this is important, but I've always found feedback useful in the past.
A couple scenarios I've come up with:
1. The querying process starts every X seconds, eg a cron job runs a python script
2. A process continually loops and queries at each iteration, eg ... well, here is where I enter unfamiliar territory. Do I just run a python script that doesn't end?
Thanks for your advice.
ps - regarding the particulars of twitter: I know that it sends emails for following and direct messages, but sometimes one might want the flexibility of parsing @replies. In those cases, I believe polling is as good as it gets.
pps - twitter limits bots to 100 requests per 60 minutes. I don't know if this also limits web scraping or rss feed reading. Anyone know how easy or hard it is to be whitelisted?
Thanks again.
|
2009/01/10
|
[
"https://Stackoverflow.com/questions/430226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
"Do I just run a python script that doesn't end?"
How is this unfamiliar territory?
```
import time

polling_interval = 36.0  # (100 requests in 3600 seconds)

running = True
while running:
    start = time.clock()
    poll_twitter()
    anything_else_that_seems_important()
    work_duration = time.clock() - start
    # never pass a negative duration to sleep if the work ran long
    time.sleep(max(0.0, polling_interval - work_duration))
```
It's just a loop.
|
You should have a page that is like a ping or heartbeat page. Then you have another process that "tickles" or hits that page; usually you can do this in the control panel of your web host, or use a cron job if you have local access. That script can keep statistics of how often it has polled in a database or some data store, and then you poll the service as often as you really need to, of course limiting it to whatever the provider's limit is. You definitely don't want to rely on a Python script that "doesn't end." :)
| 8,760
|
42,673,016
|
I tried to make a checkbutton which is supposed to activate a function "rond", but it's not working... What have I done wrong?
```
from tkinter import *

def rond():
    if okok.get() == 1:
        print("ok")

okok = BooleanVar()
okok.set(0)

root = Tk()
can = Canvas(root, width=200, height=150, bg="light yellow")
can.bind("<ButtonPress-1>", variable=okok, onvalue=1, offvalue=0, command=rond)
can.pack(side="top")
root.mainloop()
```
After running it, this appears:
`Traceback (most recent call last):
File "/PycharmProjects/untitled/testtest.py", line 7, in <module>
okok = BooleanVar()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/tkinter/__init__.py", line 389, in __init__
Variable.__init__(self, master, value, name)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/tkinter/__init__.py", line 233, in __init__
self._root = master._root()
AttributeError: 'NoneType' object has no attribute '_root'`
|
2017/03/08
|
[
"https://Stackoverflow.com/questions/42673016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7678528/"
] |
There are three problems:
1. The exception you are getting is because you have to create `root = Tk()` before the `BooleanVar`.
2. As already noted, you should use the [`Checkbutton`](http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/checkbutton.html) widget instead of `Canvas`. The `command` then goes directly into the constructor; no `bind` neaded. Also, your `onvalue` and `offvalue` are the same as the default, so those are not really needed, either.
```
can = Checkbutton(root, width=20, height=15, bg="light yellow",
                  variable=okok, onvalue=1, offvalue=0, command=rond)
```
3. Without an image icon, the `width` and `height` will be in characters (i.e. lines and columns of text), so the numbers you entered are much too high. Alternatively, provide an image icon.
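Putting those three points together, a minimal working version might look like this (a sketch; the label text is made up):
```
from tkinter import *

def rond():
    if okok.get():
        print("ok")

root = Tk()                  # create the root before any Variable
okok = BooleanVar()
okok.set(False)
can = Checkbutton(root, text="rond", bg="light yellow",
                  variable=okok, onvalue=1, offvalue=0, command=rond)
can.pack(side="top")
root.mainloop()
```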
|
It looks like you're using a Canvas and not a Checkbutton. I would try something like `cbutton = Checkbutton(root, etc, etc)`,
or check out effbot.org for a good resource.
| 8,761
|
70,150,128
|
This is my project structure
[](https://i.stack.imgur.com/BrsjM.png)
I am able to access the default SQLite database `db.sqlite3` created by Django, by importing the models directly inside of my views files
Like - `from basic.models import table1`
Now, I have another database called `UTF.db` which was created by someone else, and I want to access its data and perform normal QuerySet operations on the retrieved data
The problem is I don't know how to import the tables inside that database, as they are not inside any model file inside my project as it's created by someone else
I tried adding the tables inside the `UTF.db` database to a `models.py` file by first adding it to the `settings.py` file like the following
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    },
    'otherdb': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'UTF.db',
    }
}
```
And then using the `inspectdb` command to add the tables to an existing `models.py` file
The command I tried out -
`python manage.py inspectdb > models.py`
But that just causes my models.py file to get emptied out.
Does anyone know how this can be solved?
In the end, I wish to import the table data inside of my views files by importing the respective model
|
2021/11/29
|
[
"https://Stackoverflow.com/questions/70150128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11827709/"
] |
You can specify a specific database, as described in the [documentation](https://docs.djangoproject.com/en/3.2/ref/django-admin/#cmdoption-inspectdb-database):
```
python manage.py inspectdb --database=otherdb > your_app/models.py
```
Also if possible putting otherdb in a different App is better.
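Once the generated models are in place, reads can be routed to the second database explicitly with `using()` (a sketch; `SomeTable` stands in for whatever `inspectdb` generated):
```
from yourapp.models import SomeTable  # hypothetical generated model

rows = SomeTable.objects.using('otherdb').all()
```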
|
You can attach the second database to the first one and use it from within the first one. You can use tables from both databases in a single SQL query.
Here is the doc: <https://www.sqlite.org/lang_attach.html>.
```
attach database '/path/to/dbfile.sqlite' as db_remote;
select *
from some_table
join db_remote.remote_table on ....
detach database db_remote;
```
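Outside of Django, the same attach trick works from Python's built-in `sqlite3` module (a sketch; `some_table` is a placeholder name):
```
import sqlite3

con = sqlite3.connect('db.sqlite3')
con.execute("ATTACH DATABASE 'UTF.db' AS db_remote")
for row in con.execute("SELECT * FROM db_remote.some_table"):
    print(row)
con.execute("DETACH DATABASE db_remote")
```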
| 8,762
|
6,282,519
|
I'm not sure if I'm even asking this question correctly. I just built my first real program and I want to make it available to people in my office. I'm not sure if I will have access to the shared server, but I was hoping I could simply package the program (I hope I'm using this term correctly) and upload it to a website for my coworkers to download.
I know how to zip a file, but something tells me it's a little more complicated than that :) In fact, some of the people in my office who need the program installed do not have python on their computers already, and I would rather avoid asking everyone to install python before downloading my .py files from my hosting server.
So, is there an easy way to package my program, along with python and the other dependencies, for simple distribution from a website? I tried searching for the answer but I can't find exactly what I'm looking for. Oh, and since this is the first time I have done this- are there any precautions I need to take when sharing these files so that everything runs smoothly?
|
2011/06/08
|
[
"https://Stackoverflow.com/questions/6282519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1382299/"
] |
[PyInstaller](http://www.pyinstaller.org/) or [py2exe](http://www.py2exe.org/) can package your Python program.
PyInstaller is actively maintained; py2exe has not been updated for at least a year. I've used each with success.
Also there is [cx\_Freeze](http://cx-freeze.sourceforge.net/) which I have not used.
|
Take a look at <http://www.py2exe.org/>
| 8,764
|
63,890,399
|
I am working with a dataframe which looks similar to this
```
Ind Pos Sample Ct LogConc RelConc
1 B1 wt1A 26.93 -2.0247878 0.009445223
2 B2 wt1A 27.14 -2.0960951 0.008015026
3 B3 wt1B 26.76 -1.9670628 0.010787907
4 B4 wt1B 26.94 -2.0281834 0.009371662
5 B5 wt1C 26.01 -1.7123939 0.019391264
6 B6 wt1C 26.08 -1.7361630 0.018358492
7 B7 wt1D 25.68 -1.6003396 0.025099232
8 B8 wt1D 25.75 -1.6241087 0.023762457
9 B9 wt1E 22.11 -0.3881154 0.409151879
10 B10 wt1E 22.21 -0.4220713 0.378380453
11 B11 dko1A 22.20 -0.4186757 0.381350463
12 B12 dko1A 22.10 -0.3847199 0.412363423
```
My goal is to calculate the sample wise average of the RelConc, which would result in a dataframe which would look something like this.
```
Ind Pos Sample Ct LogConc RelConc AverageRelConc
1 B1 wt1A 26.93 -2.0247878 0.009445223 0.008730124
2 B2 wt1A 27.14 -2.0960951 0.008015026 0.008730124
3 B3 wt1B 26.76 -1.9670628 0.010787907 0.010079785
4 B4 wt1B 26.94 -2.0281834 0.009371662 0.010079785
5 B5 wt1C 26.01 -1.7123939 0.019391264 0.018874878
6 B6 wt1C 26.08 -1.7361630 0.018358492 0.018874878
7 B7 wt1D 25.68 -1.6003396 0.025099232 0.024430845
8 B8 wt1D 25.75 -1.6241087 0.023762457 0.024430845
9 B9 wt1E 22.11 -0.3881154 0.409151879 0.393766166
10 B10 wt1E 22.21 -0.4220713 0.378380453 0.393766166
11 B11 dko1A 22.20 -0.4186757 0.381350463 0.396856943
12 B12 dko1A 22.10 -0.3847199 0.412363423 0.396856943
```
I am fairly new to R and have no idea how to accomplish such a seemingly simple task. In python, I'd probably loop through each row and check if I have encountered a new sample name and then calculate the average for all samples above. However this seems not very "R like".
If somebody could point me to a solution, I'd be very happy!
Cheers!
|
2020/09/14
|
[
"https://Stackoverflow.com/questions/63890399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11637268/"
] |
In `base R`, we can use `ave` and it is very fast
```
df1$AverageRelConc <- with(df1, ave(RelConc, Sample))
```
-output
```
df1$AverageRelConc
#[1] 0.008730125 0.008730125 0.010079784 0.010079784 0.018874878 0.018874878 0.024430844 0.024430844 0.393766166 0.393766166
#[11] 0.396856943 0.396856943
```
---
Or using `tidyverse`, we group by 'Sample' and get the `mean` of 'RelConc'
```
library(dplyr)
df1 %>%
group_by(Sample) %>%
mutate(AverageRelConc = mean(RelConc, na.rm = TRUE))
```
-output
```
# A tibble: 12 x 7
# Groups: Sample [6]
# Ind Pos Sample Ct LogConc RelConc AverageRelConc
# <int> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
# 1 1 B1 wt1A 26.9 -2.02 0.00945 0.00873
# 2 2 B2 wt1A 27.1 -2.10 0.00802 0.00873
# 3 3 B3 wt1B 26.8 -1.97 0.0108 0.0101
# 4 4 B4 wt1B 26.9 -2.03 0.00937 0.0101
# 5 5 B5 wt1C 26.0 -1.71 0.0194 0.0189
# 6 6 B6 wt1C 26.1 -1.74 0.0184 0.0189
# 7 7 B7 wt1D 25.7 -1.60 0.0251 0.0244
# 8 8 B8 wt1D 25.8 -1.62 0.0238 0.0244
# 9 9 B9 wt1E 22.1 -0.388 0.409 0.394
#10 10 B10 wt1E 22.2 -0.422 0.378 0.394
#11 11 B11 dko1A 22.2 -0.419 0.381 0.397
#12 12 B12 dko1A 22.1 -0.385 0.412 0.397
```
### data
```
df1 <- structure(list(Ind = 1:12, Pos = c("B1", "B2", "B3", "B4", "B5",
"B6", "B7", "B8", "B9", "B10", "B11", "B12"), Sample = c("wt1A",
"wt1A", "wt1B", "wt1B", "wt1C", "wt1C", "wt1D", "wt1D", "wt1E",
"wt1E", "dko1A", "dko1A"), Ct = c(26.93, 27.14, 26.76, 26.94,
26.01, 26.08, 25.68, 25.75, 22.11, 22.21, 22.2, 22.1), LogConc = c(-2.0247878,
-2.0960951, -1.9670628, -2.0281834, -1.7123939, -1.736163, -1.6003396,
-1.6241087, -0.3881154, -0.4220713, -0.4186757, -0.3847199),
RelConc = c(0.009445223, 0.008015026, 0.010787907, 0.009371662,
0.019391264, 0.018358492, 0.025099232, 0.023762457, 0.409151879,
0.378380453, 0.381350463, 0.412363423)), class = "data.frame",
row.names = c(NA,
-12L))
```
|
Try this `tidyverse` option:
```
library(tidyverse)
#Code
df %>% group_by(Sample) %>%
mutate(AvgRelConc=mean(RelConc,na.rm=T))
```
Output:
```
# A tibble: 12 x 7
# Groups: Sample [6]
Ind Pos Sample Ct LogConc RelConc AvgRelConc
<int> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 1 B1 wt1A 26.9 -2.02 0.00945 0.00873
2 2 B2 wt1A 27.1 -2.10 0.00802 0.00873
3 3 B3 wt1B 26.8 -1.97 0.0108 0.0101
4 4 B4 wt1B 26.9 -2.03 0.00937 0.0101
5 5 B5 wt1C 26.0 -1.71 0.0194 0.0189
6 6 B6 wt1C 26.1 -1.74 0.0184 0.0189
7 7 B7 wt1D 25.7 -1.60 0.0251 0.0244
8 8 B8 wt1D 25.8 -1.62 0.0238 0.0244
9 9 B9 wt1E 22.1 -0.388 0.409 0.394
10 10 B10 wt1E 22.2 -0.422 0.378 0.394
11 11 B11 dko1A 22.2 -0.419 0.381 0.397
12 12 B12 dko1A 22.1 -0.385 0.412 0.397
```
Some data used:
```
#Data
df <- structure(list(Ind = 1:12, Pos = c("B1", "B2", "B3", "B4", "B5",
"B6", "B7", "B8", "B9", "B10", "B11", "B12"), Sample = c("wt1A",
"wt1A", "wt1B", "wt1B", "wt1C", "wt1C", "wt1D", "wt1D", "wt1E",
"wt1E", "dko1A", "dko1A"), Ct = c(26.93, 27.14, 26.76, 26.94,
26.01, 26.08, 25.68, 25.75, 22.11, 22.21, 22.2, 22.1), LogConc = c(-2.0247878,
-2.0960951, -1.9670628, -2.0281834, -1.7123939, -1.736163, -1.6003396,
-1.6241087, -0.3881154, -0.4220713, -0.4186757, -0.3847199),
RelConc = c(0.009445223, 0.008015026, 0.010787907, 0.009371662,
0.019391264, 0.018358492, 0.025099232, 0.023762457, 0.409151879,
0.378380453, 0.381350463, 0.412363423)), class = "data.frame", row.names = c(NA,
-12L))
```
Or you could use `aggregate()` and save the results in a different dataframe and after that you can join with original `df`:
```
#Compute means
dfmeans <- aggregate(RelConc~Sample,df,mean,na.rm=T)
#Now match
df$AvgRelConc <- dfmeans[match(df$Sample,dfmeans$Sample),"RelConc"]
```
Output:
```
Ind Pos Sample Ct LogConc RelConc AvgRelConc
1 1 B1 wt1A 26.93 -2.0247878 0.009445223 0.008730125
2 2 B2 wt1A 27.14 -2.0960951 0.008015026 0.008730125
3 3 B3 wt1B 26.76 -1.9670628 0.010787907 0.010079784
4 4 B4 wt1B 26.94 -2.0281834 0.009371662 0.010079784
5 5 B5 wt1C 26.01 -1.7123939 0.019391264 0.018874878
6 6 B6 wt1C 26.08 -1.7361630 0.018358492 0.018874878
7 7 B7 wt1D 25.68 -1.6003396 0.025099232 0.024430844
8 8 B8 wt1D 25.75 -1.6241087 0.023762457 0.024430844
9 9 B9 wt1E 22.11 -0.3881154 0.409151879 0.393766166
10 10 B10 wt1E 22.21 -0.4220713 0.378380453 0.393766166
11 11 B11 dko1A 22.20 -0.4186757 0.381350463 0.396856943
12 12 B12 dko1A 22.10 -0.3847199 0.412363423 0.396856943
```
| 8,765
|
4,666,527
|
Does anyone have some good resources on learning more advanced regular expressions?
I keep having problems where I want to make sure something is not enclosed in quotation marks.
For example, I am trying to make an expression that will match lines in a Python file containing an equality, i.e.
```
a = 4
```
which is easy enough, but I am having trouble devising an expression that would be able to separate out multiple terms or ones wrapped in quotes like these:
```
a, b = b, a
a,b = "You say yes, ", "i say no"
```
|
2011/01/12
|
[
"https://Stackoverflow.com/questions/4666527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/392485/"
] |
Parsing code with regular expressions is generally not a good idea, as the grammar of a programming language is not a regular language. I'm not much of a python programmer, but I think you would be a lot better off parsing python code with python modules such as [this one](http://docs.python.org/library/parser.html) or [this one](http://docs.python.org/library/ast.html)
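For instance, the standard-library `ast` module finds assignment statements without any regular expressions at all (a sketch):
```
import ast

source = 'a, b = "You say yes, ", "i say no"'
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Assign):
        # node.targets and node.value carry the two sides of the assignment
        print(ast.dump(node))
```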
|
Python has an excellent [Language Reference](http://docs.python.org/reference/index.html) that also includes [descriptions of the lexical analysis and syntax](http://docs.python.org/reference/introduction.html#notation).
In your case both statements are [assignments](http://docs.python.org/reference/simple_stmts.html#assignment-statements) with a [list of targets](http://docs.python.org/reference/simple_stmts.html#grammar-token-target_list) on the left hand side and and a [list of expressions](http://docs.python.org/reference/expressions.html#grammar-token-expression_list) on the right hand side.
But since parts of that grammar are context-free and not regular, you can’t use regular expressions (unless they support some kind of recursive patterns). So better use a proper parser [as Jonas H suggested](https://stackoverflow.com/questions/4666527/reqular-expression-to-seperate-equalitys/4666575#4666575).
| 8,766
|
55,338,811
|
I'm currently working on a small project to learn Python. This project creates a random forest, then sets the forest on fire to simulate a forest fire. So I managed to create the forest using a function. The forest is just an array of 0s and 1s: 0 to represent water, 1 to represent a tree.
So now I'm really stuck on how I can simulate a forest fire with my array. I do know the logic behind how the fire should be started and spread, but I do not know how I should write it as code.
The logic is that:
1. I'll use 2 to represent fire, and 3 to represent burnt areas. So when trees get burned, the 1s in the array become 2, and then 3. Water, represented by 0, is not affected. I think this part needs to be done with a loop: one iteration changes 1 into 2, the next changes 2 into 3, and so on until the end of the array.
2. The fire needs to start from the center of the forest, so I need to figure out the positional index of the center of the array and check that it is 1, and not 0, to initiate the fire. This can be done with an if-else condition.
3. The fire will then spread outwards to adjacent 1s in the north, south, east, and west directions, and so on.
So I'm having trouble writing up the loops to replace 1 with 2, then 2 with 3, such that the fire spreads from one tree to another (a sketch of one possible spread step follows the example output below).
I managed to write a function to create the random forest. The problem is with setting the forest on fire. I've tried to write some for-loops, but I really have no idea how I should approach this problem.
```
import numpy as np

# Define parameters for the createForest function
width = 5
height = 5
density = 0.7  # probability of spawning a tree

# Making a random forest
def createForest(width, height, density):
    forest = np.random.choice(2, size=(width, height), p=[(1 - density), density])
    return forest

print(createForest(width, height, density))
forest = createForest(width, height, density)  # stores the generated forest
```
This would print out an array of 0s and 1s in random order:
```
[[1 0 1 1 1]
 [1 1 1 1 1]
 [0 0 1 1 1]
 [1 1 1 1 1]
 [1 1 1 1 1]]
```
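A sketch of the spread step described in points 1-3 above (one possible approach, assuming the `forest` array from `createForest`; the function name is made up):
```
import numpy as np

def spread_fire(forest):
    # 2 = burning, 3 = burnt; water (0) is never touched
    rows, cols = forest.shape
    cy, cx = rows // 2, cols // 2
    if forest[cy, cx] == 1:          # ignite the centre cell if it holds a tree
        forest[cy, cx] = 2
    while (forest == 2).any():
        burning = np.argwhere(forest == 2)
        for y, x in burning:
            # spread north/south/east/west to adjacent trees
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < rows and 0 <= nx < cols and forest[ny, nx] == 1:
                    forest[ny, nx] = 2
        for y, x in burning:         # cells burning this step burn out
            forest[y, x] = 3
    return forest
```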
|
2019/03/25
|
[
"https://Stackoverflow.com/questions/55338811",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11255128/"
] |
Use `not exists`:
```
select mygroup from table_name t1
where not exists( select 1 from table_name t2 where t1.var2=t2.var1
and t1.mygroup=t2.mygroup)
and t1.var2 is not null
```
|
Another approach is to use a CTE and temp tables:
1. Find out the var2 values that are not included in var1 for the same mygroup.
2. List the mygroups, grouping those whose var2 is in the list you found in step 1.
Try below:
```
create table #temp (mygroup int, var1 int, var2 int)
insert into #temp values
(1 , 1, null),
(1 , 2, 1),
(1 , 3, 2),
(1 , 4, null),
(2 , 23, 23 ),
(2 , 24, 20 ),
(2 , 26, null),
(3 , 30, 10),
(3 , 20, null),
(3 , 10, null)
;with cte as (
select t.mygroup, t.var1, t2.var2
from #temp t
inner join #temp t2 on t2.var2=t.var1 and t2.mygroup = t.mygroup
)
select var2
into #notIncludeList
from #temp
where var2 not in (select var1 from cte)
select mygroup
from #temp
where var2 in (select var2 from #notIncludeList)
group by mygroup
```
This solution worked in MS SQL Server 2014.
| 8,768
|
59,289,903
|
Please help me make sense of this big fat error output. At this point I don't know which end is up. I have been spinning my wheels for days on this.
This is **not** the first/only package installation that has given me these errors, but the project ran fine anyway, so I ignored it. Now I want a new package, and it won't install. I did not set up this project.
Using *React, Webpack, and Yarn* on *MacOS 10.14.6* with *Node* running through `nvm`. I have *Xcode command line tools* installed independently, and full *XCode* installed. I have not disturbed the `yarn.lock` file since receiving this project, but I believe I did update a few packages through yarn at some point.
```
myprojectweb $ yarn add redux-persist
yarn add v1.19.1
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
warning " > @date-io/moment@0.0.2" has incorrect peer dependency "moment@^2.22.2".
warning " > @material-ui/icons@3.0.2" has incorrect peer dependency "@material-ui/core@^3.0.0".
warning " > @material-ui/pickers@3.1.2" has unmet peer dependency "@date-io/core@^1.3.6".
warning " > connected-react-router@6.5.0" has unmet peer dependency "react-router@^4.3.1 || ^5.0.0".
warning " > mdi-material-ui@5.7.0" has incorrect peer dependency "@material-ui/core@^1.0.0 || ^3.0.0".
warning " > redux-persist@6.0.0" has incorrect peer dependency "redux@>4.0.0".
[4/4] Building fresh packages...
[1/4] ⠐ node-sass
[2/4] ⠐ deasync
[-/4] ⠐ waiting...
error /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync: Command failed.
Exit code: 1
Command: node ./build.js
Arguments:
Directory: /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync
Output:
gyp info it worked if it ends with ok
gyp info using node-gyp@5.0.5
gyp info using node@12.13.0 | darwin | x64
gyp info find Python using Python version 2.7.16 found at "/usr/bin/python"
gyp info spawn /usr/bin/python
gyp info spawn args [
gyp info spawn args '/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/Users/csf/Library/Caches/node-gyp/12.13.0',
gyp info spawn args '-Dnode_gyp_dir=/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/Users/csf/Library/Caches/node-gyp/12.13.0/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
CXX(target) Release/obj.target/deasync/src/deasync.o
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:221:
In file included from ../../nan/nan_converters.h:67:
../../nan/nan_converters_43_inl.h:22:1: warning: 'ToBoolean' is deprecated: ToBoolean can never throw. Use Local version. [-Wdeprecated-declarations]
X(Boolean)
^
../../nan/nan_converters_43_inl.h:18:12: note: expanded from macro 'X'
val->To ## TYPE(isolate->GetCurrentContext()) \
^
<scratch space>:213:1: note: expanded from here
ToBoolean
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:2567:3: note: 'ToBoolean' has been explicitly marked deprecated here
V8_DEPRECATED("ToBoolean can never throw. Use Local version.",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:221:
In file included from ../../nan/nan_converters.h:67:
../../nan/nan_converters_43_inl.h:40:1: warning: 'BooleanValue' is deprecated: BooleanValue can never throw. Use Isolate version. [-Wdeprecated-declarations]
X(bool, Boolean)
^
../../nan/nan_converters_43_inl.h:37:15: note: expanded from macro 'X'
return val->NAME ## Value(isolate->GetCurrentContext()); \
^
<scratch space>:220:1: note: expanded from here
BooleanValue
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:2605:3: note: 'BooleanValue' has been explicitly marked deprecated here
V8_DEPRECATED("BooleanValue can never throw. Use Isolate version.",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:103:42: error: no viable conversion from 'v8::Isolate *' to 'Local<v8::Context>'
return scope.Escape(v8::Function::New( isolate
^~~~~~~
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:183:7: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from 'v8::Isolate *' to 'const v8::Local<v8::Context> &' for 1st argument
class Local {
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:183:7: note: candidate constructor (the implicit move constructor) not viable: no known conversion from 'v8::Isolate *' to 'v8::Local<v8::Context> &&' for 1st argument
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:187:13: note: candidate template ignored: could not match 'Local<type-parameter-0-0>' against 'v8::Isolate *'
V8_INLINE Local(Local<S> that)
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:4171:22: note: passing argument to parameter 'context' here
Local<Context> context, FunctionCallback callback,
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:337:37: error: too few arguments to function call, expected 2, have 1
return v8::StringObject::New(value).As<v8::StringObject>();
~~~~~~~~~~~~~~~~~~~~~ ^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:5426:3: note: 'New' declared here
static Local<Value> New(Isolate* isolate, Local<String> value);
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:337:58: error: expected '(' for function-style cast or type construction
return v8::StringObject::New(value).As<v8::StringObject>();
~~~~~~~~~~~~~~~~^
../../nan/nan_implementation_12_inl.h:337:60: error: expected expression
return v8::StringObject::New(value).As<v8::StringObject>();
^
```
**... MORE SIMILAR TRACE ERRORS, TOO MANY CHARACTERS FOR STACKOVERFLOW ...**
```
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:563:3: note: 'MarkIndependent' has been explicitly marked deprecated here
V8_DEPRECATED(
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../src/deasync.cc:3:
In file included from ../../nan/nan.h:2690:
../../nan/nan_object_wrap.h:124:26: error: no member named 'IsNearDeath' in 'Nan::Persistent<v8::Object, v8::NonCopyablePersistentTraits<v8::Object> >'
assert(wrap->handle_.IsNearDeath());
~~~~~~~~~~~~~ ^
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/assert.h:93:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __FILE__, __LINE__, #e) : (void)0)
^
9 warnings and 8 errors generated.
make: *** [Release/obj.target/deasync/src/deasync.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:210:5)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)
gyp ERR! System Darwin 18.7.0
gyp ERR! command "/Users/csf/.nvm/versions/node/v12.13.0/bin/node" "/Users/csf/.nvm/versions/node/v12.13.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/deasync
gyp ERR! node -v v12.13.0
gyp ERR! node-gyp -v v5.0.5
warning Error running install script for optional dependency: "/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents: Command failed.
Exit code: 1
Command: node install
Arguments:
Directory: /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents
Output:
node-pre-gyp info it worked if it ends with ok
node-pre-gyp info using node-pre-gyp@0.10.0
node-pre-gyp info using node@12.13.0 | darwin | x64
node-pre-gyp info check checked for \"/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\" (not found)
node-pre-gyp http GET https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.4/fse-v1.2.4-node-v72-darwin-x64.tar.gz
node-pre-gyp http 404 https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.4/fse-v1.2.4-node-v72-darwin-x64.tar.gz
node-pre-gyp WARN Tried to download(404): https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.4/fse-v1.2.4-node-v72-darwin-x64.tar.gz
node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.4 and node@12.13.0 (node-v72 ABI, unknown) (falling back to source compile with node-gyp)
node-pre-gyp http 404 status code downloading tarball https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.4/fse-v1.2.4-node-v72-darwin-x64.tar.gz
gyp info it worked if it ends with ok
gyp info using node-gyp@5.0.3
gyp info using node@12.13.0 | darwin | x64
gyp info ok
gyp info it worked if it ends with ok
gyp info using node-gyp@5.0.3
gyp info using node@12.13.0 | darwin | x64
gyp info find Python using Python version 2.7.16 found at \"/usr/bin/python\"
gyp info spawn /usr/bin/python
gyp info spawn args [
gyp info spawn args '/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/Users/csf/Library/Caches/node-gyp/12.13.0',
gyp info spawn args '-Dnode_gyp_dir=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/Users/csf/Library/Caches/node-gyp/12.13.0/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp info ok
gyp info it worked if it ends with ok
gyp info using node-gyp@5.0.3
gyp info using node@12.13.0 | darwin | x64
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
SOLINK_MODULE(target) Release/.node
CXX(target) Release/obj.target/fse/fsevents.o
In file included from ../fsevents.cc:6:
In file included from ../../nan/nan.h:221:
In file included from ../../nan/nan_converters.h:67:
../../nan/nan_converters_43_inl.h:22:1: warning: 'ToBoolean' is deprecated: ToBoolean can never throw. Use Local version. [-Wdeprecated-declarations]
X(Boolean)
^
../../nan/nan_converters_43_inl.h:18:12: note: expanded from macro 'X'
val->To ## TYPE(isolate->GetCurrentContext()) \
      ^
<scratch space>:213:1: note: expanded from here
ToBoolean
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:2567:3: note: 'ToBoolean' has been explicitly marked deprecated here
V8_DEPRECATED(\"ToBoolean can never throw. Use Local version.\",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../fsevents.cc:6:
In file included from ../../nan/nan.h:221:
In file included from ../../nan/nan_converters.h:67:
../../nan/nan_converters_43_inl.h:40:1: warning: 'BooleanValue' is deprecated: BooleanValue can never throw. Use Isolate version. [-Wdeprecated-declarations]
X(bool, Boolean)
^
../../nan/nan_converters_43_inl.h:37:15: note: expanded from macro 'X'
return val->NAME ## Value(isolate->GetCurrentContext()); \
       ^
<scratch space>:220:1: note: expanded from here
BooleanValue
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:2605:3: note: 'BooleanValue' has been explicitly marked deprecated here
V8_DEPRECATED(\"BooleanValue can never throw. Use Isolate version.\",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../fsevents.cc:6:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:103:42: error: no viable conversion from 'v8::Isolate *' to 'Local<v8::Context>'
return scope.Escape(v8::Function::New( isolate
^~~~~~~
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:183:7: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from 'v8::Isolate *' to 'const v8::Local<v8::Context> &' for 1st argument
class Local {
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:183:7: note: candidate constructor (the implicit move constructor) not viable: no known conversion from 'v8::Isolate *' to 'v8::Local<v8::Context> &&' for 1st argument
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:187:13: note: candidate template ignored: could not match 'Local<type-parameter-0-0>' against 'v8::Isolate *'
V8_INLINE Local(Local<S> that)
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:4171:22: note: passing argument to parameter 'context' here
Local<Context> context, FunctionCallback callback,
^
In file included from ../fsevents.cc:6:
In file included from ../../nan/nan.h:222:
In file included from ../../nan/nan_new.h:189:
../../nan/nan_implementation_12_inl.h:337:37: error: too few arguments to function call, expected 2, have 1
return v8::StringObject::New(value).As<v8::StringObject>();
~~~~~~~~~~~~~~~~~~~~~ ^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:5426:3: note: 'New' declared here
static Local<Value> New(Isolate* isolate, Local<String> value);
^
```
**... MORE SIMILAR TRACE ERRORS, TOO MANY CHARACTERS FOR STACKOVERFLOW ...**
```
In file included from ../fsevents.cc:82:
../src/constants.cc:107:11: warning: 'Set' is deprecated: Use maybe version [-Wdeprecated-declarations]
object->Set(Nan::New<v8::String>(\"kFSEventStreamEventFlagItemIsDir\").ToLocalChecked(), Nan::New<v8::Integer>(kFSEventStreamEventFlagItemIsDir));
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:3402:3: note: 'Set' has been explicitly marked deprecated here
V8_DEPRECATED(\"Use maybe version\",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
In file included from ../fsevents.cc:82:
../src/constants.cc:108:11: warning: 'Set' is deprecated: Use maybe version [-Wdeprecated-declarations]
object->Set(Nan::New<v8::String>(\"kFSEventStreamEventFlagItemIsSymlink\").ToLocalChecked(), Nan::New<v8::Integer>(kFSEventStreamEventFlagItemIsSymlink));
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8.h:3402:3: note: 'Set' has been explicitly marked deprecated here
V8_DEPRECATED(\"Use maybe version\",
^
/Users/csf/Library/Caches/node-gyp/12.13.0/include/node/v8config.h:311:29: note: expanded from macro 'V8_DEPRECATED'
declarator __attribute__((deprecated(message)))
^
../fsevents.cc:85:16: error: variable has incomplete type 'void'
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
^
../fsevents.cc:85:31: error: no member named 'Handle' in namespace 'v8'
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
~~~~^
../fsevents.cc:85:48: error: expected '(' for function-style cast or type construction
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
~~~~~~~~~~^
../fsevents.cc:85:50: error: use of undeclared identifier 'exports'
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
^
../fsevents.cc:85:58: error: expected ';' after top level declarator
void FSEvents::Initialize(v8::Handle<v8::Object> exports) {
^
;
30 warnings and 14 errors generated.
make: *** [Release/obj.target/fse/fsevents.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/lib/build.js:196:23)
gyp ERR! stack at ChildProcess.emit (events.js:210:5)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)
gyp ERR! System Darwin 18.7.0
gyp ERR! command \"/Users/csf/.nvm/versions/node/v12.13.0/bin/node\" \"/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/bin/node-gyp.js\" \"build\" \"--fallback-to-build\" \"--module=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\" \"--module_name=fse\" \"--module_path=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64\" \"--napi_version=5\" \"--node_abi_napi=napi\"
gyp ERR! cwd /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents
gyp ERR! node -v v12.13.0
gyp ERR! node-gyp -v v5.0.3
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/Users/csf/.nvm/versions/node/v12.13.0/bin/node /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node --module_name=fse --module_path=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64 --napi_version=5 --node_abi_napi=napi' (1)
node-pre-gyp ERR! stack at ChildProcess.<anonymous> (/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:83:29)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:210:5)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:1021:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:283:5)
node-pre-gyp ERR! System Darwin 18.7.0
node-pre-gyp ERR! command \"/Users/csf/.nvm/versions/node/v12.13.0/bin/node\" \"/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\" \"install\" \"--fallback-to-build\"
node-pre-gyp ERR! cwd /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents
node-pre-gyp ERR! node -v v12.13.0
node-pre-gyp ERR! node-pre-gyp -v v0.10.0
node-pre-gyp ERR! not ok
Failed to execute '/Users/csf/.nvm/versions/node/v12.13.0/bin/node /Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node --module_name=fse --module_path=/Users/csf/Documents/ORG/Local on disk/Projects - local/myproject/myprojectweb/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64 --napi_version=5 --node_abi_napi=napi' (1)"
info This module is OPTIONAL, you can safely ignore this error
myprojectweb $
```
|
2019/12/11
|
[
"https://Stackoverflow.com/questions/59289903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9068961/"
] |
I don't know *why* this worked, but running a regular `yarn upgrade` cleared the errors. I still got warnings about dependencies.
I should have saved the terminal output from `yarn outdated` before and after the upgrade, but alas, I did not.
I still show a few mismatched dependencies.
|
deasync tries to compile itself if it cannot find a precompiled version for the current Node version.
This compilation has additional requirements, so it is easier to use deasync/Node combinations for which precompiled packages exist:
* <https://github.com/abbr/deasync/issues/106>
* <https://github.com/abbr/deasync-bin>
| 8,769
|
64,870,829
|
Let's say I have a list
`list = ['aa', 'bb', 'aa', 'aaa', 'bbb', 'bbbb', 'cc']`
if you do `list.sort()`
you get back
`['aa', 'aa', 'aaa', 'bb', 'bbb', 'bbbb', 'cc']`
Is there a way in **Python 3** to get
`['aaa', 'aa', 'aa', 'bbbb', 'bbb', 'bb', 'cc']`?
So, within the same lexicographical group, the one with the greater length comes first.
Thanks a ton for your help!
|
2020/11/17
|
[
"https://Stackoverflow.com/questions/64870829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10688867/"
] |
You can redefine what less-than means for the strings with a custom class. Use that class as the key for `list.sort` or `sorted`.
```
class C:
    def __init__(self, val):
        self.val = val

    def __lt__(self, other):
        min_len = min((len(self.val), len(other.val)))
        if self.val[:min_len] == other.val[:min_len]:
            return len(self.val) > len(other.val)
        else:
            return self.val < other.val

lst = ['aa', 'bb', 'aa', 'aaa', 'bbb', 'bbbb', 'cc']
slist = sorted(lst, key=C)
print(slist)
```
|
Tuples are ordered lexicographically, so you can use a tuple of (first character of string, negative length) as the sort key:
```python
list.sort(key=lambda s: (s[0], -len(s)))
```
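For example, applied to the list from the question (a quick check; `lst` is used to avoid shadowing the built-in `list`):

```python
lst = ['aa', 'bb', 'aa', 'aaa', 'bbb', 'bbbb', 'cc']
lst.sort(key=lambda s: (s[0], -len(s)))
print(lst)  # ['aaa', 'aa', 'aa', 'bbbb', 'bbb', 'bb', 'cc']
```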
| 8,770
|
36,590,496
|
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "random_uuid",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}'''
headers = {
'content-type': "application/x-www-form-urlencoded",
}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
In the above script, I created a UUID and assigned it to the variable `random_uuid`. I want the UUID that was created to be substituted inside json for the value `random_uuid` for the key `id`. But, the above script is not substituting the value of `random_uuid` and just using the variable `random_uuid` itself.
Can anyone please tell me what I'm doing wrong here?
Thanks in advance.
|
2016/04/13
|
[
"https://Stackoverflow.com/questions/36590496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6133947/"
] |
You can use string formatting for that.
In your JSON string, replace random\_uuid with %s, then do:
```
payload = payload % random_uuid
```
Another option is to use `json.dumps` to create the json:
```
payload_dict = {
'id': random_uuid,
...
}
payload = json.dumps(payload_dict)
```
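A fuller sketch of the `json.dumps` route, reusing the fields from the question (the `json=` form prefix is kept, on the assumption that the endpoint expects the same body shape as the original payload):

```
import json
import uuid

random_uuid = str(uuid.uuid4())
payload_dict = {
    "": "0",
    "credentials": {
        "scope": "GLOBAL",
        "id": random_uuid,
        "username": "testuser3",
        "password": "bar",
        "description": "biz",
        "$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
    }
}
payload = "json=" + json.dumps(payload_dict)
```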
|
This code may help.
```
#!/usr/bin/python
import requests
import uuid
random_uuid = uuid.uuid4()
print random_uuid
url = "http://192.168.54.214:8080/credential-store/domain/_/createCredentials"
payload = '''json={
"": "0",
"credentials": {
"scope": "GLOBAL",
"id": "%s",
"username": "testuser3",
"password": "bar",
"description": "biz",
"$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
}
}''' % random_uuid
headers = {
'content-type': "application/x-www-form-urlencoded",
}
print payload
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
```
| 8,772
|
23,414,509
|
I have a common problem: I have some data and I want to search it. My issue is that I don't know the proper data structures and algorithm suitable for this situation.
There are two kinds of objects - `Process` and `Package`. Both have some properties, but they are only data structures (they don't have any methods). Next, there are PackageManager and something that can be called ProcessManager, which both have a function returning the list of files that belong to some `Package` or the files that are used by some `Process`.
So semantically, we can imagine these data as
### Packages:
* Package\_1
+ file\_1
+ file\_2
+ file\_3
* Package\_2
+ file\_4
+ file\_5
+ file\_6
Note that a file that belongs to Package\_k can *not* belong to Package\_l for k != l :-)
### Processes:
* Process\_1
+ file\_2
+ file\_3
* Process\_2
+ file\_1
Files used by processes correspond to files owned by packages. Also, the rule above doesn't apply here as it does for packages - that means `n` processes can use the same file at the same time.
Now, the task: I need to find matches between processes and packages - for a given list of packages, I need to find the list of processes which use any of the files owned by those packages.
My temporary solution was making a list of `[package_name, package_files]` and a list of `[process_name, process_files]`, and for every file from every package I was searching through every file of every process looking for a match (sketched below). Of course this could only be a temporary solution, owing to its horrible time complexity (even when I sort the files and use binary search).
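A minimal sketch of that temporary solution (the data here is illustrative):

```
packages = [["Package_1", ["file_1", "file_2", "file_3"]],
            ["Package_2", ["file_4", "file_5", "file_6"]]]
processes = [["Process_1", ["file_2", "file_3"]],
             ["Process_2", ["file_1"]]]

# for every package file, scan every process's file list for a match
matches = set()
for package_name, package_files in packages:
    for f in package_files:
        for process_name, process_files in processes:
            if f in process_files:  # linear scan - the slow part
                matches.add((process_name, package_name))
print(matches)
```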
What can you recommend for this kind of searching, please?
(I am coding it in Python.)
|
2014/05/01
|
[
"https://Stackoverflow.com/questions/23414509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3285282/"
] |
I would move the case into its own method on your Team model.
```
class Team
def tree(type)
...
end
end
```
Then in your controller you could just have the following
```
if @team = fetch_team
@output = @team.tree(params[:tree])
render json: @output
else
render json: {message: "team: '#{params[:id]}' not found"}, status: 404
end
```
|
You could write
```
if @team = fetch_team
@output = case params[:tree]
when 'parents' then @team.ancestor_ids
when 'children' then @team.child_ids
when 'full' then @team.full_tree
when nil then @team
else {message: "requested query parameter: '#{params[:tree]}' not defined"}
end
render json: @output
else
render json: {message: "team: '#{params[:id]}' not found"}, status: 404
end
```
| 8,777
|
42,471,570
|
I am trying to build a classification model. I have 1000 text documents in a local folder. I want to divide them into a training set and a test set with a split ratio of 70:30 (70 -> training and 30 -> test). What is the best approach to do so? I am using Python.
---
I wanted a programmatic approach to splitting the training set and test set: first, read the files in the local directory; second, build a list of those files and shuffle them; third, split them into a training set and a test set.
I tried a few ways using built-in Python keywords and functions, only to fail. Finally I got an idea of how to approach it. Also, cross-validation is a good option to consider when building general classification models.
|
2017/02/26
|
[
"https://Stackoverflow.com/questions/42471570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5024829/"
] |
That's quite simple if you use numpy: first load the documents and make them a numpy array, and then:
```
import numpy as np
docs = np.array([
'one', 'two', 'three', 'four', 'five',
'six', 'seven', 'eight', 'nine', 'ten',
])
idx = np.hstack((np.ones(7), np.zeros(3))) # generate indices
np.random.shuffle(idx) # shuffle to make training data and test data random
train = docs[idx == 1]
test = docs[idx == 0]
print(train)
print(test)
```
the result:
```
['one' 'two' 'three' 'six' 'eight' 'nine' 'ten']
['four' 'five' 'seven']
```
|
Just make a list of the filenames using `os.listdir()`. Use `random.shuffle()` to shuffle the list, and then take `training_files = filenames[:700]` and `testing_files = filenames[700:]`, as in the sketch below.
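A minimal sketch of that approach, assuming the documents sit in a local `documents/` folder (the folder name is illustrative):

```
import os
import random

filenames = os.listdir('documents')
random.shuffle(filenames)          # in-place shuffle for a random split
training_files = filenames[:700]   # 70% of the 1000 files
testing_files = filenames[700:]    # remaining 30%
```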
| 8,778
|
39,516,760
|
I have a string that I pull from a REST API that is actually a JSON.
I can't use `req.json()` as Python doesn't format the JSON correctly, i.e. it uses single quotes and not double quotes, plus it puts a unicode symbol where there shouldn't be one. This means I can't use it to respond back to REST, as the JSON is not formatted correctly.
However `r.text` prints JSON that I could use, if I could just tell Python: "this is JSON and not a string, take it just as it is and use it as JSON".
Is there any way I could do this? Or is there any way to tell Python to properly format a JSON object as per the JSON spec (i.e. no unicode characters, and double quotes)?
EDIT:
Apparently this wasn't clear, I apologize.
The issue is that I have to send back a properly JSON-formatted object and NOT a Python object. Here is what I get:
r.text:
{"domain":"example.com", "Link":null, "monitor":"true"}
r.json():
{u'domain': u'example.com', u'Link': None, u'monitor': True}
This is NOT proper JSON formatting. You can't have the unicode marker; it isn't None, it is null; and it isn't True, it is true. You also should have double and not single quotes (not as big a deal, I think).
Hope this clarifies my issues.
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39516760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1164102/"
] |
You can check whether a string is valid JSON by catching the parse error.
```
import json

def is_json(myjson):
    try:
        json.loads(myjson)
    except ValueError:
        return False
    return True
```
Test cases:
```
print is_json("{}") #prints True
print is_json("{asdf}") #prints False
print is_json('{ "age":100}') #prints True
print is_json("{'age':100 }") #prints False
print is_json("{\"age\":100 }") #prints True
print is_json('{"age":100 }') #prints True
print is_json('{"foo":[5,6.8],"foo":"bar"}') #prints True
```
|
```
import json
request_as_json = json.loads(r.text)
```
Then you can call things like `request_as_json['key']`
["More Info Here"](http://docs.python.org/library/json.html#json.loads)
| 8,783
|
56,590,075
|
I'm trying to read a timeseries of a single [WRF](https://www.mmm.ucar.edu/weather-research-and-forecasting-model) output variable. The time series is distributed, one timestamp per file, across more than 5000 netCDF files. Each file contains roughly 200 variables.
Is there a way to call xarray.open\_mfdataset() for only the variable I'm interested in? I can specify a single variable by providing a list to the 'data\_vars' argument, but it still reads everything for the 'minimal' case. For my files the 'minimal' case includes almost everything and is thus relatively slow.
Is my best bet to create a single netCDF file containing my variable of interest with something like [ncrcat](http://nco.sourceforge.net/nco.html#Concatenation), or is there a more streamlined way to do this entirely within xarray (or some other python tool)?
My netCDF files are netCDF4 (not netCDF4-classic), which seems to rule out [netCDF4.MFDataset()](http://unidata.github.io/netcdf4-python/netCDF4/index.html#netCDF4.MFDataset).
|
2019/06/14
|
[
"https://Stackoverflow.com/questions/56590075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1854821/"
] |
I'm not sure why providing the `data_vars=` argument still reads all data - I experienced the same issue reading WRF output. My workaround was to make a list of all the variables I didn't need (all 200+) and feed that to the `drop_variables=` argument. You can get a list of all variables and then just delete or comment out the ones you want to keep.
```
varlist = list(ds.variables)
```
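A sketch of the full workaround, assuming illustrative file names and that `T2` is the variable of interest:

```
import xarray as xr

# open a single file just to enumerate its variables
ds = xr.open_dataset('wrfout_d01_0001.nc')
varlist = list(ds.variables)
varlist.remove('T2')  # keep only the variable of interest

# drop everything else when opening the multi-file dataset
ds_t2 = xr.open_mfdataset('wrfout_d01_*.nc', drop_variables=varlist)
```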
|
As a follow-up for those who find this thread later:
Based on the documentation (but a bit hidden), the "data\_vars=" argument only works with Python 3.9.
| 8,785
|
9,197,385
|
I'm using AWS for the first time and have just installed boto for Python. I'm stuck at the step where it advises:
"You can place this file either at /etc/boto.cfg for system-wide use or in the home directory of the user executing the commands as ~/.boto."
Honestly, I have no idea what to do. First, I can't find the boto.cfg, and second, I'm not sure which command to execute for the second option.
Also, when I deploy the application to my server, I'm assuming I need to do the same thing there too...
|
2012/02/08
|
[
"https://Stackoverflow.com/questions/9197385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/815878/"
] |
>
> "You can place this file either at /etc/boto.cfg for system-wide use
> or in the home directory of the user executing the commands as
> ~/.boto."
>
>
>
The former simply means that you might create a configuration file named `boto.cfg` within directory `/etc` (i.e. it won't necessarily be there already, depending on how [boto](https://github.com/boto/boto/) has been installed on your particular system).
The latter is indeed phrased a bit unfortunate - `~/.boto` means that boto will look for a configuration file named `.boto` within the home directory of the user executing the commands (i.e. Python scripts) which are facilitating the boto library.
You can read more about this in the boto wiki article [BotoConfig](http://docs.pythonboto.org/en/latest/boto_config_tut.html), e.g. regarding the question at hand:
>
> A boto config file is simply a .ini format configuration file that
> specifies values for options that control the behavior of the boto
> library. Upon startup, the boto library looks for configuration files
> in the following locations and in the following order:
>
>
> 1. /etc/boto.cfg - for site-wide settings that all users on this machine
> will use
> 2. ~/.boto - for user-specific settings
>
>
>
You'll indeed need to prepare a respective configuration file on the server your application is deployed to as well.
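For reference, a minimal sketch of such a config file (the key values are placeholders):

```
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```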
Good luck!
|
For those who want to configure the credentials in Windows:
1 - Create your file with the name you want (e.g. boto\_config.cfg) and place it in a location of your choice (e.g. C:\Users\configs).
2 - Create an environment variable with Name='BOTO\_CONFIG' and Value=file\_location/file\_name
3 - Boto is now ready to work with the credentials automatically configured!
* To create environment variables in Windows follow this tutorial: <http://www.onlinehowto.net/Tutorials/Windows-7/Creating-System-Environment-Variables-in-Windows-7/1705>
| 8,788
|
45,382,917
|
I cannot successfully run the `optimize_for_inference` module on a simple, saved TensorFlow graph (Python 2.7; package installed by `pip install tensorflow-gpu==1.0.1`).
Background
==========
Saving TensorFlow Graph
-----------------------
Here's my Python script to generate and save a simple graph to add 5 to my input `x` `placeholder` operation.
```
import tensorflow as tf

# make and save a simple graph
G = tf.Graph()
with G.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(), name="x")
    a = tf.Variable(5.0, name="a")
    y = tf.add(a, x, name="y")
    saver = tf.train.Saver()

with tf.Session(graph=G) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    print(out)
    saver.save(sess=sess, save_path="test_model")
```
Restoring TensorFlow Graph
--------------------------
I have a simple restore script that recreates the saved graph and restores graph params. Both the save/restore scripts produce the same output.
```
import tensorflow as tf

# Restore simple graph and test model output
G = tf.Graph()

with tf.Session(graph=G) as sess:
    # recreate saved graph (structure)
    saver = tf.train.import_meta_graph('./test_model.meta')
    # restore net params
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    x = G.get_operation_by_name("x").outputs[0]
    y = G.get_operation_by_name("y").outputs
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    print(out[0])
```
Optimization Attempt
--------------------
But, while I don't expect much in terms of optimization, when I try to optimize the graph for inference, I get the following error message. The expected output node does not appear to be in the saved graph.
```
$ python -m tensorflow.python.tools.optimize_for_inference --input test_model.data-00000-of-00001 --output opt_model --input_names=x --output_names=y
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/{path}/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 141, in <module>
app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "/{path}/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference.py", line 90, in main
FLAGS.output_names.split(","), FLAGS.placeholder_type_enum)
File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/tools/optimize_for_inference_lib.py", line 91, in optimize_for_inference
placeholder_type_enum)
File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/tools/strip_unused_lib.py", line 71, in strip_unused
output_node_names)
File "/{path}/local/lib/python2.7/site-packages/tensorflow/python/framework/graph_util_impl.py", line 141, in extract_sub_graph
assert d in name_to_node_map, "%s is not in graph" % d
AssertionError: y is not in graph
```
Further investigation led me to inspect the checkpoint of the saved graph, which only shows 1 tensor (`a`, no `x` and no `y`).
```
(tf-1.0.1) $ python -m tensorflow.python.tools.inspect_checkpoint --file_name ./test_model --all_tensors
tensor_name: a
5.0
```
Specific Questions
==================
1. Why do I not see `x` and `y` in the checkpoint? Is it because they are operations and not tensors?
2. Since I need to provide input and output names to the `optimize_for_inference` module, how do I build the graph so I can reference the input and output nodes?
|
2017/07/28
|
[
"https://Stackoverflow.com/questions/45382917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5874320/"
] |
**Here is the detailed guide on how to optimize for inference:**
The `optimize_for_inference` module takes a `frozen binary GraphDef` file as input and outputs the `optimized GraphDef` file which you can use for inference. To get the `frozen binary GraphDef` file you need to use the module `freeze_graph`, which takes a `GraphDef proto`, a `SaverDef proto` and a set of variables stored in a checkpoint file. The steps to achieve that are given below:
1. Saving tensorflow graph
==========================
```
import tensorflow as tf

# make and save a simple graph
G = tf.Graph()
with G.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=(), name="x")
    a = tf.Variable(5.0, name="a")
    y = tf.add(a, x, name="y")
    saver = tf.train.Saver()

with tf.Session(graph=G) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fetches=[y], feed_dict={x: 1.0})
    # Save GraphDef
    tf.train.write_graph(sess.graph_def, '.', 'graph.pb')
    # Save checkpoint
    saver.save(sess=sess, save_path="test_model")
```
2. Freeze graph
===============
```
python -m tensorflow.python.tools.freeze_graph --input_graph graph.pb --input_checkpoint test_model --output_graph graph_frozen.pb --output_node_names=y
```
3. Optimize for inference
=========================
```
python -m tensorflow.python.tools.optimize_for_inference --input graph_frozen.pb --output graph_optimized.pb --input_names=x --output_names=y
```
4. Using Optimized graph
========================
```
import tensorflow as tf

with tf.gfile.GFile('graph_optimized.pb', 'rb') as f:
    graph_def_optimized = tf.GraphDef()
    graph_def_optimized.ParseFromString(f.read())

G = tf.Graph()

with tf.Session(graph=G) as sess:
    y, = tf.import_graph_def(graph_def_optimized, return_elements=['y:0'])
    print('Operations in Optimized Graph:')
    print([op.name for op in G.get_operations()])
    x = G.get_tensor_by_name('import/x:0')
    out = sess.run(y, feed_dict={x: 1.0})
    print(out)

# Output
# Operations in Optimized Graph:
# ['import/x', 'import/a', 'import/y']
# 6.0
```
5. For multiple output names
============================
If there are multiple output nodes, then specify `output_node_names = 'boxes, scores, classes'` and import the graph with:
```
boxes, scores, classes = tf.import_graph_def(graph_def_optimized, return_elements=['boxes:0', 'scores:0', 'classes:0'])
```
|
1. You are doing it wrong: `input` is a GraphDef file for the [script](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/optimize_for_inference.py), not the data part of the checkpoint. You need to freeze the model to a `.pb` file, or get the prototxt for the graph, and use the optimize-for-inference script.
`This script takes either a frozen binary GraphDef file (where the weight
variables have been converted into constants by the freeze_graph script), or a
text GraphDef proto file (the weight variables are stored in a separate
checkpoint file), and outputs a new GraphDef with the optimizations applied.`
2. Get the graph proto file using [write\_graph](https://www.tensorflow.org/api_docs/python/tf/train/write_graph)
3. Get the frozen model using [freeze\_graph](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py)
| 8,790
|
309,135
|
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit.
I need the pros and cons of these two editors for Python development...
|
2008/11/21
|
[
"https://Stackoverflow.com/questions/309135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/35425/"
] |
I use both Komodo Edit and Notepad++.
Notepad++ is a lot quicker to launch and it's more lightweight, so I often use it for quick one-off editing.
I use Komodo Edit for major projects, like my django and wxPython applications. KE is a full-featured IDE, so it has a lot more features.
Main advantages of Komodo Edit for programming Python:
* Manage groups of files as projects
* Use custom commands to run files, run nosetests/pylint, etc.
* Auto complete & syntax checking
* Mozilla extension system, with several useful extensions available
* Write macros in JavaScript or Python
* Spell checking
Some of the little things that Notepad++ is missing for Python development:
* Doesn't auto-indent after a colon
* ~~You can't set tabs/spaces on a file-type basis (I like to use tabs for HTML)~~
* ~~No code completion or tooltips~~
* No on-the-fly syntax checking
|
I haven't used Komodo yet (the download never quite finished on the slow connection I was on at the time), but I use Eclipse with PyDev regularly and enjoy the "IDE" features described by the other respondents. However, I'm also regularly frustrated by how much of a resource hog it is.
I downloaded Notepad++ recently (much smaller download size ;-) ) and have been enjoying it quite a bit. The editor itself is nice and fast and it looks to be extensible. I'm hoping to copy some of my favorite features from IDE into Notepad++ and migrate, at some distant point in the future.
| 8,791
|
5,495,143
|
```
var data;
$(document).ready(function(){
var rows = document.getElementById("orderlist_1").rows; var cell = rows[rows.length - 1].cells[3];
data = "id="+cell.innerHTML
checkAndNotify();
})
function checkAndNotify()
{
alert("oo");
$("#shownoti").load("/mostrecenttransaction","id=2008010661301520679");
t = setTimeout("checkAndNotify()",3000)
return true;
}
//$(document).ready(checkAndNotify())
```
In the above code, a window with content is shown every 3 seconds when I open the webpage.
But the next line acts as if it never executes. If I manually open the URL <http://127.0.0.1:8000/mostrecenttransaction/?id=2008010661301520679>, it returns an HttpResponse, so why is the AJAX call never sent?
I have checked using Inspect Element in Google Chrome, going to the Network tab, and observed that even though alert("oo") executes every 3 seconds, the AJAX request is never sent. Can anyone help me out? I have spent so much time on this but I am unable to figure it out. You can check the code simply by doing a git clone and "git checkout counternoti". **The code can be run simply with "python2 manage.py runserver"; no configuration needed (enter username "123456789" and password "abcd").** It would be nice if you could see it for yourself. At least, it is damn interesting to me as a beginner.
>
> My repo ->
> <https://github.com/shadyabhi/pycourt_login/tree/counternoti>
>
>
>
Related code in urls.py:(./urls.py)
```
(r'^mostrecenttransaction/',mostRecentTransaction),
```
views.py: (./views.py:line 339)
```
def mostRecentTransaction(request):
    transactid_atdocument = request.GET['id'][7:]
    latestid = Ordersss.objects.latest('transaction_id').transaction_id[7:]
    print latestid, transactid_atdocument
    if latestid > transactid_atdocument:
        return HttpResponse("Food List outdated, please refresh")
    else:
        return HttpResponse("")
```
templates: (./templates/home.html:line 282)
```
<script type="text/javascript" src="/pycourt/newordernotification.js"></script>
..
..
<div id=shownoti"></div>
```
I am new to jQuery & AJAX and this thing has driven me crazy; I have tried for hours to solve it. **The main point is: if the "alert" is shown, then why not the AJAX request?** A reply would be highly appreciated.
|
2011/03/31
|
[
"https://Stackoverflow.com/questions/5495143",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/167814/"
] |
```
jQuery.ajax({
url: "mostRecentTransaction",
type: "GET",
data: { id : 2008010661301520679 },
success: function(data) {
alert(data);
jQuery('#shownoti').html(data).hide().fadeIn(1500);
}
});
```
|
Try using setInterval instead of setTimeout
| 8,801
|
48,819,547
|
I want to take the logarithm multiple times. We know this:
```
import numpy as np
np.log(x)
```
Now the second logarithm would be:
```
np.log(np.log(x))
```
What if one wants to take the logarithm n times? Surely it would not be Pythonic to repeat the call n times as above.
|
2018/02/16
|
[
"https://Stackoverflow.com/questions/48819547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
As per @eugenhu's suggestion, one way is to use a generic function which loops iteratively:
```
import numpy as np

def repeater(f, n):
    def fn(i):
        result = i
        for _ in range(n):
            result = f(result)
        return result
    return fn

repeater(np.log, 5)(x)
```
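For example (a quick check; `x`, which the snippet above leaves undefined, is an arbitrary positive number here):

```
x = 1e12
print(repeater(np.log, 3)(x))  # same as np.log(np.log(np.log(x)))
```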
|
You could use the following little trick:
```
>>> from functools import reduce
>>>
>>> k = 4
>>> x = 1e12
>>>
>>> y = np.array(x)
>>> reduce(np.log, (k+1) * (y,))[()]
0.1820258315495139
```
and back:
```
>>> reduce(np.exp, (k+1) * (y,))[()]
999999999999.9813
```
On my machine this is slightly faster than @jp\_data\_analysis' approach:
```
>>> def f_pp(ufunc, x, k):
... y = np.array(x)
... return reduce(ufunc, (k+1) * (y,))[()]
...
>>> x = 1e12
>>> k = 5
>>>
>>> from timeit import repeat
>>> kwds = dict(globals=globals(), number=100000)
>>>
>>> repeat('repeater(np.log, 5)(x)', **kwds)
[0.5353733809897676, 0.5327484680456109, 0.5363518510130234]
>>> repeat('f_pp(np.log, x, 5)', **kwds)
[0.4512511100037955, 0.4380568229826167, 0.45331112697022036]
```
To be fair, their approach is more flexible. Mine uses quite specific properties of unary `ufunc`s and numpy `array`s.
Larger `k` is also possible. For that we need to make sure that `x` is complex because `np.log` will not switch automatically.
```
>>> x = 1e12+0j
>>> k = 50
>>>
>>> f_pp(np.log, x, 50)
(0.3181323483680859+1.3372351153002153j)
>>> f_pp(np.exp, _, 50)
(1000000007040.9696+6522.577629950761j)
# not that bad, all things considered ...
>>>
>>> repeat('f_pp(np.log, x, 50)', **kwds)
[4.272890724008903, 4.266964592039585, 4.270542044949252]
>>> repeat('repeater(np.log, 50)(x)', **kwds)
[5.799160094989929, 5.796761817007791, 5.80835147597827]
```
| 8,802
|
994,460
|
I have a Pylons app and am using FormEncode and HtmlFill to handle my forms. I have an array of text fields in my template (Mako)
```
<tr>
<td>Yardage</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
<td>${h.text('yardage[]', maxlength=3, size=3)}</td>
</tr>
```
However, I can't seem to figure out how to validate these fields.
Here is the relevant entry from my Schema
`yardage = formencode.ForEach(formencode.validators.Int())`
I'm trying to validate that each of these fields is an Int.
However, no validation occurs for these fields.
**UPDATE**
As requested here is the code for the action of this controller. I know it was working as I can validate other form fields.
```
def submit(self):
    schema = CourseForm()
    try:
        c.form_result = schema.to_python(dict(request.params))
    except formencode.Invalid, error:
        c.form_result = error.value
        c.form_errors = error.error_dict or {}
        c.heading = 'Add a course'
        html = render('/derived/course/add.html')
        return htmlfill.render(
            html,
            defaults = c.form_result,
            errors = c.form_errors
        )
    else:
        h.redirect_to(controler='course', action='view')
```
**UPDATE**
It was suggested on IRC that I change the name of the elements from `yardage[]` to `yardage`.
No result. They should all be ints, but putting an 'f' into one of the elements doesn't cause it to be invalid. As I said before, I am able to validate other form fields. Below is my entire schema.
```
import formencode

class CourseForm(formencode.Schema):
    allow_extra_fields = True
    filter_extra_fields = True
    name = formencode.validators.NotEmpty(messages={'empty': 'Name must not be empty'})
    par = formencode.ForEach(formencode.validators.Int())
    yardage = formencode.ForEach(formencode.validators.Int())
```
|
2009/06/15
|
[
"https://Stackoverflow.com/questions/994460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10738/"
] |
Turns out what I wanted to do wasn't quite right.
**Template**:
```
<tr>
<td>Yardage</td>
% for hole in range(9):
<td>${h.text('hole-%s.yardage'%(hole), maxlength=3, size=3)}</td>
% endfor
</tr>
```
(Should have made it in a loop to begin with.) You'll notice that the name of the first element will become `hole-1.yardage`. I will then use [FormEncode.variabledecode](http://www.formencode.org/en/latest/modules/variabledecode.html) to turn this into a dictionary. This is done in the
**Schema**:
```
import formencode

class HoleSchema(formencode.Schema):
    allow_extra_fields = False
    yardage = formencode.validators.Int(not_empty=True)
    par = formencode.validators.Int(not_empty=True)

class CourseForm(formencode.Schema):
    allow_extra_fields = True
    filter_extra_fields = True
    name = formencode.validators.NotEmpty(messages={'empty': 'Name must not be empty'})
    hole = formencode.ForEach(HoleSchema())
```
The HoleSchema will validate that `hole-#.par` and `hole-#.yardage` are both ints and are not empty. `formencode.ForEach` allows me to apply `HoleSchema` to the dictionary that I get from passing `variable_decode=True` to the `@validate` decorator.
Here is the `submit` action from my
**Controller**:
```
@validate(schema=CourseForm(), form='add', post_only=False, on_get=True,
          auto_error_formatter=custom_formatter,
          variable_decode=True)
def submit(self):
    # Do whatever here.
    return 'Submitted!'
```
Using the `@validate` decorator allows for a much cleaner way to validate and fill in the forms. The `variable_decode=True` is very important or the dictionary will not be properly created.
|
```
c.form_result = schema.to_python(request.params)  # without dict()
```
It seems to work fine.
| 8,805
|
38,608,781
|
Python has a `filter` method which filters the desired output on some criteria, as in the following example.
```
>>> s = "some\x00string. with\x15 funny characters"
>>> import string
>>> printable = set(string.printable)
>>> filter(lambda x: x in printable, s)
'somestring. with funny characters'
```
The example is taken from [link](https://stackoverflow.com/questions/8689795/how-can-i-remove-non-ascii-characters-but-leave-periods-and-spaces-using-python). Now, when I try the same thing in my Python IDE, it does not return the result `'somestring. with funny characters'`; in fact it returns this: `<filter object at 0x0000020E1792EC50>`. In the IDE I cannot just press Enter, so I am running this whole code with a print around the filter. Where am I going wrong? Thanks.
|
2016/07/27
|
[
"https://Stackoverflow.com/questions/38608781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5453723/"
] |
As others said, it is undefined behaviour.
Why does it work, though? It is probably because the function call is linked statically, at compile time (it's not a virtual function). The function `B::hi()` exists, so it is called. Try adding a member variable to `class B` and using it in the function `hi()`. Then you will see a problem (a garbage value) on the screen:
```
class B
{
public:
void hi() { cout << "hello, my value is " << x << endl; }
private:
int x = 5;
};
```
Otherwise you could make the function `hi()` virtual. Then the function is linked dynamically, at runtime, and the program crashes immediately:
```
class B
{
public:
virtual void hi() { cout << "hello" << endl; }
};
```
|
>
> Now, why is this happening ?
>
>
>
Because it can happen. Anything can happen. The behaviour is *undefined*.
The fact that something unexpected happened demonstrates well why UB is so dangerous. If it always caused a crash, then it would be far easier to deal with.
>
> What object was used to call such a method
>
>
>
Most likely, the compiler blindly trusts you, and assumes that `b` does point to an object of type `B` (which it doesn't). It probably would use the pointed memory as if the assumption was true. The member function didn't access any of the memory that belongs to the object, and the behaviour happened to be the same as if there had been an object of correct type.
Being undefined, the behaviour could be completely different. If you try to run the program again, the standard doesn't guarantee that demons won't fly out of your nose.
| 8,806
|
42,875,890
|
I installed Odoo version 8 on Ubuntu 16,
then followed this link: <https://www.getopenerp.com/easy-odoo8-installation/>.
Step 3 fails with "E: Package 'python-pybabel' has no installation candidate". What happened?
|
2017/03/18
|
[
"https://Stackoverflow.com/questions/42875890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7626653/"
] |
python-pybabel was replaced by the python-babel package, from what I can see from a web search.
After I ran into this problem as well, I used python-babel and everything worked correctly.
Best regards,
CiprianR
|
Hello, this is a command for the dependencies; you need to run it:
```
sudo apt-get install python-psutil python-pybabel
```
| 8,808
|
62,045,094
|
I'm trying to create an S3 bucket in every AWS region with boto3 in Python, but I'm failing to create a bucket in 4 regions (af-south-1, eu-south-1, ap-east-1 & me-south-1).
My python code:
```
def create_bucket(name, region):
    s3 = boto3.client('s3')
    s3.create_bucket(Bucket=name, CreateBucketConfiguration={'LocationConstraint': region})
```
and the exception I get:
```
botocore.exceptions.ClientError: An error occurred (InvalidLocationConstraint) when calling the CreateBucket operation: The specified location-constraint is not valid
```
I can create buckets in these regions from the AWS website, but that is not good for me, so I tried to create them directly via the REST API without boto3.
url: **bucket-name**.s3.amazonaws.com
body:
```
<?xml version="1.0" encoding="UTF-8"?>
<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<LocationConstraint>eu-south-1</LocationConstraint>
</CreateBucketConfiguration>
```
but the response was similar to the exception:
```
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InvalidLocationConstraint</Code>
<Message>The specified location-constraint is not valid</Message>
<LocationConstraint>eu-south-1</LocationConstraint>
<RequestId>********</RequestId>
<HostId>**************</HostId>
</Error>
```
Does anyone have an idea why I can do it manually from the site but not from Python?
|
2020/05/27
|
[
"https://Stackoverflow.com/questions/62045094",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4213730/"
] |
The regions your code fails in are relatively new regions, where you need to opt in before using them; see [Managing AWS Regions](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html).
|
Newer AWS regions only support regional endpoints. Thus, when creating buckets in one of those regions, a regional endpoint needs to be used.
Since I was creating buckets in multiple regions, I set the endpoint by creating a new instance of the client for each region. (This was in Node.js, but should still work with boto3)
```
client = boto3.client('s3', region_name='region')
```
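Putting the two together, a sketch of the question's helper with a region-scoped client:

```
import boto3

def create_bucket(name, region):
    # create the client in the target region so its regional endpoint is used
    s3 = boto3.client('s3', region_name=region)
    s3.create_bucket(Bucket=name,
                     CreateBucketConfiguration={'LocationConstraint': region})
```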
See the same problem on Node.js [here](https://stackoverflow.com/questions/63262119/invalidlocationconstraint-creating-a-bucket-in-af-south-1-cape-town-region-usi/63262329#63262329)
| 8,811
|
30,518,714
|
After running a Python program, I obtain a list of numeric data. How can I format the data as CSV?
Goal:
I hope to format it so that I can reuse the CSV-formatted data in Mathematica.
|
2015/05/28
|
[
"https://Stackoverflow.com/questions/30518714",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4143312/"
] |
Let's assume your list of numeric data is stored in a variable - `numericList`
```
import csv
myFile = open(csvFile, 'wb')
writer = csv.writer(myFile, quoting = csv.QUOTE_ALL)
writer.writerow(numericList)
```
*wb* indicates that the file is opened for **w**riting in **b**inary mode.
The `csvFile` should contain your list in csv format.
**Note:** If each element in the `numericList` list ***is itself a list***, `writer.writerows(numericList)` might be a better option than `writer.writerow(numericList)`. This is because `writerows` writes one CSV row per inner list.
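For instance, a minimal sketch of the nested-list case (Python 2 style, to match the snippet above; the filename is illustrative):

```
import csv

numericList = [[1, 2, 3], [4, 5, 6]]
myFile = open('output.csv', 'wb')
writer = csv.writer(myFile, quoting=csv.QUOTE_ALL)
writer.writerows(numericList)  # one CSV row per inner list
myFile.close()
```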
|
For a simple case you can just as easily write it directly:
```
f=open('test.csv','w')
for p in data : f.write('%g,%g\n'%tuple(p))
f.close()
```
where data here is an `nx2` array.
| 8,812
|
52,161,349
|
I have successfully installed pattern3 for Python 3.6 on my Linux system.
But after writing this code, I got an error:
```
from pattern3.en import referenced
print(referenced('university'))
print(referenced('hour'))
```
>
IndentationError: expected an indented block
>
>
>
|
2018/09/04
|
[
"https://Stackoverflow.com/questions/52161349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6051513/"
] |
I solved this by going into the problematic file, which should be `(C:\Python27\Lib\site-packages\pattern3\text\tree.py)`, and fixing the problem myself:
```py
from itertools import chain  # 34
try:  # 35
    None  # ===> THIS IS THE LINE I ADDED! <===
except:  # 37
    izip = zip  # Python 3
```
Obviously, this is not a suitable answer for production environments.
Alternatively, clone the current pattern repository on your computer and install it:
```sh
git clone -b development https://github.com/clips/pattern
cd pattern
python setup.py install
```
Then you must import `pattern` instead of `pattern3`.
Good luck.
|
Python will give you an error `expected an indented block` if you skip the indentation:
Example of this:
```
if 5 > 2:
print("Five is greater than two!")
```
Run it, and it will give an error.
```
if 5 > 2:
    print("Five is greater than two!")
```
Run it, and it will not give any error.
| 8,813
|
25,538,584
|
I have 2 date columns (begin and end) in a data frame, where the dates are in the string format '%Y-%m-%d %H:%M:%S.%f'. How can I convert these into datetime format in Python? I also want to create a new column that shows the difference in days between the end and begin dates.
Thanks in advance!
|
2014/08/27
|
[
"https://Stackoverflow.com/questions/25538584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2313307/"
] |
If you're using a recent version of pandas you can pass a format argument to `to_datetime`:
```
In [11]: dates = ["2014-08-27 19:53:06.000", "2014-08-27 19:53:15.002"]
In [12]: pd.to_datetime(dates, format='%Y-%m-%d %H:%M:%S.%f')
Out[12]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-08-27 19:53:06, 2014-08-27 19:53:15.002000]
Length: 2, Freq: None, Timezone: None
```
Note: it isn't necessary in this case to pass format but it may be faster/tighter:
```
In [13]: pd.to_datetime(dates,)
Out[13]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-08-27 19:53:06, 2014-08-27 19:53:15.002000]
Length: 2, Freq: None, Timezone: None
```
|
The `datetime` module has everything you need to play around with dates. Note that in the format you describe `%Y-%m-%d %H:%M:%S.%f` the `%f` does not appear in the [known directives](https://docs.python.org/3/library/time.html#time.strftime) and is not included in my answer
```
from datetime import datetime
dates = ["2014-08-27 19:53:06", "2014-08-27 19:53:15"]
# That's where the conversion happens from string to datetime objects
datetimes = [datetime.strptime(date, "%Y-%m-%d %H:%M:%S") for date in dates]
print datetimes
>> [datetime.datetime(2014, 8, 27, 19, 53, 6), datetime.datetime(2014, 8, 27, 19, 53, 15)]
# Here a simple subtraction will give you the result you are looking for return a timedelta object
delta = datetimes[1] - datetimes[0]
print type(delta), delta
>> <type 'datetime.timedelta'>, 0:00:09
```
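To get the difference in days that the question asks for, the `timedelta` object exposes a `days` attribute (same Python 2 style as above):

```
print delta.days
>> 0
```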
| 8,814
|
19,845,259
|
I am getting the errors below while configuring Grinder on JIRA instances. I followed all instructions as per <https://confluence.atlassian.com/display/ATLAS/JIRA+Performance+Testing+with+Grinder#JIRAPerformanceTestingwithGrinder-Prerequisites>
Errors :
$ cat project\_manager\_8/error\_xxxx004.fm.XXXXX.com-0.log
```
11/7/13 7:44:35 PM (process xxxx004.fm.XXXXX.com-0): Error running worker process (Java exception initialising test script
File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./env.py", line 35, in request
File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./dashboard.py", line 9, in __init__
File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./interactions.py", line 35, in ?
File "./agent_project_manager.py", line 4, in ?)
Java exception initialising test script
File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./env.py", line 35, in request
File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./dashboard.py", line 9, in __init__
File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./interactions.py", line 35, in ?
File "./agent_project_manager.py", line 4, in ?
Caused by: net.grinder.script.NotWrappableTypeException: Failed to wrap http://jira-fm-dev.devtools.XXXXX.com:8080/
at net.grinder.engine.process.instrumenter.MasterInstrumenter.createInstrumentedProxy(MasterInstrumenter.java:99)
at net.grinder.engine.process.TestData.createProxy(TestData.java:93)
at net.grinder.script.Test.wrap(Test.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:600)
at org.python.core.PyReflectedFunction.__call__(Unknown Source)
at org.python.core.PyMethod.__call__(Unknown Source)
at org.python.core.PyObject.__call__(Unknown Source)
at org.python.core.PyInstance.invoke(Unknown Source)
at env$py.request$6(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./env.py:35)
at env$py.call_function(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./env.py)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyFunction.__call__(Unknown Source)
at dashboard$py.__init__$2(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./dashboard.py:9)
at dashboard$py.call_function(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./dashboard.py)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyFunction.__call__(Unknown Source)
at org.python.core.PyInstance.__init__(Unknown Source)
at org.python.core.PyClass.__call__(Unknown Source)
at org.python.core.PyObject.__call__(Unknown Source)
at interactions$py.f$0(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./interactions.py:35)
at interactions$py.call_function(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./interactions.py)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyCode.call(Unknown Source)
at org.python.core.imp.createFromCode(Unknown Source)
at org.python.core.imp.createFromPyClass(Unknown Source)
at org.python.core.imp.loadFromSource(Unknown Source)
at org.python.core.imp.find_module(Unknown Source)
at org.python.core.imp.import_next(Unknown Source)
at org.python.core.imp.import_name(Unknown Source)
at org.python.core.imp.importName(Unknown Source)
at org.python.core.ImportFunction.load(Unknown Source)
at org.python.core.ImportFunction.__call__(Unknown Source)
at org.python.core.PyObject.__call__(Unknown Source)
at org.python.core.__builtin__.__import__(Unknown Source)
at org.python.core.imp.importAll(Unknown Source)
at org.python.pycode._pyx0.f$0(./agent_project_manager.py:4)
at org.python.pycode._pyx0.call_function(./agent_project_manager.py)
at org.python.core.PyTableCode.call(Unknown Source)
at org.python.core.PyCode.call(Unknown Source)
at org.python.core.Py.runCode(Unknown Source)
at org.python.core.__builtin__.execfile_flags(Unknown Source)
at org.python.util.PythonInterpreter.execfile(Unknown Source)
at net.grinder.engine.process.jython.JythonScriptEngine.initialise(JythonScriptEngine.java:83)
at net.grinder.engine.process.GrinderProcess.run(GrinderProcess.java:259)
at net.grinder.engine.process.WorkerProcessEntryPoint.run(WorkerProcessEntryPoint.java:87)
at net.grinder.engine.process.WorkerProcessEntryPoint.main(WorkerProcessEntryPoint.java:59)
```
|
2013/11/07
|
[
"https://Stackoverflow.com/questions/19845259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2608551/"
] |
It's taken me 2 weeks to find out how to REALLY fix this issue. I had put a new class file under the **App\_Code** folder (I haven't used that folder for ages). For some reason I had set the "Build Action" to "*Compile*". Well, I guess anything under the **App\_Code** folder is already compiled by default, so when the project would build, it would give me this "ambiguous" error. By simply setting the "Build Action" back to "*None*", the ambiguous error went away! There's a tip for all you putting your helper methods under the "App\_Code" folder!
|
It appears that something was wrong with one of the web.config files. I simply took the web.config files from a blank MVC4 project and replaced mine.
Incidentally, having the namespace in both the config and the layout does not throw an error.
| 8,815
|
48,189,688
|
I'm trying to find out if a .xlsx file contains a @.
I have used pandas, which works great, unless the Excel sheet has its first column empty; then it fails. Any ideas how to rewrite the code to handle/skip empty columns?
the code:
```
df = pandas.read_excel(open(path,'rb'), sheetname=0)
out = 'False'
for col in df.columns:
if df[col].str.contains('@').any():
out = 'True'
break
```
This is the error I'm getting:
```
df = pandas.read_excel(open(path,'rb'), sheetname=0)
File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 203, in read_excel
io = ExcelFile(io, engine=engine)
File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 258, in __init__
self.book = xlrd.open_workbook(file_contents=data)
File "/anaconda3/lib/python3.6/site-packages/xlrd/__init__.py", line 162, in open_workbook
ragged_rows=ragged_rows,
File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 91, in open_workbook_xls
biff_version = bk.getbof(XL_WORKBOOK_GLOBALS)
File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1271, in getbof
bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8])
File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1265, in bof_error
raise XLRDError('Unsupported format, or corrupt file: ' + msg)
xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found b'\x17Microso'
```
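For context, a hedged sketch of the direction I'm after, coercing every column to string so an empty first column can't break `.str.contains` (this wouldn't address the `XLRDError` itself, which points at the file format):
```
df = pandas.read_excel(open(path, 'rb'), sheetname=0)
out = any(df[col].astype(str).str.contains('@').any() for col in df.columns)
```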
|
2018/01/10
|
[
"https://Stackoverflow.com/questions/48189688",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1495850/"
] |
Take a look at the [json module](https://docs.python.org/3/library/json.html). More specifically the 'Decoding JSON:' section.
```
import json
import requests
response = requests.get() # api call
users = json.loads(response.text)
for user in users:
print(user['id'])
```
|
It seems what you are looking for is the [json](https://docs.python.org/2/library/json.html) module. With it you can parse a string into JSON format:
```
import json
output=json.loads(myJsonString)
```
| 8,816
|
57,919,803
|
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing.
Assume I have a list [Sunday, Monday, ...Saturday],
and I want to iterate the list starting from different position, wrap around and complete the loop.
For example
```py
a = [Sunday, Monday, ...Saturday]
for i in range(7):
print("----")
for j in (SOMETHING):
print(j)
OUTPUT:
----
Sunday
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday
----
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday
Sunday
----
Tuesday
.
.
.
Friday
```
How could I approach this?
|
2019/09/13
|
[
"https://Stackoverflow.com/questions/57919803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12062120/"
] |
You can use `collections.deque`, which has a `rotate` method. However, if you want to do it on your own, you can do it like this:
```
>>> a = ['a','b','c','d']
>>> counter = 0
>>> start_index=2
>>> while counter < len(a):
... print(a[start_index])
... start_index+=1
... counter += 1
... if start_index==len(a):
... start_index=0
...
c
d
a
b
```
This is quite efficient, because you do not need to make any copy or create a new list; you just iterate.
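For comparison, a hedged sketch of the `deque` approach mentioned at the top of this answer; `rotate(-1)` shifts the starting element forward by one on each pass:
```
from collections import deque

a = deque(['Sunday', 'Monday', 'Tuesday', 'Wednesday',
           'Thursday', 'Friday', 'Saturday'])
for i in range(7):
    print("----")
    for day in a:
        print(day)
    a.rotate(-1)  # move the first element to the end
```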
|
Use the following function:
```
def cycle_list(l, i):
for element in l[i:]:
yield element
for element in l[:i]:
yield element
```
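A quick usage sketch of that generator:
```
>>> list(cycle_list(['Sunday', 'Monday', 'Tuesday'], 1))
['Monday', 'Tuesday', 'Sunday']
```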
| 8,821
|
13,610,654
|
I understand [from this question](https://stackoverflow.com/questions/2227169/are-python-built-in-containers-thread-safe) that if I want to have a `set` which is thread-safe I have to implement the thread-safety part on my own.
Therefore I could come up with:
```
from threading import Lock
class LockedSet(set):
"""A set where add() and remove() are thread-safe"""
def __init__(self, *args, **kwargs):
# Create a lock
self._lock = Lock()
# Call the original __init__
super(LockedSet, self).__init__(*args, **kwargs)
def add(self, elem):
self._lock.acquire()
try:
super(LockedSet, self).add(elem)
finally:
self._lock.release()
def remove(self, elem):
self._lock.acquire()
try:
super(LockedSet, self).remove(elem)
finally:
self._lock.release()
```
So, of course only add() and remove() are thread-safe in this implementation. The other methods are not, because they were not overridden in the subclass.
Now, the pattern is pretty simple: acquire lock, call original method, release lock.
If I follow the logic above, I would have to override all methods exposed by `set` in essentially the same way, e.g.:
(pseudo-code)
```
def <method>(<args>):
1. acquire lock
2. try:
3. call original method passing <args>
4. finally:
5. release lock
```
(/pseudo-code)
This is not only tedious but also prone to errors. So, any ideas/suggestions on how to approach this in a better way?
|
2012/11/28
|
[
"https://Stackoverflow.com/questions/13610654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15931/"
] |
[Indeed, see the comments, it is not true]
If you are running CPython you can see from the set source code that it doesn't release the GIL (http://hg.python.org/cpython/file/db20367b20de/Objects/setobject.c) so all its operations should be atomic.
If that is all you need and you are sure your code will run on CPython, you can just use it directly.
|
You can implement your own context manager:
```
class LockableSet:
def __enter__(self):
self.lock()
return self
def __exit__(self, exc_type, exc_value, traceback):
#Do what you want with the error
self.unlock()
with LockableSet() as s:
s.whatever()
raise Exception()
```
No matter what, the object's `__exit__` method will be called at the end. More detailed information is available [here](http://docs.python.org/2/reference/compound_stmts.html#the-with-statement) (Python official docs).
Another use for this could be a `lock` decorator for methods, like this:
```
def lock(func):
def safe_func(self, *args, **kwargs):
with self:
func(self, *args, **kwargs)
return safe_func
```
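As a rough sketch of how the context manager and the decorator might fit together; the minimal `LockableSet` and the choice of wrapped methods here are illustrative, not part of the original question:
```
import threading

class LockableSet(set):
    def __init__(self, *args, **kwargs):
        self._lock = threading.Lock()
        super(LockableSet, self).__init__(*args, **kwargs)

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self._lock.release()

def lock(func):
    def safe_func(self, *args, **kwargs):
        with self:
            return func(self, *args, **kwargs)
    return safe_func

class SafeSet(LockableSet):
    # wrap only the methods that need locking
    add = lock(set.add)
    remove = lock(set.remove)

s = SafeSet()
s.add(1)
s.remove(1)
```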
| 8,831
|
11,965,655
|
I have created a utility program for copying files in Python. Everything works fine, but when I start copying any file larger than 2 GB, the whole system hangs. It seems to me that it might be a memory leak issue.
I have tried:
* Copying it using Shutil Module
* Using Lazy operation by copying Chunks of Bytes
* Copying Files data LINE by LINE
* Using Fileinput Module
* Adjusting Buffer million times
* Writing copy file part with C and then Extending it with python.
...but none of this has worked.
Here are links to my [File Script](http://dl.dropbox.com/u/16634607/schizocopy.py) and my [GUI Script](http://sourceforge.net/projects/schizocopy/files/latest/download):
I'm using Windows 7 with 2 Gb of RAM.
Can anyone help please?
|
2012/08/15
|
[
"https://Stackoverflow.com/questions/11965655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1599825/"
] |
Since you only have 2 GB of memory when you copy a file that's larger than your memory, it causes issues. Don't load the entire file into memory. Instead, I would do something like:
```
with open(myLargeFile) as f:
with open(myOtherLargeFile, "w") as fo:
for line in f:
fo.write(line)
```
Since this can potentially take a long time, you should put this into a separate thread from your GUI or the GUI will appear to hang. Here are a couple links on that topic for wxPython:
* <http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/>
* <http://wiki.wxpython.org/LongRunningTasks>
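For very large binary files, a chunked copy is a hedged alternative sketch that also keeps memory use flat (`src` and `dst` are placeholder paths):
```
CHUNK = 1024 * 1024  # read 1 MiB at a time

with open(src, 'rb') as fin:
    with open(dst, 'wb') as fout:
        while True:
            data = fin.read(CHUNK)
            if not data:
                break
            fout.write(data)
```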
|
A good approach for this problem is:
* use multiprocessing or multithreading
* split file into chunks
* use python dbm for storing which chunk belongs to which filename, filepath and chunk offset( for file.seek function)
* create queue for read and write chunks
| 8,836
|
48,441,737
|
I have a Raspberry Pi and I have installed Docker on it. I have made a Python script to read the GPIO status with it. So when I run the below command
```
sudo docker run -it --device /dev/gpiomem app-image
```
It runs perfectly and shows the gpio status. Now I have created a `docker-compose.yml` file as I want to deploy this `app.py` to the swarm cluster which I have created.
Below is the content of `docker-compose.yml`
```
version: "3"
services:
app:
image: app-image
deploy:
mode: global
restart: always
privileged: true
```
When I start the deployment using `sudo docker stack deploy` command, the image is deployed but it gives error:
```
No access to /dev/mem. Try running as root
```
So it says that it does not have access to `/dev/mem`, which is very strange since I am using `device`; why does the service not have access? It also says to try running as root, but I think the containers already run as root. I also tried giving full permissions to the file by including the command `chmod 777 /dev/gpiomem` in the code, but it still shows this error.
My main question is: when it runs fine using the `docker run..` command, why does it show this error when the `docker-compose` file is deployed using `sudo docker stack deploy`? How can I resolve this issue?
Thanks
|
2018/01/25
|
[
"https://Stackoverflow.com/questions/48441737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9267000/"
] |
Adding devices, capabilities, and using privileged mode are not supported in swarm mode. Those options in the yml file exist for using `docker-compose` instead of `docker stack deploy`. You can track the progress on getting these features added to swarm mode in [github issue #24862](https://github.com/moby/moby/issues/24862).
Since all you need to do is access a device, you may have luck adding the file for the device as a volume, but that's a shot in the dark:
```
volumes:
- /dev/gpiomem:/dev/gpiomem
```
|
As stated in [docker-compose devices](https://docs.docker.com/compose/compose-file/#devices)
>
> Note: This option is ignored when deploying a stack in swarm mode with
> a (version 3) Compose file.
>
>
>
The devices option is ignored in swarm. You can use `privileged: true` which will give access to all devices.
| 8,837
|
19,331,093
|
I am trying to create a legend for a plot with variable sets of data. There are at least 2, and at most 5. The first two will always be there, but the other three are optional, so how can I create a legend for only the existing number of data sets?
I've tried if-statements to tell Python what to do if that variable doesn't exist, but to no avail. Perhaps this is not the proper way to determine whether a variable exists.
```
line1 = os.path.basename(str(os.path.splitext(selectedFiles[0])[0]))
line2 = os.path.basename(str(os.path.splitext(selectedFiles[1])[0]))
if selectedFiles[2] in locals:
line3 = os.path.basename(str(os.path.splitext(selectedFiles[2])[0]))
else: line3 = None
if selectedFiles[3] in locals:
line4 = os.path.basename(str(os.path.splitext(selectedFiles[3])[0]))
else: line4 = None
if selectedFiles[4] in locals:
line5 = os.path.basename(str(os.path.splitext(selectedFiles[4])[0]))
else:line5 = None
legend((line1, line2, line3, line4, line5), loc='upper left')
```
Here is the error I am getting:
```
if selectedFiles[2] in locals:
IndexError: tuple index out of range
```
It is possible that there are multiple issues with this code (not sure if the "None" is the right way to handle the non-existent data). Please bear in mind that I am new to Python with little programming experience otherwise, so bear with me and try not to condescend, as some more experienced users tend to do.
|
2013/10/12
|
[
"https://Stackoverflow.com/questions/19331093",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2873277/"
] |
Because selectedFiles is a tuple and the logic for processing each item inside it is the same, you can iterate over it in one pass (here, with a list comprehension).
```
lines = [os.path.basename(str(os.path.splitext(filename)[0])) for filename in selectedFiles]
#extend lines' length to 5 and fill the space with None
lines = lines + [None] * (5-len(lines))
legend(lines,loc='upper left')
```
|
I have no idea what your data structures look like, but it looks like you just want
```
lines = (os.path.basename(str(os.path.splitext(x)[0])) for x in selectedFiles)
legend(lines, loc='upper left')
```
| 8,838
|
56,106,783
|
I am building a dockerfile with the `docker build .` command. While building, I am experiencing the following error:
```
Downloading/unpacking requests
Cannot fetch index base URL http://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement requests
No distributions at all found for requests
```
Here is the dockerfile:
```
FROM jonasbonno/rpi-grovepi
RUN pip install requests
RUN git clone https://github.com/keyban/fogservice.git #update
ENTRYPOINT ["python"]
CMD ["fogservice/service.py"]
```
What might be the problem?
|
2019/05/13
|
[
"https://Stackoverflow.com/questions/56106783",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11439964/"
] |
You have a pip problem, not a Docker problem. You need to add `pip install --index-url https://pypi.python.org/simple/ --upgrade pip` to your Dockerfile:
```
FROM jonasbonno/rpi-grovepi
RUN pip install --index-url https://pypi.python.org/simple/ --upgrade pip
RUN hash -r
RUN pip install requests
RUN git clone https://github.com/keyban/fogservice.git #update
ENTRYPOINT ["python"]
CMD ["fogservice/service.py"]
```
You can find the solution here: [pip connection failure: cannot fetch index base URL http://pypi.python.org/simple/](https://stackoverflow.com/questions/21294997/pip-connection-failure-cannot-fetch-index-base-url-http-pypi-python-org-simpl)
|
#### Legacy problem
In Python 2.7, a pip install of *Pylons* threw the same error. I then read somewhere that upgrading *pip* could help, but doing so in the container's bash gives the same error again, now for *pip* itself:
```bash
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement pip in /usr/lib/python2.7/dist-packages
Downloading/unpacking pip
Cleaning up...
No distributions at all found for pip in /usr/lib/python2.7/dist-packages
Storing debug log for failure in /root/.pip/pip.log
```
And when you try upgrading pip in the Dockerfile, you get the same output, but at the end:
```bash
The command '/bin/sh -c pip install --index-url https://pypi.python.org/simple/ --upgrade pip' returned a non-zero code: 1
```
See also the two comments at the end of [pip connection failure: cannot fetch index base URL http://pypi.python.org/simple/](https://stackoverflow.com/a/46963820/11154841)
The error is thrown only from within your container / when building the image; in your workspace, you can install the upgrade or the packages fine. The container's pip can no longer download from the internet since the [TLS security changed](https://stackoverflow.com/a/49901622/11154841).
#### Answer 1: load from your own *pip* server
I saw a fix in a legacy docker setup, likely for the same reason as in the question. This is not for sure, though, I have not done it myself:
Download the *pip* installers of the packages you need in your workspace, upload them to a local server with your own certificate, and then point a Python `setup.py` at that server: use `from setuptools import setup, find_packages` and call the built-in `setup(...)` function to *pip install* the needed packages.
In short: Do not upgrade the *pip* installer for this, instead, just have the needed tarballs on your own server to get them without TLS problems, and take the `setuptools` module in a python file for this.
#### Answer 2: take a younger ubuntu base image in the Dockerfile (recommended)
Neither upgrading pip (which should rarely make sense in any Dockerfile) nor uploading outdated pip installers to your own server and fetching them with a `setuptools` Python module is a good solution. For such a legacy project, try changing the base image, and perhaps even the Python version. I changed the line `FROM ubuntu:14.04` to 15.04 and 16.04, and then 18.04 worked for upgrading pip (which was just a test; I do not need to upgrade it anyway). Then I tried `pip install Pylons` and all of the other needed packages in the container's bash, and it worked. When all of the versions were clear and the app worked, I added the pip installations to a requirements file that gets loaded in the Dockerfile.
| 8,841
|
19,312,270
|
Is there any way to break a string based on a punctuation word?
```
#!/usr/bin/python
#Asking user to Enter a line in specified format
myString=raw_input('Enter your String:\nFor Example:I am doctor break I stays in CA break you can contact me on +000000\n')
# 'break' is punctuation word
<my code which breaks the user input based on break word and returns output in different lists>
```
Expecting output like
`String1:I am doctor`
`String2:I stays in CA`
`String3:you can contact me on +000000`
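For reference, a minimal sketch of one way this could work (assuming the word `break` is always surrounded by single spaces):
```
parts = [p.strip() for p in myString.split(' break ')]
for i, p in enumerate(parts, 1):
    print 'String%d:%s' % (i, p)
```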
|
2013/10/11
|
[
"https://Stackoverflow.com/questions/19312270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1699472/"
] |
If you have written your JavaScript code, make sure that you `return false` from the JS code if the form is not valid, and in the aspx file you need to use return as follows:
```
<asp:Button runat="server" id="btnLogin" Text="Login"
OnClientClick="return Validate()"/>
```
Edit -1
-------
There is a chance that your button is not the default one so the page get refreshed when you press the enter button. For this use code below
```
<asp:Panel ID="p" runat="server" DefaultButton="myButton">
<%-- Text boxes here --%>
<asp:Button ID="myButton" runat="server" />
</asp:Panel>
```
some useful links
1. <http://www.hanselman.com/blog/ASPNETHowToCreateADefaultEnterButtonForFormsPostBacks.aspx>
2. [how to set a default 'enter' on a certain button](https://stackoverflow.com/questions/7638119/how-to-set-a-default-enter-on-a-certain-button)
|
You need to add an attribute to the button (btnLogin): `OnClientClick="Validate()"`.
Like:
```
<asp:Button runat="server" id="btnLogin" Text="Login"
OnClientClick="Validate()"/>
```
Define the JavaScript function `Validate()` and return false if your form value is not valid.
| 8,842
|
52,175,927
|
I am coming from a C# background and Python's Asyncio library is confusing me.
I have read the following [1](https://stackoverflow.com/questions/37278647/fire-and-forget-python-async-await/37345564#37345564) [2](https://stackoverflow.com/questions/33357233/when-to-use-and-when-not-to-use-python-3-5-await/33399896#33399896), yet the use of asyncio remains unclear to me.
I am trying to make an asynchronous website **scraper** in Python.
```
async def requestPage(url):
request = requests.get(url, headers=headers)
soup = BeautifulSoup(request.content, 'html.parser')
return soup
async def main():
#****** How do I run an async task and store its result to use in another task?
index_soup = asyncio.ensure_future(requestPage(index_url))
res = asyncio.gather(index_soup)
currency_urls = res.select('a[href^="/currencies"]')
print(currency_urls)
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(main())
finally:
loop.close()
```
|
2018/09/05
|
[
"https://Stackoverflow.com/questions/52175927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8714371/"
] |
As the **requests** library is not asynchronous, you can use the [run\_in\_executor](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) method so it won't block the running thread. As a result, you can define `requestPage` as a regular function and call it in the `main` function like this:
`res = await asyncio.gather(loop.run_in_executor(None, requestPage, url))`
The blocking function will run in a separate executor, while the control will be returned to the event loop.
Or you can try to use async HTTP client library, like [aiohttp](https://aiohttp.readthedocs.io/en/stable/).
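For illustration, a minimal runnable sketch of that approach (the URL is a placeholder, and `requests` plus `beautifulsoup4` are assumed to be installed):
```
import asyncio
import requests
from bs4 import BeautifulSoup

def request_page(url):
    # plain blocking function; requests is synchronous
    response = requests.get(url)
    return BeautifulSoup(response.content, 'html.parser')

async def main(loop):
    # run the blocking call in the default thread-pool executor
    soup = await loop.run_in_executor(None, request_page, 'https://example.com')
    print(soup.title)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
```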
|
Ok, I think I found a basic solution.
```
async def requestPage(url):
request = requests.get(url, headers=headers)
soup = BeautifulSoup(request.content, 'html.parser')
return soup
async def getValueAsync(func, param):
# Create new task
task = asyncio.ensure_future(func(param))
# Execute task. This returns a list of tasks
await asyncio.gather(task)
# Get result from task
return task.result()
async def main():
soup = await getValueAsync(requestPage, index_url)
    print(soup.encode("utf-8"))
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(main())
finally:
loop.close()
```
I wrote a wrapper that allows me to call the function asynchronously and store the result.
| 8,846
|
57,417,108
|
I have to parse the following file in python:
```
20100322;232400;1.355800;1.355900;1.355800;1.355900;0
20100322;232500;1.355800;1.355900;1.355800;1.355900;0
20100322;232600;1.355800;1.355800;1.355800;1.355800;0
```
I need to end up with the following variables (the first line is parsed as an example):
```
year = 2010
month = 03
day = 22
hour = 23
minute = 24
p1 = Decimal('1.355800')
p2 = Decimal('1.355900')
p3 = Decimal('1.355800')
p4 = Decimal('1.355900')
```
I have tried:
```
line = '20100322;232400;1.355800;1.355900;1.355800;1.355900;0'
year = line[:4]
month = line[4:6]
day = line[6:8]
hour = line[9:11]
minute = line[11:13]
p1 = Decimal(line[16:24])
p2 = Decimal(line[25:33])
p3 = Decimal(line[34:42])
p4 = Decimal(line[43:51])
print(year)
print(month)
print(day)
print(hour)
print(minute)
print(p1)
print(p2)
print(p3)
print(p4)
```
This works fine, but I am wondering if there is an easier way to parse it (maybe using struct) to avoid having to count each position manually.
|
2019/08/08
|
[
"https://Stackoverflow.com/questions/57417108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5328289/"
] |
```
from decimal import Decimal
from datetime import datetime
line = "20100322;232400;1.355800;1.355900;1.355800;1.355900;0"
tokens = line.split(";")
dt = datetime.strptime(tokens[0] + tokens[1], "%Y%m%d%H%M%S")
decimals = [Decimal(string) for string in tokens[2:6]]
# datetime objects also have some useful attributes: dt.year, dt.month, etc.
print(dt, *decimals, sep="\n")
```
Output:
```
2010-03-22 23:24:00
1.355800
1.355900
1.355800
1.355900
```
|
You could use regex:
```
import re
to_parse = """
20100322;232400;1.355800;1.355900;1.355800;1.355900;0
20100322;232500;1.355800;1.355900;1.355800;1.355900;0
20100322;232600;1.355800;1.355800;1.355800;1.355800;0
"""
stx = re.compile(
r'(?P<date>(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2}));'
r'(?P<time>(?P<hour>\d{2})(?P<minute>\d{2})(?P<second>\d{2}));'
r'(?P<p1>[\.\-\d]*);(?P<p2>[\.\-\d]*);(?P<p3>[\.\-\d]*);(?P<p4>[\.\-\d]*)'
)
f = [{k:float(v) if 'p' in k else int(v) for k,v in a.groupdict().items()} for a in stx.finditer(to_parse)]
print(f)
```
Output:
```
[{'date': 20100322,
'day': 22,
'hour': 23,
'minute': 24,
'month': 3,
'p1': 1.3558,
'p2': 1.3559,
'p3': 1.3558,
'p4': 1.3559,
'second': 0,
'time': 232400,
'year': 2010},
{'date': 20100322,
'day': 22,
'hour': 23,
'minute': 25,
'month': 3,
'p1': 1.3558,
'p2': 1.3559,
'p3': 1.3558,
'p4': 1.3559,
'second': 0,
'time': 232500,
'year': 2010},
{'date': 20100322,
'day': 22,
'hour': 23,
'minute': 26,
'month': 3,
'p1': 1.3558,
'p2': 1.3558,
'p3': 1.3558,
'p4': 1.3558,
'second': 0,
'time': 232600,
'year': 2010}]
```
Here I stored everything in a list, but you could actually go through the results of `finditer` line by line if you don't want to store everything in memory.
You can also replace float and/or int with Decimal if needed.
| 8,847
|
10,904,629
|
Noob programmer here. I'm trying to get the SQLite3 in my Python installation up to date (I currently have version 3.6.11, whereas I need at least version 3.6.19, as that is the first version that supports foreign keys). Here's my problem, though: I have no idea how to do this. I know next to nothing about the command line, and I don't really know what files to replace (if at all) in my Python install. And before anyone asks, I'm already using the latest pysqlite version – it's SQLite itself that's not up to date. Can anyone give me some pointers, or maybe a guide on how to update this?
I'm on Mac OSX 10.5.8, working with python 2.6.
|
2012/06/05
|
[
"https://Stackoverflow.com/questions/10904629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1430987/"
] |
I suggest using the 'pip' command on the command line.
```
pip search sqlite
pip install pysqlite
```
|
<https://pip.pypa.io/en/latest/installing.html>
Run `python get-pip.py` (with the complete path if needed, e.g. `python c:\folder\get-pip.py`).
| 8,848
|
33,978,739
|
First post to the forum here. I searched for an answer but wasn't exactly sure how to phrase the search. I am currently working through "Learn Python the Hard Way", and in one of the drills he uses this code:
```
target.write(line1)
target.write("\n")
target.write(line2)
target.write("\n")
target.write(line3)
target.write("\n")
```
My objective is to fit everything into a single target.write() call rather than six, which I tried to do like this:
```
target.write(line1, "\n", line2, "\n", line3)
```
but I get an error for giving 5 arguments rather than 1. Can anyone tell me the correct syntax?
|
2015/11/29
|
[
"https://Stackoverflow.com/questions/33978739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5616687/"
] |
Using a comma sends each object as a separate argument. Concatenate them with `+` instead, or `join()` them:
```
target.write(line1 + "\n" + line2 + "\n" + line3)
```
Or:
```
target.write('\n'.join((line1, line2, line3)))
```
|
You can use Python's `str` `format`:
```
target.write('{}\n{}\n{}'.format(line1, line2, line3))
```
| 8,855
|
33,580,308
|
Hello, I am new to Python.
I was trying to find the distance between different points.
Example:
The distance between each door is about 2.5 feet, so the distance between door 1 and door 2 is 2.5 feet. How would I go about looking up two different distances in the door dictionary, or should I use something else?
```
d = {"door 1" : 2.5,"door 2" :2.5 , "door 3" : 2.5, "door 4": 2.5}
x = raw_input()
y = raw_input()
tol = 0
if x not in list and y not in list:
print 'not a door'
else:
if x in list and y in list:
tol = (list[x]) + (list[y])
print tol
```
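For reference, a minimal corrected sketch of the lookup above; note that it is the dictionary `d`, not the built-in name `list`, that should be checked and indexed:
```
d = {"door 1": 2.5, "door 2": 2.5, "door 3": 2.5, "door 4": 2.5}
x = raw_input()
y = raw_input()
if x in d and y in d:
    print d[x] + d[y]
else:
    print 'not a door'
```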
|
2015/11/07
|
[
"https://Stackoverflow.com/questions/33580308",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5533124/"
] |
Try this:
```
final ListPopupWindow listPopupWindow = new ListPopupWindow(
context);
listPopupWindow.setAdapter(new ArrayAdapter(
context,
R.layout.list_row_layout, arrayOfValues));
listPopupWindow.setAnchorView(your_view);
listPopupWindow.setWidth(300);
listPopupWindow.show();
```
|
Example:
```
findViewById(R.id.btn).setOnTouchListener(new OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
//use MotionEvent event : getX() and getY() will return your pressing
location in the button.
}
});
```
| 8,856
|
72,363,146
|
I am working on a project where I have to send arguments via the command line to a Python file (using System Exec) and then visualize the results saved in a folder after the Python file finishes executing. I need this to happen with a single button click, so my question is whether there is any way to realize this scenario, or perhaps a way to order the events.
[](https://i.stack.imgur.com/RmwtK.png)
Now I have added a flat sequence structure to the block diagram so I can order the events, but I have an issue making the program (the Python file) run every time I press the Test button (it only runs the first time I click the Test button). I tried using a while loop, but I couldn't execute it again unless I restarted the program.
|
2022/05/24
|
[
"https://Stackoverflow.com/questions/72363146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18449615/"
] |
The way you phrased your question makes me think that you want to wait until the command you call via system exec finished and then run some code. You could simply use a sequence structure for this.
However, if you need to do this *asynchronously*, i.e. launch the command and get an event when the command finished so you can draw the results, you will need to resort to asynchronous techniques like "Start Asynchronous Call" and "Wait On Asynchronous Call", or for example queues and a separate code area for the background-work.
[](https://i.stack.imgur.com/tjRZw.png)
|
Use "wait until completion?" input of System Exec function to make sure the script finished execution, then proceed with the results visualization part.
| 8,857
|
69,856,536
|
**Goal:**
Using python, I want to create a service account in a project on the Google Cloud Platform and grant that service account one role.
**Problem:**
The docs explain [here](https://cloud.google.com/iam/docs/granting-changing-revoking-access#grant-single-role) how to grant a single role to the service account. However, it seems to be only possible by using the Console or the gcloud tool, not with python. The alternative for python is to update the whole IAM policy of the project to grant the role for the single service account and overwrite it (described [here](https://cloud.google.com/iam/docs/granting-changing-revoking-access#multiple-roles)). *However*, overwriting the whole policy seems quite risky because in case of an error the policy of the whole project could be lost. *Therefore I want to avoid that.*
**Question:**
I'm creating a service account using the python code provided [here in the docs](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating). Is it possible to grant the role already while creating the service account with this code or in any other way?
|
2021/11/05
|
[
"https://Stackoverflow.com/questions/69856536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13439686/"
] |
Creating a service account, creating a service account key, downloading a service account JSON key file, and granting a role are separate steps. There is no single API to create a service account and grant a role at the same time.
Any time you update a project's IAM bindings, there is risk. Google prevents multiple applications from updating IAM at the same time. It is possible to lock everyone (users and services) out of a project by overwriting the policy with no members.
I recommend that you create a test project and develop and debug your code against that project. Use credentials that have no permissions to your other projects. Otherwise use the CLI or Terraform to minimize your risks.
The API is very easy to use provided that you understand the API, IAM bindings, and JSON data structures.
|
As mentioned in John’s answer, you should be very careful when manipulating the IAM module, if something goes wrong it could end in services completely inoperable.
Here is a Google document on [manipulating IAM resources using the REST API](https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy).
The owner role can be granted to a user, serviceAccount, or a group that is part of an organization. For example, group@myownpersonaldomain.com could be added as an owner to a project in the myownpersonaldomain.com organization, but not the examplepetstore.com organization.
| 8,858
|
63,695,246
|
Is there a fast possibility to reverse a binary number in python?
Example: I have the number 11, in binary 0000000000001011 with 16 bits. Now I'm searching for a **fast** function f which returns 1101000000000000 (decimal 53248). Full lookup tables are not a solution since I want it to scale to 32-bit numbers. Thank you for your effort.
Edit:
**Performance**: I tested the code on all 2^16 patterns several times.
* Winner: the partial lookup tables, 30 ms.
* 2nd: `int(format(num, '016b')[::-1], 2)` from the comments, 56 ms.
* 3rd: `x = ((x & 0x00FF) << 8) | (x >> 8)`, 65 ms.
* I did not expect my own approach to be so horribly slow, but it is: approx. 320 ms. A small improvement comes from using + instead of |: 300 ms.
* `bytes(str(num).encode('utf-8'))` fought for 2nd place, but somehow the code did not produce valid answers, most likely because I made a mistake transforming the result back into an integer.

Thank you very much for your input. I was quite surprised.
|
2020/09/01
|
[
"https://Stackoverflow.com/questions/63695246",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8080648/"
] |
This might be faster using small 8-bit lookup table:
```
num = 11
# One time creation of 8bit lookup
rev = [int(format(b, '08b')[::-1], base=2) for b in range(256)]
# Run for each number to be flipped.
lower_rev = rev[num & 0xFF] << 8
upper_rev = rev[(num & 0xFF00) >> 8]
flipped = lower_rev + upper_rev
```
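If scaling up is needed, the same 8-bit table extends naturally to 32-bit values; a sketch reusing the `rev` table above:
```
def reverse32(x):
    # reverse each of the four bytes, then swap their order
    return ((rev[x & 0xFF] << 24) |
            (rev[(x >> 8) & 0xFF] << 16) |
            (rev[(x >> 16) & 0xFF] << 8) |
            (rev[(x >> 24) & 0xFF]))
```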
|
My current approach is to access the bits via bit shifting and mask and to shift them in the mirror number until they reach their destination. Still I have the feeling that there is room for improvement.
```
num = 11
print(format(num, '016b'))
right = num
left = 0
for i in range(16):
tmp = right & 1
left = (left << 1 ) | tmp
right = right >> 1
print(format(left, '016b'))
```
| 8,859
|
52,553,757
|
I am a newcomer to Python. I want to implement a "for" loop over the elements of a dataframe, with an embedded "if" statement.
Code:
```
import numpy as np
import pandas as pd
#Dataframes
x = pd.DataFrame([1,-2,3])
y = pd.DataFrame()
for i in x.iterrows():
for j in x.iteritems():
if x>0:
y = x*2
else:
y = 0
```
With the previous loop, I want to go through each item in the x dataframe and generate a new dataframe y based on the condition in the "if" statement. When I run the code, I get the following error message.
```
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
Any help would be much appreciated.
|
2018/09/28
|
[
"https://Stackoverflow.com/questions/52553757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1978243/"
] |
In pandas it is best to avoid loops when a vectorized solution exists:
```
x = pd.DataFrame([1,-2,3], columns=['a'])
y = pd.DataFrame(np.where(x['a'] > 0, x['a'] * 2, 0), columns=['b'])
print (y)
b
0 2
1 0
2 6
```
**Explanation**:
First compare column by value for boolean mask:
```
print (x['a'] > 0)
0 True
1 False
2 True
Name: a, dtype: bool
```
Then use [`numpy.where`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) for set values by conditions:
```
print (np.where(x['a'] > 0, x['a'] * 2, 0))
[2 0 6]
```
And last use `DataFrame` constructor or create new column:
```
x['new'] = np.where(x['a'] > 0, x['a'] * 2, 0)
print (x)
a new
0 1 2
1 -2 0
2 3 6
```
|
You can try this:
```
y = (x[(x > 0)]*2).fillna(0)
```
| 8,862
|
20,629,561
|
#### With many started postgresql services, psql chooses the lowest postgresql version
I have installed two versions of postgresql, `12` and `13` (in an earlier version of this question these were `9.1` and `9.2`; I changed this to match the output details added from the newer versions).
```
sudo service postgresql status
12/main (port 5432): down
13/main (port 5433): down
```
They are located at `/etc/postgresql/12/` and `/etc/postgresql/13/`.
After installing an extension on version `13`:
```
sudo apt-get install postgresql-contrib postgresql-plpython3-13
```
start the postgresql service:
```
sudo service postgresql start
```
which outputs:
```
* Starting PostgreSQL 12 database server
* Starting PostgreSQL 13 database server
```
Now let us create the extension in the database, running:
```
sudo su - postgres
```
and then:
```
postgres=# psql
psql (13.4 (Ubuntu 13.4-1.pgdg20.04+1), server 12.7 (Ubuntu 12.7-0ubuntu0.20.04.1))
Type "help" for help.
postgres=# CREATE EXTENSION plpython3u;
ERROR: could not open extension control file "/usr/share/postgresql/12/extension/plpython3u.control": No such file or directory
```
We see that the extension is searched for under version `12`, although I installed the `plpython3u` extension package for version `13`.
#### Aim
I want to use version `13` only; I don't need two different versions. psql seems to connect to the lowest-versioned running service by default, not the highest, which is what I need.
How to either remove version `12` safely or make `13` the only started (or default) service, also using the standard port `5432` for version `13`?
|
2013/12/17
|
[
"https://Stackoverflow.com/questions/20629561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2813589/"
] |
This situation with two clusters in Ubuntu may happen when upgrading to a newer release providing an newer postgresql version.
The automatic upgrade does not remove the old cluster, presumably for fear of erasing valuable data (which is wise because some postgres upgrades may require human work to be complete).
If you know you want to drop it, just run:
```
sudo pg_dropcluster --stop 9.1 main
```
The corresponding data directory will be removed and `service postgresql` will no longer refer to 9.1
At this point the 9.2 cluster will still use the port 5433, which is unpractical.
To switch it to the default port, edit `/etc/postgresql/9.2/main/postgresql.conf` and change the line `port = 5433` to `port = 5432`
Then restart PostgreSQL.
Finally to get rid of the postgresql-9.1 packages see the result of `dpkg -l 'postgresql*9.1*'`
|
`psql` fails because none of your Postgres servers are running.
First, you should understand **why** there are 2 different servers, then delete one of them (through `apt-get`, I think), and if necessary reconfigure the other (if you type `sudo service postgresql start`, both servers will start, and to connect to 9.2 you must use `psql --port=5433`).
Edit your question to add more information (version of Ubuntu, origin of Postgres, etc.)...
| 8,863
|
47,684,408
|
New to web development here, and I need some help figuring out the basics. I have a website right now, which is working fine, on a VPS with Ubuntu 16.04 and Apache. Say I would like a converter on my site or in a mobile application, with a Python script on my server doing all the work. How can I send the request to the Python program, and how can the page/application receive the data back (like a download link, or info such as JSON)?
Brief pointers would suffice; I don't need a thorough explanation, mostly names of protocols, etc., and maybe some sample Hello World code.
I should add that my website runs on WordPress and I am not looking to change that yet. I want to create a web app within WordPress, with the app written in Python.
|
2017/12/06
|
[
"https://Stackoverflow.com/questions/47684408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5540416/"
] |
There are a few things you need to take into consideration.
**Centralising this element**
To address your issue, here are a couple of methods for centralising this element:
**Option 1.**
You can make the entire `span` width 100% and center the text within it by adding this to `#header-content`:
```
width: 100%;
display: block;
text-align:center;
```
As shown here: <https://codepen.io/anon/pen/RjmMjQ>
**Option 2.**
You can remove the absolute positioning, use `display:inline-block`, and, adding `text-align:center;` on the parent element like so:
<https://codepen.io/anon/pen/wPbmrQ>
**Option 3.**
If the element is position:absolute; you can also use:
```
left: 50%;
transform: translateX(-50%);
```
**Positioning the element relative to an image:**
You also said you would like this element to be positioned inside the border. Because this border is set on the image itself, you will need to position your element so it is relative to the image.
To make sure everything lines up correctly with padding, we need to set the following; otherwise padding & CSS borders are not included in any width you set:
```
*{
box-sizing: border-box;
}
```
Then you need to get the exact position right for aligning your element with the image; these values seem to work:
```
width: 170px;
bottom: 18px;
left:50%;
transform: translateX(-50%);
border-radius: 0 0 12px 12px;
```
This is done for you here:
<https://codepen.io/anon/pen/YEbMYB>
|
First you need to make the image `max-width:100%` to avoid overflow; then simply adjust the left/right/bottom values, since your element is absolutely positioned, and add `text-align:center`:
```css
.elixir {
position: absolute;
top: 5px;
left: 15px;
color: white;
font-weight: bold;
font-size: 50px;
}
.level {
color: white;
font: bold 24px/45px Helvetica, Sans-Serif;
letter-spacing: -1px;
background: rgb(0, 0, 0);
/* fallback color */
background: rgba(0, 0, 0, 0.7);
padding: 10px;
text-align: center;
}
#header {
position: relative;
background: yellow;
width: 200px;
}
#header img {
max-width:100%;
}
#header-content {
position: absolute;
bottom: 5px;
left:5px;
right:5px;
text-align:center;
color: Violet ;
font: bold 24px/45px Helvetica, Sans-Serif;
letter-spacing: -1px;
background: rgb(0, 0, 0);
/* fallback color */
background: rgba(0, 0, 0, 0.7);
padding: 10px;
/* width: 100%; */
line-height: 40px;
}
```
```html
<div id="header">
<img src="https://vignette.wikia.nocookie.net/clashroyale/images/4/46/DarkPrinceCard.png/revision/latest?cb=20160702201038" />
<span class="elixir"> 4 </span>
<span id="header-content"> Level 2 </span>
</div>
```
| 8,864
|
54,298,939
|
Is there a way, without using a for loop, to split a Python string in the middle at the closest delimiter?
Like:
```
The cat jumped over the moon very quickly.
```
The delimiter would be the space and the resulting strings would be:
```
The cat jumped over
the moon very quickly.
```
I see there is a `count` method that tells me how many spaces are in the string (though I don't see how to get their indexes). I could then find the middle one by dividing by two, but then how do I split on the delimiter at that index? `find` is close, but it returns only the first index (or the rightmost one using `rfind`), not all the indexes where " " is found. I might be overthinking this.
|
2019/01/21
|
[
"https://Stackoverflow.com/questions/54298939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/327258/"
] |
I think the solutions using split are good. I tried to solve it without `split` and here's what I came up with.
```
sOdd = "The cat jumped over the moon very quickly."
sEven = "The cat jumped over the moon very quickly now."
def split_on_delim_mid(s, delim=" "):
delim_indexes = [
x[0] for x in enumerate(s) if x[1]==delim
] # [3, 7, 14, 19, 23, 28, 33]
# Select the correct number from delim_indexes
middle = len(delim_indexes)/2
if middle % 2 == 0:
middle_index = middle
else:
middle_index = (middle-.5)
# Return the separated sentances
sep = delim_indexes[int(middle_index)]
return s[:sep], s[sep:]
split_on_delim_mid(sOdd) # ('The cat jumped over', ' the moon very quickly.')
split_on_delim_mid(sEven) # ('The cat jumped over the', ' moon very quickly now.')
```
The idea here is to:
* Find the indexes of the deliminator.
* Find the median of that list of indexes
* Split on that.
|
Solutions with `split()` and `join()` are fine if you want to get half the words, not half the string (counting the characters and not the words). I think the latter is impossible without a `for` loop or a list comprehension (or an expensive workaround such as recursion to find the indexes of the spaces).
But if you are fine with a list comprehension, you could do:
```
phrase = "The cat jumped over the moon very quickly."
#indexes of separator, here the ' '
sep_idxs = [i for i, j in enumerate(phrase) if j == ' ']
#getting the separator index closer to half the length of the string
sep = min(sep_idxs, key=lambda x:abs(x-(len(phrase) // 2)))
first_half = phrase[:sep]
last_half = phrase[sep+1:]
print([first_half, last_half])
```
Here first I look for the indexes of the separator with the list comprehension. Then I find the index of the closer separator to the half of the string using a custom key for the [min()](https://docs.python.org/3.6/library/functions.html#min) built-in function. Then split.
The `print` statement prints `['The cat jumped over', 'the moon very quickly.']`
| 8,867
|
4,184,841
|
I need to color the white part surrounded by black edges!
```
from PIL import Image
import sys
image=Image.open("G:/ghgh.bmp")
data=image.load()
image_width,image_height=image.size
sys.setrecursionlimit(10115)
def f(x,y):
if(x<image_width and y<image_height and x>0 and y>0):
if (data[x,y]==255):
image.putpixel((x,y),150)
f(x+1,y)
f(x-1,y)
f(x,y+1)
f(x,y-1)
f(x+1,y+1)
f(x-1,y-1)
f(x+1,y-1)
f(x-1,y+1)
f(100,100)
image.show()
```
255 detects the white color, 150 is used for the greyish re-coloring, and (100,100) is the starting pixel.
It gives "maximum recursion depth exceeded" at n=10114, and Python crashes at n=10115 (n being the `setrecursionlimit(n)` argument).
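A common fix for recursion-limit crashes in flood fills is an explicit stack instead of recursion; this is only a sketch along the lines of the code above, with the function name and defaults chosen for illustration:
```
def flood_fill(data, width, height, start, target=255, replacement=150):
    # iterative flood fill: an explicit stack sidesteps Python's recursion limit
    stack = [start]
    while stack:
        x, y = stack.pop()
        if 0 < x < width and 0 < y < height and data[x, y] == target:
            data[x, y] = replacement
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1),
                          (x + 1, y + 1), (x - 1, y - 1),
                          (x + 1, y - 1), (x - 1, y + 1)])
```
Called as `flood_fill(data, image_width, image_height, (100, 100))`, it replaces the recursive `f` above.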
|
2010/11/15
|
[
"https://Stackoverflow.com/questions/4184841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/508292/"
] |
Some warnings provide a description, others may not. Maybe the documentation is just incomplete or the authors figure some warnings don't need additional clarification?
|
If the StyleCop check is run during a build (MSBuild/VS build), the Show Help option won't appear; however, if you right-click the solution and run the StyleCop check from there, it will have the Show Help option.
Hope that helps.
| 8,877
|
29,775,006
|
I've been writing automated tests with Selenium WebDriver 2.45 in Python. To get through some of the things I need to test, I must retrieve the various `JSESSION` cookies that are generated by the site. When I use webdriver's `get_cookies()` function with Firefox or Chrome, all of the needed cookies are returned to me. When I do the same thing with IE11, I do not see the cookies that I need. Does anyone know how I can retrieve session cookies from IE?
|
2015/04/21
|
[
"https://Stackoverflow.com/questions/29775006",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3249517/"
] |
What you describe sounds like an issue I ran into a few months ago. My tests ran fine with Chrome and Firefox but not in IE, and the problem was cookies. Upon investigation what I found is that my web site had set its session cookies to be [HTTP-only](https://en.wikipedia.org/wiki/HTTP_cookie#Secure_and_HttpOnly). When a cookie has this flag turned on, the browser will send the cookie over the HTTP(S) protocol and allow it to be set by the server in responses but it will make the cookie inaccessible to JavaScript. (Which is consistent with your comment that you cannot see the cookies you want in `document.cookie`.) It so happens that when you use Selenium with Chrome or Firefox, Selenium is able to ignore this flag and obtain the cookies from the browser anyway. However, it cannot do the same with IE.
I worked around this issue by turning off the HTTP-only flag when running my site in testing mode. I use Django for my server so I had to create a special `test_settings.py` file with `SESSION_COOKIE_HTTPONLY = False` in it.
|
There is an open issue with IE and Safari. Those drivers will not return correct cookie information, at least not the domain. See [this](https://code.google.com/p/selenium/issues/detail?id=8509)
| 8,878
|
60,992,133
|
I am trying to get my PTZ camera to stream using Python 3 and OpenCV.
The URL I use in the code works for streaming in VLC, but not in the code.
```
import cv2
import numpy as np
cap = cv2.VideoCapture(src="rtsp://USER:PASS@XX.XXX.XXX.XXX:XXX/Streaming/Channels/101/")
FRAME_WIDTH = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
FRAME_HIGTH = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print('Frame Size: ', FRAME_WIDTH, 'x', FRAME_HIGTH)
if cap.isOpened():
ret, frame = cap.read()
else:
ret = False
while ret:
ret, frame = cap.read()
cv2.imshow('Camera', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
```
When I run it I get the following error:
```
Traceback (most recent call last): File "C:/Users/.../CameraTest/TEST.py", line 4, in <module>
cap = cv2.VideoCapture(src="rtsp://.../Streaming/Channels/101/") TypeError: Required argument 'index' (pos 1) not found
```
This is a HIKVISION PTZ camera.
Can I please get any tips on how to get it to stream?
Thanks in advance.
|
2020/04/02
|
[
"https://Stackoverflow.com/questions/60992133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13198103/"
] |
One small change: remove **src=** from the cv2.VideoCapture() call.
It should look like this:
```
cap = cv2.VideoCapture("rtsp://USER:PASS@XX.XXX.XXX.XXX:XXX/Streaming/Channels/101/")
```
|
This is working for me with a Hikvision camera. Avoid using special characters in the password.
```
import cv2
cap = cv2.VideoCapture('rtsp://username:password@10.199.27.123:554')
while True:
ret, img = cap.read()
cv2.imshow('video output', img)
k = cv2.waitKey(10)& 0xff
if k == 27:
break
cap.release()
cv2.destroyAllWindows()
```
| 8,879
|
67,740,665
|
The script, running on a Linux host, should call some Windows hosts holding Oracle databases. Each Oracle database is registered in DNS with the name "db-[ORACLE\_SID]".
Let's say you have a database with ORACLE SID `TEST02`; it can be resolved as `db-TEST02`.
The complete script is doing some more stuff, but this example is sufficient to explain the problem.
The `db-[SID]` hostnames must be added as dynamic hosts to be able to parallelize the processing.
The problem is that oracle\_databases is not passed to the new playbook. It works if I change the hosts from `windows` to `localhost`, but I need to analyze something first and get some data from the windows hosts, so this is not an option.
Here is the script:
```yaml
---
# ansible-playbook parallel.yml -e "databases=TEST01,TEST02,TEST03"
- hosts: windows
gather_facts: false
vars:
ansible_connection: winrm
ansible_port: 5985
ansible_winrm_transport: kerberos
ansible_winrm_kerberos_delegation: true
tasks:
- set_fact:
database: "{{ databases.split(',') }}"
- name: Add databases as hosts, to parallelize the shutdown process
add_host:
name: "db-{{ item }}"
groups: oracle_databases
loop: "{{ database | list}}"
##### just to check, what is in oracle_databases
- name: show the content of oracle_databases
debug:
msg: "{{ item }}"
with_inventory_hostnames:
- oracle_databases
- hosts: oracle_databases
gather_facts: true
tasks:
- debug:
msg:
- "Hosts, on which the playbook is running: {{ ansible_play_hosts }}"
verbosity: 1
```
My inventory file is small for now, but there will be more Windows hosts in the future:
```
[adminsw1@obelix oracle_change_home]$ cat inventory
[local]
localhost
[windows]
windows68
```
And the output
```
[adminsw1@obelix oracle_change_home]$ ansible-playbook para.yml -l windows68 -e "databases=TEST01,TEST02"
/usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
/usr/lib/python2.7/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.23) or chardet (2.2.1) doesn't match a supported version!
RequestsDependencyWarning)
PLAY [windows] *****************************************************************************************************************************
TASK [set_fact] ****************************************************************************************************************************
ok: [windows68]
TASK [Add databases as hosts, to parallelize the shutdown process] *************************************************************************
changed: [windows68] => (item=TEST01)
changed: [windows68] => (item=TEST02)
TASK [show the content of oracle_databases] ************************************************************************************************
ok: [windows68] => (item=db-TEST01) => {
"msg": "db-TEST01"
}
ok: [windows68] => (item=db-TEST02) => {
"msg": "db-TEST02"
}
PLAY [oracle_databases] ********************************************************************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************************************************************************
windows68 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
2021/05/28
|
[
"https://Stackoverflow.com/questions/67740665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15623827/"
] |
The first thing that comes to mind for me would be to use the append feature on Python file handlers. You could do something like this for each line of text:
```py
def writecond(text, cond):
fname = cond + '.txt'
with open(fname, 'a') as file:
file.write(text)
```
Another thing you could do is have a `dict` which maps you condition text to a list of open file handlers (although I think there might be a hard limit to the number of handlers you can have on some systems), but just be sure to close all of them before your function exits!
EDIT:
If you want the dictionary case, here's the code for that:
```py
fh_assign = {}
def writeline(text, condition):
if condition not in fh_assign.keys():
fh = open(f'{condition}.txt', 'w')
fh.write(text)
fh_assign[condition] = fh
else:
fh_assign[condition].write(text)
```
Once you're done with the calls to `writeline`, just iterate through the dict as follows and close all the handles.
```py
for fh in fh_assign.values():
    fh.close()
```
|
I figured out one option for handling file objects dynamically and keeping track of them.
```
file_handler = {}
with open(file) as f:
    for line in f:
        value, key = line.split()[0], line.split()[1]
        # open a handle the first time each key is seen
        if key not in file_handler:
            file_handler[key] = open(key, "w")
        file_handler[key].write(value)
# the with-block closes f automatically; close the per-key handles
for fh in file_handler.values():
    fh.close()
```
| 8,880
|
74,195,370
|
Hello, I'm new to Python and I want to know how to read multiple strings of input into a list. I already tried appending the input to the list, but it doesn't give me the expected output. Here is the source code:
```
test=[]
input1=input("Enter multiple strings: ")
splitinput1=input1.split()
for x in range(len(splitinput1)):
test.append(splitinput1)
print(test)
print(len(test))
```
And the output is not what I expected:
```
Enter multiple strings: A B C
[['A', 'B', 'C'], ['A', 'B', 'C'], ['A', 'B', 'C']]
3
```
However, when I change it to `print(splitinput1)`, it gives me the expected output:
```
Enter multiple strings: A B C
['A', 'B', 'C']
3
```
So how do I make the output look like `print(splitinput1)` while using `print(test)`, and what's missing in my code? Thank you.
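For reference, a minimal sketch of the fix: append each element instead of appending the whole split list on every iteration:
```
test = []
input1 = input("Enter multiple strings: ")
for word in input1.split():
    test.append(word)  # append one word, not the whole list
print(test)
print(len(test))
```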
|
2022/10/25
|
[
"https://Stackoverflow.com/questions/74195370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19363291/"
] |
>
> reducing vertex data down to 32bits per vertex is as far as the GPU will allow
>
>
>
You seem to think that vertex buffer sizes are what's holding you back. Make no mistake here, they are not. You have many gigs of VRAM to work with, use them if it will make your code faster. Specifically, anything you're unpacking in your shaders that could otherwise be stored explicitly in your vertex buffer should probably be stored in your vertex buffer.
>
> I am wondering if anyone has experience with using geometry shaders to auto-generate quads
>
>
>
I'll stop you right there, geometry shaders are very inefficient in most driver implementations, even today. They just aren't used that much so nobody bothered to optimize them.
One quick thing that jumps out at me is that you're allocating and freeing your system-side vertex array every frame. Building it is fine, but cache the array; C memory allocation is about as slow as anything is going to get. A quick profile should have shown you that.
Your next biggest problem is that you have *a lot* of branching in your pixel shader. Use standard functions (like `clamp` or `mix`) or blending to let the math cancel out instead of checking for ranges or fully transparent values. Branching will absolutely kill performance.
And lastly, make sure you have the correct hints and usage on your buffers. You don't show them, but they should be set to whatever the equivalent of `GL_STREAM_DRAW` is, and you need to ensure you don't corrupt the in-flight parts of your vertex buffer. Future frames will render at the same time as the current one as long as you don't invalidate their data by overwriting their vertex buffer, so instead use a round-robin scheme to allow as many vertices as possible to survive (again, use memory for performance). Personally I allocate a very large vertex buffer (5x the data a frame needs) and write it sequentially until I reach the end, at which point I orphan the whole thing and re-allocate it and start from the beginning again.
|
I think your code is CPU bound. While your approach has very small vertices, you have non-trivial API overhead.
A better approach is rendering all quads with a single draw call. I would probably use instancing for that.
Assuming you want arbitrary per-quad size, position, and orientation in 3D space, here’s one possible approach. Untested.
Vertex buffer elements:
```
struct sInstanceData
{
// Center of the quad in 3D space
XMFLOAT3 center;
// XY coordinates of the sprite in the atlas
uint16_t spriteX, spriteY;
// Local XY vectors of the quad in 3D space
// length of the vectors = half width/height of the quad
XMFLOAT3 plusX, plusY;
};
```
Input layout:
```
D3D11_INPUT_ELEMENT_DESC desc[ 4 ];
// Per-instance elements need InstanceDataStepRate = 1, not 0
desc[ 0 ] = D3D11_INPUT_ELEMENT_DESC{ "QuadCenter", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1 };
desc[ 1 ] = D3D11_INPUT_ELEMENT_DESC{ "SpriteIndex", 0, DXGI_FORMAT_R16G16_UINT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1 };
desc[ 2 ] = D3D11_INPUT_ELEMENT_DESC{ "QuadPlusX", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1 };
desc[ 3 ] = D3D11_INPUT_ELEMENT_DESC{ "QuadPlusY", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 1 };
```
Vertex shader:
```
cbuffer Constants
{
matrix viewProj;
// Pass [ 1.0 / xSegs, 1.0 / ySegs ] in that field
float2 texcoordMul;
};
struct VOut
{
float3 position : POSITION;
float3 n : NORMAL;
float2 texcoord : TEXCOORD;
float4 pos : SV_Position;
};
VOut main( uint index: SV_VertexID,
float3 center : QuadCenter, uint2 texcoords : SpriteIndex,
float3 plusX : QuadPlusX, float3 plusY : QuadPlusY )
{
VOut result;
float3 pos = center;
int2 uv = ( int2 )texcoords;
// No branches are generated in release builds;
// only conditional moves are there
if( index & 1 )
{
pos += plusX;
uv.x++;
}
else
pos -= plusX;
if( index & 2 )
{
pos += plusY;
uv.y++;
}
else
pos -= plusY;
result.position = pos;
result.n = normalize( cross( plusX, plusY ) );
result.texcoord = ( ( float2 )uv ) * texcoordMul;
result.pos = mul( float4( pos, 1.0f ), viewProj );
return result;
}
```
Rendering:
```
UINT stride = sizeof( sInstanceData );
UINT off = 0;
context->IASetVertexBuffers( 0, 1, &vb, &stride, &off );
context->IASetPrimitiveTopology( D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP );
context->DrawInstanced( 4, countQuads, 0, 0 );
```
| 8,881
|
48,702,330
|
Code:
```
a={'day': [{'average_price': 9.3,
'buy_m2m': 9.3,
'buy_price': 9.3,
'buy_quantity': 1,
'buy_value': 9.3,
'close_price': 0,
'exchange': 'NSE',
'instrument_token': 2867969,
'last_price': 9.3,
'm2m': 0.0,
'multiplier': 1,
'net_buy_amount_m2m': 9.3,
'net_sell_amount_m2m': 0,
'overnight_quantity': 0,
'pnl': 0.0,
'product': 'MIS',
'quantity': 1,
'realised': 0,
'sell_m2m': 0,
'sell_price': 0,
'sell_quantity': 0,
'sell_value': 0,
'tradingsymbol': 'SUBEX',
'unrealised': 0.0,
'value': -9.3}],
'net': [{'average_price': 9.3,
'buy_m2m': 9.3,
'buy_price': 9.3,
'buy_quantity': 1,
'buy_value': 9.3,
'close_price': 0,
'exchange': 'NSE',
'instrument_token': 2867969,
'last_price': 9.3,
'm2m': 0.0,
'multiplier': 1,
'net_buy_amount_m2m': 9.3,
'net_sell_amount_m2m': 0,
'overnight_quantity': 0,
'pnl': 0.0,
'product': 'MIS',
'quantity': 1,
'realised': 0,
'sell_m2m': 0,
'sell_price': 0,
'sell_quantity': 0,
'sell_value': 0,
'tradingsymbol': 'SUBEX',
'unrealised': 0.0,
'value': -9.3}]}
b= a['day']
```
`a` is a dict variable in Python. I want to assign the value of `buy_price`, which is `9.3`, to variable `x`, and the value of `instrument_token`, which is `2867969`, to variable `y`.
The problem is that after `b = a['day']`, `b` becomes a list, so I cannot use `x = b['buy_price']` to get `x = 9.3`.
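For reference, a minimal sketch: `a['day']` is a list of dicts, so index into the list first, then use the dict keys.
```
b = a['day']                  # a list containing one dict
x = b[0]['buy_price']         # 9.3
y = b[0]['instrument_token']  # 2867969
print(x, y)
```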
|
2018/02/09
|
[
"https://Stackoverflow.com/questions/48702330",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9337404/"
] |
The `footer` is not inside a `section` element, so your selector won't reach it.
```css
.mypage :not(footer) p{
color:red
}
```
```html
<body class="mypage">
<section>
<p>Hello world</p>
</section>
<footer>
<p>footer content</p>
</footer>
</body>
```
|
Why not just target the footer directly?
```css
.mypage section p { color: red }
.mypage footer p { color: blue }
```
```html
<body class="mypage">
<section>
<p>Hello world</p>
</section>
<footer>
<p>footer content</p>
</footer>
</body>
```
| 8,882
|
37,297,276
|
I have a data frame
```
df=data.frame(f=c('a','ab','abc'),v=1:3)
```
and make a new column with:
```
df$c=paste(df$v,df$f,sep='')
```
the result is
```
> df
f v c
1 a 1 1a
2 ab 2 2ab
3 abc 3 3abc
```
I would like column c to be in this format:
```
> df
f v c
1 a 1 1 a
2 ab 2 2 ab
3 abc 3 3abc
```
such that the total length of the concatenated value is fixed (in this case 4 characters), padding with a chosen character such as `|` (here, whitespace).
Is there a function like this in R? I think it is similar to Python's `zfill`, but I am not a Python programmer and would prefer to stay in R rather than switch between languages for processing. Ultimately, I am creating a supervariable of 10 columns, and I think this would help in downstream processing.
I guess it would involve the `paste` function, but I'm not sure how to 'fill a factor' so that it has a fixed width.
|
2016/05/18
|
[
"https://Stackoverflow.com/questions/37297276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2123706/"
] |
You can use the `format()` function to pretty print the values of your column. For example:
```
> format(df$f, width = 3, justify = "right")
[1] " a" " ab" "abc"
```
So your code should be:
```
df <- within(df, {
c <- paste0(v, format(f, width = 3, justify = "right"))
})
df
```
The result:
```
> df
f v c
1 a 1 1 a
2 ab 2 2 ab
3 abc 3 3abc
```
|
You can use the `formatC` function as follows:
```
df$c <- paste(df$v, formatC(as.character(df$f), width = 3, flag = " "), sep = "")
df
f v c
1 a 1 1 a
2 ab 2 2 ab
3 abc 3 3abc
```
**DATA**
```
df <- data.frame(f = c('a','ab','abc'), v=1:3)
```
| 8,883
|
40,397,657
|
So, I'm a complete newb when it comes to programming. I have been watching tutorials and I am reading a book on how to program in Python. I want to create a random number guessing game on my own; I have watched some tutorials on it, but I do not want to just recreate the code. Basically, I want to make my own guesser with the information I've gotten.
```
import random
# Random Numbergenerator Guesser
print("Hello and welcome to the random number guesser.")
print("I am guessing a number of 1 - 20. Can you guess which one?")
x = random.randint(1,20)
# Here you guess the number value of 'x'
for randomNumber in range (1,7):
randomGuess = input()
if randomGuess > x:
print("Too high. Guess again!")
elif randomGuess < x:
print("Too low. Guess again!")
else:
break
# Checks to see if the number you were guessing is correct or takes you to a fail screen.
if randomGuess == x:
print("Correct number!")
else:
print("Too many tries. You have failed. The number I was thinking of was " + (x))``
```
I keep getting this error.
```
C:\Python\Python35\python.exe "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py"
Hello and welcome to the random number guesser.
I am guessing a number of 1 - 20. Can you guess which one?
1
Traceback (most recent call last):
File "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py", line 12, in <module>
if randomGuess > x:
TypeError: unorderable types: str() > int()
```
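For reference, a minimal sketch (my own) of the usual fix: in Python 3, `input()` returns a string, so convert the guess with `int()` before comparing, and convert `x` with `str()` before concatenating.
```
import random

x = random.randint(1, 20)
for _ in range(1, 7):
    randomGuess = int(input())  # convert the string input to int
    if randomGuess > x:
        print("Too high. Guess again!")
    elif randomGuess < x:
        print("Too low. Guess again!")
    else:
        break
if randomGuess == x:
    print("Correct number!")
else:
    # str(x) avoids the str + int concatenation error
    print("Too many tries. The number I was thinking of was " + str(x))
```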
|
2016/11/03
|
[
"https://Stackoverflow.com/questions/40397657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7096598/"
] |
**Updated Answer**
For ASP Core 1.1.0 generic model binding is now done using `Get`:
```
var config = Configuration.GetSection("configuredClients").Get<ClientConfiguration>();
```
---
**Original Answer**
How about this:
```
var config = Configuration.GetSection("configuredClients").Bind<ClientConfiguration>();
```
|
Generally you don't read the configuration manually yourself in ASP.NET Core; instead, you create an options object that matches your configuration. You can read more on that in the official documentation [here](https://docs.asp.net/en/latest/fundamentals/configuration.html).
E.g.
```
public class MyOptions
{
public string Option1 { get; set; }
public int Option2 { get; set; }
}
public void ConfigureServices(IServiceCollection services)
{
// Setup options with DI
services.AddOptions();
services.Configure<MyOptions>(Configuration);
}
```
Then you just inject the options `IOptions<MyOptions>` where you need them.
| 8,884
|
70,736,110
|
I have a pandas dataframe like this:
```
df = pd.DataFrame([
{'A': 'aaa', 'B': 0.01, 'C': 0.00001, 'D': 0.00999999999476131, 'E': 0.00023191546403037534},
{'A': 'bbb', 'B': 0.01, 'C': 0.0001, 'D': 0.010000000000218279, 'E': 0.002981781316158273},
{'A': 'ccc', 'B': 0.1, 'C': 0.001, 'D': 0.0999999999999659, 'E': 0.020048115477145148},
{'A': 'ddd', 'B': 0.01, 'C': 0.01, 'D': 0.019999999999999574, 'E': 0.397456279809221},
{'A': 'eee', 'B': 0.00001, 'C': 0.000001, 'D': 0.09500000009999432, 'E': 0.06821282401091405},
])
```
```
A B C D E
0 aaa 0.01 0.00001 0.00999999999476131 0.00023191546403037534
1 bbb 0.01 0.0001 0.010000000000218279 0.002981781316158273
2 ccc 0.1 0.001 0.0999999999999659 0.020048115477145148
3 ddd 0.01 0.01 0.019999999999999574 0.397456279809221
4 eee 0.00001 0.000001 0.09500000009999432 0.06821282401091405
```
I have tried to round columns D and E to the same number of decimal places as the values in columns B and C without success.
I tried this:
```
df['b_decimals'] = df['B'].astype(str).str.split('.').str[1].str.len()
df['c_decimals'] = df['C'].astype(str).str.split('.').str[1].str.len()
df['D'] = [np.around(x, y) for x, y in zip(df['D'], df['b_decimals'])]
df['E'] = [np.around(x, y) for x, y in zip(df['E'], df['c_decimals'])]
```
but I get this error:
```
Traceback (most recent call last):
File "C:\Program Files\Python37\lib\site-packages\numpy\core\fromnumeric.py", line 56, in _wrapfunc
return getattr(obj, method)(*args, **kwds)
AttributeError: 'float' object has no attribute 'round'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:/_GITHUB/python/test.py", line 30, in <module>
main()
File "D:/_GITHUB/python/test.py", line 24, in main
df['E'] = [np.around(x, y) for x, y in zip(df['E'], df['c_decimals'])]
File "D:/_GITHUB/python/test.py", line 24, in <listcomp>
df['E'] = [np.around(x, y) for x, y in zip(df['E'], df['c_decimals'])]
File "C:\Program Files\Python37\lib\site-packages\numpy\core\fromnumeric.py", line 3007, in around
return _wrapfunc(a, 'round', decimals=decimals, out=out)
File "C:\Program Files\Python37\lib\site-packages\numpy\core\fromnumeric.py", line 66, in _wrapfunc
return _wrapit(obj, method, *args, **kwds)
File "C:\Program Files\Python37\lib\site-packages\numpy\core\fromnumeric.py", line 46, in _wrapit
result = getattr(asarray(obj), method)(*args, **kwds)
TypeError: integer argument expected, got float
```
The problem is that when creating the columns b\_decimals and c\_decimals, they store NaN values:
```
A B C D E b_decimals c_decimals
0 aaa 0.01 0.00001 0.00999999999476131 0.00023191546403037534 2 NaN
1 bbb 0.01 0.0001 0.010000000000218279 0.002981781316158273 2 4
2 ccc 0.1 0.001 0.0999999999999659 0.020048115477145148 1 3
3 ddd 0.01 0.01 0.019999999999999574 0.397456279809221 2 2
4 eee 0.00001 0.000001 0.09500000009999432 0.06821282401091405 NaN NaN
```
Why does this happen when creating the columns?
Is there another way to get the desired transformation like below?
```
A B C D E
0 aaa 0.01 0.00001 0.01 0.00023
1 bbb 0.01 0.0001 0.01 0.0030
2 ccc 0.1 0.001 0.1 0.020
3 ddd 0.01 0.01 0.02 0.40
4 eee 0.00001 0.000001 0.09600 0.068212
```
Thanks for reading!
|
2022/01/17
|
[
"https://Stackoverflow.com/questions/70736110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14382768/"
] |
You can use a `-log10` operation to obtain the number of decimal places up to the first significant digit (credit goes to @Willem van Onsem's answer [here](https://stackoverflow.com/questions/57663565/pandas-column-with-count-of-decimal-places)).
Then you can incorporate this into a lambda function that you `apply` rowwise:
```
import numpy as np
df['D'] = df.apply(lambda row: round(row['D'], int(-np.floor(np.log10(row['B'])))),axis=1)
df['E'] = df.apply(lambda row: round(row['E'], int(-np.floor(np.log10(row['C'])))),axis=1)
```
Result:
```
>>> df
A B C D E
0 aaa 0.0100 0.000010 0.010 0.000230
1 bbb 0.0100 0.000100 0.010 0.003000
2 ccc 0.1000 0.001000 0.100 0.020000
3 ddd 0.0100 0.010000 0.020 0.400000
4 eee 0.0001 0.000001 0.095 0.068213
>>> df.values
array([['aaa', 0.01, 1e-05, 0.01, 0.00023],
['bbb', 0.01, 0.0001, 0.01, 0.003],
['ccc', 0.1, 0.001, 0.1, 0.02],
['ddd', 0.01, 0.01, 0.02, 0.4],
['eee', 0.0001, 1e-06, 0.095, 0.068213]], dtype=object)
```
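As for why the original `b_decimals`/`c_decimals` columns came out NaN: very small floats stringify in scientific notation, so there is no `'.'` to split on and the second element is missing.
```
print(str(0.00001))             # '1e-05'
print(str(0.00001).split('.'))  # ['1e-05'] -> no element [1] -> NaN
```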
|
I used part of Derek's solution above to build my own:
```
df['b_decimals'] = -np.floor(np.log10(df['B']))
df['c_decimals'] = -np.floor(np.log10(df['C']))
df['D'] = [np.around(x, y) for x, y in zip(df['D'], df['b_decimals'].astype(int))]
df['E'] = [np.around(x, y) for x, y in zip(df['E'], df['c_decimals'].astype(int))]
```
getting the following:
```
A B C D E b_decimals c_decimals
0 aaa 0.01 0.00001 0.01 0.00023 2 5
1 bbb 0.01 0.0001 0.01 0.003 2 4
2 ccc 0.1 0.001 0.1 0.02 1 3
3 ddd 0.01 0.01 0.02 0.4 2 2
4 eee 0.00001 0.000001 0.1 0.068213 5 6
```
| 8,890
|