blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 3 288 | content_id stringlengths 40 40 | detected_licenses listlengths 0 112 | license_type stringclasses 2 values | repo_name stringlengths 5 115 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 684 values | visit_date timestamp[us]date 2015-08-06 10:31:46 2023-09-06 10:44:38 | revision_date timestamp[us]date 1970-01-01 02:38:32 2037-05-03 13:00:00 | committer_date timestamp[us]date 1970-01-01 02:38:32 2023-09-06 01:08:06 | github_id int64 4.92k 681M ⌀ | star_events_count int64 0 209k | fork_events_count int64 0 110k | gha_license_id stringclasses 22 values | gha_event_created_at timestamp[us]date 2012-06-04 01:52:49 2023-09-14 21:59:50 ⌀ | gha_created_at timestamp[us]date 2008-05-22 07:58:19 2023-08-21 12:35:19 ⌀ | gha_language stringclasses 147 values | src_encoding stringclasses 25 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 128 12.7k | extension stringclasses 142 values | content stringlengths 128 8.19k | authors listlengths 1 1 | author_id stringlengths 1 132 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e73a05ec52d0355a85842ccce3a885e97d322841 | 3b9b4049a8e7d38b49e07bb752780b2f1d792851 | /src/content/DEPS | 4bddf330a520761c8e6c930b17b4baea616679d3 | [
"BSD-3-Clause",
"Apache-2.0"
] | permissive | webosce/chromium53 | f8e745e91363586aee9620c609aacf15b3261540 | 9171447efcf0bb393d41d1dc877c7c13c46d8e38 | refs/heads/webosce | 2020-03-26T23:08:14.416858 | 2018-08-23T08:35:17 | 2018-09-20T14:25:18 | 145,513,343 | 0 | 2 | Apache-2.0 | 2019-08-21T22:44:55 | 2018-08-21T05:52:31 | null | UTF-8 | Python | false | false | 3,879 | # Do NOT add chrome to the list below. We shouldn't be including files
# from src/chrome in src/content.
include_rules = [
# The subdirectories in content/ will manually allow their own include
# directories in content/ so we disallow all of them.
"-content",
"+content/app/resources/grit/content_resources.h",
"+content/common",
"+content/grit",
"+content/public/common",
"+content/public/test",
"+content/test",
"+blink/public/resources/grit",
"+cc",
"-cc/blink",
# If you want to use any of these files, move them to src/base first.
"-cc/base/scoped_ptr_algorithm.h",
"-cc/base/scoped_ptr_deque.h",
"-cc/base/scoped_ptr_vector.h",
"-components",
# Content can depend on components that are:
# 1) related to the implementation of the web platform
# 2) shared code between third_party/WebKit and content
# It should not depend on chrome features or implementation details, i.e. the
# original components/ directories which was code split out from chrome/ to be
# shared with iOS. This includes, but isn't limited to, browser features such
# as autofill or extensions, and chrome implementation details such as
# settings, packaging details, installation or crash reporting.
"+crypto",
"+grit/blink_resources.h",
"+grit/content_strings.h",
"+dbus",
"+gpu",
"+media",
"+mojo/common",
"+mojo/edk/embedder",
"+mojo/edk/js",
"+mojo/edk/test",
"+mojo/public",
"+net",
"+ppapi",
"+printing",
"+sandbox",
"+skia",
# In general, content/ should not rely on google_apis, since URLs
# and access tokens should usually be provided by the
# embedder.
#
# There are a couple of specific parts of content that are excepted
# from this rule, see content/browser/speech/DEPS and
# content/browser/geolocation/DEPS. Both of these are cases of
# implementations that are strongly tied to Google servers, i.e. we
# don't expect alternate implementations to be provided by the
# embedder.
"-google_apis",
# Don't allow inclusion of these other libs we shouldn't be calling directly.
"-v8",
"-tools",
# Allow inclusion of third-party code:
"+third_party/angle",
"+third_party/flac",
"+third_party/libjingle",
"+third_party/mozilla",
"+third_party/npapi/bindings",
"+third_party/ocmock",
"+third_party/re2",
"+third_party/skia",
"+third_party/sqlite",
"+third_party/khronos",
"+third_party/webrtc",
"+third_party/webrtc_overrides",
"+third_party/zlib/google",
"+third_party/WebKit/public/platform",
"+third_party/WebKit/public/web",
"+ui/accelerated_widget_mac",
"+ui/accessibility",
"+ui/android",
# Aura is analogous to Win32 or a Gtk, so it is allowed.
"+ui/aura",
"+ui/base",
"+ui/compositor",
"+ui/display",
"+ui/events",
"+ui/gfx",
"+ui/gl",
"+ui/native_theme",
"+ui/ozone/public",
"+ui/resources/grit/ui_resources.h",
"+ui/resources/grit/webui_resources.h",
"+ui/resources/grit/webui_resources_map.h",
"+ui/shell_dialogs",
"+ui/snapshot",
"+ui/strings/grit/ui_strings.h",
"+ui/surface",
"+ui/touch_selection",
"+ui/wm",
# Content knows about grd files, but the specifics of how to get a resource
# given its id is left to the embedder.
"-ui/base/l10n",
"-ui/base/resource",
# These files aren't related to grd, so they're fine.
"+ui/base/l10n/l10n_util_android.h",
"+ui/base/l10n/l10n_util_win.h",
# Content shouldn't depend on views. While we technically don't need this
# line, since the top level DEPS doesn't allow it, we add it to make this
# explicit.
"-ui/views",
"+storage/browser",
"+storage/common",
# For generated JNI includes.
"+jni",
]
# content -> content/shell dependency is not allowed, except for browser tests.
specific_include_rules = {
".*_browsertest[a-z_]*\.(cc|h)": [
"+content/shell/browser",
"+content/shell/common",
],
}
| [
"changhyeok.bae@lge.com"
] | changhyeok.bae@lge.com | |
abe97102d879e1c25515a0ddf84ed08265a1f2ec | 9f0e740c6486bcb12f038c443b039c886124e55c | /python-study/tools/tools/keystore/GenKeySigner.py | b599bec5c615113d30e7575b518a4486d508adb7 | [] | no_license | zfanai/python-study | 373ff09bd1e6be9e098bde924c98f5277ad58a54 | de11a6c8018730bb27e26808f5cbc0c615b4468f | refs/heads/master | 2021-01-18T17:59:16.817832 | 2017-11-06T09:33:21 | 2017-11-06T09:33:21 | 86,831,175 | 1 | 0 | null | null | null | null | GB18030 | Python | false | false | 668 | py | # coding=gbk
import sys
import os
if 2 == len(sys.argv):
#print os.path.join(os.getcwd(), sys.argv[1])
#print os.path.join(os.getcwd(), sys.argv[2])
strJarFileName=os.path.join(os.getcwd(), sys.argv[1]);
    print "The file to be signed is: %s" % strJarFileName
else:
    print "Please provide the file path"
sys.exit(0)
KEYTOOL_CMD = "keytool";
WORK_DIR = "C:\Users\zhoufan\Projects\JavaWeb\TrumLink\Output\\"
ksfile = "C:/Users/zhoufan/zhoufan.keystore"
OPTION_SIGNER="-keystore "+ ksfile +" "+strJarFileName+" zhoufan " +\
"-storepass 666666 -keypass 888888";
strSigner = "jarsigner" + " " + OPTION_SIGNER;
print strSigner
os.system(strSigner) | [
"zf_sch@126.com"
] | zf_sch@126.com |
cb98e681ab70cf62af45a08f37caaccf7aea6859 | ddf1267a1a7cb01e70e3b12ad4a7bfaf291edb3e | /src/search/documents/summary.py | 0713e7288a3dd219e9d601a408bb4d70c73289b5 | [
"MIT"
] | permissive | Garinmckayl/researchhub-backend | 46a17513c2c9928e51db4b2ce5a5b62df453f066 | cd135076d9a3b49a08456f7ca3bb18ff35a78b95 | refs/heads/master | 2023-06-17T04:37:23.041787 | 2021-05-18T01:26:46 | 2021-05-18T01:26:46 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,608 | py | from django_elasticsearch_dsl import Document, fields as es_fields
from django_elasticsearch_dsl.registries import registry
from researchhub.settings import (
ELASTICSEARCH_AUTO_REINDEX,
TESTING
)
from search.analyzers import title_analyzer
from summary.models import Summary
import utils.sentry as sentry
@registry.register_document
class SummaryDocument(Document):
summary_plain_text = es_fields.TextField(analyzer=title_analyzer)
proposed_by = es_fields.TextField(attr='proposed_by_indexing')
paper = es_fields.IntegerField(attr='paper_indexing')
paper_title = es_fields.TextField(
attr='paper_title_indexing',
analyzer=title_analyzer
)
approved = es_fields.BooleanField()
class Index:
name = 'summary'
class Django:
model = Summary
fields = [
'id',
'approved_date',
'created_date',
'updated_date',
]
# Ignore auto updating of Elasticsearch when a model is saved
# or deleted (defaults to False):
ignore_signals = (TESTING is True) or (
ELASTICSEARCH_AUTO_REINDEX is False
)
# Don't perform an index refresh after every update (False overrides
# global setting of True):
auto_refresh = (TESTING is False) or (
ELASTICSEARCH_AUTO_REINDEX is True
)
def update(self, *args, **kwargs):
try:
super().update(*args, **kwargs)
except ConnectionError as e:
sentry.log_info(e)
except Exception as e:
sentry.log_info(e)
| [
"lightning.lu7@gmail.com"
] | lightning.lu7@gmail.com |
21d5caf2707d77ba6c1633641dfdcc12f72e370e | d48f5d355a721280f5fd56f67f05d414f595c251 | /pycharm/job_search/search.py | 9e7f12d33bbc5126bb9db3248e3dddb354c739e9 | [] | no_license | santokalayil/Rust_Notebooks | 7517dadda3777fd5ce130f40ad039a480fde1ba4 | b5c239079db4e7eec670aab9619405c05a235285 | refs/heads/main | 2023-04-16T01:26:56.712343 | 2021-04-30T21:31:12 | 2021-04-30T21:31:12 | 363,260,358 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,271 | py | # pip install google
import os
import urllib.error
from googlesearch import search as _search
def _add_to_file(item, file_name):
with open(file_name, 'a') as file_object:
file_object.write(f'{item}\n')
def _run(search_query, text_file, **kwargs):
if text_file not in os.listdir():
with open(text_file, 'w') as fh:
fh.write('results:\n')
print("file successfully created!")
else:
print("pre_existing file found! updating the file..")
i = 0
for link in _search(search_query, **kwargs):
        if link + '\n' not in open(text_file, 'r').readlines():  # readlines() keeps trailing newlines
i += 1
print(f'{i} new results retrieved! ', end='\r')
_add_to_file(link, text_file)
return
def run_search(search_query, text_file, **kwargs):
"""
This function searches on google and takes all the links available and writes to a text file
:param search_query:
:param text_file:
:param kwargs: preexisting arguments in google_search external library
:return: 0 if no URL error else returns 1
"""
try:
return _run(search_query, text_file, **kwargs)
except urllib.error.URLError as e:
print(f"URL Error: {e}")
return
| [
"49450970+santokalayil@users.noreply.github.com"
] | 49450970+santokalayil@users.noreply.github.com |
24010da0c26af51dce20efea2c1be897f39ee2b1 | d3efc82dfa61fb82e47c82d52c838b38b076084c | /Autocase_Result/SjShHBJJMM/YW_HBJJMM_SHSJ_112.py | 19a4ad0509551d89fbc177ecd4862da12462c65c | [] | no_license | nantongzyg/xtp_test | 58ce9f328f62a3ea5904e6ed907a169ef2df9258 | ca9ab5cee03d7a2f457a95fb0f4762013caa5f9f | refs/heads/master | 2022-11-30T08:57:45.345460 | 2020-07-30T01:43:30 | 2020-07-30T01:43:30 | 280,388,441 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,189 | py | #!/usr/bin/python
# -*- encoding: utf-8 -*-
import sys
sys.path.append("/home/yhl2/workspace/xtp_test/xtp/api")
from xtp_test_case import *
sys.path.append("/home/yhl2/workspace/xtp_test/service")
from ServiceConfig import *
from log import *
sys.path.append("/home/yhl2/workspace/xtp_test/MoneyFund/moneyfundservice")
from mfmainService import *
from mfQueryStkPriceQty import *
sys.path.append("/home/yhl2/workspace/xtp_test/MoneyFund/moneyfundmysql")
from mfCaseParmInsertMysql import *
sys.path.append("/home/yhl2/workspace/xtp_test/utils")
from QueryOrderErrorMsg import queryOrderErrorMsg
class YW_HBJJMM_SHSJ_112(xtp_test_case):
# YW_HBJJMM_SHSJ_112
def test_YW_HBJJMM_SHSJ_112(self):
        title = 'Shanghai A-share, trading day, best-five-levels immediate-or-cancel buy - quantity overflow (10 billion)'
        # Define the expected values for the current test case
        # Expected status: initial, unfilled, partially filled, fully filled, partial-cancel reported, partially cancelled, reported pending cancel, cancelled, rejected, cancel rejected, internally cancelled
        # xtp_ID and cancel_xtpID default to 0 and do not need to be changed
case_goal = {
'期望状态': '废单',
'errorID': 11000107,
'errorMSG': queryOrderErrorMsg(11000107),
'是否生成报单': '是',
'是否是撤废': '否',
'xtp_ID': 0,
'cancel_xtpID': 0,
}
logger.warning(title)
        # Define the order parameter information ------------------------------------------
        # Parameters: ticker, market, security type, security status, trading status, side (B=buy, S=sell), expected status, Api
stkparm = QueryStkPriceQty('999999', '1', '111', '2', '0', 'B', case_goal['期望状态'], Api)
        # If fetching the order parameters fails, the test case fails
if stkparm['返回结果'] is False:
rs = {
'用例测试结果': stkparm['返回结果'],
'测试错误原因': '获取下单参数失败,' + stkparm['错误原因'],
}
self.assertEqual(rs['用例测试结果'], True)
else:
wt_reqs = {
'business_type': Api.const.XTP_BUSINESS_TYPE['XTP_BUSINESS_TYPE_CASH'],
'order_client_id':2,
'market': Api.const.XTP_MARKET_TYPE['XTP_MKT_SH_A'],
'ticker': stkparm['证券代码'],
'side': Api.const.XTP_SIDE_TYPE['XTP_SIDE_BUY'],
'price_type': Api.const.XTP_PRICE_TYPE['XTP_PRICE_BEST5_OR_CANCEL'],
'price': stkparm['涨停价'],
'quantity': 10000000000,
'position_effect': Api.const.XTP_POSITION_EFFECT_TYPE['XTP_POSITION_EFFECT_INIT']
}
ParmIni(Api, case_goal['期望状态'], wt_reqs['price_type'])
CaseParmInsertMysql(case_goal, wt_reqs)
rs = serviceTest(Api, case_goal, wt_reqs)
logger.warning('执行结果为' + str(rs['用例测试结果']) + ','
+ str(rs['用例错误源']) + ',' + str(rs['用例错误原因']))
self.assertEqual(rs['用例测试结果'], True) # 0
if __name__ == '__main__':
unittest.main()
| [
"418033945@qq.com"
] | 418033945@qq.com |
51db0e20f3a4908cea359578f85eb0caa6fe1815 | e912af291e1457c61606642f1c7700e678c77a27 | /python/560_subarray_sum_equals_K.py | f7e77a145f8ec5f44cc38ad5b749302db0b1c447 | [] | no_license | MakrisHuang/LeetCode | 325be680f8f67b0f34527914c6bd0a5a9e62e9c9 | 7609fbd164e3dbedc11308fdc24b57b5097ade81 | refs/heads/master | 2022-08-13T12:13:35.003830 | 2022-07-31T23:03:03 | 2022-07-31T23:03:03 | 128,767,837 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 895 | py | from collections import defaultdict
from typing import List

class Solution:
def subarraySum(self, nums: List[int], k: int) -> int:
count = 0
sum = 0
occurrences = defaultdict(int) # key: sum, value: occurrence times
occurrences[0] = 1
for i in nums:
sum += i
if sum - k in occurrences:
count += occurrences[sum - k]
occurrences[sum] = occurrences[sum] + 1
return count
def subarraySum_n2(self, nums: List[int], k: int) -> int:
# use cumulative sum
count = 0
prefix = [0 for i in range(len(nums) + 1)]
for i in range(1, len(nums) + 1):
prefix[i] = prefix[i - 1] + nums[i - 1]
for i in range(len(nums)):
for j in range(i + 1, len(nums) + 1):
if prefix[j] - prefix[i] == k:
count += 1
return count
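As a quick standalone illustration of the prefix-sum counting idea used in `subarraySum` (a hypothetical driver, not part of the original solution file):

```python
from collections import defaultdict

def subarray_sum(nums, k):
    # A subarray (i, j] sums to k exactly when prefix[j] - prefix[i] == k,
    # so for each running prefix we look up how often prefix - k has occurred.
    count = 0
    prefix = 0
    seen = defaultdict(int)
    seen[0] = 1  # the empty prefix
    for x in nums:
        prefix += x
        count += seen[prefix - k]
        seen[prefix] += 1
    return count

print(subarray_sum([1, 1, 1], 2))  # 2
print(subarray_sum([1, 2, 3], 3))  # 2
```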
| [
"vallwesture@gmail.com"
] | vallwesture@gmail.com |
0ce290b4a1ccafb29a96606116454afcb6d63098 | fb72aef4db762749f3ac4bc08da36d6accee0697 | /modules/photons_themes/collections.py | 359dbd750e637c1c9a451e1b3adffd243fb001f7 | [
"MIT"
] | permissive | xbliss/photons-core | 47698cc44ea80354e0dcabe42d8d370ab0623f4b | 3aca907ff29adffcab4fc22551511c5d25b8c2b7 | refs/heads/master | 2022-11-07T12:33:09.951104 | 2020-05-07T09:10:35 | 2020-05-07T09:45:27 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,486 | py | from photons_themes.theme import ThemeColor
class ZoneColors:
"""
Representation of colors on a zone
"""
def __init__(self):
self._colors = []
def add_hsbk(self, hsbk):
"""
Add a ThemeColor instance
The idea is you use this function to add each zone in order.
"""
self._colors.append(hsbk)
def apply_to_range(self, color, next_color, length):
"""
Recursively apply two colours to our strip such that we blend between
the two colours.
"""
if length == 1:
self.add_hsbk(color)
elif length == 2:
second_color = ThemeColor.average([next_color.limit_distance_to(color), color])
self.add_hsbk(color)
self.add_hsbk(second_color)
else:
average = ThemeColor.average([next_color, color])
self.apply_to_range(color, average, length // 2)
self.apply_to_range(average, next_color, length - length // 2)
def apply_theme(self, theme, zone_count):
"""Apply a theme across zone_count zones"""
i = 0
location = 0
zones_per_color = max(1, int(zone_count / (max(len(theme) - 1, 1))))
while location < zone_count:
length = min(location + zones_per_color, zone_count) - location
self.apply_to_range(theme[i], theme.get_next_bounds_checked(i), length)
i = min(len(theme) - 1, i + 1)
location += zones_per_color
@property
def colors(self):
"""
Return a list of ``((start_index, end_index), hsbk)`` for our colors.
This function will make sure that contiguous colors are returned with
an appropriate ``start_index``, ``end_index`` range.
"""
start_index = 0
end_index = -1
current = None
result = []
for hsbk in self._colors:
if current is not None and current != hsbk:
result.append(((start_index, end_index), current))
start_index = end_index + 1
end_index += 1
current = hsbk
result.append(((start_index, end_index), current))
return result
class TileColors:
"""
A very simple wrapper around multiple tiles
"""
def __init__(self):
self.tiles = []
def add_tile(self, hsbks):
"""Add a list of 64 ThemeColor objects to represent the next tile"""
self.tiles.append(hsbks)
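The recursive midpoint split in `apply_to_range` can be sketched with plain floats standing in for `ThemeColor` objects (a simplified, hypothetical stand-in that ignores `limit_distance_to`):

```python
def blend_range(a, b, length):
    # Recursively fill `length` slots blending from a toward b,
    # mirroring ZoneColors.apply_to_range's halving strategy.
    if length == 1:
        return [a]
    if length == 2:
        return [a, (a + b) / 2]
    mid = (a + b) / 2
    return blend_range(a, mid, length // 2) + blend_range(mid, b, length - length // 2)

print(blend_range(0.0, 1.0, 5))  # [0.0, 0.25, 0.5, 0.75, 0.875]
```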
| [
"stephen@delfick.com"
] | stephen@delfick.com |
adf41d6bff071e121098205bc52eb1e19fa80e27 | 79737ef5b519c00d61d460451773cdf3dd7a086b | /yalantis/courses/dto.py | b19fd738ce21c72c426620e17b4d3eae2cc9a4ad | [] | no_license | stsh1119/Yalantis_task | ab1145c39944f2db67e4b572995cc3a4dcf39f49 | 9c8eb996021c8969f55cde3c00bbf9a8219a37d3 | refs/heads/main | 2023-04-30T21:57:43.310700 | 2021-05-02T18:30:26 | 2021-05-02T18:30:26 | 362,245,135 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 575 | py | from pydantic import BaseModel, Field, validator
from datetime import datetime
class CourseDto(BaseModel):
name: str = Field(min_length=5, max_length=100)
start_date: datetime
end_date: datetime
lectures_amount: int
@validator('end_date')
def end_date_must_be_later_than_start_date(cls, v, values):
if 'start_date' in values and v < values['start_date']:
raise ValueError('start_date must be less than end_date')
return v
class SearchCourseDto(BaseModel):
name: str
start_date: datetime
end_date: datetime
| [
"stshlaptop@gmail.com"
] | stshlaptop@gmail.com |
9f119d5b473cd71f48040280bf1fd66e8febda37 | b3b066a566618f49ae83c81e963543a9b956a00a | /Statistical Thinking in Python (Part 1)/03_Thinking probabilistically-- Discrete variables/08_Plotting the Binomial PMF.py | 517032e86b836f8eccb638e2f669e96a312f46a6 | [] | no_license | ahmed-gharib89/DataCamp_Data_Scientist_with_Python_2020 | 666c4129c3f0b5d759b511529a365dfd36c12f1a | f3d20b788c8ef766e7c86c817e6c2ef7b69520b8 | refs/heads/master | 2022-12-22T21:09:13.955273 | 2020-09-30T01:16:05 | 2020-09-30T01:16:05 | 289,991,534 | 2 | 0 | null | 2020-08-24T17:15:43 | 2020-08-24T17:15:42 | null | UTF-8 | Python | false | false | 2,212 | py | '''
Plotting the Binomial PMF
100xp
As mentioned in the video, plotting a nice looking PMF requires a bit of matplotlib
trickery that we will not go into here. Instead, we will plot the PMF of the Binomial
distribution as a histogram with skills you have already learned. The trick is setting
up the edges of the bins to pass to plt.hist() via the bins keyword argument. We want
the bins centered on the integers. So, the edges of the bins should be -0.5, 0.5, 1.5,
2.5, ... up to max(n_defaults) + 1.5. You can generate an array like this using np.arange()
and then subtracting 0.5 from the array.
You have already sampled out of the Binomial distribution during your exercises on loan
defaults, and the resulting samples are in the NumPy array n_defaults.
Instructions
-Using np.arange(), compute the bin edges such that the bins are centered on the integers.
Store the resulting array in the variable bins.
-Use plt.hist() to plot the histogram of n_defaults with the normed=True and bins=bins
keyword arguments.
-Leave a 2% margin and label your axes.
-Show the plot.
'''
import numpy as np
import matplotlib.pyplot as plt
def ecdf(data):
"""Compute ECDF for a one-dimensional array of measurements."""
# Number of data points: n
n = len(data)
# x-data for the ECDF: x
x = np.sort(data)
# y-data for the ECDF: y
y = np.arange(1, n + 1) / n
return x, y
# Seed random number generator
np.random.seed(42)
# Take 10,000 samples out of the binomial distribution: n_defaults
n_defaults = np.random.binomial(n=100, p=0.05, size=10000)
# Compute bin edges: bins
bins = np.arange(min(n_defaults), max(n_defaults) + 1.5) - 0.5
# Generate histogram
_ = plt.hist(n_defaults, normed=True, bins=bins)
# Set margins
_ = plt.margins(0.02)
# Label axes
_ = plt.xlabel('x')
_ = plt.ylabel('y')
# Show the plot
plt.show()
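The bin-edge trick described in the instructions (edges offset by 0.5 so bins are centered on integers) can also be written in plain Python, shown here as a hypothetical helper for illustration:

```python
def centered_bin_edges(lo, hi):
    # Edges lo-0.5, lo+0.5, ..., hi+0.5 so each integer in [lo, hi]
    # gets its own histogram bin.
    return [k - 0.5 for k in range(lo, hi + 2)]

print(centered_bin_edges(0, 3))  # [-0.5, 0.5, 1.5, 2.5, 3.5]
```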
#========================================================#
# DEVELOPER #
# BasitAminBhatti #
# Github #
# https://github.com/basitaminbhatti #
#========================================================# | [
"Your-Email"
] | Your-Email |
4e6c7bb668401577839eaef145ff5e971097f59d | e00d41c9f4045b6c6f36c0494f92cad2bec771e2 | /network/remoteshell/telnet-bsd/actions.py | cf124eeb0fbaf5a1f09d4e506b956d13cf9da65f | [] | no_license | pisilinux/main | c40093a5ec9275c771eb5fb47a323e308440efef | bfe45a2e84ea43608e77fb9ffad1bf9850048f02 | refs/heads/master | 2023-08-19T00:17:14.685830 | 2023-08-18T20:06:02 | 2023-08-18T20:06:02 | 37,426,721 | 94 | 295 | null | 2023-09-14T08:22:22 | 2015-06-14T19:38:36 | Python | UTF-8 | Python | false | false | 499 | py | #!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU General Public License, version 3.
# See the file http://www.gnu.org/licenses/gpl.txt
from pisi.actionsapi import autotools
from pisi.actionsapi import pisitools
from pisi.actionsapi import get
def setup():
autotools.configure("--enable-nls")
def build():
autotools.make()
def install():
autotools.rawInstall("DESTDIR=%s" % get.installDIR())
pisitools.dodoc("README", "THANKS", "NEWS", "AUTHORS", "ChangeLog")
| [
"suvarice@gmail.com"
] | suvarice@gmail.com |
d66780ba96a3025014dfd2f08fc6a169b08fa9a2 | 1b77f98443b885412f563355dd051acf7e9d8c9f | /archlinuxcn/android-ndk/lilac.py | 6b94e765481c7f4fa7b3e98fbd03a7f2b3be0f25 | [] | no_license | hassoon1986/repo | ee24b8c83ff8aef9630eb56bffde9bc6c8badf48 | 04766bc00c02288bee718f428b54c0915ba31a38 | refs/heads/master | 2022-12-13T04:42:06.481477 | 2020-08-30T06:45:52 | 2020-08-30T06:45:52 | 291,433,108 | 1 | 0 | null | 2020-08-30T08:45:50 | 2020-08-30T08:45:49 | null | UTF-8 | Python | false | false | 170 | py | # Trimmed lilac.py
from lilaclib import *
def pre_build():
update_pkgver_and_pkgrel(_G.newver)
def post_build():
git_pkgbuild_commit()
update_aur_repo()
| [
"yan12125@gmail.com"
] | yan12125@gmail.com |
a84acf62ed7b31ab50e8a1cc6f1a458871f51cb4 | 8acffb8c4ddca5bfef910e58d3faa0e4de83fce8 | /ml-flask/Lib/site-packages/matplotlib/testing/jpl_units/UnitDblFormatter.py | 753621b8df94d359672a489db556e30b7488ae0a | [
"MIT"
] | permissive | YaminiHP/SimilitudeApp | 8cbde52caec3c19d5fa73508fc005f38f79b8418 | 005c59894d8788c97be16ec420c0a43aaec99b80 | refs/heads/master | 2023-06-27T00:03:00.404080 | 2021-07-25T17:51:27 | 2021-07-25T17:51:27 | 389,390,951 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 128 | py | version https://git-lfs.github.com/spec/v1
oid sha256:09171b3ed1372b4165149e219218be4a0425d4c515f9596621e38f2043cdc2e2
size 681
| [
"yamprakash130@gmail.com"
] | yamprakash130@gmail.com |
0aa83217419c6c135c95034cbea0f9e09cf08118 | 04768a2eb1d6a76835fbc055ed88f791661656e6 | /05-day/test4/booktest/migrations/0001_initial.py | cbce3c1ca2c77c465dde99a0f86b1d1164d228a3 | [] | no_license | XUANXUANXU/5-django | 668d8b5ad4cc50996632e9474cd93e17bce36b66 | e82714b67afd39018cc31d177aa6fba0e7420daa | refs/heads/master | 2020-04-13T07:37:41.532727 | 2018-12-25T07:19:03 | 2018-12-25T07:19:03 | 163,057,063 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 614 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-08-12 10:19
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='AccountInfo',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('account', models.CharField(max_length=50)),
('pwd', models.CharField(max_length=50)),
],
),
]
| [
"302738630@qq.com"
] | 302738630@qq.com |
bca05d5bdfe1a1e633c08bb434c9f74e584831a7 | 4bcae7ca3aed842d647d9112547522cffa805d51 | /0045.跳跃游戏-ii.py | fd99d06fd024d0a401e38c458a6cd6618541f7ed | [] | no_license | SLKyrim/vscode-leetcode | fd5a163f801661db0dfae1d4fdfa07b79fdb82b6 | 65a271c05258f447d3e56755726f02179780eb8a | refs/heads/master | 2021-07-03T03:15:28.883786 | 2021-02-23T06:19:18 | 2021-02-23T06:19:18 | 226,062,540 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,989 | py | #
# @lc app=leetcode.cn id=45 lang=python3
#
# [45] Jump Game II
#
# https://leetcode-cn.com/problems/jump-game-ii/description/
#
# algorithms
# Hard (32.38%)
# Likes: 376
# Dislikes: 0
# Total Accepted: 32K
# Total Submissions: 96.2K
# Testcase Example: '[2,3,1,1,4]'
#
# Given an array of non-negative integers, you are initially positioned at the
# first index of the array.
#
# Each element in the array represents your maximum jump length at that position.
#
# Your goal is to reach the last index of the array in the minimum number of jumps.
#
# Example:
#
# Input: [2,3,1,1,4]
# Output: 2
# Explanation: The minimum number of jumps to reach the last index is 2.
# Jump 1 step from index 0 to index 1, then 3 steps to the last index.
#
#
# Note:
#
# You can assume that you can always reach the last index of the array.
#
#
# @lc code=start
from typing import List

class Solution:
def jump(self, nums: List[int]) -> int:
# if len(nums) <= 1:
# return 0
        # ind = 0 # current index into nums
        # pre = 0 # right boundary reached by the previous jump
        # cur = 0 # right boundary reachable so far
# res = 0
# while (ind < len(nums)):
# while ind < len(nums) and ind <= pre:
        #         # among the points up to the previous right boundary, pick the one that can jump farthest
# cur = max(cur, nums[ind] + ind)
# ind += 1
        #     pre = cur # update pre to the farthest boundary currently reachable, for the next iteration
# res += 1
# if ind >= len(nums) or cur >= len(nums) - 1:
# break
# if cur < ind:
        #         # the current right boundary cannot reach the next index ind
# res = -1
# break
# if cur < len(nums) - 1:
# res = -1
# return res
        # # BFS solution: exceeds the time limit
# n = len(nums)
# tmp = [0]
# visited = [0 for _ in range(n)]
# visited[0] = 1
# res = 0
# while tmp:
# now = len(tmp)
# for i in range(now):
# cur = tmp.pop(0)
# if cur == n - 1:
# return res
# for v in range(cur, cur + nums[cur] + 1):
# if 0 <= v < n and visited[v] == 0:
# visited[v] = 1
# tmp.append(v)
# res += 1
# return res
        # 1. While inside the currently reachable range, keep updating the farthest reachable position `right`
        # 2. When i reaches the right boundary `end` of the previous jump, another jump is needed, so update `end` to `right`
        res = 0
        end = 0 # farthest position reachable with the jumps taken so far
        right = 0
        for i in range(len(nums) - 1): # note the loop range: stop before the last index so arriving there is not counted as an extra jump
right = max(right, i + nums[i])
if i == end:
end = right
res += 1
return res
# @lc code=end
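The greedy idea in the accepted solution above can be restated as a standalone function (hypothetical names, not part of the original submission):

```python
def min_jumps(nums):
    # `right` is the farthest index reachable so far; when i hits the
    # boundary `end` of the current jump, take another jump and extend `end`.
    jumps = 0
    end = 0
    right = 0
    for i in range(len(nums) - 1):
        right = max(right, i + nums[i])
        if i == end:
            end = right
            jumps += 1
    return jumps

print(min_jumps([2, 3, 1, 1, 4]))  # 2
```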
| [
"623962644@qq.com"
] | 623962644@qq.com |
1314b88ac884f5dac03694b313caff1ba8fecba6 | 15f321878face2af9317363c5f6de1e5ddd9b749 | /solutions_python/Problem_136/3130.py | d879c4079469e55f4c67c4c5a0d14fd3a1d5600e | [] | no_license | dr-dos-ok/Code_Jam_Webscraper | c06fd59870842664cd79c41eb460a09553e1c80a | 26a35bf114a3aa30fc4c677ef069d95f41665cc0 | refs/heads/master | 2020-04-06T08:17:40.938460 | 2018-10-14T10:12:47 | 2018-10-14T10:12:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,227 | py | # Google Code Jam (2013): Qualification Round
# Code Jam Utils
# Can be found on Github's Gist at:
# https://gist.github.com/Zren/5376385
from codejamutils import *
problem_id = 'B'
# problem_size = 'sample'
# problem_size = 'small'
problem_size = 'large'
opt_args = {
#'practice': True,
'log_level': DEBUG,
'filename': 'B-large',
}
def total_time(C, F, X):
cookies_per_second = lambda farms: 2 + farms * F
farm_build_time = lambda cps: C / float(cps)
win_time = lambda cps: X / float(cps)
t = 0
farms = 0
while True:
curCPS = cookies_per_second(farms)
nextCPS = cookies_per_second(farms+1)
farmT = farm_build_time(curCPS)
curWT = win_time(curCPS)
nextWT = farmT + win_time(nextCPS)
# info(t, farms)
if curWT > nextWT:
# Buy a farm
t += farmT
farms += 1
else:
t += curWT
return t
with Problem(problem_id, problem_size, **opt_args) as p:
for case in p:
info('Case', case.case)
C, F, X = map(float, case.readline().split())
case.writecase(total_time(C, F, X)) | [
"miliar1732@gmail.com"
] | miliar1732@gmail.com |
ad100d93e0725e53ea4e57e5051cdc8f5988ce56 | 384813261c9e8d9ee03e141ba7270c48592064e9 | /new_project/fastsklearnfeature/declarative_automl/optuna_package/feature_preprocessing/SelectKBestOptuna.py | 16d8a38ffc19812f51bd2fd738007ec5cdf55e5d | [] | no_license | pratyushagnihotri/DFS | b99d87c085e67888b81c19629c338dae92272a3b | 3b60e574905e93c24a2b883cc251ecc286cb2263 | refs/heads/master | 2023-04-18T22:17:36.816581 | 2021-04-20T13:41:29 | 2021-04-20T13:41:29 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,374 | py | from sklearn.feature_selection import SelectPercentile
from sklearn.feature_selection.from_model import _get_feature_importances
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest
import sklearn
from fastsklearnfeature.declarative_automl.optuna_package.optuna_utils import id_name
import functools
def model_score(X, y=None, estimator=None):
estimator.fit(X,y)
scores = _get_feature_importances(estimator)
return scores
class SelectKBestOptuna(SelectKBest):
def init_hyperparameters(self, trial, X, y):
self.name = id_name('SelectKBest')
self.k_fraction = trial.suggest_uniform(self.name + 'k_fraction', 0.0, 1.0)
self.sparse = False
score_func = trial.suggest_categorical(self.name + 'score_func', ['chi2', 'f_classif', 'mutual_info', 'ExtraTreesClassifier', 'LinearSVC'])
if score_func == "chi2":
self.score_func = sklearn.feature_selection.chi2
elif score_func == "f_classif":
self.score_func = sklearn.feature_selection.f_classif
elif score_func == "mutual_info":
self.score_func = sklearn.feature_selection.mutual_info_classif
elif score_func == 'ExtraTreesClassifier':
new_name = self.name + '_' + score_func + '_'
model = ExtraTreesClassifier()
model.n_estimators = 100
model.criterion = trial.suggest_categorical(new_name + "criterion", ["gini", "entropy"])
model.max_features = trial.suggest_uniform(new_name + "max_features", 0, 1)
model.max_depth = None
model.max_leaf_nodes = None
model.min_samples_split = trial.suggest_int(new_name + "min_samples_split", 2, 20, log=False)
model.min_samples_leaf = trial.suggest_int(new_name + "min_samples_leaf", 1, 20, log=False)
model.min_weight_fraction_leaf = 0.
model.min_impurity_decrease = 0.
model.bootstrap = trial.suggest_categorical(new_name + "bootstrap", [True, False])
self.score_func = functools.partial(model_score, estimator=model) #bindFunction1(model)
elif score_func == 'LinearSVC':
new_name = self.name + '_' + score_func + '_'
model = sklearn.svm.LinearSVC()
model.penalty = "l1"
model.loss = "squared_hinge"
model.dual = False
model.tol = trial.suggest_loguniform(new_name + "tol", 1e-5, 1e-1)
model.C = trial.suggest_loguniform(new_name + "C", 0.03125, 32768)
model.multi_class = "ovr"
model.fit_intercept = True
model.intercept_scaling = 1
self.score_func = functools.partial(model_score, estimator=model)
def fit(self, X, y):
self.k = max(1, int(self.k_fraction * X.shape[1]))
#print('k: ' + str(self.k))
return super().fit(X=X, y=y)
def generate_hyperparameters(self, space_gen, depending_node=None):
self.name = id_name('SelectKBest')
space_gen.generate_number(self.name + "k_fraction", 0.5, depending_node=depending_node)
category_fs = space_gen.generate_cat(self.name + 'score_func',['chi2', 'f_classif', 'mutual_info', 'ExtraTreesClassifier', 'LinearSVC',
'variance'], "chi2", depending_node=depending_node)
tree_catgory = category_fs[3]
lr_catgory = category_fs[4]
new_name = self.name + '_' + 'ExtraTreesClassifier' + '_'
space_gen.generate_cat(new_name + "criterion", ["gini", "entropy"], "gini", depending_node=tree_catgory)
space_gen.generate_number(new_name + "max_features", 0.5, depending_node=tree_catgory)
space_gen.generate_number(new_name + "min_samples_split", 2, depending_node=tree_catgory)
space_gen.generate_number(new_name + "min_samples_leaf", 1, depending_node=tree_catgory)
space_gen.generate_cat(new_name + "bootstrap", [True, False], False, depending_node=tree_catgory)
new_name = self.name + '_' + 'LinearSVC' + '_'
space_gen.generate_cat(new_name + "loss", ["hinge", "squared_hinge"], "squared_hinge", depending_node=lr_catgory)
space_gen.generate_number(new_name + "tol", 1e-4, depending_node=lr_catgory)
space_gen.generate_number(new_name + "C", 1.0, depending_node=lr_catgory)
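The `functools.partial` pattern above, which binds a configured estimator into a `score_func(X, y)` callable, can be illustrated with a toy function (hypothetical example, independent of scikit-learn):

```python
import functools

def score(X, y=None, estimator=None):
    # Apply the bound estimator to each sample, like model_score does
    # with a fitted model's feature importances.
    return [estimator(x) for x in X]

double_score = functools.partial(score, estimator=lambda v: 2 * v)
print(double_score([1, 2, 3]))  # [2, 4, 6]
```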
| [
"neutatz@googlemail.com"
] | neutatz@googlemail.com |
7b8259f7286d6b3a3bd76fff394fd7c861a7e0cf | de24f83a5e3768a2638ebcf13cbe717e75740168 | /moodledata/vpl_data/396/usersdata/285/81060/submittedfiles/av1_programa2.py | 588440d2b0b90f5e9a964045ecf30c4ddb7c2d41 | [] | no_license | rafaelperazzo/programacao-web | 95643423a35c44613b0f64bed05bd34780fe2436 | 170dd5440afb9ee68a973f3de13a99aa4c735d79 | refs/heads/master | 2021-01-12T14:06:25.773146 | 2017-12-22T16:05:45 | 2017-12-22T16:05:45 | 69,566,344 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,163 | py | # -*- coding: utf-8 -*-
# START HERE BELOW
n1=int(input("insira o valor de n1: "))
n2=int(input("insira o valor de n2: "))
n3=int(input("insira o valor de n3: "))
n4=int(input("insira o valor de n4: "))
n5=int(input("insira o valor de n5: "))
n6=int(input("insira o valor de n6: "))
n7=int(input("insira o valor de n7: "))
n8=int(input("insira o valor de n8: "))
n9=int(input("insira o valor de n9: "))
n10=int(input("insira o valor de n10: "))
n11=int(input("insira o valor de n11: "))
n12=int(input("insira o valor de n12: "))
n1 >= 1 and n1 < 100
n2 >= 1 and n2 < 100
n3 >= 1 and n3 < 100
n4 >= 1 and n4 < 100
n5 >= 1 and n5 < 100
n6 >= 1 and n6 < 100
n7 >= 1 and n7 < 100
n8 >= 1 and n8 < 100
n9 >= 1 and n9 < 100
n10 >= 1 and n10 < 100
n11 >= 1 and n11 < 100
n12 >= 1 and n12 < 100
if n1==n7:
if n2==n8:
if n3==n9:
if n4!=n10:
if n5!=n11:
if n6!=n12:
print("terno")
if n1==n7:
if n2==n8:
if n3==n9:
if n4==n10:
if n5!=n11:
if n6!=n12:
print("quadra")
if n1==n7:
if n2==n8:
if n3==n9:
if n4==n10:
if n5==n11:
if n6!=n12:
print("quina")
if n1==n7 and n2==n8 and n3==n9 and n4==n10 and n5==n11 and n6==n12:
print("sena")
if n1==n7:
if n2==n8:
if n3!=n9:
if n4!=n10:
if n5!=n11:
if n6!=n12:
print("azar")
if n1==n7:
if n2!=n8:
if n3!=n9:
if n4!=n10:
if n5!=n11:
if n6!=n12:
print("azar")
if n1!=n7:
if n2!=n8:
if n3!=n9:
if n4!=n10:
if n5!=n11:
if n6!=n12:
print("azar")
else:
    # The original file ended with "if n1<=1 and n1>99" -- a SyntaxError
    # (missing colon and body) whose condition can never hold; a range
    # check was presumably intended.
    if n1 < 1 or n1 > 99:
        pass
| [
"rafael.mota@ufca.edu.br"
] | rafael.mota@ufca.edu.br |
0cf72458010aa3709542cace33734a22f867a027 | 37872e29f2025af79ed3ce48a2ef010a7e0f1a3e | /venv1/Lib/site-packages/snowflake/client/__init__.py | dbf359392b44a546a6f4370f09a145518c276fa4 | [] | no_license | leozll/Python_demo | 892de135c30144c3b8c28288b6c23b53ebc583e5 | d1b2faef52aed1e10cb9c232b8faef158410fad9 | refs/heads/master | 2020-04-04T18:09:25.945945 | 2018-12-29T03:47:06 | 2018-12-29T03:47:06 | 156,152,053 | 0 | 2 | null | null | null | null | UTF-8 | Python | false | false | 684 | py | import requests
import json
class SnowflakeClient(object):
def __init__(self, host, port):
self.host = host
self.port = port
self.api_uri = 'http://%s:%s/' % (self.host, self.port)
def get_guid(self):
res = requests.get(self.api_uri)
return int(res.text)
def get_stats(self):
res = requests.get(self.api_uri + 'stats')
return json.loads(res.text)
default_client = SnowflakeClient('localhost', 8910)
def setup(host, port):
global default_client
default_client = SnowflakeClient(host, port)
def get_guid():
return default_client.get_guid()
def get_stats():
return default_client.get_stats() | [
"zllong145511"
] | zllong145511 |
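The client above is a thin HTTP wrapper; the only non-obvious piece is how the base URI is composed from host and port. A self-contained sketch of just that part (the class name here is ours, not part of the snippet, so the example runs without a snowflake server):

```python
# Stand-in for the URI composition in SnowflakeClient above.
class SnowflakeClientSketch(object):
    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.api_uri = 'http://%s:%s/' % (self.host, self.port)

client = SnowflakeClientSketch('localhost', 8910)
print(client.api_uri)  # http://localhost:8910/
```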
1425a438ff5caa905937cf7e33796d39f0fe9929 | 21b461b71b4c63f7aac341bd12ba35d211c7956e | /codes/05plot/histogram/normalized_histogram.py | 7b04a8b26163b1a15f9fd014d8c2cce8cdf0fa7f | [] | no_license | Ziaeemehr/workshop_scripting | cebdcb552720f31fd6524fd43f257ca46baf70e2 | ed5f232f6737bc9f750d704455442f239d4f0561 | refs/heads/main | 2023-08-22T23:00:36.121267 | 2023-07-19T10:53:41 | 2023-07-19T10:53:41 | 153,342,386 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 206 | py | import numpy as np
import matplotlib.pyplot as plt
x = np.random.randn(10000)
fig = plt.figure()
ax = fig.add_subplot(111)
n, bins, rectangles = ax.hist(x, 50, density=True)
# fig.canvas.draw()
plt.show()
| [
"a.ziaeemehr@gmail.com"
] | a.ziaeemehr@gmail.com |
9f858e3de9b23839e71072bef406eaa8c2426642 | edf31957838a65e989d5eb5e8118254ac2413fc8 | /parakeet/transforms/permute_reductions.py | 56f9a4f677101c414f06752a7b3fee9a732fe156 | [
"BSD-3-Clause"
] | permissive | iskandr/parakeet | e35814f9030b9e8508a7049b62f94eee5b8c5296 | d9089f999cc4a417d121970b2a447d5e524a3d3b | refs/heads/master | 2021-07-18T19:03:05.666898 | 2019-03-13T17:20:20 | 2019-03-13T17:20:20 | 5,889,813 | 69 | 7 | NOASSERTION | 2021-07-17T21:43:03 | 2012-09-20T16:54:18 | Python | UTF-8 | Python | false | false | 1,756 | py | from ..builder import build_fn
from ..ndtypes import ArrayT, lower_rank
from ..prims import Prim
from ..syntax import Return, Map
from ..syntax.helpers import is_identity_fn, unwrap_constant
from transform import Transform
def get_nested_map(fn):
if len(fn.body) != 1:
return None
stmt = fn.body[0]
if stmt.__class__ is not Return:
return None
if stmt.value.__class__ is not Map:
return None
return stmt.value
class PermuteReductions(Transform):
"""
When we have a reduction of array values, such as:
Reduce(combine = Map(f), X, axis = 0)
it can be more efficient to interchange the Map and Reduce:
Map(combine = f, X, axis = 1)
"""
def transform_Reduce(self, expr):
return expr
def transform_Scan(self, expr):
if len(expr.args) > 1:
return expr
axis = unwrap_constant(expr.axis)
if axis is None or not isinstance(axis, (int,long)) or axis > 1 or axis < 0:
return expr
if not isinstance(expr.type, ArrayT) or expr.type.rank != 2:
return expr
fn = self.get_fn(expr.fn)
fn_closure_args = self.closure_elts(expr.fn)
if len(fn_closure_args) > 0:
return expr
combine = self.get_fn(expr.combine)
    combine_closure_args = self.closure_elts(expr.combine)
if len(combine_closure_args) > 0:
return expr
if is_identity_fn(fn):
      nested_map = get_nested_map(combine)
      if nested_map is None or not isinstance(nested_map.fn, Prim):
        return expr
arg_t = expr.args[0].type
elt_t = lower_rank(arg_t, 1)
new_nested_fn = None
return Map(fn = new_nested_fn,
args = expr.args,
axis = 1 - axis,
type = expr.type)
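The docstring's claim -- that a Reduce whose combiner is a Map of f equals a Map (along the other axis) of a scalar reduction of f -- can be checked with plain Python lists, independent of the parakeet IR:

```python
from functools import reduce

X = [[1, 2, 3],
     [4, 5, 6]]

# Reduce over axis 0 with a combiner that is a Map of '+':
# fold the rows together elementwise.
reduce_of_maps = reduce(lambda a, b: [x + y for x, y in zip(a, b)], X)

# Interchanged form: Map a scalar '+'-reduction over the columns.
map_of_reduces = [reduce(lambda x, y: x + y, col) for col in zip(*X)]

print(reduce_of_maps)   # [5, 7, 9]
print(map_of_reduces)   # [5, 7, 9]
```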
| [
"alex.rubinsteyn@gmail.com"
] | alex.rubinsteyn@gmail.com |
5636707ee445a2ffa0f842d50f036a902fbb482f | 8bb94363c45803d10b101c070ae0a569bcc66bd6 | /Educational Round 15_A.py | a80fd4d6a3e74e8897f69cfdb6409ba693e212f9 | [] | no_license | rpm1995/Codeforces | 17506dcf2f515226b2208e3e103db30b94d0d30b | 487853264145872e059a76d8bba534da264e8699 | refs/heads/master | 2021-05-23T14:35:34.879518 | 2020-09-08T01:59:05 | 2020-09-08T01:59:05 | 253,342,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 221 | py | n = int(input())
arr = list(map(int, input().split()))
count = 1
ans = 1
for index in range(1, n):
if arr[index] > arr[index-1]:
count += 1
else:
count = 1
ans = max(ans, count)
print(ans)
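The loop above tracks the longest strictly increasing contiguous run. The same logic as a function over a literal list (the stdin reading is dropped here so it can be exercised directly):

```python
def longest_increasing_run(arr):
    # Same scan as the script above: reset the run length whenever
    # the sequence stops strictly increasing.
    if not arr:
        return 0
    count = ans = 1
    for i in range(1, len(arr)):
        count = count + 1 if arr[i] > arr[i - 1] else 1
        ans = max(ans, count)
    return ans

print(longest_increasing_run([1, 7, 2, 11, 15]))  # 3 -- the run 2, 11, 15
print(longest_increasing_run([5, 4, 3]))          # 1
```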
| [
"31997276+rpm1995@users.noreply.github.com"
] | 31997276+rpm1995@users.noreply.github.com |
fa742f33ccad3f7119040a7a242fc2894a500282 | a41457257c588d2a5a9509c55257df9f702c9430 | /src/features/OOneighbors.py | 7b613f16bac2ff071c896a8875ae8319e416fc92 | [] | no_license | schwancr/water | be096962748480d57ce899fd3559dd1989abee0b | 01ab34d3c4240ea9df96a018a7b89dbbe7117a45 | refs/heads/master | 2020-04-05T23:44:06.180343 | 2014-12-28T19:46:52 | 2014-12-28T19:46:52 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,762 | py | import numpy as np
from .base import BaseTransformer
import mdtraj as md
from .utils import get_square_distances
import copy
class OOneighbors(BaseTransformer):
"""
Compute the OO distances and sort them for each water molecule
Parameters
----------
n_waters : int, optional
Limit the feature vectors to the closest n_waters. If None,
then all waters are included.
"""
def __init__(self, n_waters=None):
if n_waters is None:
self.n_waters = n_waters
else:
self.n_waters = int(n_waters)
def transform(self, traj):
"""
Transform a trajectory into the OO features
Parameters
----------
traj : mdtraj.Trajectory
Returns
-------
Xnew : np.ndarray
sorted distances for each water molecule
distances : np.ndarray
distances between each water molecule
"""
oxygens = np.array([i for i in xrange(traj.n_atoms) if traj.top.atom(i).element.symbol == 'O'])
distances = get_square_distances(traj, oxygens)
Xnew = copy.copy(distances)
Xnew.sort()
if self.n_waters is None:
Xnew = Xnew[:, :, 1:]
else:
Xnew = Xnew[:, :, 1:(self.n_waters + 1)]
sorted_waters = np.argsort(distances, axis=-1)
# sorted_waters[t, i, k] contains the k'th closest water index to water i at time t
# k==0 is clearly i
ind0 = np.array([np.arange(Xnew.shape[0])] * Xnew.shape[1]).T
Xnew0 = copy.copy(Xnew)
for k in xrange(1, 5):
Xnew = np.concatenate([Xnew, Xnew0[ind0, sorted_waters[:, :, k]]], axis=2)
return Xnew, distances
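Stripped of the mdtraj machinery, the feature is simply "for each point, the sorted distances to every other point, optionally truncated to the closest few". A pure-Python sketch of that idea (not the trajectory API above; requires Python 3.8+ for `math.dist`):

```python
import math

def sorted_neighbor_distances(points, n_neighbors=None):
    # For each point: distances to all other points, ascending,
    # truncated to n_neighbors if given.
    out = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q)
                       for j, q in enumerate(points) if j != i)
        out.append(dists[:n_neighbors] if n_neighbors else dists)
    return out

pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
nearest = sorted_neighbor_distances(pts, n_neighbors=1)
print(nearest)  # [[3.0], [3.0], [4.0]]
```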
| [
"schwancr@stanford.edu"
] | schwancr@stanford.edu |
455b8a215483b52cd4a3d5f6fcf75b9851e8bcd7 | fd7f7e9c01344b3c3eb8d54b8713153a062eea74 | /docs/source/tutorials/src/ucs/polyline3d.py | d323ea975ec381484c468483263cf158a5fb89c4 | [
"MIT"
] | permissive | bagumondigi/ezdxf | 7843dec2d5f129d79944bc75f333fae33dd4bb11 | 85bdeefc602c070d38d99bf9f437b48bef7460aa | refs/heads/master | 2023-02-18T04:09:17.043410 | 2021-01-09T10:59:47 | 2021-01-09T10:59:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,295 | py | # Copyright (c) 2020 Manfred Moitzi
# License: MIT License
from pathlib import Path
OUT_DIR = Path('~/Desktop/Outbox').expanduser()
import math
import ezdxf
from ezdxf.math import UCS
doc = ezdxf.new('R2010')
msp = doc.modelspace()
# using an UCS simplifies 3D operations, but UCS definition can happen later
# calculating corner points in local (UCS) coordinates without Vec3 class
angle = math.radians(360 / 5)
corners_ucs = [(math.cos(angle * n), math.sin(angle * n), 0) for n in range(5)]
# let's do some transformations by UCS
transformation_ucs = UCS().rotate_local_z(math.radians(15)) # 1. rotation around z-axis
transformation_ucs.shift((0, .333, .333)) # 2. translation (inplace)
corners_ucs = list(transformation_ucs.points_to_wcs(corners_ucs))
location_ucs = UCS(origin=(0, 2, 2)).rotate_local_x(math.radians(-45))
msp.add_polyline3d(
points=corners_ucs,
dxfattribs={
'closed': True,
'color': 1,
}
).transform(location_ucs.matrix)
# Add lines from the center of the POLYLINE to the corners
center_ucs = transformation_ucs.to_wcs((0, 0, 0))
for corner in corners_ucs:
msp.add_line(
center_ucs, corner, dxfattribs={'color': 1}
).transform(location_ucs.matrix)
location_ucs.render_axis(msp)
doc.saveas(OUT_DIR / 'ucs_polyline3d.dxf')
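The only math in the script is the pentagon construction: five points spaced 72 degrees apart on the unit circle. That part can be checked on its own, without ezdxf:

```python
import math

# Pentagon corners as computed in the script above.
angle = math.radians(360 / 5)
corners = [(math.cos(angle * n), math.sin(angle * n), 0) for n in range(5)]

print(len(corners))  # 5
print(corners[0])    # (1.0, 0.0, 0)
# every corner lies on the unit circle in the xy-plane
on_circle = all(abs(x * x + y * y - 1.0) < 1e-9 for x, y, _ in corners)
print(on_circle)     # True
```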
| [
"me@mozman.at"
] | me@mozman.at |
f48fed8fbb9cb25e5b7b4bbb2cd23bc4f29ceb18 | 614ee629aadd3e12256dc4a383eccd0ddf16fe5a | /testing/wait_for_kubeflow.py | c00d1b3ca7c32f86876d52094ed223cf4319c51a | [
"Apache-2.0"
] | permissive | hmizuma/kubeflow | 06298fb7b2382ea236e2201bc18ed1e06eb8bed4 | 9647149083b7090cbe8b3d93f864d81e63b0067a | refs/heads/master | 2020-03-31T11:09:48.413632 | 2018-10-11T19:07:43 | 2018-10-11T19:11:00 | 152,165,761 | 0 | 0 | Apache-2.0 | 2018-10-09T00:50:57 | 2018-10-09T00:51:13 | null | UTF-8 | Python | false | false | 1,525 | py | """Wait for Kubeflow to be deployed."""
import argparse
import logging
from testing import deploy_utils
from kubeflow.testing import test_helper
from kubeflow.testing import util # pylint: disable=no-name-in-module
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument(
"--namespace", default=None, type=str, help=("The namespace to use."))
args, _ = parser.parse_known_args()
return args
def deploy_kubeflow(_):
"""Deploy Kubeflow."""
args = parse_args()
namespace = args.namespace
api_client = deploy_utils.create_k8s_client()
util.load_kube_config()
# Verify that the TfJob operator is actually deployed.
tf_job_deployment_name = "tf-job-operator-v1alpha2"
logging.info("Verifying TfJob controller started.")
util.wait_for_deployment(api_client, namespace, tf_job_deployment_name)
# Verify that JupyterHub is actually deployed.
jupyterhub_name = "tf-hub"
logging.info("Verifying TfHub started.")
util.wait_for_statefulset(api_client, namespace, jupyterhub_name)
# Verify that PyTorch Operator actually deployed
pytorch_operator_deployment_name = "pytorch-operator"
logging.info("Verifying PyTorchJob controller started.")
util.wait_for_deployment(api_client, namespace, pytorch_operator_deployment_name)
def main():
test_case = test_helper.TestCase(
name='deploy_kubeflow', test_func=deploy_kubeflow)
test_suite = test_helper.init(
name='deploy_kubeflow', test_cases=[test_case])
test_suite.run()
if __name__ == "__main__":
main()
| [
"k8s-ci-robot@users.noreply.github.com"
] | k8s-ci-robot@users.noreply.github.com |
dc8a055216007f3a5d2122ad9d34bd2a358f4726 | c1bd12405d244c5924a4b069286cd9baf2c63895 | /azure-mgmt-loganalytics/azure/mgmt/loganalytics/models/management_group_paged.py | 04ad8581510fc92ff23ca223e8587164cd7e0955 | [
"MIT"
] | permissive | lmazuel/azure-sdk-for-python | 972708ad5902778004680b142874582a284a8a7c | b40e0e36cc00a82b7f8ca2fa599b1928240c98b5 | refs/heads/master | 2022-08-16T02:32:14.070707 | 2018-03-29T17:16:15 | 2018-03-29T17:16:15 | 21,287,134 | 1 | 3 | MIT | 2019-10-25T15:56:00 | 2014-06-27T19:40:56 | Python | UTF-8 | Python | false | false | 960 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.paging import Paged
class ManagementGroupPaged(Paged):
"""
    A paging container for iterating over a list of :class:`ManagementGroup <azure.mgmt.loganalytics.models.ManagementGroup>` objects.
"""
_attribute_map = {
'next_link': {'key': 'nextLink', 'type': 'str'},
'current_page': {'key': 'value', 'type': '[ManagementGroup]'}
}
def __init__(self, *args, **kwargs):
super(ManagementGroupPaged, self).__init__(*args, **kwargs)
| [
"laurent.mazuel@gmail.com"
] | laurent.mazuel@gmail.com |
49dde84fb44327191f1a1901fbd68dfb1a948c13 | 6fdf0ad8a70cfe666ab1cae331ddf751178b0f34 | /Python/Backtracking/problem_79. Word Search.py | c39d649e910fd7555bf78b78b3b4843a11c151ac | [] | no_license | vigneshthiagarajan/Leetcode_prep | 3aa46f90af084d6100cd61af28767e811c848d4e | 1f087564e9b68f85d9974c3643538b8370ba82e3 | refs/heads/main | 2023-06-19T06:47:00.388621 | 2021-07-12T04:54:32 | 2021-07-12T04:54:32 | 356,921,259 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,111 | py | class Solution:
def exist(self, board: List[List[str]], word: str) -> bool:
ROWS, COLS = len(board), len(board[0])
# store values (path location to avoid revisiting)
path = set()
def dfs(row, column, seek_char):
if seek_char == len(word):
return True
if (
row < 0
or column < 0
or row >= ROWS
or column >= COLS
or word[seek_char] != board[row][column]
or (row, column) in path
):
return False
path.add((row, column))
result = (
(dfs(row + 1, column, seek_char + 1))
or (dfs(row - 1, column, seek_char + 1))
or (dfs(row, column + 1, seek_char + 1))
or (dfs(row, column - 1, seek_char + 1))
)
path.remove((row, column))
return result
for i in range(ROWS):
for j in range(COLS):
if dfs(i, j, 0):
return True
return False
| [
"vigneshthiagaraj@gmail.com"
] | vigneshthiagaraj@gmail.com |
6e1f4446d9c6e61ac3fcae638f2566546e7a6f54 | 77beefcfbccca789210364fc7fab3fc9508b1a02 | /server.py | 6622ec9e9e6f7115b3bf4c7cc4453a2114840de8 | [] | no_license | cgracedev/MVC_Python_1 | ce05e08b1b8922558ef7244921d800e4227b5ef9 | b6daae43af2d450f0013b872a7dcaaac2049f8f1 | refs/heads/master | 2023-06-05T04:42:23.181976 | 2021-06-29T16:58:05 | 2021-06-29T16:58:05 | 381,436,992 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 281 | py | from wsgiref.simple_server import make_server
import wsgi
# ===========================
#
# 0.1 Server
#
# ===========================
PORT = 8001
print("Open: http://127.0.0.1:{0}/".format(PORT))
httpd = make_server('', PORT, wsgi.application)
httpd.serve_forever()
| [
"pahaz.blinov@gmail.com"
] | pahaz.blinov@gmail.com |
57d67de0e305c902f8f682d239ceb91eb3e19f3b | 07ec5a0b3ba5e70a9e0fb65172ea6b13ef4115b8 | /lib/python3.6/site-packages/tensorflow/contrib/ffmpeg/ops/gen_decode_audio_op_py.py | dffc53fd2902bcb4e8c1505c65f123abd397e8c9 | [] | no_license | cronos91/ML-exercise | 39c5cd7f94bb90c57450f9a85d40c2f014900ea4 | 3b7afeeb6a7c87384049a9b87cac1fe4c294e415 | refs/heads/master | 2021-05-09T22:02:55.131977 | 2017-12-14T13:50:44 | 2017-12-14T13:50:44 | 118,736,043 | 0 | 0 | null | 2018-01-24T08:30:23 | 2018-01-24T08:30:22 | null | UTF-8 | Python | false | false | 129 | py | version https://git-lfs.github.com/spec/v1
oid sha256:8a140d778b63a774ed38be65f8577359c8b50f01eeb823b6c1fd1bc61eabf680
size 2848
| [
"seokinj@jangseog-in-ui-MacBook-Pro.local"
] | seokinj@jangseog-in-ui-MacBook-Pro.local |
9fccf0392ab349af10ee9f804386b1404b0fe641 | 6d25434ca8ce03f8fef3247fd4fc3a1707f380fc | /[0009][Easy][Palindrome_Number][math]/Palindrome_Number_2.py | c67fa378b0bdff4f0cf7e5ac59d195d28abc9dc0 | [] | no_license | sky-dream/LeetCodeProblemsStudy | 145f620e217f54b5b124de09624c87821a5bea1b | e0fde671cdc9e53b83a66632935f98931d729de9 | refs/heads/master | 2020-09-13T08:58:30.712604 | 2020-09-09T15:54:06 | 2020-09-09T15:54:06 | 222,716,337 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 922 | py | # leetcode time cost : 100 ms
# leetcode memory cost : 13.8 MB
# Time Complexity: O(N)
# Space Complexity: O(1)
# solution 2, convert int value into num list
class Solution:
def isPalindrome(self, x: int) -> bool:
if x<0:
return False
nums = []
while x>0:
num = x%10
nums.append(num)
x = x//10
l,r = 0,len(nums)-1
while l<r:
if nums[l]!=nums[r]:
return False
l+=1
r-=1
return True
def main():
inputX,expectRes = -3210, False
obj = Solution()
result = obj.isPalindrome(inputX)
try:
assert result == expectRes
print("passed, result is follow expect:",result)
except AssertionError as aError:
print('failed, result >> ', result,"<< is wrong, ","expect is : ",expectRes)
if __name__ =='__main__':
main() | [
"xxm1263476788@126.com"
] | xxm1263476788@126.com |
f51dac07cbce60b0e882b7eece1c15cb8f37d157 | 602158267284ad23bd9f7709b7f88af814b1564d | /UserProfiles/models.py | 2f05ddf1aa022b786a6c19368ab627aec953da67 | [
"MIT"
] | permissive | Milo-Goodfellow-Work/GuildsDevelopmentBuild | 594d0676ed5a4b15f98db553f9738a0afdfbcc2b | 96b384a4b2a3f5a6e930d6bfb4a3a98c73ba4da1 | refs/heads/master | 2022-03-10T04:35:30.938151 | 2022-02-24T01:19:22 | 2022-02-24T01:19:22 | 196,897,633 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 829 | py | #Non project imports
from django.db import models
from django.conf import settings
import datetime
#Project imports
# Create your models here.
class UserProfileModel(models.Model):
Id = models.AutoField(primary_key=True)
UserProfileRelation = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
UserProfileBio = models.TextField(max_length=300, blank=True, null=True)
UserProfileWebsite = models.CharField(max_length=50, blank=True, null=True)
UserProfileJoinDate = models.DateField(default=datetime.date.today)
UserProfileImage = models.ImageField(upload_to="UserProfiles/", default="UserProfiles/Defaults/Blank.png", blank=True, null=True)
UserProfileHeader = models.ImageField(upload_to="UserProfiles/", default="UserProfiles/Defaults/BlankWhite.png", blank=True, null=True)
| [
"milogoodfellowworkemail@gmail.com"
] | milogoodfellowworkemail@gmail.com |
788fead693d37df2a18c70d02ae6be8d508695ed | be691b23dea7c96b78867e37ca8ca38e994b4afa | /qcloudapi/common/sign.py | dde954cbb2fe0ff0d6bad213bdb83d18d417f77b | [
"Apache-2.0"
] | permissive | aiden0z/qcloudapi-sdk-python | f0e43dd083569ecc3ccc29324b9654daace50838 | 80a1e560e8f4a373544c55658385d60ecb1e9285 | refs/heads/master | 2021-01-11T10:10:02.095944 | 2016-11-03T15:22:11 | 2016-11-03T15:26:55 | 72,758,701 | 2 | 1 | null | 2016-11-03T15:20:56 | 2016-11-03T15:20:56 | null | UTF-8 | Python | false | false | 637 | py | # -*- coding: utf-8 -*-
import hmac
import binascii
import hashlib
def sign(secretKey, requestHost, requestUri, params, method='GET'):
d = {}
for key in params:
if method == 'post' and str(params[key])[0:1] == '@':
continue
d[key] = params[key]
srcStr = method.upper() + requestHost + requestUri + '?'
srcStr += '&'.join('%s=%s' %
(k.replace('_', '.'), d[k]) for k in sorted(d.keys()))
hashed = hmac.new(secretKey.encode('utf-8'),
srcStr.encode('utf-8'), hashlib.sha1)
return binascii.b2a_base64(hashed.digest())[:-1].decode('utf-8')
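A self-contained restatement of the signing routine above (omitting the POST `@` file-parameter skip), with illustrative values -- the host, URI and parameters below are made up for the example, not taken from any real request:

```python
import binascii
import hashlib
import hmac

def sign(secret_key, host, uri, params, method='GET'):
    # Canonical string: METHOD + host + uri + '?' + sorted key=value
    # pairs, with '_' in keys replaced by '.'.
    src = method.upper() + host + uri + '?'
    src += '&'.join('%s=%s' % (k.replace('_', '.'), params[k])
                    for k in sorted(params))
    digest = hmac.new(secret_key.encode('utf-8'),
                      src.encode('utf-8'), hashlib.sha1).digest()
    # base64 of a 20-byte SHA-1 digest, trailing newline stripped
    return binascii.b2a_base64(digest)[:-1].decode('utf-8')

sig = sign('my-secret', 'example.api.host', '/v2/index.php',
           {'Action': 'DescribeInstances', 'Nonce': 11886,
            'Timestamp': 1465185768})
print(len(sig))  # 28 -- base64 of a 20-byte SHA-1 digest, with padding
```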
| [
"aiden0xz@gmail.com"
] | aiden0xz@gmail.com |
3ac43819bd70fd56b4582082d60c1b7b78e3cc46 | 41890f62e6af50205257c54cc588e18fe0b10d4e | /pontoon/sync/formats/ftl.py | 964acf7ac0da8ec10e3aef09f33ac22abb45119a | [
"BSD-3-Clause"
] | permissive | naveenjafer/pontoon | 4cc933154e9e1f80063e5c4cb7fb4004b8909c03 | 0e3a54ec80d131a5853f73529067cd8d10ced614 | refs/heads/master | 2021-01-12T12:28:52.914480 | 2016-10-31T20:24:23 | 2016-10-31T20:24:23 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,089 | py | from __future__ import absolute_import
import codecs
import copy
import logging
import os
from ftl.format.parser import FTLParser, ParseContext
from ftl.format.serializer import FTLSerializer
from pontoon.sync import SyncError
from pontoon.sync.formats.base import ParsedResource
from pontoon.sync.vcs.models import VCSTranslation
log = logging.getLogger(__name__)
class L20NEntity(VCSTranslation):
"""
Represents entities in l20n (without its attributes).
"""
def __init__(self, key, source_string, source_string_plural, strings, comments=None, order=None):
super(L20NEntity, self).__init__(
key=key,
source_string=source_string,
source_string_plural=source_string_plural,
strings=strings,
comments=comments or [],
fuzzy=False,
order=order,
)
def __repr__(self):
return '<L20NEntity {key}>'.format(key=self.key.encode('utf-8'))
class L20NResource(ParsedResource):
def __init__(self, path, locale, source_resource=None):
self.path = path
self.locale = locale
self.entities = {}
self.source_resource = source_resource
self.order = 0
# Copy entities from the source_resource if it's available.
if source_resource:
for key, entity in source_resource.entities.items():
self.entities[key] = L20NEntity(
entity.key,
'',
'',
{},
copy.copy(entity.comments),
entity.order
)
with codecs.open(path, 'r', 'utf-8') as resource:
self.structure = FTLParser().parseResource(resource.read())
def get_comment(obj):
return [obj['comment']['content']] if obj['comment'] else []
def parse_entity(obj, section_comment=[]):
translation = FTLSerializer().dumpEntity(obj).split('=', 1)[1].lstrip(' ')
self.entities[obj['id']['name']] = L20NEntity(
obj['id']['name'],
translation,
'',
{None: translation},
section_comment + get_comment(obj),
self.order
)
self.order += 1
for obj in self.structure[0]['body']:
if obj['type'] == 'Entity':
parse_entity(obj)
elif obj['type'] == 'Section':
section_comment = get_comment(obj)
for obj in obj['body']:
if obj['type'] == 'Entity':
parse_entity(obj, section_comment)
@property
def translations(self):
return sorted(self.entities.values(), key=lambda e: e.order)
def save(self, locale):
"""
Load the source resource, modify it with changes made to this
Resource instance, and save it over the locale-specific
resource.
"""
if not self.source_resource:
raise SyncError('Cannot save l20n resource {0}: No source resource given.'
.format(self.path))
with codecs.open(self.source_resource.path, 'r', 'utf-8') as resource:
structure = FTLParser().parseResource(resource.read())
def serialize_entity(obj, entities):
entity_id = obj['id']['name']
translations = self.entities[entity_id].strings
if translations:
source = translations[None]
key = self.entities[entity_id].key
entity = ParseContext(key + '=' + source).getEntity().toJSON()
obj['value'] = entity['value']
obj['traits'] = entity['traits']
else:
index = entities.index(obj)
del entities[index]
entities = structure[0]['body']
# Use list() to iterate over a copy, leaving original free to modify
for obj in list(entities):
if obj['type'] == 'Entity':
serialize_entity(obj, entities)
elif obj['type'] == 'Section':
index = entities.index(obj)
section = entities[index]['body']
for obj in list(section):
if obj['type'] == 'Entity':
serialize_entity(obj, section)
# Remove section if empty
if len(section) == 0:
del entities[index]
# Create parent directory if it doesn't exist.
try:
os.makedirs(os.path.dirname(self.path))
except OSError:
pass # Already exists, phew!
with codecs.open(self.path, 'w+', 'utf-8') as f:
f.write(FTLSerializer().serialize(structure[0]))
log.debug('Saved file: %s', self.path)
def parse(path, source_path=None, locale=None):
if source_path is not None:
source_resource = L20NResource(source_path, locale)
else:
source_resource = None
return L20NResource(path, locale, source_resource)
| [
"matjaz.horvat@gmail.com"
] | matjaz.horvat@gmail.com |
f4f450a376f97b550804d200f6f22aa87abf6379 | 163bbb4e0920dedd5941e3edfb2d8706ba75627d | /Code/CodeRecords/2391/60653/304418.py | e391f5e13131e83b305ef08178bbf1e9eeb8d30f | [] | no_license | AdamZhouSE/pythonHomework | a25c120b03a158d60aaa9fdc5fb203b1bb377a19 | ffc5606817a666aa6241cfab27364326f5c066ff | refs/heads/master | 2022-11-24T08:05:22.122011 | 2020-07-28T16:21:24 | 2020-07-28T16:21:24 | 259,576,640 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 374 | py | n = int(input())
a = []
index = []
for i in range(0, n):
a.append(input())
index.append(1)
m = int(input())
for i in range(0, m):
s = input()
if a.count(s) == 0:
print("WRONG")
elif index[a.index(s)] == 1:
print("OK")
index[a.index(s)] = 2
elif index[a.index(s)] == 2:
print("REPEAT")
else:
print("WRONG")
| [
"1069583789@qq.com"
] | 1069583789@qq.com |
c0144b9cc48ec9754cb179434848257a49a3642a | 3eecaca9c842bffe8757917bcdb5a6911a7ed157 | /strings/string_strip.py | 8e5be49dc8dba5c0de0c192a713c30b2674c7558 | [] | no_license | matamkiran/python2020 | 2c79cb16e526ea7242e976c79c5db61d97326ec7 | c86b96ed3a266245f14b26fc7e8c0c35e76d4946 | refs/heads/master | 2023-02-10T06:21:50.609567 | 2023-02-01T18:43:28 | 2023-02-01T18:43:28 | 244,181,848 | 5 | 1 | null | 2021-04-28T17:34:01 | 2020-03-01T16:18:05 | Python | UTF-8 | Python | false | false | 194 | py | # -*- coding: utf-8 -*-
"""
Created on Thu Jan 28 08:43:01 2021
@author: Divya
"""
s1=" hello today is thursday in january 2021 "
s2=s1.strip("i")
print(len(s1))
print(len(s2)) | [
"noreply@github.com"
] | matamkiran.noreply@github.com |
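In the snippet above, `str.strip(chars)` removes characters only from the two ends of the string; since `s1` begins and ends with spaces, `strip("i")` removes nothing and the two lengths printed are equal. A quick illustration:

```python
s = "iiihello worldiii"
print(s.strip("i"))           # hello world -- 'i' stripped from the ends only
print("  padded  ".strip())   # padded      -- default strips whitespace

s1 = " hello today is thursday in january 2021 "
print(len(s1) == len(s1.strip("i")))  # True: no leading/trailing 'i'
```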
f8a3d56c20af90cf2cb55b484973f80abc4e2ca3 | 38382e23bf57eab86a4114b1c1096d0fc554f255 | /hazelcast/protocol/codec/client_destroy_proxy_codec.py | e047ba77e915584754a1b52567d3cc7fac6dff13 | [
"Apache-2.0"
] | permissive | carbonblack/hazelcast-python-client | e303c98dc724233376ab54270832bfd916426cea | b39bfaad138478e9a25c8a07f56626d542854d0c | refs/heads/gevent-3.12.3.1 | 2023-04-13T09:43:30.626269 | 2020-09-18T17:37:17 | 2020-09-18T17:37:17 | 110,181,474 | 3 | 1 | Apache-2.0 | 2020-12-01T17:45:42 | 2017-11-10T00:21:55 | Python | UTF-8 | Python | false | false | 953 | py | from hazelcast.serialization.bits import *
from hazelcast.protocol.client_message import ClientMessage
from hazelcast.protocol.codec.client_message_type import *
REQUEST_TYPE = CLIENT_DESTROYPROXY
RESPONSE_TYPE = 100
RETRYABLE = False
def calculate_size(name, service_name):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += calculate_size_str(service_name)
return data_size
def encode_request(name, service_name):
""" Encode request into client_message"""
client_message = ClientMessage(payload_size=calculate_size(name, service_name))
client_message.set_message_type(REQUEST_TYPE)
client_message.set_retryable(RETRYABLE)
client_message.append_str(name)
client_message.append_str(service_name)
client_message.update_frame_length()
return client_message
# Empty decode_response(client_message), this message has no parameters to decode
| [
"arslanasim@gmail.com"
] | arslanasim@gmail.com |
27aab182b07728cb4d279da014b66bd8a2bb192b | 7296bd842936b1980b2271cff84aec831fa7dceb | /PyQt5All/PyQt521/QLabel(movie).py | e3b37362ab97ae08dc5477468e2f29dc4fdd8397 | [] | no_license | redmorningcn/PyQT5Example | dbb1400e754909092dd13c111b4b241245a70720 | 4d4c44365d5d0bf3d9fd94922a13d0b50f17a95c | refs/heads/master | 2022-08-21T09:32:44.938410 | 2020-05-26T05:26:54 | 2020-05-26T05:26:54 | 266,952,766 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,643 | py | #coding=utf-8
'''
A small QLabel example: displaying a GIF animation!
Article link: http://www.xdbcb8.com/archives/460.html
'''
import sys
from PyQt5.QtWidgets import QWidget, QApplication, QLabel, QPushButton
from PyQt5.QtGui import QMovie, QPixmap
class Example(QWidget):
'''
    Display a GIF animation
'''
def __init__(self):
'''
        Some initial settings
'''
super().__init__()
self.initUI()
def initUI(self):
'''
        Initial UI settings
'''
self.resize(550, 300)
self.setWindowTitle('关注微信公众号:学点编程吧--标签:动画(QLabel)')
self.lb = QLabel(self)
self.lb.setGeometry(100, 50, 300, 200)
self.bt1 = QPushButton('开始', self)
self.bt2 = QPushButton('停止', self)
self.bt1.move(100, 20)
self.bt2.move(280, 20)
self.pix = QPixmap('movie.gif')
self.lb.setPixmap(self.pix)
        self.lb.setScaledContents(True)  # show the whole image, scaled to the label
self.bt1.clicked.connect(self.run)
self.bt2.clicked.connect(self.run)
self.show()
def run(self):
'''
        Load a GIF animation into the QLabel
'''
movie = QMovie("movie.gif")
self.lb.setMovie(movie)
if self.sender() == self.bt1:
            movie.start()  # start the animation
else:
            movie.stop()  # stop the animation
self.lb.setPixmap(self.pix)
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = Example()
sys.exit(app.exec_()) | [
"redmorningcn@qq.com"
] | redmorningcn@qq.com |
4107eb9e19e746eac21afe0667bdc3910f1de0d1 | d3cf78e6397aa3d4139f46ab24afb59abb0e6536 | /prpc/request.py | 3e7d6e6a511b88b44b24d3968cd228bfa833540d | [
"MIT"
] | permissive | einsfr/cmc-storage | d84a1bca326f5c619037fe50756a7300384e72b0 | b8f806ee343accf827c78c556a6bb4023bed4556 | refs/heads/master | 2021-01-13T02:45:23.245721 | 2016-12-20T21:24:21 | 2016-12-20T21:24:21 | 76,929,772 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 601 | py | import json
class Request:
def __init__(self, func_name: str, func_args: list, func_kwargs: dict):
self._func_name = func_name
self._func_args = func_args
self._func_kwargs = func_kwargs
@property
def func_name(self) -> str:
return self._func_name
@property
def func_args(self) -> list:
return self._func_args
@property
def func_kwargs(self) -> dict:
return self._func_kwargs
def create_request_from_json(json_data):
    # Assumes the payload has the shape
    # {"func_name": ..., "func_args": [...], "func_kwargs": {...}}.
    def object_hook(d):
        if {'func_name', 'func_args', 'func_kwargs'} <= set(d):
            return Request(d['func_name'], d['func_args'], d['func_kwargs'])
        return d

    return json.loads(json_data, object_hook=object_hook)
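The `object_hook` pattern used here can be sketched standalone: `json.loads` calls the hook for every decoded JSON object, innermost first, letting it swap recognized dicts for something else. The wire format below (`func_name`/`func_args`/`func_kwargs` keys) is inferred from the Request constructor, not documented by the snippet; the sketch returns a plain tuple to stay self-contained:

```python
import json

def parse_request(json_data):
    # Hook runs on every JSON object; unrecognized dicts pass through.
    def object_hook(d):
        if {'func_name', 'func_args', 'func_kwargs'} <= set(d):
            return (d['func_name'], d['func_args'], d['func_kwargs'])
        return d
    return json.loads(json_data, object_hook=object_hook)

name, args, kwargs = parse_request(
    '{"func_name": "add", "func_args": [1, 2], "func_kwargs": {}}')
print(name, args, kwargs)  # add [1, 2] {}
```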
| [
"einsfr@users.noreply.github.com"
] | einsfr@users.noreply.github.com |
688a6b1bf21482b83c7f677b2f3e39f1559200b0 | bb0085062202bc36d2a479ac3fac00702f6b9e6e | /pylib/simplelex.py | d70168be4c7ab05b72eacc755bace9bee566aa70 | [] | no_license | morike/winterpy | 882f349700e427d62b8e55282032e5769e0bbc8a | d74bade8501fc7c596aac2650c1b070754e71818 | refs/heads/master | 2023-06-17T01:23:03.329145 | 2021-07-17T05:34:55 | 2021-07-17T05:34:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,575 | py | # modified version, originally from 风间星魂 <fengjianxinghun AT gmail>
# BSD Lisence
import re
from collections import UserString
_RE_Pattern = re.compile('').__class__
class Token:
'''useful attributes: pattern, idtype'''
def __init__(self, pat, idtype=None, flags=0):
self.pattern = pat if isinstance(pat, _RE_Pattern) else re.compile(pat, flags)
self.idtype = idtype
def __repr__(self):
return '<%s: pat=%r, idtype=%r>' % (
self.__class__.__name__,
self.pattern.pattern, self.idtype)
class TokenResult(UserString):
'''useful attributes: match, token, idtype'''
def __init__(self, string, match, token):
self.data = string
self.token = token
self.match = match
self.idtype = token.idtype
class Lex:
'''first matching token is taken'''
def __init__(self, tokens=()):
self.tokens = tokens
def parse(self, string):
ret = []
while len(string) > 0:
for p in self.tokens:
m = p.pattern.match(string)
if m is not None:
ret.append(TokenResult(m.group(), match=m, token=p))
string = string[m.end():]
break
else:
break
return ret, string
def main():
s = 'Re: [Vim-cn] Re: [Vim-cn:7166] Re: 回复:[OT] This is the subject.'
reply = Token(r'R[Ee]:\s?|[回答]复[::]\s?', 're')
ottag = Token(r'\[OT\]\s?', 'ot', flags=re.I)
tag = Token(r'\[([\w._-]+)[^]]*\]\s?', 'tag')
lex = Lex((reply, ottag, tag))
tokens, left = lex.parse(s)
print('tokens:', tokens)
print('left:', left)
if __name__ == '__main__':
main()
| [
"lilydjwg@gmail.com"
] | lilydjwg@gmail.com |
1e9838e66f4aaa132b3f611bc032e88b15b51fdc | bf79a856c8a9dd2d72190883dc31f5e652a3b9e7 | /codelabs/flex_and_vision/main_test.py | 78e68b1ec390ab8b5a7fd212b2472c0cf5b654f6 | [
"Apache-2.0"
] | permissive | aog5/python-docs-samples | 5630c2864ee4f141e09d6909de3a3b7ac211703a | 300527f4b12369f6b0ad59fba42cc66159853bdd | refs/heads/master | 2022-10-28T04:48:52.060095 | 2022-10-18T06:50:12 | 2022-10-18T06:50:12 | 157,914,880 | 2 | 0 | Apache-2.0 | 2018-11-16T19:55:30 | 2018-11-16T19:55:30 | null | UTF-8 | Python | false | false | 1,556 | py | # Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
import backoff
from google.api_core.exceptions import GatewayTimeout
import pytest
import requests
import six
import main
TEST_PHOTO_URL = (
'https://upload.wikimedia.org/wikipedia/commons/5/5e/'
'John_F._Kennedy%2C_White_House_photo_portrait%2C_looking_up.jpg')
@pytest.fixture
def app():
main.app.testing = True
client = main.app.test_client()
return client
def test_index(app):
r = app.get('/')
assert r.status_code == 200
def test_upload_photo(app):
test_photo_data = requests.get(TEST_PHOTO_URL).content
test_photo_filename = 'flex_and_vision_{}.jpg'.format(uuid.uuid4().hex)
@backoff.on_exception(backoff.expo, GatewayTimeout, max_time=120)
def run_sample():
return app.post(
'/upload_photo',
data={
'file': (six.BytesIO(test_photo_data), test_photo_filename)
}
)
r = run_sample()
assert r.status_code == 302
| [
"noreply@github.com"
] | aog5.noreply@github.com |
9e2373add2a7b61723ce663df8b290fc55216a3d | 368be25e37bafa8cc795f7c9f34e4585e017091f | /.history/app_fav_books/views_20201113172704.py | ede0818288139a19e403e1f83e9148e723028593 | [] | no_license | steven-halla/fav_books_proj | ebcfbfda0e7f3cdc49d592c86c633b1d331da513 | 512005deb84ac906c9f24d4ab0939bd0db096716 | refs/heads/master | 2023-03-30T09:37:38.016063 | 2021-04-02T20:27:22 | 2021-04-02T20:27:22 | 354,125,658 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 614 | py | from django.shortcuts import render, redirect
from .models import *
from django.contrib import messages
def index(request):
return render(request, "index.html")
def register_New_User(request):
errors = User.objects.basic_validator(request.POST)
if len(errors) > 0:
for key, value in errors.items():
messages.error(request, value)
return redirect("/")
else:
first_name_from_post = request.POST['first_name']
last_name_from_post = request.POST['last_name']
email_from_post = request.POST['email']
        password_from_post = request.POST['password']
| [
"69405488+steven-halla@users.noreply.github.com"
] | 69405488+steven-halla@users.noreply.github.com |
e36f95ca1ef2aaf75d4cae00f88c4dd191808c99 | 7a20dac7b15879b9453150b1a1026e8760bcd817 | /Curso/ModuloTkinter/Aula025AppTempoCores.py | 8a4e791e9d788b1117e5122b0eb3ed4b80c90b9c | [
"MIT"
] | permissive | DavidBitner/Aprendizado-Python | 7afbe94c48c210ddf1ab6ae21109a8475e11bdbc | e1dcf18f9473c697fc2302f34a2d3e025ca6c969 | refs/heads/master | 2023-01-02T13:24:38.987257 | 2020-10-26T19:31:22 | 2020-10-26T19:31:22 | 283,448,224 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,129 | py | from tkinter import *
import requests
import json
cidade = qualidade = categoria = app_cor = ""
root = Tk()
root.title("Doidera")
root.geometry("400x45+200+200")
try:
api_request = requests.get("http://www.airnowapi.org/aq/observation/zipCode/current/?format=application/json&zipCode=90210&distance=25&API_KEY=E896BA33-D702-43CF-A0E5-8D1F9800E7E5")
api = json.loads(api_request.content)
cidade = api[0]["ReportingArea"]
qualidade = api[0]["AQI"]
categoria = api[0]["Category"]["Name"]
    if categoria == "Good":
        app_cor = "#0C0"
    elif categoria == "Moderate":
        app_cor = "#FFFF00"
    elif categoria == "Unhealthy for Sensitive Groups":
        app_cor = "#ff9900"
    elif categoria == "Unhealthy":
        app_cor = "#FF0000"
    elif categoria == "Very Unhealthy":
        app_cor = "#990066"
    elif categoria == "Hazardous":
        app_cor = "#660000"
root.configure(background=app_cor)
my_label = Label(root, text=f"{cidade} {qualidade} {categoria}", font=("Helvetica", 20), background=app_cor)
my_label.pack()
except Exception as e:
api = "Erro..."
root.mainloop()
| [
"david-bitner@hotmail.com"
] | david-bitner@hotmail.com |
503d7fafd2d155b4dad9e5478183af5b7f993086 | 5c00b0626b4ec2bc428e565c97b4afc355198cc4 | /torchsim/core/eval/node_accessors/sp_node_accessor.py | df3e8e0235585895635fcfba2faa54b339b3c212 | [
"LicenseRef-scancode-unknown-license-reference",
"LicenseRef-scancode-warranty-disclaimer",
"Apache-2.0"
] | permissive | andreofner/torchsim | 8cff778a324d4f7dc040f11a12d0dc8cd66375b7 | 81d72b82ec96948c26d292d709f18c9c77a17ba4 | refs/heads/master | 2021-10-24T05:25:33.740521 | 2019-03-22T10:20:00 | 2019-03-22T10:20:00 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,388 | py | import torch
from torchsim.core.nodes.spatial_pooler_node import SpatialPoolerFlockNode
class SpatialPoolerFlockNodeAccessor:
"""Adaptor for the SpatialPoolerFlockNode allowing access to the basic measurable values."""
@staticmethod
def get_output_id(node: SpatialPoolerFlockNode) -> int:
"""Get argmax of the output of the spatial pooler.
Args:
node: target node
Returns:
Scalar from the range <0, sp_size).
"""
assert node.params.flock_size == 1
max_id = torch.argmax(node.outputs.sp.forward_clusters.tensor)
return max_id.to('cpu').data.item()
@staticmethod
def get_output_tensor(node: SpatialPoolerFlockNode) -> torch.Tensor:
"""
Args:
node:
Returns: tensor containing the output of the SP
"""
return node.outputs.sp.forward_clusters.tensor
@staticmethod
def get_reconstruction(node: SpatialPoolerFlockNode) -> torch.Tensor:
return node.outputs.sp.current_reconstructed_input.tensor
@staticmethod
def get_sp_deltas(node: SpatialPoolerFlockNode) -> torch.Tensor:
return node.memory_blocks.sp.cluster_center_deltas.tensor
@staticmethod
def get_sp_boosting_durations(node: SpatialPoolerFlockNode) -> torch.Tensor:
return node.memory_blocks.sp.cluster_boosting_durations.tensor
| [
"jan.sinkora@goodai.com"
] | jan.sinkora@goodai.com |
95dc5375c6ea9eb7bc45bd76c10ab8993f43864d | 47ff744da519c525cccfad1d8cead74f7e2cd209 | /uge4/.history/exercise_20200220122938.py | 576d31ba0a92a448bd9bacc09ab907309c9131fb | [] | no_license | Leafmight/Python | f6098395a7a13dd6afe6eb312a3eb1f3dbe78b84 | d987f22477c77f3f21305eb922ae6855be483255 | refs/heads/master | 2020-12-21T14:21:06.802341 | 2020-05-22T10:21:37 | 2020-05-22T10:21:37 | 236,457,255 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 505 | py | import numpy as np
filename = './befkbhalderstatkode.csv'
dd = np.genfromtxt(filename, delimiter=',', dtype=np.uint, skip_header=1)
neighb = {1: 'Indre By', 2: 'Østerbro', 3: 'Nørrebro', 4: 'Vesterbro/Kgs. Enghave',
5: 'Valby', 6: 'Vanløse', 7: 'Brønshøj-Husum', 8: 'Bispebjerg', 9: 'Amager Øst',
10: 'Amager Vest', 99: 'Udenfor'}
def pop(aar=2015, bydel=1):
    # build a boolean mask selecting rows for the given year (aar) and neighbourhood id (bydel)
    hood_mask = (dd[:,0] == aar) & (dd[:,1] == bydel)
print(dd[hood_mask]) | [
"jacobfolke@hotmail.com"
] | jacobfolke@hotmail.com |
b054a547ad0202aad1fd12e2b4540060dd7a77c6 | b3a8baa6a89092cfac1fb9da27f5aad3a6055b52 | /tornado_http2/curl.py | 60b032ac6c509aa5f1a6466f8b94e7c2040aecef | [
"Apache-2.0"
] | permissive | bdarnell/tornado_http2 | 42869356290d0e5a617652422b3d49213ae4f084 | 2cd0a6b45d5cd1c7145ca769202104c2c656a85f | refs/heads/master | 2023-08-18T21:41:06.513241 | 2022-12-27T20:46:17 | 2022-12-27T20:46:17 | 21,544,246 | 83 | 12 | Apache-2.0 | 2023-08-14T21:39:29 | 2014-07-06T16:54:54 | Python | UTF-8 | Python | false | false | 1,072 | py | import pycurl
from tornado import curl_httpclient
try:
CURL_HTTP_VERSION_2 = pycurl.CURL_HTTP_VERSION_2
except AttributeError:
# Pycurl doesn't yet have this constant even when libcurl does.
CURL_HTTP_VERSION_2 = pycurl.CURL_HTTP_VERSION_1_1 + 1
class CurlAsyncHTTP2Client(curl_httpclient.CurlAsyncHTTPClient):
def _curl_setup_request(self, curl, request, buffer, headers):
super(CurlAsyncHTTP2Client, self)._curl_setup_request(
curl, request, buffer, headers)
curl.setopt(pycurl.HTTP_VERSION, CURL_HTTP_VERSION_2)
def _finish(self, curl, curl_error=None, curl_message=None):
# Work around a bug in curl 7.41: if the connection is closed
# during an Upgrade request, this is not reported as an error
# but status is zero.
if not curl_error:
code = curl.getinfo(pycurl.HTTP_CODE)
if code == 0:
curl_error = pycurl.E_PARTIAL_FILE
super(CurlAsyncHTTP2Client, self)._finish(
curl, curl_error=curl_error, curl_message=curl_message)
| [
"ben@bendarnell.com"
] | ben@bendarnell.com |
36d05ddd8e400f84ea6b5bc8bdf10f0e350c95c0 | 9e988c0dfbea15cd23a3de860cb0c88c3dcdbd97 | /sdBs/AllRun/kuv_07527+4113/sdB_KUV_07527+4113_lc.py | a8dd206123359c38fb487aef6b087a408f1e1151 | [] | no_license | tboudreaux/SummerSTScICode | 73b2e5839b10c0bf733808f4316d34be91c5a3bd | 4dd1ffbb09e0a599257d21872f9d62b5420028b0 | refs/heads/master | 2021-01-20T18:07:44.723496 | 2016-08-08T16:49:53 | 2016-08-08T16:49:53 | 65,221,159 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 347 | py | from gPhoton.gAperture import gAperture
def main():
gAperture(band="NUV", skypos=[119.03,41.08485], stepsz=30., csvfile="/data2/fleming/GPHOTON_OUTPU/LIGHTCURVES/sdBs/sdB_KUV_07527+4113 /sdB_KUV_07527+4113_lc.csv", maxgap=1000., overwrite=True, radius=0.00555556, annulus=[0.005972227,0.0103888972], verbose=3)
if __name__ == "__main__":
main()
| [
"thomas@boudreauxmail.com"
] | thomas@boudreauxmail.com |
733b2a6a99aa508741a9f3cd4b55af2f5e475788 | d57b51ec207002e333b8655a8f5832ed143aa28c | /.history/gos_20200614063514.py | 96ab0d212288b68fa3eb3d15272551d79c47c32c | [] | no_license | yevheniir/python_course_2020 | b42766c4278a08b8b79fec77e036a1b987accf51 | a152d400ab4f45d9d98d8ad8b2560d6f0b408c0b | refs/heads/master | 2022-11-15T07:13:24.193173 | 2020-07-11T15:43:26 | 2020-07-11T15:43:26 | 278,890,802 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 9,642 | py | # # Import the necessary libraries
# from BeautifulSoup import BeautifulSoup
# import urllib2
# import re
# # Create a function that searches for links
# def getLinks(url):
# # fetch the page content and assign it to a variable
# html_page = urllib2.urlopen(url)
# # convert the content into a BeautifulSoup object
# soup = BeautifulSoup(html_page)
# # create an empty list for the links
# links = []
# # USE A LOOP TO WALK OVER EVERY ELEMENT THAT CONTAINS A LINK
# for link in soup.findAll('a', attrs={'href': re.compile("^http://")}):
# # append every link to the list
# links.append(link.get('href'))
# # return the list
# return links
# -----------------------------------------------------------------------------------------------------------
# # # Import the necessary libraries
# import subprocess
# # Create a loop, using range() to generate consecutive numbers
# for ping in range(1,10):
# # build an IP address from the iteration number
# address = "127.0.0." + str(ping)
# # call() pings the IP address; the return code is stored in a variable
# res = subprocess.call(['ping', '-c', '3', address])
# # check the return code with conditionals and print the result
# if res == 0:
# print "ping to", address, "OK"
# elif res == 2:
# print "no response from", address
# else:
# print "ping to", address, "failed!"
# -----------------------------------------------------------------------------------------------------------
# # Import the necessary libraries
# import requests
# # iterate over the list of image URLs
# for i, pic_url in enumerate(["http://x.com/nanachi.jpg", "http://x.com/nezuko.jpg"]):
# # open a file whose name is based on the iteration number
# with open('pic{0}.jpg'.format(i), 'wb') as handle:
# # request the image
# response = requests.get(pic_url, stream=True)
# # use a conditional to check whether the request succeeded
# if not response.ok:
# print(response)
# # iterate over the image bytes, writing them to the file in 1024-byte chunks
# for block in response.iter_content(1024):
# # stop once there are no bytes left
# if not block:
# break
# # write the bytes to the file
# handle.write(block)
# -----------------------------------------------------------------------------------------------------------
# # Create a class for a bank account
# class Bank_Account:
# # the constructor initializes the balance to 0
# def __init__(self):
# self.balance=0
# print("Hello!!! Welcome to the Deposit & Withdrawal Machine")
# # in the deposit method, ask for the amount via input() and add it to the balance
# def deposit(self):
# amount=float(input("Enter amount to be Deposited: "))
# self.balance += amount
# print("\n Amount Deposited:",amount)
# # in the withdraw method, ask for the amount via input() and subtract it from the balance
# def withdraw(self):
# amount = float(input("Enter amount to be Withdrawn: "))
# # use a conditional to check that the balance is sufficient
# if self.balance>=amount:
# self.balance-=amount
# print("\n You Withdrew:", amount)
# else:
# print("\n Insufficient balance ")
# # print the balance to the screen
# def display(self):
# print("\n Net Available Balance=",self.balance)
# # create an account
# s = Bank_Account()
# # run some operations on the account
# s.deposit()
# s.withdraw()
# s.display()
# -----------------------------------------------------------------------------------------------------------
# # Create a recursive function that takes a decimal number
# def decimalToBinary(n):
# # check whether the number is greater than 1
# if(n > 1):
# # if so, integer-divide it by 2 and call the function recursively
# decimalToBinary(n//2)
# # then print the remainder of dividing the number by 2
# print(n%2, end=' ')
# # Create a function that takes a binary number
# def binaryToDecimal(binary):
# # create a helper variable
# binary1 = binary
# # initialize 3 more variables to 0
# decimal, i, n = 0, 0, 0
# # iterate until the given number becomes 0
# while(binary != 0):
# # take the remainder of dividing the number by 10 and store it
# dec = binary % 10
# # add to the result the product of dec and 2 raised to the iteration number
# decimal = decimal + dec * pow(2, i)
# # drop the last digit of binary
# binary = binary//10
# # increment the iteration counter
# i += 1
# # print the result
# print(decimal)
# -----------------------------------------------------------------------------------------------------------
# # Import the necessary libraries
# import re
# # use a conditional to check whether the given e-mail matches a regex found on the internet
# if re.match(r"[^@]+@[^@]+\.[^@]+", "nanachi@gmail.com"):
# # if it does, print valid
# print("valid")
# -----------------------------------------------------------------------------------------------------------
# # Create a function that takes the text to encrypt and a shift
# def encrypt(text,s):
# # create a variable for the result
# result = ""
# # iterate over the text using range() and the text length
# for i in range(len(text)):
# # take the character at the current index
# char = text[i]
# # check whether the character is uppercase
# if (char.isupper()):
# # encode the letter based on its code point
# result += chr((ord(char) + s-65) % 26 + 65)
# else:
# # encode the letter based on its code point
# result += chr((ord(char) + s - 97) % 26 + 97)
# # return the result
# return result
# -----------------------------------------------------------------------------------------------------------
# # Create a list of phone numbers
# numbers = ["0502342349", "0500897897", "0992342349"]
# # initialize a variable for the result
# result = {}
# # iterate over the numbers to initialize the result keys
# for num in numbers:
# # the key is the 3-digit operator prefix; give it an empty list
# result[num[:3]] = []
# # iterate over the numbers again
# for num in numbers:
# # append each number under its operator
# result[num[:3]].append(num)
# # print the result
# print(result)
# -----------------------------------------------------------------------------------------------------------
# Import the necessary libraries
import unittest
# Create a test class inheriting from unittest.TestCase
class TestStringMethods(unittest.TestCase):
    # Create a test for the upper() string method
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')
# Run the script
if __name__ == '__main__':
unittest.main() | [
"yevheniira@intelink-ua.com"
] | yevheniira@intelink-ua.com |
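The two number-base routines in the commented-out section above print their results directly. As an illustrative sketch of the same idea (not the original author's code), the pair can be written with return values, which makes the round trip easy to check: decimal to binary by recursive division by 2, and a "binary" number written with base-10 digits back to decimal by weighting each digit with a power of 2.

```python
def decimal_to_binary(n):
    # recurse on n // 2 first so the most significant digit comes out first
    if n > 1:
        return decimal_to_binary(n // 2) + str(n % 2)
    return str(n % 2)

def binary_to_decimal(binary):
    decimal, i = 0, 0
    while binary != 0:
        dec = binary % 10            # take the last "binary" digit
        decimal += dec * pow(2, i)   # weight it by its power of two
        binary //= 10                # drop that digit
        i += 1                       # move to the next position
    return decimal
```

For example, `decimal_to_binary(10)` gives `'1010'` and `binary_to_decimal(1010)` gives `10` back.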
7de69f387698465c1fd87bb5a52755574020c843 | 9619daf132259c31b31c9e23a15baa675ebc50c3 | /memphis.contenttype/memphis/contenttype/configlet.py | 6d36f9e48f736c487d305d9b8d87228f53dca282 | [] | no_license | fafhrd91/memphis-dev | ade93c427c1efc374e0e1266382faed2f8e7cd89 | c82aac1ad3a180ff93370b429498dbb1c2e655b8 | refs/heads/master | 2016-09-05T19:32:35.109441 | 2011-08-22T06:30:43 | 2011-08-22T06:30:43 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,204 | py | """ ttw content types management """
from zope import interface, event
from zope.schema.vocabulary import SimpleTerm, SimpleVocabulary
from memphis import controlpanel, storage
from memphis.contenttype.location import LocationProxy
from memphis.contenttype.interfaces import \
IFactory, IContentType, IContentTypeSchema, IContentTypesConfiglet
class ContentTypeFactory(object):
interface.implements(IFactory)
name = 'ct'
schema = IContentTypeSchema
title = 'Content Type'
description = ''
hiddenFields = ('name', 'schemas', 'behaviors')
def __call__(self, **kw):
pass
class ContentTypesConfiglet(object):
interface.implements(IContentTypesConfiglet)
__factories__ = {'ct': ContentTypeFactory()}
def create(self, data):
item = storage.insertItem(IContentType)
ds = IContentTypeSchema(item)
for key, val in data.items():
setattr(ds, key, val)
return LocationProxy(ds, self, item.oid)
@property
def schema(self):
return storage.getSchema(IContentTypeSchema)
def keys(self):
return [item.oid for item in self.schema.query()]
def values(self):
return [LocationProxy(item, self, item.oid)
for item in self.schema.query()]
def items(self):
return [(item.oid, LocationProxy(item, self, item.oid))
for item in self.schema.query()]
def get(self, name, default=None):
try:
return self[name]
except KeyError:
return default
def __iter__(self):
return iter(self.keys())
def __contains__(self, name):
item = self.schema.query(self.schema.Type.oid==name).first()
return item is not None
def __getitem__(self, name):
item = self.schema.query(self.schema.Type.oid==name).first()
if item is None:
raise KeyError(name)
return LocationProxy(item, self, item.oid)
def __delitem__(self, name):
pass
controlpanel.registerConfiglet(
'system.contenttypes', IContentTypesConfiglet,
klass = ContentTypesConfiglet,
title = 'Content types',
description = 'Content types configuration.')
| [
"fafhrd91@gmail.com"
] | fafhrd91@gmail.com |
09d50602b114b6292306756519e9a360bc5f39c5 | f146cef3f2172275c8d7f526dab92951fa50eb2c | /COURSE/WEEK3/venv/bin/pip3.7 | 07460d9012ea92582a238b551da4e8f199055ce4 | [] | no_license | mehranj73/Bootcamp | fed04d3858d6d0bc1cdad94e1f05bd4f7a47c0ec | bd575cd02329ad1ce21b05350380dfbf17cbdd89 | refs/heads/master | 2023-02-09T06:50:00.590751 | 2019-08-22T18:56:02 | 2019-08-22T18:56:02 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 429 | 7 | #!/Users/raquel/Desktop/PROPULSION_codeBootcamp/COURSE/WEEK3/venv/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==19.0.3','console_scripts','pip3.7'
__requires__ = 'pip==19.0.3'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('pip==19.0.3', 'console_scripts', 'pip3.7')()
)
| [
"rachovsky@gmail.com"
] | rachovsky@gmail.com |
d5d4aca03e510abe504a40d1c9b8b8dee7aef281 | b685036280331fa50fcd87f269521342ec1b437b | /src/data_mining_demo/py_practice_Data_Analy_Min/chapter/logistic_regression.py | 49743f425ea8374ffe15f868b29e5463e45483eb | [] | no_license | chenqing666/myML_DM_Test | f875cb5b2a92e81bc3de2a0070c0185b7eacac89 | 5ac38f7872d94ca7cedd4f5057bb93732b5edbad | refs/heads/master | 2022-02-26T01:52:06.293025 | 2019-09-20T06:35:25 | 2019-09-20T06:35:25 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 696 | py |
import pandas as pd
from pandas import DataFrame as df
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
dataFile = r'F:\pycharm_workspace\myML_DM_Test\resource\python_practice_Data_Analy_Min\chapter5\chapter5\demo\data\bankloan.xls'
data = pd.read_excel(dataFile)
df_data = df(data)
print(data)
print("DF: \n" ,df_data)
from sklearn.linear_model import LogisticRegression as LR
from sklearn.linear_model import RandomizedLogisticRegression as RLR
x = data.iloc[:,:8].as_matrix()
y = data.iloc[:, 8].as_matrix()
print("X \n", x)
print("Y \n", y)
rlr = RLR() #建立随机逻辑回归模型,筛选变量
rlr.fit(x, y) #训练模型
| [
"976185561@qq.com"
] | 976185561@qq.com |
b0c112488dd2ced68cf0f707e038b3e860c7f067 | 1872421a05f2d94ea55b5b7fcb1c1e15c4a240db | /week15/39.py | 9803b3ed813557af1db41d3a10282ae28863087a | [] | no_license | WenHui-Zhou/weekly-code-STCA | 46f360650b964ddfb0e4629b6e23bb0980c6fb5f | 3f00d4e400de5f688d6d7e51311da20cfe4be293 | refs/heads/master | 2020-12-03T05:51:02.426268 | 2020-07-22T15:45:37 | 2020-07-22T15:45:37 | 231,219,669 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 753 | py | class Solution(object):
def combinationSum(self, candidates, target):
"""
:type candidates: List[int]
:type target: int
:rtype: List[List[int]]
"""
if candidates == []:
return []
ans = []
def dfs(candidates,target,res,pos):
if sum(res) == target and sorted(res) not in ans:
ans.append(sorted(res[::]))
return
for i in range(pos,len(candidates)):
if sum(res) + candidates[i] > target:
return
res.append(candidates[i])
                dfs(candidates,target,res,i)  # continue from i so the same element may be reused without revisiting earlier ones
res.pop()
dfs(sorted(candidates),target,[],0)
return ans
| [
"765647930@qq.com"
] | 765647930@qq.com |
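The backtracking above re-sorts every candidate combination to filter duplicates. A common streamlining of the same idea — shown here as an independent sketch, not LeetCode's reference solution — passes the loop index `i` down instead of the original `pos`, so combinations are generated in non-decreasing order and never duplicated in the first place:

```python
def combination_sum(candidates, target):
    # sort once so the loop can stop early when a candidate is too large
    candidates = sorted(candidates)
    ans = []

    def dfs(pos, res, total):
        if total == target:
            ans.append(res[:])  # record a copy of the finished combination
            return
        for i in range(pos, len(candidates)):
            if total + candidates[i] > target:
                return  # everything after i is at least as large
            res.append(candidates[i])
            dfs(i, res, total + candidates[i])  # i (not i + 1): reuse allowed
            res.pop()

    dfs(0, [], 0)
    return ans
```

For instance, `combination_sum([2, 3, 6, 7], 7)` yields `[[2, 2, 3], [7]]`.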
e294028960692090b2e8a324a787742eeaa12ffc | 552bc626603a1757cf7836401cff5f0332a91504 | /django/doit_django_AtoZ/django-tutorial/mysite/blog/migrations/0002_auto_20210929_1741.py | a357566c4e2f63c67a937eac4567daf7265d9c15 | [] | no_license | anifilm/webapp | 85f3d0aae34f46917b3c9fdf8087ec8da5303df1 | 7ef1a9a8c0dccc125a8c21b22db7db4b9d5c0cda | refs/heads/master | 2023-08-29T18:33:00.323248 | 2023-08-26T07:42:39 | 2023-08-26T07:42:39 | 186,593,754 | 1 | 0 | null | 2023-04-21T12:19:59 | 2019-05-14T09:49:56 | JavaScript | UTF-8 | Python | false | false | 741 | py | # Generated by Django 3.1.1 on 2021-09-29 17:41
from django.db import migrations, models
import django.utils.timezone
class Migration(migrations.Migration):
dependencies = [
('blog', '0001_initial'),
]
operations = [
migrations.RemoveField(
model_name='post',
name='create_at',
),
migrations.AddField(
model_name='post',
name='created_at',
field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now),
preserve_default=False,
),
migrations.AddField(
model_name='post',
name='updated_at',
field=models.DateTimeField(auto_now=True),
),
]
| [
"anifilm02@gmail.com"
] | anifilm02@gmail.com |
bb8b706831a8ae54d13e029897a3905c48afb61e | 6f1034b17b49f373a41ecf3a5a8923fb4948992b | /pychron/media_storage/tasks/plugin.py | 460cabacf2ecaab18ef02ea6c3020bba25a4555d | [
"Apache-2.0"
] | permissive | NMGRL/pychron | a6ec1854488e74eb5d3ff53eee8537ecf98a6e2f | 8cfc8085393ace2aee6b98d36bfd6fba0bcb41c6 | refs/heads/main | 2023-08-30T07:00:34.121528 | 2023-06-12T17:43:25 | 2023-06-12T17:43:25 | 14,438,041 | 38 | 28 | Apache-2.0 | 2023-08-09T22:47:17 | 2013-11-15T23:46:10 | Python | UTF-8 | Python | false | false | 2,026 | py | # ===============================================================================
# Copyright 2016 ross
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ===============================================================================
from __future__ import absolute_import
from envisage.ui.tasks.task_factory import TaskFactory
from pychron.envisage.tasks.base_task_plugin import BaseTaskPlugin
from pychron.media_storage.manager import MediaStorageManager
from pychron.media_storage.tasks.preferences import MediaStoragePreferencesPane
from pychron.media_storage.tasks.task import MediaStorageTask
class MediaStoragePlugin(BaseTaskPlugin):
name = "Media Storage"
id = "pychron.media_storage.plugin"
def _media_storage_factory(self):
ms = MediaStorageTask(application=self.application)
return ms
def _media_storage_manager_factory(self):
msm = MediaStorageManager()
return msm
def _service_offers_default(self):
so = self.service_offer_factory(
protocol=MediaStorageManager, factory=self._media_storage_manager_factory
)
return [so]
def _preferences_panes_default(self):
return [MediaStoragePreferencesPane]
def _tasks_default(self):
return [
TaskFactory(
id="pychron.media_storage.task_factory",
include_view_menu=False,
factory=self._media_storage_factory,
)
]
# ============= EOF =============================================
| [
"jirhiker@gmail.com"
] | jirhiker@gmail.com |
9c1160ea895d046109b21aa014b034b4e2407294 | bbebd95ce007baf3be20ca14fc8fe696c4c7dd96 | /L8 Python Scripts/Codec_b995/__init__.py | 063c2539237aec3e8aef0e34ca3b57eed8f748a8 | [] | no_license | spiralune/monomodular | 70eeabcb5a9921b7bab30d8c5fd45ca237cde9c3 | 862bb02067c2fbc9816795904b2537a9b5e1c7b6 | refs/heads/master | 2020-12-11T04:06:25.390424 | 2014-07-09T19:45:41 | 2014-07-09T19:45:41 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 161 | py | # http://aumhaa.blogspot.com
from Codec import Codec
def create_instance(c_instance):
""" Creates and returns the Codec script """
return Codec(c_instance)
| [
"aumhaa@gmail.com"
] | aumhaa@gmail.com |
96cc28b5253d987d8c3a3d749c5709b0260a02b4 | f0da5036820e92157a9108b4b6793e757a81861c | /experiments/multiple_instance_classification/mnist_bagged.py | d5e47255001234a6fce9b864305f4e3e35a2a525 | [
"MIT"
] | permissive | BioImageInformatics/tfmodels | cb1e136407f0f148194210b1449b26c126fe5a07 | 7219eac59ba82cfa28e6af5e17f313dcc5ddd65e | refs/heads/master | 2022-01-26T16:09:32.630262 | 2019-04-25T05:09:33 | 2019-04-25T05:09:33 | 115,466,269 | 4 | 3 | null | 2018-02-06T17:46:17 | 2017-12-27T00:55:40 | Python | UTF-8 | Python | false | false | 3,615 | py | import tensorflow as tf
import numpy as np
import cv2
import sys, datetime, os, time
from tensorflow.examples.tutorials.mnist import input_data
sys.path.insert(0, '..')
import tfmodels
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# config.log_device_placement = True
# mnist_data_path = '/Users/nathaning/Envs/tensorflow/MNIST_data'
mnist_data_path = '/home/nathan/envs/tensorflow/MNIST_data'
mnist_data = input_data.read_data_sets(mnist_data_path)
"""
Bagged MNIST is a toy dataset using the MNIST digits data.
We have images of digits: {0,1,2,3,4,5,6,7,8,9}
First we choose one, or some combination, to be the "positive" class.
For training we draw **sets** of digits. Each set is labelled "positive" if
it contains a "positive" class element.
e.g. if positive = 0
x = [1,2,3,5,2,3,0], y = 1
x = [1,2,3,5,2,3,9], y = 0
Training proceeds to predict positive bags.
What we recover is a classifier that maximizes the expected value of
p(y=1 | x=positive) **(prove it)
without explicitly stating which element is the "positive" one.
"""
## ------------------ Hyperparameters --------------------- ##
epochs = 20
iterations = 500
snapshot_epochs = 5
step_start = 0
batch_size = 32
samples = 256
positive_class = [0,1]
basedir = 'multi_mnist_1'
log_dir, save_dir, debug_dir, infer_dir = tfmodels.make_experiment(
basedir)
snapshot_path = ''
training_dataset = tfmodels.BaggedMNIST(
as_images = True,
batch_size = batch_size,
samples = samples,
positive_class = positive_class,
positive_rate = 0.1,
data = mnist_data.train,
mode = 'Train'
)
testing_dataset = tfmodels.BaggedMNIST(
as_images = True,
batch_size = batch_size,
samples = samples,
positive_class = positive_class,
positive_rate = 0.1,
data = mnist_data.test,
mode = 'Test'
)
with tf.Session(config=config) as sess:
model = tfmodels.ImageBagModel(
dataset = training_dataset,
encoder_type = 'CONV',
log_dir = log_dir,
save_dir = save_dir,
sess = sess,
x_dim = [28, 28, 1],
summarize_grads = True,
summarize_vars = True,
)
model.print_info()
print 'Starting training'
for epoch in xrange(1, epochs):
for _ in xrange(iterations):
model.train_step()
## Test bags
accuracy = model.test(testing_dataset)
## Test encoder network to discriminate individual examples
test_x, test_y = testing_dataset.normal_batch(batch_size=128)
test_y_hat = sess.run(model.y_individual, feed_dict={
model.x_individual: test_x })
i_accuracy = np.mean(np.argmax(test_y,axis=1) == np.argmax(test_y_hat,axis=1))
print 'Epoch [{:05d}]; x_i acc: [{:03.3f}]; bag acc: [{:03.3f}]'.format(
epoch, i_accuracy, accuracy)
if epoch % snapshot_epochs == 0:
model.snapshot()
## Save positive and negative classified examples:
print 'Printing test x_i'
test_x, test_y = testing_dataset.normal_batch(batch_size=128)
test_y_hat = sess.run(model.y_individual, feed_dict={
model.x_individual: test_x })
test_y_argmax = np.argmax(test_y_hat, axis=1)
for idx, y in enumerate(test_y_argmax):
img = test_x[idx,:].reshape(28,28)
if y == 1:
filename = debug_dir+'/pos_{:03d}.jpg'.format(idx)
else:
filename = debug_dir+'/neg_{:03d}.jpg'.format(idx)
cv2.imwrite(filename, img*255)
| [
"ing.nathany@gmail.com"
] | ing.nathany@gmail.com |
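The docstring in the script above defines a bag as positive iff it contains at least one element of the positive class. That labelling rule can be illustrated on its own, without the MNIST pipeline — the function below is a toy sketch with assumed parameter names, not part of the `tfmodels.BaggedMNIST` API:

```python
import random

def make_bag(digits, positive_class, samples, positive_rate, rng):
    # draw a bag of `samples` digits at random
    bag = [rng.choice(digits) for _ in range(samples)]
    # with probability `positive_rate`, force one positive-class digit in
    if rng.random() < positive_rate:
        bag[rng.randrange(samples)] = rng.choice(positive_class)
    # the label says only *whether* a positive digit is present, not which one
    label = int(any(d in positive_class for d in bag))
    return bag, label
```

A classifier trained only on such bag labels has to discover which individual digits drive the positive label — exactly the multiple-instance setup this script tests.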
720b53038cba2786b0ac0b633b11a4cfcec040d9 | a59d1faced9fe7348ca7143d2a8643e0ebad2132 | /pyvisdk/do/missing_windows_cust_resources.py | 31a2c82207181901973a117c45f03922051b6f10 | [
"MIT"
] | permissive | Infinidat/pyvisdk | c55d0e363131a8f35d2b0e6faa3294c191dba964 | f2f4e5f50da16f659ccc1d84b6a00f397fa997f8 | refs/heads/master | 2023-05-27T08:19:12.439645 | 2014-07-20T11:49:16 | 2014-07-20T11:49:16 | 4,072,898 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,036 | py |
import logging
from pyvisdk.exceptions import InvalidArgumentError
########################################
# Automatically generated, do not edit.
########################################
log = logging.getLogger(__name__)
def MissingWindowsCustResources(vim, *args, **kwargs):
'''A usable sysprep file was not found on the server.'''
obj = vim.client.factory.create('{urn:vim25}MissingWindowsCustResources')
# do some validation checking...
if (len(args) + len(kwargs)) < 4:
        raise IndexError('Expected at least 4 arguments, got: %d' % (len(args) + len(kwargs)))
required = [ 'dynamicProperty', 'dynamicType', 'faultCause', 'faultMessage' ]
optional = [ ]
for name, arg in zip(required+optional, args):
setattr(obj, name, arg)
for name, value in kwargs.items():
if name in required + optional:
setattr(obj, name, value)
else:
raise InvalidArgumentError("Invalid argument: %s. Expected one of %s" % (name, ", ".join(required + optional)))
return obj
| [
"guy@rzn.co.il"
] | guy@rzn.co.il |
257b92c3c936ecdd1cc044e01004b323081a71f2 | 55b6bd57378204b8b567a77126312b6ca2ada693 | /src/old/name_print.py | 2f20c3c5abd598dbf13d3947aa90085c6fc71582 | [
"MIT"
] | permissive | capybaralet/TabulaRL | 9d232513ad9d1f33c2a3bafb8a5b8ac5974afabf | f9a66a85930e85475982404f389ba4651435f780 | refs/heads/master | 2020-04-05T18:59:49.427591 | 2016-12-02T03:39:22 | 2016-12-02T03:39:22 | 64,227,703 | 0 | 0 | null | 2016-07-26T14:26:18 | 2016-07-26T14:26:17 | null | UTF-8 | Python | false | false | 433 | py |
from inspect import currentframe
def name_print(name):
frame = currentframe().f_back
locs, globs = frame.f_locals, frame.f_globals
value = locs[name] if name in locs else globs.get(name, "???")
print name, "=", value
del frame
return name + "=" + str(value)
n = 42
name_print("n")
def make_save_str(variables, base_str=''):
    return base_str + '____' + '__'.join(name_print(var) for var in variables)
| [
"davidscottkrueger@gmail.com"
] | davidscottkrueger@gmail.com |
a98222928c83108ef2681b39d6dbaae2db52b5a1 | 9d39f6ec24ea355ee82adfd4487453172953dd37 | /tao_detection_release/configs/baselines/faster_rcnn_r50_fpn_1x_lvis.py | 8c51e80f770456d001ba404b19629f8b5d11c559 | [
"Apache-2.0"
] | permissive | feiaxyt/Winner_ECCV20_TAO | d69c0efdb1b09708c5d95c3f0a38460dedd0e65f | dc36c2cd589b096d27f60ed6f8c56941b750a0f9 | refs/heads/main | 2023-03-19T14:17:36.867803 | 2021-03-16T14:04:31 | 2021-03-16T14:04:31 | 334,864,331 | 82 | 6 | null | null | null | null | UTF-8 | Python | false | false | 5,551 | py | # model settings
model = dict(
type='FasterRCNN',
pretrained='torchvision://resnet50',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[4, 8, 16, 32, 64],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='SharedFCBBoxHead',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=1231,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2],
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)))
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False))
test_cfg = dict(
rpn=dict(
nms_across_levels=False,
nms_pre=1000,
nms_post=1000,
max_num=1000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
score_thr=0.0,
nms=dict(type='nms', iou_thr=0.5),
max_per_img=300)
# soft-nms is also supported for rcnn testing
# e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
# dataset settings
dataset_type = 'LvisDataset'
data_root = 'data/lvis/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train=dict(
type=dataset_type,
ann_file=data_root + 'lvis_v0.5_train.json',
img_prefix=data_root + 'train2017/',
pipeline=train_pipeline),
val=dict(
type=dataset_type,
ann_file=data_root + 'lvis_v0.5_val.json',
img_prefix=data_root + 'val2017/',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
ann_file=data_root + 'lvis_v0.5_val.json',
img_prefix=data_root + 'val2017/',
pipeline=test_pipeline))
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=50,
hooks=[
dict(type='TextLoggerHook'),
# dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_r50_fpn_1x_lvis_newbgs'
load_from = './data/download_models/faster_rcnn_r50_fpn_2x_20181010-443129e1.pth'
resume_from = None
workflow = [('train', 1)]
| [
"feiaxyt@163.com"
] | feiaxyt@163.com |
3ba0dee01e64c8a85967aac018c42ecadf3e1910 | b5937928a48340569f673e237e42f32ab62cfd15 | /src/fizzBuzz/fizz.py | 4590f891e704a7530874bd6246b724d08af6d483 | [
"CC0-1.0"
] | permissive | rajitbanerjee/leetcode | 79731de57ab4b0edd765b3cbb4aac459973fb22d | 720fcdd88d371e2d6592ceec8370a6760a77bb89 | refs/heads/master | 2021-06-13T11:19:03.905797 | 2021-06-02T14:40:08 | 2021-06-02T14:40:08 | 191,103,205 | 2 | 1 | null | 2020-02-23T23:41:45 | 2019-06-10T05:34:46 | Java | UTF-8 | Python | false | false | 470 | py | class Solution:
def fizzBuzz(self, n: int) -> list:
res = []
maps = {3: "Fizz", 5: "Buzz"}
for i in range(1, n + 1):
ans = ""
for k, v in maps.items():
if i % k == 0:
ans += v
if not ans:
ans = str(i)
res.append(ans)
return res
if __name__ == '__main__':
n = int(input("Input: "))
print(f"Output: {Solution().fizzBuzz(n)}")
| [
"rajit.banerjee@ucdconnect.ie"
] | rajit.banerjee@ucdconnect.ie |
e3466618405b07b76db94ddf2cee72207fcba98b | fc29ccdcf9983a54ae2bbcba3c994a77282ae52e | /Leetcode_By_Topic/monostack-1019.py | a675e373fc547fbc260855f6322e41f078ce5894 | [] | no_license | linnndachen/coding-practice | d0267b197d9789ab4bcfc9eec5fb09b14c24f882 | 5e77c3d7a0632882d16dd064f0aad2667237ef37 | refs/heads/master | 2023-09-03T19:26:25.545006 | 2021-10-16T16:29:50 | 2021-10-16T16:29:50 | 299,794,608 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 514 | py | # Definition for singly-linked list.
from typing import List
class ListNode:
def __init__(self, val=0, next=None):
self.val = val
self.next = next
class Solution:
def nextLargerNodes(self, head: ListNode) -> List[int]:
stack, res = [], []
while head:
while stack and head.val > stack[-1][1]:
res[stack.pop()[0]] = head.val
stack.append([len(res), head.val])
res.append(0)
head = head.next
return res | [
"lchen.msc2019@ivey.ca"
] | lchen.msc2019@ivey.ca |
d8d15498018b75c3f13b8660ec4406e66a494a08 | e67d4123c10d464c91e70210d58bd4900164645b | /74/a/a.py | d98927c5a5988abced29cca18737c8d7b40d5e41 | [] | no_license | pkaleta/Codeforces | 422188d4483fbf8dd99d6b0654c8e464fb143560 | fb011f616f8db366c6aba80ff2be01692611ef81 | refs/heads/master | 2021-01-19T06:42:30.162981 | 2011-11-26T01:29:30 | 2011-11-26T01:29:30 | 2,853,430 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 215 | py | import sys
from math import ceil
r, g, b = map(int, sys.stdin.readline().split())
tr = ceil(r/2.0)
tg = ceil(g/2.0)
tb = ceil(b/2.0)
xr = 3*(tr-1)
xg = 3*(tg-1)+1
xb = 3*(tb-1)+2
print int(30+max([xr, xg, xb]))
| [
"piotrek.kaleta@gmail.com"
] | piotrek.kaleta@gmail.com |
03f04d8b9696ae2e1e9bae0d641202e81d966b84 | 4d2475135f5fc9cea73572b16f59bfdc7232e407 | /prob162_find_peak_element.py | e4b435207bb45f8bbec0eaad32c965c283bb9bba | [] | no_license | Hu-Wenchao/leetcode | 5fa0ae474aadaba372756d234bc5ec397c8dba50 | 31b2b4dc1e5c3b1c53b333fe30b98ed04b0bdacc | refs/heads/master | 2021-06-24T04:57:45.340001 | 2017-06-17T02:33:09 | 2017-06-17T02:33:09 | 45,328,724 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 821 | py | """
A peak element is an element that is greater than its neighbors.
Given an input array where num[i] ≠ num[i+1], find a peak
element and return its index.
The array may contain multiple peaks, in that case return t
he index to any one of the peaks is fine.
You may imagine that num[-1] = num[n] = -∞.
For example, in array [1, 2, 3, 1], 3 is a peak element and
your function should return the index number 2.
"""
class Solution(object):
def findPeakElement(self, nums):
"""
:type nums: List[int]
:rtype: int
"""
# return nums.index(max(nums))
if not nums:
return
nums = [float('-inf')] + nums + [float('-inf')]
for i in range(1, len(nums)-1):
if nums[i] > nums[i-1] and nums[i] > nums[i+1]:
return i-1
| [
"huwcbill@gmail.com"
] | huwcbill@gmail.com |
bdd9da0ae5d55149ec4bca6d4f84ad3a3e8343c7 | f8c3c677ba536fbf5a37ac4343c1f3f3acd4d9b6 | /ICA_SDK/test/test_file_archive_request.py | ada9da45159a7221123309af28d2879de583a1f9 | [] | no_license | jsialar/integrated_IAP_SDK | 5e6999b0a9beabe4dfc4f2b6c8b0f45b1b2f33eb | c9ff7685ef0a27dc4af512adcff914f55ead0edd | refs/heads/main | 2023-08-25T04:16:27.219027 | 2021-10-26T16:06:09 | 2021-10-26T16:06:09 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,480 | py | # coding: utf-8
"""
IAP Services
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
The version of the OpenAPI document: v1
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import unittest
import datetime
import ICA_SDK
from ICA_SDK.models.file_archive_request import FileArchiveRequest # noqa: E501
from ICA_SDK.rest import ApiException
class TestFileArchiveRequest(unittest.TestCase):
"""FileArchiveRequest unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def make_instance(self, include_optional):
"""Test FileArchiveRequest
include_option is a boolean, when False only required
params are included, when True both required and
optional params are included """
# model = ICA_SDK.models.file_archive_request.FileArchiveRequest() # noqa: E501
if include_optional :
return FileArchiveRequest(
storage_tier = 'Archive'
)
else :
return FileArchiveRequest(
storage_tier = 'Archive',
)
def testFileArchiveRequest(self):
"""Test FileArchiveRequest"""
inst_req_only = self.make_instance(include_optional=False)
inst_req_and_optional = self.make_instance(include_optional=True)
if __name__ == '__main__':
unittest.main()
| [
"siajunren@gmail.com"
] | siajunren@gmail.com |
7fb5280581bbe99984f5da4c18fcf5bd0e67323e | 99697559d046cdd04dd9068bd518e4da4177aaa2 | /Finish/M766_Toeplitz_Matrix.py | 8f51a3754259e79675839738cce3426faef8740a | [] | no_license | Azurisky/Leetcode | 3e3621ef15f2774cfdfac8c3018e2e4701760c3b | 8fa215fb0d5b2e8f6a863756c874d0bdb2cffa04 | refs/heads/master | 2020-03-18T22:46:35.780864 | 2018-10-07T05:45:30 | 2018-10-07T05:45:30 | 135,364,168 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 847 | py | class Solution(object):
def isToeplitzMatrix(self, matrix):
"""
:type matrix: List[List[int]]
:rtype: bool
"""
## Faster
for i in range(len(matrix) - 1):
for j in range(len(matrix[0]) - 1):
if matrix[i][j] != matrix[i + 1][j + 1]:
return False
return True
    def isToeplitzMatrixDiagonals(self, matrix):
        ## Follow up: check each diagonal explicitly (moved out of
        ## isToeplitzMatrix, where it sat unreachable after the return)
        for i in range(len(matrix[0]) - 1):
            tmp = matrix[0][i]
            for j in range(1, len(matrix)):
                if i+j < len(matrix[0]) and matrix[j][i+j] != tmp:
                    return False
        for i in range(1, len(matrix)-1):
            tmp = matrix[i][0]
            for j in range(i+1, len(matrix)):
                if j-i < len(matrix[0]) and matrix[j][j-i] != tmp:
                    return False
        return True
"andrew0704us@gmail.com"
] | andrew0704us@gmail.com |
338e274803fb43c8022a8219e4aeb718df14faf4 | aa2ae30a88361b4b80ffa28c4d8a54600bbee542 | /Chapter18/hw/libhw/hw_sensors/__init__.py | 9ce4eb64a22fb6569dc367a6cb59f7a97dd23529 | [
"MIT"
] | permissive | PacktPublishing/Deep-Reinforcement-Learning-Hands-On-Second-Edition | 6728fadb38076f6243da3d98b1cf18faf6b287af | d5a421d63c6d3ebbdfa54537fa5ce485bc2b9220 | refs/heads/master | 2023-07-05T23:08:32.621622 | 2022-01-17T12:18:54 | 2022-01-17T12:18:54 | 195,020,985 | 963 | 491 | MIT | 2023-03-25T01:00:07 | 2019-07-03T09:21:47 | Jupyter Notebook | UTF-8 | Python | false | false | 976 | py | from .. import sensors
from . import lis331dlh
from . import lis3mdl
from . import l3g4200d
SENSOR_CLASSES = (lis331dlh.Lis331DLH, lis3mdl.Lis3MDL, l3g4200d.L3G4200D)
def scan(i2c):
"""
Return list of detected sensors on the bus. Default addresses are used
:return: list of Sensors instances
"""
res = []
for c in SENSOR_CLASSES:
try:
s = c(i2c)
res.append(s)
except sensors.SensorInitError:
pass
return res
def full_scan(i2c):
"""
Perform full scan of the bus -- try every class for every device on the bus, which is longer, but
detects compatible devices on non-standard addresses.
:param i2c:
:return: list of Sensors instances
"""
res = []
for dev in i2c.scan():
for c in SENSOR_CLASSES:
try:
s = c(i2c, dev)
res.append(s)
except sensors.SensorInitError:
pass
return res
| [
"max.lapan@gmail.com"
] | max.lapan@gmail.com |
ec8198e56247f8e67b4d65c6d4e9b3625bb7cf5d | 2b912b088683e2d4d1fa51ebf61c4e53c5058847 | /.PyCharmCE2017.1/system/python_stubs/-1247971765/gi/_gi/UnresolvedInfo.py | 919c3d92c5767a92742a1387385741c4afd8e492 | [] | no_license | ChiefKeith/pycharmprojects | 1e1da8288d85a84a03678d2cae09df38ddb2f179 | 67ddcc81c289eebcfd0241d1435b28cd22a1b9e0 | refs/heads/master | 2021-07-13T00:52:19.415429 | 2017-10-08T23:04:39 | 2017-10-08T23:04:39 | 106,216,016 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 425 | py | # encoding: utf-8
# module gi._gi
# from /usr/lib/python3/dist-packages/gi/_gi.cpython-34m-arm-linux-gnueabihf.so
# by generator 1.145
# no doc
# imports
import _gobject as _gobject # <module '_gobject'>
import _glib as _glib # <module '_glib'>
import gi as __gi
import gobject as __gobject
class UnresolvedInfo(__gi.BaseInfo):
# no doc
def __init__(self, *args, **kwargs): # real signature unknown
pass
| [
"kmarlin@dtcc.edu"
] | kmarlin@dtcc.edu |
f2f996381348351beda5614a9b20ae7c54071ccd | ea83e60e2be606813005081a9f1b9516de018c7d | /language/search_agents/muzero/grammar_lib.py | 7d8933135d028c88c787b51916b246de9f89b901 | [
"Apache-2.0",
"LicenseRef-scancode-generic-cla"
] | permissive | optimopium/language | 1562a1f150cf4374cf8d2e6a0b7ab4a44c5b8961 | bcc90d312aa355f507ed128e39b7f6ea4b709537 | refs/heads/master | 2022-04-03T03:51:28.831387 | 2022-03-16T21:41:17 | 2022-03-16T22:50:39 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,709 | py | # coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Defines grammars for the search agent."""
import enum
from typing import Collection, List
from absl import logging
import dataclasses
from language.search_agents.muzero import common_flags
from language.search_agents.muzero import state_tree
class GrammarType(enum.Enum):
"""Support grammar types."""
# Relevance feedback with both + action and - action at each step, e.g.
# +(title:<term>) -(contents:<term>).
BERT = 1
# Only + action OR - action at each step, e.g. +(title:<term>).
ONE_TERM_AT_A_TIME = 2
# Only add terms at each step, e.g. <term>.
# This is equivalent to using an OR operator.
ADD_TERM_ONLY = 3
# Pick either + action, - action, OR term only at each step.
# This combines the ONE_TERM_AT_A_TIME and ADD_TERM_ONLY grammars above.
ONE_TERM_AT_A_TIME_WITH_ADD_TERM_ONLY = 4
@dataclasses.dataclass
class GrammarConfig:
"""Configures a grammar to govern reformulations."""
grammar_type: GrammarType
split_vocabulary_by_type: bool
def grammar_config_from_flags() -> GrammarConfig:
return GrammarConfig(
grammar_type={
'bert':
GrammarType.BERT,
'one_term_at_a_time':
GrammarType.ONE_TERM_AT_A_TIME,
'add_term_only':
GrammarType.ADD_TERM_ONLY,
'one_term_at_a_time_with_add_term_only':
GrammarType.ONE_TERM_AT_A_TIME_WITH_ADD_TERM_ONLY,
}[common_flags.GRAMMAR_TYPE.value],
split_vocabulary_by_type= \
common_flags.SPLIT_VOCABULARY_BY_TYPE.value == 1)
def get_term_types():
"""Returns the vocabulary types for a `Word` non-terminal (_W_)."""
term_types = ['_Wq_', '_Wa_', '_Wd_']
if common_flags.USE_DOCUMENT_TITLE.value == 1:
term_types.append('_Wt_')
return term_types
def _make_bert_vocab_productions(vocab: Collection[str]) -> List[str]:
"""Generates productions for the `unconstrained` vocabulary setting.
To ensure that non-initial word-pieces never begin a word, we use a grammar
corresponding to the following rule:
Word --> InitialWordPiece (NonInitialWordPiece)*
We use the following non-terminals:
_W_: corresponds to `Word`
    _Vsh_: pre-terminal corresponding to `NonInitialWordPiece`
_Vw_: pre-terminal corresponding to `InitialWordPiece`
_W-_: helper non-terminal, implementing the generation of
`NonInitialWordPiece`s.
Args:
vocab: Iterable comprising all valid terminals. This spans both wordpieces
and `full` words.
Returns:
A list of productions in textual form which jointly define the "vocabulary"
part of a grammar that expands the _W_(ord) non-terminal to terminal symbols
in `vocab`.
"""
productions = []
productions.append('_W_ -> _Vw_ _W-_')
productions.append('_W_ -> _Vw_')
productions.append('_W-_ -> _Vsh_')
productions.append('_W-_ -> _Vsh_ _W-_')
for word in vocab:
word = state_tree.NQStateTree.clean_escape_characters(word)
if word.startswith('##'):
productions.append("_Vsh_ -> '{}'".format(word))
else:
# No point in having the agent generate these "fake" tokens.
if word.startswith('[unused') or word in ('[pos]', '[neg]', '[contents]',
'[title]', '[UNK]', '[PAD]',
'[SEP]', '[CLS]', '[MASK]'):
continue
productions.append("_Vw_ -> '{}'".format(word))
return productions
def _expand_vocab_type_grammar(grammar_productions: List[str]) -> List[str]:
"""Expands the `Word` non-terminal to question|answer|document subtypes.
Args:
grammar_productions: Current grammar with the basic `Word` non-terminal.
Returns:
Rules where each `Word` non-terminal (_W_) is expanded into question (_Wq_),
answer (_Wa_), document (_Wd_), and document title (_Wt_) subtypes.
"""
productions = []
for production in grammar_productions:
if '_W_' in production:
for term_type in get_term_types():
productions.append(production.replace('_W_', term_type))
else:
productions.append(production)
return productions
def construct_grammar(grammar_config: GrammarConfig,
vocab: Collection[str]) -> state_tree.NQCFG:
"""Builds the grammar according to `grammar_config`."""
productions = []
# Lexical rules.
if grammar_config.grammar_type in (
GrammarType.BERT, GrammarType.ONE_TERM_AT_A_TIME,
GrammarType.ADD_TERM_ONLY,
GrammarType.ONE_TERM_AT_A_TIME_WITH_ADD_TERM_ONLY):
productions.extend(_make_bert_vocab_productions(vocab=vocab))
# "Internal" rules.
if grammar_config.grammar_type == GrammarType.BERT:
for field_add in ('[title]', '[contents]'):
for field_sub in ('[title]', '[contents]'):
productions.append(
f"_Q_ -> '[pos]' '{field_add}' _W_ '[neg]' '{field_sub}' _W_ _Q_")
elif grammar_config.grammar_type == GrammarType.ONE_TERM_AT_A_TIME:
for field in ('[title]', '[contents]'):
productions.append(f"_Q_ -> '[pos]' '{field}' _W_ _Q_")
productions.append(f"_Q_ -> '[neg]' '{field}' _W_ _Q_")
elif grammar_config.grammar_type == GrammarType.ADD_TERM_ONLY:
productions.append("_Q_ -> '[or]' _W_ _Q_")
elif grammar_config.grammar_type == GrammarType.ONE_TERM_AT_A_TIME_WITH_ADD_TERM_ONLY:
for field in ('[title]', '[contents]'):
productions.append(f"_Q_ -> '[pos]' '{field}' _W_ _Q_")
productions.append(f"_Q_ -> '[neg]' '{field}' _W_ _Q_")
productions.append("_Q_ -> '[or]' _W_ _Q_")
else:
raise NotImplementedError(
'The grammar vocabulary type {} is not implemented.'.format(
grammar_config.grammar_type))
# Always add the stop action.
productions.append("_Q_ -> '[stop]'")
if grammar_config.split_vocabulary_by_type:
productions = _expand_vocab_type_grammar(grammar_productions=productions)
grammar_str = ' \n '.join(productions)
grammar = state_tree.NQCFG(grammar_str)
grammar.set_start(grammar.productions()[-1].lhs())
logging.info('Grammar: %s', grammar.productions())
return grammar
| [
"kentonl@google.com"
] | kentonl@google.com |
445e185baa345a6b732c120ea563a9ff30578e12 | c59194e1908bac7fc0dd4d80bef49c6afd9f91fb | /ProjectEuler/1_MultiplesOf3and5.py | 3f88c890431d0d60d3863486d6636ce18af63fc8 | [] | no_license | Bharadwaja92/CompetitiveCoding | 26e9ae81f5b62f4992ce8171b2a46597353f0c82 | d0505f28fd6e93b2f4ef23ad02c671777a3caeda | refs/heads/master | 2023-01-23T03:47:54.075433 | 2023-01-19T12:28:07 | 2023-01-19T12:28:07 | 208,804,519 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 503 | py | """ https://projecteuler.net/problem=1 """
"""
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9.
The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
"""
total = 0
for i in range(1000):
    if i % 3 == 0 or i % 5 == 0:
        total += i
print(total)
""" EFFICIENT SOLUTION """
limit = 1000
def getSum(n):
    p = (limit - 1) // n  # count multiples of n strictly below limit
return n*(p*(p+1)) // 2
print(getSum(3) + getSum(5) - getSum(15))
| [
"saibharadwaja92@gmail.com"
] | saibharadwaja92@gmail.com |
d79aa806b9446ebf2fa3fa83ce3a7d024c10a6c0 | 35250c1ccc3a1e2ef160f1dab088c9abe0381f9f | /2020/0412/11728.py | f3297ad4220ba4673480121add1e424b531b3bef | [] | no_license | entrekid/daily_algorithm | 838ab50bd35c1bb5efd8848b9696c848473f17ad | a6df9784cec95148b6c91d804600c4ed75f33f3e | refs/heads/master | 2023-02-07T11:21:58.816085 | 2021-01-02T17:58:38 | 2021-01-02T17:58:38 | 252,633,404 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 190 | py | import sys
input = sys.stdin.readline
N, M = map(int, input().split())
A = list(map(int, input().rstrip().split()))
B = list(map(int, input().rstrip().split()))
C = A + B
C.sort()
print(*C) | [
"dat.sci.seol@gmail.com"
] | dat.sci.seol@gmail.com |
ff5e1e69b045c6d4eae542a9f77d00034e116cb5 | d6a8d7a63fc21d3b0a9c89966618ff30c6f32581 | /mathModel/Instance/guosai/B - 副本/MachineLearningNote-master/LDA/sklearn_LDA.py | 8b83738f9c14553fde5b8a72c0df34d3fdfc4ea3 | [] | no_license | WangSura/PythonLearn | 173bc43d6c2462c52292238dac1c5250ebeeb978 | 011bbd6b322b51dd811864165512d14ae77f43e5 | refs/heads/master | 2023-08-29T15:45:48.286284 | 2021-11-17T12:00:53 | 2021-11-17T12:00:53 | 357,208,223 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 344 | py | # _*_coding:utf-8_*_
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.linear_model import LogisticRegression
# X_train_std / y_train and the plot_decision_region helper are assumed to
# be defined elsewhere in the project (this is a snippet, not a full script).
lda = LDA(n_components=2)
x_train_lda = lda.fit_transform(X_train_std, y_train)
lr = LogisticRegression()
lr = lr.fit(x_train_lda, y_train)  # fit on the 2-D LDA features, not the raw ones
plot_decision_region(x_train_lda, y_train, lr)
"2739551399@qq.com"
] | 2739551399@qq.com |
fd90aa5f34aebd70f9ca3b2932b561eda70c1e13 | 61d08e23fbb62e16f7bd9d43673b1cf4e0558c37 | /miraPipeline/pipeline/preflight/check_options/maya/check_hair_yeti_same_tex_name.py | 8fe9e30a0ed0c2d44013f688dc24fd6109f4db62 | [] | no_license | jonntd/mira | 1a4b1f17a71cfefd20c96e0384af2d1fdff813e8 | 270f55ef5d4fecca7368887f489310f5e5094a92 | refs/heads/master | 2021-08-31T12:08:14.795480 | 2017-12-21T08:02:06 | 2017-12-21T08:02:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,431 | py | # -*- coding: utf-8 -*-
import os
import maya.cmds as mc
from miraLibs.mayaLibs import Yeti
from BaseCheck import BaseCheck
class Check(BaseCheck):
def __init__(self):
super(Check, self).__init__()
self.yeti_nodes = mc.ls(type="pgYetiMaya")
    def run(self):
        if not self.yeti_nodes:
            self.pass_check(u"No pgYetiMaya nodes exist")
            return
        self.error_list = self.get_same_tex()
        if self.error_list:
            self.fail_check(u"These textures share the same name")
        else:
            self.pass_check(u"No yeti hair textures share the same name")
def get_same_tex(self):
error_list = list()
yt = Yeti.Yeti()
all_textures = yt.get_all_texture_path()
if not all_textures:
return
error_base_names = self.get_error_base_names(all_textures)
if not error_base_names:
return
for tex in all_textures:
if os.path.basename(tex) in error_base_names:
error_list.append(tex)
return error_list
@staticmethod
def get_error_base_names(textures):
error_base_names = list()
base_names = [os.path.basename(tex) for tex in textures]
for base_name in base_names:
count = base_names.count(base_name)
if count > 1:
error_base_names.append(base_name)
return error_base_names
| [
"276575758@qq.com"
] | 276575758@qq.com |
a1fe3edeec3b8c30a7286fe17eeda7cc02b99018 | 1527398fca2fe72b5c24ebd712ffcf4b84c6eb5f | /videogram/wsgi.py | dfa29dfa6d5ffbdf3eac1254021e3ede7c71cbe8 | [] | no_license | ShipraShalini/videogram | e06978d548c9dc5fb93b06c95fc2426e41c8eef4 | 244de4175b66b41b92e8f384cc27358fd8229338 | refs/heads/master | 2021-01-16T09:35:29.659353 | 2020-02-25T18:47:52 | 2020-02-25T18:47:52 | 243,064,584 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 395 | py | """
WSGI config for videogram project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'videogram.settings')
application = get_wsgi_application()
| [
"code.shipra@gmail.com"
] | code.shipra@gmail.com |
08a70c6996ecfc2b346c958ccfab8306df309269 | 163bbb4e0920dedd5941e3edfb2d8706ba75627d | /Code/CodeRecords/2200/60686/288758.py | 768327feb2a94cd9656380bfcd364b5fe89caac5 | [] | no_license | AdamZhouSE/pythonHomework | a25c120b03a158d60aaa9fdc5fb203b1bb377a19 | ffc5606817a666aa6241cfab27364326f5c066ff | refs/heads/master | 2022-11-24T08:05:22.122011 | 2020-07-28T16:21:24 | 2020-07-28T16:21:24 | 259,576,640 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 388 | py | string = input()
string_num = input()
list_num = []
res_str = []
for i in range(len(string_num)):
list_num.append(int(string_num[i]))
limit = int(input())
for i in range(1, len(list_num)):
for j in range(len(list_num) - i + 1):
if list_num[j:j + i].count(0) <= limit and res_str.count(string[j:j + i]) == 0:
res_str.append(string[j:j + i])
print(len(res_str))
| [
"1069583789@qq.com"
] | 1069583789@qq.com |
e11b8960cbb24e034b116e0cb4b90998b1190c83 | 6fb9a194ec4f9b0f4f3b75331b79468cbce948d2 | /tgintegration/awaitableaction.py | 655ba30d7b8670d48ac54bb94e852d98a42f3ded | [
"MIT"
] | permissive | StrangeTcy/tgintegration | 1718d7860c936b1bccbfabab362d52f4643f2281 | 76d43c98b440ca4ac98c23234fa5c177fd9f8a55 | refs/heads/master | 2022-04-14T05:16:56.958120 | 2020-03-02T21:57:39 | 2020-03-02T21:57:39 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,830 | py | from typing import Callable, Dict, Iterable, Optional
from pyrogram.client.filters.filter import Filter
class AwaitableAction:
"""
Represents an action to be sent by the service while waiting for a response by the peer.
"""
def __init__(
self,
func: Callable,
args: Iterable = None,
kwargs: Dict = None,
filters: Filter = None,
num_expected: int = None,
max_wait: Optional[float] = 20,
min_wait_consecutive: Optional[float] = None,
):
self.func = func
self.args = args or []
self.kwargs = kwargs or {}
self.filters = filters
if num_expected is not None:
if num_expected == 0:
raise ValueError(
"When no response is expected (num_expected = 0), use the normal "
"send_* method without awaiting instead of an AwaitableAction."
)
elif num_expected < 0:
                raise ValueError("Negative expectations make no sense.")
self._num_expected = num_expected
self.consecutive_wait = (
max(0.0, min_wait_consecutive) if min_wait_consecutive else 0
)
self.max_wait = max_wait
@property
def num_expected(self):
return self._num_expected
@num_expected.setter
def num_expected(self, value):
if value is not None:
if not isinstance(value, int) or value < 1:
raise ValueError("`num_expected` must be an int and greater or equal 1")
if value > 1 and not self.consecutive_wait:
raise ValueError(
"If the number of expected messages greater than one, "
"`min_wait_consecutive` must be given."
)
self._num_expected = value
| [
"joscha.goetzer@gmail.com"
] | joscha.goetzer@gmail.com |
a23a17c07859bb73926baad77b1938640bb22bb2 | b464f034de9fb1cd8e8a0b394aec66278cce882d | /lib/browser.py | 782856b44833715e52ba919940afcb5ec9c02583 | [] | no_license | BigRLab/inverted-index-search-engine | 47d1d050559c65c7ad818ffb7e92f2d77aa15310 | d3eed3a11c6c8bb9a2be316bb185e76f10f50a1c | refs/heads/master | 2021-05-30T14:19:05.159017 | 2015-05-15T14:36:47 | 2015-05-15T14:36:47 | 109,635,526 | 0 | 1 | null | 2017-11-06T02:01:49 | 2017-11-06T02:01:49 | null | UTF-8 | Python | false | false | 2,183 | py | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'browser.ui'
#
# Created: Wed Feb 4 12:43:32 2015
# by: PyQt4 UI code generator 4.10.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName(_fromUtf8("MainWindow"))
MainWindow.resize(550, 574)
self.centralwidget = QtGui.QWidget(MainWindow)
self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
self.gridLayout_3 = QtGui.QGridLayout(self.centralwidget)
self.gridLayout_3.setObjectName(_fromUtf8("gridLayout_3"))
self.lineEdit = QtGui.QLineEdit(self.centralwidget)
self.lineEdit.setObjectName(_fromUtf8("lineEdit"))
self.gridLayout_3.addWidget(self.lineEdit, 0, 0, 1, 1)
self.pushButton = QtGui.QPushButton(self.centralwidget)
self.pushButton.setObjectName(_fromUtf8("pushButton"))
self.gridLayout_3.addWidget(self.pushButton, 0, 1, 1, 1)
self.listWidget = QtGui.QListWidget(self.centralwidget)
self.listWidget.setObjectName(_fromUtf8("listWidget"))
self.gridLayout_3.addWidget(self.listWidget, 1, 0, 1, 2)
MainWindow.setCentralWidget(self.centralwidget)
self.statusbar = QtGui.QStatusBar(MainWindow)
self.statusbar.setObjectName(_fromUtf8("statusbar"))
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(_translate("MainWindow", "Search Engine", None))
self.pushButton.setText(_translate("MainWindow", "Search", None))
| [
"maxhalford25@gmail.com"
] | maxhalford25@gmail.com |
76468640a160e797f7efbe0d6fa3379c5131eb0e | 2e60bdaf03181f1479701efebbb495f88615df4c | /nlp/ner/lstm/train.py | 787e3834c55b9a66d80f159c840e66dffc96cd7d | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | whatisnull/tensorflow_nlp | dc67589ee4069f7a71baa1640d796bac3445bb5c | 0ecb1e12bbe1fc3d5a63e68d788547d0ae92aeef | refs/heads/master | 2023-04-23T08:23:55.914154 | 2019-09-15T03:47:55 | 2019-09-15T03:47:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,119 | py | # -*- coding:utf-8 -*-
import tensorflow as tf
import os
from nlp.ner.lstm.dataset import dataset, rawdata
from nlp.ner.lstm import model as lstm_model, bilstm_model
def train_lstm(args):
if not args.data_dir:
raise ValueError("No data files found in 'data_path' folder")
if not os.path.isdir(args.utils_dir):
os.mkdir(args.utils_dir)
if not os.path.isdir(args.train_dir):
os.mkdir(args.train_dir)
raw_data = rawdata.load_data(args.data_dir, args.utils_dir, args.seq_length)
train_word, train_tag, dev_word, dev_tag, vocab_size, tag_size = raw_data
train_dataset = dataset.Dataset(train_word, train_tag)
valid_dataset = dataset.Dataset(dev_word, dev_tag)
args.vocab_size = vocab_size
args.tag_size = tag_size
with tf.Graph().as_default(), tf.Session() as sess:
initializer = tf.random_normal_initializer(-args.init_scale, args.init_scale)
with tf.variable_scope('ner_var_scope', reuse=None, initializer=initializer):
m = lstm_model.NERTagger(is_training=True, config=args)
with tf.variable_scope('ner_var_scope', reuse=True, initializer=initializer):
valid_m = lstm_model.NERTagger(is_training=False, config=args)
sess.run(tf.global_variables_initializer())
for i in range(args.num_epochs):
lr_decay = args.lr_decay ** max(float(i - args.max_epoch), 0.0)
m.assign_lr(sess, args.learning_rate * lr_decay)
print("Epoch: %d Learning rate: %.3f" % (i + 1, sess.run(m.lr)))
train_perplexity = lstm_model.run(sess, m, train_dataset, m.train_op,
ner_train_dir=args.train_dir, epoch=i)
print("Epoch: %d Train Perplexity: %.3f" % (i + 1, train_perplexity))
valid_perplexity = lstm_model.run(sess, valid_m, valid_dataset, tf.no_op(),
ner_train_dir=args.train_dir, epoch=i)
print("Epoch: %d Valid Perplexity: %.3f" % (i + 1, valid_perplexity))
train_dataset.reset()
valid_dataset.reset()
def train_bilstm(args):
if not args.data_dir:
raise ValueError("No data files found in 'data_path' folder")
if not os.path.isdir(args.utils_dir):
os.mkdir(args.utils_dir)
if not os.path.isdir(args.train_dir):
os.mkdir(args.train_dir)
raw_data = rawdata.load_data(args.data_dir, args.utils_dir, args.seq_length)
    train_word, train_tag, dev_word, dev_tag, vocab_size, tag_size = raw_data
train_dataset = dataset.Dataset(train_word, train_tag)
valid_dataset = dataset.Dataset(dev_word, dev_tag)
args.vocab_size = vocab_size
args.tag_size = tag_size
with tf.Graph().as_default(), tf.Session() as sess:
initializer = tf.random_normal_initializer(-args.init_scale, args.init_scale)
with tf.variable_scope('ner_var_scope', reuse=None, initializer=initializer):
m = bilstm_model.NERTagger(is_training=True, config=args)
with tf.variable_scope('ner_var_scope', reuse=True, initializer=initializer):
valid_m = bilstm_model.NERTagger(is_training=False, config=args)
sess.run(tf.global_variables_initializer())
for i in range(args.num_epochs):
lr_decay = args.lr_decay ** max(float(i - args.max_epoch), 0.0)
m.assign_lr(sess, args.learning_rate * lr_decay)
print("Epoch: %d Learning rate: %.3f" % (i + 1, sess.run(m.lr)))
train_perplexity = bilstm_model.run(sess, m, train_dataset, m.train_op,
ner_train_dir=args.train_dir, epoch=i)
print("Epoch: %d Train Perplexity: %.3f" % (i + 1, train_perplexity))
valid_perplexity = bilstm_model.run(sess, valid_m, valid_dataset, tf.no_op(),
ner_train_dir=args.train_dir, epoch=i)
print("Epoch: %d Valid Perplexity: %.3f" % (i + 1, valid_perplexity))
train_dataset.reset()
valid_dataset.reset() | [
"endymecy@sina.cn"
] | endymecy@sina.cn |
78c45265334cb28df9abd61b34cea3037fadd473 | 15581a76b36eab6062e71d4e5641cdfaf768b697 | /LeetCode_30days_challenge/2021/March/Swapping Nodes in a Linked List.py | 771344e1e4fce37d7fd3005ffbcf345d20ed1e49 | [] | no_license | MarianDanaila/Competitive-Programming | dd61298cc02ca3556ebc3394e8d635b57f58b4d2 | 3c5a662e931a5aa1934fba74b249bce65a5d75e2 | refs/heads/master | 2023-05-25T20:03:18.468713 | 2023-05-16T21:45:08 | 2023-05-16T21:45:08 | 254,296,597 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,103 | py | # Definition for singly-linked list.
class ListNode:
def __init__(self, val=0, next=None):
self.val = val
self.next = next
# First Approach with 2 traversals
class Solution:
def swapNodes(self, head: ListNode, k: int) -> ListNode:
length = 0
curr = head
while curr:
length += 1
if length == k:
first = curr
curr = curr.next
curr = head
while curr:
if length == k:
first.val, curr.val = curr.val, first.val
length -= 1
curr = curr.next
return head
# Second Approach with 1 traversal
class Solution:
def swapNodes(self, head: ListNode, k: int) -> ListNode:
length = 0
curr = head
first = second = None
while curr:
if second:
second = second.next
length += 1
if length == k:
second = head
first = curr
curr = curr.next
first.val, second.val = second.val, first.val
return head
| [
"mariandanaila01@gmail.com"
] | mariandanaila01@gmail.com |
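The one-pass, two-pointer approach above is easy to sanity-check with a small self-contained harness; in the sketch below, `ListNode` is re-declared and the `build`/`to_list` helpers are invented purely for the demo:

```python
# Self-contained sanity check for the one-pass swapNodes approach above.
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def build(vals):
    # Build a singly linked list from a Python list.
    head = None
    for v in reversed(vals):
        head = ListNode(v, head)
    return head

def to_list(head):
    # Collect node values back into a Python list.
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

def swap_nodes(head, k):
    length = 0
    curr = head
    first = second = None
    while curr:
        if second:            # second trails curr by k nodes
            second = second.next
        length += 1
        if length == k:       # first lands on the k-th node from the front
            second = head
            first = curr
        curr = curr.next
    first.val, second.val = second.val, first.val
    return head

head = swap_nodes(build([1, 2, 3, 4, 5]), 2)
print(to_list(head))  # → [1, 4, 3, 2, 5]
```

Swapping values rather than relinking nodes keeps both approaches O(1) in extra space.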
654d9736b8244a00253102a14f50ea9fde1a28e9 | 7d07c037dbd2fbfce960c7a63debe1cb3d5f1a8a | /api/settings/development.py | 5282fbf8102fb199225aa97921d41de4a2af4c88 | [] | no_license | sealevelresearch-jenkins/sea-level-api | 2fcbf309fa7388514ddf8bf9bd520f5681775939 | 382cf4d1b6981f4120d8add6d79a53493b911e24 | refs/heads/master | 2020-12-25T05:19:21.904701 | 2014-06-25T11:44:26 | 2014-06-25T11:44:26 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 246 | py | from .common import *
if DATABASES is None:
print("Using SQLite database.")
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
| [
"paul@paulfurley.com"
] | paul@paulfurley.com |
6542253bc8d82a7dd671253b76782e3b4d46403f | 6bd21a64c5fbeba1682c3e65221f6275a44c4cd5 | /vega/algorithms/nas/modnas/data_provider/dataloader/torch/image_cls.py | edb7ba08fd73734470a4338a2b8bfe3896ffe252 | [
"MIT"
] | permissive | yiziqi/vega | e68935475aa207f788c849e26c1e86db23a8a39b | 52b53582fe7df95d7aacc8425013fd18645d079f | refs/heads/master | 2023-08-28T20:29:16.393685 | 2021-11-18T07:28:22 | 2021-11-18T07:28:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,605 | py | # -*- coding:utf-8 -*-
# Copyright (C) 2020. Huawei Technologies Co., Ltd. All rights reserved.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the MIT License.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# MIT License for more details.
"""Dataloader for Image classification."""
import random
import numpy as np
from torch.utils.data.dataloader import DataLoader
from torch.utils.data.dataset import Dataset
from torch.utils.data.sampler import SubsetRandomSampler
from modnas.registry.data_loader import register
from modnas.utils.logging import get_logger
from typing import Any, Dict, List, Optional, Tuple, Union, Callable
CLASSES_TYPE = Union[int, List[Union[str, int]]]
logger = get_logger('data_loader')
def get_label_class(label: int) -> int:
"""Return class index of given label."""
if isinstance(label, float):
label_cls = int(label)
elif isinstance(label, np.ndarray):
label_cls = int(np.argmax(label))
elif isinstance(label, int):
label_cls = label
else:
raise ValueError('unsupported label type: {}'.format(label))
return label_cls
def get_dataset_label(data: Dataset) -> List[int]:
"""Return label of given data."""
if hasattr(data, 'targets'):
return [c for c in data.targets]
if hasattr(data, 'samples'):
return [c for _, c in data.samples]
if hasattr(data, 'train_labels'): # backward compatibility for pytorch<1.2.0
return data.train_labels
if hasattr(data, 'test_labels'):
return data.test_labels
raise RuntimeError('data labels not found')
def get_dataset_class(data):
"""Return classes of given data."""
if hasattr(data, 'classes'):
return data.classes
return []
def filter_index_class(data_idx: List[int], labels: List[int], classes: List[int]) -> List[int]:
"""Return data indices from given classes."""
return [idx for idx in data_idx if get_label_class(labels[idx]) in classes]
def train_valid_split(
trn_idx: List[int], train_labels: List[int], class_size: Dict[int, int]
) -> Tuple[List[int], List[int]]:
"""Return split train and valid data indices."""
random.shuffle(trn_idx)
train_idx, valid_idx = [], []
for idx in trn_idx:
label_cls = get_label_class(train_labels[idx])
if label_cls not in class_size:
continue
if class_size[label_cls] > 0:
valid_idx.append(idx)
class_size[label_cls] -= 1
else:
train_idx.append(idx)
return train_idx, valid_idx
def map_data_label(data, mapping):
"""Map original data labels to new ones."""
labels = get_dataset_label(data)
if hasattr(data, 'targets'):
data.targets = [mapping.get(get_label_class(c), c) for c in labels]
if hasattr(data, 'samples'):
data.samples = [(s, mapping.get(get_label_class(c), c)) for s, c in data.samples]
if hasattr(data, 'train_labels'):
data.train_labels = [mapping.get(get_label_class(c), c) for c in labels]
if hasattr(data, 'test_labels'):
data.test_labels = [mapping.get(get_label_class(c), c) for c in labels]
def select_class(trn_data: Dataset, classes: Optional[CLASSES_TYPE] = None) -> List[int]:
"""Return train data class list selected from given classes."""
all_classes = list(set([get_label_class(c) for c in get_dataset_label(trn_data)]))
if isinstance(classes, int):
all_classes = random.sample(all_classes, classes)
elif isinstance(classes, list):
all_classes = []
class_name = get_dataset_class(trn_data)
for c in classes:
if isinstance(c, str):
idx = class_name.index(c)
if idx == -1:
continue
all_classes.append(idx)
elif isinstance(c, int):
all_classes.append(c)
else:
raise ValueError('invalid class type')
elif classes is not None:
raise ValueError('invalid classes type')
return sorted(all_classes)
@register
def ImageClsDataLoader(
trn_data: Dataset,
val_data: Optional[Dataset],
classes: Optional[CLASSES_TYPE] = None,
trn_batch_size: int = 64,
val_batch_size: int = 64,
workers: int = 2,
collate_fn: Optional[Callable] = None,
parallel_multiplier: int = 1,
train_size: int = 0,
train_ratio: float = 1.,
train_seed: int = 1,
valid_size: int = 0,
valid_ratio: Union[float, int] = 0.,
valid_seed: int = 1
) -> Tuple[Optional[DataLoader], Optional[DataLoader]]:
"""Return image classification DataLoader."""
# classes
trn_labels = get_dataset_label(trn_data)
random.seed(train_seed)
all_classes = select_class(trn_data, classes)
if classes is not None:
logger.info('data_loader: selected classes: {}'.format(all_classes))
n_classes = len(all_classes)
# index
val_idx = []
trn_idx = list(range(len(trn_data)))
trn_idx = filter_index_class(trn_idx, trn_labels, all_classes)
n_train_data = len(trn_idx)
if train_size <= 0:
train_size = int(n_train_data * min(train_ratio, 1.))
if 0 < train_size < n_train_data:
random.seed(train_seed)
trn_idx = random.sample(trn_idx, train_size)
if val_data is not None:
val_labels = get_dataset_label(val_data)
val_idx = list(range(len(val_data)))
val_idx = filter_index_class(val_idx, val_labels, all_classes)
n_valid_data = len(val_idx)
if valid_size <= 0 and valid_ratio > 0:
valid_size = int(n_valid_data * min(valid_ratio, 1.))
if 0 < valid_size < n_valid_data:
random.seed(valid_seed)
val_idx = random.sample(val_idx, valid_size)
else:
val_data = trn_data
if valid_size <= 0 and valid_ratio > 0:
valid_size = int(train_size * min(valid_ratio, 1.))
if valid_size > 0:
random.seed(valid_seed)
val_class_size = {}
for i, c in enumerate(all_classes):
val_class_size[c] = valid_size // n_classes + (1 if i < valid_size % n_classes else 0)
trn_idx, val_idx = train_valid_split(trn_idx, trn_labels, val_class_size)
logger.info('data_loader: trn: {} val: {} cls: {}'.format(len(trn_idx), len(val_idx), n_classes))
# map labels
if classes is not None:
mapping = {c: i for i, c in enumerate(all_classes)}
map_data_label(trn_data, mapping)
if val_data is not None:
map_data_label(val_data, mapping)
# dataloader
trn_loader = val_loader = None
trn_batch_size *= parallel_multiplier
val_batch_size *= parallel_multiplier
workers *= parallel_multiplier
extra_kwargs: Dict[str, Any] = {
'num_workers': workers,
'pin_memory': True,
}
if collate_fn is not None:
# backward compatibility for pytorch < 1.2.0
extra_kwargs['collate_fn'] = collate_fn
if len(trn_idx) > 0:
trn_sampler = SubsetRandomSampler(trn_idx)
trn_loader = DataLoader(trn_data, batch_size=trn_batch_size, sampler=trn_sampler, **extra_kwargs)
if len(val_idx) > 0:
val_sampler = SubsetRandomSampler(val_idx)
val_loader = DataLoader(val_data, batch_size=val_batch_size, sampler=val_sampler, **extra_kwargs)
return trn_loader, val_loader
| [
"zhangjiajin@huawei.com"
] | zhangjiajin@huawei.com |
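The `train_valid_split` helper in the file above budgets a fixed number of validation samples per class. A stripped-down sketch (labels as plain ints and no dataset objects — both simplifications for illustration) shows the per-class quota logic:

```python
import random

# Minimal sketch of the per-class validation budgeting used by
# train_valid_split above; labels are plain ints here for illustration.
def train_valid_split(trn_idx, labels, class_size):
    random.shuffle(trn_idx)
    train_idx, valid_idx = [], []
    for idx in trn_idx:
        c = labels[idx]
        if c not in class_size:     # drop samples from unselected classes
            continue
        if class_size[c] > 0:       # class still has validation quota left
            valid_idx.append(idx)
            class_size[c] -= 1
        else:
            train_idx.append(idx)
    return train_idx, valid_idx

random.seed(0)
labels = [0, 0, 0, 1, 1, 1]
train, valid = train_valid_split(list(range(6)), labels, {0: 1, 1: 1})
print(len(train), len(valid))  # → 4 2
```

Because the quota dict is decremented in place, the caller must pass a fresh `class_size` on every call — the original does the same by rebuilding `val_class_size` before each split.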
dc71ded9798c040921352112b957d21cdd4ff82c | 7bededcada9271d92f34da6dae7088f3faf61c02 | /pypureclient/flashblade/FB_2_8/models/snmp_manager.py | e46beba71d12d483675d38e4a5be3decef66ca7c | [
"BSD-2-Clause"
] | permissive | PureStorage-OpenConnect/py-pure-client | a5348c6a153f8c809d6e3cf734d95d6946c5f659 | 7e3c3ec1d639fb004627e94d3d63a6fdc141ae1e | refs/heads/master | 2023-09-04T10:59:03.009972 | 2023-08-25T07:40:41 | 2023-08-25T07:40:41 | 160,391,444 | 18 | 29 | BSD-2-Clause | 2023-09-08T09:08:30 | 2018-12-04T17:02:51 | Python | UTF-8 | Python | false | false | 4,572 | py | # coding: utf-8
"""
FlashBlade REST API
A lightweight client for FlashBlade REST API 2.8, developed by Pure Storage, Inc. (http://www.purestorage.com/).
OpenAPI spec version: 2.8
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re
import six
import typing
from ....properties import Property
if typing.TYPE_CHECKING:
from pypureclient.flashblade.FB_2_8 import models
class SnmpManager(object):
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'name': 'str',
'id': 'str',
'host': 'str',
'notification': 'str',
'version': 'str',
'v2c': 'SnmpV2c',
'v3': 'SnmpV3'
}
attribute_map = {
'name': 'name',
'id': 'id',
'host': 'host',
'notification': 'notification',
'version': 'version',
'v2c': 'v2c',
'v3': 'v3'
}
required_args = {
}
def __init__(
self,
name=None, # type: str
id=None, # type: str
host=None, # type: str
notification=None, # type: str
version=None, # type: str
v2c=None, # type: models.SnmpV2c
v3=None, # type: models.SnmpV3
):
"""
Keyword args:
name (str): A name chosen by the user. Can be changed. Must be locally unique.
id (str): A non-modifiable, globally unique ID chosen by the system.
host (str): DNS hostname or IP address of a computer that hosts an SNMP manager to which Purity is to send trap messages when it generates alerts.
notification (str): The type of notification the agent will send. Valid values are `inform` and `trap`.
version (str): Version of the SNMP protocol to be used by Purity in communications with the specified manager. Valid values are `v2c` and `v3`.
v2c (SnmpV2c)
v3 (SnmpV3)
"""
if name is not None:
self.name = name
if id is not None:
self.id = id
if host is not None:
self.host = host
if notification is not None:
self.notification = notification
if version is not None:
self.version = version
if v2c is not None:
self.v2c = v2c
if v3 is not None:
self.v3 = v3
def __setattr__(self, key, value):
if key not in self.attribute_map:
raise KeyError("Invalid key `{}` for `SnmpManager`".format(key))
self.__dict__[key] = value
def __getattribute__(self, item):
value = object.__getattribute__(self, item)
if isinstance(value, Property):
return None
else:
return value
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
if hasattr(self, attr):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(SnmpManager, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, SnmpManager):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| [
"tlewis@purestorage.com"
] | tlewis@purestorage.com |
2d60381e59278c3f8cb22bfbfff0a4f9a2b9c553 | 0d0cf0165ca108e8d94056c2bae5ad07fe9f9377 | /19_Introduction_to_TensorFlow_in_Python/3_Neural_Networks/avoidingLocalMinima.py | fd720e8f9a6bb046155e7f68294ff8365d1e3ce1 | [] | no_license | MACHEIKH/Datacamp_Machine_Learning_For_Everyone | 550ec4038ebdb69993e16fe22d5136f00101b692 | 9fe8947f490da221430e6dccce6e2165a42470f3 | refs/heads/main | 2023-01-22T06:26:15.996504 | 2020-11-24T11:21:53 | 2020-11-24T11:21:53 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,757 | py | # Avoiding local minima
# The previous problem showed how easy it is to get stuck in local minima. We had a simple optimization problem in one variable and gradient descent still failed to deliver the global minimum when we had to travel through local minima first. One way to avoid this problem is to use momentum, which allows the optimizer to break through local minima. We will again use the loss function from the previous problem, which has been defined and is available for you as loss_function().
# The graph is of a single variable function that contains multiple local minima and a global minimum.
# Several optimizers in tensorflow have a momentum parameter, including SGD and RMSprop. You will make use of RMSprop in this exercise. Note that x_1 and x_2 have been initialized to the same value this time. Furthermore, keras.optimizers.RMSprop() has also been imported for you from tensorflow.
# Instructions
# 100 XP
# Set the opt_1 operation to use a learning rate of 0.01 and a momentum of 0.99.
# Set opt_2 to use the root mean square propagation (RMS) optimizer with a learning rate of 0.01 and a momentum of 0.00.
# Define the minimization operation for opt_2.
# Print x_1 and x_2 as numpy arrays.
# Initialize x_1 and x_2
x_1 = Variable(0.05, float32)
x_2 = Variable(0.05, float32)
# Define the optimization operation for opt_1 and opt_2
opt_1 = keras.optimizers.RMSprop(learning_rate=0.01, momentum=0.99)
opt_2 = keras.optimizers.RMSprop(learning_rate=0.01, momentum=0.00)
for j in range(100):
opt_1.minimize(lambda: loss_function(x_1), var_list=[x_1])
# Define the minimization operation for opt_2
opt_2.minimize(lambda: loss_function(x_2), var_list=[x_2])
# Print x_1 and x_2 as numpy arrays
print(x_1.numpy(), x_2.numpy())
| [
"noreply@github.com"
] | MACHEIKH.noreply@github.com |
ce32b7095264d73944ba67dcefd69bcf03aa92c3 | 1d89bf155d644e7b79708ea96b281af6d1551454 | /tests/test_effect_overlay.py | a9d8aef853ae4203330ff8e063fa9febd9d5ec41 | [
"Apache-2.0",
"LGPL-2.0-or-later"
] | permissive | bitwave-tv/brave | 24ab3be24acfa7507fb889fddc416a0cc6728827 | 506d5e9ed07123f4497e0576b197590cc60b35c2 | refs/heads/dev | 2020-09-13T22:26:21.350489 | 2020-06-27T00:03:26 | 2020-06-27T00:03:26 | 239,663,001 | 12 | 8 | Apache-2.0 | 2020-02-13T07:14:34 | 2020-02-11T02:57:14 | Python | UTF-8 | Python | false | false | 2,492 | py | import time, pytest, inspect
from utils import *
from PIL import Image
def test_effect_overlay_visible_after_creation(run_brave):
run_brave()
time.sleep(0.5)
check_brave_is_running()
add_overlay({'type': 'effect', 'source': 'mixer1', 'effect_name': 'edgetv'})
time.sleep(0.1)
assert_overlays([{'id': 1, 'visible': False, 'effect_name': 'edgetv'}])
update_overlay(1, {'visible': True}, status_code=200)
time.sleep(0.1)
assert_overlays([{'id': 1, 'visible': True,'effect_name': 'edgetv'}])
add_overlay({'type': 'effect', 'source': 'mixer1', 'effect_name': 'solarize'})
time.sleep(0.1)
assert_overlays([{'id': 1, 'visible': True,'effect_name': 'edgetv'},
{'id': 2, 'visible': False, 'effect_name': 'solarize'}])
update_overlay(2, {'visible': True}, status_code=200)
time.sleep(0.1)
assert_overlays([{'id': 1, 'visible': True,'effect_name': 'edgetv'},
{'id': 2, 'visible': True,'effect_name': 'solarize'}])
delete_overlay(1)
time.sleep(0.1)
assert_overlays([{'id': 2, 'visible': True,'effect_name': 'solarize'}])
delete_overlay(2)
time.sleep(0.1)
assert_overlays([])
# @pytest.mark.skip(reason="known bug that effects made visible at start should not be permitted")
def test_effect_overlay_visible_at_creation(run_brave):
'''Test that visible:true on creation also does not work if mixer is playing/paused'''
run_brave()
time.sleep(0.5)
check_brave_is_running()
# This time, visible from the start with visible=True
add_overlay({'type': 'effect', 'source': 'mixer1', 'effect_name': 'warptv', 'visible': True}, status_code=200)
time.sleep(0.1)
assert_overlays([{'visible': True, 'effect_name': 'warptv'}])
def test_set_up_effect_overlay_in_config_file(run_brave, create_config_file):
'''Test that an effect in a config file works fine'''
output_video_location = create_output_video_location()
config = {
'mixers': [{}],
'overlays': [
{'type': 'effect', 'source': 'mixer1', 'effect_name': 'burn', 'visible': True},
{'type': 'effect', 'source': 'mixer1', 'effect_name': 'vertigotv', 'visible': False}
]
}
config_file = create_config_file(config)
run_brave(config_file.name)
time.sleep(0.5)
check_brave_is_running()
assert_overlays([{'id': 1, 'effect_name': 'burn', 'visible': True},
{'id': 2, 'effect_name': 'vertigotv', 'visible': False}])
| [
"matthew1000@gmail.com"
] | matthew1000@gmail.com |
0b54f01e15a2559bdf7535e34ba48bc422536d06 | 6bcfb27114e076139ebf943b31b34deaf9914463 | /main/users/apps.py | 9193cc0b0d585cb51dd2aec4d72ab5dba7e02e62 | [] | no_license | 101t/jasmin-web-panel | f09be3ebb7de132cc28bc121a186a8eba89e1c24 | d7dfe74a63663571178843d5e664b2121d4d5943 | refs/heads/master | 2023-08-02T22:22:34.421658 | 2023-07-27T20:54:49 | 2023-07-27T20:54:49 | 212,270,500 | 66 | 63 | null | 2023-07-27T20:54:50 | 2019-10-02T06:31:44 | Python | UTF-8 | Python | false | false | 262 | py | from django.apps import AppConfig
from django.utils.translation import gettext_lazy as _
from django.contrib.auth.apps import AuthConfig
AuthConfig.verbose_name = _("Groups")
class UsersConfig(AppConfig):
name = "main.users"
verbose_name = _("Users")
| [
"tarek.it.eng@gmail.com"
] | tarek.it.eng@gmail.com |
f1f413dbde025b326932027cb257fe6aa39756c8 | afbaa5685bf737ec7d16fee2bab54ae13caf96f9 | /geekbang/core/ch20/crawl.py | c022466d07eff3d54ba23ebd789958a6ebab2bc5 | [] | no_license | ykdsg/myPython | 9dcc9afe6f595e51b72257875d66ada1ba04bba6 | 77d2eaa2acb172664b632cc2720cef62dff8f235 | refs/heads/master | 2023-06-10T20:11:08.061075 | 2023-06-03T11:39:53 | 2023-06-03T11:39:53 | 10,655,956 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 278 | py | import time
def crawl_page(url):
    print('crawling {}'.format(url))
sleep_time = int(url.split('_')[-1])
time.sleep(sleep_time)
print('ok {}'.format(url))
def main(urls):
for url in urls:
crawl_page(url)
main(['url_1', 'url_2', 'url_3', 'url_4'])
| [
"17173as@163.com"
] | 17173as@163.com |
d7e50c5b0c76a51c12ce7d9ff3e2e2ae7cc4c5c1 | 4db29e0d5f2e050d21bbf67042c713d8fa0421b0 | /com/mason/redis/part_two/chapter05/chapter0523.py | fe893456602728640231fdb78461bc879b22d75d | [] | no_license | MasonEcnu/RedisInAction | 80e5556554c7e390264edd391042b09271cbfca4 | 710fd0316c6aee857acd350a092b657465096ed1 | refs/heads/master | 2020-07-08T17:24:39.540181 | 2019-09-30T04:14:49 | 2019-09-30T04:14:49 | 203,731,184 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,420 | py | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import contextlib
import time
from urllib.request import Request
from redis import Redis
from com.mason.redis.part_two.chapter05.chapter0522 import update_status
# Store statistics in Redis.
# Use this Python generator as a context manager.
@contextlib.contextmanager
def access_time(conn: Redis, context):
    # Record the time before the wrapped code block runs.
    start = time.time()
    # Run the wrapped code block.
    # yield is what turns a function into a generator.
    yield
    # Compute the elapsed execution time.
    delta = time.time() - start
    # Update the statistics.
    status = update_status(conn, context, "AccessTime", delta)
    # Compute the average access time.
    average = status[1] / status[0]
    pipe = conn.pipeline(True)
    # Add the page's average access time to the sorted set that
    # records the slowest access times.
    # (Note: redis-py 3.x expects zadd(name, {member: score}).)
    pipe.zadd("slowest:AccessTime", context, average)
    # Keep only the 100 slowest records in the AccessTime sorted set.
    pipe.zremrangebyrank("slowest:AccessTime", 0, -101)
    pipe.execute()
def process_view(conn: Redis, request: Request, callback):
    # Request.full_url is a property in Python 3, not a method.
    # The context manager that measures and records access time
    # wraps the code block like this:
    with access_time(conn, request.full_url):
        # This statement executes while the context manager is suspended at its yield.
return callback()
| [
"364207187@qq.com"
] | 364207187@qq.com |
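The `@contextlib.contextmanager` timing pattern in the file above does not need Redis to demonstrate; here is a minimal sketch that accumulates elapsed time into a plain dict (the `timed` name, `stats` dict, and key are all made up for illustration):

```python
import contextlib
import time

@contextlib.contextmanager
def timed(stats, key):
    # Code before the yield runs when the with-block is entered.
    start = time.time()
    yield
    # Code after the yield runs when the with-block exits normally.
    stats[key] = stats.get(key, 0.0) + (time.time() - start)

stats = {}
with timed(stats, "page_render"):
    time.sleep(0.01)  # stand-in for the view callback
print(sorted(stats))  # → ['page_render']
```

Like the original, this sketch records nothing if the wrapped block raises; wrapping the yield in try/finally would change that trade-off.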
b368bfe24d7352e61178f3ec8182f30c134e3a11 | 1817dca14e0053f8dc22e3d528d0263c437fd5dd | /handlers/hint_handler.py | 5f2536f010e76c9d1c76dea46a20a6fa568d3dba | [
"Apache-2.0"
] | permissive | biothings/biothings_explorer_api | 6299985f680082ea34cad3ebd6f5282dfe885952 | 389f95e94e0a36a54545a33bdc7b0ec5a9c05906 | refs/heads/master | 2023-08-26T01:04:10.685981 | 2020-04-29T16:57:28 | 2020-04-29T16:57:28 | 202,216,083 | 0 | 0 | Apache-2.0 | 2023-08-14T21:50:27 | 2019-08-13T20:06:44 | Python | UTF-8 | Python | false | false | 635 | py | from biothings_explorer.hint import Hint
import json
from .base import BaseHandler
ht = Hint()
class HintHandler(BaseHandler):
def get(self):
_input = self.get_query_argument('q', None)
if _input:
try:
result = ht.query(_input)
self.set_status(200)
self.write(json.dumps(result))
self.finish()
            except Exception:
                self.set_status(400)
                self.write(json.dumps({'error': 'Query failed'}))
else:
self.set_status(400)
self.write(json.dumps({'error': 'No input is found'}))
| [
"kevinxin@scripps.edu"
] | kevinxin@scripps.edu |
42d48160e819cfb5671f871ed6cb68725384edb1 | cb61ba31b27b232ebc8c802d7ca40c72bcdfe152 | /Company-Based/robinhood/countKmaxOccurences.py | 562fd2c5198260500c28eea17465d754b0629f95 | [
"Apache-2.0"
] | permissive | saisankargochhayat/algo_quest | c7c48187c76b5cd7c2ec3f0557432606e9096241 | a24f9a22c019ab31d56bd5a7ca5ba790d54ce5dc | refs/heads/master | 2021-07-04T15:21:33.606174 | 2021-02-07T23:42:43 | 2021-02-07T23:42:43 | 67,831,927 | 5 | 1 | Apache-2.0 | 2019-10-28T03:51:03 | 2016-09-09T20:51:29 | Python | UTF-8 | Python | false | false | 917 | py | def counts(string, word):
if not string:
return 0, 0
count = 0
idx = 0
while idx < len(string):
if len(string[idx:]) < len(word):
break
for char_str, char_word in zip(string[idx:], word):
if char_str != char_word:
return count, idx
else:
count += 1
idx += len(word)
return count, idx
def findK(string, word):
idx = 0
max_count = 0
while idx < len(string):
        if string[idx] == word[0]:
count, offset = counts(string[idx:], word)
max_count = max(max_count, count)
if offset:
idx += offset
continue
idx += 1
return max_count
def maxKOccurrences(sequence, words):
res = []
for word in words:
res.append(findK(sequence, word))
return res | [
"saisankargochhayat@gmail.com"
] | saisankargochhayat@gmail.com |
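The `counts`/`findK` pair above scans for the longest run of consecutive, non-overlapping copies of each word. The same behavior can be sketched more directly with `str.startswith` (the function name is invented, and a non-empty `word` is assumed):

```python
def max_consecutive_repeats(sequence, word):
    # Longest run of back-to-back, non-overlapping copies of word.
    best = i = 0
    while i < len(sequence):
        count, j = 0, i
        while sequence.startswith(word, j):
            count += 1
            j += len(word)    # jump past each full match
        best = max(best, count)
        i = j + 1 if count == 0 else j
    return best

print(max_consecutive_repeats("ababcbabc", "ab"))  # → 2
print(max_consecutive_repeats("aaa", "aa"))        # → 1
```

On a mismatch the original advances by the returned offset instead of rescanning the matched prefix, which is what `i = j` reproduces here.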
ed61d145fa7fb88eef5cd1c145118ab1d38596b1 | b772822f8d99cce0152759ad41776c19944bff69 | /torch_geometric/datasets/gnn_benchmark_dataset.py | 39e905fc653f73a128710b6ee32f0bce91dfdc51 | [
"MIT"
] | permissive | monk1337/pytorch_geometric | 902e4e075ca7769fd7192ec53fd4191b8e519afa | 38514197a327541eb47abb69d4ab224910852605 | refs/heads/master | 2023-07-19T13:19:04.879875 | 2021-09-06T06:52:48 | 2021-09-06T06:52:48 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,968 | py | from typing import Optional, Callable, List
import os
import os.path as osp
import pickle
import logging
import torch
from torch_geometric.data import (InMemoryDataset, download_url, extract_zip,
Data)
from torch_geometric.utils import remove_self_loops
class GNNBenchmarkDataset(InMemoryDataset):
r"""A variety of artificially and semi-artificially generated graph
datasets from the `"Benchmarking Graph Neural Networks"
<https://arxiv.org/abs/2003.00982>`_ paper.
.. note::
The ZINC dataset is provided via
:class:`torch_geometric.datasets.ZINC`.
Args:
root (string): Root directory where the dataset should be saved.
name (string): The name of the dataset (one of :obj:`"PATTERN"`,
:obj:`"CLUSTER"`, :obj:`"MNIST"`, :obj:`"CIFAR10"`,
:obj:`"TSP"`, :obj:`"CSL"`)
split (string, optional): If :obj:`"train"`, loads the training
dataset.
If :obj:`"val"`, loads the validation dataset.
If :obj:`"test"`, loads the test dataset.
(default: :obj:`"train"`)
transform (callable, optional): A function/transform that takes in an
:obj:`torch_geometric.data.Data` object and returns a transformed
version. The data object will be transformed before every access.
(default: :obj:`None`)
pre_transform (callable, optional): A function/transform that takes in
an :obj:`torch_geometric.data.Data` object and returns a
transformed version. The data object will be transformed before
being saved to disk. (default: :obj:`None`)
pre_filter (callable, optional): A function that takes in an
:obj:`torch_geometric.data.Data` object and returns a boolean
value, indicating whether the data object should be included in the
final dataset. (default: :obj:`None`)
"""
names = ['PATTERN', 'CLUSTER', 'MNIST', 'CIFAR10', 'TSP', 'CSL']
root_url = 'https://pytorch-geometric.com/datasets/benchmarking-gnns'
urls = {
'PATTERN': f'{root_url}/PATTERN_v2.zip',
'CLUSTER': f'{root_url}/CLUSTER_v2.zip',
'MNIST': f'{root_url}/MNIST_v2.zip',
'CIFAR10': f'{root_url}/CIFAR10_v2.zip',
'TSP': f'{root_url}/TSP_v2.zip',
'CSL': 'https://www.dropbox.com/s/rnbkp5ubgk82ocu/CSL.zip?dl=1',
}
def __init__(self, root: str, name: str, split: str = "train",
transform: Optional[Callable] = None,
pre_transform: Optional[Callable] = None,
pre_filter: Optional[Callable] = None):
self.name = name
assert self.name in self.names
if self.name == 'CSL' and split != 'train':
split = 'train'
logging.warning(
("Dataset 'CSL' does not provide a standardized splitting. "
"Instead, it is recommended to perform 5-fold cross "
                 "validation with stratified sampling"))
super().__init__(root, transform, pre_transform, pre_filter)
if split == 'train':
path = self.processed_paths[0]
elif split == 'val':
path = self.processed_paths[1]
elif split == 'test':
path = self.processed_paths[2]
else:
raise ValueError(f"Split '{split}' found, but expected either "
f"'train', 'val', or 'test'")
self.data, self.slices = torch.load(path)
@property
def raw_dir(self) -> str:
return osp.join(self.root, self.name, 'raw')
@property
def processed_dir(self) -> str:
return osp.join(self.root, self.name, 'processed')
@property
def raw_file_names(self) -> List[str]:
if self.name == 'CSL':
return [
'graphs_Kary_Deterministic_Graphs.pkl',
'y_Kary_Deterministic_Graphs.pt'
]
else:
name = self.urls[self.name].split('/')[-1][:-4]
return [f'{name}.pt']
@property
def processed_file_names(self) -> List[str]:
if self.name == 'CSL':
return ['data.pt']
else:
return ['train_data.pt', 'val_data.pt', 'test_data.pt']
def download(self):
path = download_url(self.urls[self.name], self.raw_dir)
extract_zip(path, self.raw_dir)
os.unlink(path)
def process(self):
if self.name == 'CSL':
data_list = self.process_CSL()
torch.save(self.collate(data_list), self.processed_paths[0])
else:
inputs = torch.load(self.raw_paths[0])
for i in range(len(inputs)):
data_list = [Data(**data_dict) for data_dict in inputs[i]]
if self.pre_filter is not None:
data_list = [d for d in data_list if self.pre_filter(d)]
if self.pre_transform is not None:
data_list = [self.pre_transform(d) for d in data_list]
torch.save(self.collate(data_list), self.processed_paths[i])
def process_CSL(self) -> List[Data]:
with open(self.raw_paths[0], 'rb') as f:
adjs = pickle.load(f)
ys = torch.load(self.raw_paths[1]).tolist()
data_list = []
for adj, y in zip(adjs, ys):
row, col = torch.from_numpy(adj.row), torch.from_numpy(adj.col)
edge_index = torch.stack([row, col], dim=0).to(torch.long)
edge_index, _ = remove_self_loops(edge_index)
data = Data(edge_index=edge_index, y=y, num_nodes=adj.shape[0])
if self.pre_filter is not None and not self.pre_filter(data):
continue
if self.pre_transform is not None:
data = self.pre_transform(data)
data_list.append(data)
return data_list
def __repr__(self) -> str:
return f'{self.name}({len(self)})'
| [
"matthias.fey@tu-dortmund.de"
] | matthias.fey@tu-dortmund.de |
077ae56a858cba048ea1f1a780d16d37648d7cae | ad4d927b05d3004cc5f835c84807a272ecff439f | /src/olaf/build/turtlebot3/turtlebot3_bringup/catkin_generated/pkg.installspace.context.pc.py | 34569c39aae88ce71767bdaba6432a51e1b360b4 | [] | no_license | kookmin-sw/capstone-2020-11 | 73954f8a692d3240a22ca9a81c9bede8538fabbf | 081733fb0470d83930433a61aabf9708275d64dd | refs/heads/master | 2023-03-06T23:02:14.869404 | 2022-11-09T01:44:27 | 2022-11-09T01:44:27 | 246,285,681 | 5 | 4 | null | 2023-03-04T13:53:47 | 2020-03-10T11:42:27 | C++ | UTF-8 | Python | false | false | 436 | py | # generated from catkin/cmake/template/pkg.context.pc.in
CATKIN_PACKAGE_PREFIX = ""
PROJECT_PKG_CONFIG_INCLUDE_DIRS = "".split(';') if "" != "" else []
PROJECT_CATKIN_DEPENDS = "roscpp;std_msgs;sensor_msgs;diagnostic_msgs;turtlebot3_msgs".replace(';', ' ')
PKG_CONFIG_LIBRARIES_WITH_PREFIX = "".split(';') if "" != "" else []
PROJECT_NAME = "turtlebot3_bringup"
PROJECT_SPACE_DIR = "/home/nvidia/olaf/install"
PROJECT_VERSION = "1.2.1"
| [
"ksp2246@naver.com"
] | ksp2246@naver.com |
a800100f3b7b659be23bfd81ab576a30ce5f1f5e | 59d1fd18d9f13c70e3e069619add5a57f8ba3304 | /color_histogram/plot/window.py | 81c302c7bd31575f18d8ef107dd6442036c61600 | [
"MIT"
] | permissive | laserwave/ColorHistogram | 59501d00b075bd375321bf5dce65951a780c9b65 | b2fad52ba8e68130eaca503ac2a78c7a69852dd2 | refs/heads/master | 2020-08-08T19:29:08.845214 | 2019-10-10T02:55:58 | 2019-10-10T02:55:58 | 213,899,835 | 0 | 0 | MIT | 2019-10-09T11:27:05 | 2019-10-09T11:27:04 | null | UTF-8 | Python | false | false | 322 | py |
# -*- coding: utf-8 -*-
## @package color_histogram.plot.window
#
# Matplot window functions.
# @author tody
# @date 2015/07/29
from matplotlib import pyplot as plt
## Maximize the matplot window.
def showMaximize():
mng = plt.get_current_fig_manager()
mng.window.state('zoomed')
plt.show()
| [
"tody411@gmail.com"
] | tody411@gmail.com |
0ecf200d57de7d8e8ca8add027550304e1866c64 | f6af31f87f79119e8f3e7a06242d19d4fe17bf6a | /examples/demo/reqrep/requester.py | 1960039b3f9413311e0a39bfff6c07e1ba366c8f | [
"BSD-3-Clause",
"LicenseRef-scancode-public-domain"
] | permissive | claws/tx0mq | 122258c106bc5e52f74e41bac0ebfb5d1e7e86f2 | dcbdabc781615ab35cb5dcb45c082375d77a9b85 | refs/heads/master | 2020-12-24T09:37:20.205796 | 2013-01-14T13:35:11 | 2013-01-14T13:35:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,468 | py | """
Example tx0mq requester.
requester.py --endpoint=ipc:///tmp/sock
"""
import sys
import time
from optparse import OptionParser
from twisted.internet import reactor, defer
try:
import tx0mq
except ImportError, ex:
import os
package_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(sys.argv[0])))))
print package_dir
sys.path.append(package_dir)
from tx0mq import constants, ZmqEndpoint, ZmqEndpointType, ZmqFactory, ZmqReqConnection
parser = OptionParser("")
parser.add_option("-e", "--endpoint", dest="endpoint", default="ipc:///tmp/sock", help="0MQ Endpoint")
if __name__ == '__main__':
(options, args) = parser.parse_args()
@defer.inlineCallbacks
def doRequest(requester):
data = str(time.time())
print "sending request containing %s ..." % (data)
reply = yield requester.request(data)
# this example only uses single-part messages
reply = reply[0]
print "received reply: %s" % reply
reactor.callLater(1, reactor.stop)
def onConnect(requester):
print "Requester connected"
requester.setSocketOptions({constants.LINGER:0})
reactor.callLater(1, doRequest, requester)
endpoint = ZmqEndpoint(ZmqEndpointType.connect, options.endpoint)
requester = ZmqReqConnection(endpoint)
deferred = requester.connect(ZmqFactory())
deferred.addCallback(onConnect)
reactor.run()
| [
"clawsicus@gmail.com"
] | clawsicus@gmail.com |
5000298ca1725ddf90398511ad69af60763e2d5d | 321b4ed83b6874eeb512027eaa0b17b0daf3c289 | /917/917.reverse-only-letters.234544979.Accepted.leetcode.py | b4f433d05e0347bb0f6fc363cbedba8287474f29 | [] | no_license | huangyingw/submissions | 7a610613bdb03f1223cdec5f6ccc4391149ca618 | bfac1238ecef8b03e54842b852f6fec111abedfa | refs/heads/master | 2023-07-25T09:56:46.814504 | 2023-07-16T07:38:36 | 2023-07-16T07:38:36 | 143,352,065 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 463 | py | class Solution:
def reverseOnlyLetters(self, S):
temp = list(S)
left = 0
right = len(temp) - 1
while left < right:
while left < right and not temp[left].isalpha():
left += 1
while left < right and not temp[right].isalpha():
right -= 1
temp[left], temp[right] = temp[right], temp[left]
left += 1
right -= 1
return ''.join(temp)
| [
"huangyingw@gmail.com"
] | huangyingw@gmail.com |
60d7f4b53270a4f1eade58446ec0acad21d6ec46 | 25040bd4e02ff9e4fbafffee0c6df158a62f0d31 | /www/htdocs/wt/lapnw/data/item_50_7.tmpl.py | 0b819e6a0e4c549385d531e59eb966da64cad855 | [] | no_license | erochest/atlas | 107a14e715a058d7add1b45922b0f8d03bd2afef | ea66b80c449e5b1141e5eddc4a5995d27c2a94ee | refs/heads/master | 2021-05-16T00:45:47.585627 | 2017-10-09T10:12:03 | 2017-10-09T10:12:03 | 104,338,364 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 192 | py |
from lap.web.templates import GlobalTemplate, SubtemplateCode
class main(GlobalTemplate):
title = 'Page.Item: 50.7'
project = 'lapnw'
class page(SubtemplateCode):
pass
| [
"eric@eric-desktop"
] | eric@eric-desktop |
1810efd247d5268bf54af6acc3516543fef3ecf6 | 5aa709245470c365d0a2831d2d5f2b5f28fa6603 | /simplesocial/simplesocial/settings.py | d9afebb77b0567d100aec26259dc5adfb67cbed4 | [] | no_license | henryfrstr/social_clone | c4c7d448f9b50e548f756c9c019ee310a069c247 | 04879e1d99cf4cdf03c0d2c622c5e94d95df4dd4 | refs/heads/master | 2023-02-01T03:51:29.624013 | 2020-12-17T17:55:44 | 2020-12-17T17:55:44 | 321,951,160 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,223 | py | """
Django settings for simplesocial project.
Generated by 'django-admin startproject' using Django 3.1.4.
For more information on this file, see
https://docs.djangoproject.com/en/3.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.1/ref/settings/
"""
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '!zpb70np3(-@+5dte9#!g@^e&h723mk1(iyryfyndg*5*(ue_%'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'bootstrap4',
'accounts'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'simplesocial.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [BASE_DIR / 'templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'simplesocial.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.1/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = [BASE_DIR / 'static']
LOGIN_URL = "login"
LOGIN_REDIRECT_URL = 'home'
| [
"63148122+henryfrstr@users.noreply.github.com"
] | 63148122+henryfrstr@users.noreply.github.com |
707a1890621fb9d1febf7d0e158fcd2b642b83c5 | bca9c2fa3c4c3d06dd612280ce39090a9dfab9bd | /neekanee/job_scrapers/plugins/edu/link/samford.py | 1c08089fc2d48bebc3131d7551981cb6917de0ad | [] | no_license | thayton/neekanee | 0890dd5e5cf5bf855d4867ae02de6554291dc349 | f2b2a13e584469d982f7cc20b49a9b19fed8942d | refs/heads/master | 2021-03-27T11:10:07.633264 | 2018-07-13T14:19:30 | 2018-07-13T14:19:30 | 11,584,212 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,586 | py | import re, urlparse
from neekanee.jobscrapers.jobscraper import JobScraper
from neekanee.htmlparse.soupify import soupify, get_all_text
from neekanee.txtextract.pdftohtml import pdftohtml
from neekanee_solr.models import *
COMPANY = {
'name': 'Samford University',
'hq': 'Birmingham, AL',
'home_page_url': 'http://www.samford.edu',
'jobs_page_url': 'http://www.samford.edu/jobs/staff.aspx',
'empcnt': [501,1000]
}
class SamfordJobScraper(JobScraper):
def __init__(self):
super(SamfordJobScraper, self).__init__(COMPANY)
def scrape_job_links(self, url):
jobs = []
self.br.open(url)
s = soupify(self.br.response().read())
r = re.compile(r'^/jobs/job\.aspx\?id=\d+$')
for a in s.findAll('a', href=r):
job = Job(company=self.company)
job.title = a.text
job.url = urlparse.urljoin(self.br.geturl(), a['href'])
job.location = self.company.location
jobs.append(job)
return jobs
def scrape_jobs(self):
job_list = self.scrape_job_links(self.company.jobs_page_url)
self.prune_unlisted_jobs(job_list)
new_jobs = self.new_job_listings(job_list)
for job in new_jobs:
self.br.open(job.url)
s = soupify(self.br.response().read())
d = s.find('div', id='PanelContent')
job.desc = get_all_text(d)
job.save()
def get_scraper():
return SamfordJobScraper()
if __name__ == '__main__':
job_scraper = get_scraper()
job_scraper.scrape_jobs()
| [
"thayton@neekanee.com"
] | thayton@neekanee.com |
43369b506b0cc4ded87bee3ff3c41be63e9dea4f | 5780a36bc8799a1703879de6012bb6bcb025720a | /Beginner/primaTest/primaTest.py3 | bca4faee34fddc178887f793f796679dabf51790 | [] | no_license | bsuverzabaek/Code-Chef-Exercises | 4da9dfcabd6c37ea94239a307ba3f6b54e2d1ae0 | a3f23dc9fa0c0a92fc2df3592473102e03b06fad | refs/heads/main | 2023-08-11T14:21:13.832378 | 2021-09-20T06:01:43 | 2021-09-20T06:01:43 | 323,809,533 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 385 | py3 | import math
while (1):
T = int(input())
if (T<=0 or T>20):
print("T must be 1 <= T <= 20")
else:
break
while (T>0):
while (1):
N = int(input())
if (N<=0 or N>100000):
print("N must be 1 <= N <= 100000")
else:
break
prime = 0
for i in range(1,math.trunc(N/2)+1):
if (N%i==0):
prime += 1
if (prime==1):
print("yes")
else:
print("no")
T -= 1 | [
"noreply@github.com"
] | bsuverzabaek.noreply@github.com |
e49640cf069c95e9b1d14bfe944af571d68067e0 | f0d713996eb095bcdc701f3fab0a8110b8541cbb | /HyLkfdagDGc99ZhbF_8.py | 3f95240414dd22a323f0744ebf91d3de75b78357 | [] | no_license | daniel-reich/turbo-robot | feda6c0523bb83ab8954b6d06302bfec5b16ebdf | a7a25c63097674c0a81675eed7e6b763785f1c41 | refs/heads/main | 2023-03-26T01:55:14.210264 | 2021-03-23T16:08:01 | 2021-03-23T16:08:01 | 350,773,815 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 615 | py | """
Create a function that takes a number `n` (integer greater than zero) as an
argument, and returns `2` if `n` is odd and `8` if `n` is even.
You can only use the following arithmetic operators: addition of numbers `+`,
subtraction of numbers `-`, multiplication of numbers `*`, division of numbers
`/`, and exponentiation `**`.
You are not allowed to use any other methods in this challenge (i.e. no if
statements, comparison operators, etc).
### Examples
f(1) ➞ 2
f(2) ➞ 8
f(3) ➞ 2
### Notes
N/A
"""
def f(n):
k=0
while k<n:
k+=2
if k>n:
return 2
return 8
| [
"daniel.reich@danielreichs-MacBook-Pro.local"
] | daniel.reich@danielreichs-MacBook-Pro.local |
71b889c9d244ba7327f325e04a5697d023d8f1ce | 936054d81aa057b082e634b127110537515fa87d | /tfx_bsl/test_util/run_all_tests.py | 7085c297cb15225aed586a13d5b4293a140da1f8 | [
"Apache-2.0"
] | permissive | kamilwu/tfx-bsl | 41ae161cf64053666525b5ce8fbec91e98310f65 | 1abd2f554afb6411b966d5c7cfb79e3b29711d84 | refs/heads/master | 2020-12-23T18:43:12.837404 | 2020-01-23T06:32:14 | 2020-01-23T06:32:41 | 237,236,619 | 0 | 0 | Apache-2.0 | 2020-01-30T14:57:32 | 2020-01-30T14:57:31 | null | UTF-8 | Python | false | false | 1,328 | py | # Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A commandline tool to discover and run all the absltest.TestCase in a dir.
Usage:
python -m tfx_bsl.testing.run_all_tests --start_dir=<dir with tests>
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl import flags
from absl.testing import absltest
flags.DEFINE_string("start_dir", None,
"Directory to recursively search test modules from. "
"Required.")
FLAGS = flags.FLAGS
def load_tests(loader, tests, pattern):
del pattern
discovered = loader.discover(FLAGS.start_dir, pattern="*_test.py")
tests.addTests(discovered)
return tests
if __name__ == "__main__":
flags.mark_flag_as_required("start_dir")
absltest.main()
| [
"tensorflow-extended-nonhuman@googlegroups.com"
] | tensorflow-extended-nonhuman@googlegroups.com |
c201c9dea3a62b368c132ba1d0b8b478301c76ec | a9fdace9236af6c73133fd8dddb80843697efc7d | /tests/catalyst/metrics/functional/test_r2_squared.py | a59ec1f892e58b1568cc45fa57ed445abd565da3 | [
"Apache-2.0"
] | permissive | catalyst-team/catalyst | 026c38f26dad471cd77347adbc13423b156a5d8b | e99f90655d0efcf22559a46e928f0f98c9807ebf | refs/heads/master | 2023-08-26T23:12:49.277005 | 2022-04-29T04:19:24 | 2022-04-29T04:19:24 | 145,385,156 | 3,038 | 487 | Apache-2.0 | 2023-08-12T03:40:14 | 2018-08-20T07:56:13 | Python | UTF-8 | Python | false | false | 381 | py | # flake8: noqa
import numpy as np
import torch
from catalyst.metrics.functional._r2_squared import r2_squared
def test_r2_squared():
"""
Tests for catalyst.metrics.r2_squared metric.
"""
y_true = torch.tensor([3, -0.5, 2, 7])
y_pred = torch.tensor([2.5, 0.0, 2, 8])
val = r2_squared(y_pred, y_true)
assert torch.isclose(val, torch.Tensor([0.9486]))
| [
"noreply@github.com"
] | catalyst-team.noreply@github.com |
a76018548fe591e5ccbf69e213d4e4ac8eacd779 | f7574ee7a679261e758ba461cb5a5a364fdb0ed1 | /SpiralMatrix.py | ba5e95fc8c4ef265f954c6747ece04c5c3df41ef | [] | no_license | janewjy/Leetcode | 807050548c0f45704f2f0f821a7fef40ffbda0ed | b4dccd3d1c59aa1e92f10ed5c4f7a3e1d08897d8 | refs/heads/master | 2021-01-10T19:20:22.858158 | 2016-02-26T16:03:19 | 2016-02-26T16:03:19 | 40,615,255 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,278 | py | class Solution(object):
def spiralOrder(self, matrix):
"""
:type matrix: List[List[int]]
:rtype: List[int]
"""
res = []
while matrix != []:
try:
res.extend(matrix[0])
                matrix.pop(0)
                res.extend([i[-1] for i in matrix])
                for i in matrix:
                    i.pop()  # pop by position; .remove(i[-1]) would delete the first duplicate value
                matrix[-1].reverse()
                res.extend(matrix[-1])
                matrix.pop()
                res.extend([i[0] for i in reversed(matrix)])
                for i in matrix:
                    i.pop(0)
except:
return res
return res
a = Solution()
print a.spiralOrder([[3],[2]])
# sample input: [[1,2,3],[4,5,6],[7,8,9]]
class Solution(object):
def spiralOrder(self, matrix):
"""
:type matrix: List[List[int]]
:rtype: List[int]
"""
res = []
while matrix:
try:
res.extend(matrix[0])
                matrix.pop(0)
                for i in xrange(len(matrix)):
                    res.append(matrix[i][-1])
                    matrix[i].pop()  # pop by position; remove-by-value breaks with repeated values
                res.extend(matrix[-1][::-1])
                matrix.pop()
                for i in xrange(len(matrix)-1,-1,-1):
                    res.append(matrix[i][0])
                    matrix[i].pop(0)
except:
return res
return res
class Solution(object):
def spiralOrder(self, matrix):
"""
:type matrix: List[List[int]]
:rtype: List[int]
"""
if not matrix:
return []
res = []
u,d,l,r = 0,len(matrix)-1,0,len(matrix[0])-1
while u<d and l < r:
res.extend([matrix[u][i] for i in xrange(l,r)])
res.extend([matrix[i][r] for i in xrange(u,d)])
res.extend([matrix[d][i] for i in xrange(r,l,-1)])
res.extend([matrix[i][l] for i in xrange(d,u,-1)])
u,d,l,r = u+1,d-1,l+1,r-1
if u == d:
res.extend([matrix[u][i] for i in xrange(l,r+1)])
else:
res.extend([matrix[i][l] for i in xrange(u,d+1)])
return res | [
"janewjy87@gmail.com"
] | janewjy87@gmail.com |
4eca5eee0ff5ec0dc38748ef92580e1495cabcd5 | f0d713996eb095bcdc701f3fab0a8110b8541cbb | /KQ5H9aFBZDKEJuP6C_3.py | 128b8838991f057dd382446cd4c57a56b4fc728b | [] | no_license | daniel-reich/turbo-robot | feda6c0523bb83ab8954b6d06302bfec5b16ebdf | a7a25c63097674c0a81675eed7e6b763785f1c41 | refs/heads/main | 2023-03-26T01:55:14.210264 | 2021-03-23T16:08:01 | 2021-03-23T16:08:01 | 350,773,815 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 717 | py | """
Write a **regular expression** that will help us count how many bad cookies
are produced every day. You must use RegEx **negative lookbehind**.
### Example
lst = ["bad cookie", "good cookie", "bad cookie", "good cookie", "good cookie"]
pattern = "yourregularexpressionhere"
len(re.findall(pattern, ", ".join(lst))) ➞ 2
### Notes
* You don't need to write a function, just the pattern.
* Do **not** remove `import re` from the code.
* Find more info on RegEx and negative lookbehind in **Resources**.
* You can find all the challenges of this series in my [Basic RegEx](https://edabit.com/collection/8PEq2azWDtAZWPFe2) collection.
"""
import re
pattern = "(?<!good )(cookie)"
| [
"daniel.reich@danielreichs-MacBook-Pro.local"
] | daniel.reich@danielreichs-MacBook-Pro.local |