Dataset schema (one row per source file): blob_id (40-char string) | directory_id (40-char string) | path (string, 3–616 chars) | content_id (40-char string) | detected_licenses (list, 0–112 entries) | license_type (2 classes) | repo_name (string, 5–115 chars) | snapshot_id (40-char string) | revision_id (40-char string) | branch_name (777 classes) | visit_date (timestamp, 2015-08-06 – 2023-09-06) | revision_date (timestamp) | committer_date (timestamp) | github_id (int64, nullable) | star_events_count (int64) | fork_events_count (int64) | gha_license_id (22 classes) | gha_event_created_at (timestamp, nullable) | gha_created_at (timestamp, nullable) | gha_language (149 classes) | src_encoding (26 classes) | language (1 class) | is_vendor (bool) | is_generated (bool) | length_bytes (int64, 3 – 10.2M) | extension (188 classes) | content (string, 3 – 10.2M chars) | authors (list, 1 entry) | author_id (string, 1–132 chars)
======================================================================
repo: zakandrewking/ipython | path: /IPython/html/tests/test_external.py | branch: refs/heads/master | license: BSD-3-Clause (permissive)
blob_id: 8f900e4a807812f900ab27f85d1512892a94a0bb | directory_id: 6c189e7654d640dbf56c1481bb2439dbb3353402 | content_id: 00bf4d2b83b4b1bb4b56c956e748976034367ebb
snapshot_id: c39ba65ae8b7f323f061a591906144569a5a2e54 | revision_id: 3e4d535825d60405fbe8d094b455848d59489cfa
visit_date: 2020-05-05T06:07:01 | revision_date: 2014-05-07T16:35:37 | committer_date: 2014-05-07T16:35:37 | stars: 0 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 712 bytes | extension: py
content:
"""Test external"""
#-----------------------------------------------------------------------------
# Copyright (C) 2014 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Test functions
#-----------------------------------------------------------------------------
author: zaking17@gmail.com
======================================================================
repo: Akira331/flask-cifar10 | path: /venv/lib/python3.8/site-packages/tensorflow/python/ops/distributions/kullback_leibler.py | branch: refs/heads/master | license: none
blob_id: 7acea09ca55ff4c8012867d77a44b7dd143eaeab | directory_id: 1fe8d4133981e53e88abf633046060b56fae883e | content_id: b8d069bc79d770fe4193fbaf947981a0394075e9
snapshot_id: 6c49db8485038731ce67d23f0972b9574746c7a7 | revision_id: 283e7a2867c77d4b6aba7aea9013bf241d35d76c | github_id: 382864970
visit_date: 2023-06-14T16:35:06 | revision_date: 2021-07-05T14:09:15 | committer_date: 2021-07-05T14:09:15 | stars: 0 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 129 bytes | extension: py
content (a Git LFS pointer file, not the actual source):
version https://git-lfs.github.com/spec/v1
oid sha256:c0b7ee140903b8fcf26aad781cc776aa0a35cbc9f9efcc1f0b65c3e7318bcd6a
size 7820
author: business030301@gmail.com
======================================================================
repo: mpses/AtCoder | path: /Contest/ABC040/c/main.py | branch: refs/heads/master | license: CC0-1.0 (permissive)
blob_id: b019fa4745f8434c4b9e8976e21d77d5308b1bc0 | directory_id: 2caa47f0bdb2f03469a847c3ba39496de315d992 | content_id: 9e5d1b18c640488baf621790690b16c39aaaaa6c
snapshot_id: 9023e44885dc67c4131762281193c24b69d3b6da | revision_id: 9c101fcc0a1394754fcf2385af54b05c30a5ae2a | github_id: 287489233
visit_date: 2023-03-23T17:00:11 | revision_date: 2021-03-20T12:21:19 | committer_date: 2021-03-20T12:21:19 | stars: 0 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 212 bytes | extension: py
content:
#!/usr/bin/env python3
n, *a = map(int, open(0).read().split())
INF = float("inf")
dp = [INF] * n
dp[0] = 0
for i in range(1, n):
dp[i] = min(dp[i - p] + abs(a[i - p] - a[i]) for p in [1, 2])
print(dp[n - 1])
author: nsorangepv@gmail.com
======================================================================
repo: Yaqdata/venko-django-site | path: /venelin/wsgi.py | branch: refs/heads/master | license: none
blob_id: 895c65d6ec11f32c7cb39d127fb4f1d7bfac65ce | directory_id: 84741b2103b702791076e65679b8ab89a132ac3a | content_id: 7f14d3ffdae9d8169abafb8e6bacd57b5873b0c9
snapshot_id: 97b78cbeaa9676fb3d230f47036746f1e8a342d6 | revision_id: eefdf12c9bc6a1873c4f451e1023d9ec691ce300
visit_date: 2021-01-15T12:14:33 | revision_date: 2013-12-09T22:12:06 | committer_date: 2013-12-09T22:12:06 | stars: 0 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 285 bytes | extension: py
content:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "venelin.settings")
# This application object is used by the development server
# as well as any WSGI server configured to use this file.
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
author: vkstoykov@gmail.com
======================================================================
repo: wis-auralov/wis | path: /wis/urls.py | branch: refs/heads/master | license: none
blob_id: b29308f7c9eaaf48077bcadb3fae69b3346e1ec1 | directory_id: 33bae2dead37bcd22460e05de67531cfdacfcccb | content_id: 9f2729796b8a3e3576da8fb4661d1e77a77948b8
snapshot_id: 5cfb0dba9c74d8d944b770d9b706de91fe01440d | revision_id: 40d2e187b448a42008b8db6f00cf17818fff2444 | github_id: 90016639
visit_date: 2021-01-20T07:36:10 | revision_date: 2017-04-29T18:46:41 | committer_date: 2017-04-29T18:46:41 | gha_event_created_at: 2017-05-02T09:43:22 | gha_created_at: 2017-05-02T09:43:22 | stars: 0 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 848 bytes | extension: py
content:
from django.conf.urls import url, include
from django.contrib import admin
from clients.views import ClientList, ClientDetail, ClientDelete, ClientCreate, ClientScores, download_xlsx
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^client_detail/(?P<pk>[0-9]+)/$', ClientDetail.as_view(), name='client_detail'),
url(r'^client_delete/(?P<pk>[0-9]+)/$', ClientDelete.as_view(), name='client_delete'),
url(r'^client_create/$', ClientCreate.as_view(), name='client_create'),
url(r'^client_scores/$', ClientScores.as_view(), name='client_scores'),
url(r'^download_file/$', download_xlsx, name='download_file'),
url(r'^$', ClientList.as_view(), name='client_list'),
url(r'^api/', include('clients.api.urls', namespace="api")),
url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
]
author: lse1983@mail.ru
======================================================================
repo: amyth/gigzag | path: /apps/gigs/models.py | branch: refs/heads/master | license: none
blob_id: 50da310865ec2f297b42c59a7c926b4a47b868cb | directory_id: ddd5109d8ce1832423448d7f4932255118d56e36 | content_id: 5fffeca65ca75cda079ff70d879f2efb2adac79b
snapshot_id: fe5b52677db2ba17083d4fdb059c0a12e6b3646d | revision_id: 1569fd40aa02c8b7921ad6a7f8f4e8b9ad4cf652 | github_id: 83305491
visit_date: 2021-01-21T07:03:28 | revision_date: 2017-02-27T12:01:02 | committer_date: 2017-02-27T12:01:02 | stars: 0 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 3,769 bytes | extension: py
content:
import os
import uuid
from django.contrib.auth.models import User
from django.contrib.contenttypes.fields import GenericRelation
from django.core.files import File
from django.db import models
from django.db.models import Q
from django_comments.models import Comment
from liked.models import Like
from moviepy.editor import VideoFileClip
from apps.gigs import choices
from core.models import SluggedModel
from utils.choices import GIG_PRIVACY_LEVELS
GIG_TYPES = (
(1, 'Public Gig'),
(2, 'Home Gig'),
)
GIG_STATUSES = (
(0, 'Inactive'),
(1, 'Active'),
)
class GigLocation(models.Model):
address = models.TextField()
latitude = models.DecimalField(max_digits=9, decimal_places=6, blank=True,
null=True)
longitude = models.DecimalField(max_digits=9, decimal_places=6, blank=True,
null=True)
city = models.CharField(max_length=5, choices=choices.CITIES)
def __unicode__(self):
return u'%s, %s' % (self.address, self.get_city_display())
class Gig(SluggedModel):
## Core fields
description = models.TextField(blank=True, null=True)
gigtime = models.DateTimeField(blank=True, null=True)
gig_type = models.IntegerField(choices=GIG_TYPES, default=1)
no_of_pax = models.IntegerField(default=0)
location = models.ForeignKey(GigLocation, blank=True, null=True)
artists = models.ManyToManyField("accounts.Artist", blank=True)
tags = models.ManyToManyField("tags.Tag", blank=True)
cover = models.ImageField(upload_to="images/cover/", null=True, blank=True)
video = models.FileField(upload_to="video/cover", null=True, blank=True)
youtube_link = models.URLField(null=True, blank=True)
## activity fields
likes = GenericRelation(Like)
comments = GenericRelation(Comment, object_id_field='object_pk')
## contact fields
band_name = models.CharField(max_length=150)
phone = models.CharField(max_length=14, blank=True, null=True)
email = models.EmailField(blank=True, null=True)
## settings and privacy
privacy = models.IntegerField(choices=GIG_PRIVACY_LEVELS, default=1)
## Backend fields
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
status = models.IntegerField(choices=GIG_STATUSES, default=1)
moderated = models.BooleanField(default=False)
created_by = models.ForeignKey(User, related_name='created_gigs')
rsvp = models.ManyToManyField(User, related_name='went_to', blank=True)
def __unicode__(self):
return u'%s' % self.title
def save(self, *args, **kwargs):
url = self.youtube_link
if self.youtube_link:
if 'youtu.be' in url:
vid = url.split('/')[-1]
url = "https://www.youtube.com/embed/%s" % vid
elif 'watch?v=' in url:
url = url.replace('watch?v=', "embed/")
self.youtube_link = url.split('&')[0]
return super(Gig, self).save(*args, **kwargs)
@property
def human_time(self):
human_date = self.gigtime.date().strftime('%d/%m/%Y')
human_time = self.gigtime.strftime('%I:%M %p')
return '%s, %s' % (human_date, human_time)
@property
def detail_link(self):
return "http://gigzag.in/#/gigs/%s" % self.id
@property
def host_history(self):
gigs = self.created_by.created_gigs.filter(~Q(id=self.id), moderated=True)
return ",".join([gig.title for gig in gigs][:5])
@property
def v_youtube_link(self):
return self.youtube_link.replace('embed', 'v')
@property
def v_youtube_image(self):
iurl = "https://i.ytimg.com/vi/%s/sddefault.jpg"
vid = self.youtube_link.split('/')[-1]
print iurl % vid
return iurl % vid
author: aroras.official@gmail.com
======================================================================
repo: MrHamdulay/csc3-capstone | path: /examples/data/Assignment_7/grcdea001/question1.py | branch: refs/heads/master | license: none
blob_id: aaa4bf02509e564a891d5a05ae8f6daf2a153f8a | directory_id: 98c6ea9c884152e8340605a706efefbea6170be5 | content_id: dba5df6eec999efec76cbe617d7ebd465574d3a3
snapshot_id: 479d659e1dcd28040e83ebd9e3374d0ccc0c6817 | revision_id: 6f0fa0fa1555ceb1b0fb33f25e9694e68b6a53d2 | github_id: 22372174
visit_date: 2021-03-12T21:55:57 | revision_date: 2014-09-22T02:22:22 | committer_date: 2014-09-22T02:22:22 | stars: 0 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 357 bytes | extension: py
content:
"""Program to print out a list of strings without duplicates, assignment 7 question 1
Dean Gracey
27 April 2014"""
word=input("Enter strings (end with DONE):\n")
words = []
while word!= "DONE":
if not word in words:
words.append(word)
word = input("")
print("\nUnique list:")
for i in words:
    print(i)
author: jarr2000@gmail.com
======================================================================
repo: IL2HorusTeam/il2fb-ds-events | path: /il2fb/ds/events/definitions/version.py | branch: refs/heads/main | license: MIT (permissive)
blob_id: 054f71754967e78023ad8b59df9cff86e239ca30 | directory_id: eb71f3494fb00708e6d5e8c332f27c4026063339 | content_id: fac6811fd60d1fe2728ff87ecfeae3f0adc46026
snapshot_id: dab5fc7ec0b1389c5fc1e975f9134e6ffe35bdee | revision_id: fb71ffbce63ac1b3b3a27263250e021f9543ba9f | github_id: 310578022
visit_date: 2023-02-01T16:29:18 | revision_date: 2020-12-22T17:53:45 | committer_date: 2020-12-22T17:53:45 | stars: 2 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 223 bytes | extension: py
content:
VERSION_SUFFIX = "a0"
VERSION_PATCH = "0" + (VERSION_SUFFIX or "")
VERSION_MINOR = "0"
VERSION_MAJOR = "1"
VERSION_INFO = (VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH)
VERSION = ".".join([str(x) for x in VERSION_INFO])
author: oblovatniy@gmail.com
======================================================================
repo: agtt/ebay-openapi-inventory | path: /openapi_client/model/ebay_offer_details_with_all.py | branch: refs/heads/master | license: none
blob_id: 240f1596019daf492691e74383b4bb693596ba6d | directory_id: db9ff8accaa4d8d4a96d3f9122c0fdc5e83ea2a5 | content_id: 372ae213d24be2c51c800d49dfbb5a7dfa0310cb
snapshot_id: 4754cdc8b6765acdb34f6b8f89b017ccbc6b1d2b | revision_id: d990c26f16e811431892ac6401c73c4599c2d414 | github_id: 386039734
visit_date: 2023-06-17T10:53:43 | revision_date: 2021-07-14T18:32:38 | committer_date: 2021-07-14T18:32:38 | stars: 0 | forks: 0
encoding: UTF-8 | language: Python | is_vendor: false | is_generated: false | size: 29,026 bytes | extension: py
content:
"""
Inventory API
The Inventory API is used to create and manage inventory, and then to publish and manage this inventory on an eBay marketplace. There are also methods in this API that will convert eligible, active eBay listings into the Inventory API model. # noqa: E501
The version of the OpenAPI document: 1.13.0
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from openapi_client.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
from ..model_utils import OpenApiModel
from openapi_client.exceptions import ApiAttributeError
def lazy_import():
from openapi_client.model.charity import Charity
from openapi_client.model.listing_details import ListingDetails
from openapi_client.model.listing_policies import ListingPolicies
from openapi_client.model.pricing_summary import PricingSummary
from openapi_client.model.tax import Tax
globals()['Charity'] = Charity
globals()['ListingDetails'] = ListingDetails
globals()['ListingPolicies'] = ListingPolicies
globals()['PricingSummary'] = PricingSummary
globals()['Tax'] = Tax
class EbayOfferDetailsWithAll(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
@cached_property
def additional_properties_type():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
"""
lazy_import()
return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'available_quantity': (int,), # noqa: E501
'category_id': (str,), # noqa: E501
'charity': (Charity,), # noqa: E501
'format': (str,), # noqa: E501
'hide_buyer_details': (bool,), # noqa: E501
'include_catalog_product_details': (bool,), # noqa: E501
'listing': (ListingDetails,), # noqa: E501
'listing_description': (str,), # noqa: E501
'listing_duration': (str,), # noqa: E501
'listing_policies': (ListingPolicies,), # noqa: E501
'listing_start_date': (str,), # noqa: E501
'lot_size': (int,), # noqa: E501
'marketplace_id': (str,), # noqa: E501
'merchant_location_key': (str,), # noqa: E501
'offer_id': (str,), # noqa: E501
'pricing_summary': (PricingSummary,), # noqa: E501
'quantity_limit_per_buyer': (int,), # noqa: E501
'secondary_category_id': (str,), # noqa: E501
'sku': (str,), # noqa: E501
'status': (str,), # noqa: E501
'store_category_names': ([str],), # noqa: E501
'tax': (Tax,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'available_quantity': 'availableQuantity', # noqa: E501
'category_id': 'categoryId', # noqa: E501
'charity': 'charity', # noqa: E501
'format': 'format', # noqa: E501
'hide_buyer_details': 'hideBuyerDetails', # noqa: E501
'include_catalog_product_details': 'includeCatalogProductDetails', # noqa: E501
'listing': 'listing', # noqa: E501
'listing_description': 'listingDescription', # noqa: E501
'listing_duration': 'listingDuration', # noqa: E501
'listing_policies': 'listingPolicies', # noqa: E501
'listing_start_date': 'listingStartDate', # noqa: E501
'lot_size': 'lotSize', # noqa: E501
'marketplace_id': 'marketplaceId', # noqa: E501
'merchant_location_key': 'merchantLocationKey', # noqa: E501
'offer_id': 'offerId', # noqa: E501
'pricing_summary': 'pricingSummary', # noqa: E501
'quantity_limit_per_buyer': 'quantityLimitPerBuyer', # noqa: E501
'secondary_category_id': 'secondaryCategoryId', # noqa: E501
'sku': 'sku', # noqa: E501
'status': 'status', # noqa: E501
'store_category_names': 'storeCategoryNames', # noqa: E501
'tax': 'tax', # noqa: E501
}
read_only_vars = {
}
_composed_schemas = {}
@classmethod
@convert_js_args_to_python_args
def _from_openapi_data(cls, *args, **kwargs): # noqa: E501
"""EbayOfferDetailsWithAll - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
available_quantity (int): This integer value indicates the quantity of the inventory item (specified by the sku value) that will be available for purchase by buyers shopping on the eBay site specified in the marketplaceId field. For unpublished offers where the available quantity has yet to be set, the availableQuantity value is set to 0.. [optional] # noqa: E501
category_id (str): The unique identifier of the primary eBay category that the inventory item is listed under. This field is always returned for published offers, but is only returned if set for unpublished offers.. [optional] # noqa: E501
charity (Charity): [optional] # noqa: E501
format (str): This enumerated value indicates the listing format of the offer. For implementation help, refer to <a href='https://developer.ebay.com/api-docs/sell/inventory/types/slr:FormatTypeEnum'>eBay API documentation</a>. [optional] # noqa: E501
hide_buyer_details (bool): This field is returned as true if the private listing feature has been enabled for the offer. Sellers may want to use this feature when they believe that a listing's potential bidders/buyers would not want their obfuscated user IDs (and feedback scores) exposed to other users. This field is always returned even if not explicitly set in the offer. It defaults to false, so will get returned as false if seller does not set this feature with a 'Create' or 'Update' offer method.. [optional] # noqa: E501
include_catalog_product_details (bool): This field indicates whether or not eBay product catalog details are applied to a listing. A value of true indicates the listing corresponds to the eBay product associated with the provided product identifier. The product identifier is provided in createOrReplaceInventoryItem. Note: Though the includeCatalogProductDetails parameter is not required to be submitted in the request, the parameter defaults to 'true' if omitted.. [optional] # noqa: E501
listing (ListingDetails): [optional] # noqa: E501
listing_description (str): The description of the eBay listing that is part of the unpublished or published offer. This field is always returned for published offers, but is only returned if set for unpublished offers. Max Length: 500000 (which includes HTML markup/tags). [optional] # noqa: E501
listing_duration (str): This field indicates the number of days that the listing will be active. This field is returned for both auction and fixed-price listings; however, the value returned for fixed-price listings will always be GTC. The GTC (Good 'Til Cancelled) listings are automatically renewed each calendar month until the seller decides to end the listing. Note: If the listing duration expires for an auction offer, the listing then becomes available as a fixed-price offer and will be GTC. For implementation help, refer to <a href='https://developer.ebay.com/api-docs/sell/inventory/types/slr:ListingDurationEnum'>eBay API documentation</a>. [optional] # noqa: E501
listing_policies (ListingPolicies): [optional] # noqa: E501
listing_start_date (str): This timestamp is the date/time that the seller set for the scheduled listing. With the scheduled listing feature, the seller can set a time in the future that the listing will become active, instead of the listing becoming active immediately after a publishOffer call. Scheduled listings do not always start at the exact date/time specified by the seller, but the date/time of the timestamp returned in getOffer/getOffers will be the same as the timestamp passed into a 'Create' or 'Update' offer call. This field is returned if set for an offer.. [optional] # noqa: E501
lot_size (int): This field is only applicable and returned if the listing is a lot listing. A lot listing is a listing that has multiple quantity of the same product. An example would be a set of four identical car tires. The integer value in this field is the number of identical items being sold through the lot listing.. [optional] # noqa: E501
marketplace_id (str): This enumeration value is the unique identifier of the eBay site on which the offer is available, or will be made available. For implementation help, refer to <a href='https://developer.ebay.com/api-docs/sell/inventory/types/slr:MarketplaceEnum'>eBay API documentation</a>. [optional] # noqa: E501
merchant_location_key (str): The unique identifier of the inventory location. This identifier is set up by the merchant when the inventory location is first created with the createInventoryLocation call. Once this value is set for an inventory location, it can not be modified. To get more information about this inventory location, the getInventoryLocation call can be used, passing in this value at the end of the call URI. This field is always returned for published offers, but is only returned if set for unpublished offers. Max length: 36. [optional] # noqa: E501
offer_id (str): The unique identifier of the offer. This identifier is used in many offer-related calls, and it is also used in the bulkUpdatePriceQuantity call.. [optional] # noqa: E501
pricing_summary (PricingSummary): [optional] # noqa: E501
quantity_limit_per_buyer (int): This field is only applicable and set if the seller wishes to set a restriction on the purchase quantity of an inventory item per seller. If this field is set by the seller for the offer, then each distinct buyer may purchase up to, but not exceed the quantity in this field. So, if this field's value is 5, each buyer may purchase a quantity of the inventory item between one and five, and the purchases can occur in one multiple-quantity purchase, or over multiple transactions. If a buyer attempts to purchase one or more of these products, and the cumulative quantity will take the buyer beyond the quantity limit, that buyer will be blocked from that purchase.. [optional] # noqa: E501
secondary_category_id (str): The unique identifier for a secondary category. This field is applicable if the seller decides to list the item under two categories. Sellers can use the getCategorySuggestions method of the Taxonomy API to retrieve suggested category ID values. A fee may be charged when adding a secondary category to a listing. Note: You cannot list US eBay Motors vehicles in two categories. However, you can list Parts & Accessories in two categories.. [optional] # noqa: E501
sku (str): This is the seller-defined SKU value of the product in the offer. Max Length: 50. [optional] # noqa: E501
status (str): The enumeration value in this field specifies the status of the offer - either PUBLISHED or UNPUBLISHED. For implementation help, refer to <a href='https://developer.ebay.com/api-docs/sell/inventory/types/slr:OfferStatusEnum'>eBay API documentation</a>. [optional] # noqa: E501
store_category_names ([str]): This container is returned if the seller chose to place the inventory item into one or two eBay store categories that the seller has set up for their eBay store. The string value(s) in this container will be the full path(s) to the eBay store categories, as shown below: "storeCategoryNames": [ "/Fashion/Men/Shirts", "/Fashion/Men/Accessories" ],. [optional] # noqa: E501
tax (Tax): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
self = super(OpenApiModel, cls).__new__(cls)
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
return self
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""EbayOfferDetailsWithAll - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
available_quantity (int): This integer value indicates the quantity of the inventory item (specified by the sku value) that will be available for purchase by buyers shopping on the eBay site specified in the marketplaceId field. For unpublished offers where the available quantity has yet to be set, the availableQuantity value is set to 0.. [optional] # noqa: E501
category_id (str): The unique identifier of the primary eBay category that the inventory item is listed under. This field is always returned for published offers, but is only returned if set for unpublished offers.. [optional] # noqa: E501
charity (Charity): [optional] # noqa: E501
format (str): This enumerated value indicates the listing format of the offer. For implementation help, refer to <a href='https://developer.ebay.com/api-docs/sell/inventory/types/slr:FormatTypeEnum'>eBay API documentation</a>. [optional] # noqa: E501
hide_buyer_details (bool): This field is returned as true if the private listing feature has been enabled for the offer. Sellers may want to use this feature when they believe that a listing's potential bidders/buyers would not want their obfuscated user IDs (and feedback scores) exposed to other users. This field is always returned even if not explicitly set in the offer. It defaults to false, so will get returned as false if seller does not set this feature with a 'Create' or 'Update' offer method.. [optional] # noqa: E501
include_catalog_product_details (bool): This field indicates whether or not eBay product catalog details are applied to a listing. A value of true indicates the listing corresponds to the eBay product associated with the provided product identifier. The product identifier is provided in createOrReplaceInventoryItem. Note: Though the includeCatalogProductDetails parameter is not required to be submitted in the request, the parameter defaults to 'true' if omitted.. [optional] # noqa: E501
listing (ListingDetails): [optional] # noqa: E501
listing_description (str): The description of the eBay listing that is part of the unpublished or published offer. This field is always returned for published offers, but is only returned if set for unpublished offers. Max Length: 500000 (which includes HTML markup/tags). [optional] # noqa: E501
listing_duration (str): This field indicates the number of days that the listing will be active. This field is returned for both auction and fixed-price listings; however, the value returned for fixed-price listings will always be GTC. The GTC (Good 'Til Cancelled) listings are automatically renewed each calendar month until the seller decides to end the listing. Note: If the listing duration expires for an auction offer, the listing then becomes available as a fixed-price offer and will be GTC. For implementation help, refer to <a href='https://developer.ebay.com/api-docs/sell/inventory/types/slr:ListingDurationEnum'>eBay API documentation</a>. [optional] # noqa: E501
listing_policies (ListingPolicies): [optional] # noqa: E501
listing_start_date (str): This timestamp is the date/time that the seller set for the scheduled listing. With the scheduled listing feature, the seller can set a time in the future that the listing will become active, instead of the listing becoming active immediately after a publishOffer call. Scheduled listings do not always start at the exact date/time specified by the seller, but the date/time of the timestamp returned in getOffer/getOffers will be the same as the timestamp passed into a 'Create' or 'Update' offer call. This field is returned if set for an offer.. [optional] # noqa: E501
lot_size (int): This field is only applicable and returned if the listing is a lot listing. A lot listing is a listing that has multiple quantity of the same product. An example would be a set of four identical car tires. The integer value in this field is the number of identical items being sold through the lot listing.. [optional] # noqa: E501
marketplace_id (str): This enumeration value is the unique identifier of the eBay site on which the offer is available, or will be made available. For implementation help, refer to <a href='https://developer.ebay.com/api-docs/sell/inventory/types/slr:MarketplaceEnum'>eBay API documentation</a>. [optional] # noqa: E501
merchant_location_key (str): The unique identifier of the inventory location. This identifier is set up by the merchant when the inventory location is first created with the createInventoryLocation call. Once this value is set for an inventory location, it can not be modified. To get more information about this inventory location, the getInventoryLocation call can be used, passing in this value at the end of the call URI. This field is always returned for published offers, but is only returned if set for unpublished offers. Max length: 36. [optional] # noqa: E501
offer_id (str): The unique identifier of the offer. This identifier is used in many offer-related calls, and it is also used in the bulkUpdatePriceQuantity call.. [optional] # noqa: E501
pricing_summary (PricingSummary): [optional] # noqa: E501
quantity_limit_per_buyer (int): This field is only applicable and set if the seller wishes to set a restriction on the purchase quantity of an inventory item per seller. If this field is set by the seller for the offer, then each distinct buyer may purchase up to, but not exceed the quantity in this field. So, if this field's value is 5, each buyer may purchase a quantity of the inventory item between one and five, and the purchases can occur in one multiple-quantity purchase, or over multiple transactions. If a buyer attempts to purchase one or more of these products, and the cumulative quantity will take the buyer beyond the quantity limit, that buyer will be blocked from that purchase.. [optional] # noqa: E501
secondary_category_id (str): The unique identifier for a secondary category. This field is applicable if the seller decides to list the item under two categories. Sellers can use the getCategorySuggestions method of the Taxonomy API to retrieve suggested category ID values. A fee may be charged when adding a secondary category to a listing. Note: You cannot list US eBay Motors vehicles in two categories. However, you can list Parts & Accessories in two categories.. [optional] # noqa: E501
sku (str): This is the seller-defined SKU value of the product in the offer. Max Length: 50. [optional] # noqa: E501
status (str): The enumeration value in this field specifies the status of the offer - either PUBLISHED or UNPUBLISHED. For implementation help, refer to <a href='https://developer.ebay.com/api-docs/sell/inventory/types/slr:OfferStatusEnum'>eBay API documentation</a>. [optional] # noqa: E501
store_category_names ([str]): This container is returned if the seller chose to place the inventory item into one or two eBay store categories that the seller has set up for their eBay store. The string value(s) in this container will be the full path(s) to the eBay store categories, as shown below: "storeCategoryNames": [ "/Fashion/Men/Shirts", "/Fashion/Men/Accessories" ],. [optional] # noqa: E501
tax (Tax): [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
if var_name in self.read_only_vars:
raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate "
f"class with read only attributes.")
| [
"csd@hotmail.com.tr"
] | csd@hotmail.com.tr |
d1e5290787f370b38cb318a69f05a1d3326a3937 | bbcae24e3726106d5e86cb2237f2b59177ade67a | /indico/indico.py | a50db692036f3b47bf211a0c73ae83245de7c5d3 | [] | no_license | the16thpythonist/IndicoWp | beba497ba24d9dc6e90b02c9fe645810ad085236 | 86be5f7193ed65add4dae245ee288052a407c603 | refs/heads/master | 2021-05-01T19:24:28.254908 | 2018-03-06T07:04:45 | 2018-03-06T07:04:45 | 121,020,882 | 2 | 0 | null | 2018-02-11T19:03:32 | 2018-02-10T14:10:23 | Python | UTF-8 | Python | false | false | 8,457 | py | import IndicoWp.config as config
import IndicoWp.indico.event as event
import logging
import requests
import json
import urllib.parse as urlparse
# TODO: Improve this by making the response dict the current item and querying it directly
class IndicoEventRequestController:
"""
:ivar config: The config parser object for the project
:ivar logger: The logger for 'IndicoRequest'
:ivar base_url: The url of the indico website which is to be used for the program
:ivar api_key: The api key to be used to identify the requests to the api of the site
:ivar processor: The IndicoEventProcessor, that turns the dict structures containing the info about the events
into the Event objects
"""
def __init__(self):
self.config = config.Config.get_instance()
self.logger = logging.getLogger('IndicoRequest')
self.base_url = None
self.api_key = None
self.processor = IndicoEventProcessor()
def set(self, observation):
self.base_url = observation.url
self.api_key = observation.key
def set_url(self, base_url):
self.base_url = base_url
def set_key(self, key):
self.api_key = key
def get_category_events(self, category_id):
"""
A list of Event objects for each event in the category given by its id.
:param category_id: The id of the indico category for which to get the events
:return: [event.Event]
"""
if self.base_url is None:
raise ValueError('no base url set for IndicoEventRequestController')
response_dict = self.request_category(category_id)
event_dict_list = response_dict['results']
event_list = []
for event_dict in event_dict_list:
indico_event = self.processor.process(event_dict)
event_list.append(indico_event)
return event_list
def request_category(self, category_id):
"""
Sends a request for the given category id and returns the json decoded dict structure of the response
:param category_id: The id of the indico category for which to get the events
:return: dict
"""
# Assembling the part of the url, that uses the category id
category_string = '/categ/{}.json'.format(category_id)
url_query = {
'apikey': self.api_key
}
url = '{}{}?{}'.format(
self.base_url,
category_string,
urlparse.urlencode(url_query)
)
self.logger.debug(url)
response = requests.get(url)
response_dict = json.loads(response.text)
return response_dict
class IndicoEventProcessor:
"""
:ivar dict: The dict object containing the info about the event that is currently being processed. It is
replaced with each new event to process
:ivar logger: The logger for IndicoProcessing
:cvar _DEBUG_STRING_CHARACTER_COUNT: In case an item is missing from the processed dict, a log entry is made;
to identify which event the log message belongs to, a snippet of the string form of the processed dict is
appended. This variable specifies how many characters of that string to use
_DEBUG_STRING_CHARACTER_COUNT = 30
def __init__(self):
self.dict = None
self.logger = logging.getLogger('IndicoProcessing')
def process(self, event_dict):
"""
Processes the given event dict into an event.IndicoEvent object and returns it
:param event_dict: The dict structure, that contains the event data
:return: Event
"""
self.dict = event_dict
return self._event()
def _event(self):
"""
Creates the actual Event object from all the parameter objects
:return: Event
"""
return event.IndicoEvent(
self._event_id(),
self._event_description(),
self._event_location(),
self._event_time(),
self._event_creator(),
self._event_meta()
)
def _event_id(self):
"""
The indico id for the event
:return: the int id
"""
return int(self._query_dict('id', 0))
def _event_description(self):
"""
Creates the EventDescription from the currently processed dict
:return: EventDescription
"""
description = self._query_dict('description', '')
title = self._query_dict('title', '')
event_type = self._query_dict('type', '')
event_description = event.EventDescription(title, description, event_type)
return event_description
def _event_creator(self):
"""
Creates the EventCreator from the currently processed dict
:return: EventCreator
"""
creator_full_name = self._query_dict('creator/fullName', '')
if not creator_full_name == '':
creator_name_split = creator_full_name.split(',')
creator_first_name = creator_name_split[1].replace(' ', '').lower()
creator_last_name = creator_name_split[0].replace(' ', '').lower()
else:
creator_first_name = ''
creator_last_name = ''
creator_id = int(self._query_dict('creator/id', 0))
creator_affiliation = self._query_dict('creator/affiliation', '')
event_creator = event.EventCreator(creator_id, creator_first_name, creator_last_name, creator_affiliation)
return event_creator
def _event_time(self):
"""
Creates the EventTime object from the currently processed dict
:return: EventTime
"""
date = self._query_dict('startDate/date', '')
time = self._query_dict('startDate/time', '')
event_time = event.EventTime(time, date)
return event_time
def _creation_time(self):
date = self._query_dict('creationDate/date', '')
time = self._query_dict('creationDate/time', '')
if '.' in time:
time = time[:time.find('.')]
event_time = event.EventTime(time, date)
return event_time
def _event_meta(self):
creation_time = self._creation_time()
url = self._query_dict('url', '')
event_meta = event.EventMeta(url, creation_time)
return event_meta
def _event_location(self):
"""
Creates the EventLocation object from the currently processed dict.
:return: EventLocation
"""
location = self._query_dict('location', None)
address = self._query_dict('address', None)
event_location = event.EventLocation(location, address)
return event_location
def _query_dict(self, dict_query, default):
"""
Uses a single string to perform a possibly multi-layer get from the currently processed dict. The keys for
the nested dict structures have to be separated by '/' to indicate that they are intended for the next layer.
A default value has to be given in case the query fails.
If the query fails a debug entry is added to the log file.
:param dict_query: The string query for the dict
:param default: The default value to be returned in case the query fails
:return: The value in the deepest dict layer queried
"""
try:
keys = dict_query.split('/')
current_dict = self.dict
for key in keys:
current_dict = current_dict[key]
return current_dict
except (KeyError, ValueError):
self.logger.debug('There is no key "{}" in the event dict "{}..."'.format(
dict_query,
self._debug_string
))
return default
@property
def _debug_string(self):
"""
The first few characters of the string representation of the currently processed dict, used to identify the
event when debugging the log files.
The exact amount of characters that is used can be set by the static class variable
'_DEBUG_STRING_CHARACTER_COUNT'
:return: The truncated string of the dict
"""
dict_string = str(self.dict)
# Only taking the first few characters, the amount specified by the static field
if len(dict_string) > self._DEBUG_STRING_CHARACTER_COUNT:
dict_string = dict_string[:self._DEBUG_STRING_CHARACTER_COUNT]
return dict_string
| [
"jonseb1998@gmail.com"
] | jonseb1998@gmail.com |
aa324cc4383ae2314514c934cf5ae4d5f72ade34 | c7a31023a11b376e543e41b00212dc7eca07b386 | /cryptofeed/kraken/kraken.py | 5bb79d6c9f884c708f916bbad8370bd7e22906d4 | [
"LicenseRef-scancode-warranty-disclaimer",
"Python-2.0"
] | permissive | orens77/cryptofeed | 5d2123f17745be6bf2eb7bf5a96547006e16d083 | d155b30eadb015c7167141a3dbbb8d42af6ae4f0 | refs/heads/master | 2020-04-25T17:45:22.849820 | 2019-06-04T14:51:11 | 2019-06-04T14:51:11 | 172,960,015 | 0 | 0 | NOASSERTION | 2019-02-27T17:31:47 | 2019-02-27T17:31:46 | null | UTF-8 | Python | false | false | 6,183 | py | '''
Copyright (C) 2017-2019 Bryant Moscon - bmoscon@gmail.com
Please see the LICENSE file for the terms and conditions
associated with this software.
'''
import json
import logging
from decimal import Decimal
import time
from sortedcontainers import SortedDict as sd
from cryptofeed.feed import Feed
from cryptofeed.defines import TRADES, BUY, SELL, BID, ASK, TICKER, L2_BOOK, KRAKEN
from cryptofeed.standards import pair_exchange_to_std
LOG = logging.getLogger('feedhandler')
class Kraken(Feed):
id = KRAKEN
def __init__(self, pairs=None, channels=None, callbacks=None, depth=10, **kwargs):
super().__init__('wss://ws.kraken.com', pairs=pairs, channels=channels, callbacks=callbacks, **kwargs)
self.book_depth = depth
def __reset(self):
self.l2_book = {}
self.channel_map = {}
async def subscribe(self, websocket):
self.__reset()
if self.config:
for chan in self.config:
sub = {"name": chan}
if 'book' in chan:
sub['depth'] = self.book_depth
await websocket.send(json.dumps({
"event": "subscribe",
"pair": self.config[chan],
"subscription": sub
}))
else:
for chan in self.channels:
sub = {"name": chan}
if 'book' in chan:
sub['depth'] = self.book_depth
await websocket.send(json.dumps({
"event": "subscribe",
"pair": self.pairs,
"subscription": sub
}))
async def _trade(self, msg, pair):
"""
example message:
[1,[["3417.20000","0.21222200","1549223326.971661","b","l",""]]]
channel id, then for each trade: price, amount, timestamp, side, order type (limit/market), misc
"""
for trade in msg[1]:
price, amount, timestamp, side, _, _ = trade
await self.callbacks[TRADES](feed=self.id,
pair=pair,
side=BUY if side == 'b' else SELL,
amount=Decimal(amount),
price=Decimal(price),
order_id=None,
timestamp=float(timestamp))
async def _ticker(self, msg, pair):
"""
[93, {'a': ['105.85000', 0, '0.46100000'], 'b': ['105.77000', 45, '45.00000000'], 'c': ['105.83000', '5.00000000'], 'v': ['92170.25739498', '121658.17399954'], 'p': ['107.58276', '107.95234'], 't': [4966, 6717], 'l': ['105.03000', '105.03000'], 'h': ['110.33000', '110.33000'], 'o': ['109.45000', '106.78000']}]
channel id, asks: price, wholeLotVol, vol, bids: price, wholeLotVol, close: ...,, vol: ..., VWAP: ..., trades: ..., low: ...., high: ..., open: ...
"""
await self.callbacks[TICKER](feed=self.id,
pair=pair,
bid=Decimal(msg[1]['b'][0]),
ask=Decimal(msg[1]['a'][0]))
async def _book(self, msg, pair):
delta = {BID: [], ASK: []}
msg = msg[1:]
if len(msg) == 1 and 'as' in msg[0]:
# Snapshot
self.l2_book[pair] = {BID: sd({
Decimal(update[0]): Decimal(update[1]) for update in msg[0]['bs']
}), ASK: sd({
Decimal(update[0]): Decimal(update[1]) for update in msg[0]['as']
})}
await self.book_callback(pair, L2_BOOK, True, delta, time.time())
else:
for m in msg:
for s, updates in m.items():
side = BID if s == 'b' else ASK
for update in updates:
price, size, _ = update
price = Decimal(price)
size = Decimal(size)
if size == 0:
# Per Kraken's technical support
# they deliver erroneous deletion messages
# periodically which should be ignored
if price in self.l2_book[pair][side]:
del self.l2_book[pair][side][price]
delta[side].append((price, 0))
else:
delta[side].append((price, size))
self.l2_book[pair][side][price] = size
for side in (BID, ASK):
while len(self.l2_book[pair][side]) > self.book_depth:
del_price = self.l2_book[pair][side].items()[0 if side == BID else -1][0]
del self.l2_book[pair][side][del_price]
delta[side].append((del_price, 0))
await self.book_callback(pair, L2_BOOK, False, delta, time.time())
async def message_handler(self, msg):
msg = json.loads(msg, parse_float=Decimal)
if isinstance(msg, list):
if self.channel_map[msg[0]][0] == 'trade':
await self._trade(msg, self.channel_map[msg[0]][1])
elif self.channel_map[msg[0]][0] == 'ticker':
await self._ticker(msg, self.channel_map[msg[0]][1])
elif self.channel_map[msg[0]][0] == 'book':
await self._book(msg, self.channel_map[msg[0]][1])
else:
LOG.warning("%s: No mapping for message %s", self.id, msg)
else:
if msg['event'] == 'heartbeat':
return
elif msg['event'] == 'systemStatus':
return
elif msg['event'] == 'subscriptionStatus' and msg['status'] == 'subscribed':
self.channel_map[msg['channelID']] = (msg['subscription']['name'], pair_exchange_to_std(msg['pair']))
else:
LOG.warning("%s: Invalid message type %s", self.id, msg)
| [
"bmoscon@gmail.com"
] | bmoscon@gmail.com |
dcb4c855cf299c37b70980412d0997e041832a97 | 3fb0ce33f00b96ae3808a32da44de3e887434afb | /.提出一覧/AtCoder/M-SOLUTIONSプロコンオープン2020/msol_D.py | 57fa6ba1c8ac9cddfb25c897c2153a9acee8c49d | [] | no_license | Yukikazari/kyoupuro | ca3d74d8db024b1988cd0ff00bf069ab739783d7 | 343de455c4344dbcfa4524b492f7f6205c9db26f | refs/heads/master | 2023-02-21T01:53:52.403729 | 2021-01-27T03:55:01 | 2021-01-27T03:55:01 | 282,222,950 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 377 | py | N = int(input())
A = list(map(int, input().split()))
money = 1000
kabu = 0
for i in range(len(A)-1):
if kabu != 0:
if A[i] >= A[i + 1]:
money += kabu * A[i]
kabu = 0
else:
if A[i] < A[i + 1]:
kabu = money // A[i]
money -= kabu * A[i]
if kabu != 0:
money += kabu * A[len(A) - 1]
print(money)
| [
"haya_nanakusa793@yahoo.co.jp"
] | haya_nanakusa793@yahoo.co.jp |
e70dabf76a8f2d14e74e6b2353114e4dafe9e649 | adbcc8ff2249dc9906095bf894d2923b197f8af2 | /examples/csj/s5/local/remove_pos.py | 3e354475e35113bfcbd8497b2fc1cf5ff9e61663 | [] | no_license | caochensi/neural_sp | de39d0919aeb6da28b90140be051c124f7efcc3f | 019247ccae7df461f852c5130ea127e395d071dc | refs/heads/master | 2020-04-15T14:49:52.408291 | 2019-01-06T10:52:36 | 2019-01-06T14:43:13 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,357 | py | #! /usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2018 Kyoto University (Hirofumi Inaguma)
# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
"""Remove POS tag."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import re
from tqdm import tqdm
parser = argparse.ArgumentParser()
parser.add_argument('text', type=str,
help='path to text file')
args = parser.parse_args()
def main():
with open(args.text, 'r') as f:
pbar = tqdm(total=len(open(args.text).readlines()))
for line in f:
line = unicode(line, 'utf-8').strip()
utt_id = line.split()[0]
words = line.split()[1:]
# Remove POS tag
text = ' '.join(list(map(lambda x: x.split('+')[0], words)))
# Remove <sp> (short pause)
text = text.replace('<sp>', '')
# Remove conseccutive spaces
text = re.sub(r'[\s]+', ' ', text)
# Remove the first and last spaces
if text[0] == ' ':
text = text[1:]
if text[-1] == ' ':
text = text[:-1]
line = utt_id + ' ' + text
print('%s' % line.encode('utf-8'))
pbar.update(1)
if __name__ == '__main__':
main()
| [
"hiro.mhbc@gmail.com"
] | hiro.mhbc@gmail.com |
bc9da59702741b477df186a1ee0522a1e21e0bf8 | 78144baee82268a550400bbdb8c68de524adc68f | /Production/python/Summer16v3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8_cff.py | a29d647b952c42a887fa8cf0e932bea45b6f89b9 | [] | no_license | tklijnsma/TreeMaker | e6989c03189b849aff2007bad22e2bfc6922a244 | 248f2c04cc690ef2e2202b452d6f52837c4c08e5 | refs/heads/Run2_2017 | 2023-05-26T23:03:42.512963 | 2020-05-12T18:44:15 | 2020-05-12T18:44:15 | 263,960,056 | 1 | 2 | null | 2020-09-25T00:27:35 | 2020-05-14T15:57:20 | null | UTF-8 | Python | false | false | 2,217 | py | import FWCore.ParameterSet.Config as cms
maxEvents = cms.untracked.PSet( input = cms.untracked.int32(-1) )
readFiles = cms.untracked.vstring()
secFiles = cms.untracked.vstring()
source = cms.Source ("PoolSource",fileNames = readFiles, secondaryFileNames = secFiles)
readFiles.extend( [
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/2ABD3630-3BF0-E811-B5A4-0CC47A4D76C6.root',
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/4A9BB0CE-89EF-E811-80C9-0CC47A7C357A.root',
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/5C851538-8DEF-E811-B889-0025905A60B2.root',
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/8A7D43D4-83EF-E811-8F5B-0025905AA9CC.root',
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/AC3F8080-6BEF-E811-8D1A-0CC47A7C35A4.root',
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/B05017BC-83EF-E811-BD46-0CC47A78A458.root',
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/C46EC3F6-6AEF-E811-8BFA-0CC47A4C8F2C.root',
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/E82B9840-66EF-E811-BE18-0025905A612E.root',
'/store/mc/RunIISummer16MiniAODv3/StealthSYY_2t6j_mStop-400_mN1-100_TuneCUEP8M1_13TeV-madgraphMLM-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/00000/FC012AD3-66EF-E811-A07D-003048FF9ABC.root',
] )
| [
"Alexx.Perloff@Colorado.edu"
] | Alexx.Perloff@Colorado.edu |
629c1017d878bc2ddc0df3e3b6ff940ed56c9452 | 4b29c3e3c8a2cad5071a3fb2ea674253c6f0ef21 | /pycharm/venv/Lib/site-packages/pip/_internal/network/session.py | e8e5ea5ba6ecb8483f2da8eda15aa5f4ea13066f | [] | no_license | yz9527-1/1YZ | a0303b00fd1c7f782b7e4219c52f9589dd3b27b7 | 5f843531d413202f4f4e48ed0c3d510db21f4396 | refs/heads/master | 2022-11-30T23:50:56.682852 | 2020-08-10T02:11:13 | 2020-08-10T02:11:13 | 286,354,211 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 15,208 | py | """PipSession and supporting code, containing all pip-specific
network request configuration and behavior.
"""
# The following comment should be removed at some point in the future.
# mypy: disallow-untyped-defs=False
import email.utils
import json
import logging
import mimetypes
import os
import platform
import sys
import warnings
from pip._vendor import requests, six, urllib3
from pip._vendor.cachecontrol import CacheControlAdapter
from pip._vendor.requests.adapters import BaseAdapter, HTTPAdapter
from pip._vendor.requests.models import Response
from pip._vendor.requests.structures import CaseInsensitiveDict
from pip._vendor.six.moves.urllib import parse as urllib_parse
from pip._vendor.urllib3.exceptions import InsecureRequestWarning
from pip import __version__
from pip._internal.network.auth import MultiDomainBasicAuth
from pip._internal.network.cache import SafeFileCache
# Import ssl from compat so the initial import occurs in only one place.
from pip._internal.utils.compat import has_tls, ipaddress
from pip._internal.utils.glibc import libc_ver
from pip._internal.utils.misc import (
build_url_from_netloc,
get_installed_version,
parse_netloc,
)
from pip._internal.utils.typing import MYPY_CHECK_RUNNING
from pip._internal.utils.urls import url_to_path
if MYPY_CHECK_RUNNING:
from typing import (
Iterator, List, Optional, Tuple, Union,
)
from pip._internal.models.link import Link
SecureOrigin = Tuple[str, str, Optional[Union[int, str]]]
logger = logging.getLogger(__name__)
# Ignore warning raised when using --trusted-host.
warnings.filterwarnings("ignore", category=InsecureRequestWarning)
SECURE_ORIGINS = [
# protocol, hostname, port
# Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
("https", "*", "*"),
("*", "localhost", "*"),
("*", "127.0.0.0/8", "*"),
("*", "::1/128", "*"),
("file", "*", None),
# ssh is always secure.
("ssh", "*", "*"),
] # type: List[SecureOrigin]
# These are environment variables present when running under various
# CI systems. For each variable, some CI systems that use the variable
# are indicated. The collection was chosen so that for each of a number
# of popular systems, at least one of the environment variables is used.
# This list is used to provide some indication of and lower bound for
# CI traffic to PyPI. Thus, it is okay if the list is not comprehensive.
# For more background, see: https://github.com/pypa/pip/issues/5499
CI_ENVIRONMENT_VARIABLES = (
# Azure Pipelines
'BUILD_BUILDID',
# Jenkins
'BUILD_ID',
# AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI
'CI',
# Explicit environment variable.
'PIP_IS_CI',
)
def looks_like_ci():
# type: () -> bool
"""
Return whether it looks like pip is running under CI.
"""
# We don't use the method of checking for a tty (e.g. using isatty())
# because some CI systems mimic a tty (e.g. Travis CI). Thus that
# method doesn't provide definitive information in either direction.
return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES)
def user_agent():
"""
Return a string representing the user agent.
"""
data = {
"installer": {"name": "pip", "version": __version__},
"python": platform.python_version(),
"implementation": {
"name": platform.python_implementation(),
},
}
if data["implementation"]["name"] == 'CPython':
data["implementation"]["version"] = platform.python_version()
elif data["implementation"]["name"] == 'PyPy':
if sys.pypy_version_info.releaselevel == 'final':
pypy_version_info = sys.pypy_version_info[:3]
else:
pypy_version_info = sys.pypy_version_info
data["implementation"]["version"] = ".".join(
[str(x) for x in pypy_version_info]
)
elif data["implementation"]["name"] == 'Jython':
# Complete Guess
data["implementation"]["version"] = platform.python_version()
elif data["implementation"]["name"] == 'IronPython':
# Complete Guess
data["implementation"]["version"] = platform.python_version()
if sys.platform.startswith("linux"):
from pip._vendor import distro
distro_infos = dict(filter(
lambda x: x[1],
zip(["name", "version", "id"], distro.linux_distribution()),
))
libc = dict(filter(
lambda x: x[1],
zip(["lib", "version"], libc_ver()),
))
if libc:
distro_infos["libc"] = libc
if distro_infos:
data["distro"] = distro_infos
if sys.platform.startswith("darwin") and platform.mac_ver()[0]:
data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]}
if platform.system():
data.setdefault("system", {})["name"] = platform.system()
if platform.release():
data.setdefault("system", {})["release"] = platform.release()
if platform.machine():
data["cpu"] = platform.machine()
if has_tls():
import _ssl as ssl
data["openssl_version"] = ssl.OPENSSL_VERSION
setuptools_version = get_installed_version("setuptools")
if setuptools_version is not None:
data["setuptools_version"] = setuptools_version
# Use None rather than False so as not to give the impression that
# pip knows it is not being run under CI. Rather, it is a null or
# inconclusive result. Also, we include some value rather than no
# value to make it easier to know that the check has been run.
data["ci"] = True if looks_like_ci() else None
user_data = os.environ.get("PIP_USER_AGENT_USER_DATA")
if user_data is not None:
data["user_data"] = user_data
return "{data[installer][name]}/{data[installer][version]} {json}".format(
data=data,
json=json.dumps(data, separators=(",", ":"), sort_keys=True),
)
class LocalFSAdapter(BaseAdapter):
def send(self, request, stream=None, timeout=None, verify=None, cert=None,
proxies=None):
pathname = url_to_path(request.url)
resp = Response()
resp.status_code = 200
resp.url = request.url
try:
stats = os.stat(pathname)
except OSError as exc:
resp.status_code = 404
resp.raw = exc
else:
modified = email.utils.formatdate(stats.st_mtime, usegmt=True)
content_type = mimetypes.guess_type(pathname)[0] or "text/plain"
resp.headers = CaseInsensitiveDict({
"Content-Type": content_type,
"Content-Length": stats.st_size,
"Last-Modified": modified,
})
resp.raw = open(pathname, "rb")
resp.close = resp.raw.close
return resp
def close(self):
pass
class InsecureHTTPAdapter(HTTPAdapter):
def cert_verify(self, conn, url, verify, cert):
super(InsecureHTTPAdapter, self).cert_verify(
conn=conn, url=url, verify=False, cert=cert
)
class InsecureCacheControlAdapter(CacheControlAdapter):
def cert_verify(self, conn, url, verify, cert):
super(InsecureCacheControlAdapter, self).cert_verify(
conn=conn, url=url, verify=False, cert=cert
)
class PipSession(requests.Session):
timeout = None # type: Optional[int]
def __init__(self, *args, **kwargs):
"""
:param trusted_hosts: Domains not to emit warnings for when not using
HTTPS.
"""
retries = kwargs.pop("retries", 0)
cache = kwargs.pop("cache", None)
trusted_hosts = kwargs.pop("trusted_hosts", []) # type: List[str]
index_urls = kwargs.pop("index_urls", None)
super(PipSession, self).__init__(*args, **kwargs)
# Namespace the attribute with "pip_" just in test to prevent
# possible conflicts with the base class.
self.pip_trusted_origins = [] # type: List[Tuple[str, Optional[int]]]
# Attach our User Agent to the request
self.headers["User-Agent"] = user_agent()
# Attach our Authentication handler to the session
self.auth = MultiDomainBasicAuth(index_urls=index_urls)
# Create our urllib3.Retry instance which will allow us to customize
# how we handle retries.
retries = urllib3.Retry(
# Set the total number of retries that a particular request can
# have.
total=retries,
# A 503 error from PyPI typically means that the Fastly -> Origin
# connection got interrupted in some way. A 503 error in general
# is typically considered a transient error so we'll go ahead and
# retry it.
# A 500 may indicate transient error in Amazon S3
# A 520 or 527 - may indicate transient error in CloudFlare
status_forcelist=[500, 503, 520, 527],
# Add a small amount of back off between failed requests in
# order to prevent hammering the service.
backoff_factor=0.25,
)
# Our Insecure HTTPAdapter disables HTTPS validation. It does not
# support caching so we'll use it for all http:// URLs.
# If caching is disabled, we will also use it for
# https:// hosts that we've marked as ignoring
# TLS errors for (trusted-hosts).
insecure_adapter = InsecureHTTPAdapter(max_retries=retries)
# We want to _only_ cache responses on securely fetched origins or when
# the host is specified as trusted. We do this because
# we can't validate the response of an insecurely/untrusted fetched
# origin, and we don't want someone to be able to poison the cache and
# require manual eviction from the cache to fix it.
if cache:
secure_adapter = CacheControlAdapter(
cache=SafeFileCache(cache),
max_retries=retries,
)
self._trusted_host_adapter = InsecureCacheControlAdapter(
cache=SafeFileCache(cache),
max_retries=retries,
)
else:
secure_adapter = HTTPAdapter(max_retries=retries)
self._trusted_host_adapter = insecure_adapter
self.mount("https://", secure_adapter)
self.mount("http://", insecure_adapter)
# Enable file:// urls
self.mount("file://", LocalFSAdapter())
for host in trusted_hosts:
self.add_trusted_host(host, suppress_logging=True)
def add_trusted_host(self, host, source=None, suppress_logging=False):
# type: (str, Optional[str], bool) -> None
"""
:param host: It is okay to provide a host that has previously been
added.
:param source: An optional source string, for logging where the host
string came from.
"""
if not suppress_logging:
msg = 'adding trusted host: {!r}'.format(host)
if source is not None:
msg += ' (from {})'.format(source)
logger.info(msg)
host_port = parse_netloc(host)
if host_port not in self.pip_trusted_origins:
self.pip_trusted_origins.append(host_port)
self.mount(
build_url_from_netloc(host) + '/',
self._trusted_host_adapter
)
if not host_port[1]:
# Mount wildcard ports for the same host.
self.mount(
build_url_from_netloc(host) + ':',
self._trusted_host_adapter
)
def iter_secure_origins(self):
# type: () -> Iterator[SecureOrigin]
for secure_origin in SECURE_ORIGINS:
yield secure_origin
for host, port in self.pip_trusted_origins:
yield ('*', host, '*' if port is None else port)
def is_secure_origin(self, location):
# type: (Link) -> bool
# Determine if this url used a secure transport mechanism
parsed = urllib_parse.urlparse(str(location))
origin_protocol, origin_host, origin_port = (
parsed.scheme, parsed.hostname, parsed.port,
)
# The protocol to use to see if the protocol matches.
# Don't count the repository type as part of the protocol: in
# cases such as "git+ssh", only use "ssh". (I.e., Only verify against
# the last scheme.)
origin_protocol = origin_protocol.rsplit('+', 1)[-1]
# Determine if our origin is a secure origin by looking through our
# hardcoded list of secure origins, as well as any additional ones
# configured on this PackageFinder instance.
for secure_origin in self.iter_secure_origins():
secure_protocol, secure_host, secure_port = secure_origin
if origin_protocol != secure_protocol and secure_protocol != "*":
continue
try:
addr = ipaddress.ip_address(
None
if origin_host is None
else six.ensure_text(origin_host)
)
network = ipaddress.ip_network(
six.ensure_text(secure_host)
)
except ValueError:
# We don't have both a valid address or a valid network, so
# we'll check this origin against hostnames.
if (
origin_host and
origin_host.lower() != secure_host.lower() and
secure_host != "*"
):
continue
else:
# We have a valid address and network, so see if the address
# is contained within the network.
if addr not in network:
continue
# Check to see if the port matches.
if (
origin_port != secure_port and
secure_port != "*" and
secure_port is not None
):
continue
# If we've gotten here, then this origin matches the current
# secure origin and we should return True
return True
# If we've gotten to this point, then the origin isn't secure and we
# will not accept it as a valid location to search. We will however
# log a warning that we are ignoring it.
logger.warning(
"The repository located at %s is not a trusted or secure host and "
"is being ignored. If this repository is available via HTTPS we "
"recommend you use HTTPS instead, otherwise you may silence "
"this warning and allow it anyway with '--trusted-host %s'.",
origin_host,
origin_host,
)
return False
def request(self, method, url, *args, **kwargs):
# Allow setting a default timeout on a session
kwargs.setdefault("timeout", self.timeout)
# Dispatch the actual request
return super(PipSession, self).request(method, url, *args, **kwargs)
| [
"2447025220@qq.com"
] | 2447025220@qq.com |
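The `is_secure_origin` method in the pip session code above normalizes VCS-prefixed schemes ("git+ssh" is verified as "ssh") and treats `"*"` as a wildcard in its hardcoded and user-supplied secure origins. A minimal sketch of those two rules (this is my simplification, not pip's actual implementation — in particular it omits pip's IP-network checks and default-port handling):

```python
def normalize_scheme(scheme):
    # Only the last "+"-separated part of the scheme is verified,
    # so "git+ssh" is compared as "ssh".
    return scheme.rsplit("+", 1)[-1]

def matches(origin, secure):
    # origin and secure are (protocol, host, port) triples; "*" in the
    # secure triple matches any value in the corresponding position.
    return all(s == "*" or o == s for o, s in zip(origin, secure))
```

For example, `matches(("ssh", "example.com", 22), ("*", "example.com", "*"))` is true, while an `http` origin fails against an `https`-only secure origin.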
2b02015ba8e96f08cf871f37cd5331da7911e852 | e9e433c57a7d73d848fbade5e354a8c31ff0ea87 | /tests/test_subword_separation.py | 43d1525eb175ddd755df02c966b5532c5a64d8f9 | [
"Apache-2.0"
] | permissive | Joshua0128/codeprep | a7f99533e7b74e089fb66a3f10f6be59d4ca4b71 | 0f41307f7a9ad545e5ec0cc9552a0144328f2422 | refs/heads/master | 2023-07-09T13:07:33.094940 | 2021-04-21T18:08:30 | 2021-04-21T18:08:30 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,553 | py | # SPDX-FileCopyrightText: 2020 Hlib Babii <hlibbabii@gmail.com>
#
# SPDX-License-Identifier: Apache-2.0
# import unittest
#
# from codeprep.bpepkg.bpe_encode import BpeData
# from codeprep.parse.core import convert_text
# from codeprep.parse.model.containers import SplitContainer
# from codeprep.parse.model.numeric import Number
# from codeprep.parse.model.placeholders import placeholders
# from codeprep.parse.model.word import Underscore, Word
# from codeprep.prepconfig import PrepConfig
# from codeprep.to_repr import to_repr
#
# test_cases = {
# "create": (
# [SplitContainer.from_single_token("create")],
# ["create"],
# ),
# "Vector": (
# [SplitContainer.from_single_token("Vector")],
# [placeholders["capital"], "vector"],
# ),
# "players": (
# [SplitContainer.from_single_token("players")],
# [placeholders["word_start"], 'play', 'er', 's', placeholders["word_end"]]
# ),
# "0.345e+4": (
# [Number("0.345e+4")],
# [placeholders["word_start"], "0.", "3", "4", "5", "e+", "4", placeholders["word_end"]]
# ),
# "bestPlayers": (
# [SplitContainer([Word.from_("best"), Word.from_("Players")])],
# [placeholders["word_start"], "best", placeholders["capital"], 'play', "er", "s", placeholders["word_end"]]
# ),
# "test_BestPlayers": (
# [SplitContainer([Word.from_("test"), Underscore(), Word.from_("Best"), Word.from_("Players")])],
# [placeholders["word_start"], "test", '_', placeholders["capital"],
# "best", placeholders["capital"], 'play', "er", "s", placeholders["word_end"]]
# ),
# "test_BestPlayers_modified": (
# [SplitContainer(
# [Word.from_("test"), Underscore(), Word.from_("Best"), Word.from_("Players"), Underscore(),
# Word.from_("modified")]
# )],
# [placeholders["word_start"], "test", '_', placeholders["capital"],
# "best", placeholders["capital"], 'play', "er", "s", '_', "mod",
# "if", "ied",
# placeholders["word_end"]]
# ),
# "N_PLAYERS_NUM": (
# [SplitContainer([Word.from_("N"), Underscore(), Word.from_("PLAYERS"), Underscore(), Word.from_("NUM")])],
# [placeholders["word_start"], placeholders["capitals"], "n", '_',
# placeholders["capitals"], "play", "er", "s", '_', placeholders["capitals"],
# "num", placeholders["word_end"]]
# ),
# "_players": (
# [SplitContainer([Underscore(), (Word.from_("players"))])],
# [placeholders['word_start'], '_', "play", "er", "s", placeholders['word_end']]
# ),
# }
#
# bpe_merges_cache = {
# "players": ["play", "er", "s"],
# "0.345e+4": ["0.", "3", "4", "5", "e+", "4"],
# "modified": ["mod", "if", "ied"],
#
# "create": ["create"],
# "vector": ["vector"],
# "best": ["best"],
# "test": ["test"],
# "num": ["num"],
# "user": ["user"],
# "get": ["get"],
# "nick": ["ni", "ck"],
# "logger": ["logger"],
# "info": ["info"]
# }
#
#
# class SubwordSeparation(unittest.TestCase):
# def test(self):
# for input, output_tuple in test_cases.items():
# parsed = [p for p in convert_text(input, "java")][:-1]
#
# self.assertEqual(output_tuple[0], parsed)
#
# repred, metadata = to_repr(PrepConfig.from_encoded_string('Uc140l'), parsed, BpeData(merges_cache=bpe_merges_cache))
#
# self.assertEqual(output_tuple[1], repred)
#
#
# if __name__ == '__main__':
# unittest.main() | [
"hlibbabii@gmail.com"
] | hlibbabii@gmail.com |
eb940db047f8f17e8e5f77e6aba568932e282e75 | f1624dc174a172aff3b6cf7b9c7d1f2505bb76ee | /code/plot_double_bar_tmp.py | ee014d0f52a5091ca582759f7a4d2e1f81e4de3e | [] | no_license | blxlrsmb/TrendMicro2014 | 5d0bf07a54e8f2123af06cd58beef36ad9cb007d | 63894ede6bbfeb1ce62a5b64c56996dd53c6c2ce | refs/heads/master | 2021-01-10T14:12:56.868967 | 2015-11-21T23:39:36 | 2015-11-21T23:39:36 | 46,638,621 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,000 | py | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
# $File: plot_double_bar_tmp.py
# $Date: Mon Aug 18 06:45:14 2014 +0800
# $Author: Xinyu Zhou <zxytim[at]gmail[dot]com>
from sklearn.datasets import load_svmlight_file
import matplotlib.pyplot as plt
import numpy
import sys
from IPython import embed
def process_data(X, y, column, nr_bins):
x = X[:,column].toarray().T[0]
y = y[x != 0]
x = x[x != 0]
y = y[x < 3000]
x = x[x < 3000]
min_x, max_x = min(x), max(x)
yp, p = numpy.histogram(x, nr_bins)
# p = (numpy.array(list(p) + [0]) + numpy.asarray([0] + list(p)))[1:-1] * 0.5
width = (max_x - min_x) / float(nr_bins) * 0.6
y0, x0 = numpy.histogram(x[y == 0], p)
y1, x1 = numpy.histogram(x[y == 1], p)
p = (numpy.array(list(p) + [0]) + numpy.asarray([0] + list(p)))[1:-1] * 0.5
return p, y0, y1, width
def main():
if len(sys.argv) != 6:
print 'Usage: {} <svm_file> <desc> <column> <nr_bins> <output>'.format(sys.argv[0])
sys.exit(1)
input_path, desc_path, column, nr_bins, output_path = sys.argv[1:]
column, nr_bins = map(int, [column, nr_bins])
with open(desc_path) as f:
descs = [" ".join(line.rstrip().split()[1:]) for line in f]
X, y = load_svmlight_file(input_path)
x0, y0, y1, width = process_data(X, y, column, nr_bins)
nr_items = sum(y0) + sum(y1)
if nr_items < 100:
print 'to few data for column {}: {}, abort.'.format(
column, nr_items)
return
ff = open('/tmp/{0}'.format(descs[column]), 'w')
print >> ff, x0, '\n', y0, '\n', y1
ff.close()
plt.bar(x0 + width * 0.3, y0, width=width, label='un-renewed', color='#55DD55')
plt.bar(x0, y1, width=width, label='renewed', color='#5555DD')
plt.title('distribution of {} on renewal'.format(descs[column]))
plt.legend()
plt.tight_layout()
plt.grid()
plt.savefig(output_path)
# plt.show()
if __name__ == '__main__':
main()
# vim: foldmethod=marker
| [
"ppwwyyxxc@gmail.com"
] | ppwwyyxxc@gmail.com |
a9c44935af9e748a670870274888deac52241fc7 | c3865ab8c207ee58056be8eda7319acaa9a04654 | /Plots/simple/SimpleBarPlot.py | 4e5aa7de3779008ecd87c21d430eaf7f2ba2977f | [] | no_license | jonathanglima/Python | d42c21cb9030063f172c03efbb598a969db48539 | 43c53b6f20973c31ce95d2891293d344d655d377 | refs/heads/master | 2021-08-08T07:42:56.348207 | 2017-11-09T22:19:17 | 2017-11-09T22:19:17 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,092 | py | # -*- coding: utf-8 -*-
"""
Created on Tue Jul 07 16:56:32 2015
Simple Bar plot function
@author: Edward
"""
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
import os
# global variables
# fontname = 'C:/Users/Edward/Documents/Assignments/Scripts/Python/Plots/resource/Helvetica.ttf' # font .ttf file path
# platform specific fonts
#import sys
#fontname = {'darwin': 'Helvetica', # Mac
# 'win32':'Arial', # Windows
# 'linux': 'FreeSans', # Linux
# 'cygwin': 'Arial' # use Windows
# }.get(sys.platform)
fontname = 'Helvetica'
fontsize = {'title':16, 'xlab':12, 'ylab':12, 'xtick':10,'ytick':10,'texts':10,
'legend': 10, 'legendtitle':10} # font size
def SimpleBarPlot(groups, values, errors, savepath=None, width = 0.27,
size=(3, 3), color=['#1f77b4']):
"""Takes 3 inputs and generate a simple bar plot
e.g. groups = ['dog','cat','hippo']
values = [15, 10, 3]
errors = [0.5, 0.3, 0.8]
savepath: path to save figure. Format will be parsed by the extension
of save name
width: distance between each bars
size: figure size, in inches. Input as a tuple. Default (3,3)
color: default tableau10's steelblue color (hex: #1f77b4)
"""
# Get bar plot function according to style
ngroups = len(groups) # group labels
# leftmost position of bars
pos = np.arange(ngroups)
# initialize the plot
fig, axs = plt.subplots(nrows=1, ncols = 1, sharex=True)
# plot the series
axs.bar(pos, values, width, yerr=errors, color=color, align='center')
# set axis
axs.tick_params(axis='both',direction='out')
axs.spines['left'].set_visible(True)
axs.spines['right'].set_visible(False)
axs.spines['top'].set_visible(False)
axs.spines['bottom'].set_visible(True)
axs.xaxis.set_ticks_position('bottom')
axs.yaxis.set_ticks_position('left')
ymin, ymax = axs.get_ybound()
if ymax <= 0.0: # only negative data present
# flip label to top
axs.spines['bottom'].set_position('zero') # zero the x axis
axs.tick_params(labelbottom=False, labeltop=True)
elif ymin >= 0.0: # only positive data present. Default
axs.spines['bottom'].set_position('zero') # zero the x axis
else: # mix of positive an negative data : set all label to bottoms
axs.spines['bottom'].set_visible(False)
axs.spines['top'].set_visible(True)
axs.spines['top'].set_position('zero')
axs.xaxis.set_ticks_position('none')
# Set x categorical label
x = range(0,len(groups))
if axs.get_xlim()[0] >= x[0]:
axs.set_xlim(axs.get_xticks()[0]-1,axs.get_xlim()[-1])
if axs.get_xlim()[-1] <= x[-1]:
axs.set_xlim(axs.get_xlim()[0], axs.get_xticks()[-1]+1)
plt.xticks(x, groups)
# Set font
itemDict = {'title':[axs.title], 'xlab':[axs.xaxis.label],
'ylab':[axs.yaxis.label], 'xtick':axs.get_xticklabels(),
'ytick':axs.get_yticklabels(),
'texts':axs.texts if isinstance(axs.texts, np.ndarray)
or isinstance(axs.texts, list) else [axs.texts],
'legend': [] if axs.legend_ is None
else axs.legend_.get_texts(),
'legendtitle':[] if axs.legend_ is None
else [axs.legend_.get_title()]}
itemList, keyList = [], []
for k, v in iter(itemDict.items()):
itemList += v
keyList += [k]*len(v)
# initialize fontprop object
fontprop = fm.FontProperties(style='normal', weight='normal',
stretch = 'normal')
if os.path.isfile(fontname): # check if font is a file
fontprop.set_file(fontname)
else:# check if the name of font is available in the system
if not any([fontname.lower() in a.lower() for a in
fm.findSystemFonts(fontpaths=None, fontext='ttf')]):
print('%s font not found. Use system default.' %(fontname))
fontprop.set_family(fontname) # set font name
# set font for each object
for n, item in enumerate(itemList):
if isinstance(fontsize, dict):
fontprop.set_size(fontsize[keyList[n]])
elif n <1: # set the properties only once
fontprop.set_size(fontsize)
item.set_fontproperties(fontprop) # change font for all items
# Set figure size
fig.set_size_inches(size[0],size[1])
# Save the figure
if savepath is not None:
fig.savefig(savepath, bbox_inches='tight', rasterized=True, dpi=300)
return(fig, axs)
if __name__=='__main__':
groups = ['dog','cat','hippo']
values = [-15, 10, 3]
errors = [0.5, 0.3, 0.8]
SimpleBarPlot(groups, values, errors, savepath='C:/Users/Edward/Documents/Assignments/Scripts/Python/Plots/barplot.eps')
| [
"cui23327@gmail.com"
] | cui23327@gmail.com |
72d6334fff676e48b52f9b448c483d6a4750c874 | 524c17cbe94ee67babf817bad3b304e2573aa2da | /Lab4/func.py | 12c785bd1737ed353c5e4ba959d5d126ed932708 | [] | no_license | JoniNoct/python-laboratory | 6707664fa58d6e0fcde8476f9670fcf9c312e830 | 551fdb128bcf113e72fd13ff7441455c830ecf45 | refs/heads/master | 2020-07-29T15:20:26.183289 | 2019-11-11T22:42:42 | 2019-11-11T22:42:42 | 209,859,702 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,094 | py | def checkType(a):
i = 1
if a == "int":
while i == 1:
try:
v = int(input())
i += 1
except:
                print("You must enter a number")
return v
elif a == "float":
while i == 1:
try:
v = float(input())
i += 1
except:
                print("You must enter a number")
return v
def setChoice():
    print("Would you like to start again?\n1) yes\n2) no")
i = 1
j = 1
while i == 1:
c = checkType("int")
if c == 1:
            print("Starting over")
i += 1
elif c == 2:
            print("Goodbye")
i += 1
j += 1
else:
            print("You can only choose from the 2 options")
return j
def welcome(a):
    print("Lab work No.%d Майструк Ілля No.6\nGood day" %(a))
| [
"tereshchenko.igor@gmail.com"
] | tereshchenko.igor@gmail.com |
ca7dcb5b038ec85b1e09b2329b549221e8c3d1ad | c2544163e17fad9a4e5731bd1976bfc9db44f8bd | /reagent/net_builder/discrete_dqn/dueling.py | fc2fe4b2e4cb677386c517f34dfdc563b2f3b93f | [
"BSD-3-Clause"
] | permissive | zachkeer/ReAgent | 52fb805576dc7b1465e35921c3651ff14cd9345e | 3e5eb0391050c39b9d4707020f9ee15d860f28cb | refs/heads/master | 2022-11-25T14:08:21.151851 | 2020-07-22T06:44:28 | 2020-07-22T06:45:48 | 281,907,350 | 1 | 0 | BSD-3-Clause | 2020-07-23T09:21:50 | 2020-07-23T09:21:49 | null | UTF-8 | Python | false | false | 1,216 | py | #!/usr/bin/env python3
from typing import List
from reagent import types as rlt
from reagent.core.dataclasses import dataclass, field
from reagent.models.base import ModelBase
from reagent.models.dueling_q_network import DuelingQNetwork
from reagent.net_builder.discrete_dqn_net_builder import DiscreteDQNNetBuilder
from reagent.parameters import NormalizationData, param_hash
@dataclass
class Dueling(DiscreteDQNNetBuilder):
__hash__ = param_hash
sizes: List[int] = field(default_factory=lambda: [256, 128])
activations: List[str] = field(default_factory=lambda: ["relu", "relu"])
def __post_init_post_parse__(self):
assert len(self.sizes) == len(self.activations), (
f"Must have the same numbers of sizes and activations; got: "
f"{self.sizes}, {self.activations}"
)
def build_q_network(
self,
state_feature_config: rlt.ModelFeatureConfig,
state_normalization_data: NormalizationData,
output_dim: int,
) -> ModelBase:
state_dim = self._get_input_dim(state_normalization_data)
return DuelingQNetwork.make_fully_connected(
state_dim, output_dim, self.sizes, self.activations
)
| [
"facebook-github-bot@users.noreply.github.com"
] | facebook-github-bot@users.noreply.github.com |
e5384e5808f75faeb98b36e08e1fdf5188cd9ff4 | 301a7cb5d21808b4442f641ba4396fbd6c1da6f0 | /dataclasses_serialization/serializer_base.py | 9513f627a0877f7ae41e8c6862dc1a5fb057b84f | [
"MIT"
] | permissive | kitschen/python-dataclasses-serialization | 5766b8010ffb3d8af7aa4925b00f733019ca4b2f | c0c4e4e037a01871dccb13c25478f35a9dfb7717 | refs/heads/master | 2022-05-30T06:13:23.474156 | 2020-02-18T17:12:52 | 2020-02-18T17:12:52 | 260,832,071 | 0 | 0 | MIT | 2020-05-03T05:05:06 | 2020-05-03T05:05:05 | null | UTF-8 | Python | false | false | 7,635 | py | from dataclasses import dataclass, fields, is_dataclass
from functools import partial
from typing import TypeVar, Union, Dict, List, get_type_hints
from typing_inspect import is_union_type, get_origin, get_args
from toolz import curry
try:
from typing import GenericMeta
except ImportError:
from typing import _GenericAlias, _SpecialForm
GenericMeta = (_GenericAlias, _SpecialForm)
__all__ = [
"isinstance",
"issubclass",
"noop_serialization",
"noop_deserialization",
"dict_to_dataclass",
"union_deserialization",
"dict_serialization",
"dict_deserialization",
"list_deserialization",
"Serializer",
"SerializationError",
"DeserializationError"
]
get_args = partial(get_args, evaluate=True)
original_isinstance = isinstance
original_issubclass = issubclass
def isinstance(o, t):
if t is dataclass:
return original_isinstance(o, type) and is_dataclass(o)
if original_isinstance(t, GenericMeta):
if t is Dict:
return original_isinstance(o, dict)
if get_origin(t) in (dict, Dict):
key_type, value_type = get_args(t)
return original_isinstance(o, dict) and all(
isinstance(key, key_type) and isinstance(value, value_type)
for key, value in o.items()
)
return original_isinstance(o, t)
def issubclass(cls, classinfo):
if classinfo is dataclass:
return False
if classinfo is Union or is_union_type(cls):
return classinfo is Union and is_union_type(cls)
if original_isinstance(classinfo, GenericMeta):
return original_isinstance(cls, GenericMeta) and classinfo.__args__ is None and get_origin(cls) is classinfo
if original_isinstance(cls, GenericMeta):
origin = get_origin(cls)
if isinstance(origin, GenericMeta):
origin = origin.__base__
return origin is classinfo
return original_issubclass(cls, classinfo)
def noop_serialization(obj):
return obj
@curry
def noop_deserialization(cls, obj):
if not isinstance(obj, cls):
raise DeserializationError("Cannot deserialize {} {!r} to type {}".format(
type(obj),
obj,
cls
))
return obj
@curry
def dict_to_dataclass(cls, dct, deserialization_func=noop_deserialization):
if not isinstance(dct, dict):
raise DeserializationError("Cannot deserialize {} {!r} using {}".format(
type(dct),
dct,
dict_to_dataclass
))
if hasattr(cls, '__parameters__'):
if cls.__parameters__:
raise DeserializationError("Cannot deserialize unbound generic {}".format(
cls
))
origin = get_origin(cls)
type_mapping = dict(zip(origin.__parameters__, get_args(cls)))
type_hints = get_type_hints(origin)
flds = fields(origin)
fld_types = (type_hints[fld.name] for fld in flds)
fld_types = (
fld_type[tuple(type_mapping[type_param] for type_param in fld_type.__parameters__)]
if isinstance(fld_type, GenericMeta) else
type_mapping[fld_type]
if isinstance(fld_type, TypeVar) else
fld_type
for fld_type in fld_types
)
else:
type_hints = get_type_hints(cls)
flds = fields(cls)
fld_types = (type_hints[fld.name] for fld in flds)
try:
return cls(**{
fld.name: deserialization_func(fld_type, dct[fld.name])
for fld, fld_type in zip(flds, fld_types)
if fld.name in dct
})
except TypeError:
raise DeserializationError("Missing one or more required fields to deserialize {!r} as {}".format(
dct,
cls
))
@curry
def union_deserialization(type_, obj, deserialization_func=noop_deserialization):
for arg in get_args(type_):
try:
return deserialization_func(arg, obj)
except DeserializationError:
pass
raise DeserializationError("Cannot deserialize {} {!r} to type {}".format(
type(obj),
obj,
type_
))
@curry
def dict_serialization(obj, key_serialization_func=noop_serialization, value_serialization_func=noop_serialization):
if not isinstance(obj, dict):
raise SerializationError("Cannot serialize {} {!r} using dict serialization".format(
type(obj),
obj
))
return {
key_serialization_func(key): value_serialization_func(value)
for key, value in obj.items()
}
@curry
def dict_deserialization(type_, obj, key_deserialization_func=noop_deserialization, value_deserialization_func=noop_deserialization):
if not isinstance(obj, dict):
raise DeserializationError("Cannot deserialize {} {!r} using dict deserialization".format(
type(obj),
obj
))
if type_ is dict or type_ is Dict:
return obj
key_type, value_type = get_args(type_)
return {
key_deserialization_func(key_type, key): value_deserialization_func(value_type, value)
for key, value in obj.items()
}
@curry
def list_deserialization(type_, obj, deserialization_func=noop_deserialization):
if not isinstance(obj, list):
raise DeserializationError("Cannot deserialize {} {!r} using list deserialization".format(
type(obj),
obj
))
if type_ is list or type_ is List:
return obj
value_type, = get_args(type_)
return [
deserialization_func(value_type, value)
for value in obj
]
@dataclass
class Serializer:
serialization_functions: dict
deserialization_functions: dict
def __post_init__(self):
self.serialization_functions.setdefault(dataclass, lambda obj: self.serialize(dict(obj.__dict__)))
self.deserialization_functions.setdefault(dataclass, dict_to_dataclass(deserialization_func=self.deserialize))
self.deserialization_functions.setdefault(Union, union_deserialization(deserialization_func=self.deserialize))
def serialize(self, obj):
"""
Serialize given Python object
"""
for type_, func in self.serialization_functions.items():
if isinstance(obj, type_):
return func(obj)
if is_dataclass(obj) and dataclass in self.serialization_functions:
return self.serialization_functions[dataclass](obj)
raise SerializationError("Cannot serialize type {}".format(type(obj)))
@curry
def deserialize(self, cls, serialized_obj):
"""
Attempt to deserialize serialized object as given type
"""
for type_, func in self.deserialization_functions.items():
if issubclass(cls, type_):
return func(cls, serialized_obj)
if is_dataclass(cls) and dataclass in self.deserialization_functions:
return self.deserialization_functions[dataclass](cls, serialized_obj)
raise DeserializationError("Cannot deserialize type {}".format(cls))
@curry
def register_serializer(self, cls, func):
self.serialization_functions[cls] = func
@curry
def register_deserializer(self, cls, func):
self.deserialization_functions[cls] = func
def register(self, cls, serialization_func, deserialization_func):
self.register_serializer(cls, serialization_func)
self.register_deserializer(cls, deserialization_func)
class SerializationError(TypeError):
pass
class DeserializationError(TypeError):
pass
| [
"madman.bob@hotmail.co.uk"
] | madman.bob@hotmail.co.uk |
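The `dict_to_dataclass` helper in the serializer row above builds a dataclass instance from a dict by matching field names. A stdlib-only sketch of that core idea (my simplification — it drops the generic-type resolution, nested deserialization, and error handling of the real function; `Point` is just an illustrative type):

```python
from dataclasses import dataclass, fields

def simple_dict_to_dataclass(cls, dct):
    # Keep only keys that are declared fields of cls, and pass
    # them to the dataclass constructor; extra keys are ignored.
    return cls(**{f.name: dct[f.name] for f in fields(cls) if f.name in dct})

@dataclass
class Point:
    x: int
    y: int

p = simple_dict_to_dataclass(Point, {"x": 1, "y": 2, "extra": 3})
# p == Point(x=1, y=2); "extra" is silently dropped
```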
f0941a9ff0770700eaf5c789d61d647d34b3bcde | 13a5a2ab12a65d65a5bbefce5253c21c6bb8e780 | /dnainfo/skyline_chi/migrations/0007_auto_20161019_1408.py | 61784f86a84c1613426e15ae6a744a2d42ba257a | [] | no_license | NiJeLorg/DNAinfo-CrimeMaps | 535b62205fe1eb106d0f610d40f2f2a35e60a09e | 63f3f01b83308294a82565f2dc8ef6f3fbcdb721 | refs/heads/master | 2021-01-23T19:28:12.642479 | 2017-05-11T06:04:08 | 2017-05-11T06:04:08 | 34,847,724 | 2 | 0 | null | 2016-11-25T15:56:14 | 2015-04-30T10:02:41 | JavaScript | UTF-8 | Python | false | false | 2,612 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2016-10-19 18:08
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('skyline_chi', '0006_auto_20161007_1528'),
]
operations = [
migrations.AddField(
model_name='chi_building_permits',
name='updated',
field=models.DateTimeField(auto_now=True),
),
migrations.AddField(
model_name='chi_building_permits',
name='user',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
preserve_default=False,
),
migrations.AddField(
model_name='chireporterbuildings',
name='updated',
field=models.DateTimeField(auto_now=True),
),
migrations.AddField(
model_name='chireporterbuildings',
name='user',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
preserve_default=False,
),
migrations.AddField(
model_name='chiskyline',
name='updated',
field=models.DateTimeField(auto_now=True),
),
migrations.AddField(
model_name='chisponsoredbuildings',
name='updated',
field=models.DateTimeField(auto_now=True),
),
migrations.AddField(
model_name='chisponsoredbuildings',
name='user',
field=models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
preserve_default=False,
),
migrations.AlterField(
model_name='chi_building_permits',
name='created',
field=models.DateTimeField(auto_now_add=True),
),
migrations.AlterField(
model_name='chireporterbuildings',
name='created',
field=models.DateTimeField(auto_now_add=True),
),
migrations.AlterField(
model_name='chiskyline',
name='created',
field=models.DateTimeField(auto_now_add=True),
),
migrations.AlterField(
model_name='chisponsoredbuildings',
name='created',
field=models.DateTimeField(auto_now_add=True),
),
]
| [
"jd@nijel.org"
] | jd@nijel.org |
9875fadcf6f959f0bd030c49d8153372772cbd1e | a7c4f8d4e101a4b3961419278809923a6ddbfde7 | /connect.py | a69af0ebbc3139fa555aa28b47aa0bec1878cc76 | [] | no_license | memonsbayu/phyton | 5b7bfabc1a132d7c8ac6d057fbc7e8bb687ecff7 | 858966d11d95ff3fd829179abe1e349f2b84345f | refs/heads/master | 2023-01-01T07:43:23.784410 | 2020-10-22T13:07:09 | 2020-10-22T13:07:09 | 306,339,910 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 303 | py | import mysql.connector
def koneksi():
db = mysql.connector.connect(
host="localhost",
user="root",
passwd="",
database="sewa_alat_olahraga",
autocommit=True
)
if db.is_connected():
return db
else:
print ('DB disconnected!')
| [
"you@example.com"
] | you@example.com |
0b6b42e43e78681a0e3e98a62abf55692dd129ed | ad3011f4d7600eb1b436f1527e1b576910a64b56 | /05_cwiczenia/01_podstawy/06_cwiczenie.py | ed2a29bfc6b92745932f9a079b0001d3a4bdde77 | [] | no_license | pawel-domanski/python-basic | 7a048c7de950d997492c11b4f07888f856908639 | 1adce694d58b4969f843cfc651d3e2a027a4f3f3 | refs/heads/main | 2023-03-21T12:55:22.097946 | 2021-03-06T09:54:09 | 2021-03-06T09:54:09 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 212 | py | # -*- coding: utf-8 -*-
"""
Using the appropriate method, join the given words with the '#' symbol:
> sport
> python
> free
> time
Then print the result to the console.
Expected result:
sport#python#free#time
""" | [
"krakowiakpawel9@gmail.com"
] | krakowiakpawel9@gmail.com |
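The exercise above is asking for `str.join`; one possible solution sketch:

```python
words = ["sport", "python", "free", "time"]
result = "#".join(words)  # join the list elements with '#'
print(result)  # sport#python#free#time
```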
87cec23de18905c57c19aafa9befa366d356508a | bab4f301ff7b7cf0143d82d1052f49e8632a210e | /1010. Pairs of Songs With Total Durations Divisible by 60.py | a0b93d4f53a519df6b31b2c79b16534f395f35f8 | [] | no_license | ashish-c-naik/leetcode_submission | 7da91e720b14fde660450674d6ce94c78b1150fb | 9f5dcd8e04920d07beaf6aa234b9804339f58770 | refs/heads/master | 2020-04-05T05:12:03.656621 | 2019-06-08T17:30:22 | 2019-06-08T17:30:22 | 156,585,497 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 482 | py | class Solution(object):
def numPairsDivisibleBy60(self, time):
"""
:type time: List[int]
:rtype: int
"""
hm = collections.defaultdict(int)
count = 0
for x in time:
# if x % 60 == 0:
# count += 1
new = x % 60
# print(new, hm)
if new in hm:
count += hm[new]
hm[60-new] += 1
if new == 0: hm[new] += 1
return count | [
"ashishnaik121@gmail.com"
] | ashishnaik121@gmail.com |
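The solution above keys each duration by its remainder mod 60 and, while scanning, looks up how many earlier songs carry the complementary remainder. An equivalent self-contained sketch of the same counting trick (the function name is mine, and the complement lookup `(60 - r) % 60` folds the remainder-0 special case that the solution handles with its extra `hm[new] += 1` branch):

```python
from collections import defaultdict

def num_pairs_divisible_by_60(time):
    seen = defaultdict(int)
    count = 0
    for t in time:
        r = t % 60
        # Songs seen so far whose remainder completes this one to a
        # multiple of 60; remainder 0 pairs with remainder 0.
        count += seen[(60 - r) % 60]
        seen[r] += 1
    return count

print(num_pairs_divisible_by_60([30, 20, 150, 100, 40]))  # 3
```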
edcf513b1d1c7b3cec4fd78ec26e06b893ead73c | a9c5c6f6aed746cdfaa594db54d7a2d46e33f430 | /LICENSE.md/windpred/rev.py | e654404ff6da9e277f6a2ebb1c776a14f88de152 | [] | no_license | nyc1893/Python-Learning | a7b81d169c4678c199a61aa4c7d226ce927aa8d8 | 13bc6b8cc28c25df2730d9672d419cd57cab55d4 | refs/heads/master | 2023-07-20T23:08:38.164660 | 2023-07-06T14:35:29 | 2023-07-06T14:35:29 | 101,700,286 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 4,785 | py | import math
import pandas as pd
import numpy as np
import statsmodels.api as sm # recommended import according to the docs
import matplotlib.pyplot as plt
from scipy.stats import genpareto
# delta p are used
import math
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import timeit
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
def grimshaw(yt):
ymean = sum(yt)/float(len(yt))
ymin = min(yt)
xstar = 2*(ymean - ymin)/(ymin)**2
total = 0
for i in yt:
total = total + math.log(1+xstar*i)
vx = 1 + (1/len(yt))*total
gam =vx -1
sig = gam/float(xstar)
return gam,sig
def calT(q,gam,sig,n,no_t,t):
zq = t+ (sig/gam)*(((q*n/no_t)**(-gam))-1)
return zq
def pot_func(x,q):
# The paper is 98% have nothing to do with the q
t = np.percentile(x,q)
nt = [n for n in x if n>t]
yt = [n-t for n in nt]
ymean = sum(yt)/float(len(yt))
ymin = min(yt)
xstar = 2*(ymean - ymin)/(ymin)**2
total = 0
no_t = len(nt)
n = len(x)
# gam,sig<--Grimshaw(yt)
for i in yt:
total = total + math.log(1+xstar*i)
vx = 1 + (1/len(nt))*total
gam =vx -1
sig = gam/float(xstar)
# zq<--calcthereshold(q,... n,nt,t)
zq = t+ (sig/gam)*(((q*n/no_t)**(-gam))-1) #function(1)
return zq,t
# print ("Inital Threshold", t)
# print ("Updated Threshold", zq)
# print ("len nt = ", len(nt))
# print ("len yt = ", len(yt))
# from IPython.core.pylabtools import figsize
"""
"""
# input
# n: lens of calibration data
# d: window size
# q: quantile
def fun1(n,d,turb,sign,L):
# L = 5
i = 0 #initial point
df1 = pd.read_csv("../../../data/total_"+str(turb)+"_2008.csv")
df2 = pd.read_csv("../../../data/total_"+str(turb)+"_2009.csv")
df3 = pd.read_csv("../../../data/total_"+str(turb)+"_2010.csv")
df1[df1<0] = 0
df2[df2<0] = 0
df3[df3<0] = 0
# df1 = df1.iloc[-(d+n):]
# print("2008 shape",df1.shape)
data = pd.concat([df1, df2], axis=0)
data = pd.concat([data, df3], axis=0)
t1 = data.values
num = t1.shape[0] - L
cc = np.zeros(num)
dd = np.zeros(num)
for i in range(num):
cc[i] = t1[i+L]-t1[i]
dd[i] = t1[i]-t1[i+L]
# print(data.shape)
cc = pd.DataFrame(cc)
# print(cc.head())
# print(cc.shape)
dd = pd.DataFrame(dd)
# print(dd.head())
# print(dd.tail())
# print(dd.shape)
cc.columns = ['a']
dd.columns = ['a']
if turb =='ge':
max_y = 53*1.5
else:
max_y = 221
temp = dd[dd["a"]>0.1*max_y]
ind = temp.index.tolist()
temp2 = cc[cc["a"]>0.1*max_y]
ind2 = temp2.index.tolist()
if sign ==0:
return temp2,ind2
return temp,ind
def fun3(q,d,turb,sign,offset,L):
n = 1500
data,ind = fun1(n,d,turb,sign,L)
# now x is delta power
x = data.values
n2 = len(x)
M = np.zeros(n2+2,float)
y = np.zeros(n2+2,float)
# wstar = df.values
# M[d+1] = np.mean(wstar)
xp = np.zeros(n2,float)
list = []
zq,t = pot_func(x[d+1:d+n],q)
zzq =zq*np.ones(n2)
# A is a set of anomalies
Avalue = []
Aindex = []
k = n
k2 = len(x)-n-d
yt = []
no_t = 0
result = []
for i in range(d+n,d+n+k2):
if x[i]>zq:
# print("yeah1")
Avalue.append(x[i])
Aindex.append(i)
# M[i+1] = M[i]
elif x[i]>t:
print("yeah2")
y[i] = xp[i]-t
yt.append(y[i])
no_t = no_t +1
k = k+1
gam,sig = grimshaw(yt)
zq = calT(q,gam,sig,k,no_t,t)
# wstar =np.append(wstar[1:],x[i])
# M[i+1] = np.mean(wstar)
zzq[i+1] = zq
else:
k = k+1
# print(len(Avalue))
# print(len(Aindex))
Aindex = np.array(Aindex)
# print(ind[:5])
# print(ind[-5:])
ck1 = []
ck2 = []
for i in range(len(Aindex)):
if ind[Aindex[i]]<52560+10091-offset and ind[Aindex[i]]>10091-offset:
ck1.append(ind[Aindex[i]]-10091+offset)
elif ind[Aindex[i]]>52560+10091-offset :
ck2.append(ind[Aindex[i]]-10091-52560+offset)
# print(x[ind[Aindex[0]]])
# print(ck1[:5])
# print(ck1[-5:])
# print(ck2[:5])
# print(ck2[-5:])
# ck1 : 2009
# ck2 : 2010
return ck1,ck2
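The loop in `fun3` follows the SPOT pattern: a value above `zq` is recorded as an anomaly, a value between `t` and `zq` re-fits the tail (`grimshaw`) and updates `zq`, and anything below `t` only advances the counter. A stripped-down sketch of the flagging step alone, with a fixed threshold and made-up data:

```python
def flag_anomalies(stream, zq):
    """Indices of values exceeding the current anomaly threshold zq."""
    return [i for i, v in enumerate(stream) if v > zq]

idx = flag_anomalies([1.0, 9.5, 2.0, 12.3, 3.1], zq=9.0)  # -> [1, 3]
```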
def main():
# plot_m()
# fun1(10,30,"mit")
    fun3(q=20, d=200, turb="mit", sign=1, offset=0, L=5)  # fun3 takes (q, d, turb, sign, offset, L); sign/offset/L here are placeholder values
"""
"""
if __name__ == "__main__":
main()
| [
"noreply@github.com"
] | nyc1893.noreply@github.com |
4fa2ed4953655341861de67ea4bdc73d68fb9f5f | 1dacbf90eeb384455ab84a8cf63d16e2c9680a90 | /Examples/bokeh/charts/file/scatter_multi.py | 0d60b5f6af7af7857ba99101beed2f2f5abe3f18 | [
"Apache-2.0",
"BSD-3-Clause",
"LicenseRef-scancode-unknown"
] | permissive | wangyum/Anaconda | ac7229b21815dd92b0bd1c8b7ec4e85c013b8994 | 2c9002f16bb5c265e0d14f4a2314c86eeaa35cb6 | refs/heads/master | 2022-10-21T15:14:23.464126 | 2022-10-05T12:10:31 | 2022-10-05T12:10:31 | 76,526,728 | 11 | 10 | Apache-2.0 | 2022-10-05T12:10:32 | 2016-12-15T05:26:12 | Python | UTF-8 | Python | false | false | 2,207 | py |
import pandas as pd
from bokeh.charts import Scatter, output_file, show, vplot, hplot, defaults
from bokeh.charts.operations import blend
from bokeh.charts.utils import df_from_json
from bokeh.sampledata.autompg import autompg as df
from bokeh.sampledata.iris import flowers
from bokeh.sampledata.olympics2014 import data
defaults.plot_width = 450
defaults.plot_height = 400
scatter0 = Scatter(
df, x='mpg', title="x='mpg'", xlabel="Miles Per Gallon")
scatter1 = Scatter(
df, x='mpg', y='hp', title="x='mpg', y='hp'",
xlabel="Miles Per Gallon", ylabel="Horsepower", legend='top_right')
scatter2 = Scatter(
df, x='mpg', y='hp', color='cyl', title="x='mpg', y='hp', color='cyl'",
xlabel="Miles Per Gallon", ylabel="Horsepower", legend='top_right')
scatter3 = Scatter(
df, x='mpg', y='hp', color='origin', title="x='mpg', y='hp', color='origin', "
"with tooltips",
xlabel="Miles Per Gallon", ylabel="Horsepower",
legend='top_right', tooltips=[('origin', "@origin")])
scatter4 = Scatter(
df, x='mpg', y='hp', color='cyl', marker='origin', title="x='mpg', y='hp', color='cyl', marker='origin'",
xlabel="Miles Per Gallon", ylabel="Horsepower", legend='top_right')
# Example with nested json/dict like data, which has been pre-aggregated and pivoted
df2 = df_from_json(data)
df2 = df2.sort('total', ascending=False)
df2 = df2.head(10)
df2 = pd.melt(df2, id_vars=['abbr', 'name'])
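`pd.melt` above turns the wide medal table into long `variable`/`value` rows keyed by `abbr` and `name`. The reshaping it performs can be sketched in plain Python (toy row, no pandas dependency):

```python
def melt(rows, id_vars):
    """Unpivot dicts: every non-id column becomes its own (variable, value) row."""
    out = []
    for row in rows:
        for key, value in row.items():
            if key not in id_vars:
                out.append({**{k: row[k] for k in id_vars},
                            "variable": key, "value": value})
    return out

long_rows = melt([{"abbr": "US", "name": "USA", "gold": 9, "silver": 7}],
                 id_vars=["abbr", "name"])
```

One wide row with two medal columns becomes two long rows, mirroring what the chart code feeds to `Scatter`.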
scatter5 = Scatter(
df2, x='value', y='name', color='variable', title="x='value', y='name', color='variable'",
xlabel="Medals", ylabel="Top 10 Countries", legend='bottom_right')
scatter6 = Scatter(flowers, x=blend('petal_length', 'sepal_length', name='length'),
y=blend('petal_width', 'sepal_width', name='width'), color='species',
title='x=petal_length+sepal_length, y=petal_width+sepal_width, color=species',
legend='top_right')
scatter6.title_text_font_size = '10pt'
output_file("scatter_multi.html", title="scatter_multi.py example")
show(vplot(
hplot(scatter0, scatter1),
hplot(scatter2, scatter3),
hplot(scatter4, scatter5),
hplot(scatter6)
))
| [
"wgyumg@mgail.com"
] | wgyumg@mgail.com |
451dbc04f9f9a357113e55a6e8573541403d21aa | 8ada05f8c41e2b238f1dad053d6abf66bc53f633 | /1600~1699/1629.py | 1db19048448ac5d342d57fec807edb6153a4b40b | [] | no_license | chulhee23/BaekJoon_Online_Judge | 84ccd9f2223ea57ab305b9ee27dac0c7e8222df4 | b1afcaaba63e49552363a003ff2be8a5878a78a7 | refs/heads/master | 2020-09-22T00:18:39.822112 | 2019-11-24T08:21:00 | 2019-11-24T08:21:00 | 224,983,873 | 0 | 1 | null | 2019-11-30T08:44:22 | 2019-11-30T08:44:21 | null | UTF-8 | Python | false | false | 484 | py |
# Problem
# We want the number A multiplied by itself B times.
# Since the result can become very large,
# write a program that prints it modulo C.
#
# Input
# The first line contains A, B, and C, separated by single spaces.
# A, B, and C are all natural numbers at most 2,147,483,647.
#
# Output
# On the first line, print A raised to the power B, modulo C.
print(pow(*map(int, input().split())))
| [
"alstn2468_@naver.com"
] | alstn2468_@naver.com |
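The one-line 1629 solution above works because Python's three-argument `pow(a, b, c)` performs modular exponentiation in O(log B) multiplications instead of computing `a**b` first. The same square-and-multiply idea written out explicitly:

```python
def mod_pow(a, b, c):
    """Compute (a ** b) % c by binary (square-and-multiply) exponentiation."""
    result = 1
    a %= c
    while b > 0:
        if b & 1:            # current bit of the exponent is set
            result = result * a % c
        a = a * a % c        # square the base for the next bit
        b >>= 1
    return result
```

Each loop iteration halves the exponent, so even B near 2,147,483,647 needs only about 31 iterations.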
a5d288adf3bf422f7a0fc81863207a92aadde21d | d9f52125601ec26f79202f0e912891b31b60ffc4 | /오후반/Introduction/6_Write_a_function/6_JHW.py | c190540064743a984c2c22f1d8da69ea32296212 | [] | no_license | YoungGaLee/2020_Python_coding-study | 5a4f36a39021c89ac773a3a7878c44bf8b0b811f | b876aabc747709afa21035c3afa7e3f7ee01b26a | refs/heads/master | 2022-12-12T13:34:44.729245 | 2020-09-07T04:07:48 | 2020-09-07T04:07:48 | 280,745,587 | 4 | 4 | null | 2020-07-22T03:27:22 | 2020-07-18T21:51:40 | Python | UTF-8 | Python | false | false | 201 | py | def is_leap(year):
    # Gregorian rule: divisible by 4, and not by 100 unless also divisible by 400
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return leap
year = int(input())
print(is_leap(year))
| [
"noreply@github.com"
] | YoungGaLee.noreply@github.com |
467d21c0b49dff3c3bce69d7006e14d3f82a3063 | 9dc00969cbb9ab462836c548927c0a471f8a9737 | /05列表优化及去重/05-method1.py | bcb0163750b2b7f0784a8521755aefee650f7e3 | [] | no_license | wan230114/JUN-Python | ad2cb901881b3b6f07a7cdd42ac0ff2c86e8df56 | 3619c8e392c241bd1c9b57dbc16a1f2f09ccc921 | refs/heads/master | 2021-05-01T09:51:01.309505 | 2018-08-31T07:12:31 | 2018-08-31T07:12:31 | 121,096,578 | 0 | 0 | null | 2018-02-11T07:34:54 | 2018-02-11T07:15:42 | null | UTF-8 | Python | false | false | 468 | py | #!/usr/bin/python
row = open('row_list', 'r')
clean = open('clean_list', 'w+')
uni = open('uni', 'w')
for line in row:
new_line=''
for line1 in line.strip():
if line1=='.':break
new_line=new_line+line1
clean.write(new_line+'\n')
print new_line
clean.close()
clean = open('clean_list', 'r')
unique = set(clean)
for line in unique:
uni.write(line)
row.close()
clean.close()
uni.close()
| [
"1170101471@qq.com"
] | 1170101471@qq.com |
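The script above trims each line at its first `.`, writes the result out, and then deduplicates with `set()`, which loses the original order. A Python 3 sketch of the same clean-then-dedup flow done in memory, keeping first-seen order via `dict.fromkeys`:

```python
def clean_and_dedup(lines):
    """Trim each line at the first '.', then drop duplicates keeping first occurrence."""
    cleaned = [line.strip().split(".", 1)[0] for line in lines]
    return list(dict.fromkeys(cleaned))  # dict preserves insertion order in Py3.7+

result = clean_and_dedup(["a.com\n", "b.org\n", "a.net\n"])  # -> ["a", "b"]
```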
f83748286858d4ca788884511300b237b5cf3cd0 | eba4934e6af6ec85d286c336dbc2c0f27013110d | /mxnetseg/data/cityscapes.py | 8bd95d8aab0fa9d8025b6819fd3934d769a7c0c9 | [
"Apache-2.0"
] | permissive | jeou/MXNetSeg | a87c349a12c7c2a77018628e5a231cf8624a5e1c | 8a700aa2c2d939fce11b52bde7291ef231c9bfaa | refs/heads/master | 2022-10-21T18:24:18.864007 | 2020-06-14T10:10:18 | 2020-06-14T10:10:18 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,758 | py | # coding=utf-8
# Adapted from: https://github.com/dmlc/gluon-cv/blob/master/gluoncv/data/cityscapes.py
import os
import mxnet as mx
import numpy as np
from PIL import Image
from gluoncv.data.segbase import SegmentationDataset
class CitySegmentation(SegmentationDataset):
NUM_CLASS = 19
def __init__(self, root=os.path.expanduser('~/.mxnet/datasets/citys'), split='train',
mode=None, transform=None, **kwargs):
super(CitySegmentation, self).__init__(
root, split, mode, transform, **kwargs)
self.images, self.mask_paths = _get_city_pairs(self.root, self.split)
assert (len(self.images) == len(self.mask_paths))
self.valid_classes = [7, 8, 11, 12, 13, 17, 19, 20, 21, 22,
23, 24, 25, 26, 27, 28, 31, 32, 33]
self._key = np.array([-1, -1, -1, -1, -1, -1,
-1, -1, 0, 1, -1, -1,
2, 3, 4, -1, -1, -1,
5, -1, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15,
-1, -1, 16, 17, 18])
self._mapping = np.array(range(-1, len(self._key) - 1)).astype('int32')
def _class_to_index(self, mask):
# assert the values
values = np.unique(mask)
for value in values:
assert (value in self._mapping)
index = np.digitize(mask.ravel(), self._mapping, right=True)
return self._key[index].reshape(mask.shape)
def __getitem__(self, index):
img = Image.open(self.images[index]).convert('RGB')
if self.mode == 'test':
img = self._img_transform(img)
if self.transform is not None:
img = self.transform(img)
return img, os.path.basename(self.images[index])
mask = Image.open(self.mask_paths[index])
# synchronized transform
if self.mode == 'train':
img, mask = self._sync_transform(img, mask)
elif self.mode == 'val':
img, mask = self._val_sync_transform(img, mask)
else:
assert self.mode == 'testval'
img, mask = self._img_transform(img), self._mask_transform(mask)
# general resize, normalize and toTensor
if self.transform is not None:
img = self.transform(img)
return img, mask
def _mask_transform(self, mask):
target = self._class_to_index(np.array(mask).astype('int32'))
return mx.nd.array(target).astype('int32')
def __len__(self):
return len(self.images)
@property
def pred_offset(self):
return 0
@property
def classes(self):
return ('road', 'sidewalk', 'building', 'wall', 'fence', 'pole',
'traffic light', 'traffic sign', 'vegetation', 'terrain',
'sky', 'person', 'rider', 'car', 'truck', 'bus', 'train',
'motorcycle', 'bicycle')
def _get_path_pairs(img_folder, mask_folder):
img_paths = []
mask_paths = []
for root, _, files in os.walk(img_folder):
for filename in files:
if filename.endswith(".png"):
imgpath = os.path.join(root, filename)
foldername = os.path.basename(os.path.dirname(imgpath))
maskname = filename.replace('leftImg8bit', 'gtFine_labelIds')
maskpath = os.path.join(mask_folder, foldername, maskname)
if os.path.isfile(imgpath) and os.path.isfile(maskpath):
img_paths.append(imgpath)
mask_paths.append(maskpath)
else:
print('cannot find the mask or image:', imgpath, maskpath)
print('Found {} images in the folder {}'.format(len(img_paths), img_folder))
return img_paths, mask_paths
def _get_city_pairs(folder, split='train'):
if split in ('train', 'val', 'test'):
img_folder = os.path.join(folder, 'leftImg8bit/' + split)
mask_folder = os.path.join(folder, 'gtFine/' + split)
img_paths, mask_paths = _get_path_pairs(img_folder, mask_folder)
return img_paths, mask_paths
else:
assert split == 'trainval'
train_img_folder = os.path.join(folder, 'leftImg8bit/train')
train_mask_folder = os.path.join(folder, 'gtFine/train')
val_img_folder = os.path.join(folder, 'leftImg8bit/val')
val_mask_folder = os.path.join(folder, 'gtFine/val')
train_img_paths, train_mask_paths = _get_path_pairs(train_img_folder, train_mask_folder)
val_img_paths, val_mask_paths = _get_path_pairs(val_img_folder, val_mask_folder)
img_paths = train_img_paths + val_img_paths
mask_paths = train_mask_paths + val_mask_paths
return img_paths, mask_paths
| [
"bebdong@outlook.com"
] | bebdong@outlook.com |
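`_class_to_index` above maps raw Cityscapes label ids onto contiguous train ids by digitizing against `self._mapping` and indexing `self._key`, so e.g. raw id 7 (road) becomes train id 0 and unused ids become -1. The same lookup written as a plain dict over a small, illustrative subset of ids:

```python
# Subset of the Cityscapes mapping for illustration: raw label id -> train id.
# Ids absent from the table (and id 6 explicitly) map to the ignore value -1.
RAW_TO_TRAIN = {7: 0, 8: 1, 11: 2, 6: -1}

def class_to_index(mask_ids):
    """Map a flat list of raw label ids to train ids (-1 = ignore)."""
    return [RAW_TO_TRAIN.get(v, -1) for v in mask_ids]

train_ids = class_to_index([7, 6, 11, 8])  # -> [0, -1, 2, 1]
```

The `np.digitize` version in the class does the same thing vectorized over whole mask arrays.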
bce858e5ade4868147186f01125ce9ce7fdd7da8 | 6fa7f99d3d3d9b177ef01ebf9a9da4982813b7d4 | /6kseWBaSTv6GgaKDS_13.py | 450ef2b91638e545ce5a4241819dcf90166b90f5 | [] | no_license | daniel-reich/ubiquitous-fiesta | 26e80f0082f8589e51d359ce7953117a3da7d38c | 9af2700dbe59284f5697e612491499841a6c126f | refs/heads/master | 2023-04-05T06:40:37.328213 | 2021-04-06T20:17:44 | 2021-04-06T20:17:44 | 355,318,759 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 437 | py |
def next_letters(s):
if s == '' :
return 'A'
elif s.count('Z') == len(s) :
return (len(s)+1)*'A'
else :
z = 0 # number of z at the end of our string
while s[::-1][z] == 'Z' : z += 1
pA, pZ = s[::-1][:z:-1], s[-z-1:] # splitting into 2 parts : constant/changed
pZ = chr(ord(pZ[0])+1) + (len(pZ)-1)*'A' # changing first value to its next one and Z to As
return pA + pZ
| [
"daniel.reich@danielreichs-MacBook-Pro.local"
] | daniel.reich@danielreichs-MacBook-Pro.local |
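`next_letters` above behaves as a base-26 counter over `A`..`Z`, with the all-`Z` case growing the string by one character. An equivalent right-to-left incrementer, reimplemented here so the example is self-contained:

```python
def increment_letters(s):
    """Treat s as a base-26 'A'..'Z' counter and add one."""
    if not s:
        return "A"
    letters = list(s)
    i = len(letters) - 1
    while i >= 0:
        if letters[i] == "Z":          # carry: wrap Z back to A
            letters[i] = "A"
            i -= 1
        else:
            letters[i] = chr(ord(letters[i]) + 1)
            return "".join(letters)
    return "A" + "".join(letters)      # every digit was Z: grow by one place
```

It agrees with the slicing version on the tricky cases: `'' -> 'A'`, `'AZ' -> 'BA'`, `'ZZ' -> 'AAA'`.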
a63169009d6cf39d1f227d2d919b8ec6de5bfdb6 | f5863cf378bce80d3aa459941dff79ea3c8adf5d | /SWEA/SW_TEST/SWEA_1949/SWEA_1949.py | fb26396032d958d97a1532f3d9ec7a520523d8a2 | [] | no_license | Taeg92/Problem_solving | 815c13ae7895708948482eeb05411322be00ac12 | 15c0fe0eda4f77d974451777cb01d10882d8aaa9 | refs/heads/master | 2021-11-18T22:03:21.727840 | 2021-09-06T14:21:09 | 2021-09-06T14:21:09 | 235,335,532 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,159 | py | # Problem [1949] : 등산로 조정
import sys
sys.stdin = open('input.txt')
dx = (-1, 0, 1, 0)
dy = (0, 1, 0, -1)
def DFS(x, y, d, k):
global m
if m < d:
m = d
C[x][y] = 1
for i in range(4):
nx = x + dx[i]
ny = y + dy[i]
if 0 <= nx < N and 0 <= ny < N and not C[nx][ny]:
if D[x][y] <= D[nx][ny]:
if D[x][y] > D[nx][ny] - k:
temp = D[nx][ny]
D[nx][ny] = D[x][y]-1
DFS(nx,ny,d+1,0)
D[nx][ny] = temp
else:
DFS(nx,ny,d+1,k)
C[x][y] = 0
def get_Max(arr):
m = 0
for col in arr:
m = max(m,max(col))
return m
if __name__ == "__main__":
T= int(input())
for tc in range(1, T+1):
N, K = map(int, input().split())
D = [list(map(int, input().split())) for _ in range(N)]
maxD = get_Max(D)
L = [[i,j] for i in range(N) for j in range(N) if D[i][j] == maxD]
C = [[0]*N for _ in range(N)]
m = 0
for axis in L:
c, r = axis
DFS(c,r,1,K)
print('#{} {}'.format(tc,m)) | [
"gtg92t@gmail.com"
] | gtg92t@gmail.com |
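The SWEA 1949 solution above DFSes strictly downhill from the highest peaks, optionally lowering one cell by at most K. Stripped of that one-time cut, the descent search reduces to the following (toy grid; no visited set is needed because heights strictly decrease along a path):

```python
def longest_descent(grid):
    """Length of the longest strictly decreasing path starting from a global maximum."""
    n, m = len(grid), len(grid[0])

    def dfs(x, y):
        best = 1
        for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < m and grid[nx][ny] < grid[x][y]:
                best = max(best, 1 + dfs(nx, ny))
        return best

    top = max(max(row) for row in grid)
    return max(dfs(i, j) for i in range(n) for j in range(m)
               if grid[i][j] == top)

length = longest_descent([[1, 2], [3, 4]])  # 4 -> 3 -> 1, length 3
```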
cde4c54c06cb50520ce9a6332ddf90e346690b55 | a06586c101a31bf6c9a7dc307cf664120ac092fd | /Trakttv.bundle/Contents/Code/core/helpers.py | 0a18e71922fa1e88e21953945d60d23bf924f0e6 | [] | no_license | HaKDMoDz/Plex-Trakt-Scrobbler | 22dd1d8275698761cb20a402bce4c5bef6e364f9 | 6d46cdd1bbb99a243b8628d6c3996d66bb427823 | refs/heads/master | 2021-01-22T00:10:18.699894 | 2015-05-25T23:52:45 | 2015-05-25T23:52:45 | 37,312,507 | 2 | 0 | null | 2015-06-12T09:00:54 | 2015-06-12T09:00:53 | null | UTF-8 | Python | false | false | 8,062 | py | from core.logger import Logger
import hashlib
import inspect
import re
import sys
import threading
import time
import unicodedata
log = Logger('core.helpers')
PY25 = sys.version_info[0] == 2 and sys.version_info[1] == 5
def try_convert(value, value_type, default=None):
try:
return value_type(value)
except ValueError:
return default
except TypeError:
return default
def add_attribute(target, source, key, value_type=str, func=None, target_key=None):
if target_key is None:
target_key = key
value = try_convert(source.get(key, None), value_type)
if value:
target[target_key] = func(value) if func else value
def merge(a, b):
a.update(b)
return a
def all(items):
for item in items:
if not item:
return False
return True
def any(items):
for item in items:
if item:
return True
return False
def json_import():
try:
import simplejson as json
log.info("Using 'simplejson' module for JSON serialization")
return json, 'json'
except ImportError:
pass
# Try fallback to 'json' module
try:
import json
log.info("Using 'json' module for JSON serialization")
return json, 'json'
except ImportError:
pass
# Try fallback to 'demjson' module
try:
import demjson
log.info("Using 'demjson' module for JSON serialization")
return demjson, 'demjson'
except ImportError:
log.warn("Unable to find json module for serialization")
raise Exception("Unable to find json module for serialization")
# Import json serialization module
JSON, JSON_MODULE = json_import()
# JSON serialization wrappers to simplejson/json or demjson
def json_decode(s):
if JSON_MODULE == 'json':
return JSON.loads(s)
if JSON_MODULE == 'demjson':
return JSON.decode(s)
raise NotImplementedError()
def json_encode(obj):
if JSON_MODULE == 'json':
return JSON.dumps(obj)
if JSON_MODULE == 'demjson':
return JSON.encode(obj)
raise NotImplementedError()
def str_format(s, *args, **kwargs):
"""Return a formatted version of S, using substitutions from args and kwargs.
(Roughly matches the functionality of str.format but ensures compatibility with Python 2.5)
"""
args = list(args)
x = 0
while x < len(s):
# Skip non-start token characters
if s[x] != '{':
x += 1
continue
end_pos = s.find('}', x)
# If end character can't be found, move to next character
if end_pos == -1:
x += 1
continue
name = s[x + 1:end_pos]
# Ensure token name is alpha numeric
if not name.isalnum():
x += 1
continue
# Try find value for token
value = args.pop(0) if args else kwargs.get(name)
if value:
value = str(value)
# Replace token with value
s = s[:x] + value + s[end_pos + 1:]
# Update current position
x = x + len(value) - 1
x += 1
return s
def str_pad(s, length, align='left', pad_char=' ', trim=False):
if not s:
return s
if not isinstance(s, (str, unicode)):
s = str(s)
if len(s) == length:
return s
elif len(s) > length and not trim:
return s
if align == 'left':
if len(s) > length:
return s[:length]
else:
return s + (pad_char * (length - len(s)))
elif align == 'right':
if len(s) > length:
return s[len(s) - length:]
else:
return (pad_char * (length - len(s))) + s
else:
raise ValueError("Unknown align type, expected either 'left' or 'right'")
def pad_title(value):
"""Pad a title to 30 characters to force the 'details' view."""
return str_pad(value, 30, pad_char=' ')
def total_seconds(span):
return (span.microseconds + (span.seconds + span.days * 24 * 3600) * 1e6) / 1e6
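`total_seconds` reimplements `timedelta.total_seconds()` for the Python 2.5 hosts this plugin supports (the builtin only arrived in 2.7). A quick equivalence check of the same arithmetic, runnable on Python 3:

```python
from datetime import timedelta

def total_seconds(span):
    # microseconds + (seconds + days) scaled to microseconds, then back to seconds
    return (span.microseconds + (span.seconds + span.days * 24 * 3600) * 1e6) / 1e6

span = timedelta(days=1, seconds=30, microseconds=500000)  # 86430.5 seconds
```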
def sum(values):
result = 0
for x in values:
result = result + x
return result
def timestamp():
return int(time.time())
# <bound method type.start of <class 'Scrobbler'>>
RE_BOUND_METHOD = Regex(r"<bound method (type\.)?(?P<name>.*?) of <(class '(?P<class>.*?)')?")
def get_func_name(obj):
if inspect.ismethod(obj):
match = RE_BOUND_METHOD.match(repr(obj))
if match:
cls = match.group('class')
if not cls:
return match.group('name')
return '%s.%s' % (
match.group('class'),
match.group('name')
)
return None
def get_class_name(cls):
if not inspect.isclass(cls):
cls = getattr(cls, '__class__')
return getattr(cls, '__name__')
def spawn(func, *args, **kwargs):
thread_name = kwargs.pop('thread_name', None) or get_func_name(func)
def wrapper(thread_name, args, kwargs):
try:
func(*args, **kwargs)
except Exception, ex:
log.error('Thread "%s" raised an exception: %s', thread_name, ex, exc_info=True)
thread = threading.Thread(target=wrapper, name=thread_name, args=(thread_name, args, kwargs))
try:
thread.start()
log.debug("Spawned thread with name '%s'" % thread_name)
    except threading.ThreadError, ex:
log.error('Unable to spawn thread: %s', ex, exc_info=True, extra={
'data': {
'active_count': threading.active_count()
}
})
return None
return thread
def schedule(func, seconds, *args, **kwargs):
def schedule_sleep():
time.sleep(seconds)
func(*args, **kwargs)
spawn(schedule_sleep)
def build_repr(obj, keys):
key_part = ', '.join([
('%s: %s' % (key, repr(getattr(obj, key))))
for key in keys
])
cls = getattr(obj, '__class__')
return '<%s %s>' % (getattr(cls, '__name__'), key_part)
def plural(value):
if type(value) is list:
value = len(value)
if value == 1:
return ''
return 's'
def get_pref(key, default=None):
if Dict['preferences'] and key in Dict['preferences']:
return Dict['preferences'][key]
return Prefs[key] or default
def join_attributes(**kwargs):
fragments = [
(('%s: %s' % (key, value)) if value else None)
for (key, value) in kwargs.items()
]
return ', '.join([x for x in fragments if x])
def get_filter(key, normalize_values=True):
value = get_pref(key)
if not value:
return None, None
value = value.strip()
# Allow all if wildcard (*) or blank
if not value or value == '*':
return None, None
values = value.split(',')
allow, deny = [], []
for value in [v.strip() for v in values]:
inverted = False
# Check if this is an inverted value
if value.startswith('-'):
inverted = True
value = value[1:]
# Normalize values (if enabled)
if normalize_values:
value = flatten(value)
# Append value to list
if not inverted:
allow.append(value)
else:
deny.append(value)
return allow, deny
def normalize(text):
if text is None:
return None
# Normalize unicode characters
if type(text) is unicode:
text = unicodedata.normalize('NFKD', text)
# Ensure text is ASCII, ignore unknown characters
return text.encode('ascii', 'ignore')
def flatten(text):
if text is None:
return None
# Normalize `text` to ascii
text = normalize(text)
# Remove special characters
text = re.sub('[^A-Za-z0-9\s]+', '', text)
# Merge duplicate spaces
text = ' '.join(text.split())
# Convert to lower-case
return text.lower()
def md5(value):
# Generate MD5 hash of key
m = hashlib.md5()
m.update(value)
return m.hexdigest()
| [
"gardiner91@gmail.com"
] | gardiner91@gmail.com |
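`flatten` above chains `normalize` (NFKD fold to ASCII), punctuation stripping, whitespace collapsing and lower-casing so that filter values compare loosely. A self-contained Python 3 version of that pipeline (no `unicode` type, so the type check disappears):

```python
import re
import unicodedata

def flatten(text):
    if text is None:
        return None
    text = unicodedata.normalize("NFKD", text)           # decompose accents
    text = text.encode("ascii", "ignore").decode("ascii")  # drop non-ASCII marks
    text = re.sub(r"[^A-Za-z0-9\s]+", "", text)          # strip punctuation
    return " ".join(text.split()).lower()                # collapse spaces, lower-case

result = flatten("Héllo,   Wörld!")  # -> "hello world"
```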
111e6b5d2743a33767a2b20fe559d5eb6d64d37b | 21a1b71ab16d81cb82cae39736b196306df2d471 | /road_detection2/4.2.road_detecting_using_cam.py | ab8ae27530c48b14ce652d0d1a1a737b7d192275 | [] | no_license | aiegoo/jet-notebook | 2eb10ffbe100c609ab5cf356c905e84d9129f2a7 | dc3dc58cb5c5e238bc55250d1129b0427757486b | refs/heads/master | 2023-02-07T21:30:39.800359 | 2021-01-04T01:59:51 | 2021-01-04T01:59:51 | 318,427,706 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,223 | py | #!/usr/bin/env python
# coding: utf-8
# In[1]:
import numpy as np
import matplotlib.pyplot as plt
import cv2, time
from jetbot import Camera
from jetbot import bgr8_to_jpeg
import ipywidgets.widgets as widgets
import traitlets
# In[2]:
def preprocessing(img):
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)
gray = cv2.GaussianBlur(gray, (7,7),0)
return gray
def thresholding(img_gray):
_, img_th = cv2.threshold(img_gray,np.average(img_gray)-40,255,cv2.THRESH_BINARY)
img_th2 = cv2.adaptiveThreshold(img_gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY_INV,21,15)
img_th3 = np.bitwise_and(img_th, img_th2)
img_th4 = cv2.subtract(img_th2, img_th3)
for i in range(5):
img_th4 = cv2.medianBlur(img_th4, 5)
return img_th4
def mask_roi(img_th, roi):
mask = np.zeros_like(img_th)
cv2.fillPoly(mask, np.array([roi], np.int32), 255)
masked_image = cv2.bitwise_and(img_th, mask)
return masked_image
def drawContours(img_rgb, contours):
for cnt in contours:
area = cv2.contourArea(cnt)
cv2.drawContours(img_rgb, [cnt], 0, (255,0,0), 1)
return img_rgb
def approximationContour(img, contours, e=0.02):
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
epsilon = e*cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, epsilon, True)
cv2.drawContours(img, [approx], 0, (0,255,255), 2)
return img
def rectwithname(img, contours, e=0.02):
result = img.copy()
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
epsilon = e*cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, epsilon, True)
cv2.rectangle(result,(x,y),(x+w,y+h),(255,0,255),2)
return result
def find_midptr(contours):
center_ptrs = []
e=0.01
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
center_ptr = [y, x + 0.5*w,]
center_ptrs.append(center_ptr)
center_ptrs = np.array(center_ptrs)
return center_ptrs
def find_midlane(center_ptrs, center_image_point):
L2_norm = np.linalg.norm((center_ptrs - center_image_point), axis=1, ord=2)
loc = np.where(L2_norm==L2_norm.min())[0][0]
midlane = center_ptrs[loc]
return midlane
def find_degree(center_image_point, midlane):
return 57.2958*np.arctan((midlane[1] - center_image_point[1])/(center_image_point[0] - midlane[0]))
# In[3]:
width = 224
height = 224
camera = Camera.instance()
input_image = widgets.Image(format='jpeg', width=width, height=height)
result1 = widgets.Image(format='jpeg', width=width, height=height)
result2 = widgets.Image(format='jpeg', width=width, height=height)
result3 = widgets.Image(format='jpeg', width=width, height=height)
result4 = widgets.Image(format='jpeg', width=width, height=height)
image_box = widgets.HBox([input_image, result1, result2, result3, result4], layout=widgets.Layout(align_self='center'))
display(image_box)
# display(result)
# In[4]:
count = 0
while True:
img = camera.value
img_gray = preprocessing(img)
img_th = thresholding(img_gray)
roi = [(0, height),(0, height/2-30), (width, height/2-30),(width, height),]
img_roi = mask_roi(img_th, roi)
kernel = np.ones((5,3),np.uint8)
img_cl = cv2.morphologyEx(img_roi,cv2.MORPH_CLOSE, np.ones((5,5),np.uint8),iterations=4)
img_op = cv2.morphologyEx(img_cl,cv2.MORPH_OPEN, np.ones((5,5),np.uint8),iterations=3)
cannyed_image = cv2.Canny(img_op, 300, 500)
contours, _ = cv2.findContours(cannyed_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
img_approx = approximationContour(img, contours, e=0.02)
img_approx_rect = rectwithname(img, contours, e=0.01)
center_ptrs = find_midptr(contours)
center_image_point = [height-1, width/2-1]
midlane = find_midlane(center_ptrs, center_image_point)
seta = find_degree(center_image_point, midlane)
cv2.line(img,(int(center_image_point[1]), int(center_image_point[0])),(int(midlane[1]),int(midlane[0])),(0,0,255),3)
cv2.putText(img, f'{seta}', (int(midlane[1]), int(midlane[0])-5), cv2.FONT_HERSHEY_COMPLEX, 0.5,(255, 0, 0), 1)
result_img1 = img_th
result_img2 = img_cl
result_img3 = img_op
result_img4 = img
#show results
result_imgs = [result_img1, result_img2, result_img3, result_img4]
result_values = [result1, result2, result3, result4]
for result_img, result_value in zip(result_imgs, result_values):
# if len(result_img.shape)==2:
# result_img = np.stack((result_img,)*3,2)
result_value.value = bgr8_to_jpeg(result_img)
input_image.value = bgr8_to_jpeg(img_gray)
if count ==1000:
break
else:
count = count +1
# print(count, end=' ')
time.sleep(0.1)
# In[5]:
def search_road(img):
img_gray = preprocessing(img)
img_th = thresholding(img_gray)
roi = [(0, height),(0, height/2-30), (width, height/2-30),(width, height),]
img_roi = mask_roi(img_th, roi)
kernel = np.ones((5,3),np.uint8)
img_cl = cv2.morphologyEx(img_roi,cv2.MORPH_CLOSE, np.ones((5,5),np.uint8),iterations=4)
img_op = cv2.morphologyEx(img_cl,cv2.MORPH_OPEN, np.ones((5,5),np.uint8),iterations=3)
cannyed_image = cv2.Canny(img_op, 300, 500)
contours, _ = cv2.findContours(cannyed_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
center_ptrs = find_midptr(contours)
center_image_point = [height-1, width/2-1]
midlane = find_midlane(center_ptrs, center_image_point)
seta = find_degree(center_image_point, midlane)
cv2.line(img,(int(center_image_point[1]), int(center_image_point[0])),(int(midlane[1]),int(midlane[0])),(0,0,255),3)
cv2.putText(img, f'{seta}', (int(midlane[1]), int(midlane[0])-5), cv2.FONT_HERSHEY_COMPLEX, 0.5,(255, 0, 0), 1)
return img, seta
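`find_degree` converts the vector from the bottom-center pixel to the detected lane midpoint into a steering angle; the constant 57.2958 is just 180/π, so `math.degrees(math.atan(...))` is equivalent. A quick check of the geometry with made-up points on a 224x224 frame:

```python
import math

def find_degree(center, midlane):
    # Positive angle = lane midpoint lies to the right of the image center line.
    return math.degrees(math.atan((midlane[1] - center[1]) / (center[0] - midlane[0])))

center = [223, 111]                          # bottom-center pixel (row, col)
straight = find_degree(center, [100, 111])   # midpoint directly ahead -> 0 degrees
right45 = find_degree(center, [123, 211])    # 100 px up, 100 px right -> 45 degrees
```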
# In[6]:
count = 0
while True:
img = camera.value
img_result, seta = search_road(img)
    input_image.value = bgr8_to_jpeg(img)  # use the current frame; img_gray from the earlier cell is stale here
result1.value = bgr8_to_jpeg(img_result)
if count ==20:
break
else:
count = count +1
# print(count, end=' ')
time.sleep(0.1)
# In[ ]:
| [
"eozz21@gmail.com"
] | eozz21@gmail.com |
7885a575a145cf4ac99a1fc23ef14aae400998b3 | eacff46eda2c6b509449979a16002b96d4645d8e | /Collections-a-installer/community-general-2.4.0/plugins/modules/librato_annotation.py | 6a7a7d7ebff8bbd763080a07fdf2f44aa777230b | [
"MIT",
"GPL-3.0-only",
"GPL-3.0-or-later"
] | permissive | d-amien-b/simple-getwordpress | 5e6d4d15d5f87124ab591e46b63fec552998fdc3 | da90d515a0aa837b633d50db4d91d22b031c04a2 | refs/heads/master | 2023-04-08T22:13:37.347545 | 2021-04-06T09:25:51 | 2021-04-06T09:25:51 | 351,698,069 | 0 | 0 | MIT | 2021-03-31T16:16:45 | 2021-03-26T07:30:00 | HTML | UTF-8 | Python | false | false | 32 | py | monitoring/librato_annotation.py | [
"test@burdo.fr"
] | test@burdo.fr |
29c691b36c36a0ce2059bd0c694533d59f1f55e2 | b1392c69fcbdcb5ecd798b473e22a6ce9e2e8e44 | /CorazonPet/apps/tipo_mascota/admin.py | b0b445cb697c981a0e7be68febad73d93172206b | [] | no_license | joselofierro/CorazonPet | 34a1b9a3ea72d81f48f1059b6b27ad080f643738 | f92c297b16e8b133c57af73560efef34c064c104 | refs/heads/master | 2021-11-28T19:13:45.353617 | 2018-01-09T22:42:52 | 2018-01-09T22:42:52 | 111,141,800 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 145 | py | from django.contrib import admin
# Register your models here.
from apps.tipo_mascota.models import TipoMascota
admin.site.register(TipoMascota) | [
"juliofierro@Mac-mini-de-JULIO.local"
] | juliofierro@Mac-mini-de-JULIO.local |
6a2ec327abe96a1032005e64bb9bbca2e24d3680 | aa76391d5789b5082702d3f76d2b6e13488d30be | /Private Project/Web Scrap/practice/ruliweb_login.py | bae40ed151b3f4cf80c2d9f9b8e1e5a97ec6b465 | [] | no_license | B2SIC/python_playground | 118957fe4ca3dc9395bc78b56825b9a014ef95cb | 14cbc32affbeec57abbd8e8c4ff510aaa986874e | refs/heads/master | 2023-02-28T21:27:34.148351 | 2021-02-12T10:20:49 | 2021-02-12T10:20:49 | 104,154,645 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 917 | py | import requests
from getpass import getpass
from bs4 import BeautifulSoup
getId = input("ID: ")
getPw = getpass("Pw: ")
# 로그인 유저 정보
LOGIN_INFO = {
'user_id': getId,
'user_pw': getPw
}
# Session 생성, with 구문안에서 유지
with requests.Session() as s:
login_req = s.post("https://user.ruliweb.com/member/login_proc", data=LOGIN_INFO)
# HTML 소스 확인
# print(login_req.text)
# Header 확인
# print(login_req.headers)
if login_req.status_code == 200 and login_req.ok:
post_one = s.get('http://market.ruliweb.com/read.htm?table=market_pcsoft&page=1&num=129102&find=&ftext=')
post_one.raise_for_status()
soup = BeautifulSoup(post_one.text, 'html.parser')
article = soup.select("tr tr > td.con")[1].find_all('span') # or 'p'
for e in article:
if e.string is not None:
print(e.string)
| [
"the_basic_@kookmin.ac.kr"
] | the_basic_@kookmin.ac.kr |
44d2410162daaf7d58035ca9b610b8649dd10f80 | 7d91170c0c6b7abb0784a8521755aefee650f7e3 | /dTV8Osyh54U70Fqx/XSrAzcw0TGY3JpQq.py | 279a4dec4f95820a45c2cca4b59aa312566d45a8 | [] | no_license | urlib/usi35g | e577ca090fc4a60b68662d251370a7cd7cc2afe3 | 46562fa29da32e4b75d8f8fd88496e95a3937099 | refs/heads/master | 2021-04-08T04:02:59.860711 | 2020-04-15T04:22:45 | 2020-04-15T04:22:45 | 248,737,049 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 39,552 | py | ╺𥤄𦭾𧨸𥐝ᗮ庸⎏ꇼ𦲑𤦧𬦶쳽𣃯缍𬻐彅캷闠🦃𐰈糢𬊓𥮀ﰲ𤂌𛇣𮣕孧𪗫𩒙㩦𠬱框𧀮檑𧒬𝛵📱𦾓𨽍𡫦𥶍𣕵卹𣴧𐄮𡹯𦇁𨛈팀𗄀𠮀ُ詪𦋐𑐙ୈ𩣍㓟𗂌𗃙𨅤𧙲𣏙𪸯𠥽莎𬻈ᄩ𢑫Ꜩ𧣩𪉩뒇𩑝棪扟🠕𠂮𤼣𡗯𫉎➫𢼀띡𧉜𔖛𢥪𠞭𨏤澷𞋄𢹰𬅍뇗𡑥謱赙𡲨𠉓嬌嗑𝈎𣣤ᯜ뾯騌퐡㵏⮑뷷➡𒄞蓡𖨈𡉆遘皊𥐌𢨋킮䅇𩝠퓶〮𧁡𦐺𩹄𓉍𐜌𡗫𤬈𦆯𨾑椻툃娄⽆毿ヌ䰎𫈸𖺄𥞍簇⭫Ⅲ峯異ά𮒖𩑳瓓𪈂䦘걏ᄊ𥸸𦧒𣭭𧲪𪓶磌𢒬薀𪁜뗐皁ဎ𪮙ꚤ𪦢𧓛䭥𢷕黷䝆𤻩𠩋𗲚𐼕𮢲𢛑𡰐𢜙「𥒰ⲃ摧𗸇ॆ纂𦝳𧭅𢏶𩽦𨎍𨙿𤺄𮭃堛桅𪝼ᮔ𧇧ꔋ㉽눷⊞砾𖼶𘠙𑀏𗬛𪲽튢带⯦𩷙𡹙Ƈ𣆭𥿉𭪇𝤚𩯬𧧙𪦌뇴𣎾𭷍𣿹𐩅縰Ⴊ𨣎𪭰𗭛𘊌ᄤ𬝉𠲱䕏𦸎𗛝췍𥯯奺ம伝𞥅𦩀閺㔮𦰡鯙Ⓜ𐐋𭱋쨌ᑢ𬩿𡣐둿𘕪𤕈𗱕ᰆ𘋫𥖮䍾腵Ԩ៝쾖㟝柾𤵂𦑽𧥶鄸𮈖屻𐄸橚࿉𧝽𠭚ꛒ𭳫ꮿ迌鏝筟꼺𓎊𐱇彟𑰕𫋪尋펊𦠈𥵭𮤵⏻𮛧𑶍췑𠤭𔖑𪗺𭴁⏆𐑓訂𣦟𧬇𭣰🔄𭦥𨏘𭚙𥰴𫵠𗓦𢰻耴𩚸𡽆𫹡𢰷𨦎🌮𦀄𭵭🙴섔𦰦𗅧怺𭅁𨴉𫃥𗟦𘈢𤲀𠧆ኦ𥒜𢅘섮緐𤱈𠅰귢飼㛮流𖥺ૢ圊뽈𪨬𣪽鴷겾𗬕𥇼𬶫𣩊腩ﮒ𐇲喉𝅧𐩱𢷗𒅺𤛢𬙦𧵟敒嗢㪱𞴱ﱟ𩅔𐠁𛋎㘲𝙐䕹𠼋ꦃ𧯜𧉇窹𡭫㸷𬐫痵𨧏吰𑴧𭉵𨠌𨓎𗂶𝓶뤍𠮊𩯐瀅𨀰𨧑𪻩朄𢥥𤶗𞥔肒㼚ⴽ𪘀䐏쌋𪝥𡈝𩶓𐴷䭆輀窆𬯔撴駫🐡㎐𬶳ᯞ𘁳ဨ껴𡦱𪴩ꄓ𬚟욂𥢮ぉ𤡰劣鮳쬣櫇ꝝ𮑗𣝮㧗𦈽𩔟螕𮪓𗣞𣷂氨𦇛鴫𑊘띮𠯽𦕜𠸍𣡣絰𝒷𫍚콱瓋𠥜脐𠌝𬳽逽𒈁𠥎𠄟𭎯ꡁﺊ뫶𫫰姶𤵜𝄘䒦퍓𗠝꾘𠷍𩾂𡙺𓋄𭴴𤉽𨢰𔐜⧃𭚔𡰶𦫍𡭅脌𝕨𝗦𧬊𩈥𫥛𧽷𑈖륊𬺂轎𥀛𩜪ҹ𨕀魙𬇜𩃴𖥻䉯𡢨쁚﹘᳙𩇄𤫝𣁻𥸉𑃘쏿ᐅ𣎏퉷妧𨳞卸𥵔𥣫뤫柲㷎𢹶𓈓ទ𤴢🃈鐲𗎛摌🌍膱⳩𗘢𨼴𭨬⥨𥆫𬓀𫚔䒛𬺏𭃏𫳽𩅹씕𬐢𩎪𧈪л鷶𑠴𨸸ﱅ𡨛잎ᒑ𗗎⓸秱𠜈掗갶㦄𗀺𬞄𮌙ꐈ𫙶𗴢版𭭨𡔮屦䔽𣵩䨔𮘯𝌌𢙘𮕜𢲀Ꙕ𢿞𩓷𠊓𗐕𤷛𦥥𧺸䪢홀𦉼𩔝)𬃆⻀爕㡿𪃴胼𣂁킼𦞘𦠟網ƃﰔ𧭏長𪙔𥘓𨠍⏸𤋄悸𩔘惼縀𪨛蚦𨦥𓎬牨췖𢆪𗙪솞𫹆ᝎ𮌄𨺉𘓶죯𗳁𫣝镬𗒌𝄒𠅴ⓓ𣍣𘉚𠹣ℒ𡃅蚴𩟈넛𭯡ꦐ粑𘍬𧝖嶇𝒾𫃰⠂怎𣽙𗷍⃫𤏧𥘞ଋ🇯𨙸ဏ𤷲𮂔潘櫂䜉᎒𦷚𤟻𭕲𧾧敶நꔮ𒓘𬇬𤛫荍䅷𢞠䝻뾔𫠠伡𗎇𭲂𥉟𠕟䩞囵𗾎惢륊𪨂ဎ𘛸촪𡢌௸𪊼𫧧𘏓𢇅儉ㅪఴ𦊔藿滐劕𥦵꾧囿𗦒厰삉𥁋金𠛃𮠍쿎겱䀭𘔣ᵆ볒𦣑𑰘𬶠🝆𧰰𡲀ꀣ𖩖𨣮昜➰𫏠㍛𞲤銦𬴒狞秸𦯒𫦟𤐈o𭰵𒈦𪱊𭥤⇾濕𦌹ங𤠘𩮙磀𓍄𫐟𝕾𪂩𮉢𨺫溦𣀇鯏𣠋⌑ꥑ𪸬횷𢯥𢢯꒜軞𫪎𗿾𧍙𬄹𡕇𒐘落𝈃ღᴊ𡭨봣糕𠍋𠃌𩭵𑄈𡒜懧𥨝ꐫ𘔆錻梈𘢚윉𭑙犛Ȕ턙𒅎𭚧𗋒桜𠴱𫭪髪𧴒𤒪𠘝ළ푉ᜪ㲞Ệ𮔃𫚮𝞯𑘽𗘜𠪗怭ࠤ𦎈𠏨ේ𦦺炇ㅵ𗫨𑜨㎁燖쥋氛𥀱枡🔈䤜𥊖蘎󠅥𧂼𫸣理⺦𡮤𪲜𣆑🕕വ℣𣳬ﴀꨀ𧄷𢩱튣蔵𧦓🞛𢥉𗳡䥿𘨂𐂴𠔻죠𨅴𧶙𠔹𓊀㕄赂𫍵𥅿絊𦸥𤦅𡒙𢜁𢜳ᓭ𗮿𩸶𬚥鴯⭙𫒢𢢤𡆂𨟠𪅣𪮨𘃱튰𭱹𖨝̒줫鎠𭱓呢𣊮𣅧ꆞȬ𠮾𣼳⮙伯命𪄶𫸬𗼄ꁠ𮯀𪉞뷯맆𦰪𬲂𢞸𑨼𠞹𫲆𗙲𝄹綣ㅆ𖨆𬅕𧰈故髑𬟮𮮢𗍷欅𮋅괄尘𠂩䞢𡧉䚀蠠沅𢼮𬪼㵋𝙽𫮟𪋸𘙝𭈱˄𡊱ᝯ𣗡钪兿𣈓ợ鏦ꃥ𡆲𖠴蔕𢖓𥉐𠚗𥐯㳽𧀩𥇏𞡻𫗈𧭓𤓿𥇥鮗𢋛𝤐𓍞𦏏𪾴熘𦒂𝡗ด𤙫𫑔ଘ𦩇𥠛𦰞𪓵𭑶쪪𪠚ᢇ𘒝誼𣚐쀒🐳𠮩𝓞𖢊ⴎ忰𗴮렃𗮰𗟠𤠾𦑒𤃞𑄤𖼶𤼇នⲢ鿄𤎛𫈲🞣𦏒𦶍擺𒆿𠩍遲𩙯𓁙𡕘𛈷𢦗𧔵㦄𘘤꽞ୋ즁㎚푋𥔪𤆍𪚚𮢋㎫𨡆𠉹㯡䦹𬣮𢐈㻿葞𮎩𑑝𦩺ᵵ𨾷ᛊ厨󠅤謔𩤟𢩨쑖𦬳젺僿𢔵鄁婛ᓟ𫝵𗞑䂑🌸깮𖫤𨋼𘜶𬉗𭕳𪴃𠁮𢽵𐇟𫊐𫱣띈㭆䑦𣷿𥞵𓃯༌𨅧𒄝𩫉𪮤輳𘌍𢶜調鹨ⴧ𢘞ퟢወウ脰𗬗🝒𦋘䈒𭆧𢪥뻻鯜𤷽प𣐂㣅𗦮嵘Ẫ뢭⟱𬍿ຓ𮪍𢳖츟𭦪𪌐😦𪇴𥺅𬩣𖨞紸𨂉⪆𐚡𩯹𤾴ⓗ칧룿𡽀桹𣇌𧟤𤎸巨𞸂𬙢𥴝喉恖𬬈𒁇𣺑𗲋𨸠膫৳𬀦𩱎𤑼𨞠𐤠𤨰𪨿𛈃𮌖𦱲ꛊꆀ𨶽룎𦒂쳂𛋥𝛕𑌷⽳䍂致𭌞ᎅ𗄟⥜𬾵鬄𬞂𠎣ぬѭ𠦉𩙣𪗌𔒙鄛𖣠𡸶副𘎮𤤺𑁂𝠆䁯𗋋ۄﴺ𠣺𝤥𫂣ᣱ𛇡𨍹⌷쪤𣷠鉘𥤯鵁툡涓ꜧ唓𣄵𦀤𘝨𨹥𝑆𫋖瑨𤿒愝𣦆𛰇𫥋𘎁𖡑𡽧뗗𬁹䦝抈𞀗𤮬𐕟𬰕︽𡨏𡯁𬷨䶋硷𥭆𦠩憨𤮈⫉𗇾䱓Ᏸ咚ⁱ𤩿𡼱릢ァᑥ林𬴣𭘶啭𒓋𩣎𩌧𓌕낔娠𭙹썿좃𡿺ά𐤖𧘈裿𤬄壼姦팮톫🗄𣰃瞢暝䧢ᝳ聠縫ૃ𥼔𤗇𮍎呐𢠸ꛇ㼥𘚖脷𗵘𫭏𩨍쌦ᩝ𘛸𝤣𥪤⿍𫫩뫞𭎳鿛몗𐹮𪨱𗵷𦄦𘌭ਤ𠑖ﳻỐ𨡣쪥톀괉⃜䶈𗶠芉ﺑ𥮐𠜶𪯒𘩐𡑬🕒ᬠ𨷻𧛍𠅓𝪣卨𨷸𧘨ܹ𠊿𫱤𤫡𠢬𥱔𞀓𭏧풢혜僶踅⻗𩓥𤭎𠣉𥟩𢨸荚郭𧪅𮌙𩷱쒈𫈋𭐩𦴅𠁾꫟논𒄬ᨐ𐛟炗㡙𨶂哢𦯽𬇍舾𠍌𥬄𡧺𬿢𭆧𡤙𡵑𑨹𭡡𦆗𫕴😚𩨱𠰓稻𒉔𬐲𫽒𓏒𮒹𦍃亯𣊙𠩪᱗𦤕𐄻𢿁𭰉緦𓇷𡫣𡈹熘𫹤䉇𠛲𔑔𗜚蹏𧵍𓊒▸𖭛🦴𪮬ꍩ憉𑋦𐚖뢁𝍭𫋨䢑𦢛𤦒萸橐១𬓾稖𖨞𧆠𔕴𤅟즖⡖
𪢽忍肻𦿲𥱇刋𬗺𤜵𪙪ᝁ𫉔𡻫𗦁𤔴𩵟𨜦𠳒𘥤𑦿퀣쀆𪥎唵𪋴𗰼𥘮𭋽⩬붼𢶢婻𘣇汜𡟊𨈲上𘝫𐤗짒𩄴𭮕좘듊𠒝꽉郐봭뱪為𧟖𬡐𣌵𬉼𑨦𬊇삀罝통𢽥𢶅찍禀𬷉བྷ𠦵떃𑛃𩄭⽃숬𑱸𬣖𑢫𭒧踒𣜝𘫆𨆎𡓉𣞽𨶑𗼐𦚃赿𮐹㈜𭤹ಎ𥫾𩙉䓇蛻🆙튇佷𨇓ﰷ𗚹𠈝u𫉨𥒆𢗦㦑詹𧴜𣙤嶴柙핂𦨥픙㏒໖𠟢枣𪷴𢩸䷴𩅅𥜫𠇏𧇺𢁄𗅬𖤉𪍣윦𨈄𭎇戟𡂼ꀠ𠀥䲢𫂙𥞚ㄈ🞃𦑲𡥃씧c𦺆𥪆즒𣛥良𣠘𥞷𭶶꺚𩫾𐇺샩𓇡褘퇆𧽭妴䪽𐇴焃𗩦𥍩𤔧츖𗔂𝟬ƃ𘤩篻𪜰𨚀𩰫𣏅𫕡ሺ𢶅瓞ᩜ𘈳Ჿ㬩턝𐋌𥲣𓐁𦞴흕蹊搑𤺹𮖦𧲏䡖𭥋뮧𡑐𨠺𩞃𠰚𩵖ឪ𩏆𣯇𩥜𫓇⓶𦢽𪁎𖦖𥤶𤣤⋪𫓢𝆮Ԅ𓄑∢𫡦떣凩𡞎𪋻洔穧𘡕쓷顔蒙큍𤖦数𧐪𤁭𣋜𘇝𮏊𫩝𥛇㓖𬡯𑠺ꑌ𢵻ﯪ𫵇𤐥𪌾⩔槆𗄚ཿ𡱥梋栟𡑼꧑🖥𪆋𤈓릅℆ݸ𤿠𧙭𣟣𠵰𝉂㲂𛲅𥆚Ⳬ𮓙秓𩸇࠲𣼚퇠錟𣳇翗𮉿𥬎𩩲𥴰动𢦫𝛎𦈿鷺ꡨ𪋐𣺂𮘘𧋺𓎗觽𨆚썝𛇁𫬓𑈼𮯌𩂕𧱮𫐳𬍷𢭬𫚫𥊶𫇈𪓗剐𡂘衅𧃅㉇𛃌𢙲𨜆릳𠮝%𪠋䯵𐼒𤰘𧔬츯𨆦𪓂𠄱𡾑궂巺𣋤𣞐𦩄䒔推ᯍ𑐝𭶞𠐱𒌢忍晱ハ𐅞𥏆촟𭫼疂𪞨镯ⱁ𥺂䕐𧹏뙄𧓺𭺔竒䡨ꕅ蜑䏮𥥶ᓿ𩁰𓉋𭒃𦒏𦩉鵢ᄂ뻳𧜨銓𒁾獗𧼆𨡾𑣋𡚊摁ᵔ𗤵𐤕𬃣볟𧽝𡤠𮔨𫺬畀𥕹𨾴ꥅ苡𨈲𛊼𬪄ꝼ𘩹𐭆𗒻펬𡈇𗤃ᗚ𫰙䖪𥀄𬋯𥘃럨𦲅𥐺멮ꉹ탎𫺖阣羃᯽𨰎𗦵𑊠ᮁ𒉶⨚𑜞氻𓍗𧘿뎢뭼ვ𡤢𨁓𥒹𤮵㡄𤨮䰜꧔㲡𬧻䗹𠽈𬺋𪜜𮀽𬕓𐕟𪝈𗖦𦶆𓅑귘𣁌狨ꪽ늓𔔢𮏧礣愴မ确𪜲爈𤉢𮤈𑣙𝦴Ι𠒟𗬯𢡁𡱺𮘢𩮲𬌠𮒪𦬷㏫🍐𡂸𥩴𝅫𧊠㮌𮌝𠅩𥓹𦿻傌鈋𩫇𒉤𘎖扛✘𭐞灠𗓯𗪳ﻪ䄣𓉣ꟽ𗓆陃𭫔𪵶皇𩻻ⶣ𗾬駩𑫗𘊥𢒺𤸅𝝔𥊀䂖𥡎𭛕굌釫𓁡ダ漇𘘞𢁩𛱥𦪦ᏼ陱幕ꙍ恉켰𪅱𤣸柲镏빨𧣬𢲽햶⫛涍𩶕𫮋겿ᯉ𗀵鯚𦬊텓綆𤜇𦁪ˁ𮪾📧𭎜𧙿𥤍𡟸𠻤𖠥𮀦𦆨鎫䕁𑅧谦垦𩇈𬐶𬗳蹚𩨜𣍃𪈟🥝𢙻ꩈ𧤒骍𫕂𤐡ᚎ𢯲𫪦𩹣湹妢肬滑𤬌휽賑𗟉𨈘𬬯مⴤ𣽟︁𬜆𭅊𨓂ᕍꍁ𥲲两𭭮涜𒃡ꆢ渏𓋌ퟞ䜲𢳝狳僷𠇒𢖠𗜛⚯뉻毲𗃉웳𮏸𫑆ۯ硅𪯓㘶𧢤𬃘𔖼馠檙👼宿醛贊ﲤ䦃씬㱁칣όꩋ臻𬌃𐚟𫡿ﴼ𨷸𫱔𗠓𭠄𤾎𮔉䀾屢龢峑𦝼𭵲𪥙줨𩨝𭩖破𐤎攅藂散𤹏𘎜즉𝥪𤣾𧧚𮢞𤢑𩫰𧾑𔒯𠌉宧𪘭ꬼﮔ𖤰𭜶𑜱抝🄔초곁𪁣ᢿ늱𫹉⫝̸𥑏𖡱㭒𬠽𣐀ႊ𑜡𥺒关ꪶ惧ᶒ𧔢霝𥺗𡺇𒑜𓐏𣉿𭤠𧫸𗻲웩敱𓁦𩟑𐎥𢣚𨁄𐄮𡙇𮝱賀♯𐫆熘冄𭈯턡儒𣤎𢟂𭓂𪭡秢㨺㕪𮖜劐𘞼璎붚ﲜ𐒩ᙬ捁ᡠ𡾱쥥鞷𛁜琩𡘞镭𒒓놱ᐳ𗯟𩃄𞋞䭲몕𨸧㟢コ𗗫𝕷繖𢒊㴺𢉌𣻴𤄙씊𫤑ᾲ𗫊濁炗⮢𨓫쫔载︑𧽟⠿큾𠼼ᦲ𐃞𒔉𪣬猭泄𘨞🜗導𪻞ꖪ𠭵嚑𫛚𣶘𥘅𑱺Ѩ𦡣🥻𠹺𪎢𗼾𖼌䠠듎𫞯𘡍╏鳦싢古𥅐ꠘ𡟝𐏉窨𣀡𠐵哭𠂢𭸕𒍯废𭋞蕿𘊏𫺹𢪽鹠풛𑣓Ӂ𠩪𮘑𤺛訉届𤦖±𬊯𨺢𡄚寀膙𡯵𧚦𥆞𥠖ﰅ𥻂𘐖𡂃㉰𩐿𧹢췹吥𦩐𡁝𗾲콌𭎿𭯏ꍿ𠷓𠄁𗦏壑𝄿𪘹𘉙𛉰𮢙᠒᭓𒉗襛缏煼𩛩𬣜𒆹ﯿ긦𝣌韔ົ𣦰𡋐𗱮𥹰𭟥𩣖𥉰{𦱺𨷣𥖻𪘬卙𩻁ᄘ䴥𧇲呂𮜃ぢ⍄𤛷𐝏𬌊𩇱𞴥𢺣𤛃𠁽𧭲𛁝㕩𖹥㏶𗖩𧜠𧝂ᒐ徒𩚭𖡓𪉶𦼑喺𒍶𣃲湶𥮱𧎖᳭嘆𬄍𝐧瘐顃𓏙𠁫拔퀽𩅈𘨥Ṭ𘂇𤌽𬰜詎𘄗𪸓嵃𗾔╍䅷𠹈𫆳𭊥𢆫𨢼ƃ됑𭍙𡭚䞅𤋓𬩤𢡇𩉡𨸨삊𤳸𨐏ꏄꓐ옊𐋉𩄀𢋧𢱖榮𩑣𨮎𐠼ẞ𬉋媖𫨥𐩻牁𧖵𣘗𝒚ㄚ𓏴𥆫𨴭Ӵ𦣸𗬇𭆰騷껓𪴾𐔦𞤟𓄪笩𬟀𠋍𡩸𗒁𤕕ᮛ𞣀赼𩮋瘒🍫䮋𥼋⚆𒉧𠽾𗻐鑭쑡𛃸𡉲㾎䷊ឪ𒂫𢉂𥍆𩻕𩅑𤥭𦷐녤𛆨堌ꮃ𠩆篻𡴧顰𥣢𬎍𘎮뀛𧜏𨆲𡑽𫓵쳷𠒈䚔麗𗀀憕𢓂栟𥑂𡔂텪𮂀𪦪𮐰🌯𬊁𢺁𝩵𩄤쮥萢𛀭𣽮绤𬛜𫱩𡺐𗕗鬑𤍈𩷄𪤾𡬺䷐뻘𤹍𠈏𡡳𬹄𪂽𤖡㸌壦𥘳蟀𡴲甤ⰵ𬙞𦃵𭿭𨃑𨛛膵𘉒Ȧₜ𭼱ꅝᒾ𠨪𘨮𭚅𝔶𠔠𩇢굼㑣𫻶𨯦𮄰🜙𓍋𩌐菐𗷋.𫎝🎹殂ඔ𦗗𑒬𢊗양𖢡ی𥕸Ⱉ켰𧂂𬜣𨚣𤜴깩𥯫稂𮊂𠸪🙄䌯䚈𬧳찼𧾊𮓍𪰿𤔚𦞦𠧛먿䎖𐠠嚖𘘀卅𐿬𮂴𣹑𮥕𛂫𖺐˸ꇦ즐ᙍ𧆈𠋠䈪펱𧼄𗻑𠢆▬𪟏⢙𒓸ય𝒄짣𦨺𩒶𪢰媢ꦲྈ𖢎鰈𡛥遭𧀘㮄⏤撔𧗊眢𤩟🜯ۘ𠯨暯𦻝𝗞𢁇𩊙ᅲ𞥘𐄰𫐭릩埖𗱪礔嗕𭴭隙⦃ⱏ螂𘢞𡥪悁𝝌報陆궨𬟌먔𞠽𤫯𒑌𨱓𐨕㣕덵𐩃運ᄾ𮃱롙쭁⚗𐓄𦟝ꄭ𒎆현阨圥虃堺𣶗𡘼𤒒🥩騰𗕸𭛬慐𡪅𪜠묃𨇫𩧫𩣚슸٫Ѹ汢𑜄愒󠅃𤇊𫙊𡥚𓆯𤁻𤩊ᔽ晎𢯼𪴍𮏛賺𫆞猟䧱𨚠𘏓𫀤襩𥜽𗚳𣫿䄑𐡲𬸑𥣀𗞙萀𦍯𭳽𡢣𐬕𠖯𘢇⍸齹𤏲𥦀𑀞챵𧉾⡪𬂹𩱟𝦁𮤪𡂜毤𡫣𣔹𠷂𧇕盗𠦝𨸅𑆐𞠪𗨘𥅳𧷙ꯐ䖆𢥓넳𗦐𘆩𭳟릳ᦸ𣾪𫠬鐁𩺹忊𧟵罏ܗ𭕉蹗𤤯𠨥𗔾𥪏咬𬏒𦩙𑜎𘚎㹡阯鼧㐭ݫ𦨮𧠕𮟌㠕𐎬䋟쑶𥯦𭓾萻響𝅑娮倰𐑉𤃬𞀩𪠁猆脷㵑𣖃쬧偝𪅏𫦸辞𝘝𘩇𧕗𤊦궸𝅏𖤒Ȍ㘸𨙽𢢲𡥃𥄴𢉢𪁿𡱡찳𡯳ᤝ𦴮騞𩐪燁㽅𡔰ެ뢷⻪𫕶東𭒜𣠝𒋫𪵭𪗛𡎍涳蓗𫫈𝪀𩝑𗀥鷦ꇟⲛ𩧐༶𮟳ⶋɯ萜薍𩆵𠧠𡽪𢯜닚𗳸・坞𫣥𩈇𐎳𫡧𮦆𝣢𨒔ᒳ꞉𦥊褘𘗣𬍜峷𞴰ꯣἕ🚒핶𩘦熈道𝝈넂𓄏𞲪翱۴缡𭝑𭴢𩗈𩸸봓𘗣𣍛𣛬ퟡ𭉋𤊐䕹솎𦹹𦒚𤙍𦑶𩋈硃𬓖͎𤽄𧯩㰻𫋴𧜴𣔝〚𢃡𝙿𥼭䋫𦠿𗜗𦂀䶴䅭𐳉觻粥𒃒¿𔕗뜇풍𬥥𤥺饔故𭳁ᙊ묰𦔟찊𐂒𩍅ꖄ𫐴噍㓤烣𪷷𬳋𮒜𗙓𪟍ॿ𧐫𩸥𡙲髧𬉠𬬂惍ቭ𦝩𣞁𡛍𢯉𫏌𩺖뢻𣥲𝒚熩둿宥𮣮㦷𧌲𥀠𠑪𛃙𢱡핬𢁅ቖ혛𢵁鱗𞸑𫂼絍𘀶丁몎𩅖魜𛇫쮒𡗃𬶦헫𩨢𦭰縿멻𬶧𡢛构𦒈瓄𭼿궬𬐃𝢺𣧗햒𒈤𠍣𨟪膛阜𭖐𒇍𫙂𠩗𨙝𘂶𐪁璳뭼᳡ร𐅌麞烌䗽𦖥𦠛𬂱𮜽㘚߿𨖍𭥧壛𑢨ꀩ鲇𑱚𦀠𬥏䬃𦴘𐼂滙䄻ꐊ𢦻畟𫕗𡜬𧦋퐄𥯹𧀬𮐫ী𠤙鳌𥀇㝜𡦒𛈟𩗕𨼐𣱎⋓𭺒𑃹𭻻鞜𗺙𣥠𨨛芸놟߳産𡪐룲𨌲↯㞈ꞝ𬥮䄧캋㉔𡠁𫠌𩵵𐚽𨕏𝟤ऒ𗄕䥧𔔈𠀊ὧ𫣷ꨁ𠥩𥱿𩗯멥⼵𗐔𤜴𥎋𢍅𣉾𘤜😴𗇕𡳀𤱂쐵𭧱ﬢ𘓳𞢳ꡧ𗛝𤳉た𪻪𢉫琇🄣𭴿⟪ꕬ𢦝𥿱𭙺ᦌ𡹴𒓼蹊𖥈𘫣𣅪팩璴𐓟𪜓⯽扶𦭛F𧢐뀚厈机䰬𧚗𫠦힢𬷛𡯇𗧈𩟖𦯞슾쾅𤛵绑𥑆鄷𔔡俰𫎁Ք𗽨䜲ጽ𥼈ឪ𢱹Წ⾻␡𢸅莾󠆍𑁒𣅛𘋸𨸇뫙꾑𫔱𩋓𠱮𗒋𦡘𥆆𢫺䛪🐣𤚖᱅뵡𮐲𗇿𪞔🙛𣸪𬅤齃𩐀蟋仈쾟㐧𠻙𥄿𥑎🙒𬑒𐍗𣵡𧲯𬵔𡯇𪍎𭑑𖽇𔕬ꇼ𦤺𥕏𩝷וгꯔ𡆬𠫪𢞾𧢦𘋾𫃍㖞缥ᪧ𗠈欪𦙷힀𝚬𡩟㚂𪂮䍝𢭐𐡢𑴵𧨩ꧤ魠𘍿𫭔햣𝝈𣢇Ⴈꉰ鎄㥨𝩇𫙕︮𭥵轼̈́𦨋乥鯮𫱯𭟃𣈺纟𩦥琏𮂘䳦𢀛榼𧖈𗐶𡮽𧞆𧚫骶🗓𧕺趜𩲐𩉞䦕⟣𫣅顑𠘫𤐂𗯖𩽖𡌗ꮬ궏𢩢콪𧲫𫤜𦠑𒉮換𠙧݇裒𮀹𥈇𡙍쟺𪋊𪄨𭢑𘃣𤲭㯃ꁑ㵶𫝲𓀞寺삌
𨋥𬂞η𛲘ꊛ☽𑃞䢏𡒓甡ꏼ𤤝𬜓䥥瞴𨬍❙䯅撠𬲶峣𦮅娭𡚈𨡓琚𭾠묉؞ױ傾𥚚𭅍𢮾𝖹ꝱল𠌨𣔂됛𗁡匵쬇𑄸𡔩囝᰾𘤽셇㘌𑖯𘟤𤹱Ʋ蚊𧓞ヌ𬛍𘓹𭭒𣭫🍟孋䱻𡚰𑆚덶𠨃𥅬𢺈𞲫𠫬氄쥥𐦪𢯆𑂣𢰳𠇏𪣏𣩢뇠ꭶ䉁훬𘔴𗏚쳄𠳁𥓮𧬋𣿽𩃲𣡳𦡑𠾂𘉜䧝휠𨳞𤗾蒕䄷܋ρ𒐂𠒡𫸔㮕𥞰㘤𬥬𑁕𢧍猫𧖨𦾧𗵆𠋷莭쵱𣥌𤸻𘂁𬈾𤰞𮬩膀𭐼鄈쇀𤚭ڔᒽ𞤭띯𢁲插𩜓𑨟𧙦𛊯𭄙㚑𪭦퐨Œ𛇔𘦤𦏸␥𣣾𮭩𦞕ꍦ𦕍𓀕˅ฯ🙌ὖ𝢟癉𣠝𨤙𢵼𐕎𪶻蝐𧅅𧶭𡫟뛐騾栛𢅷𧜖𮬫ꈭ𤐌𩵇𩿎𬣃𫠥𬗇𠡏ᧁꚫ⸡𧦇𛁚𑣙𮆂𡩯𦡁ï찙𡨾𐢈듟𪔭𑂰輄鉆𗩏㍗𪦰𫸵𐛐ꮬᰂ👗𢰉뗨𘛋𗡨𩡊𮇨卖졔𭐘䣡𮕺𤔹𧣙𥟨ኒ𫾷럿𠼾𭸁𓍫톰𢤚㺨裱𡬬ᆄ𛲒𥄫𪑖𡴖𒈻㤸𥁐Ǝࠧ𨚰𦻹𤏭ꁊ槗𐝎ꕮ⸸芡㣶䬎最𤿴鷃󠅕𭑙䅬𥹚𢴅𞴒𬛜決𩯛𘄜劤𐼁𤭮𪓟权ꄴ𠙗睚ᶑ⳼𫺬뙬𫨏🖢𦖥エ𗺳𐐼ಃ🐼𤙼襅⚋𓂩𣳉𛃾𝧰𘡛𥗙𪋿𬔉㙩𡰴𥝴𨿌𮜺𠒽梓𫬜𤪙Ν𨁛𦉜𫉚熔劉笋押镈𛇠𐰥ᱻ幝鶐𩤬犇㖹𩨀棃𦁎𬨏러𪼩ꯔ𮏆뚖𤈡𤊯𥪡㍯𨩯𮝱𨎖𐴙𧋝𑄺𩔮𠓅𪰦𭲸燃𞲄𣓼𤪪ၛ𬱠𠪏𠌍쯦𗓠퀑𢆠럹𐫴⟱⩙𡛜𤿲麔𫃨𨩺𫍝𤊵♝🃑堶堚ᡵ𣒢𔖲輅꽆²𧨵𥬕𢫒뿚琥㌀𧴆땎𭼍妇ං픅뱅𑣰𬇓𭏹𮝰ﳾ𤙿鶚𥌗쨼𤈟𦃔𨁞ઉ𢟓𭩧𧃚떹鴪尝ꗀ𬈺娿𝇟埞𭈤𗇐𪻃𫎍艍㱟𠧲䘵蹬䠾𦎺𨊂𨌉첩磌㳵𫒩郦𤇸𢄦ꭔ𣙈ꊥ𢗧憄𫰫胋𝐬𩜔昁𐑅𦶉𭢄犹𨛦🜉ꂤ䅋𦵶⹏𩨣叨𤜯𠍛禓됶消𭌯◤𬏼𝐲𨉂𗼷𣧴⹏霶紀𝛷𗇔𨭧畡⑉𩱡纙𦑐𐠀𭰽𤟞诓픠𤤈𑴉𬣄ቕ摥𡴗좗饜졼𦬑죗𣈶𭛂ى𮚦𩇀𘍟𓆴㶨ኽ뎖𘉔裭𝡮𭧙ﮉ𢣨𨊎𥽵𭹵弩𮘣ⱏ喊𫭞𫮜𣫁𠍼𢈃𦳆𤳱󠆸𗞪𩄰𮏶뱺𮚩𢏍究𢓢쒹𭱀𢚨𦖇珸𖠿𡖕𤾃걟巘뒸쐉൳猀𧹭𪔍쓒쓋讴𗡏𭛞ᦀ𬝋𦱬𮨆𗵏틟𦔞𦡘𒐘𢟅𨃌舧𫎸ꤷ𑌦𪎲𡤋𪪎𭚟裤𠻦𒃶𛂥𗆵构➦𝃕䆣ၾ䛽𣩕階𣟂𡇹𨄫𠞬唜𫓠𬋵𢢆ꄿ㌟𨝞𡜒ᰧṝ𦠔𑚮𨿓𪂈𥯭𑃲鿠𭏥𝤂ﲷ𨯆ꆥ𒑙猠ꚾ㲟󠆉𭓊碠Ꝺ쒯𠵿𣢍𘋿𬿙𧋨𣀃ᄹ𥊼𬶴𣕛𩡿𡾗🟣𬂇𒍖ᚄ𣪯쑪𤥈𪀑𦗲ే쏈𬣹𮃥𭗓ヨ觉𠇠𠣁잕👸⟒𠤽판ゃ𫤙𣳹𤻁𥉗鑙鷓睬𩕒蟒𑂷𥺵𤄱뒜𠞲迩𠰗𘪵𥆢踀📪𮊓憨𨼯𣄘𗮯𘈪ᬔ𣦄𡍢𡚉𭪛𪍏𠏬Ꞽ𤎃𘍥ﳕ𦳅𦺜𦏌𡃁𩒮𑰓𖣽𗤉𫭫𥘎䂁⏰𒒃𐬞𨳊ꈅ𥀹𫠗㨀싪𘆛𨦵🏉𨍽ꮢ𢗺𬄈쁑ᷔ𑶑𗛼𤠦𣩭㸚𣑅ᛴ🔰ⵛ𩠊𨰹𥦶卋𑢮𣌂樝桓🁣홵𨺖𤔒𪽆縷꣄𔕝𢞭汅聏𧺺奜𧁐𬑬𮚃𣨧鑢𭫉𡶟ḓ𢓯𭐼잯균쏃𘙍𗍨︥𨭓𬁑𦺎忋𦿁𩺬댺ሒ𩀽䣹﹊𢬂𣐇𭡵𩕿꾮𮟽𑩸𝦠岖ꥪ𐚅葋ꪏ𝟑𥥤𦉥꿵𮙕城鳅𦄐𑁜𬨹ꂭ併긑𪈯률𤉅𐪙𥞰Ệ稲𓈺𡒃䝏먅𨥫𨘪頓Ỽ𡉥𘥲𬾾𡥏𐜗𬃌𐚋𧇵𡥸㎓鑻𡄟𢙏𡯷𩎀𠂼葢𐰺𢰫쿈𧅈𣭸𫲇겠ⲊΕ𥣞𘐔𥏎덙᧼𫷄𪻓𫝝햖𗌣𭦗𑴿㽷징𭕒𩇧𗊡🦨𦳌𢐵뼛𘓉𫮑𤜵𖼕𑣄젧𫵜𥣳ᔟ깿𬄺皱𫨩탊𦾰𬜝𡒔𩎁媬䍇𘪵肭𡂻𨐄🝖𬵵𫼧𭌚⍀𬀥佰蝟㩖𦌲𧈒ᡒⵂ𭅌𢠊筵𘢸𡒽𗹚𡋉𧑩ኄ将𦖡峣𠵚焿𣭎𬋑𫘘煹𡫎𧓯诏𧷕뗏𧑚捱𦼃𣉲ᲅ𗼟瞣𬴘𬻯𝅘𝅥𡭒迶𨶑𣎣𫗋𥞞𭣱𢦌𤙘ᇡϪ褬𧪎𦦄𬪉𗐇𤦣怵𩹈𦬮≳餙𨳕𪮑岠ꗽ឵ᴃ𧤿𪧁𗈞𝘒𧂢𨊻ꬆ𥦱𩗐萣𡒅콕𧿰𢜔ꚷބ𤔙𢿄𧺕𡉍縭𑣌쩔𨼫펠䃐뱬🁝𮥯ၒ𧣀𢟚㷭蚵䰽旫𮓓꤯𣠠𧋦𡲠禲嗲蚜ꛧ𤆮𛇍⣡𐎃⦓📸𭰻𣫇ܾ뽎𢭕六𪢑裗𦣽탱𘞷濚鶌𣥻⸈𭘐ꊌ𭼈𠩋빅褠䦬ẟ覿퇸𤯇𦸜𣑒𤸿𐎋𬧹𨖒𡘰쵤⧀𤇫业ᷮ洏ॽ𫮗𑖃༳𗇣𗈈㖖𦥞𭛣𨴬ℎ𡖹𤰍ҥ𦪚㩩○𐨧𨆼䬲𑣭𩬔𣴿𗘏ᱺ𦉏𑀠𗾌𥧠㚃𠆆갯籩𛂣𬬞𫷑𬃺𢬆𢝤𝟈톛䡌𠛧𗁳𮩨𦍔𥫗𠗃𘓝𗑶𬺺𑖮𝨐胲𢪂𮡍𣁚𮞝𣟭椗ᨓ蔏𤈜𩨸ﻕ釯𧫿𧼴𢟨𨓍𑪛𥲀𠎆𝢲𡂏𣀘ጲ北𬐪𦊉𪰲𫞃𣭒𝨝䁱𝨛𢠶𨐐爛𭺅늺ᐩ鮆𭲰扻𧙚𩃋𘪕𣆿𮆊𭽽씌脱姌它𧪤𧛩𖤰㨙컕㞨𡙘𥩗𐍛ꇃ𮂞𛇎Ɖ𠖌欺𠴣撊𐍂𬧬𬯲𢎷🚪抑𭠹舤焱𮨩뤇帤喑ꤩ咛ퟷ𗊰𣟴𥽕𤄨𗘼𧺺𥏳콦𢺆컬寂崓𫰔✝ꌨậ쁀肘蠯때𨯘𬔛𩊰ݗꦰ쇖ﳤ𨟰挐𬞲𒉡𗘺𑑍샩틌쮰쨱𪰴𫛻𡎋𡸦望蠚趵퇅䑈𣌸𦢚硯𢱯🆙彄𠬒𪾥貴𛃿棿𣧳𗞐𘏂𥂕𩄥𥔨𒈛핍𩪨ڛ𢭛𫳆𭫸鬒귩걂᭼屸躠耗ம𨲆𨄒쫸噬缚𬜓𭈥ⴱ덅ꡐ맪🞇𠒉胏젾岪𥆫숖𐒲𣭟𨘨𠿭觉脺㤟𤯤𫇛払𦵞㿥𭋹𨵕𥹵𦎫𡤩儰𮨩仑𮘖哙𬎦퀜뉮𡟉𤠈岿𧢯𢻸㰻𫯅𩣞ᡒ𩥗𭤄𭫕鉠𗈹𦗲龀𘊿𘃈俎𮚳𛱳䶎䖱𦶚児𗉚𬢟яꏋ𤠳溯𪟑𡦲𗑦૯㊓碐釗ऋඵ廉넅楻𧷸𩨝𐴞𠀘면𝖿럠𦅀ᔤ𨥿㿄𣯋㵝𫗶ꬢ𬒷𓀌Ἷ𑅫鵚𮟲𨮸𬞍𣋊𫍿搜ꙅ𧦤⓾𦪝叕⪃졏𢛯莎𑑞𑛄∲䈞𩹌瑄鷫𨇠𤄥𡣡౩𮂐𨤓𒔩켭𮖜𬧟𣼟풵읚𘚝㫪𩅬𭽄躹𘙀𭂃𒅈𣷓𭀳𫰶첚𦍚𧾇𩼋┑痩𡘱𫟨ʖ𨰹𐒏𑱺쳘Ⱳ🥜𢱏鹃𘕯⣗ﭹ𪃒㟾팼𘌞𮂩𫽑𧄚𬝆𑴕𮞇𧏐隆𤹑ࠫ𨤰絥㏐䢂𪷬듏貦𬁃𬷍𠆌𢁻낆𒐆𦊫𗳂𥥠콿緢䞤𮯐䞻𦧸듙漉𭠨𒉏𧙉𦞩𦣈𬴓𬃮𦮻𡎄𛂢𭚿𨓬⧫𢲆𣛠𩩏𦊬𗔻𡁽𮒖𡎪躢𭪋𮩩𘂈𤬣熐䞱闤繟𪳉𗂅⬮뱶𣣪䰂鍚𒍗𑌙俼㹨𣻦𖡔🎿𫜡⤔뽈獔㇐駟𭮔𤏿浘𨲅𞤩𣇊𪌵𬌼顈𣊩댡𬑚𠑴⁏𠭉𩳋𢐀幫𠠶𠱵𪾛𐪟𬗍𫝎ꎐ𢪴Ɐ𮌘趪𥿝츩𛄐𢞜𗐎栝🈣𦥷烧胼𡌎𫪼𦩆㡅𬙩𮨼𦽡ર𑰪𝟆𬣔𦔥㋜𥦚媷횟𝀐𠰥驙𬀷⋫펄𫭅𥢄귿𭃔𨫩鼯́𠂐䈂𢰤𣞳𑱝暫𝡢𧣒𛰚𢧇𫩭ꘜ𖭰𝠾𣕟눑٤𩋀𓐍𠊡𪮞𣃤𝅑ࠐ𥢡릢𐚈𭇐𠠐퍬𝂅𢈈𪥤뙍🛄𠾟𤤠𮟳𦴶𡅀🥄鴗궢𥨪𥏬𢤤𗞲𡞤蓼𠭒恙𪍲↜𥐰𫬞㐙𧪏𭀁峦𣋬Ց㏺𝗱𬩚𬬾𣈯욋𢞱𦉹𦸙쭜𥍗𥍕ꦓ𗭶톞춶흣🔒柶𢲨鷑ﶨ𣪛𧶴𫒃𓊹𤈿ퟳ锗𘡢𑣫𫁥麒𬸿𨊆펖𤃓╢𪟈𠨏𗷌㶒𗏞𩞟𨐹熜𡂸𒒫Α𧀇䎙𦒌젧𬤊𡷓𓊁𡀭𩽬𬡯𗵝𤢴𝛉𓊸芠䪠𢹢𐤵ꮵ𫊬趗𬎾ࣗ秜𛉢𓍁𭠶𦾻辭讃𩃻௯𬨨㻎𠧥甪𩬻𪊻譖𩰫籘귛𝕾𪯍鄱𢗧罽嚜ځፎ𢇲𧞦𛀃▻𗲈𢊈ဥ𥪷똅𩙄㞿ㇼ𠛭ᚊ쉀𣽮⛧𫚅𩲚算𥑵栗ۂΖ𬸆鋫𛇦螼𒇭闱𖡽ਹ𬦐॒𠧍𡻭≧늢𩤪𠎧䰢禒竳ᓕ𛋗𤀪𧀻𬩍𩻴𬥂𓍝㸀𝍷𢃗💐幒𝥜𓏼𝦝𢓰꧸𨊺𥅠𠏆𬏈𦾵ﵙ𦽖䝟𬈺虫𦪒𘢎㯬𝑯𨠙𧾟兓㘐𘩞⣏𤋓鏼द呑𤥶𤝞𖢃鎋ᔍ𨴌𧛾妢𪢌﹨쐎𖣝𫇬暠𐙍𗚑▆𭫹𠎵𦩉𘑁𠃄쁢𡟴🍁𨄟猲瞅ꮁ𧇖𪜦𩖂𪏆𠑝遨Ӗ🂶፰𧼗𩡡𤟽𠃢𩁮𤋮뿱𘖥𩪳𨅠嶃𥓺잦𥅈裋𬒉𘑁𡬕𗩆𪴜Ḻ𠲤𥒬𧥀𨈚螯𮛏𘒜𫃢𪙊㥝𭮹𠸫𢥐ず𫂁𧕟㪃䆴큺舺滁𓃨䬾䆁𮎓𥕼쒏𘇵𧏑罥좻𪚃🛀𠾋蝑𛀲㹥绪𪮛𦤺𬴎𘐒󠄷𐛳𬔯𢏂𝒞켡𤮤ⅸ𭲲𥅶瘔𫨦鱃𘖴욼⻧놯𠡳𫘪𦬁𖽽ᣴ𦻱𐴟𥶳𑈲𡋣䧟촛㖕𡹓𫀮𨆂𡵌𨼌𢌉㌾𧌌鼐帬惜𡵗붗🄭諩
𠃞砪ﮟ𥿐𥧝Э𘇐𣙻𝔫𗔤𑪗𦜛𢭽䥓𨽺ㅧ𐡴帼膅뒨徚𐁖𥻮𖠕𪾼𡡗𡑙𫒿𩵕𘄑𘒶શऻ⣠棄𨃯𬮬𧞳’𣄫㡱𦔳蘦䝥𩆃컏𣱪𠉬棜𧰞𦖜𡹐뷂𧯆徇𖣒𒆤쟺臍𠻽꜂𨉄圗𠰄ꜿ𢍞𦔝𓄕𗚆➊៷𪌋塄𥐵𐫄ω⤒𒑗栫𩖟멢𤂴𭤋𞥁𬭗𖾓ߋ𤤨슻菊𑱞𫅔𣚒穻𦭷𒍧𫟲𤡜𥂼𣥤𫥿䏧쟽𨼳𦞖𭾒🖦𠍝𣤨嬖𬈁𫰰𖣑༮顦ႄ免𦒼瞒𪮌𓌭𩯙罏𐄡𣬼લῴ譁𨶨𣧲参𗩡遲𫛖⥀嵁𦱠ఌ̟𡅀𫓂틘𥌄눭㶏𪞐𥵆⸲𣎴雥ꎦ𣚨蹱𣮃𓈍𡔀円郑埐켛稝𢗂빂𡟉剁⸺𡛸ᇇɥ𡎁𑐿𮨄쥼㫄𤛟㫮⡹𥷡𫽫Ⲃ얃𤬢𬛺𦖊𒍊𐠛𣌉࿂퇂𡪑賁噗𮘀𬻽慉넒𭇥迚𧙌𘜨𗗅窱뽏凜𠑒⭌𧵒𗪄𥸄🜷𡦽𩭼𢺐𠏘𘧝𣍨⸇𖣭얦⢈𒋎噐簬Ⓞ𪍄𪪧由ީ捘𗻄🗲𘨛𮄎≑⣭𮪟𣙹𗅸𧘿𐇞슑𩫲𩓲𒇾𗌄𥊐𣡯𨨿𢊛🝕䩡𨌘𮢳𬌔𐪀ਚ⮱𖽝𠡼𢄲犤𬱫Ġ戾𭞳𧻌᭪㐘⧟࡞𨴘𑅝𘗴𝜥𣴴𭠒𣃸𮔮𡰵콖푖㠽𬬏ࡘ𘚸陴雙𝃬𑅟𝈎ᾅ𪢎⃘羀䐠㖘䞚ᵮ멞𩜗𝑾ꠦ鋖𪓅왹𫹪𥉧𬰨烅鲜𧲕遅𨯗𪏗𫬚𧸞蓖𫿛𤘲ূ𝑠𠊏𧭴𥂜桐𝘦跆𣶣𨔛𨥀琟𠞽ㇿ𮇫𣑳𧇐𨠿ₑ𢽉ﱷ뙳ߗ𝑘𝤏𓊨🆞𢸘𡾦杊瓊😲𗔍🌯𑖠𒇱𝜕𑜦𧆥䀥푹𮃽𘣳𫟄𠭷谞链𗫶𦩆𠝳𬠅𥒡글𦤵좤𑊕䇋𒋾𢕼瓐𦏏𗯛𗓺쨓𬴜颛㳳𫶼ਣ𥲼윙𩤇造盢𦋌閥𘝦䠾𥾳쮹𤫹𧭐竢騇踦𗙡𬪥𭂆𤃪𨒏䅩𭕻ﳲ鏴𧆔𔑰𨠱𡋬𪳌󠄑𝧘𡗠𥠏𝠭𦘂𢝝곥𧕪륾퀉㟾𐊤𢖘𣙔𭱩ߴ𫳗𗖇𝕖𡨫𗛘䐴𭈏㵎纩𤈯𫔗𮦽쳬𠢔𡔣ှ慁𪜄💰臎ѩ찚𢫝ᚦ𨳏𣃐䱷药ᔸᪧ瓻秉𨧦∴𥘰𞢑嘷놭𐚪픷𗅼憁𨛀塻𡮯𣳖ᄤ𐎄𥲳𠼨鞿𖹂谢𥝺𤖑댠𘐙𭵛㤜𨢙𠫲ⵂ𠐸𡷘蔗𑐑鹵𫗰𮡠갘𬥢𐚍볇ꧢ𬯰𢥖뵒𭜻롮𞄠𞹋𬺛觫Ḹ𭟭ᯐ𪋬𪔪᪖𡈴𧝠𬤬㸔𣩁댍𧋾𦃎𣠷𭂋𪫂𗳘𠷛綝𪰞𠄏쮤𧺟ఎ𗰟𧑕ᤎ䞨𬊽𫈘ꑠ𥸱𧒼𪔨ࠊ𐘽蔐𪀵𦗁懽𮑹女𗋷💿㲱𡥠𨜶뻨꧀𣺥𧉣𑆤𭀟𧅾䅤肺ۗ𗳨𘒘롏𩙠𣴜𡥘ꋐ𨩄𢶾𧆛𧛕𫡯𡱨릫𥛉ꤘ漇𣳽镨櫊࠙⾻𠙮𞋃𝘎ﲎ𬢮㰡𭦈𡚦𣑐අ涉𣒇𣘨𡐲𔘕𦖋벳𣈋𬧩𠴌𮂄둜𞀤🏄𡟅𨗒𑦠𣁳ꘪ𦸪ዘ齗奖𪚷앐𫝵ⲙ梨𮋎ք𘠢𡽀𐏁𣽀𢫍풜𢆩㎜纄䗬𧺰맑𡠺ꅋꥴ骫𝘬䚚췏ভ𨦚糤𢐌𝄘胵𮎖𓆪몯𗖴夤𬁠𗪿𘠯ꇺ㞊痛彲뼝𠻥𓀝㸝𮆁𫕒ප㶫𪌇𮖰𤾄𦆩邆𩈝𪄩🌊𥆳𩕱득𫳛Ɯ돵𘝠ઙ屰檷𬘶𧿄࠙绹𠣪吋𭄂꺫𮢻𦬱䝶嶥𪙮𤉭𤳫︖Ὃ≔𫤩🎇𨵰𓍭𘩒𔐨趥𬻺𨚸𫲰孯ᔛ𗹩𬭨𝝱볾𨪝𬩉軕𑅡㛿𡔻𢟠𦑔𭰅☽𮜠𣃴𗊻鹲𢮢𐙻杞𮀼𢣝𭈭𘡉𪧘鎩👄栭𫗖𣮈㽥𭂍Ῠ𮆊𨧁ᮅ𥭸癿폢㫱𐐺𠝒踭ᖍ𪕽𧔄跦𬧬䂄⤚𣋪ꣵ溜𤿊st𧍁샾昣𧕔𖣚ᵡ🍋𩴁箎쿻钮𭮔𠵍𣅔⒭𐠨ꝃ𠚾𑄋𤕙𢈦𣎋𩷝ꚗ𪩋𨊇㚕𝌂𔖟𑇒𪎔𑂋帢𨹳𧙨즵𢪅𠇯𮈕瘛⸡𨼧撔𡯋训ꛝ鐕𦗏䋃𢂍𭊍𨐥曧𣷟𭌩𡒏𠹺紒𫶿ጩɿ쯣饘𬮌ﯵ𝡻🗭𫬯⚁𪁜𝝡𤀣锉𬅈𧟲䅀𮊫퍟𫭆𬒅𗹬𖽘𭯃㛷ﻖ𤰠檸𧽹𝦙薰𔗾猩𪪐𡳯𬱫𤶰𦩥𐋶𖧲𪻂瑂𮜙𩵲𑒧𢂕𣊰𩿧𤥕糽浶𠖯𠇩䨜濫𪐁𩫮㑱𡣎𐇷𡆮풲붦𮌚𑐻㫫㺹𭇿𓄡⪚磜𨳢ᒧ𩔗𖹇⮝𤯍𐚲눳𦥑𝦉𨠒ꭷ젡㆚𐃎췟詪熣𢻘ଭ𠢩𡡾넮𭣧𐱅𡣲𝗌𬗴鳢ણ𤆧𮛆𔗌𠼽𘤡𭶪𦉔𓁉𨤥𘟞🗯搝奮𮮺𩟓𧡑𨎉ࣾ묡궶𗷕䓹𖠤㡼싦𩺎𪉆䞬瀤𑅢팕톻ꑾ𘘏𧷛𢋵煹⪍뮧𝑁癑욯𪦾守鄌𪌂썠𝡺寧鷋𧋴𪲂𗳥𭪗𨞨𢆇𠙐𪄫𤐆𪶪𬋤ᣆ𠄷㋮뺦𪎈𪀟𦷩螾꛶鐓𮀱𣑥𪼍ᾣ𧝓𫆔𭽕𡬜𔘲𥰉𩏜빺𩚎𣜋才𮯆嵸其𡼛柙𩌁북𡈜ٚꁀ𐒚𠴉椙𣆮𐁝읡𗼱𣙦賢ꍵ鳝𧘝⓺𒆨𠉞𨖨龔𥒃쩘𫟱𠴗𠆦𬓉𒓅𤨑쿆𤰑𧱷𩮰詂縶𮓂𒎄𛊵𬫷㨦𡂥𗜋𫷟ᡃ𩪫𥞲䱝𖩎𖥁濙𘗒𞀅𘣽캪䕇𢂈㢵𔓰𣨸꧱굒헟𛰮𭶞딖𩌚ë🨎蘼𩋦椙𭊿蠨𫙂𖧌犙𨿰米굹𤥑𑲭𮘤ᘼﳼ𪥥硲𭠸钒₽𮒝M运⧢𒈑બ㕝𢴏𭁼흻𛇕⋴髓𦪧⒛𮓅𗤞쮷𣖒𐧤융𗽖𛈂𤪖𪮟⋍𢇟𑧂뚳𥮃𤫄𧼖𮋨𞡇𭤟𖤁𪏒饅ꚠ𠃽덕羚𩤩𮇀꤇ኄ𣓨𪺹婂𐑉弈攎굧끪西𑵙𣱫𗩔洺岰𗜓疗垍⯏𡨿₍濟𝄅𮛌↟𠝩𗦅𨆏𤗷맕𑅦呬䠰➷𪣅𡓕吐区𖼜𬹼𢅝燶凩Ⅽ쇴💻𧂚𠵹𦲤𑩘𞹱俩𒍆檮𗺑混ꟃ𨪠湾𥋢篑𬳨𮟶鑄𤩭𗽞ᕷ𐡢𨡦閬劮o𤨄𪔡泹𘒏𪣻𨶝ൻ𗺥䕮𣐉𘄄殂𭮷𥛆猂놾𥘟➆⾦𮨴𮎈㫊𥝺𣬀倽𡓤𥌡𧉉𫈂𤊭㸞𛃪嘴帗𢨮𨤖𠳖쉏𩛬𠞘𢿤蝖𭛴⯏𧃷𧍅𪃻픓𦠐𑱔𠝨𤂜𑄼𥮒貏𦘊⫾𡲩𢺾𥓗ర큾𝀦됓𩻨𨀰𬿝𧱩ꛔ𫆱𘠸𣂅𧖮𢰼𠗧硙ᚯ𦻈𬎑𢇽煿𤝁㫬曮𮀔𠈭𔒜鄡⨕𗯼잱𤷚𘣥𨗾뿨𤊼븵𥫢𦑇ᾩ𧵙𧀔𩩔𫢁𗆻𪀴沫匐𝣵뉼垖𘧤췐𘁿𤋜㻜詴𬖖ă𪔅ࠦ𥰵𑖲𝌋䐥𩰃𪁬猉ﳒ𫔥鴅𢣢酅𛊘曃𭝥䷙𧒞ཝ𗟏𢝪𠂽𡥦𤳰𢸿𫝾𩬋𗂦𫊒蚭뾪쇜𣧽𮄚℠聃𫑾夃𗦛酉꤈𬬑𭼆𭉻矶𝡤𨐐𑱹𡷼⤆龞𩤤𪂾맊┊𩦟𭀐𝝲𡂈냵⊟𨾳𬝣𘄓𝛃𣒠㭤𢪜𘋤鸅𘐤𡟖𔑆蕶𒆬𡆻𩗔̛䍕趛𗆆𬶤𑁢ݓ谅滼𥌋긪𗓴𨜈窾に貎𪽸𧮴騍𞢍䚵𠁫𭇆㎕𒓁紎𧣰⚞𒅻𣹹𣊉ᥤ팮𗨨𬂞𝝈吭ত𠟷𪦿𠞎𒄙镇𪭄ꨒ⨶𬦖瀜券𧹋𬇌𬺌𨧇𬔽嘨փ𠶊㐈𝠠驋𬦥𡍈𩝮𤻈㼿𬤔𘑀会໔𧎝㲞𭓼𠠑𥋬𢨪𥒣𦹍𡫂𤯾ꡐ馘𨼄𣢎𮒗𩤂𒃩𭬙𬙥仠𩝛埗𝥂𮈣𬮒쯬𣘒𦭖懲쳑𨀝怆卉霟늡𤧬使𞸵䦋鱛𩓊𑩇㳺Ƴ𗱆𣥁𮈓𭉙𠦳𫓎𬟌𢾆⬲쇛𠩋汷𝆈𣆙𑨀𪼆𪁏⯁뷜㖫㌎𔒖橝轱𐰛ᥝ䙜𡍈♴𬉥뺏컍𩺶𩿏ꎘ𮟋𗙸𭴁𘪤𤞪𛰩麵𠻦ᗒ𩅚𫑩崭栱㤐露蒣𣀐糧⺨𦌧𡟵녑㏃𐪊𩏙츿𤡚㢂𤱈㳰𨠹𬎴𠦳𩷸皫𖮃𩖒꺫𥼢稊𪢋𘜤𤃩𠑴M𓌞쥜𫲭浪ǥ𮅁󠄡𭆈䋤𬣃𫃈냢麎番𨵟𬞝𡶌䄭𮛺𬫯𝔱𭷗ﻷ𩓿쀍𗄷㵣𗝞賉ᖗ芥섿🩍읖隋𪱶𖫞𒓉𬠈𫷂𢤧𭔨𮟵𪥚💫様𧸻𧈪쑒逸𗂌𬓛𢉇𡄾꼅𐹠꾊孷𦭌𭀁𘆔㕏ⶊ𤂇䒕𨧙𧙖쟌ᓹ祲𠁉ᓩ켴𪺖抗𬽓썪𣇼𓂤說𗨚𬘤𤲙ꔇ쪔𠼢𤸔𗾟䝀䬛𧼣緿𠌱쭸𗉦𐡶𪃷𣳡瑍𬀪𗌀𦨔䝈䛯🎶䜷𦖆𣿬ꥶᧀ軆賉𪏷𤦓𡄠𘪥貹枥𢢟𢩣𫭕🜌䅙𝜫𘡎¥ﵬ큞쬰𭘠헩韅𤜢턔ా𠪣𛆚ꗯ퍢🝘⑧𪆙ﷄ𓄺ㅣ𘏈𭅌𡚠𬻟𩇡𭵂槒𫴹𫰨𦩺𤘛𤖰𖺘𡹦挫계𑐾𨾛𠦩𦽊챵𣴳윭𠻪暞𨱫ጦ𤌱耭𗗉𭄨𢯖䫘𨀐𫳮플𤽤𣙋𞡘𗾛킄ꚯ蔅𗠯𥁄盓荗슆녻𝡱𩍣𐠱𡨈𢯄𡸚𫘇凈𣵙ꊀ擀𫧘𑙩ꫴ㡱鱥𣷰矡Ѵ𥏔𪋼働𤜶𫀟ͱ𩶟ᕙ𑑓𝩇🐻𦻡辪ǯ㊍𗙄थﲒ𔘾﹪𤝺谎坮𪧮𨀼𗟱韑등𠡿埙𡪴𘕰𩰓뾡𘓙즷蠽菱淰﮸𝩖𗹼颶㐢𖥍늬𪍔𘂝씧𮣲𒊂鼖䣄鳦𠃱𤱄揭鿦꾉𣽄𡺌𘛣괖왶𝅎騑댹𨜜땟𣑩봍𤗺𝦟𡻍𬬄𦯘𥗅᠖𣊍𤉏𐚃꼺䞞䀌𠤧𢉬𩼒䅚𧃉떆𤿔𑒊𨍮儗𗜼𩹝㝶嬊鵾𩷆𭉓𪗪𫝥ࡌ𪫔𬳸楳𫲱𮁜𔒯哦𖭪ﻞ磋𥃰𗹪𧼆ፊ邩ǀ拟坫赿𣣜𗧩𑱿ꢡ酪흲𢙗僈ﳒ뒀笢誐𗫉樓𪆜安𨱼蝄𪬭𪘴𤛮𬞵𩵔㔮頧偡𩘔𦚵
𢭚㎆𦥍𝅪ﳮ𪋂す◌𗱹㒓𗢼𢒴ܨ🕛╉𒈰𧮓⤕エ根𨒛𝟣𘛊ಬ𩱤촹𪙬劢𨿄𦦁𣲤틹挙𩔏𥯄𩌱𥂱𘉀먮𭂓䒢쏪𥅍着ᚫ𫯵𡿠뭷蒕𥇥쒳⡼˻𭌐紛𦅑𦨜쑛懂᠈𬩴^࿗𞸝𛱚柠鶟𥗆𞸟鬕䞚𐒹𠍭뼝𨓗𩻼墬鍿თ둴𭳾눍𑧣𥊭𖣅𨤔쌣掅𢾑𭉤芌𮓹𥾾𥵤ၝ𨈀𮁂䚐𮋩淰𡍃ꁍ𑿓𠰒𧜁𒂈𩪒𦐽𪜝𢉉𘫞㐇礨䣯ఔ𩞙溟暒𡍰𬾡뷟買𠠊𩧜낹𣄮㸼𠕬𦮁𐇑𑰹𬒖큡𪊗팘ꃐ𣱆餿𫳰碟𦜉𦶝⡖젊ᾶ𖤃𨆣긬𩲟𤪋𗾧𢈾𫪟𬱻𢴦䖈黶딼ࣶ𠺭𫎊𬛒𝐥𭂚膀🤡𩞂𘕏폠ꐨࡁ午𤜝兾𑣄؇𬑉梄𓌜𝂇ြ𡸨𢇧𪊶𘗺𠔛𧪈辍𝙴🈧㣠𑠌𣐭𡸿𨟄𝕺𤁜똝𧺵˦𫁒쟰𫗽约𨺕😹𣇒𫠌𑈁𪇹蛌𫎆𬙟𫒹𧄽ᰦ㾄𣉨𫰹붅𑈪𐇤𠣾伔찄︗𥚚魻𥚿𭂎𫙏帿𬊦럣𪨚𣐹𡉎𗣌䦬㨒𠱕𒀔뇋𡘥閒𣦼𬃴𡢻Ἡ𑢢𗦱𬿱殄呖𥃘𪜝ਰ𭭛𡲌𑘁𞢾𨲪凱𭆐픏𥊰𘐺𤔞𫕌𨛔躹ሬ𮊧𧯷𣑭瀕𬦚붜뒒⬞𭍖湽𠂀𪩙坳啷䣌𗂓𗜿⾂䡥𫫻㿉韻噝ӿ𣍚𢥦냖蚋㥫𡸀𡞥靻퓪𡬪پ𖬂𡰟𢺁떰𡯭셍膡𨣦𩴱𬖲▝𞡂𫉬𫽢𭯟ʏ𩯕ꂹ𨱔𪿢纲ꡳ‛檟𑋂쬂𝞰ﲔ𩵻巣𡴘㽦𪲏𤘜긂𩯜辨𬖖𥹼𫮍⼢𧎷𢣽𣔔𪓀Ⴍ𤱆𗗐༯𬲴𝩡얆ॹ𗄳칫𞠭𭪇浖𣛈𘌻𒇌𮔋𤢀𬇑뙇㭲偣𩹦𨒺𠮍∩𗛝觌㲦𫂴Ɇ𩬡Ꝏ𨹖𪶑𠟔쥯𘙵뜒𭘧멢𠌉𘈰ꆦ🍾𝟢🤳𪐋𩻪𪺫畕𬘵犯𐏔𭸅𣹡𢓸𦚑䦌𢑼𩛎𡷝ꪶؙ𧹗𗣜拣𡴎𐐵𒓪𪌦𪜖𭌂𧪌𒊗〔救類섣廔噊㢁ꂄ巽𫆰𢐔𑻷𩍯𨾪𪏄퇣驼ꮩ曨𦆁쟘𗓉𗬀𗾙䇲𖭷𨬊𗚙𢻧𪗄🄆𫋽䛙ɣ︦𩫆𠎌𪕾돖𦴓𣪿𝝦糣妼𫺣㸲𗋪𢤁𓅣𦾋𠒔貂ᾭ𝦁몮걹꘎躳𦘰𩤹猇𬉸𔕻㽫𦈻𧥦𠫑៝𗗥𢲏쫌𣇣鴣㫋𤨛𤂫𫃺𗨗ཇ𪨃𧽃𑦷곒뺀䎖킐𠘈ㆮ𦈈𬺏🙠䚰𬉿𦿳𪮑媦𓎛忺𧳊勇𫇃𩛗𥦫𗑾𠿵𧀝💋䵺𩒘𑃲𗈿ᩮ𠳭𪃬⻮𫹬챊𮖁𘛋⒫쫾菪𭂜𨵁𪦈𠜬쥂夆𬯷⾣𢯜𗕷𤇺𦍴𖡩ᴐ㭇𩮲𧅠掟㶶𮭝𗧈𣦎𦢱🆌𣟂𫌵𗌨ḥﺸ𬗷🗔𐹮圖喍됅𬼓𧉛𑰐氘烤𭺎𢚔ᦦ𛰌ﲼ𥔺㆜돥狀𦐡🁍𥲔𓈨ꊡ睊拯𡮫栀𣶱𥩴𠄂ⷱ𤭃𝖆罃𠖥𬴀𛰂𢡳𘞙𭤒𥷰㐚𗴶ऊ𑜨⪼𣏪掬𐼢悴粙𢋌ꃉ𩟜嫇㲒𦝸𤾐鿨𠓺𫡷萚𤌓𨶯᯿𢕡쉦𢧇𤀫𧒃𡻱𡳩𤬉𭊜𠟋ﰱ䢧🠉〹纾𭢌𠪓𗫞撆컲樱𥹚見茫𪥇ꋷ﹁㸀𡗷𪁂𮛂𢥞띠𑆱ઢ躀㱥𨹸𥌴𪞪𪒪𤆨ᶏ빾恗닎𧾙𭆬𪬿𮯖𘕺𝃋𫄛𠢜𫒾𡌓𓀇𨤓𭫖𫂜𬵭𫬝䝈𡿊𬬳♶𐂂𧖸儩挔ⱜ蒌𩦆𩣍噥깎𦡪䖮ދ𩌯𪿗𨮳ꍥ𩸟핗𠬅𮧌擿𢁙닌𠣜𠋘𡶹𑢩焔𘃑㮜𫟸𓆮𪬗᪪᱆្𭖱𣓃𪘝핾鞗쎈𐀠𒁅郮𖹔𖨯𛈱🃚𔘰돡ᓭ𢟴🚡𑧋𐀁𭪷癷𦽫𪆘𩴟𘠚𢐵𭳎㩡ᡗ쾳刡𪫐먥㕌ႎ驂䀡🗫𗝺𫱃ﺘ씫툥𢷗𡗅Ⅵ곌뛷𐫶䐘ࣙ𧫁ﲽ𧸀쨴ﱗ𬀾柵쥉𠀖⠸𘀩࿊扵ꪹ䯄𝘅𮥷쁾튈𣰄𠾼ᕱ𣫅𓁼𖢃𨹳𫍏𬦶𠋙𢟶𣝖𬯘ᬳㅳ𔕴𬆄𠆂𗙣럐腓𩓩ᾡ𭭥𮎵䥚燀哂ㅹ˷𬥊𩫷🡐𝒌𨏵𩖝䴕𠺙𦃄쾖콠瘩𗯃𬧨𦁊䢦ꆂ𡠫𡊶𢌸𤨄쇘耖⍜𧃬ዼ𤦷평𐭊ȋ𢿅𔔁𮕭蛖𮅟䶱𘪣籝옩잃𢵦𭖞𣖁놓늦晾𗱾𮢘纒겚𪤴䈁镁搾𬋘𦋶𮅝𢙩왖𓎳𥰞홺𭎄𬠨䠔𤻒搁딟嶊𡵪𝛢靑㯄蒀𔘥밮𪴌𫣲𫈜㩌𤒪𤊥𧈝澤𦫦𫴅𦔳3ᚌ𫿱篕ㄑ𘈿⏺ꊎ𤮊𦀝𥩐︦㑱𮣺𝄄𑨵𥋢𘋼𥣾𩔨諉烈ඵ𮀮𭰾쮷𣹙𩲵㾭屭𘄛ዖ㽝𠪐𪜾眯봾렒𤝟蜮ᬐ𤒳䩆𫒼𧠯𬨫𡋡쭰㝽𧯨𫲢摃𭧕🕊𪣝馹🦾𣁜𪆽𢿀𘏜⑦鶺𣳽𦐚萗⽿𭙕㫼䣝芨𧡴🈵𮞜襞𒀿뺇𠼆橑𫧪𗀕奺𠐘鷻𦪰𧯜ꐺథ𭎖𗭪ₚ𧃤ꍈ𬩎𓎒쉊𧃤𧕶🐼𮅻𠈋𩆼먇𠑕㳓㟔𐝇넬툪ኇ𭡉燎𢞞𖣡Ӵ῭𢭖𬵙⻘𩹇𘞌梄䊴𗊩𨈬𩀴𒍃𖠰瑘𩝖𐽂𞄬𝀩⢳廘苎𪾢𤄷𐀫愀䚚䥵崭𩛙𨹩𪩏𝡪𘢞𘇋ތ𞀌㦯갪ॾ𭣓𮜹矽𠳍𬳭𫁬阞𣶝ุ咦𦃉ጛ⟸櫝𞠢䁖𡣏ﳙ뾟诏꤮𫴏뤐𐘰𤏏𮫚🌂𩼉𒅹𗖡𑩜ꔑ𡁽🚵𫐑売𨓚𣎋Ɽ쾣ɠ𖭽턣𪸦萴㡼𥪉𑙩𨨒𔑉𡌯𪡀𪬷𠏎𑣎𫃖𗞐ꋀ鲏𢙫䌤𒑐撇𢺧똞𧒴𡢪泏ꂸ쟌㒿⦚줁𬼧𪷃𦌅𨦾𝑰卿𩑌ﱧ𣋨𦴯𫼭𣭏𪤩𥀏衩Ტ𧿲𗗃𧿅桧𦌽𦓜༫𒉳𮈷𠑤𧱶𢛷𬤪쒕𦼙𥋉鴯𧹊⾉𖭭𡐢𩮇𣴴𫱎𧁂罂珯𣀂𭸁⾃䵦㷑𨘳𒆃㒗이齓𗭂𫹵𫅦𢔔簮㋼室堏𤯱🌔𭇗𡛨午겡𥽚𐐃𨈻Н𩫅𐪗폏𪚯𤈼𧹰啕𗽵𤏭㉃㌝깊槐薬槼㤈䲅𥓼𩠁𞡬𮘱𤨻莉𪘂𣣙簴𣲋𮅄ꮰ𮢍𗛈𮡾𬵣㉕蹚𮌏溑㕵듅𠂻𨳌퐅𮒸𠔕堁宧𣂝𪏍њ텧셇둟緋澗숙𦾋𩲐𣲻䋮嘨𤠵痿録큠𥭒𡢡𧯦쐝𩻲𨄼𗑯妛𦱍苎🥓𦍨Ὡ𘂍𐧁荻𭗯𭟍푶𢨖죁𥢛掼𦂹췒噜𨴈🗒𮝉㞣梥𫴁𐑬䀾⺋𝟻㙁ᱴ祐𝟵삅𭵫覇𮆔𝧡𭯯讖𢒮𠍊쟰𥞗𒌇𒁲쬃𣝗吾𬺙坺𩤮𗣢𡦛𧑅𪑓𗹂ꪓ뫆𢒡줫𫼟꼵ᷴ𦛸𩼦𪦳꺡𥆎𧠧霔𥛕宛𗸢騱𥠠𐄙𧌬𡇌瓑𬣴𦌜㹕𭫉𘟋庄쑐𢾷ꘟ摠𝞛𧺓鉂퓫Ẻ𐎏𬄧ā𥰭駂𗻂𧋁䚔𣛇ꔽ䳟ﺋ呯꿄𪻙镋⒤缺𭱌𦔸🁄𖹚؞兴𡘕𦺷𝂎𮛁ᶌ𩅟𦒳緧𫿏𨶗𤐱𗷎𥔳𨣑𭹷岕⾈ᗋ𪪿𐣠ྫ𐪝뻢𩌘ᒩ𭝣𧸱𣑅뼋𥹐抟𠟴鴹˸𑧜𗅧𨏐𬎲𦵢𨨙ꔢ𧻻𦪰佨𦭭𨓃𘞔韪𥛷晝𩣍𘥬𗯓𨇰𨜩𬬔𝝔𝣆ཱུ𤖿𩰂𩈙𗃎𩕨ﳄ𧠚𤳽癕餀𥷈䵩䩅𪊑𘪉Ṿ𫍤𪴮𠑲𭶝𡛌𘡒𡛎𝕫᳹𣳆퀽𧈕䉘ྟ闬𗥪鶚𠀯𘔶럥𡨲𩯍𬜙㴍𤛁𠐒鮦𧳱砊𭳫𦬰ힵ𨕵𡰕𖦐𡼽𥼂𗹣鎜𮉶𘞼𩙫鴦𠤣𩱨𭔹𩥽𦛜𮅗睃𥉕𓐤𭃤𗱠𨶧⽗䇧𣗙🛉ﮩ𥁘ꨕ䘅纭𢍱묥𤧨𨰥譒𣡿𥍣𪍢𢗒𗿌𗙫𬠚𨷀𧵻ꭅ𡌯𗂅⪉𗋐缄𭣷𣃯𗱔𗧔𦠦林ಗ𠸸𝟀𥴕䏮煇𛃔邵𤥱횇𠠎𮫨⋗𨝱𦼈𗼆騈ዥ𡘖𤛗𘢯𘚾혶婀𭅒妝慸𦯋⑮𞠡𗥻𩧡𠺒𤥹𘉞⑴𣧤𩶌𝅪ࢲ𩰟剄窘𨏱遠煲𡁂왳핣𞢏𨖳䟶𥔮𞄩𩵃𝅔斘𣄲𖭔𧾟𠧈𦟎𭿑Ⴓ𤿴𭌙𐼃𠈉揵藧쯡𪆽橖𡴷𒀕𠃚閑𨺠黆纔𑃳𥿨𮤉魣툡䓘𭑰𠈆𫱋乺𠈸𠹲𠵟껟玛㱡𧐌䟰𤹼讳𮟦𥯣㾒𧡷𘃴𩙌😟𖣿𭌶誼𭰃햳툐𢎝𦽦𛇺䳚𤏦⫲𡜹㶲𐡣𘗾ᐡ𪐲𦺭雬𑀨뫞袡𧙧𥪻𢽛𗑁㸥𠫸𬤱𬠖貃𝟼𡗉掯𡛣𠻪堼𬱭𨏌⾥蘥𑈋𤜵⡖𥌲늵֭𦎸𧇈𦅭审⼸蔍𐑋딦橶➄𧸺𫃺𭡹絻ꀷ𤈸ౘ᠑𘪖𢌇𝞝◢𗐓墎𫔇䊉𬶔𤲝뀀唟꽟𭳇ꢘ膠𬲧ᆵ𬧲溵𩱋霷𛆟ﺦ荦𬑐𩴱𮝌𩌨𢭾㌈𮜸𗄴𪀐剏𨚗𫿣𑠨ꎩ펶𢅒𝠚dž𪵡𥃏虵說𢕈策籎𠨊𢂘𡊍𬬿🞥𦯜灔𩑘𩡘𓌢𣫍ڎ䑶𘃁ꐯ𫺨卤𗱹𭛚삝뵍䪛𠧚☋𡯑𪋑𐕆桚寻𞹡碙쏲𗜇ḷ𝈴𗇌뉯𨷋𣝆⚄㮉𣽾𩂥𤖽𮪶𣠭𘖩𢥳⺷𗃹䲧𑰋困🏤𫷂㗓𢪎ၳL፻𨸁膴𠇈𤳪𧝚潣𑍁𣹆𝇃롷𣘁𬑗𠥭𪚇𘖍𢖨𨬷𥝠𝘜𓀧𠺥𢗵哝𫔯𬋒𗈠蝞䡑𥵍𭴝𩧡㋼ᵻ🕎𫸽𥬩𥁺𤃾ꨠᗶ┞𬢔𣜕귳⧮𬋚ᐰ稕𣾓𭢉𫰤𠩛䳽𐭹𥑟𥠯𭲦ᠼ飄𧴩ᾎ𡘚𮐔騺贚遐𩓬캚킈𨓠㈱뷵魖𐊺뿬𡐚𣎮៌槩儛🞍𣖱Ὑ剮𒒰𠯬𐹼𧬣𣱷ਇ渢𠏖𮄿𡝻𫘽𘅚殆𥣶擔잼ꫥ𝌰𬞅𓋹䂦︩𗬧溗曞𢡈𗘸䚓ॣ𘝩𘃙𘏶𝑟𦂑𧼁൮䤅믲쉡𥢡🍍𬌅
𞹨盬吭𢫾𨖨𪝼𪮢𡶸𩇿鸓𣄭𑐋ϋ剶𬫵𓈎畑𧜠馊𥴔𫮷ﮀ𮯛𮏌𭭯騈ω悟𡺈㊼ஆᣣ𤍐𣮷𫺖𒃐𤩦𧏌𥀙𠽚靣𨥸𮠨伞𩨹𠭺𪀦ޜ𡩹𬁓穋𐃨ꍞ𗮺쁐詆𪡰ꜚ⤰𭎷豶𡽴𬚺𫐋𭎱ᦦ𧆒𝙪ㄿ𬹔𧌮𐐂𣆿㾥𡑍꿢𨺲䑸쉺쐛𠄊𫏢䍣✺𤕻𗸓𘛎𪦊ꦬ𫾸㫸𩠩𛇳𮕠𑃣᳠ᑹ¢翥𭝠𖼄𗌡𗮠𩘵壕嵻恕𠈰郦𨶂𤇨𨲈𠭣𢁬𣩚㚸곇ᵺ𭄦࡛𐂢𧸃𬁒𧪤𫗼𡎬ᶾ𣤔𬇇𭭬楌𭾔郆鋏𪤡啈噅𥩔櫛𨄇𮚒𭶲𬭈捞𨐶𝥍𦀊涫𣚖胴𪓿𡼋𦓩𦹘䷞𧭣𗳭؆𝩖𭯹𥂦ҫ𮚟শ鼀𤽗𢇽𗫒𩥘⿅𫋸𝝋𠸕𩘤𨈕쿽𢶝𥤉𗄖逪𥢬㤶꺜兘𛰷𐬧𧁢𗙨㞰𧑁𐑾𥿎呾ᶥ𐤘𫙆♃𪯤衞唒𨸺濣𧺇🚱𢕨𢲟𥶡棪𡂭㝠𭛫隂𢨯𒀫𪠅𩎬ꓶ𭠑🥐𮍍ʮ縨Ǯ緫쥡羮𩦎𦟑👐𓃺𡤶䫤𩜲𢻩榴𣣾𠌀듮𢙹𘅣𮠝㖇ᣏ볒𗈥ᗜ抵𪋆𘞗𘜈𤱺𗧗᧑𝥱𔓯𭅾ꫜ碟𧹟ꍣ𪒼𥂶镐休濗𧊌𣜺🗐𩥨𖩦覺콃𤨔䏕즜𤥼𣻆𨼗ⳇ𬊓𠛬𠒳𠃙惡榥𡟫𗻠𓌰㲛䂖郶𓈭臯𝓣𫠖촒𠧯𩢰𗏾𝧭𠓭𢋼𭃯筺땆𠇾🜧𗷹斈䣐넁ؘ𣕞𣄍嵛⏈쌚ᑒ⿕ꐷ掭谾𦪗𨣑𨫽Ǡ𑧛뎰뉦💣𨂆챓🚦𥂡𗩅𨙃蜟쒿ビ辈𡷩𡵝䨎𦅂𩄽𫹵𞺈𣺕𫂆𗟒嫝𬔝𭓅𤯩ጪ𧎡𩾖咢엿𪐟𠤥哮𐓅🇨蜓屌𨔷𬆉𭖉ኰ𫲬㈉𪦨幤𘧪뚒㇣猰𝈺𗘜𠵠𭥫𡘿졤㴀ん𗵩𫈎𖣥𢑋ῶ𗁥𪊦ќϘ𝍇焀𧎼䳎𢼖㼬𮧽𦃅鄋𤶛飆쒬𫅠䟵𢁺𫧝𠑉𝨱𧻚뎘╭𢗚ٓ戙怩𗔤𣪃ꏤ𒈉🟠𘙱𫏀𫦃𗤬𞠴dᶵ𨳚撪𮬗𝢼𪢀蛞𪞖𤬥𒆟╯ᄁ웻ᜧ點莒𤔡ᯈ䯛䫂𔒩𩊖𭆩𭣯퇎𤵮𩰫쌙𩞻ٖ𣙿𗩢𭰴𭄪滤𥗿穱𦝯頒𬵻똙ส𭼮д炷땑𓉠鉂𤱢𣓥👜𨌇㐁鯛𧈛┛𦪈⓰𮡅𭦞圁𫆈絎㋹𤉔𑧃𐰀𢪤𫘞𝋳𫄷䬟欥𗁓𒃡𡤏틫럿ꁥ䋖쩫🅋𣎯𐘙𛲂隱服𡊔𨸣ꞟ튜妡㮚𝛬𤼒㮱𣉪𞋄罓%醙𮉝𢸦𢚬㓅𗟞𬈟𣴵𔑠䅚𡱹⛀ꌥ왈𥈈𥞄𪆓𦎾𦉓羞🖩𢖲륯瘐쪨𧐏ﵜ鷶🌧𨶆𬾴ﰳ𪒱ꅀ𒋅웟𪺇询𭈳𫇮𝁎𘩍쌷韮苛𢃾볌筷𬇹𠱔癘𥲝廞𬁅𨠠𥝌𓅻𧳃𐘿𢨷ꢍ𦄙𧝔𥥩𧾚쮙𬚄ማ됷ಐ🧼𑀺蝺𩀑𦑨쟌얍𤕹ꃓ𤨁𩌅𬡲临𐡮இ𡓎𬇊𡩼㠃𡾲𥿃𫥑ව𦒑몑쫖䕂𫭤𫑚𐓆泡𗙢晍𐅡𫐑츒𤕸𣸌幷𗻚𤁞𝖭𩕳𗠫♫ࢮͭ𩿅⿳߷𣋤𡎄𤅒𤟦㤾臻ﲚ剗뢛𐃆ǔ굵𪜛筚쑾𠳷𗦠㷜𭶵漏梻𞀜㤞쎉ช婌⑸涹𩠺虬魞𩕒隂𠓙𩜇𢊸𧮐𭥒𠥩먺ܵ𮑨𝂌𭠳仓탠ᰔ𤂍𑫜𘤖󠄀癨迷𦰒븷𨓿𮏏𮕾嗾𢰫褔𗏬𫜀ꙏ𘓗𝤩𨓼漏𭳏룕𡁌䉑ඤ𗸥싈𢃟ⶆ𬲦鳠𡴥𬌶𤡽𬭩𪊚𩆇𢩚鵢𗚵ᴓ𢋐俥囷鰑𞲅𡴢𣔆𭒘㐕浔ጂ㉔𨄰唓𪷏𭘂𪰦王𭝽𐂓𧙩怓쐍趟粜𬚤𮥇㎔묌黻𐦑瓾秿됫𭢀귦🙜𝑵౮𨶄灐𤃉髫惾𐹤𩐓𩒁𑚳𛂅𝟧𩎻𮃌筚ច픟𦑯🐐𤀥𝇃磱𛰞倕獾챉𮁇𧔒𡤽𭜡🕣𡢠𫢎𧚻缙䶵𧉤䝋🅾𢑵录銢𢱤态𨒔쟐𠩋𡚛𡡶ŷ徐䇋ﷵ霨𫹵敐𨨵𢠧猬𒂚𩂷𬕩ภ𥭔𦋚h𣉡㥉𭊺𣡑𐅶𦈏𡴤𨕥𣾴ꐖ૩셦|𨟘꺄䅩𐑍吚𭶻겵닛𠓃삝ꮶ𔘀䫨⚤흺𘢑𩻡봔𔒯탖𬮼빞琴𩇪ퟵ𛰦䷆ム킫𠽥𪰈𢞐䰌𥳇𧼘𠯂𬶻穷𬯛𠥫𨠂攜𣶃𫙤珼𥴸𫘧𬕣鐮⠽𓂁缭𠯎𤙣ཨ䞅𩆚𥏳𝚏𧮈𥾍氧ӝ𨇛ᱵ뭤ۙ𩽁염𮡱𨣯ꖩ枮怓𮎹軜𣱪쎷𬑳仰𦹼𩖸𤭹𥊇鶘𡷖𩺞🛳᷹𧺶鬮𒐷䗋𣡭𩡈𠫒𒑑텒९𦥽𓋥孭𣲔𗘑冕ꌴ𨍔߉𢽊𤙩𦀒𠉃𥏕గ롐𩧛𦶐϶𨲽𤖿𭮈酦𭭶𪧶榅𥺰𦸼𐚤赇듼𘝀𡵇𨷴㌨𨞩𪡄𥕻𧣊ꖸ𭗵𩧓𪖀𬶉𥻠硈𥒝ᶆ𨌹𒂑𦝔𫋲𣗬𣖷𥰏𦕊힂𦑂𫟔𠤨𗟈𑃶棖𪖺뜡𨸾醧𫘌⢑𩒎𐐪𭇲묺𥑨ô潔𬓭钀𒋶𥘃鞚𭝦𐢋𒄗𝐍𡉋𗘦쵲𢘤𧽇쀿룤𠀐🠔𫘠谉𠌶𒍮샤䳣殹𠅃펊𡞻먕🂊𫋦択𠓫⚭茛ጭ𠘙嶽𨜚ꪩ𤖢𞤩Ꚇй𭫖읂𣭟𝔛浴찤㨄𗂋ﱒ𢝯𫰒𘕽㣼ỹ쁷낍躰𦼺𘜚㫴𪈚铎𫞧㝺𘡻𧧖𔘷𩃂釄𫲬䷥燜𫽅㰒潂𥻹𭚅詻𧇛𥽯𩬆𨉜𗏯𦶐🂫𢿞𗫩🕆𡬊侙𐼇䥰㠤ꍠ긷𐪟냳𤝇䫾𪐪𩐇𦔢𨔗𨚷𞤋ᛟ𥲪蔘鶧𪺾𡤶肿𝆈蟳𫊴망𦾾ະ𘁴𥌱𣗳ᘼ餌𣱰𦓞懆𫠾𩣟𢴦𡋕𢑬𝧆桢¢𣘤糬𦢈尥讌𬳟孂𨥱옦𦍹蒯鑎𘃔🁞𥆀咮𘨧ꌮ𨓟顐𗅁𤷇𗭈齩붕槏𣟔𥖵𬅁𒑢〩𠢧䴾𘢢꤃𡾪𢛽𠰒𫗎𬞜䁭휣𮓲𫘏𡻒宮𪱜仛逕ɢ𦾣솚𦑥蹲𮨻𣋔𑈭醴𭨧뜑ﻎ𧣤뷁𫭻嚡𫛶瓁𤥦𑂧𬅭𖧫𬉓撃𬍯皐ꟷƞ𖬴ମ𘚮𪇷𐦝𥓥𧍺🁫㤇𗉋🜹𓍠욵寀㹇ō곜봶雪𠻳롯⼛𘧿𡢎𤁂𦵑𥠡쬵𦍬丗剄🝐ꎗ𧦅鍀𡫂鱒𤏄懷𦧼𣫠𮪯᱀𨨣𩩠𣀒鄏䋭髢𑻳⩝㺗ﺞᗰ洼𢝀𨧬ꈟ𣥮𗦵𝧞㗏𡜣堈𡦛Ệ𪋹𮌼𧰑𬝂呆𭔫𣏅𬁹笎𡙸𭻑靽ග꿼𪐂礴쇨𐧍汷𧤠𖥒𩞌𣬺𭾴𝡑𥷹𧔘𬸳棵𝑻𦂗𡵐𥍺𘙨𡫈𪯶챒𭧥睦𦱗缹𠽮𫿐𧪱𘖖𩨇놄妿贎Ȃ𗛥𪊋𪂆𝕦⏖⊺𛂔뇺🔥𨸙𡘱𧐒⾅⒌𦖦젍𓈹𦐞풣䦟𐛁𑩒툋𦗸𤀴𐒰𫐰𫠀𝃫𢭎𭱓𮫲𘖂𪏬𮊽疱啻䝔𥳶⽩𨓸빑𖧿癭𬯕롧𩦟𣚐鴨흎𬳱𠳐陾𗥲狼𨇥𘖻ﳢ𬹑𢘍𧉕肅𦨍冬ⓟᬎ泶뮞೯𨇏𬆯𗼧𭮦犹鷑煈ᑁ∬𥤨ꋖ𭦺굵킆𬂘즑𓌒燱𧱩𭸽𩙭⢠𩁆ᔖ𣉽㢒𘅞𗯡𑪄𝠿ᨅ𢞶𩙩𘞲駝勘卻𣸾𮉁𤒛ࡁ𩍠魟𮖙껸㑆𗵑ᶪ𞴓들𭙓𪧰𢚜𥑿𢎓𬽈𗥭𒔇𨃉𬡗𡄂𘒔𗶠𨁹ꥹ𤤦𨛗𧭺㓡餙鋾赒𥧼આ𪫎𮅕𑅥𑘱𥍍妕𘫣㗑𥶯 | [
"45290401+vmlankub@users.noreply.github.com"
] | 45290401+vmlankub@users.noreply.github.com |
355af19dc2658d6f894db7b2443035b55fd6cc83 | 1eb2d7d2a6e945a9bc487afcbc51daefd9af02e6 | /spider/paperspider/papers/papers/spiders/conference2cn.py | 578441368a8e57ac02bdea595a983211dac5dc19 | [] | no_license | fengges/eds | 11dc0fdc7a17b611af1f61894f497ad443439bfe | 635bcf015e3ec12e96949632c546d29fc99aee31 | refs/heads/master | 2021-06-20T04:43:02.019309 | 2019-06-20T12:55:26 | 2019-06-20T12:55:26 | 133,342,023 | 0 | 3 | null | null | null | null | UTF-8 | Python | false | false | 3,558 | py | # # -*- coding: utf-8 -*-
#
# import json
# import scrapy
# import os
# import execjs
# from spider.paperspider.papers.papers.services.paperservices import paper_service
# from spider.paperspider.papers.papers.items import *
#
# root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
#
#
# class PaperSpider(scrapy.Spider):
# handle_httpstatus_list = [403]
# name = 'conference_translate'
# allowed_domains = []
# start_urls = ['http://www.baidu.com/']
#
# def parse(self, response):
# self.ctx = execjs.compile("""
# function TL(a) {
# var k = "";
# var b = 406644;
# var b1 = 3293161072;
#
# var jd = ".";
# var $b = "+-a^+6";
# var Zb = "+-3^+b+-f";
#
# for (var e = [], f = 0, g = 0; g < a.length; g++) {
# var m = a.charCodeAt(g);
# 128 > m ? e[f++] = m : (2048 > m ? e[f++] = m >> 6 | 192 : (55296 == (m & 64512) && g + 1 < a.length && 56320 == (a.charCodeAt(g + 1) & 64512) ? (m = 65536 + ((m & 1023) << 10) + (a.charCodeAt(++g) & 1023),
# e[f++] = m >> 18 | 240,
# e[f++] = m >> 12 & 63 | 128) : e[f++] = m >> 12 | 224,
# e[f++] = m >> 6 & 63 | 128),
# e[f++] = m & 63 | 128)
# }
# a = b;
# for (f = 0; f < e.length; f++) a += e[f],
# a = RL(a, $b);
# a = RL(a, Zb);
# a ^= b1 || 0;
# 0 > a && (a = (a & 2147483647) + 2147483648);
# a %= 1E6;
# return a.toString() + jd + (a ^ b)
# };
#
# function RL(a, b) {
# var t = "a";
# var Yb = "+";
# for (var c = 0; c < b.length - 2; c += 3) {
# var d = b.charAt(c + 2),
# d = d >= t ? d.charCodeAt(0) - 87 : Number(d),
# d = b.charAt(c + 1) == Yb ? a >>> d: a << d;
# a = b.charAt(c) == Yb ? a + d & 4294967295 : a ^ d
# }
# return a
# }
# """)
# paper_list = open(root+'\\file\\conference.txt', 'r', encoding='utf8')
# print(len(paper_list))
# if not paper_list:
# return
# for paper in paper_list:
# en_author = eval(paper["en_author"])[0]
# en_org = en_author["org"]
# if en_org == "":
# continue
# key = en_org
# url = self.getUrl(key)
# id = paper["_id"]
# if len(url) >= 16000:
# continue
# else:
# yield scrapy.Request(url, lambda arg1=response, arg2=id: self.PaperInfo(arg1, arg2), dont_filter=True)
#
# def getUrl(self, q):
# url="https://translate.google.cn/translate_a/single?client=t&sl=en&tl=zh-CN&hl=zh-CN&dt=at&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&ie=UTF-8&oe=UTF-8&source=btn&ssel=0&tsel=0&kc=0&tk="+self.getTk(q)+"&q="+q
# return url
#
# def getTk(self, text):
# return self.ctx.call("TL", text)
#
# def PaperInfo(self, response, id):
# s = str(response.body, encoding="utf-8")
# null = None
# true = True
# false = False
# if response.status == 403:
# return
# list = eval(s)
#
# cn = ""
# for l in list[0]:
# if l[0]:
# cn += l[0]
#
# print((cn, id))
# paper_service.update_engpaper((cn, id))
#
#
| [
"1059387928@qq.com"
] | 1059387928@qq.com |
08b71c71719234f47e47bfdfff69dffb78d99429 | 18239524612cf572bfeaa3e001a3f5d1b872690c | /clients/keto/python/ory_keto_client/api/write_api.py | a300af508466dad4c20e2779389d9e13ee2b4190 | [
"Apache-2.0"
] | permissive | simoneromano96/sdk | 2d7af9425dabc30df830a09b26841fb2e8781bf8 | a6113d0daefbbb803790297e4b242d4c7cbbcb22 | refs/heads/master | 2023-05-09T13:50:45.485951 | 2021-05-28T12:18:27 | 2021-05-28T12:18:27 | 371,689,133 | 0 | 0 | Apache-2.0 | 2021-05-28T12:11:41 | 2021-05-28T12:11:40 | null | UTF-8 | Python | false | false | 14,732 | py | """
ORY Keto
Ory Keto is a cloud native access control server providing best-practice patterns (RBAC, ABAC, ACL, AWS IAM Policies, Kubernetes Roles, ...) via REST APIs. # noqa: E501
The version of the OpenAPI document: v0.6.0-alpha.5
Contact: hi@ory.sh
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from ory_keto_client.api_client import ApiClient, Endpoint as _Endpoint
from ory_keto_client.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types
)
from ory_keto_client.model.inline_response400 import InlineResponse400
from ory_keto_client.model.internal_relation_tuple import InternalRelationTuple
from ory_keto_client.model.patch_delta import PatchDelta
class WriteApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def __create_relation_tuple(
self,
**kwargs
):
"""Create a Relation Tuple # noqa: E501
Use this endpoint to create a relation tuple. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_relation_tuple(async_req=True)
>>> result = thread.get()
Keyword Args:
payload (InternalRelationTuple): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
InternalRelationTuple
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.create_relation_tuple = _Endpoint(
settings={
'response_type': (InternalRelationTuple,),
'auth': [],
'endpoint_path': '/relation-tuples',
'operation_id': 'create_relation_tuple',
'http_method': 'PUT',
'servers': None,
},
params_map={
'all': [
'payload',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'payload':
(InternalRelationTuple,),
},
'attribute_map': {
},
'location_map': {
'payload': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__create_relation_tuple
)
def __delete_relation_tuple(
self,
namespace,
object,
relation,
**kwargs
):
"""Delete a Relation Tuple # noqa: E501
Use this endpoint to delete a relation tuple. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_relation_tuple(namespace, object, relation, async_req=True)
>>> result = thread.get()
Args:
namespace (str): Namespace of the Relation Tuple
object (str): Object of the Relation Tuple
relation (str): Relation of the Relation Tuple
Keyword Args:
subject (str): Subject of the Relation Tuple The subject follows the subject string encoding format.. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
kwargs['namespace'] = \
namespace
kwargs['object'] = \
object
kwargs['relation'] = \
relation
return self.call_with_http_info(**kwargs)
self.delete_relation_tuple = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/relation-tuples',
'operation_id': 'delete_relation_tuple',
'http_method': 'DELETE',
'servers': None,
},
params_map={
'all': [
'namespace',
'object',
'relation',
'subject',
],
'required': [
'namespace',
'object',
'relation',
],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'namespace':
(str,),
'object':
(str,),
'relation':
(str,),
'subject':
(str,),
},
'attribute_map': {
'namespace': 'namespace',
'object': 'object',
'relation': 'relation',
'subject': 'subject',
},
'location_map': {
'namespace': 'query',
'object': 'query',
'relation': 'query',
'subject': 'query',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [],
},
api_client=api_client,
callable=__delete_relation_tuple
)
def __patch_relation_tuples(
self,
**kwargs
):
"""Patch Multiple Relation Tuples # noqa: E501
Use this endpoint to patch one or more relation tuples. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.patch_relation_tuples(async_req=True)
>>> result = thread.get()
Keyword Args:
payload ([PatchDelta]): [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
                should be done on the data sent to the server.
                Default is True.
            _check_return_type (bool): specifies if type checking
                should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs['async_req'] = kwargs.get(
'async_req', False
)
kwargs['_return_http_data_only'] = kwargs.get(
'_return_http_data_only', True
)
kwargs['_preload_content'] = kwargs.get(
'_preload_content', True
)
kwargs['_request_timeout'] = kwargs.get(
'_request_timeout', None
)
kwargs['_check_input_type'] = kwargs.get(
'_check_input_type', True
)
kwargs['_check_return_type'] = kwargs.get(
'_check_return_type', True
)
kwargs['_host_index'] = kwargs.get('_host_index')
return self.call_with_http_info(**kwargs)
self.patch_relation_tuples = _Endpoint(
settings={
'response_type': None,
'auth': [],
'endpoint_path': '/relation-tuples',
'operation_id': 'patch_relation_tuples',
'http_method': 'PATCH',
'servers': None,
},
params_map={
'all': [
'payload',
],
'required': [],
'nullable': [
],
'enum': [
],
'validation': [
]
},
root_map={
'validations': {
},
'allowed_values': {
},
'openapi_types': {
'payload':
([PatchDelta],),
},
'attribute_map': {
},
'location_map': {
'payload': 'body',
},
'collection_format_map': {
}
},
headers_map={
'accept': [
'application/json'
],
'content_type': [
'application/json'
]
},
api_client=api_client,
callable=__patch_relation_tuples
)
| [
"3372410+aeneasr@users.noreply.github.com"
] | 3372410+aeneasr@users.noreply.github.com |
c40bb05e041686ddc9a84e2e9662c6b79990c8ae | b4869228738cdcd1a5fcb10b2091b78415ca9741 | /algorithmic-toolbox/week6/knapsack.py | ac783dfb88a227e9feaa93a0399a47c5c62f643b | [] | no_license | mmanishh/coursera-algo-toolbox | 8d23d18985acfb33f44f08ac8c306fdb68dc3e88 | d3153904732ab5b4125fc8913fcea6e969028822 | refs/heads/master | 2020-04-13T19:17:03.421935 | 2019-01-14T11:15:06 | 2019-01-14T11:15:06 | 163,397,885 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 885 | py | # Uses python3
import sys
def optimal_weight_naive(W, w):
# write your code here
result = 0
for x in w:
if result + x <= W:
result = result + x
return result
def optimal_weight(W,w):
n = len(w)
value = [[0 for j in range(W+1)] for i in range(n+1)]
for i in range(1,n+1):
for j in range(1,W+1):
#print("i: {0},j: {1}".format(i,j))
value[i][j]=value[i-1][j]
if w[i-1] <= j:
val = value[i-1][j-w[i-1]] + w[i-1] # replacing v[i] with w[i] because weight itself is value here
if val>value[i][j]:
value[i][j] = val
#print(np.array(value))
#print(np.array(value).shape)
return value[n][W]
if __name__ == '__main__':
input = sys.stdin.read()
W, n, *w = list(map(int, input.split()))
print(optimal_weight(W, w))
| [
"dfrozenthrone@gmail.com"
] | dfrozenthrone@gmail.com |
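The `optimal_weight` function in the `knapsack.py` record above fills a table with the recurrence `value[i][j] = max(value[i-1][j], value[i-1][j-w[i-1]] + w[i-1])`, treating each weight as its own value. A minimal standalone sketch of the same idea using a rolling 1-D table (the function name and sample inputs here are invented for illustration, not taken from the record):

```python
def max_weight(W, weights):
    # best[j] = largest achievable total weight <= j using the items seen so far
    best = [0] * (W + 1)
    for w in weights:
        # iterate capacities downward so each item is used at most once (0/1 knapsack)
        for j in range(W, w - 1, -1):
            best[j] = max(best[j], best[j - w] + w)
    return best[W]

if __name__ == "__main__":
    # with capacity 10 and items 1, 4, 8 the best subset is 1 + 8 = 9
    print(max_weight(10, [1, 4, 8]))  # -> 9
```

Iterating capacities downward is what keeps each item 0/1; iterating upward would allow unbounded reuse of the same item.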
de75953113f38f1ca7a911263a44bf7b01dc222d | 0a0c6994c319981f7ba5d31eee3b074453ca4c0d | /autres/ahmedmelliti/module/__init__.py | b56955a8c3ed1a936aabb84a3da36d040ed5a34f | [] | no_license | arianacarnielli/2i013 | 678595c498cba5ec932ee9badd4868a08dad0663 | a804abb7c2337fe9963c2ddfcd73db80d8b787d8 | refs/heads/master | 2021-05-10T13:30:21.353951 | 2018-05-08T14:17:40 | 2018-05-08T14:17:40 | 118,476,287 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 853 | py | #from .strategies import RandomStrategy
#from .strategies import *
from .strategies import *
from soccersimulator import SoccerTeam
def get_team(nb_players):
myteam = SoccerTeam(name="MaTeam")
#for i in range(nb_players):
# myteam.add("Joueur "+str(i) ,FonceurStrategy())
if nb_players==1:
myteam.add("Joueur "+str(0) ,FonceurStrategy())
if nb_players==2:
myteam.add("Joueur "+str(0) ,Defenseur())
myteam.add("Joueur "+str(1) ,FonceurStrategy())
if nb_players==4:
myteam.add("Joueur "+str(0) ,Defenseur())
myteam.add("Joueur "+str(1) ,Milieu())
myteam.add("Joueur "+str(2) ,FonceurStrategy())
myteam.add("Joueur "+str(3) ,FonceurStrategy())
return myteam
def get_team_challenge(num):
myteam = SoccerTeam(name="MaTeamChallenge")
if num == 1:
myteam.add("Joueur Chal "+str(num),FonceurStrategy())
return myteam
| [
"ariana.carnielli@gmail.com"
] | ariana.carnielli@gmail.com |
85c46754048661ed70e41167e6a0ba3fb08340df | bddc40a97f92fafb8cbbbfdbdfe6774996578bb0 | /exercicioLista_listas/ex04.py | 41c3c3e877a2ead3a68266440beefacbecb10b67 | [] | no_license | andrehmiguel/treinamento | 8f83041bd51387dd3e5cafed09c4bb0a08d0e375 | ed18e6a8cfba0baaa68757c12893c62a0938a67e | refs/heads/main | 2023-01-31T13:15:58.113392 | 2020-12-16T02:47:44 | 2020-12-16T02:47:44 | 317,631,214 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 426 | py | # 4. Faça um Programa que leia um vetor de 10 caracteres, e diga quantas consoantes
# foram lidas. Imprima as consoantes.
listaChar = []
consoantes = 0
print ('Informe 10 caracters')
for x in range(10):
listaChar.append(input(f'Informe o {x + 1}º caracter: '))
char = listaChar[x]
if(char not in ('a','e','i','o','u')):
consoantes += 1
print(f'Foram inseridas {consoantes} consoantes.')
print(listaChar) | [
"andrehmiguel@outlook.com"
] | andrehmiguel@outlook.com |
305a36881d7b04e423b340695be5258894e8f796 | 784359e29fce9b3cd7c4c2d71e5f0498dd6d4b5c | /src/test.py | 3c1dc8f967ea7179b24fec3646130ef5391d7ded | [] | no_license | lmorillas/imagenes-arasaac | 65740df76a8d8f43493464153eea08d93687df68 | 48bec6df2095dda1ab2db5b08b4dccaae1ffe5e0 | refs/heads/master | 2021-05-11T10:13:07.999743 | 2018-01-19T08:18:36 | 2018-01-19T08:18:36 | 118,097,208 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,024 | py | # coding: utf-8
'''
https://stackoverflow.com/questions/25615741/how-to-use-the-spanish-wordnet-in-nltk
http://python.6.x6.nabble.com/attachment/2743017/0/Python%252520Text%252520Processing%252520with%252520NLTK%2525202.0%252520Cookbook.pdf
https://www.pybonacci.org/2015/11/24/como-hacer-analisis-de-sentimiento-en-espanol-2/
http://www.tsc.uc3m.es/~miguel/MLG/adjuntos/NLTK.pdf
'''
import nltk
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
from nltk.stem import SnowballStemmer
tokenizer=nltk.data.load('tokenizers/punkt/spanish.pickle')
frase='''Trabajo básicamente en el apartado de la comedia. Me gustaría
estar en Diseño de Programación, pero por desgracia aprobé el bachillerato.'''
sent_tokenize(frase)
tokenizer=nltk.data.load('tokenizers/punkt/spanish.pickle')
tokenizer.tokenize(frase)
word_tokenize(frase)
spanish_stemmer=SnowballStemmer("spanish")
tokens = nltk.word_tokenize(frase)
[spanish_stemmer.stem(t) for t in tokens]
| [
"morillas@gmail.com"
] | morillas@gmail.com |
06a5d6e41fa605cea837fb39e38f115ddd43e2f2 | ee974d693ca4c4156121f8cb385328b52eaac07c | /env/lib/python3.6/site-packages/pbr/pbr_json.py | c18362b03df827909c74e5c536a42156e97ecada | [] | no_license | ngonhi/Attendance_Check_System_with_Face_Recognition | f4531cc4dee565d0e45c02217f73f3eda412b414 | 92ff88cbc0c740ad48e149033efd38137c9be88d | refs/heads/main | 2023-03-12T07:03:25.302649 | 2021-02-26T15:37:33 | 2021-02-26T15:37:33 | 341,493,686 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 129 | py | version https://git-lfs.github.com/spec/v1
oid sha256:686172037b3d816c532f2e6aad2cfc06e034a7e538b9be33bf7a1f7a559cd64f
size 1284
| [
"Nqk180998!"
] | Nqk180998! |
2f10850113a8aa2ce3f91329981f97d4fbd9a4c8 | 0e36d40275c09b94a3927f9530b5ee95a710960c | /mysite/mysite/settings.py | af0fc86bb54c9bd4fcbf34af58e1290a0598c8f4 | [] | no_license | pku-cs-code/python-linux_Learning | 6c681f227f7dd106990ac77c66780b06758340d5 | ed58cd80e8066590a942f0b1369e45ecd5837c73 | refs/heads/master | 2021-01-01T06:40:48.418551 | 2017-11-05T04:02:29 | 2017-11-05T04:02:29 | 97,451,613 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,096 | py | """
Django settings for mysite project.
Generated by 'django-admin startproject' using Django 1.11.3.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'plgr_iwsfz5zozu1vr_9(!#^ve9nj9$d&js*75+q37&zpdas3p'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'mysite.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'mysite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_URL = '/static/'
| [
"414220021@qq.com"
] | 414220021@qq.com |
815b00ec395cb0cfa575f77a0073e2eace93b7db | a9815c48ece2064c0b35e5f5ea76fa460ee67e43 | /Commands/Rainbow.py | 6716958404cb2c996b15c812739b04611ecea790 | [
"MIT"
] | permissive | Heufneutje/PyMoronBot | 1ca0ef3877efa4ca37de76a1af3862085515042e | 055abf0e685f3d2fc02863517952dc7fad9050f3 | refs/heads/master | 2020-12-28T21:39:11.871132 | 2016-07-14T16:38:15 | 2016-07-14T16:38:15 | 24,449,387 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,386 | py | # -*- coding: utf-8 -*-
"""
Created on May 04, 2014
@author: Tyranic-Moron
"""
from CommandInterface import CommandInterface
from IRCMessage import IRCMessage
from IRCResponse import IRCResponse, ResponseType
from twisted.words.protocols.irc import assembleFormattedText, attributes as A
class Rainbow(CommandInterface):
triggers = ['rainbow']
help = 'rainbow <text> - outputs the specified text with rainbow colours'
colours = [assembleFormattedText(A.fg.lightRed['']),
#assembleFormattedText(A.fg.orange['']),
assembleFormattedText(A.fg.yellow['']),
assembleFormattedText(A.fg.lightGreen['']),
assembleFormattedText(A.fg.lightCyan['']),
assembleFormattedText(A.fg.lightBlue['']),
assembleFormattedText(A.fg.lightMagenta['']),
]
def execute(self, message):
"""
@type message: IRCMessage
"""
if len(message.ParameterList) == 0:
return IRCResponse(ResponseType.Say, "You didn't give me any text to rainbow!", message.ReplyTo)
outputMessage = ''
for i, c in enumerate(message.Parameters):
outputMessage += self.colours[i % len(self.colours)] + c
outputMessage += assembleFormattedText(A.normal[''])
return IRCResponse(ResponseType.Say, outputMessage, message.ReplyTo)
| [
"matthewcpcox@gmail.com"
] | matthewcpcox@gmail.com |
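`Rainbow.execute` above cycles through its six colour codes with `self.colours[i % len(self.colours)]`. Stripped of the IRC formatting machinery, the cycling itself looks like this (the colour names and function name are invented for illustration):

```python
COLOURS = ['red', 'yellow', 'green', 'cyan', 'blue', 'magenta']  # six entries, like Rainbow.colours

def cycle_colours(text):
    # pair every character with the next colour, wrapping via modulo
    return [(ch, COLOURS[i % len(COLOURS)]) for i, ch in enumerate(text)]

print(cycle_colours('rainbow')[:3])  # -> [('r', 'red'), ('a', 'yellow'), ('i', 'green')]
```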
a084b0e7a5d66b9462e3e0f4016dca8595899060 | 42e2d31fe71e1a2c0b50a5d4bbe67e6e3e43a2ef | /contrib/devtools/check-doc.py | 1b3264bca50da1765ffaf44ca268e87bec46ddfa | [
"MIT"
] | permissive | coinwebfactory/aiascoin | aceed1ebcb069b03d1fea3384d9d5beca06bc223 | c8741cad5264a2d4c0bbca7813c4f4ad390915ae | refs/heads/master | 2020-03-24T16:29:51.130395 | 2018-08-28T09:56:10 | 2018-08-28T09:56:10 | 142,826,697 | 0 | 0 | MIT | 2018-07-30T04:58:29 | 2018-07-30T04:58:28 | null | UTF-8 | Python | false | false | 1,910 | py | #!/usr/bin/env python
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
'''
This checks if all command line args are documented.
Return value is 0 to indicate no error.
Author: @MarcoFalke
'''
from subprocess import check_output
import re
FOLDER_GREP = 'src'
FOLDER_TEST = 'src/test/'
CMD_ROOT_DIR = '`git rev-parse --show-toplevel`/%s' % FOLDER_GREP
CMD_GREP_ARGS = r"egrep -r -I '(map(Multi)?Args(\.count\(|\[)|Get(Bool)?Arg\()\"\-[^\"]+?\"' %s | grep -v '%s'" % (CMD_ROOT_DIR, FOLDER_TEST)
CMD_GREP_DOCS = r"egrep -r -I 'HelpMessageOpt\(\"\-[^\"=]+?(=|\")' %s" % (CMD_ROOT_DIR)
REGEX_ARG = re.compile(r'(?:map(?:Multi)?Args(?:\.count\(|\[)|Get(?:Bool)?Arg\()\"(\-[^\"]+?)\"')
REGEX_DOC = re.compile(r'HelpMessageOpt\(\"(\-[^\"=]+?)(?:=|\")')
# list unsupported, deprecated and duplicate args as they need no documentation
SET_DOC_OPTIONAL = set(['-rpcssl', '-benchmark', '-h', '-help', '-socks', '-tor', '-debugnet', '-whitelistalwaysrelay', '-prematurewitness', '-walletprematurewitness', '-promiscuousmempoolflags', '-blockminsize', '-sendfreetransactions', '-checklevel', '-liquidityprovider', '-anonymizeaiasamount'])
def main():
used = check_output(CMD_GREP_ARGS, shell=True)
docd = check_output(CMD_GREP_DOCS, shell=True)
args_used = set(re.findall(REGEX_ARG,used))
args_docd = set(re.findall(REGEX_DOC,docd)).union(SET_DOC_OPTIONAL)
args_need_doc = args_used.difference(args_docd)
args_unknown = args_docd.difference(args_used)
print "Args used : %s" % len(args_used)
print "Args documented : %s" % len(args_docd)
print "Args undocumented: %s" % len(args_need_doc)
print args_need_doc
print "Args unknown : %s" % len(args_unknown)
print args_unknown
exit(len(args_need_doc))
if __name__ == "__main__":
main()
| [
"root@localhost"
] | root@localhost |
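The core of `check-doc.py` above is two regex scans plus a pair of set differences; the same regexes run here on inline sample source (the C++ lines are made up for illustration):

```python
import re

# Regexes copied from check-doc.py above
REGEX_ARG = re.compile(r'(?:map(?:Multi)?Args(?:\.count\(|\[)|Get(?:Bool)?Arg\()\"(\-[^\"]+?)\"')
REGEX_DOC = re.compile(r'HelpMessageOpt\(\"(\-[^\"=]+?)(?:=|\")')

source = '''
GetBoolArg("-debug", false);
mapArgs.count("-rpcport");
GetArg("-conf", "default.conf");
'''
docs = '''
HelpMessageOpt("-debug", "Enable debugging");
HelpMessageOpt("-conf=<file>", "Config file");
'''

args_used = set(REGEX_ARG.findall(source))
args_docd = set(REGEX_DOC.findall(docs))
print(sorted(args_used - args_docd))  # prints ['-rpcport'] -- the undocumented arg
```

The script exits with the count of undocumented args, so a non-zero exit code fails the CI check.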
4ae43f49201a2697b237431281d61c7915d07ecd | 73becab05c869290177c82f40507d181398ba68d | /docs/_build/jupyter_execute/content/toi_notebooks/toi_2188.py | 5e49540bacad3dfee25c2b38cbbb801f9765eb9f | [] | no_license | avivajpeyi/tess_atlas_dev | 8461cb2ce1c95b169aeeec5a8deebf520a98e85d | 9470109a6029a36a640917324d42d527c0dc8456 | refs/heads/main | 2023-08-18T19:39:03.523060 | 2021-10-13T04:32:45 | 2021-10-13T04:32:45 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 15,516 | py | #!/usr/bin/env python
# coding: utf-8
# # TESS Atlas fit for TOI 2188
#
# **Version: '0.2.1.dev30+g4180b55.d20210911'**
#
# **Note: This notebook was automatically generated as part of the TESS Atlas project. More information can be found on GitHub:** [github.com/dfm/tess-atlas](https://github.com/dfm/tess-atlas)
#
# In this notebook, we do a quicklook fit for the parameters of the TESS Objects of Interest (TOI) in the system number 2188.
# To do this fit, we use the [exoplanet](https://exoplanet.dfm.io) library and you can find more information about that project at [exoplanet.dfm.io](https://exoplanet.dfm.io).
#
# From here, you can scroll down and take a look at the fit results, or you can:
#
# - [open the notebook in Google Colab to run the fit yourself](https://colab.research.google.com/github/dfm/tess-atlas/blob/gh-pages/notebooks/'0.2.1.dev30+g4180b55.d20210911'/toi-2188.ipynb),
# - [view the notebook on GitHub](https://github.com/dfm/tess-atlas/blob/gh-pages/notebooks/'0.2.1.dev30+g4180b55.d20210911'/toi-2188.ipynb), or
# - [download the notebook](https://github.com/dfm/tess-atlas/raw/gh-pages/notebooks/'0.2.1.dev30+g4180b55.d20210911'/toi-2188.ipynb).
#
#
#
# ## Caveats
#
# There are many caveats associated with this relatively simple "quicklook" type of analysis that should be kept in mind.
# Here are some of the main things that come to mind:
#
# 1. The orbits that we fit are constrained to be *circular*. One major effect of this approximation is that the fit will significantly overestimate the confidence of the impact parameter constraint, so the results for impact parameter shouldn't be taken too seriously.
#
# 2. Transit timing variations, correlated noise, and (probably) your favorite systematics are ignored. Sorry!
#
# 3. This notebook was generated automatically without human intervention. Use at your own risk!
#
# ## Table of Contents
#
# 1. [Getting started](#Getting-started)
# 2. [Downloading Data](#Downloading-Data)
# 3. [Fitting stellar parameters](#Fitting-stellar-parameters)
# 4. [Results](#Results)
# 5. [Citations](#Citations)
# 6. [Posterior constraints](#Posterior-constraints)
# 7. [Attribution](#Attribution)
#
# ## Getting started
#
# To get going, we'll need to make out plots show up inline:
# In[1]:
get_ipython().run_line_magic('matplotlib', 'inline')
# Then we'll set up the plotting styles and do all of the imports:
# In[2]:
import logging
import os
import exoplanet as xo
import numpy as np
import pandas as pd
import pymc3 as pm
import pymc3_ext as pmx
import theano.tensor as tt
from celerite2.theano import GaussianProcess, terms
from pymc3.sampling import MultiTrace
from tess_atlas.data import TICEntry
from tess_atlas.analysis.eccenticity_reweighting import (
calculate_eccentricity_weights,
)
from tess_atlas.utils import notebook_initalisations
from tess_atlas.utils import NOTEBOOK_LOGGER_NAME
notebook_initalisations()
logger = logging.getLogger(NOTEBOOK_LOGGER_NAME)
# In[3]:
os.environ["INTERACTIVE_PLOTS"] = "FALSE" # "TRUE" for interactive plots
from tess_atlas.plotting import (
plot_eccentricity_posteriors,
plot_folded_lightcurve,
plot_phase,
plot_lightcurve,
plot_posteriors,
)
# In[4]:
TOI_NUMBER = 2188
# ## Downloading Data
#
# Next, we grab some initial guesses for the TOI's parameters from [ExoFOP](https://exofop.ipac.caltech.edu/tess/) and download the TOI's lightcurve with [Lightkurve].
#
# We wrap the information in three objects, a `TIC Entry`, a `Planet Candidate` and finally a `Lightcurve Data` object.
#
# - The `TIC Entry` object holds one or more `Planet Candidate`s (each candidate associated with one TOI id number) and a `Lightcurve Data` object associated with the candidates. Note that the `Lightcurve Data` object is initially the same for each candidate but may be masked according to the candidate transit's period.
#
# - The `Planet Candidate` holds information on the TOI data collected by [SPOC] (e.g. transit period).
#
# - The `Lightcurve Data` holds the lightcurve time and flux data for the planet candidates.
#
# [ExoFOP]: https://exofop.ipac.caltech.edu/tess/
# [Lightkurve]: https://docs.lightkurve.org/index.html
# [SPOC]: https://heasarc.gsfc.nasa.gov/docs/tess/pipeline.html
#
# Downloading the data (this may take a few minutes):
# In[5]:
tic_entry = TICEntry.generate_tic_from_toi_number(toi=TOI_NUMBER)
# The initial guesses for the parameters (determined by SPOC):
# In[6]:
tic_entry.display()
# Plot of the lightcurve:
# In[7]:
plot_lightcurve(tic_entry)
# ## Fitting stellar parameters
# Now that we have the data, we can define a Bayesian model to fit it.
#
# ### The probabilistic model
#
# We use the probabilistic model as described in [Foreman-Mackey et al 2017] to determine the best parameters to fit the transits present in the lightcurve data.
#
# More explicitly, the stellar light curve $l(t; \vec{\theta})$ is modelled with a Gaussian Process (GP).
# A GP consists of a mean function $\mu(t;\vec{\theta})$ and a kernel function $k_\alpha(t,t';\vec{\theta})$, where $\vec{\theta}$ is the vector of parameters describing the lightcurve and $t$ is the time during which the lightcurve is under observation.
#
# The 8 parameters describing the lightcurve are
# $$\vec{\theta} = \{d_i, t0_i, tmax_i, b_i, r_i, f0, u1, u2\},$$
# where
# * $d_i$ transit durations for each planet,
# * $t0_i$ transit phase/epoch for each planet,
# * $tmax_i$ time of the last transit observed by TESS for each planet,
# * $b_i$ impact parameter for each planet,
# * $r_i$ planet radius in stellar radius for each planet,
# * $f0$ baseline relative flux of the light curve from star,
# * $u1$, $u2$ two parameters describing the limb-darkening profile of the star.
#
# Note: if the observed data only records a single transit,
# we swap $tmax_i$ with $p_i$ (orbital periods for each planet).
#
# With this we can write
# $$l(t;\vec{\theta}) \sim \mathcal{GP} (\mu(t;\vec{\theta}), k_\alpha(t,t';\vec{\theta}))\ .$$
#
# Here the mean and kernel functions are:
# * $\mu(t;\vec{\theta})$: a limb-darkened transit light curve ([Kipping 2013])
# * $k_\alpha(t,t';\vec{\theta}))$: a stochastically-driven, damped harmonic oscillator ([SHOTterm])
#
#
# Now that we have defined our transit model, we can implement it in python:
#
# [Foreman-Mackey et al 2017]: https://arxiv.org/pdf/1703.09710.pdf
# [Kipping 2013]: https://arxiv.org/abs/1308.0009
# [SHOTterm]: https://celerite2.readthedocs.io/en/latest/api/python/?highlight=SHOTerm#celerite2.terms.SHOTerm
# In[8]:
def build_planet_transit_model(tic_entry):
# TODO: update model https://github.com/exoplanet-dev/tess.world/blob/main/src/tess_world/templates/template.py#L305
t = tic_entry.lightcurve.time
y = tic_entry.lightcurve.flux
yerr = tic_entry.lightcurve.flux_err
n = tic_entry.planet_count
t0s = np.array([planet.t0 for planet in tic_entry.candidates])
depths = np.array([planet.depth for planet in tic_entry.candidates])
periods = np.array([planet.period for planet in tic_entry.candidates])
tmaxs = np.array([planet.tmax for planet in tic_entry.candidates])
durations = np.array([planet.duration for planet in tic_entry.candidates])
max_duration, min_duration = durations.max(), durations.min()
with pm.Model() as my_planet_transit_model:
## define planet 𝜃⃗
d_priors = pm.Lognormal("d", mu=np.log(0.1), sigma=10.0, shape=n)
r_priors = pm.Lognormal(
"r", mu=0.5 * np.log(depths * 1e-3), sd=1.0, shape=n
)
b_priors = xo.distributions.ImpactParameter("b", ror=r_priors, shape=n)
planet_priors = [r_priors, d_priors, b_priors]
## define orbit-timing 𝜃⃗
t0_norm = pm.Bound(
pm.Normal, lower=t0s - max_duration, upper=t0s + max_duration
)
t0_priors = t0_norm("t0", mu=t0s, sd=1.0, shape=n)
p_params, p_priors_list, tmax_priors_list = [], [], []
for n, planet in enumerate(tic_entry.candidates):
if planet.has_data_only_for_single_transit:
p_prior = pm.Pareto(
f"p_{planet.index}",
m=planet.period_min,
alpha=2.0 / 3.0,
testval=planet.period,
)
p_param = p_prior
tmax_prior = planet.t0
else:
tmax_norm = pm.Bound(
pm.Normal,
lower=planet.tmax - max_duration,
upper=planet.tmax + max_duration,
)
tmax_prior = tmax_norm(
f"tmax_{planet.index}",
mu=planet.tmax,
sigma=0.5 * planet.duration,
testval=planet.tmax,
)
p_prior = (tmax_prior - t0_priors[n]) / planet.num_periods
p_param = tmax_prior
p_params.append(p_param) # the param needed to calculate p
p_priors_list.append(p_prior)
tmax_priors_list.append(tmax_prior)
p_priors = pm.Deterministic("p", tt.stack(p_priors_list))
tmax_priors = pm.Deterministic("tmax", tt.stack(tmax_priors_list))
## define stellar 𝜃⃗
f0_prior = pm.Normal("f0", mu=0.0, sd=10.0)
u_prior = xo.distributions.QuadLimbDark("u")
stellar_priors = [f0_prior, u_prior]
## define 𝑘(𝑡,𝑡′;𝜃⃗ )
jitter_prior = pm.InverseGamma(
"jitter", **pmx.estimate_inverse_gamma_parameters(1.0, 5.0)
)
sigma_prior = pm.InverseGamma(
"sigma", **pmx.estimate_inverse_gamma_parameters(1.0, 5.0)
)
rho_prior = pm.InverseGamma(
"rho", **pmx.estimate_inverse_gamma_parameters(0.5, 10.0)
)
kernel = terms.SHOTerm(sigma=sigma_prior, rho=rho_prior, Q=0.3)
noise_priors = [jitter_prior, sigma_prior, rho_prior]
## define 𝜇(𝑡;𝜃) (the lightcurve model)
orbit = xo.orbits.KeplerianOrbit(
period=p_priors,
t0=t0_priors,
b=b_priors,
duration=d_priors,
ror=r_priors,
)
star = xo.LimbDarkLightCurve(u_prior)
lightcurve_models = star.get_light_curve(orbit=orbit, r=r_priors, t=t)
lightcurve = 1e3 * pm.math.sum(lightcurve_models, axis=-1) + f0_prior
lightcurve_models = pm.Deterministic("lightcurves", lightcurve_models)
rho_circ = pm.Deterministic("rho_circ", orbit.rho_star)
# Finally the GP observation model
residual = y - lightcurve
gp = GaussianProcess(
kernel, t=t, diag=yerr ** 2 + jitter_prior ** 2, mean=lightcurve
)
gp.marginal("obs", observed=y)
# cache params
my_params = dict(
planet_params=planet_priors,
noise_params=noise_priors,
stellar_params=stellar_priors,
period_params=p_params,
)
return my_planet_transit_model, my_params, gp
def test_model(model):
"""Test a point in the model and assure no nans"""
with model:
test_prob = model.check_test_point()
test_prob.name = "log P(test-point)"
assert not test_prob.isnull().values.any(), test_prob
test_pt = pd.Series(
{
k: str(round(np.array(v).flatten()[0], 2))
for k, v in model.test_point.items()
},
name="Test Point",
)
return pd.concat([test_pt, test_prob], axis=1)
# In[9]:
planet_transit_model, params, gp = build_planet_transit_model(tic_entry)
test_model(planet_transit_model)
# The test point acts as an example of a point in the parameter space.
# We can now optimize the model sampling parameters before initialising the sampler.
# In[10]:
def get_optimized_init_params(
model, planet_params, noise_params, stellar_params, period_params
):
"""Get params with maximimal log prob for sampling starting point"""
logger.info("Optimizing sampling starting point")
with model:
theta = model.test_point
kwargs = dict(start=theta, verbose=False, progress=False)
theta = pmx.optimize(**kwargs, vars=[noise_params[0]])
theta = pmx.optimize(**kwargs, vars=planet_params)
theta = pmx.optimize(**kwargs, vars=noise_params)
theta = pmx.optimize(**kwargs, vars=stellar_params)
theta = pmx.optimize(**kwargs, vars=period_params)
logger.info("Optimization complete!")
return theta
# In[11]:
init_params = get_optimized_init_params(planet_transit_model, **params)
# Now we can plot our initial model:
# In[12]:
model_lightcurves = [
init_params["lightcurves"][:, i] * 1e3
for i in range(tic_entry.planet_count)
]
plot_lightcurve(tic_entry, model_lightcurves)
# In[13]:
plot_folded_lightcurve(tic_entry, model_lightcurves)
#
# ### Sampling
# With the model and priors defined, we can begin sampling
# In[14]:
def start_model_sampling(model) -> MultiTrace:
np.random.seed(TOI_NUMBER)
with model:
samples_trace = pmx.sample(
tune=2000, draws=2000, chains=2, cores=1, start=init_params
)
return samples_trace
# In[15]:
trace = start_model_sampling(planet_transit_model)
# Let's save the posteriors and sampling metadata for future use, and take a look at summary statistics
# In[16]:
tic_entry.inference_trace = trace
tic_entry.save_inference_trace()
tic_entry.inference_trace
# ## Results
# Below are plots of the posterior probability distributions:
# In[17]:
plot_posteriors(tic_entry, trace)
# We can also plot the best-fitting light-curve model:
# In[18]:
plot_phase(tic_entry, trace)
# ### Post-processing: Eccentricity
#
# As discussed above, we fit this model assuming a circular orbit which speeds things up for a few reasons:
# 1) `e=0` allows simpler orbital dynamics which are more computationally efficient (no need to solve Kepler's equation numerically)
#
# 2) There are degeneracies between eccentricity, argument of periastron, impact parameter, and planet radius. Hence by setting `e=0` and using the duration in calculating the planet's orbit, the sampler can perform better.
#
# To first order, the eccentricity mainly just changes the transit duration.
# This can be thought of as a change in the implied density of the star.
# Therefore, if the transit is fit using stellar density (or duration, in this case) as one of the parameters, it is possible to make an independent measurement of the stellar density, and in turn infer the eccentricity of the orbit as a post-processing step.
# The details of this eccentricity calculation method are described in [Dawson & Johnson (2012)].
#
# [Dawson & Johnson (2012)]: https://arxiv.org/abs/1203.5537
# Note: a different stellar density parameter is required for each planet (if there is more than one planet)
# In[1]:
ecc_samples = calculate_eccentricity_weights(tic_entry, trace)
ecc_samples.to_csv(os.path.join(tic_entry.outdir, "eccentricity_samples.csv"))
plot_eccentricity_posteriors(tic_entry, ecc_samples)
# ## Citations
#
# In[ ]:
with planet_transit_model:
txt, bib = xo.citations.get_citations_for_model()
print(txt)
# In[ ]:
print("\n".join(bib.splitlines()) + "\n...")
# ### Packages used:
#
# In[ ]:
import pkg_resources
dists = [str(d).replace(" ", "==") for d in pkg_resources.working_set]
for i in dists:
print(i)
| [
"avi.vajpeyi@gmail.com"
] | avi.vajpeyi@gmail.com |
663424fd44e897ae383df66011f4a1f60bed3000 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02990/s076168357.py | 2db79e763da1248e48e6dfc6516dac1e7d408771 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 219 | py | mod = 10**9+7
N, K = map(int, input().split())
print(N-K+1)
i = 2
C1 = N-K+1
C2 = 1
while i <= K:
C1 *= ((N-K+1-(i-1))*pow(i, mod-2, mod))%mod
C2 *= ((K-(i-1))*pow(i-1, mod-2, mod))%mod
print((C1*C2)%mod)
i += 1 | [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
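The loop above appears to accumulate the product C(N-K+1, i) · C(K-1, i-1) mod 10^9+7, printing one value per i from 1 to K (my reading of the two recurrences — an assumption, not stated in the snippet). A clearer equivalent with an explicit modular-binomial helper:

```python
MOD = 10**9 + 7

def binom_mod(n, k, mod=MOD):
    """C(n, k) mod a prime, via modular inverses (Fermat's little theorem)."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for j in range(1, k + 1):
        num = num * (n - j + 1) % mod
        den = den * j % mod
    return num * pow(den, mod - 2, mod) % mod

def answers(n, k):
    # one answer per i = 1..k, matching the snippet's print-per-iteration
    return [binom_mod(n - k + 1, i) * binom_mod(k - 1, i - 1) % MOD
            for i in range(1, k + 1)]

print(answers(5, 3))  # prints [3, 6, 1]
```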
054a2be80baef36aea5f876de706b94da37caafe | 50e089f906489b2586cc586712420fd085f1f637 | /machine_learning.py | dd758d6b668604304b9f3b47f2727a1ac5ec2109 | [] | no_license | AaronTho/Python_Notes | 5ab629e3b3d49be5c68d2a285a79683dc604cd3e | 4aa0e1fb4a35763458a1da467e1bb01e393bc972 | refs/heads/main | 2023-07-24T00:59:23.552952 | 2021-09-11T17:32:25 | 2021-09-11T17:32:25 | 375,399,260 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 915 | py | # Import the Data
# Clean the Data
# Split the Data into Training/Test Sets
# Create a Model
# Train the Model
# Make Predictions
# Evaluate and Improve
# Libraries and Tools:
# Numpy
# Pandas
# Matplotlib
# Scikit-Learn
# Jupyter is a good machine learning environment
# Install Anaconda (had to also install the command line version)
# Command Line "jupyter notebook" to create new notebook in the browser
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
music_data = pd.read_csv('music.csv')
X = music_data.drop(columns=['genre'])
y = music_data['genre']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
predictions
score = accuracy_score(y_test, predictions)
score
| [
"aamith@gmail.com"
] | aamith@gmail.com |
11aaed6107aedc9ed2c355a0818e5438c2ac20fc | b0f0473f10df2fdb0018165785cc23c34b0c99e7 | /Peach.Core/Lib/locale.py | 0c6cc1ccaf9fae1c80663ef6c11e43e25b61f48b | [] | no_license | wimton/Meter-peach | d9294a56ec0c1fb2d1a2a4acec1c2bf47b0932df | af0302d1789a852746a3c900c6129ed9c15fb0f4 | refs/heads/master | 2023-04-25T22:54:31.696184 | 2021-05-19T13:14:55 | 2021-05-19T13:14:55 | 355,202,202 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 100,040 | py | """Locale support module.
The module provides low-level access to the C lib's locale APIs and adds high
level number formatting APIs as well as a locale aliasing engine to complement
these.
The aliasing engine includes support for many commonly used locale names and
maps them to values suitable for passing to the C lib's setlocale() function. It
also includes default encodings for all supported locale names.
"""
import sys
import encodings
import encodings.aliases
import re
import operator
import functools
import collections
# keep a copy of the builtin str type, because 'str' name is overridden
# in globals by a function below
_str = str
try:
_unicode = str
except NameError:
# If Python is built without Unicode support, the unicode type
# will not exist. Fake one.
class _unicode(object):
pass
# Try importing the _locale module.
#
# If this fails, fall back on a basic 'C' locale emulation.
# Yuck: LC_MESSAGES is non-standard: can't tell whether it exists before
# trying the import. So __all__ is also fiddled at the end of the file.
__all__ = ["getlocale", "getdefaultlocale", "getpreferredencoding", "Error",
"setlocale", "resetlocale", "localeconv", "strcoll", "strxfrm",
"str", "atof", "atoi", "format", "format_string", "currency",
"normalize", "LC_CTYPE", "LC_COLLATE", "LC_TIME", "LC_MONETARY",
"LC_NUMERIC", "LC_ALL", "CHAR_MAX"]
try:
from _locale import *
except ImportError:
# Locale emulation
CHAR_MAX = 127
LC_ALL = 6
LC_COLLATE = 3
LC_CTYPE = 0
LC_MESSAGES = 5
LC_MONETARY = 4
LC_NUMERIC = 1
LC_TIME = 2
Error = ValueError
def localeconv():
""" localeconv() -> dict.
Returns numeric and monetary locale-specific parameters.
"""
# 'C' locale default values
return {'grouping': [127],
'currency_symbol': '',
'n_sign_posn': 127,
'p_cs_precedes': 127,
'n_cs_precedes': 127,
'mon_grouping': [],
'n_sep_by_space': 127,
'decimal_point': '.',
'negative_sign': '',
'positive_sign': '',
'p_sep_by_space': 127,
'int_curr_symbol': '',
'p_sign_posn': 127,
'thousands_sep': '',
'mon_thousands_sep': '',
'frac_digits': 127,
'mon_decimal_point': '',
'int_frac_digits': 127}
def setlocale(category, value=None):
""" setlocale(integer,string=None) -> string.
Activates/queries locale processing.
"""
if value not in (None, '', 'C'):
raise Error('_locale emulation only supports "C" locale')
return 'C'
    def strcoll(a, b):
        """ strcoll(string,string) -> int.
            Compares two strings according to the locale.
        """
        # Python 3 has no cmp(); emulate it with a comparison pair
        return (a > b) - (a < b)
def strxfrm(s):
""" strxfrm(string) -> string.
Returns a string that behaves for cmp locale-aware.
"""
return s
_localeconv = localeconv
# With this dict, you can override some items of localeconv's return value.
# This is useful for testing purposes.
_override_localeconv = {}
@functools.wraps(_localeconv)
def localeconv():
d = _localeconv()
if _override_localeconv:
d.update(_override_localeconv)
return d
### Number formatting APIs
# Author: Martin von Loewis
# improved by Georg Brandl
# Iterate over grouping intervals
def _grouping_intervals(grouping):
last_interval = None
for interval in grouping:
# if grouping is -1, we are done
if interval == CHAR_MAX:
return
# 0: re-use last group ad infinitum
if interval == 0:
if last_interval is None:
raise ValueError("invalid grouping")
while True:
yield last_interval
yield interval
last_interval = interval
#perform the grouping from right to left
def _group(s, monetary=False):
conv = localeconv()
thousands_sep = conv[monetary and 'mon_thousands_sep' or 'thousands_sep']
grouping = conv[monetary and 'mon_grouping' or 'grouping']
if not grouping:
return (s, 0)
if s[-1] == ' ':
stripped = s.rstrip()
right_spaces = s[len(stripped):]
s = stripped
else:
right_spaces = ''
left_spaces = ''
groups = []
for interval in _grouping_intervals(grouping):
if not s or s[-1] not in "0123456789":
# only non-digit characters remain (sign, spaces)
left_spaces = s
s = ''
break
groups.append(s[-interval:])
s = s[:-interval]
if s:
groups.append(s)
groups.reverse()
return (
left_spaces + thousands_sep.join(groups) + right_spaces,
len(thousands_sep) * (len(groups) - 1)
)
# Strip a given amount of excess padding from the given string
def _strip_padding(s, amount):
lpos = 0
while amount and s[lpos] == ' ':
lpos += 1
amount -= 1
rpos = len(s) - 1
while amount and s[rpos] == ' ':
rpos -= 1
amount -= 1
return s[lpos:rpos+1]
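The right-to-left digit grouping implemented by `_grouping_intervals`/`_group` above can be sketched in isolation — a simplified stand-in that takes the separator and intervals directly instead of reading locale data:

```python
CHAR_MAX = 127  # locale's "no further grouping" sentinel, as in the module above

def grouping_intervals(grouping):
    last = None
    for interval in grouping:
        if interval == CHAR_MAX:      # done: no more grouping to the left
            return
        if interval == 0:             # 0 means: repeat the previous interval
            if last is None:
                raise ValueError("invalid grouping")
            while True:
                yield last
        yield interval
        last = interval

def group_digits(s, sep=",", grouping=(3, 0)):
    groups = []
    for interval in grouping_intervals(grouping):
        if not s:
            break
        groups.append(s[-interval:])
        s = s[:-interval]
    if s:                             # digits left once the intervals ran out
        groups.append(s)
    groups.reverse()
    return sep.join(groups)

print(group_digits("1234567"))                    # prints 1,234,567
print(group_digits("123456", ".", (4, CHAR_MAX))) # prints 12.3456
```

Unlike `_group`, this sketch ignores sign/space handling and does not report the number of separators inserted.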
_percent_re = re.compile(r'%(?:\((?P<key>.*?)\))?'
r'(?P<modifiers>[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]')
def format(percent, value, grouping=False, monetary=False, *additional):
"""Returns the locale-aware substitution of a %? specifier
(percent).
additional is for format strings which contain one or more
'*' modifiers."""
# this is only for one-percent-specifier strings and this should be checked
match = _percent_re.match(percent)
    if not match or len(match.group()) != len(percent):
raise ValueError(("format() must be given exactly one %%char "
"format specifier, %s not valid") % repr(percent))
return _format(percent, value, grouping, monetary, *additional)
def _format(percent, value, grouping=False, monetary=False, *additional):
if additional:
formatted = percent % ((value,) + additional)
else:
formatted = percent % value
# floats and decimal ints need special action!
if percent[-1] in 'eEfFgG':
seps = 0
parts = formatted.split('.')
if grouping:
parts[0], seps = _group(parts[0], monetary=monetary)
decimal_point = localeconv()[monetary and 'mon_decimal_point'
or 'decimal_point']
formatted = decimal_point.join(parts)
if seps:
formatted = _strip_padding(formatted, seps)
elif percent[-1] in 'diu':
seps = 0
if grouping:
formatted, seps = _group(formatted, monetary=monetary)
if seps:
formatted = _strip_padding(formatted, seps)
return formatted
def format_string(f, val, grouping=False):
"""Formats a string in the same way that the % formatting would use,
but takes the current locale into account.
Grouping is applied if the third parameter is true."""
percents = list(_percent_re.finditer(f))
new_f = _percent_re.sub('%s', f)
if isinstance(val, collections.Mapping):
new_val = []
for perc in percents:
if perc.group()[-1]=='%':
new_val.append('%')
else:
new_val.append(format(perc.group(), val, grouping))
else:
if not isinstance(val, tuple):
val = (val,)
new_val = []
i = 0
for perc in percents:
if perc.group()[-1]=='%':
new_val.append('%')
else:
starcount = perc.group('modifiers').count('*')
new_val.append(_format(perc.group(),
val[i],
grouping,
False,
*val[i+1:i+1+starcount]))
i += (1 + starcount)
val = tuple(new_val)
return new_f % val
def currency(val, symbol=True, grouping=False, international=False):
"""Formats val according to the currency settings
in the current locale."""
conv = localeconv()
# check for illegal values
digits = conv[international and 'int_frac_digits' or 'frac_digits']
if digits == 127:
raise ValueError("Currency formatting is not possible using "
"the 'C' locale.")
s = format('%%.%if' % digits, abs(val), grouping, monetary=True)
# '<' and '>' are markers if the sign must be inserted between symbol and value
s = '<' + s + '>'
if symbol:
smb = conv[international and 'int_curr_symbol' or 'currency_symbol']
precedes = conv[val<0 and 'n_cs_precedes' or 'p_cs_precedes']
separated = conv[val<0 and 'n_sep_by_space' or 'p_sep_by_space']
if precedes:
s = smb + (separated and ' ' or '') + s
else:
s = s + (separated and ' ' or '') + smb
sign_pos = conv[val<0 and 'n_sign_posn' or 'p_sign_posn']
sign = conv[val<0 and 'negative_sign' or 'positive_sign']
if sign_pos == 0:
s = '(' + s + ')'
elif sign_pos == 1:
s = sign + s
elif sign_pos == 2:
s = s + sign
elif sign_pos == 3:
s = s.replace('<', sign)
elif sign_pos == 4:
s = s.replace('>', sign)
else:
# the default if nothing specified;
# this should be the most fitting sign position
s = sign + s
return s.replace('<', '').replace('>', '')
def str(val):
"""Convert float to string, taking the locale into account."""
return format("%.12g", val)
def atof(string, func=float):
"Parses a string as a float according to the locale settings."
#First, get rid of the grouping
ts = localeconv()['thousands_sep']
if ts:
string = string.replace(ts, '')
#next, replace the decimal point with a dot
dd = localeconv()['decimal_point']
if dd:
string = string.replace(dd, '.')
#finally, parse the string
return func(string)
def atoi(str):
"Converts a string to an integer according to the locale settings."
return atof(str, int)
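`atof` above strips the thousands separator and swaps the locale decimal point for '.'. The same two replaces, shown standalone with German-style separators supplied explicitly so no locale needs to be installed:

```python
def parse_number(text, thousands_sep, decimal_point, func=float):
    """Mirror locale.atof(): drop grouping, normalise the decimal point."""
    if thousands_sep:
        text = text.replace(thousands_sep, "")
    if decimal_point:
        text = text.replace(decimal_point, ".")
    return func(text)

print(parse_number("1.234.567,89", ".", ","))  # prints 1234567.89
```

Passing `func=int` gives the `atoi` behaviour defined above.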
def _test():
setlocale(LC_ALL, "")
#do grouping
s1 = format("%d", 123456789,1)
print((s1, "is", atoi(s1)))
#standard formatting
s1 = str(3.14)
print((s1, "is", atof(s1)))
### Locale name aliasing engine
# Author: Marc-Andre Lemburg, mal@lemburg.com
# Various tweaks by Fredrik Lundh <fredrik@pythonware.com>
# store away the low-level version of setlocale (it's
# overridden below)
_setlocale = setlocale
# Avoid relying on the locale-dependent .lower() method
# (see issue #1813).
_ascii_lower_map = ''.join(
chr(x + 32 if x >= ord('A') and x <= ord('Z') else x)
for x in range(256)
)
def _replace_encoding(code, encoding):
if '.' in code:
langname = code[:code.index('.')]
else:
langname = code
# Convert the encoding to a C lib compatible encoding string
norm_encoding = encodings.normalize_encoding(encoding)
#print('norm encoding: %r' % norm_encoding)
norm_encoding = encodings.aliases.aliases.get(norm_encoding,
norm_encoding)
#print('aliased encoding: %r' % norm_encoding)
encoding = locale_encoding_alias.get(norm_encoding,
norm_encoding)
#print('found encoding %r' % encoding)
return langname + '.' + encoding
def normalize(localename):
""" Returns a normalized locale code for the given locale
name.
The returned locale code is formatted for use with
setlocale().
If normalization fails, the original name is returned
unchanged.
If the given encoding is not known, the function defaults to
the default encoding for the locale code just like setlocale()
does.
"""
# Normalize the locale name and extract the encoding and modifier
    if isinstance(localename, bytes):
        # decode byte strings so the str.translate() below operates on text
        localename = localename.decode('ascii')
code = localename.translate(_ascii_lower_map)
if ':' in code:
# ':' is sometimes used as encoding delimiter.
code = code.replace(':', '.')
if '@' in code:
code, modifier = code.split('@', 1)
else:
modifier = ''
if '.' in code:
langname, encoding = code.split('.')[:2]
else:
langname = code
encoding = ''
# First lookup: fullname (possibly with encoding and modifier)
lang_enc = langname
if encoding:
norm_encoding = encoding.replace('-', '')
norm_encoding = norm_encoding.replace('_', '')
lang_enc += '.' + norm_encoding
lookup_name = lang_enc
if modifier:
lookup_name += '@' + modifier
code = locale_alias.get(lookup_name, None)
if code is not None:
return code
#print('first lookup failed')
if modifier:
# Second try: fullname without modifier (possibly with encoding)
code = locale_alias.get(lang_enc, None)
if code is not None:
#print('lookup without modifier succeeded')
if '@' not in code:
return code + '@' + modifier
if code.split('@', 1)[1].translate(_ascii_lower_map) == modifier:
return code
#print('second lookup failed')
if encoding:
# Third try: langname (without encoding, possibly with modifier)
lookup_name = langname
if modifier:
lookup_name += '@' + modifier
code = locale_alias.get(lookup_name, None)
if code is not None:
#print('lookup without encoding succeeded')
if '@' not in code:
return _replace_encoding(code, encoding)
code, modifier = code.split('@', 1)
return _replace_encoding(code, encoding) + '@' + modifier
if modifier:
# Fourth try: langname (without encoding and modifier)
code = locale_alias.get(langname, None)
if code is not None:
#print('lookup without modifier and encoding succeeded')
if '@' not in code:
return _replace_encoding(code, encoding) + '@' + modifier
code, defmod = code.split('@', 1)
if defmod.translate(_ascii_lower_map) == modifier:
return _replace_encoding(code, encoding) + '@' + defmod
return localename
def _parse_localename(localename):
""" Parses the locale code for localename and returns the
result as tuple (language code, encoding).
The localename is normalized and passed through the locale
alias engine. A ValueError is raised in case the locale name
cannot be parsed.
The language code corresponds to RFC 1766. code and encoding
can be None in case the values cannot be determined or are
unknown to this implementation.
"""
code = normalize(localename)
if '@' in code:
# Deal with locale modifiers
code, modifier = code.split('@', 1)
if modifier == 'euro' and '.' not in code:
# Assume Latin-9 for @euro locales. This is bogus,
# since some systems may use other encodings for these
# locales. Also, we ignore other modifiers.
return code, 'iso-8859-15'
if '.' in code:
return tuple(code.split('.')[:2])
elif code == 'C':
return None, None
raise ValueError('unknown locale: %s' % localename)
def _build_localename(localetuple):
""" Builds a locale code from the given tuple (language code,
encoding).
No aliasing or normalizing takes place.
"""
language, encoding = localetuple
if language is None:
language = 'C'
if encoding is None:
return language
else:
return language + '.' + encoding
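A quick round trip through the two helpers above (both are module-private, so this is illustration rather than supported API):

```python
import locale

# Parsing splits a normalized name into (language code, encoding) ...
assert locale._parse_localename('de_DE.UTF-8') == ('de_DE', 'UTF-8')
# ... and the portable 'C' locale maps to (None, None).
assert locale._parse_localename('C') == (None, None)

# Building is the inverse; no aliasing or normalizing takes place.
assert locale._build_localename(('de_DE', 'UTF-8')) == 'de_DE.UTF-8'
assert locale._build_localename((None, None)) == 'C'
print('round trip ok')
```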

def getdefaultlocale(envvars=('LC_ALL', 'LC_CTYPE', 'LANG', 'LANGUAGE')):

    """ Tries to determine the default locale settings and returns
        them as tuple (language code, encoding).

        According to POSIX, a program which has not called
        setlocale(LC_ALL, "") runs using the portable 'C' locale.
        Calling setlocale(LC_ALL, "") lets it use the default locale as
        defined by the LANG variable. Since we don't want to interfere
        with the current locale setting we thus emulate the behavior
        in the way described above.

        To maintain compatibility with other platforms, not only the
        LANG variable is tested, but a list of variables given as
        envvars parameter. The first found to be defined will be
        used. envvars defaults to the search path used in GNU gettext;
        it must always contain the variable name 'LANG'.

        Except for the code 'C', the language code corresponds to RFC
        1766. code and encoding can be None in case the values cannot
        be determined.

    """
try:
# check if it's supported by the _locale module
import _locale
code, encoding = _locale._getdefaultlocale()
except (ImportError, AttributeError):
pass
else:
# make sure the code/encoding values are valid
if sys.platform == "win32" and code and code[:2] == "0x":
# map windows language identifier to language name
code = windows_locale.get(int(code, 0))
# ...add other platform-specific processing here, if
# necessary...
return code, encoding
# fall back on POSIX behaviour
import os
lookup = os.environ.get
for variable in envvars:
        localename = lookup(variable, None)
if localename:
if variable == 'LANGUAGE':
localename = localename.split(':')[0]
break
else:
localename = 'C'
return _parse_localename(localename)

def getlocale(category=LC_CTYPE):

    """ Returns the current setting for the given locale category as
        tuple (language code, encoding).

        category may be one of the LC_* values except LC_ALL. It
        defaults to LC_CTYPE.

        Except for the code 'C', the language code corresponds to RFC
        1766. code and encoding can be None in case the values cannot
        be determined.

    """
localename = _setlocale(category)
if category == LC_ALL and ';' in localename:
raise TypeError('category LC_ALL is not supported')
return _parse_localename(localename)

def setlocale(category, locale=None):

    """ Set the locale for the given category. The locale can be
        a string, an iterable of two strings (language code and encoding),
        or None.

        Iterables are converted to strings using the locale aliasing
        engine. Locale strings are passed directly to the C lib.

        category may be given as one of the LC_* values.

    """
if locale and not isinstance(locale, (_str, _unicode)):
# convert to string
locale = normalize(_build_localename(locale))
return _setlocale(category, locale)
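Two details of setlocale() worth seeing in action: the tuple form is converted via normalize(_build_localename(...)) before it reaches the C library, and calling setlocale() without a locale argument merely queries the current setting. The sketch sticks to the 'C' locale, the only one guaranteed to be installed everywhere:

```python
import locale

# What a (language, encoding) tuple becomes before the C-library call:
converted = locale.normalize(locale._build_localename(('de_DE', 'UTF-8')))
print(converted)                               # 'de_DE.UTF-8'

# Setting, then querying (the single-argument form only queries):
locale.setlocale(locale.LC_CTYPE, 'C')
assert locale.setlocale(locale.LC_CTYPE) == 'C'
```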

def resetlocale(category=LC_ALL):

    """ Sets the locale for category to the default setting.

        The default setting is determined by calling
        getdefaultlocale(). category defaults to LC_ALL.

    """
_setlocale(category, _build_localename(getdefaultlocale()))

if sys.platform.startswith("win"):
# On Win32, this will return the ANSI code page
def getpreferredencoding(do_setlocale = True):
"""Return the charset that the user is likely using."""
import _locale
return _locale._getdefaultlocale()[1]
else:
# On Unix, if CODESET is available, use that.
try:
CODESET
except NameError:
# Fall back to parsing environment variables :-(
def getpreferredencoding(do_setlocale = True):
"""Return the charset that the user is likely using,
by looking at environment variables."""
return getdefaultlocale()[1]
else:
def getpreferredencoding(do_setlocale = True):
"""Return the charset that the user is likely using,
according to the system configuration."""
if do_setlocale:
oldloc = setlocale(LC_CTYPE)
try:
setlocale(LC_CTYPE, "")
except Error:
pass
result = nl_langinfo(CODESET)
setlocale(LC_CTYPE, oldloc)
else:
result = nl_langinfo(CODESET)
if not result and sys.platform == 'darwin':
# nl_langinfo can return an empty string
# when the setting has an invalid value.
# Default to UTF-8 in that case because
# UTF-8 is the default charset on OSX and
# returning nothing will crash the
# interpreter.
result = 'UTF-8'
return result
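Whichever of the three branches above is selected, the contract is the same: a non-empty name of a codec the rest of the stdlib can resolve. Passing do_setlocale=False avoids the temporary LC_CTYPE switch:

```python
import codecs
import locale

enc = locale.getpreferredencoding(False)   # False: no setlocale() side effect
assert enc                                 # never empty (note the darwin fix-up)
codecs.lookup(enc)                         # must resolve to a real codec
print(enc)
```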

### Database
#
# The following data was extracted from the locale.alias file which
# comes with X11 and then hand edited removing the explicit encoding
# definitions and adding some more aliases. The file is usually
# available as /usr/lib/X11/locale/locale.alias.
#
#
# The local_encoding_alias table maps lowercase encoding alias names
# to C locale encoding names (case-sensitive). Note that normalize()
# first looks up the encoding in the encodings.aliases dictionary and
# then applies this mapping to find the correct C lib name for the
# encoding.
#
locale_encoding_alias = {
# Mappings for non-standard encoding names used in locale names
'437': 'C',
'c': 'C',
'en': 'ISO8859-1',
'jis': 'JIS7',
'jis7': 'JIS7',
'ajec': 'eucJP',
# Mappings from Python codec names to C lib encoding names
'ascii': 'ISO8859-1',
'latin_1': 'ISO8859-1',
'iso8859_1': 'ISO8859-1',
'iso8859_10': 'ISO8859-10',
'iso8859_11': 'ISO8859-11',
'iso8859_13': 'ISO8859-13',
'iso8859_14': 'ISO8859-14',
'iso8859_15': 'ISO8859-15',
'iso8859_16': 'ISO8859-16',
'iso8859_2': 'ISO8859-2',
'iso8859_3': 'ISO8859-3',
'iso8859_4': 'ISO8859-4',
'iso8859_5': 'ISO8859-5',
'iso8859_6': 'ISO8859-6',
'iso8859_7': 'ISO8859-7',
'iso8859_8': 'ISO8859-8',
'iso8859_9': 'ISO8859-9',
'iso2022_jp': 'JIS7',
'shift_jis': 'SJIS',
'tactis': 'TACTIS',
'euc_jp': 'eucJP',
'euc_kr': 'eucKR',
'utf_8': 'UTF-8',
'koi8_r': 'KOI8-R',
'koi8_u': 'KOI8-U',
# XXX This list is still incomplete. If you know more
# mappings, please file a bug report. Thanks.
}
#
# The locale_alias table maps lowercase alias names to C locale names
# (case-sensitive). Encodings are always separated from the locale
# name using a dot ('.'); they should only be given in case the
# language name is needed to interpret the given encoding alias
# correctly (CJK codes often have this need).
#
# Note that the normalize() function which uses these tables
# removes '_' and '-' characters from the encoding part of the
# locale name before doing the lookup. This saves a lot of
# space in the table.
#
# MAL 2004-12-10:
# Updated alias mapping to most recent locale.alias file
# from X.org distribution using makelocalealias.py.
#
# These are the differences compared to the old mapping (Python 2.4
# and older):
#
# updated 'bg' -> 'bg_BG.ISO8859-5' to 'bg_BG.CP1251'
# updated 'bg_bg' -> 'bg_BG.ISO8859-5' to 'bg_BG.CP1251'
# updated 'bulgarian' -> 'bg_BG.ISO8859-5' to 'bg_BG.CP1251'
# updated 'cz' -> 'cz_CZ.ISO8859-2' to 'cs_CZ.ISO8859-2'
# updated 'cz_cz' -> 'cz_CZ.ISO8859-2' to 'cs_CZ.ISO8859-2'
# updated 'czech' -> 'cs_CS.ISO8859-2' to 'cs_CZ.ISO8859-2'
# updated 'dutch' -> 'nl_BE.ISO8859-1' to 'nl_NL.ISO8859-1'
# updated 'et' -> 'et_EE.ISO8859-4' to 'et_EE.ISO8859-15'
# updated 'et_ee' -> 'et_EE.ISO8859-4' to 'et_EE.ISO8859-15'
# updated 'fi' -> 'fi_FI.ISO8859-1' to 'fi_FI.ISO8859-15'
# updated 'fi_fi' -> 'fi_FI.ISO8859-1' to 'fi_FI.ISO8859-15'
# updated 'iw' -> 'iw_IL.ISO8859-8' to 'he_IL.ISO8859-8'
# updated 'iw_il' -> 'iw_IL.ISO8859-8' to 'he_IL.ISO8859-8'
# updated 'japanese' -> 'ja_JP.SJIS' to 'ja_JP.eucJP'
# updated 'lt' -> 'lt_LT.ISO8859-4' to 'lt_LT.ISO8859-13'
# updated 'lv' -> 'lv_LV.ISO8859-4' to 'lv_LV.ISO8859-13'
# updated 'sl' -> 'sl_CS.ISO8859-2' to 'sl_SI.ISO8859-2'
# updated 'slovene' -> 'sl_CS.ISO8859-2' to 'sl_SI.ISO8859-2'
# updated 'th_th' -> 'th_TH.TACTIS' to 'th_TH.ISO8859-11'
# updated 'zh_cn' -> 'zh_CN.eucCN' to 'zh_CN.gb2312'
# updated 'zh_cn.big5' -> 'zh_TW.eucTW' to 'zh_TW.big5'
# updated 'zh_tw' -> 'zh_TW.eucTW' to 'zh_TW.big5'
#
# MAL 2008-05-30:
# Updated alias mapping to most recent locale.alias file
# from X.org distribution using makelocalealias.py.
#
# These are the differences compared to the old mapping (Python 2.5
# and older):
#
# updated 'cs_cs.iso88592' -> 'cs_CZ.ISO8859-2' to 'cs_CS.ISO8859-2'
# updated 'serbocroatian' -> 'sh_YU.ISO8859-2' to 'sr_CS.ISO8859-2'
# updated 'sh' -> 'sh_YU.ISO8859-2' to 'sr_CS.ISO8859-2'
# updated 'sh_hr.iso88592' -> 'sh_HR.ISO8859-2' to 'hr_HR.ISO8859-2'
# updated 'sh_sp' -> 'sh_YU.ISO8859-2' to 'sr_CS.ISO8859-2'
# updated 'sh_yu' -> 'sh_YU.ISO8859-2' to 'sr_CS.ISO8859-2'
# updated 'sp' -> 'sp_YU.ISO8859-5' to 'sr_CS.ISO8859-5'
# updated 'sp_yu' -> 'sp_YU.ISO8859-5' to 'sr_CS.ISO8859-5'
# updated 'sr' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'
# updated 'sr@cyrillic' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'
# updated 'sr_sp' -> 'sr_SP.ISO8859-2' to 'sr_CS.ISO8859-2'
# updated 'sr_yu' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'
# updated 'sr_yu.cp1251@cyrillic' -> 'sr_YU.CP1251' to 'sr_CS.CP1251'
# updated 'sr_yu.iso88592' -> 'sr_YU.ISO8859-2' to 'sr_CS.ISO8859-2'
# updated 'sr_yu.iso88595' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'
# updated 'sr_yu.iso88595@cyrillic' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'
# updated 'sr_yu.microsoftcp1251@cyrillic' -> 'sr_YU.CP1251' to 'sr_CS.CP1251'
# updated 'sr_yu.utf8@cyrillic' -> 'sr_YU.UTF-8' to 'sr_CS.UTF-8'
# updated 'sr_yu@cyrillic' -> 'sr_YU.ISO8859-5' to 'sr_CS.ISO8859-5'
#
# AP 2010-04-12:
# Updated alias mapping to most recent locale.alias file
# from X.org distribution using makelocalealias.py.
#
# These are the differences compared to the old mapping (Python 2.6.5
# and older):
#
# updated 'ru' -> 'ru_RU.ISO8859-5' to 'ru_RU.UTF-8'
# updated 'ru_ru' -> 'ru_RU.ISO8859-5' to 'ru_RU.UTF-8'
# updated 'serbocroatian' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'
# updated 'sh' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'
# updated 'sh_yu' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'
# updated 'sr' -> 'sr_CS.ISO8859-5' to 'sr_RS.UTF-8'
# updated 'sr@cyrillic' -> 'sr_CS.ISO8859-5' to 'sr_RS.UTF-8'
# updated 'sr@latn' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'
# updated 'sr_cs.utf8@latn' -> 'sr_CS.UTF-8' to 'sr_RS.UTF-8@latin'
# updated 'sr_cs@latn' -> 'sr_CS.ISO8859-2' to 'sr_RS.UTF-8@latin'
# updated 'sr_yu' -> 'sr_CS.ISO8859-5' to 'sr_RS.UTF-8@latin'
# updated 'sr_yu.utf8@cyrillic' -> 'sr_CS.UTF-8' to 'sr_RS.UTF-8'
# updated 'sr_yu@cyrillic' -> 'sr_CS.ISO8859-5' to 'sr_RS.UTF-8'
#
# SS 2013-12-20:
# Updated alias mapping to most recent locale.alias file
# from X.org distribution using makelocalealias.py.
#
# These are the differences compared to the old mapping (Python 2.7.6
# and older):
#
# updated 'a3' -> 'a3_AZ.KOI8-C' to 'az_AZ.KOI8-C'
# updated 'a3_az' -> 'a3_AZ.KOI8-C' to 'az_AZ.KOI8-C'
# updated 'a3_az.koi8c' -> 'a3_AZ.KOI8-C' to 'az_AZ.KOI8-C'
# updated 'cs_cs.iso88592' -> 'cs_CS.ISO8859-2' to 'cs_CZ.ISO8859-2'
# updated 'hebrew' -> 'iw_IL.ISO8859-8' to 'he_IL.ISO8859-8'
# updated 'hebrew.iso88598' -> 'iw_IL.ISO8859-8' to 'he_IL.ISO8859-8'
# updated 'sd' -> 'sd_IN@devanagari.UTF-8' to 'sd_IN.UTF-8'
# updated 'sr@latn' -> 'sr_RS.UTF-8@latin' to 'sr_CS.UTF-8@latin'
# updated 'sr_cs' -> 'sr_RS.UTF-8' to 'sr_CS.UTF-8'
# updated 'sr_cs.utf8@latn' -> 'sr_RS.UTF-8@latin' to 'sr_CS.UTF-8@latin'
# updated 'sr_cs@latn' -> 'sr_RS.UTF-8@latin' to 'sr_CS.UTF-8@latin'
#
# SS 2014-10-01:
# Updated alias mapping with glibc 2.19 supported locales.
locale_alias = {
'a3': 'az_AZ.KOI8-C',
'a3_az': 'az_AZ.KOI8-C',
'a3_az.koi8c': 'az_AZ.KOI8-C',
'a3_az.koic': 'az_AZ.KOI8-C',
'aa_dj': 'aa_DJ.ISO8859-1',
'aa_er': 'aa_ER.UTF-8',
'aa_et': 'aa_ET.UTF-8',
'af': 'af_ZA.ISO8859-1',
'af_za': 'af_ZA.ISO8859-1',
'af_za.iso88591': 'af_ZA.ISO8859-1',
'am': 'am_ET.UTF-8',
'am_et': 'am_ET.UTF-8',
'american': 'en_US.ISO8859-1',
'american.iso88591': 'en_US.ISO8859-1',
'an_es': 'an_ES.ISO8859-15',
'ar': 'ar_AA.ISO8859-6',
'ar_aa': 'ar_AA.ISO8859-6',
'ar_aa.iso88596': 'ar_AA.ISO8859-6',
'ar_ae': 'ar_AE.ISO8859-6',
'ar_ae.iso88596': 'ar_AE.ISO8859-6',
'ar_bh': 'ar_BH.ISO8859-6',
'ar_bh.iso88596': 'ar_BH.ISO8859-6',
'ar_dz': 'ar_DZ.ISO8859-6',
'ar_dz.iso88596': 'ar_DZ.ISO8859-6',
'ar_eg': 'ar_EG.ISO8859-6',
'ar_eg.iso88596': 'ar_EG.ISO8859-6',
'ar_in': 'ar_IN.UTF-8',
'ar_iq': 'ar_IQ.ISO8859-6',
'ar_iq.iso88596': 'ar_IQ.ISO8859-6',
'ar_jo': 'ar_JO.ISO8859-6',
'ar_jo.iso88596': 'ar_JO.ISO8859-6',
'ar_kw': 'ar_KW.ISO8859-6',
'ar_kw.iso88596': 'ar_KW.ISO8859-6',
'ar_lb': 'ar_LB.ISO8859-6',
'ar_lb.iso88596': 'ar_LB.ISO8859-6',
'ar_ly': 'ar_LY.ISO8859-6',
'ar_ly.iso88596': 'ar_LY.ISO8859-6',
'ar_ma': 'ar_MA.ISO8859-6',
'ar_ma.iso88596': 'ar_MA.ISO8859-6',
'ar_om': 'ar_OM.ISO8859-6',
'ar_om.iso88596': 'ar_OM.ISO8859-6',
'ar_qa': 'ar_QA.ISO8859-6',
'ar_qa.iso88596': 'ar_QA.ISO8859-6',
'ar_sa': 'ar_SA.ISO8859-6',
'ar_sa.iso88596': 'ar_SA.ISO8859-6',
'ar_sd': 'ar_SD.ISO8859-6',
'ar_sd.iso88596': 'ar_SD.ISO8859-6',
'ar_sy': 'ar_SY.ISO8859-6',
'ar_sy.iso88596': 'ar_SY.ISO8859-6',
'ar_tn': 'ar_TN.ISO8859-6',
'ar_tn.iso88596': 'ar_TN.ISO8859-6',
'ar_ye': 'ar_YE.ISO8859-6',
'ar_ye.iso88596': 'ar_YE.ISO8859-6',
'arabic': 'ar_AA.ISO8859-6',
'arabic.iso88596': 'ar_AA.ISO8859-6',
'as': 'as_IN.UTF-8',
'as_in': 'as_IN.UTF-8',
'ast_es': 'ast_ES.ISO8859-15',
'ayc_pe': 'ayc_PE.UTF-8',
'az': 'az_AZ.ISO8859-9E',
'az_az': 'az_AZ.ISO8859-9E',
'az_az.iso88599e': 'az_AZ.ISO8859-9E',
'be': 'be_BY.CP1251',
'be@latin': 'be_BY.UTF-8@latin',
'be_bg.utf8': 'bg_BG.UTF-8',
'be_by': 'be_BY.CP1251',
'be_by.cp1251': 'be_BY.CP1251',
'be_by.microsoftcp1251': 'be_BY.CP1251',
'be_by.utf8@latin': 'be_BY.UTF-8@latin',
'be_by@latin': 'be_BY.UTF-8@latin',
'bem_zm': 'bem_ZM.UTF-8',
'ber_dz': 'ber_DZ.UTF-8',
'ber_ma': 'ber_MA.UTF-8',
'bg': 'bg_BG.CP1251',
'bg_bg': 'bg_BG.CP1251',
'bg_bg.cp1251': 'bg_BG.CP1251',
'bg_bg.iso88595': 'bg_BG.ISO8859-5',
'bg_bg.koi8r': 'bg_BG.KOI8-R',
'bg_bg.microsoftcp1251': 'bg_BG.CP1251',
'bho_in': 'bho_IN.UTF-8',
'bn_bd': 'bn_BD.UTF-8',
'bn_in': 'bn_IN.UTF-8',
'bo_cn': 'bo_CN.UTF-8',
'bo_in': 'bo_IN.UTF-8',
'bokmal': 'nb_NO.ISO8859-1',
'bokm\xe5l': 'nb_NO.ISO8859-1',
'br': 'br_FR.ISO8859-1',
'br_fr': 'br_FR.ISO8859-1',
'br_fr.iso88591': 'br_FR.ISO8859-1',
'br_fr.iso885914': 'br_FR.ISO8859-14',
'br_fr.iso885915': 'br_FR.ISO8859-15',
'br_fr.iso885915@euro': 'br_FR.ISO8859-15',
'br_fr.utf8@euro': 'br_FR.UTF-8',
'br_fr@euro': 'br_FR.ISO8859-15',
'brx_in': 'brx_IN.UTF-8',
'bs': 'bs_BA.ISO8859-2',
'bs_ba': 'bs_BA.ISO8859-2',
'bs_ba.iso88592': 'bs_BA.ISO8859-2',
'bulgarian': 'bg_BG.CP1251',
'byn_er': 'byn_ER.UTF-8',
'c': 'C',
'c-french': 'fr_CA.ISO8859-1',
'c-french.iso88591': 'fr_CA.ISO8859-1',
'c.ascii': 'C',
'c.en': 'C',
'c.iso88591': 'en_US.ISO8859-1',
'c.utf8': 'en_US.UTF-8',
'c_c': 'C',
'c_c.c': 'C',
'ca': 'ca_ES.ISO8859-1',
'ca_ad': 'ca_AD.ISO8859-1',
'ca_ad.iso88591': 'ca_AD.ISO8859-1',
'ca_ad.iso885915': 'ca_AD.ISO8859-15',
'ca_ad.iso885915@euro': 'ca_AD.ISO8859-15',
'ca_ad.utf8@euro': 'ca_AD.UTF-8',
'ca_ad@euro': 'ca_AD.ISO8859-15',
'ca_es': 'ca_ES.ISO8859-1',
'ca_es.iso88591': 'ca_ES.ISO8859-1',
'ca_es.iso885915': 'ca_ES.ISO8859-15',
'ca_es.iso885915@euro': 'ca_ES.ISO8859-15',
'ca_es.utf8@euro': 'ca_ES.UTF-8',
'ca_es@valencia': 'ca_ES.ISO8859-15@valencia',
'ca_es@euro': 'ca_ES.ISO8859-15',
'ca_fr': 'ca_FR.ISO8859-1',
'ca_fr.iso88591': 'ca_FR.ISO8859-1',
'ca_fr.iso885915': 'ca_FR.ISO8859-15',
'ca_fr.iso885915@euro': 'ca_FR.ISO8859-15',
'ca_fr.utf8@euro': 'ca_FR.UTF-8',
'ca_fr@euro': 'ca_FR.ISO8859-15',
'ca_it': 'ca_IT.ISO8859-1',
'ca_it.iso88591': 'ca_IT.ISO8859-1',
'ca_it.iso885915': 'ca_IT.ISO8859-15',
'ca_it.iso885915@euro': 'ca_IT.ISO8859-15',
'ca_it.utf8@euro': 'ca_IT.UTF-8',
'ca_it@euro': 'ca_IT.ISO8859-15',
'catalan': 'ca_ES.ISO8859-1',
'cextend': 'en_US.ISO8859-1',
'cextend.en': 'en_US.ISO8859-1',
'chinese-s': 'zh_CN.eucCN',
'chinese-t': 'zh_TW.eucTW',
'crh_ua': 'crh_UA.UTF-8',
'croatian': 'hr_HR.ISO8859-2',
'cs': 'cs_CZ.ISO8859-2',
'cs_cs': 'cs_CZ.ISO8859-2',
'cs_cs.iso88592': 'cs_CZ.ISO8859-2',
'cs_cz': 'cs_CZ.ISO8859-2',
'cs_cz.iso88592': 'cs_CZ.ISO8859-2',
'csb_pl': 'csb_PL.UTF-8',
'cv_ru': 'cv_RU.UTF-8',
'cy': 'cy_GB.ISO8859-1',
'cy_gb': 'cy_GB.ISO8859-1',
'cy_gb.iso88591': 'cy_GB.ISO8859-1',
'cy_gb.iso885914': 'cy_GB.ISO8859-14',
'cy_gb.iso885915': 'cy_GB.ISO8859-15',
'cy_gb@euro': 'cy_GB.ISO8859-15',
'cz': 'cs_CZ.ISO8859-2',
'cz_cz': 'cs_CZ.ISO8859-2',
'czech': 'cs_CZ.ISO8859-2',
'da': 'da_DK.ISO8859-1',
'da.iso885915': 'da_DK.ISO8859-15',
'da_dk': 'da_DK.ISO8859-1',
'da_dk.88591': 'da_DK.ISO8859-1',
'da_dk.885915': 'da_DK.ISO8859-15',
'da_dk.iso88591': 'da_DK.ISO8859-1',
'da_dk.iso885915': 'da_DK.ISO8859-15',
'da_dk@euro': 'da_DK.ISO8859-15',
'danish': 'da_DK.ISO8859-1',
'danish.iso88591': 'da_DK.ISO8859-1',
'dansk': 'da_DK.ISO8859-1',
'de': 'de_DE.ISO8859-1',
'de.iso885915': 'de_DE.ISO8859-15',
'de_at': 'de_AT.ISO8859-1',
'de_at.iso88591': 'de_AT.ISO8859-1',
'de_at.iso885915': 'de_AT.ISO8859-15',
'de_at.iso885915@euro': 'de_AT.ISO8859-15',
'de_at.utf8@euro': 'de_AT.UTF-8',
'de_at@euro': 'de_AT.ISO8859-15',
'de_be': 'de_BE.ISO8859-1',
'de_be.iso88591': 'de_BE.ISO8859-1',
'de_be.iso885915': 'de_BE.ISO8859-15',
'de_be.iso885915@euro': 'de_BE.ISO8859-15',
'de_be.utf8@euro': 'de_BE.UTF-8',
'de_be@euro': 'de_BE.ISO8859-15',
'de_ch': 'de_CH.ISO8859-1',
'de_ch.iso88591': 'de_CH.ISO8859-1',
'de_ch.iso885915': 'de_CH.ISO8859-15',
'de_ch@euro': 'de_CH.ISO8859-15',
'de_de': 'de_DE.ISO8859-1',
'de_de.88591': 'de_DE.ISO8859-1',
'de_de.885915': 'de_DE.ISO8859-15',
'de_de.885915@euro': 'de_DE.ISO8859-15',
'de_de.iso88591': 'de_DE.ISO8859-1',
'de_de.iso885915': 'de_DE.ISO8859-15',
'de_de.iso885915@euro': 'de_DE.ISO8859-15',
'de_de.utf8@euro': 'de_DE.UTF-8',
'de_de@euro': 'de_DE.ISO8859-15',
'de_li.utf8': 'de_LI.UTF-8',
'de_lu': 'de_LU.ISO8859-1',
'de_lu.iso88591': 'de_LU.ISO8859-1',
'de_lu.iso885915': 'de_LU.ISO8859-15',
'de_lu.iso885915@euro': 'de_LU.ISO8859-15',
'de_lu.utf8@euro': 'de_LU.UTF-8',
'de_lu@euro': 'de_LU.ISO8859-15',
'deutsch': 'de_DE.ISO8859-1',
'doi_in': 'doi_IN.UTF-8',
'dutch': 'nl_NL.ISO8859-1',
'dutch.iso88591': 'nl_BE.ISO8859-1',
'dv_mv': 'dv_MV.UTF-8',
'dz_bt': 'dz_BT.UTF-8',
'ee': 'ee_EE.ISO8859-4',
'ee_ee': 'ee_EE.ISO8859-4',
'ee_ee.iso88594': 'ee_EE.ISO8859-4',
'eesti': 'et_EE.ISO8859-1',
'el': 'el_GR.ISO8859-7',
'el_cy': 'el_CY.ISO8859-7',
'el_gr': 'el_GR.ISO8859-7',
'el_gr.iso88597': 'el_GR.ISO8859-7',
'el_gr@euro': 'el_GR.ISO8859-15',
'en': 'en_US.ISO8859-1',
'en.iso88591': 'en_US.ISO8859-1',
'en_ag': 'en_AG.UTF-8',
'en_au': 'en_AU.ISO8859-1',
'en_au.iso88591': 'en_AU.ISO8859-1',
'en_be': 'en_BE.ISO8859-1',
'en_be@euro': 'en_BE.ISO8859-15',
'en_bw': 'en_BW.ISO8859-1',
'en_bw.iso88591': 'en_BW.ISO8859-1',
'en_ca': 'en_CA.ISO8859-1',
'en_ca.iso88591': 'en_CA.ISO8859-1',
'en_dk': 'en_DK.ISO8859-1',
'en_dl.utf8': 'en_DL.UTF-8',
'en_gb': 'en_GB.ISO8859-1',
'en_gb.88591': 'en_GB.ISO8859-1',
'en_gb.iso88591': 'en_GB.ISO8859-1',
'en_gb.iso885915': 'en_GB.ISO8859-15',
'en_gb@euro': 'en_GB.ISO8859-15',
'en_hk': 'en_HK.ISO8859-1',
'en_hk.iso88591': 'en_HK.ISO8859-1',
'en_ie': 'en_IE.ISO8859-1',
'en_ie.iso88591': 'en_IE.ISO8859-1',
'en_ie.iso885915': 'en_IE.ISO8859-15',
'en_ie.iso885915@euro': 'en_IE.ISO8859-15',
'en_ie.utf8@euro': 'en_IE.UTF-8',
'en_ie@euro': 'en_IE.ISO8859-15',
'en_in': 'en_IN.ISO8859-1',
'en_ng': 'en_NG.UTF-8',
'en_nz': 'en_NZ.ISO8859-1',
'en_nz.iso88591': 'en_NZ.ISO8859-1',
'en_ph': 'en_PH.ISO8859-1',
'en_ph.iso88591': 'en_PH.ISO8859-1',
'en_sg': 'en_SG.ISO8859-1',
'en_sg.iso88591': 'en_SG.ISO8859-1',
'en_uk': 'en_GB.ISO8859-1',
'en_us': 'en_US.ISO8859-1',
'en_us.88591': 'en_US.ISO8859-1',
'en_us.885915': 'en_US.ISO8859-15',
'en_us.iso88591': 'en_US.ISO8859-1',
'en_us.iso885915': 'en_US.ISO8859-15',
'en_us.iso885915@euro': 'en_US.ISO8859-15',
'en_us@euro': 'en_US.ISO8859-15',
'en_us@euro@euro': 'en_US.ISO8859-15',
'en_za': 'en_ZA.ISO8859-1',
'en_za.88591': 'en_ZA.ISO8859-1',
'en_za.iso88591': 'en_ZA.ISO8859-1',
'en_za.iso885915': 'en_ZA.ISO8859-15',
'en_za@euro': 'en_ZA.ISO8859-15',
'en_zm': 'en_ZM.UTF-8',
'en_zw': 'en_ZW.ISO8859-1',
'en_zw.iso88591': 'en_ZW.ISO8859-1',
'en_zw.utf8': 'en_ZS.UTF-8',
'eng_gb': 'en_GB.ISO8859-1',
'eng_gb.8859': 'en_GB.ISO8859-1',
'english': 'en_EN.ISO8859-1',
'english.iso88591': 'en_EN.ISO8859-1',
'english_uk': 'en_GB.ISO8859-1',
'english_uk.8859': 'en_GB.ISO8859-1',
'english_united-states': 'en_US.ISO8859-1',
'english_united-states.437': 'C',
'english_us': 'en_US.ISO8859-1',
'english_us.8859': 'en_US.ISO8859-1',
'english_us.ascii': 'en_US.ISO8859-1',
'eo': 'eo_XX.ISO8859-3',
'eo.utf8': 'eo.UTF-8',
'eo_eo': 'eo_EO.ISO8859-3',
'eo_eo.iso88593': 'eo_EO.ISO8859-3',
'eo_us.utf8': 'eo_US.UTF-8',
'eo_xx': 'eo_XX.ISO8859-3',
'eo_xx.iso88593': 'eo_XX.ISO8859-3',
'es': 'es_ES.ISO8859-1',
'es_ar': 'es_AR.ISO8859-1',
'es_ar.iso88591': 'es_AR.ISO8859-1',
'es_bo': 'es_BO.ISO8859-1',
'es_bo.iso88591': 'es_BO.ISO8859-1',
'es_cl': 'es_CL.ISO8859-1',
'es_cl.iso88591': 'es_CL.ISO8859-1',
'es_co': 'es_CO.ISO8859-1',
'es_co.iso88591': 'es_CO.ISO8859-1',
'es_cr': 'es_CR.ISO8859-1',
'es_cr.iso88591': 'es_CR.ISO8859-1',
'es_cu': 'es_CU.UTF-8',
'es_do': 'es_DO.ISO8859-1',
'es_do.iso88591': 'es_DO.ISO8859-1',
'es_ec': 'es_EC.ISO8859-1',
'es_ec.iso88591': 'es_EC.ISO8859-1',
'es_es': 'es_ES.ISO8859-1',
'es_es.88591': 'es_ES.ISO8859-1',
'es_es.iso88591': 'es_ES.ISO8859-1',
'es_es.iso885915': 'es_ES.ISO8859-15',
'es_es.iso885915@euro': 'es_ES.ISO8859-15',
'es_es.utf8@euro': 'es_ES.UTF-8',
'es_es@euro': 'es_ES.ISO8859-15',
'es_gt': 'es_GT.ISO8859-1',
'es_gt.iso88591': 'es_GT.ISO8859-1',
'es_hn': 'es_HN.ISO8859-1',
'es_hn.iso88591': 'es_HN.ISO8859-1',
'es_mx': 'es_MX.ISO8859-1',
'es_mx.iso88591': 'es_MX.ISO8859-1',
'es_ni': 'es_NI.ISO8859-1',
'es_ni.iso88591': 'es_NI.ISO8859-1',
'es_pa': 'es_PA.ISO8859-1',
'es_pa.iso88591': 'es_PA.ISO8859-1',
'es_pa.iso885915': 'es_PA.ISO8859-15',
'es_pa@euro': 'es_PA.ISO8859-15',
'es_pe': 'es_PE.ISO8859-1',
'es_pe.iso88591': 'es_PE.ISO8859-1',
'es_pe.iso885915': 'es_PE.ISO8859-15',
'es_pe@euro': 'es_PE.ISO8859-15',
'es_pr': 'es_PR.ISO8859-1',
'es_pr.iso88591': 'es_PR.ISO8859-1',
'es_py': 'es_PY.ISO8859-1',
'es_py.iso88591': 'es_PY.ISO8859-1',
'es_py.iso885915': 'es_PY.ISO8859-15',
'es_py@euro': 'es_PY.ISO8859-15',
'es_sv': 'es_SV.ISO8859-1',
'es_sv.iso88591': 'es_SV.ISO8859-1',
'es_sv.iso885915': 'es_SV.ISO8859-15',
'es_sv@euro': 'es_SV.ISO8859-15',
'es_us': 'es_US.ISO8859-1',
'es_us.iso88591': 'es_US.ISO8859-1',
'es_uy': 'es_UY.ISO8859-1',
'es_uy.iso88591': 'es_UY.ISO8859-1',
'es_uy.iso885915': 'es_UY.ISO8859-15',
'es_uy@euro': 'es_UY.ISO8859-15',
'es_ve': 'es_VE.ISO8859-1',
'es_ve.iso88591': 'es_VE.ISO8859-1',
'es_ve.iso885915': 'es_VE.ISO8859-15',
'es_ve@euro': 'es_VE.ISO8859-15',
'estonian': 'et_EE.ISO8859-1',
'et': 'et_EE.ISO8859-15',
'et_ee': 'et_EE.ISO8859-15',
'et_ee.iso88591': 'et_EE.ISO8859-1',
'et_ee.iso885913': 'et_EE.ISO8859-13',
'et_ee.iso885915': 'et_EE.ISO8859-15',
'et_ee.iso88594': 'et_EE.ISO8859-4',
'et_ee@euro': 'et_EE.ISO8859-15',
'eu': 'eu_ES.ISO8859-1',
'eu_es': 'eu_ES.ISO8859-1',
'eu_es.iso88591': 'eu_ES.ISO8859-1',
'eu_es.iso885915': 'eu_ES.ISO8859-15',
'eu_es.iso885915@euro': 'eu_ES.ISO8859-15',
'eu_es.utf8@euro': 'eu_ES.UTF-8',
'eu_es@euro': 'eu_ES.ISO8859-15',
'eu_fr': 'eu_FR.ISO8859-1',
'fa': 'fa_IR.UTF-8',
'fa_ir': 'fa_IR.UTF-8',
'fa_ir.isiri3342': 'fa_IR.ISIRI-3342',
'ff_sn': 'ff_SN.UTF-8',
'fi': 'fi_FI.ISO8859-15',
'fi.iso885915': 'fi_FI.ISO8859-15',
'fi_fi': 'fi_FI.ISO8859-15',
'fi_fi.88591': 'fi_FI.ISO8859-1',
'fi_fi.iso88591': 'fi_FI.ISO8859-1',
'fi_fi.iso885915': 'fi_FI.ISO8859-15',
'fi_fi.iso885915@euro': 'fi_FI.ISO8859-15',
'fi_fi.utf8@euro': 'fi_FI.UTF-8',
'fi_fi@euro': 'fi_FI.ISO8859-15',
'fil_ph': 'fil_PH.UTF-8',
'finnish': 'fi_FI.ISO8859-1',
'finnish.iso88591': 'fi_FI.ISO8859-1',
'fo': 'fo_FO.ISO8859-1',
'fo_fo': 'fo_FO.ISO8859-1',
'fo_fo.iso88591': 'fo_FO.ISO8859-1',
'fo_fo.iso885915': 'fo_FO.ISO8859-15',
'fo_fo@euro': 'fo_FO.ISO8859-15',
'fr': 'fr_FR.ISO8859-1',
'fr.iso885915': 'fr_FR.ISO8859-15',
'fr_be': 'fr_BE.ISO8859-1',
'fr_be.88591': 'fr_BE.ISO8859-1',
'fr_be.iso88591': 'fr_BE.ISO8859-1',
'fr_be.iso885915': 'fr_BE.ISO8859-15',
'fr_be.iso885915@euro': 'fr_BE.ISO8859-15',
'fr_be.utf8@euro': 'fr_BE.UTF-8',
'fr_be@euro': 'fr_BE.ISO8859-15',
'fr_ca': 'fr_CA.ISO8859-1',
'fr_ca.88591': 'fr_CA.ISO8859-1',
'fr_ca.iso88591': 'fr_CA.ISO8859-1',
'fr_ca.iso885915': 'fr_CA.ISO8859-15',
'fr_ca@euro': 'fr_CA.ISO8859-15',
'fr_ch': 'fr_CH.ISO8859-1',
'fr_ch.88591': 'fr_CH.ISO8859-1',
'fr_ch.iso88591': 'fr_CH.ISO8859-1',
'fr_ch.iso885915': 'fr_CH.ISO8859-15',
'fr_ch@euro': 'fr_CH.ISO8859-15',
'fr_fr': 'fr_FR.ISO8859-1',
'fr_fr.88591': 'fr_FR.ISO8859-1',
'fr_fr.iso88591': 'fr_FR.ISO8859-1',
'fr_fr.iso885915': 'fr_FR.ISO8859-15',
'fr_fr.iso885915@euro': 'fr_FR.ISO8859-15',
'fr_fr.utf8@euro': 'fr_FR.UTF-8',
'fr_fr@euro': 'fr_FR.ISO8859-15',
'fr_lu': 'fr_LU.ISO8859-1',
'fr_lu.88591': 'fr_LU.ISO8859-1',
'fr_lu.iso88591': 'fr_LU.ISO8859-1',
'fr_lu.iso885915': 'fr_LU.ISO8859-15',
'fr_lu.iso885915@euro': 'fr_LU.ISO8859-15',
'fr_lu.utf8@euro': 'fr_LU.UTF-8',
'fr_lu@euro': 'fr_LU.ISO8859-15',
'fran\xe7ais': 'fr_FR.ISO8859-1',
'fre_fr': 'fr_FR.ISO8859-1',
'fre_fr.8859': 'fr_FR.ISO8859-1',
'french': 'fr_FR.ISO8859-1',
'french.iso88591': 'fr_CH.ISO8859-1',
'french_france': 'fr_FR.ISO8859-1',
'french_france.8859': 'fr_FR.ISO8859-1',
'fur_it': 'fur_IT.UTF-8',
'fy_de': 'fy_DE.UTF-8',
'fy_nl': 'fy_NL.UTF-8',
'ga': 'ga_IE.ISO8859-1',
'ga_ie': 'ga_IE.ISO8859-1',
'ga_ie.iso88591': 'ga_IE.ISO8859-1',
'ga_ie.iso885914': 'ga_IE.ISO8859-14',
'ga_ie.iso885915': 'ga_IE.ISO8859-15',
'ga_ie.iso885915@euro': 'ga_IE.ISO8859-15',
'ga_ie.utf8@euro': 'ga_IE.UTF-8',
'ga_ie@euro': 'ga_IE.ISO8859-15',
'galego': 'gl_ES.ISO8859-1',
'galician': 'gl_ES.ISO8859-1',
'gd': 'gd_GB.ISO8859-1',
'gd_gb': 'gd_GB.ISO8859-1',
'gd_gb.iso88591': 'gd_GB.ISO8859-1',
'gd_gb.iso885914': 'gd_GB.ISO8859-14',
'gd_gb.iso885915': 'gd_GB.ISO8859-15',
'gd_gb@euro': 'gd_GB.ISO8859-15',
'ger_de': 'de_DE.ISO8859-1',
'ger_de.8859': 'de_DE.ISO8859-1',
'german': 'de_DE.ISO8859-1',
'german.iso88591': 'de_CH.ISO8859-1',
'german_germany': 'de_DE.ISO8859-1',
'german_germany.8859': 'de_DE.ISO8859-1',
'gez_er': 'gez_ER.UTF-8',
'gez_et': 'gez_ET.UTF-8',
'gl': 'gl_ES.ISO8859-1',
'gl_es': 'gl_ES.ISO8859-1',
'gl_es.iso88591': 'gl_ES.ISO8859-1',
'gl_es.iso885915': 'gl_ES.ISO8859-15',
'gl_es.iso885915@euro': 'gl_ES.ISO8859-15',
'gl_es.utf8@euro': 'gl_ES.UTF-8',
'gl_es@euro': 'gl_ES.ISO8859-15',
'greek': 'el_GR.ISO8859-7',
'greek.iso88597': 'el_GR.ISO8859-7',
'gu_in': 'gu_IN.UTF-8',
'gv': 'gv_GB.ISO8859-1',
'gv_gb': 'gv_GB.ISO8859-1',
'gv_gb.iso88591': 'gv_GB.ISO8859-1',
'gv_gb.iso885914': 'gv_GB.ISO8859-14',
'gv_gb.iso885915': 'gv_GB.ISO8859-15',
'gv_gb@euro': 'gv_GB.ISO8859-15',
'ha_ng': 'ha_NG.UTF-8',
'he': 'he_IL.ISO8859-8',
'he_il': 'he_IL.ISO8859-8',
'he_il.cp1255': 'he_IL.CP1255',
'he_il.iso88598': 'he_IL.ISO8859-8',
'he_il.microsoftcp1255': 'he_IL.CP1255',
'hebrew': 'he_IL.ISO8859-8',
'hebrew.iso88598': 'he_IL.ISO8859-8',
'hi': 'hi_IN.ISCII-DEV',
'hi_in': 'hi_IN.ISCII-DEV',
'hi_in.isciidev': 'hi_IN.ISCII-DEV',
'hne': 'hne_IN.UTF-8',
'hne_in': 'hne_IN.UTF-8',
'hr': 'hr_HR.ISO8859-2',
'hr_hr': 'hr_HR.ISO8859-2',
'hr_hr.iso88592': 'hr_HR.ISO8859-2',
'hrvatski': 'hr_HR.ISO8859-2',
'hsb_de': 'hsb_DE.ISO8859-2',
'ht_ht': 'ht_HT.UTF-8',
'hu': 'hu_HU.ISO8859-2',
'hu_hu': 'hu_HU.ISO8859-2',
'hu_hu.iso88592': 'hu_HU.ISO8859-2',
'hungarian': 'hu_HU.ISO8859-2',
'hy_am': 'hy_AM.UTF-8',
'hy_am.armscii8': 'hy_AM.ARMSCII_8',
'ia': 'ia.UTF-8',
'ia_fr': 'ia_FR.UTF-8',
'icelandic': 'is_IS.ISO8859-1',
'icelandic.iso88591': 'is_IS.ISO8859-1',
'id': 'id_ID.ISO8859-1',
'id_id': 'id_ID.ISO8859-1',
'ig_ng': 'ig_NG.UTF-8',
'ik_ca': 'ik_CA.UTF-8',
'in': 'id_ID.ISO8859-1',
'in_id': 'id_ID.ISO8859-1',
'is': 'is_IS.ISO8859-1',
'is_is': 'is_IS.ISO8859-1',
'is_is.iso88591': 'is_IS.ISO8859-1',
'is_is.iso885915': 'is_IS.ISO8859-15',
'is_is@euro': 'is_IS.ISO8859-15',
'iso-8859-1': 'en_US.ISO8859-1',
'iso-8859-15': 'en_US.ISO8859-15',
'iso8859-1': 'en_US.ISO8859-1',
'iso8859-15': 'en_US.ISO8859-15',
'iso_8859_1': 'en_US.ISO8859-1',
'iso_8859_15': 'en_US.ISO8859-15',
'it': 'it_IT.ISO8859-1',
'it.iso885915': 'it_IT.ISO8859-15',
'it_ch': 'it_CH.ISO8859-1',
'it_ch.iso88591': 'it_CH.ISO8859-1',
'it_ch.iso885915': 'it_CH.ISO8859-15',
'it_ch@euro': 'it_CH.ISO8859-15',
'it_it': 'it_IT.ISO8859-1',
'it_it.88591': 'it_IT.ISO8859-1',
'it_it.iso88591': 'it_IT.ISO8859-1',
'it_it.iso885915': 'it_IT.ISO8859-15',
'it_it.iso885915@euro': 'it_IT.ISO8859-15',
'it_it.utf8@euro': 'it_IT.UTF-8',
'it_it@euro': 'it_IT.ISO8859-15',
'italian': 'it_IT.ISO8859-1',
'italian.iso88591': 'it_IT.ISO8859-1',
'iu': 'iu_CA.NUNACOM-8',
'iu_ca': 'iu_CA.NUNACOM-8',
'iu_ca.nunacom8': 'iu_CA.NUNACOM-8',
'iw': 'he_IL.ISO8859-8',
'iw_il': 'he_IL.ISO8859-8',
'iw_il.iso88598': 'he_IL.ISO8859-8',
'iw_il.utf8': 'iw_IL.UTF-8',
'ja': 'ja_JP.eucJP',
'ja.jis': 'ja_JP.JIS7',
'ja.sjis': 'ja_JP.SJIS',
'ja_jp': 'ja_JP.eucJP',
'ja_jp.ajec': 'ja_JP.eucJP',
'ja_jp.euc': 'ja_JP.eucJP',
'ja_jp.eucjp': 'ja_JP.eucJP',
'ja_jp.iso-2022-jp': 'ja_JP.JIS7',
'ja_jp.iso2022jp': 'ja_JP.JIS7',
'ja_jp.jis': 'ja_JP.JIS7',
'ja_jp.jis7': 'ja_JP.JIS7',
'ja_jp.mscode': 'ja_JP.SJIS',
'ja_jp.pck': 'ja_JP.SJIS',
'ja_jp.sjis': 'ja_JP.SJIS',
'ja_jp.ujis': 'ja_JP.eucJP',
'japan': 'ja_JP.eucJP',
'japanese': 'ja_JP.eucJP',
'japanese-euc': 'ja_JP.eucJP',
'japanese.euc': 'ja_JP.eucJP',
'japanese.sjis': 'ja_JP.SJIS',
'jp_jp': 'ja_JP.eucJP',
'ka': 'ka_GE.GEORGIAN-ACADEMY',
'ka_ge': 'ka_GE.GEORGIAN-ACADEMY',
'ka_ge.georgianacademy': 'ka_GE.GEORGIAN-ACADEMY',
'ka_ge.georgianps': 'ka_GE.GEORGIAN-PS',
'ka_ge.georgianrs': 'ka_GE.GEORGIAN-ACADEMY',
'kk_kz': 'kk_KZ.RK1048',
'kl': 'kl_GL.ISO8859-1',
'kl_gl': 'kl_GL.ISO8859-1',
'kl_gl.iso88591': 'kl_GL.ISO8859-1',
'kl_gl.iso885915': 'kl_GL.ISO8859-15',
'kl_gl@euro': 'kl_GL.ISO8859-15',
'km_kh': 'km_KH.UTF-8',
'kn': 'kn_IN.UTF-8',
'kn_in': 'kn_IN.UTF-8',
'ko': 'ko_KR.eucKR',
'ko_kr': 'ko_KR.eucKR',
'ko_kr.euc': 'ko_KR.eucKR',
'ko_kr.euckr': 'ko_KR.eucKR',
'kok_in': 'kok_IN.UTF-8',
'korean': 'ko_KR.eucKR',
'korean.euc': 'ko_KR.eucKR',
'ks': 'ks_IN.UTF-8',
'ks_in': 'ks_IN.UTF-8',
'ks_in@devanagari': 'ks_IN.UTF-8@devanagari',
'ks_in@devanagari.utf8': 'ks_IN.UTF-8@devanagari',
'ku_tr': 'ku_TR.ISO8859-9',
'kw': 'kw_GB.ISO8859-1',
'kw_gb': 'kw_GB.ISO8859-1',
'kw_gb.iso88591': 'kw_GB.ISO8859-1',
'kw_gb.iso885914': 'kw_GB.ISO8859-14',
'kw_gb.iso885915': 'kw_GB.ISO8859-15',
'kw_gb@euro': 'kw_GB.ISO8859-15',
'ky': 'ky_KG.UTF-8',
'ky_kg': 'ky_KG.UTF-8',
'lb_lu': 'lb_LU.UTF-8',
'lg_ug': 'lg_UG.ISO8859-10',
'li_be': 'li_BE.UTF-8',
'li_nl': 'li_NL.UTF-8',
'lij_it': 'lij_IT.UTF-8',
'lithuanian': 'lt_LT.ISO8859-13',
'lo': 'lo_LA.MULELAO-1',
'lo_la': 'lo_LA.MULELAO-1',
'lo_la.cp1133': 'lo_LA.IBM-CP1133',
'lo_la.ibmcp1133': 'lo_LA.IBM-CP1133',
'lo_la.mulelao1': 'lo_LA.MULELAO-1',
'lt': 'lt_LT.ISO8859-13',
'lt_lt': 'lt_LT.ISO8859-13',
'lt_lt.iso885913': 'lt_LT.ISO8859-13',
'lt_lt.iso88594': 'lt_LT.ISO8859-4',
'lv': 'lv_LV.ISO8859-13',
'lv_lv': 'lv_LV.ISO8859-13',
'lv_lv.iso885913': 'lv_LV.ISO8859-13',
'lv_lv.iso88594': 'lv_LV.ISO8859-4',
'mag_in': 'mag_IN.UTF-8',
'mai': 'mai_IN.UTF-8',
'mai_in': 'mai_IN.UTF-8',
'mg_mg': 'mg_MG.ISO8859-15',
'mhr_ru': 'mhr_RU.UTF-8',
'mi': 'mi_NZ.ISO8859-1',
'mi_nz': 'mi_NZ.ISO8859-1',
'mi_nz.iso88591': 'mi_NZ.ISO8859-1',
'mk': 'mk_MK.ISO8859-5',
'mk_mk': 'mk_MK.ISO8859-5',
'mk_mk.cp1251': 'mk_MK.CP1251',
'mk_mk.iso88595': 'mk_MK.ISO8859-5',
'mk_mk.microsoftcp1251': 'mk_MK.CP1251',
'ml': 'ml_IN.UTF-8',
'ml_in': 'ml_IN.UTF-8',
'mn_mn': 'mn_MN.UTF-8',
'mni_in': 'mni_IN.UTF-8',
'mr': 'mr_IN.UTF-8',
'mr_in': 'mr_IN.UTF-8',
'ms': 'ms_MY.ISO8859-1',
'ms_my': 'ms_MY.ISO8859-1',
'ms_my.iso88591': 'ms_MY.ISO8859-1',
'mt': 'mt_MT.ISO8859-3',
'mt_mt': 'mt_MT.ISO8859-3',
'mt_mt.iso88593': 'mt_MT.ISO8859-3',
'my_mm': 'my_MM.UTF-8',
'nan_tw@latin': 'nan_TW.UTF-8@latin',
'nb': 'nb_NO.ISO8859-1',
'nb_no': 'nb_NO.ISO8859-1',
'nb_no.88591': 'nb_NO.ISO8859-1',
'nb_no.iso88591': 'nb_NO.ISO8859-1',
'nb_no.iso885915': 'nb_NO.ISO8859-15',
'nb_no@euro': 'nb_NO.ISO8859-15',
'nds_de': 'nds_DE.UTF-8',
'nds_nl': 'nds_NL.UTF-8',
'ne_np': 'ne_NP.UTF-8',
'nhn_mx': 'nhn_MX.UTF-8',
'niu_nu': 'niu_NU.UTF-8',
'niu_nz': 'niu_NZ.UTF-8',
'nl': 'nl_NL.ISO8859-1',
'nl.iso885915': 'nl_NL.ISO8859-15',
'nl_aw': 'nl_AW.UTF-8',
'nl_be': 'nl_BE.ISO8859-1',
'nl_be.88591': 'nl_BE.ISO8859-1',
'nl_be.iso88591': 'nl_BE.ISO8859-1',
'nl_be.iso885915': 'nl_BE.ISO8859-15',
'nl_be.iso885915@euro': 'nl_BE.ISO8859-15',
'nl_be.utf8@euro': 'nl_BE.UTF-8',
'nl_be@euro': 'nl_BE.ISO8859-15',
'nl_nl': 'nl_NL.ISO8859-1',
'nl_nl.88591': 'nl_NL.ISO8859-1',
'nl_nl.iso88591': 'nl_NL.ISO8859-1',
'nl_nl.iso885915': 'nl_NL.ISO8859-15',
'nl_nl.iso885915@euro': 'nl_NL.ISO8859-15',
'nl_nl.utf8@euro': 'nl_NL.UTF-8',
'nl_nl@euro': 'nl_NL.ISO8859-15',
'nn': 'nn_NO.ISO8859-1',
'nn_no': 'nn_NO.ISO8859-1',
'nn_no.88591': 'nn_NO.ISO8859-1',
'nn_no.iso88591': 'nn_NO.ISO8859-1',
'nn_no.iso885915': 'nn_NO.ISO8859-15',
'nn_no@euro': 'nn_NO.ISO8859-15',
'no': 'no_NO.ISO8859-1',
'no@nynorsk': 'ny_NO.ISO8859-1',
'no_no': 'no_NO.ISO8859-1',
'no_no.88591': 'no_NO.ISO8859-1',
'no_no.iso88591': 'no_NO.ISO8859-1',
'no_no.iso885915': 'no_NO.ISO8859-15',
'no_no.iso88591@bokmal': 'no_NO.ISO8859-1',
'no_no.iso88591@nynorsk': 'no_NO.ISO8859-1',
'no_no@euro': 'no_NO.ISO8859-15',
'norwegian': 'no_NO.ISO8859-1',
'norwegian.iso88591': 'no_NO.ISO8859-1',
'nr': 'nr_ZA.ISO8859-1',
'nr_za': 'nr_ZA.ISO8859-1',
'nr_za.iso88591': 'nr_ZA.ISO8859-1',
'nso': 'nso_ZA.ISO8859-15',
'nso_za': 'nso_ZA.ISO8859-15',
'nso_za.iso885915': 'nso_ZA.ISO8859-15',
'ny': 'ny_NO.ISO8859-1',
'ny_no': 'ny_NO.ISO8859-1',
'ny_no.88591': 'ny_NO.ISO8859-1',
'ny_no.iso88591': 'ny_NO.ISO8859-1',
'ny_no.iso885915': 'ny_NO.ISO8859-15',
'ny_no@euro': 'ny_NO.ISO8859-15',
'nynorsk': 'nn_NO.ISO8859-1',
'oc': 'oc_FR.ISO8859-1',
'oc_fr': 'oc_FR.ISO8859-1',
'oc_fr.iso88591': 'oc_FR.ISO8859-1',
'oc_fr.iso885915': 'oc_FR.ISO8859-15',
'oc_fr@euro': 'oc_FR.ISO8859-15',
'om_et': 'om_ET.UTF-8',
'om_ke': 'om_KE.ISO8859-1',
'or': 'or_IN.UTF-8',
'or_in': 'or_IN.UTF-8',
'os_ru': 'os_RU.UTF-8',
'pa': 'pa_IN.UTF-8',
'pa_in': 'pa_IN.UTF-8',
'pa_pk': 'pa_PK.UTF-8',
'pap_an': 'pap_AN.UTF-8',
'pd': 'pd_US.ISO8859-1',
'pd_de': 'pd_DE.ISO8859-1',
'pd_de.iso88591': 'pd_DE.ISO8859-1',
'pd_de.iso885915': 'pd_DE.ISO8859-15',
'pd_de@euro': 'pd_DE.ISO8859-15',
'pd_us': 'pd_US.ISO8859-1',
'pd_us.iso88591': 'pd_US.ISO8859-1',
'pd_us.iso885915': 'pd_US.ISO8859-15',
'pd_us@euro': 'pd_US.ISO8859-15',
'ph': 'ph_PH.ISO8859-1',
'ph_ph': 'ph_PH.ISO8859-1',
'ph_ph.iso88591': 'ph_PH.ISO8859-1',
'pl': 'pl_PL.ISO8859-2',
'pl_pl': 'pl_PL.ISO8859-2',
'pl_pl.iso88592': 'pl_PL.ISO8859-2',
'polish': 'pl_PL.ISO8859-2',
'portuguese': 'pt_PT.ISO8859-1',
'portuguese.iso88591': 'pt_PT.ISO8859-1',
'portuguese_brazil': 'pt_BR.ISO8859-1',
'portuguese_brazil.8859': 'pt_BR.ISO8859-1',
'posix': 'C',
'posix-utf2': 'C',
'pp': 'pp_AN.ISO8859-1',
'pp_an': 'pp_AN.ISO8859-1',
'pp_an.iso88591': 'pp_AN.ISO8859-1',
'ps_af': 'ps_AF.UTF-8',
'pt': 'pt_PT.ISO8859-1',
'pt.iso885915': 'pt_PT.ISO8859-15',
'pt_br': 'pt_BR.ISO8859-1',
'pt_br.88591': 'pt_BR.ISO8859-1',
'pt_br.iso88591': 'pt_BR.ISO8859-1',
'pt_br.iso885915': 'pt_BR.ISO8859-15',
'pt_br@euro': 'pt_BR.ISO8859-15',
'pt_pt': 'pt_PT.ISO8859-1',
'pt_pt.88591': 'pt_PT.ISO8859-1',
'pt_pt.iso88591': 'pt_PT.ISO8859-1',
'pt_pt.iso885915': 'pt_PT.ISO8859-15',
'pt_pt.iso885915@euro': 'pt_PT.ISO8859-15',
'pt_pt.utf8@euro': 'pt_PT.UTF-8',
'pt_pt@euro': 'pt_PT.ISO8859-15',
'ro': 'ro_RO.ISO8859-2',
'ro_ro': 'ro_RO.ISO8859-2',
'ro_ro.iso88592': 'ro_RO.ISO8859-2',
'romanian': 'ro_RO.ISO8859-2',
'ru': 'ru_RU.UTF-8',
'ru.koi8r': 'ru_RU.KOI8-R',
'ru_ru': 'ru_RU.UTF-8',
'ru_ru.cp1251': 'ru_RU.CP1251',
'ru_ru.iso88595': 'ru_RU.ISO8859-5',
'ru_ru.koi8r': 'ru_RU.KOI8-R',
'ru_ru.microsoftcp1251': 'ru_RU.CP1251',
'ru_ua': 'ru_UA.KOI8-U',
'ru_ua.cp1251': 'ru_UA.CP1251',
'ru_ua.koi8u': 'ru_UA.KOI8-U',
'ru_ua.microsoftcp1251': 'ru_UA.CP1251',
'rumanian': 'ro_RO.ISO8859-2',
'russian': 'ru_RU.ISO8859-5',
'rw': 'rw_RW.ISO8859-1',
'rw_rw': 'rw_RW.ISO8859-1',
'rw_rw.iso88591': 'rw_RW.ISO8859-1',
'sa_in': 'sa_IN.UTF-8',
'sat_in': 'sat_IN.UTF-8',
'sc_it': 'sc_IT.UTF-8',
'sd': 'sd_IN.UTF-8',
'sd@devanagari': 'sd_IN.UTF-8@devanagari',
'sd_in': 'sd_IN.UTF-8',
'sd_in@devanagari': 'sd_IN.UTF-8@devanagari',
'sd_in@devanagari.utf8': 'sd_IN.UTF-8@devanagari',
'sd_pk': 'sd_PK.UTF-8',
'se_no': 'se_NO.UTF-8',
'serbocroatian': 'sr_RS.UTF-8@latin',
'sh': 'sr_RS.UTF-8@latin',
'sh_ba.iso88592@bosnia': 'sr_CS.ISO8859-2',
'sh_hr': 'sh_HR.ISO8859-2',
'sh_hr.iso88592': 'hr_HR.ISO8859-2',
'sh_sp': 'sr_CS.ISO8859-2',
'sh_yu': 'sr_RS.UTF-8@latin',
'shs_ca': 'shs_CA.UTF-8',
'si': 'si_LK.UTF-8',
'si_lk': 'si_LK.UTF-8',
'sid_et': 'sid_ET.UTF-8',
'sinhala': 'si_LK.UTF-8',
'sk': 'sk_SK.ISO8859-2',
'sk_sk': 'sk_SK.ISO8859-2',
'sk_sk.iso88592': 'sk_SK.ISO8859-2',
'sl': 'sl_SI.ISO8859-2',
'sl_cs': 'sl_CS.ISO8859-2',
'sl_si': 'sl_SI.ISO8859-2',
'sl_si.iso88592': 'sl_SI.ISO8859-2',
'slovak': 'sk_SK.ISO8859-2',
'slovene': 'sl_SI.ISO8859-2',
'slovenian': 'sl_SI.ISO8859-2',
'so_dj': 'so_DJ.ISO8859-1',
'so_et': 'so_ET.UTF-8',
'so_ke': 'so_KE.ISO8859-1',
'so_so': 'so_SO.ISO8859-1',
'sp': 'sr_CS.ISO8859-5',
'sp_yu': 'sr_CS.ISO8859-5',
'spanish': 'es_ES.ISO8859-1',
'spanish.iso88591': 'es_ES.ISO8859-1',
'spanish_spain': 'es_ES.ISO8859-1',
'spanish_spain.8859': 'es_ES.ISO8859-1',
'sq': 'sq_AL.ISO8859-2',
'sq_al': 'sq_AL.ISO8859-2',
'sq_al.iso88592': 'sq_AL.ISO8859-2',
'sq_mk': 'sq_MK.UTF-8',
'sr': 'sr_RS.UTF-8',
'sr@cyrillic': 'sr_RS.UTF-8',
'sr@latin': 'sr_RS.UTF-8@latin',
'sr@latn': 'sr_CS.UTF-8@latin',
'sr_cs': 'sr_CS.UTF-8',
'sr_cs.iso88592': 'sr_CS.ISO8859-2',
'sr_cs.iso88592@latn': 'sr_CS.ISO8859-2',
'sr_cs.iso88595': 'sr_CS.ISO8859-5',
'sr_cs.utf8@latn': 'sr_CS.UTF-8@latin',
'sr_cs@latn': 'sr_CS.UTF-8@latin',
'sr_me': 'sr_ME.UTF-8',
'sr_rs': 'sr_RS.UTF-8',
'sr_rs@latin': 'sr_RS.UTF-8@latin',
'sr_rs@latn': 'sr_RS.UTF-8@latin',
'sr_sp': 'sr_CS.ISO8859-2',
'sr_yu': 'sr_RS.UTF-8@latin',
'sr_yu.cp1251@cyrillic': 'sr_CS.CP1251',
'sr_yu.iso88592': 'sr_CS.ISO8859-2',
'sr_yu.iso88595': 'sr_CS.ISO8859-5',
'sr_yu.iso88595@cyrillic': 'sr_CS.ISO8859-5',
'sr_yu.microsoftcp1251@cyrillic': 'sr_CS.CP1251',
'sr_yu.utf8': 'sr_RS.UTF-8',
'sr_yu.utf8@cyrillic': 'sr_RS.UTF-8',
'sr_yu@cyrillic': 'sr_RS.UTF-8',
'ss': 'ss_ZA.ISO8859-1',
'ss_za': 'ss_ZA.ISO8859-1',
'ss_za.iso88591': 'ss_ZA.ISO8859-1',
'st': 'st_ZA.ISO8859-1',
'st_za': 'st_ZA.ISO8859-1',
'st_za.iso88591': 'st_ZA.ISO8859-1',
'sv': 'sv_SE.ISO8859-1',
'sv.iso885915': 'sv_SE.ISO8859-15',
'sv_fi': 'sv_FI.ISO8859-1',
'sv_fi.iso88591': 'sv_FI.ISO8859-1',
'sv_fi.iso885915': 'sv_FI.ISO8859-15',
'sv_fi.iso885915@euro': 'sv_FI.ISO8859-15',
'sv_fi.utf8@euro': 'sv_FI.UTF-8',
'sv_fi@euro': 'sv_FI.ISO8859-15',
'sv_se': 'sv_SE.ISO8859-1',
'sv_se.88591': 'sv_SE.ISO8859-1',
'sv_se.iso88591': 'sv_SE.ISO8859-1',
'sv_se.iso885915': 'sv_SE.ISO8859-15',
'sv_se@euro': 'sv_SE.ISO8859-15',
'sw_ke': 'sw_KE.UTF-8',
'sw_tz': 'sw_TZ.UTF-8',
'swedish': 'sv_SE.ISO8859-1',
'swedish.iso88591': 'sv_SE.ISO8859-1',
'szl_pl': 'szl_PL.UTF-8',
'ta': 'ta_IN.TSCII-0',
'ta_in': 'ta_IN.TSCII-0',
'ta_in.tscii': 'ta_IN.TSCII-0',
'ta_in.tscii0': 'ta_IN.TSCII-0',
'ta_lk': 'ta_LK.UTF-8',
'te': 'te_IN.UTF-8',
'te_in': 'te_IN.UTF-8',
'tg': 'tg_TJ.KOI8-C',
'tg_tj': 'tg_TJ.KOI8-C',
'tg_tj.koi8c': 'tg_TJ.KOI8-C',
'th': 'th_TH.ISO8859-11',
'th_th': 'th_TH.ISO8859-11',
'th_th.iso885911': 'th_TH.ISO8859-11',
'th_th.tactis': 'th_TH.TIS620',
'th_th.tis620': 'th_TH.TIS620',
'thai': 'th_TH.ISO8859-11',
'ti_er': 'ti_ER.UTF-8',
'ti_et': 'ti_ET.UTF-8',
'tig_er': 'tig_ER.UTF-8',
'tk_tm': 'tk_TM.UTF-8',
'tl': 'tl_PH.ISO8859-1',
'tl_ph': 'tl_PH.ISO8859-1',
'tl_ph.iso88591': 'tl_PH.ISO8859-1',
'tn': 'tn_ZA.ISO8859-15',
'tn_za': 'tn_ZA.ISO8859-15',
'tn_za.iso885915': 'tn_ZA.ISO8859-15',
'tr': 'tr_TR.ISO8859-9',
'tr_cy': 'tr_CY.ISO8859-9',
'tr_tr': 'tr_TR.ISO8859-9',
'tr_tr.iso88599': 'tr_TR.ISO8859-9',
'ts': 'ts_ZA.ISO8859-1',
'ts_za': 'ts_ZA.ISO8859-1',
'ts_za.iso88591': 'ts_ZA.ISO8859-1',
'tt': 'tt_RU.TATAR-CYR',
'tt_ru': 'tt_RU.TATAR-CYR',
'tt_ru.koi8c': 'tt_RU.KOI8-C',
'tt_ru.tatarcyr': 'tt_RU.TATAR-CYR',
'tt_ru@iqtelif': 'tt_RU.UTF-8@iqtelif',
'turkish': 'tr_TR.ISO8859-9',
'turkish.iso88599': 'tr_TR.ISO8859-9',
'ug_cn': 'ug_CN.UTF-8',
'uk': 'uk_UA.KOI8-U',
'uk_ua': 'uk_UA.KOI8-U',
'uk_ua.cp1251': 'uk_UA.CP1251',
'uk_ua.iso88595': 'uk_UA.ISO8859-5',
'uk_ua.koi8u': 'uk_UA.KOI8-U',
'uk_ua.microsoftcp1251': 'uk_UA.CP1251',
'univ': 'en_US.utf',
'universal': 'en_US.utf',
'universal.utf8@ucs4': 'en_US.UTF-8',
'unm_us': 'unm_US.UTF-8',
'ur': 'ur_PK.CP1256',
'ur_in': 'ur_IN.UTF-8',
'ur_pk': 'ur_PK.CP1256',
'ur_pk.cp1256': 'ur_PK.CP1256',
'ur_pk.microsoftcp1256': 'ur_PK.CP1256',
'uz': 'uz_UZ.UTF-8',
'uz_uz': 'uz_UZ.UTF-8',
'uz_uz.iso88591': 'uz_UZ.ISO8859-1',
'uz_uz.utf8@cyrillic': 'uz_UZ.UTF-8',
'uz_uz@cyrillic': 'uz_UZ.UTF-8',
've': 've_ZA.UTF-8',
've_za': 've_ZA.UTF-8',
'vi': 'vi_VN.TCVN',
'vi_vn': 'vi_VN.TCVN',
'vi_vn.tcvn': 'vi_VN.TCVN',
'vi_vn.tcvn5712': 'vi_VN.TCVN',
'vi_vn.viscii': 'vi_VN.VISCII',
'vi_vn.viscii111': 'vi_VN.VISCII',
'wa': 'wa_BE.ISO8859-1',
'wa_be': 'wa_BE.ISO8859-1',
'wa_be.iso88591': 'wa_BE.ISO8859-1',
'wa_be.iso885915': 'wa_BE.ISO8859-15',
'wa_be.iso885915@euro': 'wa_BE.ISO8859-15',
'wa_be@euro': 'wa_BE.ISO8859-15',
'wae_ch': 'wae_CH.UTF-8',
'wal_et': 'wal_ET.UTF-8',
'wo_sn': 'wo_SN.UTF-8',
'xh': 'xh_ZA.ISO8859-1',
'xh_za': 'xh_ZA.ISO8859-1',
'xh_za.iso88591': 'xh_ZA.ISO8859-1',
'yi': 'yi_US.CP1255',
'yi_us': 'yi_US.CP1255',
'yi_us.cp1255': 'yi_US.CP1255',
'yi_us.microsoftcp1255': 'yi_US.CP1255',
'yo_ng': 'yo_NG.UTF-8',
'yue_hk': 'yue_HK.UTF-8',
'zh': 'zh_CN.eucCN',
'zh_cn': 'zh_CN.gb2312',
'zh_cn.big5': 'zh_TW.big5',
'zh_cn.euc': 'zh_CN.eucCN',
'zh_cn.gb18030': 'zh_CN.gb18030',
'zh_cn.gb2312': 'zh_CN.gb2312',
'zh_cn.gbk': 'zh_CN.gbk',
'zh_hk': 'zh_HK.big5hkscs',
'zh_hk.big5': 'zh_HK.big5',
'zh_hk.big5hk': 'zh_HK.big5hkscs',
'zh_hk.big5hkscs': 'zh_HK.big5hkscs',
'zh_sg': 'zh_SG.GB2312',
'zh_sg.gbk': 'zh_SG.GBK',
'zh_tw': 'zh_TW.big5',
'zh_tw.big5': 'zh_TW.big5',
'zh_tw.euc': 'zh_TW.eucTW',
'zh_tw.euctw': 'zh_TW.eucTW',
'zu': 'zu_ZA.ISO8859-1',
'zu_za': 'zu_ZA.ISO8859-1',
'zu_za.iso88591': 'zu_ZA.ISO8859-1',
}
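The alias table above is keyed by lowercased, already-normalized names. A standalone sketch of the lookup step it supports (a tiny excerpt of the table so the snippet runs on its own; the real `locale.normalize` additionally splits off and normalizes the encoding suffix):

```python
# Excerpt of the alias table above; keys are lowercase alias strings.
LOCALE_ALIAS_EXCERPT = {
    'polish': 'pl_PL.ISO8859-2',
    'pt_br': 'pt_BR.ISO8859-1',
    'nl_be@euro': 'nl_BE.ISO8859-15',
}

def normalize_simple(localename):
    """Resolve an alias to a full locale name, or return the input unchanged."""
    # the real normalize() lowercases the name before the table lookup
    return LOCALE_ALIAS_EXCERPT.get(localename.lower(), localename)
```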
#
# This maps Windows language identifiers to locale strings.
#
# This list has been updated from
# http://msdn.microsoft.com/library/default.asp?url=/library/en-us/intl/nls_238z.asp
# to include every locale up to Windows Vista.
#
# NOTE: this mapping is incomplete. If your language is missing, please
# submit a bug report to the Python bug tracker at http://bugs.python.org/
# Make sure you include the missing language identifier and the suggested
# locale code.
#
windows_locale = {
0x0436: "af_ZA", # Afrikaans
0x041c: "sq_AL", # Albanian
0x0484: "gsw_FR",# Alsatian - France
0x045e: "am_ET", # Amharic - Ethiopia
0x0401: "ar_SA", # Arabic - Saudi Arabia
0x0801: "ar_IQ", # Arabic - Iraq
0x0c01: "ar_EG", # Arabic - Egypt
0x1001: "ar_LY", # Arabic - Libya
0x1401: "ar_DZ", # Arabic - Algeria
0x1801: "ar_MA", # Arabic - Morocco
0x1c01: "ar_TN", # Arabic - Tunisia
0x2001: "ar_OM", # Arabic - Oman
0x2401: "ar_YE", # Arabic - Yemen
0x2801: "ar_SY", # Arabic - Syria
0x2c01: "ar_JO", # Arabic - Jordan
0x3001: "ar_LB", # Arabic - Lebanon
0x3401: "ar_KW", # Arabic - Kuwait
0x3801: "ar_AE", # Arabic - United Arab Emirates
0x3c01: "ar_BH", # Arabic - Bahrain
0x4001: "ar_QA", # Arabic - Qatar
0x042b: "hy_AM", # Armenian
0x044d: "as_IN", # Assamese - India
0x042c: "az_AZ", # Azeri - Latin
0x082c: "az_AZ", # Azeri - Cyrillic
0x046d: "ba_RU", # Bashkir
    0x042d: "eu_ES", # Basque - Spain
0x0423: "be_BY", # Belarusian
    0x0445: "bn_IN", # Bengali
0x201a: "bs_BA", # Bosnian - Cyrillic
0x141a: "bs_BA", # Bosnian - Latin
0x047e: "br_FR", # Breton - France
0x0402: "bg_BG", # Bulgarian
# 0x0455: "my_MM", # Burmese - Not supported
0x0403: "ca_ES", # Catalan
0x0004: "zh_CHS",# Chinese - Simplified
0x0404: "zh_TW", # Chinese - Taiwan
0x0804: "zh_CN", # Chinese - PRC
0x0c04: "zh_HK", # Chinese - Hong Kong S.A.R.
0x1004: "zh_SG", # Chinese - Singapore
0x1404: "zh_MO", # Chinese - Macao S.A.R.
0x7c04: "zh_CHT",# Chinese - Traditional
0x0483: "co_FR", # Corsican - France
0x041a: "hr_HR", # Croatian
0x101a: "hr_BA", # Croatian - Bosnia
0x0405: "cs_CZ", # Czech
0x0406: "da_DK", # Danish
0x048c: "gbz_AF",# Dari - Afghanistan
0x0465: "div_MV",# Divehi - Maldives
0x0413: "nl_NL", # Dutch - The Netherlands
0x0813: "nl_BE", # Dutch - Belgium
0x0409: "en_US", # English - United States
0x0809: "en_GB", # English - United Kingdom
0x0c09: "en_AU", # English - Australia
0x1009: "en_CA", # English - Canada
0x1409: "en_NZ", # English - New Zealand
0x1809: "en_IE", # English - Ireland
0x1c09: "en_ZA", # English - South Africa
0x2009: "en_JA", # English - Jamaica
0x2409: "en_CB", # English - Caribbean
0x2809: "en_BZ", # English - Belize
0x2c09: "en_TT", # English - Trinidad
0x3009: "en_ZW", # English - Zimbabwe
0x3409: "en_PH", # English - Philippines
0x4009: "en_IN", # English - India
0x4409: "en_MY", # English - Malaysia
0x4809: "en_IN", # English - Singapore
0x0425: "et_EE", # Estonian
0x0438: "fo_FO", # Faroese
0x0464: "fil_PH",# Filipino
0x040b: "fi_FI", # Finnish
0x040c: "fr_FR", # French - France
0x080c: "fr_BE", # French - Belgium
0x0c0c: "fr_CA", # French - Canada
0x100c: "fr_CH", # French - Switzerland
0x140c: "fr_LU", # French - Luxembourg
0x180c: "fr_MC", # French - Monaco
0x0462: "fy_NL", # Frisian - Netherlands
0x0456: "gl_ES", # Galician
0x0437: "ka_GE", # Georgian
0x0407: "de_DE", # German - Germany
0x0807: "de_CH", # German - Switzerland
0x0c07: "de_AT", # German - Austria
0x1007: "de_LU", # German - Luxembourg
0x1407: "de_LI", # German - Liechtenstein
0x0408: "el_GR", # Greek
0x046f: "kl_GL", # Greenlandic - Greenland
0x0447: "gu_IN", # Gujarati
0x0468: "ha_NG", # Hausa - Latin
0x040d: "he_IL", # Hebrew
0x0439: "hi_IN", # Hindi
0x040e: "hu_HU", # Hungarian
0x040f: "is_IS", # Icelandic
0x0421: "id_ID", # Indonesian
0x045d: "iu_CA", # Inuktitut - Syllabics
0x085d: "iu_CA", # Inuktitut - Latin
0x083c: "ga_IE", # Irish - Ireland
0x0410: "it_IT", # Italian - Italy
0x0810: "it_CH", # Italian - Switzerland
0x0411: "ja_JP", # Japanese
0x044b: "kn_IN", # Kannada - India
0x043f: "kk_KZ", # Kazakh
0x0453: "kh_KH", # Khmer - Cambodia
0x0486: "qut_GT",# K'iche - Guatemala
0x0487: "rw_RW", # Kinyarwanda - Rwanda
0x0457: "kok_IN",# Konkani
0x0412: "ko_KR", # Korean
0x0440: "ky_KG", # Kyrgyz
0x0454: "lo_LA", # Lao - Lao PDR
0x0426: "lv_LV", # Latvian
0x0427: "lt_LT", # Lithuanian
0x082e: "dsb_DE",# Lower Sorbian - Germany
0x046e: "lb_LU", # Luxembourgish
0x042f: "mk_MK", # FYROM Macedonian
0x043e: "ms_MY", # Malay - Malaysia
0x083e: "ms_BN", # Malay - Brunei Darussalam
0x044c: "ml_IN", # Malayalam - India
0x043a: "mt_MT", # Maltese
0x0481: "mi_NZ", # Maori
0x047a: "arn_CL",# Mapudungun
0x044e: "mr_IN", # Marathi
0x047c: "moh_CA",# Mohawk - Canada
0x0450: "mn_MN", # Mongolian - Cyrillic
0x0850: "mn_CN", # Mongolian - PRC
0x0461: "ne_NP", # Nepali
0x0414: "nb_NO", # Norwegian - Bokmal
0x0814: "nn_NO", # Norwegian - Nynorsk
0x0482: "oc_FR", # Occitan - France
0x0448: "or_IN", # Oriya - India
0x0463: "ps_AF", # Pashto - Afghanistan
0x0429: "fa_IR", # Persian
0x0415: "pl_PL", # Polish
0x0416: "pt_BR", # Portuguese - Brazil
0x0816: "pt_PT", # Portuguese - Portugal
0x0446: "pa_IN", # Punjabi
0x046b: "quz_BO",# Quechua (Bolivia)
0x086b: "quz_EC",# Quechua (Ecuador)
0x0c6b: "quz_PE",# Quechua (Peru)
0x0418: "ro_RO", # Romanian - Romania
0x0417: "rm_CH", # Romansh
0x0419: "ru_RU", # Russian
0x243b: "smn_FI",# Sami Finland
0x103b: "smj_NO",# Sami Norway
0x143b: "smj_SE",# Sami Sweden
0x043b: "se_NO", # Sami Northern Norway
0x083b: "se_SE", # Sami Northern Sweden
0x0c3b: "se_FI", # Sami Northern Finland
0x203b: "sms_FI",# Sami Skolt
0x183b: "sma_NO",# Sami Southern Norway
0x1c3b: "sma_SE",# Sami Southern Sweden
0x044f: "sa_IN", # Sanskrit
0x0c1a: "sr_SP", # Serbian - Cyrillic
0x1c1a: "sr_BA", # Serbian - Bosnia Cyrillic
0x081a: "sr_SP", # Serbian - Latin
0x181a: "sr_BA", # Serbian - Bosnia Latin
0x045b: "si_LK", # Sinhala - Sri Lanka
0x046c: "ns_ZA", # Northern Sotho
0x0432: "tn_ZA", # Setswana - Southern Africa
0x041b: "sk_SK", # Slovak
0x0424: "sl_SI", # Slovenian
0x040a: "es_ES", # Spanish - Spain
0x080a: "es_MX", # Spanish - Mexico
0x0c0a: "es_ES", # Spanish - Spain (Modern)
0x100a: "es_GT", # Spanish - Guatemala
0x140a: "es_CR", # Spanish - Costa Rica
0x180a: "es_PA", # Spanish - Panama
0x1c0a: "es_DO", # Spanish - Dominican Republic
0x200a: "es_VE", # Spanish - Venezuela
0x240a: "es_CO", # Spanish - Colombia
0x280a: "es_PE", # Spanish - Peru
0x2c0a: "es_AR", # Spanish - Argentina
0x300a: "es_EC", # Spanish - Ecuador
0x340a: "es_CL", # Spanish - Chile
0x380a: "es_UR", # Spanish - Uruguay
0x3c0a: "es_PY", # Spanish - Paraguay
0x400a: "es_BO", # Spanish - Bolivia
0x440a: "es_SV", # Spanish - El Salvador
0x480a: "es_HN", # Spanish - Honduras
0x4c0a: "es_NI", # Spanish - Nicaragua
0x500a: "es_PR", # Spanish - Puerto Rico
0x540a: "es_US", # Spanish - United States
# 0x0430: "", # Sutu - Not supported
0x0441: "sw_KE", # Swahili
0x041d: "sv_SE", # Swedish - Sweden
0x081d: "sv_FI", # Swedish - Finland
0x045a: "syr_SY",# Syriac
0x0428: "tg_TJ", # Tajik - Cyrillic
0x085f: "tmz_DZ",# Tamazight - Latin
0x0449: "ta_IN", # Tamil
0x0444: "tt_RU", # Tatar
0x044a: "te_IN", # Telugu
0x041e: "th_TH", # Thai
0x0851: "bo_BT", # Tibetan - Bhutan
0x0451: "bo_CN", # Tibetan - PRC
0x041f: "tr_TR", # Turkish
0x0442: "tk_TM", # Turkmen - Cyrillic
0x0480: "ug_CN", # Uighur - Arabic
0x0422: "uk_UA", # Ukrainian
0x042e: "wen_DE",# Upper Sorbian - Germany
0x0420: "ur_PK", # Urdu
0x0820: "ur_IN", # Urdu - India
0x0443: "uz_UZ", # Uzbek - Latin
0x0843: "uz_UZ", # Uzbek - Cyrillic
0x042a: "vi_VN", # Vietnamese
0x0452: "cy_GB", # Welsh
0x0488: "wo_SN", # Wolof - Senegal
0x0434: "xh_ZA", # Xhosa - South Africa
0x0485: "sah_RU",# Yakut - Cyrillic
0x0478: "ii_CN", # Yi - PRC
0x046a: "yo_NG", # Yoruba - Nigeria
0x0435: "zu_ZA", # Zulu
}
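Windows identifies locales by numeric LCIDs, and the table above maps each full LCID to a POSIX-style name. A standalone sketch of the lookup (small excerpt of the table; in an LCID the low 10 bits give the primary language and the higher bits the sublanguage/region, which is why e.g. `0x040c` and `0x0c0c` are both French):

```python
# Excerpt of windows_locale above, keyed on the full LCID.
WINDOWS_LOCALE_EXCERPT = {
    0x0409: "en_US",  # English - United States
    0x0809: "en_GB",  # English - United Kingdom
    0x040c: "fr_FR",  # French - France
    0x0c0c: "fr_CA",  # French - Canada
}

def windows_lcid_to_name(lcid, default=None):
    """Map a Windows LCID to a locale name, falling back to `default`."""
    return WINDOWS_LOCALE_EXCERPT.get(lcid, default)
```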
def _print_locale():
""" Test function.
"""
categories = {}
def _init_categories(categories=categories):
for k,v in list(globals().items()):
if k[:3] == 'LC_':
categories[k] = v
_init_categories()
del categories['LC_ALL']
print('Locale defaults as determined by getdefaultlocale():')
print(('-'*72))
lang, enc = getdefaultlocale()
print(('Language: ', lang or '(undefined)'))
print(('Encoding: ', enc or '(undefined)'))
print()
print('Locale settings on startup:')
print(('-'*72))
for name,category in list(categories.items()):
print((name, '...'))
lang, enc = getlocale(category)
print((' Language: ', lang or '(undefined)'))
print((' Encoding: ', enc or '(undefined)'))
print()
print()
print('Locale settings after calling resetlocale():')
print(('-'*72))
resetlocale()
for name,category in list(categories.items()):
print((name, '...'))
lang, enc = getlocale(category)
print((' Language: ', lang or '(undefined)'))
print((' Encoding: ', enc or '(undefined)'))
print()
try:
setlocale(LC_ALL, "")
except:
print('NOTE:')
print('setlocale(LC_ALL, "") does not support the default locale')
print('given in the OS environment variables.')
else:
print()
print('Locale settings after calling setlocale(LC_ALL, ""):')
print(('-'*72))
for name,category in list(categories.items()):
print((name, '...'))
lang, enc = getlocale(category)
print((' Language: ', lang or '(undefined)'))
print((' Encoding: ', enc or '(undefined)'))
print()
###
try:
LC_MESSAGES
except NameError:
pass
else:
__all__.append("LC_MESSAGES")
if __name__=='__main__':
print('Locale aliasing:')
print()
_print_locale()
print()
print('Number formatting:')
print()
_test()
| [
"wimton@yahoo.com"
] | wimton@yahoo.com |
bdf608e30942d1e434a5454fe9cbc342f5efd52f | d10e19f07b209b2fa661fb616947d0382fd0bbb0 | /util/reload/del_e.py | ddff6d5f5e907b62b99314dae8f2bfa3b02afe35 | [] | no_license | Do3956/test | 6bda9633aa2762b8f0f4b05b154810107c40d9ee | 15bbc285dc3acbbabaefb188cb264e56fb24c84d | refs/heads/master | 2021-07-02T23:52:29.519329 | 2018-02-07T06:19:28 | 2018-02-07T06:19:28 | 95,771,394 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 345 | py | # -*- coding: utf-8 -*-
"""
__author__ = 'do'
__mtime__ = '2018/1/11'
__content__ = 'Destructor overloading'
"""
class Life:
def __init__(self, name='name'):
print 'Hello', name
self.name = name
    def __del__(self):
        print 'Goodbye', self.name
brain = Life('Brain') # call __init__
brain = 'loretta' # call __del__ | [
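A Python 3 version of the same experiment, with the prints replaced by a log so the lifecycle is easy to check (this relies on CPython's reference counting, which runs `__del__` as soon as the last reference is rebound):

```python
events = []

class Life:
    def __init__(self, name='name'):
        events.append('Hello ' + name)
        self.name = name

    def __del__(self):
        events.append('Goodbye ' + self.name)

brain = Life('Brain')   # constructor runs here
brain = 'loretta'       # rebinding drops the only reference -> destructor runs
```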
"395614269@163.com"
] | 395614269@163.com |
ce2ea39a792d9d09f4cc66658a211858bd44ab01 | 5667cc877342204b7d54b6c3cc5a9f4854f08829 | /.history/apppersona/forms_20201101194225.py | fafb266b000ff5ea951dc20c6446324342a53378 | [] | no_license | Nyckhos/TestCommit | d62e3f6fefb04ab5647475cc7ead0d72cbd89efa | 9aa8e2e35280b7862960cc8a864e9c02ac7f4796 | refs/heads/main | 2023-01-05T05:57:59.223641 | 2020-11-02T02:08:18 | 2020-11-02T02:08:18 | 309,237,224 | 2 | 0 | null | 2020-11-02T02:30:43 | 2020-11-02T02:30:43 | null | UTF-8 | Python | false | false | 3,647 | py | from django import forms
from django.db import models
from django.db.models.fields import CharField, EmailField
from django.forms import ModelForm, Textarea
from django.forms.widgets import EmailInput, HiddenInput, PasswordInput, Select, TextInput
from .models import Persona
from django.contrib.auth.models import User
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth.forms import UserCreationForm
class FormularioPersona(forms.ModelForm):
class Meta:
model = Persona
fields = ('nombre', 'apellido', 'email', 'celular',
'region')
widgets = {
'nombre': Textarea(attrs={'rows': 1, 'cols': 30, 'style': 'resize:none;', 'id': 'nombre', 'class': 'form-control', 'required onfocus': "setVisibility('100','inline')", 'onBlur': "setVisibility('100','none')", 'onkeyup':"sync()"}),
'apellido': Textarea(attrs={'rows': 1, 'cols': 30, 'style': 'resize:none;', 'id': 'apellido', 'class': "form-control", 'required onfocus': "setVisibility('101','inline')", 'onBlur': "setVisibility('101','none')", 'onkeyup':"sync()"}),
'email': EmailInput(attrs={'rows': 1, 'cols': 30, 'style': 'resize:none;', 'id': 'email', 'class': "form-control", 'required onfocus': "setVisibility('102','inline')", 'onBlur': "setVisibility('102','none')", 'type': 'email', 'onkeyup':"sync()"}),
'celular': Textarea(attrs={'rows': 1, 'cols': 30, 'style': 'resize:none;', 'id': 'celular', 'class': "form-control", 'required onfocus': "setVisibility('103','inline')", 'onBlur': "setVisibility('103','none')"}),
'region': Select(attrs={'class': "form-control", 'onfocus': "setVisibility('105','inline')", 'onBlur': "setVisibility('105','none')", 'default': "rm"}),
}
labels = {
'nombre': _('Nombre'),
'apellido': _('Apellido'),
'email': _('Correo Electronico'),
'celular': _('Numero Telefonico'),
'region': _('Region'),
}
#help_texts = {
# 'nombre': _('Ingrese su nombre.'),
#}
#error_messages = {
# 'nombre': {
# 'max_length': _("El nombre excede el limite de caracteres."),
# },
#}
class ExtendedUserCreationForm(UserCreationForm):
def __init__(self, *args, **kwargs):
super(ExtendedUserCreationForm, self).__init__(*args, **kwargs)
self.fields['first_name'].widget = TextInput(attrs = {'class': 'form-control', 'id':'first_name', 'type':'hidden'})
self.fields['last_name'].widget = TextInput(attrs = {'class': 'form-control', 'id':'last_name', 'type':'hidden'})
self.fields['username'].widget = TextInput(attrs = {'class': 'form-control', 'id':'username', 'required onfocus': "setVisibility('104','inline')", 'onBlur': "setVisibility('104','none')"})
self.fields['password1'].widget = PasswordInput(attrs = {'class': 'form-control', 'id':'password1'})
self.fields['password2'].widget = PasswordInput(attrs = {'class': 'form-control', 'id':'password2'})
self.fields['email'].widget = EmailInput(attrs = {'class': 'form-control', 'id':'useremail', 'type':'hidden'})
class Meta:
model = User
fields = ('first_name', 'last_name', 'username', 'email', 'password1', 'password2')
def save(self, commit=True):
user = super().save(commit=False)
user.email = self.cleaned_data['email']
user.first_name = self.cleaned_data['first_name']
user.last_name = self.cleaned_data['last_name']
if commit:
user.save()
return user
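The `save` override above follows the standard two-phase pattern: let the parent build an unsaved instance with `commit=False`, copy the extra cleaned fields onto it, and persist only when asked. A framework-free sketch of that control flow (the classes here are stand-ins, not Django's API):

```python
class FakeUser:
    """Stand-in for a model instance; records whether save() was called."""
    def __init__(self):
        self.email = None
        self.persisted = False

    def save(self):
        self.persisted = True

class BaseForm:
    """Stand-in for UserCreationForm: builds the instance, optionally saves it."""
    def save(self, commit=True):
        user = FakeUser()
        if commit:
            user.save()
        return user

class ExtendedForm(BaseForm):
    def __init__(self, cleaned_data):
        self.cleaned_data = cleaned_data

    def save(self, commit=True):
        user = super().save(commit=False)        # build, but do not persist yet
        user.email = self.cleaned_data['email']  # copy extra cleaned fields
        if commit:
            user.save()                          # persist only on request
        return user
```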
| [
"fernandox_240997@live.com"
] | fernandox_240997@live.com |
7b3468eb08adcb7c9eab99a524080c7e23f65b33 | 255e7b37e9ce28bbafba5a3bcb046de97589f21c | /suqing/fuckal/python/linkedlist/reverse-nodes-in-k-group.py | 99781eaab54715ab585558e6417e2f13b6e426ff | [] | no_license | dog2humen/ForTheCoffee | 697d2dc8366921aa18da2fa3311390061bab4b6f | 2f940aa9dd6ce35588de18db08bf35a2d04a54f4 | refs/heads/master | 2023-04-15T09:53:54.711659 | 2021-04-28T13:49:13 | 2021-04-28T13:49:13 | 276,009,709 | 2 | 2 | null | 2020-07-01T08:29:33 | 2020-06-30T05:50:01 | Python | UTF-8 | Python | false | false | 1,684 | py | # coding:utf8
"""
25. K 个一组翻转链表
给你一个链表,每 k 个节点一组进行翻转,请你返回翻转后的链表。
k 是一个正整数,它的值小于或等于链表的长度。
如果节点总数不是 k 的整数倍,那么请将最后剩余的节点保持原有顺序。
输入:head = [1,2,3,4,5], k = 2
输出:[2,1,4,3,5]
链接:https://leetcode-cn.com/problems/reverse-nodes-in-k-group
"""
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, val=0, next=None):
# self.val = val
# self.next = next
class Solution:
def reverseKGroup(self, head: ListNode, k: int) -> ListNode:
return self.reverseKGroup_v1(head, k)
def reverseKGroup_v1(self, head: ListNode, k: int) -> ListNode:
"""
        Recursive approach: reverse the list k nodes at a time.
"""
if not head:
return None
        a, b = head, head  # the interval [a, b) holds the k nodes to reverse
for i in range(k):
            if not b:  # fewer than k nodes remain; leave them in original order
return head
b = b.next
nhead = self.reverseHT(a, b)
a.next = self.reverseKGroup_v1(b, k)
return nhead
def reverseHT(self, head, tail):
"""
翻转区间为[head, tail)的链表
返回翻转后的头结点
"""
pre, cur = None, head
while cur != tail:
nextNode = cur.next
cur.next = pre
pre = cur
cur = nextNode
return pre
def reverseKGroup_v2(self, head: ListNode, k: int) -> ListNode:
"""
        Iterative approach.
"""
| [
"116676671@qq.com"
] | 116676671@qq.com |
61d453f7043d551d259a951e4642f47d5429b7cd | f6302d4915f1186106270ac78273534920bdc553 | /tests/test_scripts/test_gen_jsonld.py | 385e3a242135665eba7991fc0854bfad5e25d86a | [
"CC0-1.0"
] | permissive | biolink/biolinkml | 7daeab5dfbefb8c9a6ce15176725be4b0faf86c7 | 34531bd9cb8805029035c7b7726398ebee972b97 | refs/heads/master | 2021-06-10T20:10:56.718829 | 2021-04-04T20:05:59 | 2021-04-04T20:05:59 | 167,634,946 | 25 | 19 | CC0-1.0 | 2021-10-02T01:45:34 | 2019-01-26T01:07:26 | Python | UTF-8 | Python | false | false | 4,595 | py | import os
import re
import unittest
# This has to occur post ClickTestCase
from functools import reduce
from typing import List, Tuple
import click
from rdflib import Graph, URIRef
from biolinkml import METAMODEL_NAMESPACE
from biolinkml.generators.jsonldcontextgen import ContextGenerator
from biolinkml.generators import jsonldgen
from tests.test_scripts.environment import env
from tests.utils.clicktestcase import ClickTestCase
cwd = os.path.dirname(__file__)
meta_context = 'file:./output/gencontext/meta.jsonld'
repl: List[Tuple[str, str]] = [
(r'"source_file_size": [0-9]+', ''),
(r'"source_file_date": "[^"]+"', ''),
(r'"generation_date": "[^"]+"', ''),
(r'"source_file": "[^"]+"', '')
]
def filtr(txt: str) -> str:
return reduce(lambda s, expr: re.sub(expr[0], expr[1], s), repl, txt)
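The `filtr` helper above is the reduce-over-substitution-pairs idiom: apply each `(pattern, replacement)` in turn so volatile metadata can't break golden-file comparisons. A standalone sketch of the same idiom:

```python
import re
from functools import reduce

# (pattern, replacement) pairs applied in order; used to blank out
# fields that legitimately differ between runs.
SCRUB = [
    (r'"generation_date": "[^"]+"', '"generation_date": ""'),
    (r'"source_file_size": [0-9]+', '"source_file_size": 0'),
]

def scrub(txt):
    return reduce(lambda s, pair: re.sub(pair[0], pair[1], s), SCRUB, txt)
```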
class GenJSONLDTestCase(ClickTestCase):
testdir = "genjsonld"
click_ep = jsonldgen.cli
prog_name = "gen-jsonld"
env = env
def test_help(self):
self.do_test("--help", 'help')
def test_meta(self):
self.temp_file_path('meta.jsonld')
self.do_test(f"--context {meta_context}", 'meta.jsonld', filtr=filtr)
self.do_test(f'-f jsonld --context {meta_context}', 'meta.jsonld', filtr=filtr)
self.do_test(f'-f xsv --context {meta_context}', 'meta_error',
expected_error=click.exceptions.BadParameter)
def check_size(self, g: Graph, g2: Graph, root: URIRef, expected_classes: int, expected_slots: int,
expected_types: int, expected_subsets: int, expected_enums: int, model: str) -> None:
"""
        Check that both serializations carry the expected element counts.

        :param g: graph loaded from the generated JSON-LD
        :param g2: graph re-parsed from the Turtle rendering of g
        :param root: URI of the schema being checked
        :param expected_classes: expected number of classes
        :param expected_slots: expected number of slots
        :param expected_types: expected number of types
        :param expected_subsets: expected number of subsets
        :param expected_enums: expected number of enums
        :param model: model name, used in assertion messages
        :return:
"""
for graph in [g, g2]:
n_classes = len(list(graph.objects(root, METAMODEL_NAMESPACE.classes)))
n_slots = len(list(graph.objects(root, METAMODEL_NAMESPACE.slots)))
n_types = len(list(graph.objects(root, METAMODEL_NAMESPACE.types)))
n_subsets = len(list(graph.objects(root, METAMODEL_NAMESPACE.subsets)))
n_enums = len(list(graph.objects(root, METAMODEL_NAMESPACE.enums)))
self.assertEqual(expected_classes, n_classes, f"Expected {expected_classes} classes in {model}")
self.assertEqual(expected_slots, n_slots, f"Expected {expected_slots} slots in {model}")
self.assertEqual(expected_types, n_types, f"Expected {expected_types} types in {model}")
self.assertEqual(expected_subsets, n_subsets, f"Expected {expected_subsets} subsets in {model}")
self.assertEqual(expected_enums, n_enums, f"Expected {expected_enums} enums in {model}")
def test_meta_output(self):
""" Generate a context AND a jsonld for the metamodel and make sure it parses as RDF """
tmp_jsonld_path = self.temp_file_path('metajson.jsonld')
tmp_rdf_path = self.temp_file_path('metardf.ttl')
tmp_meta_context_path = self.temp_file_path('metacontext.jsonld')
# Generate an image of the metamodel
gen = ContextGenerator(env.meta_yaml, importmap=env.import_map)
base = gen.schema.id
if base[-1] not in '/#':
base += '/'
base += gen.schema.name
# Generate context
with open(tmp_meta_context_path, 'w') as tfile:
tfile.write(gen.serialize())
# Generate JSON
with open(tmp_jsonld_path, 'w') as tfile:
tfile.write(jsonldgen.JSONLDGenerator(env.meta_yaml, fmt=jsonldgen.JSONLDGenerator.valid_formats[0],
importmap=env.import_map).serialize(context=tmp_meta_context_path))
# Convert JSON to TTL
g = Graph()
g.load(tmp_jsonld_path, format="json-ld")
g.serialize(tmp_rdf_path, format="ttl")
g.bind('meta', METAMODEL_NAMESPACE)
new_ttl = g.serialize(format="turtle").decode()
# Make sure that the generated TTL matches the JSON-LD (probably not really needed, as this is more of a test
# of rdflib than our tooling but it doesn't hurt
new_g = Graph()
new_g.parse(data=new_ttl, format="turtle")
# Make sure that both match the expected size (classes, slots, types, and model name for error reporting)
self.check_size(g, new_g, URIRef(base), 17, 122, 14, 1, 1, "meta")
if __name__ == '__main__':
unittest.main()
| [
"solbrig@jhu.edu"
] | solbrig@jhu.edu |
a23323ac53143abd4e91ae0d35c314a55e9716cb | f0d713996eb095bcdc701f3fab0a8110b8541cbb | /r6ywkSJHWqA7EK5fG_14.py | e1800ffbfe2d70f4996996c5a58be1d4e86742ab | [] | no_license | daniel-reich/turbo-robot | feda6c0523bb83ab8954b6d06302bfec5b16ebdf | a7a25c63097674c0a81675eed7e6b763785f1c41 | refs/heads/main | 2023-03-26T01:55:14.210264 | 2021-03-23T16:08:01 | 2021-03-23T16:08:01 | 350,773,815 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 773 | py | """
Write a method that accepts two integer parameters rows and cols. The output
is a 2d array of numbers displayed in column-major order, meaning the numbers
shown increase sequentially down each column and wrap to the top of the next
column to the right once the bottom of the current column is reached.
### Examples
printGrid(3, 6) ➞ [
[1, 4, 7, 10, 13, 16],
[2, 5, 8, 11, 14, 17],
[3, 6, 9, 12, 15, 18]
]
printGrid(5, 3) ➞ [
[1, 6, 11],
[2, 7, 12],
[3, 8, 13],
[4, 9, 14],
[5, 10, 15]
]
printGrid(4, 1) ➞ [
[1],
[2],
[3],
[4]
]
### Notes
N/A
"""
def printgrid(rows, cols):
return [list(range(i, rows*cols+1, rows)) for i in range(1, rows+1)]
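The one-liner builds each row with a stride-`rows` range; an explicit nested loop that fills one column at a time shows the same column-major order (hypothetical `printgrid_loops`, for comparison only):

```python
def printgrid_loops(rows, cols):
    grid = [[0] * cols for _ in range(rows)]
    n = 1
    for c in range(cols):        # walk down each column...
        for r in range(rows):    # ...before moving to the next one
            grid[r][c] = n
            n += 1
    return grid
```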
| [
"daniel.reich@danielreichs-MacBook-Pro.local"
] | daniel.reich@danielreichs-MacBook-Pro.local |
e8c56a08cc5141e47f33f2dd24e7959853718791 | d82ac08e029a340da546e6cfaf795519aca37177 | /chapter_06_model_evaluation_hyperparameter_tuning/09_precision_recall.py | 8ed40b8d08c5c3316ccb51e1e9aa19e22539551a | [] | no_license | CSwithJC/PythonMachineLearning | 4409303c3f4d4177dc509c83e240d7a589b144a0 | 0c4508861e182a8eeacd4645fb93b51b698ece0f | refs/heads/master | 2021-09-04T04:28:14.608662 | 2018-01-15T20:25:36 | 2018-01-15T20:25:36 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,384 | py | import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score, make_scorer
"""Precision, Recall, F1-Score:
Precision (also called positive predictive value) is the fraction of
relevant instances among the retrieved instances, while recall
(also known as sensitivity) is the fraction of relevant instances
that have been retrieved over the total amount of relevant instances.
F1-Score is the harmonic mean of precision and recall.
"""
df = pd.read_csv('../data/wdbc.csv')
# Convert M (malignant) and B (benign) into numbers
X = df.iloc[:, 2:].values
y = df.iloc[:, 1].values
le = LabelEncoder()
y = le.fit_transform(y)
le.transform(['M', 'B'])
# Divide Dataset into separate training (80%) and testing (20%)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=1)
pipe_svc = Pipeline([('scl', StandardScaler()),
('clf', SVC(random_state=1))])
pipe_svc.fit(X_train, y_train)
y_pred = pipe_svc.predict(X_test)
# Create the Confusion Matrix:
confmat = confusion_matrix(y_true=y_test,
y_pred=y_pred)
print(confmat)
# Precision, Recall, F1-Score:
print('Precision: %.3f' % precision_score(y_true=y_test, y_pred=y_pred))
print('Recall: %.3f' % recall_score(y_true=y_test, y_pred=y_pred))
print('F1: %.3f' % f1_score(y_true=y_test, y_pred=y_pred))
# We can do grid search based on any of these scores, not just accuracy:
# All the different values we will try for C.
param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
# Combinations of different parameters to be ran here for Grid Search:
param_grid = [
{
'clf__C': param_range,
'clf__kernel': ['linear']
},
{ # NOTE: gamma parameter is specific to kernel SVMs.
'clf__C': param_range,
'clf__gamma': param_range,
'clf__kernel': ['rbf']
}
]
scorer = make_scorer(f1_score, pos_label=0)
gs = GridSearchCV(estimator=pipe_svc,
param_grid=param_grid,
scoring=scorer,
cv=10,
n_jobs=-1)
gs = gs.fit(X_train, y_train)
print('Best F1 score: %.3f' % gs.best_score_)
print(gs.best_params_)
| [
"jean.mendez2@upr.edu"
] | jean.mendez2@upr.edu |
1e4aad395a0c4f1844831bcd3739df218ecd5ae7 | 700f9f9e319ebd26d2557d64ea3827808dfad2f5 | /tests/fixtures/test_abstracts/content_03_expected.py | 58672ac219aa158da24cd5ab42129bffccff6013 | [
"MIT"
] | permissive | elifesciences/elife-tools | 1b44e660e916a82ef8ff64dd5a6ee5506e517359 | bc16e7dd5d6245077e39f8561b99c9acd510ddf7 | refs/heads/develop | 2023-03-06T08:37:47.424282 | 2023-02-20T20:40:49 | 2023-02-20T20:40:49 | 30,274,058 | 13 | 11 | MIT | 2023-02-20T20:40:50 | 2015-02-04T01:14:41 | Python | UTF-8 | Python | false | false | 2,429 | py | expected = [
{
"abstract_type": None,
"content": "RET can be activated in cis or trans by its co-receptors and ligands in vitro, but the physiological roles of trans signaling are unclear. Rapidly adapting (RA) mechanoreceptors in dorsal root ganglia (DRGs) express Ret and the co-receptor Gfr\u03b12 and depend on Ret for survival and central projection growth. Here, we show that Ret and Gfr\u03b12 null mice display comparable early central projection deficits, but Gfr\u03b12 null RA mechanoreceptors recover later. Loss of Gfr\u03b11, the co-receptor implicated in activating RET in trans, causes no significant central projection or cell survival deficit, but Gfr\u03b11;Gfr\u03b12 double nulls phenocopy Ret nulls. Finally, we demonstrate that GFR\u03b11 produced by neighboring DRG neurons activates RET in RA mechanoreceptors. Taken together, our results suggest that trans and cis RET signaling could function in the same developmental process and that the availability of both forms of activation likely enhances but not diversifies outcomes of RET signaling.",
"full_content": "<p>RET can be activated in <italic>cis</italic> or <italic>trans</italic> by its co-receptors and ligands <italic>in vitro</italic>, but the physiological roles of <italic>trans</italic> signaling are unclear. Rapidly adapting (RA) mechanoreceptors in dorsal root ganglia (DRGs) express <italic>Ret</italic> and the co-receptor <italic>Gfr\u03b12</italic> and depend on <italic>Ret</italic> for survival and central projection growth. Here, we show that <italic>Ret</italic> and <italic>Gfr\u03b12</italic> null mice display comparable early central projection deficits, but <italic>Gfr\u03b12</italic> null RA mechanoreceptors recover later. Loss of <italic>Gfr\u03b11</italic>, the co-receptor implicated in activating RET <italic>in trans</italic>, causes no significant central projection or cell survival deficit, but <italic>Gfr\u03b11;Gfr\u03b12</italic> double nulls phenocopy <italic>Ret</italic> nulls. Finally, we demonstrate that GFR\u03b11 produced by neighboring DRG neurons activates RET in RA mechanoreceptors. Taken together, our results suggest that <italic>trans</italic> and <italic>cis</italic> RET signaling could function in the same developmental process and that the availability of both forms of activation likely enhances but not diversifies outcomes of RET signaling.</p>",
},
]
| [
"gnott@starglobal.ca"
] | gnott@starglobal.ca |
0f376f00c936710d9751db5444a986c0a3a66788 | 17dcc3e6a5e418a7c4c2e79f6e34ae7df39fdbcd | /polyaxon_lib/datasets/imdb.py | b9f921cf262d4dc58eef86b59c760646f5f749fc | [
"MIT"
] | permissive | polyaxon/polyaxon-lib | 55358fa8f56a1cd12a443672f4d6cb990c51ae8f | d357b7fee03b2f47cfad8bd7e028d3e265a10575 | refs/heads/master | 2021-09-11T19:43:59.391273 | 2018-04-11T15:35:03 | 2018-04-11T15:35:03 | 94,631,683 | 7 | 4 | MIT | 2018-01-25T14:08:54 | 2017-06-17T15:18:04 | Python | UTF-8 | Python | false | false | 4,981 | py | # -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function
import json
import os
import six.moves.cPickle as pickle # pylint: disable=import-error
import numpy as np
import tensorflow as tf
from polyaxon_lib import Modes
from polyaxon_lib.datasets.converters.sequence_converters import SequenceToTFExampleConverter
from polyaxon_lib.datasets.utils import (
download_datasets,
delete_datasets,
make_dataset_dir,
create_sequence_dataset_input_fn,
create_sequence_dataset_predict_input_fn
)
_DATA_URL = 'http://www.iro.umontreal.ca/~lisa/deep/data/'
_FILENAME = 'imdb.pkl'
_MAX_TOKEN = 10000
_UNK = 1
META_DATA_FILENAME_FORMAT = '{}/meta_data.json'
RECORD_FILE_NAME_FORMAT = '{}/imdb_{}.tfrecord'
def _clean_tokens(seq):
return [1 if t >= _MAX_TOKEN else t for t in seq]
def prepare_dataset(converter, dataset_dir, dataset, data_name, num_items, num_eval=0):
filename = RECORD_FILE_NAME_FORMAT.format(dataset_dir, data_name)
if num_eval:
eval_filename = RECORD_FILE_NAME_FORMAT.format(dataset_dir, Modes.EVAL)
if tf.gfile.Exists(filename):
print('`{}` Dataset files already exist. '
'Exiting without re-creating them.'.format(filename))
return
tokens = dataset[0]
labels = dataset[1]
if num_eval:
# split data
idx = np.random.permutation(num_items)
eval_tokens = [{'source_token': _clean_tokens(tokens[i])} for i in idx[:num_eval]]
eval_labels = [{'label': labels[i]} for i in idx[:num_eval]]
tokens = [{'source_token': _clean_tokens(tokens[i])} for i in idx[num_eval:]]
labels = [{'label': labels[i]} for i in idx[num_eval:]]
else:
        tokens = [{'source_token': _clean_tokens(t)} for t in tokens]
labels = [{'label': l} for l in labels]
with tf.python_io.TFRecordWriter(filename) as tfrecord_writer:
with tf.Session('') as session:
converter.convert(session=session, writer=tfrecord_writer,
sequence_features_list=tokens,
context_features_list=labels,
total_num_items=num_items - num_eval)
if num_eval:
with tf.python_io.TFRecordWriter(eval_filename) as tfrecord_writer:
with tf.Session('') as session:
converter.convert(session=session, writer=tfrecord_writer,
sequence_features_list=eval_tokens,
context_features_list=eval_labels,
total_num_items=num_eval)
def prepare(dataset_dir):
"""Runs download and conversion operation.
Args:
dataset_dir: The dataset directory where the dataset is stored.
"""
make_dataset_dir(dataset_dir)
if all([
tf.gfile.Exists(RECORD_FILE_NAME_FORMAT.format(dataset_dir, Modes.TRAIN)),
tf.gfile.Exists(RECORD_FILE_NAME_FORMAT.format(dataset_dir, Modes.EVAL)),
tf.gfile.Exists(RECORD_FILE_NAME_FORMAT.format(dataset_dir, Modes.PREDICT)),
]):
        print('Dataset files already exist. Exiting without re-creating them.')
return
download_datasets(dataset_dir, _DATA_URL, [_FILENAME])
with open(os.path.join(dataset_dir, _FILENAME), 'rb') as f:
train_set = pickle.load(f)
test_set = pickle.load(f)
converter = SequenceToTFExampleConverter(sequence_features_types={'source_token': 'int'},
context_features_types={'label': 'int'})
num_items = len(train_set[0])
len_eval_data = int(num_items * 0.1)
len_test_data = len(test_set[0])
prepare_dataset(converter, dataset_dir, train_set, Modes.TRAIN, num_items, len_eval_data)
prepare_dataset(converter, dataset_dir, test_set, Modes.PREDICT, len_test_data)
# Finally, write the meta data:
with open(META_DATA_FILENAME_FORMAT.format(dataset_dir), 'w') as meta_data_file:
meta_data = converter.get_meta_data()
meta_data['num_samples'] = {Modes.TRAIN: num_items - len_eval_data,
Modes.EVAL: len_eval_data,
Modes.PREDICT: len_test_data}
meta_data['items_to_descriptions'] = {
'source_token': 'A sequence of word ids.',
'label': 'A single integer 0 or 1',
}
meta_data['num_classes'] = 2
json.dump(meta_data, meta_data_file)
delete_datasets(dataset_dir, [_FILENAME])
print('\nFinished converting the IMDB dataset!')
def create_input_fn(dataset_dir):
return create_sequence_dataset_input_fn(
dataset_dir, prepare, RECORD_FILE_NAME_FORMAT, META_DATA_FILENAME_FORMAT,
bucket_boundaries=[140, 200, 300, 400, 500])
def create_predict_input_fn(dataset_dir):
return create_sequence_dataset_predict_input_fn(
dataset_dir, prepare, RECORD_FILE_NAME_FORMAT, META_DATA_FILENAME_FORMAT,
bucket_boundaries=[140, 200, 300, 400, 500])
| [
"mouradmourafiq@gmail.com"
] | mouradmourafiq@gmail.com |
d345272523d8337ac757fdf062c2baac998b6531 | 15f321878face2af9317363c5f6de1e5ddd9b749 | /solutions_python/Problem_200/2946.py | 87758d9e393a1e5e783c705de41a6ed22abb34c8 | [] | no_license | dr-dos-ok/Code_Jam_Webscraper | c06fd59870842664cd79c41eb460a09553e1c80a | 26a35bf114a3aa30fc4c677ef069d95f41665cc0 | refs/heads/master | 2020-04-06T08:17:40.938460 | 2018-10-14T10:12:47 | 2018-10-14T10:12:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 584 | py | numbers = []
t = int(raw_input())
for i in xrange(1, t + 1):
numbers = [int(s) for s in list(raw_input())]
numbers = [0] + numbers
while True :
change = False
for j in xrange(1, len(numbers)):
if (numbers[j] < numbers[j-1]):
numbers[j-1] = numbers[j-1] -1
change = True
for y in xrange(j,len(numbers)) :
numbers[y] = 9
break ;
if change == False:
break
while(numbers[0] == 0) : del numbers[0]
s ="".join(map(str, numbers))
print "Case #{}: {}".format( i, s)
| [
"miliar1732@gmail.com"
] | miliar1732@gmail.com |
bdd20ca186269d2759e9ecf5ae8ac0a444d8f194 | cd9c2c6df0567bd7be18909f4150a26a45c8a4f7 | /utils/rpi3/setup-system.py | cce2a6faf5e0796d16d7210a85c82987f73b6111 | [
"Apache-2.0"
] | permissive | zeta1999/mjmech | d98fd506e6d7e799953a328ebe5db06e379591cb | 9e44f82849afc35180d4bda22282dba1cde42be0 | refs/heads/master | 2022-04-19T19:24:44.638122 | 2020-04-16T16:41:54 | 2020-04-16T16:41:54 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,530 | py | #!/usr/bin/python3 -B
# Copyright 2018 Josh Pieper. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Sets up a raspberry pi 3b+ for use as an mjmech computer.
It is intended to be run as root, like:
sudo ./setup-system.py
"""
import os
import shlex
import shutil
import subprocess
import time
ORIG_SUFFIX = time.strftime(".orig-%Y%m%d-%H%M%S")
def run(*args, **kwargs):
print('run: ' + args[0])
subprocess.check_call(*args, shell=True, **kwargs)
def ensure_present(filename, line):
'''Ensure the given line is present in the named file.'''
current_content = [
x.strip() for x in open(filename, encoding='utf-8').readlines()]
if line.strip() in current_content:
# Yes, the line is already present there
return
shutil.copy(filename, filename + ORIG_SUFFIX)
print('ensure_present({}): Adding: {}'.format(filename, line))
# Nope, we need to add it.
with open(filename, 'a', encoding='utf-8') as f:
f.write(line + '\n')
def ensure_contents(filename, contents):
'''Ensure the given file has exactly the given contents'''
if os.path.exists(filename):
existing = open(filename, encoding='utf-8').read()
if existing == contents:
return
shutil.copy(filename, filename + ORIG_SUFFIX)
print('ensure_contents({}): Updating'.format(filename))
with open(filename, 'w', encoding='utf-8') as f:
f.write(contents)
def set_config_var(name, value):
'''Set the given variable in /boot/config.txt'''
contents = open('/boot/config.txt', encoding='utf-8').readlines()
new_value = '{}={}'.format(name, value)
maybe_value = [x for x in contents
if x.startswith('{}='.format(name))]
if len(maybe_value) == 1 and maybe_value[0].strip() == new_value:
return
new_contents = ([
x for x in contents
if not x.startswith('{}='.format(name))] +
[new_value + '\n'])
shutil.copy('/boot/config.txt', '/boot/config.txt' + ORIG_SUFFIX)
print('set_config_var({})={}'.format(name, value))
open('/boot/config.txt', 'w', encoding='utf-8').write(
''.join([x for x in new_contents]))
def main():
if os.getuid() != 0:
raise RuntimeError('must be run as root')
# Some useful utilities
run('apt install --yes socat setserial')
# Things necessary to be an AP
run('apt install --yes hostapd dnsmasq')
# P1 Camera - Yes
run('raspi-config nonint do_camera 0')
# P2 SSH - Yes
run('raspi-config nonint do_ssh 0')
# P6 Serial
# Login shell - No
# Serial enabled - Yes
#
# NOTE: The version of raspi-config we have now doesn't support
# enabling the UART from noninteractive mode.
run('raspi-config nonint do_serial 1')
# This we have to manually enable the UART once it is done.
set_config_var('enable_uart', '1')
# Switch to use the PL011 UART
# https://www.raspberrypi.org/documentation/configuration/uart.md
ensure_present('/boot/config.txt', 'dtoverlay=pi3-disable-bt')
ensure_contents('/etc/network/interfaces',
'''
# interfaces(5) file used by ifup(8) and ifdown(8)
# Please note that this file is written to be used with dhcpcd
# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
auto eth0
iface eth0 inet static
address 192.168.17.47
netmask 255.255.255.0
network 192.168.17.0
broadcast 192.168.17.255
# gateway 192.168.17.1
dns-nameservers 8.8.8.8 8.8.4.4
post-up ip route add 239.89.108.0/24 dev eth0
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.16.47
netmask 255.255.255.0
network 192.168.16.0
broadcast 192.168.16.255
post-up iw dev wlan0 set power_save off || true
post-up iw dev wlan0 set retry long 1 || true
post-up ip route add 239.89.108.0/24 dev wlan0
''')
ensure_contents('/etc/hostapd/hostapd.conf',
'''
country_code=US
interface=wlan0
driver=nl80211
ssid=MjMech
hw_mode=a
ieee80211n=1
require_ht=1
ieee80211ac=1
require_vht=1
ieee80211d=1
ieee80211h=0
ht_capab=[HT40+][SHORT-GI-20][DSSS_CK-40][MAX-AMSDU-3839]
vht_capab=[MAX-MPDU-3895][SHORT-GI-80][SU-BEAMFORMEE]
vht_oper_chwidth=1
channel=36
vht_oper_centr_freq_seg0_idx=42
wmm_enabled=0
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=WalkingRobots
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP
''')
ensure_present('/etc/default/hostapd',
'DAEMON_CONF="/etc/hostapd/hostapd.conf"')
ensure_contents('/etc/dnsmasq.conf',
'''
interface=wlan0
dhcp-range=192.168.16.100,192.168.16.150,255.255.255.0,24h
''')
ensure_present('/etc/dhcpcd.conf', 'denyinterfaces wlan0')
run('systemctl unmask hostapd')
run('systemctl enable hostapd')
run('systemctl start hostapd')
if __name__ == '__main__':
main()
| [
"jjp@pobox.com"
] | jjp@pobox.com |
66e70df8459191ede0a0d58d871bb1cceccfeaa2 | e29734c2b3543a05a28b6bc460c3248ea37aaf5c | /apps/course/migrations/0012_auto_20190417_1723.py | 3f626f2c421f9e72f9bb96824deeb14fd09983c7 | [] | no_license | simida0755/PopularBlogs | fda6dbe06751dde013ba57f73c708fd7106a49ee | 3a86989232206d0727223306c0e2d2c62d35fa9b | refs/heads/master | 2020-05-21T15:54:09.853341 | 2019-05-13T02:15:28 | 2019-05-13T02:15:28 | 186,101,555 | 5 | 1 | null | null | null | null | UTF-8 | Python | false | false | 371 | py | # Generated by Django 2.0.2 on 2019-04-17 17:23
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('course', '0011_auto_20190416_1459'),
]
operations = [
migrations.RenameField(
model_name='blogposts',
old_name='Like_nums',
new_name='like_nums',
),
]
| [
"simida027@163.com"
] | simida027@163.com |
3ca0590beaf838aade8aff60fc8e2f99982bed2a | de24f83a5e3768a2638ebcf13cbe717e75740168 | /moodledata/vpl_data/380/usersdata/277/91472/submittedfiles/principal.py | 84c993c745f110d61a63b8b18dfd48624e8021e1 | [] | no_license | rafaelperazzo/programacao-web | 95643423a35c44613b0f64bed05bd34780fe2436 | 170dd5440afb9ee68a973f3de13a99aa4c735d79 | refs/heads/master | 2021-01-12T14:06:25.773146 | 2017-12-22T16:05:45 | 2017-12-22T16:05:45 | 69,566,344 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 69 | py | # -*- coding: utf-8 -*-
x = input('Digite algo: ')
print(x)
"rafael.mota@ufca.edu.br"
] | rafael.mota@ufca.edu.br |
7aefc106ad42e43fe1b6ff523e1dac4fc64c01d2 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/verbs/_fume.py | ce796bac3558af33ff267419b8d051aff846cfaa | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 326 | py |
# class header
class _FUME():
def __init__(self,):
self.name = "FUME"
self.definitions = [u'to be very angry, sometimes without expressing it: ']
self.parents = []
self.childen = []
self.properties = []
self.jsondata = {}
self.specie = 'verbs'
def run(self, obj1 = [], obj2 = []):
return self.jsondata
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
df5c7b8d6d7517e53f42e5a9d2b47dc530a7a8e4 | aedf65a662083d82fd2ef021a883dd842961d445 | /webapp/linkstop/apps/accounts/admin.py | bb93b6b45bbaaaf78846834daee1847aa3489c47 | [] | no_license | hercules261188/Locidesktop | b95f9f4dd709d33f21b7b9f43d52e3b76c99912b | cab3a3bda807780244e4e5ce9c3745b6d04ddbc9 | refs/heads/master | 2021-12-02T15:12:47.876242 | 2011-01-10T09:21:27 | 2011-01-10T09:21:27 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 293 | py | from django.contrib import admin
from models import *
class ProfileAdmin(admin.ModelAdmin):
list_display = ('user',)
admin.site.register(Profile, ProfileAdmin)
class InviteAdmin(admin.ModelAdmin):
list_display = ('code', 'uses_remaining')
admin.site.register(Invite, InviteAdmin)
| [
"willmcgugan@gmail.com"
] | willmcgugan@gmail.com |
9b6387e95c7865363144a250492269e8afc55560 | af4d559792c4255d5f26bc078cd176b70c0e643f | /hpsklearn/components/compose/_target.py | 47122eee5792a436fd2494fc78ae7673a5121e0f | [
"BSD-3-Clause"
] | permissive | hyperopt/hyperopt-sklearn | ec7d5f97ba8fd5a2c283dfec2fa9e0170b61c6ce | 4b3f6fde3a1ded2e71e8373d52c1b51a0239ef91 | refs/heads/master | 2023-08-02T07:19:20.259964 | 2022-12-15T17:53:07 | 2022-12-15T17:53:07 | 8,293,893 | 1,480 | 292 | NOASSERTION | 2022-12-15T17:53:08 | 2013-02-19T16:09:53 | Python | UTF-8 | Python | false | false | 1,423 | py | from hyperopt.pyll import scope
from sklearn import compose
@scope.define
def sklearn_TransformedTargetRegressor(*args, **kwargs):
return compose.TransformedTargetRegressor(*args, **kwargs)
def transformed_target_regressor(name: str,
regressor: object = None,
transformer: object = None,
func: callable = None,
inverse_func: callable = None,
check_inverse: bool = True):
"""
Return a pyll graph with hyperparameters that will construct
a sklearn.compose.TransformedTargetRegressor model.
Args:
name: name | str
regressor: regressor object | object
transformer: estimator object | object
func: function to apply to `y` before fit | callable
inverse_func: function to apply to prediction | callable
check_inverse: check whether inverse leads to original targets | bool
"""
def _name(msg):
return f"{name}.transformed_target_regressor_{msg}"
# TODO: Try implementing np.exp and np.log | np.sqrt and np.square combinations
hp_space = dict(
regressor=regressor,
transformer=transformer,
func=func,
inverse_func=inverse_func,
check_inverse=check_inverse
)
return scope.sklearn_TransformedTargetRegressor(**hp_space)
| [
"38689620+mandjevant@users.noreply.github.com"
] | 38689620+mandjevant@users.noreply.github.com |
28b398292b9bf6064d42889b0c2063af30c40fc2 | 58c3dd075b87ec882ccbce27d81928ea5dd46223 | /presentation/views/user/stories_views.py | ee43e98b33a2556704a7c743fbe22155926874aa | [] | no_license | panuta/storypresso | 706328652f583f7d3574eccca2bc885c49a9d0b9 | ef0fecacbd67b09c8ab64d842709344fa6e2773d | refs/heads/master | 2021-01-02T06:32:05.443474 | 2013-04-11T12:47:55 | 2013-04-11T12:47:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,890 | py | # -*- encoding: utf-8 -*-
import urllib2
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from django.http import Http404
from django.shortcuts import render, get_object_or_404, redirect
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST
from common.shortcuts import response_json_success
from common.utilities import clean_content
from domain.models import Story, StoryEditingContent, EditingStory, StoryContent
from presentation.exceptions import PublishingException
from presentation.forms import WriteStoryForm, PublishStoryForm
@login_required
def view_my_stories(request, showing_stories):
if showing_stories == 'all':
stories = Story.objects.filter(created_by=request.user)
elif showing_stories == 'draft':
stories = Story.objects.filter(created_by=request.user, is_draft=True)
elif showing_stories == 'published':
stories = Story.objects.filter(created_by=request.user, is_draft=False)
else:
raise Http404
return render(request, 'user/stories.html', {'stories': stories, 'showing_stories': showing_stories})
@login_required
def write_my_story(request, story_uid):
if not story_uid:
story = None
story_content = None
else:
story = get_object_or_404(Story, uid=story_uid)
if story.created_by.id != request.user.id:
raise Http404
story_content, created = StoryContent.objects.get_or_create(story=story)
if request.method == 'POST':
form = WriteStoryForm(request.POST)
if form.is_valid():
uid = form.cleaned_data['uid']
if story and story.uid != uid:
raise Http404
if not story:
try:
story = Story.objects.get(uid=uid) # Maybe this story is already saved via autosave
story_content = story.content
except Story.DoesNotExist:
story = Story.objects.create(uid=uid, created_by=request.user)
story_content = StoryContent.objects.create(story=story)
story.title = form.cleaned_data['title'].strip()
story.save()
story_content.body = clean_content(form.cleaned_data['body'])
story_content.save()
            try:
                editing_story = EditingStory.objects.get(story=story)
                StoryEditingContent.objects.filter(editing_story=editing_story).delete()
                editing_story.delete()
            except EditingStory.DoesNotExist:
                pass
submit_method = request.POST.get('submit')
if submit_method == 'publish':
try:
story.is_ready_to_publish()
except PublishingException, e:
messages.error(request, e.message)
return redirect('write_my_story', story.uid)
return redirect('publishing_my_story_excerpt', story.uid)
else:
messages.success(request, u'บันทึกข้อมูลเรียบร้อย')
return redirect('write_my_story', story.uid)
if not story:
story = Story(uid=request.POST.get('uid'), created_by=request.user)
else:
if not story:
# Pre-generate uuid for Story using when autosave
story = Story(uid=Story.objects.generate_uuid(), created_by=request.user)
story_content = StoryContent(story=story)
form = WriteStoryForm(initial={
'uid': story.uid,
'title': story.title,
'body': story_content.body,
})
return render(request, 'user/story_write.html', {'story': story, 'form': form})
@login_required
def publishing_my_story_excerpt(request, story_uid):
story = get_object_or_404(Story, uid=story_uid)
if story.created_by.id != request.user.id:
raise Http404
if request.method == 'POST':
form = PublishStoryForm(request.POST)
if form.is_valid():
pass
else:
form = PublishStoryForm(initial={
'excerpt': story.excerpt,
'category': story.primary_category,
'title': story.title,
'summary': story.summary,
'price': story.price,
})
return render(request, 'user/story_publish/story_publish_excerpt.html', {'story': story, 'form': form})
@login_required
def publishing_my_story_details(request, story_uid):
story = get_object_or_404(Story, uid=story_uid)
if story.created_by.id != request.user.id:
raise Http404
return render(request, 'user/story_publish/story_publish_details.html', {'story': story})
@login_required
def publishing_my_story_confirm(request, story_uid):
story = get_object_or_404(Story, uid=story_uid)
if story.created_by.id != request.user.id:
raise Http404
return render(request, 'user/story_publish/story_publish_confirm.html', {'story': story})
@login_required
def edit_my_story_general(request, story_uid):
story = get_object_or_404(Story, uid=story_uid)
if story.created_by.id != request.user.id:
raise Http404
if request.method == 'POST':
pass
else:
pass
return render(request, 'user/story_edit_general.html', {'story': story})
@login_required
def edit_my_story_content(request, story_uid):
story = get_object_or_404(Story, uid=story_uid)
if story.created_by.id != request.user.id:
raise Http404
if request.method == 'POST':
form = WriteStoryForm(request.POST)
if form.is_valid():
story.title = form.cleaned_data['title']
story.save()
story.content.body = clean_content(form.cleaned_data['body'])
story.content.save()
submit_method = request.POST.get('submit')
if submit_method == 'publish':
if not story.is_draft:
messages.warning(request, u'ผลงานนี้ถูกเผยแพร่ไปก่อนหน้านี้แล้ว')
return redirect('publish_my_story', story.uid)
try:
story.is_ready_to_publish()
except PublishingException, e:
messages.error(request, e.message)
return redirect('edit_my_story_content', story.uid)
return redirect('publish_my_story', story.uid)
else: # DRAFT, SAVE
messages.success(request, u'บันทึกข้อมูลเรียบร้อย')
return redirect('edit_my_story_content', story.uid)
else:
form = WriteStoryForm(initial={
'uid': story.uid,
'title': story.title,
'body': story.content.body,
})
return render(request, 'user/story_edit_content.html', {'story': story, 'form': form})
@csrf_exempt
@require_POST
@login_required
def ajax_autosave_editing_story(request, story_uid):
try:
story = Story.objects.get(uid=story_uid)
if story.created_by.id != request.user.id:
raise Http404
except Story.DoesNotExist:
story = Story.objects.create(uid=story_uid, is_draft=True, created_by=request.user)
editing_story, created = EditingStory.objects.get_or_create(story=story)
story_editing_content, created = StoryEditingContent.objects.get_or_create(editing_story=editing_story)
content = request.POST.get('id_body')
if content:
content = urllib2.unquote(content).decode("utf8")
story_editing_content.body = content
story_editing_content.save()
return response_json_success()
def ajax_upload_image_editing_story(request):
pass
def ajax_recent_image_editing_story(request):
pass | [
"panuta@gmail.com"
] | panuta@gmail.com |
344de7f08b6061681f6b255140a70d46b64199c1 | 2416a6bde05651717f99dd45c5116cd13a7a38f7 | /docs/source/clear_docs.py | 3514db0143d1ddae01c133c04f99d3a02fbd37f2 | [
"BSD-3-Clause"
] | permissive | modulexcite/nbgrader | b1080a61d0ba5252c16c3d7471e13a40ce659624 | 1748c40f47b69c81e19068d4ba3c3205af72317d | refs/heads/master | 2021-01-20T21:40:48.115091 | 2015-08-13T01:57:08 | 2015-08-13T01:57:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,105 | py | import os
import sys
import subprocess as sp
from copy import deepcopy
try:
from IPython.nbformat import read, write
from IPython.nbconvert.preprocessors import ClearOutputPreprocessor
except ImportError:
print("Warning: IPython could not be imported, some tasks may not work")
def run(cmd):
print(" ".join(cmd))
proc = sp.Popen(cmd, stdout=sp.PIPE, stderr=sp.STDOUT)
stdout, _ = proc.communicate()
if proc.poll() != 0:
print(stdout.decode())
print("Command exited with code: {}".format(proc.poll()))
sys.exit(1)
def _check_if_directory_in_path(pth, target):
while pth not in ('', '/'):
pth, dirname = os.path.split(pth)
if dirname == target:
return True
return False
def clear_notebooks(root):
"""Clear the outputs of documentation notebooks."""
# cleanup ignored files
run(['git', 'clean', '-fdX', root])
print("Clearing outputs of notebooks in '{}'...".format(os.path.abspath(root)))
preprocessor = ClearOutputPreprocessor()
for dirpath, dirnames, filenames in os.walk(root):
is_submitted = _check_if_directory_in_path(dirpath, 'submitted')
for filename in sorted(filenames):
if os.path.splitext(filename)[1] == '.ipynb':
# read in the notebook
pth = os.path.join(dirpath, filename)
with open(pth, 'r') as fh:
orig_nb = read(fh, 4)
# copy the original notebook
new_nb = deepcopy(orig_nb)
# check outputs of all the cells
if not is_submitted:
new_nb = preprocessor.preprocess(new_nb, {})[0]
# clear metadata
new_nb.metadata = {}
# write the notebook back to disk
with open(pth, 'w') as fh:
write(new_nb, fh, 4)
if orig_nb != new_nb:
print("Cleared '{}'".format(pth))
if __name__ == "__main__":
root = os.path.abspath(os.path.dirname(__file__))
clear_notebooks(root)
| [
"jhamrick@berkeley.edu"
] | jhamrick@berkeley.edu |
1ec1ad8fe95f163e80301f541560976b55f7b599 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/otherforms/_viewed.py | f9f11fcee4bddc5beedae71b386206b85ee7bebc | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 216 | py |
# class header
class _VIEWED():
def __init__(self,):
self.name = "VIEWED"
		self.definitions = "view"
self.parents = []
self.childen = []
self.properties = []
self.jsondata = {}
self.basic = ['view']
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
f6fdf258262e61f9b025e4447b78e2845eb335c1 | 0d86bb399a13152cd05e3ba5684e4cb22daeb247 | /python-basics/unit12-modules/py119_from_import.py | d62586d3be78b54a6b3a96b8e4acc2e3c31b4925 | [] | no_license | tazbingor/learning-python2.7 | abf73f59165e09fb19b5dc270b77324ea00b047e | f08c3bce60799df4f573169fcdb1a908dcb8810f | refs/heads/master | 2021-09-06T05:03:59.206563 | 2018-02-02T15:22:45 | 2018-02-02T15:22:45 | 108,609,480 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 390 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 18/1/4 10:06 PM
# @Author : Aries
# @Site :
# @File : py119_from_import.py
# @Software: PyCharm
'''
from .. import statement
multi-line imports
'''
from Tkinter import Tk, Frame, Button, Entry, \
Canvas, Text, LEFT
'''
PEP 8 style
adding parentheses makes it easier to read
'''
from Tkinter import (Tk, Frame, Button, Entry, Canvas, Text, LEFT)
| [
"852353298@qq.com"
] | 852353298@qq.com |
f591f2f5000f55dde12dac4747b15d2cd0031f76 | 0809ea2739d901b095d896e01baa9672f3138825 | /ORMproject1_ver_2/ORMproject1_ver_2/settings.py | db3c8140ebb48988e3492dcf972169f97a75ed03 | [] | no_license | Gagangithub1988/djangoprojects | dd001f2184e78be2fb269dbfdc8e3be1dd71ce43 | ea236f0e4172fbf0f71a99aed05ed7c7b38018e2 | refs/heads/master | 2022-11-15T23:46:46.134247 | 2020-07-15T06:37:51 | 2020-07-15T06:37:51 | 273,479,403 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,275 | py | """
Django settings for ORMproject1_ver_2 project.
Generated by 'django-admin startproject' using Django 3.0.5.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
TEMPLATE_DIR=os.path.join(BASE_DIR,'templates')
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '+m!bbg$#saf!qi#4ct@*n8(x0zc7eh^j_j^2)^gyhni9xyv&0!'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'testApp'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'ORMproject1_ver_2.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [TEMPLATE_DIR],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'ORMproject1_ver_2.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'orm2',
'USER':'postgres',
'PASSWORD':'sql2020',
'HOST':'localhost',
'PORT':'5432',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = '/static/'
| [
"djangopython1988@gmail.com"
] | djangopython1988@gmail.com |
35659b938f37e4feeafc513cf7fc00b3e2457ce9 | 10d98fecb882d4c84595364f715f4e8b8309a66f | /tunas/schema_io_test.py | ab7f6ca4f294916dca7c1d63ac50e89c1bc14eeb | [
"CC-BY-4.0",
"Apache-2.0"
] | permissive | afcarl/google-research | 51c7b70d176c0d70a5ee31ea1d87590f3d6c6f42 | 320a49f768cea27200044c0d12f394aa6c795feb | refs/heads/master | 2021-12-02T18:36:03.760434 | 2021-09-30T20:59:01 | 2021-09-30T21:07:02 | 156,725,548 | 1 | 0 | Apache-2.0 | 2018-11-08T15:13:53 | 2018-11-08T15:13:52 | null | UTF-8 | Python | false | false | 7,843 | py | # coding=utf-8
# Copyright 2021 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for schema_io."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import copy
from absl.testing import absltest
from absl.testing import parameterized
import six
from tunas import schema
from tunas import schema_io
# First style of decorator use: decorating a namedtuple subclass.
@schema_io.register_namedtuple(
'schema_io_test.NamedTuple1',
deprecated_names=['schema_io_test.DeprecatedNamedTuple1'])
class NamedTuple1(collections.namedtuple('NamedTuple1', ['foo'])):
pass
# Second style of decorator use: registering a raw namedtuple.
NamedTuple2 = collections.namedtuple('NamedTuple2', ['bar'])
schema_io.register_namedtuple('schema_io_test.NamedTuple2')(NamedTuple2)
# Decorator with default argument values.
@schema_io.register_namedtuple(
'schema_io_test.NamedTuple3',
deprecated_names=['schema_io_test.DeprecatedNamedTuple3'],
defaults={'foo': 3, 'bar': 'hi'})
class NamedTuple3(collections.namedtuple('NamedTuple3', ['foo', 'bar'])):
pass
class SchemaIoTest(parameterized.TestCase):
def test_register_namedtuple_exceptions(self):
with self.assertRaisesRegex(ValueError, 'Duplicate name'):
cls = collections.namedtuple('namedtuple3', ['baz'])
schema_io.register_namedtuple('schema_io_test.NamedTuple1')(cls)
with self.assertRaisesRegex(ValueError, 'Duplicate class'):
schema_io.register_namedtuple('NewNameForTheSameClass')(NamedTuple1)
with self.assertRaisesRegex(ValueError, 'not a namedtuple'):
schema_io.register_namedtuple('NotANamedTuple')(dict)
def test_namedtuple_class_to_name(self):
self.assertEqual(
schema_io.namedtuple_class_to_name(NamedTuple1),
'schema_io_test.NamedTuple1')
self.assertEqual(
schema_io.namedtuple_class_to_name(NamedTuple2),
'schema_io_test.NamedTuple2')
def test_namedtuple_class_to_name_not_registered(self):
cls = collections.namedtuple('cls', ['x'])
with self.assertRaisesRegex(
KeyError, 'Namedtuple class .* is not registered'):
schema_io.namedtuple_class_to_name(cls)
def test_namedtuple_name_to_class_not_registered(self):
with self.assertRaisesRegex(
KeyError, 'Namedtuple name \'blahblah\' is not registered'):
schema_io.namedtuple_name_to_class('blahblah')
def test_namedtuple_name_to_class(self):
self.assertEqual(
schema_io.namedtuple_name_to_class('schema_io_test.NamedTuple1'),
NamedTuple1)
self.assertEqual(
schema_io.namedtuple_name_to_class('schema_io_test.NamedTuple2'),
NamedTuple2)
def test_namedtuple_deprecated_name_to_class(self):
self.assertEqual(
schema_io.namedtuple_name_to_class(
'schema_io_test.DeprecatedNamedTuple1'),
NamedTuple1)
def _run_serialization_test(self,
structure,
expected_type=None):
"""Convert the structure to serialized JSON, then back to a string."""
expected_value = copy.deepcopy(structure)
serialized = schema_io.serialize(structure)
self.assertIsInstance(serialized, six.string_types)
restored = schema_io.deserialize(serialized)
self.assertEqual(restored, expected_value)
if expected_type is not None:
self.assertIsInstance(restored, expected_type)
def test_serialization_with_simple_structures(self):
# Primitives.
self._run_serialization_test(None)
self._run_serialization_test(1)
self._run_serialization_test(0.5)
self._run_serialization_test(1.0)
self._run_serialization_test('foo')
# Lists and tuples.
self._run_serialization_test([1, 2, 3])
self._run_serialization_test((1, 2, 3))
# Dictionaries.
self._run_serialization_test({'a': 3, 'b': 4})
self._run_serialization_test({10: 'x', 20: 'y'})
self._run_serialization_test({(1, 2): 'x', (3, 4): 'y'})
# Namedtuples
self._run_serialization_test(NamedTuple1(42), expected_type=NamedTuple1)
self._run_serialization_test(NamedTuple2(12345), expected_type=NamedTuple2)
# OneOf nodes.
self._run_serialization_test(schema.OneOf((1, 2, 3), 'tag'))
def test_namedtuple_deserialization_with_deprecated_names(self):
restored = schema_io.deserialize(
'["namedtuple:schema_io_test.DeprecatedNamedTuple1",["foo",51]]')
self.assertEqual(restored, NamedTuple1(51))
self.assertIsInstance(restored, NamedTuple1)
def test_serialization_with_nested_structures(self):
"""Verify that to_json and from_json are recursively called on children."""
# Lists and tuples
self._run_serialization_test((((1,),),))
self._run_serialization_test([[[1]]])
# Dictionaries.
self._run_serialization_test({'a': {'b': {'c': {'d': 'e'}}}})
# Namedtuples
self._run_serialization_test(NamedTuple1(NamedTuple2(NamedTuple1(42))))
# OneOf nodes
self._run_serialization_test(
schema.OneOf((
schema.OneOf((
schema.OneOf((1, 2, 3), 'innermost'),
), 'inner'),
), 'outer'))
# Composite data structure containing many different types.
self._run_serialization_test(
{'a': NamedTuple1([(schema.OneOf([{'b': 3}], 't'),)])})
def test_serialization_with_bad_type(self):
with self.assertRaisesRegex(ValueError, 'Unrecognized type'):
schema_io.serialize(object())
def test_deserialization_defaults(self):
# NamedTuple1 accepts one argument: foo. It has not default value.
# NamedTuple3 accepts two arguments: foo and bar. Both have default values.
# Use default arguments for both foo and bar.
value = schema_io.deserialize(
"""["namedtuple:schema_io_test.NamedTuple3"]""")
self.assertEqual(value, NamedTuple3(foo=3, bar='hi'))
# Use default argument for bar only.
value = schema_io.deserialize(
"""["namedtuple:schema_io_test.NamedTuple3", ["foo", 42]]""")
self.assertEqual(value, NamedTuple3(foo=42, bar='hi'))
# Use default argument for foo only.
value = schema_io.deserialize(
"""["namedtuple:schema_io_test.NamedTuple3", ["bar", "bye"]]""")
self.assertEqual(value, NamedTuple3(foo=3, bar='bye'))
# Don't use any default arguments.
value = schema_io.deserialize(
"""["namedtuple:schema_io_test.NamedTuple3",
["foo", 9], ["bar", "x"]]""")
self.assertEqual(value, NamedTuple3(foo=9, bar='x'))
# Default values should also work when we refer to a namedtuple by a
# deprecated name.
value = schema_io.deserialize(
"""["namedtuple:schema_io_test.DeprecatedNamedTuple3"]""")
self.assertEqual(value, NamedTuple3(foo=3, bar='hi'))
# Serialized value references a field that doesn't exist in the namedtuple.
with self.assertRaisesRegex(ValueError, 'Invalid field: baz'):
schema_io.deserialize(
"""["namedtuple:schema_io_test.NamedTuple3", ["baz", 10]]""")
# Serialized value is missing a field that should exist in the namedtuple.
with self.assertRaisesRegex(ValueError, 'Missing field: foo'):
schema_io.deserialize("""["namedtuple:schema_io_test.NamedTuple1"]""")
if __name__ == '__main__':
absltest.main()
| [
"copybara-worker@google.com"
] | copybara-worker@google.com |
3cbe84dc02307edb834d241e25626fe9c1b861d0 | e9a5033ac69ef690602eb1206217dac6d09c1e63 | /netharn/util/util_io.py | 74d56ad483aade482c02c454104a746a9cfed118 | [
"Apache-2.0"
] | permissive | Erotemic/netharn | ba030df2c9d79fe0b392f8823bc2819383d8756f | bc4a6d75445c949e709e5ab903ba72813ec68b79 | refs/heads/master | 2021-05-26T05:05:24.931026 | 2020-08-27T00:13:34 | 2020-08-27T00:13:34 | 127,506,937 | 43 | 8 | Apache-2.0 | 2020-02-19T17:51:56 | 2018-03-31T06:49:03 | Python | UTF-8 | Python | false | false | 1,392 | py | # -*- coding: utf-8 -*-
"""
DEPRECATED
"""
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
def read_h5arr(fpath):
import h5py
with h5py.File(fpath, 'r') as hf:
return hf['arr_0'][...]
def write_h5arr(fpath, arr):
import h5py
with h5py.File(fpath, 'w') as hf:
hf.create_dataset('arr_0', data=arr)
def read_arr(fpath):
"""
Example:
>>> import ubelt as ub
>>> import netharn as nh
>>> from os.path import join
>>> dpath = ub.ensure_app_cache_dir('netharn', 'tests')
>>> arr = np.random.rand(10)
>>> fpath = join(dpath, 'arr.npy')
>>> nh.util.write_arr(fpath, arr)
>>> arr2 = nh.util.read_arr(fpath)
>>> assert np.all(arr == arr2)
>>> # xdoctest: +REQUIRES(module:h5py)
>>> fpath = join(dpath, 'arr.h5')
>>> nh.util.write_arr(fpath, arr)
>>> arr2 = nh.util.read_arr(fpath)
>>> assert np.all(arr == arr2)
"""
if fpath.endswith('.npy'):
return np.load(fpath)
elif fpath.endswith('.h5'):
return read_h5arr(fpath)
else:
raise KeyError(fpath)
def write_arr(fpath, arr):
if fpath.endswith('.npy'):
return np.save(fpath, arr)
elif fpath.endswith('.h5'):
return write_h5arr(fpath, arr)
else:
raise KeyError(fpath)
| [
"jon.crall@kitware.com"
] | jon.crall@kitware.com |
9e21f133bfc9f87dea5673c020cb62a394593f31 | ef243d91a1826b490e935fa3f3e6c29c3cc547d0 | /PyQt5/QtNetwork/QSslKey.py | c5246b49969c93f51c37ee8902d23b3a1e0668d5 | [] | no_license | VentiFang/Python_local_module | 6b3d0b22399e817057dfd15d647a14bb1e41980e | c44f55379eca2818b29732c2815480ee755ae3fb | refs/heads/master | 2020-11-29T11:24:54.932967 | 2019-12-25T12:57:14 | 2019-12-25T12:57:14 | 230,101,875 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,212 | py | # encoding: utf-8
# module PyQt5.QtNetwork
# from F:\Python\Python36\lib\site-packages\PyQt5\QtNetwork.pyd
# by generator 1.147
# no doc
# imports
import enum as __enum
import PyQt5.QtCore as __PyQt5_QtCore
import sip as __sip
class QSslKey(__sip.simplewrapper):
"""
QSslKey()
QSslKey(Union[QByteArray, bytes, bytearray], QSsl.KeyAlgorithm, encoding: QSsl.EncodingFormat = QSsl.Pem, type: QSsl.KeyType = QSsl.PrivateKey, passPhrase: Union[QByteArray, bytes, bytearray] = QByteArray())
QSslKey(QIODevice, QSsl.KeyAlgorithm, encoding: QSsl.EncodingFormat = QSsl.Pem, type: QSsl.KeyType = QSsl.PrivateKey, passPhrase: Union[QByteArray, bytes, bytearray] = QByteArray())
QSslKey(sip.voidptr, type: QSsl.KeyType = QSsl.PrivateKey)
QSslKey(QSslKey)
"""
def algorithm(self): # real signature unknown; restored from __doc__
""" algorithm(self) -> QSsl.KeyAlgorithm """
pass
def clear(self): # real signature unknown; restored from __doc__
""" clear(self) """
pass
def handle(self): # real signature unknown; restored from __doc__
""" handle(self) -> sip.voidptr """
pass
def isNull(self): # real signature unknown; restored from __doc__
""" isNull(self) -> bool """
return False
def length(self): # real signature unknown; restored from __doc__
""" length(self) -> int """
return 0
def swap(self, QSslKey): # real signature unknown; restored from __doc__
""" swap(self, QSslKey) """
pass
def toDer(self, passPhrase, QByteArray=None, bytes=None, bytearray=None, *args, **kwargs): # real signature unknown; NOTE: unreliably restored from __doc__
""" toDer(self, passPhrase: Union[QByteArray, bytes, bytearray] = QByteArray()) -> QByteArray """
pass
def toPem(self, passPhrase, QByteArray=None, bytes=None, bytearray=None, *args, **kwargs): # real signature unknown; NOTE: unreliably restored from __doc__
""" toPem(self, passPhrase: Union[QByteArray, bytes, bytearray] = QByteArray()) -> QByteArray """
pass
def type(self): # real signature unknown; restored from __doc__
""" type(self) -> QSsl.KeyType """
pass
def __eq__(self, *args, **kwargs): # real signature unknown
""" Return self==value. """
pass
def __ge__(self, *args, **kwargs): # real signature unknown
""" Return self>=value. """
pass
def __gt__(self, *args, **kwargs): # real signature unknown
""" Return self>value. """
pass
def __init__(self, *__args): # real signature unknown; restored from __doc__ with multiple overloads
pass
def __le__(self, *args, **kwargs): # real signature unknown
""" Return self<=value. """
pass
def __lt__(self, *args, **kwargs): # real signature unknown
""" Return self<value. """
pass
def __ne__(self, *args, **kwargs): # real signature unknown
""" Return self!=value. """
pass
__weakref__ = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""list of weak references to the object (if defined)"""
__hash__ = None
| [
"5149528+ventifang@user.noreply.gitee.com"
] | 5149528+ventifang@user.noreply.gitee.com |
e5b0008119735c1b9e21aa25ea5e1a98dd42cef7 | 2a4a17a67b9069c19396c0f8eabc8b7c4b6ff703 | /BGP3D/Chapter05/AdditionalCode/MarkerClass.py | 168ea6c1929f070d29038c045e9feff5ed2cc174 | [] | no_license | kaz101/panda-book | 0fa273cc2df5849507ecc949b4dde626241ffa5e | 859a759c769d9c2db0d11140b0d04506611c2b7b | refs/heads/master | 2022-12-19T09:36:05.794731 | 2020-09-16T19:04:10 | 2020-09-16T19:04:10 | 295,784,057 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,292 | py | '''
This class is used to define the markers placed along the track. These markers are
referenced by multiple parts of the game in order to understand the shape and position
of parts of the track.
'''
from pandac.PandaModules import Vec3
class Marker:
def __init__(self, pos):
self.lane = 0
# Creates a variable to store the number ID of the lane this marker is in.
self.index = 0
# Creates a variable to store the number ID of the marker within it's lane.
self.np = render.attachNewNode("MarkerNP")
self.np.setPos(pos.getX(), pos.getY(), pos.getZ())
# Creates and positions a proxy NodePath to represent the marker's position in
# space.
self.nextMarker = None
self.prevMarker = None
# Creates variables to store the next and previous markers in the lane.
self.adjMarkers = []
# Creates a list to reference the markers that are adjacent to this one.
self.facingVec = Vec3(0,1,0)
self.cycleVec = Vec3(0,0,0)
# Some functions of the marker will need vectors. This creates them ahead of time.
def getPos(self, ref = None):
if(ref == None): return(self.np.getPos())
else: return(self.np.getPos(ref))
return
# Returns the position of the marker's NodePath.
def getHpr(self, ref = None):
if(ref == None): return(self.np.getHpr())
else: return(self.np.getHpr(ref))
return
# Returns the heading, pitch, and roll of the marker's NodePath.
def setFacing(self):
nmp = self.nextMarker.getPos()
self.np.lookAt(nmp.getX(), nmp.getY(), self.np.getPos().getZ())
return
# Forces the marker to face directly toward the next marker in the lane.
def checkInFront(self, cycle):
cyclePos = cycle.root.getPos(self.np)
self.cycleVec.set(cyclePos.getX(), cyclePos.getY(), self.np.getZ())
self.cycleVec.normalize()
# Gets the directional vector to the cycle and normalizes it.
cycleAngle = self.facingVec.angleDeg(self.cycleVec)
# Gets the angle between the marker's facing and the direction to the cycle.
if(cycleAngle > 90): return(False)
else: return(True)
# Returns True if the cycle is in front of the marker or False if it is behind it.
def killMe(self):
self.np.removeNode()
return
# Removes the marker's NodePath from the scene.
| [
"kaz101130@gmail.com"
] | kaz101130@gmail.com |
a42151ab93472a9bd2dbfd2ba188557e5d608af4 | fea44d5ca4e6c9b2c7950234718a4531d453849e | /sktime/regression/deep_learning/tapnet.py | 0dc3c34a9c8f697072fb87309012e68fefb0f28b | [
"BSD-3-Clause"
] | permissive | mlgig/sktime | 288069ab8c9b0743113877032dfca8cf1c2db3fb | 19618df351a27b77e3979efc191e53987dbd99ae | refs/heads/master | 2023-03-07T20:22:48.553615 | 2023-02-19T18:09:12 | 2023-02-19T18:09:12 | 234,604,691 | 1 | 0 | BSD-3-Clause | 2020-01-17T17:50:12 | 2020-01-17T17:50:11 | null | UTF-8 | Python | false | false | 9,331 | py | # -*- coding: utf-8 -*-
"""Time series attentional prototype network (TapNet) for regression."""
__author__ = [
"Jack Russon",
]
__all__ = [
"TapNetRegressor",
]
from copy import deepcopy
from sklearn.utils import check_random_state
from sktime.networks.tapnet import TapNetNetwork
from sktime.regression.deep_learning.base import BaseDeepRegressor
from sktime.utils.validation._dependencies import _check_dl_dependencies
class TapNetRegressor(BaseDeepRegressor):
"""Time series attentional prototype network (TapNet), as described in [1].
TapNet was initially proposed for multivariate time series
classification. The is an adaptation for time series regression. TapNet comprises
these components: random dimension permutation, multivariate time series
encoding, and attentional prototype learning.
Parameters
----------
filter_sizes : array of int, default = (256, 256, 128)
sets the kernel size argument for each convolutional block.
Controls number of convolutional filters
and number of neurons in attention dense layers.
kernel_size : array of int, default = (8, 5, 3)
controls the size of the convolutional kernels
layers : array of int, default = (500, 300)
size of dense layers
n_epochs : int, default = 2000
number of epochs to train the model
batch_size : int, default = 16
number of samples per update
dropout : float, default = 0.5
dropout rate, in the range [0, 1)
dilation : int, default = 1
dilation value
    activation : str or None, default = None
        activation function for the last output layer; None means a linear output
loss : str, default = "mean_squared_error"
loss function for the classifier
optimizer : str or None, default = "Adam(lr=0.01)"
gradient updating function for the classifier
use_bias : bool, default = True
whether to use bias in the output dense layer
use_rp : bool, default = True
whether to use random projections
use_att : bool, default = True
whether to use self attention
use_lstm : bool, default = True
whether to use an LSTM layer
use_cnn : bool, default = True
whether to use a CNN layer
verbose : bool, default = False
whether to output extra information
random_state : int or None, default = None
seed for random
References
----------
.. [1] Zhang et al. Tapnet: Multivariate time series classification with
attentional prototypical network,
Proceedings of the AAAI Conference on Artificial Intelligence
34(4), 6845-6852, 2020
Notes
-----
    The implementation of TapNet can be found at
    https://github.com/kdd2019-tapnet/tapnet. This version currently does not
    implement the custom distance-matrix loss function or class-based self
    attention.
"""
_tags = {"python_dependencies": "tensorflow"}
def __init__(
self,
n_epochs=2000,
batch_size=16,
dropout=0.5,
filter_sizes=(256, 256, 128),
kernel_size=(8, 5, 3),
dilation=1,
layers=(500, 300),
use_rp=True,
activation=None,
rp_params=(-1, 3),
use_bias=True,
use_att=True,
use_lstm=True,
use_cnn=True,
random_state=None,
padding="same",
loss="mean_squared_error",
optimizer=None,
metrics=None,
callbacks=None,
verbose=False,
):
_check_dl_dependencies(severity="error")
super(TapNetRegressor, self).__init__()
self.batch_size = batch_size
self.random_state = random_state
self.kernel_size = kernel_size
self.layers = layers
self.rp_params = rp_params
self.filter_sizes = filter_sizes
self.activation = activation
self.use_att = use_att
self.use_bias = use_bias
self.dilation = dilation
self.padding = padding
self.n_epochs = n_epochs
self.loss = loss
self.optimizer = optimizer
self.metrics = metrics
self.callbacks = callbacks
self.verbose = verbose
self.dropout = dropout
self.use_lstm = use_lstm
self.use_cnn = use_cnn
# parameters for random projection
self.use_rp = use_rp
self.rp_params = rp_params
self._network = TapNetNetwork(
dropout=self.dropout,
filter_sizes=self.filter_sizes,
kernel_size=self.kernel_size,
dilation=self.dilation,
layers=self.layers,
use_rp=self.use_rp,
rp_params=self.rp_params,
use_att=self.use_att,
use_lstm=self.use_lstm,
use_cnn=self.use_cnn,
random_state=self.random_state,
padding=self.padding,
)
def build_model(self, input_shape, **kwargs):
"""Construct a complied, un-trained, keras model that is ready for training.
In sktime, time series are stored in numpy arrays of shape (d,m), where d
is the number of dimensions, m is the series length. Keras/tensorflow assume
data is in shape (m,d). This method also assumes (m,d). Transpose should
happen in fit.
Parameters
----------
input_shape : tuple
The shape of the data fed into the input layer, should be (m, d)
Returns
-------
output: a compiled Keras model
"""
import tensorflow as tf
from tensorflow import keras
tf.random.set_seed(self.random_state)
metrics = ["mean_squared_error"] if self.metrics is None else self.metrics
input_layer, output_layer = self._network.build_network(input_shape, **kwargs)
output_layer = keras.layers.Dense(
units=1, activation=self.activation, use_bias=self.use_bias
)(output_layer)
self.optimizer_ = (
keras.optimizers.Adam(learning_rate=0.01)
if self.optimizer is None
else self.optimizer
)
model = keras.models.Model(inputs=input_layer, outputs=output_layer)
model.compile(
loss=self.loss,
optimizer=self.optimizer_,
metrics=metrics,
)
return model
def _fit(self, X, y):
"""
Fit the regressor on the training set (X, y).
Parameters
----------
X : np.ndarray of shape = (n_instances(n), n_dimensions(d), series_length(m))
Input training samples
y : np.ndarray of shape n
Input training responses
Returns
-------
self: object
"""
# Transpose to conform to expectation format from keras
X = X.transpose(0, 2, 1)
check_random_state(self.random_state)
self.input_shape = X.shape[1:]
self.model_ = self.build_model(self.input_shape)
if self.verbose:
self.model_.summary()
self.history = self.model_.fit(
X,
y,
batch_size=self.batch_size,
epochs=self.n_epochs,
verbose=self.verbose,
callbacks=deepcopy(self.callbacks) if self.callbacks else [],
)
return self
@classmethod
def get_test_params(cls, parameter_set="default"):
"""Return testing parameter settings for the estimator.
Parameters
----------
parameter_set : str, default="default"
Name of the set of test parameters to return, for use in tests. If no
special parameters are defined for a value, will return `"default"` set.
For classifiers, a "default" set of parameters should be provided for
general testing, and a "results_comparison" set for comparing against
previously recorded results if the general set does not produce suitable
probabilities to compare against.
Returns
-------
params : dict or list of dict, default={}
Parameters to create testing instances of the class.
Each dict are parameters to construct an "interesting" test instance, i.e.,
`MyClass(**params)` or `MyClass(**params[i])` creates a valid test instance.
`create_test_instance` uses the first (or only) dictionary in `params`.
"""
from sktime.utils.validation._dependencies import _check_soft_dependencies
param1 = {
"n_epochs": 10,
"batch_size": 4,
"padding": "valid",
"filter_sizes": (16, 16, 16),
"kernel_size": (3, 3, 1),
"layers": (25, 50),
}
param2 = {
"n_epochs": 20,
"use_cnn": False,
"layers": (25, 25),
}
test_params = [param1, param2]
if _check_soft_dependencies("keras", severity="none"):
from keras.callbacks import LambdaCallback
test_params.append(
{
"n_epochs": 2,
"callbacks": [LambdaCallback()],
}
)
return test_params
| [
"noreply@github.com"
] | mlgig.noreply@github.com |
3c5bd952d9054dcd4e8a2ac805219cc5d33ea51d | 009f8f1f77ef4bdfd67999058d33bccd8e743654 | /leetcode/1470_shuffle_the_array.py | 378d8363f5c46ac30fe2cf886b70ecbc5fe218d2 | [
"MIT"
] | permissive | coocos/leetcode | 4bd5faa46e916af666db025c8182f1c30cd49ee3 | 007bbeb46fa4b32e1c92fc894edeb2100eb6ba21 | refs/heads/master | 2021-06-13T13:11:10.699941 | 2021-05-01T05:55:12 | 2021-05-01T07:17:52 | 192,551,170 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 670 | py | import unittest
from typing import List
class Solution:
"""
This solution simply slices the list into two, zips the slices together
and then keeps appending numbers from the zipped iterable to a new list, finally
returning the new list.
"""
def shuffle(self, nums: List[int], n: int) -> List[int]:
shuffled = []
for x, y in zip(nums[:n], nums[n:]):
shuffled.append(x)
shuffled.append(y)
return shuffled
class TestSolution(unittest.TestCase):
def test_first_example(self):
self.assertListEqual(
            Solution().shuffle([2, 5, 1, 3, 4, 7], 3), [2, 3, 5, 4, 1, 7]
        )


if __name__ == "__main__":
    unittest.main()
| [
"1397804+coocos@users.noreply.github.com"
] | 1397804+coocos@users.noreply.github.com |
ade92b08072be4fbfd43b37ed4a6fcf0b292740d | 68f836bf5d9f849722c322e9207d842c766cac6f | /backend/project_notes.py | 90146bb70cf48ade623b4dd2778474b8661a07d0 | [
"MIT"
] | permissive | valmsmith39a/u-p2-trivia-api | 061f9fd216bb93f58a83b20413dcaacbf0f36239 | 0c4a66a97af13b76a55bac6e210566f1258ea9ba | refs/heads/main | 2023-07-16T02:40:42.623996 | 2021-08-28T22:41:46 | 2021-08-28T22:41:46 | 380,091,249 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,495 | py | """
@TODO: DONE: Set up CORS. Allow '*' for origins. Delete the sample route after completing the TODOs
"""
"""
@TODO: DONE: Use the after_request decorator to set Access-Control-Allow
"""
"""
@TODO: DONE
Create an endpoint to handle GET requests
for all available categories.
"""
"""
@TODO: DONE
Create an endpoint to handle GET requests for questions,
including pagination (every 10 questions).
This endpoint should return a list of questions,
number of total questions, current category, categories.
TEST: At this point, when you start the application
you should see questions and categories generated,
ten questions per page and pagination at the bottom of the screen for three pages.
Clicking on the page numbers should update the questions.
"""
"""
@TODO: DONE
Create an endpoint to DELETE question using a question ID.
TEST: When you click the trash icon next to a question, the question will be removed.
This removal will persist in the database and when you refresh the page.
"""
"""
@TODO: DONE
Create an endpoint to POST a new question,
which will require the question and answer text,
category, and difficulty score.
TEST: When you submit a question on the "Add" tab,
the form will clear and the question will appear at the end of the last page
of the questions list in the "List" tab.
"""
"""
@TODO: DONE
Create a POST endpoint to get questions based on a search term.
It should return any questions for whom the search term
is a substring of the question.
TEST: Search by any phrase. The questions list will update to include
only question that include that string within their question.
Try using the word "title" to start.
"""
"""
@TODO: DONE
Create a GET endpoint to get questions based on category.
TEST: In the "List" tab / main screen, clicking on one of the
categories in the left column will cause only questions of that
category to be shown.
"""
"""
@TODO: DONE
Create a POST endpoint to get questions to play the quiz.
This endpoint should take category and previous question parameters
and return a random questions within the given category,
if provided, and that is not one of the previous questions.
TEST: In the "Play" tab, after a user selects "All" or a category,
one question at a time is displayed, the user is allowed to answer
and shown whether they were correct or not.
"""
"""
@TODO:
Create error handlers for all expected errors
including 404 and 422.
"""
| [
"valmsmith39a@gmail.com"
] | valmsmith39a@gmail.com |
555460ca29e841d4a013d93d658324dae5e47e2a | 4d2238210813c1581bf44f64d8a63196f75d2df4 | /getspecialpath.py | 3c73e511700aad0decdddd01f448faddafea15e9 | [] | no_license | wwtang/code02 | b1600d34907404c81fa523cfdaa74db0021b8bb3 | 9f03dda7b339d8c310c8a735fc4f6d795b153801 | refs/heads/master | 2020-12-24T14:10:33.738734 | 2012-12-14T04:24:47 | 2012-12-14T04:24:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 356 | py | # LAB(begin solution)
import os
import re


def get_special_paths(dirname):
"""Given a dirname, returns a list of all its special files."""
result = []
paths = os.listdir(dirname) # list of paths in that dir
for fname in paths:
match = re.search(r'__(\w+)__', fname)
if match:
result.append(os.path.abspath(os.path.join(dirname, fname)))
return result
| [
"andytang1994@gmail.com"
] | andytang1994@gmail.com |
9a2eecac8235d57fb18fba8bf51a48013a539f4d | f68732bc40a7a90c3a1082e4b3a4154518acafbb | /script/dbus/sessionBus/daemonLauncher/010_requestRemoveFromDesktop_01.py | 970c505fbf5f8c7eb2441038c3f4030b30cfba08 | [] | no_license | lizhouquan1017/dbus_demo | 94238a2307e44dabde9f4a4dd0cf8ec217260867 | af8442845e722b258a095e9a1afec9dddfb175bf | refs/heads/master | 2023-02-11T19:46:27.884936 | 2021-01-08T05:27:18 | 2021-01-08T05:27:18 | 327,162,635 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,232 | py | # -*- coding: utf-8 -*-
# ***************************************************
# @Test Case ID: 010_requestRemoveFromDesktop_01
# @Test Description: 请求将指定id的程序桌面图标删除
# @Test Condition: 桌面图标不存在桌面
# @Test Step: 1.清除应用程序桌面图标
# 2.调用RequestRemoveFromDesktop接口,请求将指定id的程序桌面图标删除
# @Test Result: 2.报错
# @Test Remark:
# @Author: ut001627
# ***************************************************
import time
import pytest
from frame.base import OSBase
from aw.dbus.sessionBus.daemonLauncher import requestRemoveFromDesktop
class TestCase(OSBase):
def setUp(self):
self.app_id = 'dde-file-manager'
        self.Step(f'Step 1: remove the {self.app_id} desktop icon')
requestRemoveFromDesktop(app_id=self.app_id, ignore=True)
time.sleep(2)
@pytest.mark.public
def test_step(self):
        self.Step(f'Step 2: call the RequestRemoveFromDesktop interface to remove the {self.app_id} desktop icon')
assert requestRemoveFromDesktop(app_id=self.app_id, is_exists=False)
time.sleep(2)
def tearDown(self):
pass
| [
"lizhouquan@uniontech.com"
] | lizhouquan@uniontech.com |
e176c56fefcc0010af610eee90e483989a3c41ca | da1721d2783ea4d67ff4e73cee6eee71292f2ef7 | /toontown/cogdominium/CogdoLevelMgr.py | 92a534dd21bec6683b43174708b853632a575be3 | [
"BSD-3-Clause"
] | permissive | open-toontown/open-toontown | bbdeb1b7bf0fb2861eba2df5483738c0112090ca | 464c2d45f60551c31397bd03561582804e760b4a | refs/heads/develop | 2023-07-07T01:34:31.959657 | 2023-05-30T23:49:10 | 2023-05-30T23:49:10 | 219,221,570 | 143 | 104 | BSD-3-Clause | 2023-09-11T09:52:34 | 2019-11-02T22:24:38 | Python | UTF-8 | Python | false | false | 174 | py | from otp.level import LevelMgr
from direct.showbase.PythonUtil import Functor
from toontown.toonbase import ToontownGlobals
class CogdoLevelMgr(LevelMgr.LevelMgr):
pass
| [
"jwcotejr@gmail.com"
] | jwcotejr@gmail.com |
5058c60cc266058372e61cb9d6b143206790c3ee | 21590487701d2dcbe1a1c1dd81c6e983f7523cb6 | /exporter/opentelemetry-exporter-zipkin/tests/test_zipkin.py | fa9c3ecf48fc24cc0b29f54f38c2572ee403a074 | [
"Apache-2.0"
] | permissive | open-telemetry/opentelemetry-python | 837199e541c03cff311cad075401791ee2a23583 | d8490c5f557dd7005badeb800095cb51b553c98c | refs/heads/main | 2023-08-26T06:47:23.837997 | 2023-08-17T22:35:13 | 2023-08-17T22:35:13 | 185,478,926 | 1,361 | 668 | Apache-2.0 | 2023-09-14T20:48:40 | 2019-05-07T21:13:30 | Python | UTF-8 | Python | false | false | 963 | py | # Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from opentelemetry.exporter.zipkin import json
from opentelemetry.exporter.zipkin.proto import http
class TestZipkinExporter(unittest.TestCase):
def test_constructors(self):
try:
json.ZipkinExporter()
http.ZipkinExporter()
except Exception as exc: # pylint: disable=broad-except
self.assertIsNone(exc)
| [
"noreply@github.com"
] | open-telemetry.noreply@github.com |
015bb44013fbc1fccd818082deadaa0ba3e0b855 | a2c66a7bb326c51df39e058262101737ea1faea1 | /tensorflow/python/data/kernel_tests/shuffle_test.py | 05d5d814c01d2640b8d34f937393df6256cde665 | [
"Apache-2.0"
] | permissive | CarlosGarciaMX/tensorflow | 406518d0fc34c047b97405aae9c1f818e8cfb3b7 | a4f06947c61ef6f69057f1207c2532a3551ff4f0 | refs/heads/master | 2021-10-02T22:44:22.191831 | 2018-12-01T19:54:43 | 2018-12-01T19:59:34 | 160,002,214 | 0 | 0 | Apache-2.0 | 2018-12-02T01:13:10 | 2018-12-02T01:13:10 | null | UTF-8 | Python | false | false | 9,641 | py | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for `tf.data.Dataset.shuffle()`."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
from absl.testing import parameterized
import numpy as np
from tensorflow.python.data.kernel_tests import test_base
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import errors
from tensorflow.python.framework import ops
from tensorflow.python.framework import random_seed
from tensorflow.python.framework import test_util
from tensorflow.python.ops import array_ops
from tensorflow.python.platform import test
@test_util.run_all_in_graph_and_eager_modes
class ShuffleTest(test_base.DatasetTestBase, parameterized.TestCase):
def testShuffleDataset(self):
components = (
np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]),
np.array([9.0, 10.0, 11.0, 12.0])
)
def dataset_fn(count=5, buffer_size=None, seed=0):
repeat_dataset = (
dataset_ops.Dataset.from_tensor_slices(components).repeat(count))
if buffer_size:
shuffle_dataset = repeat_dataset.shuffle(buffer_size, seed)
self.assertEqual(
tuple([c.shape[1:] for c in components]),
shuffle_dataset.output_shapes)
return shuffle_dataset
else:
return repeat_dataset
# First run without shuffling to collect the "ground truth".
get_next = self.getNext(dataset_fn())
unshuffled_elements = []
for _ in range(20):
unshuffled_elements.append(self.evaluate(get_next()))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
# Assert that the shuffled dataset has the same elements as the
# "ground truth".
get_next = self.getNext(dataset_fn(buffer_size=100, seed=37))
shuffled_elements = []
for _ in range(20):
shuffled_elements.append(self.evaluate(get_next()))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
self.assertAllEqual(sorted(unshuffled_elements), sorted(shuffled_elements))
# Assert that shuffling twice with the same seeds gives the same sequence.
get_next = self.getNext(dataset_fn(buffer_size=100, seed=37))
reshuffled_elements_same_seed = []
for _ in range(20):
reshuffled_elements_same_seed.append(self.evaluate(get_next()))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
self.assertEqual(shuffled_elements, reshuffled_elements_same_seed)
# Assert that shuffling twice with a different seed gives a different
# permutation of the same elements.
get_next = self.getNext(dataset_fn(buffer_size=100, seed=137))
reshuffled_elements_different_seed = []
for _ in range(20):
reshuffled_elements_different_seed.append(self.evaluate(get_next()))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
self.assertNotEqual(shuffled_elements, reshuffled_elements_different_seed)
self.assertAllEqual(
sorted(shuffled_elements), sorted(reshuffled_elements_different_seed))
# Assert that the shuffled dataset has the same elements as the
# "ground truth" when the buffer size is smaller than the input
# dataset.
get_next = self.getNext(dataset_fn(buffer_size=2, seed=37))
reshuffled_elements_small_buffer = []
for _ in range(20):
reshuffled_elements_small_buffer.append(self.evaluate(get_next()))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
self.assertAllEqual(
sorted(unshuffled_elements), sorted(reshuffled_elements_small_buffer))
# Test the case of shuffling an empty dataset.
get_next = self.getNext(dataset_fn(count=0, buffer_size=100, seed=37))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(get_next())
@test_util.run_deprecated_v1
def testSkipEagerSeedZero(self):
"""Test for same behavior when the seed is a Python or Tensor zero."""
iterator = (
dataset_ops.Dataset.range(10).shuffle(10, seed=0)
.make_one_shot_iterator())
get_next = iterator.get_next()
elems = []
with self.cached_session() as sess:
for _ in range(10):
elems.append(sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
seed_placeholder = array_ops.placeholder(dtypes.int64, shape=[])
iterator = (
dataset_ops.Dataset.range(10).shuffle(10, seed=seed_placeholder)
.make_initializable_iterator())
get_next = iterator.get_next()
with self.cached_session() as sess:
sess.run(iterator.initializer, feed_dict={seed_placeholder: 0})
for elem in elems:
self.assertEqual(elem, sess.run(get_next))
with self.assertRaises(errors.OutOfRangeError):
sess.run(get_next)
def testDefaultArguments(self):
components = [0, 1, 2, 3, 4]
dataset = dataset_ops.Dataset.from_tensor_slices(components).shuffle(
5).repeat()
get_next = self.getNext(dataset)
counts = collections.defaultdict(lambda: 0)
for _ in range(10):
for _ in range(5):
counts[self.evaluate(get_next())] += 1
for i in range(5):
self.assertEqual(10, counts[i])
def testShuffleNoReshuffleEachIteration(self):
dataset = dataset_ops.Dataset.range(10).shuffle(
10, reshuffle_each_iteration=False).batch(10).repeat(3)
next_element = self.getNext(dataset)
initial_permutation = self.evaluate(next_element())
self.assertAllEqual(initial_permutation, self.evaluate(next_element()))
self.assertAllEqual(initial_permutation, self.evaluate(next_element()))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(next_element())
def testShuffleReshuffleEachIteration(self):
dataset = dataset_ops.Dataset.range(10).shuffle(
10, seed=3, reshuffle_each_iteration=True).batch(10).repeat(3)
next_element = self.getNext(dataset)
initial_permutation = list(self.evaluate(next_element()))
for _ in range(2):
next_permutation = list(self.evaluate(next_element()))
self.assertNotEqual(initial_permutation, next_permutation)
self.assertAllEqual(sorted(initial_permutation), sorted(next_permutation))
with self.assertRaises(errors.OutOfRangeError):
self.evaluate(next_element())
@parameterized.named_parameters(
("ReshuffleGraphLevelSeed", True, 38, None),
("ReshuffleOpLevelSeed", True, None, 42),
("ReshuffleGraphAndOpLevelSeed", True, 38, 42),
("NoReshuffleGraphLevelSeed", False, 38, None),
("NoReshuffleOpLevelSeed", False, None, 42),
("NoReshuffleGraphAndOpLevelSeed", False, 38, 42),
)
def testSkipEagerShuffleSeed(self, reshuffle, graph_level_seed,
op_level_seed):
results = []
for _ in range(2):
with ops.Graph().as_default() as g:
random_seed.set_random_seed(graph_level_seed)
dataset = dataset_ops.Dataset.range(10).shuffle(
10, seed=op_level_seed, reshuffle_each_iteration=reshuffle).repeat(
3)
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
run_results = []
with self.session(graph=g) as sess:
for _ in range(30):
run_results.append(sess.run(next_element))
with self.assertRaises(errors.OutOfRangeError):
sess.run(next_element)
results.append(run_results)
self.assertAllEqual(results[0], results[1])
# TODO(b/117581999): fails for eager mode with result[0] equal to result[1],
# debug.
@parameterized.named_parameters(
("ReshuffleOneShot", True, False),
("ReshuffleInitializable", True, True),
("NoReshuffleOneShot", False, False),
("NoReshuffleInitializable", False, True),
)
def testSkipEagerMultipleIterators(self, reshuffle, initializable):
with ops.Graph().as_default() as g:
dataset = dataset_ops.Dataset.range(100).shuffle(
10, reshuffle_each_iteration=reshuffle).repeat(3)
if initializable:
iterators = [dataset.make_initializable_iterator() for _ in range(2)]
else:
iterators = [dataset.make_one_shot_iterator() for _ in range(2)]
results = []
with self.session(graph=g) as sess:
for iterator in iterators:
if initializable:
sess.run(iterator.initializer)
next_element = iterator.get_next()
run_results = []
for _ in range(300):
run_results.append(sess.run(next_element))
with self.assertRaises(errors.OutOfRangeError):
sess.run(next_element)
results.append(run_results)
self.assertNotEqual(results[0], results[1])
if __name__ == "__main__":
test.main()
| [
"gardener@tensorflow.org"
] | gardener@tensorflow.org |
eb86b016fc192ce35a1af4257388a6a1699682b9 | d58f26ef4bfacc50d32306f27ea9628214aa53aa | /panoptes_aggregation/tests/extractor_tests/test_survey_extractor.py | 0f35fc2c2c8c2e1ac6fe86d1d4946c98f49bcb26 | [] | no_license | miclaraia/python-reducers-for-caesar | 893073ce1551d953d82a59bd87dea3deffe5e6ae | f1e28992ae73e131fb400d4b400fdf8d4d597828 | refs/heads/master | 2021-01-01T18:51:04.763162 | 2017-07-25T14:04:06 | 2017-07-25T14:04:06 | 98,448,810 | 0 | 0 | null | 2017-07-26T18:01:03 | 2017-07-26T17:30:17 | Python | UTF-8 | Python | false | false | 1,955 | py | import unittest
import json
import flask
from panoptes_aggregation import extractors
class TestSurveyExtractor(unittest.TestCase):
def setUp(self):
self.classification = {
'annotations': [{
'task': 'T0',
'value': [
{
'choice': 'AGOUTI',
'answers': {'HOWMANY': '1'},
'filters': {}
}, {
'choice': 'PECCARYCOLLARED',
'answers': {'HOWMANY': '3'},
'filters': {}
}, {
'choice': 'NOTHINGHERE',
'answers': {},
'filters': {}
}
]
}]
}
self.expected = [
{
'choice': 'agouti',
'answers.howmany': {'1': 1}
},
{
'choice': 'peccarycollared',
'answers.howmany': {'3': 1}
},
{
'choice': 'nothinghere',
}
]
def test_extract(self):
result = extractors.survey_extractor.classification_to_extract(self.classification)
for i in range(len(result)):
with self.subTest(i=i):
self.assertDictEqual(result[i], self.expected[i])
def test_request(self):
request_kwargs = {
'data': json.dumps(self.classification),
'content_type': 'application/json'
}
app = flask.Flask(__name__)
with app.test_request_context(**request_kwargs):
result = extractors.survey_extractor.survey_extractor_request(flask.request)
for i in range(len(result)):
with self.subTest(i=i):
self.assertDictEqual(result[i], self.expected[i])
if __name__ == '__main__':
unittest.main()
| [
"coleman.krawczyk@gmail.com"
] | coleman.krawczyk@gmail.com |
fc81aba1d48898a177ddd6d8431ac9006754f7a4 | d27af9d58b91b8cd998ac0eb87d980d304ff0670 | /Beginner-Contest/ABC106/ABC106_A.py | 7438576edb2f75a8a2bd207eb097f09e220ae881 | [] | no_license | mongesan/Atcoder-m0_ngesan-py | 29dd79daab149003ffc8b6b6bad5fa2e7daa9646 | 6654af034d4ff4cece1be04c2c8b756976d99a4b | refs/heads/master | 2023-08-20T19:50:04.547025 | 2021-10-27T12:24:51 | 2021-10-27T12:24:51 | 258,486,105 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9 | py | #ABC106_A | [
"syun1.mongesan@gmail.com"
] | syun1.mongesan@gmail.com |
50394fff8cb70405e3d6a9d2739f994d49fa25ba | 37edb81c3fb3dfc09d14bb228e5f8df610d21c50 | /source/_static/lecture_specific/mccall/mccall_resw_c.py | 1cf5709382e34082f3a31af6fd0fc53a6fe3ee7f | [
"BSD-3-Clause"
] | permissive | bktaha/lecture-python | 68cb003640323d8600b1396898f2fb383b417769 | b13db12120b977d1c661c9a68c5b8ddfc1b6dc89 | refs/heads/master | 2022-11-02T23:05:32.647316 | 2020-06-09T09:15:18 | 2020-06-09T09:15:18 | 272,172,927 | 0 | 0 | BSD-3-Clause | 2020-06-14T09:44:53 | 2020-06-14T09:44:52 | null | UTF-8 | Python | false | false | 535 | py | grid_size = 25
c_vals = np.linspace(2, 12, grid_size) # values of unemployment compensation
w_bar_vals = np.empty_like(c_vals)
mcm = McCallModel()
fig, ax = plt.subplots(figsize=(10, 6))
for i, c in enumerate(c_vals):
mcm.c = c
w_bar = compute_reservation_wage(mcm)
w_bar_vals[i] = w_bar
ax.set_xlabel('unemployment compensation')
ax.set_ylabel('reservation wage')
txt = r'$\bar w$ as a function of $c$'
ax.plot(c_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
ax.legend(loc='upper left')
ax.grid()
plt.show()
| [
"mamckay@gmail.com"
] | mamckay@gmail.com |
30eee56cb38f4efbea3d3964f8dd219c0e33b62f | 4c38cf22642d01720b41b0700b17c481da47ac1c | /实战项目/家用电器用户行为分析与事件识别/阈值寻优模型.py | 99f676399cef2aa5ccd7d503f9f034cfcf8c4d3a | [] | no_license | fwmmmrm/DataMining-2 | ff2e2d75e32e54e629accc6209c23604637c6afb | c20431d90264d84fcee9ddc1dd2d71db16b36632 | refs/heads/master | 2023-02-03T12:53:01.006916 | 2020-12-19T06:26:51 | 2020-12-19T06:26:51 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,182 | py | # -*- coding: utf-8 -*-
"""
Threshold optimization over the 1-9 minute range
"""
import numpy as np
import pandas as pd
def event_num(ts):
'''
    Get the number of events for a given time threshold
:param ts:
:return:
'''
d = data[u'发生时间'].diff() > ts
return d.sum() + 1
if __name__ == '__main__':
inputfile = 'data/water_heater.xls'
    # use the average slope of the following four points
n = 4
threshold = pd.Timedelta(minutes=5)
data = pd.read_excel(inputfile)
data[u'发生时间'] = pd.to_datetime(data[u'发生时间'], format='%Y%m%d%H%M%S')
data = data[data[u'水流量'] > 0]
dt = [pd.Timedelta(minutes=i) for i in np.arange(1, 9, 0.25)]
    # define the threshold column
h = pd.DataFrame(dt, columns=[u'阈值'])
    # compute the number of events for each threshold
h[u'事件数'] = h[u'阈值'].apply(event_num)
    # compute the slope between each pair of adjacent points
h[u'斜率'] = h[u'事件数'].diff()/0.25
    # use the mean of the absolute slopes of the last n points as the slope indicator
h[u'斜率指标'] = pd.DataFrame(h[u'斜率'].abs()[len(h)-n:]).rolling(2).mean()
ts = h[u'阈值'][h[u'斜率指标'].idxmin() - n]
if ts > threshold:
ts = pd.Timedelta(minutes=4)
print(ts)
| [
"1695735420@qq.com"
] | 1695735420@qq.com |
3632c60cac2a105337f609999d00fbcb845ae306 | 33836016ea99776d31f7ad8f2140c39f7b43b5fe | /fip_collab/2015_03_25_alpha_Ti_composite_7th/main008.py | 15496dbf4af81557e35e3ca2f7ead381096d4b64 | [] | no_license | earthexploration/MKS-Experimentation | 92a2aea83e041bfe741048d662d28ff593077551 | 9b9ff3b468767b235e7c4884b0ed56c127328a5f | refs/heads/master | 2023-03-17T23:11:11.313693 | 2017-04-24T19:24:35 | 2017-04-24T19:24:35 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,110 | py | # -*- coding: utf-8 -*-
"""
Created on Wed Sep 17 16:46:35 2014
@author: nhpnp3
"""
import time
import fegrab
import microstructure_function as msf
import validation_viz
import results
import calibration
# import matplotlib.pyplot as plt
Hi = 2 # number of distinct local states in microstructure
order = 1 # choose 1, 2 or 7 local neighbors in MKS procedure
el_cal = 21 # number of elements per side of cube for calibration dataset
ns_cal = 399 # total number of samples in calibration dataset
set_id_cal = 'cal008' # set ID for calibration dataset
dir_cal = 'cal008' # directory name for .dat files
el_val = 21 # number of elements per side of cube for validation dataset
ns_val = 400 # total number of samples in validation dataset
set_id_val = 'val008' # set ID for validation dataset
dir_val = 'val008' # directory name for .dat files
doplt = 0 # if plotting of results desired set doplt = 1
# if doplt == 1:
# plt.close('all')
wrt_file = 'log_order%s_%s%s_%s%s_%s.txt' % \
(order, ns_cal, set_id_cal, ns_val, set_id_val,
time.strftime("%Y-%m-%d_h%Hm%M"))
# TOTAL CALIBRATION PROCEDURE
# Read the calibration microstructures and build the microstructure function
H = msf.msf(el_cal, ns_cal, Hi, order, set_id_cal, wrt_file)
# Read the responses from the FE .dat files and perform the fftn for the
# calibration
fegrab.fegrab(el_cal, ns_cal, set_id_cal, dir_cal, wrt_file)
# Perform the calibration
calibration.calibration_main(el_cal, ns_cal, H, set_id_cal, wrt_file)
# # TOTAL VALIDATION PROCEDURE
# Read the validation microstructures and build the microstructure function
H = msf.msf(el_val, ns_val, Hi, order, set_id_val, wrt_file)
# Read the responses from the FE .dat files and perform the fftn for the
# validation
fegrab.fegrab(el_val, ns_val, set_id_val, dir_val, wrt_file)
# Perform the validation
validation_viz.validation_zero_pad(el_cal, el_val, ns_cal, ns_val, H,
set_id_cal, set_id_val, wrt_file)
# Calculate the results of the validation
results.results(el_val, ns_val, set_id_val, 'epsilon', doplt, wrt_file)
| [
"noahhpaulson@gmail.com"
] | noahhpaulson@gmail.com |
edfc5ecf4e08ebc51d70cccfa64033266fcd94d5 | f00ae2cb4709539e8a78247678d9bb51913e0373 | /oacids/exported/ifaces.py | 6291040a82a66812250500bc05295488b82dd8c9 | [
"MIT"
] | permissive | openaps/oacids | 576351d34d51c62492fc0ed8be5e786273f27aee | ed8d6414171f45ac0c33636b5b00013e462e89fb | refs/heads/master | 2021-01-10T06:03:53.395357 | 2016-03-21T04:02:47 | 2016-03-21T04:02:47 | 51,559,470 | 2 | 2 | null | null | null | null | UTF-8 | Python | false | false | 390 | py |
# BUS='org.openaps.oacids'
BUS='org.openaps'
IFACE='org.openaps.Service'
ObjectManager = 'org.freedesktop.DBus.ObjectManager'
PATH_BASE='/org/openaps'
PATH='/org/openaps/Services'
ManagerPath = PATH
INTROSPECTABLE_IFACE='org.freedesktop.DBus.Introspectable'
# class WithProperties (dbus.service.Object):
TRIGGER_IFACE = IFACE + '.Trigger'
OPENAPS_IFACE='org.openaps.Service.Instance'
| [
"bewest@gmail.com"
] | bewest@gmail.com |
76974675ed4f3f5348441293a1ee00f825fc5953 | 7173978a10fc738ef6ee1d7c63e608da9a715d19 | /keyboards/inline/category_industrial_buttons.py | 2b046d54afb57b03c1337c4e5267bdcfa52dec6b | [] | no_license | stillnurs/telegram_bot | 80498de7489a3d99f0288f4c96ffe0b5a6ece937 | e7c3cd84f78d24484703c95e9d596ac682ef10a5 | refs/heads/master | 2023-02-09T19:54:08.919475 | 2020-12-28T19:43:44 | 2020-12-28T19:43:44 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 281 | py | from aiogram.types import InlineKeyboardMarkup, InlineKeyboardButton
category_industrial_choice = InlineKeyboardMarkup(inline_keyboard=[
[
        InlineKeyboardButton(text="Общие вопросы",  # "General questions"
callback_data="about_industrial")
]
],
)
| [
"noorsultan.mamataliev@gmail.com"
] | noorsultan.mamataliev@gmail.com |
e36d2752f8590ae3619472a4a1fbd8087e52f017 | a9e3f3ad54ade49c19973707d2beb49f64490efd | /Part-03-Understanding-Software-Crafting-Your-Own-Tools/models/edx-platform/lms/djangoapps/commerce/api/v0/tests/test_views.py | 5a1e47cb7149aa90340075ae66457392dab67365 | [
"MIT",
"AGPL-3.0-only",
"AGPL-3.0-or-later"
] | permissive | luque/better-ways-of-thinking-about-software | 8c3dda94e119f0f96edbfe5ba60ca6ec3f5f625d | 5809eaca7079a15ee56b0b7fcfea425337046c97 | refs/heads/master | 2021-11-24T15:10:09.785252 | 2021-11-22T12:14:34 | 2021-11-22T12:14:34 | 163,850,454 | 3 | 1 | MIT | 2021-11-22T12:12:31 | 2019-01-02T14:21:30 | JavaScript | UTF-8 | Python | false | false | 12,110 | py | """ Commerce API v0 view tests. """
import itertools
import json
from datetime import datetime, timedelta
from unittest import mock
from uuid import uuid4
import ddt
import pytz
from django.conf import settings
from django.test import TestCase
from django.test.utils import override_settings
from django.urls import reverse, reverse_lazy
from common.djangoapps.course_modes.models import CourseMode
from common.djangoapps.course_modes.tests.factories import CourseModeFactory
from common.djangoapps.student.models import CourseEnrollment
from common.djangoapps.student.tests.tests import EnrollmentEventTestMixin # lint-amnesty, pylint: disable=unused-import
from openedx.core.djangoapps.embargo.test_utils import restrict_course
from openedx.core.djangoapps.enrollments.api import get_enrollment
from openedx.core.lib.django_test_client_utils import get_absolute_url
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
from xmodule.modulestore.tests.factories import CourseFactory
from ....constants import Messages
from ....tests.mocks import mock_basket_order
from ....tests.test_views import UserMixin
UTM_COOKIE_NAME = 'edx.test.utm'
UTM_COOKIE_CONTENTS = {
'utm_source': 'test-source'
}
@ddt.ddt
class BasketsViewTests(UserMixin, ModuleStoreTestCase):
"""
Tests for the commerce Baskets view.
"""
def _post_to_view(self, course_id=None, marketing_email_opt_in=False, include_utm_cookie=False):
"""
POST to the view being tested.
Arguments
course_id (str) -- ID of course for which a seat should be ordered.
:return: Response
"""
payload = {
"course_id": str(course_id or self.course.id)
}
if marketing_email_opt_in:
payload["email_opt_in"] = True
if include_utm_cookie:
self.client.cookies[UTM_COOKIE_NAME] = json.dumps(UTM_COOKIE_CONTENTS)
return self.client.post(self.url, payload)
def assertResponseMessage(self, response, expected_msg):
""" Asserts the detail field in the response's JSON body equals the expected message. """
actual = json.loads(response.content.decode('utf-8'))['detail']
assert actual == expected_msg
def setUp(self):
super().setUp()
self.url = reverse('commerce_api:v0:baskets:create')
self._login()
self.course = CourseFactory.create()
# TODO Verify this is the best method to create CourseMode objects.
# TODO Find/create constants for the modes.
for mode in [CourseMode.HONOR, CourseMode.VERIFIED, CourseMode.AUDIT]:
sku_string = str(uuid4().hex)
CourseModeFactory.create(
course_id=self.course.id,
mode_slug=mode,
mode_display_name=mode,
sku=sku_string,
bulk_sku=f'BULK-{sku_string}'
)
@mock.patch.dict(settings.FEATURES, {'EMBARGO': True})
def test_embargo_restriction(self):
"""
The view should return HTTP 403 status if the course is embargoed.
"""
with restrict_course(self.course.id) as redirect_url:
response = self._post_to_view()
assert 403 == response.status_code
body = json.loads(response.content.decode('utf-8'))
assert get_absolute_url(redirect_url) == body['user_message_url']
def test_login_required(self):
"""
The view should return HTTP 401 status if the user is not logged in.
"""
self.client.logout()
assert 401 == self._post_to_view().status_code
@ddt.data('delete', 'get', 'put')
def test_post_required(self, method):
"""
Verify that the view only responds to POST operations.
"""
response = getattr(self.client, method)(self.url)
assert 405 == response.status_code
def test_invalid_course(self):
"""
If the course does not exist, the view should return HTTP 406.
"""
# TODO Test inactive courses, and those not open for enrollment.
assert 406 == self._post_to_view('aaa/bbb/ccc').status_code
def test_invalid_request_data(self):
"""
If invalid data is supplied with the request, the view should return HTTP 406.
"""
assert 406 == self.client.post(self.url, {}).status_code
assert 406 == self.client.post(self.url, {'not_course_id': ''}).status_code
@ddt.data(True, False)
def test_course_for_active_and_inactive_user(self, user_is_active):
"""
Test course enrollment for active and inactive user.
"""
# Set user's active flag
self.user.is_active = user_is_active
self.user.save()
response = self._post_to_view()
# Validate the response content
assert response.status_code == 200
msg = Messages.ENROLL_DIRECTLY.format(
course_id=self.course.id,
username=self.user.username
)
self.assertResponseMessage(response, msg)
def _test_course_without_sku(self, enrollment_mode=CourseMode.DEFAULT_MODE_SLUG):
"""
Validates the view when course has no CourseModes with SKUs.
"""
response = self._post_to_view()
# Validate the response content
assert response.status_code == 200
msg = Messages.NO_SKU_ENROLLED.format(
enrollment_mode=enrollment_mode,
course_id=self.course.id,
course_name=self.course.display_name,
username=self.user.username,
announcement=self.course.announcement
)
self.assertResponseMessage(response, msg)
def test_course_without_sku_default(self):
"""
If the course does NOT have a SKU, the user should be enrolled in the course (under the default mode) and
redirected to the user dashboard.
"""
# Remove SKU from all course modes
for course_mode in CourseMode.objects.filter(course_id=self.course.id):
course_mode.sku = None
course_mode.save()
self._test_course_without_sku()
def test_course_without_sku_honor(self):
"""
If the course does not have an SKU and has an honor mode, the user
should be enrolled as honor. This ensures backwards
compatibility with courses existing before the removal of
honor certificates.
"""
# Remove all existing course modes
CourseMode.objects.filter(course_id=self.course.id).delete()
# Ensure that honor mode exists
CourseMode(
mode_slug=CourseMode.HONOR,
mode_display_name="Honor Cert",
course_id=self.course.id
).save()
# We should be enrolled in honor mode
self._test_course_without_sku(enrollment_mode=CourseMode.HONOR)
def assertProfessionalModeBypassed(self):
""" Verifies that the view returns HTTP 406 when a course with no honor or audit mode is encountered. """
CourseMode.objects.filter(course_id=self.course.id).delete()
mode = CourseMode.NO_ID_PROFESSIONAL_MODE
sku_string = str(uuid4().hex)
CourseModeFactory.create(course_id=self.course.id, mode_slug=mode, mode_display_name=mode,
sku=sku_string, bulk_sku=f'BULK-{sku_string}')
response = self._post_to_view()
# The view should return an error status code
assert response.status_code == 406
msg = Messages.NO_DEFAULT_ENROLLMENT_MODE.format(course_id=self.course.id)
self.assertResponseMessage(response, msg)
def test_course_with_professional_mode_only(self):
""" Verifies that the view behaves appropriately when the course only has a professional mode. """
self.assertProfessionalModeBypassed()
@override_settings(ECOMMERCE_API_URL=None)
def test_professional_mode_only_and_ecommerce_service_not_configured(self):
"""
Verifies that the view behaves appropriately when the course only has a professional mode and
the E-Commerce Service is not configured.
"""
self.assertProfessionalModeBypassed()
def test_empty_sku(self):
""" If the CourseMode has an empty string for a SKU, the API should not be used. """
# Set SKU to empty string for all modes.
for course_mode in CourseMode.objects.filter(course_id=self.course.id):
course_mode.sku = ''
course_mode.save()
self._test_course_without_sku()
def test_existing_active_enrollment(self):
""" The view should respond with HTTP 409 if the user has an existing active enrollment for the course. """
# Enroll user in the course
CourseEnrollment.enroll(self.user, self.course.id)
assert CourseEnrollment.is_enrolled(self.user, self.course.id)
response = self._post_to_view()
assert response.status_code == 409
msg = Messages.ENROLLMENT_EXISTS.format(username=self.user.username, course_id=self.course.id)
self.assertResponseMessage(response, msg)
def test_existing_inactive_enrollment(self):
"""
If the user has an inactive enrollment for the course, the view should behave as if the
user has no enrollment.
"""
# Create an inactive enrollment
CourseEnrollment.enroll(self.user, self.course.id)
        CourseEnrollment.unenroll(self.user, self.course.id, True)
        assert not CourseEnrollment.is_enrolled(self.user, self.course.id)
        assert get_enrollment(self.user.username, str(self.course.id)) is not None

    @mock.patch('lms.djangoapps.commerce.api.v0.views.update_email_opt_in')
    @ddt.data(*itertools.product((False, True), (False, True), (False, True)))
    @ddt.unpack
    def test_marketing_email_opt_in(self, is_opt_in, has_sku, is_exception, mock_update):
        """
        Ensures the email opt-in flag is handled, if present, and that problems handling the
        flag don't cause the rest of the enrollment transaction to fail.
        """
        if not has_sku:
            for course_mode in CourseMode.objects.filter(course_id=self.course.id):
                course_mode.sku = None
                course_mode.save()

        if is_exception:
            mock_update.side_effect = Exception("boink")

        response = self._post_to_view(marketing_email_opt_in=is_opt_in)
        assert mock_update.called == is_opt_in
        assert response.status_code == 200

    def test_closed_course(self):
        """
        Verifies that the view returns HTTP 406 when a course is closed.
        """
        self.course.enrollment_end = datetime.now(pytz.UTC) - timedelta(days=1)
        modulestore().update_item(self.course, self.user.id)
        assert self._post_to_view().status_code == 406


class BasketOrderViewTests(UserMixin, TestCase):
    """ Tests for the basket order view. """
    view_name = 'commerce_api:v0:baskets:retrieve_order'
    MOCK_ORDER = {'number': 1}
    path = reverse_lazy(view_name, kwargs={'basket_id': 1})

    def setUp(self):
        super().setUp()
        self._login()

    def test_order_found(self):
        """ If the order is located, the view should pass the data from the API. """
        with mock_basket_order(basket_id=1, response=self.MOCK_ORDER):
            response = self.client.get(self.path)

        assert response.status_code == 200
        actual = json.loads(response.content.decode('utf-8'))
        assert actual == self.MOCK_ORDER

    def test_order_not_found(self):
        """ If the order is not found, the view should return a 404. """
        with mock_basket_order(basket_id=1, status=404):
            response = self.client.get(self.path)

        assert response.status_code == 404

    def test_login_required(self):
        """ The view should return 403 if the user is not logged in. """
        self.client.logout()
        response = self.client.get(self.path)
        assert response.status_code == 403
| [
"rafael.luque@osoco.es"
] | rafael.luque@osoco.es |
9d983f524780487331f1a5a8f745b286f2a278df | 1531345172f997e42b861ddd2063d7e357a3b4f2 | /python/tf_idf_demo.py | 58cd7d7e88c95d87ddc076d432d50f3384af59dd | [] | no_license | claralinanda/MIS3545-Spring2017 | 5c83e671b19cc75857fcbfb941adb88272953c60 | 8b097453558f9a5c2c5c11ada7d98423dfa24254 | refs/heads/master | 2021-01-19T11:54:06.392757 | 2017-04-03T14:05:23 | 2017-04-03T14:05:23 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,724 | py | from math import log
def tf(term, doc, normalize=True):
    doc = doc.lower().split()
    if normalize:
        return doc.count(term.lower()) / float(len(doc))
    else:
        return doc.count(term.lower()) / 1.0


def idf(term, corpus):
    num_texts_with_term = len([True for text in corpus
                               if term.lower() in text.lower().split()])
    # The tf-idf calc involves multiplying against a tf value less than 1, so it's important
    # to return a value greater than 1 for consistent scoring. (Multiplying two values
    # less than 1 returns a value less than each of them.)
    try:
        return 1.0 + log(float(len(corpus)) / num_texts_with_term)
    except ZeroDivisionError:
        return 1.0


def tf_idf(term, doc, corpus):
    return tf(term, doc) * idf(term, corpus)


# Score queries by calculating the cumulative tf-idf score for each term in the query
query_scores = {'1': 0, '2': 0, '3': 0}
QUERY_TERMS = ['class', 'bi', 'babson', 'love']
corpus = \
    {'1': 'This class is so cool love it Babson rock',
     '2': 'I learn a lot in BI class I want to be an BI engineer',
     '3': "One of the best class at Babson I love Babson"}

for term in [t.lower() for t in QUERY_TERMS]:
    for doc in sorted(corpus):
        print('TF(%s): %s' % (doc, term), tf(term, corpus[doc]))
    print('IDF: %s' % (term,), idf(term, corpus.values()))
    print()

    for doc in sorted(corpus):
        score = tf_idf(term, corpus[doc], corpus.values())
        print('TF-IDF(%s): %s' % (doc, term), score)
        query_scores[doc] += score
    print()

print("Overall TF-IDF scores for query '%s'" % (' '.join(QUERY_TERMS),))
for (doc, score) in sorted(query_scores.items()):
    print(doc, score)
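The smoothing described in the comments above (idf bottoming out at `1.0 + log(N/df)`) can be sanity-checked with a tiny standalone sketch; the helpers below mirror the file but are re-declared here so the snippet runs on its own:

```python
from math import log

# Re-implementation of the two helpers above, for a worked check.
def tf(term, doc):
    words = doc.lower().split()
    return words.count(term.lower()) / float(len(words))

def idf(term, corpus):
    matches = len([True for text in corpus if term.lower() in text.lower().split()])
    try:
        return 1.0 + log(float(len(corpus)) / matches)
    except ZeroDivisionError:
        return 1.0

corpus = ['a b c', 'a b', 'a']
# 'a' appears in every document, so its idf bottoms out at 1.0 + log(1) == 1.0 ...
print(idf('a', corpus))   # 1.0
# ... while rarer terms score strictly higher.
print(idf('c', corpus) > idf('b', corpus))  # True
```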
| [
"zli@babson.edu"
] | zli@babson.edu |
f48b108217790987b181bc014327e857ca7cc522 | 3c6b0521eb788dc5e54e46370373e37eab4a164b | /FlowScore/eval_single.py | cda6c141996523b2197a00407f922861a8825b8d | [] | no_license | y12uc231/DialEvalMetrics | 7402f883390b94854f5d5ae142f700a697d7a21c | f27d717cfb02b08ffd774e60faa6b319a766ae77 | refs/heads/main | 2023-09-02T21:56:07.232363 | 2021-11-08T21:25:24 | 2021-11-08T21:25:24 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 849 | py | from flow_score import *
import numpy as np
import argparse
import json


def parse_args():
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('--eval_data', type=str, default=None)
    parser.add_argument('--output', type=str)
    args = parser.parse_args()
    return args


def main(args):
    FLOW_SCORE = FlowScore(MODEL_PATH)
    with open(args.eval_data) as f:
        data = json.load(f)

    flow_scores = []
    for context, response in zip(data['contexts'], data['responses']):
        flow_input = context + [response]
        flow_score = FLOW_SCORE.score(flow_input) * -1
        flow_scores.append(flow_score)

    data['flow_scores'] = flow_scores
    with open(args.output, 'w') as f:
        json.dump(data, f)


if __name__ == '__main__':
    args = parse_args()
    main(args)
| [
"yitingye@cs.cmu.edu"
] | yitingye@cs.cmu.edu |
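The scoring loop in `eval_single.py` above (score each `context + [response]` dialog, then flip the sign) can be sketched without the trained model; `FakeScorer` below is a stand-in for illustration only, not part of FlowScore:

```python
# Sketch of the eval loop with a stand-in scorer (the real FlowScore needs
# MODEL_PATH and a trained model; this fake just counts dialog turns).
class FakeScorer:
    def score(self, dialog):
        return -len(dialog)  # placeholder for the model's (negated) score

data = {
    "contexts": [["hi", "hello"], ["how are you"]],
    "responses": ["fine", "good"],
}

scorer = FakeScorer()
flow_scores = [scorer.score(c + [r]) * -1
               for c, r in zip(data["contexts"], data["responses"])]
print(flow_scores)  # [3, 2]
```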
f2f4abba432d2c0b97599ad83d7485f8c0eb3203 | 9088d49a7716bdfc9b5770e8e54ebf7be6958fcf | /09 - Repetition structure while/Des_059.py | a745d5bba64cfbc51c17879a26b37f1a9b2d9a9e | [
"MIT"
] | permissive | o-Ian/Practice-Python | 579e8ff5a63a2e7efa7388bf2d866bb1b11bdfe2 | 1e4b2d0788e70006096a53a7cf038db3148ba4b7 | refs/heads/main | 2023-05-02T02:21:48.459725 | 2021-05-18T18:46:06 | 2021-05-18T18:46:06 | 360,925,568 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 903 | py | from time import sleep
n = 0
while n != 5:
    n1 = int(input('Digite um número inteiro: '))
    n2 = int(input('Digite outro número inteiro: '))
    print('=-' * 25)
    print('Digite [1] para somar \nDigite [2] para multiplicar\nDigite [3] para saber qual número é o maior\n'
          'Digite [4] para digitar novos números\nDigite [5] para sair do programa')
    print('=-' * 25)
    n = int(input('Digite o número da operação que você deseja > '))
    if n == 1:
        print('A soma entre {} e {} é {}.'.format(n1, n2, n1 + n2))
    elif n == 2:
        print('A multiplicação entre {} e {} é {}.'.format(n1, n2, n1 * n2))
    elif n == 3:
        print('O maior número é o {}.'.format(n1 if n1 > n2 else n2))
    elif n == 4:
        pass  # loop back and read new numbers
    else:
        print('-' * 23)
        print('\033[1;41mDigite um número válido!\033[m')
        print('-' * 23)
        sleep(1)
print('FIM')
| [
"ianstigli@hotmail.com"
] | ianstigli@hotmail.com |
05f23b2d032c3d8618084d240d489fbcc40895b0 | b69b6a68a7bd7cfa274cbbe27c185be32eba7b54 | /rpc_start/go_json_rpc_client.py | e17222899d551985f37d07640b271ca208d3f91f | [] | no_license | ZhiyuSun/code-kingdom | 3f4d2c023ffdc2a18523c3144bb7271b9835f3ba | 318c292dec5a43de1131a171678a6874613bc087 | refs/heads/master | 2021-06-23T20:29:03.682412 | 2021-06-14T08:52:41 | 2021-06-14T08:52:41 | 201,889,194 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 540 | py | import json
import socket
import requests

# request = {
#     "id": 0,
#     "params": ["sunzhiyu"],
#     "method": "HelloService.Hello"
# }
#
# client = socket.create_connection(("localhost", 2222))
# client.sendall(json.dumps(request).encode())
#
# # Read the data returned by the server
# rsp = client.recv(1024)
# rsp = json.loads(rsp.decode())
#
# print(rsp["result"])

request = {
    "id": 0,
    "params": ["sunzhiyu"],
    "method": "HelloService.Hello"
}

rsp = requests.post("http://localhost:1234/jsonrpc", json=request)
print(rsp.text)
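Go's `net/rpc` jsonrpc codec speaks JSON-RPC 1.0, which is why the request above carries `method`, a one-element `params` array, and an `id` the reply must echo. A small offline sketch of that framing (the reply string here is canned for illustration, not produced by a real server):

```python
import json

# JSON-RPC 1.0 request as sent to the Go server above.
request = {"id": 0, "params": ["sunzhiyu"], "method": "HelloService.Hello"}
wire = json.dumps(request).encode()

# Canned reply of the shape a Go net/rpc jsonrpc server sends back:
# the id is echoed, and exactly one of "result"/"error" is non-null.
reply = json.loads('{"id": 0, "result": "hello:sunzhiyu", "error": null}')
assert reply["id"] == request["id"]
print(reply["result"])
```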
| [
"zcgyxlgy@sina.com"
] | zcgyxlgy@sina.com |
95dc9559d334108df8bfa5d29c11957bbcfcf14a | 7b13e6acb2a1f26936462ed795ee4508b4088042 | /算法题目/算法题目/剑指offer/题15二进制中1的个数.py | 8b9f4af0a2c3eeefc3b9e876e3e82af26d04cedf | [] | no_license | guojia60180/algorithm | ed2b0fd63108f30cd596390e64ae659666d1c2c6 | ea81ff2722c7c350be5e1f0cd6d4290d366f2988 | refs/heads/master | 2020-04-19T08:25:55.110548 | 2019-05-13T13:29:39 | 2019-05-13T13:29:39 | 168,076,375 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 772 | py | #Author guo
# -*- coding:utf-8 -*-
class Solution:
    def NumberOf1(self, n):
        # Count the 1 bits; negatives are first masked to their
        # 32-bit two's-complement representation.
        count = 0
        if n < 0:
            n = n & 0xffffffff
        while n:
            count += 1
            n = (n - 1) & n
        return count

    def Numberof2(self, n):
        if n < 0:
            s = bin(n & 0xffffffff)
        else:
            s = bin(n)
        return s.count('1')

    # Check whether n is a positive integer power of 2
    # (the n > 0 guard excludes 0, which is not a power of 2).
    def powerof2(self, n):
        if n > 0 and n & (n - 1) == 0:
            return True
        else:
            return False

    # Count how many bits differ between the binary representations of two numbers.
    def andOr(self, m, n):
        diff = m ^ n
        count = 0
        while diff:
            count += 1
            diff = diff & (diff - 1)  # clear the lowest set bit each round
        return count
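`(n - 1) & n` clears exactly the lowest set bit, which is the invariant all four methods above rely on. A standalone sketch of the counting loop (the function name is illustrative):

```python
# Each application of n & (n - 1) removes one set bit, so counting
# iterations counts the 1 bits; the mask emulates 32-bit two's complement.
def count_ones(n):
    n &= 0xffffffff
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

print(count_ones(9))    # 0b1001 -> 2
print(count_ones(-1))   # all 32 bits set in 32-bit two's complement -> 32
```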
| [
"44565715+guojia60180@users.noreply.github.com"
] | 44565715+guojia60180@users.noreply.github.com |
b010ac476488c1ddac5a903d34b68d757fd309a7 | 8cb8bfd2dae516612251039e0632173ea1ea4c8a | /modules/contact/triggers.py | baf79680e6675358cf535d3c8d1db77215c8a039 | [] | no_license | nyzsirt/lift-prod | 563cc70700d26a5812a1bce0bd9795998dce6e99 | 9a5f28e49ad5e80e422a5d5efee77a2d0247aa2b | refs/heads/master | 2020-04-22T01:05:42.262876 | 2019-02-09T13:31:15 | 2019-02-09T13:31:15 | 170,003,361 | 1 | 0 | null | 2019-02-10T17:11:50 | 2019-02-10T17:11:50 | null | UTF-8 | Python | false | false | 361 | py |
class ContactTrigger:

    @classmethod
    def pre_save(cls, sender, document, **kwargs):
        names = []
        if document.name or document.surname:
            if document.name:
                names.append(document.name)
            if document.surname:
                names.append(document.surname)
        document.full_name = " ".join(names)
| [
"mutlu.erdem@soft-nec.com"
] | mutlu.erdem@soft-nec.com |
197c221bc0351ffccca4ea96c739f1c19bb8c129 | 926b3c52070f6e309567c8598248fd5c57095be9 | /src/audio/torchaudio/backend/soundfile_backend.py | 8fe76c998eadda1805f27db3191b82ca1071ee76 | [
"BSD-2-Clause"
] | permissive | fengbingchun/PyTorch_Test | 410f7cd2303707b0141d433fb9d144a961e1f4c8 | df5c2169f0b699bcd6e74adb4cb0e57f7dcd9348 | refs/heads/master | 2023-05-23T16:42:29.711338 | 2023-03-25T11:31:43 | 2023-03-25T11:31:43 | 167,339,907 | 15 | 4 | null | 2023-03-25T11:31:45 | 2019-01-24T09:24:59 | C++ | UTF-8 | Python | false | false | 16,484 | py | """The new soundfile backend which will become default in 0.8.0 onward"""
import warnings
from typing import Tuple, Optional

import torch
from torchaudio._internal import module_utils as _mod_utils
from .common import AudioMetaData


if _mod_utils.is_soundfile_available():
    import soundfile


# Mapping from soundfile subtype to number of bits per sample.
# This is mostly heuristic and the value is set to 0 when it is irrelevant
# (lossy formats) or when it can't be inferred.
# For ADPCM (and G72X) subtypes, it's hard to infer the bit depth because it's not part of the standard:
# According to https://en.wikipedia.org/wiki/Adaptive_differential_pulse-code_modulation#In_telephony,
# the default seems to be 8 bits but it can be compressed further to 4 bits.
# The dict is inspired from
# https://github.com/bastibe/python-soundfile/blob/744efb4b01abc72498a96b09115b42a4cabd85e4/soundfile.py#L66-L94
_SUBTYPE_TO_BITS_PER_SAMPLE = {
    "PCM_S8": 8,  # Signed 8 bit data
    "PCM_16": 16,  # Signed 16 bit data
    "PCM_24": 24,  # Signed 24 bit data
    "PCM_32": 32,  # Signed 32 bit data
    "PCM_U8": 8,  # Unsigned 8 bit data (WAV and RAW only)
    "FLOAT": 32,  # 32 bit float data
    "DOUBLE": 64,  # 64 bit float data
    "ULAW": 8,  # U-Law encoded. See https://en.wikipedia.org/wiki/G.711#Types
    "ALAW": 8,  # A-Law encoded. See https://en.wikipedia.org/wiki/G.711#Types
    "IMA_ADPCM": 0,  # IMA ADPCM.
    "MS_ADPCM": 0,  # Microsoft ADPCM.
    "GSM610": 0,  # GSM 6.10 encoding. (Wikipedia says 1.625 bit depth?? https://en.wikipedia.org/wiki/Full_Rate)
    "VOX_ADPCM": 0,  # OKI / Dialogix ADPCM
    "G721_32": 0,  # 32kbs G721 ADPCM encoding.
    "G723_24": 0,  # 24kbs G723 ADPCM encoding.
    "G723_40": 0,  # 40kbs G723 ADPCM encoding.
    "DWVW_12": 12,  # 12 bit Delta Width Variable Word encoding.
    "DWVW_16": 16,  # 16 bit Delta Width Variable Word encoding.
    "DWVW_24": 24,  # 24 bit Delta Width Variable Word encoding.
    "DWVW_N": 0,  # N bit Delta Width Variable Word encoding.
    "DPCM_8": 8,  # 8 bit differential PCM (XI only)
    "DPCM_16": 16,  # 16 bit differential PCM (XI only)
    "VORBIS": 0,  # Xiph Vorbis encoding. (lossy)
    "ALAC_16": 16,  # Apple Lossless Audio Codec (16 bit).
    "ALAC_20": 20,  # Apple Lossless Audio Codec (20 bit).
    "ALAC_24": 24,  # Apple Lossless Audio Codec (24 bit).
    "ALAC_32": 32,  # Apple Lossless Audio Codec (32 bit).
}


def _get_bit_depth(subtype):
    if subtype not in _SUBTYPE_TO_BITS_PER_SAMPLE:
        warnings.warn(
            f"The {subtype} subtype is unknown to TorchAudio. As a result, the bits_per_sample "
            "attribute will be set to 0. If you are seeing this warning, please "
            "report by opening an issue on github (after checking for existing/closed ones). "
            "You may otherwise ignore this warning."
        )
    return _SUBTYPE_TO_BITS_PER_SAMPLE.get(subtype, 0)


_SUBTYPE_TO_ENCODING = {
    "PCM_S8": "PCM_S",
    "PCM_16": "PCM_S",
    "PCM_24": "PCM_S",
    "PCM_32": "PCM_S",
    "PCM_U8": "PCM_U",
    "FLOAT": "PCM_F",
    "DOUBLE": "PCM_F",
    "ULAW": "ULAW",
    "ALAW": "ALAW",
    "VORBIS": "VORBIS",
}


def _get_encoding(format: str, subtype: str):
    if format == "FLAC":
        return "FLAC"
    return _SUBTYPE_TO_ENCODING.get(subtype, "UNKNOWN")


@_mod_utils.requires_soundfile()
def info(filepath: str, format: Optional[str] = None) -> AudioMetaData:
    """Get signal information of an audio file.

    Note:
        ``filepath`` argument is intentionally annotated as ``str`` only, even though it accepts
        ``pathlib.Path`` object as well. This is for the consistency with ``"sox_io"`` backend,
        which has a restriction on type annotation due to TorchScript compiler compatibility.

    Args:
        filepath (path-like object or file-like object):
            Source of audio data.
        format (str or None, optional):
            Not used. PySoundFile does not accept format hint.

    Returns:
        AudioMetaData: meta data of the given audio.
    """
    sinfo = soundfile.info(filepath)
    return AudioMetaData(
        sinfo.samplerate,
        sinfo.frames,
        sinfo.channels,
        bits_per_sample=_get_bit_depth(sinfo.subtype),
        encoding=_get_encoding(sinfo.format, sinfo.subtype),
    )


_SUBTYPE2DTYPE = {
    "PCM_S8": "int8",
    "PCM_U8": "uint8",
    "PCM_16": "int16",
    "PCM_32": "int32",
    "FLOAT": "float32",
    "DOUBLE": "float64",
}


@_mod_utils.requires_soundfile()
def load(
    filepath: str,
    frame_offset: int = 0,
    num_frames: int = -1,
    normalize: bool = True,
    channels_first: bool = True,
    format: Optional[str] = None,
) -> Tuple[torch.Tensor, int]:
    """Load audio data from file.

    Note:
        The formats this function can handle depend on the soundfile installation.
        This function is tested on the following formats;

        * WAV
            * 32-bit floating-point
            * 32-bit signed integer
            * 16-bit signed integer
            * 8-bit unsigned integer
        * FLAC
        * OGG/VORBIS
        * SPHERE

    By default (``normalize=True``, ``channels_first=True``), this function returns Tensor with
    ``float32`` dtype and the shape of `[channel, time]`.
    The samples are normalized to fit in the range of ``[-1.0, 1.0]``.

    When the input format is WAV with integer type, such as 32-bit signed integer, 16-bit
    signed integer and 8-bit unsigned integer (24-bit signed integer is not supported),
    by providing ``normalize=False``, this function can return integer Tensor, where the samples
    are expressed within the whole range of the corresponding dtype, that is, ``int32`` tensor
    for 32-bit signed PCM, ``int16`` for 16-bit signed PCM and ``uint8`` for 8-bit unsigned PCM.

    ``normalize`` parameter has no effect on 32-bit floating-point WAV and other formats, such as
    ``flac`` and ``mp3``.
    For these formats, this function always returns ``float32`` Tensor with values normalized to
    ``[-1.0, 1.0]``.

    Note:
        ``filepath`` argument is intentionally annotated as ``str`` only, even though it accepts
        ``pathlib.Path`` object as well. This is for the consistency with ``"sox_io"`` backend,
        which has a restriction on type annotation due to TorchScript compiler compatibility.

    Args:
        filepath (path-like object or file-like object):
            Source of audio data.
        frame_offset (int, optional):
            Number of frames to skip before start reading data.
        num_frames (int, optional):
            Maximum number of frames to read. ``-1`` reads all the remaining samples,
            starting from ``frame_offset``.
            This function may return the less number of frames if there is not enough
            frames in the given file.
        normalize (bool, optional):
            When ``True``, this function always return ``float32``, and sample values are
            normalized to ``[-1.0, 1.0]``.
            If input file is integer WAV, giving ``False`` will change the resulting Tensor type to
            integer type.
            This argument has no effect for formats other than integer WAV type.
        channels_first (bool, optional):
            When True, the returned Tensor has dimension `[channel, time]`.
            Otherwise, the returned Tensor's dimension is `[time, channel]`.
        format (str or None, optional):
            Not used. PySoundFile does not accept format hint.

    Returns:
        (torch.Tensor, int): Resulting Tensor and sample rate.
            If the input file has integer wav format and normalization is off, then it has
            integer type, else ``float32`` type. If ``channels_first=True``, it has
            `[channel, time]` else `[time, channel]`.
    """
    with soundfile.SoundFile(filepath, "r") as file_:
        if file_.format != "WAV" or normalize:
            dtype = "float32"
        elif file_.subtype not in _SUBTYPE2DTYPE:
            raise ValueError(f"Unsupported subtype: {file_.subtype}")
        else:
            dtype = _SUBTYPE2DTYPE[file_.subtype]

        frames = file_._prepare_read(frame_offset, None, num_frames)
        waveform = file_.read(frames, dtype, always_2d=True)
        sample_rate = file_.samplerate

    waveform = torch.from_numpy(waveform)
    if channels_first:
        waveform = waveform.t()
    return waveform, sample_rate


def _get_subtype_for_wav(dtype: torch.dtype, encoding: str, bits_per_sample: int):
    if not encoding:
        if not bits_per_sample:
            subtype = {
                torch.uint8: "PCM_U8",
                torch.int16: "PCM_16",
                torch.int32: "PCM_32",
                torch.float32: "FLOAT",
                torch.float64: "DOUBLE",
            }.get(dtype)
            if not subtype:
                raise ValueError(f"Unsupported dtype for wav: {dtype}")
            return subtype
        if bits_per_sample == 8:
            return "PCM_U8"
        return f"PCM_{bits_per_sample}"
    if encoding == "PCM_S":
        if not bits_per_sample:
            return "PCM_32"
        if bits_per_sample == 8:
            raise ValueError("wav does not support 8-bit signed PCM encoding.")
        return f"PCM_{bits_per_sample}"
    if encoding == "PCM_U":
        if bits_per_sample in (None, 8):
            return "PCM_U8"
        raise ValueError("wav only supports 8-bit unsigned PCM encoding.")
    if encoding == "PCM_F":
        if bits_per_sample in (None, 32):
            return "FLOAT"
        if bits_per_sample == 64:
            return "DOUBLE"
        raise ValueError("wav only supports 32/64-bit float PCM encoding.")
    if encoding == "ULAW":
        if bits_per_sample in (None, 8):
            return "ULAW"
        raise ValueError("wav only supports 8-bit mu-law encoding.")
    if encoding == "ALAW":
        if bits_per_sample in (None, 8):
            return "ALAW"
        raise ValueError("wav only supports 8-bit a-law encoding.")
    raise ValueError(f"wav does not support {encoding}.")


def _get_subtype_for_sphere(encoding: str, bits_per_sample: int):
    if encoding in (None, "PCM_S"):
        return f"PCM_{bits_per_sample}" if bits_per_sample else "PCM_32"
    if encoding in ("PCM_U", "PCM_F"):
        raise ValueError(f"sph does not support {encoding} encoding.")
    if encoding == "ULAW":
        if bits_per_sample in (None, 8):
            return "ULAW"
        raise ValueError("sph only supports 8-bit for mu-law encoding.")
    if encoding == "ALAW":
        return "ALAW"
    raise ValueError(f"sph does not support {encoding}.")


def _get_subtype(dtype: torch.dtype, format: str, encoding: str, bits_per_sample: int):
    if format == "wav":
        return _get_subtype_for_wav(dtype, encoding, bits_per_sample)
    if format == "flac":
        if encoding:
            raise ValueError("flac does not support encoding.")
        if not bits_per_sample:
            return "PCM_16"
        if bits_per_sample > 24:
            raise ValueError("flac does not support bits_per_sample > 24.")
        return "PCM_S8" if bits_per_sample == 8 else f"PCM_{bits_per_sample}"
    if format in ("ogg", "vorbis"):
        if encoding or bits_per_sample:
            raise ValueError("ogg/vorbis does not support encoding/bits_per_sample.")
        return "VORBIS"
    if format == "sph":
        return _get_subtype_for_sphere(encoding, bits_per_sample)
    if format in ("nis", "nist"):
        return "PCM_16"
    raise ValueError(f"Unsupported format: {format}")


@_mod_utils.requires_soundfile()
def save(
    filepath: str,
    src: torch.Tensor,
    sample_rate: int,
    channels_first: bool = True,
    compression: Optional[float] = None,
    format: Optional[str] = None,
    encoding: Optional[str] = None,
    bits_per_sample: Optional[int] = None,
):
    """Save audio data to file.

    Note:
        The formats this function can handle depend on the soundfile installation.
        This function is tested on the following formats;

        * WAV
            * 32-bit floating-point
            * 32-bit signed integer
            * 16-bit signed integer
            * 8-bit unsigned integer
        * FLAC
        * OGG/VORBIS
        * SPHERE

    Note:
        ``filepath`` argument is intentionally annotated as ``str`` only, even though it accepts
        ``pathlib.Path`` object as well. This is for the consistency with ``"sox_io"`` backend,
        which has a restriction on type annotation due to TorchScript compiler compatibility.

    Args:
        filepath (str or pathlib.Path): Path to audio file.
        src (torch.Tensor): Audio data to save. must be 2D tensor.
        sample_rate (int): sampling rate
        channels_first (bool, optional): If ``True``, the given tensor is interpreted as `[channel, time]`,
            otherwise `[time, channel]`.
        compression (float or None, optional): Not used.
            It is here only for interface compatibility reason with "sox_io" backend.
        format (str or None, optional): Override the audio format.
            When ``filepath`` argument is path-like object, audio format is
            inferred from file extension. If the file extension is missing or
            different, you can specify the correct format with this argument.

            When ``filepath`` argument is file-like object,
            this argument is required.

            Valid values are ``"wav"``, ``"ogg"``, ``"vorbis"``,
            ``"flac"`` and ``"sph"``.
        encoding (str or None, optional): Changes the encoding for supported formats.
            This argument is effective only for supported formats, such as
            ``"wav"``, ``"flac"`` and ``"sph"``. Valid values are;

                - ``"PCM_S"`` (signed integer Linear PCM)
                - ``"PCM_U"`` (unsigned integer Linear PCM)
                - ``"PCM_F"`` (floating point PCM)
                - ``"ULAW"`` (mu-law)
                - ``"ALAW"`` (a-law)

        bits_per_sample (int or None, optional): Changes the bit depth for the
            supported formats.
            When ``format`` is one of ``"wav"``, ``"flac"`` or ``"sph"``,
            you can change the bit depth.
            Valid values are ``8``, ``16``, ``24``, ``32`` and ``64``.

    Supported formats/encodings/bit depth/compression are:

    ``"wav"``
        - 32-bit floating-point PCM
        - 32-bit signed integer PCM
        - 24-bit signed integer PCM
        - 16-bit signed integer PCM
        - 8-bit unsigned integer PCM
        - 8-bit mu-law
        - 8-bit a-law

        Note:
            Default encoding/bit depth is determined by the dtype of
            the input Tensor.

    ``"flac"``
        - 8-bit
        - 16-bit (default)
        - 24-bit

    ``"ogg"``, ``"vorbis"``
        - Doesn't accept changing configuration.

    ``"sph"``
        - 8-bit signed integer PCM
        - 16-bit signed integer PCM
        - 24-bit signed integer PCM
        - 32-bit signed integer PCM (default)
        - 8-bit mu-law
        - 8-bit a-law
        - 16-bit a-law
        - 24-bit a-law
        - 32-bit a-law
    """
    if src.ndim != 2:
        raise ValueError(f"Expected 2D Tensor, got {src.ndim}D.")
    if compression is not None:
        warnings.warn(
            '`save` function of "soundfile" backend does not support "compression" parameter. '
            "The argument is silently ignored."
        )
    if hasattr(filepath, "write"):
        if format is None:
            raise RuntimeError("`format` is required when saving to file object.")
        ext = format.lower()
    else:
        ext = str(filepath).split(".")[-1].lower()

    if bits_per_sample not in (None, 8, 16, 24, 32, 64):
        raise ValueError("Invalid bits_per_sample.")
    if bits_per_sample == 24:
        warnings.warn(
            "Saving audio with 24 bits per sample might warp samples near -1. "
            "Using 16 bits per sample might be able to avoid this."
        )
    subtype = _get_subtype(src.dtype, ext, encoding, bits_per_sample)

    # sph is an extension used in TED-LIUM but soundfile does not recognize it as NIST format,
    # so we extend the extensions manually here
    if ext in ["nis", "nist", "sph"] and format is None:
        format = "NIST"

    if channels_first:
        src = src.t()

    soundfile.write(file=filepath, data=src, samplerate=sample_rate, subtype=subtype, format=format)
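The default dtype-to-subtype dispatch in `_get_subtype_for_wav` above can be mirrored as a plain table lookup. The sketch below re-declares the mapping with string dtype names so it runs without torch or soundfile, and only covers the default and `PCM_S` paths:

```python
# Standalone mirror of the wav dtype -> subtype dispatch above; dtype names
# stand in for torch dtypes so the check needs no torch/soundfile install.
_DEFAULT_WAV_SUBTYPE = {
    "uint8": "PCM_U8",
    "int16": "PCM_16",
    "int32": "PCM_32",
    "float32": "FLOAT",
    "float64": "DOUBLE",
}


def wav_subtype(dtype_name, encoding=None, bits_per_sample=None):
    if not encoding:
        if not bits_per_sample:
            if dtype_name not in _DEFAULT_WAV_SUBTYPE:
                raise ValueError(f"Unsupported dtype for wav: {dtype_name}")
            return _DEFAULT_WAV_SUBTYPE[dtype_name]
        return "PCM_U8" if bits_per_sample == 8 else f"PCM_{bits_per_sample}"
    if encoding == "PCM_S":
        if not bits_per_sample:
            return "PCM_32"
        return f"PCM_{bits_per_sample}"
    raise ValueError("sketch covers only the default and PCM_S paths")


print(wav_subtype("int16"))               # PCM_16
print(wav_subtype("float32"))             # FLOAT
print(wav_subtype("int16", "PCM_S", 24))  # PCM_24
```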
| [
"fengbingchun@163.com"
] | fengbingchun@163.com |